MLOps Engineer at Menlo Research Pte Ltd
Ho Chi Minh City, Vietnam
Full Time


Start Date

Immediate

Expiry Date

04 Jun, 26

Salary

0.0

Posted On

06 Mar, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

PyTorch, DDP, Mixed Precision, vLLM, SGLang, Python, C++, NCCL, MPI, InfiniBand, RoCE, Docker, Kubernetes, RLHF, PPO, TensorRT

Industry

Technology; Information and Internet

Description
Job Title: MLOps Engineer (PyTorch, Systems & Training Pipeline)

About the Role

As an MLOps Engineer, you will own and evolve the infrastructure behind our PyTorch-based training and inference workloads. You will work at the intersection of deep learning, systems programming, and infrastructure engineering, building pipelines that are robust, reproducible, and built to last. This role spans training infrastructure, inference serving, and platform reliability, and is ideal for someone who cares not just about getting models trained, but about doing it right.

Key Responsibilities

- Build and maintain training and inference pipelines using PyTorch, including support for DDP, mixed precision, checkpointing, experiment versioning, and reproducible evaluation workflows.
- Own and evolve inference serving infrastructure using vLLM and SGLang, including debugging issues in inference stacks such as tool call parsers and reasoning parsers, and optimizing for throughput and latency.
- Write and maintain robust tooling in Python and C++ to support the full training lifecycle, from data ingestion to model release.
- Optimize compute workloads for bare-metal environments, covering CPU/GPU utilization, memory bandwidth, and I/O throughput.
- Troubleshoot low-level networking issues, distributed training errors, and hardware bottlenecks across NCCL, MPI, and high-speed interconnects such as InfiniBand and RoCE.
- Set up and manage ML environments, including containers, package management, GPU drivers, and runtime configurations.
- Establish CI/CD patterns for AI workloads covering training, evaluation, quantization, and model release workflows.
- Integrate monitoring, alerting, anomaly detection, and incident response for both training jobs and inference services.
- Contribute to shared platform capabilities across reliability, observability, and cost management.
- Build and maintain scalable runtime infrastructure for model-backed services and APIs, including support for LLM-backed APIs, MCP (Model Context Protocol) servers, and agentic systems.

You Should Have

- Deep expertise in PyTorch internals, including DDP, FSDP, mixed precision training, TorchScript, and torch.compile.
- Strong programming skills in Python and C++, with the ability to read and safely modify unfamiliar codebases.
- Solid computer science fundamentals covering data structures, concurrency, operating systems, and memory management.
- Hands-on experience with vLLM and SGLang for production inference serving, including serving quantized models such as FP8, INT8, and NVFP4.
- Experience with RLHF and PPO training pipelines, including frameworks such as veRL and TRL, and reward model integration.
- Strong understanding of distributed training setups, networking, and interconnects, including NCCL, MPI, InfiniBand, and RoCE.
- Experience debugging and tuning bare-metal Linux servers, including kernel parameters, NUMA topology, and GPU driver configuration.
- Familiarity with job schedulers such as Airflow, and experience operating production-grade distributed infrastructure.
- Strong grasp of containerized and cloud-native environments, including Docker and Kubernetes.

Nice to Have

- Experience with ML compiler stacks such as LLVM, MLIR, TensorRT, or XLA.
- Familiarity with model quantization techniques and deployment optimization, including GPTQ, AWQ, and bitsandbytes.
- Contributions to open-source ML projects, including PyTorch, vLLM, SGLang, or related inference and training tooling.
- Experience with infrastructure-as-code tools such as Ansible, Terraform, or Nix for reproducible cluster setup.
- Experience with custom or on-premise deployments, local clusters, or edge inference.
- Familiarity with observability stacks such as Prometheus, Grafana, or OpenTelemetry applied to training and inference workloads.
- Experience building infrastructure for agentic systems, including secure tool access, orchestration, and isolation boundaries.
- Passion for clean, well-documented code and detail-oriented engineering.

Location: Ho Chi Minh City, HCMC (Hybrid)
Department: Menlo HQ
Employment Type: Full-Time
Minimum Experience: Mid-level
Responsibilities
The MLOps Engineer will be responsible for building and maintaining robust, reproducible training and inference pipelines, primarily using PyTorch, and for owning and evolving the inference serving infrastructure built on tools like vLLM and SGLang. The role also involves writing tooling in Python and C++ to support the full training lifecycle, optimizing compute workloads for bare-metal environments, and troubleshooting low-level distributed systems issues.