Member of Technical Staff — Training at RadixArk
Palo Alto, California, United States
Full Time


Start Date

Immediate

Expiry Date

18 May 2026

Salary

Not disclosed

Posted On

17 Feb 2026

Experience

5+ years

Remote Job

Yes

Sponsor Visa

No

Skills

ML Systems, Distributed Systems, Large-scale Training Infrastructure, LLMs, Generative Models, GPU, TPU, PyTorch, JAX, Performance Engineering, Python, C++, Go, Rust, Fault Tolerance, Checkpointing

Description
About the Role

RadixArk is seeking a Member of Technical Staff — Training to build and scale the systems that train frontier AI models. You will work on large-scale distributed training infrastructure for LLMs and generative models, pushing the limits of scale, efficiency, and reliability across thousands of GPUs. This role sits at the intersection of ML, systems, and performance engineering. Your work will directly impact how next-generation AI models are trained and scaled. This is a deeply technical, high-impact role for engineers who enjoy solving hard systems problems at extreme scale.

Requirements

- 5+ years of experience in ML systems, distributed systems, or large-scale training infrastructure
- Strong experience with large-scale distributed training (data, tensor, and pipeline parallelism)
- Deep understanding of GPU/TPU architecture and performance trade-offs
- Strong knowledge of PyTorch or JAX distributed training stacks (a minimal data-parallel sketch follows this description)
- Experience debugging performance and stability issues in large training jobs
- Solid distributed systems fundamentals (networking, consensus, fault tolerance)
- Proficiency in Python plus a systems language (C++, Go, or Rust)
- Experience operating production ML systems at scale

Strong Plus

- Experience training multi-billion-parameter models
- Familiarity with DeepSpeed, Megatron-LM, FSDP, or custom training stacks
- Experience with RDMA, InfiniBand, or high-speed interconnects
- Background in HPC or performance-critical computing
- Contributions to ML systems open-source projects
- Experience with checkpointing, fault recovery, and elastic training
- Experience optimizing training cost efficiency at scale

Responsibilities

- Design and operate large-scale distributed training systems
- Optimize throughput, scalability, and hardware efficiency
- Improve reliability and fault tolerance for long-running training jobs
- Develop training frameworks and infrastructure tooling
- Collaborate with model researchers to support frontier experiments
- Debug and resolve cross-layer performance bottlenecks
- Build observability systems for training performance and reliability
- Drive capacity planning and cluster utilization strategies
- Contribute to long-term training infrastructure architecture

About RadixArk

RadixArk is an infrastructure-first AI company built by engineers who have shipped production AI systems, created SGLang (20K+ GitHub stars, the fastest open LLM serving engine), and developed Miles, our large-scale RL framework. We build world-class infrastructure for AI training and inference and partner with frontier AI teams and cloud providers. Our team has coordinated training across 10,000+ GPUs and optimized kernels serving billions of tokens daily. Join us in building the infrastructure that trains the next generation of AI.

Compensation

We offer competitive compensation with meaningful equity, comprehensive benefits, and flexible work arrangements. Compensation depends on location, experience, and level.

Equal Opportunity

RadixArk is an Equal Opportunity Employer and welcomes candidates from all backgrounds.
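For candidates gauging the baseline, here is a minimal sketch of the kind of data-parallel training loop the requirements assume, using PyTorch's DistributedDataParallel. The model, batch shapes, and hyperparameters are illustrative placeholders, not RadixArk's actual stack.

    # Minimal data-parallel training loop with PyTorch DDP.
    # Model, data, and hyperparameters are illustrative placeholders.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE per worker.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank),
                    device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):
            x = torch.randn(32, 1024, device=local_rank)  # synthetic batch
            loss = model(x).square().mean()
            loss.backward()  # gradients are all-reduced across ranks here
            opt.step()
            opt.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched with, for example, torchrun --nproc_per_node=8 train.py. Tensor and pipeline parallelism layer additional sharding on top of this basic pattern.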
Responsibilities
The role involves designing and operating large-scale distributed training systems, focusing on optimizing throughput, scalability, and hardware efficiency for frontier AI models. Responsibilities also include improving reliability, developing training frameworks, and collaborating with researchers to support next-generation experiments.
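As a concrete illustration of the checkpointing and fault-recovery side of this work, here is a minimal save/resume sketch. The checkpoint path, state layout, and cadence are assumptions for illustration only, not RadixArk's framework.

    # Minimal checkpoint/resume sketch for long-running training jobs.
    # CKPT path and state layout are hypothetical, for illustration only.
    import os
    import torch

    CKPT = "checkpoint.pt"

    def save_checkpoint(model, opt, step):
        # Write to a temp file and rename atomically, so a crash
        # mid-write never corrupts the last good checkpoint.
        tmp = CKPT + ".tmp"
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "step": step}, tmp)
        os.replace(tmp, CKPT)

    def resume(model, opt):
        # Returns the step to resume from (0 for a fresh run).
        if not os.path.exists(CKPT):
            return 0
        state = torch.load(CKPT, map_location="cpu")
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        return state["step"] + 1

At cluster scale the same idea extends to sharded, asynchronous per-rank checkpoints, which is where elastic training and fast recovery come into play.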