Member of Technical Staff, Training Infra at INCEPTION ARTIFICIAL INTELLIGENCE L.L.C - O.P.C
San Francisco, California, United States
Full Time


Start Date

Immediate

Expiry Date

08 Jun, 26

Salary

0.0

Posted On

10 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Distributed Training Systems, GPU Optimization, High-Performance Optimization, Reusable Frameworks, PyTorch, TensorFlow, Python, C++, Rust, Go, Docker, Kubernetes, CI/CD, Performance Profiling, DeepSpeed, Megatron-LM

Industry

Technology; Information and Internet

Description
The Role

We're looking for engineers and scientists to design, optimize, and maintain the core systems that enable scalable, efficient training of LLMs. Your goal is to make experimentation and training at Inception fast and reliable so our team can focus on science, not system bottlenecks.

Key Responsibilities

* Design, implement, and optimize distributed training systems that scale across thousands of GPUs and nodes.
* Develop high-performance optimizations to maximize throughput and efficiency.
* Develop reusable frameworks and libraries to improve training reproducibility, reliability, and scalability for new model architectures.

Qualifications

* BS/MS/PhD in Computer Science, Engineering, or a related field (or equivalent experience).
* Understanding of ML frameworks (PyTorch, TensorFlow) from a systems perspective.
* Strong engineering skills: the ability to contribute performant, maintainable code and debug complex codebases.
* Proficiency in Python and at least one systems programming language (C++/Rust/Go).
* Experience with containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.

Preferred Skills

* Experience building and maintaining large-scale language models with tens of billions of parameters or more.
* Experience with ML workflow orchestration tools (Kubeflow, Airflow).
* Background in performance optimization and profiling of ML systems (Prometheus, Grafana, OpenTelemetry).
* Familiarity with distributed training frameworks such as PyTorch/XLA, DeepSpeed, Megatron-LM.
Responsibilities
The role involves designing, implementing, and optimizing distributed training systems that scale across thousands of GPUs and nodes. Key tasks include developing high-performance optimizations and building reusable frameworks that improve training reliability and scalability for new model architectures.