GPU Software Architecture Engineer at Apple
Cupertino, California, United States
Full-Time


Start Date

Immediate

Expiry Date

26 Jan 2026

Salary

0.0

Posted On

28 Oct 2025

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

GPU Programming, High-Performance Computing, System Programming, C/C++, Python, Distributed Systems, Parallel Computing Architectures, Inter-Node Communication Technologies, ML Training, Inference, Tensor Frameworks, Model Development Lifecycle, ML Infrastructure

Industry

Computers and Electronics Manufacturing

Description
The Apple Silicon GPU SW architecture team is seeking a senior/principal engineer to lead server-side ML acceleration and multi-node distribution initiatives. You will help define and shape our future GPU compute infrastructure on Private Cloud Compute, which enables Apple Intelligence.

In this role, you will be at the forefront of architecting and building our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive network models across server clusters to power Apple Intelligence at unprecedented scale. The work involves designing sophisticated parallelization strategies that split models across many GPUs and optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to achieve maximum hardware utilization while minimizing latency for real-time user experiences.

You will work at the intersection of cutting-edge ML systems and hardware acceleration, collaborating directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while simultaneously building the production systems that will serve billions of requests daily. This is a hands-on technical leadership position: you will not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
Minimum Qualifications
- Strong knowledge of GPU programming (CUDA, ROCm) and high-performance computing
- Excellent system programming skills in C/C++; Python is a plus
- Deep understanding of distributed systems and parallel computing architectures
- Experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
- Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
- Technical BS/MS degree

Preferred Qualifications
- Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
- Proven track record in ML infrastructure at scale
Responsibilities
Architect and build next-generation distributed ML infrastructure, orchestrating massive network models across server clusters. Design parallelization strategies and optimize every layer of the stack for maximum hardware utilization and minimal latency.