Cloud Infrastructure Engineer - Systems at Apple
Seattle, WA 98105, USA
Full Time


Start Date

Immediate

Expiry Date

12 Nov 2025

Salary

$302,200

Posted On

12 Aug 2025

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

QEMU, Docker, Distributed Systems, Virtualization, IT, Network Performance, Computer Science, CloudStack, Kubernetes

Industry

Information Technology/IT

Description

People at Apple don’t just build products - they craft the kind of experience that has revolutionized entire industries. The diverse collection of our people and their ideas inspire innovation in everything we do. Imagine what you could do here! Join Apple, and help us leave the world better than we found it.

The Apple Service Engineering (ASE) team builds and provides systems and infrastructure that power Apple’s services (such as iCloud, iTunes, Siri, and Maps). Apple’s uniquely seamless hardware, software, and services integration means that you will get to work with world-class engineers from a variety of disciplines to design and deliver products that our customers love. Our services have to scale globally, stay highly available, and “just work.” If you love designing, engineering, and running systems that help millions of customers, then this is the place for you!

Apple Service Engineering (ASE)’s Compute team is seeking an experienced software engineer to build and enhance internal cloud infrastructure offerings. You will be responsible for core components of this cutting-edge platform, integrating the latest cloud hardware technologies with Apple’s own hardware and software. In this role, you will collaborate with teams across Apple to deliver forward-looking, high-performance virtualized infrastructure, supporting everything from LLM model training to maximum-security confidential computing environments. You will partner with internal application teams to understand their requirements, co-design operating system features and datacenter infrastructure to meet their needs, and look ahead at emerging technologies to incorporate them into our services.

DESCRIPTION

In this role you will be responsible for developing, debugging, and maintaining an in-house virtualized infrastructure platform, and evaluating and integrating cutting-edge compute hardware:

  • Design, implement, and optimize virtualized compute offerings on a wide variety of hardware types
  • Integrate and optimize high-performance virtual networking solutions for custom hardware, including Open vSwitch, DPDK, GPUDirect, and RoCE RDMA technologies
  • Work extensively with KVM, QEMU, and the Linux kernel to efficiently enable functionality within virtual machines, including GPU passthrough and SR-IOV configurations
  • Evaluate and tune performance of low-latency, high-throughput GPUDirect / RoCE interconnects
  • Collaborate with cross-functional teams to understand and optimize for critical workloads
  • Tackle and resolve complex issues across accelerator, virtualization, and networking layers, ensuring robust performance, stability, and security
  • Research and prototype new hardware and datacenter architectures to stay at the forefront of the industry
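To give a flavor of the GPU passthrough work mentioned above, a KVM/QEMU guest with a directly assigned GPU is typically launched via VFIO along these lines. This is a hedged, minimal sketch, not a description of Apple's actual setup; the PCI address `0000:65:00.0`, the vendor/device ID `10de 20b0`, and the disk image `guest.qcow2` are all hypothetical placeholders:

```shell
# Hypothetical example: pass a GPU at PCI address 0000:65:00.0 through to a VM.
# Unbind the GPU from its host driver and hand it to vfio-pci
# so the host kernel releases it for guest assignment.
echo "0000:65:00.0" > /sys/bus/pci/devices/0000:65:00.0/driver/unbind
echo "10de 20b0"    > /sys/bus/pci/drivers/vfio-pci/new_id

# Launch a KVM-accelerated guest with the GPU assigned via VFIO,
# host CPU passthrough, and memory backed by a single NUMA node.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -cpu host \
  -smp 16 -m 64G \
  -object memory-backend-ram,id=mem0,size=64G,prealloc=on \
  -numa node,memdev=mem0 \
  -device vfio-pci,host=0000:65:00.0 \
  -drive file=guest.qcow2,if=virtio \
  -nographic
```

SR-IOV assignment looks much the same, except a virtual function (e.g. `0000:65:00.1`) rather than the whole physical device is handed to `vfio-pci`, letting multiple guests share one card.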

MINIMUM QUALIFICATIONS

  • Bachelor’s Degree in Computer Science plus 5+ years of experience, or equivalent related experience.
  • 5+ years of experience in virtualization, specifically with KVM and QEMU.
  • Strong Linux development background, including kernel-level development and tuning for high-performance GPU and networking workloads.
  • Proficiency in high-speed networking, particularly RDMA (e.g., InfiniBand, RoCE), and network performance optimization in virtualized settings.
  • Knowledge of advanced virtualization concepts, including nested virtualization, VM live migration, and NUMA optimization.
  • Proven distributed systems and operating systems knowledge, and experience applying it to build stable, performant, and secure execution environments.

PREFERRED QUALIFICATIONS

  • Expertise in GPU development, including driver integration, configuration, and debugging, as well as hands-on experience with hypervisor GPU passthrough and SR-IOV.
  • Familiarity with CUDA libraries and GPU compute frameworks.
  • Experience with CloudStack or similar cloud orchestration platforms.
  • Familiarity with Docker, Kubernetes, and containerization technologies.
  • Experience with distributed GPU workloads and optimizing GPU network performance in multi-node environments.

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities

Please refer to the job description above for details.
