GPU ML Engineer at Apple
Cupertino, California, USA
Full Time


Start Date

Immediate

Expiry Date

01 Jun, 25

Salary

$143,100

Posted On

01 Mar, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Optimization, Parallel Programming, Algebra, Mathematics, Maintenance, Optimization Techniques, Computer Architecture

Industry

Information Technology/IT

Description

SUMMARY

Posted: Feb 19, 2025
Weekly Hours: 40
Role Number: 200576617
Apple’s Compute Frameworks team, part of the GPU, Graphics and Displays organization, provides a suite of high-performance data-parallel algorithms for developers inside and outside of Apple across iOS, macOS, and Apple TV. Our efforts are currently focused on the key areas of linear algebra, image processing, and machine learning, along with other projects of key interest to Apple. We are always looking for exceptionally dedicated individuals to grow our outstanding team.

DESCRIPTION

Our team is seeking extraordinary machine learning and GPU programming engineers who are passionate about providing robust compute solutions for accelerating machine learning networks on Apple Silicon. This role offers the opportunity to influence the design of compute and programming models in next-generation GPU architectures.

Responsibilities:

  • Adding optimized GPU compute kernels across machine learning, image processing, linear algebra, and computer vision.
  • Defining and implementing APIs in Metal Performance Shaders (a brief illustrative sketch follows this description).
  • Performing in-depth analysis and compiler- and kernel-level optimizations to ensure the best possible performance across hardware families.
  • Partnering with Platform Architecture teams to define the Apple GPU compute hardware roadmap.
  • Working with hardware teams to analyze performance on future silicon.
  • Developing, maintaining, and optimizing ML inference and training acceleration technologies.

Intended deliverables: GPU compute acceleration technology, as well as optimized compute kernels, computational graph, and ML training/inference technologies across products.

If this sounds of interest, we would love to hear from you!
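As a rough illustration of the computational-graph and Metal Performance Shaders work described above, the sketch below builds and runs a small matrix-multiplication graph with MPSGraph from Swift. The shapes, values, and choice of operation are arbitrary assumptions for brevity, not a workflow prescribed by the role.

    import Foundation
    import Metal
    import MetalPerformanceShadersGraph

    // Minimal sketch: build a tiny computational graph (C = A x B for 2x2
    // matrices) and run it on the GPU through MPSGraph. Shapes and values
    // are arbitrary assumptions for illustration.
    let device = MTLCreateSystemDefaultDevice()!
    let graph = MPSGraph()

    let shape: [NSNumber] = [2, 2]
    let a = graph.placeholder(shape: shape, dataType: .float32, name: "A")
    let b = graph.placeholder(shape: shape, dataType: .float32, name: "B")
    let c = graph.matrixMultiplication(primary: a, secondary: b, name: "C")

    // Wrap host data for the two inputs and execute the graph.
    let values: [Float] = [1, 2, 3, 4]
    let data = values.withUnsafeBufferPointer { Data(buffer: $0) }
    let graphDevice = MPSGraphDevice(mtlDevice: device)
    let aData = MPSGraphTensorData(device: graphDevice, data: data, shape: shape, dataType: .float32)
    let bData = MPSGraphTensorData(device: graphDevice, data: data, shape: shape, dataType: .float32)

    let results = graph.run(feeds: [a: aData, b: bData],
                            targetTensors: [c],
                            targetOperations: nil)
    print(results[c]!.shape)   // [2, 2]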

MINIMUM QUALIFICATIONS

  • Proven programming and problem-solving skills.
  • Good understanding of machine learning fundamentals.
  • GPU compute programming models & optimization techniques.
  • GPU compute framework development, maintenance, and optimization.
  • Machine learning development using one or more ML frameworks (TensorFlow, PyTorch or JAX).

PREFERRED QUALIFICATIONS

  • Experience with adding computational graph support, a runtime, or a device backend to machine learning libraries (TensorFlow, PyTorch, or JAX) is a plus.
  • Experience with high-performance parallel programming, GPU programming, or LLVM/MLIR compiler infrastructure is a plus (a minimal GPU kernel sketch follows this list).
  • Experience with system level programming and computer architecture.
  • Background in mathematics, including linear algebra and numerical methods.
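As a rough illustration of the GPU programming experience mentioned above, the sketch below compiles a simple Metal compute kernel from source and dispatches it from Swift. The kernel name (saxpy), buffer sizes, and launch configuration are arbitrary assumptions chosen for brevity, not anything specified by the role.

    import Metal

    // Minimal sketch: compile a SAXPY compute kernel from source and dispatch
    // it over a 1-D grid. Sizes and threadgroup width are arbitrary assumptions.
    let kernelSource = """
    #include <metal_stdlib>
    using namespace metal;

    kernel void saxpy(device float *y        [[buffer(0)]],
                      device const float *x  [[buffer(1)]],
                      constant float &alpha  [[buffer(2)]],
                      uint i [[thread_position_in_grid]]) {
        y[i] = alpha * x[i] + y[i];
    }
    """

    let device = MTLCreateSystemDefaultDevice()!
    let library = try! device.makeLibrary(source: kernelSource, options: nil)
    let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "saxpy")!)

    let n = 1024
    var alpha: Float = 2.0
    let x = device.makeBuffer(length: n * MemoryLayout<Float>.stride, options: .storageModeShared)!
    let y = device.makeBuffer(length: n * MemoryLayout<Float>.stride, options: .storageModeShared)!

    let queue = device.makeCommandQueue()!
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(y, offset: 0, index: 0)
    encoder.setBuffer(x, offset: 0, index: 1)
    encoder.setBytes(&alpha, length: MemoryLayout<Float>.stride, index: 2)
    encoder.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

On Apple Silicon, the shared storage mode means the CPU can read the contents of y directly once the command buffer completes.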
Responsibilities

Please refer to the job description above for details.