Machine Learning Engineer at Apple
Cupertino, California, United States
Full Time


Start Date

Immediate

Expiry Date

19 Feb 2026

Salary

0.0

Posted On

21 Nov 2025

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Image Processing, Neuroscience, Information Theory, Compression, Algorithm Development, Python, PyTorch, TensorFlow, JAX, Generative Modeling, Optimization, Analytical Thinking, Creative Problem-Solving, Digital Imaging, Display Software

Industry

Computers and Electronics Manufacturing

Description
The Visual eXperience team is looking for a passionate researcher/engineer to help shape the next generation of imaging, rendering, compression, and display solutions for products across the Apple ecosystem! The team offers a highly collaborative, hands-on environment that fosters scientific and engineering excellence, creativity, and innovation across the interdisciplinary areas of vision science, information theory, compression, machine learning, image enhancement and processing, neuroscience, color science, and optics. This engineer will explore the foundations of perception-aligned loss functions, neural compression systems, and image realism modeling that enable breakthrough performance in our camera, AR/VR, display, and video processing pipelines. You will join a team of scientists and engineers who care deeply about elegant theory, robust implementation, and real-world impact that makes a tangible difference to our users' experience. If you are excited by the intersection of information theory, perception, machine learning, and large-scale imaging systems, and want your work to ship in products used by millions, this role is for you.

In this highly visible role, you will invent the next generation of perceptual loss functions used across Apple's imaging ecosystem. Your work will span algorithm development, theoretical analysis, and deployment at scale.

MINIMUM QUALIFICATIONS
- Bachelor's degree in Computer Science, Electrical and Computer Engineering, Neuroscience, Vision Science, or equivalent, and 3+ years of relevant experience
- Experience translating complex mathematical concepts into practical algorithms aligned with perceived image realism or quality
- Experience with full-reference or no-reference image metrics, generative modeling, optimization, or realism-driven evaluation frameworks

PREFERRED QUALIFICATIONS
- Master's or Ph.D. in Computer Science, Electrical and Computer Engineering, Neuroscience, Vision Science, or equivalent
- Experience in information theory, probabilistic modeling, and/or machine learning
- Experience with Python and modern ML frameworks such as PyTorch, TensorFlow, and JAX
- Deep expertise in image compression, texture modeling, and rate-distortion optimization, with a demonstrated ability to design new metrics and algorithms that outperform classical approaches
- Publication record in machine learning, compression, or information theory venues (NeurIPS, ICLR, ICML, ISIT, or related)
- Hands-on experience building learned compression systems end to end, including model design, training pipelines, ablations, and integration into large-scale frameworks
- Internship or industry experience integrating research models into production-scale frameworks is a strong plus
- Basic knowledge of human visual perception is a strong plus
- Strong analytical and critical thinking, and creative problem-solving skills
- Excellent written and verbal communication, collaboration, and scientific writing skills in English
- Basic understanding of digital imaging and of display software and hardware
- Swift/Metal programming is a plus
Responsibilities
You will invent the next generation of perceptual loss functions used across Apple’s imaging ecosystem. Your work will span algorithm development, theoretical analysis, and deployment at scale.