Start Date
Immediate
Expiry Date
29 Jun, 25
Salary
143,100
Posted On
29 Mar, 25
Experience
0 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Learning Techniques, Keras, Network Optimization, Python, Uncertainty, Publications, Machine Learning, Conferences, C++, Computer Science, Research, Computer Engineering
Industry
Information Technology/IT
SUMMARY
Posted: Feb 25, 2025
Weekly Hours: 40
Role Number: 200592777
The System Intelligence and Machine Learning (SIML) organization at Apple is looking for a Multi-Modal LLM Research Engineer to help shape the future of on-device Apple Intelligence. In this role, you will work at the intersection of large language models, neural network optimizations, and algorithm development, driving innovations that enhance real-world AI experiences for millions of users.
DESCRIPTION
As part of a collaborative team of deep learning experts and software engineers, you will explore the optimal trade-offs between model quality and efficiency, ensuring that innovative Multi-Modal LLMs can be seamlessly deployed on-device. You will translate the latest research into practical engineering solutions or invent novel technologies, shaping key decisions on on-device model deployment and real-world performance. Working closely with various teams at Apple, you will help design Multi-Modal LLM architectures, refine training paradigms for real-world applications, and develop software optimized for emerging hardware architectures, potentially even influencing future hardware designs. If you want to be part of a science- and results-driven team and are comfortable embracing new challenges in a fast-paced, iterative environment, we’d love to hear from you. Your research and development will directly shape the next generation of Apple Intelligence experiences!
MINIMUM QUALIFICATIONS
PREFERRED QUALIFICATIONS
Please refer to the job description for details.