Generative AI Research Engineer, Multimodal, Agent Modeling - SIML at Apple
Cupertino, California, United States - Full Time


Start Date

Immediate

Expiry Date

30 Jan, 26

Salary

Not specified

Posted On

01 Nov, 25

Experience

5+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Generative AI, Machine Learning, Multimodal Modeling, AI Safety, Computer Vision, Reinforcement Learning, Privacy-Preserving Learning, Prototyping, Validation, Deployment, Neural Architectures, Distributed Training, ML Toolkits, Applied Research, Leadership, Innovation

Industry

Computers and Electronics Manufacturing

Description
Are you passionate about Generative AI? Are you interested in working on groundbreaking generative modeling technologies that enrich the lives of billions of people? We are driving multiple initiatives focused on advancing generative models, and we are seeking candidates experienced in training, adapting, and deploying large-scale generative models. This role emphasizes AI safety, multimodal understanding and generation, and the development of agentic systems that push the boundaries of what AI can achieve responsibly.

We are the Intelligent System Experience (ISE) team within Apple's software organization. The team operates at the intersection of multimodal machine learning and system experiences, overseeing a range of experiences such as System Experience (Springboard, Settings), Image Generation, Genmoji, Writing Tools, Keyboards, Pencil & Paper, and Generative Shortcuts, all powered by production-scale ML workflows. Our multidisciplinary ML teams focus on a broad spectrum of areas, including Visual Generation Foundation Models; Multimodal Understanding; Visual Understanding of People, Text, Handwriting, and Scenes; Personalization; Knowledge Extraction; Conversation Analysis; Behavioral Modeling for Proactive Suggestions; and Privacy-Preserving Learning. These innovations form the foundation of the seamless, intelligent experiences our users enjoy every day.

We are looking for research engineers to architect and advance multimodal LLM and Agentic AI technologies and to ensure their safe, responsible deployment in the real world. An ideal candidate can lead diverse cross-functional efforts spanning ML modeling, prototyping, validation, and privacy-preserving learning. A strong foundation in machine learning and generative AI, along with a proven ability to translate research innovations into production-grade systems, is essential. Industry experience in Vision-Language multimodal modeling, Reinforcement and Preference Learning, Multimodal Safety, and Agentic AI Safety & Security would be a strong plus.

SELECTED REFERENCES TO OUR TEAM'S WORK:
https://arxiv.org/pdf/2507.13575
https://arxiv.org/pdf/2407.21075
https://www.apple.com/newsroom/2024/12/apple-intelligence-now-features-image-playground-genmoji-and-more/

DESCRIPTION
We are looking for a candidate with a proven track record in applied ML research. Responsibilities include training large-scale multimodal (2D/3D vision-language) models on distributed backends, deploying efficient neural architectures on device and on Private Cloud Compute, and addressing emerging safety challenges to keep models and agents robust and aligned with human values. A key focus of the position is ensuring real-world quality, with emphasis on model and agent safety, fairness, and robustness. You will collaborate closely with ML researchers, software engineers, and hardware and design teams across multiple disciplines. The core responsibilities include advancing the multimodal capabilities of large language models and strengthening AI safety and security for agentic workflows. On the user experience front, the work involves aligning image and video content to the space of LLMs for visual actions and multi-turn interactions, enabling rich, intuitive experiences powered by agentic AI systems.

MINIMUM QUALIFICATIONS
M.S. or Ph.D. in Electrical Engineering, Computer Science, or a related field (mathematics, physics, or computer engineering) with a focus on computer vision and/or machine learning, or comparable professional experience.
Strong ML and Generative Modeling fundamentals
Experience with one or more of the following: Pre-training or Post-training of Multimodal LLMs, Reinforcement Learning, Distillation
Familiarity with distributed training
Proficiency with ML toolkits, e.g., PyTorch (illustrated in the sketch following these qualifications)
Awareness of the challenges involved in transitioning a prototype into a final product
Proven record of research innovation and demonstrated leadership in both applied research and development

PREFERRED QUALIFICATIONS
Experience building and deploying AI agents, LLMs for tool use, and Multimodal LLMs
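
To give a concrete flavor of the distributed training and PyTorch proficiency listed above, here is a minimal, self-contained sketch of data-parallel training with torch.distributed. The toy linear model, random data, and gloo backend are illustrative assumptions, not details taken from this posting.

# Minimal PyTorch DistributedDataParallel sketch (toy model and data).
# Launch with: torchrun --nproc_per_node=2 ddp_sketch.py
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="gloo")  # use "nccl" on multi-GPU hosts
    rank = dist.get_rank()

    model = nn.Linear(16, 4)   # toy stand-in for a large multimodal model
    model = DDP(model)         # gradients are all-reduced across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(10):
        x = torch.randn(8, 16)  # each rank draws its own shard of data
        y = torch.randn(8, 4)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()         # DDP synchronizes gradients here
        opt.step()
        if rank == 0 and step % 5 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

At production scale the same pattern extends to sharded optimizers and model parallelism; the role's reference to "distributed backends" would sit on top of these primitives.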
Responsibilities
The role involves training large-scale multimodal models and deploying efficient neural architectures while addressing safety challenges. Collaboration with ML researchers and software engineers is essential to advance multimodal capabilities and ensure AI safety.