AIML - Machine Learning Engineer, Responsible AI at Apple
Cupertino, California, United States -
Full Time


Start Date

Immediate

Expiry Date

13 Mar, 26

Salary

0.0

Posted On

13 Dec, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Generative AI, Data Science, Python, Foundation Models, Safety Evaluations, Human Evaluations, Auto-Grading, Cross-Functional Communication, Metrics Reporting, Scientific Investigation, Fairness, Bias, Robustness, Explainability, Uncertainty

Industry

Computers and Electronics Manufacturing

Description
Would you like to play a part in building the next generation of generative AI applications at Apple? We’re looking for Machine Learning Engineers to work on ambitious projects that will impact the future of Apple, our products, and the broader world. This role is directed at assessing, quantifying, and improving the safety and inclusivity of Apple’s Generative-AI powered features and products. In this role you’ll have the opportunity to tackle innovative problems in machine learning, particularly focused on large language models for text generation, diffusion models for image generation, and mixed model systems for multimodal applications. As a member of Apple’s Responsible AI group you will be working on a wide array of new features and research in the generative AI space. Our team is currently interested in large generative models for vision and language, with particular interest on Responsible AI, safety, fairness, robustness, explainability, and uncertainty in models. DESCRIPTION This role focuses on developing, carrying-out, interpreting, and communicating pre- and post-ship evaluations of the safety of Apple Intelligence features. Both human grading and model-based auto-grading are thoughtfully leveraged to power these evaluations. Additionally, this role researches and develops auto-grading methodology & infrastructure to benefit ongoing and future Apple Intelligence safety evaluations. Producing safety evaluations that uphold Apple’s Responsible AI values requires thoughtful data sampling, creation, and curation for evaluation datasets; high quality, detailed annotations and careful auto-grading to assess feature performance; and mindful analysis to understand what the evaluation means for the user experience. This role heavily draws on applied data science, scientific investigation and interpretation, cross-functional communication and collaboration, and metrics reporting and presentation. MINIMUM QUALIFICATIONS MS, or PhD in Computer Science, Machine Learning, Statistics, or related fields; or an equivalent qualification acquired through other avenues. Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failures. Strong engineering skills and experience in writing production-quality code in Python. Deep experience in foundation model-based AI programming (i.e.: using DSPy for optimizing foundation model prompts, for example) and a drive to innovate in this space. Experience working with noisy, crowd-based data labels and human evaluations. PREFERRED QUALIFICATIONS Experience working in the Responsible AI space. Prior scientific research and publication experience. Strong organizational and operational skills working with large, multi-functional, and diverse teams. Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable.
Responsibilities
This role focuses on developing and communicating evaluations of the safety of Apple Intelligence features. It involves researching and developing auto-grading methodologies to enhance ongoing safety evaluations.
Loading...