AIML - ML Engineer, Responsible AI at Apple
New York, New York, United States
Full Time


Start Date

Immediate

Expiry Date

19 Feb, 26

Salary

Not specified

Posted On

21 Nov, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Generative AI, Natural Language Processing, LLMs, Diffusion Models, Failure Analysis, Quality Engineering, Robustness Analysis, Human Evaluations, Explainability, Interpretation, Python, Swift, Crowd-Based Annotations, Safety, Robustness

Industry

Computers and Electronics Manufacturing

Description
Would you like to play a part in building the next generation of generative AI applications at Apple? We're looking for scientists and engineers to work on ambitious projects that will shape the future of Apple, our products, and the broader world. In this role, you'll have the opportunity to tackle innovative problems in machine learning, with a particular focus on generative AI. As a member of the Apple HCMI group, you will work on Apple's generative models that will power a wide array of new features. Our team is currently working on large generative models for vision and language, with particular interest in safety, robustness, and uncertainty in models.

DESCRIPTION
- Develop models, tools, metrics, and datasets for assessing and evaluating the safety of generative models over the model deployment lifecycle
- Develop methods, models, and tools to interpret and explain failures in language and diffusion models
- Build and maintain human annotation and red teaming pipelines to assess the quality and risk of various Apple products
- Prototype, implement, and evaluate new ML models and algorithms for red teaming LLMs

MINIMUM QUALIFICATIONS
- Strong engineering skills and experience writing production-quality code in Python, Swift, or other programming languages
- Background in generative models, natural language processing, LLMs, or diffusion models
- Experience with failure analysis, quality engineering, or robustness analysis for AI/ML-based features
- Experience working with crowd-based annotations and human evaluations
- Experience working on explainability and interpretation of AI/ML models
- Willingness to work with highly sensitive content, including exposure to offensive and controversial material

PREFERRED QUALIFICATIONS
- BS, MS, or PhD in Computer Science, Machine Learning, or a related field, or an equivalent qualification acquired through other avenues
- Proven track record of contributing to diverse teams in a collaborative environment
Responsibilities
- Develop models, tools, metrics, and datasets for assessing and evaluating the safety of generative models.
- Build and maintain human annotation and red teaming pipelines to assess the quality and risk of various Apple products.