AI Evaluation Engineer - Health at Apple
Cupertino, California, United States
Full Time


Start Date

Immediate

Expiry Date

02 Jan, 26

Salary

0.0

Posted On

04 Oct, 25

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Statistical Analysis, Model Evaluation, Data Processing, LLM Development, Prompt Engineering, Quality Control, Human Annotation, Failure Analysis, Machine Learning, Data Management, Collaboration, Consumer Digital Health, Communication Skills, Statistical Reliability, Automated Evaluation Tools

Industry

Computers and Electronics Manufacturing

Description
The Health Sensing team builds outstanding technologies to support our users in living their healthiest, happiest lives by providing them with objective, accurate, and timely information about their health and well-being. As part of the larger Sensor SW & Prototyping team, we take a multimodal approach, using a variety of data types across hardware platforms, such as camera, PPG, and natural language, to build products that support our users in living their healthiest, happiest lives. In this role, you will be at the forefront of developing and validating evaluation methodologies for Generative AI systems in health and wellbeing applications. You will design comprehensive human annotation frameworks, build automated evaluation tools, and conduct rigorous statistical analyses to ensure the reliability of both human and AI-based assessment systems. Your work will directly impact the quality and trustworthiness of AI features by creating scalable evaluation pipelines that combine human insight with automated validation.

In this role you will:
- Design and implement evaluation frameworks for measuring model performance, including human annotation protocols, quality control mechanisms, statistical reliability analysis, and LLM-based autograders to scale evaluation
- Apply statistical methods to extract meaningful signals from human-annotated datasets, derive actionable insights, and implement improvements to models and evaluation methodologies
- Analyze model behavior, identify weaknesses, and drive design decisions through failure analysis, including but not limited to model experimentation, adversarial testing, and building insight and interpretability tools to understand and predict failure modes
- Work across the entire ML development cycle, such as developing and managing data from various endpoints, managing ML training jobs with large datasets, and building efficient and scalable model evaluation pipelines
- Collaborate with engineers to build reliable end-to-end pipelines for long-term projects
- Work cross-functionally with designers, clinical experts, and engineering teams across Hardware and Software to apply algorithms to real-world applications
- Independently run and analyze ML experiments to deliver measurable improvements

Minimum Qualifications
- BS and a minimum of 10 years of relevant industry experience
- Proficiency in Python and the ability to write clean, performant code and collaborate using standard software development practices
- Experience building data and inference pipelines to process large-scale datasets
- Strong statistical analysis skills and experience validating data quality and model performance
- Experience with applied LLM development, prompt engineering, chain-of-thought techniques, etc.

Preferred Qualifications
- MS or PhD in a relevant field
- Experience with LLM-based evaluation systems and synthetic data generation techniques, and with evaluating and improving such systems
- Experience with rigorous, evidence-based approaches to test development, e.g. quantitative and qualitative test design, reliability and validity analysis
- Customer-focused mindset with experience in, or a strong interest in, building consumer digital health and wellness products
- Strong communication skills and the ability to work cross-functionally with technical and non-technical stakeholders
Responsibilities
You will design and implement evaluation frameworks for measuring model performance and apply statistical methods to extract meaningful signals from human-annotated datasets. Your work will directly impact the quality and trustworthiness of AI features by creating scalable evaluation pipelines that combine human insight with automated validation.