AI Evaluation Engineer - Health at Apple
Cupertino, California, USA - Full Time


Start Date

Immediate

Expiry Date

12 Nov 2025

Salary

$272,100

Posted On

12 Aug 2025

Experience

3 years or more

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Computer Science, Test Design, Data Science, Python, Statistics, Communication Skills, Reliability

Industry

Information Technology/IT

Description

The Health Sensing team builds outstanding technologies to support our users in living their healthiest, happiest lives by providing them with objective, accurate, and timely information about their health and well-being. As part of the larger Sensor SW & Prototyping team, we take a multimodal approach, using a variety of data types across HW platforms, such as camera, PPG, and natural language, to build these products. In this role, you will be at the forefront of developing and validating evaluation methodologies for Generative AI systems in health and wellbeing applications. You will design comprehensive human annotation frameworks, build automated evaluation tools, and conduct rigorous statistical analyses to ensure the reliability of both human and AI-based assessment systems. Your work will directly impact the quality and trustworthiness of AI features by creating scalable evaluation pipelines that combine human insight with automated validation.

DESCRIPTION

In this role you will:

  • Design and implement evaluation frameworks for measuring model performance, including human annotation protocols, quality control mechanisms, statistical reliability analysis, and LLM-based autograders to scale evaluation
  • Apply statistical methods to extract meaningful signals from human-annotated datasets, derive actionable insights, and implement improvements to models and evaluation methodologies
  • Analyze model behavior, identify weaknesses, and drive design decisions with failure analysis. Examples include, but are not limited to: model experimentation, adversarial testing, and creating insight/interpretability tools to understand and predict failure modes
  • Work across the entire ML development cycle, including developing and managing data from various endpoints, managing ML training jobs with large datasets, and building efficient and scalable model evaluation pipelines
  • Collaborate with engineers to build reliable end-to-end pipelines for long-term projects
  • Work cross-functionally with designers, clinical experts, and engineering teams across Hardware and Software to apply algorithms to real-world applications
  • Independently run and analyze ML experiments to deliver real improvements

MINIMUM QUALIFICATIONS

  • Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; or equivalent experience
  • Proficiency in Python and ability to write clean, performant code and collaborate using standard software development practices
  • Experience in building data and inference pipelines to process large scale datasets
  • Strong statistical analysis skills and experience validating data quality and model performance
  • Experience with applied LLM development, including prompt engineering and chain-of-thought techniques

PREFERRED QUALIFICATIONS

  • MS with a minimum of 3 years of relevant industry experience, or a PhD in a relevant field
  • Experience building, evaluating, and improving LLM-based evaluation systems and synthetic data generation techniques
  • Experience in rigorous, evidence-based approaches to test development, e.g. quantitative and qualitative test design, reliability and validity analysis
  • Customer-focused mindset with experience or strong interest in building consumer digital health and wellness products
  • Strong communication skills and ability to work cross-functionally with technical and non-technical stakeholders

