Start Date
Immediate
Expiry Date
12 Nov 2025
Salary
272,100
Posted On
12 Aug 2025
Experience
3 years or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Computer Science, Test Design, Data Science, Python, Statistics, Communication Skills, Reliability
Industry
Information Technology/IT
The Health Sensing team builds outstanding technologies to support our users in living their healthiest, happiest lives by providing them with objective, accurate, and timely information about their health and well-being. As part of the larger Sensor SW & Prototyping team, we take a multimodal approach, using a variety of data types across hardware platforms, such as camera, PPG, and natural language, to build these products. In this role, you will be at the forefront of developing and validating evaluation methodologies for Generative AI systems in health and well-being applications. You will design comprehensive human annotation frameworks, build automated evaluation tools, and conduct rigorous statistical analyses to ensure the reliability of both human and AI-based assessment systems. Your work will directly impact the quality and trustworthiness of AI features by creating scalable evaluation pipelines that combine human insight with automated validation.
DESCRIPTION
In this role you will:
- Design and implement evaluation frameworks for measuring model performance, including human annotation protocols, quality control mechanisms, statistical reliability analysis, and LLM-based autograders to scale evaluation (a small illustrative sketch of one such reliability check follows this list)
- Apply statistical methods to extract meaningful signals from human-annotated datasets, derive actionable insights, and implement improvements to models and evaluation methodologies
- Analyze model behavior, identify weaknesses, and drive design decisions with failure analysis; examples include, but are not limited to, model experimentation, adversarial testing, and building insight/interpretability tools to understand and predict failure modes
- Work across the entire ML development cycle, such as developing and managing data from various endpoints, managing ML training jobs with large datasets, and building efficient and scalable model evaluation pipelines
- Collaborate with engineers to build reliable end-to-end pipelines for long-term projects
- Work cross-functionally with designers, clinical experts, and engineering teams across Hardware and Software to apply algorithms to real-world applications
- Independently run and analyze ML experiments to drive real improvements
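For illustration, the statistical reliability analysis of human annotations mentioned above commonly begins with an inter-annotator agreement measure such as Cohen's kappa. The sketch below is a minimal, self-contained example under a hypothetical two-annotator, two-label setup; the labels and data are invented for demonstration and are not drawn from this posting.

```python
# Minimal sketch: Cohen's kappa, a chance-corrected agreement score
# between two human annotators. Hypothetical labels for illustration.
from collections import Counter

def cohen_kappa(a, b):
    """Return Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items where the annotators match.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement, from each annotator's label frequencies.
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two annotators rating model answers.
ann1 = ["helpful", "helpful", "unhelpful", "helpful", "unhelpful"]
ann2 = ["helpful", "unhelpful", "unhelpful", "helpful", "unhelpful"]
print(f"kappa = {cohen_kappa(ann1, ann2):.2f}")  # -> kappa = 0.62
```

Kappa near 1 indicates strong agreement beyond chance, while values near 0 suggest the annotation protocol or guidelines need tightening before the labels are used to train or grade models.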
MINIMUM QUALIFICATIONS
PREFERRED QUALIFICATIONS