Applied Scientist, AI Evaluation Platform at Apple
Seattle, Washington, United States
Full Time


Start Date

Immediate

Expiry Date

13 Mar 2026

Salary

0.0

Posted On

13 Dec 2025

Experience

2 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Applied Scientist, AI Evaluation, Benchmarking, LLM Agents, Evaluation Frameworks, Python, Swift, Software Engineering, AI/ML Models, Analytical Skills, Communication Skills, Experimental Design, Automated Testing, Human-in-the-loop, Program Synthesis, Collaboration

Industry

Computers and Electronics Manufacturing

Description
Apple is where individual imaginations gather together, committing to the values that lead to great work. Every new product we build, service we create, or Apple Store experience we deliver is the result of us making each other's ideas stronger. That happens because every one of us shares a belief that we can make something wonderful and share it with the world, changing lives for the better. It's the diversity of our people and their thinking that inspires the innovation that runs through everything we do. When we bring everybody in, we can do the best work of our lives. Here, you'll do more than join something; you'll add something.

Our team, part of Apple Services Engineering, is looking for an Applied Scientist to lead the design and continuous development of automated benchmarking methodologies for AI-powered code assistant tools. In this role, you will investigate how coding-focused LLM agents behave, create rigorous evaluation frameworks, and establish scientific standards for assessing their quality and reliability. The role centers on building scalable evaluation frameworks that ensure our engineers have the right tools to create products that surprise and delight our customers.

The successful candidate is proactive and able to work both independently and collaboratively on a wide range of projects. You will work alongside a small but impactful team, collaborating with ML and data scientists, software developers, project managers, and other teams at Apple to understand requirements and translate them into scalable, reliable, and efficient evaluation frameworks.

Minimum Qualifications

- Advanced degree (MS or PhD) in Computer Science, Software Engineering, or equivalent research/work experience.
- Strong research background in empirical evaluation, experimental design, or benchmarking.
- Strong proficiency in Python.
- Intermediate proficiency in Swift.
- Deep familiarity with software engineering workflows and developer tools.
- Experience working with or evaluating AI/ML models, preferably LLMs or program synthesis systems.
- Strong analytical and communication skills, including the ability to write clear reports.

Preferred Qualifications

- Publications in ML evaluation or related fields.
- Experience with automated testing frameworks.
- Experience constructing human-in-the-loop or multi-turn evaluation setups.
- Prior work on agentic developer tools.
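To make the benchmarking work concrete, below is a minimal Python sketch of the kind of automated evaluation harness this role describes: it hands a task prompt to a coding agent, runs the generated code against the task's unit tests in a subprocess, and reports a pass rate. All names here (BenchmarkTask, run_task, stub_agent) are hypothetical illustrations, not Apple's actual tooling.

import os
import subprocess
import sys
import tempfile
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class BenchmarkTask:
    task_id: str
    prompt: str     # natural-language spec given to the agent
    test_code: str  # unit tests the generated code must pass


def run_task(agent: Callable[[str], str], task: BenchmarkTask,
             timeout: float = 10.0) -> bool:
    """Ask the agent for a solution, then execute it with its tests."""
    solution = agent(task.prompt)
    program = solution + "\n\n" + task.test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        # A nonzero exit code (failed assert, crash) counts as failure.
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)


def pass_rate(agent: Callable[[str], str], tasks: List[BenchmarkTask]) -> float:
    """Fraction of tasks whose tests pass on a single agent attempt."""
    return sum(run_task(agent, t) for t in tasks) / len(tasks)


if __name__ == "__main__":
    # A stub "agent" standing in for a real LLM coding assistant.
    def stub_agent(prompt: str) -> str:
        return "def add(a, b):\n    return a + b"

    tasks = [BenchmarkTask("add-001", "Write add(a, b).",
                           "assert add(2, 3) == 5")]
    print(f"pass@1: {pass_rate(stub_agent, tasks):.2f}")

A production harness would go further than this core loop: sandboxed execution with resource limits, multiple samples per task (e.g., pass@k), and logged transcripts to support human-in-the-loop review.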
Responsibilities
Lead the design and continuous development of automated benchmarking methodologies for AI-powered code assistant tools. Collaborate with ML and data scientists, software developers, project managers, and other teams at Apple to create scalable, reliable, and efficient evaluation frameworks.