Staff Research Scientist, Applied Machine Learning Security (Agent Systems) at Apple
Cupertino, California, United States
Full Time


Start Date

Immediate

Expiry Date

21 Apr, 2026

Salary

Not specified

Posted On

21 Jan, 2026

Experience

5 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Security, Systems Security, Adversarial ML, Experimental Skills, Engineering Skills, Research, Platform Engineering, Product Security, Risk Reduction, Architectural Decisions, Design Changes, Long-term Hardening Strategies, LLM-based Systems, Tool-augmented ML Systems, Reproducibility, Operational Relevance

Industry

Computers and Electronics Manufacturing

Description
At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create. The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.

We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale: driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users. Your impact will be measured by risk reduction in production systems that ship.

DESCRIPTION

This role focuses on applied security research for production ML systems, with an emphasis on agentic and tool-using models deployed at scale. You will lead research efforts that surface real security risks in shipped or near-shipped systems, and you will drive mitigations that integrate cleanly into Apple's ML platforms and products. You will operate at the boundary between research, platform engineering, and product security, conducting original research grounded in real system behavior and translating it into concrete design changes, launch requirements, and long-term hardening strategies. Impact is measured by risk reduction in production, not by theoretical results alone.

MINIMUM QUALIFICATIONS

Ph.D. or equivalent experience in machine learning, security, systems, or a related field.
Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.
Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.

PREFERRED QUALIFICATIONS

Experience researching or securing LLM-based or tool-augmented ML systems.
Ability to work fluidly across research, engineering, and security review processes.
Track record of influencing production systems through research-driven insights.
Publications in top venues are a plus, but production impact is the primary signal.
Responsibilities
Lead original security research on agentic ML systems deployed at scale and drive secure agentic design into shipping products. Identify vulnerabilities in tool-using models and design adversarial evaluations reflecting actual attacker behavior.