Research Intern - MSRC AI Security Research at Microsoft
Cambridge, England, United Kingdom - Full Time


Start Date

Immediate

Expiry Date

03 Apr, 26

Salary

Not specified

Posted On

03 Jan, 26

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

AI Security, Privacy, Research, Collaboration, Prototyping, Publishing, Vulnerability Analysis, Mitigations, Cross-Functional Teams, Machine Learning, Security Solutions, Data Analysis, Concept Validation, Industry Knowledge, Presentation Skills, Problem Solving

Industry

Software Development

Description
Overview

The Microsoft Security Response Center (MSRC) works to protect customers and Microsoft from current and emerging threats to security and privacy. Within MSRC, our AI Vulnerabilities and Mitigations team analyzes all reported security vulnerabilities in Microsoft's AI systems and develops new mitigations through deep research. We are looking for Research Interns to work with us on developing new mitigations for AI systems. Our team is uniquely placed to solve real-world security and privacy challenges through cutting-edge scientific research, informed by vulnerability data from production AI systems.

Some of our team's recent research and collaborations include:

- TaskTracker: Catching LLM Task Drift with Activation Deltas (https://arxiv.org/abs/2406.00799)
- LLMail-Inject: Adaptive Prompt Injection Challenge (https://microsoft.github.io/llmail-inject/)
- Highlight & Summarize (https://arxiv.org/abs/2508.02872)
- Design Patterns for Securing LLM Agents against Prompt Injections (https://arxiv.org/abs/2506.08837)
- Compromising Autonomous LLM Agents Through Malfunction Amplification (https://aclanthology.org/2025.emnlp-main.1771/)
- Securing AI Agents with Information-Flow Control (https://arxiv.org/abs/2505.23643)
- The Hawthorne Effect in Reasoning Models (https://arxiv.org/abs/2505.14617)
- VerifiableFL: Verifiable Claims for Federated Learning using Exclaves (https://arxiv.org/abs/2412.10537)

Responsibilities

- Conduct cutting-edge research in AI security and privacy by proposing, exploring, and evaluating new ideas.
- Develop and implement prototypes to validate concepts and demonstrate real-world applicability.
- Collaborate with cross-functional teams to integrate security and privacy solutions into products and services.
- Publish findings and present insights to internal stakeholders and at external conferences to advance industry knowledge.

Qualifications

Required/Minimum Qualifications:
- Students enrolled in a PhD program, or outstanding undergraduate/master's students with research experience.

Preferred/Additional Qualifications:
- One or more papers at top security conferences (e.g., USENIX, CCS, S&P, NDSS), or papers focusing on security, safety, or privacy appearing at top machine learning conferences (e.g., NeurIPS, ICML, ICLR), are strongly desired.

We'd love to see your work: please share any personal projects, GitHub repositories, portfolios, or examples of work you're proud of in your application.

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.