AI Safety Researcher at Market Partner
Stockholm, Sweden
Full Time


Start Date

Immediate

Expiry Date

08 Apr, 26

Salary

0.0

Posted On

08 Jan, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Safety Experience, Python, Java, SQL, AI Expertise, Adversarial Testing, System Alignment, Prompt Engineering, Context Engineering, Preference Tuning, Automatic Prompt Optimisation, Data Management, Benchmarking, Cross-Functional Collaboration, Research, Trust & Safety, Engineering

Industry

Advertising Services

Description
The world’s most popular audio streaming subscription service is looking for an AI Safety Researcher to join the band in a consultant assignment. The client transformed music listening forever when it launched in 2008. Period: ASAP to 2026-07-25 (full-time), with a possibility of extension.

About the role
The Personalization mission makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, the team built some of the client’s most-loved features by understanding the world of music and podcasts better than anyone else. Join the team and you’ll keep millions of users listening by making great recommendations to each and every one of them.

We are looking for a researcher to further strengthen our client’s work on AI safety. You will work with a cross-functional team of highly skilled researchers, engineers, and domain experts to make sure our features are safe and trustworthy. You have a strong technical background and are able to work hands-on with complex systems and data.

What you’ll do
- Work with a cross-functional team including Research, Trust & Safety, and Engineering.
- Adversarial testing: stress-test systems, e.g. via red-teaming campaigns, to identify material gaps and produce training data.
- Work hands-on with querying and managing data, automated red-teaming frameworks, LLM-as-a-judge, and more.
- Benchmark against similar services.
- System alignment: work with the teams to better align systems with evolving safety policies, focusing on robust and scalable processes.
- Prompt and context engineering; preference tuning; automatic prompt optimisation.
- Produce high-quality test and training data.

Full-time work during the contract is preferred, but part-time can also be considered.

Who you are
Essential
- Safety experience: proven experience contributing to safety-related projects or research (e.g., adversarial testing, system alignment).
- Technical stack: strong proficiency in Python, Java, and SQL.
- AI expertise: hands-on experience with LLMs and prompt/context engineering.
- Academic requirement: preferably pursuing or holding an MSc or PhD in an AI/ML-related field, with a focus on safety or agentic systems.
- Plus: experience working with cross-language models.

Core expertise: safety research and advanced model alignment techniques.

Responsibilities: Lead adversarial testing/red-teaming campaigns to identify material gaps, focusing on robust and scalable system alignment (e.g., preference tuning, automatic prompt optimisation).

We are Market Partner
Market Partner is proud to be an equal-opportunity employer. You are welcome in our community regardless of who you are, no matter where you come from, or what you look like. We apply ongoing selection and may fill the position as soon as we find the right candidate.
Responsibilities
Lead adversarial testing and red-teaming campaigns to identify material gaps in the system. Focus on robust and scalable system alignment, including preference tuning and automatic prompt optimization.