Start Date
Immediate
Expiry Date
08 Nov, 25
Salary
Not disclosed
Posted On
09 Aug, 25
Experience
0 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Games, C++, Artificial Intelligence, Self-Confidence, Communication Skills, Research, Workshops, Robotics, SciPy, Critical Environments, Journals, Subject Matter Experts, SQL, Reinforcement Learning, Python, Clear Vision, Machine Learning, ML, Java, Conferences, Strategy
Industry
Information Technology/IT
“This role offers a unique opportunity to contribute to cutting-edge AI research in Trust and Safety — driving innovation, building trust, and enabling adoption of AI. You’ll collaborate with leading researchers and practitioners to turn breakthroughs into real-world impact. There’s so much meaningful work ahead and we’re eager to build it together.”
QUALIFICATIONS:
KNOWLEDGE, SKILLS AND ABILITIES:
WHAT YOU’LL LOVE ABOUT US
ABOUT THE ROLE
Reporting to the Director of AI Trust and Safety, the Applied Research Scientist will lead innovative research in AI Trust and Safety (T&S), concentrating on advanced AI systems and their systemic and societal risks. They will contribute to a dynamic research agenda that builds on Amii’s strengths, addresses critical gaps in the T&S landscape, and is informed by emerging national and international research priorities. Amii’s strategy is grounded in advancing AI T&S to support innovation, responsible adoption, and the diffusion of AI technologies, turning research outputs into practical tools, frameworks, and resources with real-world impact.
The Applied Research Scientist will share their work through academic papers, whitepapers, conference presentations, and practical tools, positioning Amii as a leader in AI safety. Their research will advance the understanding of AI risks, drive the development of more trustworthy AI systems, and support the responsible adoption of AI technologies.
The position centers on three key areas:
Research priorities include advancing rigorous methodologies for risk assessment and system evaluations, studying the real-world behavior of complex AI systems, and developing new approaches for designing safer AI models. Particular emphasis will be placed on safety challenges in reinforcement learning (RL) and continual learning (CL), where systems adapt over time or learn from ongoing experience in dynamic environments.
Focus areas span topics such as: