Start Date
Immediate
Expiry Date
24 Sep, 25
Salary
58,040
Posted On
25 Jun, 25
Experience
5 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Communication Skills, Code, Mentoring, Shipping, Preparedness, Technical Standards, Government
Industry
Other Industry
JOB SUMMARY
The AI Security Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world’s first state-backed organisation dedicated to advancing AI security for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber-attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we’ve assembled one of the largest and most respected research teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we’re building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you’ll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you’re a researcher, engineer, or policy expert, at AISI, you’re not just advancing your career – you’re positioned to have significant impact in the age of artificial intelligence.
JOB DESCRIPTION
The AI Security Institute's research unit is looking for an exceptionally motivated Work-stream Lead to join its Safeguard Analysis Team.
Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Security Institute’s Safeguard Analysis Team researches such interventions, which it refers to as ‘safeguards’, evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.
The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but it's also keen to hire security researchers who have historically worked outside of AI, in areas such as (non-exhaustively) computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.
The Team seeks a lead whose skill-set leans toward Research Scientist, Research Engineer, or both. The Team's priorities include research-oriented responsibilities, like assessing threats to frontier systems and developing novel attacks, and engineering-oriented ones, such as building infrastructure for running evaluations.
The work-stream will sit within AISI's Research Unit, and you will report to our Safeguards Research Director.
ESSENTIAL SKILLS
DESIRABLE SKILLS
NATIONALITY REQUIREMENTS
Open to UK nationals only.
ROLE SUMMARY
As work-stream lead of a new team, you will build a team to evaluate the safety components of AI systems and support improvements to them. You will need to: