Start Date: Immediate
Expiry Date: 19 Sep, 25
Salary: 58,040
Posted On: 19 Jun, 25
Experience: 5 year(s) or above
Remote Job: Yes
Telecommute: Yes
Sponsor Visa: No
Skills: Machine Learning, Python, Training, Engineers
Industry: Civil Engineering
JOB SUMMARY
The AI Security Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world’s first state-backed organisation dedicated to advancing AI security for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we’ve assembled one of the largest and most respected research teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we’re building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you’ll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you’re a researcher, engineer, or policy expert, at AISI, you’re not just advancing your career – you’re positioned to have significant impact in the age of artificial intelligence.
JOB DESCRIPTION
We are looking for an experienced Workstream Lead who specialises in AI/ML.
You will be part of a cross-cutting team within the Human Influence workstream at the AI Security Institute. Our role is to advance the security science of advanced AI models, including LLMs and more specialised AI models used for human influence, and to inform the wider policy environment.
You will join a team researching specialised models within human influence, focused on assessing and mitigating societal-level harms caused by advanced AI systems, particularly criminal activity such as radicalisation, social engineering, and fraud. This is a technical role ideally suited to someone with a strong machine learning background and experience in computational scientific research.
The workstream will be situated within AISI’s Research Unit, and you will report to Chris Summerfield, our Societal Impacts Research Director. This post requires Security Clearance (SC), and any continued employment will be conditional on obtaining and maintaining this level of clearance.
ESSENTIAL SKILLS
DESIRABLE SKILLS
NATIONALITY REQUIREMENTS
Open to UK nationals only.
ROLE SUMMARY
As the lead of this new workstream, you will build a team to evaluate and mitigate some of the pressing human influence risks that frontier AI systems may exacerbate. You will need to: