Workstream Lead, Human Influence (Level 5 Low) at Department for Science, Innovation and Technology
London, England, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

19 Sep, 2025

Salary

£58,040

Posted On

19 Jun, 2025

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Python, Training, Engineers

Industry

Civil Engineering

Description

JOB SUMMARY

The AI Security Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world’s first state-backed organisation dedicated to advancing AI security for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we’ve assembled one of the largest and most respected research teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we’re building the premier institution for impact on both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you’ll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you’re a researcher, engineer, or policy expert, at AISI you’re not just advancing your career – you’re positioned to have significant impact in the age of artificial intelligence.

JOB DESCRIPTION

We are looking for an experienced Workstream Lead who specialises in AI/ML.
You will be part of a cross-cutting team within the Human Influence workstream at the AI Security Institute. Our role is to advance the security science of advanced AI models, including LLMs and more specialised AI models used for human influence, and to inform the wider policy environment.
You will join a team researching specialised models within human influence, focused on assessing and mitigating societal-level harms caused by advanced AI systems, particularly criminal activity such as radicalisation, social engineering, and fraud. This is a technical role, ideally suited to someone with a strong machine learning background and experience in computational scientific research.
The workstream will be situated within AISI’s Research Unit, and you will report to Chris Summerfield, our Societal Impacts Research Director. This post requires Security Clearance (SC) and any continued employment will be conditional on earning and maintaining this level of clearance.

ESSENTIAL SKILLS

  • Background in machine learning, having worked directly on training, tuning, or evaluating machine learning models using PyTorch or similar.
  • Experience working on biological (frontier) AI models, such as protein or genomic language models, structure prediction (AlphaFold) or protein design models (RFDiffusion).
  • Proficient at coding in Python.
  • Strong track record of leading multidisciplinary teams to deliver exceptional scientific breakthroughs or high-quality products; we are looking for clear evidence of an ability to lead exceptional teams.
  • Strong experience mentoring more junior team members.

DESIRABLE SKILLS

  • Strong background in human influence.
  • Good scientific research experience, and a motivation to follow research best practices to solve open questions at the intersection of AI and human influence.
  • Experience writing production-level code that is scalable, robust and easy to maintain, ideally in Python.
  • Experience working in small cross-functional teams, including both scientists and engineers.
  • Experience in communicating technical work to a mixture of technical and non-technical audiences.

NATIONALITY REQUIREMENTS

Open to UK nationals only.

Responsibilities

ROLE SUMMARY

As the lead of a new workstream, you will build a team to evaluate and mitigate some of the pressing human-influence risks that frontier AI systems may exacerbate. You will need to:

  • Build and lead a talent-dense, multidisciplinary, and mission-driven team;
  • Develop and deliver a strategy for building a cutting-edge crime and social destabilisation research agenda;
  • Develop cutting-edge evaluations for these threat models that can reliably assess the capabilities of frontier AI systems;
  • Deliver additional impactful research by overseeing a diverse portfolio of research projects, potentially including externally delivered research;
  • Ensure that research outcomes are disseminated to relevant stakeholders within government and the wider community;
  • Forge relationships with key partners in industry, academia, and across Government, including the national security community;
  • Act as part of AISI’s overall leadership team, setting the culture and supporting staff.

The position offers a unique opportunity to push forward an emerging field while being part of an organisation that is a fast-growing presence in AI research and governance.