Workstream Lead, Safeguards - Level 5 (Low) at the Department for Science, Innovation and Technology
London, England, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

24 Sep 2025

Salary

£58,040

Posted On

25 Jun 2025

Experience

5 years or more

Remote Job

Yes

Telecommute

Yes

Visa Sponsorship

No

Skills

Communication Skills, Code, Mentoring, Shipping, Preparedness, Technical Standards, Government

Industry

Other Industry

Description

JOB SUMMARY

The AI Security Institute (AISI), launched at the 2023 Bletchley Park AI Safety Summit, is the world’s first state-backed organisation dedicated to advancing AI security for the public interest. Our mission is to assess and mitigate risks from frontier AI systems, including cyber-attacks on critical infrastructure, AI-enhanced chemical and biological threats, large-scale societal disruptions, and potential loss of control over increasingly powerful AI. In just one year, we’ve assembled one of the largest and most respected research teams, featuring renowned scientists and senior researchers from leading AI labs such as Anthropic, DeepMind, and OpenAI.
At AISI, we’re building the premier institution for impacting both technical AI safety and AI governance. We conduct cutting-edge research, develop novel evaluation tools, and provide crucial insights to governments, companies, and international partners. By joining us, you’ll collaborate with the brightest minds in the field, directly shape global AI policies, and tackle complex challenges at the forefront of technology and ethics. Whether you’re a researcher, engineer, or policy expert, at AISI, you’re not just advancing your career – you’re positioned to have significant impact in the age of artificial intelligence.

JOB DESCRIPTION

The AI Security Institute's Research Unit is looking for an exceptionally motivated Workstream Lead to join its Safeguard Analysis Team.
Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Security Institute’s Safeguard Analysis Team researches such interventions, which it refers to as ‘safeguards’, evaluating protections used to secure current frontier AI systems and considering what measures could and should be used to secure such systems in the future.
The Safeguard Analysis Team takes a broad view of security threats and interventions. It is keen to hire researchers with expertise in developing and analysing attacks on, and protections for, systems based on large language models, but is also keen to hire security researchers who have historically worked outside of AI, in fields such as (non-exhaustively) computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.
The Team seeks a lead whose skill set leans towards Research Scientist, Research Engineer, or both. The Team's priorities include research-oriented responsibilities, such as assessing threats to frontier systems and developing novel attacks, and engineering-oriented ones, such as building infrastructure for running evaluations.
The workstream will sit within AISI's Research Unit, and you will report to our Safeguards Research Director.

ESSENTIAL SKILLS

  • Comprehensive understanding of large language models (e.g., GPT-4), including both a broad understanding of the literature and hands-on experience with writing code, evaluating frontier models, and pre-training or fine-tuning LLMs.
  • Extensive Python experience, including an understanding of the intricacies of the language, good versus bad Pythonic ways of doing things (see the short sketch after this list), and much of the wider ecosystem and tooling.
  • Strong track record of leading multidisciplinary teams to deliver multiple exceptional scientific breakthroughs or high-quality products. We’re looking for evidence of an ability to lead exceptional teams.
  • Strong experience mentoring more junior team members.
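
As a rough illustration of the kind of idiomatic fluency the Python bullet refers to, the sketch below contrasts an unidiomatic and a Pythonic way of doing the same thing; the task and function names are hypothetical and chosen only for illustration.

    # Hypothetical task: sum the lengths of the non-empty strings in a list.

    # Unidiomatic: manual index bookkeeping and element-by-element accumulation.
    def total_length_unpythonic(items):
        total = 0
        for i in range(len(items)):
            if items[i] != "":
                total = total + len(items[i])
        return total

    # Pythonic: iterate directly over the items and use a generator expression.
    def total_length_pythonic(items):
        return sum(len(s) for s in items if s)

    assert total_length_pythonic(["ai", "", "safety"]) == total_length_unpythonic(["ai", "", "safety"]) == 8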

DESIRABLE SKILLS

  • Red-teaming experience against any sort of system.
  • Strong written and verbal communication skills.
  • Ability to work in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
  • A willingness to bring your own voice and experience, an eagerness to support your colleagues, and a readiness to do whatever is necessary for the team's success, including finding new ways of getting things done within government.
  • A sense of mission, urgency, and responsibility for success, with demonstrated problem-solving ability and a preparedness to acquire any missing knowledge needed to get the job done.
  • Writing production-quality code.
  • Improving technical standards across a team through mentoring and feedback.
  • Designing, shipping, and maintaining complex tech products.

NATIONALITY REQUIREMENTS

Open to UK nationals only.

Responsibilities

ROLE SUMMARY

As workstream lead of a new team, you will build a team to evaluate the safety components of AI systems and support their improvement. You will need to:

  • Build and lead a talent-dense, multidisciplinary, and mission-driven team;
  • Develop and deliver a strategy for building a cutting-edge crime and social destabilisation research agenda;
  • Develop cutting-edge evaluations of these threat models that can reliably assess the capabilities of frontier AI systems;
  • Deliver additional impactful research by overseeing a diverse portfolio of research projects, potentially including externally delivered research;
  • Ensure that research outcomes are disseminated to relevant stakeholders within government and the wider community;
  • Forge relationships with key partners in industry, academia, and across Government, including the national security community;
  • Act as part of AISI’s overall leadership team, setting the culture and supporting staff.

The position offers a unique opportunity to push forward an emerging field whilst being part of a fast-growing organisation in AI research and governance.