National Security Threat Researcher

at OpenAI

San Francisco, California, USA

Start Date: Immediate
Expiry Date: 25 May, 2024
Salary: USD 370,000 Annual
Posted On: 29 Feb, 2024
Experience: N/A
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, GC, US Citizen, Student Visa, H1B, CPT, OPT, H4 Spouse of H1B, GC Green Card
Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp

Description:

ABOUT THE TEAM

Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, we have dedicated a team to help us best prepare for the development of increasingly capable frontier AI models. This team, Preparedness, reports directly to our CTO and is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.

Specifically, the mission of the Preparedness team is to:

  • Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic (not necessarily existential) to our society; and
  • Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and, more broadly, to safely handle the development of powerful AI systems.

Our team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, and will coordinate overall AGI preparedness. The team’s core goal is to ensure that we have the infrastructure needed for the safety of highly capable AI systems, from the models we develop in the near future to those with AGI-level capabilities.

ABOUT YOU

We are looking to hire exceptional talent from diverse technical backgrounds (e.g., cybersecurity, CBRN-related expertise, national security/public safety) who can push the boundaries of our frontier models. Specifically, we are looking for people who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and who will own individual threads within this endeavor end-to-end.

In this role, you will:

  • Use your domain expertise to build our understanding of national-security-related AI safety risks
  • Design (and then continuously refine) evaluations of frontier AI models that assess the extent of these risks
  • Contribute to the refinement of risk management and the overall development of “best practice” guidelines for AI safety evaluations

We expect you to have:

  • Hands-on experience with national security threat prevention, preferably in cybersecurity
  • A deep interest in building an understanding of the underpinnings of AI safety
  • Familiarity with software engineering
  • Ability to think outside the box and a robust “red-teaming mindset”
  • Ability to operate effectively in a dynamic and extremely fast-paced research environment, and to scope and deliver projects end-to-end

It would be great if you also have:

  • Experience in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, or another technical domain applicable to AI risk
  • A good understanding of the nuances of the societal aspects of AI deployment
  • An ability to work cross-functionally
  • Excellent communication skills

REQUIREMENT SUMMARY

Experience: Min: N/A, Max: 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Other
Specialization: Other
Education: Graduate
Proficiency: Proficient
Vacancies: 1
Location: San Francisco, CA, USA