Model Policy Manager
at OpenAI
San Francisco, California, USA
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa
---|---|---|---|---|---|---|---
Immediate | 07 Feb, 2025 | USD 310,000 Annual | 10 Nov, 2024 | N/A | Good communication skills | No | No
Description:
ABOUT THE TEAM
The Safety Systems team is at the forefront of OpenAI’s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Model Policy team aligns model behavior with desired human values and norms. We co-design policy with models and for models. Key focus areas include: addressing critical societal challenges like misinformation risks and how the model should respond in health contexts; defining evaluation criteria for foundational models’ ability to reason about safety, values, and questions of cultural norms; and driving rapid policy taxonomy iteration based on data.
Responsibilities:
ABOUT THE ROLE
Providing access to powerful AI models introduces a host of challenging questions about model safety: How do we define safe model behavior, and to what end? How do we do so in a way that is actionable, objective, and replicable?
This is a senior role in which you’ll help shape policy creation and development at OpenAI and make an impact by helping ensure that our groundbreaking technologies are truly used to benefit all people. The ideal candidate can identify and develop cohesive and thoughtful policies with a sense of urgency. They can balance internal and external input in making complex decisions, carefully think through trade-offs, and write principled, enforceable policies based on our values. Importantly, this role is embedded in our research teams and will directly inform model training.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
IN THIS ROLE, YOU’LL:
- Design model policies that govern safe model behavior in an objective and defensible way (e.g., how should the model respond in risky or unsafe scenarios? What does "unsafe" mean?).
- Develop taxonomies that inform data collection campaigns, as well as zero-shot classifier development. How do you construct a taxonomy that laypeople, AI researchers, and AI models can understand and immediately use?
- Lead prioritization for safety efforts across the company for new model launches, understanding and addressing technical and business trade-offs.
- Develop a broad range of subject matter expertise while maintaining agility across topics.
- Experiment with and design policies using our latest advances in AI research.
YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Have extensive experience in AI, tech policy, product counsel/tech law, linguistics, or moral reasoning, and/or enjoy classification problems.
- Have extensive experience defining, refining and enforcing policies for ML models.
- Deeply understand the operational challenges of enforcing policies with RLHF and can incorporate this into policy design.
- Can analyze the benefits and risks of open-ended problem spaces; can generate ideas required to solve ambiguous problems and take full ownership of the solution.
- Are deeply familiar with policy, safety, and responsibility questions specific to AI.
REQUIREMENT SUMMARY
Experience: Min: N/A | Max: 5.0 year(s)
Industry: Information Technology/IT
Functional Area: IT Software - Other
Category: IT
Education: Graduate
Proficiency: Proficient
Openings: 1
Location: San Francisco, CA, USA