Software Engineer, Safety

at OpenAI

San Francisco, California, USA

Start Date: Immediate
Expiry Date: 26 May, 2024
Salary: USD 370,000 Annual
Posted On: 01 Mar, 2024
Experience: 3 year(s) or above
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No
Required Visa Status:
US Citizen, Green Card (GC), Student Visa, H1B, CPT, OPT, H4 (Spouse of H1B)

Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp

Description:

ABOUT THE TEAM

The Applied AI team safely brings OpenAI’s advanced technology to the world. We released the GPT-3 API, Codex (which powers GitHub Copilot), and DALL-E. More is coming very soon.
We empower developers with APIs offering state-of-the-art AI capabilities, which power product features that were never before possible. We also build AI-driven consumer applications.
Across all product lines, we ensure that these powerful tools are used responsibly. This is a key part of OpenAI’s path towards safely deploying broadly beneficial Artificial General Intelligence (AGI). Safety is more important to us than unfettered growth.

OUR TECH STACK

  • Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills and the ability to quickly pick up new tools and frameworks.

Responsibilities:

ABOUT THE ROLE

At OpenAI, we’re dedicated to advancing artificial intelligence, and we know that creating a secure and reliable platform is vital to our mission. That’s why we’re seeking a software engineer to help us build out our trust and safety capabilities.
In this role, you’ll work with our entire engineering team to design and implement systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You’ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.

IN THIS ROLE, YOU WILL:

  • Architect, build, and maintain anti-abuse and content moderation infrastructure designed to protect us and end users from unwanted behavior.
  • Work closely with our other engineers and researchers to apply both industry-standard and novel AI techniques to combat abuse and toxic content (see the illustrative sketch after this list).
  • Assist in responding to active incidents on the platform, and build new tooling and infrastructure that address the fundamental problems.
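
A purely illustrative sketch of the kind of check such moderation infrastructure might wrap, using the moderation endpoint exposed through the openai Python package; the function name and overall flow here are assumptions for illustration, not the team's actual implementation:

  # Illustrative sketch only: screen user-submitted text with OpenAI's
  # public moderation endpoint (openai Python package, v1.x client).
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def screen_content(text: str) -> bool:
      """Return True if the moderation model flags the text."""
      response = client.moderations.create(input=text)
      result = response.results[0]
      # result.categories holds per-category booleans (e.g. hate, violence);
      # result.flagged is the overall decision.
      return result.flagged

  if __name__ == "__main__":
      print(screen_content("example user-submitted text"))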

YOU MIGHT THRIVE IN THIS ROLE IF YOU:

  • Have at least 3 years of professional software engineering experience.
  • Have experience setting up and maintaining production backend services and data pipelines.
  • Have a humble attitude, an eagerness to help your colleagues, and a desire to do whatever it takes to make the team succeed.
  • Are self-directed and enjoy figuring out the best way to solve a particular problem.
  • Own problems end-to-end, and are willing to pick up whatever knowledge you’re missing to get the job done.
  • Care about AI Safety in production environments and have the expertise to build software systems that defend against abuse.
  • Build tools to accelerate your own workflows, but only when off-the-shelf solutions would not do.


REQUIREMENT SUMMARY

Experience: 3.0 to 8.0 year(s)
Industry: Information Technology/IT
Category: IT Software - System Programming
Specialization: Software Engineering
Education: Graduate
Proficiency: Proficient
Openings: 1
Location: San Francisco, CA, USA