Research Scientist, Agentic Safety at DeepMind
Mountain View, California, United States
Full Time


Start Date

Immediate

Expiry Date

30 Jan 2026

Salary

Not specified

Posted On

01 Nov 2025

Experience

5+ years

Remote Job

Yes

Sponsor Visa

No

Skills

Machine Learning, Python, AI, Safety, Research, Programming, Problem Solving, Agentic Technologies, Prototyping, Collaboration, Innovative Technologies, Deployment, Scalable Systems, Formal Methods, GenAI Language Models, Libraries, Frameworks

Industry

Research Services

Description
Snapshot

Accelerate research in strategic projects that enable trustworthy, robust and reliable agentic systems with a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems.

About Us

We're a dedicated scientific community, committed to "solving intelligence" and ensuring our technology is used for widespread public benefit. We've built a supportive and inclusive environment where collaboration is encouraged and learning is shared freely. We don't set limits based on what others think is possible or impossible. We drive ourselves and inspire each other to push boundaries and achieve ambitious goals.

The Role

As a Research Scientist in Strategic Initiatives, you will use your machine learning expertise to collaborate with other machine learning scientists and engineers within our strategic initiatives programs. Your primary focus will be on building technologies that make AI agents safer.

AI agents are increasingly used in sensitive contexts with powerful capabilities: they can access personal data, confidential enterprise data and code, interact with third-party applications or websites, and write and execute code in order to fulfil user tasks. Ensuring that such agents are reliable, secure and trustworthy is a major scientific and engineering challenge with huge potential impact. In this role, you will serve this mission by proposing and evaluating novel approaches to agentic safety, and by building prototype implementations and production-grade systems to validate and ship your ideas, in collaboration with a team of researchers and engineers from SSI and the rest of Google and GDM.

Key responsibilities:

- Invent and implement novel recipes for making agents safer, both by improving the models that power the agents and by improving the systems built around them
- Develop strategies to hill-climb leaderboards and debug possible performance and safety issues in frontier agents
- Integrate novel agentic technologies into research- and production-grade prototypes
- Work with product teams to gather research requirements and consult on the deployment of research-based solutions to help deliver value incrementally
- Amplify impact by generalizing solutions into reusable libraries and frameworks for safer AI agents across Google, and by sharing knowledge through design docs, open source, or external blog posts

About You

To set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

- PhD in computer science, security or a related field, or equivalent practical experience
- Passion for accelerating the development of safe agents using innovative technologies, demonstrated via a portfolio of prior projects (GitHub repos, papers, blog posts)
- Strong programming experience, including a demonstrated record of Python implementations of LLM pipelines
- Strong AI and machine learning background

In addition, the following would be an advantage:

- Experience applying machine learning techniques to problems surrounding scalable, robust and trustworthy deployments of models
- Experience with GenAI language models, programming languages, compilers, formal methods, and/or private storage solutions
- Demonstrated success in creative problem solving for scalable teams and systems
- A real passion for AI!
Responsibilities
As a Research Scientist, you will focus on building technologies to enhance the safety of AI agents. You will propose and evaluate novel approaches to agentic safety and collaborate with a team to validate and implement your ideas.