Senior AI Product Security Researcher at GitLab
San Francisco, California, USA -
Full Time


Start Date

Immediate

Expiry Date

05 Dec, 25

Salary

266400.0

Posted On

06 Sep, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Conference Presentations, Security Research, Oscp, Gitlab, Gpen, Platforms, Communication Protocols

Industry

Computer Software/Engineering

Description

GitLab is an open-core software company that develops the most comprehensive AI-powered DevSecOps Platform, used by more than 100,000 organizations. Our mission is to enable everyone to contribute to and co-create the software that powers our world. When everyone can contribute, consumers become contributors, significantly accelerating human progress. Our platform unites teams and organizations, breaking down barriers and redefining what’s possible in software development. Thanks to products like Duo Enterprise and Duo Agent Platform, customers get AI benefits at every stage of the SDLC.
The same principles built into our products are reflected in how our team works: we embrace AI as a core productivity multiplier, with all team members expected to incorporate AI into their daily workflows to drive efficiency, innovation, and impact. GitLab is where careers accelerate, innovation flourishes, and every voice is valued. Our high-performance culture is driven by our values and continuous knowledge exchange, enabling our team members to reach their full potential while collaborating with industry leaders to solve complex problems. Co-create the future with us as we build technology that transforms how the world develops software.

NICE TO HAVE QUALIFICATIONS:

  • Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures
  • Published security research or conference presentations on AI security topics
  • Background in software engineering with distributed systems expertise
  • Security certifications such as OSCP, OSCE, GPEN, or similar
  • Experience with GitLab or similar DevSecOps platforms
  • Knowledge of AI agent communication protocols and multi-agent architectures
Responsibilities

AN OVERVIEW OF THIS ROLE

We are seeking a Senior AI Product Security Researcher to join our Security Platforms & Architecture Team to conduct cutting-edge security research on GitLab’s AI-powered DevSecOps capabilities. As GitLab transforms software development through intelligent collaboration between developers and specialized AI agents, we need security researchers who can proactively identify and validate vulnerabilities before they impact our platform or customers.
In this role, you’ll be at the forefront of AI security research, working with GitLab Duo Agent Platform, GitLab Duo Chat, and AI workflows that represent the future of human/AI collaborative development. You’ll develop novel testing methodologies for AI agent security, conduct hands-on penetration testing of multi-agent orchestration systems, and translate emerging AI threats into actionable security improvements. Your research will directly influence how we build and secure the next generation of AI-powered DevSecOps tools, ensuring GitLab remains the most secure software factory platform on the market.
This position offers the unique opportunity to shape AI security practices in one of the world’s largest DevSecOps platforms, working with engineering teams who are pushing the boundaries of what’s possible with AI-assisted software development. You’ll have access to cutting-edge AI systems and the freedom to explore creative attack scenarios while contributing to the security of millions of developers worldwide.

WHAT YOU’LL DO

  • Identify and validate security vulnerabilities in GitLab’s AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios
  • Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques
  • Research emerging AI security threats and attack techniques to assess their potential impact on GitLab’s AI-powered platform
  • Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation
  • Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies
  • Collaborate with AI engineering teams to validate security fixes through iterative testing and verification
  • Contribute to the development of AI security testing frameworks and automated validation tools
  • Partner with Security Architecture to inform architectural improvements based on research findings
  • Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery
Loading...