Intermediate Machine Learning Engineer, AI Framework at GitLab
Remote, British Columbia, Canada
Full Time


Start Date

Immediate

Expiry Date

04 Sep, 25

Salary

0.0

Posted On

05 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ML, Open Source Development, Evaluation Tools

Industry

Information Technology/IT

Description

GitLab is an open core software company that develops the most comprehensive AI-powered DevSecOps Platform, used by more than 100,000 organizations. Our mission is to enable everyone to contribute to and co-create the software that powers our world. When everyone can contribute, consumers become contributors, significantly accelerating the rate of human progress. This mission is integral to our culture, influencing how we hire, build products, and lead our industry. We make this possible at GitLab by running our operations on our product and staying aligned with our values. Learn more about Life at GitLab.
Thanks to products like Duo Enterprise and Duo Workflow, customers get the benefit of AI at every stage of the SDLC. The same principles built into our products are reflected in how our team works: we embrace AI as a core productivity multiplier. All team members are encouraged and expected to incorporate AI into their daily workflows to drive efficiency, innovation, and impact across our global organisation.

Responsibilities

AN OVERVIEW OF THIS ROLE

Are you passionate about building robust frameworks to evaluate and ensure the reliability of AI models? As a Machine Learning Engineer on GitLab’s AI framework team, you’ll play a critical role in shaping the future of AI-powered features at GitLab. This is an exciting opportunity to work on impactful projects that directly influence the quality of GitLab’s AI capabilities.
You’ll help consolidate cutting-edge evaluation tools, optimize dataset management, and scale our validation infrastructure. Working closely with other AI feature teams, you’ll ensure that every AI feature we deliver is robust, reliable, and meets the highest quality standards.
Some challenges in this role include designing scalable solutions for LLM evaluation, consolidating disparate validation tools, and contributing to GitLab’s innovative AI roadmap.
Some examples of our projects:

  • Consolidating Evaluation Tooling | The GitLab Handbook
  • GitLab.org / AI Powered / ELI5
  • GitLab.org / ModelOps / AI Model Validation and Research / AI Evaluation / Prompt Library

WHAT YOU’LL DO

  • Design and implement technical evaluators for LLM assessment.
  • Contribute to evaluation infrastructure consolidation efforts.
  • Build scalable evaluation pipelines and frameworks.
  • Develop and manage datasets and evaluation metrics.
  • Collaborate with feature teams to integrate validation solutions.
  • Optimize performance across ML evaluation systems.
  • Support improvements to GitLab’s AI-powered tools through validation.
  • Ensure all solutions align with GitLab’s infrastructure and security protocols.
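To give a flavour of what a "technical evaluator" for LLM assessment can look like, here is a minimal, hypothetical sketch: deterministic checks (exact match, keyword coverage) applied to a model's output. All names and structures below are illustrative assumptions, not GitLab's actual evaluation framework.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    """Illustrative result record for a single evaluator run."""
    name: str
    score: float  # 1.0 = pass, 0.0 = fail
    detail: str = ""

def exact_match(expected: str, actual: str) -> EvalResult:
    """Score 1.0 only when the output matches the reference exactly."""
    ok = expected.strip() == actual.strip()
    return EvalResult("exact_match", 1.0 if ok else 0.0)

def contains_all(keywords: list[str]) -> Callable[[str], EvalResult]:
    """Build an evaluator that checks required keywords appear in the output."""
    def check(actual: str) -> EvalResult:
        missing = [k for k in keywords if k not in actual]
        score = 1.0 - len(missing) / len(keywords)
        return EvalResult("contains_all", score, f"missing: {missing}")
    return check

# Example: scoring one model response against two evaluators
response = "To create a merge request, push a branch and open an MR."
results = [
    exact_match("To create a merge request, push a branch and open an MR.", response),
    contains_all(["merge request", "branch"])(response),
]
```

Deterministic evaluators like these are cheap to run at scale in a pipeline; in practice they would sit alongside model-graded (LLM-as-judge) evaluators for qualities that rules can't capture.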