ML Engineer - Scaling at Helical
London, England, United Kingdom - Full Time


Start Date

Immediate

Expiry Date

08 Apr, 26

Salary

0.0

Posted On

08 Jan, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Python, PyTorch, JAX, TensorFlow, MLOps, Transformers, Diffusion Models, SSMs, Model Performance, Scalable Systems, Experiment Tracking, Bio Foundation Models, Model Training, Model Inference, System Design

Industry

Biotechnology Research

Description
Helical is building the in-silico labs for biology. Drug discovery still relies on wet labs: slow, expensive, and constrained by physical trial-and-error. Helical is changing that. We build the application layer that makes Bio Foundation Models usable in real-world drug discovery, enabling pharma and biotech teams to run millions of virtual experiments in days, not years. Today, leading global pharma companies already use Helical, and we're at the start of a highly ambitious growth journey.

We're a founder-led, talent-dense team building a category-defining company from Europe. We care deeply about the quality of our work, move fast, and expect ownership. If you're excited by complexity, real responsibility, and shaping how a company actually operates as it scales, you'll feel at home here.

Our GitHub: https://github.com/helicalAI/helical/
Our Website: https://www.helical-ai.com/

Your Role

As a Machine Learning Engineer - Scaling at Helical, you'll build, optimize, and scale real-world applications of bio foundation models. You'll work closely with researchers and product engineers to productionize model training, inference, and deployment workflows. You'll also help push the limits of foundation models by prototyping new methods, contributing to our core ML infrastructure, and translating research into fast, iterative code. This is a deeply technical role with high ownership, ideal for engineers who want to operate at the bleeding edge of AI infrastructure, model development, and system design.

What You'll Do

- Build and maintain scalable training/inference pipelines for foundation models (e.g. Transformers, SSMs).
- Optimize model performance, latency, and throughput across environments.
- Design modular, reusable ML components for internal and open-source use.
- Collaborate with researchers to scale notebooks into production-grade systems.
- Own ML infrastructure components (data loading, distributed compute, experiment tracking, etc.).

Essentials

- MSc or PhD in Machine Learning, Computer Science, Applied Math, or similar.
- Strong Python programming skills, with deep knowledge of PyTorch, JAX, or TensorFlow.
- Hands-on experience building and scaling ML pipelines in real-world settings.
- Comfort with MLOps tools and practices (e.g. Weights & Biases, Ray, Docker).
- Experience with modern ML architectures: Transformers, Diffusion Models, SSMs, etc.
- High agency, fast iteration speed, and comfort with ambiguity in early-stage environments.

Bonus Points

- Contributions to open-source ML libraries or tooling.
- Experience with distributed training, model compression, or serving at scale.
- Experience scaling AI systems for large post-training runs.
- Knowledge of how to integrate ML systems into user-facing applications or APIs.
- Interest in the biology/pharma space (not required, but you'll pick it up fast here!).
Responsibilities
As a Machine Learning Engineer - Scaling, you will build, optimize, and scale applications of bio foundation models. You will collaborate with researchers and product engineers to productionize model training and deployment workflows.