Senior ML Engineer at Nebius Group
Amsterdam, Netherlands
Full Time


Start Date

Immediate

Expiry Date

21 Nov, 25

Salary

0.0

Posted On

23 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Articulation, Web Services, Communication Skills, Distributed Systems

Industry

Information Technology/IT

Description

WHY WORK AT NEBIUS

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

WHAT WE EXPECT FROM YOU:

  • Previous experience working with language models or other similar NLP technologies.

  • Familiarity with important ideas in the LLM space, such as MHA, RoPE, ZeRO/FSDP, Flash Attention, and quantization (a short RoPE sketch follows this list as an illustration).
  • A track record of building and delivering products (not necessarily ML-related) in a dynamic, startup-like environment.
  • Strong engineering skills, including experience in developing large distributed systems or high-load web services.
  • Open-source projects that showcase your engineering prowess.
  • Excellent command of the English language, alongside superior writing, articulation, and communication skills.
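
For flavor, here is a minimal, hypothetical sketch of one of the ideas named above, rotary position embeddings (RoPE), written with jax.numpy. This is not Nebius code; the function name and default base are illustrative assumptions.

    import jax
    import jax.numpy as jnp

    def rope(x, base=10000.0):
        # x: (seq_len, num_heads, head_dim); head_dim must be even.
        seq_len, _, head_dim = x.shape
        half = head_dim // 2
        # Per-pair rotation frequencies and per-position angles.
        inv_freq = 1.0 / (base ** (jnp.arange(half) / half))
        angles = jnp.arange(seq_len)[:, None] * inv_freq[None, :]   # (seq, half)
        cos = jnp.cos(angles)[:, None, :]   # broadcast over heads
        sin = jnp.sin(angles)[:, None, :]
        x1, x2 = x[..., :half], x[..., half:]
        # Rotate each (x1, x2) pair by its position-dependent angle.
        return jnp.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

    q = jax.random.normal(jax.random.PRNGKey(0), (128, 8, 64))
    print(rope(q).shape)  # (128, 8, 64)
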
Responsibilities

THE ROLE

AI Studio is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference & fine-tuning platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to train & deploy at massive scale.
This role requires expertise in distributed LLM training and inference.

YOUR RESPONSIBILITIES WILL INCLUDE:

  • Enhancing fine-tuning methodologies, both LoRA-based and full-parameter, for cutting-edge LLMs (e.g., GPT-OSS, Kimi K2, DeepSeek V3/R1, GLM-4.5), focusing on both model quality and training efficiency (see the LoRA sketch after this list)
  • Researching and implementing advanced inference optimization techniques, such as speculative decoding, quantization, and large-scale draft model training
  • Re-implementing state-of-the-art open-source LLM architectures in JAX
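
As an illustration of the LoRA-based path mentioned above, here is a minimal, hypothetical JAX sketch (not Nebius code) of a frozen linear layer with a trainable low-rank update; all names, shapes, and the scaling choice are assumptions.

    import jax
    import jax.numpy as jnp

    def lora_linear(params, x, scaling=1.0):
        # Frozen base projection plus a trainable low-rank correction (x @ A) @ B.
        base = x @ params["W"]                    # (batch, d_out); W stays frozen
        delta = (x @ params["A"]) @ params["B"]   # rank-r path: (batch, r) -> (batch, d_out)
        return base + scaling * delta

    d_in, d_out, rank = 512, 512, 8
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    params = {
        "W": jax.random.normal(k1, (d_in, d_out)) * 0.02,  # frozen pretrained weight
        "A": jax.random.normal(k2, (d_in, rank)) * 0.02,   # trainable down-projection
        "B": jnp.zeros((rank, d_out)),                     # trainable up-projection, zero-init
    }
    x = jnp.ones((4, d_in))
    print(lora_linear(params, x).shape)  # (4, 512)

With B initialized to zero, the adapted layer starts out identical to the frozen base layer, and only A and B receive gradients during fine-tuning, keeping the trainable parameter count small relative to the base model.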