Machine Learning Engineer (Diffusion/Vision) at Bjak
Germany -
Full Time


Start Date

Immediate

Expiry Date

27 Nov, 25

Salary

0.0

Posted On

28 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Flux

Industry

Information Technology/IT

Description

TRANSFORM VISUAL MODELS INTO REAL-WORLD APPLICATIONS

We’re building AI systems for a global audience. In this era of AI transition, this new project team will focus on building applications that deliver real-world impact and reach the widest possible usage worldwide.
This is a global role with a hybrid work arrangement, combining flexible remote work with in-office collaboration at our HQ. You’ll work closely with regional teams across product, engineering, operations, infrastructure, and data to build and scale impactful AI solutions.

REQUIREMENTS

  • Strong experience with diffusion models and generative vision (Stable Diffusion, SDXL, Flux, etc.).
  • Hands-on skills with DreamBooth, LoRA/QLoRA, and other fine-tuning methods (see the sketch after this list).
  • Proficiency with PyTorch (preferred).
  • Experience in dataset preparation (captioning, tagging, filtering, augmentation).
  • Knowledge of GPU optimization, latent diffusion, and efficient training techniques.
  • Strong foundations in software engineering, algorithms, and clean code practices.
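To illustrate the kind of parameter-efficient fine-tuning named above, here is a minimal sketch of attaching LoRA adapters to a Stable Diffusion UNet with Hugging Face diffusers and peft. The base model id, rank, and target modules are assumptions for illustration only, not a prescribed setup for this role.

```python
# Minimal, illustrative LoRA fine-tuning sketch for a Stable Diffusion UNet.
# Assumes the Hugging Face diffusers + peft libraries; the model id, rank,
# and target modules below are hypothetical choices.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig

model_id = "runwayml/stable-diffusion-v1-5"  # hypothetical base model
pipe = StableDiffusionPipeline.from_pretrained(model_id)
unet = pipe.unet
unet.requires_grad_(False)  # freeze base weights; only the LoRA adapters train

# Low-rank adapters on the attention projections (DreamBooth/LoRA style).
unet.add_adapter(LoraConfig(
    r=8, lora_alpha=8, init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
))
trainable = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# One illustrative training step on dummy tensors: add noise to image latents
# and train the UNet to predict that noise (the standard latent-diffusion loss).
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")
latents = torch.randn(1, 4, 64, 64)   # stand-in for VAE-encoded training images
text_emb = torch.randn(1, 77, 768)    # stand-in for CLIP text-encoder outputs
noise = torch.randn_like(latents)
t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
noisy = noise_scheduler.add_noise(latents, noise, t)
pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
loss = F.mse_loss(pred, noise)
loss.backward()
optimizer.step()
```

Because only the low-rank adapter weights receive gradients, this style of fine-tuning fits on a single GPU for many customization tasks, which is why the posting pairs it with GPU optimization and efficient training techniques.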
Responsibilities

WHY THIS ROLE MATTERS

You’ll fine-tune state-of-the-art models, design evaluation frameworks, and bring AI features into production. Your work ensures our models are not only intelligent, but also safe, trustworthy, and impactful at scale.

WHAT YOU’LL DO

  • Fine-tune & Adapt – Train and customize diffusion models (SDXL, Flux, Stable Diffusion variants) using LoRA, DreamBooth, and other parameter-efficient methods.
  • Curate Datasets – Build, clean, and annotate large-scale image datasets with captioning, tagging, and NSFW filtering for safe and aligned generation.
  • Evaluate & Align – Develop pipelines to measure fidelity, diversity, style adherence, and safety across generated outputs (see the sketch after this list).
  • Optimize Performance – Apply GPU memory optimization, latent diffusion tricks, and distributed training for efficient scaling.
  • Deploy & Monitor – Ship diffusion-powered features into production with monitoring for drift, latency, and quality.
  • Collaborate & Deliver – Work with product and design to integrate generative vision capabilities into user experiences.
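As one possible building block for the evaluation work described above, here is a minimal sketch that scores prompt adherence of generated images with a CLIP image-text similarity score. It is a proxy metric under assumed tooling (transformers, the openai/clip-vit-base-patch32 checkpoint), not the team's actual evaluation framework.

```python
# Minimal, illustrative prompt-adherence scoring with CLIP.
# Assumes the Hugging Face transformers library; checkpoint name is a
# hypothetical choice for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

@torch.no_grad()
def clip_adherence(prompts: list[str], images: list[Image.Image]) -> torch.Tensor:
    """Return a per-sample image-text similarity score (higher = closer match)."""
    inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
    out = model(**inputs)
    # logits_per_image holds the scaled cosine similarity between every image
    # and every prompt; the diagonal pairs each image with its own prompt.
    return out.logits_per_image.diagonal()

# Usage with placeholder images (in practice, these come from the diffusion model).
prompts = ["a watercolor painting of a lighthouse", "a photo of a red bicycle"]
images = [Image.new("RGB", (512, 512), color) for color in ("white", "gray")]
print(clip_adherence(prompts, images))
```

In a fuller pipeline, a score like this would sit alongside diversity, style-adherence, and safety checks, tracked over time so drift in production outputs is caught early.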