Member of Technical Staff, Large Generative Models at Mirage
New York, New York, USA - Full Time


Start Date

Immediate

Expiry Date

04 Dec, 25

Salary

300,000

Posted On

05 Sep, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Optimization, Empirical Research, Computer Science, Triton, Experimental Design, Rapid Prototyping, Software Engineering Practices, CUDA, Machine Learning, Testing, Analytical Skills, Code Review, Training Dynamics, Diffusion, Optimization Techniques

Industry

Information Technology/IT

Description

Mirage is redefining short-form video with frontier AI research.
We’re building full-stack foundation models and products that are shaping the future of this format, as well as video creation, production, and editing more broadly. Over 20 million creators and businesses use Mirage’s products to reach their full creative and commercial potential.
We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you’ll have the opportunity to make an outsized impact on our products and our company’s culture.

REQUIREMENTS:

Research Experience:

  • Master’s or PhD in Computer Science, Machine Learning, or related field
  • Track record of research contributions at top ML conferences (NeurIPS, ICML, ICLR)
  • Demonstrated experience implementing and improving upon state-of-the-art architectures
  • Deep expertise in generative modeling approaches (diffusion, autoregressive, VAEs, etc.)
  • Strong background in optimization techniques and loss function design
  • Experience with empirical scaling studies and systematic architecture research

Technical Expertise:

  • Strong proficiency in modern deep learning tooling (PyTorch, CUDA, Triton, FSDP, etc.)
  • Experience training diffusion models with 10B+ parameters
  • Experience with very large language models (200B+ parameters) is a plus
  • Deep understanding of attention, transformers, and modern multimodal architectures
  • Expertise in distributed training systems and model parallelism
  • Proven ability to implement and improve complex model architectures
  • Track record of systematic empirical research and rigorous evaluation

Engineering Capabilities:

  • Ability to write clean, modular research code that scales
  • Strong software engineering practices including testing and code review
  • Experience with rapid prototyping and experimental design
  • Strong analytical skills for debugging model behavior and training dynamics
  • Facility with profiling and optimization tools
  • Track record of bringing research ideas to production
  • Experience maintaining high code quality in a research environment

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities

ABOUT THE ROLE:

Mirage is seeking an exceptional Research Engineer (MOTS) to advance the state-of-the-art in large-scale multimodal video diffusion models. You’ll conduct novel research on generative modeling architectures, develop new training techniques, and scale models to billions of parameters. As a key member of our ML Research team, you’ll work at the cutting edge of multimodal generation while building systems that enable natural, controllable video creation. We’re already training large-scale models with demonstrated product impact, and we’re excited to continue expanding the scope and capabilities of our research.
We’re especially excited about pushing the boundaries of audio-video generation, with a focus on realistic and charismatic human behavior that enables natural storytelling and creative iteration. Our models power creative tools used by millions of creators, and we’re tackling fundamental challenges in how to generate compelling human motion, expression, and speech.

KEY RESPONSIBILITIES:

Research & Architecture Development:

  • Design and implement novel architectures for large-scale video and multimodal diffusion models
  • Develop new approaches to multimodal fusion, temporal modeling, and video control
  • Research temporal video editing techniques and controllable generation
  • Research and validate scaling laws for video generation models
  • Create new loss functions and training objectives for improved generation quality
  • Drive rapid experimentation with model architectures and training strategies
  • Validate research directly through product deployment and user feedback
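For candidates wanting a concrete reference point, the "loss functions and training objectives" mentioned above build on the standard diffusion noise-prediction objective. A minimal sketch follows; this is a generic DDPM-style loss on raw arrays, not Mirage's actual training code, and all function and parameter names are illustrative:

```python
import numpy as np

def ddpm_noise_prediction_loss(x0, eps_pred_fn, t, alphas_cumprod, rng):
    """Generic DDPM-style objective: corrupt clean data x0 into x_t with
    Gaussian noise, then penalize the model's noise estimate with MSE."""
    eps = rng.standard_normal(x0.shape)            # noise actually added
    a_bar = alphas_cumprod[t]                      # cumulative signal retention at step t
    x_t = np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps  # forward diffusion
    eps_hat = eps_pred_fn(x_t, t)                  # model's noise prediction
    return float(np.mean((eps_hat - eps) ** 2))    # simple MSE training loss
```

In a real video model, `eps_pred_fn` would be a transformer operating over space-time patches; here it is any callable, which is enough to show where novel objectives would plug in.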

Model Training & Optimization:

  • Train and optimize models at massive scale (10s-100s of billions of parameters)
  • Develop sophisticated distributed training approaches using FSDP, DeepSpeed, Megatron-LM
  • Design and implement model surgery techniques (pruning, distillation, quantization)
  • Create new approaches to memory optimization and training efficiency
  • Research techniques for improving training stability at scale
  • Conduct systematic empirical studies of architecture and optimization choices
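As one concrete instance of the "model surgery" techniques listed above, global magnitude pruning zeroes the smallest-magnitude fraction of a weight tensor. The sketch below operates on a raw array rather than a real model and is illustrative only:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Global magnitude pruning: zero out the smallest-|w| fraction of entries.
    Note: ties at the threshold are pruned together, so the achieved sparsity
    can slightly exceed the requested one."""
    k = int(round(sparsity * w.size))              # number of weights to drop
    if k == 0:
        return w.copy()
    flat = np.abs(w).ravel()
    thresh = np.partition(flat, k - 1)[k - 1]      # k-th smallest magnitude
    mask = np.abs(w) > thresh                      # keep strictly larger weights
    return w * mask
```

Distillation and quantization follow the same pattern of trading a small quality loss for large inference savings; pruning is simply the easiest to show in a few lines.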

Technical Innovation:

  • Advance state-of-the-art in video model architecture design and optimization
  • Develop new approaches to temporal modeling for video generation
  • Create novel solutions for multimodal learning and cross-modal alignment
  • Research and implement new optimization techniques for generative modeling and sampling
  • Design and validate new evaluation metrics for generation quality
  • Systematically analyze and improve model behavior across different regimes
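For context on the evaluation work above: the role calls for designing new metrics, but the classical baseline they would be compared against is per-frame PSNR. A minimal sketch, assuming inputs normalized to [0, max_val]:

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio between two frames (or clips); higher is better."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mse = np.mean((x - y) ** 2)
    if mse == 0.0:
        return float("inf")                        # identical inputs
    return 10.0 * np.log10(max_val ** 2 / mse)     # log-scale reconstruction quality
```

Pixel-level metrics like this correlate poorly with perceived quality of generated video, which is precisely why designing better evaluation metrics appears as a research responsibility.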