Member of Technical Staff, GPU Optimization at Mirage
New York, New York, USA
Full Time


Start Date

Immediate

Expiry Date

04 Dec, 2025

Salary

$300,000

Posted On

04 Sep, 2025

Experience

3+ years

Remote Job

Yes

Sponsor Visa

No

Skills

Cluster Management, Kubernetes, Computer Science, Communication Skills, Azure, Optimization Techniques, AWS, Containerization

Industry

Information Technology/IT

Description

Mirage is redefining short-form video with frontier AI research.
We’re building full-stack foundation models and products that are changing the future of this format, and of video creation, production, and editing more broadly. Over 20 million creators and businesses use Mirage’s products to reach their full creative and commercial potential.
We are a rapidly growing team of ambitious, experienced, and dedicated engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you’ll have the opportunity to make an outsized impact on our products and our company’s culture.

REQUIRED QUALIFICATIONS

  • Bachelor’s degree in Computer Science, Electrical/Computer Engineering, or equivalent practical experience
  • 3+ years of hands-on experience writing and optimizing CUDA kernels for production ML workloads
  • Deep understanding of GPU architecture: memory hierarchies, warp scheduling, tensor cores, register pressure, and occupancy tuning
  • Strong Python skills and familiarity with PyTorch internals, TorchScript, and distributed data-parallel training
  • Proven track record of profiling and accelerating large-scale training and inference jobs (e.g., mixed precision, kernel fusion, custom collectives; a mixed-precision sketch follows this list)
  • Comfort working in Linux environments with modern CI/CD, containerization, and cluster managers such as Kubernetes
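
For illustration, here is a minimal sketch of mixed-precision training with PyTorch's automatic mixed precision (AMP) API, one of the optimization techniques mentioned above; the model, optimizer, and batch are hypothetical stand-ins, not Mirage code.

```python
import torch

# Hypothetical stand-ins for a real model, optimizer, and data pipeline.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters())
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so fp16 gradients don't underflow

for step in range(100):
    x = torch.randn(32, 1024, device="cuda")  # stand-in batch
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).square().mean()  # forward runs in fp16 where numerically safe
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients; skips the step on inf/NaN
    scaler.update()
```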

PREFERRED QUALIFICATIONS

  • Advanced degree (MS/PhD) in Computer Science, Electrical/Computer Engineering, or related field
  • Experience with multi-modal AI systems, particularly video generation or computer vision models
  • Familiarity with distributed training frameworks (DeepSpeed, FairScale, Megatron) and model parallelism techniques (a data-parallel baseline sketch follows this list)
  • Knowledge of compiler optimization techniques and experience with MLIR, XLA, or similar frameworks
  • Experience with cloud infrastructure (AWS, GCP, Azure) and GPU cluster management
  • Ability to translate research goals into performant code, balancing numerical fidelity with hardware constraints
  • Strong communication skills and experience mentoring junior engineers
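
As a rough illustration of the baseline these frameworks build on, here is a minimal data-parallel training sketch with torch.distributed; it assumes a launch via torchrun (which sets LOCAL_RANK), and the model and batch are hypothetical placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")        # NCCL backend for GPU collectives
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda()     # hypothetical placeholder model
model = DDP(model, device_ids=[local_rank])    # gradients are all-reduced across ranks

x = torch.randn(32, 1024, device="cuda")       # stand-in batch
model(x).square().mean().backward()            # gradient all-reduce overlaps with backward
dist.destroy_process_group()
```

Frameworks like DeepSpeed or Megatron extend this pattern with optimizer-state sharding, tensor parallelism, and pipeline parallelism.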

Responsibilities

ABOUT THE ROLE

You are an expert in making AI models run fast, really fast. You live at the intersection of CUDA, PyTorch, and generative models, and you get excited by squeezing every last bit of performance out of modern GPUs. You will have the opportunity to turn our cutting-edge video generation research into scalable, production-grade systems. From designing custom CUDA or Triton kernels (see the sketch below) to profiling distributed inference pipelines, you’ll work across the full stack to make sure our models train and serve at peak performance.
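
To give a flavor of that kernel work, here is a minimal Triton sketch of a fused elementwise operation (scale-plus-bias); the fusion chosen here is a hypothetical example, not a Mirage kernel.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def fused_scale_bias_kernel(x_ptr, out_ptr, scale, bias, n_elements,
                            BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                # guard the ragged final block
    x = tl.load(x_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x * scale + bias, mask=mask)  # one fused pass over memory

def fused_scale_bias(x: torch.Tensor, scale: float, bias: float) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)             # one program instance per 1024-element block
    fused_scale_bias_kernel[grid](x, out, scale, bias, n, BLOCK_SIZE=1024)
    return out
```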

KEY RESPONSIBILITIES

  • Optimize model training and inference pipelines, including data loading, preprocessing, checkpointing, and deployment, for throughput, latency, and memory efficiency on NVIDIA GPUs
  • Design, implement, and benchmark custom CUDA and Triton kernels for performance-critical operations
  • Integrate low-level optimizations into PyTorch-based codebases, including custom ops, low-precision formats, and TorchInductor passes
  • Profile and debug the entire stack, from kernel launches to multi-GPU I/O paths, using Nsight, nvprof, the PyTorch Profiler, and custom tools (see the profiler sketch after this list)
  • Work closely with colleagues to co-design model architectures and data pipelines that are hardware-friendly and maintain state-of-the-art quality
  • Stay on the cutting edge of GPU and compiler tech (e.g., Hopper features, CUDA Graphs, Triton, FlashAttention) and evaluate their impact
  • Collaborate with infrastructure and backend experts to improve cluster orchestration, scaling strategies, and observability for large experiments
  • Provide clear, data-driven insights and trade-offs between performance, quality, and cost
  • Contribute to a culture of fast iteration, thoughtful profiling, and performance-centric design
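
As a small example of the profiling workflow referenced above, here is a sketch using the PyTorch Profiler (Nsight would be attached externally); the model is a hypothetical stand-in.

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()     # hypothetical placeholder model
x = torch.randn(32, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    model(x).square().mean().backward()        # profile a forward/backward pass

# Rank ops by GPU time to spot kernel-fusion or launch-overhead candidates.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```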