Sr/Staff ML Engineer, Robotics at Diligent Robotics
Austin, Texas, USA
Full Time


Start Date

Immediate

Expiry Date

04 Dec, 25

Salary

0.0

Posted On

05 Sep, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Slam, Computer Science, Computer Vision, Radar, Lidar, Python, Segmentation, Machine Learning, Path Planning, Production Deployment, C++, Robotics

Industry

Information Technology/IT

Description

WHAT WE’RE DOING ISN’T EASY, BUT NOTHING WORTH DOING EVER IS.

We envision a future powered by robots that work seamlessly with human teams. We build artificial intelligence that enables service robots to collaborate with people and adapt to dynamic human environments. Join our mission-driven, venture-backed team as we build out current and future generations of humanoid robots.
As a Sr/Staff ML Engineer, Perception / Robotics, you will develop, deploy, and optimize machine learning models that enable robots to understand and navigate complex human environments. You will lead the design of ML systems, from sensor fusion to real-time inference, ensuring robustness in safety-critical, real-world deployments.

SKILLS AND EXPERIENCE

  • Master’s or PhD in Computer Science, Robotics, Machine Learning, or related field.
  • 5+ years of experience in applied machine learning, computer vision, or robotics perception.
  • Strong background in deep learning frameworks (PyTorch, TensorFlow, JAX).
  • Hands-on experience with real-time perception/navigation tasks (detection, tracking, segmentation, path planning).
  • Expertise in one or more sensor modalities: RGB/depth cameras, LIDAR, radar, or multimodal fusion.
  • Experience deploying ML models on edge/embedded hardware (e.g., Jetson, TPU, ARM-based platforms).
  • Familiarity with SLAM, mapping, and navigation pipelines.
  • Solid software engineering skills in Python and C++ for ML system integration.
  • Proven ability to take ML models from research prototype to production deployment.
  • Strong debugging skills for diagnosing ML performance gaps in fielded systems.
RESPONSIBILITIES
  • Develop and deploy ML models for perception/navigation tasks such as object detection, semantic segmentation, tracking, scene understanding, localization, and path prediction.
  • Design and implement sensor fusion and mapping pipelines combining vision, depth, LIDAR, IMU, and other signals for robust perception and navigation in dynamic spaces.
  • Build real-time ML inference pipelines optimized for robotic hardware and embedded compute.
  • Set up data collection, labeling strategies, dataset curation, and synthetic data augmentation for training and evaluation.
  • Establish metrics, benchmarks, and test frameworks to validate ML models in both simulation and real-world environments.
  • Collaborate with robotics software engineers to integrate perception and navigation intelligence into autonomy stacks.
  • Work with operations to analyze field data, diagnose performance gaps, and iterate on model improvements.
  • Contribute to long-term ML, perception, and navigation architecture decisions, influencing the roadmap for future robots.
  • Mentor junior ML engineers and contribute to building strong applied ML best practices within the team.