Start Date
Immediate
Expiry Date
03 Dec, 25
Salary
0.0
Posted On
03 Sep, 25
Experience
8 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Robotics, Error Analysis, Metrics, EE, Computer Science, ML, Code, AVs, Benchmarking, Engineers, Reliability
Industry
Information Technology/IT
WHAT WE’RE DOING ISN’T EASY, BUT NOTHING WORTH DOING EVER IS.
We envision a future powered by robots that work seamlessly with human teams. We build artificial intelligence that enables service robots to collaborate with people and adapt to dynamic human environments. Join our mission-driven, venture-backed team as we build out current and future generations of humanoid robots.
The TLM, AI Evaluation Science will lead the team responsible for advancing the state of the art in measuring the performance of physical AI systems and validating how they perform in the real world. This group defines requirements, builds metrics, and creates rigorous evaluation pipelines, ensuring that our robots meet high bars for safety, reliability, task performance, and human trust. You’ll own simulation, testing, labeling, and interpretability frameworks, making sure our robots not only work, but work safely, repeatably, and explainably.
This is a hands-on leadership role in a startup environment. You’ll be both strategist and player-coach: defining evaluation standards, building tools and models, and growing the team that ensures our embodied AI is ready for deployment.
SKILLS AND EXPERIENCE