Inference Platform Technical Lead at Wayve
Sunnyvale, California, USA - Full Time


Start Date

Immediate

Expiry Date

07 Nov, 25

Salary

0.0

Posted On

08 Aug, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

At Wayve we’re committed to creating a diverse, fair and respectful culture that is inclusive of everyone based on their unique skills and perspectives, and regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, veteran status, pregnancy or related condition (including breastfeeding) or any other basis as protected by applicable law.

ABOUT US

Founded in 2017, Wayve is the leading developer of Embodied AI technology. Our advanced AI software and foundation models enable vehicles to perceive, understand, and navigate any complex environment, enhancing the usability and safety of automated driving systems.
Our vision is to create autonomy that propels the world forward. Our intelligent, mapless, and hardware-agnostic AI products are designed for automakers, accelerating the transition from assisted to automated driving.
In our fast-paced environment big problems ignite us—we embrace uncertainty, leaning into complex challenges to unlock groundbreaking solutions. We aim high and stay humble in our pursuit of excellence, constantly learning and evolving as we pave the way for a smarter, safer future.
At Wayve, your contributions matter. We value diversity, embrace new perspectives, and foster an inclusive work environment; we back each other to deliver impact.
Make Wayve the experience that defines your career!

Responsibilities

As the Tech Lead for our Inference Platform, you will lead the development and evolution of our machine learning inference infrastructure, tackling complex challenges in job scheduling, resource efficiency, and platform reliability. You will define and implement technical strategies that ensure optimal utilization of high-performance GPU clusters, enabling rapid iteration and seamless deployment of cutting-edge ML models.
Your leadership will directly shape the efficiency and scalability of our inference services, addressing challenging technical problems such as intelligent workload scheduling, dynamic resource allocation (persistent and burst capacity), low-latency inference delivery, and multi-model inference pipelines. Solving these problems will significantly enhance the productivity of our machine learning engineers and researchers, enabling groundbreaking advancements in ML at scale.
