Product Manager - Gen AI Inference Platform

at NVIDIA

Santa Clara, CA 95051, USA

Start Date: Immediate
Expiry Date: 09 Oct, 2024
Salary: USD 310,500 Annual
Posted On: 09 Jul, 2024
Experience: 3 year(s) or above
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No
Required Visa Status:
US Citizen, Green Card (GC), Student Visa, H1B, CPT, OPT, H4 (Spouse of H1B)
Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, Contract – Corp 2 Corp, C2H Independent, C2H W2, Contract to Hire – Corp 2 Corp

Description:

To realize value with AI, neural networks need to be deployed for inference, powering applications running in the cloud, in the data center, or at the edge. Common services that invoke AI inference include large language models, recommender systems, virtual assistants, and generative AI.
NVIDIA is at the forefront of advancing the latest research and optimizations to make cost-efficient inference of customized GenAI models a reality for everybody. To keep pace with this multifaceted field, we seek a passionate product manager who understands inference and its ecosystem. We need a self-starter to continue growing this area and to work with customers to define the future of inference. We’re looking for the rare blend of technical skill, product skill, and passion for groundbreaking technology. If this sounds like you, we would love to learn more about you!

What You’ll be Doing:

  • Develop NVIDIA’s enterprise inference strategy in alignment with NVIDIA’s portfolio of AI products and services
  • Distill insights from strategic customer engagements and define, prioritize, and drive execution of the product roadmap
  • Collaborate across organizations with machine learning engineers and product teams to introduce new techniques and tools that improve performance, latency, and throughput while optimizing for cost
  • Build an outstanding developer experience with inference APIs that integrate seamlessly with the modern software development stack and relevant ecosystem partners
  • Ensure operational excellence and reliability of distributed inference serving systems; build processes around a robust set of analytics and alerting tooling focused on uptime SLAs and overall QoS
  • Develop an industry- and workload-focused GTM strategy and playbook with marketing, sales, and NVIDIA’s ecosystem of partners to drive enterprise adoption and establish leadership in inference

What We Need to See:

  • BS or MS degree in Computer Science, Computer Engineering, or similar field or equivalent experience
  • 8+ years of product management, or similar, experience at a technology company
  • 3+ years of experience in defining and building inference software
  • Proven experience managing and operationalizing a cloud services product
  • Strong understanding of Kubernetes and operationalizing ML
  • Strong communication and interpersonal skills

Ways to Stand Out from the Crowd:

  • Understanding of modern ML architectures and an intuition for how to optimize their TCO, particularly for inference
  • Advanced knowledge of inference acceleration libraries and runtimes such as NVIDIA Triton Inference Server, TensorRT, Ray, vLLM, and TGI
  • Familiarity with the MLOps ecosystem and experience building integrations with popular MLOps tooling such as MLflow and Weights & Biases

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you’re creative and a self-starter, we want to hear from you!
The base salary range is 160,000 USD - 310,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



REQUIREMENT SUMMARY

Experience: 3.0–8.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Application Programming / Maintenance
Role: Software Engineering
Education: Graduate
Proficiency: Proficient
Location: Santa Clara, CA 95051, USA