Principal AI and ML Engineer — AI for Networking at NVIDIA
Santa Clara, CA 95050, USA
Full Time


Start Date

Immediate

Expiry Date

08 Aug, 25

Salary

Not specified

Posted On

09 May, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

NVIDIA redefines what’s possible. NVIDIA has been reinventing computer graphics, PC gaming, and accelerated computing for 30 years. It is a unique legacy of innovation that’s fueled by great technology and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, generative AI, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work.
Our company is at the forefront of technological innovation, and we are dedicated to driving efficiency and optimizing the performance of our infrastructure, both on-prem and in the cloud. Join us in this exciting endeavor! We are seeking a highly skilled Principal AI/ML Engineer to join our dynamic team to build the next generation of IT networking, help lead the team through a major technology transformation toward running AI on-prem, and build out infrastructure by integrating enterprise-ready platforms on a solid foundation of automation. We are looking for a passionate engineer who will solve networking problems with AI.

WHAT WE NEED TO SEE:

  • 10+ years of engineering experience with at least 5 years leading initiatives in ML infrastructure, AI systems, or applied NLP/LLM development.
  • 5+ years of experience in Networking and infrastructure.
  • Bachelor’s, Master’s, or Ph.D. in Computer Science, Engineering, Machine Learning, or a related field (or equivalent experience).
  • Deep expertise with:
      • Generative AI concepts such as embeddings, RAG, semantic search, and transformer-based LLMs
      • MCP workflows and the Agentic ecosystem
      • Vector databases (e.g., FAISS, Pinecone, Weaviate) and data pipelines
      • Programming in Python (preferred) and/or Go, and software engineering best practices
  • Experience deploying and tuning LLMs using techniques like LoRA, QLoRA, and instruction tuning.
  • Strong understanding of infrastructure automation pipelines (Terraform, Ansible, Salt), monitoring (Prometheus, Grafana), and DevOps tools.
  • Hands-on experience working with petabyte-scale datasets, schema design, and distributed processing.
  • Strong background in working with infrastructure-related data collection and logs related to network data. Ability to run simulations of network state with AI tools.

Responsibilities

  • Architect and implement infrastructure platforms tailored for AI/ML workloads, with a focus on scaling private cloud environments to support high-throughput training, inference, and Agentic workflows and pipelines.
  • Lead initiatives in Generative AI systems design, including Retrieval-Augmented Generation (RAG), LLM fine-tuning, semantic search, and multi-modal data processing.
  • Build and optimize ML systems for document understanding, vector-based retrieval, and knowledge graph integration using advanced NLP and information retrieval techniques.
  • Design and develop scalable services and tools to support GPU-accelerated AI pipelines, leveraging Kubernetes, Python/Go, and observability frameworks.
  • Mentor and collaborate with a multidisciplinary team of network engineers, automation engineers, AI and ML scientists, product managers, and multiple domain experts.
  • Build and drive adoption of emerging AIOPs technologies, integrating AI Agents, RAGs, and LLMs using MCP workflows to streamline automation, performance tuning, and large-scale data insights.