System Software Engineer - RAG

at  NVIDIA

Santa Clara, CA 95051, USA

Start Date: Immediate
Expiry Date: 24 Aug, 2024
Salary: USD 339,250 Annual
Posted On: 25 May, 2024
Experience: 6 year(s) or above
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No
Required Visa Status:
  • Citizen
  • GC
  • US Citizen
  • Student Visa
  • H1B
  • CPT
  • OPT
  • H4 Spouse of H1B
  • GC Green Card
Employment Type:
  • Full Time
  • Part Time
  • Permanent
  • Independent - 1099
  • Contract – W2
  • C2H Independent
  • C2H W2
  • Contract – Corp 2 Corp
  • Contract to Hire – Corp 2 Corp

Description:

NVIDIA’s technology is at the heart of the AI revolution, touching people across the planet by powering everything from self-driving cars and robotics to co-pilots and more. Join us at the forefront of technological advancement in intelligent assistants and information retrieval. What is retrieval-augmented generation, aka RAG? Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.
NVIDIA is looking for a System Software Engineer - RAG to develop pipelines for indexing and querying multi-modal content. We are looking for someone with a passion for working on the world’s most complicated problems in the Generative AI, LLM, MLLM, and RAG spaces using our innovative hardware and software platforms. You will develop tools for building powerful, flexible, multi-modal retrievers and agents driven by Large Language Models (LLMs), thereby improving the experience of millions of customers. If you’re creative and passionate about solving real-world conversational AI problems, come join us.
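
To make the RAG idea above concrete, here is a minimal Python sketch of the retrieve-then-generate pattern: embed a tiny corpus, retrieve the passages closest to a query, and assemble a grounded prompt for an LLM. The embedding model, documents, and helper function are illustrative placeholders under assumed dependencies (sentence-transformers, NumPy), not NVIDIA's production pipeline.

import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny stand-in corpus; in practice this would be a large, multi-modal index.
documents = [
    "NVIDIA RAPIDS provides GPU-accelerated DataFrame operations via cuDF.",
    "Retrieval-augmented generation grounds LLM answers in retrieved passages.",
    "A vector index returns the nearest neighbors of a query embedding.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are closest to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product of unit vectors == cosine similarity
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How does RAG improve LLM accuracy?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # the prompt would then go to a generator LLM of your choice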

What You’ll Be Doing:

  • Develop and optimize Python-based data processing frameworks, ensuring efficient handling of large datasets on GPU-accelerated environments, vital for LLM training.
  • Contribute to the design and implementation of RAPIDS and other GPU-accelerated libraries, focusing on seamless integration and performance enhancement in the context of LLM training data preparation and RAG pipelines (see the data-preparation sketch after this list).
  • Lead development and iterative optimization of components for RAG pipelines, ensuring they leverage GPU acceleration and the best-performing models for improved TCO.
  • Collaborate with teams of LLM and ML researchers on the development of full-stack, GPU-accelerated data preparation pipelines for multimodal models.
  • Implement benchmarking, profiling, and optimization of innovative algorithms in Python across various system architectures, specifically targeting LLM applications.
  • Work closely with complementary teams to understand requirements, build & evaluate POCs, and develop roadmaps for production level tools and library features within the growing LLM ecosystem.
  • Build amazing products to improve employee productivity using Gen-AI & Co-pilot experiences!
  • Collaborate with your peers to craft, develop, test, and maintain integrated applications and features.
  • Develop integrated systems that enable a unified experience across applications and drive insights into the end-to-end user experience.
  • Help build and maintain our Continuous Delivery pipeline with the goal of moving changes to production faster and safer, while ensuring key operational standards.
  • Provide peer reviews to other specialists including feedback on performance, scalability, and correctness.
  • Actively contribute to the adoption of frameworks, standards, and new technologies.
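
As a rough illustration of the GPU-accelerated data preparation referenced in the RAPIDS bullet above, the sketch below uses RAPIDS cuDF to clean one shard of text data for LLM training: drop exact duplicates, filter out very short documents, normalize whitespace, and write the cleaned shard back out. The file paths, column name, and length threshold are assumptions for the example, not the team's actual pipeline.

import cudf  # RAPIDS GPU DataFrame library

# Load a hypothetical parquet shard of raw text documents onto the GPU.
df = cudf.read_parquet("raw/shard-0000.parquet")

# Exact-duplicate removal on the GPU, keyed on the (assumed) "text" column.
df = df.drop_duplicates(subset="text")

# Drop very short documents; 200 characters is an arbitrary example threshold.
df = df[df["text"].str.len() >= 200]

# Collapse repeated whitespace inside each document.
df["text"] = df["text"].str.normalize_spaces()

# Write the cleaned shard back out for downstream LLM training or RAG indexing.
df.to_parquet("clean/shard-0000.parquet")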

What We Need To See:

  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or a related field (or equivalent experience).
  • 6+ years of demonstrated experience in a similar or related role.
  • Python programming expertise with Deep Learning (DL) frameworks such as PyTorch.
  • Experience delivering software in a cloud context and familiarity with the patterns and processes of managing cloud infrastructure.
  • Knowledge of MLOps technologies such as Docker Compose, containers, Kubernetes, data center deployments, etc.
  • Excellent in-depth, hands-on understanding of NLP, LLM, MLLM, Generative AI, and RAG workflows.
  • Self-starter with a passion for growth, enthusiasm for continuous learning, and a habit of sharing findings across the team.
  • Extremely motivated, highly passionate, and curious about new technologies.
  • Outstanding communication skills for distilling sophisticated topics down to understandable, impactful conclusions, as well as the ability to work successfully with multi-functional teams, principals, and architects, coordinating effectively across organizational boundaries and geographies.

If you are passionate about technology, have a proven track record in system software engineering, and are eager to make a significant impact in the industry, we would love to hear from you. Join us at NVIDIA and help us craft the future of visual computing!
The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

REQUIREMENT SUMMARY

  • Experience: Min 6.0 - Max 11.0 year(s)
  • Computer Software/Engineering
  • IT Software - Application Programming / Maintenance
  • Software Engineering
  • Graduate
  • Proficient
  • 1
  • Santa Clara, CA 95051, USA