Senior Performance Software Engineer, Deep Learning Libraries at NVIDIA
Santa Clara, CA 95050, USA
Full Time


Start Date

Immediate

Expiry Date

11 Aug, 25

Salary

0.0

Posted On

12 May, 25

Experience

6 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Computer Software/Engineering

Description

We are now looking for a Senior Performance Software Engineer for Deep Learning Libraries! Do you enjoy tuning parallel algorithms and analyzing their performance? If so, we want to hear from you! As a deep learning library performance software engineer, you will develop optimized code to accelerate linear algebra and deep learning operations on NVIDIA GPUs. The team delivers high-performance code to NVIDIA's cuDNN, cuBLAS, and TensorRT libraries to accelerate deep learning models, and is proud to play an integral part in enabling breakthroughs in domains such as image classification, speech recognition, and natural language processing. Join the team that builds the underlying software used across the world to power the revolution in artificial intelligence!

We're always striving for peak GPU efficiency on current and future-generation GPUs. To get a sense of the code we write, check out our CUTLASS open-source project, which showcases performant matrix multiplication on NVIDIA's Tensor Cores with CUDA. This specific position primarily deals with code lower in the deep learning software stack, right down to the GPU hardware.

WHAT WE NEED TO SEE:

  • Masters or PhD degree or equivalent experience in Computer Science, Computer Engineering, Applied Math, or related field
  • 6+ years of relevant industry experience
  • Demonstrated strong C++ programming and software design skills, including debugging, performance analysis, and test design
  • Experience with performance-oriented parallel programming, even if it’s not on GPUs (e.g. with OpenMP or pthreads)
  • Solid understanding of computer architecture and some experience with assembly programming

RESPONSIBILITIES:

  • Writing highly tuned compute kernels, mostly in C++ CUDA, to perform core deep learning operations (e.g. matrix multiplies, convolutions, normalizations)
  • Following general software engineering best practices including support for regression testing and CI/CD flows
  • Collaborating with teams across NVIDIA:
      • the CUDA compiler team, on generating optimal assembly code
      • the deep learning training and inference performance teams, on which layers require optimization
      • the hardware and architecture teams, on the programming model for new deep learning hardware features
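To give a flavor of the performance-tuning work described above (this is an illustrative sketch, not NVIDIA library code), here is a minimal C++ cache-blocked matrix multiply. It applies the same tiling idea that GPU matrix-multiply kernels use, except that tiles are kept hot in CPU caches rather than staged into GPU shared memory and registers; the function name and tile size are hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative cache-blocked (tiled) matrix multiply: C += A * B,
// where A, B, C are n x n row-major matrices. Processing one tile
// at a time keeps the working set of A and B resident in cache,
// analogous to shared-memory tiling in a CUDA GEMM kernel.
void matmul_blocked(const std::vector<float>& A,
                    const std::vector<float>& B,
                    std::vector<float>& C,
                    std::size_t n, std::size_t tile = 32) {
    for (std::size_t ii = 0; ii < n; ii += tile)
        for (std::size_t kk = 0; kk < n; kk += tile)
            for (std::size_t jj = 0; jj < n; jj += tile)
                // Compute the contribution of one (A, B) tile pair.
                for (std::size_t i = ii; i < std::min(ii + tile, n); ++i)
                    for (std::size_t k = kk; k < std::min(kk + tile, n); ++k) {
                        float a = A[i * n + k];  // reused across the j loop
                        for (std::size_t j = jj; j < std::min(jj + tile, n); ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

In production libraries this loop nest is further specialized per architecture (vector widths, register blocking, Tensor Core instructions), which is precisely the kind of tuning this role involves.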