Senior Deep Learning Engineer, Deep Learning Algorithms at NVIDIA
Warsaw, Masovian Voivodeship, Poland
Full Time


Start Date

Immediate

Expiry Date

01 Feb, 26

Salary

0.0

Posted On

03 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Deep Learning, Performance Analysis, Optimization, GPU Architecture, Python Programming, PyTorch, TensorFlow, JAX, Docker, CUDA, OpenCL, DevOps, MLOps, CI Systems, Algorithms, Analytical Skills

Industry

Computer Hardware Manufacturing

Description
We are looking for senior engineers who are mindful of performance analysis and optimization to help us squeeze every last clock cycle out of Deep Learning training, inference, and NVIDIA AI Services. We work across all layers of the hardware/software stack, from GPU architecture and Deep Learning frameworks all the way up to large-scale computing and orchestration, to achieve peak performance. This role offers an opportunity to directly impact the hardware and software roadmap in a fast-growing company that leads the AI revolution. Join the team building software used by the entire world. Work with world-class software engineers to implement blazingly fast, state-of-the-art deep learning models that help us understand the end-to-end performance of NVIDIA's DL software and hardware stack. Work on the most powerful, enterprise-grade GPU clusters, capable of hundreds of petaFLOPS, and on unreleased hardware before anyone else in the world. Are you ready for this challenge?

What you'll be doing:

- Implement deep learning models from multiple data domains (CV, NLP/LLMs, ASR, TTS, RecSys, and others) in multiple DL frameworks (PyTorch, JAX, TF2, DGL, and others).
- Implement and test new software features (graph compilation, reduced-precision training) that use the most recent hardware functionality.
- Analyze, profile, and optimize deep learning workloads on state-of-the-art hardware and software platforms.
- Collaborate with researchers and engineers across NVIDIA, providing guidance on improving the design, usability, and performance of workloads.
- Lead best practices for building, testing, and releasing DL software.
- Contribute to the creation of a large-scale benchmarking system capable of testing thousands of models on a wide variety of hardware and software stacks.

What we need to see:

- 3+ years of experience in DL model implementation and software development.
- BSc, MSc, or PhD degree in Computer Science, Computer Architecture, or a related technical field.
- Excellent Python programming skills.
- Extensive knowledge of at least one DL framework (PyTorch, TensorFlow, JAX, MXNet), with practical experience in PyTorch required.
- Strong problem-solving and analytical skills.
- Algorithms and DL fundamentals.
- Docker containerization fundamentals.

Ways to stand out from the crowd:

- Experience in performance measurement and profiling.
- Experience with containerization technologies such as Docker.
- GPU programming experience (CUDA or OpenCL) is a plus but not required.
- Knowledge of and enthusiasm for DevOps/MLOps practices for developing Deep Learning-based products.
- Experience with CI systems (preferably GitLab).

NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most brilliant and forward-thinking people in the world working for us. If you're creative and autonomous, we want to hear from you! We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.
Responsibilities
Implement deep learning models across various data domains and frameworks while optimizing performance on state-of-the-art hardware. Collaborate with researchers and engineers to enhance the design and usability of deep learning workloads.