Developer Technology Engineer - AI at NVIDIA
Seoul, South Korea
Full Time


Start Date

Immediate

Expiry Date

23 May, 26

Salary

0.0

Posted On

22 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

C++, Software Development, Programming Techniques, AI Algorithms, Parallel Algorithms, System Optimization, GPU Architectures, Multi-modal Model Training, Inference, Reinforcement Learning, LLMs

Industry

Computer Hardware Manufacturing

Description
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

Join our global Developer Technology (DevTech) team at NVIDIA, where we drive innovation and improve the value of our platforms for developers. We are seeking a passionate colleague to work as an AI Developer Technology Engineer. Our DevTech team is a global organization dedicated to pushing the boundaries of AI and computing. Do you enjoy researching parallel algorithms to accelerate AI workloads on advanced computer architectures? Do you find it rewarding to investigate, find, and eliminate system bottlenecks to achieve the best possible performance from computer hardware? If so, we invite you to consider this role!

What you'll be doing:
- Collaborating closely with key application developers to understand and address their current and future challenges.
- Developing and optimizing core parallel algorithms and data structures, delivering top solutions on GPUs through reference code and direct application contributions.
- Working closely with diverse groups at NVIDIA, including the architecture, research, libraries, tools, and system software teams. Your insights will influence the development of next-generation architectures, software platforms, and programming models by investigating their impact on application performance and developer efficiency.
- Researching and developing innovative techniques in AI, and conducting comprehensive analysis and optimization to ensure the best possible performance on current and next-generation GPU architectures.

What we need to see:
- MS or PhD degree in AI computation or system optimization with a strong computational profile, or equivalent experience, and 3+ years of relevant work.
- Strong knowledge of C++, software development, programming techniques, and AI algorithms.
- Strong communication and organization skills, with a logical approach to problem solving, good time management, and task prioritization.
- Proficiency in a specific domain, such as multi-modal model training/inference or reinforcement learning for LLMs.

Widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family: www.nvidiabenefits.com/

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.
Responsibilities
The role involves collaborating with key application developers to address their challenges, and developing and optimizing core parallel algorithms and data structures to provide top solutions on GPUs through reference code and direct application contributions. Responsibilities also include researching innovative AI techniques and conducting comprehensive analysis and optimization to ensure the best possible performance on current and next-generation GPU architectures.