GPU Kernel Optimization Engineer at Advanced Micro Devices
Folsom, CA 95630, USA
Full Time


Start Date: Immediate

Expiry Date: 09 Sep 2025

Salary: $214,920

Posted On: 10 Jun 2025

Experience: 0+ years

Remote Job: Yes

Telecommute: Yes

Visa Sponsorship: No

Skills: Deep Learning, Performance Tuning, Test Design, Scalability, Throughput, Compiler Optimization, CUDA, C++, LLVM, Low-Level Programming, Triton, Debugging, Optimization, Python, Software Solutions, HIP, Assembly

Industry: Computer Software/Engineering

Description

PREFERRED EXPERIENCE:

  • GPU Kernel Development & Optimization: Experienced in designing and optimizing GPU kernels for deep learning on AMD GPUs using HIP, CUDA, and assembly (ASM). Strong knowledge of AMD architectures (GCN, RDNA) and low-level programming to maximize performance for AI operations, leveraging tools such as Composable Kernel (CK), CUTLASS, and Triton for multi-GPU and multi-platform performance (a minimal HIP kernel sketch follows this list).

  • Deep Learning Integration: Experienced in integrating optimized GPU performance into machine learning frameworks (e.g., TensorFlow, PyTorch) to accelerate model training and inference, with a focus on scaling and throughput.

  • Software Engineering: Skilled in Python and C++, with experience in debugging, performance tuning, and test design to ensure high-quality, maintainable software solutions.
  • High-Performance Computing: Solid experience running large-scale workloads on heterogeneous compute clusters, optimizing for efficiency and scalability.
  • Compiler Optimization: Foundational understanding of compiler theory and tools like LLVM and ROCm for kernel and system performance optimization.
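
As a purely illustrative sketch of the kind of HIP kernel work this list describes (not part of the posting itself), the following minimal vector-add kernel shows the basic pattern: allocate device memory, launch a __global__ kernel over a 1-D grid, and copy the result back. All names, sizes, and launch parameters are placeholder assumptions.

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Minimal HIP kernel: each thread adds one element.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    const int block = 256;                        // threads per block
    const int grid = (n + block - 1) / block;     // enough blocks to cover n
    vector_add<<<grid, block>>>(da, db, dc, n);   // hipcc accepts the triple-chevron launch
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);                 // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

Compiled with hipcc, this runs on AMD GPUs via ROCm; the same source can also be built for NVIDIA targets, which is the multi-platform portability the HIP/CUDA experience above refers to.
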
Responsibilities

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_

THE ROLE:

As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your experience will be critical in enhancing GPU kernels, deep learning models, and training/inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, utilizing cutting-edge compiler technologies and advanced engineering principles to drive continuous improvement.

KEY RESPONSIBILITIES:

  • Optimize Deep Learning Frameworks: Enhance and optimize frameworks like TensorFlow and PyTorch for AMD GPUs in open-source repositories.

  • Develop GPU Kernels: Create and optimize GPU kernels to maximize performance for specific AI operations.

  • Develop & Optimize Models: Design and optimize deep learning models specifically for AMD GPU performance.
  • Collaborate with GPU Library Teams: Work closely with internal teams to analyze and improve training and inference performance on AMD GPUs (a brief kernel-timing sketch follows this list).
  • Collaborate with Open-Source Maintainers: Engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream.
  • Work in Distributed Computing Environments: Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems.
  • Utilize Cutting-Edge Compiler Tech: Leverage advanced compiler technologies to improve deep learning performance.
  • Optimize Deep Learning Pipeline: Enhance the full pipeline, including integrating graph compilers.
  • Software Engineering Best Practices: Apply sound engineering principles to ensure robust, maintainable solutions.
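
As another illustrative sketch (not part of the posting), measuring kernel time with HIP events is the kind of basic performance analysis the responsibilities above describe. The kernel, data size, and bandwidth estimate below are placeholder assumptions.

#include <hip/hip_runtime.h>
#include <cstdio>

// Placeholder kernel: scale a vector in place.
__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1 << 24;
    float* dx;
    hipMalloc((void**)&dx, n * sizeof(float));
    hipMemset(dx, 0, n * sizeof(float));

    hipEvent_t start, stop;
    hipEventCreate(&start);
    hipEventCreate(&stop);

    const int block = 256, grid = (n + block - 1) / block;
    hipEventRecord(start, 0);                     // record on the null stream
    scale<<<grid, block>>>(dx, 2.0f, n);
    hipEventRecord(stop, 0);
    hipEventSynchronize(stop);                    // wait for the kernel to finish

    float ms = 0.0f;
    hipEventElapsedTime(&ms, start, stop);        // elapsed time in milliseconds
    double gb = 2.0 * n * sizeof(float) / 1e9;    // one read + one write per element
    printf("kernel: %.3f ms, ~%.1f GB/s\n", ms, gb / (ms / 1e3));

    hipEventDestroy(start);
    hipEventDestroy(stop);
    hipFree(dx);
    return 0;
}
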