MTS - Deep Learning Software Engineer - AI Models at Advanced Micro Devices Inc
Austin, TX 78735, USA
Full Time


Start Date

Immediate

Expiry Date

28 Apr, 25

Salary

0.0

Posted On

28 Jan, 25

Experience

0 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Design Skills, Test Design, Performance Analysis, Natural Language Processing

Industry

Computer Software/Engineering

Description

PREFERRED EXPERIENCE:

  • Knowledge of GPU computing (HIP, CUDA, OpenCL)
  • Experience with or knowledge of AI models: Natural Language Processing, Vision, Audio, Recommendation systems
  • Excellent C/C++/Python programming and software design skills, including debugging, performance analysis, and test design.
  • Experience running workloads on large-scale heterogeneous clusters is a plus
  • Experience optimizing GPU kernels for performance is a plus
Responsibilities

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_

THE ROLE:

AMD is looking for a software engineer who is passionate about expanding AI model support on AMD GPUs and improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

KEY RESPONSIBILITIES:

  • Enable and optimize key AI models (LLM, Vision, MultiModal, etc.) on AMD GPUs
  • Optimize AI frameworks like PyTorch, TensorFlow, etc. on AMD GPUs in upstream open-source repositories
  • Collaborate and interact with internal GPU library teams to analyze and optimize training and inference for AI
  • Work with open-source framework maintainers to understand their requirements and have your code changes integrated upstream
  • Optimize GPU kernels, understand and drive AI operator performance (GEMM, Attention, etc.) with specialized teams
  • Work in a distributed computing setting to optimize for both scale-up (multi-GPU) and scale-out (multi-node) systems
  • Apply your knowledge of software engineering best practices