Research Associate - PhD at Advanced Micro Devices Inc
San Jose, CA 95124, USA
Full Time


Start Date

Immediate

Expiry Date

24 Jul, 25

Salary

0.0

Posted On

24 Apr, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Triton, C++

Industry

Computer Software/Engineering

Description

KEY QUALIFICATIONS:

  • Strong programming skills in C++ and familiarity with compiler frameworks.
  • Prior experience with, or an understanding of, MLIR-based compilers, which might include exposure to Triton, MLIR-AIR, or MLIR-AIE.
  • Prior experience developing with hardware accelerators, e.g., AMD GPUs or AI Engines.
Responsibilities

WHAT YOU DO AT AMD CHANGES EVERYTHING

We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences – the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.
AMD together we advance_
AMD Research and Advanced Development (RAD) is a great place to continue your research and impact the industry! RAD is an organization with a strong track record of driving research innovations into AMD products, and a unique industrial research laboratory that is constantly exploring new technologies. As part of RAD, you will have a role on a winning team inventing the hardware, software, and technologies of next-generation computing platforms, driving advancement in AI, scientific computing, gaming and graphics, and embedded applications.

WHAT YOU’LL BE DOING:

The successful candidate will be involved in work to develop machine learning (ML) inference accelerator designs targeting AMD’s AI Engine devices. The software stack enables designs expressed in Triton to be compiled using MLIR, with a chain of intuitive IR transformations enabling efficient execution. This role plays an integral part in our long-term vision of streamlining the inference compiler workflow, enabling high-level ML models described in Triton to execute efficiently on AI Engines and GPUs. The candidate will join a team developing the core transformations, driving machine learning applications through the toolchain, profiling end-to-end inference speed, and evaluating design tradeoffs across compute, memory, and communication.
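For context on the flow described above, the sketch below shows the shape of a kernel written in Triton's Python DSL (the standard vector-add pattern from Triton's tutorials); the kernel name, block size, and launch values are illustrative and not taken from this posting. A compiler stack like the one described would lower a kernel of this kind through MLIR-based IR transformations toward AI Engines or GPUs.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Illustrative launch on GPU-backed PyTorch tensors.
x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)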
