Director, HBM Design Architecture at Micron Technology
Richardson, Texas, United States
Full Time


Start Date

Immediate

Expiry Date

07 Jan, 26

Salary

Not specified

Posted On

09 Oct, 25

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

HBM Design Architecture, AI/ML Computing Architectures, Training/Inference Pipelines, Memory Bandwidth Requirements, Compute-Memory Interconnects, Memory Subsystem Bottlenecks, HBM Optimization, Technical Briefings, Technology Roadmap SWOT Reviews, Co-Packaged Memory, Chiplet Architectures, Advanced Packaging Solutions, Collaboration, Customer Engagement, Industry Consortia, Semiconductor Technology

Industry

Semiconductor Manufacturing

Description
Responsibilities will include, but are not limited to:
- Build and maintain relationships with ecosystem partners, including hyperscalers, accelerator vendors, IP providers, and academic collaborators.
- Develop deep technical expertise in AI/ML and LLM computing architectures, including training/inference pipelines, memory bandwidth requirements, and compute-memory interconnects.
- Work with technical experts to analyze emerging AI workloads and software frameworks to identify memory subsystem bottlenecks and opportunities for HBM optimization.
- Develop and present technical briefings and technology roadmap SWOT reviews to senior leadership and technical stakeholders.
- Contribute to pathfinding efforts for future HBM generations, including co-packaged memory, chiplet architectures, and advanced packaging solutions.
- Collaborate with internal architecture, design, and product teams to align HBM roadmap features with AI ecosystem needs.

Qualifications:
- Experience engaging with customers, partners, and industry consortia at a deep technical level.
- PhD or Master's degree in Electrical Engineering, Computer Engineering, or a related field.
- 15+ years of experience in semiconductor technology.
- Demonstrated impact in shaping technology roadmaps at the organizational level.
Responsibilities
The Director of HBM Design Architecture will build and maintain relationships with ecosystem partners and develop technical expertise in AI/ML computing architectures. They will analyze emerging AI workloads to identify memory subsystem bottlenecks and contribute to future HBM generations.