Deep Learning Compiler Engineer (MLIR/LLVM) at Intel
Tel-Aviv, Tel-Aviv District, Israel
Full Time


Start Date

Immediate

Expiry Date

22 Mar, 26

Salary

Not specified

Posted On

22 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

C++, MLIR, LLVM, High-Performance Computing, Parallel Programming, Analytical Skills, Problem-Solving Skills, Linux, Compiler Design, Middle-End Optimizations, Code Generation, Performance Analysis, Tuning, Dataflow Graphs, IR Transformation, Optimization Techniques

Industry

Semiconductor Manufacturing

Description
Join our compiler team and contribute to the development of an MLIR-based compiler that drives performance improvements on Intel deep learning accelerators. The compiler delivers significant performance gains across Intel products and directly impacts cutting-edge deep learning workloads.

In this role, you will:

- Design and implement new optimizations within the MLIR and LLVM frameworks to enhance model-level performance for deep learning applications.
- Collaborate with architecture and performance teams to identify and address bottlenecks in the compiler pipeline.
- Engage with internal customers and developers to understand requirements and support model-level performance tuning.
- Explore and prototype novel compilation techniques to improve hardware utilization and efficiency.

This position is expected to relocate to the Petah Tikva campus in the near future.

Qualifications:

- 5+ years of experience in C++ and familiarity with modern software development practices.
- Background in high-performance computing or parallel programming.
- Strong analytical and problem-solving skills.
- Experience with development on Linux.

Advantages:

- Strong experience with compiler design and middle-end optimizations, preferably using MLIR and LLVM.
- Experience with code generation, performance analysis, and tuning for hardware accelerators.
- Knowledge of dataflow graphs, IR transformation, and optimization techniques.

Job Type: Experienced Hire
Shift: Shift 1 (Israel)
Primary Location: Israel, Tel Aviv

Business group: The Software Team drives customer value by enabling differentiated experiences through leadership AI technologies and foundational software stacks, products, and services. The group is responsible for developing the holistic strategy for client and data center software in collaboration with OSVs, ISVs, developers, partners, and OEMs. The group delivers specialized NPU IP to enable the AI PC and GPU IP to support all of Intel's market segments. The group also has HW and SW engineering experts responsible for delivering IP, SoCs, runtimes, and platforms to support the CPU and GPU/accelerator roadmap, inclusive of integrated and discrete graphics.

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Position of Trust: N/A

Work Model for this Role: This role is eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site. Job posting details (such as work model, location, or time type) are subject to change.

How To Apply:

If you would like to apply to this job directly from the source, please follow the application link in the original posting.

Responsibilities
The role involves designing and implementing optimizations within the MLIR and LLVM frameworks to enhance performance for deep learning applications. Additionally, the engineer will collaborate with teams to identify bottlenecks and support performance tuning.