Principal Machine Learning Engineer at Red Hat
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

13 May, 26

Salary

0.0

Posted On

12 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

PyTorch, Machine Learning, Open Source, C++, Python, Performance Optimization, Hardware Acceleration, Linux, Git, Debugging, Benchmarking, Custom Ops, Numerical Computing, CUDA, ROCm, oneAPI

Industry

IT Services and IT Consulting

Description
About the Job: The AI Platform Core Components organization, part of AI Engineering, is looking for individuals with a passion for Open Source and Machine Learning enthusiasts helping grow the impact of Red Hat’s AI offerings for our customers and the community. We are looking for a PyTorch Machine Learning Engineer to help improve, extend, and upstream PyTorch on Red Hat platforms. You will work primarily on PyTorch, contributing to PyTorch core, improving performance on modern hardware, and collaborating with the upstream community. What you will do : Design, implement, and maintain features in PyTorch core (Python and C++), including ops, kernels, and tooling. Profile and optimize PyTorch execution on CPU and GPU/accelerators (Intel, AMD, NVIDIA CUDA). Build tests, benchmarks, and minimal examples to validate correctness and performance. Debug issues across the stack (PyTorch, libraries, hardware, drivers) and contribute fixes upstream. Collaborate with upstream PyTorch maintainers and internal teams; write clear docs and design notes. Contribute to the PyTorch upstream community What you will bring : 2-6 years of experience in ML systems. Experience contributing to Open Source projects Strong skills in C++ and Python. Hands-on experience with PyTorch (internals, custom ops, or advanced usage). Solid understanding of algorithms, data structures, and performance-oriented coding. Comfortable working in Linux, Git, and modern development workflows The following are considered a Plus : Familiarity with numerical computing, vectorization, and low-level performance profiling tools. Prior contributions to PyTorch or other ML/AI open-source projects. Experience with CUDA, ROCm/AMD GPUs, or Intel GPU/oneAPI. #LI-AK1 About Red Hat Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. 
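One of the responsibilities above is building tests, benchmarks, and minimal examples that validate both correctness and performance of optimized kernels. As a hypothetical illustration of that workflow (not part of the posting, and using only the Python standard library rather than PyTorch), a micro-benchmark harness typically checks an optimized variant against a reference implementation before timing both:

```python
# Minimal sketch of a correctness-plus-benchmark harness, in the spirit of
# "build tests, benchmarks, and minimal examples to validate correctness
# and performance". Pure-Python stand-ins: in real kernel work, naive_dot
# would be a reference op and optimized_dot a C++/CUDA implementation.
import timeit

def naive_dot(a, b):
    # Reference implementation: explicit accumulation loop.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def optimized_dot(a, b):
    # Candidate "optimized" variant to validate against the reference.
    return sum(x * y for x, y in zip(a, b))

a = [float(i) for i in range(1000)]
b = [float(i) for i in range(1000)]

# Step 1: correctness check against the reference before any timing.
assert abs(naive_dot(a, b) - optimized_dot(a, b)) < 1e-6

# Step 2: micro-benchmark both variants over repeated runs.
t_naive = timeit.timeit(lambda: naive_dot(a, b), number=1000)
t_opt = timeit.timeit(lambda: optimized_dot(a, b), number=1000)
print(f"naive: {t_naive:.4f}s  optimized: {t_opt:.4f}s")
```

The key design point is ordering: a benchmark result is only meaningful after the optimized path has been shown numerically equivalent (within tolerance) to the reference.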
Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants.
If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

We're the world's leading provider of enterprise open source solutions, including Linux, cloud, container, and Kubernetes. We deliver hardened solutions that make it easier for enterprises to work across platforms and environments, from the core datacenter to the network edge. At Red Hat, our commitment to open source extends beyond technology into virtually everything we do. We collaborate and share ideas, create inclusive communities, and welcome diverse perspectives from all Red Hatters, no matter their role. It's what makes us who we are. Some of the most knowledgeable and passionate people in the technology industry work here. Whether we're building software, championing our products, or training new associates, we're collaborating openly to make a difference in the world of open source and beyond.
Responsibilities
The role involves designing, implementing, and maintaining features within PyTorch core using Python and C++, focusing on improving performance across CPU and GPU/accelerators like Intel, AMD, and NVIDIA CUDA. Responsibilities also include building tests, debugging issues across the stack, and collaborating with upstream PyTorch maintainers.