HPC and AI Cluster Engineer at NVIDIA
Beijing, Beijing, China
Full Time


Start Date

Immediate

Expiry Date

15 May, 26

Salary

0.0

Posted On

14 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

HPC, AI Cluster, Linux, Slurm, Kubernetes, GPU, Networking, Firewalls, Iptables, Wireshark, TCP, DHCP, DNS, Python, Bash Scripting, Ansible

Industry

Computer Hardware Manufacturing

Description
NVIDIA is looking for an HPC and AI Cluster Engineer to join the Networking Cluster Solutions HPC/AI Infrastructure team. We are building supercomputers and AI clusters based on groundbreaking technologies, and we are looking for a cluster engineer to be a key contributor to the most exciting computing hardware and software, helping drive the latest breakthroughs in artificial intelligence and GPU computing. You will work with the latest accelerated computing and deep learning software and hardware platforms, and with many scientific researchers, developers, and customers to craft improved workflows and develop new, differentiated solutions. You will interact with HPC, OS, GPU compute, and systems specialists to architect, develop, and bring up large-scale performance platforms. Does this sound like you? If so, we would love to hear from you!

What you will be doing:

- Deploy, manage, and maintain large-scale HPC/AI clusters
- Manage Linux job/workload schedulers and orchestration tools
- Support and maintain continuous integration and delivery pipelines
- Troubleshoot and fix issues bottom-up, from bare metal through the operating system and software stack to the application level
- Support Research & Development activities and engage in POCs for future improvements

What we need to see:

- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience
- 3+ years of experience
- Knowledge of HPC and AI solution technologies, from CPUs and GPUs to high-speed interconnects and supporting software
- Experience with job scheduling and orchestration tools such as Slurm and Kubernetes
- Excellent knowledge of Windows and Linux (RedHat/CentOS and Ubuntu) networking (sockets, firewalls, iptables, Wireshark, etc.) and internals, ACLs, OS-level security protection, and common protocols, e.g. TCP, DHCP, DNS
- Python programming and bash scripting experience, plus automation and configuration management tools such as Jenkins, Ansible, and GitOps
- Experience with virtualization systems (for example VMware, Hyper-V, KVM)

Ways to stand out from the crowd:

- Knowledge of CPU and/or GPU architecture
- Knowledge of Kubernetes and container-related microservice technologies
- Experience with GPU-focused hardware/software (DGX, CUDA)
- Experience with multiple storage solutions such as Lustre and GPFS, and familiarity with newer and emerging storage technologies
- Background with RDMA (InfiniBand or RoCE) fabrics

NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. We have a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. Our teams are composed of driven, innovative professionals dedicated to pushing the boundaries of technology. We offer highly competitive salaries, an extensive benefits package, and a work environment that promotes diversity, inclusion, and flexibility. As an equal opportunity employer, we are committed to fostering a supportive and empowering workplace for all.

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.
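To give a flavor of the Python scripting and Slurm skills the role calls for, here is a minimal illustrative sketch (not from the posting; the function name and supported syntax are assumptions) that expands a Slurm-style compressed hostlist such as `gpu[01-03,07]` into individual hostnames, the kind of small utility cluster automation scripts lean on:

```python
import re

def expand_hostlist(expr):
    """Expand a Slurm-style compressed hostlist, e.g. 'gpu[01-03,07]'.

    Simplified sketch: real Slurm hostlists (cf. `scontrol show hostnames`)
    also support nested and multi-bracket forms not handled here.
    """
    m = re.fullmatch(r"([^\[\]]+)\[([^\]]+)\]", expr)
    if not m:
        return [expr]  # plain hostname, nothing to expand
    prefix, ranges = m.groups()
    hosts = []
    for part in ranges.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            width = len(lo)  # preserve zero padding, e.g. '01' -> gpu01
            hosts.extend(f"{prefix}{i:0{width}d}"
                         for i in range(int(lo), int(hi) + 1))
        else:
            hosts.append(f"{prefix}{part}")
    return hosts

print(expand_hostlist("gpu[01-03,07]"))
# ['gpu01', 'gpu02', 'gpu03', 'gpu07']
```

In production one would typically shell out to `scontrol show hostnames` or use a dedicated hostlist library rather than re-implementing the grammar.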
Responsibilities
The engineer will deploy, manage, and maintain large-scale HPC/AI clusters, including managing Linux job scheduling and orchestration tools. They will also support continuous integration pipelines and troubleshoot issues from bare metal up to the application level.