Start Date
Immediate
Expiry Date
11 Sep, 25
Salary
337,000
Posted On
12 Jun, 25
Experience
0 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Python, Rust, Reliability, C++, Computer Science, CUDA, Research, Security
Industry
Information Technology/IT
QUALIFICATIONS
Minimum Qualifications
- Master's degree in Computer Science, Engineering, or a related technical field.
- 5+ years in infrastructure or systems engineering roles, with 2-5 years focused on ML/AI infrastructure.
- Strong programming skills in Python, C++, Go, or Rust for systems development and automation.
- Ability to design end-to-end systems that balance performance, reliability, security, and cost.
- Excellent communicator able to bridge research and production teams.
- Strong problem-solving aptitude and a drive to push the state of the art in ML infrastructure.

Preferred Qualifications
- PhD degree preferred.
- Hands-on experience with ML training frameworks (PyTorch, TensorFlow, JAX) at scale.
- Knowledge of hardware-level optimization: CUDA, ROCm, kernel bypass, FPGA/ASIC integration.
- Experience with heterogeneous computing for AI, big data, and HPC.
- Open-source contributions or patents in the ML systems space.
- Publications in top-tier ML or systems conferences such as MLSys, ICML, ICLR, KDD, NeurIPS.
Team Introduction

The infra4AI Research and Architecture Team is responsible for the foundational hardware and software systems engineered to support the demanding, often experimental workloads of developing new artificial intelligence models and systems. It serves as the bedrock on which researchers and engineers create, train, test, and iterate on novel AI architectures, from large language models (LLMs) to specialized neural networks.

We are seeking highly skilled and motivated AI Infrastructure Researchers and Engineers to join our dynamic team. In this role, you will design, build, deploy, and maintain the robust, scalable infrastructure that powers our cutting-edge artificial intelligence (AI) and machine learning (ML) initiatives. You will work closely with our AI/ML researchers, data scientists, and software engineers to create an efficient, high-performance environment for training, inference, and data processing. Your expertise will be critical in enabling the next generation of AI-driven products and services.

Responsibilities

The ideal candidate should be an expert in at least one of the following areas to define and design the next-generation AI infrastructure:

Infrastructure Design & Architecture
- Lead end-to-end design of scalable, reliable AI infrastructure (AI accelerators, compute clusters, storage, networking) for training and serving large ML workloads.
- Define and implement service-oriented, containerized architectures (Kubernetes, VM frameworks, unikernels) optimized for ML performance and security.

Performance Optimization
- Profile and optimize every layer of the ML stack: ML compilers, GPU/TPU scheduling, NCCL/RDMA networking, data preprocessing, and training/inference frameworks.
- Develop low-overhead telemetry and benchmarking frameworks to identify and eliminate bottlenecks in distributed training and serving.

Distributed Systems & Scalability
- Build and operate large-scale deployment and orchestration systems that auto-scale across multiple data centers (on-premises and cloud).
- Champion fault tolerance, high availability, and cost efficiency through smart resource management and workload placement.

Data Pipeline & Workflow Engineering
- Architect and implement robust ETL and data ingestion pipelines (Spark/Beam/Dask/Flume) tailored for petabyte-scale ML datasets.
- Integrate experiment management and workflow orchestration tools (Airflow, Kubeflow, Metaflow) to streamline the path from research to production.

Collaboration & Mentorship
- Partner with ML researchers to translate prototype requirements into production-grade systems.
- Mentor and coach engineers on best practices in performance tuning, systems design, and reliability engineering.