Solution Architect Intern, AI in Industry - 2025 at NVIDIA
Beijing, Beijing, China
Full Time


Start Date

Immediate

Expiry Date

24 Dec, 25

Salary

0.0

Posted On

25 Sep, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Computer Science, AI, Machine Learning, Python, C++, TensorRT, ONNX Runtime, Problem Solving, Collaboration, DevOps, MLOps, Docker, Git, CI/CD, HPC, Enterprise Computing

Industry

Computer Hardware Manufacturing

Description
NVIDIA is a leading company in AI computing. At NVIDIA, our employees are passionate about AI, HPC, visual computing, and gaming. Our Solution Architect team focuses on bringing NVIDIA's newest technology into different industries. We design the architecture of AI computing platforms and analyze AI and HPC applications to deliver value to customers. This role will be instrumental in leveraging NVIDIA's cutting-edge technologies to optimize open-source and proprietary large models, create AI workflows, and support our customers in implementing advanced AI solutions.

What you'll be doing:

- Drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions
- Use the NVIDIA NIM Factory Pipeline to package optimized models (including LLM, VLM, Retriever, CV, OCR, etc.) into containers providing standardized API access
- Refine NIM tools for the community and help the community build performant NIMs
- Design and implement agentic AI tailored to customer business scenarios using NIMs
- Deliver technical projects, demos, and customer support tasks
- Provide technical support and guidance to customers, facilitating the adoption and implementation of NVIDIA technologies and products
- Collaborate with cross-functional teams to enhance and expand our AI solutions

What we need to see:

- Pursuing a Bachelor's or Master's degree in Computer Science, AI, or a related field; or a PhD candidate in ML infrastructure or data systems for ML
- Proficiency in at least one inference framework (e.g., TensorRT, ONNX Runtime, PyTorch)
- Strong programming skills in Python or C++
- Excellent problem-solving skills and the ability to troubleshoot complex technical issues
- Demonstrated ability to collaborate effectively across diverse, global teams, adapting communication styles while maintaining clear, constructive professional interactions

Ways to stand out from the crowd:

- Expertise in model optimization techniques, particularly using TensorRT
- Familiarity with disaggregated LLM inference
- CUDA optimization experience; extensive experience designing and deploying large-scale HPC and enterprise computing systems
- Familiarity with mainstream inference engines (e.g., vLLM, SGLang)
- Experience with DevOps/MLOps tools and practices such as Docker, Git, and CI/CD

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.
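For context on the "standardized API access" that NIM containers provide: a deployed NIM exposes an OpenAI-compatible HTTP API, so a packaged model can be queried like any chat-completions endpoint. The sketch below builds and sends such a request; the local base URL and model name are assumptions for illustration, not details from this posting.

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style chat-completions payload, the request
    shape a NIM container's standardized API typically accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def post_chat_request(base_url: str, payload: dict) -> dict:
    """POST the payload to a running endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Hypothetical local deployment; adjust the URL and model name
    # to match your own container.
    payload = build_chat_request("meta/llama-3.1-8b-instruct", "What is NVIDIA NIM?")
    reply = post_chat_request("http://localhost:8000", payload)
    print(reply["choices"][0]["message"]["content"])
```

Because every NIM serves the same request shape, client code like this stays unchanged when one packaged model is swapped for another.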
Responsibilities
Drive the implementation and deployment of NVIDIA Inference Microservice solutions and provide technical support to customers. Collaborate with cross-functional teams to enhance and expand AI solutions.