Member of Technical Staff, Backend, LLM Applications at INCEPTION ARTIFICIAL INTELLIGENCE L.L.C - O.P.C
San Francisco, California, United States
Full Time


Start Date

Immediate

Expiry Date

08 Jun 2026

Salary

Not specified

Posted On

10 Mar 2026

Experience

5+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Backend, LLM Applications, Model Serving, Inference Requests, Latency, Throughput, Reliability, Scalable Backend Services, Load Balancing, Autoscaling, Traffic Routing, Model Versioning, Canary Deployments, Monitoring, Alerting, Python

Industry

Technology; Information and Internet

Description
The Role

We seek experienced backend engineers to own the systems that serve our diffusion LLMs in production. You'll build and operate infrastructure that handles billions of inference requests, optimizing for latency, throughput, cost, and reliability. This role sits at the intersection of ML systems and backend infrastructure.

Key Responsibilities

* Design, build, and operate scalable backend services and model serving infrastructure for our diffusion LLMs.
* Implement and manage load balancing, autoscaling, and traffic routing for model endpoints.
* Build systems for model versioning, canary deployments, and zero-downtime rollouts (a brief illustrative sketch follows this description).
* Develop monitoring, alerting, and observability tooling to ensure SLA compliance and rapid incident response.
* Benchmark and evaluate serving frameworks and hardware configurations to inform infrastructure decisions.

Qualifications

* BS/MS/PhD in Computer Science or a related field (or equivalent experience).
* 5+ years of experience building production backend systems.
* Strong proficiency in Python, including async programming and concurrent systems.
* Solid understanding of distributed systems, networking, and load balancing at scale.
* Familiarity with Kubernetes, CI/CD pipelines, and cloud infrastructure (AWS and/or Azure).

Preferred Skills

* Experience serving LLMs or other large generative models in production at scale.
* Experience with cloud infrastructure (AWS, Azure), including GPU instance management and cost optimization.
* Experience with infrastructure-as-code tools (Terraform) and deployment automation.
* Experience with monitoring and observability tools (Prometheus, Grafana).
* Familiarity with model serving frameworks (vLLM, Triton Inference Server, TensorRT-LLM).
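For illustration only, here is a minimal sketch of weighted canary traffic routing in async Python, the kind of system the responsibilities above describe. The upstream URLs, weights, and response schema are hypothetical, and httpx is assumed as the async HTTP client; a production router would add health checks, retries, connection reuse, and gradual weight shifts.

```python
import asyncio
import random

import httpx  # assumed async HTTP client; any equivalent would do


# Hypothetical endpoints for a stable and a canary model version.
UPSTREAMS = {
    "stable": {"url": "http://model-v1.internal/generate", "weight": 0.95},
    "canary": {"url": "http://model-v2.internal/generate", "weight": 0.05},
}


def pick_upstream() -> str:
    """Choose an upstream by weight, sending a small slice of traffic to the canary."""
    r = random.random()
    cumulative = 0.0
    for name, cfg in UPSTREAMS.items():
        cumulative += cfg["weight"]
        if r < cumulative:
            return name
    return "stable"  # fallback if weights do not sum to 1.0


async def route_request(prompt: str) -> str:
    """Forward one inference request to the chosen model version."""
    name = pick_upstream()
    # A real service would reuse a single client instead of creating one per request.
    async with httpx.AsyncClient(timeout=30.0) as client:
        resp = await client.post(UPSTREAMS[name]["url"], json={"prompt": prompt})
        resp.raise_for_status()
        return resp.json()["text"]  # hypothetical response schema


async def main() -> None:
    # Fan out a batch of requests concurrently.
    results = await asyncio.gather(*(route_request(f"prompt {i}") for i in range(10)))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```

In practice this weighting logic usually lives in a load balancer or service mesh rather than in application code; the sketch only shows the core idea of splitting a small, adjustable fraction of traffic onto a new model version.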
Responsibilities
The role involves designing, building, and operating scalable backend services and model serving infrastructure for diffusion LLMs in production. Key tasks include implementing load balancing and traffic routing, building safe deployment systems such as canary rollouts, and creating monitoring and observability tooling to keep the service within its SLAs.
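As a flavor of that observability work, here is a minimal sketch of latency and error instrumentation using the Python prometheus_client library. The metric names, port, and simulated handler are illustrative assumptions, not this team's actual tooling.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; real dashboards and SLA alerts would be built on these.
REQUEST_LATENCY = Histogram(
    "inference_request_latency_seconds",
    "End-to-end latency of inference requests",
)
REQUEST_ERRORS = Counter(
    "inference_request_errors_total",
    "Count of failed inference requests",
)


def handle_request() -> None:
    """Serve one (simulated) inference request, recording latency and errors."""
    with REQUEST_LATENCY.time():  # records elapsed time into the histogram
        try:
            time.sleep(random.uniform(0.01, 0.2))  # stand-in for model inference
        except Exception:
            REQUEST_ERRORS.inc()
            raise


if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```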