MLOps at EviSmart
Vancouver, British Columbia, Canada - Full Time


Start Date

Immediate

Expiry Date

28 May 2026

Salary

0.0

Posted On

27 Feb 2026

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

MLOps, ML Infrastructure, Docker, GPU Optimization, Asynchronous Job Orchestration, Container Orchestration, CI/CD, Model Versioning, Kubernetes, Terraform, FastAPI, Linux Systems Administration, Observability, Fault Tolerance, 3D Mesh Processing, Inference Services

Industry

Software Development

Description
About EviSmart

EviSmart is a B2B SaaS platform transforming how dental labs, dentists, and design centers work together. We connect practices across 28+ countries with AI-powered workflow automation, overnight CAD design services, and intelligent case management. We're a team of ~60 across Vancouver, Manila, and Seoul, building the operating system for modern dentistry.

The Opportunity

Our core AI research is done: the 3D segmentation models work, the generative models are built, the frontend is live, and the storage layer is operational. What's missing is the production backbone - the compute layer that turns research-grade models into fast, reliable, scalable services that real users depend on every day.

This role is about building that backbone. You'll architect and operate the infrastructure that powers heavy 3D mesh processing and GPU-intensive workloads for both internal teams and external customers. You'll own the systems that make our AI actually work in production - reproducibly, efficiently, and at scale.

This is a systems-level engineering role, not model research. If you're the person who bridges the gap between "it works in a notebook" and "it works for 10,000 users," we want to talk to you.
What You'll Be Doing

Productionizing Research Models
- Convert research models into scalable, production-grade inference services
- Design reproducible, version-controlled Docker environments with clear deployment standards
- Optimize GPU memory usage and runtime performance for 3D workloads
- Implement stateless service architectures with model versioning and rollback capabilities

Building Hybrid GPU Infrastructure
- Architect asynchronous job orchestration across on-prem GPU servers and cloud GPU instances
- Implement GPU scheduling, isolation, and intelligent workload routing
- Design for concurrent users with fault tolerance and graceful failure recovery
- Support hybrid infrastructure with cloud burst capacity for peak demand

Designing a Scalable Inference Platform
- Build modular pipelines with clean separation between preprocessing, model inference, and postprocessing
- Create well-defined service boundaries between backend APIs and compute services
- Establish a platform architecture that allows future models to integrate without major refactoring

Deployment & Automation
- Build CI/CD pipelines for model updates and infrastructure changes
- Implement version control, rollback strategies, and release management across environments
- Automate deployment workflows across hybrid infrastructure

Reliability & Observability
- Implement logging, monitoring, and system observability across the compute layer
- Track GPU utilization, job performance, and queue health
- Design retry mechanisms and automated failure handling
- Optimize latency and throughput to maintain stability under variable load

What We're Looking For

Required
- 5+ years in ML Infrastructure, MLOps, or backend systems engineering
- Proven track record deploying ML models into production environments
- Strong expertise in Docker and container lifecycle management
- Hands-on experience managing and optimizing GPU-based workloads
- Experience designing asynchronous job systems or queue-based architectures
- Solid Linux systems administration skills
- Experience integrating ML services with backend APIs (FastAPI, Flask, etc.)

Preferred
- Kubernetes or other container orchestration platforms
- Hybrid cloud + on-prem infrastructure deployment
- CI/CD pipeline design and automation
- Monitoring and logging tools (Prometheus, Grafana, ELK, etc.)
- Infrastructure-as-Code (Terraform, Pulumi, etc.)
- Experience with high-compute 3D ML workloads
- Model versioning systems and experiment tracking

You'll Thrive Here If You…
- Have successfully transitioned ML systems from research to commercial deployment
- Think in terms of platform architecture, not one-off deployments
- Care about reliability, scalability, and production SLAs
- Are comfortable debugging across GPU, container, and infrastructure layers
- Design systems for long-term extensibility, not just what works today

EviSmart™ | Where Dental Work Flows
Vancouver | Manila | Seoul
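The asynchronous job orchestration and retry handling described above follow a common worker-queue pattern. As a minimal illustrative sketch only (the function names, retry policy, and `run_inference` stub are hypothetical, not EviSmart's actual stack):

```python
import asyncio

MAX_RETRIES = 3  # hypothetical retry policy


async def run_inference(job_id: str) -> str:
    """Stand-in for a GPU inference call; real code would dispatch to a worker."""
    await asyncio.sleep(0)  # yield control, as a real I/O-bound call would
    return f"result-{job_id}"


async def process_with_retry(job_id: str) -> str:
    """Retry a job with exponential backoff before letting the failure surface."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await run_inference(job_id)
        except RuntimeError:
            if attempt == MAX_RETRIES:
                raise  # exhausted retries: propagate to automated failure handling
            await asyncio.sleep(0.01 * 2 ** attempt)  # backoff between attempts


async def worker(queue: asyncio.Queue, results: dict) -> None:
    """Pull job ids off the queue until a None sentinel arrives."""
    while True:
        job_id = await queue.get()
        if job_id is None:
            queue.task_done()
            break
        results[job_id] = await process_with_retry(job_id)
        queue.task_done()


async def main() -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    results: dict = {}
    # Two concurrent workers, standing in for a pool of GPU consumers.
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(2)]
    for job_id in ("case-1", "case-2", "case-3"):
        queue.put_nowait(job_id)
    for _ in workers:
        queue.put_nowait(None)  # one shutdown sentinel per worker
    await queue.join()  # wait until every enqueued item is marked done
    for w in workers:
        await w
    return results
```

A production version of this pattern would typically sit behind a FastAPI endpoint and use a durable broker rather than an in-process `asyncio.Queue`, so jobs survive process restarts.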
Responsibilities
This role focuses on building the production backbone for AI services: architecting and operating the infrastructure behind heavy 3D mesh processing and GPU-intensive workloads. Key tasks include productionizing research models into scalable inference services and designing a robust, hybrid GPU infrastructure.