Start Date
Immediate
Expiry Date
09 Nov, 25
Salary
0.0
Posted On
13 Aug, 25
Experience
8 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Python, AWS, Docker, Bash, Azure, Automation, ML
Industry
Information Technology/IT
ROLE OVERVIEW
We are seeking a high-caliber MLOps Engineer to design, automate, and scale end-to-end ML model lifecycles in production. You will own the deployment, monitoring, and governance of models, bridging data science and engineering to deliver secure, high-performance AI solutions at enterprise scale.
CORE RESPONSIBILITIES
· Architect & manage ML model deployment pipelines using MLflow, Kubeflow, or similar frameworks.
· Build CI/CD pipelines tailored for ML workloads (GitHub Actions, Jenkins, GitLab CI).
· Containerize and orchestrate ML services using Docker & Kubernetes (EKS, AKS, GKE).
· Integrate models into cloud ML platforms (AWS SageMaker, Azure ML, GCP Vertex AI).
· Implement model monitoring for accuracy, drift detection, and retraining automation.
· Establish feature stores and integrate with data pipelines (Feast, Tecton, Hopsworks).
· Ensure compliance with AI governance, security, and responsible AI practices.
· Optimize inference performance and reduce serving latency for large-scale deployments.
· Collaborate with cross-functional teams to translate ML research into production-grade APIs.
PREFERRED QUALIFICATIONS
· Cloud certifications (AWS, Azure, GCP).
· Experience with model explainability tools (SHAP, LIME).
· Exposure to deep learning deployment (TensorFlow Serving, TorchServe, ONNX).
· Knowledge of API frameworks (FastAPI, Flask) for ML inference services.
· Experience with real-time streaming integrations (Kafka, Kinesis, Pub/Sub).
Job Types: Full-time, Contract
Application Deadline: 11/09/202