MLE - Intermediate / Senior Machine Learning Engineer (Production & MLOps F at NTT DATA
Chennai, Tamil Nadu, India
Full Time


Start Date

Immediate

Expiry Date

22 Jan 2026

Salary

0.0

Posted On

24 Oct 2025

Experience

5 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, MLOps, Python, SQL, Terraform, Docker, Kubernetes, Airflow, Vertex AI, CI/CD, Model Monitoring, Feature Stores, Statistical Methods, Deep Learning, Big Data, Streaming Technologies

Industry

IT Services and IT Consulting

Description
Responsibilities:
- Design, deploy, and maintain production-ready ML models and pipelines for real-world applications.
- Build and scale ML pipelines using Vertex AI Pipelines, Kubeflow, and Airflow, and manage infrastructure-as-code with Terraform/Helm.
- Implement automated retraining, drift detection, and re-deployment of ML models (see the sketch after this description).
- Develop CI/CD workflows (GitHub Actions, GitLab CI, Jenkins) tailored for ML.
- Implement model monitoring, observability, and alerting across accuracy, latency, and cost.
- Integrate and manage feature stores, knowledge graphs, and vector databases for advanced ML/RAG use cases.
- Ensure pipelines are secure, compliant, and cost-optimized.
- Drive adoption of MLOps best practices: develop and maintain workflows to ensure reproducibility, versioning, lineage tracking, and governance.
- Mentor junior engineers and contribute to long-term ML platform architecture design and the technical roadmap.
- Stay current with the latest ML research and apply new tools pragmatically to production systems.
- Collaborate with product managers, data scientists, and engineers to translate business problems into reliable ML systems.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (PhD is a plus).
- 5+ years of experience in machine learning engineering, MLOps, or large-scale AI/data science systems.
- Strong foundations in data structures, algorithms, and distributed systems.
- Proficient in Python (scikit-learn, PyTorch, TensorFlow, XGBoost, etc.) and SQL.
- Hands-on experience building and deploying ML models at scale in cloud environments (GCP Vertex AI, AWS SageMaker, Azure ML).
- Experience with containerization (Docker, Kubernetes) and orchestration (Airflow, TFX, Kubeflow).
- Familiarity with CI/CD pipelines, infrastructure-as-code (Terraform/Helm), and configuration management.
- Experience with big data and streaming technologies (Spark, Flink, Kafka, Hive, Hadoop).
- Practical exposure to model observability tools (Prometheus, Grafana, EvidentlyAI) and governance (WatsonX).
- Strong understanding of statistical methods, ML algorithms, and deep learning architectures.
- Experience with real-time inference systems or low-latency streaming platforms (e.g., Kafka Streams).
- Hands-on experience with feature stores and enterprise ML platforms (IBM WatsonX, Vertex AI).
- Knowledge of model interpretability and fairness frameworks (SHAP, LIME, Fairlearn) and responsible AI principles.
- Strong understanding of data/model governance, lineage tracking, and compliance frameworks.
- Contributions to open-source ML/MLOps libraries or strong participation in ML competitions (e.g., Kaggle, NeurIPS).
- Domain experience in logistics, supply chain, or large-scale consumer platforms.
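To illustrate the automated drift-detection duty named above, here is a minimal sketch of one common approach: comparing a production feature sample against a training-time reference with a two-sample Kolmogorov-Smirnov test. The function name, the 0.05 threshold, and the synthetic data are illustrative assumptions, not part of the role description; in practice this kind of check would run inside the scheduled pipelines (Airflow, Vertex AI Pipelines) the posting mentions and gate retraining/re-deployment.

import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the two samples differ according to a two-sample KS test."""
    statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha  # a small p-value suggests the distributions differ

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    ref = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature sample
    cur = rng.normal(loc=0.3, scale=1.0, size=5_000)  # shifted production sample
    if detect_drift(ref, cur):
        print("Drift detected: trigger retraining / re-deployment pipeline")
    else:
        print("No significant drift detected")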
Responsibilities
Design, deploy, and maintain production-ready ML models and pipelines for real-world applications. Drive adoption of MLOps best practices and mentor junior engineers.