Senior Data Platform & MLOps Engineer EMEA - Düsseldorf at Boston Scientific Corporation Malaysia
Düsseldorf, North Rhine-Westphalia, Germany
Full Time


Start Date

Immediate

Expiry Date

17 Feb, 26

Salary

0.0

Posted On

19 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Cloud Experience, MLOps Expertise, Python, SQL, Distributed Processing, Infrastructure-as-Code, CI/CD Pipelines, Containerization, Model Lifecycle Management, Observability, Data Quality, Compliance, Collaboration Skills, Security Best Practices, Machine Learning, Data Engineering

Industry

Medical Equipment Manufacturing

Description
As a Senior Data Platform & MLOps Engineer, you will design, build, and operate the cloud data platform that powers analytics and AI solutions across EMEA. You will partner closely with data engineers, AI engineers, data scientists, product managers, architects, and governance teams to create secure, reliable, and reusable platform services. This role combines hands-on engineering with cross-functional collaboration and plays a key part in implementing our new global data strategy.

You will work across a wide range of platform components, from data ingestion and orchestration to observability, security, and automation, ensuring that regional requirements such as GDPR and data residency are built into platform design without slowing innovation. Crucially, you will build and maintain the shared infrastructure that enables AI engineering and data science teams to develop, deploy, and scale machine learning solutions efficiently and responsibly. Your work will accelerate the delivery of AI-driven insights and automation across EMEA, ensuring that advanced analytics and ML capabilities are grounded in a secure, compliant, and scalable data foundation.

What you will do

You will design, build, and operate cloud data platform components, including data lakes, warehouses, streaming systems, orchestration layers, and metadata tooling. Your work will focus on making these services secure, observable, automated, and scalable, enabling analytics, AI, and data science teams to innovate with confidence. In parallel, you will design, implement, and maintain MLOps pipelines that support the full machine learning lifecycle, from experimentation and model training to deployment, monitoring, and continuous improvement. You will embed model governance, lineage tracking, and performance observability into the platform, ensuring that ML solutions are reliable, compliant, and production-ready.
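To make the "full machine learning lifecycle" concrete, here is a minimal train → register → monitor sketch in plain Python. Every name in it (ModelRegistry, the "demand-forecast" model, the drift threshold) is a hypothetical illustration of the lifecycle stages described above, not part of this role's actual stack:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistry:
    """Toy stand-in for a model registry with versioning and lineage."""
    versions: dict = field(default_factory=dict)

    def register(self, name, version, artifact, lineage):
        self.versions[(name, version)] = {"artifact": artifact, "lineage": lineage}

    def latest(self, name):
        # Return the highest registered version for this model name.
        v = max(ver for (n, ver) in self.versions if n == name)
        return self.versions[(name, v)]

def train(data):
    # Stand-in "model": just the mean of the training data.
    return sum(data) / len(data)

def monitor(model, live_data, threshold=0.5):
    # Flag the model for retraining when live data drifts from the training mean.
    drift = abs(sum(live_data) / len(live_data) - model)
    return drift > threshold

registry = ModelRegistry()
model = train([1.0, 2.0, 3.0])
registry.register("demand-forecast", 1, model, lineage={"dataset": "train-v1"})

needs_retrain = monitor(registry.latest("demand-forecast")["artifact"], [4.0, 5.0])
print(needs_retrain)  # drift of 2.5 exceeds the 0.5 threshold, so True
```

In a real platform each stage would be backed by managed services (a registry such as MLflow's, scheduled monitoring jobs, automated retraining triggers), but the control flow is the same.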
You'll also collaborate closely with data scientists and AI engineers to streamline model delivery using modern tools such as SageMaker, MLflow, Kubeflow, or Vertex AI.

In this role, you will:

- Operate and enhance cloud-based data platform components across compute, storage, orchestration, streaming, and metadata.
- Build and manage end-to-end MLOps pipelines, integrating CI/CD practices and automation for ML workflows.
- Monitor platform and model health, pipeline performance, latency, and costs, driving continuous optimization.
- Collaborate with global engineering and architecture teams to deliver infrastructure-as-code, secure patterns, and reusable services.
- Translate business and product needs into reusable platform and MLOps capabilities, templates, and standards.
- Provide guidance and coaching to teams on best practices, self-service tooling, and platform adoption.
- Strengthen observability, data quality, and compliance through metrics, logging, lineage, and GDPR-aligned controls.
- Contribute to global architecture and platform standards while representing EMEA-specific requirements and priorities.

What we're looking for

We're seeking someone with deep engineering craft, strong MLOps expertise, and the ability to balance speed with governance in a complex environment. You enjoy solving platform challenges, building scalable, reusable patterns, and enabling data and AI teams to develop, deploy, and scale models efficiently. You do not need a MedTech background, though experience in regulated industries or working with sensitive data is an advantage.

- Bachelor's degree in computer science, engineering, or a related field, with 5-8+ years of experience in data, platform, or MLOps engineering.
- Strong cloud experience, particularly with AWS and Snowflake (Azure is a plus), combined with hands-on skills in Python, SQL, and distributed processing frameworks (e.g., Spark/EMR).
- Expertise with infrastructure-as-code (Terraform or similar), CI/CD pipelines (GitHub Actions, Azure DevOps), and containerized/orchestrated environments (Docker, Kubernetes).
- Demonstrated ability to design, implement, and maintain MLOps pipelines covering model training, deployment, monitoring, and retraining, using tools such as SageMaker, MLflow, Kubeflow, or Vertex AI.
- Model lifecycle management: familiarity with model registries, feature stores, and governance frameworks that include versioning, lineage tracking, explainability, and compliance.
- Experience implementing observability for ML and data systems, including metrics collection, model-drift detection, and performance dashboards.
- Security and compliance: a solid understanding of cloud security best practices (IAM, encryption, secrets management, policy-as-code) and the ability to design solutions aligned with governance and regulatory requirements.
- Strong communication and collaboration skills across technical and non-technical stakeholders, fluency in English, and eligibility to work in one of the listed hub locations.
- Experience in MedTech, Pharma, Consulting, or other regulated industries.
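The model-drift detection mentioned above is often implemented with a statistic such as the population stability index (PSI). As a minimal sketch, assuming equal-width binning over the combined range (the thresholds in the comment are a common rule of thumb, not a standard this role prescribes):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Rule of thumb: PSI < 0.1 suggests little drift, 0.1-0.25 moderate,
    > 0.25 significant drift worth investigating.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frequencies(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor each proportion at a tiny epsilon to avoid log(0).
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frequencies(expected), frequencies(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical samples score near zero; a shifted sample scores well above 0.25.
baseline = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]
print(round(psi(baseline, baseline), 4))
print(psi(baseline, shifted) > 0.25)
```

In production, the same comparison would typically run as a scheduled job over feature and prediction distributions, feeding the metrics and dashboards listed in the observability requirement.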
Responsibilities
You will design, build, and operate cloud data platform components, ensuring they are secure, observable, automated, and scalable. Additionally, you will implement and maintain MLOps pipelines that support the full machine learning lifecycle.