Data & AI Platform Engineer at Pandora Jewelry
København, Denmark
Full Time


Start Date

Immediate

Expiry Date

02 Dec, 25

Salary

Not specified

Posted On

03 Sep, 25

Experience

6+ years

Remote Job

Yes

Sponsor Visa

No

Skills

Data Products, Airflow, Collaboration, Data Systems, Documentation, Code, Release Engineering, Pipelines, Infrastructure

Industry

Information Technology/IT

Description

DATA & AI PLATFORM ENGINEER

Join us in one of the most interesting challenges in Data Engineering in 2025: crafting a Data & AI platform that makes creating and using data products a seamless and fast experience. We provide the platform for teams across consumer engagement, supply chain, finance, merchandising, and manufacturing domains to build data, analytics, and AI products for the world’s largest jewellery company. As a Data & AI Platform Engineer, you will play a crucial role in designing, developing, and maintaining the infrastructure and platform essential for scaling our data impact across the organization. Your contribution will be pivotal in creating a secure and private data platform, delivering SDKs that speed up data product delivery, ensuring seamless creation of resources, and building robust automated deployment flows.

EDUCATION & EXPERIENCE

  • 2–6 years of experience building production-quality software & data products.
  • Bachelor's or Master's degree, or equivalent professional experience.

YOUR SKILLS & BACKGROUND

  • Solid experience delivering end-to-end systems as a Data/Platform/MLOps/DevOps Engineer.
  • Strong software engineering skills in distributed, multi-language environments and collaborative code practices.
  • Fluency in Python and experience shipping production-grade services or pipelines.
  • Hands-on with Infrastructure as Code (Terraform) and reusable modules/patterns.
  • Experience with cloud data systems: Databricks/Spark, SQL/NoSQL stores, object storage, and data lake/lakehouse concepts.
  • Ability to build API-based integrations and contribute to platform SDKs or internal tooling.
  • Familiarity with one or more workflow/orchestration tools (Airflow, Databricks Workflows, Prefect, or Dagster); see the sketch after this list.
  • Working knowledge of CI/CD, automated testing, and release engineering for data and ML workloads.
  • Basic security and privacy practices (IAM, secrets management, network basics) and willingness to deepen expertise.
  • Exposure to observability (Prometheus, Grafana, New Relic, ELK) and data quality/lineage tooling.
  • Awareness of FinOps principles and an interest in building cost-efficient solutions.
  • Effective communication, documentation, and collaboration with cross-functional partners.
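
The posting doesn't prescribe a specific orchestration setup, but as a rough illustration of the workflow-tool familiarity listed above, here is a minimal sketch of an Airflow DAG using the TaskFlow API. The DAG name, schedule, and task bodies are hypothetical and not part of the role description.

```python
# Minimal Airflow DAG sketch using the TaskFlow API (Airflow 2.x).
# Names such as "ingest_orders" and the daily schedule are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def ingest_orders():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed for illustration).
        return [{"order_id": 1, "amount": 129.0}]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to object storage or a lakehouse table.
        print(f"loaded {len(rows)} rows")

    load(extract())


ingest_orders()
```
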
Responsibilities

KEY RESPONSIBILITIES

  • Build and maintain platform services, shared libraries, and components that power the Data & AI platform.
  • Implement and support AI/GenAI service deployments (model endpoints, vector databases, feature stores) under established guardrails.
  • Automate resource provisioning and access patterns to make data products quick to create and easy to consume (see the sketch after this list).
  • Develop integrations across batch, streaming, and ML/AI workloads using approved patterns and data contracts.
  • Build CI/CD pipelines, tests, and release automations for data pipelines and ML workloads; improve developer experience.
  • Configure and operate DevOps tooling (IaC, secrets, orchestration, observability); create useful dashboards and alerts.
  • Collaborate with stakeholders to refine requirements; write clear documentation and participate in demos and support rotations.
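
As a rough sketch of the kind of API-based integration and internal SDK tooling these responsibilities describe, the snippet below wraps a hypothetical provisioning endpoint. The base URL, payload fields, and response shape are assumptions for illustration; none of them are documented in the posting.

```python
# Hypothetical internal-SDK helper wrapping a provisioning API.
import requests

PLATFORM_API = "https://platform.example.internal/api/v1"  # assumed base URL


def create_workspace(name: str, domain: str, token: str) -> str:
    """Request a new data-product workspace and return its id."""
    resp = requests.post(
        f"{PLATFORM_API}/workspaces",
        json={"name": name, "domain": domain},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]
```

A thin wrapper like this is typically versioned and shipped as part of the platform SDK, so product teams call the helper rather than the raw API.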
