Senior Data Engineer at Publicis Groupe Holdings BV
London, England, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

22 Nov, 25

Salary

Not specified

Posted On

23 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL Server, JavaScript, Git, Python, Web Scraping, Technical Leadership, Data Modeling, Solution Delivery, Oracle, Glue, PostgreSQL, SQL, AWS, Azure, Distributed Systems, Data Infrastructure, Design

Industry

Information Technology/IT

Description

THE SPIRIT OF VIVA LA DIFFERENCE

Viva La Difference is deeply rooted in everything we do. It has always been in our DNA, from the birth of Publicis 94 years ago, when our founder, Marcel Bleustein-Blanchet, invented French advertising. Viva La Difference expresses how we value and respect each individual and recognize what makes us distinctive. It is the charge that inspires our teams to celebrate the differences in identity, background, culture, and experience of all of us. It is how we behave with each other and our clients, and it runs throughout our work, elevating and bringing to life our differences across the platform world.
Overview
We are seeking a proactive, self-motivated Senior Data Engineer with a proven track record of building scalable cloud-based data solutions across multiple cloud platforms, to architect, build, and maintain our data infrastructure. The immediate focus of this role is GCP; however, experience with Snowflake and Databricks is also required.
As a senior member of the data engineering team, you will play a pivotal role in designing scalable data pipelines, optimising data workflows, and ensuring data availability and quality for production systems.
The ideal candidate brings deep technical expertise in AWS, GCP and/or Databricks, alongside essential hands-on experience building pipelines in Python, analysing data requirements with SQL, and applying modern data engineering practices. Your ability to work across business and technology functions, drive strategic initiatives, and solve problems independently will be key to success in this role.

EXPERIENCE

  • Relevant experience in data engineering and solution delivery, with a strong track record of technical leadership
  • Deep understanding of data modeling, data warehousing concepts, and distributed systems
  • Excellent problem-solving skills and ability to progress with design, build and validate output data independently
  • Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools
  • Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure
  • Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle)

DESIRABLE SKILLS

  • Familiarity with machine learning pipelines and MLOps practices
  • Additional experience with Databricks and specific AWS services such as Glue, S3, and Lambda
  • Proficient in Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps)
  • Hands-on experience with web scraping, REST API integrations, and streaming data pipelines
  • Knowledge of JavaScript and front-end frameworks (e.g., React)

Additional information

Publicis Groupe operates a hybrid working pattern, with full-time employees being office-based three days during the working week.
We are supportive of all candidates and are committed to providing a fair assessment process. If you have any circumstances (such as neurodiversity, physical or mental impairments, or a medical condition) that may affect your assessment, please inform your Talent Acquisition Partner. We will discuss possible adjustments to ensure fairness. Rest assured, disclosing this information will not impact your treatment in our process.
Please make sure you check out the Publicis Career Page, which showcases our Inclusive Benefits and our EAGs (Employee Action Groups).

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities
  • Architect and maintain robust data pipelines (batch and streaming) integrating internal and external data sources (APIs, structured streaming, message queues, etc.)
  • Collaborate with data analysts, scientists, and software engineers to understand data needs and develop solutions
  • Understand requirements from operations and product to ensure data and reporting needs are met
  • Implement data quality checks, data governance practices, and monitoring systems to ensure reliable and trustworthy data
  • Optimize performance of ETL/ELT workflows and improve infrastructure scalability
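To give a flavour of the kind of work the responsibilities above describe, here is a minimal, self-contained sketch in plain Python of a batch pipeline step that applies data quality checks before a transformation. All names (`validate_rows`, `transform`) and the field schema are hypothetical, for illustration only; in practice this work would typically use PySpark or cloud-native tooling.

```python
# Illustrative sketch of a batch ETL step with basic data quality checks.
# Field names and validation rules are hypothetical examples.

def validate_rows(rows):
    """Keep rows that pass quality checks; count rejected rows."""
    valid, rejected = [], 0
    for row in rows:
        # Quality checks: required id present, amount is non-negative
        if row.get("id") is not None and row.get("amount", -1) >= 0:
            valid.append(row)
        else:
            rejected += 1
    return valid, rejected

def transform(rows):
    """Example transformation: normalise amounts from pounds to pence."""
    return [{**r, "amount_pence": round(r["amount"] * 100)} for r in rows]

raw = [
    {"id": 1, "amount": 9.99},
    {"id": None, "amount": 5.00},   # rejected: missing id
    {"id": 2, "amount": -3.00},     # rejected: negative amount
]
clean, rejected = validate_rows(raw)
loaded = transform(clean)
print(len(loaded), rejected)  # 1 2
```

In a production pipeline, the rejected-row count would feed a monitoring system so that data quality regressions surface before downstream reporting is affected.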