Senior Data Engineer (m/f/d) at PATRIZIA SE
Augsburg, Germany
Full Time


Start Date

Immediate

Expiry Date

06 Nov, 25

Salary

Not specified

Posted On

07 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Apache Spark, Data Modeling, German, Python, Microsoft Azure, Scala, English

Industry

Information Technology/IT

Description

Are you looking for a dynamic, inclusive, and international working environment with a strong entrepreneurial spirit in the field of real asset investments? Then PATRIZIA is the right place for you! As part of our global team of around 900 experts at 26 locations worldwide, you will actively help shape our company. Your individual skills and commitment form the basis of our success.
We are looking for a Senior Data Engineer (m/f/d) to join our data team in Augsburg or Frankfurt/Main and play a key role in shaping and building our modern data platform based on Databricks.
In this role, you will be responsible for designing and developing robust and scalable ETL pipelines, integrating data from various internal and external sources. You will work closely with business stakeholders, data scientists, and data analysts to deliver reliable data products and enable data-driven decision-making across the company.

Main Responsibilities

  • Design, build, and maintain scalable and efficient ETL processes on our Databricks-based data platform
  • Integrate data from multiple sources using batch and streaming technologies
  • Develop and implement a modern data architecture, including data modeling, data quality, and metadata management
  • Ensure high performance, reliability, and maintainability of data pipelines
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions
  • Drive best practices in data engineering, including version control, testing, and deployment automation
  • Contribute to the continuous improvement of our data infrastructure and tooling

SKILLS & EXPERIENCE REQUIRED

  • Proven experience as a Data Engineer, with strong expertise in Databricks or similar big data platforms
  • Hands-on experience in developing and managing ETL workflows using tools such as Apache Spark, Delta Lake, SQL, Python, or Scala
  • Deep understanding of data architecture principles and data modeling (e.g., dimensional modeling, data lakes, data warehouses)
  • Experience with cloud platforms (esp. Microsoft Azure) is a plus
  • Strong problem-solving skills and a proactive mindset
  • Fluent in English; German is a plus