Data Engineer at BoomBit
Niedersachsen, Germany
Full Time


Start Date

Immediate

Expiry Date

05 Jul, 25

Salary

Not disclosed

Posted On

06 Apr, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Fusion, Cloud Services, Python, Data Manipulation, Shell Scripting, Transformation, Tableau, SQL, AWS, Automation, Data Modeling, Quantitative Data, Kafka, Kubernetes, Docker, Power BI, Airflow, Data Engineering, Data Governance

Industry

Information Technology/IT

Description

WHO WE ARE:

We are a full-service agency and content studio helping companies thrive through strategy, creative, technology services, and human talent.

REQUIRED SKILLS AND EXPERIENCE:

  • 5+ years of experience in data engineering or a similar role.
  • Strong hands-on experience with GCP services including Dataflow, Pub/Sub, BigQuery, and Cloud Data Fusion (a streaming sketch follows this list).
  • Proficiency in building stream processing systems using Kafka.
  • Familiarity with Docker, Kubernetes, and cloud services (AWS, GCP).
  • Advanced knowledge of Python and Linux shell scripting.
  • Proven expertise in streaming data architectures and real-time processing.
  • Experience ingesting and integrating data from on-premise sources, data lakes, and streaming platforms.
  • Experience with business intelligence software (e.g., Power BI, Tableau) and the graphic display of quantitative data.
  • Proficient in Python and SQL for data manipulation, transformation, and automation.
  • Skilled in pipeline orchestration tools such as Airflow, Cloud Composer, or equivalent.
  • Solid understanding of data modeling, data governance, and performance optimization best practices.
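
For illustration only, a minimal sketch of the kind of streaming pipeline this stack implies, written against the Apache Beam Python SDK (the programming model behind Dataflow). The project, topic, and table names are hypothetical placeholders, not part of the posting:

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Streaming pipeline: read JSON events from Pub/Sub and land them in BigQuery.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")  # hypothetical topic
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",  # hypothetical table, assumed to already exist
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
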
Responsibilities

JOB PURPOSE:

We’re looking for a Senior Data Engineer with deep expertise in building scalable, efficient, and secure data pipelines on Google Cloud Platform (GCP). You will play a key role in designing and implementing robust data solutions that empower data-driven decisions across the organization.
If you have a passion for cloud-native architectures, streaming data, and modern data integration strategies, we’d love to talk.

KEY RESPONSIBILITIES:

  • Design, build, and maintain data pipelines using Dataflow, Cloud Pub/Sub, BigQuery, and Cloud Data Fusion.
  • Develop scalable streaming and batch pipelines to support real-time and historical data use cases.
  • Lead data ingestion efforts from on-premise systems, data lakes, and external APIs into the cloud environment.
  • Collaborate with data scientists, analysts, and platform teams to ensure data availability and quality.
  • Write efficient and production-grade Python and SQL code for data transformation and validation.
  • Implement pipeline orchestration using tools such as Cloud Composer, Airflow, or similar (see the sketch after this list).
  • Monitor, troubleshoot, and optimize data pipelines to ensure performance and reliability.
  • Contribute to architecture and design decisions that support long-term scalability and maintainability.
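
As a rough illustration of the orchestration work, here is a minimal DAG sketch assuming Airflow 2.4 or later; the DAG name, tasks, and validation logic are placeholders invented for this example:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def validate_load(**context):
    # Placeholder check; a real task would verify row counts and schema drift.
    print("validating load for", context["ds"])


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'pull from on-prem source'",  # stand-in for real extraction
    )
    validate = PythonOperator(task_id="validate", python_callable=validate_load)
    extract >> validate
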