(Senior) Data Engineer (m/f/d) at Valmet Inc
Berlin, Germany
Full Time


Start Date

Immediate

Expiry Date

06 May, 25

Salary

0.0

Posted On

06 Feb, 25

Experience

2 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Information Systems, Continuous Delivery, Python, Integration, ETL, STEM, Data Modeling, Statistics, AWS

Industry

Information Technology/IT

Description

(SENIOR) DATA ENGINEER (M/F/D)

Do you want to redefine how entire industries work by leveraging IoT, Smart Manufacturing and Industry 4.0? Would you like to be part of the success of a digital solution that will revolutionize the manufacturing process and improve shop floor performance? If your answer is a big yes, you should continue reading!
FactoryPal is a corporate start-up headquartered in Berlin, with an additional location in Porto, Portugal. The venture is poised to become the leading end-to-end IoT solution for machine efficiency and equipment effectiveness. The digitally enabled solution is not only reshaping how companies produce and raising their efficiency levels; it is also fundamentally augmenting the way manufacturing employees do their jobs.
We are data scientists, engineers, designers, IIoT experts, product managers, and manufacturing operations consultants. We are a team united by a shared ambition: to revolutionize manufacturing and transform the way it is done to ensure smooth operations.

QUALIFICATIONS

  • Bachelor’s degree in Management Information Systems, Statistics, Software Engineering, STEM, or a related technical/quantitative field.
  • 5+ years of experience with ETL, data modeling, and data lake approaches.
  • 5+ years of experience with processing multi-dimensional datasets from different sources and automating the end-to-end ETL pipeline.
  • 3+ years of experience in Python.
  • 3+ years of experience in cloud technologies (AWS).
  • 3+ years of experience with streaming-based systems (Kafka/Kinesis) and event-driven design.
  • 2+ years of experience in distributed computing systems (such as Spark/Flink).
  • Experience with continuous delivery and integration.
  • Ability to effectively communicate with both business and technical teams.

ROLE AND RESPONSIBILITIES

  • Design, develop, and deploy real-time data pipelines using stream processing platforms such as Apache Kafka, Apache Flink, and AWS Glue.
  • Build a high-performance, ACID-compliant Data Lake using Apache Iceberg.
  • Create, enhance, and optimize data models and implement data warehousing solutions within the Snowflake platform.
  • Monitor, identify, and proactively reduce technical debt to maintain system health.
  • Develop and improve the current data architecture, emphasizing data lake security, data quality and timeliness, scalability, and extensibility.
  • Deploy and use various big data technologies and run pilots to design low-latency data architectures at scale.
  • Contribute to automating and monitoring data pipelines, as well as streamlining client onboarding.
  • Collaborate with cross-functional teams, including Software Engineers, Product Owners, Data Scientists, Data Analysts, and shop floor consultants, to build and improve our data and analytics solutions.