Data Engineer

at DP World

Swarzędz, wielkopolskie, Poland

Start Date: Immediate
Expiry Date: 27 Apr, 2025
Salary: Not Specified
Posted On: 28 Jan, 2025
Experience: N/A
Skills: Python, Programming Languages, GitHub, GitLab, Git, Computer Science, Scala, Machine Learning, SQL, Azure
Telecommute: No
Sponsor Visa: No

Description:

DP World, the fast-growing logistics leader that currently manages more than 10% of world trade, is seeking a Data Engineer to join our team in Poland.
As a Data Engineer, you will be responsible for designing, developing, and maintaining robust data pipelines and systems, with a focus on Databricks, machine learning, and data modeling. The ideal candidate will have a strong background in data engineering, a deep understanding of Databricks, and experience working with machine learning frameworks.
We offer the possibility to work remotely.

KEY QUALIFICATIONS & COMPETENCIES:

  • Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field
  • Proven experience as a Data Engineer with a focus on Databricks and machine learning
  • Strong proficiency in programming languages such as Python or Scala (PySpark preferred)
  • Proficient in SQL with a solid understanding of data modeling concepts and best practices
  • Experience building data pipelines using orchestration tools like Azure Data Factory and Databricks workflows
  • Familiarity with version control systems such as Git, GitHub, or GitLab
  • Hands-on experience with machine learning frameworks (e.g., TensorFlow, PyTorch), including GenAI applications and Large Language Models (LLMs)
  • Extensive experience working with cloud platforms (Azure)
  • Databricks Associate Certification or Azure Data Engineer Associate (DP-203) required
  • Knowledge of and experience with MS Azure Analytics Services, especially Azure Databricks, Azure Data Factory, and Fivetran, as well as MS Purview
  • Excellent problem-solving skills and attention to detail
  • Strong communication and collaboration skills
  • Fluent English; additional languages are a bonus, in particular German

How To Apply:

If you would like to apply to this job directly from the source, please click here.

Responsibilities:

  • Design, implement, and maintain scalable and efficient data pipelines using Databricks
  • Align with the global BI team to implement data platform / data lake solutions and workflows based on the Group cloud technology
  • Collaborate with data scientists and analysts to integrate machine learning models into the data pipeline
  • Implement and optimize machine learning workflows and processes
  • Design and implement data models to support analytics and reporting requirements
  • Ensure data models are scalable, efficient, and aligned with business objectives
  • Serve as a subject matter expert on Databricks, providing guidance on best practices and optimizations
  • Work on performance tuning and optimization of Databricks clusters
  • Implement data quality checks and governance processes to ensure data accuracy and reliability

Possible business travel, mainly to Germany.


REQUIREMENT SUMMARY

Min: N/A, Max: 5.0 year(s)

Information Technology/IT

IT Software - Other

Software Engineering

Graduate

Computer Science, Engineering

Proficient

1

Swarzędz, wielkopolskie, Poland