Senior/Lead Data Engineer (Databricks, PySpark) at EPAM Systems Inc
London, England, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

30 Nov, 25

Salary

Not specified

Posted On

31 Aug, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Mathematics, SQL, Data Engineering, Computer Science, Git, Machine Learning, Data Solutions

Industry

Information Technology/IT

Description

EPAM is seeking multiple Senior/Lead Data Engineers with expertise in Databricks and PySpark to join our growing team in London. As part of our expansion into several large client accounts, we are looking for hands-on coding professionals who are passionate about data engineering and eager to solve complex problems at scale. In this role, you will collaborate with diverse teams to build, optimise and maintain robust data and analytics solutions.
The role requires 3-4 days of on-site work at the client's office in central London.
Applicants must have the right to work in the UK, as we are unable to offer visa sponsorship for this role.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics or a related field, or equivalent relevant work experience
  • Strong experience in data engineering, with recent hands-on coding as a core part of your daily role
  • Expertise in PySpark for building high-performance, distributed data pipelines
  • Expertise in Databricks for large-scale data engineering and analytics workloads
  • Strong experience with cloud platforms (Azure preferred)
  • Solid understanding of SQL and relational database concepts
  • Experience with CI/CD, Git and modern DevOps practices for data solutions
  • Strong problem-solving, communication and client-facing collaboration skills
  • Exposure to machine learning or data science workflows is a plus but not required
Responsibilities
  • Design, develop and maintain scalable data pipelines and ETL processes using PySpark and Databricks
  • Work closely with data architects, data scientists and business analysts to translate requirements into technical solutions
  • Implement data quality, reliability and performance improvements across large, complex datasets
  • Collaborate with DevOps and Cloud teams to deploy and optimise data solutions in Azure (or other cloud platforms)
  • Troubleshoot, optimise and refactor existing pipelines for performance and scalability
  • Contribute to best practices, coding standards and technical documentation
  • Mentor junior engineers and lead technical discussions within client and internal teams