Freelance Senior Data Engineer at Elsewhen
London EC2A, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

10 Oct, 25

Salary

£500 per day

Posted On

10 Jul, 25

Experience

6 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Microsoft Azure, Kubernetes, Data Engineering

Industry

Information Technology/IT

Description

Location: London (hybrid, 2–3 days/week in the office at East India Dock, next to Canary Wharf)
Duration: 6–9 months (outside IR35)
Start date: ASAP
Budget: £450–£500 per day
Elsewhen is a London-based consultancy designing digital products & services for the likes of Spotify, Inmarsat and Zego. Over the past 11 years, Elsewhen has created a working environment that is impactful, driven, open and friendly. We value outcomes over hours and agility over rigid processes.
Join the team — https://www.elsewhen.com.
About the Role:
We’re looking for an experienced, self-driven Senior Data Engineer to join a fast-paced project building a new product on a Microsoft Azure-based platform. You’ll work closely with an established team to design and deliver robust data pipelines and infrastructure using modern tools and best practices.

Key Responsibilities:

  • Design, develop, and maintain data pipelines using Azure Databricks and PySpark (a minimal sketch follows this list).
  • Build and manage data processing workflows for real-time and batch data.
  • Collaborate with DevOps to deploy solutions using CI/CD pipelines.
  • Implement infrastructure as code with Terraform and container orchestration with Kubernetes (where required).
  • Work independently with minimal supervision in a time-sensitive environment.
  • Align deliverables with the existing team’s tech stack and contribute to rapid scaling of the product.
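
For illustration only, here is a minimal sketch of the kind of batch pipeline the first responsibility describes: a PySpark job on Databricks reading raw files, cleaning them, and writing a Delta table. The paths, table names, and columns (orders, order_id, order_ts, amount) are hypothetical, not taken from this posting.

```python
# Minimal batch pipeline sketch (hypothetical names throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-orders").getOrCreate()

# Read raw CSV files from a landing zone (hypothetical mount path).
raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/orders/")
)

# Basic cleaning: parse timestamps, drop duplicate orders, filter bad rows.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").cast("double") > 0)
)

# Write to a managed Delta table (Delta Lake is the default format on Databricks).
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders")  # hypothetical target table
)
```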

Key Requirements:

  • 5–6+ years of hands-on experience in data engineering.
  • Strong practical experience with Microsoft Azure, especially Azure Databricks.
  • Solid working knowledge of PySpark.
  • Familiarity with Terraform and/or Kubernetes (does not need to be expert level).
  • Good understanding of CI/CD pipelines and best practices for deployment.
  • Experience delivering both real-time and batch data processing solutions (a streaming sketch follows this list).
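
As a companion to the batch sketch above, here is a minimal real-time variant using Spark Structured Streaming with Databricks Auto Loader. Again, the source path, checkpoint and schema locations, event fields (event_ts, event_type), and target table are hypothetical assumptions for illustration.

```python
# Minimal streaming pipeline sketch (hypothetical names throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-events").getOrCreate()

# Incrementally ingest JSON files with Auto Loader; schemaLocation is
# required for schema inference when no explicit schema is given.
events = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/chk/events_schema/")
    .load("/mnt/raw/events/")
)

# Hourly counts per event type, with a watermark so late data is bounded.
hourly = (
    events.withColumn("event_ts", F.to_timestamp("event_ts"))
          .withWatermark("event_ts", "1 hour")
          .groupBy(F.window("event_ts", "1 hour"), "event_type")
          .count()
)

# Append mode is valid here because the aggregation is watermarked.
(
    hourly.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/chk/events_hourly/")
    .toTable("analytics.events_hourly")  # hypothetical target table
)
```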