Data Engineer at VIRGULE INTERNATIONAL LIMITED
Leicester, England, United Kingdom - Full Time


Start Date: Immediate

Expiry Date: 09 Oct, 25

Salary: 0.0

Posted On: 12 Aug, 25

Experience: 0 year(s) or above

Remote Job: Yes

Telecommute: Yes

Sponsor Visa: No

Skills: Snowflake, Apache Spark, AWS, Data Modeling, Azure, Data Warehouse, Airflow, DevOps, dbt, Data Engineering, Python

Industry: Information Technology/IT

Description

Reference: Vrg2425091
Job title: Data Engineer
Role Overview
We are seeking an exceptional Data Engineer to design, develop, and scale data pipelines, warehouses, and streaming systems that power mission-critical analytics and AI workloads. This is a hands-on engineering role where you will work with cutting-edge technologies, tackle complex data challenges, and shape the organization’s data architecture.

Core Responsibilities

  • Design & own robust ETL/ELT pipelines using Python and Apache Spark for batch and near real-time processing (see the pipeline sketch after this list).
  • Architect and optimize enterprise data warehouses (Snowflake, BigQuery, Redshift, Azure Synapse) for performance and scalability.
  • Build data models and implement governance frameworks ensuring data quality, lineage, and compliance.
  • Engineer streaming data solutions using Kafka and Kinesis for real-time insights.
  • Collaborate cross-functionally to translate business needs into high-quality datasets and APIs.
  • Proactively monitor, tune, and troubleshoot pipelines for maximum reliability and cost efficiency.
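
To give a flavour of the batch pipeline work described above, the following is a minimal PySpark extract-transform-load sketch. The bucket paths, column names, and cleaning rules are hypothetical illustrations, not details of this role.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_etl").getOrCreate()

    # Extract: read raw JSON events (hypothetical source path)
    raw = spark.read.json("s3://example-bucket/raw/events/")

    # Transform: de-duplicate and derive a partition date
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_date", F.to_date("event_ts"))
           .filter(F.col("event_date").isNotNull())
    )

    # Load: write a date-partitioned Parquet table (hypothetical target path)
    (cleaned.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet("s3://example-bucket/curated/events/"))

A near real-time variant of the same job would read with spark.readStream instead of spark.read, as in the streaming sketch further below.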

Must-Have Expertise

  • 5–9+ years in data engineering, with proven delivery of enterprise-scale solutions.
  • Advanced skills in Python, Apache Spark, and SQL optimization.
  • Deep expertise in at least one leading data warehouse (Snowflake, BigQuery, Redshift, Azure Synapse).
  • Strong knowledge of data modeling, governance, and compliance best practices.
  • Hands-on experience with streaming data technologies (Kafka, Kinesis); a streaming sketch follows this list.
  • Cloud proficiency in AWS, Azure, or GCP.
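
For illustration only, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic; the broker address, topic name, and message schema are assumptions, and running it requires the spark-sql-kafka connector package on the Spark classpath.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("orders_stream").getOrCreate()

    # Hypothetical schema of the JSON messages on the topic
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Source: subscribe to a Kafka topic (broker and topic are assumptions)
    orders = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "orders")
             .load()
             .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
             .select("o.*")
    )

    # Sink: append Parquet files, with checkpointing for restart safety
    query = (
        orders.writeStream.format("parquet")
              .option("path", "s3://example-bucket/streams/orders/")
              .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
              .start()
    )
    query.awaitTermination()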

Preferred Qualifications

  • Experience with Airflow, dbt, or similar orchestration frameworks (a minimal DAG sketch follows this list).
  • Exposure to DevOps & CI/CD in data environments.
  • Familiarity with ML pipelines and feature stores.
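
As a hedged illustration of the orchestration tooling mentioned above, here is a minimal Airflow 2.x DAG; the DAG id, schedule, and run_etl callable are hypothetical, and in practice the task would trigger a Spark job or dbt run rather than print.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_etl():
        # Placeholder for the real pipeline trigger
        print("extract, transform, load")

    with DAG(
        dag_id="daily_events_etl",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
        catchup=False,
    ) as dag:
        PythonOperator(task_id="run_etl", python_callable=run_etl)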

Job Types: Full-time, fixed-term contract
Application deadline: 10/09/2025
Reference ID: Vrg242509

How To Apply:

In case you would like to apply for this job directly from the source, please click here.
