Data Engineer at Somerset Bridge Group
NUTN4, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

03 Sep, 25

Salary

£68,500

Posted On

04 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

CD, SQL, Continuous Delivery, Transformation, Infrastructure, Reporting, CI, Optimisation Strategies, Performance Tuning, IT, Data Governance, Metadata Management, Emerging Technologies, Scala, Kafka, Database Design, Data Extraction, Agile Environment, Access Control

Industry

Information Technology/IT

Description

DESCRIPTION

We’re building something special — and we need a talented Data Engineer to help bring our Azure data platform to life.
This is your chance to work on a greenfield Enterprise Data Warehouse programme in the insurance sector, shaping data pipelines and platforms that power smarter decisions, better pricing, and sharper customer insights.
The Data Engineer will design, build, and optimise scalable data pipelines within Azure Databricks, ensuring high-quality, reliable data is available to support pricing, underwriting, claims, and operational decision-making. This role is critical in modernising SBG’s cloud-based data infrastructure, ensuring compliance with FCA/PRA regulations, and enabling AI-driven analytics and automation.
By leveraging Azure-native services, such as Azure Data Factory (ADF) for orchestration, Delta Lake for ACID-compliant data storage, and Databricks Structured Streaming for real-time data processing, the Data Engineer will help unlock insights, enhance pricing accuracy, and drive innovation. The role also includes optimising Databricks query performance, implementing robust security controls (RBAC, Unity Catalog), and ensuring enterprise-wide data reliability.
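For context only, and not something applicants are asked to reproduce, the minimal sketch below illustrates the kind of pattern described above: Structured Streaming reading from a Kafka-compatible source (for example the Event Hubs Kafka endpoint) and appending to an ACID-compliant Delta table. The broker address, topic, schema, checkpoint path, and table name are all hypothetical.

    # Illustrative sketch only: streaming ingestion into a Delta table with Databricks
    # Structured Streaming. Broker, topic, schema, paths and table names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("claims-stream").getOrCreate()

    # Hypothetical schema for incoming claims events
    event_schema = StructType([
        StructField("claim_id", StringType()),
        StructField("policy_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Read JSON events from a Kafka-compatible source
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "claims-events")
           .load())

    events = (raw
              .select(from_json(col("value").cast("string"), event_schema).alias("e"))
              .select("e.*"))

    # Append to a Delta table; the checkpoint enables exactly-once, ACID-compliant ingestion
    (events.writeStream
           .format("delta")
           .option("checkpointLocation", "/mnt/checkpoints/claims_events")
           .outputMode("append")
           .toTable("bronze.claims_events"))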
Working closely with Data Architects, Pricing Teams, Data Analysts, and IT, this role will ensure our Azure Databricks data ecosystem is scalable, efficient, and aligned with business objectives. Additionally, the Data Engineer will contribute to cost optimisation, governance, and automation within Azure’s modern data platform.

SKILLS, KNOWLEDGE AND EXPERTISE

  • Hands-on experience in building ELT pipelines and working with large-scale datasets using Azure Data Factory (ADF) and Databricks.
  • Strong proficiency in SQL (T-SQL, Spark SQL) for data extraction, transformation, and optimisation.
  • Proficiency in Azure Databricks (PySpark, Delta Lake, Spark SQL) for big data processing.
  • Knowledge of data warehousing concepts and relational database design, particularly with Azure Synapse Analytics.
  • Experience working with Delta Lake for schema evolution, ACID transactions, and time travel in Databricks (see the brief sketch after this list).
  • Strong Python (PySpark) skills for big data processing and automation.
  • Experience with Scala (optional but preferred for advanced Spark applications).
  • Experience working with Databricks Workflows & Jobs for data orchestration.
  • Strong knowledge of feature engineering and feature stores, particularly the Databricks Feature Store for ML training and inference.
  • Experience with data modelling techniques to support analytics and reporting.
  • Familiarity with real-time data processing and API integrations (e.g., Kafka, Spark Streaming).
  • Proficiency in CI/CD pipelines for data deployment using Azure DevOps, GitHub Actions, or Terraform for Infrastructure as Code (IaC).
  • Understanding of MLOps principles, including continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning models.
  • Experience with performance tuning and query optimisation for efficient data workflows.
  • Strong understanding of query optimisation techniques in Databricks (caching, partitioning, indexing, and auto-scaling clusters).
  • Experience monitoring Databricks workloads using Azure Monitor, Log Analytics, and Databricks Performance Insight.
  • Familiarity with cost optimisation strategies in Databricks and ADLS Gen2 (e.g., managing compute resources efficiently).
  • Problem-solving mindset – Ability to diagnose issues and implement efficient solutions.
  • Experience implementing Databricks Unity Catalog for data governance, access control, and lineage tracking.
  • Understanding of Azure Purview for data cataloging and metadata management.
  • Familiarity with object-level and row-level security in Azure Synapse and Databricks.
  • Experience working with Azure Event Hubs, Azure Data Explorer, or Kafka for real-time data streaming.
  • Hands-on experience with Databricks Structured Streaming for real-time and near-real-time data processing.
  • Understanding of Delta Live Tables (DLT) for automated ELT and real-time transformations.
  • Analytical thinking – Strong ability to translate business needs into technical data solutions.
  • Attention to detail – Ensures accuracy, reliability, and quality of data.
  • Communication skills – Clearly conveys technical concepts to non-technical stakeholders.
  • Collaboration – Works effectively with cross-functional teams, including Pricing, Underwriting, and IT.
  • Adaptability – Thrives in a fast-paced, agile environment with evolving priorities.
  • Stakeholder management – Builds strong relationships and understands business requirements.
  • Innovation-driven – Stays up to date with emerging technologies and industry trends.
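As an illustration of the Delta Lake points above (ACID transactions, schema evolution, and time travel), the brief sketch below shows an upsert (MERGE) into a Delta table followed by a time-travel query. Table and column names are hypothetical, and it assumes a Databricks or delta-spark enabled Spark session.

    # Illustrative sketch only: Delta Lake upsert with schema evolution, plus time travel.
    # Table and column names are hypothetical.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-upsert").getOrCreate()
    # Allow MERGE to pick up new columns arriving in the updates (schema evolution)
    spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

    target = DeltaTable.forName(spark, "silver.policies")    # existing Delta table
    updates = spark.table("bronze.policy_updates")           # latest batch of changes

    # ACID upsert: update matching policies, insert the rest
    (target.alias("t")
           .merge(updates.alias("u"), "t.policy_id = u.policy_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

    # Time travel: query the table as it stood at an earlier version (e.g. for audit checks)
    previous = spark.sql("SELECT * FROM silver.policies VERSION AS OF 3")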

ABOUT SOMERSET BRIDGE GROUP

Somerset Bridge Group is dedicated to delivering fair products and innovative services in the insurance industry. Our group focuses on underwriting, broking, and claims handling to provide sustainable and innovative insurance solutions. Somerset Bridge Insurance Services Limited, operating under GoSkippy and Vavista, offers insurance coverage to over 700,000 customers. Somerset Bridge Limited handles underwriting and claims, processing almost 50,000 claims annually. Somerset Bridge Shared Services Limited provides essential support functions to ensure operational efficiency and compliance. With a strong commitment to values, culture, and customer service excellence, Somerset Bridge Group is recognised for its industry awards and growth. Join us to be part of a dynamic team that fosters creative thinking and personal development.
We are very proud to have been awarded a Silver Accreditation from Investors in People! We recognise that all of our people contribute to our success. That’s why we are always looking for talented people to join our team - people who share our vision, who are passionate about what they do, and who want to be part of something special.

Responsibilities
  • Data Pipeline Development – Design, build, and maintain scalable ELT pipelines using Azure Databricks, Azure Data Factory (ADF), and Delta Lake to automate real-time and batch data ingestion.
  • Cloud Data Engineering – Develop and optimise data solutions within Azure, ensuring efficiency, cost-effectiveness, and scalability, leveraging Azure Synapse Analytics, ADLS Gen2, and Databricks Workflows.
  • Data Modelling & Architecture – Implement robust data models to support analytics, reporting, and machine learning, using Delta Lake and Azure Synapse.
  • Automation & Observability – Use Databricks Workflows, dbt, and Azure Monitor to manage transformations, monitor query execution, and implement data reliability checks.
  • Data Quality & Governance – Ensure data integrity, accuracy, and compliance with industry regulations (FCA, Data Protection Act, PRA) using Databricks Unity Catalog and Azure Purview.
  • Collaboration & Stakeholder Engagement – Work closely with Data Scientists, Pricing, Underwriting, and IT to deliver data-driven solutions aligned with business objectives.
  • Data Governance & Security – Implement RBAC, column-level security, row-access policies, and data masking to protect sensitive customer data and ensure FCA/PRA regulatory compliance.
  • Innovation & Continuous Improvement – Identify and implement emerging data technologies within the Azure ecosystem, such as Delta Live Tables (DLT), Structured Streaming, and AI-driven analytics, to enhance business capabilities (see the brief sketch after this list).
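As a small illustration of the automation and data-quality responsibilities above, the sketch below shows a single Delta Live Tables (DLT) step with a declarative expectation that drops records failing a quality rule. It only runs inside a DLT pipeline, and the upstream table name, column, and rule are hypothetical.

    # Illustrative sketch only: one Delta Live Tables step with a data-quality expectation.
    # Runs inside a DLT pipeline; the source table, column, and rule are hypothetical.
    import dlt
    from pyspark.sql.functions import col

    @dlt.table(comment="Cleansed policy records for pricing, claims and reporting")
    @dlt.expect_or_drop("valid_premium", "annual_premium > 0")   # drop rows failing the check
    def policies_clean():
        return (
            dlt.read_stream("policies_raw")                      # hypothetical upstream table
               .withColumn("annual_premium", col("annual_premium").cast("double"))
        )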