Big Data Lead at FalconSmartIT
Wimbledon SW19, England, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

11 Sep, 25

Salary

0.0

Posted On

12 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Integration, Leadership, DevOps, ETL, Platform Development, Infrastructure, Strategy, Code, Data Access, Data Science, Data Quality, Hadoop, Availability, Reliability, Team Management, Data Governance, Technical Standards, Apache Spark, Glue, Data Services, DataStage

Industry

Information Technology/IT

Description

Job Title: Big Data Lead
Job Type: Contract
Job Location: Wimbledon, UK
Job Description:
This role requires senior Data Engineering experience building automated data pipelines on IBM DataStage & DB2, AWS, and Databricks, from source systems through operational databases to the curation layer, using the latest cloud technologies. Experience delivering complex pipelines will be significantly valuable to how D&G maintains and delivers world-class data pipelines.
Knowledge of the following areas is essential:

DATA ENGINEERING EXPERIENCE:

  • Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases.
  • AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
  • IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, UrbanCode
  • Programming Languages: Proficiency in Python, SQL.
  • Data Warehousing & ETL: Experience with modern ETL frameworks and data warehousing techniques.
  • DevOps & CI/CD: Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
  • Familiarity with big data technologies like Apache Spark, Hadoop, or similar.
  • Test automation skills
  • ETL/ELT tools and creating common data sets across on-prem (IBM DataStage ETL) and cloud data stores
  • Leadership & Strategy: Lead Data Engineering team(s) in designing, developing, and maintaining highly scalable and performant data infrastructures.
  • Customer Data Platform Development: Architect and manage our data platforms using IBM (legacy platform) & Databricks on AWS technologies (e.g., S3, Lambda, Glacier, Glue, EventBridge, RDS) to support real-time and batch data processing needs.
  • Data Governance & Best Practices: Implement best practices for data governance, security, and data quality across our data platform. Ensure data is well-documented, accessible, and meets compliance standards.
  • Pipeline Automation & Optimisation: Drive the automation of data pipelines and workflows to improve efficiency and reliability.
  • Team Management: Mentor and grow a team of data engineers, ensuring alignment with business goals, delivery timelines, and technical standards.
  • Cross-Company Collaboration: Work closely with business stakeholders at all levels, including data scientists, finance analysts, MI, and cross-functional teams, to ensure seamless data access and integration with various tools and systems.
  • Cloud Management: Lead efforts to integrate and scale cloud data services on AWS, optimising costs and ensuring the resilience of the platform.
  • Performance Monitoring: Establish monitoring and alerting solutions to ensure the high performance and availability of data pipelines and systems to ensure no impact to downstream consumers.
Responsibilities

Please refer to the job description for details