Data Engineer at HSBC Global Services Limited
Sheffield S1 4NB, United Kingdom
Full Time


Start Date

Immediate

Expiry Date

20 Nov 2025

Salary

0.0

Posted On

20 Aug 2025

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Version Control, Python, Data Engineering, Data Processing, Pipeline Development, SQL, GitHub, Transformation

Industry

Information Technology/IT

Description

If you’re looking to take an exciting new direction with your HSBC career, an internal move can open the door to many opportunities, allowing you to take on a new challenge and develop your skills. Bring your knowledge of our brand to a new role and grow yourself further.
Technology teams in the UK work closely with our global businesses to help design and build digital services that allow our millions of customers around the world to bank quickly, simply and securely. They also run and manage our IT infrastructure, data centres and core banking systems that power the world’s leading international bank. Our multi-disciplined teams include DevOps Engineers, IT Architects, Front and Back End Developers, Infrastructure specialists and Cyber experts, as well as Project and Programme managers.

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities

In this role you will:

  • Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy (a minimal sketch follows this list).
  • Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
  • Develop and optimize complex SQL queries and Python-based data transformation logic.
  • Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
  • Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
  • Ensure data quality, security, and compliance with best practices.
  • Monitor and troubleshoot performance issues in data pipelines.
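
As a rough illustration of the pipeline-development responsibility above, here is a minimal PySpark sketch of the kind of transformation this role would own: read raw data, cleanse it, and write a Delta Lake table on Databricks. The paths, column names, and table name are hypothetical placeholders, not HSBC's actual pipeline.

    # Illustrative PySpark job: read raw data, transform it, write a Delta table.
    # Paths, columns, and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transactions-pipeline").getOrCreate()

    # Read raw JSON files from a landing zone (hypothetical path).
    raw = spark.read.format("json").load("/mnt/landing/transactions/")

    # Basic cleansing and transformation: deduplicate, derive a date column,
    # and drop records with missing amounts.
    cleaned = (
        raw.dropDuplicates(["transaction_id"])
           .withColumn("transaction_date", F.to_date("event_timestamp"))
           .filter(F.col("amount").isNotNull())
    )

    # Persist to a curated Delta Lake table, partitioned by date.
    (
        cleaned.write.format("delta")
               .mode("overwrite")
               .partitionBy("transaction_date")
               .saveAsTable("curated.transactions")
    )

Tools such as Prophecy can generate similar PySpark code visually; the hand-written version above only shows the shape of the transformation logic involved.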

To be successful in this role you should meet the following requirements:

  • Must have experience with Delta Lake and Lakehouse architecture.
  • Proven experience in data engineering, working with Azure Databricks, PySpark, and SQL.
  • Hands-on experience with Prophecy for data pipeline development.
  • Proficiency in Python for data processing and transformation.
  • Experience with Apache Airflow for workflow orchestration (see the orchestration sketch after this list).
  • Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
  • Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
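
To illustrate the Airflow requirement above, the sketch below shows a small DAG that triggers an existing Databricks job once a day. The DAG name, connection ID, and job ID are hypothetical, and the operator assumes the apache-airflow-providers-databricks package is installed; this is an example of orchestration, not a prescribed setup.

    # Illustrative Airflow DAG: orchestrate a daily run of a Databricks job.
    # dag_id, databricks_conn_id, and job_id are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

    with DAG(
        dag_id="daily_transactions_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        # Trigger the Databricks job that runs the PySpark transformation.
        run_transform = DatabricksRunNowOperator(
            task_id="run_databricks_transform",
            databricks_conn_id="databricks_default",
            job_id=123,
        )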