Senior Azure Data Engineer (Databricks)
at Capco
London, England, United Kingdom
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa
---|---|---|---|---|---|---|---
Immediate | 07 May, 2025 | Not Specified | 07 Feb, 2025 | N/A | Spark, SQL, Python, Hive, Transformation, NoSQL, Structured Data, ETL, Kafka, Orchestration, DevOps, Pipelines, Jenkins, PII, Distributed Systems, Working Experience, Hadoop | No | No
Required Visa Status:
Citizen, Green Card (GC), US Citizen, Student Visa, H1B, CPT, OPT, H4 (Spouse of H1B)
Employment Type:
Full Time, Part Time, Permanent, Independent (1099), Contract (W2), C2H Independent, C2H W2, Contract (Corp-to-Corp), Contract to Hire (Corp-to-Corp)
Description:
SENIOR AZURE DATA ENGINEER (DATABRICKS)
Joining Capco means joining an organisation that is committed to an inclusive working environment where you’re encouraged to #BeYourselfAtWork. We celebrate individuality and recognise that diversity and inclusion, in all forms, are critical to success. It’s important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table, so we’d love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to accommodate any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.
ABOUT YOU
Capco is looking for hardworking, innovative, and creative people to join our Digital Engineering team.
We’d also like to see:
- Practical experience of engineering best practices, while being obsessed with continuous improvement.
- Deep technical knowledge of two or more technologies and a curiosity for learning other parts of the stack.
- Experience delivering software/technology projects leveraging Agile methodologies.
- You have personally made valuable contributions to products, solutions and teams and can articulate the value to customers.
- You have played a role in the delivery of critical business applications and, ideally, customer-facing applications.
- You can communicate complex ideas to non-experts with eloquence and confidence.
- You bring an awareness and understanding of new technologies being used in finance and other industries, and you love to experiment.
- A passion for being part of the engineering team that is forming the future of finance.
SKILLS & EXPERTISE:
You will have experience working with some of the following methodologies and technologies.
- Excellent experience across the data engineering lifecycle: you will have created data pipelines that take data through all layers, from generation and ingestion through transformation to serving (a batch bronze-to-silver sketch follows this list).
- Experience of modern software engineering principles and of creating clean, well-tested applications.
- Enthusiasm and ability to pick up new technologies as needed to solve problems.
- Hands-on working experience of the Databricks platform; you must have experience of delivering projects that use Delta Lake, orchestration, Unity Catalog and Spark Structured Streaming on Databricks.
- Experience with data lakehouse architecture and data warehousing principles, including data modelling, schema design and working with semi-structured and structured data.
- Extensive experience using Python, PySpark and the wider Python ecosystem, with good exposure to common Python libraries; proficiency in SQL; and experience developing in other languages, e.g. Scala or Java.
- Experience with big data technologies and distributed systems such as Hadoop, HDFS, Hive, Spark, Databricks and Cloudera.
- Experience developing near-real-time event-streaming pipelines with tools such as Kafka, Spark Streaming and Azure Event Hubs (a streaming sketch follows the batch one below).
- Good understanding of the differences and trade-offs between SQL and NoSQL, ETL and ELT.
- Proven experience with DevOps and with building robust production data pipelines and CI/CD pipelines on, e.g., Azure DevOps, Jenkins, CircleCI or GitHub Actions.
- Exposure to working with PII and sensitive data, and an understanding of data regulations such as GDPR.
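To make the lifecycle bullet above concrete, here is a minimal PySpark sketch of a batch hop between lakehouse layers (bronze to silver). It is an illustrative sketch only: the table names (`bronze.payments`, `silver.payments`) and the de-duplication and typing rules are assumptions for the example, not details taken from this role.

```python
# Minimal sketch: a batch hop between lakehouse layers (bronze -> silver).
# Table names and the cleaning rules below are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw ingested records, schema-on-read, possibly with duplicates.
bronze = spark.read.table("bronze.payments")  # placeholder table name

# Silver: typed, de-duplicated records ready for the serving layer.
silver = (
    bronze
    .dropDuplicates(["event_id"])                         # one row per event
    .filter(col("event_id").isNotNull())                  # drop unusable rows
    .withColumn("amount", col("amount").cast("double"))   # enforce types
    .withColumn("event_date", to_date(col("event_time")))
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.payments")
```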
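For the streaming bullet, here is a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and appends them to a Delta table. The broker address, topic, schema and paths are placeholders; Azure Event Hubs can be consumed the same way through its Kafka-compatible endpoint.

```python
# Minimal sketch: near-real-time ingestion from Kafka into a Delta table.
# Broker, topic, schema and paths are placeholders, not values from this role.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Schema of the JSON payload carried in the Kafka message value.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the topic as an unbounded stream.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "payments")                   # placeholder topic
    .load()
)

# Kafka delivers bytes: cast the value to a string, then parse the JSON.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), schema).alias("e"))
    .select("e.*")
)

# Append to Delta; the checkpoint lets the stream restart where it left off
# and avoids writing duplicates into the table.
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/payments")  # placeholder
    .outputMode("append")
    .start("/mnt/delta/payments")                               # placeholder
)
```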
REQUIREMENT SUMMARY
Experience: Min N/A, Max 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - System Programming
Specialisation: Software Engineering
Education: Graduate
Proficiency: Proficient
Vacancies: 1
Location: London, United Kingdom