Senior Data Engineer
at Capco
London, England, United Kingdom
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 18 Feb, 2025 | Not Specified | 19 Nov, 2024 | N/A | Snowflake, SQL, Glue, Spark, Cloudera, Git, Jenkins, Athena, Hadoop, ETL, AWS, NoSQL, Azure, Flume, Oozie, Hive, Data Structures, Sqoop, Kafka, Graph Databases, Airflow | No | No |
Required Visa Status:
- US Citizen
- GC (Green Card)
- H1B
- CPT
- OPT
- Student Visa
- H4 (Spouse of H1B)
Employment Type:
- Full Time
- Part Time
- Permanent
- Independent - 1099
- Contract – W2
- C2H W2
- C2H Independent
- Contract – Corp 2 Corp
- Contract to Hire – Corp 2 Corp
Description:
SKILLS & EXPERTISE:
You will have experience working with some of the following methodologies/technologies:
- Strong experience with at least one cloud provider: AWS, Azure, or GCP
- Hands-on experience with Scala, Python, or Java
- Experience with most of the following data and cloud technologies: Hadoop, Hive, Spark, Pig, Sqoop, Flume, PySpark, Databricks, Cloudera, Airflow, Oozie, S3, Glue, Athena, Terraform, etc.
- Experience with schema design for structured and semi-structured data
- Experience with messaging technologies such as Kafka, Spark Streaming, and Amazon Kinesis
- Strong experience in SQL
- Good understanding of the differences and trade-offs between SQL and NoSQL, and between ETL and ELT
- Understanding of containerisation, graph databases, and ML algorithms
- Experience with data lake formation and data warehousing principles and technologies such as BigQuery, Redshift, and Snowflake
- Experience using version control tools such as Git
- Experience building CI/CD pipelines with Jenkins or CircleCI
- Enthusiasm and the ability to pick up new technologies as needed to solve problems
Joining Capco means joining an organisation that is committed to an inclusive working environment where you’re encouraged to #BeYourselfAtWork. We celebrate individuality and recognise that diversity and inclusion, in all forms, are critical to success. It’s important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table – so we’d love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.
REQUIREMENT SUMMARY
Min: N/A | Max: 5.0 year(s)
Information Technology/IT
IT Software - Other
Software Engineering
Graduate
Proficient
1
London, United Kingdom