Start Date
Immediate
Expiry Date
25 Jun, 24
Salary
Up to $10,500
Posted On
27 Mar, 24
Experience
0 year(s) or above
Remote Job
No
Telecommute
No
Sponsor Visa
No
Skills
Analytical Skills, CLP, Hadoop, Job Scheduling, Parallel Processing, RPG, Integration, Hive, SQL
Industry
Information Technology/IT
We are looking for a highly skilled Data Engineer/Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, ETL processes, and data models. They should have a strong background in database management, data warehousing, and programming languages such as SQL, Python, or Java. The candidate should also have experience working with big data technologies such as Hadoop, Spark, or Kafka. Strong analytical and problem-solving skills are essential for this role.
Duration: 12-month contract
Working hours: Monday – Friday (8.30AM to 6.00PM / 9.00AM to 6.30PM)
Working locations: Central.
Salary: up to $10,500
Design, develop, and implement data processing pipelines to process large volumes of structured and unstructured data
Should have good knowledge of and working experience with databases and Hadoop (Hive, Impala, Kudu).
Should have good knowledge of and working experience with scripting (shell scripts, awk, quick automation for integrating third-party tools) and BMC monitoring tools.
Good understanding of and knowledge in data modelling using industry-standard data models such as FSLDM.
Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
Experience working with NoSQL and virtualized database environments is a plus.
Implement data transformations, aggregations, and computations using Spark RDDs, DataFrames, and Datasets, and integrate them with Elasticsearch (see the sketch after this list)
Develop and maintain scalable and fault-tolerant Spark applications, adhering to industry best practices and coding standards
Troubleshoot and resolve issues related to data processing, performance, and data quality in the Spark-Elasticsearch integration
Monitor and analyze job performance metrics, identify bottlenecks, and propose optimizations in both Spark and Elasticsearch components
Prior experience developing banking applications using ETL and Hadoop is mandatory, as is in-depth knowledge of the technology stacks used at global banks.
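To illustrate the Spark-to-Elasticsearch responsibility above, here is a minimal PySpark sketch of a DataFrame aggregation written to an Elasticsearch index via the elasticsearch-hadoop connector. The table name, index name, host, and column names are illustrative assumptions, not part of this role's actual stack, and the connector jar must be on the Spark classpath.

```python
# Minimal sketch: aggregate transaction data and write the result to
# Elasticsearch using the elasticsearch-hadoop connector.
# Table, index, host, and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("txn-aggregation")
         .getOrCreate())

# Hypothetical Hive source table; replace with the real pipeline input.
txns = spark.table("staging.transactions")

# Example transformation: daily totals and counts per account.
daily_totals = (txns
                .withColumn("txn_date", F.to_date("txn_timestamp"))
                .groupBy("account_id", "txn_date")
                .agg(F.sum("amount").alias("total_amount"),
                     F.count("*").alias("txn_count")))

# Write the aggregated DataFrame to an Elasticsearch index
# (requires the elasticsearch-hadoop connector on the classpath).
(daily_totals.write
 .format("org.elasticsearch.spark.sql")
 .option("es.nodes", "es-host:9200")
 .mode("append")
 .save("daily-account-totals"))
```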