Data Engineer/Developer
at PERSOLKELLY
Central Singapore, Southeast, Singapore
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 25 Jun, 2024 | USD 10500 Monthly | 27 Mar, 2024 | N/A | Analytical Skills, CLP, Hadoop, Job Scheduling, Parallel Processing, RPG, Integration, Hive, SQL | No | No |
Description:
We are looking for a highly skilled Data Engineer/Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, ETL processes, and data models. They should have a strong background in database management, data warehousing, and programming languages such as SQL, Python, or Java. The candidate should also have experience working with big data technologies such as Hadoop, Spark, or Kafka. Strong analytical and problem-solving skills are essential for this role.
Duration: 12 months contract
Working hours: Monday – Friday (8.30AM to 6.00PM / 9.00AM to 6.30PM)
Working location: Central.
Salary: up to $10,500 monthly
Responsibilities:
- Design, develop, and implement data processing pipelines to process large volumes of structured and unstructured data
- Good knowledge of and working experience with databases and Hadoop (Hive, Impala, Kudu)
- Good knowledge of and working experience with scripting (shell scripts, awk, quick automation for integrating third-party tools) and BMC monitoring tools
- Good understanding of data modelling using industry-standard data models such as FSLDM
- Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
- Experience working with NoSQL and virtualized database environments is a plus
- Implement data transformations, aggregations, and computations using Spark RDDs, DataFrames, and Datasets, and integrate them with Elasticsearch
- Develop and maintain scalable and fault-tolerant Spark applications, adhering to industry best practices and coding standards
- Troubleshoot and resolve issues related to data processing, performance, and data quality in the Spark-Elasticsearch integration
- Monitor and analyze job performance metrics, identify bottlenecks, and propose optimizations in both Spark and Elasticsearch components
- Prior experience developing banking applications using ETL and Hadoop is mandatory, as is in-depth knowledge of the technology stack at global banks
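To illustrate the kind of quick shell/awk automation mentioned in the responsibilities, here is a minimal sketch (the feed format and field names are hypothetical, not from this posting) that aggregates a pipe-delimited data feed, counting records and summing amounts per status:

```shell
#!/bin/sh
# Hypothetical pipe-delimited feed with fields: id|status|amount.
# awk builds associative arrays keyed by status, then prints
# "status count total" per status; sort makes the output stable.
printf '1|OK|10\n2|FAIL|5\n3|OK|7\n' |
awk -F'|' '
    { count[$2]++; total[$2] += $3 }
    END { for (s in count) printf "%s %d %d\n", s, count[s], total[s] }
' | sort
```

This prints one line per status (`FAIL 1 5`, `OK 2 17`); the same pattern scales to ad-hoc checks on large extract files without a full Spark job.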
REQUIREMENT SUMMARY
Min: N/A | Max: 5.0 year(s)
Information Technology/IT
IT Software - DBA / Datawarehousing
Software Engineering
Graduate
Proficient
1
Central Singapore, Singapore