Data Engineer
at SAKSOFT PTE LIMITED
Singapore, Southeast, Singapore
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 02 Sep, 2024 | USD 9500 Monthly | 02 Jun, 2024 | 7 year(s) or above | Hive, SQL, Hadoop, Integration, Parallel Processing, CLP, Analytical Skills, Job Scheduling, RPG | No | No |
Description:
Role: Data Engineer
Job level: More than 10 years of relevant experience (L4)
Key skills:
AS400, Hadoop, Data Lake, SQL, Informatica
KEY REQUIREMENTS:
- 7+ years of experience with AS400, RPG, CLP & SQL
- Knowledge of Aldon Change Management System (ACMS) is a must
- Experience in Hadoop development
- Experience in data lake (integration of different data sources into the data lake)
- SQL Stored Procedures/Queries/Functions
- Unix Scripting
- Experience with distributed computing, parallel processing, and working with large datasets
- Familiarity with big data technologies such as Hadoop, Hive, and HDFS
- Job Scheduling in Control-M
- Strong problem-solving and analytical skills with the ability to debug and resolve complex issues
- Familiarity with version control systems (e.g., Git) and collaborative development workflows
- Excellent communication and teamwork skills with the ability to work effectively in cross-functional teams.
Responsibilities:
- Design, develop, and implement data processing pipelines to process large volumes of structured and unstructured data
- Good knowledge of and hands-on experience with databases and Hadoop (Hive, Impala, Kudu)
- Good knowledge of and hands-on experience with scripting (shell scripting, awk, quick automation for integrating third-party tools) and BMC monitoring tools
- Good understanding of data modelling using industry-standard data models such as FSLDM
- Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
- Experience working with NoSQL and virtualized database environments is a plus
- Implement data transformations, aggregations, and computations using Spark RDDs, Data Frames, and Datasets, and integrate them with Elasticsearch
- Develop and maintain scalable and fault-tolerant Spark applications, adhering to industry best practices and coding standards
- Troubleshoot and resolve issues related to data processing, performance, and data quality in the Spark-Elasticsearch integration
- Monitor and analyze job performance metrics, identify bottlenecks, and propose optimizations in both Spark and Elasticsearch components
- Prior experience in developing banking applications using ETL and Hadoop is mandatory, as is in-depth knowledge of the technology stack at global banks
- Flexibility to stretch and take on challenges
- Communication and interpersonal skills
- Willingness to learn and execute
REQUIREMENT SUMMARY
Min: 7.0 Max: 10.0 year(s)
Information Technology/IT
IT Software - DBA / Datawarehousing
Software Engineering
Graduate
Proficient
1
Singapore, Singapore