DATA ANALYST (Hadoop)
at Dezire Technologies Pte Ltd
Singapore, Southeast, Singapore
Start Date | Expiry Date | Salary | Posted On | Experience | Telecommute | Sponsor Visa
---|---|---|---|---|---|---
Immediate | 17 Aug, 2024 | USD 9000 Monthly | 18 May, 2024 | 3 year(s) or above | No | No

Skills: HBase, Relational Databases, Computer Science, Hadoop, Information Technology, Performance Tuning, Hive, SQL, Optimization, Spark, Transformation
Required Visa Status:
- Citizen / US Citizen
- GC (Green Card)
- H1B
- OPT / CPT
- Student Visa
- H4 (Spouse of H1B)
Employment Type:
- Full Time / Part Time
- Permanent
- Independent (1099)
- Contract (W2)
- Contract (Corp 2 Corp)
- Contract to Hire (Independent)
- Contract to Hire (W2)
- Contract to Hire (Corp 2 Corp)
Description:
Responsibilities:
- Design, develop, and maintain Hadoop-based data processing applications, workflows, and pipelines.
- Leverage Hadoop technologies such as HDFS, MapReduce, Spark, Hive, and HBase for batch and real-time data processing.
- Develop and optimize data ingestion processes, ensuring data is collected, transformed, and loaded into the Hadoop cluster efficiently.
- Monitor and optimize the performance of Hadoop jobs and data pipelines to ensure scalability and efficiency.
- Work closely with data engineers, data scientists, and other cross-functional teams to understand data requirements and deliver effective solutions.
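For illustration, the MapReduce model mentioned above can be sketched in plain Python. This is a toy, single-process sketch of the map, shuffle, and reduce phases (real Hadoop distributes these across a cluster via HDFS and YARN); the word-count task is the usual illustrative example, not part of this role's duties:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, value) pairs -- here, (word, 1) for a word count.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values -- here, summing the counts.
    return {key: sum(values) for key, values in groups.items()}

def word_count(records):
    return reduce_phase(shuffle_phase(map_phase(records)))

print(word_count(["big data big pipelines", "data pipelines"]))
# {'big': 2, 'data': 2, 'pipelines': 2}
```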
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years' experience as a Hadoop developer, with strong knowledge of Hadoop ecosystem components such as Spark, Hive, and HBase.
- Hands-on experience with data ingestion, transformation, and ETL processes using Hadoop.
- Strong knowledge of SQL and experience with relational databases.
- Experience with performance tuning and optimization of Hadoop applications.
- Good understanding of data warehousing concepts and methodologies.
REQUIREMENT SUMMARY
Experience: Min 3.0, Max 8.0 year(s)
Industry: Information Technology/IT
Category: IT Software - DBA / Datawarehousing
Role: Software Engineering
Education: Graduate
Specialization: Computer Science, Information Technology, Technology
Proficiency: Proficient
Openings: 1
Location: Singapore, Singapore