Start Date
Immediate
Expiry Date
16 Nov, 25
Salary
$80.00 per hour
Posted On
16 Aug, 25
Experience
0 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Good communication skills
Industry
Information Technology/IT
Experience: 10+ years, including:
· Experience in Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, YARN, Python, Flume, ZooKeeper, Sentry.
· Strong development experience creating Sqoop scripts and PySpark programs, and working with HDFS commands and HDFS file formats (see the first sketch after this list).
· Writing Hadoop/Hive/Impala scripts for gathering stats on tables after data loads (see the second sketch after this list).
· Hands-on experience with cloud databases.
· Hands-on experience migrating data from a Big Data environment to a Snowflake environment (see the third sketch after this list).
· Hands-on experience with the Snowflake platform, including Snowpipe and Snowpark.
· Experience with Big Data/Hadoop on data warehousing or data integration projects.
· Analysis, design, development, support, and enhancement of ETL/ELT in a data warehouse environment with Cloudera Big Data technologies.
· Creating Sqoop scripts, PySpark programs, HDFS commands, and HDFS file formats (Parquet, Avro, ORC, etc.); StreamSets pipeline creation; job scheduling.
· BS/BA degree, or a combination of education and experience.
· Strong SQL experience (Oracle, and Hadoop: Hive/Impala, etc.).
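First sketch: a minimal PySpark example of the ingestion work the list describes, rewriting delimited files already landed on HDFS (e.g. by a Sqoop import) as Parquet and registering the result as a Hive table. All paths, database names, and table names are hypothetical placeholders, and the staging database is assumed to already exist.

    from pyspark.sql import SparkSession

    # Session with Hive support so saveAsTable registers in the metastore.
    spark = (
        SparkSession.builder
        .appName("raw-to-parquet")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read delimited files previously landed on HDFS (hypothetical path).
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("hdfs:///data/landing/orders/")
    )

    # Rewrite as Parquet and register it so Hive/Impala can query it.
    raw.write.mode("overwrite").format("parquet").saveAsTable("staging.orders")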
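Second sketch: gathering table stats after a data load is typically a one-liner per table. This reuses the spark session and the hypothetical staging.orders table from the first sketch; the column names are also placeholders. On Impala, the equivalent would be COMPUTE STATS issued through impala-shell.

    # Table-level statistics after the load, issued as Spark SQL.
    spark.sql("ANALYZE TABLE staging.orders COMPUTE STATISTICS")

    # Column-level statistics for columns the optimizer cares about.
    spark.sql(
        "ANALYZE TABLE staging.orders "
        "COMPUTE STATISTICS FOR COLUMNS order_id, order_date"
    )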
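Third sketch: one common route for the Big Data-to-Snowflake migration is the Spark Snowflake connector (net.snowflake.spark.snowflake), which must be on the Spark classpath. This again reuses the spark session and hypothetical table above; every connection value is a placeholder, and a real job would pull credentials from a secrets store rather than hard-coding them.

    # Hypothetical Snowflake connection settings.
    sf_options = {
        "sfURL": "myaccount.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfPassword": "<from-secrets-store>",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "ETL_WH",
    }

    # Push the curated Hive table into a Snowflake table named ORDERS.
    (
        spark.table("staging.orders")
        .write
        .format("net.snowflake.spark.snowflake")
        .options(**sf_options)
        .option("dbtable", "ORDERS")
        .mode("overwrite")
        .save()
    )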
Job Type: Contract
Pay: $75.00 - $80.00 per hour
Work Location: In person
Please refer to the job description for details