Senior Big Data Hadoop ML Engineer (GCP) - Canada
at Rackspace
Remote, British Columbia, Canada
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa
---|---|---|---|---|---|---|---
Immediate | 26 Nov, 2024 | Not Specified | 29 Aug, 2024 | 5 year(s) or above | Critical Thinking, Code, Cloud Services, Continuous Integration, Java, HBase, Infrastructure, Spark, Hive, Technical Requirements, Communication Skills, Python, Oozie | No | No
Description:
REQUIREMENTS:
- Proficiency in the Hadoop ecosystem with MapReduce, Oozie, Hive, Pig, HBase, and Storm (see the sketch after this list)
- Strong programming skills in Java, Python, and Spark
- Knowledge of public cloud services, particularly GCP.
- Experience applying infrastructure and DevOps principles in daily work: use tools for continuous integration and continuous deployment (CI/CD) and Infrastructure as Code (IaC), such as Terraform, to automate and improve development and release processes.
- Ability to tackle complex challenges and devise effective solutions, using critical thinking to approach problems from multiple angles and propose innovative approaches.
- Experience working effectively in a remote setting, with strong written and verbal communication skills. Collaborate with team members and stakeholders to ensure a clear understanding of technical requirements and project goals.
- Proven experience in engineering batch processing systems at scale.
- Hands-on experience in public cloud platforms, particularly GCP. Additional experience with other cloud technologies is advantageous.
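For a concrete reference point on the Hadoop bullet above, here is a minimal word-count sketch in Java against the standard org.apache.hadoop.mapreduce API. It illustrates the classic map/shuffle/reduce pattern expected in this role, not code from any actual Rackspace system; input and output paths come from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the per-word counts after the shuffle
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner cuts shuffle volume
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Submitted with the standard launcher, e.g. `hadoop jar wordcount.jar WordCount /input /output`; reusing the reducer as a combiner is the usual first optimization in batch jobs at scale.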
Responsibilities:
ABOUT THE ROLE:
We are seeking a highly skilled and experienced Senior Big Data Engineer to join our dynamic team. The ideal candidate will have a strong background in developing batch processing systems, with extensive experience in the Apache Hadoop ecosystem (MapReduce, Oozie, Hive, Pig, HBase, Storm). The role involves working in Java and building Machine Learning pipelines for data collection or batch inference. This is a remote position, requiring excellent communication skills and the ability to solve complex problems independently and creatively.
Work Location: Remote - Canada
WHAT YOU WILL BE DOING:
- Develop scalable and robust code for large-scale batch processing systems using Hadoop, Oozie, Pig, Hive, MapReduce, Spark (Java), Python, and HBase
- Develop, manage, and maintain batch pipelines supporting Machine Learning workloads (a sketch follows this list)
- Leverage GCP for scalable big data processing and storage solutions
- Implement automation/DevOps best practices for CI/CD, IaC, etc.
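To make the Machine Learning and GCP bullets above concrete, here is a minimal batch-inference sketch in Spark's Java API: it loads a previously trained Spark ML PipelineModel, scores one day's feature partition, and writes predictions back to object storage. This is an illustrative pattern, not Rackspace's actual pipeline; the gs:// bucket, paths, and the id column are hypothetical, and reading gs:// assumes the GCS connector (bundled by default on Dataproc).

```java
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class BatchInferenceJob {
  public static void main(String[] args) {
    // Runs unchanged on YARN or on GCP Dataproc
    SparkSession spark = SparkSession.builder()
        .appName("batch-inference")
        .getOrCreate();

    // Hypothetical inputs: a daily feature partition and a saved Spark ML pipeline
    Dataset<Row> features = spark.read().parquet("gs://example-bucket/features/dt=2024-08-29/");
    PipelineModel model = PipelineModel.load("gs://example-bucket/models/example-pipeline/");

    // transform() applies the whole pipeline (feature prep + model) to the batch
    Dataset<Row> scored = model.transform(features);

    scored.select("id", "prediction") // "id" is a hypothetical key column
        .write()
        .mode("overwrite")
        .parquet("gs://example-bucket/predictions/dt=2024-08-29/");

    spark.stop();
  }
}
```

The same job runs against an on-prem Hadoop estate by swapping the gs:// paths for hdfs:// ones, which is what makes Spark a common bridge between existing Hadoop clusters and GCP.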
REQUIREMENT SUMMARY
- Experience: Min 5.0 to Max 10.0 year(s)
- Industry: Information Technology/IT
- Functional Area: IT Software - Application Programming / Maintenance
- Role: Software Engineering
- Education: Graduate (Computer Science, Software Engineering, Engineering)
- Proficiency: Proficient
- Openings: 1
- Location: Remote, Canada