Senior Data Engineer - GCP (Dataproc & GKE) at Jobgether
Texas, United States - Full Time


Start Date: Immediate
Expiry Date: 02 Feb 2026
Salary: Not specified
Posted On: 04 Nov 2025
Experience: 10 years or above
Remote Job: Yes
Telecommute: Yes
Sponsor Visa: No
Skills: Data Engineering, GCP, Dataproc, GKE, Data Architecture, Cloud Solutions, IAM, Version Control, Cloud Security, Python, PySpark, GitHub Actions, Composer, Kubernetes
Industry: Internet Marketplace Platforms

Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer – GCP (Dataproc & GKE) in Texas (USA). In this role, you will design, implement, and optimize cloud-based data solutions leveraging Google Cloud Platform (GCP) services such as Dataproc and GKE. You will work closely with data architects, engineers, and stakeholders to translate business requirements into scalable, secure, and efficient architectures. The position requires hands-on experience with cloud infrastructure, data pipelines, and orchestration tools, while ensuring high-quality, reliable data solutions. Collaboration with internal and external teams will be critical, and you will actively contribute to problem-solving, system improvements, and best-practice adoption. This role offers a dynamic, remote-friendly environment with opportunities to influence enterprise-scale data solutions.

Accountabilities:
- Design and implement robust, scalable data architectures on GCP, including Dataproc and GKE.
- Build, maintain, and optimize data pipelines and workflows to ensure efficient processing of large-scale data (illustrative sketches follow this description).
- Collaborate with stakeholders to understand requirements and deliver reliable data solutions.
- Troubleshoot and resolve issues related to data systems, cloud infrastructure, and orchestration processes.
- Ensure compliance with best practices for security, governance, and data quality.
- Participate in code reviews, documentation, and knowledge sharing with team members.
- Engage with clients to support data requirements, problem-solving, and solution adoption.

Requirements:
- 12+ years of experience in data engineering or related roles.
- Hands-on experience with GCP services, specifically Dataproc and GKE.
- Strong understanding of data architecture design and cloud-based solutions.
- Familiarity with IAM, version control (GitHub), and cloud security practices.
- Excellent communication skills to interact with clients, stakeholders, and technical teams.
- Optional but beneficial skills: Python, PySpark, GitHub Actions, Composer, Kubernetes orchestration.

Benefits:
- Competitive salary with remote work flexibility.
- Comprehensive healthcare, dental, and vision coverage.
- Retirement plans with employer contributions.
- Paid time off and company holidays.
- Professional development and training opportunities.
- Collaborative, innovative, and inclusive work environment.

Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the three candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team. Thank you for your interest! #LI-CL1
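As an illustration of the pipeline work described above, here is a minimal PySpark sketch of the kind of batch job that might run on a Dataproc cluster. It is a sketch only: the bucket paths, column names, and aggregation are hypothetical placeholders, not details taken from the posting.

    # Minimal PySpark sketch of a batch job that might run on Dataproc.
    # All paths and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

    # Read raw events from Cloud Storage (hypothetical bucket and layout).
    events = spark.read.parquet("gs://example-raw-bucket/events/")

    # Roll events up to one count per user per day.
    daily = (
        events
        .withColumn("event_date", F.to_date("event_ts"))
        .groupBy("user_id", "event_date")
        .agg(F.count("*").alias("event_count"))
    )

    # Write the rollup back to Cloud Storage, partitioned by date.
    (
        daily.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("gs://example-curated-bucket/daily_event_counts/")
    )

    spark.stop()

A job like this would typically be submitted to the cluster with gcloud dataproc jobs submit pyspark, or scheduled from Composer as sketched next.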
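Since the posting also lists Composer among the beneficial skills, here is a hedged sketch of a Cloud Composer (Airflow) DAG that submits the job above to an existing Dataproc cluster via DataprocSubmitJobOperator from the Google provider package. The project, region, cluster, and bucket names are assumptions for illustration.

    # Hedged sketch of a Composer (Airflow) DAG submitting the PySpark job
    # above to an existing Dataproc cluster. Names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import (
        DataprocSubmitJobOperator,
    )

    PROJECT_ID = "example-project"    # assumption: replace with a real project
    REGION = "us-central1"            # assumption: replace with a real region
    CLUSTER_NAME = "example-cluster"  # assumption: an existing Dataproc cluster

    PYSPARK_JOB = {
        "reference": {"project_id": PROJECT_ID},
        "placement": {"cluster_name": CLUSTER_NAME},
        "pyspark_job": {
            # Hypothetical GCS location of the packaged job script.
            "main_python_file_uri": "gs://example-code-bucket/jobs/daily_rollup.py"
        },
    }

    with DAG(
        dag_id="daily_events_rollup",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        submit_rollup = DataprocSubmitJobOperator(
            task_id="submit_rollup",
            project_id=PROJECT_ID,
            region=REGION,
            job=PYSPARK_JOB,
        )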