Senior Data Pipeline Engineer - Global Team (India) at Jobgether
India -
Full Time


Start Date

Immediate

Expiry Date

29 Dec, 25

Salary

0.0

Posted On

30 Sep, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Pipeline Engineering, PySpark, SQL, ETL Tools, Databricks, Data Modeling, Problem-Solving, Collaboration, Communication, Self-Starter, Airflow, dbt, Snowflake, Large-Scale Datasets, Documentation, Architecture Diagrams

Industry

Internet Marketplace Platforms

Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Pipeline Engineer - Global Team in India.

As a Senior Data Pipeline Engineer, you will be a key contributor to a global data engineering team, designing and building end-to-end data pipelines that drive high-impact business insights. You will work with large-scale datasets, leveraging modern data engineering tools to optimize reliability, performance, and efficiency. Collaborating closely with stakeholders and cross-functional teams, you will ensure data pipelines meet business requirements while maintaining high standards for code quality and documentation. This role offers a unique opportunity to work on strategic, high-visibility projects in a fast-paced, remote-friendly environment while shaping the future of the data engineering function in India.

Accountabilities

- Design, build, and maintain end-to-end data pipelines supporting critical business operations.
- Implement best practices for data modeling, pipeline development, and architecture.
- Collaborate with stakeholders to integrate business logic into centralized pipelines.
- Troubleshoot complex data pipeline issues using PySpark, SQL, and other ETL tools.
- Create comprehensive documentation, architecture diagrams, and training materials.
- Learn and apply modern data engineering technologies, including Databricks, Spark, and Delta.
- Support the growth of the data engineering team and contribute to strategic pipeline improvements.

Requirements

- Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline.
- 6+ years of experience as a Data Engineer or in related technical functions.
- Strong expertise in PySpark, SQL, and building large-scale data pipelines.
- Experience with Databricks, ETL tools, and handling complex datasets.
- Familiarity with data modeling and best practices in pipeline development.
- Strong problem-solving, communication, and collaboration skills.
- Self-starter with the ability to work independently and within a global team.
- Bonus: experience with Airflow, dbt, Snowflake, or equivalent platforms.

Benefits

- Competitive salary and comprehensive benefits package.
- Flexible work hours and a remote-friendly environment.
- Generous vacation policy, parental leave, and wellness budget.
- Learning reimbursement, career coaching, and professional development opportunities.
- High-impact work environment with exposure to strategic global projects.
- Empowering culture focused on ownership, respect, and trust.

About the Hiring Process

Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process, designed to identify top talent efficiently and fairly.

🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.

The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team. Thank you for your interest! #LI-CL1
Responsibilities
Design, build, and maintain end-to-end data pipelines supporting critical business operations. Collaborate with stakeholders to integrate business logic into centralized pipelines and troubleshoot complex data pipeline issues.