Sr Data Engineer (India) at Jobgether
India - Full Time


Start Date

Immediate

Expiry Date

18 Jan, 26

Salary

0.0

Posted On

20 Oct, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Python, SQL, ETL, Cloud Platforms, Data Warehousing, Data Governance, Data Quality, Streaming Data, Data Modeling, Infrastructure as Code, Problem Solving, Communication, Mentoring, Collaboration, Analytics

Industry

Internet Marketplace Platforms

Description
This position is posted by Jobgether on behalf of a partner company currently looking for a Sr Data Engineer in India. The role calls for a highly skilled engineer to design, build, and maintain scalable data infrastructure and pipelines that power a global platform. In this role, you will handle vast amounts of structured and unstructured data, ensuring high reliability, performance, and data quality. You will collaborate closely with cross-functional teams, including analytics, product, and engineering, to deliver solutions that enable data-driven decisions. The position provides the opportunity to influence data architecture, optimize pipelines, implement best practices, and mentor junior engineers. Ideal candidates thrive in a remote-first, innovative environment and are passionate about building robust, modern data platforms. Your work will directly impact business growth and the efficiency of global operations.

Accountabilities:
- Design, implement, and maintain robust batch and streaming data pipelines.
- Develop and optimize ETL/ELT processes to ensure data quality and consistency.
- Build scalable data architectures using cloud-native technologies and modern platforms.
- Collaborate with Data Scientists, Analytics Engineers, Product Managers, and Engineering teams to deliver data solutions.
- Implement data quality monitoring frameworks and resolve data issues proactively.
- Optimize performance of data systems by analyzing query patterns and fine-tuning configurations.
- Maintain and enhance data warehouses, lakes, and processing clusters for high availability and security.
- Promote engineering best practices, including code reviews, automated testing, CI/CD, and documentation.
- Support data governance initiatives through cataloging, lineage tracking, and access controls.
- Mentor junior engineers and contribute to team knowledge sharing and innovation.

Requirements:
- 6+ years of experience in data engineering with production-grade data systems.
- Expert proficiency in Python and data processing libraries (PySpark, Pandas, NumPy).
- Advanced SQL skills, including query optimization, indexing, and performance tuning.
- Hands-on experience with modern data platforms such as Databricks, Snowflake, Redshift, BigQuery, or Azure Synapse.
- Strong experience with ETL/ELT tools (Airflow, dbt, Fivetran, or similar).
- Proficiency with cloud platforms (AWS preferred), including S3, Lambda, Glue, EMR, and Kinesis.
- Experience with streaming data technologies such as Kafka or AWS Kinesis.
- Knowledge of data modeling (dimensional modeling, data vault, denormalization strategies).
- Familiarity with data governance, cataloging, and lineage tools (Atlan, Alation, Informatica, Collibra).
- Experience implementing data quality frameworks (Monte Carlo, Great Expectations, or custom solutions).
- Knowledge of Infrastructure as Code (Terraform, CloudFormation).
- Strong analytical, problem-solving, and communication skills.
- Bachelor’s degree in Computer Science, Engineering, or a related field; advanced degrees are a plus.
- Fluent English communication skills and the ability to work in a global, distributed team.

Benefits:
- Competitive, location-based compensation.
- Fully remote work flexibility.
- Opportunity to work on high-impact, global data infrastructure projects.
- Collaborative, learning-focused, and innovation-driven environment.
- Ownership of data architecture and influence on engineering best practices.
- Career growth and skill development in modern cloud and data technologies.
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job’s core requirements and past success factors to determine your match score.
🎯 The top 3 candidates with the highest match are automatically shortlisted.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong candidate is overlooked.
The process is transparent, skills-based, and unbiased, focusing solely on your fit for the role. Once the shortlist is completed, it is shared with the hiring company, which then determines next steps such as interviews or additional assessments. Thank you for your interest! #LI-CL1
Responsibilities
Design, implement, and maintain robust batch and streaming data pipelines while collaborating with cross-functional teams to deliver data solutions. Optimize data systems for performance and ensure high data quality and reliability.