Sr Data Engineer - Contract Role at InnoWave
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

07 Jun, 26

Salary

0.0

Posted On

09 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, SQL, ETL, ELT, Airflow, Prefect, AWS, Azure, Google Cloud, Snowflake, Redshift, BigQuery, Azure Synapse, Spark, Hadoop, Kafka

Industry

IT Services and IT Consulting

Description
We are looking for a Senior Data Engineer to design, build, and maintain scalable data pipelines and data platforms that support analytics, reporting, and machine learning initiatives. The ideal candidate should have strong expertise in data architecture, ETL/ELT processes, and modern cloud-based data platforms, along with the ability to work closely with data scientists, analysts, and engineering teams.

Key Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines for ingesting, processing, and transforming large volumes of data.
- Build and optimize ETL/ELT workflows for structured and unstructured data sources.
- Develop and manage data warehouses, data lakes, and data marts.
- Ensure data quality, integrity, and security across all data systems.
- Work closely with data scientists, analysts, and product teams to support data-driven initiatives.
- Optimize database performance and troubleshoot data pipeline failures or performance issues.
- Implement data governance, monitoring, and documentation best practices.
- Mentor junior data engineers and contribute to improving data engineering standards.
- Collaborate with DevOps teams for CI/CD pipelines and deployment of data solutions.

Required Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
- 6+ years of experience in Data Engineering or related roles.
- Strong experience with Python, SQL, and data processing frameworks.
- Hands-on experience with ETL tools and data pipeline orchestration tools (Airflow, Prefect, etc.).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Expertise in data warehousing solutions such as Snowflake, Redshift, BigQuery, or Azure Synapse.
- Experience working with big data technologies such as Spark, Hadoop, or Kafka.
- Strong understanding of data modeling and database design.
- Familiarity with version control systems and CI/CD practices.
Responsibilities
The role involves designing, building, and maintaining scalable data pipelines and platforms to support analytics, reporting, and machine learning initiatives. Key tasks include building and optimizing ETL/ELT workflows, managing data warehouses and data lakes, and ensuring data quality and security.