Data Engineer at Weekday AI
India - Full Time


Start Date

Immediate

Expiry Date

05 Mar, 26

Salary

0.0

Posted On

05 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Databricks, ETL, ELT, Spark, Hive, Glue, SQL, Python, Scala, Cloud Platforms, Data Governance, Data Security, Documentation, Mentorship, Data Warehousing

Industry

Technology; Information and Internet

Description
This role is for one of Weekday’s clients.

Min Experience: 4 years
Location: India
Job Type: Full-time

We are seeking an experienced Data Engineer with strong expertise in Databricks and modern data engineering practices. The ideal candidate will have 4+ years of hands-on experience developing scalable data pipelines, managing distributed data systems, and supporting end-to-end CI/CD processes. This role involves architecting and optimizing data workflows that enable seamless data-driven decision-making across the organization.

Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines for large-scale datasets using Spark, Hive, or Glue (a minimal sketch follows this description).
- Develop and optimize data integration workflows using ETL tools such as Informatica, Talend, or SSIS.
- Write, optimize, and maintain complex SQL queries for data transformation and analytics.
- Collaborate with cross-functional teams, including data scientists, analysts, and product stakeholders, to translate requirements into technical solutions.
- Deploy data workflows using CI/CD pipelines and ensure smooth automated releases.
- Monitor and optimize data workflows for performance, scalability, and reliability.
- Ensure data accuracy, governance, security, and compliance across pipelines.
- Work with cloud-based data platforms such as Azure (ADF, Synapse, Databricks) or AWS (EMR, Glue, S3, Athena).
- Maintain clear documentation of data systems, architectures, and processes.
- Provide mentorship and technical guidance to junior team members.
- Stay current with emerging data engineering tools, technologies, and best practices.

What You’ll Bring
- Bachelor’s degree in IT, Computer Science, or a related field.
- 4+ years of experience in data engineering and distributed data processing.
- Strong hands-on experience with Databricks or equivalent technologies (Spark, EMR, Hadoop).
- Proficiency in Python or Scala.
- Experience with modern data warehouses (Snowflake, Redshift, Oracle).
- Solid understanding of distributed storage systems (HDFS, ADLS, S3) and formats such as Parquet and ORC.
- Familiarity with orchestration tools such as ADF, Airflow, or Step Functions.
- Databricks Data Engineering Professional certification (preferred; required as needed).
- Experience in multi-cloud or migration-based projects is a plus.
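For illustration only (not part of the client’s job description): a minimal PySpark ETL sketch of the kind of pipeline work listed above. It assumes a hypothetical orders dataset; the bucket paths and column names are placeholders, not details from the posting.

# Illustrative PySpark ETL sketch: extract raw CSV, transform, load Parquet.
# The paths and columns (order_id, amount, order_ts) are hypothetical
# placeholders, not taken from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read the raw CSV files, treating the first row as a header.
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/orders/")
)

# Transform: deduplicate, drop invalid rows, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write columnar Parquet, partitioned by date for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()

On Databricks, the same logic would typically run in a notebook or scheduled job, where the runtime provides the SparkSession.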
Responsibilities
The Data Engineer will design, build, and maintain scalable ETL/ELT pipelines for large-scale datasets. This role also involves collaborating with cross-functional teams to translate requirements into technical solutions and ensuring data accuracy and compliance across pipelines.