Start Date
Immediate
Expiry Date
09 Nov, 25
Salary
Not disclosed
Posted On
13 Aug, 25
Experience
0+ years
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Data Engineering, AWS, DevOps, Apache Spark, Snowflake, Data Modeling, Python, Azure, Data Warehouse, Airflow, dbt
Industry
Information Technology/IT
PREFERRED QUALIFICATIONS
· Experience with Airflow, dbt, or similar orchestration and transformation frameworks (see the sketch after this list).
· Exposure to DevOps & CI/CD in data environments.
· Familiarity with ML pipelines and feature stores.
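For a concrete flavor of the orchestration work mentioned above, here is a minimal sketch of an Airflow DAG, assuming Airflow 2.4+; the DAG name, task names, and daily schedule are illustrative placeholders, not details from this posting.

    # Minimal Airflow DAG sketch: a daily extract -> transform chain.
    # DAG name, task names, and schedule are illustrative placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw records from a source system.
        print("extracting source data")

    def transform():
        # Placeholder: clean the records and load them into the warehouse.
        print("transforming and loading")

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task  # transform runs only after extract succeeds

A dbt project would typically slot in as the transform step, triggered by an operator or a shell command rather than a Python callable.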
Job Types: Full-time, Contract
Application Deadline: 11/09/2025
ROLE OVERVIEW
We are seeking an exceptional Data Engineer to design, develop, and scale data pipelines, warehouses, and streaming systems that power mission-critical analytics and AI workloads. This is a hands-on engineering role where you will work with cutting-edge technologies, tackle complex data challenges, and shape the organization’s data architecture.
CORE RESPONSIBILITIES
· Design & own robust ETL/ELT pipelines using Python and Apache Spark for batch and near real-time processing (a minimal batch sketch follows this list).
· Architect and optimize enterprise data warehouses (Snowflake, BigQuery, Redshift, Azure Synapse) for performance and scalability.
· Build data models and implement governance frameworks ensuring data quality, lineage, and compliance.
· Engineer streaming data solutions using Kafka and Kinesis for real-time insights (see the streaming sketch after this list).
· Collaborate cross-functionally to translate business needs into high-quality datasets and APIs.
· Proactively monitor, tune, and troubleshoot pipelines for maximum reliability and cost efficiency.
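To make the batch bullet concrete, here is a minimal PySpark ETL sketch; the bucket paths, column names, and cleaning rule are invented for illustration and are not from this posting.

    # Minimal PySpark batch ETL sketch: read raw CSV, clean, write Parquet.
    # Paths and column names are illustrative placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example_etl").getOrCreate()

    # Extract: read raw events (hypothetical path; schema inference for brevity).
    raw = spark.read.csv("s3://example-bucket/raw/events/", header=True, inferSchema=True)

    # Transform: drop rows missing the key column and stamp the load date.
    clean = (
        raw.dropna(subset=["event_id"])          # hypothetical key column
           .withColumn("load_date", F.current_date())
    )

    # Load: write date-partitioned Parquet for downstream warehouse ingestion.
    clean.write.mode("overwrite").partitionBy("load_date").parquet(
        "s3://example-bucket/curated/events/"
    )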
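On the streaming side, a minimal Spark Structured Streaming consumer for a Kafka topic; the broker address and topic name are assumptions, and the job needs the spark-sql-kafka connector package on the classpath.

    # Minimal Structured Streaming sketch: consume a Kafka topic to the console.
    # Broker address and topic name are illustrative placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("example_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    # Kafka values arrive as bytes; cast to string before any parsing.
    decoded = events.selectExpr("CAST(value AS STRING) AS value")

    # Sink to the console for demonstration; a real job would write to a
    # warehouse table or object store with checkpointing enabled.
    query = decoded.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()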