Data Engineer at Weekday AI
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

20 Jan, 26

Salary

INR 4,500,000

Posted On

22 Oct, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Big Data, Databricks, Data Lakes, Data Modeling, Monitoring, Performance Optimization, Python, SQL, Scala, Cloud Platforms, Data Governance, Data Quality, Data Security, ETL/ELT Tools, CI/CD Practices, Problem Solving, Collaboration

Industry

Technology; Information and Internet

Description
This role is for one of Weekday's clients.

Salary range: Rs 2,000,000 – Rs 4,500,000 (i.e. INR 20–45 LPA)
Min Experience: 2 years
Location: Bengaluru
Job Type: Full-time

We are seeking a passionate and skilled Data Engineer with 2–5 years of experience to join our dynamic data team. The ideal candidate will have hands-on experience with Big Data technologies, Databricks, Data Lakes, and Data Modeling, along with strong monitoring and performance optimization expertise using Datadog. You will play a key role in designing, building, and maintaining scalable data pipelines and architectures to support business intelligence, analytics, and data-driven decision-making across the organization.

Key Responsibilities:

- Design, build, and maintain data pipelines to ingest, transform, and deliver structured and unstructured data from various sources to the enterprise data lake and downstream systems.
- Develop and optimize data models (both logical and physical) that support analytics and reporting needs across teams.
- Implement and manage data lakes and data warehouses using modern data engineering tools and frameworks to ensure data consistency, reliability, and performance.
- Leverage Databricks for big data processing, transformation, and advanced analytics, ensuring efficient data workflows and reusability of code and components.
- Monitor and optimize data pipeline performance using Datadog, setting up alerts, dashboards, and logging to ensure data systems operate at peak performance.
- Collaborate with data analysts, scientists, and software engineers to understand data requirements and translate them into scalable engineering solutions.
- Ensure data governance, quality, and security by implementing best practices in metadata management, lineage tracking, and access controls.
- Participate in code reviews, documentation, and knowledge sharing to maintain high standards of data engineering practice.
- Continuously evaluate emerging technologies and tools to enhance the data ecosystem's performance, scalability, and reliability.

Required Skills and Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- 2–5 years of experience as a Data Engineer or in a similar role.
- Proven experience with Big Data technologies such as Spark, Hadoop, Hive, or similar frameworks.
- Strong hands-on experience with Databricks for building and managing scalable data pipelines and transformations.
- Proficiency in data modeling and an understanding of star schema, snowflake schema, and normalization principles.
- Deep understanding of Data Lake and Data Lakehouse architectures for managing large-scale datasets.
- Experience with Datadog for monitoring data workflows, performance tuning, and system observability.
- Strong programming skills in Python, SQL, or Scala for data transformation and automation.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services such as S3, ADLS, BigQuery, or Snowflake.
- Excellent problem-solving and analytical skills, with keen attention to detail and data accuracy.
- Strong communication and collaboration skills with cross-functional teams.

Preferred Skills:

- Experience with ETL/ELT tools and workflow orchestration (e.g., Airflow, Prefect).
- Knowledge of CI/CD practices for data engineering workflows.
- Exposure to data security, compliance, and governance frameworks.
Responsibilities
Design, build, and maintain data pipelines to ingest, transform, and deliver structured and unstructured data. Collaborate with data analysts, scientists, and software engineers to understand data requirements and translate them into scalable engineering solutions.