Senior Data Engineer (Microsoft Fabric Engineer) at Weekday AI
India - Full Time


Start Date

Immediate

Expiry Date

07 Jun, 26

Salary

0.0

Posted On

09 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Microsoft Fabric, ETL/ELT, Data Engineering, Data Warehousing, Data Pipelines, Azure Data Lake, Data Management, Data Architecture, Azure Data Factory, Databricks, Spark, Delta Lake, Python, SQL, Airflow, LLMs

Industry

Technology; Information and Internet

Description
This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Remote (India)
Job Type: Full-time

We are seeking an experienced Senior Data Engineer (Microsoft Fabric Engineer) to design, build, and scale modern cloud-native data platforms. The role focuses on developing robust ETL/ELT pipelines, data architectures, and high-performance data engineering solutions using Microsoft Fabric and Azure data technologies. The ideal candidate combines strong architectural thinking with hands-on engineering expertise to build scalable data pipelines, support advanced analytics, and collaborate with machine learning teams on AI-driven data workflows. The role requires deep experience with Databricks, Spark, Delta Lake, Python, and SQL, along with modern data orchestration and governance practices.

Key Responsibilities

Data Architecture & Platform Design
- Design and implement scalable cloud-native data architectures using Microsoft Fabric and Azure data services.
- Define best practices for data governance, architecture standards, and platform scalability.
- Build robust data models and data warehouse architectures to support analytics and AI workloads.

ETL/ELT Pipeline Development
- Design and develop high-performance ETL and ELT pipelines for large-scale data processing.
- Build and maintain data pipelines using Python and SQL to process and transform complex datasets.
- Ensure reliability, scalability, and performance optimization across data workflows.

Data Engineering & Platform Development
- Develop and manage data engineering workflows using Databricks, Spark, and Delta Lake.
- Implement data ingestion frameworks and support large-scale data processing environments.
- Optimize data pipelines for performance, reliability, and cost efficiency.

Orchestration & Automation
- Design workflow orchestration using tools such as Airflow or Azure-native orchestration services.
- Automate data processing pipelines and maintain operational reliability across systems.
AI & Advanced Data Workflows
- Collaborate with machine learning teams to support LLM, NLP, and AI-driven data workflows.
- Enable feature engineering and data pipelines that support advanced analytics and AI models.

Governance & Best Practices
- Establish best practices for data architecture, pipeline management, documentation, and security.
- Ensure compliance with enterprise data governance and quality standards.

Required Skills & Experience
- 4+ years of experience in data engineering, data architecture, or ETL development.
- Hands-on experience with Microsoft Fabric data engineering capabilities.
- Strong expertise in ETL/ELT development and data pipeline design.
- Experience working with Databricks, Apache Spark, and Delta Lake.
- Strong programming skills in Python and SQL.
- Experience building scalable data platforms on Azure cloud environments.
- Knowledge of data warehousing, data modeling, and large-scale data processing.
- Familiarity with LLM/NLP workflows or AI-driven data pipelines is an advantage.
- Bachelor's degree in Computer Science, Information Technology, or related field preferred.

Key Skills
Microsoft Fabric, ETL/ELT, Data Engineering, Data Warehousing, Data Pipelines, Azure Data Lake, Data Management, Data Architecture, Azure Data Factory, Databricks / Spark / Delta Lake
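For candidates unfamiliar with the ELT pattern the description emphasizes, a core building block is the incremental upsert (merge) of new records into an existing table. The sketch below is a toy, pure-Python illustration of that logic only; in this role it would instead be expressed as a Delta Lake MERGE running on Databricks or Fabric, and the `upsert` function and sample rows are invented for illustration.

```python
# Toy sketch of an ELT-style incremental upsert ("last write wins" merge).
# Plain-Python lists of dicts stand in for source and target tables here;
# a production pipeline would use a Delta Lake MERGE on Spark instead.

def upsert(target, updates, key="id"):
    """Merge `updates` into `target`, keyed by `key`; updates overwrite matches."""
    merged = {row[key]: row for row in target}
    for row in updates:
        # New keys are inserted; existing keys have their fields overwritten.
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}]
updates = [{"id": 2, "amount": 250}, {"id": 3, "amount": 300}]
print(upsert(target, updates))
# → [{'id': 1, 'amount': 100}, {'id': 2, 'amount': 250}, {'id': 3, 'amount': 300}]
```

The "dictionary keyed by primary key" shape mirrors how a MERGE statement matches rows on a join key before deciding between UPDATE and INSERT.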
Responsibilities
The role involves designing and implementing scalable cloud-native data architectures using Microsoft Fabric and Azure data services, alongside developing robust ETL/ELT pipelines using Python and SQL for large-scale data processing. Responsibilities also include building data engineering workflows with Databricks, Spark, and Delta Lake, and collaborating with machine learning teams on AI-driven data workflows.