Sr Azure Data Engineer at ZapCom Solutions
Dallas, TX 75201, USA
Full Time


Start Date

Immediate

Expiry Date

21 Jun, 25

Salary

0.0

Posted On

21 Mar, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Indexing, Scala, Power BI, Tableau, Apache Spark, Query Optimization, Data Visualization, Data Integration, Data Transformation, Performance Tuning, Data Engineering, Stored Procedures, Python, SQL Server, Data Processing, Data Warehousing

Industry

Information Technology/IT

Description

JOB INFORMATION

Date Opened: 03/06/2025
Job Type: Full time
Industry: Financial Services
City: Dallas
State/Province: Texas
Country: United States
Zip/Postal Code: 75201

ABOUT US

Zapcom is a global Product Engineering and Technology Services company, specializing in bespoke, customer-centric solutions across industries like BFSI, e-commerce, retail, travel, transportation, and hospitality. Headquartered in the US, with a presence in India, Europe, Canada, and MENA, we excel in transforming ideas into tangible outcomes using AI, ML, Cloud solutions, and full-stack development.
At Zapcom, we value accountability, ownership, and equality, empowering you to excel. We listen to your aspirations and provide the support needed to achieve them. Our diverse, collaborative culture ensures every voice is heard, driving innovation and business value. With global opportunities and expansion plans, now is the perfect time to join our team. Work on impactful projects that shape the future. Apply today and be part of something extraordinary!

JOB DESCRIPTION

  • Design and implement data ingestion, transformation, and movement using Azure Data Factory (ADF), Azure Synapse, and Data Lake.
  • Collaborate with business stakeholders, data scientists, and engineers to build robust data solutions.
  • Develop and manage high-volume, real-time, and batch data ingestion pipelines using Azure Data Factory (ADF).
  • Implement event-driven architectures for real-time data movement and processing.
  • Develop large-scale data processing solutions using Azure Databricks and Apache Spark with PySpark, Scala, or Python.
  • Optimize data partitioning, caching, and indexing for efficient performance.
  • Manage complex transformations and aggregations for structured and unstructured datasets.
  • Design and implement high-performance data models in Azure Synapse Analytics using dedicated SQL Pools and Spark Pools.
  • Optimize query performance, workload management, and cost efficiency in Synapse Analytics.
  • Implement columnstore indexes, partitioning strategies, and data caching to enhance performance.
  • Design and manage secure, scalable data lakes using Azure Data Lake Storage Gen2 (ADLS Gen2).
  • Implement RBAC (Role-Based Access Control), encryption, and data masking to ensure security.
  • Implement Azure Monitor, Log Analytics, and Application Insights for data pipeline monitoring and troubleshooting.
  • Optimize cost management, auto-scaling, and performance tuning across Azure services.

REQUIREMENTS

  • 8+ years of experience in data engineering, ETL development, and cloud-based data integration.
  • Strong experience with Azure Data Factory (ADF): ETL/ELT pipeline orchestration and data movement.
  • Azure Databricks: Large-scale data transformation and big data processing with Apache Spark, PySpark, Scala, or Python.
  • Azure Synapse Analytics: Data warehousing, SQL Pools, Spark Pools, and performance tuning.
  • Azure Data Lake Storage Gen2 (ADLS Gen2): Secure, scalable data lake architecture.
  • SQL Server: Advanced T-SQL, stored procedures, indexing, and query optimization.
  • Power BI, Tableau, or other BI tools for data visualization.
  • Experience in the financial services/banking domain.
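As a concrete illustration of the indexing and query-optimization skills listed above, the sketch below shows how adding an index changes a query's access path from a full table scan to an index search. It uses SQLite (from Python's standard library) purely as a stand-in so the example is self-contained; the same principle applies to SQL Server nonclustered indexes and execution plans. The table and column names are hypothetical.

```python
import sqlite3

# Hypothetical table, used only to demonstrate index-driven query plans.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [(f"acct{i % 100}", i * 1.5) for i in range(1000)],
)

# Without an index on the filter column, the planner falls back to a full scan.
scan_detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE account = 'acct42'"
).fetchall()[0][3]
print(scan_detail)  # e.g. "SCAN trades"

# After indexing the filter column, the planner switches to an index search.
conn.execute("CREATE INDEX idx_trades_account ON trades (account)")
search_detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trades WHERE account = 'acct42'"
).fetchall()[0][3]
print(search_detail)  # e.g. "SEARCH trades USING INDEX idx_trades_account (account=?)"
```

On SQL Server, the analogous check would be done by inspecting the actual execution plan for a seek versus a scan on the relevant table.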
Responsibilities

Please refer to the job description for details.
