Azure Databricks Developer-4 at Realign LLC
Pennsylvania, USA - Full Time


Start Date

Immediate

Expiry Date

09 Nov, 25

Salary

0.0

Posted On

10 Aug, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Computer Science, Data Security, SQL, ETL, Software Development, Git, Apache Spark

Industry

Information Technology/IT

Description

JOB DESCRIPTION:

We are seeking an experienced Azure Databricks Developer with a strong background in cloud-based data engineering and analytics solutions. The ideal candidate will have hands-on experience in building scalable data pipelines, transforming data, and integrating various services in the Azure ecosystem with a focus on Databricks, Spark, and Azure Data Lake.

Key Responsibilities:

  • Design and develop scalable data pipelines using Azure Databricks and Apache Spark
  • Perform data ingestion from multiple sources into Azure Data Lake / Delta Lake
  • Collaborate with data analysts, architects, and business stakeholders to translate business requirements into technical solutions
  • Implement CI/CD pipelines and ensure efficient deployment of Databricks notebooks and related components
  • Work with Azure Data Factory (ADF) for orchestration and integration
  • Ensure data quality, security, and governance best practices are followed
  • Monitor and optimize performance of big data workloads

REQUIRED SKILLS & QUALIFICATIONS:

  • 10+ years of experience in data engineering or software development
  • 5+ years of hands-on experience with Azure Databricks and Apache Spark
  • Proficiency with PySpark, SQL, and Delta Lake
  • Experience with Azure Data Factory, Azure Data Lake Storage (Gen2)
  • Strong understanding of distributed computing and ETL workflows
  • Familiarity with DevOps practices, Git, and CI/CD pipelines
  • Solid understanding of data security and governance on the Azure platform
  • Bachelor’s degree in Computer Science, Engineering, or a related field

REQUIRED SKILLS

SQL Application Developer
