Data Engineer - Azure Synapse & Databricks | Sydney at Deloitte
Sydney NSW 2000, Australia
Full Time


Start Date

Immediate

Expiry Date

09 Dec, 25

Salary

0.0

Posted On

10 Sep, 25

Experience

4 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Data Processing, Version Control

Industry

Information Technology/IT

Description


Job Requisition ID: 38009
World-class training and development.
Truly flexible work.
Hands-on mentoring on emerging technologies.
What will your typical day look like?
Deloitte Data and AI is changing the way businesses leverage cloud-native technology to solve the toughest Data and AI challenges faced by our customers! Join us as we continue to expand into a cloud-centric, Data- and AI-driven future, working on some of the largest and most complex projects in Australia.
About the team
Our Data & AI team comprises over 500 specialist practitioners across Australia, each of whom is constantly curious and excited about combining business acumen and technological expertise to create data-centric solutions that help solve complex problems and transform or reinvent our clients' businesses.
Our Data & AI team works with public and private sector clients, helping them design and implement world-leading solutions and capabilities. From solving air and ground transport network issues through our industry-first Optimal Reality digital twin solution, to bringing automation capabilities to vaccine rollout programs, to helping our clients move to a sustainable world, we are the partner of choice for our clients and our alliance vendors.
Our technology relationships with market leading vendors such as Amazon, Microsoft, Google, Apple, Snowflake, Salesforce, and Informatica enable our practitioners to be at the forefront of new and emerging capabilities and to deliver our suite of services at scale. We are consistently recognised as leaders in Cloud, Data, Analytics and AI, including in the latest analyst reports.
Enough about us, let’s talk about you.
We are seeking a highly skilled Data Engineer with strong expertise in Azure Synapse, Databricks, PySpark, Spark SQL, and SQL. The ideal candidate will have a solid understanding of modern data architectures, particularly Data Mesh principles, and experience building scalable and performant data pipelines. Data modeling experience is a plus, and prior work in the finance domain is a strong advantage.

REQUIRED SKILLS & QUALIFICATIONS:

4-8 years of experience working with Azure Synapse, Databricks, PySpark, Spark SQL, and SQL.
Solid understanding of distributed data processing.
Working knowledge of data mesh architecture and concepts.
Proficiency in developing and optimizing ETL/ELT pipelines in a cloud-based environment (see the illustrative sketch after this list).
Familiarity with version control and CI/CD pipelines.
Strong analytical and problem-solving skills.
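
For illustration only (not part of the role requirements): a minimal PySpark sketch of the kind of ETL/ELT pipeline work referenced above, assuming a hypothetical Azure Data Lake landing path and a hypothetical curated Delta table; all paths, table names, and column names below are placeholders.

# Minimal illustrative ETL sketch in PySpark (Databricks-style).
# Paths, table names, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_etl").getOrCreate()

# Extract: read raw transactions from a (hypothetical) landing zone.
raw = spark.read.format("parquet").load(
    "abfss://landing@examplelake.dfs.core.windows.net/transactions/"
)

# Transform: basic cleansing and enrichment.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Load: write to a curated Delta table, partitioned for downstream queries.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("ingest_date")
        .saveAsTable("curated.transactions"))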

Responsibilities

Design, build, and maintain scalable data pipelines using Databricks, PySpark, and Spark SQL.
Work with Azure Synapse to develop data integration and transformation solutions on Spark and SQL pools.
Write optimized and well-structured Spark and SQL queries for complex data sets.
Implement data quality and validation checks to ensure reliable data processing (see the illustrative sketch after this list).
Collaborate with data architects and analysts to align on data mesh architecture principles.
Support the deployment and monitoring of data pipelines in production environments.
Engage in performance tuning and optimization of big data processing jobs.
Participate in code reviews, technical discussions, and architecture planning.
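
For illustration only: a small sketch of the kind of data quality and validation checks mentioned above, using PySpark and Spark SQL; the table name, key column, and expectations are hypothetical placeholders.

# Illustrative data quality checks in PySpark / Spark SQL.
# Table name, key column, and expectations are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_dq_checks").getOrCreate()

df = spark.table("curated.transactions")

# Check 1: the key column must not contain nulls.
null_keys = df.filter(F.col("transaction_id").isNull()).count()

# Check 2: no duplicate keys.
dupes = (
    df.groupBy("transaction_id").count()
      .filter(F.col("count") > 1)
      .count()
)

# Check 3: row-count sanity check via Spark SQL.
row_count = spark.sql(
    "SELECT COUNT(*) AS n FROM curated.transactions"
).first()["n"]

failures = []
if null_keys > 0:
    failures.append(f"{null_keys} rows with a null transaction_id")
if dupes > 0:
    failures.append(f"{dupes} duplicated transaction_id values")
if row_count == 0:
    failures.append("table is empty")

if failures:
    # Fail fast so unreliable data never reaches downstream consumers.
    raise ValueError("Data quality checks failed: " + "; ".join(failures))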
