Modern Data Platform Data Engineer at DXC Technology
Sydney, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

10 May 2026

Salary

Not specified

Posted On

09 Feb 2026

Experience

2 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Azure Data Factory, Databricks, Microsoft Fabric, Snowflake, SQL, Data Pipelines, Data Transformation, Data Quality, Collaboration, Orchestration, Monitoring, Troubleshooting, Automation, Data Modelling, AI Initiatives, Cloud Environments

Industry

IT Services and IT Consulting

Description
DXC Technology (NYSE: DXC) is a leading enterprise technology and innovation partner delivering software, services, and solutions to global enterprises and public sector organizations, helping them harness AI to drive outcomes with speed at a time of exponential change. With deep expertise in Managed Infrastructure Services, Application Modernization, and Industry-Specific Software Solutions, DXC modernizes, secures, and operates some of the world’s most complex technology estates. Learn more at dxc.com.

We are seeking a hands-on Data Engineer experienced in building and supporting modern cloud-based data platforms. This role focuses on designing, developing, and operating scalable data pipelines and transformation workloads using technologies such as Azure Data Factory, Databricks, Microsoft Fabric, and Snowflake. The successful candidate will contribute to delivering reliable, governed, and high-performing data solutions that support analytics, reporting, and AI initiatives. This role suits an engineer comfortable working across ingestion, transformation, and modelling layers in a collaborative, delivery-focused environment.

Key Responsibilities

Data Pipeline Development
- Design, build, and maintain data ingestion and transformation pipelines using Azure Data Factory and Microsoft Fabric
- Develop batch and near real-time data workflows across structured and semi-structured sources
- Implement orchestration, scheduling, and dependency management
- Monitor pipeline execution and troubleshoot failures

Databricks Engineering
- Develop scalable data transformation solutions using Databricks (PySpark / Spark SQL); an illustrative sketch follows the skills lists below
- Build and maintain notebooks, jobs, and reusable frameworks
- Optimise cluster utilisation, job performance, and cost efficiency
- Support Delta Lake design and implementation

Modern Data Platform Delivery
- Contribute to the architecture and implementation of lakehouse or warehouse platforms
- Support data modelling and data preparation for downstream analytics
- Implement data quality, validation, and lineage practices
- Participate in platform enhancements and technical improvements

Collaboration & Integration
- Work with architects, analysts, and stakeholders to translate requirements into technical solutions
- Support integration across enterprise systems and cloud services
- Participate in code reviews and collaborative development practices
- Document pipelines, processes, and operational procedures

Operations & Continuous Improvement
- Monitor the performance and reliability of data workflows
- Troubleshoot incidents and perform root cause analysis
- Implement logging, alerting, and observability practices
- Identify automation opportunities to improve platform efficiency

Mandatory Skills & Experience

Applicants must demonstrate:
- Proven hands-on experience building data pipelines in modern cloud environments
- Strong experience with Azure Data Factory or Fabric Data Pipelines
- Experience developing transformations in Databricks using PySpark or Spark SQL
- Experience working with Snowflake for data warehousing, transformation, or data sharing workloads
- Solid SQL skills and understanding of data transformation techniques
- Experience working with structured and semi-structured datasets
- Understanding of data engineering lifecycle practices and delivery processes

Technical Platform Experience:
- Microsoft Fabric ecosystem exposure (Lakehouse, Warehouse, or Pipelines)
- Azure data platform components
- Snowflake platform experience, including schema design, performance optimisation, and workload management
- Git-based source control practices
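For illustration only, the following is a minimal sketch of the kind of Databricks transformation work these responsibilities describe: reading a semi-structured source, applying basic cleansing, and persisting the result as a Delta table. It is not taken from the role or from DXC; all paths, column names, and table names are hypothetical placeholders.

    # Illustrative only: a minimal PySpark transformation of the kind this
    # role involves. All names (paths, columns, tables) are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

    # Ingest a semi-structured source, e.g. JSON files landed by a pipeline.
    raw = spark.read.json("/mnt/landing/orders/")  # hypothetical landing path

    cleaned = (
        raw.dropDuplicates(["order_id"])                     # hypothetical business key
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount").isNotNull())              # basic validation rule
    )

    # Persist to the curated layer as a Delta table for downstream analytics.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.orders")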
Desirable / Nice-to-Have Skills
- Experience designing lakehouse architectures
- Exposure to streaming or event-driven data ingestion technologies (Kafka, Event Hubs, Kinesis)
- Experience with modern transformation and modelling frameworks (dbt or similar)
- Exposure to containerisation or orchestration technologies (Docker, Kubernetes)
- Data modelling for analytics or reporting workloads
- Experience with Power BI or analytics integration
- Infrastructure-as-code or deployment automation (Terraform, Bicep, ARM)
- Knowledge of data governance and cataloguing platforms (Purview, Collibra, Alation)
- Exposure to AI/ML data preparation workflows
- Familiarity with observability or data reliability tooling (Monte Carlo, Great Expectations, or similar); a simple validation sketch follows at the end of this description

What We’re Looking For
- A delivery-focused engineer comfortable building solutions hands-on
- Strong problem-solving and troubleshooting skills
- Ability to collaborate across technical and business teams
- A clear communicator with good documentation practices
- Curiosity to learn evolving data platform technologies

Key Outcomes
- Reliable and scalable data pipelines
- Well-performing Databricks workloads
- Trusted, high-quality datasets for analytics and reporting
- Continuous improvement of platform reliability and maintainability

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may ask recipients to provide personal information or to make payments as part of an illegitimate recruiting process. DXC does not make offers of employment via social media networks, never asks for money or payments from applicants at any point in the recruitment process, and never asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.

DXC Technology (NYSE: DXC) helps global companies run their mission-critical systems and operations while modernizing IT, optimizing data architectures, and ensuring security and scalability across public, private, and hybrid clouds. The world's largest companies and public sector organizations trust DXC to deploy services that drive new levels of performance, competitiveness, and customer experience across their IT estates. Learn more about how we deliver excellence for our customers and colleagues at DXC.com.
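As a closing illustration of the data quality, validation, and observability practices mentioned above, here is a minimal sketch using plain PySpark checks rather than a dedicated tool such as Great Expectations. It is an assumption-laden example, not DXC's method; the table, columns, and rules are hypothetical.

    # Illustrative only: simple data-quality gates enforced in PySpark.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    df = spark.read.table("silver.orders")  # hypothetical curated table

    # Rule 1: the business key must be unique.
    duplicate_keys = df.groupBy("order_id").count().filter(F.col("count") > 1).count()

    # Rule 2: mandatory fields must be populated.
    null_amounts = df.filter(F.col("amount").isNull()).count()

    # Failing the job here fails the orchestrating pipeline run, so bad data
    # is caught before it reaches analytics or reporting workloads.
    if duplicate_keys or null_amounts:
        raise ValueError(
            f"Data quality check failed: {duplicate_keys} duplicate keys, "
            f"{null_amounts} null amounts"
        )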
Responsibilities
The Data Engineer will design, develop, and operate scalable data pipelines and transformation workloads. They will also contribute to the architecture and implementation of data platforms and ensure data quality and reliability.