Data Engineer at DXC Technology
Macquarie Park, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

01 Oct, 25

Salary

0.0

Posted On

01 Jul, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Data Engineering, Storage Solutions, Computer Science, Data Governance, SQL, Data Transformation

Industry

Information Technology/IT

Description

JOB DESCRIPTION:

DXC Technology (NYSE: DXC) helps global companies run their mission-critical systems and operations while modernizing IT, optimizing data architectures, and ensuring security and scalability across public, private and hybrid clouds. The world’s largest companies and public sector organizations trust DXC to deploy services to drive new levels of performance, competitiveness, and customer experience across their IT estates. Learn more about how we deliver excellence for our customers and colleagues at DXC.com.
At DXC we pride ourselves on delivering excellence in everything we do. For you, this means the opportunity to be part of delivering innovative solutions and helping to solve real business problems for a wide variety of valued clients. You will be a trusted advisor to our clients and a great leader to our practice management team in New Zealand.

THE SKILLS YOU WILL BRING

  • Bachelor’s degree in Computer Science, Data Engineering, or a related field (or equivalent experience).
  • 2+ years of experience in data engineering, with a strong focus on Databricks.
  • Proficiency in SQL and Python for data transformation and analysis (see the sketch after this list).
  • Strong experience in building and optimizing data pipelines, data lakes, and data warehouses.
  • Familiarity with cloud environments (e.g., Azure) and relevant data storage solutions (S3, ADLS, BigQuery).
  • Experience with Delta Lake and data lake management is a plus.
  • Knowledge of data governance, data quality best practices, and security standards.
  • Strong problem-solving skills, with the ability to troubleshoot and optimize complex data workflows.
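
The following is a minimal sketch of the kind of SQL-plus-Python transformation work these skills describe. It assumes a PySpark environment such as a Databricks notebook; the table and column names (raw.orders, analytics.daily_revenue, order_id, amount, order_ts) are hypothetical illustrations, not part of the role description.

```python
# Minimal sketch: combining the DataFrame API (Python) with Spark SQL
# for a simple cleanse-and-aggregate transformation. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleanup").getOrCreate()

# Load a hypothetical raw table registered in the metastore.
raw = spark.table("raw.orders")

# Python / DataFrame API: basic cleansing and a derived column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)
cleaned.createOrReplaceTempView("orders_clean")

# SQL: aggregate the cleaned data for downstream analysis.
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders_clean
    GROUP BY order_date
""")

daily_revenue.write.mode("overwrite").saveAsTable("analytics.daily_revenue")
```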
Responsibilities

WHAT YOU WILL BE DOING

As a Data Engineer with a focus on Databricks, you will be responsible for designing, implementing, and optimizing data pipelines and workflows within the Databricks environment. You will work closely with data scientists, analysts, and other engineers to develop robust and scalable data solutions, enabling efficient data processing and advanced analytics across the organization.

THIS ROLE WILL:

  • Design and develop scalable data pipelines using Databricks and Apache Spark to support data processing and analytics needs (see the sketch after this list).
  • Collaborate with data scientists and business analysts to implement data workflows that support machine learning, BI, and other analytical use cases.
  • Optimize and monitor data pipelines for performance, ensuring efficient processing and minimizing costs within Databricks.
  • Manage and organize large datasets in data lakes (e.g., Delta Lake) to support batch and real-time data processing.
  • Implement data governance, quality checks, and security best practices in Databricks.
  • Automate data workflows using Databricks’ job scheduling and orchestration features.
  • Stay updated with Databricks features and releases, recommending improvements and driving adoption of best practices.
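
Below is a minimal sketch of a Delta Lake batch pipeline of the sort these responsibilities describe, written as a single run that a Databricks job schedule could trigger. It assumes a Databricks workspace where the `spark` session and the Delta Lake runtime are already provided; the paths, table names, and columns (/mnt/landing/events/, curated.events, event_id, event_ts) are hypothetical.

```python
# Minimal sketch, assuming Databricks provides `spark` and the Delta runtime.
# Paths, table names, and columns are hypothetical; the point is only to
# illustrate an incremental batch upsert into a Delta table.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

SOURCE_PATH = "/mnt/landing/events/"   # hypothetical raw landing zone
TARGET_TABLE = "curated.events"        # hypothetical curated Delta table

def run_batch() -> None:
    """One batch run, intended to be triggered by a Databricks job schedule."""
    incoming = (
        spark.read.format("json")
             .load(SOURCE_PATH)
             .withColumn("event_date", F.to_date("event_ts"))
             .withColumn("ingested_at", F.current_timestamp())
    )

    if spark.catalog.tableExists(TARGET_TABLE):
        # Upsert into the existing Delta table, keyed on event_id.
        (DeltaTable.forName(spark, TARGET_TABLE)
                   .alias("t")
                   .merge(incoming.alias("s"), "t.event_id = s.event_id")
                   .whenMatchedUpdateAll()
                   .whenNotMatchedInsertAll()
                   .execute())
    else:
        # First run: create the table, partitioned by event date.
        (incoming.write.format("delta")
                 .partitionBy("event_date")
                 .saveAsTable(TARGET_TABLE))

run_batch()
```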