Staff Data Engineer at Commonwealth Bank
Sydney, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

30 Oct, 25

Salary

0.0

Posted On

30 Jul, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, AWS, Perl, Apache Kafka, Glue, Scripting Languages, TeamCity, Octopus, Ab Initio, Tableau, dbt

Industry

Information Technology/IT

Description

STAFF DATA ENGINEER

  • You are passionate about staying ahead of the latest AWS Cloud and Data Lake technologies
  • We’re one of the largest and most advanced Data Engineering teams in the country
  • Together we can build state-of-the-art data solutions that power seamless experiences for millions of customers

TECHNICAL SKILLS:

This is a senior technical role requiring a broad range of tools, languages, and frameworks. You will be a good match if you have previous experience with:

  • Building data warehouses or data lakes
  • Proficiency in SQL and dbt
  • Proficiency in Ab Initio, Unix/Linux, SQL, and scripting languages such as shell or Perl
  • Experience with and knowledge of cloud-based infrastructure (AWS)
  • Experience with AWS services such as Glue, Lambda, SageMaker, or EMR (an illustrative sketch follows this list)
  • Infrastructure as code using AWS CDK/CloudFormation or Terraform (desirable)
  • CI/CD tools such as GitHub Actions, TeamCity, or Octopus
  • Familiarity with streaming technologies such as Apache Kafka, AWS Kinesis, or similar
  • Familiarity with visualisation tools such as Power BI, Tableau, or QuickSight
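As a brief illustration of the kind of AWS and scripting work listed above (a sketch only, not part of the original advertisement), the Python snippet below shows one way a Glue ETL job might be triggered and monitored with boto3. The job name and argument keys are hypothetical placeholders, not details taken from this role.

    """Minimal sketch: trigger an AWS Glue job and poll its status with boto3.

    All names here (job name, argument keys) are hypothetical placeholders.
    """
    import time

    import boto3

    glue = boto3.client("glue", region_name="ap-southeast-2")

    def run_glue_job(job_name: str, run_date: str) -> str:
        """Start a Glue job run and return its final state."""
        # Start the job; --run_date is a made-up job argument for this sketch.
        response = glue.start_job_run(
            JobName=job_name,
            Arguments={"--run_date": run_date},
        )
        run_id = response["JobRunId"]

        # Poll until the run reaches a terminal state.
        while True:
            run = glue.get_job_run(JobName=job_name, RunId=run_id)
            state = run["JobRun"]["JobRunState"]
            if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
                return state
            time.sleep(30)

    if __name__ == "__main__":
        # "daily_enrichment_job" is an example name, not a real job.
        final_state = run_glue_job("daily_enrichment_job", "2025-07-30")
        print(f"Glue job finished with state: {final_state}")
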
Responsibilities

You will be a part of the Data Provisioning and Enrichment Crew, which is responsible for:

  • Designing and delivering an integrated Corporate Services data platform that provides a single source of truth for timely and accurate data;
  • Delivering Risk, Treasury, and Finance Data Platform outcomes; and
  • Overseeing the delivery of all data outcomes to support Finance, Risk, and Treasury.
