Senior Data Engineer - Big Data & AWS Cloud at Commonwealth Bank
Sydney, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

28 Jul, 25

Salary

0.0

Posted On

28 Apr, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Sqoop, Hadoop, Python, MapReduce, Data Integration, Ab Initio, Data Warehousing, Kafka, Spark, Data Models, Scala, Glue, Data Vault, Teradata, Shell Scripting, Java, Oracle, RDBMS, Design Patterns, Kimball

Industry

Information Technology/IT

Description

SENIOR DATA ENGINEER – BIG DATA & AWS CLOUD

  • You are determined to stay ahead of the latest Cloud, Big Data and Data warehouse technologies.
  • We’re one of the largest and most advanced Data Engineering teams in the country.
  • Together we can build state-of-the-art data solutions that power seamless experiences for millions of customers.

TECHNICAL SKILLS

We use a broad range of tools, languages, and frameworks. We don’t expect you to know them all, but experience with, or exposure to, some of these (or their equivalents) will set you up for success in this team.

  • Experience in designing, building, and delivering enterprise-wide data ingestion, data integration, and data pipeline solutions using a common programming language (Scala, Java, or Python) on a Big Data and Data Warehouse platform.
  • Experience in building data solutions on the Hadoop platform using Spark, MapReduce, Sqoop, Kafka, and various ETL frameworks for distributed data storage and processing (a minimal, illustrative Spark sketch follows this list).
  • Experience in building data solutions using AWS Cloud technologies (EMR, Glue, Iceberg, Kinesis, MSK/Kafka, Redshift/PostgreSQL, DocumentDB/MongoDB, S3, etc.).
  • Ability to produce conceptual, logical, and physical data models using modelling techniques such as Data Vault, Kimball, and 3NF, with demonstrated expertise in industry design patterns (FSLDM, IBM IFW DW).
  • Strong Unix/Linux shell scripting and programming skills in Scala, Java, or Python.
  • Proficiency in a common programming language (Scala, Java, or Python) and SQL scripting, including writing complex SQL for building data pipelines.
  • Familiarity with data warehousing and/or data mart builds on Teradata, Oracle, or another RDBMS is a plus.
  • Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS Data Practitioner/Architect, or Ab Initio is a plus.
  • Experience with Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
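
To illustrate the kind of Spark pipeline work described in the list above, here is a minimal sketch of a batch ingestion and transformation job in Scala. The bucket paths, column names, and job name are illustrative placeholders only, not references to any Commonwealth Bank system.

    // Minimal sketch of a batch ingest/transform step with Apache Spark in Scala.
    // Paths and column names below are hypothetical placeholders.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object CustomerIngestJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("customer-ingest")
          .getOrCreate()

        // Read raw CSV files landed from an upstream system (hypothetical path).
        val raw = spark.read
          .option("header", "true")
          .csv("s3://example-bucket/landing/customers/")

        // Basic cleansing plus a load-date column for downstream modelling.
        val cleaned = raw
          .filter(col("customer_id").isNotNull)
          .withColumn("load_date", current_date())

        // Write partitioned Parquet to the curated zone (hypothetical path).
        cleaned.write
          .mode("overwrite")
          .partitionBy("load_date")
          .parquet("s3://example-bucket/curated/customers/")

        spark.stop()
      }
    }

A job like this would typically be packaged and submitted via spark-submit and scheduled by an orchestration tool; the same read-transform-write pattern extends to streaming sources such as Kafka via Spark Structured Streaming.
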
Responsibilities

Please refer to the job description above for details.
