Senior Data Engineer - Big Data at Commonwealth Bank
Sydney, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

24 Jun, 25

Salary

Not specified

Posted On

24 Mar, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Vault, Scala, Glue, Sqoop, Teradata, Ab Initio, Spark, Kafka, Oracle, Python, Java, Design Patterns, Data Models, Shell Scripting, Data Warehousing, Hadoop, MapReduce, Kimball, RDBMS, Data Integration

Industry

Information Technology/IT

Description

SENIOR DATA ENGINEER – BIG DATA

  • You are determined to stay ahead of the latest Cloud, Big Data, and Data Warehouse technologies.
  • We’re one of the largest and most advanced Data Engineering teams in the country.
  • Together we can build state-of-the-art data solutions that power seamless experiences for millions of customers.

TECHNICAL SKILLS

We use a broad range of tools, languages, and frameworks. We don’t expect you to know them all, but experience with or exposure to some of these (or their equivalents) will set you up for success in this team.

  • Experience in designing, building, and delivering enterprise-wide data ingestion, data integration, and data pipeline solutions using common programming languages (Scala, Java, or Python) on Big Data and Data Warehouse platforms. Preferably 5+ years of hands-on experience in a Data Engineering role.
  • Experience in building data solutions on the Hadoop platform, using Spark, MapReduce, Sqoop, Kafka, and various ETL frameworks for distributed data storage and processing. Preferably 5+ years of hands-on experience.
  • Experience in building data solutions using AWS Cloud technologies (EMR, Glue, Iceberg, Kinesis, MSK/Kafka, Redshift/PostgreSQL, DocumentDB/MongoDB, S3, etc.). Preferably 3+ years of hands-on experience and an AWS Certified Data Engineer - Associate certification.
  • Ability to produce conceptual, logical, and physical data models using data modelling techniques such as Data Vault, Kimball, 3NF, etc., and demonstrated expertise in design patterns (FSLDM, IBM IFW DW).
  • Strong Unix/Linux Shell scripting and programming skills in Scala, Java, or Python.
  • Proficient in SQL scripting and writing complex SQL for building data pipelines.
  • Familiarity with data warehousing and/or data mart builds in Teradata, Oracle, or other RDBMS systems is a plus.
  • Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS Data Practitioner/Architect, or Ab Initio is a plus.
  • Experience with Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
Responsibilities

Please refer to the job description for details.