Senior Staff Data Engineer (AWS Cloud)

at Commonwealth Bank

Sydney, New South Wales, Australia

Start Date: Immediate
Expiry Date: 23 Apr, 2025
Salary: Not Specified
Posted On: 23 Jan, 2025
Experience: N/A
Skills: Apache Spark, Automation, SQL, Containerization, Data Visualization, Glue, Data Governance, Scala, Splunk, Athena, DBT, Application Security, Infrastructure, Python, Scripting, Code, Languages, AppDynamics, Tableau, Data Flow, Docker, Productivity, Apache Kafka
Telecommute: No
Sponsor Visa: No

Description:

SEE YOURSELF IN OUR TEAM

You will be joining our CTO Engineering team. We are seeking passionate, cloud-savvy data engineers across diverse levels to help drive our commitments in our Data Strategy and achieve our vision for Commbank.data, a unified, world-class data platform enabling the next generation of AI (Artificial Intelligence) enabled applications.
You will collaborate with industry leaders to drive forward our mission of revolutionising the fintech landscape. As part of our team, you will be empowered to deliver world-class, competitive banking products and services with unparalleled levels of service and reliability.
We support our people with the flexibility to balance where work is done with at least half your time each month connecting in office. We also have many other flexible working options available including changing start and finish times, part-time arrangements and job share to name a few. Talk to us about how these arrangements might work for you.

We are interested in hearing from people who:

  • Possess expertise in languages and tools such as Python, SQL, and DBT.
  • Have comprehensive knowledge of AWS services, particularly those related to data storage and processing, including S3, RDS, Redshift, and Glue.
  • Are proficient in the data development lifecycle, with a focus on data ingestion processes, data transformation pipelines, data integration, and visualization.
  • Have a deep understanding of DataOps practices.
  • Are skilled in data profiling, basic numerical statistics, and data quality assessments (a brief sketch follows this list).
  • Have experience managing large-scale data processing and analytics.
  • Can ensure code quality through peer programming, code reviews, and automated pipeline release management.
  • Are capable of mentoring junior engineers and sharing knowledge to enhance software development practices.
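
As a brief illustration of the data profiling and quality assessments above, here is a minimal sketch in Python using pandas; the dataset, column name, and null-rate threshold are hypothetical examples, not part of the role.

    import pandas as pd

    def profile_column(df: pd.DataFrame, column: str) -> dict:
        """Collect basic profiling statistics for one column."""
        series = df[column]
        return {
            "null_rate": series.isna().mean(),
            "distinct_count": series.nunique(),
            "min": series.min(),
            "max": series.max(),
            "mean": series.mean() if pd.api.types.is_numeric_dtype(series) else None,
        }

    def passes_quality_check(df: pd.DataFrame, column: str, max_null_rate: float = 0.01) -> bool:
        """A simple data quality gate: fail if the null rate exceeds the threshold."""
        return df[column].isna().mean() <= max_null_rate

    if __name__ == "__main__":
        # Toy data standing in for a real ingested dataset.
        df = pd.DataFrame({"balance": [100.0, 250.5, None, 75.0]})
        print(profile_column(df, "balance"))
        print("quality ok:", passes_quality_check(df, "balance", max_null_rate=0.3))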

Technical skills

This is a senior technical role spanning a broad range of tools, languages, and frameworks. You will be a good match if you have previous experience in:

  • Strong experience in DataOps practices such as:
      • Automation of data pipeline deployments
      • Monitoring and maintaining data pipeline health and performance (see the monitoring sketch after this list)
      • Implementing robust data governance and data quality frameworks
      • Utilizing CI/CD tools to streamline data flow and enhance productivity
      • Developing data warehouses or data lakes
  • Experience with AWS services such as Glue, Lambda, SageMaker, or EMR.
  • Familiarity with Databricks and Apache Spark is highly desirable.
  • Infrastructure as code using CloudFormation or Terraform
  • Microservices design and implementation of highly scalable APIs
  • CI/CD tools (e.g., GitHub Actions)
  • Exposure to technologies like Apache Kafka, AWS Kinesis, or similar
  • Visualization tools such as Power BI, Tableau, or similar
  • AWS certification with exposure to services like Lambda, S3, Redshift, Athena, EMR, Glue, DynamoDB
  • Scripting using Python, SQL, DBT, Scala, Go
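
To make the pipeline health monitoring point above concrete, the sketch below polls recent AWS Glue job runs with boto3 and surfaces runs that did not succeed. The job name is a placeholder, and a real setup would route alerts through your observability stack rather than print them.

    import boto3

    def recent_failed_runs(job_name: str, max_results: int = 10) -> list:
        """Return recent Glue job runs that are neither succeeded nor still in flight."""
        glue = boto3.client("glue")
        response = glue.get_job_runs(JobName=job_name, MaxResults=max_results)
        return [
            run for run in response["JobRuns"]
            if run["JobRunState"] not in ("SUCCEEDED", "RUNNING", "STARTING")
        ]

    if __name__ == "__main__":
        # "daily-ingest-job" is a placeholder, not a real job name.
        for run in recent_failed_runs("daily-ingest-job"):
            print(run["Id"], run["JobRunState"], run.get("ErrorMessage", ""))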

DEVELOPED KNOWLEDGE/UNDERSTANDING OF (GOOD TO HAVE)

  • Application Security
  • Containerization (Docker, Kubernetes)
  • Observability tools (e.g., Observe, Splunk, or AppDynamics); a minimal event-forwarding sketch follows this list
  • Additional experience with Tableau or Power BI for data visualization
  • Experience with advanced data profiling techniques, statistical methods, data structure analysis, and data content analysis will be a plus.
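
As a hedged illustration of the observability bullet, this sketch forwards a structured pipeline-health event to a Splunk HTTP Event Collector using the requests library. The endpoint URL and token are placeholders; production setups typically ship events through a forwarder or vendor agent rather than hand-rolled HTTP calls.

    import requests

    SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
    SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

    def send_pipeline_event(pipeline: str, status: str, duration_s: float) -> None:
        """Push a structured pipeline-health event to Splunk's HTTP Event Collector."""
        payload = {
            "event": {"pipeline": pipeline, "status": status, "duration_s": duration_s},
            "sourcetype": "_json",
        }
        response = requests.post(
            SPLUNK_HEC_URL,
            json=payload,
            headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
            timeout=5,
        )
        response.raise_for_status()

    if __name__ == "__main__":
        send_pipeline_event("daily-ingest", "SUCCEEDED", 42.5)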

WORKING WITH US

Whether you are passionate about customer service, driven by data, or called by creativity, a career with us is for you.
Our people bring their diverse backgrounds and unique perspectives to build a respectful and inclusive workplace with flexible work locations. We are looking for people who truly live our values of care, courage and commitment, and we will offer you exceptional opportunities to develop your career with us.
If you are ready to be part of a forward-thinking company that values innovation, teamwork, and security, apply now and help shape the future of fintech with us!
If you’re already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you’ll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.
We’re aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.
Advertising End Date: 06/02/202


REQUIREMENT SUMMARY

Experience: Min N/A, Max 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - DBA / Datawarehousing
Specialisation: Software Engineering
Education: Graduate
Proficiency: Proficient
Vacancies: 1
Location: Sydney NSW, Australia