Senior Data Engineer at Origin Energy
Melbourne VIC 3000, Australia
Full Time


Start Date

Immediate

Expiry Date

10 Sep 2025

Salary

Not specified

Posted On

10 Jun 2025

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Data Engineering, Transformations, Architecture, Automation, Data Warehousing, Airflow

Industry

Information Technology/IT

Description
  • Own scalable data solutions in a cloud-first, purpose-driven team.
  • Power the future of energy by engineering data for real-world impact.
  • Melbourne-based.

SKILLS & REQUIREMENTS

  • 5+ years’ experience in cloud-based data engineering (AWS, Redshift, S3).
  • Expert in SQL with a strong track record in complex data modelling and transformations.
  • Proficient in orchestration tools such as Airflow or similar; a minimal illustrative sketch follows this list.
  • Strong understanding of data warehousing, architecture, and performance optimisation.
  • Skilled in building and managing data pipelines from diverse sources.
  • Experience with CI/CD, automation, and operationalising analytics solutions.
  • Able to work independently and deliver outcomes in complex or evolving environments.
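
As a rough illustration of the orchestration and pipeline skills listed above, the sketch below shows a minimal Airflow DAG that stages a flat-file export from S3 and hands it to a load/transform step. It is a generic, assumed example only: the bucket, file, DAG and task names are placeholders invented for illustration, not Origin Energy systems.

    from datetime import datetime

    import boto3
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    BUCKET = "example-vpp-telemetry"      # hypothetical S3 bucket
    LOCAL_PATH = "/tmp/vpp_readings.csv"  # staging path on the worker

    def extract_from_s3(**context):
        # Download the day's flat-file export from S3 to local staging.
        boto3.client("s3").download_file(BUCKET, "exports/vpp_readings.csv", LOCAL_PATH)

    def load_and_transform(**context):
        # Placeholder for loading into the warehouse (e.g. a Redshift COPY)
        # and running the SQL transformations described in this posting.
        pass

    # "schedule" assumes Airflow 2.4+; older versions use schedule_interval.
    with DAG(
        dag_id="vpp_daily_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
        transform = PythonOperator(task_id="load_and_transform", python_callable=load_and_transform)
        extract >> transform
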
Responsibilities

ABOUT THE ROLE:

This role is responsible for designing and maintaining scalable data pipelines that power insights and automation for Virtual Power Plant (VPP) initiatives. Leveraging advanced SQL, data modelling, and the AWS tech stack, the role ensures reliable, high-quality data flows that support critical operational and analytical needs.

ROLES & RESPONSIBILITIES

  • Design, develop, and maintain robust, scalable data pipelines to process and transform data from multiple sources, including external APIs, databases, and flat files.
  • Collaborate with stakeholders across engineering, product, and analytics teams to understand data requirements and deliver high-quality solutions.
  • Apply expert-level SQL skills to transform and model data for analytics and operational use.
  • Implement and advocate for data modelling best practices (e.g. dimensional modelling, normalised/denormalised structures); an illustrative SQL sketch follows this list.
  • Use AWS data tools and services (e.g. Redshift, DynamoDB, S3) to build cloud-native data solutions.
  • Operate and enhance internal orchestration frameworks to ensure robust pipeline scheduling and monitoring.
  • Continuously identify opportunities for performance tuning and pipeline optimisation.
  • Document data pipelines, workflows, and APIs for maintainability and knowledge sharing.
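
To make the data modelling responsibility above concrete, here is a small, assumed sketch: a denormalised daily fact table built from a staging table joined to a site dimension, executed against Redshift through the Postgres-compatible psycopg2 driver. Every table, column, and connection detail is a made-up placeholder, not Origin Energy's actual model.

    import psycopg2

    # Aggregate staged readings into a daily fact table, conformed to the
    # site dimension (all object names are hypothetical).
    FACT_DAILY_OUTPUT_SQL = """
    INSERT INTO analytics.fact_vpp_daily_output (site_key, reading_date, kwh_exported)
    SELECT d.site_key,
           s.reading_ts::date  AS reading_date,
           SUM(s.kwh_exported) AS kwh_exported
    FROM   staging.vpp_readings s
    JOIN   analytics.dim_site d ON d.site_id = s.site_id
    GROUP  BY d.site_key, s.reading_ts::date;
    """

    def run_transform() -> None:
        conn = psycopg2.connect(
            host="example-cluster.redshift.amazonaws.com",  # placeholder endpoint
            dbname="analytics",
            user="etl_user",
            password="***",
            port=5439,
        )
        try:
            # The connection context manager commits on success, rolls back on error.
            with conn, conn.cursor() as cur:
                cur.execute(FACT_DAILY_OUTPUT_SQL)
        finally:
            conn.close()
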