Cloud Solution Engineer (Data Integration) at EnergyAustralia
Melbourne, Victoria, Australia - Full Time


Start Date

Immediate

Expiry Date

19 Nov, 25

Salary

Not specified

Posted On

20 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Interfaces, Code, Reliability, Design, Continuous Integration, Data Flow

Industry

Information Technology/IT

Description

BE IMPACTFUL WHEN YOU ARE APPLYING:

This role is central to ingesting and transforming data from multiple systems, applying scalable, event-driven and batch processing techniques, and delivering harmonised datasets through modern integration interfaces. To be successful, you will need most of the following:

  • Degree in Information Technology, Computer Science, or Information Systems, or formal training through a focused, reputable provider (e.g., General Assembly)
  • Proven experience delivering end-to-end cloud-based data integration solutions
  • Familiarity with data modelling, data profiling, and mapping between source and target systems
  • Strong understanding of DevOps/DataOps principles, including version-controlled deployments, monitoring, alerting, and automation
  • The following technical capabilities:
      • Programming and scripting: TypeScript, Python/PySpark (preferred), or similar (see the sketch after this list)
      • Infrastructure as Code: Terraform or equivalent
      • Cloud-native integration services (e.g., function-as-a-service, containers, pub/sub, workflow orchestration)
      • Streaming platforms and event-based architectures (e.g., Kafka or equivalents)
      • CI/CD pipelines: Git-based workflows and automated build/test/deploy practices (Azure DevOps desirable)
      • Strong SQL programming (PostgreSQL) for ETL and query performance tuning
      • Exposure to AWS Neptune (desirable)
  • Strong problem-solving and analytical skills, and the ability to provide technical leadership to the team
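To give a sense of the kind of work involved, here is a minimal PySpark batch ETL sketch. The source path, column names, and PostgreSQL target are hypothetical placeholders, not EnergyAustralia's actual systems, and the JDBC write assumes the PostgreSQL driver is on the Spark classpath.

    # Minimal PySpark batch ETL sketch. All paths, schemas, and table
    # names below are hypothetical and for illustration only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("meter-readings-etl").getOrCreate()

    # Ingest raw data from a (hypothetical) landing zone.
    raw = spark.read.parquet("s3://landing-zone/meter_readings/")

    # Cleanse and map to a common target schema: deduplicate, normalise
    # types, and keep only the columns the target model needs.
    harmonised = (
        raw.dropDuplicates(["meter_id", "reading_ts"])
           .withColumn("reading_ts", F.to_timestamp("reading_ts"))
           .withColumn("kwh", F.col("kwh").cast("double"))
           .select("meter_id", "reading_ts", "kwh")
    )

    # Stage into a relational store (hypothetical PostgreSQL target);
    # assumes the PostgreSQL JDBC driver is available to Spark.
    (harmonised.write
        .format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/integration")
        .option("dbtable", "staging.meter_readings")
        .option("user", "etl_user")
        .option("password", "change-me")
        .mode("append")
        .save())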
Responsibilities

ABOUT THE ROLE:

The Cloud Solution Engineer (Data Integration team) plays a key role in designing, building, and operating reliable, cloud-native data integration solutions that power business insights and operations. The role is primarily responsible for enabling efficient data flow between systems, supporting both real-time and batch requirements, while ensuring the solutions are secure, observable, and aligned to modern engineering standards. Additionally, you will:

  • Design and implement streaming and batch data ingestion pipelines using cloud-native services (see the streaming sketch after this list)
  • Transform and harmonise data from multiple systems into a common data model, staged in relational data stores and exposed through APIs or messaging interfaces
  • Use infrastructure-as-code and CI/CD pipelines to manage integration infrastructure and application deployments
  • Apply DevOps and DataOps principles including continuous integration, configuration-as-code, and automated testing
  • Participate in ongoing support and BAU operations for data integration services, ensuring high system availability and reliability
  • Provide technical mentorship and participate in chapter/guild activities to support capability uplift across the team
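As an illustration of the streaming side of these responsibilities, the sketch below uses PySpark Structured Streaming to ingest events from Kafka into a staging area. The broker address, topic name, event schema, and sink paths are hypothetical, and the Kafka source assumes the spark-sql-kafka connector package is available.

    # Minimal PySpark Structured Streaming sketch for event-driven ingestion.
    # Broker, topic, schema, and sink locations are hypothetical; requires
    # the spark-sql-kafka connector on the Spark classpath.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import (StructType, StructField, StringType,
                                   DoubleType, TimestampType)

    spark = SparkSession.builder.appName("meter-events-stream").getOrCreate()

    # Expected shape of each Kafka event payload (illustrative).
    event_schema = StructType([
        StructField("meter_id", StringType()),
        StructField("reading_ts", TimestampType()),
        StructField("kwh", DoubleType()),
    ])

    # Subscribe to a (hypothetical) Kafka topic of meter events.
    events = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "meter-events")
        .load())

    # Parse the JSON payload into the common data model.
    parsed = (events
        .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*"))

    # Write continuously to staging, with checkpointing so the stream can
    # recover reliably after failures.
    query = (parsed.writeStream
        .format("parquet")
        .option("path", "s3://staging/meter_events/")
        .option("checkpointLocation", "s3://staging/_checkpoints/meter_events/")
        .outputMode("append")
        .start())

    query.awaitTermination()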
