Data Engineer at Easygo Gaming
Melbourne VIC 3000, Australia
Full Time


Start Date

Immediate

Expiry Date

10 May 2025

Salary

Not specified

Posted On

11 Feb 2025

Experience

0 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Glue, Cloud Services, Hadoop, Infrastructure, Code, Computer Science, Data Engineering, Data Governance, Information Systems, AWS, Spark, Communication Skills, Data Analytics, Python, SQL

Industry

Information Technology/IT

Description

Are you a passionate and ambitious data engineer ready to dive into an environment that fosters innovation, continuous learning, and professional growth? We’re seeking talented individuals who are eager to tackle complex big data problems, build scalable solutions, and collaborate with some of the finest engineers in the entertainment industry.

  • Complex Projects, Creative Solutions: Dive into intricate projects that challenge and push boundaries. Solve complex technical puzzles and craft scalable solutions.
  • Accelerate Your Growth: Access mentorship, training, and hands-on experiences to level up your skills. Learn from industry experts and gain expertise in scaling software.
  • Collaborate with Industry Leaders: Work alongside exceptional engineers, exchanging ideas and driving innovation forward through collaboration.
  • Caring Culture, Career Development: We deeply care about your career. Our culture prioritizes your growth with tailored learning programs and mentorship.
  • Embrace Challenges, Celebrate Success: Take on challenges, learn from failures, and celebrate achievements together.
  • Shape the Future: Your contributions will shape the future of entertainment.

MINIMUM QUALIFICATIONS:

  • A Bachelor’s degree in Computer Science, Software Engineering, Information Systems, or a related field, or equivalent practical experience.
  • 3-6 years of experience in data engineering, with a focus on ETL development, data modelling, database management, and real-time data pipelines.
  • Proficiency in SQL, Python, or PySpark, with hands-on experience using cloud services such as Glue, Redshift, Kinesis, Lambda, S3, and DMS.
  • Experience with orchestration tools (e.g., Apache Airflow), version control systems (e.g., GitHub), and big data technologies such as Spark or Hadoop.
  • Experience designing and implementing modern cloud-based data platforms, preferably on AWS, using Infrastructure as Code (IaC) tools like Terraform.
  • Knowledge of data governance and compliance standards.
  • Strong problem-solving, analytical, and communication skills for engaging with cross-functional teams.

PREFERRED QUALIFICATIONS:

  • Experience with DataOps principles, CI/CD pipelines, and agile development methodologies.
  • Knowledge of machine learning concepts and their application in data engineering.
  • AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Data Analytics) or similar cloud certifications.

RESPONSIBILITIES:

  • Design, develop, and maintain scalable ETL pipelines using AWS Glue and orchestrate workflows with Airflow to extract, transform, and load data from various sources (e.g., databases, APIs, flat files, streaming services) into the data lake, following medallion architecture principles (see the illustrative Airflow sketch after this list).
  • Build and implement secure and efficient data systems using AWS services and Terraform, ensuring performance and compliance.
  • Collaborate with cross-functional teams to transform data from the gold layer in the data lake to Redshift using dbt, enabling high-quality analytics and machine learning insights.
  • Monitor and optimise data pipelines for performance, scalability, and cost-efficiency, ensuring observability through monitoring and alerting systems.
  • Document end-to-end processes, including ingestion, transformation, storage, and governance, to support knowledge sharing and scalability.
  • Implement data governance practices such as data lineage, classification, access control, and compliance with GDPR and other regulatory requirements.
  • Build and optimise real-time data pipelines using PySpark, Glue Spark, and Kinesis, focusing on Change Data Capture (CDC) for seamless operations and reliability (see the streaming sketch after this list).
  • Ensure pipelines are thoroughly tested and optimised, with comprehensive monitoring and alerting systems for reliability and performance.
  • Participate in peer code reviews to ensure adherence to best practices, coding standards, and high-quality development.
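
To make the Glue-plus-Airflow orchestration described above more concrete, here is a minimal sketch, assuming Airflow 2.x with the Amazon provider package installed; the DAG id, Glue job name, region, and S3 paths are illustrative placeholders, not the company's actual configuration.

```python
# Hypothetical Airflow DAG that triggers an AWS Glue job to land raw source
# data in the bronze layer of a medallion-style data lake.
# All names and paths below are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="bronze_orders_ingest",      # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest_orders = GlueJobOperator(
        task_id="run_glue_bronze_ingest",
        job_name="bronze_orders_ingest_job",                      # hypothetical Glue job
        script_args={
            "--source_path": "s3://example-raw/orders/",           # placeholder source
            "--target_path": "s3://example-lake/bronze/orders/",   # placeholder bronze path
        },
        aws_conn_id="aws_default",
        region_name="ap-southeast-2",
    )
```

Silver- and gold-layer transformations would typically run as downstream tasks in the same DAG or in separate DAGs, with dbt handling the gold-to-Redshift modelling mentioned above.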
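For the real-time CDC responsibility, a Glue streaming job in PySpark could look roughly like the following sketch; the stream ARN, key columns (order_id, commit_ts), and lake paths are assumptions made for illustration only.

```python
# Hypothetical AWS Glue streaming job: consume CDC events (e.g. from DMS via
# Kinesis) with PySpark, keep only the latest change per key, and land the
# result in the data lake. Names, ARNs, and columns are illustrative.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F
from pyspark.sql.window import Window

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the CDC stream as a streaming DataFrame.
cdc_stream = glue_context.create_data_frame.from_options(
    connection_type="kinesis",
    connection_options={
        "streamARN": "arn:aws:kinesis:ap-southeast-2:000000000000:stream/example-cdc",  # placeholder
        "startingPosition": "TRIM_HORIZON",
        "classification": "json",
        "inferSchema": "true",
    },
)

def process_batch(batch_df, batch_id):
    """Keep only the most recent change per primary key, then append to the lake."""
    if batch_df.count() == 0:
        return
    latest = Window.partitionBy("order_id").orderBy(F.col("commit_ts").desc())  # assumed columns
    deduped = (
        batch_df.withColumn("rn", F.row_number().over(latest))
        .where(F.col("rn") == 1)
        .drop("rn")
    )
    deduped.write.mode("append").parquet("s3://example-lake/bronze/orders_cdc/")  # placeholder path

glue_context.forEachBatch(
    frame=cdc_stream,
    batch_function=process_batch,
    options={
        "windowSize": "60 seconds",
        "checkpointLocation": "s3://example-lake/checkpoints/orders_cdc/",  # placeholder
    },
)
job.commit()
```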