Data Engineer - AWS Databricks at Capgemini
Melbourne, Victoria, Australia - Full Time


Start Date

Immediate

Expiry Date

04 Oct, 25

Salary

Not specified

Posted On

05 Jul, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Git, Python, Technology, Jenkins, IT, AWS, SQL, Design, Strategy, Snowflake

Industry

Information Technology/IT

Description

REQUIRED SKILLS & EXPERIENCE

  • 3+ years of hands-on experience with AWS (EC2, S3, IAM, VPC, RDS, Glue).
  • 2+ years of experience with Databricks (including DLT, Delta Lake, and workspace management).
  • Strong programming skills in Python and SQL; experience with PySpark is essential (a brief sketch of typical PySpark and Delta Lake work follows this list).
  • Experience with DevOps practices and tools (Terraform, Git, Jenkins).
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience working in Agile/Scrum environments.
  • Strong analytical and problem-solving skills with attention to detail.
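
As an illustration of the Python/PySpark and Delta Lake skills above, here is a minimal sketch of a batch ingest job. It assumes a Databricks (or Delta-enabled Spark) runtime; the app name, S3 path, column names, and target table are hypothetical placeholders, not details from this posting.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  # On Databricks a SparkSession is already provided; getOrCreate()
  # also works locally if the Delta Lake libraries are installed.
  spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

  # Read raw JSON landed in S3 (placeholder path; assumes a schema
  # containing order_id, amount, and order_date fields).
  raw = spark.read.json("s3://example-bucket/raw/orders/")

  # Basic cleansing: de-duplicate, cast types, stamp ingest time.
  cleaned = (
      raw.dropDuplicates(["order_id"])
         .withColumn("amount", F.col("amount").cast("double"))
         .withColumn("ingested_at", F.current_timestamp())
  )

  # Append to a partitioned Delta Lake table for downstream queries.
  (cleaned.write
          .format("delta")
          .mode("append")
          .partitionBy("order_date")
          .saveAsTable("bronze.orders"))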
Responsibilities

KEY RESPONSIBILITIES

  • Design and implement robust data pipelines using Databricks, Delta Lake, and AWS services (e.g., S3, Glue, Redshift, Lambda).
  • Develop and optimize ETL/ELT workflows using PySpark, SQL, and orchestration tools such as Airflow or Jenkins (see the orchestration sketch after this list).
  • Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
  • Monitor and troubleshoot Databricks jobs and platform-level issues to ensure performance and reliability.
  • Implement CI/CD pipelines using GitHub, Terraform, and Jenkins for infrastructure and code deployments.
  • Ensure data quality, security, and compliance across the platform.
  • Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming.
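
As a hedged sketch of the orchestration bullet above, here is a minimal Airflow DAG that submits a Databricks run. It assumes a recent Airflow (2.4+) with the apache-airflow-providers-databricks package installed and a databricks_default connection configured; the DAG id, cluster spec, and notebook path are hypothetical placeholders.

  from datetime import datetime

  from airflow import DAG
  from airflow.providers.databricks.operators.databricks import (
      DatabricksSubmitRunOperator,
  )

  with DAG(
      dag_id="orders_daily",           # placeholder DAG name
      start_date=datetime(2025, 1, 1),
      schedule="@daily",               # keyword name assumes Airflow 2.4+
      catchup=False,
  ) as dag:
      run_etl = DatabricksSubmitRunOperator(
          task_id="run_orders_etl",
          databricks_conn_id="databricks_default",
          json={
              # Ephemeral job cluster; spec values are illustrative.
              "new_cluster": {
                  "spark_version": "14.3.x-scala2.12",
                  "node_type_id": "i3.xlarge",
                  "num_workers": 2,
              },
              # Placeholder workspace notebook implementing the ETL.
              "notebook_task": {"notebook_path": "/pipelines/orders_etl"},
          },
      )

The same run could equally be triggered from Jenkins via the Databricks REST API; the Airflow form is shown here only because it is the more self-contained example.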