AWS Data Engineer at Adastra Corporation
Calgary, AB T2P 2W2, Canada
Full Time


Start Date

Immediate

Expiry Date

22 May, 25

Salary

0.0

Posted On

22 Feb, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Scala, Optimization, Information Technology, Java, Glue, Programming Languages, Databases, Python, Communication Skills, NoSQL, SQL, Computer Science, Athena, Data Engineering, Data Security

Industry

Information Technology/IT

Description

Overview:
Adastra is hiring and expanding the AWS Practice! We are looking for a highly skilled AWS Data Engineer who will be responsible for building and maintaining our data pipelines, ensuring the efficient and reliable extraction, transformation, and loading (ETL) of large datasets. The ideal candidate will have a strong background in cloud-based data engineering, particularly within the AWS ecosystem, and will be passionate about data-driven decision-making.
Location: Calgary (AB), or willing to relocate
Status: Full-Time or Contract (Long-Term)

QUALIFICATIONS, SKILLS & EXPERIENCE:

  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field. A Master’s degree is a plus
  • 3+ years of experience in data engineering
  • Extensive hands-on experience working with Databricks, including the development, optimization, and management of data pipelines for advanced data analysis workflows
  • Strong experience with AWS services such as S3, Glue, Lambda, Redshift, DynamoDB, and Athena
  • Proficiency in programming languages such as Python, Java, or Scala
  • Experience with SQL and NoSQL databases
  • Solid understanding of data warehousing principles, ETL workflows, and data modeling techniques
  • Knowledge of data security, governance, and compliance best practices
  • Strong problem-solving skills and attention to detail
  • Excellent communication skills, with the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:

  • Design, develop, and maintain scalable ETL pipelines using AWS services such as Glue, Lambda, S3, and Redshift
  • Develop and optimize complex SQL queries for data transformation, validation, and reporting
  • Utilize Python for data processing, automation, and scripting within ETL workflows
  • Work with Databricks to develop, optimize, and scale data pipelines and analytics solutions
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and ensure data availability and integrity
  • Implement and optimize data storage solutions, including data lakes and data warehouses, to support analytics and reporting
  • Monitor and troubleshoot data pipelines to ensure continuous and reliable data flow
  • Ensure data security and compliance with industry standards and best practices
  • Automate data integration and processing tasks using AWS tools such as Step Functions and CloudFormation
  • Document data flows, data models, and processes for internal use and future reference
  • Stay updated with the latest trends and best practices in data engineering and AWS technologies
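
As an illustration of the kind of pipeline work described above, a minimal sketch of a Glue-style PySpark ETL job follows; the database, table, and bucket names are hypothetical placeholders, not Adastra specifics.

    # Illustrative sketch only: a minimal AWS Glue PySpark ETL job of the kind this role describes.
    # The database, table, and bucket names below are hypothetical placeholders.
    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Extract: read a raw table registered in the Glue Data Catalog (hypothetical names).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="orders"
    )

    # Transform: drop rows without an order_id and keep the reporting columns.
    cleaned = (
        source.toDF()
        .dropna(subset=["order_id"])
        .select("order_id", "customer_id", "order_total", "order_date")
    )

    # Load: write curated Parquet back to S3 for downstream analytics (e.g. Athena queries).
    cleaned.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

    job.commit()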