Data Engineer at Ledcor
Vancouver, BC, Canada
Full Time


Start Date

Immediate

Expiry Date

27 Nov, 25

Salary

$93,200

Posted On

27 Aug, 25

Experience

6 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Star Schema, Dbt, Data Vault, Kafka, Devops, Information Management, Data Governance, Computer Science, Python, Snowflake

Industry

Information Technology/IT

Description

Location: Vancouver, BC, Canada
Date Posted: Aug 26, 2025
Job ID: R25574
Job Status: Full-Time
Are you passionate about building robust data pipelines and transforming raw data into well-structured, high-quality insights? We’re seeking a Data Engineer to join our dynamic data team and help shape the future of our enterprise data lakehouse on Azure and Databricks.
In this role, you will design, build, and maintain ingestion pipelines from multiple source systems into our medallion architecture (bronze, silver, gold layers). Your work will span Python development, SQL transformations, and data modeling, ensuring data is accurate, performant, and business-ready.
You’ll work hands-on optimizing ELT processes, ensuring data quality and performance, and collaborating across teams—from security to data architecture—to deliver scalable, secure, and efficient solutions. You’ll also support the development of data models, implement both batch and selective real-time processing, and contribute to a strong testing and documentation culture.
Apply today to join our True-Blue team in Vancouver, BC!
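To make the medallion flow described above concrete, here is a minimal, self-contained Python sketch of the bronze → silver → gold pattern: raw records land as-is, the silver step validates and types them, and the gold step produces a business-ready aggregate. This is an illustrative toy only — the names (`to_silver`, `to_gold`, the sample columns) are hypothetical, not Ledcor's actual Databricks code, which would typically use PySpark and Delta Lake tables rather than plain lists.

```python
from dataclasses import dataclass
from collections import defaultdict

# Bronze layer: raw records exactly as landed from a source system
# (e.g., an ERP export). Values are untyped strings and may be dirty.
bronze = [
    {"project": "A12", "cost": "1500.50", "region": "BC"},
    {"project": "A12", "cost": "not_a_number", "region": "BC"},  # bad row
    {"project": "B07", "cost": "900.00", "region": "AB"},
]

@dataclass
class SilverRow:
    project: str
    cost: float
    region: str

def to_silver(raw: list) -> list:
    """Silver layer: enforce schema and types; drop rows that fail parsing."""
    out = []
    for rec in raw:
        try:
            out.append(SilverRow(rec["project"], float(rec["cost"]), rec["region"]))
        except (KeyError, ValueError):
            # A production pipeline would quarantine bad rows, not drop them silently.
            continue
    return out

def to_gold(rows: list) -> dict:
    """Gold layer: curated, business-ready aggregate — total cost per region."""
    totals = defaultdict(float)
    for r in rows:
        totals[r.region] += r.cost
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'BC': 1500.5, 'AB': 900.0}
```

The same shape carries over to Spark: bronze is an append-only landing table, silver applies schema enforcement and deduplication, and gold holds the aggregates analysts query.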

Essential Responsibilities:

  • Build, maintain, and support the ingestion of data from source systems (e.g., ERP, CRM, HR) into the Databricks lakehouse
  • Monitor and optimize performance and quality of data transfers across medallion layers
  • Collaborate with Information Security on securing pipelines and access controls
  • Set up schedules and triggers for full and incremental ingestion using Azure Data Factory and Databricks
  • Design, build, and test data pipelines using Python, SQL, and Databricks notebooks
  • Profile source data to assess structure, quality, and relationships
  • Work with the Data Architect to define and implement data models (schemas, tables, relationships, and mapping logic)
  • Develop automated testing strategies for data quality, completeness, and performance (unit, integration, and regression testing)
  • Build ELT processes to transform raw data into curated datasets within Databricks
  • Implement batch-oriented solutions, with selective adoption of near real-time pipelines
  • Apply partitioning, indexing, and optimization techniques for scalable storage and query performance
  • Create documentation, runbooks, and training materials to support adoption and knowledge sharing
  • Establish methods to track and improve data quality, completeness, and consistency
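The last responsibility — tracking data quality, completeness, and consistency — is often implemented as automated checks that run after each load. Below is a hedged sketch of such a check in plain Python; the function name `check_quality` and the sample columns are made up for illustration, and a real Databricks pipeline might use a framework such as Great Expectations or Delta Live Tables expectations instead.

```python
def check_quality(rows, required):
    """Count completeness failures: required fields that are missing or null."""
    failures = {"missing_field": 0, "null_value": 0}
    for row in rows:
        for col in required:
            if col not in row:
                failures["missing_field"] += 1
            elif row[col] is None:
                failures["null_value"] += 1
    return failures

# Hypothetical sample batch with one null and one missing field.
sample = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},  # null value
    {"id": 3},                  # missing field
]
report = check_quality(sample, required=["id", "amount"])
print(report)  # {'missing_field': 1, 'null_value': 1}
```

Emitting these counts to a metrics table after every load gives the team a trend line for data quality over time, which is the "establish methods to track and improve" part of the role.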

Qualifications:

  • 4–6 years of relevant data engineering experience
  • Bachelor’s degree in Computer Science, Information Management, or a related field (or equivalent experience)
  • Proficiency in Python and advanced SQL is a must-have
  • Experience with Azure Data Factory, Databricks (Delta Lake), and Git/GitHub
  • Strong data modeling skills (e.g., star schema, Data Vault, normalization/denormalization)
  • Familiarity with Agile delivery, DevOps, and CI/CD practices for data pipelines
  • Strong analytical, problem-solving, and collaboration skills
  • Experience with Snowflake, dbt, or other modern data stack tools is an asset
  • Familiarity with data governance and metadata management tools is nice to have
  • Exposure to event streaming (Kafka, Event Hub) is nice to have

Working Conditions:

  • This is a hybrid position with remote flexibility available

Compensation

  • $93,200-$128,150 annually

This is the expected base pay range for this role. Individual base pay will be determined based on a variety of factors including experience, knowledge, skills, education and location.
Our competitive total rewards package provides compensation and benefits that support your physical, mental and financial wellbeing. We offer exciting, challenging work with opportunities to develop your skills and knowledge.
Additional Information
The Ledcor Group of Companies is one of North America’s most diversified construction companies. Ledcor is a company built on a rich history of long-standing project successes.
Our workplace culture has been recognized as one of Canada’s Best Diversity Employers, Canada’s Most Admired Corporate Cultures, and a Top 100 Inspiring Workplace in North America.
Employment Equity
At Ledcor we believe diversity, equity, and inclusion should be part of everything we do. We are proud to be an equal-opportunity employer. All qualified individuals, regardless of race, color, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, Veteran status or any other identifying characteristic are encouraged to apply.
Our True Blue team consists of individuals from all backgrounds who contribute diverse perspectives and experiences to Ledcor. We are committed to continuing to build on our culture of empowerment, inclusion and belonging.
Adjustments will be provided in all parts of our hiring process. Applicants need to make their needs known in advance by submitting a request via email. For more information about Ledcor’s Inclusion and Diversity initiatives, please visit our I&D page.
1055 West Hastings St, Vancouver, BC

How To Apply:

If you would like to apply to this job directly at the source, please click here.
