Data Engineer at WELL Health Technologies Corp
Toronto, Ontario, Canada - Hybrid
Full Time


Start Date

Immediate

Expiry Date

05 May 2026

Salary

$150,000 CAD

Posted On

04 Feb 2026

Experience

5 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Snowflake, SQL, Python, Data Pipelines, CI/CD, Data Modeling, Orchestration Tools, dbt, Cloud Data Platforms, Observability, Data Governance, Performance Tuning, Event-Driven Ingestion, PII/PHI Handling, Git

Industry

Healthcare Technology

Description
Entity: WELL Health Technologies Corp
Position Title: Data Engineer
Salary Range: $130k – $150k CAD per annum
Job Class: Full Time
Work Location: Hybrid, Toronto

About the Company:

WELL Health is an innovative technology-enabled healthcare company whose overarching objective is to positively impact health outcomes by leveraging technology to empower and support healthcare practitioners and their patients. WELL has built an innovative practitioner enablement platform that includes comprehensive end-to-end practice management tools, inclusive of virtual care and digital patient engagement capabilities as well as Electronic Medical Records (EMR), Revenue Cycle Management (RCM), and data protection services. WELL uses this platform to power healthcare practitioners both inside and outside of WELL's own omni-channel patient services offerings.

WELL owns and operates Canada's largest network of outpatient medical clinics serving primary and specialized healthcare services and is the provider of a leading multi-national, multi-disciplinary telehealth offering. WELL is publicly traded on the Toronto Stock Exchange under the symbol "WELL". To learn more about the company, please visit: www.well.company.

Position Summary:

As a Data Engineer on the WELL Intelligence Platform, you will design, build, and operate reliable, secure, and scalable data pipelines and data products that power analytics, automation, and AI. This role is Snowflake-heavy and suited for a hands-on builder who can implement complex solutions, troubleshoot issues end-to-end, and continuously improve platform engineering quality. You will work under the architectural direction of the Senior Data Architect while contributing to design discussions, proposing implementation approaches, and delivering high-quality code that meets platform standards for governance, security, performance, and observability.

Work split: during the initial platform build, expect ~90–100% focus on design/build and delivery. After implementation and stabilization, the role will typically shift to ~50/50 between platform maintenance/reliability and continued platform evolution (new sources, new data products, performance/cost improvements).

What you will be doing:

1) BUILD DATA PIPELINES & DATA PRODUCTS
* Implement robust batch and incremental ingestion pipelines.
* Implement event-driven ingestion patterns where required, ensuring reliability, ordering/duplication handling, and replay/backfill strategies.
* Build curated datasets and data products that are well-modeled, tested, and documented.
* Develop reusable components and frameworks to accelerate onboarding of new sources.

2) TRANSFORMATIONS, MODELING & SERVING
* Implement transformation logic using strong SQL and appropriate tooling.
* Contribute to dimensional models and conformed entities under the guidance of the Senior Data Architect.
* Support semantic layer artifacts and standardized KPI definitions through consistent data structures.

3) PLATFORM ENGINEERING QUALITY
* Build and maintain CI/CD pipelines for data workloads (tests, deployment, promotion across environments).
* Write production-grade code with version control, code review discipline, and automated testing.
* Implement data quality checks, reconcile counts/totals, and ensure pipeline idempotency and recoverability (a minimal sketch of this pattern follows this list).
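Purely as an illustration of the incremental-load, reconciliation, and idempotency responsibilities named above, and not WELL's actual implementation, the sketch below combines a high-water mark with a keyed upsert so that replaying a batch is safe. SQLite stands in for the warehouse, and every table, column, and pipeline name is hypothetical; on Snowflake the same shape is commonly expressed with MERGE and a control table.

# Minimal sketch: idempotent incremental load with a high-water mark
# and a simple reconciliation check. SQLite stands in for the warehouse;
# all names here are hypothetical.
import sqlite3

def incremental_load(conn: sqlite3.Connection, source_rows: list[tuple]) -> None:
    """Upsert rows newer than the stored watermark, then reconcile ids."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS watermarks (pipeline TEXT PRIMARY KEY, last_ts TEXT)")
    cur.execute("CREATE TABLE IF NOT EXISTS curated_events (id INTEGER PRIMARY KEY, payload TEXT, updated_at TEXT)")

    row = cur.execute("SELECT last_ts FROM watermarks WHERE pipeline = 'events'").fetchone()
    watermark = row[0] if row else ""

    # Take only rows strictly newer than the watermark; the upsert keys on id,
    # so replaying the same batch is a no-op (idempotent).
    new_rows = [r for r in source_rows if r[2] > watermark]
    cur.executemany(
        "INSERT INTO curated_events (id, payload, updated_at) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload, updated_at = excluded.updated_at",
        new_rows,
    )

    # Reconciliation: every source id must now exist in the target.
    target_ids = {r[0] for r in cur.execute("SELECT id FROM curated_events")}
    missing = [r[0] for r in source_rows if r[0] not in target_ids]
    assert not missing, f"reconciliation failed, missing ids: {missing}"

    # Advance the watermark only after a successful load.
    if new_rows:
        cur.execute(
            "INSERT INTO watermarks (pipeline, last_ts) VALUES ('events', ?) "
            "ON CONFLICT(pipeline) DO UPDATE SET last_ts = excluded.last_ts",
            (max(r[2] for r in new_rows),),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
batch = [(1, "a", "2026-02-01"), (2, "b", "2026-02-02")]
incremental_load(conn, batch)
incremental_load(conn, batch)  # replay: no duplicates, watermark unchanged
print(conn.execute("SELECT COUNT(*) FROM curated_events").fetchone()[0])  # -> 2

Running the same batch twice leaves the row count and the watermark unchanged, which is the property that makes retries and backfills safe.
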
4) OBSERVABILITY, RELIABILITY & OPERATIONS
* Instrument pipelines with monitoring/alerting and meaningful operational metrics.
* Troubleshoot failures across ingestion, compute, warehouse, orchestration, and upstream dependencies.
* Participate in on-call or incident response as defined; contribute to runbooks and post-incident improvements.

5) SECURITY, GOVERNANCE & COMPLIANCE BY DESIGN
* Implement access controls, secure connectivity, and least-privilege patterns.
* Apply privacy controls for PII/PHI handling (masking/tokenization patterns where required).
* Contribute metadata for cataloging and lineage; follow data governance standards.

6) COLLABORATION & CONTINUOUS IMPROVEMENT
* Work with cross-functional teams (Cloud, Security, Apps, BI/Analytics) to deliver outcomes.
* Collaborate with stakeholders to validate requirements, definitions, and acceptance criteria.
* Propose optimizations that improve cost, performance, and delivery velocity.

You have:

EXPERIENCE
* 4–8+ years of hands-on data engineering experience delivering production data pipelines.
* Experience building on modern cloud data platforms (Azure and/or AWS).

CORE TECHNICAL SKILLS
* Strong SQL (advanced query design, performance tuning) and strong Python for data engineering.
* Deep, hands-on experience with Snowflake (data modeling, performance/cost optimization, workload patterns) or equivalent modern warehouse experience with demonstrated ability to ramp quickly.
* Strong knowledge of data pipeline patterns: incremental loads, CDC, backfills, and handling late-arriving data.
* Experience with orchestration tools (e.g., ADF, Airflow, Dagster).
* Experience with transformation frameworks (e.g., dbt or equivalent).
* Working knowledge of distributed compute concepts (e.g., Spark/Databricks) and when to use them.
* Exposure to eventing/streaming concepts (e.g., Kafka/Kinesis/Event Hubs) and the operational realities of mixed batch + event-driven pipelines.

ENGINEERING PRACTICES
* Proficiency with Git, code reviews, testing, and CI/CD.
* Familiarity with Infrastructure-as-Code concepts and working effectively with Cloud Engineering.
* Ability to implement observability: logging, monitoring, alerting, and operational dashboards.

DATA FUNDAMENTALS
* Strong understanding of data modeling basics (dimensional modeling concepts, normalization trade-offs).
* Understanding of data governance concepts: lineage, cataloging, quality, and stewardship.

NICE TO HAVE
* Healthcare or other data-sensitive industry experience.
* Experience implementing row/column-level security, masking, tokenization, and secure data sharing (an illustrative tokenization sketch follows this list).
* Experience supporting AI enablement (AI-ready datasets, feature patterns, or controlled access patterns).
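As a purely illustrative example of the masking/tokenization patterns mentioned above, and not WELL's implementation (on Snowflake this is usually enforced with masking and row access policies instead), here is a minimal Python sketch of deterministic tokenization plus partial masking. The key handling, field names, and token length are all hypothetical choices.

# Minimal sketch: deterministic tokenization and partial masking for PII fields.
# All names and parameters are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-via-a-secrets-manager"  # placeholder; never hard-code in production

def tokenize(value: str) -> str:
    """Deterministic token: the same input always yields the same token,
    so joins across datasets still work, but the value is not recoverable."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Partial mask that keeps the domain usable for analytics."""
    local, _, domain = email.partition("@")
    return f"{local[0]}***@{domain}" if local else email

record = {"patient_id": "12345", "email": "jane.doe@example.com"}
safe = {
    "patient_token": tokenize(record["patient_id"]),  # join key, not reversible
    "email_masked": mask_email(record["email"]),
}
print(safe)

Deterministic tokens preserve joinability without exposing the underlying identifier; the trade-off is that the key must live in a secrets manager and be rotatable.
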
The salary for this position falls within a defined range and will be determined based on several factors, including the candidate's experience, qualifications, skills, and the needs of the organization. At WELL, we are committed to fair and equitable compensation and aim to provide a competitive salary that reflects the value and expertise of the successful candidate.

WELL is committed to fostering a diverse, inclusive, and accessible workplace. We welcome and celebrate the diversity of applicants and team members across ability, race, gender identity, sexual orientation, and lived experience. We strive to create an environment where differences are valued and contribute to our collective success – this is the WELL Way.

This recruitment process uses automated tools, including artificial intelligence, to help review applications. Qualified human decision-makers review these results and make all final hiring decisions.

WELL has been independently certified as a Great Place to Work® by the Great Place to Work Institute® Canada. This recognition reflects our commitment to building a workplace culture rooted in trust, inclusivity, and employee well-being. It also aligns with our Healthy Place to Work pillar and the priorities outlined in our annual Sustainability Impact Report: https://well.company/sustainability-impact-report/#esg-heart

Read more about us: https://stories.well.company/
Responsibilities
The Data Engineer will design, build, and operate reliable, secure, and scalable data pipelines and data products, focusing heavily on Snowflake implementation for analytics, automation, and AI enablement. Responsibilities include implementing robust batch and event-driven ingestion, building curated datasets, developing reusable components, and ensuring platform engineering quality through testing and CI/CD.
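Since the summary above pairs batch with event-driven ingestion, here is a minimal, hypothetical sketch of the consumer-side duplication and ordering handling that such pipelines typically need; the class and field names are invented for illustration and do not describe WELL's platform.

# Minimal sketch: consumer-side deduplication and ordering for event ingestion.
# All names are hypothetical; real state would be persisted durably.
from dataclasses import dataclass

@dataclass
class Event:
    event_id: str
    sequence: int
    payload: str

class DedupingConsumer:
    def __init__(self) -> None:
        self.seen_ids: set[str] = set()
        self.last_sequence = -1
        self.applied: list[str] = []

    def handle(self, event: Event) -> None:
        if event.event_id in self.seen_ids:
            return  # duplicate delivery (at-least-once transport): safe to drop
        if event.sequence <= self.last_sequence:
            return  # stale or out-of-order replay of an already-applied change
        self.seen_ids.add(event.event_id)
        self.last_sequence = event.sequence
        self.applied.append(event.payload)

consumer = DedupingConsumer()
for e in [Event("a", 1, "create"), Event("a", 1, "create"),  # duplicate
          Event("b", 2, "update"), Event("b", 2, "update")]:
    consumer.handle(e)
print(consumer.applied)  # -> ['create', 'update']

A production consumer would persist seen_ids and last_sequence (for example, in the warehouse itself) so that replays after a crash or an intentional backfill remain safe.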