Data Engineer at Street Context
Toronto, ON, Canada - Full Time


Start Date

Immediate

Expiry Date

09 Dec, 25

Salary

Not specified

Posted On

10 Sep, 25

Experience

6 year(s) or above

Remote Job

Yes

Sponsor Visa

No

Skills

Monte Carlo, Automation, AWS Glue, Performance Tuning, Data Manipulation, dbt, Snowflake, Airflow

Industry

Information Technology/IT

Description

We’re looking for a motivated Data Engineer to help build and scale our cloud-native data pipelines using Snowflake and dbt. This role is ideal for someone who enjoys solving data challenges, improving performance, and delivering trusted data to stakeholders across the business.

REQUIRED SKILLS & QUALIFICATIONS:

  • 3–6 years of experience as a data engineer in a cloud-based environment.
  • Hands-on experience with dbt and Snowflake (must-have).
  • Strong SQL skills and Python scripting for data manipulation and automation (a short sketch follows this list).
  • Familiarity with data orchestration tools like Airflow or Dagster.
  • Experience with cloud data ecosystems, especially AWS (S3, Glue, Lambda).
  • Understanding of data warehouse design, ELT pipelines, and performance tuning.
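
As a purely illustrative sketch (not part of the posting), the kind of SQL-plus-Python data manipulation this role calls for might look like the following, assuming the snowflake-connector-python package; the connection settings, warehouse, and the raw_events table are all placeholders:

    # Hypothetical sketch: pull rows from Snowflake and reshape them in Python.
    # All connection settings and the raw_events table are placeholders.
    import os
    import snowflake.connector

    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",  # placeholder warehouse
        database="ANALYTICS",      # placeholder database
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Aggregate in SQL, then post-process the result in Python.
        cur.execute(
            "SELECT event_date, COUNT(*) AS n_events "
            "FROM raw_events GROUP BY event_date ORDER BY event_date"
        )
        daily_counts = {row[0]: row[1] for row in cur.fetchall()}
        print(f"Loaded counts for {len(daily_counts)} days")
    finally:
        conn.close()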

PREFERRED SKILLS & QUALIFICATIONS:

  • Experience with data observability and quality tools (e.g., Great Expectations, Monte Carlo); a hand-rolled check in the same spirit is sketched after this list.
  • Exposure to infrastructure-as-code (e.g., Terraform).
  • Prior involvement in cloud migration or legacy-to-modern data stack projects.
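
As an illustration only (not from the posting), a minimal hand-rolled quality check in the spirit of tools like Great Expectations might look like this; the batch data, column names, and thresholds are all hypothetical:

    # Hypothetical sketch of basic data quality checks; in practice a tool
    # like Great Expectations or Monte Carlo manages these declaratively.
    def check_not_null(rows, column_index, column_name):
        # Fail if any row has a NULL in the given column.
        nulls = sum(1 for row in rows if row[column_index] is None)
        assert nulls == 0, f"{nulls} NULL values found in {column_name}"

    def check_row_count(rows, minimum):
        # Fail if the batch is suspiciously small (possible upstream outage).
        assert len(rows) >= minimum, f"expected >= {minimum} rows, got {len(rows)}"

    # Example usage against a placeholder batch:
    batch = [("2025-09-10", 42), ("2025-09-11", 57)]
    check_row_count(batch, minimum=1)
    check_not_null(batch, column_index=0, column_name="event_date")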

WHAT WE DO

Street Context is part of BlueMatrix. Together, they develop one-of-a-kind web applications for the authoring, distribution, and analysis of investment research, and for internal knowledge management and digital communications. Our mission is to streamline the publishing process on a global scale.

How To Apply:

If you would like to apply to this job directly from the source, please click here

Responsibilities
  • Design, develop, and maintain ELT pipelines using dbt and Snowflake (a minimal orchestration sketch follows this list).
  • Build and support data ingestion processes from internal and external sources using both batch and streaming pipelines.
  • Create and maintain data models and transformations in dbt, following version control and CI/CD practices.
  • Implement robust data quality checks, lineage, and testing processes.
  • Collaborate with DevOps and cloud teams to monitor pipeline health and optimize infrastructure in AWS.
  • Automate recurring workflows and contribute to self-service data tooling for internal teams.
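
Purely as an illustration (not part of the posting), a minimal orchestration sketch for the first responsibility, assuming Airflow 2.x and a dbt project already configured on the worker; the DAG id, schedule, and project path are placeholders:

    # Hypothetical sketch: orchestrate a daily dbt run with Airflow 2.x.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_elt",              # placeholder DAG id
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",      # placeholder cadence
        catchup=False,
    ) as dag:
        # Build dbt models against Snowflake; the Snowflake target is
        # assumed to be configured in the project's profiles.yml.
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt_project && dbt run",
        )
        # Run dbt tests after the build so data quality gates the load.
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt_project && dbt test",
        )
        dbt_run >> dbt_test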