Data Engineer at Charger Logistics Inc
Brampton, ON, Canada
Full Time


Start Date

Immediate

Expiry Date

04 Dec, 25

Salary

0.0

Posted On

04 Sep, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Tableau, Python, Looker, SQL, Query Optimization, Data Services, Docker, Optimization, Computer Science, Pandas, Scripting, Testing, Git, Dimensional Modeling, Power BI, Airflow, Performance Tuning, Analytics, SQLAlchemy, Data Processing, Data Science, Automation, Snowflake

Industry

Information Technology/IT

Description

Charger Logistics Inc. is a world-class asset-based carrier with locations across North America. With over 20 years of experience providing best-in-class logistics solutions, Charger Logistics has evolved into a premier transport provider and continues to expand rapidly.
Charger Logistics invests time and support in its employees, giving them room to learn, grow their expertise, and work their way up. We are an entrepreneurial-minded organization that welcomes and supports individual ideas and strategies. We are seeking a skilled Data Engineer with strong SQL and Python expertise to join our modern data team. The successful candidate will build scalable, maintainable data transformation pipelines using SQL and Python that power our analytics and business intelligence initiatives.

REQUIRED QUALIFICATIONS:

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field.
  • 2+ years of experience in data engineering roles, with strong emphasis on SQL and Python.
  • Expert-level SQL skills: CTEs, window functions, query optimization, analytical queries (illustrated in the sketch after this list).
  • Solid Python programming experience: data processing, scripting, automation, APIs.
  • Hands-on experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift, or Databricks).
  • Strong understanding of data warehouse design, dimensional modeling, and ELT/ETL pipelines.
  • Experience with version control systems like Git and collaborative development workflows.
  • Knowledge of data quality frameworks and testing strategies using SQL and Python.
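
For illustration only, the sketch below shows the kind of analytical SQL named in this list (a CTE plus a window function) run from Python. SQLite stands in for the cloud warehouse, and the shipment table, columns, and values are hypothetical.

# Minimal sketch: CTE + window function executed from Python. SQLite keeps the
# example self-contained; a real pipeline would target Snowflake, BigQuery,
# Redshift, or Databricks.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

# Tiny hypothetical shipment table, for illustration only.
shipments = pd.DataFrame({
    "lane": ["TOR-CHI", "TOR-CHI", "TOR-DAL", "TOR-DAL"],
    "ship_date": ["2025-01-01", "2025-01-02", "2025-01-01", "2025-01-02"],
    "revenue": [1200.0, 1350.0, 2100.0, 1980.0],
})
shipments.to_sql("shipments", engine, index=False)

# The CTE aggregates revenue per lane and day; the window function adds a running total.
query = text("""
    WITH daily AS (
        SELECT lane, ship_date, SUM(revenue) AS daily_revenue
        FROM shipments
        GROUP BY lane, ship_date
    )
    SELECT
        lane,
        ship_date,
        daily_revenue,
        SUM(daily_revenue) OVER (PARTITION BY lane ORDER BY ship_date) AS running_revenue
    FROM daily
    ORDER BY lane, ship_date
""")

with engine.connect() as conn:
    print(pd.read_sql(query, conn))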

PREFERRED QUALIFICATIONS:

  • Experience with cloud data platforms and native data services.
  • Familiarity with workflow orchestration tools such as Airflow, Prefect, or Dagster (see the sketch after this list).
  • Knowledge of data visualization tools (Looker, Tableau, Power BI).
  • Exposure to real-time data processing and streaming architectures.
  • Understanding of DataOps and analytics engineering best practices.
  • Experience with Infrastructure as Code tools like Terraform or CloudFormation.
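
As a rough illustration of the orchestration tools named above, the sketch below outlines a minimal daily pipeline as an Airflow DAG (TaskFlow API, Airflow 2.4+ assumed; a Prefect or Dagster flow would look similar). Task names, schedules, and data are placeholders, not a description of Charger Logistics' actual pipelines.

# Minimal daily ELT DAG sketch using the Airflow TaskFlow API.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_elt():
    @task
    def extract():
        # A real task would pull from a source API or database.
        return [{"lane": "TOR-CHI", "revenue": 1200.0}]

    @task
    def load(rows):
        # A real task would write the rows to the warehouse (e.g. Snowflake).
        print(f"loading {len(rows)} rows")

    load(extract())


daily_elt()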

TECHNICAL SKILLS:

  • SQL: Advanced querying, performance tuning, data modeling, optimization.
  • Python: pandas, requests, SQLAlchemy, API integration, ETL development (see the sketch after this list).
  • Data Warehouses: Snowflake, BigQuery, Redshift, Databricks (or similar platforms).
  • Tools: Git, Docker, CI/CD pipelines, orchestration tools (Airflow, Prefect).
  • Concepts: Dimensional modeling, data testing, DataOps, analytics engineering.
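
Tying several of these tools together, the sketch below shows one small extract-transform-load step: requests for API extraction, pandas for transformation, SQLAlchemy for loading. The API endpoint, connection string, column names, and target table are hypothetical placeholders.

# Small ETL sketch: pull JSON from an API, clean it with pandas, append it to a table.
import pandas as pd
import requests
from sqlalchemy import create_engine

WAREHOUSE_URL = "sqlite:///warehouse.db"          # stand-in for a warehouse connection string
SOURCE_API = "https://example.com/api/shipments"  # hypothetical endpoint


def extract(url):
    """Pull raw records from the source API as JSON."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


def transform(raw):
    """Standardize types and drop obviously bad rows."""
    clean = raw.copy()
    clean["ship_date"] = pd.to_datetime(clean["ship_date"])
    return clean.dropna(subset=["lane", "revenue"])


def load(df, table):
    """Append the transformed frame into the warehouse table."""
    engine = create_engine(WAREHOUSE_URL)
    df.to_sql(table, engine, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract(SOURCE_API)), "stg_shipments")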

Responsibilities
  • Design and maintain high-performance SQL-based data transformation pipelines.
  • Build reusable, modular SQL code using software engineering best practices.
  • Develop Python applications for data ingestion, transformation, and pipeline orchestration.
  • Optimize complex SQL queries for performance, scalability, and reliability.
  • Implement robust data quality checks and maintain metadata and documentation (see the sketch after this list).
  • Automate ETL/ELT workflows using Python and cloud-native tools.
  • Work with analytics and business teams to translate business logic into SQL data models.
  • Implement version control (Git) and CI/CD workflows for testing and deployment of pipelines.
  • Monitor and optimize data workflows and identify opportunities for performance improvement.
  • Mentor junior team members on SQL optimization and Python scripting practices.
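
As a minimal example of the data quality checks mentioned in this list, the sketch below runs row-count, null, and duplicate-key assertions against a warehouse table after a pipeline step. The table, key column, and connection string are hypothetical, and a dedicated framework (for example Great Expectations or dbt tests) would normally replace hand-rolled checks like these.

# Basic post-load data quality checks: non-empty table, no NULL keys, no duplicate keys.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")  # stand-in for a warehouse connection string


def check_table(table, key_column):
    df = pd.read_sql(f"SELECT * FROM {table}", engine)

    assert len(df) > 0, f"{table}: table is empty"
    assert df[key_column].notna().all(), f"{table}: NULL values in {key_column}"
    assert not df[key_column].duplicated().any(), f"{table}: duplicate keys in {key_column}"

    print(f"{table}: {len(df)} rows passed basic checks")


if __name__ == "__main__":
    check_table("stg_shipments", key_column="shipment_id")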