Data Engineer at Charger Logistics Inc
Brampton, Ontario, Canada
Full Time


Start Date

Immediate

Expiry Date

06 Jun, 26

Salary

0.0

Posted On

08 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Python, RisingWave, dbt, PostgreSQL, BigQuery, Snowflake, MongoDB, Kafka, Airflow, GCP, AlloyDB, CDC, pandas, SQLAlchemy, Git

Industry

Truck Transportation

Description
Charger Logistics Inc. is a world-class asset-based carrier with locations across North America. With over 20 years of experience providing the best logistics solutions, Charger Logistics has transformed into a world-class transport provider and continues to grow. Charger Logistics invests time and support in its employees, giving them room to learn, grow their expertise, and work their way up. We are seeking a Data Engineer with expertise in SQL, Python, dbt, and RisingWave to join our modern data team.

Responsibilities:
- Design high-performance SQL pipelines across PostgreSQL, BigQuery, Snowflake, and MongoDB.
- Develop Python applications for data ingestion, transformation, and automation.
- Implement RisingWave streaming pipelines for real-time analytics.
- Build Apache Kafka architectures for high-throughput data processing.
- Orchestrate workflows using Apache Airflow on Google Cloud Platform.
- Optimize queries and implement data quality checks across multiple platforms.
- Mentor team members and collaborate with business stakeholders.
- Deploy CI/CD workflows using Git for reliable pipeline management.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of data engineering experience with SQL, Python, and RisingWave.
- Must have AlloyDB and CDC experience (DataStream/Debezium).
- Expert dbt skills across BigQuery, Snowflake, and AlloyDB.
- Expert SQL skills: CTEs, window functions, and query optimization across PostgreSQL, BigQuery, and Snowflake.
- Advanced Python: pandas, SQLAlchemy, API integration, and streaming data processing.
- Production experience with Apache Kafka, Apache Airflow, and Google Cloud Platform.
- Experience with MongoDB, dimensional modeling, and both batch and streaming ETL pipelines.
- Strong Git and collaborative development experience.

Technical Skills:
- Core: SQL (advanced), Python, RisingWave (required).
- Cloud: Google Cloud Platform, BigQuery, GCP-native services.
- Streaming: Apache Kafka, real-time data processing.
- Orchestration: Apache Airflow (production experience).
- Databases: PostgreSQL, Snowflake, MongoDB.
- Tools: Git, Docker, CI/CD pipelines.

Preferred Qualifications:
- GCP certifications; Terraform/CloudFormation experience.
- Previous experience with RisingWave is strongly preferred.
- Data visualization tools (Looker, Tableau, Power BI).
- DataOps and analytics engineering best practices.
- ClickHouse experience.

What You'll Build:
- Scalable SQL pipelines across multiple database systems.
- Python-based ETL/ELT solutions spanning cloud and on-premise environments.
- Real-time streaming pipelines using RisingWave and Kafka.
- GCP-native data solutions with automated quality checks.
- Airflow-orchestrated workflows with CI/CD deployment.

Benefits:
- Competitive salary
- Healthcare benefits package
- Career growth
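As a rough illustration of the "CTEs, window functions" skill the posting lists, a minimal sketch follows. The table, columns, and values are hypothetical, and SQLite (via Python's standard library) stands in for PostgreSQL/BigQuery/Snowflake; the SQL shape is the same.

```python
import sqlite3

# Hypothetical shipments table; SQLite stands in for the warehouses
# named in the posting (PostgreSQL, BigQuery, Snowflake).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (carrier TEXT, region TEXT, revenue REAL);
    INSERT INTO shipments VALUES
        ('A', 'east', 100.0), ('A', 'east', 250.0),
        ('B', 'west', 300.0), ('B', 'west', 150.0);
""")

query = """
WITH regional AS (            -- CTE: aggregate revenue per carrier/region
    SELECT carrier, region, SUM(revenue) AS total
    FROM shipments
    GROUP BY carrier, region
)
SELECT carrier, region, total,
       RANK() OVER (ORDER BY total DESC) AS revenue_rank  -- window function
FROM regional
ORDER BY revenue_rank
"""
rows = conn.execute(query).fetchall()
print(rows)  # highest-revenue carrier/region ranked first
```

The CTE keeps the aggregation step readable, and the window function ranks groups without a second round of grouping, which is the usual reason both appear together in warehouse SQL.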
Responsibilities
The Data Engineer will be responsible for designing high-performance SQL pipelines across various databases, developing Python applications for data ingestion and automation, and implementing RisingWave streaming pipelines for real-time analytics. Key tasks also include building Kafka architectures, orchestrating workflows via Airflow on GCP, optimizing queries, and mentoring team members.
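The CDC experience the posting requires (DataStream/Debezium) boils down to applying ordered change events to downstream state. A minimal sketch, assuming Debezium-style envelopes with `op`, `before`, and `after` fields; the event payloads and key names here are invented for illustration.

```python
# Hypothetical sketch: apply Debezium-style CDC events to an in-memory
# table keyed by primary key. Real pipelines would read these events
# from Kafka; the envelope shape ("op", "before", "after") follows
# Debezium's convention, but the payload fields are assumptions.
def apply_cdc_event(table: dict, event: dict) -> None:
    op = event["op"]               # 'c' create, 'u' update, 'd' delete, 'r' snapshot read
    if op in ("c", "u", "r"):
        row = event["after"]
        table[row["id"]] = row     # upsert the new row image
    elif op == "d":
        table.pop(event["before"]["id"], None)  # remove by old row's key

state = {}
events = [
    {"op": "c", "before": None, "after": {"id": 1, "status": "dispatched"}},
    {"op": "u", "before": {"id": 1, "status": "dispatched"},
     "after": {"id": 1, "status": "delivered"}},
    {"op": "c", "before": None, "after": {"id": 2, "status": "dispatched"}},
    {"op": "d", "before": {"id": 2, "status": "dispatched"}, "after": None},
]
for e in events:
    apply_cdc_event(state, e)

print(state)  # only row 1 survives, with its latest image
```

Keying on the primary key makes the apply step idempotent per row image, which matters when a CDC stream is replayed from an earlier offset.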