Data Software Engineer at Fastloop
Vancouver, BC V6B 2W5, Canada
Full Time


Start Date

Immediate

Expiry Date

16 Oct, 25

Salary

Not specified

Posted On

17 Jul, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

CDC, Production Experience, Google Cloud Platform, Scripting, DBT, Python, Distributed Systems, Software Development, Orchestration

Industry

Computer Software/Engineering

Description

REQUIRED SKILLS & EXPERIENCE:

  • 3–6 years of experience in software development or data engineering roles, with a strong focus on software engineering principles.
  • Proficiency in Python for application development, scripting, and data engineering tasks.
  • Solid production experience with cloud-native tools and services, with a primary focus on Google Cloud Platform (GCP) (e.g., Cloud Run, Kubernetes, Compute Engine/VMs, BigQuery, Cloud Composer, Cloud SQL). Experience with other cloud providers (e.g., Azure equivalents) is a plus.
  • Experience with Debezium (CDC), Apache Airflow, and DBT; an illustrative sketch of how these tools typically fit together follows this list.
  • Deep familiarity with troubleshooting and debugging complex distributed systems, orchestration, pipeline errors, job dependencies, and DAG performance.
  • Strong understanding of Git-based workflows and comfort with CI/CD and DevOps practices in software and data environments.
  • Exposure to ML pipeline enablement, including feature engineering, data prep, and model deployment support.
  • Experience with frontend development for building simple user interfaces, utilizing any modern UI framework (e.g., React, Angular, Vue.js) or plain HTML/CSS/JavaScript.
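
To make the stack above concrete, here is a minimal, illustrative sketch (not code from the posting) of an Airflow DAG that chains a placeholder ingestion step, standing in for landed Debezium/CDC output, into a dbt run. All identifiers (daily_dbt_refresh, ingest_orders, the project path) are hypothetical, and the schedule argument assumes Airflow 2.4 or newer.

```python
# Illustrative sketch only; DAG, task, and path names below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    # Placeholder for an ingestion step (e.g., verifying that Debezium/CDC
    # output has landed in the warehouse).
    ingest = BashOperator(
        task_id="ingest_orders",
        bash_command="echo 'ingestion/CDC landing check goes here'",
    )

    # Run dbt transformations once fresh data is available.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/my_dbt_project",
    )

    ingest >> dbt_run  # dbt runs only after ingestion succeeds
```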

Responsibilities

ABOUT THE ROLE:

We’re looking for a mid-senior Software Engineer / Data Engineer to join our consulting team and help build robust software solutions and support the data foundations behind our AI/ML initiatives. This role is ideal for someone with strong experience in software development, including designing, building, and deploying applications, coupled with a deep understanding of how these systems enable machine learning and AI product delivery.
You’ll play a key part in developing and maintaining our core software services and data stack across dev and prod environments, while working closely with data scientists, analysts, and technical leads to ensure our applications and ML/AI workflows are scalable, reliable, and well-orchestrated. You will be a versatile engineer capable of debugging and fixing various systems across the stack.

KEY RESPONSIBILITIES:

Full-Stack Software Development & System Ownership

  • Design, develop, and deploy scalable and reliable Python applications using GCP services like Cloud Run, Kubernetes, and Compute Engine (VMs).

  • Implement and maintain APIs, microservices, and backend systems to support various business functionalities and AI/ML initiatives (a minimal service sketch follows this list).
  • Debug, troubleshoot, and resolve complex issues across software applications, data pipelines, and infrastructure layers in both development and production environments.
  • Develop, test, and optimize transformation logic using DBT, ensuring high data quality for downstream use in analytics and ML workflows.
  • Build and maintain robust data ingestion pipelines using Debezium for CDC and Airflow for orchestration.
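
The bullets above mention Python services on Cloud Run; the sketch below is a minimal, illustrative example of such a service, not code from the posting. The framework choice (Flask) and the /healthz endpoint are assumptions; the PORT environment variable is the one Cloud Run actually injects.

```python
# Illustrative sketch only: a minimal Python microservice of the kind that
# might be deployed to Cloud Run. Flask and the endpoint are assumptions.
import os

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    # Simple liveness endpoint for platform health checks.
    return jsonify(status="ok")

if __name__ == "__main__":
    # Cloud Run supplies the serving port via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

In practice such a service would be containerized and deployed (for example with gcloud run deploy), but those details vary by project.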

Infrastructure Support & Evolution

  • Monitor and maintain infrastructure reliability, supporting production ML systems, batch/streaming data use cases, and core application services.

  • Continuously improve infrastructure based on tagged backlog items (e.g., flagged data engineering improvements and Liudas/Udit priorities).
  • Collaborate with engineering leadership to evolve the platform architecture supporting analytics, AI products, and general application needs.

ML & AI Enablement

  • Collaborate with data scientists to build pipelines that support training, inference, and model performance monitoring.

  • Support orchestration of ML workflows (e.g., model scoring jobs, batch inference, feature extraction) alongside DBT data pipelines.
  • Enable automated data and model refresh cycles through Airflow or custom scheduling/orchestration logic.
  • Ensure data pipelines produce structured, reliable, and scalable features usable across ML and conversational agent use cases.

Cross-Functional Collaboration & Consulting

  • Participate in discovery and planning sessions to align technical implementation with business objectives.

  • Provide expert-level guidance on software architecture, pipeline, and orchestration design decisions for client and internal projects.
  • Partner with functional consultants and AI/ML practitioners to understand solution requirements and integrate workflows effectively.