Senior Data Solution Architect at FULFILLMENT IQ INC
Toronto, Ontario, Canada
Full Time


Start Date

Immediate

Expiry Date

05 Jun, 26

Salary

$145 CAD per hour

Posted On

07 Mar, 26

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Architecture, Data Engineering, Google Cloud Platform (GCP), Snowflake, Data Modeling, BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud SQL, Cloud Spanner, Apache Iceberg, CDC, Apache Flink, Apache Kafka, SQL

Industry

IT Services and IT Consulting

Description
General Information:
Job Title: Senior Data Solution Architect
Location: Toronto (Remote/Hybrid)
Job Type: Full-Time or Contract for 12+ months
Reporting Line: SVP, Architecture
Salary Range: $120–$145 CAD per hour (negotiable)

About Fulfillment IQ (FIQ):
Fulfillment IQ is a supply chain engineering and transformation company that helps brands, retailers, and 3PLs design, build, and scale high-performance logistics operations. We work at the intersection of strategy, operations, and technology, solving complex, real-world problems across warehouse design, automation, order management, transportation, and end-to-end supply chain execution. Our teams combine deep domain expertise with strong technical capability, delivering outcomes through consulting, systems implementation, and proprietary platforms that accelerate time-to-value and reduce delivery risk. If you enjoy working in complex environments, partnering closely with clients, and seeing your work make a tangible impact on how global commerce moves, this is the place where your skills and judgment truly come to life.

Role Overview:
We are seeking an experienced Senior Data Solution Architect to design and implement the data architecture for a large-scale warehouse intelligence platform on Google Cloud Platform (GCP). The ideal candidate has a strong background in data architecture and data engineering and a deep understanding of the supply chain and logistics domains. The role is hands-on, with a focus on designing data pipelines, integrating multiple warehouse management systems, and implementing a real-time streaming layer.

Must Have:
- 8+ years of experience in data architecture or data engineering, with at least 3 years in a solution architect capacity
- 3+ years of experience with Snowflake, including data engineering, data modeling, and data warehousing
- Deep GCP experience, including BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud SQL, and Cloud Spanner
- Hands-on experience with Apache Iceberg and CDC expertise
- Experience with streaming architecture, including Apache Flink, GCP Dataflow, or Apache Kafka Streams
- SQL mastery and experience with Oracle databases
- Strong understanding of the supply chain and logistics domains
- Strong communication and collaboration skills

Preferred Qualifications:
- Experience with Apache Kafka, Blue Yonder, MuleSoft integration patterns, and multi-tenant/multi-site data architectures
- Familiarity with GenAI/LLM architectures and their data requirements
- Experience with MDM tools or patterns
- GCP Professional Data Engineer or equivalent certification

Nice-to-Have Qualifications:
- Familiarity with Google Cloud Spanner (DaaS)
- Experience with Polaris catalog for Iceberg table management

Key Responsibilities:
- Design and implement the end-to-end data architecture for a multi-site warehouse intelligence platform on GCP
- Develop a dual-layer data strategy, including analytics and real-time operational data layers
- Design and implement CDC pipelines using Fivetran, Debezium, or Oracle GoldenGate
- Develop the real-time operational data layer using Apache Flink or GCP Dataflow (see the first sketch following this description)
- Design integration patterns between the platform, Blue Yonder WMS, MuleSoft middleware, and downstream analytics consumers
- Develop data pipelines that scale to production volumes across 50+ sites
- Collaborate with the BI team to configure the Polaris catalog and the Iceberg table partitioning strategy (see the second sketch following this description)
- Establish data quality, lineage, and observability standards across all pipelines
- Participate in architecture reviews and provide technical leadership on data-related decisions

What Success Looks Like in the First 90 Days:

By Day 30
- Gain a deep understanding of the warehouse intelligence platform vision, existing client environments, and the data ecosystem across GCP, Snowflake, Iceberg, and streaming components.
- Establish strong working relationships with Architecture, BI, Platform Engineering, and client stakeholders.
- Review and validate current data flows, WMS integrations, CDC requirements, and multi-site data ingestion patterns.

By Day 60
- Deliver a high-level data architecture blueprint covering analytics, operational real-time layers, and integration patterns.
- Prototype key components: CDC pipeline (e.g., Debezium/GoldenGate), Iceberg table structure, and initial Dataflow/Flink streaming jobs.
- Define data quality, lineage, and observability frameworks aligned with project needs.

By Day 90
- Implement the first production-ready data pipelines for at least one warehouse site, including CDC ingestion and the real-time data layer.
- Validate the scalability approach for multi-site ingestion (50+ sites).
- Finalize architecture documentation, standards, and handoff patterns for Engineering and BI teams.
- Act as the primary data architecture authority in architecture reviews and cross-functional technical decisions.

Key Performance Indicators (KPIs):
- Delivery of the approved end-to-end data architecture blueprint
- On-time deployment of CDC and streaming pipelines
- Pipeline performance: latency, throughput, data freshness
- Data quality:
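To make the real-time operational layer concrete, here is a minimal, illustrative sketch of the kind of streaming job the posting describes: an Apache Beam pipeline (runnable on GCP Dataflow) that reads Debezium-style CDC events from Pub/Sub and appends them to a BigQuery operational table. It is a sketch under stated assumptions, not part of the role definition, and every project, subscription, table, and column name is a hypothetical placeholder.

```python
# Minimal sketch, assuming Debezium-style JSON CDC events on a Pub/Sub
# subscription and an existing BigQuery table. Every name below (project,
# subscription, dataset, table, columns) is a hypothetical placeholder.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_cdc_event(message: bytes) -> dict:
    """Flatten a Debezium change event into a row for the operational table."""
    event = json.loads(message.decode("utf-8"))
    payload = event.get("payload", {})
    after = payload.get("after") or {}  # 'after' is null on delete events
    return {
        "op": payload.get("op"),            # c = create, u = update, d = delete
        "site_id": after.get("site_id"),
        "order_id": after.get("order_id"),
        "source_ts_ms": payload.get("ts_ms"),
    }


def run() -> None:
    # streaming=True lets the same pipeline run as a Dataflow streaming job.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadCdcFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/wms-cdc")
            | "ParseCdcEvent" >> beam.Map(parse_cdc_event)
            | "AppendToOperationalTable" >> beam.io.WriteToBigQuery(
                table="example-project:warehouse_ops.order_events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```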
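Similarly, the Iceberg table partitioning strategy mentioned in the responsibilities can be illustrated with a short PySpark sketch. It assumes a Spark session already wired to an Iceberg catalog (for example, a Polaris/REST catalog registered under the name "lakehouse"); the namespace, table, and column names are illustrative, not taken from the posting.

```python
# Minimal sketch, assuming PySpark with the Iceberg runtime on the classpath
# and an Iceberg catalog configured as "lakehouse" (e.g. via Polaris).
# Namespace, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-partitioning-sketch").getOrCreate()

# Hidden partitioning: partition by site and by day of the event timestamp, so
# multi-site queries prune to one site/day slice instead of scanning 50+ sites.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lakehouse.analytics.order_events (
        order_id  STRING,
        site_id   STRING,
        status    STRING,
        event_ts  TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (site_id, days(event_ts))
""")
```

Because the days(event_ts) transform is hidden partitioning, downstream BI queries can filter on event_ts directly and still benefit from partition pruning without referencing a separate date column.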
Responsibilities
The role involves designing and implementing the end-to-end data architecture for a large-scale, multi-site warehouse intelligence platform on GCP, focusing on both analytics and real-time operational data layers. Key tasks include developing CDC pipelines, implementing streaming layers using technologies like Flink or Dataflow, and ensuring integration with warehouse management systems.