Senior Analytics Engineer at LotLinx Inc
Winnipeg, MB, Canada
Full Time


Start Date

Immediate

Expiry Date

09 Oct, 25

Salary

103,000

Posted On

09 Jul, 25

Experience

4 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Transformation, Aggregation, Communication Skills, Optimization Techniques, Tuning, Data Vault, Collaboration, Teams, Data Engineering, Data Manipulation, Inmon, Data Systems, Data Warehousing, Orchestration

Industry

Information Technology/IT

Description

Since our founding in 2012, Lotlinx has consistently pioneered advancements in the automotive landscape. We specialize in empowering automobile dealers and manufacturers by providing cutting-edge data and technology, delivering a distinct market advantage for every vehicle transaction. Today, we stand as the foremost automotive AI- and machine-learning-powered technology platform, excelling in digital marketing, risk management, and strategic inventory management.
Lotlinx provides employees with a dynamic work environment that is challenging, team-oriented, and full of passionate people. We offer great incentives to our employees, such as competitive compensation and benefits, flex time off, and career development opportunities.

POSITION SUMMARY

We are seeking an experienced Senior Analytics Engineer to join our growing Data team. You will play a pivotal role in architecting, building, and optimizing the data foundations that power analytics and data-driven decision-making across LotLinx. Reporting to the Director of Data Analytics, you will collaborate closely with Data Analysts, Data Engineers, and Product Managers to translate business needs into robust, scalable, and reliable data models and pipelines that are incorporated into our product portfolio. This is a key position where you’ll have significant ownership and impact on our data infrastructure and strategy.
This is a hybrid role and requires 3-4 days a week in either our Winnipeg, Hamilton, or Vancouver office locations.

Responsibilities

  • Architect & Build Data Models: Design, develop, and maintain scalable and performant data models in our data warehouse (Google BigQuery, Apache Pinot) to serve as the single source of truth for analytics.
  • Data Analysis: Conduct data validation and exploratory analysis across massive datasets (billions of rows, terabytes) to ensure the integrity of data pipelines and the accuracy of downstream reporting.
  • Develop Large Data Pipelines: Develop, monitor, and troubleshoot ELT/ETL pipelines processing high-volume data streams, ensuring reliability and performance at the terabyte scale.
  • Optimize Pipeline Performance: Optimize complex SQL queries and data transformation logic for maximum performance and cost-efficiency on multi-terabyte datasets within Google BigQuery.
  • Enhance OLAP Performance: Analyze Apache Pinot query performance logs and usage patterns across terabyte-scale datasets to identify optimization opportunities and troubleshoot complex data access issues.
  • Enable Data Consumers: Partner with data analysts, data scientists, and business stakeholders to understand their data requirements, providing clean, well-documented, and easy-to-use datasets.
  • Champion Data Quality & Governance: Implement data quality checks, testing frameworks, and documentation standards to ensure the trustworthiness and usability of our data assets.
  • Collaborate & Mentor: Work effectively within a collaborative team environment. Potentially mentor junior team members and share best practices in analytics engineering.
  • Stay Current: Keep abreast of new technologies, tools, and best practices in the analytics engineering space and advocate for their adoption where relevant.

EXPERIENCE: 4+ years of relevant professional experience in analytics engineering, data engineering, or a closely related role, with a proven track record of building and managing complex data systems.

  • Expert SQL: Deep expertise in writing complex, highly performant SQL for data transformation, aggregation, and analysis, particularly within a cloud data warehouse environment like BigQuery.
  • Performance Optimization: Demonstrated experience writing, tuning, and debugging complex SQL queries specifically for large-scale data warehouses (multi-terabyte environments), preferably BigQuery or Apache Pinot.
  • Data Modeling Mastery: Strong understanding of data modeling concepts (e.g., Kimball, Inmon, Data Vault) and practical experience designing and implementing warehouse schemas.
  • ETL/ELT & Orchestration: Proven experience building and maintaining data pipelines using relevant tools and frameworks. Python proficiency for scripting and data manipulation is essential.
  • Cloud Data Warehousing: Significant experience working with cloud data warehouses, specifically Google BigQuery. Understanding of underlying architecture and optimization techniques.
  • Problem-Solving: Excellent analytical and problem-solving skills, with the ability to troubleshoot complex data issues independently.
  • Communication & Collaboration: Strong communication skills, capable of explaining complex technical concepts to both technical and non-technical audiences. Proven ability to collaborate effectively across teams.