Data Science Developer at Confidential
Toronto, ON M5V 2X4, Canada
Full Time


Start Date

Immediate

Expiry Date

27 Sep, 25

Salary

Not specified

Posted On

25 Aug, 25

Experience

5+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Analytics, Data Engineering, Technical Documentation, Scripting, Python, Documentation, Version Control, Data Solutions, Communication Skills, Interpersonal Skills, Pipeline Development

Industry

Information Technology/IT

Description

REQUIRED SKILLS AND EXPERIENCE

  • 5+ years of hands-on experience in a Microsoft Azure cloud environment, focused on data engineering and analytics.
  • Extensive experience designing, building, and maintaining cloud-based data lake and lakehouse architectures.
  • Strong expertise in creating and orchestrating automated data pipelines using Azure Data Factory (ADF) and Databricks.
  • Proven ability to implement the medallion architecture for organizing and handling lakehouse data.
  • Familiarity with Databricks Unity Catalog or similar governance frameworks is a strong plus.
  • Proficient in Python and SQL for data engineering, analytics development, and scripting.
  • Practical experience with CI/CD principles and tools for automating deployment and lifecycle management of data solutions.
  • Comfortable with GitHub workflows for version control and peer review in collaborative projects.
  • Demonstrated ability to understand client technology needs and translate them into robust technical solutions.
  • Skilled at troubleshooting and resolving complex, multi-faceted data system failures.
  • Experience preparing documentation, performing knowledge transfer, and training colleagues.
  • Prior experience working in Agile software development teams.

DESIRABLE SKILLS

  • Strong written and verbal communication skills to actively contribute to meetings, create technical documentation, present findings, and promote best practices.
  • Effective interpersonal skills for explaining the pros and cons of technical alternatives and collaborating with diverse stakeholders.

MUST-HAVE QUALIFICATIONS

  • 5+ years of experience in Azure cloud data engineering environments.
  • 5+ years of experience with Azure Data Factory and Databricks for data pipeline development.
  • 5+ years of programming experience with Python and SQL.

Job Types: Full-time, Fixed term contract
Contract length: 130 days
Work Location: Hybrid remote in Toronto, ON M5V 2X4
Application deadline: 2025-07-29
Expected start date: 2025-08-01

How To Apply:

If you would like to apply to this job directly from the source, please click here.

Responsibilities

ABOUT THE ROLE:

We are seeking a highly skilled Senior Data Science Developer to join our dynamic product teams. The ideal candidate will possess deep expertise in designing, developing, and maintaining cloud-based data and analytics solutions in Microsoft Azure environments, including modern data lakehouse architectures. You will work closely with IT and business stakeholders to deliver scalable, high-quality data pipelines and analytics models that drive business insights.

RESPONSIBILITIES

  • Collaborate within cross-functional product teams to analyze system requirements, architect, design, develop, test, and deploy cloud-based data and analytics products adhering to organizational standards.
  • Design, implement, and maintain scalable cloud-native data lake and lakehouse architectures, automated data pipelines, and analytics solutions.
  • Liaise with cluster IT teams to implement data solutions, conduct system/code reviews, troubleshoot and resolve operational issues.
  • Analyze complex technical challenges, evaluate alternatives, and recommend optimal solutions to improve data platform capabilities.
  • Drive migration of legacy data pipelines from Azure Synapse Analytics and Azure Data Factory—including stored procedures, views, and Parquet files in Azure Data Lake Storage (ADLS)—to modern Databricks-based solutions using Delta Lake and native orchestration tools.
  • Develop and promote reusable frameworks and standards to streamline data pipeline development and ensure consistent quality.
  • Participate actively in peer code reviews, enforce coding best practices, and conduct knowledge transfer sessions to enhance team capabilities and facilitate smooth project handoffs.