Data Engineer (Contract) at ACTO Technologies
Remote, British Columbia, Canada
Full Time


Start Date

Immediate

Expiry Date

13 Dec, 25

Salary

Not specified

Posted On

13 Sep, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Flow, Python, Data Structures, Infrastructure Solutions, Reporting, Reference Data, CRM, Quality Processes, Snowflake

Industry

Information Technology/IT

Description

Job Type: Contract (6-12 months)
Job Location: Remote; must be able to work Eastern Standard Time (EST) hours

ABOUT US:

ACTO is an Intelligent Field Excellence (IFE) platform built for life sciences that improves field and HCP interactions with unified agentic AI. ACTO helps Sales, Marketing, and Medical teams improve customer engagement and brand performance by turning field professionals into “Masters of the Message” who engage HCPs and their support teams with authority and impact. ACTO partners with biopharma companies to ensure field professionals are always competent, confident, and credible, delivering the right message to HCPs, while providing senior leaders and frontline managers with the insight they need to drive continuous field force effectiveness. As a validated platform compliant with FDA 21 CFR Part 11 and SOC 2 Type II certified, ACTO is the trusted partner for intelligent field excellence in the life sciences industry. For more information, visit www.acto.com.

Role Summary:

We are seeking a skilled Data Engineer to design, build, and maintain robust data pipelines and a scalable data lakehouse architecture. The ideal candidate will integrate various data sources and tools, such as Snowflake, Databricks, and CRM systems, ensuring seamless data flow across systems. This role also involves supporting the Data Architect in implementing efficient, secure, and reliable data infrastructure solutions.

In this role, you will be responsible for:

  • Build and maintain data processing pipelines and tools using state-of-the-art technologies.
  • Work with Python on Spark-based data pipelines (a brief sketch follows this list).
  • Develop algorithms to build complex data relationships.
  • Build analytical data structures to support reporting.
  • Build and maintain data quality processes.
  • Collaborate with the Product team to adapt our reference data to changing demands in the market.
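
As a rough illustration of that pipeline work, the sketch below shows a minimal PySpark batch job: ingest raw CRM events, apply a basic quality filter, and build an analytical table for reporting. All paths, table names, and column names are hypothetical, invented for illustration rather than taken from ACTO's actual stack.

    # Minimal Spark pipeline sketch: ingest, quality-filter, aggregate, publish.
    # All paths and column names below are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("crm-events-daily").getOrCreate()

    # Ingest: raw CRM interaction events landed in the lake as Parquet.
    raw = spark.read.parquet("s3://example-lake/raw/crm_events/")

    # Data quality: drop records missing required keys, then deduplicate.
    clean = (
        raw.dropna(subset=["event_id", "hcp_id", "event_ts"])
           .dropDuplicates(["event_id"])
    )

    # Transform: build an analytical structure to support reporting.
    daily_engagement = (
        clean.withColumn("event_date", F.to_date("event_ts"))
             .groupBy("event_date", "hcp_id")
             .agg(F.count("event_id").alias("interactions"))
    )

    # Publish: write back to the lakehouse, partitioned for downstream queries.
    (daily_engagement.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-lake/analytics/daily_engagement/"))

The same separation of ingest, quality, and publish steps carries over to the Snowflake, Databricks, and CRM integrations mentioned in the role summary.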

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Requirements

TO BE SUCCESSFUL IN THIS ROLE, YOU’LL NEED:

  • 4+ years of experience developing data pipelines using cloud-managed Spark clusters (e.g., AWS EMR, Databricks)
  • Must have experience with AWS Athena and Glue architecture
  • Fluent in Python and Spark (3+ years of experience)
  • Previous experience building tools and libraries to automate and streamline data processing workflows
  • Proficient with SQL / SparkSQL (a short illustrative example follows this list)
  • Hands-on experience working with a data lakehouse
  • Good verbal and written communication skills in English
  • Proven experience working and delivering in an Agile environment
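
As a short, hypothetical example of the SQL/SparkSQL proficiency listed above, the same kind of aggregation can be expressed declaratively against a registered view; again, every table and column name here is invented for illustration.

    # SparkSQL sketch: the aggregation expressed as SQL over a temp view.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sparksql-example").getOrCreate()

    events = spark.read.parquet("s3://example-lake/raw/crm_events/")
    events.createOrReplaceTempView("crm_events")

    daily_engagement = spark.sql("""
        SELECT to_date(event_ts) AS event_date,
               hcp_id,
               COUNT(DISTINCT event_id) AS interactions
        FROM crm_events
        WHERE event_id IS NOT NULL
          AND hcp_id IS NOT NULL
        GROUP BY to_date(event_ts), hcp_id
    """)

    daily_engagement.show(10)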
