Senior Data Engineer at Procom
Remote, British Columbia, Canada - Full Time


Start Date

Immediate

Expiry Date

08 Nov, 25

Salary

0.0

Posted On

09 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Agile, Data Modeling, Data Security, SQL, Python, AWS, Glue, Unstructured Data

Industry

Information Technology/IT

Description

SENIOR DATA ENGINEER:

On behalf of our Oil & Gas client, Procom is searching for a Senior Data Engineer for a 6-month remote role.

SENIOR DATA ENGINEER - JOB DESCRIPTION:

Our client is seeking a skilled Senior Data Engineer to join its Data & Platforms team. This role is integral to the Global Commodity Value Chain Optimization initiative, focusing on designing and delivering scalable data products for analytics and decision-making. You will build the data foundation for real-time optimization models and improve logistics efficiency.

SENIOR DATA ENGINEER - MANDATORY SKILLS:

  • Expert in Databricks, AWS (S3, Glue, Lambda), Python, SQL, and PySpark.
  • Strong experience with data modeling and warehousing.
  • Familiarity with ML workflows and model integration.
  • Experience with CI/CD pipelines and data quality frameworks.
  • Comfortable working in Agile or cross-functional product teams.
  • Knowledge of data security, privacy, and compliance standards.

SENIOR DATA ENGINEER - NICE-TO-HAVE SKILLS:

  • Experience with real-time pipelines (Kafka, Kinesis).
  • Experience with unstructured data (e.g., PDFs).
  • RegEx-based rule development.
  • Relevant certifications.

SENIOR DATA ENGINEER - RESPONSIBILITIES:

  • Design, build, and maintain ETL/ELT pipelines using Databricks and AWS services.
  • Ingest and process structured and unstructured data from various sources.
  • Implement batch and streaming pipelines with Databricks Autoloader and Structured Streaming.
  • Automate workflows and orchestrate data jobs using CI/CD tools.
  • Develop data products for analytics, reporting, and machine learning use cases.
  • Monitor Spark job performance and troubleshoot production data issues.
  • Implement data quality checks and maintain metadata and data lineage.
  • Embed ML models into data workflows.
  • Document data architectures and transformation logic.
  • Collaborate with stakeholders to deliver business-aligned solutions.