Data Systems Technologist at Chemelex
Edmonton, AB, Canada
Full Time


Start Date

Immediate

Expiry Date

30 Nov, 25

Salary

0.0

Posted On

01 Sep, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Apache Kafka, Snowflake, MySQL, DQL, SQL, SQL Server, Orchestration, Ticketing, Computer Science, GitLab, Database Development, Agile, Docker, Collaboration Tools, Software Development Methodologies, Computer Engineering, Git, PostgreSQL, Participation

Industry

Information Technology/IT

Description

Chemelex is a global leader in electric thermal and sensing solutions, protecting the world’s critical processes, places and people. With over 50 years of innovation and a commitment to excellence, we develop solutions that ensure safety, reliability, and efficiency in diverse environments – from industrial plants and data centers to people’s homes. We deliver future-ready technologies, advanced engineering capabilities and local expertise backed by global standards. Our offering includes a leading portfolio from our trusted brands: Raychem, Tracer, Nuheat and Pyrotenax.
Job Summary
We are seeking a talented and self-driven Data Systems Technologist to design, implement, and optimize robust data systems across Chemelex’s software portfolio. This role focuses on advancing cloud-based and industrial IoT (IIoT) data architectures to enable real-time analytics, seamless system integration, and insightful business intelligence. You will collaborate across cross-functional teams in a hybrid work environment, ensuring scalable, reliable, and high-performance data infrastructure.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines and ETL frameworks to support enterprise-wide data initiatives.
  • Develop and implement data management solutions across cloud and on-premises IIoT environments.
  • Collaborate with software engineers, system architects, and business stakeholders to define data architecture and integration requirements.
  • Research and integrate modern data technologies (e.g., lakehouse, streaming, edge computing) to improve feature sets, reduce operational costs, and enhance service quality.
  • Enforce data governance, security, validation, and monitoring protocols to maintain high data quality and system integrity.
  • Incorporate DevOps principles and CI/CD practices into data engineering workflows.
  • Embrace a culture of inclusion, innovation, and continuous improvement.

Required Qualifications

  • Bachelor’s degree or higher in Computer Science, Computer Engineering, Data Engineering, or a related field; or equivalent hands-on experience.
  • 2+ years of professional experience in data engineering, database development, or data application development.
  • Proficient in SQL (including DQL and DML) or equivalent query languages.
  • Experience with modern data platforms such as Snowflake, Redshift, BigQuery, or equivalent.
  • Proven experience in deploying and managing data workloads in cloud environments (e.g., Azure, AWS).
  • Understanding and application of DevOps principles within data environments.
  • Familiarity with Agile, Scrum, or Lean software development methodologies.
  • Exposure to multiple programming languages (e.g., Python, Java, Scala).
  • Working knowledge of source control, ticketing, and collaboration tools (e.g., Git, GitLab, Jira, Confluence).
  • Experience working with relational databases such as PostgreSQL, MySQL, SQL Server.

Preferred Qualifications

  • Experience with real-time or near-real-time data processing tools (e.g., Apache Kafka, Spark Streaming, Flink).
  • Familiarity with containerization and orchestration (Docker, Kubernetes).
  • Knowledge of NoSQL databases (e.g., MongoDB, Cassandra).
  • Contributions to open-source projects or participation in data communities.