Senior Data Engineer at Peak Power Inc
Toronto, ON M5H 3S6, Canada
Full Time


Start Date

Immediate

Expiry Date

29 Nov, 25

Salary

0.0

Posted On

30 Aug, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Mathematics, Data Processing, Code, Python, Inmon, Security, Reliability, Productivity, Scalability, Machine Learning, Apache Spark, Access, Software Development, Physics, Data Security, Communication Skills, Cloud Computing, Design, Cloud Applications, Testing, Amazon S3

Industry

Computer Software/Engineering

Description

AT PEAK POWER, WE’RE MAKING IT PROFITABLE TO PURSUE NET ZERO

Peak Power enables commercial and industrial (C&I) companies to reduce costs while pursuing net zero goals by working with financiers to provide a no-cost battery energy storage solution that covers deployment, operation, and maintenance. We also help distributed energy resource asset owners optimize their assets through our Peak Synergy software.
Peak Synergy is our energy efficiency software platform. It analyzes grid conditions in real time and sends notifications during high-demand hours. The platform also tracks how carbon-intensive the grid is at any moment. If a building has batteries and EV chargers onsite, Peak Synergy manages their usage to meet sustainability goals and participate in complex energy market programs.
At the end of the day, this means cleaner, more reliable, and more affordable electricity for everyone. We’re powering the clean energy revolution.

THE EXPERIENCE AND EDUCATION

  • Bachelor’s degree in software engineering, computer science or related technical field (e.g. EE, physics or mathematics), or equivalent practical experience
  • AWS certifications, or equivalent practical experience
  • 5+ years of experience as a data engineer, software engineer or similar technical role using Python to build, test and deploy data pipelines
  • Experience deploying and maintaining containerized cloud applications and cloud functions
  • Experience working with relational and time series databases, like Postgres, TimescaleDB and InfluxDB
  • Experience with workflow orchestration tools such as Apache Airflow or Luigi
  • Experience with infrastructure-as-code software such as Pulumi and Terraform
  • Experience with MLOps platforms such as Kubeflow and SageMaker is a nice-to-have
  • Passion for, and experience with, leveraging Generative AI tools (e.g., Cursor, Claude Code, IDE-integrated copilots) to accelerate development workflows and enhance productivity
  • Experience with large-scale data processing and distributed computing frameworks such as Apache Spark and Apache Flink
  • Experience building a data lake using Amazon S3, Apache Iceberg, Delta Lake, or similar technologies
  • Experience with data warehousing solutions (e.g. Snowflake, BigQuery, Redshift), including knowledge of dimensional data modelling techniques (e.g. Kimball, Inmon)
  • General knowledge of software development, APIs, data stores, networking, security, machine learning and cloud computing services
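To give a flavour of the kind of work described above (building, testing, and deploying data pipelines in Python), here is a minimal, illustrative extract-transform-load sketch. It uses only the Python standard library; the CSV fields, table schema, and function names are hypothetical, not part of Peak Power's actual stack, where tools like Airflow, Postgres/TimescaleDB, and Spark would typically take these roles at scale.

```python
import csv
import io
import sqlite3

# Hypothetical raw meter readings; the second row has a missing kWh value.
RAW = """timestamp,site,kwh
2025-08-30T00:00:00,plant-a,12.5
2025-08-30T01:00:00,plant-a,
2025-08-30T02:00:00,plant-a,11.9
"""

def extract(text):
    # Parse CSV text into a list of dicts keyed by the header row.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Drop rows with missing readings and cast kWh to float.
    return [{**r, "kwh": float(r["kwh"])} for r in rows if r["kwh"]]

def load(rows, conn):
    # Load cleaned rows into a simple time-series table; return the row count.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (timestamp TEXT, site TEXT, kwh REAL)"
    )
    conn.executemany(
        "INSERT INTO readings VALUES (:timestamp, :site, :kwh)", rows
    )
    return conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2 (the row with a missing reading is dropped)
```

In a production pipeline, each of these three steps would typically be a separately scheduled, tested, and retryable task (e.g., an Airflow DAG), rather than a single in-process call chain.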
Responsibilities

Please refer to the job description for details.
