Data Engineer (Analytics) at Netskrt Systems Inc
Vancouver, BC, Canada - Full Time


Start Date

Immediate

Expiry Date

24 Oct 2025

Salary

90,000

Posted On

24 Jul 2025

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Spark, Cassandra, NoSQL, Python, Visualization, Communication Skills, Data Warehouse, Version Control, Big Data, Computer Science, Databases, Google Cloud, Data Engineering, SQL, Kafka, Data Modeling, PostgreSQL, Java, AWS, Data Systems, Kibana, Hadoop, Collaboration, Cloud

Industry

Information Technology/IT

Description

Netskrt is seeking a talented and motivated Data Engineer to join our Analytics Team. This is a hybrid role (work from home up to 3 days per week) based in our beautiful downtown Vancouver office, next to Burrard SkyTrain station.
At Netskrt, we’re a highly driven team focused on building innovative products and services that enhance the customer experience of streaming video at the edge of the network. We’ve developed a suite of interrelated technologies aimed at businesses offering customers Wi-Fi in bandwidth-constrained environments.
This role offers hands-on experience across data infrastructure, networking, security, and cloud technologies—all while solving complex problems in a dynamic startup setting, alongside accomplished engineers and a leadership team with a strong track record of success. If you’re passionate about designing scalable data pipelines, experimenting with complex datasets, and uncovering insights from diverse data sources, this is an exciting opportunity to have a meaningful impact on our technology landscape.

REQUIRED SKILLS & QUALIFICATIONS:

  • Bachelor’s degree in Computer Science or equivalent professional experience.
  • 2+ years of hands-on experience in data engineering or related fields.
  • Proficiency in building and managing both stream and batch processing pipelines, and in understanding when each is appropriate.
  • Strong understanding of data modeling, ETL design, and warehouse architecture.
  • Experience working with large-scale, distributed data systems.
  • Excellent problem-solving skills and strong attention to detail.
  • Strong communication skills and an aptitude for collaboration.
  • Ability to multitask and thrive in a fast-paced, high-growth environment.
  • Passion for continuous learning and staying on top of data engineering innovations.

DESIRED QUALIFICATIONS:

  • Programming: Python, Scala, or Java
  • Big Data: Hadoop, Spark, Kafka, or similar frameworks
  • Databases: SQL and NoSQL systems such as PostgreSQL, ClickHouse, or Cassandra
  • Visualization: Grafana, Superset, Kibana
  • Workflow Automation: Apache Airflow or equivalent orchestration tools
  • Version Control: Git (GitHub/GitLab/Bitbucket)
  • RESTful APIs and building scalable data pipelines
  • Cloud: Experience with AWS (e.g., Redshift, S3, Lambda), Google Cloud (BigQuery, Dataflow), or Azure (Data Lake, SQL Data Warehouse)

RESPONSIBILITIES:

  • Collaborate with cross-functional teams to design and maintain robust, scalable data pipelines that automate data extraction, transformation, and loading from sources such as databases, APIs, and flat files.
  • Integrate and unify disparate data sources for analytical and reporting purposes.
  • Develop and maintain structured data models and warehousing using industry best practices.
  • Design and optimize ETL processes to handle both real-time streams and batch workloads.
  • Monitor and troubleshoot data workflows for performance and scalability.
  • Work closely with Data Scientists, Analysts, and Business Intelligence teams to deliver impactful solutions.
  • Champion data quality, integrity, and compliance across all workflows.