Senior Data Engineer at NextT consultants
Mississauga, ON L5G 4V5, Canada - Full Time


Start Date

Immediate

Expiry Date

16 Nov, 25

Salary

$91,368.34 per year

Posted On

16 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Containerization, Docker, Hadoop, Cloud Services, Python, Kafka, Scala, Java, Spark, Database Systems, SQL Server

Industry

Information Technology/IT

Description

We are seeking a highly skilled Senior Data Engineer to design, develop, and manage modern data solutions leveraging cloud technologies, big data frameworks, and advanced programming skills. The ideal candidate will have strong expertise in AWS, data pipeline development, and Agile delivery practices, with the ability to work across multiple data platforms and tools.

QUALIFICATIONS

  • Proven experience with AWS cloud services for deploying and managing end-to-end data solutions.
  • Strong programming skills in Python, Java, or Scala.
  • Hands-on experience with big data frameworks (Spark, Hadoop, Kafka).
  • Proficiency in SQL and experience with multiple database systems (SQL Server, Oracle).
  • Familiarity with modern data orchestration tools (Airflow, Prefect).
  • Experience with Agile development methodologies.
  • Knowledge of containerization (Docker, Kubernetes) is a plus.

Job Type: Full-time
Pay: $91,368.34 - $125,000.00 per year

Benefits:

  • Dental care
  • Extended health care
  • Life insurance
  • Paid time off

Ability to commute/relocate:

  • Mississauga, ON L5G 4V5: reliably commute or plan to relocate before starting work (preferred)

Education:

  • Bachelor’s Degree (preferred)

Work Location: Hybrid remote in Mississauga, ON L5G 4V5

How To Apply:

If you would like to apply to this job directly from the source, please click here

Responsibilities

  • Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from diverse sources into cloud-based data platforms.
  • Develop and manage large-scale, distributed data processing systems using Apache Spark, Hadoop, and Kafka.
  • Create and maintain data solutions in AWS (Glue, EMR, Redshift, S3, Lambda) and work with Azure Synapse or GCP BigQuery as needed.
  • Program in Python, Java, and/or Scala to develop custom applications and data workflows.
  • Work with Microsoft SQL Server, Oracle, and associated management tools for data modeling, transformation, and performance optimization.
  • Orchestrate data workflows using Apache Airflow or Prefect to ensure timely and reliable data delivery.
  • Apply Agile methodologies to improve collaboration, transparency, and project delivery timelines.
  • Collaborate with cross-functional teams, including data scientists and analysts, to ensure data quality, consistency, and accessibility.
  • Leverage containerization technologies such as Docker and Kubernetes to package and deploy applications where applicable.