Senior Data Engineer at Manulife
Toronto, ON M4W 1E5, Canada
Full Time


Start Date

Immediate

Expiry Date

12 Nov 2025

Salary

94,220

Posted On

12 Aug 2025

Experience

6 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Automation, Access Control, Python, RBAC, Azure Subscriptions

Industry

Information Technology/IT

Description

As a market leader, Manulife / John Hancock is dedicated to delivering an exceptional experience to our customers and end users. Technology is the most important enabler of our ability to deliver that experience and grow the business.
We are seeking a highly skilled and experienced Senior Data Engineer to join the Canada Data Office here at Manulife!
The ideal candidate will lead a team of engineers responsible for building, maintaining, improving, and supporting new and existing Azure data pipelines for a variety of analytical use cases.
Collaborating with cross-functional teams, understanding our data, and using it to serve our businesses and operations, all while ensuring alignment with Manulife’s IT security and risk guidelines, is essential to delivering high-quality, scalable software solutions.
Are you a self-starter who loves technical challenges and is looking for your next step as an engineering leader? Are you passionate about building high-quality, user-friendly solutions? If so, we want to hear from you!

REQUIRED QUALIFICATIONS:

  • Bachelor’s degree or equivalent experience in computer/IT or data-related fields is required, with a Master’s degree being a plus.
  • 6+ years of data engineering experience with Azure-related data technologies.
  • Solid understanding of Azure infrastructure, including subscriptions, resource groups and resources, access control with RBAC (role-based access control), Azure AD integration, Azure security principals (user groups, service principals, managed identities), networking concepts (VNet, subnet, NSG rules, private endpoints), password/credential/key management, and data protection.
  • Strong hands-on knowledge of Azure Databricks, ADF, ADLS, Synapse (serverless, dedicated, and Spark pools), Python, PySpark, and T-SQL, along with experience designing and developing scripts for ETL processes and automation in Azure Data Factory and Azure Databricks (a minimal pipeline sketch follows this list).
  • High proficiency with Git, Jenkins, and DevOps processes to maintain and resolve issues with data pipelines in production.
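
A minimal sketch of the kind of ADLS-to-Delta pipeline step this role describes, assuming an Azure Databricks runtime where PySpark and Delta Lake are available; the storage account, container, table paths, and column names below are hypothetical placeholders, not Manulife's actual estate.

    # Read raw CSV from ADLS Gen2, apply light cleanup, write a Delta table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("policy-etl").getOrCreate()

    raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/policies/"
    curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/policies_delta/"

    df = spark.read.option("header", "true").csv(raw_path)

    # Basic cleanup: cast types and drop rows missing the business key.
    cleaned = (
        df.withColumn("effective_date", F.to_date("effective_date", "yyyy-MM-dd"))
          .withColumn("premium", F.col("premium").cast("double"))
          .filter(F.col("policy_id").isNotNull())
    )

    # Overwrite for simplicity; a production job would typically run
    # incrementally, orchestrated by Azure Data Factory or Databricks Jobs.
    cleaned.write.format("delta").mode("overwrite").save(curated_path)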

PREFERRED QUALIFICATIONS:

  • Knowledge of provisioning Azure resources and networking via Terraform, along with the ability to troubleshoot Azure infrastructure issues in production.
  • Experience with data modelling, data marts, data lakehouse architecture, slowly changing dimensions (SCD), data mesh, and Delta Lake (an SCD sketch follows this list).
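
A minimal sketch of an SCD Type 2 upsert on a Delta table, again assuming Databricks or delta-spark; the paths, the customer_id/address columns, and the is_current/valid_from/valid_to bookkeeping columns are illustrative assumptions, not a prescribed design.

    # SCD Type 2 sketch: expire changed current rows, then append new versions.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    dim_path = "abfss://curated@examplestorage.dfs.core.windows.net/dim_customer"
    dim = DeltaTable.forPath(spark, dim_path)
    updates = spark.table("staging.customer_changes")  # hypothetical staging table

    # Rows that are brand new or whose tracked attribute changed.
    current = spark.read.format("delta").load(dim_path).filter("is_current = true")
    changed_or_new = (
        updates.alias("s")
        .join(current.alias("t"),
              F.col("s.customer_id") == F.col("t.customer_id"), "left")
        .filter(F.col("t.customer_id").isNull() |
                (F.col("t.address") != F.col("s.address")))
        .select("s.*")
    )

    # Step 1: close out the superseded current rows.
    (
        dim.alias("t")
        .merge(updates.alias("s"),
               "t.customer_id = s.customer_id AND t.is_current = true")
        .whenMatchedUpdate(
            condition="t.address <> s.address",
            set={"is_current": "false", "valid_to": "current_date()"})
        .execute()
    )

    # Step 2: append the new versions as the current rows.
    (
        changed_or_new
        .withColumn("is_current", F.lit(True))
        .withColumn("valid_from", F.current_date())
        .withColumn("valid_to", F.lit(None).cast("date"))
        .write.format("delta").mode("append").save(dim_path)
    )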

RESPONSIBILITIES:

  • Design, develop, and manage data pipelines that extract, transform, and load data from diverse sources.
  • Automate infrastructure provisioning and deployment using tools such as Terraform.
  • Collaborate with architecture, security, and risk teams to implement the latest guidelines and Azure best practices.
  • Collaborate with business stakeholders to propose data solutions that align with business goals and improve decision-making processes.
  • Demonstrate a solid understanding of data privacy and compliance regulations and best practices (see the masking sketch below).
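
One common control for the privacy point above, sketched here under assumptions: hash direct identifiers with a salted SHA-256 digest before data reaches analyst-facing tables, so records stay joinable without exposing raw values. The column names and the hard-coded salt are illustrative; in Databricks the salt would normally be fetched from Azure Key Vault via a secret scope rather than written in code.

    # Column-level PII masking sketch in PySpark.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    PII_COLUMNS = ["email", "phone_number", "sin"]  # hypothetical identifiers

    def mask_pii(df, columns=PII_COLUMNS, salt="replace-with-key-vault-secret"):
        """Replace each PII column with a salted SHA-256 digest so records
        stay joinable across tables without exposing the raw identifier."""
        for name in columns:
            if name in df.columns:
                df = df.withColumn(
                    name, F.sha2(F.concat(F.lit(salt), F.col(name)), 256))
        return df

    masked = mask_pii(spark.table("curated.customers"))  # hypothetical table
    masked.write.mode("overwrite").saveAsTable("analytics.customers_masked")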