Director, Artificial Intelligence (AI) & Emerging Technology Oversight at Manulife
Toronto, ON M4W 1E5, Canada
Full Time


Start Date

Immediate

Expiry Date

05 Dec, 25

Salary

110,530

Posted On

06 Sep, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Presentations, Emerging Technologies, Machine Learning, Analytics, Decision Making, Risk, Documentation, Communication Skills, Regulations, Risk Frameworks, Accountability

Industry

Financial Services

Description

We are seeking a strategic and experienced Director, Artificial Intelligence (AI) & Emerging Technology Oversight. This is a Line 2 leadership role responsible for the independent oversight, challenge, and governance of AI systems – including machine learning, LLMs, and Gen AI applications – across the enterprise. The position will lead the design and execution of a fit-for-purpose AI risk oversight framework, ensuring that technical implementations of AI align with enterprise risk appetite, regulatory expectations, and secure software development best practices. The role will act as a strategic advisor across security, data, cloud, and engineering teams.

KEY QUALIFICATIONS:

  • 7-10 years in cybersecurity, risk management, legal, analytics, or model and AI/ML governance
  • Deep understanding of AI technologies, machine learning/AI risk management best practices, and relevant global laws and regulations
  • Commitment to continuous learning in machine learning, generative AI, agentic risk technology, and risk management trends, with a focus on AI governance
  • Experience in developing policies and conducting risk assessments, and a strong understanding of risk frameworks such as the NIST Cybersecurity Framework, ISO 27001/27002, and PCI DSS 4.0
  • Ability to work cross-functionally, aligning AI risk management with broader business strategies
  • Excellent verbal and written communication skills, with the ability to effectively convey complex AI risk concepts and insights to senior leadership, collaborators, and board members. Skilled in crafting clear and concise reports, presentations, and documentation to facilitate informed decision-making
  • Expertise in engaging with diverse collaborators to integrate their feedback into AI risk management practices
  • Commitment to promoting ethical AI practices and ensuring fairness, transparency, and accountability
  • Ability to effectively manage crises related to AI risks, demonstrating resilience and adaptability
  • Keen interest in emerging technologies and innovations in AI, with the ability to assess potential risks and opportunities
Responsibilities
  • Lead the establishment of an AI risk management oversight and challenge function to proactively identify and assess AI and emerging risks in partnership with risk partners and first-line teams.
  • Develop and implement a robust Line 2 oversight framework for managing risks associated with AI, generative AI, and emerging technologies.
  • Collaborate with the AVP, AI and Emerging Risk, to define comprehensive AI-specific risk taxonomies, control expectations, and escalation criteria that align with the organization’s enterprise risk appetite.
  • Create and facilitate governance forums designed to rigorously review and challenge high-risk AI use cases, promoting accountability and transparency.
  • Spearhead AI risk awareness initiatives, including events and training sessions, to educate collaborators on AI risk management frameworks, second-line defense processes, metrics, and reporting.
  • Design and establish reporting and assurance processes to provide clear and effective oversight to the Board, risk committees, and regulators, ensuring transparency and accountability.
  • Critically assess and challenge Line 1 risk assessments and controls throughout the AI/ML lifecycle, from model development through deployment and ongoing monitoring.
  • Analyze and evaluate emerging threats such as model drift, bias, data poisoning, adversarial attacks, prompt injection, and hallucination risks to ensure proactive risk mitigation.
  • Conduct thorough assessments of AI technical implementations, focusing on cybersecurity, data loss prevention, IAM, cloud security posture, and third-party dependencies to safeguard organizational assets.
  • Ensure generative AI implementations adhere to stringent controls for data minimization, auditability, RBAC, encryption, sandboxing, and human-in-the-loop safeguards.
  • Oversee and mitigate risks associated with shadow AI or unauthorized model use in sensitive business functions, ensuring compliance and security.
  • Lead the operationalization of compliance efforts with OSFI Guidelines E-23, B-10, and B-13, and map controls to NIST AI and ISO standards, ensuring regulatory adherence.
  • Monitor and analyze the evolving AI regulatory landscape to proactively guide control updates and maintain a compliant posture.
  • Prepare comprehensive formal risk positions and regulatory responses, demonstrating the organization’s AI/ML governance maturity and commitment to best practices.