AI Security Architect - AI Red Team (Enterprise) at C-Serv
Vancouver, British Columbia, Canada -
Full Time


Start Date

Immediate

Expiry Date

24 May, 26

Salary

0.0

Posted On

23 Feb, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Adversarial Machine Learning, Red Teaming, LLM, AI Systems, Threat Modelling, Prompt Injection, Jailbreaking, Model Exploitation, Data Leakage, RAG System Manipulation, ISO 27001, SOC 2, ISO 27701, ISO 27017, Python, Executive Leadership

Industry

Business Consulting and Services

Description
The Opportunity We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products deployed to some of the world’s largest organizations. This is not a theoretical research role. This role sits at the intersection of adversarial machine learning, enterprise security architecture, and governance. You will lead the design and execution of structured red team engagements across multiple AI systems — and translate technical risk into enterprise-aligned assurance. If you have ever been frustrated watching AI risk findings remain stuck in a slide deck with no operational impact, this role is designed to change that. What You’ll Do Design and lead adversarial testing of LLM and AI-driven systems Conduct threat modelling across model, infrastructure and data layers Execute and oversee testing for: Prompt injection Jailbreaking Model exploitation Data leakage / extraction RAG system manipulation Translate findings into structured, audit-ready documentation Map vulnerabilities and remediation pathways to: ISO 27001 controls SOC 2 Trust Service Criteria ISO 27701 privacy controls ISO 27017 cloud security controls Partner closely with engineering, security, and compliance functions Present findings clearly to executive leadership This role ensures AI security findings integrated into enterprise governance frameworks. What We’re Looking For Core Technical Depth Strong understanding of adversarial machine learning Experience red teaming LLM or AI systems Deep familiarity with AI deployment architectures (RAG, APIs, vector DBs, fine-tuning pipelines) Strong Python proficiency Enterprise Security & Governance Fluency Experience working within ISO 27001 environments Practical knowledge of SOC 2 Trust Service Criteria Understanding of ISO 27701 privacy extensions Familiarity with ISO 27017 cloud security controls Ability to map technical findings to control frameworks Communication & Documentation Ability to produce clear, structured, audit-friendly documentation Comfortable presenting technical risk to executive audiences Strong written and verbal communication skills Who You Are Systems thinker Curious and adversarial in mindset Comfortable identifying uncomfortable truths Autonomous and fast-moving Enterprise-aware, not just technically strong Able to operate independently under executive leadership You understand that security is about both breaking systems and integrating findings into operational and compliance posture. Comprehensive Private Medical Coverage Support for Mental Health Expenses Life Insurance Options Attractive Compensation Package
Responsibilities
This role involves designing and leading adversarial testing engagements across enterprise-scale AI and LLM systems, focusing on stress-testing and hardening deployed products. Responsibilities include executing tests for vulnerabilities like prompt injection and jailbreaking, and translating technical risks into structured, audit-ready documentation mapped to enterprise governance controls.
Loading...