Security Researcher II at Microsoft
Pinkenba, Queensland, Australia -
Full Time


Start Date

Immediate

Expiry Date

18 Feb, 26

Salary

0.0

Posted On

20 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Security Research, Penetration Testing, AI Security, Vulnerability Discovery, Python, AI Frameworks, Security Testing Tools, Multi-Agent Systems, Exploit Development, Technical Reporting, Mentoring, Software Engineering, Distributed Systems, AI Attack Vectors, Conversational AI, Security Certifications

Industry

Software Development

Description
Developing tooling and new code via AI and leveraging AI to look for vulnerabilities in a scalable manner. Partner with Security Architecture to inform architectural improvements based on research findings. Testing & Exploitation: Design and implement methodologies and tools for evaluating AI agent security, including multi-agent system exploitation. Execute comprehensive penetration tests on AI platforms, focusing on prompt injection, jailbreaking, and workflow manipulation. Identify and validate vulnerabilities through hands-on testing, developing proof-of-concept exploits that simulate real-world attack scenarios. Framework & Tool Development: Contribute to the creation of AI security testing frameworks and automated validation tools. Collaborate with AI engineering teams to verify security fixes through iterative testing and validation. Reporting & Knowledge Sharing: Produce detailed technical reports and advisories that translate complex findings into actionable remediation strategies. Share expertise and mentor team members on AI security testing techniques and vulnerability discovery. Bachelor's Degree in Statistics, Mathematics, Computer Science or related field OR 3+ years experience in software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection. 3+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms. Proficiency in Python with experience in AI frameworks and security testing tools Ability to read and analyze code across multiple languages and codebases Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures. Published security research or conference presentations on AI security topics. Background in software engineering with distributed systems expertise. Security certifications such as OSCP, OSCE, GPEN, or similar. Knowledge of AI agent communication protocols and multi-agent architectures.
Responsibilities
Develop tooling and new code via AI to identify vulnerabilities in a scalable manner. Collaborate with Security Architecture to inform improvements based on research findings and execute penetration tests on AI platforms.
Loading...