Gen AI Security Researcher at ACTIVEFENCE INC
Ramat Gan, Tel-Aviv District, Israel
Full Time


Start Date

Immediate

Expiry Date

17 Jan, 26

Salary

0.0

Posted On

19 Oct, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

AI Vulnerabilities Analysis, AI Technologies, Generative Models, Offensive Cyber Security, Cloud Security, API Security, Agentic Frameworks, Python, Analytical Skills, Problem-Solving Skills, Communication Skills, Fast-Paced Environment, Machine Learning Frameworks, Ethical Hacking, Cyber Threats, Documentation

Industry

Software Development

Description
Description As a Red Team Specialist focused on Generative AI Models, you will play a critical role in enhancing the security and integrity of our cutting-edge AI technologies. Your primary responsibility will be to conduct analysis and testing of our generative AI systems, including but not limited to language models, image generation models, and any related infrastructure. The goal is to identify vulnerabilities, assess risks, and provide actionable insights to fortify our AI models and guardrails against potential threats. Key Responsibilities: Simulated Cyber Attacks: Conduct sophisticated and comprehensive simulated attacks on generative AI models and their operating environments to uncover vulnerabilities. Vulnerability Assessment: Evaluate the security posture of AI models and infrastructure, identifying weaknesses and potential threats. Risk Analysis: Perform thorough risk analysis to determine the impact of identified vulnerabilities and prioritize mitigation efforts. Mitigation Strategies: Collaborate with development and security teams to develop effective strategies to mitigate identified risks and enhance model resilience. Research and Innovation: Stay abreast of the latest trends and developments in AI security, ethical hacking, and cyber threats. Apply innovative testing methodologies to ensure cutting-edge security practices. Documentation and Reporting: Maintain detailed documentation of all red team activities, findings, and recommendations. Prepare and present reports to senior management and relevant stakeholders. Requirements Must-Have Proven record of AI vulnerabilities analysis Strong understanding of AI technologies and their underlying architectures, especially generative models and frameworks. At Least 5 years of experience in offensive cyber security, particularly in Cloud and API security. Familiarity with agentic frameworks and agentic development experience Proficiency in python. 
Excellent analytical, problem-solving, and communication skills. Ability to work in a fast-paced, ever-changing environment. Nice-to-Have: Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field. Proving record of building production quality pipelines and automations Experience with machine learning development frameworks and environments. Advanced Certifications in offensive cybersecurity (e.g. OSWE, OSCE3, SEC542, SEC522) are highly desirable. Certifications/background in DevOps/ML fields are highly desirable About ActiveFence ActiveFence is the leading provider of security and safety solutions for online experiences, safeguarding more than 3 billion users, top foundation models, and the world’s largest enterprises and tech platforms every day. As a trusted ally to major technology firms and Fortune 500 brands that build user-generated and GenAI products, ActiveFence empowers security, AI, and policy teams with low-latency Real-Time Guardrails and a continuous Red Teaming program that pressure-tests systems with adversarial prompts and emerging threat techniques. Powered by deep threat intelligence, unmatched harmful-content detection, and coverage of 117+ languages, ActiveFence enables organizations to deliver engaging and trustworthy experiences at global scale while operating safely and responsibly across all threat landscapes.

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities
- Conduct analysis and testing of generative AI systems to identify vulnerabilities and assess risks.
- Collaborate with teams to develop strategies to mitigate identified risks and enhance model resilience.