GenAI Security Researcher -Mid Level at ACTIVEFENCE INC
Ramat Gan, Tel-Aviv District, Israel -
Full Time


Start Date

Immediate

Expiry Date

23 May, 26

Salary

0.0

Posted On

22 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Red Teaming, AI Security, Generative AI, Offensive Cybersecurity, Web Apps Security, API Security, Python, JavaScript, GPT, DALL-E, Codex, Problem-Solving, Analysis, Communication, Ethical Hacking

Industry

Software Development

Description
As a GenAI Security Researcher, you’ll be deep diving into AI security challenges by running red teaming operations to find weaknesses in generative AI systems and their setup. Pioneering novel bypass techniques to test the latest AI security defenses. Identifying system flaws, driving remediation efforts, and fortifying overall AI security to help building robust, secure, and future-ready models. Partnering with teams to automate security testing and define enterprise best practices. Staying ahead of the curve in AI security, ethical hack Requirements Must Have 2+ years in offensive cybersecurity (especially web apps and API security) OR a B.SC/M.SC. with solid AI/Cybersecurity research under your belt. Coding/scripting skills (like Python, JavaScript) relevant to AI security. A deep understanding of AI tech, especially generative models (think GPT, DALL-E, Codex, etc.). Solid knowledge of how AI works internally. Awesome problem-solving, analysis, and communication skills. Nice to Have Offensive cybersecurity certs (OSWA, OSWE, OSCE3, SEC542, SEC522) OR a Master's or higher in Computer Science with a focus on Data Science or AI. Experience building products end-to-end, including the infrastructure and system design. Know-how in cloud development. Familiarity with AI security frameworks, compliance rules, and ethical guidelines. Being great at handling a super fast-paced, changing environment. About Alice Alice is a trust, safety, and security company built for the AI era. We safeguard the communicative technologies people use to create, collaborate, and interact—whether with each other or with machines. In a world where AI has fundamentally changed the nature of risk, Alice provides end-to-end coverage across the entire AI lifecycle. We support frontier model labs, enterprises, and UGC platforms with a comprehensive suite of solutions: from model hardening evaluations and pre-deployment red-teaming to runtime guardrails and ongoing drift detection.

How To Apply:

Incase you would like to apply to this job directly from the source, please click here

Responsibilities
The researcher will conduct deep dives into AI security by running red teaming operations to uncover weaknesses in generative AI systems and pioneering novel bypass techniques to test defenses. This role involves identifying system flaws, driving remediation efforts, and partnering with teams to automate security testing and define enterprise best practices.
Loading...