Member of Technical Staff - Secure Intelligence Institute at Perplexity AI Ltd
San Francisco, California, United States
Full Time


Start Date

Immediate

Expiry Date

19 Jun, 26

Salary

$405,000

Posted On

21 Mar, 26

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Security Research, Privacy Research, Frontier AI Security, Threat Modeling, Security Analysis, Novel Defenses, Mitigation Development, Detection Mechanisms, Security Evaluation Frameworks, Benchmarking, Python, TypeScript, Go, Rust, Independent Operation, Communication

Industry

Software Development

Description
Perplexity is seeking energetic researchers and engineers to join our Secure Intelligence Institute (SII), Perplexity's flagship research center for advancing security, privacy, and trust in frontier intelligence. SII's goals are to advance frontier AI security research, translate those advances into concrete improvements in Perplexity's systems, and share knowledge and resources that strengthen the broader AI ecosystem.

As a member of SII, you'll conduct original and impactful research on improving the security and privacy of frontier intelligence systems. Your goal will be to conduct research that is not only rigorous in theory but practical enough to improve the systems people rely on every day. This work will be informed by the realities of operating general-purpose AI systems used by millions of people and thousands of enterprises, and you'll be expected to translate both your own research and advances from the broader community into practical improvements that protect and defend Perplexity's users.

Responsibilities

- Develop threat models for emerging attack surfaces in AI-native products, including browser, search, and autonomous agents.
- Identify and analyze security and privacy threats across AI systems, infrastructure, and user-facing products.
- Develop novel defenses, mitigations, and detection mechanisms for security and privacy in AI-native products.
- Build security evaluation frameworks, benchmarks, and datasets to measure the effectiveness of different defense mechanisms.
- Partner with Perplexity's Security Engineering team to translate state-of-the-art research into shipped security features and hardened system architectures.
- Collaborate with top-tier academic and industry researchers in SII's external research network.
- Publish findings at premier venues and contribute to the broader security research community.
Qualifications

- Hold a PhD (or equivalent research experience) in Computer Science, Computer Engineering, or a related field, with a primary focus on security and/or privacy.
- Experience publishing at top security conferences (IEEE S&P, USENIX Security, ACM CCS, NDSS), demonstrating original, impactful research contributions.
- Deep expertise in one or more of: security of agentic systems, systems security, web and application security, program analysis, and software security.
- Proficiency in Python (bonus points for TypeScript, Go, and/or Rust).
- Ability to operate with high independence, willing to dive in and take ownership, and comfortable in a fast-paced environment where research directly informs product.
- Clear and concise communication, translating complex attack narratives into actionable insights for engineering and leadership.

