Principal Data Scientist - Deepfake Detection at Microsoft
Redmond, Washington, United States
Full Time


Start Date

Immediate

Expiry Date

23 Feb, 26

Salary

Not specified

Posted On

25 Nov, 25

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Deepfake Detection, Machine Learning, Python, Audio/Video Analysis, Signal Processing, Media Forensics, Generative Technologies, Statistical Analysis, Research, Production Systems, Security Research, Big Data Platforms, Synthetic Media Detection, Adversarial Machine Learning, Collaboration Threat Models, Incident Response

Industry

Software Development

Description
Responsibilities:
- Drive research and experimentation for synthetic audio/video (A/V) and manipulated media detection inside Teams, particularly developing evaluation and benchmarking protocols and measurements, running evaluations on new detection signals from external and internal sources, contributing to integration roadmaps, and identifying and driving effectiveness improvements.
- Conduct experiments and analyze new methods for generative A/V detection while maintaining an in-depth understanding of evolving generative technologies, including attacker tools and techniques.
- Develop and support production ML models and heuristics to detect deepfakes and other social engineering attacks.
- Be available and willing to cover a periodic on-call rotation or engage in incident response as needed.

Required qualifications:
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 6+ years of related experience (e.g., statistics, predictive analytics, research); OR Master's Degree in one of the fields above AND 4+ years of related experience; OR Doctorate in one of the fields above AND 3+ years of related experience; OR equivalent experience.
- 3+ years of experience in applied security research or ML for audio/video analysis, biometrics, signal processing, media forensics, or a closely related field; ideally shipping detections or classifiers to production at scale.
- 3+ years of experience with Python and ML frameworks and algorithms, including designing offline evaluations and online experiments that balance precision/recall, latency, and customer experience.

Additional requirements include, but are not limited to, specialized security screenings.

Preferred qualifications:
- Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 9+ years of related experience (e.g., statistics, predictive analytics, research); OR Doctorate in one of the fields above AND 6+ years of related experience; OR equivalent experience.
- 5+ years of experience creating publications (e.g., patents, libraries, peer-reviewed academic papers).
- 2+ years of experience presenting at conferences or other events in the outside research/industry community as an invited speaker.
- 5+ years of experience conducting research as part of a research program (in academic or industry settings).
- 3+ years of experience developing and deploying live production systems as part of a product team.
- 3+ years of experience developing and deploying products or systems at multiple points in the product cycle, from ideation to shipping.
- Experience in synthetic media detection and anti-spoofing (audio and/or video), including adversarial ML, domain shift, and real-time performance constraints.
- Experience with big data platforms such as Azure Analytics Services, Spark, or comparable technology.
- Background in product security, trust & safety, or digital safety programs.
- Experience in collaboration threat models or threat research, including deepfake/social engineering scenarios in meetings, email, chats, or related technology.
- Willingness to lead and influence change without hierarchical authority, partner effectively with a diverse set of stakeholders, and drive consensus.
Responsibilities
Drive research and experimentation for synthetic audio/video and manipulated media detection. Develop evaluation protocols, run evaluations on detection signals, and contribute to integration roadmaps.
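The posting's emphasis on offline evaluations that balance precision/recall and latency can be illustrated with a minimal sketch. The detector object, clip iterable, and threshold below are hypothetical placeholders, not part of any actual Microsoft tooling; the sketch only shows the shape of such an evaluation.

```python
# Minimal offline-evaluation sketch for a deepfake detection classifier.
# `detector`, `clips`, and `labels` are assumed inputs: `detector.score(clip)`
# is a hypothetical call returning the probability that a clip is synthetic,
# and `labels` holds the ground truth (1 = deepfake, 0 = genuine).
import time
from sklearn.metrics import precision_score, recall_score

def evaluate(detector, clips, labels, threshold=0.5):
    """Score each clip, then report precision, recall, and mean latency."""
    scores, latencies = [], []
    for clip in clips:
        start = time.perf_counter()
        scores.append(detector.score(clip))
        latencies.append(time.perf_counter() - start)

    preds = [int(s >= threshold) for s in scores]  # 1 = flagged as deepfake
    return {
        "precision": precision_score(labels, preds, zero_division=0),
        "recall": recall_score(labels, preds, zero_division=0),
        "mean_latency_ms": 1000 * sum(latencies) / len(latencies),
    }

# Sweeping `threshold` over a held-out validation set makes the
# precision/recall/latency trade-off explicit before any online experiment.
```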