Start Date
Immediate
Expiry Date
06 Jul, 25
Salary
229,200
Posted On
07 Apr, 25
Experience
1 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Conferences, Testing Tools, Kali Linux, Econometrics, Nmap, Computer Science, Regulations, Shipping, Base Pay, Ethnicity, Applied Sciences, Predictive Analytics, Statistics, Microsoft, Ordinances, Research, GWAPT, PowerShell, Python, Citizenship, Color, Production Systems
Industry
Information Technology/IT
Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Do you want to find responsible AI failures in Microsoft’s largest AI systems, impacting millions of users? Join Microsoft’s AI Red Team, where you’ll work alongside security experts to cause trust and safety failures in Microsoft’s big bet AI systems. We are looking for a Senior AI Safety Researcher who will work alongside experts to push the boundaries of AI Red Teaming. We are a fast-paced, interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, and Responsible AI experts with the mission of proactively finding failures in Microsoft’s big bet AI systems. Your work will impact Microsoft’s AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot, and help keep Microsoft’s customers safe and secure.
More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
REQUIRED/MINIMUM QUALIFICATIONS:
OTHER REQUIREMENTS:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft background check and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
ADDITIONAL OR PREFERRED QUALIFICATIONS:
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
The AI Red Team is looking for security researchers who can combine the development of cutting-edge attack techniques with the ability to deliver complex, time-limited operations as part of a diverse team. This includes the ability to manage several priorities at once, manage stakeholders, and communicate clearly with a range of audiences.
As a Senior Security Researcher, you will:
- Understand the products & services that the AI Red Team is testing, including the technology involved and the intended users, to develop plans to test them.
- Understand the risk landscape of AI Safety & Security, including cybersecurity threats, Responsible AI policies, and the evolving regulatory landscape, to develop new attack methodologies for these areas.
- Conduct operations against systems as part of a multi-disciplinary team, delivering against multiple priority areas within a set timeline.