Responsible AI Solution Risk Assessor at Microsoft
Hyderabad, Telangana, India
Full Time


Start Date

Immediate

Expiry Date

03 Mar, 26

Salary

Not disclosed

Posted On

03 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Risk Assessment, Compliance, Governance, Data Ethics, Analytical Skills, Communication, Documentation, Stakeholder Engagement, Policy Integration, Privacy-Preserving Technologies, Bias Mitigation Techniques, Internal Governance Processes, Cross-Functional Teamwork, AI Ethics, Emerging Technologies, Regulated Industries

Industry

Software Development

Description
Risk Assessment Ownership: Lead the Responsible AI risk assessment process for AI projects within your purview.
Use Case Evaluation: Analyze proposed AI solutions for ethical, privacy, and security risks, including identifying sensitive use cases (e.g., facial recognition, biometric analysis, or legally sensitive applications).
Escalation Management: Determine when use cases require escalation to internal review boards such as the Deployment Safety Board or other governance entities.
Approval Coordination: Ensure all necessary approvals are obtained before development or deal sign-off, maintaining alignment with internal Responsible AI policies.
Documentation & Compliance: Maintain thorough documentation of risk assessments, approvals, and mitigation strategies to support audit readiness and compliance.
Stakeholder Engagement: Collaborate with product, legal, compliance, and engineering teams to ensure risk considerations are addressed early in the development lifecycle.
Policy Integration: Translate Responsible AI policies into actionable assessment criteria and workflows.

Qualifications
Bachelor's or master's degree in Computer Science, Data Ethics, Law, Risk Management, or a related field.
5+ years of experience in risk assessment, compliance, or governance roles, preferably in AI or emerging technologies.
Familiarity with Responsible AI frameworks (e.g., NIST AI RMF, ISO/IEC 42001, EU AI Act).
Strong analytical skills and attention to detail.
Excellent communication and documentation abilities.
Experience working with cross-functional teams in a matrixed organization.
Experience with internal governance processes such as AI review boards or safety panels.
Knowledge of privacy-preserving technologies and bias mitigation techniques.
Background in regulated industries (e.g., healthcare, finance, government).
Certifications in AI ethics, compliance, or risk management.
Responsibilities
Lead the Responsible AI risk assessment process for AI projects, analyzing proposed AI solutions for ethical, privacy, and security risks. Collaborate with product teams and ensure compliance with internal Responsible AI policies.