Senior Technical AI Ethicist - AI Red Teamer
at Salesforce
San Francisco, CA 94105, USA -
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 30 Jan, 2025 | USD 227700 Annual | 31 Oct, 2024 | 5 year(s) or above | Consideration,Scripting Languages,R,Python,Salesforce,Root Cause Analysis,Design,Sql,Communication Skills,Data Analysis,Relational Databases,Color | No | No |
Required Visa Status:
Citizen | GC |
US Citizen | Student Visa |
H1B | CPT |
OPT | H4 Spouse of H1B |
GC Green Card |
Employment Type:
Full Time | Part Time |
Permanent | Independent - 1099 |
Contract – W2 | C2H Independent |
C2H W2 | Contract – Corp 2 Corp |
Contract to Hire – Corp 2 Corp |
Description:
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.
Job Category
Data
Job Details
ABOUT SALESFORCE
We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place.
Salesforce’s Office of Ethical and Humane Use is seeking an experienced responsible AI data scientist with an adversarial approach and experience conducting ethical red teaming to contribute to our ethical red teaming practice. In this role, you will help us gain a deep understanding of how our models and products may be leveraged by malign actors or through unanticipated use to cause harm. In addition to adversarial testing, you will analyze current safety trends, and develop solutions to detect and mitigate risk, while working cross-functionally with security, engineering, data science, and AI Research teams. You will bring technical depth to the assessment of AI products, models, and applications, in order to identify the best technical mitigations to identified risks.
The ideal candidate will have technical experience in generative as well as predictive artificial intelligence and in responsible / ethical AI.
Responsibilities:
- Adversarial Testing
- Provide technical leadership in designing, prototyping, and implementing comprehensive adversarial testing strategies, including both automated and manual adversarial testing approaches.
- Mentor and guide collaborator teams on adversarial testing standard processes, helping them develop the skills to conduct their own testing effectively.
- Collaborate with cross-functional teams to integrate OEHU adversarial testing frameworks into the AI development lifecycle.
- Safety and Robustness
- Contribute to the development of detection models, safety guardrails, and other proactive measures to prevent and mitigate risks posed by bad actors.
- Research and implement innovative techniques for enhancing AI safety and robustness, drawing from both open-source and internal tools.
- Collaborate with Salesforce’s AI Research team on novel approaches to model safety.
- Technical Research and Implementation
- Write clean, efficient, and well-documented code (primarily in Python) to support research efforts and facilitate the evaluation of AI systems.
- Develop and maintain a repository of reusable code modules and libraries to streamline adversarial testing processes.
- Testing Execution and Collaboration
- Participate in scoping, documenting, and implementing tests with partner teams, including the implementation of mitigations identified during testing.
- Test for technical vulnerabilities, model vulnerabilities, and harm/abuse including but not limited to bias, toxicity, and inaccuracy.
- Participate in labeling test data in partnership with OEHU and partner teams
- Reporting, Documentation, and Continuous Learning
- Write reports covering the goals and outcomes of testing operations, including significant observations and recommendations.
- Continuously monitor and analyze emerging threats and vulnerabilities to inform the development of adaptive safety measures.
- Continue to grow expertise in model safety by keeping up with research in socio-technical systems, privacy, interpretability/explainability, robustness, alignment, and responsible AI
QUALIFICATIONS:
- A related technical degree required
- 5-7 years of proven experience in Software Engineering, AI ethics, AI research, or similar roles
- Experience creating heuristic-based detection logic and rules for identifying anomalous or suspicious activity in systems and networks (e.g. log analysis, user behavior analytics.)
- Experience using SQL and relational databases. Ability to use Python, R, or other scripting languages to perform data analysis at scale.
- Experience with problem-solving and troubleshooting sophisticated issues with an emphasis on root-cause analysis.
- Experience in analyzing sophisticated, large-scale data sets and communicating findings to technical and non-technical audiences.
- Proven organizational and execution skills within a fast-paced, multi-stakeholder environment.
- Experience working in a technical environment with a broad, cross-functional team to get results, define requirements, coordinate assets from other groups (design, legal, etc.), and deliver key achievements
- Excellent written and oral communication skills, as well as social skills, including the ability to articulate technical concepts to both technical and non-technical audiences.
- Works well under pressure, and is comfortable working in a fast-paced, ever-changing environment.
In office expectations are 36 days/a quarter to support customers and/or collaborate with their teams.
Accommodations
If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form .
Posting Statement
At Salesforce we believe that the business of business is to improve the state of our world. Each of us has a responsibility to drive Equality in our communities and workplaces. We are committed to creating a workforce that reflects society through inclusive programs and initiatives such as equal pay, employee resource groups, inclusive benefits, and more. Learn more about Equality at www.equality.com and explore our company benefits at www.salesforcebenefits.com .
Salesforce is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. Salesforce does not accept unsolicited headhunter and agency resumes. Salesforce will not pay any third-party agency or company that does not have a signed agreement with Salesforce .
Salesforce welcomes all.
Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.
For New York-based roles, the base salary hiring range for this position is $165,600 to $227,700.
Responsibilities:
Responsibilities:
- Adversarial Testing
- Provide technical leadership in designing, prototyping, and implementing comprehensive adversarial testing strategies, including both automated and manual adversarial testing approaches.
- Mentor and guide collaborator teams on adversarial testing standard processes, helping them develop the skills to conduct their own testing effectively.
- Collaborate with cross-functional teams to integrate OEHU adversarial testing frameworks into the AI development lifecycle.
- Safety and Robustness
- Contribute to the development of detection models, safety guardrails, and other proactive measures to prevent and mitigate risks posed by bad actors.
- Research and implement innovative techniques for enhancing AI safety and robustness, drawing from both open-source and internal tools.
- Collaborate with Salesforce’s AI Research team on novel approaches to model safety.
- Technical Research and Implementation
- Write clean, efficient, and well-documented code (primarily in Python) to support research efforts and facilitate the evaluation of AI systems.
- Develop and maintain a repository of reusable code modules and libraries to streamline adversarial testing processes.
- Testing Execution and Collaboration
- Participate in scoping, documenting, and implementing tests with partner teams, including the implementation of mitigations identified during testing.
- Test for technical vulnerabilities, model vulnerabilities, and harm/abuse including but not limited to bias, toxicity, and inaccuracy.
- Participate in labeling test data in partnership with OEHU and partner teams
- Reporting, Documentation, and Continuous Learning
- Write reports covering the goals and outcomes of testing operations, including significant observations and recommendations.
- Continuously monitor and analyze emerging threats and vulnerabilities to inform the development of adaptive safety measures.
- Continue to grow expertise in model safety by keeping up with research in socio-technical systems, privacy, interpretability/explainability, robustness, alignment, and responsible A
REQUIREMENT SUMMARY
Min:5.0Max:7.0 year(s)
Information Technology/IT
IT Software - QA & Testing
Help Desk
Graduate
Proficient
1
San Francisco, CA 94105, USA