Lecturer/Senior Lecturer in AI Trust and Security (6 Posts Available)
at The University of Manchester
Manchester M13, England, United Kingdom
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 21 Aug, 2024 | GBP 68,857 per annum | 22 May, 2024 | N/A | Good communication skills | No | No |
Employment Type: Full Time, Permanent
Description:
We are seeking to appoint 3 Lecturers and 3 Senior Lecturers (each Full Time / 1.0 FTE, Permanent) as part of a new interdisciplinary research team working in the broad field of AI Trust and Security, building on our existing strengths in Digital Trust, Security and Society. These posts form part of a new £4.73m University investment in AI Trust and Security and will be accompanied by 6 Post-doctoral Research Associate posts, 6 funded PhDs, and new investment in high-performance computing over the next 5 years. The post-holders may be located in any one of the four Schools of the Faculty of Humanities and will form a new cross-cutting research cluster within the Centre for Digital Trust and Society.
We wish to appoint innovative, engaging and dynamic colleagues who will enhance the quality of our teaching and research and help drive the Faculty’s vision to be a research leader in AI Trust and Security. How can society trust AI? How can we ensure that AI technologies are secure and robust against exploitation? What are the barriers to, and enablers of, trust in AI?
At UoM, we see AI security as part of a set of broader issues of trust and trustworthiness, distrust and trust exploitation, and trust-building and resilience, and promote an interdisciplinary approach to these issues. We are looking to appoint people from any discipline who can contribute to this multi-dimensional and collaborative approach to AI Trust and Security. This might relate to:
- researching the competence, reliability, and trustworthiness of AI technologies
- analysing the social responsibility and ethical dimensions of AI and its implementation for social good
- exploring the relational and linear nature of trust relations between AI, humans and organisations
- crafting the legal, regulatory and governance landscape to ensure the integrity of AI
- interrogating the ways in which AI can be manipulated or exploited for malicious ends, whether for criminal enterprise, such as organised cyber fraud, or to undermine societal principles, values and processes, as we see with mis-, dis- and mal-information.
All such issues require an understanding of the socio-technical aspects of AI.
Requirement Summary:
- Experience: N/A (min) – 5.0 years (max)
- Industry: Information Technology/IT
- Category: IT Software - Other
- Education: Graduate
- Proficiency: Proficient
- Vacancies: 1
- Location: Manchester M13, United Kingdom