Start Date
Immediate
Expiry Date
29 Aug, 25
Salary
2.901
Posted On
29 May, 25
Experience
5 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Human Computer Interaction, Natural Language Processing, Machine Learning, Computer Science, Communication Skills, Research
Industry
Education Management
JOB DESCRIPTION
The PhD candidate will work toward analyzing explainable AI (XAI) techniques and designing explainable AI methods for diagnosing embedded AI models. They will focus on analyzing state-of-the-art model-specific (e.g., integrated gradients) and model-free (e.g., counterfactual instances) XAI methods to identify robustness issues through causal analysis of the features relevant to model behavior. To that end, they will develop principled approaches and practical tools for diagnosing what knowledge an embedded ML model needs, thus clearing the roadblock to robust AI. As part of this effort, they will design representations of the knowledge of embedded AI and develop a reasoning engine to infer the unknowns of an embedded AI model. The expected results include: 1) a human-in-the-loop knowledge-extraction approach for describing AI knowledge in semantic concepts and for specifying the mechanisms required in specific tasks; and 2) a proof of concept of the proposed diagnosis tool in selected embedded-AI domains.
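To give a flavor of the model-specific XAI methods mentioned above, the following is a minimal sketch of integrated gradients on a toy logistic-regression model. The model, its weights, and the baseline are hypothetical choices for illustration, not part of the project; the sketch only shows the core idea of averaging gradients along a path from a baseline to the input.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Approximate integrated-gradients attributions:
    IG_i = (x_i - b_i) * integral over a in [0,1] of dF/dx_i(b + a*(x - b)) da,
    with the integral estimated by a midpoint Riemann sum over `steps` points."""
    alphas = (np.arange(steps) + 0.5) / steps
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        total_grad += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * (total_grad / steps)

# Hypothetical model: a fixed logistic-regression score with known gradient.
w = np.array([0.5, -1.0, 2.0])

def f(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def grad_f(x):
    s = f(x)
    return s * (1.0 - s) * w  # closed-form gradient of the sigmoid score

x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions should sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

The printed check illustrates the completeness property that makes the method useful for the causal feature analysis described above: each feature's attribution is its share of the change in the model's output relative to the baseline.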
JOB REQUIREMENTS
The successful PhD candidate should:
Please refer to the job description above for details.