PhD position in Human-centered Explainable AI

at KU Leuven

Leuven, Flanders, Belgium

Start Date: Immediate
Expiry Date: 21 Nov, 2024
Salary: Not Specified
Posted On: 22 Aug, 2024
Experience: N/A
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No

Description:

PhD position in Human-centered Explainable AI
(ref. BAP-2024-533)
Last updated: 19/07/24
We have an open PhD position that is part of a large interdisciplinary research project. Collaborating research groups include the Augment team of the Department of Computer Science (supervisor Katrien Verbert), the LIRIS research group at the Faculty of Economics and Business (supervisor Monique Snoeck) and the tutorial services of the Faculty of Engineering Science (supervisor Tinne De Laet).
Website of the unit
Project
While eXplainable AI (XAI) has only recently gained widespread visibility, the Machine Learning (ML), Artificial Intelligence (AI) and Recommender Systems literature contains a long history of work on explanations. A distinction was made early on between transparency, which explains the inner logic of a model, and justification, which decouples the explanation from the model. The latter category is also researched under the umbrella of model-agnostic approaches. As contemporary models are more complex and less interpretable than ever [5], this category is increasingly researched, particularly for non-expert users with little or no knowledge of AI models.
Adadi and Berrada [1] categorised these methods into four groups. First, visualisations are post-hoc explanations that graphically display the inner workings of “black-box” models [7]. Second, knowledge extraction methods infer rules that approximate the global decision-making process by using the input and output of a model [2]. Third, influence methods are post-hoc explanations that estimate the relevance of features, i.e., they show how each feature affects the model outcome. Fourth, example-based explanation methods explain model outcomes for an individual by presenting groups of other individuals with similar or different characteristics.
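To make the third category concrete, the sketch below shows one widely used model-agnostic influence method, permutation feature importance, as implemented in scikit-learn: each feature is shuffled in turn, and the resulting drop in held-out accuracy is taken as an estimate of that feature's relevance to the model's predictions. This is a minimal illustration only; the dataset and model are assumptions for the example and are not part of the project.

# Illustrative sketch of an "influence" explanation method:
# permutation feature importance treats the model as a black box
# and estimates how much each feature affects its predictions.
# Dataset and model are assumptions chosen for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops mean the model relies on that feature more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")

Because the method only queries the trained model through its predictions, the same code works for any estimator, which is exactly what makes such post-hoc, model-agnostic explanations attractive for "black-box" models.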
Despite the rich set of explanation methods that has been developed, several core challenges have been identified in the literature. Most explanation methods focus on providing low-level explanations of how an individual decision was reached [3]. While important, these explanations rarely provide sufficient insight into the reasoning of models and the explanatory depth that people require to accept and trust the decision-making of the model. Most existing methods are also static and require significant expertise [6]. The prominent work of Dodge et al. [4] illustrates that global explanations tend to instil more confidence in understanding the model. The overall objective of this research is to enable the development of the next generation of explainability methods that can be used by users with little or no knowledge of AI. We will research how these users can be empowered to understand the outcomes of models and how they can provide feedback to improve models. We will tackle this challenge by developing novel interactive explainability methods and by combining, integrating, and extending different types of explanation methods as a basis to better support insight into both model behaviour and the underlying data.
[1] A. Adadi and M. Berrada. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6:52138-52160, 2018.
[2] C. Bucila, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '06, pages 535-541, New York, NY, USA, 2006. Association for Computing Machinery.
[3] R. Dazeley, P. Vamplew, C. Foale, C. Young, S. Aryal, and F. Cruz. Levels of explainable artificial intelligence for human-aligned conversational explanations. Artificial Intelligence, 299:103525, 2021.
[4] J. Dodge, Q. V. Liao, Y. Zhang, R. K. Bellamy, and C. Dugan. Explaining models: An empirical study of how explanations impact fairness judgment. In Proceedings of the 24th International Conference on Intelligent User Interfaces, pages 275-285, 2019.
[5] R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608, 2018.
[6] L. Jiang, S. Liu, and C. Chen. Recent research advances on interactive machine learning. Journal of Visualization, 22(2):401-417, 2019.
[7] M. Nazar, M. M. Alam, E. Yafi, and M. Mazliham. A systematic review of human-computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques. IEEE Access, 2021.
Profile

We expect from applicants:

  • an excellent Master's degree in Computer Science or a related discipline
  • strong programming skills
  • the ability to do independent research
  • a good background and interest in human-computer interaction research
  • a good background and interest in language models
  • strong commitment and the ability to work in a team
  • a high level of proficiency in English, both spoken and written.

Offer
Funding is available immediately. The research will be carried out at the Department of Computer Science of KU Leuven, campus Heverlee and the Faculty of Economics and Business (FEB).
Interested?
For more information, please contact Prof. dr. Katrien Verbert, tel.: +32 16 32 82 86, mail: katrien.verbert@kuleuven.be, Prof. dr. ir. Tinne De Laet, tel.: +32 16 32 70 75, mail: tinne.delaet@kuleuven.be, or Prof. dr. Monique Snoeck, tel.: +32 16 32 68 79, mail: monique.snoeck@kuleuven.be.
You can apply for this job no later than August 28, 2024 via the online application tool.
KU Leuven strives for an inclusive, respectful and socially safe environment. We embrace diversity among individuals and groups as an asset. Open dialogue and differences in perspective are essential for an ambitious research and educational environment. In our commitment to equal opportunity, we recognize the consequences of historical inequalities. We do not accept any form of discrimination based on, but not limited to, gender identity and expression, sexual orientation, age, ethnic or national background, skin colour, religious and philosophical diversity, neurodivergence, employment disability, health, or socioeconomic status. For questions about accessibility or support offered, we are happy to assist you at this email address.
Do you have a question about the online application procedure? Consult our frequently asked questions or send an e-mail to solliciteren@kuleuven.be.
Employment percentage: Full-time
Location: Leuven
Apply until: August 28, 2024

Responsibilities:

Please refer to the job description above for details.


REQUIREMENT SUMMARY

Experience: Min: N/A, Max: 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Other
Specialization: Software Engineering
Qualification: Graduate
Proficiency: Proficient
Vacancies: 1
Location: Leuven, Belgium