What factors influence the perceived optimal threshold value in the visualization of quantified MAP-dependence?
Issue Date
2022-06-20
Language
en
Abstract
In the context of explainable AI in Bayesian networks, justifying the decision process plays a key role in generating trust between user and system, and the amount of information the user receives is an important factor in achieving maximal user satisfaction. Building on the work on MAP-independence by Kwisthout (2022), this thesis introduces a quantified version of MAP-dependence that assigns a weight to each individual MAP-dependent variable. These weights can be used to filter the MAP-dependent set, for example by dropping all variables whose weight falls below a certain threshold value, which provides control over the amount of information presented to the user. Using these threshold values, visualizations of the MAP-dependent sets were created and subsequently used in a behavioural experiment exploring three possible factors influencing the perceived optimal information threshold value in the visualization of Bayesian inference: the network itself, the level of user expertise, and the MAP-independence implementation. Results indicate that, of these three factors, only the level of user expertise plays a significant role in determining the optimal threshold value.
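The threshold-based filtering described above can be sketched as follows. This is an illustrative example only: the variable names and weights are hypothetical, and the actual weight computation in the thesis derives from the MAP-independence framework of Kwisthout (2022), which is not reproduced here.

```python
def filter_by_threshold(weights, threshold):
    """Keep only the MAP-dependent variables whose weight meets the threshold.

    weights   -- mapping from variable name to its (hypothetical) MAP-dependence weight
    threshold -- minimum weight a variable needs to remain in the filtered set
    """
    return {var: w for var, w in weights.items() if w >= threshold}


# Illustrative weights for three MAP-dependent variables (assumed values).
weights = {"Smoking": 0.82, "Asia": 0.15, "Bronchitis": 0.47}

# With a threshold of 0.4, only the two higher-weighted variables survive,
# reducing the amount of information shown to the user.
filtered = filter_by_threshold(weights, 0.4)
print(filtered)
```

Raising the threshold shrinks the set shown to the user; lowering it reveals more of the MAP-dependent variables, which is the trade-off the behavioural experiment investigates.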
Keywords: Bayesian Networks ∙ MAP-independence ∙ Hamming Distance ∙ Explainable AI ∙ Human Decision-making.
Faculty
Faculteit der Sociale Wetenschappen
