Exploring the usefulness of explainable machine learning in assessing fairness
Issue Date
2020-07-01
Language
en
Abstract
Investigating the fairness of an algorithm has become more important since such algorithms have been employed in more sensitive areas, such as credit risk assessment and criminal justice. There exists no firm consensus regarding the various existing fairness measures, which can lead to an uninformed use of any of these measures. This research aims to find a relation between the field of explainable artificial intelligence and the field of fair artificial intelligence. If such a relation exists, this could evoke a more transparent and informed fairness assessment. This research focuses on the state-of-the-art explainability method SHAP and investigates the usefulness of this method in assessing fairness. This is done in three ways: (1) the relationship between SHAP and existing fairness measures is studied; (2) a possible improvement of one fairness measure using SHAP is examined; (3) a usability study is conducted to explain existing measures with SHAP. The results of this study show a promising relationship between SHAP and the field of fair artificial intelligence.
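A minimal, hypothetical sketch of the kind of comparison described above: computing a standard fairness measure (demographic parity difference) alongside the SHAP attribution of a sensitive attribute. The model, data, and feature layout are illustrative assumptions, not the setup used in the thesis.

# Hypothetical sketch (not the thesis code): mean absolute SHAP contribution of a
# sensitive attribute placed next to the demographic parity difference.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
# Illustrative columns: sensitive attribute "group" (0/1), "income", "debt"
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.normal(size=n),
    rng.normal(size=n),
])
# The outcome depends partly on the sensitive attribute, so both signals should reflect it
y = (X[:, 1] - X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = model.predict(X)

# Demographic parity difference: gap in positive-prediction rates between groups
dp_diff = pred[X[:, 0] == 1].mean() - pred[X[:, 0] == 0].mean()

# Mean absolute SHAP value of the sensitive attribute for the positive class
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
# Depending on the shap version, sv is a list (one array per class) or a 3-D array
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
sensitive_impact = np.abs(sv_pos[:, 0]).mean()

print(f"demographic parity difference: {dp_diff:.3f}")
print(f"mean |SHAP| of sensitive attribute: {sensitive_impact:.3f}")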
Faculty
Faculteit der Sociale Wetenschappen