Exploring the usefulness of explainable machine learning in assessing fairness
dc.contributor.advisor | Heskes, T.M. | |
dc.contributor.author | Smits, D. A. E. A. | |
dc.date.issued | 2020-07-01 | |
dc.description.abstract | Investigating the fairness of an algorithm has become more important since such algorithms have been employed in more sensitive areas, such as credit risk assessment and criminal justice. There exists no firm consensus regarding the various existing fairness measures, which can lead to an uninformed use of any of these measures. This research aims to find a relation between the field of explainable artificial intelligence and the field of fair artificial intelligence. If such a relation exists, this could enable a more transparent and informed fairness assessment. This research focuses on the state-of-the-art explainability method SHAP and investigates the usefulness of this method in assessing fairness. This is done in three ways: (1) the relationship between SHAP and existing fairness measures is studied; (2) a possible improvement of one fairness measure using SHAP is examined; (3) a usability study is conducted to explain existing measures with SHAP. The results of this study show a promising relationship between SHAP and the field of fair artificial intelligence. | en_US |
dc.identifier.uri | https://theses.ubn.ru.nl/handle/123456789/12664 | |
dc.language.iso | en | en_US |
dc.thesis.faculty | Faculteit der Sociale Wetenschappen | en_US |
dc.thesis.specialisation | Bachelor Artificial Intelligence | en_US |
dc.thesis.studyprogramme | Artificial Intelligence | en_US |
dc.thesis.type | Bachelor | en_US |
dc.title | Exploring the usefulness of explainable machine learning in assessing fairness | en_US |