Exploring the usefulness of explainable machine learning in assessing fairness

dc.contributor.advisorHeskes, T.M.
dc.contributor.authorSmits, D. A. E. A.
dc.description.abstractInvestigating the fairness of an algorithm has become more important since such algorithms have been employed in more sensitive areas, such as credit risk assessment and criminal justice. There exists no firm consensus regarding the various existing fairness measures, which can lead to an uninformed use of any of these measures. This research aims to find a relation between the field of explainable artificial intelligence and the field of fair artificial intelligence. If such a relation exists, this could enable a more transparent and informed fairness assessment. This research focuses on the state-of-the-art explainability method SHAP and investigates the usefulness of this method in assessing fairness. This is done in three ways: (1) the relationship between SHAP and existing fairness measures is studied; (2) a possible improvement of one fairness measure using SHAP is examined; (3) a usability study is conducted to explain existing measures with SHAP. The results of this study show a promising relationship between SHAP and the field of fair artificial intelligence.en_US
dc.thesis.facultyFaculteit der Sociale Wetenschappenen_US
dc.thesis.specialisationBachelor Artificial Intelligenceen_US
dc.thesis.studyprogrammeArtificial Intelligenceen_US
dc.titleExploring the usefulness of explainable machine learning in assessing fairnessen_US
1005509 Smits.pdf
556.54 KB
Adobe Portable Document Format