Falsification Machines and Meaningful Human Control

Issue Date
2021-07-01
Language
en
Abstract
Artificial Intelligence systems are becoming increasingly prevalent in modern society, and are now being implemented in domains where human expertise has traditionally been crucial, such as the legal and medical domains. When Artificial Intelligence systems help with making decisions in domains like these, it is important that some degree of meaningful human control over such a system is present. People tend to trust the recommendation of a Decision Support System, often without any idea of how the system arrived at that recommendation. This thesis discusses why this is the case and addresses the various problems that can arise from it. The concept of meaningful human control is explored, and it is investigated how increasing meaningful human control in decision-making processes can help alleviate these problems. A possible way to increase meaningful human control in decision-making processes is also introduced: the Falsification Machine. This is a system that helps with decision making where, instead of giving a recommendation on what decision to make, it asks falsifying questions that provoke users to think more carefully about their decision. An experiment is conducted to test whether falsification machines can indeed be used to increase meaningful human control in decision-making processes. The conducted experiment indicates a trend suggesting that the falsification machine has the potential to increase meaningful human control in decision-making processes.
Faculty
Faculteit der Sociale Wetenschappen