Falsification Machines: a modular approach to increasing meaningful human control with decision support systems
Issue Date
2021-01-29
Language
en
Abstract
The rapid rise of (semi-)autonomous artificial intelligence systems comes
with a dangerous increase in reliance on these systems and leads to growing
responsibility gaps. Critiquing decision support systems have proven
useful in reducing reliance and might also be able to increase meaningful
human control. This thesis therefore proposes a rudimentary specification
and implementation for a variant of the critiquing decision support system:
the falsifying decision support system. This falsification machine aims not
to increase user performance, as a critic does, but rather to increase the
user's sense of responsibility. By taking a modular approach, different
methods for producing falsifying feedback can be combined. The most basic
of these modules compares a user's proposed solution to information known
to the falsification machine. The feedback produced by these modules is
compared across modules and against the most basic control feedback, 'Are
you sure?'. Comparing the user's solution directly to known keywords in a
given case description produced the most promising results in this
comparison, yielding relevant questions ranked by a confidence in the
user's solution. From an implementation standpoint, the falsification
machine appears to be a promising and feasible concept, with many modules
and combination methods still unexplored. As such, while the effectiveness
of a falsification machine with regard to increasing meaningful human
control is yet to be determined, it has proven to be an interesting concept
with ample room for further research.
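The basic keyword-comparison module described in the abstract could be sketched as follows. This is a minimal illustration under assumptions, not the thesis's actual implementation: the function name `falsify`, the word-overlap matching, and the coverage-based confidence score are all hypothetical choices standing in for the real module.

```python
def falsify(solution: str, case_keywords: list[str]) -> tuple[float, list[str]]:
    """Hypothetical keyword-comparison module: contrast a user's proposed
    solution with keywords known from the case description, returning a
    confidence in the user's solution plus falsifying questions."""
    solution_terms = set(solution.lower().split())
    hits = [kw for kw in case_keywords if kw.lower() in solution_terms]
    misses = [kw for kw in case_keywords if kw.lower() not in solution_terms]
    # Confidence in the user's solution: share of case keywords it covers
    # (an assumed scoring rule for illustration only).
    confidence = len(hits) / len(case_keywords) if case_keywords else 0.0
    questions = [
        f"Your solution does not mention '{kw}'. Are you sure it is irrelevant?"
        for kw in misses
    ]
    return confidence, questions
```

A solution covering two of three case keywords would, under this sketch, receive a confidence of 2/3 and one falsifying question about the missing keyword, illustrating how such a module differs from the plain 'Are you sure?' control feedback.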
Faculty
Faculteit der Sociale Wetenschappen
