Resilience Towards Bias in Artificial Agents: Human-Agent Team Performing a Triage Task

Issue Date
2021-01-18
Language
en
Abstract
Intelligent artificial agents have emerged in various domains of human society (financial, legal, social). The use of intelligent agents in morally loaded situations calls for meaningful human control (MHC). Human-agent teaming is necessary to ensure MHC, but it is not yet clear which type of teamwork is optimal. This study helps to better understand which human-agent team (HAT) design patterns result in MHC. One team design pattern was tested: a data-driven decision-support pattern. The testbed of the experiment was the morally loaded situation of making triage decisions during a pandemic when resources are scarce. Participants performed triage in collaboration with either a biased or an unbiased agent. The experiment indicates that in a data-driven team design pattern (without explanations from the agents), participants were neither resilient to the bias in the agent nor able to account for biased outcomes of the HAT. This research shows how important it is to design and test for meaningful human control in a HAT. It suggests that a lack of MHC can leave people unable to detect biases in machines and unable to prevent biases in the outcome of the HAT.
Faculty
Faculteit der Sociale Wetenschappen