How different delegation forms can preserve control in Human Machine Teams

Issue Date

2023-11-29

Language

en

Abstract

Systems driven by Artificial Intelligence (AI) are increasingly designed to collaborate with humans in a team setting, forming Human Machine Teams (HMTs), including in high-risk contexts such as military operations. State-of-the-art technology aims to make these autonomous agents fulfil increasingly high-level instructions (goals). Highly autonomous agents promise to make the lives of their team members safer, easier, and more efficient. However, various technical, normative, and human limitations make it hard to implement such agents in practice. Current technology limits the self-sufficiency of autonomous agents in complex environments. Ethicists raise important questions regarding meaningful human control (MHC) over autonomous systems. Responsibility gaps occur where it is unclear who is responsible for an autonomous system's actions. Human team members may not be in a position to be held accountable because they did not have sufficient control over the system, or because they lacked the information required to operate it. In addition, human team members may choose not to use their AI team members because of a lack of trust, or overuse them because of overtrust. All these factors make it important to question critically where such high levels of autonomy are required, and where lower levels may suffice.

There are different ways to delegate tasks to an AI team member. Each form of delegation handles these factors differently and may therefore excel at different use cases. This study compared playbook delegation, a popular intermediary form of delegation, to the two extremes: task-based and goal-based delegation. The delegation forms were compared on performance, experienced control, trust, and attributed responsibility, while the AI team member's capabilities were compromised to different degrees. The comparison used a collaborative cooking task in a 2D kitchen simulation environment.
The delegation forms showed distinct strengths and weaknesses in the simulation. These weaknesses must be addressed for each delegation form to facilitate effective control, appropriate trust, and accurate attribution of responsibility. Future research should also aim to identify the various ways in which a system can be compromised (by technical, human, and environmental factors), and which delegation forms are best suited to a given situation. That way, systems can be designed with fitting delegation forms in mind, whether a single delegation form or adaptive delegation forms.


Faculty

Faculteit der Sociale Wetenschappen