Learning Human Intention for Taskable Agents

Issue Date

2019-09-20

Language

en

Abstract

As AI systems are continuously developed and improved, they are used for an ever-growing variety of tasks. At the same time, our dependence on these systems grows, and it becomes increasingly important that AI systems perform their tasks as we intend. This study focuses on agents that learn, given a task, how to perform that task as the human intended. The use of context-dependent task constraints is studied as an approximation of the human's intention for how the task should be executed. A drone reconnaissance task was built using a new multi-agent simulator, the Man-Agent Teaming Rapid Experimentation Simulator (MATRXS). In a pilot, a small number of participants taught an agent how they wanted a task to be completed in various contexts by specifying constraints. Machine learning models learned the context-dependent constraints effectively and efficiently (XGBoost, average F1 score of 0.95, 128 data points) for each participant individually. Models trained without context input features scored significantly lower (average F1 score of 0.60), demonstrating the context-dependency of human intention in agent tasking. Although the small scale of the experiment limits the conclusiveness of the results, they suggest that this is a promising approach for establishing meaningful human control over agents. Finally, the lessons learned from this exploratory study were summarized into a set of recommendations that indicate promising directions for future research and how to scale up to an experiment of greater magnitude and complexity.
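The with/without-context comparison described above can be sketched in a few lines. This is a minimal, hypothetical reconstruction: the feature names, the labeling rule, and the data are invented for illustration, and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, which the study actually used. It only illustrates the evaluation setup, not the study's data or results.

```python
# Hypothetical sketch of the context-dependency comparison: train one model
# on context + task features and one on task features only, compare F1.
# GradientBoostingClassifier is a stand-in for the XGBoost model in the study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 128  # matches the per-participant data-set size mentioned in the abstract

# Invented features: two binary context features (e.g. weather, threat level)
# and two continuous task features (e.g. target distance, battery level).
context = rng.integers(0, 2, size=(n, 2))
task = rng.random((n, 2))
X = np.hstack([context, task])

# Invented labeling rule: the constraint applies only in one specific context,
# so the label is fully determined by the context features.
y = (context[:, 0] == 1) & (context[:, 1] == 0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Model trained with context features included.
with_ctx = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
f1_with = f1_score(y_te, with_ctx.predict(X_te))

# Model trained on task features only (columns 2 and 3).
no_ctx = GradientBoostingClassifier(random_state=0).fit(X_tr[:, 2:], y_tr)
f1_without = f1_score(y_te, no_ctx.predict(X_te[:, 2:]), zero_division=0)

print(f"F1 with context: {f1_with:.2f}, without context: {f1_without:.2f}")
```

Because the labels here depend only on context, the context-aware model scores near-perfect F1 while the context-blind model cannot do better than chance, mirroring the gap (0.95 vs. 0.60) reported in the abstract.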

Faculty

Faculteit der Sociale Wetenschappen