Creating artificial moral agents for surveillance robots that can identify care and harm
Issue Date
2016-10-27
Language
en
Abstract
As artificial intelligence develops, robots are becoming progressively
more intelligent, autonomous, and intertwined with societal life. An
important current question is how we can make these robots apply
morality and ethics. One useful application for artificial moral agents is
surveillance robots, which could benefit from human-like moral judgement.
We therefore investigate the possibility of creating an artificial
moral agent (AMA) that can distinguish between care and harm. Care
and harm are viewed from the perspective of Moral Foundations
Theory. We attempt to solve the problem within the bottom-up
approach to designing artificial moral agents. First, a survey-based
approach is used to gather human judgements on different moral problems
related to Care/harm. Then a multi-layer perceptron is trained to
learn the underlying moral function in the survey data. Finally, the
parameter choices for the network that yield the highest performance
when classifying new moral problems are determined. We additionally
examine the hidden units of the trained network. Based on the results, we
discuss the limitations of the survey-based and bottom-up approaches, as
well as the ethical and philosophical implications.
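The pipeline described above (train a multi-layer perceptron on human moral judgements, select the parameter choices with the best classification performance, then inspect the hidden units) can be sketched as follows. This is a minimal illustration only: the feature encoding, the randomly generated placeholder data, and the specific hyperparameter grid are invented for the example and are not the thesis's actual survey data or network configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder stand-in for the survey data: each moral scenario is
# encoded as a fixed-length feature vector, with label 0 = Harm,
# 1 = Care. The real study gathered these labels from human judgements.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic, separable labels

# Determine parameter choices that yield the highest performance when
# classifying held-out moral problems, via cross-validated grid search
# over an assumed (illustrative) hyperparameter grid.
search = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(5,), (10,), (20,)],
        "alpha": [1e-4, 1e-2],
    },
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))

# Examine the hidden units of the trained network: coefs_[0] holds the
# input-to-hidden weight matrix, one column per hidden unit.
best = search.best_estimator_
print(best.coefs_[0].shape)  # (n_features, n_hidden_units)
```

The hidden-unit weights give a rough view of which input features each unit responds to, which is one simple way to probe what moral function the network has learned.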
Faculty
Faculteit der Sociale Wetenschappen