Can Robots Gain Appreciation by Mimicking Moral Values?
Issue Date
2019-06-30
Language
en
Abstract
If a robot imitates the moral values of the person it is interacting with, can it influence the way it is perceived
by that person? This thesis attempts to answer that question in terms of trust in the robot, and the robot's
likability and perceived intelligence. To obtain a concrete measure of a person's morality, the Moral Foundations
Theory is used. The Moral Foundations Theory identifies several pillars of moral judgment: care/harm,
fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. This thesis limits its
scope to the loyalty/betrayal foundation (also called Ingroup). An experiment was conducted in which participants
first completed a survey to ascertain their reliance on the Ingroup foundation. They then talked to a Nao robot,
which described a scenario and, depending on the experimental condition, followed up by deciding either to help
or to betray its ingroup. Afterwards, the participants evaluated the robot on trust, likability, and perceived
intelligence. No significant results were found, but several suggestions can be made to improve similar research
in the future.
Faculty
Faculteit der Sociale Wetenschappen
