Creating artificial moral agents for surveillance robots that can identify care and harm

dc.contributor.advisor: Haselager, W.F.G.
dc.contributor.author: Erp, D.L.J. van
dc.description.abstract: As artificial intelligence develops, robots are becoming progressively more intelligent, autonomous, and intertwined with societal life. An important current question is how we can make these robots apply morality and ethics. One useful application for artificial moral agents is surveillance robots, which could benefit from human-like moral judgement. We therefore investigate the possibility of creating an artificial moral agent (AMA) that can distinguish between care and harm, viewing care and harm from the perspective of Moral Foundations Theory. We approach the problem within the bottom-up paradigm for designing artificial moral agents. First, a survey is used to gather human judgements on different moral problems related to Care/harm. Then, a multi-layer perceptron is trained to learn the underlying moral function in the survey data. Finally, the parameter choices for the network that yield the highest performance when classifying new moral problems are determined. Additionally, we examine the hidden units of the trained network. Based on the results, we discuss the limitations of the survey-based approach and of the bottom-up approach, as well as the ethical and philosophical implications.
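The pipeline the abstract describes (survey judgements as labels, a multi-layer perceptron learning the underlying moral function, then selecting the best-performing parameters) can be sketched roughly as follows. This is a hypothetical illustration, not the thesis's actual code: the feature encoding, toy data, network sizes, and the use of scikit-learn are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the abstract's pipeline: train a multi-layer
# perceptron to classify moral scenarios as "care" vs "harm" from
# survey-derived feature vectors. All data and parameters below are
# illustrative stand-ins, not taken from the thesis.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-in for survey data: each row encodes one moral scenario,
# each label is the (simulated) majority human judgement
# (0 = care, 1 = harm).
X = rng.random((200, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer; its size and activation are the kind of parameter
# choices the abstract says are tuned for classification performance.
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Held-out accuracy on unseen "moral problems".
accuracy = clf.score(X_test, y_test)

# The trained hidden units (clf.coefs_[0]) can then be inspected,
# as the abstract's analysis of the network's hidden layer suggests.
hidden_weights = clf.coefs_[0]
```

In practice, the parameter search over hidden-layer size, activation, and similar choices would be run systematically (e.g. with cross-validation) rather than fixed as above.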
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.title: Creating artificial moral agents for surveillance robots that can identify care and harm
Erp, van D._BSc_Thesis_2016.pdf