A Car that Kills: Predicting the Fairness of Moral Dilemmas in Autonomous Vehicles

dc.contributor.advisor: Haselager, W.F.G.
dc.contributor.author: Pijkeren, T.
dc.date.issued: 2017-11-20
dc.description.abstract: Many traffic accidents could most likely be avoided if autonomous vehicles (AVs) were widely used. However, even with perfect sensing, AVs cannot guarantee full safety, and some AVs will certainly crash. When a crash is unavoidable, the AV could end up in a situation where it must choose between the lesser of two evils. Asking people for their opinions about such situations could give us an understanding of which moral decisions are preferred. However, it is impossible to ask people's opinion on every possible traffic situation. To address this problem, I trained an artificial neural network (ANN) to predict the human evaluation of traffic situations in which a moral choice must be made. The network was trained on filled-in questionnaires about these moral dilemmas. The goal of this research is to see to what extent an ANN can predict these human evaluations. The results show that the ANN is not able to predict the human evaluation of these traffic situations. This is most likely because the ANN was trained on only forty-two instances. However, the human ability to morally judge a situation is highly complex, which may be another reason why the ANN is not able to generalise to new situations.
dc.identifier.uri: http://theses.ubn.ru.nl/handle/123456789/5304
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.thesis.type: Bachelor
dc.title: A Car that Kills: Predicting the Fairness of Moral Dilemmas in Autonomous Vehicles
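
The abstract describes training a small ANN on forty-two filled-in questionnaires to predict human fairness evaluations of traffic dilemmas. The sketch below is a minimal, hypothetical illustration of that kind of setup, not the thesis's actual model, features, or data: it assumes a scikit-learn MLPRegressor, an invented four-feature encoding of a dilemma, and synthetic placeholder ratings, and mainly shows why a sample of this size makes generalisation hard.

```python
# Minimal sketch (not the thesis's actual model or data): a small feed-forward
# network trained on ~42 questionnaire-style instances. Features, targets, and
# the encoding are synthetic and hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical encoding of a dilemma:
# [passengers, pedestrians, pedestrian_is_child, swerve_kills_passengers]
# Target: a fairness rating in [0, 1], standing in for questionnaire responses.
X = rng.integers(0, 5, size=(42, 4)).astype(float)
y = rng.uniform(0.0, 1.0, size=42)  # placeholder ratings, not real responses

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# One small hidden layer, comparable in spirit to a simple ANN on tabular input.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# With only ~30 training instances, held-out performance is typically poor,
# mirroring the abstract's finding that the ANN failed to generalise.
print("R^2 on held-out dilemmas:", model.score(X_test, y_test))
```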
Files
Original bundle (1 file)
Name: Pijkeren, T._BSc_Thesis_2017.pdf
Size: 885.48 KB
Format: Adobe Portable Document Format
Description: Thesis text