AUTONOMOUS DRIVING: LEARNING FROM INTERVENTIONS

dc.contributor.advisor: Matiisen, Tambet
dc.contributor.advisor: Tampuu, Ardi
dc.contributor.advisor: Guclu, Umut
dc.contributor.author: Churchman, Thomas
dc.date.issued: 2021-10-01
dc.description.abstract: Autonomous driving systems have not yet reached full self-driving capability. Safety-driver oversight is required to recognize challenging situations and detect deviations. Each time the safety driver intervenes, a failure mode of the autonomous driving system is signaled. In this work, methods for improving deep neural network models using data arising from safety-driver interventions are evaluated. Self-driving models are usually trained through regular imitation learning on expert examples. To incorporate interventions into such training, a novel negative-learning concept, similar in spirit to reinforcement learning, is proposed. The experiments are performed in CARLA, a realistic driving simulator. Offline learning from interventions is found to improve model performance, though care must be taken, as overfitting is likely when many interventions are similar. Negative neural network learning is found to have a small but promising effect.
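The negative-learning idea described in the abstract can be illustrated with a toy sketch: treat expert demonstrations as regular imitation-learning targets, and for samples that preceded a safety-driver intervention, reverse the sign of the gradient so the model's prediction is pushed *away* from the action that led to the failure. This is only an illustrative sketch of the concept, not the thesis implementation; the linear model, the `sign` argument, and all sample values below are assumptions.

```python
import numpy as np

def sgd_step(w, x, a, sign, lr=0.1):
    """One SGD step on the squared error between prediction w @ x and action a.

    sign=+1: regular imitation learning -- move the prediction toward
             the expert action.
    sign=-1: negative learning (hypothetical sketch) -- move the
             prediction away from the action that preceded an
             intervention, by reversing the gradient direction.
    """
    pred = w @ x
    grad = 2.0 * (pred - a) * x  # gradient of (pred - a)^2 w.r.t. w
    return w - lr * sign * grad

w = np.zeros(2)
x = np.array([1.0, 0.5])  # toy observation features

# Expert demonstrations: learn to steer toward a = 1.0.
for _ in range(50):
    w = sgd_step(w, x, a=1.0, sign=+1)

# Intervention sample: the action a = 0.2 triggered an intervention,
# so one negative-learning step pushes the prediction away from it.
w_neg = sgd_step(w, x, a=0.2, sign=-1)
```

After the positive steps the prediction converges toward the expert action; after the negative step the prediction moves further from the intervened-upon action, which is the intended effect of the sign flip.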
dc.identifier.uri: https://theses.ubn.ru.nl/handle/123456789/16091
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: specialisations::Faculteit der Sociale Wetenschappen::Artificial Intelligence::Master Artificial Intelligence
dc.thesis.studyprogramme: studyprogrammes::Faculteit der Sociale Wetenschappen::Artificial Intelligence
dc.thesis.type: Master
dc.title: AUTONOMOUS DRIVING: LEARNING FROM INTERVENTIONS
Files
Original bundle
Name: Churchman, T. s-4206606-2021.pdf
Size: 7.75 MB
Format: Adobe Portable Document Format