Proximal Policy Optimization for lane following

Issue Date
2020-07-10
Language
en
Abstract
This thesis aimed to apply a state-of-the-art reinforcement learning algorithm, proximal policy optimization, to a complex task with real-world applicability in which sensor data is not always reliable. The algorithm was tested on the task of lane following, using the autonomous driving simulator Carla. Semantic segmentation and the Canny filter were discussed as methods for extracting the lanes from the RGB sensor that the Carla simulator provided. The agent's performance was then to be examined on one of Carla's maps. In the end, it proved impossible to run this experiment due to hardware limitations. As an alternative, the algorithm was tested on the Lunar Lander environment, a game in which the agent has to land a rocket on the moon. Adding Gaussian noise to the agent's sensors did not prevent the algorithm from converging. From this it can be concluded that proximal policy optimization can derive an optimal policy in simple environments even if the sensor data is not completely reliable. There are, however, limits to the amount of noise that can be added.
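
The noise experiment summarized above can be illustrated with a minimal sketch. This is not the thesis's own code; it assumes the Gymnasium API and the Stable-Baselines3 implementation of proximal policy optimization, and the noise level sigma and training budget are hypothetical values chosen for illustration.

import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO  # assumed PPO implementation, not the one used in the thesis

class GaussianSensorNoise(gym.ObservationWrapper):
    # Corrupt each observation with zero-mean Gaussian noise before the agent sees it,
    # mimicking unreliable sensor data while leaving the simulator state untouched.
    def __init__(self, env, sigma=0.05):
        super().__init__(env)
        self.sigma = sigma

    def observation(self, obs):
        noise = np.random.normal(0.0, self.sigma, size=obs.shape)
        return (obs + noise).astype(obs.dtype)

# LunarLander-v2 is the Box2D environment referred to as "Lunar Lander" in the abstract.
env = GaussianSensorNoise(gym.make("LunarLander-v2"), sigma=0.05)
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)  # hypothetical budget; convergence depends on sigma

Increasing sigma in this sketch is the knob that corresponds to the abstract's observation that there are limits to how much noise the algorithm can tolerate before it stops converging.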
Faculty
Faculteit der Sociale Wetenschappen