Proximal Policy Optimization for lane following
dc.contributor.advisor | Thill, S. | |
dc.contributor.author | Geurtjens, R. P. | |
dc.date.issued | 2020-07-10 | |
dc.description.abstract | This thesis aimed to apply a state-of-the-art reinforcement learning algorithm, proximal policy optimization, to a complicated task with real-world applicability in which sensor data is not always reliable. The algorithm was tested on the task of lane following, using the autonomous-driving simulator Carla. Semantic segmentation and the Canny filter were discussed as methods to extract the lanes from the RGB sensor that the Carla simulator provided. The agent's performance was then to be examined on one of Carla's maps. In the end it turned out to be impossible to run the experiment due to hardware limitations. As an alternative, the algorithm was tested on the Lunar Lander environment, a game in which the agent has to land a rocket on the moon. Adding Gaussian noise to the agent's sensors did not prevent the algorithm from converging. It can be concluded from this that proximal policy optimization can derive an optimal policy in easy environments even if the sensor data is not completely reliable. There are, however, limits to the amount of noise that can be added. | en_US |
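The abstract's key experiment corrupts the agent's observations with Gaussian noise before they reach the policy. A minimal sketch of that idea, assuming a callable wrapper applied to each observation vector (the class name, `sigma` parameter, and 8-dimensional observation size are illustrative, not taken from the thesis):

```python
import numpy as np

class NoisyObservation:
    """Hypothetical sketch: corrupt sensor readings with zero-mean
    Gaussian noise, as in the abstract's Lunar Lander experiment.
    Names and parameters are illustrative, not from the thesis."""

    def __init__(self, sigma, seed=None):
        self.sigma = sigma  # standard deviation of the sensor noise
        self.rng = np.random.default_rng(seed)

    def __call__(self, obs):
        # Add independent Gaussian noise to every sensor channel.
        obs = np.asarray(obs, dtype=float)
        return obs + self.rng.normal(0.0, self.sigma, size=obs.shape)

# Example: an 8-dimensional Lunar Lander-style observation vector.
clean = np.zeros(8)
noisy = NoisyObservation(sigma=0.1, seed=0)(clean)
```

In practice such a wrapper would sit between the simulator and the policy network, so the learning algorithm itself is unchanged; increasing `sigma` reproduces the abstract's finding that there is a limit to how much noise convergence can tolerate.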
dc.embargo.lift | 10000-01-01 | |
dc.embargo.type | Permanent embargo | en_US |
dc.identifier.uri | https://theses.ubn.ru.nl/handle/123456789/12666 | |
dc.language.iso | en | en_US |
dc.thesis.faculty | Faculteit der Sociale Wetenschappen | en_US |
dc.thesis.specialisation | Bachelor Artificial Intelligence | en_US |
dc.thesis.studyprogramme | Artificial Intelligence | en_US |
dc.thesis.type | Bachelor | en_US |
dc.title | Proximal Policy Optimization for lane following | en_US |
Files
Original bundle
1 - 1 of 1
- Name: 1006223 Geurtjens.pdf
- Size: 1.47 MB
- Format: Adobe Portable Document Format