Improving Model-Based Reinforcement Learning by Disentangling the Latent Representation of the Environment
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Guclu, U. | |
| dc.contributor.author | Janssen, M.J. | |
| dc.date.issued | 2019-07-12 | |
| dc.description.abstract | This thesis explores to what degree model-based reinforcement learning can benefit from recent advances in representation learning. Specifically, we measure the extent to which the amount of feature entanglement within the learned representation of the environment influences overall model performance. We train a total of 45 (variational) autoencoders on a custom box-physics environment, varying the relative influence of the Kullback-Leibler divergence term on the encoder's loss. For each of these models, we measure the amount of feature entanglement in their latent representations using the measures proposed in Higgins et al. (2017) and Eastwood and Williams (2018). These disentanglement scores are then evaluated against the loss of a recurrent LSTM network that was pre-trained on sequences of environment encodings generated by the corresponding autoencoder. -- We find that less entangled representations of the environment significantly increase the accuracy of the recurrent model, and that this effect is even stronger for larger latent spaces. | en_US |
| dc.identifier.uri | https://theses.ubn.ru.nl/handle/123456789/12599 | |
| dc.language.iso | en | en_US |
| dc.thesis.faculty | Faculteit der Sociale Wetenschappen | en_US |
| dc.thesis.specialisation | Bachelor Artificial Intelligence | en_US |
| dc.thesis.studyprogramme | Artificial Intelligence | en_US |
| dc.thesis.type | Bachelor | en_US |
| dc.title | Improving Model-Based Reinforcement Learning by Disentangling the Latent Representation of the Environment | en_US |
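
For context on the abstract above: varying "the relative influence of the Kullback-Leibler divergence term" on the encoder's loss corresponds to sweeping the weight on the KL term in a beta-VAE-style objective. The sketch below is a minimal, hedged illustration of such a loss, assuming a PyTorch implementation with a diagonal-Gaussian posterior; the function and variable names (`beta_vae_loss`, `beta`, `mu`, `logvar`) are illustrative and not taken from the thesis itself.

```python
# Minimal sketch of a beta-weighted VAE loss (assumed form, not the thesis code).
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    # Reconstruction term: how well the decoder reproduces the input observation.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # KL divergence between the approximate posterior N(mu, sigma^2) and N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # A larger beta increases the relative influence of the KL term, which tends
    # to encourage a more factorised (disentangled) latent representation,
    # typically at some cost in reconstruction quality.
    return recon + beta * kl
```

In the setup the abstract describes, re-training the autoencoder for a range of such KL weights yields latent representations with varying degrees of entanglement, which can then be scored with the metrics of Higgins et al. (2017) and Eastwood and Williams (2018) and compared against the prediction loss of the downstream recurrent LSTM model.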