Improving Model-Based Reinforcement Learning by Disentangling the Latent Representation of the Environment

dc.contributor.advisor: Guclu, U.
dc.contributor.author: Janssen, M.J.
dc.date.issued: 2019-07-12
dc.description.abstract: This thesis explores to what degree model-based reinforcement learning can benefit from recent advances in representation learning. Specifically, we measure how the amount of feature entanglement within the learned representation of the environment influences overall model performance. We train a total of 45 (variational) autoencoders on a custom box-physics environment, varying the relative influence of the Kullback-Leibler divergence term in the encoder's loss. For each of these models, we measure the amount of feature entanglement in their latent representations using the measures proposed by Higgins et al. (2017) and Eastwood and Williams (2018). These disentanglement scores are then evaluated against the loss of a recurrent LSTM network pre-trained on sequences of environment encodings generated by the corresponding autoencoder. We find that less entangled representations of the environment significantly increase the accuracy of the recurrent model, and that this effect is even stronger for larger latent spaces.
dc.identifier.uri: https://theses.ubn.ru.nl/handle/123456789/12599
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.thesis.type: Bachelor
dc.title: Improving Model-Based Reinforcement Learning by Disentangling the Latent Representation of the Environment
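The abstract above refers to varying the relative weight of the Kullback-Leibler term in the encoder's loss; this is the beta-VAE objective of Higgins et al. (2017). As a minimal sketch in conventional beta-VAE notation (the symbols below are the standard ones from that paper, not taken from the thesis itself), the per-sample loss whose KL weight is varied is:

    \mathcal{L}(\theta, \phi; x) = -\mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] + \beta \, D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right)

Setting beta = 1 recovers the standard variational autoencoder, while beta > 1 penalizes the KL term more heavily, which Higgins et al. (2017) associate with more disentangled latent representations.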

Files

Original bundle
Name: 4764633 Janssen.pdf
Size: 2.11 MB
Format: Adobe Portable Document Format