Visual Attention Through Uncertainty Minimization in Recurrent Generative Models
dc.contributor.advisor | Gerven, Marcel, van | |
dc.contributor.advisor | Gucluturk, Yagmur | |
dc.contributor.author | Standvoss, Kai | |
dc.date.issued | 2019-08-14 | |
dc.description.abstract | Allocating visual attention through saccadic eye movements is a key ability of intelligent agents. Attention is influenced both by bottom-up stimulus properties and top-down task demands. The interaction of these two attention mechanisms is not yet fully understood. A parsimonious reconciliation posits that both processes serve the minimization of predictive uncertainty. We propose a recurrent generative neural network model that predicts a visual scene based on foveated glimpses. The model shifts its attention in order to minimize the uncertainty in its predictions. We show that the proposed model produces naturalistic eye movements focusing on salient stimulus regions. Introducing the additional task of classifying the stimulus modulates the saccade patterns and enables effective image classification. Given otherwise equal conditions, we show that different task requirements cause the model to focus on distinct, task-relevant regions. The model’s saccade statistics correspond well with previous experimental data in humans and provide insights into unsettled controversies in the literature. The results provide evidence that uncertainty minimization could be a fundamental mechanism for the allocation of visual attention. | en_US |
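The abstract describes selecting saccade targets that minimize predictive uncertainty. A minimal sketch of that selection step, assuming uncertainty is estimated as the per-pixel variance across an ensemble of model reconstructions (the thesis's actual architecture is a recurrent generative network; the ensemble here and all names are illustrative assumptions):

```python
import numpy as np

def predictive_uncertainty(predictions):
    """Per-pixel variance across an ensemble of reconstructions.
    Axis 0 indexes ensemble members (an illustrative stand-in for
    the model's predictive uncertainty)."""
    return predictions.var(axis=0)

def next_fixation(uncertainty_map):
    """Select the location of maximal predictive uncertainty
    as the next saccade target."""
    return np.unravel_index(np.argmax(uncertainty_map),
                            uncertainty_map.shape)

# Toy example: 5 "reconstructions" of an 8x8 scene that agree
# everywhere except one pixel, which is predicted inconsistently.
rng = np.random.default_rng(0)
preds = rng.normal(0.0, 0.01, size=(5, 8, 8))
preds[:, 2, 3] = np.arange(5) * 10.0  # high-variance pixel

fixation = next_fixation(predictive_uncertainty(preds))
print(fixation)  # → (2, 3): the model saccades to the uncertain region
```

In the full model this loop would repeat: each foveated glimpse at the chosen location updates the recurrent state, refining the reconstruction and shrinking uncertainty there before the next saccade is selected.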
dc.embargo.lift | 2044-08-14 | |
dc.embargo.type | Temporary embargo | en_US |
dc.identifier.uri | https://theses.ubn.ru.nl/handle/123456789/10913 | |
dc.language.iso | en | en_US |
dc.thesis.faculty | Faculteit der Sociale Wetenschappen | en_US |
dc.thesis.specialisation | Researchmaster Cognitive Neuroscience | en_US |
dc.thesis.studyprogramme | Researchmaster Cognitive Neuroscience | en_US |
dc.thesis.type | Researchmaster | en_US |
dc.title | Visual Attention Through Uncertainty Minimization in Recurrent Generative Models | en_US |