State segmentation in neural network models and the brain
This project investigates the brain mechanism supporting event segmentation, known as neural state segmentation. Using convolutional neural networks, I examined whether neural state segmentations depend directly on the visual stimulus. To do so, I compared neural network feature segmentations, which depend only on the current visual input, with segmentations of human fMRI data. Network features were extracted from the individual frames of a movie, and the per-frame features were assembled into a timeseries, which was then segmented with GSBS. This procedure was applied to AlexNet, a network trained for object recognition, and to VGG16, a network trained for face recognition. The human fMRI data came from 15 subjects who watched the same movie; each subject's data was segmented for two brain regions, LOC and FFA. The comparison between the brain regions and the models yielded inconsistent results: AlexNet showed some overlap with the brain regions, whereas VGG16 did not. Most likely, the data were too noisy to produce consistent results. Before a conclusion can be drawn, a more precise study is needed, with carefully designed steps to extract the most from the model data.
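To make the comparison step concrete, the following is a minimal sketch of how overlap between model-derived and fMRI-derived state boundaries could be quantified. The function name, the tolerance parameter, and the example boundary indices are all hypothetical illustrations, not the actual metric or data used in this project; the real analysis compared GSBS output for the network timeseries against GSBS output for the LOC and FFA timeseries.

```python
# Hypothetical sketch: compare state boundaries (timepoint indices) from a
# model-based segmentation against those from an fMRI-based segmentation.
def boundary_overlap(model_bounds, fmri_bounds, tolerance=1):
    """Fraction of model boundaries that fall within `tolerance`
    timepoints of at least one fMRI boundary."""
    if not model_bounds:
        return 0.0
    hits = sum(
        any(abs(m - f) <= tolerance for f in fmri_bounds)
        for m in model_bounds
    )
    return hits / len(model_bounds)

# Illustrative boundaries along the movie's timeline (made-up indices).
model = [12, 40, 77, 103]
fmri = [11, 42, 76, 130]
print(boundary_overlap(model, fmri))  # 2 of 4 model boundaries match -> 0.5
```

A tolerance window is used because fMRI and frame-level timeseries are unlikely to place boundaries at exactly the same timepoint, given the sluggish haemodynamic response relative to the visual input.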
Faculteit der Sociale Wetenschappen