Sequentially Learning Multiple Meaningful Representations in Static Neural Networks: avoiding catastrophic interference in multi-layer perceptrons

dc.contributor.advisor: Sprinkhuizen-Kuyper, I.G.
dc.contributor.advisor: Rooij, I.J.E.I. van
dc.contributor.author: Bieger, J.E.
dc.date.issued: 2009-05-06
dc.description.abstract: Artificial neural networks (ANNs) attempt to mimic human neural networks in order to solve problems and carry out tasks. In contrast to their human counterparts, however, ANNs generally cannot learn to perform new tasks without forgetting everything they already know, due to a phenomenon called catastrophic interference. This paper discusses this phenomenon, shows that it occurs in multi-layer perceptrons with arbitrary task representations, and proposes and discusses the static meaningful representation learning method, which uses meaningful task representations to circumvent this problem when learning to perform multiple tasks. The technique is powerful enough to enable the learning of several simple tasks without changing the weights of the network. It remains to be seen whether the technique scales to more interesting task domains. The real potential of using meaningful task representations lies in their combination with other techniques.
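The catastrophic interference the abstract describes can be illustrated with a toy example. The sketch below is a hypothetical demonstration, not the thesis's actual experiments: a single linear unit is trained with plain gradient descent first on task A (logical AND), then only on task B (logical OR), after which its error on task A has grown — the sequential training on B overwrote the weights that encoded A.

```python
# Toy demonstration of catastrophic interference (illustrative only;
# the task setup and hyperparameters here are assumptions, not the
# thesis's experimental design).

# Two toy tasks over 2-bit inputs: A = logical AND, B = logical OR.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
task_a = {x: float(x[0] and x[1]) for x in inputs}
task_b = {x: float(x[0] or x[1]) for x in inputs}

# One linear unit with a bias, trained by per-sample gradient descent
# on mean squared error.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    return w[0] * x[0] + w[1] * x[1] + b

def train(task, epochs=200, lr=0.1):
    global b
    for _ in range(epochs):
        for x, y in task.items():
            err = predict(x) - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err

def mse(task):
    return sum((predict(x) - y) ** 2 for x, y in task.items()) / len(task)

train(task_a)
err_a_before = mse(task_a)  # low: task A has been learned
train(task_b)               # sequential training on task B only
err_a_after = mse(task_a)   # task A error rises: interference
```

After the second training phase the weights have moved to fit task B, so performance on task A degrades even though task A was never "untrained" explicitly — the shared weights are the only storage, and gradient descent on B freely overwrites them.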
dc.identifier.uri: http://theses.ubn.ru.nl/handle/123456789/56
dc.language.iso: en
dc.thesis.faculty: Faculteit der Sociale Wetenschappen
dc.thesis.specialisation: Bachelor Artificial Intelligence
dc.thesis.studyprogramme: Artificial Intelligence
dc.thesis.type: Bachelor
dc.title: Sequentially Learning Multiple Meaningful Representations in Static Neural Networks: avoiding catastrophic interference in multi-layer perceptrons
Files
Original bundle
Name: Bieger, J.,Ba_thesis_09.pdf
Size: 1.3 MB
Format: Adobe Portable Document Format
Description: Thesis text