Sequentially Learning Multiple Meaningful Representations in Static Neural Networks: avoiding catastrophic interference in multi-layer perceptrons

Issue Date
2009-05-06
Language
en
Abstract
Artificial neural networks (ANNs) attempt to mimic human neural networks in order to solve problems and carry out tasks. In contrast to their human counterparts, however, ANNs generally cannot learn to perform new tasks without forgetting everything they already know, a phenomenon called catastrophic interference. This paper discusses this phenomenon, shows that it occurs in multi-layer perceptrons with arbitrary task representations, and proposes and discusses the static meaningful representation learning method, which uses meaningful task representations to circumvent this problem when learning to perform multiple tasks. The technique is powerful enough to enable the learning of several simple tasks without changing the weights of the network. It remains to be seen whether the technique scales to more interesting task domains. The real potential of using meaningful task representations lies in their combination with other techniques.
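The interference effect the abstract describes can be reproduced in a few lines. The following is a minimal toy sketch (not the paper's experimental setup): a small multi-layer perceptron is trained on one mapping (task A), then sequentially on an incompatible mapping (task B) over the same inputs, after which its task-A error is measured again. The architecture, tasks, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    """One-hidden-layer MLP: tanh hidden units, linear output."""
    return np.tanh(X @ W1) @ W2

def train(W1, W2, X, y, lr=0.1, steps=2000):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = h @ W2 - y                          # d(0.5*MSE)/d(output)
        gW2 = h.T @ err
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))  # backprop through tanh
        W1 -= lr * gW1 / len(X)
        W2 -= lr * gW2 / len(X)
    return W1, W2

def mse(W1, W2, X, y):
    return float(np.mean((forward(W1, W2, X) - y) ** 2))

# Two incompatible targets over the same inputs (illustrative choice).
X  = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
yA = np.array([[0.], [1.], [1.], [0.]])   # task A: XOR
yB = np.array([[1.], [0.], [0.], [1.]])   # task B: XNOR (complement of A)

W1 = rng.normal(0.0, 0.5, (2, 8))
W2 = rng.normal(0.0, 0.5, (8, 1))

W1, W2 = train(W1, W2, X, yA)
errA_before = mse(W1, W2, X, yA)          # task-A error after learning A

W1, W2 = train(W1, W2, X, yB)             # now train only on task B...
errA_after = mse(W1, W2, X, yA)           # ...and task A is interfered with

print(f"task-A MSE before/after training on task B: "
      f"{errA_before:.3f} / {errA_after:.3f}")
```

Because the shared weights are overwritten by the gradients of task B, the task-A error rises after the second training phase, which is exactly the catastrophic interference the paper's method aims to circumvent.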
Faculty
Faculteit der Sociale Wetenschappen