Unsupervised Learning of Compression in the Semantic Pointer Architecture

Issue Date
2023-02-03
Language
en
Abstract
The Semantic Pointer Architecture for spiking neural networks has proven to be a biologically plausible model of cognition that can be applied practically to many tasks while offering some advantages over deep neural networks. However, the compression of information, which serves as the backbone of this architecture, has previously only been learned in biologically implausible ways. Since semantic pointers are meant to form clusters, we hypothesize that unsupervised learning of compression is possible by using a clustering metric as a learning signal. To make learning easier, we first explore how to minimize the negative impact of circular convolution, a common operation in this architecture, on the separability of data. For this, many vectors are drawn from normal distributions with different parameters and then convolved with the same dataset. The silhouette score is computed after convolution to determine which parameters degrade separability the most. This is also done with vectors generated by Nengo. To test the unsupervised learning hypothesis, two simple networks are created in Nengo that take a linearly separable dataset and compress it in multiple stages. The output is sampled, clustered using k-means, and the inverted silhouette score is fed back as an error signal. The convolution experiment shows that Nengo's vectors are not optimal for preserving separability when convolved with data. Unsupervised learning of compression did not succeed. It is hypothesized that this approach might work if local learning signals were derived for each population instead of providing a single global signal to every population.
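
A minimal sketch of the convolution experiment described in the abstract might look as follows; the make_blobs dataset, the dimensionality, and the normal-distribution parameters are illustrative assumptions rather than the settings used in the thesis.

# Sketch: how circular convolution with random vectors affects separability.
# Assumptions: a make_blobs dataset stands in for the original data; the
# standard deviations tried here are illustrative, not the thesis settings.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

def circular_convolve(a, b):
    """Circular convolution via FFT, the binding operation used in the SPA."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

dims = 64
X, labels = make_blobs(n_samples=300, n_features=dims, centers=3, random_state=0)
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-length "semantic pointers"

print("baseline silhouette:", silhouette_score(X, labels))

rng = np.random.default_rng(0)
for scale in (1.0 / dims, 1.0 / np.sqrt(dims), 1.0):
    v = rng.normal(0.0, scale, size=dims)           # vector to convolve with
    Xc = np.array([circular_convolve(x, v) for x in X])
    print(f"std={scale:.4f}  silhouette after convolution:",
          silhouette_score(Xc, labels))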
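
The clustering-based error signal can be sketched in the same way; the particular inversion used here (1 - score) and the number of clusters are assumptions about the thesis's formulation, and the resulting scalar corresponds to the single global signal that the abstract reports as unsuccessful.

# Sketch: turning the clustering metric into an error signal.
# Assumptions: sampled_output is a matrix of decoded population activity;
# "inverted" is read here as 1 - silhouette, which may differ from the thesis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def clustering_error(sampled_output, n_clusters=3):
    """Cluster the compressed representations and return an error that
    shrinks as the clusters become better separated."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(sampled_output)
    score = silhouette_score(sampled_output, labels)  # in [-1, 1]
    return 1.0 - score                                # 0 when clustering is perfect

# In Nengo, this scalar would be broadcast as one global error to every learned
# connection (e.g. through a Node driving a PES learning rule), which is the
# setup the abstract contrasts with per-population local learning signals.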
Faculty
Faculteit der Sociale Wetenschappen