Understanding recurrent rate neural network by solving convex optimization

Issue Date

2022-07-11

Language

en

Abstract

Recurrent neural networks are widely used in artificial intelligence and neuroscience. Although these networks perform well, it remains unclear how they function after training: their inner workings are a black box, since we have little insight into the computations the network actually carries out. Recent work has offered new insight into network computations, showing that spiking neural networks perform convex optimization and that their dynamics can be visualized intuitively with geometrical tools. Since that work only considers spiking networks, it is worth exploring the more widely applied rate-based networks in a similar fashion. In this paper, we aim to solve convex problems with recurrent rate networks and, in doing so, build a better understanding of network computations. Our approach uses neural Ordinary Differential Equations as the network model, which allows us to visualize the network dynamics with tools from dynamical systems theory, such as phase plane analysis. We show that these recurrent rate networks can solve convex optimization problems for specific parameter settings. Furthermore, we found that the network's non-linearity considerably affected whether convex optimization was solved successfully. The dynamics of the activation function, the individual neurons, and the network as a whole are clearly explained by the network visualization.
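
For readers unfamiliar with the setup, the following sketch illustrates the general idea of a recurrent rate network written as an ordinary differential equation whose fixed point can be read as the solution of a convex problem, and whose trajectory can be plotted in the phase plane. All parameter values, function names, and the choice of a ReLU non-linearity are illustrative assumptions, not the specific model or parameters used in this work.

import numpy as np

# Sketch (assumed example): a 2-neuron recurrent rate network as an ODE,
#   tau * dr/dt = -r + phi(W r + b).
# With symmetric connectivity whose spectral norm is below 1, the dynamics
# contract to a unique fixed point, which can be interpreted as the optimum
# of an associated convex problem.

def phi(x):
    # Rectified-linear activation; the abstract notes that the choice of
    # non-linearity strongly affects whether the convex problem is solved.
    return np.maximum(x, 0.0)

def rate_network_ode(r, W, b, tau=1.0):
    # Right-hand side of the recurrent rate dynamics.
    return (-r + phi(W @ r + b)) / tau

def simulate(r0, W, b, dt=0.01, steps=2000):
    # Forward-Euler integration; the returned trajectory can be plotted
    # as r1 vs. r2 for a phase plane analysis.
    traj = np.empty((steps + 1, r0.size))
    traj[0] = r0
    r = r0.copy()
    for t in range(steps):
        r = r + dt * rate_network_ode(r, W, b)
        traj[t + 1] = r
    return traj

if __name__ == "__main__":
    # Illustrative symmetric weights with spectral norm 0.6 < 1.
    W = np.array([[0.2, -0.4],
                  [-0.4, 0.2]])
    b = np.array([1.0, 0.5])
    traj = simulate(np.zeros(2), W, b)
    print("approximate fixed point:", traj[-1])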

Faculty

Faculteit der Sociale Wetenschappen