Generalization of an Upper Bound on the Number of Nodes Needed to Achieve Linear Separability
An important issue in neural network research is how to choose the number of nodes and layers so as to solve a classification problem. We provide new intuitions based on earlier results by An et al. (2015) by deriving an upper bound on the number of nodes in networks with two hidden layers such that linear separability can be achieved. Concretely, we show that if the data can be described in terms of N finite sets and the activation function f used is non-constant, increasing and has a left asymptote, we can derive how many nodes are needed to linearly separate these sets. This is an upper bound that depends on the structure of the data, which can be analyzed using an algorithm. For the leaky rectified linear activation function, we prove separately that under some conditions on the slope, the same number of layers and nodes as for the aforementioned activation functions is sufficient. We empirically validate our claims.
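The distinction the abstract draws between the two families of activation functions can be illustrated numerically: a sigmoid is non-constant, increasing, and has a left asymptote (it approaches 0 as its input goes to negative infinity), whereas the leaky rectified linear function is linear with a positive slope on the negative axis and therefore unbounded below, which is why it requires a separate proof. A minimal sketch (function names and the slope value here are illustrative, not taken from the thesis):

```python
import math

def sigmoid(x: float) -> float:
    # Non-constant and increasing, with a left asymptote:
    # sigmoid(x) -> 0 as x -> -infinity.
    return 1.0 / (1.0 + math.exp(-x))

def leaky_relu(x: float, slope: float = 0.01) -> float:
    # Linear with positive slope for x < 0, hence unbounded below:
    # no finite left asymptote exists.
    return x if x >= 0.0 else slope * x

# The sigmoid flattens out on the left...
print(sigmoid(-20.0))        # essentially 0
# ...while the leaky ReLU keeps decreasing without bound.
print(leaky_relu(-1_000_000.0))
```

The absence of a left asymptote means the general upper-bound argument for asymptotic activations does not apply directly to the leaky ReLU, matching the abstract's need for a separate proof under conditions on the slope.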
Faculteit der Sociale Wetenschappen