Model-based Polynomial Function Approximation with Spiking Neural Networks

IEEE Conference on Cognitive Informatics and Cognitive Computing, 2017

Stefan Ulbrich, Terrence C. Stewart, Igor Peric, Arne Roennau, J. Marius Zöllner, Rüdiger Dillmann

Abstract

Artificial neural networks are known to perform function approximation, but with increasingly large non-redundant input spaces the number of required neurons grows drastically. Functions have to be sampled densely, leading to large data sets, which poses problems for applications such as neurorobotics and requires long training times. Furthermore, such networks extrapolate poorly, as they make no model assumptions about the target function. This paper presents a novel spiking neural network architecture for efficient model-based function approximation and prediction, based on the concept of multivariate polynomial function approximation. This approach reduces the number of both training samples and required neurons, provides generalization and extrapolation depending on the chosen basis, and is capable of supervised learning. The network is implemented using the Neural Engineering Framework in the Nengo simulator and is centered around a mechanism for efficiently computing products of many input signals. We present the construction of the compound network, evaluate its performance, and propose a use case for its application.
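
The product mechanism mentioned in the abstract can be illustrated with a minimal Nengo/NEF sketch: a two-dimensional ensemble jointly represents two input signals, and their product is decoded as a nonlinear function of that joint state. This is not the authors' compound polynomial network; the input signals, ensemble sizes, and simulation length below are illustrative assumptions.

import nengo
import numpy as np

# Minimal NEF-style product of two input signals (illustrative sketch,
# not the paper's compound polynomial-approximation network).
model = nengo.Network(label="product of two inputs")
with model:
    stim_a = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # first input signal (assumed)
    stim_b = nengo.Node(lambda t: np.cos(2 * np.pi * t))  # second input signal (assumed)

    # A 2D ensemble jointly represents both inputs; the radius covers the unit square.
    ab = nengo.Ensemble(n_neurons=200, dimensions=2, radius=np.sqrt(2))
    nengo.Connection(stim_a, ab[0])
    nengo.Connection(stim_b, ab[1])

    # The product is decoded from the joint representation as a nonlinear function.
    product = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(ab, product, function=lambda x: x[0] * x[1])

    probe = nengo.Probe(product, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # one second of simulated time; decoded product is in sim.data[probe]

The same decoding idea extends to products of many signals, which is what the paper's architecture organizes efficiently for multivariate polynomial bases.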
