Learning nonlinear functions on vectors: examples and predictions

Tech Report, 2010

Trevor Bekolay

Abstract

One of the underlying assumptions of the Neural Engineering Framework, and of much of theoretical neuroscience, is that neurons in the brain perform functions on signals. Models of brain systems make explicit the functions that a modeller hypothesizes are being performed in the brain; the Neural Engineering Framework defines an analytical method of determining the connection weight matrices between populations that perform those functions in a biologically plausible manner. With the recent implementation of general error-modulated plasticity rules in Nengo, it is now possible to start from a random connection weight matrix and learn a weight matrix that performs an arbitrary function. This technical report confirms this by presenting results of learning several nonlinear functions on vectors of various dimensionalities. It also discusses trends seen in the data, and makes predictions about what we might expect when learning functions on very high-dimensional signals.
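The error-modulated learning described above can be illustrated with a minimal sketch. This is not the Nengo implementation; it assumes a simple rectified-linear population and a PES-style delta update (decoder change proportional to error times activity), with all names and parameters chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of rectified-linear "neurons" encoding a scalar x,
# with randomly chosen encoders, gains, and biases (illustrative values).
n_neurons = 100
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def activities(x):
    """Rectified-linear tuning curves for input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

def target(x):
    """The nonlinear function to learn (squaring, as an example)."""
    return x ** 2

# Start from zero decoders and apply an error-modulated update,
# d += lr * error * a, on randomly sampled inputs.
decoders = np.zeros(n_neurons)
lr = 1e-4
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0)
    a = activities(x)
    error = target(x) - a @ decoders
    decoders += lr * error * a

# Evaluate the learned decoding on a test grid.
xs = np.linspace(-1.0, 1.0, 101)
estimate = np.array([activities(x) @ decoders for x in xs])
rmse = np.sqrt(np.mean((estimate - target(xs)) ** 2))
```

Over many samples this delta rule drives the decoders toward the least-squares solution the NEF would compute analytically; the same update applied to full connection weights (rather than decoders) is the biologically plausible form discussed in the report.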

CTN Tech Report

Month: 12/2010
Institution: Centre for Theoretical Neuroscience
Address: Waterloo, ON
Report number: CTN-TR-20101217-010
