Guest talk on BigBird, a sparse attention mechanism

We had an amazing talk by Dr. Guru Guruganesh from the Google Research team on their recent NeurIPS paper: "Big Bird: Transformers for Longer Sequences".

It was a great opportunity to learn about the novel optimization strategies and techniques they are exploring to train and scale up transformer models, making them bigger, better, and more efficient.


Summer School Lectures

Unfortunately the Nengo Summer School had to be cancelled this year due to COVID-19, but that doesn't mean you can't get a taste of what you'll learn at Brain Camp!

Chris and Terry have been hard at work recording a set of lectures covering the topics typically presented in the first few days of the summer school. The videos are freely available to view here. While no substitute for the hands-on experience of the actual summer school, these lectures and tutorials give an excellent introduction to the Neural Engineering Framework and how to develop spiking neural network models using Nengo.


Society for Neuroscience 2019

Congratulations to Pete and Nat on presenting their work at SfN this year! Both gave excellent poster presentations!

Pete explored the effects of different drugs on his rodent model of fear conditioning. He developed a spiking neural model of the different nuclei of the amygdala with enough detail to allow biophysical manipulations. "A spiking neuron model of pharmacologically-biased fear conditioning in the amygdala" [Abstract]

Nat developed a spiking neural model of adaptation in motor control. It extends the REACH model with a multimodal Kalman filter and replicates data from both human and non-human primates. "Spiking neuron model of motor control with adaptation to visuomotor rotation" [Abstract]


Representing Time and Space

Lots of exciting new research to share!

First up, computing functions across time in a spiking neural network. Aaron has developed a way to represent windows of history in a population of spiking neurons, which allows the computation of accurate delays and higher-order synapses while maintaining desired lower-level properties. This work also gives a novel explanation of time cells found in a variety of tasks involving delays. Check out the first paper here!

Since this inaugural publication, the network structure that provides this window of history representation has been named a Legendre Memory Unit (LMU).
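
To make the mechanics concrete, here is a minimal NumPy sketch (not Aaron's implementation) of the linear system at the heart of the LMU. The A and B matrices follow the definitions in the paper; the memory order, window length, Euler integration, and 1 Hz test signal are illustrative choices.

    import numpy as np

    def lmu_matrices(order):
        """(A, B) of the continuous-time Legendre delay system: theta * dm/dt = A m + B u."""
        A = np.zeros((order, order))
        B = np.zeros(order)
        for i in range(order):
            B[i] = (2 * i + 1) * (-1) ** i
            for j in range(order):
                A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
        return A, B

    order, theta, dt = 6, 0.5, 1e-3      # memory order, window length (s), time step (s)
    A, B = lmu_matrices(order)

    t = np.arange(0, 4, dt)
    u = np.sin(2 * np.pi * t)            # 1 Hz test input
    m = np.zeros(order)
    delayed = np.zeros_like(u)
    for k in range(len(t)):
        m = m + (dt / theta) * (A @ m + B * u[k])   # simple Euler step
        # Every shifted Legendre polynomial equals 1 at the far edge of the window,
        # so summing the state approximately reads out the input delayed by theta.
        delayed[k] = m.sum()

    # Once the window fills, `delayed` roughly tracks sin(2 * pi * (t - theta)).

In the spiking models, this state is what the population of neurons represents, with the dynamics mapped onto recurrent connections via the NEF.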

Beyond modelling biological neural systems, LMUs have proven useful in the world of Deep Learning as a memory cell for Recurrent Neural Networks (RNNs). A recent paper accepted to NeurIPS compares LMUs to the commonly used LSTM and shows significant improvement across a variety of tasks. The full paper can be found here.

Another recent area of work focuses on representations of space. A common way to represent a discrete position in a Semantic Pointer is to bind a displacement vector with itself some number of times to indicate its order in a sequence. This notion of self-binding can be extended to include fractional binding, allowing representations of continuous space with semantic pointers. Vectors constructed in this manner for use in the SPA are called Spatial Semantic Pointers (SSPs).
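
As a rough illustration of the idea (the dimensionality, helper names, and coordinates below are our own arbitrary choices), fractional binding can be sketched in NumPy by working in the Fourier domain: binding is circular convolution, so binding a unitary vector with itself x times amounts to raising its Fourier coefficients to the power x.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    d = 256                                  # pointer dimensionality (arbitrary)

    def make_unitary(d, rng):
        """Random unitary vector: every Fourier coefficient has magnitude 1."""
        v = rng.standard_normal(d)
        fv = np.fft.fft(v)
        return np.real(np.fft.ifft(fv / np.abs(fv)))

    def bind(a, b):
        """Circular convolution, the binding operation for semantic pointers."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def power(X, x):
        """Fractional binding: X bound with itself x times, for any real x."""
        return np.real(np.fft.ifft(np.fft.fft(X) ** x))

    X_axis, Y_axis = make_unitary(d, rng), make_unitary(d, rng)

    def ssp(x, y):
        """Spatial Semantic Pointer for the continuous 2-D location (x, y)."""
        return bind(power(X_axis, x), power(Y_axis, y))

    # Similarity peaks at the encoded location and falls off with distance.
    p = ssp(1.3, -0.4)
    print(np.dot(p, ssp(1.3, -0.4)))         # close to 1
    print(np.dot(p, ssp(2.5, 1.0)))          # much smaller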

This representation allows many spatial operations to be performed efficiently in a spiking neural network, including representing the location of collections of objects, performing spatial queries on a memory, and shifting the locations of objects in memory. The first papers describing SSPs and their applications were presented at CogSci this year by Brent and Thomas.
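
Continuing the sketch above (it reuses rng, d, make_unitary, bind, power, and ssp), the snippet below stores two objects at 2-D locations in a single memory vector, queries where one of them is, and shifts the whole scene by binding with a displacement SSP; the object names and coordinates are, again, arbitrary.

    # Approximate inverse for binding: conjugate the Fourier coefficients.
    def unbind(memory, key):
        return np.real(np.fft.ifft(np.fft.fft(memory) * np.conj(np.fft.fft(key))))

    APPLE, PEAR = make_unitary(d, rng), make_unitary(d, rng)   # object identities

    # One vector holding two object/location pairs.
    memory = bind(APPLE, ssp(1.0, 2.0)) + bind(PEAR, ssp(-2.0, 0.5))

    # Spatial query: where is the apple? Unbinding gives a noisy SSP whose
    # similarity peaks at the apple's location.
    where_apple = unbind(memory, APPLE)
    print(np.dot(where_apple, ssp(1.0, 2.0)))    # large
    print(np.dot(where_apple, ssp(-2.0, 0.5)))   # much smaller

    # Shifting every object by (0.5, -1.0) is a single binding with a displacement SSP.
    shifted = bind(memory, ssp(0.5, -1.0))
    print(np.dot(unbind(shifted, APPLE), ssp(1.5, 1.0)))   # apple is now near (1.5, 1.0)

In the papers, these operations are carried out by spiking networks computing the circular convolutions, rather than by NumPy.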