• Spaun 2.0: Extending the World’s Largest Functional Brain Model (PhD Thesis, 2018)

Xuan Choo

Abstract: Building large-scale brain models is one method used by theoretical neuroscientists to understand the way the human brain functions. Researchers typically use either a bottom-up approach, which focuses on the detailed modelling of various biological properties of the brain and places less importance on reproducing functional behaviour, or a top-down approach, which generally aims to reproduce the behaviour observed in real cognitive agents but typically sacrifices adherence to constraints imposed by the neurobiology. The focus of this thesis is Spaun, a large-scale brain model constructed using a combination of the bottom-up and top-down approaches to brain modelling. Spaun is currently the world’s largest functional brain model, capable of performing eight distinct cognitive tasks ranging from digit recognition to inductive reasoning. The thesis is organized to discuss three aspects of the Spaun model. First, it describes the original Spaun model, and explores how a top-down approach, known as the Semantic Pointer Architecture (SPA), has been combined with a bottom-up approach, known as the Neural Engineering Framework (NEF), to integrate six existing cognitive models into Spaun, a unified cognitive model. Next, the thesis identifies some of the concerns with the original Spaun model and shows the modifications made to the network to remedy these issues. It also characterizes how the Spaun model was re-organized and re-implemented (to include the aforementioned modifications) as the Spaun 2.0 model. As part of the discussion of the Spaun 2.0 model, task performance results are presented that compare the original Spaun model and the re-implemented Spaun 2.0 model, demonstrating that the modifications have improved its accuracy on the working memory task and the two induction tasks. Finally, three extensions to Spaun 2.0 are presented. 
These extensions take advantage of the re-organized Spaun model, giving Spaun 2.0 new capabilities: a motor system capable of adapting to unknown force fields applied to its arm; a visual system capable of processing 256×256 full-colour images; and the ability to follow general instructions. The Spaun model and architecture presented in this thesis demonstrate that by using the SPA and the NEF, it is not only possible to construct functional large-scale brain models, but to do so in a manner that supports complex extensions to the model. The final Spaun 2.0 model consists of approximately 6.6 million neurons, can perform 12 cognitive tasks, and has been demonstrated to reproduce behavioural and neurological data observed in natural cognitive agents.

• Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells (Neural Computation, 2018)

Abstract: Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.

• Spiking Deep Neural Networks: Engineered and Biological Approaches to Object Recognition (PhD Thesis, 2018)

Eric Hunsberger

Abstract: Modern machine learning models are beginning to rival human performance on some realistic object recognition tasks, but we still lack a full understanding of how the human brain solves this same problem. This thesis combines knowledge from machine learning and computational neuroscience to create models of human object recognition that are increasingly realistic both in their treatment of low-level neural mechanisms and in their reproduction of high-level human behaviour. First, I present extensions to the Neural Engineering Framework to make its preferred type of model—the “fixed-encoding” network—more accurate for object recognition tasks. These extensions include better distributions—such as Gabor filters—for the encoding weights, and better loss functions—namely weighted squared loss, softmax loss, and hinge loss—to solve for decoding weights. Second, I introduce increased biological realism into deep convolutional neural networks trained with backpropagation, by training them to run using spiking leaky integrate-and-fire (LIF) neurons. These models have been successful in machine learning, and I am able to convert them to spiking networks while retaining similar levels of performance. I present a novel method to smooth the LIF rate response function in order to avoid the common problems associated with differentiating spiking neurons in general and LIF neurons in particular. I also derive a number of novel characterizations of spiking variability, and use these to train spiking networks to be more robust to this variability. Finally, to address the problems with implementing backpropagation in a biological system, I train spiking deep neural networks using the more biological Feedback Alignment algorithm. I examine this algorithm in depth, including many variations on the core algorithm, methods to train using non-differentiable spiking neurons, and some of the limitations of the algorithm. 
Using these findings, I construct a spiking model that learns online in a biologically realistic manner. The models developed in this thesis help to explain both how spiking neurons in the brain work together to allow us to recognize complex objects, and how the brain may learn this behaviour. Their spiking nature allows them to be implemented on highly efficient neuromorphic hardware, opening the door to object recognition on energy-limited devices such as cell phones and mobile robots.
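As a rough illustration of the rate-smoothing idea mentioned above (a minimal sketch in plain NumPy, with illustrative parameter values; `gamma` is our own name for the smoothing parameter, not necessarily the thesis's notation), the hard threshold in the LIF rate curve can be replaced with a softplus so the response is differentiable everywhere:

```python
import numpy as np

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    """Standard LIF rate response: zero below threshold (j <= 1),
    with a non-differentiable kink at j = 1."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    above = j > 1
    out[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[above] - 1.0)))
    return out

def soft_lif_rate(j, tau_rc=0.02, tau_ref=0.002, gamma=0.05):
    """Smoothed LIF rate: max(j - 1, 0) is replaced by the softplus
    gamma * log(1 + exp((j - 1) / gamma)), which is differentiable
    everywhere and approaches the hard LIF curve as gamma -> 0."""
    j = np.asarray(j, dtype=float)
    rho = gamma * np.log1p(np.exp((j - 1.0) / gamma))
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / rho))
```

Far above threshold the two curves agree closely, while near threshold the smoothed curve avoids the kink that makes differentiating spiking neurons problematic during backpropagation.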

• Using neural networks to generate inferential roles for natural language (Frontiers in Psychology, 2018)

Abstract: Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.

• Towards Provably Moral AI Agents in Bottom-up Learning Frameworks (AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2018)

Nolan P. Shaw, Andreas Stöckel, Ryan Orr, Thomas F. Lidbetter, Robin Cohen

Abstract: We examine moral machine decision-making, inspired by a central question posed by Rossi regarding moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used for embedding morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation held to the same standards as a human agent, removing the demand that ethical behavior is always achieved. We introduce four key meta-qualities desired for our moral standards, and then proceed to clarify how we can prove that an agent will correctly learn to perform moral actions given a set of samples within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function to achieve stability. We further explain a valuable intrinsic consistency check made possible through the derivation of logical statements from the machine learning model. In all, this work proposes an approach for building ethical AI systems, from the perspective of artificial intelligence, and sheds important light on understanding how much learning is required for an intelligent agent to behave morally with negligible error.

• Biologically Plausible Cortical Hierarchical-Classifier Circuit Extensions in Spiking Neurons (Master's Thesis, 2018)

Peter Suma

Abstract: Hierarchical categorization interleaved with sequence recognition of incoming stimuli in the mammalian brain is theorized to be performed by circuits composed of the thalamus and the six-layer cortex. Using these circuits, the cortex is thought to learn a ‘brain grammar’ composed of recursive sequences of categories. A thalamo-cortical, hierarchical classification and sequence-learning “Core” circuit, implemented as a linear matrix simulation, was published by Rodriguez, Whitson & Granger in 2004. In the brain, these functions are implemented by cortical and thalamic circuits composed of recurrently connected, spiking neurons. The Neural Engineering Framework (NEF) (Eliasmith & Anderson, 2003) allows for the construction of large-scale, biologically plausible neural networks. NEF models of the basal ganglia and the thalamus exist, but to the best of our knowledge there is no integrated, spiking-neuron, cortical-thalamic Core network model. We construct a more biologically plausible version of the hierarchical-classification function of the Core circuit using leaky integrate-and-fire neurons, which performs progressive visual classification of static image sequences, relying on neural activity levels to trigger the progressive classification of the stimulus. We proceed by implementing a recurrent NEF model of the cortical-thalamic Core circuit and then test the resulting model on the hierarchical categorization of images.

• Nonlinear synaptic interaction as a computational resource in the Neural Engineering Framework (Cosyne Abstracts, 2018)

Abstract: Nonlinear interaction in the dendritic tree is known to be an important computational resource in biological neurons. Yet, high-level neural compilers – such as the Neural Engineering Framework (NEF), or the predictive coding method published by Denève et al. in 2013 – tend not to include conductance-based nonlinear synaptic interactions in their models, and so do not exploit these interactions systematically. In this study, we extend the NEF to include synaptic computation of nonlinear multivariate functions, such as controlled shunting, multiplication, and the Euclidean norm. We present a theoretical framework that provides sufficient conditions under which nonlinear synaptic interaction yields a similar precision compared to traditional NEF methods, while reducing the number of layers, neurons, and latency in the network. The proposed method lends itself to increasing the computational power of neuromorphic hardware systems and improves the NEF's biological plausibility by mitigating one of its long-standing limitations, namely its reliance on linear, current-based synapses. We perform a series of numerical experiments with a conductance-based two-compartment LIF neuron model. Preliminary results show that nonlinear interactions in conductance-based synapses are sufficient to compute a wide variety of nonlinear functions with performance competitive to using an additional layer of neurons as a nonlinearity.

• Trajectory generation using a spiking neuron implementation of dynamic movement primitives (27th Annual Meeting for the Society for the Neural Control of Movement, 2017)

Abstract: We present a trajectory-generating circuit, built on efficient function-representation coding in a spiking neural network, that can dynamically generate multiple complex trajectories from a single network. Integrating multiple trajectories within a single network allows us to explore the transitions between movements. We suggest that this kind of network is a possible mechanism for efficiently storing a wide array of movement features in the cortex, and compare our results to experimental data.
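For readers unfamiliar with dynamic movement primitives, the underlying idea can be sketched in plain NumPy (a hedged toy illustration with our own parameter choices, not the spiking implementation in the abstract): a second-order point attractor pulls the state toward a goal, while a forcing function, gated by a decaying canonical state, shapes the trajectory along the way.

```python
import numpy as np

def run_dmp(goal, y0=0.0, forcing=lambda s: 0.0, tau=1.0,
            alpha=25.0, beta=6.25, alpha_s=4.0, dt=0.001, t_final=2.0):
    """Euler-integrate a one-dimensional dynamic movement primitive.
    The point attractor pulls y toward the goal; the forcing function
    (learned from a demonstration in a full DMP) shapes the path and
    fades out as the canonical state s decays to zero."""
    y, v, s = y0, 0.0, 1.0
    trajectory = []
    for _ in range(int(round(t_final / dt))):
        f = forcing(s) * s * (goal - y0)  # forcing term, gated by s
        v += dt * (alpha * (beta * (goal - y) - v) + f) / tau
        y += dt * v / tau
        s += dt * (-alpha_s * s) / tau
        trajectory.append(y)
    return np.array(trajectory)

traj = run_dmp(goal=1.0)
```

With the forcing function set to zero, the system simply converges smoothly to the goal; supplying different learned forcing functions from the same network is what allows multiple trajectories to be stored and blended.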

• Inferential Role Semantics for Natural Language (Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017)

Abstract: Cognitive models have long been used to study linguistic phenomena spanning the domains of phonology, syntax, and semantics. Of these domains, semantics is unique in that there is little clarity concerning what a model ought to do to provide an account of how the meanings of complex linguistic expressions are understood. To address this problem, we introduce a neural model that is trained to generate sentences that follow from an input sentence. The model is trained using the Stanford Natural Language Inference dataset, and to evaluate its performance, we report entailment prediction accuracies on test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both ground-truth and model-generated entailments for a random selection of test sentences. Taken together, these analyses indicate that the model accounts for important inferential relationships amongst linguistic expressions.

• A Population-Level Approach to Temperature Robustness in Neuromorphic Systems (IEEE International Symposium on Circuits and Systems (ISCAS), 2017)

Eric Kauderer-Abrams, Andrew Gilbert, Aaron R. Voelker, Ben V. Benjamin, Terrence C. Stewart, Kwabena Boahen

Abstract: We present a novel approach to achieving temperature-robust behavior in neuromorphic systems that operates at the population level, trading an increase in silicon-neuron count for robustness across temperature. Our silicon neurons' tuning curves were highly sensitive to temperature, which could be decoded from a 400-neuron population with a precision of 0.07°C. We overcame this temperature-sensitivity by combining methods from robust optimization theory with the Neural Engineering Framework. We developed two algorithms and compared their temperature-robustness across a range of 2°C by decoding one period of a sinusoid-like function from populations with 25 to 800 neurons. We find that 560 neurons are required to achieve the same precision across this temperature range as 35 neurons achieved at a single temperature.

• A spiking neural network of state transition probabilities in model-based reinforcement learning (Master's Thesis, 2017)

Mariah Martin Shein

Abstract: The development of the field of reinforcement learning was based on psychological studies of the instrumental conditioning of humans and other animals. Recently, reinforcement learning algorithms have been applied to neuroscience to help characterize neural activity and animal behaviour in instrumental conditioning tasks. A specific example is the hybrid learner developed to match human behaviour on a two-stage decision task. This hybrid learner is composed of a model-free and a model-based system. The model presented in this thesis is an implementation of that model-based system where the state transition probabilities and Q-value calculations use biologically plausible spiking neurons. Two variants of the model demonstrate the behaviour when the state transition probabilities are encoded in the network at the beginning of the task, and when these probabilities are learned over the course of the task. Various parameters that affect the behaviour of the model are explored, and ranges of these parameters that produce characteristically model-based behaviour are found. This work provides an important first step toward understanding how a model-based system in the human brain could be implemented, and how this system contributes to human behaviour.

• A neural model of hierarchical reinforcement learning (PLoS ONE, 2017)

Abstract: We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.

• Inferential Role Semantics for Natural Language (PhD Thesis, 2017)

Peter Blouw

Abstract: The most general goal of semantic theory is to explain facts about language use. In keeping with this goal, I introduce a framework for thinking about linguistic expressions in terms of (a) the inferences they license, (b) the behavioral predictions that their uses thereby sustain, and (c) the affordances that they provide to language users in virtue of these inferential and predictive involvements. Within this framework, linguistic expressions acquire meanings by regulating social practices that involve “intentional interpretation,” wherein people explain and predict one another’s behavior through linguistically specified mental state attributions. Developing a theory of meaning therefore requires formalizing the inferential roles that determine how linguistic expressions license predictions in the context of intentional interpretation. Accordingly, the view I develop is an inferential role semantics for natural language. To describe this semantics, I take advantage of recently developed techniques in the field of natural language processing. I introduce a model that assigns inferential roles to arbitrary linguistic expressions by learning from examples of how sentences are distributed as premises and conclusions in a space of possible inferences. I then empirically evaluate the model’s ability to generate accurate entailments for novel sentences not used as training examples. I argue that this model takes a small but important step towards codifying the meanings of the expressions it manipulates. Next, I examine the theoretical implications of this work with respect to debates about the compositionality of language, the relationship between language and cognition, and the relationship between language and the world. 
With respect to compositionality, I argue that the debate is really about generalization in language use, and that the required sort of generalization can be achieved by “interpolating” between familiar examples of correct inferential transitions. With respect to the relationship between thought and language, I argue that it is a mistake to try to derive a theory of natural language semantics from a prior theory of mental representation because theories of mental representation invoke the sort of intentional interpretation at play in language use from the get-go. With respect to the relationship between language and the world, I argue that questions about truth conditions and reference relations are best thought of in terms of questions about the norms governing language use. These norms, in turn, are best characterized in primarily inferential terms. I conclude with an all-things-considered evaluation of my theory that demonstrates how it overcomes a number of challenges associated with semantic theories that take inference, rather than reference, as their starting point.

• Efficiently sampling vectors and coordinates from the n-sphere and n-ball (Tech Report, 2017)

Abstract: We provide a short proof that the uniform distribution of points for the n-ball is equivalent to the uniform distribution of points for the (n + 1)-sphere projected onto n dimensions. This implies the surprising result that one may uniformly sample the n-ball by instead uniformly sampling the (n + 1)-sphere and then arbitrarily discarding two coordinates. Consequently, any procedure for sampling coordinates from the uniform (n + 1)-sphere may be used to sample coordinates from the uniform n-ball without any modification. For purposes of the Semantic Pointer Architecture (SPA), these insights yield an efficient and novel procedure for sampling the dot-product of vectors—sampled from the uniform ball—with unit-length encoding vectors.
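The procedure described in the abstract takes only a few lines to implement (a minimal sketch, not the report's code): draw a Gaussian vector in n + 2 dimensions, normalize it onto the (n + 1)-sphere, and discard two coordinates; the remaining n coordinates are uniformly distributed in the n-ball.

```python
import numpy as np

def sample_ball(n, m, rng=None):
    """Draw m points uniformly from the n-ball by sampling the
    (n+1)-sphere (embedded in R^(n+2)) and dropping two coordinates."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.standard_normal((m, n + 2))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform on the sphere
    return x[:, :n]                                # uniform in the n-ball
```

A quick sanity check of uniformity: for points uniform in the n-ball, the fraction of samples within radius r should be r^n.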

• An adaptive spiking neural controller for flapping insect-scale robots (IEEE Symposium Series on Computational Intelligence (SSCI), 2017)

Taylor S Clawson, Terrence C Stewart, Chris Eliasmith, Silvia Ferrari

Abstract: Insect-scale flapping robots are challenging to stabilize due to their fast dynamics, unmodeled parameter variations, and the periodic nature of their control input. Effective controller designs must tolerate wing asymmetries that occur due to manufacturing errors and react quickly to stabilize the fast unstable modes of the system. Additionally, they should have minimal power requirements to fit within the tightly constrained power budget associated with insect-scale flying robots. Adaptive control methods are capable of learning online to account for uncertain physical parameters and other model uncertainties, and can thus improve system performance over time. In this work, a spiking neural network is used to stabilize hovering of an insect-scale robot in the presence of unknown parameter variations. The controller is shown to adapt rapidly during a simulated flight test and requires a total of only 800 neurons, allowing it to be implemented with minimal power requirements.

• Automatic Optimization of the Computation Graph in the Nengo Neural Network Simulator (Frontiers in Neuroinformatics, 2017)

Abstract: One critical factor limiting the size of neural cognitive models is the time required to simulate such models. To reduce simulation time, specialized hardware is often used. However, such hardware can be costly, not readily available, or require specialized software implementations that are difficult to maintain. Here, we present an algorithm that optimizes the computational graph of the Nengo neural network simulator, allowing simulations to run more quickly on commodity hardware. This is achieved by merging identical operations into single operations and restructuring the accessed data in larger blocks of sequential memory. In this way, a speed-up of up to 6.8 times is obtained. While this does not beat the specialized OpenCL implementation of Nengo, this optimization is available on any platform that can run Python. In contrast, the OpenCL implementation supports fewer platforms and can be difficult to install.
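The core idea, merging many identical small operations into one operation over a contiguous block of memory, can be illustrated in NumPy (a toy sketch under our own assumptions, not the optimizer's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Many identical small operations, each on its own scattered array
mats = [rng.standard_normal((4, 4)) for _ in range(100)]
vecs = [rng.standard_normal(4) for _ in range(100)]
slow = [A @ x for A, x in zip(mats, vecs)]  # 100 separate Python-level calls

# Merged form: stack the operands into contiguous memory blocks and
# issue a single batched operation, analogous to merging identical ops
A = np.stack(mats)                    # shape (100, 4, 4), one memory block
X = np.stack(vecs)                    # shape (100, 4)
fast = np.einsum('nij,nj->ni', A, X)  # one call replaces 100
```

The merged form computes exactly the same values, but trades per-operation interpreter and dispatch overhead for one large, cache-friendly operation.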

• A Psychologically-Motivated Model of Opinion Change with Applications to American Politics (Journal of Artificial Societies and Social Simulation, 2017)

Peter Duggins

Abstract: Agent-based models are versatile tools for studying how societal opinion change, including political polarization and cultural diffusion, emerges from individual behavior. This study expands agents’ psychological realism using empirically-motivated rules governing interpersonal influence, commitment to previous beliefs, and conformity in social contexts. Computational experiments establish that these extensions produce three novel results: (a) sustained “strong” diversity of opinions across society, (b) opinion subcultures, and (c) pluralistic ignorance. These phenomena arise from a combination of agents’ intolerance, susceptibility and conformity, with extremist agents and social networks playing important roles. The distribution and dynamics of simulated opinions reproduce two empirical datasets on Americans' political opinions.

• Effects of Guanfacine and Phenylephrine on a Spiking Neuron Model of Working Memory (Topics in Cognitive Science, 2017)

Abstract: We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT.

• Incorporating Biologically Realistic Neuron Models into the NEF (Master's Thesis, 2017)

Peter Duggins

Abstract: Theoretical neuroscience is fundamentally concerned with the relationship between biological mechanisms, information processing, and cognitive abilities, yet current models often lack either biophysical realism or cognitive functionality. This thesis aims to partially fill this gap by incorporating geometrically and electrophysiologically accurate models of individual neurons into the Neural Engineering Framework (NEF). After discussing the relationship between biologically complex neurons and the core principles/assumptions of the NEF, a neural model of working memory is introduced to demonstrate the NEF's existing capacity to capture biological and cognitive features. This model successfully performs the delayed response task and provides a medium for simulating a mental disorder (ADHD) and its pharmacological treatments. Two methods of integrating more biologically sophisticated NEURON models into the NEF are subsequently explored, and their ability to implement networks of varying complexity is assessed: the trained synaptic weights do realize the core NEF principles, though several errors remain unresolved. Returning to the working memory model, it is shown that bioneurons can perform the requisite computations in context, and that simulating the biophysical effects of pharmacological compounds produces results consistent with electrophysiological and behavioral data from monkeys.

• A Spiking Neural Bayesian Model of Life Span Inference (Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017)

Abstract: In this paper, we present a spiking neural model of life span inference. Through this model, we explore the biological plausibility of performing Bayesian computations in the brain. Specifically, we address the issue of representing probability distributions using neural circuits and combining them in meaningful ways to perform inference. We show that applying these methods to the life span inference task matches human performance on this task better than an ideal Bayesian model due to the use of neuron tuning curves. We also describe potential ways in which humans might be generating the priors needed for this inference. This provides an initial step towards better understanding how Bayesian computations may be implemented in a biologically plausible neural network.

• A Spiking Neuron Model of Word Associations for the Remote Associates Test (Frontiers in Psychology, 2017)

Ivana Kajić, Jan Gosmann, Terrence C. Stewart, Thomas Wennekers, Chris Eliasmith

Abstract: Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT) is a task, originally used in creativity research, that is heavily dependent on generating associations in a search for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e. non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that it is possible for spiking neurons to be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes that are involved in solving the RAT: one process generates potential responses and a second process filters the responses.

• Feature-Based Resource Allocation for Real-Time Stereo Disparity Estimation (IEEE Access, 2017)

Eric Hunsberger, Victor Reyes Osorio, Jeff Orchard, Bryan P Tripp

Abstract: The most accurate stereo disparity algorithms take dozens or hundreds of seconds to process a single frame. This timescale is impractical for many applications. However, high accuracy is often not needed throughout the scene. Here, we investigate a “foveation” approach (in which some parts of an image are processed more intensively than others) in the context of modern stereo algorithms. We consider two scenarios: disparity estimation with a convolutional network in a robotic grasping context, and disparity estimation with a Markov random field in a navigation context. In each case, combining fast and slow methods in different parts of the scene improves frame rates while maintaining accuracy in the most task-relevant areas. We also demonstrate a simple and broadly applicable utility function for choosing foveal regions, which combines image and task information. Finally, we characterize the benefits of defining multiple individually placed small foveae per image, rather than a single large fovea. We find little benefit, supporting the use of hardware foveae of fixed size and shape. More generally, our results reaffirm that foveation is a practical way to combine speed with task-relevant accuracy. Foveae are present in the most complex biological vision systems, suggesting that they may become more important in artificial vision systems, as these systems become more complex.

• Deep learning in spiking LIF neurons (Cosyne Abstracts, 2017)

Abstract: The "backprop" algorithm has led to incredible successes for machines on object recognition tasks (among others), but how similar types of supervised learning may occur in the brain remains unclear. We present a fully spiking, biologically plausible supervised learning algorithm that extends the Feedback Alignment (FA) algorithm to run in spiking LIF neurons. This entirely spiking learning algorithm is a novel hypothesis about how biological systems may perform deep supervised learning. It addresses a number of the key problems with the biological plausibility of the backprop algorithm: 1) It does not use the transpose weight matrix to propagate error backwards, but rather uses a random weight matrix. 2) It does not use the derivative of the hidden unit activation function, but rather uses a function of the hidden neurons' filtered spiking outputs. We test this algorithm on a simple input-output function learning task with a two-hidden-layer deep network. The algorithm is able to learn at both hidden layers, and performs much better than shallow learning. Future work includes extending this algorithm to more challenging datasets, and comparing it with other candidate algorithms for more biologically plausible learning.
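A minimal rate-based sketch of the Feedback Alignment idea described above: error reaches the hidden layers through fixed random matrices rather than the transposed forward weights, and a function of unit activity stands in for the activation derivative. The spiking-LIF machinery of the abstract is omitted, and all layer sizes, weight scales, and learning rates here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes for a toy two-hidden-layer network (hypothetical values).
n_in, n_h1, n_h2, n_out = 4, 20, 20, 2

# Forward weights (learned) and fixed random feedback matrices (never W.T).
W1 = rng.normal(0.0, 0.5, (n_h1, n_in))
W2 = rng.normal(0.0, 0.5, (n_h2, n_h1))
W3 = rng.normal(0.0, 0.5, (n_out, n_h2))
B2 = rng.normal(0.0, 0.5, (n_h2, n_out))   # replaces W3.T in the backward pass
B1 = rng.normal(0.0, 0.5, (n_h1, n_h2))    # replaces W2.T in the backward pass

def f(x):
    """Rate nonlinearity standing in for the LIF neurons' filtered spikes."""
    return np.maximum(0.0, x)

def f_gate(a):
    """Surrogate 'derivative': a function of the units' activity alone."""
    return (a > 0).astype(float)

def fa_step(x, target, lr=0.002):
    """One Feedback Alignment update on a single example; returns squared error."""
    global W1, W2, W3
    h1 = f(W1 @ x)
    h2 = f(W2 @ h1)
    y = W3 @ h2
    e = y - target
    d2 = (B2 @ e) * f_gate(h2)     # error projected via random B, not W.T
    d1 = (B1 @ d2) * f_gate(h1)
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
    return float(e @ e)
```

Repeatedly calling `fa_step` on training examples drives the error down at all layers even though no weight transport occurs, which is the core of the biological-plausibility argument.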

• Methods for applying the Neural Engineering Framework to neuromorphic hardware (arXiv preprint arXiv:1708.08133, 2017)

Abstract: We review our current software tools and theoretical methods for applying the Neural Engineering Framework to state-of-the-art neuromorphic hardware. These methods can be used to implement linear and nonlinear dynamical systems that exploit axonal transmission time-delays, and to fully account for nonideal mixed-analog-digital synapses that exhibit higher-order dynamics with heterogeneous time-constants. This summarizes earlier versions of these methods that have been discussed in a more biological context (Voelker & Eliasmith, 2017) or regarding a specific neuromorphic architecture (Voelker et al., 2017).

• Extending the Neural Engineering Framework for Nonideal Silicon Synapses (IEEE International Symposium on Circuits and Systems (ISCAS), 2017)

Aaron R. Voelker, Ben V. Benjamin, Terrence C. Stewart, Kwabena Boahen, Chris Eliasmith

Abstract: The Neural Engineering Framework (NEF) is a theory for mapping computations onto biologically plausible networks of spiking neurons. This theory has been applied to a number of neuromorphic chips. However, within both silicon and real biological systems, synapses exhibit higher-order dynamics and heterogeneity. To date, the NEF has not explicitly addressed how to account for either feature. Here, we analytically extend the NEF to directly harness the dynamics provided by heterogeneous mixed-analog-digital synapses. This theory is successfully validated by simulating two fundamental dynamical systems in Nengo using circuit models validated in SPICE. Thus, our work reveals the potential to engineer robust neuromorphic systems with well-defined high-level behaviour that harness the low-level heterogeneous properties of their physical primitives with millisecond resolution.

• A Spiking Independent Accumulator Model for Winner-Take-All Computation (Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017)

Abstract: Winner-take-all (WTA) mechanisms are an important component of many cognitive models. For example, they are often used to decide between multiple choices or to selectively direct attention. Here we compare two biologically plausible, spiking neural WTA mechanisms. We first provide a novel spiking implementation of the well-known leaky, competing accumulator (LCA) model, by mapping the dynamics onto a population-level representation. We then propose a two-layer spiking independent accumulator (IA) model, and compare its performance against the LCA network on a variety of WTA benchmarks. Our findings suggest that while the LCA network can rapidly adapt to new winners, the IA network is better suited for stable decision making in the presence of noise.
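The LCA dynamics that the paper maps onto a population-level spiking representation follow Usher and McClelland's formulation; as a point of reference, a minimal non-spiking simulation (the parameter values are illustrative, not those of the paper):

```python
import numpy as np

def lca(rho, k=1.0, beta=1.0, tau=0.1, dt=0.001, t_final=2.0):
    """Leaky, competing accumulator (non-spiking reference dynamics):
    dx_i/dt = (rho_i - k*x_i - beta * sum_{j != i} x_j) / tau, with x_i >= 0.
    """
    rho = np.asarray(rho, dtype=float)
    x = np.zeros_like(rho)
    for _ in range(int(t_final / dt)):
        inhibition = beta * (x.sum() - x)   # lateral inhibition from the others
        x = np.maximum(0.0, x + dt * (rho - k * x - inhibition) / tau)
    return x

state = lca([1.0, 0.9, 0.5])
# The unit with the largest input suppresses the others (winner-take-all).
```

With equal leak and inhibition (k = beta = 1), the winner settles near its input value while the losing accumulators are driven to zero.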

• Analysis of oscillatory weight changes from online learning with filtered spiking feedback (Tech Report, 2017)

Abstract: Prescribed Error Sensitivity (PES) is a biologically plausible supervised learning rule that is frequently used with the Neural Engineering Framework (NEF). PES modifies the connection weights between populations of spiking neurons to minimize an error signal. Continuing the work of Voelker (2015), we solve for the dynamics of PES, while filtering the error with an arbitrary linear synapse model. For the most common case of a lowpass filter, the continuous-time weight changes are characterized by a second-order bandpass filter with frequency $\omega = \sqrt{\tau^{-1} \kappa \|\mathbf{a}\|^2}$ and bandwidth $Q = \sqrt{\tau \kappa \|\mathbf{a}\|^2}$, where $\tau$ is the exponential time constant, $\kappa$ is the learning rate, and $\mathbf{a}$ is the activity vector. Therefore, the error converges to zero, yet oscillates if and only if $\tau \kappa \|\mathbf{a}\|^2 > \frac{1}{4}$. This provides a heuristic for setting $\kappa$ based on the synaptic $\tau$, and a method for engineering remarkably accurate decaying oscillators using only a single spiking leaky integrate-and-fire neuron.
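The abstract's expressions translate directly into a small helper for choosing a learning rate; the functions below simply restate the quoted formulas, and the parameter values in the example are illustrative:

```python
import math

def pes_filter_params(tau, kappa, a_norm_sq):
    """Bandpass parameters of the PES weight dynamics with a lowpass synapse:
    omega = sqrt(kappa * ||a||^2 / tau), Q = sqrt(tau * kappa * ||a||^2);
    the error oscillates iff tau * kappa * ||a||^2 > 1/4."""
    omega = math.sqrt(kappa * a_norm_sq / tau)
    Q = math.sqrt(tau * kappa * a_norm_sq)
    oscillates = tau * kappa * a_norm_sq > 0.25
    return omega, Q, oscillates

def max_nonoscillating_kappa(tau, a_norm_sq):
    """Heuristic from the oscillation condition: keep kappa below this bound."""
    return 1.0 / (4.0 * tau * a_norm_sq)
```

For example, `pes_filter_params(tau=0.1, kappa=0.01, a_norm_sq=1000.0)` gives omega = 10 rad/s, Q = 1, and an oscillating error, since tau·kappa·||a||² = 1 > 1/4.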

• A Biologically Constrained Model of Semantic Memory Search (Proceedings of the 39th Annual Conference of the Cognitive Science Society, 2017)

Ivana Kajić, Jan Gosmann, Brent Komer, Ryan W. Orr, Terrence C. Stewart, Chris Eliasmith

Abstract: The semantic fluency task has been used to understand the effects of semantic relationships on human memory search. A variety of computational models have been proposed that explain human behavioral data, yet it remains unclear how millions of spiking neurons work in unison to realize the cognitive processes involved in memory search. In this paper, we present a biologically constrained neural network model that performs the task in a fashion similar to humans. The model reproduces experimentally observed response timing effects, as well as similarity trends within and across semantic categories derived from responses. Three different sources of the association data have been tested by embedding associations in neural connections, with free association norms providing the best match.

• Neural modeling of developmental lexical disorders (Bernstein Conference 2017, 2017)

Catharina Marie Stille, Trevor Bekolay, Bernd J. Kröger

Abstract: Standardized tests exist for the diagnostics of developmental lexical disorders, but it is still difficult to associate the resulting behavior of a child while speaking with functional deficits in the child's brain. The mental lexicon is part of the speech and language knowledge repository of individuals. It enables humans to produce as well as to understand speech. The computational frameworks we used for implementing a model of the mental lexicon and speech processing are the NEF (Neural Engineering Framework, Eliasmith et al. 2012, Eliasmith 2013) and the SPA (Semantic Pointer Architecture, Eliasmith et al. 2012, Stewart & Eliasmith 2014). These frameworks allow modeling of large-scale neural networks, comprising sensory, motor, and cognitive components. The modeled task is the WWT 6-10 (Word range and Word Retrieval Test, see Glück 2011), which comprises 95 items and is a picture naming and word comprehension task. In the case of incorrect answers, semantic and phonological cues are also given to facilitate word production. A major goal of this study is to introduce a quantitative neurocomputational model for lexical storage as well as for lexical retrieval. A further goal is to associate neural dysfunctions with deficits in speech behavior; concretely, the deficits of interest are in lexical storage and lexical access. The dysfunctions introduced here are the lesioning of specific neural SPA buffers and of specific neural connections between these buffers. Based on the behavioral data given by the WWT, we are now able to associate functional neural deficits with symptomatic behavioral data. This allows us to identify potential dysfunctions at the neural level for word retrieval and word storage.

• Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware (Frontiers in Computational Neuroscience, 2017)

Andreas Stöckel, Christoph Jenzen, Michael Thies, Ulrich Rückert

Abstract: Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy, and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method allows us to test the quality of the neuron model implementation, and to explain significant deviations from the expected reference output.

• Finding Tuning Curves for Point Neurons with Conductance-Based Synapses (Tech Report, 2017)

Andreas Stöckel

Abstract: In the Neural Engineering Framework (NEF), individual neuron tuning curves are often characterized in terms of a maximum firing rate and an $x$-intercept. However, for LIF neurons with conductance-based synapses it is not immediately clear how maximum rate and $x$-intercept should be mapped to excitatory and inhibitory conductance input functions $g_\mathrm {E}(x)$, $g_\mathrm {I}(x)$. In this technical report we describe a method for deriving such functions and compare the resulting conductance-based tuning curves to current-based tuning curves with equivalent parameters. For large maximum rates and $x$-intercepts the conductance-based tuning curves possess a significantly steeper spike-rate onset compared to their current-based counterparts.
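For reference, the standard current-based mapping that the report contrasts against: given a desired maximum rate (at x = 1) and an x-intercept, solve the LIF rate equation for the gain and bias of J = gain·x + bias. This is a sketch under default NEF-style time constants (τ_RC = 20 ms, τ_ref = 2 ms, illustrative); the conductance-based derivation in the report itself is more involved.

```python
import math

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for a constant input current J."""
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / (J - 1.0)))

def gain_bias(max_rate, intercept, tau_rc=0.02, tau_ref=0.002):
    """Choose gain and bias so that J = gain*x + bias gives rate 0 at
    x = intercept and rate = max_rate at x = 1 (current-based mapping)."""
    z = (1.0 / max_rate - tau_ref) / tau_rc
    j_max = 1.0 + 1.0 / math.expm1(z)   # current needed at x = 1
    gain = (j_max - 1.0) / (1.0 - intercept)
    bias = 1.0 - gain * intercept       # current is exactly threshold at x0
    return gain, bias
```

For example, `gain_bias(100.0, 0.0)` yields a tuning curve that is silent for x ≤ 0 and fires at 100 Hz at x = 1.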

• Point Neurons with Conductance-Based Synapses in the Neural Engineering Framework (arXiv preprint arXiv:1710.07659, 2017)

Abstract: The mathematical model underlying the Neural Engineering Framework (NEF) expresses neuronal input as a linear combination of synaptic currents. However, in biology, synapses are not perfect current sources and are thus nonlinear. Detailed synapse models are based on channel conductances instead of currents, which require independent handling of excitatory and inhibitory synapses. This, in particular, significantly affects the influence of inhibitory signals on the neuronal dynamics. In this technical report we first summarize the relevant portions of the NEF and conductance-based synapse models. We then discuss a naïve translation between populations of LIF neurons with current- and conductance-based synapses based on an estimation of an average membrane potential. Experiments show that this simple approach works relatively well for feed-forward communication channels, yet performance degrades for NEF networks describing more complex dynamics, such as integration.

• Biologically inspired methods in speech recognition and synthesis: closing the loop (PhD Thesis, 2016)

Trevor Bekolay

Abstract: Current state-of-the-art approaches to computational speech recognition and synthesis are based on statistical analyses of extremely large data sets. It is currently unknown how these methods relate to the methods that the human brain uses to perceive and produce speech. In this thesis, I present a conceptual model, Sermo, which describes some of the computations that the human brain uses to perceive and produce speech. I then implement three large-scale brain models that accomplish tasks theorized to be required by Sermo, drawing upon techniques in automatic speech recognition, articulatory speech synthesis, and computational neuroscience. The first model extracts features from an audio signal by performing a frequency decomposition with an auditory periphery model, then decorrelating the information in that power spectrum with methods commonly used in audio and image compression. I show that the features produced by this model implemented with biologically plausible spiking neurons can be used to classify phones in pre-segmented speech with significantly better accuracy than the features typically used in automatic speech recognition systems. Additionally, I show that this model can be used to compare auditory periphery models in terms of their ability to support phone classification of pre-segmented speech. The second model uses a symbol-like neural representation of a sequence of syllables to generate a trajectory of premotor commands that can be used to control an articulatory synthesizer. I show that the model can produce trajectories up to several seconds in length from a static syllable sequence representation that result in intelligible synthesized speech. The trajectories reflect the high temporal variability of human speech, and smoothly transition between successive syllables, even in rapid utterances. The third model classifies syllables from a trajectory of premotor commands. 
I show that the model is able to classify syllables online despite high temporal variability, and can produce the same syllable representations used by the second model. These two models can be connected in future work in order to implement a closed-loop sensorimotor speech system. Unlike current computational approaches, all three of these models are implemented with biologically plausible spiking neurons, which can be simulated with neuromorphic hardware, and can interface naturally with artificial cochleas. All models are shown to scale to the level of adult human vocabularies in terms of the neural resources required, though limitations on their performance as a result of scaling will be discussed.

• A spiking neural model of adaptive arm control (Proceedings of the Royal Society B, 2016)

Travis DeWolf, Terrence C Stewart, Jean-Jacques Slotine, Chris Eliasmith

Abstract: We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.

• Real-Time FPGA Simulation of Surrogate Models of Large Spiking Networks (ICANN, 2016)

Murphy Berzish, Chris Eliasmith, Bryan Tripp

Keywords: FPGA; Neural Engineering Framework; neuromorphic engineering

Abstract: Models of neural systems often use idealized inputs and outputs, but there is also much to learn by forcing a neural model to interact with a complex simulated or physical environment. Unfortunately, sophisticated interactions require models of large neural systems, which are difficult to run in real time. We have prototyped a system that can simulate efficient surrogate models of a wide range of neural circuits in real time, with a field programmable gate array (FPGA). The scale of the simulations is increased by avoiding simulation of individual neurons, and instead simulating approximations of the collective activity of groups of neurons. The system can approximate roughly a million spiking neurons in a wide range of configurations.

• Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework (The 2016 International Joint Conference on Neural Networks (IJCNN), 2016)

James Knight, Aaron R. Voelker, Andrew Mundy, Chris Eliasmith, Steve Furber

Keywords: Biological system modeling; Computational modeling; Decoding; Neural engineering; Neurons; SpiNNaker; Neuromorphics

Abstract: The biological brain is a highly plastic system within which the efficacy and structure of synaptic connections are constantly changing in response to internal and external stimuli. While numerous models of this plastic behavior exist at various levels of abstraction, how these mechanisms allow the brain to learn meaningful values is unclear. The Neural Engineering Framework (NEF) is a hypothesis about how large-scale neural systems represent values using populations of spiking neurons, and transform them using functions implemented by the synaptic weights between populations. By exploiting the fact that these connection weight matrices are factorable, we have recently shown that static NEF models can be simulated very efficiently using the SpiNNaker neuromorphic architecture. In this paper, we demonstrate how this approach can be extended to efficiently support both supervised and unsupervised learning rules designed to operate on these factored matrices. We then present a heteroassociative memory architecture built using these learning rules and prove that it is capable of learning a human-scale semantic network. Finally we demonstrate a 100 000 neuron version of this architecture running on the SpiNNaker simulator with a speed-up exceeding 150x when compared to the Nengo reference simulator.

• Modeling interactions between speech production and perception: speech error detection at semantic and phonological levels and the inner speech loop (Frontiers in Computational Neuroscience, 2016)

Bernd J. Kröger, Eric Crawford, Trevor Bekolay, Chris Eliasmith

Abstract: Production and comprehension of speech are closely interwoven. For example, the ability to detect an error in one's own speech, halt speech production, and finally correct the error can be explained by assuming an inner speech loop which continuously compares the word representations induced by production to those induced by perception at various cognitive levels (e.g. conceptual, word, or phonological levels). Because spontaneous speech errors are relatively rare, a picture naming and halt paradigm can be used to evoke them. In this paradigm, picture presentation (target word initiation) is followed by an auditory stop signal (distractor word) for halting speech production. The current study seeks to understand the neural mechanisms governing self-detection of speech errors by developing a biologically inspired neural model of the inner speech loop. The neural model is based on the Neural Engineering Framework (NEF) and consists of a network of about 500,000 spiking neurons. In the first experiment we induce simulated speech errors semantically and phonologically. In the second experiment, we simulate a picture naming and halt task. Target-distractor word pairs were balanced with respect to variation of phonological and semantic similarity. The results of the first experiment show that speech errors are successfully detected by a monitoring component in the inner speech loop. The results of the second experiment show that the model correctly reproduces human behavioral data on the picture naming and halt task. In particular, the halting rate in the production of target words was lower for phonologically similar words than for semantically similar or fully dissimilar distractor words. We thus conclude that the neural architecture proposed here to model the inner speech loop reflects important interactions in production and perception at phonological and semantic levels.

• How is scene recognition in a convolutional network related to that in the human visual system? (Artificial Neural Networks and Machine Learning -- ICANN 2016, 2016)

Sugandha Sharma, Bryan Tripp

Keywords: Convolutional neural networks (CNNs), Scene recognition, Human visual system

Abstract: This study is an analysis of scene recognition in a pre-trained convolutional network, to evaluate the information the network uses to distinguish scene categories. We are particularly interested in how the network is related to various areas in the human brain that are involved in different modes of scene recognition. Results of several experiments suggest that the convolutional network relies heavily on objects and fine features, similar to the lateral occipital complex (LOC) in the brain, but less on large-scale scene layout. This suggests that future scene-processing convolutional networks might be made more brain-like by adding parallel components that are more sensitive to the arrangement of simple forms.

• A Neural Model of Context Dependent Decision Making in the Prefrontal Cortex (38th Annual Meeting of the Cognitive Science Society, 2016)

Sugandha Sharma, Brent J. Komer, Terrence C. Stewart, Chris Eliasmith

Keywords: context dependent decision making; decision making; neural engineering framework; neural dynamics; theoretical neuroscience

Abstract: In this paper, we present a spiking neural model of context dependent decision making. Prefrontal cortex (PFC) plays a fundamental role in context dependent behaviour. We model the PFC at the level of single spiking neurons, to explore the underlying computations which determine its contextual responses. The model is built using the Neural Engineering Framework and performs input selection and integration as a nonlinear recurrent dynamical process. The results obtained from the model closely match behavioural and neural experimental data obtained from macaque monkeys that are trained to perform a context sensitive perceptual decision task. The close match suggests that the low-dimensional, nonlinear dynamical model we suggest captures central aspects of context dependent decision making in primates.

• System Identification of Adapting Neurons (Tech Report, 2016)

Eric Hunsberger

Abstract: This report investigates how neurons with complex dynamics, specifically adaptation, can be incorporated into the Neural Engineering Framework. The focus of the report is fitting a linear-nonlinear system model to an adapting neuron model using system identification techniques. By characterizing the neuron dynamics in this way, we hope to gain a better understanding of what sort of temporal basis the neurons in a population provide, which will determine what kinds of dynamics can be decoded from the neural population. The report presents four system identification techniques: a correlation-based method, a least-squares method, an iterative least-squares technique based on Paulin's algorithm, and a general iterative least-squares method based on gradient descent optimization. These four methods are all used to fit linear-nonlinear models to the adapting neuron model. We find that the Paulin least-squares method performs the best in this situation, and linear-nonlinear models fit in this manner are able to capture the relevant adaptation dynamics of the neuron model. Other questions related to the system identification, such as the type of input to use and the amount of regularization required for the least-squares methods, are also answered empirically. The report concludes by performing system identification on 20 neurons with a range of adaptation parameters, and examining what type of temporal basis these neurons provide.

• Training Spiking Deep Networks for Neuromorphic Hardware (arXiv:1611.05141, 2016)

Abstract: We describe a method to train spiking deep networks that can be run using leaky integrate-and-fire (LIF) neurons, achieving state-of-the-art results for spiking LIF networks on five datasets, including the large ImageNet ILSVRC-2012 benchmark. Our method for transforming deep artificial neural networks into spiking networks is scalable and works with a wide range of neural nonlinearities. We achieve these results by softening the neural response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our analysis shows that implementations of these networks on neuromorphic hardware will be many times more power-efficient than the equivalent non-spiking networks on traditional hardware.
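One way to realize the "softened" response function mentioned in the abstract is to replace the hard rectification in the LIF rate equation with a softplus, so that the derivative stays bounded near threshold. The sketch below follows that idea; the smoothing width σ and the time constants are illustrative, and the paper's exact parameterization may differ.

```python
import math

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Hard LIF rate: zero below the threshold current J = 1."""
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / (J - 1.0)))

def soft_lif_rate(J, sigma=0.1, tau_rc=0.02, tau_ref=0.002):
    """Softened LIF rate: max(0, J - 1) is replaced by a softplus of width
    sigma, so the rate and its derivative are smooth at the threshold."""
    j = sigma * math.log1p(math.exp((J - 1.0) / sigma))   # smooth max(0, J-1)
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / j))
```

As σ → 0 the soft curve approaches the hard LIF curve; just below threshold it remains slightly positive, which is what keeps the derivative bounded and lets gradient-based training proceed.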

• Large-scale cognitive model design using the Nengo neural simulator (Biologically Inspired Cognitive Architectures, 2016)

Abstract: The Neural Engineering Framework (NEF) and Semantic Pointer Architecture (SPA) provide the theoretical underpinnings of the neural simulation environment Nengo. Nengo has recently been used to build Spaun, a state-of-the-art, large-scale neural model that performs motor, perceptual, and cognitive functions with spiking neurons (Eliasmith et al., 2012). In this tutorial we take the reader through the steps needed to create two simpler, illustrative cognitive models. The purpose of this tutorial is to simultaneously introduce the reader to the SPA and its implementation in Nengo.

• Towards a Cognitively Realistic Representation of Word Associations (38th Annual Meeting of the Cognitive Science Society, 2016)

Ivana Kajić, Jan Gosmann, Terrence C. Stewart, Thomas Wennekers, Chris Eliasmith

Keywords: semantic spaces; vector representations; spiking neurons; insight; Remote Associates Test

Abstract: The ability to associate words is an important cognitive skill. In this study we investigate different methods for representing word associations in the brain, using the Remote Associates Test (RAT) as a task. We explore representations derived from free association norms and statistical n-gram data. Although n-gram representations yield better performance on the test, a closer match with the human performance is obtained with representations derived from free associations. We propose that word association strengths derived from free associations play an important role in the process of RAT solving. Furthermore, we show that this model can be implemented in spiking neurons, and estimate the number of biologically realistic neurons that would suffice for an accurate representation.

• A scaleable spiking neural model of action planning (Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016)

Peter Blouw, Chris Eliasmith, Bryan Tripp

Abstract: Past research on action planning has shed light on the neural mechanisms underlying the selection of simple motor actions, along with the cognitive mechanisms underlying the planning of action sequences in constrained problem solving domains. We extend this research by describing a neural model that rapidly plans action sequences in relatively unconstrained domains by manipulating structured representations of objects and the actions they typically afford. We provide an analysis that indicates our model is able to reliably accomplish goals that require correctly performing a sequence of up to 5 actions in a simulated environment. We also provide an analysis of the scaling properties of our model with respect to the number of objects and affordances that constitute its knowledge of the environment. Using simplified simulations we find that our model is likely to function effectively while picking from 10,000 actions related to 25,000 objects.

• Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks (PLoS ONE, 2016)

Abstract: The Semantic Pointer Architecture (SPA) is a proposal for specifying the computations and architectural elements needed to account for cognitive functions. By means of the Neural Engineering Framework (NEF), this proposal can be realized in a spiking neural network. However, in any such network each SPA transformation will accumulate noise. By increasing the accuracy of common SPA operations, the overall network performance can be increased considerably. As well, the representations in such networks present a trade-off between being able to represent all possible values and being able to represent only the most likely values, but with high accuracy. We derive a heuristic to find the near-optimal point in this trade-off. This allows us to improve the accuracy of common SPA operations by up to 25 times. Ultimately, it allows for a reduction in neuron number and a more efficient use of both traditional and neuromorphic hardware, which we demonstrate here.
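The "common SPA operations" referred to here include binding by circular convolution and its approximate inverse. A minimal NumPy sketch of that operation, with no neurons involved (the dimensionality and random vectors are illustrative):

```python
import numpy as np

def bind(a, b):
    """Circular convolution: the SPA's binding operation, computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    """Approximate unbinding: bind with the involution (pseudo-inverse) of b."""
    b_inv = np.concatenate(([b[0]], b[:0:-1]))
    return bind(c, b_inv)

rng = np.random.default_rng(42)
d = 512
a = rng.normal(0.0, 1.0 / np.sqrt(d), d)   # roughly unit-length random pointers
b = rng.normal(0.0, 1.0 / np.sqrt(d), d)

c = bind(a, b)          # bound pointer, dissimilar to both a and b
a_hat = unbind(c, b)    # noisy reconstruction of a
sim = a_hat @ a / (np.linalg.norm(a_hat) * np.linalg.norm(a))
```

The reconstruction is noisy (cosine similarity well below 1, but far above chance), which is exactly why noise accumulates across chained SPA transformations and why improving the accuracy of each operation pays off.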

• Improving With Practice: A Neural Model of Mathematical Development (Topics in Cognitive Science, 2016)

Keywords: Neural engineering framework, Semantic pointer architecture, Nengo, Cognitive modeling, Mathematical ability, Dyscalculia, Skill consolidation

Abstract: The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in parallel: a slower basal ganglia loop and a faster cortical network. The slow network methodically computes the count from one digit given another, corresponding to the addition of two digits, whereas the fast network gradually “memorizes” the output from the slow network. The faster network eventually learns how to add the same digits that initially drove the behavior of the slower network. Performance of this model is demonstrated by simulating a fully spiking neural network that includes basal ganglia, thalamus, and various cortical areas. Consequently, the model incorporates various neuroanatomical data in terms of the brain areas used for calculation, and makes psychologically testable predictions related to frequency of rehearsal. Furthermore, the model replicates developmental progression through addition strategies in terms of reaction times and accuracy, and naturally explains observed symptoms of dyscalculia.

• Human-Inspired Neurorobotic System for Classifying Surface Textures by Touch (Robotics and Automation Letters, 2016)

Ken Elmar Friedl, Aaron R. Voelker, Angelika Peer, Chris Eliasmith

Keywords: Biologically-inspired robots, Force and tactile sensing, Neurorobotics

Abstract: Giving robots the ability to classify surface textures requires appropriate sensors and algorithms. Inspired by the biology of human tactile perception, we implement a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semi-supervised approach for classifying dynamic stimuli. Input to the network is supplied by accelerometers mounted on a robotic arm. The sensor data is encoded by a heterogeneous population of neurons, modeled to match the spiking activity of mechanoreceptor cells. This activity is convolved by a hidden layer using bandpass filters to extract nonlinear frequency information from the spike trains. The resulting high-dimensional feature representation is then continuously classified using a neurally implemented support vector machine. We demonstrate that our system classifies 18 metal surface textures scanned in two opposite directions at a constant velocity. We also demonstrate that our approach significantly improves upon a baseline model that does not use the described feature extraction. This method can be performed in real-time using neuromorphic hardware, and can be extended to other applications that process dynamic stimuli online.

• Improving with Practice: A Neural Model of Mathematical Development (Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016)

Abstract: The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. We model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in parallel: a slower basal ganglia loop, and a faster cortical network. The slow network methodically computes the count from one digit given another, corresponding to the addition of two digits, while the fast network gradually "memorizes" the output from the slow network. The faster network eventually learns how to add the same digits that initially drove the behaviour of the slower network. Performance of this model is demonstrated by simulating a fully spiking neural network that includes basal ganglia, thalamus and various cortical areas. (*) Best Student Paper Award: Computational Modeling Prize in Applied Cognition

• Biologically Plausible, Human-Scale Knowledge Representation (Cognitive Science, 2015)

Eric Crawford, Matthew Gingerich, Chris Eliasmith

Keywords: Knowledge representation, Connectionism, Neural network, Biologically plausible, Vector symbolic architecture, WordNet, Scaling

Abstract: Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), "mesh" binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic Pointer Architecture (SPA) approach to modeling cognition (Eliasmith, 2013) do scale appropriately. Specifically, we construct a spiking neural network of about 2.5 million neurons that employs semantic pointers to successfully encode and decode the main lexical relations in WordNet, which has over 100,000 terms. In addition, we show that the same representations can be employed to construct recursively structured sentences consisting of arbitrary WordNet concepts, while preserving the original lexical structure. We argue that these results suggest that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition.

• An efficient SpiNNaker implementation of the Neural Engineering Framework (IJCNN, 2015)

Andrew Mundy, James Knight, Terrence C. Stewart, Steve Furber

Abstract: By building and simulating neural systems we hope to understand how the brain may work and use this knowledge to build neural and cognitive systems to tackle engineering problems. The Neural Engineering Framework (NEF) is a hypothesis about how such systems may be constructed and has recently been used to build the world's first functional brain model, Spaun. However, while the NEF simplifies the design of neural networks, simulating them using standard computer hardware is still computationally expensive – often running far slower than biological real-time and scaling very poorly: problems the SpiNNaker neuromorphic simulator was designed to solve. In this paper we (1) argue that employing the same model of computation used for simulating general purpose spiking neural networks on SpiNNaker for NEF models results in suboptimal use of the architecture, and (2) provide and evaluate an alternative simulation scheme which overcomes the memory and compute challenges posed by the NEF. This proposed method uses factored weight matrices to reduce memory usage by around 90% and, in some cases, simulate 2000 neurons on a processing core – double the SpiNNaker architectural target.
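The memory saving from factored weight matrices can be illustrated with a toy calculation (the sizes, random values, and 96% figure below are illustrative assumptions, not SpiNNaker's actual parameters): because a full NEF connection-weight matrix is the product W = E·D of encoders and decoders, only the two low-rank factors need to be stored and the matrix-vector product can be computed in two stages.

```python
import random

random.seed(1)

# Hypothetical sizes: 200 pre- and post-synaptic neurons representing a
# 4-dimensional value.
n_pre, n_post, d = 200, 200, 4

D = [[random.gauss(0, 1) for _ in range(n_pre)] for _ in range(d)]   # d x n_pre decoders
E = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_post)]  # n_post x d encoders

a = [random.random() for _ in range(n_pre)]  # presynaptic activities

# Direct route: materialize W = E D, then compute W a.
W = [[sum(E[i][k] * D[k][j] for k in range(d)) for j in range(n_pre)]
     for i in range(n_post)]
full = [sum(W[i][j] * a[j] for j in range(n_pre)) for i in range(n_post)]

# Factored route: first decode x = D a (a d-dimensional value), then encode E x.
x = [sum(D[k][j] * a[j] for j in range(n_pre)) for k in range(d)]
fact = [sum(E[i][k] * x[k] for k in range(d)) for i in range(n_post)]

full_storage = n_pre * n_post          # 40000 stored weights
fact_storage = d * n_pre + n_post * d  # 1600 stored values: a 96% reduction
```

Both routes produce the same postsynaptic input, so nothing is lost by never building W explicitly; for these toy sizes the factored form stores 25x fewer values.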

• Computing with temporal representations using recurrently connected populations of spiking neurons (Connecting Network Architecture and Network Computation, 2015)

Abstract: The modeling of neural systems often involves representing the temporal structure of a dynamic stimulus. We extend the methods of the Neural Engineering Framework (NEF) to generate recurrently connected populations of spiking neurons that compute functions across the history of a time-varying signal, in a biologically plausible neural network. To demonstrate the method, we propose a novel construction to approximate a pure delay, and use that approximation to build a network that represents a finite history (sliding window) of its input. Specifically, we solve for the state-space representation of a pure time-delay filter using Padé approximants, and then map this system onto the dynamics of a recurrently connected population. The construction is robust to noisy inputs over a range of frequencies, and can be used with a variety of neuron models, including leaky integrate-and-fire, rectified linear, and Izhikevich neurons. Furthermore, we extend the approach to handle various models of the post-synaptic current (PSC), and characterize the effects of the PSC model on overall dynamics. Finally, we show that each delay may be modulated by an external input to scale the spacing of the sliding window on-the-fly. We demonstrate this by transforming the sliding window to compute filters that are linear (e.g., discrete Fourier transform) and nonlinear (e.g., mean squared power), with controllable frequency.
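As a concrete illustration of the underlying idea (this first-order case is standard textbook material, not the paper's specific derivation), the lowest-order Padé approximant already shows how a pure delay of length $\theta$ maps onto a low-order linear system:

```latex
e^{-\theta s} \;\approx\; \frac{1 - \theta s / 2}{1 + \theta s / 2}
```

This is a one-state transfer function that a recurrently connected population can realize directly; higher-order $[p/q]$ approximants extend the same idea to yield accurate delays over a wider frequency band.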

• Explorations in Distributed Recurrent Biological Parsing (International Conference on Cognitive Modelling, 2015)

Abstract: Our ongoing investigations into biologically plausible syntactic and semantic parsing have identified a novel methodology for processing complex structured information. This approach combines Vector Symbolic Architectures (a method for representing sentence structures as distributed vectors), the Neural Engineering Framework (a method for organizing biologically realistic neurons to approximate algorithms), and constraint-based parsing (a method for creating dynamic systems that converge to correct parsings). Here, we present some of our initial findings that show the promise of this approach for explaining the complex, flexible, and scalable parsing abilities found in humans.

• Reduction of dopamine in basal ganglia and its effects on syllable sequencing in speech: A computer simulation study (Basal Ganglia, 2015)

Valentin Senft, Terry Stewart, Trevor Bekolay, Chris Eliasmith, Bernd J. Kröger

Keywords: Freezing of speech movements

Abstract: Background: Reduction of dopamine in the basal ganglia is a common cause of Parkinson's Disease (PD). If dopamine-producing cells die in the substantia nigra, as seen in PD, a typical symptom is freezing of articulatory movements during speech production. Goal: The goal of this study is to simulate syllable sequencing tasks by computer modelling of the cortico-basal ganglia-thalamus-cortical action selection loop using different levels of dopamine, in order to investigate the freezing effect in more detail. Method: The simulation was done using the Neural Engineering Object (Nengo) software tool. In the simulation, two dopamine level parameters (lg and le), representing the effects of D1 and D2 receptors respectively, and therefore the level of dopamine in the striatum, can be differentiated and modified. Results: By decreasing the dopamine level parameters lg and le to 50%, we replicated a freezing effect after fewer than 5 syllable productions. Furthermore, freezing of action selection in speech was greater for dopamine level reduction in D1 than in D2 receptors. Conclusions: In this study using a neuro-functional brain model, the speech freezing effect results from simulating a reduction of the dopamine level in the striatum.

• A Solution to the Dynamics of the Prescribed Error Sensitivity Learning Rule (Tech Report, 2015)

Aaron R. Voelker

Abstract: Prescribed Error Sensitivity (PES) is a biologically plausible supervised learning rule that is frequently used with the Neural Engineering Framework (NEF). PES modifies the connection weights between populations of neurons to minimize an external error signal. We solve the discrete dynamical system for the case of constant inputs and no noise, to show that the decoding vectors given by the NEF have a simple closed-form expression in terms of the number of simulation timesteps. Moreover, with $\gamma = 1 - \kappa \|a\|^2 < 1$, where $\kappa$ is the learning rate and $a$ is the vector of firing rates, the error at timestep $k$ is the initial error times $\gamma^k$. Thus, $\gamma > -1$ implies exponential convergence to a unique stable solution, $\gamma < 0$ results in oscillatory weight changes, and $\gamma \le -1$ implies instability.
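The geometric decay of the error can be checked by iterating the discrete PES update directly (a minimal sketch: the firing rates and learning rate below are assumed values, not taken from the report).

```python
# PES for a constant input and no noise: the decoded error shrinks by a
# factor gamma = 1 - kappa * ||a||^2 on every timestep.
kappa = 0.0001                  # learning rate (assumed)
a = [50.0, 20.0, 35.0]          # constant firing rates (assumed)
target = 1.0                    # value the decoders should learn to produce
d = [0.0, 0.0, 0.0]             # decoding weights, initially zero

gamma = 1 - kappa * sum(ai * ai for ai in a)   # here 1 - 0.4125 = 0.5875

errors = []
for k in range(20):
    y = sum(di * ai for di, ai in zip(d, a))   # decoded estimate
    e = target - y
    errors.append(e)
    # PES update: nudge each weight along the error times its activity
    d = [di + kappa * e * ai for di, ai in zip(d, a)]

# Empirically, errors[k] matches gamma**k * errors[0] to machine precision.
```

Since $0 < \gamma < 1$ for this choice of rates, the error converges exponentially, matching the closed-form result; making $\kappa$ large enough that $\gamma \le -1$ would make the same loop diverge.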

Brent Komer

Abstract: This thesis explores the application of a biologically inspired adaptive controller to quadcopter flight control. This begins with an introduction to modelling the dynamics of a quadcopter, followed by an overview of control theory and neural simulation in Nengo. The Virtual Robotics Experimentation Platform (V-REP) is used to simulate the quadcopter in a physical environment. Iterative design improvements leading to the final controller are discussed. The controller model is run on a series of benchmark tasks and its performance is compared to conventional controllers. The results show that the neural adaptive controller performs on par with conventional controllers on simple tasks, but far exceeds them on tasks involving unexpected external forces in the environment.

• Spiking Deep Networks with LIF Neurons (arXiv:1510.08829, 2015)

Abstract: We train spiking deep networks using leaky integrate-and-fire (LIF) neurons, and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically-plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications.

• Constraint-based parsing with distributed representations (37th Annual Conference of the Cognitive Science Society, 2015)

Keywords: natural language processing; parsing; optimization; harmonic grammar; holographic reduced representations; semantic pointer architecture

Abstract: The idea that optimization plays a key role in linguistic cognition is supported by an increasingly large body of research. Building on this research, we describe a new approach to parsing distributed representations via optimization over a set of soft constraints on the wellformedness of parse trees. This work extends previous research involving the use of constraint-based or “harmonic” grammars by suggesting how parsing can be accomplished using fully distributed representations that preserve their dimensionality with arbitrary increases in structural complexity. We demonstrate that this method can be used to correctly evaluate the wellformedness of linguistic structures generated by a simple context-free grammar, and discuss a number of extensions concerning the neural implementation of the method and its application to complex parsing tasks.

• Concepts as semantic pointers: A framework and computational model (Cognitive Science, 2015)

Peter Blouw, Eugene Solodkin, Paul Thagard, Chris Eliasmith

Abstract: The reconciliation of theories of concepts based on prototypes, exemplars, and theory-like structures is a longstanding problem in cognitive science. In response to this problem, researchers have recently tended to adopt either hybrid theories that combine various kinds of representational structure, or eliminative theories that replace concepts with a more finely grained taxonomy of mental representations. In this paper, we describe an alternative approach involving a single class of mental representations called “semantic pointers.'' Semantic pointers are symbol-like representations that result from the compression and recursive binding of perceptual, lexical, and motor representations, effectively integrating traditional connectionist and symbolic approaches. We present a computational model using semantic pointers that replicates experimental data from categorization studies involving each prior paradigm. We argue that a framework involving semantic pointers can provide a unified account of conceptual phenomena, and we compare our framework to existing alternatives in accounting for the scope, content, recursive combination, and neural implementation of concepts.

• A Spiking Neural Model of the n-Back Task (37th Annual Meeting of the Cognitive Science Society, 2015)

Keywords: n-back task; neural engineering; computational neuroscience, vector symbolic architecture

Abstract: We present a computational model performing the n-back task. This task requires a number of cognitive processes including rapid binding, updating, and retrieval of items in working memory. The model is implemented in spiking leaky-integrate-and-fire neurons with physiologically constrained parameters, and anatomically constrained organization. The methods of the Semantic Pointer Architecture (SPA) are used to construct the model. Accuracies and reaction times produced by the model are shown to match human data. Namely, the characteristic decline in accuracy and response speed as n increases is reproduced. Furthermore, the model provides evidence, contrary to some past proposals, that an active process for removing items from working memory is not necessary for accurate performance on the n-back task.

• Hyperopt: a Python library for model selection and hyperparameter optimization (Computational Science & Discovery, 2015)

James Bergstra, Brent Komer, Chris Eliasmith, Dan Yamins, David D Cox

Keywords: Python, Bayesian optimization, machine learning, Scikit-learn

Abstract: Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.

• Closed-Loop Neuromorphic Benchmarks (Frontiers in neuroscience, 2015)

Terrence C Stewart, Travis DeWolf, Ashley Kleinhans, Chris Eliasmith

Abstract: Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of 'minimal' simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.

• A neural model of the motor control system (PhD Thesis, 2014)

Travis DeWolf

Abstract: In this thesis I present the Recurrent Error-driven Adaptive Control Hierarchy (REACH); a large-scale spiking neuron model of the motor cortices and cerebellum of the motor control system. The REACH model consists of anatomically organized spiking neurons that control a nonlinear three-link arm to perform reaching and handwriting, while being able to adapt to unknown changes in arm dynamics and structure. I show that the REACH model accounts for data across 19 clinical and experimental studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across normal and damaged subjects performing adaptive and static tasks. The REACH model is a dynamical control system based on modern control theoretic methods, specifically operational space control, dynamic movement primitives, and nonlinear adaptive control. The model is implemented in spiking neurons using the Neural Engineering Framework (NEF). The model plans trajectories in end-effector space, and transforms these commands into joint torques that can be sent to the arm simulation. Adaptive components of the model are able to compensate for unknown kinematic or dynamic system parameters, such as arm segment length or mass. Using the NEF the adaptive components of the system can be seeded with approximations of the system kinematics and dynamics, allowing faster convergence to stability. Stability proofs for nonlinear adaptation methods implemented in distributed systems with scalar output are presented. By implementing the motor control model in spiking neurons, biological constraints such as neurotransmitter time-constants and anatomical connectivity can be imposed, allowing further comparison to experimental data for model validation. The REACH model is compared to clinical data from human patients as well as neural recordings from monkeys performing reaching experiments.
The REACH model represents a novel integration of control theoretic methods and neuroscientific constraints to specify a general, adaptive, biologically plausible motor control algorithm.

• Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn (ICML 2014 AutoML Workshop, 2014)

Abstract: Hyperopt-sklearn is a new software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of pre-processing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-Newsgroups, Convex Shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and Convex Shapes.

• Nengo: A Python tool for building large-scale functional brain models (Frontiers in Neuroinformatics, 2014)

Abstract: Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.

• A spiking neural integrator model of the adaptive control of action by the medial prefrontal cortex (The Journal of Neuroscience, 2014)

Trevor Bekolay, Mark Laubach, Chris Eliasmith

Abstract: Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor for determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with similar reaction times as experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.

• Mapping Arbitrary Mathematical Functions and Dynamical Systems to Neuromorphic VLSI Circuits for Spike-Based Neural Computation (IEEE International Symposium on Circuits and Systems (ISCAS), 2014)

Federico Corradi, Chris Eliasmith, Giacomo Indiveri

Keywords: neuromorphic, robotics, NEF, Spinnaker

Abstract: (Best Paper Honorable Mention) Brain-inspired, spike-based computation in electronic systems is being investigated for developing alternative, non-conventional computing technologies. The Neural Engineering Framework provides a method for programming these devices to implement computation. In this paper we apply this approach to perform arbitrary mathematical computation using a mixed signal analog/digital neuromorphic multi-neuron VLSI chip. This is achieved by means of a network of spiking neurons with multiple weighted connections. The synaptic weights are stored in a 4-bit on-chip programmable SRAM block. We propose a parallel event-based method for appropriately calibrating the synaptic weights, and demonstrate the method by encoding and decoding arbitrary mathematical functions, and by implementing dynamical systems via recurrent connections.

• Large-Scale Synthesis of Functional Spiking Neural Circuits (Proceedings of the IEEE, 2014)

Keywords: Brain modeling, neural computation, spiking neural networks, Neural Engineering Framework (NEF), Semantic Pointer Architecture (SPA), Nengo, Spaun, large-scale systems, neuromorphic engineering

Abstract: In this paper, we review the theoretical and software tools used to construct Spaun, the first (and so far only) brain model capable of performing cognitive tasks. This tool set allowed us to configure 2.5 million simple nonlinear components (neurons) with 60 billion connections between them (synapses) such that the resulting model can perform eight different perceptual, motor, and cognitive tasks. To reverse-engineer the brain in this way, a method is needed that shows how large numbers of simple components, each of which receives thousands of inputs from other components, can be organized to perform the desired computations. We achieve this through the neural engineering framework (NEF), a mathematical theory that provides methods for systematically generating biologically plausible spiking networks to implement nonlinear and linear dynamical systems. On top of this, we propose the semantic pointer architecture (SPA), a hypothesis regarding some aspects of the organization, function, and representational resources used in the mammalian brain. We conclude by discussing Spaun, which is an example model that uses the SPA and is implemented using the NEF. Throughout, we discuss the software tool Neural ENGineering Objects (Nengo), which allows for the synthesis and simulation of neural models efficiently on the scale of Spaun, and provides support for constructing models using the NEF and the SPA. The resulting NEF/SPA/Nengo combination is a general tool set for both evaluating hypotheses about how the brain works, and for building systems that compute particular functions using neuron-like components.
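The NEF's core recipe of encoding a value into heterogeneous neural activity and solving for linear decoders can be sketched in a few lines (a toy sketch only: the rectified-linear tuning curves, random parameters, and regularization constant below are illustrative assumptions, whereas real NEF models use spiking neurons and principled parameter choices).

```python
import random

random.seed(0)

# A small 1-D population of rectified-linear "neurons" with random
# gains, biases, and +/-1 encoders (all assumed values).
N = 20
gains = [random.uniform(0.5, 2.0) for _ in range(N)]
biases = [random.uniform(-1.0, 1.0) for _ in range(N)]
encoders = [random.choice([-1.0, 1.0]) for _ in range(N)]

def rates(x):
    """Encoding: each neuron's activity for represented value x."""
    return [max(0.0, g * e * x + b) for g, e, b in zip(gains, encoders, biases)]

# Sample the represented range and solve the regularized least-squares
# problem (A^T A + reg*I) d = A^T x for the linear decoders d.
xs = [i / 50.0 for i in range(-50, 51)]
A = [rates(x) for x in xs]
reg = 0.1
G = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) + (reg if i == j else 0.0)
     for j in range(N)] for i in range(N)]
u = [sum(A[k][i] * xs[k] for k in range(len(xs))) for i in range(N)]

# Gaussian elimination with partial pivoting, then back-substitution.
for col in range(N):
    piv = max(range(col, N), key=lambda r: abs(G[r][col]))
    G[col], G[piv] = G[piv], G[col]
    u[col], u[piv] = u[piv], u[col]
    for r in range(col + 1, N):
        f = G[r][col] / G[col][col]
        for c in range(col, N):
            G[r][c] -= f * G[col][c]
        u[r] -= f * u[col]
d = [0.0] * N
for r in range(N - 1, -1, -1):
    d[r] = (u[r] - sum(G[r][c] * d[c] for c in range(r + 1, N))) / G[r][r]

def decode(x):
    """Decoding: weighted sum of activities recovers the represented value."""
    return sum(ai * di for ai, di in zip(rates(x), d))
```

Even this crude population decodes the identity function to within a few percent across the represented range; the NEF scales the same encode/decode/transform recipe up to the millions of neurons and nonlinear dynamical systems used in Spaun.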

• Sentence processing in spiking neurons: A biologically plausible left-corner parser (36th Annual Conference of the Cognitive Science Society, 2014)

Abstract: A long-standing challenge in cognitive science is how neurons could be capable of the flexible structured processing that is the hallmark of cognition. We present a spiking neural model that can be given an input sequence of words (a sentence) and produces a structured tree-like representation indicating the parts of speech it has identified and their relations to each other. While this system is based on a standard left-corner parser for constituency grammars, the neural nature of the model leads to new capabilities not seen in classical implementations. For example, the model gracefully decays in performance as the sentence structure gets larger. Unlike previous attempts at building neural parsing systems, this model is highly robust to neural damage, can be applied to any binary-constituency grammar, and requires relatively few neurons (~150,000).

• A spiking neural model applied to the study of human performance and cognitive decline on Raven's Advanced Progressive Matrices (Intelligence, 2014)

Keywords: Aging, Cognitive decline, Raven's Progressive Matrices, Spiking neural model, Vector symbolic architectures

• Event-based neural computing on an autonomous mobile platform (Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2014)

Francesco Galluppi, Christian Denk, Matthias Meiner, Terrence C Stewart, Luis Plana, Chris Eliasmith, Steve Furber, Jorg Conradt

Keywords: neuromorphic, robotics, NEF, Spinnaker

Abstract: Living organisms are capable of autonomously adapting to dynamically changing environments by receiving inputs from highly specialized sensory organs and elaborating them on the same parallel, power-efficient neural substrate. In this paper we present a prototype for a comprehensive integrated platform that allows replicating principles of neural information processing in real-time. Our system consists of (a) an autonomous mobile robotic platform, (b) on-board actuators and multiple (neuromorphic) sensors, and (c) the SpiNNaker computing system, a configurable neural architecture for exploration of parallel, brain-inspired models. The simulation of neurally inspired perception and reasoning algorithms is performed in real-time by distributed, low-power, low-latency event-driven computing nodes, which can be flexibly configured using C or specialized neural languages such as PyNN and Nengo. We conclude by demonstrating the platform in two experimental scenarios, exhibiting real-world closed loop behavior consisting of environmental perception, reasoning and execution of adequate motor actions.

• The Competing Benefits of Noise and Heterogeneity in Neural Coding (Neural Computation, 2014)

Eric Hunsberger, Matthew Scott, Chris Eliasmith

Abstract: Noise and heterogeneity are both known to benefit neural coding. Stochastic resonance describes how noise, in the form of random fluctuations in a neuron's membrane voltage, can improve neural representations of an input signal. Neuronal heterogeneity refers to variation in any one of a number of neuron parameters and is also known to increase the information content of a population. We explore the interaction between noise and heterogeneity and find that their benefits to neural coding are not independent. Specifically, a neuronal population better represents an input signal when either noise or heterogeneity is added, but adding both does not always improve representation further. To explain this phenomenon, we propose that noise and heterogeneity operate using two shared mechanisms: (1) temporally desynchronizing the firing of neurons in the population and (2) linearizing the response of a population to a stimulus. We first characterize the effects of noise and heterogeneity on the information content of populations of either leaky integrate-and-fire or FitzHugh-Nagumo neurons. We then examine how the mechanisms of desynchronization and linearization produce these effects, and find that they work to distribute information equally across all neurons in the population in terms of both signal timing (desynchronization) and signal amplitude (linearization). Without noise or heterogeneity, all neurons encode the same aspects of the input signal; adding noise or heterogeneity allows neurons to encode complementary aspects of the input signal, thereby increasing information content. The simulations detailed in this letter highlight the importance of heterogeneity and noise in population coding, demonstrate their complex interactions in terms of the information content of neurons, and explain these effects in terms of underlying mechanisms.

• Learning large-scale heteroassociative memories in spiking neurons (Unconventional Computation and Natural Computation, 2014)

Abstract: Associative memories have been an active area of research over the last forty years (Willshaw et al., 1969; Kohonen, 1972; Hopfield, 1982) because they form a central component of many cognitive architectures (Pollack, 1988; Anderson & Lebiere, 1998). We focus specifically on associative memories that store associations between arbitrary pairs of neural states. When a noisy version of an input state vector is presented to the network, it must output a "clean" version of the associated state vector. We describe a method for building large-scale networks for online learning of associations using spiking neurons, which works by exploiting the techniques of the Neural Engineering Framework (Eliasmith & Anderson, 2003). This framework has previously been used by Stewart et al. (2011) to create memories that possess a number of desirable properties including high accuracy, a fast, feedforward recall process, and efficient scaling, requiring a number of neurons linear in the number of stored associations. These memories have played a central role in several recent neural cognitive models including Spaun, the world's largest functional brain model (Eliasmith et al., 2012), as well as a proposal for human-scale, biologically plausible knowledge representation (Crawford et al., 2013). However, these memories are constructed using an offline optimization method that is not biologically plausible. Here we demonstrate how a similar set of connection weights can be arrived at through a biologically plausible, online learning process featuring a novel synaptic learning rule inspired in part by the well-known Oja learning rule (Oja, 1989). We present the details of our method and report the results of simulations exploring the storage capacity of these networks. We show that our technique scales up to large numbers of associations, and that recall performance degrades gracefully as the theoretical capacity is exceeded.
This work has been implemented in the Nengo simulation package (http://nengo.ca), which will allow straightforward implementations of spiking neural networks on neuromorphic hardware. The result of our work is a fast, adaptive, scalable associative memory composed of spiking neurons which we expect to be a valuable addition to large systems performing online neural computation.
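
As context for the rule described above, the classic rate-based Oja update that inspired it can be sketched as follows (this is the textbook Oja rule, not the paper's novel spiking rule): the Hebbian term y·x is counterbalanced by a decay term y²·w, so the weight vector converges toward the leading principal direction of the input with approximately unit norm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta = 5, 0.01
w = rng.normal(size=d)
w /= np.linalg.norm(w)

# Inputs drawn mostly along one dominant direction.
principal = np.zeros(d)
principal[0] = 1.0

for _ in range(5000):
    x = principal * rng.normal() + rng.normal(size=d) * 0.1
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term minus decay term

# w ends up approximately unit-length and aligned with the dominant axis.
```

The normalization built into the decay term is what keeps the weights bounded without any explicit renormalization step.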

• A unifying mechanistic model of selective attention in spiking neurons. (PLoS computational biology, 2014)

Keywords: Biology and life sciences,Circuit models,Coding mechanisms,Computational biology,Computational neuroscience,Neuroscience,Research Article,Sensory systems,Single neuron function,Visual system

Abstract: Visuospatial attention produces myriad effects on the activity and selectivity of cortical neurons. Spiking neuron models capable of reproducing a wide variety of these effects remain elusive. We present a model called the Attentional Routing Circuit (ARC) that provides a mechanistic description of selective attentional processing in cortex. The model is described mathematically and implemented at the level of individual spiking neurons, with the computations for performing selective attentional processing being mapped to specific neuron types and laminar circuitry. The model is used to simulate three studies of attention in macaque, and is shown to quantitatively match several observed forms of attentional modulation. Specifically, ARC demonstrates that with shifts of spatial attention, neurons may exhibit shifting and shrinking of receptive fields; increases in responses without changes in selectivity for non-spatial features (i.e., response gain); and that the effect on contrast-response functions is better explained as a response-gain effect than as contrast-gain. Unlike past models, ARC embodies a single mechanism that unifies the above forms of attentional modulation, is consistent with a wide array of available data, and makes several specific and quantifiable predictions.

• Modelling the differential effects of prisms on perception and action in neglect. (Experimental Brain Research, 2014)

Steven Leigh, James Danckert, Chris Eliasmith

Abstract: Damage to the right parietal cortex often leads to a syndrome known as unilateral neglect in which the patient fails to attend or respond to stimuli in left space. Recent work attempting to rehabilitate the disorder has made use of rightward-shifting prisms that displace visual input further rightward. After a brief period of adaptation to prisms, many of the symptoms of neglect show improvements that can last for hours or longer, depending on the adaptation procedure. Recent work has shown, however, that differential effects of prisms can be observed on actions (which are typically improved) and perceptual biases (which often remain unchanged). Here, we present a computational model capable of explaining some basic symptoms of neglect (line bisection behaviour), the effects of prism adaptation in both healthy controls and neglect patients and the observed dissociation between action and perception following prisms. The results of our simulations support recent contentions that prisms primarily influence behaviours normally thought to be controlled by the dorsal stream.

• Hyperopt-Sklearn: Automatic Hyperparameter Configuration for Scikit-Learn (Proceedings of the 13th Python in Science Conference, 2014)

Keywords: bayesian optimization, model selection, hyperparameter optimization, scikit-learn

Abstract: Hyperopt-sklearn is a new software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-Newsgroups, Convex Shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and Convex Shapes.
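
The central idea above can be sketched in plain Python (this is a toy illustration, not the hpsklearn API): the choice of preprocessing, the choice of classifier, and each classifier's hyperparameters are treated as one joint search space and optimized as a single black-box problem, here with simple random search in place of Hyperopt's smarter algorithms.

```python
import random

# Toy joint search space: preprocessing choice + classifier choice + each
# classifier's hyperparameter ranges. Names here are illustrative.
SPACE = {
    "preprocessing": ["none", "pca", "tfidf"],
    "classifier": {
        "svm": {"C": (0.01, 100.0)},
        "knn": {"n_neighbors": (1, 50)},
    },
}

def sample(rng):
    clf = rng.choice(list(SPACE["classifier"]))
    params = {k: rng.uniform(*v) for k, v in SPACE["classifier"][clf].items()}
    return {"preprocessing": rng.choice(SPACE["preprocessing"]),
            "classifier": clf, **params}

def toy_loss(cfg):
    # Stand-in objective; a real system would cross-validate on data.
    base = {"svm": 0.10, "knn": 0.20}[cfg["classifier"]]
    penalty = 0.05 if cfg["preprocessing"] == "none" else 0.0
    return base + penalty

rng = random.Random(0)
best = min((sample(rng) for _ in range(200)), key=toy_loss)
# Random search over the joint space settles on the better-scoring branch.
```

Hyperopt's contribution is to make such nested, conditional spaces first-class and to search them with algorithms far more sample-efficient than random search.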

• Preliminary Evaluation of Hyperopt Algorithms on HPOLib (ICML 2014 AutoML Workshop, 2014)

James Bergstra, Brent Komer, Chris Eliasmith, David Warde-Farley

Abstract: Model selection, also known as hyperparameter tuning, can be viewed as a blackbox optimization problem. Recently the HPOlib benchmarking suite was introduced to facilitate comparison between hyperparameter optimization algorithms. We compare seven optimization algorithms implemented in the Hyperopt optimization package, including a new annealing-type algorithm and a new family of Gaussian Process-based SMBO methods, on four screening problems from HPOlib. We find that methods based on Gaussian Processes (GPs) are the most call-efficient. Vanilla GP-based methods using stationary RBF kernels and maximum likelihood kernel parameter estimation provide a near-perfect ability to optimize the benchmarks. Despite being slower than more heuristic baselines, a Theano-based GP-SMBO implementation requires at most a few seconds to produce a candidate evaluation point. We compare this vanilla approach to Hybrid Monte-Carlo integration of the kernel lengthscales and fail to find compelling advantages of this more expensive procedure.
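
A minimal sketch of the GP machinery behind the "vanilla GP" baseline described above (illustrative only): a stationary RBF kernel and the standard zero-mean posterior formula. A real SMBO loop would additionally fit the lengthscale by maximum likelihood and choose new points via an acquisition function such as expected improvement.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3):
    """Stationary RBF (squared-exponential) kernel on scalar inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

# Observed (hyperparameter, loss) pairs from earlier evaluations.
x_train = np.array([0.1, 0.4, 0.7, 0.9])
y_train = np.array([0.8, 0.3, 0.5, 0.9])

jitter = 1e-8
K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def posterior_mean(x_query):
    """GP posterior mean at candidate points (zero prior mean)."""
    return rbf_kernel(x_query, x_train) @ alpha

# The surrogate interpolates the observations almost exactly.
pred = posterior_mean(x_train)
```

The surrogate's cheap predictions are what make GP-based SMBO so call-efficient: candidate points are screened against the model rather than against expensive real evaluations.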

• A spiking neural model of episodic memory encoding and replay in hippocampus (Master's Thesis, 2014)

Oliver Trujillo

Abstract: As we experience life, we are constantly creating new memories, and the hippocampus plays an important role in the formation and recall of these episodic memories. We begin by describing the neural mechanisms that make the hippocampus ideally suited for memory formation, consolidation and recall. We then describe a biologically plausible spiking-neuron model of the hippocampus' role in episodic memory. The model includes a mechanism for generating temporal indexing vectors, for associating these indices with experience vectors to form episodes, and for replaying the original experience vectors in sequence when prompted. The model also associates these episodes with context vectors using synaptic plasticity, such that it is able to retrieve an episodic memory associated with a given context and replay it, even after long periods of time. We demonstrate the model's ability to experience sequences of sensory information in the form of semantic pointer vectors and replay the same sequences later, comparing the results to experimental data. In particular, the model runs a T-maze experiment in which a simulated rat is forced to choose between left or right at a decision point, during which the neural firing patterns of the model's place cells closely match those found in real rats performing the same task. We demonstrate that the model is robust to both spatial and non-spatial data, since the vector representation of the input data remains the same in either case. To our knowledge, this is the first spiking neural hippocampal model that can encode and recall sequences of both spatial and non-spatial data, while exhibiting temporal and spatial selectivity at a neural level.

• Trainable sensorimotor mapping in a neuromorphic robot (Robotics and Autonomous Systems, 2014)

Jorg Conradt, Francesco Galluppi, Terrence C Stewart

Abstract: We present a mobile robot with sufficient computing power to simulate up to a quarter of a million neurons in real-time. We use this computing power, combined with various on-board sensory and motor systems (including silicon retinae), to implement a novel method for learning sensorimotor competences by example. That is, during a brief period of manual control, the robot gathers information about the sensorimotor mapping it should be performing. We show that such a learning-by-example system is well-suited to power-efficient neuron-based computation (60 W for the full quarter of a million neurons), that it can learn quickly (a few tens of seconds), and that its learning generalizes well to novel situations.

• Biologically Plausible, Human-scale Knowledge Representation (35th Annual Conference of the Cognitive Science Society, 2013)

Eric Crawford, Matthew Gingerich, Chris Eliasmith

Keywords: cleanup memory, knowledge representation, Semantic Pointer Architecture, vector symbolic architecture, WordNet

Abstract: Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), mesh binding (van Der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990; Plate, 2003). Recent theoretical work has suggested that most of these methods will not scale well – that is, they cannot encode structured representations that use any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions (Stewart & Eliasmith, 2011; Eliasmith, 2013). Here we present an approach that will scale appropriately, and which is based on neurally implementing a type of Vector Symbolic Architecture (VSA). Specifically, we construct a spiking neural network composed of about 2.5 million neurons that employs a VSA to encode and decode the main lexical relations in WordNet, a semantic network containing over 100,000 concepts (Fellbaum, 1998). We experimentally demonstrate the capabilities of our model by measuring its performance on three tasks which test its ability to accurately traverse the WordNet hierarchy, as well as to decode sentences employing any WordNet term while preserving the original lexical structure. We argue that these results show that our approach is uniquely well-suited to providing a biologically plausible, human-scale account of the structured representations that underwrite cognition.
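
The vector-symbolic binding operation underlying this model (circular convolution, as in Plate's holographic reduced representations) can be sketched directly in vector algebra; the paper's contribution is implementing it, at scale, in spiking neurons. Below is an illustrative sketch: binding a role to a filler, then unbinding with an approximate inverse, recovers a noisy copy of the filler that a cleanup memory would snap to the nearest stored vector.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse for unbinding: reverse all but the first element."""
    return np.concatenate(([a[0]], a[1:][::-1]))

rng = np.random.default_rng(1)
d = 512
role = rng.normal(0, 1 / np.sqrt(d), d)    # e.g. a lexical relation
filler = rng.normal(0, 1 / np.sqrt(d), d)  # e.g. a WordNet concept

bound = cconv(role, filler)                # structured pair, same dimension
recovered = cconv(bound, involution(role))

cos = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
# `recovered` is a noisy but recognizable copy of `filler`.
```

Because the bound vector has the same dimension as its parts, this encoding avoids the exponential resource growth that the abstract argues makes other binding schemes fail to scale.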

• Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks (35th Annual Conference of the Cognitive Science Society, 2013)

Abstract: We present a novel learning rule for learning transformations of sophisticated neural representations in a biologically plausible manner. We show that the rule can learn to transmit and bind semantic pointers. Semantic pointers have previously been used to build Spaun, which is currently the world's largest functional brain model (Eliasmith et al., 2012) and can perform several complex cognitive tasks. The learning rule combines a previously proposed supervised learning rule and a novel spiking form of the BCM unsupervised learning rule. We show that spiking BCM increases sparsity of connection weights at the cost of increased signal transmission error. We demonstrate that the combined learning rule can learn transformations as well as the supervised rule alone, and as well as the offline optimization used previously. We also demonstrate that the combined learning rule is more robust to changes in parameters and leads to better outcomes in higher dimensional spaces.
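
For context, the classic rate-based BCM rule that the spiking variant above builds on can be sketched as follows (this is the textbook form, not the paper's implementation). The update dw = eta * y * (y - theta) * x is depressive when the postsynaptic activity y falls below the modification threshold theta and Hebbian above it; in the full rule theta itself slides to track the recent average of y², stabilizing the output.

```python
import numpy as np

def bcm_step(w, x, theta, eta=0.1):
    """One BCM update; returns the new weights and the pre-update response."""
    y = float(w @ x)
    return w + eta * y * (y - theta) * x, y

w = np.array([0.5, 0.5])
x = np.array([1.0, 0.5])

w_dep, y = bcm_step(w, x, theta=2.0)   # y < theta: depression
w_pot, _ = bcm_step(w, x, theta=0.2)   # y > theta: potentiation

# The same input drives the weights in opposite directions depending on
# which side of the threshold the response falls.
```

This threshold-dependent sign flip is the source of the sparsity effect the abstract reports: weakly driven synapses are pushed down while strongly driven ones grow.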

• A Biologically Plausible Spiking Neuron Model of Fear Conditioning (ICCM, 2013)

Abstract: Reinforcement learning based on rewarding or aversive stimuli is critical to understanding the adaptation of cognitive systems. One of the most basic and well-studied forms of reinforcement learning in mammals is found in fear conditioning. We present a biologically plausible spiking neuron model of mammalian fear conditioning and show that the model is capable of reproducing the results of four well known fear conditioning experiments (conditioning, second-order conditioning, blocking, and context-dependent extinction and renewal). The model contains approximately 2000 spiking neurons which make up various populations of primarily the amygdala, periaqueductal gray, and hippocampus. The connectivity and organization of these populations follows what is known about the fear conditioning circuit in mammalian brains. Input to the model is made up of populations representing sensory stimuli, contextual information, and electric shock, while the output is a population representing an autonomic fear response: freezing. Using a novel learning rule for spiking neurons, associations are learned between cues, contexts, and the aversive shock, reproducing the behaviors seen in rats during fear conditioning experiments.

• A neural model of the development of expertise (The 12th International Conference on Cognitive Modelling, 2013)

Keywords: motor control, automaticity, expertise, procedural learning, basal ganglia, motor cortex

Abstract: The ability to develop expertise through practice is a hallmark of biological systems, for both cognitive and motor based skills. At first, animals exhibit high variability and perform slowly, reliant on feedback signals constantly evaluating performance. With practice, the system develops a proficiency and consistency in skill execution, reflected in an increase in the associated cortical area (Pascual-Leone, 1995). Here we present a neural model of this expertise development. In the model, initial attempts at performing a task are based on generalizing previously learned control signals, which we refer to generically as 'actions', stored in the cortex. The basal ganglia evaluates these actions and modulates their contributions to the output signal, creating a novel action that performs the desired task. With repeated performance, the cortex learns to generate this action on its own, eventually developing an explicit representation of the action that can be called directly. This transference allows the system to more quickly and consistently execute the task, reflecting development of expertise. We present simulation results matching both behavioral and single cell spiking data.

• General Instruction Following in a Large-Scale Biologically Plausible Brain Model (35th Annual Conference of the Cognitive Science Society, 2013)

Abstract: We present a spiking neuron brain model implemented in 318,870 LIF neurons organized into distinct cortical modules, a basal ganglia, and a thalamus, that is capable of flexibly following memorized commands. Neural activity represents a structured set of rules, such as "If you see a 1, then push button A, and if you see a 2, then push button B". Synaptic connections between these neurons and the basal ganglia, thalamus, and other areas cause the system to detect when rules should be applied and to then do so. The model gives a reaction time difference of 77 ms between the simple and two-choice reaction time tasks, and requires 384 ms per item for sub-vocal counting, consistent with human experimental results. This is the first biologically realistic spiking neuron model capable of flexibly responding to complex structured instructions.
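
The instruction-following loop described above can be sketched schematically in plain Python rather than neurons (names and structure here are illustrative, not taken from the model): memorized IF-THEN rules are matched against the current percept, and the best-matching rule's action is selected, the role the basal ganglia and thalamus play in the model.

```python
# Memorized rules of the form "If you see X, then push button Y".
RULES = [
    {"if_see": "1", "then": "push button A"},
    {"if_see": "2", "then": "push button B"},
]

def match_strength(rule, percept):
    # The neural model computes a graded similarity between distributed
    # representations; exact string match is the degenerate version.
    return 1.0 if rule["if_see"] == percept else 0.0

def select_action(percept, rules=RULES, threshold=0.5):
    """Pick the best-matching rule's action, or nothing if no rule applies."""
    strengths = [match_strength(r, percept) for r in rules]
    best = max(range(len(rules)), key=lambda i: strengths[i])
    return rules[best]["then"] if strengths[best] > threshold else None

action = select_action("2")
```

In the spiking model this selection is graded and noisy, which is what produces the human-like reaction-time differences the abstract reports.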

• Visual motion processing and perceptual decision making (35th Annual Conference of the Cognitive Science Society, 2013)

Abstract: We present a novel, biologically plausible model of visual motion processing and perceptual decision making that is independent of the number of choice categories or alternatives. The implementation is a large-scale spiking neural circuit consisting of: 1) a velocity filter using the principle of oscillator interference to determine the direction and speed of pattern motion in V1; 2) a representation of motion evidence in the middle temporal area (MT); and 3) integration of sensory evidence over time by a higher-dimensional attractor network in the lateral intraparietal area (LIP). We demonstrate the model by reproducing behavioral and neural results from classic perceptual decision making experiments that test the perceived direction of motion of variable coherence dot kinetograms. Specifically, these results capture monkey data from two-alternative forced-choice motion decision tests. We note that without any reconfiguration of the circuit, the implementation can be used to make decisions among a continuum of alternatives.
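
The decision stage of this model (evidence integration in LIP) follows the standard drift-diffusion picture of two-alternative motion decisions. A hypothetical sketch of that stage alone, not the spiking circuit: noisy evidence is integrated to a bound, and higher coherence gives a stronger drift, hence faster and more accurate choices.

```python
import numpy as np

def decide(coherence, rng, bound=1.0, noise=1.0, dt=0.01, max_steps=10000):
    """Integrate noisy evidence until a bound is hit; return (choice, time)."""
    x = 0.0
    for step in range(max_steps):
        x += coherence * dt + noise * np.sqrt(dt) * rng.normal()
        if abs(x) >= bound:
            return (1 if x > 0 else -1), (step + 1) * dt
    return 0, max_steps * dt          # timeout (rare with these parameters)

rng = np.random.default_rng(3)
high = [decide(0.5, rng) for _ in range(400)]   # high-coherence dots
low = [decide(0.1, rng) for _ in range(400)]    # low-coherence dots

acc_high = np.mean([c == 1 for c, _ in high])
acc_low = np.mean([c == 1 for c, _ in low])
# Accuracy rises with motion coherence, as in the monkey data.
```

The attractor network in the model generalizes this one-dimensional accumulator to a continuum of alternatives, which is the abstract's key point.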

• Parsing Sequentially Presented Commands in a Large-Scale Biologically Realistic Brain Model (35th Annual Conference of the Cognitive Science Society, 2013)

Abstract: We present a neural mechanism for interpreting and executing visually presented commands. These are simple verb-noun commands (such as WRITE THREE) and can also include conditionals ([if] SEE SEVEN, [then] WRITE THREE). We apply this to a simplified version of our large-scale functional brain model "Spaun", where input is a 28x28 pixel visual stimulus, with a different pattern for each word. Output controls a simulated arm, giving hand-written answers. Cortical areas for categorizing, storing, and interpreting information are controlled by the basal ganglia (action selection) and thalamus (routing). The final model has approximately 100,000 LIF spiking neurons. We show that the model is extremely robust to neural damage (40 percent of neurons can be destroyed before performance drops significantly). Performance also drops for visual display times less than 250ms. Importantly, the system also scales to large vocabularies (approximately 100,000 nouns and verbs) without requiring an exponentially large number of neurons.

• God, the devil, and details: Fleshing out the predictive processing framework (commentary on Clark) (Behavioral and Brain Sciences, 2013)

Keywords: cognitive architecture

Abstract: The predictive processing framework lacks many of the architectural and implementational details needed to fully investigate or evaluate the ideas it presents. One way to begin to fill in these details is by turning to standard control-theoretic descriptions of these types of systems (e.g., Kalman filters), and by building complex, unified computational models in biologically realistic neural simulations.

• Does the Entorhinal Cortex use the Fourier Transform? (Tech Report, 2013)

Jeff Orchard, Hao Yang, Xiang Ji

Keywords: Fourier, Neural Engineering Framework, oscillators, path integration

Abstract: In 2005, Hafting et al. reported that some neurons in the entorhinal cortex (EC) fire bursts when the animal occupies locations organized in a hexagonal grid pattern in their spatial environment. Previous to that, place cells had been observed, firing bursts only when the animal occupied a particular region of the environment. Both of these types of cells exhibit theta-cycle modulation, firing bursts in the 4-12Hz range. In particular, grid cells fire bursts of action potentials that precess with respect to the theta cycle, a phenomenon dubbed "theta precession". Since then, various models have been proposed to explain the relationship between grid cells, place cells, and theta precession. However, most models have lacked a fundamental, overarching framework. As a reformulation of the pioneering work of Welday et al. (2011), we propose that the EC is implementing its spatial coding using the Fourier Transform. We show how the Fourier Shift Theorem relates to the phases of velocity-controlled oscillators (VCOs), and propose a model for how various other spatial maps might be implemented (e.g. border cells). Our model exhibits the standard EC behaviours: grid cells, place cells, and phase precession, as borne out by theoretical computations and spiking-neuron simulations. We hope that framing this constellation of phenomena in Fourier Theory will accelerate our understanding of how the EC – and perhaps the hippocampus – encodes spatial information.

• Heterogeneity Increases Information Transmission in Neuronal Populations (Cognitive and Systems Neuroscience, 2013)

Eric Hunsberger, Matthew Scott, Chris Eliasmith

Keywords: heterogeneity, noise, population coding, stochastic resonance

• A neurocomputational model of the mammalian fear conditioning circuit (Master's Thesis, 2013)

Carter Kolbeck

Abstract: In this thesis, I present a computational neural model that reproduces the high-level behavioural results of well-known fear conditioning experiments: first-order conditioning, second-order conditioning, sensory preconditioning, context conditioning, blocking, first-order extinction and renewal (AAB, ABC, ABA), and extinction and renewal after second-order conditioning and sensory preconditioning. The simulated neural populations used to account for the behaviour observed in these experiments correspond to known anatomical regions of the mammalian brain. Parts of the amygdala, periaqueductal gray, cortex and thalamus, and hippocampus are included and are connected to each other in a biologically plausible manner. The model was built using the principles of the Neural Engineering Framework (NEF): a mathematical framework that allows information to be encoded and manipulated in populations of neurons. Each population represents information via the spiking activity of simulated neurons, and is connected to one or more other populations; these connections allow computations to be performed on the information being represented. By specifying which populations are connected to which, and what functions these connections perform, I developed an information processing system that behaves analogously to the fear conditioning circuit in the brain.

• Realistic neurons can compute the operations needed by quantum probability theory and other vector symbolic architectures (Behavioral and Brain Sciences, 2013)

Abstract: (Commentary) Quantum probability theory can be seen as a type of Vector Symbolic Architecture: mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article.

• A Neural Model of Human Image Categorization (35th Annual Conference of the Cognitive Science Society, 2013)

Keywords: category representation, image categorization, Neural Engineering Framework, vector symbolic architecture

Abstract: Although studies of categorization have been a staple of psychological research for decades, there continues to be substantial disagreement about how unique classes of objects are represented in the brain. We present a neural architecture for categorizing visual stimuli based on the Neural Engineering Framework and the manipulation of semantic pointers. The model accounts for how the visual system computes semantic representations from raw images, and how those representations are then manipulated to produce category judgments. All computations of the model are carried out in simulated spiking neurons. We demonstrate that the model matches human performance on two seminal behavioural studies of image-based concept acquisition: Posner and Keele (1968) and Regehr and Brooks (1993).

• A Neurally Plausible Encoding of Word Order Information into a Semantic Vector Space (35th Annual Conference of the Cognitive Science Society, 2013)

Keywords: semantic memory; convolution; random permutation; vector space models; distributional semantics

Abstract: Distributed models of lexical semantics increasingly incorporate information about word order. One influential method for encoding this information into high-dimensional spaces uses convolution to bind together vectors to form representations of numerous n-grams that a target word is a part of. The computational complexity of this method has led to the development of an alternative that uses random permutation to perform order-sensitive vector combinations. We describe a simplified form of order encoding with convolution that yields comparable performance to earlier models, and we discuss considerations of neural implementation that favor the use of the proposed encoding. We conclude that this new encoding method is a more neurally plausible alternative than its predecessors.

• A neural reinforcement learning model for tasks with unknown time delays (35th Annual Conference of the Cognitive Science Society, 2013)

Abstract: We present a biologically based neural model capable of performing reinforcement learning in complex tasks. The model is unique in its ability to solve tasks that require the agent to make a sequence of unrewarded actions in order to reach the goal, in an environment where there are unknown and variable time delays between actions, state transitions, and rewards. Specifically, this is the first neural model of reinforcement learning able to function within a Semi-Markov Decision Process (SMDP) framework. We believe that this extension of current modelling efforts lays the groundwork for increasingly sophisticated models of human decision making.

• Modeling brain function: Current developments and future prospects (JAMA Neurology, 2013)

Abstract: We discuss work aimed at building functional models of the whole brain implemented in large-scale simulations of millions of individual neurons. Recent developments in this area demonstrate that such models can explain a variety of behavioral, neurophysiological, and neuroanatomical data. We argue that these models hold the potential to expand our understanding of the brain by connecting these levels of analysis in new and informative ways. However, current modeling efforts fall short of the target of whole-brain modeling. Consequently, we discuss different avenues of research that continue to progress toward that distant, but achievable, goal.

• The use and abuse of large-scale brain models (Current Opinion in Neurobiology, 2013)

Abstract: We provide an overview and comparison of several recent large-scale brain models. In addition to discussing challenges involved with building large neural models, we identify several expected benefits of pursuing such a research program. We argue that these benefits are only likely to be realized if two basic guidelines are made central to the pursuit. The first is that such models need to be intimately tied to behavior. The second is that models, and more importantly their underlying methods, should provide mechanisms for varying the level of simulated detail. Consequently, we express concerns with models that insist on a 'correct' amount of detail while expecting interesting behavior to simply emerge.

• Spike-based learning of transfer functions with the SpiNNaker neuromimetic simulator (International Joint Conference on Neural Networks, 2013)

Sergio Davies, Terrence C. Stewart, Chris Eliasmith, Steve Furber

Abstract: Recent papers have shown the possibility to implement large-scale neural network models that perform complex algorithms in a biologically realistic way. However, such models have been simulated on architectures unable to perform real-time simulations. In previous work we presented the possibility to simulate simple models in real-time on the SpiNNaker neuromimetic architecture. However, such models were "static": the algorithm performed was defined at design-time. In this paper we present a novel learning rule, that exploits the peculiarities of the SpiNNaker system, enabling models designed with the Neural Engineering Framework (NEF) to learn transfer functions using a supervised framework. We show that the proposed learning rule, belonging to the Prescribed Error Sensitivity (PES) class, is able to learn both linear and non-linear functions effectively.
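
A PES-style update of the kind named above nudges each decoding weight in proportion to that neuron's activity and the current decoding error. The sketch below is illustrative (rectified-linear "rates" standing in for spikes; nothing SpiNNaker-specific): a population learns decoders online so that its output tracks a target transfer function.

```python
import numpy as np

rng = np.random.default_rng(4)
n, kappa = 50, 0.01
encoders = rng.choice([-1.0, 1.0], size=n)
gains = rng.uniform(0.5, 2.0, size=n)
biases = rng.uniform(-1.0, 1.0, size=n)

def rates(x):
    """Rectified-linear 'tuning curves' standing in for spike rates."""
    return np.maximum(gains * (encoders * x + biases), 0.0)

target_fn = lambda x: 2.0 * x          # transfer function to learn
d = np.zeros(n)                        # decoding weights, learned online

for _ in range(5000):
    x = rng.uniform(-1, 1)
    a = rates(x)
    error = target_fn(x) - d @ a
    d += kappa * error * a             # PES-style: error times presynaptic activity

test_x = np.linspace(-1, 1, 11)
err = np.mean([(target_fn(x) - d @ rates(x)) ** 2 for x in test_x])
```

Because the update uses only locally available quantities (presynaptic activity and a broadcast error signal), it suits event-driven hardware like SpiNNaker.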

• A spiking neural model of strategy shifting in a simple reaction time task (Society for Neuroscience 2012, 2012)

Trevor Bekolay, Benjamine Liu, Chris Eliasmith, Mark Laubach

Abstract: In a simple reaction-time (RT) task with predictable foreperiods, subjects employ two strategies. They either wait until the cue and then respond, or they time the foreperiod and respond when the cue should occur. Evidence for these performance strategies has been detected in rodents, humans and other primates. A key brain region for implementing these control strategies is the medial prefrontal cortex (mPFC). Neurons in this brain region show changes in firing rates around the start of trials or fire persistently during the foreperiod of simple RT tasks, and exert control over the motor system by influencing firing rates in the motor cortex during the foreperiod (Narayanan & Laubach, 2006). Here, we describe a neural circuit model based on the known neuroanatomy that reproduces the observed activity patterns in rat mPFC and exhibits adjustments in the behavioral strategy based on the subject's recent outcomes. A neural circuit based on Singh and Eliasmith, 2006 tracks the behavioural state and the time elapsed in that state. This circuit serves as a top-down controller acting on a neural control system. When the top-down control is not being exerted, the system waits for the cue and responds at cue onset. When the foreperiod can be timed, top-down control is exerted when the behavioral response is predicted to occur. These adjustments can occur at any time and do not require synaptic weight changes.

• A mechanistic model of motion processing in the early visual system (Master's Thesis, 2012)

Aziz Hurzook

Keywords: large-scale spiking model, oscillator interference, visual motion

Abstract: A prerequisite for the perception of motion in primates is the transformation of varying intensities of light on the retina into an estimation of position, direction and speed of coherent objects. The neuro-computational mechanisms relevant for object feature encoding have been thoroughly explored, with many neurally plausible models able to represent static visual scenes. However, motion estimation requires the comparison of successive scenes through time. Precisely how the necessary neural dynamics arise and how other related neural system components interoperate have yet to be shown in a large-scale, biologically realistic simulation. The proposed model simulates a spiking neural network computation for representing object velocities in cortical areas V1 and middle temporal area (MT). The essential neural dynamics, hypothesized to reside in networks of V1 simple cells, are implemented through recurrent population connections that generate oscillating spatiotemporal tunings. These oscillators produce a resonance response when stimuli move in an appropriate manner in their receptive fields. The simulation shows close agreement between the predicted and actual impulse responses from V1 simple cells using an ideal stimulus. By integrating the activities of similarly tuned V1 simple cells over space, a local measure of visual pattern velocity can be produced. This measure is also the linear weight of an associated velocity in a retinotopic map of optical flow. As a demonstration, the classic motion stimuli of drifting sinusoidal gratings and variably coherent dots are used as test stimuli and optical flow maps are generated. Vector field representations of this structure may serve as inputs for perception and decision making processes in later brain areas.

• The Neural Engineering Framework (AISB Quarterly, 2012)

Terrence C. Stewart

Abstract: The Neural Engineering Framework (NEF) is a general methodology that allows the building of large-scale, biologically plausible, neural models of cognition. The NEF acts as a neural compiler: once the properties of the neurons, the values to be represented, and the functions to be computed are specified, it solves for the connection weights between components that will perform the desired functions. Importantly, this works not only for feed-forward computations, but also for recurrent connections, allowing for complex dynamical systems including integrators, oscillators, Kalman filters, etc. The NEF also incorporates realistic local error-driven learning rules, allowing for the online adaptation and optimisation of responses. The NEF has been used to model visual attention, inductive reasoning, reinforcement learning and many other tasks. Recently, we used it to build Spaun, the world's largest functional brain model, using 2.5 million neurons to perform eight different cognitive tasks by interpreting visual input and producing hand-written output via a simulated 6-muscle arm. Our open-source software Nengo was used for all of these, and is available at http://nengo.ca, along with tutorials, demos, and downloadable models.
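The "neural compiler" step described above can be sketched numerically: given sampled tuning curves, the decoders that compute a desired function are found by regularized least squares. The toy population below (rectified-linear rates with randomly chosen gains, biases, and preferred directions) is an illustrative assumption, not the model from any of these papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: N rectified-linear neurons encoding a scalar x in [-1, 1].
# Gains, biases, and preferred directions are random illustrative choices.
N = 50
encoders = rng.choice([-1.0, 1.0], size=N)   # preferred direction of each neuron
gains = rng.uniform(0.5, 2.0, size=N)
biases = rng.uniform(0.0, 1.0, size=N)

def rates(x):
    """Population firing rates for an array of stimulus values."""
    x = np.atleast_1d(x)
    return np.maximum(0.0, gains * encoders * x[:, None] + biases)

# "Compilation" step: solve for decoders d such that A @ d ~= f(x),
# using regularized least squares over sampled activities A.
xs = np.linspace(-1, 1, 200)
A = rates(xs)                      # 200 x N activity matrix
target = xs ** 2                   # function to compute, here f(x) = x^2
reg = 0.1 * A.max()
d = np.linalg.solve(A.T @ A + reg ** 2 * np.eye(N), A.T @ target)

rms = np.sqrt(np.mean((A @ d - target) ** 2))
print("RMS decoding error:", rms)
```

Combining decoders like these with the encoders of a downstream population yields the full connection weight matrix, which is what makes the same method work for recurrent connections and dynamical systems.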

• A Technical Overview of the Neural Engineering Framework (Tech Report, 2012)

Terrence C. Stewart

Abstract: The Neural Engineering Framework (NEF) is a general methodology that allows you to build large-scale, biologically plausible, neural models of cognition. In particular, it acts as a neural compiler: you specify the properties of the neurons, the values to be represented, and the functions to be computed, and it solves for the connection weights between components that will perform the desired functions. Importantly, this works not only for feed-forward computations, but recurrent connections as well, allowing for complex dynamical systems including integrators, oscillators, Kalman filters, and so on. It also incorporates realistic local error-driven learning rules, allowing for online adaptation and optimization of responses. The NEF has been used to model visual attention, inductive reasoning, reinforcement learning, and many other tasks. Recently, we used it to build Spaun, the world's largest functional brain model, using 2.5 million neurons to perform eight different cognitive tasks by interpreting visual input and producing hand-written output via a simulated 6-muscle arm. Our open-source software Nengo was used for all of these, and is available at http://nengo.ca, along with tutorials, demos, and downloadable models.

• Learning to select actions with spiking neurons in the basal ganglia (Frontiers in Decision Neuroscience, 2012)

Abstract: We expand our existing spiking neuron model of decision making in the cortex and basal ganglia to include local learning on the synaptic connections between the cortex and striatum, modulated by a dopaminergic reward signal. We then compare this model to animal data in the bandit task, which is used to test rodent learning in conditions involving forced choice under rewards. Our results indicate a good match in terms of both behavioral learning results and spike patterns in the ventral striatum. The model successfully generalizes to learning the utilities of multiple actions, and can learn to choose different actions in different states. The purpose of our model is to provide both high-level behavioral predictions and low-level spike timing predictions while respecting known neurophysiology and neuroanatomy.

• Spaun: A Perception-Cognition-Action Model Using Spiking Neurons (Cognitive Science Society, 2012)

Abstract: We present a large-scale cognitive neural model called Spaun (Semantic Pointer Architecture: Unified Network), and show simulation results on 6 tasks (digit recognition, tracing from memory, serial working memory, question answering, addition by counting, and symbolic pattern completion). The model consists of 2.3 million spiking neurons whose neural properties, organization, and connectivity match those of the mammalian brain. Input consists of images of handwritten and typed numbers and symbols, and output is the motion of a 2 degree-of-freedom arm that writes the model's responses. Tasks can be presented in any order, with no “rewiring” of the brain for each task. Instead, the model is capable of internal cognitive control (via the basal ganglia), selectively routing information throughout the brain and recruiting different cortical components as needed for each task.

• Silicon Neurons that Compute (International Conference on Artificial Neural Networks, 2012)

Swadesh Choudhary, Steven Sloan, Sam Fok, Alexander Neckar, Eric Trautmann, Peiran Gao, Terrence C. Stewart, Chris Eliasmith, Kwabena Boahen

• A general error-modulated STDP learning rule applied to reinforcement learning in the basal ganglia (Cognitive and Systems Neuroscience, 2011)

Abstract: We present a novel error-modulated spike-timing-dependent learning rule that utilizes a global error signal and the tuning properties of neurons in a population to learn arbitrary transformations on n-dimensional signals. This rule addresses the gap between low-level spike-timing learning rules modifying individual synaptic weights and higher-level learning schemes that characterize behavioural changes in an animal. The learning rule is first analyzed in a small spiking neural network. Using the encoding/decoding framework described by Eliasmith and Anderson (2003), we show that the rule can learn linear and non-linear transformations on n-dimensional signals. The learning rule arrives at a connection weight matrix that differs significantly from the connection weight matrix found analytically by Eliasmith and Anderson's method, but performs similarly well. We then use the learning rule to augment Stewart et al.'s biologically plausible implementation of action selection in the basal ganglia (2009). Their implementation forms the "actor" module in the actor-critic reinforcement learning architecture described by Barto (1995). We add a "critic" module, inspired by the physiology of the ventral striatum, that can modulate the model's likelihood of selecting actions based on the current state and the history of rewards obtained as a result of taking certain actions in that state. Despite being a complicated model with several interconnected populations, we are able to use our learning rule without any modifications. As a result, we suggest that this rule provides a unique and biologically plausible characterization of supervised and semi-supervised learning in the brain.

• Neural Representations of Compositional Structures: Representing and Manipulating Vector Spaces with Spiking Neurons (Connection Science, 2011)

Abstract: This paper re-examines the question of localist vs. distributed neural representations using a biologically realistic framework based on the central notion of neurons having a preferred direction vector. A preferred direction vector captures the general observation that neurons fire most vigorously when the stimulus lies in a particular direction in a represented vector space. This framework has been successful in capturing a wide variety of detailed neural data, although here we focus on cognitive representation. In particular, we describe methods for constructing spiking networks that can represent and manipulate structured, symbol-like representations. In the context of such networks, neuron activities can seem both localist and distributed, depending on the space of inputs being considered. This analysis suggests that claims of a set of neurons being localist or distributed cannot be made sense of without specifying the particular stimulus set used to examine the neurons.

• The neural optimal control hierarchy for motor control (The Journal of Neural Engineering, 2011)

Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

Abstract: Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.

• A spiking neuron model of movement and pre-movement activity in M1 (Cognitive and Systems Neuroscience, 2011)

Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex, spiking

Abstract: We present a spiking neuron model of the primary motor cortex (M1) in the context of a reaching task for a 2-link arm model on the horizontal plane. The M1 population is embedded in a larger scale, hierarchical optimal control model of the motor system called NOCH (DeWolf & Eliasmith, 2010). NOCH characterizes the overall functioning of the motor system, and has been shown to reproduce natural arm movements, as well as movements resulting from perturbations due to motor system damage from Huntington's, Parkinson's, and cerebellar lesions. Here, we demonstrate that the observed dynamics of spiking neurons in awake behaving animals can be accounted for by the NOCH characterization of the motor system. To do so, the M1 neural population is provided with target information and proprioceptive feedback in end-effector space, and outputs a lower-level system command, driving the arm to the target. The implemented neural population represents a single layer of the M1 hierarchy, transforming high-level, end-effector agnostic control forces into lower-level, arm-specific joint torques. The population is preferentially responsive to areas in space that have been well explored, providing more exact control for movements that can be executed using learned movement synergies. In this way the motor cortex performs component-based movement generation, similar to recent Linear Bellman Equation (Todorov, 2009) and Hidden Markov Model (Schaal, 2009) based robotic control systems displaying high levels of robustness to complicated system dynamics, perturbations, and changing environments. We compare neural activity generated from our model of M1 to experimental data of movement and pre-movement recordings in monkeys (Churchland, 2010), providing support for our model of the primary motor cortex and for the methods underlying the more general NOCH framework.

• A neural model of rule generation in inductive reasoning (Topics in Cognitive Science, 2011)

Abstract: Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects.
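In vector-symbolic terms, one simple way to induce a transformation rule from example pairs, in the spirit of the rule-generation approach described here, is to average the unbinding of each output with its input. The circular-convolution binding scheme and dimensions below are generic illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution, the binding operation."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    """Approximate inverse under circular convolution."""
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(3)
D = 512

def unit():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

# Hidden rule: every example pair satisfies b = T_true (*) a.
T_true = unit()
examples = [(a := unit(), cconv(T_true, a)) for _ in range(5)]

# Induce the rule by averaging b (*) inv(a) across the examples;
# the noise from each example averages out.
T_hat = np.mean([cconv(b, inv(a)) for a, b in examples], axis=0)

sim = (T_hat @ T_true) / (np.linalg.norm(T_hat) * np.linalg.norm(T_true))
print("cosine similarity to hidden rule:", sim)
```

Applying the induced `T_hat` to a new input then predicts the missing cell, which is the basic operation needed to fill in a Raven's matrix item.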

• Learning in large-scale spiking neural networks (Master's Thesis, 2011)

Trevor Bekolay

Abstract: Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal; reinforcement learning, in which the agent gets sparse feedback; and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.
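The classic rate-based form of the BCM rule mentioned above can be sketched in a few lines (the thesis adapts BCM to spiking neurons; the input rates, learning rate, and threshold time constant here are placeholder choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Rate-based BCM: dw = kappa * a_pre * a_post * (a_post - theta),
# where the threshold theta slides to track the recent average of a_post^2.
w = rng.uniform(0.0, 1.0, 10)   # synaptic weights from 10 inputs
theta = 1.0                     # sliding modification threshold
kappa, tau = 1e-3, 0.1          # learning rate and threshold time constant

for _ in range(5000):
    a_pre = rng.uniform(0.0, 1.0, 10)      # presynaptic rates
    a_post = max(0.0, float(w @ a_pre))    # postsynaptic rate
    w += kappa * a_pre * a_post * (a_post - theta)
    theta += tau * (a_post ** 2 - theta)   # threshold tracks <a_post^2>
    w = np.clip(w, 0.0, None)              # keep weights non-negative

print("threshold near fixed point:", theta)
```

The sliding threshold is what stabilizes the rule: potentiation when the postsynaptic rate exceeds `theta`, depression below it, with the system settling where the average rate and threshold meet.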

• The attentional routing circuit: receptive field modulation through nonlinear dendritic interactions (Cognitive and Systems Neuroscience, 2011)

Abstract: We present a model of attentional routing called the Attentional Routing Circuit (ARC) that extends an existing model of spiking neurons with dendritic nonlinearities. Specifically, we employ the Poirazi et al. (2003) pyramidal neuron in a population coding framework. ARC demonstrates that the dendritic nonlinearities can be exploited to result in selective routing, with a decrease in the number of cells needed by a factor of ~5 as compared with a linear dendrite model. Routing of attended information occurs through the modulation of feedforward visual signals by a cortical control signal specifying the location and size of the attended target. The model is fully specified in spiking single cells. Our approach differs from past work on shifter circuits by having more efficient control, and using a more biologically detailed substrate. Our approach differs from existing models that use gain fields by providing precise hypotheses about how the control signals are generated and distributed in a hierarchical model in spiking neurons. Further, the model accounts for numerous experimental findings regarding the timing, strength and extent of attentional modulation in ventral stream areas, and the perceived contrast enhancement of attended stimuli. To further demonstrate the plausibility of ARC, it is applied to the attention experiments of Womelsdorf et al. (2008) and tested in detail. For the simulations, the model has only two free parameters that influence its ability to match the experimental data, and without fitting, we show that it can account for the experimental observations of changes in receptive field (RF) gain and position with attention in macaques. In sum, the model provides an explanation of RF modulation as well as testable predictions about nonlinear cortical dendrites and attentional changes of receptive field properties.

• A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm (Neural Information Processing Systems (NIPS) 24, 2011)

Julie Dethier, Paul Nuyujukian, Chris Eliasmith, Terrence C. Stewart, Shauki A. Elassaad, Krishna Shenoy, Kwabena Boahen

Abstract: Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
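The decoder that the SNN approximates is a standard Kalman filter; one predict/update cycle looks like the sketch below. The matrix names and dimensions are generic placeholders, not the parameters trained in the BMI experiments:

```python
import numpy as np

def kalman_step(x, P, y, A, C, W, Q):
    """One predict/update cycle of a velocity-decoding Kalman filter.
    x: state estimate (e.g. arm velocity), P: state covariance,
    y: binned firing rates observed this timestep,
    A/W: state dynamics and process noise, C/Q: observation model and noise."""
    # Predict forward through the state dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update with the neural observation.
    S = C @ P_pred @ C.T + Q               # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

In the paper this recursion is not computed explicitly: the NEF maps it onto the recurrent and feed-forward weights of a 2,000-neuron spiking network.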

• A Dynamic Account of the Structure of Concepts (Master's Thesis, 2011)

Peter Blouw

Abstract: Concepts are widely agreed to be the basic constituents of thought. Amongst philosophers and psychologists, however, the question of how concepts are structured has been a longstanding problem and a locus of disagreement. I draw on recent work describing how representational content is ascribed to populations of neurons to develop a novel solution to this problem. Because disputes over the structure of concepts often reflect divergent explanatory goals, I begin by arguing for a set of six criteria that a good theory ought to accommodate. These criteria address philosophical concerns related to content, reference, scope, publicity, and compositionality, and psychological concerns related to categorization phenomena and neural plausibility. Next, I evaluate a number of existing theoretical approaches in relation to these six criteria. I consider classical views that identify concepts with definitions, similarity-based views that identify concepts with prototypes or exemplars, theory-based views that identify concepts with explanatory schemas, and atomistic views that identify concepts with unstructured mental symbols that enter into law-like relations with their referents. I conclude that none of these accounts can satisfactorily accommodate all of the criteria. I then describe the theory of representational content that I employ to motivate a novel account of concept structure. I briefly defend this theory against competitors, and I describe how it can be scaled from the level of basic perceptual representations to the level of highly complex conceptual representations. On the basis of this description, I contend that concepts are structured dynamically through sets of transformations of a single source representation, and that the content of a given concept specifies the set of potential transformations it can enter into. I conclude by demonstrating the ability of this account to meet all of the criteria introduced beforehand. I consider objections to my views throughout.

• Automating the Nengo build process (Tech Report, 2010)

Trevor Bekolay

Abstract: Nengo is a piece of sophisticated neural modelling software used to simulate large-scale networks of biologically plausible neurons. Previously, releases of Nengo were being created manually whenever the currently released version lacked an important new feature. While unit testing existed, it was not being maintained or run regularly. Good development practices are important for Nengo because it is a complicated application with over 50,000 lines of code, and depends on dozens of third-party libraries. In addition, being an open source project, having good development practices can attract new contributors. This technical report discusses the creation and automation of a back-end for Nengo, and how it integrates with the existing mechanisms used in Nengo development. Since the primary goal of the system was to avoid disturbing developers' workflow, the typical development cycle is made explicit and it is shown that the cycle is not affected by the new automated system.

• Dynamic scaling for efficient, low-cost control of high-precision movements in large environments (Tech Report, 2010)

Travis DeWolf

Abstract: This paper presents the dynamic scaling technique (DST), a method for control in large environments that dramatically reduces the resources required to achieve highly accurate movements. The DST uses a low resolution representation of the environment to calculate an initial approximately optimal trajectory and refines the control signal as the target is neared. Simulation results are presented and the effect of representation resolution on accuracy and computational efficiency is analyzed.

• A general error-based spike-timing dependent learning rule for the Neural Engineering Framework (Tech Report, 2010)

Trevor Bekolay

Abstract: Previous attempts at integrating spike-timing dependent plasticity rules in the NEF have met with little success. This project proposes a spike-timing dependent plasticity rule that uses local information to learn transformations between populations of neurons. The rule is implemented and tested on a simple one-dimensional communication channel, and is compared to a similar rate-based learning rule.

• Using and extending plasticity rules in Nengo (Tech Report, 2010)

Trevor Bekolay

Abstract: Learning in the form of synaptic plasticity is an essential part of any neural simulation software claiming to be biologically plausible. While plasticity has been a part of Nengo from the beginning, few simulations have been created to make full use of the plasticity mechanisms built into Nengo, and as a result, they have been under-maintained. Since SVN revision 985, the way plasticity is implemented in Nengo has changed significantly. This report explains how plasticity rules are implemented since that revision, and provides examples of how to use and extend the plasticity rules currently implemented.

• Learning nonlinear functions on vectors: examples and predictions (Tech Report, 2010)

Trevor Bekolay

Abstract: One of the underlying assumptions of the Neural Engineering Framework, and of most of theoretical neuroscience, is that neurons in the brain perform functions on signals. Models of brain systems make explicit the functions that a modeller hypothesizes are being performed in the brain; the Neural Engineering Framework defines an analytical method of determining connection weight matrices between populations to perform those functions in a biologically plausible manner. With the recent implementation of general error-modulated plasticity rules in Nengo, it is now possible to start with a random connection weight matrix and learn a weight matrix that will perform an arbitrary function. This technical report confirms that this is true by showing results of learning several non-linear functions performed on vectors of various dimensionality. It also discusses trends seen in the data, and makes predictions about what we might expect when trying to learn functions on very high-dimensional signals.
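The report's setting — starting from a blank connection and learning a function with an error-modulated rule — can be illustrated with a rate-based delta rule on decoders: each decoder is nudged in proportion to the global error and that neuron's own activity. This is a simplification of the neural rule, and the population parameters and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population of rectified-linear neurons (parameters are arbitrary).
N = 100
enc = rng.choice([-1.0, 1.0], N)
gain = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(0.0, 1.0, N)

def act(x):
    return np.maximum(0.0, gain * enc * x + bias)

d = np.zeros(N)   # decoders, starting from scratch
kappa = 1e-4      # learning rate (placeholder value)

# Online error-modulated updates on randomly sampled inputs.
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0)
    a = act(x)
    error = (d @ a) - x ** 2    # target function: f(x) = x^2
    d -= kappa * error * a

xs = np.linspace(-1, 1, 100)
rms = np.sqrt(np.mean([(d @ act(x) - x ** 2) ** 2 for x in xs]))
print("RMS error after learning:", rms)
```

With enough samples this converges toward the same least-squares decoders the NEF would compute analytically, which is why a learned weight matrix can perform as well as the analytic one.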

• The Ordinal Serial Encoding Model: Serial Memory in Spiking Neurons (Master's Thesis, 2010)

Xuan Choo

Abstract: In a world dominated by temporal order, memory capable of processing, encoding and subsequently recalling ordered information is very important. Over the decades this memory, known as serial memory, has been extensively studied, and its effects are well known. Many models have also been developed, and while these models are able to reproduce the behavioural effects observed in human recall studies, they are not always implementable in a biologically plausible manner. This thesis presents the Ordinal Serial Encoding model, a model inspired by biology and designed with a broader view of general cognitive architectures in mind. This model has the advantage of simplicity, and we show how neuro-plausibility can be achieved by employing the principles of the Neural Engineering Framework in the model's design. Additionally, we demonstrate that not only is the model able to closely mirror human performance in various recall tasks, but the behaviour of the model is itself a consequence of the underlying neural architecture.

• Dynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia (10th International Conference on Cognitive Modeling, 2010)

Abstract: A fundamental process for cognition is action selection: choosing a particular action out of the many possible actions available. This process is widely believed to involve the basal ganglia, and we present here a model of action selection that uses spiking neurons and is in accordance with the connectivity and neuron types found in this area. Since the parameters of the model are set by neurological data, we can produce timing predictions for different action selection situations without requiring parameter tweaking. Our results show that, while an action can be selected in 14 milliseconds (or longer for actions with similar utilities), it requires 34-44 milliseconds to go from one simple action to the next. For complex actions (whose effect involves routing information between cortical areas), 59-73 milliseconds are needed. This suggests a change to the standard cognitive modelling approach of requiring 50 milliseconds for all types of actions.

• Symbolic Reasoning in Spiking Neurons: A Model of the Cortex/Basal Ganglia/Thalamus Loop (32nd Annual Meeting of the Cognitive Science Society, 2010)

Keywords: basal ganglia

Abstract: We present a model of symbol manipulation implemented using spiking neurons and closely tied to the anatomy of the cortex, basal ganglia, and thalamus. The model is a general-purpose neural controller which plays a role analogous to a production system. Information stored in cortex is used by the basal ganglia as the basis for selecting between a set of inferences. When an inference rule is selected, it commands the thalamus to modify and transmit information between areas of the cortex. The system supports special-case and general-purpose inferences, including the ability to remember complex statements and answer questions about them. The resulting model suggests modifications to the standard structure of production system rules, and offers a neurological explanation for the 50 millisecond cognitive cycle time.

• A Spiking Neuron Model of Serial-Order Recall (32nd Annual Conference of the Cognitive Science Society, 2010)

Abstract: Vector symbolic architectures (VSAs) have been used to model the human serial-order memory system for decades. Despite their success, however, none of these models have yet been shown to work in a spiking neuron network. In an effort to take the first step, we present a proof-of-concept VSA-based model of serial-order memory implemented in a network of spiking neurons and demonstrate its ability to successfully encode and decode item sequences. This model also provides some insight into the differences between the cognitive processes of memory encoding and subsequent recall, and establishes a firm foundation on which more complex VSA-based models of memory can be developed.
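The VSA encoding of an ordered list can be sketched with circular-convolution binding. This simple position-binding scheme is a generic VSA illustration, not the specific model of the paper:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: the VSA binding operation."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    """Approximate inverse, used for unbinding."""
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(2)
D = 512

def unit():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

# Encode the list [A, B, C] by binding items to position vectors and summing.
P1, P2, P3 = unit(), unit(), unit()
A, B, C = unit(), unit(), unit()
memory = cconv(P1, A) + cconv(P2, B) + cconv(P3, C)

# Recall position 2: unbind with P2, then compare to the vocabulary.
noisy = cconv(memory, inv(P2))
sims = {name: float(noisy @ v) for name, v in [("A", A), ("B", B), ("C", C)]}
best = max(sims, key=sims.get)
print("recalled item:", best, sims)
```

The unbound result is only noisily similar to the stored item, which is why a cleanup memory (as in the last entry below) is needed, and why recall accuracy degrades gracefully with list length.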

• Concept Based Representations for Ranking in Geographic Information Retrieval (IceTAL 2010, 2010)

Maya Carrillo, Esau Villatoro-Tello, Aurelio Lopez-Lopez, Chris Eliasmith, Luis Villasenor-Pineda, Manuel Montes-y-Gomez

• A neural modelling approach to investigating general intelligence (Master's Thesis, 2010)

Daniel Rasmussen

Abstract: One of the most well-respected and widely used tools in the study of general intelligence is the Raven's Progressive Matrices test, a nonverbal task wherein subjects must induce the rules that govern the patterns in an arrangement of shapes and figures. This thesis describes the first neurally based, biologically plausible model that can dynamically generate the rules needed to solve Raven's matrices. We demonstrate the success and generality of the rules generated by the model, as well as interesting insights the model provides into the causes of individual differences, at both a low (neural capacity) and high (subject strategy) level. Throughout this discussion we place our research within the broader context of intelligence research, seeking to understand how the investigation and modelling of Raven's Progressive Matrices can contribute to our understanding of general intelligence.

• NOCH: A framework for biologically plausible models of neural motor control (Master's Thesis, 2010)

Travis DeWolf

Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

Abstract: This thesis examines the neurobiological components of the motor control system and relates them to current control theory in order to develop a novel framework for models of motor control in the brain. The presented framework is called the Neural Optimal Control Hierarchy (NOCH). A method of accounting for low-level system dynamics with a Linear Bellman Controller (LBC) on top of a hierarchy is presented, as well as a dynamic scaling technique for LBCs that drastically reduces the computational power and storage requirements of the system. These contributions to LBC theory allow for low-cost, high-precision control of movements in large environments without exceeding the biological constraints of the motor control system.

• NOCH: A framework for biologically plausible models of neural motor control (20th Annual Neural Control of Movement Conference, 2010)

Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

Abstract: This poster presents the Neural Optimal Control Hierarchy (NOCH), a framework based on optimal control theory and hierarchical control systems that takes advantage of recent developments in the field to map function to neurobiological components of the motor control system in the brain. An implementation of the NOCH controlling an arm model is shown to mimic human arm reach trajectories and account for various kinds of damage to the brain, including Huntington's disease and cerebellar damage.

• Spiking neurons and central executive control: The origin of the 50-millisecond cognitive cycle (9th International Conference on Cognitive Modelling, 2009)

Abstract: A common feature of many cognitive architectures is a central executive control with a 50-millisecond cycle time. This system determines which action to perform next, based on the current context. We present the first model of this system using spiking neurons. Given the constraints of well-established neural time constants, a cycle time of 46.6 milliseconds emerges from our model. This assumes that the neurotransmitter used is GABA (with GABA-A receptors), the primary neurotransmitter for the basal ganglia, where this cognitive module is generally believed to be located.

• Representing Context Information for Document Retrieval (Flexible Query Answering Systems, FQAS 2009, 2009)

Maya Carrillo, Esaú Villatoro-Tello, Aurelio López-López, Chris Eliasmith, Manuel Montes-y-Gómez, Luis Villaseñor-Pineda

• Concept representations in Geographic Information Retrieval as Re-ranking Strategies (18th ACM Conference on Information and Knowledge Management, 2009)

Maya Carrillo, Esaú Villatoro-Tello, Aurelio López-López, Chris Eliasmith, Manuel Montes-y-Gómez, Luis Villaseñor-Pineda

• Sequential production and recognition of songs by songbirds through NEF (Tech Report, 2009)

Marc Hurwitz

Abstract: The paper details an NEF model of bird song production and recognition. The model neurons and overall neural structure are taken from current research on the actual neuroanatomy of the zebra finch. While the model is simplified to songs of at most three notes, it illustrates that both sequence production (easy) and sequence recognition (hard) can be constructed in the NEF. Furthermore, the model explains why specific types of neurons might be seen in the actual bird song-specific regions.

• A biologically realistic cleanup memory: Autoassociation in spiking neurons (9th International Conference on Cognitive Modelling, 2009)

Terrence C. Stewart, Yichuan Tang, Chris Eliasmith

Abstract: Methods for cleaning up (or recognizing) states of a neural network are crucial for the functioning of many neural cognitive models. For example, Vector Symbolic Architectures provide a method for manipulating symbols using a fixed-length vector representation. To recognize the result of these manipulations, a method for cleaning up the resulting noisy representation is needed, as this noise increases with the number of symbols being combined. While these manipulations have previously been modelled with biologically plausible neurons, this paper presents the first spiking neuron model of the cleanup process. We demonstrate that it approaches ideal performance and that the neural requirements scale linearly with the number of distinct symbols in the system. While this result is relevant for any biological model requiring cleanup, it is crucial for VSAs, as it completes the set of neural mechanisms needed to provide a full neural implementation of symbolic reasoning.
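The cleanup operation the abstract describes can be idealized as a nearest-neighbor lookup: a noisy vector is compared against every clean symbol vector in the vocabulary, and the best match is returned. The paper's contribution is implementing this with spiking neurons; the sketch below is only the idealized, non-spiking version of the same computation, with an assumed vocabulary of random unit vectors and an assumed noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 64, 10  # vector dimensionality, number of distinct symbols

# Vocabulary of random unit-length vectors, one per symbol
vocab = rng.standard_normal((N, D))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)

def cleanup(noisy):
    """Return the vocabulary vector most similar to the noisy input."""
    sims = vocab @ noisy  # dot-product similarity to each clean symbol
    return vocab[np.argmax(sims)]

# A noisy version of symbol 3 (noise standing in for the distortion
# introduced by VSA binding operations) is cleaned up back to symbol 3
noisy = vocab[3] + 0.1 * rng.standard_normal(D)
recovered = cleanup(noisy)
assert np.allclose(recovered, vocab[3])
```

The linear scaling result in the abstract corresponds to this structure: one comparison (in the neural model, one small population of neurons) per distinct symbol in the vocabulary.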

• Motor control in the brain (Tech Report, 2008)

Travis DeWolf

Keywords: motor control, review

Abstract: There has been much progress in the development of a model of motor control in the brain in the last decade: from the improved method for mathematically extracting the predicted movement direction from a population of neurons to the application of optimal control theory to motor control models, much work has been done to further our understanding of this area. In this paper, recent literature is reviewed and the direction of future research is examined.

• Methods for augmenting semantic models with structural information for text classification (Advances in Information Retrieval, 2008)

Jonathan M. Fishbein, Chris Eliasmith

Abstract: Current representation schemes for automatic text classification treat documents as syntactically unstructured collections of words or concepts. Past attempts to encode syntactic structure have treated part-of-speech information as another word-like feature, but have been shown to be less effective than non-structural approaches. Here, we investigate three methods to augment semantic modelling with syntactic structure, which encode the structure across all features of the document vector while preserving text semantics. We present classification results for these methods versus the Bag-of-Concepts semantic modelling representation to determine which method best improves classification scores.

• Is the brain a quantum computer? (Cognitive Science, 2006)

Abninder Litt, Chris Eliasmith, Frederick W. Kroon, Steven Weinstein, Paul Thagard