• Mapping Arbitrary Mathematical Functions and Dynamical Systems to Neuromorphic VLSI Circuits for Spike-Based Neural Computation (IEEE International Symposium on Circuits and Systems (ISCAS), 2014)

    Federico Corradi, Chris Eliasmith, Giacomo Indiveri

    Keywords: neuromorphic, robotics, NEF, SpiNNaker

    Abstract: Brain-inspired, spike-based computation in electronic systems is being investigated for developing alternative, non-conventional computing technologies. The Neural Engineering Framework provides a method for programming these devices to implement computation. In this paper we apply this approach to perform arbitrary mathematical computation using a mixed-signal analog/digital neuromorphic multi-neuron VLSI chip. This is achieved by means of a network of spiking neurons with multiple weighted connections. The synaptic weights are stored in a 4-bit on-chip programmable SRAM block. We propose a parallel event-based method for appropriately calibrating the synaptic weights and demonstrate the method by encoding and decoding arbitrary mathematical functions, and by implementing dynamical systems via recurrent connections.

  • Event-based neural computing on an autonomous mobile platform (Proceedings of IEEE International Conference on Robotics and Automation (ICRA), 2014)

    Francesco Galluppi, Christian Denk, Matthias Meiner, Terrence C. Stewart, Luis Plana, Chris Eliasmith, Steve Furber, Jörg Conradt

    Keywords: neuromorphic, robotics, NEF, SpiNNaker

    Abstract: Living organisms are capable of autonomously adapting to dynamically changing environments by receiving inputs from highly specialized sensory organs and elaborating them on the same parallel, power-efficient neural substrate. In this paper we present a prototype for a comprehensive integrated platform that allows replicating principles of neural information processing in real-time. Our system consists of (a) an autonomous mobile robotic platform, (b) on-board actuators and multiple (neuromorphic) sensors, and (c) the SpiNNaker computing system, a configurable neural architecture for exploration of parallel, brain-inspired models. The simulation of neurally inspired perception and reasoning algorithms is performed in real-time by distributed, low-power, low-latency event-driven computing nodes, which can be flexibly configured using C or specialized neural languages such as PyNN and Nengo. We conclude by demonstrating the platform in two experimental scenarios, exhibiting real-world closed loop behavior consisting of environmental perception, reasoning and execution of adequate motor actions.

  • Nengo: A Python tool for building large-scale functional brain models (Frontiers in Neuroinformatics, 2014)

    Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C. Stewart, Daniel Rasmussen, Xuan Choo, Aaron Russell Voelker, Chris Eliasmith

    Abstract: Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
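
    As a taste of the "simple and extendable syntax" the abstract refers to, here is a minimal Nengo 2.0 model. This is a toy communication channel of our own devising, not an example taken from the paper:

        import nengo
        import numpy as np

        model = nengo.Network(label="communication channel")
        with model:
            stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
            a = nengo.Ensemble(n_neurons=100, dimensions=1)     # encodes the input
            b = nengo.Ensemble(n_neurons=100, dimensions=1)     # receives a's estimate
            nengo.Connection(stim, a)
            nengo.Connection(a, b)                # weights solved for by the NEF
            probe = nengo.Probe(b, synapse=0.01)  # filtered, decoded output

        with nengo.Simulator(model) as sim:       # build and run for one second
            sim.run(1.0)
        # sim.data[probe] now holds b's decoded estimate of sin(2*pi*t)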

  • A spiking neural integrator model of the adaptive control of action by the medial prefrontal cortex (The Journal of Neuroscience, 2014)

    Trevor Bekolay, Mark Laubach, Chris Eliasmith

    Abstract: Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor for determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with similar reaction times as experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.

  • Simultaneous unsupervised and supervised learning of cognitive functions in biologically plausible spiking neural networks (35th Annual Conference of the Cognitive Science Society, 2013)

    Trevor Bekolay, Carter Kolbeck, Chris Eliasmith

    Abstract: We present a novel learning rule for learning transformations of sophisticated neural representations in a biologically plausible manner. We show that the rule can learn to transmit and bind semantic pointers. Semantic pointers have previously been used to build Spaun, which is currently the world's largest functional brain model (Eliasmith et al., 2012) and can perform several complex cognitive tasks. The learning rule combines a previously proposed supervised learning rule and a novel spiking form of the BCM unsupervised learning rule. We show that spiking BCM increases sparsity of connection weights at the cost of increased signal transmission error. We demonstrate that the combined learning rule can learn transformations as well as the supervised rule alone, and as well as the offline optimization used previously. We also demonstrate that the combined learning rule is more robust to changes in parameters and leads to better outcomes in higher dimensional spaces.
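
    For readers unfamiliar with BCM, a rate-based sketch of the classical rule may help; the paper's contribution is a spiking variant combined with a supervised rule, but the core update looks roughly like this (shapes and constants are our illustrative assumptions):

        import numpy as np

        def bcm_step(w, pre, post, theta, kappa=1e-5, decay=0.9):
            """One BCM update for a weight matrix w of shape (n_post, n_pre).

            Weights grow when postsynaptic activity exceeds a sliding
            threshold theta that tracks recent activity, and shrink otherwise,
            which drives the sparsity effect noted in the abstract.
            """
            theta = decay * theta + (1 - decay) * post ** 2   # sliding threshold
            w += kappa * np.outer(post * (post - theta), pre)
            return w, theta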

  • Biologically Plausible, Human-scale Knowledge Representation (35th Annual Conference of the Cognitive Science Society, 2013)

    Eric Crawford, Matthew Gingerich, Chris Eliasmith

    Keywords: cleanup memory, knowledge representation, Semantic Pointer Architecture, vector symbolic architecture, WordNet

    Abstract: Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde, 1993), mesh binding (van der Velde & de Kamps, 2006), and conjunctive binding (Smolensky, 1990; Plate, 2003). Recent theoretical work has suggested that most of these methods will not scale well -- that is, they cannot encode structured representations that use any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions (Stewart & Eliasmith, 2011; Eliasmith, 2013). Here we present an approach that will scale appropriately, and which is based on neurally implementing a type of Vector Symbolic Architecture (VSA). Specifically, we construct a spiking neural network composed of about 2.5 million neurons that employs a VSA to encode and decode the main lexical relations in WordNet, a semantic network containing over 100,000 concepts (Fellbaum, 1998). We experimentally demonstrate the capabilities of our model by measuring its performance on three tasks which test its ability to accurately traverse the WordNet hierarchy, as well as to decode sentences employing any WordNet term while preserving the original lexical structure. We argue that these results show that our approach is uniquely well-suited to providing a biologically plausible, human-scale account of the structured representations that underwrite cognition.
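
    The conjunctive binding operation at the heart of this kind of VSA is circular convolution (Plate, 2003). A small numpy illustration of binding and approximate unbinding, with toy dimensionality and vocabulary of our own choosing:

        import numpy as np

        def cconv(a, b):
            """Circular convolution: binds two vectors into one of the same size."""
            return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

        def involution(a):
            """Approximate inverse of a vector under circular convolution."""
            return np.concatenate(([a[0]], a[:0:-1]))

        d = 512                                  # dimensionality of the space
        rng = np.random.default_rng(0)
        isA, mammal = rng.normal(0, 1 / np.sqrt(d), (2, d))

        trace = cconv(isA, mammal)               # encode the relation isA * mammal
        decoded = cconv(trace, involution(isA))  # unbind: recover ~mammal
        print(np.dot(decoded, mammal))           # similarity close to 1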

  • General Instruction Following in a Large-Scale Biologically Plausible Brain Model (35th Annual Conference of the Cognitive Science Society, 2013)

    Xuan Choo, Chris Eliasmith

    Abstract: We present a spiking neuron brain model implemented in 318,870 LIF neurons organized with distinct cortical modules, a basal ganglia, and a thalamus, that is capable of flexibly following memorized commands. Neural activity represents a structured set of rules, such as "If you see a 1, then push button A, and if you see a 2, then push button B". Synaptic connections between these neurons and the basal ganglia, thalamus, and other areas cause the system to detect when rules should be applied and to then do so. The model gives a reaction time difference of 77 ms between the simple and two-choice reaction time tasks, and requires 384 ms per item for sub-vocal counting, consistent with human experimental results. This is the first biologically realistic spiking neuron model capable of flexibly responding to complex structured instructions.

  • Spike-based learning of transfer functions with the SpiNNaker neuromimetic simulator (International Joint Conference on Neural Networks, 2013)

    Sergio Davies, Terrence C. Stewart, Chris Eliasmith, Steve Furber

    Abstract: Recent papers have shown the possibility of implementing large-scale neural network models that perform complex algorithms in a biologically realistic way. However, such models have been simulated on architectures unable to perform real-time simulations. In previous work we presented the possibility of simulating simple models in real-time on the SpiNNaker neuromimetic architecture. However, such models were "static": the algorithm performed was defined at design-time. In this paper we present a novel learning rule that exploits the peculiarities of the SpiNNaker system, enabling models designed with the Neural Engineering Framework (NEF) to learn transfer functions using a supervised framework. We show that the proposed learning rule, belonging to the Prescribed Error Sensitivity (PES) class, is able to learn both linear and non-linear functions effectively.
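
    In decoder space, rules of the PES class reduce to a very small update: each neuron's decoder moves along the global error signal in proportion to that neuron's activity. A hedged numpy sketch (shapes and learning rate are illustrative, not SpiNNaker-specific):

        import numpy as np

        def pes_step(decoders, activities, error, kappa=1e-4):
            """One PES update.

            decoders   : (n_neurons, dims) current decoding weights
            activities : (n_neurons,) instantaneous firing rates
            error      : (dims,) decoded estimate minus target value
            """
            decoders -= kappa * np.outer(activities, error)
            return decoders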

  • A Biologically Plausible Spiking Neuron Model of Fear Conditioning (ICCM, 2013)

    Carter Kolbeck, Trevor Bekolay, Chris Eliasmith

    Abstract: Reinforcement learning based on rewarding or aversive stimuli is critical to understanding the adaptation of cognitive systems. One of the most basic and well-studied forms of reinforcement learning in mammals is found in fear conditioning. We present a biologically plausible spiking neuron model of mammalian fear conditioning and show that the model is capable of reproducing the results of four well known fear conditioning experiments (conditioning, second-order conditioning, blocking, and context-dependent extinction and renewal). The model contains approximately 2000 spiking neurons which make up various populations of primarily the amygdala, periaqueductal gray, and hippocampus. The connectivity and organization of these populations follows what is known about the fear conditioning circuit in mammalian brains. Input to the model is made up of populations representing sensory stimuli, contextual information, and electric shock, while the output is a population representing an autonomic fear response: freezing. Using a novel learning rule for spiking neurons, associations are learned between cues, contexts, and the aversive shock, reproducing the behaviors seen in rats during fear conditioning experiments.

  • A neural model of the development of expertise (The 12th International Conference on Cognitive Modelling, 2013)

    Travis DeWolf, Chris Eliasmith

    Keywords: motor control, automaticity, expertise, procedural learning, basal ganglia, motor cortex

    Abstract: The ability to develop expertise through practice is a hallmark of biological systems, for both cognitive and motor based skills. At first, animals exhibit high variability and perform slowly, reliant on feedback signals constantly evaluating performance. With practice, the system develops a proficiency and consistency in skill execution, reflected in an increase in the associated cortical area (Pascual-Leone, 1995). Here we present a neural model of this expertise development. In the model, initial attempts at performing a task are based on generalizing previously learned control signals, which we refer to generically as 'actions', stored in the cortex. The basal ganglia evaluates these actions and modulates their contributions to the output signal, creating a novel action that performs the desired task. With repeated performance, the cortex learns to generate this action on its own, eventually developing an explicit representation of the action that can be called directly. This transference allows the system to more quickly and consistently execute the task, reflecting development of expertise. We present simulation results matching both behavioral and single cell spiking data.

  • A Neural Model of Human Image Categorization (35th Annual Conference of the Cognitive Science Society, 2013)

    Eric Hunsberger, Peter Blouw, James Bergstra, Chris Eliasmith

    Keywords: category representation, image categorization, Neural Engineering Framework, vector symbolic architecture

    Abstract: Although studies of categorization have been a staple of psychological research for decades, there continues to be substantial disagreement about how unique classes of objects are represented in the brain. We present a neural architecture for categorizing visual stimuli based on the Neural Engineering Framework and the manipulation of semantic pointers. The model accounts for how the visual system computes semantic representations from raw images, and how those representations are then manipulated to produce category judgments. All computations of the model are carried out in simulated spiking neurons. We demonstrate that the model matches human performance on two seminal behavioural studies of image-based concept acquisition: Posner and Keele (1968) and Regehr and Brooks (1993).

  • A Neurally Plausible Encoding of Word Order Information into a Semantic Vector Space (35th Annual Conference of the Cognitive Science Society, 2013)

    Peter Blouw, Chris Eliasmith

    Keywords: semantic memory, convolution, random permutation, vector space models, distributional semantics

    Abstract: Distributed models of lexical semantics increasingly incorporate information about word order. One influential method for encoding this information into high-dimensional spaces uses convolution to bind together vectors to form representations of numerous n-grams that a target word is a part of. The computational complexity of this method has led to the development of an alternative that uses random permutation to perform order-sensitive vector combinations. We describe a simplified form of order encoding with convolution that yields comparable performance to earlier models, and we discuss considerations of neural implementation that favor the use of the proposed encoding. We conclude that this new encoding method is a more neurally plausible alternative than its predecessors.

  • A neurocomputational model of the mammalian fear conditioning circuit (Thesis, 2013)

    Carter Kolbeck

    Abstract: In this thesis, I present a computational neural model that reproduces the high-level behavioural results of well-known fear conditioning experiments: first-order conditioning, second-order conditioning, sensory preconditioning, context conditioning, blocking, first-order extinction and renewal (AAB, ABC, ABA), and extinction and renewal after second-order conditioning and sensory preconditioning. The simulated neural populations used to account for the behaviour observed in these experiments correspond to known anatomical regions of the mammalian brain. Parts of the amygdala, periaqueductal gray, cortex and thalamus, and hippocampus are included and are connected to each other in a biologically plausible manner. The model was built using the principles of the Neural Engineering Framework (NEF): a mathematical framework that allows information to be encoded and manipulated in populations of neurons. Each population represents information via the spiking activity of simulated neurons, and is connected to one or more other populations; these connections allow computations to be performed on the information being represented. By specifying which populations are connected to which, and what functions these connections perform, I developed an information processing system that behaves analogously to the fear conditioning circuit in the brain.

  • A neural reinforcement learning model for tasks with unknown time delays (35th Annual Conference of the Cognitive Science Society, 2013)

    Daniel Rasmussen, Chris Eliasmith

    Abstract: We present a biologically based neural model capable of performing reinforcement learning in complex tasks. The model is unique in its ability to solve tasks that require the agent to make a sequence of unrewarded actions in order to reach the goal, in an environment where there are unknown and variable time delays between actions, state transitions, and rewards. Specifically, this is the first neural model of reinforcement learning able to function within a Semi-Markov Decision Process (SMDP) framework. We believe that this extension of current modelling efforts lays the groundwork for increasingly sophisticated models of human decision making.

  • Modeling brain function: Current developments and future prospects (JAMA Neurology, 2013)

    Daniel Rasmussen, Chris Eliasmith

    Abstract: We discuss work aimed at building functional models of the whole brain implemented in large-scale simulations of millions of individual neurons. Recent developments in this area demonstrate that such models can explain a variety of behavioral, neurophysiological, and neuroanatomical data. We argue that these models hold the potential to expand our understanding of the brain by connecting these levels of analysis in new and informative ways. However, current modeling efforts fall short of the target of whole-brain modeling. Consequently, we discuss different avenues of research that continue to progress toward that distant, but achievable, goal.

  • Visual motion processing and perceptual decision making (35th Annual Conference of the Cognitive Science Society, 2013)

    Aziz Hurzook, Oliver Trujillo, Chris Eliasmith

    Abstract: We present a novel, biologically plausible model of visual motion processing and perceptual decision making that is independent of the number of choice categories or alternatives. The implementation is a large-scale spiking neural circuit consisting of: 1) a velocity filter using the principle of oscillator interference to determine the direction and speed of pattern motion in V1; 2) a representation of motion evidence in the middle temporal area (MT); and 3) integration of sensory evidence over time by a higher-dimensional attractor network in the lateral intraparietal area (LIP). We demonstrate the model by reproducing behavioral and neural results from classic perceptual decision making experiments that test the perceived direction of motion of variable coherence dot kinetograms. Specifically, these results capture monkey data from two-alternative forced-choice motion decision tests. We note that without any reconfiguration of the circuit, the implementation can be used to make decisions among a continuum of alternatives.

  • The use and abuse of large-scale brain models (Current Opinion in Neurobiology, 2013)

    Chris Eliasmith, Oliver Trujillo

    Abstract: We provide an overview and comparison of several recent large-scale brain models. In addition to discussing challenges involved with building large neural models, we identify several expected benefits of pursuing such a research program. We argue that these benefits are only likely to be realized if two basic guidelines are made central to the pursuit. The first is that such models need to be intimately tied to behavior. The second is that models, and more importantly their underlying methods, should provide mechanisms for varying the level of simulated detail. Consequently, we express concerns with models that insist on a 'correct' amount of detail while expecting interesting behavior to simply emerge.

  • Realistic neurons can compute the operations needed by quantum probability theory and other vector symbolic architectures (Behavioral and Brain Sciences, 2013)

    Terrence C. Stewart, Chris Eliasmith

    Abstract: (Commentary) Quantum probability theory can be seen as a type of Vector Symbolic Architecture: mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article.

  • Parsing Sequentially Presented Commands in a Large-Scale Biologically Realistic Brain Model (35th Annual Conference of the Cognitive Science Society, 2013)

    Terrence C. Stewart, Chris Eliasmith

    Abstract: We present a neural mechanism for interpreting and executing visually presented commands. These are simple verb-noun commands (such as WRITE THREE) and can also include conditionals ([if] SEE SEVEN, [then] WRITE THREE). We apply this to a simplified version of our large-scale functional brain model "Spaun", where input is a 28x28 pixel visual stimulus, with a different pattern for each word. Output controls a simulated arm, giving hand-written answers. Cortical areas for categorizing, storing, and interpreting information are controlled by the basal ganglia (action selection) and thalamus (routing). The final model has approximately 100,000 LIF spiking neurons. We show that the model is extremely robust to neural damage (40 percent of neurons can be destroyed before performance drops significantly). Performance also drops for visual display times less than 250ms. Importantly, the system also scales to large vocabularies (approximately 100,000 nouns and verbs) without requiring an exponentially large number of neurons.

  • God, the devil, and details: Fleshing out the predictive processing framework (commentary on Clark) (Behavioral and Brain Sciences, 2013)

    Daniel Rasmussen, Chris Eliasmith

    Keywords: cognitive architecture

    Abstract: The predictive processing framework lacks many of the architectural and implementational details needed to fully investigate or evaluate the ideas it presents. One way to begin to fill in these details is by turning to standard control-theoretic descriptions of these types of systems (e.g., Kalman filters), and by building complex, unified computational models in biologically realistic neural simulations.

  • Does the Entorhinal Cortex use the Fourier Transform? (Tech Report, 2013)

    Jeff Orchard, Hao Yang, Xiang Ji

    Keywords: Fourier, Neural Engineering Framework, oscillators, path integration

    Abstract: In 2005, Hafting et al. reported that some neurons in the entorhinal cortex (EC) fire bursts when the animal occupies locations organized in a hexagonal grid pattern in their spatial environment. Previous to that, place cells had been observed, firing bursts only when the animal occupied a particular region of the environment. Both of these types of cells exhibit theta-cycle modulation, firing bursts in the 4-12 Hz range. In particular, grid cells fire bursts of action potentials that precess with respect to the theta cycle, a phenomenon dubbed "theta precession". Since then, various models have been proposed to explain the relationship between grid cells, place cells, and theta precession. However, most models have lacked a fundamental, overarching framework. As a reformulation of the pioneering work of Welday et al. (2011), we propose that the EC is implementing its spatial coding using the Fourier Transform. We show how the Fourier Shift Theorem relates to the phases of velocity-controlled oscillators (VCOs), and propose a model for how various other spatial maps might be implemented (e.g., border cells). Our model exhibits the standard EC behaviours: grid cells, place cells, and phase precession, as borne out by theoretical computations and spiking-neuron simulations. We hope that framing this constellation of phenomena in Fourier Theory will accelerate our understanding of how the EC -- and perhaps the hippocampus -- encodes spatial information.

  • Spaun: A Perception-Cognition-Action Model Using Spiking Neurons (Cognitive Science Society, 2012)

    Terrence C. Stewart, Xuan Choo, Chris Eliasmith

    Abstract: We present a large-scale cognitive neural model called Spaun (Semantic Pointer Architecture: Unified Network), and show simulation results on 6 tasks (digit recognition, tracing from memory, serial working memory, question answering, addition by counting, and symbolic pattern completion). The model consists of 2.3 million spiking neurons whose neural properties, organization, and connectivity match that of the mammalian brain. Input consists of images of handwritten and typed numbers and symbols, and output is the motion of a 2 degree-of-freedom arm that writes the model's responses. Tasks can be presented in any order, with no "rewiring" of the brain for each task. Instead, the model is capable of internal cognitive control (via the basal ganglia), selectively routing information throughout the brain and recruiting different cortical components as needed for each task.

  • A spiking neural model of strategy shifting in a simple reaction time task (Society for Neuroscience 2012, 2012)

    Trevor Bekolay, Benjamine Liu, Chris Eliasmith, Mark Laubach

    Abstract: In a simple reaction-time (RT) task with predictable foreperiods, subjects employ two strategies. They either wait until the cue and then respond, or they time the foreperiod and respond when the cue should occur. Evidence for these performance strategies has been detected in rodents, humans and other primates. A key brain region for implementing these control strategies is the medial prefrontal cortex (mPFC). Neurons in this brain region show changes in firing rates around the start of trials or fire persistently during the foreperiod of simple RT tasks, and exert control over the motor system by influencing firing rates in the motor cortex during the foreperiod activity (Narayanan & Laubach, 2006). Here, we describe a neural circuit model based on the known neuroanatomy that reproduces the observed activity patterns in rat mPFC and exhibits adjustments in the behavioral strategy based on the subject's recent outcomes. A neural circuit based on Singh and Eliasmith (2006) tracks the behavioural state and the time elapsed in that state. This circuit serves as a top-down controller acting on a neural control system. When the top-down control is not being exerted, the system waits for the cue and responds at cue onset. When the foreperiod can be timed, top-down control is exerted when the behavioral response is predicted to occur. These adjustments can occur at any time and do not require synaptic weight changes.

  • The Neural Engineering Framework (AISB Quarterly, 2012)

    Terrence C. Stewart

    Abstract: The Neural Engineering Framework (NEF) is a general methodology that allows the building of large-scale, biologically plausible, neural models of cognition. The NEF acts as a neural compiler: once the properties of the neurons, the values to be represented, and the functions to be computed are specified, it solves for the connection weights between components that will perform the desired functions. Importantly, this works not only for feed-forward computations, but also for recurrent connections, allowing for complex dynamical systems including integrators, oscillators, Kalman filters, etc. The NEF also incorporates realistic local error-driven learning rules, allowing for the online adaptation and optimisation of responses. The NEF has been used to model visual attention, inductive reasoning, reinforcement learning and many other tasks. Recently, we used it to build Spaun, the world's largest functional brain model, using 2.5 million neurons to perform eight different cognitive tasks by interpreting visual input and producing hand-written output via a simulated 6-muscle arm. Our open-source software Nengo was used for all of these, and is available at http://nengo.ca, along with tutorials, demos, and downloadable models.

  • A Technical Overview of the Neural Engineering Framework (Tech Report, 2012)

    Terrence C. Stewart

    Abstract: The Neural Engineering Framework (NEF) is a general methodology that allows you to build large-scale, biologically plausible, neural models of cognition. In particular, it acts as a neural compiler: you specify the properties of the neurons, the values to be represented, and the functions to be computed, and it solves for the connection weights between components that will perform the desired functions. Importantly, this works not only for feed-forward computations, but recurrent connections as well, allowing for complex dynamical systems including integrators, oscillators, Kalman filters, and so on. It also incorporates realistic local error-driven learning rules, allowing for online adaptation and optimization of responses. The NEF has been used to model visual attention, inductive reasoning, reinforcement learning, and many other tasks. Recently, we used it to build Spaun, the world's largest functional brain model, using 2.5 million neurons to perform eight different cognitive tasks by interpreting visual input and producing hand-written output via a simulated 6-muscle arm. Our open-source software Nengo was used for all of these, and is available at http://nengo.ca, along with tutorials, demos, and downloadable models.
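
    The "neural compiler" step described here comes down to a regularized least-squares solve for linear decoders over sampled tuning curves. A self-contained 1-D sketch, using simplified rectified-linear tuning and parameter ranges of our own choosing:

        import numpy as np

        rng = np.random.default_rng(1)
        n, pts = 50, 200
        x = np.linspace(-1, 1, pts)              # sampled represented values
        encoders = rng.choice([-1.0, 1.0], n)    # preferred directions (1-D)
        gains = rng.uniform(0.5, 2.0, n)
        biases = rng.uniform(-1.0, 1.0, n)

        # Tuning curves: A[i, j] is the activity of neuron i at value x[j]
        A = np.maximum(0, gains[:, None] * encoders[:, None] * x[None, :] + biases[:, None])

        target = x ** 2                          # the function to be decoded
        reg = 0.1 * A.max()                      # regularization against noise
        decoders = np.linalg.solve(A @ A.T + reg ** 2 * pts * np.eye(n), A @ target)
        estimate = decoders @ A                  # decoded approximation of x^2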

  • Learning to select actions with spiking neurons in the basal ganglia (Frontiers in Decision Neuroscience, 2012)

    Terrence C. Stewart, Trevor Bekolay, Chris Eliasmith

    Abstract: We expand our existing spiking neuron model of decision making in the cortex and basal ganglia to include local learning on the synaptic connections between the cortex and striatum, modulated by a dopaminergic reward signal. We then compare this model to animal data in the bandit task, which is used to test rodent learning in conditions involving forced choice under rewards. Our results indicate a good match in terms of both behavioral learning results and spike patterns in the ventral striatum. The model successfully generalizes to learning the utilities of multiple actions, and can learn to choose different actions in different states. The purpose of our model is to provide both high-level behavioral predictions and low-level spike timing predictions while respecting known neurophysiology and neuroanatomy.

  • Silicon Neurons that Compute (International Conference on Artificial Neural Networks, 2012)

    Swadesh Choudhary, Steven Sloan, Sam Fok, Alexander Neckar, Eric Trautmann, Peiran Gao, Terrence C. Stewart, Chris Eliasmith, Kwabena Boahen

  • A mechanistic model of motion processing in the early visual system (Thesis, 2012)

    Aziz Hurzook

    Keywords: large-scale spiking model, oscillator interference, visual motion

    Abstract: A prerequisite for the perception of motion in primates is the transformation of varying intensities of light on the retina into an estimation of position, direction and speed of coherent objects. The neuro-computational mechanisms relevant for object feature encoding have been thoroughly explored, with many neurally plausible models able to represent static visual scenes. However, motion estimation requires the comparison of successive scenes through time. Precisely how the necessary neural dynamics arise and how other related neural system components interoperate have yet to be shown in a large-scale, biologically realistic simulation. The proposed model simulates a spiking neural network computation for representing object velocities in cortical areas V1 and middle temporal area (MT). The essential neural dynamics, hypothesized to reside in networks of V1 simple cells, are implemented through recurrent population connections that generate oscillating spatiotemporal tunings. These oscillators produce a resonance response when stimuli move in an appropriate manner in their receptive fields. The simulation shows close agreement between the predicted and actual impulse responses from V1 simple cells using an ideal stimulus. By integrating the activities of like V1 simple cells over space, a local measure of visual pattern velocity can be produced. This measure is also the linear weight of an associated velocity in a retinotopic map of optical flow. As a demonstration, the classic motion stimuli of drifting sinusoidal gratings and variably coherent dots are used as test stimuli and optical flow maps are generated. Vector field representations of this structure may serve as inputs for perception and decision making processes in later brain areas.

  • The neural optimal control hierarchy for motor control (The Journal of Neural Engineering, 2011)

    Travis DeWolf, Chris Eliasmith

    Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

    Abstract: Our empirical, neuroscientific understanding of biological motor systems has been rapidly growing in recent years. However, this understanding has not been systematically mapped to a quantitative characterization of motor control based in control theory. Here, we attempt to bridge this gap by describing the neural optimal control hierarchy (NOCH), which can serve as a foundation for biologically plausible models of neural motor control. The NOCH has been constructed by taking recent control theoretic models of motor control, analyzing the required processes, generating neurally plausible equivalent calculations and mapping them on to the neural structures that have been empirically identified to form the anatomical basis of motor control. We demonstrate the utility of the NOCH by constructing a simple model based on the identified principles and testing it in two ways. First, we perturb specific anatomical elements of the model and compare the resulting motor behavior with clinical data in which the corresponding area of the brain has been damaged. We show that damaging the assigned functions of the basal ganglia and cerebellum can cause the movement deficiencies seen in patients with Huntington's disease and cerebellar lesions. Second, we demonstrate that single spiking neuron data from our model's motor cortical areas explain major features of single-cell responses recorded from the same primate areas. We suggest that together these results show how NOCH-based models can be used to unify a broad range of data relevant to biological motor control in a quantitative, control theoretic framework.

  • The attentional routing circuit: receptive field modulation through nonlinear dendritic interactions (Cognitive and Systems Neuroscience, 2011)

    Bruce Bobier, Terrence C. Stewart, Chris Eliasmith

    Abstract: We present a model of attentional routing called the Attentional Routing Circuit (ARC) that extends an existing model of spiking neurons with dendritic nonlinearities. Specifically, we employ the Poirazi et al. (2003) pyramidal neuron in a population coding framework. ARC demonstrates that the dendritic nonlinearities can be exploited to result in selective routing, with a decrease in the number of cells needed by a factor of ~5 as compared with a linear dendrite model. Routing of attended information occurs through the modulation of feedforward visual signals by a cortical control signal specifying the location and size of the attended target. The model is fully specified in spiking single cells. Our approach differs from past work on shifter circuits by having more efficient control, and using a more biologically detailed substrate. Our approach differs from existing models that use gain fields by providing precise hypotheses about how the control signals are generated and distributed in a hierarchical model in spiking neurons. Further, the model accounts for numerous experimental findings regarding the timing, strength and extent of attentional modulation in ventral stream areas, and the perceived contrast enhancement of attended stimuli. To further demonstrate the plausibility of ARC, it is applied to the attention experiments of Womelsdorf et al. (2008) and tested in detail. For the simulations, the model has only two free parameters that influence its ability to match the experimental data, and without fitting, we show that it can account for the experimental observations of changes in receptive field (RF) gain and position with attention in macaques. In sum, the model provides an explanation of RF modulation as well as testable predictions about nonlinear cortical dendrites and attentional changes of receptive field properties.

  • A general error-modulated STDP learning rule applied to reinforcement learning in the basal ganglia (Cognitive and Systems Neuroscience, 2011)

    Trevor Bekolay, Chris Eliasmith

    Abstract: We present a novel error-modulated spike-timing-dependent learning rule that utilizes a global error signal and the tuning properties of neurons in a population to learn arbitrary transformations on n-dimensional signals. This rule addresses the gap between low-level spike-timing learning rules modifying individual synaptic weights and higher-level learning schemes that characterize behavioural changes in an animal. The learning rule is first analyzed in a small spiking neural network. Using the encoding/decoding framework described by Eliasmith and Anderson (2003), we show that the rule can learn linear and non-linear transformations on n-dimensional signals. The learning rule arrives at a connection weight matrix that differs significantly from the connection weight matrix found analytically by Eliasmith and Anderson's method, but performs similarly well. We then use the learning rule to augment Stewart et al.'s biologically plausible implementation of action selection in the basal ganglia (2009). Their implementation forms the "actor" module in the actor-critic reinforcement learning architecture described by Barto (1995). We add a "critic" module, inspired by the physiology of the ventral striatum, that can modulate the model's likelihood of selecting actions based on the current state and the history of rewards obtained as a result of taking certain actions in that state. Despite being a complicated model with several interconnected populations, we are able to use our learning rule without any modifications. As a result, we suggest that this rule provides a unique and biologically plausible characterization of supervised and semi-supervised learning in the brain.

  • Learning in large-scale spiking neural networks (Thesis, 2011)

    Trevor Bekolay

    Abstract: Learning is central to the exploration of intelligence. Psychology and machine learning provide high-level explanations of how rational agents learn. Neuroscience provides low-level descriptions of how the brain changes as a result of learning. This thesis attempts to bridge the gap between these two levels of description by solving problems using machine learning ideas, implemented in biologically plausible spiking neural networks with experimentally supported learning rules. We present three novel neural models that contribute to the understanding of how the brain might solve the three main problems posed by machine learning: supervised learning, in which the rational agent has a fine-grained feedback signal, reinforcement learning, in which the agent gets sparse feedback, and unsupervised learning, in which the agent has no explicit environmental feedback. In supervised learning, we argue that previous models of supervised learning in spiking neural networks solve a problem that is less general than the supervised learning problem posed by machine learning. We use an existing learning rule to solve the general supervised learning problem with a spiking neural network. We show that the learning rule can be mapped onto the well-known backpropagation rule used in artificial neural networks. In reinforcement learning, we augment an existing model of the basal ganglia to implement a simple actor-critic model that has a direct mapping to brain areas. The model is used to recreate behavioural and neural results from an experimental study of rats performing a simple reinforcement learning task. In unsupervised learning, we show that the BCM rule, a common learning rule used in unsupervised learning with rate-based neurons, can be adapted to a spiking neural network. We recreate the effects of STDP, a learning rule with strict time dependencies, using BCM, which does not explicitly remember the times of previous spikes. The simulations suggest that BCM is a more general rule than STDP. Finally, we propose a novel learning rule that can be used in all three of these simulations. The existence of such a rule suggests that the three types of learning examined separately in machine learning may not be implemented with separate processes in the brain.

  • A Dynamic Account of the Structure of Concepts (Thesis, 2011)

    Peter Blouw

    Abstract: Concepts are widely agreed to be the basic constituents of thought. Amongst philosophers and psychologists, however, the question of how concepts are structured has been a longstanding problem and a locus of disagreement. I draw on recent work describing how representational content is ascribed to populations of neurons to develop a novel solution to this problem. Because disputes over the structure of concepts often reflect divergent explanatory goals, I begin by arguing for a set of six criteria that a good theory ought to accommodate. These criteria address philosophical concerns related to content, reference, scope, publicity, and compositionality, and psychological concerns related to categorization phenomena and neural plausibility. Next, I evaluate a number of existing theoretical approaches in relation to these six criteria. I consider classical views that identify concepts with definitions, similarity-based views that identify concepts with prototypes or exemplars, theory-based views that identify concepts with explanatory schemas, and atomistic views that identify concepts with unstructured mental symbols that enter into law-like relations with their referents. I conclude that none of these accounts can satisfactorily accommodate all of the criteria. I then describe the theory of representational content that I employ to motivate a novel account of concept structure. I briefly defend this theory against competitors, and I describe how it can be scaled from the level of basic perceptual representations to the level of highly complex conceptual representations. On the basis of this description, I contend that concepts are structured dynamically through sets of transformations of a single source representation, and that the content of a given concept specifies the set of potential transformations it can enter into. I conclude by demonstrating the ability of this account to meet all of the criteria introduced beforehand. I consider objections to my views throughout.

  • Neural Representations of Compositional Structures: Representing and Manipulating Vector Spaces with Spiking Neurons (Connection Science, 2011)

    Terrence C. Stewart, Trevor Bekolay, Chris Eliasmith

    Abstract: This paper re-examines the question of localist vs. distributed neural representations using a biologically realistic framework based on the central notion of neurons having a preferred direction vector. A preferred direction vector captures the general observation that neurons fire most vigorously when the stimulus lies in a particular direction in a represented vector space. This framework has been successful in capturing a wide variety of detailed neural data, although here we focus on cognitive representation. In particular, we describe methods for constructing spiking networks that can represent and manipulate structured, symbol-like representations. In the context of such networks, neuron activities can seem both localist and distributed, depending on the space of inputs being considered. This analysis suggests that claims of a set of neurons being localist or distributed cannot be made sense of without specifying the particular stimulus set used to examine the neurons.

  • A spiking neuron model of movement and pre-movement activity in M1 (Cognitive and Systems Neuroscience, 2011)

    Travis DeWolf, Chris Eliasmith

    Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex, spiking

    Abstract: We present a spiking neuron model of the primary motor cortex (M1) in the context of a reaching task for a 2-link arm model on the horizontal plane. The M1 population is embedded in a larger scale, hierarchical optimal control model of the motor system called NOCH (DeWolf & Eliasmith, 2010). NOCH characterizes the overall functioning of the motor system, and has been shown to reproduce natural arm movements, as well as movements resulting from perturbations due to motor system damage from Huntington's, Parkinson's, and cerebellar lesions. Here, we demonstrate that the observed dynamics of spiking neurons in awake behaving animals can be accounted for by the NOCH characterization of the motor system. To do so, the M1 neural population is provided with target information and proprioceptive feedback in end-effector space, and outputs a lower-level system command, driving the arm to the target. The implemented neural population represents a single layer of the M1 hierarchy, transforming high-level, end-effector agnostic control forces into lower-level arm specific joint torques. The population is preferentially responsive to areas in space that have been well explored, providing more exact control for movements that can be executed using learned movement synergies. In this way the motor cortex performs component based movement generation, similar to recent Linear Bellman Equation (Todorov 2009) and Hidden Markov Model (Schaal 2009) based robotic control systems displaying high levels of robustness to complicated system dynamics, perturbations, and changing environments. We compare neural activity generated from our model of M1 to experimental data of movement and pre-movement recordings in monkeys (Churchland 2010), providing support for our model of the primary motor cortex, and to the methods underlying the more general NOCH framework.

  • A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm (Neural Information Processing Systems (NIPS) 24, 2011)

    Julie Dethier, Paul Nuyujukian, Chris Eliasmith, Terrence C. Stewart, Shauki A. Elassaad, Krishna Shenoy, Kwabena Boahen

    Abstract: Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
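
    For reference, a steady-state Kalman-filter decoder of the kind mapped onto the SNN is, per time step, just two matrix multiplies on the running estimate. A schematic sketch in which the matrices and channel count are placeholders, not the values trained in the paper:

        import numpy as np

        M_x = 0.95 * np.eye(2)         # feedback on the 2-D velocity estimate (assumed)
        M_y = np.full((2, 96), 0.01)   # maps 96 channels of neural data (assumed)

        def decode_step(x_hat, y):
            """One decoder step: new velocity estimate from the old estimate
            and the current observation y of neural activity."""
            return M_x @ x_hat + M_y @ y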

  • A neural model of rule generation in inductive reasoning (Topics in Cognitive Science, 2011)

    Daniel Rasmussen, Chris Eliasmith

    Abstract: Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The model is able to generate the rules necessary to correctly solve Raven's items, as well as recreate many of the experimental effects observed in human subjects.

  • Automating the Nengo build process (Tech Report, 2010)

    Trevor Bekolay

    Abstract: Nengo is a piece of sophisticated neural modelling software used to simulate large-scale networks of biologically plausible neurons. Previously, releases of Nengo were being created manually whenever the currently released version lacked an important new feature. While unit testing existed, it was not being maintained or run regularly. Good development practices are important for Nengo because it is a complicated application with over 50,000 lines of code, and depends on dozens of third-party libraries. In addition, being an open source project, having good development practices can attract new contributors. This technical report discusses the creation and automation of a back-end for Nengo, and how it integrates with the existing mechanisms used in Nengo development. Since the primary goal of the system was to avoid disturbing developers' workflow, the typical development cycle is made explicit and it is shown that the cycle is not affected by the new automated system.

  • Learning nonlinear functions on vectors: examples and predictions (Tech Report, 2010)

    Trevor Bekolay

    Abstract: One of the underlying assumptions of the Neural Engineering Framework, and of most of theoretical neuroscience, is that neurons in the brain perform functions on signals. Models of brain systems make explicit the functions that a modeller hypothesizes are being performed in the brain; the Neural Engineering Framework defines an analytical method of determining connection weight matrices between populations to perform those functions in a biologically plausible manner. With the recent implementation of general error-modulated plasticity rules in Nengo, it is now possible to start with a random connection weight matrix and learn a weight matrix that will perform an arbitrary function. This technical report confirms that this is true by showing results of learning several non-linear functions performed on vectors of various dimensionality. It also discusses trends seen in the data, and makes predictions about what we might expect when trying to learn functions on very high-dimensional signals.

  • Concept Based Representations for Ranking in Geographic Information Retrieval (IceTAL 2010, 2010)

    Maya Carrillo, Esau Villatoro-Tello, Aurelio Lopez-Lopez, Chris Eliasmith, Luis Villasenor-Pineda, Manuel Montes-y-Gomez

  • NOCH: A framework for biologically plausible models of neural motor control (Thesis, 2010)

    Travis DeWolf

    Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

    Abstract: This thesis examines the neurobiological components of the motor control system and relates them to current control theory in order to develop a novel framework for models of motor control in the brain. The presented framework is called the Neural Optimal Control Hierarchy (NOCH). A method of accounting for low level system dynamics with a Linear Bellman Controller (LBC) on top of a hierarchy is presented, as well as a dynamic scaling technique for LBCs that drastically reduces the computational power and storage requirements of the system. These contributions to LBC theory allow for low cost, high-precision control of movements in large environments without exceeding the biological constraints of the motor control system.

  • The Ordinal Serial Encoding Model: Serial Memory in Spiking Neurons (Thesis, 2010)

    Xuan Choo

    Abstract: In a world dominated by temporal order, memory capable of processing, encoding and subsequently recalling ordered information is very important. Over the decades this memory, known as serial memory, has been extensively studied, and its effects are well known. Many models have also been developed, and while these models are able to reproduce the behavioural effects observed in human recall studies, they are not always implementable in a biologically plausible manner. This thesis presents the Ordinal Serial Encoding model, a model inspired by biology and designed with a broader view of general cognitive architectures in mind. This model has the advantage of simplicity, and we show how neuro-plausibility can be achieved by employing the principles of the Neural Engineering Framework in the model's design. Additionally, we demonstrate that not only is the model able to closely mirror human performance in various recall tasks, but the behaviour of the model is itself a consequence of the underlying neural architecture.

  • A general error-based spike-timing dependent learning rule for the Neural Engineering Framework (Tech Report, 2010)

    Trevor Bekolay

    Abstract: Previous attempts at integrating spike-timing dependent plasticity rules in the NEF have met with little success. This project proposes a spike-timing dependent plasticity rule that uses local information to learn transformations between populations of neurons. The rule is implemented and tested on a simple one-dimensional communication channel, and is compared to a similar rate-based learning rule.

  • NOCH: A framework for biologically plausible models of neural motor control (20th Annual Neural Control of Movement Conference, 2010)

    Travis DeWolf, Chris Eliasmith

    Keywords: motor control, NOCH, optimal control, hierarchy, basal ganglia, cerebellum, motor cortex

    Abstract: This poster presents the Neural Optimal Control Hierarchy (NOCH), a framework based on optimal control theory and hierarchical control systems that takes advantage of recent developments in the field to map function to neurobiological components of the motor control system in the brain. An implementation of the NOCH controlling an arm model is shown to mimic human arm reach trajectories and account for various kinds of damage to the brain, including Huntington's disease and cerebellar damage.

  • Using and extending plasticity rules in Nengo (Tech Report, 2010)

    Trevor Bekolay

    Abstract: Learning in the form of synaptic plasticity is an essential part of any neural simulation software claiming to be biologically plausible. While plasticity has been a part of Nengo from the beginning, few simulations have been created to make full use of the plasticity mechanisms built into Nengo, and as a result, they have been under-maintained. Since SVN revision 985, the way plasticity is implemented in Nengo has changed significantly. This report is intended to explain how plasticity rules are implemented since that revision, and provide examples of how to use and extend the plasticity rules currently implemented.

  • Dynamic scaling for efficient, low-cost control of high-precision movements in large environments (Tech Report, 2010)

    Travis DeWolf

    Abstract: This paper presents the dynamic scaling technique (DST), a method for control in large environments that dramatically reduces the resources required to achieve highly accurate movements. The DST uses a low resolution representation of the environment to calculate an initial approximately optimal trajectory and refines the control signal as the target is neared. Simulation results are presented and the effect of representation resolution on accuracy and computational efficiency is analyzed.

  • A neural modelling approach to investigating general intelligence (Thesis, 2010)

    Daniel Rasmussen

    Abstract: One of the most well-respected and widely used tools in the study of general intelligence is the Raven's Progressive Matrices test, a nonverbal task wherein subjects must induce the rules that govern the patterns in an arrangement of shapes and figures. This thesis describes the first neurally based, biologically plausible model that can dynamically generate the rules needed to solve Raven's matrices. We demonstrate the success and generality of the rules generated by the model, as well as interesting insights the model provides into the causes of individual differences, at both a low (neural capacity) and high (subject strategy) level. Throughout this discussion we place our research within the broader context of intelligence research, seeking to understand how the investigation and modelling of Raven's Progressive Matrices can contribute to our understanding of general intelligence.

  • Symbolic Reasoning in Spiking Neurons: A Model of the Cortex/Basal Ganglia/Thalamus Loop (32nd Annual Meeting of the Cognitive Science Society, 2010)

    Terrence C. Stewart, Xuan Choo, Chris Eliasmith

    Abstract: We present a model of symbol manipulation implemented using spiking neurons and closely tied to the anatomy of the cortex, basal ganglia, and thalamus. The model is a general-purpose neural controller which plays a role analogous to a production system. Information stored in cortex is used by the basal ganglia as the basis for selecting between a set of inferences. When an inference rule is selected, it commands the thalamus to modify and transmit information between areas of the cortex. The system supports special-case and general-purpose inferences, including the ability to remember complex statements and answer questions about them. The resulting model suggests modifications to the standard structure of production system rules, and offers a neurological explanation for the 50 millisecond cognitive cycle time.
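
    Stripped of spikes and anatomy, the loop is a match-select-route cycle. A schematic, non-neural rendering (all vectors are random placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D = 64
    unit = lambda: rng.normal(0, 1 / np.sqrt(D), D)

    state = unit()                                  # "cortex": current context
    rules = [(unit(), unit()) for _ in range(5)]    # (condition, effect) pairs

    for _ in range(3):                              # three ~50 ms cognitive cycles
        utilities = [state @ cond for cond, _ in rules]  # "basal ganglia" scores
        winner = int(np.argmax(utilities))               # action selection
        state = rules[winner][1]                         # "thalamus" routes the effect
    ```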

  • A Spiking Neuron Model of Serial-Order Recall (32nd Annual Conference of the Cognitive Science Society, 2010)

    Xuan Choo, Chris Eliasmith

    Abstract: Vector symbolic architectures (VSAs) have been used to model the human serial-order memory system for decades. Despite their success, however, none of these models have yet been shown to work in a spiking neuron network. In an effort to take the first step, we present a proof-of-concept VSA-based model of serial-order memory implemented in a network of spiking neurons and demonstrate its ability to successfully encode and decode item sequences. This model also provides some insight into the differences between the cognitive processes of memory encoding and subsequent recall, and establishes a firm foundation on which more complex VSA-based models of memory can be developed.
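
    The encode/decode scheme the abstract refers to is the standard VSA recipe: bind each item to a position vector with circular convolution, superpose, and unbind to recall. A numpy sketch (dimensionality and vocabulary are arbitrary):

    ```python
    import numpy as np

    def cconv(a, b):
        """Circular convolution: the VSA binding operation."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def involution(v):
        """Approximate inverse of an HRR vector: v*[i] = v[-i mod D]."""
        return np.concatenate(([v[0]], v[:0:-1]))

    rng = np.random.default_rng(2)
    D = 512
    unit = lambda: rng.normal(0, 1 / np.sqrt(D), D)

    items = {name: unit() for name in "ABC"}
    positions = [unit() for _ in range(3)]

    # Encode the sequence A, B, C as a superposition of position-bound items.
    memory = sum(cconv(p, items[n]) for p, n in zip(positions, "ABC"))

    # Recall position 1: unbind, then compare against the vocabulary.
    probe = cconv(memory, involution(positions[1]))
    recalled = max(items, key=lambda n: probe @ items[n])  # -> 'B'
    ```

    The unbound probe is only similar to 'B', not equal to it, which is why the cleanup step discussed in the entries below is an essential companion to this scheme.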

  • Dynamic Behaviour of a Spiking Model of Action Selection in the Basal Ganglia (10th International Conference on Cognitive Modeling, 2010)

    Terrence C. Stewart, Xuan Choo, Chris Eliasmith

    Abstract: A fundamental process for cognition is action selection: choosing a particular action out of the many possible actions available. This process is widely believed to involve the basal ganglia, and we present here a model of action selection that uses spiking neurons and is in accordance with the connectivity and neuron types found in this area. Since the parameters of the model are set by neurological data, we can produce timing predictions for different action selection situations without requiring parameter tweaking. Our results show that, while an action can be selected in 14 milliseconds (or longer for actions with similar utilities), it requires 34-44 milliseconds to go from one simple action to the next. For complex actions (whose effect involves routing information between cortical areas), 59-73 milliseconds are needed. This suggests a change to the standard cognitive modelling approach of requiring 50 milliseconds for all types of actions.

  • Representing Context Information for Document Retrieval (Flexible Query Answering Systems, FQAS 2009, 2009)

    Maya Carrillo, Esau Villatoro-Tello, Aurelio Lopez-Lopez, Chris Eliasmith, Manuel Montes-y-Gomez, Luis Villasenor-Pineda

  • Concept representations in Geographic Information Retrieval as Re-ranking Strategies (18th ACM Conference on Information and Knowledge Management, 2009)

    Maya Carrillo, Esau Villatoro-Tello, Aurelio Lopez-Lopez, Chris Eliasmith, Manuel Montes-y-Gomez, Luis Villasenor-Pineda

  • Spiking neurons and central executive control: The origin of the 50-millisecond cognitive cycle (9th International Conference on Cognitive Modelling, 2009)

    Terrence C. Stewart, Chris Eliasmith

    Abstract: A common feature of many cognitive architectures is a central executive control with a 50-millisecond cycle time. This system determines which action to perform next, based on the current context. We present the first model of this system using spiking neurons. Given the constraints of well-established neural time constants, a cycle time of 46.6 milliseconds emerges from our model. This assumes that the neurotransmitter used is GABA (with GABA-A receptors), the primary neurotransmitter for the basal ganglia, where this cognitive module is generally believed to be located.

  • Sequential production and recognition of songs by songbirds through NEF (Tech Report, 2009)

    Marc Hurwitz

    Abstract: The paper details an NEF model of bird song production and recognition. The model neurons and overall neural structure are taken from current research on the actual neuroanatomy of the zebra finch. While the model is simplified to songs of at most three notes, it illustrates that both sequence production (easy) and sequence recognition (hard) can be constructed in the NEF. Furthermore, the model explains why specific types of neurons might be seen in the actual bird song-specific regions.

  • A biologically realistic cleanup memory: Autoassociation in spiking neurons (9th International Conference on Cognitive Modelling, 2009)

    Terrence C. Stewart, Yichuan Tang, Chris Eliasmith

    Abstract: Methods for cleaning up (or recognizing) states of a neural network are crucial for the functioning of many neural cognitive models. For example, Vector Symbolic Architectures provide a method for manipulating symbols using a fixed-length vector representation. To recognize the result of these manipulations, a method for cleaning up the resulting noisy representation is needed, as this noise increases with the number of symbols being combined. While these manipulations have previously been modelled with biologically plausible neurons, this paper presents the first spiking neuron model of the cleanup process. We demonstrate that it approaches ideal performance and that the neural requirements scale linearly with the number of distinct symbols in the system. While this result is relevant for any biological model requiring cleanup, it is crucial for VSAs, as it completes the set of neural mechanisms needed to provide a full neural implementation of symbolic reasoning.
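
    Functionally, cleanup is nearest-neighbour matching against a fixed vocabulary; the paper's contribution is performing it in spiking neurons with linearly scaling resources. For reference, the idealized non-spiking operation (illustrative code only):

    ```python
    import numpy as np

    def cleanup(noisy, vocab):
        """Return the vocabulary item most similar to the noisy vector."""
        keys = list(vocab)
        sims = [noisy @ vocab[k] for k in keys]
        return keys[int(np.argmax(sims))]

    rng = np.random.default_rng(3)
    D = 256
    vocab = {name: rng.normal(0, 1 / np.sqrt(D), D) for name in ("DOG", "CAT", "RUN")}
    noisy = vocab["CAT"] + 0.5 * rng.normal(0, 1 / np.sqrt(D), D)  # corrupted symbol
    assert cleanup(noisy, vocab) == "CAT"
    ```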

  • Motor control in the brain (Tech Report, 2008)

    Travis DeWolf

    Keywords: motor control, review

    Abstract: There has been much progress in the development of a model of motor control in the brain in the last decade; from the improved method for mathematically extracting the predicted movement direction from a population of neurons to the application of optimal control theory to motor control models, much work has been done to further our understanding of this area. In this paper recent literature is reviewed and the direction of future research is examined.

  • Methods for augmenting semantic models with structural information for text classification (Advances in Information Retrieval, 2008)

    Jonathan M. Fishbein, Chris Eliasmith

    Abstract: Current representation schemes for automatic text classification treat documents as syntactically unstructured collections of words or 'concepts'. Past attempts to encode syntactic structure have treated part-of-speech information as another word-like feature, but have been shown to be less effective than non-structural approaches. Here, we investigate three methods to augment semantic modelling with syntactic structure, which encode the structure across all features of the document vector while preserving text semantics. We present classification results for these methods versus the Bag-of-Concepts semantic modelling representation to determine which method best improves classification scores.
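
    One way to "encode the structure across all features", in the holographic-representation style this group works in, is to bind each word vector to its part-of-speech vector before summing, rather than appending tags as extra word-like features. A toy illustration (all vectors and tags are placeholders, not the paper's pipeline):

    ```python
    import numpy as np

    def cconv(a, b):
        """Circular convolution, used here to bind a word to its POS tag."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    rng = np.random.default_rng(4)
    D = 256
    vec = lambda: rng.normal(0, 1 / np.sqrt(D), D)

    words = {"dog": vec(), "barks": vec()}
    pos_tags = {"NOUN": vec(), "VERB": vec()}

    # Document vector: syntax is spread across every dimension of each
    # feature instead of occupying separate feature slots.
    doc = (cconv(words["dog"], pos_tags["NOUN"])
           + cconv(words["barks"], pos_tags["VERB"]))
    ```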

  • Is the brain a quantum computer? (Cognitive Science, 2006)

    Abninder Litt, Chris Eliasmith, Frederick W. Kroon, Steven Weinstein, Paul Thagard