CogSci 2010 Workshop Hands-on Activities

These pages provide a step-by-step tutorial on the use of Nengo, our software for cognitive modelling using the Neural Engineering Framework. You can download this software at http://ctn.uwaterloo.ca/~cnrglab/f/nengo.zip. This tutorial is available online at http://ctn.uwaterloo.ca/~cnrglab/?q=node/650.

For further information, see the slides that accompany this tutorial.


On-line Tutorials

Part One: One-Dimensional Representation

Installing and Running Nengo

  • Install Nengo from the provided USB keys. Do this by copying the nengo directory onto your computer.
    • Alternatively, download it from http://ctn.uwaterloo.ca/~cnrglab/f/nengo.zip
    • You must also have Java installed on your computer
    • Nengo will run faster if you also have Python installed along with the NumPy and SciPy libraries. Versions of these for Windows can be found in the windows directory
  • To run Nengo, either:
    • Double-click on nengo.bat (in Windows)
    • run ./nengo (in OS X and Linux)

p1-1.png

Creating Networks

  • When creating an NEF model, the first step is to create a Network. This will contain all of the neural ensembles and any needed inputs to the system.
    • File->New Network
    • Give the network a name

p1-2.png

  • You can create networks inside of other networks. This can be useful for hierarchical organization of models.

Creating an Ensemble

  • Ensembles must be placed inside networks in order to be used
  • Right-click inside a network
    • Create New->NEF Ensemble

p1-3.png

  • Here the basic features of the ensemble can be configured
    • Name
    • Number of nodes (i.e. neurons)
    • Dimensions (the number of values in the vector encoded by these neurons; leave at 1 for now)
    • Radius (the range of values that can be encoded; for example, a value of 100 means the ensemble can encode numbers between -100 and 100)
  • Node Factory (the type of neuron to use)

p1-4.png

  • For this tutorial (and for the majority of our research), we use LIF Neuron, the standard leaky integrate-and-fire neuron. Clicking on Set allows the neuron parameters to be configured:
    • tauRC (RC time constant for the neuron membrane; usually 0.02)
    • tauRef (absolute refractory period for the neuron; usually 0.002)
    • Max rate (the maximum firing rate for the neurons; each neuron will have a maximum firing rate chosen from a uniform distribution between low and high)
    • Intercept (the range of possible x-intercepts on the tuning curve graph; normally set to -1 and 1)
    • Because there are many parameters to set and we often choose similar values, Nengo will remember your previous settings. Also, you can save templates by setting up the parameters as you like them and clicking on New in the Templates box. You will then be able to go back to these settings by choosing the template from the drop-down box.
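
The constant-rate behaviour these parameters produce can be sketched directly. This is a standalone illustration of the standard LIF rate equation (current normalized so the firing threshold is 1), not a call into Nengo's API:

```python
import math

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Steady firing rate (Hz) of an LIF neuron for a constant input
    current J, normalized so that the firing threshold is J = 1."""
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref - tau_rc * math.log(1.0 - 1.0 / J))

# Rates grow with current but can never exceed 1/tau_ref (500 Hz here),
# which is why tauRef limits the Max rate you can ask for.
for J in [0.5, 1.1, 2.0, 10.0]:
    print(J, round(lif_rate(J), 1))
```

Smaller tauRC values make the rise of this curve more linear, which is why tauRC affects the linearity of the tuning curves plotted below.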

p1-5.png

  • You can double-click on an ensemble to view the individual neurons within it

p1-5b.png

Plotting Tuning Curves

  • This shows the behaviour of each neuron when it is representing different values (i.e. the tuning curves for the neurons)
  • Right-click on the ensemble, select Plot->Constant Rate Responses

p1-6.png

  • tauRC affects the linearity of the neurons (smaller values are more linear)
  • Max rate affects the height of the curves at the left and right sides
  • Intercept affects where the curves hit the x-axis (i.e. the value where the neuron starts firing)

Plotting Representation Error

  • We often want to determine the accuracy of a neural ensemble.
  • Right-click on the ensemble, select Plot->Plot Distortion:X

p1-7.png

  • Mean Squared Error (MSE) is also shown (at the top)
  • MSE decreases as the square of the number of neurons (so RMSE is proportional to 1/N)
  • You can also affect representation accuracy by adjusting the range of intercepts. This will make the system more accurate in the middle of the range and less accurate at the edges.
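
A rough sketch of where these error numbers come from, using rectified-linear tuning curves as a stand-in for LIF neurons (this mirrors the idea of Nengo's decoder solver, not its actual implementation):

```python
import numpy as np

def rmse_for_ensemble(n_neurons, seed=0):
    """Decode x from a random ensemble and return the RMS error."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1, 1, 200)                    # values to represent
    encoders = rng.choice([-1.0, 1.0], n_neurons)  # preferred directions
    intercepts = rng.uniform(-1, 1, n_neurons)     # where each neuron starts firing
    gains = rng.uniform(50, 100, n_neurons)
    # Activity matrix: one rectified-linear tuning curve per neuron
    A = np.maximum(0.0, gains * (np.outer(x, encoders) - intercepts))
    # Least-squares decoders: the d minimizing ||A d - x||^2
    d, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.sqrt(np.mean((x - A @ d) ** 2))

# Error shrinks as neurons are added
for n in [10, 50, 100, 500]:
    print(n, rmse_for_ensemble(n))
```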

Adjusting an Ensemble

  • After an ensemble is created, we can inspect and modify many of its parameters
  • Right-click on an ensemble and select Configure

p1-8.png

  • neurons (number of neurons; this will rebuild the whole ensemble)
  • radii (the range of values that can be encoded; can be different for different dimensions)
  • encoders (preferred direction vectors for each neuron)

The Script Console

  • Nengo also allows users to interact with the model via a scripting interface using the Python language. This can be useful for writing scripts to create components of models that you use often.
  • You can also use it to inspect and modify various aspects of the model.
  • Press Ctrl-P or choose View->Toggle Script Console to show the script interface
    • The full flexibility of the Python programming language is available in this console. It interfaces to the underlying Java code of the simulation using Jython, making all Java methods available.
  • If you click on an object in the GUI (so that it is highlighted in yellow), this same object is available by the name “that” in the script console.
    • Click on an ensemble
    • Open the script console
    • type “print that.neurons”
    • type “that.neurons=50”
  • You can also run scripts by typing “run [scriptname.py]”

Part Two: Linear Transformations

Creating Terminations

  • Connections between ensembles are built using Origins and Terminations. The Origin from one ensemble can be connected to the Termination on the next ensemble
  • Create two ensembles. They can have different neural properties and different numbers of neurons, but for now make sure they are both one-dimensional.
  • Right-click on the second ensemble and select Add Decoded Termination
    • Provide a name (for example, “input”)
    • Set the input dimension to 1 and use Set Weights to set the connection weight to 1
    • Set tauPSC to 0.01 (this synaptic time constant differs according to which neurotransmitter is involved; 10ms is a typical time constant for AMPA (5-10ms))
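
What tauPSC controls can be sketched outside Nengo: each incoming spike contributes an exponentially decaying postsynaptic current h(t) = exp(-t/tau)/tau, so longer time constants give smoother but more sluggish signals (a standalone illustration, not Nengo code):

```python
import math

def filter_spikes(spike_times, tau, t):
    """Summed postsynaptic current at time t for spikes at spike_times."""
    return sum(math.exp(-(t - ts) / tau) / tau
               for ts in spike_times if ts <= t)

spikes = [0.010, 0.020, 0.030, 0.040]  # a short 100 Hz burst (seconds)
for tau in [0.002, 0.010, 0.100]:      # fast, AMPA-like, NMDA-like
    trace = [filter_spikes(spikes, tau, t / 1000.0) for t in range(100)]
    print(tau, max(trace))             # longer tau -> smoother, lower peak
```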

p2-1.png p2-2.png

p2-3.png

Creating Projections

  • We can now connect the two neural ensembles.
  • Every ensemble automatically has an origin called X. This is an origin suitable for building any linear transformation. In Part Three we will show how to create origins for non-linear transformations.

p2-4.png

  • Click and drag from the origin to the termination. This will create the desired projection.

p2-5.png

Adding Inputs

  • In order to test that this projection works, we need to set the value encoded by the first neural ensemble. We do this by creating an input to the system. This is how all external inputs to Nengo models are specified.
  • Right-click inside the Network and choose Create New->Function Input.
  • Give it a name (for example, “external input”)
  • Set its output dimensions to 1

p2-6.png

  • Press Set Function to define the behaviour of this input
  • Select Constant Function from the drop-down list and then press Set to define the value itself. For this model, set it to 0.5.

p2-7.png p2-8.png

p2-9.png

  • Add a termination on the first neural ensemble and create a projection from the new input to that ensemble.

p2-10.png

Interactive Plots

  • To observe the performance of this model, we now switch over to Interactive Plots. This allows us to both graph the performance of the model and adjust its inputs on-the-fly to see how this affects behaviour.
  • Start Interactive Plots by right-clicking inside the Network and selecting Interactive Plots

p2-101.png

  • The text shows the various components of your model, and the arrows indicate the synaptic connections between them.
    • You can move the components by left-click dragging them, and you can move all the components by dragging the background.
    • You can hide a component by right-clicking on it and selecting “hide”
    • To show a hidden component, right click on the background and select the component by name
  • The bottom of the window shows the controls for running the simulation.
    • The simulation can be started and stopped by pressing the Play or Pause button at the bottom right. Doing this right now will run the simulation, but no data will be displayed since we don’t have any graphs open yet!
    • The reset button on the far left clears all the data from the simulation and puts it back to the beginning.
    • In the middle is a slider that shows the current time in the simulation. Once a simulation has been run, we can slide this back and forth to observe data from different times in the simulation.
  • Right-clicking on a component also allows us to select a type of data to show about that component.
    • Right-click on A and select “value”. This creates a graph that shows the value being represented by the neurons in ensemble A. You can move the graph by left-click dragging it, and you can resize it by dragging near the corners or using a mouse scroll wheel.
    • Press the Play button at the bottom-right of the window and confirm that this group of neurons successfully represents its input value, which we previously set to be 0.5.

p2-102.png

  • Now let us see what happens if we change the input. Right-click on the input and select “control”. This lets us vary the input while the simulation is running.
  • Drag the slider up and down while the simulation is running (press Play again if it is paused). The neurons in ensemble A should be able to successfully represent the changing values.

p2-103.png

  • We can also view what the individual neurons are doing during the simulation. Right-click on A and choose “spike raster”. This shows the individual spikes coming from the neurons. Since there are 100 neurons in ensemble A, the spikes from only a sub-set of these are shown. You can right-click on the spike raster graph and adjust the proportion of spikes shown. Change it to 50%.
  • Run the simulation and change the input. This will affect the neuron firing patterns.

p2-104.png

  • We can also see the voltage levels of all the individual neurons. Right-click on A and choose “voltage grid”. Each neuron is shown as a square and the shading of that square indicates the voltage of that neuron’s cell membrane, from black (resting potential) to white (firing threshold). Yellow indicates a spike.
  • The neurons are initially randomly ordered. You can change this by right-clicking on the voltage grid and selecting “improve layout”. This will attempt to re-order the neurons so that neurons with similar firing patterns are near each other, as they are in the brain. This does not otherwise affect the simulation in any way.
  • Run the simulation and change the input. This will affect the neuron voltage.

p2-105.png

  • So far, we have just been graphing information about neural ensemble A. We have shown that these 100 neurons can accurately represent a value that is directly input to them.
  • For this to be useful for constructing cognitive models, we need to also show that the spiking output from this group of neurons can be used to transfer this information from one neural group to another.
    • In other words, we want to show that B can represent the same thing as A, where B’s only input is the neural firing from group A. For this to happen, the correct synaptic connection weights between A and B (as per the Neural Engineering Framework) must be calculated.
    • Nengo automatically calculates these weights whenever an origin is created.
  • We can see that this communication is successful by creating graphs for ensemble B.
    • Do this by right-clicking on B and selecting “value”, and then right-clicking on B again and selecting “voltage grid”.
    • To aid in identifying which graph goes with which ensemble, right click on a graph and select “label”.
    • Graphs can be moved (by dragging) and resized (by dragging near the edges and corners or by the mouse scroll wheel) as desired.

p2-106.png

  • Notice that the neural ensembles can be representing the same value, but have a different firing pattern.
  • Close the Interactive Plots when you are finished.

Adding Scalars

  • If we want to add two values, we can simply add another termination to the final ensemble and project to it as well.
  • Create a termination on the second ensemble called “input 2”
  • Create a new ensemble
  • Create a projection from the X origin to input 2

p2-19.png

  • Create a new Function input and set its value to -0.7
  • Add the required termination and projection to connect it to the new ensemble

p2-20.png

  • Switch to Interactive Plots.
  • Show the controls for the two inputs
  • Create value graphs for the three neural ensembles
  • Press Play to start the simulation. The value for the final ensemble should be 0.5-0.7=-0.2
  • Use the control sliders to adjust the input. The output should still be the sum of the inputs.

p2-107.png

  • This will be true for most values. However, if the sum is outside of the radius that was set when the neural group was formed (in this case, from -1 to 1), then the neurons may not be able to fire fast enough to represent that value (i.e. they will saturate). Try this by computing 1+1. The result will only be around 1.3.
  • To accurately represent values outside of the range -1 to 1, we need to change the radius of the output ensemble. Return to the standard black editing mode and right-click on ensemble B. Select “Configure” and change its radii to 2. Now return to the Interactive Plots. The network should now accurately compute that 1+1=2.

Adjusting Transformations

  • So far, we have only considered projections that do not adjust the values being represented in any way. However, due to the NEF derivation of the synaptic weights between neurons, we can adjust these to create arbitrary linear transformations (i.e. we can multiply any represented value by a matrix).
  • Each termination in Nengo has an associated transformation matrix. This can be adjusted as desired. In this case, we will double the weight of the original value, so instead of computing x+y, the network will compute 2x+y.
  • Right-click on the first termination in the ensemble that has two projections coming into it. Select Configure. Double-click on transform.
  • Double-click on the 1.0 and change it to 2.0

p2-22.png

  • Click on OK and then Done
  • Now run the simulation. The final result should be 2(0.5)-0.7=0.3

Multiple Dimensions

  • Everything discussed above also applies to ensembles that represent more than one dimension.
  • To create these, set the number of dimensions to 2 when creating the ensemble

p2-24.png

  • When adding a termination, the input dimension can be adjusted. This defines the shape of the transformation matrix for the termination, allowing for projections that change the dimension of the data

p2-25.png

  • For example, two 1-dimensional values can be combined into a single two-dimensional ensemble. This would be done with two terminations: one with a transformation (or coupling) matrix of [1 0] and the other with [0 1]. If the two inputs are called a and b, this will result in the following calculation:

    • a*[1 0] + b*[0 1] = [a 0] + [0 b] = [a b]
    • This will be useful for creating non-linear transformations, as discussed further in the next section.
  • There are additional ways to view 2D representations in the interactive plots

    • Including plotting the activity of the neurons along their preferred direction vectors
    • Plotting the 2D decoded value of the representation
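
The coupling-matrix arithmetic above can be written out directly (plain numpy, outside Nengo):

```python
import numpy as np

a, b = 0.5, -0.7
M1 = np.array([[1.0], [0.0]])   # termination 1: routes a into dimension 0
M2 = np.array([[0.0], [1.0]])   # termination 2: routes b into dimension 1
combined = M1 @ [a] + M2 @ [b]  # the 2-D ensemble sums its terminations
print(combined)                 # [ 0.5 -0.7]
```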

p2-108.png

Scripting

  • Along with the ability to construct models using this point-and-click interface, Nengo also provides a Python scripting language interface for model creation. These examples can be seen in the “demo” directory.
  • To create the communication channel through the scripting interface, go to the Script Console (Ctrl-P) and type
run demo/communication.py
  • The actual code for this can be seen by opening the communication.py file in the demo directory.
import nef
 
net=nef.Network('Communications Channel')
input=net.make_input('input',[0.5])
A=net.make('A',100,1)
B=net.make('B',100,1)
net.connect(input,A)
net.connect(A,B)
net.add_to(world)
  • The following demo scripts create models similar to those seen in this part of the tutorial:
    • demo/singleneuron.py shows what happens with an ensemble containing only a single neuron (poor representation)
    • demo/twoneurons.py shows two neurons working together to represent a value
    • demo/manyneurons.py shows a standard ensemble of 100 neurons representing a value
    • demo/communication.py shows a communication channel
    • demo/addition.py shows adding two numbers
    • demo/2drepresentation.py shows 100 neurons representing a 2-D vector
    • demo/combining.py shows two separate values being combined into a 2-D vector

Part Three: Non-Linear Transformations

Functions of one variable

  • We now turn to creating nonlinear transformations in Nengo. The main idea here is that instead of using the X origin, we will create a new origin that estimates some arbitrary function of X. This will allow us to estimate any desired function.
    • The accuracy of this estimate will, of course, be dependent on the properties of the neurons.
  • For one-dimensional ensembles, we can calculate various 1-dimensional functions:

    • f(x)=x²
    • f(x)=θ(x) (thresholding)
    • f(x)=√x
  • To perform a non-linear operation, we need to define a new origin

    • The X origin just uses f(x)=x.
    • Create a new ensemble and a function input. The ensemble should be one-dimensional with 100 neurons and a radius of 1. Use a Constant Function input set to 0.5.
    • Create a termination on the ensemble and connect the function input to it
    • Now create a new origin that will estimate the square of the value.
      • Right-click on the combined ensemble and select Add decoded origin
      • Set the name to “square”
      • Click on Set Functions
      • Select User-defined Function and press Set
      • For the Expression, enter “x0*x0”. We refer to the value as x0 because when we extend this to multiple dimensions, we will refer to them as x0, x1, x2, and so on.
      • Press OK, OK, and OK.
    • You can now generate a plot that shows how good the ensemble is at calculating the non-linearity. Right-click on the ensemble and select Plot->Plot distortion:square.
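
Behind a decoded origin like “square”, the decoders are solved by least squares against the target function instead of against x itself. A standalone numpy sketch, again with rectified-linear stand-ins for LIF neurons:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
n = 100
encoders = rng.choice([-1.0, 1.0], n)
intercepts = rng.uniform(-1, 1, n)
gains = rng.uniform(50, 100, n)
A = np.maximum(0.0, gains * (np.outer(x, encoders) - intercepts))

d_x, *_ = np.linalg.lstsq(A, x, rcond=None)       # the standard X origin
d_sq, *_ = np.linalg.lstsq(A, x * x, rcond=None)  # the "square" origin
# Both origins read the same spikes; only the decoding weights differ
print(np.max(np.abs(A @ d_sq - x * x)))           # small decoding error
```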

p3-9b.png

  • Start Interactive Plots.
  • Create a control for the input, so you can adjust it while the model runs (right-click on the input and select “control”)
  • Create a graph of the “square” value from the ensemble. Do this by right-clicking on the ensemble in the Interactive Plots window and selecting “square->value”.
  • For comparison, also create a graph for the standard X origin by right-clicking on the ensemble and selecting “X->value”. This is the standard value graph that just shows the value being represented by this ensemble.
  • Press Play to run the simulation. With the default input of 0.5, the squared value should be near 0.25. Use the control to adjust the input. The output should be the square of the input.

p3-101.png

  • You can also run this example using scripting
run demo/squaring.py

Functions of multiple variables

  • Since X (the value being represented by an ensemble) can also be multidimensional, we can also calculate these sorts of functions
    • f(x)=x0*x1
    • f(x)=max(x0,x1)
  • To begin, we create two ensembles and two function inputs. These will represent the two values we wish to multiply together.
    • The ensembles should be one-dimensional, use 100 neurons and have a radius of 10 (so they can represent values between -10 and 10)
    • The two function inputs should be constants set to 8 and 5
    • The terminations you create to connect them should have time constants of 0.01 (AMPA)

p3-1.png

  • Now create a two-dimensional neural ensemble with a radius of 15 called Combined
    • Since it needs to represent multiple values, we increase the number of neurons it contains to 200
  • Add two terminations to Combined
    • For each one, the input dimensions are 1
    • For the first one, use Set Weights to make the transformation be [1 0]
    • For the second one, use Set Weights to make the transformation be [0 1]
  • Connect the two other ensembles to the Combined one

p3-2.png

  • Next, create an ensemble to store the result. It should have a radius of 100, since it will need to represent values from -100 to 100. Give it a single one-dimensional termination with a weight of 1.

p3-3.png

  • Now we need to create a new origin that will estimate the product between the two values stored in the combined ensemble.
    • Right-click on the combined ensemble and select Add decoded origin.
    • Set the name to “product”
    • Set Output dimensions to 1

p3-4.png

  • Click on Set Functions
  • Select User-defined Function and press Set.

p3-5.png

  • For the Expression, enter x0*x1

p3-6.png

  • Press OK, OK, and OK to finish creating the origin
    • Connect the new origin to the termination on the result ensemble

p3-7.png

  • Add a probe to the result ensemble and run the simulation
  • The result should be approximately 40.
  • Adjust the input controls to multiply different numbers together.
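
Traced through at the level of the represented values (the spiking network only approximates each step), the computation is:

```python
a, b = 8.0, 5.0                      # the two constant function inputs
combined = [a, b]                    # the [1 0] and [0 1] terminations
product = combined[0] * combined[1]  # the "product" origin decodes x0*x1
print(product)                       # 40.0, well inside the result radius of 100
```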

p3-102.png

  • You can also run this example using scripting
run demo/multiplication.py

Combined approaches

  • We can combine these two approaches in order to compute more complex functions, such as x²y.
    • Right-click on the ensemble representing the first of the two values and select Add decoded origin.
    • Give it the name “square”, set its output dimensions to 1, and press Set Functions.
    • As before, select the User-defined Function and press Set.
    • Set the Expression to be “x0*x0”.
    • Press OK, OK, and OK to finish creating the origin.
    • This new origin will calculate the square of the value represented by this ensemble.
    • If you connect this new origin to the Combined ensemble instead of the standard X origin, the network will calculate x²y instead of xy.

p3-9a.png

Part Four: Feedback and Dynamics

Storing Information Over Time: Constructing an Integrator

  • The basis of many of our cognitive models is the integrator. Mathematically, the output of this network should be the integral of the inputs to this network.
    • Practically speaking, this means that if the input to the network is zero, then its output will stay at whatever value it is currently at. This makes it the basis of a neural memory system, as a representation can be stored over time.
    • Integrators are also often used in sensorimotor systems, such as eye control
  • For an integrator, a neural ensemble needs to connect to itself with a transformation weight of 1, and have an input with a weight of τ, which is the same as the synaptic time constant of the neurotransmitter used.
  • Create a one-dimensional ensemble called Integrator. Use 100 neurons and a radius of 1.
  • Add two terminations with synaptic time constants of 0.1s. Call the first one “input” and give it a weight of 0.1. Call the second one “feedback” and give it a weight of 1.
  • Create a new Function input using a Constant Function with a value of 1.
  • Connect the Function input to the input termination
  • Connect the X origin of the ensemble back to its own feedback termination.
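
These weights can be checked on paper: with an exponential synapse of time constant tau, the represented value obeys tau*dx/dt = -x + drive, and here drive = 1*x + tau*u, so dx/dt = u, a perfect integrator. A forward-Euler sketch of those dynamics (not Nengo code):

```python
dt, tau = 0.001, 0.1   # simulation step and synaptic time constant
u = 1.0                # the constant input
x = 0.0                # the value represented by the ensemble
for _ in range(1000):  # simulate one second
    drive = 1.0 * x + tau * u    # feedback (weight 1) + input (weight tau)
    x += dt / tau * (drive - x)  # exponential synapse dynamics
print(x)               # close to 1.0: the integral of u over one second
```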

p4-101.png

  • Go to Interactive Plots. Create a graph for the value of the ensemble (right-click on the ensemble and select “value”).
  • Press Play to run the simulation. The value stored in the ensemble should linearly increase, reaching a value of 1 after approximately 1 second.
    • You can increase the amount of time shown on the graphs in Interactive Plots. Do this by clicking on the small downwards-pointing arrow at the bottom of the window. This will reveal a variety of settings for Interactive Plots. Change the “time shown” to 1.

p4-102.png

Representation Range

  • What happens if the previous simulation runs for longer than one second?
  • The value stored in the ensemble does not increase after a certain point. This is because all neural ensembles have a range of values they can represent (the radius), and cannot accurately represent outside of that range.

p4-103.png

  • Adjust the radius of the ensemble to 1.5 using either the Configure interface or the script console (that.radii=[1.5]). Run the model again. It should now accurately integrate up to a maximum of 1.5.

p4-104.png

Complex Input

  • We can also run the model with a more complex input. Change the Function input using the following command from the script console (after clicking on it in the black model editing mode interface). Press Ctrl-P to show the script console.
that.functions=[ca.nengo.math.impl.PiecewiseConstantFunction([0.2,0.3,0.44,0.54,0.8,0.9],[0,5,0,-10,0,5,0])]
  • You can see what this function looks like by right-clicking on it in the editing interface and selecting “Plot”.

p4-5.png

  • Return to Interactive Plots and run the simulation.

p4-105.png

Adjusting Synaptic Time Constants

  • You can adjust the accuracy of an integrator by using different neurotransmitters.
  • Change the input termination to have a tau of 0.01 (10ms: GABA) and a transform of 0.01. Also change the feedback termination to have a tau of 0.01 (but leave its transform at 1).

p4-106.png

  • By using a shorter time constant, the network dynamics are more sensitive to small-scale variation (i.e. noise).
  • This indicates how important the use of a particular neurotransmitter is, and why there are so many different types with vastly differing time constants.

    • AMPA: 2-10ms
    • GABA-A: 10-20ms
    • NMDA: 20-150ms
    • The actual details of these time constants vary across the brain as well. We are collecting empirical data on these from various sources at http://ctn.uwaterloo.ca/~cnrglab/?q=node/505
  • You can also run this example using scripting

run demo/integrator.py

Controlled Integrator

  • We can also build an integrator where the feedback transformation (1 in the previous model) can be controlled.
    • This allows us to build a tunable filter.
  • This requires the use of multiplication, since we need to multiply two stored values together. This was covered in the previous part of the tutorial.
  • We can efficiently implement this by using a two-dimensional ensemble. One dimension will hold the value being represented, and the other dimension will hold the transformation weight.
  • Create a two-dimensional neural ensemble with 225 neurons and a radius of 1.5.
  • Create the following three terminations:
    • “input”: time constant of 0.1, 1 dimensional, with a transformation matrix of [0.1 0]. This acts the same as the input in the previous model, but only affects the first dimension.
    • “control”: time constant of 0.1, 1 dimensional, with a transformation matrix of [0 1]. This stores the input control signal into the second dimension of the ensemble.
    • “feedback”: time constant of 0.1, 1 dimensional, with a transformation matrix of [1 0]. This will be used in the same manner as the feedback termination in the previous model.
  • Create a new origin that multiplies the values in the vector together
    • This is exactly the same as the multiplier in the previous part of this tutorial
    • This is a 1 dimensional output, with a User-defined Function of x0*x1
  • Create two function inputs called “input” and “control”. Start with Constant functions with a value of 1
    • Use the script console to set the “input” function by clicking on it and entering the same input function as used above.
that.functions=[ca.nengo.math.impl.PiecewiseConstantFunction([0.2,0.3,0.44,0.54,0.8,0.9],[0,5,0,-10,0,5,0])]
  • Connect the input function to the input termination, the control function to the control termination, and the product origin to the feedback termination.
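
In terms of dynamics, the product origin makes the feedback c*x, so tau*dx/dt = -x + c*x + tau*u. With c = 1 this is the pure integrator from before; with c < 1 it leaks. A forward-Euler sketch (not Nengo code):

```python
def simulate(c, u=1.0, tau=0.1, dt=0.001, steps=1000):
    """Integrate for one second with feedback c*x (the product origin)."""
    x = 0.0
    for _ in range(steps):
        drive = c * x + tau * u      # product origin + input termination
        x += dt / tau * (drive - x)  # exponential synapse dynamics
    return x

print(simulate(1.0))  # pure integrator: about 1.0 after one second
print(simulate(0.3))  # leaky integrator: settles near u*tau/(1-c), about 0.14
```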

p4-9.png

  • Go to Interactive Plots and show a graph for the value of the ensemble (right-click->X->value). If you run the simulation, this graph will show the values of both variables stored in this ensemble (the integrated value and the control signal). For clarity, turn off the display of the control signal by right-clicking on the graph and removing the checkmark beside “v[1]”.
  • The performance of this model should be similar to that of the non-controlled integrator.

p4-107.png

  • Now adjust the control input to be 0.3 instead of 1. This will make the integrator into a leaky integrator. This value adjusts how quickly the integrator forgets over time.

p4-108.png

  • You can also run this example using scripting
run demo/controlledintegrator.py

Part Five: Cognitive Models

Larger Systems

  • So far, we’ve seen how to implement the various basic components
    • representations
    • linear transformation
    • non-linear transformation
    • feedback
  • The goal is to use these components to build full cognitive models using spiking neurons
    • Constrained by the actual properties of real neurons in real brains (numbers of neurons, connectivity, neurotransmitters, etc)
    • Should be able to produce behavioural predictions in terms of timing, accuracy, lesion effects, drug treatments, etc
  • Some simple examples
    • Motor control
      • take an existing engineering control model for what angles to move the joints to in order to place the hand at a particular position
run demo/armcontrol.py

p5-101.png

  • Braitenberg vehicle
    • connect range sensors to opposite motors on a wheeled robot
run demo/vehicle.py

p5-102.png

Binding Semantic Pointers (SPs)

  • We want to manipulate sophisticated representational states (this is the purpose of describing the semantic pointer architecture (SPA))
  • The main operation to manipulate representations in the SPA is circular convolution (for binding)
  • Let’s explore a binding circuit for semantic pointers

  • Input: Two semantic pointers (high-dimensional vectors)

  • Output: One semantic pointer (binding the original two)

  • Implementation: element-wise multiplication of DFT (as described in slides)

run demo/convolve.py
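
The binding operation the demo implements can be sketched in a few lines of numpy, assuming the usual DFT definition of circular convolution and the involution approximate inverse (this mirrors the math, not the spiking network):

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: elementwise multiplication of DFTs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """The approximate inverse ~a: first element kept, rest reversed."""
    return np.concatenate(([a[0]], a[:0:-1]))

rng = np.random.default_rng(0)
D = 100
a = rng.normal(0, 1 / np.sqrt(D), D)  # random ~unit-length semantic pointers
b = rng.normal(0, 1 / np.sqrt(D), D)

bound = cconv(a, b)                    # a*b
unbound = cconv(involution(a), bound)  # ~a*(a*b), approximately b
sim = np.dot(unbound, b) / (np.linalg.norm(unbound) * np.linalg.norm(b))
print(sim)                             # well above chance, though below 1
```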

p5-201.png

  • To deal with high-dimensional vectors, we don’t want to have to set each individual value for each vector
    • would need 100 controls to configure a single 100-dimensional vector
  • Nengo has a specialized “semantic pointer” graph for these high-dimensional cases

    • Instead of showing the value of each element in the vector (as with a normal graph), it shows the similarity between the currently represented vector and all the known vectors
    • “How much like CAT is this? How much like DOG? How much like RED? How much like TRIANGLE?”
    • You can configure which comparisons are shown using the right-click menu
    • You can also use it to set the contents of a neural group by right-clicking and choosing “set value”. This will force the neurons to represent the given semantic pointer. You can go back to normal behaviour by selecting “release value”.
  • Use the right-click menu to set the input values to “a” and “b”. The output should be similar to “a*b”.

    • This shows that the network is capable of computing the circular convolution operation, which binds two semantic pointers to create a third one.
  • Use the right-click menu to set the input values to “a” and “~a*b”. The output should be similar to “b”.
    • This shows that convolution can be used to transform representations via binding and unbinding, since “~a*(a*b)” is approximately “b”.

Control and Action Selection: Basal Ganglia

  • Pretty much every cognitive model has an action selection component
    • Out of many possible things you could do right now, pick one
    • Usually mapped on to the basal ganglia
    • Some sort of winner-take-all calculation based on how suitable the various possible actions are to the current situation
  • Input: A vector representing how good each action is (for example, [0.2, 0.3, 0.9, 0.1, 0.7])
  • Output: Which action to take ([0, 0, 1, 0, 0])

    • Actually, the output from the basal ganglia is inhibitory, so the output is more like [1, 1, 0, 1, 1]
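Ignoring the neural dynamics, the input/output mapping just described can be summarized in a few lines (a functional sketch only; the model below computes this with spiking neurons rather than an argmax):

```python
def select(utilities):
    """Winner-take-all with inhibitory output: the winning action's
    inhibition is released (0), all other actions stay inhibited (1)."""
    winner = max(range(len(utilities)), key=lambda i: utilities[i])
    return [0 if i == winner else 1 for i in range(len(utilities))]

print(select([0.2, 0.3, 0.9, 0.1, 0.7]))
```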
  • Implementation

    • Could try doing it as a direct function
      • Highly non-linear function
      • Low accuracy
    • Could do it by setting up inhibitory interconnections
      • Like the integrator, but any value above zero would also act to decrease the others
      • Often used in non-spiking neural networks (e.g. PDP++) to do k-winner-take-all
      • But, you have to wait for the network to settle, so it can be rather slow
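The settling problem can be seen in a toy version of the mutual-inhibition scheme (a sketch, not the PDP++ implementation): each unit is driven by its own input and inhibited by the summed activity of the others, and the state must be iterated until it converges.

```python
import numpy as np

def settle_wta(u, w=1.0, dt=0.05, steps=2000):
    """Mutual-inhibition winner-take-all. The loop is the point: the
    network only reaches a decision after many settling iterations."""
    u = np.asarray(u, dtype=float)
    x = np.zeros_like(u)
    for _ in range(steps):
        inhibition = w * (x.sum() - x)   # inhibition from all other units
        x += dt * (u - x - inhibition)
        x = np.maximum(x, 0.0)           # rectify: rates cannot go negative
    return x

print(settle_wta([0.2, 0.3, 0.9, 0.1, 0.7]))  # only the 0.9 unit survives
```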
    • Gurney, Prescott, & Redgrave (2001)
      • Model of action selection constrained by the connectivity of the basal ganglia

p5-103.png

  • Each component computes the following function

p5-104.png
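As a rough sketch of the function in the figure: each GPR component uses a piecewise-linear output, silent below a threshold and ramping up linearly above it. The threshold and slope values here are purely illustrative; see the slides for the actual parameters.

```python
def gpr_output(a, eps=0.2, m=1.0):
    """Piecewise-linear ramp: zero below threshold eps, slope m above it,
    saturating at 1 (eps and m are illustrative placeholders)."""
    return min(max(m * (a - eps), 0.0), 1.0)

print([gpr_output(a) for a in (0.0, 0.5, 1.5)])
```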

  • Their model uses unrealistic rate neurons with that function as their output
  • We can use populations of spiking neurons and compute that function
  • We can also use correct timing values for the neurotransmitters involved
run demo/basalganglia.py
  • Adjust the input controls to change the five utility values being selected between
  • Graph shows the output from the basal ganglia (each line shows a different action)
  • The selected action is the one whose output is driven to zero (i.e. its inhibition is released)

p5-105.png

  • Comparison to neural data
    • Ryan & Clark, 1991
    • Stimulate regions in medial orbitofrontal cortex, measure from GPi, see how long it takes for a response to occur

p5-106.png

  • To replicate
    • Set the inputs to [0, 0, 0.6, 0, 0]
    • Run simulation for a bit, then pause it
    • Set the inputs to [0, 0, 0.6, 1, 0]
    • Continue simulation
    • Measure how long it takes for the neurons for the fourth action to stop firing

p5-107.png

  • In rats: 14-17ms. In model: 14ms (or more if the injected current isn’t extremely large)

p5-108.png

Sequences of Actions

  • To do something useful with the action selection system we need two things
    • A way to determine the utility of each action given the current context
    • A way to take the output from the action selection and have it affect behaviour
  • We do this using the representations of the semantic pointer architecture
    • Any cognitive state is represented as a high-dimensional vector (a semantic pointer)
    • Working memory stores semantic pointers (using an integrator)
    • Calculate the utility of an action by computing the dot product between the current state and the state for the action (i.e. the IF portion of an IF-THEN production rule)
      • This is a linear operation, so we can directly compute it using the connection weights between the cortex and the basal ganglia
    • The THEN portion of a rule says what semantic pointers to send to what areas of the brain. This is again a linear operation that can be computed on the output of the thalamus using the output from the basal ganglia
  • Simple example:
    • Five possible states: A, B, C, D, and E
    • Rules for IF A THEN B, IF B THEN C, IF C THEN D, IF D THEN E, IF E THEN A
    • Five `production rules’ (semantic pointer mappings) cycling through the five states
run demo/sequence.py

p5-109.png

  • Can set the contents of working memory in Interactive Plots by opening an SP graph, right-clicking on it, and choosing “set value” (use “release value” to allow the model to change the contents)
  • Cycle time is around 40ms, slightly faster than the standard 50ms value used in ACT-R, Soar, EPIC, etc.
    • This depends on the time constant for the neurotransmitter GABA

p5-110.png

Routing of Information

  • What about more complex actions?
    • Same model as above, but we want visual input to be able to control where we start the sequence
    • Simple approach: add a visual buffer and connect it to the working memory
run demo/sequencenogate.py

p5-113.png

  • Problem: If this connection always exists, then the visual input will always override what’s in working memory. This connection needs to be controllable

  • Solution

    • Actions need to be able to control the flow of information between cortical areas.
    • Instead of sending a particular SP to working memory, we need “IF X THEN transfer the pattern in cortex area Y to cortex area Z”
    • In this case, we add a rule that says “IF it contains a letter, transfer the data from the visual area to working memory”
      • We make the utility of the rule lower than the utility of the sequence rules, so that it will only transfer that information (open that gate) when no other action applies.
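The gating idea can be sketched in a few lines (all names and the update rule are illustrative; in the model the gate is implemented by the basal ganglia inhibiting or releasing the connection):

```python
def step(memory, visual, gate, rate=0.1):
    """One update of working memory: the visual buffer leaks into memory
    only while the action system holds the gate open."""
    if not gate:
        return memory                                # gate closed: no routing
    return [m + rate * (v - m) for m, v in zip(memory, visual)]

memory, visual = [1.0, 0.0], [0.0, 1.0]
for _ in range(100):
    memory = step(memory, visual, gate=True)         # gated transfer
print(memory)  # memory has moved to the visual pattern
```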
run demo/sequencerouted.py

p5-112.png

  • The pattern in the visual buffer is successfully transferred to working memory, then the sequence is continued from that letter.

p5-111.png

  • Takes longer (60-70ms) for these more complex productions to occur

Question Answering

  • The control signal in the previous network can also be another semantic pointer that binds/unbinds the contents of the visual buffer (instead of just a gating signal)
    • This more flexible control does not add processing time
    • Allows processing the representations while routing them
  • This allows us to perform arbitrary symbol manipulation such as “take the contents of buffer X, unbind it with buffer Y, and place the results in buffer Z”
  • Example: Question answering
    • System is presented with a statement such as “red triangle and blue circle”
      • a semantic pointer representing this statement is placed in the visual cortical area
      • “statement+red*triangle+blue*circle”
    • Statement is removed after a period of time
    • Now a question is presented, such as “What was Red?”
      • “question+red” is presented to the same visual cortical area as before
    • Goal is to place the correct answer in a motor cortex area (in this case, “triangle”)
  • This is achieved by creating two action rules:
    • If a statement is in the visual area, move it to working memory (as in the previous example)
    • If a question is in the visual area, unbind it with working memory and place the result in the motor area
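The unbinding step behind the second rule can be verified numerically (a sketch of the vector math only; the vocabulary and dimensionality are illustrative, and NumPy stands in for the 50,000-neuron network):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 256  # illustrative dimensionality

def unit(v):
    return v / np.linalg.norm(v)

def cconv(a, b):
    """Circular convolution (binding) via the DFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    """Approximate inverse "~a" for unbinding."""
    return np.concatenate(([a[0]], a[:0:-1]))

vocab = {n: unit(rng.standard_normal(D))
         for n in ["statement", "question", "red", "blue", "triangle", "circle"]}

# "red triangle and blue circle": statement + red*triangle + blue*circle
stmt = (vocab["statement"]
        + cconv(vocab["red"], vocab["triangle"])
        + cconv(vocab["blue"], vocab["circle"]))

# "What was red?": unbind the stored statement with RED
answer = cconv(inv(vocab["red"]), stmt)
best = max(vocab, key=lambda n: float(np.dot(unit(answer), vocab[n])))
print(best)  # the closest vocabulary item should be "triangle"
```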
  • This example requires a much larger simulation than any of the others in this tutorial (more than 50,000 neurons). If you run this script, Nengo may take a long time (hours!) to solve for the decoders and neural connection weights needed. We have pre-computed the larger of these networks for you, and they can be downloaded at http://ctn.uwaterloo.ca/~cnrglab/f/question.zip.

p5-202.png

run demo/question.py