My current project demonstrates that cleanup memories (a form of autoassociative memory), combined with semantic pointers, offer the best current solution to the problem of human-scale connectionist knowledge representation. We show this by encoding and decoding the primary relations in WordNet, a human-scale knowledge base of roughly 117,000 concepts, in a biologically plausible spiking neural network, using fewer neural resources than any previous approach would require. Code for this network is available on GitHub.
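The core idea can be sketched in a few lines: concepts are stored as high-dimensional vectors, and a cleanup memory maps a noisy vector back to the closest stored one. This is a minimal illustration assuming random bipolar vectors stand in for semantic pointers; the vocabulary, dimensionality, and concept names below are illustrative, not taken from the WordNet model.

```python
import random

random.seed(0)
D = 64  # pointer dimensionality (illustrative)

def random_pointer(d=D):
    """A random bipolar vector standing in for a semantic pointer."""
    return [random.choice((-1.0, 1.0)) for _ in range(d)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A tiny vocabulary of named pointers (hypothetical concept names).
vocab = {name: random_pointer() for name in ("DOG", "CAT", "TREE")}

def cleanup(noisy, vocab):
    """Map a noisy vector to the best-matching stored pointer (autoassociation)."""
    return max(vocab, key=lambda name: dot(noisy, vocab[name]))

# Corrupt the DOG pointer by flipping some components, then clean it up.
noisy = list(vocab["DOG"])
for i in random.sample(range(D), 10):
    noisy[i] = -noisy[i]

print(cleanup(noisy, vocab))  # recovers "DOG"
```

Because random high-dimensional vectors are nearly orthogonal, the corrupted query still has far higher similarity to its source pointer than to any other, which is what makes cleanup reliable at scale.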
I am also thinking about the problem of learnable cleanup memories, which would adapt as the statistics of their inputs change. A potential application of such a learnable memory is the hippocampus, where neural representations change on medium-term timescales in response to changing task demands and learned associations.
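As a generic illustration of what "learnable" means here, a Hopfield-style autoassociative network acquires new cleanup targets through a one-shot Hebbian weight update. This is a toy sketch of the general technique, not the learning rule or architecture proposed above; sizes and patterns are arbitrary.

```python
import random

random.seed(1)
D = 32  # pattern dimensionality (illustrative)

def random_pattern(d=D):
    return [random.choice((-1.0, 1.0)) for _ in range(d)]

# Weight matrix, initially empty; learning adds outer products.
W = [[0.0] * D for _ in range(D)]

def learn(pattern):
    """One-shot Hebbian update: W += p p^T (zero diagonal)."""
    for i in range(D):
        for j in range(D):
            if i != j:
                W[i][j] += pattern[i] * pattern[j]

def recall(cue, steps=5):
    """Iteratively clean up a cue via thresholded recurrent dynamics."""
    state = list(cue)
    for _ in range(steps):
        state = [1.0 if sum(W[i][j] * state[j] for j in range(D)) >= 0 else -1.0
                 for i in range(D)]
    return state

patterns = [random_pattern() for _ in range(3)]
for p in patterns:
    learn(p)

# Corrupt the first stored pattern and let the network clean it up.
cue = list(patterns[0])
for i in random.sample(range(D), 4):
    cue[i] = -cue[i]

restored = recall(cue)
overlap = sum(a * b for a, b in zip(restored, patterns[0]))
print(overlap, "of", D)  # high overlap means the cue was cleaned up
```

Because the weights are built incrementally from whatever patterns arrive, the set of attractors tracks the input statistics, which is the property a learnable cleanup memory needs.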
I am broadly interested in machine learning, neural coding, and neural computation in general. I am particularly interested in neural implementations of Bayesian inference, which would give neural agents the ability to represent and manage uncertainty about the world. Does the brain represent probability distributions, and if so, how? How do these distributions relate to each other, and how can they be combined to yield complex inferences? Is every variable in the environment represented as a probability distribution, or only some of them? How are these distributions updated by data coming in from the world? Finally, how does neural data constrain our answers to these questions?
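The kind of updating at issue can be made concrete with a minimal example: a discrete posterior over a coin's bias, revised as observations arrive. The hypothesis grid and data below are purely illustrative, and nothing about a neural implementation is claimed.

```python
# Hypotheses: candidate values for the coin's heads-probability.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1 / len(hypotheses)] * len(hypotheses)  # uniform prior

def update(belief, heads):
    """One Bayes update: posterior is proportional to likelihood * prior."""
    likelihood = [h if heads else (1 - h) for h in hypotheses]
    unnorm = [l * b for l, b in zip(likelihood, belief)]
    z = sum(unnorm)  # normalizing constant
    return [u / z for u in unnorm]

# Observe a run of mostly heads; the posterior shifts toward a biased coin.
posterior = prior
for heads in [True, True, False, True, True]:
    posterior = update(posterior, heads)

best = hypotheses[posterior.index(max(posterior))]
print(best)  # → 0.7
```

Sequential updates like this are equivalent to a single batch update on all the data, which is one reason recursive Bayesian schemes are attractive candidates for online neural computation.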
I spent three of my undergraduate co-op terms at the CNRG, where I worked on parallel implementations of neural networks. I wrote a GPU implementation for Nengo, our software package based on the Neural Engineering Framework, permitting hundreds of thousands of neurons to be simulated in parallel.
I completed a Master's in Computer Science at the CNRG. In 2012 I graduated from the University of Waterloo's BMath (CS) Co-op program with the Cognitive Science option.