Many algorithms for learning the weights of (artificial) neural networks operate the same way: a standard dataset is split into training, validation, and test sets; a network model performs unsupervised, generative, or discriminative modeling; and the result is the error on the test set.
The goal is to achieve the lowest test error in the shortest amount of training time.
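The standard workflow above can be sketched in a few lines. This is a minimal sketch; the function name and split fractions are illustrative, not from any particular library:

```python
import random

def train_val_test_split(data, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle a dataset and split it into train/validation/test subsets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

# toy "dataset" of 100 examples
examples = list(range(100))
train, val, test = train_val_test_split(examples)
# train on `train`, tune hyperparameters on `val`, report error on `test`
```

The fixed, one-shot nature of this split is exactly what the next paragraph is pushing against.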
However, this is not the way the brain learns. Learning is performed constantly and actively (cf. Active Learning). It would be of great interest to me to test learning algorithms that are always learning (reinforcement/unsupervised).
JJ Hopfield mentioned that one of his most recent projects is simulating, using realistic network models, a rat's (or mine) mental process of planning a route from place A to place B, where food resides. The idea is that after the rat learns the environment very well, it is placed at a new location A that it has never been to, with obstacles between A and B. What Hopfield was interested in was having the rat explore the solution space without activating its motor neurons and physically exploring. This is clearly a problem of visual thinking. I suggested just mentally visualizing ways to get to B and then selecting the route that maximizes some criterion. He said I was thinking too much like a computer scientist, and that the real questions for biological networks are how the mental process actually works and how the motor neurons are suppressed.
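For what it's worth, the computer-scientist reading I was offering (which, per Hopfield, misses the biological point) can be sketched as a plain search over an internal map: candidate moves are explored "mentally" and only the chosen route would ever be handed to the motor system. Everything here is illustrative; the grid, the shortest-path criterion, and the function name are my assumptions:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over an internal map: explore candidate moves
    without acting, and return the shortest route from start to goal."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # first path to reach the goal is shortest (BFS)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

# 0 = open, 1 = obstacle; A at top-left area, B at top-right, wall between
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = plan_route(grid, (0, 0), (0, 2))
```

The hard part Hopfield cares about is, of course, what this search looks like as neural dynamics, and how the winning route is kept from driving behavior during the search.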
All of the cogsci tutorials are now available here. It's the most recent set, and now includes usage of Terry's awesome visualizer.
Bryan graduated this weekend! Here's a link to a profile that Arts did on him:
Dimitri will be visiting again at the end of the Summer.
Part of his visit will overlap with Charlie's, so we'll be busy!