Thoughts on Network Learning

Many algorithms for learning the weights of (artificial) neural networks operate as follows: a standard dataset is split into training, validation, and test sets; the network model is trained to perform unsupervised generative or discriminative modeling; and the result is reported as the error on the test set. The goal is to achieve the lowest test error in the shortest amount of training time.
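
To make that workflow concrete, here is a minimal sketch in Python; the split ratios, and the train_model and evaluate routines, are placeholder assumptions rather than anything specific from the text.

    import numpy as np

    def split_dataset(X, y, train=0.8, val=0.1, seed=0):
        # Shuffle the indices, then cut them into train / validation / test chunks.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        n_train = int(train * len(X))
        n_val = int(val * len(X))
        tr, va, te = np.split(idx, [n_train, n_train + n_val])
        return (X[tr], y[tr]), (X[va], y[va]), (X[te], y[te])

    # Hypothetical usage (train_model and evaluate are placeholders):
    # (X_tr, y_tr), (X_va, y_va), (X_te, y_te) = split_dataset(X, y)
    # model = train_model(X_tr, y_tr, X_va, y_va)   # tune on the validation set
    # test_error = evaluate(model, X_te, y_te)      # the single number reported at the end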

However, this is not the way the brain learns. Learning is performed constantly and actively (cf. Active Learning). It would be of great interest to me to test learning algorithms that are always learning (reinforcement/unsupervised).
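
As a rough illustration of "always learning" (my own sketch, not a method from the text), the agent below never sees a fixed train/test split: it updates its weights after every single observation from an endless stream. The environment_stream interface and the squared-error update rule are assumptions.

    import numpy as np

    class OnlineLearner:
        # A toy always-learning agent: a linear model updated after every observation.
        def __init__(self, n_features, lr=0.01):
            self.w = np.zeros(n_features)
            self.lr = lr

        def predict(self, x):
            return self.w @ x

        def update(self, x, target):
            # One step of stochastic gradient descent on squared error.
            error = self.predict(x) - target
            self.w -= self.lr * error * x
            return error

    # Hypothetical endless loop: there is no separate test phase; performance is a
    # running error measured while the agent keeps learning from the stream.
    # learner = OnlineLearner(n_features=10)
    # for x, target in environment_stream():   # assumed, never-exhausted data source
    #     err = learner.update(x, target)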


Visual thinking using neural networks

JJ Hopfield mentioned that one of his most recent projects is to simulate, using realistic network models, a rat's (or my own) mental process of going from place A to place B, where food resides. The idea is that after the rat has learned the environment very well, it is placed at a new location A that it has never been to, with obstacles between A and B. What Hopfield was interested in is having the rat explore the solution space without activating its motor neurons and physically exploring. This is clearly a problem of visual thinking. I suggested just mentally visualizing ways to get to B and then selecting the route that maximizes some criterion. He said I was thinking too much like a computer scientist, and that the problem with real networks is how the mental process actually works and how the motor neurons are suppressed.
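
For what it's worth, here is a sketch of the "computer scientist" reading of my suggestion: sample candidate routes inside a learned internal model of the environment (no motor commands issued), score them, and keep the best. The model.neighbors interface and the scoring criterion are assumptions of mine; this is emphatically not Hopfield's network-level question of how such simulation is implemented or how motor output is suppressed.

    import random

    def plan_route(model, start, goal, n_rollouts=100, max_steps=50):
        # Mental simulation: roll out candidate routes in the internal model,
        # score each, and return the best one found.
        best_route, best_score = None, float("-inf")
        for _ in range(n_rollouts):
            pos, route = start, [start]
            for _ in range(max_steps):
                # model.neighbors(pos) is an assumed interface: the positions the
                # learned environment model believes are reachable from pos.
                pos = random.choice(model.neighbors(pos))
                route.append(pos)
                if pos == goal:
                    break
            # Criterion: strongly prefer routes that reach the goal; shorter is better.
            score = (1000 if route[-1] == goal else 0) - len(route)
            if score > best_score:
                best_route, best_score = route, score
        return best_route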