Many algorithms for learning the weights of (artificial) neural networks operate as follows: the dataset is split into training, validation, and test sets; a network model is trained to perform unsupervised, generative, or discriminative modeling; and the result is reported as the error on the test set. The goal is to achieve the lowest test error in the shortest amount of training time.
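As a minimal sketch of that standard workflow (the dataset, model, and split ratios below are placeholder choices, not taken from any particular paper):

```python
# A minimal sketch of the standard train/validate/test workflow described above.
# The dataset, model, and split ratios are hypothetical placeholders.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Hold out a test set, then split the remainder into train and validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, test_size=0.25, random_state=0)

# Train a small network (discriminative modeling in this case).
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Validation error guides hyperparameter choices; test error is the reported result.
val_error = 1.0 - model.score(X_val, y_val)
test_error = 1.0 - model.score(X_test, y_test)
print(f"validation error: {val_error:.3f}, test error: {test_error:.3f}")
```

Once the test error is reported, learning stops; the trained weights are frozen.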
However, this is not the way the brain learns. The brain learns constantly and actively (cf. Active Learning), with no separate training and testing phases. It would be of great interest to me to test learning algorithms that are always learning (reinforcement/unsupervised).
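As a rough sketch of what "always learning" could look like in code, here is an epsilon-greedy bandit loop standing in for a reinforcement-learning setting; the reward probabilities and hyperparameters are hypothetical, and this is only one simple instance of the idea, not a proposed algorithm:

```python
# A rough sketch of an agent that keeps learning from every interaction,
# with no separate train/test phases. All numbers are placeholder choices.
import numpy as np

rng = np.random.default_rng(0)
true_reward_probs = np.array([0.2, 0.5, 0.8])   # unknown to the agent
value_estimates = np.zeros(3)
action_counts = np.zeros(3)
epsilon = 0.1

for step in range(10_000):                       # in principle this loop never ends
    # Explore occasionally, otherwise exploit the current estimates.
    if rng.random() < epsilon:
        action = int(rng.integers(3))
    else:
        action = int(np.argmax(value_estimates))

    reward = float(rng.random() < true_reward_probs[action])

    # Incremental update: learning happens on every single interaction.
    action_counts[action] += 1
    value_estimates[action] += (reward - value_estimates[action]) / action_counts[action]

print("estimated values:", np.round(value_estimates, 2))
```

The point is not the bandit itself but the shape of the loop: observation, action, and weight update are interleaved forever, much closer in spirit to how the brain appears to learn.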