Neuroscience podcast
Trevor just sent this to me and it looks pretty neat, if anyone else has any other neuroscience lecture series please post them here!
So, I briefly skimmed this and didn't read any of the actual scientific articles, but it seemed like people might be interested, so I thought I'd post it.
Here's a video of some work involving that guy Schaal I was talking about at the last meeting. This stuff is crazy and awesome.
My pal just showed me this link and I thought it was neat! Figured you'd be interested if you haven't seen it already.
http://www.technovelgy.com/ct/Science-Fiction-News.asp?NewsNum=415
A lot of algorithms for learning the weights of (artificial) neural networks operate as follows: a standard dataset is split into training, validation, and test sets; the network model performs unsupervised generative or discriminative modeling; and the reported result is the error on the test set. The goal is to achieve the lowest test error in the shortest amount of training time.
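The workflow above can be sketched in a few lines. This is just a toy illustration (made-up data and a trivial one-weight "model", not any particular framework's API): split the data, fit on the training portion, and report held-out test error.

```python
# Minimal sketch of the standard split-train-evaluate workflow.
# The dataset and "model" here are hypothetical toys for illustration.
import random

random.seed(0)

# Toy dataset: inputs x, targets y = 2*x plus a little noise.
data = [(x, 2 * x + random.uniform(-0.1, 0.1)) for x in range(1, 100)]
random.shuffle(data)

# Standard split: training, validation, and test sets.
train, val, test = data[:70], data[70:85], data[85:]

# "Train": fit a single weight w by averaging y/x over the training set.
w = sum(y / x for x, y in train) / len(train)

def mse(split, w):
    """Mean squared error of the model y_hat = w * x on a data split."""
    return sum((w * x - y) ** 2 for x, y in split) / len(split)

# The reported result is the error on the held-out test set.
print(round(w, 3), round(mse(test, w), 4))
```

In practice the validation set would be used to pick hyperparameters before the final test evaluation; it's unused in this tiny sketch.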
However, this is not the way the brain learns. Learning is performed constantly and actively (cf. Active Learning). It would be of great interest to me to test learning algorithms that are always learning (reinforcement/unsupervised).
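To make the contrast concrete, here's a toy sketch of the "always learning" idea: instead of freezing after a training phase, the learner does an online update on every example it sees, so it can track an environment that changes. Everything here (the stream, the drift point, the learning rate) is a made-up assumption for illustration.

```python
# Toy illustration of an always-learning (online) setup: the model
# updates on every incoming example rather than on a fixed split.
# Stream, drift point, and learning rate are hypothetical choices.
import random

random.seed(1)

w = 0.0    # single weight, model y_hat = w * x
lr = 0.01  # learning rate (arbitrary choice)

def step(w, x, y, lr):
    """One online SGD update on the squared error (w*x - y)**2."""
    grad = 2 * (w * x - y) * x
    return w - lr * grad

# Stand-in for an endless stream: the true relation drifts halfway
# through, and the learner keeps adapting instead of being frozen.
for t in range(2000):
    x = random.uniform(0, 1)
    true_w = 2.0 if t < 1000 else -1.0  # environment changes at t=1000
    y = true_w * x + random.uniform(-0.05, 0.05)
    w = step(w, x, y, lr)

print(round(w, 2))
```

A model trained once on the first half of the stream would keep predicting with w near 2 forever; the online learner ends up tracking the new relation instead.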