A neural model of hierarchical reinforcement learning

PLoS ONE, 2017

Daniel Rasmussen, Aaron R. Voelker, Chris Eliasmith

Abstract

We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, reflecting our aim of understanding the brain's general reinforcement learning ability. These results show that the model compares well to previous modelling work, and that its hierarchical structure improves its performance. We also show that the model's behaviour is consistent with available data on human hierarchical RL, and we generate several novel predictions.
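The hierarchical decomposition the abstract describes is commonly formalized as the options framework: temporally extended actions, each with its own policy and termination condition, learned at the top level with SMDP Q-learning. The sketch below is only a conventional algorithmic caricature of that idea, not the paper's biologically detailed neural implementation; the chain environment, the two hand-coded options, and the hyperparameters are illustrative assumptions rather than details from the paper.

import random

# Hypothetical 1D chain environment (not from the paper):
# states 0..N-1, reward 1 on reaching the right end.
N = 10
GOAL = N - 1

def step(s, a):
    """Primitive step: a in {-1, +1}."""
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

# Each option is (policy, termination set); here, "run to one end".
OPTIONS = [
    (lambda s: -1, {0}),       # option 0: head left until state 0
    (lambda s: +1, {GOAL}),    # option 1: head right until the goal
]

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = [[0.0] * len(OPTIONS) for _ in range(N)]  # Q(s, o) over options

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy choice among options at the top level
        if random.random() < epsilon:
            o = random.randrange(len(OPTIONS))
        else:
            o = max(range(len(OPTIONS)), key=lambda i: Q[s][i])
        policy, beta = OPTIONS[o]
        s0, r_cum, k = s, 0.0, 0
        # execute the option to termination, accumulating discounted reward
        while True:
            s, r, done = step(s, policy(s))
            r_cum += (gamma ** k) * r
            k += 1
            if s in beta or done:
                break
        # SMDP Q-learning update: bootstrap across the k-step jump
        target = r_cum + (gamma ** k) * max(Q[s])
        Q[s0][o] += alpha * (target - Q[s0][o])

print("Learned option values at the start state:", Q[0])

The detail worth noting is the update over a k-step jump: the accumulated discounted reward plus a gamma^k-discounted bootstrap. This is what lets values propagate over temporally extended actions, the core property any HRL implementation, neural or otherwise, must capture.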


Journal Article

DOI
10.1371/journal.pone.0180234
Journal
PLoS ONE
Volume
12
Number
7
Pages
1–39
Publisher
Public Library of Science
