Reinforcement Learning with Forgetting: Linking Sustained Dopamine to Motivation (Kato Morita 2016)
"It has been suggested that dopamine (DA) represents reward-prediction-error (RPE) defined in reinforcement learning and therefore DA responds to unpredicted but not predicted reward. However, recent studies have found DA response sustained towards predictable reward in tasks involving self-paced behavior, and suggested that this response represents a motivational signal. We have previously shown that RPE can sustain if there is decay/forgetting of learned-values, which can be implemented as decay of synaptic strengths storing learned-values. This account, however, did not explain the suggested link between tonic/sustained DA and motivation. In the present work, we explored the motivational effects of the value-decay in self-paced approach behavior, modeled as a series of ‘Go’ or ‘No-Go’ selections towards a goal. Through simulations, we found that the value-decay can enhance motivation, specifically, facilitate fast goal-reaching, albeit counterintuitively. ..."
Kato A, Morita K (2016) Forgetting in Reinforcement Learning Links Sustained Dopamine Signals to Motivation. PLoS Comput Biol
Authors: Kato, Ayaka; Morita, Kenji [morita at p.u-tokyo.ac.jp]
© This site is Copyright 2018 Shepherd Lab, Yale University