Speed/accuracy trade-off between the habitual and the goal-directed processes (Keramati et al. 2011)


Accession: 195856
"This study is a reference implementation of Keramati, Dezfouli, and Piray 2011, which proposed an arbitration mechanism between a goal-directed strategy and a habitual strategy, used to model the behavior of rats in instrumental conditioning tasks. The habitual strategy is the Kalman Q-Learning from Geist, Pietquin, and Fricout 2009. We replicate the results of the first task, i.e. the devaluation experiment with two states and two actions. ..."
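As a rough illustration of the habitual component described above, the sketch below implements a simplified, tabular Kalman-style Q-value update for a two-state/two-action task. The diagonal (per-entry) Kalman approximation and the parameter names (`gamma`, `eta`, `sigma_obs`) are assumptions for illustration, not the exact KTD algorithm of Geist, Pietquin, and Fricout 2009 or the parameters of the replicated model.

```python
import numpy as np

n_states, n_actions = 2, 2
gamma = 0.95       # discount factor (assumed value)
eta = 1e-3         # process-noise variance (assumed value)
sigma_obs = 0.1    # observation-noise variance (assumed value)

mu = np.zeros((n_states, n_actions))   # posterior mean of each Q-value
var = np.ones((n_states, n_actions))   # posterior variance of each Q-value

def kalman_q_update(s, a, r, s_next):
    """One diagonal-Kalman update of Q(s, a) from a transition (s, a, r, s')."""
    var[s, a] += eta                                   # diffuse with process noise
    delta = r + gamma * mu[s_next].max() - mu[s, a]    # TD prediction error
    k = var[s, a] / (var[s, a] + sigma_obs)            # Kalman gain
    mu[s, a] += k * delta                              # move mean toward target
    var[s, a] *= (1.0 - k)                             # shrink posterior variance
    return delta
```

In the model being replicated, the point of tracking a posterior variance (rather than a plain Q-table) is that this uncertainty feeds the arbitration mechanism, which decides when consulting the slower goal-directed system is worth the deliberation time.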
References:
1. Keramati M, Dezfouli A, Piray P (2011) Speed/accuracy trade-off between the habitual and the goal-directed processes. PLoS Comput Biol 7:e1002055 [PubMed]
2. Viejo G, Girard B, Khamassi M (2016) [Re] Speed/accuracy trade-off between the habitual and the goal-directed processes. ReScience 2(1):1-5
Model Information
Model Type:
Brain Region(s)/Organism: Basal ganglia;
Cell Type(s):
Channel(s):
Gap Junctions:
Receptor(s):
Gene(s):
Transmitter(s):
Simulation Environment: Python (web link to model);
Model Concept(s): Action Selection/Decision Making; Reinforcement Learning; Learning;
Implementer(s): Viejo, Guillaume [guillaume.viejo at isir.upmc.fr]; Girard, Benoit [girard at isir.upmc.fr]; Khamassi, Mehdi;