Dynamic dopamine modulation in the basal ganglia: Learning in Parkinson's (Frank et al 2004, 2005)

Accession: 79488
See the README file for information on how to run the model under the different tasks and under simulated Parkinson's and medication conditions.
References:
1. Frank MJ (2005) Dynamic dopamine modulation in the basal ganglia: a neurocomputational account of cognitive deficits in medicated and nonmedicated Parkinsonism. J Cogn Neurosci 17:51-72 [PubMed]
2. Frank MJ, Seeberger LC, O'Reilly RC (2004) By carrot or by stick: cognitive reinforcement learning in parkinsonism. Science 306:1940-3 [PubMed]
Model Information:
Model Type: Realistic Network;
Brain Region(s)/Organism: Basal ganglia;
Cell Type(s): Neostriatum medium spiny direct pathway GABA cell;
Channel(s): I Na,t; I K,leak; I Cl,Ca;
Receptor(s): D1; D2; Glutamate; GABA;
Transmitter(s): Dopamine; GABA; Glutamate;
Simulation Environment: Emergent/PDP++;
Model Concept(s): Simplified Models; Synaptic Plasticity; Pathophysiology; Rate-coding model neurons; Parkinson's; Reinforcement Learning; Action Selection/Decision Making; Hebbian plasticity;
Implementer(s): Frank, Michael [mfrank at u.arizona.edu];
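
For orientation, the core concept behind the model (per the references above) is that phasic dopamine bursts during positive feedback drive Hebbian learning in D1-dominated "Go" units, while dopamine dips during negative feedback drive learning in D2-dominated "NoGo" units. Below is a purely illustrative C sketch of such a three-factor update; it is not the model's actual Leabra equations, and LRATE, the pre/post activities, and the signed da signal are all placeholders.

#include <stdio.h>

#define LRATE 0.1f /* placeholder learning rate */

/* Illustrative three-factor rule: dopamine gates Hebbian learning with
   opposite sign in the Go (D1) and NoGo (D2) pathways.
   da > 0 = burst (positive feedback), da < 0 = dip (negative feedback). */
void dopamine_update(float *w_go, float *w_nogo,
                     float pre, float post_go, float post_nogo, float da) {
  *w_go   += LRATE *  da * pre * post_go;   /* bursts strengthen Go  */
  *w_nogo += LRATE * -da * pre * post_nogo; /* dips strengthen NoGo  */
}

int main(void) {
  float w_go = 0.5f, w_nogo = 0.5f;
  dopamine_update(&w_go, &w_nogo, 1.0f, 0.8f, 0.6f, +1.0f); /* rewarded trial */
  printf("after reward: w_go=%.2f w_nogo=%.2f\n", w_go, w_nogo);
  dopamine_update(&w_go, &w_nogo, 1.0f, 0.8f, 0.6f, -1.0f); /* error trial */
  printf("after error:  w_go=%.2f w_nogo=%.2f\n", w_go, w_nogo);
  return 0;
}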
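The entry also includes the following script, apparently written in css (the C-like script language of PDP++, as suggested by the .environments.RF_Env... path syntax). It sums unit activations over the training environment's events, separately for each combination of stimulus (A or B), pathway (Go or NoGo), and response (R1 or R2):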
int i, j;
// Summed unit activations, by stimulus (A/B), pathway (Go/NoGo), and response (R1/R2)
float A_GoR1, A_NoGoR1, A_GoR2, A_NoGoR2;
float B_GoR1, B_NoGoR1, B_GoR2, B_NoGoR2;
A_GoR1 = A_NoGoR1 = A_GoR2 = A_NoGoR2 = 0;
B_GoR1 = B_NoGoR1 = B_GoR2 = B_NoGoR2 = 0;

// Per the variable names: the 36 events cycle in groups of four
// (i % 4: 0 = Go/R1, 1 = Go/R2, 2 = NoGo/R1, 3 = NoGo/R2), and within
// each 20-value pattern every 5th unit starting at j = 0 codes stimulus A,
// while every 5th unit starting at j = 1 codes stimulus B.

// Go activity for R1, stimulus A units
for (i = 0; i < 36; i = i + 4) {
  for (j = 0; j < 20; j = j + 5)
    A_GoR1 += .environments.RF_Env.events[i].patterns[0].value[j];
}

// NoGo activity for R1, stimulus A units
for (i = 2; i < 36; i = i + 4) {
  for (j = 0; j < 20; j = j + 5)
    A_NoGoR1 += .environments.RF_Env.events[i].patterns[0].value[j];
}

// Go activity for R2, stimulus A units
for (i = 1; i < 36; i = i + 4) {
  for (j = 0; j < 20; j = j + 5)
    A_GoR2 += .environments.RF_Env.events[i].patterns[0].value[j];
}

// NoGo activity for R2, stimulus A units
for (i = 3; i < 36; i = i + 4) {
  for (j = 0; j < 20; j = j + 5)
    A_NoGoR2 += .environments.RF_Env.events[i].patterns[0].value[j];
}

// The same four sums for stimulus B units (j starts at 1)
for (i = 0; i < 36; i = i + 4) {
  for (j = 1; j < 20; j = j + 5)
    B_GoR1 += .environments.RF_Env.events[i].patterns[0].value[j];
}

for (i = 2; i < 36; i = i + 4) {
  for (j = 1; j < 20; j = j + 5)
    B_NoGoR1 += .environments.RF_Env.events[i].patterns[0].value[j];
}

for (i = 1; i < 36; i = i + 4) {
  for (j = 1; j < 20; j = j + 5)
    B_GoR2 += .environments.RF_Env.events[i].patterns[0].value[j];
}

for (i = 3; i < 36; i = i + 4) {
  for (j = 1; j < 20; j = j + 5)
    B_NoGoR2 += .environments.RF_Env.events[i].patterns[0].value[j];
}

vals[0].val = (A_GoR1 - A_NoGoR1) + (B_GoR2 - B_NoGoR2); // relative Go activity for positive responses (R1 for A and R2 for B)
vals[1].val = (B_GoR1 - B_NoGoR1) + (A_GoR2 - A_NoGoR2); // relative Go activity for negative responses (R2 for A and R1 for B); this should be negative, since the network should learn NoGo to negative responses
vals[2].val = (A_GoR1 - A_NoGoR1); // positive Go responses just for A/R1
vals[3].val = (B_GoR1 - B_NoGoR1); // negative Go responses just for B/R1
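
The eight loops above differ only in their starting offsets, so the same readout can be factored through a single helper. Here is a minimal, standalone C sketch of that refactoring, with a hypothetical flat array act[36][20] standing in for the .environments.RF_Env.events[i].patterns[0].value[j] accessor:

#include <stdio.h>

float act[36][20]; /* hypothetical stand-in for the environment's event patterns */

/* Sum activations over events i0, i0+4, ... and units j0, j0+5, ... */
float sum_units(int i0, int j0) {
  float s = 0;
  for (int i = i0; i < 36; i = i + 4)
    for (int j = j0; j < 20; j = j + 5)
      s += act[i][j];
  return s;
}

int main(void) {
  /* j0 = 0: stimulus A units; j0 = 1: stimulus B units.
     i0 = 0: Go/R1, 1: Go/R2, 2: NoGo/R1, 3: NoGo/R2 (as in the script above). */
  float pos = (sum_units(0, 0) - sum_units(2, 0))  /* A: Go R1 - NoGo R1 */
            + (sum_units(1, 1) - sum_units(3, 1)); /* B: Go R2 - NoGo R2 */
  float neg = (sum_units(0, 1) - sum_units(2, 1))  /* B: Go R1 - NoGo R1 */
            + (sum_units(1, 0) - sum_units(3, 0)); /* A: Go R2 - NoGo R2 */
  printf("positive Go %g, negative Go %g\n", pos, neg);
  return 0;
}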