Distributed cerebellar plasticity implements adaptable gain control (Garrido et al., 2013)

We tested the role of plasticity distributed over multiple synaptic sites (Hansel et al., 2001; Gao et al., 2012) by generating an analog cerebellar model embedded in a control loop connected to a robotic simulator. The robot used a three-joint arm and performed repetitive fast manipulations of different masses along a figure-eight trajectory. In accordance with biological evidence, the cerebellum model was endowed with both long-term depression (LTD) and long-term potentiation (LTP) at the parallel fiber-Purkinje cell (PF-PC), mossy fiber-deep cerebellar nuclei (MF-DCN), and Purkinje cell-deep cerebellar nuclei (PC-DCN) synapses. This resulted in a network whose effectiveness was extended considerably compared to one including just PF-PC synaptic plasticity. Indeed, the system including distributed plasticity reliably self-adapted to manipulate different masses and to learn the arm-object dynamics over a time course that included fast learning and consolidation, along the lines of what has been observed in behavioral tests. In particular, PF-PC plasticity operated as a time correlator between the actual input state and the system error, while MF-DCN and PC-DCN plasticity played a key role in generating the gain controller. This model suggests that distributed synaptic plasticity allows the generation of the complex learning properties of the cerebellum.
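The division of labor described above can be illustrated with a minimal Python sketch. All names, rates, and update rules here are assumptions for illustration only, not the published model's equations: PF-PC weights undergo error-driven LTD balanced by unsupervised LTP (the "time correlator"), while MF-DCN and PC-DCN weights adapt slowly to set the overall output gain.

```python
import numpy as np

# Hypothetical sketch of distributed cerebellar plasticity as gain control.
# Parameter names and rules are illustrative assumptions, not the model's equations.

rng = np.random.default_rng(0)
n_pf = 50                                # number of parallel fibers
w_pf_pc = rng.uniform(0.4, 0.6, n_pf)    # PF-PC weights (fast, error-correlating site)
w_mf_dcn = 0.5                           # MF-DCN weight (gain term)
w_pc_dcn = 0.5                           # PC-DCN weight (gain term)

ltd_rate, ltp_rate, gain_rate = 0.01, 0.001, 0.005

def step(pf_activity, mf_activity, error):
    """One learning step: PF-PC correlates state with error; DCN weights set the gain."""
    global w_pf_pc, w_mf_dcn, w_pc_dcn
    pc_activity = w_pf_pc @ pf_activity                           # Purkinje cell output
    dcn_output = w_mf_dcn * mf_activity - w_pc_dcn * pc_activity  # corrective command

    # PF-PC: error-driven LTD plus slow unsupervised LTP (time correlator)
    w_pf_pc += -ltd_rate * error * pf_activity + ltp_rate * pf_activity
    np.clip(w_pf_pc, 0.0, 1.0, out=w_pf_pc)

    # MF-DCN / PC-DCN: slow gain adaptation driven by the residual error
    w_mf_dcn += gain_rate * error * mf_activity
    w_pc_dcn += -gain_rate * error * pc_activity
    return dcn_output

# Toy run: with a persistent positive error, the DCN gain term grows while
# PF-PC weights are net-depressed, reshaping the corrective output.
pf = rng.uniform(0, 1, n_pf)
out0 = step(pf, 1.0, error=0.5)
out1 = step(pf, 1.0, error=0.5)
```

In this toy version the two plasticity sites operate on different timescales, mirroring the fast-learning-plus-consolidation behavior reported in the paper.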
1. Garrido JA, Luque NR, D'Angelo E, Ros E (2013) Distributed cerebellar plasticity implements adaptable gain control in a manipulation task: a closed-loop robotic simulation. Front Neural Circuits 7:159, 1-20. [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism: Cerebellum;
Cell Type(s): Cerebellum deep nucleus neuron;
Simulation Environment: C or C++ program; MATLAB; Simulink;
Model Concept(s): Long-term Synaptic Plasticity;
Implementer(s): Garrido, Jesus A [jesus.garrido at unipv.it]; Luque, Niceto R. [nluque at ugr.es];
function [sys,x0,str,ts] = inverse(t,x,u,flag)
% INVERSE  Level-1 MATLAB S-function implementing a direct-feedthrough
% (pass-through) block.

% Dispatch the flag. The switch statement routes the calls to the
% S-function routines at each simulation stage.
switch flag

  % Initialization %
  % Initialize the states, sample times, and state ordering strings.
  case 0
    [sys,x0,str,ts] = mdlInitializeSizes();

  % Outputs %
  % Return the outputs of the S-function block.
  case 3
    sys = mdlOutputs(t,x,u);

  % Unhandled flags %
  % There are no termination tasks (flag=9) to be handled.
  % Also, there are no continuous or discrete states,
  % so flags 1, 2, and 4 are not used; return an empty matrix.
  case { 1, 2, 4, 9 }
    sys = [];

  % Unexpected flags (error handling) %
  % Return an error message for unhandled flag values.
  otherwise
    error(['Unhandled flag = ',num2str(flag)]);

end

% end inverse

% mdlInitializeSizes
% Return the sizes, initial conditions, and sample times for the S-function.
function [sys,x0,str,ts] = mdlInitializeSizes()

sizes = simsizes;
sizes.NumContStates  = 0;
sizes.NumDiscStates  = 0;
sizes.NumOutputs     = -1;  % dynamically sized
sizes.NumInputs      = -1;  % dynamically sized
sizes.DirFeedthrough = 1;   % has direct feedthrough
sizes.NumSampleTimes = 1;

sys = simsizes(sizes);
str = [];
x0  = [];
ts  = [-1 0];   % inherited sample time

% end mdlInitializeSizes

% mdlOutputs
% Return the output vector for the S-function
function sys = mdlOutputs(t,x,u)
sys = u;

% end mdlOutputs
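For readers unfamiliar with the Level-1 S-function calling convention used above, the following Python analogue (an illustration only, not Simulink API) mimics the flag dispatch: flag 0 returns the block's sizes, initial state, and sample time; flag 3 computes the output; the unused flags return an empty result.

```python
# Minimal Python analogue of the Level-1 S-function flag protocol above.
# The dict-based sizes structure and names are illustrative assumptions.

def inverse(t, x, u, flag):
    """Pass-through block: flag 0 initializes, flag 3 returns the output."""
    if flag == 0:
        sizes = {"NumContStates": 0, "NumDiscStates": 0,
                 "NumOutputs": -1,    # dynamically sized, as in the MATLAB version
                 "NumInputs": -1,
                 "DirFeedthrough": 1,
                 "NumSampleTimes": 1}
        x0, str_, ts = [], [], (-1, 0)   # inherited sample time
        return sizes, x0, str_, ts
    elif flag == 3:
        return u                         # direct feedthrough: output equals input
    elif flag in (1, 2, 4, 9):
        return []                        # no states, no termination work
    raise ValueError(f"Unhandled flag = {flag}")

# Simulink calls the routine once per stage; a simulation step looks like:
sizes, x0, str_, ts = inverse(0.0, [], [], 0)
y = inverse(0.0, [], [3.14], 3)   # → [3.14]
```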