Robust transmission in the inhibitory Purkinje Cell to Cerebellar Nuclei pathway (Abbasi et al 2017)

Accession:229279

References:
1 . Abbasi S, Hudson AE, Maran SK, Cao Y, Abbasi A, Heck DH, Jaeger D (2017) Robust Transmission of Rate Coding in the Inhibitory Purkinje Cell to Cerebellar Nuclei Pathway in Awake Mice. PLOS Computational Biology
2 . Steuber V, Schultheiss NW, Silver RA, De Schutter E, Jaeger D (2011) Determinants of synaptic integration and heterogeneity in rebound firing explored with data-driven models of deep cerebellar nucleus cells. J Comput Neurosci 30:633-58 [PubMed]
3 . Steuber V, Jaeger D (2013) Modeling the generation of output by the cerebellar nuclei. Neural Netw 47:112-9 [PubMed]
4 . Steuber V, De Schutter E, Jaeger D (2004) Passive models of neurons in the deep cerebellar nuclei: the effect of reconstruction errors Neurocomputing 58-60:563-568
5 . Luthman J, Hoebeek FE, Maex R, Davey N, Adams R, De Zeeuw CI, Steuber V (2011) STD-dependent and independent encoding of input irregularity as spike rate in a computational model of a cerebellar nucleus neuron. Cerebellum 10:667-82 [PubMed]
Model Information
Model Type: Neuron or other electrically excitable cell;
Brain Region(s)/Organism: Cerebellum;
Cell Type(s): Cerebellum deep nucleus neuron;
Channel(s): I h; I T low threshold; I L high threshold; I Na,p; I Na,t; I K,Ca; I K;
Gap Junctions:
Receptor(s): AMPA; NMDA; GabaA;
Gene(s):
Transmitter(s): Gaba; Glutamate;
Simulation Environment: GENESIS;
Model Concept(s): Synaptic Integration;
Implementer(s): Jaeger, Dieter [djaeger at emory.edu];
/codes/pandora-matlab-1.4compat2/classes/@tests_db/
private/
.cvsignore *
abs.m
addColumns.m
addLastRow.m
addRow.m
allocateRows.m
anyRows.m
approxMappingLIBSVM.m
approxMappingNNet.m
approxMappingSVM.m
assignRowsTests.m
checkConsistentCols.m
compareRows.m
corrcoef.m
cov.m
crossProd.m
dbsize.m
delColumns.m
diff.m
display.m
displayRows.m
displayRowsCSV.m
displayRowsTeX.m
end.m
enumerateColumns.m
eq.m
factoran.m
fillMissingColumns.m
ge.m
get.m *
getColNames.m
groupBy.m
gt.m
histogram.m
invarValues.m
isinf.m
isnan.m
isnanrows.m
joinRows.m
kmeansCluster.m
le.m
lt.m
matchingRow.m
max.m
mean.m
meanDuplicateRows.m
min.m
minus.m
mtimes.m
ne.m
noNaNRows.m
onlyRowsTests.m
physiol_bundle.m
plot.m
plot_abstract.m
plot_bars.m
plotBox.m
plotCircular.m
plotCovar.m
plotImage.m
plotrow.m
plotrows.m
plotScatter.m
plotScatter3D.m
plotTestsHistsMatrix.m
plotUITable.m
plotUniquesStats2D.m
plotUniquesStatsBars.m
plotUniquesStatsStacked3D.m
plotXRows.m
plotYTests.m
plus.m
princomp.m
processDimNonNaNInf.m
rankMatching.m
rdivide.m
renameColumns.m
rop.m
rows2Struct.m
set.m *
setProp.m *
setRows.m
shufflerows.m
sortrows.m
sqrt.m
statsAll.m
statsBounds.m
statsMeanSE.m
statsMeanStd.m
std.m
subsasgn.m
subsref.m
sum.m
swapRowsPages.m
tests_db.m
tests2cols.m
tests2idx.m
tests2log.m
testsHists.m
times.m
transpose.m
uminus.m
unique.m
uop.m
vertcat.m
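
The files above are the methods of the PANDORA @tests_db class, the tabular database object that the analysis code below operates on. As a rough orientation, here is a minimal sketch of how a few of the listed methods combine; it assumes PANDORA is on the MATLAB path, and the data and column names are hypothetical, not taken from the model package:

% build a small database and query it with listed @tests_db methods
data = rand(100, 3);                              % 100 rows, 3 measures
a_db = tests_db(data, {'NaF', 'Kv3', 'spike_width'}, {}, 'example db');
dbsize(a_db, 1)                                   % number of rows
a_stats_db = statsMeanStd(onlyRowsTests(a_db, ':', {'spike_width'}));
plotFigure(plotScatter(a_db, 'NaF', 'spike_width'));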
function [an_approx_db, a_nnet] = approxMappingNNet(a_db, input_cols, output_cols, props)

% approxMappingNNet - Approximates the desired input-output mapping using a Matlab neural network.
%
% Usage:
% [an_approx_db, a_nnet] = approxMappingNNet(a_db, input_cols, output_cols, props)
%
% Description:
%   Approximates the mapping between the given inputs to outputs
% using the Matlab Neural Network Toolbox. By default it creates a
% feed-forward network to be trained with a Levenberg-Marquardt training
% algorithm (see newff). Returns the trained network object and a
% database with output columns obtained from the approximator. The outputs
% can then be compared to the original database to test the success of the
% approximation. If 'warning on verbose' is issued before running, it
% prints additional debug info.
%
% Parameters:
%   a_db: A tests_db object.
%   input_cols, output_cols: Input and output columns to be mapped
%		(see tests2cols for accepted column specifications).
%   props: A structure with any optional properties.
%     nnetFcn: Neural network classifier function (default='newff')
%     nnetParams: Cell array of parameters passed to nnetFcn after
%	  	      inputs and outputs.
%     trainMode: 'batch' or 'incr' (default='batch').
%     testControl: Fraction of the dataset used for training; the rest is
%		held out to test the approximation (default=0; disabled).
%     classProbs: 'prob': use probabilistic sampling to normalize
%	  		prior class probabilities.
%     maxEpochs: maximum number of passes for incremental training (default=10).
%     (Rest passed to balanceInputProbs and tests_db)
%		
%   Returns:
%	an_approx_db: A tests_db object containing the original inputs and
%			the approximated outputs.
%	a_nnet: The Matlab neural network approximator object.
%
% Example:
% >> [a_class_db, a_nnet] = approxMappingNNet(my_db, {'NaF', 'Kv3'}, {'spike_width'});
% >> plotFigure(plot_superpose({plotScatter(my_db, 'NaF', 'spike_width'),
% 			        plotScatter(a_class_db, 'NaF', 'spike_width')}))
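%
% A further usage sketch (an illustration based on the props listed above,
% not from the original help): train on a random two-thirds of the rows,
% hold out the rest, and balance class probabilities:
% >> [a_test_db, a_nnet] = approxMappingNNet(my_db, {'NaF', 'Kv3'}, ...
%        {'spike_width'}, struct('testControl', 2/3, 'classProbs', 'prob'));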
%
% See also: tests_db, newff
%
% $Id$
%
% Author: Cengiz Gunay <cgunay@emory.edu>, 2007/12/12

% Copyright (c) 2007 Cengiz Gunay <cengique@users.sf.net>.
% This work is licensed under the Academic Free License ("AFL")
% v. 3.0. To view a copy of this license, please look at the COPYING
% file distributed with this software or visit
% http://opensource.org/licenses/afl-3.0.php.

vs = warning('query', 'verbose');
verbose = strcmp(vs.state, 'on');

if ~exist('props', 'var')
  props = struct;
end

if ~ isfield(props, 'nnetFcn')
  props.nnetFcn = 'newff';
end

if ~ isfield(props, 'nnetParams')
  props.nnetParams = {};
end

% read inputs and outputs from db
a_nnet_inputs = ...
    get(onlyRowsTests(a_db, ':', input_cols), 'data')';
a_nnet_outputs = ...
    get(onlyRowsTests(a_db, ':', output_cols), 'data')';

% create NNet object
a_nnet = feval(props.nnetFcn, a_nnet_inputs, ...
                a_nnet_outputs, props.nnetParams{:});

% debug:
%a_nnet.outputs{1}
%a_nnet.outputs{1}.processedRange = [0 1];

% set display params
if ~ verbose
  a_nnet.trainParam.show = NaN;
end

orig_inputs = a_nnet_inputs;
orig_outputs = a_nnet_outputs;

% first, divide into training and test sets
num_samples = size(a_nnet_inputs, 2);
train_logic = false(num_samples, 1);
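% note: when testControl is disabled, train_logic stays all false, so every
% row is treated as a held-out row when predictions are filled in below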
if isfield(props, 'testControl') && props.testControl ~= 0
  num_train = floor(props.testControl * num_samples);
  num_test = num_samples - num_train;
  if verbose
    disp(['Train/test with ' num2str(num_train) '/' num2str(num_test) ...
          ' samples.']);
  end
  % choose training samples randomly
  a_perm = randperm(num_samples)';
  train_logic(a_perm(1:num_train), :) = true;
  a_nnet_inputs = a_nnet_inputs(:, train_logic);
  a_nnet_outputs = a_nnet_outputs(:, train_logic);
  % rest is for control
  orig_inputs = orig_inputs(:, ~train_logic);
  orig_outputs = orig_outputs(:, ~train_logic);
end

% balance inputs, if requested
if isfield(props, 'classProbs') && strcmp(props.classProbs, 'prob')
  [a_nnet_inputs, a_nnet_outputs] = ...
      balanceInputProbs(a_nnet_inputs, a_nnet_outputs, 1, props);
end

% train it
if ~ isfield(props, 'trainMode') || strcmp(props.trainMode, 'batch')
  % batch training
  a_nnet = train(a_nnet, a_nnet_inputs, a_nnet_outputs);
else
  % incremental training
  if isfield(props, 'maxEpochs')
    num_passes = props.maxEpochs;
  else
    num_passes = 10;
  end
  goal_mse = 1e-3;
  % convert to cell arrays with one cell per sample, as expected by adapt
  cell_ins = num2cell(a_nnet_inputs, 1);
  cell_outs = num2cell(a_nnet_outputs, 1);
  last_conditions = [];
  for pass_num = 1:num_passes
    [a_nnet, nn_outs, nn_errs, last_conditions] = ...
        adapt(a_nnet, cell_ins, cell_outs, last_conditions);
    nn_mse = mse(nn_errs);
    if verbose, nn_mse, end
    if nn_mse < goal_mse
      if verbose, disp([ 'mse goal ' num2str(goal_mse) ' met.']); end
      break;
    end
  end
end

% add the predicted outputs in NaN pre-filled columns (one per output)
num_outs = length(tests2cols(a_db, output_cols));
new_data = [get(onlyRowsTests(a_db, ':', input_cols), 'data'), ...
           repmat(NaN, dbsize(a_db, 1), num_outs)];
new_data(~train_logic, (end - num_outs + 1):end) = sim(a_nnet, orig_inputs)';

% return simulated approximator output in new db
col_names = getColNames(a_db);
an_approx_db = ...
    tests_db(new_data, ...
             [ col_names(tests2cols(a_db, input_cols)), ...
               col_names(tests2cols(a_db, output_cols)) ], {}, ...
             [ 'NNet approximated ' get(a_db, 'id') ]);
end
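
To check how well the trained network generalizes, the held-out predictions in the returned database can be compared against the original outputs. A minimal sketch, assuming my_db is a tests_db with the columns used in the help example above (the error computation is an illustration, not part of PANDORA):

% train on a random 80% of rows; the rest are filled with predictions
[an_approx_db, a_nnet] = approxMappingNNet(my_db, {'NaF', 'Kv3'}, ...
    {'spike_width'}, struct('testControl', 0.8));
approx = get(onlyRowsTests(an_approx_db, ':', {'spike_width'}), 'data');
orig = get(onlyRowsTests(my_db, ':', {'spike_width'}), 'data');
test_rows = ~isnan(approx);                 % training rows stay NaN
rmse = sqrt(mean((approx(test_rows) - orig(test_rows)).^2))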

