Synaptic scaling balances learning in a spiking model of neocortex (Rowan & Neymotin 2013)

Learning in the brain requires complementary mechanisms: potentiation and activity-dependent homeostatic scaling. We introduce synaptic scaling into a biologically realistic spiking model of neocortex that can learn changes in oscillatory rhythms using STDP, and show that scaling is necessary to balance both positive and negative changes in input from potentiation and atrophy. We discuss some of the issues that arise when considering synaptic scaling in such a model, and show that scaling regulates activity whilst leaving learning unaltered.
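The homeostatic mechanism described above can be illustrated with a minimal sketch. This is not the paper's exact rule; it assumes a common multiplicative form of synaptic scaling in which each cell slowly rescales all of its incoming weights toward a target activity level, counteracting the net drift produced by STDP while preserving the relative weight differences that learning created.

```python
import numpy as np

def scale_weights(w_in, rate, target_rate, tau=1000.0, dt=1.0):
    """Multiplicatively scale a cell's incoming weights toward target_rate.

    Hypothetical illustration (not the model's implementation):
      rate:        the cell's recent firing-rate estimate (Hz)
      target_rate: homeostatic set point (Hz)
      tau:         scaling time constant (ms); scaling is much slower than STDP
    """
    # fractional rate error drives a small multiplicative update:
    # too active -> factor < 1 (scale down), too quiet -> factor > 1 (scale up)
    factor = 1.0 + (dt / tau) * (target_rate - rate) / target_rate
    return w_in * factor

w = np.array([0.5, 1.0, 2.0])
w_hi = scale_weights(w, rate=10.0, target_rate=5.0)  # overactive cell: weights shrink
w_lo = scale_weights(w, rate=2.0, target_rate=5.0)   # underactive cell: weights grow
```

Because the update is multiplicative, the ratios between a cell's weights (the information stored by STDP) are unchanged; only their overall gain is adjusted.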
1. Rowan MS, Neymotin SA (2013) Synaptic scaling balances learning in a spiking model of neocortex. In: Adaptive and Natural Computing Algorithms, Tomassini M, Antonioni A, Daolio F, Buesser P, eds. pp. 20
Model Information (Click on a link to find other models with that property)
Model Type: Realistic Network;
Brain Region(s)/Organism: Neocortex;
Cell Type(s): Neocortex L5/6 pyramidal GLU cell; Neocortex L2/3 pyramidal GLU cell; Neocortex V1 interneuron basket PV GABA cell; Neocortex fast spiking (FS) interneuron; Neocortex spiny stellate cell; Neocortex spiking regular (RS) neuron; Neocortex spiking low threshold (LTS) neuron; Abstract integrate-and-fire adaptive exponential (AdEx) neuron;
Gap Junctions:
Receptor(s): GabaA; AMPA; NMDA;
Transmitter(s): Gaba; Glutamate;
Simulation Environment: NEURON; Python;
Model Concept(s): Synaptic Plasticity; Long-term Synaptic Plasticity; Learning; STDP; Homeostasis;
Implementer(s): Lytton, William [bill.lytton at]; Neymotin, Sam [Samuel.Neymotin at]; Rowan, Mark [m.s.rowan at];
Search NeuronDB for information about:  Neocortex L5/6 pyramidal GLU cell; Neocortex L2/3 pyramidal GLU cell; Neocortex V1 interneuron basket PV GABA cell; GabaA; AMPA; NMDA; Gaba; Glutamate;
autotune.hoc *
basestdp.hoc *
batch.hoc *
batch2.hoc *
checkirreg.hoc *
col.hoc *
comppowspec.hoc *
condisconcellfig.hoc *
condisconpowfig.hoc *
declist.hoc *
decmat.hoc *
decnqs.hoc *
decvec.hoc *
default.hoc *
drline.hoc *
e2hubsdisconpow.hoc *
e2incconpow.hoc *
filtutils.hoc *
geom.hoc *
graphplug.hoc *
grvec.hoc *
init.hoc *
labels.hoc *
load.hoc *
local.hoc *
makepopspikenq.hoc *
matfftpowplug.hoc *
matpmtmplug.hoc *
matpmtmsubpopplug.hoc *
matspecplug.hoc *
network.hoc *
nload.hoc *
nqpplug.hoc *
nqs.hoc *
nqsnet.hoc *
nrnoc.hoc *
powchgtest.hoc *
python.hoc *
pywrap.hoc *
redE2.hoc *
setup.hoc *
shufmua.hoc *
simctrl.hoc *
spkts.hoc *
stats.hoc *
syncode.hoc *
vsampenplug.hoc *
xgetargs.hoc *
// $Id: matpmtmplug.hoc,v 1.4 2011/02/28 06:07:42 samn Exp $ 

// "plugin" (for batch.hoc) to do analysis on sim data

binsz = 5 // bin size in ms
sampr = 1e3 / binsz // sampling rate
initAllMyNQs() // initialize counts per time, by type, column, etc.

objref nqf,nqtmp
objref vintraty[CTYPi] // each type, within column


proc myrsz () { // util func to call matpmtm and add results to nqf
  {vec.resize(0) vec.copy($o1) vec.sub(vec.mean)} // detrend: remove the mean from the input vector
  nqtmp = matpmtm(vec,sampr) // (reconstructed: nqtmp must be set before getcol below)
  {nqf.resize($s2) nqf.v[nqf.m-1].copy(nqtmp.getcol("pow"))} // append power column under name $s2
  nqsdel(nqtmp)
}

nqf = new NQS()

for case(&j,E2,I2,I2L) { // per-type population activity vectors, within column
  vintraty[j] = new Vector(sz)
}

// power spectrum of the LFP from 0.5 s to 9.5 s, sampled at 1e3/vdt_INTF6 Hz
{vec.resize(0) vec.copy(nqLFP.v,500/vdt_INTF6,9.5e3/vdt_INTF6) vec.sub(vec.mean) nqtmp=matpmtm(vec,1e3/vdt_INTF6)}
{nqf.resize("fLFP") nqf.v[nqf.m-1].copy(nqtmp.getcol("f"))} // frequency column
{nqf.resize("vLFP") nqf.v[nqf.m-1].copy(nqtmp.getcol("pow")) nqsdel(nqtmp)} // power column

//{vec.resize(0) vec.copy(vintraty[E2]) vec.sub(vec.mean) nqtmp=matpmtm(vec,sampr)}
//{nqf.resize("fMUA") nqf.v[nqf.m-1].copy(nqtmp.getcol("f"))}
//{nqf.resize("E2MUA") nqf.v[nqf.m-1].copy(nqtmp.getcol("pow")) nqsdel(nqtmp)}


if(batch_flag) nqsdel(nqf)
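The plugin above bins activity at 5 ms (a 200 Hz sampling rate), removes the mean, and passes the result to matpmtm for a power spectrum. A rough Python analogue of that pipeline is sketched below; note it substitutes a plain FFT periodogram for matpmtm's estimate, and the spike counts are simulated stand-ins, so this shows the shape of the computation rather than the model's own analysis.

```python
import numpy as np

binsz = 5.0           # bin size in ms, as in the plugin
sampr = 1e3 / binsz   # sampling rate in Hz (200 Hz)

# stand-in for binned population spike counts (hypothetical data)
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=2000).astype(float)

x = counts - counts.mean()                 # remove the mean, as vec.sub(vec.mean)
f = np.fft.rfftfreq(x.size, d=1.0 / sampr) # frequency axis, 0 .. Nyquist (100 Hz)
pxx = np.abs(np.fft.rfft(x))**2 / (sampr * x.size)  # periodogram power estimate
```

As in the hoc code, the result is a pair of aligned columns — frequency and power — that can be stored side by side for later comparison across simulation runs.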
