3D olfactory bulb: operators (Migliore et al, 2015)

Accession:168591
"... Using a 3D model of mitral and granule cell interactions supported by experimental findings, combined with a matrix-based representation of glomerular operations, we identify the mechanisms for forming one or more glomerular units in response to a given odor, how and to what extent the glomerular units interfere or interact with each other during learning, their computational role within the olfactory bulb microcircuit, and how their actions can be formalized into a theoretical framework in which the olfactory bulb can be considered to contain "odor operators" unique to each individual. ..."
Reference:
1 . Migliore M, Cavarretta F, Marasco A, Tulumello E, Hines ML, Shepherd GM (2015) Synaptic clusters function as odor operators in the olfactory bulb. Proc Natl Acad Sci U S A 112:8499-504 [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism:
Cell Type(s): Olfactory bulb main mitral GLU cell; Olfactory bulb main interneuron granule MC GABA cell;
Channel(s): I Na,t; I A; I K;
Gap Junctions:
Receptor(s): AMPA; NMDA; GABA;
Gene(s):
Transmitter(s): GABA; Glutamate;
Simulation Environment: NEURON; Python;
Model Concept(s): Activity Patterns; Dendritic Action Potentials; Active Dendrites; Synaptic Plasticity; Action Potentials; Synaptic Integration; Unsupervised Learning; Sensory processing; Olfaction;
Implementer(s): Migliore, Michele [Michele.Migliore at Yale.edu]; Cavarretta, Francesco [francescocavarretta at hotmail.it];
/
figure1eBulb3D
readme.html
ampanmda.mod *
distrt.mod *
fi.mod *
fi_stdp.mod *
kamt.mod *
kdrmt.mod *
naxn.mod *
ThreshDetect.mod *
.hg_archival.txt
all2all.py *
balance.py *
bindict.py
binsave.py
binspikes.py
BulbSurf.py
catfiles.sh
colors.py *
common.py
complexity.py *
custom_params.py *
customsim.py
destroy_model.py *
determine_connections.py
distribute.py *
falsegloms.txt
fixnseg.hoc *
g37e1i002.py
gidfunc.py *
Glom.py *
granule.hoc *
granules.py
grow.py
input-odors.txt *
loadbalutil.py *
lpt.py *
m2g_connections.py
mayasyn.py
mgrs.py
misc.py
mitral.hoc *
mkdict.py
mkmitral.py
modeldata.py *
multisplit_distrib.py *
net_mitral_centric.py
odordisp.py *
odors.py *
odorstim.py
params.py
parrun.py
realgloms.txt *
realSoma.py *
runsim.py
spike2file.hoc *
split.py *
util.py *
vrecord.py
weightsave.py *
lpt.py:
from util import *
from all2all import all2all
import heapq

def lpt(cx, npart):
  ''' From the list of (cx, gid) pairs, return an npart-length list of
      partitions, each a (total_cx, [list of (cx, gid)]) pair.
  '''
  cx.sort(key=lambda x:x[0], reverse=True)
  # initialize a priority queue for fast determination of the current
  # partition with least complexity. The priority queue always holds
  # npart items. At this point we do not care which partition will be
  # associated with which rank, so a partition on the heap is just
  # (total_cx, [list of (cx, gid)]).
  h = []
  for i in range(npart):
    heapq.heappush(h, (0.0, []))
  #each cx item goes into the current least complex partition
  for c in cx:
    lp = heapq.heappop(h) # least partition
    lp[1].append(c)
    heapq.heappush(h, (lp[0]+c[0], lp[1]))
  parts = [heapq.heappop(h) for i in range(len(h))]
  return parts

def statistics(parts):
  npart = len(parts)
  total_cx = 0
  max_part_cx = 0
  ncx = 0
  max_cx = 0
  for part in parts:
    ncx += len(part[1])
    total_cx += part[0]
    if part[0] > max_part_cx:
      max_part_cx = part[0]
    for cx in part[1]:
      if cx[0] > max_cx:
        max_cx = cx[0]
  avg_part_cx = total_cx / npart
  loadbal = 1.0
  if max_part_cx > 0.:
    loadbal = avg_part_cx/max_part_cx
  s = "loadbal=%g total_cx=%g npart=%d ncx=%d max_part_cx=%g max_cx=%g"%(loadbal,total_cx,npart,ncx,max_part_cx, max_cx)
  return s

if __name__ == '__main__':
  from util import serialize, finish
  for cx in ([(i, i) for i in range(10)], []):
    print(len(cx), ' complexity items ', cx)
    pinfo = lpt(cx, 3)
    print(len(pinfo), ' lpt partitions ', pinfo)
    print(statistics(pinfo))
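
For illustration, the greedy longest-processing-time (LPT) balancing used by `lpt()` above can be exercised standalone. This is a hypothetical sketch, not the repo's code: it avoids the `util`/`all2all` imports and adds a partition-index tiebreaker so the heap never needs to compare the member lists when two partitions have equal total complexity.

```python
import heapq

def lpt_sketch(cx, npart):
    """Partition (complexity, gid) pairs into npart groups by always
    assigning the next-heaviest item to the currently lightest partition."""
    items = sorted(cx, reverse=True)              # heaviest first
    heap = [(0.0, i, []) for i in range(npart)]   # (total, tiebreak index, members)
    heapq.heapify(heap)
    for c in items:
        total, i, members = heapq.heappop(heap)   # lightest partition so far
        members.append(c)
        heapq.heappush(heap, (total + c[0], i, members))
    # drop the tiebreak index to match lpt()'s (total_cx, members) shape
    return [(total, members) for total, _, members in sorted(heap)]

parts = lpt_sketch([(i, i) for i in range(10)], 3)
print([p[0] for p in parts])  # → [14.0, 15.0, 16.0]
```

With complexities 0..9 (sum 45) over 3 partitions, the greedy rule lands within one unit of the ideal 15 per partition, which is the behavior the `loadbal` statistic above quantifies.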