
Models that contain the Model Topic: Unsupervised Learning

(A method of neural network training in which the network is presented only with inputs and tries to find patterns within them in order to classify them, or alternatively attempts to maximize a fitness function by exploring an environment without being given any target output pattern.)
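As a minimal illustration of the first sense (pattern-finding from inputs alone), the sketch below uses Oja's Hebbian rule on invented 2-D data; it is a generic example, not taken from any of the models listed here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy unlabeled data: two input channels driven by one hidden source,
    # so most of the variance lies along the (1, 1) direction.
    s = rng.normal(size=1000)
    X = np.column_stack([s + 0.1 * rng.normal(size=1000),
                         s + 0.1 * rng.normal(size=1000)])

    w = rng.normal(size=2)              # synaptic weights, learned without labels
    eta = 0.01                          # learning rate

    for _ in range(5):                  # a few passes over the data
        for x in X:
            y = w @ x                   # postsynaptic activation
            w += eta * y * (x - y * w)  # Oja's rule: Hebbian growth plus decay

    print(w / np.linalg.norm(w))        # ~ +/-(0.71, 0.71): the main input direction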

Models and descriptions
3D model of the olfactory bulb (Migliore et al. 2014)
This entry contains a link to a full HD version of movie 1 and the NEURON code of the paper: "Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb" by M Migliore, F Cavarretta, ML Hines, and GM Shepherd.
Alternative time representation in dopamine models (Rivest et al. 2009)
Combines a long short-term memory (LSTM) model of the cortex with a temporal difference learning (TD) model of the basal ganglia. Code to run simulations similar to the published data: Rivest, F., Kalaska, J.F., Bengio, Y. (2009) Alternative time representation in dopamine models. Journal of Computational Neuroscience. See http://dx.doi.org/10.1007/s10827-009-0191-1 for details.
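For orientation, the core of TD learning is the prediction-error signal delta = r + gamma*V(t+1) - V(t). The sketch below is a generic tabular TD(0) example with an invented cue/reward timing, not the published LSTM + TD model.

    import numpy as np

    # Generic TD(0) sketch: a cue at t=0 predicts a reward delivered at t=5;
    # states are simply the time steps within a trial.
    n_steps = 10
    reward_time = 5
    V = np.zeros(n_steps)          # value estimate for each time step
    alpha, gamma = 0.1, 0.98       # learning rate, discount factor

    for trial in range(500):
        for t in range(n_steps - 1):
            r = 1.0 if t == reward_time else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # TD error ("dopamine-like" signal)
            V[t] += alpha * delta

    print(np.round(V, 2))   # values ramp up toward the reward time, then drop to zero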
Cancelling redundant input in ELL pyramidal cells (Bol et al. 2011)
The paper investigates the ability of the electrosensory lateral line lobe (ELL) of weakly electric fish to cancel predictable stimuli. Electroreceptors on the skin encode all signals in their firing activity, but superficial pyramidal (SP) cells in the ELL that receive this feedforward input do not respond to constant sinusoidal signals. This cancellation putatively occurs through a network of feedback delay lines and burst-induced synaptic plasticity between the delay lines and the SP cell, which learns to cancel the redundant input. Biologically, the delay lines are parallel fibres from cerebellar-like granule cells in the eminentia granularis posterior. A model of this network (electroreceptors, SP cells, delay lines and burst-induced plasticity) was constructed to test whether the current knowledge of how the network operates is sufficient to cancel redundant stimuli.
Cortex learning models (Weber et al. 2006, Weber and Triesch, 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006) uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame-of-reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch, 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
Democratic population decisions result in robust policy-gradient learning (Richmond et al. 2011)
This model demonstrates the use of GPU programming (with CUDA) to simulate a two-layer network of integrate-and-fire neurons with varying degrees of recurrent connectivity, and to investigate its ability to learn a simplified navigation task using a policy-gradient rule from reinforcement learning.
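As background, a policy-gradient update adjusts action preferences in proportion to (reward - baseline) times the gradient of the log-policy. The sketch below applies this REINFORCE-style rule to an invented two-armed bandit; it is not the paper's CUDA spiking network.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy two-action task: action 1 pays off more often than action 0.
    true_reward_prob = np.array([0.2, 0.8])
    theta = np.zeros(2)            # policy parameters (action preferences)
    alpha = 0.1
    baseline = 0.0                 # running reward baseline reduces variance

    for step in range(2000):
        p = np.exp(theta) / np.exp(theta).sum()      # softmax policy
        a = rng.choice(2, p=p)
        r = float(rng.random() < true_reward_prob[a])
        grad = -p
        grad[a] += 1.0                               # d log pi(a) / d theta
        theta += alpha * (r - baseline) * grad       # policy-gradient update
        baseline += 0.01 * (r - baseline)

    print("P(action 1) =", round(np.exp(theta[1]) / np.exp(theta).sum(), 2))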
Development of orientation-selective simple cell receptive fields (Rishikesh and Venkatesh, 2003)
Implementation of a computational model for the development of simple-cell receptive fields spanning the regimes before and after eye-opening. The period before eye-opening is governed by a correlation-based rule from Miller (J. Neurosci., 1994), and the period after eye-opening by self-organizing, experience-dependent dynamics derived in the associated paper.
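A correlation-based rule of the Miller type grows weights along the leading eigenvector of the input correlation matrix under a norm constraint. The sketch below illustrates this with an invented 1-D correlation profile; it is not the authors' code.

    import numpy as np

    rng = np.random.default_rng(2)

    # Correlation-based development sketch: dw/dt ~ C @ w with multiplicative
    # normalization, so the receptive field converges to the leading
    # eigenvector of the (here invented) input correlation matrix C.
    n_inputs = 20
    x = np.arange(n_inputs)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 3.0)   # smooth spatial correlations

    w = rng.normal(scale=0.01, size=n_inputs)
    dt = 0.01
    for step in range(2000):
        w += dt * C @ w                 # correlation-driven Hebbian growth
        w /= np.linalg.norm(w)          # normalization keeps |w| bounded

    print(np.round(w, 2))               # smooth weight profile = leading eigenvector of C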
Human Attentional Networks: A Connectionist Model (Wang and Fan 2007)
"... We describe a connectionist model of human attentional networks to explore the possible interplays among the networks from a computational perspective. This model is developed in the framework of leabra (local, error-driven, and associative, biologically realistic algorithm) and simultaneously involves these attentional networks connected in a biologically inspired way. ... We evaluate the model by simulating the empirical data collected on normal human subjects using the Attentional Network Test (ANT). The simulation results fit the experimental data well. In addition, we show that the same model, with a single parameter change that affects executive control, is able to simulate the empirical data collected from patients with schizophrenia. This model represents a plausible connectionist explanation for the functional structure and interaction of human attentional networks."
Large scale model of the olfactory bulb (Yu et al., 2013)
The readme file currently contains links to the results for all 72 odors investigated in the paper, and to the movie showing the network activity during learning of odor k3-3 (an aliphatic ketone).
Learning spatial transformations through STDP (Davison, Frégnac 2006)
A common problem in tasks involving the integration of spatial information from multiple senses, or in sensorimotor coordination, is that different modalities represent space in different frames of reference. Coordinate transformations between different reference frames are therefore required. One way to achieve this relies on the encoding of spatial information using population codes. The set of network responses to stimuli in different locations (tuning curves) constitute a basis set of functions which can be combined linearly through weighted synaptic connections in order to approximate non-linear transformations of the input variables. The question then arises how the appropriate synaptic connectivity is obtained. This model shows that a network of spiking neurons can learn the coordinate transformation from one frame of reference to another, with connectivity that develops continuously in an unsupervised manner, based only on the correlations available in the environment, and with a biologically-realistic plasticity mechanism (spike timing-dependent plasticity).
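For reference, the pairwise STDP window assumed in models of this kind potentiates when the presynaptic spike precedes the postsynaptic spike and depresses otherwise. The sketch below shows a generic exponential window with illustrative parameters, not this model's exact rule.

    import numpy as np

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Generic pairwise STDP curve (illustrative parameters).

        delta_t = t_post - t_pre in ms: positive (pre before post) potentiates,
        negative (post before pre) depresses.
        """
        if delta_t >= 0:
            return a_plus * np.exp(-delta_t / tau_plus)
        return -a_minus * np.exp(delta_t / tau_minus)

    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.4f}")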
Mapping function onto neuronal morphology (Stiefel and Sejnowski 2007)
"... We used an optimization procedure to find neuronal morphological structures for two computational tasks: First, neuronal morphologies were selected for linearly summing excitatory synaptic potentials (EPSPs); second, structures were selected that distinguished the temporal order of EPSPs. The solutions resembled the morphology of real neurons. In particular the neurons optimized for linear summation electrotonically separated their synapses, as found in avian nucleus laminaris neurons, and neurons optimized for spike-order detection had primary dendrites of significantly different diameter, as found in the basal and apical dendrites of cortical pyramidal neurons. ..."
Oscillations, phase-of-firing coding and STDP: an efficient learning scheme (Masquelier et al. 2009)
The model demonstrates how a common oscillatory drive for a group of neurons formats their spike times and makes them reliable - through an activation-to-phase conversion - so that repeating activation patterns can be easily detected and learned by a downstream neuron equipped with STDP, and then recognized in just one oscillation cycle.
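The activation-to-phase conversion can be pictured with a minimal sketch: each cell receives its own activation level plus a common sinusoidal drive, and stronger activation crosses threshold earlier in the cycle. The parameters below are invented for illustration and do not reproduce the paper's model.

    import numpy as np

    period = 100.0                       # oscillation period (ms)
    t = np.arange(0.0, period, 0.1)
    oscillation = 0.5 * np.sin(2 * np.pi * t / period)   # common oscillatory drive
    threshold = 1.0

    for activation in (0.4, 0.6, 0.7, 0.8, 0.9):
        drive = activation + oscillation
        above = np.nonzero(drive >= threshold)[0]        # first threshold crossing
        if above.size:
            phase = 360.0 * t[above[0]] / period
            print(f"activation {activation:.1f} -> spike phase {phase:5.1f} deg")
        else:
            print(f"activation {activation:.1f} -> no spike this cycle")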
Reciprocal regulation of rod and cone synapse by NO (Kourennyi et al 2004)
We constructed models of rod and cone photoreceptors using NEURON software to predict how changes in Ca channels would affect the light response in these cells and in postsynaptic horizontal cells.
Relative spike time coding and STDP-based orientation selectivity in V1 (Masquelier 2012)
Phenomenological spiking model of the cat early visual system. We show how natural vision can drive spike time correlations on sufficiently fast time scales to lead to the acquisition of orientation-selective V1 neurons through STDP. This is possible without reference times such as stimulus onsets or saccade landing times. But even when such reference times are available, we demonstrate that the relative spike times encode the images more robustly than the absolute ones.
Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. ... This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. ... Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. See the paper for further details.
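For readers new to self-organizing maps, the sketch below shows the basic Kohonen update (the winner and its neighbors move toward each input). It does not reproduce the paper's laterally connected LISSOM/GLISSOM model or its scaling equations.

    import numpy as np

    rng = np.random.default_rng(3)

    # Basic self-organizing map sketch: a 1-D chain of 10 units learns to
    # cover 2-D inputs drawn from the unit square.
    n_units = 10
    W = rng.random((n_units, 2))             # weight vector of each map unit
    eta = 0.2                                # learning rate

    for step in range(2000):
        x = rng.random(2)                              # random input sample
        winner = np.argmin(np.sum((W - x) ** 2, axis=1))
        dist = np.arange(n_units) - winner             # distance along the map
        sigma = 2.0 * (0.999 ** step)                  # shrinking neighborhood
        h = np.exp(-dist ** 2 / (2 * sigma ** 2))      # neighborhood function
        W += eta * h[:, None] * (x - W)                # move winner + neighbors toward x

    print(np.round(W, 2))   # neighboring units end up with similar, ordered weights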
Self-influencing synaptic plasticity (Tamosiunaite et al. 2007)
"... Similar to a previous study (Saudargiene et al., 2004) we employ a differential Hebbian learning rule to emulate spike-timing dependent plasticity and investigate how the interaction of dendritic and back-propagating spikes, as the post-synaptic signals, could influence plasticity. ..."
Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
Development of spiking grid cells and place cells in the entorhinal-hippocampal system to represent positions in large spaces
STDP allows fast rate-modulated coding with Poisson-like spike trains (Gilson et al. 2011)
The model demonstrates that a neuron equipped with STDP robustly detects repeating rate patterns among its afferents, from which spikes are generated on the fly using inhomogeneous Poisson sampling, provided those rates have narrow temporal peaks (10-20 ms) - a condition met by many experimental post-stimulus time histograms (PSTHs).
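Spikes can be generated "on the fly" from a time-varying rate by Bernoulli sampling in small time bins. The sketch below uses an invented rate profile with a narrow peak; it is not the paper's code.

    import numpy as np

    rng = np.random.default_rng(4)

    # Inhomogeneous Poisson sampling sketch: spikes are drawn bin by bin with
    # probability rate(t) * dt, so each trial gives fresh spike times from the
    # same underlying rate profile.
    dt = 0.001                                   # 1 ms bins (s)
    t = np.arange(0.0, 0.5, dt)                  # 500 ms trial
    rate = 5.0 + 80.0 * np.exp(-0.5 * ((t - 0.25) / 0.01) ** 2)   # narrow peak (Hz)

    for trial in range(3):
        spikes = t[rng.random(t.size) < rate * dt]
        print(f"trial {trial}: {spikes.size} spikes, e.g.", np.round(spikes[:5], 3))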
STDP and NMDAR Subunits (Gerkin et al. 2007)
The paper argues for competing roles of NR2A- and NR2B-containing NMDARs in spike-timing-dependent plasticity. This simple dynamical model recapitulates the results of STDP experiments involving selective blockers of NR2A- and NR2B-containing NMDARs, for which the stimuli are pre- and postsynaptic spikes in varying combinations. Experiments were done using paired recordings from glutamatergic neurons in rat hippocampal cultures. This model focuses on the dynamics of the putative potentiation and depression modules themselves, and their interaction. For detailed dynamics involving NMDARs and Ca2+ transients, see Rubin et al., J. Neurophysiol., 2005.
Striatal GABAergic microcircuit, spatial scales of dynamics (Humphries et al, 2010)
The main thrust of this paper was the development of the 3D anatomical network of the striatum's GABAergic microcircuit. We grew dendrite and axon models for the MSNs and FSIs and extracted probabilities for the presence of these neurites as a function of distance from the soma. From these, we found the probabilities of intersection between the neurites of two neurons given their inter-somatic distance, and used these to construct three-dimensional striatal networks. These networks were examined for their predictions for the distributions of the numbers and distances of connections for all the connections in the microcircuit. We then combined the neuron models from a previous model (Humphries et al, 2009; ModelDB ID: 128874) with the new anatomical model. We used this new complete striatal model to examine the impact of the anatomical network on the firing properties of the MSN and FSI populations, and to study the influence of all the inputs to one MSN within the network.
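The final step, turning intersection probabilities into a network, amounts to sampling connections as a function of inter-somatic distance. The sketch below uses a made-up exponential fall-off for illustration; the paper derives its probabilities from reconstructed dendrite and axon statistics.

    import numpy as np

    rng = np.random.default_rng(5)

    # Schematic network construction from distance-dependent connection
    # probabilities (the fall-off function is hypothetical).
    n_neurons = 200
    side = 300.0                                          # cube side (um)
    pos = rng.random((n_neurons, 3)) * side               # random somata in a cube

    def p_connect(d_um):
        return 0.4 * np.exp(-d_um / 100.0)                # hypothetical fall-off with distance

    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)   # pairwise distances
    conn = rng.random((n_neurons, n_neurons)) < p_connect(d)        # sample directed connections
    np.fill_diagonal(conn, False)                         # no self-connections

    print("mean connections per neuron:", conn.sum(axis=1).mean())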


