Circuits that contain the Model Concept: Unsupervised Learning

(A method of neural network training in which the network is presented only with inputs and tries to find patterns within them in order to classify the inputs, or alternatively attempts to maximize a fitness function by exploring an environment, without any target output pattern being made available.)
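As a minimal illustration of the idea (not taken from any of the models below), Oja's Hebbian rule lets a single linear neuron discover structure in its inputs with no targets at all: the weight vector converges to the principal component of the input distribution. All parameter values here are illustrative.

```python
import numpy as np

# Unsupervised learning sketch: Oja's rule extracts the dominant
# direction of correlated inputs without any supervised signal.
rng = np.random.default_rng(0)

# Correlated 2-D inputs whose dominant direction is roughly (1, 1).
x = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])

w = rng.normal(size=2)           # synaptic weight vector
eta = 0.01                       # learning rate
for xi in x:
    y = w @ xi                   # neuron's output
    w += eta * y * (xi - y * w)  # Hebbian term with Oja's decay

w /= np.linalg.norm(w)
print(w)                         # close to +/- [0.707, 0.707]
```

The decay term `- y**2 * w` keeps the weight norm bounded, so the rule is stable without any explicit normalization step.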
Models and Descriptions
1. 3D model of the olfactory bulb (Migliore et al. 2014)
This entry contains a link to a full HD version of movie 1 and the NEURON code of the paper: "Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb" by M Migliore, F Cavarretta, ML Hines, and GM Shepherd.
2. 3D olfactory bulb: operators (Migliore et al, 2015)
"... Using a 3D model of mitral and granule cell interactions supported by experimental findings, combined with a matrix-based representation of glomerular operations, we identify the mechanisms for forming one or more glomerular units in response to a given odor, how and to what extent the glomerular units interfere or interact with each other during learning, their computational role within the olfactory bulb microcircuit, and how their actions can be formalized into a theoretical framework in which the olfactory bulb can be considered to contain "odor operators" unique to each individual. ..."
3. Alternative time representation in dopamine models (Rivest et al. 2009)
Combines a long short-term memory (LSTM) model of the cortex to a temporal difference learning (TD) model of the basal ganglia. Code to run simulations similar to the published data: Rivest, F, Kalaska, J.F., Bengio, Y. (2009) Alternative time representation in dopamine models. Journal of Computational Neuroscience. See http://dx.doi.org/10.1007/s10827-009-0191-1 for details.
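The basal-ganglia component of this hybrid model is temporal difference (TD) learning. A tabular TD(0) sketch on a toy five-state chain (illustrative, not the paper's code) shows the core idea: the TD error, often interpreted as the dopamine signal, drives value estimates toward discounted future reward.

```python
import numpy as np

# Illustrative TD(0) sketch: a 5-state chain ends in reward r = 1;
# state values converge to gamma**(steps remaining to reward).
n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states + 1)        # V[n_states] is the terminal state

for _ in range(2000):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        delta = r + gamma * V[s + 1] - V[s]  # TD error ("dopamine")
        V[s] += alpha * delta

print(V[:n_states])  # ≈ [0.9**4, 0.9**3, 0.9**2, 0.9, 1.0]
```

In the published model, the LSTM cortex supplies the time representation that such a TD learner needs to predict when the reward will arrive.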
4. Cancelling redundant input in ELL pyramidal cells (Bol et al. 2011)
The paper investigates the property of the electrosensory lateral line lobe (ELL) of the brain of weakly electric fish to cancel predictable stimuli. Electroreceptors on the skin encode all signals in their firing activity, but superficial pyramidal (SP) cells in the ELL that receive this feedforward input do not respond to constant sinusoidal signals. This cancellation putatively occurs using a network of feedback delay lines and burst-induced synaptic plasticity between the delay lines and the SP cell that learns to cancel the redundant input. Biologically, the delay lines are parallel fibres from cerebellar-like granule cells in the eminentia granularis posterior. A model of this network (e.g. electroreceptors, SP cells, delay lines and burst-induced plasticity) was constructed to test whether the current knowledge of how the network operates is sufficient to cancel redundant stimuli.
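The cancellation principle can be caricatured with a linear delay-line sketch (conceptual only, far simpler than the published spiking model): shifted copies of a predictable sinusoid feed a model "SP cell" through plastic weights, and an anti-Hebbian, LMS-style rule drives the summed response toward zero. All names and parameters are illustrative.

```python
import numpy as np

# Conceptual redundancy-cancellation sketch: delay lines carry past
# samples of a predictable stimulus; anti-Hebbian updates learn
# weights that cancel the feedforward input.
T, n_delays, eta = 30000, 8, 0.05
t = np.arange(T)
signal = np.sin(2 * np.pi * t / 100)           # predictable stimulus

w = np.zeros(n_delays)
for i in range(n_delays, T):
    delays = signal[i - n_delays:i][::-1]      # delay-line activities
    out = signal[i] + w @ delays               # SP-cell response
    w -= eta * out * delays                    # anti-Hebbian update

residual = np.mean([(signal[i] + w @ signal[i - n_delays:i][::-1]) ** 2
                    for i in range(T - 1000, T)])
print(residual)  # near zero: the redundant input is cancelled
```

Because a sinusoid is linearly predictable from its recent past, the learned feedback converges to an exact canceller; novel (unpredictable) inputs would still drive the cell.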
5. Coding explains development of binocular vision and its failure in Amblyopia (Eckmann et al 2020)
This is the MATLAB code for the Active Efficient Coding model introduced in Eckmann et al 2020. It simulates an agent that self-calibrates vergence and accommodation eye movements in a simple visual environment. All algorithms are explained in detail in the main manuscript and the supplementary material of the paper.
6. Cortex learning models (Weber et al. 2006, Weber and Triesch, 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006) which uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame of reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch, 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
7. Development of orientation-selective simple cell receptive fields (Rishikesh and Venkatesh, 2003)
Implementation of a computational model for the development of simple-cell receptive fields spanning the regimes before and after eye-opening. The before eye-opening period is governed by a correlation-based rule from Miller (Miller, J. Neurosci., 1994), and the post eye-opening period is governed by a self-organizing, experience-dependent dynamics derived in the reference below.
8. Hierarchical anti-Hebbian network model for the formation of spatial cells in 3D (Soman et al 2019)
This model shows how spatial representations in 3D space can emerge in unsupervised neural networks. The model is hierarchical, i.e. it has multiple layers, each serving a specific function. The architecture is a general one in that it gives rise to different kinds of spatial representations after training.
9. Large scale model of the olfactory bulb (Yu et al., 2013)
The readme file currently contains links to the results for all the 72 odors investigated in the paper, and the movie showing the network activity during learning of odor k3-3 (an aliphatic ketone).
10. Learning spatial transformations through STDP (Davison, Frégnac 2006)
A common problem in tasks involving the integration of spatial information from multiple senses, or in sensorimotor coordination, is that different modalities represent space in different frames of reference. Coordinate transformations between different reference frames are therefore required. One way to achieve this relies on the encoding of spatial information using population codes. The set of network responses to stimuli in different locations (tuning curves) constitute a basis set of functions which can be combined linearly through weighted synaptic connections in order to approximate non-linear transformations of the input variables. The question then arises how the appropriate synaptic connectivity is obtained. This model shows that a network of spiking neurons can learn the coordinate transformation from one frame of reference to another, with connectivity that develops continuously in an unsupervised manner, based only on the correlations available in the environment, and with a biologically-realistic plasticity mechanism (spike timing-dependent plasticity).
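The plasticity mechanism this entry relies on is spike timing-dependent plasticity. A minimal pair-based STDP window (illustrative parameters, not the paper's model) captures the rule: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise, with exponentially decaying sensitivity to the spike-time difference.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a post-minus-pre spike-time difference dt (ms).

    Pre-before-post (dt > 0) gives potentiation, post-before-pre
    gives depression; both decay exponentially with |dt|.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)       # LTP branch
    return -a_minus * np.exp(dt / tau)          # LTD branch

print(stdp_dw(10.0))   # positive (LTP)
print(stdp_dw(-10.0))  # negative (LTD)
```

A slight LTD bias (`a_minus > a_plus`) is a common choice to keep weights from saturating; the published model's exact window and parameters may differ.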
11. Optimal Localist and Distributed Coding Through STDP (Masquelier & Kheradpisheh 2018)
We show how a LIF neuron equipped with STDP can become optimally selective, in an unsupervised manner, to one or several repeating spike patterns, even when those patterns are hidden in Poisson spike trains.
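The neuron model named here, the leaky integrate-and-fire (LIF) neuron, can be sketched in a few lines (illustrative units and parameters, not the paper's configuration): the membrane potential leaks toward rest, integrates input current, and resets after crossing threshold.

```python
# Minimal LIF neuron sketch with constant input current.
tau, v_rest, v_th, v_reset, dt = 20.0, 0.0, 1.0, 0.0, 1.0  # ms, a.u.
I = 0.08                                  # constant input current
v, spikes = v_rest, []

for t in range(200):
    v += dt * (-(v - v_rest) / tau + I)   # leaky integration
    if v >= v_th:                         # threshold crossing
        spikes.append(t)
        v = v_reset                       # reset after the spike

print(len(spikes))  # regular firing: 10 spikes in 200 ms
```

With STDP on its input synapses, such a neuron can gradually raise its response to a repeating input pattern while remaining silent to background activity, which is the selectivity result the entry describes.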
12. Oscillations, phase-of-firing coding and STDP: an efficient learning scheme (Masquelier et al. 2009)
The model demonstrates how a common oscillatory drive for a group of neurons formats their spike times and makes them reliable - through an activation-to-phase conversion - so that repeating activation patterns can be easily detected and learned by a downstream neuron equipped with STDP, and then recognized in just one oscillation cycle.
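The activation-to-phase conversion can be illustrated with a toy threshold-crossing calculation (illustrative, not the paper's model): if a sinusoidal drive of amplitude A is added to each neuron's activation, more strongly activated neurons reach threshold earlier in the cycle, so activation strength maps onto spike phase.

```python
import numpy as np

# Toy activation-to-phase conversion: solve a + A*sin(2*pi*f*t) = theta
# for the first threshold crossing t of each neuron.
A, theta, f = 1.0, 1.0, 8.0               # drive amplitude, threshold, 8 Hz
a = np.array([0.2, 0.5, 0.9])             # activation per neuron

t_spike = np.arcsin((theta - a) / A) / (2 * np.pi * f)
print(t_spike)  # earlier spike (smaller phase) for stronger activation
```

The monotonic activation-to-phase mapping is what lets a downstream STDP neuron read the pattern out within a single oscillation cycle.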
13. Relative spike time coding and STDP-based orientation selectivity in V1 (Masquelier 2012)
Phenomenological spiking model of the cat early visual system. We show how natural vision can drive spike time correlations on sufficiently fast time scales to lead to the acquisition of orientation-selective V1 neurons through STDP. This is possible without reference times such as stimulus onsets, or saccade landing times. But even when such reference times are available, we demonstrate that the relative spike times encode the images more robustly than the absolute ones.
14. Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. ... This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. ... Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. See the paper for further details.
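The self-organizing map at the core of this work can be sketched in its simplest one-dimensional form (illustrative only, far smaller than GLISSOM-scale models): units on a line compete for each input, and the winner's neighbourhood is pulled toward it, so neighbouring units come to prefer neighbouring inputs.

```python
import numpy as np

# Minimal 1-D SOM sketch: 10 units learn to tile the interval [0, 1).
rng = np.random.default_rng(2)
n_units, n_steps = 10, 5000
w = rng.uniform(size=n_units)                  # preferred input per unit

for step in range(n_steps):
    x = rng.uniform()                          # training input
    bmu = np.argmin(np.abs(w - x))             # best-matching unit
    sigma = max(0.5, 3 * (1 - step / n_steps)) # shrinking neighbourhood
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
    w += 0.1 * h * (x - w)                     # pull neighbourhood toward x

print(np.round(np.sort(w), 2))  # map covering [0, 1)
```

The scaling problem the paper addresses arises because realistic cortical maps need orders of magnitude more units and lateral connections than this toy example.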
15. Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
Development of spiking grid cells and place cells in the entorhinal-hippocampal system to represent positions in large spaces.
16. STDP allows fast rate-modulated coding with Poisson-like spike trains (Gilson et al. 2011)
The model demonstrates that a neuron equipped with STDP robustly detects repeating rate patterns among its afferents, from which the spikes are generated on the fly using inhomogeneous Poisson sampling, provided those rates have narrow temporal peaks (10-20 ms) - a condition met by many experimental Post-Stimulus Time Histograms (PSTHs).
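Generating spikes "on the fly" from a rate profile can be sketched with a binned Bernoulli approximation to an inhomogeneous Poisson process (illustrative, not the paper's code): in each small time bin of width dt, a spike occurs with probability rate * dt.

```python
import numpy as np

# Inhomogeneous Poisson sampling sketch: a low baseline rate with one
# narrow Gaussian peak (~14 ms FWHM), discretized into 1 ms bins.
rng = np.random.default_rng(3)
dt = 0.001                                      # 1 ms bins
t = np.arange(0.0, 0.5, dt)                     # 500 ms trial
rate = 5 + 80 * np.exp(-((t - 0.25) ** 2) / (2 * 0.006 ** 2))  # Hz

spikes = rng.random(t.size) < rate * dt         # Bernoulli per bin
print(spikes.sum())                             # spike count this trial
```

The narrow peak is the key condition from the entry: it concentrates spikes in a short window, producing the near-coincident input that STDP can latch onto across trials.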
