Circuits that contain the Model Concept: Attractor Neural Network

(A recurrently connected neural network with iterative activation updates. A noisy initial pattern of neural activity converges to one of a set of template patterns that are stored in the network by its weighted connections. Because of this property it is also a model of Associative Memory. The Continuous Attractor Network uses continuous activation values (~ firing rates), while the Binary Attractor Network uses {0,1} or {-1,1} as the possible neuronal activations.)
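As a concrete illustration of the binary case, the sketch below stores a few random {-1,1} patterns with a Hebbian outer-product rule and recovers one of them from a noisy cue by iterative updates. It is a minimal NumPy sketch with arbitrary sizes and noise levels, not code from any model in the list below.

    # Minimal binary (Hopfield-style) attractor network: store patterns by a
    # Hebbian outer-product rule, then let a noisy cue relax to the nearest
    # stored template. Illustrative sketch only; sizes and noise are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 3                                   # neurons, stored patterns
    patterns = rng.choice([-1, 1], size=(P, N))

    W = patterns.T @ patterns / N                   # outer-product (Hebbian) weights
    np.fill_diagonal(W, 0.0)                        # no self-connections

    state = patterns[0].copy()                      # start from pattern 0 ...
    flip = rng.choice(N, size=20, replace=False)
    state[flip] *= -1                               # ... corrupted by 20 flipped bits

    for _ in range(10):                             # asynchronous updates until settled
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("overlap with stored pattern:", (state @ patterns[0]) / N)   # ~1.0 when recalled
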
1. An attractor network model of grid cells and theta-nested gamma oscillations (Pastoll et al 2013)
A two-population spiking continuous attractor model of grid cells. This model combines attractor dynamics with theta-nested gamma oscillatory activity. It reproduces the behavioural response of grid cells (grid fields) in medial entorhinal cortex, while at the same time allowing for nested gamma oscillations of post-synaptic currents.
2. Ca+/HCN channel-dependent persistent activity in multiscale model of neocortex (Neymotin et al 2016)
"Neuronal persistent activity has been primarily assessed in terms of electrical mechanisms, without attention to the complex array of molecular events that also control cell excitability. We developed a multiscale neocortical model proceeding from the molecular to the network level to assess the contributions of calcium regulation of hyperpolarization-activated cyclic nucleotide-gated (HCN) channels in providing additional and complementary support of continuing activation in the network. ..."
3. Cortex learning models (Weber et al. 2006, Weber and Triesch 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006) uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame of reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
4. Feedforward heteroassociative network with HH dynamics (Lytton 1998)
Using the original McCulloch-Pitts notion of simple on/off spike coding in lieu of rate coding, an Anderson-Kohonen artificial neural network (ANN) associative memory model was ported to a neuronal network with Hodgkin-Huxley dynamics. (A sketch of the underlying heteroassociative storage and recall scheme appears after this list.)
5. Fixed point attractor (Hasselmo et al 1995)
"... In the model, cholinergic suppression of synaptic transmission at excitatory feedback synapses is shown to determine the extent to which activity depends upon new features of the afferent input versus components of previously stored representations. ..." See paper for more and details. The MATLAB script demonstrates the model of fixed point attractors mediated by excitatory feedback with subtractive inhibition in a continuous firing rate model.
6. Fronto-parietal visuospatial WM model with HH cells (Edin et al 2007)
1) J Cogn Neurosci: three structural mechanisms that had been hypothesized to underlie vsWM development during childhood were evaluated by simulating the model and comparing the results to fMRI. It was concluded that inter-regional synaptic connection strength causes vsWM development. 2) J Integr Neurosci: Given the importance of fronto-parietal connections, we tested whether connection asymmetry affected resistance to distraction, and drew the conclusion that stronger frontal connections are beneficial. By comparing model results to EEG, we concluded that the brain indeed has stronger frontal-to-parietal connections than vice versa.
7. Generation of stable heading representations in diverse visual scenes (Kim et al 2019)
"Many animals rely on an internal heading representation when navigating in varied environments. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by ‘compass’ neurons that innervate a ring-shaped structure known as the ellipsoid body. Each compass neuron receives inputs from ‘ring’ neurons that are selective for particular visual features; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity, which flexibly transforms two-dimensional visual cues into a stable heading representation. ... " See the supplementary information for model details.
8. Grid cell spatial firing models (Zilli 2012)
This package contains MATLAB implementations of most models (published from 2005 to 2011) of the hexagonal firing field arrangement of grid cells.
9. Hierarchical network model of perceptual decision making (Wimmer et al 2015)
Neuronal variability in sensory cortex predicts perceptual decisions. To investigate the interaction of bottom-up and top-down mechanisms during the decision process, we developed a hierarchical network model. The network consists of two circuits composed of leaky integrate-and-fire neurons: an integration circuit (e.g. LIP, FEF) and a sensory circuit (MT), recurrently coupled via bottom-up feedforward connections and top-down feedback connections. The integration circuit accumulates sensory evidence and produces a binary categorization through winner-take-all competition between two decision-encoding populations (X.J. Wang, Neuron, 2002). The sensory circuit is a balanced, randomly connected E-I network containing neural populations selective for opposite directions of motion. We have used this model to simulate a standard two-alternative forced-choice motion discrimination task. (A reduced rate-model sketch of the winner-take-all competition appears after this list.)
10. High dimensional dynamics and low dimensional readouts in neural microcircuits (Haeusler et al 2006)
We investigate generic models for cortical microcircuits, i.e. recurrent circuits of integrate-and-fire neurons with dynamic synapses. These complex dynamical systems subserve the amazing information processing capabilities of the cortex, but are at present poorly understood. We analyze the transient dynamics of models for neural microcircuits from the point of view of one or two readout neurons that collapse the high-dimensional transient dynamics of a neural circuit into a 1- or 2-dimensional output stream. See the paper for details.
11. Hopfield and Brody model (Hopfield, Brody 2000)
NEURON implementation of the Hopfield and Brody model from the papers JJ Hopfield and CD Brody (2000) and JJ Hopfield and CD Brody (2001). Instructions are provided in the accompanying readme.txt file.
12. Hopfield and Brody model (Hopfield, Brody 2000) (NEURON+python)
Demonstration of Hopfield-Brody synchronization using artificial cells in NEURON+python.
13. Hyperconnectivity, slow synapses in PFC mental retardation and autism model (Testa-Silva et al 2011)
This directory contains files to reproduce sample computer simulations presented in the 2011 paper by Meredith, R., Testa-Silva, G., Loebel, A., Giugliano, M., de Kock, C., and Mansvelder, H., "Hyperconnectivity and slow synapses in prefrontal cortex of a model for mental retardation and autism". The subdirectory 'matlab' contains MATLAB scripts (The MathWorks, USA) that can be used to reproduce the panels of Figures 4-5. ABSTRACT: "... We propose that these findings are tightly linked: using a network model, we show that slower synapses are essential to counterbalance hyperconnectivity in order to maintain a dynamic range of excitatory activity. However, the slow synaptic time constants induce decreased responsiveness to low frequency stimulation, which may explain deficits in integration and information processing in attentional neuronal networks in neurodevelopmental disorders."
14. Multistability of clustered states in a globally inhibitory network (Chandrasekaran et al. 2009)
"We study a network of m identical excitatory cells projecting excitatory synaptic connections onto a single inhibitory interneuron, which is reciprocally coupled to all excitatory cells through inhibitory synapses possessing short-term synaptic depression. We find that such a network with global inhibition possesses multiple stable activity patterns with distinct periods, characterized by the clustering of the excitatory cells into synchronized sub-populations. We prove the existence and stability of n-cluster solutions in a m-cell network. ... Implications for temporal coding and memory storage are discussed."
15. Noise promotes independent control of gamma oscillations and grid firing (Solanka et al 2015)
"Neural computations underlying cognitive functions require calibration of the strength of excitatory and inhibitory synaptic connections and are associated with modulation of gamma frequency oscillations in network activity. However, principles relating gamma oscillations, synaptic strength and circuit computations are unclear. We address this in attractor network models that account for grid firing and theta-nested gamma oscillations in the medial entorhinal cortex. ..."
16. Recurrent amplification of grid-cell activity (D'Albis and Kempter 2020)
17. Self-organized olfactory pattern recognition (Kaplan & Lansner 2014)
" ... We present a large-scale network model with single and multi-compartmental Hodgkin–Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. ... The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian–Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures."
18. Stable propagation of synchronous spiking in cortical neural networks (Diesmann et al 1999)
"... Here we show that precisely synchronized action potentials can propagate within a model of cortical network activity that recapitulates many of the features of biological systems. An attractor, yielding a stable spiking precision in the (sub)millisecond range, governs the dynamics of synchronization. Our results indicate that a combinatorial neural code, based on rapid associations of groups of neurons co-ordinating their activity at the single spike level, is possible within a cortical-like network."
