Circuits that contain the Modeling Application : NEST (Home Page)

("NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. The development of NEST is coordinated by the NEST Initiative. NEST is ideal for networks of spiking neurons of any size, for example: 1. Models of information processing e.g. in the visual or auditory cortex of mammals, 2. Models of network activity dynamics, e.g. laminar cortical networks or balanced random networks, 3. Models of learning and plasticity.")
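The entries below are all point-neuron network simulations. As a minimal, self-contained illustration of the kind of dynamics involved (a sketch with illustrative parameter values, not code from any entry in this list), here is a leaky integrate-and-fire neuron integrated with the forward Euler method:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# All parameter values are illustrative (dimensionless voltage, time in ms).

def simulate_lif(i_ext=1.5, t_max=100.0, dt=0.1,
                 tau_m=10.0, v_rest=0.0, v_reset=0.0, v_thresh=1.0):
    """Return the spike times (ms) of one LIF neuron under constant drive."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane equation: tau_m * dV/dt = -(V - V_rest) + I_ext
        v += dt * (-(v - v_rest) + i_ext) / tau_m
        if v >= v_thresh:          # threshold crossing: emit spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes
```

With the defaults above the steady-state voltage (1.5) sits above threshold (1.0), so the neuron fires regularly, roughly every 10·ln 3 ≈ 11 ms.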
1. A spatial model of the intermediate superior colliculus (Moren et al. 2013)
A spatial model of the intermediate superior colliculus. It reproduces the collicular saccade-generating output profile from NMDA receptor-driven burst neurons, shaped by integrative inhibitory feedback from spreading buildup neuron activity. The model is consistent with the view that collicular activity directly shapes the temporal profile of saccadic eye movements. We use the adaptive exponential integrate-and-fire (AdEx) neuron model, augmented with an NMDA-like membrane potential-dependent receptor. In addition, we use a synthetic spike integrator model as a stand-in for a spike-integrator circuit in the reticular formation. NOTE: We use a couple of custom neuron models, so the supplied model file includes an entire version of NEST. I also include a patch that applies to a clean version of the simulator (see the doc/README).
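As a rough sketch of the adaptive exponential integrate-and-fire dynamics this model builds on (Brette & Gerstner 2005), here is a single forward-Euler update in plain Python; the parameter values are the commonly quoted AdEx defaults and are illustrative only, not taken from the Moren et al. model:

```python
import math

# One forward-Euler update of the adaptive exponential integrate-and-fire
# (AdEx) neuron (Brette & Gerstner 2005). Parameter values are the commonly
# quoted defaults (pF, nS, mV, ms, pA) and are illustrative only.

def adex_step(v, w, i_ext, dt=0.1,
              c_m=281.0, g_l=30.0, e_l=-70.6, v_t=-50.4, delta_t=2.0,
              tau_w=144.0, a=4.0, b=80.5, v_peak=0.0, v_reset=-70.6):
    """Advance membrane potential v and adaptation current w by dt.

    Returns (v, w, spiked)."""
    dv = (-g_l * (v - e_l)                                 # leak
          + g_l * delta_t * math.exp((v - v_t) / delta_t)  # spike initiation
          - w + i_ext) / c_m                               # adaptation + input
    dw = (a * (v - e_l) - w) / tau_w
    v += dt * dv
    w += dt * dw
    if v >= v_peak:          # spike: reset voltage, increment adaptation
        return v_reset, w + b, True
    return v, w, False
```

Driving this neuron with a constant suprathreshold current produces spike-frequency adaptation, since w grows by b at every spike and slowly decays between spikes.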
2. A spiking neural network model of model-free reinforcement learning (Nakano et al 2015)
"Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. ... In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL (partially observable reinforcement learning) problems with high-dimensional observations. ... The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. "
3. A spiking NN for amplification of feature-selectivity with specific connectivity (Sadeh et al 2015)
The model simulates large-scale inhibition-dominated spiking networks with different degrees of recurrent specific connectivity. It shows how feature-specific connectivity leads to a linear amplification of feedforward tuning, as reported in recent electrophysiological single-neuron recordings in rodent neocortex. Moreover, feature-specific connectivity leads to the emergence of feature-selective reverberating activity, and entails pattern completion in network responses.
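A toy linear rate-model sketch of this amplification effect (my own illustration, not the authors' spiking network): with a recurrent kernel J(Δθ) = j0 + j1·cos(Δθ) on a ring of orientations, the cosine-tuned component of the feedforward input is amplified by the factor 1/(1 − j1/2):

```python
import math

# Toy linear rate model on a ring of orientations (an illustration, not the
# authors' spiking network). Recurrent kernel J(dtheta) = j0 + j1*cos(dtheta);
# the cosine-tuned part of the feedforward input is amplified by 1/(1 - j1/2).

def ring_gain(j0=-0.5, j1=0.5, n=64, iters=100):
    """Return the output/input modulation ratio of the linear ring network."""
    thetas = [2.0 * math.pi * k / n for k in range(n)]
    h = [1.0 + 0.2 * math.cos(t) for t in thetas]   # tuned feedforward input
    kernel = [[(j0 + j1 * math.cos(thetas[i] - thetas[j])) / n
               for j in range(n)] for i in range(n)]
    r = h[:]
    for _ in range(iters):                          # fixed-point iteration
        r = [h[i] + sum(kernel[i][j] * r[j] for j in range(n))
             for i in range(n)]

    def modulation(v):                              # first cosine coefficient
        return 2.0 * sum(v[k] * math.cos(thetas[k]) for k in range(n)) / n

    return modulation(r) / modulation(h)
```

With j1 = 0.5 the predicted gain is 1/(1 - 0.25) = 4/3, i.e. feature-specific recurrent coupling linearly amplifies the feedforward tuning, while j1 = 0 leaves the tuning unamplified.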
4. An electrophysiological model of GABAergic double bouquet cells (Chrysanthidis et al. 2019)
We present an electrophysiological model of double bouquet cells (DBCs) and integrate them into an established cortical columnar microcircuit model that implements a BCPNN (Bayesian Confidence Propagation Neural Network) learning rule. The proposed architecture effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. The introduction of DBCs improves the biological plausibility of our model, without affecting the model's spiking activity, basic operation, and learning abilities.
5. Complex dynamics: reproducing Golgi cell electroresponsiveness (Geminiani et al 2018, 2019ab)
Excerpts from the abstracts of three papers: "Brain neurons exhibit complex electroresponsive properties – including intrinsic subthreshold oscillations and pacemaking, resonance and phase-reset – which are thought to play a critical role in controlling neural network dynamics. Although these properties emerge from detailed representations of molecular-level mechanisms in “realistic” models, they cannot usually be generated by simplified neuronal models (although these may show spike-frequency adaptation and bursting). We report here that this whole set of properties can be generated by the extended generalized leaky integrate-and-fire (E-GLIF) neuron model. ..." "... In order to reproduce these properties in single-point neuron models, we have optimized the Extended-Generalized Leaky Integrate and Fire (E-GLIF) neuron through a multi-objective gradient-based algorithm targeting the desired input–output relationships. ..." "... In order to investigate how single neuron dynamics and geometrical modular connectivity affect cerebellar processing, we have built an olivocerebellar Spiking Neural Network (SNN) based on a novel simplification algorithm for single point models (Extended Generalized Leaky Integrate and Fire, EGLIF) capturing essential non-linear neuronal dynamics (e.g., pacemaking, bursting, adaptation, oscillation and resonance). ..."
6. Cortical feedback alters visual response properties of dLGN relay cells (Martínez-Cañada et al 2018)
Network model that includes biophysically detailed, single-compartment and multicompartment neuron models of relay cells and interneurons in the dLGN, and a population of orientation-selective layer 6 simple cells consisting of pyramidal cells (PY). We have considered two different arrangements of synaptic feedback from the ON and OFF zones in the visual cortex to the dLGN: phase-reversed (‘push-pull’) and phase-matched (‘push-push’), as well as different spatial extents of the corticothalamic projection pattern. The associated publication is Martínez-Cañada et al. (2018). Installation instructions and the latest version can be found in the GitHub repository.
7. Emergence of spatiotemporal sequences in spiking neuronal networks (Spreizer et al 2019)
"Spatio-temporal sequences of neuronal activity are observed in many brain regions in a variety of tasks and are thought to form the basis of meaningful behavior. However, mechanisms by which a neuronal network can generate spatio-temporal activity sequences have remained obscure. Existing models are biologically untenable because they either require manual embedding of a feedforward network within a random network or supervised learning to train the connectivity of a network to generate sequences. Here, we propose a biologically plausible, generative rule to create spatio-temporal activity sequences in a network of spiking neurons with distance-dependent connectivity. We show that the emergence of spatio-temporal activity sequences requires: (1) individual neurons preferentially project a small fraction of their axons in a specific direction, and (2) the preferential projection direction of neighboring neurons is similar. Thus, an anisotropic but correlated connectivity of neuron groups suffices to generate spatio-temporal activity sequences in an otherwise random neuronal network model."
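The generative connectivity rule can be sketched as follows (a toy illustration under assumed choices: a grid layout with periodic boundaries, a Gaussian connection profile, and a smoothly varying preferred-direction field; these are not the authors' exact parameters):

```python
import math
import random

# Sketch of the generative rule: each neuron projects a Gaussian cloud of
# connections displaced by `shift` grid units along a preferred direction,
# and the preferred direction varies smoothly with position so that
# neighbouring neurons project to similar targets. All values are toy choices.

def build_anisotropic_targets(n_side=20, k_out=20, shift=3.0, sigma=2.0,
                              seed=0):
    """Return {(x, y): [(tx, ty), ...]} on an n_side x n_side torus."""
    rng = random.Random(seed)

    def preferred_angle(x, y):
        # Smooth function of position: neighbours get similar directions.
        return 2.0 * math.pi * (x + y) / (2.0 * n_side)

    targets = {}
    for x in range(n_side):
        for y in range(n_side):
            th = preferred_angle(x, y)
            cx = x + shift * math.cos(th)   # displaced centre of the
            cy = y + shift * math.sin(th)   # outgoing connection cloud
            targets[(x, y)] = [
                (int(round(rng.gauss(cx, sigma))) % n_side,
                 int(round(rng.gauss(cy, sigma))) % n_side)
                for _ in range(k_out)
            ]
    return targets
```

Each neuron's out-degree stays the same as in a homogeneous random network; only the small directional displacement of the connection cloud, correlated across neighbours, introduces the anisotropy the paper identifies as sufficient for sequence generation.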
8. GLMCC validation neural network model (Kobayashi et al. 2019)
Network model of two populations of randomly connected inhibitory and excitatory neurons to validate method for reconstructing the neural circuitry developed in "Reconstructing Neuronal Circuitry from Parallel Spike Trains" by Ryota Kobayashi, Shuhei Kurita, Anno Kurth, Katsunori Kitano, Kenji Mizuseki, Markus Diesmann, Barry J. Richmond and Shigeru Shinomoto.
9. Growth Rules for Repair of Asynch Irregular Networks after Peripheral Lesions (Sinha et al 2021)
A model of peripheral lesions and the resulting activity-dependent rewiring in a simplified balanced cortical network that exhibits biologically realistic Asynchronous Irregular (AI) activity, used to derive activity-dependent growth rules for different synaptic elements: dendritic and axonal.
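For orientation, a linear homeostatic growth rule of the kind used in structural-plasticity frameworks can be sketched as below (a generic sketch, not the specific growth rules derived in the paper): synaptic elements are created while a calcium-like activity trace sits below its target level and removed when activity exceeds it.

```python
# Linear homeostatic growth rule for synaptic elements (a generic sketch in
# the spirit of structural-plasticity frameworks, NOT the paper's derived
# rules). Elements grow while a calcium-like activity trace is below its
# target and are removed when activity exceeds the target.

def grow_elements(ca, z, ca_target=0.05, nu=1e-4, dt=1.0):
    """One Euler update of the number of free synaptic elements z >= 0."""
    return max(0.0, z + dt * nu * (1.0 - ca / ca_target))
```

After a peripheral lesion, deprived neurons lose input and their activity trace falls below target, so a rule of this shape drives the formation of new synaptic elements and hence rewiring.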
10. Multi-area layer-resolved spiking network model of resting-state dynamics in macaque visual cortex
11. Networks of spiking neurons: a review of tools and strategies (Brette et al. 2007)
This package provides a series of codes that simulate networks of spiking neurons (excitatory and inhibitory, integrate-and-fire or Hodgkin-Huxley type, current-based or conductance-based synapses; some of them are event-based). The same networks are implemented in different simulators (NEURON, GENESIS, NEST, NCS, CSIM, XPP, SPLIT, MVAspike; there are also a couple of implementations in Scilab and C++). The codes included in this package are benchmark simulations; see the associated review paper (Brette et al. 2007). The main goal is to provide a series of benchmark simulations of networks of spiking neurons, and to demonstrate how these are implemented in the different simulators overviewed in the paper. See also the enclosed file Appendix2.pdf, which describes these different benchmarks in detail. Some of these benchmarks were based on the Vogels-Abbott model (Vogels TP and Abbott LF 2005).
12. Neuromorphic muscle spindle model (Vannucci et al 2017)
A fully spike-based, biologically inspired mechanism for the translation of proprioceptive feedback.
13. Orientation selectivity in inhibition-dominated recurrent networks (Sadeh and Rotter, 2015)
Emergence of contrast-invariant orientation selectivity in large-scale networks of excitatory and inhibitory neurons using integrate-and-fire neuron models.
14. Sparsely connected networks of spiking neurons (Brunel 2000)
The dynamics of networks of sparsely connected excitatory and inhibitory integrate-and-fire neurons are studied analytically (and with simulations). The analysis reveals a rich repertoire of states, including synchronous states in which neurons fire regularly; asynchronous states with stationary global activity and very irregular individual cell activity; and states in which the global activity oscillates but individual cells fire irregularly, typically at rates lower than the global oscillation frequency. See the paper for further details.
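The sparse random connectivity underlying such networks can be sketched as follows (toy sizes; in Brunel's notation the fixed excitatory in-degree is C_E = εN_E):

```python
import random

# Sparse random connectivity of the Brunel (2000) type: every neuron receives
# a fixed number of excitatory (C_E = eps*N_E) and inhibitory (C_I = eps*N_I)
# inputs drawn at random. Toy sizes; the paper uses much larger networks.

def build_sparse_network(n_e=800, n_i=200, epsilon=0.1, seed=1):
    """Return {post_id: (excitatory_presyn_ids, inhibitory_presyn_ids)}."""
    rng = random.Random(seed)
    c_e, c_i = int(epsilon * n_e), int(epsilon * n_i)
    exc = list(range(n_e))                  # ids 0 .. n_e-1
    inh = list(range(n_e, n_e + n_i))       # ids n_e .. n_e+n_i-1
    return {post: (rng.sample(exc, c_e), rng.sample(inh, c_i))
            for post in range(n_e + n_i)}
```

Given this skeleton, the network state in the paper is controlled by the relative inhibition strength g and the external drive, with the asynchronous irregular regime appearing for inhibition-dominated coupling (roughly g > 4).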
15. Structure-dynamics relationships in bursting neuronal networks revealed (Mäki-Marttunen et al. 2013)
This entry includes tools for generating and analyzing network structure, and for running the neuronal network simulations on them.
