Circuits that contain the Model Type: Connectionist Network

(Connectionist models typically describe the interaction of brain regions with simpler nodes than so-called realistic network models, whose neurons' voltages are comparable to electrophysiology recordings.)
    Models   Description
1. A basal ganglia model of aberrant learning (Ursino et al. 2018)
A comprehensive, biologically inspired neurocomputational model of action selection in the Basal Ganglia allows simulation of dopamine-induced aberrant learning in Parkinsonian subjects. In particular, the model simulates the Alternate Finger Tapping motor task as an indicator of bradykinesia.
2. A computational model of systems memory consolidation and reconsolidation (Helfer & Shultz 2019)
A neural-network framework for modeling systems memory consolidation and reconsolidation.
3. A dendritic disinhibitory circuit mechanism for pathway-specific gating (Yang et al. 2016)
"While reading a book in a noisy café, how does your brain ‘gate in’ visual information while filtering out auditory stimuli? Here we propose a mechanism for such flexible routing of information flow in a complex brain network (pathway-specific gating), tested using a network model of pyramidal neurons and three classes of interneurons with connection probabilities constrained by data. We find that if inputs from different pathways cluster on a pyramidal neuron dendrite, a pathway can be gated-on by a disinhibitory circuit motif. ..."
4. A dynamical model of the basal ganglia (Leblois et al 2006)
We propose a new model for the function and dysfunction of the basal ganglia (BG). The basal ganglia are a set of cerebral structures involved in motor control whose dysfunction causes high-incidence pathologies such as Parkinson's disease (PD). Their precise motor functions remain unknown. The classical model of the BG that allowed for the discovery of new treatments for PD seems today outdated in several respects. Based on experimental observations, our model proposes a simple dynamical framework for understanding how the BG may select motor programs to be executed. Moreover, we explain how this ability is lost and how tremor-related oscillations in neuronal activity may emerge in PD.
5. A Neural mass computational model of the Thalamocorticothalamic circuitry (Bhattacharya et al. 2011)
The model presented here is a biophysically plausible version of a simple thalamo-cortical neural mass computational model proposed by Lopes da Silva in 1974 to simulate brain EEG activity within the alpha band (8-13 Hz). The thalamic and cortical circuits are presented as separate modules in this model with cell populations as in biology. The connectivity between cell populations is as reported by Sherman, S. in Scholarpedia, 2006. The values of the synaptic connectivity parameters are as reported by Van Horn et al, 2000. In our paper (doi:10.1016/j.neunet.2011.02.009), we study the model behaviour while varying the values of the synaptic connectivity parameters (Cyyy) in the model about their respective 'basal' (initial) values.
6. A neural mass model for critical assessment of brain connectivity (Ursino et al 2020)
We use a neural mass model of interconnected regions of interest to simulate reliable neuroelectrical signals in the cortex. In particular, signals simulating mean field potentials were generated assuming two, three or four ROIs, connected via excitatory or bi-synaptic inhibitory links. Then we investigated whether bivariate Transfer Entropy (TE) can be used to detect a statistically significant connection from data (as in binary 0/1 networks), and whether connection strength can be quantified (i.e., the occurrence of a linear relationship between TE and connection strength). Results suggest that TE can reliably estimate the strength of connectivity if neural populations work in their linear regions. However, nonlinear phenomena dramatically affect the assessment of connectivity, since they may significantly reduce the TE estimate. Software included here allows the simulation of neural mass models with a variable number of ROIs and connections, the estimation of TE using the free package Trentool, and the realization of figures to compare true connectivity with estimated values.
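Entry 6 hinges on bivariate Transfer Entropy. As a minimal, hedged illustration (plain Python on discretized binary signals, not the Trentool estimator used with the model), TE from x to y with history length 1 can be computed by counting:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Bivariate transfer entropy from x to y (bits), history length 1,
    for discrete-valued signals of equal length."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_past, x_past)
    pairs_self = Counter(zip(y[1:], y[:-1]))
    pairs_joint = Counter(zip(y[:-1], x[:-1]))
    hist = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_joint[(y0, x0)]          # p(y1 | y0, x0)
        p_self = pairs_self[(y1, y0)] / hist[y0]    # p(y1 | y0)
        te += (c / n) * log2(p_full / p_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
# y copies x with a one-step lag and 10% flip noise: information flows x -> y
y = [0] + [xi if random.random() > 0.1 else 1 - xi for xi in x[:-1]]
print(transfer_entropy(x, y) > transfer_entropy(y, x))
```

As the entry cautions, such estimates degrade when the underlying relationship is nonlinear or the populations leave their linear operating regions.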
7. A neural model of Parkinson's disease (Cutsuridis and Perantonis 2006, Cutsuridis 2006, 2007)
"A neural model of neuromodulatory (dopamine) control of arm movements in Parkinson’s disease (PD) bradykinesia was recently introduced [1, 2]. The model is multi-modular consisting of a basal ganglia module capable of selecting the most appropriate motor command in a given context, a cortical module for coordinating and executing the final motor commands, and a spino-musculo-skeletal module for guiding the arm to its final target and providing proprioceptive (feedback) input of the current state of the muscle and arm to higher cortical and lower spinal centers. ... The new (extended) model [3] predicted that the reduced reciprocal disynaptic Ia inhibition in the DA depleted case doesn’t lead to the co-contraction of antagonist motor units." See the readme and papers below for more details.
8. A NN with synaptic depression for testing the effects of connectivity on dynamics (Jacob et al 2019)
Here we used a 10,000-neuron model. The neurons are a mixture of excitatory and inhibitory integrate-and-fire neurons connected with synapses that exhibit synaptic depression. Three connectivity paradigms were tested to look for spontaneous transitions between interictal spiking and seizure: uniform, small-world, and scale-free. All three model types are included here.
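A depressing synapse of the kind entry 8 relies on can be sketched in a few lines (a Tsodyks-Markram-style resource model; parameter values here are illustrative, not those of the paper):

```python
def depressing_synapse(spike_times, dt=0.1, tau_rec=800.0, U=0.5, t_end=500.0):
    """Each presynaptic spike consumes a fraction U of the available synaptic
    resource x, which recovers toward 1 with time constant tau_rec (ms).
    Returns the synaptic efficacy U*x seen by each spike."""
    x, t = 1.0, 0.0
    efficacies = []
    spikes = iter(sorted(spike_times))
    next_spike = next(spikes, None)
    while t < t_end:
        if next_spike is not None and t >= next_spike:
            efficacies.append(U * x)
            x -= U * x                       # resource consumed by the spike
            next_spike = next(spikes, None)
        x += dt * (1.0 - x) / tau_rec        # slow recovery toward 1
        t += dt
    return efficacies

# a regular 20 Hz presynaptic train: efficacy declines toward a steady state
eff = depressing_synapse([50.0 * i for i in range(1, 10)])
print(round(eff[0], 3), round(eff[-1], 3))
```

Because recovery (hundreds of ms) is slower than the interspike interval, each successive spike finds less resource, which is the frequency-dependent weakening the network model exploits.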
9. A reinforcement learning example (Sutton and Barto 1998)
This MATLAB script demonstrates an example of reinforcement learning functions guiding the movements of an agent (a black square) in a gridworld environment. See the comments at the top of the MATLAB script and the book for more details.
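Since the original MATLAB demo is not reproduced here, the same idea can be hedged into a self-contained Python sketch: tabular Q-learning steering an agent to a goal corner of a small gridworld (grid size, reward, and learning parameters are illustrative, not taken from the script):

```python
import random

def train_gridworld(size=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: the agent starts at (0,0) and earns +1 at the goal."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    goal = (size - 1, size - 1)
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda b: q.get((s, b), 0.0)))
            s2 = (min(max(s[0] + moves[a][0], 0), size - 1),
                  min(max(s[1] + moves[a][1], 0), size - 1))
            r = 1.0 if s2 == goal else 0.0
            target = r + gamma * max(q.get((s2, b), 0.0) for b in range(4))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
    return q, moves, goal

def greedy_path_length(q, moves, goal, size=4, limit=50):
    """Number of steps the learned greedy policy takes from start to goal."""
    s, steps = (0, 0), 0
    while s != goal and steps < limit:
        a = max(range(4), key=lambda b: q.get((s, b), 0.0))
        s = (min(max(s[0] + moves[a][0], 0), size - 1),
             min(max(s[1] + moves[a][1], 0), size - 1))
        steps += 1
    return steps

q, moves, goal = train_gridworld()
print(greedy_path_length(q, moves, goal))  # shortest possible is 6 (3 right + 3 down)
```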
10. A single-cell spiking model for the origin of grid-cell patterns (D'Albis & Kempter 2017)
A single-cell spiking model explaining the formation of grid-cell pattern in a feed-forward network. Patterns emerge via spatially-tuned feedforward inputs, synaptic plasticity, and spike-rate adaptation.
11. A spatial model of the intermediate superior colliculus (Moren et al. 2013)
A spatial model of the intermediate superior colliculus. It reproduces the collicular saccade-generating output profile from NMDA receptor-driven burst neurons, shaped by integrative inhibitory feedback from spreading buildup neuron activity. The model is consistent with the view that collicular activity directly shapes the temporal profile of saccadic eye movements. We use the adaptive exponential integrate-and-fire neuron model, augmented with an NMDA-like membrane potential-dependent receptor. In addition, we use a synthetic spike integrator model as a stand-in for a spike-integrator circuit in the reticular formation. NOTE: We use a couple of custom neuron models, so the supplied model file includes an entire version of NEST. I also include a patch that applies to a clean version of the simulator (see the doc/README).
12. A theory of ongoing activity in V1 (Goldberg et al 2004)
Ongoing spontaneous activity in the cerebral cortex exhibits complex spatiotemporal patterns in the absence of sensory stimuli. To elucidate the nature of this ongoing activity, we present a theoretical treatment of two contrasting scenarios of cortical dynamics: (1) fluctuations about a single background state and (2) wandering among multiple “attractor” states, which encode a single or several stimulus features. Studying simplified network rate models of the primary visual cortex (V1), we show that the single state scenario is characterized by fast and high-dimensional Gaussian-like fluctuations, whereas in the multiple state scenario the fluctuations are slow, low dimensional, and highly non-Gaussian. Studying a more realistic model that incorporates correlations in the feedforward input, spatially restricted cortical interactions, and an experimentally derived layout of pinwheels, we show that recent optical-imaging data of ongoing activity in V1 are consistent with the presence of either a single background state or multiple attractor states encoding many features.
13. Alleviating catastrophic forgetting: context gating and synaptic stabilization (Masse et al 2018)
"Artificial neural networks can suffer from catastrophic forgetting, in which learning a new task causes the network to forget how to perform previous tasks. While previous studies have proposed various methods that can alleviate forgetting over small numbers (<10) of tasks, it is uncertain whether they can prevent forgetting across larger numbers of tasks. In this study, we propose a neuroscience-inspired scheme, called “context-dependent gating,” in which mostly nonoverlapping sets of units are active for any one task. Importantly, context-dependent gating has a straightforward implementation, requires little extra computational overhead, and when combined with previous methods to stabilize connection weights, can allow networks to maintain high performance across large numbers of sequentially presented tasks."
14. Alternative time representation in dopamine models (Rivest et al. 2009)
Combines a long short-term memory (LSTM) model of the cortex with a temporal difference (TD) learning model of the basal ganglia. Code to run simulations similar to the published data: Rivest, F., Kalaska, J.F., Bengio, Y. (2009) Alternative time representation in dopamine models. Journal of Computational Neuroscience. See the paper for details.
15. An oscillatory neural autoencoder based on frequency modulation and multiplexing (Soman et al 2018)
" ... We propose here an oscillatory neural network model that performs the function of an autoencoder. The model is a hybrid of rate-coded neurons and neural oscillators. Input signals modulate the frequency of the neural encoder oscillators. These signals are then multiplexed using a network of rate-code neurons that has afferent Hebbian and lateral anti-Hebbian connectivity, termed as Lateral Anti Hebbian Network (LAHN). Finally the LAHN output is de-multiplexed using an output neural layer which is a combination of adaptive Hopf and Kuramoto oscillators for the signal reconstruction. The Kuramoto-Hopf combination performing demodulation is a novel way of describing a neural phase-locked loop. The proposed model is tested using both synthetic signals and real world EEG signals. The proposed model arises out of the general motivation to construct biologically inspired, oscillatory versions of some of the standard neural network models, and presents itself as an autoencoder network based on oscillatory neurons applicable to time series signals. As a demonstration, the model is applied to compression of EEG signals."
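Entry 15's decoder combines adaptive Hopf and Kuramoto oscillators. As a hedged, stand-alone illustration of just the Kuramoto ingredient (not the paper's hybrid autoencoder network), mean-field coupling pulls randomly initialized phases into synchrony, measured by the order parameter r:

```python
import math
import random

def kuramoto_order(n=20, k=2.0, dt=0.01, steps=2000, seed=1):
    """Euler-integrate the Kuramoto model in mean-field form,
        d(theta_i)/dt = omega_i + k * r * sin(psi - theta_i),
    and return the final order parameter r in [0, 1]."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.2) for _ in range(n)]   # natural frequencies
    for _ in range(steps):
        cx = sum(math.cos(t) for t in theta) / n
        cy = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, cy), math.atan2(cy, cx)
        theta = [t + dt * (w + k * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    cy = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, cy)

print(kuramoto_order(k=2.0) > kuramoto_order(k=0.0))  # coupling synchronizes
```

With coupling well above the critical value the phases lock (r near 1); with no coupling they drift independently and r stays small.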
16. An oscillatory neural model of multiple object tracking (Kazanovich and Borisyuk 2006)
An oscillatory neural network model of multiple object tracking is described. The model works with a set of identical visual objects moving around the screen. At the initial stage, the model selects into the focus of attention a subset of objects initially marked as targets. Other objects are used as distractors. The model aims to preserve the initial separation between targets and distractors while objects are moving. This is achieved by a proper interplay of synchronizing and desynchronizing interactions in a multilayer network, where each layer is responsible for tracking a single target. The results of the model simulation are presented and compared with experimental data. In agreement with experimental evidence, simulations with a larger number of targets have shown higher error rates. Also, the functioning of the model in the case of temporarily overlapping objects is presented.
17. Basal Ganglia and Levodopa Pharmacodynamics model for parameter estimation in PD (Ursino et al 2020)
Parkinson disease (PD) is characterized by a clear beneficial motor response to levodopa (LD) treatment. However, with disease progression and longer LD exposure, drug-related motor fluctuations usually occur. Recognition of the individual relationship between LD concentration and its effect may be difficult, due to the complexity and variability of the mechanisms involved. This work proposes an innovative procedure for the automatic estimation of LD pharmacokinetics and pharmacodynamics parameters, by a biologically-inspired mathematical model. An original issue, compared with previous similar studies, is that the model comprises not only a compartmental description of LD pharmacokinetics in plasma and its effect on the striatal neurons, but also a neurocomputational model of basal ganglia action selection. Parameter estimation was achieved on 26 patients (13 with stable and 13 with fluctuating LD response) to mimic plasma LD concentration and alternate finger tapping frequency along four hours after LD administration, automatically minimizing a cost function of the difference between simulated and clinical data points. Results show that individual data can be satisfactorily simulated in all patients and that significant differences exist in the estimated parameters between the two groups. Specifically, the drug removal rate from the effect compartment, and the Hill coefficient of the concentration-effect relationship were significantly higher in the fluctuating than in the stable group. The model, with individualized parameters, may be used to reach a deeper comprehension of the PD mechanisms, mimic the effect of medication, and, based on the predicted neural responses, plan the correct management and design innovative therapeutic procedures.
18. Basal ganglia-thalamocortical loop model of action selection (Humphries and Gurney 2002)
We embed our basal ganglia model into a wider circuit containing the motor thalamocortical loop and thalamic reticular nucleus (TRN). Simulation of this extended model showed that the additions gave five main results which are desirable in a selection/switching mechanism. First, low salience actions (i.e. those with low urgency) could be selected. Second, the range of salience values over which actions could be switched between was increased. Third, the contrast between the selected and non-selected actions was enhanced via improved differentiation of outputs from the BG. Fourth, transient increases in the salience of a non-selected action were prevented from interrupting the ongoing action, unless the transient was of sufficient magnitude. Finally, the selection of the ongoing action persisted when a new closely matched salience action became active. The first result was facilitated by the thalamocortical loop; the rest were dependent on the presence of the TRN. Thus, we conclude that the results are consistent with these structures having clearly defined functions in action selection.
19. Biologically-plausible models for spatial navigation (Cannon et al 2003)
Hypotheses about how parahippocampal and hippocampal structures may be involved in spatial navigation tasks are implemented in a model of a virtual rat navigating through a virtual environment in search of a food reward. The model incorporates theta oscillations to separate encoding from retrieval and yields testable predictions about the phase relations of spiking activity to theta oscillations in different parts of the hippocampal formation at various stages of the behavioral task. See the paper for more details.
20. Brainstem circuits controlling locomotor frequency and gait (Ausborn et al 2019)
"A series of recent studies identified key structures in the mesencephalic locomotor region and the caudal brainstem of mice involved in the initiation and control of slow (exploratory) and fast (escape-type) locomotion and gait. However, the interactions of these brainstem centers with each other and with the spinal locomotor circuits are poorly understood. Previously we suggested that commissural and long propriospinal interneurons are the main targets for brainstem inputs adjusting gait (Danner et al., 2017). Here, by extending our previous model, we propose a connectome of the brainstem-spinal circuitry and suggest a mechanistic explanation of the operation of brainstem structures and their roles in controlling speed and gait. We suggest that brainstem control of locomotion is mediated by two pathways, one controlling locomotor speed via connections to rhythm generating circuits in the spinal cord and the other providing gait control by targeting commissural and long propriospinal interneurons."
21. Cat auditory nerve model (Zilany and Bruce 2006, 2007)
"This paper presents a computational model to simulate normal and impaired auditory-nerve (AN) fiber responses in cats. The model responses match physiological data over a wider dynamic range than previous auditory models. This is achieved by providing two modes of basilar membrane excitation to the inner hair cell (IHC) rather than one. ... The model responses are consistent with a wide range of physiological data from both normal and impaired ears for stimuli presented at levels spanning the dynamic range of hearing."
22. Cerebellar memory consolidation model (Yamazaki et al. 2015)
"Long-term depression (LTD) at parallel fiber-Purkinje cell (PF-PC) synapses is thought to underlie memory formation in cerebellar motor learning. Recent experimental results, however, suggest that multiple plasticity mechanisms in the cerebellar cortex and cerebellar/vestibular nuclei participate in memory formation. To examine this possibility, we formulated a simple model of the cerebellum with a minimal number of components based on its known anatomy and physiology, implementing both LTD and long-term potentiation (LTP) at PF-PC synapses and mossy fiber-vestibular nuclear neuron (MF-VN) synapses. With this model, we conducted a simulation study of the gain adaptation of optokinetic response (OKR) eye movement. Our model reproduced several important aspects of previously reported experimental results in wild-type and cerebellum-related gene-manipulated mice. ..."
23. Cochlear implant models (Bruce et al. 1999a, b, c, 2000)
"In a recent set of modeling studies we have developed a stochastic threshold model of auditory nerve response to single biphasic electrical pulses (Bruce et al., 1999c) and moderate rate (less than 800 pulses per second) pulse trains (Bruce et al., 1999a). In this article we derive an analytical approximation for the single-pulse model, which is then extended to describe the pulse-train model in the case of evenly timed, uniform pulses. This renewal-process description provides an accurate and computationally efficient model of electrical stimulation of single auditory nerve fibers by a cochlear implant that may be extended to other forms of electrical neural stimulation."
24. Coding explains development of binocular vision and its failure in Amblyopia (Eckmann et al 2020)
This is the MATLAB code for the Active Efficient Coding model introduced in Eckmann et al 2020. It simulates an agent that self-calibrates vergence and accommodation eye movements in a simple visual environment. All algorithms are explained in detail in the main manuscript and the supplementary material of the paper.
25. Cognitive and motor cortico-basal ganglia interactions during decision making (Guthrie et al 2013)
This is a re-implementation of Guthrie et al 2013 by Topalidou and Rougier 2015. The original study investigated how multiple level action selection could be performed by the basal ganglia.
26. Connection-set Algebra (CSA) for the representation of connectivity in NN models (Djurfeldt 2012)
"The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. ... The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31–42, 2008b) and an implementation in Python has been publicly released."
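The algebra's flavor (elementary connection sets combined with set operations) can be conveyed in a hedged Python sketch; this illustrates the concept only and is not the API of the released CSA package:

```python
import itertools
import random

# elementary connection sets, represented as sets of (pre, post) index pairs
def all_to_all(n_pre, n_post):
    return set(itertools.product(range(n_pre), range(n_post)))

def one_to_one(n):
    return {(i, i) for i in range(n)}

def bernoulli(n_pre, n_post, p, seed=0):
    """Each possible connection exists independently with probability p."""
    rng = random.Random(seed)
    return {c for c in all_to_all(n_pre, n_post) if rng.random() < p}

# algebraic combination: full connectivity minus self-connections
recurrent = all_to_all(4, 4) - one_to_one(4)
print(len(recurrent))  # 16 pairs minus 4 self-connections = 12
```

In the real formalism the sets may be infinite and are evaluated lazily, which is what makes the same expressions usable for large-scale networks.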
27. Continuous lateral oscillations as a mechanism for taxis in Drosophila larvae (Wystrach et al 2016)
" ...Our analysis of larvae motion reveals a rhythmic, continuous lateral oscillation of the anterior body, encompassing all head-sweeps, small or large, without breaking the oscillatory rhythm. Further, we show that an agent-model that embeds this hypothesis reproduces a surprising number of taxis signatures observed in larvae. Also, by coupling the sensory input to a neural oscillator in continuous time, we show that the mechanism is robust and biologically plausible. ..."
28. Cortex learning models (Weber at al. 2006, Weber and Triesch, 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber at al. 2006) which uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame of reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch, 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
29. Cortico - Basal Ganglia Loop (Mulcahy et al 2020)
The model represents learning and reversal tasks and shows performance under control, Parkinsonian, and Huntington's disease conditions.
30. CRH modulates excitatory transmission and network physiology in hippocampus (Gunn et al. 2017)
This model simulates the effects of CRH on sharp waves in a rat CA1/CA3 model. It uses the frequency of the sharp waves as an output of the network.
31. Deep belief network learns context dependent behavior (Raudies, Zilli, Hasselmo 2014)
We tested a rule generalization capability with a Deep Belief Network (DBN), Multi-Layer Perceptron network, and the combination of a DBN with a linear perceptron (LP). Overall, the combination of the DBN and LP had the highest success rate for generalization.
32. Effect of circuit structure on odor representation in insect olfaction (Rajagopalan & Assisi 2020)
"How does the structure of a network affect its function? We address this question in the context of two olfactory systems that serve the same function, to distinguish the attributes of different odorants, but do so using markedly distinct architectures. In the locust, the probability of connections between projection neurons and Kenyon cells - a layer downstream - is nearly 50%. In contrast, this number is merely 5% in drosophila. We developed computational models of these networks to understand the relative advantages of each connectivity. Our analysis reveals that the two systems exist along a continuum of possibilities that balance two conflicting goals – separating the representations of similar odors while grouping together noisy variants of the same odor."
33. Fast population coding (Huys et al. 2007)
"Uncertainty coming from the noise in its neurons and the ill-posed nature of many tasks plagues neural computations. Maybe surprisingly, many studies show that the brain manipulates these forms of uncertainty in a probabilistically consistent and normative manner, and there is now a rich theoretical literature on the capabilities of populations of neurons to implement computations in the face of uncertainty. However, one major facet of uncertainty has received comparatively little attention: time. In a dynamic, rapidly changing world, data are only temporarily relevant. Here, we analyze the computational consequences of encoding stimulus trajectories in populations of neurons. ..."
34. Fisher and Shannon information in finite neural populations (Yarrow et al. 2012)
Here we model populations of rate-coding neurons with bell-shaped tuning curves and multiplicative Gaussian noise. This Matlab code supports the calculation of information theoretic (mutual information, stimulus-specific information, stimulus-specific surprise) and Fisher-based measures (Fisher information, I_Fisher, SSI_Fisher) in these population models. The information theoretic measures are computed by Monte Carlo integration, which allows computationally-intensive decompositions of the mutual information to be computed for relatively large populations (hundreds of neurons).
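For the Fisher-based measures in entry 34, the population Fisher information has a closed form; the sketch below assumes, for simplicity, additive rather than multiplicative Gaussian noise, giving J(s) = sum_i f_i'(s)^2 / sigma^2 for tuning curves f_i:

```python
import math

def fisher_information(s, centers, width=1.0, gain=10.0, sigma=1.0):
    """Population Fisher information at stimulus s for bell-shaped tuning
    curves f_i(s) = gain * exp(-(s - c_i)**2 / (2 * width**2)) with additive
    Gaussian noise of standard deviation sigma."""
    j = 0.0
    for c in centers:
        f = gain * math.exp(-(s - c) ** 2 / (2.0 * width ** 2))
        f_prime = -f * (s - c) / width ** 2   # slope of the tuning curve
        j += f_prime ** 2 / sigma ** 2
    return j

centers = [0.5 * i for i in range(-10, 11)]   # evenly tiled preferred stimuli
print(fisher_information(0.0, centers) > fisher_information(10.0, centers))
```

Inside the tiled region the information is carried by the flanks of the tuning curves; far outside it, no curve has slope and J collapses, which is one reason the Fisher-based and information-theoretic measures in the entry are compared as functions of the stimulus.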
35. Generation of stable heading representations in diverse visual scenes (Kim et al 2019)
"Many animals rely on an internal heading representation when navigating in varied environments. How this representation is linked to the sensory cues that define different surroundings is unclear. In the fly brain, heading is represented by ‘compass’ neurons that innervate a ring-shaped structure known as the ellipsoid body. Each compass neuron receives inputs from ‘ring’ neurons that are selective for particular visual features; this combination provides an ideal substrate for the extraction of directional information from a visual scene. Here we combine two-photon calcium imaging and optogenetics in tethered flying flies with circuit modelling, and show how the correlated activity of compass and visual neurons drives plasticity, which flexibly transforms two-dimensional visual cues into a stable heading representation. ... " See the supplementary information for model details.
36. Graph-theoretical Derivation of Brain Structural Connectivity (Giacopelli et al 2020)
Brain connectivity at the single neuron level can provide fundamental insights into how information is integrated and propagated within and between brain regions. However, it is almost impossible to adequately study this problem experimentally and, despite intense efforts in the field, no mathematical description has been obtained so far. Here, we present a mathematical framework based on a graph-theoretical approach that, starting from experimental data obtained from a few small subsets of neurons, can quantitatively explain and predict the corresponding full network properties. This model also changes the paradigm with which large-scale model networks can be built, from using probabilistic/empiric connections or limited data, to a process that can algorithmically generate neuronal networks connected as in the real system.
37. Hebbian STDP for modelling the emergence of disparity selectivity (Chauhan et al 2018)
This code shows how Hebbian learning mediated by STDP mechanisms could explain the emergence of disparity selectivity in the early visual system. This upload is a snapshot of the code at the time of acceptance of the paper. For a link to a soon-to-come git repository, consult the author's website: . The datasets used in the paper are not provided due to size, but download links and expected directory-structures are. The user can (and is strongly encouraged to) experiment with their own dataset. Let me know if you find something interesting! Finally, I am very keen on a redesign/restructure/adaptation of the code to more applied problems in AI and robotics (or any other field where a spiking non-linear approach makes sense). If you have a serious proposal, don't hesitate to contact me [research AT tusharchauhan DOT com ].
38. Hierarchical anti-Hebbian network model for the formation of spatial cells in 3D (Soman et al 2019)
This model shows how spatial representations in 3D space could emerge using unsupervised neural networks. The model is hierarchical, with multiple layers, each serving a specific function. The architecture is a general one: after training it gives rise to different kinds of spatial representations.
39. Hippocampal context-dependent retrieval (Hasselmo and Eichenbaum 2005)
"... The model simulates the context-sensitive firing properties of hippocampal neurons including trial-specific firing during spatial alternation and trial by trial changes in theta phase precession on a linear track. ..." See the paper for more details.
40. Hotspots of dendritic spine turnover facilitates new spines and NN sparsity (Frank et al 2018)
Model for the following publication: Adam C. Frank, Shan Huang, Miou Zhou, Amos Gdalyahu, George Kastellakis, Panayiota Poirazi, Tawnie K. Silva, Ximiao Wen, Joshua T. Trachtenberg, and Alcino J. Silva, "Hotspots of Dendritic Spine Turnover Facilitate Learning-related Clustered Spine Addition and Network Sparsity"
41. Human Attentional Networks: A Connectionist Model (Wang and Fan 2007)
"... We describe a connectionist model of human attentional networks to explore the possible interplays among the networks from a computational perspective. This model is developed in the framework of leabra (local, error-driven, and associative, biologically realistic algorithm) and simultaneously involves these attentional networks connected in a biologically inspired way. ... We evaluate the model by simulating the empirical data collected on normal human subjects using the Attentional Network Test (ANT). The simulation results fit the experimental data well. In addition, we show that the same model, with a single parameter change that affects executive control, is able to simulate the empirical data collected from patients with schizophrenia. This model represents a plausible connectionist explanation for the functional structure and interaction of human attentional networks."
42. Input strength and time-varying oscillation peak frequency (Cohen MX 2014)
The purpose of this paper is to argue that a single neural functional principle—temporal fluctuations in oscillation peak frequency (“frequency sliding”)—can be used as a common analysis approach to bridge multiple scales within neuroscience. The code provided here recreates the network models used to demonstrate changes in peak oscillation frequency as a function of static and time-varying input strength, and also shows how correlated frequency sliding can be used to identify functional connectivity between two networks.
43. Interplay between somatic and dendritic inhibition promotes place fields (Pedrosa & Clopath 2020)
Hippocampal pyramidal neurons are thought to encode spatial information. A subset of these cells, named place cells, are active only when the animal traverses a specific region within the environment. Although vastly studied experimentally, the development and stabilization of place fields are not fully understood. Here, we propose a mechanistic model of place cell formation in the hippocampal CA1 region. Using our model, we reproduce place field dynamics observed experimentally and provide a mechanistic explanation for the stabilization of place fields. Finally, our model provides specific predictions on protocols to shift place field location.
44. Irregular oscillations produced by cyclic recurrent inhibition (Friesen, Friesen 1994)
Model of recurrent cyclic inhibition as described on p.119 of Friesen and Friesen (1994), which was slightly modified from Szekely's model (1965) of a network for producing alternating limb movements.
45. L5 PFC microcircuit used to study persistent activity (Papoutsi et al. 2014, 2013)
Using a heavily constrained biophysical model of a L5 PFC microcircuit we investigate the mechanisms that underlie persistent activity emergence (ON) and termination (OFF) and search for the minimum network size required for expressing these states within physiological regimes.
46. MDD: the role of glutamate dysfunction on Cingulo-Frontal NN dynamics (Ramirez-Mahaluf et al 2017)
" ...Currently, no mechanistic framework describes how network dynamics, glutamate, and serotonin interact to explain MDD symptoms and treatments. Here, we built a biophysical computational model of 2 areas (vACC and dlPFC) that can switch between emotional and cognitive processing. Major Depression Disease (MDD) networks were simulated by slowing glutamate decay in vACC and demonstrated sustained vACC activation. ..."
47. Mechanisms for stable, robust, and adaptive development of orientation maps (Stevens et al. 2013)
GCAL (Gain Control, Adaptation, Laterally connected). Simple but robust single-population V1 orientation map model.
48. Microsaccades and synchrony coding in the retina (Masquelier et al. 2016)
We show that microsaccades (MS) enable efficient synchrony-based coding among the primate retinal ganglion cells (RGC). We find that each MS causes certain RGCs to fire synchronously, namely those whose receptive fields contain contrast edges after the MS. The emitted synchronous spike volley thus rapidly transmits the most salient edges of the stimulus. We demonstrate that the readout could be done rapidly by simple coincidence-detector neurons, and that the required connectivity could emerge spontaneously with spike timing-dependent plasticity.
49. Modeling brain dynamics in brain tumor patients using the Virtual Brain (Aerts et al 2018)
"Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. ... we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong–Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance."
50. Modeling local field potentials (Bedard et al. 2004)
This demo simulates a model of local field potentials (LFP) with variable resistivity. This model reproduces the low-pass frequency filtering properties of extracellular potentials. The model considers inhomogeneous spatial profiles of conductivity and permittivity, which result from the multiple media (fluids, membranes, vessels, ...) composing the extracellular space around neurons. Including non-constant profiles of conductivity enables the model to display frequency filtering properties, i.e., slow events such as EPSPs/IPSPs are less attenuated than fast events such as action potentials. The demo simulates Fig 6 of the paper.
51. Modelling gain modulation in stability-optimised circuits (Stroud et al 2018)
We supply Matlab code to create 'stability-optimised circuits'. These networks can give rise to rich neural activity transients that resemble primary motor cortex recordings in monkeys during reaching. We also supply code that allows one to learn new network outputs by changing the input-output gain of neurons in a stability-optimised network. Our code recreates the main results of Figure 1 in our related publication.
52. Motion Clouds: Synthesis of random textures for motion perception (Leon et al. 2012)
We describe a framework to generate random texture movies with controlled information content. In particular, these stimuli can be made closer to naturalistic textures than usual stimuli such as gratings and random-dot kinetograms. We use a simplified parametric definition of these "Motion Clouds" around the most prevalent feature axes (mean and bandwidth): direction, spatial frequency, and orientation.
53. Motor Cortex Connectivity & Event Related Desynchronization Based on Neural Mass Models (Ursino 21)
Knowledge of motor cortex connectivity is of great value in cognitive neuroscience, in order to provide a better understanding of motor organization and its alterations in pathological conditions. Traditional methods provide connectivity estimations which may vary depending on the task. This work aims to propose a new method for motor connectivity assessment based on the hypothesis of a task-independent connectivity network, assuming nonlinear behavior. The model considers six cortical regions of interest (ROIs) involved in hand movement. The dynamics of each region are simulated using a neural mass model, which reproduces the oscillatory activity through the interaction among four neural populations. Parameters of the model have been assigned to simulate both power spectral densities and coherences of a patient with left-hemisphere stroke during three conditions: rest, movement of the affected hand, and movement of the unaffected hand. The presented model can simulate the three conditions using a single set of connectivity parameters, assuming that only inputs to the ROIs change from one condition to the other. The proposed procedure represents an innovative method to assess a brain circuit, which does not rely on a task-dependent connectivity network, and allows brain rhythms and desynchronization to be assessed on a quantitative basis.
54. Multi-area layer-resolved spiking network model of resting-state dynamics in macaque visual cortex
See for any updates.
55. Multiscale modeling of epileptic seizures (Naze et al. 2015)
" ... In the context of epilepsy, the functional properties of the network at the source of a seizure are disrupted by a possibly large set of factors at the cellular and molecular levels. It is therefore needed to sacrifice some biological accuracy to model seizure dynamics in favor of macroscopic realizations. Here, we present a neuronal network model that convenes both neuronal and network representations with the goal to describe brain dynamics involved in the development of epilepsy. We compare our modeling results with animal in vivo recordings to validate our approach in the context of seizures. ..."
56. Neural Mass Model for relationship between Brain Rhythms + Functional Connectivity (Ricci et al '21)
The Neural Mass Model (NMM) generates biologically reliable mean field potentials of four interconnected regions of interest (ROIs) of the cortex, each simulating a different brain rhythm (in theta, alpha, beta and gamma ranges). These neuroelectrical signals originate from the assumption that ROIs influence each other via excitatory or bi-synaptic inhibitory connections. Besides receiving long-range synapses from other ROIs, each one receives an external input and superimposed Gaussian white noise. We used the NMM to simulate different connectivity networks of four ROIs, by varying both the synaptic strengths and the inputs. The purpose of this study is to investigate how the transmission of brain rhythms behaves under linear and nonlinear conditions. To this aim, we investigated the performance of eight Functional Connectivity (FC) estimators (Correlation, Delayed Correlation, Coherence, Lagged Coherence, Temporal Granger Causality, Spectral Granger Causality, Phase Synchronization and Transfer Entropy) in detecting the connectivity network changes. Results suggest that when a ROI works in the linear region, its capacity to transmit its rhythm increases, while when it saturates, the oscillatory activity becomes strongly affected by other ROIs. Software included here allows the simulation of mean field potentials of four interconnected ROIs, their visualization, both in time and frequency domains, and the estimation of the related FC with eight different methods (for Transfer Entropy the Trentool package is needed).
57. Neural model of two-interval discrimination (Machens et al 2005)
Two-interval discrimination involves comparison of two stimuli that are presented at different times. It has three phases: loading, in which the first stimulus is perceived and stored in working memory; maintenance of working memory; decision making, in which the second stimulus is perceived and compared with the first. In behaving monkeys, each phase is associated with characteristic firing activity of neurons in the prefrontal cortex. This model implements both working memory and decision making with a mutual inhibition network that reproduces all three phases of two-interval discrimination. Machens, C.K., Romo, R., and Brody, C.D. Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science 307:1121-1124, 2005.
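The mutual-inhibition dynamics at the heart of this model can be sketched as a two-unit rate network (a minimal illustration; the time constant and inhibition strength below are assumed values, not those of the published model):

```python
import numpy as np

def run_mutual_inhibition(s1, s2, T=2.0, dt=1e-3, tau=0.02, w=1.2):
    """Two rate units with rectified-linear activation inhibit each other;
    with w > 1 the unit receiving the stronger input wins the competition."""
    r = np.zeros(2)
    inputs = np.array([s1, s2], dtype=float)
    for _ in range(int(T / dt)):
        drive = inputs - w * r[::-1]             # cross-inhibition
        r += dt / tau * (-r + np.maximum(drive, 0.0))
    return r

r = run_mutual_inhibition(1.0, 0.8)
# the unit with the larger input ends high; the other is suppressed toward 0
```

Because the cross-inhibition exceeds unity, the symmetric state is unstable and the network settles into a winner-take-all decision, the operation used in the comparison phase.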
58. Neuronal population models of intracerebral EEG (Wendling et al. 2005)
"... In this study, the authors relate electrophysiologic patterns typically observed during the transition from interictal to ictal activity in human mesial temporal lobe epilepsy (MTLE) to mechanisms (at a neuronal population level) involved in seizure generation through a computational model of EEG activity. Intracerebral EEG signals recorded from hippocampus in five patients with MTLE during four periods (during interictal activity, just before seizure onset, during seizure onset, and during ictal activity) were used to identify the three main parameters of a model of hippocampus EEG activity (related to excitation, slow dendritic inhibition and fast somatic inhibition). ... . Results demonstrated that the model generates very realistic signals for automatically identified parameters. They also showed that the transition from interictal to ictal activity cannot be simply explained by an increase in excitation and a decrease in inhibition but rather by time-varying ensemble interactions between pyramidal cells and local interneurons projecting to either their dendritic or perisomatic region (with slow and fast GABAA kinetics). Particularly, during preonset activity, an increasing dendritic GABAergic inhibition compensates a gradually increasing excitation up to a brutal drop at seizure onset when faster oscillations (beta and low gamma band, 15 to 40 Hz) are observed. ... These findings obtained from model identification in human temporal lobe epilepsy are in agreement with some results obtained experimentally, either on animal models of epilepsy or on the human epileptic tissue."
59. NN for proto-object based contour integration and figure-ground segregation (Hu & Niebur 2017)
"Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects (“proto-objects”) based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. ..."
60. Odor supported place cell model and goal navigation in rodents (Kulvicius et al. 2008)
" ... Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. ..."
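The Q-learning component of the navigation mechanism can be illustrated with a minimal tabular example on a 1-D track (the track length, reward placement, and learning parameters are illustrative assumptions, not those of the paper):

```python
import numpy as np

def q_learning_track(n_states=6, episodes=300, alpha=0.5, gamma=0.9,
                     eps=0.1, seed=0):
    """Tabular Q-learning: start at state 0, reward 1.0 at the rightmost
    state; actions are 0 (move left) and 1 (move right)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(500):                     # per-episode step cap
            if s == n_states - 1:
                break                            # goal reached
            explore = rng.random() < eps or Q[s, 0] == Q[s, 1]
            a = int(rng.integers(2)) if explore else int(Q[s].argmax())
            s2 = max(s - 1, 0) if a == 0 else s + 1
            reward = 1.0 if s2 == n_states - 1 else 0.0
            Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

Q = q_learning_track()
# the greedy policy moves right toward the goal from every state
```

In the paper's setting, the states would be derived from (odor-supported) place cell activity rather than a bare index.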
61. Optimal Localist and Distributed Coding Through STDP (Masquelier & Kheradpisheh 2018)
We show how a LIF neuron equipped with STDP can become optimally selective, in an unsupervised manner, to one or several repeating spike patterns, even when those patterns are hidden in Poisson spike trains.
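The pair-based STDP rule underlying this kind of selectivity can be sketched as follows (the amplitudes and time constants are typical textbook values, not necessarily those used in the paper):

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair, delta_t = t_post - t_pre:
    exponential potentiation when pre precedes post, depression otherwise."""
    dt = np.asarray(delta_t_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dw = stdp_dw([10.0, -10.0])
# pre 10 ms before post -> dw > 0 (LTP); post 10 ms before pre -> dw < 0 (LTD)
```

Repeated over many spike pairs, this asymmetry strengthens exactly the afferents that reliably fire just before the postsynaptic neuron, which is what drives the unsupervised pattern selectivity described above.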
62. Oscillation and coding in a proposed NN model of insect olfaction (Horcholle-Bossavit et al. 2007)
"For the analysis of coding mechanisms in the insect olfactory system, a fully connected network of synchronously updated McCulloch and Pitts neurons (MC-P type) was (previously) developed. ... Considering the update time as an intrinsic clock, this “Dynamic Neural Filter” (DNF), which maps regions of input space into spatio-temporal sequences of neuronal activity, is able to produce exact binary codes extracted from the synchronized activities recorded at the level of projection neurons (PN) in the locust antennal lobe (AL) in response to different odors ... We find synaptic matrices which lead to both the emergence of robust oscillations and spatio-temporal patterns, using a formal criterion, based on a Normalized Euclidean Distance (NED), in order to measure the use of the temporal dimension as a coding dimension by the DNF. Similarly to biological PN, the activity of excitatory neurons in the model can be both phase-locked to different cycles of oscillations which (is reminiscent of the) local field potential (LFP), and nevertheless exhibit dynamic behavior complex enough to be the basis of spatio-temporal codes."
63. Parallel cortical inhibition processing enables context-dependent behavior (Kuchibhotla et al. 2016)
Physical features of sensory stimuli are fixed, but sensory perception is context dependent. The precise mechanisms that govern contextual modulation remain unknown. Here, we trained mice to switch between two contexts: passively listening to pure tones and performing a recognition task for the same stimuli. Two-photon imaging showed that many excitatory neurons in auditory cortex were suppressed during behavior, while some cells became more active. Whole-cell recordings showed that excitatory inputs were affected only modestly by context, but inhibition was more sensitive, with PV+, SOM+, and VIP+ interneurons balancing inhibition and disinhibition within the network. Cholinergic modulation was involved in context switching, with cholinergic axons increasing activity during behavior and directly depolarizing inhibitory cells. Network modeling captured these findings, but only when modulation coincidentally drove all three interneuron subtypes, ruling out either inhibition or disinhibition alone as the sole mechanism for active engagement. Parallel processing of cholinergic modulation by cortical interneurons therefore enables context-dependent behavior.
64. Phase oscillator models for lamprey central pattern generators (Varkonyi et al. 2008)
In our paper, Varkonyi et al. 2008, we derive phase oscillator models for the lamprey central pattern generator from two biophysically based segmental models. We study intersegmental coordination and show how these models can provide stable intersegmental phase lags observed in real animals.
65. Place and grid cells in a loop (Rennó-Costa & Tort 2017)
This model implements a loop circuit between place and grid cells and was used to explain place cell remapping and grid cell realignment. The grid cell model is a continuous attractor network; the place cells form a recurrent attractor network. Rate models are implemented with E%-MAX winner-take-all network dynamics, with a gamma-cycle time-step.
66. Potjans-Diesmann cortical microcircuit model in NetPyNE (Romaro et al 2021)
The Potjans-Diesmann cortical microcircuit model is a widely used model originally implemented in NEST. Here, we re-implemented the model using NetPyNE, a high-level Python interface to the NEURON simulator, and reproduced the findings of the original publication. We also implemented a method for rescaling the network size which preserves first and second order statistics, building on existing work on network theory. The new implementation enables using more detailed neuron models with multicompartment morphologies and multiple biophysically realistic channels. This opens the model to new research, including the study of dendritic processing, the influence of individual channel parameters, and generally multiscale interactions in the network. The rescaling method provides flexibility to increase or decrease the network size if required when running these more realistic simulations. Finally, NetPyNE facilitates modifying or extending the model using its declarative language; optimizing model parameters; running efficient large-scale parallelized simulations; and analyzing the model through built-in methods, including local field potential calculation and information flow measures.
67. Prefrontal cortical mechanisms for goal-directed behavior (Hasselmo 2005)
".. a model of prefrontal cortex function emphasizing the influence of goal-related activity on the choice of the next motor output. ... Different neocortical minicolumns represent distinct sensory input states and distinct motor output actions. The dynamics of each minicolumn include separate phases of encoding and retrieval. During encoding, strengthening of excitatory connections forms forward and reverse associations between each state, the following action, and a subsequent state, which may include reward. During retrieval, activity spreads from reward states throughout the network. The interaction of this spreading activity with a specific input state directs selection of the next appropriate action. Simulations demonstrate how these mechanisms can guide performance in a range of goal directed tasks, and provide a functional framework for some of the neuronal responses previously observed in the medial prefrontal cortex during performance of spatial memory tasks in rats."
68. Roles of subthalamic nucleus and DBS in reinforcement conflict-based decision making (Frank 2006)
Deep brain stimulation (DBS) of the subthalamic nucleus dramatically improves the motor symptoms of Parkinson's disease, but causes cognitive side effects such as impulsivity. This model from Frank (2006) simulates the role of the subthalamic nucleus (STN) within the basal ganglia circuitry in decision making. The STN dynamically modulates network decision thresholds in proportion to decision conflict. The STN "hold your horses" signal adaptively allows the system more time to settle on the best choice when multiple options are valid. The model also replicates effects in Parkinson's patients on and off DBS in experiments designed to test the model (Frank et al, 2007).
69. Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. ... This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. ... Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. See the paper for further details.
70. SHOT-CA3, RO-CA1 Training, & Simulation CODE in models of hippocampal replay (Nicola & Clopath 2019)
In this code, we model the interaction between the medial septum and hippocampus as a FORCE trained, dual oscillator model. One oscillator corresponds to the medial septum and serves as an input, while a FORCE trained network of LIF neurons acts as a model of the CA3. We refer to this entire model as the Septal Hippocampal Oscillator Theta (or SHOT) network. The code contained in this upload allows a user to train a SHOT network, train a population of reversion interneurons, and simulate the SHOT-CA3 and RO-CA1 networks after training. The code scripts are labeled to correspond to the figure from the manuscript.
71. Simulation studies on mechanisms of levetiracetam-mediated inhibition of IK(DR) (Huang et al. 2009)
Levetiracetam (LEV) is an S-enantiomer pyrrolidone derivative with established antiepileptic efficacy in generalized epilepsy and partial epilepsy. However, its effects on ion currents and membrane potential remain largely unclear. In this study, we investigated the effect of LEV on differentiated NG108-15 neurons. ... Simulation studies in a modified Hodgkin-Huxley neuron and network revealed that the reduction of slowly inactivating IK(DR) resulted in membrane depolarization accompanied by termination of the firing of action potentials in a stochastic manner. Therefore, the inhibitory effects on slowly inactivating IK(DR) (Kv3.1-encoded current) may constitute one of the underlying mechanisms through which LEV affects neuronal activity in vivo.
72. Single compartment Dorsal Lateral Medium Spiny Neuron w/ NMDA and AMPA (Biddell and Johnson 2013)
A biophysical single compartment model of the dorsal lateral striatum medium spiny neuron is presented here. The model is an implementation and adaptation of a previously described model (Mahon et al. 2002). The model has been adapted to include NMDA and AMPA receptor models that have been fit to dorsal lateral striatal neurons. The receptor models allow for excitation by other neuron models.
73. Single neuron properties shape chaos and signal transmission in random NNs (Muscinelli et al 2019)
"While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of “resonant chaos”, characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks."
74. Sparsely connected networks of spiking neurons (Brunel 2000)
The dynamics of networks of sparsely connected excitatory and inhibitory integrate-and-fire neurons are studied analytically (and with simulations). The analysis reveals a rich repertoire of states, including synchronous states in which neurons fire regularly; asynchronous states with stationary global activity and very irregular individual cell activity; and states in which the global activity oscillates but individual cells fire irregularly, typically at rates lower than the global oscillation frequency. See the paper for further details.
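A heavily scaled-down sketch in the spirit of this network, current-based LIF neurons with random sparse connectivity and inhibition scaled by a factor g, is shown below (all sizes and parameters are assumptions for illustration, far smaller than the published analysis):

```python
import numpy as np

def simulate_sparse_net(NE=400, NI=100, eps=0.1, g=5.0, J=0.2,
                        nu_ext=6.0, T=200.0, dt=0.1, seed=1):
    """Sparsely connected E/I network of current-based LIF neurons.
    g scales inhibitory weights relative to excitatory ones; an external
    Poisson drive (nu_ext spikes/ms per neuron) keeps the network active.
    Returns the mean firing rate in Hz."""
    rng = np.random.default_rng(seed)
    N = NE + NI
    tau, theta, v_reset, t_ref = 20.0, 20.0, 10.0, 2.0   # ms / mV
    W = (rng.random((N, N)) < eps).astype(float) * J     # sparse weights
    np.fill_diagonal(W, 0.0)
    W[:, NE:] *= -g                                      # inhibitory columns
    v = rng.uniform(v_reset, theta, N)
    refr = np.zeros(N)
    n_spikes = 0
    for _ in range(int(T / dt)):
        spiking = (v >= theta) & (refr <= 0.0)
        n_spikes += int(spiking.sum())
        v[spiking] = v_reset
        refr[spiking] = t_ref
        kick = W @ spiking + J * rng.poisson(nu_ext * dt, N)
        active = refr <= 0.0                             # not refractory
        v[active] += dt / tau * (-v[active]) + kick[active]
        refr -= dt
    return n_spikes / N / (T / 1000.0)

rate_hz = simulate_sparse_net()
```

With inhibition dominating (g * CI > CE for these parameters), the network settles into the irregular asynchronous firing regime characterized in the paper.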
75. Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
Development of spiking grid cells and place cells in the entorhinal-hippocampal system to represent positions in large spaces.
76. Supervised learning in spiking neural networks with FORCE training (Nicola & Clopath 2017)
The code contained in the zip file runs FORCE training for various examples from the paper: Figure 2 (oscillators and chaotic attractor); Figure 3 (Ode to Joy); Figure 4 (songbird example); Figure 5 (movie example); Supplementary Figures 10-12 (classifier); the supplementary Ode to Joy example; Supplementary Figure 2 (oscillator panel); and Supplementary Figure 17 (long Ode to Joy). Note that due to file size limitations, the supervisors for Figures 4/5 are not included. See Nicola, W., & Clopath, C. (2016). Supervised Learning in Spiking Neural Networks with FORCE Training. arXiv preprint arXiv:1609.02545 for further details.
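The core of FORCE training, a recursive-least-squares (RLS) update of a linear readout that is fed back into the network, can be sketched with a rate-network toy (the sine-wave target, network size, and parameters are assumptions for illustration, not the paper's spiking implementation):

```python
import numpy as np

def force_train_sine(N=300, steps=2000, dt=0.1, g=1.5, seed=0):
    """FORCE training: the readout z = w.r is fed back into a chaotic
    rate network while w is updated online by RLS to match a target."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # chaotic recurrent weights
    w_fb = rng.uniform(-1.0, 1.0, N)                   # static feedback weights
    w = np.zeros(N)                                    # trained readout
    P = np.eye(N)                                      # RLS inverse correlation
    x = 0.5 * rng.standard_normal(N)
    z = 0.0
    errs = []
    for step in range(steps):
        target = np.sin(2.0 * np.pi * step * dt / 5.0)  # sine, period 5
        x += dt * (-x + J @ np.tanh(x) + w_fb * z)
        r = np.tanh(x)
        z = w @ r
        err = z - target
        errs.append(abs(err))
        Pr = P @ r                                      # RLS update of w
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= err * k
    return float(np.mean(errs[-200:]))                  # late training error

late_err = force_train_sine()
# after training, the readout tracks the sine target closely
```

The repository's spiking examples apply the same RLS machinery to filtered spike trains of LIF neurons rather than rate units.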
77. Synaptic gating at axonal branches, and sharp-wave ripples with replay (Vladimirov et al. 2013)
A computational model of in vivo sharp-wave ripples with place cell replay. Excitatory post-synaptic potentials at dendrites gate antidromic spikes arriving from the axonal collateral, and thus determine when the soma and the main axon fire. The model allows synchronous replay of pyramidal cells during a sharp-wave ripple event, and the replay is possible in both forward and reverse directions.
78. Time-warp-invariant neuronal processing (Gutig & Sompolinsky 2009)
" ... Here, we report that time-warp-invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp-invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. ..."
79. Towards a biologically plausible model of LGN-V1 pathways (Lian et al 2019)
"Increasing evidence supports the hypothesis that the visual system employs a sparse code to represent visual stimuli, where information is encoded in an efficient way by a small population of cells that respond to sensory input at a given time. This includes simple cells in primary visual cortex (V1), which are defined by their linear spatial integration of visual stimuli. Various models of sparse coding have been proposed to explain physiological phenomena observed in simple cells. However, these models have usually made the simplifying assumption that inputs to simple cells already incorporate linear spatial summation. This overlooks the fact that these inputs are known to have strong non-linearities such as the separation of ON and OFF pathways, or separation of excitatory and inhibitory neurons. Consequently these models ignore a range of important experimental phenomena that are related to the emergence of linear spatial summation from non-linear inputs, such as segregation of ON and OFF sub-regions of simple cell receptive fields, the push-pull effect of excitation and inhibition, and phase-reversed cortico-thalamic feedback. Here, we demonstrate that a two-layer model of the visual pathway from the lateral geniculate nucleus to V1 that incorporates these biological constraints on the neural circuits and is based on sparse coding can account for the emergence of these experimental phenomena, diverse shapes of receptive fields and contrast invariance of orientation tuning of simple cells when the model is trained on natural images. The model suggests that sparse coding can be implemented by the V1 simple cells using neural circuits with a simple biologically plausible architecture."
80. Unsupervised learning of an efficient short-term memory network (Vertechi, Brendel & Machens 2014)
Learning in recurrent neural networks has been a topic fraught with difficulties. We here report substantial progress in the unsupervised learning of recurrent networks that can keep track of an input signal. Specifically, we show how these networks can learn to efficiently represent their present and past inputs, based on local learning rules only.
81. V1 and AL spiking neural network for visual contrast response in mouse (Meijer et al. 2020)
This code contains the computational model included in Meijer et al., Cell Reports 2020, which reproduces some of the main experimental findings reported, most notably the higher sensory response of secondary visual areas compared to that of primary visual areas at moderate visual contrast levels in mice. The model is based on a two-area spiking neural network with embedded short-term synaptic plasticity mechanisms.
