Models that contain the Model Concept: Memory

(The mechanisms by which neural tissue stores or recalls an impression of past occurrences.)
1.  3D model of the olfactory bulb (Migliore et al. 2014)
This entry contains a link to a full HD version of movie 1 and the NEURON code of the paper: "Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb" by M Migliore, F Cavarretta, ML Hines, and GM Shepherd.
2.  3D olfactory bulb: operators (Migliore et al. 2015)
"... Using a 3D model of mitral and granule cell interactions supported by experimental findings, combined with a matrix-based representation of glomerular operations, we identify the mechanisms for forming one or more glomerular units in response to a given odor, how and to what extent the glomerular units interfere or interact with each other during learning, their computational role within the olfactory bulb microcircuit, and how their actions can be formalized into a theoretical framework in which the olfactory bulb can be considered to contain "odor operators" unique to each individual. ..."
3.  A 1000 cell network model for Lateral Amygdala (Kim et al. 2013)
1000 Cell Lateral Amygdala model for investigation of plasticity and memory storage during Pavlovian Conditioning.
4.  A computational model of systems memory consolidation and reconsolidation (Helfer & Shultz 2019)
A neural-network framework for modeling systems memory consolidation and reconsolidation.
5.  A large-scale model of the functioning brain (spaun) (Eliasmith et al. 2012)
" ... In this work, we present a 2.5-million-neuron model of the brain (called “Spaun”) that bridges this gap (between neural activity and biological function) by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks."
6.  A model of antennal lobe of bee (Chen JY et al. 2015)
" ... Here we use calcium imaging to reveal how responses across antennal lobe projection neurons change after association of an input odor with appetitive reinforcement. After appetitive conditioning to 1-hexanol, the representation of an odor mixture containing 1-hexanol becomes more similar to this odor and less similar to the background odor acetophenone. We then apply computational modeling to investigate how changes in synaptic connectivity can account for the observed plasticity. Our study suggests that experience-dependent modulation of inhibitory interactions in the antennal lobe aids perception of salient odor components mixed with behaviorally irrelevant background odors."
7.  A Model of Selection between Stimulus and Place Strategy in a Hawkmoth (Balkenius et al. 2004)
"In behavioral experiments, the hawkmoth Deilephila elpenor can learn both the color and the position of artificial flowers. ... We show how a computational model can reproduce the behavior in the experimental situation. The aim of the model is to investigate which learning and behavior selection strategies are necessary to reproduce the behavior observed in the experiment. The model is based on behavioral data and the sensitivities of the moth photoreceptors. The model consists of a number of interacting behavior systems that are triggered by specific stimuli and control specific behaviors. The ability of the moth to learn the colors of different flowers and the adaptive processes involved in the choice between stimulus-approach and place-approach strategies are reproduced very accurately by the model. The model has implications both for further studies of the ecology of the animal and for robotic systems."
8.  A neurocomputational model of classical conditioning phenomena (Moustafa et al. 2009)
"... Here, we show that the same information-processing function proposed for the hippocampal region in the Gluck and Myers (1993) model can also be implemented in a network without using the backpropagation algorithm. Instead, our newer instantiation of the theory uses only (a) Hebbian learning methods which match more closely with synaptic and associative learning mechanisms ascribed to the hippocampal region and (b) a more plausible representation of input stimuli. We demonstrate here that this new more biologically plausible model is able to simulate various behavioral effects, including latent inhibition, acquired equivalence, sensory preconditioning, negative patterning, and context shift effects. ..."
9.  A reinforcement learning example (Sutton and Barto 1998)
This MATLAB script demonstrates an example of reinforcement learning functions guiding the movements of an agent (a black square) in a gridworld environment. See the comments at the top of the MATLAB script, and the book, for more details.
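The entry's demonstration is a MATLAB script; as a rough illustration of the kind of update such a demo performs (not the original code), here is a minimal Python sketch of tabular Q-learning in a gridworld. The grid size, reward, and parameters are illustrative.

```python
# Minimal tabular Q-learning sketch in a gridworld (illustrative parameters,
# not those of the original MATLAB script).
import numpy as np

n_rows, n_cols, n_actions = 5, 5, 4          # 4 actions: up, down, left, right
goal = (4, 4)
Q = np.zeros((n_rows, n_cols, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1        # learning rate, discount, exploration
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action):
    r, c = state
    dr, dc = moves[action]
    nr = min(max(r + dr, 0), n_rows - 1)
    nc = min(max(c + dc, 0), n_cols - 1)
    reward = 1.0 if (nr, nc) == goal else 0.0
    return (nr, nc), reward

rng = np.random.default_rng(0)
for episode in range(500):
    state = (0, 0)
    while state != goal:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # temporal-difference update of the state-action value
        td_target = reward + gamma * np.max(Q[next_state])
        Q[state][action] += alpha * (td_target - Q[state][action])
        state = next_state
```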
10.  A simple model of neuromodulatory state-dependent synaptic plasticity (Pedrosa and Clopath, 2016)
The model is used to illustrate the role of neuromodulators in cortical plasticity. The model consists of a feedforward network with 1 postsynaptic neuron with plastic synaptic weights. These weights are updated through a spike-timing-dependent plasticity rule. "First, we explore the ability of neuromodulators to gate plasticity by reshaping the learning window for spike-timing-dependent plasticity. Using a simple computational model, we implement four different learning rules and demonstrate their effects on receptive field plasticity. We then compare the neuromodulatory effects of upregulating learning rate versus the effects of upregulating neuronal activity. "
11.  A spiking neural network model of model-free reinforcement learning (Nakano et al 2015)
"Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. ... In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL (partially observable reinforcement learning) problems with high-dimensional observations. ... The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. "
12.  Acetylcholine-modulated plasticity in reward-driven navigation (Zannone et al 2018)
"Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike- Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules."
13.  Adaptation of Short-Term Plasticity parameters (Esposito et al. 2015)
"The anatomical connectivity among neurons has been experimentally found to be largely non-random across brain areas. This means that certain connectivity motifs occur at a higher frequency than would be expected by chance. Of particular interest, short-term synaptic plasticity properties were found to colocalize with specific motifs: an over-expression of bidirectional motifs has been found in neuronal pairs where short-term facilitation dominates synaptic transmission among the neurons, whereas an over-expression of unidirectional motifs has been observed in neuronal pairs where short-term depression dominates. In previous work we found that, given a network with fixed short-term properties, the interaction between short- and long-term plasticity of synaptic transmission is sufficient for the emergence of specific motifs. Here, we introduce an error-driven learning mechanism for short-term plasticity that may explain how such observed correspondences develop from randomly initialized dynamic synapses. ..."
14.  Adaptive robotic control driven by a versatile spiking cerebellar network (Casellato et al. 2014)
" ... We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. ..."
15.  Alleviating catastrophic forgetting: context gating and synaptic stabilization (Masse et al 2018)
"Artificial neural networks can suffer from catastrophic forgetting, in which learning a new task causes the network to forget how to perform previous tasks. While previous studies have proposed various methods that can alleviate forgetting over small numbers (<10) of tasks, it is uncertain whether they can prevent forgetting across larger numbers of tasks. In this study, we propose a neuroscience-inspired scheme, called “context-dependent gating,” in which mostly nonoverlapping sets of units are active for any one task. Importantly, context-dependent gating has a straightforward implementation, requires little extra computational overhead, and when combined with previous methods to stabilize connection weights, can allow networks to maintain high performance across large numbers of sequentially presented tasks."
16.  Alternative time representation in dopamine models (Rivest et al. 2009)
Combines a long short-term memory (LSTM) model of the cortex with a temporal difference (TD) learning model of the basal ganglia. Code to run simulations similar to the published data: Rivest, F., Kalaska, J.F., Bengio, Y. (2009) Alternative time representation in dopamine models. Journal of Computational Neuroscience. See http://dx.doi.org/10.1007/s10827-009-0191-1 for details.
17.  An electrophysiological model of GABAergic double bouquet cells (Chrysanthidis et al. 2019)
We present an electrophysiological model of double bouquet cells (DBCs) and integrate them into an established cortical columnar microcircuit model that implements a BCPNN (Bayesian Confidence Propagation Neural Network) learning rule. The proposed architecture effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. The introduction of DBCs improves the biological plausibility of our model, without affecting the model's spiking activity, basic operation, and learning abilities.
18.  Behavioral time scale synaptic plasticity underlies CA1 place fields (Bittner et al. 2017)
" ... Place fields could be produced in vivo in a single trial by potentiation of input that arrived seconds before and after complex spiking.The potentiated synaptic input was not initially coincident with action potentials or depolarization.This rule, named behavioral timescale synaptic plasticity, abruptly modifies inputs that were neither causal nor close in time to postsynaptic activation. ...", " ... To determine if the above plasticity rule could be observed under more realistic model conditions, we constructed and optimized a biophysically detailed model and attempted to fully account for the experimental data. ... "
19.  CA1 pyramidal neurons: binding properties and the magical number 7 (Migliore et al. 2008)
NEURON files from the paper: Single neuron binding properties and the magical number 7, by M. Migliore, G. Novara, D. Tegolo, Hippocampus, in press (2008). In an extensive series of simulations with realistic morphologies and active properties, we demonstrate how n radial (oblique) dendrites of these neurons may be used to bind n inputs to generate an output signal. The results suggest a possible neural code as the most effective n-ple of dendrites that can be used for short-term memory recollection of persons, objects, or places. Our analysis predicts a straightforward physiological explanation for the observed puzzling limit of about 7 short-term memory items that can be stored by humans.
20.  Calcium response prediction in the striatal spines depending on input timing (Nakano et al. 2013)
We construct an electric compartment model of the striatal medium spiny neuron with a realistic morphology and predict the calcium responses in the synaptic spines with variable timings of the glutamatergic and dopaminergic inputs and the postsynaptic action potentials. The model was validated by reproducing the responses to current inputs and could predict the electric and calcium responses to glutamatergic inputs and back-propagating action potential in the proximal and distal synaptic spines during up and down states.
21.  Cancelling redundant input in ELL pyramidal cells (Bol et al. 2011)
The paper investigates the property of the electrosensory lateral line lobe (ELL) of the brain of weakly electric fish to cancel predictable stimuli. Electroreceptors on the skin encode all signals in their firing activity, but superficial pyramidal (SP) cells in the ELL that receive this feedforward input do not respond to constant sinusoidal signals. This cancellation putatively occurs using a network of feedback delay lines and burst-induced synaptic plasticity between the delay lines and the SP cell that learns to cancel the redundant input. Biologically, the delay lines are parallel fibres from cerebellar-like granule cells in the eminentia granularis posterior. A model of this network (i.e. electroreceptors, SP cells, delay lines and burst-induced plasticity) was constructed to test whether the current knowledge of how the network operates is sufficient to cancel redundant stimuli.
22.  Cerebellar gain and timing control model (Yamazaki & Tanaka 2007)(Yamazaki & Nagao 2012)
This paper proposes a hypothetical computational mechanism for unified gain and timing control in the cerebellum. The hypothesis is justified by computer simulations of a large-scale spiking network model of the cerebellum.
23.  Cerebellar memory consolidation model (Yamazaki et al. 2015)
"Long-term depression (LTD) at parallel fiber-Purkinje cell (PF-PC) synapses is thought to underlie memory formation in cerebellar motor learning. Recent experimental results, however, suggest that multiple plasticity mechanisms in the cerebellar cortex and cerebellar/vestibular nuclei participate in memory formation. To examine this possibility, we formulated a simple model of the cerebellum with a minimal number of components based on its known anatomy and physiology, implementing both LTD and long-term potentiation (LTP) at PF-PC synapses and mossy fiber-vestibular nuclear neuron (MF-VN) synapses. With this model, we conducted a simulation study of the gain adaptation of optokinetic response (OKR) eye movement. Our model reproduced several important aspects of previously reported experimental results in wild-type and cerebellum-related gene-manipulated mice. ..."
24.  Cognitive and motor cortico-basal ganglia interactions during decision making (Guthrie et al 2013)
This is a re-implementation of Guthrie et al 2013 by Topalidou and Rougier 2015. The original study investigated how multiple level action selection could be performed by the basal ganglia.
25.  Computational endophenotypes in addiction (Fiore et al 2018)
"... here we simulated phenotypic variations in addiction symptomology and responses to putative treatments, using both a neural model, based on cortico-striatal circuit dynamics, and an algorithmic model of reinforcement learning. These simulations rely on the widely accepted assumption that both the ventral, model-based, goal-directed system and the dorsal, model-free, habitual system are vulnerable to extra-physiologic dopamine reinforcements triggered by addictive rewards. We found that endophenotypic differences in the balance between the two circuit or control systems resulted in an inverted U-shape in optimal choice behavior. Specifically, greater unbalance led to a higher likelihood of developing addiction and more severe drug-taking behaviors. ..."
26.  Cortex learning models (Weber et al. 2006, Weber and Triesch, 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006), which uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame of reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch, 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
27.  Cortical model with reinforcement learning drives realistic virtual arm (Dura-Bernal et al 2015)
We developed a 3-layer sensorimotor cortical network consisting of 704 spiking model-neurons, including excitatory, fast-spiking and low-threshold spiking interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a virtual musculoskeletal human arm, with realistic anatomical and biomechanical properties, to reach a target. Virtual arm position was used to simultaneously control a robot arm via a network interface.
28.  Cortico-striatal plasticity in medium spiny neurons (Gurney et al 2015)
In the associated paper (Gurney et al, PLoS Biology, 2015) we presented a computational framework that addresses several issues in cortico-striatal plasticity including spike timing, reward timing, dopamine level, and dopamine receptor type. Thus, we derived a complete model of dopamine and spike-timing dependent cortico-striatal plasticity from in vitro data. We then showed this model produces the predicted activity changes necessary for learning and extinction in an operant task. Moreover, we showed the complex dependencies of cortico-striatal plasticity are not only sufficient but necessary for learning and extinction. The model was validated in a wider setting of action selection in basal ganglia, showing how it could account for behavioural data describing extinction, renewal, and reacquisition, and replicate in vitro experimental data on cortico-striatal plasticity. The code supplied here allows reproduction of the proposed process of learning in medium spiny neurons, giving the results of Figure 7 of the paper.
29.  Democratic population decisions result in robust policy-gradient learning (Richmond et al. 2011)
This model demonstrates the use of GPU programming (with CUDA) to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and to investigate its ability to learn a simplified navigation task using a learning rule stemming from Reinforcement Learning, a policy-gradient rule.
30.  Development of modular activity of grid cells (Urdapilleta et al 2017)
This study explores the self-organization of modular activity of grid cells.
31.  Development of orientation-selective simple cell receptive fields (Rishikesh and Venkatesh, 2003)
Implementation of a computational model for the development of simple-cell receptive fields spanning the regimes before and after eye-opening. The before eye-opening period is governed by a correlation-based rule from Miller (Miller, J. Neurosci., 1994), and the post eye-opening period is governed by a self-organizing, experience-dependent dynamics derived in the reference below.
32.  Dynamic dopamine modulation in the basal ganglia: Learning in Parkinson (Frank et al 2004,2005)
See README file for all info on how to run models under different tasks and simulated Parkinson's and medication conditions.
33.  Effects of increasing CREB on storage and recall processes in a CA1 network (Bianchi et al. 2014)
Several recent results suggest that boosting the CREB pathway improves hippocampal-dependent memory in healthy rodents and restores this type of memory in an AD mouse model. However, not much is known about how CREB-dependent neuronal alterations in synaptic strength, excitability and LTP can boost memory formation in the complex architecture of a neuronal network. Using a model of a CA1 microcircuit, we investigate whether hippocampal CA1 pyramidal neuron properties altered by increasing CREB activity may contribute to improve memory storage and recall. With a set of patterns presented to a network, we find that the pattern recall quality under AD-like conditions is significantly better when boosting CREB function with respect to control. The results are robust and consistent upon increasing the synaptic damage expected by AD progression, supporting the idea that the use of CREB-based therapies could provide a new approach to treat AD.
34.  Encoding and retrieval in a model of the hippocampal CA1 microcircuit (Cutsuridis et al. 2009)
This NEURON code implements a small network model (100 pyramidal cells and 4 types of inhibitory interneuron) of storage and recall of patterns in the CA1 region of the mammalian hippocampus. Patterns of PC activity are stored either by a predefined weight matrix generated by Hebbian learning, or by STDP at CA3 Schaffer collateral AMPA synapses.
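As a schematic of the predefined-weight-matrix option described above (illustrative sizes and threshold, not the NEURON network itself), sparse binary patterns can be stored with a clipped Hebbian outer-product rule and recalled from a partial cue:

```python
# Schematic of Hebbian pattern storage and cued recall (illustrative sizes,
# not the NEURON model): store sparse binary patterns with a clipped
# outer-product rule, then recall pattern 0 from a partial cue.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_patterns, sparsity = 100, 5, 0.1
patterns = (rng.random((n_patterns, n_cells)) < sparsity).astype(float)

W = np.zeros((n_cells, n_cells))
for p in patterns:
    W = np.maximum(W, np.outer(p, p))             # clipped Hebbian weight matrix

cue = patterns[0] * (rng.random(n_cells) < 0.5)   # roughly half of pattern 0
drive = W @ cue                                   # summed synaptic drive per cell
recalled = (drive >= 0.5 * cue.sum()).astype(float)  # threshold at half the cue drive
overlap = recalled @ patterns[0] / max(patterns[0].sum(), 1.0)
print(f"fraction of pattern 0 recovered: {overlap:.2f}")
```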
35.  First-Spike-Based Visual Categorization Using Reward-Modulated STDP (Mozafari et al. 2018)
"...Here, for the first time, we show that (Reinforcement Learning) RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. ..."
36.  Fixed point attractor (Hasselmo et al 1995)
"... In the model, cholinergic suppression of synaptic transmission at excitatory feedback synapses is shown to determine the extent to which activity depends upon new features of the afferent input versus components of previously stored representations. ..." See paper for more and details. The MATLAB script demonstrates the model of fixed point attractors mediated by excitatory feedback with subtractive inhibition in a continuous firing rate model.
37.  FRAT: An amygdala-centered model of fear conditioning (Krasne et al. 2011)
Model of Pavlovian fear conditioning and extinction in which learning (due to neuromodulator-controlled LTP on principal cells and inhibitory interneurons) occurs in the amygdala, and contextual representations are learned in the hippocampus. Many properties of fear conditioning are accounted for.
38.  Functional balanced networks with synaptic plasticity (Sadeh et al, 2015)
The model investigates the impact of learning on functional sensory networks. It uses large-scale recurrent networks of excitatory and inhibitory spiking neurons equipped with synaptic plasticity. It explains enhancement of orientation selectivity and emergence of feature-specific connectivity in visual cortex of rodents during development, as reported in experiments.
39.  Hebbian learning in a random network for PFC modeling (Lindsay, et al. 2017)
Creates a random model that replicates the inputs and outputs of PFC cells during a complex task. Then executes Hebbian learning in the model and performs a set of analyses on the output. A portion of this model's analysis requires code from: https://github.com/brian-lau/highdim
40.  Hebbian STDP for modelling the emergence of disparity selectivity (Chauhan et al 2018)
This code shows how Hebbian learning mediated by STDP mechanisms could explain the emergence of disparity selectivity in the early visual system. This upload is a snapshot of the code at the time of acceptance of the paper. For a link to a soon-to-come git repository, consult the author's website: www.tusharchauhan.com/research/. The datasets used in the paper are not provided due to size, but download links and expected directory structures are. The user can (and is strongly encouraged to) experiment with their own dataset. Let me know if you find something interesting! Finally, I am very keen on a redesign/restructure/adaptation of the code to more applied problems in AI and robotics (or any other field where a spiking non-linear approach makes sense). If you have a serious proposal, don't hesitate to contact me [research AT tusharchauhan DOT com].
41.  Hierarchical anti-Hebbian network model for the formation of spatial cells in 3D (Soman et al 2019)
This model shows how spatial representations in 3D space could emerge using unsupervised neural networks. The model is hierarchical, with multiple layers, each serving a specific function. The architecture is general in that, after training, it gives rise to different kinds of spatial representations.
42.  Hippocampal context-dependent retrieval (Hasselmo and Eichenbaum 2005)
"... The model simulates the context-sensitive firing properties of hippocampal neurons including trial-specific firing during spatial alternation and trial by trial changes in theta phase precession on a linear track. ..." See paper for more and details.
43.  Large scale model of the olfactory bulb (Yu et al., 2013)
The readme file currently contains links to the results for all the 72 odors investigated in the paper, and the movie showing the network activity during learning of odor k3-3 (an aliphatic ketone).
44.  Learning spatial transformations through STDP (Davison, Frégnac 2006)
A common problem in tasks involving the integration of spatial information from multiple senses, or in sensorimotor coordination, is that different modalities represent space in different frames of reference. Coordinate transformations between different reference frames are therefore required. One way to achieve this relies on the encoding of spatial information using population codes. The set of network responses to stimuli in different locations (tuning curves) constitute a basis set of functions which can be combined linearly through weighted synaptic connections in order to approximate non-linear transformations of the input variables. The question then arises how the appropriate synaptic connectivity is obtained. This model shows that a network of spiking neurons can learn the coordinate transformation from one frame of reference to another, with connectivity that develops continuously in an unsupervised manner, based only on the correlations available in the environment, and with a biologically-realistic plasticity mechanism (spike timing-dependent plasticity).
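The basis-function argument can be made concrete in a few lines: Gaussian tuning curves over an input variable form the basis, and a weighted sum of them approximates a nonlinear transformation. In the Python sketch below, a least-squares solve stands in for the weights that the model actually acquires through STDP; all names and parameters are illustrative.

```python
# Sketch of the basis-function idea: Gaussian tuning curves over one input
# variable, combined linearly to approximate a nonlinear transformation.
# The least-squares solve is a stand-in for the weights that, in the model,
# develop through spike timing-dependent plasticity.
import numpy as np

x = np.linspace(-1, 1, 200)                 # stimulus positions
centers = np.linspace(-1, 1, 20)            # preferred positions of the units
tuning = np.exp(-0.5 * ((x[:, None] - centers) / 0.15) ** 2)  # basis responses

target = np.sin(np.pi * x)                  # a nonlinear target transformation
w, *_ = np.linalg.lstsq(tuning, target, rcond=None)
approx = tuning @ w                         # linear readout of the population code
print(f"max approximation error: {np.abs(approx - target).max():.3f}")
```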
45.  Linking STDP and Dopamine action to solve the distal reward problem (Izhikevich 2007)
"... How does the brain know what firing patterns of what neurons are responsible for the reward if 1) the patterns are no longer there when the reward arrives and 2) all neurons and synapses are active during the waiting period to the reward? Here, we show how the conundrum is resolved by a model network of cortical spiking neurons with spike-timing-dependent plasticity (STDP) modulated by dopamine (DA). Although STDP is triggered by nearly coincident firing patterns on a millisecond timescale, slow kinetics of subsequent synaptic plasticity is sensitive to changes in the extracellular DA concentration during the critical period of a few seconds. ... This study emphasizes the importance of precise firing patterns in brain dynamics and suggests how a global diffusive reinforcement signal in the form of extracellular DA can selectively influence the right synapses at the right time." See paper for more and details.
46.  Logarithmic distributions prove that intrinsic learning is Hebbian (Scheler 2017)
"In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability."
47.  Long time windows from theta modulated inhib. in entorhinal–hippo. loop (Cutsuridis & Poirazi 2015)
"A recent experimental study (Mizuseki et al., 2009) has shown that the temporal delays between population activities in successive entorhinal and hippocampal anatomical stages are longer (about 70–80 ms) than expected from axon conduction velocities and passive synaptic integration of feed-forward excitatory inputs. We investigate via computer simulations the mechanisms that give rise to such long temporal delays in the hippocampus structures. ... The model shows that the experimentally reported long temporal delays in the DG, CA3 and CA1 hippocampal regions are due to theta modulated somatic and axonic inhibition..."
48.  Mapping function onto neuronal morphology (Stiefel and Sejnowski 2007)
"... We used an optimization procedure to find neuronal morphological structures for two computational tasks: First, neuronal morphologies were selected for linearly summing excitatory synaptic potentials (EPSPs); second, structures were selected that distinguished the temporal order of EPSPs. The solutions resembled the morphology of real neurons. In particular the neurons optimized for linear summation electrotonically separated their synapses, as found in avian nucleus laminaris neurons, and neurons optimized for spike-order detection had primary dendrites of significantly different diameter, as found in the basal and apical dendrites of cortical pyramidal neurons. ..."
49.  Model of cerebellar parallel fiber-Purkinje cell LTD and LTP (Gallimore et al 2018)
Model of cerebellar parallel fiber-Purkinje cell LTD and LTP implemented in MATLAB SimBiology.
50.  Model of DARPP-32 phosphorylation in striatal medium spiny neurons (Lindskog et al. 2006)
The work describes a model of how transient calcium and dopamine inputs might affect phosphorylation of DARPP-32 in the medium spiny neurons in the striatum. The model is relevant for understanding both the "three-factor rule" for synaptic plasticity in corticostriatal synapses, and also for relating reinforcement learning theories to biology.
51.  Modeling hebbian and homeostatic plasticity (Toyoizumi et al. 2014)
"... We propose a model in which synaptic strength is the product of a synapse-specific Hebbian factor and a postsynaptic- cell-specific homeostatic factor, with each factor separately arriving at a stable inactive state. This model captures ODP dynamics and has plausible biophysical substrates. We confirm model predictions experimentally that plasticity is inactive at stable states and that synaptic strength overshoots during recovery from visual deprivation. ..."
52.  Modelling gain modulation in stability-optimised circuits (Stroud et al 2018)
We supply Matlab code to create 'stability-optimised circuits'. These networks can give rise to rich neural activity transients that resemble primary motor cortex recordings in monkeys during reaching. We also supply code that allows one to learn new network outputs by changing the input-output gain of neurons in a stability-optimised network. Our code recreates the main results of Figure 1 in our related publication.
53.  Motor system model with reinforcement learning drives virtual arm (Dura-Bernal et al 2017)
"We implemented a model of the motor system with the following components: dorsal premotor cortex (PMd), primary motor cortex (M1), spinal cord and musculoskeletal arm (Figure 1). PMd modulated M1 to select the target to reach, M1 excited the descending spinal cord neurons that drove the arm muscles, and received arm proprioceptive feedback (information about the arm position) via the ascending spinal cord neurons. The large-scale model of M1 consisted of 6,208 spiking Izhikevich model neurons [37] of four types: regular-firing and bursting pyramidal neurons, and fast-spiking and low-threshold-spiking interneurons. These were distributed across cortical layers 2/3, 5A, 5B and 6, with cell properties, proportions, locations, connectivity, weights and delays drawn primarily from mammalian experimental data [38], [39], and described in detail in previous work [29]. The network included 486,491 connections, with synapses modeling properties of four different receptors ..."
54.  Multimodal stimuli learning in hawkmoths (Balkenius et al. 2008)
The moth Macroglossum stellatarum can learn the color and sometimes the odor of a rewarding food source. We present data from 20 different experiments with different combinations of blue and yellow artificial flowers and the two odors, honeysuckle and lavender. ... Three computational models were tested in the same experimental situations as the real moths and their predictions were compared with the experimental data. ... Neither the Rescorla–Wagner model nor a learning model with independent learning for each stimulus component were able to explain the experimental data. We present the new hawkmoth learning model, which assumes that the moth learns a template for the sensory attributes of the rewarding stimulus. This model produces behavior that closely matches that of the real moth in all 20 experiments.
55.  Neurogenesis in the olfactory bulb controlled by top-down input (Adams et al 2018)
This code implements a model for adult neurogenesis of granule cells in the olfactory system. The granule cells receive sensory input via the mitral cells and top-down input from a cortical area. That cortical area also receives olfactory input from the mitral cells as well as contextual input. The structural plasticity provided by neurogenesis leads to a network structure consisting of bidirectional connections between bulbar and cortical odor representations. The top-down input enhances stimulus discrimination based on contextual input.
56.  Neuronify: An Educational Simulator for Neural Circuits (Dragly et al 2017)
"Neuronify, a new educational software application (app) providing an interactive way of learning about neural networks, is described. Neuronify allows students with no programming experience to easily build and explore networks in a plug-and-play manner picking network elements (neurons, stimulators, recording devices) from a menu. The app is based on the commonly used integrate-and-fire type model neuron and has adjustable neuronal and synaptic parameters. ..."
57.  Odor supported place cell model and goal navigation in rodents (Kulvicius et al. 2008)
" ... Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. ..."
58.  Olfactory bulb mitral and granule cell column formation (Migliore et al. 2007)
In the olfactory bulb, the processing units for odor discrimination are believed to involve dendrodendritic synaptic interactions between mitral and granule cells. There is increasing anatomical evidence that these cells are organized in columns, and that the columns processing a given odor are arranged in widely distributed arrays. Experimental evidence is lacking on the underlying learning mechanisms for how these columns and arrays are formed. We have used a simplified realistic circuit model to test the hypothesis that distributed connectivity can self-organize through an activity-dependent dendrodendritic synaptic mechanism. The results point to action potentials propagating in the mitral cell lateral dendrites as playing a critical role in this mechanism, and suggest a novel and robust learning mechanism for the development of distributed processing units in a cortical structure.
59.  Optimal Localist and Distributed Coding Through STDP (Masquelier & Kheradpisheh 2018)
We show how a LIF neuron equipped with STDP can become optimally selective, in an unsupervised manner, to one or several repeating spike patterns, even when those patterns are hidden in Poisson spike trains.
60.  Optimal spatiotemporal spike pattern detection by STDP (Masquelier 2017)
We simulate a LIF neuron equipped with STDP. A pattern repeats in its inputs. The LIF progressively becomes selective to the repeating pattern, in an optimal manner.
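This entry and the previous one rest on the same two ingredients: a leaky integrate-and-fire neuron and additive STDP with exponentially decaying pre- and postsynaptic traces. The Python sketch below shows one common way to combine them; parameters and input statistics are illustrative, not those of the papers.

```python
# Rough sketch of an LIF neuron with additive STDP driven by Poisson inputs
# (illustrative parameters, not those of the models above).
import numpy as np

rng = np.random.default_rng(1)
n_in, dt, T = 100, 1.0, 2000          # inputs, time step (ms), duration (ms)
tau_m, v_thresh, v_reset = 10.0, 1.0, 0.0
tau_pre, tau_post = 20.0, 20.0        # STDP trace time constants (ms)
a_plus, a_minus = 0.01, 0.012         # potentiation / depression amplitudes

w = rng.uniform(0, 0.05, n_in)        # afferent weights
x_pre = np.zeros(n_in)                # presynaptic traces
x_post, v = 0.0, 0.0

for t in range(T):
    pre = rng.random(n_in) < 0.02     # ~20 Hz Poisson input spikes
    x_pre = x_pre * np.exp(-dt / tau_pre) + pre
    x_post *= np.exp(-dt / tau_post)
    v += dt / tau_m * (-v) + w @ pre  # leaky integration of weighted input
    if v >= v_thresh:                 # postsynaptic spike
        v = v_reset
        x_post += 1.0
        w += a_plus * x_pre           # LTP for recently active inputs
    w -= a_minus * x_post * pre       # LTD for inputs spiking after a post spike
    np.clip(w, 0.0, 0.1, out=w)       # hard bounds keep weights finite
```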
61.  Oscillations, phase-of-firing coding and STDP: an efficient learning scheme (Masquelier et al. 2009)
The model demonstrates how a common oscillatory drive for a group of neurons formats and reliabilizes their spike times - through an activation-to-phase conversion - so that repeating activation patterns can be easily detected and learned by a downstream neuron equipped with STDP, and then recognized in just one oscillation cycle.
62.  Prefrontal cortical mechanisms for goal-directed behavior (Hasselmo 2005)
".. a model of prefrontal cortex function emphasizing the influence of goal-related activity on the choice of the next motor output. ... Different neocortical minicolumns represent distinct sensory input states and distinct motor output actions. The dynamics of each minicolumn include separate phases of encoding and retrieval. During encoding, strengthening of excitatory connections forms forward and reverse associations between each state, the following action, and a subsequent state, which may include reward. During retrieval, activity spreads from reward states throughout the network. The interaction of this spreading activity with a specific input state directs selection of the next appropriate action. Simulations demonstrate how these mechanisms can guide performance in a range of goal directed tasks, and provide a functional framework for some of the neuronal responses previously observed in the medial prefrontal cortex during performance of spatial memory tasks in rats."
63.  Reinforcement learning of targeted movement (Chadderdon et al. 2012)
"Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint “forearm” to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. ..."
64.  Reinforcement Learning with Forgetting: Linking Sustained Dopamine to Motivation (Kato Morita 2016)
"It has been suggested that dopamine (DA) represents reward-prediction-error (RPE) defined in reinforcement learning and therefore DA responds to unpredicted but not predicted reward. However, recent studies have found DA response sustained towards predictable reward in tasks involving self-paced behavior, and suggested that this response represents a motivational signal. We have previously shown that RPE can sustain if there is decay/forgetting of learned-values, which can be implemented as decay of synaptic strengths storing learned-values. This account, however, did not explain the suggested link between tonic/sustained DA and motivation. In the present work, we explored the motivational effects of the value-decay in self-paced approach behavior, modeled as a series of ‘Go’ or ‘No-Go’ selections towards a goal. Through simulations, we found that the value-decay can enhance motivation, specifically, facilitate fast goal-reaching, albeit counterintuitively. ..."
65.  Relative spike time coding and STDP-based orientation selectivity in V1 (Masquelier 2012)
Phenomenological spiking model of the cat early visual system. We show how natural vision can drive spike time correlations on sufficiently fast time scales to lead to the acquisition of orientation-selective V1 neurons through STDP. This is possible without reference times such as stimulus onsets, or saccade landing times. But even when such reference times are available, we demonstrate that the relative spike times encode the images more robustly than the absolute ones.
66.  Reward modulated STDP (Legenstein et al. 2008)
"... This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. ... In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics."
67.  Robust Reservoir Generation by Correlation-Based Learning (Yamazaki & Tanaka 2008)
"Reservoir computing (RC) is a new framework for neural computation. A reservoir is usually a recurrent neural network with fixed random connections. In this article, we propose an RC model in which the connections in the reservoir are modifiable. ... We apply our RC model to trace eyeblink conditioning. The reservoir bridged the gap of an interstimulus interval between the conditioned and unconditioned stimuli, and a readout neuron was able to learn and express the timed conditioned response."
68.  Role for short term plasticity and OLM cells in containing spread of excitation (Hummos et al 2014)
This hippocampus model was developed by matching experimental data, including neuronal behavior, synaptic current dynamics, network spatial connectivity patterns, and short-term synaptic plasticity. Furthermore, it was constrained to perform pattern completion and separation under the effects of acetylcholine. The model was then used to investigate the role of short-term synaptic depression at the recurrent synapses in CA3, and inhibition by basket cell (BC) interneurons and oriens lacunosum-moleculare (OLM) interneurons in containing the unstable spread of excitatory activity in the network.
69.  Roles of subthalamic nucleus and DBS in reinforcement conflict-based decision making (Frank 2006)
Deep brain stimulation (DBS) of the subthalamic nucleus dramatically improves the motor symptoms of Parkinson's disease, but causes cognitive side effects such as impulsivity. This model from Frank (2006) simulates the role of the subthalamic nucleus (STN) within the basal ganglia circuitry in decision making. The STN dynamically modulates network decision thresholds in proportion to decision conflict. The STN "hold your horses" signal adaptively allows the system more time to settle on the best choice when multiple options are valid. The model also replicates effects in Parkinson's patients on and off DBS in experiments designed to test the model (Frank et al, 2007).
70.  Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. ... This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. ... Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. See the paper for more details.
71.  Sensorimotor cortex reinforcement learning of 2-joint virtual arm reaching (Neymotin et al. 2013)
"... We developed a model of sensory and motor neocortex consisting of 704 spiking model-neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA, and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a 2-joint virtual arm to reach to a fixed target. ... "
72.  Sequential neuromodulation of Hebbian plasticity in reward-based navigation (Brzosko et al 2017)
" ...Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. ..."
73.  SHOT-CA3, RO-CA1 Training, & Simulation CODE in models of hippocampal replay (Nicola & Clopath 2019)
In this code, we model the interaction between the medial septum and hippocampus as a FORCE trained, dual oscillator model. One oscillator corresponds to the medial septum and serves as an input, while a FORCE trained network of LIF neurons acts as a model of the CA3. We refer to this entire model as the Septal Hippocampal Oscillator Theta (or SHOT) network. The code contained in this upload allows a user to train a SHOT network, train a population of reversion interneurons, and simulate the SHOT-CA3 and RO-CA1 networks after training. The code scripts are labeled to correspond to the figure from the manuscript.
74.  Simulated cortical color opponent receptive fields self-organize via STDP (Eguchi et al., 2014)
"... In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. ... Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. ... After training with natural images, the neurons display heightened sensitivity to specific colors. ..."
75.  Single compartment Dorsal Lateral Medium Spiny Neuron w/ NMDA and AMPA (Biddell and Johnson 2013)
A biophysical single compartment model of the dorsal lateral striatum medium spiny neuron is presented here. It is an implementation of a previously described model (Mahon et al. 2002), adapted to include NMDA and AMPA receptor models that have been fit to dorsal lateral striatal neurons. The receptor models allow for excitation by other neuron models.
76.  Spatial structure from diffusive synaptic plasticity (Sweeney and Clopath, 2016)
In this paper we propose a new form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. The effects of this diffusive plasticity are implemented in networks of rate-based neurons, and lead to the emergence of spatial structure in the synaptic connectivity of the network.
77.  Speed/accuracy trade-off between the habitual and the goal-directed processes (Keramati et al. 2011)
"This study is a reference implementation of Keramati, Dezfouli, and Piray 2011 that proposed an arbitration mechanism between a goal-directed strategy and a habitual strategy, used to model the behavior of rats in instrumental conditionning tasks. The habitual strategy is the Kalman Q-Learning from Geist, Pietquin, and Fricout 2009. We replicate the results of the first task, i.e. the devaluation experiment with two states and two actions. ..."
78.  Spike-timing dependent inhibitory plasticity for gating bAPs (Wilmes et al 2017)
"Inhibition is known to influence the forward-directed flow of information within neurons. However, also regulation of backward-directed signals, such as backpropagating action potentials (bAPs), can enrich the functional repertoire of local circuits. Inhibitory control of bAP spread, for example, can provide a switch for the plasticity of excitatory synapses. Although such a mechanism is possible, it requires a precise timing of inhibition to annihilate bAPs without impairment of forward-directed excitatory information flow. Here, we propose a specific learning rule for inhibitory synapses to automatically generate the correct timing to gate bAPs in pyramidal cells when embedded in a local circuit of feedforward inhibition. Based on computational modeling of multi-compartmental neurons with physiological properties, we demonstrate that a learning rule with anti-Hebbian shape can establish the required temporal precision. ..."
79.  Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
Development of spiking grid cells and place cells in the entorhinal-hippocampal system to represent positions in large spaces.
80.  STDP allows fast rate-modulated coding with Poisson-like spike trains (Gilson et al. 2011)
The model demonstrates that a neuron equipped with STDP robustly detects repeating rate patterns among its afferents, from which the spikes are generated on the fly using inhomogeneous Poisson sampling, provided those rates have narrow temporal peaks (10-20 ms) - a condition met by many experimental Post-Stimulus Time Histograms (PSTH).
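Generating such inputs on the fly is straightforward: with a small time step, an inhomogeneous Poisson train can be sampled by drawing one Bernoulli variable per bin with probability rate*dt. A minimal sketch with an illustrative rate profile (baseline plus a roughly 15 ms wide peak):

```python
# Sketch of on-the-fly inhomogeneous Poisson sampling from a rate profile
# with a narrow temporal peak (illustrative numbers, not the paper's inputs).
import numpy as np

rng = np.random.default_rng(3)
dt, T = 1.0, 200.0                        # time step and duration (ms)
t = np.arange(0.0, T, dt)
peak = 50.0 * np.exp(-0.5 * ((t - 100.0) / 7.0) ** 2)   # ~15 ms wide transient
rate = 5.0 + peak                         # baseline 5 Hz plus the peak (Hz)

# one Bernoulli draw per bin: spike probability = rate * dt (dt in seconds)
p_spike = rate * dt / 1000.0
spikes = rng.random(t.size) < p_spike
spike_times = t[spikes]
print(f"{spike_times.size} spikes, e.g. at {spike_times[:5]} ms")
```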
81.  Striatal dopamine ramping: an explanation by reinforcement learning with decay (Morita & Kato, 2014)
Incorporation of decay of learned values into temporal-difference (TD) learning (Sutton & Barto, 1998, Reinforcement Learning (MIT Press)) causes ramping of TD reward prediction error (RPE), which could explain, given the hypothesis that dopamine represents TD RPE (Montague et al., 1996, J Neurosci 16:1936; Schultz et al., 1997, Science 275:1593), the reported ramping of the dopamine concentration in the striatum in a reward-associated spatial navigation task (Howe et al., 2013, Nature 500:575).
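The mechanism admits a small sketch: standard TD updates over a chain of states leading to reward, plus a per-trial decay of all learned values, leave a reward-prediction error that stays positive and grows toward the goal. Parameters below are illustrative, not those of the paper.

```python
# Sketch of TD learning with value decay producing a ramping reward-prediction
# error (RPE) along a chain of states toward reward. Parameters illustrative.
import numpy as np

n_states, alpha, gamma, phi = 10, 0.5, 0.97, 0.01
V = np.zeros(n_states + 1)               # V[n_states] is the terminal state

for trial in range(200):
    V *= (1.0 - phi)                     # decay/forgetting of learned values
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        rpe = r + gamma * V[s + 1] - V[s]   # TD reward-prediction error
        V[s] += alpha * rpe

# after learning, the RPE along the chain ramps up toward the goal
rpe_profile = [(1.0 if s == n_states - 1 else 0.0) + gamma * V[s + 1] - V[s]
               for s in range(n_states)]
print(np.round(rpe_profile, 4))
```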
82.  Supervised learning with predictive coding (Whittington & Bogacz 2017)
"To effciently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error back-propagation algorithm. However, in the back-propagation algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of pre-synaptic and post-synaptic neurons. Several models have been proposed that approximate the back-propagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. ..."
83.  Synaptic scaling balances learning in a spiking model of neocortex (Rowan & Neymotin 2013)
Learning in the brain requires complementary mechanisms: potentiation and activity-dependent homeostatic scaling. We introduce synaptic scaling to a biologically-realistic spiking model of neocortex which can learn changes in oscillatory rhythms using STDP, and show that scaling is necessary to balance both positive and negative changes in input from potentiation and atrophy. We discuss some of the issues that arise when considering synaptic scaling in such a model, and show that scaling regulates activity whilst allowing learning to remain unaltered.
84.  The APP C-terminal domain alters CA1 neuron firing (Pousinha et al 2019)
"The amyloid precursor protein (APP) is central to AD pathogenesis and we recently showed that its intracellular domain (AICD) could modify synaptic signal integration. We now hypothezise that AICD modifies neuron firing activity, thus contributing to the disruption of memory processes. Using cellular, electrophysiological and behavioural techniques, we showed that pathological AICD levels weakens CA1 neuron firing activity through a gene transcription-dependent mechanism. Furthermore, increased AICD production in hippocampal neurons modifies oscillatory activity, specifically in the gamma frequency range, and disrupts spatial memory task. Collectively, our data suggest that AICD pathological levels, observed in AD mouse models and in human patients, might contribute to progressive neuron homeostatic failure, driving the shift from normal ageing to AD."
85.  Theta phase precession in a model CA3 place cell (Baker and Olds 2007)
"... The present study concerns a neurobiologically based computational model of the emergence of theta phase precession in which the responses of a single model CA3 pyramidal cell are examined in the context of stimulation by realistic afferent spike trains including those of place cells in entorhinal cortex, dentate gyrus, and other CA3 pyramidal cells. Spike-timing dependent plasticity in the model CA3 pyramidal cell leads to a spatially correlated associational synaptic drive that subsequently creates a spatially asymmetric expansion of the model cell’s place field. ... Through selective manipulations of the model it is possible to decompose theta phase precession in CA3 into the separate contributing factors of inheritance from upstream afferents in the dentate gyrus and entorhinal cortex, the interaction of synaptically controlled increasing afferent drive with phasic inhibition, and the theta phase difference between dentate gyrus granule cell and CA3 pyramidal cell activity."
86.  Towards a biologically plausible model of LGN-V1 pathways (Lian et al 2019)
"Increasing evidence supports the hypothesis that the visual system employs a sparse code to represent visual stimuli, where information is encoded in an efficient way by a small population of cells that respond to sensory input at a given time. This includes simple cells in primary visual cortex (V1), which are defined by their linear spatial integration of visual stimuli. Various models of sparse coding have been proposed to explain physiological phenomena observed in simple cells. However, these models have usually made the simplifying assumption that inputs to simple cells already incorporate linear spatial summation. This overlooks the fact that these inputs are known to have strong non-linearities such as the separation of ON and OFF pathways, or separation of excitatory and inhibitory neurons. Consequently these models ignore a range of important experimental phenomena that are related to the emergence of linear spatial summation from non-linear inputs, such as segregation of ON and OFF sub-regions of simple cell receptive fields, the push-pull effect of excitation and inhibition, and phase-reversed cortico-thalamic feedback. Here, we demonstrate that a two-layer model of the visual pathway from the lateral geniculate nucleus to V1 that incorporates these biological constraints on the neural circuits and is based on sparse coding can account for the emergence of these experimental phenomena, diverse shapes of receptive fields and contrast invariance of orientation tuning of simple cells when the model is trained on natural images. The model suggests that sparse coding can be implemented by the V1 simple cells using neural circuits with a simple biologically plausible architecture."
