Circuits that contain the Model Concept: Learning

(The ability of a neural network to change its output over time, or across trials, in response to a set of inputs or their repetition.)
1. 3D model of the olfactory bulb (Migliore et al. 2014)
This entry contains a link to a full HD version of movie 1 and the NEURON code of the paper: "Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb" by M Migliore, F Cavarretta, ML Hines, and GM Shepherd.
2. 3D olfactory bulb: operators (Migliore et al, 2015)
"... Using a 3D model of mitral and granule cell interactions supported by experimental findings, combined with a matrix-based representation of glomerular operations, we identify the mechanisms for forming one or more glomerular units in response to a given odor, how and to what extent the glomerular units interfere or interact with each other during learning, their computational role within the olfactory bulb microcircuit, and how their actions can be formalized into a theoretical framework in which the olfactory bulb can be considered to contain "odor operators" unique to each individual. ..."
3. A 1000 cell network model for Lateral Amygdala (Kim et al. 2013)
1000 Cell Lateral Amygdala model for investigation of plasticity and memory storage during Pavlovian Conditioning.
4. A large-scale model of the functioning brain (spaun) (Eliasmith et al. 2012)
" ... In this work, we present a 2.5-million-neuron model of the brain (called “Spaun”) that bridges this gap (between neural activity and biological function) by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks."
5. A model of antennal lobe of bee (Chen JY et al. 2015)
" ... Here we use calcium imaging to reveal how responses across antennal lobe projection neurons change after association of an input odor with appetitive reinforcement. After appetitive conditioning to 1-hexanol, the representation of an odor mixture containing 1-hexanol becomes more similar to this odor and less similar to the background odor acetophenone. We then apply computational modeling to investigate how changes in synaptic connectivity can account for the observed plasticity. Our study suggests that experience-dependent modulation of inhibitory interactions in the antennal lobe aids perception of salient odor components mixed with behaviorally irrelevant background odors."
6. A reinforcement learning example (Sutton and Barto 1998)
This MATLAB script demonstrates an example of reinforcement learning functions guiding the movements of an agent (a black square) in a gridworld environment. See the comments at the top of the MATLAB script and the book for more details.
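For readers without MATLAB, the core idea the script illustrates can be sketched in a few lines of Python: tabular Q-learning on a toy gridworld. The grid, rewards, and parameters below are illustrative assumptions, not those of the original script.

```python
import random

# Tabular Q-learning on a toy 1-D gridworld: states 0..4, start at 0,
# reward +1 on reaching state 4. Actions: 0 = left, 1 = right. All
# parameters are illustrative, not taken from the original script.
random.seed(0)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPSILON else Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # move Q(s, a) toward the bootstrapped target r + gamma * max_a' Q(s', a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [Q[s].index(max(Q[s])) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]: "right" everywhere, the shortest path to reward
```

The epsilon-greedy step is what produces the "exploring" movements of the agent before the learned value function takes over.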
7. A spiking neural network model of model-free reinforcement learning (Nakano et al 2015)
"Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. ... In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL (partially observable reinforcement learning) problems with high-dimensional observations. ... The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. "
8. Acetylcholine-modulated plasticity in reward-driven navigation (Zannone et al 2018)
"Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules."
9. Adaptive robotic control driven by a versatile spiking cerebellar network (Casellato et al. 2014)
" ... We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. ..."
10. Alleviating catastrophic forgetting: context gating and synaptic stabilization (Masse et al 2018)
"Artificial neural networks can suffer from catastrophic forgetting, in which learning a new task causes the network to forget how to perform previous tasks. While previous studies have proposed various methods that can alleviate forgetting over small numbers (<10) of tasks, it is uncertain whether they can prevent forgetting across larger numbers of tasks. In this study, we propose a neuroscience-inspired scheme, called “context-dependent gating,” in which mostly nonoverlapping sets of units are active for any one task. Importantly, context-dependent gating has a straightforward implementation, requires little extra computational overhead, and when combined with previous methods to stabilize connection weights, can allow networks to maintain high performance across large numbers of sequentially presented tasks."
11. Alternative time representation in dopamine models (Rivest et al. 2009)
Combines a long short-term memory (LSTM) model of the cortex with a temporal difference (TD) learning model of the basal ganglia. Code to run simulations similar to the published data: Rivest, F., Kalaska, J.F., Bengio, Y. (2009) Alternative time representation in dopamine models. Journal of Computational Neuroscience. See http://dx.doi.org/10.1007/s10827-009-0191-1 for details.
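The TD side of this combination can be sketched minimally as follows. The tapped-delay time representation and parameters here are illustrative placeholders, far simpler than the LSTM-based representation studied in the paper.

```python
# Minimal TD(0) sketch of the dopamine-like prediction error underlying the
# basal ganglia component: a cue starts the trial and reward arrives at the
# final time step. The TD error delta (often likened to phasic dopamine) is
# large at reward time early in training and vanishes once the value
# function predicts the reward. All values are illustrative.
T = 10                    # time steps from cue to reward
ALPHA, GAMMA = 0.1, 1.0
V = [0.0] * (T + 1)       # learned value of each time step in the trial

def run_trial():
    deltas = []
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0       # reward at the last step
        delta = r + GAMMA * V[t + 1] - V[t]  # TD prediction error
        V[t] += ALPHA * delta
        deltas.append(delta)
    return deltas

first = run_trial()
for _ in range(500):
    last = run_trial()
print(round(first[-1], 2), round(last[-1], 2))  # → 1.0 0.0
```

The point of the paper's richer time representation is precisely to supply a usable state for this kind of update when interval durations vary.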
12. An electrophysiological model of GABAergic double bouquet cells (Chrysanthidis et al. 2019)
We present an electrophysiological model of double bouquet cells (DBCs) and integrate them into an established cortical columnar microcircuit model that implements a BCPNN (Bayesian Confidence Propagation Neural Network) learning rule. The proposed architecture effectively solves the problem of duplexed learning of inhibition and excitation by replacing recurrent inhibition between pyramidal cells in functional columns of different stimulus selectivity with a plastic disynaptic pathway. The introduction of DBCs improves the biological plausibility of our model, without affecting the model's spiking activity, basic operation, and learning abilities.
13. Cancelling redundant input in ELL pyramidal cells (Bol et al. 2011)
The paper investigates the property of the electrosensory lateral line lobe (ELL) of the brain of weakly electric fish to cancel predictable stimuli. Electroreceptors on the skin encode all signals in their firing activity, but superficial pyramidal (SP) cells in the ELL that receive this feedforward input do not respond to constant sinusoidal signals. This cancellation putatively occurs using a network of feedback delay lines and burst-induced synaptic plasticity between the delay lines and the SP cell that learns to cancel the redundant input. Biologically, the delay lines are parallel fibres from cerebellar-like granule cells in the eminentia granularis posterior. A model of this network (e.g. electroreceptors, SP cells, delay lines and burst-induced plasticity) was constructed to test whether the current knowledge of how the network operates is sufficient to cancel redundant stimuli.
14. Cerebellar gain and timing control model (Yamazaki & Tanaka 2007)(Yamazaki & Nagao 2012)
This paper proposes a hypothetical computational mechanism for unified gain and timing control in the cerebellum. The hypothesis is justified by computer simulations of a large-scale spiking network model of the cerebellum.
15. Cerebellar memory consolidation model (Yamazaki et al. 2015)
"Long-term depression (LTD) at parallel fiber-Purkinje cell (PF-PC) synapses is thought to underlie memory formation in cerebellar motor learning. Recent experimental results, however, suggest that multiple plasticity mechanisms in the cerebellar cortex and cerebellar/vestibular nuclei participate in memory formation. To examine this possibility, we formulated a simple model of the cerebellum with a minimal number of components based on its known anatomy and physiology, implementing both LTD and long-term potentiation (LTP) at PF-PC synapses and mossy fiber-vestibular nuclear neuron (MF-VN) synapses. With this model, we conducted a simulation study of the gain adaptation of optokinetic response (OKR) eye movement. Our model reproduced several important aspects of previously reported experimental results in wild-type and cerebellum-related gene-manipulated mice. ..."
16. Cognitive and motor cortico-basal ganglia interactions during decision making (Guthrie et al 2013)
This is a re-implementation by Topalidou and Rougier (2015) of Guthrie et al. (2013). The original study investigated how the basal ganglia could perform action selection at multiple levels (cognitive and motor).
17. Cortex learning models (Weber et al. 2006, Weber and Triesch 2006, Weber and Wermter 2006/7)
A simulator and the configuration files for three publications are provided. First, "A hybrid generative and predictive model of the motor cortex" (Weber et al. 2006), which uses reinforcement learning to set up a toy action scheme, then uses unsupervised learning to "copy" the learnt action, and an attractor network to predict the hidden code of the unsupervised network. Second, "A Self-Organizing Map of Sigma-Pi Units" (Weber and Wermter 2006/7) learns frame of reference transformations on population codes in an unsupervised manner. Third, "A possible representation of reward in the learning of saccades" (Weber and Triesch, 2006) implements saccade learning with two possible learning schemes for horizontal and vertical saccades, respectively.
18. Cortical model with reinforcement learning drives realistic virtual arm (Dura-Bernal et al 2015)
We developed a 3-layer sensorimotor cortical network consisting of 704 spiking model neurons, including excitatory cells and fast-spiking and low-threshold-spiking interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a virtual musculoskeletal human arm, with realistic anatomical and biomechanical properties, to reach a target. Virtual arm position was used to simultaneously control a robot arm via a network interface.
19. Development of orientation-selective simple cell receptive fields (Rishikesh and Venkatesh, 2003)
Implementation of a computational model for the development of simple-cell receptive fields spanning the regimes before and after eye-opening. The period before eye-opening is governed by a correlation-based rule from Miller (Miller, J. Neurosci., 1994), and the period after eye-opening is governed by self-organizing, experience-dependent dynamics derived in the associated paper.
20. Dynamic dopamine modulation in the basal ganglia: Learning in Parkinson (Frank et al 2004,2005)
See README file for all info on how to run models under different tasks and simulated Parkinson's and medication conditions.
21. Effects of increasing CREB on storage and recall processes in a CA1 network (Bianchi et al. 2014)
Several recent results suggest that boosting the CREB pathway improves hippocampal-dependent memory in healthy rodents and restores this type of memory in an AD mouse model. However, not much is known about how CREB-dependent neuronal alterations in synaptic strength, excitability and LTP can boost memory formation in the complex architecture of a neuronal network. Using a model of a CA1 microcircuit, we investigate whether hippocampal CA1 pyramidal neuron properties altered by increasing CREB activity may contribute to improve memory storage and recall. With a set of patterns presented to a network, we find that the pattern recall quality under AD-like conditions is significantly better when boosting CREB function with respect to control. The results are robust and consistent upon increasing the synaptic damage expected by AD progression, supporting the idea that the use of CREB-based therapies could provide a new approach to treat AD.
22. Encoding and retrieval in a model of the hippocampal CA1 microcircuit (Cutsuridis et al. 2009)
This NEURON code implements a small network model (100 pyramidal cells and 4 types of inhibitory interneuron) of storage and recall of patterns in the CA1 region of the mammalian hippocampus. Patterns of PC activity are stored either by a predefined weight matrix generated by Hebbian learning, or by STDP at CA3 Schaffer collateral AMPA synapses.
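The "predefined weight matrix generated by Hebbian learning" idea can be illustrated with a toy binary associative memory. The sizes, sparsity, and threshold rule below are assumptions of this sketch, not taken from the model.

```python
import random

# Toy sketch of storage via a clipped Hebbian weight matrix and recall from
# a partial cue. Sizes, sparsity, and the threshold are illustrative.
random.seed(1)
N, ACTIVE = 100, 10

def make_pattern():
    cells = set(random.sample(range(N), ACTIVE))
    return [1 if i in cells else 0 for i in range(N)]

patterns = [make_pattern() for _ in range(3)]

# Clipped Hebbian outer-product learning: w_ij = 1 if cells i and j are
# coactive in any stored pattern (no self-connections).
W = [[0] * N for _ in range(N)]
for p in patterns:
    for i in range(N):
        if p[i]:
            for j in range(N):
                if p[j] and i != j:
                    W[i][j] = 1

def recall(cue, theta):
    """Threshold the summed weighted input arriving from the cue's active cells."""
    h = [sum(W[i][j] for j in range(N) if cue[j]) for i in range(N)]
    return [1 if h[i] >= theta else 0 for i in range(N)]

# Cue with half of pattern 0's active cells; threshold just below cue size.
target = patterns[0]
cue_cells = [i for i in range(N) if target[i]][:5]
cue = [1 if i in cue_cells else 0 for i in range(N)]
out = recall(cue, theta=4)
overlap = sum(a * b for a, b in zip(out, target))
print(overlap)  # → 10: all of pattern 0's active cells are recovered
```

In the actual model this thresholding is implemented biophysically, by the interplay of Schaffer collateral excitation and the four interneuron classes.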
23. First-Spike-Based Visual Categorization Using Reward-Modulated STDP (Mozafari et al. 2018)
"...Here, for the first time, we show that reinforcement learning (RL) can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. ..."
24. Fixed point attractor (Hasselmo et al 1995)
"... In the model, cholinergic suppression of synaptic transmission at excitatory feedback synapses is shown to determine the extent to which activity depends upon new features of the afferent input versus components of previously stored representations. ..." See paper for more details. The MATLAB script demonstrates the model of fixed point attractors mediated by excitatory feedback with subtractive inhibition in a continuous firing rate model.
25. FRAT: An amygdala-centered model of fear conditioning (Krasne et al. 2011)
Model of Pavlovian fear conditioning and extinction in which neuromodulator-controlled LTP on principal cells and inhibitory interneurons occurs in the amygdala, and contextual representations are learned in the hippocampus. Many properties of fear conditioning are accounted for.
26. Functional balanced networks with synaptic plasticity (Sadeh et al, 2015)
The model investigates the impact of learning on functional sensory networks. It uses large-scale recurrent networks of excitatory and inhibitory spiking neurons equipped with synaptic plasticity. It explains enhancement of orientation selectivity and emergence of feature-specific connectivity in visual cortex of rodents during development, as reported in experiments.
27. Hebbian STDP for modelling the emergence of disparity selectivity (Chauhan et al 2018)
This code shows how Hebbian learning mediated by STDP mechanisms could explain the emergence of disparity selectivity in the early visual system. This upload is a snapshot of the code at the time of acceptance of the paper. For a link to a soon-to-come git repository, consult the author's website: www.tusharchauhan.com/research/ . The datasets used in the paper are not provided due to size, but download links and expected directory-structures are. The user can (and is strongly encouraged to) experiment with their own dataset. Let me know if you find something interesting! Finally, I am very keen on a redesign/restructure/adaptation of the code to more applied problems in AI and robotics (or any other field where a spiking non-linear approach makes sense). If you have a serious proposal, don't hesitate to contact me [research AT tusharchauhan DOT com ].
28. Hippocampal context-dependent retrieval (Hasselmo and Eichenbaum 2005)
"... The model simulates the context-sensitive firing properties of hippocampal neurons including trial-specific firing during spatial alternation and trial by trial changes in theta phase precession on a linear track. ..." See paper for more details.
29. Large scale model of the olfactory bulb (Yu et al., 2013)
The readme file currently contains links to the results for all the 72 odors investigated in the paper, and the movie showing the network activity during learning of odor k3-3 (an aliphatic ketone).
30. Learning spatial transformations through STDP (Davison, Frégnac 2006)
A common problem in tasks involving the integration of spatial information from multiple senses, or in sensorimotor coordination, is that different modalities represent space in different frames of reference. Coordinate transformations between different reference frames are therefore required. One way to achieve this relies on the encoding of spatial information using population codes. The set of network responses to stimuli in different locations (tuning curves) constitute a basis set of functions which can be combined linearly through weighted synaptic connections in order to approximate non-linear transformations of the input variables. The question then arises how the appropriate synaptic connectivity is obtained. This model shows that a network of spiking neurons can learn the coordinate transformation from one frame of reference to another, with connectivity that develops continuously in an unsupervised manner, based only on the correlations available in the environment, and with a biologically-realistic plasticity mechanism (spike timing-dependent plasticity).
31. Linking STDP and Dopamine action to solve the distal reward problem (Izhikevich 2007)
"... How does the brain know what firing patterns of what neurons are responsible for the reward if 1) the patterns are no longer there when the reward arrives and 2) all neurons and synapses are active during the waiting period to the reward? Here, we show how the conundrum is resolved by a model network of cortical spiking neurons with spike-timing-dependent plasticity (STDP) modulated by dopamine (DA). Although STDP is triggered by nearly coincident firing patterns on a millisecond timescale, slow kinetics of subsequent synaptic plasticity is sensitive to changes in the extracellular DA concentration during the critical period of a few seconds. ... This study emphasizes the importance of precise firing patterns in brain dynamics and suggests how a global diffusive reinforcement signal in the form of extracellular DA can selectively influence the right synapses at the right time." See paper for more details.
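The mechanism described above, an eligibility trace bridging fast STDP events and a delayed dopamine signal, can be caricatured in a few lines. All constants below are illustrative assumptions, not Izhikevich's.

```python
# Caricature of the distal reward mechanism: a near-coincident pre/post
# pairing sets a slowly decaying eligibility trace c; the weight w changes
# only while extracellular dopamine d is elevated, so rewards arriving
# within the trace's lifetime selectively strengthen the tagged synapse.
# All constants are illustrative, not taken from the paper.
TAU_C, TAU_D, DT = 1.0, 0.2, 0.01   # seconds

def simulate(reward_delay, duration=5.0):
    """Pre-before-post pairing at t=0; a dopamine pulse at reward_delay."""
    w, c, d = 0.5, 0.0, 0.0
    for step in range(int(duration / DT)):
        t = step * DT
        if step == 0:
            c += 0.1                 # STDP event tags the synapse
        if abs(t - reward_delay) < DT / 2:
            d += 1.0                 # phasic dopamine at reward time
        w += c * d * DT              # plasticity where trace and DA overlap
        c -= (c / TAU_C) * DT        # slow eligibility decay
        d -= (d / TAU_D) * DT        # faster dopamine clearance
    return w

early = simulate(reward_delay=0.5)   # reward while the trace is still large
late = simulate(reward_delay=4.0)    # reward long after the pairing
print(early > late)  # → True: closer rewards reinforce the synapse more
```

The separation of time constants (milliseconds for the STDP event, seconds for the trace) is what lets a global, diffuse DA signal credit the right synapses.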
32. Logarithmic distributions prove that intrinsic learning is Hebbian (Scheler 2017)
"In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability."
33. Long time windows from theta modulated inhib. in entorhinal–hippo. loop (Cutsuridis & Poirazi 2015)
"A recent experimental study (Mizuseki et al., 2009) has shown that the temporal delays between population activities in successive entorhinal and hippocampal anatomical stages are longer (about 70–80 ms) than expected from axon conduction velocities and passive synaptic integration of feed-forward excitatory inputs. We investigate via computer simulations the mechanisms that give rise to such long temporal delays in the hippocampus structures. ... The model shows that the experimentally reported long temporal delays in the DG, CA3 and CA1 hippocampal regions are due to theta modulated somatic and axonic inhibition..."
34. Modeling hebbian and homeostatic plasticity (Toyoizumi et al. 2014)
"... We propose a model in which synaptic strength is the product of a synapse-specific Hebbian factor and a postsynaptic-cell-specific homeostatic factor, with each factor separately arriving at a stable inactive state. This model captures ODP dynamics and has plausible biophysical substrates. We confirm model predictions experimentally that plasticity is inactive at stable states and that synaptic strength overshoots during recovery from visual deprivation. ..."
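The factorized rule can be illustrated with a toy rate model. The Oja-style stabilization of the Hebbian factor and all constants here are assumptions of this sketch, not the paper's equations.

```python
# Toy rate-based sketch of the factorized rule: each synapse's effective
# strength is h_i * g, where h_i is a synapse-specific Hebbian factor
# (Oja-stabilized here, an assumption of this sketch) and g is a cell-wide
# homeostatic factor holding output activity near a set point.
x = [1.0, 0.2]            # steady presynaptic rates (illustrative)
h = [0.5, 0.5]            # Hebbian factors (synapse-specific)
g = 1.0                   # homeostatic factor (postsynaptic-cell-specific)
Y_TARGET = 1.0
ETA_H, ETA_G = 0.01, 0.05

for _ in range(5000):
    y = g * (h[0] * x[0] + h[1] * x[1])          # postsynaptic rate
    for i in range(2):
        h[i] += ETA_H * y * (x[i] - y * h[i])    # Hebbian growth, Oja decay
    g += ETA_G * g * (Y_TARGET - y)              # slow multiplicative scaling

y = g * (h[0] * x[0] + h[1] * x[1])
print(round(y, 2), h[0] > h[1])  # → 1.0 True
```

Each factor has its own stable fixed point, so output activity settles at the set point while the Hebbian factors still reflect the input structure.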
35. Modelling gain modulation in stability-optimised circuits (Stroud et al 2018)
We supply Matlab code to create 'stability-optimised circuits'. These networks can give rise to rich neural activity transients that resemble primary motor cortex recordings in monkeys during reaching. We also supply code that allows one to learn new network outputs by changing the input-output gain of neurons in a stability-optimised network. Our code recreates the main results of Figure 1 in our related publication.
36. Motor system model with reinforcement learning drives virtual arm (Dura-Bernal et al 2017)
"We implemented a model of the motor system with the following components: dorsal premotor cortex (PMd), primary motor cortex (M1), spinal cord and musculoskeletal arm (Figure 1). PMd modulated M1 to select the target to reach, M1 excited the descending spinal cord neurons that drove the arm muscles, and received arm proprioceptive feedback (information about the arm position) via the ascending spinal cord neurons. The large-scale model of M1 consisted of 6,208 spiking Izhikevich model neurons [37] of four types: regular-firing and bursting pyramidal neurons, and fast-spiking and low-threshold-spiking interneurons. These were distributed across cortical layers 2/3, 5A, 5B and 6, with cell properties, proportions, locations, connectivity, weights and delays drawn primarily from mammalian experimental data [38], [39], and described in detail in previous work [29]. The network included 486,491 connections, with synapses modeling properties of four different receptors ..."
37. Neuronify: An Educational Simulator for Neural Circuits (Dragly et al 2017)
"Neuronify, a new educational software application (app) providing an interactive way of learning about neural networks, is described. Neuronify allows students with no programming experience to easily build and explore networks in a plug-and-play manner picking network elements (neurons, stimulators, recording devices) from a menu. The app is based on the commonly used integrate-and-fire type model neuron and has adjustable neuronal and synaptic parameters. ..."
38. Odor supported place cell model and goal navigation in rodents (Kulvicius et al. 2008)
" ... Here we model odor supported place cells by using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal directed behavior when switching between two environments. ..."
39. Olfactory bulb mitral and granule cell column formation (Migliore et al. 2007)
In the olfactory bulb, the processing units for odor discrimination are believed to involve dendrodendritic synaptic interactions between mitral and granule cells. There is increasing anatomical evidence that these cells are organized in columns, and that the columns processing a given odor are arranged in widely distributed arrays. Experimental evidence is lacking on the underlying learning mechanisms for how these columns and arrays are formed. We have used a simplified realistic circuit model to test the hypothesis that distributed connectivity can self-organize through an activity-dependent dendrodendritic synaptic mechanism. The results point to action potentials propagating in the mitral cell lateral dendrites as playing a critical role in this mechanism, and suggest a novel and robust learning mechanism for the development of distributed processing units in a cortical structure.
40. Optimal Localist and Distributed Coding Through STDP (Masquelier & Kheradpisheh 2018)
We show how a LIF neuron equipped with STDP can become optimally selective, in an unsupervised manner, to one or several repeating spike patterns, even when those patterns are hidden in Poisson spike trains.
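A drastically simplified, trial-based caricature of this selectivity effect (rate-based rather than true spike-timing, with every number an illustrative assumption):

```python
import random

# An integrate-and-fire-style unit with 100 afferents, where afferents
# 0-19 repeat a fixed volley every trial and the rest fire randomly.
# LTP for afferents active before the postsynaptic spike, uniform LTD
# otherwise. Rate-based caricature, not true STDP; values illustrative.
random.seed(2)
N, N_PAT = 100, 20
A_PLUS, A_MINUS, THRESH = 0.05, 0.02, 5.0
w = [0.5] * N

for trial in range(200):
    active = set(range(N_PAT))                     # the repeating pattern
    active |= {i for i in range(N_PAT, N) if random.random() < 0.2}  # noise
    drive = sum(w[i] for i in active)              # crude one-step integration
    if drive >= THRESH:                            # postsynaptic "spike"
        for i in range(N):
            if i in active:
                w[i] = min(1.0, w[i] + A_PLUS)     # causal input: LTP
            else:
                w[i] = max(0.0, w[i] - A_MINUS)    # non-causal input: LTD

pat_mean = sum(w[:N_PAT]) / N_PAT
noise_mean = sum(w[N_PAT:]) / (N - N_PAT)
print(pat_mean > noise_mean)  # → True: selectivity to the repeating pattern
```

The full model operates on millisecond spike times within Poisson trains; the point carried over here is only that reliably co-active inputs win the LTP/LTD competition.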
41. Oscillations, phase-of-firing coding and STDP: an efficient learning scheme (Masquelier et al. 2009)
The model demonstrates how a common oscillatory drive for a group of neurons formats and reliabilizes their spike times - through an activation-to-phase conversion - so that repeating activation patterns can be easily detected and learned by a downstream neuron equipped with STDP, and then recognized in just one oscillation cycle.
42. Prefrontal cortical mechanisms for goal-directed behavior (Hasselmo 2005)
"... a model of prefrontal cortex function emphasizing the influence of goal-related activity on the choice of the next motor output. ... Different neocortical minicolumns represent distinct sensory input states and distinct motor output actions. The dynamics of each minicolumn include separate phases of encoding and retrieval. During encoding, strengthening of excitatory connections forms forward and reverse associations between each state, the following action, and a subsequent state, which may include reward. During retrieval, activity spreads from reward states throughout the network. The interaction of this spreading activity with a specific input state directs selection of the next appropriate action. Simulations demonstrate how these mechanisms can guide performance in a range of goal directed tasks, and provide a functional framework for some of the neuronal responses previously observed in the medial prefrontal cortex during performance of spatial memory tasks in rats."
43. Reinforcement learning of targeted movement (Chadderdon et al. 2012)
"Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint “forearm” to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. ..."
44. Relative spike time coding and STDP-based orientation selectivity in V1 (Masquelier 2012)
Phenomenological spiking model of the cat early visual system. We show how natural vision can drive spike time correlations on sufficiently fast time scales to lead to the acquisition of orientation-selective V1 neurons through STDP. This is possible without reference times such as stimulus onsets, or saccade landing times. But even when such reference times are available, we demonstrate that the relative spike times encode the images more robustly than the absolute ones.
45. Reward modulated STDP (Legenstein et al. 2008)
"... This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. ... In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics."
46. Robust Reservoir Generation by Correlation-Based Learning (Yamazaki & Tanaka 2008)
"Reservoir computing (RC) is a new framework for neural computation. A reservoir is usually a recurrent neural network with fixed random connections. In this article, we propose an RC model in which the connections in the reservoir are modifiable. ... We apply our RC model to trace eyeblink conditioning. The reservoir bridged the gap of an interstimulus interval between the conditioned and unconditioned stimuli, and a readout neuron was able to learn and express the timed conditioned response."
47. Role for short term plasticity and OLM cells in containing spread of excitation (Hummos et al 2014)
This hippocampus model was developed by matching experimental data, including neuronal behavior, synaptic current dynamics, network spatial connectivity patterns, and short-term synaptic plasticity. Furthermore, it was constrained to perform pattern completion and separation under the effects of acetylcholine. The model was then used to investigate the role of short-term synaptic depression at the recurrent synapses in CA3, and inhibition by basket cell (BC) interneurons and oriens lacunosum-moleculare (OLM) interneurons in containing the unstable spread of excitatory activity in the network.
48. Roles of subthalamic nucleus and DBS in reinforcement conflict-based decision making (Frank 2006)
Deep brain stimulation (DBS) of the subthalamic nucleus dramatically improves the motor symptoms of Parkinson's disease, but causes cognitive side effects such as impulsivity. This model from Frank (2006) simulates the role of the subthalamic nucleus (STN) within the basal ganglia circuitry in decision making. The STN dynamically modulates network decision thresholds in proportion to decision conflict. The STN "hold your horses" signal adaptively allows the system more time to settle on the best choice when multiple options are valid. The model also replicates effects in Parkinson's patients on and off DBS in experiments designed to test the model (Frank et al, 2007).
49. Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. ... This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. ... Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. See the paper for further details.
50. Sensorimotor cortex reinforcement learning of 2-joint virtual arm reaching (Neymotin et al. 2013)
"... We developed a model of sensory and motor neocortex consisting of 704 spiking model-neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA, and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a 2-joint virtual arm to reach to a fixed target. ... "
51. Simulated cortical color opponent receptive fields self-organize via STDP (Eguchi et al., 2014)
"... In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. ... Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. ... After training with natural images, the neurons display heightened sensitivity to specific colors. ..."
52. Single compartment Dorsal Lateral Medium Spiny Neuron w/ NMDA and AMPA (Biddell and Johnson 2013)
A biophysical single compartment model of the dorsal lateral striatum medium spiny neuron is presented here. The model is an implementation and adaptation of a previously described model (Mahon et al. 2002), extended to include NMDA and AMPA receptor models that have been fit to dorsal lateral striatal neurons. The receptor models allow for excitation by other neuron models.
53. Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
Development of spiking grid cells and place cells in the entorhinal-hippocampal system to represent positions in large spaces.
54. STDP allows fast rate-modulated coding with Poisson-like spike trains (Gilson et al. 2011)
The model demonstrates that a neuron equipped with STDP robustly detects repeating rate patterns among its afferents, whose spikes are generated on the fly by inhomogeneous Poisson sampling, provided those rates have narrow temporal peaks (10-20 ms) - a condition met by many experimental Post-Stimulus Time Histograms (PSTHs).
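The "on the fly" spike generation referred to above can be illustrated with a simple inhomogeneous Poisson sampler. The rate profile below (a narrow peak on a low baseline) is hypothetical, chosen only to resemble the kind of peaked PSTH the model requires; the STDP rule itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def inhomogeneous_poisson(rate, dt=0.001):
    """Sample one spike train from a time-varying rate (Hz), using a
    Bernoulli approximation per bin (valid when rate * dt << 1)."""
    return rng.random(len(rate)) < rate * dt

# Hypothetical PSTH-like profile: 5 Hz baseline, 15 ms peak at 100 Hz.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
rate = np.full_like(t, 5.0)
rate[(t >= 0.200) & (t < 0.215)] = 100.0

# Each trial draws fresh spikes from the same underlying rate profile,
# so spike times vary trial-to-trial while the rate pattern repeats.
trials = [inhomogeneous_poisson(rate, dt) for _ in range(200)]
psth = np.mean(trials, axis=0) / dt   # empirical rate estimate (Hz)

peak_rate = psth[(t >= 0.200) & (t < 0.215)].mean()
base_rate = psth[t < 0.200].mean()
```

Averaging the sampled trains recovers the narrow temporal peak, which is exactly the structure STDP can lock onto even though individual spike times never repeat.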
55. Striatal dopamine ramping: an explanation by reinforcement learning with decay (Morita & Kato, 2014)
Incorporating decay of learned values into temporal-difference (TD) learning (Sutton & Barto, 1998, Reinforcement Learning (MIT Press)) causes the TD reward prediction error (RPE) to ramp up as the goal is approached. Given the hypothesis that dopamine represents TD RPE (Montague et al., 1996, J Neurosci 16:1936; Schultz et al., 1997, Science 275:1593), this could explain the reported ramping of the dopamine concentration in the striatum in a reward-associated spatial navigation task (Howe et al., 2013, Nature 500:575).
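The mechanism can be shown in a few lines: TD(0) on a linear track, with all learned values multiplicatively decayed at every step. The parameters below are illustrative, not the paper's; the qualitative result is that after learning the RPE stays positive and grows toward the goal instead of converging to zero.

```python
import numpy as np

# TD(0) on a linear track with decay of learned values (toy parameters).
n_states = 10
alpha, gamma, decay = 0.3, 0.98, 0.01
V = np.zeros(n_states + 1)            # V[n_states] is the terminal state

for trial in range(2000):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0     # reward at the goal
        delta = r + gamma * V[s + 1] - V[s]        # TD reward prediction error
        V[s] += alpha * delta
        V[:n_states] *= (1.0 - decay)              # decay of learned values

# RPE along the track after learning: without decay this would be near
# zero everywhere; with decay it ramps up toward the reward location.
rpe = np.array([(1.0 if s == n_states - 1 else 0.0)
                + gamma * V[s + 1] - V[s] for s in range(n_states)])
```

Intuitively, decay removes value in proportion to its magnitude, and values are largest near the reward, so a larger positive RPE is needed there on every traversal to replace what decay took away - hence the ramp.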
56. Synaptic scaling balances learning in a spiking model of neocortex (Rowan & Neymotin 2013)
Learning in the brain requires complementary mechanisms: potentiation and activity-dependent homeostatic scaling. We introduce synaptic scaling to a biologically-realistic spiking model of neocortex which can learn changes in oscillatory rhythms using STDP, and show that scaling is necessary to balance both positive and negative changes in input from potentiation and atrophy. We discuss some of the issues that arise when considering synaptic scaling in such a model, and show that scaling regulates activity whilst allowing learning to remain unaltered.
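The balance described above can be caricatured in a rate-based toy model (an illustrative multiplicative scaling rule, not the paper's spiking implementation): a Hebbian term alone would make the unit's rate run away, while scaling of all incoming weights holds the time-averaged rate near a target.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in = 50
w = rng.uniform(0.4, 0.6, n_in)      # incoming weights (arbitrary start)
target = 5.0                         # homeostatic target rate
rates = []

for step in range(3000):
    x = rng.uniform(0, 1, n_in)      # presynaptic activity this step
    r = w @ x                        # postsynaptic rate (linear unit)
    rates.append(r)
    # Hebbian-like potentiation: unchecked, this drives rates upward.
    w += 1e-4 * r * x
    # Multiplicative synaptic scaling: scale all weights toward target.
    w *= 1.0 + 0.02 * (target - r) / target

final = np.mean(rates[-500:])
```

Because scaling multiplies every weight by the same factor, it regulates overall activity without erasing the relative weight pattern that potentiation has learned - the point the paper makes in the spiking setting.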
57. Towards a biologically plausible model of LGN-V1 pathways (Lian et al 2019)
"Increasing evidence supports the hypothesis that the visual system employs a sparse code to represent visual stimuli, where information is encoded in an efficient way by a small population of cells that respond to sensory input at a given time. This includes simple cells in primary visual cortex (V1), which are defined by their linear spatial integration of visual stimuli. Various models of sparse coding have been proposed to explain physiological phenomena observed in simple cells. However, these models have usually made the simplifying assumption that inputs to simple cells already incorporate linear spatial summation. This overlooks the fact that these inputs are known to have strong non-linearities such as the separation of ON and OFF pathways, or separation of excitatory and inhibitory neurons. Consequently these models ignore a range of important experimental phenomena that are related to the emergence of linear spatial summation from non-linear inputs, such as segregation of ON and OFF sub-regions of simple cell receptive fields, the push-pull effect of excitation and inhibition, and phase-reversed cortico-thalamic feedback. Here, we demonstrate that a two-layer model of the visual pathway from the lateral geniculate nucleus to V1 that incorporates these biological constraints on the neural circuits and is based on sparse coding can account for the emergence of these experimental phenomena, diverse shapes of receptive fields and contrast invariance of orientation tuning of simple cells when the model is trained on natural images. The model suggests that sparse coding can be implemented by the V1 simple cells using neural circuits with a simple biologically plausible architecture."
