Circuits that contain the Model Concept: Memory

(The mechanisms by which neural tissue stores or recalls an impression of past occurrences.)
    Models
1. 3D model of the olfactory bulb (Migliore et al. 2014)
2. 3D olfactory bulb: operators (Migliore et al. 2015)
3. A 1000 cell network model for Lateral Amygdala (Kim et al. 2013)
4. A computational model of systems memory consolidation and reconsolidation (Helfer & Shultz 2019)
5. A large-scale model of the functioning brain (Spaun) (Eliasmith et al. 2012)
6. A model of antennal lobe of bee (Chen JY et al. 2015)
7. A reinforcement learning example (Sutton and Barto 1998)
8. A single-cell spiking model for the origin of grid-cell patterns (D'Albis & Kempter 2017)
9. A spiking neural network model of model-free reinforcement learning (Nakano et al 2015)
10. Acetylcholine-modulated plasticity in reward-driven navigation (Zannone et al 2018)
11. Adaptive robotic control driven by a versatile spiking cerebellar network (Casellato et al. 2014)
12. Alleviating catastrophic forgetting: context gating and synaptic stabilization (Masse et al 2018)
13. Alternative time representation in dopamine models (Rivest et al. 2009)
14. An electrophysiological model of GABAergic double bouquet cells (Chrysanthidis et al. 2019)
15. Basal Ganglia and Levodopa Pharmacodynamics model for parameter estimation in PD (Ursino et al 2020)
16. Cancelling redundant input in ELL pyramidal cells (Bol et al. 2011)
17. Cerebellar gain and timing control model (Yamazaki & Tanaka 2007)(Yamazaki & Nagao 2012)
18. Cerebellar memory consolidation model (Yamazaki et al. 2015)
19. Cerebellar Model for the Optokinetic Response (Kim and Lim 2021)
20. Coding explains development of binocular vision and its failure in Amblyopia (Eckmann et al 2020)
21. Cognitive and motor cortico-basal ganglia interactions during decision making (Guthrie et al 2013)
22. Computational model of the distributed representation of operant reward memory (Costa et al. 2020)
23. Cortex learning models (Weber et al. 2006, Weber and Triesch 2006, Weber and Wermter 2006/7)
24. Cortical model with reinforcement learning drives realistic virtual arm (Dura-Bernal et al 2015)
25. Cortico-Basal Ganglia Loop (Mulcahy et al 2020)
26. Development of orientation-selective simple cell receptive fields (Rishikesh and Venkatesh, 2003)
27. Dynamic dopamine modulation in the basal ganglia: Learning in Parkinson (Frank et al 2004,2005)
28. Effects of increasing CREB on storage and recall processes in a CA1 network (Bianchi et al. 2014)
29. Encoding and retrieval in a model of the hippocampal CA1 microcircuit (Cutsuridis et al. 2009)
30. First-Spike-Based Visual Categorization Using Reward-Modulated STDP (Mozafari et al. 2018)
31. Fixed point attractor (Hasselmo et al 1995)
32. FRAT: An amygdala-centered model of fear conditioning (Krasne et al. 2011)
33. Functional balanced networks with synaptic plasticity (Sadeh et al. 2015)
34. Generation of stable heading representations in diverse visual scenes (Kim et al 2019)
35. Hebbian STDP for modelling the emergence of disparity selectivity (Chauhan et al 2018)
36. Hierarchical anti-Hebbian network model for the formation of spatial cells in 3D (Soman et al 2019)
37. Hippocampal context-dependent retrieval (Hasselmo and Eichenbaum 2005)
38. Large scale model of the olfactory bulb (Yu et al., 2013)
39. Learning spatial transformations through STDP (Davison and Frégnac 2006)
40. Linking STDP and Dopamine action to solve the distal reward problem (Izhikevich 2007)
41. Logarithmic distributions prove that intrinsic learning is Hebbian (Scheler 2017)
42. Long time windows from theta modulated inhib. in entorhinal–hippo. loop (Cutsuridis & Poirazi 2015)
43. Modeling hebbian and homeostatic plasticity (Toyoizumi et al. 2014)
44. Modelling gain modulation in stability-optimised circuits (Stroud et al 2018)
45. Motor system model with reinforcement learning drives virtual arm (Dura-Bernal et al 2017)
46. Neurogenesis in the olfactory bulb controlled by top-down input (Adams et al 2018)
47. Neuronify: An Educational Simulator for Neural Circuits (Dragly et al 2017)
48. Odor supported place cell model and goal navigation in rodents (Kulvicius et al. 2008)
49. Olfactory bulb mitral and granule cell column formation (Migliore et al. 2007)
50. Optimal Localist and Distributed Coding Through STDP (Masquelier & Kheradpisheh 2018)
51. Oscillations, phase-of-firing coding and STDP: an efficient learning scheme (Masquelier et al. 2009)
52. Prefrontal cortical mechanisms for goal-directed behavior (Hasselmo 2005)
53. Reinforcement learning of targeted movement (Chadderdon et al. 2012)
54. Relative spike time coding and STDP-based orientation selectivity in V1 (Masquelier 2012)
55. Reward modulated STDP (Legenstein et al. 2008)
56. Robust Reservoir Generation by Correlation-Based Learning (Yamazaki & Tanaka 2008)
57. Role for short term plasticity and OLM cells in containing spread of excitation (Hummos et al 2014)
58. Roles of subthalamic nucleus and DBS in reinforcement conflict-based decision making (Frank 2006)
59. Scaling self-organizing maps to model large cortical networks (Bednar et al 2004)
60. Sensorimotor cortex reinforcement learning of 2-joint virtual arm reaching (Neymotin et al. 2013)
61. SHOT-CA3, RO-CA1 Training, & Simulation CODE in models of hippocampal replay (Nicola & Clopath 2019)
62. Simulated cortical color opponent receptive fields self-organize via STDP (Eguchi et al., 2014)
63. Single compartment Dorsal Lateral Medium Spiny Neuron w/ NMDA and AMPA (Biddell and Johnson 2013)
64. Spiking GridPlaceMap model (Pilly & Grossberg, PLoS One, 2013)
65. STDP allows fast rate-modulated coding with Poisson-like spike trains (Gilson et al. 2011)
66. Striatal dopamine ramping: an explanation by reinforcement learning with decay (Morita & Kato, 2014)
67. Synaptic scaling balances learning in a spiking model of neocortex (Rowan & Neymotin 2013)
68. Towards a biologically plausible model of LGN-V1 pathways (Lian et al 2019)