Large-scale neural model of visual short-term memory (Ulloa & Horwitz 2016; Horwitz et al. 2005, ...)

Accession: 206337
Large-scale neural model of visual short-term memory embedded into a 998-node connectome. The model simulates electrical activity across neuronal populations in a number of brain regions and converts that activity into fMRI and MEG time series. The model uses a neural simulator developed at the Brain Imaging and Modeling Section of the National Institutes of Health.
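In this family of models, the fMRI-like signal is derived from the simulated synaptic activity (Tagamets & Horwitz 1998, reference 1 below). A minimal, illustrative sketch of that idea follows; the function name, the window length, the input file, and the use of a plain windowed sum (rather than the model's actual hemodynamic treatment) are assumptions for illustration, not the model's own code:

import numpy as np

def simulated_fmri(syn, window):
    # syn: (timesteps, units) array of simulated synaptic activity for one region.
    # Sum |activity| over all units, then integrate over consecutive windows of
    # `window` timesteps to produce one fMRI-like sample per window.
    total = np.abs(syn).sum(axis=1)          # total regional activity per timestep
    n = (total.size // window) * window      # drop any incomplete trailing window
    return total[:n].reshape(-1, window).sum(axis=1)

# e.g. bold_like = simulated_fmri(np.loadtxt('efd1.out'), window=50)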
References:
1. Tagamets MA, Horwitz B (1998) Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cereb Cortex 8:310-20 [PubMed]
2. Ulloa A, Horwitz B (2016) Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex. Front Neuroinform 10:32 [PubMed]
3. Horwitz B, Warner B, Fitzer J, Tagamets MA, Husain FT, Long TW (2005) Investigating the neural basis for functional and effective connectivity. Application to fMRI. Philos Trans R Soc Lond B Biol Sci 360:1093-108 [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism: Prefrontal cortex (PFC);
Simulation Environment: Python;
Model Concept(s): Working memory;
Implementer(s): Ulloa, Antonio [antonio.ulloa at alum.bu.edu];
# ============================================================================
#
#                            PUBLIC DOMAIN NOTICE
#
#       National Institute on Deafness and Other Communication Disorders
#
# This software/database is a "United States Government Work" under the 
# terms of the United States Copyright Act. It was written as part of 
# the author's official duties as a United States Government employee and 
# thus cannot be copyrighted. This software/database is freely available 
# to the public for use. The NIDCD and the U.S. Government have not placed 
# any restriction on its use or reproduction. 
#
# Although all reasonable efforts have been taken to ensure the accuracy 
# and reliability of the software and data, the NIDCD and the U.S. Government 
# do not and cannot warrant the performance or results that may be obtained 
# by using this software or data. The NIDCD and the U.S. Government disclaim 
# all warranties, express or implied, including warranties of performance, 
# merchantability or fitness for any particular purpose.
#
# Please cite the author in any work or product based on this material.
# 
# ==========================================================================



# ***************************************************************************
#
#   Large-Scale Neural Modeling software (LSNM)
#
#   Section on Brain Imaging and Modeling
#   Voice, Speech and Language Branch
#   National Institute on Deafness and Other Communication Disorders
#   National Institutes of Health
#
#   This file (plot_neural_auditory_topographic.py) was created on March 26, 2015.
#
#
#   Author: Antonio Ulloa. Last updated by Antonio Ulloa March 26, 2015  
# **************************************************************************/

# plot_neural_auditory_topographic.py
#
# Plots simulated neural activity from the output data files of the auditory
# delayed match-to-sample simulation

import numpy as np
import matplotlib.pyplot as plt

# Load data files
mgns = np.loadtxt('mgns.out')
efd1 = np.loadtxt('efd1.out')
efd2 = np.loadtxt('efd2.out')
ea1u = np.loadtxt('ea1u.out')
ea1d = np.loadtxt('ea1d.out')
ea2u = np.loadtxt('ea2u.out')
ea2d = np.loadtxt('ea2d.out')
ea2c = np.loadtxt('ea2c.out')
exfr = np.loadtxt('exfr.out')
exfs = np.loadtxt('exfs.out')
estg = np.loadtxt('estg.out')

fig = plt.figure(1)

plt.suptitle('SIMULATED NEURAL ACTIVITY')

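# Optional sanity check (illustrative addition, not part of the original
# script): the offset step below assumes every loaded array has the same
# (timesteps x units) shape as mgns.
for name, arr in (('efd1', efd1), ('efd2', efd2), ('ea1u', ea1u), ('ea1d', ea1d),
                  ('ea2u', ea2u), ('ea2d', ea2d), ('ea2c', ea2c),
                  ('exfr', exfr), ('exfs', exfs), ('estg', estg)):
    assert arr.shape == mgns.shape, '%s has unexpected shape %s' % (name, arr.shape)
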
# Offset each unit's time course by its column index so that the traces within
# a region are stacked vertically instead of being drawn on top of each other
offset = np.arange(mgns.shape[1])
mgns += offset
efd1 += offset
efd2 += offset
ea1u += offset
ea1d += offset
ea2u += offset
ea2d += offset
ea2c += offset
exfr += offset
exfs += offset
estg += offset

# Plot MGN unit time courses
ax = plt.subplot(3,4,1)
plt.plot(mgns)
plt.title('MGN')
plt.ylim([38,69])

# Plot A1u unit time courses
ax = plt.subplot(3,4,5)
plt.plot(ea1u)
plt.title('A1u')
plt.ylim([38,69])

# Plot A1d unit time courses
ax = plt.subplot(3,4,9)
plt.plot(ea1d)
plt.title('A1d')
plt.ylim([38,69])

# Plot A2u unit time courses
ax = plt.subplot(3,4,2)
plt.plot(ea2u)
plt.title('A2u')
plt.ylim([38,69])

# Plot A2d unit time courses
ax = plt.subplot(3,4,6)
plt.plot(ea2d)
plt.title('A2d')
plt.ylim([38,69])

# Plot A2c unit time courses
ax = plt.subplot(3,4,10)
plt.plot(ea2c)
plt.title('A2c')
plt.ylim([38,69])

# Plot STG unit time courses
ax = plt.subplot(3,4,3)
plt.plot(estg)
plt.title('STG')
plt.ylim([38,69])

# Plot FS unit time courses
ax = plt.subplot(3,4,7)
plt.plot(exfs)
plt.title('FS')
plt.ylim([38,69])

# Plot FD1 unit time courses
ax = plt.subplot(3,4,11)
plt.plot(efd1)
plt.title('FD1')
plt.ylim([38,69])

# Plot FD2 unit time courses
ax = plt.subplot(3,4,4)
plt.plot(efd2)
plt.title('FD2')
plt.ylim([38,69])

# Plot FR unit time courses
ax = plt.subplot(3,4,8)
plt.plot(exfr)
plt.title('FR')
plt.ylim([38,69])

# Show the plot on the screen
plt.show()
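
# Optional, illustrative addition (not in the original script): also write the
# figure to disk so it can be inspected after a batch or headless run.  The
# filename is an assumption.
fig.savefig('simulated_neural_activity.png', dpi=150)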
