Large-scale neural model of visual short-term memory (Ulloa, Horwitz 2016; Horwitz, et al. 2005,...)

Accession: 206337
Large-scale neural model of visual short-term memory embedded into a 998-node connectome. The model simulates electrical activity across neuronal populations in a number of brain regions and converts that activity into fMRI and MEG time series. The model uses a neural simulator developed at the Brain Imaging and Modeling Section of the National Institutes of Health.
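The exact forward model that LSNM uses to convert neural activity into imaging time series is described in the references below; as a rough illustration of the general idea only, simulated synaptic activity can be convolved with a gamma-shaped hemodynamic response function to produce a BOLD-like fMRI signal (the function names and parameter values here are illustrative, not taken from the LSNM code):

```python
import numpy as np
from math import gamma

def gamma_hrf(t, shape=6.0, scale=0.9):
    """Gamma-shaped hemodynamic response function (illustrative parameters)."""
    return (t ** (shape - 1.0) * np.exp(-t / scale)) / (gamma(shape) * scale ** shape)

def neural_to_bold(neural, dt=0.05):
    """Convolve simulated neural activity with the HRF to get a BOLD-like signal."""
    t = np.arange(0.0, 25.0, dt)          # ~25 s of HRF support
    hrf = gamma_hrf(t)
    # Truncate the full convolution to the length of the input signal
    return np.convolve(neural, hrf)[: len(neural)] * dt
```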
References:
1. Tagamets MA, Horwitz B (1998) Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cereb Cortex 8:310-20 [PubMed]
2. Ulloa A, Horwitz B (2016) Embedding Task-Based Neural Models into a Connectome-Based Model of the Cerebral Cortex. Front Neuroinform 10:32 [PubMed]
3. Horwitz B, Warner B, Fitzer J, Tagamets MA, Husain FT, Long TW (2005) Investigating the neural basis for functional and effective connectivity. Application to fMRI. Philos Trans R Soc Lond B Biol Sci 360:1093-108 [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism: Prefrontal cortex (PFC);
Cell Type(s):
Channel(s):
Gap Junctions:
Receptor(s):
Gene(s):
Transmitter(s):
Simulation Environment: Python;
Model Concept(s): Working memory;
Implementer(s): Ulloa, Antonio [antonio.ulloa at alum.bu.edu];
=================================================================
PUBLIC DOMAIN NOTICE

National Institute on Deafness and Other Communication Disorders

This software/database is a "United States Government Work" 
under the terms of the United States Copyright Act. It was 
written as part of the author's official duties as a United 
States Government employee and thus cannot be copyrighted. 
This software/database is freely available to the public for 
use. The NIDCD and the U.S. Government have not placed any 
restriction on its use or reproduction. 

Although all reasonable efforts have been taken to ensure 
the accuracy and reliability of the software and data, the 
NIDCD and the U.S. Government do not and cannot warrant the
performance or results that may be obtained by using this 
software or data. The NIDCD and the U.S. Government disclaim 
all warranties, express or implied, including warranties of 
performance, merchantability or fitness for any particular 
purpose.

Please cite the author in any work or product based on this 
material.
=================================================================

Large-Scale Neural Modeling software (LSNM)

Section on Brain Imaging and Modeling
Voice, Speech and Language Branch
National Institute on Deafness and Other Communication Disorders
National Institutes of Health

This README file was last modified on August 9, 2015.

================================================================

Python version of LSNM (Large-Scale Neural Modeling software).
This repository contains the following directories:

   * stimuli_creation: Scripts to create stimuli for simulation
   
   * simulation: Scripts to simulate neural models
   
   * analysis: Scripts to analyze simulated data
   
   * visualization: Scripts to visualize simulated data

   * auditory_model: Auditory model of Husain et al. (2004)

   * visual_model: Visual model of Tagamets and Horwitz (1998)
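As a rough sketch of the kind of input the stimuli_creation scripts produce for the delayed match-to-sample task simulated by the visual model (the function name, parameters, and values here are hypothetical, not the actual script interface):

```python
import numpy as np

def dms_trial(stim_len=20, delay_len=50, on=0.7, off=0.05):
    """Build a sample/delay/match input time series for one DMS trial.

    'on' drives the input units while a stimulus is presented; 'off' is
    the low background input during the delay (values are hypothetical).
    """
    sample = np.full(stim_len, on)   # first stimulus presentation
    delay  = np.full(delay_len, off) # delay period (memory maintenance)
    match  = np.full(stim_len, on)   # second (match/non-match) stimulus
    return np.concatenate([sample, delay, match])
```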


To run this software you will need the following installed on your
local machine or server:

   * Python 2.7, on any platform (so far tested on Mac OS and
     RedHat Linux). Please note that Python LSNM is NOT compatible
     with either Python 2.6 or Python 3.

   * The Python modules matplotlib, numpy, PyQt4, and pandas, among
     other scientific computation modules (the standard-library
     modules re, random, math, and sys are also used). My advice is
     to download Anaconda Python, freely available from continuum.io,
     which contains Python 2.7 and a full set of modules commonly
     used in scientific computation.
     
   * The Virtual Brain (TVB) Python modules, located at the TVB
     github repository at https://github.com/the-virtual-brain. You
     will need to clone the 'tvb-library' and 'tvb-data' repositories
     and install them locally so that your Python installation is
     able to see the location of those modules. Please refer to the
     instructions provided in the github repository on how to cleanly
     install TVB. Also, refer to the 'Alternate installation: the user
     scheme' at https://docs.python.org/2/install/ for instructions
     on how to install Python modules locally when you don't have
     'write' permission to the global site-packages directory (e.g.,
     you are only a user on a Unix system and do not have a 'root'
     or 'su' password). If the 'Alternate installation' does not work,
     please refer to 'Installing Python modules' in the lsnm_in_python
     wiki, at https://github.com/NIDCD/lsnm_in_python/wiki/Installing-Python-modules.
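Before running the simulation scripts, it can help to verify that the required modules are importable. A minimal check along these lines (this helper is not part of LSNM itself; the module list below mirrors the requirements above, and 'tvb.simulator' / 'tvb.datatypes' are assumed TVB package names):

```python
import importlib

# Modules the instructions above require (PyQt4 and tvb.* will only
# be present once the corresponding packages are installed).
REQUIRED = ["matplotlib", "numpy", "pandas", "PyQt4",
            "tvb.simulator", "tvb.datatypes"]

def missing_modules(names):
    """Return the subset of 'names' that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    gaps = missing_modules(REQUIRED)
    if gaps:
        print("Missing modules: " + ", ".join(gaps))
    else:
        print("All required modules found.")
```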
