Learning spatiotemporal sequences using a recurrent spiking network that discretizes time (Maes et al 2020)

Accession: 257609
"Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity."
Reference:
1 . Maes A, Barahona M, Clopath C (2020) Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 16:e1007606 [PubMed]
Model Information:
Cell Type(s): Abstract integrate-and-fire adaptive exponential (AdEx) neuron; Abstract integrate-and-fire leaky neuron;
Simulation Environment: MATLAB; Julia;
Implementer(s): Maes, Amadeus [amadeus.maes at gmail.com];
This .txt file contains information about the code used to produce the results of 'Learning spatiotemporal signals using a recurrent spiking network that discretizes time' (published on 31 January 2020 in PLoS Computational Biology).
Author: Amadeus Maes
Contact: ahm17@ic.ac.uk or amadeus.maes@gmail.com

The code contains three folders:
1) Learning sequential dynamics
2) Training read-out neurons
3) Data and plotting

1)
In this Julia code the recurrent network is trained. Change the boolean variables in runsim.jl to select which simulations to run and whether to train, and change the variables in sim.jl to set the network size and other parameters. Repeated sequential input embeds a feedforward structure in the recurrent weight matrix. A new matrix can be created and trained, or existing matrices can be loaded and trained further. The structured stimulation can also be turned off to simulate spontaneous dynamics. Matrices can be saved in .txt format and loaded into the MATLAB code; the RNNs used for the main results are saved in .mat format in folder 3).
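
A minimal sketch of the .txt save/load round trip described above (the variable and file names here are illustrative assumptions, not the actual runsim.jl interface):

  using DelimitedFiles

  Ne, Ni = 200, 50                     # hypothetical network size (excitatory, inhibitory)
  W = 0.1 .* rand(Ne + Ni, Ne + Ni)    # stand-in for a trained recurrent weight matrix

  writedlm("W_trained.txt", W)         # save as plain text so the MATLAB code can read it
  W = readdlm("W_trained.txt")         # reload an existing matrix to train it further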

2)
In this MATLAB code the read-out synapses are trained. Change the variables in clockActionNetwork.m to choose the type of simulation you want to run. RNN weight matrices can be loaded into the file from folder 3).
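
A minimal sketch of loading a recurrent weight matrix before training the read-out synapses (file and variable names are illustrative assumptions; the actual options are set inside clockActionNetwork.m):

  % Load a weight matrix exported from the Julia code as plain text ...
  W = readmatrix('W_trained.txt');     % needs R2019a+; dlmread works on older versions
  % ... or load one of the pre-trained matrices shipped in folder 3).
  % The file and field names below are placeholders and depend on how the .mat was saved.
  data = load('some_trained_RNN.mat');
  % W = data.W;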

3)
Plotting code and the main trained RNN weight matrices are stored in this folder.


Hopefully the code is readable and of help. If anything is unclear or does not work, please feel free to contact me.