Modeling and MEG evidence of early consonance processing in auditory cortex (Tabas et al 2019)

Accession: 256624
Pitch is a fundamental attribute of auditory perception. The interaction of concurrent pitches gives rise to a sensation that can be characterized by its degree of consonance or dissonance. In this work, we propose that human auditory cortex (AC) processes pitch and consonance through a common neural network mechanism operating at early cortical levels. First, we developed a new model of neural ensembles incorporating realistic neuronal and synaptic parameters to assess pitch processing mechanisms at early stages of AC. Next, we designed a magnetoencephalography (MEG) experiment to measure the neuromagnetic activity evoked by dyads with varying degrees of consonance or dissonance. MEG results show that dissonant dyads evoke a pitch onset response (POR) with a latency up to 36 ms longer than consonant dyads. Additionally, we used the model to predict the processing time of concurrent pitches; here, consonant pitch combinations were decoded faster than dissonant combinations, in line with the experimental observations. Specifically, we found a striking match between the predicted and the observed latency of the POR as elicited by the dyads. These novel results suggest that consonance processing starts early in human auditory cortex and may share the network mechanisms that are responsible for (single) pitch processing.
Reference:
1. Tabas A, Andermann M, Schuberth V, Riedel H, Balaguer-Ballester E, Rupp A (2019) Modeling and MEG evidence of early consonance processing in auditory cortex. PLoS Comput Biol 15:e1006820 [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism: Auditory cortex;
Simulation Environment: MATLAB; Python;
Model Concept(s): Magnetoencephalography;
Implementer(s): Tabas, Alejandro [tabas at cbs.mpg.de];
function [s, r, lagSpace, timeSpace] = tdoch(pars, parsing)

    if nargin == 0; pars = loadParameters(); end
    if nargin < 2;  parsing = 0; end

    % Sound --> auditory nerve spiking probabilities
    if pars.verb, fprintf('Computing thalamic input...\n'); end

    r.lagSpace = binSpace(pars);
    r.freqSpace = 1000 ./ r.lagSpace; 

    if ischar(parsing)
        [timeSpace, r.A, r.n, r.b] = parseThalamic(parsing);
    else
        [timeSpace, r.A, r.n, r.b] = pyThalamic(r.lagSpace, pars);
    end

    % Subcortical routines return time in seconds, but the cortical
    % routines expect milliseconds; convert here
    r.timeSpace = timeSpace * 1000; 
    
    % Computing system evolution
    if ~pars.onlySubcort
        s = tdochCortex(r, pars);
    else
        s = 0;
    end

    timeSpace = r.timeSpace;
    lagSpace  = r.lagSpace;

end



function [timeSpace, A, n, b] = pyThalamic(lagSpace, pars)
 
    % The parse filename is randomized to allow parallel computations;
    % randi avoids the zero-index edge case of ceil(rand)
    chart = char(['A':'Z' 'a':'z' '0':'9']);
    parseID = ['pyparse' chart(randi(length(chart), 1, 4))];

    save([parseID, 'In.mat'], 'pars', 'lagSpace');
 
    % The shell that MATLAB spawns may not inherit your full PATH, so
    % Python packages installed through pip (or anaconda's libraries) may
    % not be visible. Adjust the line below as necessary. If plain python
    % does not work, try running it through ipython instead:
    %   python = '/home/tabs/Apps/anaconda/bin/ipython --colors=NoColor';
    % If it still fails, you may need to set MATLAB's environment manually:
    %   >> setenv('LD_LIBRARY_PATH', <pathToPythonBins>);
    % On the author's Linux system this was: setenv('LD_LIBRARY_PATH', '/bin')
    python = 'python';

    system([python, ' subthalamic.py ', parseID]);

    [timeSpace, A, n, b] = parseThalamic([parseID, 'Out.mat']);

    % Clean up both temporary handshake files
    delete([parseID, 'In.mat']);
    delete([parseID, 'Out.mat']);

end
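The pyThalamic/parseThalamic pair implies a simple .mat-file handshake: MATLAB writes `<parseID>In.mat` containing `pars` and `lagSpace`, invokes `subthalamic.py <parseID>`, and reads `timeSpace`, `A`, `n` and `b` back from `<parseID>Out.mat`. A minimal sketch of the Python side of that interface is shown below; the actual subcortical computation performed by `subthalamic.py` is replaced here by a placeholder that only reproduces the shapes the MATLAB parser expects, and the variable names are taken from the MATLAB side.

```python
import sys
import numpy as np
from scipy.io import loadmat, savemat

def run(parse_id):
    # Read the input written by the MATLAB caller (tdoch.m)
    data = loadmat(parse_id + 'In.mat', squeeze_me=True)
    lag_space = np.atleast_1d(data['lagSpace'])   # lags in ms

    # Placeholder for the actual subcortical model: return arrays with
    # the shapes that parseThalamic() expects (one row per lag)
    time_space = np.linspace(0.0, 1.0, 100)       # seconds; tdoch() converts to ms
    A = np.zeros((lag_space.size, time_space.size))
    n = np.zeros_like(A)
    b = np.zeros_like(A)

    # Write the output that parseThalamic() will load and delete
    savemat(parse_id + 'Out.mat',
            {'timeSpace': time_space, 'A': A, 'n': n, 'b': b})

if __name__ == '__main__':
    run(sys.argv[1])
```

Because the file names are derived from the randomized `parseID`, several MATLAB workers can run this handshake in the same directory without colliding.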



function [timeSpace, A, n, b] = parseThalamic(parsing)
 
    pyparse = load(parsing);
    timeSpace = pyparse.timeSpace;
    A = pyparse.A;    
    n = pyparse.n;
    b = pyparse.b;

end



function lagSpace = binSpace(pars)

    % Lags (ms) corresponding to the frequency interval (Hz); higher
    % frequencies map to shorter lags, and the grid runs longest lag first
    lagMin = 1000 / pars.freqInterval(2);
    lagMax = 1000 / pars.freqInterval(1);
    lagSpace = linspace(lagMax, lagMin, pars.N)';

end
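binSpace maps the model's frequency interval (Hz) to a grid of pars.N characteristic lags (ms), ordered from the longest lag (lowest frequency) down to the shortest. The same computation in a NumPy sketch, where `freq_interval` and `n` stand in for `pars.freqInterval` and `pars.N`:

```python
import numpy as np

def bin_space(freq_interval, n):
    """Lag grid (ms) for a frequency interval (Hz), longest lag first."""
    lag_min = 1000.0 / freq_interval[1]   # highest frequency -> shortest lag
    lag_max = 1000.0 / freq_interval[0]   # lowest frequency  -> longest lag
    return np.linspace(lag_max, lag_min, n)

lag_space = bin_space((100.0, 1000.0), 10)
freq_space = 1000.0 / lag_space   # recover the frequency axis, as in tdoch()
```

Note that because the grid is linear in lag, not in frequency, the recovered `freq_space` is nonuniformly spaced, with denser sampling at low frequencies.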