Modeling and MEG evidence of early consonance processing in auditory cortex (Tabas et al. 2019)

Accession: 256624
Pitch is a fundamental attribute of auditory perception. The interaction of concurrent pitches gives rise to a sensation that can be characterized by its degree of consonance or dissonance. In this work, we propose that human auditory cortex (AC) processes pitch and consonance through a common neural network mechanism operating at early cortical levels. First, we developed a new model of neural ensembles incorporating realistic neuronal and synaptic parameters to assess pitch processing mechanisms at early stages of AC. Next, we designed a magnetoencephalography (MEG) experiment to measure the neuromagnetic activity evoked by dyads with varying degrees of consonance or dissonance. MEG results show that dissonant dyads evoke a pitch onset response (POR) with a latency up to 36 ms longer than consonant dyads. Additionally, we used the model to predict the processing time of concurrent pitches; here, consonant pitch combinations were decoded faster than dissonant combinations, in line with the experimental observations. Specifically, we found a striking match between the predicted and the observed latency of the POR as elicited by the dyads. These novel results suggest that consonance processing starts early in human auditory cortex and may share the network mechanisms that are responsible for (single) pitch processing.
Reference:
1. Tabas A, Andermann M, Schuberth V, Riedel H, Balaguer-Ballester E, Rupp A (2019) Modeling and MEG evidence of early consonance processing in auditory cortex. PLoS Comput Biol 15:e1006820 [PubMed]
Model Information
Model Type: Realistic Network;
Brain Region(s)/Organism: Auditory cortex;
Cell Type(s):
Channel(s):
Gap Junctions:
Receptor(s):
Gene(s):
Transmitter(s):
Simulation Environment: MATLAB; Python;
Model Concept(s): Magnetoencephalography;
Implementer(s): Tabas, Alejandro [tabas at cbs.mpg.de];
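
The Python excerpt below is the model's peripheral and subcortical front end: auditory-nerve responses are simulated with the Zilany et al. (2014) model via the cochlea package, and a regularised summary autocorrelation (SACF) over a set of lags is computed as input to the cortical stage.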
import numpy as np 
import cochlea
import scipy.signal
#import matplotlib.pyplot as plt


def peripheralSpikes(sound, par, fs = -1):
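    # Simulate auditory-nerve spike trains with the Zilany et al. (2014) model
    # (cochlea package); anf_num gives the number of fibres per spontaneous-rate
    # class (high, medium, low, in the package's convention) per CF channel.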

    if fs == -1:
        fs = par['periphFs']

    anfTrains = cochlea.run_zilany2014(sound, fs,
                                       anf_num = [60, 25, 15],
                                       cf = par['cochChanns'],
                                       species = 'human', seed = 0)

    return(anfTrains)



def peripheral(sound, par, fs = -1):
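    # Rate (probability) version of the periphery: auditory-nerve firing rates
    # from the Zilany et al. (2014) model are combined across fibre classes and
    # converted to per-sample firing probabilities at par['subCortFs'].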

    if fs == -1:
        fs = par['periphFs']

    ANRts = cochlea.run_zilany2014_rate(sound, fs, 
                                        anf_types = ('lsr', 'msr', 'hsr'), 
                                        cf = par['cochChanns'], 
                                        species = 'human', 
                                        cohc = 1,
                                        cihc = 1)

    ANRts = .6 * ANRts['hsr'] + .25 * ANRts['msr'] + .15 * ANRts['lsr']


    # Convert firing rates (spikes/s) to per-sample firing probabilities
    if par['subCortFs'] == fs:
        p = ANRts.values / par['subCortFs']
    else:
        resampleN = int(len(sound) * par['subCortFs'] / fs)
        p = scipy.signal.resample(ANRts, num = resampleN) / par['subCortFs']
        p[p < 0] = 0   # resampling can introduce small negative values
        p[p > 1] = 1

    return(p)



def subcortical(prob, lagSpace, par):
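    # Running summary autocorrelation (SACF) of the peripheral firing
    # probabilities over the lags in lagSpace: per-channel autocorrelations are
    # low-pass integrated (par['subCortTau']), summed across channels,
    # optionally regularised, and resampled to the cortical rate par['cortFs'].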

    # Processing constants
    dt = 1./par['subCortFs']
    timeSpace = np.arange(start = dt, stop = (len(prob) + 1) * dt, step = dt)
    
    if par['SACFTau'] <= 0:
        tauFactor = 2 # [Wiegriebe2000]
        taus = tauFactor * lagSpace
        taus = np.maximum(taus, 0.0025) # [Wiegriebe2000]
    else:
        taus = par['SACFTau'] * np.ones(lagSpace.shape)


    # Initialising variables
    a  = np.zeros((len(lagSpace), par['cochChanns'][2]))
    B0 = np.zeros((len(lagSpace), par['cochChanns'][2]))
    B  = np.zeros(len(lagSpace))
    z0 = np.zeros(par['cochChanns'][2])
    z  = np.zeros(len(prob))
    k  = 0.5 * np.ones(len(prob))
    C  = np.zeros((len(prob), len(lagSpace)))

    for ti in range(1, len(prob)):
        
        # SACF 
        for li in range(len(lagSpace)):
            if (timeSpace[ti - 1] - lagSpace[li] - par['solvOnset']) >  dt:
                tiL = int(max(round(float(ti - par['subCortFs'] * lagSpace[li])), 1))
                a[li] = prob[ti] * prob[tiL] * par['subCortFs'] 
                
        B0 = B0 * np.exp(-dt / np.tile(taus, (1, par['cochChanns'][2])))
        B0 = B0 * dt / np.tile(taus, (1, par['cochChanns'][2])) + a 
        B0[B0 < 0] = 0
        B = B + (dt / par['subCortTau']) * (B0.sum(1) - B)

        if par['regularise'] == 1:

            # Normalisation factor (multiplicative)
            a0 = (prob[ti]**2) * par['subCortFs']
            z0 = z0 * np.exp(-dt / taus.min()) * (dt / taus.min()) + a0
            z0[z0 < 0] = 0    
            z[ti] = z[ti-1] + (dt/par['subCortTau']) * (z0.sum(0) - z[ti-1])
            
            # Normalisation factor (additive) 
            sInd = np.argmin((timeSpace[ti - 1] - 1.25 * lagSpace)**2)
            if sInd > (len(lagSpace)):
                k[ti] = 0.5
            else:
                k[ti] = B[sInd:].mean() / (z[ti] + 0.01)

            if z[ti] > 5:
                C[ti] = B / (z[ti] + 0.01)
            else:
                C[ti] = (B - 1 / (z[ti] + 0.1)) / (z[ti] + 0.01)

        else:
            C[ti] = B
            z[ti] = 0
            k[ti] = 0


    # Recomputing additive normalisation factor k and multiplicative gain A0
    if (par['regularise'] == 1):
        
        if (par['SACFGround'] < 0):
            if (len(prob) * dt < 0.075):
                print('Warning: dur(stim) < 75ms; Using default baseline 0.5')
                k0 = 0.5
            else:
                k0 = np.mean(k[int(0.05/dt) : int(min(0.10/dt, len(prob)))])            
            k[0:int(np.ceil(0.075 / dt))] = k0
            for ti in range(1, len(prob)):
                k[ti] = k[ti-1] + (dt/par['subCortTau']) * (k[ti] - k[ti-1])
        else:
            k[:] = par['SACFGround']

        kMat = np.transpose(np.tile(k, [len(lagSpace), 1]))
        A0 = par['mu0'] / np.maximum(0.1, 1 - kMat)
        C = np.maximum(0, A0 * (C - kMat))

    resampleN = int(len(prob) * par['cortFs'] / par['subCortFs'])
    A = scipy.signal.resample(C, num = resampleN, axis = 0)
    n = scipy.signal.resample(z, num = resampleN, axis = 0)
    b = scipy.signal.resample(k, num = resampleN, axis = 0)

    # Resampling introduces non-existent negative values
    A[A < 0] = 0
    # Resampling introduces early activity not observed in the original series
    A[0:int((par['solvOnset'] * par['cortFs']) + 1), :] = 0

    return [A, n, b]
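

A minimal usage sketch (not part of the original code) showing how these stages could be chained, assuming the functions above are available in the current module. The parameter values and the lag grid below are illustrative assumptions, not necessarily the settings used in the paper.

# --- Illustrative example; parameter values are assumptions, not the paper's settings ---
par = {
    'periphFs':   100e3,             # input sampling rate for the cochlear model (Hz)
    'subCortFs':  10e3,              # sampling rate of the subcortical (SACF) stage (Hz)
    'cortFs':     1e3,               # sampling rate of the cortical stage (Hz)
    'cochChanns': (125., 10e3, 40),  # (min CF, max CF, number of cochlear channels)
    'subCortTau': 0.008,             # SACF integration time constant (s); assumed
    'SACFTau':    -1,                # <= 0: lag-dependent time constants
    'SACFGround': -1,                # < 0: estimate the SACF baseline from the stimulus
    'solvOnset':  0.010,             # onset period excluded from the SACF (s); assumed
    'regularise': 1,                 # enable SACF normalisation
    'mu0':        1.0,               # gain of the normalised SACF; assumed
}

fs    = int(par['periphFs'])
t     = np.arange(0, 0.2, 1. / fs)               # 200-ms stimulus
sound = 0.1 * np.sin(2 * np.pi * 440. * t)       # 440-Hz pure tone (amplitude in Pa)

# Lag grid as a column vector; lags of 0.8-20 ms cover pitches of roughly 50-1250 Hz
lagSpace = np.linspace(1. / 1250, 1. / 50, 250).reshape(-1, 1)

p       = peripheral(sound, par)                 # auditory-nerve firing probabilities
A, n, b = subcortical(p, lagSpace, par)          # regularised SACF at par['cortFs']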