
Increased computational accuracy in multi-compartmental cable models (Lindsay et al. 2005)

Accession: 129149
Compartmental models of dendrites are the most widely used tool for investigating their electrical behaviour. Traditional models assign a single potential to a compartment, associated with the membrane potential at the centre of the segment represented by that compartment. All input to the segment, independently of its location on the segment, is assumed to act at the centre of the segment at the potential of the compartment. By contrast, the compartmental model introduced in this article assigns a potential to each end of a segment, and takes account of the location of input on a segment by partitioning the effect of that input between the axial currents at the proximal and distal boundaries of the segment. For a given neuron, the new and traditional approaches to compartmental modelling use the same number of locations at which the membrane potential is to be determined, and lead to ordinary differential equations that are structurally identical. However, in the presence of point process input, the solution achieved by the new approach is an order of magnitude more accurate and precise than that achieved by the traditional approach.
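To make the partitioning idea concrete, the sketch below (an illustration only, not the formulae derived in the paper) splits a point current injected at fractional position lambda along a segment between the segment's proximal and distal boundary nodes using simple linear weights; a traditional compartmental model would instead lump the entire current at the segment centre. The function name split_point_input and the linear weighting are assumptions made for this example.

#include <stdio.h>

/* Illustration only: divide a point current injected at fractional
   position lambda (0 = proximal end, 1 = distal end) of a segment
   between the two boundary nodes. The linear weights below are an
   assumed example; the paper derives the partition from the cable
   equations for each segment. */
static void split_point_input(double current, double lambda,
                              double *proximal, double *distal)
{
    *proximal = (1.0-lambda)*current;   /* share assigned to the proximal node */
    *distal   = lambda*current;         /* share assigned to the distal node   */
}

int main( void )
{
    double ip, id;
    split_point_input( 0.5, 0.3, &ip, &id);
    printf("proximal %g  distal %g\n", ip, id);
    return 0;
}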
Reference:
1. Lindsay AE, Lindsay KA, Rosenberg JR (2005) Increased computational accuracy in multi-compartmental cable models by a novel approach for precise point process localization. J Comput Neurosci 19:21-38 [PubMed]
Model Information
Model Type: Neuron or other electrically excitable cell;
Brain Region(s)/Organism:
Cell Type(s):
Channel(s): I Na,t; I K;
Gap Junctions:
Receptor(s):
Gene(s):
Transmitter(s):
Simulation Environment: NEURON; C or C++ program;
Model Concept(s): Methods;
Implementer(s):
/
LindsayEtAl2005
readme.txt
03-192.pdf
AnalyseResults.c
BitsAndPieces.c
CellData.dat
CompareSpikeTrain.c
Ed04.tex
ExactSolution.dat
GammaCode
Gen.tex
Gen1.tex
Gen2.tex
Gen3.tex
Gen4.tex
Gen5.tex
Gen6.tex
GenCom.c
GenCom1.c
GenCom2.c
GenComExactSoln.c
GenerateInput.c
GenerateInputText.c
GenRan.ran
GetNodeNumbers.c
Info100.dat
Info20.dat
Info200.dat
Info30.dat
Info300.dat
Info40.dat
Info400.dat
Info50.dat
Info500.dat
Info60.dat
Info70.dat
Info80.dat
Info90.dat
InputCurrents.dat
InputDendrite.dat
JaySpikeTrain.c
JayTest1.dat
JayTest100.dat
KenSpikeTrain.c
KenTest1.dat *
KenTest10.dat
KenTest100.dat *
KenTest10p.dat
KenTest1p.dat *
KenTest2.dat
KenTest2p.dat
KenTest3.dat
KenTest3p.dat
KenTest4.dat
KenTest4p.dat
KenTest5.dat
KenTest5p.dat
KenTest6.dat
KenTest6p.dat
KenTest7.dat
KenTest7p.dat
KenTest8.dat
KenTest8p.dat
KenTest9.dat
KenTest9p.dat
LU.c
Mean50.dat
Mean500.dat
mosinit.hoc
NC.pdf
NC.tex
NC1.tex
NC2.tex
NC3.tex
NC4.tex
NC5.tex
NC6.tex
NCFig2.eps *
NCFig3.eps *
NCFig4.eps *
NCFig5a.eps *
NCFig5b.eps *
NCFig6.eps *
NCPics.tex
NeuronDriver.hoc
NewComExactSoln.c
NewComp.pdf
NewComp.ps
NewComp.tex
NewComp.toc
NewComp1.tex
NewComp2.tex
NewComp3.tex
NewComp4.tex
NewComp5.tex
NewComp6.tex
NewCompFig1.eps
NewCompFig2.eps *
NewCompFig3.eps *
NewCompFig4.eps *
NewCompFig5a.eps *
NewCompFig5b.eps *
NewCompFig6.eps *
NewCompPics.tex
NewComSpikeTrain.c
NewRes.dat
NewRes60.dat
NewRes70.dat
NewRes80.dat
NewSynRes40.dat
NewTestCell.d3
NResults.res
OldComExactSoln.c
out.res
principles_01.tex
rand
Ratio.dat
RelErr.dat
ReviewOfSpines.pdf
SpikeTimes.dat
TestCell.d3
TestCell1.d3
TestCell2.d3
TestCell3.d3
TestCell4.d3
testcellnew2.hoc
TestCGS.c
TestGen1.c
TestSim.hoc
TestSim020.hoc
TestSim030.hoc
TestSim040.hoc
TestSim050.hoc
TestSim060.hoc
TestSim070.hoc
TestSim080.hoc
TestSim090.hoc
TestSim1.hoc
TestSim100.hoc
TestSim200.hoc
TestSim300.hoc
TestSim400.hoc
TestSim500
TestSim500.hoc
                            
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>

 /***************************************************************
    Function to analyse the numerical error of the Generalised
    Compartmental Model. A single input is simulated and the
    exact solution is determined using the Equivalent Cable.
  ***************************************************************/

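/*  Sparse matrices are stored in compressed sparse row form:
    a[ ] holds the non-zero entries, col[ ] their column indices
    and StartRow[j] the position in a[ ] of the first entry of
    row j, with StartRow[n] one past the last entry. The pointers
    l and u provide storage for triangular factors and are not
    used by this test program. */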
typedef struct SparseMatrix_t
{
        double *a;
        int *col;
        int *StartRow;
        int n;
        struct SparseMatrix_t *l;
        struct SparseMatrix_t *u;

} SparseMatrix;


double  ran(unsigned int *, unsigned int *, unsigned int *);
void solve( int, double *, double *, double *);

void    Matrix_Vector_Multiply( SparseMatrix *, double *, double *),
        Matrix_Malloc( SparseMatrix *, int, int),
        Matrix_Free( SparseMatrix *),
        cgs1( int *, SparseMatrix *, double *, double *, double),
        cgs( int *, SparseMatrix *, double *, double *, double);

/* Global definitions */
#define            CGS    1.0e-26   /* Tolerance used in CGS algorithm */
#define          NODES    50
#define          NSEED    2         /* Seed for random number generator */

/* Global Variables */
unsigned int ix, iy, iz;

int main( void )
{
    extern unsigned int ix, iy, iz;
    SparseMatrix a;
    int k, j, jj, getmem;
    double *vec, *phi, *soln, vold, vnew, *vlam;

/*  Initialise random number generator */
    srand( ((unsigned int) NSEED) );
    ix = rand( );
    iy = rand( );
    iz = rand( );
    soln = (double *) malloc( 4*sizeof(double) );
    vlam = (double *) malloc( 4*sizeof(double) );
    phi = (double *) malloc( 4*sizeof(double) );
    vec = (double *) malloc( 4*sizeof(double) );

/*  Phase 1. - Allocate sparse matrices for synaptic activity */
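/*  The 4x4 test matrix has rows 0-2 of the form x[j]-x[j+1]
    (entries +1 and -1) and a final row whose entries are the
    increments of vlam[ ] (four weights of 0.25), the structure
    inverted directly by solve( ) below */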
    Matrix_Malloc( &a, 4, 10 );
    for ( j=0 ; j<3 ; j++ ) {
        jj = 2*j;
        a.StartRow[j] = jj;
        a.a[jj] = 1.0;
        a.col[jj] = j;
        a.a[jj+1] = -1.0;
        a.col[jj+1] = j+1;
    }
    a.StartRow[3] = 6;
    vnew = 0.0;
    vlam[0] = vnew;
    for ( j=0 ; j<3 ; j++ ) {
        jj = 6+j;
        vold = vnew;
        vnew += 0.25;
        vlam[j+1] = vnew;
        a.a[jj] = vnew-vold;
        a.col[jj] = j;
    }
    jj = 9;
    a.a[jj] = 1.0-vnew;
    a.col[jj] = 3;
    a.StartRow[4] = 10;

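/*  Phase 2. - Manufacture the right-hand side phi[ ] from a known
    random solution soln[ ], then recover that solution twice, first
    with the CGS iterative solver and then with the direct solver
    solve( ), printing true and computed values for comparison */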
    for ( j=0 ; j<4 ; j++ ) soln[j] = ran( &ix, &iy, &iz);
    Matrix_Vector_Multiply( &a, soln, phi);
    for ( j=0 ; j<4 ; j++ ) vec[j] = 0.0;
    getmem = 1;
    cgs1( &getmem, &a, phi, vec, CGS);
    for ( k=0 ; k<4 ; k++ ) {
        printf("\nTrue %12.6lf \t Numerical %12.6lf", soln[k], vec[k]);
    }


    solve( 3, vlam, phi, vec);
    printf("\n\n");
    for ( k=0 ; k<4 ; k++ ) {
        printf("\nTrue %12.6lf \t Numerical %12.6lf", soln[k], vec[k]);
    }

    return 0;
}


 /**********************************************************
     Multiplies sparse matrix a[ ][ ] with vector v[ ]
  **********************************************************/
void Matrix_Vector_Multiply( SparseMatrix *a, double *v , double *b)
{
    int i, j, k, n;

    n = a->n;
    for ( j=0 ; j<n ; j++) {
        k = a->StartRow[j+1];
        for( b[j]=0.0,i=(a->StartRow[j]) ; i<k ; i++ ) {
            b[j] += (a->a[i])*v[a->col[i]];
        }
    }
    return;
}


 /***********************************************
        Allocate memory to a sparse matrix
  ***********************************************/
void Matrix_Malloc( SparseMatrix *a, int n, int w)
{
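/*  n is the number of rows and w the maximum number of non-zero
    entries; the factors l and u (2n-1 entries each) are allocated
    here but are not used by this test program */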
    a->a = (double *) malloc( w*sizeof(double) );
    a->col = (int *) malloc( w*sizeof(int) );
    a->StartRow = (int *) malloc( (n+1)*sizeof(int) );
    a->n = n;
    a->l = malloc(sizeof(SparseMatrix));
    a->u = malloc(sizeof(SparseMatrix));
    a->l->a = (double *) malloc( (2*n-1)*sizeof(double) );
    a->l->col = (int *) malloc( (2*n-1)*sizeof(int) );
    a->l->StartRow = (int *) malloc( (n+1)*sizeof(int) );
    a->l->n = n;
    a->u->a = (double *) malloc( (2*n-1)*sizeof(double) );
    a->u->col = (int *) malloc( (2*n-1)*sizeof(int) );
    a->u->StartRow = (int *) malloc( (n+1)*sizeof(int) );
    a->u->n = n;
    return;
}


 /**********************************************
     De-allocates memory of a sparse matrix
  **********************************************/
void Matrix_Free( SparseMatrix *a)
{
    /* Release the factor storage allocated in Matrix_Malloc( ) */
    free(a->l->a); free(a->l->col); free(a->l->StartRow); free(a->l);
    free(a->u->a); free(a->u->col); free(a->u->StartRow); free(a->u);
    free(a->a);
    free(a->col);
    free(a->StartRow);
    /* The SparseMatrix struct itself belongs to the caller
       (main declares it on the stack) and is not freed here */
}



 /************************************************************
     Returns a uniform random number in (0,1) generated by the
     Wichmann-Hill combination of three multiplicative
     congruential generators.
  ************************************************************/
double ran(unsigned int *ix, unsigned int *iy, unsigned int *iz)
{
    double tmp;

/*  1st item of modular arithmetic  */
    *ix = (171*(*ix))%30269;
/*  2nd item of modular arithmetic  */
    *iy = (172*(*iy))%30307;
/*  3rd item of modular arithmetic  */
    *iz = (170*(*iz))%30323;
/*  Generate random number in (0,1) */
    tmp = ((double) (*ix))/30269.0+((double) (*iy))/30307.0
          +((double) (*iz))/30323.0;
    return fmod(tmp,1.0);
}


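 /************************************************************
     Unpreconditioned conjugate gradient squared (CGS) iteration
     for the sparse system a x = b. Static workspace is allocated
     on first use and re-allocated whenever *getmem is set; the
     iteration stops once the squared residual falls below tol*n.
  ************************************************************/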
void cgs1(int *getmem, SparseMatrix *a, double *b, double *x, double tol)
{
    long int i, k, n, repeat;
    static int start=1;
    double rho_old, rho_new, alpha, beta, sigma, err;
    static double *p, *q, *r, *u, *v, *rb, *y;

/* Step 1 - Check memory status */
    printf("\nEntering 1 ");
    n = a->n;
    if ( start ) {
        *getmem = 1;
        start = 0;
    }
    if ( *getmem ) {
        if ( p ) free(p);
        p = (double *) malloc( n*sizeof(double) );
        if ( q ) free(q);
        q = (double *) malloc( n*sizeof(double) );
        if ( r ) free(r);
        r = (double *) malloc( n*sizeof(double) );
        if ( u ) free(u);
        u = (double *) malloc( n*sizeof(double) );
        if ( v ) free(v);
        v = (double *) malloc( n*sizeof(double) );
        if ( rb ) free(rb);
        rb = (double *) malloc( n*sizeof(double) );
        if ( y ) free(y);
        y = (double *) malloc( n*sizeof(double) );
        *getmem = 0;
    }

/* Step 2 - Initialise residual, p[ ] and q[ ] */
    Matrix_Vector_Multiply( a, x, r);
    for ( rho_old=0.0,i=0 ; i<n ; i++ ) {
        r[i] = b[i]-r[i];
        rho_old += r[i]*r[i];
        rb[i] = r[i];
        p[i] = 0.0;
        q[i] = 0.0;
    }
    if ( rho_old<tol*((double) n) ) return;


/* The main loop */
    rho_old = 1.0;
    do {

/* Compute scale parameter for solution update */
        for ( rho_new=0.0,i=0 ; i<n ; i++ ) rho_new += r[i]*rb[i];
        beta = rho_new/rho_old;

/* Update u[ ] and p[ ] */
        for ( i=0 ; i<n ; i++ ) {
            u[i] = r[i]+beta*q[i];
            p[i] = u[i]+beta*(q[i]+beta*p[i]);
        }

/* Update v[ ] and compute sigma */
        Matrix_Vector_Multiply( a, p, v);
        for ( sigma=0.0,i=0 ; i<n ; i++ ) sigma += rb[i]*v[i];

/* Compute alpha and update q[ ], v[ ] and x[ ] */
        if ( sigma==0.0 ) {
            printf(" Trouble ");
            for (i=0 ; i<n ; i++ ) {
                printf("\n%20.16lf",v[i]);
                getchar( );
            }
        }
        alpha = rho_new/sigma;
        for ( i=0 ; i<n ; i++ ) {
            q[i] = u[i]-alpha*v[i];
            v[i] = alpha*(u[i]+q[i]);
            x[i] += v[i];
        }

 /* Update r[ ] and estimate error */
        Matrix_Vector_Multiply( a, v, y);
        for ( err=0.0,i=0 ; i<n ; i++ ) {
            r[i] -= y[i];
            err += r[i]*r[i];
        }
        rho_old = rho_new;
        repeat = ( err > tol*((double) n) );
    } while ( repeat );
    printf(" Leaving\n");
    return;
}


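 /************************************************************
     Same CGS iteration as cgs1( ), but with the simpler stopping
     test err < tol and with the static workspace released on
     exit whenever *getmem <= 0.
  ************************************************************/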
void cgs(int *getmem, SparseMatrix *a, double *b, double *x, double tol)
{
    int i, k, n, finish;
    static int start=1;
    double rho_old, rho_new, alpha, beta, sigma, err;
    static double *p, *q, *u, *v, *r, *rb, *y;

/* Step 1 - Check memory status */
    n = a->n;
    if ( start ) {
        r = (double *) malloc( n*sizeof(double) );
        rb = (double *) malloc( n*sizeof(double) );
        p = (double *) malloc( n*sizeof(double) );
        q = (double *) malloc( n*sizeof(double) );
        u = (double *) malloc( n*sizeof(double) );
        v = (double *) malloc( n*sizeof(double) );
        y = (double *) malloc( n*sizeof(double) );
        start = 0;
    }

/* Step 2 - Initialise residual, p[ ] and q[ ] */
    Matrix_Vector_Multiply( a, x, r);
    for ( i=0 ; i<n ; i++ ) {
        r[i] = b[i]-r[i];
        rb[i] = r[i];
        p[i] = 0.0;
        q[i] = 0.0;
    }
    rho_old = 1.0;
    finish = 0;

/* The main loop */
    while ( !finish ) {

/* Compute scale parameter for solution update */
        for ( rho_new=0.0,i=0 ; i<n ; i++ ) rho_new += r[i]*rb[i];
        beta = rho_new/rho_old;

/* Update u[ ] and p[ ] */
        for ( i=0 ; i<n ; i++ ) {
            u[i] = r[i]+beta*q[i];
            p[i] = u[i]+beta*(q[i]+beta*p[i]);
        }

/* Update v[ ] and compute sigma */
        Matrix_Vector_Multiply( a, p, v);
        for ( sigma=0.0,i=0 ; i<n ; i++ ) sigma += rb[i]*v[i];

/* Compute alpha and update q[ ], v[ ] and x[ ] */
        alpha = rho_new/sigma;
        for ( i=0 ; i<n ; i++ ) {
            q[i] = u[i]-alpha*v[i];
            v[i] = alpha*(u[i]+q[i]);
            x[i] += v[i];
        }

 /* Update r[ ] and estimate error */
        Matrix_Vector_Multiply( a, v, y);
        for ( err=0.0,i=0 ; i<n ; i++ ) {
            r[i] -= y[i];
            err += r[i]*r[i];
        }
        rho_old = rho_new;
        if ( err<tol ) finish = 1;
    }

/* Check memory status */
    if ( *getmem<=0 ) start = 1;
    if ( start ) {
        free(r);
        free(rb);
        free(p);
        free(q);
        free(u);
        free(v);
        free(y);
    }
    return;
}


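 /**********************************************************
     Direct solver for the test system: rows 0..nsyn-1 impose
     the differences soln[k]-soln[k+1] = b[k] and the final row
     is a weighted sum whose weights are the increments of
     vlam[ ]. Forward elimination followed by back substitution
     gives the solution in O(nsyn) operations.
  **********************************************************/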
void solve( int nsyn, double *vlam, double *b, double *soln)
{
    double sum;
    int k;

/*  Step 1. - Forward substitution phase */
    for ( sum=0.0,k=0 ; k<nsyn ; k++ ) {
        soln[k] = b[k];
        sum += vlam[k+1]*soln[k];
    }
    soln[nsyn] = b[nsyn]-sum;

/*  Step 2. - Backward substitution phase */
    for ( k=nsyn-1 ; k>=0 ; k-- ) soln[k] += soln[k+1];
    return;
}
