Spiking auditory network model and spectro-temporal receptive fields
from auditory nerve, midbrain, thalamus and cortex.
Fatemeh Khatami and Monty A. Escabí
Summary
The accompanying neural data, sounds, and models are outlined in the
publication:
Fatemeh Khatami and Monty A. Escabí, Spiking network optimized for
word recognition in noise predicts auditory system hierarchy. PLOS
Computational Biology (in press).
The archive includes a MATLAB implementation of the auditory model from
the above citation. The auditory model consists of a front end cochlear
model that is connected to a hierarchical spiking neural network (HSNN).
The HSNN contains inhibitory and excitatory connections between
consecutive layers as outlined in the above manuscript. The original
sounds used to test the network in a speech recognition task were
derived from clean speech from the TIMIT Acoustic-Phonetic Continuous
Speech Corpus (https://catalog.ldc.upenn.edu/LDC93S1). The edited
speech sounds used in the study to test the network, consisting of
digits ("zero" to "nine") with added background noise, are included
here.
The archive also includes neural data that was used to compare results
from the auditory system to the auditory HSNN model. Neural data
consists of recordings from auditory nerve (AN), inferior colliculus
(IC), auditory thalamus (MGB) and cortex (A1) from the following
previously published manuscripts:
Auditory Nerve (AN):
Kim, P. J. & Young, E. D. Comparative analysis of spectro-temporal
receptive fields, reverse correlation functions, and frequency tuning
curves of auditory-nerve fibers. J Acoust Soc Am 95, 410-422
(1994).
Inferior Colliculus (IC):
Chen, C., Read, H. L. & Escabi, M. A. Precise feature based
time-scales and frequency decorrelation lead to a sparse auditory
code. J Neurosci 32, 8454-8468 (2012).
Auditory Thalamus (MGB) and Cortex (A1):
Miller, L. M., Escabi, M. A., Read, H. L. & Schreiner, C. E.
Spectrotemporal receptive fields in the lemniscal auditory thalamus
and cortex. J Neurophysiol 87, 516-527 (2002).
All of the above data is from anesthetized cats, although the
experimental procedures and sounds for each of these studies differed.
The experimental details for each can be found in the respective
publications. The AN data from Kim and Young was recorded from auditory
nerve fibers driven with white noise stimuli. The AN data was provided
by E.D. Young and is included here with his permission. Studies in the
IC, MGB and A1 all used dynamic moving ripple sounds as described in
Escabi and Schreiner (2002) and Miller et al. (2002), although the
parameters for IC differed from those used in the MGB/A1 studies (to
allow for faster modulations). For all of these studies,
spectro-temporal receptive fields (STRFs) were computed from the
delivered sounds using correlation based estimation methods (as
described in the respective publications).
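Correlation based STRF estimation amounts to spike-triggered averaging
of the stimulus spectro-temporal envelope. The following minimal MATLAB
sketch illustrates the idea on synthetic data only; it is not the
estimation code used in the studies above (see rtwstrfdb.m for that):

% Generic spike-triggered-average STRF sketch on synthetic data.
% This only illustrates the correlation-based idea; it is not the
% dataset's estimation code (see rtwstrfdb.m).
nF = 32; nT = 5000; nLags = 50;          % frequency channels, time bins, STRF lags
S = randn(nF, nT);                       % synthetic spectro-temporal envelope
spikes = rand(1, nT) < 0.02;             % synthetic spike train (binary)
STRF = zeros(nF, nLags);
for t = nLags:nT
    if spikes(t)
        % accumulate the envelope segment preceding each spike
        % (columns run from nLags-1 bins before the spike to the spike time)
        STRF = STRF + S(:, t-nLags+1:t);
    end
end
STRF = STRF / max(sum(spikes), 1);       % spike-triggered average
imagesc(STRF); axis xy;
xlabel('Time lag (bins)'); ylabel('Frequency channel');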
The archive also includes MATLAB code used to analyze STRFs from neural
and model data. The analysis procedures used were developed and are
described in the following studies:
M.A. Escabí and C.E. Schreiner (2002). Nonlinear spectrotemporal sound
analysis by neurons in the auditory midbrain. J Neurosci 22(10):
4114-31.
A. Qiu, C.E. Schreiner, and M.A. Escabí. (2003) Gabor analysis of
auditory midbrain receptive fields: Spectro-temporal and binaural
composition. J Neurophysiol. 90 (1): 456-476.
F.A. Rodríguez, H.L. Read, M.A. Escabí (2010) Spectrotemporal
Modulation Tradeoff Along the Tonotopic Axis of the Inferior
Colliculus. J Neurophysiol. 103: 887-903.
Conditions for using the data
If you use this dataset or any of the accompanying analysis or modeling
code, please cite the above manuscript along with the CRCNS dataset:
Fatemeh Khatami and Monty A. Escabí, Spiking network optimized for
word recognition in noise predicts auditory system hierarchy. PLOS
Computational Biology (in press).
http://dx.doi.org/xxxxxx/xxxxxx
If neural data or a specific analysis code is used, a citation should
also be provided to the respective source manuscript for the original
data or analysis (e.g., Kim and Young 1994 for the AN data; Qiu et al.
2003 for the Gabor analysis code; etc.).
Methods
The methods for implementing the auditory HSNN model can be found in the
above citation (Khatami & Escabí 2020). Details of the neural
recordings, data acquisition and sound delivery for each of the auditory
structures (AN, IC, MGB and A1) can be found in the respective
manuscripts noted above.
Data files organization
The data is organized in a directory tree with the following
subdirectories. Below, unix convention (using '/' instead of '\') is
used for the directory structure.
[Documentation]{.underline}
./Documentation/
- This directory contains the documentation (this file) for the CRCNS
database.
[Manuscripts]{.underline}
This directory contains all of the manuscripts noted above.
[MATLAB Code]{.underline}
This directory contains the relevant MATLAB code used for the auditory
network model as well as the code used to analyze neural data. The
directory structure is as follows:
./MatlabCode/audnetwork/ - Auditory network code
./MatlabCode/cochleogram/ - Cochlear model (cochleogram)
./MatlabCode/examples/ - Example code for both the neural data and sound
analysis
./MatlabCode/ripplesounds/ - Code used to generate ripple sounds
./MatlabCode/strfanalysis/ - Code used to analyze neural and model STRFs
./MatlabCode/strfgabor/ - Code used for Gabor STRF model fitting
procedure
[Model Data]{.underline}
./ModelData/DigitsInNoise/
- Contains the digit sounds in speech babble noise at multiple SNRs
that were used in this study. The original clean sounds are available
from TIMIT Acoustic-Phonetic Continuous Speech Corpus
(https://catalog.ldc.upenn.edu/LDC93S1).
./ModelData/ModelSTRF/
- Contains the measured and Gabor fitted model STRFs for the optimal
and high-resolution auditory networks
[Neural Data]{.underline}
./NeuralData/AuditoryNerve/
- Auditory nerve data. This is the original data directory provided by
E.D. Young. This directory contains all of the auditory nerve data
along with code for reading and analyzing the data. Details are
available in the file README.doc
[Inferior Colliculus Data]{.underline}
./NeuralData/InferiorColliculus/DMREnvelope/
- Dynamic moving ripple (DMR) envelopes used for sound generation and
used to estimate STRFs (see Escabi & Schreiner 2002)
./NeuralData/InferiorColliculus/SPET/
- Spike event time (SPET) files
./NeuralData/InferiorColliculus/STRF/
- Computed Inferior Colliculus STRFs
[Auditory Thalamus and Cortex Data]{.underline}
./NeuralData/AuditoryThalamusAndCortex/DMRENvelope/c534/
- Contains the dynamic moving ripple (DMR) envelopes used for
experiment c534
./NeuralData/AuditoryThalamusAndCortex/DMRENvelope/ct476/
- Contains the dynamic moving ripple (DMR) envelope used for
experiment ct476
./NeuralData/AuditoryThalamusAndCortex/Cortex/SPET/
- Cortical spike event time (SPET) files
./NeuralData/AuditoryThalamusAndCortex/Cortex/STRF/
- Computed cortical STRFs
./NeuralData/AuditoryThalamusAndCortex/Thalamus/SPET/
- Thalamic spike event time (SPET) files
./NeuralData/AuditoryThalamusAndCortex/Thalamus/STRF/
- Computed thalamic STRFs
Data format
[Neural and model data files]{.underline}
All neural data from IC, thalamus and cortex, as well as the model
data, are stored in MATLAB files (.mat). Details of the data format and
usage are available in the provided MATLAB examples, which are
commented throughout. The documentation for each analysis routine also
provides details of the data format and usage; this information can be
obtained by typing 'help filename.m' at the MATLAB command line.
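For example, a provided .mat file can be inspected directly from the
MATLAB command line (the file name below is a placeholder; substitute
any file from the archive):

% List the variables stored in a .mat file without loading it
% (placeholder file name; use an actual file from the archive)
whos('-file', './NeuralData/InferiorColliculus/STRF/ExampleUnit.mat')
% Load the file into a structure and inspect its fields
S = load('./NeuralData/InferiorColliculus/STRF/ExampleUnit.mat');
disp(fieldnames(S))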
The auditory nerve data is stored in binary formatted files as
originally provided by E.D. Young. Details for analyzing and extracting
the data are available in the documentation file
./NeuralData/AuditoryNerve/README.doc.
[Speech in babble noise sound files]{.underline}
All sound files are stored in standard WAV format as well as in MATLAB
(.mat) format.
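For example, a sound file can be read and played with standard MATLAB
audio routines (the file name below is a placeholder):

% Read a digit-in-babble-noise sound (placeholder file name) and play it
[x, Fs] = audioread('./ModelData/DigitsInNoise/ExampleDigit.wav');
sound(x, Fs);
% Plot the waveform
t = (0:numel(x)-1)/Fs;
plot(t, x); xlabel('Time (s)'); ylabel('Amplitude');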
[Dynamic moving ripple envelope files]{.underline}
These files contain the sound envelopes that are used to generate the
DMR stimuli and which are used to calculate the STRFs for IC, MGB and
A1. These envelope files are generated by the main program used to
generate the DMR sounds (using 'ripnoise.m') and are subsequently used
during the STRF analysis (using 'rtwstrfdb.m').
Each of the respective directories for IC, thalamus and cortex contains
the envelope files in the subdirectory 'DMREnvelope'. The envelopes are
stored in binary formatted files ('float') with the extension '.spr'.
For instance, in the IC envelope directory, the envelope is stored in
the file dynamicripple750ic.spr, where 750 indicates the maximum temporal
modulation rate for this sound. In addition, there is an accompanying
file dynamicripple750ic_param.mat which contains parameters that were
used to generate the DMR sounds and which are also necessary to analyze
neural data and compute STRFs (using 'rtwstrfdb.m').
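A minimal sketch for reading the raw envelope samples is shown below.
Note that reshaping the samples into a frequency-by-time matrix
requires the channel count stored in the accompanying parameter file;
the parameter variable names are not reproduced here and should be
checked with whos, and the full STRF analysis is handled by rtwstrfdb.m.

% Inspect the DMR generation parameters that accompany the envelope file
whos('-file', 'dynamicripple750ic_param.mat')
P = load('dynamicripple750ic_param.mat');
% Read the raw envelope samples (binary 'float' format)
fid = fopen('dynamicripple750ic.spr', 'r');
env = fread(fid, inf, 'float');
fclose(fid);
% Reshaping env into a frequency x time matrix requires the number of
% frequency channels stored in P (variable name not shown here); see
% rtwstrfdb.m for the STRF analysis that uses these files.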
Code
[Auditory Model]{.underline}
The MATLAB functions that are used to simulate the auditory network
model are located in the directories ./MatlabCode/audnetwork/ and
./MatlabCode/cochleogram/. The primary routines are:
cochleogram.m - Cochlear model output given a sound vector. The output
of this file is used as the input to the hierarchical spiking neural
network
integratefirenetworkaud.m - Auditory spiking neural network model
glnpaudnetwork.m - Gabor Linear-Nonlinear-Poisson (LNP) network
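A conceptual sketch of the model pipeline is shown below. The exact
argument lists of the model routines are not reproduced here (the
commented calls are assumptions); consult each function's help text and
Example1.m for a complete working simulation.

% Make the model code visible on the MATLAB path
addpath('./MatlabCode/audnetwork', './MatlabCode/cochleogram');
% Consult the documentation of the two main stages
help cochleogram.m              % sound vector -> cochleogram (model input)
help integratefirenetworkaud.m  % cochleogram -> HSNN spiking responses
% Conceptually (argument lists are assumptions; see Example1.m):
%   coch = cochleogram(soundVector, Fs, ...);
%   out  = integratefirenetworkaud(coch, ...);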
[Neural data analysis code]{.underline}
The routines used to analyze neural data are located in the directories:
./MatlabCode/strfanalysis/ and ./MatlabCode/strfgabor/. The primary
routines used are:
rtwstrfdb.m - Used to compute STRFs from the dynamic moving ripple
sounds (Escabi & Schreiner 2002)
strfparam.m - Computes a variety of STRF parameters (Rodriguez et al.
2010)
strfgaborfit.m - Fits neural STRFs to the Gabor STRF model (Qiu et al.
2003)
For all of the above files, a detailed list of input and output
parameters along with the function syntax can be obtained by typing
"help filename.m" on the MATLAB command line.
[Ripple stimuli generation code]{.underline}
The routines used to generate dynamic moving ripple and ripple noise
stimuli are located in the directory: './MatlabCode/ripplesounds/'. The
main routine used to generate both sounds is ripnoise.m.
How to get started
MATLAB example files are provided that illustrate the usage of the model
and neural data analysis codes. All of the example routines are
commented to guide the reader through the various examples as well as to
document the parameters used. These files are all located in the
directory ./MatlabCode/examples/:
Example1.m - HSNN model simulation
Example2.m - STRF parameter estimation for IC, Thal, CTX
Example3.m - STRF calculation for IC unit example
Example4.m - STRF calculation for cortical unit example
Example5.m - STRF calculation for thalamic unit example
Example6.m - Gabor STRF model fitting (IC, Thal and CTX)
Example7.m - STRF Parameter estimation for HSNN model
Example8.m - Gabor LNP Network simulation
Example9.m - Illustrates how to generate ripple stimuli
Example10.m - Illustrates how to apply a Bayesian Classifier to the
outputs of the auditory spiking neural network
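A minimal sketch for getting started, assuming MATLAB is launched from
the root directory of the archive:

% Add the archive's MATLAB code to the path and run the first example
addpath(genpath('./MatlabCode'));
cd('./MatlabCode/examples');
Example1        % HSNN model simulation
help Example1   % the leading comments document the parameters used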
How to get help
To get help with the data set, post any questions on the forum at
CRCNS.org.