
Gallant Lab Natural Short Clips 3T fMRI Data

Summary

This data set contains BOLD fMRI responses in human subjects viewing a set of natural short movie clips. The functional data were collected in five subjects, in three sessions over three separate days for each subject. Details of the experiment are described in the original publication [1].

The natural short movie clips used in this data set are identical to those used in a previous experiment described in [2]. However, the functional data are different: this data set contains full-brain responses recorded every two seconds with a 3T scanner [1], whereas the data set described in [2] contains responses from the occipital lobe only, recorded every second with a 4T scanner.

[1] Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6), 1210-1224.

[2] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

Cite this dataset

If you publish any work using the data, please cite the original publication [1], and cite the dataset in the following recommended format:

[3] Huth, A. G., Nishimoto, S., Vu, A. T., Dupre la Tour, T., & Gallant, J. L. (2022). Gallant Lab Natural Short Clips 3T fMRI Data. http://dx.doi.org/--TBD--

Data file organization

features/                    → feature spaces used for voxelwise modeling
    motion_energy.hdf        → visual motion energy, as described in [2]
    wordnet.hdf              → visual semantic labels, as described in [1]
mappers/                     → plotting mappers for each subject
    S01_mapper.hdf
    ...
    S05_mapper.hdf
responses/                   → functional responses for each subject
    S01_responses.hdf
    ...
    S05_responses.hdf
stimuli/                     → natural movie stimuli, for each fMRI run
    test.hdf
    train_00.hdf
    ...
    train_11.hdf
utils/
    example.py               → Python functions to analyze the data
    wordnet_categories.txt   → names of the wordnet labels
    wordnet_graph.dot        → wordnet graph to plot as in [1]

Data format

All files are HDF5 files, each containing multiple arrays. The name, shape, and description of each array are listed below.

Each file in `features` contains:
    X_train: array of shape (3600, n_features)
        Training features.
    X_test: array of shape (270, n_features)
        Testing features.
    run_onsets: array of shape (12, )
        Indices of each run onset.
where n_features = 6555 for `motion_energy.hdf` and n_features = 1705 for `wordnet.hdf`.
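
For illustration, here is a minimal sketch of loading one feature file with `h5py` (the library choice is an example; the `utils` directory provides the dataset's own helpers):

```python
import h5py

# Minimal sketch: load the wordnet feature space (file path from the listing above).
with h5py.File("features/wordnet.hdf", "r") as f:
    X_train = f["X_train"][:]        # (3600, 1705) training features
    X_test = f["X_test"][:]          # (270, 1705) testing features
    run_onsets = f["run_onsets"][:]  # (12,) index of the first sample of each run

print(X_train.shape, X_test.shape, run_onsets)
```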

Each file in `mappers` contains:
    voxel_to_flatmap: CSR sparse array of shape (n_pixels, n_voxels)
        Mapper from voxels to flatmap image. The sparse array is stored with
        four dense arrays: (data, indices, indptr, shape).
    voxel_to_fsaverage: CSR sparse array of shape (n_vertices, n_voxels)
        Mapper from voxels to FreeSurfer surface. The sparse array is stored
        with four dense arrays: (data, indices, indptr, shape).
    flatmap_mask: array of shape (width, height)
        Pixels of the flatmap image associated with a voxel.
    flatmap_rois: array of shape (width, height, 4)
        Transparent image with annotated ROIs (for subjects S01, S02, and S03).
    flatmap_curvature: array of shape (width, height)
        Transparent image with binarized curvature to locate sulci/gyri.
    roi_mask_xxx: array of shape (n_voxels, )
        Mask indicating which voxels are in the ROI `xxx`.
        The ROI list differs across subjects; S04 and S05 have no ROIs.
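
As a minimal sketch of how a mapper might be reassembled, the following rebuilds the `voxel_to_flatmap` sparse array with `scipy.sparse`. The dataset names with `_data`, `_indices`, `_indptr`, and `_shape` suffixes are an assumption about how the four dense arrays are stored; check `list(f.keys())` if they differ:

```python
import h5py
import numpy as np
from scipy import sparse

# Minimal sketch: rebuild the voxel-to-flatmap mapper from its four dense
# arrays. The "_data"/"_indices"/"_indptr"/"_shape" dataset names are an
# assumption about how the CSR components are stored in the file.
with h5py.File("mappers/S01_mapper.hdf", "r") as f:
    voxel_to_flatmap = sparse.csr_matrix(
        (f["voxel_to_flatmap_data"][:],
         f["voxel_to_flatmap_indices"][:],
         f["voxel_to_flatmap_indptr"][:]),
        shape=tuple(f["voxel_to_flatmap_shape"][:]),
    )

# Project a per-voxel value (here all ones) onto the flatmap pixels.
flatmap_values = voxel_to_flatmap @ np.ones(voxel_to_flatmap.shape[1])
```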

Each file in `responses` contains:
    Y_train: array of shape (3600, n_voxels)
        Training responses.
    Y_test: array of shape (270, n_voxels)
        Testing responses.
    run_onsets: array of shape (12, )
        Indices of each run onset.
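
For example, one way to split the training responses back into their 12 runs is with `numpy.split` and `run_onsets` (a sketch, assuming `h5py` and the file path shown in the listing above):

```python
import h5py
import numpy as np

# Minimal sketch: load one subject's responses and split them back into runs.
with h5py.File("responses/S01_responses.hdf", "r") as f:
    Y_train = f["Y_train"][:]        # (3600, n_voxels)
    run_onsets = f["run_onsets"][:]  # (12,)

# np.split takes the indices at which to cut, so the first onset (0) is dropped.
runs = np.split(Y_train, run_onsets[1:], axis=0)
print(len(runs), runs[0].shape)  # 12 runs (300 samples each if runs are equal length)
```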

Each file in `stimuli` contains:
    stimuli: array of shape (n_images, 512, 512, 3)
        Each training run contains 9000 images total.
        The test run contains 8100 images total.
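
Since the stimulus arrays are large, it can help to read only a slice of frames at a time; `h5py` loads just the requested slice from disk (a sketch, using a file path from the listing above):

```python
import h5py

# Minimal sketch: read only the first 10 frames of one training run,
# without loading the full (9000, 512, 512, 3) array into memory.
with h5py.File("stimuli/train_00.hdf", "r") as f:
    frames = f["stimuli"][:10]

print(frames.shape, frames.dtype)
```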

The `example.py` file in the `utils` directory contains helper functions to load the data in Python.

How to get started

The utils directory contains basic Python helpers to get started with the data.

More tutorials on voxelwise modeling with this data set are available at https://github.com/gallantlab/voxelwise_tutorials. They include Python tools for downloading the data, data loaders, plotting tools, and example analyses.

Example

Note that you might not need to download all the data to get started. In particular, the stimuli files are large, and they have already been processed into the two feature spaces (`motion_energy.hdf` and `wordnet.hdf`) used for voxelwise modeling.
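
As a minimal sketch of voxelwise modeling with the precomputed features, one could fit a ridge regression from the wordnet features to one subject's responses. This is a simplification (no temporal delays, a generic regularization grid, scikit-learn instead of the tutorials' tooling); the voxelwise_tutorials repository describes the full pipeline:

```python
import h5py
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

# Minimal sketch of a voxelwise encoding model; see the tutorials for the
# full pipeline with temporal delays and per-voxel regularization.
with h5py.File("features/wordnet.hdf", "r") as f:
    X_train, X_test = f["X_train"][:], f["X_test"][:]
with h5py.File("responses/S01_responses.hdf", "r") as f:
    Y_train, Y_test = f["Y_train"][:], f["Y_test"][:]

# Standardize the features, then fit one ridge model for all voxels.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = RidgeCV(alphas=np.logspace(-2, 5, 8))
model.fit(X_train, Y_train)

# Score each voxel by the correlation between predicted and measured responses.
Y_pred = model.predict(X_test)
Yp = Y_pred - Y_pred.mean(axis=0)
Yt = Y_test - Y_test.mean(axis=0)
scores = (Yp * Yt).sum(axis=0) / (np.linalg.norm(Yp, axis=0) * np.linalg.norm(Yt, axis=0))
print("mean test correlation:", np.nanmean(scores))
```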

How to get help

The recommended way to ask questions is in the issue tracker on the GitHub page https://github.com/gallantlab/voxelwise_tutorials/issues.

datacite.yml

Title: Gallant Lab Natural Short Clips 3T fMRI Data
Authors:
    Huth, Alexander G.; University of California, Berkeley; ORCID: 0000-0002-7590-3525
    Nishimoto, Shinji; University of California, Berkeley; ORCID: 0000-0001-8015-340X
    Vu, An T.; University of California, Berkeley
    Dupre la Tour, Tom; University of California, Berkeley; ORCID: 0000-0002-2674-1670
    Gallant, Jack L.; University of California, Berkeley; ORCID: 0000-0001-7273-1054
Description: This data set contains BOLD fMRI responses in human subjects viewing a set of natural short movie clips. The functional data were collected for five subjects, in three sessions over three separate days for each subject. Details of the experiment are described in the original publication.
License: Creative Commons CC0 1.0 Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
References:
    Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646. [doi:10.1016/j.cub.2011.08.031] (IsDescribedBy)
    Huth, A. G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012). A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6), 1210-1224. [doi:10.1016/j.neuron.2012.10.014] (IsDescribedBy)
    Huth, A. G., Lee, T., Nishimoto, S., Bilenko, N. Y., Vu, A. T., & Gallant, J. L. (2016). Decoding the semantic content of natural movies from human brain activity. Frontiers in Systems Neuroscience, 10, 81. [doi:10.3389/fnsys.2016.00081] (IsReferencedBy)
    Popham, S. F., Huth, A. G., Bilenko, N. Y., Deniz, F., Gao, J. S., Nunez-Elizalde, A. O., & Gallant, J. L. (2021). Visual and linguistic semantic representations are aligned at the border of human visual cortex. Nature Neuroscience, 24(11), 1628-1636. [doi:10.1038/s41593-021-00921-6] (IsReferencedBy)
Funding:
    NEI, EY019684
    CSoI, CCF-0939370
Keywords:
    Neuroscience
    fMRI
    Naturalistic stimuli
    Voxelwise encoding models
Resource Type: Dataset