@@ -2,32 +2,86 @@

## Summary

-This data set contains BOLD fMRI responses in human subjects viewing a set of
+This dataset contains BOLD fMRI responses in human subjects viewing a set of
natural short movie clips. The functional data were collected in five subjects,
in three sessions over three separate days for each subject. Details of the
experiment are described in the original publication [1].

-The natural short movie clips used in this dataset are identical to those used
-in a previous experiment described in [2]. However, the functional data is different.
-This data set contains full brain responses recorded every two seconds with a 3T scanner [1].
-The data set described in [2] contains responses from the occipital lobe only,
-recorded every second with a 4T scanner.
+> **[1]** Huth, Alexander G., Nishimoto, S., Vu, A. T., & Gallant, J. L.
+> (2012). A continuous semantic space describes the representation of thousands
+> of object and action categories across the human brain. Neuron, 76(6),
+> 1210-1224. https://dx.doi.org/10.1016/j.neuron.2012.10.014

-> **[1]** Huth, Alexander G., Nishimoto, S., Vu, A. T., & Gallant, J. L. (2012).
-A continuous semantic space describes the representation of thousands of object
-and action categories across the human brain. Neuron, 76(6), 1210-1224.
+If you publish any work using the dataset, please cite the original publication
+[1], and cite the dataset [1b] in the following recommended format:

-> **[2]** Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant,
-J. L. (2011). Reconstructing visual experiences from brain activity evoked by
-natural movies. Current Biology, 21(19), 1641-1646.
+> **[1b]** Huth, A. G., Nishimoto, S., Vu, A. T., Dupre la Tour, T., & Gallant,
+> J. L. (2022). Gallant Lab Natural Short Clips 3T fMRI Data.
+> https://dx.doi.org/--TBD--

-## Cite this dataset
+#### Differences from the "vim-2" dataset

-If you publish any work using the data, please cite the original publication [1],
-and cite the dataset in the following recommended format:
+The present dataset uses the same stimuli (natural short movie clips) as a
+previous experiment from the Gallant lab [2], publicly released on CRCNS under
+the name ["vim-2"](https://crcns.org/data-sets/vc/vim-2/) [2b]. The two
+datasets share the same stimuli, but their functional data are different.
+
+The "shortclips" dataset [1b] contains full brain responses recorded every two
|
|
|
+seconds (2s) with a 3T scanner. The "vim-2" dataset [2b] contains responses
|
|
|
+from the occipital lobe only, recorded every second (1s) with a 4T scanner.
|
|
|
+Contrary to the "shortclips" dataset, the "vim-2" dataset does not provide
|
|
|
+mappers to plot the data on flatten maps of the cortical surface.
|
|
|
+
|
|
|
+
|
|
|
+> **[2]** Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
+> Gallant, J. L. (2011). Reconstructing visual experiences from brain activity
+> evoked by natural movies. Current Biology, 21(19), 1641-1646.
+> https://dx.doi.org/10.1016/j.cub.2011.08.031
+
+> **[2b]** Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., &
+> Gallant, J. L. (2014). Gallant Lab Natural Movie 4T fMRI Data. CRCNS.org.
+> https://dx.doi.org/10.6080/K00Z715X
+
+## How to get started
+
+#### a. With dedicated tutorials
+
+The preferred way to explore this dataset is through the [voxelwise
+tutorials](https://github.com/gallantlab/voxelwise_tutorials). These tutorials
+include Python downloading tools, data loaders, plotting utilities, and
+examples of analysis following the original publications [1] and [2].
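+
+For instance, the fMRI responses of one subject could be loaded with the
+tutorials' `load_hdf5_array` helper. This is only a sketch: the file name and
+the `"Y_train"` key below are assumptions, so check the tutorials for the
+exact paths and keys.
+
+```python
+# minimal sketch, assuming voxelwise_tutorials is installed
+# (pip install voxelwise_tutorials) and the data has been downloaded
+from voxelwise_tutorials.io import load_hdf5_array
+
+# the file name and the key are assumptions; adapt them to the downloaded files
+Y_train = load_hdf5_array("responses/S01_responses.hdf", key="Y_train")
+print(Y_train.shape)  # expected: (n_time_points, n_voxels)
+```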
+
+<a href="https://gallantlab.github.io/voxelwise_tutorials/"><img
+src="https://gallantlab.github.io/voxelwise_tutorials/_images/sphx_glr_06_plot_banded_ridge_model_002.png"
+alt="Example" width="600"/></a>
+
+#### b. With git and git-annex
+
+To download the data with [git-annex](https://git-annex.branchable.com/), run
+the following commands:
+```bash
+# clone the repository, without the data files
+git clone https://gin.g-node.org/gallantlab/shortclips
+cd shortclips
+# download one file (e.g. features/wordnet.hdf)
+git annex get features/wordnet.hdf --from wasabi
+# download all files
+git annex get . --from wasabi
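+# alternatively, download all files from the GIN remote (see below)
+git annex get . --from origin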
+```
+
+Two remotes are available to download the data. The first remote is GIN
+(`--from origin`), but its bandwidth might be limited. The second remote is
+Wasabi (`--from wasabi`), which has a larger bandwidth and is usually faster.
+
+To load and plot the data, a basic example script is available in `example.py`.
+For more utilities and examples of analysis, see the dedicated [voxelwise
+tutorials](https://github.com/gallantlab/voxelwise_tutorials).
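+
+Alternatively, the downloaded files can be inspected directly with
+[h5py](https://www.h5py.org/). A minimal sketch, which only assumes that the
+HDF5 files store flat arrays at their top level:
+
+```python
+# minimal sketch: list the arrays stored in one downloaded data file
+import h5py
+
+with h5py.File("features/wordnet.hdf", "r") as f:
+    # print the name, shape and dtype of each stored array
+    for key in f.keys():
+        print(key, f[key].shape, f[key].dtype)
+```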
+
+## How to get help
+
+The recommended way to ask questions is in the issue tracker on the GitHub page
+https://github.com/gallantlab/voxelwise_tutorials/issues.

-> **[3]** Huth, A. G., Nishimoto, S., Vu, A. T., Dupre la Tour, T., & Gallant, J. L. (2022).
-Gallant Lab Natural Short Clips 3T fMRI Data. http://dx.doi.org/--TBD--

## Data files organization
@@ -49,9 +103,9 @@ stimuli/ → natural movie stimuli, for each fMRI run
    ...
    train_11.hdf
utils/
-    example.py → Python functions to analyze the data
    wordnet_categories.txt → names of the wordnet labels
    wordnet_graph.dot → wordnet graph to plot as in [1]
+example.py → Python example to load and plot the data
```

## Data format
@@ -100,27 +154,4 @@ Each file in `stimuli` contains:
Each training run contains 9000 images total.
The test run contains 8100 images total.
```
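+
+For instance, the frames of one training run could be inspected as follows.
+This is only a sketch: the `"stimuli"` key and the image layout are
+assumptions, so list the available keys first.
+
+```python
+# minimal sketch: display the first frame of one training run
+import h5py
+import matplotlib.pyplot as plt
+
+with h5py.File("stimuli/train_00.hdf", "r") as f:
+    print(list(f.keys()))       # check the actual key name first
+    images = f["stimuli"][:10]  # the "stimuli" key is an assumption
+
+plt.imshow(images[0])  # assumes images stored as (n_images, height, width, 3)
+plt.axis("off")
+plt.show()
+```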
-The `utils.py` file contains helpers to load the data in Python.
-
-## How to get started
-
-The `utils` directory contains basic Python helpers to get started with
-the data.
-
-More tutorials on voxelwise modeling using this data set are available at
-https://github.com/gallantlab/voxelwise_tutorials.
-They includes Python downloading tools, data loaders,
-plotting tools, and examples of analysis.
-
-
-<img src="https://gallantlab.github.io/voxelwise_tutorials/_images/sphx_glr_06_plot_banded_ridge_model_002.png" alt="Example" width="600"/>
-
-Note that to get started, you might not need to download all the data. In
-particular, the stimuli data is large, and is already processed into two
-feature spaces to be used in voxelwise modeling.
-
-
-## How to get help
-
-The recommended way to ask questions is in the issue tracker on the GitHub page
-https://github.com/gallantlab/voxelwise_tutorials/issues.
|