Description: StateSwitch
Please cite the following references if you use these data:
A static copy of the data for younger adults is also provided at https://osf.io/mgxqr/.
In case of questions, contact kosciessa@mpib-berlin.mpg.de or MPIB Research Data Management.
IMPORTANT: Events are provided as stick regressors, and their timing excludes the first 12 volumes (acquired before BOLD equilibrium was reached). Please make sure to discard the first 12 volumes in any analysis so that event timing is indexed correctly.
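As a minimal illustration of the note above, the sketch below drops the initial dummy volumes so that volume index 0 of the trimmed data corresponds to time 0 of the provided event files. The function name and the toy data are illustrative assumptions, not part of the released analysis code; only the number of discarded volumes (12) comes from this README.

```python
# Illustrative sketch: discard the first 12 (pre-equilibrium) volumes
# so that the trimmed time series aligns with the provided event timing.
# N_DUMMY comes from this README; everything else here is a toy example.

N_DUMMY = 12  # volumes to discard, per the note above


def trim_dummy_volumes(volumes, n_dummy=N_DUMMY):
    """Drop the initial dummy volumes; afterwards, trimmed[0] is event time 0."""
    return volumes[n_dummy:]


# toy stand-in for a 4D time series: 500 "volumes"
volumes = list(range(500))
trimmed = trim_dummy_volumes(volumes)
# len(trimmed) == 488; trimmed[0] is the former 13th volume
```

The same slicing applies regardless of how the volumes are loaded (e.g., along the time axis of a 4D array).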
The experiment was structured as follows:
Subjects performed a dynamic visual attention task (Multi-Attribute Task; MAT) in which they had to sample up to four visual features in a joint display for subsequent recall. Prior to stimulus presentation, subjects were validly cued to potential target probes. The number and identity of cues were varied to modulate the level of expected target uncertainty – and thus the contextually required encoding dimensions. Subjects performed 4 runs of 8 blocks each; each block contained 8 sequences of 8 trials with identical state cueing.
Each trial was structured as follows: cue onset, during which the relevant targets were centrally presented (1 s); fixation phase (2 s); dynamic stimulus phase (3 s); probe phase, incl. response (2 s); ITI (un-jittered; 1.5 s).
At the onset of each block, the relevant attentional target set was presented for 5 s. At the offset of each block, subjects received sham feedback for 3 s.
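The within-trial phase onsets follow directly from the durations stated above; the sketch below derives them for reference. The phase names and the helper function are illustrative assumptions; only the durations come from the trial description.

```python
# Illustrative sketch: derive within-trial phase onsets from the stated
# durations (cue 1 s, fixation 2 s, stimulus 3 s, probe 2 s, ITI 1.5 s).
# Phase names are shorthand for this example, not official trigger labels.

phases = [
    ("cue", 1.0),
    ("fixation", 2.0),
    ("stimulus", 3.0),
    ("probe", 2.0),
    ("iti", 1.5),
]


def phase_onsets(phases):
    """Return {phase: onset in s} and the total trial duration."""
    onsets, t = {}, 0.0
    for name, duration in phases:
        onsets[name] = t
        t += duration
    return onsets, t


onsets, trial_len = phase_onsets(phases)
# e.g., the probe starts 6.0 s into each trial; a full trial lasts 9.5 s
```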
The following triggers were used in the design:

Within-trial triggers:
64  | ITI onset
128 | pause onset
6/8 | recording offset
Anatomical data cannot be publicly shared as per informed consent.
If required, defaced anatomical data can be made available for research purposes only.
Please send a request to Research Data Management and submit a data protection statement.
No fieldmap images were acquired.
The original analyses did not yet rely on BIDS input.
Discrepancies between filenames and data organization in the analysis code base and this directory are expected.
For reproducibility, see the datasets used for analysis.
Known data issue – subject 2132: the final volumes are missing.
Eye-tracking data and resting-state EEG are not yet included in this BIDS structure.
Please contact kosciessa@mpib-berlin.mpg.de in case of issues.
This repository is a DataLad dataset. It provides fine-grained data access down to the level of individual files, and allows for tracking future updates. In order to use this repository for data retrieval, DataLad is required. It is a free and open-source command line tool, available for all major operating systems, that builds on Git and git-annex to allow sharing, synchronizing, and version-controlling collections of large files. You can find information on how to install DataLad at handbook.datalad.org/en/latest/intro/installation.html.
A DataLad dataset can be cloned by running

datalad clone <url>
Once a dataset is cloned, it is a lightweight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not the actual content of the (sometimes large) data files.
After cloning a dataset, you can retrieve file contents by running
datalad get <path/to/directory/or/file>
This command will trigger a download of the files, directories, or subdatasets you have specified.
DataLad datasets can contain other datasets, so-called subdatasets. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run

datalad get -n <path/to/subdataset>

Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with datalad get. If you use datalad get <path/to/subdataset>, all contents of the subdataset will be downloaded at once.
DataLad datasets can be updated. The command datalad update will fetch updates and store them on a different branch (by default remotes/origin/master). Running datalad update --merge will pull available updates and integrate them in one go.
DataLad datasets contain their history in the git log. By running git log (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files, by whom, and when.
More information on DataLad and how to use it can be found in the DataLad Handbook at handbook.datalad.org. The chapter "DataLad datasets" can help you to familiarize yourself with the concept of a dataset.