[![made-with-datalad](https://www.datalad.org/badges/made_with.svg)](https://datalad.org)

# **EEG Preprocessing**

Steps 1-4 create filters (e.g., from the ICA and segment labeling), which are later applied to the raw data starting in step 5. All steps use FieldTrip and were executed within MATLAB 2020a.

Deviations from the standard pipeline:

- no ECG available, which would normally guide heart component labeling
- all scripts have been adapted to a single run

---

### **01_prepare_preprocessing**

- Prepare for ICA
- Read into FieldTrip format
- Switch channels
- EEG settings:
  - referenced to avg. mastoid (A1, A2)
  - downsample: 1000 Hz to 250 Hz
  - 4th-order Butterworth 1-100 Hz band-pass filter
  - no re-referencing for ECG

### **02_visual_inspection**

- Check for gross noise periods that should not be considered for the ICA

### **03_ica**

- Conduct the initial ICA (ICA1); this should be run on tardis

### **04_ica_labeling**

- Manual labeling of artefactual ICA components

### **05_segmentation_raw_data**

- Segmentation: from -1500 ms relative to fixcue onset to 1500 ms after ITI onset
- Load raw data
- Switch channels
- EEG settings:
  - referenced to avg. mastoid (A1, A2)
  - 0.2 Hz 4th-order Butterworth high-pass filter
  - 125 Hz 4th-order Butterworth low-pass filter
  - demean
  - recover implicit reference: POz
  - downsample: 1000 Hz to 500 Hz

### **06_automatic_artifact_correction**

- Automatic artifact correction and interpolation
- Remove blink, move, heart, ref, art & emg ICA components prior to calculation
- Get artifact-contaminated channels by kurtosis as well as low- and high-frequency artifacts
- Get artifact-contaminated channels by FASTER
- Interpolate artifact-contaminated channels
- Get artifact-contaminated epochs & exclude epochs recursively
- Get channel x epoch artifacts
- Note that this does NOT yet remove anything; we only calculate the data to be removed in the next step.

### **07_prep_data_for_analysis**

- Remove blink, move, heart, ref, art & emg ICA components
- Interpolate detected artifact channels
- Remove artifact-heavy trials; for subjects with missing onsets, the missing trials are included here as "artefactual trials", thereby correcting the EEG-behavior assignment

### **08_assignConditionsToData**

- Remove additional channels
- Load behavioral data and add its information to the data

---

# **DataLad datasets and how to use them**

This repository is a [DataLad](https://www.datalad.org/) dataset. It provides fine-grained data access down to the level of individual files, and allows for tracking future updates. In order to use this repository for data retrieval, [DataLad](https://www.datalad.org/) is required. It is a free and open source command line tool, available for all major operating systems, and builds on Git and [git-annex](https://git-annex.branchable.com/) to allow sharing, synchronizing, and version controlling collections of large files. You can find information on how to install DataLad at [handbook.datalad.org/en/latest/intro/installation.html](http://handbook.datalad.org/en/latest/intro/installation.html).

### Get the dataset

A DataLad dataset can be `cloned` by running

```
datalad clone <url>
```

Once a dataset is cloned, it is a light-weight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not the actual *content* of the (sometimes large) data files.

### Retrieve dataset content

After cloning a dataset, you can retrieve file contents by running

```
datalad get <path/to/directory/or/file>
```

This command will trigger a download of the files, directories, or subdatasets you have specified.
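Putting the two commands above together, a minimal retrieval workflow could look like the sketch below. The clone URL is DataLad's generic `<url>` placeholder, and the target directory name and file path are only hypothetical examples, not guaranteed contents of this dataset:

```
# Clone the repository into a local directory; this is fast and does not
# download any large file content yet. Replace <url> with the actual
# location of this dataset.
datalad clone <url> eeg-preprocessing
cd eeg-preprocessing

# Download the content of an individual file or a whole directory on demand
datalad get <path/to/directory/or/file>
```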
DataLad datasets can contain other datasets, so-called *subdatasets*. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run

```
datalad get -n <path/to/subdataset>
```

Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with `datalad get`. If you use `datalad get <path/to/subdataset>`, all contents of the subdataset will be downloaded at once.

### Stay up-to-date

DataLad datasets can be updated. The command `datalad update` will *fetch* updates and store them on a different branch (by default `remotes/origin/master`). Running

```
datalad update --merge
```

will *pull* available updates and integrate them in one go.

### Find out what has been done

DataLad datasets contain their history in the ``git log``. By running ``git log`` (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files, by whom, and when.

### More information

More information on DataLad and how to use it can be found in the DataLad Handbook at [handbook.datalad.org](http://handbook.datalad.org/en/latest/index.html). The chapter "DataLad datasets" can help you to familiarize yourself with the concept of a dataset.
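To tie the sections above together, the following sketch shows a typical maintenance workflow inside an already cloned dataset; all paths are placeholders:

```
# Fetch upstream changes and integrate them in one go
datalad update --merge

# Retrieve availability metadata for a subdataset without downloading content
datalad get -n <path/to/subdataset>

# Inspect what has been done to the dataset, or to an individual file
git log
git log -- <path/to/file>
```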