
Preprocessing for EEGmanypipelines

Latest commit: c5cbcf34e4 "update input data (removed empty jsons)" (Julian Kosciessa, 2 years ago)

| Path | Commit | Message | Age |
| --- | --- | --- | --- |
| .datalad | ae84e56384 | [DATALAD] new dataset | 3 years ago |
| code | a835d3d5c5 | update compilations | 2 years ago |
| data | c5cbcf34e4 | update input data (removed empty jsons) | 2 years ago |
| doc | f924501ebf | clean up trash files, add cleaned up submodule | 2 years ago |
| tools | 749a5edc7e | run FASTER | 2 years ago |
| .gitattributes | 15f457f88d | Instruct annex to add text files to Git | 2 years ago |
| .gitignore | b9b3292c0b | add executables | 2 years ago |
| .gitmodules | 602c9cab7a | change url to rawdata submodule | 2 years ago |
| LICENSE | dd5f9ddf71 | Update 'LICENSE' | 2 years ago |
| README.md | d20549b70e | lock files | 2 years ago |
| datacite.yml | 3d14e9893d | Add information for publishing with DataCite | 2 years ago |

README.md


EEG Preprocessing

Steps 1-4 create preprocessing information (e.g., ICA decompositions, segment labels) that is later applied to the raw data, starting in step 5.

All steps use FieldTrip and were executed in MATLAB R2020a.

Deviations from the standard pipeline:

  • no ECG was available; ECG would normally guide the labeling of heart ICs
  • all scripts have been adapted to work on a single run

Many scripts were run on the high-performance computing cluster (HPC) at the Max Planck Institute for Human Development (Berlin, Germany). One of two deployment approaches was used: either (a) the relevant script was compiled, or (b) the script was called directly with multiple MATLAB instances. In case (a), the respective code folder contains a "_prepare" file that was used to compile the code, as well as a '_START' bash script that deployed the compiled code; in case (b), no compiling was done, and the '_START' script directly calls the relevant MATLAB code. To re-run scripts outside of the HPC environment, a wrapper script is needed that loops the step across all subjects, for example as sketched below. Note that, for debugging purposes, the code usually checks whether it is running on a Mac and, if so, performs all computations on a single example subject. Depending on the deployment situation, this section may need to be adapted.
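A minimal wrapper could look like the following sketch; the subject IDs are placeholders, and `a1_prepare_preprocessing` is used only as an example of a per-subject entry point.

```matlab
% Hypothetical wrapper to re-run one step outside the HPC environment.
if ismac
    % mirrors the debugging convenience described above:
    % on a Mac, run a single example subject only
    subjects = {'sub-001'};
else
    subjects = {'sub-001', 'sub-002'}; % placeholder: extend to all subjects
end

for s = 1:numel(subjects)
    a1_prepare_preprocessing(subjects{s}); % assumed per-subject entry point
end
```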


a1_prepare_preprocessing

  • Prepare for ICA
  • Read data into FieldTrip format
  • Re-reference to the average mastoids (A1, A2)
  • Downsample: 1000 Hz to 250 Hz
  • 1-100 Hz band-pass filter (4th-order Butterworth); see the sketch below
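A minimal FieldTrip sketch of the settings above, assuming a placeholder file name; the repository scripts may set additional options:

```matlab
cfg            = [];
cfg.dataset    = 'sub-001_task-xxxx_eeg.vhdr'; % placeholder raw file
cfg.reref      = 'yes';
cfg.refchannel = {'A1', 'A2'};   % average mastoid reference
cfg.bpfilter   = 'yes';
cfg.bpfreq     = [1 100];        % 1-100 Hz band-pass
cfg.bpfilttype = 'but';          % Butterworth
cfg.bpfiltord  = 4;              % 4th order
data = ft_preprocessing(cfg);

cfg            = [];
cfg.resamplefs = 250;            % downsample 1000 Hz -> 250 Hz
data = ft_resampledata(cfg, data);
```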

a2_visual_inspection

  • Manual labeling of noise periods that will be excluded prior to ICA
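In FieldTrip, such manual marking can be done with ft_databrowser; this is a sketch of one possible workflow, not necessarily the exact procedure used here:

```matlab
cfg          = [];
cfg.viewmode = 'vertical';
cfg = ft_databrowser(cfg, data);  % mark noisy segments interactively
% marked segments are returned as an N-by-2 matrix of sample ranges
noise_segments = cfg.artfctdef.visual.artifact;
```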

a3_ica

  • prior to ICA, data are segmented into 2 s pseudo-trials
  • ICA is conducted
  • VEOG, HEOG, and ECG (not available here) are used to pre-label blink, eye-movement, and heart ICs
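A sketch of the segmentation and decomposition above; the ICA method shown ('runica') is an assumption, and the repository scripts define the actual configuration:

```matlab
cfg         = [];
cfg.length  = 2;   % cut continuous data into 2 s pseudo-trials
cfg.overlap = 0;
data_seg = ft_redefinetrial(cfg, data);

cfg        = [];
cfg.method = 'runica';  % assumed ICA implementation
comp = ft_componentanalysis(cfg, data_seg);
% IC time courses can then be correlated with VEOG/HEOG (and ECG, if present)
% to pre-label blink, eye-movement, and heart components.
```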

a4_ica_labeling

  • Manual labeling of artefactual ICA components

a5_segmentation_raw_data

This step loads the raw data again and segments them to the desired time window; the previously derived preprocessing information is applied to these data in the next steps.

  • load raw data
  • segment to -1 s to +2 s relative to stimulus onset
  • EEG settings (see the sketch after this list):
    • re-reference to the average mastoids (A1, A2)
    • recover implicit reference: POz
    • 0.2 Hz high-pass filter (4th-order Butterworth)
    • 125 Hz low-pass filter (4th-order Butterworth)
    • demean
    • downsample: 512 Hz to 500 Hz
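A minimal FieldTrip sketch of this step; the file name and event type are placeholders:

```matlab
cfg                    = [];
cfg.dataset            = 'sub-001_task-xxxx_eeg.vhdr'; % placeholder raw file
cfg.trialdef.eventtype = 'trigger';                    % assumed event coding
cfg.trialdef.prestim   = 1;   % -1 s before stimulus onset ...
cfg.trialdef.poststim  = 2;   % ... to +2 s after
cfg = ft_definetrial(cfg);

cfg.reref       = 'yes';
cfg.refchannel  = {'A1', 'A2'};  % average mastoid reference
cfg.implicitref = 'POz';         % recover the implicit reference channel
cfg.hpfilter    = 'yes';
cfg.hpfreq      = 0.2;           % 0.2 Hz high-pass
cfg.hpfiltord   = 4;             % 4th-order Butterworth (default type 'but')
cfg.lpfilter    = 'yes';
cfg.lpfreq      = 125;           % 125 Hz low-pass
cfg.lpfiltord   = 4;
cfg.demean      = 'yes';
data = ft_preprocessing(cfg);

cfg            = [];
cfg.resamplefs = 500;            % downsample 512 Hz -> 500 Hz
data = ft_resampledata(cfg, data);
```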

a6_automatic_artifact_correction

Identify additional artifacts after removing ICA components. This step does NOT yet remove anything; it only determines the data to be removed in the next step.

  • get artifact-contaminated channels by kurtosis and by low- & high-frequency artifacts
  • get artifact-contaminated channels by FASTER
  • interpolate artifact-contaminated channels (see the sketch after this list)
  • get artifact-contaminated epochs & exclude epochs recursively
  • get channel x epoch artifacts
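The channel interpolation could look as follows in FieldTrip; the bad-channel list is a placeholder for the channels flagged above, and the neighbour definition assumes electrode positions are present in the data:

```matlab
cfg_nb        = [];
cfg_nb.method = 'triangulation';   % requires electrode positions (data.elec)
neighbours = ft_prepare_neighbours(cfg_nb, data);

cfg            = [];
cfg.badchannel = {'TP7'};          % placeholder: channels flagged as artifactual
cfg.method     = 'weighted';       % neighbour-weighted interpolation
cfg.neighbours = neighbours;
data = ft_channelrepair(cfg, data);
```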

a7_prep_data_for_analysis

  • Remove ICA components labeled blink, move, heart, ref, art & emg (see the sketch below)
  • Remove artifact-heavy trials
  • Interpolate artifact-heavy channels
  • Output: preprocessed data as 'sub-XXX_task-xxxx_eeg_art.mat'
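A sketch of the component removal in FieldTrip; the component indices are placeholders for the manually assigned labels from a4:

```matlab
cfg           = [];
cfg.component = [1 3 7];  % placeholder: ICs labeled as artifactual
data_clean = ft_rejectcomponent(cfg, comp, data);
```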

DataLad datasets and how to use them

This repository is a DataLad dataset. It provides fine-grained data access down to the level of individual files and allows for tracking future updates. In order to use this repository for data retrieval, DataLad is required. It is a free and open-source command-line tool, available for all major operating systems, that builds on Git and git-annex to allow sharing, synchronizing, and version-controlling collections of large files. You can find information on how to install DataLad at handbook.datalad.org/en/latest/intro/installation.html.

Get the dataset

A DataLad dataset can be cloned by running

datalad clone <url>

Once a dataset is cloned, it is a light-weight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not the actual content of the (sometimes large) data files.

Retrieve dataset content

After cloning a dataset, you can retrieve file contents by running

datalad get <path/to/directory/or/file>

This command will trigger a download of the files, directories, or subdatasets you have specified.

DataLad datasets can contain other datasets, so-called subdatasets. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run

datalad get -n <path/to/subdataset>

Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with `datalad get`. If you use `datalad get <path/to/subdataset>`, all contents of the subdataset will be downloaded at once.
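For example, a complete sequence from clone to content retrieval could look like this; `<url>`, `<dataset-directory>`, and the subdataset path are placeholders:

```
datalad clone <url>              # light-weight clone of the top-level dataset
cd <dataset-directory>
datalad get -n data/rawdata      # file availability metadata only
datalad get data/rawdata         # all file content of the subdataset
```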

Stay up-to-date

DataLad datasets can be updated. The command `datalad update` will fetch updates and store them on a different branch (by default `remotes/origin/master`). Running

datalad update --merge

will pull available updates and integrate them in one go.

Find out what has been done

DataLad datasets contain their history in the Git log. By running `git log` (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files, by whom, and when.
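For example (illustrative invocations; any standard Git history tooling works):

```
git log --oneline            # compact history of the whole dataset
git log --follow README.md   # history of a single file, following renames
```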

More information

More information on DataLad and how to use it can be found in the DataLad Handbook at handbook.datalad.org. The chapter "DataLad datasets" can help you to familiarize yourself with the concept of a dataset.

datacite.yml
Title: EEGmanypipelines Preprocessing
Authors: Kosciessa, Julian (Max Planck Institute for Human Development; ORCID: 0000-0002-4553-2794)
Description: Preprocessing code and data for the EEGmanypipelines project
License: Creative Commons Attribution-ShareAlike 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
References:
Funding:
Keywords: Neuroscience; Electrophysiology; EEGmanypipelines
Resource Type: Dataset