Pre- and post-training EEG and behavioral data of multi- vs. unisensory training

README.md

Uni- vs. Multisensory Training

Pre- and post-training EEG and behavioral data of multi- vs. unisensory training.

The data are provided in the RawData folder.

The EEG and event-file folder structure for each group is self-explanatory. Subjects Y001 and Y002 from the MusicPlast group and subjects Y023 and Y025 from the UniPlast group were measured using Brain Products' actiCAP-128 cap, while all other subjects were measured using Brain Products' R-Net.
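To check which montage a given recording uses, the header can be inspected before loading the data. The snippet below is a minimal sketch, assuming FieldTrip is on the MATLAB path and that the recordings are stored as BrainVision files; the file path is hypothetical and should be adapted to the actual folder structure.

```matlab
% Hypothetical path into the RawData folder structure
vhdrFile = fullfile('RawData', 'UniPlastGroup', 'Y023', 'Pre', ...
                    'ERP_Recording', 'Y023_Pre.vhdr');

hdr = ft_read_header(vhdrFile);                      % FieldTrip reads BrainVision headers
fprintf('%s: %d channels\n', vhdrFile, hdr.nChans);  % channel count of this recording
disp(hdr.label(1:10));                               % channel labels reveal the cap layout
```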

EEG was recorded using a multisensory mismatch paradigm with the following conditions: Congruent, Incongruent, Auditory mismatch, and Visual mismatch.

The event files contain two types of triggers: one defining the condition (1: Congruent, 2: Incongruent, 4: Auditory mismatch, 8: Visual mismatch) and one defining the time point of stimulus presentation (64). Hence, time point 0 of stimulus presentation is trigger 64, and the condition is defined by the trigger immediately preceding it. The Congruent condition carries three consecutive triggers of 64, so its time point 0 is the trigger 64 whose preceding and following triggers are also 64. A sketch of this decoding logic is given below.
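The following MATLAB sketch illustrates this decoding. It assumes the event file has already been read into a vector of trigger codes (`code`) and a vector of the corresponding sample indices (`sampleIdx`); these variable names are illustrative, not part of the dataset.

```matlab
% Sketch of the trigger decoding described above (illustrative, not the original code)
condNames = containers.Map({2, 4, 8}, {'Incongruent', 'AuditoryMismatch', 'VisualMismatch'});

onsets = struct('sample', {}, 'condition', {});        % one entry per stimulus onset
for k = 2:numel(code)
    if code(k) ~= 64, continue; end                    % trigger 64 marks stimulus presentation
    prev = code(k-1);
    if isKey(condNames, prev)                          % condition named by the preceding trigger
        onsets(end+1) = struct('sample', sampleIdx(k), 'condition', condNames(prev));
    elseif prev == 64 && k < numel(code) && code(k+1) == 64
        % Congruent trials (condition trigger 1) carry three consecutive 64s;
        % time point 0 is the middle one.
        onsets(end+1) = struct('sample', sampleIdx(k), 'condition', 'Congruent');
    end
end
```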

The file Cognitive.sav is an SPSS file containing d-prime and cognitive evaluation data for each subject.

The file MIWithinBetweenBandsAllSubjectsAllTimesAllConditions2.m contains the code for estimating GCMI on the single-trial data. It assumes that the GCMI toolbox and Brainstorm are available. Once the Hilbert-transformed beamformer data have been estimated for each trial in Brainstorm, they are exported, and the provided code estimates connectivity for each trial. A simple mean over the trials of each subject and condition then compiles that subject's hypernetwork, which is subjected to statistical analysis via the Network Based Statistics toolbox. The sketch below illustrates the per-trial estimation and averaging steps.
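The following is a minimal sketch of these steps for one subject and one condition; it is not the provided script. It assumes the GCMI toolbox (function `gcmi_cc`) is on the path and that the exported Hilbert-transformed source signals of trial t are available as a complex matrix `trials{t}` of size [nSources x nSamples] (hypothetical variable).

```matlab
nTrials  = numel(trials);
nSources = size(trials{1}, 1);
conn = zeros(nSources, nSources, nTrials);             % per-trial connectivity matrices

for t = 1:nTrials
    sig = trials{t};
    for i = 1:nSources
        for j = i+1:nSources
            % Represent each analytic signal by its real and imaginary parts
            % (samples x 2) and estimate Gaussian-copula mutual information.
            x  = [real(sig(i, :)); imag(sig(i, :))].';
            y  = [real(sig(j, :)); imag(sig(j, :))].';
            mi = gcmi_cc(x, y);
            conn(i, j, t) = mi;
            conn(j, i, t) = mi;
        end
    end
end

% Simple mean over trials: this subject's connectivity matrix for one condition.
% Stacking such matrices across frequency-band pairs yields the hypernetwork
% entered into the Network Based Statistics toolbox.
subjectConn = mean(conn, 3);
```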

datacite.yml
Title Unravelling the multisensory learning advantage: Different patterns of within and across frequency-specific interactions drive uni- and multisensory neuroplasticity
Authors Paraskevopoulos, Evangelos; UCY
Anagnostopoulou, Alexandra; AUTH
Chalas, Nikolas; IBB
Karagianni, Maria; AUTH
Bamidis, Panagiotis; AUTH
Description In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro-level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. The outcomes confirm the superior efficacy of multisensory learning in enhancing cognitive processes and improving multisensory processing. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with the behavioral and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
License Creative Commons CC0 1.0 Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
References Citation1 [doi:10.xxx/zzzz] (IsSupplementTo)
Funding HFRI: 2089
Keywords Neuroscience
Multisensory learning
Resource Type Dataset