BIDS conversion tutorial data: human EEG data recorded in a single session while the participant performed an auditory FPAS-oddball task.

Repository contents:

  • code
  • sourcedata
  • sub-01
  • CHANGES
  • LICENSE
  • README.md
  • dataset_description.json
  • participants.json
  • participants.tsv

README

This repository contains sample files from an EEG experiment exploring non-native accent adaptation.

Description of the experiment

The EEG experiment used frequency tagging: specifically, we combined an oddball paradigm with Fast Periodic Auditory Stimulation (FPAS), a technique used to explore passive and automatic perception. The participants' ability to perceive non-native phonological contrasts was tested using a stream of syllables composed of standard and deviant sounds presented at fixed frequencies (e.g. [ba-ba-ba-ba-pa-ba-ba…]). Data were collected using BrainVision Recorder.
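
As a concrete illustration of the paradigm, the sketch below builds such a syllable stream in Python. The base rate and deviant spacing are hypothetical placeholders (the README does not state the actual presentation rates); the point is only that a deviant inserted every Nth syllable is "tagged" at base rate / N in the EEG spectrum.

```python
# Illustrative FPAS-oddball stimulus sequence. The 4 Hz base rate and
# the 1-in-5 deviant position are hypothetical values, not the actual
# experimental parameters.
BASE_RATE_HZ = 4.0   # hypothetical syllable presentation rate
DEVIANT_EVERY = 5    # hypothetical: every 5th syllable is a deviant
N_SYLLABLES = 20

sequence = [
    "pa" if (i + 1) % DEVIANT_EVERY == 0 else "ba"
    for i in range(N_SYLLABLES)
]
print(" ".join(sequence))  # ba ba ba ba pa ba ba ba ba pa ...

# A response to the deviant is expected at base rate / N:
print(f"deviant frequency: {BASE_RATE_HZ / DEVIANT_EVERY:.1f} Hz")
```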

Files contained in the repository

  • EEG DATA FILES (see the loading sketch after this list):
    • sub-01_task-fpasOddball_eeg.eeg: the raw EEG data.
    • sub-01_task-fpasOddball_eeg.vhdr: Brain Vision Data Exchange Header File; contains information on the electrodes used during testing.
    • sub-01_task-fpasOddball_eeg.vmrk: Brain Vision Data Exchange Marker File; contains information on the markers used during testing.
  • Files produced by BIDS conversion (AV2BIDS):
    • dataset_description.json
    • participants.json
    • participants.tsv
    • sub-01_task-fpasOddball_channels.tsv
    • sub-01_task-fpasOddball_coordsystem.json
    • sub-01_task-fpasOddball_eeg.json
    • sub-01_task-fpasOddball_electrodes.tsv
    • sub-01_task-fpasOddball_events.tsv
  • CODE:
    • FREQUENCY_TAGGING-ANALYSIS-GIN_VERSION.ipynb: a Jupyter Notebook that runs some preprocessing steps, detects event markers/triggers, and generates a figure of the PSD and SNR spectrum of the data. You should be able to read it directly in the repository.
  • PLOT:
    • FREQ_TAGGING-EXAMPLE_PLOT.png: the PSD and SNR spectrum plot you should get if you run the code in the repository. NOTE: you will get different results if you change the markers in the script, as different markers correspond to different conditions.
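
To work with these files outside the notebook, any BrainVision-aware reader will do. The sketch below uses MNE-Python, which is an assumption on our part (the repository's notebook may rely on different tooling), and it assumes the recording sits under sub-01/eeg/ as in a standard BIDS layout.

```python
# Minimal loading sketch, assuming MNE-Python (not necessarily what the
# repository's notebook uses) and a standard BIDS path for the files.
import mne

raw = mne.io.read_raw_brainvision(
    "sub-01/eeg/sub-01_task-fpasOddball_eeg.vhdr",  # assumed BIDS path
    preload=True,
)
print(raw.info)         # channel names, sampling rate, etc.
print(raw.annotations)  # markers imported from the .vmrk file
```

Only the .vhdr path is needed; the header references the accompanying .eeg and .vmrk files.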

How to use

You can run the script on the EEG data contained in the repository. The script is designed to generate a PSD and SNR spectrum plot for one specific condition of the experiment (the English Speaker Control condition); other conditions can be plotted by changing the relevant markers (in the "Estimation of stream duration" section of the script). A sketch of the underlying PSD/SNR computation follows.
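
For reference, the sketch below shows the PSD-then-SNR computation typical of frequency-tagging analyses; the notebook's exact parameters (window length, number of neighbouring bins) may differ, and the random signal here stands in for a real EEG channel.

```python
# Sketch of a frequency-tagging SNR spectrum: the power at each
# frequency bin divided by the mean power of surrounding bins, so
# steady-state responses stand out as peaks. Parameters are illustrative.
import numpy as np
from scipy.signal import welch

def snr_spectrum(psd, n_neighbors=10, n_skip=1):
    """Divide each bin by the mean of its neighbours, skipping the
    n_skip bins immediately adjacent to reduce spectral leakage."""
    snr = np.ones_like(psd)
    for i in range(len(psd)):
        lo = psd[max(0, i - n_neighbors - n_skip): max(0, i - n_skip)]
        hi = psd[i + 1 + n_skip: i + 1 + n_skip + n_neighbors]
        neighbors = np.concatenate([lo, hi])
        if neighbors.size and neighbors.mean() > 0:
            snr[i] = psd[i] / neighbors.mean()
    return snr

fs = 500.0                              # hypothetical sampling rate
signal = np.random.randn(int(fs) * 60)  # stand-in for one EEG channel
freqs, psd = welch(signal, fs=fs, nperseg=int(fs) * 10)  # 0.1 Hz bins
snr = snr_spectrum(psd)
```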

Markers

Further details on the experiment's conditions: as mentioned above, this experiment tested non-native accent adaptation. We used a training-and-test paradigm; that is, we exposed listeners to a non-native accent and then tested their adaptation to it. To capture changes in accent adaptation, we collected EEG data with the FPAS-oddball task before and after training. We tested two non-native consonants, P and J. In the case of P, we also tested whether listeners generalise adaptation to a different speaker of the same accent and whether it changes how they perceive a native English speaker. Finally, we included two control conditions.

Markers list (beginning/end of each condition; see the segment-extraction sketch after this list):

  • BASELINE P CONSONANT NON-NATIVE SPEAKER - 110 / 111
  • BASELINE J CONSONANT NON-NATIVE SPEAKER - 120 / 121
  • CONTROL NON-NATIVE SPEAKER - 130 / 131
  • TEST P CONSONANT MAIN NON-NATIVE SPEAKER - 140 / 141
  • TEST P CONSONANT SECOND NON-NATIVE SPEAKER - 150 / 151
  • TEST P CONSONANT ENGLISH NATIVE SPEAKER - 160 / 161
  • TEST J CONSONANT MAIN NON-NATIVE SPEAKER - 170 / 171
  • CONTROL NON-NATIVE SPEAKER - 180 / 181
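
As an illustration of how these markers can be used, the sketch below crops the recording to a single condition from its start/end markers, again assuming MNE-Python. In MNE, BrainVision markers usually import as annotations with descriptions like 'Stimulus/S110'; verify the exact labels in your own data.

```python
# Sketch: crop the recording to one condition using its start/end
# markers (110/111, the P-consonant non-native-speaker baseline).
# Assumes MNE-Python and the marker naming noted above.
import mne

raw = mne.io.read_raw_brainvision(
    "sub-01/eeg/sub-01_task-fpasOddball_eeg.vhdr", preload=True
)

def marker_onset(raw, label):
    """Onset (in seconds) of the first annotation containing `label`."""
    for ann in raw.annotations:
        if label in ann["description"]:
            return ann["onset"]
    raise ValueError(f"marker {label!r} not found")

start = marker_onset(raw, "S110")  # condition start marker
end = marker_onset(raw, "S111")    # condition end marker
condition = raw.copy().crop(tmin=start, tmax=end)
print(condition)
```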