# README

## Contact person

- Rémi Gau
- email: remi.gau@gmail.com
- ORCID: 0000-0002-1535-9767

## Access to the data

See the DataLad section below.

## Overview

- [ ] Project name (if relevant)

- [x] Year(s) that the project ran

  - from 2007 to 2010

- [ ] Brief overview of the tasks in the experiment

A paragraph giving an overview of the experiment. This should include the goals
or purpose and a discussion about how the experiment tries to achieve these
goals.

- [ ] Description of the contents of the dataset

An easy thing to add is the output of the bids-validator, which describes what
type of data and the number of subjects one can expect to find in the dataset
(see the example command after this checklist).

- [ ] Independent variables

A brief discussion of condition variables (sometimes called contrasts or
independent variables) that were varied across the experiment.

- [ ] Dependent variables

A brief discussion of the response variables (sometimes called the dependent
variables) that were measured and/or calculated to assess the effects of varying
the condition variables. This might also include questionnaires administered to
assess behavioral aspects of the experiment.

- [ ] Control variables

A brief discussion of the control variables, that is, which aspects were
explicitly controlled in this experiment. These might include the subject pool,
environmental conditions, the setup, or anything else that was deliberately
held constant.

- [ ] Quality assessment of the data
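
As a sketch for the "Description of the contents" item above: such a summary can
be generated with the BIDS validator command line tool (its availability, for
example through npm, is an assumption here), run against the root of the dataset:

```
# prints a summary of subjects, modalities, and any validation issues
bids-validator .
```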

## Methods

### Apparatus

A summary of the equipment and environment setup for the experiment. For
example, was the experiment performed in a shielded room with the subject seated
in a fixed position?

### Initial setup

A summary of what setup was performed when a subject arrived.

### Task organization

How the tasks were organized for a session. This is particularly important
because BIDS datasets usually have task data separated into different files.

- [ ] Was task order counter-balanced?
- [ ] What other activities were interspersed between tasks?
- [ ] In what order were the tasks and other activities performed?

### Task details

As much detail as possible about the task and the events that were recorded.

### Missing data

Mention here if some participants are missing some aspects of the data. This
can take the form of a processing log and/or a note on any abnormalities in the
dataset.

Some examples:

- A brain lesion or defect only present in one participant.
- Some experimental conditions missing on a given run for a participant because
  of some technical issue.
- Any noticeable feature of the data for certain participants.
- Differences (even slight) in protocol for certain participants.

---

[![made-with-datalad](https://www.datalad.org/badges/made_with.svg)](https://datalad.org)

## DataLad datasets and how to use them

This repository is a [DataLad](https://www.datalad.org/) dataset. It provides
fine-grained data access down to the level of individual files, and allows for
tracking future updates. In order to use this repository for data retrieval,
[DataLad](https://www.datalad.org/) is required. It is a free and open source
command line tool, available for all major operating systems, and builds on
Git and [git-annex](https://git-annex.branchable.com/) to allow sharing,
synchronizing, and version controlling collections of large files. You can find
information on how to install DataLad at
[handbook.datalad.org/en/latest/intro/installation.html](http://handbook.datalad.org/en/latest/intro/installation.html).
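
On a system that already has Python and git-annex available, one common route is
installation via pip (this is only a sketch; the handbook page linked above
covers the platform-specific details):

```
pip install datalad
# confirm that the command line tool is available
datalad --version
```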

### Get the dataset

A DataLad dataset can be `cloned` by running

```
datalad install git@gin.g-node.org:/RemiGau/example_ephys_bids_conversion.git
```

Once a dataset is cloned, it is a lightweight directory on your local machine.
At this point, it contains only small metadata and information on the identity
of the files in the dataset, but not the actual _content_ of the (sometimes
large) data files.
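
For example, assuming the clone ended up in a directory named after the
repository, you can get an overview of how much annexed file content is actually
present locally:

```
cd example_ephys_bids_conversion
# summarize annexed file content and how much of it is available locally
datalad status --annex
```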

### Retrieve dataset content

After cloning a dataset, you can retrieve file contents by running

```
datalad get <path/to/directory/or/file>
```

This command will trigger a download of the files, directories, or subdatasets
you have specified.
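
For instance, to retrieve the data of a single subject directory such as
`sub-112`:

```
datalad get sub-112
```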

DataLad datasets can contain other datasets, so-called _subdatasets_. If you
clone the top-level dataset, subdatasets do not yet contain metadata and
information on the identity of files, but appear to be empty directories. In
order to retrieve file availability metadata in subdatasets, run

```
datalad get -n <path/to/subdataset>
```

Afterwards, you can browse the retrieved metadata to find out about subdataset
contents, and retrieve individual files with `datalad get`. If you use
`datalad get <path/to/subdataset>`, all contents of the subdataset will be
downloaded at once.
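
As a concrete sketch for this dataset, where `sourcedata/` is a subdataset:

```
# obtain the subdataset's file availability metadata without downloading content
datalad get -n sourcedata
# then retrieve all of its content (or pass individual paths instead)
datalad get sourcedata
```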

### Stay up-to-date

DataLad datasets can be updated. The command `datalad update` will _fetch_
updates and store them on a different branch (by default
`remotes/origin/master`). Running

```
datalad update --merge
```

will _pull_ available updates and integrate them in one go.
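
If you prefer to review incoming changes before integrating them, a two-step
variant (assuming the default `remotes/origin/master` tracking branch mentioned
above) is:

```
# fetch updates without touching the local branch
datalad update
# inspect what would come in
git log --oneline master..remotes/origin/master
# integrate the updates
datalad update --merge
```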

### Find out what has been done

DataLad datasets contain their history in the `git log`. By running `git log`
(or a tool that displays Git history) in the dataset or on specific files, you
can find out what has been done to the dataset or to individual files by whom,
and when.
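
For example, to see the history of a single file such as `participants.tsv`:

```
git log --oneline -- participants.tsv
```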

### More information

More information on DataLad and how to use it can be found in the DataLad
Handbook at
[handbook.datalad.org](http://handbook.datalad.org/en/latest/index.html). The
chapter "DataLad datasets" can help you to familiarize yourself with the concept
of a dataset.