Top-level contents: MLA005756/, MLA005757/, MLA007518/, MLA007519/ (per-animal data), resources/, format-eyevideos.sh, README.md, LICENSE, datacite.yml, .gitignore

README.md

Raw data for: Bergmann, Sehara et al., 2022

This is the raw-data repository for the project "AirtrackMotorPlanning", which examines whether whisker and pupil positions predict the animal's motor plans.

The dataset stored here is part of the root data repository.

Note that this dataset contains the unaligned, unprocessed data. If you intend to work with the 'final' dataset (processed and aligned), refer to the formatted dataset.

Contents

File organization

File names

The files are organized as: <ANIMAL>/<SESSION>/<DOMAIN>/<ANIMAL>_<SESSION>_<DOMAIN>_run<TRIAL>, where:

  • ANIMAL is the ID of the animal (in the pattern MLAxxxxxx).
  • SESSION is the name of the session (in the pattern sessionYYYY-MM-DD-001).
  • DOMAIN is the domain of recording (see below).
  • TRIAL is the trial index within the session.
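As an illustration, the naming scheme above can be parsed with a short script. This is a hedged sketch rather than part of the dataset tooling; the example path (including its date) is made up, and the regular expression simply encodes the patterns stated above.

```python
import re

# Hypothetical helper: parse a path of the form
# <ANIMAL>/<SESSION>/<DOMAIN>/<ANIMAL>_<SESSION>_<DOMAIN>_run<TRIAL>
# (the file extension, e.g. ".avi" or ".csv", varies by domain).
PATH_PATTERN = re.compile(
    r"(?P<animal>MLA\d{6})/"
    r"(?P<session>session\d{4}-\d{2}-\d{2}-\d{3})/"
    r"(?P<domain>[^/]+)/"
    r"(?P=animal)_(?P=session)_(?P=domain)_run(?P<trial>\d+)"
)

def parse_data_path(path):
    """Return a dict of animal, session, domain, and trial, or None on mismatch."""
    m = PATH_PATTERN.match(path)
    if m is None:
        return None
    info = m.groupdict()
    info["trial"] = int(info["trial"])  # strip zero-padding from the run index
    return info
```

For example, `parse_data_path("MLA005756/session2019-07-01-001/top/MLA005756_session2019-07-01-001_top_run000001.avi")` (a made-up session date) yields the animal, session, domain, and trial number 1.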

Domain names

| Domain name | Description |
| --- | --- |
| top | Trial-based AVI videos taken at 200 Hz using ZR View (a custom video-acquisition program) under near-UV lighting. See Acquisition for more details. |
| states | Trial-based CSV files containing manual annotations of behavioral states, based on the top-view videos. See Behavioral state annotation for more details. |
| left-eye | Trial-based AVI videos of the left eye, taken at 100 Hz using PylonPD (a custom video-acquisition program) under IR lighting. |
| right-eye | Trial-based AVI videos of the right eye, taken at 100 Hz using PylonPD (a custom video-acquisition program) under IR lighting. |
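To gather one domain's files for a session, a glob over the directory layout suffices. The following is a minimal sketch assuming the layout described under "File names"; the root path and IDs passed in are placeholders, and trial order follows from lexicographic sorting only when the run indexes are zero-padded.

```python
from pathlib import Path

def domain_files(root, animal, session, domain):
    """Return the sorted trial files under <root>/<ANIMAL>/<SESSION>/<DOMAIN>.

    Assumes the <ANIMAL>_<SESSION>_<DOMAIN>_run<TRIAL> naming scheme
    described above.
    """
    directory = Path(root) / animal / session / domain
    return sorted(directory.glob(f"{animal}_{session}_{domain}_run*"))

# Example call (placeholder root and session; adjust to your checkout):
# left_eye_videos = domain_files(".", "MLA005756", "session2019-07-01-001", "left-eye")
```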

Acquisition

Below is a schematic of the acquisition setup. The "Start/Stop" signal from the Arduino Mega controls the timing of the beginning and the end of acquisition for each domain.

[Figure: schematic of trial-based acquisition]

Behavioral state annotation

Names of states

The basic idea is to note the range of frames during which the subject performs a particular action:

  1. Standing still
    • at the end of a corridor
      • before starting another trial (AtEnd)
      • waiting for the lick port to come (Expect)
    • at the junction between a corridor and the central arena (AtCenter)
    • midway along a corridor (AtMidpoint)
  2. Moving backward in a corridor (Backward)
  3. Moving forward in a corridor (Forward)
  4. Turning in the central arena
    • to the left (Left)
    • to the right (Right)
  5. Licking from the lick port (Lick)

Criteria of transition

Transitions from Backward to Left, or vice versa, are among the most difficult decisions. Sina used the level of the subject's eyes as the reference point, and considered the animal to be "in the lane" (i.e. Backward or Forward) as long as its eyes were still inside the corridor.

Another difficulty is determining when the animal starts to move. You will have to decide case by case, based on e.g. the consistency of motion and position.

Format

You can keep it as a CSV file, consisting of entries like below:

| Trial | FromFrame | ToFrame | State | notes |
| --- | --- | --- | --- | --- |
| 1 | 1 | 4036 | AtEnd | |
| 1 | 4037 | 4385 | Backward | |
| 1 | 4386 | 4473 | Right | (ambiguous transition) |
| 1 | 4480 | 4702 | Forward | |
| ... | ... | ... | ... | ... |
  1. You can enter multiple trials in one file if you want; in any case, it is better to have a column holding the trial numbers.
  2. Frames start from one, in accordance with how ZR View counts frames.
  3. Each entry must contain both the start (inclusive) and the stop (inclusive) frame indexes. Because it is sometimes hard to determine the exact transitions, you may want to leave ambiguous frames unannotated between entries.
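When analyzing the annotations, the inclusive, 1-based ranges can be expanded into a per-frame lookup. This is a hedged sketch (the `states_per_frame` helper is not part of the dataset; the column names follow the example table above, and skipped ambiguous frames simply stay absent from the mapping):

```python
import csv
import io

def states_per_frame(csv_text, trial):
    """Expand one trial's (FromFrame, ToFrame) ranges into a frame -> state dict.

    Frames are 1-based and both endpoints are inclusive, as described above.
    """
    frame_state = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if int(row["Trial"]) != trial:
            continue
        for frame in range(int(row["FromFrame"]), int(row["ToFrame"]) + 1):
            frame_state[frame] = row["State"]
    return frame_state

example = """Trial,FromFrame,ToFrame,State,notes
1,1,4036,AtEnd,
1,4037,4385,Backward,
1,4386,4473,Right,(ambiguous transition)
1,4480,4702,Forward,
"""
states = states_per_frame(example, trial=1)
# frames 4474-4479 were left unannotated, so they are absent from `states`
```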

Copyright (c) 2022, Ronny Bergmann, Keisuke Sehara, Sina E. Dominiak, Julien Colomb, Jens Kremkow, Matthew E. Larkum, Robert N. S. Sachdev, CC-BY 4.0

datacite.yml
Title Raw data for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack)
Authors Bergmann,Ronny;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.;ORCID:0000-0002-1477-7502
Sehara,Keisuke;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.;ORCID:0000-0003-4368-8143
Dominiak,Sina E.;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.
Colomb,Julien;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.;ORCID:0000-0002-3127-5520
Kremkow,Jens;Charité–Universitätsmedizin Berlin, Berlin, 10117 Germany.;ORCID:0000-0001-7077-4528
Larkum,Matthew E.;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.;ORCID:0000-0001-9799-2656
Sachdev,Robert N.S.;Institut für Biologie, Humboldt Universität zu Berlin, Berlin, 10117 Germany.;ORCID:0000-0002-6627-0199
Description Raw data repository for Bergmann R, Sehara K, Dominiak SE, Kremkow J, Larkum ME, Sachdev RNS, 2022 eNeuro "Coordination between eye movement and whisking in head fixed mice navigating a plus-maze". This repository contains the un-aligned, unprocessed data.
License Creative Commons 4.0 Attribution (https://creativecommons.org/licenses/by/4.0/)
References Bergmann R, Sehara K, Dominiak SE, Kremkow J, Larkum ME, Sachdev RNS (2022) Coordination between eye movement and whisking in head fixed mice navigating a plus-maze. https://doi.org/10.1523/ENEURO.0089-22.2022 (IsSupplementTo)
Bergmann R, Sehara K, Dominiak SE, Colomb J, Kremkow J, Larkum ME, Sachdev RN (2022) Data for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack). G-Node. https://doi.org/10.12751/g-node.j9wxqe (IsPartOf)
Bergmann R, Sehara K, Dominiak SE, Colomb J, Kremkow J, Larkum ME, Sachdev RN (2022) DeepLabCut model repository for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack). G-Node. https://doi.org/10.12751/g-node.1kd2qf (IsSourceOf)
Bergmann R, Sehara K, Dominiak SE, Colomb J, Kremkow J, Larkum ME, Sachdev RN (2022) Whisker-tracking data for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack). G-Node. https://doi.org/10.12751/g-node.s62hni (IsSourceOf)
Bergmann R, Sehara K, Dominiak SE, Colomb J, Kremkow J, Larkum ME, Sachdev RN (2022) Eye-tracking data for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack). G-Node. https://doi.org/10.12751/g-node.5jz67s (IsSourceOf)
Bergmann R, Sehara K, Dominiak SE, Colomb J, Kremkow J, Larkum ME, Sachdev RN (2022) Formatted data for Bergmann, Sehara et al., 2022 eNeuro (eye-whisker coordination on Airtrack). G-Node. https://doi.org/10.12751/g-node.hhxe3v (IsSourceOf)
Funding EU, EU.670118
EU, EU.327654276
EU, EU.720270
EU, EU.785907
EU, EU.945539
DFG, DFG.250048060
DFG, DFG.246731133
DFG, DFG.267823436
Keywords Neuroscience
Behavioral tracking
Motor coordination
Whiskers
Pupil tracking
Airtrack
Resource Type Dataset