
add docs in README; renamed metadata

Keisuke Sehara, 4 years ago
commit 3090ac164f
8 changed files with 120 additions and 1 deletion
  1. DATASETS.json (+0 -0)
  2. README.md (+120 -1)
  3. REPOSITORY.json (+0 -0)
  4. SESSIONS.csv (+0 -0)
  5. SUBJECTS.csv (+0 -0)
  6. images/acquisition.png (BIN)
  7. images/airtrack_platform.jpg (BIN)
  8. images/task.png (BIN)

datasets_metadata.json → DATASETS.json


README.md  +120 -1

@@ -3,6 +3,125 @@
 Another attempt to organize the (derived) data set, to be uploaded
 to the Human Brain Project Collab.
 
-## License
+### Contents
+
+- [How this dataset was acquired](#desc-root)
+  - [Setup](#setup-desc)
+  - [Task](#task-desc)
+  - [Acquisition and basic analysis](#acquisition-desc)
+    - [Raw videos](#raw)
+    - [Behavioral states](#states)
+    - [Body-part tracking using UV paint](#tracking)
+- [How this dataset can be read](#walkthrough)
+- [Repository information](#repository-root)
+  - [Authors](#authors)
+  - [License](#license)
+
+<a link="desc-root" />
+
+## How this dataset was acquired
+
+<a link="setup-desc" />
+
+### Setup
+
+<a link="image-of-setup" />
+
+<img src="images/airtrack_platform.jpg" alt="Airtrack platform" style="zoom:33%;" />
+
+We used the Airtrack platform ([Nashaat et al., 2016](https://doi.org/10.1152/jn.00088.2016)) (above) to let a head-fixed mouse behave as if it were freely moving. Pumped air lets the plus-maze environment float on the table, so that the head-fixed mouse (only the head-post holder is shown at the center of the plus maze) can easily propel the environment.
+
+The position of the floating plus maze was monitored using an Arduino microcontroller, which controlled the behavioral task at the same time. For more details of the setup, please refer to https://www.neuro-airtrack.com.
+
+<a link="task-desc" />
+
+### Task
+
+<img src="images/task.png" alt="A schematic of the task" style="zoom:40%;" />
+
+The water-restricted, head-fixed mouse performed a simple plus-maze task:
+
+1. Each trial started when the mouse was at the end of a lane (not shown in the figure above).
+2. The mouse had to move backward (i.e. push the plus maze forward) to get to the central arena, and then propel the maze to find the correct, rewarding lane (**Left** image in the figure):
+   - There was an LED-based visual cue in front of the animal that signaled whether or not the lane in front of it was the correct one.
+   - If the lane in front of the animal was the correct one, the LED was _off_.
+   - If the lane in front of the animal was not the correct one, the LED was _on_.
+3. When the mouse locomoted to the end of the correct lane, it was allowed access to the lick port for a short (~2 s) period (**Right** image in the figure):
+   - The lick port was attached to a linear stage (i.e. the apparatus on the red mechanical arm in [the setup figure](#image-of-setup)), and was normally kept in the retracted position, out of the mouse's reach.
+   - The Airtrack microcontroller detected the position of the animal and moved the linear stage to bring the lick port right in front of it.
+
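+
+Below is a minimal, self-contained Python sketch of the trial logic described above. It is an illustration only: the actual task ran on the Arduino controller, and the function names and simulated lane sequence here are assumptions, not the real control code.
+
+```python
+# Minimal sketch of the trial logic (illustration only; the real task
+# ran on the Arduino controller, and these names are hypothetical).
+
+def led_is_on(lane_in_front: str, correct_lane: str) -> bool:
+    """The LED cue is ON unless the rewarding lane faces the animal."""
+    return lane_in_front != correct_lane
+
+def run_trial(visited_lanes, correct_lane="north"):
+    for lane in visited_lanes:
+        led = "on" if led_is_on(lane, correct_lane) else "off"
+        print(f"lane in front: {lane:6s}  LED: {led}")
+        if not led_is_on(lane, correct_lane):
+            print("mouse runs to the lane end; lick port advances for ~2 s")
+            return
+
+run_trial(["east", "south", "north"])  # simulated search for the rewarding lane
+```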
+
+
+<a link="acquisition-desc" />
+
+### Acquisition and basic analysis
+
+<img src="images/acquisition.png" alt="Sample images from high-speed video acquisition" style="zoom:50%;" />
+
+A high-speed camera recorded the whisking behavior of the animal.
+
+<a link="raw" />
+
+#### Raw videos
+
+The raw videos may be found in the `videos` domain of the `raw` dataset.
+
+<a link="states" />
+
+#### Behavioral states
+
+For some videos in the dataset, the behavioral states of the animal were manually annotated:
+
+- **AtEnd**: the animal was standing at the end of a lane, apparently not doing anything.
+- **Midway**: the animal was standing in the middle of a lane.
+- **Backward**: the animal was moving backward along a lane.
+- **Left/Right**: the animal was turning left or right (from the animal's point of view) in the central arena.
+- **Forward**: the animal was moving forward along a lane.
+- **Expect**: the animal was standing at the end of the correct, rewarding lane, waiting for the lick port to come forward.
+- **Lick**: the lick port was in front of the animal.
+
+The data was stored as CSV files (the `states` domain of the `tracking` dataset).
+
+Note that frame numbers in the videos are _one_-based, i.e. the first frame of a video is referred to as frame 1.
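+
+As an example, the one-based frame numbers can be aligned with zero-based Python indexing as follows. The file name and column names below are illustrative assumptions; check the headers of the actual CSV files.
+
+```python
+# Hypothetical example: the file name and the "frame"/"state" column
+# names are assumptions for illustration, not the actual schema.
+import pandas as pd
+
+states = pd.read_csv("states/session001.csv")
+
+# Annotated frame numbers are one-based; indexing into an array of video
+# frames in Python is zero-based, so subtract 1.
+states["frame_index"] = states["frame"] - 1
+print(states.head())
+```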
+
+<a link="tracking" />
+
+#### Body-part tracking using UV paint
+
+To facilitate tracking, a small amount of UV paint (["UV glow"](http://www.uvglow.co.uk/), to be precise) was applied to the body parts of interest. Some mice received paint only on their whiskers, while others had their noses painted as well (**Left** in the figure above).
+
+Tracking was performed using either of the following programs (both of which implement the same tracking algorithm):
+
+- the [Pixylator](https://doi.org/10.5281/zenodo.3516008) ImageJ plugin
+- the [videobatch](https://doi.org/10.5281/zenodo.3407666) Python module
+
+In brief, the procedures were as follows:
+
+1. A maximal-projection image for each video was computed (**Right** in the figure above; also see the `projection` domain of the `tracking` dataset).
+2. The ROIs for each body part of interest were manually determined (see the `ROI` domain of the `tracking` dataset).
+3. Within each ROI in every frame, pixels that had the specified hue (i.e. color-balance) values were collected, and the luminance-weighted average of their positions was defined as the tracked position of the body part.
+4. The tracked positions were stored as a CSV file (see the `tracked` domain of the `tracking` dataset).
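+
+As an illustration of step 3, a luminance-weighted centroid over hue-gated pixels could look like the NumPy sketch below. This is not the Pixylator/videobatch code; the HSV layout and the hue window are assumptions.
+
+```python
+# Sketch of step 3: hue-gated, luminance-weighted centroid within an ROI.
+# Illustration only; the hue window and HSV layout are assumptions.
+import numpy as np
+
+def track_in_roi(hsv_roi, hue_min=0.7, hue_max=0.9):
+    """Return the (row, col) position of the painted body part in one ROI.
+
+    hsv_roi: float array of shape (H, W, 3), hue/saturation/value in [0, 1].
+    """
+    hue, value = hsv_roi[..., 0], hsv_roi[..., 2]
+    mask = (hue >= hue_min) & (hue <= hue_max)   # collect pixels with the specified hue
+    if not mask.any():
+        return None                              # paint not visible in this ROI
+    weights = value * mask                       # luminance-based weights
+    rows, cols = np.indices(mask.shape)
+    total = weights.sum()
+    return (rows * weights).sum() / total, (cols * weights).sum() / total
+```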
+
+<a link="walkthrough" />
+
+## How this dataset can be read
+
+We prepared the [helper.py](helper.py) Python module to ease reading the dataset. Please check the [walkthrough.ipynb](walkthrough.ipynb) Jupyter notebook to see how to use it to read the dataset.
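+
+If you only need the top-level metadata tables, they are plain CSV files and can also be read directly, e.g. with pandas (the columns depend on the files themselves):
+
+```python
+# Read the session/subject metadata tables at the repository root.
+import pandas as pd
+
+sessions = pd.read_csv("SESSIONS.csv")
+subjects = pd.read_csv("SUBJECTS.csv")
+print(sessions.head())
+print(subjects.head())
+```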
+
+General information about Jupyter can be found [here](https://jupyter.org/).
+
+<a link="repository-root" />
+
+## Repository information
+
+<a link="authors" />
+
+### Authors
+
+Please refer to [REPOSITORY.json](REPOSITORY.json).
+
+<a link="license" />
+
+### License
 
 [Creative Commons CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/)

repository_metadata.json → REPOSITORY.json


sessions_metadata.csv → SESSIONS.csv


subjects_metadata.csv → SUBJECTS.csv


images/acquisition.png (BIN)

images/airtrack_platform.jpg (BIN)

images/task.png (BIN)