Repository contents:

- LICENSE
- README.md
- datacite.yml
- dynamic29156-11-10-Video-8744edeac3b4d1ce16b680916b5267ce.zip
- dynamic29228-2-10-Video-8744edeac3b4d1ce16b680916b5267ce.zip
- dynamic29234-6-9-Video-8744edeac3b4d1ce16b680916b5267ce.zip
- dynamic29513-3-5-Video-8744edeac3b4d1ce16b680916b5267ce.zip
- dynamic29514-2-9-Video-8744edeac3b4d1ce16b680916b5267ce.zip
- factorised_baseline.pth
- gru_baseline.pth

README.md

Data for the Sensorium 2023 competition

!!! Note: GIN does not support bulk downloads. You need to download the files individually !!!

This dataset does not include out-of-distribution (OOD) responses. A version that additionally contains OOD responses can be downloaded here: Link

Dataset Structure

Below we provide a brief explanation of the dataset structure and how to access the information contained in it.

Have a look at our white paper for an in-depth description of the data: White paper on arXiv

We provide the datasets in .zip format. Unzipping an archive creates two folders, data and meta.

  • data: includes the variables that were recorded during the experiment. The experimental variables are saved as collections of NumPy arrays. Each NumPy array contains the value of that variable for a specific stimulus presentation (i.e. trial). Note that the file names carry no information about the order or time at which the trials took place in experimental time; the trials are randomly ordered.
    • videos: This directory contains NumPy arrays where each single X.npy contains the video that was shown to the mouse in trial X.
    • responses: This directory contains NumPy arrays where each single X.npy contains the deconvolved calcium traces (i.e. responses) recorded from the mouse in trial X in response to the presented video.
    • behavior: Behavioral variables include pupil dilation and running speed. The directory contains NumPy arrays (of size 1 x 2) where each single X.npy contains the behavioral variables (in the order listed above) for trial X.
    • pupil_center: the eye position of the mouse, estimated as the center of the pupil. The directory contains NumPy arrays (of size 1 x 2) for horizontal and vertical eye positions.
  • meta: includes meta data of the experiment

    • neurons: This directory contains neuron-specific information. Below is a list of important variables in this directory:
      • cell_motor_coordinates.npy: contains the position (x, y, z) of each neuron in the cortex, given in microns.
    • statistics: This directory contains statistics (i.e. mean, median, etc.) of the experimental variables (i.e. behavior, images, pupil_center, and responses).

      • Note: The statistics of the responses are of particular importance, because the responses we provide are deconvolved calcium traces.

      However, for the evaluation of submissions in the competition, we require the responses to be standardized (i.e. r = r / std_r, each neuron's response divided by its standard deviation).

    • trials: This directory contains trial-specific meta data.

      • tiers.npy: contains labels that are used to split the data into train, validation, and test sets.
        • The training and validation split is only present for convenience, and is used by our ready-to-use PyTorch DataLoaders.
        • The test set is used to evaluate model performance. In the competition datasets, the responses to all test stimuli are withheld.
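Putting the structure above together, here is a minimal sketch of how one might read a trial and standardize its responses with NumPy. The directory names follow the layout described above, but the array shapes, the trial index, and the per-neuron standard-deviation computation are illustrative assumptions (for a real submission, use the precomputed values under meta/statistics). To keep the example self-contained, it first writes a tiny synthetic tree in the same layout.

```python
import tempfile
from pathlib import Path

import numpy as np

# Assumed layout, following the structure described above:
#   <root>/data/{videos,responses,behavior,pupil_center}/<X>.npy
#   <root>/meta/trials/tiers.npy
# Shapes below are illustrative, not the real recording dimensions.
root = Path(tempfile.mkdtemp())
n_trials, n_neurons, n_frames = 4, 8, 30
rng = np.random.default_rng(0)
for sub in ("videos", "responses", "behavior", "pupil_center"):
    (root / "data" / sub).mkdir(parents=True)
(root / "meta" / "trials").mkdir(parents=True)
for x in range(n_trials):
    np.save(root / "data" / "videos" / f"{x}.npy", rng.random((n_frames, 36, 64)))
    np.save(root / "data" / "responses" / f"{x}.npy", rng.random((n_neurons, n_frames)))
    # behavior: pupil dilation and running speed, in that order
    np.save(root / "data" / "behavior" / f"{x}.npy", rng.random((1, 2)))
    # pupil_center: horizontal and vertical eye position
    np.save(root / "data" / "pupil_center" / f"{x}.npy", rng.random((1, 2)))
np.save(root / "meta" / "trials" / "tiers.npy",
        np.array(["train", "train", "validation", "test"]))


def load_trial(root: Path, x: int) -> dict:
    """Load every experimental variable recorded for trial x."""
    return {sub: np.load(root / "data" / sub / f"{x}.npy")
            for sub in ("videos", "responses", "behavior", "pupil_center")}


def standardize(responses: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Standardize responses as required for submissions: r / std_r."""
    return responses / std


trial = load_trial(root, 0)
# Illustration only: per-neuron std computed from this one trial.
# In practice, take the std from the meta/statistics directory instead.
std_r = trial["responses"].std(axis=1, keepdims=True)
r_standardized = standardize(trial["responses"], std_r)

# Split trials by tier, as our ready-to-use PyTorch DataLoaders do.
tiers = np.load(root / "meta" / "trials" / "tiers.npy")
train_trials = np.flatnonzero(tiers == "train")
```

The tiers array can then drive the train/validation/test split; note that for the competition datasets the test responses are withheld, so only the stimuli and behavior are available for those trials.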

License

This data is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. This license requires that you contact us before using the data in your own research. In particular, you must ask for permission if you intend to publish a new analysis performed with this data (no-derivatives clause).


datacite.yml
Title The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos - Dataset
Authors Fahey,Paul;Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA;ORCID:0000-0001-6844-3551
Turishcheva,Polina;Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
Hansel,Laura;Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
Froebe,Rachel;Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
Ponder,Kayla;Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
Vystrcilová,Michaela;Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
Willeke,Konstantin;International Max Planck Research School for Intelligent Systems, University of Tübingen, Germany
Bashiri,Mohammad;International Max Planck Research School for Intelligent Systems, University of Tübingen, Germany
Tolias,Andreas;Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
Sinz,Fabian;Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
Ecker,Alexander;Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany
Description Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input. However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Competition with dynamic input. This includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input. We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
License Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
References [doi:10.48550] (IsDescribedBy)
Funding EXC 2064/1
SFB 1233
SFB 1528
NSF 1707400
Keywords Neuroscience
Predictive models
Mouse visual cortex
System identification
Resource Type Dataset