This repository contains code for analyzing data and producing figure panels as presented in the manuscript "Mouse visual cortex areas represent perceptual and semantic features of learned visual categories" by Pieter M. Goltstein, Sandra Reinert, Tobias Bonhoeffer and Mark Hübener (Max Planck Institute of Neurobiology).
Detailed instructions can be found below. In case of questions, please do not hesitate to contact us.
Download and install Python 3.7.10 (Anaconda distribution).
Follow instructions on https://anaconda.org
Download and install “Git” (this is optional, the code can also be downloaded manually).
Follow instructions on https://git-scm.com
Download and install the “GIN command line client” (this is optional, the data can also be downloaded manually).
Follow instructions on https://gin.g-node.org (under the 'Help' tab)
Open any command line shell (csh, bash, zsh, cmd.exe, etc.) and change the current directory to the drive/folder in which you would like to place the entire project. The dataset is approximately 98 GB, so make sure you have enough free disk space.
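To check the available space on the target drive, a minimal Python sketch (run it from the folder where you plan to place the project; the 98 GB threshold comes from the dataset size stated above):

```python
import shutil

# Free space on the drive containing the current directory, in GB.
free_gb = shutil.disk_usage(".").free / 1e9
print(f"Free disk space: {free_gb:.0f} GB")
if free_gb < 98:
    print("Warning: less than 98 GB free; the full dataset may not fit.")
```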
Log in on the gin server
gin login
Download the 'small file' dataset
gin get pgoltstein/category-learning-visual-areas
Download the 'large files' from within the repository folder
cd category-learning-visual-areas
gin get-content
Note that this can take a long time as it downloads about 98 GB of data.
Alternatively, you can download the data manually from gin.g-node:
https://gin.g-node.org/pgoltstein/category-learning-visual-areas
cd into the repository folder (called “category-learning-visual-areas”) if you are not already there.
Download the code (this will be placed in a newly created subfolder “code”)
git clone https://github.com/pgoltstein/category-learning-visual-areas.git code
Add a folder for the figure output
mkdir figureout
Your directory structure should look like this:
- category-learning-visual-areas (or any other name you chose for your base directory)
  - code
    - p1_behavioralchambers
      - behavior_chamber_learningcurve.py
      - ... etc
    - p2a_headfixbehavior
    - ... etc
  - data
    - chronicrecordings
    - p1_behavioralchambers
    - ...
  - figureout
  - ... etc
Create the Python environment from the (system-dependent) yaml file in the code folder
conda env create -f ./code/environment_windows.yaml --name catvisareas
or
conda env create -f ./code/environment_macosx.yaml --name catvisareas
Activate the environment
conda activate catvisareas
All code should be run from the respective "code path": cd into the code directory and run the script with Python. This is because each script (by default) looks for the data at a relative path starting from the folder in which the script itself is stored. So, for example, to make a learning curve as in figure 1b:
cd code
cd p1_behavioralchambers
python behavior_chamber_learningcurve.py
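The relative lookup described above can be sketched as follows (an assumption about how the scripts resolve their data folder, not a verified excerpt from their source):

```python
import os

# A script in ./code/p1_behavioralchambers walks two levels up from its own
# folder and into ./data/p1_behavioralchambers. Running the script from any
# other working directory breaks this relative lookup.
script_dir = os.path.join("code", "p1_behavioralchambers")
data_dir = os.path.normpath(
    os.path.join(script_dir, "..", "..", "data", "p1_behavioralchambers"))
print(data_dir)
```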
Data path: “./data/p1_behavioralchambers”
Code path: “./code/p1_behavioralchambers”
Figure 1
python behavior_chamber_learningcurve.py
python performance_per_categorylevel.py
python performance_on_first_trials.py
python performance_2d_including_fit.py
Extended Data Figure 1
python performance_per_categorylevel.py
python boundary_stability.py
Data path: “./data/p2a_headfixbehavior”
Code path: “./code/p2a_headfixbehavior”
Figure 2
python performance_2d_including_fit.py
Extended Data Figure 2
python headfix_learningcurve.py
python performance_per_categorylevel.py
Data path: “./data/p2b_retinotopybehavior”
Code path: “./code/p2b_retinotopybehavior”
Figure 2
python retinotopy-analysis-performance.py
Extended Data Figure 2
python retinotopy-analysis-eyetracking.py
Data path: “./data/p3a_chronicimagingbehavior”
Code path: “./code/p3a_chronicimagingbehavior”
Figure 3
python performance_per_categorylevel.py category
python performance_per_categorylevel.py baseline
Extended Data Figure 4
python chronicimagingbehavior_learningcurve.py
python performance_per_categorylevel.py category
Data path: “./data/p3b_corticalinactivation”
Code path: “./code/p3b_corticalinactivation”
Figure 3
python corticalinactivation.py
Extended Data Figure 3
python corticalinactivation.py
Data path: “./data/p3c_visualareainactivation”
Code path: “./code/p3c_visualareainactivation”
Extended Data Figure 5
python visualareainactivation-analysis.py
Data path: “./data/chronicrecordings”
Processed data path: “./data/p4_fractionresponsiveneurons”
Code path: “./code/p4_fractionresponsiveneurons”
Data pre-processing:
python frac-n-resp-cat-subsampl.py [areaname]
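Since the pre-processing script takes one area per run, a small batch driver can loop over areas. A sketch (the area names below are hypothetical placeholders, not identifiers confirmed by this README; substitute the actual area names from the dataset):

```python
# Hypothetical placeholders; replace with the actual area names.
areas = ["V1", "LM"]

def frac_cmd(area):
    """Build the argv list for one pre-processing run (script name per the README)."""
    return ["python", "frac-n-resp-cat-subsampl.py", area]

for area in areas:
    print(" ".join(frac_cmd(area)))
    # To actually run: subprocess.run(frac_cmd(area), check=True)
```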
Figure 3
python fractionresponsiveneurons-clustering.py
python fractionresponsiveneurons-linearmodel.py
Extended Data Figure 6
python fractionresponsiveneurons-clustering.py
python fractionresponsiveneurons-linearmodel.py
Data path: “./data/chronicrecordings”
Processed data path: “./data/p5_encodingmodel”
Code path: “./code/p5_encodingmodel”
Data pre-processing:
python run-encodingmodel-within.py [areaname] [mousename] comb -o R2m -r trials -l Category -c Trained
Fits the full encoding model to the activity trace of each neuron in a single chronic recording (identified by area and mouse).
python run-encodingmodel-within.py [areaname] [mousename] comb -o R2m -r trials -l Category -c Trained -s trials
Fits the full encoding model to the shuffled activity trace of each neuron in a single chronic recording (identified by area and mouse).
python run-encodingmodel-delta.py [areaname] [mousename] comb -o R2m -l Category -c Trained -g [group selector]
Fits a full encoding model with one regressor group shuffled to the activity trace of each neuron in a single chronic recording (identified by area and mouse).
python run-encodingmodel-delta.py [areaname] [mousename] comb -o R2m -l Category -c Trained -a [group selector]
Fits a full encoding model with all but one regressor group shuffled to the activity trace of each neuron in a single chronic recording (identified by area and mouse).
python run-encodingmodel-regularization.py [areaname] [mousename]
Repeatedly fits a full encoding model to the activity trace of each neuron in a single chronic recording (identified by area and mouse) using different L1 values.
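The `run-encodingmodel-within.py` invocations above differ only in the `-s trials` shuffle flag, so assembling them programmatically avoids typos. A sketch (the area and mouse names "V1" and "M01" are hypothetical placeholders; the flags are copied from the commands listed above):

```python
def within_cmd(area, mouse, shuffle=False):
    """Build the argv list for run-encodingmodel-within.py (flags per the README)."""
    cmd = ["python", "run-encodingmodel-within.py", area, mouse, "comb",
           "-o", "R2m", "-r", "trials", "-l", "Category", "-c", "Trained"]
    if shuffle:
        cmd += ["-s", "trials"]  # fit to the shuffled activity traces instead
    return cmd

print(" ".join(within_cmd("V1", "M01")))
print(" ".join(within_cmd("V1", "M01", shuffle=True)))
```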
Figure 5
python encodingmodel-full-area-dorsal-ventral.py
python encodingmodel-delta-component-area.py
Extended Data Figure 7
python encodingmodel-full-regularization.py
python encodingmodel-full-R2-fraction.py
python encodingmodel-full-vs-responsivefraction.py
python encodingmodel-full-area-dorsal-ventral.py
python encodingmodel-delta-component-area.py
Data path: “./data/chronicrecordings”
Processed data path: “./data/p5_encodingmodel”
Code path: “./code/p6a_encmodelcategorytuning”
Data pre-processing: see part 5
Figure 6
python encmodel-cti-semantic-feature.py
python encmodel-deltacti-vs-choice.py
Extended Data Figure 8
python encmodel-kernelframes.py
Extended Data Figure 9
python encmodel-cti-semantic-feature.py
Extended Data Figure 10
python encmodel-psth.py
python encmodel-tuningproperties.py
Note: The script encmodel-cti-semantic-feature.py plots many individual data points and connects them with individual lines, which makes it run very slowly. The script has an option to suppress these connecting lines: set the variable "suppress_connecting_individual_datapoints_for_speed" (on line 34) to "True" and the script will run much faster.
Data path: “./data/p6b_mouthtracking”
Code path: “./code/p6b_mouthtracking”
Extended Data Figure 10
python tracking-of-mouth-movements.py
That's all, folks!