{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Your allensdk version is: 2.15.2\n" ] } ], "source": [ "import os\n", "import shutil\n", "import pandas as pd\n", "from pathlib import Path\n", "import matplotlib.pyplot as plt\n", "import matplotlib\n", "import math\n", "import numpy as np\n", "import scipy.signal as signal\n", "import scipy.fft as fft\n", "import pickle\n", "\n", "from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache\n", "from allensdk.brain_observatory.ecephys.ecephys_session import (\n", " EcephysSession, \n", " removed_unused_stimulus_presentation_columns\n", ")\n", "from allensdk.brain_observatory.ecephys.visualization import plot_mean_waveforms, plot_spike_counts, raster_plot\n", "from allensdk.brain_observatory.visualization import plot_running_speed\n", "\n", "import allensdk\n", "from allensdk.brain_observatory.behavior.behavior_project_cache import VisualBehaviorNeuropixelsProjectCache\n", "\n", "# Confirming your allensdk version\n", "print(f\"Your allensdk version is: {allensdk.__version__}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#we import the LFP functions\n", "import importlib\n", "import LFP_functions\n", "importlib.reload(LFP_functions)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# LFP data obtention from Allen Dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Update this to a valid directory in your filesystem. This is where the data will be stored.\n", "output_dir = r'E:/BT_Code'\n", "DOWNLOAD_COMPLETE_DATASET = False\n", "manifest_path = os.path.join(output_dir, \"manifest.json\")\n", "cache = VisualBehaviorNeuropixelsProjectCache.from_s3_cache(cache_dir=output_dir)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check the manifest downloaded. We worked with version 0.5.0" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(cache.current_manifest())\n", "cache.load_latest_manifest()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now select the probes in VISp from session data in the dataset that suit our specifications and store them in lfp_VISp. This array contains the probe and session IDs, which we will use to download the LFP" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ecephys_sessions = cache.get_ecephys_session_table()\n", "probes = cache.get_probe_table()\n", "valid_lfp = probes[probes['has_lfp_data']] #we select sessions with LFP data\n", "\n", "#we find the indices for our sessions of interest (wt mice, 3uL reward, images G and containing a probe in VISp)\n", "wt_indices = ecephys_sessions[ecephys_sessions['genotype'] == 'wt/wt'][ecephys_sessions['session_type'] == 'EPHYS_1_images_G_3uL_reward'].index.tolist()\n", "lfp_VISp = valid_lfp[(valid_lfp['structure_acronyms'].str.contains(\", 'VISp',\")) & (valid_lfp['ecephys_session_id'].isin(wt_indices))]\n", "lfp_VISp" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For our work, we used 16 sessions. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "For our work, we used 16 sessions. The probe and session ids are all stored in probe_ids and session_ids, respectively." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "probe_ids = [1055324729, 1062886498, 1055328906, 1065764271, 1067688317, 1081293774, 1081297300, 1104418694, 1108676394, 1109998730, 1117240990, 1118711338, 1120380894, 1128939997, 1130463465, 1140256852]\n", "\n", "# build session_ids so that session_ids[i] is the session of probe_ids[i]\n", "session_ids = []\n", "for probe_id in probe_ids:\n", "    session_id = probes.loc[probe_id]['ecephys_session_id']\n", "    session_ids.append(session_id)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now obtain the LFPs and save them as pickle files. ATTENTION: each LFP file weighs several GB, so this will take a long time. Make sure you work on a powerful computer with plenty of RAM and free disk space to avoid memory errors during the code execution." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sessions = {}  # dictionary of sessions\n", "for ses in range(len(probe_ids)):\n", "    sessions[ses+1] = cache.get_ecephys_session(\n", "        ecephys_session_id=session_ids[ses])\n", "\n", "    # get the LFP data for the probe\n", "    lfp = sessions[ses+1].get_lfp(probe_ids[ses])\n", "\n", "    # save the LFP data as a pickle file\n", "    pickle_file_path = \"lfp_data/lfp_data_\"+str(ses+1)+\".pickle\"\n", "    with open(pickle_file_path, \"wb\") as file:\n", "        pickle.dump(lfp, file)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# LFP alignment to stimuli and V1 selection\n", "\n", "We are first going to select the stimulus presentation data for the active task and the passive replay. The LFP data is then aligned to the presentation of novel images with the align_image_lfps function. Finally, we select the channels in V1." ] },
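{ "cell_type": "markdown", "metadata": {}, "source": [ "Because LFP_functions is a local module that is not included in this notebook, the cell below is only a rough sketch of what an alignment step of this kind could look like. It assumes the LFP is an xarray.DataArray with a 'time' dimension and that the stimulus table has a 'start_time' column; the analysis itself uses LFP_functions.align_image_lfps and LFP_functions.select_area, whose exact behavior may differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Hedged sketch of LFP-to-stimulus alignment (illustration only; the loop\n", "# below uses LFP_functions.align_image_lfps, whose details may differ)\n", "def align_lfp_sketch(stim, lfp, window=(-0.25, 0.5)):\n", "    trials = []\n", "    for onset in stim['start_time'].values:\n", "        # take the LFP in a window around each presentation onset\n", "        trial = lfp.sel(time=slice(onset + window[0], onset + window[1]))\n", "        # re-reference time to the onset so trials can be compared\n", "        trial = trial.assign_coords(time=trial['time'] - onset)\n", "        trials.append(trial)\n", "    return trials" ] },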
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Create the folders that will store the aligned LFPs and the channels if they do not exist yet\n", "if not os.path.exists(\"aligned_LFP_area\"):\n", "    os.makedirs(\"aligned_LFP_area\")\n", "\n", "if not os.path.exists(\"channels\"):\n", "    os.makedirs(\"channels\")\n", "\n", "\n", "# for each session: select the active (block 0) and passive (block 5) presentations,\n", "# align the LFP to them and keep the channels in V1\n", "for ses in range(len(session_ids)):\n", "    probe_id = probe_ids[ses]  # select the probe id for the session\n", "\n", "    # load the LFP saved for this probe in the download step\n", "    with open(\"lfp_data/lfp_data_\"+str(ses+1)+\".pickle\", \"rb\") as file:\n", "        lfp = pickle.load(file)\n", "\n", "    stim_presentations = sessions[ses+1].stimulus_presentations\n", "    stim_active = stim_presentations[stim_presentations['stimulus_block'] == 0]\n", "    stim_passive = stim_presentations[stim_presentations['stimulus_block'] == 5]\n", "\n", "    # aligned LFP data for the active task\n", "    aligned_lfps_act = LFP_functions.align_image_lfps(stim_active, lfp)\n", "\n", "    # aligned LFP data for the passive replay\n", "    aligned_lfps_pas = LFP_functions.align_image_lfps(stim_passive, lfp)\n", "\n", "    # select the channels within V1\n", "    chans = sessions[ses+1].get_channels()\n", "    aligned_lfps_act_en_V1, chans_V1 = LFP_functions.select_area(aligned_lfps_act, chans, probe_id, 'VISp')\n", "    aligned_lfps_pas_en_V1, _ = LFP_functions.select_area(aligned_lfps_pas, chans, probe_id, 'VISp')\n", "\n", "    # save the aligned LFPs and chans_V1 in a pickle file for each session\n", "    with open(\"aligned_LFP_area/aligned_lfps_act_V1_\"+str(ses)+\".pickle\", \"wb\") as file:\n", "        pickle.dump(aligned_lfps_act_en_V1, file)\n", "    with open(\"aligned_LFP_area/aligned_lfps_pas_V1_\"+str(ses)+\".pickle\", \"wb\") as file:\n", "        pickle.dump(aligned_lfps_pas_en_V1, file)\n", "    with open(\"channels/chans_V1_\"+str(ses)+\".pickle\", \"wb\") as file:\n", "        pickle.dump(chans_V1, file)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we have the aligned LFP data and the channels for each session, we can compute the power spectra and the Current Source Density (see the notebooks get_power_spectra and get_CSD)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Extra code: visualize number of presentations and probe locations" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Specify the file path of the pickle file\n", "ses = 1\n", "pickle_file_path = \"aligned_LFP_area/aligned_lfps_act_V1_\"+str(ses)+\".pickle\"\n", "\n", "# Load the aligned LFP data from the pickle file\n", "with open(pickle_file_path, \"rb\") as file:\n", "    lfp = pickle.load(file)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We obtain the active and passive stimuli separately. Although both have the same 'stimulus_name' attribute, they have different 'stimulus_block' values: the active task corresponds to block 0 and the passive replay to block 5. We use this to obtain the two dataframes." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", " | stimulus_block | \n", "stimulus_name | \n", "
---|---|---|
stimulus_presentations_id | \n", "\n", " | \n", " |
0 | \n", "0 | \n", "Natural_Images_Lum_Matched_set_ophys_G_2019 | \n", "
4804 | \n", "1 | \n", "spontaneous | \n", "
4805 | \n", "2 | \n", "gabor_20_deg_250ms | \n", "
8450 | \n", "3 | \n", "spontaneous | \n", "
8451 | \n", "4 | \n", "flash_250ms | \n", "
8601 | \n", "5 | \n", "Natural_Images_Lum_Matched_set_ophys_G_2019 | \n", "