{ "cells": [ { "cell_type": "markdown", "source": [ "# Tagging\n", "\n", "These exercises cover the topics introduced in tutorial_2. We will reuse the datasets from the previous session.\n", "\n", "**Note:** If your solution fails while you are extending the dataset, the file might be left in a state you are not expecting. In that case it is best to re-create the file and start over.\n", "\n", "## Exercise 1: Tagging a single segment in the data\n", "\n", "1. Reopen the \"intracellular_data.nix\" file in ``ReadWrite`` mode.\n", "2. Let's assume that a stimulus was presented in the time interval between 3 and 7.5 seconds.\n", "3. Create a **Tag** that tags this segment in the \"intracellular data\" and the \"spike times\" data.\n", "4. Close the file again.\n", "\n", "### Your solution\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exercise 2: Retrieving the tagged data\n", "\n", "1. Reopen the file for read-only access.\n", "2. Find the **Tag** that annotates the stimulus segment.\n", "3. Retrieve the tagged data for the \"intracellular data\" and the \"spike times\" data.\n", "4. If you feel like it, plot them.\n", "\n", "### Your solution\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exercise 3: Tagging in 2-D\n", "\n", "In the \"lfp_fake.nix\" file we recorded Local Field Potentials (LFPs) in 10 parallel recording channels. Let's assume that half of the electrodes were placed on V4 and the other half on V5 of the visual cortex. We can use **Tags** to bind the channels to their respective areas.\n", "\n", "1. Reopen the \"lfp_fake.nix\" file in ``ReadWrite`` mode.\n", "2. Create **Tags** to assign channels 0 through 4 to area V4, and channels 5 through 9 to area V5.\n", "3. Close the file.\n", "\n", "4. Reopen in ``ReadOnly`` mode for plotting.\n", "5. 
Plot the data retrieved from the **Tags** (if you plot them into the same figure, use different colors for each group).\n", "\n", "### Your solution\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exercise 4: Tagging multiple points in 1-D\n", "\n", "Again we are using the \"intracellular_data.nix\" dataset. So far we stored the recorded membrane voltage and the spike times in it. We added a **Tag** tagging the stimulus segment. This **Tag** binds the stimulus time to the two data traces. But the two traces themselves are only weakly linked together. How can we make it more explicit that the spike times stored in one **DataArray** relate to the times in the recorded membrane voltage?\n", "\n", "We can use a **MultiTag** for this purpose.\n", "\n", "1. Reopen the \"intracellular_data.nix\" file for read-write access.\n", "2. Create a **MultiTag** entity that uses the \"spike times\" as positions to point at the \"intracellular data\".\n", "\n", "### Your solution\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Exercise 5: Tagging multiple points in 2-D\n", "\n", "For this exercise we go back and use the multi-channel ``lfp_fake.nix`` dataset. In the data we can see \"spikes\" in the measurements.\n", "\n", "1. Reopen the ``lfp_fake.nix`` dataset and find the LFP data.\n", "2. To identify these events we put in some more effort and apply low- and high-pass filtering to the data (call the ``detect_spikes`` function below).\n", "3. Store the spike positions in a **DataArray** and use it for the **MultiTag** that binds the spike detection to the data.\n", "4. Close the file.\n", "\n", "5. Reopen to create a plot that shows both the data and the detected spikes (e.g. a combination of imshow and scatter). 
Try to start from the **MultiTag**.\n", "6. For each tagged position get the respective value from the data (use the ``tagged_data`` method).\n", "\n", "### Your solution\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": null, "source": [ "import numpy as np\n", "from scipy import signal\n", "\n", "\n", "def threshold_crossings(data, threshold=0.0, flank=\"rising\"):\n", "    \"\"\"\n", "    Returns the indices of threshold crossings in a given signal.\n", "\n", "    :param data: the signal.\n", "    :param threshold: the threshold. Default 0.0.\n", "    :param flank: accepts values {\"rising\", \"falling\"} with \"rising\" being the default.\n", "\n", "    :return: the indices of the threshold crossings.\n", "    \"\"\"\n", "    flanks = [\"rising\", \"falling\"]\n", "    if flank.lower() not in flanks:\n", "        flank = flanks[0]\n", "    data = np.squeeze(data)\n", "    if len(data.shape) > 1:\n", "        raise ValueError(\"trace must be 1-D\")\n", "    shifted_data = np.hstack((0, data[0:-1]))\n", "    if flank.lower() == \"rising\":\n", "        crossings = (data > threshold) & (shifted_data <= threshold)\n", "    else:\n", "        crossings = (data < threshold) & (shifted_data >= threshold)\n", "    positions = np.nonzero(crossings)[0]\n", "\n", "    return positions\n", "\n", "\n", "def butter_lowpass(highcut, fs, order=5):\n", "    \"\"\"Creates a Butterworth low-pass filter.\n", "\n", "    Args:\n", "        highcut (double): the cutoff frequency in Hz\n", "        fs (int): the sampling rate of the data\n", "        order (int, optional): the order of the low-pass filter. Defaults to 5.\n", "\n", "    Returns:\n", "        b, a (np.array): the filter coefficients\n", "    \"\"\"\n", "    nyq = 0.5 * fs\n", "    high = highcut / nyq\n", "    b, a = signal.butter(order, high, btype='low')\n", "\n", "    return b, a\n", "\n", "\n", "def butter_highpass(lowcut, fs, order=5):\n", "    \"\"\"Creates a Butterworth high-pass filter.\n", "\n", "    Args:\n", "        lowcut (double): the cutoff frequency in Hz\n", "        fs (int): the sampling rate of the data\n", "        order (int, optional): the order of the high-pass filter. 
Defaults to 5.\n", "\n", "    Returns:\n", "        b, a (np.array): the filter coefficients\n", "    \"\"\"\n", "    nyq = 0.5 * fs\n", "    low = lowcut / nyq\n", "    b, a = signal.butter(order, low, btype='high')\n", "\n", "    return b, a\n", "\n", "\n", "def detect_spikes(data, time, fs):\n", "    \"\"\"Detects spikes in each channel of the data by filtering and threshold crossing.\n", "\n", "    Returns the spike positions as a 2-D array with (time, channel) pairs along the rows.\n", "    \"\"\"\n", "    # band-pass filter the data: high-pass to remove drift, low-pass to smooth\n", "    hpb, hpa = butter_highpass(2.5, fs, 5)\n", "    lpb, lpa = butter_lowpass(100, fs, 5)\n", "\n", "    spike_times = []\n", "    spike_channels = []\n", "    for i in range(data.shape[1]):\n", "        y = signal.lfilter(hpb, hpa, data[:, i])\n", "        y = signal.lfilter(lpb, lpa, y)\n", "\n", "        # detect spikes as rising threshold crossings\n", "        spike_indices = threshold_crossings(y, 0.5)\n", "        times = time[spike_indices]\n", "        spike_times.extend(times)\n", "        spike_channels.extend(np.ones_like(times) * i)\n", "\n", "    # we need to have the time along the first dimension, as in the data\n", "    spike_positions = np.vstack((np.array(spike_times), spike_channels)).T\n", "    return spike_positions" ], "outputs": [], "metadata": {} } ], "metadata": { "orig_nbformat": 4, "language_info": { "name": "python", "version": "3.9.5" }, "kernelspec": { "name": "python3", "display_name": "Python 3.9.5 64-bit" }, "interpreter": { "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" } }, "nbformat": 4, "nbformat_minor": 2 }