Simplified UE tutorial data preparation

Michael Denker, 2 years ago
commit ebd7fa1405
2 changed files with 58 additions and 87 deletions
  1. hands_on_3.ipynb (+18 -32)
  2. hands_on_3_solution.ipynb (+40 -55)

+ 18 - 32
hands_on_3.ipynb

@@ -5,7 +5,7 @@
    "id": "503cb687",
    "metadata": {},
    "source": [
-    "# Hands-on session 3: Elephant (Solution)\n",
+    "# Hands-on session 3: Elephant\n",
     "\n",
     "These exercises build on concepts introduced in Tutorial 3\n",
     "\n",
@@ -98,7 +98,9 @@
     "- have a high spike count (e.g. above 10000)\n",
     "- not be captured by electrodes rejected due to low, intermediate or high frequency components.\n",
     "\n",
-    "Hint: You may want to look into `spiketrain.annotations` and list comprehensions to take the best spiketrains from all the spiketrains present in the segment."
+    "Create a new segement that contains the list of selected spiketrains that match these criteria. \n",
+    "\n",
+    "Hint: You may want to look into `segment.spiketrain[0].annotations` and list comprehensions to take the best spiketrains from all the spiketrains present in the segment."
    ]
   },
   {
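As a sketch of what the hint points to, the selection can be written as a single list comprehension. The annotation keys mirror those used in the solution notebook further down; `electrode_reject_LFC` is an assumption inferred from the "low frequency components" criterion, and the spike count is taken as the length of the spike train.

```python
import neo

# Sketch of the annotation-based filter (keys as in the solution below;
# 'electrode_reject_LFC' is assumed by analogy to the IFC/HFC keys)
ue_segment = neo.Segment()
ue_segment.spiketrains = [
    st for st in segment.spiketrains
    if st.annotations['sua']                          # single-unit activity only
    and len(st) > 10000                               # high spike count
    and not st.annotations['electrode_reject_LFC']    # electrode not rejected
    and not st.annotations['electrode_reject_IFC']
    and not st.annotations['electrode_reject_HFC']
]
```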
@@ -130,31 +132,7 @@
    "id": "3dfaefbc",
    "metadata": {},
    "source": [
-    "5. Pass the pair of spiketrains (step 3) and the epoch you created (step 2) to the function below. The output of this function will be the spiketrains cut using the epoch object and arranged in the required way for the UE analysis."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "59b072e6",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "def select_sts(spiketrain_pair, epoch):\n",
-    "    # Formatting the data for UE analysis where dims are [trials, neurons, spiketrains]\n",
-    "    data = list()\n",
-    "    for trial_id, (t_start, t_duration) in enumerate(zip(epoch.times, epoch.durations)):\n",
-    "        t_shift = - t_start\n",
-    "        t_stop = t_start + t_duration\n",
-    "        spiketrains = list()\n",
-    "        for st in spiketrain_pair:\n",
-    "            # Slice the spike train\n",
-    "            st_ts = st.time_slice(t_start, t_stop)\n",
-    "            # Shift time to start from 0\n",
-    "            st_ts = st_ts.time_shift(t_shift)\n",
-    "            spiketrains.append(st_ts.rescale('ms'))\n",
-    "        data.append(spiketrains)\n",
-    "    return data"
+    "5. Now that we have isolated the good spiketrains, cut the segment according to the defined epoch using `neo.utils.cut_segment_by_epoch()`. From the result, prepare the input for the unitary event analysis, which is a list, where the `n`-th element is a list of spiketrains for the `n`-th trial, or, in other words, where the `n`-th element is the `spiketrains` attribute of the `n`-th segment.  "
    ]
   },
   {
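A minimal sketch of step 5, mirroring the solution notebook below (`ue_segment` and `ue_epoch` are the objects built in the previous steps):

```python
import neo
import neo.utils

# Cut the filtered segment into one segment per trial; reset_time=True
# shifts each trial's time axis to start at the epoch onset
ue_trials = neo.Block()
ue_trials.segments = neo.utils.cut_segment_by_epoch(
    ue_segment, ue_epoch, reset_time=True)

# The n-th element is the list of spiketrains of the n-th trial
sts_for_uea = [seg.spiketrains for seg in ue_trials.segments]
```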
@@ -226,7 +204,7 @@
    "id": "a42cbb4e",
    "metadata": {},
    "source": [
-    "7. In close analogy to the beginning of tutorial 3, prepare a Neo `Block`, containing one `Segment` of SGHF data of the first correct trial. Name this `Segement` by the variable `trial`, as in tutorial 3.  In contrast to the lecture, we will cut data from the trial start `TS-ON` to reward administration indicated by event `RW-ON`. To this end, first find also the `RW-ON` events similar to how we found `TS-ON` in tutorial 3. Then, in the call to `neo.utils.add_epoch()`, supply these as a second event `event2=` *instead* of giving `t_pre=0` and `t_post=2*pq.s`. This will cut from event 1 to event 2, instead of a fixed amount of 2 s around event 1 (as in tutorial 3)."
+    "7. In close analogy to the beginning of tutorial 3 and what we have done above, prepare a Neo `Block`, containing one `Segment` of SGHF data of the first correct trial. Name this `Segement` by the variable `trial`, as in tutorial 3.  In contrast to the lecture and the UE analysis above, we will cut data from the trial start `TS-ON` to reward administration indicated by event `RW-ON`. To this end, first find also the `RW-ON` events similar to how we found `TS-ON` in tutorial 3. Then, in the call to `neo.utils.add_epoch()`, supply these as a second event `event2=` *instead* of giving `t_pre=0` and `t_post=2*pq.s`. This will cut from event 1 to event 2, instead of a fixed amount of 2 s around event 1 (as in tutorial 3 or the UE exercise above)."
    ]
   },
   {
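A hedged sketch of step 7: only the event labels and the `event2=` keyword are given in the text above, so the `neo.utils.get_events()` call, which follows tutorial 3, is an assumption here.

```python
import neo.utils

# Find trial start and reward onset events (filtering by label, as in
# tutorial 3; the exact filter keyword is an assumption)
ts_on = neo.utils.get_events(segment, labels='TS-ON')[0]
rw_on = neo.utils.get_events(segment, labels='RW-ON')[0]

# Cut from TS-ON to RW-ON rather than a fixed window around TS-ON
trial_epochs = neo.utils.add_epoch(segment, event1=ts_on, event2=rw_on)
```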
@@ -249,9 +227,7 @@
     "## SPADE analysis\n",
     "\n",
     "Note that patterns in this data are not easily to spot by eye in the rasterplot we developed in tutorial 3.\n",
-    "We use SPADE as a technique that does that for us by finding all patterns and checking for each pattern if it\n",
-    "occurs more often than expected given its complexity (number of neurons participating) and frequency.\n",
-    "Before going directly to the analysis we briefly explain SPADE's most important parameters:\n",
+    "We use SPADE (Torre et al., 2013; Stella et al. 2019) as a technique that does that for us by finding all patterns and checking for each pattern if it occurs more often than expected given its complexity (number of neurons participating) and frequency. Before going directly to the analysis we briefly explain SPADE's most important parameters:\n",
     "\n",
     "- `binsize`: temporal precision of the method. The smaller the binsize is, the more precisely we expect each single pattern to repeat. This raises an important question: which is the temporal precision that are you interested in? It depends on the scientific question! We often use 5ms, based on a number of studies on the minimal neuronal temporal precision.\n",
     "- `winlen`: window length, or maximal length allowed to each pattern, expressed in bin units. SPADE will detect patterns with a temporal duration up to the window length. If winlen=1, then only synchronous patterns are detected. Are you interested in synchronous higher-order correlations? Are you interested in patterns with delays? Note: the higher the winlen parameter is, the more expensive (memory and time) the pattern search is!\n",
@@ -382,8 +358,18 @@
     "# References\n",
     "1. Grün et al (2002a) DOI: 10.1162/089976602753284455\n",
     "2. Grün et al (2002b) DOI: 10.1162/089976602753284464\n",
-    "3. Grün et al (2010) DOI: 10.1007/978-1-4419-5675-0_10"
+    "3. Grün et al (2010) DOI: 10.1007/978-1-4419-5675-0_10\n",
+    "4. Torre et al (2013) DOI: 10.3389/fncom.2013.00132\n",
+    "5. Stella et al (2019) DOI: 10.1016/j.biosystems.2019.104022"
    ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "458adf0e-6b24-4c7a-95a5-2896a7247d0d",
+   "metadata": {},
+   "outputs": [],
+   "source": []
   }
  ],
  "metadata": {

+ 40 - 55
hands_on_3_solution.ipynb

@@ -125,7 +125,9 @@
     "- have a high spike count (e.g. above 10000)\n",
     "- not be captured by electrodes rejected due to low, intermediate or high frequency components.\n",
     "\n",
-    "Hint: You may want to look into `spiketrain.annotations` and list comprehensions to take the best spiketrains from all the spiketrains present in the segment."
+    "Create a new segement that contains the list of selected spiketrains that match these criteria. \n",
+    "\n",
+    "Hint: You may want to look into `segment.spiketrain[0].annotations` and list comprehensions to take the best spiketrains from all the spiketrains present in the segment."
    ]
   },
   {
@@ -136,7 +138,8 @@
    "outputs": [],
    "source": [
     "# Select only SUA spike trains with spikes and certain quality criteria\n",
-    "spiketrains = [spiketrain for spiketrain in segment.spiketrains if\n",
+    "ue_segment = neo.Segment()\n",
+    "ue_segment.spiketrains = [spiketrain for spiketrain in segment.spiketrains if\n",
     "                                   spiketrain.annotations['sua'] and\n",
     "                                   not spiketrain.annotations['electrode_reject_HFC'] and\n",
     "                                   not spiketrain.annotations['electrode_reject_IFC'] and\n",
@@ -169,7 +172,7 @@
     }
    ],
    "source": [
-    "print(f\"The number of left over clean spike trains after filtering is {len(spiketrains)}\")"
+    "print(f\"The number of left over clean spike trains after filtering is {len(ue_segment.spiketrains)}\")"
    ]
   },
   {
@@ -177,41 +180,23 @@
    "id": "3dfaefbc",
    "metadata": {},
    "source": [
-    "5. Pass the pair of spiketrains (step 3) and the epoch you created (step 2) to the function below. The output of this function will be the spiketrains cut using the epoch object and arranged in the required way for the UE analysis."
+    "5. Now that we have isolated the good spiketrains, cut the segment according to the defined epoch using `neo.utils.cut_segment_by_epoch()`. From the result, prepare the input for the unitary event analysis, which is a list, where the `n`-th element is a list of spiketrains for the `n`-th trial, or, in other words, where the `n`-th element is the `spiketrains` attribute of the `n`-th segment.  "
    ]
   },
   {
    "cell_type": "code",
    "execution_count": 8,
-   "id": "59b072e6",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "def select_sts(spiketrain_pair, epoch):\n",
-    "    # Formatting the data for UE analysis where dims are [trials, neurons, spiketrains]\n",
-    "    data = list()\n",
-    "    for trial_id, (t_start, t_duration) in enumerate(zip(epoch.times, epoch.durations)):\n",
-    "        t_shift = - t_start\n",
-    "        t_stop = t_start + t_duration\n",
-    "        spiketrains = list()\n",
-    "        for st in spiketrain_pair:\n",
-    "            # Slice the spike train\n",
-    "            st_ts = st.time_slice(t_start, t_stop)\n",
-    "            # Shift time to start from 0\n",
-    "            st_ts = st_ts.time_shift(t_shift)\n",
-    "            spiketrains.append(st_ts.rescale('ms'))\n",
-    "        data.append(spiketrains)\n",
-    "    return data"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 9,
    "id": "1ee5a825",
    "metadata": {},
    "outputs": [],
    "source": [
-    "sts_for_uea = select_sts(spiketrains, ue_epoch)"
+    "# Create the new block\n",
+    "ue_trials = neo.Block()\n",
+    "\n",
+    "# Cut the recording segment into the trials, as defined by the epochs\n",
+    "ue_trials.segments = neo.utils.cut_segment_by_epoch(ue_segment, ue_epoch, reset_time=True)\n",
+    "\n",
+    "sts_for_uea = [x.spiketrains for x in ue_trials.segments]"
    ]
   },
   {
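For orientation, a hedged sketch of how `sts_for_uea` could then feed into Elephant's unitary event analysis. The function lives in `elephant.unitary_event_analysis`, but the keyword spellings below are assumptions, as they differ between Elephant versions (`binsize`/`bin_size`, etc.):

```python
import quantities as pq
import elephant.unitary_event_analysis as ue

# Illustrative parameters: 5 ms coincidence precision, 100 ms analysis
# window slid in 10 ms steps (example values, not recommendations)
ue_result = ue.jointJ_window_analysis(
    sts_for_uea,
    bin_size=5 * pq.ms,
    win_size=100 * pq.ms,
    win_step=10 * pq.ms,
)
```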
@@ -248,7 +233,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 10,
+   "execution_count": 9,
    "id": "59d18c64",
    "metadata": {},
    "outputs": [],
@@ -276,7 +261,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
+   "execution_count": 10,
    "id": "5b9bcb12",
    "metadata": {},
    "outputs": [
@@ -298,7 +283,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 12,
+   "execution_count": 11,
    "id": "45078831",
    "metadata": {},
    "outputs": [
@@ -340,12 +325,12 @@
    "id": "a42cbb4e",
    "metadata": {},
    "source": [
-    "7. In close analogy to the beginning of tutorial 3, prepare a Neo `Block`, containing one `Segment` of SGHF data of the first correct trial. Name this `Segement` by the variable `trial`, as in tutorial 3.  In contrast to the lecture, we will cut data from the trial start `TS-ON` to reward administration indicated by event `RW-ON`. To this end, first find also the `RW-ON` events similar to how we found `TS-ON` in tutorial 3. Then, in the call to `neo.utils.add_epoch()`, supply these as a second event `event2=` *instead* of giving `t_pre=0` and `t_post=2*pq.s`. This will cut from event 1 to event 2, instead of a fixed amount of 2 s around event 1 (as in tutorial 3)."
+    "7. In close analogy to the beginning of tutorial 3 and what we have done above, prepare a Neo `Block`, containing one `Segment` of SGHF data of the first correct trial. Name this `Segement` by the variable `trial`, as in tutorial 3.  In contrast to the lecture and the UE analysis above, we will cut data from the trial start `TS-ON` to reward administration indicated by event `RW-ON`. To this end, first find also the `RW-ON` events similar to how we found `TS-ON` in tutorial 3. Then, in the call to `neo.utils.add_epoch()`, supply these as a second event `event2=` *instead* of giving `t_pre=0` and `t_post=2*pq.s`. This will cut from event 1 to event 2, instead of a fixed amount of 2 s around event 1 (as in tutorial 3 or the UE exercise above)."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 13,
+   "execution_count": 12,
    "id": "943d26af",
    "metadata": {
     "pycharm": {
@@ -370,7 +355,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 14,
+   "execution_count": 13,
    "id": "189fd391",
    "metadata": {
     "pycharm": {
@@ -389,7 +374,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 15,
+   "execution_count": 14,
    "id": "e44de382",
    "metadata": {
     "pycharm": {
@@ -399,15 +384,15 @@
    "outputs": [],
    "source": [
     "# Create the new block\n",
-    "trials = neo.Block()\n",
+    "spade_trials = neo.Block()\n",
     "\n",
     "# Cut the recording segment into the trials, as defined by the epochs\n",
-    "trials.segments = neo.utils.cut_segment_by_epoch(segment, trial_epochs, reset_time=True)"
+    "spade_trials.segments = neo.utils.cut_segment_by_epoch(segment, trial_epochs, reset_time=True)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 16,
+   "execution_count": 15,
    "id": "eb7ade10",
    "metadata": {
     "pycharm": {
@@ -417,7 +402,7 @@
    "outputs": [],
    "source": [
     "# Select first segment as the trial for analysis\n",
-    "trial = trials.segments[0]"
+    "spade_trial = spade_trials.segments[0]"
    ]
   },
   {
@@ -428,9 +413,7 @@
     "## SPADE analysis\n",
     "\n",
     "Note that patterns in this data are not easily to spot by eye in the rasterplot we developed in tutorial 3.\n",
-    "We use SPADE as a technique that does that for us by finding all patterns and checking for each pattern if it\n",
-    "occurs more often than expected given its complexity (number of neurons participating) and frequency.\n",
-    "Before going directly to the analysis we briefly explain SPADE's most important parameters:\n",
+    "We use SPADE (Torre et al., 2013; Stella et al. 2019) as a technique that does that for us by finding all patterns and checking for each pattern if it occurs more often than expected given its complexity (number of neurons participating) and frequency. Before going directly to the analysis we briefly explain SPADE's most important parameters:\n",
     "\n",
     "- `binsize`: temporal precision of the method. The smaller the binsize is, the more precisely we expect each single pattern to repeat. This raises an important question: which is the temporal precision that are you interested in? It depends on the scientific question! We often use 5ms, based on a number of studies on the minimal neuronal temporal precision.\n",
     "- `winlen`: window length, or maximal length allowed to each pattern, expressed in bin units. SPADE will detect patterns with a temporal duration up to the window length. If winlen=1, then only synchronous patterns are detected. Are you interested in synchronous higher-order correlations? Are you interested in patterns with delays? Note: the higher the winlen parameter is, the more expensive (memory and time) the pattern search is!\n",
@@ -443,7 +426,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 17,
+   "execution_count": 16,
    "id": "4d7cad57",
    "metadata": {
     "pycharm": {
@@ -453,7 +436,7 @@
    "outputs": [],
    "source": [
     "# Select only SUA spike trains with spikes and certain quality criteria\n",
-    "spiketrains = [spiketrain for spiketrain in trial.spiketrains if\n",
+    "spiketrains = [spiketrain for spiketrain in spade_trial.spiketrains if\n",
     "                                   spiketrain.annotations['sua'] and\n",
     "                                   not spiketrain.annotations['electrode_reject_HFC'] and\n",
     "                                   not spiketrain.annotations['electrode_reject_IFC'] and\n",
@@ -476,7 +459,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 18,
+   "execution_count": 17,
    "id": "23263b16",
    "metadata": {
     "pycharm": {
@@ -496,7 +479,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 19,
+   "execution_count": 18,
    "id": "23f7d6b4",
    "metadata": {
     "pycharm": {
@@ -516,7 +499,7 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "Time for data mining: 0.2950148582458496\n"
+      "Time for data mining: 0.316211462020874\n"
      ]
     }
    ],
@@ -537,7 +520,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 20,
+   "execution_count": 19,
    "id": "994e3df5",
    "metadata": {
     "pycharm": {
@@ -558,7 +541,7 @@
        "  'pvalue': -1})"
       ]
      },
-     "execution_count": 20,
+     "execution_count": 19,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -578,7 +561,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 21,
+   "execution_count": 20,
    "id": "a5dfda9b",
    "metadata": {},
    "outputs": [
@@ -588,7 +571,7 @@
        "<AxesSubplot:xlabel='Time (s)', ylabel='Neuron'>"
       ]
      },
-     "execution_count": 21,
+     "execution_count": 20,
      "metadata": {},
      "output_type": "execute_result"
     },
@@ -611,7 +594,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 22,
+   "execution_count": 21,
    "id": "b2a28cb4",
    "metadata": {},
    "outputs": [
@@ -623,7 +606,7 @@
        "       <AxesSubplot:xlabel='Pattern size', ylabel='Count'>], dtype=object)"
       ]
      },
-     "execution_count": 22,
+     "execution_count": 21,
      "metadata": {},
      "output_type": "execute_result"
     },
@@ -652,7 +635,9 @@
     "# References\n",
     "1. Grün et al (2002a) DOI: 10.1162/089976602753284455\n",
     "2. Grün et al (2002b) DOI: 10.1162/089976602753284464\n",
-    "3. Grün et al (2010) DOI: 10.1007/978-1-4419-5675-0_10"
+    "3. Grün et al (2010) DOI: 10.1007/978-1-4419-5675-0_10\n",
+    "4. Torre et al (2013) DOI: 10.3389/fncom.2013.00132\n",
+    "5. Stella et al (2019) DOI: 10.1016/j.biosystems.2019.104022"
    ]
   },
   {