
Adjusted examples for blackrock and nix

Michael Denker, 11 months ago
parent commit c925bd89b1
5 changed files with 259 additions and 13 deletions
  1. README.md (+11 −6)
  2. code/convert_to_nix.py (+1 −1)
  3. code/example.py (+7 −6)
  4. code/example_matlab.m (+0 −0, renamed from code/example.m)
  5. code/example_nix.py (+240 −0)

+ 11 - 6
README.md

@@ -70,10 +70,10 @@ Download the latest release as a zip file by clicking on *Releases* on the main
 ## Repository structure
 
 ### Directory datasets_blackrock
-Contains the two original data sets `i140703-001` and `l101210-001`. Original data files are provided in the Blackrock file format (.nev, .ns2, .ns5, .ns6, .ccf), e.g., `i140703-001.nev`, `i140703-001.ns6`,.... The files `i140703-001-03.nev` and `l101210-001-02.nev` contain offline spike sorted data for both datasets as opposed to the original recordings `i140703-001.nev` and `l101210-001.nev` which contain the same spikes, but unreliable sorting that should not be used. The files `i140703-001.odml` and `l101210-001.odml` contain extensive metadata describing the datasets in the odML format. The Excel files `i140703-001.xls` and `l101210-001.xls` contain the same information as in the odML for easy reading and browsing, however, they are not used by the loading routines. The file `odml.xsl` is an XML schema that is required for viewing the odML files with a web browser. These datasets can be 
+Contains the two original data sets `i140703-001` and `l101210-001`. Original data files are provided in the Blackrock file format (.nev, .ns2, .ns5, .ns6, .ccf), e.g., `i140703-001.nev`, `i140703-001.ns6`, etc. The files `i140703-001-03.nev` and `l101210-001-02.nev` contain offline spike-sorted data for both datasets, as opposed to the original recordings `i140703-001.nev` and `l101210-001.nev`, which contain the same spikes but unreliable sorting that should not be used. The files `i140703-001.odml` and `l101210-001.odml` contain extensive metadata describing the datasets in the odML format. The Excel files `i140703-001.xls` and `l101210-001.xls` contain the same information as the odML files for easy reading and browsing; however, they are not used by the loading routines. The file `odml.xsl` is an XML schema that is required for viewing the odML files with a web browser. The file `example_blackrock.py` in the `code` subdirectory contains an example of using these Blackrock files. These datasets can be 
 
 ### Directory datasets_nix
-Contains a ready to use version data sets `i140703-001` and `l101210-001` in the Nix data format using a Neo structure. Datasets can be loaded via the Neo command
+Contains ready-to-use versions of the data sets `i140703-001` and `l101210-001` in the Nix data format using a Neo structure. For practical purposes, we suggest using these files (instead of the original Blackrock files in `datasets_blackrock`) for easier access to the data. Datasets can be loaded via the Neo command
```
 import neo
 with neo.NixIO(nix_filename, mode='ro') as io:
     block = io.read_block()
 ```
@@ -85,22 +85,27 @@ In addition, `l101210-001.nix` will contain a downsampled version of the raw 30K
 
 Next to `i140703-001.nix` and `l101210-001.nix`, the directory also contains the files `i140703-001_no_raw.nix` and `l101210-001_no_raw.nix`. These files do not contain the raw electrode signals sampled at 30kHz and are therefore considerably more lightweight in terms of file size.
 
-The code to produce the Nix files from the source files in the dataset directory is given in `convert_to_nix.py` in the `code` subdirectory.
+The code to produce the Nix files from the source files in the dataset directory is given in `convert_to_nix.py` in the `code` subdirectory. Also, the file `example_nix.py` in the `code` subdirectory contains an example of using these Nix files.
 
 ### Directory datasets_matlab
 Contains the data and metadata output of the Python loading routines in the MATLAB .mat file format. These files are provided for convenience for MATLAB users; however, note that these files are not the original data files and contain a condensed, interpreted subset of the original data. Due to size restrictions of the MATLAB file format, the files `i140703-001_lfp-spikes.mat` and `l101210-001_lfp-spikes.mat` contain only spikes and LFP data (for monkey N), while raw data is saved separately for each channel in correspondingly named files.
 
 ### Directory code
-Contains example code to help in loading and analyzing the data based on the original data files contained in the `datasets_blackrock` folder. The file `examply.py` is a Python script that acts as a tutorial for loading and plotting data. The scripts `data_overview_1.py` and `data_overview_2.py` reproduce the plots of the data found  in the publication. The files `neo_utils.py` and `odml_utils.py` contain useful utility routines to work with data and metadata. Finally, the file `example.m` contains a rudimentary MATLAB script demonstrating how to use the data provided in the .mat files.
+Contains example code to help in loading and analyzing the data based on the original data files contained in the `datasets_blackrock` folder. The file `example_blackrock.py` is a Python script that acts as a tutorial for loading and plotting data. Moreover, the file `example_nix.py` contains the same example using the Nix files stored in the folder `datasets_nix`. The scripts `data_overview_1.py` and `data_overview_2.py` reproduce the plots of the data found in the publication. The files `neo_utils.py` and `odml_utils.py` contain useful utility routines to work with data and metadata. Finally, the file `example_matlab.m` contains a rudimentary MATLAB script demonstrating how to use the data provided in the .mat files.
 
 To run the Python example code, download the release of this repository, and install the requirements in `code/requirements.txt`. Then, run the example via
 ```
    cd code
-   python example.py
+   python example_blackrock.py
+```
+or, as the preferred option that does not require the custom loading code in `code/reachgraspio`:
+```
+   cd code
+   python example_nix.py
 ```
 The script produces a figure saved in three different graphics file formats.
 
-Also, the file `convert_to_nix.py` contains code that produces the easy-to-use Nix files (in `datasets_nix`) from the original source data.
+Also, the file `convert_to_nix.py` contains code that produces the easy-to-use Nix files (in `datasets_nix`) from the original source data (in `datasets_blackrock`).
 
 ### Directory code/reachgraspio
 Contains the file `reachgraspio.py`, which contains the loading routine specific to the Reach-to-Grasp experiments in this repository. This loading routine merges the recorded data with metadata information from the odML files into a common Neo object. It is recommended that this loading routine is used in combination with the odML and Neo libraries (see below) to work on the data.
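As a sketch of the file layout described above (the helper `pick_nix_file` is our own illustration, not part of the repository; only the directory and file names come from this README):

```python
import os

def pick_nix_file(data_dir, session, include_raw=True):
    """Build the path to a session's Nix file; the '_no_raw' variants
    omit the raw 30kHz electrode signals and are much smaller."""
    suffix = '' if include_raw else '_no_raw'
    return os.path.join(data_dir, session + suffix + '.nix')

nix_path = pick_nix_file(os.path.join('..', 'datasets_nix'), 'l101210-001',
                         include_raw=False)
# With neo installed and the data downloaded, the file could then be read as
# shown in this README:
#   import neo
#   with neo.NixIO(nix_path, mode='ro') as io:
#       block = io.read_block()
print(nix_path)
```

On a POSIX system this prints `../datasets_nix/l101210-001_no_raw.nix`.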

+ 1 - 1
code/convert_to_nix.py

@@ -27,7 +27,7 @@ from reachgraspio import reachgraspio
 
 # Choose which session you want to convert into a nix file
 session = "i140703-001"
-#session = "l101210-001"
+session = "l101210-001"
 
 # Input data. i.e., original Blackrock files and odML
 dataset_dir = '../datasets_blackrock'

+ 7 - 6
code/example.py

@@ -2,7 +2,8 @@
 """
 Example code for loading and processing of a recording of the reach-
 to-grasp experiments conducted at the Institute de Neurosciences de la Timone
-by Thomas Brochier and Alexa Riehle.
+by Thomas Brochier and Alexa Riehle from the original Blackrock files and
+odML files, using the custom loading routine `ReachGraspIO`.
 
 Authors: Julia Sprenger, Lyuba Zehl, Michael Denker
 
@@ -57,9 +58,9 @@ from neo_utils import load_segment
 
 # Specify the path to the recording session to load, eg,
 # '/home/user/l101210-001'
-session_name = os.path.join('..', 'datasets', 'i140703-001')
-# session_name = os.path.join('..', 'datasets', 'l101210-001')
-odml_dir = os.path.join('..', 'datasets')
+session_name = os.path.join('..', 'datasets_blackrock', 'i140703-001')
+# session_name = os.path.join('..', 'datasets_blackrock', 'l101210-001')
+odml_dir = os.path.join('..', 'datasets_blackrock')
 
 # Open the session for reading
 session = reachgraspio.ReachGraspIO(session_name, odml_directory=odml_dir)
@@ -283,6 +284,6 @@ plt.ylabel(amplitude_unit.name)
 plt.legend(loc=4, fontsize=10)
 
 # Save plot
-fname = 'example_plot'
+file_name = 'example_plot_from_blackrock_%s' % os.path.basename(session_name)
 for file_format in ['eps', 'png', 'pdf']:
-    fig.savefig(fname + '.%s' % file_format, dpi=400, format=file_format)
+    fig.savefig(file_name + '.%s' % file_format, dpi=400, format=file_format)
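The save loop above writes one figure file per graphics format; the construction of the output names can be sketched in isolation (the helper name `figure_filenames` is ours, not part of the repository):

```python
def figure_filenames(base, formats=('eps', 'png', 'pdf')):
    # One output file per requested graphics format, mirroring the loop
    # over file formats in the example scripts
    return ['%s.%s' % (base, fmt) for fmt in formats]

print(figure_filenames('example_plot_from_blackrock_i140703-001'))
```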

code/example.m → code/example_matlab.m


+ 240 - 0
code/example_nix.py

@@ -0,0 +1,240 @@
+# -*- coding: utf-8 -*-
+"""
+Example code for loading and processing of a recording of the reach-
+to-grasp experiments conducted at the Institute de Neurosciences de la Timone
+by Thomas Brochier and Alexa Riehle from the NIX files.
+
+Authors: Julia Sprenger, Lyuba Zehl, Michael Denker
+
+
+Copyright (c) 2017, Institute of Neuroscience and Medicine (INM-6),
+Forschungszentrum Juelich, Germany
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+* Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation
+and/or other materials provided with the distribution.
+* Neither the names of the copyright holders nor the names of the contributors
+may be used to endorse or promote products derived from this software without
+specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+"""
+
+import os
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+import quantities as pq
+
+from neo import Block, Segment
+from neo.io import NixIO
+from neo.utils import cut_segment_by_epoch, add_epoch, get_events
+
+
+# =============================================================================
+# Load data
+#
+# As a first step, we load the data file into memory as a Neo object.
+# =============================================================================
+
+# Specify the path to the recording session to load, eg,
+# '/home/user/l101210-001'
+session_name = "i140703-001"
+# session_name = "l101210-001"
+session_path = os.path.join("..", "datasets_nix", session_name + ".nix")
+
+# Open the session for reading
+session = NixIO(session_path)
+
+# Channel ID to plot
+target_channel_id = 62
+
+
+# Read the complete dataset, generating all neo objects and loading the
+# data into memory. The neo structure will contain objects
+# to capture all recorded data types (time series at 1000Hz (ns2) and 30kHz (ns6)
+# scaled to units of voltage, sorted spike trains, spike waveforms and events)
+# of the recording session and return it as a Neo Block. The
+# time shift of the ns2 signal (LFP) induced by the online filter was
+# already corrected for during the conversion to Nix by a heuristic factor
+# stored in the metadata (correct_filter_shifts=True).
+
+block = session.read_block()
+
+# Validate there is only a single Segment present in the block
+assert len(block.segments) == 1
+
+# loading data content of all data objects during the first 300 seconds
+# data_segment = load_segment(block.segments[0], time_range=(None, 300*pq.s))
+data_segment = block.segments[0]
+
+print("Session loaded.")
+
+
+# =============================================================================
+# Construct analysis epochs
+#
+# In this step we extract and cut the data into time segments (termed analysis
+# epochs) that we wish to analyze. We contrast these analysis epochs to the
+# behavioral trials that are defined by the experiment as occurrence of a Trial
+# Start (TS-ON) event in the experiment. Concretely, here our analysis epochs
+# are constructed as a cutout of 25ms of data around the TS-ON event of all
+# successful behavioral trials.
+# =============================================================================
+
+# Get Trial Start (TS-ON) events of all successful behavioral trials
+# (corresponds to performance code 255, which is accessed for convenience and
+# better legibility in the dictionary attribute performance_codes of the
+# ReachGraspIO class).
+#
+# To this end, we filter all event objects of the loaded data to match the name
+# "TrialEvents", which is the Event object containing all Events available (see
+# documentation of ReachGraspIO). From this Event object we extract only events
+# matching "TS-ON" and the desired trial performance code (which are
+# annotations of the Event object).
+start_events = get_events(
+    data_segment,
+    name='TrialEvents',
+    trial_event_labels='TS-ON',
+    performance_in_trial_str='correct_trial')
+print('Determined start events.')
+
+# Extract single Neo Event object containing all TS-ON triggers
+assert len(start_events) == 1
+start_event = start_events[0]
+
+
+# Construct analysis epochs from 10ms before the TS-ON of a successful
+# behavioral trial to 15ms after TS-ON. The name "analysis_epochs" is given to
+# the resulting Neo Epoch object. The object is not attached to the Neo
+# Segment. The parameter event2 of add_epoch() is left empty, since we are
+# cutting around a single event, as opposed to cutting between two events.
+
+pre = -10 * pq.ms
+post = 15 * pq.ms
+epoch = add_epoch(
+    data_segment,
+    event1=start_event, event2=None,
+    pre=pre, post=post,
+    attach_result=False,
+    name='analysis_epochs',
+    array_annotations=start_event.array_annotations)
+print('Added epoch.')
+
+
+# Create new segments of data cut according to the analysis epochs of the
+# 'analysis_epochs' Neo Epoch object. The time axes of all segments are aligned
+# such that each segment starts at time 0 (parameter reset_times); annotations
+# describing the analysis epoch are carried over to the segments. A new Neo
+# Block named "data_cut_to_analysis_epochs" is created to capture all cut
+# analysis epochs. For execution time reason, we are only considering the
+# first 10 epochs here.
+
+cut_trial_block = Block(name="data_cut_to_analysis_epochs")
+cut_trial_block.segments = cut_segment_by_epoch(
+    data_segment, epoch[:10], reset_time=True)
+print("Cut data.")
+
+
+# =============================================================================
+# Plot data
+# =============================================================================
+
+# Determine the first existing trial ID i from the Event object containing all
+# start events. Then, by calling the filter() function of the Neo Block
+# "data_cut_to_analysis_epochs" containing the data cut into the analysis
+# epochs, we ask to return all Segments annotated by the behavioral trial ID i.
+# In this case this call should return one matching analysis epoch around TS-ON
+# belonging to behavioral trial ID i. For monkey N, this is trial ID 1, for
+# monkey L this is trial ID 2 since trial ID 1 is not a correct trial.
+
+trial_id = int(np.min(start_event.array_annotations['trial_id']))
+trial_segments = cut_trial_block.filter(
+    targdict={"trial_id": trial_id}, objects=Segment)
+assert len(trial_segments) == 1
+trial_segment = trial_segments[0]
+
+# Create figure
+fig = plt.figure(facecolor='w')
+time_unit = pq.CompoundUnit('1./30000*s')
+amplitude_unit = pq.microvolt
+nsx_colors = {2: 'k', 5: 'r', 6: 'b'}
+
+# Loop through all AnalogSignal objects and plot the signal of the target
+# channel in a color corresponding to its sampling frequency (i.e., whether it
+# originates from the ns2, ns5, or ns6 file).
+for i, anasig in enumerate(trial_segment.analogsignals):
+    # only visualize neural data
+    if anasig.annotations['neural_signal']:
+        if 'nsx' in anasig.annotations:
+            nsx = anasig.annotations['nsx']
+        else:
+            nsx = anasig.array_annotations['nsx'][0]
+        channel_ids = np.asarray(anasig.array_annotations['channel_ids'], dtype=int)
+        target_channel_index = np.where(channel_ids == target_channel_id)[0]
+        target_signal = anasig[:, target_channel_index]
+        plt.plot(
+            target_signal.times.rescale(time_unit),
+            target_signal.squeeze().rescale(amplitude_unit),
+            label=target_signal.name,
+            color=nsx_colors[nsx if nsx > 0 else 2])  # offline LFP: nsx == -1
+
+# Loop through all spike trains and plot the spike times, overlaid with the
+# waveform of each spike that was used for spike sorting, stored separately
+# in the nev file.
+for st in trial_segment.spiketrains:
+    color = np.random.rand(3,)
+    if st.annotations['channel_id'] == target_channel_id:
+        for spike_id, spike in enumerate(st):
+            # Plot spike times
+            plt.axvline(
+                spike.rescale(time_unit).magnitude,
+                color=color,
+                label='Unit ID %i' % st.annotations['unit_id'])
+            # Plot waveforms
+            waveform = st.waveforms[spike_id, 0, :]
+            waveform_times = np.arange(len(waveform))*time_unit + spike
+            plt.plot(
+                waveform_times.rescale(time_unit).magnitude,
+                waveform.rescale(amplitude_unit),
+                '--',
+                linewidth=2,
+                color=color,
+                zorder=0)
+
+# Loop through all events
+for event in trial_segment.events:
+    if event.name == 'TrialEvents':
+        for ev_id, ev in enumerate(event):
+            plt.axvline(
+                ev.rescale(time_unit),
+                alpha=0.2,
+                linewidth=3,
+                linestyle='dashed',
+                label=f'event {event.array_annotations["trial_event_labels"][ev_id]}')
+
+# Finishing touches on the plot
+plt.autoscale(enable=True, axis='x', tight=True)
+plt.xlabel(time_unit.name)
+plt.ylabel(amplitude_unit.name)
+plt.legend(loc=4, fontsize=10)
+
+# Save plot
+file_name = 'example_plot_from_nix_%s' % session_name
+for file_format in ['eps', 'png', 'pdf']:
+    fig.savefig(file_name + '.%s' % file_format, dpi=400, format=file_format)
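The channel lookup in the plotting loop above can be isolated into a small, testable sketch (the helper name is ours; channel id 62 is the `target_channel_id` used by the script):

```python
import numpy as np

def target_channel_columns(channel_ids, target_id):
    """Return the column indices of the target channel, mirroring the
    np.where() lookup on the 'channel_ids' array annotation above."""
    ids = np.asarray(channel_ids, dtype=int)
    return np.where(ids == target_id)[0]

print(target_channel_columns([60, 61, 62, 63], 62))
```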