Merge branch 'enh/neo09-2' of INT/multielectrode_grasp into master

I think the /docs/asset*.nix files in elephant should be deleted, and the version of neo should be limited to <0.10.0 -- the latest releases have problems that should be handled by a separate PR.
Michael Denker committed 2 years ago · commit 4383eec98d
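
The neo version cap mentioned in the commit message would typically be expressed as a pip requirement specifier. A minimal sketch, assuming the pin lands in the repository's requirements file (that file is not part of this commit):

    # hypothetical requirements entry: stay below the neo releases with known problems
    neo<0.10.0
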
100 changed files with 300 additions and 2119 deletions
  1. +166 -164  code/data_overview_1.py
  2. +78 -69  code/data_overview_2.py
  3. +0 -72  code/elephant/.gitignore
  4. +0 -44  code/elephant/.travis.yml
  5. +0 -1  code/elephant/AUTHORS.txt
  6. +1 -0  code/elephant/CITATION.txt
  7. +0 -10  code/elephant/LICENSE.txt
  8. +1 -0  code/elephant/LICENSE.txt
  9. +0 -8  code/elephant/MANIFEST.in
  10. +1 -0  code/elephant/MANIFEST.in
  11. +1 -0  code/elephant/PKG-INFO
  12. +1 -0  code/elephant/README.md
  13. +0 -23  code/elephant/README.rst
  14. +0 -135  code/elephant/continuous_integration/install.sh
  15. +0 -23  code/elephant/continuous_integration/test_script.sh
  16. +0 -153  code/elephant/doc/Makefile
  17. +1 -0  code/elephant/doc/Makefile
  18. +1 -0  code/elephant/doc/_templates/autosummary/class.rst
  19. +1 -0  code/elephant/doc/acknowledgments.rst
  20. +0 -47  code/elephant/doc/authors.rst
  21. +1 -0  code/elephant/doc/authors.rst
  22. +1 -0  code/elephant/doc/bib/elephant.bib
  23. +1 -0  code/elephant/doc/citation.rst
  24. +0 -323  code/elephant/doc/conf.py
  25. +1 -0  code/elephant/doc/conf.py
  26. +0 -226  code/elephant/doc/developers_guide.rst
  27. +1 -0  code/elephant/doc/developers_guide.rst
  28. +1 -0  code/elephant/doc/documentation_guide.rst
  29. +0 -34  code/elephant/doc/environment.yml
  30. +1 -0  code/elephant/doc/get_in_touch.rst
  31. BIN  code/elephant/doc/images/elephant_favicon.ico
  32. +1 -0  code/elephant/doc/images/elephant_favicon.ico
  33. BIN  code/elephant/doc/images/elephant_logo.png
  34. +1 -0  code/elephant/doc/images/elephant_logo.png
  35. BIN  code/elephant/doc/images/elephant_logo_sidebar.png
  36. +1 -0  code/elephant/doc/images/elephant_logo_sidebar.png
  37. BIN  code/elephant/doc/images/elephant_structure.png
  38. BIN  code/elephant/doc/images/tutorials/tutorial_1_figure_1.png
  39. BIN  code/elephant/doc/images/tutorials/tutorial_1_figure_2.png
  40. +0 -44  code/elephant/doc/index.rst
  41. +1 -0  code/elephant/doc/index.rst
  42. +0 -107  code/elephant/doc/install.rst
  43. +1 -0  code/elephant/doc/install.rst
  44. +1 -0  code/elephant/doc/maintainers_guide.rst
  45. +0 -190  code/elephant/doc/make.bat
  46. +1 -0  code/elephant/doc/make.bat
  47. +0 -26  code/elephant/doc/modules.rst
  48. +1 -0  code/elephant/doc/modules.rst
  49. +0 -113  code/elephant/doc/overview.rst
  50. +1 -0  code/elephant/doc/reference/_spike_train_processing.rst
  51. +0 -6  code/elephant/doc/reference/asset.rst
  52. +1 -0  code/elephant/doc/reference/asset.rst
  53. +1 -0  code/elephant/doc/reference/causality.rst
  54. +1 -0  code/elephant/doc/reference/cell_assembly_detection.rst
  55. +1 -0  code/elephant/doc/reference/change_point_detection.rst
  56. +0 -6  code/elephant/doc/reference/conversion.rst
  57. +1 -0  code/elephant/doc/reference/conversion.rst
  58. +0 -6  code/elephant/doc/reference/cubic.rst
  59. +1 -0  code/elephant/doc/reference/cubic.rst
  60. +0 -6  code/elephant/doc/reference/current_source_density.rst
  61. +1 -0  code/elephant/doc/reference/current_source_density.rst
  62. +1 -0  code/elephant/doc/reference/gpfa.rst
  63. +0 -6  code/elephant/doc/reference/kernels.rst
  64. +1 -0  code/elephant/doc/reference/kernels.rst
  65. +0 -6  code/elephant/doc/reference/neo_tools.rst
  66. +1 -0  code/elephant/doc/reference/neo_tools.rst
  67. +0 -6  code/elephant/doc/reference/pandas_bridge.rst
  68. +1 -0  code/elephant/doc/reference/pandas_bridge.rst
  69. +1 -0  code/elephant/doc/reference/parallel.rst
  70. +1 -0  code/elephant/doc/reference/phase_analysis.rst
  71. +0 -13  code/elephant/doc/reference/signal_processing.rst
  72. +1 -0  code/elephant/doc/reference/signal_processing.rst
  73. +1 -0  code/elephant/doc/reference/spade.rst
  74. +0 -6  code/elephant/doc/reference/spectral.rst
  75. +1 -0  code/elephant/doc/reference/spectral.rst
  76. +0 -13  code/elephant/doc/reference/spike_train_correlation.rst
  77. +1 -0  code/elephant/doc/reference/spike_train_correlation.rst
  78. +0 -8  code/elephant/doc/reference/spike_train_dissimilarity.rst
  79. +1 -0  code/elephant/doc/reference/spike_train_dissimilarity.rst
  80. +0 -11  code/elephant/doc/reference/spike_train_generation.rst
  81. +1 -0  code/elephant/doc/reference/spike_train_generation.rst
  82. +0 -12  code/elephant/doc/reference/spike_train_surrogates.rst
  83. +1 -0  code/elephant/doc/reference/spike_train_surrogates.rst
  84. +1 -0  code/elephant/doc/reference/spike_train_synchrony.rst
  85. +0 -18  code/elephant/doc/reference/sta.rst
  86. +1 -0  code/elephant/doc/reference/sta.rst
  87. +0 -6  code/elephant/doc/reference/statistics.rst
  88. +1 -0  code/elephant/doc/reference/statistics.rst
  89. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.AlphaKernel.rst
  90. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.EpanechnikovLikeKernel.rst
  91. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.ExponentialKernel.rst
  92. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.GaussianKernel.rst
  93. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.LaplacianKernel.rst
  94. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.RectangularKernel.rst
  95. +1 -0  code/elephant/doc/reference/toctree/kernels/elephant.kernels.TriangularKernel.rst
  96. +0 -6  code/elephant/doc/reference/unitary_event_analysis.rst
  97. +1 -0  code/elephant/doc/reference/unitary_event_analysis.rst
  98. +1 -0  code/elephant/doc/reference/waveform_features.rst
  99. +0 -172  code/elephant/doc/release_notes.rst
  100. +0 -0  code/elephant/doc/release_notes.rst

+ 166 - 164
code/data_overview_1.py

@@ -50,7 +50,8 @@ from reachgraspio import reachgraspio
 
 import odml.tools
 
-import neo_utils
+from neo import utils as neo_utils
+from neo_utils import load_segment
 import odml_utils
 
 
@@ -102,22 +103,24 @@ def force_aspect(ax, aspect=1):
         (ax.get_ylim()[1] - ax.get_ylim()[0])) / aspect)
 
 
-def get_arraygrid(blackrock_elid_list, chosen_el, rej_el=None):
-    if rej_el is None:
-        rej_el = []
-    array_grid = np.zeros((10, 10))
-    for m in range(10):
-        for n in range(10):
-            idx = (9 - m) * 10 + n
-            bl_id = blackrock_elid_list[idx]
-            if bl_id == -1:
-                array_grid[m, n] = 0.7
-            elif bl_id == chosen_el:
-                array_grid[m, n] = -0.7
-            elif bl_id in rej_el:
-                array_grid[m, n] = -0.35
-            else:
-                array_grid[m, n] = 0
+def get_arraygrid(signals, chosen_el):
+    array_grid = np.ones((10, 10)) * 0.7
+
+    rejections = np.logical_or.reduce(
+        [signals.array_annotations['electrode_reject_HFC'],
+         signals.array_annotations['electrode_reject_LFC'],
+         signals.array_annotations['electrode_reject_IFC']])
+
+    for sig_idx in range(signals.shape[-1]):
+        connector_aligned_id = signals.array_annotations['connector_aligned_ids'][sig_idx]
+        x, y = int((connector_aligned_id - 1) // 10), int((connector_aligned_id - 1) % 10)
+
+        if signals.array_annotations['channel_ids'][sig_idx] == chosen_el:
+            array_grid[x, y] = -0.7
+        elif rejections[sig_idx]:
+            array_grid[x, y] = -0.35
+        else:
+            array_grid[x, y] = 0
+
     return np.ma.array(array_grid, mask=np.isnan(array_grid))
 
 
@@ -125,12 +128,9 @@ def get_arraygrid(blackrock_elid_list, chosen_el, rej_el=None):
 # Load data and metadata for a monkey
 # =============================================================================
 # CHANGE this parameter to load data of the different monkeys
-# monkey = 'Nikos2'
-monkey = 'Lilou'
+monkey = 'Nikos2'
+# monkey = 'Lilou'
 
-nsx_none = {'Lilou': None, 'Nikos2': None}
-nsx_lfp = {'Lilou': 2, 'Nikos2': 2}
-nsx_raw = {'Lilou': 5, 'Nikos2': 6}
 chosen_el = {'Lilou': 71, 'Nikos2': 63}
 chosen_units = {'Lilou': range(1, 5), 'Nikos2': range(1, 5)}
 
@@ -141,141 +141,133 @@ session = reachgraspio.ReachGraspIO(
     odml_directory=datasetdir,
     verbose=False)
 
-bl_lfp = session.read_block(
-    index=None,
-    name=None,
-    description=None,
-    nsx_to_load=nsx_lfp[monkey],
-    n_starts=None,
-    n_stops=None,
-    channels='all',
-    units=chosen_units[monkey],
-    load_waveforms=False,
-    load_events=True,
-    scaling='voltage',
-    lazy=False,
-    cascade=True)
-
-bl_raw = session.read_block(
-    index=None,
-    name=None,
-    description=None,
-    nsx_to_load=nsx_raw[monkey],
-    n_starts=None,
-    n_stops=None,
-    channels=chosen_el[monkey],
-    units=chosen_units[monkey],
-    load_waveforms=True,
-    load_events=True,
-    scaling='voltage',
-    lazy=False,
-    cascade=True)
-
-seg_raw = bl_raw.segments[0]
-seg_lfp = bl_lfp.segments[0]
+block = session.read_block(lazy=True)
+segment = block.segments[0]
 
 # Displaying loaded data structure as string output
 print("\nBlock")
-print('Attributes ', bl_raw.__dict__.keys())
-print('Annotations', bl_raw.annotations)
+print('Attributes ', block.__dict__.keys())
+print('Annotations', block.annotations)
 print("\nSegment")
-print('Attributes ', seg_raw.__dict__.keys())
-print('Annotations', seg_raw.annotations)
+print('Attributes ', segment.__dict__.keys())
+print('Annotations', segment.annotations)
 print("\nEvents")
-for x in seg_raw.events:
+for x in segment.events:
     print('\tEvent with name', x.name)
     print('\t\tAttributes ', x.__dict__.keys())
     print('\t\tAnnotation keys', x.annotations.keys())
     print('\t\ttimes', x.times[:20])
-    for anno_key in ['trial_id', 'trial_timestamp_id', 'trial_event_labels',
-                     'trial_reject_IFC']:
-        print('\t\t'+anno_key, x.annotations[anno_key][:20])
-
-print("\nChannels")
-for x in bl_raw.channel_indexes:
-    print('\tChannel with name', x.name)
-    print('\t\tAttributes ', x.__dict__.keys())
-    print('\t\tchannel_ids', x.channel_ids)
-    print('\t\tchannel_names', x.channel_names)
-    print('\t\tAnnotations', x.annotations)
-print("\nUnits")
-for x in bl_raw.list_units:
-    print('\tUnit with name', x.name)
+    if x.name == 'TrialEvents':
+        for anno_key in ['trial_id', 'trial_timestamp_id', 'trial_event_labels',
+                         'trial_reject_IFC']:
+            print('\t\t'+anno_key, x.array_annotations[anno_key][:20])
+
+print("\nGroups")
+for x in block.groups:
+    print('\tGroup with name', x.name)
     print('\t\tAttributes ', x.__dict__.keys())
     print('\t\tAnnotations', x.annotations)
-    print('\t\tchannel_id', x.annotations['channel_id'])
-    assert(x.annotations['channel_id'] == x.channel_index.channel_ids[0])
+
 print("\nSpikeTrains")
-for x in seg_raw.spiketrains:
+for x in segment.spiketrains:
     print('\tSpiketrain with name', x.name)
     print('\t\tAttributes ', x.__dict__.keys())
     print('\t\tAnnotations', x.annotations)
     print('\t\tchannel_id', x.annotations['channel_id'])
-    print('\t\tspike times', x.times[0:20])
+    print('\t\tunit_id', x.annotations['unit_id'])
+    print('\t\tis sua', x.annotations['sua'])
+    print('\t\tis mua', x.annotations['mua'])
+
 print("\nAnalogSignals")
-for x in seg_raw.analogsignals:
+for x in segment.analogsignals:
     print('\tAnalogSignal with name', x.name)
     print('\t\tAttributes ', x.__dict__.keys())
     print('\t\tAnnotations', x.annotations)
-    print('\t\tchannel_id', x.annotations['channel_id'])
+    print('\t\tchannel_ids', x.array_annotations['channel_ids'])
 
 # get start and stop events of trials
 start_events = neo_utils.get_events(
-    seg_raw,
-    properties={
-        'name': 'TrialEvents',
-        'trial_event_labels': 'TS-ON',
-        'performance_in_trial': 255})
+    segment,
+    name='TrialEvents',
+    trial_event_labels='TS-ON',
+    performance_in_trial=255)
 stop_events = neo_utils.get_events(
-    seg_raw,
-    properties={
-        'name': 'TrialEvents',
-        'trial_event_labels': 'STOP',
-        'performance_in_trial': 255})
+    segment,
+    name='TrialEvents',
+    trial_event_labels='STOP',
+    performance_in_trial=255)
 
 # there should only be one event object for these conditions
 assert len(start_events) == 1
 assert len(stop_events) == 1
 
 # insert epochs from 250 ms before TS-ON to 500 ms after STOP, corresponding to complete trials
-neo_utils.add_epoch(
-    seg_raw,
+ep = neo_utils.add_epoch(
+    segment,
     start_events[0],
     stop_events[0],
     pre=-250 * pq.ms,
     post=500 * pq.ms,
-    trial_status='complete_trials',
-    trial_type=start_events[0].annotations['belongs_to_trialtype'],
-    trial_performance=start_events[0].annotations['performance_in_trial'])
+    trial_status='complete_trials')
+ep.array_annotate(trial_type=start_events[0].array_annotations['belongs_to_trialtype'],
+                  trial_performance=start_events[0].array_annotations['performance_in_trial'])
 
 # access single epoch of this data_segment
-epochs = neo_utils.get_epochs(seg_raw,
-                              properties={'trial_status': 'complete_trials'})
+epochs = neo_utils.get_epochs(segment, trial_status='complete_trials')
 assert len(epochs) == 1
 
-# cut segments according to inserted 'complete_trials' epochs and reset trial
-#  times
-cut_segments_raw = neo_utils.cut_segment_by_epoch(
-    seg_raw, epochs[0], reset_time=True)
-
-cut_segments_lfp = neo_utils.cut_segment_by_epoch(
-    seg_lfp, epochs[0], reset_time=True)
+# remove spiketrains not belonging to chosen_electrode
+segment.spiketrains = segment.filter(targdict={'channel_id': chosen_el[monkey]},
+                                     recursive=True, objects='SpikeTrainProxy')
+segment.spiketrains = [st for st in segment.spiketrains if st.annotations['unit_id'] in chosen_units[monkey]]
+# replace the lazy segment with a new segment containing the loaded data
+# to speed up the cutting of segments
+segment = load_segment(segment, load_waveforms=True, channel_indexes=[chosen_el[monkey]])
+
+# keep only the least processed neuronal signal (highest sampling rate) if multiple versions are present
+max_sampling_rate = max([a.sampling_rate for a in segment.analogsignals])
+idx = 0
+while idx < len(segment.analogsignals):
+    signal = segment.analogsignals[idx]
+    if signal.annotations['neural_signal'] and signal.sampling_rate < max_sampling_rate:
+        segment.analogsignals.pop(idx)
+    else:
+        idx += 1
+
+# neural_signals = []
+# behav_signals = []
+# for sig in segment.analogsignals:
+#     if sig.annotations['neural_signal']:
+#         neural_signals.append(sig)
+#     else:
+#         behav_signals.append(sig)
+#
+# chosen_raw = neural_signals[0]
+# for sig in neural_signals:
+#     if sig.sampling_rate > chosen_raw.sampling_rate:
+#         chosen_raw = sig
+#
+# segment.analogsignals = behav_signals + [chosen_raw]
+
+# cut segments according to inserted 'complete_trials' epochs and reset trial times
+cut_segments = neo_utils.cut_segment_by_epoch(segment, epochs[0], reset_time=True)
 
 # =============================================================================
 # Define data for overview plots
 # =============================================================================
 trial_index = {'Lilou': 0, 'Nikos2': 6}
 
-trial_seg_raw = cut_segments_raw[trial_index[monkey]]
-trial_seg_lfp = cut_segments_lfp[trial_index[monkey]]
+trial_segment = cut_segments[trial_index[monkey]]
 
-blackrock_elid_list = bl_lfp.annotations['avail_electrode_ids']
+blackrock_elid_list = block.annotations['avail_electrode_ids']
 
 # get 'TrialEvents'
-event = trial_seg_lfp.events[2]
-start = event.annotations['trial_event_labels'].index('TS-ON')
-trialx_trty = event.annotations['belongs_to_trialtype'][start]
-trialx_trtimeid = event.annotations['trial_timestamp_id'][start]
+event = trial_segment.events[2]
+start = np.where(event.array_annotations['trial_event_labels'] == 'TS-ON')[0][0]
+trialx_trty = event.array_annotations['belongs_to_trialtype'][start]
+trialx_trtimeid = event.array_annotations['trial_timestamp_id'][start]
 trialx_color = trialtype_colors[trialx_trty]
 
 # find trial index for next trial with opposite force type (for ax5b plot)
@@ -284,13 +276,13 @@ if 'LF' in trialx_trty:
 else:
     trialz_trty = trialx_trty.replace('HF', 'LF')
 
-for i, tr in enumerate(cut_segments_lfp):
+for i, tr in enumerate(cut_segments):
     eventz = tr.events[2]
-    nextft = eventz.annotations['trial_event_labels'].index('TS-ON')
-    if eventz.annotations['belongs_to_trialtype'][nextft] == trialz_trty:
-        trialz_trtimeid = eventz.annotations['trial_timestamp_id'][nextft]
+    nextft = np.where(eventz.array_annotations['trial_event_labels'] == 'TS-ON')[0][0]
+    if eventz.array_annotations['belongs_to_trialtype'][nextft] == trialz_trty:
+        trialz_trtimeid = eventz.array_annotations['trial_timestamp_id'][nextft]
         trialz_color = trialtype_colors[trialz_trty]
-        trialz_seg_lfp = tr
+        trialz_seg = tr
         break
 
 
@@ -339,7 +331,7 @@ behav_signal_unit = pq.V
 # =============================================================================
 
 # load complete metadata collection
-odmldoc = odml.tools.xmlparser.load(datasetdir + datafile + '.odml')
+odmldoc = odml.load(datasetdir + datafile + '.odml')
 
 # get total trial number
 trno_tot = odml_utils.get_TrialCount(odmldoc)
@@ -417,7 +409,7 @@ leg = ax1.legend(
 leg.draw_frame(False)
 
 # adjust x and y axis
-xticks = [i for i in range(1, 101, 10)] + [100]
+xticks = list(range(1, 101, 10)) + [100]
 ax1.set_xticks(xticks)
 ax1.set_xticklabels([str(int(t)) for t in xticks], size='xx-small')
 ax1.set_xlabel('trial ID', size='x-small')
@@ -427,7 +419,7 @@ ax1.set_ylim(0, 3)
 ax1.spines['top'].set_visible(False)
 ax1.spines['left'].set_visible(False)
 ax1.spines['right'].set_visible(False)
-ax1.tick_params(direction='out', top='off')
+ax1.tick_params(direction='out', top=False, left=False, right=False)
 ax1.set_title('sequence of the first 100 trials', fontdict_titles, y=2)
 ax1.set_aspect('equal')
 
@@ -435,17 +427,22 @@ ax1.set_aspect('equal')
 # =============================================================================
 # PLOT ELECTRODE POSITION of chosen electrode
 # =============================================================================
-arraygrid = get_arraygrid(blackrock_elid_list, chosen_el[monkey])
+neural_signals = [sig for sig in trial_segment.analogsignals if sig.annotations['neural_signal']]
+assert len(neural_signals) == 1
+neural_signals = neural_signals[0]
+
+arraygrid = get_arraygrid(neural_signals, chosen_el[monkey])
 cmap = plt.cm.RdGy
 
 ax2a.pcolormesh(
-    np.flipud(arraygrid), vmin=-1, vmax=1, lw=1, cmap=cmap, edgecolors='k',
-    shading='faceted')
+    arraygrid, vmin=-1, vmax=1, lw=1, cmap=cmap, edgecolors='k',
+    # shading='faceted'
+    )
 
 force_aspect(ax2a, aspect=1)
 ax2a.tick_params(
-    bottom='off', top='off', left='off', right='off',
-    labelbottom='off', labeltop='off', labelleft='off', labelright='off')
+    bottom=False, top=False, left=False, right=False,
+    labelbottom=False, labeltop=False, labelleft=False, labelright=False)
 ax2a.set_title('electrode pos.', fontdict_titles)
 
 
@@ -457,15 +454,17 @@ unit_type = {1: '', 2: '', 3: ''}
 
 wf_lim = []
 # plotting waveform for all spiketrains available
-for spiketrain in trial_seg_raw.spiketrains:
+for spiketrain in trial_segment.spiketrains:
     unit_id = spiketrain.annotations['unit_id']
     # get unit type
     if spiketrain.annotations['sua']:
         unit_type[unit_id] = 'SUA'
     elif spiketrain.annotations['mua']:
         unit_type[unit_id] = 'MUA'
+    elif unit_id in [0, 255]:
+        continue
     else:
-        pass
+        raise ValueError(f'Found unit with id {unit_id} that is neither SUA nor MUA.')
     # get correct ax
     ax = unit_ax_translator[unit_id]
     # get wf sampling time before threshold crossing
@@ -489,7 +488,7 @@ for unit_id, ax in unit_ax_translator.items():
     ax.set_title('unit %i (%s)' % (unit_id, unit_type[unit_id]),
                  fontdict_titles)
     ax.tick_params(direction='in', length=3, labelsize='xx-small',
-                   labelleft='off', labelright='off')
+                   labelleft=False, labelright=False)
     ax.set_xlabel(wf_time_unit.dimensionality.latex, fontdict_axis)
     xticklocator = ticker.MaxNLocator(nbins=5)
     ax.xaxis.set_major_locator(xticklocator)
@@ -497,7 +496,7 @@ for unit_id, ax in unit_ax_translator.items():
     force_aspect(ax, aspect=1)
 
 # adding ylabel
-ax2d.tick_params(labelsize='xx-small', labelright='on')
+ax2d.tick_params(labelsize='xx-small', labelright=True)
 ax2d.set_ylabel(wf_signal_unit.dimensionality.latex, fontdict_axis)
 ax2d.yaxis.set_label_position("right")
 
@@ -508,20 +507,20 @@ ax2d.yaxis.set_label_position("right")
 plotted_unit_ids = []
 
 # plotting all available spiketrains
-for st in trial_seg_raw.spiketrains:
+for st in trial_segment.spiketrains:
     unit_id = st.annotations['unit_id']
     plotted_unit_ids.append(unit_id)
     ax3.plot(st.times.rescale(plotting_time_unit),
              np.zeros(len(st.times)) + unit_id,
              'k|')
 
-# setting layout of spiktrain plot
+# setting layout of spiketrain plot
 ax3.set_ylim(min(plotted_unit_ids) - 0.5, max(plotted_unit_ids) + 0.5)
 ax3.set_ylabel(r'unit ID', fontdict_axis)
 ax3.yaxis.set_major_locator(ticker.MultipleLocator(base=1))
 ax3.yaxis.set_label_position("right")
 ax3.tick_params(axis='y', direction='in', length=3, labelsize='xx-small',
-                labelleft='off', labelright='on')
+                labelleft=False, labelright=True)
 ax3.invert_yaxis()
 ax3.set_title('spiketrains', fontdict_titles)
 
@@ -529,23 +528,25 @@ ax3.set_title('spiketrains', fontdict_titles)
 # PLOT "raw" SIGNAL of chosen trial of chosen electrode
 # =============================================================================
 # get "raw" data from chosen electrode
-assert len(trial_seg_raw.analogsignals) == 1
-el_raw_sig = trial_seg_raw.analogsignals[0]
+el_raw_sig = [a for a in trial_segment.analogsignals if a.annotations['neural_signal']]
+assert len(el_raw_sig) == 1
+el_raw_sig = el_raw_sig[0]
 
-# plotting raw signal trace
+# plotting raw signal trace of chosen electrode
+chosen_el_idx = np.where(el_raw_sig.array_annotations['channel_ids'] == chosen_el[monkey])[0][0]
 ax4.plot(el_raw_sig.times.rescale(plotting_time_unit),
-         el_raw_sig.squeeze().rescale(raw_signal_unit),
+         el_raw_sig[:, chosen_el_idx].squeeze().rescale(raw_signal_unit),
          color='k')
 
 # setting layout of raw signal plot
 ax4.set_ylabel(raw_signal_unit.units.dimensionality.latex, fontdict_axis)
 ax4.yaxis.set_label_position("right")
 ax4.tick_params(axis='y', direction='in', length=3, labelsize='xx-small',
-                labelleft='off', labelright='on')
+                labelleft=False, labelright=True)
 ax4.set_title('"raw" signal', fontdict_titles)
 
-ax4.set_xlim(trial_seg_raw.t_start.rescale(plotting_time_unit),
-             trial_seg_raw.t_stop.rescale(plotting_time_unit))
+ax4.set_xlim(trial_segment.t_start.rescale(plotting_time_unit),
+             trial_segment.t_stop.rescale(plotting_time_unit))
 ax4.xaxis.set_major_locator(ticker.MultipleLocator(base=1))
 
 
@@ -553,15 +554,14 @@ ax4.xaxis.set_major_locator(ticker.MultipleLocator(base=1))
 # PLOT EVENTS across ax3 and ax4 and add time bar
 # =============================================================================
 # find trial relevant events
-startidx = event.annotations['trial_event_labels'].index('TS-ON')
-stopidx = event.annotations['trial_event_labels'][startidx:].index('STOP') + \
-    startidx + 1
+startidx = np.where(event.array_annotations['trial_event_labels'] == 'TS-ON')[0][0]
+stopidx = np.where(event.array_annotations['trial_event_labels'][startidx:] == 'STOP')[0][0] + startidx + 1
 
 for ax in [ax3, ax4]:
     xticks = []
     xticklabels = []
     for ev_id, ev in enumerate(event[startidx:stopidx]):
-        ev_labels = event.annotations['trial_event_labels'][startidx:stopidx]
+        ev_labels = event.array_annotations['trial_event_labels'][startidx:stopidx]
         if ev_labels[ev_id] in event_colors.keys():
             ev_color = event_colors[ev_labels[ev_id]]
             ax.axvline(
@@ -577,7 +577,7 @@ for ax in [ax3, ax4]:
     ax.set_xticks(xticks)
     ax.set_xticklabels(xticklabels)
     ax.tick_params(axis='x', direction='out', length=3, labelsize='xx-small',
-                   labeltop='off', top='off')
+                   labeltop=False, top=False)
 
 timebar_ypos = ax4.get_ylim()[0] + np.diff(ax4.get_ylim())[0] / 10
 timebar_labeloffset = np.diff(ax4.get_ylim())[0] * 0.01
@@ -594,11 +594,12 @@ ax4.text(timebar_xmin + 0.25 * pq.s, timebar_ypos + timebar_labeloffset,
 # PLOT BEHAVIORAL SIGNALS of chosen trial
 # =============================================================================
 # get behavioral signals
-ainp_signals = [nsig for nsig in trial_seg_lfp.analogsignals if
-                nsig.annotations['channel_id'] > 96]
+ainp_signals = [nsig for nsig in trial_segment.analogsignals if not nsig.annotations['neural_signal']][0]
 
-ainp_trialz = [nsig for nsig in trialz_seg_lfp.analogsignals if
-               nsig.annotations['channel_id'] == 141][0]
+force_channel_idx = np.where(ainp_signals.array_annotations['channel_ids'] == 141)[0][0]
+ainp_trialz_signals = [a for a in trialz_seg.analogsignals if not a.annotations['neural_signal']]
+assert len(ainp_trialz_signals)
+ainp_trialz = ainp_trialz_signals[0][:, force_channel_idx]
 
 # find out what signal to use
 trialx_sec = odmldoc['Recording']['TaskSettings']['Trial_%03i' % trialx_trid]
@@ -616,31 +617,32 @@ else:
 
 
 # define time epoch
-startidx = event.annotations['trial_event_labels'].index('SR')
-stopidx = event.annotations['trial_event_labels'].index('OBB')
+startidx = np.where(event.array_annotations['trial_event_labels'] == 'SR')[0][0]
+stopidx = np.where(event.array_annotations['trial_event_labels'] == 'OBB')[0][0]
 sr = event[startidx].rescale(plotting_time_unit)
 stop = event[stopidx].rescale(plotting_time_unit) + 0.050 * pq.s
-startidx = event.annotations['trial_event_labels'].index('FSRplat-ON')
-stopidx = event.annotations['trial_event_labels'].index('FSRplat-OFF')
+startidx = np.where(event.array_annotations['trial_event_labels'] == 'FSRplat-ON')[0][0]
+stopidx = np.where(event.array_annotations['trial_event_labels'] == 'FSRplat-OFF')[0][0]
 fplon = event[startidx].rescale(plotting_time_unit)
 fploff = event[stopidx].rescale(plotting_time_unit)
 
 # define time epoch trialz
-startidx = eventz.annotations['trial_event_labels'].index('FSRplat-ON')
-stopidx = eventz.annotations['trial_event_labels'].index('FSRplat-OFF')
+startidx = np.where(eventz.array_annotations['trial_event_labels'] == 'FSRplat-ON')[0][0]
+stopidx = np.where(eventz.array_annotations['trial_event_labels'] == 'FSRplat-OFF')[0][0]
 fplon_trz = eventz[startidx].rescale(plotting_time_unit)
 fploff_trz = eventz[stopidx].rescale(plotting_time_unit)
 
 # plotting grip force and object displacement
 ai_legend = []
 ai_legend_txt = []
-for ainp in ainp_signals:
-    if ainp.annotations['channel_id'] in trialx_chids:
+for chidx, chid in enumerate(ainp_signals.array_annotations['channel_ids']):
+    ainp = ainp_signals[:, chidx]
+    if ainp.array_annotations['channel_ids'][0] in trialx_chids:
         ainp_times = ainp.times.rescale(plotting_time_unit)
         mask = (ainp_times > sr) & (ainp_times < stop)
         ainp_ampli = stats.zscore(ainp.magnitude[mask])
 
-        if ainp.annotations['channel_id'] != 143:
+        if ainp.array_annotations['channel_ids'][0] != 143:
             color = 'gray'
             ai_legend_txt.append('grip force')
         else:
@@ -650,7 +652,7 @@ for ainp in ainp_signals:
             ax5a.plot(ainp_times[mask], ainp_ampli, color=color)[0])
 
     # get force load of this trial for next plot
-    elif ainp.annotations['channel_id'] == 141:
+    elif ainp.array_annotations['channel_ids'][0] == 141:
         ainp_times = ainp.times.rescale(plotting_time_unit)
         mask = (ainp_times > fplon) & (ainp_times < fploff)
         force_av_01 = np.mean(ainp.rescale(behav_signal_unit).magnitude[mask])
@@ -659,7 +661,7 @@ for ainp in ainp_signals:
 ax5a.set_title('grip force and object displacement', fontdict_titles)
 ax5a.yaxis.set_label_position("left")
 ax5a.tick_params(direction='in', length=3, labelsize='xx-small',
-                 labelleft='off', labelright='on')
+                 labelleft=False, labelright=True)
 ax5a.set_ylabel('zscore', fontdict_axis)
 ax5a.legend(
     ai_legend, ai_legend_txt,
@@ -679,22 +681,22 @@ ax5b.bar([0, 0.6], [force_av_01, force_av_02], bar_width, color=color)
 ax5b.set_title('load/pull force', fontdict_titles)
 ax5b.set_ylabel(behav_signal_unit.units.dimensionality.latex, fontdict_axis)
 ax5b.set_xticks([0, 0.6])
-ax5b.set_xticklabels([trialx_trty, trialz_trty], fontdict_axis)
+ax5b.set_xticklabels([trialx_trty, trialz_trty], fontdict=fontdict_axis)
 ax5b.yaxis.set_label_position("right")
 ax5b.tick_params(direction='in', length=3, labelsize='xx-small',
-                 labelleft='off', labelright='on')
+                 labelleft=False, labelright=True)
 
 # =============================================================================
 # PLOT EVENTS across ax5a and add time bar
 # =============================================================================
 # find trial relevant events
-startidx = event.annotations['trial_event_labels'].index('SR')
-stopidx = event.annotations['trial_event_labels'].index('OBB')
+startidx = np.where(event.array_annotations['trial_event_labels'] == 'SR')[0][0]
+stopidx = np.where(event.array_annotations['trial_event_labels'] == 'OBB')[0][0]
 
 xticks = []
 xticklabels = []
 for ev_id, ev in enumerate(event[startidx:stopidx]):
-    ev_labels = event.annotations['trial_event_labels'][startidx:stopidx + 1]
+    ev_labels = event.array_annotations['trial_event_labels'][startidx:stopidx + 1]
     if ev_labels[ev_id] in ['RW-ON']:
         ax5a.axvline(ev.rescale(plotting_time_unit), color='k', zorder=0.5)
         xticks.append(ev.rescale(plotting_time_unit))
@@ -712,9 +714,9 @@ for ev_id, ev in enumerate(event[startidx:stopidx]):
             ev.rescale(plotting_time_unit), color='k', ls='-.', zorder=0.5)
 
 ax5a.set_xticks(xticks)
-ax5a.set_xticklabels(xticklabels, fontdict_axis, rotation=90)
+ax5a.set_xticklabels(xticklabels, fontdict=fontdict_axis, rotation=90)
 ax5a.tick_params(axis='x', direction='out', length=3, labelsize='xx-small',
-                 labeltop='off', top='off')
+                 labeltop=False, top=False)
 ax5a.set_ylim([-2.0, 2.0])
 
 timebar_ypos = ax5a.get_ylim()[0] + np.diff(ax5a.get_ylim())[0] / 10
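
The rewrite of data_overview_1.py above replaces the eager, per-nsx read_block() calls with neo's lazy-loading idiom: read_block(lazy=True) returns proxy objects, trial epochs are derived from the 'TrialEvents' event object, and the data are materialized once before cutting. A condensed sketch of that flow, assuming a `session` created via reachgraspio.ReachGraspIO as in the script, and using the repository's local load_segment helper:

    import quantities as pq
    from neo import utils as neo_utils
    from neo_utils import load_segment  # local helper from this repository

    # lazy=True returns proxy objects; no signal data is read yet
    block = session.read_block(lazy=True)
    segment = block.segments[0]

    # find start/stop events of correctly performed trials
    starts = neo_utils.get_events(segment, name='TrialEvents',
                                  trial_event_labels='TS-ON',
                                  performance_in_trial=255)
    stops = neo_utils.get_events(segment, name='TrialEvents',
                                 trial_event_labels='STOP',
                                 performance_in_trial=255)

    # one epoch per trial: 250 ms before TS-ON to 500 ms after STOP
    epoch = neo_utils.add_epoch(segment, starts[0], stops[0],
                                pre=-250 * pq.ms, post=500 * pq.ms,
                                trial_status='complete_trials')

    # materialize the proxies once, then cut into per-trial segments
    segment = load_segment(segment, load_waveforms=True)
    trials = neo_utils.cut_segment_by_epoch(segment, epoch, reset_time=True)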

+ 78 - 69
code/data_overview_2.py

@@ -46,9 +46,11 @@ import quantities as pq
 import numpy as np
 
 from neo import (AnalogSignal, SpikeTrain)
+from neo.utils import add_epoch, cut_segment_by_epoch, get_events, get_epochs
 from reachgraspio import reachgraspio
 
-import neo_utils
+from neo_utils import load_segment
 
 # =============================================================================
 # Define data and metadata directories and general settings
@@ -67,12 +69,11 @@ def get_monkey_datafile(monkey):
 # Enter your dataset directory here
 datasetdir = os.path.join('..', 'datasets')
 
-nsx_none = {'Lilou': None, 'Nikos2': None}
-nsx_lfp = {'Lilou': 5, 'Nikos2': 2}
 chosen_els = {'Lilou': range(3, 97, 7), 'Nikos2': range(1, 97, 7)}
 chosen_el = {
     'Lilou': chosen_els['Lilou'][0],
     'Nikos2': chosen_els['Nikos2'][0]}
+chosen_unit = 1
 trial_indexes = range(14)
 trial_index = trial_indexes[0]
 chosen_events = ['TS-ON', 'WS-ON', 'CUE-ON', 'CUE-OFF', 'GO-ON', 'SR-ON',
@@ -92,31 +93,23 @@ session = reachgraspio.ReachGraspIO(
     odml_directory=datasetdir,
     verbose=False)
 
-bl = session.read_block(
-    index=None,
-    name=None,
-    description=None,
-    nsx_to_load=nsx_lfp[monkey],
-    n_starts=None,
-    n_stops=None,
-    channels=chosen_els[monkey],
-    units=[1],  # loading only unit_id 1
-    load_waveforms=False,
-    load_events=True,
-    scaling='voltage',
-    lazy=False,
-    cascade=True)
+bl = session.read_block(lazy=True)
 
 seg = bl.segments[0]
 
 # get start and stop events of trials
-start_events = neo_utils.get_events(
-    seg, properties={
+start_events = get_events(
+    seg, **{
         'name': 'TrialEvents',
         'trial_event_labels': 'TS-ON',
         'performance_in_trial': session.performance_codes['correct_trial']})
-stop_events = neo_utils.get_events(
-    seg, properties={
+stop_events = get_events(
+    seg, **{
         'name': 'TrialEvents',
         'trial_event_labels': 'RW-ON',
         'performance_in_trial': session.performance_codes['correct_trial']})
@@ -126,30 +119,48 @@ assert len(start_events) == 1
 assert len(stop_events) == 1
 
 # insert epochs from 250 ms before TS-ON to 500 ms after RW-ON, corresponding to complete trials
-neo_utils.add_epoch(
-    seg,
-    start_events[0],
-    stop_events[0],
-    pre=-250 * pq.ms,
-    post=500 * pq.ms,
-    segment_type='complete_trials',
-    trialtype=start_events[0].annotations[
-        'belongs_to_trialtype'])
+ep = add_epoch(seg,
+               start_events[0],
+               stop_events[0],
+               pre=-250 * pq.ms,
+               post=500 * pq.ms,
+               segment_type='complete_trials')
+ep.array_annotate(trialtype=start_events[0].array_annotations['belongs_to_trialtype'])
 
 # access single epoch of this data_segment
-epochs = neo_utils.get_epochs(seg,
-                              properties={'segment_type': 'complete_trials'})
+epochs = get_epochs(seg, segment_type='complete_trials')
 assert len(epochs) == 1
 
+# keep only spiketrains of the chosen unit
+seg.spiketrains = seg.filter(targdict={'unit_id': chosen_unit},
+                             recursive=True, objects='SpikeTrainProxy')
+# remove all non-neural signals
+seg.analogsignals = seg.filter(targdict={'neural_signal': True},
+                               objects='AnalogSignalProxy')
+
+# keep only the least processed signal (highest sampling rate) if multiple versions are present
+raw_signal = seg.analogsignals[0]
+for sig in seg.analogsignals:
+    if sig.sampling_rate > raw_signal.sampling_rate:
+        raw_signal = sig
+seg.analogsignals = [raw_signal]
+
+# replace the lazy segment with a new segment containing the loaded data
+# to speed up the cutting of segments
+seg = load_segment(seg, load_waveforms=True)
+
+# keep only the signals of the chosen electrodes in the AnalogSignal object
+mask = np.isin(seg.analogsignals[0].array_annotations['channel_ids'], chosen_els[monkey])
+
+seg.analogsignals[0] = seg.analogsignals[0][:, mask]
+
 # cut segments according to inserted 'complete_trials' epochs and reset trial
 # times
-cut_segments = neo_utils.cut_segment_by_epoch(seg,
-                                              epochs[0],
-                                              reset_time=True)
+cut_segments = cut_segment_by_epoch(seg, epochs[0], reset_time=True)
 
-# explicitely adding trial type annotations to cut segments
+# explicitly adding trial type annotations to cut segments
 for i, cut_seg in enumerate(cut_segments):
-    cut_seg.annotate(trialtype=epochs[0].annotations['trialtype'][i])
+    cut_seg.annotate(trialtype=epochs[0].array_annotations['trialtype'][i])
 
 # =============================================================================
 # Define figure and subplot axis for first data overview
@@ -206,14 +217,13 @@ time_unit = 'ms'
 lfp_unit = 'uV'
 
 # define scaling factors for analogsignals
-anasig_std = np.mean([np.std(anasig.rescale(lfp_unit)) for anasig in
-                      cut_segments[trial_index].analogsignals]) \
-    * getattr(pq, lfp_unit)
+anasig_std = np.mean(np.std(
+    cut_segments[trial_index].analogsignals[0].rescale(lfp_unit), axis=0))
 anasig_offset = 3 * anasig_std
 
 
 # =============================================================================
-# SUPPLEMENTORY PLOTTING functions
+# SUPPLEMENTARY PLOTTING functions
 # =============================================================================
 
 def add_scalebar(ax, std):
@@ -246,13 +256,12 @@ selected_trial = cut_segments[trial_index]
 for el_idx, electrode_id in enumerate(chosen_els[monkey]):
 
     # PLOT ANALOGSIGNALS in upper plot
-    anasigs = selected_trial.filter(
-        channel_id=electrode_id, objects=AnalogSignal)
-    for anasig in anasigs:
-        ax1.plot(anasig.times.rescale(time_unit),
-                 np.asarray(anasig.rescale(lfp_unit))
-                 + anasig_offset.magnitude * el_idx,
-                 color=electrode_colors[el_idx])
+    chosen_el_idx = np.where(cut_segments[0].analogsignals[0].array_annotations['channel_ids'] == electrode_id)[0][0]
+    anasig = selected_trial.analogsignals[0][:, chosen_el_idx]
+    ax1.plot(anasig.times.rescale(time_unit),
+             np.asarray(anasig.rescale(lfp_unit))
+             + anasig_offset.magnitude * el_idx,
+             color=electrode_colors[el_idx])
 
     # PLOT SPIKETRAINS in lower plot
     spiketrains = selected_trial.filter(
@@ -264,10 +273,10 @@ for el_idx, electrode_id in enumerate(chosen_els[monkey]):
 # PLOT EVENTS in both plots
 for event_type in chosen_events:
     # get events of each chosen event type
-    event_data = neo_utils.get_events(selected_trial,
-                                      {'trial_event_labels': event_type})
+    event_data = get_events(selected_trial, trial_event_labels=event_type)
     for event in event_data:
-        event_color = event_colors[event.annotations['trial_event_labels'][0]]
+        event_color = event_colors[event.array_annotations['trial_event_labels'][0]]
         # adding lines
         for ax in [ax1, ax3]:
             ax.axvline(event.times.rescale(time_unit),
@@ -275,7 +284,7 @@ for event_type in chosen_events:
                        zorder=0.5)
         # adding labels
         ax1.text(event.times.rescale(time_unit), 0,
-                 event.annotations['trial_event_labels'][0],
+                 event.array_annotations['trial_event_labels'][0],
                  ha="center", va="top", rotation=45, color=event_color,
                  size=8, transform=event_label_transform)
 
@@ -298,32 +307,32 @@ ax3.set_xlabel('time [%s]' % time_unit, fontdict=fontdict_axis)
 # PLOT DATA OF SINGLE ELECTRODE
 # =============================================================================
 
+
 # plot data for each chosen trial
+chosen_el_idx = np.where(cut_segments[0].analogsignals[0].array_annotations['channel_ids'] == chosen_el[monkey])[0][0]
 for trial_idx, trial_id in enumerate(trial_indexes):
-    trial_data = cut_segments[trial_id].filter(channel_id=chosen_el[monkey])
-    trial_type = trial_data[0].parents[0].annotations['trialtype']
+    trial_spikes = cut_segments[trial_id].filter(channel_id=chosen_el[monkey], objects='SpikeTrain')
+    trial_type = cut_segments[trial_id].annotations['trialtype']
     trial_color = trialtype_colors[trial_type]
-    for t_data in trial_data:
 
-        # PLOT ANALOGSIGNALS in upper plot
-        if isinstance(t_data, AnalogSignal):
-            ax2.plot(t_data.times.rescale(time_unit),
-                     np.asarray(t_data.rescale(lfp_unit))
-                     + anasig_offset.magnitude * trial_idx,
-                     color=trial_color, zorder=1)
+    t_signal = cut_segments[trial_id].analogsignals[0][:, chosen_el_idx]
+    # PLOT ANALOGSIGNALS in upper plot
+    ax2.plot(t_signal.times.rescale(time_unit),
+             np.asarray(t_signal.rescale(lfp_unit))
+             + anasig_offset.magnitude * trial_idx,
+             color=trial_color, zorder=1)
 
+    for t_data in trial_spikes:
         # PLOT SPIKETRAINS in lower plot
-        elif isinstance(t_data, SpikeTrain):
-            ax4.plot(t_data.times.rescale(time_unit),
-                     np.ones(len(t_data.times)) + trial_idx, 'k|')
+        ax4.plot(t_data.times.rescale(time_unit),
+                 np.ones(len(t_data.times)) + trial_idx, 'k|')
 
     # PLOT EVENTS in both plots
     for event_type in chosen_events:
         # get events of each chosen event type
-        event_data = neo_utils.get_events(cut_segments[trial_id],
-                                          {'trial_event_labels': event_type})
+        event_data = get_events(cut_segments[trial_id], trial_event_labels=event_type)
         for event in event_data:
-            color = event_colors[event.annotations['trial_event_labels'][0]]
+            color = event_colors[event.array_annotations['trial_event_labels'][0]]
             ax2.vlines(x=event.times.rescale(time_unit),
                        ymin=(trial_idx - 0.5) * anasig_offset,
                        ymax=(trial_idx + 0.5) * anasig_offset,
@@ -340,7 +349,7 @@ ax2.set_title('single electrode', fontdict=fontdict_titles)
 ax2.set_ylabel('trial id', fontdict=fontdict_axis)
 ax2.set_yticks(np.asarray(trial_indexes) * anasig_offset)
 ax2.set_yticklabels(
-    [epochs[0].annotations['trial_id'][_] for _ in trial_indexes])
+    [epochs[0].array_annotations['trial_id'][_] for _ in trial_indexes])
 ax2.yaxis.set_label_position("right")
 ax2.tick_params(direction='in', length=3, labelleft='off', labelright='on')
 ax2.autoscale(enable=True, axis='y')
@@ -355,7 +364,7 @@ ax4.xaxis.set_ticks(np.arange(start, end, 1000))
 ax4.xaxis.set_ticks(np.arange(start, end, 500), minor=True)
 ax4.set_yticks(range(1, len(trial_indexes) + 1))
 ax4.set_yticklabels(np.asarray(
-    [epochs[0].annotations['trial_id'][_] for _ in trial_indexes]))
+    [epochs[0].array_annotations['trial_id'][_] for _ in trial_indexes]))
 ax4.yaxis.set_label_position("right")
 ax4.tick_params(direction='in', length=3, labelleft='off', labelright='on')
 ax4.autoscale(enable=True, axis='y')
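
Both scripts also move per-element metadata from annotations (one value per container) to array_annotations (one value per array element), which is why list-based .index() lookups become np.where() calls on arrays throughout the diff. A minimal sketch of the difference, using hypothetical event data:

    import numpy as np
    import quantities as pq
    from neo import Event

    # hypothetical event object with one label per time point
    event = Event(times=[1.0, 2.5, 4.0] * pq.s,
                  labels=np.array(['TS-ON', 'GO-ON', 'STOP']),
                  name='TrialEvents')
    event.array_annotate(
        trial_event_labels=np.array(['TS-ON', 'GO-ON', 'STOP']))

    # annotations hold plain Python values, so lists were searched with .index();
    # array_annotations hold numpy arrays, so np.where() is used instead
    start = np.where(
        event.array_annotations['trial_event_labels'] == 'TS-ON')[0][0]
    assert start == 0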

+ 0 - 72
code/elephant/.gitignore

@@ -1,72 +0,0 @@
-#########################################
-# Editor temporary/working/backup files #
-.#*
-[#]*#
-*~
-*$
-*.bak
-.coverage
-*.kdev4
-*.komodoproject
-.mr.developer.cfg
-nosetests.xml
-*.orig
-.project
-.pydevproject
-.settings
-*.tmp*
-.idea
-
-# Compiled source #
-###################
-*.a
-*.com
-*.class
-*.dll
-*.exe
-*.mo
-*.o
-*.py[ocd]
-*.so
-
-# Python files #
-################
-# setup.py working directory
-build
-# other build directories
-bin
-parts
-var
-lib
-lib64
-# sphinx build directory
-doc/_build
-# setup.py dist directory
-dist
-sdist
-# Egg metadata
-*.egg-info
-*.egg
-*.EGG
-*.EGG-INFO
-eggs
-develop-eggs
-# tox testing tool
-.tox
-# Packages
-.installed.cfg
-pip-log.txt
-# coverage
-cover
-
-# OS generated files #
-######################
-.directory
-.gdb_history
-.DS_Store?
-ehthumbs.db
-Icon?
-Thumbs.db
-
-# Things specific to this project #
-###################################

+ 0 - 44
code/elephant/.travis.yml

@@ -1,44 +0,0 @@
-dist: precise
-language: python
-sudo: false
-
-addons:
-   apt:
-      packages:
-      - libatlas3gf-base
-      - libatlas-dev
-      - libatlas-base-dev
-      - liblapack-dev
-      - gfortran
-      - python-scipy
-
-python:
-  - 2.7.13     
-      
-env:
-  matrix:
-    # This environment tests the newest supported anaconda env
-    - DISTRIB="conda" PYTHON_VERSION="2.7" INSTALL_MKL="true"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" PANDAS_VERSION="0.23.4"
-      SIX_VERSION="1.10.0" COVERAGE="true"
-    - DISTRIB="conda" PYTHON_VERSION="3.5" INSTALL_MKL="true"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" PANDAS_VERSION="0.23.4"
-      SIX_VERSION="1.10.0" COVERAGE="true"
-    # This environment tests minimal dependency versions
-    - DISTRIB="conda_min" PYTHON_VERSION="2.7" INSTALL_MKL="false"
-      SIX_VERSION="1.10.0" NUMPY_VERSION="1.8.2" SCIPY_VERSION="0.14.0" COVERAGE="true"
-    - DISTRIB="conda_min" PYTHON_VERSION="3.4" INSTALL_MKL="false"
-      SIX_VERSION="1.10.0" NUMPY_VERSION="1.8.2" SCIPY_VERSION="0.14.0" COVERAGE="true"
-    # basic Ubuntu build environment
-    - DISTRIB="ubuntu" PYTHON_VERSION="2.7" INSTALL_ATLAS="true"
-      COVERAGE="true"
-    # This environment tests for mpi
-    - DISTRIB="mpi" PYTHON_VERSION="3.5" INSTALL_MKL="false"
-      NUMPY_VERSION="1.15.1" SCIPY_VERSION="1.1.0" SIX_VERSION="1.10.0"
-      MPI_VERSION="2.0.0" COVERAGE="true" MPI="true"
-
-install: source continuous_integration/install.sh
-script: bash continuous_integration/test_script.sh
-after_success:
-    - if [[ "$COVERAGE" == "true" ]]; then coveralls || echo "failed"; fi
-cache: apt

+ 0 - 1
code/elephant/AUTHORS.txt

@@ -1 +0,0 @@
-See doc/authors.rst

+ 1 - 0
code/elephant/CITATION.txt

@@ -0,0 +1 @@
+../../.git/annex/objects/pM/QK/MD5-s150--406acd3d4c454eafea04e056a70e14b8/MD5-s150--406acd3d4c454eafea04e056a70e14b8

+ 0 - 10
code/elephant/LICENSE.txt

@@ -1,10 +0,0 @@
-Copyright (c) 2014-2018, Elephant authors and contributors
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
-
-* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
-* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
-* Neither the names of the copyright holders nor the names of the contributors may be used to endorse or promote products derived from this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

+ 1 - 0
code/elephant/LICENSE.txt

@@ -0,0 +1 @@
+../../.git/annex/objects/4K/GQ/MD5-s1506--7a9d8755791957b707aa0a9e26faf593/MD5-s1506--7a9d8755791957b707aa0a9e26faf593

+ 0 - 8
code/elephant/MANIFEST.in

@@ -1,8 +0,0 @@
-# Include requirements
-include requirement*.txt
-include README.rst
-include LICENSE.txt
-include AUTHORS.txt
-include elephant/test/spike_extraction_test_data.npz
-recursive-include doc *
-prune doc/build

+ 1 - 0
code/elephant/MANIFEST.in

@@ -0,0 +1 @@
+../../.git/annex/objects/K6/Xf/MD5-s583--52fb473cd423b20363501db96b01d042/MD5-s583--52fb473cd423b20363501db96b01d042

+ 1 - 0
code/elephant/PKG-INFO

@@ -0,0 +1 @@
+../../.git/annex/objects/Q1/WX/MD5-s2925--ab354f6fa0ea8caa0a76b6107c79f03e/MD5-s2925--ab354f6fa0ea8caa0a76b6107c79f03e

+ 1 - 0
code/elephant/README.md

@@ -0,0 +1 @@
+../../.git/annex/objects/mM/WG/MD5-s1770--357e859d4bb2713c06ddf80b40373d35/MD5-s1770--357e859d4bb2713c06ddf80b40373d35

+ 0 - 23
code/elephant/README.rst

@@ -1,23 +0,0 @@
-Elephant - Electrophysiology Analysis Toolkit
-=============================================
-
-Elephant is a package for the analysis of neurophysiology data, based on Neo.
-
-Code status
------------
-
-.. image:: https://travis-ci.org/NeuralEnsemble/elephant.png?branch=master
-   :target: https://travis-ci.org/NeuralEnsemble/elephant
-   :alt: Unit Test Status
-.. image:: https://coveralls.io/repos/NeuralEnsemble/elephant/badge.png
-   :target: https://coveralls.io/r/NeuralEnsemble/elephant
-   :alt: Unit Test Coverage
-.. image:: https://requires.io/github/NeuralEnsemble/elephant/requirements.png?branch=master
-   :target: https://requires.io/github/NeuralEnsemble/elephant/requirements/?branch=master
-   :alt: Requirements Status
-.. image:: https://readthedocs.org/projects/elephant/badge/?version=latest
-   :target: https://readthedocs.org/projects/elephant/?badge=latest
-   :alt: Documentation Status
-
-:copyright: Copyright 2014-2018 by the Elephant team, see AUTHORS.txt.
-:license: Modified BSD License, see LICENSE.txt for details.

+ 0 - 135
code/elephant/continuous_integration/install.sh

@@ -1,135 +0,0 @@
-#!/bin/bash
-# Based on a script from scikit-learn
-
-# This script is meant to be called by the "install" step defined in
-# .travis.yml. See http://docs.travis-ci.com/ for more details.
-# The behavior of the script is controlled by environment variabled defined
-# in the .travis.yml in the top level folder of the project.
-
-set -e
-
-# Fix the compilers to workaround avoid having the Python 3.4 build
-# lookup for g++44 unexpectedly.
-export CC=gcc
-export CXX=g++
-
-if [[ "$DISTRIB" == "conda_min" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage \
-        six=$SIX_VERSION numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION
-    source activate testenv
-    conda install libgfortran=1
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-elif [[ "$DISTRIB" == "conda" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage six=$SIX_VERSION \
-        numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION pandas=$PANDAS_VERSION scikit-learn
-    source activate testenv
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-    if [[ "$COVERAGE" == "true" ]]; then
-        pip install coveralls
-    fi
-
-    python -c "import pandas; import os; assert os.getenv('PANDAS_VERSION') == pandas.__version__"
-
-elif [[ "$DISTRIB" == "mpi" ]]; then
-    # Deactivate the travis-provided virtual environment and setup a
-    # conda-based environment instead
-    deactivate
-
-    # Use the miniconda installer for faster download / install of conda
-    # itself
-    wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh \
-        -O miniconda.sh
-    chmod +x miniconda.sh && ./miniconda.sh -b -p $HOME/miniconda
-    export PATH=/home/travis/miniconda/bin:$PATH
-    conda config --set always_yes yes
-    conda update --yes conda
-
-    # Configure the conda environment and put it in the path using the
-    # provided versions
-    conda create -n testenv --yes python=$PYTHON_VERSION pip nose coverage six=$SIX_VERSION \
-        numpy=$NUMPY_VERSION scipy=$SCIPY_VERSION scikit-learn mpi4py=$MPI_VERSION
-    source activate testenv
-
-    if [[ "$INSTALL_MKL" == "true" ]]; then
-        # Make sure that MKL is used
-        conda install --yes --no-update-dependencies mkl
-    else
-        # Make sure that MKL is not used
-        conda remove --yes --features mkl || echo "MKL not installed"
-    fi
-
-    if [[ "$COVERAGE" == "true" ]]; then
-        pip install coveralls
-    fi
-
-elif [[ "$DISTRIB" == "ubuntu" ]]; then
-    # deactivate
-    # Create a new virtualenv using system site packages for numpy and scipy
-    # virtualenv --system-site-packages testenv
-    # source testenv/bin/activate
-    pip install -r requirements.txt    
-fi
-
-if [[ "$COVERAGE" == "true" ]]; then
-    pip install coveralls
-fi
-
-# pip install neo==0.3.3
-wget https://github.com/NeuralEnsemble/python-neo/archive/master.tar.gz
-tar -xzvf master.tar.gz
-pushd python-neo-master
-python setup.py install
-popd
-
-pip install .
-
-if ! [[ "$DISTRIB" == "ubuntu" ]]; then
-    python -c "import numpy; import os; assert os.getenv('NUMPY_VERSION') == numpy.__version__, 'Numpy versions do not match: {0} - {1}'.format(os.getenv('NUMPY_VERSION'), numpy.__version__)"
-    python -c "import scipy; import os; assert os.getenv('SCIPY_VERSION') == scipy.__version__, 'Scipy versions do not match: {0} - {1}'.format(os.getenv('SCIPY_VERSION'), scipy.__version__)"
-fi

+ 0 - 23
code/elephant/continuous_integration/test_script.sh

@@ -1,23 +0,0 @@
-#!/bin/bash
-# Based on a script from scikit-learn
-
-# This script is meant to be called by the "script" step defined in
-# .travis.yml. See http://docs.travis-ci.com/ for more details.
-# The behavior of the script is controlled by environment variables defined
-# in the .travis.yml in the top level folder of the project.
-
-set -e
-
-python --version
-python -c "import numpy; print('numpy %s' % numpy.__version__)"
-python -c "import scipy; print('scipy %s' % scipy.__version__)"
-
-if [[ "$COVERAGE" == "true" ]]; then
-    if [[ "$MPI" == "true" ]]; then
-	mpiexec -n 1 nosetests --with-coverage --cover-package=elephant
-    else
-	nosetests --with-coverage --cover-package=elephant
-    fi
-else
-    nosetests
-fi

+ 0 - 153
code/elephant/doc/Makefile

@@ -1,153 +0,0 @@
-# Makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line.
-SPHINXOPTS    =
-SPHINXBUILD   = sphinx-build
-PAPER         =
-BUILDDIR      = _build
-
-# Internal variables.
-PAPEROPT_a4     = -D latex_paper_size=a4
-PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-# the i18n builder cannot share the environment and doctrees with the others
-I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-
-.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
-
-help:
-	@echo "Please use \`make <target>' where <target> is one of"
-	@echo "  html       to make standalone HTML files"
-	@echo "  dirhtml    to make HTML files named index.html in directories"
-	@echo "  singlehtml to make a single large HTML file"
-	@echo "  pickle     to make pickle files"
-	@echo "  json       to make JSON files"
-	@echo "  htmlhelp   to make HTML files and a HTML help project"
-	@echo "  qthelp     to make HTML files and a qthelp project"
-	@echo "  devhelp    to make HTML files and a Devhelp project"
-	@echo "  epub       to make an epub"
-	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
-	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
-	@echo "  text       to make text files"
-	@echo "  man        to make manual pages"
-	@echo "  texinfo    to make Texinfo files"
-	@echo "  info       to make Texinfo files and run them through makeinfo"
-	@echo "  gettext    to make PO message catalogs"
-	@echo "  changes    to make an overview of all changed/added/deprecated items"
-	@echo "  linkcheck  to check all external links for integrity"
-	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
-
-clean:
-	-rm -rf $(BUILDDIR)/*
-
-html:
-	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
-
-dirhtml:
-	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
-	@echo
-	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
-
-singlehtml:
-	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
-	@echo
-	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
-
-pickle:
-	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
-	@echo
-	@echo "Build finished; now you can process the pickle files."
-
-json:
-	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
-	@echo
-	@echo "Build finished; now you can process the JSON files."
-
-htmlhelp:
-	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
-	@echo
-	@echo "Build finished; now you can run HTML Help Workshop with the" \
-	      ".hhp project file in $(BUILDDIR)/htmlhelp."
-
-qthelp:
-	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
-	@echo
-	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
-	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
-	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Elephant.qhcp"
-	@echo "To view the help file:"
-	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Elephant.qhc"
-
-devhelp:
-	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
-	@echo
-	@echo "Build finished."
-	@echo "To view the help file:"
-	@echo "# mkdir -p $$HOME/.local/share/devhelp/Elephant"
-	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Elephant"
-	@echo "# devhelp"
-
-epub:
-	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
-	@echo
-	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
-
-latex:
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo
-	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
-	@echo "Run \`make' in that directory to run these through (pdf)latex" \
-	      "(use \`make latexpdf' here to do that automatically)."
-
-latexpdf:
-	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
-	@echo "Running LaTeX files through pdflatex..."
-	$(MAKE) -C $(BUILDDIR)/latex all-pdf
-	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-
-text:
-	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
-	@echo
-	@echo "Build finished. The text files are in $(BUILDDIR)/text."
-
-man:
-	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
-	@echo
-	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
-
-texinfo:
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo
-	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
-	@echo "Run \`make' in that directory to run these through makeinfo" \
-	      "(use \`make info' here to do that automatically)."
-
-info:
-	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
-	@echo "Running Texinfo files through makeinfo..."
-	make -C $(BUILDDIR)/texinfo info
-	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
-
-gettext:
-	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
-	@echo
-	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
-
-changes:
-	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
-	@echo
-	@echo "The overview file is in $(BUILDDIR)/changes."
-
-linkcheck:
-	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
-	@echo
-	@echo "Link check complete; look for any errors in the above output " \
-	      "or in $(BUILDDIR)/linkcheck/output.txt."
-
-doctest:
-	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
-	@echo "Testing of doctests in the sources finished, look at the " \
-	      "results in $(BUILDDIR)/doctest/output.txt."

+ 1 - 0
code/elephant/doc/Makefile

@@ -0,0 +1 @@
+../../../.git/annex/objects/8k/Ww/MD5-s5572--c4a12f8b6ba74cceea47e8a9fd972340/MD5-s5572--c4a12f8b6ba74cceea47e8a9fd972340

+ 1 - 0
code/elephant/doc/_templates/autosummary/class.rst

@@ -0,0 +1 @@
+../../../../../.git/annex/objects/1Z/Pp/MD5-s590--725730472bef3c901ac3d2766d3f7945/MD5-s590--725730472bef3c901ac3d2766d3f7945

+ 1 - 0
code/elephant/doc/acknowledgments.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/mj/MM/MD5-s351--ebfb71a1bf24b00d2557d0159673b6a4/MD5-s351--ebfb71a1bf24b00d2557d0159673b6a4

+ 0 - 47
code/elephant/doc/authors.rst

@@ -1,47 +0,0 @@
-.. _authors:
-
-************************
-Authors and contributors
-************************
-
-The following people have contributed code and/or ideas to the current version
-of Elephant. The institutional affiliations are those at the time of the
-contribution, and may not be the current affiliation of a contributor.
-
-* Alper Yegenoglu [1]
-* Andrew Davison [2]
-* Detlef Holstein [2]
-* Eilif Muller [3, 4]
-* Emiliano Torre [1]
-* Espen Hagen [1]
-* Jan Gosmann [6, 8]
-* Julia Sprenger [1]
-* Junji Ito [1]
-* Michael Denker [1]
-* Paul Chorley [1]
-* Pierre Yger [2]
-* Pietro Quaglio [1]
-* Richard Meyes [1]
-* Vahid Rostami [1]
-* Subhasis Ray [5]
-* Robert Pröpper [6]
-* Richard C Gerkin [7]
-* Bartosz Telenczuk [2]
-* Chaitanya Chintaluri [9]
-* Michał Czerwiński [9]
-* Michael von Papen [1]
-* Robin Gutzen [1]
-* Felipe Méndez [10]
-
-1. Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience & Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience, Jülich Research Centre and JARA, Jülich, Germany
-2. Unité de Neurosciences, Information et Complexité, CNRS UPR 3293, Gif-sur-Yvette, France
-3. Electronic Visions Group, Kirchhoff-Institute for Physics, University of Heidelberg, Germany
-4. Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland
-5. NIH–NICHD, Laboratory of Cellular and Synaptic Physiology, Bethesda, Maryland 20892, USA
-6. Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Germany
-7. Arizona State University School of Life Sciences, USA
-8. Computational Neuroscience Research Group (CNRG), Waterloo Centre for Theoretical Neuroscience, Waterloo, Canada
-9. Nencki Institute of Experimental Biology, Warsaw, Poland
-10.  Instituto de Neurobiología, Universidad Nacional Autónoma de México, Mexico City, Mexico
-
-If we've somehow missed you off the list we're very sorry - please let us know.

+ 1 - 0
code/elephant/doc/authors.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/G1/p6/MD5-s2541--2d51cc6845fad7510f6ed8b7d8146b21/MD5-s2541--2d51cc6845fad7510f6ed8b7d8146b21

+ 1 - 0
code/elephant/doc/bib/elephant.bib

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Qx/KP/MD5-s4032--9b892b5b2c865d8cfb12d21017b9f7ad/MD5-s4032--9b892b5b2c865d8cfb12d21017b9f7ad

+ 1 - 0
code/elephant/doc/citation.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/7J/x2/MD5-s1197--32bed887c47c0d826c404f45f7572698/MD5-s1197--32bed887c47c0d826c404f45f7572698

+ 0 - 323
code/elephant/doc/conf.py

@@ -1,323 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Elephant documentation build configuration file, created by
-# sphinx-quickstart on Wed Feb  5 17:11:26 2014.
-#
-# This file is execfile()d with the current directory set to its containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import sys
-import os
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, '..')
-
-# -- General configuration -----------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-#needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be extensions
-# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = [
-    'sphinx.ext.autodoc',
-    'sphinx.ext.doctest',
-    'sphinx.ext.intersphinx',
-    'sphinx.ext.todo',
-    'sphinx.ext.imgmath',
-    'sphinx.ext.viewcode',
-    'numpydoc']
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-#source_encoding = 'utf-8-sig'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = u'Elephant'
-authors = u'Elephant authors and contributors'
-copyright = u'2014-2018, ' + authors
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = '0.6'
-# The full version, including alpha/beta/rc tags.
-release = '0.6.0'
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-#language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-#today = ''
-# Else, today_fmt is used as the format for a strftime call.
-#today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = ['_build']
-
-# The reST default role (used for this markup: `text`) to use for all documents.
-#default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-#add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-#add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-#show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-#modindex_common_prefix = []
-
-
-# -- Options for HTML output ---------------------------------------------
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-html_theme = 'sphinxdoc'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further.  For a list of options available for each theme, see the
-# documentation.
-#html_theme_options = {}
-
-# Add any paths that contain custom themes here, relative to this directory.
-#html_theme_path = []
-
-# The name for this set of Sphinx documents.  If None, it defaults to
-# "<project> v<release> documentation".
-#html_title = None
-
-# A shorter title for the navigation bar.  Default is the same as html_title.
-#html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-html_logo = 'images/elephant_logo_sidebar.png'
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-html_favicon = 'images/elephant_favicon.ico'
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-#html_last_updated_fmt = '%b %d, %Y'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-#html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-#html_additional_pages = {}
-
-# If false, no module index is generated.
-#html_domain_indices = True
-
-# If false, no index is generated.
-html_use_index = True
-
-# If true, the index is split into individual pages for each letter.
-#html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-#html_show_sourcelink = True
-
-# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-#html_show_sphinx = True
-
-# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-#html_show_copyright = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a <link> tag referring to it.  The value of this option must be the
-# base URL from which the finished HTML is served.
-#html_use_opensearch = ''
-
-# This is the file name suffix for HTML files (e.g. ".xhtml").
-#html_file_suffix = None
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'elephantdoc'
-
-
-# -- Options for LaTeX output --------------------------------------------
-
-latex_elements = {
-    # The paper size ('letterpaper' or 'a4paper').
-    #'papersize': 'letterpaper',
-
-    # The font size ('10pt', '11pt' or '12pt').
-    #'pointsize': '10pt',
-
-    # Additional stuff for the LaTeX preamble.
-    #'preamble': '',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [
-    ('index', 'elephant.tex', u'Elephant Documentation',
-     authors, 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-#latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-#latex_use_parts = False
-
-# If true, show page references after internal links.
-#latex_show_pagerefs = False
-
-# If true, show URL addresses after external links.
-#latex_show_urls = False
-
-# Documents to append as an appendix to all manuals.
-#latex_appendices = []
-
-# If false, no module index is generated.
-#latex_domain_indices = True
-
-
-# -- Options for manual page output --------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [
-    ('index', 'elephant', u'Elephant Documentation',
-     [authors], 1)
-]
-
-# If true, show URL addresses after external links.
-#man_show_urls = False
-
-
-# -- Options for Texinfo output ------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-#  dir menu entry, description, category)
-texinfo_documents = [
-    ('index',
-     'Elephant',
-     u'Elephant Documentation',
-     authors,
-     'Elephant',
-     'Elephant is a package for the analysis of neurophysiology data.',
-     'Miscellaneous'),
-]
-
-# Documents to append as an appendix to all manuals.
-#texinfo_appendices = []
-
-# If false, no module index is generated.
-#texinfo_domain_indices = True
-
-# How to display URL addresses: 'footnote', 'no', or 'inline'.
-#texinfo_show_urls = 'footnote'
-
-
-# -- Options for Epub output ---------------------------------------------
-
-# Bibliographic Dublin Core info.
-epub_title = project
-epub_author = authors
-epub_publisher = authors
-epub_copyright = copyright
-
-# The language of the text. It defaults to the language option
-# or en if the language is not set.
-#epub_language = ''
-
-# The scheme of the identifier. Typical schemes are ISBN or URL.
-#epub_scheme = ''
-
-# The unique identifier of the text. This can be a ISBN number
-# or the project homepage.
-#epub_identifier = ''
-
-# A unique identification for the text.
-#epub_uid = ''
-
-# A tuple containing the cover image and cover page html template filenames.
-#epub_cover = ()
-
-# HTML files that should be inserted before the pages created by sphinx.
-# The format is a list of tuples containing the path and title.
-#epub_pre_files = []
-
-# HTML files that should be inserted after the pages created by sphinx.
-# The format is a list of tuples containing the path and title.
-#epub_post_files = []
-
-# A list of files that should not be packed into the epub file.
-#epub_exclude_files = []
-
-# The depth of the table of contents in toc.ncx.
-#epub_tocdepth = 3
-
-# Allow duplicate toc entries.
-#epub_tocdup = True
-
-
-# Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {'http://docs.python.org/': None}
-
-# Use more reliable mathjax source
-mathjax_path = 'https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML'
-
-# Remove the copyright notice from docstrings:
-
-
-def process_docstring_remove_copyright(app, what, name, obj, options, lines):
-    copyright_line = None
-    for i, line in enumerate(lines):
-        if line.startswith(':copyright:'):
-            copyright_line = i
-            break
-    if copyright_line is not None:  # explicit test: line index 0 is falsy
-        while len(lines) > copyright_line:
-            lines.pop()
-
-
-def setup(app):
-    app.connect('autodoc-process-docstring',
-                process_docstring_remove_copyright)
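
The `autodoc-process-docstring` handler above strips everything from the first `:copyright:` line onward by mutating `lines` in place. A quick standalone check of that behaviour (a sketch; the handler ignores its first five arguments, so placeholders suffice)::

    # Docstring lines as Sphinx would pass them to the handler.
    lines = ['Compute a measure.', '', ':copyright: Elephant authors.', ':license: BSD.']
    process_docstring_remove_copyright(None, 'function', 'f', None, None, lines)
    print(lines)  # -> ['Compute a measure.', '']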

+ 1 - 0
code/elephant/doc/conf.py

@@ -0,0 +1 @@
+../../../.git/annex/objects/54/xG/MD5-s11529--919894abd745a96e9811f70e42bfd109/MD5-s11529--919894abd745a96e9811f70e42bfd109

+ 0 - 226
code/elephant/doc/developers_guide.rst

@@ -1,226 +0,0 @@
-=================
-Developers' guide
-=================
-
-These instructions are for developing on a Unix-like platform, e.g. Linux or
-Mac OS X, with the bash shell. If you develop on Windows, please get in touch.
-
-
-Mailing lists
--------------
-
-General discussion of Elephant development takes place in the `NeuralEnsemble Google
-group`_.
-
-Discussion of issues specific to a particular ticket in the issue tracker should
-take place on the tracker.
-
-
-Using the issue tracker
------------------------
-
-If you find a bug in Elephant, please create a new ticket on the `issue tracker`_,
-setting the type to "defect".
-Choose a name that is as specific as possible to the problem you've found, and
-in the description give as much information as you think is necessary to
-recreate the problem. The best way to do this is to create the shortest possible
-Python script that demonstrates the problem, and attach the file to the ticket.
-
-If you have an idea for an improvement to Elephant, create a ticket with type
-"enhancement". If you already have an implementation of the idea, open a pull request.
-
-
-Requirements
-------------
-
-See :doc:`install`. We strongly recommend using virtualenv_ or similar.
-
-
-Getting the source code
------------------------
-
-We use the Git version control system. The best way to contribute is through
-GitHub_. You will first need a GitHub account, and you should then fork the
-repository at https://github.com/NeuralEnsemble/elephant
-(see http://help.github.com/fork-a-repo/).
-
-To get a local copy of the repository::
-
-    $ cd /some/directory
-    $ git clone git@github.com:<username>/elephant.git
-    
-Now you need to make sure that the ``elephant`` package is on your PYTHONPATH.
-You can do this by installing Elephant::
-
-    $ cd elephant
-    $ python setup.py install
-    $ python3 setup.py install
-
-but if you do this, you will have to re-run ``setup.py install`` any time you make
-changes to the code. A better solution is to install Elephant with the *develop* option,
-which avoids reinstalling every time the code changes::
-
-    $ python setup.py develop
-
-or::
-
-    $ pip install -e .
-
-To update to the latest version from the repository::
-
-    $ git pull
-
-
-Running the test suite
-----------------------
-
-Before you make any changes, run the test suite to make sure all the tests pass
-on your system::
-
-    $ cd elephant/test
-
-With Python 2.7 or 3.x::
-
-    $ python -m unittest discover
-    $ python3 -m unittest discover
-
-If you have nose installed::
-
-    $ nosetests
-
-If the run ends with "OK", all the tests passed (or were skipped because
-certain dependencies are not installed); otherwise the report lists the
-tests that failed or produced errors.
-
-
-Writing tests
--------------
-
-You should try to write automated tests for any new code that you add. If you
-have found a bug and want to fix it, first write a test that isolates the bug
-(and that therefore fails with the existing codebase). Then apply your fix and
-check that the test now passes.
-
-To see how well the tests cover the code base, run::
-
-    $ nosetests --with-coverage --cover-package=elephant --cover-erase
-
-
-Working on the documentation
-----------------------------
-
-The documentation is written in `reStructuredText`_, using the `Sphinx`_
-documentation system. To build the documentation::
-
-    $ cd elephant/doc
-    $ make html
-    
-Then open `some/directory/elephant/doc/_build/html/index.html` in your browser.
-Docstrings should conform to the `NumPy docstring standard`_.
-
-To check that all example code in the documentation is correct, run::
-
-    $ make doctest
-
-To check that all URLs in the documentation are correct, run::
-
-    $ make linkcheck
-
-
-Committing your changes
------------------------
-
-Once you are happy with your changes, **run the test suite again to check
-that you have not introduced any new bugs**. Then you can commit them to your
-local repository::
-
-    $ git commit -m 'informative commit message'
-    
-If this is your first commit to the project, please add your name and
-affiliation/employer to :file:`doc/source/authors.rst`
-
-You can then push your changes to your online repository on GitHub::
-
-    $ git push
-    
-Once you think your changes are ready to be included in the main Elephant repository,
-open a pull request on GitHub (see https://help.github.com/articles/using-pull-requests).
-
-
-Python 3
---------
-
-Elephant should work with Python 2.7 and Python 3.
-
-So far, we have managed to write code that works with both Python 2 and 3.
-Mainly this involves avoiding the ``print`` statement (use ``logging.info``
-instead), and putting ``from __future__ import division`` at the beginning of
-any file that uses division.
-
-If in doubt, `Porting to Python 3`_ by Lennart Regebro is an excellent resource.
-
-The most important thing to remember is to run tests with at least one version
-of Python 2 and at least one version of Python 3. There is generally no problem
-in having multiple versions of Python installed on your computer at once: e.g.,
-on Ubuntu Python 2 is available as `python` and Python 3 as `python3`, while
-on Arch Linux Python 2 is `python2` and Python 3 `python`. See `PEP394`_ for
-more on this.
-
-
-Coding standards and style
---------------------------
-
-All code should conform as much as possible to `PEP 8`_, and should run with
-Python 2.7 and 3.2-3.5.
-
-
-Making a release
-----------------
-
-.. TODO: discuss branching/tagging policy.
-
-.. Add a section in /doc/releases/<version>.rst for the release.
-
-First, check that the version string (in :file:`elephant/__init__.py`, :file:`setup.py`,
-:file:`doc/conf.py`, and :file:`doc/install.rst`) is correct.
-
-Second, check that the copyright statement (in :file:`LICENSE.txt`, :file:`README.md`, and :file:`doc/conf.py`) is correct.
-
-To build a source package::
-
-    $ python setup.py sdist
-
-To upload the package to `PyPI`_ (if you have the necessary permissions)::
-
-    $ python setup.py sdist upload
-
-.. should we also distribute via software.incf.org
-
-Finally, tag the release in the Git repository and push it::
-
-    $ git tag <version>
-    $ git push --tags upstream
-
-Here, version should be of the form `vX.Y.Z`.
-
-.. make a release branch
-
-
-
-.. _Python: http://www.python.org
-.. _nose: http://somethingaboutorange.com/mrl/projects/nose/
-.. _neo: http://neuralensemble.org/neo
-.. _coverage: http://nedbatchelder.com/code/coverage/
-.. _`PEP 8`: http://www.python.org/dev/peps/pep-0008/
-.. _`issue tracker`: https://github.com/NeuralEnsemble/elephant/issues
-.. _`Porting to Python 3`: http://python3porting.com/
-.. _`NeuralEnsemble Google group`: http://groups.google.com/group/neuralensemble
-.. _reStructuredText: http://docutils.sourceforge.net/rst.html
-.. _Sphinx: http://sphinx.pocoo.org/
-.. _numpy: http://www.numpy.org/
-.. _quantities: http://pypi.python.org/pypi/quantities
-.. _PEP394: http://www.python.org/dev/peps/pep-0394/
-.. _PyPI: http://pypi.python.org
-.. _GitHub: http://github.com
-.. _`NumPy docstring standard`: https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
-.. _`virtualenv`: https://virtualenv.pypa.io/en/latest/

+ 1 - 0
code/elephant/doc/developers_guide.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/Vx/36/MD5-s2966--7c744f8573f853f259e70b75e6fddee8/MD5-s2966--7c744f8573f853f259e70b75e6fddee8

+ 1 - 0
code/elephant/doc/documentation_guide.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/WW/00/MD5-s1248--8f148a9e475a7a821d013cbb10970501/MD5-s1248--8f148a9e475a7a821d013cbb10970501

+ 0 - 34
code/elephant/doc/environment.yml

@@ -1,34 +0,0 @@
-name: elephant
-dependencies:
-- libgfortran=1.0=0
-- alabaster=0.7.7=py35_0
-- babel=2.2.0=py35_0
-- docutils
-- jinja2=2.8=py35_0
-- markupsafe=0.23=py35_0
-- mkl=11.3.1=0
-- numpy
-- numpydoc
-- openssl=1.0.2g=0
-- pip=8.1.1=py35_0
-- pygments=2.1.1=py35_0
-- python=3.5.1=0
-- pytz=2016.2=py35_0
-- readline=6.2=2
-- scipy=0.17.0=np110py35_0
-- setuptools=20.3=py35_0
-- six=1.10.0=py35_0
-- scikit-learn==0.17.1
-- snowballstemmer=1.2.1=py35_0
-- sphinx=1.3.5=py35_0
-- sphinx_rtd_theme=0.1.9=py35_0
-- sqlite=3.9.2=0
-- tk=8.5.18=0
-- wheel=0.29.0=py35_0
-- xz=5.0.5=1
-- zlib=1.2.8=0
-- pip:
-  - neo
-  - quantities
-  - sphinx-rtd-theme
- 

+ 1 - 0
code/elephant/doc/get_in_touch.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/j5/kJ/MD5-s1075--5462185ac556605e4e053b5d0165db73/MD5-s1075--5462185ac556605e4e053b5d0165db73

BIN
code/elephant/doc/images/elephant_favicon.ico


+ 1 - 0
code/elephant/doc/images/elephant_favicon.ico

@@ -0,0 +1 @@
+../../../../.git/annex/objects/fX/2G/MD5-s2238--b91262d48f8d89cb772d15e49c434510/MD5-s2238--b91262d48f8d89cb772d15e49c434510

BIN
code/elephant/doc/images/elephant_logo.png


+ 1 - 0
code/elephant/doc/images/elephant_logo.png

@@ -0,0 +1 @@
+../../../../.git/annex/objects/W6/q5/MD5-s116686--fec6da7c3a787da51745adc4cfd5f278/MD5-s116686--fec6da7c3a787da51745adc4cfd5f278

BIN
code/elephant/doc/images/elephant_logo_sidebar.png


+ 1 - 0
code/elephant/doc/images/elephant_logo_sidebar.png

@@ -0,0 +1 @@
+../../../../.git/annex/objects/QW/9J/MD5-s12239--074f907bad4daea7d2998cfe8fb73760/MD5-s12239--074f907bad4daea7d2998cfe8fb73760

BIN
code/elephant/doc/images/elephant_structure.png


BIN
code/elephant/doc/images/tutorials/tutorial_1_figure_1.png


BIN
code/elephant/doc/images/tutorials/tutorial_1_figure_2.png


+ 0 - 44
code/elephant/doc/index.rst

@@ -1,44 +0,0 @@
-.. Elephant documentation master file, created by
-   sphinx-quickstart on Thu Aug 22 08:39:42 2013.
-
-
-*********************************************
-Elephant - Electrophysiology Analysis Toolkit
-*********************************************
-
-Synopsis
---------
-    
-
-*Elephant* is a toolbox for the analysis of electrophysiological data based on the Neo_ framework. This manual covers the installation of Elephant in an existing Python environment, several tutorials to help get you started, information on the structure and conventions of the library, a list of modules, and help for future contributors to Elephant.
-
-	
-Table of Contents
------------------
-
-.. toctree::
-    :maxdepth: 1
-
-    overview
-    install
-    tutorial
-    modules
-    developers_guide
-    authors
-    release_notes	       
-
-   
-
-.. Indices and tables
-.. ==================
-
-.. * :ref:`genindex`
-.. * :ref:`modindex`
-.. * :ref:`search`
-
-
-.. _`Neo`: https://github.com/NeuralEnsemble/python-neo
-
-
-.. |date| date::
-.. |time| date:: %H:%M

+ 1 - 0
code/elephant/doc/index.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/fg/fG/MD5-s2195--f3ef8b961d5a06b3c54e8612816918ed/MD5-s2195--f3ef8b961d5a06b3c54e8612816918ed

+ 0 - 107
code/elephant/doc/install.rst

@@ -1,107 +0,0 @@
-.. _install:
-
-****************************
-Prerequisites / Installation
-****************************
-
-Elephant is a pure Python package, so it should be easy to install on any system.
-
-
-Dependencies
-============
-
-The following packages are required to use Elephant:
-    * Python_ >= 2.7
-    * numpy_ >= 1.8.2
-    * scipy_ >= 0.14.0
-    * quantities_ >= 0.10.1
-    * neo_ >= 0.5.0
-
-The following packages are optional in order to run certain parts of Elephant:
-    * For using the pandas_bridge module: 
-        * pandas >= 0.14.1
-    * For using the ASSET analysis:
-        * scikit-learn >= 0.15.1
-    * For building the documentation:
-        * numpydoc >= 0.5
-        * sphinx >= 1.2.2
-    * For running tests:
-        * nose >= 1.3.3
-
-All dependencies can be found on the Python package index (PyPI).
-
-
-Debian/Ubuntu
--------------
-For Debian/Ubuntu, we recommend installing numpy and scipy as system packages using apt-get::
-    
-    $ apt-get install python-numpy python-scipy python-pip python-six
-
-Further packages are found on the Python package index (PyPI) and should be installed with pip_::
-    
-    $ pip install quantities
-    $ pip install neo
-
-We highly recommend installing these packages in a virtual environment provided by virtualenv_ or locally in the home directory using the ``--user`` option of pip (e.g., ``pip install --user quantities``); neither approach requires administrator privileges.
-
-Windows/Mac OS X
-----------------
-
-On non-Linux operating systems we recommend using the Anaconda_ Python distribution, and installing all dependencies in a `Conda environment`_, e.g.::
-
-    $ conda create -n neuroscience python numpy scipy pip six
-    $ source activate neuroscience
-    $ pip install quantities
-    $ pip install neo
-
-
-Installation
-============
-
-Automatic installation from PyPI
---------------------------------
-
-The easiest way to install Elephant is via pip_::
-
-    $ pip install elephant    
-
-
-Manual installation from PyPI
------------------------------
-
-To download and install manually, download the latest package from http://pypi.python.org/pypi/elephant
-
-Then::
-
-    $ tar xzf elephant-0.6.0.tar.gz
-    $ cd elephant-0.6.0
-    $ python setup.py install
-    
-or::
-
-    $ python3 setup.py install
-    
-depending on which version of Python you are using.
-
-
-Installation of the latest build from source
---------------------------------------------
-
-To install the latest version of Elephant from the Git repository::
-
-    $ git clone git://github.com/NeuralEnsemble/elephant.git
-    $ cd elephant
-    $ python setup.py install
-
-
-
-.. _`Python`: http://python.org/
-.. _`numpy`: http://www.numpy.org/
-.. _`scipy`: http://scipy.org/scipylib/
-.. _`quantities`: http://pypi.python.org/pypi/quantities
-.. _`neo`: http://pypi.python.org/pypi/neo
-.. _`pip`: http://pypi.python.org/pypi/pip
-.. _`virtualenv`: https://virtualenv.pypa.io/en/latest/
-.. _`this snapshot`: https://github.com/NeuralEnsemble/python-neo/archive/snapshot-20150821.zip
-.. _Anaconda: http://continuum.io/downloads
-.. _`Conda environment`: http://conda.pydata.org/docs/faq.html#creating-new-environments
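
Whichever installation route is used, a quick import check confirms that Elephant and its core dependencies resolve (a minimal sketch; any analysis module would do as the final import)::

    # If these imports succeed, the installation is usable.
    import numpy
    import quantities
    import neo
    import elephant
    import elephant.statistics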

+ 1 - 0
code/elephant/doc/install.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/WW/P3/MD5-s3780--24de5e0d1dc3d2d73488afab86f3418f/MD5-s3780--24de5e0d1dc3d2d73488afab86f3418f

+ 1 - 0
code/elephant/doc/maintainers_guide.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/Z4/GG/MD5-s4780--ba87736b2da033c21f9461c2c58a7dc3/MD5-s4780--ba87736b2da033c21f9461c2c58a7dc3

+ 0 - 190
code/elephant/doc/make.bat

@@ -1,190 +0,0 @@
-@ECHO OFF
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set BUILDDIR=_build
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
-set I18NSPHINXOPTS=%SPHINXOPTS% .
-if NOT "%PAPER%" == "" (
-	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
-	set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
-	:help
-	echo.Please use `make ^<target^>` where ^<target^> is one of
-	echo.  html       to make standalone HTML files
-	echo.  dirhtml    to make HTML files named index.html in directories
-	echo.  singlehtml to make a single large HTML file
-	echo.  pickle     to make pickle files
-	echo.  json       to make JSON files
-	echo.  htmlhelp   to make HTML files and a HTML help project
-	echo.  qthelp     to make HTML files and a qthelp project
-	echo.  devhelp    to make HTML files and a Devhelp project
-	echo.  epub       to make an epub
-	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
-	echo.  text       to make text files
-	echo.  man        to make manual pages
-	echo.  texinfo    to make Texinfo files
-	echo.  gettext    to make PO message catalogs
-	echo.  changes    to make an overview of all changed/added/deprecated items
-	echo.  linkcheck  to check all external links for integrity
-	echo.  doctest    to run all doctests embedded in the documentation if enabled
-	goto end
-)
-
-if "%1" == "clean" (
-	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
-	del /q /s %BUILDDIR%\*
-	goto end
-)
-
-if "%1" == "html" (
-	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
-	goto end
-)
-
-if "%1" == "dirhtml" (
-	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
-	goto end
-)
-
-if "%1" == "singlehtml" (
-	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
-	goto end
-)
-
-if "%1" == "pickle" (
-	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the pickle files.
-	goto end
-)
-
-if "%1" == "json" (
-	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the JSON files.
-	goto end
-)
-
-if "%1" == "htmlhelp" (
-	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
-	goto end
-)
-
-if "%1" == "qthelp" (
-	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
-	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Elephant.qhcp
-	echo.To view the help file:
-	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Elephant.qhc
-	goto end
-)
-
-if "%1" == "devhelp" (
-	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished.
-	goto end
-)
-
-if "%1" == "epub" (
-	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub file is in %BUILDDIR%/epub.
-	goto end
-)
-
-if "%1" == "latex" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "text" (
-	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The text files are in %BUILDDIR%/text.
-	goto end
-)
-
-if "%1" == "man" (
-	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The manual pages are in %BUILDDIR%/man.
-	goto end
-)
-
-if "%1" == "texinfo" (
-	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
-	goto end
-)
-
-if "%1" == "gettext" (
-	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
-	goto end
-)
-
-if "%1" == "changes" (
-	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.The overview file is in %BUILDDIR%/changes.
-	goto end
-)
-
-if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
-	goto end
-)
-
-if "%1" == "doctest" (
-	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
-	goto end
-)
-
-:end

+ 1 - 0
code/elephant/doc/make.bat

@@ -0,0 +1 @@
+../../../.git/annex/objects/3M/52/MD5-s5100--997eccc0d9764057341b3b340918e6f7/MD5-s5100--997eccc0d9764057341b3b340918e6f7

+ 0 - 26
code/elephant/doc/modules.rst

@@ -1,26 +0,0 @@
-****************************
-Function Reference by Module
-****************************
-
-.. toctree::
-   :maxdepth: 1
-
-   reference/statistics
-   reference/signal_processing
-   reference/spectral
-   reference/current_source_density
-   reference/kernels
-   reference/spike_train_dissimilarity
-   reference/sta
-   reference/spike_train_correlation
-   reference/unitary_event_analysis
-   reference/cubic
-   reference/asset
-   reference/cell_assembly_detection
-   reference/spike_train_generation
-   reference/spike_train_surrogates
-   reference/conversion
-   reference/neo_tools
-   reference/pandas_bridge
-
-

+ 1 - 0
code/elephant/doc/modules.rst

@@ -0,0 +1 @@
+../../../.git/annex/objects/36/3k/MD5-s772--1580ace450f6439c202125e85cd7da07/MD5-s772--1580ace450f6439c202125e85cd7da07

+ 0 - 113
code/elephant/doc/overview.rst

@@ -1,113 +0,0 @@
-********
-Overview
-********
-
-What is Elephant?
-=====================
-
-Modern recording technologies yield massively parallel data streams, and the advanced analysis methods needed to explore such rich data sets add further complexity. As a result, the need for more reproducible research in the neurosciences can no longer be ignored. Reproducibility rests on building workflows that allow users to transparently trace their analysis steps from data acquisition to final publication. A key component of such a workflow is a set of well-defined analysis methods to perform the data processing.
-
-Elephant (Electrophysiology Analysis Toolkit) is an emerging open-source, community-centered library for the analysis of electrophysiological data in the Python programming language. The focus of Elephant is on generic analysis functions for spike train data and time series recordings from electrodes, such as local field potentials (LFP) or intracellular voltages. In addition to providing a common platform for analysis codes from different laboratories, the Elephant project aims to provide a consistent and homogeneous analysis framework that is built on a modular foundation. Elephant is the direct successor to NeuroTools [#f1]_ and maintains ties to complementary projects such as OpenElectrophy [#f2]_ and spykeviewer [#f3]_.
-
-* Analysis functions use consistent data formats and conventions as input arguments and outputs. Electrophysiological data will generally be represented by data models defined by the Neo_ [#f4]_ project.
-* Library functions are based on a set of core functions for commonly used operations, such as sliding windows, converting data to alternate representations, or the generation of surrogates for hypothesis testing.
-* Accepted analysis functions must be equipped with a range of unit tests to ensure a high standard of code quality.
-
-
-Elephant library structure
-==========================
-
-Elephant is a standard Python package structured into a number of submodules. The following sketch shows the layout of the Elephant library (0.3.0 release).
-
-.. figure:: images/elephant_structure.png
-    :width: 400 px
-    :align: center
-    :figwidth: 80 %
-    
-    Modules of the Elephant library. Modules containing analysis functions are colored in blue shades, core functionality in green shades.
-   
-
-Conceptually, modules of the Elephant library can be divided into those related to a specific category of analysis methods and supporting modules that provide a layer of core utility functions. All modules are accessible directly at the top level of the Elephant package in the ``elephant`` subdirectory to avoid unnecessary hierarchical clutter. Unit tests for all functions are located in the ``elephant/test`` subdirectory and are named according to the module name. This documentation is located in the top-level ``doc`` subdirectory.
-
-In the following we provide a brief overview of the modules available in Elephant.
-
-
-Analysis modules
-----------------
-
-``statistics``
-^^^^^^^^^^^^^^
-Statistical measures of spike trains (e.g., Fano factor) and functions to estimate firing rates.
-
-``signal_processing``
-^^^^^^^^^^^^^^^^^^^^^
-Basic processing procedures for analog signals (e.g., performing a z-score of a signal, or filtering a signal).
-
-``spectral``
-^^^^^^^^^^^^
-Identification of spectral properties in analog signals (e.g., the power spectrum)
-
-``kernels``
-^^^^^^^^^^^^^^
-A class that provides representations for commonly used kernel functions.
-
-``spike_train_dissimilarity_measures``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Spike train metrics (e.g., the Victor-Purpura measure) to measure the (dis-)similarity between spike trains.
-
-``sta``
-^^^^^^^
-Calculate the spike-triggered average and spike-field-coherence of an analog signal.
-
-``spike_train_correlation``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Functions to quantify correlations between sets of spike trains.
-
-``unitary_event_analysis``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-Determine periods where neurons synchronize their activity beyond chance level.
-
-``cubic``
-^^^^^^^^^
-Implements the method Cumulant Based Inference of higher-order Correlation (CuBIC) to detect the presence of higher-order correlations in massively parallel data, based on the complexity distribution of the data.
-
-``asset``
-^^^^^^^^^
-Implementation of the Analysis of Sequences of Synchronous EvenTs (ASSET) to detect, in particular, synfire-chain-like activity.
-
-``csd``
-^^^^^^^
-Inverse and standard methods to estimate the current source density (CSD) of laminar LFP recordings.
-
-
-Supporting modules
-------------------
-
-``conversion``
-^^^^^^^^^^^^^^
-This module converts standard data representations (e.g., a spike train stored as a Neo ``SpikeTrain`` object) into other representations that are useful for performing calculations on the data. An example is the representation of a spike train as a sequence of 0-1 values (*binned spike train*); see the sketch at the end of this overview.
-
-``spike_train_generation``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-This module provides functions to generate spike trains according to prescribed stochastic models (e.g., a Poisson spike train). 
-
-``spike_train_surrogates``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-This module provides functionality to generate surrogate spike trains from given spike train data. This is particularly useful in the context of determining the significance of analysis results via Monte-Carlo methods.
-
-``neo_tools``
-^^^^^^^^^^^^^
-Provides useful convenience functions to work efficiently with Neo objects.
-
-``pandas_bridge``
-^^^^^^^^^^^^^^^^^
-Bridge from Elephant to the pandas library.
-
-
-References
-==========
-.. [#f1]  http://neuralensemble.org/NeuroTools/
-.. [#f2]  http://neuralensemble.org/OpenElectrophy/
-.. [#f3]  http://spykeutils.readthedocs.org/en/0.4.1/
-.. [#f4]  Garcia et al. (2014) Front.~Neuroinform. 8:10
-.. _`Neo`: http://neuralensemble.org/neo/
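
To make the binned-spike-train example from the ``conversion`` description concrete, a minimal sketch (assuming the ``BinnedSpikeTrain`` class of ``elephant.conversion``; constructor argument names may differ between releases)::

    import neo
    import quantities as pq
    from elephant.conversion import BinnedSpikeTrain

    st = neo.SpikeTrain([1.0, 4.5, 9.0] * pq.ms, t_start=0 * pq.ms, t_stop=10 * pq.ms)
    binned = BinnedSpikeTrain(st, binsize=2 * pq.ms)
    # One row per spike train; each entry marks whether the bin contains a spike.
    print(binned.to_bool_array().astype(int))  # [[1 0 1 0 1]]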

+ 1 - 0
code/elephant/doc/reference/_spike_train_processing.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/MZ/3z/MD5-s182--82fa15302738cb8482ee11d3fcaae07c/MD5-s182--82fa15302738cb8482ee11d3fcaae07c

+ 0 - 6
code/elephant/doc/reference/asset.rst

@@ -1,6 +0,0 @@
-===================================================
-Analysis of Sequences of Synchronous EvenTs (ASSET) 
-===================================================
-
-.. automodule:: elephant.asset
-   :members:

+ 1 - 0
code/elephant/doc/reference/asset.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/qv/Qv/MD5-s310--9be984cbafd3118658fcdd3e73742294/MD5-s310--9be984cbafd3118658fcdd3e73742294

+ 1 - 0
code/elephant/doc/reference/causality.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/38/0W/MD5-s101--f0d20b4a0b246574e3636b87c855b57b/MD5-s101--f0d20b4a0b246574e3636b87c855b57b

+ 1 - 0
code/elephant/doc/reference/cell_assembly_detection.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Gk/jp/MD5-s159--5951921a7287b8c1860214cf5069e94d/MD5-s159--5951921a7287b8c1860214cf5069e94d

+ 1 - 0
code/elephant/doc/reference/change_point_detection.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Jk/78/MD5-s176--d3d09a2d47c3c04c20082638f68c14b9/MD5-s176--d3d09a2d47c3c04c20082638f68c14b9

+ 0 - 6
code/elephant/doc/reference/conversion.rst

@@ -1,6 +0,0 @@
-=======================
-Data format conversions
-=======================
-
-.. automodule:: elephant.conversion
-   :members:

+ 1 - 0
code/elephant/doc/reference/conversion.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/kk/KG/MD5-s127--8e466ef50d52a4e24dca9f83ce28d6de/MD5-s127--8e466ef50d52a4e24dca9f83ce28d6de

+ 0 - 6
code/elephant/doc/reference/cubic.rst

@@ -1,6 +0,0 @@
-============================================================
-Cumulant Based Inference of higher-order Correlation (CuBIC) 
-============================================================
-
-.. automodule:: elephant.cubic
-   :members:

+ 1 - 0
code/elephant/doc/reference/cubic.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/2p/jQ/MD5-s228--df0773b1559944480bc51f06348c3d91/MD5-s228--df0773b1559944480bc51f06348c3d91

+ 0 - 6
code/elephant/doc/reference/current_source_density.rst

@@ -1,6 +0,0 @@
-===============================
-Current source density analysis
-===============================
-
-.. automodule:: elephant.current_source_density
-   :members:

+ 1 - 0
code/elephant/doc/reference/current_source_density.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/wz/K5/MD5-s158--8da51976bd0dad9aef5f4dfd893b5723/MD5-s158--8da51976bd0dad9aef5f4dfd893b5723

+ 1 - 0
code/elephant/doc/reference/gpfa.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Kf/9V/MD5-s156--e1108114fac15e82227346f86b7fa85a/MD5-s156--e1108114fac15e82227346f86b7fa85a

+ 0 - 6
code/elephant/doc/reference/kernels.rst

@@ -1,6 +0,0 @@
-=======
-Kernels
-=======
-
-.. automodule:: elephant.kernels
-   :members:

+ 1 - 0
code/elephant/doc/reference/kernels.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/11/4f/MD5-s58--9f6c75995fa82e8959bf80f91b3fe9f7/MD5-s58--9f6c75995fa82e8959bf80f91b3fe9f7

+ 0 - 6
code/elephant/doc/reference/neo_tools.rst

@@ -1,6 +0,0 @@
-===========================================
-Utility functions to manipulate Neo objects
-===========================================
-
-.. automodule:: elephant.neo_tools
-   :members:

+ 1 - 0
code/elephant/doc/reference/neo_tools.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/63/3m/MD5-s115--838c933f8336e8466b20d0e085564c66/MD5-s115--838c933f8336e8466b20d0e085564c66

+ 0 - 6
code/elephant/doc/reference/pandas_bridge.rst

@@ -1,6 +0,0 @@
-============================
-Bridge to the pandas library
-============================
-
-.. automodule:: elephant.pandas_bridge
-   :members:

+ 1 - 0
code/elephant/doc/reference/pandas_bridge.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/gV/09/MD5-s140--b41b0dbfc4236d9bac78443367c48f68/MD5-s140--b41b0dbfc4236d9bac78443367c48f68

+ 1 - 0
code/elephant/doc/reference/parallel.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/0p/x6/MD5-s62--1c6368f16636e8e23b6c86ded8ef1d73/MD5-s62--1c6368f16636e8e23b6c86ded8ef1d73

+ 1 - 0
code/elephant/doc/reference/phase_analysis.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/8p/z9/MD5-s132--b82728acc4c9f495d14691c99c3ab3f1/MD5-s132--b82728acc4c9f495d14691c99c3ab3f1

+ 0 - 13
code/elephant/doc/reference/signal_processing.rst

@@ -1,13 +0,0 @@
-=================
-Signal processing
-=================
-
-.. testsetup::
-
-   import numpy as np
-   from quantities import mV, s, Hz
-   import neo
-   from elephant.signal_processing import zscore
-
-.. automodule:: elephant.signal_processing
-   :members:

+ 1 - 0
code/elephant/doc/reference/signal_processing.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/MM/Jf/MD5-s249--c65b01e17907309b6aae8ff4866afa94/MD5-s249--c65b01e17907309b6aae8ff4866afa94

+ 1 - 0
code/elephant/doc/reference/spade.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/x3/F2/MD5-s173--3483035bed5ceafaf6d2a7a8ec4d377f/MD5-s173--3483035bed5ceafaf6d2a7a8ec4d377f

+ 0 - 6
code/elephant/doc/reference/spectral.rst

@@ -1,6 +0,0 @@
-=================
-Spectral analysis
-=================
-
-.. automodule:: elephant.spectral
-   :members:

+ 1 - 0
code/elephant/doc/reference/spectral.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/xF/6Q/MD5-s102--d66cc3efdef18856eb59bcbf78fcf50e/MD5-s102--d66cc3efdef18856eb59bcbf78fcf50e

+ 0 - 13
code/elephant/doc/reference/spike_train_correlation.rst

@@ -1,13 +0,0 @@
-=======================
-Spike train correlation
-=======================
-
-.. testsetup::
-
-   from quantities import Hz, s, ms
-   from elephant.spike_train_correlation import corrcoef
-
-
-.. automodule:: elephant.spike_train_correlation
-   :members:
-   :exclude-members: cch, sttc
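
The ``testsetup`` above imports ``corrcoef``; a hedged usage sketch combining it with the binned representation from the conversion module (illustrative only)::

    import quantities as pq
    from elephant.conversion import BinnedSpikeTrain
    from elephant.spike_train_correlation import corrcoef
    from elephant.spike_train_generation import homogeneous_poisson_process

    # Two independent Poisson spike trains over the same interval.
    sts = [homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=10 * pq.s)
           for _ in range(2)]
    cc_matrix = corrcoef(BinnedSpikeTrain(sts, binsize=5 * pq.ms))
    print(cc_matrix)  # 2x2 matrix of pairwise correlation coefficients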

+ 1 - 0
code/elephant/doc/reference/spike_train_correlation.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Wx/v4/MD5-s233--5310d034effd9b19f213699c5751d780/MD5-s233--5310d034effd9b19f213699c5751d780

+ 0 - 8
code/elephant/doc/reference/spike_train_dissimilarity.rst

@@ -1,8 +0,0 @@
-=================================================
-Spike train dissimilarity / spike train synchrony
-=================================================
-
-
-.. automodule:: elephant.spike_train_dissimilarity
-   :members:
-

+ 1 - 0
code/elephant/doc/reference/spike_train_dissimilarity.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/gg/Fx/MD5-s131--dbd61fc54190a2661643c6658747ee3e/MD5-s131--dbd61fc54190a2661643c6658747ee3e

+ 0 - 11
code/elephant/doc/reference/spike_train_generation.rst

@@ -1,11 +0,0 @@
-=================================
-Stochastic spike train generation
-=================================
-
-.. testsetup::
-
-   from elephant.spike_train_generation import homogeneous_poisson_process, homogeneous_gamma_process
-
-
-.. automodule:: elephant.spike_train_generation
-   :members:

+ 1 - 0
code/elephant/doc/reference/spike_train_generation.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/GF/4J/MD5-s284--70fbd440defb172755b4b2f26a1768b6/MD5-s284--70fbd440defb172755b4b2f26a1768b6

+ 0 - 12
code/elephant/doc/reference/spike_train_surrogates.rst

@@ -1,12 +0,0 @@
-======================
-Spike train surrogates
-======================
-
-
-.. testsetup::
-
-   from elephant.spike_train_surrogates import shuffle_isis, randomise_spikes, jitter_spikes, dither_spikes, dither_spike_train
-
-
-.. automodule:: elephant.spike_train_surrogates
-   :members:

+ 1 - 0
code/elephant/doc/reference/spike_train_surrogates.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/49/WK/MD5-s278--bc3adc9949200f937d49eeff3f08706d/MD5-s278--bc3adc9949200f937d49eeff3f08706d

+ 1 - 0
code/elephant/doc/reference/spike_train_synchrony.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/Qv/kK/MD5-s241--6581d04418081afc61056f10ebdd43c7/MD5-s241--6581d04418081afc61056f10ebdd43c7

+ 0 - 18
code/elephant/doc/reference/sta.rst

@@ -1,18 +0,0 @@
-=======================
-Spike-triggered average
-=======================
-
-.. testsetup::
-
-   import numpy as np
-   import neo
-   from quantities import ms
-   from elephant.sta import spike_triggered_average
-
-   signal1 = np.arange(1000.0)
-   signal2 = np.arange(1, 1001.0)
-   spiketrain1 = neo.SpikeTrain([10.12, 20.23, 30.45], units=ms, t_stop=50*ms)
-   spiketrain2 = neo.SpikeTrain([10.34, 20.56, 30.67], units=ms, t_stop=50*ms)
-
-.. automodule:: elephant.sta
-   :members:
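
Building on the ``testsetup`` above, a hedged sketch of an actual call (the signal is wrapped as a ``neo.AnalogSignal``, which ``spike_triggered_average()`` expects; values are illustrative)::

    import neo
    import numpy as np
    from quantities import Hz, ms, mV
    from elephant.sta import spike_triggered_average

    signal = neo.AnalogSignal(np.arange(1000.0).reshape(-1, 1), units=mV,
                              sampling_rate=1000 * Hz)
    spiketrain = neo.SpikeTrain([10.12, 20.23, 30.45] * ms, t_stop=50 * ms)
    # Average the signal in a +/- 5 ms window around each spike.
    sta = spike_triggered_average(signal, spiketrain, (-5 * ms, 5 * ms))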

+ 1 - 0
code/elephant/doc/reference/sta.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/9G/9p/MD5-s115--8e271c5e745eb4910886aee12a3c0d92/MD5-s115--8e271c5e745eb4910886aee12a3c0d92

+ 0 - 6
code/elephant/doc/reference/statistics.rst

@@ -1,6 +0,0 @@
-======================
-Spike train statistics
-======================
-
-.. automodule:: elephant.statistics
-   :members:

+ 1 - 0
code/elephant/doc/reference/statistics.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/MM/4q/MD5-s118--bc119628f8fe31386c28337e35c28abe/MD5-s118--bc119628f8fe31386c28337e35c28abe

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.AlphaKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/Fw/7w/MD5-s546--1563997a97f7d13b3b104ed64f5c9612/MD5-s546--1563997a97f7d13b3b104ed64f5c9612

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.EpanechnikovLikeKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/33/W7/MD5-s656--0a39e567ef299963b1eace0558b5e74d/MD5-s656--0a39e567ef299963b1eace0558b5e74d

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.ExponentialKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/m3/9v/MD5-s606--6d8a488e110e81c2349e0dd5a53d63df/MD5-s606--6d8a488e110e81c2349e0dd5a53d63df

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.GaussianKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/kZ/6q/MD5-s576--c4f09b2c11979379f06d79493e0f73af/MD5-s576--c4f09b2c11979379f06d79493e0f73af

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.LaplacianKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/Q0/mg/MD5-s586--03a46e98341569c6c60496e2516a9a37/MD5-s586--03a46e98341569c6c60496e2516a9a37

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.RectangularKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/Z4/pv/MD5-s606--7213680e4fecb342f30ab09f8a88114b/MD5-s606--7213680e4fecb342f30ab09f8a88114b

+ 1 - 0
code/elephant/doc/reference/toctree/kernels/elephant.kernels.TriangularKernel.rst

@@ -0,0 +1 @@
+../../../../../../.git/annex/objects/kV/zz/MD5-s596--a4e75caa26a81f172bac1d86a3a8ca28/MD5-s596--a4e75caa26a81f172bac1d86a3a8ca28

+ 0 - 6
code/elephant/doc/reference/unitary_event_analysis.rst

@@ -1,6 +0,0 @@
-===========================
-Unitary Event (UE) Analysis
-===========================
-
-.. automodule:: elephant.unitary_event_analysis
-   :members:

+ 1 - 0
code/elephant/doc/reference/unitary_event_analysis.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/KK/XJ/MD5-s421--cb6d20ada4cab45c8f3941e9e0b620bb/MD5-s421--cb6d20ada4cab45c8f3941e9e0b620bb

+ 1 - 0
code/elephant/doc/reference/waveform_features.rst

@@ -0,0 +1 @@
+../../../../.git/annex/objects/JM/GF/MD5-s111--8f6d3a09a9132e3d5a09493b67d8057b/MD5-s111--8f6d3a09a9132e3d5a09493b67d8057b

+ 0 - 172
code/elephant/doc/release_notes.rst

@@ -1,172 +0,0 @@
-*************
-Release Notes
-*************
-
-Elephant 0.6.0 release notes
-============================
-October 12th 2018
-
-New functions
--------------
-* `cell_assembly_detection` module
-    * New function to detect higher-order correlation structures, such as patterns in parallel spike trains, based on Russo et al., 2017.
-* **wavelet_transform()** function in the `signal_processing.py` module
-    * Function for computing the wavelet transform of a given time series, based on Le van Quyen et al. (2001)
-
-Other changes
--------------
-* Switched to multiple `requirements.txt` files, which are read directly into `setup.py`
-* `instantaneous_rate()` now accepts a list of spike trains (see the sketch below)
-* Minor bug fixes
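
A minimal sketch of the new list input noted above (illustrative only; parameter names ``sampling_period`` and ``kernel`` assumed per the 0.6 API)::

    import quantities as pq
    from elephant.kernels import GaussianKernel
    from elephant.spike_train_generation import homogeneous_poisson_process
    from elephant.statistics import instantaneous_rate

    # Three independent Poisson spike trains as example input.
    trains = [homogeneous_poisson_process(rate=10 * pq.Hz, t_stop=1 * pq.s)
              for _ in range(3)]
    rate = instantaneous_rate(trains, sampling_period=10 * pq.ms,
                              kernel=GaussianKernel(sigma=20 * pq.ms))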
-
-
-Elephant 0.5.0 release notes
-============================
-April 4th 2018
-
-New functions
--------------
-* `change_point_detection` module:
-    * New function to detect changes in the firing rate
-* `spike_train_correlation` module:
-    * New function to calculate the spike time tiling coefficient
-* `phase_analysis` module:
-    * New function to extract spike-triggered phases of an AnalogSignal
-* `unitary_event_analysis` module:
-    * Added a new unit test for the UE function that verifies the method against data from a recent [Re]Science publication
-  
-Other changes
--------------
-* Minor bug fixes
-  
-  
-Elephant 0.4.3 release notes
-============================
-March 2nd 2018
-
-Other changes
--------------
-* Bug fixes in `spade` module:
-    * Fixed an incompatibility with the latest version of an external library
-
-  
-Elephant 0.4.2 release notes
-============================
-March 1st 2018
-
-New functions
--------------
-* `spike_train_generation` module:
-    * **inhomogeneous_poisson()** function
-* Modules for Spatio-Temporal Pattern Detection (SPADE) in `spade_src`:
-    * Module SPADE: `spade.py`
-* Module `statistics.py`:
-    * Added CV2 (coefficient of variation for non-stationary time series)
-* Module `spike_train_correlation.py`:
-    * Added normalization in **cross_correlation_histogram()** (CCH); a sketch of CCH usage appears at the end of this diff
-
-Other changes
--------------
-* Adapted `setup.py` to automatically install the SPADE modules, including the compiled C extension `fim.so`
-* Included testing environment for MPI in `travis.yml`
-* Changed function arguments in `current_source_density.py` to accept a single `neo.AnalogSignal` instead of a list of `neo.AnalogSignal` objects
-* Fixes to travis and setup configuration files
-* Fixed bug in ISI function `isi()`, `statistics.py` module
-* Fixed bug in `dither_spikes()`, `spike_train_surrogates.py`
-* Minor bug fixes
- 
- 
-Elephant 0.4.1 release notes
-============================
-March 23rd 2017
-
-Other changes
--------------
-* Fix in `setup.py` to correctly import the current source density module
-
-
-Elephant 0.4.0 release notes
-============================
-March 22nd 2017
-
-New functions
--------------
-* `spike_train_generation` module:
-    * peak detection: **peak_detection()**
-* Modules for Current Source Density: `current_source_density_src`
-    * Module Current Source Density: `KCSD.py`
-    * Module for Inverse Current Source Density: `icsd.py`
-
-API changes
------------
-* Interoperability between Neo 0.5.0 and Elephant
-    * Elephant has adapted its functions to the changes in Neo 0.5.0;
-      most of the functionality behaves as before
-    * See Neo documentation for recent changes: http://neo.readthedocs.io/en/latest/whatisnew.html
-
-Other changes
--------------
-* Fixes to travis and setup configuration files.
-* Minor bug fixes.
-* Added module `six` for Python 2.7 backwards compatibility
-
-
-Elephant 0.3.0 release notes
-============================
-April 12th 2016
-
-New functions
--------------
-* `spike_train_correlation` module:
-    * cross correlation histogram: **cross_correlation_histogram()**
-* `spike_train_generation` module:
-    * single interaction process (SIP): **single_interaction_process()**
-    * compound Poisson process (CPP): **compound_poisson_process()**
-* `signal_processing` module:
-    * analytic signal: **hilbert()**
-* `sta` module:
-    * spike field coherence: **spike_field_coherence()**
-* Module to represent kernels: `kernels` module
-* Spike train metrics / dissimilarity / synchrony measures: `spike_train_dissimilarity` module
-* Unitary Event (UE) analysis: `unitary_event_analysis` module
-* Analysis of Sequences of Synchronous EvenTs (ASSET): `asset` module
-
-API changes
------------
-* Function **instantaneous_rate()** now uses kernels as objects defined in the `kernels` module; the previous implementation based on `make_kernel()` is deprecated but temporarily still available as `oldfct_instantaneous_rate()` (a usage sketch follows this diff listing).
-
-Other changes
--------------
-* Fixes to travis and readthedocs configuration files.
-
-
-Elephant 0.2.1 release notes
-============================
-February 18th 2016
-
-Other changes
--------------
-Minor bug fixes.
-
-
-Elephant 0.2.0 release notes
-============================
-September 22nd 2015
-
-New functions
--------------
-* Added covariance function **covariance()** in the `spike_train_correlation` module
-* Added complexity pdf **complexity_pdf()** in the `statistics` module
-* Added spike train extraction from analog signals via threshold detection in the `spike_train_generation` module
-* Added **coherence()** function for analog signals in the `spectral` module
-* Added **Cumulant Based Inference of higher-order Correlation (CuBIC)** in the `cubic` module for correlation analysis of parallel recorded spike trains
-
-API changes
------------
-* **Optimized kernel bandwidth** in the `rate_estimation` function: calculates the optimal kernel width when the kernel-width parameter is specified as `auto`
-
-Other changes
--------------
-* **Optimized creation of sparse matrices**: sparse matrices inside the `BinnedSpikeTrain` class are now created faster
-* Added **Izhikevich neuron simulator** in the `make_spike_extraction_test_data` module
-* Minor improvements to the test and continuous integration infrastructure

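The deleted notes above describe two related API points: since 0.3.0, `instantaneous_rate()` takes kernel objects from the `kernels` module, and since 0.2.0 passing `auto` selects an optimized kernel bandwidth. A minimal sketch of both, assuming the vendored Elephant with `quantities` installed (parameter spellings may differ in later releases):

```python
import quantities as pq
from elephant.spike_train_generation import homogeneous_poisson_process
from elephant.statistics import instantaneous_rate
from elephant.kernels import GaussianKernel

st = homogeneous_poisson_process(rate=20 * pq.Hz, t_stop=2 * pq.s)

# Kernel passed as an object from the `kernels` module (the 0.3.0 change);
# sigma sets the smoothing bandwidth.
rate = instantaneous_rate(st, sampling_period=1 * pq.ms,
                          kernel=GaussianKernel(sigma=50 * pq.ms))

# kernel='auto' instead triggers the optimized bandwidth selection
# mentioned in the 0.2.0 notes.
rate_auto = instantaneous_rate(st, sampling_period=1 * pq.ms, kernel='auto')
print(rate.shape, rate_auto.shape)  # one sample per sampling_period step
```

Both calls return a `neo.AnalogSignal`, so the result plugs directly into the rest of the Neo-based pipeline.
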
+ 0 - 0
code/elephant/doc/release_notes.rst


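The 0.3.0 and 0.4.2 notes deleted above reference `cross_correlation_histogram()` and a normalization option added to it. A hedged sketch of its use on binned spike trains follows; `binsize` and the `cross_corr_coef` flag are assumptions based on the pre-0.8 API (later releases spell the former `bin_size`) and should be checked against the vendored sources.

```python
import quantities as pq
from elephant.spike_train_generation import homogeneous_poisson_process
from elephant.conversion import BinnedSpikeTrain
from elephant.spike_train_correlation import cross_correlation_histogram

st1 = homogeneous_poisson_process(rate=15 * pq.Hz, t_stop=5 * pq.s)
st2 = homogeneous_poisson_process(rate=15 * pq.Hz, t_stop=5 * pq.s)

# Bin both trains on a common 5 ms grid.
bst1 = BinnedSpikeTrain(st1, binsize=5 * pq.ms)
bst2 = BinnedSpikeTrain(st2, binsize=5 * pq.ms)

# CCH over lags of -50..+50 bins; cross_corr_coef=True applies what we take
# to be the normalization referenced in the 0.4.2 notes.
cch, lags = cross_correlation_histogram(bst1, bst2, window=[-50, 50],
                                        cross_corr_coef=True)
print(cch.shape, lags[:3])
```
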
Some files were not shown because too many files changed in this diff