*****************
Typical use cases
*****************

Recording multiple trials from multiple channels
==================================================
In this example we suppose that we have recorded from an 8-channel probe, and
that we have recorded three trials/episodes. We therefore have a total of
8 x 3 = 24 signals, grouped into three :class:`AnalogSignal` objects, one per trial.

Our entire dataset is contained in a :class:`Block`, which in turn contains:

* 3 :class:`Segment` objects, each representing data from a single trial,
* 1 :class:`ChannelIndex`.

.. image:: images/multi_segment_diagram.png
   :width: 75%
   :align: center
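For orientation, here is one way such a block might be put together from scratch.
This is only a minimal sketch using random placeholder data; the names, shapes and
sampling rate are illustrative and not part of any recorded dataset:

.. doctest::

    import numpy as np
    from quantities import mV, kHz
    from neo import Block, Segment, ChannelIndex, AnalogSignal

    block = Block(name="8-channel probe recording")
    chx = ChannelIndex(index=np.arange(8),
                       channel_names=["ch%d" % i for i in range(8)])
    block.channel_indexes.append(chx)

    for i in range(3):  # one Segment per trial
        seg = Segment(name="trial %d" % i, index=i)
        sig = AnalogSignal(np.random.randn(1000, 8) * mV, sampling_rate=1 * kHz)
        seg.analogsignals.append(sig)
        chx.analogsignals.append(sig)  # also register the signal with the ChannelIndex
        block.segments.append(seg)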
:class:`Segment` and :class:`ChannelIndex` objects provide two different
ways to access the data, corresponding respectively, in this scenario, to access
by **time** and by **space**.

.. note:: Segments do not always represent trials; they can be used for many
          purposes: segments could represent parallel recordings for different
          subjects, or different steps in a current clamp protocol.
**Temporal (by segment)**

In this case you want to go through your data in order, perhaps because you want
to correlate the neural response with the stimulus that was delivered in each segment.
In this example, we're averaging over the channels.

.. doctest::

    import numpy as np
    from matplotlib import pyplot as plt

    for seg in block.segments:
        print("Analyzing segment %d" % seg.index)
        avg = np.mean(seg.analogsignals[0], axis=1)
        plt.figure()
        plt.plot(avg)
        plt.title("Peak response in segment %d: %f" % (seg.index, avg.max()))
**Spatial (by channel)**

In this case you want to go through your data by channel location and average over time.
Perhaps you want to see which physical location produces the strongest response, and every
stimulus was the same:

.. doctest::

    # We assume that our block has only 1 ChannelIndex
    chx = block.channel_indexes[0]

    siglist = [sig[:, chx.index] for sig in chx.analogsignals]
    avg = np.mean(siglist, axis=0)

    plt.figure()
    for index, name in zip(chx.index, chx.channel_names):
        plt.plot(avg[:, index])
        plt.title("Average response on channel %d: %s" % (index, name))
**Mixed example**

Combining the two approaches, descending the hierarchy temporally and spatially
at the same time, can be tricky. Here's an example.

Let's say you saw something interesting on the 6th channel (index 5) on even-numbered
trials during the experiment and you want to follow up. What was the average response?

.. doctest::

    index = chx.index[5]
    avg = np.mean([seg.analogsignals[0][:, index] for seg in block.segments[::2]], axis=0)
    plt.plot(avg)
Recording spikes from multiple tetrodes
========================================

Here is a similar example in which we have recorded with two tetrodes and
extracted spikes from the extra-cellular signals. The spike times are contained
in :class:`SpikeTrain` objects.

Again, our data set is contained in a :class:`Block`, which contains:

* 3 :class:`Segments` (one per trial).
* 2 :class:`ChannelIndexes` (one per tetrode), which contain:

  * 2 :class:`Unit` objects (= 2 neurons) for the first :class:`ChannelIndex`
  * 5 :class:`Units` for the second :class:`ChannelIndex`.

In total we have 3 x 7 = 21 :class:`SpikeTrains` in this :class:`Block`.

.. image:: images/multi_segment_diagram_spiketrain.png
   :width: 75%
   :align: center
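A block with this layout could be assembled along the following lines. Again this is
only a sketch: the tetrode names and the randomly generated spike times are placeholders
standing in for real sorted data, and they give the 3 x 7 = 21 spike trains mentioned above:

.. doctest::

    import numpy as np
    from quantities import s
    from neo import Block, Segment, ChannelIndex, Unit, SpikeTrain

    block = Block()
    segments = [Segment(name="trial %d" % i, index=i) for i in range(3)]
    block.segments.extend(segments)

    for tetrode_id, n_units in enumerate([2, 5]):
        chx = ChannelIndex(name="Tetrode %d" % tetrode_id,
                           index=np.arange(4) + 4 * tetrode_id)
        block.channel_indexes.append(chx)
        for u in range(n_units):
            unit = Unit(name="unit %d" % u)
            chx.units.append(unit)
            for seg in segments:
                # random placeholder spike times within a 10 s trial
                st = SpikeTrain(np.sort(np.random.uniform(0, 10, 20)) * s,
                                t_start=0 * s, t_stop=10 * s)
                unit.spiketrains.append(st)
                seg.spiketrains.append(st)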
There are three ways to access the :class:`SpikeTrain` data:

* by :class:`Segment`
* by :class:`ChannelIndex`
* by :class:`Unit`
**By Segment**

In this example, each :class:`Segment` represents data from one trial, and we
want a PSTH for each trial from all units combined:

.. doctest::

    for seg in block.segments:
        print("Analyzing segment %d" % seg.index)
        stlist = [st - st.t_start for st in seg.spiketrains]
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH in segment %d" % seg.index)
**By Unit**

Now we can calculate the PSTH averaged over trials for each unit, using the
:attr:`block.list_units` property:

.. doctest::

    for unit in block.list_units:
        stlist = [st - st.t_start for st in unit.spiketrains]
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH of unit %s" % unit.name)
**By ChannelIndex**

Here we calculate a PSTH averaged over trials by channel location,
blending all units:

.. doctest::

    for chx in block.channel_indexes:
        stlist = []
        for unit in chx.units:
            stlist.extend([st - st.t_start for st in unit.spiketrains])
        plt.figure()
        count, bins = np.histogram(np.hstack(stlist))
        plt.bar(bins[:-1], count, width=bins[1] - bins[0])
        plt.title("PSTH blend of tetrode %s" % chx.name)
Spike sorting
=============

Spike sorting is the process of detecting and classifying high-frequency
deflections ("spikes") on a group of physically nearby recording channels.

For example, let's say you have defined a ChannelIndex for a tetrode
containing 4 separate channels. Here is an example showing (with fake data)
how you could iterate over the contained signals and extract spike times.
(Of course in reality you would use a more sophisticated algorithm.)

.. doctest::
    import numpy as np
    import neo
    from neo import Segment, AnalogSignal, ChannelIndex, Unit
    from quantities import Hz

    # generate some fake data
    seg = Segment()
    seg.analogsignals.append(
        AnalogSignal([[0.1, 0.1, 0.1, 0.1],
                      [-2.0, -2.0, -2.0, -2.0],
                      [0.1, 0.1, 0.1, 0.1],
                      [-0.1, -0.1, -0.1, -0.1],
                      [-0.1, -0.1, -0.1, -0.1],
                      [-3.0, -3.0, -3.0, -3.0],
                      [0.1, 0.1, 0.1, 0.1],
                      [0.1, 0.1, 0.1, 0.1]],
                     sampling_rate=1000*Hz, units='V'))

    chx = ChannelIndex(index=[0, 1, 2, 3])
    chx.analogsignals.append(seg.analogsignals[0])

    # extract spike trains from each channel
    st_list = []
    for signal in chx.analogsignals:
        # use a simple threshold detector: a spike is any sample where the
        # minimum across channels drops below -1.0
        spike_mask = np.min(signal.magnitude, axis=1) < -1.0

        # create a spike train from the threshold crossing times
        spike_times = signal.times[spike_mask]
        st = neo.SpikeTrain(spike_times, t_start=signal.t_start, t_stop=signal.t_stop)

        # remember the spike waveforms (one sample before to one sample after each spike)
        wf_list = []
        for spike_idx in np.nonzero(spike_mask)[0]:
            wf_list.append(signal[spike_idx-1:spike_idx+2, :])
        st.waveforms = np.array(wf_list)

        st_list.append(st)
At this point, we have a list of spiketrain objects. We could simply create
a single Unit object, assign all spike trains to it, and then assign the
Unit to the group on which we detected it.

.. doctest::

    u = Unit()
    u.spiketrains = st_list
    chx.units.append(u)
Now the recording channel group (tetrode) contains a list of analogsignals,
and a single Unit object containing all of the detected spiketrains from those
signals.

Further processing could assign each of the detected spikes to an independent
source, a putative single neuron. (This processing is outside the scope of
Neo. There are many open-source toolboxes to do it, for instance our sister
project OpenElectrophy.)

In that case we would create a separate Unit for each cluster, assign its
spiketrains to it, and then store all the units in the original
recording channel group.
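For illustration only, suppose an external sorter has returned one cluster label per
detected spike; the ``cluster_labels`` array below is entirely made up, and we assume
for simplicity that the labels line up with the spikes in each train of ``st_list``.
Splitting the catch-all Unit from the previous step into one Unit per cluster might
then look like this:

.. doctest::

    # hypothetical output of an external spike sorter: one integer label per
    # detected spike (values invented for illustration)
    cluster_labels = np.array([0, 1])

    chx.units = []  # replace the single catch-all Unit created above
    for cluster_id in np.unique(cluster_labels):
        unit = Unit(name="cluster %d" % cluster_id)
        for st in st_list:
            # keep only the spikes that the sorter assigned to this cluster
            mask = cluster_labels == cluster_id
            unit.spiketrains.append(
                neo.SpikeTrain(st.times[mask], t_start=st.t_start, t_stop=st.t_stop))
        chx.units.append(unit)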
.. EEG

.. Network simulations