%% How to start
%
% Authors
% * Hio-Been Han, hiobeen@mit.edu
%
% * SungJun Cho, sungjun.cho@psych.ox.ac.uk
%
% * DaYoung Jung, dayoung@kist.re.kr
%
% * Jee Hyun Choi, jeechoi@kist.re.kr
%
% The following guide provides instructions on how to access the dataset
% uploaded to the GIN G-Node repository: <https://gin.g-node.org/hiobeen/Mouse-threat-and-escape-CBRAIN
% https://gin.g-node.org/jeelab/Mouse-threat-and-escape-CBRAIN>.
%
%
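% For reference, the repository (including the large data files) can be fetched
% with the GIN command-line client; a sketch, assuming the |gin| client is
% installed and logged in:
%
%  gin get jeelab/Mouse-threat-and-escape-CBRAIN
%  cd Mouse-threat-and-escape-CBRAIN
%  gin download --content
%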
%% LFP (EEG) dataset
% The structure of our dataset follows the BIDS-EEG format introduced by Pernet
% et al. (2019). Within the top-level directory |data_BIDS|, the LFP data are
% organized under path names starting with |sub-*|. These LFP recordings (n =
% 8 mice) were acquired under the threat-and-escape experimental paradigm, which
% involves dynamic interactions with a spider robot (Kim et al., 2020).
%
% This experiment was conducted in two separate conditions: (1) the solitary
% condition (denoted as "Single"), in which a mouse was exposed to a robot alone
% in the arena, and (2) the group condition (denoted as "Group"), in which a group
% of mice encountered a robot together. A measurement device called the CBRAIN
% headstage was used to record LFP data at a sampling rate of 1024 Hz. The recordings
% were taken from the medial prefrontal cortex (Channel 1) and the basolateral
% amygdala (Channel 2).
%
% For a more comprehensive understanding of the experimental methods and procedures,
% please refer to Kim et al. (2020).
%
% *IMPORTANT NOTE*
%
% > To ensure compatibility with EEGLAB and maintain consistency with the BIDS-EEG
% format, the term "EEG" is used throughout the dataset, although the recorded
% signals are actually LFPs. We therefore also use the term "EEG" in the guideline
% below for convenience.
%
% > Please be aware that while our dataset generally follows the BIDS format,
% we have chosen not to strictly adhere to all BIDS requirements. Because of the
% nature of our multi-subject simultaneous recordings, some session numbers are
% missing. We decided not to 'strictly' follow the BIDS format in order to preserve
% data integrity and avoid confusion.
%
%
%%
%% Behavioral video & position tracking dataset
% You can find the videos that were recorded simultaneously with the LFP measurements
% under the directory |data_BIDS/stimuli/video|. From these videos, users may extract
% video frames to directly compare mouse behaviors (e.g., instantaneous movement
% parameters) with neural activities. We also uploaded the instantaneous
% position coordinates of each mouse extracted from the videos. These position
% data are located in the directory |data_BIDS/stimuli/position|.
%
% *IMPORTANT NOTE*
%
% > Please note that the position tracking process we employed differed between the
% two task conditions. For the solitary condition, position tracking was performed
% using the CNN-based U-Net model (Ronneberger et al., 2015; see Han et al.,
% 2023 for the detailed procedure). This approach tracks an object's location based
% on the centroid of the mouse's body. For the group condition, an OpenCV-based
% custom-built script was used (see Kim et al., 2020 for details). This approach
% tracks the location of the headstage mounted on each mouse and extracts its head
% position. Therefore, the datasets for the two task conditions contain distinct
% information: body position for the solitary condition and headstage position for
% the group condition. Users should be aware of these differences if they intend
% to analyze these data.
%
%
%% References
%%
% # Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips,
% C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain
% imaging data structure for electroencephalography. _Scientific Data_, 6(1):103.
% # * Kim, J., Kim, C., Han, H. B., Cho, C. J., Yeom, W., Lee, S. Q., Choi,
% J. H. (2020). A bird's-eye view of brain activity in socially interacting mice
% through mobile edge computing (MEC). _Science Advances_, 6(49):eabb9841.
% # Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks
% for biomedical image segmentation. In _Medical Image Computing and Computer-Assisted
% Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October
% 5-9, 2015, Proceedings, Part III 18_ (pp. 234-241). Springer International Publishing.
% # * Han, H. B., Shin, H. S., Jeong, Y., Kim, J., Choi, J. H. (2023). Dynamic
% switching of neural oscillations in the prefrontal–amygdala circuit for naturalistic
% freeze-or-flight. _Proceedings of the National Academy of Sciences_, 120(37):e2308762120.
% # * Cho, S., Choi, J. H. (2023). A guide towards optimal detection of transient
% oscillatory bursts with unknown parameters. _Journal of Neural Engineering_,
% 20:046007.
% # Delorme, A., Makeig, S. (2004). EEGLAB: an open source toolbox for analysis
% of single-trial EEG dynamics including independent component analysis. _Journal
% of Neuroscience Methods_, 134(1):9-21.
%%
% *: Publications that used this dataset.
%% Dataset Structure
% To make it easier to understand how the dataset is structured, we provide
% a complete list of every file type within the data, its format, and its contents.
%%
% * |README.md|: the overall documentation & a plain-text version of the tutorial
% MATLAB scripts (same as how_to_start.mlx)
% * |how_to_start.mlx|: the MATLAB live-script file providing tutorials for
% basic LFP/video analyses. If you don't have MATLAB, please refer to the analysis
% script in 'README.md' written in plain text, as it contains the same contents.
% * |how_to_start.m|: same as the _mlx_ file (analysis tutorial MATLAB script) but
% in _m_-file format.
% * |montage.csv|: the file containing the anatomical locations of mPFC (Ch
% 1) and BLA (Ch 2) with respect to the bregma.
% * |stimuli|: the directory containing the tracked positions and videos of
% the mice and a robot.
%%
% *Directory* / *Description* / *Format*
%%
% * |position|: positional coordinates of both the mice and the robotic spider.
% Files: |Mouse-Day*-Trial*-Mouse*.csv| for mice in Single & Group;
% |Robot-Day*-Trial*.csv| for a robot in Group;
% |Robot-Day*-Trial*-Mouse*.csv| for a robot in Single.
% * |video|: video footage of the experimental sessions.
% Files: |Day*-Trial*-Group.avi| for Group; |Day*-Trial*-Mouse*.avi| for Single.
%%
% * |sub-*|: the directories containing mice LFP data for each experimental
% session.
%%
% *Directory* / *Description* / *Format*
%%
% * |ses01 ~ ses08|: mouse LFP data for each experimental session in the Single
% condition. All data files within each directory start with the file name
% |Day*-Trial*-Single| and carry the following extensions: (1) |.fdt| and |.set|
% for mice LFP data; (2) |.json| for metadata on recording sites and
% hyperparameters; (3) |_eeg_channels.tsv| as a placeholder for LFP channel
% metadata; (4) |_eeg_events.tsv| as a placeholder for LFP event metadata.
% * |ses09 ~ ses16|: mouse LFP data for each experimental session in the Group
% condition. All data files within each directory start with the file name
% |Day*-Trial*-Group| and carry the following extensions: (1) |.fdt| and |.set|
% for mice LFP data; (2) |.json| for metadata on recording sites and
% hyperparameters; (3) |_eeg_channels.tsv| as a placeholder for LFP channel
% metadata; (4) |_eeg_events.tsv| as a placeholder for LFP event metadata.
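%%
% The session numbering used below (ses-01 ~ ses-16) maps onto the Day/Trial
% labels in the file names with four trials per day; sessions 09-16 hold the
% Group condition. A minimal sketch of the mapping (the variable names here are
% illustrative only):
sess_example = 11; % e.g., ses-11
if sess_example <= 8, cond = 'Single'; s = sess_example;
else, cond = 'Group'; s = sess_example - 8; end
day = ceil(s/4); trial = mod(s,4); if ~trial, trial = 4; end
fprintf('ses-%02d corresponds to Day%d-Trial%d (%s)\n', sess_example, day, trial, cond);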
%%
%
%
% *DATA EXCLUSION NOTICE*
%
% Due to a synchronization issue between the LFP and video data, specific data
% files exhibiting sync errors have been excluded to maintain the integrity of
% the research data. The affected data files are listed below:
%%
% * Single Condition: Mouse 2 Session 5, Mouse 5 Session 6, Mouse 7 Session 2,
% Mouse 8 Session 4, Mouse 8 Session 8
% * Group Condition: Sessions 13 and 16 for all mice (Mouse 1 ~ 8)
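%%
% Because the excluded recordings are simply absent from disk, analysis loops can
% skip them by checking for the |.set| file before loading; a minimal sketch
% (the subject/session pair below is just an example):
chk = dir(sprintf('./data_BIDS/sub-%02d/ses-%02d/eeg/*.set', 2, 5));
if isempty(chk)
    fprintf('sub-02 / ses-05 has no data (excluded session); skipping.\n');
end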
%% User Guidelines
%% Part 0) Environment setup for EEGLAB extension
% Since the EEG data are stored in the EEGLAB dataset format (*.set/*.fdt), installing
% the EEGLAB toolbox (Delorme & Makeig, 2004) is necessary to access the data.
% For details on the installation process, please refer to the following webpage:
% <https://eeglab.org/others/How_to_download_EEGLAB.html https://eeglab.org/others/How_to_download_EEGLAB.html>.
% Add the EEGLAB installation directory to the MATLAB path
eeglab_dir = './subfunc/external/eeglab2023.0';
addpath(genpath(eeglab_dir));
%% Part 1) EEG data overview
% *1-1) Loading an example EEG dataset*
% In this instance, we'll go through loading EEG data from a single session
% and visualizing a segment of that data. For this specific example, we'll use
% Mouse 1, Session 1, under the solitary threat condition.
path_base = './data_BIDS/';
% path_base = 'E:/Mouse-threat-and-escape-CBRAIN/data_BIDS/'; % (example of an absolute local path)
addpath ./subfunc/
mouse = 1;
sess = 1;
eeg_data_path = sprintf('%ssub-%02d/ses-%02d/eeg/', path_base, mouse, sess);
eeg_data_name = dir([eeg_data_path '*.set']);
EEG = pop_loadset('filename', eeg_data_name.name, 'filepath', eeg_data_name.folder, 'verbose', 'off');
%%
% After loading the data, we can visualize a tiny piece of the EEG time trace (3
% seconds) as follows.
win = (1:3*EEG.srate) + EEG.srate*115; % 3-second arbitrary slice for this example
% Open figure handle
fig = figure(1); clf; set(gcf, 'Position', [0 0 800 400], 'Color', [1 1 1])
plot_multichan( EEG.times(win), EEG.data(1:2,win));
xlabel('Time (sec)')
title(sprintf('Example EEG time trace, [%s]', eeg_data_name.name))
set(gca, 'XTick', 0:240)
% *1-2) Visualizing an example spectrogram*
% Each dataset contains 240 seconds of EEG recorded during the threat-and-escape
% paradigm described in the original publication (Kim et al., 2020). To visualize
% the overall pattern of EEG activity in one example dataset, a spectrogram can
% be obtained as follows.
%
% The threat-and-escape procedure consists of four different stages:
%
% Stage 1: (Robot absent) Baseline (0-60 sec)
%
% Stage 2: (Robot present) Robot attack (60-120 sec)
%
% Stage 3: (Robot present) Gate to safe zone open (120-180 sec)
%
% Stage 4: (Robot absent) No threat (180-240 sec)
% Calculating spectrogram
ch = 1;
[spec_d, spec_t, spec_f] = get_spectrogram( EEG.data(ch,:), EEG.times );
spec_d = spec_d*1000; % Unit scaling - millivolt to microvolt
% Visualization
fig = figure(2); clf; set(gcf, 'Position', [0 0 800 400], 'Color', [1 1 1])
imagesc( spec_t, spec_f, spec_d' ); axis xy
xlabel('Time (sec)'); ylabel('Frequency (Hz)');
axis([0 240 1 50])
hold on
plot([1 1]*60*1, ylim, 'w--');
plot([1 1]*60*2, ylim, 'w--');
plot([1 1]*60*3, ylim, 'w--');
set(gca, 'XTick', [0 60 120 180 240])
colormap('jet')
cbar = colorbar; ylabel(cbar, 'Amplitude (\muV/Hz)')
caxis([0 20])
title(sprintf('Example spectrogram, [%s]', eeg_data_name.name))
% *1-3) Visualizing grand-averaged spectrogram*
n_mouse = 8; % number of mice
n_sess = 16; % number of sessions
n_ch = 2; % number of channels
spec_n_t = 2390; spec_n_f = 512;
if ~exist('spec.mat', 'file')
    spec = single(nan([spec_n_t, spec_n_f, n_ch, n_mouse, n_sess]));
    for mouse = 1:n_mouse
        for sess = 1:n_sess
            eeg_data_path = sprintf('%ssub-%02d/ses-%02d/eeg/', path_base, mouse, sess);
            eeg_data_name = dir([eeg_data_path '*.set']);
            if ~isempty(eeg_data_name) % skip excluded sessions
                EEG = pop_loadset('filename', eeg_data_name.name, 'filepath', eeg_data_name.folder, 'verbose', 'off');
                for ch = 1:n_ch
                    [spec_d, spec_t, spec_f] = get_spectrogram( EEG.data(ch,:), EEG.times );
                    spec(:,:,ch,mouse,sess) = spec_d*1000; % Unit scaling - millivolt to microvolt
                end
            end
        end
    end
    save('spec.mat', 'spec', 'spec_f', 'spec_t')
else
    load('spec.mat')
end
%%
% Now we have spectrograms derived from all recordings (n = 8 mice x 8 sessions
% x 2 solitary/group conditions). Before visualizing the grand-averaged spectrogram,
% we will first perform baseline correction. This is done by subtracting the mean
% value of each frequency component during the baseline period (Stage 1, 0-60
% sec).
mean_f = @nanmean;
% Baseline correction
stage_1_win = 1:size( spec,1 )/4;
spec_baseline = repmat( mean_f( spec(stage_1_win,:,:,:,:), 1 ), [size(spec,1), 1, 1, 1, 1]);
spec_norm = spec - spec_baseline;
fig = figure(3); clf; set(gcf, 'Position', [0 0 800 400], 'Color', [1 1 1])
for ch = 1:2
    % single-channel visualization
    spec_avg = mean_f( mean_f( spec_norm(:,:,ch,:,:), 5), 4);
    subplot(2,1,ch)
    imagesc( spec_t, spec_f, imgaussfilt( spec_avg', 1) ); axis xy
    xlabel('Time (sec)'); ylabel('Frequency (Hz)');
    axis([0 240 1 60])
    hold on
    plot([1 1]*60*1, ylim, 'k--');
    plot([1 1]*60*2, ylim, 'k--');
    plot([1 1]*60*3, ylim, 'k--');
    set(gca, 'XTick', [0 60 120 180 240])
    colormap('jet')
    cbar = colorbar; ylabel(cbar, '\DeltaAmplitude (\muV^{2}/Hz)')
    caxis([-1 1]*3/ch)
    title(sprintf('Grand-averaged spectrogram [Channel %d]', ch))
end
%% (2) Accessing position data
% 2-1) Loading position data (solitary)
% In this instance, we will go through the behavioral data extracted from a
% sample video. For this specific example, we use one example solitary threat
% session (mouse #2, session #3) to depict the trajectory of the mouse's movement
% in space.
% path_base = './data_BIDS/';
path_to_load = [path_base 'stimuli/position/Single/'];
% Data selection & loading
mouse = 2; sess = 3;
day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load, day, trial, mouse);
pos = table2struct(readtable(pos_fname));
xy_mouse = [[pos.x];[pos.y]];
pos_fname = sprintf('%sRobot-Day%d-Trial%d-Mouse%d.csv', path_to_load, day, trial, mouse);
pos = table2struct(readtable(pos_fname));
xy_robot = [[pos.x];[pos.y]];
% Visualization
fig = figure(4); clf; set(gcf, 'Position', [0 0 600 500], 'Color', [1 1 1])
hold on
plot( xy_mouse(1,:), xy_mouse(2,:), '.-', 'Color', 'r');
axis([1 1024 1 1024])
plot( xy_robot(1,:), xy_robot(2,:), '*', 'Color', 'k', 'MarkerSize', 10);
xlabel('X (pixel)'); ylabel('Y (pixel)');
legend({'mouse','robot'}, 'Location', 'eastoutside', 'FontSize', 12);
set(gca, 'YDir', 'reverse', 'Box', 'on');
title(sprintf('Example position data, mouse #%d session #%d', mouse, sess))
% 2-2) Loading position data (group)
% Using the same method, position data for the group threat condition can also
% be plotted as follows. Here, we use session 1 of the group threat condition.
% path_base = './data_BIDS/';
path_to_load = [path_base 'stimuli/position/Group/'];
% Data loading
sess = 1;
xy_mouse = zeros([2,7200,8]);
xy_robot = zeros([2,7200,1]);
for mouse = 1:8
    day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
    pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load, day, trial, mouse);
    pos = table2struct(readtable(pos_fname));
    xy_mouse(:,:,mouse) = [[pos.x];[pos.y]];
end
pos_fname = sprintf('%sRobot-Day%d-Trial%d.csv', path_to_load, day, trial);
pos = table2struct(readtable(pos_fname));
xy_robot(:,:,1) = [[pos.x];[pos.y]];
% Visualization
fig = figure(5); clf; set(gcf, 'Position', [0 0 600 500], 'Color', [1 1 1])
color_mice = [.74 .25 .3; .91 .47 .44; .96 .71 .56; .96 .89 .71; .9 .93 .86; .63 .86 .93; .20 .75 .92; 0 .53 .8];
color_mice(9,:) = [.1 .1 .1];
plts = []; labs = {};
for mouse = 1:8
    plts(mouse) = plot( xy_mouse(1,:,mouse), xy_mouse(2,:,mouse), '.-', 'Color', color_mice(mouse,:));
    if mouse==1, hold on; end
    labs{mouse} = sprintf('mouse %d', mouse);
end
axis([1 1024 1 1024])
plts(end+1) = plot( xy_robot(1,:), xy_robot(2,:), '*', 'Color', color_mice(end,:), 'MarkerSize', 10);
xlabel('X (pixel)'); ylabel('Y (pixel)');
labs{9} = 'robot';
legend(plts, labs, 'Location', 'eastoutside', 'FontSize', 12);
set(gca, 'YDir', 'reverse', 'Box', 'on');
title(sprintf('Example position data, Group session #%d', sess))
% 2-3) Overlay with frame
% These position data were extracted directly from the frame files (extracted
% from the attached videos using |parse_video_rgb.m|), so the unit of measurement
% is pixels. Position overlays on frames can be produced with the following code.
% The example provided below illustrates sequential frames at one-second intervals.
% The resolution of the frames is (1024 x 1024), while the arena is a square of
% 60 x 60 cm, allowing conversion to physical units. For further details, refer
% to the Methods sections of the original publications (Kim et al., 2020; Han
% et al., 2023).
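% Given the 1024-pixel frames and the 60 x 60 cm arena, pixel coordinates can
% be converted to physical units with a single scale factor; a minimal sketch,
% assuming the full frame width spans the arena width:
cm_per_px = 60/1024;                 % centimeters per pixel
xy_mouse_cm = xy_mouse * cm_per_px;  % mouse positions in cm
xy_robot_cm = xy_robot * cm_per_px;  % robot positions in cm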
% Import position data
path_to_load = [path_base 'stimuli/position/Group/'];
for mouse = 1:8
    day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
    pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load, day, trial, mouse);
    pos = table2struct(readtable(pos_fname));
    xy_mouse(:,:,mouse) = clean_coordinates([[pos.x];[pos.y]]);
end
pos_fname = sprintf('%sRobot-Day%d-Trial%d.csv', path_to_load, day, trial);
pos = table2struct(readtable(pos_fname));
xy_robot(:,:,1) = clean_coordinates( [[pos.x];[pos.y]] );
% Overlay frame image + position data
% frames_path = [path_base './stimuli/video_parsed/'];
% if ~isdir(frames_path), parse_video_rgb(); end
frames_path = './data_BIDS/stimuli/video_parsed/';
sess = 1;
day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
vidname = sprintf('Day%d-Trial%d-Group', day, trial);
% Visualization
color_mice(9,:) = [1 1 1];
fig = figure(6); clf; set(fig, 'Position', [0 0 800 800]);
tiledlayout('flow', 'TileSpacing', 'compact', 'Padding', 'compact')
for frameIdx = [4190 4220 4250 4280]
    % Import & draw frame
    fname_jpg = sprintf('%sGroup/%s/frame-%06d-%s.jpg', frames_path, vidname, frameIdx, vidname);
    frame = imread( fname_jpg );
    nexttile;
    imagesc(frame);
    hold on
    % Mouse coordinates overlay
    for mouse = 1:8
        plot( xy_mouse(1,frameIdx,mouse), xy_mouse(2,frameIdx,mouse), 'o', ...
            'MarkerSize', 10, 'Color', color_mice(mouse,:))
        text(xy_mouse(1,frameIdx,mouse), xy_mouse(2,frameIdx,mouse)+70, sprintf('Mouse%d',mouse), ...
            'Color', color_mice(mouse,:), 'FontSize', 10, 'HorizontalAlignment', 'center');
    end
    % Spider coordinates overlay
    plot( xy_robot(1,frameIdx), xy_robot(2,frameIdx), '*', ...
        'MarkerSize', 20, 'Color', color_mice(end,:))
    text(xy_robot(1,frameIdx), xy_robot(2,frameIdx)+70, 'Robot', ...
        'Color', color_mice(end,:), 'FontSize', 10, 'HorizontalAlignment', 'center');
    title(sprintf('%s, t = %.1f sec', vidname, frameIdx/30));
    ylim([24 1000]); xlim([24 1000]); set(gca, 'XTick', [], 'YTick', [])
end
%% (3) Cross-referencing neural & behavioral data
% The utility of this dataset lies in its capacity to investigate alterations
% in neural activity in response to behavioral changes. In this example, we
% introduce a method to analyze neural data in conjunction with behavioral
% (video) data. Our previous findings (Han et al., 2023) indicated that the mPFC-BLA
% circuit exhibits different theta frequencies depending on the situation: low
% theta (~5 Hz) during freezing and high theta (~10 Hz) during flight in threat
% situations. The original study (Han et al., 2023, PNAS) employed single recording
% sessions; however, in this example, we will also analyze group data.
% 3-1) Loading example group session data
% To do this, let's load session 1 of the group recording (labeled ses-09 in
% the 01-16 numbering), including position data, frame data, and corresponding LFP
% data.
% Pick an arbitrary group session
sess = 1;
day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
% Prepare position data
path_to_load = [path_base 'stimuli/position/Group/'];
for mouse = 1:8
    pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load, day, trial, mouse);
    pos = table2struct(readtable(pos_fname));
    xy_mouse(:,:,mouse) = [[pos.x];[pos.y]];
end
pos_fname = sprintf('%sRobot-Day%d-Trial%d.csv', path_to_load, day, trial);
pos = table2struct(readtable(pos_fname));
xy_robot(:,:,1) = [[pos.x];[pos.y]];
% Prepare frame data
frame_dir = sprintf('./data_BIDS/stimuli/video_parsed/Group/Day%d-Trial%d-Group/*.jpg', day, trial);
frame_list = dir(frame_dir);
srate_video = 30;
t_video = 1/srate_video : 1/srate_video : length(frame_list)/srate_video;
% Prepare LFP data (group sessions are ses-09 ~ ses-16, hence sess+8)
EEG_data = {};
for mouse = 1:n_mouse
    eeg_data_path = sprintf('%ssub-%02d/ses-%02d/eeg/', path_base, mouse, sess+8);
    eeg_data_name = dir([eeg_data_path '*.set']);
    EEG_data{mouse} = pop_loadset('filename', eeg_data_name.name, 'filepath', eeg_data_name.folder, 'verbose', 'off');
end
% 3-2) Simultaneous visualization of neural & behavioral data
% For demonstration purposes, we'll pick an arbitrary point in time (frameIdx
% = 2418, around ~80 sec into the video) and plot the neural activity for that period.
% To examine the relationship between locomotion and mPFC neural activity, we'll
% display the 1-second trajectory on the plot and perform a Fourier transform
% on the neural data from the past second to include in the analysis. Based on
% our previous finding (Han et al., 2023), we can hypothesize that a mouse with
% minimal movement (M1/M8, in this instance) would exhibit pronounced low-theta
% activity, whereas a mouse with rapid locomotion (M3, in this instance) would
% show pronounced high-theta activity.
% Pick a frame
frameIdx = 2418;
frame = imread( fullfile(frame_list(frameIdx).folder, frame_list(frameIdx).name) );
% Get neural data around the selected time (= frame)
epoch_win = [-1.5 .5];
t_eeg = EEG_data{1}.times;
epoch_idx = findIdx( epoch_win + t_video(frameIdx), t_eeg);
t_epoch = linspace(epoch_win(1), epoch_win(2), length(epoch_idx));
% Configuration for color, position trace, etc.
color_mice = [.74 .25 .3; .91 .47 .44; .96 .71 .56; .96 .89 .71; .9 .93 .86; .63 .86 .93; .20 .75 .92; 0 .53 .8];
tail_length = 1 * srate_video; % 1 sec
tail_alpha = linspace(0, 1, tail_length); % Comet-like visualization effect
% Open figure handle
fig = figure(7); clf; set(fig, 'Position', [0 0 1500 700], 'Color', [1 1 1]);
% Drawing frame and position trace
sp1 = subplot(1,6,[1 3]); hold on
sp1.Position(1) = .02;
imshow( frame );
for mouse = 1:8
    xy = xy_mouse(:, frameIdx-tail_length+1:frameIdx, mouse);
    for tailIdx = 1:tail_length
        plt = scatter(xy(1,tailIdx), xy(2,tailIdx), 's', ...
            'MarkerFaceAlpha', tail_alpha(tailIdx), 'MarkerFaceColor', color_mice(mouse,:), 'MarkerEdgeColor', 'none');
    end
    text(xy(1,end), xy(2,end)+10, sprintf('M%d',mouse), 'Color', color_mice(mouse,:), 'FontSize', 13, ...
        'HorizontalAlignment', 'center', 'VerticalAlignment', 'top')
end
% Adding robot position trace
xy = xy_robot(:, frameIdx-tail_length+1:frameIdx);
for tailIdx = 1:tail_length
    plt = scatter(xy(1,tailIdx), xy(2,tailIdx), '.', 'MarkerFaceAlpha', tail_alpha(tailIdx), ...
        'MarkerFaceColor', 'w', 'MarkerEdgeColor', 'w');
end
text(xy(1,end), xy(2,end)-10, sprintf('Robot'), 'Color', 'w', 'FontSize', 15, ...
    'HorizontalAlignment', 'center', 'VerticalAlignment', 'bottom')
axis([1+50 1024-50 1+50 1024-50])
% Drawing neural data raw trace
ch = 1;
sp2 = subplot(1,6,[4 5]); hold on
sp2.Position(1) = .45; sp2.Position(3) = .33;
spacing_factor = 7; % Spacing between data
for mouse = 1:8
    eeg_epoch = EEG_data{mouse}.data(ch, epoch_idx); % get raw data
    eeg_epoch = zerofilt(EEG_data{mouse}.data(ch, epoch_idx), 2, 100, 1024); % band-pass filtering (2-100 Hz)
    eeg_epoch = (eeg_epoch - mean(eeg_epoch)) / std(eeg_epoch); % unit: z-score
    plot( t_epoch, eeg_epoch - mouse*spacing_factor, 'Color', color_mice(mouse,:));
    text(-1.7, -mouse*spacing_factor, sprintf('M%d', mouse), 'FontSize', 11 )
end
ylim([-8.5*spacing_factor -.25*spacing_factor])
set(gca, 'YColor', 'w', 'XColor', 'w')
% Add scale bar
plot([0 0]+.55, [-2.5 2.5]-(8*spacing_factor), 'k-')
xlim(epoch_win)
set(gca, 'Clipping', 'off');
plot([0 0]-1, ylim, '--', 'Color', [1 1 1]*.5);
plot([0 0], ylim, '-', 'Color', [1 1 1]*.5);
text(-1, -8.8*spacing_factor, 'time = -1 s', 'FontSize', 11, 'HorizontalAlignment', 'center')
text(0, -8.8*spacing_factor, 'time = 0 s', 'FontSize', 11, 'HorizontalAlignment', 'center')
set(gca, 'Box', 'off', 'LineWidth', 1)
% Add spectrum (FFT of 1-sec LFP chunk)
sp3 = subplot(1,6,6); hold on
spacing_factor = 2; % Spacing between data
for mouse = 1:8
    eeg_epoch = EEG_data{mouse}.data(ch, epoch_idx); % get raw data
    eeg_epoch = zerofilt(EEG_data{mouse}.data(ch, epoch_idx), 2, 100, 1024); % filtering
    [power, freq] = get_fft(eeg_epoch, EEG_data{mouse}.srate);
    power = smoothdata( abs(power).^2, 2, 'gauss', 10); % Spectrum smoothing
    [~, where] = max(power); power_max = power(where); % Peak detection
    power_norm = power / power_max; % Spectrum normalization
    % Draw power spectrum
    plot( freq, power_norm - mouse*spacing_factor, 'Color', color_mice(mouse,:));
    % Mark peak frequency
    plot( freq(where), power_norm(where) - mouse*spacing_factor, '*', 'Color', color_mice(mouse,:));
    text( freq(where), power_norm(where) - mouse*spacing_factor, ...
        sprintf(' peak:%.1f Hz', freq(where)), ...
        'Color', color_mice(mouse,:), 'FontSize', 10, 'HorizontalAlignment', 'left');
end
ylim([-16.3 0.1]); xlim([1 32])
xlabel('Frequency (Hz)');
set(gca, 'YColor', 'w', 'YTick', [], 'FontSize', 10, 'Box', 'off')
set(gca, 'XScale', 'log', 'XMinorTick', 'off', 'XTick', 2.^(1:10))
ttltxt = sprintf('[%s] (video time = %.3f s)', frame_list(frameIdx).name, frameIdx/30);
sgtitle(ttltxt);
% 3-3) Analyzing neural data in conjunction with behavior
% In this example, the pattern of low/high theta differences related to locomotion
% across an entire recording will be analyzed. To achieve this, we will divide
% each mouse's speed into 10 discrete bins (based on ranks) and then average
% the LFP power values corresponding to each speed bin. This is done in the
% following order:
%%
% # Load the position data and store it in a variable named |xy_mouse|.
% # Resize these data to match the dimensions of the spectrogram (using the
% imresize() function).
% # Label each time point with the speed rank it corresponds to, storing this
% information in a variable named |t_idx|.
% # Collect the time points corresponding to each specific bin (in the |spectrum_sort|
% variable).
% Pooling mouse positions from all sessions
xy_mouse = zeros([2,7200,n_mouse,n_sess]);
xy_robot = zeros([2,7200,1,n_sess]);
path_to_load_group = [path_base 'stimuli/position/Group/'];
path_to_load_single = [path_base 'stimuli/position/Single/'];
for sess = 1:16
    for mouse = 1:8
        if sess <= 8 % Single condition
            day = ceil(sess/4); trial = mod(sess,4); if ~trial, trial = 4; end
            pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load_single, day, trial, mouse);
        else % Group condition
            day = ceil((sess-8)/4); trial = mod((sess-8),4); if ~trial, trial = 4; end
            pos_fname = sprintf('%sMouse-Day%d-Trial%d-Mouse%d.csv', path_to_load_group, day, trial, mouse);
        end
        pos = table2struct(readtable(pos_fname));
        xy_mouse(:,:,mouse,sess) = [[pos.x];[pos.y]];
    end
end
%% Sort speed data
n_bin = 10;
tIdx_valid = findIdx(spec_t([1,end]), t_video); % keep only time points covered by the spectrogram (edges are dropped by the FFT windowing)
for mouse = 1:8
    for sess = 1:16
        xy = xy_mouse(:,:,mouse,sess);
        speed = [0, abs(diff(xy(1,:))) + abs(diff(xy(2,:)))]'; % city-block speed per frame
        speed_resize = imresize(speed(tIdx_valid), [size(spec_norm,1), 1], 'bilinear');
        quant = [min(speed_resize), quantile(speed_resize, n_bin-1), max(speed_resize)];
        quant(end) = quant(end) + eps(quant(end)); % nudge top edge so the maximum falls inside the last bin
        for quantIdx = 1:n_bin
            t_idx_temp = find(speed_resize >= quant(quantIdx) & speed_resize < quant(quantIdx+1));
            t_idx{quantIdx,mouse,sess} = t_idx_temp;
        end
    end
end
% Sort spectrogram based on speed-sorted time points
ch = 1; % mPFC only
spectrum_sort = nan([length(spec_f), n_mouse, n_sess, n_bin]);
for binIdx = 1:n_bin, for mouse = 1:8, for sess = 1:16
    spectrum_sort(:,mouse,sess,binIdx) = mean_f( ...
        spec(t_idx{binIdx,mouse,sess},:,ch,mouse,sess), 1);
end, end, end
%%
% Now we are ready to plot the amplitude (power) spectrum. In this example, we
% plot the results of one example mouse (mouse 1), selecting two example
% sessions to compare: one from a single recording (ses-03) and the other from
% a group recording (ses-11). In addition, to measure the change of low/high
% theta, we define a theta amplitude ratio (amplitude of high theta divided by
% amplitude of low theta) and plot it as a function of speed bin.
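%%
% The ratio itself is easy to sketch in isolation. The spectrum below is a toy
% example (two Gaussian bumps, hypothetical, not real data); only the band
% definitions (3-8 Hz and 8-12 Hz) match the analysis in this script.
f_toy = 1:0.25:20; % toy frequency axis (Hz)
amp_toy = exp(-(f_toy-5.5).^2/2) + 0.5*exp(-(f_toy-10).^2/2); % toy amplitude spectrum
low_band  = f_toy >= 3 & f_toy < 8;   % low-theta band
high_band = f_toy >= 8 & f_toy <= 12; % high-theta band
ratio_toy = mean(amp_toy(high_band)) / mean(amp_toy(low_band)); % high/low theta amplitude ratio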
% Select example mouse & session
mouse = 1;
sess = [3, 11]; % Pick 2; one from a single, one from a group session
titles = {sprintf('Example single recording (M%1d, ses-%02d)', mouse, sess(1)); ...
    sprintf('Example group recording (M%1d, ses-%02d)', mouse, sess(2))};
theta_low_index = findIdx([3 8], spec_f); % 3-8 Hz
theta_high_index = findIdx([8 12], spec_f); % 8-12 Hz
theta_ratio = zeros([2, n_bin]);
colorspace_spd = [.0157 .4745 .2510; .1804 .6471 .3412; .4902 .7804 .3961; .7216 .8824 .4431; .9451 .9765 .6627;
    .9961 .9333 .6235; .9922 .7569 .4314; .9725 .5137 .2980; .8902 .2627 .1725; .7216 .0588 .1490];
% Open figure handle
fig = figure(8); clf; set(gcf, 'Color', [1 1 1], 'Position', [0 0 1200 300])
tiledlayout(1,3,'TileSpacing','tight')
for sessIdx = 1:2
    spec_avg = mean_f(mean_f(spectrum_sort(:,mouse,sess(sessIdx),:),4),3);
    % Draw colorful spectrum
    nexttile
    for binIdx = 1:n_bin
        spec_bin = mean_f(spectrum_sort(:,mouse,sess(sessIdx),binIdx),3);
        plt = plot(spec_f, spec_bin, '-');
        plt.Color = colorspace_spd(binIdx,:);
        hold on
        % Calculate & save ratio data
        theta_ratio(sessIdx,binIdx) = mean_f(spec_bin(theta_high_index)) ./ mean_f(spec_bin(theta_low_index));
    end
    set(gca, 'FontSize', 10, 'Box', 'off', 'LineWidth', 1);
    ylabel('Amplitude (\muV)');
    xlabel('Freq (Hz)');
    xlim([1 64]); ylim([0 40]);
    title(titles{sessIdx}, 'FontWeight', 'normal')
    if sessIdx == 1
        % Mark theta peaks
        plot(5.5, 29, 'v', 'Color', colorspace_spd(1,:), 'MarkerFaceColor', colorspace_spd(1,:))
        plot(11, 18, 'v', 'Color', colorspace_spd(end,:), 'MarkerFaceColor', colorspace_spd(end,:))
        text(4.5, 31.5, 'low theta', 'Color', colorspace_spd(1,:), 'HorizontalAlignment', 'left', 'VerticalAlignment', 'middle')
        text(8.5, 20.5, 'high theta', 'Color', colorspace_spd(end,:), 'HorizontalAlignment', 'left', 'VerticalAlignment', 'middle')
    end
    % Add color bar
    cbar = colorbar;
    ylabel(cbar, 'Speed bin (rank)');
    caxis([0, 1]);
    set(cbar, 'YTick', [0,1], 'YTickLabel', {'Slow','Fast'});
    colormap(colorspace_spd);
    set(gca, 'XTick', [1, 2, 4, 8, 16, 32, 64], 'XScale', 'log', 'XMinorTick', 'off');
end
% Draw theta ratio figure (1 example session)
nexttile
hold on
line1 = plot(1:n_bin, theta_ratio(1,:), 'o-', 'Color', [1 1 1]*.5); line1.MarkerFaceColor = 'w';
line2 = plot(1:n_bin, theta_ratio(2,:), '*--', 'Color', [1 1 1]*.5);
for binIdx = 1:n_bin
    plt1 = plot(binIdx, theta_ratio(1,binIdx), 'Color', colorspace_spd(binIdx,:));
    plt2 = plot(binIdx, theta_ratio(2,binIdx), 'Color', colorspace_spd(binIdx,:));
    plt1.Marker = 'o'; plt1.MarkerFaceColor = plt1.Color; plt1.MarkerEdgeColor = plt1.Color;
    plt2.Marker = '*'; plt2.MarkerFaceColor = plt2.Color;
end
xlabel('Speed bin (rank)')
set(gca, 'XTick', [1,n_bin], 'XTickLabel', {'Slow','Fast'});
xlim([0 n_bin+1])
title('Example single vs. group comparison', 'FontWeight', 'normal')
set(gca, 'FontSize', 10, 'Box', 'off', 'LineWidth', 1);
ylabel('Theta amplitude ratio (a.u.)')
ylim([.2 1.6]); set(gca, 'YTick', .2:.2:2)
legend([line1,line2], {'Single','Group'}, 'Location', 'northwest')
%%
% In the figure above, the theta ratio appears to increase monotonically with
% speed rank. To test whether this trend is statistically significant, this
% code block applies the same method to data from all eight mice: we store the
% mean theta ratio of each individual mouse in the variable |theta_ratio|,
% plot the grand average, and use Spearman's correlation to assess the
% statistical significance of the trend.
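%%
% Note that |corr(...,'Type','Spearman')| used below requires the Statistics
% and Machine Learning Toolbox. As a sketch, the same coefficient can be
% computed from ranks with base MATLAB, assuming no tied values (all data
% here are hypothetical toy values):
x_toy = (1:10)';
y_toy = 2*x_toy + 1; % perfectly monotone toy data, so rho should equal 1
[~, ix] = sort(x_toy); rx = zeros(10,1); rx(ix) = 1:10; % ranks of x
[~, iy] = sort(y_toy); ry = zeros(10,1); ry(iy) = 1:10; % ranks of y
rho_toy = corrcoef(rx, ry); rho_toy = rho_toy(1,2); % Pearson on ranks = Spearman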
% Calc theta ratio for grand average
theta_ratio = zeros([2, n_bin, n_mouse]);
for binIdx = 1:n_bin
    % Single
    spec_bin = mean_f(spectrum_sort(:,:,1:8,binIdx), 3);
    theta_ratio(1,binIdx,:) = mean_f(spec_bin(theta_high_index,:),1) ./ mean_f(spec_bin(theta_low_index,:),1);
    % Group
    spec_bin = mean_f(spectrum_sort(:,:,9:16,binIdx), 3);
    theta_ratio(2,binIdx,:) = mean_f(spec_bin(theta_high_index,:),1) ./ mean_f(spec_bin(theta_low_index,:),1);
end
% Draw theta ratio figure (all-session grand average)
fig = figure(9); clf; set(gcf, 'Color', [1 1 1], 'Position', [0 0 600 400])
hold on
line1 = plot(1:n_bin, mean(theta_ratio(1,:,:),3), 'o-', 'Color', [1 1 1]*.5); line1.MarkerFaceColor = 'w';
line2 = plot(1:n_bin, mean(theta_ratio(2,:,:),3), '*--', 'Color', [1 1 1]*.5);
for binIdx = 1:n_bin
    plt1 = errorbar(binIdx, mean_f(theta_ratio(1,binIdx,:),3), std(theta_ratio(1,binIdx,:),0,3), 'Color', colorspace_spd(binIdx,:));
    plt2 = errorbar(binIdx, mean_f(theta_ratio(2,binIdx,:),3), std(theta_ratio(2,binIdx,:),0,3), 'Color', colorspace_spd(binIdx,:));
    plt1.Marker = 'o'; plt1.MarkerFaceColor = plt1.Color; plt1.MarkerEdgeColor = plt1.Color;
    plt2.Marker = '*'; plt2.MarkerFaceColor = plt2.Color;
end
xlabel('Speed bin (rank)')
set(gca, 'XTick', [1,n_bin], 'XTickLabel', {'Slow','Fast'});
xlim([0 n_bin+1])
title('Grand-averaged (mouse n=8)', 'FontWeight', 'normal')
set(gca, 'FontSize', 12, 'Box', 'off', 'LineWidth', 1);
ylabel('Theta amplitude ratio (a.u.)')
ylim([.2 1.6]); set(gca, 'YTick', .2:.2:2)
legend([line1,line2], {'Single','Group'}, 'Location', 'northwest', 'FontSize', 12)
% Add statistical test
stat_x = repelem(1:n_bin, n_mouse);
stat_y = squeeze(theta_ratio(1,:,:))'; % Single: n_mouse x n_bin; (:) stacks column-wise, matching stat_x
[coef, pval] = corr(stat_x(:), stat_y(:), 'Type', 'Spearman');
stat_result_single = sprintf("[Single] Spearman's %s = %.4f, p = %.4f", char(961), coef, pval);
text(3.5, .40, stat_result_single, 'FontSize', 12);
stat_y = squeeze(theta_ratio(2,:,:))'; % Group
[coef, pval] = corr(stat_x(:), stat_y(:), 'Type', 'Spearman');
stat_result_group = sprintf("[Group] Spearman's %s = %.4f, p = %.4f", char(961), coef, pval);
text(3.5, .32, stat_result_group, 'FontSize', 12);