Update 'README.md'

Hio-Been Han · 4 years ago · parent commit afaefb8bf3
1 changed file with 293 additions and 156 deletions: README.md (+293, -156)

@@ -1,3 +1,10 @@
+[![G-Node GIN](https://gin.g-node.org/img/favicon.png)](https://gin.g-node.org/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al/)
+**👈🏼 Click to open in G-Node GIN repository!**
+
+<br>
+
+<br>
+
 # 1. Dataset information
 
 A set of high-density EEG (electroencephalogram) recordings obtained from awake, freely moving mice (*Mus musculus*, n = 6). Details of the experimental methods are described in the original research article using the same dataset [Hwang et al., 2019, *Brain Structure and Function*].
@@ -5,16 +12,12 @@ A set of high-density EEG (electroencephalogram) recording obtained from awake,
 * Title: High-density EEG recording in mice for auditory steady-state response with optogenetic stimulation in the basal forebrain
 * Authors: Eunjin Hwang, Hio-Been Han, Jeongyeong Kim, & Jee Hyun Choi [corresponding: jeechoi@kist.re.kr]
 * Version: 1.0.0
-* Related publication: [Hwang et al., 2019, *Brain Structure and Function*](https://link.springer.com/article/10.1007/s00429-019-01845-5). 
-* Dataset repository: G-Node GIN, https://gin.g-node.org/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al/ (DOI will be provided soon)
+* Related publication: [Hwang et al., 2019, *Brain Structure and Function*](https://link.springer.com/article/10.1007/s00429-019-01845-5).
+* Dataset repository: G-Node GIN (DOI will be provided soon) https://gin.g-node.org/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al/
 
-<center>
 **A step-by-step tutorial is included, fully functional in the _Google Colaboratory_ environment.**
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1LRmR0fxqWFvwxaNIJxMJkMQVblcuTazk)
-</center>
 
-<br>
-<br>
 # 2. File organization
 
 Raw EEG data are saved in the EEGLAB dataset format (*.set). Below is the list of files included in this dataset.
@@ -38,25 +41,21 @@ Raw EEG data are saved in EEGLAB dataset format (*.set). Below are the list of f
     
 **d) Example python scripts**
 
-    [Data_Description_gin-Mouse_hdEEG_ASSR_Hwang_et_al.ipynb]
+    [analysis_tutorial.ipynb]
      * written and tested on Google COLAB - Python 3 environment
     
-<br>
-<br>
 
 # 3. How to get started (Python 3 without _gin_)
 As the data are saved in EEGLAB format, you need to install an appropriate module to access the data in a Python 3 environment. The fastest way is to use the <code>read_epochs_eeglab()</code> function of the *MNE-python* module. You can download the toolbox from the link below (or run <code>pip install mne</code> in a terminal shell).
 
 
 *[MNE-python]* https://martinos.org/mne/stable/index.html
-<br>
-<br>
+
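 For instance, once *MNE-python* is installed, an EEGLAB epochs file from this dataset can be read in a couple of lines. A minimal sketch (the file path assumes the download step of Demo 1-1 below has been completed):

 ```python
 # Minimal sketch: reading one EEGLAB epochs file with MNE-python.
 # Path assumes the dataset was downloaded as in Demo 1-1 below.
 from mne.io import read_epochs_eeglab
 epochs = read_epochs_eeglab('dataset/rawdata/epochs_animal2.set', verbose=False)
 print(epochs.get_data().shape)  # (n_trials, n_channels, n_time_points)
 ```
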
 ## Part 1. Accessing dataset
-<br>
 
 ### 1-1. Download dataset and MNE-python module
 
-The dataset has been uploaded on G-Node and can be accessed by git command, by typing <code>git clone https://gin.g-node.org/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al</code>. However, it's currently not functioning because of the large size of each dataset (>100 MB). Instead, you can use <code>gin</code> command or custom function written below to copy dataset into your work environment. Also, you need to install *MNE-Python* module using *pip* command to load EEGLAB-formatted EEG data.
+The dataset has been uploaded on G-Node and can be accessed with git by typing <code>git clone https://gin.g-node.org/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al</code>. However, this currently does not work because of the large size of each dataset (>100 MB). Instead, you can use the *gin* command or the custom function written below to copy the dataset into your work environment. In the *gin* repository, a Python script <code>download_sample.py</code> is provided. It requires neither the *git* nor the *gin* command, using only the <code>requests</code> module of Python 3. Try typing <code>python download_sample.py</code> in a terminal after changing to the desired directory. In this notebook document, Demo 1-1 consists of the download_sample.py script.
 
 > Warning: Direct cloning using *git clone git@gin.g-node.org:/hiobeen/Mouse_hdEEG_ASSR_Hwang_et_al.git* may not work because of the large size of each dataset (>100 MB).
 
@@ -64,22 +63,25 @@ To download dataset and install MNE-python module into your environment (local m
 
 > Note: Throughout this step-by-step demonstration, we will use data from one animal (Animal #2). Unnecessary data files will not be downloaded, to prevent a long download time. To download the whole dataset, change <code>dataset_to_download = [2]</code> to <code>dataset_to_download = [1,2,3,4,5,6]</code>.
 
-
+Also, you need to install the *MNE-Python* module using *pip* to load the EEGLAB-formatted EEG data. The *pip* install command is located at the end of the <code>download_sample.py</code> script.
 
 
 ```python
-# Demo 1-1. Setting an enviroment
-dir_origin = 'content/' # <- Change this in local machine
+# Demo 1-1. Setting an environment (download_sample.py)
+from os import listdir, mkdir, path, system, getcwd
+dir_origin = getcwd()+'/' # <- Change this on a local machine
 dir_dataset= 'dataset/'
+print('\n1)============ Start Downloading =================\n')
+print('Target directory ... => [%s%s]'%(dir_origin,dir_dataset))
 
+#!rm -rf /content/dataset/
 import requests
-from os import listdir, mkdir, path, system
-def download_dataset( dataset_to_download = range(1,7), dir_dataset = dir_dataset ): 
+def download( dataset_to_download = range(1,7), dir_dataset = dir_dataset ):
   # Check directory
   if not path.isdir('%s%s'%(dir_origin,dir_dataset)):
     mkdir('%s%s'%(dir_origin,dir_dataset))
     mkdir('%s%s/rawdata/'%(dir_origin,dir_dataset))
-  
+
   # File names to be downloaded
   file_ids = [ 'meta.csv', 'montage.csv' ]
   for set_id in dataset_to_download:
@@ -93,57 +95,61 @@ def download_dataset( dataset_to_download = range(1,7), dir_dataset = dir_datase
     if path.isfile(fname_dest) is False:
       print('...copying to [%s]...'%fname_dest)
       file_url = '%s%s'%(repo_url, file_id)
-      r = requests.get(file_url, stream = True)  
-      with open(fname_dest, "wb") as file:  
+      r = requests.get(file_url, stream = True)
+      with open(fname_dest, "wb") as file:
           for block in r.iter_content(chunk_size=1024):
               if block: file.write(block)
     else:
       print('...skipping already existing file [%s]...'%fname_dest)
 
 # Initiate downloading
-print('\n============= Start Downloading =================\n')
 dataset_to_download = [2] # Partial download to prevent long download time
-#dataset_to_download = [1,2,3,4,5,6]
-download_dataset(dataset_to_download)
+#dataset_to_download = [1,2,3,4,5,6] # Download of whole dataset
+download(dataset_to_download)
 print('\n============= Download finished ==================\n\n')
 
 # List up 'dataset/' directory
-print('\n==== List of available files in google drive ====\n')
+print('\n2)=== List of available files in dataset folder ===\n')
 print(listdir('%sdataset/'%dir_origin))
 print('\n============= End of the list ==================\n\n')
 
 # List up dataset/rawdata/*.set and ~/*.fdt files
-print('\n==== List of available files in google drive ====\n')
+print('\n3)=== List of available files in rawdata folder ===\n')
 print(listdir('%sdataset/rawdata/'%dir_origin))
 print('\n============= End of the list ==================\n\n')
 
 # Install mne-python module
 system('pip install mne');
+
+# Make figure output directory
+dir_fig = 'figures/'
+if not path.isdir(dir_fig): mkdir('%s%s'%(dir_origin, dir_fig))
 ```
 
     
-    ============= Start Downloading =================
+    1)============ Start Downloading =================
     
-    ...skipping already existing file [content/dataset/meta.csv]...
-    ...skipping already existing file [content/dataset/montage.csv]...
-    ...skipping already existing file [content/dataset/rawdata/epochs_animal2.set]...
-    ...skipping already existing file [content/dataset/rawdata/epochs_animal2.fdt]...
+    Target directory ... => [/content/dataset/]
+    ...skipping already existing file [/content/dataset/meta.csv]...
+    ...skipping already existing file [/content/dataset/montage.csv]...
+    ...skipping already existing file [/content/dataset/rawdata/epochs_animal2.set]...
+    ...skipping already existing file [/content/dataset/rawdata/epochs_animal2.fdt]...
     
     ============= Download finished ==================
     
     
     
-    ==== List of available files in google drive ====
    2)=== List of available files in dataset folder ===
     
-    ['meta.csv', 'montage.csv', 'rawdata']
+    ['rawdata', 'meta.csv', 'montage.csv']
     
     ============= End of the list ==================
     
     
     
-    ==== List of available files in google drive ====
    3)=== List of available files in rawdata folder ===
     
-    ['epochs_animal2.fdt', 'epochs_animal2.set']
+    ['epochs_animal2.set', 'epochs_animal2.fdt']
     
     ============= End of the list ==================
     
@@ -152,7 +158,10 @@ system('pip install mne');
 
 ### 1-2. Accessing meta-data table
 
-File *meta.csv* contains the demographic information of 6 mice. Using <code>read_csv()</code> of *pandas* module, meta-data table can be visualized as follow. 
+File *meta.csv* contains the demographic information of the 6 mice. Using <code>read_csv()</code> of the *pandas* module, the meta-data table can be visualized as follows.
+
+
+
 
 
 ```python
@@ -179,7 +188,7 @@ meta
         text-align: right;
     }
 </style>
-<table border="0" class="dataframe">
+<table border="1" class="dataframe">
   <thead>
     <tr style="text-align: right;">
       <th></th>
@@ -192,7 +201,7 @@ meta
   </thead>
   <tbody>
     <tr>
-      <td>0</td>
+      <th>0</th>
       <td>animal1</td>
       <td>epochs_animal1.set</td>
       <td>589</td>
@@ -200,7 +209,7 @@ meta
       <td>Male</td>
     </tr>
     <tr>
-      <td>1</td>
+      <th>1</th>
       <td>animal2</td>
       <td>epochs_animal2.set</td>
       <td>557</td>
@@ -208,7 +217,7 @@ meta
       <td>Male</td>
     </tr>
     <tr>
-      <td>2</td>
+      <th>2</th>
       <td>animal3</td>
       <td>epochs_animal3.set</td>
       <td>349</td>
@@ -216,7 +225,7 @@ meta
       <td>Male</td>
     </tr>
     <tr>
-      <td>3</td>
+      <th>3</th>
       <td>animal4</td>
       <td>epochs_animal4.set</td>
       <td>493</td>
@@ -224,7 +233,7 @@ meta
       <td>Female</td>
     </tr>
     <tr>
-      <td>4</td>
+      <th>4</th>
       <td>animal5</td>
       <td>epochs_animal5.set</td>
       <td>956</td>
@@ -232,7 +241,7 @@ meta
       <td>Male</td>
     </tr>
     <tr>
-      <td>5</td>
+      <th>5</th>
       <td>animal6</td>
       <td>epochs_animal6.set</td>
       <td>959</td>
@@ -258,7 +267,7 @@ import numpy as np
 def get_eeg_data(dataset_idx, CAL=1e-6):
   f_name = '%s%srawdata/%s'%(dir_origin,dir_dataset,meta.file_name[dataset_idx])
   EEG = loadeeg(f_name, verbose=False)
-  EEG.data = np.moveaxis( EEG.get_data(), 0, 2 ) / CAL
+  EEG.data = np.moveaxis(EEG.get_data(), 0, 2) / CAL
   return EEG, f_name
 
 # Data loading
@@ -269,7 +278,11 @@ print('File name  :  [%s]'%f_name)
 print('File contains [%d channels, %4d time points, %3d trials]'%(EEG.data.shape))
 ```
 
-    File name  :  [content/dataset/rawdata/epochs_animal2.set]
+    /usr/local/lib/python3.6/dist-packages/numba/decorators.py:146: RuntimeWarning: Caching is not available when the 'parallel' target is in use. Caching is now being disabled to allow execution to continue.
+      warnings.warn(msg, RuntimeWarning)
+
+
+    File name  :  [/content/dataset/rawdata/epochs_animal2.set]
     File contains [38 channels, 5200 time points, 557 trials]
 
 
@@ -281,21 +294,26 @@ Note that voltage calibration value (*CAL*) is set to 1e-6 in 0.11.0 version of
 The EEG data were recorded with a 38-electrode array; two of the electrodes served as the ground and reference sites, so 36 channels of data are available in total. Coordinates of each electrode are given in the file [data/montage.csv] and can be accessed and visualized with the following script.
 
 
+
 ```python
 # Demo 1-4. Import montage matrix
 from matplotlib import pyplot as plt; plt.style.use('ggplot')
+plt.rcParams['font.family']='sans-serif'
+plt.rcParams['text.color']='black'; plt.rcParams['axes.labelcolor']='black'
+plt.rcParams['xtick.color']='black' ;plt.rcParams['ytick.color']='black'
+
 from pandas import read_csv
 montage_table = read_csv('%s%smontage.csv'%(dir_origin, dir_dataset))
 elec_montage = np.array(montage_table)[:, 1:3]
 
 # Open figure handle
-plt.figure(figsize=(6,6))
+plt.figure(figsize=(4.5,5))
 
 # Plot EEG channels position (total 36 channels)
 plt.plot( elec_montage[:36,0], elec_montage[:36,1], 'go' )
 for chanIdx in range(36):
   plt.text( elec_montage[chanIdx,0], elec_montage[chanIdx,1]+.2,
-           EEG.info['ch_names'][chanIdx][5:], ha='center', fontsize=10 )
+           EEG.info['ch_names'][chanIdx][5:], ha='center', fontsize=8 )
 
 # Plot Ref/Gnd electrode position
 plt.plot( elec_montage[36:,0], elec_montage[36:,1], 'rs' )
@@ -304,9 +322,9 @@ plt.text(0,-4.2, 'LP', fontsize=12, weight='bold', ha='center',va='center');
 
 plt.xlabel('ML coordinate (mm)'); plt.ylabel('AP coordinate (mm)');
 plt.title('2D electrode montage');
-plt.legend(['Active','Ref/Gnd'], loc='upper right');
+plt.legend(['Active','Ref/Gnd'], loc='upper right', facecolor='w');
 plt.gca().set_facecolor((1,1,1))
-plt.grid(False); plt.axis([-6, 6, -7, 5.5])
+plt.grid(False); plt.axis([-5.5, 6.5, -7, 6])
 
 # Draw head boundary
 def get_boundary():
@@ -330,10 +348,12 @@ def get_boundary():
     -4.6673, -0.9197, -4.6266, -0.7297, -4.5799, -0.5397, -4.5268, -0.3497,-4.4670, -0.1597, -4.4000, 0.03025]).reshape(-1, 2)
 boundary = get_boundary()
 for p in range(len(boundary)-1): plt.plot(boundary[p:p+2,0],boundary[p:p+2,1], 'k-')
+plt.gcf().savefig(dir_fig+'fig1-4.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_12_0.png)
+![png](figures/output_12_0.png)
 
 
 ## Part 2. Plotting Event-Related Potentials
@@ -346,7 +366,7 @@ Event information is saved in struct-type variable EEG.event and you can access
 ```python
 # Demo 2-1. Event profile (sound/light stimuli)
 condNames = ['In-phase','Out-of-phase','Delayed','Advanced',
-             'Continuous','SoundOnly', 'LightOnly'];
+             'Continuous','Sound only', 'Light only'];
 
 plt.figure(figsize=(12,10))
 yshift = .8;
@@ -369,10 +389,11 @@ for condition in range(1,len(condNames)+1):
   plt.title('Condition #%d. %s'%(condition,condNames[condition-1]))
   plt.gca().set_facecolor((1,1,1))
 plt.subplots_adjust(wspace=.3, hspace=.8)
+plt.gcf().savefig(dir_fig+'fig2-1.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_15_0.png)
+![png](figures/output_15_0.png)
 
 
 ### 2-2. Visualizing example single-trial trace
@@ -409,11 +430,12 @@ Using <code>plot_multichan()</code> function, example single-trial EEG trace can
 # Demo 2-2b. Visualization of raw EEG time trace
 trial_index = 0
 y_center = plot_multichan(EEG.times, EEG.data[:,:,trial_index])
-plt.title('Example trial (index=%d) trace'%(1+trial_index))
+plt.title('Example trial #%d trace'%(1+trial_index));
+plt.gcf().savefig(dir_fig+'fig2-2.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_19_0.png)
+![png](figures/output_19_0.png)
 
 
 Note that channels 1 to 36 contain actual EEG data from the 36-channel electrode array (from FP1 to PO8), while channels 37 and 38 contain the binary stimulus profiles (0: no stimulation, 1: stimulation) of light and sound, respectively.
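
 As a quick sanity check, these stimulus channels can be inspected directly. A minimal sketch, assuming <code>EEG</code> has been loaded as in Demo 1-3:

 ```python
 # Minimal sketch: count stimulation samples in the binary stimulus
 # channels (0-based index 36: light, 37: sound) of the first trial.
 light, sound = EEG.data[36, :, 0], EEG.data[37, :, 0]
 print('Light-on samples: %d'%(light > 0).sum())
 print('Sound-on samples: %d'%(sound > 0).sum())
 ```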
@@ -426,16 +448,17 @@ Using same function, <code>plot_multichan()</code>, ERP (Event-related potential
 ```python
 # Demo 2-3. Visualization of ERP time trace
 condNames = ['In-phase', 'Out-of-phase', 'Delayed', 'Advanced',
-             'Continuous','SoundOnly', 'LightOnly']
-targetCondition = 5 # <- Try changing this
+             'Continuous','Sound only', 'Light only']
+targetCondition = 6 # <- Try changing this
 trialIdx = np.where((EEG.events[:,2])==targetCondition)[0]
 erp = np.nanmean(EEG.data[:,:,trialIdx],2)
 c = plot_multichan(EEG.times, erp, spacing = 300 )
-plt.title('Condition #%d: %s (n=%d)'%(targetCondition, condNames[targetCondition-1],len(trialIdx))); 
+plt.title('ERP sample. Condition: %s'%(condNames[targetCondition-1]));
+plt.gcf().savefig(dir_fig+'fig2-3.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_22_0.png)
+![png](figures/output_22_0.png)
 
 
 ### 2-4. ERP in frequency domain
@@ -443,40 +466,120 @@ plt.title('Condition #%d: %s (n=%d)'%(targetCondition, condNames[targetCondition
 To calculate the amplitude of the 40-Hz auditory steady-state response, a fast Fourier transform can be applied as follows.
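
 Before the full demo below, here is a minimal sketch of the idea, assuming <code>EEG</code> is loaded as in Demo 1-3 (sampling rate 2000 Hz); the condition index and channel range are illustrative only:

 ```python
 # Minimal sketch: amplitude of the 40-Hz component of an ERP trace.
 import numpy as np
 erp = np.nanmean(EEG.data[:, :, EEG.events[:, 2] == 1], 2)  # condition #1 trials
 trace = np.mean(erp[:36, :], 0)                             # average over EEG channels
 amp = np.abs(np.fft.rfft(trace)) / len(trace)               # single-sided amplitude
 freqs = np.fft.rfftfreq(len(trace), d=1/2000.)
 print('Amplitude near 40 Hz: %.3f'%amp[np.argmin(np.abs(freqs - 40))])
 ```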
 
 
+```python
+EEG.info['ch_names']
+```
+
+
+
+
+    ['Ch01-FP1',
+     'Ch02-FP2',
+     'Ch03-AF3',
+     'Ch04-AF4',
+     'Ch05-AF7',
+     'Ch06-AF8',
+     'Ch07-F1',
+     'Ch08-F2',
+     'Ch09-F5',
+     'Ch10-F6',
+     'Ch11-FC1',
+     'Ch12-FC2',
+     'Ch13-FC5',
+     'Ch14-FC6',
+     'Ch15-C1',
+     'Ch16-C2',
+     'Ch17-C3',
+     'Ch18-C4',
+     'Ch19-C5',
+     'Ch20-C6',
+     'Ch21-CP1',
+     'Ch22-CP2',
+     'Ch23-CP3',
+     'Ch24-CP4',
+     'Ch25-CP5',
+     'Ch26-CP6',
+     'Ch27-P1',
+     'Ch28-P2',
+     'Ch29-P3',
+     'Ch30-P4',
+     'Ch31-P5',
+     'Ch32-P6',
+     'Ch33-PO3',
+     'Ch34-PO4',
+     'Ch35-PO7',
+     'Ch36-PO8',
+     'LightStim',
+     'SoundStim']
+
+
+
+
 ```python
 # Demo 2-4. Time- and frequency-domain visualization of grand-averaged ERP
-from numpy.fft import fft
-def fft_half(x, Fs=2000): return fft(x)[:int(len(x)/2)]/len(x), np.linspace(0,Fs/2,len(x)/2)
-
-# Get ERP in frequency domain
-erp = np.nanmean(EEG.data[:,:,np.where((EEG.events[:,2])==2)[0]],2)
-ac_erp = np.mean(erp[:36,],0)
-ac_erp_fft,freq = fft_half(ac_erp)
-
-# Plot time-domain signal
-plt.figure(figsize=(14,8))
-plt.subplot(2,1,1)
-plt.plot( EEG.times, ac_erp, linewidth=.5 )
-plt.xlim((-.8,1.8))
-plt.xlabel('Time (sec)')
-plt.ylabel('Amplitude (mV)')
-plt.title('ERP signal in time domain')
-plt.gca().set_facecolor((1,1,1))
+def fft_half(x, Fs=2000): return np.fft.fft(x)[:int(len(x)/2)]/len(x), np.linspace(0,Fs/2,int(len(x)/2))
+plt.figure(figsize=(16,3))
+trialIdx = np.where((EEG.events[:,2])==6)[0]
 
-# Plot frequency-domain signal
-plt.subplot(2,1,2)
-plt.plot( freq, np.abs(ac_erp_fft), linewidth=1 )
-plt.xlim((0,80))
-plt.xlabel('Freq (Hz)')
-plt.ylabel('Amplitude (mV/Hz)')
-plt.title('ERP signal in frequency domain')
+# Channel selection
+ch_frontal = (2,6) # channel index of frontal electrodes 
+ch_parietal = (20,24) # channel index of parietal electrodes
+print( 'Channel selection\n..Frontal channels -> %s'%EEG.info['ch_names'][ch_frontal[0]:ch_frontal[1]] )
+print( '..Parietal channels -> %s\n'%EEG.info['ch_names'][ch_parietal[0]:ch_parietal[1]] )
+
+# Calc ERP traces for visualization
+erp = np.mean(EEG.data[:,:,trialIdx],2)
+frontal_erp = np.mean(erp[ch_frontal[0]:ch_frontal[1],:],0)     # Average of frontal-area channels
+parietal_erp = np.mean(erp[ch_parietal[0]:ch_parietal[1],:],0)  # Average of parietal-area channels
+frontal_erp_fft,freq = fft_half(frontal_erp)
+parietal_erp_fft,freq = fft_half(parietal_erp)
+sound_stim = np.where(erp[-1,:])[0]
+
+# Plot ERP (Time)
+font_size = 13
+color_f = (.68,.210,.27) # Custom color value
+color_p = (.01,.457,.74)
+plt.subplot(1,3,(1,2))
+plt.grid('off')
+plt.plot( EEG.times, frontal_erp,  color= color_f)
+plt.plot( EEG.times, parietal_erp, color= color_p)
+plt.xlim((-.2,1.1))
+plt.ylim((-20,22))
+plt.text( -.1, 19.5, 'Sound', ha='center', fontsize=font_size, color='k' )
+plt.text( -.1, 15.5, 'Frontal', ha='center', fontsize=font_size, color=color_f )
+plt.text( -.1, 11.5,'Parietal', ha='center', fontsize=font_size, color=color_p )
+plt.title('ERP signal in time domain', fontsize=font_size)
+plt.xlabel('Time (sec)', fontsize=font_size)
+plt.ylabel('Voltage (μV)', fontsize=font_size)
+plt.plot( EEG.times[sound_stim], np.zeros(sound_stim.shape)+21, 'k|' )
 plt.gca().set_facecolor((1,1,1))
+  
+# Plot ERP (Frequency)
+def smoothing(x, L=5): # smoothing function
+    return np.convolve(np.ones(L,'d')/np.ones(L,'d').sum(),
+    np.r_[x[L-1:0:-1],x, x[-2:-L-1:-1]],mode='valid')[round((L-1)*.5):round(-(L-1)*.5)]
+plt.subplot(1,3,3)
+plt.plot( freq, smoothing(np.abs(frontal_erp_fft)), linewidth=2, color=color_f )
+plt.plot( freq, smoothing(np.abs(parietal_erp_fft)), linewidth=2, color=color_p )
+plt.xlim((10,70))
+plt.ylim((0,.5))
+plt.xlabel('Freq (Hz)', fontsize=font_size)
+plt.ylabel('Amplitude (μV/Hz)', fontsize=font_size)
+plt.title('ERP signal in frequency domain', fontsize=font_size)
+plt.gca().set_facecolor((1,1,1))
+plt.subplots_adjust(wspace=.3, hspace=.1, bottom=.2)
 
-plt.subplots_adjust(wspace=.3, hspace=.5)
+plt.gcf().savefig(dir_fig+'fig2-4.png', format='png', dpi=300);
 ```
 
+    Channel selection
+    ..Frontal channels -> ['Ch03-AF3', 'Ch04-AF4', 'Ch05-AF7', 'Ch06-AF8']
+    ..Parietal channels -> ['Ch21-CP1', 'Ch22-CP2', 'Ch23-CP3', 'Ch24-CP4']
+    
 
-![png](figure_output/output_24_0.png)
+
+
+![png](figures/output_25_1.png)
 
 
 ### 2-5. ERP in time-frequency domain
@@ -527,7 +630,7 @@ def get_spectrogram( data, t=EEG.times, Fs=2000,
 
 # Calculation & Visualization
 from matplotlib import cm
-plt.figure(figsize=(18,6))
+plt.figure(figsize=(16,5))
 conditions = [6,4,1,3,2]
 for targetCondition in conditions:
   trialIdx = np.where((EEG.events[:,2])==targetCondition)[0]
@@ -535,39 +638,48 @@ for targetCondition in conditions:
   # Calc Frontal Mean
   erp = np.mean(EEG.data[:,:,trialIdx],2)
   Spec, Spec_t, Spec_f = get_spectrogram(erp)
-  frontal_Spec = np.mean(Spec[:,:,2:5],2)
-  parietal_Spec = np.mean(Spec[:,:,20:24],2)
+  frontal_Spec = np.mean(Spec[:,:,ch_frontal[0]:ch_frontal[1]],2)
+  parietal_Spec = np.mean(Spec[:,:,ch_parietal[0]:ch_parietal[1]],2)
   
-  frontal_erp = np.mean(erp[2:5,:],0)
-  parietal_erp = np.mean(erp[20:24,:],0)
+  frontal_erp = np.mean(erp[ch_frontal[0]:ch_frontal[1],:],0)
+  parietal_erp = np.mean(erp[ch_parietal[0]:ch_parietal[1],:],0)
   
   # Frontal
   plt.subplot(2,len(conditions),np.where(np.array(conditions)==targetCondition)[0]+1)
   plt.contourf( Spec_t, Spec_f, frontal_Spec.transpose(),
                cmap=cm.jet, levels = np.linspace(0,1.8,100))
-  plt.ylabel('Freq (Hz)')
-  plt.xlabel('Time (sec)')
-  plt.title('%s (Frontal)'%condNames[targetCondition-1])
+  if targetCondition==6:
+    plt.ylabel('Frontal\nFrequency (Hz)', fontsize=font_size)
+    plt.xlabel('Time (sec)', fontsize=font_size)
+  else: plt.axis('off')
+  plt.title('%s'%condNames[targetCondition-1], fontsize=font_size)
   plt.plot(EEG.times, .5*frontal_erp+60, 'w-', linewidth=.5)
   plt.xlim([-.5,1.5])
   plt.ylim([5,80])
+  plt.plot((0,0),(0,100), '--', linewidth = 1, color = (.8,.8,.8))
+  plt.plot((1,1),(0,100), '--', linewidth = 1, color = (.8,.8,.8))
 
   # Parietal
   plt.subplot(2,len(conditions),len(conditions)+np.where(np.array(conditions)==targetCondition)[0]+1)
   plt.contourf( Spec_t, Spec_f, parietal_Spec.transpose(),
                cmap=cm.jet, levels = np.linspace(0,1.8,100))
-  plt.ylabel('Freq (Hz)')
-  plt.xlabel('Time (sec)')
-  plt.title('%s (Parietal)'%condNames[targetCondition-1])
+  if targetCondition==6:
+    plt.ylabel('Parietal\nFrequency (Hz)', fontsize=font_size)
+    plt.xlabel('Time (sec)', fontsize=font_size)
+  else: plt.axis('off')
   plt.plot(EEG.times, .5*parietal_erp+60, 'w-', linewidth=.5)
   plt.xlim([-.5,1.5])
   plt.ylim([5,80])
+  plt.plot((0,0),(0,100), '--', linewidth = 1, color = (.8,.8,.8))
+  plt.plot((1,1),(0,100), '--', linewidth = 1, color = (.8,.8,.8))
 
-plt.subplots_adjust(wspace=.3, hspace=.5)
+plt.subplots_adjust(wspace=.1, hspace=.3)
+#plt.colorbar(orientation="horizontal")
+plt.gcf().savefig(dir_fig+'fig2-5.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_26_0.png)
+![png](figures/output_27_0.png)
 
 
 ## Part 3. Drawing topography
@@ -581,7 +693,6 @@ For 2D interpolation, additional *class* of <code>bi_interp2</code> is defined.
 
 ```python
 # Demo 3-1a. Preparation of 2D power topography
-
 """ (1) Class for 2D interpolation """
 import numpy as np
 import matplotlib.pyplot as plt
@@ -721,69 +832,68 @@ def plot_topo2d(data, clim=(-15,25), montage_file='%s%smontage.csv'%(dir_origin,
 
 In typical EEG recordings, large-amplitude artifacts coming from a few bad channels can be a problem. To prevent this, researchers have developed various methods of artifact rejection and bad-channel selection. Here, a simple bad-channel identification method is implemented using inter-channel correlation. Data from the bad channels identified here will be ignored in the topography and replaced by the median value hereafter.
 
+
 ```python
 # Demo 3-1b. Identification of bad-channel using correlation
 from scipy.stats import ttest_1samp as ttest
-
-ga_erp = np.nanmean(EEG.data[:36,:,:],2)
+ga_erp = np.nanmean(EEG.data[:36,:,:],2) # grand-averaged ERP
 corr_coef = np.corrcoef(ga_erp)
 bad_channels = []
 for chIdx in range(corr_coef.shape[1]):
 
   # Eliminating self-correlation
-  r_data = corr_coef[chIdx,:].tolist()
-  r_data.pop(chIdx)
+  r_data = corr_coef[chIdx,:].tolist() 
+  r_data.pop(chIdx) # eliminating self-correlation
   
   # Calculating p-value from One-sample t-test
   pval = ttest(r_data,popmean=0).pvalue
-  if pval > 1: pval=1
   if pval > .001: 
     bad_channels.append(chIdx)
     marker = '> '
-  else:
-    marker = '  '
-  print( marker+'Ch=%02d) p = %.3f, R_mean = %.3f, R_std = %.3f'%(
+  else: marker = '  '
+  if pval > 1: pval=1
+  print( marker+'Ch=%02d) p = %.5f, R_mean = %.3f, R_std = %.3f'%(
       chIdx+1, pval, np.mean(r_data), np.std(r_data)))
 print('\nLow-correlated (bad) channels: %s'%(bad_channels))
 
 ```
 
-      Ch=01) p = 0.000, R_mean = 0.561, R_std = 0.335
-      Ch=02) p = 0.000, R_mean = 0.537, R_std = 0.340
-      Ch=03) p = 0.000, R_mean = 0.679, R_std = 0.301
-      Ch=04) p = 0.000, R_mean = 0.657, R_std = 0.324
-      Ch=05) p = 0.000, R_mean = 0.651, R_std = 0.321
-      Ch=06) p = 0.000, R_mean = 0.618, R_std = 0.338
-      Ch=07) p = 0.000, R_mean = 0.742, R_std = 0.268
-      Ch=08) p = 0.000, R_mean = 0.736, R_std = 0.289
-      Ch=09) p = 0.000, R_mean = 0.744, R_std = 0.272
-      Ch=10) p = 0.000, R_mean = 0.711, R_std = 0.301
-      Ch=11) p = 0.000, R_mean = 0.762, R_std = 0.223
-      Ch=12) p = 0.000, R_mean = 0.711, R_std = 0.231
-      Ch=13) p = 0.000, R_mean = 0.763, R_std = 0.246
-      Ch=14) p = 0.000, R_mean = 0.715, R_std = 0.282
-      Ch=15) p = 0.000, R_mean = 0.747, R_std = 0.207
-      Ch=16) p = 0.000, R_mean = 0.755, R_std = 0.224
-      Ch=17) p = 0.000, R_mean = 0.753, R_std = 0.214
-      Ch=18) p = 0.000, R_mean = 0.749, R_std = 0.232
-      Ch=19) p = 0.000, R_mean = 0.765, R_std = 0.223
-      Ch=20) p = 0.000, R_mean = 0.721, R_std = 0.270
-      Ch=21) p = 0.000, R_mean = 0.701, R_std = 0.193
-      Ch=22) p = 0.000, R_mean = 0.733, R_std = 0.202
-      Ch=23) p = 0.000, R_mean = 0.724, R_std = 0.194
-      Ch=24) p = 0.000, R_mean = 0.769, R_std = 0.223
-      Ch=25) p = 0.000, R_mean = 0.752, R_std = 0.181
-      Ch=26) p = 0.000, R_mean = 0.710, R_std = 0.257
-      Ch=27) p = 0.000, R_mean = 0.398, R_std = 0.163
-      Ch=28) p = 0.000, R_mean = 0.648, R_std = 0.182
-      Ch=29) p = 0.000, R_mean = 0.647, R_std = 0.188
-      Ch=30) p = 0.000, R_mean = 0.737, R_std = 0.197
-      Ch=31) p = 0.000, R_mean = 0.619, R_std = 0.140
-      Ch=32) p = 0.000, R_mean = 0.658, R_std = 0.228
-    > Ch=33) p = 0.051, R_mean = 0.106, R_std = 0.306
-      Ch=34) p = 0.000, R_mean = 0.262, R_std = 0.135
-    > Ch=35) p = 0.012, R_mean = 0.124, R_std = 0.270
-      Ch=36) p = 0.000, R_mean = 0.281, R_std = 0.123
+      Ch=01) p = 0.00000, R_mean = 0.561, R_std = 0.335
+      Ch=02) p = 0.00000, R_mean = 0.537, R_std = 0.340
+      Ch=03) p = 0.00000, R_mean = 0.679, R_std = 0.301
+      Ch=04) p = 0.00000, R_mean = 0.657, R_std = 0.324
+      Ch=05) p = 0.00000, R_mean = 0.651, R_std = 0.321
+      Ch=06) p = 0.00000, R_mean = 0.618, R_std = 0.338
+      Ch=07) p = 0.00000, R_mean = 0.742, R_std = 0.268
+      Ch=08) p = 0.00000, R_mean = 0.736, R_std = 0.289
+      Ch=09) p = 0.00000, R_mean = 0.744, R_std = 0.272
+      Ch=10) p = 0.00000, R_mean = 0.711, R_std = 0.301
+      Ch=11) p = 0.00000, R_mean = 0.762, R_std = 0.223
+      Ch=12) p = 0.00000, R_mean = 0.711, R_std = 0.231
+      Ch=13) p = 0.00000, R_mean = 0.763, R_std = 0.246
+      Ch=14) p = 0.00000, R_mean = 0.715, R_std = 0.282
+      Ch=15) p = 0.00000, R_mean = 0.747, R_std = 0.207
+      Ch=16) p = 0.00000, R_mean = 0.755, R_std = 0.224
+      Ch=17) p = 0.00000, R_mean = 0.753, R_std = 0.214
+      Ch=18) p = 0.00000, R_mean = 0.749, R_std = 0.232
+      Ch=19) p = 0.00000, R_mean = 0.765, R_std = 0.223
+      Ch=20) p = 0.00000, R_mean = 0.721, R_std = 0.270
+      Ch=21) p = 0.00000, R_mean = 0.701, R_std = 0.193
+      Ch=22) p = 0.00000, R_mean = 0.733, R_std = 0.202
+      Ch=23) p = 0.00000, R_mean = 0.724, R_std = 0.194
+      Ch=24) p = 0.00000, R_mean = 0.769, R_std = 0.223
+      Ch=25) p = 0.00000, R_mean = 0.752, R_std = 0.181
+      Ch=26) p = 0.00000, R_mean = 0.710, R_std = 0.257
+      Ch=27) p = 0.00000, R_mean = 0.398, R_std = 0.163
+      Ch=28) p = 0.00000, R_mean = 0.648, R_std = 0.182
+      Ch=29) p = 0.00000, R_mean = 0.647, R_std = 0.188
+      Ch=30) p = 0.00000, R_mean = 0.737, R_std = 0.197
+      Ch=31) p = 0.00000, R_mean = 0.619, R_std = 0.140
+      Ch=32) p = 0.00000, R_mean = 0.658, R_std = 0.228
+    > Ch=33) p = 0.05145, R_mean = 0.106, R_std = 0.306
+      Ch=34) p = 0.00000, R_mean = 0.262, R_std = 0.135
+    > Ch=35) p = 0.01168, R_mean = 0.124, R_std = 0.270
+      Ch=36) p = 0.00000, R_mean = 0.281, R_std = 0.123
     
     Low-correlated (bad) channels: [32, 34]
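
 The replacement itself is a one-liner; a minimal sketch on the grand-averaged ERP computed above (the time slice is hypothetical), mirroring the step used inside Demo 3-2 below:

 ```python
 # Minimal sketch: mask the identified bad channels with the median value
 # before drawing a topography (as done in Demo 3-2 below).
 topo_in = np.mean(ga_erp[:, 1000:1100], 1)            # hypothetical time slice
 topo_in[bad_channels] = np.median(topo_in.flatten())  # bad-channel replacement
 ```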
 
@@ -801,8 +911,8 @@ for targetCondition in conditions:
 
   # Calc ERP traces for visualization
   erp = np.mean(EEG.data[:,:,trialIdx],2)
-  frontal_erp = np.mean(erp[2:5,:],0)     # Average of frontal-area channels
-  parietal_erp = np.mean(erp[20:24,:],0)  # Average of parietal-area channels
+  frontal_erp = np.mean(erp[ch_frontal[0]:ch_frontal[1],:],0)     # Average of frontal-area channels
+  parietal_erp = np.mean(erp[ch_parietal[0]:ch_parietal[1],:],0)  # Average of parietal-area channels
 
   # Plot ERP trace
   plt.figure(figsize=(14,20))
@@ -819,19 +929,18 @@ for targetCondition in conditions:
   plt.ylim((-30,45))
   plt.axis('off')
   plt.gca().set_facecolor((1,1,1))
-  plt.text( -.1, 27, 'Frontal', ha='center', weight='bold', fontsize=12, color=color_f )
-  plt.text( -.1, 20,'Parietal', ha='center', weight='bold', fontsize=12, color=color_p )
+  plt.text( -.1, 27, 'Frontal', ha='center', fontsize=12, color=color_f )
+  plt.text( -.1, 20,'Parietal', ha='center', fontsize=12, color=color_p )
   plt.title('%s'%condNames[targetCondition-1])
   
   # Calculate topography data 
   t_slice = [  (.005, .025), (.0250, .0550), (.200, .210),(.502, .512), (.925, .935)  ]  
   y_mark = [-20, 33, 10, 5, 10]
-  colors = ['r','g','b','c','m'] # color marker
   topos = []
   for tIdx in range(len(t_slice)):
     x_start, x_end, y_pos = t_slice[tIdx][0], t_slice[tIdx][1], y_mark[tIdx]
     idx_start, idx_end = np.where(EEG.times==x_start)[0][0], np.where(EEG.times==x_end)[0][0]
-    plt.plot( EEG.times[[idx_start,idx_end]], [y_pos, y_pos], colors[tIdx]+'|-')
+    plt.plot( EEG.times[[idx_start,idx_end]], [y_pos, y_pos], 'k|-')
     topo_in = np.mean( erp[:36,idx_start:idx_end],1 )
     # bad-channel replacement
     topo_in[bad_channels] = np.median( topo_in.flatten() ) 
@@ -845,11 +954,28 @@ for targetCondition in conditions:
     topo_x = np.linspace(topo_shift[tIdx][0]-topo_size[0]*.5,topo_shift[tIdx][0]+topo_size[0]*.5,topos[tIdx].shape[0])
     topo_y = np.linspace(topo_shift[tIdx][1]-topo_size[1]*.5,topo_shift[tIdx][1]+topo_size[1]*.5,topos[tIdx].shape[1])
     plt.contourf( topo_x, topo_y, topos[tIdx], cmap=cm.jet, levels=topo_clim )
-  plt.show()
+  plt.draw()
+plt.gcf().savefig(dir_fig+'fig3-2.png', format='png', dpi=300);
 ```
 
 
-![png](figure_output/output_33_0.png)
+![png](figures/output_34_0.png)
+
+
+
+![png](figures/output_34_1.png)
+
+
+
+![png](figures/output_34_2.png)
+
+
+
+![png](figures/output_34_3.png)
+
+
+
+![png](figures/output_34_4.png)
 
 
 ### 3-3. Band-limited power topography
@@ -880,7 +1006,7 @@ erp = np.mean(EEG.data[:,:,trialIdx],2)
 period = [ (-.5,0.), (0.,.5), (.5, 1.), (1.,1.5) ] # time in seconds
 periodName = ['Pre-stim', 'Stim (Early)', 'Stim (Late)', 'Post-stim'];
 freq = 40 # Hz
-plt.figure(figsize=(15,3))
+plt.figure(figsize=(16,3))
 for periodIdx in range(len(period)):
   tIdx = (EEG.times>period[periodIdx][0]) & (EEG.times<=period[periodIdx][1])
   
@@ -892,12 +1018,15 @@ for periodIdx in range(len(period)):
   plt.subplot(1,len(period),periodIdx+1)
   plot_topo2d(power, clim=(0,3) )
   plt.title('%s, t = [%.1f, %.1f]s'%(periodName[periodIdx],period[periodIdx][0],period[periodIdx][1]))
-
+plt.gcf().savefig(dir_fig+'fig3-3.png', format='png', dpi=300);
 ```
 
+    /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:118: RuntimeWarning: invalid value encountered in greater
+    /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:119: RuntimeWarning: invalid value encountered in less
 
 
-![png](figure_output/output_35_1.png)
+
+![png](figures/output_36_1.png)
 
 
 ### 3-4. Band-power topography: Comparison across various experimental conditions
@@ -908,7 +1037,7 @@ Applying the same routine above, power topography figures of five different expe
 ```python
 # Demo 3-4. Band-power topography: Summary comparison across various stimulus conditions
 freq = 40 # Hz
-plt.figure(figsize=(17,4))
+plt.figure(figsize=(16,4))
 conditions = [6,4,1,3,2]
 tIdx = (EEG.times>0) & (EEG.times<=1)
 for targetCondition in conditions:
@@ -922,12 +1051,20 @@ for targetCondition in conditions:
   # Draw
   plt.subplot(1,len(conditions),np.where(np.array(conditions)==targetCondition)[0]+1)
   plot_topo2d(power, clim=(0,7) )
-  plt.title('%s'%condNames[targetCondition-1])
+  plt.title('%s'%condNames[targetCondition-1], fontsize = font_size)
+  plt.axis('off')
  if targetCondition != conditions[0]: plt.ylabel('')
+  
+plt.subplots_adjust(wspace=.1, hspace=.3)
+plt.gcf().savefig(dir_fig+'fig3-4.png', format='png', dpi=300);
 ```
 
+    /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:118: RuntimeWarning: invalid value encountered in greater
+    /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:119: RuntimeWarning: invalid value encountered in less
+
+
 
-![png](figure_output/output_37_1.png)
+![png](figures/output_38_1.png)
 
 
 <br>