
Updated documentation

Ioannis Agtzidis 4 years ago
parent
commit
fea5f7bbd2
1 changed file with 19 additions and 18 deletions

+ 19 - 18
README.md

@@ -1,6 +1,7 @@
-Here we provide 15 360-degree equirectangular videos, togeher with eye tracking
-recordings of 13 subjects and a manually labelled ground-truth subset of all
-gaze recordings. Finally we also provide a algorithmic implemetntation of the
+Here we provide 15 360-degree equirectangular videos, together with the eye
+tracking recordings of 13 subjects and a manually labelled ground-truth subset
+of the gaze recordings. We also offer information about the manual labelling
+tool in Section 1.4. Finally, we provide an algorithmic implementation of
 the process that was followed during manual labelling.
 
 ## 1. CONTENT
@@ -19,13 +20,13 @@ commands.
 As an approximation to naturalistic stimuli, we used 14 Youtube videos from
 diverse contexts. The videos were published under the Creative Commons
 license, and we give attribution to the original creators by attaching
-the Youtude IDS at the end of each video clip. 
+the Youtube IDs at the end of each video clip. 
 
-A single syntetic stimulus was created by the authors and comprises of a moving
+A single synthetic stimulus was created by the authors and consists of a moving
 target, which tries to elicit many different kinds of eye motion (i.e.
 fixations, saccades, smooth pursuit (SP), head pursuit, the vestibulo-ocular
 reflex (VOR), and optokinetic nystagmus (OKN)).
 
-Files found in `videos`.
+The video files are found in the `videos` folder.
 
 ### 1.2 Gaze Recordings
 
@@ -33,17 +34,17 @@ In total 13 subjects participated in our study, which amounts to ca. 3.5 hours
 of eye tracking data. Information about the participants can be found
 [here](https://web.gin.g-node.org/ioannis.agtzidis/360_em_dataset/src/master/participant_info.csv).
 
-Gaze files are found in `gaze` folder.
+Gaze files are found in the `gaze` folder.
 
 ### 1.3 Ground Truth
 
-we manually labelled part of the full data set accroding to the rules presented
+We manually labelled part of the full data set according to the rules presented
 in our paper. The labelled gaze recordings were split into two non-overlapping
-(subject wise) subsets, where one can be use as training and the other as
-testing. In total the hand-lablled portion comprise of 2 labelled gaze files per
+(subject-wise) subsets, where one can be used for training and the other for
+testing. In total, the hand-labelled portion comprises 2 labelled gaze files per
 video stimulus and about 16 % of the data.
 
-Manually annotated ground-truthf files are found in `ground_truth` folder.
+Manually annotated ground-truth files are found in the `ground_truth` folder.
 
 ### 1.4 Manual Labelling Interface
 
@@ -53,20 +54,20 @@ The GTA-VI repository contains the extension that was developed for labelling
 this data set and enables labelling of 360-degree equirectangular recordings with our
 two-tier method.
 
-### 1.5 Algorithmic Implemetation
+### 1.5 Algorithmic Implementation
 
 We provide an algorithmic implementation for eye movement classification based
-on the definitions that we provied in our paper. The resulting eye movements
-adter applying our algorithms to ground truth files can be foundin the relevant
+on the definitions that we provide in our paper. The eye movements that result
+from applying our algorithms to the ground-truth files can be found in the corresponding output
 files.
 
-Algorithms and their output are found in `em_algorithms`, `output_I-S5T_combined`, `output_I-S5T_FOV`, `output_I-S5T_E+H` folders.
+The algorithms and their output are found in the `em_algorithms`, `output_I-S5T_combined`, `output_I-S5T_FOV`, and `output_I-S5T_E+H` folders.
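
As a rough usage sketch, the snippet below shows one way to tally the eye movement labels stored in one of these output folders. It assumes the output files are plain-text ARFF files (see Section 2) with a label attribute named `EYE_MOVEMENT_TYPE` and a `.arff` extension; the attribute name and extension are assumptions for illustration, not taken from the repository.

```python
#!/usr/bin/env python3
"""Rough sketch: count eye movement labels in one output folder.

Assumptions (not verified against the repository): the files are plain-text
ARFF, the label attribute is named EYE_MOVEMENT_TYPE, and the files use the
.arff extension.
"""
import glob
from collections import Counter


def count_labels(path, label_attr="EYE_MOVEMENT_TYPE"):
    """Tally the values of one attribute in a simple comma-separated ARFF file."""
    attributes = []          # attribute names in declaration order
    counts = Counter()
    in_data = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("%"):   # skip comments and %@METADATA lines
                continue
            lower = line.lower()
            if lower.startswith("@attribute"):
                attributes.append(line.split()[1])
            elif lower.startswith("@data"):
                in_data = True
                label_idx = attributes.index(label_attr)
            elif in_data:
                counts[line.split(",")[label_idx]] += 1
    return counts


if __name__ == "__main__":
    totals = Counter()
    for arff_file in glob.glob("output_I-S5T_combined/*.arff"):
        totals += count_labels(arff_file)
    print(totals)
```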
 
 ## 2. DATA FORMAT
 
-All the function use the ARFF data format for input and output to the disk. The initial
-ARFF format was extended as described in Agtzidis et al. (2016) and was further expanded 
-for 360-degree gaze data.
+For the whole analysis we use the ARFF data format for all input and output.
+The initial ARFF format was extended as described in Agtzidis et al. (2016)
+and was further expanded for 360-degree gaze data.
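
To give a feel for the format, a minimal sketch of what such an extended ARFF header could look like is shown below; the `gaze_360` relation and the `%@METADATA` comments are explained just after it, while the attribute names and metadata fields shown here are illustrative placeholders in the spirit of Agtzidis et al. (2016), not copied from the actual recordings.

```
@RELATION gaze_360

%@METADATA width_px 3840
%@METADATA height_px 1920

@ATTRIBUTE time NUMERIC
@ATTRIBUTE x NUMERIC
@ATTRIBUTE y NUMERIC
@ATTRIBUTE confidence NUMERIC

@DATA
0,1920.0,960.0,1.0
```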
 
 Here the "@RELATION" is set to gaze_360 to distinguish the recordings from
 plain gaze recordings. We also make use of the "%@METADATA" special comments