URUMETRICS
Uruguayan Chatbot Project
Description
This repository contains the code to extract the metrics and generate the messages of the Uruguayan Chatbot Project.
Installation
This code should not be used directly; it should be embedded in a datalad repository following the ChildProject format.
To add this repository as a submodule, run the following command:
mkdir scripts && cd scripts
git submodule add git@gin.g-node.org:/LAAC-LSCP/URUMETRICS-CODE.git
and install the necessary dependencies
pip install -r requirements.txt
Repository structure
acoustic_annotations
contains the code that computes the acoustic annotations
turn_annotations
contains the code that computes the conversational annotations
import_data
contains the code that imports the new recordings and new annotations to the data set
compute_metrics
contains the code that computes and saves the metrics
generate_messages
contains the code that reads the metrics file and generates the messages sent to the families
Requirements
Running requirements
All the runnable files should be run from the root of the data set (i.e. the directory containing the scripts
directory). If not, an exception will be raised and the code will stop running.
Naming convention: recording file names
Recordings should be WAV files whose names follow this naming convention:
CHILD-ID_[info1_info2_..._infoX]_YYYYMMDD_HHMMSS.wav
where

- `YYYYMMDD` corresponds to the date in ISO 8601 format (`YYYY` for the year, `MM` for the month (from 01 to 12), and `DD` for the day (from 01 to 31)),
- `HHMMSS` corresponds to the time in ISO 8601 format (`HH` for hours (from 00 to 23, 24-hour clock system), `MM` for minutes (from 00 to 59), and `SS` for seconds (from 00 to 59)),
- `[info1_info2_..._infoX]` corresponds to optional information (written without `[` and `]`) separated by underscores `_`,
- `CHILD-ID` may use any character except the underscore character `_`.

Additional information will be stored in the metadata file `metadata/recordings.csv` in the column `experiment_stage`.
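As an illustration, the convention above can be checked with a short script. This helper is not part of URUMETRICS-CODE; the function name and return values are hypothetical:

```python
import re
from datetime import datetime

# Sketch of a validator for the naming convention
# CHILD-ID_[info1_info2_..._infoX]_YYYYMMDD_HHMMSS.wav
PATTERN = re.compile(
    r"^(?P<child_id>[^_]+)"            # CHILD-ID: any character but '_'
    r"(?:_(?P<info>.+?))?"             # optional info1_info2_..._infoX
    r"_(?P<date>\d{8})_(?P<time>\d{6})\.wav$"
)

def parse_recording_name(name):
    """Return (child_id, info, datetime) or raise ValueError."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"{name!r} does not follow the naming convention")
    stamp = datetime.strptime(m["date"] + m["time"], "%Y%m%d%H%M%S")
    return m["child_id"], m["info"], stamp
```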
How to use?
The following instructions explain how to use this code when it is embedded (as a submodule) in a ChildProject project managed with datalad.
(0) Define the following bash variables
today=$(date '+%Y%m%d')
dataset="URUMETRICS"
The content of `dataset` should be the name of the data set you are interested in, which should exist in the LAAC-LSCP GIN repository.
(1) Run the following commands to install the data set
datalad install -r git@gin.g-node.org:/LAAC-LSCP/${dataset}.git
cd ${dataset}
datalad get extra/messages/definition
datalad get extra/metrics && datalad unlock extra/metrics
datalad get metadata && datalad unlock metadata
Note that you will only be allowed to clone and install the data set if you are added as a collaborator on GIN. Please ask William or Alex for more information.
(2) Prepare the data set by running the following command
python -u scripts/URUMETRICS-CODE/import_data/prepare_data_set.py
This will create the necessary directories required by ChildProject if they do not exist.
(3) Place the new recordings in `dat/in/recordings/raw`
(4) Place the VTC, VCM, and ALICE annotation files in their respective folders in `dat/in/annotations/{vtc|vcm|alice}/raw`
Note that the annotation files should have unique names (e.g. like the acoustic annotations) and should by no means overwrite the files already present in the aforementioned directories.
(5) Save the data set and push the new annotations to GIN
datalad save recordings -m "Imported new recordings for date ${today}"
datalad save annotations/*/raw -m "Imported raw annotations for date ${today}"
datalad push --to origin
This is a very important step. This allows us to push the new recordings and annotations before running any script that could potentially fail.
(6) Import the new recordings
python -u scripts/URUMETRICS-CODE/import_data/import_recordings.py --experiment Uruguayan_Chatbot_2022
This command will look at the new recordings found in the `raw` directory and add them to the metadata file `metadata/recordings.csv`. If some recordings belong to previously unknown children, they will be added to the metadata file `metadata/children.csv`.
Note that the recording file names should comply with the file naming convention described above!
(7) Extract the acoustic annotations using the following command
python -u scripts/URUMETRICS-CODE/acoustic_annotations/compute_acoustic_annotations.py --path-vtc ./annotations/vtc/raw/VTC_FILE_FOR_WHICH_TO_DERIVE_ACOUSTIC_ANNOTATIONS_FOR.rttm --path-recordings ./recordings/raw/ --save-path ./annotations/acoustic/raw
This command will compute acoustic annotations (mean pitch, pitch range) for the VTC file passed as argument. The output file will have the same name as the input VTC file, with the `rttm` extension replaced by `csv`. In the previous command, replace `VTC_FILE_FOR_WHICH_TO_DERIVE_ACOUSTIC_ANNOTATIONS_FOR` with the name of the RTTM file for which you want to compute acoustic annotations.
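For illustration, one plausible definition of these two quantities is sketched below. This is an assumption about how mean pitch and pitch range are derived, not the actual implementation in `compute_acoustic_annotations.py`:

```python
# Illustrative sketch (assumption, not the URUMETRICS implementation):
# given per-frame f0 estimates in Hz for one speech segment, with 0
# marking unvoiced frames, derive a mean pitch and a pitch range.
def pitch_stats(f0_frames):
    voiced = [f for f in f0_frames if f > 0]  # keep voiced frames only
    if not voiced:
        return None  # no voiced frames: statistics are undefined
    mean_pitch = sum(voiced) / len(voiced)
    pitch_range = max(voiced) - min(voiced)
    return mean_pitch, pitch_range
```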
(8) Run the following commands to convert and import the annotations to the ChildProject format:
python -u scripts/URUMETRICS-CODE/import_data/import_annotations.py --annotation-type VTC --annotation-file VTC_FILE.rttm
python -u scripts/URUMETRICS-CODE/import_data/import_annotations.py --annotation-type VCM --annotation-file VCM_FILE.vcm
python -u scripts/URUMETRICS-CODE/import_data/import_annotations.py --annotation-type ALICE --annotation-file ALICE_FILE.txt
python -u scripts/URUMETRICS-CODE/import_data/import_annotations.py --annotation-type ACOUSTIC --annotation-file ACOUSTIC_FILE.csv
This will import the VTC, VCM, ALICE, and ACOUSTIC annotations contained in the specified files. Note that you shouldn't specify the full path to the file, only its raw file name and extension.
This script can also take an additional optional parameter `--recording`. When used, only the annotations pertaining to the specified recording (`filename.wav`) will be imported. This can be useful when you need to import the annotations for one specific recording rather than all the annotations for all the recordings.
(9) Compute the conversational annotations using the following command:
python scripts/URUMETRICS-CODE/turn_annotations/compute_turn_annotations.py --save-path ./annotations/conversations/raw --save-name CONV_${today}
This command will only compute the conversational annotations for the newly imported VTC files.
(10) Import the conversational annotations using the following command
python -u scripts/URUMETRICS-CODE/import_data/import_annotations.py --annotation-type CONVERSATIONS --annotation-file CONVERSATIONS_FILE.csv
(11) Run the following command to compute ACLEW metrics as well as the additional metrics defined in `compute_metrics/metrics_functions.py`:
python -u scripts/URUMETRICS-CODE/compute_metrics/metrics.py
This command will generate a file `metrics.csv` which will be stored in `extra/metrics`. If the file already exists, new lines will be appended.
Note that the metrics are only computed for newly imported recordings, not for all the files. If no annotations are linked to the new files (e.g. you forgot to import them), the columns will be empty.
(12) Generate the messages using the following command
python -u scripts/URUMETRICS-CODE/generate_messages/messages.py [--date YYYYMMDD]
This command will create a file in `extra/messages/generated` with the name pattern `messages_YYYYMMDD.csv`.
The file will contain the messages that correspond to each new audio file. The `--date` parameter specifies the date for which to generate the messages: if it is before the current date, only recordings available at that date will be considered, which makes it possible to re-generate past messages if needed. If no date is specified, the current date is used.
Do something with the generated message file
(13) Save the data set and push everything to GIN
datalad save annotations/*/raw -m "Imported derived raw annotations for date ${today}"
datalad save annotations/*/converted -m "Converted annotations for date ${today}"
datalad save metadata -m "Updated metadata for date ${today}"
datalad save extra/metrics -m "Computed new metrics for date ${today}"
datalad save extra/messages/generated -m "Message generated for date ${today}"
datalad save .
datalad push --to origin
(14) Uninstall the data set
git annex dead here
datalad push --to origin
cd ..
datalad remove -d ${dataset}
Return codes
Every command returns either a `0` (i.e. no problem) or `1` (i.e. problem) return code. Commands might also print informational, warning, and error messages to STDERR.
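A driver script can rely on this convention to stop the pipeline at the first failing step. A minimal sketch, assuming only the 0/1 convention above (`run_step` is a hypothetical helper, not part of URUMETRICS-CODE):

```python
import subprocess
import sys

# Hedged sketch: run one pipeline command, forward its STDERR, and
# return its 0/1 return code so the caller can decide whether to stop.
def run_step(*cmd):
    result = subprocess.run(cmd, stderr=subprocess.PIPE, text=True)
    if result.returncode != 0:
        sys.stderr.write(result.stderr)  # surface the error messages
    return result.returncode

# Chain steps: a later step only runs if the earlier one returned 0
# (here `true` stands in for the actual python commands of steps 6-12).
for step in (["true"], ["true"]):
    if run_step(*step) != 0:
        sys.exit(1)  # abort the pipeline at the first failure
```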
Test
TO DO!
Version Requirements