An end-to-end system to support the creation, evaluation, and analysis of uni/multimodal emotion detection (ED) models for children with autism, pre-configured to use the CALMED dataset. It supports two different types of input modalities: video and audio.
This system has the following features:
- The system supports any combination of features and inputs described in the CALMED dataset paper.
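To illustrate what combining the two modalities can look like, here is a minimal early-fusion sketch: time-aligned audio and video feature vectors are concatenated per sample. The function name, array shapes, and dimensions below are hypothetical illustrations, not part of this system's API.

```python
import numpy as np

def fuse_features(audio_feats, video_feats):
    """Early fusion: concatenate aligned audio and video feature vectors.

    audio_feats: (n_samples, n_audio_dims) array
    video_feats: (n_samples, n_video_dims) array
    Returns a (n_samples, n_audio_dims + n_video_dims) array.
    """
    # Both modalities must provide one feature vector per aligned sample.
    assert audio_feats.shape[0] == video_feats.shape[0], "modalities must be time-aligned"
    return np.concatenate([audio_feats, video_feats], axis=1)

# Hypothetical example: 10 aligned samples, 25 audio dims, 35 video dims.
audio = np.zeros((10, 25))
video = np.zeros((10, 35))
fused = fuse_features(audio, video)
print(fused.shape)  # (10, 60)
```

A unimodal model would simply consume one of the two arrays directly instead of the fused one.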
To run the system locally, create and activate a virtual environment, then start the app:

```shell
python -m venv ./venv
source ./venv/bin/activate
python app.py
```
Alternatively, run it with Docker:

```shell
docker compose build
docker compose up -d
```
The repository includes the following scripts:

- `annotation_tool` to create the labels.
- `features_extraction` to preprocess the features extracted from tools, e.g., OpenFace.
- `split_dataset` to split the features and labels dataset into train, dev, and test sets. The output goes to `/dataset/video` or `/dataset/audio`, depending on the modality.

Run the scripts from the main folder `/emotion_detection_system`.
This repository is released under dual licensing:

- For non-commercial use of the Software, it is released under the 3-Clause BSD Licence.
- For commercial use of the Software, you are required to contact the University of Galway to arrange a commercial licence.

Please refer to the LICENSE.md file for details on the licence.
If you use any of the resources provided in this repository in any of your publications, we ask you to cite the following work:
Sousa, Annanda, et al. "Introducing CALMED: Multimodal Annotated Dataset for Emotion Detection in Children with Autism." International Conference on Human-Computer Interaction. Cham: Springer Nature Switzerland, 2023.
Author: Annanda Sousa
Author's contact: [email protected]