Comparing OpenFace to Manual Annotations of Communicative Facial Signals
Issue Date
2020-06-30
Language
en
Abstract
This thesis assesses the differences between the output of OpenFace and manual annotations
of communicative and holistic facial signals. OpenFace is a software program that detects
facial signals in videos of human faces as Action Units. These units of facial movement are not
always of interest for research: human coders might want to annotate only communicative
and holistic facial signals, rather than all visible movement. Because annotating video
manually is time-consuming, automation is desirable. This thesis explains how the output of
OpenFace and annotations of communicative signals differ at the conceptual level, in their goal, and
in their features. These differences should be considered when using OpenFace for the annotation of
communicative and holistic facial signals. An attempt is made to transform the output of
OpenFace into annotations of frowns, blinks, smiles, and gaze aversion by manually finding
thresholds and constraints. Only minimal agreement is reached between the transformed output
and the manual annotations. The conclusion is that OpenFace can be used to automate the
annotation of communicative facial signals, but only with the help of machine learning.
Unbiased training data is required, together with objective definitions of communicative
facial signals.
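The transformation described above can be illustrated with a minimal sketch: turning per-frame OpenFace Action Unit intensities into discrete annotation spans via a threshold and a minimum-duration constraint. The column name (`AU04_r`, OpenFace's intensity output for the brow lowerer) follows OpenFace's CSV format, but the threshold and duration values here are illustrative placeholders, not the constants tuned in the thesis.

```python
def detect_spans(frames, column, threshold, min_frames=2):
    """Return (start, end) frame-index spans where the given AU
    intensity stays at or above `threshold` for at least
    `min_frames` consecutive frames."""
    spans, start = [], None
    for i, frame in enumerate(frames):
        active = frame[column] >= threshold
        if active and start is None:
            start = i                      # span opens
        elif not active and start is not None:
            if i - start >= min_frames:    # keep only long-enough spans
                spans.append((start, i - 1))
            start = None
    if start is not None and len(frames) - start >= min_frames:
        spans.append((start, len(frames) - 1))  # span runs to the end
    return spans

# Synthetic per-frame intensities standing in for OpenFace CSV rows.
frames = [{"AU04_r": v} for v in [0.1, 0.2, 1.8, 2.3, 2.1, 0.3, 2.5, 0.2]]
print(detect_spans(frames, "AU04_r", threshold=1.5))  # [(2, 4)]: one frown span
```

The single-frame activation at index 6 is discarded by the `min_frames` constraint, which is the kind of hand-chosen rule the thesis reports as yielding only minimal agreement with human coders.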
Faculty
Faculteit der Sociale Wetenschappen