Evaluation of Manual and Non-manual Components for Sign Language Recognition
Mukushev, Medet; Sabyrov, Arman; Imashev, Alfarabi; Koishibay, Kenessary; Kimmelman, Vadim; Sandygulova, Anara
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pp. 6075–6080

Abstract
The motivation behind this work lies in the need to differentiate between similar signs that differ only in their non-manual components. To this end, we recorded full sentences signed by five native signers and extracted 5,200 isolated sign samples of twenty frequently used signs in Kazakh-Russian Sign Language (K-RSL), which have similar manual components but differ in non-manual components (i.e., facial expressions, eyebrow height, mouth, and head orientation). We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. Among standard machine learning approaches, Logistic Regression produced the best results: 78.2% accuracy on the dataset with 20 signs and 77.9% accuracy on the dataset with 2 classes (statement vs. question). The dataset can be downloaded from the following website: https://krslproject.github.io/krsl20/
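The sketch below illustrates the kind of evaluation described above: a Logistic Regression classifier trained on per-sample feature vectors that concatenate manual (hand) and non-manual (face/head) keypoints. It is a minimal illustration, not the authors' exact pipeline; the file names and feature layout are assumptions introduced here for clarity.

```python
# Minimal sketch: Logistic Regression over flattened keypoint features
# extracted from isolated sign samples. The .npy files and their contents
# are hypothetical placeholders, not part of the released K-RSL dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# X: (n_samples, n_features) feature vectors per isolated sign sample
# y: (n_samples,) integer labels for the 20 sign classes (or 2 classes
#    for the statement-vs-question setup)
X = np.load("krsl_features.npy")  # hypothetical feature file
y = np.load("krsl_labels.npy")    # hypothetical label file

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Comparing runs with and without the non-manual features in X would mirror the paper's question of whether non-manual components improve recognition accuracy.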