
Gesture segmentation and recognition for cognitive human-robot interaction

Abstract: This thesis presents a human-robot interaction (HRI) framework for classifying large vocabularies of static and dynamic hand gestures captured with wearable sensors. Static and dynamic gestures are classified separately, which the segmentation process makes possible. Experimental tests on the UC2017 hand gesture dataset showed high accuracy. In online frame-by-frame classification on raw, incomplete data, Long Short-Term Memory (LSTM) deep networks and Convolutional Neural Networks (CNNs) outperformed static models with specially crafted features, at the cost of longer training and inference times. Online classification of dynamic gestures enables successful predictive classification, i.e., recognizing a gesture before it is complete. Out-of-vocabulary gestures are rejected by a network trained through semi-supervised learning in the Auxiliary Conditional Generative Adversarial Network framework. The proposed network achieved high accuracy in rejecting untrained patterns from the UC2018 DualMyo dataset.
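The online frame-by-frame classification mentioned in the abstract can be illustrated with a minimal sketch: a single NumPy LSTM cell that emits a class probability vector after every incoming sensor frame, so a prediction is available before the gesture ends. All sizes, weight values, and names here are illustrative assumptions, not the thesis's actual trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: 8 wearable-sensor channels per frame,
# 16 hidden units, 5 gesture classes.
n_in, n_hid, n_cls = 8, 16, 5

# Randomly initialised weights stand in for a trained model.
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1  # gates i, f, g, o stacked
b = np.zeros(4 * n_hid)
W_out = rng.standard_normal((n_cls, n_hid)) * 0.1
b_out = np.zeros(n_cls)

def lstm_step(x, h, c):
    """One LSTM cell update for a single incoming frame x."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def classify_stream(frames):
    """Emit class probabilities after every frame (online prediction)."""
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    preds = []
    for x in frames:
        h, c = lstm_step(x, h, c)
        preds.append(softmax(W_out @ h + b_out))
    return preds

frames = rng.standard_normal((20, n_in))  # a 20-frame dynamic gesture
preds = classify_stream(frames)
```

Because `classify_stream` yields a distribution per frame, a caller can act as soon as one class dominates, which is the essence of predictive classification on incomplete data.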

Cited literature: 204 references
Contributor: ABES STAR
Submitted on: Friday, March 22, 2019 - 3:58:15 PM
Last modification on: Friday, August 5, 2022 - 2:54:01 PM
Long-term archiving on: Sunday, June 23, 2019 - 3:46:14 PM


Version validated by the jury (STAR)


  • HAL Id: tel-02077066, version 1


Miguel Simao. Segmentation et reconnaissance des gestes pour l'interaction homme-robot cognitive. Mécanique des matériaux [physics.class-ph]. Ecole nationale supérieure d'arts et métiers - ENSAM; Universidade de Coimbra, 2018. Français. ⟨NNT : 2018ENAM0048⟩. ⟨tel-02077066⟩


