Bi-lateral interaction between a humanoid robot and a human in mixed reality

Abstract: This thesis is divided into two parts: action recognition and emotion recognition. Each part is addressed with two approaches: classical machine learning and deep networks.

In the action-recognition part, we first defined a local descriptor based on Laban Movement Analysis (LMA) to describe movements. LMA describes a motion through four components: Body, Space, Shape and Effort. Since the only goal of this part is gesture recognition, just the first three components were used. The Dynamic Time Warping (DTW) algorithm is applied to measure the similarity of the curves formed by the descriptor vectors obtained with the LMA method, and a Support Vector Machine (SVM) is then trained to classify the data. In the second half of this part, we built a new descriptor based on the geometric coordinates of different parts of the body to represent a movement. In addition to the distances between the hip centre and the other joints of the body and the changes of the quaternion angles over time, we define the triangles formed by different parts of the body and compute their areas. We also compute the area of the single conforming 3-D boundary enclosing all the joints of the body. Finally, we add the velocities of the different joints to the proposed descriptor. An LSTM network is used to evaluate this descriptor.

In the second part of the thesis, we first present a higher-level module that identifies people's inner feelings by observing their body movements. To define a robust descriptor, two methods are investigated. The first is LMA, which, with the addition of the Effort component, becomes a robust descriptor of both a movement and the state in which it was performed; the second is based on a set of spatio-temporal features. A pipeline for recognizing expressive motions is then proposed in order to recognize people's emotions from their gestures using machine-learning methods. A comparative study is carried out between these two methods in order to choose the better one. The second half of this part is a statistical study based on human perception, which evaluates both the recognition system and the proposed motion descriptor.
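The geometric per-frame descriptor outlined above (hip-to-joint distances, areas of triangles formed by body parts, the area of the enclosing 3-D boundary, and joint velocities) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the joint indices, the choice of triangles, and the frame rate are assumptions, and the enclosing boundary is approximated here with a convex hull.

```python
import numpy as np
from scipy.spatial import ConvexHull


def triangle_area(p1, p2, p3):
    # Area of the 3-D triangle spanned by three joint positions.
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))


def frame_descriptor(joints, hip_index=0):
    """Geometric descriptor for one frame.

    joints: (N, 3) array of joint positions; hip_index is an
    illustrative assumption about the skeleton layout.
    """
    hip = joints[hip_index]
    # Distances from the hip centre to every joint (hip-to-hip is 0).
    distances = np.linalg.norm(joints - hip, axis=1)
    # Triangles formed by body parts; these index triples are illustrative.
    triangles = [(1, 2, 3), (4, 5, 6)]
    areas = [triangle_area(joints[a], joints[b], joints[c])
             for a, b, c in triangles]
    # Surface area of a convex boundary enclosing all joints,
    # standing in for the thesis's conforming 3-D boundary.
    hull_area = ConvexHull(joints).area
    return np.concatenate([distances, areas, [hull_area]])


def joint_speeds(sequence, dt=1.0 / 30.0):
    """Per-joint speed magnitudes over a (T, N, 3) motion sequence,
    via finite differences; dt is an assumed frame interval."""
    diffs = np.diff(sequence, axis=0) / dt      # (T-1, N, 3)
    return np.linalg.norm(diffs, axis=2)        # (T-1, N)
```

Concatenating the per-frame descriptors and speeds over time yields the sequence representation that an LSTM can consume frame by frame.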
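The classical pipeline of the first part compares descriptor curves with DTW before classification. A minimal sketch of the DTW similarity step is shown below; the Euclidean frame-to-frame cost is an assumption, since the abstract does not specify the local distance used in the thesis.

```python
import numpy as np


def dtw_distance(a, b):
    """Classic dynamic time warping between two descriptor sequences.

    a: (T1, D) array, b: (T2, D) array. Returns the cumulative
    alignment cost: lower means more similar motions.
    """
    t1, t2 = len(a), len(b)
    # cost[i, j] = best cumulative cost aligning a[:i] with b[:j].
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local Euclidean cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[t1, t2]
```

In such a pipeline, the pairwise DTW distances between training motions are commonly turned into a kernel or feature matrix for the SVM classifier; that coupling is a standard construction, not necessarily the exact one used in the thesis.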
Keywords: Machine Learning
Submitted on: Thursday, January 28, 2021 - 12:07:21 PM
Last modification on: Thursday, January 27, 2022 - 3:03:37 AM


Version validated by the jury (STAR)


  • HAL Id : tel-03120401, version 2


Zahra Ramezanpanah. Bi-lateral interaction between a humanoid robot and a human in mixed reality. Other [cs.OH]. Université Paris-Saclay, 2020. English. ⟨NNT : 2020UPASG039⟩. ⟨tel-03120401v2⟩


