In this study, the authors propose a novel and robust approach to controlling auxiliary tasks in vehicles using hand gestures. First, they build a three-dimensional video volume by appending frames one after another, capturing the motion history of the sequence. They then extract histogram of oriented gradients (HOG) features from each video volume and represent these features as subspaces on a Grassmann manifold. To improve recognition accuracy, they map the data from one manifold to another using a Grassmann kernel, and a Grassmann graph embedding discriminant analysis framework is used to classify the gestures. Experiments are performed on two datasets, LISA and Cambridge Hand Gesture, under three testing protocols: 1/3-subject, 2/3-subject, and cross-subject. Experimental results show that the proposed model outperforms or is comparable with state-of-the-art methods.
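As an illustration of the subspace representation and kernel comparison described in the abstract, the following sketch (not the authors' implementation) stacks per-frame HOG descriptors from a clip, takes an orthonormal basis via SVD to obtain a point on a Grassmann manifold, and compares two such points with a projection kernel. The use of scikit-image's hog, the basis size n_basis=5, and the random stand-in frames are assumptions for the example; the graph embedding discriminant analysis classifier is omitted.

```python
# Minimal sketch (not the authors' code) of the pipeline described above:
# stack frames into a video volume, extract HOG features per frame, represent
# the volume as a linear subspace (a point on a Grassmann manifold) via SVD,
# and compare two volumes with a projection (Grassmann) kernel.
# Assumes NumPy and scikit-image; dataset loading and the graph-embedding
# discriminant classifier are omitted.

import numpy as np
from skimage.feature import hog

def video_volume_to_subspace(frames, n_basis=5):
    """Map a list of grayscale frames to an orthonormal basis (Grassmann point)."""
    # One HOG descriptor per frame; columns of F trace the feature trajectory.
    feats = [hog(f, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for f in frames]
    F = np.stack(feats, axis=1)                # shape: (feature_dim, n_frames)
    U, _, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :n_basis]                      # orthonormal basis of the subspace

def projection_kernel(U1, U2):
    """Grassmann projection kernel k(U1, U2) = ||U1^T U2||_F^2."""
    return np.linalg.norm(U1.T @ U2, ord='fro') ** 2

# Example with random "frames" standing in for two gesture clips.
rng = np.random.default_rng(0)
clip_a = [rng.random((64, 64)) for _ in range(10)]
clip_b = [rng.random((64, 64)) for _ in range(10)]
Ua, Ub = video_volume_to_subspace(clip_a), video_volume_to_subspace(clip_b)
print(projection_kernel(Ua, Ub))
```

In practice, one such kernel value would be computed for every pair of gesture clips to form the kernel matrix consumed by the discriminant analysis step.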



    A Real-Time Applicable Dynamic Hand Gesture Recognition Framework

    Kopinski, Thomas / Gepperth, Alexander / Handmann, Uwe | IEEE | 2015


    Learning prototypes and similes on Grassmann manifold for spontaneous expression recognition

    Liu, Mengyi / Wang, Ruiping / Shan, Shiguang et al. | British Library Online Contents | 2016


    Heterogeneous hand gesture recognition using 3D dynamic skeletal data

    De Smedt, Quentin / Wannous, Hazem / Vandeborre, Jean-Philippe | British Library Online Contents | 2019


    Capturing drone system using hand gesture recognition

    Lee Kwang Seob / Moon Sung Wook / Bae Jung Hoon et al. | Europäisches Patentamt | 2018
