Inter-human communication is characterized by a high degree of expressiveness, comfort and robustness. Moreover, humans possess complex knowledge resources that are permanently expanded by continuous learning and adaptation processes in everyday life. Many user interfaces show very poor usability, a result of growing functional complexity and of being largely restricted to tactile input and visual output. Such systems therefore require extensive learning periods and a high degree of adaptation by the user, which often increases the potential for errors and user frustration. To overcome these limitations, a promising approach is to develop more natural user interfaces modelled on human communication skills. In this work, the result of a long-term research cooperation between the Technical University of Munich and BMW Research and Technology, the authors describe a robust and flexible system for the video-based analysis of dynamic hand and head gestures that has been adapted to the individual needs of the driver and to the specific in-car requirements. Moreover, the system is fully integrated into a multimodal architecture. First, the authors briefly explain the fundamental characteristics of gestures, describe relevant automotive use-case scenarios and review selected work in the field of automatic head and hand gesture recognition. The overall system architecture is based on the classic image-processing pipeline, consisting of two stages: spatial image segmentation and gesture classification. This conventional process model has been extended by a spotting module that enables fully automatic temporal segmentation of the continuous input stream. To increase overall system performance, the entire parameter set can additionally be controlled by available context information about the user, the environment and the dialog situation. The system, implemented in a BMW limousine, evaluates a continuous stream of infrared images using a combination of adapted preprocessing methods and a hierarchical, mainly rule-based classification scheme. Currently, 17 different hand gestures and six different head gestures can be recognized in real time on standard hardware. As a key feature of the system, the active gesture vocabulary can be reduced according to the current operating context, yielding more robust performance.
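As a rough illustration of the architecture the abstract describes (temporal spotting, spatial segmentation, rule-based classification, and a context-reduced gesture vocabulary), here is a minimal Python sketch. All identifiers, gesture labels, context mappings and heuristics below are hypothetical assumptions made for illustration; the paper does not publish code, and none of these names come from it.

from dataclasses import dataclass, field

# Full vocabulary: the paper reports 17 hand gestures and 6 head gestures;
# only a few placeholder labels are listed here.
FULL_VOCABULARY = {
    "hand:point", "hand:swipe_left", "hand:swipe_right", "hand:stop",
    "head:nod", "head:shake",
}

# Assumed mapping from operating context to the gestures that stay active.
CONTEXT_VOCABULARY = {
    "navigation_menu": {"hand:point", "hand:swipe_left", "hand:swipe_right"},
    "yes_no_dialog": {"head:nod", "head:shake"},
}

@dataclass
class GestureRecognizer:
    context: str = "navigation_menu"
    buffer: list = field(default_factory=list)

    def active_vocabulary(self):
        # Key feature: reduce the active vocabulary to the current context.
        return CONTEXT_VOCABULARY.get(self.context, FULL_VOCABULARY)

    def spot(self, frame):
        # Spotting module: temporal segmentation of the continuous input
        # stream (stand-in heuristic: motion energy above a threshold).
        return frame.get("motion_energy", 0.0) > 0.2

    def segment(self, frame):
        # Spatial segmentation: extract hand/head features from the
        # infrared frame (stand-in: a precomputed feature dict).
        return frame["features"]

    def classify(self, features):
        # Hierarchical, mainly rule-based classification, restricted to
        # the active vocabulary (stand-in rule: dominant feature label).
        label = features.get("dominant_label")
        return label if label in self.active_vocabulary() else None

    def process(self, frame):
        # Classic two-stage pipeline extended by the spotting module.
        if not self.spot(frame):
            self.buffer.clear()
            return None
        self.buffer.append(self.segment(frame))
        return self.classify(self.buffer[-1])

# Example: in a yes/no dialog only head gestures remain active, so stray
# hand motion cannot trigger a command.
recognizer = GestureRecognizer(context="yes_no_dialog")
frame = {"motion_energy": 0.6, "features": {"dominant_label": "head:nod"}}
print(recognizer.process(frame))  # -> head:nod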


    Title:

    Robust multimodal hand and head gesture recognition for controlling automotive infotainment systems


    Additional title:

    Robuste multimodale Hand- und Kopfgestik-Erkennung zur Steuerung von Fahrzeug-Infotainment-Systemen


    Contributors:

    Althoff, F. / Lindl, R. / Walchshäusl, L. et al.

    Publication date:

    2005


    Size:

    19 pages, 11 figures, 18 references




    Type of media:

    Conference paper


    Type of material:

    Print


    Language:

    English




    Similar titles:

    Robust multimodal hand and head gesture recognition for controlling automotive infotainment systems

    Althoff, F. / Lindl, R. / Walchshaeusl, L. et al. | Automotive engineering | 2005


    Robust multimodal hand- and head gesture recognition for controlling automotive infotainment systems

    Althoff, F. / Lindl, R. / Walchshausl, L. et al. | British Library Conference Proceedings | 2005


    Hand-free Gesture Recognition for Vehicle Infotainment System Control

    Ye, Qi / Yang, Lanqing / Xue, Guangtao | IEEE | 2018


    Real-time gesture control for automotive infotainment system

    Chee, Ying Xuan / Lau, Phooi Yee | SPIE | 2021


    Hand Gesture based driver-vehicle interface Framework for automotive In-Vehicle Infotainment system

    Kaliappan, Vishnu Kumar / Sureshkumar, Prasanah Kumar / Velliangiri, Nehru Prasanth et al. | IEEE | 2023