This paper presents a real-time multimodal system, an active audio-visual system designed to improve a robot's perceptual capability in noisy environments. The system consists of 1) an audition modality, 2) a complementary vision modality, and 3) a motion modality that produces intelligent behaviors from the data obtained by the other two. Audition and vision each detect, localize, and track a speaker independently, while the motion modality uses the fused localization results to give the robot intelligent, human-like behaviors. The system is implemented on a mobile robot platform running in real time, and the speaker-tracking performance of the fusion is confirmed to improve on that of each sensory modality alone.
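
As a rough illustration of how such audio and visual localization results might be fused, the sketch below combines an audio direction-of-arrival estimate with a visual bearing estimate by confidence-weighted circular averaging. The record does not describe the paper's actual fusion method; the function name, confidence weights, and example values here are all hypothetical.

    import math

    def fuse_bearings(audio_deg, audio_conf, vision_deg, vision_conf):
        # Convert each bearing to a unit vector scaled by its confidence so that
        # angles near the 0/360 degree wrap-around average correctly.
        ax = audio_conf * math.cos(math.radians(audio_deg))
        ay = audio_conf * math.sin(math.radians(audio_deg))
        vx = vision_conf * math.cos(math.radians(vision_deg))
        vy = vision_conf * math.sin(math.radians(vision_deg))
        # The fused bearing is the direction of the confidence-weighted vector sum.
        return math.degrees(math.atan2(ay + vy, ax + vx)) % 360.0

    # Example: audition localizes the speaker at 30 degrees, vision at 25 degrees;
    # vision is given more weight here, so it dominates the fused estimate.
    print(fuse_bearings(30.0, 0.4, 25.0, 0.8))  # ~26.7 degrees

A vector-sum formulation is used rather than a plain arithmetic mean so that, for instance, bearings of 359 and 1 degrees fuse to roughly 0 rather than 180.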


    Title: Audio-visual human tracking for active robot perception

    Contributors:

    Publication date: 2015-05-01

    Size: 531178 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    Audio-Visual Coupling in Human Perception

    Haverkamp, M. | British Library Conference Proceedings | 2004


    Active Visual 3D Perception

    Marchand, E. / Chaumette, F. / IEEE | British Library Conference Proceedings | 1995


    Mobile Robot Guidance By Visual Perception

    Llario, V. / Martinez, A. / Montseny, E. | SPIE | 1986



    Selective visual perception for mobile robot navigation

    Faure, A. / Vasselin, E. / Desbordes, J.L. et al. | Automotive engineering | 1997