Human motion detection plays an important role in automated surveillance systems. However, it is challenging to detect non-rigid moving objects (e.g., humans) robustly in a cluttered environment. In this paper, we compare two approaches for detecting walking humans using multi-modal measurements: video and audio sequences. The first approach is based on the Time-Delay Neural Network (TDNN), which fuses the audio and visual data at the feature level to detect the walking human. The second approach employs a Bayesian Network (BN) to jointly model the video and audio signals; the parameters of the graphical model are estimated with the Expectation-Maximization (EM) algorithm, and the location of the target is tracked by Bayesian inference. Experiments are performed in several indoor and outdoor scenarios: inside the lab, more than one person walking, occlusion by bushes, etc. A comparison of the performance and efficiency of the two approaches is also presented.
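
The abstract gives no implementation details, so the following is only an illustrative Python sketch of the two fusion ideas it names: feature-level concatenation of per-frame audio and video descriptors over a short time-delay window (the kind of input a TDNN would consume), and a discrete Bayes filter that tracks a 1-D target position from fused audio-visual likelihoods. All feature shapes, sensor models, and parameter values below are hypothetical assumptions, not taken from the paper.

import numpy as np

# Toy frame-level features (assumed shapes, not from the paper):
# video_feat: (T, Dv), e.g. silhouette/optical-flow descriptors per frame
# audio_feat: (T, Da), e.g. footstep-band spectral energies per frame
T, Dv, Da = 100, 8, 4
rng = np.random.default_rng(0)
video_feat = rng.normal(size=(T, Dv))
audio_feat = rng.normal(size=(T, Da))

# --- Approach 1: feature-level fusion for a TDNN-style detector ---
# Concatenate the modalities per frame, then stack a sliding context
# window of K frames so a network sees short-term temporal structure.
K = 5
fused = np.concatenate([video_feat, audio_feat], axis=1)   # (T, Dv+Da)
windows = np.stack([fused[t:t + K].ravel()
                    for t in range(T - K + 1)])            # (T-K+1, K*(Dv+Da))
# `windows` would feed a time-delay (1-D convolutional) network;
# a single linear scoring layer stands in for it here.
w = rng.normal(size=windows.shape[1])
detection_score = windows @ w                              # one score per window

# --- Approach 2: discrete Bayes-filter tracking with fused likelihoods ---
# Track a 1-D target position over N cells, assuming (hypothetically)
# conditionally independent audio and video observations given position.
N = 20
cells = np.arange(N, dtype=float)
belief = np.full(N, 1.0 / N)                               # uniform prior
transition = 0.25 * np.eye(N, k=-1) + 0.5 * np.eye(N) + 0.25 * np.eye(N, k=1)
transition /= transition.sum(axis=1, keepdims=True)        # random-walk motion model

def fused_likelihood(pos_cells, video_obs, audio_obs):
    """Per-cell likelihood p(video, audio | position) under independence."""
    lv = np.exp(-0.5 * (pos_cells - video_obs) ** 2)            # video sensor model
    la = np.exp(-0.5 * ((pos_cells - audio_obs) / 2.0) ** 2)    # noisier audio model
    return lv * la

for t in range(T):
    true_pos = 10 + 3 * np.sin(0.1 * t)                    # synthetic trajectory
    video_obs = true_pos + rng.normal(scale=0.5)
    audio_obs = true_pos + rng.normal(scale=2.0)
    belief = transition.T @ belief                         # predict
    belief *= fused_likelihood(cells, video_obs, audio_obs)  # update
    belief /= belief.sum()

print("estimated position:", cells[np.argmax(belief)])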


Title: Tracking Humans using Multi-modal Fusion

Contributors: Xiaotao Zou (author) / Bhanu, B. (author)

Publication date: 2005-01-01

Size: 776585 bytes

Type of media: Conference paper

Type of material: Electronic Resource

Language: English



    Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing

    Karle, Phillip / Fent, Felix / Huch, Sebastian et al. | IEEE | 2023


    MMF-Track: Multi-Modal Multi-Level Fusion for 3D Single Object Tracking

    Li, Zhiheng / Cui, Yubo / Lin, Yu et al. | IEEE | 2024


    Multi-modal tracking using texture changes

    Kemp, C. / Drummond, T. | British Library Online Contents | 2008


    Multi-Modal Face Tracking Using Bayesian Network

    Liu, F. / Lin, X. / Li, S. et al. | British Library Conference Proceedings | 2003


    Multi-modal face tracking using Bayesian network

Fang Liu / Xueyin Lin / Li, S.Z. et al. | IEEE | 2003