Visual-Inertial Odometry (VIO) refers to dead-reckoning navigation that integrates visual and inertial data. With the advent of deep learning (DL), considerable research in this area has yielded competitive performance. DL-based VIO approaches usually adopt a sensor fusion strategy of varying intricacy. However, sensor data can be corrupted or contain missing frames and is therefore imperfect. Hence, a strategy is needed that not only fuses sensor data but also selects features according to their reliability. This work addresses the monocular VIO problem with a more representative sensor fusion strategy based on an attention mechanism. The proposed framework requires neither extrinsic sensor calibration nor knowledge of the intrinsic inertial measurement unit (IMU) parameters. The network is trained end to end, assessed under various types of sensory data corruption, and compared against popular baselines. The work highlights the complementary nature of the employed sensors in such scenarios. The proposed approach achieves state-of-the-art results and remains competitive with the baselines. We also make use of Bayesian uncertainty to quantify the model's confidence in its predictions. The model is cast as a Bayesian Neural Network (BNN) without any explicit changes to it, and inference is performed with a simple, tractable approach: the Laplace approximation. We show that this notion of uncertainty can be exploited for VIO and sensor fusion; in particular, sensor degradation results in more uncertain predictions, and the uncertainty correlates well with pose errors.
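The attention-guided fusion described above can be pictured as a learned soft mask over the concatenated visual and inertial features, down-weighting channels from an unreliable sensor before pose regression. The following is a minimal PyTorch sketch of that idea, assuming illustrative module names and feature dimensions (AttentionFusion, visual_dim, inertial_dim and the 6-DoF pose head are assumptions, not taken from the paper's code).

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Reweight concatenated visual and inertial features with a learned
    soft attention mask before regressing a relative pose (illustrative)."""
    def __init__(self, visual_dim=512, inertial_dim=128):
        super().__init__()
        fused_dim = visual_dim + inertial_dim
        # Per-channel reliability weights in (0, 1) for the fused feature.
        self.attention = nn.Sequential(
            nn.Linear(fused_dim, fused_dim),
            nn.Sigmoid(),
        )
        # 6-DoF relative pose: 3 translation + 3 rotation components.
        self.pose_head = nn.Linear(fused_dim, 6)

    def forward(self, visual_feat, inertial_feat):
        fused = torch.cat([visual_feat, inertial_feat], dim=-1)
        mask = self.attention(fused)      # learned per-channel weights
        reweighted = fused * mask         # element-wise reweighting of the fused features
        return self.pose_head(reweighted)

# Random tensors stand in for CNN image features and IMU-encoder features.
model = AttentionFusion()
pose = model(torch.randn(4, 512), torch.randn(4, 128))  # shape (4, 6)

For the Bayesian side, the Laplace approximation fits a Gaussian around the trained (MAP) weights, $p(\mathbf{w} \mid \mathcal{D}) \approx \mathcal{N}(\mathbf{w}_{\mathrm{MAP}}, \mathbf{H}^{-1})$, where $\mathbf{H}$ is the Hessian of the negative log posterior at $\mathbf{w}_{\mathrm{MAP}}$; the spread of the resulting predictive distribution then serves as the uncertainty measure, which is consistent with casting the model as a BNN without architectural changes.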





    Title: Uncertainty-Aware Attention Guided Sensor Fusion For Monocular Visual Inertial Odometry
    Contributors:
    Publication date: 2020-06-02
    Media type: Other
    Format: Electronic resource
    Language: English




    Attention Guided Unsupervised learning of Monocular Visual-inertial Odometry

    Wang, Zhenke / Zhu, Yuan / Lu, Ke et al. | IEEE | 2022


    Robust Monocular Visual Odometry by Uncertainty Voting

    Van Hamme, D. / Peter, V. / Philips, W. et al. | British Library Conference Proceedings | 2011


    GROUND VEHICLE MONOCULAR VISUAL-INERTIAL ODOMETRY VIA LOCALLY FLAT CONSTRAINTS

    RAMIREZ LLANOS EDUARDO JOSE / YU XIN / VERMA DHIREN | European Patent Office | 2022

    Open Access

    VIDO: A Robust and Consistent Monocular Visual-Inertial-Depth Odometry

    Gao, Yuanxi / Yuan, Jing / Jiang, Jingqi et al. | IEEE | 2023


    Ground Vehicle Monocular Visual Odometry

    Sabry, Mohamed / Al-Kaff, Abdulla / Hussein, Ahmed et al. | IEEE | 2019