Every urban environment contains a rich set of dominant surfaces that can provide a solid foundation for visual odometry estimation. In this work, visual odometry is robustly estimated by computing the motion of a camera mounted on a vehicle. The proposed method first identifies a planar region and dynamically estimates its plane parameters. The candidate region and the estimated plane parameters are then tracked through subsequent images, yielding an incremental update of the visual odometry. The proposed method is evaluated on a navigation dataset of stereo images captured by a car-mounted camera driven through a large urban environment. The consistency and resilience of the method have also been evaluated on an indoor robot dataset. The results suggest that the proposed visual odometry estimation can robustly recover the motion by tracking a dominant planar surface in a Manhattan environment. In addition to the motion estimation solution, a set of strategies is discussed for mitigating problematic factors arising from the unpredictable nature of the environment. The analysis of the results, together with the dynamic environmental strategies, indicates the method's strong potential to be part of an autonomous or semi-autonomous system.
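The abstract describes tracking a dominant planar region across frames and converting that tracking into incremental motion updates. As a rough illustration of this idea (not the authors' implementation), the sketch below estimates the homography induced by a tracked plane between two frames and decomposes it into candidate rotations, translations, and plane normals using OpenCV. The feature detector, the camera intrinsics K, and the way the planar region is selected are illustrative assumptions; the recovered translation is only defined up to scale.

```python
# Hypothetical sketch: incremental camera motion from a tracked planar surface.
# Assumes a calibrated camera with intrinsic matrix K and two grayscale frames.
import cv2
import numpy as np

def incremental_motion_from_plane(prev_gray, curr_gray, K):
    # Detect and match features (assumed to lie mostly on the candidate plane).
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly fit the plane-induced homography; RANSAC rejects points
    # that do not lie on the dominant planar surface.
    H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    if H is None:
        return None

    # Decompose H into candidate (R, t, n) solutions. A plane normal
    # estimated in earlier frames can be used to select the physically
    # consistent solution and accumulate the incremental odometry.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return rotations, translations, normals
```

In a full pipeline of this kind, the selected (R, t) pair would be chained frame to frame, with the scale of t fixed from stereo depth or the known height of the camera above the tracked plane.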


