In this paper we describe a monocular-vision-based method for learning navigable terrain for autonomous rover navigation. A self-supervised learning mechanism adjusts the surface appearance model using monocular image sequences. We propose a computationally inexpensive approach to labeling ground-plane pixels using a reactive pre-filter. An active window, centered at the robot's current position, implements the pre-filter through majority voting. The selection criterion requires only comparisons, additions, and bit shifts in integer arithmetic, so the scoring mechanism maps directly onto an integer data path with low area overhead, suiting resource-constrained rover applications. The labeled navigable pixels serve as training data, and the learning algorithm models the terrain with a mixture of Gaussians. We present empirical results on heterogeneous obstacle-field configurations and varying terrain types.
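
The paper itself does not include source code; purely as an illustrative sketch of the kind of integer-only scoring the abstract describes, the C fragment below labels the pixels of an active window by majority vote over coarse intensity bins using only comparisons, additions, and bit shifts. The function name, window size, and binning scheme are assumptions made for illustration, not the authors' implementation.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical sketch of an integer-only majority-voting pre-filter.
       An active window of WIN_W x WIN_H pixels, assumed to be centered at
       the robot's current image position, votes on coarse intensity bins;
       pixels in the winning bin are labeled navigable. */
    #define WIN_W    64
    #define WIN_H    48
    #define NUM_BINS 16   /* 256 >> 4 intensity bins (assumed) */

    void label_navigable(const uint8_t win[WIN_H][WIN_W],
                         uint8_t labels[WIN_H][WIN_W])
    {
        uint16_t votes[NUM_BINS];
        memset(votes, 0, sizeof votes);

        /* Voting pass: each pixel votes for its coarse intensity bin;
           the bin index is a right shift, so no division is needed. */
        for (int r = 0; r < WIN_H; ++r)
            for (int c = 0; c < WIN_W; ++c)
                votes[win[r][c] >> 4] += 1;

        /* Majority threshold: half the window area, again by a shift. */
        const uint16_t majority = (WIN_W * WIN_H) >> 1;

        /* Find the winning bin and check whether it holds a majority. */
        int best = 0;
        for (int b = 1; b < NUM_BINS; ++b)
            if (votes[b] > votes[best])
                best = b;
        const int has_majority = votes[best] > majority;

        /* Labeling pass: pixels falling in the majority bin are marked
           navigable (1) and could serve as training samples. */
        for (int r = 0; r < WIN_H; ++r)
            for (int c = 0; c < WIN_W; ++c)
                labels[r][c] = (has_majority && (win[r][c] >> 4) == best) ? 1u : 0u;
    }

Because the bin index and the majority threshold are obtained by shifts rather than divisions, a criterion of this kind maps directly onto an integer data path, which is the property the abstract emphasizes for resource-constrained rovers.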


    Title: Computationally inexpensive labeling of appearance based navigable terrain for autonomous rovers
    Contributors:
    Publication date: 2013-04-01
    Size: 3021109 bytes
    Type of media: Conference paper
    Type of material: Electronic Resource
    Language: English



    Similar titles:

    Terrain Mapping for Autonomous Navigation of Lunar Rovers

    Werner, Lennart | TIBKAT | 2024


    Terrain Sensing for Planetary Rovers

    Dimastrogiovanni, Mauro / Cordes, Florian / Reina, Giulio | TIBKAT | 2021


    Terrain Adaptive Navigation for Mars Rovers

    Matthies, Larry H. / Helmick, Daniel M. / Angelova, Anelia et al. | NTRS | 2007


    Terrain Adaptive Navigation for Mars Rovers

    Helmick, Daniel M. / Angelova, Anelia / Livianu, Matthew et al. | IEEE | 2007


    Terrain Adaptive Navigation for planetary rovers

    Helmick, D. / Angelova, A. / Matthies, L. | British Library Online Contents | 2009