Depth estimation from a single RGB image has attracted great interest in autonomous driving and robotics. State-of-the-art methods are usually built on complex and extremely deep network architectures, which demand substantial computational resources. Moreover, an inherent characteristic of the backbones used by existing approaches causes severe spatial information loss in the produced feature maps, which impairs the accuracy of depth estimation on small images. In this study, we design a novel and efficient Convolutional Neural Network (CNN) to address these problems. Specifically, we stack two shallow encoder-decoder style subnetworks successively in a unified network. Extensive experiments were conducted on the NYU Depth V2, KITTI, Make3D and Unreal data sets. The results show that the proposed network achieves accuracy comparable to state-of-the-art methods built on extremely deep architectures, while running much faster on a single, less powerful GPU.
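
The record itself contains no code, but the core idea described in the abstract, two shallow encoder-decoder subnetworks stacked successively into one network, can be sketched as follows. This is a minimal PyTorch sketch assuming plain strided convolutions for the encoder and transposed convolutions for the decoder; the module names, layer widths, and the 228x304 input size are illustrative assumptions, not the authors' actual MobileXNet configuration.

```python
# Minimal sketch of a stacked encoder-decoder depth network.
# Assumption: layer widths, depths, and names are illustrative,
# not the configuration from the MobileXNet paper.
import torch
import torch.nn as nn


class ShallowEncoderDecoder(nn.Module):
    """One shallow subnetwork: two strided conv blocks down,
    two mirrored transposed-conv blocks back up to full resolution."""

    def __init__(self, in_ch, out_ch, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class StackedDepthNet(nn.Module):
    """Two shallow subnetworks stacked successively: the first maps RGB to
    an intermediate feature map, the second refines it into a depth map."""

    def __init__(self):
        super().__init__()
        self.stage1 = ShallowEncoderDecoder(in_ch=3, out_ch=16)
        self.stage2 = ShallowEncoderDecoder(in_ch=16, out_ch=1)

    def forward(self, rgb):
        return self.stage2(self.stage1(rgb))


# Usage: predict a depth map for one 228x304 RGB image (an NYU-style size).
net = StackedDepthNet()
depth = net(torch.randn(1, 3, 228, 304))
print(depth.shape)  # torch.Size([1, 1, 228, 304])
```

The appeal of this design, as the abstract presents it, is that each stage stays shallow and therefore cheap to run, while the second stage can refine the first stage's full-resolution output rather than relying on a single very deep backbone that discards spatial detail.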


    Title: MobileXNet: An Efficient Convolutional Neural Network for Monocular Depth Estimation

    Published in:

    Publication date: 2022-11-01

    Size: 4866531 bytes

    Type of media: Article (Journal)

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    An Improved Convolutional Neural Network for Monocular Depth Estimation
    Kang, Jing / Dang, Anrong / Zhang, Bailing et al. | TIBKAT | 2020

    An Improved Convolutional Neural Network for Monocular Depth Estimation
    Kang, Jing / Dang, Anrong / Zhang, Bailing et al. | Springer Verlag | 2020

    An Improved Convolutional Neural Network for Monocular Depth Estimation
    Kang, Jing / Dang, Anrong / Zhang, Bailing et al. | British Library Conference Proceedings | 2020

    Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks
    Mern, John M. / Julian, Kyle D. / Tompa, Rachael E. et al. | AIAA | 2019