Depth estimation from a single monocular image is attracting increasing attention in autonomous driving and computer vision. While most existing approaches regress depth values or classify depth labels based on features extracted from a limited image area, the resulting depth maps are still perceptually unsatisfying: neither local context nor low-level semantic information alone is sufficient to predict depth, and learning-based approaches suffer from inherent defects of the supervision signals. This paper addresses monocular depth estimation with a general information exchange convolutional neural network. A high-resolution prediction is maintained throughout the network, while both low-resolution features capturing long-range context and fine-grained features describing local context are refined stage by stage through information exchange paths. A mutual channel attention mechanism is applied to emphasize interdependent feature maps and improve the feature representation of specific semantics. The network is trained under the supervision of an improved log-cosh loss and gradient constraints, so that abnormal predictions have less impact and the estimation remains consistent in higher order. Ablation studies verify the effectiveness of every proposed component, and experiments on popular indoor and street-view datasets show competitive results compared with recent state-of-the-art approaches.
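As a rough illustration of the training objective described in the abstract, the sketch below combines a log-cosh penalty on the depth residual with a first-order gradient-consistency term. It is a minimal, hedged example assuming a PyTorch-style setup with dense ground-truth depth maps; the function names, the weighting parameter lambda_grad, and the finite-difference gradient formulation are illustrative assumptions, not the authors' exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def log_cosh(x):
    # Numerically stable log(cosh(x)) = |x| + softplus(-2|x|) - log(2).
    return x.abs() + F.softplus(-2.0 * x.abs()) - math.log(2.0)

def depth_loss(pred, target, lambda_grad=1.0):
    # Penalise the depth residual with log-cosh, which grows roughly
    # linearly for large errors so abnormal predictions have less impact.
    value_term = log_cosh(pred - target).mean()

    # First-order (gradient) consistency via finite differences along
    # image width and height, encouraging agreement in higher order.
    dx_pred = pred[..., :, 1:] - pred[..., :, :-1]
    dx_tgt = target[..., :, 1:] - target[..., :, :-1]
    dy_pred = pred[..., 1:, :] - pred[..., :-1, :]
    dy_tgt = target[..., 1:, :] - target[..., :-1, :]
    grad_term = (log_cosh(dx_pred - dx_tgt).mean()
                 + log_cosh(dy_pred - dy_tgt).mean())

    return value_term + lambda_grad * grad_term
```

In use, depth_loss would be called on predicted and ground-truth depth tensors of shape (batch, 1, H, W); lambda_grad balances the value term against the gradient term and would need to be tuned per dataset.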
Monocular Depth Estimation Using Information Exchange Network
IEEE Transactions on Intelligent Transportation Systems; 22, 6; 3491-3503
01.06.2021
8452619 bytes
Journal article
Electronic resource
English
An Improved Convolutional Neural Network for Monocular Depth Estimation
British Library Conference Proceedings | 2020
An Improved Convolutional Neural Network for Monocular Depth Estimation
Springer Verlag | 2020
Local Scene Depth Estimation Using Rotating Monocular Camera
British Library Conference Proceedings | 2015