CASwin Transformer: A Hierarchical Cross Attention Transformer for Depth Completion

LiDARs and RGB cameras are commonly used sensors in autonomous driving vehicles. However, high-resolution LiDARs are too expensive for large-scale deployment in commercial autonomous vehicles. A low-resolution LiDAR is far more affordable, and when its sparse measurements are combined with the corresponding camera images it can approximate the perception quality of a high-resolution LiDAR. In this paper, we propose a dual-branch hierarchical cross-attention Transformer to predict a dense depth map. The hierarchical architecture builds feature representations at multiple scales, and cross-attention modules fuse the features from the two modalities at multiple feature levels. Furthermore, we develop a depth refinement stage that amends the dense depth map predicted by the fusion stage. The proposed method is evaluated on the indoor NYUDepthV2 dataset and the outdoor KITTI Odometry dataset. The experiments demonstrate its effectiveness and accuracy compared with current state-of-the-art methods.
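As a rough illustration of the cross-modal fusion described above, the sketch below shows a generic cross-attention block in which depth-branch tokens attend to image-branch tokens at a single feature level. The class name, dimensions, and use of PyTorch's nn.MultiheadAttention are assumptions for illustration only; this is not the paper's actual CASwin module.

```python
# Minimal sketch of cross-modal fusion via cross-attention (PyTorch).
# All names and hyperparameters here are hypothetical, not the authors' code.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse depth-branch features with image-branch features at one level.

    Depth features act as queries; image features provide keys/values,
    so each depth token gathers complementary appearance context.
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, depth_feat: torch.Tensor, img_feat: torch.Tensor):
        # depth_feat, img_feat: (B, N, dim) token sequences from one scale.
        q = self.norm_q(depth_feat)
        kv = self.norm_kv(img_feat)
        fused, _ = self.attn(q, kv, kv)  # depth queries attend to image tokens
        x = depth_feat + fused           # residual connection
        return x + self.mlp(x)           # feed-forward refinement

if __name__ == "__main__":
    B, N, dim = 2, 64 * 64, 96           # e.g., tokens from a 64x64 feature map
    block = CrossAttentionFusion(dim)
    d = torch.randn(B, N, dim)           # LiDAR/depth-branch features
    i = torch.randn(B, N, dim)           # RGB-branch features
    print(block(d, i).shape)             # torch.Size([2, 4096, 96])
```

In a hierarchical design such as the one the abstract describes, one such block would be instantiated per scale, fusing the two branches' feature maps before they are passed to the next stage.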
2022-10-08
Conference paper
Electronic Resource
English