LiDARs and RGB cameras are commonly used sensors in autonomous driving vehicles. However, high-resolution LiDARs are too expensive for large-scale deployment in commercial autonomous vehicles. Low-resolution LiDARs are far more affordable and, when combined with corresponding images, can approximate the perception level of a high-resolution LiDAR. In this paper, we propose a dual-branch hierarchical cross-attention Transformer to predict a dense depth map. The hierarchical architecture builds feature representations at multiple scales, and cross-attention modules fuse the features from the two modalities at multiple feature levels. Furthermore, we develop a depth refinement stage that amends the dense depth map predicted by the fusion stage. The proposed method is evaluated on the indoor NYUDepthV2 dataset and the outdoor KITTI Odometry dataset. The experiments demonstrate its effectiveness and accuracy compared with current state-of-the-art methods.
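
For intuition, below is a minimal PyTorch sketch of the kind of cross-modal cross-attention fusion the abstract describes: at one scale, depth-branch features query image-branch features. The module name, dimensions, and residual/norm layout are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of cross-modal cross-attention fusion between a
    # sparse-depth branch and an RGB branch. All names and sizes are
    # illustrative assumptions, not the authors' code.
    import torch
    import torch.nn as nn

    class CrossAttentionFusion(nn.Module):
        """Fuse two modality branches with multi-head cross-attention."""
        def __init__(self, dim: int = 64, num_heads: int = 4):
            super().__init__()
            # Queries come from the depth branch; keys/values from the image branch.
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, depth_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
            # depth_feat, image_feat: (B, C, H, W) feature maps at one scale.
            b, c, h, w = depth_feat.shape
            q = depth_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
            kv = image_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
            fused, _ = self.attn(q, kv, kv)             # cross-modal attention
            fused = self.norm(fused + q)                # residual connection + norm
            return fused.transpose(1, 2).reshape(b, c, h, w)

    if __name__ == "__main__":
        fusion = CrossAttentionFusion(dim=64, num_heads=4)
        depth = torch.randn(2, 64, 16, 16)   # depth-branch features (assumed shape)
        image = torch.randn(2, 64, 16, 16)   # RGB-branch features (assumed shape)
        print(fusion(depth, image).shape)    # torch.Size([2, 64, 16, 16])

In a hierarchical design such a module would be applied once per scale of the two feature pyramids; this sketch shows only a single level.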


    Title: CASwin Transformer: A Hierarchical Cross Attention Transformer for Depth Completion

    Contributors:

    Publication date: 2022-10-08

    Size: 2659325 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English




    Similar titles:

    SPARSE VOXEL TRANSFORMER FOR CAMERA-BASED 3D SEMANTIC SCENE COMPLETION
    LI YIMING / YU ZHIDING / CHOY CHRISTOPHER B et al. | European Patent Office | 2024

    HSPFormer: Hierarchical Spatial Perception Transformer for Semantic Segmentation
    Chen, Siyu / Han, Ting / Zhang, Changshe et al. | IEEE | 2025