Color-thermal (RGB-T) urban scene parsing has recently attracted widespread interest. However, most existing approaches do not deeply exploit the complementary information carried by RGB and thermal features. In this study, we propose a cross-modal attention-cascaded fusion network (CACFNet) that fully exploits cross-modal information. In our design, a cross-modal attention fusion module mines complementary information from the two modalities, and a cascaded fusion module then decodes the multi-level features in a top-down manner. Noting that each pixel is labeled with the category of the region to which it belongs, we present a region-based module that explores the relationship between pixels and regions. Moreover, in contrast to previous methods that use only the cross-entropy loss to penalize pixel-wise predictions, we introduce an additional loss that captures pixel-to-pixel relationships. Extensive experiments on two datasets demonstrate that CACFNet achieves state-of-the-art performance in RGB-T urban scene parsing.
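For a concrete picture of what an attention-based RGB-T fusion step can look like, the PyTorch sketch below re-weights RGB and thermal features with a channel-attention gate computed from their concatenation. This is a minimal illustration under assumed design choices (a squeeze-and-excitation-style gate and a weighted element-wise sum); the module name CrossModalAttentionFusion and all hyperparameters are hypothetical and are not taken from the authors' CACFNet code.

import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    # Hypothetical sketch: a channel-attention gate computed from the
    # concatenated RGB and thermal features re-weights each modality
    # before a weighted element-wise sum. Not the authors' implementation.
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, thermal], dim=1)          # (B, 2C, H, W)
        w = self.gate(self.pool(x))                   # (B, 2C, 1, 1)
        w_rgb, w_thermal = torch.chunk(w, 2, dim=1)   # per-modality channel weights
        return rgb * w_rgb + thermal * w_thermal      # fused feature, (B, C, H, W)

if __name__ == "__main__":
    fuse = CrossModalAttentionFusion(channels=64)
    rgb, thermal = torch.randn(2, 64, 60, 80), torch.randn(2, 64, 60, 80)
    print(fuse(rgb, thermal).shape)  # torch.Size([2, 64, 60, 80])

Broadcasting the per-channel gate lets the network down-weight the less informative modality channel by channel (for example, RGB in low-light scenes), which is the general intuition behind attention-based RGB-T fusion.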





    Title:
    CACFNet: Cross-Modal Attention Cascaded Fusion Network for RGB-T Urban Scene Parsing

    Contributors:
    Zhou, Wujie (author) / Dong, Shaohua (author) / Fang, Meixin (author) / Yu, Lu (author)

    Published in:

    Publication date:
    2024-01-01

    Size:
    2524402 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    EGFNet: Edge-Aware Guidance Fusion Network for RGB–Thermal Urban Scene Parsing

    Dong, Shaohua / Zhou, Wujie / Xu, Caie et al. | IEEE | 2024



    Spatial Prior for Nonparametric Road Scene Parsing

    Di, Shuai / Zhang, Honggang / Mei, Xue et al. | IEEE | 2015