Color–thermal (RGB-T) urban scene parsing has recently attracted widespread interest. However, most existing approaches to RGB-T urban scene parsing do not fully exploit the complementary information in RGB and thermal features. In this study, we propose a cross-modal attention-cascaded fusion network (CACFNet) that fully exploits cross-modal information. In our design, a cross-modal attention fusion module mines complementary information from the two modalities. Subsequently, a cascaded fusion module decodes the multi-level features in a top-down manner. Noting that each pixel is labeled with the category of the region to which it belongs, we present a region-based module that explores the relationship between pixels and regions. Moreover, in contrast to previous methods that employ only the cross-entropy loss to penalize pixel-wise predictions, we propose an additional loss to learn pixel–pixel relationships. Extensive experiments on two datasets demonstrate that the proposed CACFNet achieves state-of-the-art performance in RGB-T urban scene parsing.
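For readers who want a concrete picture of the cross-modal attention fusion idea described in the abstract, here is a minimal PyTorch sketch. It assumes a squeeze-and-excitation-style channel gate in which each modality reweights the other's features before a 1x1 convolution fuses the two streams; the class name `CrossModalAttentionFusion`, the reduction ratio, and the fusion layer are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn


class CrossModalAttentionFusion(nn.Module):
    """Sketch of a cross-modal attention fusion block: each modality
    produces a channel-attention gate that reweights the *other*
    modality's features, and a 1x1 conv fuses the two enhanced streams."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()

        def gate() -> nn.Sequential:
            # Squeeze-and-excitation-style channel gate (an assumption;
            # the paper's exact attention design is not given in this record).
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        self.rgb_gate = gate()      # attention computed from RGB features
        self.thermal_gate = gate()  # attention computed from thermal features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, thermal_feat: torch.Tensor) -> torch.Tensor:
        # Cross-apply the gates so each stream is modulated by the
        # complementary cues mined from the other modality.
        thermal_enhanced = thermal_feat * self.rgb_gate(rgb_feat)
        rgb_enhanced = rgb_feat * self.thermal_gate(thermal_feat)
        return self.fuse(torch.cat([rgb_enhanced, thermal_enhanced], dim=1))


# Example: fuse 64-channel features from one encoder stage.
fusion = CrossModalAttentionFusion(channels=64)
rgb = torch.randn(2, 64, 120, 160)
thermal = torch.randn(2, 64, 120, 160)
fused = fusion(rgb, thermal)  # shape: (2, 64, 120, 160)
```

In a full network, one such block would presumably sit at each encoder stage, with the fused multi-level features then decoded top-down by the cascaded fusion module.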


    Title:

    CACFNet: Cross-Modal Attention Cascaded Fusion Network for RGB-T Urban Scene Parsing


    Contributors:
    Zhou, Wujie (author) / Dong, Shaohua (author) / Fang, Meixin (author) / Yu, Lu (author)

    Published in:

    Publication date:

    01.01.2024


    Format / Extent:

    2524402 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English




    Similar titles:

    EGFNet: Edge-Aware Guidance Fusion Network for RGB–Thermal Urban Scene Parsing

    Dong, Shaohua / Zhou, Wujie / Xu, Caie et al. | IEEE | 2024



    Spatial Prior for Nonparametric Road Scene Parsing

    Di, Shuai / Zhang, Honggang / Mei, Xue et al. | IEEE | 2015


    Exploiting Large Image Sets for Road Scene Parsing

    Alvarez, Jose M | Online Contents | 2016