RGB-T semantic segmentation can effectively segment objects in challenging scenarios (e.g., low-illumination and low-contrast environments) by combining RGB and thermal infrared images. However, existing cutting-edge RGB-T semantic segmentation methods often explore multi-modal feature fusion insufficiently, overlooking the differences between the two modalities. In this paper, we propose an adaptive gated fusion network (AGFNet) for RGB-T semantic segmentation, where multi-modal features are combined via gating mechanisms and spatial details are enhanced by introducing edge information. Specifically, AGFNet employs a cross-modal adaptive gated-attention fusion (CAGF) module to aggregate RGB and thermal features, thoroughly exploring the complementarity between the two modalities via a gated attention unit (GAU). In the GAU, gates purify the features, and channel and spatial attention mechanisms further enhance the two modalities interactively. We then design an edge detection (ED) module to learn object-related edge cues, which incorporates both local detail information from low-level features and global location information from high-level features. After that, an edge guidance (EG) module emphasizes the spatial details of the fused features, and a contextual elevation (CE) module enriches their contextual information by iteratively applying sine and cosine functions. Finally, considering that thermal images are usually of lower quality than RGB images, we progressively integrate the multi-level RGB encoder features with the multi-level decoder features, thereby focusing more on appearance information and yielding a high-quality final segmentation result. Extensive experiments on three public datasets (MFNet, PST900, and FMB) show that our method achieves competitive performance compared with 22 state-of-the-art methods.
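To make the gated fusion idea concrete, below is a minimal PyTorch sketch of a GAU-style unit: per-modality gates purify the features, then channel and spatial attention computed from one modality reweight the other before aggregation. The class name, layer choices, and the sharing of attention branches across modalities are illustrative assumptions for this sketch, not the paper's actual GAU definition.

```python
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    """Hypothetical GAU-style fusion of RGB and thermal feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # Per-modality gates: per-pixel weights in [0, 1] that suppress noisy responses.
        self.gate_rgb = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.gate_t = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Channel attention (squeeze-and-excitation style), shared across
        # modalities for brevity -- an assumption, not the paper's design.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        # Spatial attention: a single-channel map highlighting informative regions.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f_rgb: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        # 1) Purify each modality with its own gate.
        r = f_rgb * self.gate_rgb(f_rgb)
        t = f_t * self.gate_t(f_t)
        # 2) Interactive enhancement: each stream is reweighted by the channel
        #    and spatial attention computed from the other stream.
        r_enh = r * self.channel_att(t) * self.spatial_att(t)
        t_enh = t * self.channel_att(r) * self.spatial_att(r)
        # 3) Aggregate the enhanced features into a single fused map.
        return self.fuse(torch.cat([r_enh, t_enh], dim=1))


# Example: fuse two 64-channel feature maps of spatial size 60x80.
if __name__ == "__main__":
    gau = GatedAttentionFusion(64)
    out = gau(torch.randn(2, 64, 60, 80), torch.randn(2, 64, 60, 80))
    print(out.shape)  # torch.Size([2, 64, 60, 80])
```

Cross-wiring the attention (RGB reweighted by thermal-derived attention and vice versa) is one common way to exploit modality complementarity; the actual module may instead use separate attention branches per modality.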


    Title:
    AGFNet: Adaptive Gated Fusion Network for RGB-T Semantic Segmentation

    Contributors:
    Zhou, Xiaofei (author) / Wu, Xiaoling (author) / Bao, Liuxin (author) / Yin, Haibing (author) / Jiang, Qiuping (author) / Zhang, Jiyong (author)

    Publication date:
    2025-05-01

    Size:
    4182501 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    Similar titles:

    Gated-Residual Block for Semantic Segmentation Using RGB-D Data

    Qian, Yeqiang / Deng, Liuyuan / Li, Tianyi et al. | IEEE | 2022


    A Lightweight RGB-T Fusion Network for Practical Semantic Segmentation

    Zhang, Haoyuan / Li, Zifeng / Wu, Zhenyu et al. | IEEE | 2023


    Convolutional gated recurrent networks for video semantic segmentation in automated driving

    Siam, Mennatullah / Valipour, Sepehr / Jagersand, Martin et al. | IEEE | 2017


    MLRFNet: Multi-Level Real-Time Fusion Semantic Segmentation Network for Autonomous Driving

    Ma, Xiaochuan / Xun, Zhijie / Mao, Bomin et al. | IEEE | 2025