Remote sensing image semantic segmentation is crucial for applications such as land resource management, biosphere monitoring, and urban planning. Despite significant advances in convolutional neural networks (CNNs) for semantic segmentation, existing models still suffer from limitations such as insufficient utilization of multi-scale features, inadequate modeling of long-range dependencies, and the computational cost of attention mechanisms. To address these issues, this paper introduces a Cross-Layer Multi-Scale Feature Fusion Network (CLMFNet), which comprises two attention modules: the Multi-Level Attention Module (MLAM) and the Cross-Layer Attention Module (CLAM). The MLAM efficiently captures contextual dependencies and improves the use of multi-scale features, helping the network handle features at different scales and resolutions. The CLAM enables cross-layer interaction, effectively modeling long-range dependencies and improving the handling of hierarchical features. Experimental results on the ISPRS Potsdam dataset show that CLMFNet outperforms competing models in semantic segmentation.
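
The abstract gives no implementation details, so the following is only a minimal sketch of the cross-layer attention idea in a standard PyTorch encoder-decoder setting; the module name, channel sizes, and gated-addition fusion are illustrative assumptions, not the paper's actual CLAM design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class CrossLayerAttention(nn.Module):
        """Hypothetical sketch of cross-layer attention: deep (high-level)
        features gate shallow (low-level) features before fusion. This is
        an illustrative assumption, not the published CLAM architecture."""

        def __init__(self, low_channels: int, high_channels: int, out_channels: int):
            super().__init__()
            # Project both feature maps into a common channel space.
            self.low_proj = nn.Conv2d(low_channels, out_channels, kernel_size=1)
            self.high_proj = nn.Conv2d(high_channels, out_channels, kernel_size=1)
            # Channel attention derived from the high-level features.
            self.attn = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(out_channels, out_channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
            low = self.low_proj(low)
            high = self.high_proj(high)
            # Upsample the deep features to the shallow feature resolution.
            high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                                 align_corners=False)
            # Reweight shallow features with attention computed from deep
            # features, then fuse by addition (one common choice).
            return low * self.attn(high) + high


    if __name__ == "__main__":
        low = torch.randn(1, 64, 128, 128)   # shallow, high-resolution features
        high = torch.randn(1, 256, 32, 32)   # deep, low-resolution features
        fused = CrossLayerAttention(64, 256, 128)(low, high)
        print(fused.shape)  # torch.Size([1, 128, 128, 128])

Here deep, low-resolution features gate shallow, high-resolution ones through channel attention before fusion, which is one common way to realize the cross-layer interaction and long-range context the abstract describes.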


    Title: Semantic Segmentation of Remote Sensing Images Based on Cross-Layer Multi-Scale Feature Fusion

    Contributors: Liu, Lingling (author) / Yang, Hualan (author) / Zhang, Mei (author)

    Publication date: 2023-10-11

    Size: 2723694 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English