Lightweight models are pivotal to efficient semantic segmentation, but they often suffer from insufficient context information owing to their limited convolutional capacity and small receptive fields. To address this problem, we propose a tailored approach to efficient semantic segmentation that supplies context information to small networks through two complementary distillation schemes: 1) a self-attention distillation scheme, which adaptively transfers long-range context knowledge from large teacher networks to small student networks; and 2) a layer-wise context distillation scheme, which transfers structured context from deep layers to shallow layers within the student network, promoting semantic consistency in the shallow layers. Extensive experiments on the ADE20K, Cityscapes, and CamVid datasets demonstrate the effectiveness of our approach.
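
The two schemes lend themselves to a compact illustration. Below is a minimal PyTorch sketch of how they could be realized as training losses; the function names, the cosine-normalized self-similarity maps, and the MSE matching objective are our assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn.functional as F


def self_attention_map(feat):
    # Pairwise self-similarity over all spatial positions: (B, C, H, W) -> (B, HW, HW).
    # Channel-normalizing first makes maps from layers of different widths comparable.
    x = F.normalize(feat.flatten(2), dim=1)        # (B, C, HW), unit-norm per position
    return torch.bmm(x.transpose(1, 2), x)         # cosine-similarity attention map


def self_attention_distillation_loss(student_feat, teacher_feat):
    # Scheme 1: align the student's long-range context with the teacher's.
    teacher_feat = teacher_feat.detach()           # no gradients into the teacher
    if student_feat.shape[2:] != teacher_feat.shape[2:]:
        student_feat = F.interpolate(student_feat, size=teacher_feat.shape[2:],
                                     mode='bilinear', align_corners=False)
    return F.mse_loss(self_attention_map(student_feat),
                      self_attention_map(teacher_feat))


def layerwise_context_distillation_loss(shallow_feat, deep_feat):
    # Scheme 2: within the student, deep-layer context supervises shallow layers,
    # pushing shallow features toward semantic consistency with the deep ones.
    deep_feat = deep_feat.detach()                 # the deep layer plays the teacher role
    if shallow_feat.shape[2:] != deep_feat.shape[2:]:
        shallow_feat = F.adaptive_avg_pool2d(shallow_feat, deep_feat.shape[2:])
    return F.mse_loss(self_attention_map(shallow_feat),
                      self_attention_map(deep_feat))


# Toy usage with random tensors standing in for real network activations.
student_shallow = torch.randn(2, 64, 32, 32)
student_deep = torch.randn(2, 256, 16, 16)
teacher_deep = torch.randn(2, 512, 16, 16)
total_distill_loss = (self_attention_distillation_loss(student_deep, teacher_deep)
                      + layerwise_context_distillation_loss(student_shallow, student_deep))

Because both losses compare position-to-position similarity maps rather than raw features, they are insensitive to channel-width mismatches between teacher and student, which is what makes the deep-to-shallow transfer within a single student feasible in this sketch.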





    Title:
    Efficient Semantic Segmentation via Self-Attention and Self-Distillation

    Contributors:
    An, Shumin (author) / Liao, Qingmin (author) / Lu, Zongqing (author) / Xue, Jing-Hao (author)

    Publication date:
    2022-09-01

    Size:
    5131633 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    Similar titles:

    Self-attention technology in image segmentation

    Cao, Fude / Lu, Xueyun | British Library Conference Proceedings | 2022


    Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation

    Niemeijer, Joshua / Schäfer, P. Jörg | German Aerospace Center (DLR) | 2021


    RGB-D Co-attention Network for Semantic Segmentation

    Zhou, Hao / Qi, Lu / Wan, Zhaoliang et al. | British Library Conference Proceedings | 2021