Lightweight models are pivotal to efficient semantic segmentation, but they often suffer from insufficient context information due to their limited convolutional capacity and small receptive fields. To address this problem, we propose a tailored approach to efficient semantic segmentation that leverages two complementary distillation schemes to supplement context information in small networks: 1) a self-attention distillation scheme, which adaptively transfers long-range context knowledge from large teacher networks to small student networks; and 2) a layer-wise context distillation scheme, which transfers structured context from deep layers to shallow layers within the student network, promoting semantic consistency in the shallow layers. Extensive experiments on the ADE20K, Cityscapes, and CamVid datasets demonstrate the effectiveness of our proposal.
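A minimal PyTorch-style sketch of how the two distillation losses could be formulated is given below. This is an illustration under stated assumptions, not the paper's exact method: the class names, the use of normalized pairwise-affinity maps as a stand-in for self-attention, the 1x1 projection for channel matching, and the MSE objectives are all hypothetical choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionDistillationLoss(nn.Module):
    # Matches pairwise affinity maps between teacher and student features;
    # hypothetical formulation of the self-attention distillation scheme.
    def forward(self, student_feat, teacher_feat):
        if teacher_feat.shape[2:] != student_feat.shape[2:]:
            # Align spatial sizes so the HW x HW affinity maps are comparable.
            teacher_feat = F.interpolate(teacher_feat, size=student_feat.shape[2:],
                                         mode='bilinear', align_corners=False)
        def affinity(feat):
            x = feat.flatten(2).transpose(1, 2)       # (B, HW, C)
            x = F.normalize(x, dim=2)                 # cosine-style affinities
            return torch.bmm(x, x.transpose(1, 2))    # (B, HW, HW)
        return F.mse_loss(affinity(student_feat), affinity(teacher_feat.detach()))

class LayerwiseContextDistillationLoss(nn.Module):
    # Aligns a shallow student layer with a (detached) deep layer of the same
    # network, i.e. self-distillation; hypothetical formulation of the second scheme.
    def __init__(self, shallow_channels, deep_channels):
        super().__init__()
        self.proj = nn.Conv2d(shallow_channels, deep_channels, kernel_size=1)
    def forward(self, shallow_feat, deep_feat):
        target = F.interpolate(deep_feat.detach(), size=shallow_feat.shape[2:],
                               mode='bilinear', align_corners=False)
        return F.mse_loss(self.proj(shallow_feat), target)

# Usage with illustrative shapes; in practice both losses would be added,
# with weighting factors, to the usual cross-entropy segmentation loss.
s_feat = torch.randn(2, 128, 32, 32)    # student stage feature
t_feat = torch.randn(2, 512, 16, 16)    # teacher stage feature
shallow = torch.randn(2, 64, 64, 64)    # shallow student layer
deep = torch.randn(2, 256, 16, 16)      # deep student layer
loss = (SelfAttentionDistillationLoss()(s_feat, t_feat)
        + LayerwiseContextDistillationLoss(64, 256)(shallow, deep))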
Efficient Semantic Segmentation via Self-Attention and Self-Distillation
IEEE Transactions on Intelligent Transportation Systems; Vol. 23, No. 9; pp. 15256-15266
2022-09-01
5131633 bytes
Article (Journal)
Electronic Resource
English
Self-attention technology in image segmentation
British Library Conference Proceedings | 2022
Combining Semantic Self-Supervision and Self-Training for Domain Adaptation in Semantic Segmentation
Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2021
RGB-D Co-attention Network for Semantic Segmentation
British Library Conference Proceedings | 2021