Feature fusion approaches have been widely used in object detection and semantic segmentation to improve accuracy. Global feature fusion integrates semantic information with detailed spatial information. Combining the fine feature maps from the bottom-up stage with the coarse feature maps from the top-down stage is effective in networks that must understand the contextual information of a given image. In this paper, we propose a method that integrates multiple feature maps within local regions in addition to global feature fusion. Local multi-scale feature fusion integrates neighboring feature maps from different levels and scales to obtain a more diverse range of receptive fields with less computation while preserving detailed appearance information. Experimental results demonstrate that the proposed network, which is based on global and local feature fusion, achieves competitive accuracy with real-time inference speed on semantic segmentation and object detection tasks compared with previous state-of-the-art methods.
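The sketch below illustrates the two fusion ideas described in the abstract: a global top-down pathway that adds coarse semantics to fine bottom-up maps, and a local step that merges each level with its immediate neighbors. The module name, channel widths, and fusion operators are assumptions for illustration only; the paper's exact architecture is not specified in this record.

```python
# Minimal PyTorch sketch of global + local multi-scale feature fusion.
# All names, channel sizes, and operators are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalFusion(nn.Module):
    """Fuses a bottom-up feature pyramid globally (top-down pathway)
    and locally (merging neighboring pyramid levels)."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=128):
        super().__init__()
        # 1x1 convs project every level to a common channel width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 conv applied after each local (neighboring-level) merge.
        self.local_fuse = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats):
        # feats: bottom-up feature maps, ordered fine (high-res) to coarse.
        laterals = [l(f) for l, f in zip(self.lateral, feats)]

        # Global fusion: top-down pathway adds coarse semantics to fine maps.
        for i in range(len(laterals) - 2, -1, -1):
            up = F.interpolate(laterals[i + 1], size=laterals[i].shape[-2:],
                               mode="nearest")
            laterals[i] = laterals[i] + up

        # Local fusion: each level is merged with its immediate neighbors,
        # widening the receptive-field range with little extra computation.
        outputs = []
        for i, fused in enumerate(laterals):
            acc = fused
            if i > 0:  # finer neighbor, downsampled to this resolution
                acc = acc + F.adaptive_max_pool2d(laterals[i - 1],
                                                  fused.shape[-2:])
            if i < len(laterals) - 1:  # coarser neighbor, upsampled
                acc = acc + F.interpolate(laterals[i + 1],
                                          size=fused.shape[-2:],
                                          mode="nearest")
            outputs.append(self.local_fuse[i](acc))
        return outputs


if __name__ == "__main__":
    feats = [torch.randn(1, 256, 64, 64),
             torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    outs = GlobalLocalFusion()(feats)
    print([o.shape for o in outs])  # three maps, each with 128 channels
```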
Global and Local Multi-scale Feature Fusion for Object Detection and Semantic Segmentation
2019 IEEE Intelligent Vehicles Symposium (IV), pp. 2557-2562
2019-06-01
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2021