Foreground object segmentation is one of the most important pre-processing steps in intelligent transportation and video surveillance systems. Although background modeling methods segment foreground objects efficiently, their results are easily affected by dynamic backgrounds and updating strategies. Recently, deep learning-based methods have achieved more effective foreground object segmentation than background modeling methods; however, they usually require a large number of labeled training frames. To reduce the number of training frames, we propose a novel cross-scale guidance network (CSGNet) for few-shot moving foreground object segmentation in surveillance videos. The proposed CSGNet consists of a cross-scale feature expansion encoder and a cross-scale feature guidance decoder. The encoder represents the scenes by extracting cross-scale expansion features based on cross-scale and multiple field-of-view information learned from a limited number of training frames. The decoder obtains accurate foreground object segmentation results under the guidance of the encoder features and the foreground loss. The proposed method outperforms state-of-the-art background modeling methods and deep learning-based methods by around 2.6% and 3.1%, respectively, and its average computation time per frame is 0.073 seconds on the CDNet2014 dataset and 0.046 seconds on the UCSD dataset on a single GTX 1080 GPU. The source code will be available at https://github.com/nchucvml/CSGNet.
Cross-Scale Guidance Network for Few-Shot Moving Foreground Object Segmentation
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 6, pp. 7726-7739
2025-06-01
2688092 bytes
Article (Journal)
Electronic Resource
English
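For illustration, below is a minimal PyTorch-style sketch of the kind of multiple field-of-view feature expansion the abstract attributes to the encoder: several parallel dilated convolutions capture different receptive fields and are fused into one feature map. The class name, dilation rates, and channel sizes are assumptions for illustration, not the authors' released implementation (see the repository linked above for the actual code).

```python
# Minimal sketch (assumptions, not the authors' implementation): a feature
# expansion block that gathers multiple field-of-view information through
# parallel dilated 3x3 convolutions and fuses them with a 1x1 convolution.
import torch
import torch.nn as nn


class CrossScaleExpansionBlock(nn.Module):
    """Extracts features at several receptive fields and fuses them."""

    def __init__(self, in_channels: int, out_channels: int,
                 dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate -> one field of view per branch.
        # padding=d with dilation=d preserves the spatial resolution.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 fusion of the concatenated multi-field-of-view features.
        self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels,
                              kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    block = CrossScaleExpansionBlock(in_channels=64, out_channels=64)
    frame_features = torch.randn(1, 64, 120, 160)  # hypothetical feature map
    print(block(frame_features).shape)  # torch.Size([1, 64, 120, 160])
```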
Chromatic shadow detection and tracking for moving foreground segmentation
British Library Online Contents | 2015
Background Foreground Segmentation for SLAM
IEEE | 2011
A Bayesian Network for Foreground Segmentation in Region Level
Springer Verlag | 2007