This paper proposes a novel approach to visual tracking of moving objects based on the dynamic coupled conditional random field (DcCRF) model. The principal idea is to integrate a variety of relevant knowledge about object tracking into a unified dynamic probabilistic framework, called the DcCRF model in this paper. Under this framework, the proposed approach integrates spatiotemporal contextual information of motion and appearance, as well as the compatibility between the foreground label and the object label. An approximate inference algorithm, loopy belief propagation, is adopted to perform inference. In addition, the background model is adaptively updated to cope with gradual background changes. Experimental results show that the proposed approach accurately tracks moving objects (with or without occlusions) in monocular video sequences and outperforms several state-of-the-art methods in tracking and segmentation accuracy.
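The record does not reproduce the paper's potentials or its message-update equations. As a rough illustration only, the sketch below runs generic min-sum loopy belief propagation on a single 4-connected grid with a Potts smoothness term; the function name, the Potts pairwise cost, and all parameter values are assumptions for illustration and are not the paper's DcCRF, which couples motion and foreground/object-label layers with its own potentials.

```python
import numpy as np

def loopy_bp_grid(unary, pairwise_weight=1.0, n_iters=10):
    """Min-sum loopy belief propagation on a 4-connected grid with a Potts
    pairwise term. `unary` is an (H, W, L) array of per-pixel label costs;
    returns the (H, W) label map minimising the approximate energy."""
    H, W, L = unary.shape
    # msgs[d] holds the message arriving at each pixel from direction d:
    # 0 = from left neighbour, 1 = from right, 2 = from above, 3 = from below.
    msgs = np.zeros((4, H, W, L))

    def send(h):
        # Potts min-sum update: m(l) = min(h(l), min_k h(k) + w), then normalise.
        m = np.minimum(h, h.min(axis=-1, keepdims=True) + pairwise_weight)
        return m - m.min(axis=-1, keepdims=True)

    for _ in range(n_iters):
        belief = unary + msgs.sum(axis=0)
        new = np.zeros_like(msgs)
        new[0, :, 1:]  = send(belief[:, :-1] - msgs[1, :, :-1])   # sent to right neighbour
        new[1, :, :-1] = send(belief[:, 1:]  - msgs[0, :, 1:])    # sent to left neighbour
        new[2, 1:, :]  = send(belief[:-1, :] - msgs[3, :-1, :])   # sent to lower neighbour
        new[3, :-1, :] = send(belief[1:, :]  - msgs[2, 1:, :])    # sent to upper neighbour
        msgs = new

    belief = unary + msgs.sum(axis=0)
    return belief.argmin(axis=-1)
```

For a foreground/background labelling, `unary[..., 1]` could come from motion/appearance likelihoods (e.g. negative log-likelihood under an adaptively updated background model) and `unary[..., 0]` from their complement; the paper's coupled model additionally includes compatibility terms between the foreground label and the object label, which this sketch omits.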
Visual Tracking Based on Dynamic Coupled Conditional Random Field Model
2016
Article (Journal)
English
Similar items:
Visual Tracking Based on Dynamic Coupled Conditional Random Field Model | Online Contents | 2015
Tracking with a mixed continuous-discrete Conditional Random Field | British Library Online Contents | 2013
Condensation - Conditional Density Propagation for Visual Tracking | British Library Online Contents | 1998