We present a model-based method for the accurate extraction of pedestrian silhouettes from video sequences. Our approach rests on two assumptions: 1) all pedestrians share a common appearance, and 2) each individual looks like him/herself over a short period of time. These assumptions allow us to learn pedestrian models that encompass both the appearance of the pedestrian population and the appearance variations of each individual. Using these models, we produce pedestrian silhouettes with fewer noise pixels and fewer missing parts. We apply our silhouette extraction approach to the NIST gait data set and show that, on the gait recognition task, our model-based silhouettes yield much higher recognition rates than silhouettes extracted directly from background subtraction or produced by any non-model-based smoothing scheme.
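The abstract only outlines the approach at a high level; the sketch below is an illustrative reconstruction, not the authors' algorithm. Assuming aligned, size-normalized binary masks, it shows how a population-level shape prior and a short-term individual model could be fused with a raw background-subtraction mask to suppress noise pixels and fill in missing parts. All function names, weights, and the fusion rule are assumptions made for illustration.

```python
# Minimal sketch of model-based silhouette refinement (not the paper's exact
# method): a noisy background-subtraction mask is combined with a learned
# population shape prior and a short-term individual appearance model.
import numpy as np

def learn_population_prior(training_silhouettes):
    """Average many aligned, size-normalized binary silhouettes into a
    per-pixel foreground probability map (the population model)."""
    stack = np.stack(training_silhouettes).astype(np.float64)
    return stack.mean(axis=0)

def learn_individual_model(recent_silhouettes, decay=0.7):
    """Exponentially weighted average of one pedestrian's recent silhouettes,
    capturing how that individual looks over a short time window."""
    model = np.zeros_like(recent_silhouettes[0], dtype=np.float64)
    for sil in recent_silhouettes:
        model = decay * model + (1.0 - decay) * sil.astype(np.float64)
    return model

def refine_silhouette(raw_mask, population_prior, individual_model,
                      w_raw=0.4, w_pop=0.3, w_ind=0.3, threshold=0.5):
    """Fuse the raw mask with the two models and re-threshold.
    The weights and threshold are illustrative assumptions."""
    score = (w_raw * raw_mask.astype(np.float64)
             + w_pop * population_prior
             + w_ind * individual_model)
    return (score >= threshold).astype(np.uint8)
```

A pixel kept by background subtraction but unsupported by either model (likely shadow or noise) falls below the threshold and is removed, while a pixel missed by background subtraction but strongly predicted by both models is filled in.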
Title: Learning Pedestrian Models for Silhouette Refinement
Date: 2003-01-01
Size: 337,365 bytes
Type: Conference paper
Format: Electronic Resource
Language: English
Source: British Library Conference Proceedings | 2003
Similar items:
Pedestrian Categorization Using Heterogenous HOG Cascade and Motion Difference Silhouette | British Library Online Contents | 2013
Refinement of human silhouette segmentation in omni-directional indoor videos | British Library Online Contents | 2014