Tracking articulated objects in image sequences remains a challenging problem, particularly the localization of an object's individual parts under self-occlusion and viewpoint change. In this paper we propose a two-dimensional spatio-temporal modeling approach that handles both self-occlusion and viewpoint change. We use a Bayesian framework to combine pictorial structure spatial models with hidden Markov temporal models. Inference in the combined model can be performed with dynamic programming and sampling methods. We demonstrate the approach on the problem of tracking a walking person, using silhouette data from a single camera viewpoint. Walking provides both strong spatial (kinematic) and strong temporal (dynamic) constraints, enabling the method to track limb positions despite simultaneous self-occlusion and viewpoint change.
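The abstract names dynamic programming as the inference workhorse for the hidden Markov (temporal) half of the model. As a minimal illustrative sketch only, assuming a discrete state space and precomputed log-probabilities (the function name viterbi and its array arguments are hypothetical, not the authors' code), Viterbi dynamic programming recovers the most likely state sequence as follows:

    import numpy as np

    def viterbi(log_init, log_trans, log_obs):
        # log_init:  (S,)   log prior over S hidden states
        # log_trans: (S, S) log transition matrix, rows = from-state
        # log_obs:   (T, S) per-frame log observation likelihoods
        # Returns the MAP state index for each of the T frames.
        T, S = log_obs.shape
        score = log_init + log_obs[0]          # best log-score ending in each state
        back = np.zeros((T, S), dtype=int)     # backpointers for path recovery
        for t in range(1, T):
            cand = score[:, None] + log_trans  # (from, to) candidate scores
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + log_obs[t]
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):          # trace the optimal path backwards
            path.append(int(back[t, path[-1]]))
        return path[::-1]

This sketch covers only the temporal (HMM) side; the combined model described in the abstract additionally uses a pictorial structure spatial term and sampling methods, which lie beyond this illustration.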
A Unified Spatio-Temporal Articulated Model for Tracking
01.01.2004
575,484 bytes
Conference paper
Electronic resource
English
A Unified Spatio-Temporal Articulated Model for Tracking
British Library Conference Proceedings | 2004
A Unified Spatio-Temporal Description Model of Environment for Intelligent Vehicles
Springer Verlag | 2020
A Unified Spatio-Temporal Description Model of Environment for Intelligent Vehicles
British Library Conference Proceedings | 2021