This paper describes recognition of the 3-D pose and shape of articulated objects such as a human hand, and visual tracking of moving persons from a sequence of images. In the first stage of pose and shape recognition, a rough estimate of the pose is obtained by matching the silhouette to a coarse model of the hand and fingers. In the second stage, the model is refined using constraints on the shape and pose of the object. By modifying the extended Kalman filter to satisfy these constraints, the depth ambiguity is gradually resolved from the observed images. Next, a method is proposed for tracking an object from the optical flow and depth data acquired from a sequence of stereo images. A target region is extracted by Bayesian inference over the optical flow, the disparity and the predicted target location. Occlusion of the target can also be detected from an abrupt change in the disparity of the target region. Real-time human tracking on a real image sequence is demonstrated.
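As an illustration of the target-region extraction step, the minimal sketch below classifies each pixel as target or background by Bayesian inference over three cues: the optical flow, the stereo disparity, and the distance to the predicted target location. The independent Gaussian cue models, the flat background likelihood, and all function and parameter names (target_posterior, var_flow, etc.) are assumptions made for this sketch, not the formulation used in the paper.

```python
# Sketch of per-pixel Bayesian target-region extraction from optical flow,
# disparity and a predicted target location. The Gaussian cue models and the
# independence assumption are illustrative choices, not the paper's method.
import numpy as np


def gaussian_loglik(x, mean, var):
    """Log of an (unnormalised) isotropic Gaussian likelihood over the last axis."""
    return -0.5 * np.sum((x - np.asarray(mean, float)) ** 2, axis=-1) / var


def target_posterior(flow, disparity, pred_pos, pred_flow, pred_disp,
                     prior=0.5, var_flow=4.0, var_disp=1.0, var_pos=400.0):
    """Per-pixel posterior probability of belonging to the target.

    flow      : (H, W, 2) optical-flow field
    disparity : (H, W)    stereo disparity map
    pred_pos  : (2,)      predicted target location (row, col)
    pred_flow : (2,)      predicted target flow
    pred_disp : float     predicted target disparity
    """
    h, w = disparity.shape
    rows, cols = np.mgrid[0:h, 0:w]
    pos = np.stack([rows, cols], axis=-1).astype(float)

    # Independent log-likelihoods for the three cues under the target hypothesis.
    ll_target = (gaussian_loglik(flow, pred_flow, var_flow)
                 + gaussian_loglik(disparity[..., None], pred_disp, var_disp)
                 + gaussian_loglik(pos, pred_pos, var_pos))

    # Flat background likelihood (uniform over cue space), chosen for simplicity.
    ll_background = np.full((h, w), -8.0)

    # Bayes' rule in log space, then normalise to a posterior in [0, 1].
    log_post_t = np.log(prior) + ll_target
    log_post_b = np.log(1.0 - prior) + ll_background
    m = np.maximum(log_post_t, log_post_b)
    return np.exp(log_post_t - m) / (np.exp(log_post_t - m) + np.exp(log_post_b - m))


if __name__ == "__main__":
    # Synthetic example: a 64x64 frame with a moving, nearer patch around (20, 30).
    flow = np.zeros((64, 64, 2)); flow[15:25, 25:35] = [1.0, 0.5]
    disp = np.full((64, 64), 5.0); disp[15:25, 25:35] = 12.0
    post = target_posterior(flow, disp, pred_pos=(20, 30),
                            pred_flow=(1.0, 0.5), pred_disp=12.0)
    print("target pixels:", int((post > 0.5).sum()))
```

Following the abstract's occlusion cue, one could flag occlusion when the mean disparity inside the extracted region jumps away from the predicted disparity between consecutive frames; that step is not shown here.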
Estimation of 3-D pose and shape from a monocular image sequence and real-time human tracking
01.01.1997
995873 bytes
Article (Conference)
Electronic resource
English
Estimation of 3-D Pose and Shape from a Monocular Image Sequence and Realtime Human Tracking
British Library Conference Proceedings | 1997
3D Face pose estimation and tracking from a monocular camera
British Library Online Contents | 2002
Silhouette lookup for monocular 3D pose tracking
British Library Online Contents | 2007