This paper addresses the 3D tracking of the pose and animation of the human face in monocular image sequences using deformable 3D models. For each frame, the proposed adaptation is split into two consecutive stages: global and local. In the first stage, the 3D pose of the face is recovered using a RANSAC-based technique involving both a consensus measure and consistency with a statistical model of face texture. In the second stage, the local motion associated with some facial features is recovered using the concept of active appearance model search. Adaptation examples demonstrate the feasibility and robustness of the developed framework.
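The following is a minimal sketch (not the authors' implementation) of the first, global stage described in the abstract: a RANSAC-style search over 3D pose hypotheses that scores each hypothesis by point consensus combined with consistency against a statistical (PCA) face-texture model. The weak-perspective projection, the scoring weight, and the `sample_texture` callback are illustrative assumptions only.

```python
# Sketch of RANSAC-based pose recovery scored by consensus plus texture
# consistency; all names and parameters are assumptions for illustration.
import numpy as np


def project(points_3d, rotation, translation, scale=1.0):
    """Weak-perspective projection of 3D model points into the image plane."""
    return scale * (points_3d @ rotation.T)[:, :2] + translation


def random_rotation(rng):
    """Draw a random 3D rotation (QR decomposition of a Gaussian matrix)."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.linalg.det(q))


def texture_error(texture, mean_texture, basis):
    """Reconstruction error of a sampled texture under a PCA texture model."""
    coeffs = basis.T @ (texture - mean_texture)
    return np.linalg.norm(texture - (mean_texture + basis @ coeffs))


def ransac_pose(model_pts, image_pts, sample_texture, mean_tex, tex_basis,
                n_iter=200, inlier_tol=3.0, tex_weight=0.01, seed=0):
    """Return the pose maximizing (inlier consensus - weighted texture error)."""
    rng = np.random.default_rng(seed)
    best_score, best_pose = -np.inf, None
    for _ in range(n_iter):
        # Hypothesize a pose; a real implementation would estimate it from a
        # minimal random sample of 3D-2D correspondences rather than at random.
        rotation = random_rotation(rng)
        idx = rng.choice(len(model_pts), size=3, replace=False)
        translation = (image_pts[idx]
                       - project(model_pts[idx], rotation, 0.0)).mean(axis=0)
        # Consensus: correspondences whose reprojection error is within tolerance.
        residuals = np.linalg.norm(
            project(model_pts, rotation, translation) - image_pts, axis=1)
        consensus = int((residuals < inlier_tol).sum())
        # Texture consistency: how well the image texture sampled under this
        # pose is explained by the statistical face-texture model.
        tex_err = texture_error(sample_texture(rotation, translation),
                                mean_tex, tex_basis)
        score = consensus - tex_weight * tex_err
        if score > best_score:
            best_score, best_pose = score, (rotation, translation)
    return best_pose
```

Here `sample_texture(rotation, translation)` stands in for warping the current frame onto the model under the hypothesized pose and returning the shape-normalized texture vector; the second, local stage (active appearance model search) is not shown.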
Face model adaptation using robust matching and active appearance models
01.01.2002
420321 bytes
Conference paper
Electronic resource
English
Face Model Adaptation Using Robust Matching and Active Appearance Models
British Library Conference Proceedings | 2002
Pose Robust Face Tracking by Combining Active Appearance Models and Cylinder Head Models
British Library Online Contents | 2008
Generative face alignment through 2.5D active appearance models
British Library Online Contents | 2013
The painful face - Pain expression recognition using active appearance models
British Library Online Contents | 2009
Fitting 3D face models for tracking and active appearance model training
British Library Online Contents | 2006