We present a system that recovers and tracks the 3D speech movements of a speaker's face in each image of a monocular sequence. A speaker-specific face model is used for tracking: model parameters are extracted from each image by an analysis-by-synthesis loop. To handle both the individual specificities of the speaker's articulation and the complexity of facial deformations during speech, speaker-specific models of the 3D face geometry and appearance are built from real data. The geometric model is linearly controlled by only six articulatory parameters. Appearance is represented either as a classical texture map or through the local appearance of a relevant subset of 3D points. We compare several appearance models, which are either constant or depend linearly on the articulatory parameters, and evaluate them against ground-truth data.
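As a rough illustration of the approach the abstract describes, the sketch below shows a linear geometric model driven by six articulatory parameters, a per-point appearance model that depends linearly on those parameters, and an analysis-by-synthesis fit that recovers the parameters by least squares. This is a minimal sketch, not the authors' implementation: the array sizes, the random model data, the orthographic stand-in for rendering, and the synthetic measurements are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
N = 200                                    # number of mesh vertices (illustrative)
mean_shape = rng.normal(size=(N, 3))       # speaker-specific resting geometry
basis = 0.1 * rng.normal(size=(6, N, 3))   # six articulatory deformation modes
A0 = rng.uniform(size=N)                   # mean local appearance per 3D point
A = 0.05 * rng.normal(size=(N, 6))         # linear dependence of appearance on parameters

def shape(alpha):
    # Linear geometric model: mean shape plus alpha-weighted modes.
    return mean_shape + np.tensordot(alpha, basis, axes=1)

def synthesize(alpha):
    # Stand-in for rendering: orthographic projection of each vertex,
    # concatenated with its predicted local appearance value.
    S = shape(alpha)
    return np.concatenate([S[:, :2].ravel(), A0 + A @ alpha])

# Synthetic "image measurements" generated from known parameters.
alpha_true = np.array([0.5, -0.3, 0.2, 0.0, 0.4, -0.1])
observed = synthesize(alpha_true)

# Analysis-by-synthesis loop: adjust alpha so the synthesized
# observations match the measured ones in a least-squares sense.
fit = least_squares(lambda a: synthesize(a) - observed, x0=np.zeros(6))
print(fit.x)    # recovers alpha_true up to numerical precision
```

A constant appearance model, one of the variants compared in the paper, corresponds to setting `A` to zero so that appearance no longer varies with the articulatory parameters.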
Shape and appearance models of talking faces for model-based tracking
2003-01-01
538264 bytes
Conference paper
Electronic Resource
English
Shape and Appearance Models of Talking Faces for Model-based Tracking | British Library Conference Proceedings | 2003
Shape based appearance model for kernel tracking | British Library Online Contents | 2012
Computer Graphics Animations of Talking Faces Based on Stochastic Models | British Library Conference Proceedings | 1994
Towards a low bandwidth talking face using appearance models | British Library Online Contents | 2003