A comprehensive, novel multi-view dynamic face model is presented in this paper to address two challenging problems in face recognition and facial analysis: modelling faces with large pose variation and modelling faces dynamically in video sequences. The model consists of a sparse 3D shape model learnt from 2D images, a shape-and-pose-free texture model, and an affine geometrical model. Model fitting is performed by optimising (1) a global fitting criterion on the overall face appearance as it changes across views and over time, (2) a local fitting criterion on a set of landmarks, and (3) a temporal fitting criterion between successive frames of a video sequence. By temporally estimating the model parameters over an input sequence, the identity and geometrical information of a face are extracted separately. The former is crucial to face recognition and facial analysis; the latter is used to aid tracking and aligning faces. We demonstrate the results of successfully applying this model to faces with large variations in pose and expression over time.
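The abstract describes fitting as the joint optimisation of three criteria: a global appearance term, a local landmark term, and a temporal term between successive frames. The sketch below illustrates that idea only; it is not the authors' implementation. The model bases, parameter layout, weights, and all names (fit_frame, landmarks_from, texture_from, cost) are illustrative assumptions.

```python
# Hedged sketch: fit per-frame parameters by minimising a weighted sum of
# global (appearance), local (landmark), and temporal (smoothness) criteria.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy stand-ins for the learnt models: a sparse shape basis projected to 2D
# landmarks and a shape-and-pose-free texture basis. Real bases come from training.
N_LANDMARKS, N_SHAPE, N_TEX = 10, 4, 6
shape_mean = rng.normal(size=2 * N_LANDMARKS)
shape_basis = rng.normal(size=(2 * N_LANDMARKS, N_SHAPE))
tex_mean = rng.normal(size=50)
tex_basis = rng.normal(size=(50, N_TEX))

def landmarks_from(params):
    """Predicted 2D landmarks under a simple affine geometry (scale + translation)."""
    shape_p, (s, tx, ty) = params[:N_SHAPE], params[N_SHAPE:N_SHAPE + 3]
    pts = (shape_mean + shape_basis @ shape_p).reshape(-1, 2)
    return s * pts + np.array([tx, ty])

def texture_from(params):
    """Predicted shape-and-pose-free texture."""
    return tex_mean + tex_basis @ params[N_SHAPE + 3:]

def cost(params, obs_landmarks, obs_texture, prev_params, w=(1.0, 1.0, 0.1)):
    w_global, w_local, w_temporal = w  # illustrative weights, not from the paper
    e_global = np.sum((texture_from(params) - obs_texture) ** 2)     # appearance
    e_local = np.sum((landmarks_from(params) - obs_landmarks) ** 2)  # landmarks
    e_temporal = np.sum((params - prev_params) ** 2)                 # inter-frame
    return w_global * e_global + w_local * e_local + w_temporal * e_temporal

def fit_frame(obs_landmarks, obs_texture, prev_params):
    res = minimize(cost, prev_params, args=(obs_landmarks, obs_texture, prev_params))
    return res.x

# The previous frame's estimate seeds the next, so identity-related parameters
# stabilise over the sequence while the geometric parameters track the face.
params = np.zeros(N_SHAPE + 3 + N_TEX)
params[N_SHAPE] = 1.0  # unit scale
for _ in range(3):
    obs_lm = landmarks_from(params) + 0.05 * rng.normal(size=(N_LANDMARKS, 2))
    obs_tx = texture_from(params) + 0.05 * rng.normal(size=50)
    params = fit_frame(obs_lm, obs_tx, params)
```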
Modelling faces dynamically across views and over time
Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, pp. 554-559
2001-01-01
755533 bytes
Conference paper
Electronic Resource
English