A comprehensive, novel multi-view dynamic face model is presented in this paper to address two challenging problems in face recognition and facial analysis: modelling faces with large pose variation and modelling faces dynamically in video sequences. The model consists of a sparse 3D shape model learnt from 2D images, a shape-and-pose-free texture model, and an affine geometrical model. Model fitting is performed by optimising (1) a global fitting criterion on the overall face appearance as it changes across views and over time, (2) a local fitting criterion on a set of landmarks, and (3) a temporal fitting criterion between successive frames in a video sequence. By estimating the model parameters temporally over an input sequence, the identity and geometrical information of a face are extracted separately. The former is crucial to face recognition and facial analysis; the latter is used to aid tracking and aligning faces. We demonstrate results of successfully applying this model to faces with large variation in pose and expression over time.
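
The three fitting criteria described in the abstract can be viewed as one weighted objective over the model parameters, estimated frame by frame so that the temporal term links successive estimates. The Python sketch below is a minimal illustration of that idea only: the error functions are toy quadratic placeholders, and the function names, parameter layout, and weights are assumptions for illustration, not the authors' implementation.

# Minimal sketch: fit a parameter vector per frame by jointly optimising
# global, local and temporal criteria (toy placeholders, not the paper's model).
import numpy as np
from scipy.optimize import minimize

def global_appearance_error(params, frame_target):
    # Placeholder standing in for the error between the synthesised
    # appearance and the observed frame.
    return np.sum((params - frame_target) ** 2)

def landmark_error(params, landmark_target):
    # Placeholder standing in for misalignment of a small set of landmarks.
    return np.sum((params[:2] - landmark_target) ** 2)

def temporal_error(params, previous_params):
    # Placeholder penalising large parameter changes between successive frames.
    return np.sum((params - previous_params) ** 2)

def fit_frame(frame_target, landmark_target, previous_params,
              w_global=1.0, w_local=0.5, w_temporal=0.1):
    """Estimate parameters for one frame as a weighted sum of the three
    fitting criteria (weights are illustrative assumptions)."""
    def objective(p):
        return (w_global * global_appearance_error(p, frame_target)
                + w_local * landmark_error(p, landmark_target)
                + w_temporal * temporal_error(p, previous_params))
    result = minimize(objective, x0=previous_params, method="BFGS")
    return result.x

if __name__ == "__main__":
    # Toy two-frame sequence: parameters carried over between frames are how
    # the temporal criterion couples successive estimates.
    params = np.zeros(4)
    for frame_target in (np.array([1.0, 0.5, 0.2, 0.0]),
                         np.array([1.1, 0.6, 0.25, 0.05])):
        params = fit_frame(frame_target, frame_target[:2], params)
        print(params)

In this toy form, increasing w_temporal smooths the estimated parameters over the sequence, while w_global and w_local trade off overall appearance against landmark alignment.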


    Title: Modelling faces dynamically across views and over time
    Contributors: Yongmin Li (author) / Shaogang Gong (author) / H. Liddell (author)
    Publication date: 2001-01-01
    Size: 755533 bytes
    Type of media: Conference paper
    Type of material: Electronic Resource
    Language: English



    Similar titles:

    Modelling Faces Dynamically across Views and Over Time
    Li, Y. / Gong, S. / Liddell, H. et al. | British Library Conference Proceedings | 2001

    Tracking across Multiple Cameras with Disjoint Views
    Javed, O. / Rasheed, Z. / Shafique, K. et al. | British Library Conference Proceedings | 2003

    Tracking across multiple cameras with disjoint views
    Javed, O. / Rasheed, Z. / Shafique, K. et al. | IEEE | 2003