A novel, comprehensive multi-view dynamic face model is presented in this paper to address two challenging problems in face recognition and facial analysis: modelling faces with large pose variation and modelling faces dynamically in video sequences. The model consists of a sparse 3D shape model learnt from 2D images, a shape-and-pose-free texture model, and an affine geometrical model. Model fitting is performed by optimising (1) a global fitting criterion on the overall face appearance as it changes across views and over time, (2) a local fitting criterion on a set of landmarks, and (3) a temporal fitting criterion between successive frames in a video sequence. By temporally estimating the model parameters over an input sequence, the identity and geometrical information of a face are extracted separately. The former is crucial to face recognition and facial analysis; the latter aids face tracking and alignment. We demonstrate results of successfully applying this model to faces with large variations in pose and expression over time.
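
The abstract describes fitting the model by jointly optimising a global appearance criterion, a local landmark criterion, and a temporal criterion between successive frames. As a rough illustration of that idea only, the Python sketch below poses per-frame fitting as minimisation of a weighted sum of three residual terms; all function bodies, weights, parameter shapes and the toy data are illustrative assumptions and do not reproduce the paper's shape, texture or affine models.

import numpy as np
from scipy.optimize import minimize

def global_appearance_residual(params, frame):
    # Assumption: appearance is synthesised linearly from the parameters and
    # compared with the flattened frame (stand-in for the texture model).
    basis = np.ones((frame.size, params.size))
    return basis @ params - frame.ravel()

def landmark_residual(params, landmarks):
    # Assumption: landmark positions are a trivial function of the first two
    # parameters (stand-in for the shape and affine geometrical models).
    return np.tile(params[:2], len(landmarks)) - landmarks.ravel()

def temporal_residual(params, prev_params):
    # Penalise abrupt parameter changes between successive frames.
    return params - prev_params

def fitting_cost(params, frame, landmarks, prev_params,
                 w_global=1.0, w_local=1.0, w_temporal=0.1):
    # Weighted sum of the three fitting criteria named in the abstract.
    return (w_global * np.sum(global_appearance_residual(params, frame) ** 2)
            + w_local * np.sum(landmark_residual(params, landmarks) ** 2)
            + w_temporal * np.sum(temporal_residual(params, prev_params) ** 2))

# Toy tracking loop: each frame is fitted starting from the previous frame's
# estimate, i.e. the parameters are estimated temporally over the sequence.
rng = np.random.default_rng(0)
params = np.zeros(4)
for frame in rng.random((5, 8, 8)):          # five synthetic 8x8 frames
    landmarks = rng.random((3, 2))           # three synthetic 2D landmarks
    params = minimize(fitting_cost, params,
                      args=(frame, landmarks, params)).x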


    Title:

    Modelling faces dynamically across views and over time


    Contributors:
    Yongmin Li (author) / Shaogang Gong (author) / Liddell, H. (author)


    Publication date:

    01.01.2001


    Format / extent:

    755533 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Modelling Faces Dynamically across Views and Over Time

    Li, Y. / Gong, S. / Liddell, H. et al. | British Library Conference Proceedings | 2001



    Tracking across Multiple Cameras with Disjoint Views

    Javed, O. / Rasheed, Z. / Shafique, K. et al. | British Library Conference Proceedings | 2003


    Tracking across multiple cameras with disjoint views

    Javed, O. / Rasheed, Z. / Shafique, K. et al. | IEEE | 2003