Non-verbal behavior is crucial for positive perception of humanoid robots. If modeled well, it can improve the interaction and leave the user with a positive experience; if modeled poorly, it may impede the interaction and become a source of distraction. Most existing work on modeling non-verbal behavior shows limited variability, because the models employed are deterministic and the generated motion can be perceived as repetitive and predictable. In this paper, we present a novel method for generating a limited set of facial expressions and head movements, based on a probabilistic generative deep learning architecture called Glow. We have implemented a workflow that takes videos directly from YouTube, extracts relevant features, and trains a model that generates gestures which can be realized on a robot without any post-processing. A user study illustrated the importance of having some form of non-verbal behavior: most differences between the ground truth, the proposed method, and a random control were not significant, but the differences that were significant favored the proposed method.
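
The abstract describes training a Glow-style probabilistic generative model on feature vectors extracted from YouTube videos. The sketch below is not the authors' code: it is a minimal, non-sequential normalizing-flow example in PyTorch, with an assumed per-frame feature dimension and placeholder data, intended only to illustrate the kind of likelihood-based training such a model uses (the actual model conditions on preceding motion frames, which is omitted here).

```python
# Minimal, illustrative Glow-style normalizing flow for per-frame motion features.
# NOT the paper's implementation; feature dimension, layer sizes, and the training
# loop are assumptions made purely for illustration.
import math
import torch
import torch.nn as nn

FEATURE_DIM = 54  # assumed size of a per-frame facial/head feature vector


class AffineCoupling(nn.Module):
    """Affine coupling layer: rescales half the features conditioned on the other half."""

    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, (dim - self.half) * 2),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)            # bound the scales for stability
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)           # contribution to the log-Jacobian
        return torch.cat([x1, y2], dim=1), log_det


class TinyFlow(nn.Module):
    """Stack of coupling layers with fixed feature permutations in between."""

    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]

    def forward(self, x):
        total_log_det = torch.zeros(x.shape[0])
        for layer, perm in zip(self.layers, self.perms):
            x = x[:, perm]                    # vary which features get transformed
            x, log_det = layer(x)
            total_log_det = total_log_det + log_det
        return x, total_log_det


def nll(z, log_det):
    """Negative log-likelihood under a standard-normal base distribution."""
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()


# Placeholder for real feature vectors extracted from the videos.
features = torch.randn(256, FEATURE_DIM)
model = TinyFlow(FEATURE_DIM)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    z, log_det = model(features)
    loss = nll(z, log_det)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

At generation time, such a flow is run in the inverse direction from samples of the base distribution, which is what gives the method its non-deterministic, non-repetitive output compared to deterministic regression models.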

    Title:

    Learning Non-verbal Behavior for a Social Robot from YouTube Videos


    Contributors:
    Jonell, Patrik (author) / Kucherenko, Taras (author) / Ekstedt, Erik (author) / Beskow, Jonas (author)

    Publication date:

    2019-01-01


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629