In this paper we introduce a novel dataset, the Multimodal Human-Human-Robot Interactions (MHHRI) dataset, with the aim of studying personality simultaneously in human-human interactions (HHI) and human-robot interactions (HRI), and its relationship with engagement. Multimodal data was collected during a controlled interaction study in which dyadic interactions between two human participants and triadic interactions between two human participants and a robot took place, with interactants asking each other a set of personal questions. Interactions were recorded using two static and two dynamic cameras as well as two biosensors, and metadata was collected by having participants fill in two types of questionnaires: one for assessing their own personality traits and their perceived engagement with their partner (self labels), and one for assessing the personality traits of the other participants taking part in the study (acquaintance labels). As a proof of concept, we present baseline results for personality and engagement classification. Our results show that (i) trends in personality classification performance remain the same with respect to the self and acquaintance labels across the HHI and HRI settings; (ii) for extroversion, the acquaintance labels yield better results than the self labels; and (iii) in general, multimodality yields better performance for the classification of personality traits.
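
As a rough, non-authoritative sketch of the kind of baseline experiment summarised in the abstract, the snippet below fuses two modalities at the feature level and classifies a binary high/low trait label with a linear SVM under cross-validation. The feature arrays, their dimensionalities, the choice of classifier and the evaluation protocol are illustrative assumptions, not the authors' actual pipeline.

    # Minimal sketch (not the authors' pipeline): feature-level fusion of two
    # modalities followed by a linear SVM for binary high/low trait
    # classification. All arrays are synthetic stand-ins for extracted
    # visual and physiological descriptors.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_clips = 120                                   # hypothetical number of interaction clips

    visual_feats = rng.normal(size=(n_clips, 64))   # e.g. pose/face descriptors per clip
    physio_feats = rng.normal(size=(n_clips, 16))   # e.g. biosensor summary statistics
    labels = rng.integers(0, 2, size=n_clips)       # high/low label from self or acquaintance ratings

    # Early (feature-level) fusion: concatenate the per-clip modality features.
    fused = np.hstack([visual_feats, physio_feats])

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

    # Compare unimodal and multimodal inputs with 5-fold cross-validated accuracy.
    for name, X in [("visual only", visual_feats),
                    ("physio only", physio_feats),
                    ("fused", fused)]:
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name:12s} accuracy: {acc:.2f}")

Late fusion (e.g. averaging per-modality classifier scores) would be an equally simple alternative baseline under the same assumptions.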


    Title:

    Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement


    Contributors:

    Celiktutan, O. / Skordos, E. / Gunes, H.


    Publication date:

    2017-08-09


    Remarks:

    Celiktutan, O., Skordos, E. & Gunes, H. 2017, 'Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement', IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2017.2737019



    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English


    Classification:

    DDC: 150 / 629



    Similar items:

    Multimodal Human-Robot Collaboration in Assembly

    Liu, Sichao | BASE | 2022

    Free access

    Safe Multimodal Communication in Human-Robot Collaboration

    Ferrari, Davide / Pupa, Andrea / Signoretti, Alberto et al. | Springer Verlag | 2024



    Toward Multimodal Human-Robot Cooperation and Collaboration

    Perzanowski, Dennis / Brock, Derek / Bugajska, Magdalena et al. | AIAA | 2004


    Multimodal User Feedback During Adaptive Robot-Human Presentations

    Axelsson, Agnes / Skantze, Gabriel | BASE | 2022

    Free access