Most socially assistive robots interact with their users through multiple modalities. Multimodality is an important feature that enables them to adapt to user behavior and to the environment. In this work, we propose a resource-based modality-selection algorithm that adjusts the robot's use of interaction modalities according to the available resources, in order to keep the interaction with the user comfortable and safe. For example, the robot should not enter the board space while the user is occupying it, or speak while the user is speaking. We performed a pilot study in which the robot acted as a caregiver in cognitive training, comparing a system with the proposed algorithm to a baseline system that uses all modalities for all actions unconditionally. The results suggest that the reduced interaction complexity does not significantly affect the user experience and may improve task performance. © 2018 Authors.
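
The abstract describes the modality-selection idea only at a high level; the sketch below illustrates one way a resource-based check could look. It is not the authors' algorithm: the resource names ("speech_channel", "board_space"), the modality-to-resource mapping, and the select_modalities function are all hypothetical.

    # Illustrative sketch of resource-based modality selection.
    # All names here (resources, modalities, function) are hypothetical;
    # the paper's actual algorithm is not reproduced.
    from typing import Dict, List, Set

    # Assumed mapping: each modality needs a set of shared resources.
    MODALITY_RESOURCES: Dict[str, Set[str]] = {
        "speech": {"speech_channel"},
        "arm_gesture": {"board_space"},
        "screen_text": set(),  # assumed to need no shared resource
    }

    def select_modalities(requested: List[str],
                          occupied_by_user: Set[str]) -> List[str]:
        """Keep only the modalities whose resources the user is not occupying."""
        selected = []
        for modality in requested:
            needed = MODALITY_RESOURCES.get(modality, set())
            if needed.isdisjoint(occupied_by_user):  # no conflict with the user
                selected.append(modality)
        return selected

    # Example: the user is speaking, so speech is dropped, but the robot
    # may still gesture over the free board or show text on a screen.
    print(select_modalities(["speech", "arm_gesture", "screen_text"],
                            occupied_by_user={"speech_channel"}))
    # -> ['arm_gesture', 'screen_text']

In this sketch, the baseline condition from the study would correspond to returning the requested list unchanged, regardless of which resources the user occupies.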


    Adaptable multimodal interaction framework for robot-assisted cognitive training

    Taranović, Aleksandar / Jevtić, Aleksandar / Torras, Carme | BASE | 2018

    Free access

    Alert modality selection for alerting a driver

    JOO NICHOLAS FRANK / ALEXANDER RACHEL GRAY / MAHADEVAN SHABIN | Europäisches Patentamt | 2024

    Free access

    Deciding the different robot roles for patient cognitive training

    Andriella, Antonio / Alenyà, Guillem / Hernández-Farigola, Joan et al. | BASE | 2018

    Free access

    Controlling patient participation during robot-assisted gait training

    Koenig, A / Omlin, X / Bergmann, J et al. | BASE | 2011

    Free access