When robots work in social contexts, they are often required to take part in increasingly complex social interactions, which places high demands on their social capabilities. To participate in such interactions, robots must be properly understood by the humans with whom they interact, and how well they are understood depends on how they express themselves and how they are emotionally perceived. The more robots comprehend whom they are interacting with and where the interaction occurs, the better they can adapt their behaviors to different scenarios. This adaptation can potentially optimize how they are perceived and, as a result, make them better equipped to handle complex social interactions.

We humans express ourselves with both verbal and nonverbal communication to convey how we feel, and we do so with a deep understanding of the context of the interaction and of the people we interact with. This task is often trivial when we interact in familiar environments and with people we know. When we no longer have the advantage of familiarity, we must rely on our ability to interpret the cues and signals of the immediate situation. As a path toward improving the affective abilities of robots, it is therefore crucial to focus on the robots' ability to understand the immediate context. Even a simple understanding of the context allows them to adapt both to contextual changes and to the humans they interact with. These abilities are vital for strengthening their affective impact and for successfully introducing them into complex social scenarios.

This dissertation reports the conduct and results of a series of experiments that aim to contribute to our understanding of how to create more impactful and context-aware affective robots. The subject was investigated through robot engineering and experiments in human-robot interaction. Inspired by patterns in human-human interaction, the experimental setup outlined in this dissertation was designed to highlight both the engineering and the behavioral aspects of each human-robot interaction in order to strengthen the affective impact of social robots. In our interactions, we humans express ourselves through multiple modalities: the way we gesture, how we move, how we speak, and even how we look. To understand the specific synergy among these interaction modalities, we designed and constructed two non-humanoid robots. The technical and behavioral implementations of the robots were verified through multiple human-robot interactions, and the results contribute to the research field of affective robots in a number of ways.

The first finding defines a model for systematically assessing and characterizing the affective strengths of a robot. The model is useful both for comparing robots and as a guideline for roboticists in the design phase of affective robots. Applying the model to existing social robots also revealed that the use of multiple interaction modalities in human-robot interactions holds untapped potential. The second finding outlines how the coordination and specific timing of a robot's reactions in an interaction influence the robot's affective impact and alter how humans perceive it. That is, the project showed that when robots respond to a broad variety of input types with proper timing, humans perceive them as having a greater affective impact.
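The assessment model itself is not reproduced in this abstract. As a rough illustration of what a modality-based assessment could look like, the following Python sketch scores a robot per interaction modality and aggregates the scores into coverage and strength measures; the modality list, the 0-1 scale, and both aggregation rules are assumptions made for this example, not the dissertation's published model.

    # Hypothetical sketch of a modality-based assessment model; all names,
    # modalities, and aggregation rules are invented for illustration.
    from dataclasses import dataclass, field

    MODALITIES = ("gesture", "movement", "speech", "appearance")

    @dataclass
    class AffectiveProfile:
        # Per-modality expressiveness scores on a 0-1 scale.
        scores: dict = field(default_factory=dict)

        def coverage(self):
            # Fraction of the modalities the robot uses at all.
            return sum(1 for m in MODALITIES if self.scores.get(m, 0) > 0) / len(MODALITIES)

        def strength(self):
            # Mean expressiveness across all modalities (unused ones count as 0).
            return sum(self.scores.get(m, 0.0) for m in MODALITIES) / len(MODALITIES)

    # Comparing two fictional robots: one specialized in speech alone,
    # one moderately expressive across several modalities.
    specialist = AffectiveProfile({"speech": 0.9})
    generalist = AffectiveProfile({"gesture": 0.5, "movement": 0.5, "speech": 0.5})

    for name, robot in (("specialist", specialist), ("generalist", generalist)):
        print(f"{name}: coverage={robot.coverage():.2f}, strength={robot.strength():.2f}")

In this toy comparison, the robot that spreads moderate expressiveness across several modalities scores higher than the single-modality specialist, mirroring the abstract's point that multiple interaction modalities hold untapped potential.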
The current generation of social robots works fairly well in the contexts for which it was designed. However, these robots rarely adapt their (affective) behavior to changes in the environment, and they often struggle to comprehend the social requirements of their interactions. Throughout our interactions, we humans regulate our behaviors by interpreting subtle cues from the people we interact with and by understanding the constraints of the places in which we interact. This may be as simple as not laughing when someone is sad or lowering our voice in a smaller space. We also adapt to the mood of the people we interact with and adjust our behaviors to the requirements of the physical context. To navigate interactions that demand such contextual comprehension, robots need a subset of the same skills.

In this project, we designed a humanoid robot to facilitate autonomous context awareness through human-robot interactions combined with context-informing cues in the physical environment. We also presented a system that enables robots to adapt to different physical contexts using sensor data that are immediately available in each interaction. We found that simple context awareness in robots can be facilitated using data that are easily attainable from the physical context; the system is applicable to other robots and requires only simple sensors available in most of them. Finally, we developed a method that allows a robot to adapt to different users based on simple sensors and a short-duration interaction. Through our experiments, we found that the speech and movement patterns of humans, gathered in the initial moments of an interaction, were sufficient to distinguish individual users and could potentially facilitate further user adaptation of robot behaviors.

This dissertation argues for the need to shift our perspective on, and approach to, designing, constructing, and testing social robots, with the aim of increasing the affective impact of robots. The approach focuses on simultaneously using multiple simple interaction modalities to convey complex affective information, in contrast to using a single but highly specialized expression modality. It also focuses on combining several sources of context information to gain knowledge of the circumstances of the interaction, and on adapting to those circumstances to create more impactful and believable robots. It aims to outline how simple information obtainable from the immediate interactions between humans and robots can help robots become more context aware and have a stronger affective impact. In our experiments, this simple information consisted of the physical dimensions of the test environment, while the measured human attributes consisted of the speech and movement characteristics of each participant. Using such information may give robots the ability to adapt to changes in the physical context and to meet the user-specific behavioral demands of each interaction.

As a future strategy, this dissertation suggests that robot designers change their perspective on when to use contextual knowledge and lower the requirements placed on systems that provide contextual comprehension. Although a complete, human-like understanding of the current context may not be achievable with current technology, it is beneficial to use the contextual information that is already available, as even simplistic context information can usefully inform affective robot behaviors.
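The abstract gives no implementation details, but the two adaptation ideas described above (scaling behavior to the dimensions of the physical space, and telling users apart from early speech and movement measurements) can be illustrated with a minimal Python sketch. The feature choices, constants, and nearest-template matching rule below are assumptions for illustration only, not the dissertation's published system.

    # Minimal sketch of the two adaptation ideas above; all constants and
    # feature choices are illustrative assumptions.
    import math

    def voice_volume_for_room(room_volume_m3, min_vol=0.2, max_vol=1.0):
        # Scale speech volume to room size: rooms of 10 m^3 or smaller get
        # min_vol, rooms of 100 m^3 or larger get max_vol (arbitrary bounds).
        t = (room_volume_m3 - 10) / (100 - 10)
        return min_vol + (max_vol - min_vol) * max(0.0, min(1.0, t))

    # Per-user templates: (speech rate in words/s, movement speed in m/s),
    # e.g. averaged from the opening seconds of earlier interactions.
    KNOWN_USERS = {
        "user_a": (2.8, 0.4),  # fast talker, slow mover
        "user_b": (1.9, 0.9),  # slow talker, fast mover
    }

    def identify_user(sample):
        # Assign a short observation to the nearest known user template.
        return min(KNOWN_USERS, key=lambda u: math.dist(KNOWN_USERS[u], sample))

    # Example: a 3 x 4 x 2.5 m room and a ten-second observation of a user.
    print(f"speech volume: {voice_volume_for_room(3 * 4 * 2.5):.2f}")  # ~0.38
    print(f"closest user:  {identify_user((2.6, 0.5))}")               # user_a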


    Title:

    Toward Context-Aware, Affective, and Impactful Social Robots


    Contributors:

    Frederiksen, M. R.

    Publication date:

    2021-01-01


    Remarks:

    Frederiksen, M. R. 2021, Toward Context-Aware, Affective, and Impactful Social Robots. ITU-DS, no. 183, IT-Universitetet i København.


    Type of media:

    Book


    Type of material:

    Electronic Resource


    Language:

    English


    Classification :

    DDC: 629




    Similar items:

    Context-Specific Affective and Cognitive Responses to Humanoid Robots

    Jung, Yoonhyuk / Cho, Eunae | BASE | 2018


    Toward affective social interaction in VR

    Jacucci, G. | BASE | 2017


    Context Aware Body Regulation of Redundant Robots

    Mohammadi, Pouya | DataCite | 2020


    Context-Aware 3D Object Anchoring for Mobile Robots Dataset

    Günther, Martin / Ruiz-Sarmiento, José Raúl / Galindo, Cipriano et al. | BASE | 2018
