How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand the actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human–robot collaboration that exploits this close perception–action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and nonverbal communication skills of the robot in a joint assembly task in which the human–robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution.

Funding: Fundação para a Ciência e a Tecnologia (FCT), grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001; European Commission, project JAST (IP-003747).
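The coupled dynamic neural fields mentioned in the abstract are based on Amari-type field equations, in which localized activation peaks self-stabilize through local excitation and surround inhibition. The following Python sketch simulates a single generic field of this type; the parameter values, the Mexican-hat kernel, the sigmoidal rate function and all function names are illustrative assumptions, not the parameterization used in the paper.

import numpy as np

def mexican_hat(x, a_exc=2.0, s_exc=1.0, a_inh=1.0, s_inh=2.0):
    """Lateral interaction kernel: local excitation, broader inhibition."""
    return (a_exc * np.exp(-x**2 / (2 * s_exc**2))
            - a_inh * np.exp(-x**2 / (2 * s_inh**2)))

def simulate_field(n=201, extent=10.0, tau=10.0, h=-2.0, dt=0.1, steps=2000):
    """Euler-integrate tau * du/dt = -u + w * f(u) + S + h on a 1-D field."""
    x = np.linspace(-extent, extent, n)
    dx = x[1] - x[0]
    u = np.full(n, h, dtype=float)            # field starts at resting level h
    w = mexican_hat(x[:, None] - x[None, :])  # interaction matrix w(x - x')
    S = 4.0 * np.exp(-(x - 2.0)**2 / 2.0)     # localized external input (assumed)
    f = lambda u: 1.0 / (1.0 + np.exp(-u))    # sigmoidal firing-rate function
    for _ in range(steps):
        du = -u + (w @ f(u)) * dx + S + h     # dx scales the sum into an integral
        u += (dt / tau) * du
    return x, u

x, u = simulate_field()
print(f"self-stabilized activation peak near x = {x[np.argmax(u)]:.2f}")

In the full architecture, several such fields (e.g., for observed actions, inferred goals and planned actions) would be coupled by feeding the thresholded output of one field as input S into another; the sketch above shows only the single-field dynamics.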
Integrating verbal and nonverbal communication in a dynamic neural field architecture for human–robot interaction
01.05.2010
doi:10.3389/fnbot.2010.00005
Journal article
Electronic resource
English
DDC: 629