For social robots to move and behave appropriately in dynamic and complex social contexts, they need flexible movement behaviors. The inherent complexity of social interaction makes this a difficult property to encode programmatically, so rather than hand-crafting these algorithms, it can be preferable to have the system learn the behaviors. In this project, a framework is created in which an agent learns, through deep reinforcement learning, how to mimic poses, here defined as the most basic case of social movement. The framework aims to be as agent-agnostic as possible and suitable for both real-life robots and virtual agents through an approach called "dancer in the mirror". It uses the learning algorithm Proximal Policy Optimization (PPO) and, as a proof of concept, trains agents both in a virtual environment for the humanoid robot Pepper and as virtual agents in a physics simulation environment. The framework is intended as a simple starting point that can be extended to incorporate increasingly complex tasks. The project shows that the framework is functional for agents learning to mimic poses in a simplified environment.
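
The abstract's two core technical ingredients, a pose-mimicry reward and PPO's clipped policy update, can be sketched compactly. The snippet below is a minimal illustration only, not code from the thesis: the joint-angle pose representation, the exponential reward shaping and its scale parameter, and the epsilon clipping coefficient are all assumptions made for the example.

    # Hypothetical sketch: a pose-mimicry reward plus PPO's clipped
    # surrogate objective. Names and constants are illustrative, not
    # taken from the thesis itself.
    import numpy as np

    def mimicry_reward(agent_pose, target_pose, scale=5.0):
        """Highest when the agent's joint angles match the target's.

        Both poses are vectors of joint angles (radians); the exponential
        maps the mean squared error into (0, 1].
        """
        mse = np.mean((agent_pose - target_pose) ** 2)
        return float(np.exp(-scale * mse))

    def ppo_clip_objective(ratio, advantage, epsilon=0.2):
        """PPO's clipped surrogate, where ratio = pi_new(a|s) / pi_old(a|s)."""
        unclipped = ratio * advantage
        clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
        return float(np.mean(np.minimum(unclipped, clipped)))

    # Toy usage: a five-joint agent close to, then far from, a target pose.
    target = np.zeros(5)
    print(mimicry_reward(target + 0.05, target))  # near 1.0 (good match)
    print(mimicry_reward(target + 1.0, target))   # near 0.0 (poor match)
    print(ppo_clip_objective(np.array([0.9, 1.3]), np.array([1.0, -0.5])))

In the "dancer in the mirror" setting described above, target_pose would presumably come from the pose the agent observes, which is what lets the same reward apply to a physical Pepper robot or a simulated body.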





    Title: A Deep Reinforcement Learning Framework where Agents Learn a Basic form of Social Movement

    Contributors:

    Publication date: 2018-01-01

    Type of media: Theses

    Type of material: Electronic Resource

    Language: English

    Classification: DDC: 006 / 629



    Similar titles:

    Deep Reinforcement Learning Applied to Airport Surface Movement Planning

    Tien, Shin-Lai Alex / Tang, Huang / Kirk, Daniel et al. | IEEE | 2019




    Utilizing Reinforcement Learning to Learn Safe Conflict Resolution

    Waddell, Brandon / Balachandran, Swee / Slagel, Tanner et al. | NTRS | 2020


    Development of people mass movement simulation framework based on reinforcement learning

    Pang, Yanbo / Kashiyama, Takehiro / Yabe, Takahiro et al. | Elsevier | 2020