Occlusions, a restricted field of view, and limited resolution all constrain a robot's ability to sense its environment from a single observation. In these cases, the robot first needs to actively query multiple observations and accumulate information before it can complete a task. In this paper, we cast this problem of active vision as active inference, which states that an intelligent agent maintains a generative model of its environment and acts in order to minimize its surprise, or expected free energy, according to this model. We apply this to an object-reaching task for a 7-DOF robotic manipulator with an in-hand camera that scans the workspace. We propose a novel generative model based on deep neural networks that fuses multiple views into an abstract representation and is trained from data by minimizing variational free energy. We validate our approach experimentally on a reaching task in simulation, in which the robotic agent starts without any knowledge of its workspace. At each step, the next view pose is chosen by evaluating the expected free energy. We find that minimizing the expected free energy gives rise to exploratory behavior when the target object is not yet in view, and moves the end effector to the correct reach position once the target is located. Similar to an owl scavenging for prey, the robot naturally prefers higher ground for exploring, approaching its target once it is located.
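For reference, the two quantities the abstract refers to can be written in their standard active-inference form; the exact variant used in the paper may differ. The generative model over observations o and latent states s is trained by minimizing the variational free energy

F = E_{q(s)}[\ln q(s) - \ln p(o, s)] = D_{KL}[q(s) \,\|\, p(s \mid o)] - \ln p(o),

and each candidate view pose \pi is scored by its expected free energy, which decomposes into an epistemic and an instrumental term,

G(\pi) = E_{q(o, s \mid \pi)}[\ln q(s \mid \pi) - \ln p(o, s \mid \pi)] \approx -E_{q(o \mid \pi)}\big[D_{KL}[q(s \mid o, \pi) \,\|\, q(s \mid \pi)]\big] - E_{q(o \mid \pi)}[\ln p(o)].

The epistemic term rewards view poses that are expected to reduce uncertainty about the workspace, which drives exploration while the target is out of view; the instrumental term rewards outcomes that match the agent's preferences, which drives the approach once the target is located.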





    Title:

    Active vision for robot manipulators using the free energy principle


    Contributors:

    Publication date:

    2021-01-01


    Remarks:

    Frontiers in Neurorobotics; ISSN: 1662-5218



    Type of media:

    Article (Journal)


    Type of material:

    Electronic Resource


    Language:

    English



    Classification:

    DDC: 629



    Similar items:

    Robot Manipulators

    Y. P. Popov | NTIS | 1975



    Robot Manipulators

    Bräunl, Thomas | Springer Verlag | 2022



    Energy-efficient trajectory planning for robot manipulators

    Lorenz, Michael | TIBKAT | 2021

    Free access