A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total expected volume of unknown space that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which had not been considered in previous work on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure to reduce the number of invalid Kinect V2 points is also presented. The viability of the approach has been demonstrated in a real setup in which the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm.
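
The abstract's key mechanism is that candidate views are scored by how well they observe the border between a segmented object's known surface and still-unknown space, rather than by the raw volume of unknown space they would reveal. The following is a minimal illustrative sketch of that idea, assuming a three-state voxel grid and a crude conical view frustum; the function names, grid encoding, and parameters (fov_cos, max_range) are hypothetical and not taken from the paper.

    # Illustrative sketch only (not the authors' implementation): score candidate
    # views by how many voxels on an object's known/unknown border they observe.
    # Assumes a three-state voxel grid; all names and parameters are hypothetical.
    import numpy as np

    UNKNOWN, FREE, OCCUPIED = 0, 1, 2

    def object_border_voxels(grid, object_mask):
        """Return occupied object voxels that touch unknown space.

        grid: (X, Y, Z) int array of UNKNOWN/FREE/OCCUPIED labels.
        object_mask: boolean array marking voxels assigned to the object
        by some point cloud segmentation step (not modeled here).
        """
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        border = []
        for v in np.argwhere(object_mask & (grid == OCCUPIED)):
            for off in offsets:
                n = v + off
                if (n >= 0).all() and (n < grid.shape).all() \
                        and grid[tuple(n)] == UNKNOWN:
                    border.append(v)  # at least one unknown 6-neighbor
                    break
        return np.asarray(border, dtype=float).reshape(-1, 3)

    def view_score(border_voxels, cam_pos, cam_dir, fov_cos=0.8, max_range=40.0):
        """Count border voxels inside a conical frustum of a candidate view.

        cam_pos / cam_dir: camera position and unit viewing direction, both
        (3,) arrays in voxel coordinates. No occlusion test is performed.
        """
        rel = border_voxels - cam_pos
        dist = np.linalg.norm(rel, axis=1)
        ok = (dist > 0.0) & (dist < max_range)
        cos_ang = np.zeros_like(dist)
        cos_ang[ok] = (rel[ok] @ cam_dir) / dist[ok]
        return int(np.count_nonzero(ok & (cos_ang > fov_cos)))

The next-best view would then be the candidate pose with the highest score. The actual system additionally relies on KinectFusion's volumetric model for visibility reasoning, which this sketch omits.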


Title: Contour-based next-best view planning from point cloud segmentation of unknown objects

Publication date: 2018-01-01

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English

Classification: DDC 629




Similar titles:

Surfel-Based Next Best View Planning

    Monica, Riccardo / Aleotti, Jacopo | BASE | 2018


    A 3D Robot Self Filter for Next Best View Planning

    Monica, Riccardo / Aleotti, Jacopo | IEEE | 2019


    Discovery, segmentation and reactive grasping of unknown objects

    Schiebener, David / Schill, Julian / Asfour, Tamim | IEEE | 2012


    Unsupervised segmentation of unknown objects in complex environments

    Asif, U. | British Library Online Contents | 2016