A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total expected volume of unknown space that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which has not been considered in previous work on next-best view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure to reduce Kinect V2 invalid points is also presented. The viability of the approach has been demonstrated in a real setup where the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best view algorithm.
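
To illustrate the contour-based idea summarized above, the sketch below labels object voxels that border unknown space in a volumetric grid and ranks candidate camera positions by how many of these border voxels fall inside a simple viewing cone. The grid encoding, the scoring heuristic, and the candidate poses are illustrative assumptions, not the method or parameters described in the paper, which additionally relies on point cloud segmentation and KinectFusion.

import numpy as np

# Assumed voxel states (not from the paper): 0 = free, 1 = unknown, 2 = object surface
FREE, UNKNOWN, OBJECT = 0, 1, 2

def contour_voxels(grid):
    """Return object voxels that touch at least one unknown voxel.

    These 'contour' voxels approximate the border of an incompletely
    observed object, which the next view should try to cover."""
    contour = []
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for idx in np.argwhere(grid == OBJECT):
        for d in dirs:
            n = idx + d
            if np.all(n >= 0) and np.all(n < grid.shape) and grid[tuple(n)] == UNKNOWN:
                contour.append(tuple(idx))
                break
    return np.array(contour)

def score_view(view_pos, contour, fov_cos=0.7, max_range=15.0):
    """Toy visibility score: count contour voxels inside a cone aimed at the
    contour centroid (no occlusion test, which a real planner would need)."""
    if len(contour) == 0:
        return 0
    view_dir = contour.mean(axis=0) - view_pos
    view_dir = view_dir / np.linalg.norm(view_dir)
    score = 0
    for v in contour:
        ray = v - view_pos
        dist = np.linalg.norm(ray)
        if 0 < dist <= max_range and np.dot(ray / dist, view_dir) >= fov_cos:
            score += 1
    return score

def next_best_view(grid, candidate_views):
    """Pick the candidate pose that sees the most contour voxels."""
    contour = contour_voxels(grid)
    scores = [score_view(np.asarray(v, dtype=float), contour) for v in candidate_views]
    return candidate_views[int(np.argmax(scores))], max(scores)

if __name__ == "__main__":
    # Tiny synthetic scene: a half-observed object face surrounded by unknown space.
    grid = np.full((20, 20, 20), UNKNOWN, dtype=np.uint8)
    grid[:, :, :10] = FREE            # space already swept by the camera
    grid[8:12, 8:12, 9] = OBJECT      # visible face of the object
    candidates = [(10, 10, 0), (0, 10, 10), (10, 0, 10), (19, 19, 19)]
    best, score = next_best_view(grid, candidates)
    print("next-best view:", best, "covers", score, "contour voxels")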





    Title: Contour-based next-best view planning from point cloud segmentation of unknown objects

    Publication date: 01.01.2018

    Media type: Article (journal)

    Format: Electronic resource

    Language: English

    Classification: DDC 629




    Similar items:

    Surfel-Based Next Best View Planning
    Monica, Riccardo / Aleotti, Jacopo | BASE | 2018
    Free access

    Planning for unknown objects by autonomous vehicles
    FRAZZOLI EMILIO / QIN BAOXING | Europäisches Patentamt | 2022
    Free access

    Unsupervised segmentation of unknown objects in complex environments
    Asif, U. | British Library Online Contents | 2016

    Planning for unknown objects by an autonomous vehicle
    FRAZZOLI EMILIO / QIN BAOXING | Europäisches Patentamt | 2019
    Free access