The ability to perform successful robot-to-human handovers has the potential to improve robot capabilities in circumstances involving symbiotic human-robot collaboration. Recent computer vision research has shown that object affordance segmentation can be trained on large hand-labeled datasets and performs well in task-oriented grasping pipelines. However, producing such datasets and training on them can be time-consuming and resource-intensive. In this paper, we eliminate the need for training on such datasets by proposing a novel approach in which training occurs on a synthetic dataset that transfers accurately to real-world robotic manipulation scenarios. The synthetic training dataset contains 30,245 RGB images with ground-truth affordance masks and bounding boxes with class labels for each rendered object. The object set used for rendering consists of 21 object classes covering 10 affordance classes. We propose a variant of AffordanceNet enhanced with domain randomization on the generated dataset to perform affordance segmentation without the need for fine-tuning on real-world data. Our approach outperforms the state-of-the-art method on synthetic data by 23% and achieves performance levels similar to other methods trained on massive, hand-labeled RGB datasets and fine-tuned on real images from the experimental setup. We demonstrate the effectiveness of our approach on a collaborative robot setup with an end-to-end robotic handover pipeline using various objects in real-world scenarios. Code, the synthetic training dataset, and supplementary material will be made publicly available.
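
As a concrete illustration of the domain-randomization idea described in the abstract, the sketch below applies random photometric perturbations to each synthetic render before it is passed to the segmentation network. This is a minimal, assumed example built on standard torchvision transforms; the authors' actual randomization scheme and parameter ranges are not given here, so the pipeline name and values shown are illustrative only.

    # Hypothetical domain-randomization pipeline for synthetic RGB renders.
    # The perturbation types and magnitudes are assumptions, not the authors' settings.
    from torchvision import transforms

    domain_randomization = transforms.Compose([
        # Randomize colour statistics so the network cannot latch onto the renderer's palette.
        transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
        # Occasionally drop colour information entirely.
        transforms.RandomGrayscale(p=0.1),
        # Occasionally blur to mimic camera defocus and sensor noise.
        transforms.RandomApply([transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.3),
        # Convert the PIL render to a CHW float tensor in [0, 1].
        transforms.ToTensor(),
    ])

    # Usage: augmented = domain_randomization(rendered_pil_image)

Perturbing colour, saturation, and sharpness in this way is one common means of helping a model trained purely on rendered images generalize to real camera frames, in line with the abstract's claim of segmenting affordances without fine-tuning on real data.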





    Title:

    Learning to Segment Object Affordances on Synthetic Data for Task-oriented Robotic Handovers



    Publication date:

    2022-12-01


    Notes:

    Christensen, A. D., Lehotský, D., Jørgensen, M. W. & Chrysostomou, D. 2022, Learning to Segment Object Affordances on Synthetic Data for Task-oriented Robotic Handovers. In The 33rd British Machine Vision Conference (BMVC). British Machine Vision Association, The 33rd British Machine Vision Conference, London, United Kingdom, 21/11/2022. <https://bmvc2022.mpi-inf.mpg.de/544/>


    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Classification:

    DDC: 629



    MACHINE LEARNING CONTROL OF OBJECT HANDOVERS

    WAY JAN / CHRISTOPHER JASON PAXTON / CHAO YU-WEI et al. | Europäisches Patentamt | 2022

    Free access

    Using Affordances to Improve Robotic Understanding Based on Deep Learning

    Yi, Chang’an / Chen, Haotian / Zhong, Jingtang et al. | Springer Verlag | 2022


    KPAM: KeyPoint Affordances for Category-Level Robotic Manipulation

    Manuelli, Lucas / Gao, Wei / Florence, Peter et al. | TIBKAT | 2022


    Using Object Affordances to Improve Object Recognition

    Castellini, Claudio / Tommasi, Tatiana / Noceti, Nicoletta et al. | Deutsches Zentrum für Luft- und Raumfahrt (DLR) | 2011

    Free access

    Relational affordances for multiple-object manipulation

    Moldovan, B. | British Library Online Contents | 2018