Can a robot grasp an unknown object without seeing it? In this paper, we present a tactile-sensing-based approach to this challenging problem of grasping novel objects without prior knowledge of their location or physical properties. Our key idea is to combine touch-based object localization with tactile-based re-grasping. To train our learning models, we created a large-scale grasping dataset, including more than 30K RGB frames and over 2.8 million tactile samples from 7800 grasp interactions with 52 objects. Moreover, we propose an unsupervised auto-encoding scheme to learn a representation of tactile signals. This learned representation yields a significant improvement of 4–9% over prior work on a variety of tactile perception tasks. Our system consists of two steps. First, our touch-localization model sequentially “touch-scans” the workspace and uses a particle filter to aggregate beliefs from multiple hits of the target. It outputs an estimate of the object’s location, from which an initial grasp is established. Next, our re-grasping model learns to progressively improve grasps with tactile feedback based on the learned features. This network learns to estimate grasp stability and to predict adjustments for the next grasp. Re-grasping is thus performed iteratively until our model identifies a stable grasp. Finally, we demonstrate extensive experimental results on grasping a large set of novel objects using tactile sensing alone. Furthermore, when applied on top of a vision-based policy, our re-grasping model significantly boosts the overall accuracy by 10.6%. We believe this is the first attempt at learning to grasp with only tactile sensing and without any prior object knowledge. For the supplementary video and dataset, see: cs.cmu.edu/GraspingWithoutSeeing.
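The touch-localization step in the abstract, sequentially probing the workspace and fusing binary contact events with a particle filter, can be illustrated with a minimal sketch. Everything below (the 2-D workspace model, the contact likelihoods, the noise parameters, and all function names) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Minimal sketch of particle-filter touch localization, assuming a 2-D
# workspace and a binary contact signal per probe. All function names,
# likelihoods, and noise parameters are illustrative, not from the paper.

def init_particles(n, workspace):
    """Uniformly seed candidate object locations over the workspace."""
    (xmin, xmax), (ymin, ymax) = workspace
    return np.column_stack([np.random.uniform(xmin, xmax, n),
                            np.random.uniform(ymin, ymax, n)])

def update(particles, probe_xy, contact, contact_radius=0.03):
    """Re-weight and resample particles given one touch-scan outcome."""
    dist = np.linalg.norm(particles - probe_xy, axis=1)
    near = dist < contact_radius
    # A hit makes nearby hypotheses likely; a miss makes them unlikely.
    weights = np.where(near == contact, 0.9, 0.1)
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    # Small jitter prevents the filter from collapsing onto one point.
    return particles[idx] + np.random.normal(0.0, 0.005, particles.shape)

def estimate(particles):
    """Point estimate of the object location: the particle mean."""
    return particles.mean(axis=0)
```

A touch-scan loop would alternate probing with `update` until the particle spread falls below a threshold, then pass `estimate(particles)` to the grasp planner. The iterative re-grasping stage can be sketched in the same hedged spirit, with `stability_net` and `adjust_net` standing in for the paper's learned stability and adjustment models; their interfaces here are assumptions made for illustration:

```python
# Minimal sketch of the iterative re-grasping loop; `stability_net` and
# `adjust_net` are placeholders for the learned models, and their exact
# interfaces here are assumptions made for illustration.

def regrasp_until_stable(grasp, execute_and_sense, stability_net,
                         adjust_net, threshold=0.9, max_iters=10):
    """Refine a grasp from tactile feedback until it is predicted stable."""
    for _ in range(max_iters):
        tactile = execute_and_sense(grasp)      # close gripper, read sensors
        if stability_net(tactile) > threshold:  # predicted grasp stability
            return grasp                        # keep the stable grasp
        grasp = grasp + adjust_net(tactile)     # apply predicted adjustment
    return grasp                                # best effort after max_iters
```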





Title:

    Learning to Grasp Without Seeing


Additional title information:

    Springer Proceedings in Advanced Robotics


Contributors:
Xiao, Jing (Editor) / Kröger, Torsten (Editor) / Khatib, Oussama (Editor) / Murali, Adithyavairavan (Author) / Li, Yin (Author) / Gandhi, Dhiraj (Author) / Gupta, Abhinav (Author)

Conference:

International Symposium on Experimental Robotics, Buenos Aires, Argentina, November 05–08, 2018



Publication date:

    2020-01-23


Format / Extent:

    12 pages





Media type:

Article/Chapter (Book)


Format:

Electronic resource


Language:

English




Similar titles:


    Displays for seeing without looking.

    Vallerie, L. L. | NTRS | 1966


    Learning to Recognize and Grasp Objects

    Pauli, J. | British Library Online Contents | 1998


    Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual‐Arm Grasp Recovery

Averta, Giuseppe / Barontini, Federica / Valdambrini, Irene et al. | BASE | 2022

Open access

    Domain Adaptation Grasp Network for Novel Object Grasp Detection

    Cai, Xiangting / Xu, Xin / Ren, Shuai et al. | Springer Verlag | 2022