The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models trained with statistical methods on visual features extracted from camera images. These images must come from very large visual datasets in order to cope with problems such as changing illumination, viewpoint, etc. We propose to also include in an object model a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping the visual features of an object to the kinematic features of a hand grasping it. In practice, the function is learned via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we evaluate the method experimentally, showing that a standard object classifier working on both sets of features (visual and motor) achieves a significantly better recognition rate than a visual-only classifier.
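
To make the pipeline concrete, the following is a minimal sketch in Python with scikit-learn of the two-stage approach the abstract describes: a regressor maps visual features to motor (hand kinematic) features, and a classifier is then trained on the concatenated feature sets. All names, feature dimensions, and model choices (ridge regression, a linear SVM) are illustrative assumptions rather than the paper's actual setup, and the arrays here are random stand-ins for the grasping database.

    # Hypothetical sketch of the visual-to-motor pipeline; models and
    # dimensions are assumptions, not the paper's exact configuration.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-ins for the real data: visual descriptors per object image,
    # hand kinematic features recorded during grasping, and class labels.
    X_visual = rng.normal(size=(200, 64))    # e.g. appearance descriptors
    Y_motor = rng.normal(size=(200, 22))     # e.g. hand joint angles at grasp
    labels = rng.integers(0, 5, size=200)    # object classes

    X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
        X_visual, Y_motor, labels, test_size=0.25, random_state=0)

    # Step 1: learn the affordance mapping (visual -> motor) by regression.
    affordance = Ridge(alpha=1.0).fit(X_tr, Y_tr)

    # Step 2: predict motor features from vision alone, so that no real
    # grasp is needed when recognizing a new image.
    motor_tr = affordance.predict(X_tr)
    motor_te = affordance.predict(X_te)

    # Step 3: compare a visual-only classifier with one trained on the
    # concatenated visual + predicted-motor features.
    clf_visual = LinearSVC().fit(X_tr, y_tr)
    clf_both = LinearSVC().fit(np.hstack([X_tr, motor_tr]), y_tr)

    print("visual only :", clf_visual.score(X_te, y_te))
    print("visual+motor:", clf_both.score(np.hstack([X_te, motor_te]), y_te))

On the random placeholder data above both scores will hover around chance; the point is only the shape of the pipeline: at test time the motor features are inferred from the image, never measured.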





    Title:

    Using Object Affordances to Improve Object Recognition


    Contributors:


    Publication date:

    2011



    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    German




    Similar titles:

    Visual object-action recognition: Inferring object affordances from human demonstration

    Kjellstrom, H. / Romero, J. / Kragic, D. | British Library Online Contents | 2011


    Relational affordances for multiple-object manipulation

    Moldovan, B. | British Library Online Contents | 2018



    Using Affordances to Improve Robotic Understanding Based on Deep Learning

    Yi, Chang’an / Chen, Haotian / Zhong, Jingtang et al. | Springer Verlag | 2022


    Learning to Segment Object Affordances on Synthetic Data for Task-oriented Robotic Handovers

    Christensen, Albert Daugbjerg / Lehotský, Daniel / Jørgensen, Marius Willemoes et al. | BASE | 2022
