The problem of object recognition has not yet been solved in its general form. The most successful approach to date relies on object models built by training a statistical method on visual features extracted from camera images. These images must come from very large datasets in order to cope with variations in illumination, viewpoint, and so on. We propose to also include, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping the visual features of an object to the kinematic features of a hand while grasping it; in practice, the function is obtained by regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we evaluate it experimentally, showing that a standard object classifier working on both sets of features (visual and motor) achieves a significantly better recognition rate than a visual-only classifier.
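To make the two-stage pipeline concrete, the following is a minimal sketch in Python, assuming scikit-learn-style feature matrices. The feature dimensions, the ridge regressor, and the SVM classifier are illustrative assumptions standing in for whatever regression and classification methods the paper actually uses; only the overall structure (learn a visual-to-motor map, then classify on joint features) comes from the abstract.

    # Sketch of visuomotor object classification: predict grasp kinematics
    # from vision, then classify on visual + motor features. All data here
    # is synthetic and all model choices are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical training data: visual descriptors, recorded hand
    # kinematics (available only at training time, e.g. from a human
    # grasping database), and object class labels.
    X_vis_train = rng.normal(size=(200, 64))   # visual features
    Y_mot_train = rng.normal(size=(200, 20))   # hand kinematic features
    y_train = rng.integers(0, 5, size=200)     # object class labels

    # 1) Learn the affordance map: visual features -> grasp kinematics.
    affordance = Ridge(alpha=1.0).fit(X_vis_train, Y_mot_train)

    # 2) Train the classifier on concatenated visual + motor features.
    X_joint_train = np.hstack([X_vis_train, Y_mot_train])
    clf = SVC(kernel="rbf").fit(X_joint_train, y_train)

    # 3) At test time only images are available: predict the motor
    #    features from vision, then classify the joint representation.
    X_vis_test = rng.normal(size=(50, 64))
    Y_mot_pred = affordance.predict(X_vis_test)
    labels = clf.predict(np.hstack([X_vis_test, Y_mot_pred]))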
Using Object Affordances to Improve Object Recognition
IEEE Transactions on Autonomous Mental Development, Vol. 3, No. 3, pp. 207-215
2011
Journal article
Electronic resource
German
Visual object-action recognition: Inferring object affordances from human demonstration (British Library Online Contents, 2011)
Relational affordances for multiple-object manipulation (British Library Online Contents, 2018)
Perceiving, learning, and exploiting object affordances for autonomous pile manipulation (British Library Online Contents, 2014)
Using Affordances to Improve Robotic Understanding Based on Deep Learning (Springer Verlag, 2022)