Deep learning-based affordance research is motivated by the need to achieve higher degrees of automation and accuracy. In contrast, the concept of affordance originated to explain the intuitive, intelligent responses of humans interacting with their environment. With a state-of-the-art supervised deep learning model, achieving high accuracy on an object affordance recognition benchmark is now routine. Does this ensure that the model understands object affordance correctly and can explain its decisions to users? Deep learning algorithms and architectures for affordance research must possess an understanding of their own. We present a brief study of the existing literature on explainable affordance research and offer a few suggestions for improving the explainability and interpretability of current deep learning methods for affordance detection. Three pretrained vision models are considered for supervised object affordance classification without affordance heatmaps as a teaching signal. The outputs of these models, obtained from experiments on a modified CAD-120 dataset, are fed to Smooth Grad-CAM++ for post hoc explainability analysis. These experiments lead to a proposed framework for object affordance classification.
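The abstract describes a two-stage pipeline: a pretrained vision backbone used for supervised affordance classification, followed by a post hoc saliency method (Smooth Grad-CAM++). Below is a minimal sketch of that kind of pipeline, assuming the third-party pytorch-grad-cam package and a torchvision ResNet-50 backbone; the class count, target layer, and smoothing option are illustrative assumptions, not the authors' actual configuration (the library's aug_smooth test-time-augmentation option only approximates the noise averaging used by Smooth Grad-CAM++).

```python
# Minimal sketch: pretrained backbone -> affordance classification -> post hoc CAM.
# Assumes the third-party pytorch-grad-cam package; layer and class choices are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

NUM_AFFORDANCE_CLASSES = 10  # hypothetical number of affordance labels

# 1) Pretrained vision model, re-headed for affordance classification
#    (no affordance heatmaps are used as a teaching signal).
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_AFFORDANCE_CLASSES)
model.eval()

# 2) Classify an input image (a preprocessed 224x224 RGB tensor is assumed here).
input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a modified CAD-120 frame
with torch.no_grad():
    predicted_class = model(input_tensor).argmax(dim=1).item()

# 3) Post hoc explanation with Grad-CAM++ on the last convolutional block;
#    aug_smooth adds test-time-augmentation smoothing of the saliency map.
cam = GradCAMPlusPlus(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(predicted_class)],
              aug_smooth=True)[0]  # (224, 224) saliency map in [0, 1]
print(predicted_class, heatmap.shape)
```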


    Title: Visual Affordance Recognition: A Study on Explainability and Interpretability for Human Robot Interaction
    Published in:
    Publication date: 2024-07-24
    Size: 21 pages
    Type of media: Article/Chapter (Book)
    Type of material: Electronic Resource
    Language: English




    Affordance-based altruistic robotic architecture for human–robot collaboration

    Imre, M. / Öztop, Erhan / Nagai, Y. et al. | BASE | 2019


    Affordance Based Disambiguation and Validation in Human-Robot Dialogue

    Wölfel, Kim / Henrich, Dominik | Springer Verlag | 2020


    Visual recognition of pointing gestures for human-robot interaction

    Nickel, K. / Stiefelhagen, R. | British Library Online Contents | 2007


    GPU-accelerated affordance cueing based on visual attention

    May, S. / Klodt, M. / Rome, E. et al. | BASE | 2007


    Affordance-based indirect task communication for astronaut-robot cooperation

    Heikkilä, S. S. / Halme, A. / Schiele, A. | British Library Online Contents | 2012