Anticipating human actions in front of autonomous vehicles is a challenging task. Several papers have recently proposed model architectures that combine multiple input features to predict pedestrian crossing actions. This paper focuses specifically on using images of the pedestrian's context as an input feature. We present several spatio-temporal model architectures that use standard CNN and Transformer modules as a backbone for pedestrian action anticipation. However, the objective of this paper is not to surpass state-of-the-art benchmarks but rather to analyze the positive and negative predictions of these models. We therefore provide insights into the explainability of vision-based Transformer models in the context of pedestrian action prediction. We highlight cases where a model achieves correct quantitative results yet falls short of providing human-like qualitative explanations, emphasizing the importance of investing in explainability for pedestrian action anticipation problems.
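
As a rough illustration of the kind of backbone the abstract describes, the sketch below combines a per-frame CNN feature extractor with a Transformer encoder over a sequence of pedestrian-context crops to produce a crossing/not-crossing score. It is a minimal, hypothetical example: the module choices (ResNet-18, two encoder layers), input format, and hyperparameters are assumptions for illustration only, not the architecture evaluated in the paper.

```python
# Hypothetical spatio-temporal CNN + Transformer backbone for pedestrian
# crossing-action anticipation. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ContextCrossingModel(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2, seq_len=16):
        super().__init__()
        cnn = resnet18(weights=None)
        cnn.fc = nn.Identity()               # keep the 512-d per-frame feature
        self.cnn = cnn
        self.proj = nn.Linear(512, d_model)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)    # crossing / not-crossing logit

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) sequence of pedestrian-context crops
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1))           # (b*t, 512)
        feats = self.proj(feats).view(b, t, -1) + self.pos[:, :t]
        feats = self.temporal(feats)                      # temporal reasoning
        return self.head(feats.mean(dim=1)).squeeze(-1)   # pooled clip logit


# Usage: 8 clips of 16 frames at 224x224 resolution
model = ContextCrossingModel()
logits = model(torch.randn(8, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([8])
```

Using a frozen or pretrained CNN per frame and letting the Transformer handle only the temporal dimension is one common design choice for this kind of anticipation task; the paper's actual configurations may differ.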


Title: Analysis Over Vision-Based Models for Pedestrian Action Anticipation
Contributors:
Publication date: 2023-09-24
Size: 2994541 bytes
Type of media: Conference paper
Type of material: Electronic Resource
Language: English




    DPCIAN: A Novel Dual-Channel Pedestrian Crossing Intention Anticipation Network

    Yang, Biao / Wei, Zhiwen / Hu, Hongyu et al. | IEEE | 2024


    Anticipation of Heaviness in Vision and Grasp

Steckner, C. / Bülthoff, Heinrich H. | British Library Conference Proceedings | 2005


Pedestrian Action Prediction Device and Pedestrian Action Prediction Method

Kindo, Toshiki / Ogawa, Masahiro / Funayama, Ryuji | European Patent Office | 2018
