Extensive recent work has proven data-driven approaches effective in pedestrian action prediction. However, frame-level annotation of pedestrian actions requires significant manpower and time. In this paper, we propose a simple yet effective contrastive learning framework that enables pedestrian action prediction models to be trained on data without action labels. First, we regard disentangled visual observations, such as appearance, motion, and trajectory, as multiple modalities. Then we construct a joint latent space in which multimodal features from the same sample are encouraged to be close, whereas features from different samples are encouraged to be far from each other. Since most existing models share a similar architecture composed of separate feature extractors and fusion modules, our framework can be applied directly to existing methods to boost the feature extractors. We pretrained state-of-the-art models on datasets without action labels, nuScenes and BDD100k, and evaluated these models on PIE, JAAD, and TITAN. Quantitative results show that the pretrained models, with only the fusion parameters fine-tuned, can compete with or even outperform models trained entirely on the target dataset.
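
The abstract describes the contrastive objective only at a high level. The sketch below illustrates one common way such a cross-modal objective is realized (an InfoNCE-style loss); the specific loss form, temperature, batch size, and feature dimensions here are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn.functional as F

def cross_modal_infonce(z_a, z_b, temperature=0.1):
    # z_a, z_b: (batch, dim) features of two modalities for the same batch of samples.
    # Features of the SAME sample (diagonal pairs) are pulled together in the joint
    # latent space, while features of DIFFERENT samples are pushed apart.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                    # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positive pairs lie on the diagonal
    # Symmetric loss: modality A -> B and modality B -> A
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Hypothetical usage with three disentangled observations treated as modalities:
# appearance, motion, and trajectory; loss summed over all modality pairs.
appearance, motion, trajectory = (torch.randn(32, 128) for _ in range(3))
feats = [appearance, motion, trajectory]
loss = sum(cross_modal_infonce(feats[i], feats[j])
           for i in range(len(feats)) for j in range(i + 1, len(feats)))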


    Title: Contrasting Disentangled Partial Observations for Pedestrian Action Prediction

    Contributors:

    Publication date: 2024-06-02

    Size: 1010012 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Disentangled Bad Weather Removal GAN for Pedestrian Detection

    Yang, Hanting / Carballo, Alexander / Takeda, Kazuya | IEEE | 2022


    PEDESTRIAN ACTION PREDICTION DEVICE AND PEDESTRIAN ACTION PREDICTION METHOD

    KINDO TOSHIKI / OGAWA MASAHIRO / FUNAYAMA RYUJI | European Patent Office | 2018


    Multi-Modal Hybrid Architecture for Pedestrian Action Prediction

    Rasouli, Amir / Yau, Tiffany / Rohani, Mohsen et al. | IEEE | 2022


    Pedestrian path prediction based on body language and action classification

    Quintero, R. / Parra, I. / Llorca, D. F. et al. | IEEE | 2014


    A Revisit of Total Correlation in Disentangled Variational Auto-Encoder with Partial Disentanglement

    Li, Chengrui / Wang, Yunmiao / Wang, Yule et al. | ArXiv | 2025
