Extensive work has shown that data-driven approaches are effective for pedestrian action prediction. However, frame-level annotation of pedestrian actions requires significant manpower and time. In this paper, we propose a simple yet effective contrastive learning framework that enables pedestrian action prediction models to be trained on data without action labels. First, we regard disentangled visual observations, such as appearance, motion and trajectories, as multiple modalities. We then construct a joint latent space in which multimodal features from the same sample are encouraged to be close, while features from different samples are pushed apart. Since most existing models share a similar architecture composed of separate feature extractors and fusion modules, our framework can be applied directly to existing methods to improve the feature extractors. We pretrain state-of-the-art models on datasets without action labels, nuScenes and BDD100k, and evaluate them on PIE, JAAD and TITAN. Quantitative results show that the pretrained models, with only the fusion parameters fine-tuned, can compete with or even outperform models trained entirely on the target dataset.
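
The abstract describes pulling multimodal features of the same sample together in a joint latent space while pushing features of different samples apart. The sketch below shows one plausible way to realize such a cross-modal objective with an InfoNCE-style loss in PyTorch; the function names, modality keys and temperature value are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch of a cross-modal contrastive (InfoNCE-style) objective over
# disentangled observations. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Contrast two modality embeddings of the same batch.

    z_a, z_b: (batch, dim) features from two modality-specific extractors
    projected into a shared latent space. Row i of z_a and row i of z_b
    come from the same sample (positive pair); all other rows are negatives.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: each modality has to retrieve its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multimodal_contrastive_loss(features: dict) -> torch.Tensor:
    """Average the pairwise loss over all modality pairs, e.g.
    features = {"appearance": ..., "motion": ..., "trajectory": ...}."""
    names = list(features)
    losses = [cross_modal_info_nce(features[a], features[b])
              for i, a in enumerate(names) for b in names[i + 1:]]
    return torch.stack(losses).mean()

Under this reading, pretraining on unlabeled driving data such as nuScenes or BDD100k would optimize only the modality-specific feature extractors through this objective, after which the fusion module is fine-tuned with action labels on the target dataset, consistent with the evaluation protocol described in the abstract.
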




    Title:

    Contrasting Disentangled Partial Observations for Pedestrian Action Prediction


    Contributors:
    Feng, Yan (Author) / Carballo, Alexander (Author) / Niu, Yingjie (Author) / Takeda, Kazuya (Author)


    Publication date:

    02.06.2024


    Format / extent:

    1010012 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Disentangled Bad Weather Removal GAN for Pedestrian Detection

    Yang, Hanting / Carballo, Alexander / Takeda, Kazuya | IEEE | 2022


    PEDESTRIAN ACTION PREDICTION DEVICE AND PEDESTRIAN ACTION PREDICTION METHOD

    KINDO TOSHIKI / OGAWA MASAHIRO / FUNAYAMA RYUJI | European Patent Office | 2018

    Free access

    Multi-Modal Hybrid Architecture for Pedestrian Action Prediction

    Rasouli, Amir / Yau, Tiffany / Rohani, Mohsen et al. | IEEE | 2022


    Pedestrian path prediction based on body language and action classification

    Quintero, R. / Parra, I. / Llorca, D. F. et al. | IEEE | 2014


    Handbook for Pedestrian Action

    R. Brambilla / G. Longo | NTIS | 1976