Extensive recent work has shown data-driven approaches to be effective for pedestrian action prediction. However, frame-level annotation of pedestrian actions requires a significant amount of manpower and time. In this paper, we propose a simple yet effective contrastive learning framework that enables pedestrian action prediction models to be trained on data without action labels. First, we regard disentangled visual observations, such as appearance, motion and trajectories, as multiple modalities. Then we construct a joint latent space in which multimodal features from the same sample are encouraged to be close, whereas features from different samples are pushed apart. Since most existing models share a similar architecture composed of separate feature extractors and fusion modules, our framework can be applied directly to existing methods to boost the feature extractors. We pretrained state-of-the-art models on datasets without action labels, nuScenes and BDD100k, and evaluated these models on PIE, JAAD and TITAN. Quantitative results show that the pretrained models, with only the fusion parameters fine-tuned, can compete with or even outperform models that are fully trained on the target dataset.
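The joint latent space described in the abstract can be realized with a standard multimodal contrastive (InfoNCE-style) objective over the disentangled observations. Below is a minimal sketch, assuming PyTorch and per-modality embeddings produced by separate feature extractors; the function name, temperature value, and batch-negative scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: pairwise InfoNCE over three disentangled "modalities"
# (appearance, motion, trajectory). Same-sample features across modalities are
# positives; other samples in the batch serve as negatives.
import torch
import torch.nn.functional as F

def multimodal_info_nce(features, temperature=0.07):
    """features: list of (B, D) tensors, one per modality, same sample order."""
    loss, num_pairs = 0.0, 0
    for i, zi in enumerate(features):
        for j, zj in enumerate(features):
            if i == j:
                continue
            zi_n = F.normalize(zi, dim=1)
            zj_n = F.normalize(zj, dim=1)
            logits = zi_n @ zj_n.t() / temperature          # (B, B) cosine similarities
            targets = torch.arange(zi.size(0), device=zi.device)
            loss = loss + F.cross_entropy(logits, targets)  # positives lie on the diagonal
            num_pairs += 1
    return loss / num_pairs

# Usage: appearance / motion / trajectory embeddings from separate extractors
B, D = 32, 128
z_app, z_mot, z_traj = (torch.randn(B, D) for _ in range(3))
loss = multimodal_info_nce([z_app, z_mot, z_traj])
```

Because the loss only touches the per-modality embeddings, it can pretrain the feature extractors of an existing architecture while the downstream fusion module is fine-tuned separately on labeled data.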
Contrasting Disentangled Partial Observations for Pedestrian Action Prediction
2024 IEEE Intelligent Vehicles Symposium (IV), pp. 2828-2833
2024-06-02
Conference paper
Electronic Resource
English