Predicting pedestrian crossing intention is one of the most crucial and challenging problems for self-driving vehicles. A fast, efficient, and robust vision-based model is therefore required to predict pedestrian crossing as early as possible and to prevent the serious injuries or casualties that may otherwise occur. Transformers have rapidly replaced recurrent neural network (RNN)-based architectures thanks to their better generalization and faster performance. The Vision Transformer (ViT) is a transformer variant that has also proven efficient in image classification, outperforming state-of-the-art convolutional neural networks (CNNs) when trained on large datasets. In this paper, a fully transformer-based architecture is presented to predict pedestrian intention efficiently and with minimal latency. The proposed architecture is composed of two branches: the first handles the non-visual features, while the second handles the visual features. The model is trained on the Joint Attention in Autonomous Driving (JAAD) dataset, and different variants of the architecture are tested to find the optimal model. Experimental analysis shows that the proposed model outperforms the previous state-of-the-art techniques, achieving the highest accuracy (83%) and F1 score (64%) on the test set while maintaining the lowest processing time.
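The abstract only names the two branches, so the following is a minimal, hypothetical PyTorch sketch of such a two-branch transformer design, not the authors' implementation. The choice of non-visual inputs (assumed here to be per-frame bounding-box or trajectory features), the ViT-style visual branch, the feature dimensions, and all module names are illustrative assumptions.

```python
# Hypothetical two-branch transformer for crossing-intent prediction.
# Branch 1 encodes a temporal sequence of non-visual features; branch 2
# encodes a pedestrian image crop with a ViT-style patch transformer.
import torch
import torch.nn as nn


class NonVisualBranch(nn.Module):
    """Transformer encoder over a temporal sequence of non-visual features."""
    def __init__(self, feat_dim=6, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):                 # x: (B, T, feat_dim)
        h = self.encoder(self.proj(x))    # (B, T, d_model)
        return h.mean(dim=1)              # temporal pooling -> (B, d_model)


class VisualBranch(nn.Module):
    """ViT-style encoder: patchify the crop, then a transformer encoder."""
    def __init__(self, img_size=224, patch=16, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        self.patchify = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, img):               # img: (B, 3, H, W)
        tokens = self.patchify(img).flatten(2).transpose(1, 2)  # (B, N, d_model)
        h = self.encoder(tokens + self.pos)
        return h.mean(dim=1)              # (B, d_model)


class CrossingIntentModel(nn.Module):
    """Fuse both branches and predict crossing vs. not crossing."""
    def __init__(self, d_model=128):
        super().__init__()
        self.non_visual = NonVisualBranch(d_model=d_model)
        self.visual = VisualBranch(d_model=d_model)
        self.head = nn.Linear(2 * d_model, 2)

    def forward(self, seq, img):
        fused = torch.cat([self.non_visual(seq), self.visual(img)], dim=-1)
        return self.head(fused)           # logits over {not crossing, crossing}


if __name__ == "__main__":
    model = CrossingIntentModel()
    logits = model(torch.randn(2, 16, 6), torch.randn(2, 3, 224, 224))
    print(logits.shape)                   # torch.Size([2, 2])
```

In practice the visual branch would likely be a pretrained ViT and the non-visual branch would ingest JAAD annotations such as bounding boxes, pose, and ego-vehicle speed; the fused representation then feeds a binary crossing/not-crossing classifier, but those details are assumptions beyond what the abstract states.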
Pedestrian Crossing Intent Prediction Using Vision Transformers
2024-09-24
Conference paper
English