Pedestrian safety is a primary concern in autonomous driving. The under-representation of vulnerable groups in today's pedestrian datasets points to an urgent need for a dataset of vulnerable road users. To help train well-rounded visual detectors for self-driving and to drive research toward more accurate detection of vulnerable pedestrians, this paper introduces a new dataset: the Bowling Green Vulnerable Pedestrian (BGVP) dataset. The dataset comprises four classes: Children without Disability, Elderly without Disability, With Disability, and Non-Vulnerable. It consists of images collected from the public domain with manually annotated bounding boxes. On the proposed dataset, we train and test five classic or state-of-the-art object detectors: YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet. Our results indicate that YOLOX and YOLOv4 perform best on our dataset: YOLOv4 scores 0.7999 and YOLOX scores 0.7779 on the mAP@0.5 metric, while YOLOX outperforms YOLOv4 by 3.8% on the mAP@0.5:0.95 metric. Overall, all five detectors perform well on the With Disability class and poorly on the Elderly without Disability class. YOLOX consistently outperforms all other detectors on per-class mAP@0.5:0.95, obtaining 0.5644, 0.5242, 0.4781, and 0.6796 for the Children without Disability, Elderly without Disability, Non-Vulnerable, and With Disability categories, respectively. Our dataset and code are available at https://github.com/devvansh1997/BGVP.
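The per-class and overall mAP figures above follow the standard COCO evaluation protocol. Below is a minimal sketch of how such numbers could be reproduced with pycocotools, assuming the BGVP ground truth and a detector's predictions have been exported to COCO JSON; the file names ("bgvp_val.json", "yolox_dets.json") are hypothetical and not taken from the paper.

```python
# Minimal sketch: overall and per-class COCO-style mAP, assuming annotations
# and detections are in COCO JSON format (an assumption, not stated in the paper).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("bgvp_val.json")               # hypothetical ground-truth file
coco_dt = coco_gt.loadRes("yolox_dets.json")  # hypothetical detector output

# Overall metrics: stats[0] is mAP@0.5:0.95, stats[1] is mAP@0.5.
ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()
print(f"mAP@0.5:0.95 = {ev.stats[0]:.4f}, mAP@0.5 = {ev.stats[1]:.4f}")

# Per-class mAP@0.5:0.95 (e.g., the four BGVP classes): restrict the
# evaluation to one category ID at a time and re-run.
for cat_id in coco_gt.getCatIds():
    name = coco_gt.loadCats(cat_id)[0]["name"]
    ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev.params.catIds = [cat_id]
    ev.evaluate()
    ev.accumulate()
    ev.summarize()
    print(f"{name}: mAP@0.5:0.95 = {ev.stats[0]:.4f}")
```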
Title: Comparison of Deep Object Detectors on a New Vulnerable Pedestrian Dataset
Date: 24.09.2024
File size: 2435882 bytes
Document type: Article (Conference)
Medium: Electronic resource
Language: English
Similar items:
- Subway Station Pedestrian Dataset (DataCite, 2024)
- Car Pedestrian Interaction (CPI) dataset (DataCite, 2024)
- Vehicle to Pedestrian Communications for Protection of Vulnerable Road Users (British Library Conference Proceedings, 2014)