Roadside cooperative perception systems can enhance safety for connected vehicles. Accurately estimating the state of traffic participants at intersections is challenging because the detection areas of roadside sensors overlap. We propose a purely vision-based roadside cooperative perception system with a multi-camera data fusion method for intersections. First, object detection and tracking are performed for each camera. Second, the positions of detected targets are transformed from image coordinates to WGS-84 coordinates through spatial synchronization. Third, an asynchronous multi-camera fusion method is proposed: trackers and detections are associated using historical trajectories, and a position filter resolves the position inconsistencies that arise when the same traffic participant is detected by multiple cameras at an intersection. The state of each traffic participant is then estimated with a Kalman filter. Finally, the fused information about detected traffic participants is shared with connected vehicles through a roadside unit. We verify the proposed method at an intersection in Changsha. The results show that the method achieves 0.8 m localization accuracy, 0.4 m/s velocity accuracy, and 3-degree heading-angle accuracy, which meets the requirements of V2X applications. This study provides an effective way for connected vehicles to reduce collisions at intersections.
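Two steps outlined in the abstract lend themselves to a brief illustration: projecting an image detection onto the road plane and converting it to WGS-84 coordinates, and smoothing the resulting track with a Kalman filter. The Python sketch below is not the authors' implementation; the homography H, the reference coordinates LAT0/LON0, and all noise parameters are illustrative assumptions for a constant-velocity model with position-only measurements, with heading derived from the velocity estimate.

import math
import numpy as np

# Assumed calibration: homography mapping pixel (u, v, 1) to local ENU metres
# (east, north, 1) on the road plane, plus the WGS-84 anchor of the ENU origin.
H = np.array([[0.02,  0.0, -12.0],
              [0.0,  -0.02,  9.5],
              [0.0,   0.0,   1.0]])
LAT0, LON0 = 28.2282, 112.9388          # illustrative reference point (Changsha)
EARTH_RADIUS = 6378137.0                # WGS-84 equatorial radius in metres


def pixel_to_wgs84(u, v):
    """Map an image detection (u, v) to (lat, lon) via the ground-plane homography."""
    east, north, w = H @ np.array([u, v, 1.0])
    east, north = east / w, north / w
    lat = LAT0 + math.degrees(north / EARTH_RADIUS)
    lon = LON0 + math.degrees(east / (EARTH_RADIUS * math.cos(math.radians(LAT0))))
    return lat, lon


class ConstantVelocityKF:
    """Constant-velocity Kalman filter over local ENU position (metres)."""

    def __init__(self, east, north, pos_var=1.0, vel_var=4.0, meas_var=0.64):
        self.x = np.array([east, north, 0.0, 0.0])    # [east, north, v_east, v_north]
        self.P = np.diag([pos_var, pos_var, vel_var, vel_var])
        self.R = np.eye(2) * meas_var                 # measurement noise (position only)
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])

    def predict(self, dt, accel_std=1.0):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        # Simplified diagonal process noise scaled by an assumed acceleration variance.
        q = accel_std ** 2
        Q = np.diag([0.25 * dt**4, 0.25 * dt**4, dt**2, dt**2]) * q
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, east, north):
        z = np.array([east, north])
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def heading_deg(self):
        """Heading from the velocity estimate, in degrees clockwise from north."""
        v_e, v_n = self.x[2], self.x[3]
        return math.degrees(math.atan2(v_e, v_n)) % 360.0

In the pipeline described by the abstract, asynchronous per-camera detections would first be time-aligned and associated with existing tracks before the update step; that association and position-filtering logic is omitted from this sketch.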
A Roadside Cooperative Perception System with Multi-Camera Fusion at an Intersection
24.09.2023
3364163 bytes
Conference paper
Electronic resource
English
Roadside friction monitoring for cooperative intersection safety
Tema Archiv | 2010
Design, Implementation, and Evaluation of a Roadside Cooperative Perception System
Transportation Research Record | 2022