Cooperative driving automation has attracted considerable attention for its potential to improve traffic safety. Accurate knowledge of each vehicle's position is the cornerstone of the information fusion required for cooperative driving tasks. However, the inherent errors of a vehicle's self-localization system often need to be corrected before cooperative perception and downstream tasks can be performed reliably. Leveraging intermediate features shared by other Connected Automated Vehicles (CAVs), we propose an end-to-end learning localization framework that estimates the relative pose error between the ego vehicle and a CAV. We investigate factors that may influence learning performance and validate the algorithm on a simulation dataset. The proposed method is compared with a traditional relative localization method based on point cloud matching. Notably, our framework effectively corrects relative pose errors even when a vehicle exhibits large initial localization inaccuracies, and it can be integrated into a cooperative perception system.
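The core idea, learning a relative pose correction from the intermediate features shared by a CAV, can be illustrated with a minimal sketch. The module below is a hypothetical PyTorch regression head, not the paper's actual architecture: all layer choices, sizes, and the (dx, dy, dyaw) output parameterization are assumptions made for illustration. It fuses ego and CAV bird's-eye-view feature maps (the CAV features assumed to be warped into the ego frame using the CAV's noisy self-reported pose) and regresses a planar pose-error correction.

```python
import torch
import torch.nn as nn


class RelativePoseErrorHead(nn.Module):
    """Hypothetical head that regresses the relative pose error (dx, dy, dyaw)
    between the ego vehicle and one CAV from their shared intermediate
    feature maps. Layer sizes are illustrative only."""

    def __init__(self, feat_channels: int = 64, hidden: int = 128):
        super().__init__()
        # Fuse ego and CAV features channel-wise, then compress spatially.
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * feat_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress translation error (m) and heading error (rad).
        self.regressor = nn.Linear(hidden, 3)

    def forward(self, ego_feat: torch.Tensor, cav_feat: torch.Tensor) -> torch.Tensor:
        # ego_feat, cav_feat: (B, C, H, W) feature maps in a common BEV frame.
        fused = torch.cat([ego_feat, cav_feat], dim=1)
        pooled = self.encoder(fused).flatten(1)
        return self.regressor(pooled)  # (B, 3): estimated (dx, dy, dyaw)


if __name__ == "__main__":
    head = RelativePoseErrorHead(feat_channels=64)
    ego = torch.randn(2, 64, 100, 100)
    cav = torch.randn(2, 64, 100, 100)
    print(head(ego, cav).shape)  # torch.Size([2, 3])
```

In such a setup the predicted correction would be applied to the CAV's shared features (or pose) before fusion, so that cooperative perception operates on spatially aligned information; this end-to-end coupling is what distinguishes the approach from point-cloud-matching baselines.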
End-to-end Cooperative Localization via Neural Feature Sharing
2024-06-02
1502868 bytes
Conference paper
Electronic Resource
English