Perception serves as the vital cornerstone of autonomous driving systems, directly influencing the decision-making and control performance of vehicles. The rich semantic and color information of images, the low cost of cameras, and the support of deep learning give visual perception a pivotal role. However, capturing data with only the on-board camera leaves occlusions and blind spots. With the development of vehicle-to-everything (V2X) communication, information can be exchanged over vehicle-to-vehicle (V2V) links, and cooperative perception among connected autonomous vehicles (CAVs) built on this interaction has become a new trend. This study investigates visual perception based on Transformer attention and enhances the encoder-decoder through multi-scale feature extraction and query initialization. Furthermore, a visual cooperative perception method driven by V2V interaction is proposed: through spatial registration, data association, and multi-source cooperation, it achieves far-sight and see-through perception enhancement. Experiments were conducted on a real-world dataset and in the PreScan simulator, evaluating the proposed method under various traffic states and densities. The results demonstrate that visual cooperative perception improves the perception performance of CAVs and adapts to more complex traffic environments.
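The cooperative pipeline outlined in the abstract (spatial registration, data association, multi-source cooperation) can be sketched in a few lines. The following Python snippet is an illustrative approximation under stated assumptions, not the authors' implementation: the function names, the 4x4 transform T_cav_to_ego, and the 2 m association threshold are all assumptions made for the example.

# Illustrative sketch (assumed names/values, not the paper's code): registration,
# association, and fusion of CAV detections with ego detections in the ego frame.
import numpy as np

def register(cav_dets, T_cav_to_ego):
    # Spatial registration: map CAV detection centers (N, 3) into the ego frame
    # using a homogeneous 4x4 transform obtained from shared pose information.
    pts = np.hstack([cav_dets, np.ones((len(cav_dets), 1))])
    return (T_cav_to_ego @ pts.T).T[:, :3]

def associate(ego_dets, cav_dets, dist_thresh=2.0):
    # Data association: greedy nearest-neighbour matching on center distance (m).
    matches, unmatched_cav = [], []
    for j, c in enumerate(cav_dets):
        if len(ego_dets) == 0:
            unmatched_cav.append(j)
            continue
        d = np.linalg.norm(ego_dets - c, axis=1)
        i = int(np.argmin(d))
        if d[i] < dist_thresh:
            matches.append((i, j))
        else:
            unmatched_cav.append(j)
    return matches, unmatched_cav

def cooperate(ego_dets, cav_dets, T_cav_to_ego):
    # Multi-source cooperation: keep ego detections and add CAV-only detections
    # (occluded from or beyond the ego camera), giving see-through / far-sight.
    cav_in_ego = register(cav_dets, T_cav_to_ego)
    matches, unmatched = associate(ego_dets, cav_in_ego)
    extra = cav_in_ego[unmatched] if unmatched else np.empty((0, 3))
    return np.vstack([ego_dets, extra]), matches

# Toy usage: one object seen by both vehicles, one visible only to the CAV.
T = np.eye(4); T[0, 3] = 15.0              # CAV 15 m ahead of ego, same heading
ego = np.array([[5.0, 1.0, 0.0]])          # object seen by the ego camera
cav = np.array([[-10.0, 1.0, 0.0],         # same object, expressed in CAV frame
                [20.0, -2.0, 0.0]])        # object occluded from the ego view
fused, matched = cooperate(ego, cav, T)
print(fused)                               # ego object plus see-through object at x = 35 m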
V2V Based Visual Cooperative Perception for Connected Autonomous Vehicles: Far-Sight and See-Through
2024 IEEE Intelligent Vehicles Symposium (IV); pp. 2784-2790
2024-06-02
3770987 bytes
Conference paper
Electronic Resource
English