Collaborative perception aims to construct a holistic view of a scene by leveraging complementary information from nearby connected automated vehicles (CAVs), thereby extending the perception range beyond that of any single vehicle. Nonetheless, how to aggregate individual observations sensibly remains an open problem. In this article, we propose a novel vehicle-to-vehicle perception framework, dubbed V2VFormer, with Transformer-based Collaboration (CoTr). Specifically, it re-calibrates feature importance according to position correlation via a Spatial-Aware Transformer (SAT) and then performs dynamic semantic interaction with a Channel-Wise Transformer (CWT). Of note, CoTr is a lightweight, plug-and-play module that can be adapted seamlessly to off-the-shelf 3D detectors with acceptable computational overhead. Additionally, a large-scale cooperative perception dataset, V2V-Set, is further augmented with a variety of driving conditions, providing extensive knowledge for model pretraining. Qualitative and quantitative experiments demonstrate that the proposed V2VFormer achieves state-of-the-art (SOTA) collaboration performance in both simulated and real-world scenarios, outperforming all counterparts by a substantial margin. We expect this work to propel the progress of networked autonomous-driving research.
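The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of one plausible reading of the spatial-then-channel fusion pipeline it describes: attention across agents at each bird's-eye-view (BEV) cell, followed by channel re-calibration. All names (SpatialAwareAttention, ChannelWiseAttention, CoTrFusion), tensor shapes, the max-pooling fusion step, and the squeeze-and-excitation-style channel gate (substituted here for the paper's Channel-Wise Transformer) are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch only; module names, shapes, and fusion choices are
# assumptions, not the V2VFormer reference implementation.
import torch
import torch.nn as nn


class SpatialAwareAttention(nn.Module):
    """Re-weights BEV features across agents at each spatial location."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C, H, W) -- N agents (ego + CAVs) on a shared BEV grid.
        b, n, c, h, w = feats.shape
        # Treat each spatial cell as a batch item; agents form the token axis,
        # so attention scores reflect cross-agent position correlation.
        x = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, n, c)
        out, _ = self.attn(x, x, x)
        return out.reshape(b, h, w, n, c).permute(0, 3, 4, 1, 2)


class ChannelWiseAttention(nn.Module):
    """Squeeze-and-excitation-style gating as a stand-in for the CWT."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) fused map; the gate re-scales each channel.
        return feat * self.gate(feat)


class CoTrFusion(nn.Module):
    """Spatial attention across agents, max-fuse, then channel re-calibration."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = SpatialAwareAttention(channels)
        self.channel = ChannelWiseAttention(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        fused = self.spatial(feats).max(dim=1).values  # (B, C, H, W)
        return self.channel(fused)


if __name__ == "__main__":
    fusion = CoTrFusion(channels=64)
    bev = torch.randn(2, 3, 64, 32, 32)  # 2 scenes, ego + 2 CAVs each
    print(fusion(bev).shape)  # torch.Size([2, 64, 32, 32])
```

A module of this shape would slot between a 3D detector's BEV backbone and its detection head, which is consistent with the abstract's claim that CoTr plugs into off-the-shelf detectors.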
V2VFormer: Vehicle-to-Vehicle Cooperative Perception With Spatial-Channel Transformer
IEEE Transactions on Intelligent Vehicles, vol. 9, no. 2, pp. 3384-3395
2024-02-01
Article (Journal)
Electronic Resource
English