In autonomous driving environment perception, multi-modal fusion plays a pivotal role in enhancing robustness, completeness, and accuracy, thereby extending the performance boundary of the perception system. However, directly applying LiDAR-oriented algorithms to radar-camera fusion introduces significant challenges, namely radar sparsity, the absence of height information, and noise, which cause substantial performance loss. To address these issues, our proposed method, SparseFusion3D, uses a dual-branch feature-level fusion network that fully models sensor interactions, effectively mitigating the adverse effects of radar sparsity and noise on cross-modal association. In addition, to strengthen modal correlations and accuracy while alleviating radar point cloud sparsity and measurement ambiguity, we introduce MSPCP, which compensates for point cloud offsets. We further integrate Radar Painter, which exploits image information to enhance MSPCP. SparseFusion3D achieves competitive performance compared with previous radar-camera fusion models: an approximately 1.5x inference speedup over dense query methods at comparable accuracy, and a 20.1% improvement over the baseline approach.
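The abstract does not detail how Radar Painter attaches image information to radar points. A common way to realize such image-guided point decoration is to project each radar point into the camera image and sample per-point image features there. The sketch below is illustrative only, assuming a PyTorch setting; the function and argument names (paint_radar_points, cam_intrinsics, cam_extrinsics) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of radar point "painting": project radar points into the
# image plane and attach sampled image features to each point. Names are
# illustrative, not from the paper.
import torch
import torch.nn.functional as F


def paint_radar_points(radar_xyz, image_feats, cam_intrinsics, cam_extrinsics):
    """Attach image features to radar points by perspective projection.

    radar_xyz:      (N, 3) radar points in the ego/vehicle frame.
    image_feats:    (C, H, W) feature map extracted from the camera image.
    cam_intrinsics: (3, 3) camera intrinsic matrix K.
    cam_extrinsics: (4, 4) transform from the ego frame to the camera frame.
    Returns (N, 3 + C) painted points (original xyz plus sampled image features).
    """
    n = radar_xyz.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    ones = torch.ones(n, 1)
    pts_cam = (cam_extrinsics @ torch.cat([radar_xyz, ones], dim=1).T).T[:, :3]

    # Perspective projection onto the image plane (pixel coordinates).
    # In practice, points behind the camera should also be masked out.
    uvz = (cam_intrinsics @ pts_cam.T).T
    uv = uvz[:, :2] / uvz[:, 2:3].clamp(min=1e-6)

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    _, h, w = image_feats.shape
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1)
    grid = grid.view(1, 1, n, 2)

    # Bilinearly sample a feature vector per point; points projecting outside
    # the image receive zeros.
    sampled = F.grid_sample(image_feats.unsqueeze(0), grid,
                            align_corners=True, padding_mode='zeros')
    sampled = sampled.view(-1, n).T  # (N, C)

    return torch.cat([radar_xyz, sampled], dim=1)
```

In a full pipeline of this kind, the painted points would presumably feed the radar branch of the dual-branch fusion network before cross-modal association; the exact interface used by SparseFusion3D is not specified in the abstract.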
SparseFusion3D: Sparse Sensor Fusion for 3D Object Detection by Radar and Camera in Environmental Perception
IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, pp. 1524-1536
01.01.2024
3175105 bytes
Journal article
Electronic resource
English
Similar results:
Deep Learning-based Radar, Camera, and Lidar Fusion for Object Detection. TIBKAT, 2022.
Object Detection Method Based on Radar-Camera Fusion and Electronic Apparatus. Europäisches Patentamt, 2022.
A Concise Camera-Radar Fusion Framework for Object Detection and Data Association. SAE Technical Papers, 2022.