3D object detection has attracted increasing interest as a crucial component of autonomous driving systems. While recent works have explored various multi-modal fusion methods to enhance accuracy and robustness, fusing multi-view images with high-definition (HD) maps remains largely unexplored. Inspired by our previous work, we introduce HD maps into camera-based detection and design a new framework for this purpose. We first analyze the function of HD maps in object detection to understand their benefits and the rationale for fusing them. This analysis identifies key disparities in view, semantics, and scale, which motivate MIM, a framework for HD Maps Incorporated Multi-view 3D object detection. HD maps are semantically enriched by sampling unlabeled areas and encoding them into map features. In parallel, multi-view images are transformed into bird's-eye-view (BEV) features using the adopted baseline. The two sets of features are then fused with attention mechanisms to align their scales. Experiments on the nuScenes dataset demonstrate that MIM outperforms camera-based methods. An in-depth analysis further investigates how each semantic layer of the HD map affects detection. The results underscore the operational intricacies of HD maps in perception, setting the stage for future research. Code is available at https://github.com/WHU-xjs/MIM-3D-Det.
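The abstract describes fusing camera BEV features with encoded HD-map features via attention to align their scales. As a minimal sketch of that idea only (the paper's actual architecture is in the linked repository, and the module name, shapes, and residual design below are assumptions, not the authors' implementation), cross-attention fusion on a shared BEV grid could look like this in PyTorch:

```python
import torch
import torch.nn as nn

class MapBEVFusion(nn.Module):
    """Hypothetical sketch: fuse encoded HD-map features with camera BEV
    features via cross-attention, as the abstract outlines. Names and
    shapes are illustrative assumptions."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Camera BEV queries attend to HD-map keys/values so the two
        # modalities can be aligned despite their scale disparities.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, bev_feats, map_feats):
        # bev_feats: (B, H*W, C) camera features in bird's-eye view
        # map_feats: (B, H*W, C) encoded HD-map features on the same grid
        fused, _ = self.cross_attn(query=bev_feats, key=map_feats, value=map_feats)
        return self.norm(bev_feats + fused)  # residual fusion

# Toy usage on a 128x128 BEV grid with 256 channels.
bev = torch.randn(2, 128 * 128, 256)
hd_map = torch.randn(2, 128 * 128, 256)
out = MapBEVFusion()(bev, hd_map)
print(out.shape)  # torch.Size([2, 16384, 256])
```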
MIM: High-Definition Maps Incorporated Multi-View 3D Object Detection
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 3, pp. 3989-4001
2025-03-01
2455510 bytes
Article (Journal)
Electronic Resource
English