DG-BEV: Depth-Guided BEV 3D Object Detection with Sparse LiDAR Data
Conference paper | 2024-09-24 | English

In autonomous driving, Bird's Eye View (BEV) methods have attracted significant attention for their effective use of multi-view, multi-modal data. However, current BEV detection frameworks still face two challenges: insufficient incorporation of image semantic information and the sparsity of LiDAR data. This paper introduces a depth-guided BEV (DG-BEV) 3D detection method comprising a depth-guided view transformation module (DG-VTM) and a vision-based depth completion module, which together mitigate the limitations of sparse LiDAR data and strengthen overall perception. Additionally, a multi-scale semantic enhancement module (MSEM) is proposed to integrate semantic details into the detection process holistically and at a fine granularity. DG-VTM and MSEM are designed as plug-and-play units, making them adaptable to various BEV detection models. On the nuScenes validation set, DG-BEV reaches an NDS of 71.87%, exceeding several state-of-the-art methods.
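
The record does not describe DG-VTM's internals, so the following is only a minimal sketch of the general depth-guided view-transformation idea it builds on (predicting a per-pixel depth distribution and using it to lift image features into a 3D frustum, in the spirit of Lift-Splat). All class, parameter, and tensor names here are hypothetical, and the final splat onto the BEV grid via camera intrinsics/extrinsics is omitted.

```python
# Hypothetical sketch of a depth-guided lift step; not the paper's DG-VTM.
import torch
import torch.nn as nn

class DepthGuidedLift(nn.Module):
    """Lift 2D image features into a frustum of 3D features using a
    predicted per-pixel categorical depth distribution."""
    def __init__(self, in_channels: int, feat_channels: int, num_depth_bins: int):
        super().__init__()
        self.num_depth_bins = num_depth_bins
        self.feat_channels = feat_channels
        # A single 1x1 conv predicts both depth logits and context features.
        self.head = nn.Conv2d(in_channels, num_depth_bins + feat_channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C_in, H, W) image features from the backbone.
        out = self.head(x)
        depth_logits = out[:, : self.num_depth_bins]   # (B, D, H, W)
        context = out[:, self.num_depth_bins :]        # (B, C, H, W)
        depth_prob = depth_logits.softmax(dim=1)       # per-pixel depth distribution
        # Outer product: weight the context features by the depth probability,
        # yielding one feature vector per (depth bin, pixel) frustum cell.
        frustum = depth_prob.unsqueeze(1) * context.unsqueeze(2)  # (B, C, D, H, W)
        return frustum

# Usage: lift a dummy feature map; the resulting frustum would then be
# splatted onto the BEV grid using camera geometry (not shown).
lift = DepthGuidedLift(in_channels=256, feat_channels=64, num_depth_bins=59)
feats = torch.randn(2, 256, 16, 44)
print(lift(feats).shape)  # torch.Size([2, 64, 59, 16, 44])
```

A depth-completion module, as the abstract suggests, would sharpen the depth distribution above by supervising or replacing it with depths densified from sparse LiDAR returns, rather than relying on image cues alone.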