As an emerging technology and a relatively affordable sensor, 4D imaging radar has already been shown to be effective for 3D object detection in autonomous driving [1]. Nevertheless, the sparsity and noisiness of 4D radar point clouds hinder further performance improvement, and in-depth studies of its fusion with other modalities are lacking. On the other hand, as a new image view transformation strategy, sampling has been applied in a few image-based detectors and shown to outperform the widely used depth-based splatting proposed in Lift-Splat-Shoot (LSS) [2], even without image depth prediction [3]. However, the potential of sampling has not been fully exploited. This paper therefore investigates the sampling strategy for camera and 4D imaging radar fusion-based 3D object detection. In the proposed LiDAR Excluded Lean (LXL) model, predicted image depth distribution maps and radar 3D occupancy grids are generated from image perspective-view (PV) features and radar bird's-eye-view (BEV) features, respectively. They are fed into the core of LXL, called radar occupancy-assisted depth-based sampling, to aid image view transformation.
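As a rough illustration of the mechanism the abstract describes, the following PyTorch sketch shows one way radar occupancy-assisted depth-based sampling could be realized: image PV features are sampled into a 3D voxel grid, with each sample weighted by the predicted depth probability and by the radar occupancy of the voxel. The function name, tensor shapes, and pinhole-projection details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def occupancy_assisted_sampling(img_feat, depth_dist, radar_occ,
                                voxel_xyz, intrinsics, extrinsics,
                                depth_min=1.0, depth_max=51.0):
    """Hypothetical sketch of occupancy-assisted depth-based sampling.

    img_feat:   (C, H, W)    image perspective-view features
    depth_dist: (D, H, W)    per-pixel depth distribution (softmax over D bins)
    radar_occ:  (Z, Y, X)    radar-derived 3D occupancy grid in [0, 1]
    voxel_xyz:  (Z, Y, X, 3) voxel centers in ego coordinates
    intrinsics: (3, 3)       camera intrinsic matrix
    extrinsics: (4, 4)       ego-to-camera transform
    returns:    (C, Z, Y, X) weighted 3D feature volume
    """
    C, H, W = img_feat.shape
    Z, Y, X, _ = voxel_xyz.shape

    # Project voxel centers into the camera frame.
    pts = voxel_xyz.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=1)   # (N, 4)
    cam = (extrinsics @ pts_h.T).T[:, :3]                          # (N, 3)
    z = cam[:, 2]
    depth = z.clamp(min=1e-5)
    uv = (intrinsics @ cam.T).T
    u, v = uv[:, 0] / depth, uv[:, 1] / depth

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    gu = 2.0 * u / (W - 1) - 1.0
    gv = 2.0 * v / (H - 1) - 1.0
    grid = torch.stack([gu, gv], dim=-1).view(1, -1, 1, 2)         # (1, N, 1, 2)

    # Bilinearly sample image features at the projected locations.
    feat = F.grid_sample(img_feat[None], grid,
                         align_corners=True)[0, :, :, 0]           # (C, N)

    # Sample the depth probability at each voxel's projected depth by
    # treating the (D, H, W) distribution as a 3D volume.
    gd = 2.0 * (depth - depth_min) / (depth_max - depth_min) - 1.0
    grid3d = torch.stack([gu, gv, gd], dim=-1).view(1, -1, 1, 1, 3)
    p_depth = F.grid_sample(depth_dist[None, None], grid3d,
                            align_corners=True)[0, 0, :, 0, 0]     # (N,)

    # Weight sampled features by depth probability and radar occupancy,
    # zeroing voxels behind the camera or projecting outside the image.
    valid = (z > 1e-3) & (gu.abs() <= 1) & (gv.abs() <= 1) & (gd.abs() <= 1)
    weight = p_depth * radar_occ.reshape(-1) * valid.float()
    return (feat * weight[None]).view(C, Z, Y, X)
```

The key design point the sketch tries to capture is that, unlike depth-based splatting, sampling pulls features from the image for each 3D location, and the radar occupancy acts as a second, geometry-grounded weight on top of the predicted depth distribution.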
LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion
02.06.2024
783673 bytes
Conference paper
Electronic resource
English
Similar items:
RADAR LIDAR OBJECT DETECTION USING RADAR AND LIDAR FUSION | Europäisches Patentamt | 2023
Deep Learning-based Radar, Camera, and Lidar Fusion for Object Detection | TIBKAT | 2022
Spatial aware object detection by LIDAR and camera fusion based super-resolution | Europäisches Patentamt | 2024