This paper presents a real-time approach to detecting and localizing surrounding vehicles in urban driving scenes. We propose a multimodal fusion framework that processes both 3D LIDAR point clouds and RGB images to obtain robust vehicle positions and sizes in a Bird's Eye View (BEV). Semantic segmentation of the RGB images is obtained with our efficient Convolutional Neural Network (CNN) architecture, ERFNet. Our proposal takes advantage of the accurate depth information provided by the LIDAR and the detailed semantic information extracted from the camera. The method has been evaluated on the KITTI object detection benchmark. Experiments show that our approach outperforms or is on par with other state-of-the-art proposals even though our CNN was trained on a different dataset, demonstrating good generalization across domains, a key requirement for autonomous driving.
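The abstract does not detail the fusion step, but the pipeline it describes (LIDAR depth plus per-pixel semantics, combined into BEV detections) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the calibration matrices `T_velo_to_cam` and `P_rect`, and the `vehicle_class_id` parameter are assumptions; the final clustering of vehicle points into per-object BEV boxes is omitted.

```python
import numpy as np

def fuse_lidar_with_segmentation(points, seg_mask, T_velo_to_cam, P_rect,
                                 vehicle_class_id=1):
    """Project LIDAR points into the image, keep those falling on pixels
    labeled as vehicle by the segmentation CNN, and return their BEV
    (x, y) coordinates in the LIDAR frame. (Illustrative sketch only.)

    points           : (N, 3) LIDAR points in the sensor frame.
    seg_mask         : (H, W) per-pixel class ids from the segmentation CNN.
    T_velo_to_cam    : (4, 4) extrinsic calibration, LIDAR -> camera (assumed).
    P_rect           : (3, 4) rectified camera projection matrix (assumed).
    vehicle_class_id : class id assigned to vehicles (assumed).
    """
    H, W = seg_mask.shape

    # Homogeneous coordinates, transformed into the camera frame.
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_velo_to_cam @ pts_h.T).T

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.0
    pts_cam, pts_lidar = pts_cam[in_front], points[in_front]

    # Project onto the image plane.
    uvw = (P_rect @ pts_cam.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Discard projections that fall outside the image.
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, pts_lidar = u[inside], v[inside], pts_lidar[inside]

    # Transfer the semantic label from each pixel to its 3D point.
    is_vehicle = seg_mask[v, u] == vehicle_class_id

    # BEV coordinates: forward (x) and lateral (y) in the LIDAR frame.
    return pts_lidar[is_vehicle][:, :2]
```

Under these assumptions, the returned BEV points would then be grouped (e.g. by clustering) to estimate each vehicle's position and size.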
Vehicle Detection and Localization using 3D LIDAR Point Cloud and Image Semantic Segmentation
2018-11-01
1,945,320 bytes
Conference paper
Electronic Resource
English
Leveraging Smooth Deformation Augmentation for LiDAR Point Cloud Semantic Segmentation
IEEE | 2024
POINT CLOUD SEGMENTATION USING A COHERENT LIDAR FOR AUTONOMOUS VEHICLE APPLICATIONS
European Patent Office | 2023
POINT CLOUD SEGMENTATION USING A COHERENT LIDAR FOR AUTONOMOUS VEHICLE APPLICATIONS
European Patent Office | 2022
POINT CLOUD SEGMENTATION USING A COHERENT LIDAR FOR AUTONOMOUS VEHICLE APPLICATIONS
European Patent Office | 2024