Environment perception for autonomous driving traditionally uses sensor fusion to combine the object detections from various sensors mounted on the car into a single representation of the environment. Non-calibrated sensors result in artifacts and aberrations in the environment model, which makes tasks like free-space detection more challenging. In this study, we improve the LiDAR and camera fusion approach of Levinson and Thrun. We rely on intensity discontinuities and on erosion and dilation of the edge image for increased robustness against shadows and visual patterns, a recurring problem in point-cloud-related work. Furthermore, we use a gradient-free optimizer instead of an exhaustive grid search to find the extrinsic calibration. Hence, our fusion pipeline is lightweight and able to run in real time on a computer in the car. For the detection task, we modify the Faster R-CNN architecture to accommodate hybrid LiDAR-camera data for improved object detection and classification. We test our algorithms on the KITTI data set and on locally collected urban scenarios. We also give an outlook on how radar can be added to the fusion pipeline via velocity matching.
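The calibration step described in the abstract lends itself to a short sketch. The Python snippet below is a minimal, hypothetical illustration of the edge-alignment idea: an eroded and dilated camera edge image, LiDAR intensity discontinuities, and a gradient-free optimizer over the extrinsic parameters. All function names, thresholds, and the scoring function are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of edge-based LiDAR-camera extrinsic calibration.
# Thresholds, kernel sizes, and the scoring function are illustrative assumptions.
import cv2
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

def edge_mask(gray, dilate_iters=2):
    """Camera edge image; opening suppresses thin shadow/texture edges,
    extra dilation tolerates small projection misalignments."""
    edges = cv2.Canny(gray, 50, 150)
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.dilate(cv2.erode(edges, kernel), kernel)
    return cv2.dilate(opened, kernel, iterations=dilate_iters).astype(np.float32) / 255.0

def intensity_discontinuities(points, intensities, thresh=0.1):
    """Keep LiDAR returns whose intensity jumps relative to the previous return
    (assumes points are ordered along the scan)."""
    diff = np.abs(np.diff(intensities, prepend=intensities[0]))
    return points[diff > thresh]

def score(params, edge_img, pts, K):
    """Negative overlap between projected discontinuity points and the edge mask."""
    rvec, tvec = params[:3], params[3:]
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(rvec).as_matrix()
    T[:3, 3] = tvec
    cam = (T @ np.c_[pts, np.ones(len(pts))].T)[:3].T
    cam = cam[cam[:, 2] > 0.1]                      # keep points in front of the camera
    uv = (K @ (cam.T / cam[:, 2])).T[:, :2].astype(int)
    h, w = edge_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return -edge_img[uv[ok, 1], uv[ok, 0]].sum()

# Gradient-free refinement (here Nelder-Mead) instead of an exhaustive grid search;
# x0 is a rough initial extrinsic guess (rotation vector + translation):
# result = minimize(score, x0, args=(edge_mask(gray), pts_edges, K), method="Nelder-Mead")
```

The design choice to maximize overlap of projected intensity edges with a widened camera edge mask is what makes a gradient-free, local optimizer sufficient: the dilation smooths the objective enough that a coarse initial guess converges without evaluating a full grid.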
Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving
2018 IEEE Intelligent Vehicles Symposium (IV), pp. 1632-1638
2018-06-01
6358272 bytes
Conference paper
Electronic Resource
English