This paper presents a lidar–camera multisensor data fusion method that compensates for the shortcomings of purely monocular-camera-based and purely lidar-based navigation systems. Because the lidar and the camera have different imaging principles, their two coordinate systems must be unified before calibration. To fuse and compensate the lidar and camera data at the data layer, the Rodrigues matrix is used to register the measured parameters; high-accuracy registered parameters are then obtained with an improved Danish iteration method with variable weights based on the collinearity equation. Texture seams in the color images are also differentially corrected to produce a high-resolution depth image of the scene, containing the depth information needed by the unmanned vehicle. A new SLAM algorithm is designed by invoking the findContours and contourArea functions of the OpenCV library, and experiments demonstrate the improved performance of the new SLAM method.
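A minimal Python sketch of the three steps named in the abstract, under stated assumptions: it uses OpenCV's cv2.Rodrigues, cv2.findContours, and cv2.contourArea, which are the standard API counterparts of the functions the abstract names, while the rotation vector, translation, weighting constant, and binary mask are hypothetical placeholders rather than values from the paper.

```python
import numpy as np
import cv2

# Step 1 (sketch): unify the lidar and camera coordinate systems with a
# Rodrigues rotation matrix. The rotation vector and translation below are
# hypothetical placeholders, not calibration values from the paper.
rvec = np.array([[0.01], [-0.02], [0.005]], dtype=np.float64)
R, _ = cv2.Rodrigues(rvec)                  # axis-angle vector -> 3x3 matrix
t = np.array([0.10, 0.00, -0.05])           # translation in metres (assumed)

def lidar_to_camera(points):
    """Transform an Nx3 array of lidar points into the camera frame."""
    return points @ R.T + t

# Step 2 (sketch): Danish variable-weight iteration. Observations whose
# collinearity-equation residuals exceed c*sigma are exponentially
# down-weighted on each pass, so gross errors stop distorting the fit.
def danish_weights(residuals, sigma, c=2.0):
    v = np.abs(residuals) / sigma
    return np.where(v <= c, 1.0, np.exp(-(v - c) ** 2))

# Step 3: contour extraction with the OpenCV functions named in the
# abstract (findContours / contourArea) on a binary obstacle mask.
def obstacle_areas(mask):
    """mask: 8-bit single-channel binary image; returns one area per contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.contourArea(c) for c in contours]
```

In the paper the variable weights feed back into the collinearity-equation adjustment until the registered parameters converge; the sketch shows each step in isolation.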
On SLAM Based on Monocular Vision and Lidar Fusion System
2018-08-01
692806 bytes
Conference paper
Electronic Resource
English