We proposed a multi-sensor-fusion-based localization and scene reconstruction method for complex dynamic scenes. Multi-level fusion among the sensors was implemented by fusing data from different sensors in different system modules. In the front end, the camera and the LiDAR assisted each other: the LiDAR point clouds provided 3D information for the image feature points, while an image-based moving-object elimination method removed points belonging to moving objects from the LiDAR point clouds, improving localization accuracy and enabling static 3D scene reconstruction. To further improve localization accuracy, visual loop closure detection and LiDAR loop closure detection were combined to ensure globally consistent scene reconstruction. In the back end, the observation models of the different sensors were integrated into a multi-constraint factor graph solved by nonlinear optimization to obtain the optimal system states. Experimental results demonstrated that the proposed multi-sensor-fusion-based localization and scene reconstruction algorithm operates robustly in multiple complex dynamic scenes.
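As a rough illustration of the front-end step that removes moving-object points from the LiDAR scan, the sketch below projects LiDAR points into the camera image and discards those that land on pixels flagged as dynamic by an image-based detector. All names (remove_dynamic_points, T_cam_lidar, dynamic_mask) are hypothetical and not taken from the paper; the authors' actual elimination method may differ.

```python
import numpy as np

def remove_dynamic_points(points_lidar, T_cam_lidar, K, dynamic_mask):
    """Drop LiDAR points that project onto image pixels flagged as dynamic.

    points_lidar : (N, 3) points in the LiDAR frame.
    T_cam_lidar  : (4, 4) LiDAR-to-camera extrinsic transform.
    K            : (3, 3) camera intrinsic matrix.
    dynamic_mask : (H, W) boolean mask, True on moving-object pixels.
    Returns the static subset of points_lidar.
    """
    # Transform the points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Only points in front of the camera can be checked against the mask.
    in_front = pts_cam[:, 2] > 0.1
    uv = (K @ pts_cam[in_front].T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)

    # A point is discarded only if it projects onto a dynamic-object pixel.
    h, w = dynamic_mask.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    dynamic = np.zeros(in_front.sum(), dtype=bool)
    dynamic[valid] = dynamic_mask[v[valid], u[valid]]

    keep = np.ones(len(points_lidar), dtype=bool)
    keep[np.flatnonzero(in_front)[dynamic]] = False
    return points_lidar[keep]
```

The back-end idea of combining several observation models in one multi-constraint factor graph can likewise be sketched, for example with GTSAM: consecutive poses are linked by both visual-odometry and LiDAR scan-matching between-factors, a loop-closure constraint would be another between-factor over non-consecutive poses, and the graph is solved by Levenberg-Marquardt optimization. The keys, noise values, and measurements below are illustrative only, not the paper's actual formulation.

```python
import numpy as np
import gtsam

X0, X1, X2 = 0, 1, 2  # pose keys for three consecutive keyframes

graph = gtsam.NonlinearFactorGraph()

# Prior on the first pose anchors the trajectory.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
graph.add(gtsam.PriorFactorPose3(X0, gtsam.Pose3(), prior_noise))

# Visual-odometry constraints between consecutive poses (looser noise).
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 6))
graph.add(gtsam.BetweenFactorPose3(
    X0, X1, gtsam.Pose3(gtsam.Rot3(), np.array([1.00, 0.0, 0.0])), vo_noise))
graph.add(gtsam.BetweenFactorPose3(
    X1, X2, gtsam.Pose3(gtsam.Rot3(), np.array([1.00, 0.0, 0.0])), vo_noise))

# LiDAR scan-matching constraints over the same pose pairs (tighter noise).
lidar_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 6))
graph.add(gtsam.BetweenFactorPose3(
    X0, X1, gtsam.Pose3(gtsam.Rot3(), np.array([1.02, 0.0, 0.0])), lidar_noise))
graph.add(gtsam.BetweenFactorPose3(
    X1, X2, gtsam.Pose3(gtsam.Rot3(), np.array([0.98, 0.0, 0.0])), lidar_noise))

# Initial guess, then nonlinear (Levenberg-Marquardt) optimization.
initial = gtsam.Values()
for key, x in zip((X0, X1, X2), (0.0, 1.0, 2.0)):
    initial.insert(key, gtsam.Pose3(gtsam.Rot3(), np.array([x, 0.0, 0.0])))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X2).translation())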
MSF-SLAM: Multi-Sensor-Fusion-Based Simultaneous Localization and Mapping for Complex Dynamic Environments
IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 12, pp. 19699-19713
2024-12-01
9386134 bytes
Article (Journal)
Electronic Resource
English