Reliable localization and mapping are key technologies for autonomous driving. In complex and dynamic traffic scenarios, a single sensor cannot provide sufficient information for reliable and accurate Simultaneous Localization and Mapping (SLAM), so a growing number of multi-sensor fusion SLAM systems have emerged. However, previous multi-sensor fusion SLAM systems mainly exploit geometric information and do not fully leverage semantic information, which plays a crucial role in understanding complex scenes. This paper proposes a semantic-enhanced LiDAR-Visual-Inertial Odometry system, MSE-LVIO, which exploits the spatial consistency between image semantic segmentation and point cloud clustering to construct a semantic map that integrates object attributes together with dynamic and static information. By fully leveraging semantic and object information, the system filters dynamic obstacles in real time during the front-end registration phase. Our method has been validated in the CARLA simulation environment and on the KITTI raw and M2DGR datasets; the results show that it performs better in dynamic scenes.
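The abstract describes checking spatial consistency between image semantic segmentation and LiDAR point cloud clusters to drop dynamic objects before front-end registration. The following is a minimal sketch of that idea, not the paper's actual implementation: the names (`filter_dynamic_clusters`, `T_cam_lidar`, `K`, `DYNAMIC_CLASSES`, `dyn_ratio`) and the majority-vote consistency rule are illustrative assumptions.

```python
import numpy as np

# Semantic label IDs treated as potentially dynamic (assumed values;
# the paper's actual class taxonomy is not given in this record).
DYNAMIC_CLASSES = {10, 11, 12}  # e.g. car, pedestrian, cyclist

def project_points(points_lidar, T_cam_lidar, K):
    """Project Nx3 LiDAR points into the image plane.

    T_cam_lidar: 4x4 extrinsic (LiDAR -> camera), K: 3x3 intrinsics.
    Returns Nx2 pixel coordinates and a mask of points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # ignore points behind the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    return uv, in_front

def filter_dynamic_clusters(points, cluster_ids, seg_image, T_cam_lidar, K,
                            dyn_ratio=0.6):
    """Remove whole clusters whose projections mostly hit dynamic classes.

    A cluster is labeled dynamic when more than `dyn_ratio` of its visible
    points land on semantic-segmentation pixels of a dynamic class; this
    approximates the image/point-cloud spatial-consistency check the
    abstract describes.
    """
    h, w = seg_image.shape
    uv, in_front = project_points(points, T_cam_lidar, K)
    keep = np.ones(len(points), dtype=bool)
    for cid in np.unique(cluster_ids):
        idx = np.where((cluster_ids == cid) & in_front)[0]
        u = np.round(uv[idx, 0]).astype(int)
        v = np.round(uv[idx, 1]).astype(int)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        if valid.sum() == 0:
            continue                        # cluster not visible in the image
        labels = seg_image[v[valid], u[valid]]
        if np.isin(labels, list(DYNAMIC_CLASSES)).mean() > dyn_ratio:
            keep[cluster_ids == cid] = False  # drop the entire dynamic cluster
    return points[keep]
```

The surviving static points would then be passed to scan registration; filtering per cluster rather than per point avoids leaving partial dynamic objects in the map when segmentation and projection disagree at object boundaries.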
MSE-LVIO: Multi-Modal Semantic-Enhanced LiDAR-Visual-Inertial Odometry in Dynamic Traffic Scenes
2024-09-24
2,009,758 bytes
Conference paper
Electronic Resource
English