Most existing LiDAR-inertial odometry (LIO) systems depend heavily on the assumption of a static environment, which hinders their ability to exploit dynamic objects to enhance localization. Furthermore, accurate tracking of surrounding objects is crucial for emerging applications such as autonomous driving and multi-robot collaboration. To this end, we present LIO-LOT, a tightly coupled LiDAR-inertial odometry and multi-object tracking (MOT) system. In our system, each movable object is represented by a 3-D bounding box derived from an object detector. These objects are then associated with historical trajectories through a hybrid information matching strategy that integrates detection and geometric feature information. Considering tracking continuity and motion consistency, the associated objects are further categorized into high-confidence and low-confidence ones. Building upon this foundation, these two categories of objects are flexibly coupled with inertial measurement unit (IMU) pre-integration measurements and static scene structures to optimize the poses of both the tracked objects and the ego-vehicle within a factor graph optimization framework. In the implementation, the aggregated information of high- and low-confidence objects is fully leveraged to form the object-related factors, which are hierarchically optimized with LIO, enhancing system stability when object detections are poor. The system's performance is extensively evaluated on the KITTI and nuScenes datasets. Experimental results demonstrate that the proposed system outperforms other state-of-the-art methods in terms of ego-localization and multi-object tracking, particularly in highly dynamic scenarios involving occlusion and truncation.
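To make the joint optimization described in the abstract concrete, the following is a minimal, illustrative factor-graph sketch written with GTSAM. It is not the authors' implementation: the IMU pre-integration and LiDAR scan-matching constraints are approximated here by a single relative-pose (between) factor on consecutive ego poses, the object-related factors are modeled as relative-pose factors from an ego pose to the detected object pose plus a motion-consistency factor between consecutive object poses, and all numeric values (poses, noise sigmas) are hypothetical.

// Illustrative sketch only: jointly optimizing ego-vehicle poses x(k) and a
// tracked object's poses o(k) in one factor graph, as outlined in the abstract.
// NOT the authors' implementation; IMU pre-integration is approximated by a
// relative-pose factor and all numbers are made up for the example.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/PriorFactor.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/BetweenFactor.h>

using namespace gtsam;

int main() {
  NonlinearFactorGraph graph;
  Values initial;

  // Hypothetical diagonal noise models (rotation sigmas, then translation sigmas).
  auto priorNoise  = noiseModel::Diagonal::Sigmas((Vector(6) << 0.01, 0.01, 0.01, 0.05, 0.05, 0.05).finished());
  auto odomNoise   = noiseModel::Diagonal::Sigmas((Vector(6) << 0.02, 0.02, 0.02, 0.10, 0.10, 0.10).finished());
  auto detNoise    = noiseModel::Diagonal::Sigmas((Vector(6) << 0.05, 0.05, 0.05, 0.30, 0.30, 0.30).finished());
  auto motionNoise = noiseModel::Diagonal::Sigmas((Vector(6) << 0.05, 0.05, 0.05, 0.50, 0.50, 0.50).finished());

  // Anchor the first ego pose.
  graph.add(PriorFactor<Pose3>(Symbol('x', 0), Pose3(), priorNoise));

  // Ego motion between consecutive frames (stand-in for the IMU pre-integration
  // and static-scene LiDAR constraints of an LIO front end).
  graph.add(BetweenFactor<Pose3>(Symbol('x', 0), Symbol('x', 1),
                                 Pose3(Rot3(), Point3(1.0, 0.0, 0.0)), odomNoise));

  // Object observation factors: the detector's 3-D bounding box gives the
  // object pose relative to the ego vehicle at each frame.
  graph.add(BetweenFactor<Pose3>(Symbol('x', 0), Symbol('o', 0),
                                 Pose3(Rot3(), Point3(5.0, 2.0, 0.0)), detNoise));
  graph.add(BetweenFactor<Pose3>(Symbol('x', 1), Symbol('o', 1),
                                 Pose3(Rot3(), Point3(5.5, 2.0, 0.0)), detNoise));

  // Object motion-consistency factor between consecutive object poses
  // (e.g., a smooth-motion prior on the tracked object).
  graph.add(BetweenFactor<Pose3>(Symbol('o', 0), Symbol('o', 1),
                                 Pose3(Rot3(), Point3(1.5, 0.0, 0.0)), motionNoise));

  // Rough initial guesses for all variables.
  initial.insert(Symbol('x', 0), Pose3());
  initial.insert(Symbol('x', 1), Pose3(Rot3(), Point3(0.9, 0.0, 0.0)));
  initial.insert(Symbol('o', 0), Pose3(Rot3(), Point3(5.0, 2.0, 0.0)));
  initial.insert(Symbol('o', 1), Pose3(Rot3(), Point3(6.4, 2.0, 0.0)));

  // Joint nonlinear optimization of ego and object poses.
  Values result = LevenbergMarquardtOptimizer(graph, initial).optimize();
  result.print("Jointly optimized ego and object poses:\n");
  return 0;
}

In the actual system, high- and low-confidence objects would contribute differently formed factors that are optimized hierarchically together with the LIO factors; the sketch above only illustrates the shared factor-graph structure in which ego and object poses are estimated jointly.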
LIO-LOT: Tightly-Coupled Multi-Object Tracking and LiDAR-Inertial Odometry
IEEE Transactions on Intelligent Transportation Systems; 26(1); 742-756
2025-01-01
Article (Journal)
Electronic Resource
English