In this paper, we present a LiDAR Visual Inertial Odometry (LVIO) system based on feedforward and feedback. In contrast to traditional Kalman-filter-based or optimization-based sensor fusion, the proposed system achieves fusion through feedforward and feedback paths. The system, named Feedforward-feedback LiDAR Visual Inertial System (FLiVIS), consists of a Visual Inertial Odometry (VIO) subsystem and a LiDAR Inertial Odometry (LIO) subsystem coupled through complementary filters. Instead of directly integrating gyroscope and accelerometer data, our framework leverages the complementary nature of the two measurements. FLiVIS is evaluated on public datasets, achieving a relative translation error of 0.68% on the KITTI dataset and an absolute translation error of 0.138 m on the NTU VIRAL dataset. The experimental results demonstrate the accuracy and robustness of FLiVIS with respect to other state-of-the-art counterparts. FLiVIS accommodates both multi-line spinning LiDARs and emerging solid-state LiDARs, which employ distinct scanning patterns, and it runs in real time on a range of platforms, from laptops to UP Board processors.
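The abstract attributes the coupling of the VIO and LIO subsystems to complementary filters. As a minimal sketch of that general principle only (not the paper's actual filter design), the snippet below blends integrated gyroscope rates with accelerometer tilt for roll and pitch; the class name, the gain alpha, and all sample values are illustrative assumptions.

```python
import math


def accel_tilt(ax, ay, az):
    """Roll and pitch (rad) implied by the gravity direction in an accelerometer sample."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return roll, pitch


class ComplementaryFilter:
    """Minimal attitude complementary filter: high-pass the integrated gyroscope
    rates and low-pass the accelerometer tilt, then blend the two estimates."""

    def __init__(self, alpha=0.98):
        self.alpha = alpha  # blending gain (hypothetical value)
        self.roll = 0.0
        self.pitch = 0.0

    def update(self, gyro, accel, dt):
        # gyro = (gx, gy, gz) in rad/s, accel = (ax, ay, az) in m/s^2, dt in seconds.
        gx, gy, _ = gyro
        acc_roll, acc_pitch = accel_tilt(*accel)
        # Propagate with the gyroscope (accurate at high frequency, drifts over time) ...
        roll_g = self.roll + gx * dt
        pitch_g = self.pitch + gy * dt
        # ... and correct with the accelerometer (noisy, but drift-free at low frequency).
        self.roll = self.alpha * roll_g + (1.0 - self.alpha) * acc_roll
        self.pitch = self.alpha * pitch_g + (1.0 - self.alpha) * acc_pitch
        return self.roll, self.pitch


# Example: blend a single IMU sample (hypothetical values).
cf = ComplementaryFilter(alpha=0.98)
roll, pitch = cf.update(gyro=(0.01, -0.02, 0.0), accel=(0.1, 0.0, 9.81), dt=0.005)
```

The same high-pass/low-pass split generalizes to coupling two odometry sources: the faster, drift-prone estimate is trusted at high frequency and corrected at low frequency by the slower, drift-free one.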
LiDAR Stereo Visual Inertial Pose Estimation Based on Feedforward and Feedbacks
2024-06-04
912721 bytes
Conference paper
Electronic Resource
English