Visual Simultaneous Localization and Mapping (VSLAM) plays an important role in advanced driver assistance systems and autonomous driving. Feature-based VSLAM yields promising and visually pleasing results thanks to its robustness and localization precision. However, traditional feature-based VSLAM systems are prone to degradation or failure when either the environment or the robot's motion is too challenging. To address these problems, we propose BASL-AD SLAM. First, we leverage the robustness of deep learning and design a binary deep-learning-based descriptor that enhances the accuracy of feature detection and matching in challenging environments while preserving the real-time performance of the SLAM system. Furthermore, we propose an adaptive motion model that supplies more accurate initial poses, facilitating subsequent feature tracking and pose optimization. The performance was validated on public datasets. The results verify that BASL-AD SLAM performs robust feature matching and tracking in real time under challenging environments; pose estimation accuracy is significantly improved, and the proposed system shows competitive robustness and accuracy compared with ORB-SLAM3.
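The abstract does not give implementation details, but the two ingredients it names, binary descriptors (which can be matched cheaply in Hamming space, helping real-time performance) and a motion model that predicts an initial pose for tracking, can be illustrated with a minimal sketch. All function names below are hypothetical, and the motion model shown is a plain constant-velocity predictor standing in for the paper's adaptive model, which is not specified here.

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors packed as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_binary_descriptors(desc_a, desc_b, max_dist=40):
    """Brute-force nearest-neighbour matching under Hamming distance.

    Returns (index_a, index_b, distance) triples for matches whose
    distance is below the acceptance threshold.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = [hamming_distance(da, db) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches

def predict_pose_constant_velocity(T_prev, T_curr):
    """Predict the next camera pose from the last relative motion.

    Poses are 4x4 homogeneous transforms; the predictor reapplies the
    last frame-to-frame motion: T_next = T_curr @ (inv(T_prev) @ T_curr).
    An adaptive model, as proposed in the paper, would replace this
    fixed extrapolation with one conditioned on recent motion.
    """
    rel = np.linalg.inv(T_prev) @ T_curr
    return T_curr @ rel
```

The predicted pose would typically serve as the starting point for projecting map points into the new frame, restricting descriptor matching to small search windows before pose optimization refines the estimate.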
BASL-AD SLAM: A Robust Deep-Learning Feature-Based Visual SLAM System With Adaptive Motion Model
IEEE Transactions on Intelligent Transportation Systems; Vol. 25, No. 9; pp. 11794-11804
2024-09-01
10241034 bytes
Journal article
Electronic resource
English
ASD-SLAM: A Novel Adaptive-Scale Descriptor Learning for Visual SLAM
British Library Conference Proceedings | 2020