Simultaneous localization and mapping (SLAM) is a critical component of autonomous vehicles, enabling them to estimate their current pose and construct a precise map of the environment. However, SLAM performance is often limited by insufficient perception ability and non-robust odometry. In this paper, we introduce Progressive Multi-Modal Semantic Segmentation guided SLAM (PM2S2-SLAM), which combines tightly-coupled LiDAR-visual-inertial odometry with multi-modal semantic information to enhance the robustness and accuracy of SLAM. To address the limitations of single-sensor perception and the inefficiency of multi-modal semantic networks, a progressive multi-modal network is designed to efficiently extract multi-sensor semantic information within a single segmentation network. This approach progressively enhances the subsequent point cloud segmentation network with calibration and image-semantics priors, thereby improving both the accuracy and the efficiency of perception. Additionally, we propose a semantic-information-enhanced tightly-coupled LiDAR-visual-inertial odometry, which employs a semantic trimmed iterative closest point (ICP) method to improve the robustness and accuracy of multi-modal odometry. Finally, the effectiveness of PM2S2-SLAM is verified in real-world experiments on public datasets, where it reduces the absolute trajectory error by 25.1% compared with the state-of-the-art method.
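The abstract names a "semantic trimmed iterative closest point" step but the record gives no implementation details. Below is a minimal illustrative sketch of the general idea, not the authors' published method: correspondences are restricted to points whose semantic labels agree, the worst residuals are trimmed each iteration, and the rigid transform is re-estimated in closed form. The function name, the `overlap_ratio` parameter, and the label encoding are all assumptions made for illustration.

```python
# Hedged sketch of a semantic trimmed ICP step (NOT the paper's code).
# Assumes per-point integer class labels from a segmentation network.
import numpy as np
from scipy.spatial import cKDTree

def semantic_trimmed_icp(src, src_labels, tgt, tgt_labels,
                         overlap_ratio=0.8, max_iters=30, tol=1e-6):
    """Estimate R, t such that R @ src_i + t ~ tgt, matching only points
    whose semantic labels agree and keeping the closest `overlap_ratio`
    fraction of pairs (the "trimmed" part) each iteration."""
    R, t = np.eye(3), np.zeros(3)
    # One KD-tree per semantic class present in the target cloud.
    trees = {c: (cKDTree(tgt[tgt_labels == c]), tgt[tgt_labels == c])
             for c in np.unique(tgt_labels)}
    prev_err = np.inf
    for _ in range(max_iters):
        cur = src @ R.T + t              # source under current estimate
        matched = []
        for c, (tree, pts) in trees.items():
            mask = src_labels == c       # label-consistent matching only
            if not mask.any():
                continue
            dist, idx = tree.query(cur[mask])
            matched.append((dist, src[mask], pts[idx]))
        dist = np.concatenate([m[0] for m in matched])
        p_src = np.vstack([m[1] for m in matched])
        p_tgt = np.vstack([m[2] for m in matched])
        # Trim: keep only the closest fraction of the matched pairs.
        keep = np.argsort(dist)[:max(3, int(overlap_ratio * len(dist)))]
        p_s, p_t = p_src[keep], p_tgt[keep]
        # Closed-form rigid alignment (Kabsch/SVD) on the kept pairs.
        mu_s, mu_t = p_s.mean(0), p_t.mean(0)
        U, _, Vt = np.linalg.svd((p_s - mu_s).T @ (p_t - mu_t))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T               # reflection-safe rotation
        t = mu_t - R @ mu_s
        err = dist[keep].mean()
        if abs(prev_err - err) < tol:    # converged
            break
        prev_err = err
    return R, t

if __name__ == "__main__":
    # Toy check: recover a 5-degree yaw and a small shift on a fake
    # labeled cloud; the labels here are synthetic, not real semantics.
    rng = np.random.default_rng(0)
    tgt = rng.uniform(-5, 5, (500, 3))
    labels = (tgt[:, 0] > 0).astype(int)
    th = np.deg2rad(5)
    R_true = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th),  np.cos(th), 0],
                       [0, 0, 1]])
    src = (tgt - np.array([0.1, 0.0, 0.0])) @ R_true  # q = R p + t form
    R_est, t_est = semantic_trimmed_icp(src, labels, tgt, labels)
    print(np.allclose(R_est, R_true, atol=1e-3))
```

Restricting matches to label-consistent pairs rejects cross-class correspondences (e.g., a road point snapping to a vehicle), while trimming discards outlier residuals; together these are what make a trimmed, semantics-aware variant more robust than vanilla ICP in dynamic scenes.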


    Title:

    Progressive Multi-Modal Semantic Segmentation Guided SLAM Using Tightly-Coupled LiDAR-Visual-Inertial Odometry


    Contributors:
    Xiao, Hanbiao (author) / Hu, Zhaozheng (author) / Lv, Chen (author) / Meng, Jie (author) / Zhang, Jianan (author) / You, Ji'an (author)


    Publication date:

    2025-02-01


    Format / extent:

    3469196 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Similar items:

    Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry
    Wisth, D. / Camurri, M. / Das, S. et al. | BASE | 2022
    Free access

    Hierarchical Distribution-Based Tightly-Coupled LiDAR Inertial Odometry
    Wang, Chengpeng / Cao, Zhiqiang / Li, Jianjie et al. | IEEE | 2024

    InLIOM: Tightly-Coupled Intensity LiDAR Inertial Odometry and Mapping
    Wang, Hanqi / Liang, Huawei / Li, Zhiyuan et al. | IEEE | 2024

    LIO-LOT: Tightly-Coupled Multi-Object Tracking and LiDAR-Inertial Odometry
    Li, Xingxing / Yan, Zhuohao / Feng, Shaoquan et al. | IEEE | 2025