Simultaneous localization and mapping (SLAM) is a critical component of autonomous vehicles, enabling them to estimate their current pose and construct a precise map of the environment. However, SLAM performance is often limited by insufficient perception ability and non-robust odometry. In this paper, we introduce Progressive Multi-Modal Semantic Segmentation guided SLAM (PM2S2-SLAM), which combines tightly-coupled LiDAR-visual-inertial odometry with multi-modal semantic information to enhance the robustness and accuracy of SLAM. To address the limited perception of single-sensor methods and the inefficiency of multi-modal semantic networks, we design a progressive multi-modal network that efficiently extracts multi-sensor semantic information within a single segmentation network. This network progressively enhances the subsequent point cloud segmentation stage with calibration and image-semantics priors, thereby improving both the accuracy and the efficiency of perception. Additionally, we propose a semantics-enhanced tightly-coupled LiDAR-visual-inertial odometry, which employs a semantic trimmed iterative closest point (ICP) method to improve the robustness and accuracy of multi-modal odometry. Finally, the effectiveness of PM2S2-SLAM is verified through real-world experiments on public datasets, where it reduces the Absolute Trajectory Error by 25.1% compared with the best-performing state-of-the-art method.
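
As a concrete illustration of the calibration and image-semantics prior described in the abstract, the sketch below projects LiDAR points into the camera image and appends the per-pixel semantic logits as extra point features before point cloud segmentation. This is a minimal sketch, not the paper's implementation: the intrinsic K, the LiDAR-to-camera extrinsic (R_lc, t_lc), and every function name here are assumptions.

```python
# Hypothetical sketch (not the paper's code): inject an image-semantics prior
# into LiDAR point features using the camera calibration. K (3x3 intrinsic),
# R_lc/t_lc (LiDAR-to-camera extrinsic), and all names are assumptions.
import numpy as np

def attach_image_semantic_prior(points, sem_logits, K, R_lc, t_lc):
    """Project LiDAR points into the image and append the per-pixel semantic
    logits as extra point features; points that fall outside the image (or
    behind the camera) receive a zero prior."""
    cam = points @ R_lc.T + t_lc                 # LiDAR frame -> camera frame
    z = cam[:, 2]
    uvw = cam @ K.T                              # pinhole projection
    u = np.round(uvw[:, 0] / np.maximum(z, 1e-6)).astype(int)
    v = np.round(uvw[:, 1] / np.maximum(z, 1e-6)).astype(int)
    h, w, n_cls = sem_logits.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    prior = np.zeros((points.shape[0], n_cls), dtype=sem_logits.dtype)
    prior[valid] = sem_logits[v[valid], u[valid]]
    return np.concatenate([points, prior], axis=1)  # (N, 3 + n_cls) features
```

The semantic trimmed ICP in the odometry stage can likewise be sketched as standard trimmed ICP restricted to class-consistent correspondences: nearest neighbors are searched only among target points of the same semantic class, and the worst residuals are discarded before each rigid-alignment update, so dynamic or mislabeled points contribute less. Again a hypothetical sketch under those assumptions; trim_ratio, max_iters, and the SVD-based (Kabsch) solver are illustrative choices, not details from the paper.

```python
# Hypothetical sketch of one plausible reading of "semantic trimmed ICP":
# trimmed ICP with correspondences restricted to matching semantic classes.
import numpy as np
from scipy.spatial import cKDTree

def _kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src + t - dst|| via SVD."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def semantic_trimmed_icp(src_pts, src_lbl, dst_pts, dst_lbl,
                         trim_ratio=0.8, max_iters=30, tol=1e-6):
    """Align src to dst: correspondences are searched only within the same
    semantic class, and the worst (1 - trim_ratio) residuals are trimmed
    before each rigid update."""
    classes = np.unique(dst_lbl)
    trees = {c: cKDTree(dst_pts[dst_lbl == c]) for c in classes}
    back = {c: np.flatnonzero(dst_lbl == c) for c in classes}
    R, t, prev_err = np.eye(3), np.zeros(3), np.inf
    for _ in range(max_iters):
        cur = src_pts @ R.T + t
        s_idx, d_idx, dist = [], [], []
        for c in classes:
            mask = src_lbl == c
            if not mask.any():
                continue                         # class absent in source scan
            dd, jj = trees[c].query(cur[mask])
            s_idx.append(np.flatnonzero(mask))
            d_idx.append(back[c][jj])
            dist.append(dd)
        if not s_idx:
            break                                # no class-consistent pairs
        s_idx, d_idx = np.concatenate(s_idx), np.concatenate(d_idx)
        dist = np.concatenate(dist)
        keep = np.argsort(dist)[: max(3, int(trim_ratio * dist.size))]
        R_step, t_step = _kabsch(cur[s_idx[keep]], dst_pts[d_idx[keep]])
        R, t = R_step @ R, R_step @ t + t_step   # compose incremental update
        err = dist[keep].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t                                  # maps src_pts into dst frame
```

Class-consistent matching is what makes the trimming effective: most gross outliers are already excluded by the label check before the residual sort, so the kept fraction can stay high without admitting bad pairs.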


Title: Progressive Multi-Modal Semantic Segmentation Guided SLAM Using Tightly-Coupled LiDAR-Visual-Inertial Odometry

Contributors: Xiao, Hanbiao (author) / Hu, Zhaozheng (author) / Lv, Chen (author) / Meng, Jie (author) / Zhang, Jianan (author) / You, Ji'an (author)

Publication date: 2025-02-01

Size: 3469196 bytes

Type of media: Article (Journal)

Type of material: Electronic Resource

Language: English



Similar titles:

Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry
Wisth, D. / Camurri, M. / Das, S. et al. | BASE | 2022

Hierarchical Distribution-Based Tightly-Coupled LiDAR Inertial Odometry
Wang, Chengpeng / Cao, Zhiqiang / Li, Jianjie et al. | IEEE | 2024

InLIOM: Tightly-Coupled Intensity LiDAR Inertial Odometry and Mapping
Wang, Hanqi / Liang, Huawei / Li, Zhiyuan et al. | IEEE | 2024

LIO-LOT: Tightly-Coupled Multi-Object Tracking and LiDAR-Inertial Odometry
Li, Xingxing / Yan, Zhuohao / Feng, Shaoquan et al. | IEEE | 2025