Simultaneous localization and mapping (SLAM) is a critical component of autonomous vehicles, enabling them to estimate their current pose and construct a precise map of the environment. However, its performance is often limited by insufficient perception ability and non-robust odometry. In this paper, we introduce a Progressive Multi-Modal Semantic Segmentation guided SLAM (PM2S2-SLAM), which utilizes tightly-coupled LiDAR-visual-inertial odometry with multi-modal semantic information to enhance the robustness and accuracy of SLAM. To address the limitations of single-sensor perception methods and the inefficiency of multi-modal semantic networks, a progressive multi-modal network is designed to efficiently extract multi-sensor semantic information within a single segmentation network. This approach progressively enhances the subsequent point cloud segmentation network with calibration and image-semantic priors, thereby improving the accuracy and efficiency of perception. Additionally, we propose a semantic-information-enhanced tightly-coupled LiDAR-visual-inertial odometry, which employs a semantic trimmed iterative closest point (ICP) method to improve the robustness and accuracy of multi-modal odometry. Finally, the effectiveness of PM2S2-SLAM is verified by real-world experiments on public datasets, in which it reduces the absolute trajectory error by 25.1% compared with the state-of-the-art method.
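The semantic trimmed ICP step mentioned in the abstract can be pictured with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: it assumes point-to-point correspondences that are first rejected when their per-point semantic labels disagree and then trimmed to the closest fraction of pairs before a closed-form rigid update. The function name semantic_trimmed_icp and the trim_ratio parameter are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def semantic_trimmed_icp(src_pts, src_labels, tgt_pts, tgt_labels,
                         trim_ratio=0.7, iters=30):
    """Align src_pts to tgt_pts using label-consistent, trimmed matches."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(tgt_pts)
    for _ in range(iters):
        cur = src_pts @ R.T + t                  # apply current estimate
        dist, idx = tree.query(cur)              # nearest-neighbor matches
        # Reject correspondences whose semantic classes disagree.
        mask = src_labels == tgt_labels[idx]
        d, i = dist[mask], np.flatnonzero(mask)
        # Trim: keep only the closest fraction of the remaining pairs.
        keep = i[np.argsort(d)[: int(trim_ratio * len(d))]]
        p, q = cur[keep], tgt_pts[idx[keep]]
        # Closed-form rigid update (Kabsch / SVD).
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = q.mean(0) - p.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt               # compose with estimate
    return R, t
```

Filtering label-inconsistent pairs before trimming captures the general intuition of using semantics to stabilize point cloud registration against moving objects and misclassified regions.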
Progressive Multi-Modal Semantic Segmentation Guided SLAM Using Tightly-Coupled LiDAR-Visual-Inertial Odometry
IEEE Transactions on Intelligent Transportation Systems; Vol. 26, No. 2; pp. 1645-1656
01.02.2025
3469196 bytes
Article (Journal)
Electronic Resource
English
Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry
BASE | 2022