Autonomous driving and intelligent robot technology have become cutting-edge research hotspots in recent years, but the sudden short-term dynamics and gradual long-term dynamics present in semi-static scenes make it difficult for a SLAM system to provide the desired localization and mapping results. Along these lines, an RGB-D image-based visual SLAM (vSLAM) leveraging semantic Patch-NetVLAD loop closure detection for autonomous driving vehicles in semi-static scenes is proposed. First, the lightweight SeaFormer is utilized to perform semantic segmentation on the input RGB image, and an ORB feature point extraction method based on illumination invariance is designed to obtain a high-confidence set of static feature points. Then, a “coarse-to-fine” high-quality keyframe selection strategy is developed, ensuring the efficiency and real-time performance of the system during long-term operation. Further, a high-performance screening strategy for loop-closure candidate keyframes is constructed by combining structural similarity (SSIM) and cosine similarity. On this basis, a high-precision loop closure detection strategy combining semantics with patch-based multi-scale fusion of vectors of locally aggregated descriptors (Patch-NetVLAD) is constructed, which effectively eliminates loop-closure mismatches caused by dynamic objects and invalid matches. Finally, a global semantic octree map that can be used for navigation is generated from keyframes and semantic masks. A series of simulation studies and experimental tests demonstrates the performance superiority of the proposed algorithm.
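As an illustrative sketch only (not code from the paper), the candidate-keyframe screening step described in the abstract can be approximated by combining an image-level SSIM check with a cosine-similarity check on global descriptors; the function names, keyframe data layout, and threshold values below are assumptions for illustration.

```python
# Hypothetical sketch of loop-closure candidate screening that combines
# SSIM (image appearance) with cosine similarity (global descriptor).
# Thresholds and data structures are placeholders, not the paper's values.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two global descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def screen_candidates(query_gray, query_desc, keyframes,
                      ssim_thresh=0.5, cos_thresh=0.8):
    """Return keyframes that pass both the SSIM and cosine-similarity checks.

    query_gray : grayscale image of the current frame (uint8 array)
    query_desc : global descriptor of the current frame (1-D float array)
    keyframes  : iterable of dicts with keys "gray" and "descriptor" (assumed layout)
    """
    candidates = []
    for kf in keyframes:
        s = ssim(query_gray, kf["gray"], data_range=255)       # appearance-level check
        c = cosine_similarity(query_desc, kf["descriptor"])    # descriptor-level check
        if s >= ssim_thresh and c >= cos_thresh:
            candidates.append(kf)
    return candidates
```

In such a scheme, only the keyframes surviving this coarse screening would be passed to the more expensive semantic Patch-NetVLAD matching stage.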
SPVL-vSLAM: Visual SLAM for Autonomous Driving Vehicles Based on Semantic Patch-NetVLAD Loop Closure Detection in Semi-Static Scenes
IEEE Transactions on Intelligent Transportation Systems; Vol. 26, No. 6; pp. 8975-8991
01.06.2025
6112981 bytes
Article (Journal)
Electronic resource
English
Visual SLAM for Autonomous Ground Vehicles
Tema Archiv | 2011
Object Detection-based Visual SLAM for Dynamic Scenes
British Library Conference Proceedings | 2022