In visual SLAM, accurate place recognition benefits relocalization and improves map accuracy. However, its performance degrades substantially when place appearance changes due to variations in illumination, viewpoint, season, and the presence of dynamic objects. Leveraging the advantages of semantics to achieve human-like scene understanding, this research investigates semantics-aided visual place recognition methods and presents ViSem, a novel place recognition framework for visual SLAM systems based on the fusion of visual and semantic information. The proposed method applies semantic matching to visually similar place-match candidates and performs late fusion of a point-feature-based visual appearance matching model with semantics-based landmark matching, achieving a high F1-score for visual place recognition under drastic environmental changes. Experimental results demonstrate that ViSem is more robust than handcrafted- and CNN-feature-based methods on benchmark datasets.
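The late-fusion idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the choice of cosine similarity over an aggregated point-feature descriptor, the Jaccard overlap of semantic landmark labels, and the fusion weights w_visual and w_semantic are all illustrative assumptions.

import numpy as np

def visual_similarity(desc_query: np.ndarray, desc_candidate: np.ndarray) -> float:
    # Cosine similarity between global appearance descriptors, standing in
    # for the point-feature-based visual appearance matching model.
    num = float(np.dot(desc_query, desc_candidate))
    den = float(np.linalg.norm(desc_query) * np.linalg.norm(desc_candidate)) + 1e-12
    return num / den

def semantic_landmark_similarity(labels_query: set, labels_candidate: set) -> float:
    # Jaccard overlap of semantic landmark labels observed at each place,
    # a simple stand-in for semantics-based landmark matching.
    if not labels_query and not labels_candidate:
        return 0.0
    return len(labels_query & labels_candidate) / len(labels_query | labels_candidate)

def late_fusion_score(desc_q, desc_c, labels_q, labels_c,
                      w_visual=0.6, w_semantic=0.4) -> float:
    # Weighted late fusion of the two independently computed scores
    # (weights are assumed, not taken from the paper).
    return (w_visual * visual_similarity(desc_q, desc_c)
            + w_semantic * semantic_landmark_similarity(labels_q, labels_c))

if __name__ == "__main__":
    # Usage: decide whether a candidate frame is a revisit of the query place.
    rng = np.random.default_rng(0)
    desc_q = rng.random(256)
    desc_c = desc_q + 0.05 * rng.random(256)          # visually similar candidate
    labels_q = {"traffic_light", "building", "tree"}  # semantic landmarks at query
    labels_c = {"building", "tree", "car"}            # semantic landmarks at candidate
    score = late_fusion_score(desc_q, desc_c, labels_q, labels_c)
    print(f"fused place-match score: {score:.3f}",
          "-> match" if score > 0.7 else "-> no match")

In this sketch the semantic term only re-ranks candidates that are already visually similar, which mirrors the abstract's two-stage idea of applying semantic matching to visually similar place-match candidates before the final fused decision.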
ViSem: A Visual and Semantic Information Fusion Based Place Recognition for Long Term Autonomous Navigation
24.09.2023
1499765 bytes
Article (Conference)
Electronic Resource
English
Autonomous aerial navigation using monocular visual‐inertial fusion
British Library Online Contents | 2018
FUSION FRAMEWORK OF NAVIGATION INFORMATION FOR AUTONOMOUS NAVIGATION
Europäisches Patentamt | 2023
FUSION FRAMEWORK OF NAVIGATION INFORMATION FOR AUTONOMOUS NAVIGATION
Europäisches Patentamt | 2025