With the development of autonomous vehicles and intelligent robots, visual simultaneous localization and mapping (SLAM) has attracted great attention. Most existing visual SLAM systems assume a static environment in which all objects are stationary. In the real world, however, many environments are dynamic and contain moving objects, which degrades the performance of visual SLAM systems. In this paper, we propose a novel visual SLAM system based on multi-task deep neural networks to address this issue. Specifically, we apply multi-task deep neural networks to extract oriented keypoints and to perceive dynamic semantic regions, both of which are used for outlier rejection in the SLAM system. We evaluate our method on public datasets, and the results show that our method outperforms existing visual SLAM systems. The presentation video is available at: https://youtu.be/qGE1OvaJvV0.
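The abstract does not spell out how the dynamic semantic regions drive outlier rejection. As an illustration only, the following minimal Python sketch shows one common way such a filter can work, assuming the segmentation head of the multi-task network yields a per-pixel boolean mask of dynamic classes; the function name and all parameters below are hypothetical and are not taken from the paper.

    import numpy as np

    def reject_dynamic_keypoints(keypoints, dynamic_mask):
        # Keep only keypoints that fall outside dynamic semantic regions.
        # keypoints    : (N, 2) array of (x, y) pixel coordinates
        # dynamic_mask : (H, W) boolean array, True where the pixel belongs
        #                to an assumed dynamic class (e.g. person, car)
        xs = keypoints[:, 0].astype(int)
        ys = keypoints[:, 1].astype(int)
        h, w = dynamic_mask.shape
        # Discard keypoints outside the image bounds first, then those
        # that land on pixels flagged as dynamic.
        in_bounds = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
        keypoints = keypoints[in_bounds]
        xs, ys = xs[in_bounds], ys[in_bounds]
        static = ~dynamic_mask[ys, xs]
        return keypoints[static]

    # Usage example with synthetic data (shapes and region are made up):
    kps = np.array([[12.3, 40.7], [200.1, 150.9]])
    mask = np.zeros((480, 640), dtype=bool)
    mask[140:160, 190:210] = True   # a hypothetical dynamic region
    print(reject_dynamic_keypoints(kps, mask))  # keeps only the first keypoint

In practice the surviving static keypoints would then feed the SLAM front end (tracking and pose estimation), so that features on moving objects cannot corrupt the motion estimate; the paper's actual rejection rule may differ.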
A Novel Visual SLAM System for Autonomous Vehicles in Dynamic Environments
10.10.2023
3,256,380 bytes
Conference paper
Electronic resource
English
Visual SLAM for Autonomous Ground Vehicles
Tema Archiv | 2011
Tree-SLAM: Localization and Mapping in Dense Forest Environments for Autonomous Vehicles
Springer Verlag | 2024
Stereo Graph-SLAM for Autonomous Underwater Vehicles
Springer Verlag | 2015