Direct SLAM methods have drawn much attention in recent years since they achieve exceptional performance on visual odometry tasks. However, they are sensitive to lighting and weather changes. To overcome this, we employ an adapted U-Net that translates the colors of regular images into a high-dimensional feature space. The network is trained as a Siamese U-Net to be insensitive to lighting effects, using labels generated automatically from synthetic datasets without any human intervention. To produce more consistent high-dimensional feature maps, we propose the Cross Triplet Loss, which exploits cross information between two images from different domains, together with a new weighted sampling method that draws a wider range of samples. Experiments across different weather conditions and sequences with different textures show that the proposed method outperforms classical feature extraction methods and state-of-the-art deep-learned feature extraction methods.
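The abstract does not give the exact formulation of the Cross Triplet Loss; the sketch below is one plausible reading, assuming per-pixel embeddings from a Siamese U-Net applied to two pixel-aligned images of the same synthetic scene under different lighting domains. The function name cross_triplet_loss, the margin value, and the uniform negative sampling are illustrative assumptions; the paper's weighted sampling scheme is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of a cross-domain triplet
# loss over dense feature maps. Assumes feat_a and feat_b are Siamese U-Net
# outputs for the same scene under two lighting domains, pixel-aligned.
import torch
import torch.nn.functional as F

def cross_triplet_loss(feat_a, feat_b, margin=0.5, num_samples=1024):
    """feat_a, feat_b: (B, C, H, W) feature maps of the same scene under two
    domains. Pixels at identical coordinates are positives; randomly drawn
    pixels from the other domain's map act as negatives."""
    b, c, h, w = feat_a.shape
    # Flatten spatial dimensions: (B, H*W, C)
    fa = feat_a.flatten(2).transpose(1, 2)
    fb = feat_b.flatten(2).transpose(1, 2)

    # Anchors sampled uniformly; positives share the same coordinates
    # in the other domain (cross information between the two images).
    idx = torch.randint(0, h * w, (b, num_samples), device=feat_a.device)
    anchor = torch.gather(fa, 1, idx.unsqueeze(-1).expand(-1, -1, c))
    positive = torch.gather(fb, 1, idx.unsqueeze(-1).expand(-1, -1, c))

    # Negatives: different coordinates in the other domain (uniform here;
    # the paper uses a weighted sampling strategy instead).
    neg_idx = torch.randint(0, h * w, (b, num_samples), device=feat_a.device)
    negative = torch.gather(fb, 1, neg_idx.unsqueeze(-1).expand(-1, -1, c))

    d_pos = F.pairwise_distance(anchor.reshape(-1, c), positive.reshape(-1, c))
    d_neg = F.pairwise_distance(anchor.reshape(-1, c), negative.reshape(-1, c))
    return F.relu(d_pos - d_neg + margin).mean()
```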
Input Image Adaption for Robust Direct SLAM using Deep Learning
01.11.2020
Other
Electronic resource
English
DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving
IEEE | 2019
DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving
British Library Conference Proceedings | 2019
ICAS2016_0503: Depth Image Based Direct SLAM for Small UAVs
British Library Conference Proceedings | 2016