Direct SLAM methods have drawn much attention in recent years because they achieve exceptional performance on visual odometry tasks. However, they are sensitive to lighting and weather changes. To overcome this, we employ an adapted U-Net that translates the colors of regular images into a high-dimensional feature space. The network is trained as a Siamese U-Net to be insensitive to lighting effects, using labels generated automatically from synthetic datasets without any human intervention. To produce more consistent high-dimensional feature maps, we propose the Cross Triplet Loss, which exploits cross information between two images from different domains, together with a new weighted sampling method that yields a wider range of samples. Experiments on sequences with varying weather and texture show that the proposed method outperforms both classical feature extraction methods and state-of-the-art deep-learned feature extraction methods.
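The abstract names a Cross Triplet Loss over feature maps of the same scene under two different lighting/weather domains, but its exact formulation is not given in this record. The PyTorch sketch below is therefore only one plausible reading: anchors and positives are sampled at the same pixel location across the two domains, negatives at random other locations in the second domain. All names here (cross_triplet_loss, feat_a, feat_b, num_samples) are illustrative assumptions, not the paper's published method.

```python
# Minimal sketch of a cross-domain triplet loss, assuming pixel-aligned
# feature maps of one scene rendered under two domains (e.g. day/night).
import torch
import torch.nn.functional as F

def cross_triplet_loss(feat_a, feat_b, margin=1.0, num_samples=1024):
    """feat_a, feat_b: (B, C, H, W) feature maps of the same scene under
    two different domains, aligned pixel-to-pixel."""
    B, C, H, W = feat_a.shape
    flat_a = feat_a.flatten(2)  # (B, C, H*W)
    flat_b = feat_b.flatten(2)

    # Random pixel locations for anchors/positives, and separate random
    # locations for negatives (rare collisions are ignored in this sketch).
    idx = torch.randint(0, H * W, (B, num_samples), device=feat_a.device)
    neg_idx = torch.randint(0, H * W, (B, num_samples), device=feat_a.device)

    gather_idx = idx.unsqueeze(1).expand(-1, C, -1)  # (B, C, num_samples)
    anchor = torch.gather(flat_a, 2, gather_idx)
    # "Cross" pairing: the positive is the same pixel seen in the other domain.
    positive = torch.gather(flat_b, 2, gather_idx)
    negative = torch.gather(flat_b, 2, neg_idx.unsqueeze(1).expand(-1, C, -1))

    # Standard hinge-style triplet objective on squared feature distances.
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # (B, num_samples)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()
```

Pulling the positive from the opposite domain at the identical pixel is what pushes the network toward lighting-invariant features: the loss is minimized only when corresponding points map to nearby embeddings regardless of domain.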


    Title:

    Input Image Adaption for Robust Direct SLAM using Deep Learning


    Contributors:
    Wang, Sen (author)

    Publication date:

    01.11.2020


    Media type:

    Other


    Format:

    Electronic resource


    Language:

    English





    Similar titles:

    RWT-SLAM: Robust Visual SLAM for Weakly Textured Environments

    Peng, Qihao / Zhao, Xijun / Dang, Ruina et al. | IEEE | 2024


    DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving

    Li, Jun / Zhao, Junqiao / Kang, Yuchen et al. | IEEE | 2019


    ICAS2016_0503: Depth Image Based Direct SLAM for Small UAVs

    Park, S. Y. / Shim, D. H. | British Library Conference Proceedings | 2016