Direct SLAM methods have drawn much attention in recent years, as they achieve exceptional performance on visual odometry tasks. However, they are sensitive to lighting and weather changes. To overcome this, we employ an adapted U-Net that translates the colors of regular images into a high-dimensional feature space. The network is trained as a Siamese U-Net to be insensitive to lighting effects, using labels generated automatically from synthetic datasets without any human intervention. To produce more consistent high-dimensional feature maps, we propose the Cross Triplet Loss, which exploits cross information between two images from different domains, together with a new weighted sampling method that yields a wider range of samples. Experiments across different weather conditions and on sequences with different textures show that the proposed method outperforms both classical feature extraction methods and state-of-the-art deep-learned ones.
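
The abstract describes a triplet loss computed across two lighting domains of the same scene. As a rough illustration only, the Python/PyTorch sketch below shows one plausible form of such a cross-domain triplet loss; the function name, tensor layout, and the bidirectional formulation are assumptions, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def cross_triplet_loss(feat_a, feat_b, margin=0.5, num_samples=256):
        # Hypothetical sketch. feat_a, feat_b: (C, H, W) feature maps from the
        # Siamese U-Net for the same scene under two lighting domains; synthetic
        # data makes pixel (i, j) in feat_a correspond to pixel (i, j) in feat_b.
        c, hw = feat_a.shape[0], feat_a.shape[1] * feat_a.shape[2]
        fa = feat_a.reshape(c, hw).t()               # (H*W, C) per-pixel descriptors
        fb = feat_b.reshape(c, hw).t()

        idx = torch.randint(0, hw, (num_samples,))   # anchor/positive locations
        neg = torch.randint(0, hw, (num_samples,))   # negative locations

        # Domain-A anchors against domain-B positives and negatives ...
        d_pos = F.pairwise_distance(fa[idx], fb[idx])
        d_neg = F.pairwise_distance(fa[idx], fb[neg])
        loss_ab = F.relu(d_pos - d_neg + margin).mean()

        # ... and the "cross" direction: domain-B anchors against domain A.
        d_pos = F.pairwise_distance(fb[idx], fa[idx])
        d_neg = F.pairwise_distance(fb[idx], fa[neg])
        loss_ba = F.relu(d_pos - d_neg + margin).mean()

        return 0.5 * (loss_ab + loss_ba)

A real implementation would exclude negatives that coincide with the anchor location and replace the uniform torch.randint draws with the paper's weighted sampling scheme.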


    Title:
    Input Image Adaption for Robust Direct SLAM using Deep Learning

    Contributors:
    Wang, Sen (author)

    Publication date:
    2020-11-01

    Type of media:
    Miscellaneous

    Type of material:
    Electronic Resource

    Language:
    English




    Similar titles:

    DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving

    Li, Jun / Zhao, Junqiao / Kang, Yuchen et al. | IEEE | 2019


    ICAS2016_0503: DEPTH IMAGE BASED DIRECT SLAM FOR SMALL UAVS

    Park, S. Y. / Shim, D. H. | British Library Conference Proceedings | 2016


    Towards Robust Single-Shot Monocular SLAM

    Schroeder, Gregory / Hussein, Ahmed | Springer Verlag | 2024


    One Robust Loosely Coupled 4D Millimeter-Wave Image Radar SLAM Method

    Ye, Tingfeng / Lu, Xinfei / Zhao, Yingzhong | SAE Technical Papers | 2023


    ROSE: Robust State Estimation via Online Covariance Adaption

    Fakoorian, Seyed / Otsu, Kyohei / Khattak, Shehryar et al. | Springer Verlag | 2023