Accurate localization is a vital prerequisite for future assistance and autonomous driving functions in intelligent vehicles. To achieve the required localization accuracy and availability, long-term visual SLAM algorithms like LLama-SLAM are a promising option. In such algorithms, visual feature tracks, i.e. landmark observations over several consecutive image frames, have to be matched to feature tracks recorded days, weeks or months earlier. This matching problem is more challenging than in short-term visual localization, and known descriptor matching methods cannot be applied directly. In this paper, we devise several approaches to compare and match feature tracks and evaluate their performance on a long-term data set. The proposed descriptor combination and masking ("CoMa") method achieves the best track matching performance at minor computational cost. It creates a single combined descriptor for each feature track and further increases robustness by capturing the track's appearance variations in a descriptor mask.
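
The paper itself provides no source code; the sketch below is only an illustration of the general idea, assuming binary feature descriptors (e.g. ORB/BRIEF bit strings). The function names, the majority-vote combination, the stability threshold and the masked Hamming distance are assumptions chosen for this illustration, not the authors' exact CoMa formulation.

import numpy as np

def combine_track_descriptors(track_descriptors, stability_threshold=0.8):
    # track_descriptors: (n_observations, n_bits) boolean array, one binary
    # descriptor per frame in which the landmark was observed.
    bit_frequency = track_descriptors.mean(axis=0)   # fraction of frames with bit set
    combined = bit_frequency >= 0.5                  # bitwise majority vote
    # A bit counts as stable if it (almost) always agrees with the majority value;
    # the mask thus records which bits varied little over the track's appearance changes.
    mask = np.maximum(bit_frequency, 1.0 - bit_frequency) >= stability_threshold
    return combined, mask

def masked_hamming_distance(desc_a, mask_a, desc_b, mask_b):
    # Compare only bits that both tracks consider stable and normalize
    # by the number of compared bits.
    valid = mask_a & mask_b
    if not valid.any():
        return 1.0                                   # nothing comparable -> worst score
    return np.count_nonzero((desc_a ^ desc_b) & valid) / np.count_nonzero(valid)

With such a representation, each feature track is reduced to one combined descriptor and one mask; two tracks from different recording sessions are then compared with masked_hamming_distance, and the candidate with the lowest distance below a matching threshold would be accepted as a match.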


    Title:

    How to Match Tracks of Visual Features for Automotive Long-Term SLAM


    Contributors:


    Publication date:

    2019-10-01


    Size:

    581317 bytes


    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English



    Automotive Over-slam bumper

    KIM SUNG SOOL / JUNG JUNGUN / PARK TEA HO | European Patent Office | 2022

    Automotive Over-slam bumper

    European Patent Office | 2023


    LLama-SLAM: Learning High-Quality Visual Landmarks for Long-Term Mapping and Localization

    Luthardt, Stefan / Willert, Volker / Adamy, Jürgen | IEEE | 2018


    Visual SLAM: Why filter?

    Strasdat, H. / Montiel, J. M. / Davison, A. J. | British Library Online Contents | 2012