Positioning is essential for the safe and accurate navigation of vehicles. The Global Positioning System (GPS) is the most widely used positioning technique, but it requires online external correction information to achieve lane-level position accuracy $(< 1.5\,\mathrm{m})$. Furthermore, GPS performs poorly when the receiver is in signal-shadowed areas such as urban environments. To address this problem, we propose a deep learning-based visual map-matching algorithm for correcting vehicle positions. The Visual Map Matching Network (VMMNet) is introduced as an end-to-end network that matches a high-definition map (HD map) against a bird's-eye-view (BEV) image from a monocular camera for position correction. Because VMMNet exploits a predefined HD map, it operates offline and can localize vehicles even in GPS-denied situations. Moreover, VMMNet achieves lane-level accuracy regardless of the signal environment while requiring only a monocular camera, which most modern vehicles are already equipped with. We demonstrated the superiority of VMMNet by quantitative comparison with conventional position-correction techniques. The results showed that VMMNet improves the position accuracy of a low-cost GPS receiver to the lane level in various urban areas, outperforming conventional techniques by a large margin. We also confirmed qualitatively that VMMNet can function stably as an independent positioning system, replacing GPS.
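The abstract (and the title's mention of simultaneous coarse and fine matching) only outlines the approach and does not describe the network in detail. As a rough, hypothetical sketch of what coarse-to-fine matching of a BEV feature patch against an HD-map raster could look like, the PyTorch snippet below correlates the two feature maps over all integer offsets (coarse) and refines the peak with a soft-argmax (fine). The function name `match_bev_to_map`, the feature shapes, and the soft-argmax refinement are illustrative assumptions, not VMMNet's actual design.

```python
# Illustrative sketch only; this is NOT the architecture from the paper.
import torch
import torch.nn.functional as F


def match_bev_to_map(bev_feat: torch.Tensor, map_feat: torch.Tensor):
    """Coarse-to-fine matching of a BEV feature patch against an HD-map raster.

    bev_feat: (C, h, w) features encoded from the camera BEV image.
    map_feat: (C, H, W) features rasterized from the HD map around the
              GPS prior, with H > h and W > w.
    Returns the estimated (dy, dx) offset, in map pixels, of the BEV patch
    relative to the centre of the map crop.
    """
    # Coarse step: dense cross-correlation over every integer offset,
    # treating the BEV patch as a convolution kernel sliding over the map.
    score = F.conv2d(map_feat.unsqueeze(0),   # input  (1, C, H, W)
                     bev_feat.unsqueeze(0))   # kernel (1, C, h, w)
    score = score[0, 0]                       # (H - h + 1, W - w + 1)

    # Fine step: sub-pixel refinement via a soft-argmax over the score map.
    prob = F.softmax(score.flatten() / score.std().clamp_min(1e-6), dim=0)
    prob = prob.view_as(score)
    ys = torch.arange(score.shape[0], dtype=prob.dtype)
    xs = torch.arange(score.shape[1], dtype=prob.dtype)
    dy = (prob.sum(dim=1) * ys).sum() - (score.shape[0] - 1) / 2
    dx = (prob.sum(dim=0) * xs).sum() - (score.shape[1] - 1) / 2
    return dy, dx


if __name__ == "__main__":
    C = 16
    map_feat = torch.randn(C, 128, 128)
    # Fake a BEV patch taken 5 px below and 3 px right of the map-crop centre.
    bev_feat = map_feat[:, 37:101, 35:99].clone()
    dy, dx = match_bev_to_map(bev_feat, map_feat)
    print(f"estimated offset: dy={dy.item():.1f}, dx={dx.item():.1f}")  # ~ (5.0, 3.0)
```

Under these assumptions, the recovered pixel offset would be scaled by the map resolution (metres per pixel) and added to the GPS prior to obtain the corrected vehicle position; in practice the features would come from learned encoders rather than raw tensors.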


    Title :

    Vehicle Localization Network using Simultaneous Coarse and Fine Visual Map Matching


    Contributors:


    Publication date :

    2023-09-24


    Size :

    6729200 bytes


    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English