This paper presents an ego-positioning algorithm that uses a low-cost monocular camera for Internet-of-Vehicles systems. To reduce the computational and memory requirements, as well as the communication load, model compression is formulated as a weighted $k$-cover problem so that critical scene structures are better preserved. For real-world vision-based positioning, the issue of large scene changes is addressed with a model update algorithm. A large positioning dataset is constructed, comprising 14,275 images collected over more than a month across 106 sessions. Extensive experimental results show that the proposed ego-positioning algorithm achieves submeter accuracy and outperforms existing vision-based approaches.
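The abstract frames model compression as a weighted $k$-cover problem: choose a subset of 3-D model points such that every training image still observes at least $k$ selected points, favoring points of higher weight. Below is a minimal sketch of a greedy selection under that reading; the function name, the binary point-to-image visibility matrix, and the per-point weights are illustrative assumptions, not the paper's exact formulation.

    # Hypothetical sketch of greedy weighted k-cover selection for 3-D model
    # compression (data layout and weighting are assumptions, not the
    # authors' exact method).
    import numpy as np

    def greedy_weighted_k_cover(visibility, weights, k):
        """Select 3-D points so each image observes >= k chosen points.

        visibility : (n_points, n_images) binary matrix; visibility[p, i]
                     is 1 if point p is seen in image i.
        weights    : (n_points,) importance of each point, e.g. track length.
        k          : required coverage per image.
        """
        n_points, n_images = visibility.shape
        need = np.full(n_images, k)        # residual coverage demand per image
        selected = []
        remaining = set(range(n_points))
        while need.any() and remaining:
            # Marginal gain: weighted count of still-unmet demand the point covers.
            best_p, best_gain = None, 0.0
            for p in remaining:
                gain = weights[p] * np.minimum(visibility[p], need).sum()
                if gain > best_gain:
                    best_p, best_gain = p, gain
            if best_p is None:             # no point covers any unmet demand
                break
            selected.append(best_p)
            remaining.remove(best_p)
            need = np.maximum(need - visibility[best_p], 0)
        return selected

    # Toy example: 5 points, 3 images, require k = 2 points per image.
    vis = np.array([[1, 1, 0],
                    [1, 0, 1],
                    [0, 1, 1],
                    [1, 1, 1],
                    [0, 0, 1]])
    w = np.array([1.0, 1.0, 1.0, 2.0, 0.5])
    print(greedy_weighted_k_cover(vis, w, k=2))

A greedy pass like this is the standard approximation strategy for k-cover-style objectives: each iteration picks the point contributing the largest weighted amount of still-unmet coverage, stopping once every image meets its quota.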



    Title :

    Vision-Based Positioning for Internet-of-Vehicles


    Contributors :

    Chen, Kuan-Wen

    Publication date :

    2017-02-01


    Size :

    3,476,657 bytes




    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English



    Similar titles :

    Vision-Based Positioning for Internet-of-Vehicles

    Chen, Kuan-Wen | Online Contents | 2017


    Internet of Vehicles Communication Method and Positioning Method, and Internet of Vehicles Communications Apparatus

    ZHANG YUXIANG / XIE HONG / ZHOU KAI | European Patent Office | 2021

    Free access

    Vehicle positioning method and system of Internet of Vehicles

    ZENG JIJUN / LONG ZHENYUE / ZHANG XIAOLU et al. | European Patent Office | 2022

    Free access

    Vision-Based Indoor Positioning System for Connected Vehicles in Small-scale Testbed Environments

    Hamza, Mahmoud S. / Shehata, Omar M. / Morgan, Elsayed I. et al. | IEEE | 2024


    Vision-based positioning of Unmanned Surface Vehicles using Fiducial Markers for automatic docking

    Digerud, Lars / Volden, Øystein / Christensen, Kim Alexander et al. | BASE | 2022

    Free access