In recent years, there has been a notable increase in the development of autonomous vehicle (AV) technologies aimed at improving the safety of transportation systems. While AVs have been deployed in the real world to some extent, full-scale deployment requires them to robustly navigate challenges such as heavy rain, snow, low lighting, construction zones, and GPS signal loss in tunnels. To handle these challenges, an AV must reliably recognize the physical attributes of the environment in which it operates. In this paper, we define context recognition as the task of accurately identifying environmental attributes so that an AV can respond to them appropriately. Specifically, we define 24 environmental contexts capturing a variety of weather, lighting, traffic, and road conditions that an AV must be aware of. Motivated by the need to recognize environmental contexts, we create a context recognition dataset called DrivingContexts with more than 1.6 million context-query pairs relevant to an AV. Since traditional supervised computer vision approaches do not scale well to a wide variety of contexts, we propose a framework called ContextVLM that uses vision-language models to detect contexts using zero- and few-shot approaches. ContextVLM reliably detects relevant driving contexts with an accuracy of more than 95% on our dataset, while running in real time on a 4 GB Nvidia GeForce GTX 1050 Ti GPU on-board an AV with a latency of 10.5 ms per query.
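The abstract does not include implementation details, but the zero-shot idea it describes can be illustrated with a minimal sketch: scoring a camera frame against natural-language context descriptions using an off-the-shelf vision-language model. The model name, prompt wording, and image path below are illustrative assumptions, not the authors' ContextVLM pipeline or their 24 defined contexts; the few-shot variant mentioned in the abstract would additionally condition the model on a handful of labeled examples.

    # Hedged sketch (not the authors' implementation): zero-shot context
    # recognition by comparing a camera frame with textual context prompts
    # using an off-the-shelf CLIP vision-language model from Hugging Face.
    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # A few example environmental contexts (the paper defines 24 such
    # contexts; the exact wording here is hypothetical).
    context_prompts = [
        "a photo of a road in heavy rain",
        "a photo of a snow-covered road",
        "a photo of a road at night with low lighting",
        "a photo of a construction zone on a road",
    ]

    image = Image.open("camera_frame.jpg")  # placeholder camera frame
    inputs = processor(text=context_prompts, images=image,
                       return_tensors="pt", padding=True)

    with torch.no_grad():
        outputs = model(**inputs)
        # logits_per_image holds the image-text similarity for each prompt
        probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)

    for prompt, prob in zip(context_prompts, probs.tolist()):
        print(f"{prob:.2f}  {prompt}")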



    Title:

    ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving Using Vision Language Models


    Contributors:
    Sural, Shounak (Author) / Naren (Author) / Rajkumar, Ragunathan Raj (Author)


    Publication date:

    2024-09-24


    Format / Size:

    4460616 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Similar titles:

    ZERO SHOT MACHINE VISION SYSTEM VIA JOINT SPARSE REPRESENTATIONS

    KOLOURI SOHEIL / RAO SHANKAR / KIM KYUNGNAM | European Patent Office | 2019

    Free access

    ZERO SHOT MACHINE VISION SYSTEM VIA JOINT SPARSE REPRESENTATIONS

    KOLOURI SOHEIL / RAO SHANKAR R / KIM KYUNGNAM | European Patent Office | 2021

    Free access

    Zero shot machine vision system via joint sparse representations

    KOLOURI SOHEIL / RAO SHANKAR R / KIM KYUNGNAM | European Patent Office | 2020

    Free access