This paper introduces an application of foundation models that enables Unmanned Ground Vehicles (UGVs) equipped with an RGB-D camera to navigate to designated destinations based on human language instructions. Unlike learning-based methods, this approach requires no prior training; it leverages existing foundation models and thus generalizes to novel environments. Upon receiving a human language instruction, the system transforms it into a ‘cognitive route description’ using a large language model (LLM): a detailed navigation route expressed in human language. The vehicle then decomposes this description into landmarks and navigation maneuvers. The vehicle also determines elevation costs and identifies the navigability levels of different regions through a terrain segmentation model, GANav, trained on open datasets. Semantic elevation costs, which account for both elevation and navigability levels, are estimated and provided to the Model Predictive Path Integral (MPPI) planner, which is responsible for local path planning. Concurrently, the vehicle searches for target landmarks using foundation models, including YOLO-World and EfficientViT-SAM. Ultimately, the vehicle executes the navigation commands to reach the designated destination, the final landmark. Our experiments demonstrate that this application successfully guides UGVs to their destinations following human language instructions in novel environments, such as unfamiliar terrain or urban settings.
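The abstract names the semantic elevation cost and the MPPI planner but does not publish their formulation. The Python sketch below is a minimal, illustrative interpretation only, not the paper's implementation: the `semantic_elevation_cost` weighting, the unicycle motion model, the grid lookup, and all parameters (`w_elev`, `w_nav`, `lam`, noise scales) are assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical sketch: the paper does not give its cost formula or planner
# parameters, so every weight and model choice below is an assumption.

def semantic_elevation_cost(elevation, navigability, w_elev=1.0, w_nav=5.0):
    """Combine per-cell elevation and navigability (1.0 = fully navigable)
    into a single traversal cost, as a weighted sum (assumed form)."""
    return w_elev * elevation + w_nav * (1.0 - navigability)

def mppi_plan(x0, cost_map, n_samples=256, horizon=20, dt=0.1, lam=1.0):
    """One MPPI iteration: sample control noise, roll out a simple unicycle
    model over the cost map, and return the softmin-weighted control update."""
    rng = np.random.default_rng(0)
    u_nom = np.zeros((horizon, 2))                 # nominal [v, omega] sequence
    noise = rng.normal(0.0, [0.5, 0.3], size=(n_samples, horizon, 2))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x, y, th = x0
        for t in range(horizon):
            v, om = u_nom[t] + noise[k, t]
            x += v * np.cos(th) * dt               # unicycle kinematics
            y += v * np.sin(th) * dt
            th += om * dt
            i = min(max(int(x), 0), cost_map.shape[0] - 1)
            j = min(max(int(y), 0), cost_map.shape[1] - 1)
            costs[k] += cost_map[i, j]             # accumulate terrain cost
    w = np.exp(-(costs - costs.min()) / lam)       # softmin rollout weights
    w /= w.sum()
    return u_nom + np.tensordot(w, noise, axes=1)  # weighted control update

# Toy example: a 10x10 grid with random elevation and navigability.
elev = np.random.rand(10, 10)
nav = np.random.rand(10, 10)
cost_map = semantic_elevation_cost(elev, nav)
controls = mppi_plan((0.0, 0.0, 0.0), cost_map)
print(controls[0])                                 # first [v, omega] command
```

In the paper, the navigability levels would come from GANav's terrain segmentation and the elevation from the RGB-D depth data; here both are random placeholders so the sketch runs stand-alone.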


    Title:

    Words to Wheels: Vision-Based Autonomous Driving Understanding Human Language Instructions Using Foundation Models


    Contributors:
    Ryu, Chanhoe (author) / Seong, Hyunki (author) / Lee, Daegyu (author) / Moon, Seongwoo (author) / Min, Sungjae (author) / Shim, D. Hyunchul (author)


    Publication date:

    22.06.2025


    Format / extent:

    2181857 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Words to Wheels: Vision-Based Autonomous Driving Understanding Human Language Instructions Using Foundation Models

    Ryu, Chanhoe / Seong, Hyunki / Lee, Daegyu et al. | ArXiv | 2024

    Open access

    SYSTEMS AND METHODS FOR VISION-LANGUAGE PLANNING (VLP) FOUNDATION MODELS FOR AUTONOMOUS DRIVING

    PAN CHENBIN / YAMAN BURHANEDDIN / NESTI TOMMASO et al. | Europäisches Patentamt | 2025

    Open access


    AUTONOMOUS DRIVING INSTRUCTIONS

    XU JINGWEI / ROUTRAY SIDHARTHA | Europäisches Patentamt | 2020

    Open access

    Autonomous driving platform with independent driving wheels

    Europäisches Patentamt | 2024

    Open access