In the field of autonomous vehicles (AVs), accurately discerning commander intent and executing linguistic commands within a visual context presents a significant challenge. This paper introduces an encoder-decoder framework developed to address visual grounding in AVs. The Context-Aware Visual Grounding (CAVG) model integrates five core encoders—Text, Emotion, Image, Context, and Cross-Modal—with a multimodal decoder. This integration enables CAVG to capture contextual semantics and learn human emotional features, augmented by state-of-the-art Large Language Models (LLMs) including GPT-4. The architecture is reinforced by multi-head cross-modal attention mechanisms and a Region-Specific Dynamic (RSD) layer for attention modulation, allowing the model to efficiently process a range of cross-modal inputs and build a comprehensive understanding of the correlation between verbal commands and the corresponding visual scenes. Empirical evaluations on the Talk2Car dataset, a real-world benchmark, demonstrate that CAVG establishes new standards in prediction accuracy and operational efficiency. Notably, the model performs strongly even with limited training data, ranging from 50% to 75% of the full dataset, which underscores its effectiveness and its potential for deployment in practical AV applications. Moreover, CAVG shows remarkable robustness and adaptability in challenging scenarios, including long-text command interpretation, low-light conditions, ambiguous command contexts, inclement weather, and densely populated urban environments.
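To make the cross-modal attention idea concrete, the sketch below shows how encoded text-command tokens can attend over candidate image-region features with standard multi-head attention. This is a minimal illustration in PyTorch, not the authors' released code: the module name CrossModalAttention, the 256-dimensional embeddings, and the random stand-in features are assumptions for demonstration only.

    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        """Text-command queries attend over image-region keys/values,
        mirroring the multi-head cross-modal attention the abstract
        describes at a high level (hypothetical sketch, not CAVG code)."""

        def __init__(self, dim: int = 256, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, text_tokens, region_feats):
            # text_tokens:  (B, T, dim) encoded command tokens
            # region_feats: (B, R, dim) encoded candidate image regions
            fused, weights = self.attn(query=text_tokens,
                                       key=region_feats,
                                       value=region_feats)
            # Residual connection + normalization; `weights` (B, T, R)
            # indicates which regions each command token attends to.
            return self.norm(text_tokens + fused), weights

    # Toy usage with random stand-ins for the encoder outputs.
    cmd = torch.randn(2, 12, 256)      # e.g. Text/Emotion encoder output
    regions = torch.randn(2, 32, 256)  # e.g. Image/Context encoder output
    fused, weights = CrossModalAttention()(cmd, regions)
    print(fused.shape, weights.shape)  # (2, 12, 256) and (2, 12, 32)

A grounding head could then score each candidate region from these attention weights; the paper's RSD layer additionally modulates attention per region, a step this sketch omits.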


    Title: GPT-4 enhanced multimodal grounding for autonomous driving: Leveraging cross-modal attention with large language models
    Contributors: Haicheng Liao (author) / Huanming Shen (author) / Zhenning Li (author) / Chengyue Wang (author) / Guofa Li (author) / Yiming Bie (author) / Chengzhong Xu (author)
    Publication date: 2024
    Type of media: Article (Journal)
    Type of material: Electronic Resource
    Language: English

    Similar items:

    Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving
    Feng, Di / Cao, Yifan / Rosenbaum, Lars et al. | British Library Conference Proceedings | 2020

    Leveraging Uncertainties for Deep Multi-modal Object Detection in Autonomous Driving
    Feng, Di / Cao, Yifan / Rosenbaum, Lars et al. | IEEE | 2020