Describing a traffic scenario from the driver's perspective is a challenging task for Advanced Driving Assistance Systems (ADAS), involving sub-tasks such as detection, tracking, and segmentation. Previous methods mainly focus on these sub-tasks independently and struggle to describe incidents comprehensively. In this study, the problem is newly treated as a video captioning task, and a Guidance Attention Captioning Network (GAC-Network) is proposed to describe incidents in a single concise sentence. In GAC-Network, an Attention-based Encoder-Decoder Net (AED-Net) serves as the main network; its temporal and spatial attention mechanisms allow it to effectively reject unimportant traffic behaviors and redundant background. To cope with varied driving scenarios, Spatio-Temporal Layer Normalization is used to improve generalization. To generate captions for driving incidents, a novel Guidance Module is proposed that helps the encoder-decoder generate words with stronger relationships to the past and future words of the caption. Because there is no public dataset for captioning driving scenarios, the Traffic Video Captioning (TVC) dataset is released for the video captioning task in driving scenarios. Experimental results show that the proposed method can fulfill the captioning task for complex driving scenarios and outperforms the comparison methods by at least 2.5%, 1.8%, 3.6%, and 13.1% on BLEU_1, METEOR, ROUGE_L, and CIDEr, respectively.
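The abstract describes the general architecture family (an attention-based encoder-decoder over video frames with layer normalization). As a rough illustration of that family only, the following is a minimal PyTorch sketch in which word embeddings attend over encoded frame features before an LSTM decoder predicts the next word. All module choices, dimensions, and the use of nn.MultiheadAttention and nn.LayerNorm are assumptions for illustration; this does not reproduce the paper's GAC-Network, AED-Net, Guidance Module, or Spatio-Temporal Layer Normalization.

# Minimal sketch of an attention-based encoder-decoder video captioner.
# All names and hyper-parameters are illustrative assumptions, not the
# authors' released GAC-Network implementation.
import torch
import torch.nn as nn


class AttentionCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=10000):
        super().__init__()
        # Encoder: project per-frame CNN features and normalize them
        # (a generic stand-in, not the paper's spatio-temporal normalization).
        self.encode = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.LayerNorm(hidden_dim),
            nn.ReLU(),
        )
        # Temporal attention of caption tokens over the encoded frames.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        # Decoder: LSTM language model over word embeddings plus visual context.
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, num_frames, feat_dim); captions: (batch, seq_len)
        enc = self.encode(frame_feats)                 # (B, T, H)
        words = self.embed(captions)                   # (B, L, H)
        # Each word embedding queries the frame sequence (temporal attention).
        context, _ = self.attn(words, enc, enc)        # (B, L, H)
        hidden, _ = self.decoder(torch.cat([words, context], dim=-1))
        return self.out(hidden)                        # (B, L, vocab_size)


if __name__ == "__main__":
    model = AttentionCaptioner()
    feats = torch.randn(2, 16, 2048)            # 2 clips, 16 frames each
    caps = torch.randint(0, 10000, (2, 12))     # 2 tokenized captions
    print(model(feats, caps).shape)             # torch.Size([2, 12, 10000])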


    Title:

    Traffic Scenario Understanding and Video Captioning via Guidance Attention Captioning Network


    Contributors:
    Liu, Chunsheng (author) / Zhang, Xiao (author) / Chang, Faliang (author) / Li, Shuang (author) / Hao, Penghui (author) / Lu, Yansha (author) / Wang, Yinhai (author)


    Publication date:

    2024-05-01


    Format / Extent:

    1974668 bytes




    Media type:

    Article (Journal)


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Gated Hierarchical Attention for Image Captioning

    Wang, Qingzhong / Chan, Antoni B. | British Library Conference Proceedings | 2019


    Thinking Hallucination for Video Captioning

    Ullah, Nasib / Mohanta, Partha Pratim | British Library Conference Proceedings | 2023



    Multimodal Sentiment Analysis Based on Image Captioning and Attention Mechanism

    Sun, Ye / Jin, Guozhe / Zhao, Yahui et al. | IEEE | 2023


    Attention Neural Baby Talk: Captioning of Risk Factors while Driving

    Mori, Yuki / Fukui, Hiroshi / Hirakawa, Tsubasa et al. | IEEE | 2019