Describing a traffic scenario from the driver's perspective is a challenging task for Advanced Driving Assistance Systems (ADAS), involving sub-tasks such as detection, tracking, and segmentation. Previous methods mainly focus on independent sub-tasks and have difficulty describing incidents comprehensively. In this study, the problem is treated as a video captioning task for the first time, and a Guidance Attention Captioning Network (GAC-Network) is proposed to describe incidents in a single concise sentence. In the GAC-Network, an Attention-based Encoder-Decoder Network (AED-Net) serves as the main network; with its temporal and spatial attention mechanisms, the AED-Net can effectively reject unimportant traffic behaviors and redundant background. To cope with varied driving scenarios, Spatio-Temporal Layer Normalization is used to improve generalization. To generate captions for driving incidents, a novel Guidance Module is proposed that helps the encoder-decoder model generate words that relate better to the past and future words of a caption. Because there is no public dataset for captioning of driving scenarios, the Traffic Video Captioning (TVC) dataset is released for the video captioning task in driving scenarios. Experimental results show that the proposed methods can fulfill the captioning task for complex driving scenarios and outperform the comparison methods by at least 2.5%, 1.8%, 3.6%, and 13.1% on BLEU_1, METEOR, ROUGE_L, and CIDEr, respectively.
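
Since only the abstract is available in this record, the following is a minimal sketch of a generic temporal-attention encoder-decoder video captioner of the kind described above. All class names, layer sizes, and the choice of a GRU decoder are illustrative assumptions; this does not reproduce the authors' AED-Net, Spatio-Temporal Layer Normalization, or Guidance Module.

# Minimal sketch of a temporal-attention encoder-decoder video captioner.
# All names, dimensions, and modules are illustrative assumptions, not the
# authors' GAC-Network implementation.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Additive (Bahdanau-style) attention over per-frame video features."""
    def __init__(self, feat_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)
        self.w_hidden = nn.Linear(hidden_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, T, feat_dim) per-frame features; hidden: (B, hidden_dim)
        scores = self.v(torch.tanh(self.w_feat(feats) + self.w_hidden(hidden).unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)          # (B, T, 1) attention weights
        context = (alpha * feats).sum(dim=1)          # (B, feat_dim) weighted frame context
        return context, alpha


class AttnCaptioner(nn.Module):
    """Encoder-decoder captioner: a GRU decoder attends over frame features per step."""
    def __init__(self, feat_dim, vocab_size, hidden_dim=512, embed_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = TemporalAttention(feat_dim, hidden_dim)
        self.gru = nn.GRUCell(embed_dim + feat_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, T, feat_dim) frame features; captions: (B, L) token ids (teacher forcing)
        B, L = captions.shape
        h = feats.new_zeros(B, self.gru.hidden_size)
        logits = []
        for t in range(L - 1):
            context, _ = self.attn(feats, h)          # re-attend over frames at each step
            x = torch.cat([self.embed(captions[:, t]), context], dim=-1)
            h = self.gru(x, h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)             # (B, L-1, vocab_size)


# Usage with random tensors standing in for CNN frame features and tokenized captions.
model = AttnCaptioner(feat_dim=2048, vocab_size=5000)
feats = torch.randn(4, 30, 2048)                      # 4 clips, 30 frames each
caps = torch.randint(0, 5000, (4, 12))                # 4 tokenized captions
logits = model(feats, caps)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), caps[:, 1:].reshape(-1))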





    Title:
    Traffic Scenario Understanding and Video Captioning via Guidance Attention Captioning Network

    Contributors:
    Liu, Chunsheng (author) / Zhang, Xiao (author) / Chang, Faliang (author) / Li, Shuang (author) / Hao, Penghui (author) / Lu, Yansha (author) / Wang, Yinhai (author)

    Publication date:
    2024-05-01

    Size:
    1974668 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    Similar titles:

    Multimodal Sentiment Analysis Based on Image Captioning and Attention Mechanism

    Sun, Ye / Jin, Guozhe / Zhao, Yahui et al. | IEEE | 2023


    Attention Neural Baby Talk: Captioning of Risk Factors while Driving

    Mori, Yuki / Fukui, Hiroshi / Hirakawa, Tsubasa et al. | IEEE | 2019


    Learning explicit video attributes from mid-level representation for video captioning

    Nian, Fudong / Li, Teng / Wang, Yan et al. | British Library Online Contents | 2017


    Enhanced Dense Image Captioning Based On Transformers

    Goswami, Tilottama / Potu, Sathvika / Reddy, Kuntluri Prasanna et al. | IEEE | 2024