Pedestrian intent prediction is crucial for developing safe self-driving systems. Pedestrian crossing behavior is influenced by many factors, and a self-driving car can use its on-board front camera to record a pedestrian's past trajectory and surroundings and predict future crossing behavior. Effectively fusing information from different modalities is key to accurate prediction. This paper proposes a multimodal information fusion model based on a hybrid attention mechanism. The model employs hybrid attention networks to extract complementary information from both the original and the semantic images, sharpening its focus on image regions relevant to pedestrian crossing. Additionally, an asymmetric bidirectional gated recurrent unit (BU-GRU) module is introduced; combined with the best-performing fusion strategy, it improves both F1 score and accuracy.
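
The abstract only sketches the architecture, but its two main ideas (hybrid attention over the original and semantic image streams, followed by a bidirectional GRU over time) can be illustrated concretely. Below is a minimal PyTorch sketch: the layer sizes, the channel-plus-spatial form of the attention block, the asymmetric per-direction hidden sizes used to read "BU-GRU", and all names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the fusion pipeline described in the abstract.
# Assumes per-frame feature maps have already been extracted by a CNN
# backbone; all sizes and module forms are illustrative assumptions.
import torch
import torch.nn as nn


class HybridAttention(nn.Module):
    """Channel + spatial attention over a feature map (assumed form)."""

    def __init__(self, channels: int):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 1-channel mask highlighting regions
        # relevant to the crossing decision.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        return x * self.spatial_gate(x)


class CrossingIntentModel(nn.Module):
    """Fuses original- and semantic-image streams, then a bi-GRU in time."""

    def __init__(self, channels: int = 64, hidden: int = 128):
        super().__init__()
        self.rgb_attn = HybridAttention(channels)
        self.sem_attn = HybridAttention(channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Asymmetric" bidirectional GRU: different hidden sizes per
        # direction (one plausible reading of BU-GRU; an assumption here).
        self.fwd_gru = nn.GRU(2 * channels, hidden, batch_first=True)
        self.bwd_gru = nn.GRU(2 * channels, hidden // 2, batch_first=True)
        self.head = nn.Linear(hidden + hidden // 2, 2)  # cross / not-cross

    def forward(self, rgb: torch.Tensor, sem: torch.Tensor) -> torch.Tensor:
        # rgb, sem: (batch, time, channels, H, W) per-frame feature maps.
        b, t = rgb.shape[:2]
        rgb = self.rgb_attn(rgb.flatten(0, 1))
        sem = self.sem_attn(sem.flatten(0, 1))
        # Pool each attended stream and concatenate per frame.
        feats = torch.cat(
            [self.pool(rgb).flatten(1), self.pool(sem).flatten(1)], dim=1
        ).view(b, t, -1)
        f, _ = self.fwd_gru(feats)            # forward pass through time
        bwd, _ = self.bwd_gru(feats.flip(1))  # backward pass through time
        fused = torch.cat([f[:, -1], bwd[:, -1]], dim=1)
        return self.head(fused)


# Example: two 8-frame clips with 64-channel, 16x16 feature maps.
model = CrossingIntentModel()
rgb = torch.randn(2, 8, 64, 16, 16)
sem = torch.randn(2, 8, 64, 16, 16)
logits = model(rgb, sem)  # shape (2, 2)
```

Concatenating the pooled streams is only one of several possible fusion strategies; the abstract indicates the authors compare alternatives and select the best-performing one.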


    Title:

    Multimodal feature fusion for pedestrian crossing intention prediction based on hybrid attention mechanism


    Contributors:
    Guo, Jieru (author) / Ding, Yutong (author) / Tian, Aoshang (author)


    Publication date:

    26 July 2024


    Format / extent:

    1056275 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English