Identifying driver behaviour and activities from in-cabin video cameras, especially distracting non-driving activities, has recently been shown to be effective in enhancing safety and the driving experience in smart and partially automated vehicles. In the literature, the problem of video-based driver activity recognition is often tackled with traditional deep-learning-based human-action recognition systems. Despite their powerful capabilities, these systems are not well suited to video-based driver activity recognition, because their complex and inefficient architectures require a huge amount of computational resources. Additionally, many non-driving activities share the same pattern of upper-body movements (e.g. drinking versus eating), which makes it harder for traditional human-action recognition systems to pick up on and differentiate between such subtle changes. Thus, in this work we propose a novel framework based on an efficient spatio-temporal neural network architecture augmented with an attention mechanism that can differentiate between the subtle differences of similar non-driving activities. Our framework has been evaluated on one of the largest benchmark datasets for fine-grained recognition of driver activities, where it outperformed the state-of-the-art approach by more than 4% in top-1 accuracy while achieving a 13x run-time speedup during inference.
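
As an illustration of the general idea only, not the authors' published architecture, the sketch below shows one common way to augment a lightweight spatio-temporal feature extractor with temporal soft attention in PyTorch, so that frames carrying discriminative upper-body motion are weighted more heavily before classification. The class name, layer sizes, and the 34-class output head are assumptions made for demonstration purposes.

# Minimal, hypothetical sketch (not the paper's implementation) of an
# attention-augmented spatio-temporal block for driver activity recognition.
import torch
import torch.nn as nn


class AttentionAugmentedSTBlock(nn.Module):
    def __init__(self, in_channels=3, feat_channels=64, num_classes=34):
        super().__init__()
        # Spatio-temporal feature extractor: a single 3D convolution stands
        # in for an efficient video backbone.
        self.backbone = nn.Sequential(
            nn.Conv3d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(feat_channels),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool space, keep time axis
        )
        # Temporal soft attention: scores each frame-level feature vector.
        self.attn = nn.Sequential(
            nn.Linear(feat_channels, feat_channels // 2),
            nn.Tanh(),
            nn.Linear(feat_channels // 2, 1),
        )
        self.classifier = nn.Linear(feat_channels, num_classes)

    def forward(self, clip):
        # clip: (batch, channels, time, height, width)
        feats = self.backbone(clip)              # (B, C, T, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1)    # (B, C, T)
        feats = feats.transpose(1, 2)            # (B, T, C)
        scores = self.attn(feats)                # (B, T, 1)
        weights = torch.softmax(scores, dim=1)   # attention weights over time
        pooled = (weights * feats).sum(dim=1)    # attention-weighted pooling
        return self.classifier(pooled)


if __name__ == "__main__":
    model = AttentionAugmentedSTBlock()
    dummy = torch.randn(2, 3, 16, 112, 112)      # two 16-frame RGB clips
    print(model(dummy).shape)                    # torch.Size([2, 34])

In this toy version, the attention module replaces uniform average pooling over time with a learned weighting, which is one plausible mechanism for separating visually similar activities such as drinking versus eating.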


    Title:

    Real-time Attention-Augmented Spatio-Temporal Networks for Video-based Driver Activity Recognition


    Contributors:
    Saleh, Khaled (Author) / Mihaita, Adriana-Simona (Author) / Yu, Kun (Author) / Chen, Fang (Author)


    Publication date:

    2022-10-08


    Format / Extent:

    391770 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Spatio-temporal attention model for video content analysis

    Guironnet, M. / Guyader, N. / Pellerin, D. et al. | IEEE | 2005


    Real-Time Driver State Monitoring Using a CNN Based Spatio-Temporal Approach*

    Kose, Neslihan / Kopuklu, Okan / Unnervik, Alexander et al. | IEEE | 2019


    Spatio-temporal Attention Model for Video Content Analysis

    Guironnet, M. / Guyader, N. / Pellerin, D. et al. | British Library Conference Proceedings | 2005


    Driver activity recognition using spatial‐temporal graph convolutional LSTM networks with attention mechanism

    Chaopeng Pan / Haotian Cao / Weiwei Zhang et al. | DOAJ | 2021

    Free access

    Driver activity recognition using spatial‐temporal graph convolutional LSTM networks with attention mechanism

    Pan, Chaopeng / Cao, Haotian / Zhang, Weiwei et al. | Wiley | 2021

    Free access