A new spatio-temporal model for simulating bottom-up visual attention is proposed. It is built on numerous important properties of the human visual system (HVS). This paper focuses both on the architecture of the model and on its performance. Since the spatial model of bottom-up visual attention has already been defined [O. Le Meur et al., 2004], the temporal dimension is described in greater detail. A qualitative and quantitative comparison with human fixations collected with an eye-tracking apparatus is undertaken. The former shows that the quality of the prediction is very good, while the latter shows that the best predictor of human fixations is the sum of all visual features (achromatic, chromatic and motion).
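
To illustrate the fusion result reported above, here is a minimal sketch (not the authors' implementation) of predicting salience by summing per-feature maps. The function name, the peak normalization, and the random stand-in feature maps are assumptions made for this example only:

    import numpy as np

    def fuse_saliency(achromatic, chromatic, motion):
        """Sum the three feature maps after scaling each to [0, 1]."""
        maps = [achromatic, chromatic, motion]
        # Normalize each map by its own peak (assumed scheme for this sketch).
        normed = [m / m.max() if m.max() > 0 else m for m in maps]
        fused = normed[0] + normed[1] + normed[2]
        return fused / fused.max()  # rescale the fused map to [0, 1]

    # Usage with random stand-in feature maps:
    rng = np.random.default_rng(0)
    salience = fuse_saliency(rng.random((64, 64)),
                             rng.random((64, 64)),
                             rng.random((64, 64)))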





    Title: A spatio-temporal model of the selective human visual attention

    Contributors: Le Meur, O. (author) / Thoreau, D. (author) / Le Callet, P. (author) / Barba, D. (author)

    Publication date: 2005-01-01

    Size: 203045 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    A Spatio-Temporal Model of the Selective Human Visual Attention

    Le Meur, O. / Thoreau, D. / Le Callet, P. et al. | British Library Conference Proceedings | 2005


    Spatio-temporal attention model for video content analysis

    Guironnet, M. / Guyader, N. / Pellerin, D. et al. | IEEE | 2005


    Spatio-temporal Attention Model for Video Content Analysis

    Guironnet, M. / Guyader, N. / Pellerin, D. et al. | British Library Conference Proceedings | 2005


    Selective spatio-temporal interest points

    Chakraborty, B. / Holte, M. B. / Moeslund, T. B. et al. | British Library Online Contents | 2012


    Spatio-Temporal Graph Attention Convolution Network for Traffic Flow Forecasting

    Liu, Kun / Zhu, Yifan / Wang, Xiao et al. | Transportation Research Record | 2024