We aim to determine the ego-vehicle's speed from a monocular video stream. Previous work by Konda et al. [1] showed that motion can be detected and quantified with a synchrony autoencoder, which introduces multiplicative gating interactions between its hidden units and, hence, across video frames. In this work we modify their synchrony autoencoder to achieve "real-time" performance in a wide variety of driving environments. Our modifications yield a model that is 1.5 times faster and uses only half the memory of the original. We also benchmark speed-estimation performance against a model based on CaffeNet, an architecture known for visual classification and localization, which we adapt with minor modifications to estimate speed from sequential video frames and motion-blur patterns. We evaluate our models on self-collected data, the KITTI dataset, and other standard benchmarks.
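
As an illustration of the gating mechanism the abstract refers to, here is a minimal NumPy sketch of a synchrony-autoencoder-style encoder/decoder pair: hidden units respond to the elementwise product of filter responses from two consecutive frames, so they activate strongly only when both responses are "in sync", which is what encodes motion. All sizes, names (U, V, encode, decode), and the tied-decoder form are illustrative assumptions, not the configuration used in the paper or in Konda et al. [1].

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: D = flattened patch dimension per frame,
    # F = number of gated hidden units (not the paper's settings).
    D, F = 256, 64

    # One filter bank per frame, as in a gated/synchrony autoencoder.
    U = 0.01 * rng.standard_normal((F, D))  # filters for frame t
    V = 0.01 * rng.standard_normal((F, D))  # filters for frame t+1

    def encode(x_t, x_t1):
        # Multiplicative gating: the product of the two filter responses
        # is large only when both frames contain the filter's pattern with
        # a consistent shift, i.e. when the responses are synchronous.
        return np.tanh((U @ x_t) * (V @ x_t1))

    def decode(h, x_t):
        # Reconstruct frame t+1, gating the decoder with frame t's
        # responses (tied weights, a common simplification).
        return V.T @ (h * (U @ x_t))

    # Toy usage on random patches; a real model would train U and V to
    # minimize reconstruction error over many video patch pairs.
    x_t, x_t1 = rng.standard_normal(D), rng.standard_normal(D)
    h = encode(x_t, x_t1)
    loss = np.mean((decode(h, x_t) - x_t1) ** 2)
    print(h.shape, float(loss))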







    Title :

    Velocity estimation from monocular video for automotive applications using convolutional neural networks


    Contributors :

    Banerjee, Koyel / Van Dinh, Tuan / Levkova, Ludmila

    Publication date :

    2017-06-01


    Size :

    976051 bytes




    Type of media :

    Conference paper


    Type of material :

    Electronic Resource


    Language :

    English



    Similar titles :

    Velocity Estimation from Monocular Video for Automotive Applications Using Convolutional Neural Networks

    Banerjee, Koyel / Van Dinh, Tuan / Levkova, Ludmila | British Library Conference Proceedings | 2017



    Lightweight and Effective Convolutional Neural Networks for Vehicle Viewpoint Estimation From Monocular Images

    Magistri, Simone / Boschi, Marco / Sambo, Francesco et al. | BASE | 2022


    Visual Depth Mapping from Monocular Images using Recurrent Convolutional Neural Networks

    Mern, John M. / Julian, Kyle D. / Tompa, Rachael E. et al. | AIAA | 2019