We aim to determine the speed of ego-vehicle motion from a video stream. Previous work by Konda et al. [1] showed that motion can be detected and quantified using a synchrony autoencoder, which introduces multiplicative gating interactions between its hidden units and, hence, across video frames. In this work we modify their synchrony autoencoder to achieve real-time performance in a wide variety of driving environments. Our modifications yield a model that is 1.5 times faster and uses only half the memory of the original. We also benchmark speed-estimation performance against a model based on CaffeNet. CaffeNet is known for visual classification and localization, but we employ its architecture with minor modifications to estimate speed from sequential video frames and blur patterns. We evaluate our models on self-collected data, the KITTI dataset, and other standard benchmarks.
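To make the multiplicative-gating idea concrete, the following is a minimal NumPy sketch of a synchrony-style encoder over patches from two consecutive frames. It is illustrative only: the patch size, hidden dimension, tied filter bank W, tanh nonlinearity, and the gated decoder are assumptions made for this sketch, not the exact model of Konda et al. [1] or of this paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: 16x16 grayscale patches from two consecutive frames.
    patch_dim, n_hidden = 256, 128

    # Shared (tied) filter bank applied to both frames. Random weights are
    # placeholders; in a trained model they would be learned by minimizing
    # reconstruction error.
    W = rng.standard_normal((n_hidden, patch_dim)) * 0.01

    def encode(x_t, x_t1):
        """Synchrony-style encoding: each hidden unit responds to the
        *product* of filter responses on consecutive frames, so it fires
        when matching structure appears in both frames."""
        fx = W @ x_t    # filter responses on frame t
        fy = W @ x_t1   # filter responses on frame t+1
        return np.tanh(fx * fy)  # multiplicative gating across frames

    def decode(h, x_t):
        """Gated reconstruction of frame t+1 from the code and frame t
        (an assumed decoder form for this sketch)."""
        return W.T @ (h * (W @ x_t))

    # Toy usage on random patches.
    x_t = rng.standard_normal(patch_dim)
    x_t1 = rng.standard_normal(patch_dim)
    h = encode(x_t, x_t1)
    recon = decode(h, x_t)
    print(h.shape, recon.shape)  # (128,) (256,)

The design point is that the code depends on filter responses co-occurring across frames rather than on either frame alone, which is what ties the hidden representation to inter-frame motion and, ultimately, to ego-speed.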
Velocity estimation from monocular video for automotive applications using convolutional neural networks
01.06.2017
976051 bytes
Conference paper
Electronic resource
English
British Library Conference Proceedings | 2017
BASE | 2022