Stereo cameras are crucial sensors for self-driving vehicles: they are low-cost and can be used to estimate depth. They serve multiple purposes, such as object detection, depth estimation, and semantic segmentation. In this paper, we propose a stereo vision-based perception framework for autonomous vehicles. It runs three deep neural networks simultaneously to perform free-space detection, lane boundary detection, and object detection on image frames captured by the stereo camera. The depth of each detected object from the vehicle is estimated from the disparity image computed from the two stereo image frames. The proposed stereo perception framework runs at 7.4 Hz on the Nvidia Drive PX 2 hardware platform, which allows its output to be used for multi-sensor fusion in localization, mapping, and path planning by autonomous vehicle applications.
A Stereo Perception Framework for Autonomous Vehicles
01.05.2020
Conference paper
Electronic resource
English
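
The abstract estimates object depth from a disparity image computed from the two stereo frames. A minimal sketch of that step is shown below, assuming a rectified stereo pair and OpenCV's semi-global block matcher; the focal length, baseline, file names, and bounding box are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0   # assumed focal length in pixels (hypothetical)
BASELINE_M = 0.12         # assumed stereo baseline in metres (hypothetical)

# Rectified left/right frames from the stereo camera (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching returns fixed-point disparity scaled by 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth in metres from the pinhole stereo relation: Z = f * B / d.
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]

# Example: median depth inside a detected object's bounding box (x, y, w, h).
x, y, w, h = 100, 150, 80, 60  # hypothetical detection from the object detector
roi_depth = depth[y:y + h, x:x + w]
roi_valid = valid[y:y + h, x:x + w]
object_depth = np.median(roi_depth[roi_valid])
print(f"Estimated object depth: {object_depth:.2f} m")
```

Using the median of valid disparities inside the detection box is one common way to make the per-object depth estimate robust to matching outliers; the paper's exact aggregation scheme is not specified in the abstract.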