Stereo cameras are crucial sensors for self-driving vehicles because they are low-cost and can be used to estimate depth. They serve multiple purposes, such as object detection, depth estimation, and semantic segmentation. In this paper, we propose a stereo vision-based perception framework for autonomous vehicles. It runs three deep neural networks simultaneously to perform free-space detection, lane boundary detection, and object detection on image frames captured by the stereo camera. The distance of detected objects from the vehicle is estimated from the disparity image computed from the two stereo image frames. The proposed stereo perception framework runs at 7.4 Hz on the Nvidia Drive PX 2 hardware platform, which allows its use in multi-sensor fusion for localization, mapping, and path planning in autonomous vehicle applications.
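
The depth estimation step described in the abstract relies on the standard stereo relation Z = f * B / d (depth from focal length, baseline, and disparity). Below is a minimal sketch of that computation using OpenCV's StereoSGBM matcher; it is not the paper's actual pipeline, and the matcher parameters, file names, and calibration values are illustrative assumptions.

    # Minimal sketch of depth-from-disparity (Z = f * B / d), not the paper's pipeline.
    # Matcher parameters and calibration values below are illustrative, not tuned.
    import cv2
    import numpy as np

    def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
        # Semi-global block matching on a rectified stereo pair.
        matcher = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,   # must be a multiple of 16
            blockSize=5,
        )
        # OpenCV returns fixed-point disparity scaled by 16.
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan   # mask invalid / unmatched pixels
        # Depth in metres for every valid pixel.
        return focal_px * baseline_m / disparity

    # Hypothetical usage with assumed calibration (focal length in pixels, baseline in metres):
    # left  = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    # right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    # depth_map = depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.12)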


    Title: A Stereo Perception Framework for Autonomous Vehicles

    Contributors:

    Publication date: 2020-05-01

    Size: 513,305 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English





    Omnidirectional stereo vision for autonomous vehicles

    Schönbein, Miriam | TIBKAT | 2015



    Stereo Perception on an Off-Road Vehicle

    Rieder, A. / Southall, B. / Salgian, G. et al. | British Library Conference Proceedings | 2003