Stereo cameras are crucial sensors for self-driving vehicles because they are low-cost and can be used to estimate depth. They serve multiple purposes, such as object detection, depth estimation, and semantic segmentation. In this paper, we propose a stereo vision-based perception framework for autonomous vehicles. It runs three deep neural networks simultaneously to perform free-space detection, lane boundary detection, and object detection on image frames captured by the stereo camera. The distance of the detected objects from the vehicle is estimated from the disparity image computed from the two stereo image frames. The proposed stereo perception framework runs at 7.4 Hz on the Nvidia Drive PX 2 hardware platform, which further allows its use in multi-sensor fusion for localization, mapping, and path planning in autonomous vehicle applications.
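
    The abstract does not state which disparity algorithm the framework uses; as a minimal illustration of the depth-from-disparity step, the sketch below assumes OpenCV's semi-global block matching and the standard pinhole-stereo relation Z = f * B / d. The file names, focal length, and baseline are placeholder values, not figures from the paper.

    # Minimal sketch: depth from a rectified stereo pair (assumed approach, not the paper's exact method).
    import cv2
    import numpy as np

    # Load rectified left/right frames from the stereo camera (paths are placeholders).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching yields a disparity map (parameters are illustrative only).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point (x16)

    # Depth follows from the pinhole-stereo relation Z = f * B / d,
    # where f is the focal length in pixels and B is the stereo baseline in metres.
    focal_px = 700.0     # assumed focal length
    baseline_m = 0.12    # assumed baseline
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]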


    Title:

    A Stereo Perception Framework for Autonomous Vehicles


    Contributors:
    Kemsaram, Narsimlu (author) / Das, Anweshan (author) / Dubbelman, Gijs (author)


    Publication date:

    2020-05-01


    Format / extent:

    513305 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English





    Omnidirectional stereo vision for autonomous vehicles

    Schönbein, Miriam | TIBKAT | 2015

    Open access


    Omnidirectional Stereo Vision for Autonomous Vehicles

    Schönbein, Miriam | GWLB - Gottfried Wilhelm Leibniz Bibliothek | 2014

    Open access