Object recognition and depth perception are two tightly coupled tasks that are indispensable for situational awareness. Most autonomous systems perform these tasks by processing and integrating data streaming from a variety of sensors. The multiple hardware components and sophisticated software architectures required to operate these systems make them expensive to scale and operate. This paper implements a fast, monocular vision system for simultaneous object recognition and depth perception. We borrow from the architecture of a state-of-the-art object recognition system, YOLOv3, and extend it by incorporating distance information and modifying its loss functions and prediction vectors so that it can perform both tasks simultaneously. The vision system is trained on a large database acquired by coupling LiDAR measurements with a complementary 360-degree camera to generate a high-fidelity labeled dataset. The performance of the multipurpose network is evaluated on a test dataset consisting of a total of 7,634 objects collected on a different road network. When compared with ground-truth LiDAR data, the proposed network achieves a mean absolute percentage error of 11% for passenger cars within 10 m, and mean error rates of 7% and 9% for trucks within 10 m and beyond 10 m, respectively. Adding the second task (depth perception) to the network also improved object detection accuracy by about 3%. The proposed multipurpose model can be used for the development of automated alert systems, traffic monitoring, and safety monitoring.
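
The abstract describes the architectural change only at a high level: the YOLOv3 prediction vector is extended with a distance term and the loss is modified so detection and depth regression are trained jointly. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' implementation; the class name `DepthAwareYoloHead`, the single extra depth channel per anchor, the L1 depth term, and the `depth_weight` factor are assumptions made for the example.

```python
import torch
import torch.nn as nn


class DepthAwareYoloHead(nn.Module):
    """Hypothetical YOLOv3-style head: the usual per-anchor prediction vector
    (x, y, w, h, objectness, class scores) extended with one depth channel."""

    def __init__(self, in_channels: int, num_anchors: int = 3, num_classes: int = 80):
        super().__init__()
        self.num_anchors = num_anchors
        self.num_classes = num_classes
        # 5 box/objectness terms + class scores + 1 depth term per anchor
        out_channels = num_anchors * (5 + num_classes + 1)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        out = self.conv(x)
        # (batch, anchors, 5 + classes + 1, H, W); last channel is depth
        return out.view(b, self.num_anchors, 5 + self.num_classes + 1, h, w)


def multitask_loss(pred, target, detection_loss_fn, depth_weight: float = 1.0):
    """Assumed loss composition: the original detection loss plus an L1 depth
    term evaluated only on cells/anchors that contain an object."""
    det_loss = detection_loss_fn(pred[:, :, :-1], target["detection"])
    obj_mask = target["obj_mask"]            # (batch, anchors, H, W), bool
    pred_depth = pred[:, :, -1][obj_mask]
    true_depth = target["depth"][obj_mask]
    depth_loss = nn.functional.l1_loss(pred_depth, true_depth)
    return det_loss + depth_weight * depth_loss


def mape(pred_depth: torch.Tensor, true_depth: torch.Tensor) -> torch.Tensor:
    """Mean absolute percentage error, the depth metric reported in the abstract."""
    return (torch.abs(pred_depth - true_depth) / true_depth).mean() * 100
```

The reported depth errors (e.g., 11% for passenger cars within 10 m) correspond to a MAPE-style metric as in the `mape` helper above; the weighting between the detection and depth terms would be a tuning choice not specified in the abstract.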




    Title:
    Spatio-Temporal Fusion of LiDAR and Camera Data for Omnidirectional Depth Perception

    Additional title:
    Transportation Research Record: Journal of the Transportation Research Board

    Contributors:
    Zhang, Linlin (author) / Yu, Xiang (author) / Adu-Gyamfi, Yaw (author) / Sun, Carlos (author)

    Publication date:
    2023-07-07

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    LiDAR-Camera Fusion for Depth Enhanced Unsupervised Odometry

    Fetic, Naida / Aydemir, Eren / Unel, Mustafa | IEEE | 2022


    LiDAR - Stereo Camera Fusion for Accurate Depth Estimation

    Cholakkal, Hafeez Husain / Mentasti, Simone / Bersani, Mattia et al. | IEEE | 2020


    SPATIO-TEMPORAL DEPTH INTERPOLATION (STDI)

    Ochs, Matthias / Bradler, Henry / Mester, Rudolf | British Library Conference Proceedings | 2018


    Spatio-Temporal Depth Interpolation (STDI)

    Ochs, Matthias / Bradler, Henry / Mester, Rudolf | IEEE | 2018


    PERCEPTION SYSTEM LIDAR AND CAMERA BRACKET

    IMPOLA TODD A / O'DONNELL TIMOTHY M / MCALPINE JACOB J et al. | European Patent Office | 2021
