Abstract

Accurate and robust semantic scene understanding for urban driving is challenging due to complex object types, object motion, and ego-motion. Typical approaches to this problem fuse multiple sensors, such as cameras, IMUs, LiDAR, and radar, to identify the state of surrounding objects, including their distance, direction, position, and velocity. However, such sensor configurations are complex and costly. This paper proposes a new framework for object identification (FOI) from a moving camera in a complex driving environment using only camera data. The framework detects objects and extracts their behavioral features in terms of motion, position, velocity, and distance. All of this information (referred to as object-wise semantic information) is fused to acquire a better understanding of the driving scenario. The work addresses ego-motion compensation and the extraction of accurate motion information of moving objects from a moving camera using image registration and optical flow estimation. A moving object detection model is designed within the framework by integrating an encoder–decoder network with a semantic segmentation network. The approach involves two mutual tasks: semantic segmentation of objects into specific classes and binary pixel classification that predicts, based on temporal information, whether a detected object is moving or static. The work also contributes a novel dataset for moving object detection that covers all types of dynamic objects. FOI is evaluated on different sequences of the KITTI, EU Long-term, and proposed datasets, and the experimental results show that it provides accurate object-wise semantic information.

    Highlights

    - Proposes a new vision-based object identification framework for autonomous vehicles.
    - Fuses motion- and geometry-related information for urban driving scenarios.
    - Uses image registration to compensate the ego-motion induced by the moving camera.
    - Proposes a moving object detection model together with a new MOD dataset.
    - Provides an object-wise semantic information benchmark for assessing the framework.
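    The record does not include the paper's implementation details, so the following is only a minimal sketch of the ego-motion compensation idea described in the abstract: register consecutive frames to cancel camera motion, then treat the residual optical flow as evidence of independently moving objects. The RANSAC homography, Lucas-Kanade feature tracking, Farneback dense flow, the flow_thresh parameter, and the frame file names are illustrative assumptions (OpenCV), not the authors' exact method.

    import cv2
    import numpy as np

    def residual_motion(prev_bgr, curr_bgr, flow_thresh=2.0):
        # Grayscale copies for feature tracking and dense flow.
        prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
        curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

        # 1) Image registration: track sparse corners between the frames
        #    and fit a RANSAC homography approximating the ego-motion.
        #    (Sketch assumes the scene has enough texture for corners.)
        pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
        pts1, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None)
        good0 = pts0[status.ravel() == 1]
        good1 = pts1[status.ravel() == 1]
        H, _inliers = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)

        # 2) Warp the previous frame into the current frame's geometry so
        #    the static background is approximately aligned.
        h, w = curr.shape
        prev_warped = cv2.warpPerspective(prev, H, (w, h))

        # 3) Dense optical flow on the compensated pair: the remaining
        #    (residual) flow is dominated by independently moving objects
        #    rather than by the camera's own motion.
        flow = cv2.calcOpticalFlowFarneback(prev_warped, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        moving_mask = magnitude > flow_thresh  # coarse moving/static labels
        return moving_mask, flow

    # Hypothetical usage on two consecutive frames of a driving sequence.
    f0 = cv2.imread("frame_000.png")
    f1 = cv2.imread("frame_001.png")
    mask, flow = residual_motion(f0, f1)
    print("moving pixels:", int(mask.sum()))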





    Title:

    Motion and geometry-related information fusion through a framework for object identification from a moving camera in urban driving scenarios


    Contributors:
    Lateef, Fahad (Author) / Kas, Mohamed (Author) / Ruichek, Yassine (Author)


    Publication date:

    2023-07-26




    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English




    Deep-PDANet: Camera-Radar Fusion for Depth Estimation in Autonomous Driving Scenarios

    Zheng, Lianqing / Ai, Wenjin / Ma, Zhixiong | SAE Technical Papers | 2023


    A multi-sensor fusion system for moving object detection and tracking in urban driving environments

    Cho, Hyunggi / Seo, Young-Woo / Kumar, B.V.K. Vijaya et al. | IEEE | 2014


    Continuous stereo camera calibration in urban scenarios

    Mueller, Georg R. / Wuensche, Hans-Joachim | IEEE | 2017


    Multi-view structure-from-motion for hybrid camera scenarios

    Bastanlar, Y. / Temizel, A. / Yardimci, Y. et al. | British Library Online Contents | 2012


    Generating Motion Scenarios for Self-Driving Vehicles

    WANG JINGKANG / PUN AVA ALISON / TU XUANYUAN et al. | European Patent Office | 2022

    Free access