Occlusion is a major challenge for LiDAR-based object detection because it renders regions of interest unobservable to the ego vehicle. A proposed solution is collaborative perception via Vehicle-to-Everything (V2X) communication, which leverages the diverse viewpoints of connected agents (vehicles and intelligent roadside units) at multiple locations to form a complete representation of the scene. The central challenge of V2X collaboration is the performance-bandwidth tradeoff, which raises two questions: 1) which information should be exchanged over the V2X network, and 2) how the exchanged information should be fused. The current state of the art favours the mid-collaboration approach, in which Bird's-Eye View (BEV) images of point clouds are exchanged, enabling deep interaction among connected agents while reducing bandwidth consumption. Although they achieve strong performance, the real-world deployment of most mid-collaboration approaches is hindered by overly complicated architectures and unrealistic assumptions about inter-agent synchronization. In this work, we devise a simple yet effective collaboration method based on exchanging the outputs of each agent's detector, which achieves a better bandwidth-performance tradeoff while minimising the changes required to single-vehicle detection models. Moreover, we relax the synchronization assumptions of existing state-of-the-art approaches to require only a common time reference among connected agents, which can be obtained in practice from GPS time. Experiments on the V2X-Sim dataset show that our method reaches 76.72 mean average precision (mAP), 99% of the performance of early collaboration, while consuming as little bandwidth as late collaboration (0.01 MB on average). The code will be released at https://github.com/quan-dao/practical-collab-perception.
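The abstract's late-collaboration idea can be sketched compactly: each agent broadcasts its detection outputs stamped with a shared GPS time, and the ego vehicle aligns and fuses them. The Python snippet below is a minimal illustrative sketch, not the paper's implementation: the Detection fields, the constant-velocity latency compensation, and the distance-based non-maximum suppression are all assumptions chosen to make the data flow concrete.

from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    center: np.ndarray    # (x, y, z) box centre in the sender's frame
    score: float          # detection confidence
    stamp: float          # common (GPS) time at which the detection was produced
    velocity: np.ndarray  # (vx, vy) estimate, used to bridge inter-agent latency

def to_ego_frame(det: Detection, T_sender_to_ego: np.ndarray) -> Detection:
    # Move the box centre into the ego frame with a 4x4 homogeneous transform.
    p = T_sender_to_ego @ np.append(det.center, 1.0)
    return Detection(p[:3], det.score, det.stamp, det.velocity)

def compensate(det: Detection, t_ego: float) -> Detection:
    # Propagate the box to the ego's query time with a constant-velocity model.
    dt = t_ego - det.stamp
    center = det.center.copy()
    center[:2] = center[:2] + det.velocity * dt
    return Detection(center, det.score, det.stamp, det.velocity)

def fuse(detections: list, dist_thresh: float = 2.0) -> list:
    # Greedy distance-based NMS: keep the highest-scoring box among duplicates.
    kept = []
    for det in sorted(detections, key=lambda d: d.score, reverse=True):
        if all(np.linalg.norm(det.center[:2] - k.center[:2]) > dist_thresh for k in kept):
            kept.append(det)
    return kept

# Example: align one collaborator's broadcast with the ego's detections and fuse.
T = np.eye(4)  # sender pose in the ego frame (identity purely for illustration)
received = [Detection(np.array([10.0, 2.0, 0.0]), 0.9, 100.0, np.array([1.0, 0.0]))]
ego_dets = [Detection(np.array([10.4, 2.1, 0.0]), 0.8, 100.5, np.array([1.0, 0.0]))]
aligned = [compensate(to_ego_frame(d, T), t_ego=100.5) for d in received]
print(fuse(ego_dets + aligned))  # the two near-duplicate boxes collapse into one

Because only final detections cross the network, the payload stays tiny (consistent with the ~0.01 MB figure reported above), and the single-vehicle detector itself needs no architectural changes.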


    Title:

    Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection


    Contributors:
    Dao, Minh-Quan (author) / Berrio, Julie Stephany (author) / Fremont, Vincent (author) / Shan, Mao (author) / Hery, Elwan (author) / Worrall, Stewart (author)

    Published in:

    Publication date:

    1 September 2024


    Format / Extent:

    3731228 bytes




    Media type:

    Article (journal)


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection

    Dao, Minh-Quan / Berrio, Julie Stephany / Fremont, Vincent et al. | IEEE | 2024



    Multi-agent asynchronous perception simulation method and system for vehicle-road cooperation

    CAI YINGFENG / ZHANG SHUNYAO / LIU ZE et al. | European Patent Office | 2024

    Free access

    Multi-Agent Collaborative Framework for Automated Agriculture

    Ankit, Kumar / Kolathaya, Shishir N.Y. / Ghose, Debasish | IEEE | 2021


    A late fusion framework for multivehicle collaborative perception

    Li, Zengwen / Gao, Yingying / Lv, Huaxin et al. | SPIE | 2024