High accuracy and quick response of environmental perception systems are crucial for the driving stability and safety of intelligent vehicles. In road scenes, dynamic traffic objects and static pavement information are essential components of perception systems. The current state of the art constructs a separate network for each task and integrates the outputs of multiple frameworks through post-processing, which results in high energy consumption and latency in the perception system. In this study, a unified multitask network (IDS-MODEL) for road scenes is proposed that simultaneously performs high-precision instance segmentation and drivable area segmentation in real time. The proposed network is an end-to-end convolutional neural network (CNN) consisting mainly of an optimized shared backbone and task-specific decoders. Residual and attention mechanisms are first introduced to improve the feature extraction capability of the backbone. Depthwise separable convolution is then used to reduce the number of parameters and increase computational efficiency. In addition, two parallel decoders with feature fusion modules and corresponding prediction heads are designed to simultaneously and efficiently extract the important dynamic and static information from road scenes. Experimental results on the autonomous driving dataset BDD100K demonstrate that the proposed multitask model achieves 18.74% mean average precision (mAP) on instance masks and 83.63% mean intersection over union (mIoU) on drivable areas, with an inference speed of 36 FPS. Qualitative results from real-vehicle experiments indicate that the proposed method adapts well to real-world scenes and delivers high accuracy in real time.
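The following is a minimal sketch of the kind of shared-backbone multitask architecture the abstract describes: one encoder built from depthwise separable convolutions with residual and attention blocks, feeding two parallel decoders with feature fusion. It is not the authors' IDS-MODEL implementation; the module names, channel widths, squeeze-and-excitation attention choice, and simplified prediction heads are illustrative assumptions (a real instance-segmentation branch would also need mask assembly and post-processing, omitted here).

```python
# Hypothetical PyTorch sketch of a shared-backbone, two-decoder multitask network.
# All names and sizes are assumptions for illustration, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a 1x1 pointwise conv (parameter-efficient)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention (one possible attention choice)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # rescale feature channels


class ResidualBlock(nn.Module):
    """Residual block built from depthwise-separable convs plus channel attention."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = DepthwiseSeparableConv(ch, ch)
        self.conv2 = DepthwiseSeparableConv(ch, ch)
        self.att = SEAttention(ch)

    def forward(self, x):
        return F.relu(x + self.att(self.conv2(self.conv1(x))))


class Decoder(nn.Module):
    """Upsampling decoder that fuses a skip feature before predicting a dense map."""
    def __init__(self, in_ch, skip_ch, out_classes):
        super().__init__()
        self.fuse = DepthwiseSeparableConv(in_ch + skip_ch, 64)
        self.head = nn.Conv2d(64, out_classes, 1)

    def forward(self, deep, skip):
        deep = F.interpolate(deep, size=skip.shape[2:], mode="bilinear",
                             align_corners=False)
        return self.head(self.fuse(torch.cat([deep, skip], dim=1)))


class MultiTaskNet(nn.Module):
    """Shared backbone feeding two parallel task-specific decoders in one pass."""
    def __init__(self, drivable_classes=2, instance_channels=8):
        super().__init__()
        self.stem = DepthwiseSeparableConv(3, 32, stride=2)      # 1/2 resolution
        self.stage1 = nn.Sequential(DepthwiseSeparableConv(32, 64, stride=2),
                                    ResidualBlock(64))           # 1/4 resolution
        self.stage2 = nn.Sequential(DepthwiseSeparableConv(64, 128, stride=2),
                                    ResidualBlock(128))          # 1/8 resolution
        self.drivable_decoder = Decoder(128, 64, drivable_classes)
        self.instance_decoder = Decoder(128, 64, instance_channels)

    def forward(self, x):
        x = self.stem(x)
        skip = self.stage1(x)
        deep = self.stage2(skip)
        # Both decoders consume the same shared features: a single forward pass
        # yields drivable-area logits and instance features simultaneously.
        return self.drivable_decoder(deep, skip), self.instance_decoder(deep, skip)


if __name__ == "__main__":
    net = MultiTaskNet()
    drivable, instance = net(torch.randn(1, 3, 384, 640))
    print(drivable.shape, instance.shape)  # both at 1/4 of the input resolution
```

The key design point mirrored here is that the two tasks share one encoder forward pass instead of running two independent networks, which is where the latency and energy savings claimed in the abstract come from.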


Title:
IDS-MODEL: An Efficient Multitask Model of Road Scene Instance and Drivable Area Segmentation for Autonomous Driving

Contributors:
Luo, Tong (author) / Chen, Yanyan (author) / Luan, Tianyu (author) / Cai, Baixiang (author) / Chen, Long (author) / Wang, Hai (author)

Published in:

Publication date:
01.03.2024

Format / extent:
14637515 bytes

Media type:
Journal article

Format:
Electronic resource

Language:
English



    DRIVABLE AREA SEGMENTATION IN DETERIORATING ROAD REGIONS FOR AUTONOMOUS VEHICLES USING 3D LIDAR SENSOR

    Ali, Abdelrahman / Gergis, Mark / Abdennadher, Slim et al. | British Library Conference Proceedings | 2021



    Drivable Area Segmentation in Deteriorating Road Regions for Autonomous Vehicles using 3D LiDAR Sensor

    Ali, Abdelrahman / Gergis, Mark / Abdennadher, Slim et al. | IEEE | 2021



    AUTONOMOUS DRIVING VEHICLE AND DYNAMIC PLANNING METHOD OF DRIVABLE AREA

    XIAO JIANXIONG | Europäisches Patentamt | 2022

Open access