Visual semantic segmentation is a key technology for scene understanding in autonomous driving, but its accuracy degrades under lighting changes in images. This paper proposes a novel multi-exposure fusion approach for enhancing the visual semantic segmentation of autonomous driving. First, a multi-exposure image sequence is aligned to construct a stable image input. Second, the high-contrast regions of the multi-exposure image sequence are evaluated by a context aggregation network (CAN) to predict per-image weight maps. Finally, a high-quality image is generated by the weighted fusion of the multi-exposure image sequence. The proposed approach is validated on the Cityscapes HDR dataset and on real-environment data. The experimental results show that the proposed method effectively restores features lost in images with lighting changes and improves the accuracy of subsequent semantic segmentation.
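
Restated in code, the core of the approach is the weighted fusion step: a network scores each aligned exposure per pixel, and the normalized scores blend the sequence into one image. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation; the small dilated-convolution WeightNet is only a stand-in for the paper's context aggregation network (CAN), and its layer sizes, the softmax normalization across exposures, and the function names are assumptions.

    # Minimal sketch of weight-map prediction and weighted multi-exposure fusion.
    # WeightNet is a hypothetical stand-in for the paper's CAN, not its actual design.
    import torch
    import torch.nn as nn

    class WeightNet(nn.Module):
        """Predicts a single-channel weight map for one RGB exposure."""
        def __init__(self):
            super().__init__()
            # Dilated convolutions aggregate spatial context at full resolution,
            # in the spirit of a context aggregation network (assumed sizes).
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1, dilation=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    def fuse_exposures(images: torch.Tensor, weight_net: nn.Module) -> torch.Tensor:
        """Fuse an aligned multi-exposure sequence (N, 3, H, W) into one (3, H, W) image."""
        logits = weight_net(images)             # (N, 1, H, W) raw per-pixel scores
        weights = torch.softmax(logits, dim=0)  # normalize weights across the N exposures
        return (weights * images).sum(dim=0)    # per-pixel weighted sum of the sequence

    if __name__ == "__main__":
        seq = torch.rand(3, 3, 256, 512)        # e.g. under-, normal-, over-exposed frames
        fused = fuse_exposures(seq, WeightNet())
        print(fused.shape)                      # torch.Size([3, 256, 512])

Normalizing the weights across the exposure axis guarantees they sum to one at every pixel, so the fused image stays within the intensity range of the inputs.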


    Title:
    A novel multi-exposure fusion approach for enhancing visual semantic segmentation of autonomous driving

    Contributors:
    Huang, Tengchao (author) / Song, Shuang (author) / Liu, Qianjie (author) / He, Wei (author) / Zhu, Qingyuan (author) / Hu, Huosheng (author)

    Publication date:
    2023-06-01

    Size:
    16 pages

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English





    RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving

    El Madawi, Khaled / Rashed, Hazem / El Sallab, Ahmad et al. | IEEE | 2019




    Visual Odometry Integrated Semantic Constraints towards Autonomous Driving

    Yao, Siyu / Lan, FengChong / Chen, Jiqing | British Library Conference Proceedings | 2022