Perception tasks are critical for an autonomous driving system. In recent years, advances in deep learning have enabled highly accurate perception. However, perception in poor-visibility environments remains challenging. One of the main reasons is that most existing datasets are concentrated on visually clear environments, which makes it difficult to train deep learning-based perception models for limited-visibility conditions. Building a new dataset requires significant time and human resources, and annotating data from adverse-visibility environments is even more challenging. To address these problems, many image translation methods have been proposed that translate annotated daytime images into nighttime ones, building a nighttime dataset without additional annotation. In this paper, we present an unsupervised day-to-night image translation network for generating synthetic data. Our proposed method extracts semantic information from input images and applies the extracted information to the image-to-image translation network as spatial attention maps. We conduct experiments to evaluate the proposed method. The experimental results show that our method outperforms the related works both qualitatively and quantitatively.
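The abstract describes applying extracted semantic information to the translation network as spatial attention maps. The following is a minimal sketch of one plausible way to do this; it is not the authors' implementation, and names such as SemanticAttention, the channel sizes, and the residual re-weighting are illustrative assumptions only.

import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    """Turns a semantic map into a single-channel spatial attention map
    and uses it to re-weight intermediate translation-network features."""
    def __init__(self, num_classes: int, feat_channels: int):
        super().__init__()
        # Project the (one-hot or soft) semantic map down to one attention channel.
        self.to_attention = nn.Sequential(
            nn.Conv2d(num_classes, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel importance in (0, 1)
        )

    def forward(self, features: torch.Tensor, semantic_map: torch.Tensor) -> torch.Tensor:
        # features:     (B, C, H, W) intermediate generator features
        # semantic_map: (B, num_classes, H, W) segmentation of the input image
        attn = self.to_attention(semantic_map)  # (B, 1, H, W)
        return features * attn + features       # residual spatial re-weighting

# Usage sketch: modulate a 256-channel feature map with a 19-class segmentation.
if __name__ == "__main__":
    block = SemanticAttention(num_classes=19, feat_channels=256)
    feats = torch.randn(2, 256, 64, 64)
    seg = torch.randn(2, 19, 64, 64).softmax(dim=1)
    print(block(feats, seg).shape)  # torch.Size([2, 256, 64, 64])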
Semantic Attention-guided Day-to-Night Image Translation Network
2023-09-24
7,478,861 bytes
Conference paper
Electronic resource
English
SG-Net: Semantic Guided Network for Image Dehazing
British Library Conference Proceedings | 2023
RGB-D Co-attention Network for Semantic Segmentation
British Library Conference Proceedings | 2021