The intelligent behavior of autonomous vehicles must rely on an understanding of the explanatory factors of the environment in order to operate safely. To implement such intelligence, machine learning models such as CNNs are used to identify objects such as traffic signs. However, the process behind arriving at a certain result is hard to understand. Explainable artificial intelligence has the potential to overcome this limitation by unveiling the explanatory factors learned by models and thereby increasing the reliability of recognition systems. Recent progress in explainable artificial intelligence motivates research in various fields, and its application must become a core part of intelligent transport systems. In this paper, we present an explainable and unsupervised methodology for traffic sign classification. The proposed pipeline combines methods for explainable feature extraction with out-of-distribution detection, which were previously applied in anomaly detection and evolutionary biology. The pipeline learns the explanatory factors of a traffic sign class and models a classification function without knowledge of other classes. Our method is evaluated on the GTSRB and Tsinghua-Tencent-100k datasets and compared to a deep learning counterpart, namely GANs. The results show that the presented methodology can feasibly classify traffic sign images, is explainable, and outperforms deep learning-based models.
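Because the abstract does not name the concrete feature-extraction or out-of-distribution methods, the following Python sketch illustrates only the general one-class structure of such a pipeline: HOG descriptors stand in for an interpretable, explainable feature extractor, and scikit-learn's IsolationForest stands in for the out-of-distribution detector. Both components and all data below are hypothetical assumptions, not the paper's actual method.

```python
# Hedged sketch of a one-class (unsupervised) traffic sign classifier:
# interpretable feature extraction followed by out-of-distribution detection.
# HOG and IsolationForest are illustrative stand-ins, not the paper's components.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import IsolationForest

def extract_features(images):
    """Map each RGB image to an interpretable gradient-histogram descriptor."""
    feats = []
    for img in images:
        img = resize(img, (64, 64, 3))  # normalize input size before HOG
        feats.append(hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), channel_axis=-1))
    return np.asarray(feats)

# Placeholder data; real use would load image crops of a single sign class.
rng = np.random.default_rng(0)
one_class_train = rng.random((32, 64, 64, 3))  # images of ONE class only
test_images = rng.random((8, 64, 64, 3))       # mixed in/out-of-class images

# Fit the detector on the single known class; no other classes are required,
# matching the abstract's claim of classifying without knowing other classes.
detector = IsolationForest(random_state=0).fit(extract_features(one_class_train))

# +1 = accepted as the learned class, -1 = rejected as out-of-distribution.
print(detector.predict(extract_features(test_images)))
```

In this one-class setup, "classification" reduces to accepting samples whose explainable features fall within the learned distribution and rejecting everything else, which is why no negative training examples are needed.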
Unsupervised Traffic Sign Classification Relying on Explanatory Visible Factors
2023-09-24
1078931 bytes
Conference paper
Electronic Resource
English
Similar items:
Traffic sign detection model transmitted according to traffic sign classification information
European Patent Office | 2024
Traffic Sign Detection and Classification
IEEE | 2023
Shape Classification for Traffic Sign Recognition
British Library Conference Proceedings | 1993
Road traffic sign detection and classification
Tema Archive | 1997
Traffic Sign Classification using Deep Learning
IEEE | 2023