Vision-based object detection has become a crucial component of autonomous vehicles, yet reliable testing of such systems remains an open problem. In this paper, we advocate applying causal inference to identify the pivotal environmental factors that influence detection accuracy. By integrating diffusion models, we address the conditional generalization of hazardous test images. Our approach constructs observational data to attribute key factors and to fine-tune the diffusion model. We further introduce an optimal prompt search method that balances test coverage against level of challenge. Leveraging these optimal prompts, we then propose cost-effective test image generation in both "Text2Scene" and "Image2Scene" modes. Experimental results show that object detection algorithms perform worst on the generalized dataset, with average detection accuracy dropping from 0.81 to 0.285. Moreover, retraining object detection models on our generalized critical test cases improves algorithm performance, achieving a median accuracy gain of up to 8.13%. Overall, our research proposes a novel approach to generalizing test cases, contributing to the development and deployment of safer autonomous vehicles.
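As an illustration of the "Text2Scene" and "Image2Scene" modes described in the abstract, the sketch below is a minimal, hypothetical example built on the open-source diffusers library. The model identifier, prompt wording, source filename, and sampling parameters are assumptions for illustration only; they are not the authors' fine-tuned model, their searched prompts, or their pipeline.

```python
# Minimal sketch (assumption): generating hazardous driving-scene test images with
# a publicly available diffusion model via the diffusers library. Model id, prompt,
# and parameters are illustrative, not the paper's fine-tuned setup.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
model_id = "runwayml/stable-diffusion-v1-5"  # assumed base model

# Prompt encoding environmental factors assumed to degrade detection (e.g. fog, night).
prompt = "urban street at night, dense fog, heavy rain, pedestrians crossing, dashcam view"

# "Text2Scene": synthesize a hazardous driving scene directly from the text prompt.
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)
scene = txt2img(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
scene.save("text2scene_fog_night.png")

# "Image2Scene": perturb an existing driving image toward the same hazardous
# conditions while largely preserving scene layout and object positions.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=dtype).to(device)
source = Image.open("source_frame.png").convert("RGB")  # hypothetical source frame
hazardous = img2img(prompt=prompt, image=source, strength=0.5, guidance_scale=7.5).images[0]
hazardous.save("image2scene_fog_night.png")
```

In a workflow like the one the abstract outlines, images produced this way would be run through the object detector to expose accuracy drops under the generated conditions, and the hardest cases could be fed back as retraining data.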
Critical Test Cases Generalization for Autonomous Driving Object Detection Algorithms
2024 IEEE Intelligent Vehicles Symposium (IV), pp. 1149-1156
2024-06-02
Conference paper
English