Visual detection is a key task in autonomous driving, serving as a crucial foundation for planning and control in self-driving vehicles. Deep neural networks have achieved promising results on various visual tasks, but they are known to be vulnerable to adversarial attacks. A comprehensive understanding of deep visual detectors’ vulnerabilities is required before their robustness can be improved. However, only a few adversarial attack/defense works have focused on object detection, and most of them employ only classification and/or localization losses, ignoring the objectness aspect. In this paper, we identify a serious objectness-related adversarial vulnerability in YOLO detectors and present an effective attack strategy targeting the objectness aspect of visual detection in autonomous vehicles. Furthermore, to address this vulnerability, we propose a new objectness-aware adversarial training approach for visual detection. Experiments show that the proposed attack targeting the objectness aspect is 45.17% and 43.50% more effective than attacks generated from classification and/or localization losses on the KITTI and COCO_traffic datasets, respectively. In addition, the proposed adversarial defense approach improves the detectors’ robustness against objectness-oriented attacks by up to 21% and 12% mAP on KITTI and COCO_traffic, respectively.
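The abstract gives no implementation details, so the sketch below is only a rough illustration of what an objectness-targeted attack might look like: a PGD-style perturbation that drives a YOLO detector's objectness logits toward "no object". The model interface (returning raw per-anchor objectness logits), the step sizes, and the loss formulation are assumptions made for illustration, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def objectness_pgd(model, images, eps=8/255, alpha=2/255, steps=10):
    """PGD-style sketch: perturb images so the detector's objectness
    confidence collapses and objects are no longer detected.
    Assumes (hypothetically) that model(x) returns raw per-anchor
    objectness logits."""
    # Random start inside the eps-ball, clipped to valid pixel range.
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        obj_logits = model(adv)  # assumed interface: objectness logits only
        # Loss that pulls every objectness score toward "no object".
        loss = F.binary_cross_entropy_with_logits(
            obj_logits, torch.zeros_like(obj_logits))
        grad = torch.autograd.grad(loss, adv)[0]
        # Descend on the suppression loss, then project back into the eps-ball.
        adv = adv - alpha * grad.sign()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()
```

Under the same assumptions, an objectness-aware adversarial training defense of the kind described in the abstract would generate such perturbed images on the fly and include them, alongside clean samples, in the detector's training loss.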
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios
2022 IEEE Intelligent Vehicles Symposium (IV); pp. 1011-1017
2022-06-05
2,488,100 bytes
Conference paper
Electronic resource
English
Adversarial scenarios for safety testing of autonomous vehicles | Europäisches Patentamt | 2023
Adversarial scenarios for safety testing of autonomous vehicles | Europäisches Patentamt | 2024
ADVERSARIAL SCENARIOS FOR SAFETY TESTING OF AUTONOMOUS VEHICLES | Europäisches Patentamt | 2023
ADVERSARIAL SCENARIOS FOR SAFETY TESTING OF AUTONOMOUS VEHICLES | Europäisches Patentamt | 2021