Testing of autonomous vehicles is a crucial step to ensure safety, but it can be costly and time-consuming. Virtual testing is a potential solution to reduce these costs and time limitations. However, its reliability must be evaluated before it can replace specific field tests. Deep learning (DL) models in automated driving systems (ADS) in particular can be sensitive to the data they encounter. In this work, we propose a cascade method to evaluate the reliability of virtual testing for DL models in autonomous driving. The method considers both the scenario level and the model level to identify the origins of the reality gap with respect to scenario modeling, the fidelity of the simulation environment, and model performance. We demonstrate the method in an exemplary case study focusing on the object tracking task in autonomous driving. Based on the case study, we identify several challenges at different stages of the virtual testing pipeline. The results of this work can be used to improve the reliability of virtual testing for DL models in autonomous driving by calibrating the pipeline according to the identified challenges.
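The abstract describes a cascade evaluation that proceeds from the scenario level to the model level. The following is a minimal sketch of such a two-stage check, not taken from the paper: it compares a scenario-level statistic and a model-level tracking score between paired field and virtual runs. All names, thresholds, and the gap metric (here a simple relative difference) are illustrative assumptions.

```python
# Hypothetical sketch of a cascade reliability check (scenario level first,
# then model level); names, metrics, and tolerances are assumptions.
from dataclasses import dataclass


@dataclass
class RunResult:
    scenario_id: str
    object_count: float   # scenario-level statistic (e.g. mean number of tracked objects)
    mota: float           # model-level tracking score in [0, 1]


def relative_gap(real: float, virtual: float) -> float:
    """Relative difference between a field measurement and its virtual counterpart."""
    return abs(real - virtual) / max(abs(real), 1e-9)


def assess_reliability(real_runs, virtual_runs, scenario_tol=0.10, model_tol=0.05):
    """Cascade check: flag scenario-level gaps before comparing model performance."""
    report = {}
    for real, virt in zip(real_runs, virtual_runs):
        scenario_gap = relative_gap(real.object_count, virt.object_count)
        if scenario_gap > scenario_tol:
            # Scenario modeling or simulation fidelity diverges; skip model comparison.
            report[real.scenario_id] = ("scenario-level gap", scenario_gap)
            continue
        model_gap = relative_gap(real.mota, virt.mota)
        verdict = "model-level gap" if model_gap > model_tol else "virtual test reliable"
        report[real.scenario_id] = (verdict, model_gap)
    return report


if __name__ == "__main__":
    field = [RunResult("cut-in", 4.2, 0.81), RunResult("overtake", 3.0, 0.77)]
    sim = [RunResult("cut-in", 4.0, 0.79), RunResult("overtake", 2.1, 0.75)]
    for sid, (verdict, gap) in assess_reliability(field, sim).items():
        print(f"{sid}: {verdict} (gap={gap:.2f})")
```

In this sketch, a run that already fails the scenario-level comparison is not evaluated at the model level, mirroring the cascade idea of attributing the reality gap to its earliest plausible origin.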
Assessing Reliability of Virtual Testing for Deep Learning Models in Autonomous Vehicles
24.09.2024
3471232 bytes
Conference paper
Electronic resource
English
Deep Learning & Autonomous Vehicles