As deep neural networks (DNNs) are becoming the prominent solution for many computational problems, the aviation industry seeks to explore their potential in alleviating pilot workload and improving operational safety. However, the use of DNNs in these types of safety-critical applications requires a thorough certification process. This need could be partially addressed through formal verification, which provides rigorous assurances — e.g., by proving the absence of certain mispredictions. In this case-study paper, we demonstrate this process on an image-classifier DNN currently under development at Airbus, which is intended for use during the aircraft taxiing phase. We use formal methods to assess this DNN's robustness to three common image perturbation types: noise, brightness and contrast, and some of their combinations. This process entails multiple invocations of the underlying verifier, which can be computationally expensive; we therefore propose a method that leverages the monotonicity of these robustness properties, as well as the results of past verification queries, to reduce the overall number of verification queries required by nearly 60%. Our results quantify the level of robustness achieved by the DNN classifier under study, and indicate that it is considerably more vulnerable to noise than to brightness or contrast perturbations.
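The query-reduction idea described in the abstract can be illustrated with a minimal sketch, which is not the paper's implementation: it assumes a single perturbation dimension, a sorted grid of perturbation levels, and a hypothetical oracle `verify_robust(eps)` that wraps a DNN verifier and returns True iff robustness is proved at level `eps`. Because robustness at a given level implies robustness at all smaller levels (and a violation implies violations at all larger levels), a binary search over the grid settles many levels without issuing a verification query for each one.

```python
# Minimal sketch (not the paper's method): exploiting monotonicity of
# robustness in the perturbation magnitude to prune verification queries.
# `verify_robust` is a hypothetical stand-in for a call to a DNN verifier.

from typing import Callable, Dict, Iterable


def sweep_with_pruning(levels: Iterable[float],
                       verify_robust: Callable[[float], bool]) -> Dict[float, bool]:
    """Label each perturbation level as robust/non-robust, skipping queries
    whose outcome is already implied by monotonicity."""
    levels = sorted(levels)
    results: Dict[float, bool] = {}
    lo, hi = 0, len(levels) - 1
    # Binary search for the largest robust level; each query settles half the range.
    while lo <= hi:
        mid = (lo + hi) // 2
        if verify_robust(levels[mid]):
            for eps in levels[lo:mid + 1]:   # implied robust (smaller or equal levels)
                results[eps] = True
            lo = mid + 1
        else:
            for eps in levels[mid:hi + 1]:   # implied non-robust (larger or equal levels)
                results[eps] = False
            hi = mid - 1
    return results


if __name__ == "__main__":
    # Toy oracle: pretend the classifier is provably robust up to eps = 0.03.
    outcome = sweep_with_pruning(
        [0.01, 0.02, 0.03, 0.04, 0.05],
        verify_robust=lambda eps: eps <= 0.03,
    )
    print(outcome)  # {0.01: True, 0.02: True, 0.03: True, 0.04: False, 0.05: False}
```

In this toy run, only two verifier invocations are needed to classify five levels; the paper's actual method additionally reuses results across combinations of noise, brightness, and contrast perturbations.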
Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing
2024-09-29
Conference paper
English