AI techniques, including machine learning, have made enormous progress over the last decade, with several models already deployed across various aerospace applications such as aircraft design, operation, production and maintenance, as well as air traffic control. The question remains: can the behaviour of such AI models be validated in a way that provides assurance they will continue to perform as specified when deployed in a real-time environment? Explainable AI (XAI), a sub-field of AI, opens up possibilities of exposing complex AI models to human users and operators in interpretable and understandable ways. This paper explores answers to such questions, which perplex the aerospace AI community, by capturing the essence of complex AI models (the black boxes) through various known XAI approaches and classes. Accordingly, various techniques, for instance white-box AI, black-box AI, model-agnostic methods, fuzzy logic, and knowledge graphs, are investigated to assess their efficacy in terms of explainability. In addition, XAI requirements for safety-critical systems are clearly laid down from the perspectives of creators, guarantors, and interpreters. Finally, this paper puts forth a comparison of various degrees of explainability with the standard elements of the Intelligence Community Directive (ICD) to set out the XAI capabilities that would be required to build trust in complex AI models.
(Explainable) Artificial Intelligence in Aerospace Safety-Critical Systems
05.03.2022
961579 byte
Conference paper
Electronic resource
English
Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance
TIBKAT | 2020
ONBOARD EXPLAINABLE ARTIFICIAL INTELLIGENCE
TIBKAT | 2022