Software errors have occurred since the first use of computers in spacecraft and aircraft. These errors can manifest catastrophically, as loss of life, or less severely. As the demand for automation increases, software in mission-critical or safety-critical systems should be designed to tolerate the most likely software faults. This paper categorizes a set of 56 historic aerospace software error incidents from 1962 to 2023 to identify trends in how and where automation is most likely to fail or behave unexpectedly. A distinction is introduced between software producing unexpected (erroneous) output and software producing no output (fail-silent). Of the historical incidents analyzed, 86% involved software producing wrong output rather than simply stopping. Rebooting was found to be ineffective at clearing erroneous behavior and unreliable for recovering from silent failures. The error originated within the code/logic itself in 59% of cases, from configurable data in 16%, from unexpected sensor input in 14%, and from command/operator input in 11%. A substantial forty-one percent (41%) of unexpected software behavior was attributable to the absence of code, arising from unanticipated situations and missing requirements, and 18% of incidents were subjectively deemed "unknown-unknowns". No incidents were found to be the result of the programming language, compiler, tools, or operating system, and only eighteen percent (18%) of all incidents were considered traditional computer-science or programming errors in nature. These findings indicate that, for fault tolerance, erroneous automation behavior must be a primary consideration, especially at critical moments, and that reboot recovery may not be viable. Based on these data, we recommend best practices for pre-flight software error prevention and in-flight error mitigation. Care should be taken to validate configurable data and commands prior to use. "Test-like-you-fly" practices, including hardware-in-the-loop testing combined with robust off-nominal testing, should be used to uncover missing logic arising from unanticipated situations not covered by requirements alone. Monitoring, override, and backup systems should be architected and employed in accordance with time and safety criticality. This study uniquely focuses on manifestations of unexpected flight software behavior, independent of ultimate root cause. We characterize software error behavior and origin to improve software design, test, and operations for resilience to the most common manifestations, and provide a rich dataset for further study.
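The abstract's recommendation to validate configurable data and commands prior to use could be realized as in the following minimal sketch. Python is used purely for illustration; the parameter name, limits, and integrity-check field are assumptions for this example and are not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical limits for an uplinked, configurable control parameter.
# The names and ranges are illustrative assumptions, not values from the paper.
GAIN_MIN, GAIN_MAX = 0.0, 10.0


@dataclass
class CommandedGain:
    value: float
    crc_ok: bool  # result of an integrity check on the uplinked packet


def validate_commanded_gain(cmd: CommandedGain, current_gain: float) -> float:
    """Accept the new gain only if it passes integrity and range checks;
    otherwise retain the last known-good value and flag the rejection."""
    if not cmd.crc_ok:
        print("REJECTED: command failed integrity check; keeping current gain")
        return current_gain
    if not (GAIN_MIN <= cmd.value <= GAIN_MAX):
        print(f"REJECTED: gain {cmd.value} outside [{GAIN_MIN}, {GAIN_MAX}]")
        return current_gain
    return cmd.value


if __name__ == "__main__":
    gain = 2.5  # last known-good configuration
    # An out-of-range uplink is rejected and the previous value is retained.
    gain = validate_commanded_gain(CommandedGain(value=50.0, crc_ok=True), gain)
    print(f"active gain: {gain}")
```

The design choice illustrated here is to fall back to the last known-good configuration rather than halting, in line with the finding that most incidents stem from erroneous rather than fail-silent behavior.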
Historical Aerospace Software Errors Categorized to Influence Fault Tolerance
02.03.2024
1,727,618 bytes
Conference Paper
Electronic Resource
English
Fault-tolerance features of an aerospace multiprocessor
Tema Archiv | 1974
Automating software fault tolerance
AIAA | 1987
License plate recognition for categorized applications
IEEE | 2011
Automating software fault tolerance
NTRS | 1987
Software Fault-Tolerance Techniques
Wiley | 2017