For the next generation of urban air mobility and unmanned aircraft systems, it must be shown that autonomy and machine-learning functions can be integrated safely and that the overall system operates safely while these functions are active. One prerequisite is that a machine-learning function is only used when it can be expected to operate safely under the specified environmental conditions. This paper presents a model-based approach for defining the Operational Design Domain (ODD). The ODD formalises the environmental conditions and system states expected during operation. From this formal model, the ODD specification is transformed into a specification in the runtime monitoring language RTLola. This allows the RTLola runtime monitoring framework to check log files for violations of the ODD and, later, to supervise the ODD during operation and in flight. The goal is to automate this supervision and make it more user-friendly. The approach is demonstrated in two separate use cases, in which an ODD model is created, exported, and transformed into a monitoring specification, and it is validated against a set of log files from these use cases.
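As an illustration of the final step, a generated monitoring specification for a simple ODD could look like the RTLola-style sketch below. The stream names, ODD dimensions, and numeric limits are assumptions chosen for illustration only and are not taken from the paper's use cases.

```
// Illustrative sketch only: ODD dimensions and limits are assumed, not from the use cases.
input altitude: Float64     // altitude above ground in metres, taken from the log or sensor stream
input wind_speed: Float64   // measured wind speed in m/s

// Raise a trigger whenever a monitored ODD dimension leaves its specified range.
trigger altitude > 120.0   "ODD violation: altitude above 120 m AGL"
trigger altitude < 10.0    "ODD violation: altitude below 10 m AGL"
trigger wind_speed > 8.0   "ODD violation: wind speed above 8 m/s"
```

A specification of this form can be evaluated by an RTLola monitor both offline, by replaying recorded log files, and online during flight, matching the two uses of the monitoring framework described above.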
From Operational Design Domain to Runtime Monitoring of AI-Based Aviation Systems
2024-09-29
610492 bytes
Conference paper
Electronic Resource
English
Springer Verlag | 2025