Two metrics quantifying model and simulator predictive capability are formulated and evaluated; both exploit results from validation experiments in which simulation results are compared to the corresponding measured quantities. The first metric is inspired by the modified nearest neighbor coverage metric and the second by the Kullback–Leibler divergence. Both metrics are implemented in Python and in a general metamodel, developed here, designed to be applicable to most physics-based simulation models. Together, these two implementations facilitate both offline and online metric evaluation. Additionally, a connection is established between the concepts of predictive capability and credibility, which are treated as separate here, and realized in the metamodel. Finally, the two implementations are evaluated in an aeronautical domain context.
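The second metric builds on the Kullback–Leibler divergence between measured and simulated quantities. The paper's actual formulation and Python implementation are not reproduced here; the following is a minimal sketch of the general idea, assuming one-dimensional sample sets and Gaussian kernel density estimates. All function and variable names are illustrative, not taken from the paper.

```python
"""Sketch of a KL-divergence-based validation metric (assumed form).

Estimates densities for the measured and simulated quantity with
Gaussian KDEs and computes D_KL(p_measured || p_simulated) on a
shared grid. This is one plausible discretization, not necessarily
the paper's.
"""
import numpy as np
from scipy.stats import gaussian_kde


def kl_divergence_metric(measured, simulated, n_grid=512):
    """Approximate D_KL(p_measured || p_simulated) from two sample sets."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)

    # Shared evaluation grid spanning both sample sets.
    lo = min(measured.min(), simulated.min())
    hi = max(measured.max(), simulated.max())
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]

    # Kernel density estimates of the measured and simulated quantity.
    p = gaussian_kde(measured)(grid)
    q = gaussian_kde(simulated)(grid)

    # Renormalize on the grid; clip q to avoid division by zero
    # where the simulated density is vanishingly small.
    p /= p.sum() * dx
    q = np.clip(q / (q.sum() * dx), 1e-12, None)

    return float(np.sum(p * np.log(p / q)) * dx)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    measured = rng.normal(1.0, 0.20, size=50)     # validation measurements
    simulated = rng.normal(1.05, 0.25, size=500)  # simulation outputs
    print(f"KL-based discrepancy: {kl_divergence_metric(measured, simulated):.4f}")
```

A small divergence indicates that the simulated distribution tracks the measured one closely; the paper additionally connects such metric values to a credibility assessment, which this sketch does not attempt.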
Toward Objective Assessment of Simulation Predictive Capability
Journal of Aerospace Information Systems; 20, 3; 152-167
2023-03-01
Article (Journal)
Electronic Resource
English
Similar items:
Model Predictive Capability Assessment Under Uncertainty | AIAA | 2006
Model Predictive Capability Assessment under Uncertainty | AIAA | 2005
Assessment of Predictive Capability of Aeromechanics Methods | Online Contents | 2010
Assessment of predictive capability of aeromechanics methods | Tema Archive | 2010
Assessment of Predictive Capability of Aeromechanics Methods | British Library Conference Proceedings | 2008