The paper advocates using Explainable Artificial Intelligence (XAI) to enhance the trustworthiness of both black-box and interpretable models in the context of performance testing. The proposed methodology employs the SHapley Additive exPlanations (SHAP) algorithm as a surrogate model to help performance analysts understand the decision-making process of black-box machine learning models. By wrapping SHAP around black-box models, analysts gain insight into the factors that drive the models' pass-or-fail predictions and into the relative importance of the performance data. To validate the approach, extensive load-testing experiments were conducted on a real-world testbed using industry-standard benchmarks and manually injected performance bugs. The results show that the approach significantly improves the trustworthiness of machine learning models by explaining their decision-making. Furthermore, the approach can be applied across various domains and requires minimal effort to operate, demonstrating its generalizability and practicality.
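A minimal sketch of the general idea described in the abstract, not the authors' exact pipeline: a black-box classifier is trained on performance-test metrics to predict pass or fail, and a model-agnostic SHAP explainer is wrapped around it to attribute each prediction to the input metrics. The feature names, synthetic data, and choice of model are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["cpu_util", "mem_util", "resp_time_ms", "throughput_rps"]

# Synthetic performance-test samples: each row is one test run, label 1 = fail.
X = rng.random((200, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] > 1.0).astype(int)  # hypothetical failures driven by latency + CPU

# Black-box model predicting pass (0) or fail (1) for a test run.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic SHAP explainer acting as a surrogate around the black box:
# it attributes each pass-or-fail prediction to the input performance metrics.
explainer = shap.KernelExplainer(
    lambda d: model.predict_proba(d)[:, 1],  # probability of "fail"
    shap.sample(X, 50),                      # background sample for the explainer
)
shap_values = explainer.shap_values(X[:5])

# Relative importance of each metric for the first explained test run.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

In this sketch the sign and magnitude of each SHAP value indicate how strongly a given metric pushed the prediction toward fail or pass, which is the kind of explanatory output the paper argues builds analyst trust.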
Decoding Performance Testing Results: Empowering Trust with Explainable Artificial Intelligence (XAI)
2023-08-28
1,939,973 bytes
Conference paper
Electronic Resource
English