The introduction of the Global Positioning System (GPS) has greatly transformed navigation services; however, GPS remains vulnerable to threats such as spoofing attacks. While machine learning (ML) methods have shown promise in detecting these attacks, they often lack interpretability, causing uncertainty about the reasons for classifying a signal as spoofed. Furthermore, ML methods have typically overlooked the underlying causal relationships among the features, which offer valuable insights for analysis and strategies to mitigate the effects of spoofing attacks. In this paper, we propose using a causality-based Shapley additive explanation method called asymmetric Shapley values (ASV) to understand why a signal has been detected as spoofed. By employing a deep neural network model, we achieve a high prediction accuracy of 0.993 in identifying spoofing attacks. Crucially, our ASV analysis reveals that disregarding the causal relationships among the feature variables may lead to misleading conclusions regarding the Shapley feature contributions.
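The abstract does not spell out how ASV differs mechanically from ordinary Shapley values, so the following is a minimal, illustrative Python sketch (not the authors' implementation): it averages marginal feature contributions only over the feature orderings that respect a set of assumed causal precedence pairs, which is the general asymmetric Shapley value construction. The function names (`asymmetric_shapley`, `value_fn`), the precedence pairs, and the toy linear payoff are hypothetical placeholders.

```python
import itertools
import numpy as np

def asymmetric_shapley(value_fn, n_features, precedence):
    """Sketch of asymmetric Shapley values (ASV).

    value_fn(S)  -- payoff of a feature subset S (set of indices),
                    e.g. the model's expected output given only S.
    precedence   -- iterable of (cause, effect) index pairs; each cause
                    must appear before its effect in every ordering used.
    """
    phi = np.zeros(n_features)
    # Keep only the orderings consistent with the causal precedence constraints.
    perms = [p for p in itertools.permutations(range(n_features))
             if all(p.index(a) < p.index(b) for a, b in precedence)]
    for p in perms:
        for pos, i in enumerate(p):
            pre = frozenset(p[:pos])
            # Marginal contribution of feature i given its predecessors in p.
            phi[i] += value_fn(pre | {i}) - value_fn(pre)
    return phi / len(perms)

if __name__ == "__main__":
    # Toy linear payoff: expected output when only features in S are known.
    weights = np.array([2.0, 1.0, 0.5])
    x = np.array([1.0, 3.0, -2.0])

    def value_fn(S):
        return sum(weights[i] * x[i] for i in S)

    # Assume feature 0 causally precedes features 1 and 2.
    print(asymmetric_shapley(value_fn, 3, [(0, 1), (0, 2)]))
```

With an empty precedence list the same routine reduces to the standard (symmetric) Shapley average over all orderings, which is how omitting causal knowledge can shift the attributed feature contributions.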
CASAD-GPS: Causal Shapley Additive Explanation for GPS Spoofing Attacks Detection
02.03.2024
5519555 byte
Conference paper
Electronic resource
English
Deep Learning Model for Crash Injury Severity Analysis Using Shapley Additive Explanation Values
Transportation Research Record | 2022
Intelligent Detection System for Spoofing and Jamming Attacks in UAVs
Springer Verlag | 2023