In recent years, The Aerospace Corporation has been developing machine learning systems to detect cyber anomalies in space system command and telemetry streams. To enable the use of deep learning in such high-consequence environments, however, the models must be trustworthy. One aspect of trust is a model’s ability to accurately quantify the uncertainty of its predictions. Although many deep learning models output what appear to be confidence scores, academic research has repeatedly shown that models often report high confidence even when they are very wrong and cannot diagnose or respond appropriately to out-of-distribution inputs. This can result in catastrophic overconfidence when models face adversarial inputs or concept drift. Even on routine inputs, without reliable uncertainty quantification, effective human-machine teaming is difficult because humans cannot trust the model’s reported confidence score. In short, all models are wrong sometimes, but models that know when they are wrong are considerably more useful. To this end, The Aerospace Corporation conducted a literature review and implemented current state-of-the-art methods for accurately quantifying the uncertainty of deep learning model predictions, including deep ensembles and temperature scaling for confidence calibration. We further incorporated and tested these techniques within the existing cyber defense model framework to produce more trustworthy cyber anomaly detection models. We show that these techniques are not only successful but also easy to implement, extensible to many applications and machine learning model variants, and interpretable to a wide audience. Based on these results, Aerospace recommends broader adoption of such techniques in high-consequence environments.
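
As an illustrative sketch of the two calibration techniques named in the abstract, the Python fragment below shows standard temperature scaling (fitting a single scalar T on held-out validation logits by minimizing negative log-likelihood, per Guo et al., 2017) and deep-ensemble probability averaging. The function names and data shapes here are assumptions for illustration, not details of the Aerospace implementation.

```python
# Hypothetical sketch of temperature scaling and deep-ensemble averaging;
# names and shapes are illustrative, not taken from the Aerospace system.
import numpy as np
from scipy.optimize import minimize_scalar


def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def fit_temperature(val_logits, val_labels):
    """Fit a scalar T > 0 on held-out validation logits by minimizing
    negative log-likelihood; accuracy is unchanged, only the confidence
    scores are rescaled."""
    def nll(T):
        probs = softmax(val_logits / T)
        return -np.mean(np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12))
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x


def calibrated_probs(test_logits, T):
    """Apply the fitted temperature before the softmax at inference time."""
    return softmax(test_logits / T)


def ensemble_probs(member_logits):
    """Deep ensemble: average softmax outputs of independently trained
    members (member_logits shape: [n_members, n_samples, n_classes])."""
    return softmax(member_logits).mean(axis=0)
```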





    Title:

    Uncertainty Quantification for Trusted Machine Learning in Space System Cyber Security


    Contributors:


    Publication date:

    2021-07-01


    Format / Extent:

    520831 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Leveraging Field Programmable Gate Arrays for Trusted Space Cyber Defense

    Cohen, Nicholas / Wheeler, Wayne A. / Betser, Joseph et al. | AIAA | 2018


    Leveraging Field-Programmable Gate Arrays (FPGAs) for Trusted Space Cyber Defense

    Cohen, Nicholas / Wheeler, Wayne A. / Betser, Joseph et al. | AIAA | 2017


    Security Technology of Trusted Internet of Things Based on Machine Learning

    Zheng, Weitao / Wu, Shulin / Cai, Yuxiang et al. | IEEE | 2022


    Space mission cyber-security risks

    Marin, Ricardo / Hernandez, Eduardo / Vivero, Julio et al. | AIAA | 2014


    Air Traffic Control System Cyber Security Using Humans and Machine Learning

    Atkins, Garett / Sampigethaya, Krishna | IEEE | 2023