Recent literature has shown that many machine learning models are vulnerable to adversarial attacks, but techniques also exist to improve their robustness. Here, we apply adversarial attack methods to generate adversarial features. These attacks target a neural network initially trained to classify malware programs in the labeled subset of the BIG 2015 dataset based on the malware's re-sampled and resized HEX codes, which constitute the aforementioned features. Furthermore, we investigate the creation of a more robust neural network via adversarial training.
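The abstract does not name the specific attack algorithms used; as a hedged illustration only, the sketch below uses an FGSM-style perturbation of the re-sampled HEX-code images and mixes the resulting adversarial features into training batches (adversarial training). All identifiers (fgsm_attack, adversarial_training_epoch, epsilon, model, loader, optimizer) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.05):
    """Illustrative FGSM step: perturb HEX-code images in the direction of
    the sign of the loss gradient to obtain adversarial features."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep inputs in a valid image range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.05):
    """One training epoch mixing clean and adversarial batches
    (a common adversarial-training formulation, assumed here)."""
    model.train()
    for x, y in loader:  # x: re-sampled HEX-code images, y: malware family labels
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()  # discard gradients accumulated by the attack
        loss = 0.5 * (F.cross_entropy(model(x), y) +
                      F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
```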
Investigating the Generation of Adversarial Malware Features and the Use of Adversarial Training
16.08.2021
1691251 bytes
Conference paper
Electronic resource
English
Adversarial Training of Variational Auto-encoders for High Fidelity Image Generation
ArXiv | 2018
A Search for Visual Features in Adversarial Networks
VDE-Verlag | 2020