Abstract
Recent research shows that deep learning-based detection models for Executable and Linkable Format (ELF) malware are vulnerable to adversarial attacks. The most common defense in prior work is adversarial training, but it is inefficient and effective only against specific adversarial attacks. Observing that existing adversarial malware generation methods insert perturbation bytes at relatively fixed positions, we propose a new method to detect adversarial ELF malware. Using model interpretation techniques, we analyze the decision-making basis of the malware detection model and extract the features of adversarial examples. We then apply anomaly detection techniques to identify adversarial examples. As an add-on module to the malware detection model, the proposed method requires neither modifying nor retraining the original model. Evaluation results show that the method effectively defends against adversarial attacks on the malware detection model.
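The abstract outlines a three-step pipeline: attribute the detector's decision to the input bytes, summarize the attribution pattern into features, and flag samples whose pattern is anomalous. The sketch below illustrates one way such a pipeline could look in Python; the paper does not name its interpretation or anomaly-detection algorithms here, so Captum's IntegratedGradients and scikit-learn's IsolationForest are stand-ins, and `attribution_features`, `fit_detector`, and `is_adversarial` are hypothetical helpers that assume a model scoring a float-valued byte sequence.

```python
# Illustrative sketch only: the concrete attribution method, feature
# construction, and anomaly detector are assumptions, not the paper's
# published design.
import numpy as np
import torch
from captum.attr import IntegratedGradients          # model interpretation
from sklearn.ensemble import IsolationForest         # anomaly detection

def attribution_features(model, sample_bytes, n_regions=16):
    """Attribute the 'malware' score to input bytes, then sum attributions
    over fixed regions so variable-length files map to a fixed-length vector."""
    x = torch.tensor(sample_bytes, dtype=torch.float32).unsqueeze(0)
    ig = IntegratedGradients(model)
    attr = ig.attribute(x, target=1).squeeze(0).detach().numpy()  # target=1: malware class
    regions = np.array_split(attr, n_regions)
    return np.array([r.sum() for r in regions])

def fit_detector(model, clean_samples):
    """Fit the anomaly detector on attribution features of non-adversarial
    samples; adversarial inserts should shift attribution toward fixed regions."""
    feats = np.stack([attribution_features(model, s) for s in clean_samples])
    return IsolationForest(contamination=0.05, random_state=0).fit(feats)

def is_adversarial(detector, model, sample_bytes):
    """Flag a sample as adversarial if its attribution profile is an outlier."""
    f = attribution_features(model, sample_bytes).reshape(1, -1)
    return detector.predict(f)[0] == -1  # IsolationForest returns -1 for anomalies
```

Because the detector operates only on attribution features, it sits beside the original classifier as an add-on module, consistent with the abstract's claim that no modification or retraining of the detection model is needed.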
| Original language | English |
|---|---|
| Pages (from-to) | 605-615 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Industrial Informatics |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2023 |
Bibliographical note
Funding Information: This work was supported in part by the Major Key Project of PCL under Grant PCL2021A02, in part by the Key-Area Research and Development Program of Guangdong Province under Grant 2020B0101360001, and in part by the National Natural Science Foundation of China under Grant 62102202.
Publisher Copyright:
© 2005-2012 IEEE.