Adversarial ELF Malware Detection Method Using Model Interpretation

Yanchen Qiao, Weizhe Zhang, Zhicheng Tian, Laurence T. Yang, Yang Liu, Mamoun Alazab

Research output: Contribution to journal › Article › peer-review


Recent research shows that executable and linkable format (ELF) malware detection models based on deep learning are vulnerable to adversarial attacks. The most commonly used defense in previous work is adversarial training. Nevertheless, it is inefficient and effective only against specific adversarial attacks. Given that the perturbation-byte insertion positions used by existing adversarial malware generation methods are relatively fixed, we propose a new method to detect adversarial ELF malware. Using model interpretation techniques, we analyze the decision-making basis of the malware detection model and extract the features of adversarial examples. We then use anomaly detection techniques to identify adversarial examples. As an add-on module to the malware detection model, the proposed method requires neither modifying nor retraining the original model. Evaluation results show that the method can effectively defend against adversarial attacks on the malware detection model.
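The abstract's pipeline can be sketched in miniature: since perturbation bytes tend to be inserted at relatively fixed positions (e.g., an appended overlay or padding gaps), a sample whose per-byte attribution mass concentrates in those regions is suspicious. The sketch below is an illustration only, not the authors' implementation: the attribution vectors are simulated (in practice they would come from a model interpretation method applied to the detector), and the candidate regions and the mean-plus-k-sigma anomaly threshold are assumptions made for this example.

```python
# Illustrative sketch of "interpretation features + anomaly detection".
# Attributions are simulated stand-ins for per-byte interpretation scores;
# the region list and threshold rule are assumptions, not the paper's exact method.
import statistics

def region_mass(attributions, regions):
    """Fraction of total attribution mass inside candidate insertion
    regions (e.g., appended overlay, section padding)."""
    total = sum(abs(a) for a in attributions) or 1.0
    inside = sum(abs(attributions[i]) for lo, hi in regions for i in range(lo, hi))
    return inside / total

def fit_threshold(clean_scores, k=3.0):
    """Anomaly threshold learned from known-clean samples: mean + k*std."""
    return statistics.fmean(clean_scores) + k * statistics.pstdev(clean_scores)

def is_adversarial(attributions, regions, threshold):
    """Flag a sample whose attribution mass in fixed regions is anomalous."""
    return region_mass(attributions, regions) > threshold

# Toy demo: 100-byte samples; bytes 90..100 form a fixed insertion region.
regions = [(90, 100)]
clean = [[1.0] * 100 for _ in range(20)]   # attribution spread evenly
adv = [0.1] * 90 + [5.0] * 10              # mass piled into the region
thr = fit_threshold([region_mass(a, regions) for a in clean])
print(is_adversarial(adv, regions, thr))   # prints True
```

Because the detector sits entirely on top of the interpretation output, this kind of check can be bolted onto an existing model without retraining it, which matches the add-on design described in the abstract.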

Original language: English
Pages (from-to): 605-615
Number of pages: 11
Journal: IEEE Transactions on Industrial Informatics
Issue number: 1
Publication status: Published - 1 Jan 2023
