Adversarial ELF Malware Detection Method Using Model Interpretation

Yanchen Qiao, Weizhe Zhang, Zhicheng Tian, Laurence T. Yang, Yang Liu, Mamoun Alazab

    Research output: Contribution to journal › Article › peer-review

    7 Citations (Scopus)

    Abstract

    Recent research shows that executable and linkable format (ELF) malware detection models based on deep learning are vulnerable to adversarial attacks. The method most commonly used in previous work to defend against adversarial examples is adversarial training. Nevertheless, it is inefficient and effective only against specific adversarial attacks. Given that the perturbation-byte insertion positions of existing adversarial malware generation methods are relatively fixed, we propose a new method to detect adversarial ELF malware. Using model interpretation techniques, we analyze the decision-making basis of the malware detection model and extract the features of adversarial examples. We then use anomaly detection techniques to identify adversarial examples. As an add-on module to the malware detection model, the proposed method does not require modifying or retraining the original model. Evaluation results show that the method can effectively defend against adversarial attacks on the malware detection model.
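    The pipeline the abstract describes, extracting interpretation-based features from the detector and then applying anomaly detection, could be sketched roughly as follows. All names, the toy attribution function, the region count, and the choice of IsolationForest are illustrative assumptions, not the paper's actual implementation:

    ```python
    # Hypothetical sketch: per-region attribution features from a model
    # interpretation step, then an anomaly detector trained on clean
    # examples flags adversarial ones. The attribution function below is
    # a synthetic stand-in for a real interpretation method (e.g. saliency).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    N_REGIONS = 16  # e.g. fixed byte regions of an ELF file (assumed granularity)

    def attribution_profile(adversarial=False):
        """Toy stand-in for interpreting the detector on one sample:
        returns the mean attribution per file region."""
        profile = rng.normal(0.0, 0.1, N_REGIONS)
        if adversarial:
            # Perturbation bytes are inserted at relatively fixed positions,
            # which concentrates attribution mass on those regions.
            profile[[3, 7]] += 1.5
        return profile

    # Fit the anomaly detector on attribution profiles of clean examples only,
    # so the original malware detector needs no modification or retraining.
    clean = np.stack([attribution_profile() for _ in range(200)])
    detector = IsolationForest(random_state=0).fit(clean)

    # predict() returns -1 for flagged (adversarial-looking) inputs, 1 otherwise.
    test_adv = attribution_profile(adversarial=True).reshape(1, -1)
    ```

    The add-on nature of the method is reflected here: the anomaly detector consumes only interpretation outputs, leaving the underlying classifier untouched.
    
    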

    Original language: English
    Pages (from-to): 605-615
    Number of pages: 11
    Journal: IEEE Transactions on Industrial Informatics
    Volume: 19
    Issue number: 1
    Publication status: Published - 1 Jan 2023

    Bibliographical note

    Funding Information:
    This work was supported in part by the Major Key Project of PCL under Grant PCL2021A02, in part by the Key-Area Research and Development Program of Guangdong Province under Grant 2020B0101360001, and in part by the National Natural Science Foundation of China under Grant 62102202.

    Publisher Copyright:
    © 2005-2012 IEEE.
