Generative AI for pentesting: The good, the bad, the ugly

Eric Hilario, Sami Azam, Jawahar Sundaram, Khwaja Imran Mohammed, Bharanidharan Shanmugam

Research output: Contribution to journal › Article › peer-review


Abstract

This paper examines the role of Generative AI (GenAI) and Large Language Models (LLMs) in penetration testing, exploring the benefits, challenges, and risks associated with cyber security applications. Generative AI can make penetration testing more creative, allow test environments to be customised, and support continuous learning and adaptation. We examined how GenAI (ChatGPT 3.5) assists penetration testers with options and suggestions during the five stages of penetration testing. The effectiveness of the GenAI tool was evaluated using a publicly available vulnerable machine from VulnHub. The tool responded rapidly at each stage and produced an improved pentesting report. In this article, we also discuss the potential risks, unintended consequences, and uncontrolled AI development associated with GenAI-assisted pentesting.

Original language: English
Pages (from-to): 2075-2097
Number of pages: 23
Journal: International Journal of Information Security
Volume: 23
Issue number: 3
Early online date: 2024
DOIs
Publication status: Published - Jun 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.
