Generative AI for pentesting: The good, the bad, the ugly

Eric Hilario, Sami Azam, Jawahar Sundaram, Khwaja Imran Mohammed, Bharanidharan Shanmugam

Research output: Contribution to journal › Article › peer-review



This paper examines the role of Generative AI (GenAI) and Large Language Models (LLMs) in penetration testing, exploring the benefits, challenges, and risks associated with cyber-security applications. Through the use of generative artificial intelligence, penetration testing becomes more creative, test environments are customised, and continuous learning and adaptation are achieved. We examined how GenAI (ChatGPT 3.5) assists penetration testers with options and suggestions during the five stages of penetration testing. The effectiveness of the GenAI tool was tested using a publicly available vulnerable machine from VulnHub. The tool responded remarkably quickly at each stage and produced an improved penetration-testing report. In this article, we discuss the potential risks, unintended consequences, and dangers of uncontrolled AI development associated with pentesting.

Original language: English
Pages (from-to): 1-23
Number of pages: 23
Journal: International Journal of Information Security
Publication status: E-pub ahead of print - 2024


