In 2023, the world witnessed the emergence of the first generative AI models explicitly designed for criminal purposes. Among the most notorious was WormGPT, which showcased its ability to assist hackers in creating malicious software. It was soon followed by WolfGPT and EscapeGPT. Recently, cybersecurity researchers uncovered a new AI tool—GhostGPT.
According to experts from Abnormal Security, GhostGPT likely relies on a jailbroken version of OpenAI’s ChatGPT or a similar open-source language model stripped of all ethical safeguards.
“GhostGPT, free from built-in security mechanisms, provides direct and unfiltered responses to dangerous queries that traditional AI systems would block or flag,” the company stated in a blog post dated January 23.
The developers of GhostGPT actively promote it as a tool with four defining features:
- Absence of censorship;
- High-speed data processing;
- No activity logging, which reduces the risk of leaving evidence behind;
- Ease of use.
The tool is accessible directly via a Telegram bot, making it particularly appealing to cybercriminals. Widely advertised on hacking forums, GhostGPT is primarily aimed at facilitating business email compromise (BEC) attacks.
Researchers from Abnormal Security tested GhostGPT by asking it to craft a phishing email impersonating DocuSign. The resulting message was highly convincing, demonstrating how effectively the tool can deceive potential victims.
Beyond generating phishing emails, GhostGPT can also be used to write malware and develop exploits.
One of the most significant threats posed by the tool is how far it lowers the barrier to entry for criminal activity. With the help of generative AI, fraudulent emails become more polished and harder to detect, which particularly benefits hackers whose native language is not English. GhostGPT also offers exceptional convenience and speed: users no longer need to jailbreak ChatGPT or configure an open-source model themselves. For a fixed fee, they gain instant access and can focus immediately on executing their attacks.