While 2022 marked the launch of OpenAI's generative AI LLM, ChatGPT, it was in 2023 that the tool gained mainstream adoption. It did not take long for cybersecurity experts to point out the hazards of such a tool being repurposed by malicious actors for their own interests. That eventuality has now turned into an actual risk.
Let me introduce you to WormGPT, the dark side of ChatGPT.
Decoding WormGPT
In a nutshell, WormGPT is a generative AI LLM like ChatGPT, Microsoft Copilot, Google PaLM/Bard, Anthropic Claude 2, or the various open-source LLM alternatives, but built for malicious purposes. Offered without any of the ethical boundaries or limitations of legitimate LLMs, it is used (and even advertised!) to produce very convincing phishing emails and to help malicious actors carry out business email compromise (BEC) attacks. It is also reported to play an integral part in creating malware in Python.
In short, WormGPT can easily produce the following:
- Malicious code in Python
- Phishing emails personalized to the recipient
- Code for BEC attacks
- Fluent text in foreign languages, helping attackers avoid the spelling and grammar errors that often give phishing away
- Tips on crafting malicious attacks and guidance on illegal cyber activities
WormGPT is based on GPT-J, an open-source language model, and was specifically trained and specialized on malware-related datasets.
Accessing WormGPT: Is it easily available?
Fortunately, this dangerous tool is not freely available. In fact, it is quite expensive: approximately USD 60 per month or USD 550 for an annual subscription [1]. Compare that to the cheaper USD 20 per month ChatGPT Plus subscription, which also gives access to the GPT-4 model along with plugins and browsing, and we may have a clear winner.
Also, some WormGPT buyers have complained of weak performance.
Now, while the subscription price might deter some script kiddies, it may still look attractive to professional hackers who already make a living from their activities and want to automate and optimize them. Those actors will in turn produce more tools, faster and cheaper, for the lower-end crowd that cannot afford the USD 60 monthly subscription.
However, SlashNext, which tested it, shared some interesting insights: “The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks” [2].
Early warnings
As the ChatGPT LLM took over the market, both the US Federal Trade Commission (FTC) and the UK Information Commissioner's Office (ICO) raised a red flag over the data privacy and protection problems posed by OpenAI's tool. The EU's law enforcement agency, Europol, had also warned about the impact of LLMs on law enforcement, flagging the risk of seeing dark LLMs develop. It even considered that dark LLMs “may become a key criminal business model of the future. This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge”.
Unfortunately, this is not surprising, and it may only be the first step towards the democratization of such tools: they will probably become cheaper and better over time.
Dark LLMs still have bright days ahead of them
There are many ways to fool a victim, and the easiest and fastest is to automate the process. In the near future (some of this might already be possible), it is easy to imagine what dark LLMs could do:
- Generate convincing fake content for fake news/disinformation, including media like images, videos, and even voice samples
- Produce content mimicking the specific writing styles of bloggers, journalists, or public figures in order to impersonate them online
- Create toxic, abusive, or offensive text/images on demand
- Produce fake legal documents
- Automate the creation of fake websites for phishing attacks
Dark LLMs could go even darker
LLMs, and AIs in general, are algorithms that generate an output. If trained on the right dataset, such a model could well be instructed to find weaknesses in encryption keys.
This refers to keys generated with low entropy, in the sense of Shannon's information theory, that show partial repetition or predictable patterns.
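As a reminder, Shannon entropy quantifies the unpredictability of a source; the standard definition below, applied to key material read byte by byte, is the yardstick used in the rest of this section:

```latex
% Shannon entropy of a discrete source X with symbol probabilities p(x)
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
% For key bytes, a uniform generator reaches the maximum H = \log_2 256 = 8 bits
% per byte; anything measurably lower leaves statistical structure behind.
```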
Indeed, future AI models could potentially be trained on captured encrypted data (obtained through traffic copies or data leaks) to discover those patterns, if they exist. This would need three key components:
- Significant volumes of encrypted data produced with keys from the same key-generation source
- Available AI algorithms using deep learning for statistical pattern recognition
- Substantial compute and time (as for any AI/ML training)
This would help a model discover patterns and weaknesses tied to poorly generated encryption keys, and then, at inference time (that is, after training, when run against live or recent captures), have it generate specific candidate key sequences based on those identified patterns and test them against the captured data, rather than guessing randomly or brute-forcing.
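To make the entropy argument concrete, here is a minimal, purely defensive sketch; the function names are my own and the weak generator is a deliberately naive, hypothetical example. It estimates the byte-level Shannon entropy of key material and shows how far a patterned generator falls below the 8 bits per byte that a well-seeded source approaches:

```python
import math
import secrets
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (8.0 is the ideal for uniform data)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def weak_key_material(n_bytes: int = 4096) -> bytes:
    # Hypothetical low-entropy generator: endlessly repeats a short, predictable pattern.
    pattern = b"\x01\x02\x03\x04"
    return (pattern * (n_bytes // len(pattern) + 1))[:n_bytes]

def strong_key_material(n_bytes: int = 4096) -> bytes:
    # Cryptographically secure randomness, i.e. what an HSM with a TRNG aims to provide.
    return secrets.token_bytes(n_bytes)

if __name__ == "__main__":
    # A sufficiently large sample is needed for the empirical estimate to be meaningful.
    print(f"weak generator:   {byte_entropy(weak_key_material()):.2f} bits/byte")
    print(f"strong generator: {byte_entropy(strong_key_material()):.2f} bits/byte")
```

The gap between the two numbers is the point: any measurable shortfall from maximal entropy is structure, and structure is precisely what statistical models are good at learning.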
Of course, when encryption relies on strongly generated keys, such as those produced by a hardware security module (HSM) with a True Random Number Generator (TRNG), or soon with a Quantum Random Number Generator (QRNG), the risk is extremely low, close to non-existent, regardless of the AI model and its capabilities. Keys built from truly random secrets will not show any pattern.
Has the dark sun risen yet?
While it was no surprise that LLM tools would be emulated and twisted to serve the malicious interests of some actors, the dark LLM phenomenon may only be in its infancy. I wonder whether quantum computers will one day make training such LLMs easier, cheaper, or more accessible.
References and sources
- [1] Available sources are not unanimous: 60 EUR per month and 550 EUR per year (PCMag), versus 60 USD and 700 USD (ZDNet)
- [2] https://uk.pcmag.com/ai/147755/wormgpt-is-a-chatgpt-alternative-with-no-ethical-boundaries-or-limitations
- https://www.csoonline.com/article/646441/wormgpt-a-generative-ai-tool-to-compromise-business-emails.html