The Dark Side of AI: The Rise of WormGPT and Cybercrime
Unmasking the Dangerous Intersection of Advanced Technology and Modern Cybercrime
By: Mack Jackson Jr.
Do you know how superhero movies sometimes show villains using cool tech for terrible things? Well, something similar is happening in the real world. Cybercriminals have started using advanced artificial intelligence (AI) to do their dirty work. This technology is called generative AI, and it's being used very sneakily.
Imagine you get an email that seems to be from your boss or a friend, but it's actually from a criminal pretending to be them. That's what a tool named WormGPT makes possible. It's like a digital villain that helps criminals send out convincing fake emails to trick people, according to SlashNext, a cybersecurity research firm.
Daniel Kelley, a cybersecurity researcher, warns that the software is like a malicious version of AI models such as GPT. Cybercriminals use it to automate the creation of fake emails that look convincing because they are personalized for each recipient, which increases the chances that people will fall for the scam.
The creator of WormGPT even bragged about it, calling it the "biggest enemy" of a well-known AI called ChatGPT, which is usually used for good. WormGPT is based on the GPT-J language model developed by EleutherAI.
AI technology has always been a double-edged sword. While ChatGPT and Google Bard, another AI, are trying to prevent their tools from being misused to create fake emails and harmful code, WormGPT shows how bad guys can turn the same technology into a weapon.
According to a recent report by Check Point, an Israeli cybersecurity firm, Google Bard is easier to misuse than ChatGPT. This past February, the same firm revealed that cybercriminals were exploiting weaknesses in ChatGPT's protective measures to carry out all sorts of illegal activities.
WormGPT is particularly scary because it doesn't play by the rules. It means that even inexperienced cybercriminals can quickly launch large-scale attacks without needing much technical knowledge or money. It's like handing a magic wand to a villain.
Bad guys have also figured out how to "trick" ChatGPT into creating content that could reveal sensitive information or produce harmful code. Generative AI is so good at creating realistic emails that they often slip past spam filters.
Kelley explained that this technology makes it easier for more cybercriminals to carry out sophisticated scams. Even those without technical skills can use it, putting more people at risk.
In another troubling development, researchers from Mithril Security found that an existing AI model named GPT-J-6B could be altered to spread false information. They then uploaded the altered model to a public platform where other apps could use it. This method, called "LLM supply chain poisoning," succeeds when the tampered model is uploaded under a name that appears to belong to a recognized company. The researchers dubbed their demonstration PoisonGPT.
All these incidents show that as AI technology evolves, so do the threats. It's like a digital game of cat and mouse. So, always be cautious about the emails you receive and the information you share online.
As AI technologies like WormGPT evolve, cybercrime becomes more sophisticated. We must remain vigilant against these threats. Always be skeptical of unexpected emails, even if they appear to come from familiar sources. Never share sensitive information, like passwords or financial details, without double-checking. Install trusted cybersecurity software and keep it updated to detect potential threats. Finally, continue learning about the latest cyber scams to stay ahead. We can minimize the risks these misused AI technologies pose by staying informed and maintaining a robust cyber defense.
Reference research for this article: WormGPT