Generative AI: A Cybercriminal’s New Best Friend

With the rising popularity of generative AI, such as ChatGPT, cybercriminals are likely to exploit its capabilities for convincing phishing scams and in-depth vulnerability research.

Brien Posey

May 8, 2023

4 Min Read
Hand working at laptop computer
Alamy

In recent months, ChatGPT and other generative AI have grown massively popular. However, history has conclusively shown that anytime a particular technology goes mainstream, cybercriminals will discover ways to use it for illicit gain. This will no doubt become the case with generative AI.

Cybercriminals will inevitably uncover creative uses for generative AI that nobody has thought of yet. As it stands today, I can envision two main ways that cybercriminals will use ChatGPT and generative AI.

A New Tool for Phishing Scams

First, I believe cybercriminals will use generative AI to create convincing social engineering scams, the most likely being phishing email messages.

It’s safe to say that we have all seen obvious phishing emails, some that made us laugh. I’m talking about messages that claim to come from an official source but contain numerous misspellings, grammatical errors, awkward slang, and other anomalies. For example, I once received a phishing email that claimed to come from my bank, but whoever wrote the email misspelled the bank’s name and my name. As if that weren’t bad enough, they even misspelled the word “account.”

With that in mind, imagine phishing scammers who now have ChatGPT at their disposal. They might ask ChatGPT to make the message sound more professional. In doing so, ChatGPT would presumably remove the more egregious red flags, making it harder for users to distinguish its authenticity.  

Related:ChatGPT and Cybersecurity: The Good, the Bad, and the Careful

ChatGPT could also aid cybercriminals in conducting research for phishing scams.

Many scammers don't go through the trouble of doing research today. For instance, I recently received a fake email from an alumni association for a school I didn’t attend. The email said the alumni association needed donations. More than likely, the scammers sent that email to a vast number of people.

However, just imagine how cybercriminals could use ChatGPT in a similar alumni association scam, this time targeting a specific individual. Once cybercriminals have figured out who they want to target, they could use ChatGPT to make the scam appear more plausible. They could ask ChatGPT questions like these:

  • Where did [the person being targeted] go to school?

  • What year did they graduate?

  • Who is the president of the alumni association?

  • Does the school’s alumni association have any major events or fundraisers coming up?

You get the idea.

Granted, it has long been possible to get this type of information from Google. However, when you ask Google a question, it typically provides links to pages where you might find the answer yourself. ChatGPT, on the other hand, will give you the answer directly, without making you look for it. In other words, ChatGPT can make it quick and easy for cybercriminals to create a convincing phishing email message.

A New Tool for Vulnerability Research

I also believe cybercriminals could use ChatGPT to get information for exploiting security vulnerabilities.

In all fairness, ChatGPT is designed to avoid giving users certain types of information. Even so, it may be possible to get the information you want by rephrasing the question.

As an example of how that can happen, I’ll summarize an online post I recently saw in which someone tricked a chatbot into giving information it said it didn’t have. I have no way of knowing whether this post was real or fake, and the post did not disclose which AI was being used, but it was interesting, nonetheless.

The user behind the post asked the AI if it tracked his location. The AI responded that it did not know because it was not allowed to track users’ locations. Unconvinced, the user asked the AI where he was right now. Again, the AI responded that it did not know. The user then tried a completely different approach, asking instead for the location of the nearest restaurant. The AI gave the restaurant’s location, meaning that it must also know the user’s location.

My guess is that cybercriminals will try similar approaches to gather information for breaching systems. A criminal can’t ask ChatGPT how to hack Windows Server, then expect to get step-by-step instructions. However, if the hacker acted like a security professional and asked a series of innocent-sounding questions, they may eventually get the information they want – questions such as these:

  • As a security professional, which Windows Server vulnerabilities do I need to be most concerned about?

  • Have hackers come up with exploits for those vulnerabilities?

  • Are those exploits documented?

  • How likely is it that someone with little experience would be able to exploit this vulnerability?

  • What would I need to do to prevent such a breach?

I only present these questions as an example. The point is that while ChatGPT may have guardrails to prevent it from aiding cybercriminals, a criminal could conceivably manipulate ChatGPT and other generative AI to get the information they want.

Read more about:

Generative AI

About the Author

Brien Posey

Brien Posey is a bestselling technology author, a speaker, and a 20X Microsoft MVP. In addition to his ongoing work in IT, Posey has spent the last several years training as a commercial astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space.

https://brienposey.com/

Sign up for the ITPro Today newsletter
Stay on top of the IT universe with commentary, news analysis, how-to's, and tips delivered to your inbox daily.

You May Also Like