I have often said that the biggest vulnerability in any organization is the end user. As important as it is to harden servers and other computing and networking assets, a user can nullify an organization’s security efforts with just a couple of errant mouse clicks. Email is an especially mistake-prone area, and organizations have adopted a variety of email security services and products in an attempt to ensure that mistakes don't get made, or that the mistakes that do occur are not catastrophic. So far, no one has built an email security service or product that is 100% effective, but artificial intelligence may get us closer than we have ever been before.
The battle to keep malicious messages out of users’ mailboxes has historically involved a series of moves and countermoves. First-generation filtering solutions were largely signature-based. These products would cross-reference messages and their attachments against massive databases of files and messages that were known to be malicious. It didn’t take the bad actors long, however, to figure out that signature-based filtering could easily be circumvented by making slight changes to a message, thereby altering its signature.
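As a rough illustration, a signature check can be reduced to hashing a payload and looking the hash up in a database of known-bad hashes. The sample payloads and the "database" below are invented for illustration, but the sketch shows why changing even a single byte defeats the scheme:

```python
import hashlib

def signature(payload: bytes) -> str:
    """Compute a file signature as a SHA-256 hex digest."""
    return hashlib.sha256(payload).hexdigest()

# Toy signature database built from a sample of known-bad payloads.
known_bad = {signature(b"malicious attachment v1")}

def is_flagged(payload: bytes) -> bool:
    """Signature-based check: flag only exact matches against the database."""
    return signature(payload) in known_bad

# An exact copy of a known-bad payload matches the database...
assert is_flagged(b"malicious attachment v1")
# ...but a trivially modified variant produces a completely different hash
# and sails past the filter.
assert not is_flagged(b"malicious attachment v2")
```

This is why signature databases had to grow constantly while attackers only had to mutate their payloads.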
Similar evasion tactics are still in use today. Although signature-based message detection has largely fallen out of favor, there are scanning engines that compare the hyperlinks found within email messages against a database of links that are known to be malicious. To get around this, the creators of malicious messages have adopted a technique in which each message they send contains a unique URL. If a user clicks one of these URLs, the browser is redirected to a malicious site.
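A toy sketch of why per-message unique URLs defeat an exact-match link blocklist, and why matching on the hostname instead (one possible countermeasure) still catches them. The domains and URLs here are made up:

```python
from urllib.parse import urlparse

# Toy blocklist of full URLs that have already been reported as malicious.
url_blocklist = {"http://evil.example/login"}

# Blocking at the domain level instead catches per-message unique paths.
domain_blocklist = {"evil.example"}

def blocked_by_url(url: str) -> bool:
    """Exact-match lookup against known-bad URLs."""
    return url in url_blocklist

def blocked_by_domain(url: str) -> bool:
    """Hostname-level lookup against known-bad domains."""
    return urlparse(url).hostname in domain_blocklist

# A unique, never-before-seen URL slips past the exact-match lookup...
assert not blocked_by_url("http://evil.example/login?id=a1b2c3")
# ...but a domain-level check still catches it.
assert blocked_by_domain("http://evil.example/login?id=a1b2c3")
```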
Most current-generation message filtering solutions also require an administrator to tune the filter’s aggressiveness. If the filter is too aggressive, false positives occur and users fail to receive legitimate messages. A more relaxed filtering policy, on the other hand, may allow malicious messages to slip through the filter and make their way into users’ inboxes. Either outcome is problematic for obvious reasons.
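The tuning trade-off can be sketched as a single score threshold. The scores and threshold values below are arbitrary and purely illustrative:

```python
def classify(score: float, threshold: float) -> str:
    """Quarantine a message when its spam score meets the admin-set threshold."""
    return "quarantine" if score >= threshold else "deliver"

# Hypothetical scores: a legitimate newsletter (0.55) and a phish (0.70).
newsletter, phish = 0.55, 0.70

# An aggressive threshold catches the phish, but also quarantines the
# legitimate newsletter (a false positive).
assert classify(newsletter, threshold=0.5) == "quarantine"
assert classify(phish, threshold=0.5) == "quarantine"

# A relaxed threshold spares the newsletter, but delivers the phish
# (a false negative).
assert classify(newsletter, threshold=0.8) == "deliver"
assert classify(phish, threshold=0.8) == "deliver"
```

No single threshold separates the two messages cleanly, which is exactly the bind administrators find themselves in.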
Message analytics are being put into play to help determine whether or not a message is malicious. There is an old saying that if an animal walks like a duck, quacks like a duck and swims like a duck, then it is probably a duck. This saying accurately describes the way that message analytics work, and reminds me of something that I once overheard while passing through an airport. A guy was talking on the phone and said something to the effect of, “Well, a spear phishing message is hard to describe, but I know one when I see it.”
The idea behind both of these phrases (at least as it applies to message analytics) is that although a filtering engine might not truly understand what a phishing message is, there are certain characteristics that are commonly found in malicious messages but rarely found in legitimate messages. These characteristics can be used to help determine whether a message is malicious.
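A minimal sketch of this "duck test" idea: score a message by counting how many suspicious characteristics it exhibits. The rules and the sample message below are entirely hypothetical, and a real analytics engine would weigh far more signals than this:

```python
# Hypothetical heuristics: each hit adds one point to a suspicion score.
RULES = [
    ("urgency wording in body",
     lambda m: "immediately" in m["body"].lower() or "urgent" in m["body"].lower()),
    ("display name absent from sender address",
     lambda m: m["display_name"].lower() not in m["from_addr"].lower()),
    ("link text hides a different URL",
     lambda m: any(text != href for text, href in m["links"])),
]

def suspicion_score(message: dict) -> int:
    """Count how many heuristic rules the message trips."""
    return sum(1 for _, rule in RULES if rule(message))

# A made-up message exhibiting all three characteristics.
msg = {
    "body": "Please verify your account immediately.",
    "display_name": "PayPal",
    "from_addr": "billing@pay-pai-support.example",
    "links": [("https://paypal.com/verify", "http://pay-pai-support.example/x")],
}
```

A filter built this way never "understands" phishing; it only observes that a message quacks like one, and a score above some threshold triggers quarantine.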
The use of these and other message analytics techniques has been relatively effective in the fight against malicious email messages. The problem, however, is that bad actors actively work to develop techniques that can fool the detection filters.
Because of this, a number of mail security providers have begun to work on AI-based email security services and products. These AI solutions take on a variety of forms, but some are designed to solve what has thus far been a major shortcoming of message analytics solutions.
Historically, solutions for detecting malicious email messages have been largely reactive. In other words, a product might not realize that a certain link within a message is malicious until someone, somewhere clicks on the link. By contrast, some of the AI-based products available today attempt to scan the internet, looking for phishing sites. Knowledge of these sites can then be used to proactively detect phishing messages before anyone falls victim to them.
Another AI-based technique involves using AI to make visual comparisons between legitimate sites and suspected phishing sites. Early on, many phishing sites were obvious fakes, littered with spelling and grammatical errors. Today, though, there are phishing sites that almost perfectly mimic sites like Google.com or Microsoft Office 365.
AI-based scanning engines, such as Lookout Phishing AI, are designed to perform a visual analysis of legitimate sites and then use this knowledge to spot the subtle differences that exist within phishing sites.
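One way such a visual comparison can work is perceptual hashing: reduce a page screenshot to a small brightness grid, hash it, and compare hash distances, so that a near-identical fake lands close to the real page while an unrelated page lands far away. The sketch below uses hand-made 4x4 "images" rather than real screenshots, and it is not a description of how Lookout Phishing AI actually works; it only illustrates the general idea:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set when above mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A "legitimate page" and a near-perfect mimic (one pixel slightly dimmer).
legit = [[200, 200, 50, 50],
         [200, 200, 50, 50],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
phish = [[180, 200, 50, 50],
         [200, 200, 50, 50],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
# An unrelated page with the inverse layout.
other = [[50, 50, 200, 200],
         [50, 50, 200, 200],
         [200, 200, 50, 50],
         [200, 200, 50, 50]]

# The mimic hashes close to the original; the unrelated page does not.
assert hamming(average_hash(legit), average_hash(phish)) <= 2
assert hamming(average_hash(legit), average_hash(other)) >= 8
```

Production systems apply the same distance-based intuition to full screenshots, often with learned embeddings rather than a simple hash.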
Only time will tell whether these AI-based techniques will ultimately be effective, but one thing seems certain: The criminals will also be using AI in an effort to find weaknesses in these modern email security services and products.