The war against cyber attackers isn't a fair battle. Companies have to defend against all attacks, while the attackers only have to get through once. And it's about to get much, much worse.
The same artificial intelligence technologies that power speech recognition, self-driving cars, and "deep fake" videos can be turned to other uses: creating viruses that morph faster than antivirus companies can keep up, writing phishing emails indistinguishable from real messages written by humans, and intelligently probing a data center's entire perimeter to find the smallest vulnerability and then using it to burrow in.
"We already know that a skilled and determined human attacker is the most difficult to catch," said Ryan Shaw, co-founder at Bionic, a Washington, DC-based cybersecurity startup. "However, much like defenders, our adversaries have a scaling problem -- there is only so much time and skill to go around."
Until now, attackers have been relying on mass distribution and sloppy security, he said.
"If attackers can write their code in a way that mimics the creativity and flexibility of a person, they will be able to replicate more sophisticated methods without any interaction and make life far more difficult for defenders," he said.
For example, after attackers get into a system they could observe network traffic and user behavior, and then customize their command-and-control traffic to mimic that activity.
Today, attackers would have to do that manually, so they focus only on the highest-value targets; the use of AI for these kinds of attacks is still rudimentary, according to Shaw.
"However, we expect to see more adversaries, especially those that are well funded, start to leverage these advanced tools and methods more frequently," he said.
Not only are the AI tools available as free, open source downloads, together with comprehensive, free, online training programs, but nation-state attackers like Russia and China have almost unlimited resources to develop these tools and make maximum use of them.
"There will be a flood of new malware that's constantly being upgraded," said Adam Kujawa, director of Malwarebytes Labs, adding that the cybersecurity industry will shift from humans versus humans to AI versus AI.
So Far, the Good Guys Are Ahead
AI and machine learning, in fact, are already being used by defenders to look for malware not based on signatures but on behavior, to identify hijacked accounts by looking for anomalous user activity, and to automatically spot unusual traffic to systems and applications.
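As a concrete illustration of the "anomalous user activity" approach described above, the hypothetical sketch below flags suspicious spikes in an account's daily login counts using a robust median-based outlier score. The threshold and the login-count feature are illustrative assumptions, not any vendor's actual method; production systems use far richer behavioral models.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Return indices of values that deviate sharply from the median.

    Uses the modified z-score (median absolute deviation), which is
    robust to the very outliers it is trying to detect.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        mad = 1  # avoid division by zero on near-constant data
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# A normally quiet account suddenly logs in 200 times in one day.
history = [3, 5, 4, 6, 2, 4, 5, 3, 200]
print(flag_anomalies(history))  # → [8]: only the spike is flagged
```

The median-based score is used here because a plain mean/standard-deviation test would let a single large spike inflate the baseline and mask itself.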
"At the end of the day, the technological skills and numbers usually are on the white hat side," Kujawa said.
However, the gap between the technology "haves" and "have nots" will narrow, and it's time for data center cybersecurity managers to make sure they're doing everything they can: reduce their attack surface as much as possible, put cutting-edge defenses in place, and replace time-consuming cybersecurity tasks with automation.
"Healthy networks are large, and will grow and change rapidly," said Mike Lloyd, CTO at the security firm RedSeal. "Human effort won’t scale – there are too many threats, too many changes, and too many network interactions."
Just as attackers will use AI and automation to continually probe for potential weak spots, defenders need to adopt automation.
Automated, intelligent tools can audit whether machines are set up and working as intended, Lloyd said, and whether their configurations are appropriate to the level of security required.
"The modern answer is to augment human teams, who focus on policy and strategy, with automation to focus on continuous validation of the forests of details," he said.
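The kind of continuous validation Lloyd describes can be sketched minimally as a drift check: humans define the intended baseline (the policy), and automation compares every host against it. The baseline keys and host data below are hypothetical examples, not any real product's schema.

```python
# Hypothetical baseline: the human-defined policy that automation enforces.
BASELINE = {
    "ssh_root_login": "no",
    "firewall": "enabled",
    "tls_min_version": "1.2",
}

def audit(actual):
    """Return (setting, expected, actual) tuples for every drift finding."""
    findings = []
    for key, expected in BASELINE.items():
        got = actual.get(key, "<missing>")
        if got != expected:
            findings.append((key, expected, got))
    return findings

# Example: one compliant host, one that has drifted from the baseline.
hosts = {
    "web-01": {"ssh_root_login": "no", "firewall": "enabled",
               "tls_min_version": "1.2"},
    "db-02":  {"ssh_root_login": "yes", "firewall": "enabled"},
}
for name, cfg in hosts.items():
    for setting, want, got in audit(cfg):
        print(f"{name}: {setting} expected {want!r}, found {got!r}")
```

Running the loop reports only db-02, which both overrides a setting and is missing one, leaving the "forest of details" to the machine and the policy decisions to people.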
First Signs of the Coming Wave of Bad AI
Last year there were several examples of automated and sophisticated attacks in the wild, said Justin Fier, director of cyber intelligence and analysis at Darktrace, a company that specializes in using AI in cyber defense. They include Trickbot malware, worming crypto-mining malware, and SquirtDanger, known as the "Swiss Army knife" of malware.
"Examples of using AI for malicious purposes have already emerged," said Leigh-Anne Galloway, cybersecurity resilience lead at computer and network security focused Positive Technologies.
That includes using AI to evade defensive systems and to analyze the results of mass scans, she said.
"Cybercrime is becoming more and more technologically advanced, and there is no doubt that we will witness the bad guys employing AI in various additional sophisticated scenarios," she said.
It's not just data centers and enterprises at risk, but the entire political fabric of society, said Darktrace's Fier.
"As we begin to see AI-powered chatbots, and extensive influence peddling through social media, we face the prospect of the internet as a weapon to undermine trust and control public opinion," he said.
That has implications for security in both the private sector and in public discourse.
"Controlling data may soon become more important than stealing it," he said.