It comes as no surprise that security operations centers are taking advantage of the latest technologies like AI and machine learning to defend against cyberattacks. However, what may be surprising is the extent to which marketing hyperbole has led organizations to expect AI to completely level the playing field in the SOC, rendering human analysts obsolete and leaving just AI to handle decisions about cyber threats.
In reality, AI is still a long way from replacing human intuition and making analytical judgments.
“It's not possible to replace [humans in] the SOC,” said Forrester principal analyst Allie Mellen. Mellen stressed that SOCs rely heavily on human input, requiring significant creativity and effort. Despite current advancements in generative AI, “it’s just not going to ever rise to the same level as what we will be able to get from security analysts.”
At the same time, Mellen acknowledged the valuable role that AI can play, particularly in supporting human analysts with investigation and response activities. For the foreseeable future, however, AI will not supplant humans in making the final decisions. On the plus side, though, she predicts generative AI will take over the writing of security reports, freeing up analysts to engage in more of “the cool, fun stuff.”
The Maturation of Generative AI
Despite the hype, generative AI is still a relatively young technology; some experts view it as early-release software. Although numerous uses and commercialized versions of generative AI are already available in popular web browsers, limitations and unforeseen consequences remain part of its development.
“Generative AI is a new technology that’s very rapidly evolving,” noted Gal Tal-Hochberg, group CTO at Team8, based in Tel Aviv. “What leads today may not lead in six to 12 months. Things that require a vendor today may be something an analyst can do by themselves tomorrow. As Gen AI is very intuitive, understanding where Gen AI can have the most impact in your organization is something line workers, and people at the edge, are well positioned to do.”
Tal-Hochberg cautioned that AI, be it in the SOC or general use, is still a rather generic term. “We need to differentiate between AI and generative AI. ‘AI’ as a word is oversold, mostly due to it being a marketing term more than anything. Is a rule engine ‘AI’? Is it a simple condition? AI has been around for so long that at this point we can just call it a ‘traditional technology’ and assume every product uses AI in some form.”
He further cautioned that while generative AI’s current progress is very promising, it remains uncertain just what uses will ultimately prove effective.
AI in Security Operations Centers
AI will have a growing presence in the SOC, noted Ron Konigsberg, co-founder and CTO of Gem Security, which provides SOCs with cloud-based automatic detection and response capabilities.
Konigsberg said that AI-based applications will expand the use of AI for tasks such as anomaly detection and threat hunting. This is consistent with broader industry trends: organizations are already using extended detection and response (XDR) applications in their corporate data centers, and many also employ managed detection and response (MDR) services from managed service providers and managed security service providers.
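To make the anomaly-detection idea concrete, here is a minimal sketch of the kind of statistical baseline such tooling builds on: flagging event volumes that deviate sharply from the norm. The data, threshold, and function name are invented for illustration; production systems use far richer models.

```python
# Illustrative sketch of baseline anomaly detection, a simplified
# stand-in for the AI-driven detection described above.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Return indices whose count deviates from the baseline
    by more than `threshold` standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# Hourly failed-login counts; hour 5 spikes far above the baseline.
hourly_failed_logins = [12, 9, 11, 10, 13, 240, 12, 10]
print(find_anomalies(hourly_failed_logins))  # [5]
```

A real SOC pipeline would replace the z-score with a learned model and feed alerts into a triage queue, but the explainability Konigsberg calls for starts with baselines this transparent.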
“At the top of the funnel specifically,” he noted, “the key for security operators will be to make sure that the AI products they use are actually explainable and understandable by the humans in the SOC.”
Taking that a step further, it will be essential to make these AI products understandable to boards and C-suite executives, who sign the checks for major software acquisitions.
AI’s Big Promise
One of the most significant challenges security teams face today is distinguishing practical AI capabilities from ambitious marketing promises.
Michelle Abraham, IDC research director of security and trust, advised caution, particularly when it comes to generative AI applications that support SOC analysts in writing reports. Abraham pointed out that these “AI assistants” are still in their early developmental stages, primarily focusing on identifying patterns and adhering to predefined rules. It is important to put guardrails around AI and machine learning applications and continuously monitor them. Organizations must make sure the applications stay on task and aren’t “slowly moving to where it's not giving the correct information anymore,” she said.
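One way to picture the guardrails and continuous monitoring Abraham recommends is an output check that rejects assistant-generated reports drifting from the expected shape. Everything here, field names, severity values, and the validation function, is a hypothetical example, not any vendor's actual mechanism.

```python
# Hypothetical guardrail sketch: validate an AI assistant's report
# before it reaches an analyst, so off-task output is caught early.
REQUIRED_FIELDS = {"incident_id", "severity", "summary"}
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def passes_guardrails(report: dict) -> bool:
    """Reject reports that drift from the expected structure."""
    if not REQUIRED_FIELDS.issubset(report):
        return False
    if report["severity"] not in ALLOWED_SEVERITIES:
        return False
    # A report citing no observed events is likely off task.
    return bool(report.get("events"))

good = {"incident_id": "INC-1", "severity": "high",
        "summary": "Brute-force attempt", "events": ["evt-9"]}
bad = {"incident_id": "INC-2", "severity": "urgent",
       "summary": "??", "events": []}
print(passes_guardrails(good), passes_guardrails(bad))  # True False
```

Structural checks like this are the simplest form of the monitoring Abraham describes; detecting subtler drift in answer quality requires ongoing evaluation against known-good cases.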
A potential risk associated with AI is that it can leak private data to the public, Abraham added. Such leakage can occur in public generative AI applications, but a company's private application could also expose data to a service provider or vendor if the company shares its private model with them.
Abraham noted that AI and ML in the SOC are shifting from vulnerability management toward exposure management. This entails using technology to identify misconfigured devices or networks, a likely direction for automating SOC security. By asking a plain-language question, such as, “Do I have any misconfigured devices?” or “Show me everything that was spun up yesterday,” analysts will be able to get straightforward responses from generative AI. This has the potential to greatly benefit SOC analysts, providing them with quick and concise answers to their questions.
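The plain-language interface described above can be sketched as a question router over an asset inventory. In a real product, a generative model would translate the question into a query; here a keyword match stands in for that step, and the device records are invented for the example.

```python
# Toy sketch of a natural-language asset query, illustrating the
# interface only -- a real system would use an LLM, not keywords.
DEVICES = [
    {"name": "web-01", "misconfigured": False, "created": "2024-05-01"},
    {"name": "db-02",  "misconfigured": True,  "created": "2024-05-02"},
    {"name": "vm-77",  "misconfigured": False, "created": "2024-05-02"},
]

def answer(question, yesterday="2024-05-02"):
    q = question.lower()
    if "misconfigured" in q:
        return [d["name"] for d in DEVICES if d["misconfigured"]]
    if "spun up yesterday" in q:
        return [d["name"] for d in DEVICES if d["created"] == yesterday]
    return "Question not understood."

print(answer("Do I have any misconfigured devices?"))        # ['db-02']
print(answer("Show me everything that was spun up yesterday"))
```

The value Abraham points to is exactly this: the analyst never writes the query, only the question.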
Ultimately, what matters most is getting the answers, whether they are delivered to the analyst through AI and ML or obtained by querying the system with predefined rules, Abraham said.