The “Autonomous SOC” Is a Pipe Dream

The autonomous SOC will never be a reality because technology simply cannot match human ingenuity.

Forrester Blog Network

October 27, 2022


Many vendors across the security industry share a vision: to deliver an autonomous security operations center (SOC) with their technology at the center. This idea is about as likely as me being able to join Starfleet and voyage with Captain Janeway in my lifetime. Why? First, let’s level-set on exactly what autonomous means:

denoting or performed by a device capable of operating without direct human control

The idea of a security system that you could literally set and forget is a compelling one — no more staffing shortage, no more security policies, no more breaches. Yet even in the physical world, we have not perfected the art of security by machines. Estimates suggest that there are more than 20 million private security workers globally. Why so many people? Because machines cannot observe, interpret, and react to the infinite variations of human decisions quickly, completely, and accurately in the physical world … or the digital.

Second, the expectations many have of automation do not match reality, for a few reasons. Automation in the SOC comes in one of two forms: manual process automation or automation built into security technologies. Manual process automation is limited for the following reasons:

  • As with physical security, humans are still mandatory, even for basic processes. Human/machine collaboration is a necessity for the automation used in global enterprises, especially with regard to security technologies. The classic example of automation technology built for security practitioners is SOAR, the security equivalent of basic digital process automation (DPA), which automates repeatable processes like sales orders, lead generation, and payroll. It is best used for simple, repeatable processes. But even this can go off the rails without human review, such as when employees are automatically offboarded. If checks are not in place to ensure that only employees who have actually left the firm are offboarded, you could end up in damaging situations…like a CEO losing access to all of their accounts.

  • Automation is not designed for complex systems that require resilience. Reaping the benefits of DPA requires a simple system, and unfortunately, the SOC isn’t one. The SOC faces highly inconsistent inputs in the form of constantly evolving threats, which yield inconsistent outputs. When automation is applied to a complex system made up of unpredictable inputs, it does the opposite of what we have come to expect from a simple system – it lowers consistency and quality, and it reduces resilience while increasing the risk of downtime. Imagine an assembly line that starts with the wrong base part – that one unexpected part can break the whole system.

  • Each added step in an automation chain narrows the scope of applicability. Say your security team wants to set up a SOAR playbook for a potentially malicious file, delivered via a phishing email, executing on an endpoint. The playbook: isolates the endpoint, checks the file’s reputation on VirusTotal, confirms the file is malicious, deletes the file, deletes the phishing email across the environment, and blocks the email’s sender. This can be a very advantageous playbook for a security team that regularly faces this scenario. However, it limits the scope of applicability, as a series of conditions must be met before the playbook is triggered. This works fine for consistent inputs, but when inputs are as dramatically inconsistent as they are in the SOC – constant new attacks, new technology, and new people – it stops being nearly as effective.

These are also some of the reasons practitioners have a hard time getting much value out of SOAR beyond 5–10 playbooks (most of which are focused on enrichment).
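To make the scope-narrowing point concrete, the phishing-response playbook above can be sketched in a few lines of Python. Everything here is illustrative: the `Alert` structure, the action names, and the reputation lookup are hypothetical stand-ins, not any real SOAR product’s API. Note how the very first condition silently excludes every alert that does not match the expected scenario.

```python
# Hypothetical sketch of the phishing-response playbook described above.
# All names (Alert, run_playbook, action strings) are illustrative.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str       # how the file arrived (e.g., "phishing_email")
    file_hash: str
    endpoint_id: str
    sender: str


def check_reputation(file_hash: str) -> bool:
    # Stand-in for a VirusTotal-style lookup; here, a toy blocklist.
    known_bad = {"deadbeef"}
    return file_hash in known_bad


def run_playbook(alert: Alert) -> list:
    actions = []
    # Precondition: the playbook only fires for this exact scenario.
    # Every added condition shrinks the set of alerts it can handle.
    if alert.source != "phishing_email":
        return actions  # out of scope; a human must triage instead
    actions.append("isolate:" + alert.endpoint_id)
    if check_reputation(alert.file_hash):
        actions.append("delete_file:" + alert.file_hash)
        actions.append("purge_email_from:" + alert.sender)
        actions.append("block_sender:" + alert.sender)
    else:
        # Inconclusive reputation: rather than act autonomously,
        # escalate to an analyst -- the human stays in the loop.
        actions.append("escalate_to_analyst")
    return actions
```

A novel attack that arrives via, say, a USB drive never enters the playbook at all, which is exactly the inconsistent-input problem the bullet describes.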

Lastly, automation built into security technologies is limited because:

  • Humans can always outsmart machines. Human attackers do not follow rules of engagement; they identify gaps in security technologies or even attack the security tech itself. In contrast, security tools must follow a set of rules – they are built with an intention in mind, whether it’s to detect threats on the endpoint or to find anomalies in otherwise consistent data. These constraints impose a limitation on technology that cannot be overcome without the aid of humans. If an organization uses EDR, an attacker will find a way to bypass it or avoid targeting endpoints altogether. If an organization collects logs from every single asset into a SIEM, an attacker will find a vulnerable employee to leverage for covert access. Technology will always be limited by the purpose it was designed for and will always lack the creativity and scope to address every potential threat.

Some come back to this last point and say, "well actually, Deep Blue beat world champion Garry Kasparov at chess back in 1997!" And that is true. But it misses the point. Chess is a game based on a finite set of rules both sides agree to follow. Machine learning is built within a particular set of constraints and optimizes based on those constraints – so chess would actually be a good game for ML or AI to excel at.

Security is different. If anything, attackers like breaking rules, especially the very rules our technology is structured around. The “Autonomous SOC” will not be able to operate beyond the constraints we define and thus will always be susceptible to attack and pose its own risk to the organization. We cannot be constrained by the rules we build into our technology if we want a robust defense. The autonomous SOC will never be a reality because technology simply cannot match human ingenuity.

There’s more to the Machine Learning aspects of this problem that I’ll be digging into in future research and a blog post. In the meantime, I’ll be speaking on this topic at Forrester’s Security and Risk Forum 2022. Come join us in DC or the virtual experience and hear my talk on The Truth Behind ML’s Madness: How AI Is Actually Used In Detection And Response.

This article originally appeared on Forrester's Featured Blogs
