Security alert fatigue isn’t new, but it has evolved over the years. In the early days of Windows Server, for example, Microsoft regularly advised its customers to configure the Windows event logs to log only what was absolutely necessary. The basis for this recommendation was that if left unchecked, Windows could create an overwhelming number of log entries, potentially depleting a server’s disk space in the process. While IT pros no longer have to deal with that particular issue, we do manage security monitoring tools that can produce so many alerts that it becomes almost impossible to separate important alerts from those that amount to meaningless noise.
Unfortunately, there is no one thing that you can do to completely eliminate security alert fatigue (short of getting rid of your monitoring tools). Based on my own experience, one of the most important things that you can do to reduce the number of alerts you receive is to take a hard look at all of your applications and your operating systems to ensure that they are configured according to publishers’ established best practices. Keep in mind that this is something that you should revisit periodically, because best practices tend to evolve.
It’s also important to make adjustments to security-related administrative settings and access control policies, but I would recommend taking things a step further. Sometimes application-level configuration settings that seem to be unrelated to security can have unforeseen security implications. Making sure that all of your software is configured in a way that strictly adheres to the publisher’s stated best practices can go a long way toward eliminating meaningless security alerts.
Here are some additional tips for fighting security alert fatigue based on my experience:
1. Disable features you don’t need.
Some tools use separate modules or management packs for each individual thing that they monitor. If you are using a tool that leverages such a design, then consider removing any components that are irrelevant to your organization. Doing so will simplify things and may reduce the volume of alerts that you receive.
2. Figure out where the alerts are coming from.
Another suggestion is to take some time to evaluate where the alerts that you are receiving are being generated in the first place. Again, not every tool has this capability, but some can tell you which detection rules spawned the most alerts. It could be that there are specific detection rules that are a poor fit for your organization. You may be able to significantly decrease the number of alerts that are being generated by simply disabling or modifying rules that have proven to be particularly problematic.
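If your tool can export its alert history but doesn’t rank rules for you, a few lines of scripting can fill the gap. The sketch below assumes a hypothetical CSV export with a `rule_id` column; the column names are placeholders, so adjust them to match whatever your tool actually produces.

```python
# Sketch: rank detection rules by alert volume from an exported alert log.
# Assumes a hypothetical CSV export containing a "rule_id" column; adjust
# the field name to match your tool's real export format.
import csv
from collections import Counter

def noisiest_rules(csv_path, top_n=10):
    """Return the top_n (rule_id, alert_count) pairs, noisiest first."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["rule_id"]] += 1
    return counts.most_common(top_n)
```

Running something like this against a month of exported alerts makes it obvious at a glance which handful of rules are responsible for most of the noise, and those are the rules worth tuning or disabling first.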
3. Use automation (even for situations that resolve themselves).
I have worked in IT for 30 years, and during that time I have seen any number of situations in which a problem went away on its own for some completely inexplicable reason. As IT pros, we like to imagine that we always know what is going on with the systems that have been entrusted to our care, but sometimes things happen that we just can’t explain.
I mention this because it is sometimes possible to use automation to cope with situations that resolve themselves. If you are using a rule-based alerting solution, you may be able to use automation to dismiss alerts for reported conditions that self-resolve (unless the self-resolving condition is persistent, in which case it probably points to a deeper issue). In any case, any alerts that the monitoring system can dismiss on its own are alerts that the IT staff doesn’t have to manually assess.
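The triage logic described above can be sketched in a few lines. Everything here is hypothetical, not taken from any real alerting product: the condition IDs, the re-check mechanism, and the recurrence threshold are all assumptions you would tune for your own environment.

```python
# Sketch: auto-dismiss alerts whose condition clears on re-check, but flag
# conditions that keep self-resolving as a sign of a deeper issue.
# All names and thresholds are illustrative assumptions.
from collections import Counter

RECURRENCE_THRESHOLD = 3  # self-resolutions before we escalate (assumed value)

class AlertTriager:
    def __init__(self):
        self.resolutions = Counter()  # how often each condition self-resolved

    def handle(self, condition_id, still_active):
        """Decide what to do with an alert after its condition is re-checked."""
        if still_active:
            return "escalate"        # condition persists: a human should look
        self.resolutions[condition_id] += 1
        if self.resolutions[condition_id] >= RECURRENCE_THRESHOLD:
            return "investigate"     # keeps fixing itself: deeper issue
        return "auto-dismiss"        # cleared on its own: close quietly
```

The key design point is the recurrence counter: blindly auto-dismissing everything that self-resolves would hide exactly the kind of flapping condition that deserves attention.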
4. Check your security tools' configuration.
At the very beginning of this blog post, I mentioned that it’s important to make sure that all of your software is configured in a way that adheres to the publisher’s best practices. In doing so, don’t forget to check your alerting tool’s configuration. Under the wrong circumstances, a poorly configured alerting tool can actually cause its own false positives. It’s a long story, but I once saw a security tool that was set up to send email notifications. The tool was misconfigured, and it interpreted its own failed attempts to send messages as an attack on the organization’s mail system.
Security alert fatigue will likely never go away, and the problem keeps evolving. IT pros should implement existing best practices, but always be on the lookout for ways that best practices can and should change over time.