AI Risk: We Can't Trust Critical Infrastructure to Artificial Intelligence--Yet

When it comes to critical infrastructure management, the AI risk is higher than the reward.

Is artificial intelligence (AI) the solution to all of our critical infrastructure management problems? Or, put another way, is AI reward worth the AI risk? Probably not. Here's why.

AI, of course, refers to the use of data-driven algorithms and machine learning to make automated decisions. Critical infrastructure means any kind of physical or virtual system that affects your health, well-being or safety. Power plants and hospitals are often given as examples of critical infrastructure. I like to use a more expansive definition that also includes devices like my home's smart thermostat--which plays an important role in my personal health and safety by keeping my house at a reasonable temperature.

As the technology has grown more sophisticated in recent years, folks have begun suggesting that AI play a larger role in managing critical infrastructure. The conversation has centered primarily around using AI to help deflect cyberattacks (which could themselves take advantage of AI to break security defenses), a growing concern for critical infrastructure.

To an extent, this is already happening. Modern SIEM (security information and event management) platforms make extensive use of AI. Any hospital, power plant or other organization that uses SIEM to help protect its IT networks is taking advantage of AI.

AI also plays some role in critical infrastructure management tasks beyond security. A power grid that uses software to automate load balancing is using AI, for example.

But when people talk today about the potential of AI for managing critical infrastructure, they are usually thinking of applications that involve more than traditional SIEM or monitoring. They want AI to automate management tasks fully. And that's not a great idea.

The Limitations of Artificial Intelligence

While AI has some role to play in helping to manage critical infrastructure, it's important to recognize the AI risk in this domain.

Indeed, AI risk and limitations can be stated simply: While modern AI tools can make the right decision most of the time, they can never be trusted to make the right decision all of the time. That is a big problem in situations where the wrong decision could result in a significant negative impact on health or safety.

To illustrate the consequences of this limitation, let me use my house's smart thermostat as an example. Most of the time, the thermostat does a great job of using AI to decide automatically how to control my furnace. When I leave the house, it turns the heat down. It turns the heat back up so that the house is comfortable by the time I'm expected home.

This is great except for the occasions where the thermostat's AI features cause it to make the wrong decision and I arrive home to a house that is only 58 degrees, or my furnace wastes fuel maintaining a cozy temperature for hours while no one is home.

These instances have led me to question whether having an AI-powered thermostat is worth it. Some days, I am convinced that I'd be safer and happier with a less-smart thermostat whose heating schedule could be configured manually, and that would not use data to make automated decisions.
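To make the contrast concrete, here is a minimal sketch in Python of the two approaches. It is purely illustrative: the schedule values, the 45-minute pre-heat window and the arrival-prediction input are assumptions of mine, not how any particular thermostat actually works.

```python
# Hypothetical sketch: a schedule-driven thermostat versus one that lets a
# learned model choose setpoints. All names and numbers are illustrative.

from datetime import datetime

MANUAL_SCHEDULE = {            # hour -> target temperature (degrees F), set by the homeowner
    6: 68, 9: 60, 17: 68, 22: 62,
}

def manual_setpoint(now: datetime) -> int:
    """Pick the most recent scheduled setpoint: predictable, never 'clever'."""
    hours = sorted(MANUAL_SCHEDULE)
    chosen = hours[-1]                      # before 6 a.m., fall back to the overnight setting
    for h in hours:
        if now.hour >= h:
            chosen = h
    return MANUAL_SCHEDULE[chosen]

def ai_setpoint(predicted_minutes_to_arrival: float) -> int:
    """Stand-in for the learned policy: pre-heat only if arrival looks imminent.

    If the arrival prediction is wrong, you get exactly the failure modes
    described above: a cold house, or a furnace heating an empty one.
    """
    return 68 if predicted_minutes_to_arrival <= 45 else 60

print(manual_setpoint(datetime(2023, 1, 10, 18, 30)))  # 68, straight from the schedule
print(ai_setpoint(predicted_minutes_to_arrival=240))   # 60; right only if the prediction is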

The same risks apply to public critical infrastructure. If AI is given too big a role in managing the infrastructure of organizations like hospitals, dangerous unintended results could occur.

This is what makes AI in applications like critical infrastructure so different from AI in other contexts. Writing an algorithm that makes product recommendations to visitors of your retail website based on their browsing history is one thing. If your algorithm is wrong some of the time, it's not a big deal.

Similarly, using AI to automate, say, firewall rules as part of a security strategy is also usually OK. Occasionally, your algorithm may make the wrong decision and cause legitimate traffic to be blocked. Or maybe a bad decision leaves a port open when it should be closed, and a breach occurs as a result. Both of these outcomes are bad, but this type of risk is acceptable in most cases as long as there are no consequences for health and well-being.
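As a rough illustration of what fully automated rule changes look like, here is a hypothetical sketch in which a model score alone decides whether traffic gets blocked. The scoring heuristic and the threshold are assumptions for the example, not any real security product's logic.

```python
# Hypothetical sketch of fully automated firewall decisions: a model score
# crosses a threshold and traffic is blocked with no human review.

BLOCK_THRESHOLD = 0.8

def anomaly_score(src_ip: str, requests_per_min: int) -> float:
    """Stand-in for a trained model; here just a crude request-rate heuristic."""
    return min(requests_per_min / 1000, 1.0)

def decide(src_ip: str, requests_per_min: int) -> str:
    score = anomaly_score(src_ip, requests_per_min)
    # The whole risk lives in this line: a false positive blocks legitimate
    # traffic, a false negative leaves the door open, and nobody is asked.
    return "BLOCK" if score >= BLOCK_THRESHOLD else "ALLOW"

print(decide("203.0.113.7", requests_per_min=1200))  # BLOCK
print(decide("198.51.100.4", requests_per_min=900))  # ALLOW, even if this one is the real attack
```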

What AI Can Do for Critical Infrastructure

This is not to say that AI has no role at all to play in managing critical infrastructure. It can assist human operators in making decisions, even if it should not make decisions itself in most cases.

In other words, if you manage the IT network of a hospital, by all means, take advantage of AI to help monitor your servers and balance network traffic. AI could also help on the security front in identifying potential vulnerabilities and bringing intrusions to admins' attention.

But if the response to an infrastructure monitoring or security incident could affect the safety or well-being of humans, you shouldn't trust AI to perform the response automatically for you.
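One way to picture that division of labor is the "recommend, don't execute" pattern sketched below. Everything here is illustrative; the field names and severity threshold are assumptions, not a specific SIEM's API. The idea is the point: the AI ranks findings and drafts a response, while anything touching safety-critical systems waits for an admin's sign-off.

```python
# Minimal sketch of the "assist, don't act" pattern: the model scores findings
# and proposes a response, but safety-critical actions require human approval.

from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    description: str
    severity: float          # model-assigned score in [0, 1]
    safety_critical: bool    # e.g., hosts tied to patient care or plant control

def propose_response(finding: Finding) -> str:
    return f"Isolate {finding.host}" if finding.severity >= 0.7 else f"Watch {finding.host}"

def handle(finding: Finding, human_approved: bool = False) -> str:
    action = propose_response(finding)
    if finding.safety_critical and not human_approved:
        return f"ALERT ONLY: recommend '{action}', awaiting admin sign-off"
    return f"EXECUTE: {action}"

print(handle(Finding("infusion-gw-01", "unusual outbound traffic", 0.92, safety_critical=True)))
print(handle(Finding("test-vm-17", "port scan observed", 0.93, safety_critical=False)))
```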

Conclusion

This is the hard reality of where AI stands today. Will we reach a future in which AI is sophisticated enough to be trusted with fully automated decisions? Perhaps, although it's hard to imagine that future arriving anytime soon.

For now, IT pros should make a sober assessment of AI. While it is exciting, has great potential to make admins' work easier, and can even fully replace human admins in some use cases, it can't be trusted to manage critical infrastructure on its own.
