Australia, like much of the world, has a problem with people crashing cars. A significant component of the road safety strategy down here boils down to speed, in particular fining motorists for exceeding the posted limit. Enforcement strategies differ state by state, but down south in Victoria, they’ve decided that at 3kph over the posted limit you’re fair game for a ticket. For international readers, that’s 2mph or in other words, about the width of your speedo needle. As part of the campaign to keep motorists’ eyes focused downwards on the dashboard, the Victorian government has been running a campaign known as Towards Zero which has a somewhat self-explanatory objective:
“Towards Zero is a vision for a future free of deaths and serious injuries on our roads”
Which, of course, is impossible. Whilst there are metal structures zooming around at speed, occasionally they’ll bump into other things. Now there are many things we can do to reduce the frequency of these incidents, but nothing will ever reduce deaths down to zero because driving carries with it inherent risks. In fact, there’s a term for this – “Normal Accidents” – which came from Charles Perrow’s book of the same name a few decades ago. Whilst none of us would willingly accept that the death of a loved one should be defined as “normal”, the premise is extremely important to understand:
“accidents are inevitable in extremely complex systems”
Which brings me to information security. We often see online risk being represented in the same way as our southern government approaches road safety, that is, that if we just did the right things we could eradicate failure. Entirely. But this attitude neglects the complex nature of both driving motor vehicles and writing software, which is why neither objective can ever completely succeed.
And computers are extremely complex systems; just think about what’s involved in the very practice of you reading this article: You’ve got a highly advanced machine on your desktop (or perhaps even held in your hand), which then talks (probably wirelessly) to a service provider who’s then streaming your requests to some other location on the planet (you probably don’t even know where), at which point it’s received by a web server which locates information stored in a database, parses it, formats it into HTML and then repeats the whole transport process in reverse. When you think about it like that, software is an unfathomably complex system and we’re never going to get all those parts working properly all the time. Some degree of failure – including security-related ones – is inevitable.
I love this quote from David Heinemeier Hansson, the creator of Ruby on Rails:
“Good software is uncommon because writing it is hard. In the abstract, we all know that it is hard. We talk incessantly about how it’s hard. And yet, we also collectively seem shocked — just shocked! — when the expectable happens and the software we’re exposed to or is working on turns out poor.”
Whether we’re driving around at speeds that humans were never designed for or building complex computer systems, some level of accidents will always be normal. There is no “zero” in either case, not whilst we want the conveniences they both provide. It’s why security pros get very uncomfortable when they see claims such as “bulletproof security” or “hack proof” because frankly, it’s the same claim as “uncrashable car”. Great idea, but it’s not going to happen.