
Why Cyberattacks Need to be Treated Like Air Disasters

What if companies were as transparent about cybersecurity events as airlines are about plane accidents?

Many years ago I made a decision that saved my life. Living in Sydney at the time, I deferred a planned trip back to Auckland, New Zealand, because it would fall close to my brother’s 21st birthday and I didn’t have the funds to make both trips. The purpose of that earlier trip was to join an Air New Zealand flight to Antarctica: a day trip that had proven immensely popular and that I very much wanted to make. I figured I could postpone it until the following year, by which time I hoped to have some savings again.

Of course, there never was another chance. On November 28, 1979, Air New Zealand Flight 901 flew into Mt Erebus on Ross Island. There were no survivors.

The airline accident report that was produced caused great controversy. It effectively accused the pilots of flying the aircraft into dense cloud and exonerated the airline. But a subsequent inquiry found that the pilots were blameless. An optical illusion common in polar regions led them to believe they had clear space in front of them with a ‘horizon’ stretching to infinity. They never saw the mountain. And – critically – navigation coordinates had been reprogrammed without their knowledge, causing them to believe they were safely 30 miles away from their true location.

So – apart from the fact that I wouldn’t be here telling you this had I taken that ill-fated flight – what has this to do with cyberattacks?

Well, let’s talk about airline accident reports a bit more. When a plane crashes, the standard procedure is that the responsible government organization in the country of origin investigates the accident and produces a detailed, publicly available report. In the U.S. this organization is the National Transportation Safety Board (NTSB).

The accident report attempts to determine why the accident occurred. Was it pilot error? Did a mechanical malfunction occur? Did the plane fly into a storm?

Most importantly, the report makes recommendations. After the terrible DC-10 crash at O’Hare in 1979, it was found that the airline was using a dangerous maintenance procedure that damaged the pylons securing the engines to the wing. Design deficiencies in the DC-10 itself also contributed to the crash.

The idea of this very public process is that each accident contributes knowledge by which, in theory, future accidents can be avoided. That it works can be seen in the steady, spectacular improvement in airline safety over the last few decades. Processes, training, and aircraft design are all influenced by these awful events, so that, with luck, the same mistakes are not repeated.

Now, although cyberattacks don’t (generally) cost lives, they do carry enormous costs, from the collective security of the victims whose sensitive personal details have been compromised, through to the reputation of the organizations who have been the targets.

It’s particularly ironic that a number of companies hit by cyberattacks over the years have themselves offered either security consulting services or security-focused products as part of their portfolio. Consequently, you’d expect them to be acutely aware of the importance of processes, training, and design, not to mention the use of case studies as a means of learning from the mistakes of others.

Notable examples include HBGary, who were compromised by members of Anonymous; RSA Security, who were compromised through a sophisticated ‘spear-phishing’ attack; and, most notoriously, Booz Allen, the former employer of a certain Mr. Snowden.

Then, last week, it was reported that Deloitte, who offer cybersecurity consulting services to clients, had large amounts of sensitive internal email exfiltrated by an unknown attacker, possibly over a period of several months.

Now the big difference between a cyberattack and an air disaster is that in most cases we never do get a public accident investigation report. There are rare exceptions, of course.

RSA were a particularly spectacular example of a security-focused organization themselves being breached. The attack completely undermined the security of their two-factor authentication tokens, and as a result millions of devices had to be replaced.

The attackers used “spear-phishing”: a targeted attack on relatively low-profile employees, delivered via an email with the innocuous subject line “2011 Recruitment Plan”.

We know this because – unlike many of the companies who have been hit by cyberattacks – RSA disclosed a good deal of detail about the attack. They did so partly to regain the trust of their business partners and customers, but also so that other organizations could learn from the mistakes made and improve their defenses.

In the case of the other organizations I’ve mentioned – who also have a significant focus on security – details are sketchy. We know that Deloitte allegedly didn’t enable two-factor authentication on critical administrator accounts, and that Booz Allen appeared to have inadequate internal auditing of the activities of highly privileged system administrators. But beyond this, organizations who have been compromised tend to be tight-lipped about the exact details.
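To make that concrete: the sort of oversight described above is exactly what a routine, automatable audit can catch. Here is a minimal illustrative sketch, in Python, of the idea. It is not anything Deloitte or Booz Allen actually ran, and the file name and column names (username, is_admin, mfa_enabled) are assumptions invented for this example. It simply flags privileged accounts in a hypothetical directory export that don’t have two-factor authentication enabled.

import csv

# Hypothetical export of a user directory. The file name and the column
# names (username, is_admin, mfa_enabled) are assumptions for this sketch.
EXPORT_FILE = "user_directory_export.csv"

def audit_admin_mfa(path):
    """Return the admin accounts that do not have two-factor auth enabled."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            is_admin = row.get("is_admin", "").strip().lower() == "true"
            mfa_enabled = row.get("mfa_enabled", "").strip().lower() == "true"
            if is_admin and not mfa_enabled:
                flagged.append(row.get("username", "<unknown>"))
    return flagged

if __name__ == "__main__":
    at_risk = audit_admin_mfa(EXPORT_FILE)
    if at_risk:
        print("Privileged accounts without two-factor authentication:")
        for name in at_risk:
            print("  - " + name)
    else:
        print("All privileged accounts have two-factor authentication enabled.")

The point isn’t the script itself; it’s that the basic hygiene failures reported in these breaches are checkable, cheaply and continuously, long before an attacker finds them.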

If we are to improve cybersecurity the way we improved airline safety, though, we have to discuss the details of cyberattacks publicly, rather than relying on investigators such as Brian Krebs and Bruce Schneier (or newspapers like The Guardian) to piece together as much as they can through their excellent investigative skills and then share it with us.

In essence, we need the equivalent of the NTSB for cybersecurity. The National Cybersecurity Safety Board, if you like.

Lacking this, at the very least, victims of cyberattacks owe it to the community to be up-front about the detailed root cause(s) of the attack – it’s embarrassing, sure, but as we’ve learned the hard way in the airline industry, the only way to improve safety is to publicly discuss failings.

People make mistakes. Organizations make mistakes. I’m sure the organizations who were compromised thought they had rigorous, effective security policies in place. So do the organizations who haven’t – yet – been compromised. But the best way to test those defenses is not by throwing pentesters (penetration testers) at the problem. It’s by reading and learning about the mistakes made by others, and then ensuring that you share your own errors.

Some of the attacks have been ingenious, some have exploited obvious oversights. But if you don’t know the details, neither do your pentesters. Unless they independently come up with the same attack vectors, how do you know you’re not vulnerable?

So – if your organization is unlucky enough to be compromised – don’t keep it a secret. Firstly – as Deloitte found out last week – someone will tell the press sooner or later. Secondly, you’re part of a greater community. Software engineers want to make the software you run more secure; they need to understand what went wrong so they can fix the vulnerabilities. And your peers can learn from the mistakes you made and collectively modify their processes, design and training to ensure these mistakes aren’t repeated. Follow RSA’s example, and give us an Accident Investigation Report. 

 

About the Author  

Senior Systems Architect, 1E
Andrew Mayo has been involved in IT, in both software and hardware roles, for enough years to have worked through the tail end of the punched-card and paper-tape era and the subsequent invention of the PC. Currently he’s working on the evolution of 1E’s Tachyon solution, looking in depth at both attack and defense strategies and at the evolution of the threat landscape. Previously Team Lead for the AppClarity project, he has worked in various verticals including healthcare, finance, and ERP. When he’s not wrangling with databases, he enjoys playing piano and hiking, especially when the destination is one of England’s picturesque pubs.