There seems to be an ongoing uneasiness within many organisations about how to respond to security concerns. Security remains something they genuinely want to take seriously, but they seem to lack the motivation, at least until they really have to. Just think about it – when do you most frequently see mainstream organisations talking about how seriously they take security? It’s usually immediately after they didn’t take it seriously enough and they’re communicating a serious incident to customers.
It can be enormously difficult to influence a company to do the right thing when it’s just you as an individual having a discussion with them. I’ve had this experience many times now: private discussions about serious security risks result in either very slow action or no action at all. I reckon there’s usually just not quite enough motivation for the organisation to act, but that all changes very quickly when the discussion is public.
A case in point: I recently watched on as a tweet about the support chat tool for a Swedish public transport service gained momentum. They weren’t encrypting the communication with HTTPS, which is unacceptable in this era, so a number of us gave it a “nudge” in the right direction and it then gained considerable airtime. Five days later – fixed!
A resource that’s sprung up with the express intent of public shaming is Plain Text Offenders. This has become the go-to location for pointing out one of the most obvious security failings an organisation can have – emailing people their password in plain text. It’s proven to be quite effective too, with their Twitter account often congratulating “reformed” offenders. Occasionally, it takes a mere hour or so for a site to see the error of their ways, such is the impact of public shaming.
Perhaps the most noteworthy example of this that I’ve been involved in recently is the Nissan LEAF shortcomings I wrote about last month. Here we had a car disclosing driving habits and enabling anyone to control its climate control simply by passing an API an easily discoverable number. As serious as it was, one full month after private disclosure, nothing had happened. Nissan thought it might be “another few weeks” and didn’t deem it serious enough to take the service offline… until the story was made public. Suddenly, it was serious, and the risk (along with the entire service) was gone within 24 hours. It wasn’t “not serious” when I first reported it; Nissan simply wasn’t motivated enough to take it seriously whilst customers didn’t know about it.
To be clear, there is a time and a place for public pressure. Some vulnerabilities could cause immediate harm to those who use the service – a SQL injection flaw that enables an attacker to immediately exfiltrate personal data, for example – and publicising those before they’re fixed would put people at risk. Public pressure such as what came Nissan’s way had to come at the right time (more than a month after they first knew of it) and for the right sort of risk (one they could immediately shut down with little impact).
It’s unfortunate that we find ourselves in a place where many organisations consciously neglect security unless publicly pressured into actually taking it seriously. Then again, at least in these cases the news headlines are about how the organisation involved could have faced a major security incident, rather than about how the organisation is now taking security seriously after it’s all gone wrong.