Security Sense: Websites Need to be More Resilient to Password Reuse

One of the inevitabilities of a data breach is that when usernames and passwords are leaked, nefarious individuals will take those credentials and see where else they work. This is possible because of the simple reality that, rightly or wrongly (ok, mostly wrongly), people reuse passwords. What it means is that the breach of site A due to security deficiencies on its part leads to the compromise of accounts on sites B and C and an untold number of other sites. But are those secondary sites in any way responsible for protecting users against reused credentials?

Let’s take a couple of recent issues, the first being MailChimp’s dramas a couple of weeks ago. That’s not a headline any company really wants to see itself in, especially not when they’re implicated in spreading malware to customers via compromised accounts. But after investigating the issue, MailChimp was adamant the root cause was shortcomings on the part of the account owners themselves rather than any flaw on MailChimp’s end.

Then just last week it was the National Lottery in the UK under fire. Same deal again: a heap of accounts were “hacked”, yet the target claimed “no breach” and instead pointed to shortcomings on the part of the account owners. In this case we’re talking about 26,500 accounts, so no small number either. The BBC interviewed me for that piece and I made the following comment:

"If there's 26,500 accounts here and they are saying the credentials are correct but they didn't come from us, they still let an attacker log in 26,500 times"

Camelot (the owners of the lottery system in question) then went on to claim that “We do have extremely robust systems in place”, which is a pretty standard statement to make, but it left me wondering…

Let’s look at it like this: 26,500 compromised accounts due to password reuse means the attacker successfully authenticated to that many accounts. There would have been many more they didn’t successfully authenticate to, because if they’re pulling credentials from other sources, the passwords won’t always be the same. The same accounts won’t always exist either; the attackers almost certainly made over 100k login attempts, and that raises the question: should a system allow this?
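To put some rough numbers around that, here’s a quick back-of-the-envelope sketch in Python. The success rates below are assumptions chosen purely for illustration; credential stuffing hit rates vary wildly and Camelot has published no such figure.

```python
# Rough estimate of the total login attempts behind 26,500 successful
# credential-stuffing logins. The success rates here are assumptions
# for illustration only, not figures from Camelot or the National Lottery.

successful_logins = 26_500

# Credential stuffing hit rates are typically low single-digit
# percentages; 25% would be an extremely generous upper bound.
for assumed_success_rate in (0.02, 0.05, 0.25):
    estimated_attempts = successful_logins / assumed_success_rate
    print(f"At a {assumed_success_rate:.0%} hit rate: "
          f"~{estimated_attempts:,.0f} login attempts")

# Even the most generous assumption puts the attack volume
# comfortably over 100k attempts.
```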

This is a hard one because whilst the easy answer to that is “no”, attackers shouldn’t get free rein in this fashion, the trick is figuring out how to stop it. Clever attackers obfuscate their behaviour to avoid controls such as brute force limits on accounts and IP addresses, but there are other more holistic, system-wide behavioural observations that can be made. A higher rate of requests, and particularly a higher rate of failures, could signal an attack.
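As a minimal sketch of what that system-wide observation might look like, here’s a sliding-window failure-rate monitor in Python. The window size and thresholds are arbitrary placeholders; a real deployment would tune them against observed baseline traffic.

```python
import time
from collections import deque

class LoginFailureMonitor:
    """Flags a possible credential stuffing attack when the site-wide
    login failure ratio spikes within a sliding time window."""

    def __init__(self, window_seconds=300, max_failure_ratio=0.5,
                 min_samples=100):
        self.window_seconds = window_seconds        # look-back period
        self.max_failure_ratio = max_failure_ratio  # alert threshold
        self.min_samples = min_samples              # avoid noisy alerts
        self.events = deque()                       # (timestamp, succeeded)

    def record(self, succeeded):
        """Call once per login attempt, successful or not."""
        self.events.append((time.time(), succeeded))

    def under_attack(self):
        """True if the recent failure ratio exceeds the threshold."""
        cutoff = time.time() - self.window_seconds
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()  # drop events outside the window
        if len(self.events) < self.min_samples:
            return False  # too little traffic to judge
        failures = sum(1 for _, ok in self.events if not ok)
        return failures / len(self.events) > self.max_failure_ratio
```

A monitor like this wouldn’t block anyone on its own; it would feed the decision about when to escalate to the human-verification controls discussed next.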

Then there’s the whole “is this a human” space of technology controls that can be implemented. For example, there’s a range of techniques for looking at behaviour within the browser before the login attempt is submitted; humans exhibit very particular mannerisms that automated tools (such as those submitting logins) find difficult to replicate. Then there’s the likes of reCAPTCHA for when confidence levels of legitimacy are low: if the “user” isn’t human enough or there’s unusual site-wide behaviour, challenge them. Even better, Google has Invisible reCAPTCHA coming soon precisely to help with these sorts of problems.
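By way of illustration, here’s how a site might verify a reCAPTCHA response server-side once it has decided a login attempt looks risky. The endpoint and its secret/response parameters are Google’s published siteverify API; the risk-based flow in the comments (and the `monitor` name) is hypothetical glue tying it to the sketch above.

```python
import requests

RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def captcha_passed(secret_key, token, remote_ip=None):
    """Asks Google's siteverify endpoint whether the reCAPTCHA token
    submitted by the browser represents a solved challenge."""
    payload = {"secret": secret_key, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip  # optional, per Google's docs
    result = requests.post(RECAPTCHA_VERIFY_URL, data=payload,
                           timeout=5).json()
    return result.get("success", False)

# Hypothetical risk-based use, with `monitor` being the
# LoginFailureMonitor sketched earlier: only challenge users
# when site-wide behaviour looks suspicious.
#
#     if monitor.under_attack():
#         if not captcha_passed(SECRET_KEY, submitted_token):
#             reject_login_and_show_challenge()
```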

This isn’t an easy discussion to have because it’s effectively asking organisations to become more resilient when other sites get hacked, but that’s pretty much where we find ourselves today. For sites such as the National Lottery which are high-value targets, this sort of behaviour needs to be both expected and defended against.
