Thanks to recent security debacles from the likes of Target and Home Depot, it’s impossible to overstate the need for organizations to take application, data, and infrastructure security seriously—especially in terms of threats from external sources. However, while these recent breaches are clear examples of the kind of damage that can be initiated from the outside, the reality is that internal threats remain a critical security concern that needs to be taken just as seriously.
The Peril of Inside Threats
For example, consider that AT&T recently suffered a second data breach – this one initiated from the inside. In this case, an AT&T employee absconded with Social Security Numbers and Driver’s License details for an unspecified number of customers. AT&T states that the employee in question no longer works for them – though they don’t appear to specify whether said employee has been turned over to the proper authorities. The point, though, is that insider data breaches are a very real and very expensive problem – and something that virtually all organizations need to protect against. Obviously, for organizations storing PII (Personally Identifiable Information), the question of due diligence is a no-brainer. But the reality is that most organizations – even if they’re not storing credit card numbers or managing EMR (Electronic Medical Records) – have some type of confidential or sensitive information that needs to be safeguarded against everything from disgruntled employees (bent on some sort of revenge or destruction) to dishonest and ‘enterprising’ employees (stealing intellectual property and/or data for their own enrichment) on up to outright corporate espionage.
Security and the Role of Permissions vs. Auditing
Unfortunately, while most organizations understand the need to secure and protect their sensitive data from internal and external threats, far too many don't understand how to properly protect data from internal threats. Too frequently, organizations rely solely upon permissions (or the lack thereof) to police access to their most sensitive data. In addition to permissions not being enough of a deterrent to stop some kinds of internal breaches, this approach comes with a more sinister negative: locking down permissions too aggressively makes it so hard for employees to do their jobs that they routinely end up circumventing security controls just to get work done – which does nearly irreparable damage to the local security culture.
To put the problem of permissions versus auditing into context, think of what happens when you physically visit your bank to move $1,000 from Savings into Checking. To facilitate your transaction, the bank teller you interact with not only has the ability to view account balances, but can also arbitrarily move money into and out of accounts pretty much at will. Put simply, permissions are pretty wide open for these employees. From a security standpoint, that raises the specter of some serious potential problems. For example, with such wide open permissions, what’s to stop a bank teller from moving $1,000 out of your savings account and into their own account (in the Cayman Islands)?
If permissions are the only tool that banks can use to prevent such abuses, then the only viable option they have is to ratchet down permissions to the point where tellers can’t even move money between accounts—making tellers effectively useless. Consequently, banks use another security mechanism: auditing. Because of the high cost associated with potential theft or malfeasance, and because of the high degree of permissions needed by even junior employees, banks simply audit every operation. In this way, if an employee decides to steal and route funds into their own account, a clear history of their actions can be provided to federal and local law enforcement whenever internal breaches or problems occur. Granted, auditing doesn’t prevent the actual theft of data or funds, but by the same token, the fact that operations are audited provides enough of a deterrent to keep the vast majority of bank tellers from even dreaming up schemes or plots to openly abuse their permissions.
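To make the permissions-versus-auditing distinction concrete, here’s a minimal sketch in Python of the teller scenario: the teller keeps wide-open permissions to move money, but every transfer writes an append-only audit record before the balances change. All the names here (`AuditedLedger`, `teller-42`, the account labels) are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditedLedger:
    """Toy ledger: tellers can move money freely, but every move is logged."""
    balances: dict
    audit_log: list = field(default_factory=list)

    def transfer(self, teller_id, src, dst, amount):
        # The audit record is appended *before* the money moves, so even a
        # failed or abusive attempt leaves a trace for internal auditors.
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "teller": teller_id,
            "from": src,
            "to": dst,
            "amount": amount,
        })
        self.balances[src] -= amount
        self.balances[dst] += amount


ledger = AuditedLedger(balances={"alice-savings": 1000, "alice-checking": 0})
ledger.transfer("teller-42", "alice-savings", "alice-checking", 1000)
```

Note that nothing here *prevents* `teller-42` from routing the money somewhere improper – the deterrent is that the log records who moved what, where, and when, and (as discussed below) that the log itself is controlled separately from the people it audits.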
Example: Auditing Data via Seeding
Of course, the biggest and most obvious problem with auditing is that it can be non-trivial to set up and properly manage. Typically, for every bit of auditing put into place, you’ll not only need the technology and infrastructure to handle such auditing, but also a system of checks and balances governing who can view and interact with that audit information – otherwise, folks with access to the logs could too easily cover their tracks.
However, auditing doesn’t always have to be complex to be effective. For example, one of the retail companies I worked at many years ago had a customer list with millions of customer email addresses (among other things). Access – or permissions – to this data was obviously limited to just a handful of employees (primarily developers, the DBA, and some marketing folks). In addition to those restrictions, this list was also secretly seeded with a number of bogus email accounts that were monitored solely by internal auditors. Accordingly, when these monitored accounts ended up receiving non-corporate correspondence (i.e., spam) one day, it was a clear indicator that an internal employee had pilfered the list and, effectively, sold it to spammers. In this scenario, the damage was already done (millions of customers had already been spammed), but the company quickly detected the breach and was then able to do some internal sleuthing and call the appropriate authorities—theft of this many addresses constituted a felony.
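The seeding technique above (sometimes called a “honeytoken”) can be sketched in a few lines of Python. This is not the company’s actual implementation – the function names and the monitoring domain are hypothetical, chosen only to show the mechanics: generate canary addresses nobody legitimate would ever email, mix them in with the real list, and treat any mail arriving at them as proof of exfiltration.

```python
import random
import string


def make_seed_addresses(n, domain="canary.example.com"):
    """Generate bogus 'canary' addresses monitored only by internal auditors."""
    def rand_name():
        return "".join(random.choices(string.ascii_lowercase, k=10))
    return {f"{rand_name()}@{domain}" for _ in range(n)}


def seed_customer_list(customers, seeds):
    """Mix canary rows in among the real customer records."""
    seeded = customers + [{"email": s} for s in seeds]
    random.shuffle(seeded)  # seeds shouldn't cluster at the end of the list
    return seeded


def list_was_stolen(recipients_of_spam, seeds):
    """Any spam hitting a canary address means the seeded list leaked."""
    return bool(seeds & set(recipients_of_spam))


seeds = make_seed_addresses(3)
mailing_list = seed_customer_list([{"email": "jane@example.com"}], seeds)
```

The design choice worth noting: detection here requires no audit infrastructure on the database itself – the “log” is simply the inboxes of the canary accounts, which is why this approach is so cheap relative to full operation-level auditing.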
Granted, not all auditing problems can be solved in such a simple fashion (i.e., sometimes you may need much more than to simply seed your data with markers). But it’s important that organizations recognize and embrace the role that auditing can play in addressing concerns around both internal and external breaches.