Windows & .NET Magazine Security UPDATE--August 20, 2003
Full disclosure has spurred hot security debates for years. As you know, the Organization for Internet Safety (OIS) has been leading the latest effort toward establishing a more responsible disclosure policy.
In the past, I've advocated full disclosure for learning purposes--as have many security professionals. Although I knew that "black hats" use published code to wreak havoc on other people's systems, I saw a benefit in what legitimate scientific researchers ("white hats") could learn by having that code available. The trade-off seemed reasonable then, and it still does--but the timing of information release is obviously a problem.
Now, even if somebody's published code can be useful (e.g., by showing that a patch can be bypassed in another way), far more often than not that benefit doesn't outweigh the danger of someone taking the code, twisting it into an attack mechanism, and unleashing it on the Internet shortly after its release. Clearly, publishing such code only days after a problem has been reported is irresponsible, dangerous, and potentially damaging. So let me be clear: I don't condone such behavior, nor do I condone anyone's use of code for malicious purposes.
Some full-disclosure proponents imply that users deserve to be attacked because they use Microsoft software and the software is full of security holes. That's just another jab at Microsoft. Other proponents maintain that users are responsible for their own problems because they should load available patches. However, as we know, loading patches isn't always the best first step to prevent intrusion. And--although users do need to take responsibility for security--the latter attitude is a short-sighted way to address the victims of predators. Why not use the opportunity to teach people about better security?
The remote procedure call (RPC)/Distributed COM (DCOM) worm (MSBlaster) offers a good example of when loading a patch wasn't necessarily the best first step. For some people, loading the Microsoft patch might actually have been the slowest way to defend themselves; for others, the patch wasn't required at all. Many people never loaded the patch on their systems, yet their network Intrusion Detection System (IDS) didn't pick up any infiltration attempts by the worm. The worm might not have scanned their particular network address block looking for open systems, or those people might have defended themselves by other means, such as Network Address Translation (NAT), border firewalls, server firewalls, desktop firewalls, and antivirus software.
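To make the point concrete: the worm spread by connecting to TCP port 135, the RPC/DCOM endpoint mapper, so a host that doesn't answer on that port (because a border firewall, NAT device, or host firewall blocks it) couldn't be infected that way even if unpatched. The following is a minimal sketch, not from the original article, of how an administrator might check whether a host exposes that port; the address shown is a hypothetical placeholder.

```python
import socket

def port_is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, filtered/dropped (timeout), or host unreachable
        return False

# Hypothetical usage -- substitute an address you administer:
# exposed = port_is_reachable("192.0.2.10", 135)
```

A False result here means the Blaster infection vector simply wasn't available from the scanning host's vantage point, which is why many unpatched-but-firewalled machines saw no infiltration attempts at all.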
In cases in which patches were required, we can't reasonably blame users for not patching their systems fast enough, because every user faces different constraints. Not everybody uses the Internet constantly, and those who don't might not immediately come across news of the latest security outbreak. Some home users don't turn on their computers daily or even weekly, and others know little about security problems and products, including firewalls and antivirus software. Whatever responsibility we assign to them for their own security, they should carry far less blame than the perpetrators.
Some small office/home office (SOHO) users are in a similar predicament; they too might lack the knowledge to gauge the problem as well as the resources to become educated and to properly administer their networks. But they still need to be better protected through their own efforts and through responsible disclosure practices. Large enterprises probably have access to the personnel and know-how, but in any given instance, they might lack the resources to move as swiftly as they'd like.
Obviously, something more must be done to help slow the initial release of malicious programs. Knowing that, two remedies immediately come to mind (ideas that others have long held).
The OIS is already taking steps to promote responsible disclosure, which includes limiting who has early access to working exploit code. I think that's a good step, but perhaps we can do more.
Still, mailing lists and other types of discussion forums present a challenge. Some of these forums promote full disclosure with the intent of legitimate study. Even so, rogue elements are an ever-present problem. I question whether a truly responsible student of security would quickly post code (before users have time to become aware of the danger as well as ample time to protect themselves) to a forum in which rogue elements undoubtedly lurk.
If people are responsible, they should try to find a safe outlet for the work they want to publish, one for which timing is a primary consideration. Although finding such an outlet seems like common sense, I point out the need to do so because a few popular forums have long been used to publish security information--so much so that they're "traditional" elements in the security arena. The interchange among the forums' users is largely professional, the signal-to-noise ratio is high, and the discussions stay on topic. Most of you probably know which forums I'm talking about.
Could the operators of those forums become part of responsible disclosure by weighing the need for adequate timing more carefully, even though allowing such posts has been longstanding policy? Even when posted code is deliberately "broken" to keep the less skilled from using it maliciously, it still presents a danger, especially when people don't consider timing. Let's face it: the worst offenders are smart, so posting broken code is still irresponsible disclosure, because sooner or later some attacker will fix it and use it. Let's not give them a head start.
If we limit public disclosure of code (and command sequences) related to vulnerabilities, a line will begin to appear dividing responsible security students, who have the public interest at heart, from those who don't "get" the inherent dangers of some forms of open discussion conducted at the wrong time. Security students can find other ways to research and discuss vulnerability details without resorting to a public forum that anyone with an email address can join unchecked.