
To Disclose or Not to Disclose, That Is the Question

The practice of full disclosure of security risk information is again under attack. According to an article at MSNBC, Russ Cooper, moderator of the NTBugTraq mailing list and surgeon general at TruSecure, has undertaken a project to create what he calls the "Responsible Disclosure Forum." Cooper thinks such a forum will better govern the release of security risk information to the public because the forum will decide what information to release and when to release it. Cooper didn't say how the forum will entice membership from the worldwide hacker community, but nonetheless, its objective seems clear: Curb the release of risk details in a manner that prevents exploitation.

In the article, Cooper said, "It's better for everyone if we keep [this data] to ourselves. Why not keep it amongst the people who are considered responsible security practitioners? Most attackers aren't smart enough to write exploits themselves, so they rely on other people to release them."

Actually, Cooper's statements make some sense to me, but such a forum simply won't work. The rogues of the hacker community have already proven that when given only minor details about a bug, they can produce a working exploit in a relatively short amount of time. Heated discussions about full disclosure of security risk details have also taken place in past years. Those discussions eventually led to several written policies that suggest a proper course of action for hackers to take when releasing security risk information. Russ Cooper has such a policy posted on his Web site; however, a policy known as RFPolicy, authored by a person using the alias "rain forest puppy," is probably the most widely used standard in the hacker community today.

According to either policy, the basic course of action is for the hacker to notify the vendor about the alleged bug, give the vendor a reasonable time to respond and to produce a patch, and then release the bug information in unison with (not before) the vendor's own announcement. Both policies seem reasonable, and many hackers adhere to them. But now it seems those practices are no longer good enough.

Case in point: eEye Digital Security. When eEye recently produced a sample program that demonstrates a security problem with Microsoft IIS, many users frowned on the company for doing so. Even though eEye worked with Microsoft to correct the problem, and timed the release of its research with the release of Microsoft's own security bulletin and patch, certain circles still chastised eEye because the company's information included a working example. Certain people prefer that this practice—the open sharing of security-related scientific research and working models—be completely eliminated. Why? Because it's too easy for someone to turn such a model into a weapon. That's a weak argument in my opinion.

The problems with network intrusion aren't based on the number of script kiddies using hand-me-down code snagged from a full disclosure mailing list or a Web site. The problems actually stem from the absence of two factors: solid code and solid network administration. With both of those factors in place, the actions of script kiddies, and even many of the best hackers, become relatively moot. The reality is that if someone intrudes on a computer system and the intrusion is due to a bug for which there is no patch, the code's vendor is entirely at fault because it wrote the code. Certainly, software vendors disclaim legal liability, but such disclaimers don't change the facts of where fault truly lies. A faulty product is a faulty product, so trying to reduce a person's ability to obtain usable exploit code is like placing a Band-Aid on a shotgun blast to the head. It only masks a small part of an incredibly serious problem.

And that problem is firmly in vendors' hands. It's up to them to stop bug-related intrusion by producing better code before releasing that code into production. Typically, hackers do a lot of research to figure out all the details about a security risk they've discovered. When they hand that research over to a vendor in its entirety, they generally don't receive any compensation other than a simple written thanks from the vendor. Researchers are left to generate a living from their work (and 15 seconds of fame) in some other manner while the vendor freely enjoys the results of the researcher's labor. That's the way the security bug discovery game works today.

If vendors want to see an end to full disclosure, they just might get a lot more than they bargained for. What if vendors no longer received full disclosure offerings from bug hunters? What if bug hunters changed their policies and simply went to a vendor and said, for example, "We've been researching your product XYZ123 for 3 months and have found two dangerous holes in the ABC321 component of that product, which grant complete system access to a remote user. We'll release full details of our research to the public in exactly 30 days unless you release a patch first, in which case we'll release our details to coincide with your own release. Happy hunting"? How would vendors react to that kind of cessation of full disclosure? If nothing else, it would teach vendors to become better bug hunters, if only after the fact.

Instead of creating a "Responsible Disclosure Forum," I think Cooper would better spend his time helping vendors develop better debugging practices, especially more extensive beta testing programs. Why don't companies such as Microsoft develop tailored beta programs that seriously entice top-notch bug hunters to find holes in their products before release? Why can't a beta program remain operational even after a vendor releases a product into production? After all, a large number of security problems are found only after vendors release products to the public. Why shouldn't a beta program also compensate bug hunters handsomely for their efforts? Microsoft and other software vendors certainly have the money to do so, and frankly, I think that would be a fantastic investment on their part: everyone benefits. But will such a program come into existence? Don't hold your breath.
