You're probably aware of the underlying division in the security community about revealing security risk and exploit information. Some people think such information should remain unreleased; other people insist on full disclosure immediately upon any new security risk discovery. I watched this old sore spot become irritated once again over the past few weeks.
As you might already know, a situation developed quickly after someone reported a vulnerability in Microsoft's Jet database engine to several major security mailing lists. Initially, the person described the security problem in general terms but gave no exploit details. One security mailing list moderator posted the general risk information and stated that he would withhold the exact exploit details for approximately 1 week while Microsoft worked on a fix. During the interim, the list moderator offered to test users' Internet Information Server (IIS) systems for the vulnerability, but many people cried foul, accusing the moderator and others of taking part in a "good ol' boys' club" by withholding the exact exploit details from other users.
A few days after the initial general risk information was publicly posted, someone managed to reverse engineer the exploit and subsequently posted the exact details to the cheers of the waiting public--myself included.
The above scenario piques my interest every time it pops up. It's no secret that I'm a huge fan of full and immediate disclosure, but not everyone agrees with that philosophy. Some people argue that full disclosure leads to more compromised systems because exploit details are available to anyone who wants them--including intruders--and they're somewhat right. But what does the other side of that coin look like?
People who push for full and immediate disclosure think that such action helps heighten security across the board, improving not only software but also administrative and programming skills. Either way, a fine line exists between the benefits and drawbacks of full disclosure. However, if more people diligently monitored the available online information for new security risks, I think the balance would quickly tip in favor of full and immediate disclosure. What's your opinion? Are you a proponent of full and immediate disclosure? And if you are, do you routinely watch for new risk information? Based on your feedback to date, I don't think many of you do.
As many of you have pointed out, finding time to monitor online resources for security risk information doesn't always make it onto your agendas. And a large percentage of you have told me you think it's the vendor's role to provide you with this type of information; I agree for the most part. But vendors aren't always the first to learn of security-related bugs in their software (I learned this by closely monitoring online security resources). In more than half the cases of new risk discovery, the discoverer first posts the information publicly and then notifies the vendor sometime thereafter. So how can you expect a vendor to inform you of something it doesn't know about yet? In many cases, several days (if not weeks) pass before the vendor issues a response. Can you afford to wait days or weeks when it comes to information security?
To me, hunting down new security risk information is no different from keeping a backup of my hard disks. It's one of those things I do just in case. I'd bet a dime and a donut that most of you perform routine backups in your shop. So why not back up Microsoft's and other vendors' security bulletins with a little research of your own? Improving your overall security definitely outweighs the pains of research and remedy. Ask yourself again which is easier: keeping intruders out, or removing intruders once they've penetrated your network defenses? Until next time, have a great week.