How many reports and related news stories have you read that claim to reveal that Linux is more secure than Windows, or vice versa? Get set for yet another one.
A recent news story, "Controversial Report Finds Windows More Secure than Linux," discusses a soon-to-be-released report by a research professor at Florida Institute of Technology's College of Engineering and a director of research at a security technology provider. The report will compare Windows Server 2003 and Red Hat Enterprise Linux ES 3.0. As you might expect, the report is stirring debate even before its release.
There are problems with these kinds of comparison reports and their related news stories. One problem is that the media often generalize to the point that they propagate misinformation to the unknowing. For example, some people might not know that there are multiple versions of Linux, just as there are multiple versions of Windows. Dozens of entities produce their own unique brands of Linux, updating these brands with new versions over time. A statement such as "Windows is more secure than Linux" is broad to the point of being meaningless.
Another problem with these comparative reports is that they lack adequate context. The researchers often seem blind to other factors that play a key role in the risk of using any OS or application.
According to the news story, the research report covers (among other information) statistics about the vulnerabilities that were found in each platform during 2004. Certainly that kind of information helps determine the overall security of an OS, but other data is necessary to put such reports in context. That data should include answers to questions such as: How many security researchers were looking for security bugs, and in what time frame? In which OS version were they looking? How much time did they spend on such efforts? What were their capabilities, and what tools did they have at their disposal?
Obviously, if less collective time is spent looking for security problems in a platform, then the probability is high that fewer problems will be found in that platform. Likewise, if more time is spent looking for problems in a platform, then the probability of discovering more problems in that platform increases. Applications also play a key role in the security of a platform. So data could be gathered about application vulnerabilities and how they've affected overall security.
Equally important, if not more so, is the question of what motivates intruders and the makers of malware. How did these people spend their time? Which OSs did they target most often, and why?
Another set of interesting questions relates to how many of the cited vulnerabilities could be mitigated by configuration changes or by defenses that are (or should be) already in place. For example, could a simple configuration change, a border or desktop firewall, or an Intrusion Prevention System (IPS) adequately defend against a particular vulnerability?
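To illustrate the kind of mitigation in question, consider a hypothetical sketch: a vulnerability in a network service can often be blocked at a border firewall until a patch is applied, taking the flaw out of practical reach for remote attackers. The port number, interface, and internal network range below are illustrative assumptions, not details from the report:

```shell
# Hypothetical mitigation sketch using Linux iptables (assumed details):
# suppose a vulnerability exists in a service listening on TCP port 8080.

# Block inbound connections to the vulnerable service at the border
iptables -A INPUT -i eth0 -p tcp --dport 8080 -j DROP

# Optionally, still permit the service for a trusted internal subnet
# (inserted before the DROP rule so it matches first)
iptables -I INPUT -i eth0 -p tcp -s 192.168.1.0/24 --dport 8080 -j ACCEPT
```

A rule like this doesn't fix the underlying flaw, but it changes the practical risk a counted vulnerability represents, which is exactly the context a raw vulnerability tally omits.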
None of this type of data is offered in any comparative report that I know of. Yet all these questions should come into play when researching security comparisons, because this data would provide a much more complete picture of how much risk is involved in using a particular piece of software, whether it be an OS, a related service, or an application. Without this kind of data to offer a larger context, these comparative reports are far less useful than their production and associated media coverage imply. If you know of a report that includes this sort of context, please let me know about it. I'd surely like to read it.