How I Secured One Company's Network

Using Log Parser, virtualization, and a little psychology

Keeping malware from infecting networks is a never-ending battle. Over the past few years, the IT world has made great progress in maintaining acceptable network defenses, although sometimes at the expense of usability and compatibility. For one company, that was too great a price to pay. Here's how I lowered the usability cost of malware prevention for one of my clients while maintaining security.

The Situation
Recently, a company hired me as a security consultant. Due to the nature of my client's business, employees spend a lot of time on many different Web sites. The company encourages employees to use their computers to play, to communicate with others, and to do whatever else is needed to foster creativity. Its problem was that ever since the release of Windows Server 2003 Service Pack 1 (SP1) and Windows XP SP2, employees couldn't get the Web sites they visited to work properly.

Support incidents piled up as employees requested help to install ActiveX components, troubleshoot zone issues, enable pop-ups, and adjust cookie settings—whatever it took to get Web sites to work.

Sometimes users went ahead and eased security settings themselves. This relaxed security had consequences: Despite taking such basic precautions as using firewalls, email virus scanning, and automatic updates, the company still experienced an increase in spyware, viruses, and other malware-related incidents.

However, my job wasn't to harden Windows, but rather to relax some of the company's OS security defaults to improve usability while sacrificing as little security as possible. The company set some guidelines for my solution: It had to be simple and not require a major network overhaul, and it couldn't add significant administrative training or overhead. In short, the company wanted something I could easily attach to the existing network infrastructure.

I explained the risks involved with relaxing security restrictions on Web browsers and other software. For example, a user might visit a Web site that exploited some unpatched vulnerability in Microsoft Internet Explorer (IE) to install a keylogger on the user's system, giving the attacker access to everything the user typed on the keyboard. From that point on, the user's passwords and just about any other sensitive information typed would be at risk—it happens all the time. Or a user might inadvertently install spyware that significantly slowed the system or made it unusable.

I proposed my standard Windows lockdown procedure, but management quickly shot that down. The company wanted me to relax security just enough to get everything working but without affecting usability—it wanted to fully use the Internet without being vulnerable to it.

As I walked through the office, I caught a glimpse of the challenge before me: I saw IM windows open on many desktops, elaborate custom desktop themes, system trays loaded with icons, and USB hubs connecting devices of all varieties. One thing was clear: These were highly skilled, creative users whose work lives were centered on their PCs. One executive told me he wished he could just have two separate networks: a safe one for work and a compatible one for Internet access. Turns out he was on to something.

Virtual Solution
I contemplated the idea of having two separate networks. We all knew that building two physical networks simply wasn't practical, but the idea intrigued me—there are plenty of ways to isolate networks on a single connection. My first thought was to experiment with a virtual test network using virtual machines (VMs)—something I do all the time for research and testing.

Then I realized that my test-network idea itself was the solution. I could quickly build an entire parallel network made up of VMs that could easily accommodate any level of isolation.

The virtual network would initially coexist with the main network, but I would use virtualization technology to direct that network's traffic to VMs running on each desktop. We could achieve reasonably good isolation of the VMs by assigning IP addresses from a separate subnet and using IPsec to secure the traffic on the main network. Although this approach wouldn't stop a malicious hacker from specifically targeting the main network, it would work well for isolating malware threats.

A big bonus of using VMs was that if users at any time felt their system security was compromised or otherwise unstable, they could revert to a clean master image in a matter of seconds. Users could play with—or destroy—VMs to their heart's content without threatening the stability of their work system.

Of course, no company, product, or technology can guarantee total security. If someone has hacked the VM to the extent that they can access the host, you probably have bigger concerns anyway.

The point of this solution was to add a strong layer of isolation: to balance risk and usability, and to contain any threat so that it had minimal effect on the company's network. Furthermore, we would have a way to quickly identify and recover from security incidents.

VM Subnet
I started my planning at the lowest level of isolation: the virtual network. Although I could have physically isolated the network, for simplicity I placed the VMs on their own subnet. The solution wasn't perfect, but at least the unique IP addresses would help me configure the firewall, routers, and intrusion detection systems to treat the VM subnet differently. By having a unique subnet for the VMs, I could easily identify any problems originating from those machines. Best of all, I knew I could enforce strict firewall rules that could limit security problems without affecting Web browsing.

I went with a 10.1.0.0/16 network for use with the VMs, because the company used the 192.168.0.0/16 subnet for its main network. I dedicated a router port to that subnet and carefully built firewall rules to limit network communications from the VMs.

Because the VMs shared the physical network adapters with the host machines, I needed a DHCP server that would give IP addresses only to the VMs. VMware has a DHCP feature for guest OSs, but then, from the main router, I wouldn't be able to distinguish VM traffic from the host's traffic. Since I was already planning to use IPsec, my solution was to configure a DHCP server that would communicate only with systems authenticated via IPsec.
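
As a sketch of the DHCP side (assuming a Windows Server 2003 DHCP server; the server name and address range here are placeholders):

C:\>netsh dhcp server \\DHCP1 add scope 10.1.0.0 255.255.0.0 "VM Subnet"
C:\>netsh dhcp server \\DHCP1 scope 10.1.0.0 add iprange 10.1.0.100 10.1.0.250
C:\>netsh dhcp server \\DHCP1 scope 10.1.0.0 set optionvalue 003 IPADDRESS 10.1.0.1

The IPsec policy on that server then ensures that only authenticated hosts—the VMs—can actually obtain a lease.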

As for my decision to use IPsec, I knew it might conflict with the requirement of low administrative overhead. After analyzing the risks, I decided to use IPsec authenticating with a shared key, which would achieve the isolation I needed but wouldn't require implementing a public key infrastructure. (For information on configuring IPsec, see "Using IP Security Policies to Restrict Access to a Server," March 2005, InstantDoc ID 45217; for information on using IPsec for domain isolation, see the Microsoft article "Server and Domain Isolation" at http://www.microsoft.com/technet/itsolutions/network/sdiso/default.mspx.)
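
A minimal sketch of such a policy, using the netsh ipsec static context available in XP SP2 and Windows Server 2003 (the policy, filter, and rule names are placeholders, and the preshared key is deliberately elided):

C:\>netsh ipsec static add policy name="VM Isolation"
C:\>netsh ipsec static add filterlist name="VM Subnet Traffic"
C:\>netsh ipsec static add filter filterlist="VM Subnet Traffic" srcaddr=10.1.0.0 srcmask=255.255.0.0 dstaddr=Me mirrored=yes
C:\>netsh ipsec static add filteraction name="Require IPsec" action=negotiate
C:\>netsh ipsec static add rule name="VM Rule" policy="VM Isolation" filterlist="VM Subnet Traffic" filteraction="Require IPsec" psk="<shared key>"
C:\>netsh ipsec static set policy name="VM Isolation" assign=y

Assigning the same policy—with the same key—on the DHCP server and the other VM-facing hosts completes the isolation.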

Browser Configuration
A big part of the project involved configuring browsers so that users felt unobstructed even though we maintained some level of security. The most obvious solution was to relax some of the security settings so that IE was less restrictive about such tasks as installing Java or downloading and running an ActiveX component.

To achieve this balance, I had to adjust the default zone policies. Rather than blocking components and disabling scripting, I adjusted the settings to prompt before taking these actions. However, it didn't take long for me to realize that users would experience a lot of annoying prompts, even when browsing the most common Web sites.
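
For reference, these zone actions are ordinary registry values, so the adjustment can be scripted. A minimal .reg sketch, assuming the standard action codes (1001 = download signed ActiveX controls, 1200 = run ActiveX controls and plug-ins) and the usual data values (0 = allow, 1 = prompt, 3 = disable), applied to the Internet zone (zone 3):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3]
"1001"=dword:00000001
"1200"=dword:00000001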

Consequently, I stepped back and monitored a handful of users as they used the Internet. Most users had developed some kind of workaround to cope with security restrictions—either using Mozilla Firefox or Opera to access some sites or adding sites to the trusted-sites list in IE. But every user was quick to point out one or two sites that didn't work no matter what. This scenario often arose when a Web site that was in the Trusted Sites zone tried to download a component—such as Java—from a Web site in a restricted zone.

I noticed that users didn't need unrestricted Internet access—they just needed a few basic components to be able to work with a variety of Web sites. Furthermore, the components they needed were quite common and widely accepted as safe to install: Adobe Acrobat, Adobe Shockwave Player, Macromedia Flash, and Apple QuickTime, among others.

It occurred to me that I didn't have to significantly change security policies at all. After all, it wasn't the security restrictions that users disliked; they just needed Web sites to work. By preinstalling the most common components and prepopulating the trusted-sites list with the most commonly visited sites, I could give users the compatibility they needed—and the illusion that I'd relaxed security—while making it less likely that they'd encounter a new component that needed to be installed. I wouldn't have to completely remove the restrictions on ActiveX components; if anything, I could probably increase security.

To enable all sites to use these components, I configured Group Policy to let users install ActiveX components, but only those on the preapproved list. I manually compiled the list of components shown in Table 1 by looking at the components installed on various systems. I then installed the components as administrator-approved components using the Profile Manager tool from the Internet Explorer Administration Kit (IEAK) 6 SP1.
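
IEAK does the packaging, but the underlying mechanism—my assumption about the exact storage, based on how the Administrator Approved Controls policy is typically applied—is to set the zone's run-ActiveX action to the special administrator-approved value (0x00010000) and then list the approved CLSIDs; the well-known Macromedia Flash CLSID is shown as an example:

Windows Registry Editor Version 5.00

; Set "Run ActiveX controls and plug-ins" in the Internet zone to
; "Administrator approved"
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3]
"1200"=dword:00010000

; Approve individual controls by CLSID (Macromedia Flash shown)
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedControls]
"{D27CDB6E-AE6D-11cf-96B8-444553540000}"=dword:00000000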

Populated Zones
Populating the trusted-sites list was a bit complicated. Everyone visited common Web sites, but to avoid compatibility problems, I wanted the list of sites to be comprehensive. To generate the list, I used Microsoft's free Log Parser utility.

I started by writing a query to pull the current trusted-sites list from a dozen systems on the network that together closely reflected overall network usage. On each system, I ran the following Log Parser command (which must be typed all on one line):

C:\>logparser "SELECT DISTINCT 
  EXTRACT_TOKEN(Path, 8, '\\')
  As Domain INTO 
  TrustedSites-
   %COMPUTERNAME%.txt 
  FROM 'HKCU\Software\Microsoft\ 
  Windows\CurrentVersion\ 
  Internet Settings\ZoneMap\ 
  EscDomains' 
  WHERE ValueName LIKE 
  'http%' " -o:NAT -rtp:-1 
  -headers:off 

This query extracts domain names from the list of hosts in the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains registry key and saves the list of names to a text file in the current directory.

After collecting a list from each queried computer, I placed all the lists in a directory and parsed them with the command

C:\>logparser "SELECT Text AS 
  Domain, Count(*) AS Total INTO 
  TrustedSites.txt FROM 
  TrustedSites*.txt GROUP BY 
  Domain HAVING Total > 1 
  ORDER BY Total DESC" 
  -i:textline -o:nat -rtp:-1 

This query gathers the domain names from all the trusted-sites lists, counts how many machines' lists each domain appears on, and sorts the domains by that count, excluding any domain that appears on only one machine's list. The aggregate list consisted of several hundred trusted domain names. I manually reviewed the list and removed domains that were obviously inappropriate, such as known porn or spyware sites. In the end, my trusted-sites list probably read a lot like a list of the top 500 Web sites.

I realized that there was no reason not to accept all hosts for each domain on the list. So rather than allowing www.microsoft.com, support.microsoft.com, windowsupdate.microsoft.com, and download.microsoft.com, for example, I could use a wildcard and allow *.microsoft.com. Using wildcards made putting my list back into the registry much simpler.
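
In the registry, a wildcard trusted-site entry is simply a key for the domain under ZoneMap\Domains whose "*" value maps all hosts and protocols to zone 2 (the Trusted Sites zone):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoft.com]
"*"=dword:00000002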

To make a list I could easily import to the registry, I ran yet another Log Parser query, but this time the output format was a bit more complicated. To create a properly formatted .reg file, I created a template file by saving the text shown in Figure 1 to a file named reg.tpl. The following command uses the -tpl parameter to format the output using the specified template:

C:\> logparser "SELECT Domain 
  INTO TrustedSites.reg FROM 
  TrustedSites.txt" -i:tsv 
  -iSeparator:space -nSep:2 
  -nFields:2 -o:tpl -tpl:reg.tpl 

This action creates a file, TrustedSites.reg, which you can double-click to add to the registry.
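
Figure 1 isn't reproduced here, but a minimal reg.tpl along the following lines would do the job—Log Parser's TPL output format emits the <LPHEADER> section once, then the <LPBODY> section for each record, substituting the %Domain% field:

<LPHEADER>Windows Registry Editor Version 5.00
</LPHEADER>
<LPBODY>[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\%Domain%]
"*"=dword:00000002

</LPBODY>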

Because I was populating the trusted-sites list, I also thought it prudent to populate the restricted-sites list. This task was much easier, because the work had already been done for me. To add sites to the Restricted Sites zone, I used several other programs that add their own lists of restricted sites, including IESPYAD (available at http://www.spywarewarrior.com/uiuc/resource.htm), which adds a list of known undesirable sites and domains to IE's Restricted Sites zone. (There is no equivalent to a Restricted Sites zone in Firefox.)
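
The registry layout mirrors the trusted-sites entries, just mapped to zone 4 (the Restricted Sites zone); a single IESPYAD-style entry looks like this (the domain is a placeholder):

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\badsite.example]
"*"=dword:00000004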

Building the VMs
The VM template I built was just a basic XP SP2 installation. To keep things simple, I used the built-in securews.inf security template to set the system's security policy.
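
Applying the template is a single command; a sketch assuming the default template path on XP:

C:\>secedit /configure /db %windir%\security\Database\securews.sdb /cfg %windir%\security\templates\securews.inf /overwrite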

Because the VMs needed effective protection from malware threats, I loaded several free antivirus and antimalware applications: Javacool Software SpywareBlaster, Clam AntiVirus, the Microsoft Windows Defender beta, and Spybot-Search & Destroy. Many of these tools' features overlapped, which I felt made the coverage more comprehensive.

After I hardened and preconfigured the OS, I installed the latest patches and created a new account for users to log on to. Rather than managing a second set of account credentials for each user, I created one user account on the base image, installed the image on every machine, and required a password change at the next logon—so when users first booted their VM, they had to set their own password on that account. Although a fully managed domain dedicated to the VMs would have allowed more precise control over each account, using one account was a simple solution that still met the design goals.

(Virtual) Mission Accomplished
After establishing some basic security policies—I convinced company management to forbid the use of peer-to-peer applications, a common source of spyware—and conducting some basic user training, I deployed the image across the network. Initially, there was some user resistance, but that quickly disappeared after users became comfortable using the VMs, and especially after they found that most of their Web sites worked without any problems. To the users, I made the Internet work; to the administrators, I made it work securely.
