Malware Evolves to Bypass Common Controls

Botnets and Trojans are huge headaches. They're everywhere, and their numbers are growing exponentially. Sometimes such malware is discovered by security scanning software. Other times it's discovered through unusual traffic patterns sent to specific IP addresses, sometimes on atypical ports.

When you discover such malware, you can typically monitor it to learn which IP addresses it's communicating with and then block access to those addresses. The blocking technique is particularly effective in stopping bots and Trojans. Therefore, one key to survival for many types of malware is decentralizing its command and control centers. The next wave of malware promises to make the task of blocking far more difficult.

In a new report, security solution maker Finjan describes upcoming trends in malware behavior. Finjan points out that instead of using typical point-to-point communication, new forms of malware will use seemingly harmless technologies and existing Web sites to mask their traffic.

Many Web sites, such as Google, Yahoo!, and Feedburner (to name just a few), are accessible from within enterprise networks and certainly from within almost every home user's network. Traffic to and from such sites wouldn't seem unusual in most cases. Several companies (including the ones I just named) provide incredibly useful technologies, such as RSS feed aggregation and data aggregation from disparate sources. Malware developers realize that and aim to take advantage of it by using these publicly available resources as a go-between.

In one type of scenario, a botnet operator could post a message to a site, such as a blog on a free blog hosting site (MySpace, for example). Bots in the botnet could then download the blog's RSS feed, parse the content, extract commands, and act on them. In another scenario, spyware could do the same thing the bots do, but it could also post information back to the blog as comments if the blog is configured so that all comments must be approved before being published (thereby keeping any data out of sight). Or the spyware could post the data back to the blog as an unpublished post by using such technologies as XML-RPC.
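To make the first scenario concrete, here's a minimal sketch of how a bot might pull commands out of an ordinary-looking blog feed. Everything in it is hypothetical: the feed content, the `#cmd:...#` marker convention, and the `extract_commands` helper are illustrative assumptions, not anything documented in Finjan's report.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS feed as a bot might fetch it from a free blog host.
# The operator's "command" hides inside an ordinary-looking post description.
FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Cooking Notes</title>
  <item>
    <title>Weekend recipes</title>
    <description>try the soup #cmd:update-and-sleep-3600# tonight</description>
  </item>
</channel></rss>"""

def extract_commands(feed_xml):
    """Parse the feed and pull out strings framed by a #cmd:...# marker."""
    root = ET.fromstring(feed_xml)
    commands = []
    for desc in root.iter("description"):
        text = desc.text or ""
        start = text.find("#cmd:")
        while start != -1:
            end = text.find("#", start + 5)  # closing marker after the prefix
            if end == -1:
                break
            commands.append(text[start + 5:end])
            start = text.find("#cmd:", end + 1)
    return commands

print(extract_commands(FEED))  # ['update-and-sleep-3600']
```

To a network monitor, the bot's side of this exchange is just an HTTP fetch of a public RSS feed, which is exactly why the technique is hard to block.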

The problem here is obvious. It's not reasonable to think you can protect your network by blocking access to sites in hopes of stopping botnets and spyware: any number of different sites could be used, and blocking them reduces the overall value of Internet access. One solution that might help is packet content inspection, although that's not foolproof either, because any number of innocuous word combinations could serve as commands for bots and spyware. So we're facing a much more difficult problem to solve. Of course, when it comes to security, an ounce of prevention is worth a megaton of cure, which means that you should use the best security products you can get.

Next week, I'll tell you about a particular set of preventive solutions and how they stack up against their peers. Until then, if you're interested, head over to Finjan's site and get a copy of its report. It's available in PDF format at
