In previous blog posts we have discussed some application control best practices through application whitelisting. As organizations adopt application virtualization technologies, which allow applications to run on an operating system without being installed, this approach can play havoc with our whitelisting strategy.
Though there are a number of different ways an application can be virtualized, they all follow the same basic principle.
The traditional method of installing software on a computer takes time. Components are registered on the machine, setup routines are executed, and over time the computer starts to slow down as registered DLLs fatten up the machine. When an organization wants to deploy a new application, it has to do a lot of testing to make sure the application won’t interfere with other applications. When we virtualize an application, it runs in its own virtual bubble. The virtual application has its own virtual file system, virtual registry, and other services. Even though the application has no local footprint, it still runs on local resources and thinks it is installed on the local PC. As a result, these applications are quick and easy to deploy since there is no installation required. We just pull the virtual layers down over the network, with an individual bubble for each application, so there are no application-to-application compatibility challenges.
Some application virtualization solutions run a service on the user’s computer. This service facilitates virtual application access, creates the virtual bubbles, and loads each application’s virtual layers into its bubble. The virtual layers are exposed through a virtual drive, where the application appears to be installed. The Microsoft App-V solution uses this approach and typically uses the Q: drive letter for its virtual drive. Within this virtual Q: drive we would see the program files for our applications (though only a virtual application can see the Q: drive’s contents). Depending on how you set up your application whitelisting solution, this may change how we whitelist applications.
If your applications are whitelisted based on the application path, e.g. %ProgramFiles%, then the virtual applications will be unable to run since they reside on the Q: drive. To support virtual applications with this whitelisting approach, you also need to add the Q: drive to the allowed paths. However, if your rules are based on hash values of the installed application executables, then your existing rules should still work, since the application files are not changed by this type of application virtualization. If you choose to whitelist virtual applications by allowing anything on the Q: drive, make sure you have policies in place so users can’t create their own Q: drive and bypass your protection!
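The difference between the two rule styles can be sketched in a few lines of Python. This is a minimal illustration, not a real whitelisting product: the allowed paths, the hash set, and the function names are all hypothetical, and a path rule here is just a case-insensitive prefix match.

```python
import hashlib
from pathlib import PureWindowsPath

# Hypothetical path rules: %ProgramFiles% plus the App-V virtual drive.
ALLOWED_PATH_PREFIXES = [r"C:\Program Files", r"Q:"]

# Hypothetical hash rules: SHA-256 digests of approved executables.
# (This example digest is the SHA-256 of an empty file.)
ALLOWED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def allowed_by_path(exe_path: str) -> bool:
    """Path rule: allow if the executable lives under an approved location."""
    normalized = str(PureWindowsPath(exe_path)).lower()
    return any(normalized.startswith(prefix.lower())
               for prefix in ALLOWED_PATH_PREFIXES)

def allowed_by_hash(exe_bytes: bytes) -> bool:
    """Hash rule: allow only if the file's digest matches an approved hash."""
    return hashlib.sha256(exe_bytes).hexdigest() in ALLOWED_HASHES
```

Note how the path rule for Q: allows *anything* on that drive, which is exactly why the policy guarding who can create a Q: drive matters, while the hash rule is indifferent to where the file runs from.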
Other types of application virtualization do not use an agent. Instead, they repackage the entire application into a single file that contains both the virtualized application and the virtualization technology, so the virtualized application runs completely self-contained. However, this type of virtual application does not expose the normal application executable, at least not on initial execution, and it may be run from a local path or a network share. When you are creating your whitelists, you need to account for these single-file virtualized applications and update your whitelist rules accordingly.
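For these single-file packages, a hash rule has to target the container file itself, since the original installed executable is packed inside it and never exposed. A small Python sketch of computing that digest (the file name is hypothetical; chunked reading just keeps memory use low for large packages):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute a SHA-256 digest of a file for a whitelist hash rule,
    reading in chunks so large virtual-app packages don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The hash rule would cover the self-contained package, e.g. a
# hypothetical MyApp_Virtual.exe, not the executable packed inside it:
# approved = file_sha256(r"\\share\apps\MyApp_Virtual.exe")
```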
Ultimately, while the use of virtualized applications may add some effort to your whitelisting solution, if you understand the small changes needed to the whitelisting process for the type of application virtualization you are using, application virtualization and application whitelisting together offer a highly flexible and secure environment for your users.