Measuring and Managing Windows NT Workstation 4.0 Application Performance

Take a new look at maximizing NT Workstation 4.0 performance

When I want to maximize Windows NT Workstation 4.0 performance, my immediate thought is to simply strip off the Explorer shell, use a command-prompt interface, and skip those pesky GUI applications. NT Workstation will rip. I call this approach the "nix mentality," which is common among UNIX followers, who believe, in some cases correctly, that graphical applications slow you down. However, no matter how fast the NT kernel is, NT Workstation operates in a graphical environment and runs graphical applications. In most cases, you can't disable the Windows Explorer shell without crippling the system's functionality. Given that reality, it's time to take a fresh look at how you can measure and manage your NT Workstation applications' performance to get the most bang for your buck. Performance Monitor counters are a good starting point for identifying problem applications, and I find them consistently useful. Several Microsoft Windows NT Workstation 4.0 Resource Kit utilities also help you measure and watch for performance problems.

What's Important in Performance Monitor
Performance Monitor is a great tool for measuring NT Workstation 4.0 or NT Server performance, and you can find a lot of information about using the program to gather metrics on NT computers. (For information about how to use Performance Monitor, see "Related Articles in Previous Issues.") However, I want to focus on the Performance Monitor features that measure NT Workstation application performance. I spend most of my time working with two performance objects: Process and Memory. The Process object returns metrics related to all running processes, whether system processes, user applications, or NT services. The Memory object returns metrics related to NT's memory management subsystem, including the file cache, physical memory, and the several paged and nonpaged pools that NT uses for system processes.

When you're evaluating NT Workstation performance, you might also want to look at the performance of the system's disk subsystem. However, I'm not going to focus much attention on this area. (For information about disk subsystems, see Curt Aubley, "Tuning NT Server Disk Subsystems," March 1999.) I'm interested in getting to the problem's source: I want to see how my application is using the system's memory and processor resources and how that usage affects overall system performance.

To a degree, disk problems, such as thrashing, are symptoms of other problems within an application. A pagefile that grows excessively might cause problems on a slow disk subsystem or a highly fragmented volume, but you need to know why the pagefile is growing in the first place. Table 1 lists objects and counters that are good starting points for monitoring application performance and briefly describes the value each one provides. If you use these counters to create a Performance Monitor workspace (.pmw) file, you can quickly load the file whenever you need to monitor application performance. Be aware, however, that a workspace file embeds the name of the workstation or server on which you configured it, so you'll need to edit that name after you load the file on a different computer.

Monitoring for Memory Leaks
The first two counters that Table 1 lists, Process: Working Set and Process: Pagefile Bytes, let me monitor my application's memory consumption footprint. The working set is an important metric for application performance because it tells you how much physical memory (i.e., actual pages in RAM) an application is consuming. You can monitor the working set over time to detect memory leaks in applications. If you see a steady increase in the working set, as Screen 1 shows, the application might not be properly releasing previously allocated memory. However, you need to know the application to understand how it's supposed to behave. For example, if I leave Microsoft Word running but inactive on my desktop and Word's working set steadily increases over time, I can be pretty sure that Word has some kind of memory leak. But if I'm running a data acquisition program that collects data into larger and larger arrays as it runs, an increasing working set is typical behavior (although perhaps not desirable).

Process: Pagefile Bytes tracks an application's working set pretty closely as the application's memory consumption increases. For example, if you use the working set to monitor an application that leaks over time, its Pagefile Bytes counter will follow the working set in a nearly linear fashion.
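If you'd rather script this kind of sampling than sit in front of Performance Monitor, the Win32 Performance Data Helper (PDH) interface exposes the same counters programmatically. The following C sketch polls the working set and pagefile bytes for a hypothetical process instance named SomeApp; substitute the instance name Performance Monitor shows for your application, and note that the counter might appear as "Page File Bytes" in the counter path. It assumes you have the PDH library (pdh.dll and pdh.lib, from the Platform SDK) available.

/* pollws.c -- a minimal sketch; compile with the Platform SDK: cl pollws.c pdh.lib */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY           query;
    PDH_HCOUNTER         wsCounter, pfCounter;
    PDH_FMT_COUNTERVALUE ws, pf;
    int                  i;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    /* "SomeApp" is a placeholder; use the instance name that Performance
       Monitor shows for the process you want to watch. */
    PdhAddCounter(query, "\\Process(SomeApp)\\Working Set", 0, &wsCounter);
    PdhAddCounter(query, "\\Process(SomeApp)\\Page File Bytes", 0, &pfCounter);

    for (i = 0; i < 60; i++) {          /* one sample every 5 seconds */
        PdhCollectQueryData(query);
        PdhGetFormattedCounterValue(wsCounter, PDH_FMT_LARGE, NULL, &ws);
        PdhGetFormattedCounterValue(pfCounter, PDH_FMT_LARGE, NULL, &pf);
        printf("Working Set: %I64d bytes  Page File Bytes: %I64d bytes\n",
               ws.largeValue, pf.largeValue);
        Sleep(5000);
    }

    PdhCloseQuery(query);
    return 0;
}

If both values climb steadily while the application sits idle, you're probably looking at a leak rather than legitimate growth.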

Committed Bytes and the Pagefile
The Committed Bytes, Commit Limit, and % Committed Bytes In Use counters are handy tools you can use to determine the systemwide memory pressures on NT Workstation. Before I discuss these counters, you need some background information about the pagefile.

Microsoft recommends a minimum pagefile size of physical RAM plus approximately 12MB. However, you can optimize this number as needed for your system's real memory requirements. If you're curious about the maximum pagefile size, forget it! In every NT version I've tested, including NT 4.0 Service Pack 6 (SP6), NT ignores whatever maximum value you enter. NT will increase the pagefile to meet increasing memory pressures until the OS runs out of disk space. To test how NT responds to increasing memory needs, enter a maximum pagefile size value on the Control Panel System applet's Performance tab. Then, in the resource kit, look for the leakyapp.exe tool in the \perftool\meastool directory. Microsoft designed My Leaky App to test your system's behavior as an application continues to allocate memory. My Leaky App grows over time, consuming more and more system memory. You start the application and select the Start Leaking button to begin the process. My Leaky App shows current pagefile usage and lets you stop and start the leaking process, as Screen 2 shows. If you let My Leaky App run long enough, it will start to increase the pagefile size and will continue to grow the file well past the maximum you've specified on the Performance tab. After the pagefile grows beyond the minimum value you've specified, you need to reboot to shrink the pagefile back to its original size.
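You don't need the resource kit to see this behavior. The following C fragment is my own minimal stand-in for what leakyapp.exe does, not the resource kit tool itself: it allocates memory it never frees and touches every page so the allocation shows up in the working set and the commit charge.

/* leaker.c -- a simplified illustration; compile: cl leaker.c */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 1 << 20;               /* grab 1MB per pass */
    for (;;) {
        char *p = (char *)malloc(chunk);
        if (p == NULL) {
            printf("Allocation failed -- the commit limit has been reached\n");
            break;
        }
        memset(p, 0xAB, chunk);           /* touch every page so it lands in
                                             the working set and commit charge */
        /* The pointer is deliberately never freed -- that's the leak. */
        Sleep(1000);
    }
    return 0;
}

Run it next to Performance Monitor and you can watch the working set, Committed Bytes, and eventually the pagefile itself grow.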

When NT starts increasing the pagefile to accommodate memory pressures, performance deteriorates rapidly, especially if the pagefile grows on a slow or badly fragmented disk partition. You can use the Committed Bytes and Commit Limit metrics to determine when memory pressure on the system is causing this abrupt pagefile growth. NT calculates the Commit Limit as roughly the sum of the system's installed physical memory and the minimum pagefile size you specified on the Performance tab. Committed Bytes is the running processes' total commit charge. As one or more applications allocate more and more memory, Committed Bytes grows toward the Commit Limit. When you monitor % Committed Bytes In Use, you'll see that as this metric approaches 100 percent, the pagefile begins to grow to meet the increasing memory demands. To try to keep up, NT will increase the pagefile until no more disk space is available, and you'll see the Out of Virtual Memory message that Screen 3 shows. If you receive this message, run Performance Monitor, select the Process object and the Working Set and Pagefile Bytes counters, then select all running applications. You'll see fairly quickly whether one application is responsible for the precipitous growth in memory demands. You can also use % Committed Bytes In Use to tune your pagefile's size: if you monitor this metric over time, you can adjust your minimum pagefile size to meet the needs of your particular set of applications.
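If you want a quick programmatic check of the same commit figures, the Win32 GlobalMemoryStatus call returns numbers that roughly correspond to the Commit Limit and Committed Bytes counters. The following C sketch prints them; treat the mapping as approximate rather than exact.

/* commit.c -- a rough sketch; compile: cl commit.c */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUS ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatus(&ms);

    /* dwTotalPageFile roughly corresponds to the Commit Limit (physical
       memory plus the current pagefile), and total minus available
       pagefile space approximates Committed Bytes. */
    printf("Approximate commit limit: %lu KB\n",
           (unsigned long)(ms.dwTotalPageFile / 1024));
    printf("Approximate committed:    %lu KB\n",
           (unsigned long)((ms.dwTotalPageFile - ms.dwAvailPageFile) / 1024));
    printf("Memory load:              %lu%%\n",
           (unsigned long)ms.dwMemoryLoad);
    return 0;
}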

RELATED ARTICLES IN PREVIOUS ISSUES
You can obtain the following articles from Windows 2000 Magazine's Web site at http://www.win2000mag.com/articles.
CURT AUBLEY
"Tuning NT Server Disk Subsystems," March 1999, InstantDoc ID 4826
"Troubleshooting Windows NT Performance," January 1999, InstantDoc ID 4717

MICHAEL D. REILLY
"Performance Monitor and Networks," May 1998, InstantDoc ID 3072

JOHN SAVILL
"Troubleshooting NT Performance Monitoring," April 1998, InstantDoc ID 3023

Processor Utilization
Process: % Processor Time measures how much processor time an application is using, which is important for determining system bottlenecks. However, you need to be careful when you use Performance Monitor to look at this metric. Certain applications introduce loops in their processing, which can happen when they're waiting on a particular event. These loops can show up as 100 percent processor utilization, which doesn't necessarily mean that the workstation can't process anything else. In most cases, these loops run at low priority and will concede processor cycles to other applications that start up and request processing. Earlier versions of Netscape's browser introduced loops that showed 100 percent utilization, and you couldn't tell whether Netscape was a CPU hog or was simply waiting on an event. Of course, if excessive disk activity, memory utilization, and an overall system slowdown accompany 100 percent processor utilization in an application, then you might have just found a bug in that application. The resource kit's CPU Stress tool lets you artificially load a processor to get an idea of how the system will behave under heavy processor load. You can use this tool to adjust thread priorities and the activity level for four threads, control how much load the tool places on the CPU, and see how much priority matters (i.e., which low-priority threads cede control to higher-priority ones).
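The difference between a genuinely CPU-bound application and a harmless low-priority polling loop is easy to demonstrate for yourself. The following C sketch (my own illustration, not a resource kit tool) pushes a process to nearly 100 percent Process: % Processor Time on an idle system, yet because it runs at idle priority, any normal-priority application that wakes up still gets the processor.

/* idlespin.c -- a minimal sketch; compile: cl idlespin.c */
#include <windows.h>

int main(void)
{
    /* Run the process at idle priority, then spin.  Performance Monitor
       shows this process near 100 percent Process: % Processor Time on an
       otherwise quiet system, yet normal-priority applications still get
       the CPU as soon as they ask for it. */
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);
    for (;;)
        ;                                 /* busy-wait, like a polling loop */
    return 0;
}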

Resource Kit Utilities for Performance Management
The resource kit includes several handy utilities in addition to My Leaky App and CPU Stress for managing your NT computers' performance. You'll find most of these tools in the resource kit's Perftool folder. For a list of some interesting tools you can use to manage and monitor NT performance, see the sidebar "Performance Management Utilities."

The key to managing NT Workstation performance is to be familiar with your applications and how they use NT's resources. The techniques I've described are a first step toward meeting that goal. After you thoroughly understand the Performance Monitor metrics, I encourage you to take a look at the resource kit's Response Probe utility. This tool lets you take a proactive approach to designing high-performance applications. You can create artificial workloads that let you simulate a user's likely stresses on a system. After all, optimizing performance for an application that is running alone is easy. The real fun begins when you must contend with 20 applications, various services, and desktop utilities that might be running at the same time.
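Response Probe drives the system from script files, and I won't reproduce its syntax here, but even a crude homemade load generator illustrates the idea. The following C sketch (my own stand-in, not Response Probe) spins two CPU-bound threads while the main thread steadily allocates and touches memory, so you can watch the processor and memory counters react to a mixed workload.

/* miniload.c -- a rough stand-in for a synthetic workload; compile: cl miniload.c */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A CPU-bound worker thread. */
static DWORD WINAPI spin(LPVOID arg)
{
    volatile unsigned long n = 0;
    (void)arg;
    for (;;)
        n++;
    return 0;
}

int main(void)
{
    int i;

    /* Two busy threads to load the processor... */
    for (i = 0; i < 2; i++)
        CreateThread(NULL, 0, spin, NULL, 0, NULL);

    /* ...while the main thread steadily allocates and touches memory. */
    for (i = 0; i < 100; i++) {
        char *p = (char *)malloc(1 << 20);
        if (p != NULL)
            memset(p, 0, 1 << 20);
        Sleep(500);
    }
    printf("Load run finished; check the Performance Monitor log.\n");
    return 0;
}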
