
Inside the Windows NT Scheduler, Part 1

Assigning CPU time in a uniprocessor environment

Windows NT is a preemptive multithreading operating system. That is, NT lets several programs run simultaneously and switches among them often enough to create the illusion that each program is the only program running on the machine. Well, that's the idea anyway. How to smoothly share one CPU (or multiple CPUs) among many threads of control is a complicated problem. Solving this problem dynamically many times per second is the job of the NT scheduler. The NT scheduler must honor the relative priorities that the application's programmers designate for each thread and attempt to provide responsiveness to user-interactive threads.

In this first part of a two-part series about the algorithms NT's scheduler employs, I'll introduce basic information about the NT scheduler. (For an overview of how NT schedules applications to run, see Christa Anderson, "Foreground Application Handling in NT 4.0," June 1997.) You'll find out about the priority levels that NT assigns to threads, how Win32 programs specify priorities for their threads, the situations that invoke the scheduler, and the algorithms NT uses on uniprocessors in those situations. I'll wrap up with a discussion of some advanced features of the scheduler, including priority boosting and starvation prevention. Next month, I'll provide an in-depth tour of how the NT scheduler implements multiprocessor scheduling.

Threads and Priorities
The basic scheduling unit in NT is a thread. A thread is a point of control within a process. A process consists of a virtual address space that includes executable instructions, a set of resources such as file handles, and one or more threads that execute within that address space. Typical applications consist of only one process, so program and process are often used synonymously. Most programs today are single-threaded: They run as one process with one thread. However, multithreaded programs are becoming more commonplace. An example is a program that lets a user sort a list, with an option to cancel: One thread in the program's process performs the CPU-intensive sorting task while another thread displays a how-to-cancel message to the user and waits for a response. The scheduler does not differentiate between threads of different processes. Instead, it examines the priorities of all the threads that are ready to run at a given instant and picks which thread to execute.
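To make this concrete, here's a minimal Win32 C sketch of such a program--purely hypothetical, not drawn from any real application--in which a worker thread sorts while the main thread waits on the user:

#include <windows.h>
#include <stdio.h>

/* Hypothetical CPU-intensive work: sort a large list. */
DWORD WINAPI SortThread(LPVOID param)
{
    /* ... the sorting loop would go here ... */
    return 0;
}

int main(void)
{
    /* A second thread in the same process performs the sorting. */
    HANDLE hSort = CreateThread(NULL, 0, SortThread, NULL, 0, NULL);

    /* The main thread stays responsive to the user. */
    printf("Sorting... press Enter to cancel\n");
    getchar();

    /* A real program would signal the worker to stop cleanly; this
       sketch simply waits for it to finish. */
    WaitForSingleObject(hSort, INFINITE);
    CloseHandle(hSort);
    return 0;
}

Both threads belong to one process and share its address space; the scheduler simply sees two threads, each with its own priority.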

NT assigns each thread a priority number from 1 to 31, where higher numbers signal higher priorities. (NT uses priority 0 for the system idle thread, which executes when no other thread is able to.) NT reserves priorities 16 through 31 (realtime priorities) for use by time-critical operations. Only a user with Administrator privileges can direct the system to execute threads in this range. NT uses priorities 1 through 15 (dynamic priorities) for the program threads of typical applications (e.g., Notepad, Word, Lotus Notes).

The NT kernel provides functions that let you set a thread to any of the 31 priority levels, but the Win32 API is more indirect. In Win32, specifying a thread's priority is a two-step process. You must first set the priority class of the process; then, you can apply a relative priority to individual threads.

A process priority class is a priority level around which NT lets the process' threads execute. The Win32 API defines four priority classes: realtime, high, normal, and idle. These names correspond to priority levels 24, 13, 8, and 4. Setting a process priority class causes all the threads of that process to begin executing at priorities within ±2 of the class priority. This scheme is shown in Figure 1, page 168. New processes inherit the priority class of their parent. Process threads start at the priority level associated with their process' priority class.

The relative priorities that adjust a thread's priority around its process' class priority are highest, above normal, normal, below normal, and lowest. Highest adds 2 to the class priority, above normal adds 1, normal adds 0, below normal subtracts 1, and lowest subtracts 2. Figure 2, page 168, shows the relative priorities applied to the normal priority class range.

The Win32 API includes two special-case priority modifiers: time-critical and idle. Time-critical moves a dynamic thread's priority to the top of the dynamic range (15), and idle moves it to the bottom (1). Similarly, time-critical and idle move realtime threads to the top (31) and bottom (16) of the realtime range.
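Putting the two steps together, a minimal Win32 C sketch (error handling omitted) might look like this:

#include <windows.h>

int main(void)
{
    /* Step 1: set the process priority class (high class = priority 13). */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);

    /* Step 2: apply a relative priority to an individual thread.
       Above normal adds 1 to the class priority, so this thread runs at 14. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);

    /* The special-case modifiers pin a thread to the ends of its range:
       THREAD_PRIORITY_TIME_CRITICAL -> 15 (31 for the realtime class),
       THREAD_PRIORITY_IDLE          -> 1  (16 for the realtime class). */

    /* ... perform work at the new priority ... */
    return 0;
}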

Whose Turn Is It?
Threads must take turns running on the CPU so that one thread doesn't prevent other threads from performing work. One of the scheduler's jobs is to assign units of CPU time (quantums) to threads. A quantum is typically very short in duration, but threads receive quantums so frequently that the system appears to run smoothly--even when many threads are performing work. One difference between NT Server and NT Workstation is the length of a user thread's quantum. On most x86 systems running NT Server, a quantum is 120 milliseconds (ms). On x86 systems running NT Workstation, a quantum can be 20ms, 40ms, or 60ms, depending on your system settings and whether the thread is a background or foreground application thread (more on this topic later).

The scheduler must make a CPU scheduling decision every time one of three situations occurs:

* A thread's quantum on the CPU expires.

* A thread waits for an event to occur.

* A thread becomes ready to execute.

When a thread's quantum expires, the scheduler executes the FindReadyThread algorithm to decide whether another thread needs to take over the CPU. If a higher-priority thread is ready to execute, it replaces (or preempts) the thread that was running.

In many cases, threads perform processing and then wait for an event to occur. For example, a client/server application might have a server thread that performs processing when it receives client requests and then waits for more requests. A waiting (or blocked) thread gives up its quantum early, and the scheduler must execute the FindReadyThread algorithm to find a new thread to run.

When a new thread or a blocked thread becomes ready to execute (e.g., when the client/server application server thread receives another client request), the scheduler executes the ReadyThread algorithm. This algorithm determines whether the ready thread will take over the CPU immediately or be placed in an eligible-to-execute list.

FindReadyThread and ReadyThread are the key algorithms the NT scheduler uses to determine how threads take turns on the CPU. The uniprocessor versions are straightforward. Let's examine how each works.

FindReadyThread. FindReadyThread locates the highest-priority thread that's ready to execute. The scheduler keeps track of all ready-to-execute threads in the Dispatcher Ready List. The Dispatcher Ready List contains 31 entries, each of which corresponds to a priority level and a queue of threads assigned to that priority level. The FindReadyThread algorithm scans the Dispatcher Ready List and picks the front thread in the highest-priority nonempty queue. Figure 3 shows an example Dispatcher Ready List with three ready threads--two at priority 10 and one at priority 7. FindReadyThread directs the scheduler to choose the first thread in priority 10's queue as the next thread to run.
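In a simplified C sketch--my own rendering with assumed names and data structures, not NT's source--the search looks like this:

#define NUM_PRIORITIES 31   /* priorities 1 through 31; index 0 unused here */

typedef struct _THREAD {
    struct _THREAD *Next;
    int Priority;
    /* ... other thread state ... */
} THREAD;

/* One FIFO queue of ready threads per priority level. */
THREAD *DispatcherReadyList[NUM_PRIORITIES + 1];

/* Simplified FindReadyThread: scan from the highest priority down and
   take the thread at the front of the first nonempty queue. */
THREAD *FindReadyThread(void)
{
    for (int pri = NUM_PRIORITIES; pri >= 1; pri--) {
        THREAD *t = DispatcherReadyList[pri];
        if (t != NULL) {
            DispatcherReadyList[pri] = t->Next;   /* dequeue from the front */
            return t;
        }
    }
    return NULL;   /* nothing is ready; the idle thread runs */
}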

ReadyThread. ReadyThread is the algorithm that places threads in the Dispatcher Ready List. When ReadyThread receives a ready-to-execute thread, it checks whether the thread has a higher priority than the executing thread. If the new thread has a higher priority, it preempts the current thread, and the current thread goes to the Dispatcher Ready List. Otherwise, ReadyThread places the ready-to-execute thread in the Dispatcher Ready List queue for its priority level. Threads that the scheduler pulls off the CPU before they complete at least one quantum go to the front of the queue; all other threads (including threads that were blocked) go to the end.
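Continuing the simplified C sketch from above (CurrentThread and the wasPreempted flag are assumptions standing in for NT's internal state):

THREAD *CurrentThread;   /* the thread now running on the CPU */

/* Append to, or push onto the front of, the queue for a thread's priority. */
static void Enqueue(THREAD *t, int atFront)
{
    THREAD **q = &DispatcherReadyList[t->Priority];
    if (atFront) {
        t->Next = *q;
        *q = t;
    } else {
        t->Next = NULL;
        while (*q != NULL)
            q = &(*q)->Next;
        *q = t;
    }
}

/* Simplified ReadyThread. */
void ReadyThread(THREAD *newThread, int wasPreempted)
{
    if (newThread->Priority > CurrentThread->Priority) {
        /* The new thread preempts; the preempted thread goes to the front
           of its queue because it didn't complete a quantum. */
        Enqueue(CurrentThread, 1);
        CurrentThread = newThread;   /* context switch, in spirit */
    } else {
        /* Threads pulled off the CPU before completing a quantum go to the
           front; all others (including formerly blocked threads) to the end. */
        Enqueue(newThread, wasPreempted);
    }
}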

Boosting and Decay
The picture I've presented so far is of a fairly static system: Threads execute at a priority level until a program changes their priorities or they exit. What actually happens is more dynamic: In a variety of situations, NT boosts (or increases) the priority of dynamic range threads. The most common boost occurs when an event happens that a blocked thread was waiting for. For example, a thread waiting for input from the keyboard increases six priority levels (a 6-point boost) when a keystroke wakes it up. Other increases include a 6-point boost for mouse events and a 1-point boost when a thread wakes up from a wait on a general event.

Boosting applies only to dynamic-range threads. The system never changes the priority of a realtime thread--only a program can change a realtime priority. In addition, a boost never moves a thread's priority into the realtime range; priority level 15 is the upper limit for boosts. Event-related boosts are temporary because they decay over time: Each time a thread runs through an entire quantum, its boost decreases by 1 point. This decay continues until the thread returns to its programmed priority level (the priority it had before its first boost).

NT's boosting logic lets the system boost a thread repeatedly before its priority has decayed to its base priority. Thus, a priority 8 thread that receives keyboard input gets boosted to priority 14. If the thread completes a quantum, its priority decays to 13. If the thread waits for and receives another keyboard event, its priority gets boosted to the 15 limit.
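A hedged C sketch of this boost-and-decay bookkeeping (the names and fields are mine; NT's actual structures differ) might look like this:

#define MAX_DYNAMIC_PRIORITY 15

typedef struct {
    int BasePriority;      /* the programmed (pre-boost) priority */
    int CurrentPriority;   /* base priority plus any remaining boost */
} PRIORITY_STATE;

/* Apply an event boost: for example, 6 for keyboard or mouse input,
   1 for a general event--but never past the top of the dynamic range. */
void BoostThread(PRIORITY_STATE *t, int boost)
{
    if (t->BasePriority > MAX_DYNAMIC_PRIORITY)
        return;                              /* realtime threads never get boosted */
    t->CurrentPriority += boost;
    if (t->CurrentPriority > MAX_DYNAMIC_PRIORITY)
        t->CurrentPriority = MAX_DYNAMIC_PRIORITY;
}

/* Called when the thread completes a full quantum: decay one level
   back toward the base priority. */
void DecayThread(PRIORITY_STATE *t)
{
    if (t->CurrentPriority > t->BasePriority)
        t->CurrentPriority--;
}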

Another type of boost NT Workstation applies is a foreground application boost, which you can control from the Performance tab of the System applet in Control Panel (shown in Screen 1). This type of boost affects quantum length, rather than priority. For the default Maximum setting, NT extends the quantums of foreground application threads to 60ms. If you position the slider in the middle, NT sets the quantums to 40ms. If you position the slider on None, the quantums are 20ms--the same as the quantums of background application threads.
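Under the hood, the slider position is stored in the registry. As a hedged aside, the following C sketch reads it; the key and value name (Win32PrioritySeparation under HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl) reflect NT 4.0's documented behavior, but treat the exact mapping of values to quantum lengths as an assumption:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    DWORD value = 0, size = sizeof(value);

    /* On NT Workstation 4.0, 2 (Maximum), 1 (middle), and 0 (None) are
       assumed to correspond to 60ms, 40ms, and 20ms foreground quantums. */
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\PriorityControl",
            0, KEY_READ, &hKey) == ERROR_SUCCESS) {
        RegQueryValueEx(hKey, "Win32PrioritySeparation", NULL, NULL,
                        (LPBYTE)&value, &size);
        RegCloseKey(hKey);
    }
    printf("Win32PrioritySeparation = %lu\n", value);
    return 0;
}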

Starvation Prevention
Left alone, the FindReadyThread and ReadyThread algorithms might prevent low-priority threads from ever getting a chance to execute. For example, a priority 4 thread on a system with continuously running priority 8 threads would be starved for CPU time. However, NT provides a mechanism that gives low-priority threads a shot at the CPU. The NT Balance Set Manager is a system thread that wakes up every second or so to perform memory tuning. As a secondary responsibility, the Balance Set Manager executes the ScanReadyQueues algorithm, which implements NT's anti-CPU-starvation policy.

ScanReadyQueues scans the Dispatcher Ready List, working down the list from priority 31. It looks for threads that haven't executed in more than 3 seconds. When it finds one, ScanReadyQueues gives the thread a special anti-starvation boost, doubles its quantum, and calls ReadyThread with the thread as a parameter. The anti-starvation boost differs from other boosts: Instead of applying a relative priority increment, the anti-starvation boost slams the thread's priority to the top of the dynamic range. (On pre-Service Pack 2--SP2--systems, the anti-starvation boost was to priority 14; post-SP2 systems boost to priority 15). When a thread that receives an anti-starvation boost finishes its extended quantum (or the thread blocks), its priority returns to the pre-starvation boost level and its quantum returns to its usual length.
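In rough outline--again a simplified sketch, assuming the THREAD structure from the earlier sketches gains LastRunTick, SavedPriority, and Quantum fields--the policy looks something like this:

#define STARVATION_LIMIT_TICKS 300   /* roughly 3 seconds, assuming 10ms clock ticks */

void ScanReadyQueues(int currentTick)
{
    for (int pri = NUM_PRIORITIES; pri >= 1; pri--) {
        for (THREAD *t = DispatcherReadyList[pri]; t != NULL; t = t->Next) {
            if (currentTick - t->LastRunTick > STARVATION_LIMIT_TICKS) {
                t->SavedPriority = t->Priority;
                t->Priority = 15;     /* anti-starvation boost (post-SP2 limit) */
                t->Quantum *= 2;      /* extended quantum */
                /* ReadyThread would then requeue the thread at its
                   boosted priority. */
            }
        }
    }
}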

Next Month
Scheduling in a uniprocessor environment is relatively straightforward, but factors within a multiprocessor environment complicate how FindReadyThread and ReadyThread work. For example, NT lets applications define threads to execute on only certain CPUs, and NT tries to keep threads running on the same CPU for performance benefits. Next month, I'll describe the multiprocessor implementations of FindReadyThread and ReadyThread. These algorithms are complex--so complex that you might argue that a better way must exist for scheduling in a multiprocessor environment. Stay tuned.
