Right now, the ultimate goal of Internet applications is on-demand, high-quality streaming video. An array of high-powered technologies from teleconferencing to music-video jukeboxes is waiting for an adequate delivery system. Such a delivery system is highly dependent on a group of technologies that fall under the rubric of Quality of Service (QoS).
QoS is a goal, not a particular technology or application. QoS offers applications a bandwidth-level guarantee. Today, no network traffic prioritization is possible on the Internet. You can buy high-speed connections to the Internet, but after your data enters the network, it has to compete with all the other data on the network. The Internet transmits data on a best-effort basis and is unable to provide any sort of network-resources guarantee. In a QoS-enabled network, applications that need constant, high levels of bandwidth and low levels of latency, jitter, and loss (e.g., high-quality streaming video) can request and receive a guarantee for those network resources. ISPs can assign traffic that is less demanding (e.g., email) a lower priority.
The goal of QoS research is to achieve universal end-to-end QoS, which means functional QoS from one computer to any other computer, no matter how many intervening subnets the data stream must pass through. End-to-end QoS will enable better functionality for many available mission-critical applications and will let a new breed of applications that require QoS flourish. For example, the Internet2 research team (an academic, governmental, and industrial research initiative) has a group dedicated to high-quality streaming video. This group recently sent a High-Definition Television (HDTV) feed—a 1920 x 1080 resolution video stream at 30 frames per second—over the Internet. This feat required a dependable, unfluctuating connection: a QoS-enabled connection.
QoS is already functional in intranets. (For more information about how to set up QoS in your intranet, see Tao Zhou, "Build a Better Network with QoS," November 1998.) However, QoS isn't functional over an extranet or the Internet. QoS researchers are directing their efforts toward solving the difficulty of transmission across subnets. QoS developers have two approaches that they can take to build QoS into the Internet: Integrated Services (IntServ) and Differentiated Services (DiffServ). Both QoS solutions introduce further complexity into network design and construction.
Microsoft has incorporated features into Windows 2000 (Win2K) that make this OS a powerful tool in the coming era of a QoS-enabled Internet. However, simply installing Win2K on your server won't give you access to a QoS-enabled Internet. End-to-end QoS on the Internet requires improvements to every part of every public network. The Internet2 team is building a QoS-enabled multisubnet network, known as QBone, for college campuses. After QBone is functional, this network will showcase multimedia tools and show the world the potential of QoS.
Let's divide the architecture of a simple QoS-enabled network into three parts: the end system, the policy enforcement point (PEP), and the policy decision point (PDP) or policy manager. The end system is the computer that sends or receives network traffic. The PEP resides in QoS-enabled routers on the network or in the form of a Subnet Bandwidth Manager (SBM) on networks that lack QoS-enabled routers. The PDP, typically a dedicated server, is the central QoS manager that contains the static policy.
The QoS PEP and PDP model, which Figure 1 shows, uses WAN resources to establish dedicated network resources for transmission. First, the end system communicates a request for resources to the PEP. In most networks, this request uses the Internet Engineering Task Force (IETF) Resource Reservation Protocol (RSVP), which Request for Comments (RFC) 2205 defines. (All Microsoft QoS systems use Common Open Policy Service—COPS—for PEP and PDP communication.) Second, the PEP queries the PDP. Third, the PDP examines the request in the context of the network load, the priority of the end system making the request, the priority of other network traffic, and the static policy. The PDP makes a decision and sends this decision to the PEP. Fourth, the PEP enforces the decision, either granting or denying the request. Finally, if the PEP granted the request, the data flows from the requesting computer to the destination computer.
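The five steps above can be sketched as a toy admission-control loop. Everything here (the class names, the policy table, the capacity figure) is invented for illustration; the sketch only makes concrete the division of labor between end system, PEP, and PDP:

```python
# Toy sketch of the end system -> PEP -> PDP admission flow.
# All names and numbers are hypothetical.

class PDP:
    """Policy decision point: holds the static policy and network state."""
    def __init__(self, capacity_kbps, policy):
        self.capacity = capacity_kbps      # total bandwidth the PDP manages
        self.reserved = 0                  # bandwidth already promised
        self.policy = policy               # static per-user priority table

    def decide(self, user, kbps):
        # Grant only if the user appears in the policy and capacity remains.
        allowed = self.policy.get(user, 0) > 0
        fits = self.reserved + kbps <= self.capacity
        if allowed and fits:
            self.reserved += kbps
            return True
        return False

class PEP:
    """Policy enforcement point: forwards requests, enforces decisions."""
    def __init__(self, pdp):
        self.pdp = pdp

    def request(self, user, kbps):
        # Steps 2-4: query the PDP and enforce whatever it decides.
        return self.pdp.decide(user, kbps)

pdp = PDP(capacity_kbps=1000, policy={"alice": 1})
pep = PEP(pdp)
print(pep.request("alice", 600))   # True: within capacity
print(pep.request("alice", 600))   # False: would exceed capacity
print(pep.request("mallory", 10))  # False: no policy entry
```

A real deployment would carry these requests in RSVP and COPS messages rather than method calls, but the grant-or-deny logic at the PDP has the same shape.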
This model falls short when you consider QoS on the Internet. When a flow must travel across different subnets, and each subnet has a distinct policy and policy manager, all sorts of complexities ensue. The simple QoS network model doesn't scale because no single PDP has authority over the entire path.
IntServ and DiffServ are the two major schools of thought about how to provide QoS on an extranet. IntServ is a general approach to QoS on any sort of network; DiffServ is an approach for large-scale networks. Most people working on QoS extranets use one of these frameworks or a combination of the two.
The essential difference between the two frameworks is the management scope. IntServ provides for per-flow management, in which each router perceives the individual traffic flow and allocates network resources to each flow. (Microsoft documentation refers to IntServ as a per-conversation approach, in which routers assign one conversation to each flow. A conversation is all traffic between an instance of an application on one host and the instance of the peer application on a peer host.) End systems make individual requests for network services, and each router passes the request to the next router along the flow path. Each router must communicate with a PDP, which grants or denies the request. IntServ can provide powerful QoS, but this model runs into problems with scalability. To establish IntServ on the Internet, core backbone routers need to manage hundreds of thousands or even millions of flows simultaneously. Although IntServ developers have a few stopgap aggregating tools, the QoS Working Group—an Internet2 subgroup—opted for DiffServ.
DiffServ takes a different approach to QoS. According to Ben Teitelbaum, chairman of the QoS Working Group, this approach "pushes complexity out [from the core of the network] to the edges of the network." Teitelbaum said developers created DiffServ in direct response to IntServ's scalability problems. DiffServ provides for aggregate-flow management, which aggregates many conversations into one flow. The DiffServ model establishes certain QoS categories, then marks the packets near the point of origin (at the end system, host, or router). By the time the packet reaches the network core, each packet requires minimal attention from routers; each router simply glances at the packet header and queues the packet accordingly. In exchange, DiffServ requires much more powerful policing at the network edge.
IntServ is the everything-you-could-possibly-want QoS approach. (RFC 1633 defines the IntServ architecture; RFC 2382 defines a framework for running IntServ and RSVP over asynchronous transfer mode—ATM.) In the IntServ model, which Figure 2 shows, first an end system generates an RSVP request that it sends to PEPs (routers or SBMs) along the data-flow pathway. (For more information about RSVP requests, see the sidebar "RSVP Revealed.") Second, each PEP must negotiate with its subnet PDP. Third, each PDP grants or denies the request for resources and signals the PEPs. Fourth, PEPs signal the end system and tell the end system whether the PDP granted or denied the request for network resources. Finally, if the PDP granted the request, the routers reserve guaranteed bandwidth to carry the data stream across the subnets.
Critics claim that IntServ puts all the complexity in the core. Every router in the path of a flow must commit a nontrivial amount of computing power to each individual flow. Each flow also forces each router to signal the PDP and wait for its decision. At a backbone router, through which hundreds of thousands or even millions of flows pass, the sum of all those little computations and signals is overwhelming.
DiffServ attempts to push the workload from the center of the network to the edge of the network. With DiffServ, hosts and routers at the edge of the network classify packets into a limited number of service categories, then pass the packets into the core of the network. Core routers extract the packet label and put the packet into a preestablished priority level. This process requires astronomically fewer resources per flow than IntServ.
In the DiffServ model, which Figure 3 shows, the intranet host performs most of the work at the edge of the network. First, the end system sends requests to the intranet host. Second, the intranet host marks the packets according to bandwidth and passes them to the network ingress. Third, the ingress demotes excess traffic to a lower service level and re-marks it accordingly, delays some traffic to maintain the traffic profile, or forwards the traffic directly to the routers. Fourth, the routers treat the traffic according to its marking and provide it the necessary bandwidth. Finally, the traffic reaches the end system. The DiffServ marking relieves core routers from the burden of per-flow negotiation and management. DiffServ eliminates the dense signaling that IntServ uses.
The DiffServ framework defines a DiffServ Codepoint mark, which is a field in the Layer 3 header of IP packets. Routers in the network examine each packet and use the DiffServ Codepoint mark to apply Per-Hop Behavior (PHB). PHB is the heart of DiffServ. From an abstract standpoint, each PHB represents a different level of prioritization that a DiffServ network offers; from a technical standpoint, a PHB is a prespecified queuing or scheduling behavior. The two PHB types are Expedited Forwarding (EF) and Assured Forwarding (AF). The EF PHB provides the high-quality, low-latency service of Virtual Leased Line (VLL) point-to-point line emulation. The AF PHB provides a level of service lower than EF PHB but better than the present best-effort Internet traffic standards. QoS developers call this service Better than Best-Effort (BBE).
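Because the DiffServ Codepoint occupies the upper six bits of the old IPv4 Type of Service byte, an application or edge host can mark its own outgoing packets from user space. The following sketch assumes a Unix-like system whose kernel honors the IP_TOS socket option; the EF codepoint value (46, binary 101110) comes from the IETF's Expedited Forwarding specification:

```python
import socket

# Hedged sketch: marking outgoing packets with a DiffServ Codepoint (DSCP).
# The DSCP is the upper 6 bits of the IPv4 TOS byte, so the value passed
# to IP_TOS is dscp << 2 (the low 2 bits are reserved for other uses).
DSCP_EF = 46           # Expedited Forwarding codepoint, binary 101110
DSCP_BEST_EFFORT = 0   # unmarked, best-effort traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Read the option back to confirm the kernel accepted the marking.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)        # prints 46
sock.close()
```

Marking is only half the story: a router that never polices these bits would let any host claim EF service, which is exactly why DiffServ pushes conditioning to the network edge.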
DiffServ operations are fairly simple in an intranet. As long as the network's routers recognize a packet's DiffServ Codepoint mark, every router automatically passes the packet along according to the preset policy. The intricacies of DiffServ functionality all occur at the network edge. Careful policing is necessary because if too many packets come in with a high-level priority stamp in their header, they'll overload the network. The intranet host must mark or condition the traffic at the edge of the network. For more information about traffic conditioning, see the sidebar "Traffic Conditioning Exposed."
DiffServ can use marking and conditioning to provide either quantitative or qualitative service guarantees. For example, a strong quantitative bandwidth guarantee (service level A) might be 75Kbps sustained, with 300Kbps available for 100Kb bursts and latency not more than 1 second. The network ingress will re-mark traffic beyond this amount for service level B. Service level B will contain a similarly structured profile for lower priority traffic; the provider can link many priority levels in a chain and drop the level of excessive traffic in the chain. Another sort of quantitative guarantee might be that 90 percent of traffic at service level C would experience no more than 20ms latency.
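A quantitative profile such as the hypothetical service level A is commonly policed with a token bucket: tokens accumulate at the sustained rate up to the burst size, and a packet conforms only if enough tokens remain to cover it. This is a minimal sketch using the numbers from the example above (75Kbps sustained, 100Kb bursts); real meters also track latency and drop behavior:

```python
class TokenBucket:
    """Meter for a quantitative profile: sustained rate plus burst allowance."""
    def __init__(self, rate_kbps, burst_kb):
        self.rate = rate_kbps          # tokens (Kb) added per second
        self.burst = burst_kb          # bucket depth in Kb
        self.tokens = burst_kb         # start with a full bucket
        self.last = 0.0                # time of last update, in seconds

    def conforms(self, now, packet_kb):
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_kb <= self.tokens:
            self.tokens -= packet_kb
            return True        # in profile: keep service level A
        return False           # out of profile: re-mark for service level B

# Service level A from the example: 75Kbps sustained, 100Kb bursts.
meter = TokenBucket(rate_kbps=75, burst_kb=100)
print(meter.conforms(0.0, 100))   # True: a full 100Kb burst is allowed
print(meter.conforms(0.1, 50))    # False: only ~7.5Kb refilled since then
print(meter.conforms(2.0, 50))    # True: refill is capped at 100Kb
```

A nonconforming result here corresponds to the ingress demoting the packet to service level B rather than dropping it outright.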
Qualitative degrees are more relative. For example, an ISP might provide a chart of qualitative priority levels in which service level A traffic will receive twice as much bandwidth as traffic under service level B, service level B traffic will receive 20 percent more bandwidth than service levels C and D, service level C traffic will have a higher queuing priority than service level D, and service level D traffic will have a higher queuing priority than non-QoS traffic.
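The bandwidth portion of such a qualitative chart amounts to a set of relative weights. The sketch below translates the hypothetical chart above into weights (A = 2 x B, B = 1.2 x C, C and D equal in bandwidth) and computes each level's share of a link; the queuing-priority distinction between C and D isn't modeled, and the numbers are arbitrary:

```python
# Relative weights derived from the hypothetical qualitative chart:
# A gets twice B's bandwidth, B gets 20 percent more than C, and
# C and D get equal bandwidth (they differ only in queuing priority).
weights = {"A": 2.4, "B": 1.2, "C": 1.0, "D": 1.0}

def shares(link_kbps, weights):
    """Split a link's bandwidth proportionally among service levels."""
    total = sum(weights.values())
    return {level: link_kbps * w / total for level, w in weights.items()}

# On a 5600Kbps link: A ~2400, B ~1200, C ~1000, D ~1000.
print(shares(5600, weights))
```

The point is that qualitative levels fix only these ratios; the absolute bandwidth each level receives still floats with the load on the link.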
The idea behind DiffServ is to remove the entire process of automated per-flow negotiation. (DiffServ does provide allowances for automated traffic negotiation between subnets; however, such cases are rare and put fairly low strain on network resources.) People can negotiate the content of the service levels and choose which service levels they want at what price. But after the administrator sets the service levels, no more negotiation is possible. The elimination of negotiation makes DiffServ far less flexible than IntServ, but it also eliminates the need for signaling between the end system and the router and between the router and the central server. DiffServ developers have contemplated introducing certain automated negotiations into the system. For example, developers might use RSVP to let a network dynamically adjust qualitative service levels in response to traffic. These negotiations aren't per-flow and would occur based on overall traffic. Therefore, these negotiations won't suffer from the scalability problems of per-flow negotiation.
DiffServ's marking and conditioning efficiencies make it the more scalable solution. Teitelbaum said, "The motivator for DiffServ was scalability concerns with IntServ and RSVP. Per-hop, per-flow QoS won't scale to the high speeds and flow fan-in degree found in core networks. The goal is to build on the IETF DiffServ standardization of a few differentiated per-hop forwarding behaviors to build a variety of scalable end-to-end services that provide value beyond the best-effort model of today's Internet."
Pulling It Together
So how do we weave different subnets together? In QoS terms, achieving QoS across different subnets is concatenating subnets. Multisubnet QoS has no official route, just a few different plans. The most visible application is QBone, the experimental QoS-enabled network that the Internet2 group is testing.
Networking gurus widely debate the technological feasibility of concatenating subnets for end-to-end QoS. Teitelbaum said that although his group has high hopes for multisubnet QoS, and hopes that the bilateral DiffServ service level agreements (SLAs) that QBone is exploring will overcome the concatenation problems, QBone remains an experiment because nobody has ever built an interdomain DiffServ network on such a grand scale.
Ron Tully, Microsoft's lead product manager for Windows Networking, thought differently. He said the technology necessary for multisubnet QoS is available, and multisubnet QoS simply depends on working out a workable business model for ISPs to negotiate how they carry one another's QoS traffic.
Let's look at DiffServ concatenation. Putting two DiffServ networks side by side and simply connecting the routers on the edges of the networks presents problems in pushing the packets across the policy boundary. In QoS terms, DiffServ problems lie in mapping the service levels of one network onto the service levels of another. The simplest mapping problem arises when the networks denote the same service levels with different DiffServ Codepoint marks. Some kind of translation program can re-mark packets, or more efficiently, some task force can create a table of standardized markings.
A much more problematic scenario is when different subnets have substantially different service levels. ISP A might sell three service levels: 25Kbps, 75Kbps, and 100Kbps; ISP B might sell four service levels: 25Kbps, 50Kbps, 100Kbps, and 200Kbps. If the ISP A and ISP B networks exchange packets, what will ISP A's network do with the packets that come in marked for 50Kbps service? ISP A has no corresponding service level. Marking such traffic in ISP A's 25Kbps would destroy ISP B's guarantees to customers; marking such traffic to 75Kbps would waste ISP A's network resources. An even more difficult problem arises when ISP A's network receives traffic marked with a 200Kbps bandwidth guarantee. ISP A has no service level that comes close to providing that level of guaranteed bandwidth.
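The ISP A and ISP B dilemma can be made concrete with a small mapping function. The "round up" policy sketched here (promote traffic to the smallest level that still honors the guarantee, at ISP A's expense) is one arbitrary choice among several; the service levels are the hypothetical ones from the text:

```python
# Hypothetical service levels (sustained Kbps) from the ISP A / ISP B example.
ISP_A_LEVELS = [25, 75, 100]
ISP_B_LEVELS = [25, 50, 100, 200]

def map_level(kbps, target_levels):
    """Map an incoming guarantee onto the smallest target level that honors it.

    Returns None when no target level comes close, which is the 200Kbps
    problem the text describes: ISP A simply has nothing to offer.
    """
    candidates = [level for level in target_levels if level >= kbps]
    if candidates:
        return min(candidates)   # rounds up: protects the guarantee,
    return None                  # wastes the receiving network's resources

for b_level in ISP_B_LEVELS:
    print(b_level, "->", map_level(b_level, ISP_A_LEVELS))
# 25 -> 25, 50 -> 75, 100 -> 100, 200 -> None
```

Rounding down instead would conserve ISP A's resources but silently break ISP B's guarantees to its customers, which is exactly why this mapping is a business negotiation rather than a purely technical one.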
Even worse, ISPs might not offer QoS under the same terms. ISP C might offer customers QoS service levels based on bandwidth and latency; ISP D might sell its customers QoS based on the probability of traffic getting through or on queuing priority.
So what are the solutions? The answer depends on whether you're offering qualitative services or quantitative services. The general agreement among QoS developers is that service providers can simply stitch together qualitative services from similar service-level categories. Because providers set the network resource gains of qualitative services in a loose manner, it might make sense to simply take a packet from ISP C's network marked for a midlevel amount of bandwidth and re-mark it for a midlevel amount of priority queuing in ISP D's network. The problem in implementing multisubnet qualitative QoS is setting out exactly what the midlevel is, and the profit-minded analysts attached to the ISPs must resolve that translation.
Quantitative services are a much more difficult arena for multisubnet concatenation. Approximate mappings just won't work because customers who buy quantitative guarantees want what they pay for—measurable, consistent network resource allocations. The ideas on how to deal with the problem fall into two major camps. The Internet2 group recommends the creation of standardized service levels for different networks. This recommendation is a workable solution for the QBone project, but might be harder to apply to the profit-minded and competitive environment of commercial ISPs.
Microsoft offers a different proposition, which the company presents in white papers. The Microsoft approach mixes DiffServ and IntServ: it relies on the efficiencies of DiffServ to handle the mass of traffic across the network, while letting a limited number of signaled IntServ transactions provide high-quality, quantitative QoS connections. You can find an overview of Microsoft's theory of QoS in the white paper "The Quality of Service Technical White Paper" (http://www.microsoft.com/windows/server/technical/networking/qosover.asp).
Instead of requiring different ISPs to offer a standardized set of service levels, the mixed DiffServ and IntServ approach offers another solution. In the mixed DiffServ and IntServ approach, when one customer requires a quantitative connection to another customer, the first customer's computer generates an RSVP request that travels across the subnets and requests resources from each router along the way. So, when network A wants to send network B a flow at 50Kbps with 20ms latency, and network B doesn't have such a service level specified, network A's customer can generate an RSVP signal that, if network B approves, will create a virtual circuit (VC) through network A and network B.
The mixed DiffServ and IntServ solution is viable provided that only a very small number of users need high-quality, quantitative QoS. The solution suffers the same problem as IntServ: poor scalability, because signaling and negotiation consume network resources.
Windows 2000 and Multisubnet QoS
Win2K will feature a QoS Service Provider (SP) and all the Windows 98 QoS components, which include Winsock2 and Generic QoS (GQoS) APIs. Together, these components provide an interface for applications to request QoS. GQoS is a subset of Winsock2 and lets a desktop application request QoS at a relatively high level of abstraction. (For more information about GQoS APIs, see David Durham and Raj Yavatkar, Inside the Internet's Resource reSerVation Protocol, John Wiley & Sons, 1999.) Windows applications can request QoS and react to QoS responses without any understanding of the particular QoS mechanisms of the system. GQoS passes the request to the QoS SP. Then, the QoS SP generates RSVP signals on behalf of the application and receives RSVP responses from PEPs. The QoS SP can also interact with an SBM. The article "Build a Better Network with QoS" describes these Win98 QoS components in detail.
Win2K adds several new components that make it a much more powerful platform for potential multisubnet QoS applications: traffic control providers, a packet scheduler, an SBM, an Admission Control Service (ACS), and a Local Policy Module (LPM). Traffic control providers are the main movers of QoS functionality in Win2K. These providers call for traffic control through Microsoft's Traffic Control (TC) API. TC API calls can directly create or control ATM VCs. For non-ATM traffic control situations, the packet scheduler processes the API calls.
The packet scheduler provides full DiffServ traffic-conditioning functionality. This functionality includes a marker for marking packets' DiffServ Codepoint, a meter for checking packets against the agreed traffic profile (Microsoft calls its meter component a conformance analyzer), and a shaper for delaying traffic and shaping it into the desired traffic profile. The packet scheduler can treat individual flows according to one of three modes: borrow mode, which lets excessive traffic borrow resources from idle flows; shape mode, which shapes nonconforming traffic with the shaper; and discard mode, which simply drops excessive traffic.
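The three per-flow modes can be sketched as a single dispatch function. The flow state, the conformance flag, and the borrow accounting here are all invented; only the borrow/shape/discard distinction comes from the description above:

```python
# Sketch of the three per-flow modes described for the packet scheduler.
# Conformance would come from a meter (conformance analyzer); here it is
# simply passed in as a flag.
def handle(packet_kb, conforming, mode, idle_kb=0, queue=None):
    """Decide what happens to one packet under a given scheduling mode."""
    if conforming:
        return "send"                # in-profile traffic always goes out
    if mode == "borrow" and idle_kb >= packet_kb:
        return "send"                # borrow resources from idle flows
    if mode == "shape" and queue is not None:
        queue.append(packet_kb)      # delay until the flow conforms again
        return "queued"
    return "dropped"                 # discard mode, or nothing to borrow

q = []
print(handle(10, True, "discard"))              # send
print(handle(10, False, "borrow", idle_kb=50))  # send
print(handle(10, False, "shape", queue=q))      # queued
print(handle(10, False, "discard"))             # dropped
```

In the real scheduler these decisions happen per packet in the kernel's send path; the sketch just fixes the decision table the three modes imply.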
The SBM and ACS provide PEP and PDP functionality for Win2K. These components are useful for IntServ QoS deployments. The newest component is ACS, which uses the LPM to combine traditional SBM functionality with Active Directory (AD). ACS replaces traditional SBM services and has a special integrated SBM subcomponent. (Because SBMs act as PEPs for subnets lacking QoS-enabled switches and routers, Microsoft says you might need to disable designated SBM functionality on switches and routers on the subnet for ACS to function.) ACS provides for policy-based admission control; therefore, it integrates a subnet's PEP and PDP functionality with the policy you set in the AD.
Let's say user A tries to initiate a teleconference from his desktop. The desktop sends an RSVP signal to the local network, where the SBM component of ACS receives it. ACS passes the RSVP signal to the LPM. The LPM pulls policy-related objects from the signal and uses Kerberos to process the signal and extract user A's ID. Then, the LPM checks policy in AD, searching for user A's authorizations and priority allowances. Finally, the LPM passes its findings to ACS, which uses them to grant or reject the RSVP request.
Microsoft designed earlier versions of Windows for IntServ-style functionality. Win2K's strength in both IntServ and DiffServ functionality shows Microsoft's dedication to making Windows a strong OS for multisubnet QoS. Win2K is ready for IntServ, DiffServ, and mixed IntServ and DiffServ solutions.
The opposition's criticism of IntServ is accurate. Although powerful, IntServ lacks scalability. And although IntServ can be useful on intranets and on extranets that have only a few subnets, IntServ can't be the basis for a QoS solution for the Internet. Microsoft's suggested IntServ and DiffServ combination suffers from the same weakness as IntServ.
However, DiffServ's concept is on the mark. The greatest barrier to using the DiffServ architecture isn't technological, but business-related. Getting service providers to agree on standard service levels or somehow map their service levels onto the service levels of other providers will be a difficult feat. However, the financial rewards for getting functional QoS over the Internet are promising, and companies are motivated to make DiffServ operational.