In "AD Sites, Part 1," June 2000, I introduced Active Directory (AD) sites and explained how to create and configure them to control replication in your Windows 2000 forest. You're now ready to explore replication in depth and learn how to establish and maintain replication paths within a site and between sites. Let's put your knowledge about AD sites to work.
AD replication can be a complex process because AD has a multipartitioned architecture. In addition, each partition within AD can have several replication elements that affect one another. Microsoft collectively calls these partitions directory partitions, or naming contexts in the X.500 Directory Service (DS) standard. The most common partitioning method is to divide AD into domains. You find as many domain partitions as you have domains in a forest, and you replicate information among domain controllers within a given domain's boundaries.
AD also has other partitions in which you replicate information. The configuration partition contains information about the structure of the forest and its domains, so the partition replicates forestwide. The schema partition contains all class and attribute information (i.e., the AD database structure) for the forest; therefore, its information also replicates across the entire forest. You don't need to worry about the configuration and schema partitions very often because their contents seldom change, so you typically don't have information to replicate. Also, because both partitions replicate information forestwide, you can group them in the same topology.
The Global Catalog (GC) is a replication element that you also need to become familiar with. AD's GC is the equivalent of a White Pages telephone directory. Similarly to how the White Pages indexes people and businesses, the GC indexes every object in the forest. The GC contains the most common information (e.g., user account, telephone number, email address) about an AD object and where you can locate it if you need more detail.
Although Microsoft doesn't list the GC as a directory partition, the GC has a replication topology that you must account for when you plan your site design and domain controller placement. The GC resides only on domain controllers that you flag as GC servers. For GC servers to be available throughout the forest, they must replicate their information to one another. GC servers store only a few attributes for each object, but they store a replica of every object in the forest, which means they experience heavier replication traffic than other infrastructure servers. Whenever you add, change, or delete an object in the forest, the action ripples through all the GC servers. (For more information about the GC, see Zubair Ahmad, "An Overview of Active Directory," http://www.win2000mag.com/articles, InstantDoc ID 8178.)
Let's put this AD information into perspective. AD includes all the objects in the forest and all the objects' attributes. Unless you have only one domain, no one domain controller will contain all these objects. The portion of AD that resides on a domain controller is the domain partition's objects and attributes, a read-only configuration-partition replica, and a schema-partition replica. If you designate the domain controller as a GC server, the domain controller will contain every object in the forest (but only a few of the objects' attributes). A large company with large domains can have sizeable GC servers but not have one server that contains the entire AD content.
Windows NT 4.0 uses a master-slave update model in which the master (i.e., the PDC) is the only copy of writeable domain information. The slaves (i.e., the BDCs) receive regular and frequent updates. Synchronization is necessary between only the PDC and BDCs. AD, however, uses a multiple-master update model in which all the domain controllers have writeable AD copies. Because all the domain controllers can create updates, they need a method to replicate their changes to one another. AD uses two replication methods—one for replication within a site (i.e., intrasite replication) and one for replication between sites (i.e., intersite replication).
Intrasite Replication. When you make a change to a domain controller, the change must propagate through the domain controller's directory partition. With the exception of some special cases (e.g., account lockout, password changes), AD uses a loosely consistent replication model to propagate changes. Loosely consistent means that changes on a domain controller take a finite amount of time (5 minutes by default) to propagate to the domain controller's neighbor and across the domain controller's directory partition. For example, if users change their pager numbers, the time it takes for the changes to propagate across the domain depends on how many domain controllers the users' sites have, how many other sites the users' domains span, and the number of domain controllers in those other sites. Special cases cause urgent replication in which domain controllers don't wait the typical 5 minutes before triggering replication.
AD uses connection objects to support replication between domain controllers. A connection object is a one-way, remote procedure call (RPC)-based incoming replication route from a domain controller's neighboring domain controllers, which Microsoft calls replication partners. Similarly to how two one-way trusts form a two-way trust in NT 4.0, replication partners use two connection objects—one for each direction—to perform bidirectional replication, as Figure 1 shows.
You can observe a domain controller's connection objects in the Active Directory Sites and Services Microsoft Management Console (MMC) snap-in window. Double-click a site container to get a list of the site's domain controllers. Within a site, each domain controller has an NTDS object, which you can see when you double-click the domain controller. Double-click an NTDS object to see the connection objects that transport replication traffic from other domain controllers (i.e., inbound replication connection objects) to this domain controller. Right-click the inbound replication connection object and select the Replicate Now menu item to request an immediate update from a domain controller's replication partners.
Managing AD's connection objects is too labor-intensive to be a manual task. Fortunately, AD provides the Knowledge Consistency Checker (KCC), which dynamically configures and updates connection objects between domain controllers to create the replication pathways. The KCC is active on every domain controller and is the glue that holds AD together. You can't directly observe KCC as a service, but you can see its actions in the DS event log.
The KCC follows a specific algorithm to connect domain controllers within a site. The KCC generates a list of the site's servers that hold a given directory partition (e.g., the domain partition) and uses connection objects to join the servers in a bidirectional ring, as Figure 2 shows. This ring topology ensures that if one server fails, the remaining servers can bypass the failed server to replicate with the other servers.
As you increase the number of domain controllers in a site, the KCC changes its algorithm to avoid the lengthy replication process of each domain controller passing the originating update to its neighbor. The KCC uses a three-hop rule in which no domain controller can be more than three replication hops from another domain controller. In this way, an update takes no more than three hops before it reaches another domain controller that has already received the update through another path.
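To see why the plain ring eventually needs shortcuts, consider the hop counts. The following Python sketch (my own illustration, not part of any Microsoft tool) computes replication hop distances on a bidirectional ring; once a ring grows beyond seven domain controllers, the worst-case distance exceeds three hops, which is why the KCC must add extra connections.

```python
def ring_hops(n, a, b):
    """Replication hops between domain controllers a and b (0-indexed)
    on a bidirectional ring of n servers; updates can travel either way."""
    d = abs(a - b) % n
    return min(d, n - d)

def max_ring_hops(n):
    """Worst-case hop count between any two servers on a plain ring."""
    return n // 2

# A 7-server ring already satisfies the three-hop rule (worst case: 3 hops);
# an 8-server ring does not (worst case: 4 hops), so shortcuts are needed.
for n in (4, 7, 8, 12):
    print(n, max_ring_hops(n))
```

The sketch only models the ring itself; the shortcut connections the KCC adds are what bring the worst case back down to three hops.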
To enforce the three-hop rule, the KCC creates optimizing connections (i.e., shortcuts) on the replication pathways between the domain controllers, as Figure 3 shows. The KCC creates the optimizing connections at random—not necessarily on every third domain controller. You can click each domain controller's NTDS containers in the Active Directory Sites and Services snap-in and draw lines between the domain controllers to manually draw out the replication topology. This task gets complicated, so you can use the Replmon tool to view the replication topology in more detail. (Replmon is a Win2K Support Tool that you can install from the \support\tools folder on the Win2K Advanced Server disk.) The KCC runs on every domain controller, and all KCCs use the same algorithm to create the site topology. Therefore, the domain controllers will replicate with one another until they all contain the same information.
A good site design can predict latency, which is how long an originating update on one domain controller in the forest will take to propagate to another point in the forest. For example, if an update originates on domain controller A in Figure 3, the maximum amount of time a change takes to replicate around a site (to domain controller B) is 15 minutes because AD's default replication interval is 5 minutes and KCC invokes the three-hop rule.
When you determine the amount of time an object or attribute will take to replicate through the forest, you must consider the type of object or attribute that you're replicating. For example, when users change their pager numbers, the changes replicate through only the users' domains. However, when users change their email addresses, those attributes replicate to all GC servers in the forest because email addresses are GC objects. Another change that takes more time to replicate through a forest is adding a new domain to the forest. When you add a domain in the configuration partition, you create a change that must replicate across all domain controllers in the forest.
Intersite Replication. Win2K assigns intersite replication management to bridgehead servers. (For my intersite replication discussion, I assume you use only IP for the intersite transport. If you use the SMTP intersite transport, see the Microsoft Windows 2000 Resource Kit.) The KCC automatically designates bridgehead servers. However, because these servers encounter more network traffic than other site domain controllers encounter, you might want to manually select your bridgehead servers.
Selecting your bridgehead servers has several advantages. First, you can designate a high-capacity machine as your bridgehead server instead of letting the KCC choose a machine that lacks the capacity you need. Second, the KCC might select different bridgehead servers for each directory partition and GC. When you manually choose your bridgehead servers, you can decide how to consolidate them. Third, designating preferred bridgehead servers facilitates troubleshooting because you already know the bridgehead servers' locations. However, if you overrule the KCC and manually select bridgehead servers, you must create more than one preferred bridgehead server so that you have a fallback server if the primary one dies.
To understand the intersite replication process, envision the intrasite replication method in which domain controllers are objects within the site, each with an engine (i.e., the KCC) to generate the topology. For intersite replication, take the intrasite replication method and move it up a notch in scale. The intersite replication topology considers sites as objects within the forest, each with an engine to generate the topology. This engine is the Inter-Site Topology Generator (ISTG). One domain controller per site (typically the site's first domain controller) assumes the ISTG role. You can't designate the ISTG manually, although you can still configure intersite replication. The ISTG domain controller's KCC creates the connections between the domain controllers in its site and the domain controllers in other sites. These connections include the inbound replication connection objects for all bridgehead servers in the domain controller's site.
If the ISTG dies, the domain controller with the next highest globally unique ID (GUID), which is the 128-bit value that uniquely identifies every AD object, assumes the ISTG role. To see a site's ISTG, you can click that site's object in the Active Directory Sites and Services snap-in. In the details (right-hand) pane, right-click the NTDS Site Settings object, then click Properties. The current role owner (i.e., the ISTG domain controller) appears in the Server box under Inter-Site Topology Generator.
When you create sites, you need to establish clear criteria for making a location a site. You then apply the criteria to your company locations to develop a short list of sites. To start the process, begin with one site for the entire company, then apply tests for creating a new site. For example, let's assume that you have a high-speed network that connects many users. To determine the site boundaries, follow an important rule—satisfy the requirement, don't go for the obvious answer. The requirement is that users need to authenticate quickly, and the obvious answer is that these users need their own site. If you design your site boundaries to satisfy the requirement, you arrive at a different answer than if you go for the obvious answer. To satisfy the requirement and take a first cut at defining your site boundaries, ask yourself whether you want users at one location authenticating on a domain controller at another location. If the answer is yes, you probably have good connection speed and bandwidth between these locations and could consider the locations as one site. If the answer is no, keep these two locations in separate sites (or parts of larger sites).
Apply the requirement for quick user authentication across your entire company. If your company is a multinational, you will immediately see the need for several sites, at least one site per region of the world. Slow WAN circuits (such as the 512Kbps circuits that are common in the Asia-Pacific region) are a reason for creating additional sites. To test site boundaries, you again ask yourself whether you want users at one location authenticating on a domain controller at another location. You wouldn't want users to authenticate over a transatlantic or transpacific circuit, for example. And even if such authentication is possible, increasing the traffic over such expensive circuits just to avoid creating a few extra sites is not a wise idea.
Now apply a second test to the site: Will this location have AD-aware client/server applications? For example, will the users at a site use Dfs to pick a replica that is local to the site? If applications use sites to define what they consider to be local, you might want smaller sites that have very high connectivity. For example, you might not mind if users at one location authenticate at another location, but you don't want these users running a next-generation SAP application against a database at a remote location.
After you define your sites, you connect them with site links. The amount of latency that you want your forest to have determines your site configuration. For example, if you have more than two sites in your configuration and you want the least latency, you need a full mesh topology (i.e., one in which all sites have a site link to one another, as Figure 4 shows). However, like the NT complete trust domain model, this configuration doesn't scale well. To handle a moderate number of sites with low latency, your best bet is a hub-and-spoke design, which Figure 5 shows.
In a hub-and-spoke design, the hub is typically a company's physical network hub and a site is no more than two hops from another site. If you plan a 15-minute latency across a site and a minimum 15-minute latency across each site link, your maximum latency across the forest will be 1 hour and 15 minutes for a three-site hub-and-spoke design (i.e., as many as 15 minutes across site A, depending on the number of domain controllers in the site; plus a 15-minute replication interval on site link A-B; plus as many as 15 minutes across site B; plus a 15-minute replication interval on site link B-C; plus as many as 15 minutes across site C). You can also factor in some time (e.g., 5 minutes) for data to physically propagate between sites over WAN circuits. The completed replication time can be much smaller, depending on how many domain controllers your sites have, how you select your bridgehead servers, and the location of the originating update domain controller.
In a fully routed IP network, all sites can talk to one another, and the transitive nature of site links lets replication pass from one site to another site throughout the forest. In a nonrouted network, you can turn off the transitive site-link feature for the IP transport and define site-link bridges manually. When you define site-link bridges manually, you control which sites can replicate with which.
More Design Tips
After you understand the basic site-design concepts, you need to follow a few guidelines to design an effective site. First, use site-link costs to control replication pathways and introduce fault tolerance. Site-link costs assign a number between 1 and 100 to each site link; the ISTG uses these costs to prefer one site link over another. You can configure links to other sites and set the links' costs higher than costs for the primary links. Replication will always travel along the lowest-cost links, but if these links fail, the higher-cost links will bear the traffic. For example, if a site has two links with costs of 10 and 50, replication will always use the link with the cost of 10. If this link fails, replication traffic will use the link with the cost of 50.
You must have at least one domain controller at each site to gain the benefits of reduced WAN traffic and local domain controller selection. If you don't have a local domain controller, all authentication traffic must go to another site. Indeed, I could argue that you should have two domain controllers at a site or none. If your site doesn't have a domain controller, domain controllers at a nearby site will elect themselves to appear as members of the site that lacks a domain controller.
You want to designate at least one GC server per site. You need GC servers to log on users successfully (i.e., to look up users' universal group membership). If GC servers are available only offsite, the authentication traffic must go to that offsite GC—which decreases the benefits you gain from creating a site. For the same reason, you need to designate at least one DNS server per site.
You must factor in the number of users at a site when you determine the number of domain controllers your forest requires. A site with 2000 users might require two domain controllers, but a site with 50 users probably won't require a domain controller at all.
Limit the number of sites that cross domain boundaries. If multiple domains use one site, the overall topology can become complex because each directory partition has its own replication topology. For example, if you have two domains in a site, you need to track four replication topologies—two domain partitions, the configuration and schema partition, and the GC. Each of these topologies, except for the GC, can also have its own bridgehead server. If you create only a few Win2K domains and use organizational units (OUs) for security instead of using several domains to delegate administration and security, you can limit complicated situations involving multiple domains in one site.
Designing an AD site is not a simple task, and you must thoroughly understand your company's physical network to design a good site. Also, you need to balance the features you incorporate into the design with the cost of supporting these features. If the cost to maintain a feature is more than the cost of the problem that the feature solves, leave the feature out. For example, if you have a relatively small network with good connectivity between locations, defining costs for all your site links might be overkill.
Be patient when you test your site designs because AD's loosely consistent replication model is more difficult to use than NT 4.0's model. NT 4.0 quickly propagates an administrator's changes to the OS's directory through the domain. However, the same process in Win2K is slower, and AD changes can seem to take forever.
Designing your AD site can become a balancing act. You can choose to create several small sites, resulting in less WAN traffic and highly localized domain controller selection, or you can create fewer, but larger sites that you can manage more easily. Fewer and larger sites also have less latency and provide more replication partners within a site. Your final site design will likely be a balance between the site types.