Windows Server 2003 on HP ProLiant Servers : Active Directory Logical Design - Replication Topology (part 2)


5. GC Servers

GC servers are special-purpose DCs that hold a writeable copy of all objects and attributes for the domain in which they reside as DCs, and a read-only copy of all objects and some of their attributes from the other domains in the forest. Thus, GC servers really matter only in multidomain forests and have no special function in single-domain forests. The placement of GC servers in the enterprise is an important design decision. The important tasks that GCs perform include

  • Authentication for all users in a native-mode domain in a multidomain forest. If a GC isn't available and no cached credentials are on the computer, the user is denied logon. This is because only a GC can enumerate universal security groups and apply that information to the user's token at logon. In a single-domain forest, contacting a GC during logon is not required.

  • Authentication for users logging on with the UPN. Logging on with a username in UPN format (for example, user@company.com) will fail if a GC is not available.

  • Exchange 2000 and later use the GC for the Global Address List (GAL). Exchange preferentially finds a GC in the same site as the Exchange server. If one is not available, Exchange finds one in the nearest site.

  • Servicing users' requests for information in AD, such as enterprisewide LDAP (Lightweight Directory Access Protocol) queries performed by the user or an application. For example, if an application performs LDAP queries to locate resources such as users, file servers, and so on across a multiple-domain environment, a local GC improves that search performance. (A command-line sketch of locating and querying a GC follows this list.)
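
As a rough command-line illustration of the last two points, the following sketch uses the standard Windows Server 2003 tools nltest (Windows Support Tools) and dsquery to locate a GC and to run a forest-wide query against the GC port (3268). The domain name company.com and the query filter are placeholders for your own environment; confirm the switches with each tool's /? help.

   REM Locate a GC for the domain/forest
   nltest /dsgetdc:company.com /GC

   REM Forest-wide query answered by a GC (port 3268) rather than a single domain partition
   dsquery * forestroot -gc -filter "(&(objectCategory=person)(objectClass=user))" -attr sAMAccountName -limit 10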

The placement of GCs in the enterprise has always been a much-debated issue. Some companies put a GC in every site, whereas others put multiple GCs in each site for redundancy. Compaq's strategy was to put GCs in poorly connected sites, in prime sites on the corporate backbone, and in as few other places as possible. The philosophy was to utilize the network bandwidth as much as possible: if the bandwidth between locations was sufficient, Compaq would collect them into a single site with a single DC and GC, using the network to reduce the need for GCs. For example, the entire country of Canada had only two sites, and enterprisewide, only about 80 sites served about 800 locations. Exchange 2000 caused GCs to be placed deeper in the structure than originally anticipated, because the GC hosts Exchange's Global Address List and is accessed frequently by users.

The key to GC placement is a good understanding of the GC functions and how they fit in your computing infrastructure. In the early days of Windows 2000, customers complained that the GC authentication requirement forced them to place GCs in small, remote sites just to ensure users could log on. Microsoft provided a Registry key (see Microsoft KB 241789, “How to Disable the Requirement that a GC Server Be Available to Validate User Logons”) that turned this behavior off, but warned that disabling it while using universal groups creates a potential security hole. To mitigate this issue, Microsoft provided the Universal Group Membership Caching feature (also called Global Catalog Caching) in Windows Server 2003. This allows DCs to contact a GC on behalf of a user, obtain the user's universal group membership, and cache it locally so the user can log on even if a GC is not available. It is enabled on a site-by-site basis, allowing you to apply it to poorly connected sites that have a DC but no GC. This feature makes a big difference in deploying GCs: you can now enable GC caching at a site until the population grows enough to justify a GC.
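
For reference, a minimal sketch of the registry change described in KB 241789 is shown below as a .reg file; the key path follows the KB article, but confirm it against the article before applying it, and keep in mind the warning about universal groups. Universal Group Membership Caching itself needs no registry edit: it is enabled per site by checking Enable Universal Group Membership Caching on the NTDS Site Settings object in Active Directory Sites and Services.

   Windows Registry Editor Version 5.00

   ; Per KB 241789: creating the IgnoreGCFailures subkey under Lsa on a DC allows
   ; logons to complete without a reachable GC. Restart the DC afterward. Avoid
   ; this if you use universal security groups, since their memberships will not
   ; be evaluated at logon.
   [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\IgnoreGCFailures]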

Another Windows Server 2003 feature that impacts GC deployment is the Install from Media (IFM) feature. IFM allows promotion of a member server to a DC or a GC using a restored system state backup of another DC or GC in the domain. IFM not only prevents a surge in network traffic while DCPromo runs, but also significantly reduces the amount of time it takes to build a DC or GC. HP identified IFM as a critical feature that contributed to the company's deployment of Windows Server 2003 at RC2. Consider the problem of GC failure: broken replication, hardware failures, and so on can force GCs to be rebuilt. With fewer GCs than sites, a GC failure impacted multiple sites. Further, with Exchange depending on the GC, a failure caused Exchange transactions to be serviced by a GC at another site. Rebuilding a GC of HP's size (the database was 18GB at the time) took about 3 to 5 days, depending on the location, network traffic, and so on. Using IFM, HP can rebuild a GC in about 20 minutes. Even if HP had to build the media and ship it overnight to the remote site, it's faster than doing it over the network. Remember that the backup media for IFM has the same limitation as other backup media: it has a useful shelf life of less than the tombstone lifetime period (default 60 days).
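
To make the IFM workflow concrete, here is a rough outline using ntbackup and dcpromo; the paths and job name are placeholders, and the exact wizard options should be verified in a lab before using this in production.

   REM 1. On an existing DC/GC, back up the system state (the media is usable only
   REM    within the tombstone lifetime, 60 days by default).
   ntbackup backup systemstate /J "IFM source" /F "D:\IFM\sysstate.bkf"

   REM 2. On the member server to be promoted, restore the backup to an alternate
   REM    location (for example D:\IFMRestore) using the ntbackup Restore wizard.

   REM 3. Run dcpromo in advanced mode and choose the option to install from the
   REM    restored backup files instead of replicating everything over the network.
   dcpromo /adv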

Placement of GC servers is a key element in an AD design.

6. Other Windows Server 2003 Replication Improvements

Improved Spanning Tree Algorithm

The spanning tree algorithm, used to calculate the intersite topology (least-cost paths), was completely rewritten using Kruskal's and Dijkstra's algorithms and made much more efficient. This allows the KCC to calculate the topology for 3,000 sites and a single domain in about 30 seconds, although hardware resources affect this value. This is a major step in removing the scalability issues that were present in Windows 2000. Even if you don't have an environment with hundreds of sites, this is important because the efficiency of the spanning tree algorithm removes the design limits that Windows scalability might otherwise impose, allowing you to design the infrastructure based on business needs rather than Windows limits.
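
If you want to gauge how quickly the topology is recalculated in your own environment, you can trigger the KCC manually with repadmin (Windows Support Tools); the server name below is a placeholder.

   REM Ask the KCC/ISTG on a specific DC to recalculate the replication topology now
   repadmin /kcc DC01.company.com

   REM List the connection objects that result
   repadmin /showconn DC01.company.com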

Improved Data Compression

Another improvement is the compression of data for intersite replication. In Windows 2000, data compression was a two-edged sword: it reduced the size of the data transferred across the network by roughly a factor of 10, but decompression placed such a heavy load on the bridgehead server (BHS) in the target site that DCs became overwhelmed by it and other tasks, such as servicing outbound replication requests from their partners, so those partners were not serviced in a timely manner, adding to latency. High latency can also occur if a large number of sites replicate to a single BHS at the hub site, if a large number of sites causes the KCC to consume inordinate amounts of CPU time, or if the replication schedule is very frequent. High latency can make replication appear broken, and password changes, policy changes, and other Group Policy changes are not replicated in a timely manner. Microsoft improved the decompression engine in Windows Server 2003 to drastically reduce the load on the DC. This came at the expense of the compression ratio, which is somewhat lower than in Windows 2000, slightly increasing WAN traffic. Thus, Windows Server 2003 reduces the load on BHSs significantly while increasing WAN traffic only slightly.

Because some environments might have low bandwidth utilization on their network and can afford to replicate uncompressed data to eliminate the decompression cost on the target BHS, Windows Server 2003 provides a way to turn off compression of intersite replicated data. This is accomplished by modifying the options attribute on the site link or on individual nTDSConnection objects. This can be done via the ADSIedit snap-in (part of the Windows Support Tools). Open the snap-in by clicking Start, Run, and then entering adsiedit.msc. Expand the Configuration container, and then go to:

CN=Configuration,DC=Company,DC=com (where company.com is the name of the domain)
   CN=Sites
      CN=Inter-Site Transports
         CN=IP

Click the CN=IP folder, right-click the site link in the right pane for which you want to disable intersite compression, and select Properties. In the link's properties page, scroll through the attributes list to locate the Options attribute. Double-click the Options attribute and, in the ensuing Integer Attribute Editor dialog box, enter 4, as shown in Figure 3. Click OK twice to close the properties dialog boxes. This eliminates intersite compression on all connections using this link. You can optionally configure this feature on individual connection objects by setting the value 4 on the Options attribute of the individual connection object, as shown in Figure 4.

Figure 3. Disabling intersite compression on IP transport via ADSIEdit.

Figure 4. Disabling intersite compression on individual connection objects via ADSIEdit.
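
If you prefer to script this change instead of clicking through ADSIedit, the same attribute can be set with an LDIF file and ldifde. The site link name HUB-BRANCH and the domain components below are placeholders, and note that this write replaces the whole options value, so check whether the attribute already holds other bits before importing.

   # DisableCompression.ldf -- set options=4 (disable intersite compression) on a site link
   dn: CN=HUB-BRANCH,CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=company,DC=com
   changetype: modify
   replace: options
   options: 4
   -

Import the file on a DC with ldifde -i -f DisableCompression.ldf.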

warning

Disabling intersite compression is an advanced replication configuration option. You should understand the consequences of this action by thoroughly testing it in a lab environment that represents your production network. Disabling compression will increase replication traffic because larger amounts of data cross the WAN. You should document this action for troubleshooting purposes. If your network is running close to capacity, there might not be enough margin to support this feature, and the extra traffic might be difficult to diagnose as the cause of pushing the network over the limit.


Load Balancing to BHSs

In addition, Windows Server 2003 implements a random BHS selection process. As connections are made to a site, the ISTG randomly selects an eligible DC as the BHS. This takes place as soon as a DC joins the domain, unlike Windows 2000. Windows Server 2003 also implements the capability to balance replication by staggering the replication schedules of downstream partners.
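
To see which DCs the ISTG has currently selected as bridgeheads, the Windows Server 2003 version of repadmin offers a view; the server name below is a placeholder.

   REM Show the bridgehead servers and their replication state, as seen from a given DC
   repadmin /bridgeheads DC01.company.com /verbose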

Replication schedule staggering is implemented via Windows Server 2003's Active Directory Load Balancing (ADLB) tool, available in the Resource Kit, downloadable from http://www.microsoft.com/windowsserver2003/techinfo/reskit/resourcekit.mspx. Unfortunately, there isn't a lot of documentation on ADLB from Microsoft, and the Resource Kit help file isn't much help either. However, here are a few pointers:

  • The ADLB is run on sites with multiple DCs in a domain. If there is only one DC in the site for any given domain, ADLB won't help because it will have only one DC to manage.

  • ADLB creates its own connections, and the KCC relinquishes control over building and maintaining connections. Replication takes place over the ADLB's connections.

  • Modifying site link schedules and costs with ADLB in place will have no effect, because replication is not using the KCC-generated objects.

  • To return to using the KCC's connection objects, just delete the connections created by ADLB, and the KCC will generate new ones and begin managing them.

To balance the BHS load, run the tool once per hub site, and run it again every time the topology changes. In a large environment, you can balance the connections in stages to soften the impact. For schedule balancing, ADLB changes schedules so that they are balanced over time rather than randomized, as in normal KCC behavior.
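
For completeness, here is what an ADLB run against a hub site might look like. The switch names (/server, /site, /commit) are assumptions recalled from the Resource Kit tooling rather than verified documentation, so confirm them with adlb.exe /? before running anything; without /commit the tool should only preview the changes it would make.

   REM Preview balanced connections and staggered schedules for the hub site
   REM (switch names are assumptions; verify with adlb.exe /?)
   adlb.exe /server:HUBDC01 /site:Hub-Site

   REM Apply the changes once the preview looks reasonable
   adlb.exe /server:HUBDC01 /site:Hub-Site /commit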