A baseline is a measured performance level used as a starting point for comparison against future network and system performance. When a server is first monitored, there is very little to compare its statistics against. After a baseline is created, information gathered at any point in the future can be compared against it. The difference between the current statistics and the baseline statistics is the variance caused by system load, application processing, or resource contention.
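To make the comparison concrete, here is a minimal sketch in Python of how a current sample can be measured against a stored baseline. The counter names and values are hypothetical placeholders rather than output from any particular monitoring tool; the point is only that the variance is the per-counter difference between the current reading and the baseline.

```python
# Minimal sketch: compare a current sample against a stored baseline.
# Counter names and values are hypothetical placeholders.

baseline = {
    "cpu_percent": 35.0,        # typical midday CPU utilization
    "available_mb": 2048.0,     # typical available memory (MB)
    "disk_queue_length": 1.2,   # typical average disk queue length
}

current = {
    "cpu_percent": 62.0,
    "available_mb": 1400.0,
    "disk_queue_length": 3.8,
}

def variance(baseline, current):
    """Return the absolute and percentage deviation for each counter."""
    report = {}
    for name, base in baseline.items():
        now = current[name]
        delta = now - base
        pct = (delta / base) * 100 if base else float("inf")
        report[name] = (delta, pct)
    return report

for name, (delta, pct) in variance(baseline, current).items():
    print(f"{name}: {delta:+.1f} ({pct:+.0f}% vs. baseline)")
```

A counter showing a large variance, such as the hypothetical disk_queue_length above, is the one to investigate for load, application processing, or contention.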
To set a baseline value, you need to gather a normal set of statistics on each system that will eventually be monitored or managed. Baselines should be created for both normal and stressed periods. The workload on a machine at night, when fewer users are connected, provides a poor baseline for comparison with real-time data gathered in the middle of the day. Information sampled in the middle of the day should be compared with a baseline collected at around the same time of day under normal load.
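One way to honor this time-of-day matching is to key stored baselines by a time window and select the window that covers the sample being compared. The sketch below assumes baselines are kept in a simple dictionary keyed by hour ranges; the windows and values are illustrative only.

```python
from datetime import datetime

# Hypothetical baselines keyed by (start_hour, end_hour) windows.
baselines_by_window = {
    (0, 6):   {"cpu_percent": 10.0, "available_mb": 3500.0},  # overnight
    (6, 18):  {"cpu_percent": 40.0, "available_mb": 2000.0},  # business hours
    (18, 24): {"cpu_percent": 20.0, "available_mb": 3000.0},  # evening
}

def baseline_for(sample_time: datetime) -> dict:
    """Return the baseline collected at roughly the same time of day."""
    hour = sample_time.hour
    for (start, end), stats in baselines_by_window.items():
        if start <= hour < end:
            return stats
    raise ValueError("no baseline window covers this hour")

# A midday sample is compared against the business-hours baseline,
# never against the overnight one.
print(baseline_for(datetime(2011, 5, 2, 13, 30)))
```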
Creating baselines should be an ongoing process. If an application or a new service is added to a server, a new baseline should be created so that future comparisons are made against the most current picture of system performance.
Reducing Performance Monitoring Overhead
Performance monitoring uses system resources, which can affect both the performance of the system and the data being collected. To keep monitoring and analysis from skewing the results on the machines being monitored, you need to minimize the impact of the monitoring itself. The following steps help keep performance monitoring overhead to a minimum on the server being monitored so that the resulting analysis is as accurate as possible:
Use a remote server to monitor the target server. Servers can be dedicated to monitoring several remote servers. Although this can increase network bandwidth consumption, the monitoring and tracking of information do not degrade CPU or disk I/O on the target as they would if the monitoring tool were running on the server being monitored.
Consider reducing the frequency of data collection by lengthening the collection interval, because more frequent collection increases overhead on the server.
Avoid using too many counters. Some counters are costly in terms of the system resources they tax and can increase overhead, and tracking many activities at one time also makes the results harder to interpret.
Use logs instead of displaying graphs. The logs can then be imported into a database or report. Save logs on hard disks that are not themselves being monitored or analyzed (see the sketch after this list).
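As a rough illustration of the last two points, the following sketch samples a handful of statistics at a deliberately long interval and appends them to a log file rather than rendering a live graph. It uses the third-party psutil library purely as a stand-in collector; the library, the interval, and the log path are assumptions, not anything prescribed by the text.

```python
import csv
import time
from datetime import datetime

import psutil  # third-party collector, used only for illustration

LOG_PATH = "D:/perf_logs/baseline.csv"   # a disk that is not being analyzed
INTERVAL_SECONDS = 60                    # long interval keeps overhead low
COUNTERS = ("cpu_percent", "mem_percent", "disk_read_bytes", "net_sent_bytes")

def sample() -> dict:
    """Collect one row of statistics; few counters, no on-screen graphing."""
    return {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "cpu_percent": psutil.cpu_percent(interval=None),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_read_bytes": psutil.disk_io_counters().read_bytes,
        "net_sent_bytes": psutil.net_io_counters().bytes_sent,
    }

def log_forever():
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=("timestamp",) + COUNTERS)
        if f.tell() == 0:
            writer.writeheader()
        while True:
            writer.writerow(sample())
            f.flush()
            time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    log_forever()
```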
Important Objects to Monitor
The number of system and application components, services, and threads that can be measured in Windows Server 2008 R2 is so extensive that it is impractical to monitor thousands of processor, print queue, network, or storage usage statistics at once. Defining the roles a server plays in the network environment helps narrow down what needs to be measured. Servers can be defined and categorized based on their function, such as application server, file and print server, or services server (DNS, domain controller, and so on).
Because servers perform different roles, and hence have different functions, it makes sense to monitor only the essential performance objects. This prevents the server from being overwhelmed by the monitoring of objects that are unnecessary for measurement or analysis.
Overall, four major areas
demand the most concern: memory, processor, disk subsystem, and network
infrastructure. They all tie into any role the server plays.
The following list describes objects to monitor based on the roles played by the server; a short sketch consolidating these roles into counter sets follows the list:
Active Directory Domain Services— Because the DC provides authentication, stores the Active Directory database, holds schema objects, and so on, it receives many requests. To process all these requests, it consumes a lot of CPU, disk, memory, and network bandwidth. Consider monitoring memory, CPU, system, network segment, network interface, and protocol objects such as TCP, UDP, NBT, NetBIOS, and NetBEUI. Also worth monitoring are the Active Directory NTDS service and site server LDAP service objects. DNS and WINS also have applicable objects to be measured.
File and print server— Print servers that process graphics-intensive jobs can consume system CPU cycles very quickly, and file servers consume a lot of storage space. Monitor the PrintQueue object to track print spooling data. Also monitor CPU, memory, network segment, and logical and physical disk objects for both file and print data collection.
Messaging collaboration server— A messaging server such as Exchange Server 2010 uses a lot of CPU, disk, and memory resources. Monitor memory, cache, processor, system, and logical and physical disk objects. Exchange-specific objects, such as message queue length or name resolution response time, are added to the list of available objects after Exchange is installed.
Web server—
A web server is usually far less disk intensive and more dependent on
processing performance or memory space to cache web pages and page
requests. Consider monitoring the cache, network interface, processor,
and memory usage.
Database server— Database servers such as Microsoft SQL Server 2008 can use a lot of CPU and disk resources. They can also use an extensive amount of memory to cache tables and data, so RAM usage and query response times should be monitored. Monitoring objects such as system, processor, logical disk, and physical disk is helpful for overall system performance operations.
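To pull the role descriptions above together, the following sketch maps each role to a representative set of counters on top of the four core areas (memory, processor, disk, network). The counter strings follow the general style of Windows performance counter paths, but the specific names shown here are illustrative assumptions and should be verified against the objects actually exposed on the monitored system.

```python
# Illustrative mapping of server roles to representative counters.
# Counter paths mimic Windows Performance Monitor naming but are
# examples only; confirm the exact object and counter names on the
# system being monitored.

CORE_COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available MBytes",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\Network Interface(*)\Bytes Total/sec",
]

ROLE_COUNTERS = {
    "domain_controller": [r"\NTDS\LDAP Searches/sec"],
    "file_print":        [r"\Print Queue(_Total)\Jobs"],
    "messaging":         [r"\Memory\Cache Bytes"],  # plus Exchange objects once installed
    "web":               [r"\Web Service(_Total)\Current Connections"],
    "database":          [r"\SQLServer:Buffer Manager\Buffer cache hit ratio"],
}

def counters_for(role: str) -> list[str]:
    """Core memory/CPU/disk/network counters plus the role-specific extras."""
    return CORE_COUNTERS + ROLE_COUNTERS.get(role, [])

for path in counters_for("domain_controller"):
    print(path)
```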