When you install SQL Server in a clustered server
configuration, you create it as a virtual SQL Server. A virtual SQL
Server is not tied to a specific physical server; it is associated with a
virtualized SQL Server name that is assigned a separate IP address (not
the IP address or name of the physical servers on which it runs).
Handling matters this way allows your applications to be completely
abstracted from the physical server level.
Failover clustering has a new workflow for all Setup scenarios in SQL Server 2008. The two options for installation are as follows:

- Integrated installation: This option creates and configures a single-node SQL Server failover cluster instance. Additional nodes are added by using the Add Node functionality in Setup. For example, for an Integrated installation, you run Setup to create a single-node failover cluster, and then you run Setup again on each node you want to add to the cluster.
- Advanced/Enterprise installation: This option consists of two steps. The Prepare step defines and prepares all nodes of the failover cluster to be operational. After you prepare the nodes, the Complete step is run on the active node (the node that owns the shared disk) to complete the failover cluster instance and make it operational. (A command-line sketch of both workflows follows this list.)
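The following is a minimal sketch of what these two workflows look like from the Setup command line. The virtual server name (VSQLSERVER2008) and domain (GOTHAM) come from this chapter's example; the instance name, feature list, IP address, drive letter, accounts, and passwords are placeholders, and the exact set of required parameters varies by SQL Server 2008 edition and service pack, so treat this only as an illustration of the flow, not a complete command reference.

```
rem --- Integrated installation: create a one-node failover cluster instance ---
setup.exe /q /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE,FULLTEXT ^
  /INSTANCENAME=MSSQLSERVER ^
  /FAILOVERCLUSTERNETWORKNAME=VSQLSERVER2008 ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.1.50;Public Network;255.255.255.0" ^
  /INSTALLSQLDATADIR="S:\SQLData" ^
  /SQLSVCACCOUNT="GOTHAM\sqlservice" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="GOTHAM\sqlagent" /AGTSVCPASSWORD="********" ^
  /SQLSYSADMINACCOUNTS="GOTHAM\ClusterAdmin"

rem --- Integrated installation: run on each additional node ---
setup.exe /q /ACTION=AddNode /INSTANCENAME=MSSQLSERVER ^
  /SQLSVCACCOUNT="GOTHAM\sqlservice" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="GOTHAM\sqlagent" /AGTSVCPASSWORD="********"

rem --- Advanced/Enterprise installation: Prepare on every node... ---
setup.exe /q /ACTION=PrepareFailoverCluster /FEATURES=SQLENGINE,FULLTEXT ^
  /INSTANCENAME=MSSQLSERVER ^
  /SQLSVCACCOUNT="GOTHAM\sqlservice" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="GOTHAM\sqlagent" /AGTSVCPASSWORD="********"

rem --- ...then Complete on the node that owns the shared disk ---
setup.exe /q /ACTION=CompleteFailoverCluster /INSTANCENAME=MSSQLSERVER ^
  /FAILOVERCLUSTERNETWORKNAME=VSQLSERVER2008 ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.1.50;Public Network;255.255.255.0" ^
  /INSTALLSQLDATADIR="S:\SQLData" ^
  /SQLSYSADMINACCOUNTS="GOTHAM\ClusterAdmin"
```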
With all the SQL Server components identified, this virtual SQL Server
is the only thing the end user will ever see. As you can also see in Figure 1, the virtual server name is VSQLSERVER2008, and the SQL Server instance name defaults to blank (you can, of course, give your instance a name). Figure 1
also shows the other cluster group resources that will be part of the
SQL Server Clustering configuration: MSDTC, SQL Agent, SQL Server
Full-Text Search, and the shared disk where the databases will live.
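After the installation completes, you can see the same cluster group and its resources from the command line. This is just a quick illustration, assuming the cluster.exe command-line tool is available on the node (the graphical Cluster Administrator/Failover Cluster Management tools show the same information); the group and resource names in the output depend on your instance name.

```
rem List the cluster groups, then the resources in each group, with their current state and owner node
cluster group
cluster resource
```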
SQL
Server Agent will be installed as part of the SQL Server installation
process, and it is associated with the SQL Server instance it is
installed for. The same is true for SQL Server Full-Text Search; it is
associated with the particular SQL Server instance that it is installed
to work with. The SQL Server installation process completely installs
all software on all nodes you designate.
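Once the instance is up, a quick way to confirm that it is clustered, and to see which physical node currently owns it, is to query it through the virtual server name. Here is a minimal sketch using sqlcmd, run from any node or client that can reach VSQLSERVER2008:

```
sqlcmd -S VSQLSERVER2008 -E -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode;"
```

IsClustered returns 1 for a clustered instance, and ComputerNamePhysicalNetBIOS returns the name of the node currently hosting it (for example, CLUSTER1 or CLUSTER2), which changes after a failover.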
Configuring SQL Server Database Disks
Before we go too much
further, we need to talk about how you should lay out a SQL Server
implementation on the shared disks managed by the cluster. The overall
usage intent of a particular SQL Server instance dictates how best to configure
your shared disks for scalability and availability.
In general, RAID 0 is great
for storage that doesn’t need fault tolerance; RAID 1 or RAID 10 is
great for storage that needs fault tolerance but doesn’t have to
sacrifice too much performance (as with most online transaction
processing [OLTP] systems); and RAID 5 is great for storage that needs
fault tolerance but whose data doesn’t change that much (that is, low
data volatility, as in many decision support systems [DSSs]/read-only
systems).
All this means that there is a time and place to use each of the different fault-tolerant disk configurations. Table 1
provides a good rule of thumb to follow for deciding which SQL Server
database file types should be placed on which RAID level disk
configuration. (This would be true regardless of whether or not the RAID
disk array was a part of a SQL Server cluster.)
Table 1. SQL Server Clustering Disk Fault-Tolerance Recommendations
| Device | Description | Fault Tolerance |
|---|---|---|
| Quorum drive | The quorum drive used with MSCS should be isolated to a drive by itself (often mirrored as well, for maximum availability). | RAID 1 or RAID 10 |
| OLTP SQL Server database files | For OLTP systems, the database data/index files should be placed on a RAID 10 disk system. | RAID 10 |
| DSS SQL Server database files | For DSSs that are primarily read-only, the database data/index files should be placed on a RAID 5 disk system. | RAID 5 |
| tempdb | This is a highly volatile form of disk I/O (when not able to do all its work in the cache). | RAID 10 |
| SQL Server transaction log files | The SQL Server transaction log files should be on their own mirrored volume for both performance and database protection. (For DSSs, this could be RAID 5 also.) | RAID 10 or RAID 1 |
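To make Table 1 concrete, here is a minimal sketch of creating a database whose data/index files live on a RAID 10 shared disk and whose transaction log lives on a separate mirrored volume. The drive letters (S: and T:), paths, and database name are placeholders for your own cluster disk resources; only the virtual server name comes from this chapter's example.

```
sqlcmd -S VSQLSERVER2008 -E -Q "CREATE DATABASE SalesDB ON PRIMARY (NAME = SalesDB_data, FILENAME = 'S:\SQLData\SalesDB.mdf') LOG ON (NAME = SalesDB_log, FILENAME = 'T:\SQLLogs\SalesDB.ldf');"
```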
Tip
A good practice is to
balance database files across disk arrays (that is, controllers). In
other words, if you have two (or more) separate shared disk arrays (both
RAID 10) available within a cluster group’s resources, you should put
the data file of Database 1 on the first cluster group disk resource
(for example, DiskRAID10-A) and its transaction log on the second cluster group disk resource (for example, DiskRAID10-B). Then you should put the data file of Database 2 on the second cluster group disk resource (DiskRAID10-B) and its transaction log on the first cluster group disk resource (DiskRAID10-A).
In this way, you can stagger these allocations and in general balance
the overall RAID controller usage, minimizing any potential bottlenecks
that might occur on one disk controller.

In addition, FILESTREAM filegroups must be put on a shared disk, and FILESTREAM
must be enabled on each node in the cluster that will host the FILESTREAM
instance (see the sketch following this tip). You can also use geographically
dispersed cluster nodes, but additional items such as network latency and
shared disk support must be verified before you get started; check the
geographic cluster Hardware Compatibility List
(http://msdn.microsoft.com/en-us/library/ms189910.aspx). On Windows 2008, most
hardware, including iSCSI-based storage, can be used without the need for
"certified hardware." When you are creating a cluster on Windows 2008, you can
use the cluster validation tool to validate the Windows cluster; SQL Server
Setup is also blocked when problems are detected with the Windows 2008 cluster.
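As a rough sketch of the FILESTREAM point above: FILESTREAM is first enabled at the Windows service level on every node (through SQL Server Configuration Manager) and then at the instance level with sp_configure. The access level value of 2 (Transact-SQL plus Win32 streaming access) is just an example; choose the level your application needs.

```
rem Run once against the clustered instance after enabling FILESTREAM on each node's SQL Server service
sqlcmd -S VSQLSERVER2008 -E -Q "EXEC sp_configure 'filestream access level', 2; RECONFIGURE;"
```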
Installing Network Interfaces
You might want to take a final glance at Cluster Administrator so that you can verify that both CLUSTER1 and CLUSTER2
nodes and their private and public network interfaces are completely
specified and their state (status) is up. If you like, you can also
double-check the IP addresses and network names against the Excel spreadsheet
created for this cluster specification.
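If you prefer the command line, a rough equivalent check (again assuming cluster.exe is installed on the node) lists the nodes, cluster networks, and network interfaces along with their current state:

```
cluster node
cluster network
cluster netint
```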
Installing MSCS
As you can see in Figure 2, the MSCS “service” is running and has been started by the ClusterAdmin login account for the GOTHAM domain.
Note
If MSCS is not started and
won’t start, you cannot install SQL Server Clustering. You have to
remove and then reinstall MSCS from scratch. You should browse the Event
Viewer to familiarize yourself with the types of warnings and errors
that can appear with MSCS.
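One quick way to check the Cluster service from a command prompt before launching SQL Server Setup is to query and, if necessary, start the ClusSvc service (the standard service name for MSCS):

```
rem Show the current state of the Cluster service
sc query clussvc

rem Attempt to start it if it is stopped; if this fails, review the System log in Event Viewer
net start clussvc
```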