Installing SQL Server
For SQL Clustering, you must
install a new SQL Server instance within a minimum two-node cluster. You
should not move a SQL Server instance from a nonclustered configuration
to a clustered configuration. If you already have SQL Server installed
in a nonclustered environment, you need to make all the necessary
backups (or detach databases) first, and then you need to uninstall the
nonclustered SQL Server instance. Some upgrade and migration paths are possible from prior versions of SQL Server and Windows Server. You are also limited to a maximum of 25 instances of SQL Server per failover cluster. There is no option to uninstall an entire SQL Server failover cluster in one step; to remove a node, you must run Setup from the node that is to be removed. You must specify the same product key on all the nodes that you are preparing for the same failover cluster, and you should also make sure you use the same SQL Server instance ID for all the nodes that are prepared for the failover cluster.
With all MSCS
resources running and in the online state, you run the SQL Server Setup
program from the node that is online (for example, CLUSTER1).
You are asked to install all software components required prior to
installing SQL Server (.NET Framework 3.0 or 3.5, Microsoft SQL Native
Client, and the Microsoft SQL Server 2008 Setup support files).
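Before launching Setup, it is worth confirming from the command line that the MSCS resources really are online. A minimal check, assuming the standard cluster.exe tool that ships with the Windows failover clustering components (verify the exact syntax on your Windows Server version), might look like this:
rem Run from CLUSTER1: every group and resource should report Online
cluster group /status
cluster resource /status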
SQL Server integrated failover cluster installation consists of the following steps:
1. Create and configure a single-node SQL Server failover cluster instance. When you configure the node successfully, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster.
2. On each node to be added to the SQL Server failover cluster, run Setup with the Add Node functionality to add that node (a command-line sketch of both steps follows).
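If you prefer to script these two steps rather than work through the wizard, they map to the SQL Server 2008 Setup command-line actions InstallFailoverCluster and AddNode. The following is only a sketch: the domain name (MYDOMAIN), the password placeholders, the cluster disk resource name ("Disk S:"), and the directory values are illustrative assumptions and must be replaced with the values for your own environment.
rem Step 1 (run on CLUSTER1): create the single-node failover cluster instance
setup.exe /q /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE ^
  /INSTANCENAME=MSSQLSERVER ^
  /FAILOVERCLUSTERNETWORKNAME=VSQLSERVER2008 ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.3.110;ClusterPublic;255.255.255.0" ^
  /FAILOVERCLUSTERDISKS="Disk S:" ^
  /INSTALLSQLDATADIR="S:\Microsoft SQL Server" ^
  /SQLSVCACCOUNT="MYDOMAIN\ClusterAdmin" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="MYDOMAIN\ClusterAdmin" /AGTSVCPASSWORD="********" ^
  /SQLSYSADMINACCOUNTS="MYDOMAIN\ClusterAdmin"

rem Step 2 (run on CLUSTER2): add the second node to the same instance
setup.exe /q /ACTION=AddNode /INSTANCENAME=MSSQLSERVER ^
  /SQLSVCACCOUNT="MYDOMAIN\ClusterAdmin" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="MYDOMAIN\ClusterAdmin" /AGTSVCPASSWORD="********"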
Alternatively, you can use the following SQL Server Advanced/Enterprise failover cluster installation:
1. On each node that will be an owner of the new SQL Server failover cluster, follow the Prepare Failover Cluster setup steps listed in the Prepare section. After you run Prepare Failover Cluster on one node, Setup creates the Configuration.ini file, which lists all the settings you specified. On the additional nodes to be prepared, instead of following these steps manually, you can supply the Configuration.ini file from the first node as an input to the Setup command line. This step readies the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
2. After the nodes are prepared for clustering, run Setup with the Complete Failover Cluster functionality on one of the prepared nodes, preferably the node that owns the shared disk. This step configures and finishes the failover cluster instance. After completing this step, you have an operational SQL Server failover cluster instance, and all the nodes prepared previously for that instance are the possible owners of the newly created SQL Server failover cluster. (A command-line sketch of the Prepare and Complete steps follows.)
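On the command line, this advanced route corresponds to the PrepareFailoverCluster and CompleteFailoverCluster Setup actions. Again, this is only a sketch under the same assumptions as before; the MYDOMAIN domain, password placeholders, the "Disk S:" resource name, and the path to the saved Configuration.ini file are illustrative.
rem Prepare the first node (CLUSTER1); Setup writes the configuration .ini file
setup.exe /q /ACTION=PrepareFailoverCluster /FEATURES=SQLENGINE ^
  /INSTANCENAME=MSSQLSERVER ^
  /SQLSVCACCOUNT="MYDOMAIN\ClusterAdmin" /SQLSVCPASSWORD="********" ^
  /AGTSVCACCOUNT="MYDOMAIN\ClusterAdmin" /AGTSVCPASSWORD="********"

rem Prepare the remaining nodes (CLUSTER2, ...) by reusing that file;
rem passwords are not stored in the file, so they are supplied here
setup.exe /q /ACTION=PrepareFailoverCluster ^
  /ConfigurationFile="C:\ClusterPrep\Configuration.ini" ^
  /SQLSVCPASSWORD="********" /AGTSVCPASSWORD="********"

rem Complete the instance on the node that owns the shared S: disk
setup.exe /q /ACTION=CompleteFailoverCluster /INSTANCENAME=MSSQLSERVER ^
  /FAILOVERCLUSTERNETWORKNAME=VSQLSERVER2008 ^
  /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.3.110;ClusterPublic;255.255.255.0" ^
  /FAILOVERCLUSTERDISKS="Disk S:" ^
  /INSTALLSQLDATADIR="S:\Microsoft SQL Server" ^
  /SQLSYSADMINACCOUNTS="MYDOMAIN\ClusterAdmin"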
After you take these steps,
the standard Welcome to SQL Server Installation Center Wizard begins.
It starts with a System Configuration check of the node in the cluster (CLUSTER1). Figure 3 shows the SQL Server Installation Center launch dialog and the results of a successful system check for CLUSTER1.
Note
SQL Server Clustering is
available with SQL Server 2008 Standard Edition, Enterprise Edition,
and Developer Edition. However, Standard Edition supports only a
two-node cluster. If you want to configure a cluster with more than two
nodes, you need to upgrade to SQL Server 2008 Enterprise Edition.
If this check produces errors (warnings are acceptable), you must resolve them before you continue. If the check is successful, you are then prompted to select the features you want to install. Figure 4 shows the Feature Selection dialog.
You then see the Instance Configuration dialog, as shown in Figure 5, where you specify the network name for the new SQL Server failover cluster (the virtual server name, VSQLSERVER2008 in this example) and then can either use the default SQL Server instance name or specify a unique SQL Server instance name (we chose to use the default instance name of MSSQLSERVER).
This virtual SQL Server
name is the name the client applications will see (and to which they
will connect). When an application attempts to connect to an instance of
SQL Server 2008 that is running on a failover cluster, the application
must specify both the virtual server name and the instance name (if a named instance was used), such as VSQLSERVER2008\VSQLSRV1 (virtual server name\SQL Server instance name), or just the virtual server name, such as VSQLSERVER2008, when the default instance is used. The virtual server name must be unique on the network. You also
specify the local directory locations (root) for the installation.
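Once the installation described later has completed, a quick way to confirm that the name behaves this way is to connect to it with sqlcmd, exactly as a client application would; the standard SERVERPROPERTY calls below report whether the instance is clustered and which physical node currently hosts it.
rem Default instance: connect with just the virtual server name
sqlcmd -S VSQLSERVER2008 -E -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentNode;"
rem A named instance would be addressed as, for example, VSQLSERVER2008\VSQLSRV1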
Note
A good naming convention to follow is to preface all virtual SQL Server names and virtual SQL Server instance names with a V. This way, you can easily identify which SQL Server machines on your network are clustered. For example, you could use VSQLSERVER2008 as a virtual SQL Server name and VSQLSRV1 as an instance name.
Next comes the disk space
requirements dialog, followed by the Cluster Resource Group
specification. This is where the SQL Server resources are placed within
MSCS. Here, you use the existing resource cluster group (named Cluster Group).
Immediately following the resource group assignment comes the
identification of which clustered disks are to be used via the Cluster
Disk Selection dialog, shown in Figure 6. It contains an S: drive (which you want SQL Server to use) and the Q: and R: drives, which are being used for the quorum files (do not select these drives!). You
simply select the available drive(s) where you want to put your SQL
database files (the S: drive in this example). As you can also see, the only “qualified” disk is the S:
drive. If the quorum resource is in the cluster group you have
selected, a warning message is issued, informing you of this fact. A
general rule of thumb is to isolate the quorum resource to a separate
cluster group.
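If you are unsure which clustered disk is which, the resource status output from cluster.exe shows each physical disk resource, the group it belongs to, and its current owner, which makes it easy to confirm that the quorum disk lives in a different group from the one you select for SQL Server. Resource and group names vary by environment, so treat this as a sketch.
rem List every resource with its group, owning node, and state; the quorum disk
rem should appear in a group other than the one chosen for SQL Server data
cluster resource /status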
The next thing you need to do
for this new virtual server specification is to identify an IP address
and which network it should use. As you can see in the Cluster Network
Configuration dialog, shown in Figure 7,
you simply type in the IP address (for example, 192.168.3.110) for this virtual SQL Server on one of the available networks known to this cluster configuration (in this example, the ClusterPublic network). If the IP address being specified is already in use, an error occurs.
Note
Keep in mind that you are
using a separate IP address for the virtual SQL Server that is
completely different from the cluster IP addresses. In a nonclustered
installation of SQL Server, the server can be referenced using the
machine’s IP address. In a clustered configuration, you do not use the
IP addresses of the servers themselves; instead, you use this separately
assigned IP address for the “virtual” SQL Server.
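After the installation is finished, you can verify this distinction from any machine on the network: the virtual name should resolve to the separately assigned address (192.168.3.110 in this example), not to the address of CLUSTER1 or CLUSTER2. A simple check, assuming the name has been registered in DNS:
nslookup VSQLSERVER2008
ping -n 1 VSQLSERVER2008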
Figure 8
shows the next step in identifying the Cluster Security Policy for each
SQL Server component (Database Engine, SQL Server Agent, and Analysis
Services). Here, you use the domain Admin group. Figure 8
also shows the Server Configuration “service accounts” to use for all
the services within this SQL Server install. You use a ClusterAdmin
account set up for this purpose. Remember, this account must have administrator rights within the domain and on each server (that is, it must be a member of the Administrators local group on every node in the cluster).
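A quick way to check (or grant) that local group membership on each node is with the built-in net localgroup command; the MYDOMAIN domain name here is an illustrative placeholder.
rem Run on CLUSTER1 and CLUSTER2: the ClusterAdmin account should be listed
net localgroup Administrators
rem If it is missing, add it (requires local administrator rights)
net localgroup Administrators MYDOMAIN\ClusterAdmin /add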
This is followed by the Database Engine Configuration dialog, where you
set what type of authentication mode to use, the data directories for
the root and subfolders, and the FILESTREAM options. Needless to say,
the Data root directory is on the S: drive.
You then are prompted
through the Analysis Services Configuration and Reporting Services
Configuration dialogs. Your Analysis Services Data directories are
within a subfolder of S:\OLAP\.
At this point, you have
worked your way down to the Cluster Installation Rules check to
determine if everything specified to this point is correct. Figure 9
shows this rules check “passing” status, a summary of what is about to
be done, and the location of the configuration file (and path) that can
be used later if you are doing command-line installs of new nodes in the
cluster. A box appears around this configuration file path location at
the bottom right of the Ready to Install dialog to show you where it is
being created (if needed).
The next step is to click on the Install button.
The setup process installs SQL Server binaries locally on each node in the cluster (that is, in C:\Program Files\Microsoft SQL Server). The database files for the master, model, tempdb, and msdb databases are placed on the S: drive in this example. This is the shared disk location that must be available to all nodes in the SQL Server cluster.
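Once Setup finishes, it is easy to confirm that the system database files really did land on the shared disk rather than on a local drive; a sqlcmd query against sys.master_files, run against the virtual server name, shows the physical path of every system database file.
sqlcmd -S VSQLSERVER2008 -E -Q "SELECT DB_NAME(database_id) AS [database], physical_name FROM sys.master_files WHERE database_id IN (1,2,3,4);"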
When the process is complete,
you can pop over into the Cluster Administrator and see all the
resources just installed within the failover cluster. This is not highly
available yet because you have completed only one node of the two-node
failover cluster. But, as you can see in Figure 10, the SQL Server components have been successfully installed and are usable within the cluster.
Adding the next node (and any
more subsequent nodes) to the cluster will make this configuration
highly available because you will have other nodes to fail over to. To
install the second node, you must now start the Setup process over again (using SQL Server Installation Center). But this time, you can
choose the Add Node to a SQL Server Failover Cluster option, as shown in
Figure 11. Just as before, the Setup Support Rules check occurs for the next cluster node (CLUSTER2,
in this example). As you can also see, adding a node is much simpler
(many fewer steps) than creating a completely new SQL Server failover
cluster installation.
If all items have passed on the new node, you come to the Cluster Node Configuration dialog, as shown in Figure 12. Here, you can see that the name of this node (CLUSTER2) is being associated with (added to) the original cluster node (CLUSTER1).
This is truly where the magic occurs. You are then prompted to specify
the service accounts and collation configuration of this second node.
Again, you should specify the domain account that was specified in the
first cluster configuration setup (ClusterAdmin in this example).
Now you are ready to verify
that the rules for adding this node are being followed. If any check
doesn’t pass, you must correct it before the node can be added. Figure 13 shows this Add Node Rules check along with the summary of all the features to be installed as part of the add node operation.
Again, click the Install button to install the SQL Server features and add this node to the cluster.
You must specify
what type of authentication mode you want for SQL Server access: Windows
Authentication (only) or mixed mode (Windows Authentication and SQL
Server Authentication). For this example, you should choose the mixed
mode option and provide a password for the sa SQL Server administration login.
Finally, you must specify the collation settings used for sorting order and compatibility with previous versions of SQL Server.
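Both settings are easy to confirm afterward from sqlcmd; SERVERPROPERTY reports whether the instance is running in Windows Authentication mode only and which server collation was selected.
sqlcmd -S VSQLSERVER2008 -E -Q "SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly, SERVERPROPERTY('Collation') AS ServerCollation;"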
The SQL Setup program now has enough information to do the complete installation of the new node in the SQL Server cluster. Figure 14
shows that the installation of the new SQL Server failover cluster node is complete. In particular, binaries are installed locally, additional services are created on the second node for SQL Server, and SQL resources are associated with both cluster nodes.
As you can also see in Figure 14, Cluster Administrator shows the online resources within the cluster group and that both cluster nodes are up and all resources are online (but currently owned by CLUSTER1).
Following are the SQL Server resource entries:
The SQL Server virtual IP address
The SQL Server network name
SQL Server (MSSQLSERVER)
SQL Server Agent (for the instance)
Analysis Services
Disk S: (physical disks where the DBs reside)
MSDTC
Each resource entry should say Online in the State column and be owned by the same node (CLUSTER1 in this example).
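You can verify the same state from the command line: cluster.exe reports node and resource status, and from SQL Server itself the sys.dm_os_cluster_nodes view lists every node that can host the clustered instance. The group name "Cluster Group" matches the resource group used in this example.
rem Both nodes should report Up, and every resource in the group should be Online
cluster node /status
cluster group "Cluster Group" /status
rem From SQL Server: list the possible owner nodes of this instance
sqlcmd -S VSQLSERVER2008 -E -Q "SELECT NodeName FROM sys.dm_os_cluster_nodes;"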
In the Cluster Administrator,
you can easily view the properties of each of the new SQL Server
resources by right-clicking a resource and selecting Properties. Figure 15 shows the properties of the Networks and Network Interface IP Address resources.
When you
right-click a resource entry in the Cluster Administrator, you have an
option to take the resource offline or to initiate a failure. You
sometimes need to do this when you’re trying to fix or test a SQL Server
Clustering configuration. However, when you’re initiating full SQL
Server failover to another node (for example, from CLUSTER1 to CLUSTER2),
you typically use the Move Group cluster group technique because you
want all the resources for the cluster group to fail over—not just one
specific resource. Figure 16 shows that you simply right-click the Cluster Group item entry and select Move Group. All resources then fail over to CLUSTER2.
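The same failover can be initiated from the command line with cluster.exe, which is convenient for scripted failover testing; the /move switch (also accepted as /moveto) takes the destination node name.
rem Fail the entire group over to CLUSTER2, then move it back when done testing
cluster group "Cluster Group" /move:CLUSTER2
cluster group "Cluster Group" /move:CLUSTER1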