Implementing Cluster Replication – Part 1

Imagine that you're researching Continuous Cluster Replication, looking for a simple, direct guide to the basics of getting it set up and running. What do you do if everything you find is either too specialised, or goes into far more detail than you need? If you're Brien Posey, and you can't find that practical, to-the-point article, you roll up your sleeves and write it yourself. To kick off, he tackles the rather tedious task of setting up a Majority Node Set Cluster.

A few weeks ago, someone asked me to help them with an Exchange Server deployment. In order to make the deployment a bit more robust, they wanted to cluster their mailbox servers using Cluster Continuous Replication, or CCR. Although I had set up CCR a few times in the past, it had been a while since I had done one, so I decided to look online to re-familiarize myself with the process.

As I looked around on the Internet, I began to realize that there weren’t any articles that met my needs. Some of the articles that I found only covered part of the process. Others covered the entire process but contained a lot more material than what I wanted. Nobody had published a simple article, written in plain English, which described the procedure of setting up CCR from start to finish. That being the case, I decided to write one myself.

Planning the Cluster

As you may already know, CCR is not a feature that you can enable on a whim. That’s because CCR makes use of a majority node set cluster. You have to create the cluster at the operating system level. Only then can you install Exchange and create a clustered mailbox server.  Since that is the case, I want to start out by showing you how to plan for and set up the cluster. In Part 2, I will show you how to install Exchange onto the cluster that you have created, and I will also show you how to perform a manual failover on the cluster.

The Server Hardware and Software

The first step in the planning process is to make sure that you have the required hardware. Generally speaking, the requirements for setting up CCR are roughly the same as for creating any other mailbox server. The biggest difference is that you will need two of everything. CCR does not make use of shared storage like a single copy cluster does, so you will need two servers, and each of the servers will have to have sufficient disk resources to accommodate the mailbox database and the transaction logs. As is the case with a non-clustered mailbox server, you should place the database and the transaction logs on separate disks in accordance with Microsoft’s best practices for Exchange.

Your two servers don’t have to be completely identical to each other, but they should at least have similar capabilities. Remember that at any time either one of the servers could be acting as the active cluster node, so it is important to make sure that both servers have hardware that is sufficient to host the mailbox server role in an efficient manner.

Although a single NIC per server will technically work, Microsoft won’t support CCR unless each node has two NICs, so I strongly recommend installing two NICs in each of the servers. One of the NICs will be used for communications with the rest of the network, and the other will be used for communications between the cluster nodes. The installation steps outlined in this article assume that each server has two NICs.

Finally, you must ensure that you have the necessary software licenses. You will need two copies of Windows Server Enterprise Edition and two copies of Exchange 2007 Enterprise Edition (clustering is not supported in the standard editions), plus any necessary Client Access Licenses. For the purposes of this article, I will be using Windows Server 2008 and Exchange Server 2007 with SP1.

Other Cluster Resources

The next step in planning the cluster is to set aside the necessary names and IP addresses. Believe it or not, you are going to need four different names and six different IP addresses.

The first step in setting up the cluster is to install Windows onto both of the cluster nodes. At this point in the process, your servers are not cluster nodes, but rather just a couple of Windows servers. Like any other Windows servers, each of them will need a name and an IP address. That accounts for two of the four names and two of the six IP addresses.


As you will recall, each server has two NICs, and you must assign an IP address to both of the secondary NICs as well. The IP addresses that you assigned to the primary NICs should fall into the same subnet as the other machines on the network segment, while the addresses assigned to the secondary NICs should belong to a separate, private subnet. It is even permissible to connect the two secondary NICs directly with a crossover cable.
So far we have used two of our four names, and four of our six IP addresses. The next name and IP address are assigned to the cluster. This name and IP address are used to communicate with the cluster as a whole, rather than with an individual cluster node.

Finally, you will need to set aside a name and an IP address to be assigned at the Exchange Server level to the clustered mailbox server. The IP address for the cluster and the IP address for the clustered mailbox server should fall into the same subnet as the other computers on the network segment. You will only be using the alternate subnet for cluster level communications between the two nodes over the secondary network adapters.
The table below should help to give you a clearer picture of how the various names and IP addresses will be used. You don’t have to use the same names and addresses as I am. They are just samples:

                          Name        Primary NIC    Secondary NIC
Cluster Node 1            Node1       192.168.0.1    10.1.10.11
Cluster Node 2            Node2       192.168.0.2    10.1.10.12
MNS Cluster               WinCluster  192.168.0.3    --
Clustered Mailbox Server  Exch1       192.168.0.4    --

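If you want to sanity-check an address plan like this before you start configuring servers, a few lines of script will do it. Here is a minimal sketch in Python, using the sample addresses from the table above; the /24 masks are my own assumption for the sake of the example, not anything mandated by the cluster:

```python
import ipaddress

# Sample addresses from the planning table; the /24 masks are assumed.
public = ["192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4"]
private = ["10.1.10.11", "10.1.10.12"]

public_net = ipaddress.ip_network("192.168.0.0/24")
private_net = ipaddress.ip_network("10.1.10.0/24")

# The node, cluster, and clustered mailbox server addresses must all
# fall into the same subnet as the rest of the network segment...
assert all(ipaddress.ip_address(a) in public_net for a in public)

# ...while the heartbeat addresses must live on their own subnet,
# and the two subnets must not overlap.
assert all(ipaddress.ip_address(a) in private_net for a in private)
assert not public_net.overlaps(private_net)

print("address plan is consistent")
```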
Another thing that you are going to need before you begin creating the cluster is a cluster service account. You should create a dedicated account in your Active Directory domain, and set its password so that it never expires.
One last thing that I want to mention before I show you how to create the cluster is that the Clustered Mailbox Server role cannot be combined with any other Exchange Server roles. Since every Exchange 2007 organization requires at least one hub transport server and client access server, you will need at least one additional Exchange 2007 server in your organization.

Creating the Cluster

Now that I have explained what resources you will need I want to go ahead and show you how to create the cluster. This section assumes that you have already installed Windows Server 2008 onto both cluster nodes, assigned the appropriate names and IP addresses to the two nodes, and joined the nodes to your domain.

Configuring the First Cluster Node

Your cluster is going to consist of two separate nodes, and the setup procedure is different for each of them. That being the case, I am going to refer to the first cluster node as Node 1, and the second cluster node as Node 2. When the configuration process is complete, Node 1 will initially act as the active cluster node.
With that said, let’s go ahead and get started by configuring Node 1. Begin the process by logging into the server with administrative credentials, and opening a Command Prompt window.  Now, enter the following command:

Cluster /Cluster:<your cluster name> /Create /Wizard

For example, if you chose to call your cluster WinCluster, then you would enter:

Cluster /Cluster:WinCluster /Create /Wizard

This command tells Windows that you want to create a new cluster named WinCluster, and that you want to use the wizard to configure the cluster.

Windows will now launch the New Server Cluster Wizard. Click Next to bypass the wizard’s Welcome screen. Now, select your domain from the Domain drop down list. Verify that the cluster name that is being displayed matches the one that you typed when you entered the Cluster command, and then click Next.

Windows will now perform a quick check to make sure that the server is ready to be clustered. This process typically generates some warnings, but as long as no actual errors are displayed, the warnings usually aren’t a big deal. They are often related to minor issues, such as IEEE 1394 (FireWire) ports being detected as network interfaces.

Click Next to clear any warning messages, and you will be prompted to enter the IP address that you have reserved for the MNS cluster. Enter this IP address into the space provided, and then click Next.

You will now be prompted to enter your service account credentials. Enter the required username and password, and click Next.

At this point, the wizard will display a summary screen. With virtually every other wizard that Microsoft makes, you would just take a second to verify the accuracy of the information and then click Next. In this case, though, you need to click the Quorum button instead, and set the quorum type to Majority Node Set. Then click OK, followed by Next. When the wizard completes, click Next, followed by Finish. You have now created the first cluster node!

Adding the Second Node to the Cluster

Now that we have created our cluster, we need to add Node 2 to it. To do so, log into Node 2 as an administrator, and open a Command Prompt window. When the window opens, enter the following command:

Cluster /Cluster:<your cluster name> /Add /Wizard

For example, if you called your cluster WinCluster, you would enter this command:

Cluster /Cluster:WinCluster /Add /Wizard

Notice that this time we are using the /Add switch instead of the /Create switch because our cluster already exists.

Windows should now launch the Add Nodes Wizard. Click Next to clear the wizard’s welcome screen. You must now select your domain from the Domain drop down list. While you are at it, take a moment to make sure that the cluster name that is being displayed matches what you typed.

Click Next, and you will be prompted to enter the name of the server that you want to add to the cluster. Enter the server name and click the Add button.

Click Next, and the wizard will perform a quick check to make sure that the server is ready to be added to the cluster. Once again, it is normal to get some warning messages. As long as you don’t receive any error messages, you can just click Next.

At this point, you will be prompted to enter the credentials for the cluster’s service account. After doing so, click Next.

You should now see the now familiar configuration summary screen. This time, you don’t have to worry about clicking a Quorum button. Just click Next, and Windows will add the node to the cluster. When the process completes, click Next, followed by Finish.

Some Additional Configuration Tasks

Now that we have created our Majority Node Set Cluster, we need to tell Windows which NICs are going to be used for which purpose. To do so, select the Cluster Administrator console from Node 1’s Administrative Tools menu. When the Cluster Administrator starts, take a moment to make sure that both of your cluster nodes are listed in the Cluster Administrator’s console tree.


Now, navigate through the console tree to <your cluster name> | Cluster Configuration | Networks | Local Area Connection. This container should display IP addresses for both cluster nodes. Take a moment to verify that the addresses that are listed are the ones that fall into the same subnet as the other servers on the network segment. Now, right click on the Local Area Connection container, and choose the Properties command from the resulting shortcut menu. When the properties sheet opens, make sure that the Enable this Network for Cluster Use check box is selected. You must also select the Client Access Only (Public Network) option. When you have finished, click OK.

Now, we have to check the other network connection. To do so, navigate through the console tree to <your cluster name> | Cluster Configuration | Networks | Local Area Connection 2. Make sure that when you select the Local Area Connection 2 container, that the details pane displays both cluster nodes, and that the IP addresses that are listed are associated with the private subnet.

At this point, you must right click on the Local Area Connection 2 container, and select the Properties command from the resulting shortcut menu. When Windows opens the properties sheet for the connection, make sure that the Enable This Network for Cluster Use check box is selected. You must also select the Internal Cluster Communications Only (Private Network) option. When you are done, click OK and close the Cluster Administrator.

Creating a Majority Node Set File Share Witness

The problem with a Majority Node Set cluster is that the active node must be able to communicate with a majority of the nodes in the cluster, and there is no way to have a clear majority in a two node cluster. Windows can’t allow a single node to count as the majority, because otherwise a failure of the communications link between the two cluster nodes could result in a split-brain condition, in which both nodes are functional, each believes that the other has failed, and each therefore tries to become the active node.
In order to prevent this from happening, we must create a Majority Node Set File Share Witness. The basic idea behind this is that we will create a special file share on our hub transport server. In the event of a failure, the share that we create will be counted as a node (even though it isn’t really a node) in determining which cluster node has the majority of the node set.
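The arithmetic behind this is simple enough to sketch. With the witness counted as a third vote, a node that can still reach the witness holds two of three votes and keeps the cluster online, while an isolated node holds only one of three and takes itself offline rather than split-brain. Here is a minimal illustration; the function and vote counts are purely my own way of showing the majority rule, not a representation of the cluster service’s actual implementation:

```python
def has_quorum(votes_held: int, total_votes: int) -> bool:
    """A node set has quorum only with a strict majority of the votes."""
    return votes_held > total_votes // 2

TOTAL = 3  # two nodes plus the file share witness

# Both nodes healthy: a clear majority, so the cluster stays online.
assert has_quorum(2, TOTAL)

# The link between the nodes fails, but one node can still reach the
# witness: that node holds 2 of 3 votes and remains the active node...
assert has_quorum(1 + 1, TOTAL)

# ...while the isolated node holds only 1 of 3 votes and shuts down,
# preventing a split-brain condition.
assert not has_quorum(1, TOTAL)

# Without the witness there are only 2 votes in total, so a lone node
# can never hold a majority -- which is exactly the two-node problem.
assert not has_quorum(1, 2)

print("quorum rules hold")
```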

To create the Majority Node Set File Share Witness, go to your hub transport server, open a Command Prompt window, and enter the following commands:

C:
CD\
MD MNS_FSW_CCR
Net Share MNS_FSW_CCR=C:\MNS_FSW_CCR /Grant:<your service account name>,Full
CACLS C:\MNS_FSW_CCR /G Builtin\Administrators:F <your service account>:F

When Windows asks you if you are sure, press Y.

What we have done is created a folder on our hub transport server named C:\MNS_FSW_CCR. We then created a share named MNS_FSW_CCR, and gave our service account full access to the share. Finally, we gave the built-in Administrators group and the service account full access to the folder at the NTFS level.

Now, go to Node 1 and open a Command Prompt window. You must now enter the following commands:

Cluster <your cluster name> Res "Majority Node Set" /Priv MNSFileShare=\\<your hub transport server's name>\MNS_FSW_CCR
Cluster <your cluster name> Group "Cluster Group" /Move
Cluster <your cluster name> Group "Cluster Group" /Move
Cluster <your cluster name> Res "Majority Node Set" /Priv

The first command in this sequence tells Windows to use the share that we have created as the Majority Node Set File Share Witness. When you enter this command, you will receive an error message telling you that the properties were stored, but that your changes won’t take effect until the next time that the cluster is brought online.

The easiest way to get around this problem is to move the cluster group from Node 1 to Node 2 and then back to Node 1. That’s what the second and third commands in the sequence above accomplish for us.

The last command in the sequence above simply causes Windows to display the private properties for the Majority Node Set. The first line of text within the list of properties should make reference to the share that we have created for our Majority Node Set File Share Witness. This confirms that the cluster is using our Majority Node Set File Share Witness.

Conclusion

As you can see, creating a Majority Node Set Cluster can be a bit tedious. In Part 2 of this series, we will wrap things up by installing Exchange Server onto our cluster and then working through the failover procedure.