
Restore Cluster DB

As CLUSDB is part of the system services, the only way to do this would be to perform the
restore on separate hardware.
I have successfully restored clusters in a DR test using the Microsoft utility
clusterrecovery.exe from the W2K3 resource kit.
You need to start the Cluster service with the /fixquorum switch.
If it doesn't start, use this (http://support.microsoft.com/kb/224999) to fix it, and then use
clusterrecovery.exe to change the quorum disk.
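For reference, starting the Cluster service with that switch from a command prompt looks like this:
net start clussvc /fixquorum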

1. Stop the Cluster service.
2. Shut down node 2.
3. Restart node 1.
4. Copy the Chkxxx.tmp file with the most recent time and date stamp from the \Mscs folder on the
shared quorum drive to a disk and to the local %SystemRoot%\Cluster folder.
5. Rename the Clusdb file in the %SystemRoot%\Cluster folder to Clusdb.old.
6. Rename the Chkxxx.tmp file in the %SystemRoot%\Cluster folder to Clusdb.
7. Restart node 1.
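A minimal command-line sketch of steps 4 through 6, assuming Q: is the quorum drive and Chk1A2B.tmp is the most recent checkpoint file (both names are illustrative):
copy Q:\Mscs\Chk1A2B.tmp %SystemRoot%\Cluster\
ren %SystemRoot%\Cluster\Clusdb Clusdb.old
ren %SystemRoot%\Cluster\Chk1A2B.tmp Clusdb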

We must move the following to the new environment:
1. The operating system of each node
2. The cluster services/applications (MS SQL 2000 and 2005)
3. The cluster data (cluster/SAN LUNs)
The difficult part would be the actual cluster pieces, as the cluster services rely on the shared
cluster disks and their associated data.
Dumpcfg This utility lists the current disk signatures and allows you to assign new disk
signatures. For Windows 2008 you can use diskpart.exe, which is part of the operating system.
1. http://technet.microsoft.com/en-us/magazine/2009.04.utilityspotlight.aspx
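As a rough sketch of the diskpart approach on Windows 2008 (the disk number and signature value below are illustrative), the uniqueid command both displays and sets an MBR disk signature:
diskpart
list disk
select disk 1
uniqueid disk
uniqueid disk id=A1B2C3D4
The dumpcfg utility fills the same role on Windows 2003; its help output lists the exact switch for writing a new signature.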

RichCopy This is a superior file copy utility for Windows.
1. http://www.microsoft.com/en-us/download/details.aspx?id=18986

Create a table of your cluster disks. Information that you will need for each disk is as follows:
1. Drive letter
2. Drive label
3. Disk size
4. Disk signature (use dumpcfg to determine the signatures)
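For example, a completed row might look like this (all values are illustrative):
Drive letter: H:   Label: SQL Data   Size: 100 GB   Signature: A1B2C3D4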

Create a temporary storage area for the cluster data
Fail over all cluster resources to a single node; I chose node 1.
Disable all non-Microsoft services on both nodes.
Keep the Cluster service online at this time; otherwise the disks will not be online.
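A hedged command-line sketch of the failover and service steps, assuming a resource group named "SQL Group" and a third-party service named "BackupAgent" (both names are hypothetical):
rem move the resource group to node 1
cluster group "SQL Group" /moveto:node1
rem list all services, then disable a non-Microsoft one
sc query state= all
sc config "BackupAgent" start= disabled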
P2V both nodes of the cluster using VMware Converter.
Select ONLY the local disk, not the cluster disks, for the conversion. Change from Thick to Thin
provisioning.
Make sure that the NIC is not connected at power on.
Do not automatically start the target VM.
Install VMware Tools automatically. I normally power on the VM after the P2V completes and
walk away for about twenty minutes. When I return, VMware Tools has installed and the server
has rebooted.
While the P2V is in progress, copy the cluster data to a temporary location.
Install RichCopy onto your temporary location; I used the same VM that is performing
the P2V migration.
The following command is an example of how to use it:
"C:\Program Files (x86)\Microsoft Rich Tools\RichCopy 4.0\richcopy.exe" \\source\h$ e:\h /CT
/CA /CSA /CSD /CSG /CSO /CSS
Power on both virtual nodes once the P2V is complete. Ensure that VMware Tools installs
successfully. Perform a sanity check to verify that the operating system appears healthy.
You will see errors from the Cluster service; this is expected. Essentially you are making sure
that the operating system can boot and load all necessary libraries and that it does not crash.
Shut down both physical nodes once the cluster data has finished copying.
Configure the network on both virtual nodes (node IP, Subnet, DNS, etc). Do not connect the
NICs at this time!
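A minimal sketch of setting the address from the command line (the interface name and addresses are illustrative):
netsh interface ip set address name="Local Area Connection" static 10.0.0.21 255.255.255.0 10.0.0.1
netsh interface ip set dns name="Local Area Connection" static 10.0.0.10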
On node 1 attach all of the iSCSI LUNs. I recommend attaching them one at a time and
configuring each to match the original LUNs. For example, the first LUN should be the H: drive,
so after presenting the iSCSI LUN to the virtual node you should initialize the disk and assign the
H: drive letter.
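As an illustration only (the disk number, drive letter, and volume label are assumptions), preparing the first newly presented LUN might look like this:
diskpart
select disk 1
create partition primary
assign letter=H
exit
format H: /FS:NTFS /V:SQLDATA /Q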
Use the dumpcfg utility to assign the proper disk signature to each iSCSI LUN, matching the
signatures recorded in your disk table (on Windows 2008, diskpart's uniqueid command shown earlier does the same job).
Connect the NIC for virtual node 1. Ensure that it can be accessed on the network.
Copy the cluster data to the disks attached to virtual node 1. The following command is
what we used during our migration:
"C:\Program Files (x86)\Microsoft Rich Tools\RichCopy 4.0\richcopy.exe" e:\h \\target\h$
/CT /CA /CSA /CSD /CSG /CSO /CSS
Shut down virtual node 1.
On node 2 attach all of the iSCSI LUNs.
At this time you should not have to initialize anything. Please note that the disks will most likely
remain offline, as we moved all resources to node 1 at the beginning of this process. This is
expected behavior.
Shut down virtual node 2 and power on virtual node 1.
At this time virtual node 1 should have ownership of all resources. The disks should all be
available on virtual node 1. Cluster services should show the disks online and in a healthy state.
Also note that cluster services should continue to complain about application services like SQL
not being available; this is because those services should still be in a disabled state.
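To confirm the state from the command line, the cluster.exe status views can help (a quick sketch, run on virtual node 1):
cluster group
cluster resource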
Power on virtual node 2 after connecting the NIC.
What I would do is ensure that all services and cluster functions reside on the primary node.
Then shut down the secondary node and stop all the associated services on the primary.
Perform the P2V and allow the process to copy all the data to new virtual hard disks within
vSphere. Once that is complete, you can power on the VM and power down the source.
After you remove cluster services you'll simply need to modify the DNS record for the cluster
name to point to the server IP address. Alternatively, change the DNS record to be a
CNAME to the server DNS record, or add the cluster IP address to the server
network configuration.
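A hedged example of the CNAME approach using dnscmd (the DNS server, zone, and host names are hypothetical):
rem remove the old A record for the cluster name, then point it at the server
dnscmd dns1 /RecordDelete contoso.com clustername A /f
dnscmd dns1 /RecordAdd contoso.com clustername CNAME server1.contoso.com.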
The only item of importance here is the disk drive letter, which should be retained during the
P2V process.
