
Configure Red Hat Cluster

Hostnames:
station10.example.com: 192.168.5.10
station20.example.com: 192.168.5.20
station30.example.com: 192.168.5.30

Luci Admin Console: station10.example.com (192.168.5.10)
Cluster Nodes: station20.example.com, station30.example.com (station10.example.com is added later as a third node for the quorum disk setup)
Cluster Name: Cluster_01
VIP: 192.168.5.100
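All hostnames must resolve on every node and on the luci host. If DNS is not available, a minimal sketch of the /etc/hosts entries that could be added on each machine (the short aliases are an assumption):

192.168.5.10   station10.example.com   station10
192.168.5.20   station20.example.com   station20
192.168.5.30   station30.example.com   station30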

Installation:
Station10: Install packages
# yum install luci
# /usr/sbin/luci_admin password
# /etc/init.d/luci restart
# chkconfig luci on
# yum install ricci
# chkconfig ricci on

Admin console URL: https://station10.example.com:8084

Station20 and Station30: Install packages
# yum install ricci
# /etc/init.d/ricci start
# chkconfig ricci on
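To confirm that ricci is running and reachable from the luci host, a quick check (ricci listens on TCP port 11111 by default):

# netstat -tlnp | grep ricci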

# grep locking_type /etc/lvm/lvm.conf
locking_type = 1

Create a New Cluster:
Log in to the luci admin console:
Click Cluster > Create a new cluster > Add node hostnames and passwords:
Cluster Name: Cluster_01
Node: station20.example.com
Node: station30.example.com
Click "View SSL cert fingerprints" to verify the communication.

After the cluster creation finishes:

On any cluster node:
# clustat
Cluster Status for Cluster_01 @ Fri Jun  1 15:56:08 2012
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 station30.example.com                  1 Online
 station20.example.com                  2 Online, Local

# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: Cluster_01
Cluster Id: 25517
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 9
Flags: 2node Dirty
Ports Bound: 0 11 177
Node name: station20.example.com
Node ID: 2
Multicast addresses: 239.192.99.17
Node addresses: 192.168.5.20

# grep locking_type /etc/lvm/lvm.conf
locking_type = 3
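The locking type changes from 1 (local file-based locking) to 3 (clustered locking via clvmd) once clustered LVM is enabled. If it has not been changed automatically, a sketch of how it can be set by hand, assuming the lvm2-cluster package (which provides clvmd) is installed:

# lvmconf --enable-cluster
# /etc/init.d/clvmd restart
# chkconfig clvmd on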

Configure Fence:
On any one node run:
# ls /etc/cluster/
cluster.conf

Add a Fence Device:
Click Cluster Name > Shared Fence Devices > Add a Fence Device > In Fencing Type select Virtual Machine Fencing > Give the name XEN_Fencing > Click Add this shared fence device > OK

Add Failover Domains:
Click Cluster Name > Failover Domains > Add a Failover Domain

Add a Fence Device to each node:
Cluster > Cluster List > Click Cluster Name > Nodes > Click on node station20.example.com > Add a fence device to this level > Main Fencing Method > Select XEN_Fencing (Virtual Machine Fencing) > Remember to give the XEN VM name as known by the hypervisor for host station20.example.com (not the machine host name): Domain XEN_VM01 > Click Update main fence properties.

Repeat the same for station30.example.com. Remember to give the XEN VM name as known by the hypervisor for host station30.example.com (not the machine host name): Domain XEN_VM02.
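For reference, after these steps the fencing part of /etc/cluster/cluster.conf should end up looking roughly like the sketch below (attribute names follow the stock fence_xvm agent; the exact layout luci writes may differ slightly):

<clusternode name="station20.example.com" nodeid="2" votes="1">
  <fence>
    <method name="1">
      <device name="XEN_Fencing" domain="XEN_VM01"/>
    </method>
  </fence>
</clusternode>
<clusternode name="station30.example.com" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="XEN_Fencing" domain="XEN_VM02"/>
    </method>
  </fence>
</clusternode>
...
<fencedevices>
  <fencedevice agent="fence_xvm" name="XEN_Fencing"/>
</fencedevices>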

Configure the fence key and distribute it to both nodes:
Click Cluster Name > Fence > Check "Run XVM fence daemon" > Give the node name > Click Retrieve Cluster Node > Create and Distribute Keys > Apply

On any one node run:
# ls /etc/cluster/
cluster.conf  fence_xvm.key

Copy /etc/cluster/fence_xvm.key to the luci admin host as /etc/cluster/fence_xvm.key.

Verify Fencing:
On any node, say station30.example.com:
# fence_xvm -H XEN_VM01
This will reboot/fence station20.example.com.
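A minimal sketch of copying the key, assuming root SSH access from the cluster node to station10.example.com (the luci admin host / Xen hypervisor):

# scp /etc/cluster/fence_xvm.key root@station10.example.com:/etc/cluster/fence_xvm.key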

Format the Clustered LVM with a GFS2 file system:
/dev/vg0/lv0 is an existing LVM logical volume.

# mkfs.gfs2 -p lock_dlm Cluster_01:vg0 -j 3 /dev/vg0/lv0
mkfs.gfs2: More than one device specified (try -h for help)

(The command fails because the lock table name must be passed with -t; without it, Cluster_01:vg0 is treated as a second device.)

[root@station30 ~]# mkfs.gfs2 -p lock_dlm -t Cluster_01:vg0 -j 3 /dev/vg0/lv0
This will destroy any data on /dev/vg0/lv0.
It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device:               /dev/vg0/lv0
Blocksize:            4096
Device Size           0.48 GB (126976 blocks)
Filesystem Size:      0.48 GB (126973 blocks)
Journals:             3
Resource Groups:      2
Locking Protocol:     "lock_dlm"
Lock Table:           "Cluster_01:vg0"
UUID:                 A4599910-69AF-5814-8FA9-C1F382B7F5E5

# mount /dev/vg0/lv0 /var/www/html/
# gfs2_tool df /dev/mapper/vg0-lv0
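If the GFS2 file system should also be mounted on every node at boot (rather than only when the cluster service mounts it), a sketch of an /etc/fstab entry plus the init script that mounts gfs2 entries at boot; the mount options are an assumption:

/dev/vg0/lv0   /var/www/html   gfs2   defaults,noatime   0 0

# chkconfig gfs2 on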

Add Resources:
IP: the VIP, 192.168.5.100

GFS File System: mount point /var/www/html, device /dev/vg0/lv0

HTTP Script: the httpd init script (typically /etc/init.d/httpd)

Add Service Group:

Add resources to the service group in dependency order, IP > File System > Script, so the service starts successfully. Start the Webby service. Verify by relocating it to the other node:

# clusvcadm -r Webby -m station30.example.com
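For reference, after these steps the <rm> (resource manager) section of /etc/cluster/cluster.conf should look roughly like the sketch below. Only the service name Webby, the VIP, the mount point, and the device come from this document; the failover domain name Webby_FD, the resource names, and the recovery policy are assumptions:

<rm>
  <failoverdomains>
    <failoverdomain name="Webby_FD" ordered="0" restricted="0">
      <failoverdomainnode name="station20.example.com" priority="1"/>
      <failoverdomainnode name="station30.example.com" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="192.168.5.100" monitor_link="1"/>
    <clusterfs name="webby_fs" fstype="gfs2" device="/dev/vg0/lv0" mountpoint="/var/www/html" force_unmount="0"/>
    <script name="httpd" file="/etc/init.d/httpd"/>
  </resources>
  <service autostart="1" domain="Webby_FD" name="Webby" recovery="relocate">
    <ip ref="192.168.5.100"/>
    <clusterfs ref="webby_fs"/>
    <script ref="httpd"/>
  </service>
</rm>

Once the service is running, a quick functional check from any machine on the network is to fetch the page on the VIP:
# curl http://192.168.5.100/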

Quorum Disk:
First add station10.example.com as a third node.

Create a partition /dev/sdi1. The LV name is /dev/qdisk-vg/qdisk-lv.
# mkqdisk -c /dev/qdisk-vg/qdisk-lv -l qdisk

# mkqdisk -L

[Run on both nodes]

On all nodes:
# /etc/init.d/qdiskd restart
# chkconfig qdiskd on
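The quorum disk also has to be declared in the cluster configuration (in luci this is the cluster's Quorum Partition tab). A minimal sketch of the resulting stanza in /etc/cluster/cluster.conf, using the label created above; the interval, tko, votes, and the heuristic (which pings an assumed gateway at 192.168.5.1) are assumptions to be tuned for the environment:

<quorumd interval="2" label="qdisk" min_score="1" tko="10" votes="1">
  <heuristic interval="2" program="ping -c 1 192.168.5.1" score="1"/>
</quorumd>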

With 3 nodes:
# clustat

# cman_tool status

Power off Station30:
# clustat

# cman_tool status

Power off Station20:
# clustat

# cman_tool status

Power on Station20:

Power on Station30:
