CFS allows the same file system to be mounted simultaneously on multiple nodes in the cluster. CFS is designed with a master/slave architecture: any node can initiate an operation to create, delete, or resize data, but the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache, and a distributed locking mechanism called GLM keeps metadata and caches coherent across the nodes.
# cfscluster status
  NODE        CLUSTER MANAGER STATE        CVM STATE
  serverA     running                      not-running
  serverB     running                      not-running
  serverC     running                      not-running
  serverD     running                      not-running
# hastatus -sum
-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0
# cfscluster config
  The cluster configuration information as read from cluster
  configuration file is as follows.
        Cluster : MyCluster
        Nodes   : serverA serverB serverC serverD

  You will now be prompted to enter the information pertaining
  to the cluster and the individual nodes.

  Specify whether you would like to use GAB messaging or TCP/UDP
  messaging. If you choose gab messaging then you will not have
  to configure IP addresses. Otherwise you will have to provide
  IP addresses for all the nodes in the cluster.

  ------- Following is the summary of the information: -------
        Cluster   : MyCluster
        Nodes     : serverA serverB serverC serverD
        Transport : gab
  ------------------------------------------------------------

  Cluster File System Configuration is in progress...
  cfscluster: CFS Cluster Configured Successfully
# cfscluster status
  Node             : serverA
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverB
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverC
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration

  Node             : serverD
  Cluster Manager  : running
  CVM state        : running
  No mount point registered with cluster configuration
# vxdctl -c mode
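On the master this typically prints something like the following (illustrative output; serverA is assumed to be the master in this example):

mode: enabled: cluster active - MASTER
master: serverA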
# /etc/vx/bin/vxclustadm nidmap
CVM Nid    CM Nid    State
0          0         Joined: Master
1          1         Joined: Slave
2          2         Joined: Slave
3          3         Joined: Slave
# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
        nodeId=0
        masterId=1
        neighborId=1
        members=0xf
        joiners=0x0
        leavers=0x0
        reconfig_seqnum=0xf0a810
        vxfen=off
# hastatus -sum
-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled

B  cvm             serverA              Y          N
B  cvm             serverB              Y          N
B  cvm             serverC              Y          N
B  cvm             serverD              Y          N
This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before the Volume Manager can use them, and placing a disk under Volume Manager control initializes it, which destroys any existing data on the disk. Before you begin, make sure the disks you add to the shared disk group are directly attached to all the cluster nodes. First, make sure you are on the master node:
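You can re-run the vxdctl check shown earlier; if this node reports SLAVE, switch to the node that reports MASTER before continuing:

serverA # vxdctl -c mode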
Next, initialize the disks you want to use (again, they must be attached to all the cluster nodes). You may optionally specify the disk format:
serverA # vxdisksetup -if EMC0_1 format=cdsdisk
serverA # vxdisksetup -if EMC0_2 format=cdsdisk
Create a shared disk group with the disks you just initialized.
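vxdg with the -s flag creates the group as shared; a sketch along these lines should work (the disk media names mysharedg01 and mysharedg02 are just illustrative, and mysharedg is the disk group name used throughout this example):

serverA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2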
Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.
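A single cfsdgadm command registers the disk group with the cluster configuration; a sketch, assuming you want sw on every node:

serverA # cfsdgadm add mysharedg all=sw

This results in ActivationMode entries like the following in main.cf: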
        ActivationMode @serverA = { mysharedg = sw }
        ActivationMode @serverB = { mysharedg = sw }
        ActivationMode @serverC = { mysharedg = sw }
        ActivationMode @serverD = { mysharedg = sw }
serverA # cfsdgadm display
  Node Name : serverA
  DISK GROUP       ACTIVATION MODE
  mysharedg        sw

  Node Name : serverB
  DISK GROUP       ACTIVATION MODE
  mysharedg        sw

  Node Name : serverC
  DISK GROUP       ACTIVATION MODE
  mysharedg        sw

  Node Name : serverD
  DISK GROUP       ACTIVATION MODE
  mysharedg        sw
We can now create volumes and file systems within the shared disk group.
serverA # vxassist -g mysharedg make mysharevol1 100g
serverA # vxassist -g mysharedg make mysharevol2 100g
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
Then add these volumes/file systems to the cluster configuration so they can be mounted on any or all nodes. Mount points will be created automatically.
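cfsmntadm registers a cluster mount with VCS; a sketch matching the mount points and crw mount options shown in the display output below:

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1 all=crw
serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2 all=crw

You can then verify the registration:

serverA # cfsmntadm display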
  Cluster Configuration for Node: serverA
  MOUNT POINT     TYPE      SHARED VOLUME    DISK GROUP    STATUS         MOUNT OPTIONS
  /mountpoint1    Regular   mysharevol1      mysharedg     NOT MOUNTED    crw
  /mountpoint2    Regular   mysharevol2      mysharedg     NOT MOUNTED    crw
That's it. Check your cluster configuration and try to bring the file systems online on your nodes.
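cfsmount mounts a registered cluster file system, on all nodes by default or only on the nodes you list; for example:

serverA # cfsmount /mountpoint1
serverA # cfsmount /mountpoint2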
serverA # hastatus -sum
-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group                         System     Probed     AutoDisabled

B  cvm                           serverA    Y          N
B  cvm                           serverB    Y          N
B  cvm                           serverC    Y          N
B  cvm                           serverD    Y          N
B  vrts_vea_cfs_int_cfsmount1    serverA    Y          N
B  vrts_vea_cfs_int_cfsmount1    serverB    Y          N
B  vrts_vea_cfs_int_cfsmount1    serverC    Y          N
B  vrts_vea_cfs_int_cfsmount1    serverD    Y          N
B  vrts_vea_cfs_int_cfsmount2    serverA    Y          N
...
Each volume gets its own service group, which looks really ugly, so you may want to modify your main.cf and group them, along the lines of the sketch below. Be creative!
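A minimal main.cf sketch of a combined group, assuming the stock CFSMount resource type; the group name cfsmounts and the resource names are made up, and a production configuration would also need the CVMVolDg resource and the usual resource-level dependencies:

group cfsmounts (
        SystemList = { serverA = 0, serverB = 1, serverC = 2, serverD = 3 }
        AutoStartList = { serverA, serverB, serverC, serverD }
        Parallel = 1
        )

        CFSMount cfsmount1 (
                MountPoint = "/mountpoint1"
                BlockDevice = "/dev/vx/dsk/mysharedg/mysharevol1"
                )

        CFSMount cfsmount2 (
                MountPoint = "/mountpoint2"
                BlockDevice = "/dev/vx/dsk/mysharedg/mysharevol2"
                )

        requires group cvm online local firm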