
Veritas Cluster File System (CFS)

CFS allows the same file system to be mounted simultaneously on multiple nodes in the cluster. CFS is designed with a master/slave architecture: any node can initiate an operation to create, delete, or resize data, but the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache, and a distributed locking mechanism called GLM (Group Lock Manager) keeps metadata and caches coherent across the nodes.

The examples here are:


1. Based on VCS 5.x, but should also work on 4.x.
2. A new 4-node cluster with no resources defined.
3. Disk groups and volumes will be created and shared across all nodes.

Before you configure CFS


1. Make sure you have an established cluster that is up and running properly.
2. Make sure these packages are installed on all nodes:
   VRTScavf  (Veritas cfs and cvm agents by Symantec)
   VRTSglm   (Veritas LOCK MGR by Symantec)
3. Make sure a license for Veritas CFS is installed on all nodes.
4. Make sure the vxfencing driver is active on all nodes (even if it is in disabled mode); a few quick checks are shown below.
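Before moving on, you can sanity-check items 2 to 4 above. The commands below are a rough sketch and assume a Solaris host (on Linux or HP-UX substitute rpm -q or swlist for pkginfo): vxlicrep reports the installed Veritas licenses, vxfenadm -d shows the current I/O fencing mode, and gabconfig -a shows GAB port membership (port b is the fencing driver).

# pkginfo -l VRTScavf VRTSglm
# vxlicrep
# vxfenadm -d
# gabconfig -a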

Check the status of the cluster


Here are some ways to check the status of your cluster. In these examples, CVM/CFS is not configured yet.

# cfscluster status

  NODE           CLUSTER MANAGER STATE        CVM STATE
  serverA        running                      not-running
  serverB        running                      not-running
  serverC        running                      not-running
  serverD        running                      not-running

Error: V-35-41: Cluster not configured for data sharing application


# vxdctl -c mode

mode: enabled: cluster inactive


# /etc/vx/bin/vxclustadm nidmap

Out of cluster: No mapping information available


# /etc/vx/bin/vxclustadm -v nodestate

state: out of cluster

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

Configure the cluster for CFS


During configuration, Veritas will pick up all the information already set in your cluster configuration and will activate CVM on all the nodes.

# cfscluster config

The cluster configuration information as read from cluster configuration file is as follows:

        Cluster : MyCluster
        Nodes   : serverA serverB serverC serverD

You will now be prompted to enter the information pertaining to the cluster and the individual nodes.

Specify whether you would like to use GAB messaging or TCP/UDP messaging. If you choose gab messaging then you will not have to configure IP addresses. Otherwise you will have to provide IP addresses for all the nodes in the cluster.

------- Following is the summary of the information: -------
        Cluster   : MyCluster
        Nodes     : serverA serverB serverC serverD
        Transport : gab
-------------------------------------------------------------

Waiting for the new configuration to be added.

========================================================
        Cluster File System Configuration is in progress...
        cfscluster: CFS Cluster Configured Successfully

Check the status of the cluster


Now let's check the status of the cluster again. Notice that there is now a new service group, cvm. CVM must be online before we can bring up any clustered filesystem on the nodes.

# cfscluster status

  Node             :  serverA
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration

  Node             :  serverB
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration

  Node             :  serverC
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration

  Node             :  serverD
  Cluster Manager  :  running
  CVM state        :  running
  No mount point registered with cluster configuration

# vxdctl -c mode

mode: enabled: cluster active - MASTER
master: serverA

# /etc/vx/bin/vxclustadm nidmap

Name       CVM Nid    CM Nid    State
serverA    0          0         Joined: Master
serverB    1          1         Joined: Slave
serverC    2          2         Joined: Slave
serverD    3          3         Joined: Slave

# /etc/vx/bin/vxclustadm -v nodestate

state: cluster member
        nodeId=0
        masterId=1
        neighborId=1
        members=0xf
        joiners=0x0
        leavers=0x0
        reconfig_seqnum=0xf0a810
        vxfen=off

# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group        System     Probed     AutoDisabled    State

B  cvm          serverA    Y          N               ONLINE
B  cvm          serverB    Y          N               ONLINE
B  cvm          serverC    Y          N               ONLINE
B  cvm          serverD    Y          N               ONLINE
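If the cvm group does not come online on one of the nodes, it can be checked and brought up manually with the standard VCS commands (a sketch; substitute the name of the node that is offline):

# hagrp -state cvm
# hagrp -online cvm -sys serverB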

Creating a Shared Disk Group and Volumes/Filesystems

This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before they can be used by Volume Manager. When you place a disk under Volume Manager control, the disk is initialized; initialization destroys any existing data on the disk. Before you begin, make sure the disks that you will add to the shared disk group are directly attached to all the cluster nodes. First, make sure you are on the master node:

serverA # vxdctl -c mode

mode: enabled: cluster active - MASTER
master: serverA

Initialize the disks you want to use, making sure they are attached to all the cluster nodes. You may optionally specify the disk format:

serverA # vxdisksetup -if EMC0_1 format=cdsdisk
serverA # vxdisksetup -if EMC0_2 format=cdsdisk
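Before creating the disk group, it is worth confirming that every node actually sees the newly initialized disks. A simple check (a sketch, run on each node in turn) is:

# vxdisk -o alldgs list | grep EMC0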

Create a shared disk group with the disks you just initialized.

serverA # vxdg -s init mysharedg mysharedg01=EMC0_1 mysharedg02=EMC0_2
serverA # vxdg list

NAME            STATE                   ID
mysharedg       enabled,shared,cds      1231954112.163.serverA
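For more detail on the new disk group, including confirmation that the shared and cds flags are set, you can also query it directly (output omitted here):

serverA # vxdg list mysharedg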

Now let's add that new disk group to our cluster configuration, giving all nodes in the cluster the shared-write (sw) activation mode.

serverA # cfsdgadm add mysharedg all=sw

Disk Group is being added to cluster configuration...

Verify that the cluster configuration has been updated.


serverA # grep mysharedg /etc/VRTSvcs/conf/config/main.cf

                ActivationMode @serverA = { mysharedg = sw }
                ActivationMode @serverB = { mysharedg = sw }
                ActivationMode @serverC = { mysharedg = sw }
                ActivationMode @serverD = { mysharedg = sw }

serverA # cfsdgadm display

  Node Name : serverA
  DISK GROUP           ACTIVATION MODE
  mysharedg            sw

  Node Name : serverB
  DISK GROUP           ACTIVATION MODE
  mysharedg            sw

  Node Name : serverC
  DISK GROUP           ACTIVATION MODE
  mysharedg            sw

  Node Name : serverD
  DISK GROUP           ACTIVATION MODE
  mysharedg            sw

We can now create volumes and filesystems within the shared diskgroup.
serverA # vxassist -g mysharedg make mysharevol1 100g
serverA # vxassist -g mysharedg make mysharevol2 100g
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
serverA # mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
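To verify that the volumes were created before handing them to the cluster, vxprint can be run against the disk group on the master node (a quick sketch; output not shown):

serverA # vxprint -g mysharedg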

Then add these volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes. Mountpoints will be automatically created.

serverA # cfsmntadm add mysharedg mysharevol1 /mountpoint1

Mount Point is being added...
/mountpoint1 added to the cluster-configuration


serverA # cfsmntadm add mysharedg mysharevol2 /mountpoint2

Mount Point is being added...
/mountpoint2 added to the cluster-configuration

Display the CFS mount configurations in the cluster.


serverA # cfsmntadm display -v

Cluster Configuration for Node: serverA
MOUNT POINT      TYPE      SHARED VOLUME    DISK GROUP    STATUS         MOUNT OPTIONS
/mountpoint1     Regular   mysharevol1      mysharedg     NOT MOUNTED    crw
/mountpoint2     Regular   mysharevol2      mysharedg     NOT MOUNTED    crw

That's it. Check your cluster configuration and try to online the filesystems on your nodes (see the cfsmount example at the end of this page).
serverA # hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  serverA              RUNNING              0
A  serverB              RUNNING              0
A  serverC              RUNNING              0
A  serverD              RUNNING              0

-- GROUP STATE
-- Group                        System     Probed     AutoDisabled    State

B  cvm                          serverA    Y          N               ONLINE
B  cvm                          serverB    Y          N               ONLINE
B  cvm                          serverC    Y          N               ONLINE
B  cvm                          serverD    Y          N               ONLINE
B  vrts_vea_cfs_int_cfsmount1   serverA    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverB    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverC    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1   serverD    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverA    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverB    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverC    Y          N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2   serverD    Y          N               OFFLINE

Each volume will have its own service group, which looks really ugly, so you may want to modify your main.cf file and group them. Be creative!
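To bring the new mount points online, you can either online the generated service groups with hagrp or simply use cfsmount, which mounts a cluster-configured mount point on the listed nodes (or on all configured nodes if none are given). A sketch, assuming the mount points created above:

serverA # cfsmount /mountpoint1
serverA # cfsmount /mountpoint2 serverC serverD
serverA # hagrp -online vrts_vea_cfs_int_cfsmount1 -sys serverB

cfsumount works the same way in reverse if you need to take a mount point offline.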
