LLT and GAB Commands | Port Membership | Daemons | Log Files | Dynamic Configuration | Users | Resources | Resource Agents | Service Groups | Clusters | Cluster Status | System Operations | Service Group Operations | Resource Operations | Agent Operations | Starting and Stopping

LLT and GAB

VCS uses two components, LLT and GAB, to share data over the private networks among systems. These components provide the performance and reliability required by VCS.
LLT

LLT (Low Latency Transport) provides fast, kernel-to-kernel communications and monitors network connections. The system administrator configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs in layer 2 of the network stack.

GAB

GAB (Group Membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk communications such as that required by the VCS heartbeat utility. The system administrator configures the GAB driver by creating a configuration file (gabtab).

/etc/llthosts
    A database containing one entry per system, linking the LLT system ID with the host's name. The file is identical on every system in the cluster.

/etc/llttab
    Contains information that is derived during installation and is used by the utility lltconfig.

/etc/gabtab
    Contains the information needed to configure the GAB driver; this file is used by the gabconfig utility.

/etc/VRTSvcs/conf/config/main.cf
    The VCS configuration file. The file contains the information that defines the cluster and its systems.
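The three files fit together as below. This is a minimal sketch for a hypothetical two-node Solaris cluster; the node names sun1/sun2, cluster ID 1, and the qfe interface names are illustrative assumptions, not values from this document:

```
# /etc/llthosts (identical on both nodes): LLT node ID -> host name
0 sun1
1 sun2

# /etc/llttab (on sun1): node identity, cluster ID, private links
set-node sun1
set-cluster 1
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -

# /etc/gabtab: seed GAB once both nodes are up
/sbin/gabconfig -c -n2
```

Each node carries its own llttab (with its own set-node line), while llthosts and gabtab are the same cluster-wide.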
Gabtab Entries

/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -s 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -s 1124
/sbin/gabconfig -c -n2

gabdiskconf    -i   Initialises the disk region
               -s   Start Block
               -S   Signature

gabdiskhb      -a   Add a gab disk heartbeat resource
(heartbeat     -s   Start Block
disks)         -p   Port
               -S   Signature

gabconfig      -c   Configure the driver for use
               -n   Number of systems
Note: port a indicates that GAB is communicating, port h indicates that VCS is started.

stop GAB running                              gabconfig -U
start the GAB                                 gabconfig -c -n <number of nodes>
override the seed values (start with fewer
than the configured number of nodes)          gabconfig -c -x

GAB Port Membership

list GAB port membership                      gabconfig -a
unregister port f                             /opt/VRTS/bin/fsclustadm cfsdeinit

Port Function
a   GAB driver
b   I/O fencing (designed to guarantee data integrity)
d   ODM (Oracle Disk Manager)
f   CFS (Cluster File System)
h   VCS (VERITAS Cluster Server: high availability daemon)
o   VCSMM driver (kernel membership module used with Oracle RAC)
q   QuickLog daemon
v   CVM (Cluster Volume Manager)
w   vxconfigd (module for CVM)
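Port membership can also be checked mechanically by parsing `gabconfig -a`. A sketch, assuming output in the usual "Port <x> gen <gen> membership <ids>" shape; the sample text below is illustrative, not captured from a real cluster:

```shell
#!/bin/sh
# List which GAB ports currently have membership.
# On a live node, replace the sample with: out=$(gabconfig -a)
out='GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01'

# Each membership line starts with "Port"; field 2 is the port letter.
ports=$(printf '%s\n' "$out" | awk '/^Port / { print $2 }')
echo "$ports"

# port a => GAB itself is communicating; port h => had (VCS) is up
case "$ports" in
  *a*) echo "GAB: up" ;;
  *)   echo "GAB: down" ;;
esac
```

The same pattern extends to any port letter from the table above (e.g. test for v before assuming CVM is running).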
Cluster daemons

High Availability Daemon                      had
Companion Daemon                              hashadow
Resource Agent daemon                         <resource>Agent
Web Console cluster management daemon         CmdServer

Log Files

VCS log directory                             /var/VRTSvcs/log
VCS engine log                                /var/VRTSvcs/log/engine_A.log
Starting and Stopping VCS

start the cluster engine                      hastart [-stale|-force]
bring the cluster into running mode from a
stale state, using the configuration file
from a particular server                      hasys -force <server_name>
stop the cluster on the local server, taking
the service groups offline                    hastop -local
stop the cluster on the local server but
evacuate (failover) the service groups to
another node                                  hastop -local -evacuate
stop the cluster on all nodes                 hastop -all
Cluster Status

display cluster summary                       hastatus -summary
continually monitor cluster                   hastatus
verify the cluster is operating               hasys -display
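hastatus -summary lends itself to scripting. A sketch that extracts where each group is online, assuming the usual summary layout in which system lines begin with "A" and group-state lines with "B"; the sample output below is illustrative, not taken from a real cluster:

```shell
#!/bin/sh
# Report where each service group is ONLINE.
# On a live node, replace the sample with: out=$(hastatus -summary)
out='-- SYSTEM STATE
-- System               State                Frozen
A  sun1                 RUNNING              0
A  sun2                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
B  groupw          sun1                 Y          N               ONLINE
B  groupw          sun2                 Y          N               OFFLINE'

# B lines: <B> <group> <system> <probed> <autodisabled> <state>
online=$(printf '%s\n' "$out" | awk '$1 == "B" && $6 == "ONLINE" { print $2 "@" $3 }')
echo "$online"
```

Swapping the $6 test for FAULTED or PARTIAL gives a quick health check usable from cron or a monitoring agent.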
Cluster Details

information about a cluster                   haclus -display
value for a specific cluster attribute        haclus -value <attribute>
modify a cluster attribute                    haclus -modify <attribute name> <new>
enable LinkMonitoring                         haclus -enable LinkMonitoring
disable LinkMonitoring                        haclus -disable LinkMonitoring
Users

add a user                                    hauser -add <username>
modify a user                                 hauser -update <username>
delete a user                                 hauser -delete <username>
display all users                             hauser -display
System Operations

add a system to the cluster                   hasys -add <sys>
delete a system from the cluster              hasys -delete <sys>
modify a system's attributes                  hasys -modify <sys> <modify options>
list a system state                           hasys -state
force a system to start                       hasys -force
display a system's attributes                 hasys -display [-sys]
list all the systems in the cluster           hasys -list
change the load attribute of a system         hasys -load <system> <value>
display the value of a system's node ID
(/etc/llthosts)                               hasys -nodeid
freeze a system (no offlining system,
no groups onlining)                           hasys -freeze [-persistent][-evacuate] <system>
unfreeze a system (reenable groups and
bring resources back online)                  hasys -unfreeze [-persistent] <system>
Dynamic Configuration

The VCS configuration must be in read/write mode in order to make changes. While in read/write mode the configuration is considered stale, and a .stale file is created in $VCS_CONF/conf/config. When the configuration is put back into read-only mode the .stale file is removed.
change configuration to read/write mode       haconf -makerw
change configuration to read-only mode        haconf -dump -makero
check what mode the cluster is running in     haclus -display | grep -i 'readonly'
                                              0 = write mode
                                              1 = read only mode
check the configuration file                  hacf -verify /etc/VRTSvcs/conf/config
                                              (you can point to any directory as long as
                                              it has main.cf and types.cf)
convert a main.cf file into cluster commands  hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp
convert a command file into a main.cf file    hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config
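Since every change must be bracketed by haconf -makerw and haconf -dump -makero, it is worth wrapping that pattern once. A sketch under stated assumptions: the vcs_edit function, the dry-run guard, and the LogSize value are my own; with VCS_DRYRUN=1 (the default here) the commands are only echoed, so the script is safe to run on any machine:

```shell
#!/bin/sh
# Run a batch of VCS configuration commands inside a
# makerw ... dump -makero bracket.
VCS_DRYRUN=${VCS_DRYRUN:-1}

# echo the command in dry-run mode, execute it otherwise
run() {
  if [ "$VCS_DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

vcs_edit() {
  run haconf -makerw || return 1
  "$@"                            # caller's commands, issued via run
  run haconf -dump -makero        # write main.cf and drop the .stale file
}

# usage: change a cluster attribute (attribute and value are illustrative)
my_change() {
  run haclus -modify LogSize 33554432
}
vcs_edit my_change
```

Keeping the dump -makero step in one place avoids the classic mistake of leaving the cluster stale in read/write mode after an edit.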
Service Groups

add a service group

    haconf -makerw
    hagrp -add groupw
    hagrp -modify groupw SystemList <sys> <priority> [<sys> <priority>]
    hagrp -autoenable groupw -sys <sys>
    haconf -dump -makero

delete a service group

    haconf -makerw
    hagrp -delete groupw
    haconf -dump -makero

change a service group

    haconf -makerw
    hagrp -modify groupw SystemList <sys> <priority> [<sys> <priority>]
    haconf -dump -makero

change a service group to run on alternate systems

    # add the new host to the system list (don't forget the priority)
    hagrp -modify grp_zlnrssd SystemList -add <hostname> <priority>
    # update the autostart list
    hagrp -modify grp_zlnrssd AutoStartList <host> <host>

list the service groups                       hagrp -list
list the groups dependencies                  hagrp -dep <group>
list the parameters of a group                hagrp -display <group>
display a service group's resources           hagrp -resources <group>
display the current state of the
service group                                 hagrp -state <group>
clear a faulted non-persistent resource
in a specific grp                             hagrp -clear <group> [-sys <host>]

Service Group Operations

start a service group and bring its
resources online                              hagrp -online <group> -sys <sys>
stop a service group and take its
resources offline                             hagrp -offline <group> -sys <sys>
switch a service group from one system
to another                                    hagrp -switch <group> to <sys>
enable all the resources in a group           hagrp -enableresources <group>
disable all the resources in a group          hagrp -disableresources <group>
freeze a service group (disable onlining
and offlining); to check, run
hagrp -display <group> | grep -i frozen       hagrp -freeze <group> [-persistent]
unfreeze a service group (enable onlining
and offlining)                                hagrp -unfreeze <group> [-persistent]
flush a service group (use if a group is
brought online or taken offline and the
operation hangs)                              hagrp -flush <group> -sys <system>
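A planned failover with these commands can be scripted. A sketch; the group and system names are placeholders, and the dry-run guard means nothing is executed unless VCS_DRYRUN=0 is set explicitly:

```shell
#!/bin/sh
# Planned move of a service group to another system.
VCS_DRYRUN=${VCS_DRYRUN:-1}
run() { if [ "$VCS_DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

group=groupw          # service group to move (placeholder)
target=sun2           # destination system (placeholder)

run hagrp -switch "$group" to "$target"

# if the switch hangs, flush the group and investigate:
# run hagrp -flush "$group" -sys "$target"

run hagrp -state "$group"
```

-switch both offlines the group on the current system and onlines it on the target, so it is preferable to a manual -offline / -online pair for a routine move.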
Resources

add a resource

    haconf -makerw
    hares -add appDG DiskGroup groupw
    hares -modify appDG Enabled 1
    hares -modify appDG DiskGroup appdg
    hares -modify appDG StartVolumes 0
    haconf -dump -makero

delete a resource

    haconf -makerw
    hares -delete <resource>
    haconf -dump -makero

change a resource

    haconf -makerw
    hares -modify appDG Enabled 1
    haconf -dump -makero

change a resource attribute to be
globally wide                                 hares -global <resource> <attribute> <value>
change a resource attribute to be
locally wide                                  hares -local <resource> <attribute> <value>
list the parameters of a resource             hares -display <resource>
list the resources                            hares -list
list the resource dependencies                hares -dep
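Resources in a group usually need dependencies as well; hares -link <parent> <child> wires them (link/unlink are standard hares operations, though not shown in the table above). A dry-run sketch that builds a disk group resource and a mount resource on top of it; the resource names and the MountPoint value are placeholders:

```shell
#!/bin/sh
# Add two resources and make the mount depend on the disk group.
VCS_DRYRUN=${VCS_DRYRUN:-1}
run() { if [ "$VCS_DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run haconf -makerw
run hares -add appDG DiskGroup groupw
run hares -modify appDG DiskGroup appdg
run hares -add appMNT Mount groupw
run hares -modify appMNT MountPoint /apps
# the mount (parent) requires the disk group (child)
run hares -link appMNT appDG
run haconf -dump -makero
```

With the link in place, onlining appMNT pulls appDG online first, and hares -dep shows the relationship.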
Resource Operations

online a resource                             hares -online <resource> [-sys]
offline a resource                            hares -offline <resource> [-sys]
display the state of a resource
(offline, online, etc)                        hares -state
display the parameters of a resource          hares -display <resource>
offline a resource and propagate the
command to its children                       hares -offprop <resource> -sys <sys>
cause a resource agent to immediately
monitor the resource                          hares -probe <resource> -sys <sys>
clear a faulted resource (automatically
initiates the onlining)                       hares -clear <resource> [-sys]
Resource Types

add a resource type                           hatype -add <type>
remove a resource type                        hatype -delete <type>
list all resource types                       hatype -list
display a resource type                       hatype -display <type>
list a particular resource type               hatype -resources <type>
change a particular resource
type's attributes                             hatype -value <type> <attr>
Resource Agents

add an agent                                  pkgadd -d . <agent package>
remove an agent                               pkgrm <agent package>
change an agent                               n/a
list all ha agents                            haagent -list
display an agent's run-time information,
i.e. has it started, is it running?           haagent -display <agent_name>