
Veritas Cluster Server 6.0 for UNIX:
Install and Configure
Lessons

100-002685-A

COURSE DEVELOPERS

Bilge Gerrits
Steve Hoffer
Siobhan Seeger
Pete Toemmes

LEAD SUBJECT MATTER EXPERTS

Graeme Gofton
Sean Nockles
Brad Willer

TECHNICAL
CONTRIBUTORS AND
REVIEWERS


Geoff Bergren
Kelli Cameron
Tomer Gurantz
Anthony Herr
James Kenney
Gene Henriksen
Bob Lucas
Paul Johnston
Rod Pixley
Clifford Barcliff
Danny Yonkers
Antonio Antonucci
Satoko Saito
Feng Liu

Copyright 2012 Symantec Corporation. All rights reserved.


Symantec, the Symantec Logo, and VERITAS are trademarks or
registered trademarks of Symantec Corporation or its affiliates in
the U.S. and other countries. Other names may be trademarks of
their respective owners.
THIS PUBLICATION IS PROVIDED AS IS AND ALL
EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS
AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE
DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR
INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE,
OR USE OF THIS PUBLICATION. THE INFORMATION
CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT
NOTICE.
No part of the contents of this book may be reproduced or
transmitted in any form or by any means without the written
permission of the publisher.
Veritas Cluster Server 6.0 for UNIX: Install and Configure
Symantec Corporation
World Headquarters
350 Ellis Street
Mountain View, CA 94043
United States
http://www.symantec.com

Table of Contents
Course Introduction
Veritas Cluster Server curriculum path.................................................... Intro-2
Cluster design ......................................................................................... Intro-5
Courseware contents .............................................................................. Intro-7
Lesson 1: High Availability Concepts
High availability concepts ............................................................................. 1-3
Clustering concepts...................................................................................... 1-5
HA application services ................................................................................ 1-8
Clustering prerequisites.............................................................................. 1-12
High availability references ........................................................................ 1-14
Lesson 2: VCS Building Blocks
VCS terminology .......................................................................................... 2-3
Cluster communication............................................................................... 2-12
VCS architecture ........................................................................................ 2-17
Lesson 3: Preparing a Site for VCS
Hardware requirements and recommendations ........................................... 3-3
Software requirements and recommendations............................................. 3-5
Preparing installation information ............................................................... 3-10
Preparing to upgrade.................................................................................. 3-14


Lesson 4: Installing VCS


Using the Common Product Installer............................................................ 4-3
VCS configuration files ............................................................................... 4-10
Viewing the default VCS configuration ....................................................... 4-13
Updates and upgrades ............................................................................... 4-17
Cluster management tools ......................................................................... 4-19

Lesson 5: VCS Operations


Common VCS tools and operations ............................................................. 5-3
Service group operations ............................................................................. 5-7
Resource operations .................................................................................. 5-13
Using the VCS Simulator............................................................................ 5-15
Lesson 6: VCS Configuration Methods
Starting and stopping VCS ........................................................................... 6-3
Overview of configuration methods .............................................................. 6-8
Online configuration ..................................................................................... 6-9
Offline configuration ................................................................................... 6-15
Controlling access to VCS.......................................................................... 6-16
Lesson 7: Preparing Services for VCS

Preparing applications for VCS..................................................................... 7-3


Performing one-time configuration tasks ...................................................... 7-5
Testing the application service ................................................................... 7-10
Stopping and migrating an application service ........................................... 7-19
Collecting configuration information............................................................ 7-21
Lesson 8: Online Configuration
Online service group configuration ............................................................... 8-3
Adding resources.......................................................................................... 8-7
Solving common configuration errors ......................................................... 8-17
Testing the service group ........................................................................... 8-20
Lesson 9: Offline Configuration
Offline configuration examples ..................................................................... 9-3
Offline configuration procedures................................................................... 9-6
Solving offline configuration problems ........................................................ 9-15
Testing the service group ........................................................................... 9-19
Lesson 10: Configuring Notification
Notification overview................................................................................... 10-3
Configuring notification ............................................................................... 10-8
Overview of triggers.................................................................................. 10-15
Lesson 11: Handling Resource Faults
VCS response to resource faults ................................................................ 11-3
Determining failover duration...................................................................... 11-9
Controlling fault behavior .......................................................................... 11-14
Recovering from resource faults............................................................... 11-18
Fault notification and event handling ........................................................ 11-20


Lesson 12: Intelligent Monitoring Framework


IMF overview .............................................................................................. 12-3
IMF configuration ........................................................................................ 12-7
Faults and failover with intelligent monitoring ........................................... 12-13

Lesson 13: Cluster Communications


VCS communications review ...................................................................... 13-3
Cluster interconnect configuration .............................................................. 13-7
Joining the cluster membership ................................................................ 13-12
System and cluster interconnect failures .................................................. 13-15
Changing the interconnect configuration .................................................. 13-21
Lesson 14: Protecting Data Using SCSI 3-Based Fencing
Data protection requirements ..................................................................... 14-3
I/O fencing concepts ................................................................................... 14-5
I/O fencing operations................................................................................. 14-9
I/O fencing implementation ....................................................................... 14-16


Configuring I/O fencing............................................................................. 14-20


Lesson 15: Coordination Point Server


Coordination points .................................................................................... 15-3
CPS operations ........................................................................................ 15-10
Installing and configuring CP servers....................................................... 15-17
Installing and configuring CP client clusters............................................. 15-23
CPS administration................................................................................... 15-29
Coordination point agent .......................................................................... 15-35


Course Introduction

Veritas Cluster Server curriculum path


The Veritas Cluster Server for UNIX curriculum is a series of courses that are
designed to provide a full range of expertise with Veritas Cluster Server (VCS)
high availability solutions, from design through disaster recovery.
Veritas Cluster Server for UNIX: Install and Configure
This course covers installation and configuration of common VCS
environments, focusing on two-node clusters running application and database
services.
Veritas Cluster Server for UNIX: Manage and Administer
This course focuses on multinode VCS clusters and advanced topics related to
managing more complex cluster configurations.
eLearning Library
The eLearning Library is available with bundled training options and includes
content on advanced high availability and disaster recovery features.


Course overview


This training provides comprehensive instruction on the installation and initial
configuration of Veritas Cluster Server (VCS). The course covers principles and
methods that enable you to prepare, create, and test VCS service groups and
resources using tools that best suit your needs and your high availability
environment. You learn to configure and test failover and notification behavior,
cluster additional applications, and further customize your cluster according to
specified design criteria.


Manage and Administer course overview


The second part of the VCS Administration course includes two books:
Example Application Configurations
This book describes how to cluster applications, databases, and NFS file
sharing services.
Cluster Management
This book describes how to customize service groups to implement more
complex configurations. Also covered are high availability and disaster
recovery solutions in enterprise environments.


Cluster design
Sample cluster design input
A VCS design can be presented in many different formats with varying levels of
detail.
In some cases, you may have only the information about the application services
that need to be clustered and the desired operational behavior in the cluster. For
example, you may be told that the application service uses multiple network ports
and requires local failover capability among those ports before it fails over to
another system.


In other cases, you may have the information you need as a set of diagrams with
notes on various aspects of the desired cluster operations.


If you receive design information that does not detail the resources, develop a
detailed design worksheet before starting the deployment.
Using a design worksheet to document all aspects of your high availability
environment helps ensure that you are well-prepared to start implementing your
cluster design.
In this course, you are provided with a set of design worksheets showing sample
values as a tool for implementing the cluster design in the lab exercises.
You can use a similar format to collect all the information you need before starting
deployment at your site.


Lab design for the course


The diagram shows a conceptual view of the cluster design used as an example
throughout this course and implemented in hands-on lab exercises.
Each aspect of the cluster configuration is described in greater detail, where
applicable, in course lessons.


The environment consists of:


Two two-node clusters, west and east
Several high availability services, including multiple failover service groups
and one parallel network service group
iSCSI shared storage, accessible from each node
Private Ethernet interfaces for the cluster interconnect network
Ethernet connections to the public network


Additional complexity is added to the design throughout the labs to illustrate
certain aspects of cluster configuration in later lessons. The design diagram shows
a conceptual view of the cluster design.


Courseware contents
This course consists of slides for each lesson that feature concepts, processes, and
examples. Each lab is introduced at the end of the lesson, explaining the goals of
the hands-on exercises. Quiz slides are provided to reinforce your understanding of
the lesson objectives.


The participant guides include a copy of the slide, along with supplementary
content with additional details supporting the slide content.


Two levels of lab guides are provided in the appendixes:


Appendix A provides steps for more experienced participants who want the
additional challenge of determining the tasks to be performed.
Appendix B provides steps and their detailed solutions, showing the
commands and output needed to successfully complete the tasks.
In most cases, optional advanced exercises are provided as an additional
challenge for more experienced participants. These can be skipped, if desired,
without affecting subsequent labs.
Other appendixes may be present, which provide supplementary information that
may be of interest to some participants, but is outside the scope of the course
objectives.


Typographic conventions used in this course


The following tables describe the typographic conventions used in this course.
Typographic conventions in text and commands
Convention: Courier New, bold
Element: Command input, both syntax and examples
Examples:
  To display the robot and drive configuration:
    tpconfig -d
  To display disk information:
    vxdisk -o alldgs list

Convention: Courier New, plain
Element: Command output; command names, directory names, file names,
path names, and URLs when used within regular text paragraphs
Examples:
  In the output:
    protocol_minimum: 40
    protocol_maximum: 60
    protocol_current: 0
  Locate the altnames directory.
  Go to http://www.symantec.com.
  Enter the value 300.
  Log on as user1.

Convention: Courier New, italic, bold or plain
Element: Variables in command syntax and examples; variables in command
input are italic, plain, and variables in command output are italic, bold
Examples:
  To install the media server:
    /cdrom_directory/install
  To access a manual page:
    man command_name
  To display detailed information for a disk:
    vxdisk -g disk_group list disk_name

Typographic conventions in graphical user interface descriptions


Convention: Arrow
Element: Menu navigation paths
Examples:
  Select File > Save.

Convention: Initial capitalization
Element: Buttons, menus, windows, options, and other interface elements
Examples:
  Click Next.
  Open the Task Status window.
  Clear the Print File check box.

Convention: Bold
Element: Interface elements
Examples:
  Mark the Include subvolumes in object view window check box.


Lesson 1

High Availability Concepts


High availability concepts


Levels of availability


Data centers may implement different levels of availability depending on their
availability requirements.
Backup: At minimum, all data needs to be protected using an effective backup
solution, such as Veritas NetBackup.
Data availability: Local mirroring provides real-time data availability within
the local data center. Point-in-time copy solutions protect against corruption.
Online configuration keeps data available to applications while storage is
expanded to accommodate growth. DMP provides resilience against path
failure.
Shared disk groups and cluster file systems: These features minimize
application failover time because the disk groups, volumes, and file systems
are available on multiple systems simultaneously.
Local clustering: The next level is an application clustering solution, such as
Veritas Cluster Server, for application and server availability.
Remote replication: After implementing local availability, you can further
ensure data availability in the event of a site failure by replicating data to a
remote site. Replication can be application-, host-, or array-based.
Remote clustering: Implementing remote clustering ensures that the
applications and data can be started at a remote site. Veritas Cluster Server
supports remote clustering with automatic site failover capability.


Costs of downtime
A Gartner study shows that large companies experienced a loss of between
$954,000 and $1,647,000 (USD) per month for nine hours of unplanned
downtime.
In addition to the monetary loss, downtime also results in loss of business
opportunities and reputation.
Planned downtime is almost as costly as unplanned. Planned downtime can be
significantly reduced by migrating a service to another server while maintenance is
performed.


Given the magnitude of the cost of downtime, the case for implementing a high
availability solution is clear.


Clustering concepts
The term cluster refers to multiple independent systems connected into a
management framework.


Types of clusters


A variety of clustering solutions are available for various computing purposes.


HA clusters: Provide resource monitoring and automatic startup and failover
Parallel processing clusters: Break large computational programs into smaller
tasks executed in parallel on multiple systems
Load balancing clusters: Monitor system load and distribute applications
automatically among systems according to specified criteria
High performance computing clusters: Use a collection of computing
resources to enhance application performance
Fault-tolerant clusters: Provide uninterrupted application availability
Fault tolerance guarantees 99.9999 percent availability, or approximately 30
seconds of downtime per year. Six 9s (99.9999 percent) availability is appealing,
but the costs of this solution are well beyond the affordability of most companies.
In contrast, high availability solutions can achieve five 9s (99.999 percent
availability, or less than five minutes of downtime per year) at a fraction of the cost.
The focus of this course is Veritas Cluster Server, which is primarily used for high
availability, although it also provides some support for parallel processing and load
balancing.


Local cluster configurations


Depending on your clustering solution, you may be able to implement a variety of
configurations, enabling you to deploy the clustering solution to best suit your HA
requirements and utilize existing hardware.
Active/Passive: In this configuration, an application runs on a primary or
master server and a dedicated redundant server is present to take over on any
failover.
Active/Active: In this configuration, each server is configured to run specific
applications or services, and essentially provides redundancy for its peer.
N-to-1: In this configuration, the applications fail over to the spare when a
system crashes. When the server is repaired, applications must be moved back
to their original systems.
N + 1: Similar to N-to-1, the applications restart on the spare after a failure.
Unlike the N-to-1 configuration, after the failed server is repaired, it can
become the redundant server.
N-to-N: This configuration is an active/active configuration that supports
multiple application services running on multiple servers. Each application
service is capable of being failed over to different servers in the cluster.


In the example shown in the slide, utilization is increased by reconfiguring four
active/passive clusters and one active/active cluster into one N-to-1 cluster and one
N-to-N cluster. This enables a savings of four systems.


Campus and global cluster configurations


Cluster configurations that enable data to be duplicated among multiple physical
locations protect against site-wide failures.
Campus clusters


The campus or stretch cluster environment is a single cluster stretched over
multiple locations, connected by an Ethernet subnet for the cluster interconnect
and a Fibre Channel SAN, with storage mirrored at each location.


Advantages of this configuration are:
It provides local high availability within each site as well as protection against
site failure.
It is a cost-effective solution; replication is not required.
Recovery time is short.
The data center can be expanded.
You can leverage existing infrastructure.
Global clusters
Global clusters, or wide-area clusters, contain multiple clusters in different
geographical locations. Global clusters protect against site failures by providing
data replication and application failover to remote data centers.
Global clusters are not limited by distance because cluster communication uses
TCP/IP. Replication can be provided by hardware vendors or by a software
solution, such as Veritas Volume Replicator, for heterogeneous array support.

HA application services
An application service is a collection of hardware and software components
required to provide a service, such as a Web site, that an end-user can access by
connecting to a particular network IP address or host name. Each application
service typically requires components of the following three types:
Application binaries (executables)
Network
Storage
If an application service needs to be switched to another system, all of the
components of the application service must migrate together to re-create the
service on another system.

These are the same components that the administrator must manually move from a
failed server to a working server to keep the service available to clients in a
nonclustered environment.


Application service examples include:


A Web service consisting of a Web server program, IP addresses, associated
network interfaces used to allow access into the Web site, a file system
containing Web data files, and a volume and disk group containing the file
system.
A database service may consist of one or more IP addresses, database
management software, a file system containing data files, a volume and disk
group on which the file system resides, and a NIC for network access.


Local application service failover


Cluster management software performs a series of tasks in order for clients to
access a service on another server in the event a failure occurs. The software must:
Ensure that data stored on the disk is available to the new server, if shared
storage is configured (Storage).
Move the IP address of the old server to the new server (Network).
Start up the application on the new server (Application).


The process of stopping an application service on one system and starting it on
another system in response to a fault is referred to as a failover.
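A failover can also be initiated manually for planned maintenance. The following
is a sketch using the VCS command line, with a hypothetical service group name
websg and a hypothetical target system sys2:

  hagrp -switch websg -to sys2
  hagrp -state websg

The first command performs an orderly migration of the service group to the
other system; the second reports where the group is currently online.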



Local and global failover


In a global cluster environment, the application services are generally highly
available within a local cluster, so faults are first handled by the HA software,
which performs a local failover.


When HA methods such as replication and clustering are implemented across
geographical locations, recovery procedures are started immediately at a remote
location when a disaster takes down a site.


Application requirements for clustering


The most important requirements for an application to run in a cluster are crash
tolerance and host independence. This means that the application should be able to
recover after a crash to a known state, in a predictable and reasonable time, on two
or more hosts.


Most commercial applications today satisfy this requirement. More specifically, an
application is considered well-behaved and can be controlled by clustering
software if it meets the requirements shown in the slide.



Clustering prerequisites
Hardware and infrastructure redundancy
All failovers cause some type of client disruption. Depending on your
configuration, some applications take longer to fail over than others. For this
reason, good design dictates that the HA software first try to fail over within the
system, using agents that monitor local resources.
Design as much resiliency as possible into the individual servers and components
so that you do not have to rely on any hardware or software to cover a poorly
configured system or application. Likewise, try to use all resources to make
individual servers as reliable as possible.
Single point of failure analysis

Determine whether any single points of failure exist in the hardware, software, and
infrastructure components within the cluster environment.


Any single point of failure becomes the weakest link of the cluster. The application
is equally inaccessible if a client network connection fails, or if a server fails.
Also consider the location of redundant components. Having redundant hardware
equipment in the same location is not as effective as placing the redundant
component in a separate location.
In some cases, the cost of redundant components outweighs the risk that the
component will become the cause of an outage. For example, buying an additional
expensive storage array may not be practical. Decisions about balancing cost
versus availability need to be made according to your availability requirements.


External dependencies
Whenever possible, it is good practice to eliminate or reduce reliance by high
availability applications on external services. If it is not possible to avoid outside
dependencies, ensure that those services are also highly available.


For example, network name and information services, such as DNS (Domain
Name System) and NIS (Network Information Service), are designed with
redundant capabilities.



High availability references


Use these references as resources for building a complete understanding of high
availability environments within your organization.
The Resilient Enterprise: Recovering Information Services from Disasters
This book explains the nature of disasters and their impacts on enterprises,
organizing and training recovery teams, acquiring and provisioning recovery
sites, and responding to disasters.
Blueprints for High Availability: Designing Resilient Distributed Systems
This book provides a step-by-step guide for building systems and networks
with high availability, resiliency, and predictability.
High Availability Design, Techniques, and Processes
This guide describes how to create systems that are easier to maintain, and
defines ongoing availability strategies that account for business change.
Designing Storage Area Networks
The text offers practical guidelines for using diverse SAN technologies to
solve existing networking problems in large-scale corporate networks. With
this book, you learn how the technologies work and how to organize their
components into an effective, scalable design.
Storage Area Network Essentials: A Complete Guide to Understanding and
Implementing SANs (VERITAS Series)
This book identifies the properties, architectural concepts, technologies,
benefits, and pitfalls of storage area networks (SANs).



Lesson 2

VCS Building Blocks


VCS terminology
VCS cluster


A VCS cluster is a collection of independent systems working together under the
VCS management framework for increased service availability.


VCS clusters have the following components:


Up to 64 systems, sometimes referred to as nodes or servers. Each system runs
its own operating system.
A cluster interconnect, which enables cluster communications
A public network, connecting each system in the cluster to a LAN for client
access
Shared storage (optional), accessible by each system in the cluster that needs to
run the application


Service groups
A service group is a virtual container that enables VCS to manage an application
service as a unit. The service group contains all the hardware and software
components required to run the service. The service group enables VCS to
coordinate failover of the application service resources in the event of failure or at
the administrator's request.


A service group is defined by these attributes:


The cluster-wide unique name of the group
The list of the resources in the service group, usually determined by which
resources are needed to run a specific application service
The dependency relationships between the resources
The list of cluster systems on which the group is allowed to run
The list of cluster systems on which you want the group to start automatically
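Several of these attributes map directly to entries in the cluster configuration
file. A minimal sketch of a service group definition as it might appear in
main.cf, with hypothetical group and system names:

  group websg (
      SystemList = { sys1 = 0, sys2 = 1 }
      AutoStartList = { sys1 }
      )

SystemList is the list of systems on which the group is allowed to run, with
failover priorities, and AutoStartList is the list of systems on which the group
starts automatically.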



Service group types


Service groups can be one of three types:

Failover
This service group runs on one system at a time in the cluster. Most application
services, such as database and NFS servers, use this type of group.
Parallel
This service group runs simultaneously on more than one system in the cluster.
This type of service group requires an application that can be started on more
than one system at a time without threat of data corruption.
Hybrid
This service group behaves as a failover group within a defined system zone
and as a parallel group across system zones. Hybrid groups are used in
replicated data clusters.



Resources
Resources are VCS objects that correspond to hardware or software components,
such as the application, the networking components, and the storage components.
VCS controls resources through these actions:
Bringing a resource online (starting)
Taking a resource offline (stopping)
Monitoring a resource (probing)
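These actions correspond to VCS commands. A sketch, assuming a hypothetical
resource named webip and a system named sys1:

  hares -online webip -sys sys1
  hares -offline webip -sys sys1
  hares -probe webip -sys sys1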


Resource categories
Persistent (never turned off):
None: VCS can only monitor persistent resources; these resources cannot be
brought online or taken offline. The most common example of a persistent
resource is a network interface card (NIC), because it must be present but
cannot be stopped.
On-only: VCS brings the resource online if required but does not stop the
resource if the associated service group is taken offline. ProcessOnOnly is a
resource type used to start, but not stop, a process such as a daemon.
Nonpersistent (also known as on-off):
Most resources fall into this category, meaning that VCS brings them online
and takes them offline as required. Examples are Mount, IP, and Process.



Resource dependencies
Resources depend on other resources because of application or operating system
requirements. Dependencies are defined to configure VCS for these requirements.


Dependency rules


These rules apply to resource dependencies:


A parent resource depends on a child resource. In the diagram, the Mount
resource (parent) depends on the Volume resource (child). This dependency
illustrates the operating system requirement that a file system cannot be
mounted without the Volume resource being available.
Dependencies are homogeneous. Resources can only depend on other
resources.
No cyclical dependencies are allowed. There must be a clearly defined
starting point.
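In the cluster configuration, a dependency is expressed with the requires
keyword after the resource definitions in main.cf. A one-line sketch of the
relationship in the diagram, using hypothetical resource names:

  webmnt requires webvol

Here webmnt (a Mount resource, the parent) depends on webvol (a Volume
resource, the child).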


Resource attributes
Resource attributes define the specific characteristics of individual resources. As
shown in the slide, the resource attribute values for the sample resource of type
Mount correspond to the UNIX command line to mount a specific file system.
VCS uses the attribute values to run the appropriate command or system call to
perform an operation on the resource.
Each resource has a set of required attributes that must be defined in order to
enable VCS to manage the resource.


For example, the Mount resource has four required attributes that must be defined
for each resource of type Mount:
The directory of the mount point (MountPoint)
The device for the mount point (BlockDevice)
The type of file system (FSType)
The options for the fsck command (FsckOpt)


The first three attributes are the values used to build the UNIX mount command
shown in the slide. The FsckOpt attribute is used if the mount command fails. In
this case, VCS runs fsck with the specified options (-y, which means answer yes
to all fsck questions) and attempts to mount the file system again.
Some resources also have additional optional attributes you can define to control
how VCS manages a resource. In the Mount resource example, MountOpt is an
optional attribute you can use to define options to the UNIX mount command.
For example, if this is a read-only file system, you can specify -ro as the
MountOpt value.
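Putting the attributes together, a Mount resource with hypothetical values
MountPoint=/web, BlockDevice=/dev/vx/dsk/webdg/webvol, and FSType=vxfs is
roughly equivalent to this Solaris command line:

  mount -F vxfs /dev/vx/dsk/webdg/webvol /web

If the mount fails, the agent runs fsck -y on the block device and then retries
the mount.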

Resource types and type attributes


Resources are classified by resource type. For example, disk groups, network
interface cards (NICs), IP addresses, mount points, and databases are distinct types
of resources. VCS provides a set of predefined resource types (some bundled,
some add-ons) in addition to the ability to create new resource types.
Individual resources are instances of a resource type. For example, you may have
several IP addresses under VCS control. Each of these IP addresses is,
individually, a single resource of resource type IP.


A resource type can be thought of as a template that defines the characteristics or
attributes needed to define an individual resource (instance) of that type.


You can view the relationship between resources and resource types by comparing
the mount command for a resource on the previous slide with the mount syntax
on this slide. The resource type defines the syntax for the mount command. The
resource attributes fill in the values to form an actual command line.
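As an illustration, an abridged sketch of how the Mount type might appear in
types.cf (the actual definition includes more attributes than shown here):

  type Mount (
      static str ArgList[] = { MountPoint, BlockDevice, FSType, MountOpt, FsckOpt }
      str MountPoint
      str BlockDevice
      str FSType
      str MountOpt
      str FsckOpt
  )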


Agents: How VCS controls resources


Agents are processes that control resources. Each resource type has a
corresponding agent that manages all resources of that resource type. Each cluster
system runs only one agent process for each active resource type, no matter how
many individual resources of that type are in use.


Agents control resources using a defined set of actions, also called entry points.
The four entry points common to most agents are:
Online: Resource startup
Offline: Resource shutdown
Monitor: Probing the resource to retrieve status
Clean: Killing the resource or cleaning up as necessary when a resource fails to
be taken offline gracefully


The difference between offline and clean is that offline is an orderly termination
and clean is a forced termination. In UNIX, this can be thought of as the difference
between exiting an application and sending the kill -9 command to the
process.
Each resource type needs a different way to be controlled. To accomplish this, each
agent has a set of predefined entry points that specify how to perform each of the
four actions. For example, the startup entry point of the Mount agent mounts a
block device on a directory, whereas the startup entry point of the IP agent uses the
ifconfig (Solaris, AIX, HP-UX) or ip addr add (Linux) command to set
the IP address on a unique IP alias on the network interface.
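For example, on Linux the online entry point of the IP agent is conceptually
equivalent to a command such as the following, with hypothetical address and
device values:

  ip addr add 192.168.1.100/24 dev eth0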
VCS provides both predefined agents and the ability to create custom agents.

Bundled agents documentation


The Veritas Cluster Server Bundled Agents Reference Guide describes the agents
that are provided with VCS and defines the required and optional attributes for
each associated resource type.
Symantec also provides additional application and database agents in an Agent
Pack that is updated quarterly. Some examples of these agents are:
Data Loss Prevention
Documentum
IBM DB2 Database


Select Downloads > High Availability Agents at sort.symantec.com for a
complete list of agents available for VCS.


Note: The Veritas Cluster Server User's Guide provides an appendix with a complete
description of attributes for all cluster objects.
To obtain PDF versions of product documentation for VCS and agents, see the
SORT Web site.


Cluster communication
VCS requires a cluster communication channel between systems in a cluster to
serve as the cluster interconnect. This communication channel is also sometimes
referred to as the private network because it is often implemented using a
dedicated Ethernet network.
Symantec recommends that you use a minimum of two dedicated communication
channels with separate infrastructures (for example, multiple NICs and separate
network hubs) to implement a highly available cluster interconnect.


The cluster interconnect has two primary purposes:


Determine cluster membership: Membership in a cluster is determined by
systems sending and receiving heartbeats (signals) on the cluster interconnect.
This enables VCS to determine which systems are active members of the
cluster and which systems are joining or leaving the cluster.
In order to take corrective action on node failure, surviving members must
agree when a node has departed. This membership needs to be accurate and
coordinated among active members; nodes can be rebooted, powered off,
faulted, and added to the cluster at any time.
Maintain a distributed configuration: Cluster configuration and status
information for every resource and service group in the cluster is distributed
dynamically to all systems in the cluster.


Cluster communication is handled by the Group Membership Services/Atomic
Broadcast (GAB) mechanism and the Low Latency Transport (LLT) protocol, as
described in the next sections.


Low-Latency Transport


Clustering technologies from Symantec use a high-performance, low-latency
protocol for communications. LLT is designed for the high-bandwidth and
low-latency needs of not only Veritas Cluster Server, but also Veritas Cluster File
System and Veritas Storage Foundation for Oracle RAC.


LLT runs directly on top of the Data Link Provider Interface (DLPI) layer over
Ethernet and has several major functions:
Sending and receiving heartbeats over network links
Monitoring and transporting network traffic over multiple network links to
every active system
Balancing the cluster communication load over multiple links
Maintaining the state of communication
Providing a transport mechanism for cluster communications
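LLT is configured in the /etc/llttab file. A minimal sketch for a Linux node,
with hypothetical node name, cluster ID, interface names, and MAC addresses:

  set-node sys1
  set-cluster 10
  link eth1 eth-00:11:22:33:44:55 - ether - -
  link eth2 eth-00:11:22:33:44:66 - ether - -

The two link lines correspond to the two recommended interconnect interfaces.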


Group Membership Services/Atomic Broadcast (GAB)


GAB provides the following:


Group Membership Services: GAB maintains the overall cluster
membership by way of its group membership services function. Cluster
membership is determined by tracking the heartbeat messages sent and
received by LLT on all systems in the cluster over the cluster interconnect.
GAB messages determine whether a system is an active member of the cluster,
joining the cluster, or leaving the cluster. If a system stops sending heartbeats,
GAB determines that the system has departed the cluster.
Atomic Broadcast: Cluster configuration and status information are
distributed dynamically to all systems in the cluster using GAB's atomic
broadcast feature. Atomic broadcast ensures that all active systems receive all
messages for every resource and service group in the cluster.
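GAB is typically started from the /etc/gabtab file, which seeds the cluster with
the expected number of systems. A sketch for a two-node cluster:

  /sbin/gabconfig -c -n 2

Running gabconfig -a on a member system displays the current GAB port
memberships.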



I/O fencing
The fencing driver implements I/O fencing, which prevents multiple systems from
accessing the same Volume Manager-controlled shared storage devices in the
event that the cluster interconnect is severed. In the example of a two-node cluster
displayed in the diagram, if the cluster interconnect fails, each system stops
receiving heartbeats from the other system.


GAB on each system determines that the other system has failed and passes the
cluster membership change to the fencing module.


The fencing modules on both systems contend for control of the disks according to
an internal algorithm. The losing system is forced to panic and reboot. The
winning system is now the only member of the cluster, and it fences off the shared
data disks so that only systems that are still part of the cluster membership (only
one system in this example) can access the shared storage.
The winning system takes corrective action as specified within the cluster
configuration, such as bringing service groups online that were previously running
on the losing system.
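After fencing has been configured, the fencing mode and current membership can
be inspected from a cluster node; a sketch using the fencing administration
utility:

  vxfenadm -d

This displays I/O fencing cluster information, including the fencing mode and
the membership state of each node.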


High Availability Daemon


The VCS engine, also referred to as the high availability daemon (HAD), is the
primary VCS process running on each cluster system.
HAD tracks all changes in cluster configuration and resource status by
communicating with GAB. HAD manages all application services (by way of
agents) whether the cluster has one or many systems.
Building on the knowledge that the agents manage individual resources, you can
think of HAD as the manager of the agents. HAD uses the agents to monitor the
status of all resources on all nodes.


This modularity between had and the agents allows for efficiency of roles:
HAD does not need to know how to start up Oracle or any other applications
that can come under VCS control.
Similarly, the agents do not need to make cluster-wide decisions.


This modularity allows a new application to come under VCS control simply by
adding a new agent; no changes to the VCS engine are required.
On each active cluster system, HAD updates all the other cluster systems with
changes to the configuration or status.
In order to ensure that the had daemon is highly available, a companion daemon,
hashadow, monitors had, and if had fails, hashadow attempts to restart had.
Likewise, had restarts hashadow if hashadow stops.
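A quick way to confirm that both daemons are running on a node is a generic
process listing, sketched here with standard UNIX commands:

  ps -ef | egrep 'had|hashadow'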


VCS architecture
Maintaining the cluster configuration
HAD maintains configuration and state information for all cluster resources in
memory on each cluster system. Cluster state refers to tracking the status of all
resources and service groups in the cluster. When any change to the cluster
configuration occurs, such as the addition of a resource to a service group, HAD
on the initiating system sends a message to HAD on each member of the cluster by
way of GAB atomic broadcast, to ensure that each system has an identical view of
the cluster.


Atomic means that all systems receive updates, or all systems are rolled back to the
previous state, much like a database atomic commit.


The cluster configuration in memory is created from the main.cf file on disk
when HAD is not already running on any cluster system, and there is therefore no
configuration in memory. When you start VCS on the first cluster system, HAD
builds the configuration in memory on that system from the main.cf file.
Changes to a running configuration (in memory) are saved to disk in main.cf
when certain operations occur. These procedures are described in more detail later
in the course.


VCS configuration files


Configuring VCS means conveying to VCS the definitions of the cluster, service
groups, resources, and resource dependencies. VCS uses two configuration files in
a default configuration:
The main.cf file defines the entire cluster, including the cluster name,
systems in the cluster, and definitions of service groups and resources, in
addition to service group and resource dependencies.
The types.cf file defines the resource types.
Additional files similar to types.cf may be present if agents have been added.
For example, if the Oracle enterprise agent is added, a resource types file, such as
OracleTypes.cf, is also present.

The cluster configuration is saved on disk in the /etc/VRTSvcs/conf/config
directory, so the memory configuration can be re-created after systems are
restarted.
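As a minimal sketch, a freshly configured main.cf with hypothetical cluster and
system names contains little more than the include statement and the cluster and
system definitions:

  include "types.cf"

  cluster webclus (
      )

  system sys1 (
      )

  system sys2 (
      )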



Labs and solutions for this lesson are located on the following pages.
Lab environment, page A-3.
Lab environment, page B-3.


Lesson 3

Preparing a Site for VCS


Hardware requirements and recommendations


Hardware requirements
See the hardware compatibility list (HCL) at the Symantec Web site for the most
recent list of supported hardware for Veritas products by Symantec.
Cluster interconnect
Veritas Cluster Server requires a minimum of two heartbeat channels for the
cluster interconnect.


Loss of the cluster interconnect results in downtime, and in nonfencing
environments, can result in a split brain condition (described in detail later in the
course).


Configure a minimum of two physically independent Ethernet connections on each
node for the cluster interconnect:
Two-node clusters can use crossover cables.
Clusters with three or more nodes require hubs or switches.
You can use layer 2 switches; however, this is not a requirement.
For VCS clusters, the interconnect does not require high bandwidth components.
During steady state, the traffic on the interconnect is negligible.
For clusters using Veritas Cluster File System, Symantec recommends the use of
multiple gigabit interconnects and gigabit switches for the interconnect, due to
the higher volume of cluster communication traffic.


Networking
For a highly available configuration, each system in the cluster should have a
minimum of two physically independent Ethernet connections for the public
network. Using the same interfaces on each system simplifies configuring and
managing the cluster.
Shared storage
VCS is designed primarily as a shared data high availability product; however, you
can configure a cluster that has no shared storage.


For shared storage clusters, consider these recommendations:


One HBA minimum for shared disks and one for nonshared (boot) disks:
To eliminate single points of failure, Symantec recommends that you have
two HBAs to connect to disks and use dynamic multipathing
software, such as Veritas Volume Manager DMP.
Use multiple single-port HBAs or SCSI controllers rather than
multiport interfaces to avoid single points of failure.
Shared storage on a SAN must reside in the same zone as all cluster nodes.
Data should be mirrored or protected by a hardware-based RAID mechanism.
Use redundant storage and paths.
Include all cluster-controlled data in your backup planning, implementation,
and testing.


Note: Although the recommendation is to use identical hardware configurations,
your requirements may indicate using different hardware for differing workloads.

Software requirements and recommendations


Software requirements


Ensure that the software meets requirements for installing VCS.


Verify that the required operating system patches are installed on the systems
before installing VCS.
For the latest software requirements, refer to the Veritas Cluster Server Release
Notes and the Symantec Operations Readiness Tools Web site.
Verify that storage management software versions are supported.
Using storage management software, such as Veritas Volume Manager and
Veritas File System, enhances high availability by enabling you to mirror data
for redundancy and change the configuration of physical disks without
interrupting services.
Obtain VCS licenses.
During installation, you may select traditional or keyless licensing. In both
cases, you must have a legal license for the Symantec products you install.
With traditional licensing, you must obtain license keys for each cluster system
to complete the license process. Use the vLicense Web site,
http://licensing.symantec.com, or contact your Symantec sales
representative for license keys. For upgrades, contact Technical Support.
Keyless licensing requires cluster nodes to be configured as managed hosts
within a Veritas Operations Manager (VOM) domain. An overview of VOM is
provided in the Installing VCS lesson.
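With traditional licensing, keys are typically installed and verified using the
Symantec licensing utilities. A sketch, with a hypothetical key string:

  vxlicinst -k AAAA-BBBB-CCCC-DDDD-EEEE-FFFF
  vxlicrep

vxlicinst installs the key, and vxlicrep reports the licenses installed on the
system.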


Software recommendations


Follow these recommendations to simplify installation, configuration, and
management of the cluster:
Operating system: Although it is not a strict requirement to run the same
operating system version on all cluster systems, doing so greatly reduces the
complexity of installation and ongoing cluster maintenance.
Configuration: Setting up identical configurations on each system helps ensure
that your application services can fail over and run properly on all cluster
systems.
Application: Verify that you have the same revision level of each application
you are placing under VCS control. Ensure that any application-specific user
accounts are created identically on each system.
Ensure that you have appropriate licenses to enable the applications to run on
any designated cluster system.



System and network preparation


Perform these tasks before starting VCS installation.
Add directories to the PATH variable, if required, as shown in the example
after this list. For the PATH settings, see the installation guide for your platform.
Verify that administrative IP addresses are configured on your public network
interfaces and that all systems are accessible on the public network using fully
qualified host names.
For details on configuring administrative IP addresses, see the Job Aids
appendix.
Disable the operating system suspend/resume feature, if present.
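A minimal sketch of the PATH addition for a Bourne-style shell, assuming the
default VCS command directories (adjust for your platform and shell):
PATH=$PATH:/opt/VRTS/bin:/opt/VRTSvcs/bin
export PATH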


Solaris


Solaris operating systems can be paused and resumed using the Stop-A and go
sequence. When a Solaris system in a VCS cluster is paused with Stop-A, the
system stops producing VCS heartbeats, which causes the other systems to
consider it a failed node.
Ensure that the only action possible after an abort is a reset. To ensure that you
never issue a go function after an abort, create an alias for the go function that
displays a message. See the Veritas Cluster Server Installation Guide for the
detailed procedure.
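A sketch of one way to create such an alias, assuming the OpenBoot nvramrc
editor is used (the message text is illustrative; follow the procedure in the
installation guide):
ok nvedit
0: : go ." go is disabled; reset the system instead" cr ;
1: <Ctrl-C>
ok nvstore
ok setenv use-nvramrc? true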





Preparation assistance


Several tools are available from the Symantec Operations Readiness Tools (SORT)
Web site to help you prepare your environment to implement clustering.
Data collection and reporting tools
A data collector can be run from the Web site, or downloaded locally, to gather
system information, run preinstallation checks, and generate reports.
Documentation and compatibility lists
All product documentation, as well as software and hardware compatibility
lists are available from SORT.
Preparation checklists
Platform-specific checklists can be created to assist in preparing an
environment for clustering.
Patch management
SORT provides access to patches for all products in the Storage Foundation HA
family.
Risk assessment
Checklists and reports can be used to analyze your environment, identify risks,
and recommend remedies.
Error code lookup
SORT enables you to search for additional information about error messages.
You can also request help for undocumented error codes.
Inventory management service
Inventory management is a service that provides the ability to gather license
information from Storage Foundation HA deployments.





Alternatively, you can run installvcs from the location of your VCS product
distribution to check your environment, and then examine the resulting log file
to assess readiness to install VCS.
cd sw_location
./installvcs -precheck system1 system2






Preparing installation information


Required installation input


Verify that you have the information necessary to install VCS. Be prepared to
select:
Product, corresponding to licenses obtained from Symantec
End-user licensing agreement
Package set, which determines the amount of disk space required
Names of the systems that will be installed with the selected product
License keys or keyless licensing
Product level, which applies to keyless licensing, and determines the level of
functionality of the product
Product options, including Veritas Replicator and Global Clustering Option


For more information about these selections, see the Veritas Cluster Server
Installation Guide.




Cluster configuration options


You are prompted to configure the cluster after the software installation is
complete. Be prepared to supply:
A name for the cluster, beginning with a letter of the alphabet (a-z, A-Z)
A unique ID number for the cluster in the range 0 to 64k
All clusters sharing a private network infrastructure (including connection to
the same public network if used for low-priority links) must have a unique ID.
Device names of the network interfaces used for the cluster interconnect


You can also configure additional cluster services, including:


Cluster virtual IP address: Used for some cluster configurations, including
global clusters
Security: Enable secure communication between cluster nodes and clients.
Secure cluster configuration is described in detail in the Veritas Cluster Server
Installation Guide.
VCS user accounts: Add accounts or change the default admin account.
Notification: Specify SMTP and SNMP information during installation to
configure the cluster notification service.





Duplicate cluster ID detection and automatic generation


Duplicate cluster IDs create configuration failures that may result in clusters
failing to start. The Common Product Installer for VCS can detect duplicate
cluster IDs on the network and enable you to configure a unique ID.
The installer may be unable to detect conflicting cluster IDs in certain
circumstances. For example:
Another cluster using the same network for the LLT links is offline
LLT links are not properly configured
NICs are not connected properly to related switches


You can also prevent duplicate cluster IDs by opting to have CPI automatically
generate the cluster ID.





Using a design worksheet


You may want to use a design worksheet to collect the information required to
install VCS as you prepare the site for VCS deployment. You can then use this
worksheet later when you are installing VCS.





Preparing to upgrade
Checking versions of installed products
CPI provides a simple and fast way to check and display the version of SFHA
software installed on a server. The output displays the version information down to
the patch level, and provides detailed lists of installed and missing packages
showing which ones are required and which ones are optional.


If the system being checked has access to the SORT Web site, information about
the latest updates and newer releases available for the installed products is also
displayed.
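For example, assuming the version check is run from the distribution media
against two systems (host names are placeholders):
cd sw_location
./installer -version system1 system2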





Creating a preparation checklist


After you have identified the versions of existing software, you can connect to the
Symantec Operations Readiness Tools (SORT) Web site and generate an upgrade
checklist. The checklist provides important information you need to review before
actually performing the upgrade, from compatibility lists to specific system,
platform, or array information.
You can also access product manuals and support articles related to the type of
upgrade you are planning to perform.


If your environment includes any nonstandard elements that are not covered by
the checklist, such as custom applications or unsupported versions of third-party
multi-pathing products, contact Symantec Support, and test the upgrade process
in a non-production environment first.


See the Veritas Cluster Server Installation Guide for more information about
planning for upgrades.






Labs and solutions for this lesson are located on the following pages.
Lab 3: Validating site preparation, page A-33.
Lab 3: Validating site preparation, page B-51.




Lesson 4

Installing VCS






Using the Common Product Installer


Symantec ships Veritas high availability and storage foundation products with a
product installation utility that enables you to install these products using the same
interface.
You can also use the CPI utility to add licenses, configure products, and start and
stop services.
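For example, assuming the distribution media is mounted at sw_location, you
launch the menu-driven installer as follows:
cd sw_location
./installer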


Viewing installation logs


At the end of every product installation, the installer creates three text files:
A log file containing any system commands executed and their output
A response file to be used in conjunction with the -responsefile option of
the installer
A summary file containing the output of the Veritas product installer scripts
These files are located in /opt/VRTS/install/logs. The names and locations
of the files are displayed at the end of each product installation:
installer-timestamp.log, installer-timestamp.summary, and
installer-timestamp.response. It is recommended that these logs be kept for
auditing and debugging purposes.


The installvcs utility


The installvcs utility is used by the product installer to automatically install
and configure a cluster. If remote root access is enabled, installvcs installs
and configures all cluster systems you specify during the installation process.
The installation utility performs these high-level tasks:
Installs VCS packages on all the systems in the cluster
Configures cluster interconnect links
Configures the cluster
Brings the cluster up without any application services


For a list of software packages that are installed, see the release notes for your
VCS version and platform.


Options to installvcs
The installvcs utility supports several options that enable you to tailor the
installation process. For example, you can:
Perform an unattended installation.
Install software packages without configuring a cluster.
Configure secure cluster communications.
Upgrade an existing VCS cluster.
For a complete description of installvcs options, see the Veritas Cluster
Server Installation Guide.
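A few illustrative invocations, using options that appear elsewhere in this
course (the response file path is a placeholder; verify exact option names in
the guide for your release):
./installvcs -precheck system1 system2
./installvcs -responsefile /tmp/vcs.response
./installvcs -security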




Web installer


VCS includes a Web-based interface to the CPI installer. The key components of
the Web installer architecture are shown in the diagram in the slide.
The Web browser can be run on any platform that meets the browser
requirements and can connect securely to the Web server.
The Web server runs the xprtlwid daemon, which is started using the
webinstaller command on the distribution media. The Web installer uses
the CPI installer scripts, and the software packages. Therefore, the system
acting as the Web server must have access to the software distribution media.
The Web server must be able to connect securely (RSH or SSH) to the
installation target systems.
The installation targets are the systems on which the software is installed and
configured.
The Web installer supports most features of the installer utility. See the Veritas
Cluster Server Installation Guide for a description of supported options. The guide
also includes the browser types and versions supported by the Web installer.
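For example, assuming the distribution media is mounted on the system acting
as the Web server, a sketch of starting the Web installer daemon (the command
displays the URL to enter in the browser):
cd sw_location
./webinstaller start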


Data protection
If you are using VCS with shared storage devices that support SCSI-3 Persistent
Reservations, configure fencing after VCS is initially installed.
SCSI-3-based fencing provides the highest level of protection for data that is
located on shared storage and accessed by multiple cluster nodes.

Copyright 2012 Symantec Corporation. All rights reserved.

You can configure fencing at any time using the installvcs -fencing
utility, as described in the I/O Fencing lesson. However, if you set up fencing
after you have service groups running, you must stop and restart VCS for fencing
to take effect.





Secure cluster communication


Storage Foundation High Availability products include authentication services that
provide for secure communication between cluster systems.
You may opt to configure security during cluster installation and initial
configuration using the Common Product Installer.
You can also configure security any time after initial cluster configuration using
the -security option to the installvcs script.


For details about secure cluster communication, see the Veritas Cluster Server
Installation Guide.



IPv6 support in CPI


The CPI installer supports installation and configuration of Storage Foundation
and high availability products on systems with IPv6 addresses.
This includes environments with all IPv6 addresses, as well as environments with
mixed IPv4 and IPv6 addresses, referred to as dual-stack configurations. You can
use IPv6 addresses as system names and in the case of cluster configurations, these
IPv6 addresses are added to the main.cf file.


For more information about IPv6, see the Web-based training module or the
Veritas Cluster Server Installation Guide.





Using native operating system tools for installation


You can install all SFHA products using native operating system tools, including
those shown in the slide.


For details about using operating system tools, see the Veritas Cluster Server
Installation Guide for the applicable platforms.
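As a sketch, on Linux the packages can be installed with rpm; the package file
names shown vary by release, and the communication packages must be installed
before VCS itself (this is illustrative, not the documented procedure):
rpm -ivh VRTSllt-*.rpm VRTSgab-*.rpm
rpm -ivh VRTSvxfen-*.rpm VRTSvcs-*.rpm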


VCS configuration files


VCS installed file locations


The VCS installation procedure creates several directory structures.
Commands: /sbin, /usr/sbin, and /opt/VRTSvcs/bin
VCS engine and agent log files: /var/VRTSvcs/log
Configuration files: /etc and /etc/VRTSvcs/conf/config
Installation log files: /opt/VRTS/install/logs


Product documentation is not included with the software packages. You can
download all documentation from the SORT Web site.




Communication configuration files


The installvcs utility creates these VCS communication configuration files:


/etc/llttab
The llttab file is the primary LLT configuration file and is used to:
Set the cluster ID number.
Set system ID numbers.
Specify the network device names used for the cluster interconnect.
/etc/llthosts
The llthosts file associates a system name with a unique VCS cluster node
ID number for every system in the cluster. This file is the same on all systems
in the cluster.
/etc/gabtab
This file contains the command line that is used to start GAB.
Cluster communication is described in detail later in the course.
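A minimal sketch of these files for a two-node Linux cluster (the cluster ID,
host names, and interface names are illustrative; device names vary by platform):
/etc/llttab:
set-node s1
set-cluster 10
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
/etc/llthosts:
0 s1
1 s2
/etc/gabtab:
/sbin/gabconfig -c -n2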


Cluster configuration files


The following cluster configuration files are added as a result of package
installation:
/etc/VRTSvcs/conf/config/types.cf
/etc/VRTSvcs/conf/config/main.cf

The installvcs utility modifies the main.cf file to configure the
ClusterService group if a cluster virtual IP or notification options are selected
during configuration. This service group includes the resources used to manage
SMTP and SNMP notification. If a cluster virtual IP address is specified during
cluster configuration, the resources for managing the IP address are also included
in the ClusterService group. VCS configuration files are discussed in detail
throughout the course.




Viewing the default VCS configuration


Viewing installation results
After the initial installation, you can perform the following tasks to view the
cluster configuration performed during the installation process.
List the Veritas packages installed on the system:
Solaris:  pkginfo | grep -i vrts
AIX:      lslpp -L | grep -i vrts
HP-UX:    swlist -l product | grep -i vrts
Linux:    rpm -qa | grep -i vrts


View the VCS and communication configuration files.


Viewing LLT status


After installation is complete, you can check the status of VCS components.
Use the lltstat command to verify that links are active for LLT. This command
returns information about the LLT links for the system on which it is typed. In the
example shown in the slide, lltstat -nvv active is typed on the s1 system
to produce the LLT status in a cluster with two systems.
The -nvv options cause lltstat to list systems with very verbose status:
Link names from llttab
Status
MAC address of the Ethernet ports

Note: This command line shows status only if a module is using LLT, such as
GAB. If GAB is not running, the output shows a comm wait state.


The configured and active options show only nodes where LLT is
configured or active.
The lltconfig command just displays whether LLT is running, with no detail.
LLT is discussed in more detail later in the course. For now, you can see that LLT
is running using these commands.




Viewing GAB status


To display the cluster membership status, type gabconfig on each system. For
example:
gabconfig -a

The example output in the slide shows:


Port a, GAB membership, has two nodes numbered 0 and 1
Port h, VCS membership, has two nodes numbered 0 and 1


This indicates that HAD and GAB are communicating on two nodes.
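For example, typical gabconfig -a output in this two-node case looks similar
to the following (the generation numbers are illustrative):
GAB Port Memberships
===============================================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01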


Viewing VCS status


You can use the hastatus command to view the state of VCS cluster nodes,
service groups, and resources.
The example in the slide shows the state of the ClusterService group after a
successful installation and initial configuration of VCS on the s1 and s2 systems.


The -sum option shows the status as a snapshot in time. If you run hastatus
with no options, the status is displayed continuously, showing any changes in the
state of systems, service groups, and resources as they occur. You can stop the
display by typing Ctrl-C.
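A sketch of typical hastatus -sum output for this configuration (spacing
abbreviated; the states shown are illustrative):
-- SYSTEM STATE
-- System          State      Frozen
A  s1              RUNNING    0
A  s2              RUNNING    0
-- GROUP STATE
-- Group           System     Probed   AutoDisabled   State
B  ClusterService  s1         Y        N              ONLINE
B  ClusterService  s2         Y        N              OFFLINE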




Updates and upgrades


Installing VCS updates
Updates for VCS are periodically created in the form of patches or maintenance
packs to provide software fixes and enhancements. Before proceeding to configure
your cluster, check the SORT Web site for information about any updates that
might be available.


Download the latest update for your version of VCS according to the instructions
provided on the Web site. The installation instructions for VCS updates are
included with the update pack.


Before you install an update, make sure all prerequisites are met. At the end of the
update installation, you may be prompted to run scripts to update agents or other
portions of the VCS configuration. Continue through any additional procedures to
ensure that the latest updates are applied.


Supported upgrade modes


The slide lists the upgrade methods supported with SFHA 6.0 products,
highlighting the advantages and disadvantages of each method. The level of
difficulty is indicated by the arrow on the right, from the simplest to the most
complex.


For more information about upgrade methods, refer to the installation guide for the
applicable platform and VCS version to which you are upgrading.





Cluster management tools


Veritas Operations Manager environment
Veritas Operations Manager provides a single, centralized management console for
the Storage Foundation and High Availability products. You can use VOM to
monitor, visualize, and manage storage and high availability resources and
generate reports.
A typical Veritas Operations Manager deployment consists of a management
server and the managed hosts. The management server receives information about
all the resources on the managed hosts within the domain.


Managed hosts and the management server communicate securely using the
HTTPS protocol, through HTTP servers and clients implemented within the
XPRTL component of SFHA.


Managed hosts can be running different versions of SFHA. For hosts running
some previous versions of SFHA, you must install the VRTSsfmh package to
enable the host to be managed by VOM.
The Web management console is any system connecting to the management server
by way of a supported Web browser, also referred to as a VOM console.
For information about configuring VOM, see the product documentation:
Veritas Operations Manager Getting Started Guide
Veritas Operations Manager Installation Guide
Veritas Operations Manager Administrator's Guide


Installing and configuring the VOM management server


When planning a VOM deployment, identify a host that is appropriate for the
management server using these criteria:
The system should provide data security and space for a growing database as
management server discovers new managed hosts and monitors network
events.
Ideally, the host should have RAID-protected storage and the capacity to
increase the size of file systems.
The slide shows the basic procedure for obtaining and installing the VOM
management server. After downloading and running the installation file, you
connect to port 5634 using a Web browser to complete the configuration.

Upon completion, the Web console is launched to enable you to log on to the
management server.




Accessing the VOM management server


To log on to the management server, connect to Web server port 14161 using the
HTTPS protocol from a supported browser.
When connecting locally, you can use a URL of the form:
https://localhost:14161

When connecting from a system on the network, use the fully-qualified host name
of the management server, or the IP address. For example:
https://vomserver.example.com:14161


On the logon page, you must select the user domain to enable the authentication
service to recognize user accounts. For example, the unixpwd domain
authenticates the logon using operating system user accounts.


For details about user account configuration, see the Veritas Operations Manager
Administrator's Guide.


Adding managed hosts to the VOM management server


After VCS is installed, you can add the cluster systems to the management server
as a managed host from the VOM console, as shown in the slide.


If a cluster or system within a cluster is shut down, the system shows as failed in
the VOM console. When the system or cluster is restarted, you do not need to add
the systems to VOM again. Simply refresh the VOM display after the systems are
running and the systems are again recognized by VOM.




Cluster Manager Java GUI


You can download the VCS Java-based Cluster Manager GUI from the Symantec
Web site and install the software on a Windows system. The Java GUI is packaged
with the VCS Simulator and the Veritas Enterprise Administrator Console for
Storage Foundation.


Note: The Java GUI is being deprecated in favor of the VOM Web-based
administration tool and may not be supported for future versions of VCS.
Also, some features of VCS available in later versions are not supported by
the Java GUI.


While VOM is the preferred management tool for data center environments with
many clusters, or clusters using more advanced features available in the latest VCS
releases, the Java GUI can be a useful tool for managing small clusters.
To obtain the software, navigate to symantec.com, select Products > Cluster
Server and click the link labeled Veritas Cluster Server Java Console, Veritas
Cluster Server Simulator, Veritas Enterprise Administrator Console.



Labs and solutions for this lesson are located on the following pages.
Lab 4: Installing Storage Foundation HA 6.0, page A-41.
Lab 4: Installing Storage Foundation HA 6.0, page B-69.




Lesson 5

VCS Operations





Common VCS tools and operations


VCS management tools
You can use any of the VCS interfaces to manage the cluster environment,
provided that you have the proper VCS authorization. VCS user accounts are
described in more detail in the VCS Configuration Methods lesson.
The VCS command-line interface is installed by default and is best suited for
configuration and management of the local cluster.


Veritas Operations Manager (VOM) is a Web-based interface for administering
managed hosts in local and remote clusters. Installation and configuration of the
VOM environment is described in detail in the Veritas Operations Manager
Installation Guide.

The Java GUI is available for download from the Symantec Web site and is
supported on Windows systems only. The Java GUI is deprecated in favor of
VOM, but is useful for management of clusters in smaller environments. The Java
GUI is the only interface for using the Simulator.
The Simulator is useful for learning about VCS and modeling behavior. You can
use the Simulator to create and test a cluster configuration, and then move that
configuration into a real-world environment. However, you cannot use the
Simulator to manage a running cluster configuration. The Simulator is supported
on Windows only.


Displaying logs
The engine log is located in /var/VRTSvcs/log/engine_A.log. You can
view this file with standard UNIX text file utilities such as tail, more, or view.
VCS provides the hamsg utility that enables you to filter and sort the data in log
files.
In addition, you can display the engine log in Cluster Manager to see a variety of
views of detailed status information about activity in the cluster.
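For example, to watch cluster activity as it is written to the engine log:
tail -f /var/VRTSvcs/log/engine_A.log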


You can also view the command log to see how the activities you perform using
the Java GUI are translated into VCS commands. You can use the command log as
a resource for creating batch files to use when performing repetitive configuration
or administration tasks.

92

Note: The command log is not saved to disk; you can view commands only for
the current session of the GUI.




Displaying object information


Displaying resources using the CLI


The following examples show how to display resource attributes and status.
Display values of attributes to ensure they are set properly.
hares -display webip
#Resource    Attribute    System    Value
. . .
webip        AutoStart    global    1
webip        Critical     global    1
Determine which resources are non-critical.
hares -list Critical=0
webapache    s1
webapache    s2
Determine the virtual IP address for the websg service group.
hares -value webip Address
10.10.27.93
Determine the state of a resource on each cluster system.
hares -state webip
#Resource    Attribute    System    Value
webip        State        s1        OFFLINE
webip        State        s2        ONLINE

Displaying service group information using the CLI


The following examples show some common uses of the hagrp command for
displaying service group information and status.
Display values of all attributes to ensure they are set properly.
hagrp -display websg
#Group    Attribute       System    Value
. . .
websg     AutoFailOver    global    1
websg     AutoRestart     global    1
. . .
Determine which service groups are frozen, and therefore cannot be stopped,
started, or failed over.
hagrp -list Frozen=1
websg    s1
websg    s2
Determine whether a service group is set to automatically start.
hagrp -value websg AutoStart
1
List the state of a service group on each system.
hagrp -state websg
#Group    Attribute    System    Value
websg     State        s1        |Online|
websg     State        s2        |Offline|




Service group operations


Bringing service groups online
When a service group is brought online, resources are brought online starting with
the child resources and progressing up the dependency tree to the parent resources.
In order to bring a failover service group online, VCS must verify that all
nonpersistent resources in the service group are offline everywhere in the cluster.
If any nonpersistent resource is online on another system, the service group is not
brought online.

A service group is considered online if all of its nonpersistent and autostart
resources are online. An autostart resource is a resource whose AutoStart
attribute is set to 1.

The state of persistent resources is not considered when determining the online or
offline state of a service group because persistent resources cannot be taken
offline. However, a service group is faulted if a persistent resource faults.
Bringing a service group online using the CLI
To bring a service group online, use either form of the hagrp command:
hagrp -online group -sys system
hagrp -online group -any
The -any option brings the service group online based on the group's failover
policy. Failover policies are described in detail later in the course.
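For example, using the service group and system names from this lesson:
hagrp -online websg -sys s1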


Taking service groups offline


When a service group is taken offline, resources are taken offline starting with the
highest (parent) resources in each branch of the resource dependency tree and
progressing down the resource dependency tree to the lowest (child) resources.
Persistent resources cannot be taken offline. Therefore, the service group is
considered offline when all nonpersistent resources are offline.
Taking a service group offline using the CLI


To take a service group offline, use either form of the hagrp command:
hagrp -offline group -sys system
Provide the service group name and the name of a system where the service
group is online.
hagrp -offline group -any
Provide the service group name. The -any switch takes a failover service
group offline on the system where it is online. All instances of a parallel
service group are taken offline when the -any switch is used.




Switching service groups


In order to ensure that failover can occur as expected in the event of a fault, test the
failover process by switching the service group between systems within the cluster.
Switching a service group does not have the same effect as taking a service group
offline on one system and bringing it online on another system. When
you switch a service group, VCS replicates the state of each resource on the target
system. If a resource has been manually taken offline on a system before the
switch command is run, that resource is not brought online on the target system.
Switching a service group using the CLI
To switch a service group, type:

hagrp -switch group -to system


Provide the service group name and the name of the system where the service
group is to be brought online.
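For example, to test failover by switching websg to the s2 system:
hagrp -switch websg -to s2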


Example: Switching websg


The slide shows how you can use the GUI and CLI together to develop an
understanding of how VCS responds to events in the cluster environment, and the
effects on application services under VCS control.




Freezing a service group


When you freeze a service group, VCS continues to monitor the resources, but it
does not allow the service group (or its resources) to be taken offline or brought
online. Failover is also disabled, even if a resource faults.
You can also specify that the freeze is in effect even if VCS is stopped and
restarted throughout the cluster.

Warning: Freezing a service group effectively overrides VCS protection against a
concurrency violation, which occurs when the same application is started on
more than one system simultaneously. You risk data corruption if you bring an
application online outside of VCS while the associated service group is frozen.

Freezing and unfreezing a service group using the CLI


To freeze and unfreeze a service group temporarily, type:
hagrp -freeze group
hagrp -unfreeze group
To freeze a service group persistently, you must first open the configuration:
haconf -makerw
hagrp -freeze group -persistent
hagrp -unfreeze group -persistent
To determine if a service group is frozen, display the Frozen (for persistent) and
TFrozen (for temporary) service group attributes for a service group.
hagrp -value group Frozen
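For example, a sketch of persistently freezing websg for maintenance and then
verifying the result, assuming the configuration is saved and closed afterward:
haconf -makerw
hagrp -freeze websg -persistent
haconf -dump -makero
hagrp -value websg Frozen
1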

Application management practices


In a cluster environment, the application software is a resource that is a member of
the service group. When an application is placed under control of VCS, you must
change your standard administration practices for managing the application.
Consider a nonclustered, single-host environment running an Oracle database. A
common method for shutting down the database is to log on as the database
administrator (DBA) and use sqlplus to shut down the database.
In a clustered environment where Oracle is a resource in a failover service group,
the same action causes a failover, which results in VCS detecting a fault (the
database is offline) and bringing the database online on another system.

It is also normal and common in a nonclustered environment to take other
actions, such as forcibly unmounting a file system.

Under VCS, resources that are part of service groups, and the service groups
themselves, must be managed using VCS utilities, such as the GUI or CLI, with
full awareness of resource and service group dependencies.
Alternately, you can freeze the service group to prevent VCS from taking action
when changes in resource status are detected.
Warning: In clusters that do not implement fencing, VCS cannot prevent someone
with proper permissions from manually starting another instance of the application
on another system outside of VCS control. VCS will eventually detect this and
take corrective action, but it may be too late to prevent data corruption.




Resource operations
Bringing resources online
In normal day-to-day operations, you perform most management operations at the
service group level.
However, you may need to perform maintenance tasks that require one or more
resources to be offline while others are online. Also, if you make errors during
resource configuration, you can cause a resource to fail to be brought online.
Bringing resources online using the CLI
To bring a resource online, type:


hares -online resource -sys system


Provide the resource name and the name of a system that is configured to run the
service group.
Note: The service group shown in the slide is partially online after the webdg
resource is brought online. This is depicted by the textured coloring of the
service group circle.


Taking resources offline


Taking resources offline should not be a normal occurrence. Taking resources
offline causes the service group to become partially online, and availability of the
application service is affected.
If a resource needs to be taken offline, for example, for maintenance of underlying
hardware, then consider switching the service group to another system.
If multiple resources need to be taken offline manually, then they must be taken
offline in resource dependency tree order, that is, from top to bottom.


Taking a resource offline and immediately bringing it online may be necessary if,
for example, the resource must reread a configuration file due to a change. Or you
may need to take a database resource offline in order to perform an update that
modifies the database files.


Taking resources offline using the CLI


To take a resource offline, type:
hares -offline resource -sys system

Provide the resource name and the name of a system.
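For example, to take the webapache resource offline on the s1 system before
performing maintenance:
hares -offline webapache -sys s1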




Using the VCS Simulator


You can use the VCS Simulator as a tool for learning how to manage VCS
operations and applications under VCS control. You can perform all basic service
group operations using the Simulator.
The Simulator also has other uses as a configuration and test tool. For this lesson,
the focus of the Simulator discussion is on using a predefined VCS configuration
to practice performing administration tasks.
You can download the Simulator for Windows from the Symantec Web site. The
Simulator and Java GUI are installed by running the Windows installer file.


Note: The Simulator is not supported on UNIX systems.


No additional licensing is required to install and use the Simulator.


Simulator Java Console


The Simulator Java Console is provided with the Windows Simulator software to
create and manage multiple Simulator configurations, which can run
simultaneously.
When the Simulator Java Console is running, a set of sample Simulator
configurations is displayed, showing an offline status. You can start one or more
existing cluster configurations and then launch an instance of the Cluster Manager
Java Console for each running Simulator configuration.


You can use the Cluster Manager Java Console to perform all the same tasks as an
actual cluster configuration. Additional options are available for Simulator
configurations to enable you to test various failure scenarios, including faulting
resources and powering off systems.


Labs and solutions for this lesson are located on the following pages.
Lab 5: Performing common VCS operations, page A-53.
Lab 5: Performing common VCS operations, page B-107.



Lesson 6

VCS Configuration Methods


Starting and stopping VCS


VCS startup behavior


The default VCS startup process is demonstrated using a cluster with two systems
connected by the cluster interconnect. To illustrate the process, assume that no
systems have an active cluster configuration.
1 The hastart command is run on s1 and starts the had and hashadow
processes.
2 HAD checks for a valid configuration file (hacf -verify config_dir).
3 HAD checks for an active cluster configuration on the cluster interconnect.
4 Because there is no active cluster configuration, HAD on s1 reads the local
main.cf file and loads the cluster configuration into local memory.
The s1 system is now in the VCS local build state, meaning that VCS is
building a cluster configuration in memory on the local system.

5 The hastart command is then run on s2 and starts had and hashadow on
s2.
The s2 system is now in the VCS current discover wait state, meaning VCS is
in a wait state while it is discovering the current state of the cluster.
6 HAD on s2 checks for a valid configuration file on disk.
7 HAD on s2 checks for an active cluster configuration by sending a broadcast
message out on the cluster interconnect, even if the main.cf file on s2 is
valid.
8 HAD on s1 receives the request from s2 and responds.
9 HAD on s1 sends a copy of the cluster configuration over the cluster
interconnect to s2.
The s1 system is now in the VCS running state, meaning VCS determines that
there is a running configuration in memory on system s1.
The s2 system is now in the VCS remote build state, meaning VCS is building
the cluster configuration in memory on the s2 system from the cluster
configuration that is in a running state on s1.
10 HAD on s2 performs a remote build to place the cluster configuration in
memory.
11 When the remote build process completes, HAD on s2 copies the cluster
configuration into the local main.cf file.
If s2 has valid local configuration files (main.cf and types.cf), these are
saved to new files with a name including a date and time stamp, before the
active configuration is written to the main.cf file on disk.

Note: If the checksum of the configuration in memory matches the main.cf on
disk, no write to disk occurs.

The startup process is repeated on each system until all members have identical
copies of the cluster configuration in memory and matching main.cf files on
local disks. Synchronization is maintained by data transfer through LLT and GAB.
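For example, after running hastart on each system, a sketch of verifying that
all systems reached the running state (output abbreviated):
hasys -state
#System    Attribute    Value
s1         SysState     RUNNING
s2         SysState     RUNNING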


Stopping VCS
There are several methods of stopping the VCS engine (had and hashadow
daemons) on a cluster system.
The options you specify to hastop determine where VCS is stopped, and how
resources under VCS control are affected.
VCS shutdown examples


The four examples show the effect of using different options with the hastop
command:
The -all option stops had on all systems and takes the service groups offline.
The -all -force options stop had on both systems and leave the services
running. Although they are no longer protected highly available services and
cannot fail over, the services continue to be available to users.
Use caution with this option. VCS does not warn you that the configuration is
open when you stop VCS with the -force option.
The -local option causes the service group to be taken offline on s1 and
stops the VCS engine (had) on s1.
The -local -evacuate options cause the service group on s1 to be
migrated to s2 and then stop had on s1.
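The corresponding command lines for these four examples are:
hastop -all
hastop -all -force
hastop -local
hastop -local -evacuate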




Modifying VCS shutdown behavior


Use the EngineShutdown attribute to define VCS behavior when you run the
hastop command.
Note: VCS does not consider this attribute when hastop is issued with the
-force option.


Configure one of the values shown in the table in the slide for the EngineShutdown
attribute depending on the desired functionality for the hastop command.
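For example, a sketch of displaying and changing the attribute; the value shown
is assumed to be one of those listed in the slide table, and the configuration
must be open to modify it:
haclus -value EngineShutdown
haclus -modify EngineShutdown PromptAlways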


Overview of configuration methods


VCS provides several tools and methods for configuring service groups and
resources, generally categorized as:
Online configuration
You can modify the cluster configuration while VCS is running using one of
the graphical user interfaces or the command-line interface. These online
methods change the cluster configuration in memory. When finished, you write
the in-memory configuration to the main.cf file on disk to preserve the
configuration.
Offline configuration
In some circumstances, you can simplify cluster implementation and
configuration using an offline method, including:
Editing configuration files manually
Using the Simulator to create, modify, model, and test configurations
This method requires you to stop and restart VCS in order to build the new
configuration in memory.




Online configuration
How VCS changes the online cluster configuration
When you use Cluster Manager to modify the configuration, the GUI
communicates with had on the specified cluster system to which Cluster Manager
is connected.
Note: Cluster Manager configuration requests are shown conceptually as ha
commands in the diagram, but they are implemented as system calls.


The had daemon communicates the configuration change to had on all other
nodes in the cluster, and each had daemon changes the in-memory configuration.


When the command to save the configuration is received from Cluster Manager,
had communicates this command to all cluster systems, and each system's had
daemon writes the in-memory configuration to the main.cf file on its local disk.
The VCS command-line interface is an alternate online configuration tool. When
you run ha commands, had responds in the same fashion.
Note: When two administrators are changing the cluster configuration
simultaneously, each administrator sees all changes as they are being made.


Opening the cluster configuration


You must open the cluster configuration to add service groups and resources, make
modifications, and perform certain operations.
The state of the configuration is maintained in an internal attribute (ReadOnly). If
you try to stop VCS with the configuration open, a warning is displayed that the
configuration is open. This helps ensure that you remember to save the
configuration to disk so you do not lose any changes you may have made while the
configuration was open.


You can override this protection, as described later in this lesson.




Saving the cluster configuration


When you save the cluster configuration, VCS copies the configuration in memory
to the main.cf file in the /etc/VRTSvcs/conf/config directory on all
running cluster systems. At this point, the configuration is still open. You have
only written the in-memory configuration to disk and have not closed the
configuration.


If you save the cluster configuration after each change, you can view the main.cf
file to see how the in-memory modifications are reflected in the main.cf file.


Closing the cluster configuration


When the administrator saves and closes the configuration, VCS:
1 Changes the state of the configuration to closed (ReadOnly=1)
2 Writes the configuration in memory to the main.cf file
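These operations correspond to the standard haconf command sequence (the GUI
performs the equivalent steps):
haconf -makerw          (open the configuration)
haconf -dump            (save; the configuration remains open)
haconf -dump -makero    (save and close)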




How VCS protects the cluster configuration


When the cluster configuration is open, you cannot stop VCS without overriding
the warning that the configuration is open.
If you ignore the warning and stop VCS while the configuration is open, you may
lose configuration changes. If you forget to save the configuration and shut down
VCS, the configuration in the main.cf file on disk may not be the same as the
configuration that was in memory before VCS was stopped.

You can configure VCS to automatically back up the in-memory configuration to
disk to minimize the risk of losing modifications made to a running cluster. This
is covered later in this lesson.


Automatic configuration backups


You can set the BackupInterval cluster attribute to automatically save the
in-memory configuration to disk periodically.
When set to a value greater than or equal to three minutes, VCS automatically
saves the configuration in memory to the main.cf.autobackup file.
Note: If no changes are made to the cluster configuration during the time period
set in the BackupInterval attribute, no backup copy is created.

If necessary, you can copy the main.cf.autobackup file to main.cf and
restart VCS to build the configuration in memory at the point in time of the last
backup.

Ensure that you understand the VCS startup sequence described in the Starting
and Stopping VCS section before you attempt this type of recovery.
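For example, a sketch of enabling five-minute automatic backups (the attribute
is modified with the configuration open, then saved and closed):
haconf -makerw
haclus -modify BackupInterval 5
haconf -dump -makero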


Offline configuration
Characteristics
In some circumstances, you can simplify cluster implementation or configuration
tasks by directly modifying the VCS configuration files. This method requires you
to stop and restart VCS in order to build the new configuration in memory.

The benefits of using an offline configuration method are that it:

Offers a very quick way of making major changes or getting an initial
configuration up and running
Is efficient for creating many similar resources or service groups
Provides a means for deploying a large number of similar clusters


One consideration when choosing to perform offline configuration is that you
must be logged on to a cluster system as root.
This section describes situations where offline configuration is useful. The next
section shows how to stop and restart VCS to propagate the new configuration
throughout the cluster. The Offline Configuration of Service Groups lesson
provides detailed offline configuration procedures and examples.


Controlling access to VCS


Relating VCS and UNIX user accounts
If you have not configured Symantec Product Authentication Service (SPAS)
security in the cluster, VCS has a completely separate list of user accounts and
passwords to control access to VCS.


When using the Cluster Manager to perform administration, you are prompted for
a VCS account name and password. Depending on the privilege level of that VCS
user account, VCS displays the Cluster Manager GUI with an appropriate set of
options. If you do not have a valid VCS account, you cannot run Cluster Manager.


When using the command-line interface for VCS, you are also prompted to enter a
VCS user account and password and VCS determines whether that user account
has proper privileges to run the command. One exception is the UNIX root user.
By default, only the UNIX root account is able to use VCS ha commands to
administer VCS from the command line.
VCS access in secure mode
When running in secure mode, VCS uses operating system-based authentication,
which enables VCS to provide a single sign-on mechanism. All VCS users are
system and domain users and are configured using fully qualified user names, for
example, administrator@xyz.com.
When running in secure mode, you can add system or domain users to VCS and
assign them VCS privileges. However, you cannot assign or change passwords
using a VCS interface.




Simplifying VCS administrative access


The halogin command
The halogin command is provided to save authentication information so that
users do not have to enter credentials every time a VCS command is run.
The command stores authentication information in the user's home directory. You
must set the VCS_HOST environment variable to the name of the node from which
you are running VCS commands before using halogin.


Note: The effect of halogin only applies for that shell session.
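For example, a sketch of saving credentials for the current shell session (the
user name and host name are illustrative):
export VCS_HOST=s1
halogin admin password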


VCS user account privileges


You can ensure that the different types of administrators in your environment have
a VCS authority level to affect only those aspects of the cluster configuration that
are appropriate to their level of responsibility.
For example, if you have a DBA account that is authorized to take a database
service group offline or switch it to another system, you can make a VCS Group
Operator account for the service group with the same account name. The DBA can
then perform operator tasks for that service group, but cannot affect the cluster
configuration or other service groups. If you set AllowNativeCliUsers to 1, then
the DBA logged on with that account can also use the VCS command line to
manage the corresponding service group.
Setting VCS privileges is described in the next section.

Configuring cluster user accounts


VCS users are not the same as UNIX users except when running VCS in secure mode. If you have not configured SPAS security in the cluster, VCS maintains a set of user accounts separate from UNIX accounts. In this case, even if the same user exists in both VCS and UNIX, this user account can be given a range of rights in VCS that does not necessarily correspond to the user's UNIX system privileges.

The slide shows how to use the hauser command to create users and set
privileges. You can also add privileges with the -addpriv and -deletepriv
options to hauser.
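For example, a hedged sketch of creating the DBA operator account described earlier (the account and service group names are assumptions; hauser prompts for a password):

haconf -makerw
hauser -add dba -priv Operator -group appsg
haconf -dump -makero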

In non-secure mode, VCS passwords are stored in the main.cf file in encrypted
format. If you use a GUI or CLI to set up a VCS user account, passwords are
encrypted automatically. If you edit the main.cf file, you must encrypt the
password using the vcsencrypt command.
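For example, running the following command prompts for a password and prints the encrypted string, which you can then paste into the main.cf file:

vcsencrypt -vcs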
Note: In non-secure mode, if you change a UNIX account, this change is not
reflected in the VCS configuration automatically. You must manually modify
accounts in both places if you want them to be synchronized.
Modifying user accounts
Use the hauser command to make changes to a VCS user account:
Change the password for an account.
hauser -update user_name
Delete a user account.
hauser -delete user_name


Labs and solutions for this lesson are located on the following pages.
Lab 6: Starting and stopping VCS, page A-69.
Lab 6: Starting and stopping VCS, page B-141.

Lesson 7

Preparing Services for VCS

Preparing applications for VCS


Application service overview
An application service is the service that the end user perceives when accessing a particular network address. An application service typically consists of multiple components, some hardware-based and some software-based, all cooperating to produce a service.
For example, a service can include application software (processes), a file system
containing data files, a physical disk on which the file system resides, one or more
IP addresses, and a NIC for network access.

If this application service needs to be migrated to another system for recovery purposes, all of the components that compose the service must migrate together to re-create the service on another system.

Identifying components

The first step in preparing services to be managed by VCS is to identify the components required to support the services. These components should be itemized in your design worksheet and may include the following, depending on the requirements of your application services:
Shared storage resources:
Disks or components of a logical volume manager, such as Volume
Manager disk groups and volumes
File systems to be mounted
Directory mount points
Network-related resources:
IP addresses
Network interfaces
Application-related resources:
Identical installation and configuration procedures
Procedures to manage and monitor the application
The location of application binary and data files
The following sections describe the aspects of these components that are critical to
understanding how VCS manages resources.

Performing one-time configuration tasks


Configuration and migration procedure
Use the procedure shown in the diagram to prepare and test application services on
each system before placing the service under VCS control. Consider using a design
worksheet to obtain and record information about the service group and each
resource. This is the information you need to configure VCS to control these
resources.

Details are provided in the following section.


Documenting attributes
In order to configure the operating system resources you have identified as
requirements for an application, you need the detailed configuration information
used when initially configuring and testing services.

You can use a design diagram and worksheet while performing one-time
configuration tasks and testing to:
Show the relationships between the resources, which determine the order in
which you configure, start, and stop resources.
Document the values needed to configure VCS resources after testing is
complete.


Note: If your systems are not configured identically, you must note those
differences in the design worksheet. The Online Configuration lesson
shows how you can configure a resource with different attribute values for
different systems.

Checking resource attributes


Verify that the resources specified in your design worksheet are appropriate and
complete for your platform. Refer to the Veritas Cluster Server Bundled Agents
Reference Guide before you begin configuring resources.

Copyright 2012 Symantec Corporation. All rights reserved.

The examples displayed in the slides in this lesson show values for various
operating system platforms, indicated by the icons. In the case of the appsg service
group shown in the slide, the lan2 value of the Device attribute for the NIC
resource is specific to HP-UX. Solaris, Linux, and AIX have other operating
system-specific values, as shown in the respective Bundled Agents Reference
Guides.


Configuring shared storage


The diagram shows the procedure for configuring shared storage on the initial
system. In this example, Volume Manager is used to manage shared storage.

Note: Although examples used throughout this course are based on Veritas Volume Manager, VCS also supports other volume managers. VxVM is shown for simplicity; objects and commands are essentially the same on all platforms. The agents for other volume managers are described in the Veritas Cluster Server Bundled Agents Reference Guide.
Preparing shared storage, such as creating disk groups, volumes, and file systems,
is performed once, from one system. Then you must create mount point directories
on each system.
The options to mkfs differ depending on platform type, as displayed in the
following examples.
AIX

mkfs -V vxfs /dev/vx/rdsk/appdatadg/appdatavol


Linux

mkfs -t vxfs /dev/vx/rdsk/appdatadg/appdatavol


Solaris/HP-UX


mkfs -F vxfs /dev/vx/rdsk/appdatadg/appdatavol
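For reference, a hedged sketch of the one-time storage preparation using the names from the examples above (the disk media name and volume size are assumptions; mkfs is shown in its Solaris/HP-UX form):

vxdg init appdatadg appdatadg01=disk_1
vxassist -g appdatadg make appdatavol 2g
mkfs -F vxfs /dev/vx/rdsk/appdatadg/appdatavol
mkdir /appdata

Run the mkdir command on each cluster system; the other commands are run once, from one system.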

Configuring the application


You must ensure that the application is installed and configured identically on each
system that is a startup or failover target and manually test the application after all
dependent resources are configured and running.

Some VCS agents have application-specific installation instructions to ensure the application is installed and configured properly for a cluster environment. Check the Symantec Operations Readiness Tools (SORT) Web site for application-specific guides, such as the Veritas Cluster Server Agent for Oracle Installation and Configuration Guide.


Depending on the application requirements, you may need to:
Create user accounts.
Configure environment variables.
Apply licenses.
Set up configuration files.
This ensures that you have correctly identified the information used by the VCS
agent scripts to control the application.
Note: The shutdown procedure should be a graceful stop, which performs any
cleanup operations.

Testing the application service


Before configuring a service group in VCS to manage an application, test the
application components on each system that can be a startup or failover target for
the service group. Following this best practice recommendation ensures that VCS
can successfully manage the application service after you configure a service
group to manage the application.

The testing procedure emulates how VCS manages application services and must
include:
Startup: Online
Shutdown: Offline
Verification: Monitor


The actual commands used may differ from those used in this lesson. However,
conceptually, the same type of action is performed by VCS. Example operations
are described for each component throughout this section.

Bringing up shared storage resources

Verify that shared storage resources are configured properly and accessible. The
examples shown in the slide are based on using Volume Manager.
1 Import the disk group.
2 Start the volume.
3 Mount the file system.
Mount the file system manually for the purposes of testing the application
service. Do not configure the operating system to automatically mount any file
system that will be controlled by VCS.
For example, on Linux systems, ensure that the application file system is not
added to /etc/fstab. VCS must control where the file system is mounted.
Examples of mount commands are provided for each platform.


AIX

mount -V vxfs /dev/vx/dsk/appdatadg/appdatavol /appdata


Linux

mount -t vxfs /dev/vx/dsk/appdatadg/appdatavol /appdata


Solaris/HP-UX
mount -F vxfs /dev/vx/dsk/appdatadg/appdatavol /appdata
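Steps 1 and 2 use Volume Manager commands that are the same on all platforms, for example:

vxdg import appdatadg
vxvol -g appdatadg start appdatavol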

Virtual IP addresses


The example in the slide demonstrates how users access services through a virtual
IP address that is specific to an application. In this scenario, VCS is managing a
Web server that is accessible to network clients over a public network.
1 A network client requests access to http://eweb.com.
2 The DNS server translates the host name to the virtual IP address of the Web
server.
3 The virtual IP address is managed and monitored by a VCS IP resource in the
Web service group.
The virtual IP address is associated with the next virtual network interface for
e1000g0, which is e1000g0:1 in this example of Solaris network interfaces.
4 The system which has the service group online accepts the incoming request
on the virtual IP address.
Note: The administrative IP address is associated with a physical network
interface on a specific system and is configured by the operating system
during system startup. These are also referred to as base or test IP
addresses.

Virtual IP address migration

The diagram in the slide shows what happens if the system running the Web
service group (s1) fails.
1 The IP address is no longer available on the network. Network clients may
receive errors that web pages are not accessible.
2 VCS on the running system (s2) detects the failure and starts the service group.
3 The IP resource is brought online, which configures the same virtual IP address on the next available virtual network interface alias, e1000g0:1 in this example.
This virtual IP address floats, or migrates, with the service. It is not tied to a
system.
4 The network client Web request is now accepted by the s2 system.


Note: The admin IP address on s2 is also configured during system startup. This
address is unique and associated with only this system, unlike the virtual IP
address.

CAUTION: The administrative IP address cannot be placed under VCS control. This address must be configured by the operating system. Ensure that you do not configure an IP resource with the value of the administrative IP address.

Lesson 7 Preparing Services for VCS


Copyright 2012 Symantec Corporation. All rights reserved.

713

Configuring application IP addresses


Configure the application IP addresses associated with specific application
services to ensure that clients can access the application service using the specified
address.
Application IP addresses are configured as virtual IP addresses. On most
platforms, the devices used for virtual IP addresses are defined as
interface:number.

Note: These virtual IP addresses are only configured temporarily for testing
purposes. You must not configure the operating system to manage the
virtual IP addresses.
The following examples show the platform-specific commands used to configure a
virtual IP address for testing purposes.
AIX


Create an alias for the virtual interface and bring up the IP on the next available
logical interface.
ifconfig en1 inet 10.10.21.198 netmask 255.0.0.0 alias

Veritas Cluster Server 6.0 for UNIX: Install and Configure


Copyright 2012 Symantec Corporation. All rights reserved.

HP-UX

1 Configure the IP address using the ifconfig command.
ifconfig lan2:1 inet 10.10.21.198
2 Use ifconfig to bring up the IP address to test the configuration without rebooting.
ifconfig lan2:1 up

Linux

Configure the IP address using the ip addr command.

ip addr add 10.10.21.198/24 broadcast 10.10.2.255 \
dev eth0 label eth0:1
ip addr show eth0

Solaris

Plumb the virtual interface and bring up the IP address on the next available
logical interface.
ifconfig e1000g0 addif 10.10.21.198 up

Note: In each case, you can edit /etc/hosts to assign a virtual host name
(application name) to the virtual IP address.

10.10.21.198 eweb.com



Starting the application

Copyright 2012 Symantec Corporation. All rights reserved.

When all dependent resources are available, you can start the application software.
Ensure that the application is not configured to start automatically during system
boot. VCS must be able to start and stop the application using the same methods
you use to control the application manually.


Examples of operating system control of applications:


On AIX and HP-UX, rc files may be present if the application is under
operating system control.
On Linux, you can use the chkconfig command to determine if an
application is under operating system control.
On Solaris 10 platforms, you must disable the Service Management Facility
(SMF) using the svcadm command for some services, such as Apache, to
ensure that SMF is not trying to control the service.
Follow the guidelines for your platform to remove an application from operating
system control in preparation for configuring VCS to control the application.
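For example (the service names are assumptions and vary by release and distribution):

chkconfig httpd off
svcadm disable svc:/network/http:apache2

The first command removes a Web server from operating system control on Linux; the second disables the Apache service under Solaris 10 SMF.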


Verifying resources
You can perform some simple steps, such as those shown in the slide, to verify that
each component needed for the application to function is operating at a basic level.
Note: To test the network resources, access one or more well-known addresses
outside of the cluster, such as local routers, or primary and secondary DNS
servers.
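For example, a simple connectivity check (the router and DNS server addresses are placeholders for your environment):

ping 10.10.2.1
ping 10.10.2.53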

This helps you identify any potential configuration problems before you test the
service as a whole, as described in the Testing the Integrated Components
section.

Testing the integrated components


When all components of the service are running, test the service in situations that
simulate real-world use of the service.

For example, if you have an application with a backend database, you can:
1 Start the database (and listener process).
2 Start the application.
3 Connect to the application from the public network using the client software to
verify name resolution to the virtual IP address.
4 Perform user tasks, as applicable; perform queries, make updates, and run
reports.


Another example that illustrates how you can test your service uses Network File
System (NFS). If you are preparing to configure a service group to manage an
exported file system, verify that you can mount the exported file system from a
client on the network. This is described in more detail later in the course.
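For example, from a Linux client on the network (the server name and export path are hypothetical):

mount -t nfs s1:/appdata /mnt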

Stopping and migrating an application service


Stopping application components
Stop resources in the order of the dependency tree from the top down after you
have finished testing the service. You must have all resources offline in order to
migrate the application service to another system for testing. The procedure also
illustrates how VCS stops resources.
The ifconfig options are platform-specific, as shown in the following
examples.
AIX

ifconfig en1 10.10.21.198 delete

HP-UX

ifconfig lan2:1 0.0.0.0


Linux

ifdown eth0:1

Solaris

ifconfig e1000g0 removeif 10.10.21.198
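After the virtual IP address is removed, the remaining storage resources can be stopped from the top of the dependency tree down, for example:

umount /appdata
vxvol -g appdatadg stop appdatavol
vxdg deport appdatadg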


Manually migrating an application service


After you have verified that the application service works properly on one system,
manually migrate the service between all intended target systems. Performing
these operations enables you to:
Ensure that your operating system and application resources are properly
configured on all potential target cluster systems.
Validate or complete your design worksheet to document the information
required to configure VCS to manage the services.

Perform the same type of testing used to validate the resources on the initial system, including real-world scenarios, such as client access from the network.


Collecting configuration information


Documenting resource dependencies
Ensure that the steps you perform to bring resources online and take them offline
while testing the service are accurately reflected in a design worksheet. Compare
the worksheet with service group diagrams you have created or that have been
provided to you.

The slide shows the resource dependency definition for the application used as an
example in this lesson.


Validating service group attributes


Check the service group attributes in your design worksheet to ensure that the
appropriate startup and failover systems are listed. Other service group attributes
may be included in your design worksheet, according to the requirements of each
service.

Service group definitions consist of the attributes of a particular service group. These attributes are described in more detail later in the course.

Labs and solutions for this lesson are located on the following pages.
Lab 7: Preparing application services, page A-77.
Lab 7: Preparing application services, page B-157.



Lesson 8

Online Configuration

Online service group configuration


The chart on the left in the diagram illustrates the high-level procedure you can use
to modify the cluster configuration while VCS is running.
Online configuration procedure

You can use the procedures shown in the diagram as a standard methodology for
creating service groups and resources. Although there are many ways you could
vary this configuration procedure, following a recommended practice simplifies
and streamlines the initial configuration and facilitates troubleshooting if you
encounter configuration problems.


Naming convention suggestions


Using a consistent pattern for selecting names for VCS objects simplifies initial
configuration of high availability. Perhaps more importantly, applying a naming
convention helps avoid administrator errors and can significantly reduce
troubleshooting efforts when errors or faults occur.
As shown in the slide, it is recommended that you use a pattern based on the function of the service group, and match some portion of the name among all resources and the service group in which the resources are contained.

When deciding upon a naming convention, consider delimiters, such as dash (-)
and underscore (_), with care. Differences in keyboards may prevent use of some
characters, especially in the case where clusters span geographic locations.

Adding a service group using the GUI


The minimum required information to create a service group is:


A unique name
Using a consistent naming scheme helps identify the purpose of the service
group and all associated resources.
The list of systems on which the service group can run
The SystemList attribute for the service group defines where the service group
can run, as displayed in the excerpt from the sample main.cf file. A priority
number is associated with each system to determine the order systems are
selected for failover. The lower-numbered system is selected first.
The list of systems where the service group can be started
The Startup box specifies that the service group starts automatically when VCS
starts on the system, if the service group is not already online elsewhere in the
cluster. This is defined by the AutoStartList attribute of the service group. In
the example displayed in the slide, the s1 system is selected as the system on
which appsg is started when VCS starts up.
The type of service group
The Service Group Type selection is Failover by default.
If you save the configuration after creating the service group, you can view the
main.cf file to see the effect of had modifying the configuration and writing the
changes to the local disk.
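For example, the resulting definition in main.cf might look similar to this excerpt (a sketch based on the attribute values used in this lesson):

group appsg (
        SystemList = { s1 = 0, s2 = 1 }
        AutoStartList = { s1 }
        )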


Note: You can click the Show Command button to see the commands that are
run when you click OK.
Adding a service group using the CLI
You can also use the VCS command-line interface to modify a running cluster
configuration. The next example shows how to use hagrp commands to add the
appsg service group and modify its attributes.
haconf -makerw
hagrp -add appsg
hagrp -modify appsg SystemList s1 0 s2 1
hagrp -modify appsg AutoStartList s1 s2
haconf -dump -makero

The corresponding main.cf excerpt for appsg is shown in the slide.


Notice that the main.cf definition for the appsg service group does not include the Parallel attribute. When an attribute is set to its default value, the attribute is not written to the main.cf file. To display all values for all attributes:
In the GUI, select the object (resource, service group, system, or cluster), click
the Properties tag, and click Show all attributes.
From the command line, use the -display option to the corresponding ha
command. For example:
hagrp -display appsg

See the command-line reference card provided with this course for a list of
commonly used ha commands.


Adding resources
Online resource configuration procedure

Add resources to a service group in the order of resource dependencies starting from the child resource (bottom up). This enables each resource to be tested as it is added to the service group.


Adding a resource requires you to specify:


The service group name
The unique resource name
If you prefix the resource name with the service group name, you can more
easily identify the service group to which it belongs. When you display a list of
resources from the command line using the hares -list command, the
resources are sorted alphabetically.
The resource type
Attribute values
Use the procedure shown in the diagram to configure a resource.
Notes:
It is recommended that you set each resource to be non-critical during initial configuration. This simplifies testing and troubleshooting in the event that you have specified incorrect configuration information. If a resource faults due to a configuration error, the service group does not fail over if resources are non-critical.
Enabling a resource signals the agent to start monitoring the resource.


Adding a resource using the GUI: NIC example


The NIC resource has only one required attribute, Device, for all platforms other
than HP-UX, which also requires NetworkHosts unless PingOptimize is set to 0.

Optional attributes for NIC vary by platform. Refer to the Veritas Cluster Server
Bundled Agents Reference Guide for a complete definition. These optional
attributes are common to all platforms.
NetworkType: Type of network, Ethernet (ether)
PingOptimize: Number of monitor cycles to detect if the configured interface
is inactive
A value of 1 optimizes broadcast pings and requires two monitor cycles. A
value of 0 performs a broadcast ping during each monitor cycle and detects the
inactive interface within the cycle. The default is 1.


Note: On the HP-UX platform, if the PingOptimize attribute is set to 1, the monitor entry point does not send broadcast pings.

NetworkHosts: The list of hosts on the network that are used to determine if
the network connection is alive
It is recommended that you specify the IP address of the host rather than the
host name to prevent the monitor cycle from timing out due to DNS problems.
Example device attribute values:
AIX: en0; HP-UX: lan2; Linux: eth0; Solaris: e1000g0
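For comparison with the GUI, a hedged command-line sketch of adding a NIC resource with these attributes (the appnic resource name follows the dependency-tree example later in this lesson; the device value is the Solaris example):

haconf -makerw
hares -add appnic NIC appsg
hares -modify appnic Critical 0
hares -modify appnic Device e1000g0
hares -modify appnic Enabled 1
haconf -dump -makero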

Persistent resources
If you add a persistent resource as the first resource of a new service group, as
shown in the lab exercise for this lesson, notice that the service group status is
offline, even though the resource status is online.

Persistent resources are not taken into consideration when VCS reports service
group status, because they are always online. When a nonpersistent resource is
added to the group, such as IP, the service group status reflects the status of that
nonpersistent resource.


Adding an IP resource
The slide shows the required attribute values for an IP resource (on Solaris) in the
appsg service group. The corresponding entry is made in the main.cf file when
the configuration is saved.

Notice that the IP resource on Solaris has two required attributes: Device and
Address, which specify the network interface and virtual IP address, respectively.
The required attributes vary depending on the platform.
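A corresponding command-line sketch (attribute values follow the Solaris example and the virtual IP address used earlier in this course):

hares -add appip IP appsg
hares -modify appip Critical 0
hares -modify appip Device e1000g0
hares -modify appip Address 10.10.21.198
hares -modify appip Enabled 1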


Optional Attributes
NetMask: Netmask associated with the application IP address
The value may be specified in decimal (base 10) or hexadecimal (base 16).
The default is the netmask corresponding to the IP address class.
This is a required attribute on AIX.
Options: Options to be used with the ifconfig command
ArpDelay: Number of seconds to sleep between configuring an interface and
sending out a broadcast to inform routers about this IP address
The default is 1 second.
IfconfigTwice: If set to 1, this attribute causes an IP address to be configured
twice, using an ifconfig up-down-up sequence. This behavior increases the
probability of gratuitous ARPs (caused by ifconfig up) reaching clients.
The default is 0.

Adding a resource using the CLI: DiskGroup example


You can use the hares command to add a resource and configure the required
attributes. This example shows how to add a DiskGroup resource.
The DiskGroup resource
The DiskGroup resource has only one required attribute, DiskGroup, except on
Linux, which also requires StartVolumes and StopVolumes.
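For example, a sketch using the disk group created earlier in this course:

hares -add appdg DiskGroup appsg
hares -modify appdg Critical 0
hares -modify appdg DiskGroup appdatadg
hares -modify appdg Enabled 1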

Note: As of version 4.1, VCS sets the vxdg autoimport option to no, which
disables autoimporting of disk groups.


Example optional attributes:


StartVolumes: Starts all volumes after importing the disk group
This also starts layered volumes by running vxrecover -s. The default is 1,
enabled, on all UNIX platforms except Linux.
StopVolumes: Stops all volumes before deporting the disk group with vxvol
The default is 1, enabled, on all UNIX platforms except Linux.

The Volume resource


The Volume resource can be used to manage a VxVM volume. Although the
Volume resource is not strictly required, it provides additional monitoring. You can
use a DiskGroup resource to start volumes when the DiskGroup resource is
brought online. This has the effect of starting volumes more quickly, but only the
disk group is monitored.

If you have a large number of volumes in a single disk group, the DiskGroup resource can time out when trying to start or stop all the volumes simultaneously. In this case, you can set the StartVolumes and StopVolumes attributes of the DiskGroup to 0, and create Volume resources to start the volumes individually.


Also, if you are using volumes as raw devices with no file systems, and, therefore,
no Mount resources, consider using Volume resources for the additional level of
monitoring.
The Volume resource has no optional attributes.
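A hedged command-line sketch (the appvol resource name is an assumption; Volume and DiskGroup are the attributes to set):

hares -add appvol Volume appsg
hares -modify appvol Volume appdatavol
hares -modify appvol DiskGroup appdatadg
hares -modify appvol Enabled 1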

The Mount resource

The Mount resource has the required attributes displayed in the main.cf file
excerpt in the slide.
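Because the slide excerpt is not reproduced here, a hedged example of a Mount resource definition, using the file system created earlier in this course:

Mount appmnt (
        MountPoint = "/appdata"
        BlockDevice = "/dev/vx/dsk/appdatadg/appdatavol"
        FSType = vxfs
        FsckOpt = "-y"
        )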


Example optional attributes:


MountOpt: Specifies options for the mount command
When setting attributes with arguments starting with a dash (-), use the percent
(%) character to escape the arguments. Examples:
hares -modify appmnt FsckOpt %-y
The percent character is an escape character for the VCS CLI which prevents
VCS from interpreting the string as an argument to hares.
SnapUmount: Determines whether VxFS snapshots are unmounted when the
file system is taken offline (unmounted)
The default is 0, meaning that snapshots are not automatically unmounted
when the file system is unmounted.
Note: If SnapUmount is set to 0 and a VxFS snapshot of the file system is
mounted, the unmount operation fails when the resource is taken offline,
and the service group is not able to fail over.
This is desired behavior in some situations, such as when a backup is being
performed from the snapshot.


File system locking


Storage Foundation enables a file system to be mounted with a key which must be
used to unmount the file system. The Mount resource has a VxFSMountLock
attribute to manage the file system mount key.
This attribute is set to the string "VCS" by default when a Mount resource is added. The Mount agent uses this key for online and offline operations to ensure the file system cannot be inadvertently unmounted outside of VCS control.
You can unlock a file system without unmounting by using the fsadm command:

/opt/VRTS/bin/fsadm -o mntunlock="key" mount_point_name


Note: The example operating system commands for unmounting a locked file
system are specific to Solaris. Other operating systems may use different
methods for unmounting file systems.

The Process resource


The Process resource controls the application and is added last because it requires
all other resources to be online in order to start. The Process resource is used to
start, stop, and monitor the status of a process.
Online: Starts the process specified in the PathName attribute, with options, if
specified in the Arguments attribute
Offline: Sends SIGTERM to the process
SIGKILL is sent if process does not exit within one second.
Monitor: Determines if the process is running by scanning the process table

The optional Arguments attribute specifies any command-line options to use when
starting the process.

Process attribute specification


If the executable is a shell script, you must specify the script name followed by
arguments. You must also specify the full path for the shell in the PathName
attribute.
The monitor script calls ps and matches the process name. The process name
field is limited to 80 characters in the ps output. If you specify a path name to
a process that is longer than 80 characters, the monitor entry point fails.
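A hypothetical sketch for a shell-script-based application (the script path and arguments are assumptions):

hares -add appproc Process appsg
hares -modify appproc PathName /bin/sh
hares -modify appproc Arguments "/opt/app/bin/app.sh start"
hares -modify appproc Enabled 1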


Solving common configuration errors


Troubleshooting resources
Verify that each resource is online on the local system before continuing the
service group configuration procedure.

If you are unable to bring a resource online, use the procedure in the diagram to
find and fix the problem. You can view the logs through Cluster Manager or in the
/var/VRTSvcs/logs directory if you need to determine the cause of errors.
VCS log entries are written to engine_A.log and agent entries are written to
resource_A.log files.


Note: Some resources must be disabled and reenabled. Only resources whose
agents have open and close entry points, such as MultiNICB, require you to
disable and enable again after fixing the problem. By contrast, a Mount
resource does not need to be disabled if, for example, you incorrectly
specify the MountPoint attribute.
However, it is generally good practice to disable and enable regardless because it is
difficult to remember when it is required and when it is not. In addition, a resource
is immediately monitored upon enabling, which would indicate potential problems
with attribute specification.
More detail on performing tasks necessary for solving resource configuration
problems is provided in the following sections.


Disabling and enabling a resource


Disable a resource before you start modifying attributes to fix a misconfigured
resource. When you disable a resource:
VCS stops monitoring the resource, so it does not fault or wait to come online
while you are making changes.
The agent calls the close entry point, if defined. The close entry point is
optional.
When the close tasks are completed, or if there is no close entry point, the
agent stops monitoring the resource.

When you enable a resource, VCS calls the agent to monitor the resource immediately, and then periodically directs the agent to monitor the resource.


Clearing resource faults


A fault indicates that the monitor entry point is reporting an unexpected offline
state for a previously online resource. This indicates a problem with the underlying
component being managed by the resource.
Before clearing a fault, you must resolve the problem that caused the fault. Use the
VCS logs to help you determine which resource has faulted and why.

It is important to clear faults for critical resources after fixing underlying problems
so that the system where the fault originally occurred can be a failover target for
the service group. In a two-node cluster, a faulted critical resource would prevent
the service group from failing back if another fault occurred. You can clear a
faulted resource on a particular system, or on all systems when the service group
can run.


Note: Persistent resource faults should be probed to force the agent to monitor the
resource immediately. Otherwise, the resource is not online until the next
OfflineMonitorInterval, up to five minutes.
Clearing and probing resources using the CLI
To clear a faulted resource, type:
hares -clear resource [-sys system]
If the system name is not specified then the resource is cleared on all systems.
To probe a resource, type:
hares -probe resource -sys system

Testing the service group


After you have successfully brought each resource online, link the resources and
switch the service group to each system on which the service group can run.
Test procedure
For simplicity, the example service group uses the default Priority failover policy.
That is, if a critical resource in appsg faults, the service group is taken offline and
brought online on the system with the lowest priority value that is available for
failover.
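For example, a sketch using the resources and systems from this lesson:

hares -link appip appnic
hagrp -switch appsg -to s2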

The Handling Resource Faults lesson provides additional information about configuring and testing failover behavior. Additional failover policies are also described in the Veritas Cluster Server for UNIX: Cluster Management participant guide.


Linking resources
When you link a parent resource to a child resource, the dependency becomes a component of the service group configuration. When you save the cluster configuration, each dependency is listed at the end of the service group definition, after the resource specifications, in the format shown in the slide.
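For example:

appip requires appnic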
In addition, VCS creates a dependency tree in the main.cf file at the end of the
service group definition to provide a more visual view of resource dependencies.
This is not part of the cluster configuration, as denoted by the // comment
markers.
// resource dependency tree
//
// group appsg
// {
// IP appip
//     {
//     NIC appnic
//     }
// }

Note: You cannot use the // characters as general comment delimiters. VCS strips
out all lines with // upon startup and re-creates these lines based on the
requires statements in the main.cf file.


Resource dependencies
VCS enables you to link resources to specify dependencies. For example, an IP
address resource is dependent on the NIC providing the physical link to the
network.

Ensure that you understand the dependency rules shown in the slide before you
start linking resources.


Running a virtual fire drill


You can run a virtual fire drill for a service group to check that the underlying
infrastructure is properly configured to enable failover to other systems. The
service group must be fully online on one system, and can then be checked on all
other systems where it is offline.
You can select which type of infrastructure components to check, or run all checks.
In some cases, you can use the virtual fire drill to correct problems, such as making
a mount point directory if it does not exist.
However, not all resources have defined actions for virtual fire drills, in which case
a message is displayed indicating that no checks were performed.

You can also run fire drills using the havfd command, as shown in the slide.
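For example (the syntax is given as an assumption; check the havfd manual page for your release):

havfd appsg -sys s2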


Setting the critical attribute


The Critical attribute is set to 1, or true, by default. When you initially configure a
resource, you set the Critical attribute to 0, or false. This enables you to test the
resources as you add them without the resource faulting and causing the service
group to fail over as a result of configuration errors you make.

Some resources may always be set to non-critical. For example, a resource monitoring an Oracle reporting database may not be critical to the overall service being provided to users. In this case, you can set the resource to non-critical to prevent downtime due to failover in the event that it was the only resource that faulted.


Note: When you set an attribute to a default value, the attribute is removed from
main.cf. For example, after you set Critical to 1 for a resource, the
Critical = 0 line is removed from the resource configuration because
it is now set to the default value for the resource type.
To see the values of all attributes for a resource, use the hares command. For
example:
hares -display appdg

Labs and solutions for this lesson are located on the following pages.
Lab 8: Online configuration of a service group, page A-83.
Lab 8: Online configuration of a service group, page B-167.

Lesson 9

Offline Configuration

Offline configuration examples


Example 1: Reusing a cluster configuration
One example where offline configuration is appropriate is when your high
availability environment is expanding and you are adding clusters with similar
configurations.
In the example displayed in the diagram, the original cluster consists of two
systems, each system running a database instance. Another cluster with essentially
the same configuration is being added, but it is managing different databases.

You can copy the configuration files from the original cluster, make the necessary changes, and then restart VCS as described later in this lesson. This method may be more efficient than creating each service group and resource using a graphical user interface or the VCS command-line interface.


Example 2: Reusing a service group configuration


Another example of using offline configuration is when you want to add a service
group with a similar set of resources as another service group in the same cluster.

In the example shown in the slide, the portion of the main.cf file that defines the
extwebsg service group is copied and edited as necessary to define a new intwebsg
service group.

180 94

Veritas Cluster Server 6.0 for UNIX: Install and Configure


Copyright 2012 Symantec Corporation. All rights reserved.

Example 3: Modeling a configuration


You can use the Simulator to create and test a cluster configuration on Windows
and then copy the finalized configuration files into a real cluster environment. The
Simulator enables you to create configurations for all supported UNIX, Linux, and
Windows platforms.
This only applies to the cluster configuration. You must perform all preparation
tasks to create and test the underlying resources, such as virtual IP addresses,
shared storage objects, and applications.

After the cluster configuration is copied to the real cluster and VCS is restarted,
you must perform complete testing of all objects, as shown later in this lesson.


Offline configuration procedures

New cluster


The diagram illustrates a process for modifying the cluster configuration when you
are configuring your first service group and do not already have services running
in the cluster. Select one system to be your primary node for configuration. Work
from this system for all steps up to the final point of restarting VCS.
1 Save and close the configuration.
Always save and close the configuration before making any modifications.
This ensures the configuration in the main.cf file on disk is the most recent
in-memory configuration.
2 Change to the configuration directory.
The examples used in this procedure assume you are working in the /etc/VRTSvcs/conf/config directory.
3 Stop VCS.
Stop VCS on all cluster systems. This ensures that there is no possibility of
another administrator changing the cluster configuration while you are
modifying the main.cf file.
4 Edit the configuration files.
You must choose a system on which to modify the main.cf file. You can
choose any system. However, you must then start VCS first on that system.
5 Verify the configuration file syntax.

Run the hacf command in the /etc/VRTSvcs/conf/config directory to verify the syntax of the main.cf and types.cf files after you have modified them. VCS cannot start if the configuration files have syntax errors. Run the command in the config directory using the dot (.) to indicate the current working directory, or specify the full path.
Note: The hacf command only identifies syntax errors, not configuration errors.
6 Start VCS on the system with the modified configuration file.
Start VCS first on the primary system with the modified main.cf file.
7 Verify that VCS is running.
Verify that VCS is running on the primary configuration system before starting
VCS on other systems.
8 Start other systems.
After VCS is in a running state on the first system, start VCS on all other
systems. If you cannot bring VCS to a running state on all systems, see the
Solving Offline Configuration Problems section.
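A condensed sketch of the command sequence for this procedure, run from the primary node:

haconf -dump -makero
cd /etc/VRTSvcs/conf/config
hastop -all
vi main.cf
hacf -verify .
hastart
hastatus -sum

When hastatus -sum shows the running state on this system, start VCS on the other systems with hastart.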



Existing cluster
The diagram illustrates a process for modifying the cluster configuration when you
want to minimize the time that VCS is not running to protect existing services.
This procedure includes several built-in protections from common configuration
errors and maximizes high availability.

First system


Designate one system as the primary change management node. This makes
troubleshooting easier if you encounter problems with the configuration.
1 Save and close the configuration.
Save and close the cluster configuration before you start making changes. This
ensures that the working copy has the latest in-memory configuration.
2 Back up the main.cf file.
Make a copy of the main.cf file with a different name. This ensures that you
have a backup of the configuration that was in memory when you saved the
configuration to disk.
Note: If any *types.cf files are being modified, also back up these files.
3 Make a staging directory.
Make a subdirectory of /etc/VRTSvcs/conf/config in which you can
edit a copy of the main.cf file. This helps ensure that your edits are not
overwritten if another administrator changes the configuration simultaneously.
4 Copy the configuration files.

Copy the *.cf files from /etc/VRTSvcs/conf/config to the staging directory.
5 Modify the configuration files.
Modify the main.cf file in the staging directory on one system. The diagram
on the slide refers to this as the first system.
6 Freeze the service groups.
If you are modifying existing service groups, freeze those service groups
persistently by setting the Frozen attribute to 1. This simplifies fixing resource
configuration problems after VCS is started because the service groups will not
fail over between systems if faults occur.
. . .
group extwebsg (
SystemList = { s1 = 1, s2 = 0}
AutoStartList = { s1, s2 }
Operators = { extwebsgoper }
Frozen = 1
)


7 Verify the configuration file syntax.
Run the hacf command in the staging directory to verify the syntax of the main.cf and types.cf files after you have modified them.

Note: The dot (.) argument indicates that the current working directory is used as
the path to the configuration files. You can run hacf -verify from any
directory by specifying the path to the configuration directory:
hacf -verify /etc/VRTSvcs/conf/config


8 Stop VCS.
Stop VCS on all cluster systems after making configuration changes. To leave applications running, use the -force option, as shown in the diagram.
9 Copy the new configuration file.
Copy the modified main.cf file and all *types.cf files from the staging directory back into the configuration directory.
10 Start VCS.
Start VCS first on the system with the modified main.cf file.
11 Verify that VCS is in a local build or running state on the primary system.
12 Start other systems.
After VCS is in a running state on the first system, start VCS on all other systems. You must wait until the first system has built a cluster configuration in memory and is in a running state to ensure the other systems perform a remote build from the first system's configuration in memory.
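For example, a sketch of steps 8 through 10, assuming a staging directory named stage under /etc/VRTSvcs/conf/config:

cd /etc/VRTSvcs/conf/config
hastop -all -force
cp stage/main.cf stage/*types.cf .
hastart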
VCS startup using a specific main.cf file


The diagram illustrates how to start VCS to ensure that the cluster configuration in
memory is built from a specific main.cf file.

Starting VCS using a modified main.cf file


Ensure that VCS builds the new configuration in memory on the system where the
changes were made to the main.cf file. All other systems must wait for the build
to successfully complete and the system to transition to the running state before
VCS is started elsewhere.
1 Run hastart on s1 to start the had and hashadow processes.
2 HAD checks for a valid main.cf file.
3 HAD checks for an active cluster configuration on the cluster interconnect.
4 Because there is no active cluster configuration, HAD on s1 reads the local
main.cf file and loads the cluster configuration into local memory on s1.
5 Verify that VCS is in a local build or running state on s1 using hastatus -sum.


6 When VCS is in a running state on s1, run hastart on s2 to start the had and hashadow processes.
7 HAD on s2 checks for a valid main.cf file.
8 HAD on s2 checks for an active cluster configuration on the cluster interconnect.
9 The s1 system sends a copy of the cluster configuration over the cluster interconnect to s2.
10 The s2 system performs a remote build to load the new cluster configuration in memory.
11 HAD on s2 backs up the existing main.cf and types.cf files and saves the current in-memory configuration to disk.


Resource dependencies

Ensure that you create the resource dependency definitions at the end of the
service group definition. Add the links using the syntax shown in the slide.


A completed configuration file

A portion of the completed main.cf file with the new service group definition
for intwebsg is displayed in the slide. This service group was created by copying
the extwebsg service group definition and changing the attribute names and values.


Two errors are intentionally shown in the example in the slide.
The extwebip resource name was not changed in the intwebsg service group.
This causes a syntax error when the main.cf file is checked using
hacf -verify, because you cannot have duplicate resource names within the
cluster.
The intwebdg resource has the value of extwebdatadg for the DiskGroup
attribute. This does not cause a syntax error, but it is not a correct attribute
value for this resource. The extwebdatadg disk group is being used by the
extwebsg service group and cannot be imported by another failover service
group.
Note: You cannot include comment lines in the main.cf file. The lines you see
starting with // are generated by VCS to show resource dependencies. Any
lines starting with // are stripped out during VCS startup.
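You can catch the first class of error before restarting VCS by verifying the
configuration directory. For example:

    hacf -verify /etc/VRTSvcs/conf/config

hacf -verify reports syntax errors, such as duplicate resource names, but it
cannot detect semantic mistakes such as a wrong DiskGroup attribute value.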


Solving offline configuration problems


Starting from an old configuration
If you are running an old cluster configuration because you started VCS on the
wrong system first, you can recover the main.cf file on the system where you
originally made the modifications using the main.cf.previous backup file
created automatically by VCS.
Recovering from an old configuration


Use the offline configuration procedure to restart VCS using the recovered
main.cf file.


Note: You must ensure that VCS is in the local build or running state on the
system with the recovered main.cf file before starting VCS on other
systems.
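For example, a minimal sketch of the recovery, assuming the modifications were
originally made on s1:

    # On s1, with VCS stopped on all systems:
    cd /etc/VRTSvcs/conf/config
    cp main.cf main.cf.bad        # hypothetical safety copy
    cp main.cf.previous main.cf
    hacf -verify .
    hastart                       # s1 performs the local build

    # Start VCS on the remaining systems only after s1 is running.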


All systems in a wait state


This scenario results in all cluster systems entering a wait state:


Your new main.cf file has a syntax problem.
You forget to check the file with hacf -verify.
You start VCS on the first system with hastart.
The first system cannot build a configuration and goes into a wait state, such as
ADMIN_WAIT.


Forcing VCS to start from a wait state


1 An attempt to start VCS on s1 with a bad main.cf file results in the cluster
entering a wait state.
2 Visually inspect the main.cf file and modify or replace the file as necessary
to ensure it contains the correct configuration content.
3 Verify the configuration with hacf -verify /etc/VRTSvcs/conf/config.
4 Run hasys -force s1 on s1. This starts the local build process.
You must have a valid main.cf file to force VCS to a running state. If the
main.cf file has a syntax error, VCS enters the ADMIN_WAIT state.
5 HAD checks for a valid main.cf file.
6 The had daemon on s1 reads the local main.cf file, and if it has no syntax
errors, HAD loads the cluster configuration into local memory on s1.


7 When HAD is in a running state on s1, this state change is broadcast on the
cluster interconnect by GAB.
8 Next, run hastart on s2 to start HAD.
9 HAD on s2 checks for a valid main.cf file. This system has an old version of
the main.cf file.
10 HAD on s2 then checks for another node in a local build or running state.
11 Because s1 is in a local build or running state, HAD on s2 performs a remote
build from the configuration on s1.
12 HAD on s2 copies the cluster configuration into the local main.cf and
types.cf files after moving the original files to backup copies with
timestamps.
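For example, a command sketch of steps 2 through 4 on s1:

    vi /etc/VRTSvcs/conf/config/main.cf    # inspect and correct the file
    hacf -verify /etc/VRTSvcs/conf/config
    hasys -force s1                        # start the local build on s1
    hastatus -sum                          # confirm s1 reaches RUNNING

Then run hastart on s2 and the remaining systems.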


Configuration file backups


Each time you save the cluster configuration, VCS maintains backup copies of the
main.cf and types.cf files.
This occurs as follows:
1 New main.cf.datetime and *types.cf.datetime files are created.
2 The hard links for main.cf, main.cf.previous, types.cf, and
types.cf.previous (as well as any others) are changed to point to the
correct versions.


Although it is always recommended that you copy configuration files before
modifying them, you can revert to an earlier version of these files if you damage or
lose a file.
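For example, after several saved changes the configuration directory might
contain entries similar to the following (the file names and timestamp format
shown here are hypothetical):

    ls /etc/VRTSvcs/conf/config
    main.cf                       types.cf
    main.cf.15Dec2012.09.30.00    types.cf.15Dec2012.09.30.00
    main.cf.previous              types.cf.previous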


Testing the service group


Service group testing procedure
After you restart VCS throughout the cluster, use the procedure shown in the slide
to verify that your configuration additions or changes are correct.
Notes:
This process is slightly different from online configuration, which tests each
resource before creating the next and before creating dependencies.
Resources should come online after you restart VCS if you have specified the
appropriate attributes to automatically start the service group.


Use the procedures shown in the Online Configuration lesson to solve
configuration problems, if any.


If you need to make additional modifications, you can use one of the online tools
or modify the configuration files using the offline procedure.


Labs and solutions for this lesson are located on the following pages.
Lab 9: Offline configuration, page A-95.
Lab 9: Offline configuration, page B-199.


Lesson 10: Configuring Notification


Notification overview
When VCS detects certain events, you can configure the notifier to:
Generate an SNMP (V2) trap to specified SNMP consoles.
Send an e-mail message to designated recipients.
Message queue


VCS ensures that no event messages are lost while the VCS engine is running,
even if the notifier daemon stops or is not started. The had daemons
throughout the cluster communicate to maintain a replicated message queue.


If the service group with notifier configured as a resource fails on one of the nodes,
notifier fails over to another node in the cluster. Because the message queue is
guaranteed to be consistent and replicated across nodes, notifier can resume
message delivery from where it left off after it fails over to the new node.
Messages are stored in the queue until one of these conditions is met:
The notifier daemon sends an acknowledgement to had that at least one
recipient has received the message.
The queue is full. The queue is circular: the last (oldest) message is deleted in
order to write the current (newest) message.
Messages that have been in the queue for one hour are deleted if notifier is
unable to deliver them to the recipient.
Note: Before the notifier daemon connects to had, messages are stored
permanently in the queue until one of the last two conditions is met.


You can view the entries in the message queue using the haclus -notes
command. You can also delete all queued messages on all cluster nodes using
haclus -delnotes, but the notifier must be stopped first.


High availability for notification


The notification service is managed by a NotifierMngr type resource contained in
the ClusterService group. ClusterService is created automatically during cluster
configuration, if certain configuration options are selected.
You can also configure ClusterService later using the VCS command-line interface
or Veritas Operations Manager.


After configuring notification, ClusterService contains a NotifierMngr type
resource to manage the notifier daemon and a csgnic resource to monitor the
network interface used by the notifier for sending messages.


ClusterService is a special-purpose service group that:


Is the first group to come online on the first node in a running state
Can fail over despite being frozen
Cannot be autodisabled
Switches to another node upon hastop -local on the online system
Attempts to start on all miniclusters if a network partition occurs
ClusterService is also used to manage the wide-area connector process in a global
cluster environment.
CAUTION: Do not add resources to ClusterService for managing non-VCS
applications or services.


Message severity levels


Event messages are assigned one of four severity levels by notifier:
Information: Normal cluster activity is occurring, such as resources being
brought online.
Warning: Cluster or resource states are changing unexpectedly, such as a
resource in an unknown state.
Error: Services are interrupted, such as a service group faulting that cannot be
failed over.
SevereError: Potential data corruption is occurring, such as a concurrency
violation.


The administrator can configure notifier to specify which recipients are sent
messages based on the severity level.


A complete list of events and corresponding severity levels is provided in the
Veritas Cluster Server Administrator's Guide.


Notifier and log events


The table in the slide shows how the notifier levels shown in e-mail messages
compare to the log file codes for corresponding events. Notice that notifier
SevereError events correlate to CRITICAL entries in the engine log.



Configuring notification
Configuration methods
Although you can start and stop the notifier daemon manually outside of VCS,
you should make the notifier component highly available by placing the daemon
under VCS control.


You can configure VCS to manage the notifier manually using the command-line
interface or Veritas Operations Manager, or set up notification during initial cluster
configuration.



Notification configuration
These high-level tasks are required to manually configure highly available
notification within the ClusterService group.
1 Add a NotifierMngr type of resource to the ClusterService group.
Link the resource to the csgnic resource that is present.
2 If SMTP notification is required:
a Modify the SmtpServer and SmtpRecipients attributes of the NotifierMngr
type of resource.
b Optionally, modify the ResourceOwner attribute of individual resources.
c Optionally, specify a GroupOwner e-mail address for each service group.
3 If SNMP notification is required:
a Modify the SnmpConsoles attribute of the NotifierMngr type of resource.
b Verify that the SNMPTrapPort attribute value matches the port configured
for the SNMP console. The default is port 162.
c Configure the SNMP console to receive VCS traps (described later in the
lesson).
4 Modify any other optional attributes of the NotifierMngr type of resource.
See the manual pages for notifier and hanotify for a complete description
of notification configuration options.
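A hedged command-line sketch of steps 1 and 2a follows; the resource name and
addresses are hypothetical, and the association-attribute syntax should be
confirmed against the hares manual page:

    haconf -makerw
    hares -add ntfr NotifierMngr ClusterService
    hares -modify ntfr SmtpServer "smtp.example.com"
    hares -modify ntfr SmtpRecipients admin@example.com SevereError
    hares -link ntfr csgnic
    hares -modify ntfr Enabled 1
    haconf -dump -makero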


The NotifierMngr resource type


The notifier daemon runs on only one system in the cluster, where it processes
messages from the local had daemon. If the notifier daemon fails on that
system, the NotifierMngr agent detects the failure and migrates the service group
containing the NotifierMngr resource to another system.
Because the message queue is replicated throughout the cluster, any system that is
a target for the service group has an identical queue. When the NotifierMngr
resource is brought online, had sends the queued messages to the notifier
daemon.


The example in the slide shows the configuration of a notifier resource for e-mail
notification.
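The slide itself is not reproduced here; a comparable main.cf snippet might look
like the following (server and recipient values are hypothetical):

    NotifierMngr ntfr (
        SmtpServer = "smtp.example.com"
        SmtpRecipients = { "admin@example.com" = SevereError }
        )
    ntfr requires csgnic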


See the Veritas Cluster Server Bundled Agents Reference Guide for detailed
information about the NotifierMngr agent.
Note: Before modifying resource attributes, ensure that you take the resource
offline and disable it. The notifier daemon must be stopped and
restarted with new parameters in order for changes to take effect.


The ResourceOwner attribute


You can set the ResourceOwner attribute to define an owner for a resource. After
the attribute is set to a valid e-mail address and notification is configured, an
e-mail message is sent to the defined recipient when one of the resource-related
events shown in the table in the slide occurs.
VCS also creates an entry in the log file in addition to sending an e-mail message.


ResourceOwner can be specified as an e-mail ID (dbas@company.com) or a
user account (gene). If a user account is specified, the e-mail address is
constructed as login@smtp_system, where smtp_system is the system that
was specified in the SmtpServer attribute of the NotifierMngr resource.


The GroupOwner attribute


You can set the GroupOwner attribute to define an owner for a service group.
After the attribute is set to a valid e-mail address and notification is configured,
an e-mail message is sent to the defined recipient when one of the group-related
events occurs, as shown in the table in the slide.


GroupOwner can be specified as an e-mail ID (dbas@company.com) or a user
account (gene). If a user account is specified, the e-mail address is constructed as
login@smtp_system, where smtp_system is the system that was specified
in the SmtpServer attribute of the NotifierMngr resource.
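For example, hedged sketches of setting these owner attributes from the command
line, using hypothetical resource, group, and address names:

    hares -modify webdbres ResourceOwner dbas@company.com
    hagrp -modify websg GroupOwner webadmins@company.com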


Additional recipients for notifications


Additional attributes enable broader specification of users to be notified of
resource and service group events. These attributes are configured at the
corresponding object level. For example, the GroupRecipients attribute is
configured within a service group definition.


These attributes are specified as a list of e-mail addresses along with a severity
level. The registered users get only those events that have a severity equal to or
greater than the severity requested. For example, if janedoe is configured in the
ClusterRecipients attribute with a severity level of Warning, she gets events of
severity Warning, Error, and SevereError, but does not get events with severity
Information. A cluster event, such as a cluster fault, which is Error level, would
be sent to janedoe.
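A hedged main.cf sketch of such an entry, using a hypothetical address (other
cluster attributes are omitted):

    cluster vcs (
        ClusterRecipients = { "janedoe@example.com" = Warning }
        )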


Configuring the SNMP console


To enable an SNMP management console to recognize VCS traps, you must load
the VCS MIB into the console. The textual MIB is located in the
/etc/VRTSvcs/snmp/vcs.mib file.
For HP OpenView Network Node Manager (NNM), you must merge the VCS
SNMP trap events contained in the /etc/VRTSvcs/snmp/vcs_trapd file. To
merge the VCS events, type:
xnmevents -merge vcs_trapd


SNMP traps sent by VCS are then displayed in the HP OpenView NNM SNMP
console.


Overview of triggers
Using triggers
VCS provides an additional method for notifying users of important events. When
VCS detects certain events, you can configure a trigger to notify an administrator
or perform other actions. You can use event triggers in place of, or in conjunction
with, notification.
Triggers are executable programs, batch files, shell or Perl scripts associated with
the predefined event types supported by VCS that are shown in the slide.


Triggers are configured by specifying one or more keys in the TriggersEnabled
attribute. Some keys are specific to service groups or resources.


The RESSTATECHANGE, RESRESTART, and RESFAULT keys apply to both
resources and service groups. When one of these keys is specified in
TriggersEnabled at the service group level, the trigger applies to each resource in
the service group.
Examples of some trigger keys include:
POSTOFFLINE: The service group went offline from a PARTIAL or ONLINE
state.
POSTONLINE: The service group went online from OFFLINE state.
RESFAULT: A resource faulted.
RESRESTART: A resource was restarted after a fault.
For a complete description of triggers, see the Veritas Cluster Server
Administrator's Guide.


Sample triggers
A set of sample trigger scripts is provided in
/opt/VRTSvcs/bin/sample_triggers. These scripts can be copied to
/opt/VRTSvcs/bin/triggers and modified to your specifications.


The sample scripts include comments that explain how the trigger is invoked and
provide guidance about modifying the samples to your specifications.


Location of triggers
Trigger executable programs, batch files, shell or Perl scripts reside in
/opt/VRTSvcs/bin/triggers by default.
You can change the location of triggers by specifying the TriggerPath attribute at
the service group or resource level. This attribute enables you to set up different
trigger programs for resources or service groups. In previous versions of VCS, the
same triggers applied to all resources or service groups in the cluster.


The value of the TriggerPath attribute is appended to /opt/VRTSvcs (also
referred to as VCS_HOME) to form a directory containing the trigger programs. In
the example shown in the slide, TriggerPath is set to bin/websg. Therefore, the
files executed when the PREONLINE key is specified for the websg service group
must be located in /opt/VRTSvcs/bin/websg.


The example portion of the main.cf file shows the PREONLINE trigger enabled
for websg on both s1 and s2, and the trigger path customized to map to
/opt/VRTSvcs/bin/websg.
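Because the slide is not reproduced here, the following is a hedged
reconstruction of such a main.cf excerpt, using the names described above:

    group websg (
        SystemList = { s1 = 0, s2 = 1 }
        TriggerPath = "bin/websg"
        TriggersEnabled @s1 = { PREONLINE }
        TriggersEnabled @s2 = { PREONLINE }
        )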


Example configuration
The slide shows the basic procedure for creating a trigger using a sample script
provided with VCS.
In this case, the resfault script is copied from the sample_triggers
directory and then modified to use the Linux /bin/mail program to send e-mail
to the modified recipients list.


The only changes required to make use of the sample resfault trigger in this
example are the following two lines:
@recipients=("student\@mgt.example.com");
. . .
"/bin/mail -s resfault $recipient < $msgfile";


After a trigger is modified, you must ensure the file is executable by root, and then
copy the script or program to each system in the cluster that can run the trigger.
Finally, modify the TriggersEnabled attribute to specify the key for each system
that can run the trigger.


Using multiple scripts for a trigger


VCS supports the use of multiple scripts for a single trigger. This enables you to
break the logic of a trigger into components, rather than having all trigger logic in
one monolithic script.
The number contained in the file name determines the order in which the scripts
are run, similar to legacy UNIX startup scripts in rc* directories.


To use multiple files for a single trigger, you must specify a custom path using the
TriggerPath attribute.



Labs and solutions for this lesson are located on the following pages.
Lab 10: Configuring notification, page A-103.
Lab 10: Configuring notification, page B-215.


Lesson 11: Handling Resource Faults



VCS response to resource faults


Failover decisions and critical resources
Critical resources define the basis for failover decisions made by VCS. When the
monitor entry point for a resource returns with an unexpected offline status, the
action taken by the VCS engine depends on whether the resource is critical.


By default, if a critical resource in a failover service group faults or is taken offline
as a result of another resource fault, VCS determines that the service group is
faulted. VCS then fails the service group over to another cluster system, as defined
by a set of service group attributes. The rules for selecting a failover target are
described in the Startup and Failover Policies lesson in the Veritas Cluster
Server for UNIX: Manage and Administer course.


The default failover behavior for a service group can be modified using one or
more optional service group attributes. Failover determination and behavior are
described throughout this lesson.



How VCS responds to resource faults by default


VCS responds in a specific and predictable manner to faults. When VCS detects a
resource failure, it performs the following actions:
1 Instructs the agent to execute the clean entry point for the failed resource to
ensure that the resource is completely offline.
The resource transitions to a FAULTED state.
2 Takes all resources in the path of the fault offline, starting from the faulted
resource up to the top of the dependency tree.
3 If an online critical resource is part of the path that was faulted or taken offline,
faults the service group and takes the group offline to prepare for failover.
If no online critical resources are affected, no further action occurs.
4 Attempts to start the service group on another system in the SystemList
attribute, according to the FailOverPolicy defined for that service group and the
relationships between multiple service groups.
Failover policies and the impact of service group interactions during failover
are discussed in detail in the Veritas Cluster Server for UNIX: Manage and
Administer course.
Note: The state of the group on the new system prior to failover must be
offline (not faulted).
5 If no other systems are available, the service group remains offline.
VCS also executes certain triggers and carries out notification while it performs
each task in response to resource faults. The role of notification and event triggers
in resource faults is explained in detail later in this lesson.

The impact of service group attributes on failover


Several service group attributes can be used to change the default behavior of VCS
while responding to resource faults.
ManageFaults
The ManageFaults attribute can be used to prevent VCS from taking any automatic
actions whenever a resource failure is detected. Essentially, ManageFaults
determines whether VCS or an administrator handles faults for a service group.


If ManageFaults is set to the default value of ALL, VCS manages faults by
executing the clean entry point for that resource to ensure that the resource is
completely offline, as shown previously.


If this attribute is set to NONE, VCS places the resource in an ADMIN_WAIT
state and waits for administrative intervention. This is often used for service
groups that manage database instances. You may need to leave the database in its
FAULTED state in order to perform problem analysis and recovery operations.
Note: This attribute is set at the service group level. This means that any resource
fault within that service group requires administrative intervention if the
ManageFaults attribute for the service group is set to NONE.
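For example, a minimal sketch using a hypothetical service group name:

    haconf -makerw
    hagrp -modify dbsg ManageFaults NONE
    haconf -dump -makero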


Frozen or TFrozen


These service group attributes are used to indicate that the service group is frozen
due to an administrative command. When a service group is frozen, all agent
online and offline actions are disabled.
If the service group is temporarily frozen using the hagrp -freeze group
command, the TFrozen attribute is set to 1.
If the service group is persistently frozen using the hagrp -freeze group
-persistent command, the Frozen attribute is set to 1.
When the service group is unfrozen using the hagrp -unfreeze group
[-persistent] command, the corresponding attribute is set back to the
default value of 0.
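For example, assuming a service group named websg:

    hagrp -freeze websg          # temporary freeze; TFrozen is set to 1
    hagrp -value websg TFrozen   # displays the current value
    hagrp -unfreeze websg        # TFrozen is set back to 0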


AutoFailOver


This attribute determines whether automatic failover takes place when a resource
or system faults. The default value of 1 indicates that the service group should be
failed over to other available systems if at all possible. However, if the attribute is
set to 0, no automatic failover is attempted for the service group, and the service
group is left in an OFFLINE | FAULTED state.


Practice: How VCS responds to a fault


The service group illustrated in the slide demonstrates how VCS responds to
faults. In each case (A, B, C, and so on), assume that the group is configured as
listed and that the service group is not frozen. As an exercise, determine what
occurs if the fourth resource in the group fails.


For example, in case A in the slide, the clean entry point is executed for resource 4
to ensure that it is offline, and resources 7 and 6 are taken offline because they
depend on 4. Because 4 is a critical resource, the rest of the resources are taken
offline from top to bottom, and the group is then failed over to another system.


Determining failover duration


Failover duration on a resource fault


When a resource failure occurs, application services may be disrupted until either
the resource is restarted on the same system or the application services migrate to
another system in the cluster. The time required to address the failure is a
combination of the time required to:
Detect the failure.
When traditional monitoring is configured, a resource failure is only detected
when the monitor entry point of that resource returns an offline status
unexpectedly. The resource type attributes used to tune the frequency of
monitoring a resource are MonitorInterval (default of 60 seconds) and
OfflineMonitorInterval (default of 300 seconds).
Fault the resource.
This is related to two factors:
How much tolerance you want VCS to have for false failure detections
For example, in an overloaded network environment, the NIC resource can
return an occasional failure even though there is nothing wrong with the
physical connection. You may want VCS to verify the failure a couple of
times before faulting the resource.
Whether or not you want to attempt a restart before failing over
For example, it may be much faster to restart a failed process on the same
system rather than to migrate the entire service group to another system.


Take the entire service group offline.


In general, the time required for a resource to be taken offline is dependent on
the type of resource and what the offline procedure includes. However, VCS
enables you to define the maximum time allowed for a normal offline
procedure before attempting to force the resource to be taken offline. The
resource type attributes related to this factor are OfflineTimeout and
CleanTimeout.
Select a failover target.
The time required for the VCS policy module to determine the target system is
negligible, less than one second in all cases, in comparison to the other factors.
Bring the service group online on another system in the cluster.
In most cases, in order to start an application service after a failure, you need to
carry out some recovery procedures. For example, a file system's metadata
needs to be checked if it was not unmounted properly, or a database needs to
carry out recovery procedures, such as applying the redo logs to recover from
sudden failures.
Take these considerations into account when you determine the amount of time
you want VCS to allow for an online process. The resource type attributes
related to bringing a service group online are OnlineTimeout,
OnlineWaitLimit, and OnlineRetryLimit.


For more information on attributes that affect failover, refer to the Veritas Cluster
Server Bundled Agents Reference Guide.


Adjusting monitoring
You can change some resource type attributes to facilitate failover testing. For
example, you can change the monitor interval to see the results of faults more
quickly. You can also adjust these attributes to affect how quickly an application
fails over when a fault occurs.
MonitorInterval
This is the duration (in seconds) between two consecutive monitor calls for an
online or transitioning resource.
The default is 60 seconds for most resource types.


OfflineMonitorInterval
This is the duration (in seconds) between two consecutive monitor calls for an
offline resource. If set to 0, offline resources are not monitored.
The default is 300 seconds for most resource types.
Refer to the Veritas Cluster Server Bundled Agents Reference Guide for the
applicable monitor interval defaults for specific resource types.
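For example, to speed up fault detection while testing (a sketch; restore the
defaults when testing is complete):

    haconf -makerw
    hatype -modify Process MonitorInterval 10
    hatype -modify Process OfflineMonitorInterval 60
    haconf -dump -makero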


Adjusting timeout values


The attributes MonitorTimeout, OnlineTimeout, and OfflineTimeout indicate the
maximum time (in seconds) within which the monitor, online, and offline entry
points must finish or be terminated. The default for the MonitorTimeout attribute
is 60 seconds. The defaults for the OnlineTimeout and OfflineTimeout attributes
are 300 seconds.


For best results, measure the length of time required to bring a resource online,
take it offline, and monitor it before modifying the defaults. Simply issue an online
or offline command to measure the time required for each action. To measure how
long it takes to monitor a resource, fault the resource, and then issue a probe, or
bring the resource online outside of VCS control and issue a probe.


Poll-based and intelligent monitoring


All VCS versions prior to 5.1 SP1 support only poll-based monitoring, as
described throughout this lesson.
Intelligent monitoring is supported for select agents, as described in detail in the
Intelligent Monitoring Framework lesson.


Intelligent monitoring provides substantial performance improvements over
traditional poll-based monitoring in environments with large numbers of
resources. Another advantage of intelligent monitoring is faster detection of
resource faults.


Controlling fault behavior


Type attributes related to resource faults
Although the failover capability of VCS helps to minimize the disruption of
application services when resources fail, the process of migrating a service to
another system can be time-consuming. In some cases, you may want to attempt to
restart a resource on the same system before failing it over to another system.


Whether a resource can be restarted depends on the application service:
The resource must be successfully cleared (taken offline) after failure.
The resource must not be a child resource with dependent parent resources that
must be restarted.


If you have determined that a resource can be restarted without impacting the
integrity of the application, you can potentially avoid service group failover by
configuring the RestartLimit, ConfInterval, and ToleranceLimit resource type
attributes.
For example, you can set the ToleranceLimit to a value greater than 0 to allow the
monitor entry point to run several times before a resource is determined to be
faulted. This is useful when the system is very busy and a service, such as a
database, is slow to respond.
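For example, a sketch that allows one restart of any Process resource within a
180-second confidence interval and tolerates two consecutive monitor failures:

    haconf -makerw
    hatype -modify Process RestartLimit 1
    hatype -modify Process ConfInterval 180
    hatype -modify Process ToleranceLimit 2
    haconf -dump -makero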


Restart example


This example illustrates how the RestartLimit and ConfInterval attributes can be
configured for modifying the behavior of VCS when a resource is faulted.


Setting RestartLimit = 1 and ConfInterval = 180 has this effect when a resource
faults:
1 The resource stops after running for 10 minutes.
2 The next monitor returns offline.
3 The ConfInterval counter is set to 0.
4 The agent checks the value of RestartLimit.
5 The resource is restarted because RestartLimit is set to 1, which allows one
restart within the ConfInterval period.
6 The next monitor returns online.
7 The ConfInterval counter is now 60; one monitor cycle has completed.
8 The resource stops again.
9 The next monitor returns offline.
10 The ConfInterval counter is now 120; two monitor cycles have completed.
11 The resource is not restarted because the RestartLimit counter is now 1 and the
ConfInterval counter is 120 (seconds). Because the resource has not been
online for the ConfInterval time of 180 seconds, it is not restarted.
12 VCS faults the resource.
If the resource had remained online for 180 seconds, the internal RestartLimit
counter would have been reset to 0.


Modifying resource type attributes


You can modify the resource type attributes to affect how an agent monitors all
resources of a given type. For example, agents usually check their online resources
every 60 seconds. You can modify that period so that the resource type is checked
more often. This is good for either testing situations or time-critical resources.
You can also change the period so that the resource type is checked less often. This
reduces the load on VCS overall, as well as on the individual systems, but
increases the time it takes to detect resource failures.
For example, to change the ToleranceLimit attribute for all NIC resources so that
the agent ignores occasional network problems, type:


hatype -modify NIC ToleranceLimit 2


Overriding resource type attributes


Resource type attributes apply to all resources of that type. You can override a
resource type attribute to change its value for a specific resource.
Use the options to hares shown on the slide or the GUI to override resource type
attributes.
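Because the slide is not reproduced here, a hedged sketch of the command
sequence follows, using a hypothetical resource name:

    haconf -makerw
    hares -override webproc MonitorInterval    # override the static type attribute
    hares -modify webproc MonitorInterval 10   # set the per-resource value
    haconf -dump -makero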


Note: The configuration must be in read-write mode in order to modify and
override resource type attributes. The changes are reflected in the
main.cf file only after you save the configuration using the
haconf -dump command.
Some predefined static resource type attributes (those resource type attributes that
do not appear in types.cf unless their value is changed, such as
MonitorInterval) and all static attributes that are not predefined (static attributes
that are defined in the type definition file) can be overridden. For a detailed list of
predefined static attributes that can be overridden, refer to the Veritas Cluster
Server User's Guide.


Recovering from resource faults


When a resource failure is detected, the resource is put into a FAULTED or an
ADMIN_WAIT state depending on the cluster configuration. In either case,
administrative intervention is required to bring the resource status back to normal.
Recovering a resource from a faulted state
A critical resource in FAULTED state cannot be brought online on a system. When
a critical resource is FAULTED on a system, the service group status also changes
to FAULTED on that system, and that system can no longer be considered as an
available target during a service group failover.


You have to clear the FAULTED status of a nonpersistent resource manually.
Before clearing the FAULTED status, ensure that the resource is completely
offline and that the fault is fixed outside of VCS.


Note: You can also run hagrp -clear group [-sys system] to clear
all FAULTED resources in a service group. However, you have to ensure
that all of the FAULTED resources are completely offline and the faults are
fixed on all the corresponding systems before running this command.
The FAULTED status of a resource is cleared when the monitor returns an online
status for that resource. Note that offline resources are monitored according to the
value of OfflineMonitorInterval, which is 300 seconds (five minutes) by default.
To avoid waiting for the periodic monitoring, you can initiate the monitoring of the
resource manually by probing the resource.
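For example, using a hypothetical resource name:

    hares -clear webip -sys s1    # clear the FAULTED status on s1
    hares -probe webip -sys s1    # run an immediate monitor cycle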


Recovering a resource from an ADMIN_WAIT state


If the ManageFaults attribute of a service group is set to NONE, VCS does not take
any automatic action when it detects a resource fault. VCS places the resource into
the ADMIN_WAIT state and waits for administrative intervention. There are two
primary reasons to configure VCS in this way:
You want to analyze and recover from the failure manually with the aim of
continuing operation on the same system.
In this case, fix the fault and bring the resource back to the state it was in
before the failure (online state) manually outside of VCS. After the resource is
back online, you can inform VCS to take the resource out of ADMIN_WAIT
state and put it back into ONLINE state.
Notes:
If the next monitor cycle does not report an online status, the resource is
placed back into the ADMIN_WAIT state. If the next monitor cycle reports
an online status, VCS continues normal operation without any failover.
If the resource is restarted outside of VCS and a monitor cycle runs before
you can probe it, the resource returns to an online state automatically.
You cannot clear the ADMIN_WAIT state from the GUI.
You want to collect debugging information before any action is taken.
The intention in this case is to prevent VCS intervention until the failure is
analyzed. You can then let VCS continue with the normal failover process.
When you clear the ADMIN_WAIT state, the clean entry point runs and the
resource changes status to OFFLINE | FAULTED. VCS then continues with
the service group failover, depending on the cluster configuration.
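For example, hedged sketches of the two recovery paths, using a hypothetical
group name:

    hagrp -clearadminwait dbsg -sys s1          # resource fixed; keep it online
    hagrp -clearadminwait -fault dbsg -sys s1   # run clean and continue failover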


Fault notification and event handling


Fault notification


As a response to a resource fault, VCS carries out tasks to take resources or service
groups offline and to bring them back online elsewhere in the cluster. While
carrying out these tasks, VCS generates certain messages with a variety of severity
levels and the VCS engine passes these messages to the notifier daemon.
Whether these messages are used for SNMP traps or SMTP notification depends
on how the notification component of VCS is configured, as described in the
Configuring Notification lesson.


The following events are examples that result in a notification message being
generated:
A resource becomes offline unexpectedly; that is, a resource is faulted.
VCS cannot determine the state of a resource.
A failover service group is online on more than one system.
The service group is brought online or taken offline successfully.
The service group has faulted on all nodes where the group could be brought
online, and there are no nodes to which the group can fail over.


Extended event handling using triggers


You can use triggers to customize how VCS responds to events that occur in the
cluster.
For example, you could use the ResAdminWait trigger to automate the task of
taking diagnostics of the application as part of the failover and recovery process. If
you set ManageFaults to NONE for a service group, VCS places faulted resources
into the ADMIN_WAIT state. If the resadminwait trigger is configured, VCS
runs the script when a resource enters ADMIN_WAIT. Within the trigger script,
you can run a diagnostic tool and log information about the resource, and then take
a desired action, such as clearing the state and faulting the resource:


hagrp -clearadminwait -fault group -sys system


The role of triggers in resource faults
As a response to a resource fault, VCS carries out tasks to take resources or service
groups offline and to bring them back online elsewhere in the cluster. While these
tasks are being carried out, certain events take place. If corresponding event
triggers are configured, VCS executes the triggers, as shown in the slide.
Triggers are placed in the /opt/VRTSvcs/bin/triggers directory by
default. Sample trigger scripts are provided in
/opt/VRTSvcs/bin/sample_triggers. Trigger configuration is described
in the Configuring Notification lesson and the Veritas Cluster Server
User's Guide.

Labs and solutions for this lesson are located on the following pages.
Lab 11: Configuring resource fault behavior, page A-113.
Lab 11: Configuring resource fault behavior, page B-239.


Lesson 12: Intelligent Monitoring Framework


IMF overview
Drawbacks of traditional monitoring


The Intelligent Monitoring Framework was created to meet customer demands for
supporting increasing numbers of highly available services. Some environments
support large numbers of resources (hundreds of mount points, for example)
running on already loaded systems. With traditional monitoring, VCS agents poll
each resource every 60 seconds, by default, which can add substantially to the
system load in large-scale environments. The periodic nature of traditional
monitoring, coupled with the requirement to run the monitor process for each
resource, means that the state of a resource is unknown between monitor cycles
and that additional system resources are consumed.


Intelligent monitoring framework (IMF)


VCS 5.1 SP1 introduced the Intelligent Monitoring Framework as a complement
to poll-based monitoring.
IMF notification is implemented by the Asynchronous Monitoring Framework
(AMF) module that hooks into system calls and other kernel interfaces of the
operating system to get notifications on various events such as when a process
starts or dies, or when a block device gets mounted or unmounted from a mount
point.


IMF reduces the VCS CPU footprint and system load, especially in large-scale
clusters with many resources. IMF also provides faster failure detection, and hence
faster failover, of resources, improving availability.


IMF event-driven notification


The intelligent monitoring framework is a notification-based mechanism that
minimizes load on the system and provides immediate notification of faults. IMF
is implemented through the asynchronous monitoring framework (AMF) kernel
module.
When IMF monitoring is configured for a resource, the agent registers the resource
with the AMF module. The AMF module receives event notifications from the
operating system when a registered resource changes states.


The AMF module passes the notification to the agent for handling, as described
later in the lesson.


Agents with IMF support


When IMF was introduced in VCS 5.1 SP1, the agents listed on the right side of
the slide supported IMF monitoring.
In VCS 6.0, intelligent monitoring capability has been added for the DB2, Sybase,
Zone, and WPAR agents.


In addition, you can create custom agents that use IMF monitoring by linking the
AMF plug-ins with the script agent and creating an XML file to enable registration
with the AMF module. For more information about using IMF monitoring for
custom agents, see the VCS Agent Developer's Guide.


IMF configuration
IMF modes
The Mode key of the IMF attribute determines whether IMF or traditional
monitoring is configured for a resource. Accepted values are:
0: Does not perform intelligent resource monitoring
1: Performs intelligent resource monitoring for offline resources and performs
poll-based monitoring for online resources
2: Performs intelligent resource monitoring for online resources and performs
poll-based monitoring for offline resources
3: Performs intelligent resource monitoring for both online and offline resources


Other IMF attribute keys


The MonitorFreq key determines how often a resource is monitored by traditional
polling. When set to an integer greater than 0, the value of MonitorFreq is
multiplied by the value of the MonitorInterval and OfflineMonitorInterval
attributes to determine the frequency of running the poll-based monitor entry point
for online and offline resources, respectively.
If MonitorFreq is set to 0, the monitor entry point is not run at regular intervals; it
runs only when the agent gets an event notification.


The RegisterRetryLimit key defines the number of times the agent tries to register
a resource with the AMF module. If the resource cannot be registered within the
specified number of attempts, intelligent monitoring is disabled and the resource is
monitored using traditional poll-based monitoring.


Combining IMF and poll-based monitoring


The configuration snippet in the slide shows a Process resource with IMF enabled
for monitoring online resources and traditional poll-based monitoring for offline
resources. For Process resources that are offline, the agent runs the monitor entry
point periodically, as specified by the OfflineMonitorInterval attribute.
With MonitorFreq set to 5, the agent runs the monitor entry point periodically,
calculated by multiplying the value of MonitorFreq (5) by MonitorInterval (60
seconds), which results in a poll-based monitor occurring every five minutes for an
online resource.


Poll-based monitoring is performed by checking the process table for the process
IDs listed in the PidFile.


Combining IMF and second level monitoring


The configuration snippet in the slide shows an Oracle resource with IMF enabled
for online and offline monitoring.
The MonitorFreq attribute is set to 0 so polling-based monitoring does not occur at
all, and the LevelTwoMonitorFreq attribute is set to 5, enabling second level
monitoring to occur only every five minutes.
An example of a second level monitor procedure could be writing a timestamp to a
database table.


The Oracle resource and second level monitoring are described in detail in the
"Configuring Databases" lesson.


Enabling IMF for all resources of a type


IMF is enabled by default for all agents that support intelligent monitoring. To
manually enable IMF for a particular resource type:
1 Make the VCS configuration writable.
2 Set the Mode key for the IMF attribute of the resource type to a value other
than 0, depending on your monitoring requirements.
3 Change the MonitorFreq and RegisterRetryLimit keys, as required.
4 Save and close the configuration.


Note: This is only required if IMF has been disabled for a resource type.
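A hedged sketch of steps 1 through 4 for the Process type follows; the -update
option syntax for key-value attributes should be confirmed against the hatype
manual page:

    haconf -makerw
    hatype -modify Process IMF -update Mode 3 MonitorFreq 5 RegisterRetryLimit 3
    haconf -dump -makero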


Enabling and disabling IMF monitoring


In VCS 6.0, IMF monitoring is enabled by default. The haimfconfig command
is provided to simplify disabling and enabling IMF.
If you disable IMF, the AMF module is unloaded and IMF is unconfigured for all
IMF-aware agents. The resources are then monitored using traditional poll-based
monitoring.
When you enable IMF, the AMF kernel driver is loaded, all IMF-capable agents
are configured, and the agents are restarted so they re-register with AMF. The
resources of IMF-aware agents are then monitored by IMF.


If you run haimfconfig enable amf, the AMF kernel module is loaded,
but agents are not configured.


You can enable or disable IMF for a specific set of agents using the agent
option. See the haimfconfig man page and the Veritas Cluster Server
Administrator's Guide for details about changing the IMF configuration.
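For example, a hedged sketch (confirm the option spelling for your release
against the haimfconfig manual page):

    haimfconfig -disable    # unload AMF; revert to poll-based monitoring
    haimfconfig -enable     # load AMF and re-register IMF-aware agents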


Faults and failover with intelligent monitoring


IMF-based monitoring process


The slide shows how IMF-based monitoring works.
1 IMF is enabled for a resource.
2 The corresponding VCS agent waits for the resource to report the same steady
state, whether online or offline, for two consecutive monitor cycles.
3 The VCS agent registers the resource for IMF-based monitoring.
4 The agent then registers itself for receiving specific custom or
operating-system-specific event notifications.
5 If an event occurs, the agent detects the affected resource and executes a
monitor cycle for that resource.
6 The monitor cycle determines the resource status and communicates it to the
VCS engine, HAD.
a If the resource state is offline, the VCS engine may initiate a failover,
depending on the configuration.
b If the resource state remains the same, the agent moves to a wait state and
waits for the next event to occur.


Failover decisions and critical resources


The same methodology is applied for both poll-based and intelligent monitoring
when determining actions to be taken when a resource fault is detected.
The faulted resource must have the Critical attribute set for failover to be initiated.
Other resource attributes, such as RestartLimit, ConfInterval, and ToleranceLimit
also affect the actions that are taken if a resource faults.
Likewise, service group attributes, such as Frozen, ManageFaults, and
AutoFailOver affect the actions HAD takes if a critical resource faults.
Notification is also the same for IMF-managed resources.


See the Handling Resource Faults lesson for details about how attributes affect
how VCS responds to resource faults.


Failover duration when a resource faults


Failover duration for a service group when an IMF-monitored resource faults is
determined in a similar fashion to the process described in the Handling Resource
Faults lesson.
The key difference is the time required to detect a resource fault. Depending on
when a resource faults in the traditional poll-based model, the detection of the fault
can take up to 60 seconds.


For IMF-monitored resources, a fault is detected and the agent probes the resource
immediately to determine the resource state.
When a process dies or hangs, the operating system generates an alert. The agent is
registered to receive such alerts from the operating system, through the AMF
kernel module. The agent then probes the resource to determine the state and
notifies HAD if the resource is faulted. HAD can then take action within seconds
of a resource fault, rather than minutes, as with poll-based monitoring.


Recovering resources from faulted or admin wait states


When a resource failure is detected, the resource is put into a FAULTED or an
ADMIN_WAIT state depending on the cluster configuration. In either case,
administrative intervention is required to bring the resource status back to normal.


The methods for clearing IMF-managed resources are the same as for resources
managed by agents using traditional poll-based monitoring, as shown in the slide.


Labs and solutions for this lesson are located on the following pages.
Lab 12: IMF and AMF, page A-131.
Lab 12: IMF and AMF, page B-285.


Lesson 13

Cluster Communications


VCS communications review


VCS maintains the cluster state by tracking the status of all resources and service
groups in the cluster. The state is communicated between had processes on each
cluster system by way of the atomic broadcast capability of Group Membership
Services/Atomic Broadcast (GAB). HAD is a replicated state machine, which uses
the GAB atomic broadcast mechanism to ensure that all systems within the cluster
are immediately notified of changes in resource status, cluster membership, and
configuration.


Atomic means that all systems receive updates, or all systems are rolled back to the
previous state, much like a database atomic commit. If a failure occurs while
transmitting status changes, GAB's atomicity ensures that, upon recovery, all
systems have the same information regarding the status of any monitored resource
in the cluster.
VCS on-node communications
VCS uses agents to manage resources within the cluster. Agents perform
resource-specific tasks on behalf of had, such as online, offline, and monitoring
actions. These actions can be initiated by an administrator issuing directives using
the VCS graphical or command-line interfaces, or by other events that require had
to take some action. Agents also report resource status back to had. Agents do not
communicate with one another, but only with had.
The had processes on each cluster system communicate cluster status information
over the cluster interconnect.


VCS inter-node communications


In order to replicate the state of the cluster to all cluster systems, VCS must
determine which systems are participating in the cluster membership. This is
accomplished by the group membership services mechanism of GAB.
Cluster membership refers to all systems configured with the same cluster ID and
interconnected by a pair of redundant Ethernet LLT links. Under normal operation,
all systems configured as part of the cluster during VCS installation actively
participate in cluster communications.
Systems join a cluster by issuing a cluster join message during GAB startup.
Cluster membership is maintained by heartbeats. Heartbeats are signals sent
periodically from one system to another to determine system state. Heartbeats are
transmitted by the LLT protocol.
VCS communications stack summary


The hierarchy of VCS mechanisms that participate in maintaining and
communicating cluster membership and status information is shown in the slide
diagram.
Agents communicate with had.
The had processes on each system communicate status information by way of GAB.
GAB determines cluster membership by monitoring heartbeats transmitted from each system over LLT.


Cluster interconnect specifications


LLT can be configured to designate links as high-priority or low-priority links.
High-priority links are used for cluster communications (GAB) as well as
heartbeats. Low-priority links carry only heartbeats unless there is a failure of all
configured high-priority links. At this time, LLT switches cluster communications
to the first available low-priority link. Traffic reverts to high-priority links as soon
as they are available.


GAB membership notation


To display the cluster membership status, type gabconfig on each system. For
example:
gabconfig -a

The first example in the slide shows:


Port a, GAB membership, has four nodes: 0, 1, 21, and 22
Port b, fencing membership, has four nodes: 0, 1, 21, and 22
Port h, VCS membership, has four nodes: 0, 1, 21, and 22


Note: The port a, port b, and port h generation numbers change each time the
membership changes.



The gabconfig output uses a positional notation to indicate which systems are
members of the cluster. Only the last digit of each node number is displayed,
positioned relative to semicolons that mark the tens digits.
The second example shows gabconfig output for a cluster with 22 nodes.


Cluster interconnect configuration


The VCS installation utility sets up all cluster interconnect configuration files and
starts LLT and GAB. You may never need to modify communication configuration
files. Understanding how these files work together to define the cluster
communication mechanism helps you understand VCS behavior.
LLT configuration files
The LLT configuration files are located in the /etc directory.


The llttab file


The llttab file is the primary LLT configuration file and is used to:
Set the cluster ID number.
Set system ID numbers.
Specify the network device names used for the cluster interconnect.
Modify LLT behavior, such as heartbeat frequency.
Note: Ensure that there is only one set-node line in the llttab file.
This is the minimum recommended set of directives required to configure LLT.
The basic format of the file is an LLT configuration directive followed by a value.
These directives and their values are described in more detail in the next sections.
For a complete list of directives, see the sample_llttab file in the
/opt/VRTS/llt directory and the llttab manual page.
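As an illustration only, a minimal llttab for a node named s1 in cluster 10 with
interconnect interfaces eth1 and eth2 might contain the following; all values are
assumptions, and the device fields of the link directive vary by platform and
release (see the sample_llttab file mentioned above):
set-node s1
set-cluster 10
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -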


The llthosts file


The llthosts file associates a system name with a VCS cluster node ID
number. This file must be present in the /etc directory on every system in the
cluster. It must contain a line with the unique name and node ID for each system in
the cluster. The format is:


node_number name
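For example, a two-node cluster with systems s1 and s2 (names illustrative) would
use:
0 s1
1 s2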


The critical requirements for llthosts entries are:


Node numbers must be unique. If duplicate node IDs are detected on the
Ethernet LLT cluster interconnect, LLT in VCS 4.0 and later is stopped on the
joining node. In VCS versions before 4.0, the joining node panics.
The system name must match the name in llttab if a name is configured for
the set-node directive (rather than a number).
System names must match those in main.cf, or VCS cannot start.
Note: The system (node) name does not need to be the UNIX host name found
using the hostname command. However, Symantec recommends that
you keep the names the same to simplify administration, as described in the
next section.
See the llthosts manual page for a complete description of the file.


How node and cluster numbers are specified


A unique number must be assigned to each system in a cluster using the
set-node directive.
Each system in the cluster must have a unique llttab file, which has a unique
value for set-node, which can be one of the following:
An integer in the range of 0 through 63 (64 systems per cluster maximum)
A system name matching an entry in /etc/llthosts
The set-cluster directive


LLT uses the set-cluster directive to assign a unique number to each cluster.
A cluster ID is set during installation and can be validated as a unique ID among
all clusters sharing a network for the cluster interconnect.
Note: You can use the same cluster interconnect network infrastructure for
multiple clusters. The llttab file must specify the appropriate cluster ID
to ensure that there are no conflicting node IDs.
If you bypass the installer mechanisms for ensuring the cluster ID is unique and
LLT detects multiple systems with the same node ID and cluster ID on a private
network, the LLT interface is disabled on the node that is starting up. This prevents
a possible split-brain condition, where a service group might be brought online on
the two systems with the same node ID.


The sysname file


The sysname file is an optional LLT configuration file that is configured
automatically during VCS installation. This file is used to store the short-form of
the system (node) name.
The purpose of the sysname file is to enable specification of a VCS node name
other than the UNIX host name. This may be desirable, for example, when the
UNIX host names are long and you want VCS to use shorter names.


Note: If the sysname file contains a different name from the
llttab/llthosts/main.cf files, this phantom system is added to the cluster
upon cluster startup.


The sysname file can be specified for the set-node directive in the llttab
file. In this case, the llttab file can be identical on every node, which may
simplify reconfiguring the cluster interconnect in some situations.
See the sysname manual page for a complete description of the file.
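For illustration, assuming the conventional location of the file at
/etc/VRTSvcs/conf/sysname (the path is an assumption to verify on your
platform), it contains only the short node name:
# cat /etc/VRTSvcs/conf/sysname
s1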


The GAB configuration file


GAB is configured with the /etc/gabtab file. This file contains one line that is
used to start GAB. For example:
/sbin/gabconfig -c -n 4

This example starts GAB and specifies that four systems are required to be running
GAB to start within the cluster. The -n option should always be set to the total
number of systems in the cluster.
A sample gabtab file is included in /opt/VRTSgab.


Note: Other gabconfig options are discussed later in this lesson. See the
gabconfig manual page for a complete description of the file.


Joining the cluster membership


GAB and LLT are started automatically when a system starts up. HAD can only
start after GAB membership has been established among all cluster systems. The
mechanism that ensures that all cluster systems are visible on the cluster
interconnect is GAB seeding.
Seeding during startup


Seeding is a mechanism to ensure that systems in a cluster are able to
communicate before VCS can start. Only systems that have been seeded can
participate in a cluster. Seeding is also used to define how many systems must be
online and communicating before a cluster is formed.


By default, a system is not seeded when it boots. This prevents VCS from starting,
which prevents applications (service groups) from starting. If the system cannot
communicate with the cluster, it cannot be seeded.
Seeding is a function of GAB and is performed automatically or manually,
depending on how GAB is configured. GAB seeds a system automatically in one
of two ways:
When an unseeded system communicates with a seeded system
When all systems in the cluster are unseeded and able to communicate with
each other
The number of systems that must be seeded before VCS is started on any system is
also determined by the GAB configuration.


When the cluster is seeded, each node is listed in the port a membership displayed
by gabconfig -a. In the following example, all four systems (nodes 0, 1, 2,
and 3) are seeded, as shown by port a membership:
# gabconfig -a
GAB Port Memberships
=======================================================
Port a gen a356e003 membership 0123

LLT, GAB, and VCS startup files


These startup files are placed on the system when VCS is installed.
AIX
/etc/rc.d/rc2.d/S70llt    Checks for /etc/llttab and runs /sbin/lltconfig -c to start LLT
/etc/rc.d/rc2.d/S92gab    Calls /etc/gabtab
/etc/rc.d/rc2.d/S99vcs    Runs /opt/VRTSvcs/bin/hastart

HP-UX
/sbin/rc2.d/S680llt       Checks for /etc/llttab and runs /sbin/lltconfig -c to start LLT
/sbin/rc2.d/S920gab       Calls /etc/gabtab
/sbin/rc2.d/S990vcs       Runs /opt/VRTSvcs/bin/hastart

Linux
/etc/rc[2345].d/llt       Checks for /etc/llttab and runs /sbin/lltconfig -c to start LLT
/etc/rcX.d/gab            Calls /etc/gabtab
/etc/rcX.d/vcs            Runs /opt/VRTSvcs/bin/hastart

Solaris 10
/lib/svc/method/llt       Checks for /etc/llttab and runs /sbin/lltconfig -c to start LLT
/lib/svc/method/gab       Calls /etc/gabtab
/lib/svc/method/vcs       Runs /opt/VRTSvcs/bin/hastart


Probing resources during normal startup


During initial startup, VCS autodisables a service group until all its resources are
probed on all systems in the SystemList. When a service group is autodisabled,
VCS sets the AutoDisabled attribute to 1 (true), which prevents the service group
from starting on any system. This protects against a situation where enough
systems are running LLT and GAB to seed the cluster, but not all systems have
HAD running.


In this case, port a membership is complete, but port h is not. VCS cannot detect
whether a service is running on a system where HAD is not running. Rather than
allowing a potential concurrency violation to occur, VCS prevents the service
group from starting anywhere until all resources are probed on all systems.


After all resources are probed on all systems, a service group can come online by
bringing offline resources online. If the resources are already online, as in the case
where HAD has been stopped with the hastop -all -force option, the
resources are marked as online.


System and cluster interconnect failures


VCS response to system failure
The example cluster used throughout most of this section contains three systems,
s1, s2, and s3, each of which can run any of the three service groups, A, B, and C.
The abbreviated system and service group names are used to simplify the
diagrams.
In this example, there are two Ethernet LLT links for the cluster interconnect.


Prior to any failures, systems s1, s2, and s3 are part of the regular membership of
cluster number 1. When the s3 system fails, it is no longer part of the cluster
membership. Service group C fails over and starts up on either s1 or s2, according
to the SystemList and FailOverPolicy values.


Failover duration on a system failure


When a system faults, application services that were running on that system are
disrupted until the services are started up on another system in the cluster. The time
required to address a system fault is a combination of the time required to:
Detect the system failure.
A system is determined to be faulted according to these default timeout
periods:
LLT timeout: If LLT on a running system does not receive a heartbeat from
a system for 16 seconds, LLT notifies GAB of a heartbeat failure.
GAB stable timeout: GAB determines that a membership change is
occurring, and after five seconds, GAB delivers the membership change to
HAD.
Select a failover target.
The time required for the VCS policy module to determine the target system is
negligible, less than one second in all cases, in comparison to the other factors.
Bring the service group online on another system in the cluster.
As described in an earlier lesson, the time required for the application service
to start up is a key factor in determining the total failover time.
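As a rough worked example using the default values above: detection takes
approximately 16 seconds (LLT heartbeat timeout) plus 5 seconds (GAB stable
timeout), or about 21 seconds. If the application service then takes, say, 90 seconds
to start (a purely illustrative figure), the total failover duration is roughly
21 + 90 = 111 seconds.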


Manual seeding
You can override the seed values in the gabtab file and manually force GAB to
seed a system using the gabconfig command. This is useful when one of the
systems in the cluster is out of service and you want to start VCS on the remaining
systems.
To seed the cluster if GAB is already running, use gabconfig with the -x
option to override the -n value set in the gabtab file. For example, type:
gabconfig -x
If GAB is not already started, you can start and force GAB to seed using -c and
-x options to gabconfig:


gabconfig -c -x

CAUTION: Only manually seed the cluster when you are sure that no other
systems have GAB seeded. In clusters that do not use I/O fencing,
you can potentially create a split brain condition by using
gabconfig improperly.

After you have started GAB on one system, start GAB on other systems using
gabconfig with only the -c option. You do not need to force GAB to start with
the -x option on other systems. When GAB starts on the other systems, it
determines that GAB is already seeded and starts up.


Single LLT link failure


In the case where a node has only one functional LLT link, the node is a member of
the regular membership and the jeopardy membership. Being in a regular
membership and jeopardy membership at the same time changes only the failover
behavior on system fault. All other cluster functions remain unchanged. This means
that failover due to a resource fault or switchover of service groups at operator
request is unaffected.


The only change is that other systems are prevented from starting service groups if
the system faults. VCS continues to operate as a single cluster when at least one
network channel exists between the systems.


In the example shown in the diagram where one LLT link fails:
A jeopardy membership is formed that includes just system s3.
System s3 is also a member of the regular cluster membership with systems s1
and s2.
Service groups A, B, and C continue to run and all other cluster functions
remain unaffected.
Failover due to a resource fault or an operator request to switch a service group
is unaffected.
If system s3 now faults or its last LLT link is lost, service group C is not started
on systems s1 or s2.


Interconnect failure and potential split brain condition


When both LLT links fail simultaneously:
The cluster partitions into two separate clusters. No jeopardy membership is
formed and no service groups are autodisabled.
Each cluster determines that the other systems are down and tries to start the
service groups.


If an application starts on multiple systems and can gain control of what are
normally exclusive resources, such as disks in a shared storage device, split brain
condition results and data can be corrupted.


Interconnect failures with a low-priority public link


LLT can be configured to use a low-priority network link as a backup to normal
heartbeat channels. Low-priority links are typically configured on the public
network or administrative network.
In normal operation, the low-priority link carries only heartbeat traffic for cluster
membership and link state maintenance. The frequency of heartbeats is reduced by
half to minimize network overhead.


When the low-priority link is the only remaining LLT link, LLT switches all
cluster status traffic over the link. Upon repair of any configured link, LLT
switches cluster status traffic back to the high-priority link.


Notes:
Nodes must be on the same public network segment in order to configure
low-priority links. LLT is a non-routable protocol.
You can have up to eight LLT links total, which can be a combination of low-
and high-priority links. For example, if you have three high-priority links, you
have the same progression to jeopardy membership. The difference is that all
three links are used for regular heartbeats and cluster status information.


Changing the interconnect configuration


Example reconfiguration scenarios
You may never need to perform any manual configuration of the cluster
interconnect because the VCS installation utility sets up the interconnect based on
the information you provide about the cluster.


However, certain configuration tasks require you to modify VCS communication
configuration files, as shown in the slide.


Manually modifying the interconnect


The procedure shown in the diagram can be used for any type of change to the
VCS communications configuration. The first task refers to the procedure
provided in the Offline Configuration lesson, and includes saving and closing
the cluster configuration before backing up and editing files.
Although some types of modifications do not require you to stop both GAB and
LLT, using this procedure ensures that any type of change you make takes effect.


For example, if you added a system to a running cluster, you can change the value
of -n in the gabtab file without having to restart GAB. However, if you added
the -j option to change the recovery behavior, you must either restart GAB or
execute the command in the gabtab file manually for the change to take effect.

278 1322

Similarly, if you add a host entry to llthosts, you do not need to restart LLT.
However, if you change llttab, or you change a host name in llthosts, you
must stop and restart LLT, and, therefore, GAB.
Following this procedure ensures that any type of change takes effect. You can also
use the scripts in the rc*.d directories to stop and start services, as shown in the
sketch that follows.
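As a sketch only, the full restart sequence on one node might look like the
following, assuming Linux init scripts named llt and gab as installed by VCS
(paths and script names vary by platform, as shown in the startup file table earlier
in this lesson):
hastop -local
/etc/init.d/gab stop
/etc/init.d/llt stop
(back up and edit /etc/llttab, /etc/llthosts, or /etc/gabtab as needed)
/etc/init.d/llt start
/etc/init.d/gab start
hastart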


Example LLT link specification


You can add links to the LLT configuration as additional layers of redundancy for
the cluster interconnect. You may want an additional interconnect link for:
VCS for heartbeat redundancy
Storage Foundation for Oracle RAC for additional bandwidth


To add an Ethernet link to the cluster interconnect:


1 Cable the link on all systems.
2 Use the process on the previous page to modify the llttab file on each
system to add the new link directive.
To add a low-priority public network link, add a link-lowpri directive using
the same syntax as the link directive, as shown in the llttab file example in the
slide.
VCS uses the low-priority link only for heartbeats (at half the normal rate), unless
it is the only remaining link in the cluster interconnect.
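For illustration, appending a low-priority public link to the llttab sketch shown
earlier might look like this; the interface name eth0 is an assumption:
link-lowpri eth0 eth0 - ether - -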


Labs and solutions for this lesson are located on the following pages.
Lab 13: Cluster communications, page A-137.
Lab 13: Cluster communications, page B-299.

Lesson 14

Protecting Data Using SCSI 3-Based Fencing



Data protection requirements


In order to understand how VCS protects shared data in a high availability
environment, it helps to see the problem that needs to be solved: how a cluster
goes from normal operation to responding to various failures.
Split brain condition
A network partition can lead to a split brain condition, an issue faced by all
cluster implementations. This problem occurs when the HA software cannot
distinguish between a system failure and an interconnect failure. The symptoms
look identical.


For example, in the diagram, if the system on the right fails, it stops sending
heartbeats over the private interconnect. The left node then takes corrective action.
Failure of the cluster interconnect presents identical symptoms. In this case, both
nodes determine that their peer has departed and attempt to take corrective action.
This can result in data corruption if both nodes are able to take control of storage in
an uncoordinated manner.
Other scenarios can cause this situation. If a system is so busy that it appears to be
hung, to another system in the cluster it would seem to have failed. The second
system would then take the corrective action of starting the services of the hung
system. This can also happen on systems where the hardware supports a break and
resume function. If the system is dropped to command-prompt level with a break
and subsequently resumed, the system can appear to have failed. The cluster is
reformed and then the system recovers and begins writing to shared storage again.


I/O fencing
The key to protecting data in a shared storage cluster environment is to guarantee
that there is always a single consistent view of cluster membership. In other words,
when one or more systems stop sending heartbeats, the HA software must
determine which systems can continue to participate in the cluster membership and
how to handle the other systems.


VCS uses a mechanism called I/O fencing to guarantee data protection. I/O
fencing uses SCSI-3 persistent reservations (PR) to fence off data drives to prevent
the data loss consequences of a split-brain condition. Fencing ensures that data
protection is the highest priority concern, stopping running systems when
necessary so that systems cannot start services when a split-brain condition is
encountered, as described in detail in this lesson.


SCSI-3 PR supports multiple nodes accessing a device while at the same time
blocking access to other nodes. Persistent reservations are persistent across SCSI
bus resets and also support multiple paths from a host to a disk.


I/O fencing concepts


I/O fencing components
VCS uses fencing to allow write access to members of the active cluster and to
block access to nonmembers.
I/O fencing in VCS consists of several components. The physical components are
coordinator disks and data disks. Each has a unique purpose and uses different
physical disk devices.


Coordinator disks
The coordinator disks act as a global lock mechanism used by the fencing driver to
determine which nodes are currently registered in the cluster. This registration is
represented by a unique key associated with each node that is written to the
coordinator disks. In order for a node to access a data disk, that node must have a
key registered on coordinator disks.
When system or interconnect failures occur, the coordinator disks enable the
fencing driver to ensure that only one cluster survives, as described in the I/O
Fencing Operations section.


Data disks
Data disks are standard disk devices used for shared data storage. These can be
physical disks or RAID logical units (LUNs). These disks must support SCSI-3
PR. Data disks are incorporated into standard VM disk groups. In operation,
Volume Manager is responsible for fencing data disks on a disk group basis.


Disks added to a disk group are automatically fenced, as are new paths to a device
as they are discovered.


SCSI 3 registration keys for coordinator disks


SCSI 3 registration keys are used by the fencing driver as the locking mechanism
for the coordinator disks.
The registration keys are based on the LLT node number. Each key is eight
characters (bytes), specified as follows:
The left-most two bytes are the ASCII VF characters, indicating the keys are
written by Veritas fencing.
The next four bytes are the hexadecimal value of the cluster ID number.
The last two bytes are the hexadecimal value of the node ID number.


For example, in a cluster with an ID of 8, node 0 uses key VF000800, node 1 uses
key VF000801, node 2 is VF000802, and so on. For simplicity, these are shown
as 0 and 1 in subsequent diagrams.
Note: The registration key is not actually written to disk, but is stored in the drive
electronics or RAID controller.


SCSI 3 registration keys and reservations for data disks


Registration keys for data disks are also based on the LLT node number. Each key
is eight characters (bytes), specified as follows:
The first byte (left-most character) is the LLT node number added to the
hexadecimal number A.
For example, the first byte for LLT node 0 is formed by adding hexadecimal A
to 0, which yields A.
The first byte of LLT node 1 is hexadecimal A plus 1, which yields B.
The next three bytes are the ASCII characters VCS, indicating the keys are
written by the VCS fencing driver.
The final four bytes are null.


As shown in the table in the slide, node 0 uses key AVCS, node 1 uses key BVCS,
node 2 would be CVCS, and so on. For simplicity, these are shown as A and B in
the diagram.
After registering with the data disks, a Write Exclusive Registrants Only
reservation is set on the data disk. This reservation means that only the registered
system can write to the data disk.


I/O fencing operations


Registration with coordinator disks
After GAB has started and port a membership is established, each system registers
with the coordinator disks. HAD cannot start building the cluster configuration
until registration is complete.


All systems are aware of the keys of all other systems, forming a membership of
registered systems. This fencing membership, maintained by way of GAB port b,
is the basis for determining which nodes have access to the data disks. When
the fencing membership is complete, the fencing driver signals HAD, and HAD can
then start building the cluster configuration.


Service group startup


After the fencing membership is established and port b shows all systems as
members, each system writes registration keys to the coordinator disks. In the
example shown in the diagram, the cluster has two members, node 0 and node 1, so
port b membership shows 0 and 1.


At this point, HAD is initialized on each system and one system starts building the
cluster configuration. When HAD is running and all systems have the cluster
configuration in memory, VCS brings service groups online according to their
specified startup policies. When a disk group resource associated with a service
group is brought online, the Volume Manager disk group agent (DiskGroup)
imports the disk group and writes a SCSI-3 registration key to the data disks. This
registration is performed in a similar way to coordinator disk registration.


In the example shown in the diagram, node 0 is registered to write to the data disks
in the disk group belonging to the dbsg service group. Node 1 is registered to write
to the data disks in the disk group belonging to the appsg service group.
After registering with the data disk, Volume Manager sets a Write Exclusive
Registrants Only reservation on the data disk.


System failure


The diagram shows the fencing sequence when a system fails.


1 Node 0 detects node 1 has failed when the LLT heartbeat times out and informs
GAB. At this point, port a on node 0 (GAB membership) shows only 0.
2 The fencing driver is notified of the change in GAB membership and node 0
races to win control of a majority of the coordinator disks.
This means node 0 must eject node 1 keys (1) from at least two of three
coordinator disks. The fencing driver ejects the registration of node 1 (1 keys)
using the SCSI-3 Preempt and Abort command. This command allows a
registered member on a disk to eject the registration of another. Because I/O
fencing uses the same key for all paths from a host, a single preempt and abort
ejects a host from all paths to storage.
3 In this example, node 0 wins the race for each coordinator disk by ejecting
node 1 keys from each coordinator disk.
4 Now port b (fencing membership) shows only node 0 because node 1 keys
have been ejected. Therefore, fencing has a consistent membership and passes
the cluster reconfiguration information to HAD.
5 GAB port h reflects the new cluster membership containing only node 0 and
HAD now performs failover operations defined for the service groups that
were running on the departed system.


Fencing takes place when a service group is brought online on a surviving
system as part of the disk group import process. When the DiskGroup
resources come online, the agent online entry point instructs Volume Manager
to import the disk group with options to remove the node 1 registration and
reservation, and place a SCSI-3 registration and reservation for node 0.

Interconnect failure
The diagram shows how VCS handles fencing if the cluster interconnect is severed
and a network partition is created. In this case, multiple nodes are racing for
control of the coordinator disks.
1 LLT on node 0 informs GAB that it has not received a heartbeat from node 1
within the timeout period. Likewise, LLT on node 1 informs GAB that it has
not received a heartbeat from node 0.
2 When the fencing drivers on both nodes receive a cluster membership change
from GAB, they begin racing to gain control of the coordinator disks.
The node that reaches the first coordinator disk (based on disk serial number)
ejects the failed nodes key. In this example, node 0 wins the race for the first
coordinator disk and ejects the VF000801 (shown as 1 in the diagram) key.
After the node 1 key is ejected by node 0, node 1 cannot eject the key for node 0
because the SCSI-3 PR protocol specifies that only a member can eject a member.
This condition means that only one system can win.
3 Node 0 also wins the race for the second coordinator disk.
Node 0 is favored to win the race for the second coordinator disk according to
the algorithm used by the fencing driver. Because node 1 lost the race for the
first coordinator disk, node 1 has to sleep for one second (default) before it
tries to eject the other nodes key. This favors the winner of the first
coordinator disk to win the remaining coordinator disks. Therefore, node 1
does not gain control of the second or third coordinator disks.


4 After node 0 wins control of the majority of coordinator disks (all three in this
example), node 1 loses the race and calls a kernel panic to shut down
immediately and reboot.
5 Now port b (fencing membership) shows only node 0 because node 1 keys
have been ejected. Therefore, fencing has a consistent membership and passes
the cluster reconfiguration information to HAD.
6 GAB port h reflects the new cluster membership containing only node 0, and
HAD now performs the defined failover operations for the service groups that
were running on the departed system.
When a service group is brought online on a surviving system, fencing takes
place as part of the disk group importing process.

I/O fencing behavior


As demonstrated in the example failure scenarios, I/O fencing behaves the same
regardless of the type of failure:
The fencing drivers on each system race for control of the coordinator disks
and the winner determines cluster membership.
Reservations are placed on the data disks by Volume Manager when disk
groups are imported.


I/O fencing with multiple nodes


In a multinode cluster, the lowest numbered (LLT ID) node always races on behalf
of the remaining nodes. This means that at any time only one node is the
designated racer for any mini-cluster.
If a designated racer wins the coordinator disk race, it broadcasts this success on
port b to all other nodes in the mini-cluster.
If the designated racer loses the race, it panics and reboots. All other nodes
immediately detect another membership change in GAB when the racing node
panics. This signals all other members that the racer has lost and they must also
panic.


Majority clusters
The I/O fencing algorithm is designed to give priority to larger clusters in any
arbitration scenario. For example, if a single node is separated from a 16-node
cluster due to an interconnect fault, the 15-node cluster should continue to run. The
fencing driver uses the concept of a majority cluster. The algorithm determines if
the number of nodes remaining in the cluster is greater than or equal to the number
of departed nodes. If so, the larger cluster is considered a majority cluster. The
fencing driver gives the majority cluster advantage for winning the race for the
coordinator disks.
Fencing can be configured to override this default behavior and designate certain
nodes as the preferred racing winners. See the Veritas Cluster Server
Administrator's Guide for information on configuring preferred fencing.


I/O fencing implementation


Fencing implementation in Volume Manager
Volume Manager handles all fencing of data drives for disk groups that are
controlled by the VCS DiskGroup resource type. After a node successfully joins
the GAB cluster and the fencing driver determines that a preexisting network
partition does not exist, the VCS DiskGroup agent directs VxVM to import disk
groups using SCSI-3 registration and a Write Exclusive Registrants Only
reservation. This ensures that only the registered node can write to the disk group.


Each path to a drive represents a different I/O path. I/O fencing in VCS places the
same key on each path. For example, if node 0 has four paths to the first disk
group, all four paths have key AVCS registered. Later, if node 0 must be ejected,
VxVM preempts and aborts key AVCS, effectively ejecting all paths.


Because VxVM controls access to the storage, adding or deleting disks is not a
problem. VxVM fences any new drive added to a disk group and removes keys
when drives are removed. VxVM also determines if new paths are added and
fences these as well.


Fencing implementation in VCS


When the UseFence cluster attribute is set to SCSI3, HAD cannot start unless the
fencing driver is running in enabled mode. This ensures that services cannot be
brought online by VCS unless fencing is already protecting shared storage disks.
The UseFence attribute cannot be changed while VCS is running because the disk
groups must be reimported after fencing is configured. Therefore, you must use the
installer script to configure all fencing components, described in the next
section.


Alternately, you can use the offline configuration method and manually make the
change in the main.cf file, but this also means you must manually configure all
other components.
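For reference, the resulting cluster definition in the main.cf file includes the
attribute. A minimal fragment, assuming a cluster named vcs1:
cluster vcs1 (
        UseFence = SCSI3
        )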


Coordinator disk implementation


Coordinator disks are three standard disks or LUNs that are set aside for use by I/O
fencing during cluster reconfiguration.


The coordinator disks can be any three disks that support persistent reservations.
Symantec typically recommends using small LUNs (at least 150 MB) for
coordinator use. Using LUNs of at least 150 MB ensures that:
Certain array technologies interpret the LUN as a data device, not an internal
(gatekeeper) device.
Sufficient space is available for SCSI-3 support testing so that the private
region does not fill the disks.


You cannot use coordinator disks for any other purpose in the VCS configuration.
Do not store data on these disks or include the disks in disk groups used for data.
The data would not be protected and would interfere with the fencing process.
Using the coordinator=on option to vxdg for the coordinator disk group
ensures that the coordinator disk group has exactly three disks. This flag is set by
default when fencing is configured using the installer.
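If you ever create the coordinator disk group manually rather than through the
installer, a sketch of the vxdg steps follows; the disk group name vxfencoorddg
and the disk names are illustrative:
vxdg init vxfencoorddg disk01 disk02 disk03
vxdg -g vxfencoorddg set coordinator=on
vxdg deport vxfencoorddg
(The coordinator disk group is normally left deported; the fencing driver accesses
it as needed.)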


DMP support
VCS supports dynamic multipathing for both data and coordinator disks. The
/etc/vxfenmode file is used to set the mode for coordinator disks and these
sample files are provided for configuration:
/etc/vxfen.d/vxfenmode_scsi3_dmp
/etc/vxfen.d/vxfenmode_scsi3_raw
/etc/vxfen.d/vxfenmode_scsi3_sanvm
/etc/vxfen.d/vxfenmode_scsi3_disabled
/etc/vxfen.d/vxfenmode_scsi3_cps (for customized mode)


The following example shows the vxfenmode file contents for a DMP
configuration:
vxfen_mode=scsi3
scsi3_disk_policy=dmp


Configuring I/O fencing


Using CPI for automated fencing configuration


Fencing can be configured using the CPI installer with the -fencing option.
This enables you to configure fencing without manually modifying configuration
files.


Before configuring fencing:


Use the /opt/VRTSvcs/vxfen/bin/vxfentsthdw utility to verify that
the shared storage array supports SCSI-3 persistent reservations.
Warning: The vxfentsthdw utility overwrites and destroys existing data on
the disks by default. You can change this behavior using the -r option to
perform read-only testing. Other commonly used options include:
-f file (Verify all disks listed in the file.)
-g disk_group (Verify all disks in the disk group.)
After you have verified the paths to the disk on each system, you can run
vxfentsthdw with no arguments, which prompts you for the systems and
then for the path to the disk from each system. A verified path means that the
SCSI inquiry succeeds. For example, vxfenadm returns a disk serial number
from a SCSI-3 disk and an ioctl failed message from a non-SCSI-3 disk.
Initialize the disks to be used for the coordinator disk group.


Example fencing configuration files


The fencing configuration files created by the installer include:


The /etc/vxfendg file is created on each system in the cluster. This file
contains the coordinator disk group name.
The /etc/vxfentab file is automatically generated upon fencing startup.
The file contains a list of all paths to each coordinator disk. This is
accomplished during driver startup as follows:
a Read the vxfendg file to obtain the name of the coordinator disk group.
b Run vxdisk -o alldgs list and grep to create a list of each device name
(path) in the coordinator disk group.
c For each disk device in this list, run vxdisk list disk and create a list
of each device that is in the enabled state.
d Write the list of enabled devices to the vxfentab file.
This ensures that any time a system is rebooted, the fencing driver reinitializes
the vxfentab file with the current list of all paths to the coordinator disks.
The /etc/vxfenmode file is also created on each system in the cluster. This
file contains the fencing mode and disk policy. In the example file shown in the
slide, the mode is disk-based SCSI 3 fencing and the disk policy is DMP. The
Coordination Point Server lesson shows an example of a non-disk based
fencing policy.
The UseFence cluster attribute is set to SCSI3 in the main.cf file.
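For illustration, on a configured node the two files might contain entries like the
following; the coordinator disk group name and the DMP device paths are
assumptions, styled after the vxfenadm example later in this lesson:
# cat /etc/vxfendg
vxfencoorddg
# cat /etc/vxfentab
/dev/vx/rdmp/ams_wms0_47
/dev/vx/rdmp/ams_wms0_48
/dev/vx/rdmp/ams_wms0_49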


Viewing keys and status


You can check the keys on coordinator and data disks using the vxfenadm
command, as shown in the slide.
The following example shows the vxfenadm -s command with a specific data
disk:
vxfenadm -s /dev/vx/rdmp/ams_wms0_51
Device Name: /dev/vx/rdmp/ams_wms0_51
Total Number Of Keys: 2
key[0]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
[Node Format]: Cluster ID: 3 Node ID: 0 Node Name: s1
key[1]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
[Node Format]: Cluster ID: 3 Node ID: 0 Node Name: s1

You can also use the -r and -R options to view registrations.
Replacing coordinator disks
You can replace a coordinator disk using the vxfenswap command. See the Veritas
Cluster Server Administrator's Guide for detailed information.


Stopping systems running I/O fencing


To ensure that keys held by a system are removed from disks when you stop a
cluster system, use the shutdown command. If you use the reboot command,
the fencing shutdown scripts do not run to clear keys from disks.


If you inadvertently use reboot to shut down, you may see a message about a
pre-existing split brain condition when you try to restart the cluster. In this case,
you can use the vxfenclearpre utility described in the Veritas Cluster Server
Administrator's Guide.


Labs and solutions for this lesson are located on the following pages.
Lab 14: Configuring SCSI3 disk-based I/O fencing, page A-147.
Lab 14: Configuring SCSI3 disk-based I/O fencing, page B-327.


Lesson 15

Coordination Point Server


Coordination points
The original implementation of I/O fencing for data protection supported only
SCSI 3 disk-based fencing using persistent reservations.
In VCS 5.1, server-based fencing was introduced to provide an additional
mechanism for fencing membership arbitration. The term coordination point refers
generically to any disk- or server-based object used to register coordinator keys.
Other terminology introduced for server-based fencing includes coordination point
(CP) client clusters, CP client nodes, and CP servers.


A customized configuration refers to a client cluster that is configured with a mix
of server- and disk-based coordination points used for fencing membership
arbitration.
CP servers may be single or multinode VCS clusters that have been installed with
all Storage Foundation/HA 5.1 packages.


Use cases
One key use case for CPS-based fencing is supporting campus, or stretch,
clusters. A campus cluster is a single cluster with nodes placed in separate
geographical locations to protect against environmental disruptions. This provides
a cost-effective disaster recovery solution when a server-based coordination point
is placed in a location separate from the storage arrays.
Another key use case is enterprise-scale configurations with large numbers of
clusters. Implementing CP servers can greatly reduce the number of SCSI 3
PR-compliant LUNs required for fencing.


Server-based coordination points can also benefit implementations where a limited
number of SCSI 3 PR disks are available.


Campus clusters with CPS as a third coordination point


The diagram in the slide shows a campus cluster configuration with two nodes at
different locations, ccsite1 and ccsite2. Each site has a SCSI-3 compliant disk
array with one coordinator disk per site.
A one-node VCS cluster is configured as a CP server at a different location, used
as the third coordination point. The CP server itself is not required to have
fencing configured. In a one-node configuration, the CPS database containing the
node registrations is on a local file system. This database is described in more
detail later in the lesson.


Supported coordination point configurations


Symantec supports the following three coordination point configurations, depicted
in the slide:
Legacy vxfen driver based I/O fencing using SCSI-3 coordinator disks
Customized fencing using a combination of SCSI-3 disks and CP servers as coordination points
Customized fencing using only CP servers as coordination points


CPS cluster components


The diagram in the slide shows a configuration where the CP server is a two-node
VCS cluster. In this configuration:
Fencing is configured using SCSI 3 coordinator disks to protect shared storage.
The CPS database is located on shared storage.
The CPSSG service group is managing the shared storage resources, as well as
the networking resources used by CP client clusters to connect over the public
network.


The table shows a conceptual view of the contents of the CPS database. Clusters
are identified by name and a cluster universal unique identifier (UUID) number.
Cluster nodes are associated with clusters by way of the UUID and also identified
by name. When registered, cluster nodes are assigned the value of 1. Unregistered
nodes have a value of 0 in the Registered field.


Operating system and licensing requirements


CP servers have the same hardware, software, and licensing requirements as VCS
on supported UNIX platforms.
The cluster configured as the CP server can also be used for other purposes, such
as a Veritas Operations Manager server.


CP client clusters have the same requirements as VCS or SFHA with all packages
installed. There is no additional license needed on a CP client cluster to use a CPS
coordination point.


Networking requirements and recommendations


Networking requirements are also the same for a CP server cluster as a VCS
cluster. While Symantec recommends configuring Symantec Product
Authentication Service for secure communication among cluster nodes and CP
servers, this is not a requirement.
CP server uses TCP port 14250 by default, but this value is configurable.


One unique network recommendation for CP servers is to have an equal number of
hops from CP client cluster nodes to CP servers. This equalizes the race for
coordinator keys among client nodes when fencing events occur.


CPS operations
Arbitration with coordination points
Fencing behavior in a customized- or CPS-only configuration is logically the same
as SCSI 3-based fencing.


Upon startup, CP client cluster nodes register with the CP server to become a
member of an active cluster. The fencing driver on client nodes joins the fencing
membership on GAB port b. When all nodes are included in the port b
membership, the fencing driver notifies HAD to form port h membership.


When a fencing event occurs, client cluster nodes race to preempt the keys of other
nodes. In the case of a CPS coordination point, the node preempts the losing node's
registration by way of the cpsadm utility. When two coordination points are
registered to the winning node, the fencing driver on the losing node panics the
node. The losing nodes are sometimes referred to as victim nodes.


Script-based fencing
A customized or CPS-based fencing configuration is said to use script-based fencing. A disk-only fencing configuration uses only the vxfen driver, the same as legacy fencing in earlier VCS versions.
With script-based fencing, the vxfen driver still manages GAB memberships. When a membership change occurs, vxfen notifies the new vxfend fencing daemon, which calls the applicable script in the customized directory.


Registration with coordination points


The diagram in the slide shows conceptually how coordination points register keys
during CP client cluster startup with a customized fencing configuration. In this
example, the fencing configuration has two SCSI 3-based coordinator disks and
one CPS coordination point.
The value of the registration keys for the coordinator disks is shown as 0 and 1 in the slide for simplicity. The actual keys for a cluster with an ID of 57069 and two nodes numbered 0 and 1 would be VFDEED00 and VFDEED01, respectively.
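You can verify the hexadecimal encoding of the cluster ID yourself, assuming the node number occupies the final two characters of the key:

# 57069 decimal is DEED hexadecimal, so node 0's key is VFDEED00:
printf 'VF%X%02d\n' 57069 0     # prints VFDEED00
printf 'VF%X%02d\n' 57069 1     # prints VFDEED01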


The CP server registrations do not have keys. When CP client nodes register with a
CP server, the CPS database keeps track of which nodes are registered. However,
for purposes of illustration, the 0 and 1 shown on CP 3 indicate that nodes 0 and 1
are registered.


Service group startup


The diagram in the slide shows conceptually how reservations are written for
shared storage disk groups during service group startup.
Note that regardless of the types of coordination points in use, the data disks for
shared storage in a CP client cluster must be SCSI 3 PR-conforming. SCSI 3
reservations are written to the data disks when a disk group is imported, just as
with legacy fencing.


This occurs after coordination point registration has completed and VCS is started. Service groups with AutoStartList configured are automatically brought online during VCS startup. When DiskGroup resources are brought online, disk groups are imported and Volume Manager writes the reservation keys to all disks in the disk group.


The key is formed by adding hexadecimal A (decimal 10) to the LLT node ID to create the ASCII character for the first byte of the key. The node with LLT node ID 0 writes AVCS keys to the data disks, the node with LLT node ID 1 writes BVCS keys, and so on.
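A small sketch of the observable mapping described above (this is an illustration, not a Symantec utility):

# Map LLT node IDs to the first byte of the data-disk key:
for id in 0 1 2; do
    awk -v id="$id" 'BEGIN { printf "node %d -> %cVCS\n", id, 65 + id }'
done
# node 0 -> AVCS
# node 1 -> BVCS
# node 2 -> CVCS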



System failure


The fencing sequence when a system fails is as follows:
1 Node 0 detects that node 1 has failed when the LLT heartbeat times out and informs GAB. At this point, port a on node 0 (GAB membership) shows only 0.
2 The fencing driver is notified of the change in GAB membership. Node 0 races to win control of a majority of the coordination points. This means node 0 must eject the node 1 keys (B) from at least two of the three coordination points. vxfend ejects the node 1 registration from the coordinator disks using the SCSI-3 Preempt and Abort command, and from the CP server using the cpsadm preempt_node action.
3 In this example, node 0 wins the race for each coordination point by ejecting the node 1 keys from each coordination point.
4 Now port b (fencing membership) shows only node 0 because the node 1 keys have been ejected. Therefore, fencing has a consistent membership and passes the cluster reconfiguration information to HAD.
5 GAB port h reflects the new cluster membership containing only node 0, and HAD now performs the failover operations defined for the service groups that were running on the departed system.
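After the race completes, you can check the fencing state on the surviving node with vxfenadm -d. An abbreviated sketch of the output (the exact fields vary by platform and version):

# vxfenadm -d
I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: Customized
Fencing Mechanism: cps
Cluster Members:
  * 0 (node0)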


Data disk fencing takes place when a service group is brought online on a
surviving system as part of the disk group importing process. When the DiskGroup
resources come online, the agent online entry point instructs Volume Manager to
import the disk group with options to remove the node 1 registration and
reservation, and place a SCSI-3 registration and reservation for node 0.



Interconnect failure
As shown in the slide, the same type of procedure is performed in the case of a
network partition, where all links of the cluster interconnect fail simultaneously.
The difference in this case is that both nodes are racing for the coordination points.
In this example, node 0 again wins the race and ejects the registration keys for
node 1 from the coordination points, two of which are disks and one is a CP server.


When node 1 loses the race, the fencing driver panics the system, causing appsg to fail over to node 0. The appsg disk group is imported on node 0, and Volume Manager writes AVCS registrations on the disks and places a WERO (Write Exclusive Registrants Only) reservation.


Installing and configuring CP servers


Installing SFHA or VCS on a CP server
Installing a CP server is the same as installing any SFHA or VCS cluster, except that all packages must be selected. In the case of a one-node CPS cluster, you can locate the CPS database on a local disk. For two-node clusters, you must have shared storage for the database. Configure fencing on the CP server to protect the database on shared storage.
Symantec recommends that you configure Symantec Product Authentication Service (formerly VxSS) security on the CP server and all CP client clusters, but this is not a requirement. The CP server must be accessible over a public network to all CP client cluster nodes.


High availability management of the CPS components is configured when you set
up CPS.



Configuring the CP server


A script-based configuration utility enables you to configure all the necessary
components of a CP server, after you have installed and configured SFHA or VCS.
The configure_cps script prompts for the input shown in the slide, and then
sets up the shared storage objects needed for the CPS database and configures the
CPSSG service group to manage the storage and networking resources.
Note: In the case of a one-node CP server, the database is created on local
storage, which is not placed under VCS control.


You cannot specify a fully-qualified host name for the CPS name, even if you have DNS configured with an FQHN.
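To launch the utility (the script path is an assumption; it is commonly under /opt/VRTScps/bin and may vary by version and platform):

# Run the CPS configuration utility after SFHA or VCS is configured:
/opt/VRTScps/bin/configure_cps.pl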


CPSSG service group on the CP server


The CPSSG service group manages all resources required by the CP server.
The vxcpserv process is responsible for coordination point registrations, and interacts with:
- The cpsadm CLI
- The CP client cluster nodes


The vxcpserv process is dependent on shared storage for the CPS database (in a
multinode CPS cluster) as well as the public network connection for access to CP
client cluster nodes.
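A minimal sketch of what the CPSSG group might look like in main.cf for a two-node CPS cluster. The system names, device names, addresses, and resource names are illustrative; the configure_cps utility generates the real configuration:

group CPSSG (
    SystemList = { cps1 = 0, cps2 = 1 }
    AutoStartList = { cps1, cps2 }
    )

    Process vxcpserv (
        PathName = "/opt/VRTScps/bin/vxcpserv"
        )

    IP cpsvip (
        Device = eth0
        Address = "10.10.10.85"
        NetMask = "255.255.255.0"
        )

    NIC cpsnic (
        Device = eth0
        )

    Mount cpsmount (
        MountPoint = "/cps1_db"
        BlockDevice = "/dev/vx/dsk/cps_dg/cps_vol"
        FSType = vxfs
        FsckOpt = "-y"
        )

    // The server needs the virtual IP and the database file system.
    vxcpserv requires cpsvip
    vxcpserv requires cpsmount
    cpsvip requires cpsnic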


Example /etc/vxfenmode
The vxfenmode file contains new directives to specify new modes and mechanisms:
- scsi3: all coordination points are disks
- customized (cps mechanism): only CP servers as coordination points
- customized (cps mechanism): a combination of CP servers and disks as coordination points


The slide shows the contents of a sample vxfenmode file for a CP server cluster
with a disk-based fencing policy using dynamic multipathing.
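A minimal disk-based example with the DMP disk policy might look like this:

vxfen_mode=scsi3
scsi3_disk_policy=dmp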


Note: This vxfenmode file is similar to the example shown in the Data Protection Using SCSI 3-Based Fencing lesson, because both are disk-only fencing configurations. Keep in mind that this refers to fencing on the CPS cluster, not the client cluster, which is shown later in this lesson.
Recall that in disk-only fencing configurations, the name of the disk group is not present in the vxfenmode file. Instead, the disk group name is included in the legacy vxfendg file.


Other example configuration files on the CP server


The slide shows an example vxfentab file on the CP server cluster. Once again, because the CP server is configured with disk-only fencing, the only difference in the vxfentab file is the addition of the security directive. A setting of 1 indicates that security is enabled.
The vxcps.conf file is specific to a CP server and is not present on client clusters. It defines the networking values used by clients to connect to the CP server, as well as the location of the database. In this example, the database is on a file system on shared storage with /cps1_db as the mount point.


For one-node clusters, the default location of the CPS database is /etc/VRTScps/db.
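A sketch of what /etc/vxcps.conf might contain for this example. The directive names are assumptions drawn from the product documentation, and the values are illustrative; verify both against your installation:

# /etc/vxcps.conf -- illustrative values
cps_name=cps1
vip=[10.10.10.85]
port=14250
security=1
db=/cps1_db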


Multiple virtual IP addresses and the Quorum agent


If you have multiple IP addresses that can be used by client clusters to connect to
CPS, you can configure a Quorum type resource to provide high availability for
connectivity.
The Quorum attribute specifies the number of IP addresses that must be online and
the QuorumResources attribute specifies the IP type resources that are managing
the virtual IP addresses.


If the number of IP resources online is lower than the value specified in the Quorum attribute, the resource faults, causing the CPSSG service group to fail over and bring the virtual IP addresses online on another node in a multinode cluster.
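A minimal sketch of such a resource definition in main.cf, assuming two IP resources named cpsvip1 and cpsvip2:

Quorum quorum (
    // At least one of the listed IP resources must stay online:
    QuorumResources = { cpsvip1, cpsvip2 }
    Quorum = 1
    )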


Installing and configuring CP client clusters


CP client cluster configuration
Configure the CP client clusters only after the CP server is set up and RSH or SSH
secure communication is in place over the public network between the CP server
and client cluster nodes.
Verify that the CP server is running and accessible.


As with disk-based fencing, you can use the -fencing option to installvcs or installsfha to configure customized or CPS-based fencing on client clusters. You must have the CP server name or virtual IP address to enable the utility to access the CP server to add the client cluster nodes and user accounts to the CPS database.
Note: The CP server name is not the UNIX host name of the system on which the CP server cluster is running. It is a virtual name that is configured when CPS is configured.
The CP server name, fully-qualified host name (FQHN), and virtual IP should be added to DNS so that all client clusters can access the CP server by name. This enables you to use a name in the client fencing configuration. The advantage is that if the IP address for the CP server must be changed in the future, the client configurations do not have to change. These values are located in the vxcps.conf file on the CP server cluster.
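For example, to launch the fencing configuration on an installed client cluster (the installer directory is an assumption; it is commonly /opt/VRTS/install):

# Configure CPS-based fencing on a client cluster:
/opt/VRTS/install/installvcs -fencing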


Fencing wizard operations


The wizard first validates that the SFHA or VCS configuration is acceptable for a CP client cluster environment. If a customized configuration is selected, you must specify the disk group to contain the coordinator disks. You can select an existing disk group or opt to create a new disk group.
Next, the wizard detects whether security is configured on the CP server. If so, a user account is added to the CP server for each client cluster node for the security credentials. If security is not configured, the wizard adds a VCS user to the CPS cluster for each client cluster node.


The information needed to connect to the CP server, maintained in the vxcps.conf file on the CP server, is then used to build the fencing configuration on each CP client cluster node.



The wizard then creates the vxfenmode file based on your selections and restarts fencing and VCS. A date-stamped copy of the vxfenmode file is created as a historical record of changes to the fencing configuration and can be used for troubleshooting.


Finally, the vxfentab file is created, listing the coordination points. In disk-only fencing configurations, the file has the same contents as a legacy VCS vxfentab file. In a customized fencing configuration, information for both the disks and the CP servers is present.


Example /etc/vxfenmode on client cluster


The slide shows a sample vxfenmode file for a CP client cluster configuration with customized-mode fencing. In this example, one coordination point is a CP server with two virtual IP addresses using the default TCP port of 14250. The other two coordination points are SCSI 3 PR-compliant disks in a shared storage device.
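A minimal sketch of such a file, with illustrative addresses and disk group name:

vxfen_mode=customized
vxfen_mechanism=cps
# One CP server reachable at two virtual IP addresses:
cps1=[10.10.10.85]:14250,[10.10.11.85]:14250
# Coordinator disk group holding the two SCSI-3 disks:
vxfendg=vxfencoorddg
scsi3_disk_policy=dmp
security=1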



Other client cluster configuration files


The slide shows samples of the other two configuration files on the CP client cluster nodes:
- The vxfentab file shows a customized-mode fencing configuration with two coordinator disks and a CP server with two virtual IP addresses. Security is configured, and the CP server is not a single-node cluster.
- The clusuuid file contains the universally unique cluster ID.



Coordinator disk group differences


With a customized fencing configuration, the fencing disk group is specified in the vxfenmode file and the vxfendg file is not used.
By default, the coordinator=on flag is set for the coordinator disk group when the installer is used to configure fencing. This ensures that the fencing disk group has three disks.


Volume Manager does not allow this flag to be set for a coordinator disk group containing fewer than three disks. Therefore, with customized fencing configurations that have only one or two disks in the fencing disk group, you cannot set the coordinator flag.
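You can set or clear the flag manually with vxdg; the disk group name here is illustrative:

# Set the coordinator flag on a three-disk coordinator disk group:
vxdg -g vxfencoorddg set coordinator=on

# Clear the flag before reusing the disk group for other purposes:
vxdg -g vxfencoorddg set coordinator=off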



CPS administration
CPS user types and privileges
The CP server requires a user account with admin privileges for each client cluster node. These user accounts are added to the CP server when customized fencing is configured on client clusters.
The admin user account is necessary to register and unregister nodes during fencing operations. You must also use an account with admin privileges to create snapshots of the CPS database for backup purposes.


The operator and guest privileges are used when cpsadm is run on client cluster
nodes to test connectivity and display CP server objects. This also enables nonroot
users on the CP server to perform some CPS tasks.


Common operations for CP servers


The slide shows examples of some common operations performed on CP servers
with the cpsadm command.
The last example shows how to remove a registration. Although you would not
normally remove registrations, you must remove a stale registration in certain
circumstances.


For example, if a CP server is not accessible and a client node leaves a cluster, the
registration is not automatically removed on that CP server. When that CP server
starts again, it has a stale registration and that client node cannot rejoin its cluster
until that stale registration is removed.
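A sketch of the general pattern follows. The action names come from the cpsadm documentation, but treat the exact option letters as assumptions to verify against the man page:

# Test connectivity to the CP server:
cpsadm -s cps1.example.com -a ping_cps

# List the nodes known to the CP server:
cpsadm -s cps1.example.com -a list_nodes

# List current registrations for a client cluster:
cpsadm -s cps1.example.com -a list_membership -c clus1

# Remove a stale registration for node 1 of cluster clus1:
cpsadm -s cps1.example.com -a unreg_node -c clus1 -n 1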



Managing client nodes


The slide shows examples of some additional CPS operations. Normally, you do
not manually add CP client clusters and nodes to the CP server. These operations
are performed by the fencing configuration wizard.


However, in the case where you are manually configuring fencing, you can use
cpsadm to add clusters and nodes to the CPS configuration.
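A sketch of such manual additions; the UUID value and option letters are illustrative assumptions:

# Add a client cluster by name and UUID:
cpsadm -s cps1.example.com -a add_clus -c clus1 \
    -u {f0735332-1dd1-11b2-a3cb-e3709c1c73b9}

# Add node 0 of that cluster by host name:
cpsadm -s cps1.example.com -a add_node -c clus1 \
    -u {f0735332-1dd1-11b2-a3cb-e3709c1c73b9} -n 0 -h node1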


Managing user accounts


The slide shows examples of CPS operations related to user accounts. Normally,
you do not need to manually add user accounts to the CP server. These operations
are performed by the fencing configuration wizard.
However, in the case where you are manually configuring fencing, you can use
cpsadm to add user accounts to the CPS configuration.


You can also change privileges for CPS user accounts. You can use cpsadm to
enable nonroot users on the CP server to perform admin-level operations.
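For example (the user name, domain, role name, and option letters are illustrative assumptions; check the cpsadm man page for the exact syntax):

# Add an operator-level user account for a client node:
cpsadm -s cps1.example.com -a add_user -e cpsclient@node1 \
    -g vx -u {f0735332-1dd1-11b2-a3cb-e3709c1c73b9} -f cps_operator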



Administering the CP server database


You can create snapshots of the CPS database using the cpsadm command. The snapshot file name is date- and time-stamped so you can choose a version to restore if the database is ever lost or corrupted.
To restore a SQLite database:
1 Make a copy of the selected snapshot file.
2 Rename the snapshot file to the original cps_db database file name.
3 Verify the data using cpsadm.


You may want to set up a cron job to periodically create snapshots of the database.
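For example, a nightly snapshot could be scheduled like this (the cpsadm path is an assumption; adjust it for your installation):

# crontab entry: snapshot the CPS database every night at 2:00 a.m.
0 2 * * * /opt/VRTScps/bin/cpsadm -s cps1.example.com -a db_snapshot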


Online reconfiguration of coordination points


You can reconfigure fencing using the installer while the client cluster continues to
run. You can add or swap coordinator disks and change between disk-based and
customized configurations.


Recall that when changing from pure disk-based fencing to customized or server-based fencing, you must first disable fencing if you want to reuse the same disk group. If you use a new coordinator disk group, you can change from disk-based to customized fencing without disabling fencing first.



Coordination point agent


CoordPoint agent
In customized configurations, the CoordPoint agent monitors CPS registrations of
client cluster nodes using cpsadm. If a node is unregistered, the CoordPoint
monitor entry point reports that the resource is unexpectedly offline.
The CoordPoint type resources are persistent. If a node is reregistered, the resource
comes back online at the next offline monitor interval, or when the resource is
probed.


In disk-only configurations, the CoordPoint agent uses vxfenadm to read registration keys on coordinator disks.
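You can run the same check manually. For example, to display the keys on all coordinator disks listed in /etc/vxfentab:

vxfenadm -s all -f /etc/vxfentab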


CoordPoint resource on cluster client nodes


Because CoordPoint resources are persistent, the service group containing the CoordPoint resource appears to be offline in the output of commands such as hastatus -sum. This is expected behavior. You can add a Phantom resource to the service group to enable the status of the group to show online.


A CoordPoint resource is automatically configured in a new parallel service group when fencing is configured using the CPI. The default name is vxfen, but you are prompted to type a new, unique service group name. Using a name such as vxfensg helps distinguish the service group from the fencing driver name. The service group contains only a single CoordPoint resource and runs on all client cluster nodes simultaneously. If a CoordPoint resource faults, vxfen faults on that node as a result.
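A minimal sketch of the resulting group in main.cf (system names are illustrative; the Phantom resource is the optional addition described earlier):

group vxfensg (
    SystemList = { node1 = 0, node2 = 1 }
    Parallel = 1
    AutoStartList = { node1, node2 }
    )

    CoordPoint coordpoint (
        // 1 is the type default; shown here for illustration.
        FaultTolerance = 1
        )

    Phantom phantom_vxfensg (
        )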


By default, the CoordPoint resource type has the FaultTolerance attribute set to 1. This means that the resource does not fault when the first key is discovered to be missing. If you change FaultTolerance to 0, the CoordPoint resource faults when the first missing key is discovered.
Recall that the client cluster is only affected by missing keys when fencing operations occur. The resource fault is an indicator that a problem is affecting registrations, and you can take preventive action to restore keys within the running cluster.


Cluster startup with missing CPS coordination points


Versions of VCS or SFHA prior to 6.0 with CPS-based fencing configuration have
a built-in single point of failure if a CP server is offline. In this case, the client
cluster cannot form a membership and HAD cannot be started, leaving all HA
services offline.
In 6.0, the client cluster universally unique ID (UUID) and coordination point
serial numbers are cached locally.
In the event that a CP server is not available when a node is attempting to join the
cluster, the local fencing cache values are used to enable the node to join.


The CoordPoint resource faults if a coordination point is not accessible; notification can be configured to alert users to a problem in the fencing environment.
Note: Caching is configurable and can be disabled if 5.1 behavior is desired.




Labs and solutions for this lesson are located on the following pages.
Lab 15: Configuring CPS-based I/O fencing, page A-167.
Lab 15: Configuring CPS-based I/O fencing, page B-387.
