The following is intended to outline our general product direction. It is intended for information
purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any
material, code, or functionality, and should not be relied upon in making purchasing decisions. The
development, release, and timing of any features or functionality described for Oracle’s products
remains at the sole discretion of Oracle.
The functionality of non-Oracle products, including development, release, and timing of any features or
functionality described, is solely at the discretion of the non-Oracle vendors.
HOW TO DEPLOY SAP SCM WITH SAP LIVECACHE IN AN HA CONFIGURATION ON ORACLE SUPERCLUSTER
Table of Contents
Introduction
Solution Overview
Implementation Strategy
Configuring Storage
Naming Conventions
Zone Clustering of SAP liveCache
References
Installation References
Introduction
By using SAP Supply Chain Management (SAP SCM) software, businesses can more effectively and
efficiently manage their end-to-end supply chain processes, including partner collaboration and supply
network planning, execution, and coordination. In many SAP SCM deployments, SAP liveCache
technology is implemented because it can significantly accelerate the complex algorithmic processing
in data-intensive SCM applications, allowing companies to alter supply chain processes strategically
and quickly to achieve a competitive advantage.
Oracle SuperCluster is Oracle’s fastest, most secure, and most scalable engineered system. It is ideal
for consolidating a complete SAP landscape and providing high service levels. Consolidating the SAP
landscape can simplify and accelerate SCM application delivery, improve infrastructure utilization, and
create a highly available platform for mission-critical SAP-managed business processes.
This paper describes an SAP SCM with SAP liveCache deployment that was implemented as a proof-of-concept
on Oracle SuperCluster in an Oracle Solution Center. Located around the globe, Oracle Solution
Centers offer state-of-the-art systems, software, and expertise to develop architectures that support
specific requirements. Working closely with customer staff, Oracle experts develop and prototype
architectures to prove out solutions for real-world workloads. The goal of this particular proof-of-concept
was to document procedures and best practices for configuring SAP SCM and SAP liveCache
services in a high availability (HA) architecture that meets stringent service level requirements.
Solution Overview
The page Oracle Optimized Solution for SAP contains details for installing SAP on Oracle SuperCluster. This
document focuses on SCM functionality related to SAP liveCache and covers an example of an SCM/APO
configuration. To eliminate single points of failure, engineers implemented a solution with zone clustering for high
availability (HA) of the SAP liveCache and SAP servers, as well as Oracle RAC for the database servers.
Oracle no-charge virtualization technologies safely consolidate SAP application and database services and control
the underlying compute, memory, I/O, and storage resources. Physical domains (PDOMs) are used to divide Oracle
SuperCluster resources into multiple electrically isolated hardware partitions that can be completely powered up or
down and manipulated without affecting each other. Each PDOM can be further divided using Oracle VM Server for
SPARC logical domains (LDOMs), each running an independent instance of Oracle Solaris 11.
During the proof-of-concept implementation at the Oracle Solution Center, engineers shared an Oracle SuperCluster
M7-8 with other projects. The Oracle SuperCluster was configured with two database (DB) domains and two
application (APP) domains, one of each on each PDOM (Figure 1). Because no performance testing was conducted,
the size of the configured domains (16 cores for DB domains and 32 cores for APP domains) is not particularly
relevant.
Figure 1. Oracle SuperCluster configuration for SAP liveCache HA proof-of-concept exercise.
Within the DB and APP domains, Oracle Solaris Zones provide an additional level of isolation, partitioning domain
resources for greater isolation and more granular resource control. Oracle Solaris Cluster provides the functionality
needed to support fault monitoring and automatic failover for critical services through the use of zone clustering.
Each Oracle SuperCluster has a built-in Oracle ZFS Storage Appliance configured with two clustered heads for high
availability and a tray of 8 TB disks. The storage disks provided by the internal appliance are used as boot disks for
Oracle SuperCluster domains, as Oracle Solaris boot environments for zones, and as swap space. On a fully configured
Oracle SuperCluster, it is recommended to limit the use of the internal Oracle ZFS Storage Appliance for
application data.
An external Oracle ZFS Storage ZS3-2 or ZS4-4 appliance can also be connected to the InfiniBand network. The
recommendations for configuring an internal or external appliance are the same.
Implementation Strategy
The proof-of-concept followed these general high-level steps, which are subsequently described in detail:
1. Install and configure Oracle Solaris Cluster. For some customers, Oracle Advanced Customer Support
(Oracle ACS) performs this step as a part of the initial Oracle SuperCluster setup and installation.
2. Create zone clusters, network resources (defining the logical hostnames), and resources to manage NFS
mount points.
3. Install SAP components in zones on the logical hostnames by using the SAPINST_USE_HOSTNAME
parameter.
a. SAP SCM and SAP components (ASCS, ERS, DB, PAS, APP) are installed.
b. The SAP liveCache (LC) instance is installed.
c. SAP SCM is then configured to connect to the LC instance.
4. Start the SAP components (ASCS, ERS, DB, PAS, APP, LC) in both zones of each zone cluster.
5. Create Oracle Solaris Cluster resources and configure them to manage the SAP component instances,
including the LC instance.
6. Perform testing to validate the configuration and confirm service recovery. Restart all components and
simulate component failures, observing the timely switch-over of application components and ongoing
service availability.
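Steps 2 and 5 can be sketched with the Oracle Solaris Cluster CLI. The zone cluster, resource group, and resource names below are hypothetical, and the exact resource types depend on the component being managed; this is a minimal sketch, not the full procedure described later in this paper:

```
# Step 2 (sketch): create a resource group and a logical hostname resource
# in a zone cluster named lc-zc (hypothetical name).
clresourcegroup create -Z lc-zc rg-lc
clreslogicalhostname create -Z lc-zc -g rg-lc -h lc-vhost lc-lh-rs

# Step 5 (sketch): register the SAP liveCache resource types and bring the
# resource group online with monitoring enabled.
clresourcetype register -Z lc-zc SUNW.sap_xserver SUNW.sap_livecache
clresourcegroup online -eM -Z lc-zc rg-lc
```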
The table below shows an example of network and hostname configurations with a short description of their
function. The first column shows whether the network is an InfiniBand (IB), 10GbE (E), or management (M) network. The
last column can be completed with the corresponding IP address for each hostname (which is site-specific).
These hostnames and IP addresses are in addition to the hostnames and IP addresses configured during the initial
component installations (such as hostnames and IP addresses for the DB domains, the DB zones, the APP domains,
and the Oracle ZFS Storage Appliance heads), along with any virtual IP addresses.
E/IB Description Hostname IP Address
Configuring Storage
Storage for the installation of SAP APO can be allocated on the internal Oracle ZFS Storage Appliance or on an
external appliance. The decision between internal and external storage depends on performance
requirements and the overall configuration of the Oracle SuperCluster. The internal appliance has only 20 disks
available, divided into two pools. Because the internal Oracle ZFS Storage Appliance provides boot disks (iSCSI LUNs) for
logical domains (LDOMs) and Oracle Solaris Zones, these disks can come under heavy I/O load in environments
with many LDOMs and zones. Capacity planning is important to avoid service degradation.
On either the internal or external appliance, one project should be created for the SAP APO installation. This approach
allows simple snapshot, replication, and backup operations for all installation-related files. The browser-based
interface for the Oracle ZFS Storage Appliance is used to create the SAP APO project (Figure 3).
Figure 3. Creating a project for SAP APO on the Oracle ZFS Storage Appliance.
The number of shares and the share parameters depend on the nature and scope of the SAP APO deployment. For a
non-production environment with limited performance requirements, the configuration shown in Figure 4 works well.
For production environments with more intensive I/O requirements, separate shares need to be created for the SAP
liveCache database. The table below lists share names and provides a short description of each share (and optional
shares) that the deployment requires. The project name can be the SAP <SID>.
SHARES FOR LIVECACHE REPLICATION IN TEST/DEV (D) AND PRODUCTION (P) ENVIRONMENTS
P/D Description Options Project Share Mounted on
Naming Conventions
The previous two sections cover network and storage resources that must be configured. For ease of use and
management, naming conventions (such as those implemented in this proof-of-concept) are strongly recommended
when defining the following implementation objects:
» Zone cluster names (private)
» Zone hostnames (public)
» Resource groups (private)
» Storage resource groups (private)
» Logical hostnames (private)
» Hostnames (public)
» Resource names (private)
» Storage resource names (private)
Some names are public and some are private, as indicated above. Naming conventions should take into
consideration security, ease of use (consistency and support of multiple SAP instances), and SAP-specific
requirements (such as the requirement that hostnames not exceed 13 characters).
The following tables show the building blocks used for naming conventions and how they are applied to construct the
naming conventions used in the proof-of-concept installation.
Variable Description
$stor sapmnt/usrsap/sapdb/saptrans
$R Random or company-defined
$D Domain ID
PROPOSED CONVENTIONS FOR ZONE CLUSTERS PER SAP SYSTEM INSTALLATION
Zone hostnames $R
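The 13-character hostname limit mentioned above is easy to validate before committing to a convention. The following is a minimal shell sketch; the function name and sample hostnames are illustrative only, not names from the proof-of-concept:

```shell
# Sketch: validate a proposed hostname against SAP's 13-character limit.
check_sap_hostname() {
  name="$1"
  if [ "${#name}" -le 13 ]; then
    echo "OK: $name (${#name} chars)"
  else
    echo "TOO LONG: $name (${#name} chars)"
  fi
}

# Illustrative checks: one name within the limit, one that exceeds it.
check_sap_hostname sapm7lc-ha01
check_sap_hostname sapm7adm-haapp-0101
```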
Oracle Solaris Cluster is installed in two steps: first the environment is prepared, and then the Oracle Solaris Cluster
browser-based user interface (BUI) is available to finalize the installation and start the configuration for the SAP
software installation.
Installing and configuring Oracle Solaris Cluster requires four high-level steps, described in the following sections.
IB partition data links must be created on top of the IB physical data links. On Oracle SuperCluster, partitions 8511
and 8512 are dedicated to the Oracle Solaris Cluster interconnects. On node 1:
root@sapm7adm-haapp-0101:~# dladm show-ib
LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net6 10E100014AC620 10E000654AC622 2 up -- -- 8503,8512,FFFF
net5 10E100014AC620 10E000654AC621 1 up -- -- 8503,8511,FFFF
root@sapm7adm-haapp-0101:~# dladm create-part -l net5 -P 8511 ic1
root@sapm7adm-haapp-0101:~# dladm create-part -l net6 -P 8512 ic2
root@sapm7adm-haapp-0101:~# dladm show-part
LINK PKEY OVER STATE FLAGS
sys-root0 8503 net5 up f---
sys-root1 8503 net6 up f---
stor_ipmp0_0 8503 net6 up f---
stor_ipmp0_1 8503 net5 up f---
ic1 8511 net5 unknown ----
ic2 8512 net6 unknown ----
root@sapm7adm-haapp-0101:~# ipadm create-ip ic1
root@sapm7adm-haapp-0101:~# ipadm create-ip ic2
On node 2:
root@sapm7adm-haapp-0201:~# dladm show-ib
LINK HCAGUID PORTGUID PORT STATE GWNAME GWPORT PKEYS
net5 10E100014AA7B0 10E000654AA7B1 1 up -- -- 8503,8511,FFFF
net6 10E100014AA7B0 10E000654AA7B2 2 up -- -- 8503,8512,FFFF
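The partition and IP-interface creation on node 2 is not shown in the captured output; it mirrors node 1 (a sketch, assuming the same port-to-partition mapping, with net5/port 1 carrying 8511 and net6/port 2 carrying 8512):

```
root@sapm7adm-haapp-0201:~# dladm create-part -l net5 -P 8511 ic1
root@sapm7adm-haapp-0201:~# dladm create-part -l net6 -P 8512 ic2
root@sapm7adm-haapp-0201:~# ipadm create-ip ic1
root@sapm7adm-haapp-0201:~# ipadm create-ip ic2
```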
Interfaces ic1 and ic2 are now ready as Oracle Solaris Cluster interconnects using partitions 8511 and 8512. It is
important to configure the interfaces to use the same partitions on both nodes. In this example, ic1 is on partition
8511 and ic2 is on partition 8512 on both nodes. The interfaces are configured on different ports connected to
different IB switches, preventing the failure of a single switch from disabling both interconnects.
On node 1:
root@sapm7adm-haapp-0101:~# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:boot.00144ff828d4
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Max Connections: 65535/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS Access: disabled
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/240
Login Retry Time Interval: 60/-
Configured Sessions: 1
On node 2:
root@sapm7adm-haapp-0201:~# iscsiadm list initiator-node
Initiator node name: iqn.1986-03.com.sun:boot.00144ff9a0f9
Initiator node alias: -
Login Parameters (Default/Configured):
Header Digest: NONE/-
Data Digest: NONE/-
Max Connections: 65535/-
Authentication Type: NONE
RADIUS Server: NONE
RADIUS Access: disabled
Tunable Parameters (Default/Configured):
Session Login Response Time: 60/-
Maximum Connection Retry Time: 180/240
Login Retry Time Interval: 60/-
Configured Sessions: 1
Notice the initiator node names ending in 28d4 (on node 1) and a0f9 (on node 2). Identify the hostnames for the
Oracle ZFS Storage Appliance cluster heads. In the example deployment, the hostnames are:
10.129.112.136 sapm7-h1-storadm
10.129.112.137 sapm7-h2-storadm
Log into each cluster head host and create the quorum iSCSI initiator group as follows:
sapm7-h1-storadm:configuration san initiators iscsi> ls
Initiators:
NAME ALIAS
initiator-000 init_sc1cn1dom0
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.0010e0479e74
initiator-001 init_sc1cn1dom1
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ff8faae
initiator-002 init_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ff97c9b
initiator-003 init_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ff828d4
initiator-004 init_sc1cn2dom0
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.0010e0479e75
initiator-005 init_sc1cn2dom1
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ffbf174
initiator-006 init_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ffb3b6c
initiator-007 init_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
|
+-> INITIATOR
iqn.1986-03.com.sun:boot.00144ff9a0f9
Children:
groups => Manage groups
Initiators already exist for the domains. The next commands create the quorum initiator group (QuorumGroup-
haapp-01) containing both initiators (because both nodes must be able to access the quorum LUN):
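The creation commands themselves were not captured in the session log. Based on the target-group creation shown later in this paper, the sequence would look roughly like the following sketch (the group number assigned by the appliance may differ):

```
sapm7-h1-storadm:configuration san initiators iscsi> groups
sapm7-h1-storadm:configuration san initiators iscsi groups> create
sapm7-h1-storadm:... (uncommitted)> set name=QuorumGroup-haapp-01
sapm7-h1-storadm:... (uncommitted)> set initiators=iqn.1986-03.com.sun:boot.00144ff828d4,iqn.1986-03.com.sun:boot.00144ff9a0f9
sapm7-h1-storadm:... (uncommitted)> commit
```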
GROUP NAME
group-000 QuorumGroup-haapp-01
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff9a0f9
iqn.1986-03.com.sun:boot.00144ff828d4
group-001 initgrp_sc1cn1_service
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff8faae
iqn.1986-03.com.sun:boot.0010e0479e74
group-002 initgrp_sc1cn1dom0
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.0010e0479e74
group-003 initgrp_sc1cn1dom1
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff8faae
group-004 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-app-0102
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff97c9b
group-005 initgrp_sc1cn1dom_ssccn1-io-sapm7adm-haapp-0101
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff828d4
group-006 initgrp_sc1cn2_service
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ffbf174
iqn.1986-03.com.sun:boot.0010e0479e75
group-007 initgrp_sc1cn2dom0
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.0010e0479e75
group-008 initgrp_sc1cn2dom1
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ffbf174
group-009 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-app-0202
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ffb3b6c
group-010 initgrp_sc1cn2dom_ssccn2-io-sapm7adm-haapp-0201
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ff9a0f9
The network interfaces on storage head 1 can be listed to identify the IPoIB interfaces:
ibpart7 offline ip ibpart7 0.0.0.0/32 p8503_ibp0
ibpart8 offline ip ibpart8 0.0.0.0/32 p8503_ibp1
igb0 up ip igb0 10.129.112.136/20 igb0
igb2 up ip igb2 10.129.97.146/20 igb2
ipmp1 up ipmp ibpart1 192.168.24.9/22 ipmp_versaboot1
ibpart2
ipmp2 offline ipmp ibpart3 192.168.24.10/22 ipmp_versaboot2
ibpart4
ipmp3 up ipmp ibpart5 192.168.28.1/22 ipmp_stor1
ibpart6
ipmp4 offline ipmp ibpart7 192.168.28.2/22 ipmp_stor2
ibpart8
vnic1 up ip vnic1 10.129.112.144/20 vnic1
vnic2 offline ip vnic2 10.129.112.145/20 vnic2
In the output above, notice that ipmp3 is the interface hosting the ZFS SA IP over IB address for head 1.
sapm7-h1-storadm:configuration san> targets iscsi
sapm7-h1-storadm:configuration san targets iscsi> create
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set
alias=QuorumTarget-haapp-01
alias = QuorumTarget-haapp-01 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> set
interfaces=ipmp3
interfaces = ipmp3 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi target-003 (uncommitted)> commit
sapm7-h1-storadm:configuration san targets iscsi> show
Targets:
TARGET ALIAS
target-000 QuorumTarget-haapp-01
|
+-> IQN
iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
target-001 targ_sc1sn1_iodinstall
|
+-> IQN
iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
target-002 targ_sc1sn1_ipmp1
|
+-> IQN
iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
target-003 targ_sc1sn1_ipmp2
|
+-> IQN
iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
Children:
groups => Manage groups
The new target (QuorumTarget-haapp-01) is created. Next, create a group for the quorum target:
sapm7-h1-storadm:configuration san targets iscsi> groups
sapm7-h1-storadm:configuration san targets iscsi groups> create
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set
name=QuorumGroup-haapp-01
name = QuorumGroup-haapp-01 (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> set
targets=iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
targets = iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-
fa190035423f (uncommitted)
sapm7-h1-storadm:configuration san targets iscsi group-003 (uncommitted)> commit
sapm7-h1-storadm:configuration san targets iscsi groups> show
Groups:
GROUP NAME
group-000 QuorumGroup-haapp-01
|
+-> TARGETS
iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
group-001 targgrp_sc1sn1_iodinstall
|
+-> TARGETS
iqn.1986-03.com.sun:02:5a8f6f30-5e1e-e3b9-c441-f53dd2c14eb1
group-002 targgrp_sc1sn1_ipmp1
|
+-> TARGETS
iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
group-003 targgrp_sc1sn1_ipmp2
|
+-> TARGETS
iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
The listing shows that the new target group (QuorumGroup-haapp-01) is created. Next, create a quorum project
and an iSCSI LUN for the quorum device.
sapm7-h1-storadm:configuration san targets iscsi groups> cd /
sapm7-h1-storadm:> shares
sapm7-h1-storadm:shares> ls
Properties:
pool = supercluster1
Projects:
IPS-repos
OSC-data
OSC-oeshm
OVMT
default
sc1-ldomfs
Children:
encryption => Manage encryption keys
replication => Manage remote replication
schema => Define custom property schema
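The project and LUN creation itself was not captured between the listings. A rough sketch in the appliance CLI follows; the LUN name and volume size here are hypothetical, and only the project name QuorumProject appears in the captured properties:

```
sapm7-h1-storadm:shares> project QuorumProject
sapm7-h1-storadm:shares QuorumProject (uncommitted)> commit
sapm7-h1-storadm:shares> select QuorumProject
sapm7-h1-storadm:shares QuorumProject> lun quorum1
sapm7-h1-storadm:shares QuorumProject/quorum1 (uncommitted)> set volsize=1G
sapm7-h1-storadm:shares QuorumProject/quorum1 (uncommitted)> set targetgroup=QuorumGroup-haapp-01
sapm7-h1-storadm:shares QuorumProject/quorum1 (uncommitted)> set initiatorgroup=QuorumGroup-haapp-01
sapm7-h1-storadm:shares QuorumProject/quorum1 (uncommitted)> commit
```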
aclinherit = restricted
aclmode = discard
atime = true
checksum = fletcher4
compression = off
dedup = false
compressratio = 100
copies = 1
creation = Fri Jan 22 2016 00:15:15 GMT+0000 (UTC)
logbias = latency
mountpoint = /export
quota = 0
readonly = false
recordsize = 128K
reservation = 0
rstchown = true
secondarycache = all
nbmand = false
sharesmb = off
sharenfs = on
snapdir = hidden
vscan = false
defaultuserquota = 0
defaultgroupquota = 0
encryption = off
snaplabel =
sharedav = off
shareftp = off
sharesftp = off
sharetftp = off
pool = supercluster1
canonical_name = supercluster1/local/QuorumProject
default_group = other
default_permissions = 700
default_sparse = false
default_user = nobody
default_volblocksize = 8K
default_volsize = 0
exported = true
nodestroy = false
maxblocksize = 1M
space_data = 31K
space_unused_res = 0
space_unused_res_shares = 0
space_snapshots = 0
space_available = 7.10T
space_total = 31K
origin =
Shares:
LUNs:
Children:
groups => View per-group usage and manage group
quotas
replication => Manage remote replication
snapshots => Manage snapshots
users => View per-user usage and manage user quotas
Statically configure the iSCSI target and view the quorum LUN on each cluster node. On node 1:
root@sapm7adm-haapp-0101:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-
5ec2-6331-bbca-fa190035423f,192.168.28.1
root@sapm7adm-haapp-0101:~# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-
fa190035423f,192.168.28.1:3260
root@sapm7adm-haapp-0101:~# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
Alias: QuorumTarget-haapp-01
TPGT: 2
ISID: 4000002a0000
Connections: 1
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
Alias: targ_sc1sn1_ipmp1
TPGT: 2
ISID: 4000002a0001
Connections: 1
LUN: 1
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2
Target: iqn.1986-03.com.sun:02:981136d4-173d-4ba2-b1c4-efc8765a0cd9
Alias: targ_sc1sn1_ipmp1
TPGT: 2
ISID: 4000002a0000
Connections: 1
LUN: 1
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA1A0011d0s2
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF200000569EDA210012d0s2
On node 2:
root@sapm7adm-haapp-0201:~# iscsiadm add static-config iqn.1986-03.com.sun:02:a685fb41-
5ec2-6331-bbca-fa190035423f,192.168.28.1
root@sapm7adm-haapp-0201:~# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-
fa190035423f,192.168.28.1:3260
root@sapm7adm-haapp-0201:~# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:a685fb41-5ec2-6331-bbca-fa190035423f
Alias: QuorumTarget-haapp-01
TPGT: 2
ISID: 4000002a0000
Connections: 1
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09EF4EF20000056A1756A0015d0s2
Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
Alias: targ_sc1sn1_ipmp2
TPGT: 2
ISID: 4000002a0001
Connections: 1
LUN: 2
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2
Target: iqn.1986-03.com.sun:02:8e92e976-c490-46fc-870a-847c3ba388d3
Alias: targ_sc1sn1_ipmp2
TPGT: 2
ISID: 4000002a0000
Connections: 1
LUN: 2
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF860009d0s2
LUN: 0
Vendor: SUN
Product: Sun Storage 7000
OS Device Name: /dev/rdsk/c0t600144F09D4812E90000569EDF8D000Ad0s2
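The captured output below resumes partway through a package installation on node 1; the command that produced it was not captured. Based on the boot-environment name shown in the message that follows the output, it would be along these lines (a sketch):

```
root@sapm7adm-haapp-0101:~# pkg install --accept --be-name solaris-small solaris-small-server
```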
PHASE ITEMS
Installing new actions 19090/19090
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 2/2
On the next boot the Boot Environment solaris-small will be
mounted on '/'. Reboot when ready to switch to this updated BE.
On node 2:
root@sapm7adm-haapp-0201:~# pkg info -r solaris-small-server
Name: group/system/solaris-small-server
Summary: Oracle Solaris Small Server
Description: Provides a useful command-line Oracle Solaris environment
Category: Meta Packages/Group Packages
State: Not installed
Publisher: solaris
Version: 0.5.11
Build Release: 5.11
Branch: 0.175.3.1.0.5.0
Packaging Date: Tue Oct 06 13:56:21 2015
Size: 5.46 kB
FMRI: pkg://solaris/group/system/solaris-small-server@0.5.11,5.11-
0.175.3.1.0.5.0:20151006T135621Z
PHASE ITEMS
Installing new actions 19090/19090
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 2/2
Reboot both nodes and confirm that the updated boot environment is running:
root@sapm7adm-haapp-0101:~# reboot
root@sapm7adm-haapp-0201:~# reboot
root@sapm7adm-haapp-0101:~# ls /net/192.168.28.1/export/IPS-repos/osc4/repo
pkg5.repository publisher
To install the Oracle Solaris Cluster software, the full package group (ha-cluster-full) is installed on both nodes.
On node 1:
root@sapm7adm-haapp-0101:~# pkg set-publisher -g file:///net/192.168.28.1/export/IPS-
repos/osc4/repo ha-cluster
root@sapm7adm-haapp-0101:~# pkg info -r ha-cluster-full
Name: ha-cluster/group-package/ha-cluster-full
Summary: Oracle Solaris Cluster full installation group package
Description: Oracle Solaris Cluster full installation group package
Category: Meta Packages/Group Packages
State: Not installed
Publisher: ha-cluster
Version: 4.3 (Oracle Solaris Cluster 4.3.0.24.0)
Build Release: 5.11
Branch: 0.24.0
Packaging Date: Wed Aug 26 23:33:36 2015
Size: 5.88 kB
FMRI: pkg://ha-cluster/ha-cluster/group-package/ha-cluster-full@4.3,5.11-
0.24.0:20150826T233336Z
root@sapm7adm-haapp-0101:~# pkg install --accept --be-name ha-cluster ha-cluster-full
Packages to install: 96
Create boot environment: Yes
Create backup boot environment: No
PHASE ITEMS
Installing new actions 11243/11243
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 3/3
On the next boot the Boot Environment ha-cluster will be
mounted on '/'. Reboot when ready to switch to this updated BE.
On node 2:
root@sapm7adm-haapp-0201:~# pkg set-publisher -g file:///net/192.168.28.1/export/IPS-
repos/osc4/repo ha-cluster
root@sapm7adm-haapp-0201:~# pkg install --accept --be-name ha-cluster ha-cluster-full
Packages to install: 96
Create boot environment: Yes
Create backup boot environment: No
PHASE ITEMS
Installing new actions 11243/11243
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 3/3
Reboot both nodes and confirm that the updated boot environment is running:
root@sapm7adm-haapp-0101:~# reboot
root@sapm7adm-haapp-0201:~# reboot
Confirm that both nodes use the DefaultFixed network configuration profile:
root@sapm7adm-haapp-0101:~# netadm list -p ncp defaultfixed
TYPE PROFILE STATE
ncp DefaultFixed online
root@sapm7adm-haapp-0201:~# netadm list -p ncp defaultfixed
TYPE PROFILE STATE
ncp DefaultFixed online
During initial configuration of a new cluster, cluster configuration commands are issued by one system, called the
control node. The control node issues the command to establish the new cluster and configures other specified
systems as nodes of that cluster. The clauth command controls network access policies for machines configured
as nodes of a new cluster. Before running clauth on node 2, add the directory /usr/cluster/bin to the default
path for executables in the .profile file on node 1:
export PATH=/usr/bin:/usr/sbin
PATH=$PATH:/usr/cluster/bin
root@sapm7adm-haapp-0201:~# PATH=$PATH:/usr/cluster/bin
root@sapm7adm-haapp-0201:~# clauth enable -n sapm7adm-haapp-0101
Figure 5. Connecting to the Oracle Solaris Cluster Manager BUI.
The cluster creation wizard guides you through the process of creating an Oracle Solaris Cluster configuration. It
gathers configuration details, displays checks before installing, and then performs an Oracle Solaris Cluster install.
The same BUI is used for managing and monitoring the Oracle Solaris Cluster configuration after installation. When
using the BUI to manage the configuration, the comparable CLI commands are shown as they are run on the nodes.
The wizard (Figure 6) first verifies prerequisites for cluster creation. Specify the Creation Mode as "Typical", which
works well on Oracle SuperCluster for clustered SAP environments.
Figure 6. The Oracle Solaris Cluster wizard simplifies the process of cluster creation.
Next, select the interfaces ic1 and ic2 configured earlier as the local transport adapters (Figure 7).
Figure 7. Specify the adapter interfaces for the Oracle Solaris Cluster configuration.
Next, specify the cluster name and nodes for the cluster configuration (Figure 8) and the quorum device (Figure 9).
When selecting a quorum device, Oracle Solaris Cluster can detect the only direct-attached shared disk. If
more than one is present, it will ask the user to make a choice.
Figure 8. Specify the nodes for the Oracle Solaris Cluster configuration.
Resource security information is displayed (Figure 10), and then the entire configuration is presented for review
(Figure 11). At this point, the software is ready to create the cluster. If desired, select the option from the review
screen to perform a cluster check before actual cluster creation.
Figure 11. Review the Oracle Solaris Cluster configuration.
Figure 12 shows the results of a cluster check. When the configuration is acceptable, click the Create button to
begin cluster creation. Figure 13 shows the results of an example cluster creation.
Oracle Solaris Cluster is installed in the global zone. Figure 14 shows status information for the created cluster
sapm7-haapp-01. The nodes are rebooted to join the cluster. After the reboot, log in again to the BUI to view status.
At this time, there are no resource groups or zone clusters. More detailed information is available using the menu
options. For example, by selecting “Nodes”, the user can drill down for status information about each node
(Figure 15). By selecting “Quorum”, the user can also see status for the quorum device and nodes (Figure 16).
Figure 14. Oracle Solaris Cluster Manager provides status information about the created cluster.
Figure 15. The interface can present detailed status information about cluster nodes.
Preparing the Environment
For high availability, SAP SCM is installed in zone clusters, which must be created before the SAP APO installation.
Oracle Solaris Cluster implements the concept of logical hostnames. A logical hostname uses an IP address
managed by Oracle Solaris Cluster as a resource. A logical hostname is available on one cluster node and can be
transparently moved to other nodes as needed. Clients accessing the logical hostname via its IP address are not
aware of the node’s actual identity.
The SAP Software Provisioning Manager (sapinst) can also use the logical hostname specified by the parameter
SAPINST_USE_HOSTNAME=<hostname>. Before using sapinst to install the SAP SCM components, prepare
the SAP software environment by following these steps:
1. Create zone clusters for the SAP liveCache (LC), ASCS, and PAS servers (according to the configuration that was
selected to host these components). Customers installing all components in one zone need only create a single
zone cluster.
2. Create logical hostnames in the zone clusters. These are the virtual hosts for the LC, ASCS, ERS, PAS, and
APP servers.
3. Prepare for SAP installation on these zones by configuring prerequisites such as file system mounts.
4. Create the Oracle Solaris Cluster resources to monitor the NFS-mounted file systems required for the SAP
NetWeaver stack (file systems such as /sapmnt/<SID>, /usr/SAP, and other customer-specific file
systems, if necessary).
5. Create projects for the user <SID>adm.
The following pages explain these steps in detail. Note that the steps described in this document were performed
multiple times, and the Oracle SuperCluster domains created did not always have the same name. As a result, there
are some hostname variations in different sections of this paper; within each section, however, the names are
consistent. Hostnames of the domains and the ZFS appliance heads are specific to each customer machine and
site, so they must be modified when using command examples from this paper.
Preparing to Create Zone Clusters
Before creating zone clusters, it’s necessary to install the zfssa-client package on both nodes (alternatively, you
can add the package in each created zone). In either case, the command to install the package is pkg install
zfssa-client.
Start by identifying the specific naming conventions used in the Oracle SuperCluster deployment. The iscsi-lun.sh
script is run against one Oracle ZFS Storage Appliance head at a time (in the examples below, against osc7sn01-storIB as
head 1 and against osc7sn02-storIB as head 2):
root@osc7cn02pd00-d2:/opt/oracle.supercluster/bin# ./iscsi-lun.sh list projects -z
osc7sn01-storib
Password:
IPS-repos
OSC-data
OSC-oeshm
QuorumProject
SAP
default
sc1-ldomfs
The steps below create a LUN in the sc1-ldomfs project. This project is used to provide storage for the rpools of
the logical domains (LDOMs) in the Oracle SuperCluster.
root@osc7cn02pd00-d2:/opt/oracle.supercluster/bin# ./iscsi-lun.sh list luns -z osc7sn01-
storib -a sc1-ldomfs
Password:
LUNs:
The listing of LUNs shows the naming conventions for the LDOMs and that head 1 provides LUNs for cn1 and cn4,
which are PDOMs in an Oracle SuperCluster configuration with two SPARC M7 Servers. Head 2 provides LUNs for
PDOMs cn2 and cn3. In an Oracle SuperCluster configuration with a single SPARC M7 Server, only PDOMs cn1
and cn2 are present.
Use the script iscsi-lun.sh to identify the correct initiator group and target group:
root@osc7cn02pd00-d2:/opt/oracle.supercluster/bin# ./iscsi-lun.sh list initiator-groups -
z osc7sn02-storib
Verifying osc7sn02-storib is ZFSSA master head
Password:
Password:
GROUP NAME
group-000 QuorumGroup-haapp-01
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ffb9cdd
iqn.1986-03.com.sun:boot.00144ffb2743
group-001 initgrp_sc1cn1dom0
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.0010e04793e4
.
.
.
group-008 initgrp_sc1cn3dom1
|
+-> INITIATORS
iqn.1986-03.com.sun:boot.00144ffb2743
.
.
.
The quorum group (QuorumGroup-haapp-01) contains an iSCSI Qualified Name (IQN) that also appears later in
the listing for the LDOM cn3dom1. In this configuration, we know that head 2 provides LUNs for cn3, so the LDOM is
served by head 2 and ipmp2. We can identify the target group by looking for the group served via ipmp2:
group-003 targgrp_sc1sn1_ipmp2
|
+-> TARGETS
iqn.1986-03.com.sun:02:fba62a3c-c1fe-6974-cda5-b89fe7cafa57
After identifying the initiator group and target group, use the script iscsi-lun.sh to add a new LUN, specifying the
initiator group and target group names, as in this example:
Password:
Setting up iscsi devices on osc7cn02pd00-d2
Password:
c0t600144F0E170D4C5000057F2231C0002d0 has been formatted and ready to use
We now repeat the process to identify the initiator group and target group for the other Oracle ZFS Storage
Appliance head.
The following command adds the LUN for this head:
root@osc7cn02pd01-d2:/opt/oracle.supercluster/bin# ./iscsi-lun.sh add -z osc7sn01-storib -
i `hostname` -n 1 -N 1 -s 200G -l 32k -I initgrp_sc1cn4dom1 -T targgrp_sc1sn1_ipmp1 -a
sc1-ldomfs
Verifying osc7sn01-storib owns all the required cluster resources
Password:
Adding lun(s) for osc7cn02pd01-d2 on osc7sn01-storib
Password:
Setting up iscsi devices on osc7cn02pd01-d2
Password:
c0t600144F09C1F8D64000057F226100007d0 has been formatted and ready to use
Lastly, add static host information to the /etc/hosts file on both cluster nodes, such as:
10.136.140.116 dlaz-100m
10.136.140.117 dlaz-101m
10.136.140.118 dlaz-102m
10.136.140.124 dlaz-200m
10.136.140.125 dlaz-201m
10.136.140.126 dlaz-202m
10.136.139.48 dlaz-100
10.136.139.49 dla-lc-lh
10.136.139.50 dlaz-101
10.136.139.51 dla-ascs-lh
10.136.139.52 dlaz-102
10.136.139.53 dla-pas-lh
10.136.139.64 dlaz-200
10.136.139.65 osc702-z3-vip
10.136.139.66 dlaz-201
10.136.139.67 dla-ers-lh
10.136.139.68 dlaz-202
10.136.139.69 dla-app-lh
#IB Hosts
192.168.139.225 idlaz-100
192.168.139.226 idla-lc-lh
192.168.139.227 idlaz-101
192.168.139.228 idla-ascs-lh
192.168.139.229 idlaz-102
192.168.139.230 idla-pas-lh
192.168.139.231 idla-z200
192.168.139.232 iosc702-z3-vip
192.168.139.233 idlaz-201
192.168.139.234 idla-ers-lh
192.168.139.235 idlaz-202
192.168.139.236 idla-app-lh
Creating the Zone Clusters Using the BUI
Zone clusters can be created with using the clzonecluster or clzc command, or by using the browser-based
user interface (BUI) provided with Oracle Solaris Cluster. This section giv es an example of using the BUI to
implement zone clustering.
Use a browser to access the Oracle Solaris Cluster Manager by specifying the URL as https://node:8998/scm
(see the How to Access Oracle Solaris Cluster Manager documentation for more information). Under Tasks, select
Zone Clustering. Press Create to start the zone cluster creation wizard.
The following example shows the process of first creating a zone cluster for SAP liveCache.
Figure 18. The zone cluster is named lc-zc and uses /zones/lc-zc as the zone path.
In this deployment, resource controls were not implemented. Because Oracle Solaris performs effective resource
management on its own, a practical approach is to skip initial resource allocations and observe whether any are
needed after the system is in use. If resource controls are required, they can be implemented at a later point in time.
Figure 19. The zone cluster creation wizard enables optional resource allocations for zones.
Memory capping is not supported on Oracle SuperCluster at this time.
The physical host nodes for the zone clusters are already selected.
Enter zone host names, IP addresses, and netmask length for the zones in the zone cluster using settings specific
to your environment.
Review all configuration settings before starting the creation of the lc-zc zone cluster.
The wizard creates the zone cluster and displays the commands executed to create it. (This makes it easy to
capture the commands into a script that can be used to create additional clusters.)
The lc-zc zone cluster is now configured, and status information is available from the command line using the
Oracle Solaris Cluster clzc (clzonecluster) command:
root@osc7cn02pd00-d2:~# clzc status lc-zc
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
lc-zc solaris osc7cn02pd01-d2 dlaz-100 Offline Configured
osc7cn02pd00-d2 dlaz-200 Offline Configured
On Oracle SuperCluster, each zone has two networks: a 10GbE network and an InfiniBand (IB) network.
Currently the zone cluster creation wizard does not support adding a second network interface, so it must be added
using a clzc configure command.
The configure subcommand can use an input file to modify the zone cluster non-interactively. In this example, the
file contains commands that add the second network interface:
select node physical-host=osc7cn02pd00-d2
add net
set address=192.168.139.225/22
set physical=stor_ipmp0
end
end
select node physical-host=osc7cn02pd01-d2
add net
set address=192.168.139.231/22
set physical=stor_ipmp0
end
end
commit
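The write-a-file-then-apply flow above can be sketched in a short script. The file path below is a hypothetical choice, and the clzc invocation is left commented because it can only run in the global zone of an actual cluster node:

```shell
# Write the clzc configure commands shown above to an input file.
# The file path is illustrative.
CFG=/var/tmp/lc-zc-add-ib.cfg

cat > "$CFG" <<'EOF'
select node physical-host=osc7cn02pd00-d2
add net
set address=192.168.139.225/22
set physical=stor_ipmp0
end
end
select node physical-host=osc7cn02pd01-d2
add net
set address=192.168.139.231/22
set physical=stor_ipmp0
end
end
commit
EOF

# Apply the file non-interactively (run from the global zone):
# clzc configure -f "$CFG" lc-zc
```

Keeping the commands in a file also documents the change and makes it repeatable for the other zone clusters.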
Figure 25. The sysconfig command configures each Oracle Solaris instance.
Navigating through all the screens, similar to an interactive Solaris zone initial boot configuration, creates the profile:
SC profile successfully generated as:
/net/osc7sn01-storib/export/software/prof/sc_profile.xml
root@osc7cn02pd00-d2:~# ls /net/osc7sn01-storib/export/software/prof/
sc_profile.xml
Duplicate this initial profile to create specific profiles for the zones dlaz-100, dlaz-200, dlaz-101, dlaz-201,
dlaz-102, and dlaz-202:
root@osc7cn02pd00-d2:~# ls /net/osc7sn01-storib/export/software/prof/
dlaz-100-profile.xml dlaz-200-profile.xml sc_profile.xml
dlaz-101-profile.xml dlaz-201-profile.xml
dlaz-102-profile.xml dlaz-202-profile.xml
Customize the profiles by replacing the nodename string with the corresponding hostname. The diff command
highlights this change from the original profile file:
root@osc7cn02pd00-d2:~# diff prof/sc_profile.xml prof/dlaz-101-profile.xml
22c22
< <propval type="astring" name="nodename" value="dlaz-100"/>
---
> <propval type="astring" name="nodename" value="dlaz-101"/>
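The duplicate-and-customize step can be scripted with sed. The following is a sketch only: the profile directory is a scratch stand-in, and the here-document holds just the one profile line of interest (the real sc_profile.xml is much larger):

```shell
# Stand-in profile directory (the real one is on the NFS share).
PROF=/var/tmp/prof
mkdir -p "$PROF"

# Minimal stand-in for the generated profile.
cat > "$PROF/sc_profile.xml" <<'EOF'
<propval type="astring" name="nodename" value="dlaz-100"/>
EOF

# Clone the profile once per zone and patch the nodename property.
for zone in dlaz-100 dlaz-200 dlaz-101 dlaz-201 dlaz-102 dlaz-202; do
  sed "s/value=\"dlaz-100\"/value=\"$zone\"/" \
      "$PROF/sc_profile.xml" > "$PROF/$zone-profile.xml"
done
```

Each generated file then differs from the original only in the nodename value, exactly as the diff output above shows.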
Install the Oracle Solaris Zones using the customized profiles. On node 1:
root@osc7cn02pd00-d2:~# clzc install -c dlaz-100-profile.xml -n `hostname` lc-zc
Waiting for zone install commands to complete on all the nodes of the zone cluster "lc-
zc"...
On node 2:
root@osc7cn02pd01-d2:~# clzc install -c dlaz-200-profile.xml -n `hostname` lc-zc
Waiting for zone install commands to complete on all the nodes of the zone cluster "lc-
zc"...
Using the Oracle Solaris Cluster Manager BUI, note that the status of the zone cluster nodes has changed to Installed.
Figure 26. Oracle Solaris Cluster Manager BUI shows zone cluster status.
After the zones are successfully installed, the zone cluster can be booted:
root@osc7cn02pd00-d2:~# clzc boot lc-zc
Test that the zone cluster is running and accessible, and that DNS is set up properly:
root@osc7cn02pd00-d2:~# zoneadm list
global
lc-zc
root@osc7cn02pd00-d2:~# zlogin -C lc-zc
[Connected to zone 'lc-zc' console]
dlaz-200 console login: root
Password:
Oct 3 13:46:44 dlaz-200 login: ROOT LOGIN /dev/console
Oracle Corporation SunOS 5.11 11.3 March 2016
root@dlaz-200:~# nslookup
> dlaz-100
Server: 140.83.186.4
Address: 140.83.186.4#53
The commands above show that the zone cluster nodes dlaz-100 and dlaz-200 are ready (DNS configuration
was included in the profile and is the same in all zones). The Oracle Solaris Cluster Manager BUI also shows the
status of these nodes as Online and Running.
Figure 27. Oracle Solaris Cluster Manager shows updated node status.
set address=192.168.139.233/22
set physical=stor_ipmp0
end
end
commit
Use the BUI to create the PAS zone cluster pas-zc and the clzc configure command to add the second network
interface:
On node 2:
root@osc7cn02pd01-d2:~# clzc install -c dlaz-201-profile.xml -n `hostname` ascs-zc
root@osc7cn02pd01-d2:~# clzc install -c dlaz-202-profile.xml -n `hostname` pas-zc
root@osc7cn02pd01-d2:~# clzc boot ascs-zc
root@osc7cn02pd01-d2:~# clzc boot pas-zc
The SAP liveCache, ASCS, and PAS zone clusters can now be monitored and managed from the BUI.
Figure 28. The BUI now shows status for LC, ASCS, and PAS nodes.
Configuring Logical Hostnames
Oracle Solaris Cluster manages the following logical hostnames, which come in pairs: one for 10GbE and one for IB:
10.136.139.49 dla-lc-lh
10.136.139.51 dla-ascs-lh
10.136.139.53 dla-pas-lh
10.136.139.67 dla-ers-lh
10.136.139.69 dla-app-lh
192.168.139.226 idla-lc-lh
192.168.139.228 idla-ascs-lh
192.168.139.230 idla-pas-lh
192.168.139.234 idla-ers-lh
192.168.139.236 idla-app-lh
To modify the /etc/hosts files on each node to include these hostnames, either edit all the files with vi or use a set
of cat commands to append to a previously modified file:
# vi /zones/dla-*/root/etc/hosts
# cat hosts >> /zones/dla-pas/root/etc/hosts
# cat hosts >> /zones/dla-lc/root/etc/hosts
# cat hosts >> /zones/dla-ascs/root/etc/hosts
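Appending with cat is not idempotent: running it twice duplicates every entry. A slightly safer sketch appends each entry only if it is missing. Here scratch directories stand in for the real zone roots under /zones/<zone>/root, and only two of the entries are shown:

```shell
# File holding the logical-hostname entries to distribute (two shown).
HOSTS_FRAGMENT=/var/tmp/hosts.fragment
printf '%s\n' \
  '10.136.139.49 dla-lc-lh' \
  '192.168.139.226 idla-lc-lh' > "$HOSTS_FRAGMENT"

# Scratch directories standing in for the zone roots.
for zroot in /var/tmp/dla-pas-root /var/tmp/dla-lc-root; do
  mkdir -p "$zroot/etc"
  touch "$zroot/etc/hosts"
  while read -r entry; do
    # append only entries that are not already present
    grep -qF "$entry" "$zroot/etc/hosts" || echo "$entry" >> "$zroot/etc/hosts"
  done < "$HOSTS_FRAGMENT"
done
```

The same loop can be rerun after adding new logical hostnames without producing duplicate lines.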
To add a logical hostname as a resource for each zone cluster, you can use either the Oracle Solaris Cluster BUI or
the command line.
First, navigate to the Zone Cluster Solaris Resources pane and click Add under Network Addresses.
Figure 30. Adding all logical hostnames, one at a time, in the popup Network Address – Add window.
Figure 31. The logical hostnames are inserted in the zone cluster configuration and zone configuration on each node.
Next, navigate to the Tasks screen and select Logical Hostname to create a resource for Oracle Solaris Cluster.
Follow the steps on each screen (note that some are informative only, such as Verify Prerequisites).
Select the zone cluster in which to configure the logical hostname resource.
The nodes for the zone cluster are pre-selected.
Choose one logical hostname, such as the hostname for the 10GbE interface.
There are no PNM (Public Network Management) objects.
Enter a resource group name in line with the naming conventions discussed earlier. Click Return to go to the next
screen.
Figure 38. Logical hostname resource and resource group review.
Review the Summary screen.
Unfortunately, the wizard cannot be used to create another logical hostname in the same resource group (a bug is
filed for this). In this example, to add the IB logical hostname, we use the generic resource workflow.
Navigate to the Resource Groups screen and select the resource group, such as lc-rg.
Figure 41. Creating another resource in the logical hostname resource group.
There are no dependencies for this resource.
Review the Summary screen.
Figure 46. A new resource in the resource group is created and the resource group’s status is updated.
Configuring Logical Hostnames Using the Command Line
To add logical hostnames via the command line, log into the global zone. Create a script file containing the
commands to create the logical hostnames:
clrg create -Z ascs-zc -p nodelist=dlaz-101,dlaz-201 ers-rg
Run the script file to execute the commands, and then check the logical hostname status:
root@osc7cn02pd00-d2:~# clrs status -Z all -t LogicalHostname
dla-pas-lh dlaz-102 Online Online - LogicalHostname online.
dlaz-202 Offline Offline
Use a set of cat commands to append the file contents to the /etc/vfstab file for each LC, ASCS, or PAS node:
cat vfstab-pas >> /zones/dla-pas/root/etc/vfstab
cat vfstab-lc >> /zones/dla-lc/root/etc/vfstab
cat vfstab-a >> /zones/dla-ascs/root/etc/vfstab
Execute these commands from inside each LC, ASCS, or PAS zone to mount the appropriate file systems:
mount /usr/sap
mkdir /usr/sap/saptrans
mount /usr/sap/trans
mount /sapmnt
mount /oracle
In each zone cluster, we need to create zone cluster resources to monitor these NFS-mounted file systems:
/usr/sap
/usr/sap/saptrans
/sapmnt
/sapdb
For the Oracle ZFS Storage Appliance to provide fencing, there needs to be an exception list stored in the
sharenfs property of the project holding the shares for the SAP install. Identify all of the IP addresses in the IB
network on each node. For example, on node 1 in the global zone:
root@osc7cn02pd00-d2:~# ipadm |grep stor |grep 192 |sed -e "s/.*192/192/"
192.168.139.89/22
192.168.139.225/22
192.168.139.227/22
192.168.139.229/22
Then remove the address of the global zone (192.168.139.89) on this node. On node 2 in the global zone:
root@osc7cn02pd01-d2:~# ipadm |grep stor |grep 192 |sed -e "s/.*192/192/"
192.168.139.92/22
192.168.139.231/22
192.168.139.233/22
192.168.139.235/22
Then remove the address of the global zone (192.168.139.92) on this node.
Build the sharenfs string. It contains each IP address with a netmask length of 32, preceded by the @ sign. The /32
netmask expresses that each IP address is treated individually and not as part of a range, which is important
for I/O fencing of failed nodes. We remove the global zone addresses because the SAP-specific file systems are only
accessed from inside the zones.
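Assembling the string by hand is error-prone, so it can be scripted. The sketch below uses the zone addresses identified above, drops the global-zone addresses, and builds identical root= and rw= lists; the exact membership of each list should be adjusted to match your environment (the appliance transcript that follows shows a site-specific rw= list). Variable and file names are illustrative:

```shell
# Global-zone addresses to exclude, and the full IB address list from ipadm.
EXCLUDE="192.168.139.89 192.168.139.92"
ALL_IPS="192.168.139.89 192.168.139.225 192.168.139.227 192.168.139.229 \
192.168.139.92 192.168.139.231 192.168.139.233 192.168.139.235"

LIST=""
for ip in $ALL_IPS; do
  skip=no
  for ex in $EXCLUDE; do
    [ "$ip" = "$ex" ] && skip=yes     # drop the global-zone addresses
  done
  [ "$skip" = yes ] && continue
  LIST="${LIST:+$LIST:}@$ip/32"       # @ prefix, /32 = single host, : separator
done

SHARENFS="sec=sys,root=$LIST,rw=$LIST"
echo "$SHARENFS" | tee /var/tmp/sharenfs.txt
```

The resulting string can then be pasted into the appliance's set sharenfs=... command shown below.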
root@osc7cn02pd01-d2:~# ssh osc7sn01-storib
Password:
Last login: Tue Oct 4 20:18:05 2016 from 10.136.140.53
osc7sn01:> shares
osc7sn01:shares> select SAP
osc7sn01:shares SAP> set
sharenfs="sec=sys,root=@192.168.139.225/32:@192.168.139.227/32:@192.168.139.229/32:@192.16
8.139.231/32:@192.168.139.233/32:@192.168.139.235/32,rw=@192.168.139.225/32:@192.168.139.2
27/32:@192.168.139.230/32:@192.168.139.231/32:@192.168.139.233/32:@192.168.139.236/32"
sharenfs =
sec=sys,root=@192.168.139.225/32:@192.168.139.227/32:@192.168.139.229/32:@192.168.139.231/
32:@192.168.139.233/32:@192.168.139.235/32,rw=@192.168.139.225/32:@192.168.139.227/32:@192
.168.139.230/32:@192.168.139.231/32:@192.168.139.233/32:@192.168.139.236/32 (uncommitted)
osc7sn01:shares SAP> commit
Configure the Oracle Solaris Cluster NFS workflow in the Oracle ZFS Storage Appliance.
root@osc7cn02pd01-d2:~# ssh osc7sn01-storib
Password:
Last login: Tue Oct 4 20:28:09 2016 from 10.136.140.53
osc7sn01:> maintenance workflows
osc7sn01:maintenance workflows> ls
Properties:
showhidden = false
Workflows:
workflow-004 Unconfigure Oracle Enterprise Manager Monitoring root false Sun
Microsystems, Inc. 1.0
We can verify that the defined workflow executed successfully because the user osc_agent was created. The next
step is to add the Oracle ZFS Storage Appliance to Oracle Solaris Cluster.
Enter the IB hostname of the appliance head where the project for SAP shares is configured (in this case,
osc7sn01-storIB). Enter the username created during the earlier step (in this case, osc_agent).
In some cases, the export list will not contain all shared file systems. If this occurs, the export entries can be
entered manually or added as a property later on. A bug may not allow adding both IP addresses and shared
exported file systems at the same time; to circumvent this problem, simply add the IP addresses first and then add
the exported file systems.
Figure 49. File system export list on Oracle ZFS Storage Appliance head.
The zone cluster should show the status of the new NAS device as OK.
Figure 50. Zone cluster status shows the new NAS device.
Create the ScalMountPoint resource to manage and monitor the availability of NFS mount points.
root@dlaz-100:~# clrg create -S scalmnt-rg
root@dlaz-100:~# clrt register ScalMountPoint
Pay attention to the hostname of the Oracle ZFS Storage Appliance head. Oracle Solaris Cluster treats NAS device
names as case-sensitive and expects the exact same name in /etc/vfstab.
root@dlaz-100:~# clrg online -eM scalmnt-rg
root@dlaz-100:~# clrs status
Repeat the same steps to add the NAS device for the zone clusters ascs-zc and pas-zc. Resources are created as
needed for the SAP components. Put the password for the user osc_agent in the file /tmp/p and enter the following
commands to add the appliance as a NAS device to the zone cluster ascs-zc:
root@dlaz-201:~# clnasdevice add -t sun_uss -u osc_agent -f /tmp/p osc7sn01-storib
root@dlaz-201:~# clnasdevice set -p nodeIPs{dlaz-101}=192.168.139.227 -p nodeIPs{dlaz-
201}=192.168.139.233 osc7sn01-storib
root@dlaz-201:~# clnasdevice add-dir -d supercluster1/local/SAP osc7sn01-storib
Via the command line, add the Oracle ZFS Storage Appliance as the NAS device for the zone cluster pas-zc and
create the ScalMountPoint resource (it’s assumed that the password for the user osc_agent is in the file /tmp/p):
root@dlaz-202:~# clnasdevice add -t sun_uss -u osc_agent -f /tmp/p osc7sn01-storib
root@dlaz-202:~# clnasdevice set -p nodeIPs{dlaz-102}=192.168.139.229 -p nodeIPs{dlaz-
202}=192.168.139.235 osc7sn01-storib
root@dlaz-202:~# clnasdevice add-dir -d supercluster1/local/SAP osc7sn01-storib
root@dlaz-202:~# clrg create -S scalmnt-rg
root@dlaz-202:~# clrt register ScalMountPoint
root@dlaz-202:~# clrs create -d -g scalmnt-rg -t ScalMountPoint -x MountPointDir=/usr/sap
-x FileSystemType=nas -x TargetFileSystem=osc7sn01-storib:/export/SAP/usr-sap-pas usrsap-
rs
root@dlaz-202:~# clrs create -d -g scalmnt-rg -t ScalMountPoint -x MountPointDir=/sapdb -x
FileSystemType=nas -x TargetFileSystem=osc7sn01-storib:/export/SAP/sapdb-pas sapdb-rs
root@dlaz-202:~# clrs create -d -g scalmnt-rg -t ScalMountPoint -x MountPointDir=/sapmnt -
x FileSystemType=nas -x TargetFileSystem=osc7sn01-storib:/export/SAP/sapmnt sapmnt-rs
root@dlaz-202:~# clrg online -eM scalmnt-rg
Configuring a Highly Available Storage Resource
Using a highly available storage resource can improve the performance of I/O-intensive data services, such as
import and export operations for the SAP transport service. In an Oracle Solaris Cluster environment, the resource type
HAStoragePlus enables access to highly available cluster or local file systems that are configured for failover. (For
information about setting up this resource type, see Enabling Highly Available Local File Systems in the Oracle
Solaris Cluster documentation.)
As an example, you can use the BUI to create an HAStoragePlus resource for the transport directory
/usr/sap/trans. From the Tasks pane, select Highly Available Storage.
Pick the zone cluster where the resource group and resource will be created (in this case, the zone cluster pas-zc),
and specify the configuration settings.
Figure 55. Select Shared File System as the shared storage type.
Figure 56. Select the mount points and press the Return button to get to the next screen.
It’s recommended that you rename the resource, as the default name is long and cumbersome. Change the default
name for the resource group and reuse scalmnt-rg (otherwise a new resource group is created).
Figure 57. Review the settings for the HAStoragePlus resource.
Figure 58. Review the configuration choices and press Next to create the resource.
Figure 59. The Result screen shows that the resource configuration succeeded.
Figure 60. All resources can now be monitored using the BUI.
Creating the Project
In each zone where SAP is installed and running, the following project information is needed. Initially, the project can
be created in the zone where the SAP installer is run. The SAP installer runs in the zone where the logical hostname is
active, which initially is one of the zones dlaz-100, dlaz-101, and dlaz-102.
# projmod -s \
  -K "process.max-file-descriptor=(basic,65536,deny)" \
  -K "process.max-sem-nsems=(priv,2048,deny)" \
  -K "project.max-sem-ids=(priv,1024,deny)" \
  -K "project.max-shm-ids=(priv,256,deny)" \
  -K "project.max-shm-memory=(priv,18446744073709551615,deny)" \
  user.root
Installing SAP SCM Software Components
Many readers are already familiar with installing the SAP NetWeaver ABAP stack. For this reason, this guide
summarizes procedures for installing the SAP SCM components in an appendix (“Appendix A: Installing SAP
SCM”). This appendix is based on detailed ABAP installation and LC installation steps performed as part of the
sample installation in the Oracle Solution Center. It outlines the steps (using the graphical sapinst client interface)
for installing the following software components:
» The ABAP SAP Central Services (ASCS) instance
» Oracle Database (the primary Oracle RAC node)
» The Primary Application Server (PAS) instance
» The dialog instance
» The SAP liveCache instance
» The SAP Enqueue Replication Services (ERS) instance
Before following the procedures outlined in the appendix, there are a few steps necessary to prepare the
environment for the SAP software installation.
When using Oracle RAC for the SAP database, the following generated shell scripts create HA services for each
application server:
#!/bin/sh
#Generated shell script to create oracle RAC services on database host.
#Login as the owner of the oracle database software (typically as user 'oracle') on the database host.
#Set the $ORACLE_HOME variable to the home location of the database.
#
$ORACLE_HOME/bin/srvctl add service -db QS1 -service QS1_DVEBMGS00 -preferred QS1001 -available QS1002 -tafpolicy BASIC -policy AUTOMATIC -notification TRUE -failovertype SELECT -failovermethod BASIC -failoverretry 3 -failoverdelay 5
$ORACLE_HOME/bin/srvctl start service -db QS1 -service QS1_DVEBMGS00
#!/bin/sh
#Generated shell script to create oracle RAC services on database host.
#Login as the owner of the oracle database software (typically os user 'oracle') on the database host.
#Set the $ORACLE_HOME variable to the home location of the database.
#
$ORACLE_HOME/bin/srvctl add service -db QS1 -service QS1_D10 -preferred QS1001 -available QS1002 -tafpolicy BASIC -policy AUTOMATIC -notification TRUE -failovertype SELECT -failovermethod BASIC -failoverretry 3 -failoverdelay 5
$ORACLE_HOME/bin/srvctl start service -db QS1 -service QS1_D10
Set the following parameters for the root user to control where SAP installation logs are created:
root@dlaz-202> export TMP=/sap-share/install/app/temp
root@dlaz-202> export TMPDIR=/sap-share/install/app/temp
root@dlaz-202> export TEMP=/sap-share/install/app/temp
Finally, start the sapinst client interface to install the required SAP instances. Use the option
SAPINST_USE_HOSTNAME=<LOGICAL HOSTNAME> to install the ASCS, ERS, and APP servers to run in the zone
where the corresponding logical hostname is active:
root@dlaz-202> ./sapinst GUISERVER_DIALOG_PORT=21201 SAPINST_DIALOG_PORT=21213
SAPINST_USE_HOSTNAME=<LogicalHost>
First, modify the SAP directory structure to have the hostctrl directory local to each node. On node 1:
cd /usr
mkdir local
mkdir local/sap
su - qs1adm
cd /usr/sap
mv hostctrl/ hostctrl.old
cp -r hostctrl.old ../local/sap/hostctrl
ln -s ../local/sap/hostctrl .
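The intent of the sequence above is to leave a relative symlink on the NFS-shared /usr/sap that resolves to a node-local copy of hostctrl. The following sketch replays the same moves in a scratch directory so the effect can be observed without touching a real system; all paths are stand-ins:

```shell
# Scratch directory standing in for /usr on node 1.
ROOT=/var/tmp/usr-sketch
rm -rf "$ROOT"
mkdir -p "$ROOT/sap/hostctrl" "$ROOT/local/sap"
echo conf > "$ROOT/sap/hostctrl/host_profile"   # placeholder content

cd "$ROOT/sap"
mv hostctrl hostctrl.old                  # keep the original as a backup
cp -r hostctrl.old ../local/sap/hostctrl  # node-local copy under local/sap
ln -s ../local/sap/hostctrl .             # shared path now resolves locally
```

Because the symlink is relative, each node that mounts the shared /usr/sap resolves it to its own /usr/local/sap/hostctrl, which is why node 2 only needs the local copy and not a new symlink.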
On node 2:
mkdir /usr/local
mkdir /usr/local/sap
cd /usr/sap
cp -r hostctrl.old ../local/sap/hostctrl
SAP is installed on the node where the logical hostname is running—by default, this is node 1. To be able to start
SAP manually on either node, it’s necessary to create SAP users also on node 2. Manually creating home
directories and adding entries in /etc files is one approach to doing this. An alternative approach is to create scripts
that create the users, running the scripts on both nodes prior to starting the SAP install; in this approach, sapinst
recognizes that users are already defined and does not attempt to create new ones.
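Such a script might look like the following sketch. A staging file stands in for /etc/passwd, and the UID/GID values mirror the manual entries shown in the next step; on a real system the numeric IDs must match node 1 exactly. The function name and paths are hypothetical:

```shell
# Staging file standing in for /etc/passwd.
PASSWD=/var/tmp/passwd.stage
: > "$PASSWD"

# Add a user entry and home directory unless the user already exists.
add_sap_user() {
  name=$1 uid=$2 gid=$3 home=$4 shell=$5
  mkdir -p "$home"
  grep -q "^$name:" "$PASSWD" || \
    echo "$name:x:$uid:$gid:SAP System Administrator:$home:$shell" >> "$PASSWD"
}

add_sap_user qs1adm 100 101 /var/tmp/home/qs1adm /bin/csh
add_sap_user sapadm 101 101 /var/tmp/home/sapadm /bin/false
add_sap_user qs1adm 100 101 /var/tmp/home/qs1adm /bin/csh   # rerun is a no-op
```

Running the same script on both nodes before the install gives sapinst consistent, pre-existing users to pick up.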
To manually create SAP users on node 2, start by connecting to the zone containing the ASCS instance
(dlaz-201):
mkdir -p /export/home/qs1adm
mkdir -p /export/home/sapadm
echo "qs1adm:x:100:101:SAP System Administrator:/export/home/qs1adm:/bin/csh"
>>/etc/passwd
echo "sapadm:x:101:101:SAP System Administrator:/export/home/sapadm:/bin/false"
>>/etc/passwd
Modify /etc/services to include all SAP-related service entries. These entries are the same in all zones:
saphostctrl 1128/tcp # SAPHostControl over SOAP/HTTP
saphostctrl 1128/udp # SAPHostControl over SOAP/HTTP
saphostctrls 1129/tcp # SAPHostControl over SOAP/HTTPS
saphostctrls 1129/udp # SAPHostControl over SOAP/HTTPS
sapmsQS1 3600/tcp # SAP System Message Server Port
sapdp00 3200/tcp # SAP System Dispatcher Port
...
sapgw98s 4898/tcp # SAP System Gateway Security Port
sapgw99s 4899/tcp # SAP System Gateway Security Port
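Copying the entries can also be scripted: pull every sap* line from a reference /etc/services and append any a zone is missing. In this sketch, scratch files stand in for the real /etc/services files, and only a few entries are seeded:

```shell
# Scratch files standing in for /etc/services on two zones.
SRC=/var/tmp/services.ref
DST=/var/tmp/services.zone

printf '%s\n' \
  'saphostctrl 1128/tcp # SAPHostControl over SOAP/HTTP' \
  'saphostctrl 1128/udp # SAPHostControl over SOAP/HTTP' \
  'sapmsQS1 3600/tcp # SAP System Message Server Port' > "$SRC"
: > "$DST"

# Treat every entry whose service name starts with "sap" as SAP-related
# and append it unless the identical line is already present.
grep '^sap' "$SRC" | while read -r line; do
  grep -qF "$line" "$DST" || echo "$line" >> "$DST"
done
```

Matching on the whole line (rather than just the service name) keeps paired tcp and udp entries intact while still making repeated runs harmless.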
Update environment files that depend on a hostname using a script such as the following:
#!/bin/sh
for s1 in /export/home/qs1adm/.??*101*
do
s2=`echo $s1 | sed 's/101/201/'`
echo mv "$s1" "$s2"
mv "$s1" "$s2"
done
At this point, SAP <sid>adm users exist on both nodes and SAP can be started on either node. Next, it’s
recommended to test the ability to start and stop instances on both nodes and within all zones (ASCS, PAS, and
LC). To start SAP in a zone, first make sure that the logical hostname for the application that needs to be started is
running on that node (run these commands in dlaz-202):
clrg status pas-rg
clrg switch -n dlaz-202 pas-rg
su - qs1adm
startsap -i DVEBMGS00
SAP-specific agents are implemented as resource types in Oracle Solaris Cluster and are made available during the installation. The SAP-specific resource types need only be registered; once registered, they are available in the zone clusters and in the global zone of each node. Register the resource types as needed:
clrt register ORCL.sapstartsrv
clrt register ORCL.sapcentr
clrt register ORCL.saprepenq
clrt register ORCL.saprepenq_preempt
Create resource groups, resources, and affinities to manage the instances in the SAP ABAP stack using Oracle
Solaris Cluster.
ASCS
#ASCS resources
clrs create -d -g ascs-rg -t ORCL.sapstartsrv \
-p SID=QS1 \
-p sap_user=qs1adm \
-p instance_number=00 \
-p instance_name=ASCS00 \
-p host=dla-ascs-lh \
-p child_mon_level=5 \
-p resource_dependencies_offline_restart=usrsap-rs,sapmnt-rs \
-p timeout_return=20 \
ascs-startsrv-rs
ERS
clrs create -d -g ers-rg -t saprepenq \
-p sid=QS1 \
-p sap_user=qs1adm \
-p instance_number=15 \
-p instance_name=ERS15 \
-p host=dla-ers-lh \
-p debug_level=0 \
-p resource_dependencies=dla-rep-startsrv-rs \
-p resource_dependencies_offline_restart=usrsap-ascs-rs,sapmnt-rs \
-p START_TIMEOUT=300 \
dla-rep-rs
clrs create -d -g ascs-rg -t saprepenq_preempt \
-p sid=QS1 \
-p sap_user=qs1adm \
-p repenqres=dla-rep-rs \
-p enq_instnr=00 \
-p debug_level=0 \
-p resource_dependencies_offline_restart=dla-ascs-rs \
preempter-rs
# Weak positive affinity: ASCS prefers to restart on the node running ERS
clrg set -p RG_affinities=+ers-rg ascs-rg
Check the Oracle Solaris Cluster configuration of the ASCS and ERS resource types:
clrg show -p RG_affinities ascs-rg
clrs enable +
PAS
The Primary Application Server connects to the Oracle Database:
# clrt list
SUNW.LogicalHostname:5
SUNW.SharedAddress:3
SUNW.ScalMountPoint:4
ORCL.oracle_external_proxy
ORCL.sapstartsrv:2
ORCL.sapcentr:2
ORCL.saprepenq:2
ORCL.saprepenq_preempt:2
Create the Oracle Database monitoring agent. The agent can be configured to monitor either an Oracle Database single instance or Oracle RAC.
-p instance_number=00 \
-p instance_name=DVEBMGS00 \
-p host=dla-pas-lh \
-p child_mon_level=5 \
-p resource_dependencies_offline_restart=\
scalosc7sn02-storIB_export_SAP_usr_sap_pas-rs,sapmnt-rs,\
scalosc7sn02-storIB_export_SAP_sapdb_pas-rs \
-p timeout_return=20 \
pas-startsrv-rs
# Note: PAS was installed using the InfiniBand hostname im7pr1-pas-lh
Oracle Solaris Cluster provides the HA for Oracle External Proxy resource type, which interrogates an Oracle Database or Oracle RAC service and interprets the availability of that service as part of an Oracle Solaris Cluster configuration. To configure this resource type, connect to one of the database zones as the user oracle and create a user that will be used by the Oracle External Proxy resource:
oracle@osc7cn01-z1:~$ srvctl status database -d LEX
Instance QS1001 is running on node osc7cn01-z1
Instance QS1002 is running on node osc7cn02-z1
oracle@osc7cn01-z1:~$ export ORACLE_HOME=/oracle/QS1/121
oracle@osc7cn01-z1:~$ export ORACLE_SID=QS1001
oracle@osc7cn01-z1:~$ sqlplus "/as sysdba"
SQL> create user hauser identified by hauser;
SQL> grant create session to hauser;
SQL> grant execute on dbms_lock to hauser;
SQL> grant select on v_$instance to hauser;
SQL> grant select on v_$sysstat to hauser;
SQL> grant select on v_$database to hauser;
SQL> create profile hauser limit PASSWORD_LIFE_TIME UNLIMITED;
SQL> alter user hauser identified by hauser profile hauser;
SQL> exit
In each zone where the agent is running and connecting to the Oracle Database, it is necessary to set up
tnsnames.ora and encrypted password files. Create /var/opt/oracle/tnsnames.ora as the default location for
tnsnames.ora:
mkdir -p /var/opt/oracle
cat << EOF >/var/opt/oracle/tnsnames.ora
QS1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = osc7cn01-z1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = osc7cn02-z1-vip)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = QS1)
)
)
EOF
Create the same directory in the ASCS zone (dlaz-201) and copy the file to it:
ssh dlaz-201 mkdir -p /var/opt/oracle
scp /var/opt/oracle/tnsnames.ora dlaz-201:/var/opt/oracle
APP
To optimize resource use in the PAS zone cluster, an additional SAP application server (APP) is also installed and managed by Oracle Solaris Cluster.
Because the agent that monitors Oracle Database services was already created, there is no need to recreate it.
Next, configure zone cluster dependencies across nodes. Connect to the global zone in the APP domain and execute:
root@osc3cn01-d3:~# clrs list -Z dla-pas -t ORCL.sapdia
dla-pas:pas-rs
dla-pas:d10-rs
root@osc3cn01-d3:~# clrs set -Z dla-pas -p Resource_dependencies+=dla-pas:oep-rs pas-rs
d10-rs
Confirm that the dependencies are set properly for the APP zone:
root@osc3cn01-d3:~# clrs show -p Resource_dependencies -t ORCL.sapdia +
Similarly, for the liveCache zone cluster, create home directories for the liveCache users on node 2:
mkdir -p /export/home/qh1adm
mkdir -p /export/home/sdb
Next, copy the contents of the home directories and update the ownership:
cd /export/home
scp -p -r dlaz-100:/export/home/qh1adm .
scp -p -r dlaz-100:/export/home/sdb .
chown -R qh1adm:sapsys qh1adm
chown -R sdb:sdba sdb
Another approach to moving content over is to tar the directory to a shared location and extract the tar file on the other node. This approach preserves ownership and access permissions:
root@dlaz-100:~# tar -cfB /sap-share/util/opt-sdb.tar /etc/opt/sdb
root@dlaz-200:~# tar -xfB /sap-share/util/opt-sdb.tar /etc/opt/sdb
root@dlaz-200:~# ln -s /sapdb/data/wrk /sapdb/QH1/db/wrk
-----------------------------------------------------------------
XUSER Entry 1
--------------
Key :DEFAULT
Username :CONTROL
UsernameUCS2 :.C.O.N.T.R.O.L. . . . . . . . . . . . . . . . . . . . . . . . .
Password :?????????
PasswordUCS2 :?????????
PasswordUTF8 :?????????
Dbname :QH1
Nodename :dla-lc-lh
Sqlmode :<unspecified>
Cachelimit :-1
Timeout :-1
Isolation :-1
Charset :<unspecified>
-----------------------------------------------------------------
XUSER Entry 2
--------------
Key :1QH1dla-lc-lh
Username :CONTROL
UsernameUCS2 :.C.O.N.T.R.O.L. . . . . . . . . . . . . . . . . . . . . . . . .
Password :?????????
PasswordUCS2 :?????????
PasswordUTF8 :?????????
Dbname :QH1
Nodename :dla-lc-lh
Sqlmode :<unspecified>
Cachelimit :-1
Timeout :-1
Isolation :-1
Charset :<unspecified>
dlaz-100:qh1adm 15% xuser -U 1QH1dla-lc-lh -u control,control20 clear
dlaz-100:qh1adm 16% xuser list
-----------------------------------------------------------------
XUSER Entry 1
--------------
Key :DEFAULT
Username :CONTROL
UsernameUCS2 :.C.O.N.T.R.O.L. . . . . . . . . . . . . . . . . . . . . . . . .
Password :?????????
PasswordUCS2 :?????????
PasswordUTF8 :?????????
Dbname :QH1
Nodename :dla-lc-lh
Sqlmode :<unspecified>
Cachelimit :-1
Timeout :-1
Isolation :-1
Charset :<unspecified>
In the lcinit script, replace:
dbmcli -d $DATABASE -u $DBMUSER exec_lcinit $INITMODE $DEBUG $SAPUSER $ENCODING >>
/tmp/log2 2>&1
with:
dbmcli -U DEFAULT exec_lcinit $INITMODE $DEBUG $SAPUSER $ENCODING >> /tmp/log2 2>&1
Verify that liveCache can be queried and stopped on node 1:
dbmcli -U DEFAULT db_state
dbmcli -U DEFAULT db_enum
ps -ef | grep -i QH1
lcinit QH1 shutdown
dbmcli -U DEFAULT db_state
dbmcli -U DEFAULT db_enum
ps -ef | grep -i QH1
On node 2:
clrg switch -n dlaz-200 lc-rg
dbmcli -U DEFAULT db_state
dbmcli -U DEFAULT db_enum
ps -ef | grep -i QH1
lcinit QH1 shutdown
dbmcli -U DEFAULT db_state
dbmcli -U DEFAULT db_enum
ps -ef | grep -i QH1
Edit the lccluster script, replacing “put-LC_NAME-here” with the SAP liveCache instance name (“QH1” in this implementation). Then register the liveCache resource types:
clrt register SUNW.sap_livecache
clrt register SUNW.sap_xserver
To test a switchover of the liveCache resource group, enter:
root@dlaz-100:~# clrg switch -n dlaz-200 lc-rg
root@dlaz-100:~# clrs status
Figure 62. Oracle Solaris Cluster Manager interface.
In addition to the management interface, Oracle Solaris Cluster provides simple CLI commands (clrg status, clrs status, and cluster check) that are useful for monitoring status. Details on these commands are available in the Oracle Solaris Cluster 4.3 Reference Manual.
During testing, be sure to check the following log files in the global zone for more information:
» /var/cluster/logs, including eventlog and commandlog
» /var/adm/messages
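During failover testing, the status commands and logs can be captured together in one snapshot; a sketch (the wrapper script and log path are illustrative, not an Oracle-provided tool):

```shell
#!/bin/sh
# Illustrative wrapper: capture a timestamped snapshot of cluster status
# during failover testing. clrg, clrs, and cluster are the Oracle Solaris
# Cluster CLIs; on a non-cluster host the resulting errors are simply
# recorded in the log.
LOG=/var/tmp/ha-status.$$.log
{
    echo "== resource groups (clrg status) =="
    clrg status
    echo "== resources (clrs status) =="
    clrs status
    echo "== consistency check (cluster check) =="
    cluster check
} > "$LOG" 2>&1
echo "status snapshot saved to $LOG"
```

Comparing snapshots taken before and after each induced failure makes it easier to spot resources that did not return to their expected state.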
References
For more information about SAP applications on Oracle infrastructure, visit these sites:
» Oracle Solution Centers for SAP, http://www.oracle.com/us/solutions/sap/services/overview/index.html
» Oracle Solaris Cluster Data Service for SAP liveCache Guide, https://docs.oracle.com/cd/E56676_01/html/E63549/index.html
» Oracle Solaris Cluster 4.3 documentation, https://docs.oracle.com/cd/E56676_01/
» Oracle Solaris Cluster Downloads, http://www.oracle.com/technetwork/server-storage/solaris-cluster/downloads/index.html
» Oracle Technology Network article series: Best Practices for Migrating SAP Systems to Oracle Infrastructure
» Oracle Database and IT Infrastructure for SAP:
http://www.oracle.com/us/solutions/sap/introduction/overview/index.html
» Oracle SuperCluster: oracle.com/supercluster
» Oracle ZFS Storage Appliance: oracle.com/storage/nas/
» Oracle Solaris: https://www.oracle.com/solaris/
» Oracle Optimized Solution for SAP: https://www.oracle.com/solutions/optimized-solutions/sap.html
» SAP Community Network (SCN) on Oracle site: https://go.sap.com/community/topic/oracle.html
» SAP Community Network (SCN) on Oracle Solaris: https://go.sap.com/community/topic/oracle-solaris.html
» Additional collateral: oracle.com/us/solutions/sap/it-infrastructure/resources/
The procedures and solution configuration described in this document are based on an actual customer implementation. Oracle gratefully acknowledges this customer's generous sharing of information and tested procedures from their deployment experience.
Appendix A: Installing SAP SCM
To install SAP SCM and SAP liveCache, use the graphical sapinst client interface. There are six major
components to install:
For additional information about installing SAP SCM using Oracle Database on an Oracle infrastructure, see the list of Installation References at the end of this appendix.
Step 2. Define parameters for installing the ASCS instance.
Step 3. Review the parameters to install and start the ASCS instance.
The screenshots show the parameter summary for the example SAP SCM installation.
Installing the Oracle Database
The next task is to install the primary Oracle RAC node. Before starting sapinst, first set the environment variables TMP, TMPDIR, and TEMP.
SAP expects /oracle/<SID> to exist on the database servers. Create the directory and mount either /oracle or /oracle/<SID> from a share on the internal or an external ZFS storage appliance. If /oracle/<SID> is not mounted, logs generated by the Oracle Database will fill the root (/) file system and could cause node panics.
In /oracle/<SID>, create a soft link to the ORACLE_HOME already installed in the database zone or domain:
ln -s /u01/app/oracle/product/12.1.0.2/dbhome_1 /oracle/<SID>/121
Step 1. Run the SAP software provisioning application sapinst and select the option to install the database
instance.
» Listener configuration. Name (LISTENER); port (1521); network configuration files (keep listener.ora and tnsnames.ora).
» Parameters for Oracle Grid. Path to the software (/u01/app/12.1.0.2/grid); ORACLE_SID for Grid (+ASM1).
» Configuration of the available Oracle ASM disk groups. Names (+DATAC1, +RECOC1); parameter compatible in init.ora (11.2.0.2.0).
» Parameters for Oracle RAC. Database name (QS1); number of instances (2); SCAN listener IP address; SCAN listener port (1521); length of instance number (three characters: 001 … 009).
» Parameters for the secondary RAC node: Host name, init.ora parameters (including IP address of
remote_listener)
» Advanced configuration (select SAPDATA Directory Mapping).
» Parameters for additional SAPDATA directories, if needed.
» General load parameters: SAP Code page (4102); Number of Parallel jobs (3).
» Create database statistics at the end of the import using the program call
brconnect -u / -c -o summary -f stats -o SAPSR3 -t all -p 0
» Location of the Oracle Database 12c client software packages:
/app-archive/solaris/oracle12c/51050177/OCL_SOLARIS_SPARC
» Archives to be automatically unpacked.
» Location of the SAP liveCache software:
/app-archive/solaris/scm/SAP_SCM_7.0_EHP2_liveCache_7.9_/DATA_UNITS/LC_SOLARIS_SPARC
The screenshots show the parameter summary for the example SAP SCM installation.
If an error occurs, identify and resolve the error condition. For example, the database home parameter must point to a valid ORACLE_HOME directory, which can be linked in a UNIX command window:
# mv /oracle/QS1/121 /oracle/QS1/121.bak
# ln -s /u01/app/oracle/product/12.1.0.2/dbhome_1 /oracle/QS1/121
# chown -h oracle:oinstall /oracle/QS1/121
# cp -ip /oracle/QS1/121.bak/dbs/initQS1.ora /oracle/QS1/121/dbs
After resolving the error, click Retry to continue the installation. If an error occurs in which the import_monitor.java.log file contains an error message about a lock file, this is a known issue when an NFS file system is used for TMP. Shut down sapinst, move /oes_db_dumps/qs1/temp to a local file system, and then restart sapinst; the previous installation run can then be continued.
Another important aspect of the installation is the size (number of threads and total memory) of the database domain. The SAP installer configures a database that takes advantage of a large percentage of the available resources, regardless of other installed databases or of SAP systems still to be installed. One way of reducing the resources allocated to the database is to add a custom parameter, max_parallel_servers, during the installation. Calculate its value from the maximum number of cores allocated to the SCM database, remembering that each SPARC core has eight threads.
An example of deriving max_parallel_servers from the SAPS allocated to the database:
max_parallel_servers = (SAPS/3000, rounded up to whole cores) * 8
For 5000 SAPS, set max_parallel_servers=16: each SPARC core is rated at approximately 3000 SAPS, so 5000 SAPS rounds up to two cores, and 2 cores * 8 threads = 16.
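The sizing rule can be sketched as a small calculation (assuming, as stated above, roughly 3000 SAPS per SPARC core and 8 hardware threads per core; the saps_to_mps helper name is illustrative):

```shell
#!/bin/sh
# Worked example of the sizing rule: round the SAPS figure up to whole
# cores, then multiply by 8 threads per core.
saps_to_mps() {   # arg: SAPS allocated to the database
    cores=$(( ($1 + 2999) / 3000 ))   # ceiling of SAPS/3000
    echo $(( cores * 8 ))
}

echo "max_parallel_servers=$(saps_to_mps 5000)"   # 5000 SAPS -> 2 cores -> 16
```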
Step 1. Run the SAP software provisioning application sapinst and select the option to install the Central Instance.
Step 3. Review the parameters to install the Central Instance.
The screenshot below shows a summary of defined parameters for the example SAP SCM installation.
Step 3. Run the generated script when prompted to update the service in the Oracle RAC environment.
A message appears in the “Update service parameter in RAC environment” phase. Run the generated script (QS1_DVEBMGS01.sh) to create the Oracle RAC service on the database host. Afterwards, continue with the sapinst installation of the Central Instance. If errors occur in the “Start Instance” phase, check the log files. (If a hostname is assigned to the loopback address 127.0.0.1, an error may occur; this can be resolved by fixing the localhost entry in /etc/hosts.) After resolving any errors, click Retry in the sapinst window to continue.
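As an illustration of that fix, a correct hosts file keeps only the localhost names on the loopback address and maps the zone's hostname to its real IP. A minimal sketch (the 192.0.2.20 address is a placeholder, and dlaz-202 stands for the local zone's hostname):

```
127.0.0.1    localhost loghost
192.0.2.20   dlaz-202    # zone hostname on its network interface
```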
Step 1. Run the SAP software provisioning application sapinst and select the option to install the Dialog Instance.
The screenshot below shows a summary of defined parameters for the example SAP SCM installation.
Step 4. Run the generated script when prompted to update the service in the Oracle RAC environment.
A message appears in the “Update service parameter in RAC environment” phase. Run the generated script (QS1_D00.sh) to create the Oracle RAC service on the database host. Return to sapinst and click OK to continue the installation of the Dialog Instance.
Step 1. Run the SAP software provisioning application sapinst and select the option to install the SAP liveCache
Server.
» Parameter mode: Custom. Allow sapinst to set the read and execute bits on directories as necessary. An error may also appear regarding the amount of swap space recognized by sapinst for the liveCache server; this error may be ignored as long as the command swap -sh indicates that adequate swap space is available.
» SAP liveCache ID (QH1).
» Master SAP password (this is the same password previously used to install components; MaxDB requires this
password to be 8 or 9 characters in length).
» User and group information for the liveCache database software owner, which was created previously.
» Path to the liveCache software:
/app-archive/solaris/scm/SAP_SCM_7.0_EHP2_liveCache_7.9_/DATA_UNITS/LC_SOLARIS_SPARC
» Passwords for the liveCache system administrator (superdba) and liveCache manager operator (control).
» liveCache user name (SAPQH1) and password.
» Parameters for the liveCache server instance. Volume Medium Type (File System); number of CPUs to be used
concurrently (4). Setting a higher number of CPUs for concurrent use (such as 256) may result in an installation
error.
» Minimum log size (1000 MB) and log volume location (/sapdb/QH1/saplog).
» Minimum data volume size (5462 MB) and data volume locations (for example, /sapdb/QH1/sapdata1, /sapdb/QH1/sapdata2, and so on).
The screenshot below shows a summary of parameters defined in the example SAP SCM installation.
If an error occurs, make sure that the maximum number of concurrently used CPUs is set to 4. SAP Note 1656325 also suggests replacing /sapdb/QH1/db/env/cserv.pcf with the file provided.
The final component to install in the SAP SCM installation with liveCache is the ERS instance. Before starting sapinst, first set the environment variables TMP, TMPDIR, and TEMP.
Step 1. Run the SAP software provisioning application sapinst and select the option to install the ERS instance.
Step 3. Review the parameters and install the ERS instance.
The screenshot below shows a summary of defined parameters for the example SAP SCM installation.
After the ERS instance is installed and the ASCS instance is restarted successfully, the process of installing SAP SCM with liveCache is complete.
Installation References
Refer to the following resources for more information:
Oracle Corporation, World Headquarters: 500 Oracle Parkway, Redwood Shores, CA 94065, USA
Worldwide Inquiries: Phone +1.650.506.7000 | Fax +1.650.506.7200

CONNECT WITH US
blogs.oracle.com/oracle
facebook.com/oracle
twitter.com/oracle
oracle.com/sap

Copyright © 2016, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0615
How to Deploy SAP SCM with SAP liveCache in an HA Configuration on Oracle SuperCluster
November 2016
Author: Victor Gails