
STEP BY STEP INSTALLATION

OF
ORACLE RAC 12cR1 (12.1.0.2)
ON LINUX X86-64

Oracle Global Customer Support - RAC / Scalability


Copyright 1993, 2014, Oracle and/or its affiliates. All rights reserved

Contents
1.      Introduction ....................................................................................................................................... 3
1.1     Oracle Grid Infrastructure Installation Server Hardware Checklist ........................................... 3
1.1.1   Server Hardware Checklist for Oracle Grid Infrastructure ................................................. 3
1.1.2   Environment Configuration for Oracle Grid Infrastructure and Oracle RAC ..................... 3
1.1.3   Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC ................... 4
1.1.4   Oracle Grid Infrastructure Storage Configuration Checks ................................................. 5
1.2     Configuring Servers for Oracle Grid Infrastructure and Oracle RAC .......................................... 6
1.2.1   Checking Server Hardware and Memory Configuration .................................................... 6
1.2.2   Server Storage Minimum Requirements ............................................................................ 7
1.2.3   64-bit System Memory Requirements ............................................................................... 7
1.3     Operating System Requirements for x86-64 Linux Platforms ................................................... 7
1.3.1   Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64 ......... 8
1.3.2   Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64 ......... 9
1.3.3   Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64 ....... 10
2.      Prepare the cluster nodes for Oracle RAC ....................................................................................... 12
2.1     User Accounts ............................................................................................................................... 12
2.2     Networking ................................................................................................................................... 12
2.3     Synchronizing the Time on ALL Nodes .......................................................................................... 14
2.4     Installing the Oracle Preinstallation RPM with ULN support ....................................................... 14
2.5     Configuring Kernel Parameters .................................................................................................... 15
2.6     Set shell limits for the oracle user ................................................................................................ 16
2.7     Create the Oracle Inventory Directory ......................................................................................... 17
2.8     Creating the Oracle Grid Infrastructure Home Directory ............................................................. 17
2.9     Creating the Oracle Base Directory .............................................................................................. 17
2.10    Creating the Oracle RDBMS Home Directory ............................................................................... 17
2.11    Stage the Oracle Software ........................................................................................................... 17
3.      Prepare the shared storage for Oracle RAC ......................................................................................... 18
3.1     Shared Storage .............................................................................................................................. 18
3.1.1   Partition the Shared Disks ...................................................................................................... 18
3.1.2   Installing and Configuring ASMLib ......................................................................................... 19
3.1.3   Using ASMLib to Mark the Shared Disks as Candidate Disks ................................................. 20
3.2     Setting Disk I/O Scheduler on Linux ............................................................................................. 21
4.      Oracle Grid Infrastructure Install ..................................................................................................... 22
5.      RDBMS Software Install ................................................................................................................... 45
6.      ASMCA to create Diskgroups ........................................................................................................... 57
7.      Run DBCA to create the database.................................................................................................... 60
8.      Applying Latest PSUs to GRID & RDBMS Homes .............................................................................. 74

1. Introduction

1.1 Oracle Grid Infrastructure Installation Server Hardware Checklist

1.1.1 Server Hardware Checklist for Oracle Grid Infrastructure

Server hardware: Verify that the server make, model, core architecture, and host bus adaptors (HBAs)
are supported to run with Oracle RAC.
Network Switches:

Public network switch, at least 1 GbE, connected to a public gateway.

Private network switch, at least 1 GbE, with 10 GbE recommended, dedicated for use only
with other cluster member nodes. The interface must support the user datagram protocol
(UDP) using high-speed network adapters and switches that support TCP/IP. Alternatively, use
InfiniBand for the interconnect.

Runlevel: Servers should be either in runlevel 3 or runlevel 5.


Random Access Memory (RAM): At least 4 GB of RAM for Oracle Grid Infrastructure for a Cluster
installations, including installations where you plan to install Oracle RAC.
Temporary disk space allocation: At least 1 GB allocated to /tmp.
Storage hardware: Either Storage Area Network (SAN) or Network-Attached Storage (NAS).
Local Storage Space for Oracle Software

At least 8 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle
recommends that you allocate 100 GB to allow additional space for patches.

At least 12 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner
(Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.

For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB of
disk space for the Oracle home (the location for the Oracle Database software binaries).

1.1.2 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC

Create Groups and Users. A user created to own only Oracle Grid Infrastructure software installations
is called the grid user. A user created to own either all Oracle installations, or only Oracle database
installations, is called the oracle user.
Create mount point paths for the software binaries. Oracle recommends that you follow the
guidelines for an Optimal Flexible Architecture configuration.
Review Oracle Inventory (oraInventory) and OINSTALL Group Requirements. The Oracle Inventory
directory is the central inventory of Oracle software installed on your system. Users who have the
Oracle Inventory group as their primary group are granted the OINSTALL privilege to write to the
central inventory.
Ensure that the Grid home (the Oracle home path you select for Oracle Grid Infrastructure) uses
only ASCII characters.

Unset Oracle software environment variables. If you have set ORA_CRS_HOME as an environment
variable, then unset it before starting an installation or upgrade. Do not use ORA_CRS_HOME as a
user environment variable.
If you have had an existing installation on your system, and you are using the same user account to
install this installation, then unset the following environment
variables: ORA_CRS_HOME; ORACLE_HOME; ORA_NLS10; TNS_ADMIN.
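For example, in a Bourne-type shell the variables can be cleared in the session used to start the
installer (a minimal illustration):

$ unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN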
1.1.3 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC

Public Network Hardware:

Public network switch (redundant switches recommended) connected to a public gateway and
to the public interface ports for each cluster member node.

Ethernet interface card (redundant network cards recommended, bonded as one Ethernet
port name).

The switches and network interfaces must be at least 1 GbE.

The network protocol is TCP/IP.

Private Network Hardware for the Interconnect

Private dedicated network switches (redundant switches recommended), connected to the


private interface ports for each cluster member node. NOTE: If you have more than one
private network interface card for each server, then Oracle Clusterware automatically
associates these interfaces for the private network using Grid Interprocess Communication
(GIPC) and Grid Infrastructure Redundant Interconnect, also known as Cluster High Availability
IP (HAIP).

The switches and network interface adapters must be at least 1 GbE, with 10 GbE
recommended. Alternatively, use InfiniBand for the interconnect.

The interconnect must support the user datagram protocol (UDP).

Oracle Flex ASM Network Hardware

Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its
own dedicated private networks. Each network can be classified PUBLIC or PRIVATE+ASM or
PRIVATE or ASM. ASM networks use the TCP protocol.

Cluster Names and Addresses: Determine and configure the following names and addresses for the
cluster

Cluster name: Decide a name for the cluster, and be prepared to enter it during installation.
The cluster name should have the following characteristics:

Globally unique across all hosts, even across different DNS domains.
At least one character long and less than or equal to 15 characters long.

Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS
name and fixed address on the DNS for the GNS VIP, and configure a subdomain on your DNS
delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is
mandatory with dynamic public networks (DHCP, autoconfiguration).

Single Client Access Name (SCAN) and addresses

Using Grid Naming Service Resolution: Do not configure SCAN names and addresses in
your DNS. SCANs are managed by GNS.

Using Manual Configuration and DNS resolution: Configure a SCAN name to resolve to
three addresses on the domain name service (DNS).

Standard or Hub Node Public, Private and Virtual IP names and Addresses:

Public node name and address, configured on the DNS and in /etc/hosts (for example,
node1.example.com, address 192.0.2.10). The public node name should be the primary host
name of each node, which is the name displayed by the hostname command.

Private node address, configured on the private interface for each node.
The private subnet that the private interfaces use must connect all the nodes you intend to
have as cluster members. Oracle recommends that the network you select for the private
network uses an address range defined as private by RFC 1918.

Public node virtual IP name and address (for example, node1-vip.example.com, address 192.0.2.11).

1.1.4 Oracle Grid Infrastructure Storage Configuration Checks


During installation, you are asked to provide paths for the following Oracle Clusterware files. These
path locations must be writable by the Oracle Grid Infrastructure installation owner (Grid user). These
locations must be shared across all nodes of the cluster, either on Oracle ASM (preferred), or on a
cluster file system, because the files created during installation must be available to all cluster
member nodes.

Voting files are files that Oracle Clusterware uses to verify cluster node membership and
status. The location for voting files must be owned by the user performing the installation
(oracle or grid), and must have permissions set to 640.

Oracle Cluster Registry files (OCR) contain cluster and database configuration information for
Oracle Clusterware. Before installation, the location for OCR files must be owned by the user
performing the installation (grid or oracle). That installation user must have oinstall as its
primary group. During installation, the installer creates the OCR files and changes ownership
of the path and OCR files to root.

1.2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC

1.2.1 Checking Server Hardware and Memory Configuration


Run the following commands to gather your current system information:
1. To determine the physical RAM size, enter the following command:
# grep MemTotal /proc/meminfo
If the size of the physical RAM installed in the system is less than the required size, then you must
install more memory before continuing.
2. To determine the size of the configured swap space, enter the following command:
# grep SwapTotal /proc/meminfo
If necessary, see your operating system documentation for information about how to configure
additional swap space.
3. To determine the amount of space available in the /tmp directory, enter the following
command:
# df -h /tmp
4. To determine the amount of free RAM and disk swap space on the system, enter the following
command:
# free
5. To determine if the system architecture can run the software, enter the following command:
# uname -m
Verify that the processor architecture matches the Oracle software release to install. For
example, you should see the following for an x86-64 system:
x86_64
If you do not see the expected output, then you cannot install the software on this system.
6. Verify that shared memory (/dev/shm) is mounted properly with sufficient size using the
following command:
# df -h /dev/shm

1.2.2 Server Storage Minimum Requirements


Each system must meet the following minimum storage requirements:

1 GB of space in the /tmp directory.

If the free space available in the /tmp directory is less than what is required, then complete
one of the following steps:
o Delete unnecessary files from the /tmp directory to make available the space required.
o Extend the file system that contains the /tmp directory. If necessary, contact your
  system administrator for information about extending file systems.

At least 8.0 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home).
Oracle recommends that you allocate 100 GB to allow additional space for patches.

Up to 10 GB of additional space in the Oracle base directory of the Grid Infrastructure
owner for diagnostic collections generated by Trace File Analyzer (TFA) Collector.

At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation
owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.

For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB
of disk space for the Oracle home (the location for the Oracle Database software binaries).

1.2.3 64-bit System Memory Requirements


Each system must meet the following memory requirements:

At least 4 GB of RAM for Oracle Grid Infrastructure for cluster installations, including
installations where you plan to install Oracle RAC.

Swap space equivalent to the multiple of the available RAM, as indicated in the following table:

Available RAM              Swap Space Required
Between 4 GB and 16 GB     Equal to RAM
More than 16 GB            16 GB of RAM

1.3 Operating System Requirements for x86-64 Linux Platforms

The Linux distributions and packages listed in this section are supported for this release on x86-64. No
other Linux distributions are supported.
Identify operating system requirements for Oracle Grid Infrastructure, and identify additional
operating system requirements for Oracle Database and Oracle RAC installations.

Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64

Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64

Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64

Supported SUSE Distributions for x86-64

1.3.1 Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64

Use the following information to check supported Oracle Linux 7 and Red Hat Linux 7 distributions:
Oracle Linux 7
Supported distributions:

Oracle Linux 7 with the Unbreakable Enterprise kernel: 3.8.13-33.el7uek.x86_64 or later

Oracle Linux 7 with the Red Hat Compatible kernel: 3.10.0-54.0.1.el7.x86_64 or later

Red Hat Enterprise Linux 7


Supported distribution:

Red Hat Enterprise Linux 7: 3.10.0-54.0.1.el7.x86_64 or later

Packages for Oracle Linux 7 and Red Hat Enterprise Linux 7


binutils-2.23.52.0.1-12.el7.x86_64
compat-libcap1-1.10-3.el7.x86_64
gcc-4.8.2-3.el7.x86_64
gcc-c++-4.8.2-3.el7.x86_64
glibc-2.17-36.el7.i686
glibc-2.17-36.el7.x86_64
glibc-devel-2.17-36.el7.i686
glibc-devel-2.17-36.el7.x86_64
ksh
libaio-0.3.109-9.el7.i686
libaio-0.3.109-9.el7.x86_64
libaio-devel-0.3.109-9.el7.i686
libaio-devel-0.3.109-9.el7.x86_64
libgcc-4.8.2-3.el7.i686
libgcc-4.8.2-3.el7.x86_64
libstdc++-4.8.2-3.el7.i686
libstdc++-4.8.2-3.el7.x86_64
libstdc++-devel-4.8.2-3.el7.i686
libstdc++-devel-4.8.2-3.el7.x86_64
libXi-1.7.2-1.el7.i686
libXi-1.7.2-1.el7.x86_64
libXtst-1.2.2-1.el7.i686
libXtst-1.2.2-1.el7.x86_64
make-3.82-19.el7.x86_64
sysstat-10.1.5-1.el7.x86_64

1.3.2 Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64

Use the following information to check supported Oracle Linux 6 and Red Hat Linux 6 distributions:
Oracle Linux 6
Supported distributions:

Oracle Linux 6 with the Unbreakable Enterprise kernel: 2.6.39-200.24.1.el6uek.x86_64 or later

Oracle Linux 6 with the Red Hat Compatible kernel: 2.6.32-71.el6.x86_64 or later

Red Hat Enterprise Linux 6


Supported distribution:

Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later

Packages for Oracle Linux 6 and Red Hat Enterprise Linux 6


binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (i686)
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (i686)
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (i686)
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6 (i686)
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6 (i686)
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6 (i686)
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)

libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)

1.3.3 Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64

Use the following information to check supported Oracle Linux 5 and Red Hat Linux 5 distributions:
Oracle Linux 5
Supported distributions:

Oracle Linux 5 Update 6 with the Unbreakable Enterprise kernel: 2.6.32-100.0.19 or later

Oracle Linux 5 Update 6 with the Red Hat compatible kernel: 2.6.18-238.0.0.0.1.el5 or later

Red Hat Enterprise Linux 5


Supported distribution:

Red Hat Enterprise Linux 5: 2.6.18-238.0.0.0.1.el5 or later

Packages for Oracle Linux 5 and Red Hat Enterprise Linux 5


binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
coreutils-5.97-23.el5_4.1
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-58
glibc-2.5-58 (32 bit)
glibc-devel-2.5-58
glibc-devel-2.5-58 (32 bit)
ksh
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
libXext-1.0.1
libXext-1.0.1 (32 bit)

libXtst-1.0.1
libXtst-1.0.1 (32 bit)
libX11-1.0.3
libX11-1.0.3 (32 bit)
libXau-1.0.1
libXau-1.0.1 (32 bit)
libXi-1.0.1
libXi-1.0.1 (32 bit)
make-3.81
sysstat-7.0.2
The following command can be run on the system to list the currently installed packages:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
Any missing RPM from the list above should be installed using the "--aid" option of "/bin/rpm" to ensure
all dependent packages are resolved and installed as well.
NOTE: Be sure to check on all nodes that the Linux Firewall and SELinux are disabled.
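For example, on Oracle Linux 5/6 the following commands (run as root on each node) can be used to
check and stop the firewall and to put SELinux into a non-enforcing state. This is only a minimal
sketch; adjust to your distribution, and set SELINUX=disabled or permissive in /etc/selinux/config so
the SELinux change persists across reboots:

# /sbin/service iptables status
# /sbin/service iptables stop
# /sbin/chkconfig iptables off
# /usr/sbin/getenforce
# /usr/sbin/setenforce 0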


2. Prepare the cluster nodes for Oracle RAC

2.1. User Accounts


NOTE: We recommend different users for the installation of the Grid Infrastructure (GI) and the Oracle
RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid
install, the GI home will be owned by root and inaccessible to unauthorized users.
1. Create the OS groups using the commands below. Enter the commands as the root user:

#/usr/sbin/groupadd oinstall
#/usr/sbin/groupadd dba
#/usr/sbin/groupadd asmadmin
#/usr/sbin/groupadd asmdba
#/usr/sbin/groupadd asmoper
2. Create the users that will own the Oracle software using the commands:

#/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -m grid
#/usr/sbin/useradd -g oinstall -G dba,asmdba -d /home/oracle -m oracle

3. Set the passwords for the oracle and grid accounts using the following commands. Replace password
with your own password.
#passwd oracle
Changing password for user oracle.
New UNIX password: password
retype new UNIX password: password
passwd: all authentication tokens updated successfully.
#passwd grid
Changing password for user grid.
New UNIX password: password
retype new UNIX password: password
passwd: all authentication tokens updated successfully
Repeat Step 1 through Step 3 on each node in your cluster.
OUI can set up passwordless SSH for you; if you want to configure this yourself, refer to MOS Note 300548.1
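To verify on each node that the accounts and group memberships were created as intended, the id
command can be used; the output should show oinstall as the primary group plus the ASM groups created
above:

# id grid
# id oracle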

2.2 Networking

NOTE: This section is intended to be used for installations NOT using GNS.
Determine your cluster name. The cluster name should satisfy the following conditions:

The cluster name is globally unique throughout your host domain.

The cluster name is at least 1 character long and less than 15 characters long.

The cluster name must consist of the same character set used for host names: single-byte
alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).


Determine the public host name for each node in the cluster. For the public host name, use the
primary hostname of each node. In other words, use the name displayed by the hostname command
for example: racnode1.
Determine the public virtual hostname for each node in the cluster. The virtual host name is a public
node name that is used to reroute client requests sent to the node if the node is down. Oracle
recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip.
The virtual hostname must meet the following requirements:

The virtual IP address and the network name must not be currently in use.

The virtual IP address must be on the same subnet as your public IP address.

The virtual host name for each node should be registered with your DNS.

Determine the private hostname for each node in the cluster. This private hostname does not need to
be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention
for the private hostname is <public hostname>-pvt.

The private IP should NOT be accessible to servers not participating in the local cluster.
The private network should be on standalone dedicated switch(es).
The private network should NOT be part of a larger overall network topology.
The private network should be deployed on Gigabit Ethernet or better.
It is recommended that redundant NICs are configured with the Linux bonding driver.
Active/passive is the preferred bonding method due to its simple configuration.

Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). The SCAN VIPs
must NOT be in the /etc/hosts file; they must be resolved by DNS.
Even if you are using a DNS, Oracle recommends that you add lines to the /etc/hosts file on each node,
specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to
the following example:
NOTE: The SCAN IPs MUST NOT be in the /etc/hosts file. Placing them there will result in only 1 SCAN IP
for the entire cluster.
[oracle@cehaovmsp145 ~]$ cat /etc/hosts
# Created by DB/RAC OVM at Tue Aug 25 16:59:39 EDT 2015
127.0.0.1      localhost localhost.localdomain localhost4
::1            localhost localhost.localdomain localhost6
10.64.146.69   cehaovmsp145.us.oracle.com cehaovmsp145
10.64.131.119  cehaovmsp145-i.us.oracle.com cehaovmsp145-i
10.64.146.70   cehaovmsp145-v.us.oracle.com cehaovmsp145-v
10.64.146.92   cehaovmsp146.us.oracle.com cehaovmsp146
10.64.131.120  cehaovmsp146-i.us.oracle.com cehaovmsp146-i
10.64.146.93   cehaovmsp146-v.us.oracle.com cehaovmsp146-v
# For reference: DNS IP is 192.135.82.132; SCAN Name is cehaovmsp1-scan23


If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search
order in /etc/nsswitch.conf on all nodes as shown here:
Old:
hosts: files nis dns
New:
hosts: dns files nis
After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following
command:
# /sbin/service nscd restart
After you have completed the installation process, configure clients to use the SCAN to access the
cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for
example
docrac-scan.example.com.
The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is
unique within your network and conforms to the RFC 952 standard.
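To verify that the SCAN resolves through DNS to three addresses in a round-robin fashion, it can be
queried from any node; the SCAN name below is the one shown in the example /etc/hosts comment above:

$ nslookup cehaovmsp1-scan23

Running the command several times should show the three SCAN addresses returned in rotating order.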

2.3. Synchronizing the Time on ALL Nodes


Ensure that the date and time settings on all nodes are set as closely as possible to the same date and
time. Time may be kept in sync with NTP with the -x option or by using Oracle Cluster Time
Synchronization Service (ctssd). Instructions on configuring NTP with the -x option can be found in My
Oracle Support Note 551704.1.
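As an illustration only (assuming an Oracle Linux / Red Hat style ntpd; see the note above for the
authoritative steps), the -x flag can be added to the ntpd options on each node and the service
restarted:

# vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# /sbin/service ntpd restart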

2.4 Installing the Oracle Preinstallation RPM with ULN support

Use the following procedure to subscribe to Unbreakable Linux Network (ULN) Oracle Linux channels,
and to add the Oracle Linux channel that distributes the Oracle Preinstallation RPM:
1. Register your server with Unbreakable Linux Network (ULN). By default, you are registered for
the Oracle Linux Latest channel for your operating system and hardware.

2. Log in to Unbreakable Linux Network: https://linux.oracle.com

3. Click the Systems tab, and in the System Profiles list, select a registered server. The System
Details window opens and displays the subscriptions for the server.

4. Click Manage Subscriptions. The System Summary window opens.

5. From the Available Channels list, select the Linux installation media copy and update patch
channels corresponding to your Oracle Linux distribution. For example, if your distribution is Oracle
Linux 5 Update 6 for x86_64, then select the following:
Oracle Linux 5 Update 6 installation media copy (x86_64)
Oracle Linux 5 Update 6 Patch (x86_64)

6. Click Subscribe.

7. Start a terminal session and enter the following command as root, depending on your
platform. For example:
Oracle Linux 6:
# yum install oracle-rdbms-server-12cR1-preinstall
Oracle Linux 5:
# yum install oracle-validated
You should see output indicating that you have subscribed to the Oracle Linux channel, and that
packages are being installed. For example:
el5_u6_i386_base
el5_u6_x86_64_patch
Oracle Linux automatically creates a standard (not role-allocated) Oracle installation owner and
groups, and sets up other kernel configuration settings as required for Oracle installations.
Repeat steps 1 through 7 on all other servers in your cluster.
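To confirm that the Preinstallation RPM is present on a node, the package can be queried; the package
name matches the one installed in step 7 (oracle-validated on Oracle Linux 5):

# rpm -q oracle-rdbms-server-12cR1-preinstall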

2.5 Configuring Kernel Parameters

Note: This section can be skipped if you installed the Oracle Preinstallation RPM in the previous step (2.4).
As the root user add the following kernel parameter settings to /etc/sysctl.conf. If any of the
parameters are already in the /etc/sysctl.conf file, the higher of the 2 values should be used.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle
Support ExtNote:169706.1.
Run the following as the root user to allow the new kernel parameters to be put in place:

#/sbin/sysctl -p
Repeat the above steps on all cluster nodes.
NOTE: OUI checks the current settings for various kernel parameters to ensure they meet the
minimum requirements for deploying Oracle RAC.
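To spot-check the values currently in effect after running sysctl -p, the individual parameters can be
queried, for example:

# /sbin/sysctl kernel.sem kernel.shmmni fs.file-max net.ipv4.ip_local_port_range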

2.6 Set shell limits for the oracle user

Note: This section can be skipped if you installed the Oracle Preinstallation RPM in the previous step (2.4).
To improve the performance of the software on Linux systems, you must increase the shell limits for
the oracle and grid users.
1. Add the following lines to the /etc/security/limits.conf file:

grid soft nproc 2047


grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
2. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

session required pam_limits.so


3. Make the following changes to the default shell startup file by adding the following lines to the
/etc/profile file:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
if ( $USER = "oracle" || $USER = "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
Repeat this procedure on all other nodes in the cluster.
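To verify that the new limits are picked up, log in (or su -) as the oracle and grid users on each node
and check the process and file descriptor limits; the values reported should reflect the settings made
above:

# su - oracle
$ ulimit -u
$ ulimit -n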


2.7 Create the Oracle Inventory Directory

To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory

2.8. Creating the Oracle Grid Infrastructure Home Directory


To create the Grid Infrastructure home directory, enter the following commands as the root user:
# mkdir -p /u01/app/12.1.0/grid
# chown -R grid:oinstall /u01/app/12.1.0/grid
# chmod -R 775 /u01/app/12.1.0/grid

2.9. Creating the Oracle Base Directory


To create the Oracle Base directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle
# mkdir /u01/app/oracle/cfgtoollogs
(needed to ensure that dbca is able to run after the rdbms installation)
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

2.10. Creating the Oracle RDBMS Home Directory


To create the Oracle RDBMS Home directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle/product/12.1.0/db_1
# chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1
# chmod -R 775 /u01/app/oracle/product/12.1.0/db_1

2.11. Stage the Oracle Software


It is recommended that you stage the required software onto a local drive on Node 1 of your cluster.
Starting with the first patch set for Oracle Database 12c Release 1 (12.1.0.2), Oracle Database patch
sets are full installations of the Oracle Database software. In past releases, Oracle Database patch sets
consisted of sets of files that replaced files in an existing Oracle home.
With Oracle Database 12c Release 1, patch sets are full (out-of-place) installations that replace existing
installations. This simplifies the installation since you may simply install the latest patch set (version).
You are no longer required to install the base release and then apply the patch set.
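As an illustration of staging (the directory and zip file names below are examples only and will differ
depending on the media you downloaded), the installation media can be unzipped into a staging area
on node 1:

$ mkdir -p /u01/stage
$ unzip linuxamd64_12102_grid_1of2.zip -d /u01/stage
$ unzip linuxamd64_12102_grid_2of2.zip -d /u01/stage
$ unzip linuxamd64_12102_database_1of2.zip -d /u01/stage
$ unzip linuxamd64_12102_database_2of2.zip -d /u01/stage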


3. Prepare the shared storage for Oracle RAC


This section describes how to prepare the shared storage for Oracle RAC
Each node in a cluster requires external shared disks for storing the Oracle Clusterware (Oracle Cluster
Registry and voting disk) files, and Oracle Database files. To ensure high availability of Oracle
Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware
files in three separate failure groups, with at least three physical disks. Each disk must have at least 1
GB capacity to ensure that there is sufficient space to create Oracle Clusterware files. Use the
following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management diskgroup should be the same size and have
the same performance characteristics.
A diskgroup should not contain more than one partition on a single physical disk device.
Using logical volumes as a device in an Automatic Storage Management diskgroup is not supported
with Oracle RAC.
The user account with which you perform the installation (typically, 'oracle') must have write
permissions to create the files in the path that you specify.

3.1. Shared Storage


For this example installation we will be using ASM for Clusterware and Database storage on top of
SAN technology. The following table shows the storage layout for this implementation (the device
names correspond to the ASM disks created in section 3.1.3):

ASM Disk Name    Device Partition
OCR_VOTE01       /dev/xvdc1
OCR_VOTE02       /dev/xvdd1
DG01             /dev/xvde1
DG02             /dev/xvdf1
DG03             /dev/xvdg1
3.1.1. Partition the Shared Disks
Once the LUNs have been presented from the SAN to ALL servers in the cluster, partition the LUNs
from one node only: run fdisk to create a single whole-disk partition with exactly a 1 MB offset on each
LUN to be used as an ASM disk.
Tip: From the fdisk prompt, type "u" to switch the display unit from cylinders to sectors. Then create a
single primary partition starting on sector 2048 (1 MB offset assuming sectors of 512 bytes per unit).
See below:

# fdisk /dev/sda
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (61-1048575, default 61): 2048
Last sector or +size or +sizeM or +sizeK (2048-1048575, default 1048575):
Using default value 1048575
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
2. Load the updated block device partition tables by running the following on ALL servers
participating in the cluster:
#/sbin/partprobe
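To confirm on every node that the new partitions are now visible to the kernel, the partition table can
be listed, for example:

# cat /proc/partitions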
3.1.2. Installing and Configuring ASMLib
ASMLib is highly recommended for those systems that will be using ASM for shared storage within
the cluster, due to the performance and manageability benefits that it provides. Perform the following
steps to install and configure ASMLib on the cluster nodes:
NOTE: ASMLib automatically provides LUN persistence, so when using ASMLib there is no need to
manually configure LUN persistence for the ASM devices on the system.
Download the following packages from the ASMLib OTN page, if you are an Enterprise Linux customer
you can obtain the software through the Unbreakable Linux network.
NOTE: The ASMLib kernel driver MUST match the kernel revision number, the kernel revision
number of your system can be identified by running the "uname -r" command. Also, be sure to
download the set of RPMs which pertain to your platform architecture, in our case this is x86_64.
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm
Install the RPMs by running the following as the root user:
# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
oracleasmlib-2.0.4-1.el5.x86_64.rpm \
oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm
3. Configure ASMLib by running the following as the root user:
NOTE: If using user and group separation for the installation (as documented here), the ASMLib driver
interface owner is 'grid' and the group to own the driver interface is 'asmadmin'. These groups were
created in section 2.1. If a more simplistic installation using only the Oracle user is performed, the
owner will be 'oracle' and the group owner will be 'dba'.
#/etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions
will determine whether the driver is loaded on boot and what permissions it will have. The current
values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that
current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

4. Repeat steps 2 - 4 on ALL cluster nodes.


3.1.3. Using ASMLib to Mark the Shared Disks as Candidate Disks
To create ASM disks using ASMLib:
1. As the root user, use oracleasm to create ASM disks using the following syntax:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the ASM disk. The name you choose must
contain only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter,
for example, DISK1, VOL1, or RAC_FILE1. The name of the disk partition to mark as an ASM disk is the
device_partition_name. For example:
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE02 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm createdisk DG01 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm createdisk DG02 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm createdisk DG03 /dev/xvdg1
Writing disk header: done
Instantiating disk: done
If you need to unmark a disk that was used in a createdisk command, you can use the following syntax
as the root user:
# /usr/sbin/oracleasm deletedisk disk_name
2. Repeat step 1 for each disk that will be used by Oracle ASM.
3. After you have created all the ASM disks for your cluster, use the listdisks command to verify their
availability:
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@cehaovmsp145 ~]# /usr/sbin/oracleasm listdisks
DG01
DG02
DG03
OCR_VOTE01
OCR_VOTE02
[root@cehaovmsp145 ~]#


4. On all the other nodes in the cluster, use the scandisks command as the root user to pickup the
newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in
the cluster.
[root@cehaovmsp146 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VOTE01"
Instantiating disk "OCR_VOTE02"
Instantiating disk "DG01"
Instantiating disk "DG02"
Instantiating disk "DG03"
5. After scanning for ASM disks, display the available ASM disks on each node to verify their
availability:
[root@cehaovmsp146 ~]# /usr/sbin/oracleasm listdisks
DG01
DG02
DG03
OCR_VOTE01
OCR_VOTE02
[root@cehaovmsp146 ~]#

3.2 Setting Disk I/O Scheduler on Linux

Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and
lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory,
and Completely Fair Queuing (CFQ). For best performance for Oracle ASM, Oracle recommends that
you use the Deadline I/O Scheduler.
Enter the following command to ensure that the Deadline disk I/O scheduler is configured for use:
# echo deadline > /sys/block/${ASM_DISK}/queue/scheduler
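Note that the echo command above only takes effect until the next reboot. One way to make the setting
persistent (a sketch, assuming the xvd* device names used elsewhere in this example; adjust the KERNEL
match to your own ASM devices) is a udev rule such as /etc/udev/rules.d/60-asm-scheduler.rules
containing:

ACTION=="add|change", KERNEL=="xvd[c-g]", ATTR{queue/scheduler}="deadline"

The scheduler currently in effect for a device can be checked with:

# cat /sys/block/xvdc/queue/scheduler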


4. Oracle Grid Infrastructure Install

Basic Grid Infrastructure Install (without GNS and IPMI)


As the grid user (Grid Infrastructure software owner) start the installer by running "runInstaller" from
the staged installation media.
NOTE: Be sure the installer is run as the intended software owner, the only supported method to
change the software owner is to reinstall.
#xhost +
#su - grid
cd into the folder where you staged the Grid Infrastructure software
./runInstaller

Action:
Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click ' Next> '


Action:
Select radio button 'Advanced Installation' and click ' Next> '


Action:
Accept 'English' as language and click ' Next> '


Action:
Specify your cluster name and the SCAN name you want to use and click ' Next> '
Note:
Make sure 'Configure GNS' is NOT selected.


Action:
Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured
previously in your /etc/hosts file. Use the 'SSH Connectivity' button to configure/test the
passwordless SSH connectivity between your nodes.

ACTION:
Type in the OS password for the user 'grid' and press 'Setup'


After the SSH setup completes, click ' OK '

Action:
Click on 'Interface Type' next to the Interfaces you want to use for your cluster and select the correct
values for 'Public' and 'Private'. When finished click ' Next> '


Action:
Select radio button 'Automatic Storage Management (ASM)' and click ' Next> '


Action:
Select the 'DiskGroup Name' specify the 'Redundancy' and tick the disks you want to use, when done
click ' Next> '
NOTE: The number of voting disks that will be created depend on the redundancy level you specify:
EXTERNAL will create 1 voting disk, NORMAL will create 3 voting disks, HIGH will create 5 voting disks.
NOTE: If you see an empty screen for your candidate disks it is likely that ASMLib has not been
properly configured. If you are sure that ASMLib has been properly configured click on 'Change
Discovery Path' and provide the correct destination.


Action:
Specify and confirm the password you want to use and click ' Next> '


Action:
Select NOT to use IPMI and click ' Next> '


Action:
Select if you wish to Register with EM Cloud control and click ' Next> '


Action:
Assign the correct OS groups for OS authentication and click ' Next> '


Action:
Specify the locations for your ORACLE_BASE and for the Software location and click ' Next> '


Action:
Specify the locations for your Inventory directory and click ' Next> '


Action:
Specify the required credential if you wish to automatically run configuration scripts and click 'Next> '


Action:
Check that the status of all checks is 'Succeeded' and click ' Next> '
Note:
If you have failed checks marked as 'Fixable', click 'Fix & Check again'. This will bring up a window with
fixup instructions.
Action:
Execute the runfixup.sh script as described on the screen as the root user


Action:
Wait for the OUI to complete its tasks. After it completes the copying of binaries to all the nodes of
the cluster, it will bring up a pop-up window.
At this point you may need to run oraInstRoot.sh on all cluster nodes (if this is the first installation of
an Oracle product on this system).
root.sh script output on Node 1
[root@cehaovmsp145 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@cehaovmsp145 ~]# /u01/app/12.1.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created


Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/09/12 20:44:44 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2015/09/12 20:45:19 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA)
Collector.
2015/09/12 20:45:21 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
2015/09/12 20:46:13 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.mdnsd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.mdnsd' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.evmd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.gpnpd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.gipcd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.cssdmonitor' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.gipcd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.diskmon' on 'cehaovmsp145'
CRS-2676: Start of 'ora.diskmon' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.cssd' on 'cehaovmsp145' succeeded

ASM created and started successfully.


Disk Group OCRVD created successfully.
CRS-2672: Attempting to start 'ora.crf' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.storage' on 'cehaovmsp145'
CRS-2676: Start of 'ora.storage' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.crf' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.crsd' on 'cehaovmsp145' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk ae201939c23b4f12bf57fceabf2ad60f.
Successfully replaced voting disk group with +OCRVD.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                   File Name                            Disk group
--  -----    -----------------                   ---------                            ----------
 1. ONLINE   ae201939c23b4f12bf57fceabf2ad60f    (/dev/oracleasm/disks/OCR_VOTE01)    [OCRVD]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on
'cehaovmsp145'
CRS-2673: Attempting to stop 'ora.crsd' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.crsd' on 'cehaovmsp145' succeeded


CRS-2673: Attempting to stop 'ora.ctssd' on 'cehaovmsp145'


CRS-2673: Attempting to stop 'ora.evmd' on 'cehaovmsp145'
CRS-2673: Attempting to stop 'ora.storage' on 'cehaovmsp145'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'cehaovmsp145'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'cehaovmsp145'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.storage' on 'cehaovmsp145' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.drivers.acfs' on 'cehaovmsp145' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'cehaovmsp145' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cehaovmsp145' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'cehaovmsp145' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'cehaovmsp145' succeeded
CRS-2677: Stop of 'ora.asm' on 'cehaovmsp145' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cehaovmsp145' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.cssd' on 'cehaovmsp145' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.crf' on 'cehaovmsp145' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'cehaovmsp145'
CRS-2677: Stop of 'ora.gipcd' on 'cehaovmsp145' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'cehaovmsp145' has
completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.evmd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.mdnsd' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.evmd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.gpnpd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.gipcd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cehaovmsp145'
CRS-2676: Start of 'ora.cssdmonitor' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.diskmon' on 'cehaovmsp145'
CRS-2676: Start of 'ora.diskmon' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.cssd' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cehaovmsp145'
CRS-2672: Attempting to start 'ora.ctssd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.ctssd' on 'cehaovmsp145' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cehaovmsp145'
CRS-2676: Start of 'ora.asm' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cehaovmsp145'
CRS-2676: Start of 'ora.storage' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'cehaovmsp145'
CRS-2676: Start of 'ora.crf' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cehaovmsp145'
CRS-2676: Start of 'ora.crsd' on 'cehaovmsp145' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: cehaovmsp145
CRS-6016: Resource auto-start has completed for server cehaovmsp145
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/09/12 20:51:49 CLSRSC-343: Successfully started Oracle Clusterware stack
CRS-2672: Attempting to start 'ora.asm' on 'cehaovmsp145'
CRS-2676: Start of 'ora.asm' on 'cehaovmsp145' succeeded
CRS-2672: Attempting to start 'ora.OCRVD.dg' on 'cehaovmsp145'
CRS-2676: Start of 'ora.OCRVD.dg' on 'cehaovmsp145' succeeded
2015/09/12 20:53:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

root.sh script output on Node 2


[root@cehaovmsp146 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.


[root@cehaovmsp146 ~]# /u01/app/12.1.0/grid/root.sh


Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/09/12 21:02:53 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2015/09/12 21:03:23 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA)
Collector.
2015/09/12 21:03:25 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2015/09/12 21:05:12 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on
'cehaovmsp146'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'cehaovmsp146'
CRS-2677: Stop of 'ora.drivers.acfs' on 'cehaovmsp146' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'cehaovmsp146' has
completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'cehaovmsp146'
CRS-2672: Attempting to start 'ora.evmd' on 'cehaovmsp146'
CRS-2676: Start of 'ora.mdnsd' on 'cehaovmsp146' succeeded
CRS-2676: Start of 'ora.evmd' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'cehaovmsp146'
CRS-2676: Start of 'ora.gpnpd' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'cehaovmsp146'
CRS-2676: Start of 'ora.gipcd' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cehaovmsp146'
CRS-2676: Start of 'ora.cssdmonitor' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cehaovmsp146'
CRS-2672: Attempting to start 'ora.diskmon' on 'cehaovmsp146'
CRS-2676: Start of 'ora.diskmon' on 'cehaovmsp146' succeeded
CRS-2676: Start of 'ora.cssd' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cehaovmsp146'
CRS-2672: Attempting to start 'ora.ctssd' on 'cehaovmsp146'
CRS-2676: Start of 'ora.ctssd' on 'cehaovmsp146' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cehaovmsp146'
CRS-2676: Start of 'ora.asm' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cehaovmsp146'
CRS-2676: Start of 'ora.storage' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'cehaovmsp146'
CRS-2676: Start of 'ora.crf' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cehaovmsp146'
CRS-2676: Start of 'ora.crsd' on 'cehaovmsp146' succeeded
CRS-6017: Processing resource auto-start for servers: cehaovmsp146
CRS-2672: Attempting to start 'ora.net1.network' on 'cehaovmsp146'
CRS-2676: Start of 'ora.net1.network' on 'cehaovmsp146' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'cehaovmsp146'
CRS-2676: Start of 'ora.ons' on 'cehaovmsp146' succeeded
CRS-6016: Resource auto-start has completed for server cehaovmsp146
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/09/12 21:08:43 CLSRSC-343: Successfully started Oracle Clusterware stack
2015/09/12 21:09:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ...
succeeded


Action:
Wait for the OUI to finish the cluster configuration.


Action:
You should see the confirmation that installation of the Grid Infrastructure was successful. Click 'Close'
to finish the install.
[root@cehaovmsp145 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.OCRVD.dg
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.asm
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.ons
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       cehaovmsp145             169.254.41.177 10.64.131.119,STABLE
ora.cehaovmsp145.vip
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.cehaovmsp146.vip
      1        ONLINE  ONLINE       cehaovmsp146             STABLE
ora.cvu
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       cehaovmsp145             Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
--------------------------------------------------------------------------------
[root@cehaovmsp145 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1456
         Available space (kbytes) :     408112
         ID                       : 1201040793
         Device/File Name         :     +OCRVD
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
[root@cehaovmsp145 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   ae201939c23b4f12bf57fceabf2ad60f (/dev/oracleasm/disks/OCR_VOTE01) [OCRVD]
Located 1 voting disk(s).
[root@cehaovmsp145 ~]# crsctl check cluster -all
**************************************************************
cehaovmsp145:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
cehaovmsp146:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
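Optionally, you can also confirm that both nodes are registered and active in the cluster; a minimal check, using the Grid home path from this guide:
[root@cehaovmsp145 ~]# /u01/app/12.1.0/grid/bin/olsnodes -n -s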
5. RDBMS Software Install

As the oracle user (the RDBMS software owner), start the installer by running "runInstaller" from the staged installation media.
NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall.
Change into the directory where you staged the RDBMS software and run:
$ ./runInstaller
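For example, assuming the RDBMS software was staged under /u01/stage/database (the staging path is site-specific and only illustrative):
[oracle@cehaovmsp145 ~]$ cd /u01/stage/database
[oracle@cehaovmsp145 database]$ ./runInstaller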

Action:
Provide your e-mail address, tick the check box, and provide your Oracle Support password if you want to receive security updates from Oracle Support, then click ' Next> '
Action:
Select the option 'Install Database software only' and click ' Next> '
Action:
Select the option 'Oracle Real Application Clusters database installation' and click ' Next> '
Action:
Select all nodes.
Use the 'SSH Connectivity' button to configure and test passwordless SSH connectivity between your nodes.
Type in the OS password for the oracle user and click 'Setup'.
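If you want to verify the user equivalence outside the installer as well, a passwordless call between the nodes (node names are the ones used in this example cluster) should return the date without prompting for a password:
[oracle@cehaovmsp145 ~]$ ssh cehaovmsp146 date
[oracle@cehaovmsp146 ~]$ ssh cehaovmsp145 date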
Action:
To confirm English as the selected language, click ' Next> '
Action:
Make sure the radio button 'Enterprise Edition' is selected and click ' Next> '
Action:
Specify the path to your Oracle Base and, below it, the location where you want to store the software (Oracle home). Click ' Next> '
Action:
Use the drop-down menus to select the names of the Database Administrator and Database Operator groups and click ' Next> '
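To check from the command line which groups the oracle user belongs to before making the selection (group names vary by site), you can run:
$ id oracle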
Action:
Check that the status of all checks is 'Succeeded' and click ' Next> '
Note:
If you are sure the unsuccessful checks can be ignored, tick the box 'Ignore All' before you click ' Next> '
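The same prerequisite checks can also be re-run outside the OUI with the Cluster Verification Utility from the Grid home (path as used earlier in this guide):
$ /u01/app/12.1.0/grid/bin/cluvfy stage -pre dbinst -n cehaovmsp145,cehaovmsp146 -verbose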
Action:
Perform a last check that the information on the screen is correct before you click ' Finish '
Action:
Log in to a terminal window as the root user and run the root.sh script on the first node. When it finishes, do the same on all other nodes in your cluster. When finished, click 'OK'.
NOTE: root.sh should be run on one node at a time.
[root@cehaovmsp145 ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@cehaovmsp146 ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Action:
Click ' Close ' to finish the installation of the RDBMS Software.
6. ASMCA to create Diskgroups

As the grid user, start the ASM Configuration Assistant (ASMCA):
$ cd /u01/app/12.1.0/grid/bin
$ ./asmca

Action:
Click 'Create' to create a new diskgroup
Action:
Type in a name for the diskgroup, select the redundancy you want to provide, and mark the tick box for the disks you want to assign to the new diskgroup.
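If you prefer the command line to ASMCA, an equivalent diskgroup can be created from SQL*Plus on the ASM instance as the grid user; the diskgroup name, redundancy, and disk path below are illustrative only:
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DATA01';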
Action:
Click 'OK'
Note:
It is an Oracle best practice to store an OCR mirror in a second diskgroup. To follow this recommendation, add an OCR mirror. Note that you can have only one OCR location per diskgroup.
Action:
To add an OCR mirror in an Oracle ASM diskgroup, ensure that the Oracle Clusterware stack is running and run the following commands as root from the $GRID_HOME/bin directory:
# ocrconfig -add +DATA
# ocrcheck
7. Run DBCA to create the database

As the oracle user, start the Database Configuration Assistant (DBCA):
$ cd /u01/app/oracle/product/12.1.0/db_1/bin
$ ./dbca
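DBCA can also create the RAC database silently for scripted installs; the following is only a sketch (database name, passwords, diskgroup and node list are assumptions, and supported options vary by release, so check 'dbca -help' first):
$ ./dbca -silent -createDatabase -templateName General_Purpose.dbc \
    -gdbName cdbrac -createAsContainerDatabase true \
    -sysPassword <password> -systemPassword <password> \
    -storageType ASM -diskGroupName DATA \
    -nodelist cehaovmsp145,cehaovmsp146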

Action:
Choose option 'Create a Database' and click 'Next'
Action:
Choose option 'Advanced Mode' and click 'Next >'
Action:
Select 'Oracle Real Application Clusters (RAC) database' and 'Admin Managed', then click 'Next >'
Action:
Type in the name you want to use for your database, select 'Create As Container Database', and click 'Next >'
Action:
Select all nodes before you click 'Next>'
Action:
Select the options you want to use to manage your database and click 'Next'
Action:
Type in the passwords you want to use and click 'Next'
Action:
Select the diskgroup you created for the database files; optionally select a Fast Recovery Area (FRA) and enable archiving if you wish to configure them. Click 'Next >'
Action:
Select the appropriate options if you wish to configure Database Vault and Label Security, then click 'Next >'
Action:
Review and change the settings for memory allocation, character sets etc. according to your needs
and click 'Next >'
Action:
Make sure the tick box 'Create Database' is ticked; optionally you may select 'Generate Database Creation Scripts'. Click 'Next >'
Action:
Review the validation results; if you are sure the findings can be ignored, you can tick 'Ignore All'. Click 'Next >'
Action:
Review the database configuration details again and click 'Finish >'
Action:
The database is now created. You can change or unlock your passwords, or just click 'Exit' to finish the database creation.
8. Applying Latest PSUs to GRID & RDBMS Homes

Please download the latest patch referenced in MOS Doc ID 756671.1. At the time this document was written, the latest PSU release was Patch 20996835: GRID INFRASTRUCTURE PATCH SET UPDATE 12.1.0.2.4 (JUL2015).
The above patch contains both the GI and RDBMS patches and can be applied in a rolling manner.
Please note:
1. Make sure that there are no active sqlplus sessions opened through putty (ps -ef | grep sqlplus).
2. Make sure that the emagent / EM DB Control is stopped manually before you execute the patch steps (ps -ef | grep em).
Steps for applying the patch
=======================
1. Download the latest OPatch utility (patch 6880880) from https://updates.oracle.com/download/6880880.html and replace the existing OPatch directory with the new one. OPatch will not let you simply overwrite the directory, so move the old one aside as the root user and change the ownership of the new one to grid:oinstall (see the example commands after step 4).
2. Create the OCM response file for silent installation of "opatch auto" features as a root user:
   $ $GRID_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocmconf.rsp
   Provide your email address to be informed of security issues, install and
   initiate Oracle Configuration Manager. Easier for you if you use your My
   Oracle Support Email address/User Name.
   Visit http://www.oracle.com/support/policies.html for details.
   Email address/User Name:   <<<< press the Enter/Return key and do not provide any input >>>>
   You have not provided an email address for notification of security issues.
   Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y   <<<< type Y/Yes >>>>
   The OCM configuration response file (/tmp/ocmconf.rsp) was successfully created.

3. Unzip the patch file as the GRID software owner (see the example commands after step 4).

4. Apply the patch with the following commands as the root user:
   # cd $GRID_HOME/OPatch
   # ./opatchauto apply <UNZIPPED_PATCH_LOCATION>/20996835 -ocmrf /tmp/ocmconf.rsp
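As an illustration of steps 1 and 3 (the zip file names, the staging directory /u01/stage and the Grid home /u01/app/12.1.0/grid are assumptions; adjust them to your environment), as root:
# mv /u01/app/12.1.0/grid/OPatch /u01/app/12.1.0/grid/OPatch.old
# unzip -d /u01/app/12.1.0/grid /u01/stage/p6880880_121010_Linux-x86-64.zip
# chown -R grid:oinstall /u01/app/12.1.0/grid/OPatch
Repeat the OPatch replacement for the RDBMS home (owned by oracle). Then, as the grid user, unzip the PSU into a staging area and confirm the OPatch version:
$ unzip -d /u01/stage /u01/stage/p20996835_121020_Linux-x86-64.zip
$ /u01/app/12.1.0/grid/OPatch/opatch version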

Loading Modified SQL Files into the Database


The following steps load modified SQL files into the database. For a RAC environment, perform these
steps on only one node.
Datapatch is run to complete the post-install SQL deployment for the PSU.
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup                              (if not started)
SQL> alter pluggable database all open;
SQL> quit
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

Check the following log files in $ORACLE_BASE/cfgtoollogs/sqlpatch/20831110/<unique patch ID> for errors:
20831110_apply_<database SID>_<CDB name>_<timestamp>.log
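A quick way to scan those logs (the directory layout follows the description above; adjust the unique patch ID to what datapatch reports):
$ cd $ORACLE_BASE/cfgtoollogs/sqlpatch/20831110/<unique patch ID>
$ grep -i error 20831110_apply_*.log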

Please refer to the readme.html file of the patch for complete details.