INSTALLATION OF
ORACLE RAC 12cR1 (12.1.0.2)
ON LINUX X86-64
Contents
1. Introduction
1.1.2 Environment Configuration for Oracle Grid Infrastructure and Oracle RAC
1.1.3 Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC
1.2 Configuring Servers for Oracle Grid Infrastructure and Oracle RAC
1.3.1 Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64
1.3.2 Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64
1.3.3 Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64
2.2 Networking
1.
Introduction
1.1
1.1.1
Server hardware: server make, model, core architecture, and host bus adaptors (HBA) are supported
to run with Oracle RAC.
Network Switches:
Private network switch, at least 1 GbE, with 10 GbE recommended, dedicated for use only
with other cluster member nodes. The interface must support the user datagram protocol
(UDP) using high-speed network adapters and switches that support TCP/IP. Alternatively, use
InfiniBand for the interconnect.
At least 8 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle
recommends that you allocate 100 GB to allow additional space for patches.
At least 12 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner
(Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB of
disk space for the Oracle home (the location for the Oracle Database software binaries).
1.1.2
Create Groups and Users. A user created to own only Oracle Grid Infrastructure software installations
is called the grid user. A user created to own either all Oracle installations, or only Oracle database
installations, is called the oracle user.
Create mount point paths for the software binaries. Oracle recommends that you follow the
guidelines for an Optimal Flexible Architecture configuration.
Review Oracle Inventory (oraInventory) and OINSTALL Group Requirements. The Oracle Inventory
directory is the central inventory of Oracle software installed on your system. Users who have the
Oracle Inventory group as their primary group are granted the OINSTALL privilege to write to the
central inventory.
Ensure that the Grid home (the Oracle home path you select for Oracle Grid Infrastructure) uses
only ASCII characters
Unset Oracle software environment variables. If you have set ORA_CRS_HOME as an environment
variable, then unset it before starting an installation or upgrade. Do not use ORA_CRS_HOME as a
user environment variable.
If you have an existing installation on your system and you are using the same user account
for this installation, then unset the following environment
variables: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, and TNS_ADMIN.
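As a sketch of the step above, the variables can be cleared in the installing user's shell before launching the installer:

```shell
# Clear Oracle environment variables left over from a previous installation.
unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN

# Confirm nothing is still set (prints a confirmation when clean).
env | grep -E '^(ORA_CRS_HOME|ORACLE_HOME|ORA_NLS10|TNS_ADMIN)=' \
  || echo "environment is clean"
```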
1.1.3
Network Configuration Tasks for Oracle Grid Infrastructure and Oracle RAC
Public network switch (redundant switches recommended) connected to a public gateway and
to the public interface ports for each cluster member node.
Ethernet interface card (redundant network cards recommended, bonded as one Ethernet
port name).
The switches and network interface adapters must be at least 1 GbE, with 10 GbE
recommended. Alternatively, use InfiniBand for the interconnect.
Oracle Flex ASM can use either the same private networks as Oracle Clusterware, or use its
own dedicated private networks. Each network can be classified PUBLIC or PRIVATE+ASM or
PRIVATE or ASM. ASM networks use the TCP protocol.
Cluster Names and Addresses: Determine and configure the following names and addresses for the
cluster
Cluster name: Decide a name for the cluster, and be prepared to enter it during installation.
The cluster name should have the following characteristics:
Globally unique across all hosts, even across different DNS domains.
At least one character long and less than or equal to 15 characters long.
Grid Naming Service Virtual IP Address (GNS VIP): If you plan to use GNS, then configure a GNS
name and fixed address on the DNS for the GNS VIP, and configure a subdomain on your DNS
delegated to the GNS VIP for resolution of cluster addresses. GNS domain delegation is
mandatory with dynamic public networks (DHCP, autoconfiguration).
Using Grid Naming Service Resolution: Do not configure SCAN names and addresses in
your DNS. SCANs are managed by GNS.
Using Manual Configuration and DNS resolution: Configure a SCAN name to resolve to
three addresses on the domain name service (DNS).
Standard or Hub Node Public, Private and Virtual IP names and Addresses:
Public node name and address, configured on the DNS and in /etc/hosts (for example,
node1.example.com, address 192.0.2.10). The public node name should be the primary host
name of each node, which is the name displayed by the hostname command.
Private node address, configured on the private interface for each node.
The private subnet that the private interfaces use must connect all the nodes you intend to
have as cluster members. Oracle recommends that the network you select for the private
network uses an address range defined as private by RFC 1918.
Voting files are files that Oracle Clusterware uses to verify cluster node membership and
status. The location for voting files must be owned by the user performing the installation
(oracle or grid), and must have permissions set to 640.
Oracle Cluster Registry files (OCR) contain cluster and database configuration information for
Oracle Clusterware. Before installation, the location for OCR files must be owned by the user
performing the installation (grid or oracle). That installation user must have oinstall as its
primary group. During installation, the installer creates the OCR files and changes ownership
of the path and OCR files to root.
1.2
1.2.2
If the free space available in the /tmp directory is less than what is required, then complete
one of the following steps:
o Delete unnecessary files from the /tmp directory to make available the space required.
o Extend the file system that contains the /tmp directory. If necessary, contact your
system administrator for information about extending file systems.
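A quick way to perform the check above is to compare the free space reported by df against the requirement. The helper below is a sketch; the 1024 MB threshold for /tmp is the commonly documented minimum for this release and should be confirmed against the install guide.

```shell
# Return success when a mount point has at least the required free space (MB).
space_ok() {
  mount_point=$1
  need_mb=$2
  avail_kb=$(df -Pk "$mount_point" | awk 'NR==2 {print $4}')
  [ $((avail_kb / 1024)) -ge "$need_mb" ]
}

if space_ok /tmp 1024; then
  echo "/tmp has enough free space"
else
  echo "/tmp is below the requirement; clean up or extend the file system"
fi
```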
At least 8.0 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home).
Oracle recommends that you allocate 100 GB to allow additional space for patches.
Up to 10 GB of additional space in the Oracle base directory of the Grid Infrastructure
owner for diagnostic collections generated by Trace File Analyzer (TFA) Collector.
At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation
owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB
of disk space for the Oracle home (the location for the Oracle Database software binaries).
1.3
At least 4 GB of RAM for Oracle Grid Infrastructure for cluster installations, including
installations where you plan to install Oracle RAC.
Swap space sized relative to the available RAM, as indicated in the following table:

Available RAM              Swap Space Required
Between 4 GB and 16 GB     Equal to the size of RAM
More than 16 GB            16 GB
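The sizing table above can be expressed as a small helper; this is a sketch of the rule only (the 4 GB RAM minimum still applies):

```shell
# Recommended swap (GB) for a given amount of RAM (GB), per the table above.
required_swap_gb() {
  ram_gb=$1
  if [ "$ram_gb" -gt 16 ]; then
    echo 16            # more than 16 GB of RAM -> 16 GB of swap
  else
    echo "$ram_gb"     # between 4 GB and 16 GB -> swap equal to RAM
  fi
}

required_swap_gb 8     # -> 8
required_swap_gb 32    # -> 16
```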
The Linux distributions and packages listed in this section are supported for this release on x86-64. No
other Linux distributions are supported.
Identify operating system requirements for Oracle Grid Infrastructure, and identify additional
operating system requirements for Oracle Database and Oracle RAC installations.
Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64
Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64
Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64
1.3.1
Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64
Use the following information to check supported Oracle Linux 7 and Red Hat Linux 7 distributions:
Oracle Linux 7
Supported distributions:
Oracle Linux 7 with the Red Hat Compatible kernel: 3.10.0-54.0.1.el7.x86_64 or later
1.3.2
Supported Oracle Linux 6 and Red Hat Enterprise Linux 6 Distributions for x86-64
Use the following information to check supported Oracle Linux 6 and Red Hat Linux 6 distributions:
Oracle Linux 6
Supported distributions:
Oracle Linux 6 with the Red Hat Compatible kernel: 2.6.32-71.el6.x86_64 or later
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
1.3.3
Supported Oracle Linux 5 and Red Hat Enterprise Linux 5 Distributions for x86-64
Use the following information to check supported Oracle Linux 5 and Red Hat Linux 5 distributions:
Oracle Linux 5
Supported distributions:
Oracle Linux 5 Update 6 with the Unbreakable Enterprise kernel: 2.6.32-100.0.19 or later
Oracle Linux 5 Update 6 with the Red Hat compatible kernel: 2.6.18-238.0.0.0.1.el5 or later
libXtst-1.0.1
libXtst-1.0.1 (32 bit)
libX11-1.0.3
libX11-1.0.3 (32 bit)
libXau-1.0.1
libXau-1.0.1 (32 bit)
libXi-1.0.1
libXi-1.0.1 (32 bit)
make-3.81
sysstat-7.0.2
The following command can be run on the system to list the currently installed packages:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
Any package missing from the list above should be installed using the "--aid" option of /bin/rpm
to ensure that all dependent packages are resolved and installed as well.
NOTE: Be sure to check on all nodes that the Linux firewall and SELinux are disabled.
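A minimal check for the note above might look like the following sketch (commands assumed present on OL5/OL6-era systems; on OL7, check firewalld instead of iptables):

```shell
# Report SELinux and iptables status so the state can be verified on each node.
check_node_security() {
  if command -v getenforce >/dev/null 2>&1; then
    echo "SELinux: $(getenforce)"       # expect 'Disabled' (or 'Permissive')
  else
    echo "SELinux: tools not installed"
  fi
  if service iptables status >/dev/null 2>&1; then
    echo "iptables: running (disable it before installing)"
  else
    echo "iptables: not running (or not present)"
  fi
}
check_node_security
```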
2.
Create the OS groups by entering the following commands as the root user:
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/groupadd asmadmin
# /usr/sbin/groupadd asmdba
# /usr/sbin/groupadd asmoper
2.
Create the users that will own the Oracle software using the commands:
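The user-creation commands are missing from this copy. The sketch below prints a typical pair of commands (the secondary-group assignments are an assumed role-separated layout, not taken from this document); review them and then run them as the root user:

```shell
# Print, rather than run, the user-creation commands so they can be reviewed
# first; the group memberships shown are a common layout, adjust as needed.
print_user_cmds() {
  printf '%s\n' \
    '/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper grid' \
    '/usr/sbin/useradd -g oinstall -G dba,asmdba oracle'
}
print_user_cmds
```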
2.2
Networking
NOTE: This section is intended to be used for installations NOT using GNS.
Determine your cluster name. The cluster name should satisfy the following conditions:
The cluster name is at least 1 character long and less than 15 characters long.
The cluster name must consist of the same character set used for host names: single-byte
alphanumeric characters (a to z, A to Z, and 0 to 9) and hyphens (-).
Determine the public host name for each node in the cluster. For the public host name, use the
primary hostname of each node. In other words, use the name displayed by the hostname command
for example: racnode1.
Determine the public virtual hostname for each node in the cluster. The virtual host name is a public
node name that is used to reroute client requests sent to the node if the node is down. Oracle
recommends that you provide a name in the format <public hostname>-vip, for example: racnode1-vip.
The virtual hostname must meet the following requirements:
The virtual IP address and the network name must not be currently in use.
The virtual IP address must be on the same subnet as your public IP address.
The virtual host name for each node should be registered with your DNS.
Determine the private hostname for each node in the cluster. This private hostname does not need to
be resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention
for the private hostname is <public hostname>-pvt.
The private IP should NOT be accessible to servers not participating in the local cluster.
The private network should be on standalone dedicated switch(es).
The private network should NOT be part of a larger overall network topology.
The private network should be deployed on Gigabit Ethernet or better.
It is recommended that redundant NICs are configured with the Linux bonding driver.
Active/passive is the preferred bonding method due to its simple configuration.
Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin). SCAN VIPs
must NOT be in the /etc/hosts file; they must be resolved by DNS.
Even if you are using a DNS, Oracle recommends that you add lines to the /etc/hosts file on each node,
specifying the public IP, VIP and private addresses. Configure the /etc/hosts file so that it is similar to
the following example:
NOTE: The SCAN IPs MUST NOT be in the /etc/hosts file. Placing them there would result in only one
SCAN IP for the entire cluster.
[oracle@cehaovmsp145 ~]$ cat /etc/hosts
# Created by DB/RAC OVM at Tue Aug 25 16:59:39 EDT 2015
127.0.0.1    localhost localhost.localdomain localhost4
::1          localhost localhost.localdomain localhost6
10.64.146.69 cehaovmsp145.us.oracle.com cehaovmsp145
10.64.131.119 cehaovmsp145-i.us.oracle.com cehaovmsp145-i
10.64.146.70 cehaovmsp145-v.us.oracle.com cehaovmsp145-v
10.64.146.92 cehaovmsp146.us.oracle.com cehaovmsp146
10.64.131.120 cehaovmsp146-i.us.oracle.com cehaovmsp146-i
10.64.146.93 cehaovmsp146-v.us.oracle.com cehaovmsp146-v
# For reference: DNS IP is 192.135.82.132; SCAN Name is cehaovmsp1-scan23
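Because the SCAN must resolve only through DNS, a simple guard like the sketch below can confirm the SCAN name never appears in an /etc/hosts-style file (the demo file and names here are stand-ins from the example above):

```shell
# Succeed only when the given name does not appear in the hosts file.
scan_absent() {
  scan_name=$1
  hosts_file=$2
  ! grep -q "$scan_name" "$hosts_file"
}

demo_hosts=$(mktemp)
printf '10.64.146.69 cehaovmsp145.us.oracle.com cehaovmsp145\n' > "$demo_hosts"
scan_absent cehaovmsp1-scan23 "$demo_hosts" && echo "OK: SCAN not in hosts file"
rm -f "$demo_hosts"
```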
If you configured the IP addresses in a DNS server, then, as the root user, change the hosts search
order in /etc/nsswitch.conf on all nodes as shown here:
Old:
hosts: files nis dns
New:
hosts: dns files nis
After modifying the nsswitch.conf file, restart the nscd daemon on each node using the following
command:
# /sbin/service nscd restart
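The edit above can be scripted; the sketch below rehearses the change on a temporary copy so the resulting hosts line can be reviewed before repeating it against /etc/nsswitch.conf as root:

```shell
# Rewrite the hosts lookup order to consult DNS first.
fix_hosts_order() {
  sed -i 's/^hosts:.*/hosts: dns files nis/' "$1"
}

work_copy=$(mktemp)
printf 'passwd: files\nhosts: files nis dns\n' > "$work_copy"
fix_hosts_order "$work_copy"
grep '^hosts:' "$work_copy"     # now reads: hosts: dns files nis
rm -f "$work_copy"
```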
After you have completed the installation process, configure clients to use the SCAN to access the
cluster. Using the previous example, the clients would use docrac-scan to connect to the cluster.
The fully qualified SCAN for the cluster defaults to cluster_name-scan.GNS_subdomain_name, for
example
docrac-scan.example.com.
The short SCAN for the cluster is docrac-scan. You can use any name for the SCAN, as long as it is
unique within your network and conforms to the RFC 952 standard.
2.4
Use the following procedure to subscribe to Unbreakable Linux Network (ULN) Oracle Linux channels,
and to add the Oracle Linux channel that distributes the Oracle Preinstallation RPM:
1.
Register your server with Unbreakable Linux Network (ULN). By default, you are registered for
the Oracle Linux Latest channel for your operating system and hardware.
2.
5.
From the Available Channels list, select the Linux installation media copy and update patch
channels corresponding to your Oracle Linux distribution. For example, if your distribution is Oracle
Linux 5 Update 6 for x86_64, then select the following:
Oracle Linux 5 Update 6 installation media copy (x86_64)
Oracle Linux 5 Update 6 Patch (x86_64)
6.
Click Subscribe.
7.
Start a terminal session and enter the following command as root, depending on your
platform. For example:
Oracle Linux 6:
# yum install oracle-rdbms-server-12cR1-preinstall
Oracle Linux 5:
# yum install oracle-validated
You should see output indicating that you have subscribed to the Oracle Linux channel, and that
packages are being installed. For example:
el5_u6_i386_base
el5_u6_x86_64_patch
Oracle Linux automatically creates a standard (not role-allocated) Oracle installation owner and
groups, and sets up other kernel configuration settings as required for Oracle installations.
Repeat steps 1 through 7 on all other servers in your cluster.
2.5
Note: This section can be skipped if you set up the preinstall RPM using the previous steps.
As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of the
parameters are already in the /etc/sysctl.conf file, the higher of the two values should be used.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle
Support Note 169706.1.
Run the following as the root user to allow the new kernel parameters to be put in place:
# /sbin/sysctl -p
Repeat the above steps on all cluster nodes.
NOTE: OUI checks the current settings for various kernel parameters to ensure they meet the
minimum requirements for deploying Oracle RAC.
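For the "higher of the two values" rule above, a small comparison helper illustrates the intent (a sketch for single-value parameters only; multi-field parameters such as kernel.sem must be compared field by field):

```shell
# Keep the higher of the current and recommended values for a parameter.
higher() {
  if [ "$1" -ge "$2" ]; then echo "$1"; else echo "$2"; fi
}

# Example: current kernel.shmmni is 2048, recommended is 4096 -> use 4096.
higher 2048 4096    # -> 4096
```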
2.6
Note: This section can be skipped if you set up the preinstall RPM using the previous steps (2.4).
To improve the performance of the software on Linux systems, you must increase the shell limits for
the oracle user.
1.
Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
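The file content that follows this step is missing from this copy of the document. For Oracle installations, the line usually added to /etc/pam.d/login is the pam_limits entry, and it is typically paired with per-user limits in /etc/security/limits.conf; the values below are the commonly published Oracle recommendations and should be verified against the official guide:

```
# /etc/pam.d/login
session    required     pam_limits.so

# /etc/security/limits.conf (repeat the entries for the grid user as well)
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536
oracle   soft   stack    10240
```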
2.7
To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app/oraInventory
4. On all the other nodes in the cluster, use the scandisks command as the root user to pick up the
newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in
the cluster.
[root@cehaovmsp146 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VOTE01"
Instantiating disk "OCR_VOTE02"
Instantiating disk "DG01"
Instantiating disk "DG02"
Instantiating disk "DG03"
5. After scanning for ASM disks, display the available ASM disks on each node to verify their
availability:
[root@cehaovmsp146 ~]# /usr/sbin/oracleasm listdisks
DG01
DG02
DG03
OCR_VOTE01
OCR_VOTE02
[root@cehaovmsp146 ~]#
3.2
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and
lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory,
and Completely Fair Queuing (CFQ). For best performance for Oracle ASM, Oracle recommends that
you use the Deadline I/O Scheduler.
Enter the following command to ensure that the Deadline disk I/O scheduler is configured for use:
# echo deadline > /sys/block/${ASM_DISK}/queue/scheduler
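The echo command above does not survive a reboot. One common way (not taken from this document) to make the Deadline scheduler persistent is a udev rule; the file name and device match below are assumptions to adapt to your ASM devices:

```
# /etc/udev/rules.d/60-oracle-schedulers.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"
```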
4.
Action:
Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click ' Next> '
Action:
Select radio button 'Advanced Installation' and click ' Next> '
Action:
Accept 'English' as the language and click ' Next> '
Action:
Specify your cluster name and the SCAN name you want to use and click ' Next> '
Note:
Make sure 'Configure GNS' is NOT selected.
Action:
Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured
previously in your /etc/hosts file. Use the 'SSH Connectivity' button to configure/test the
passwordless SSH connectivity between your nodes.
ACTION:
Type in the OS password for the user 'grid' and press 'Setup'
Action:
Click on 'Interface Type' next to the interfaces you want to use for your cluster and select the
correct values for 'Public' and 'Private'. When finished click ' Next> '
Action:
Select radio button 'Automatic Storage Management (ASM)' and click ' Next> '
Action:
Select the 'DiskGroup Name', specify the 'Redundancy', and tick the disks you want to use. When done
click ' Next> '
NOTE: The number of voting disks that will be created depends on the redundancy level you specify:
EXTERNAL creates 1 voting disk, NORMAL creates 3 voting disks, and HIGH creates 5 voting disks.
NOTE: If you see an empty screen for your candidate disks it is likely that ASMLib has not been
properly configured. If you are sure that ASMLib has been properly configured click on 'Change
Discovery Path' and provide the correct destination.
Action:
Specify and confirm the password you want to use and click ' Next> '
Action:
Select NOT to use IPMI and click ' Next> '
Action:
Select if you wish to Register with EM Cloud control and click ' Next> '
Action:
Assign the correct OS groups for OS authentication and click ' Next> '
Action:
Specify the locations for your ORACLE_BASE and for the Software location and click ' Next> '
Action:
Specify the locations for your Inventory directory and click ' Next> '
Action:
Specify the required credential if you wish to automatically run configuration scripts and click 'Next> '
Action:
Check that status of all checks is Succeeded and click ' Next> '
Note:
If you have failed checks marked as 'Fixable', click 'Fix & Check Again'. This will bring up a fixup window.
Action:
Execute the runfixup.sh script as described on the screen as root user
Action:
Wait for the OUI to complete its tasks. After it completes copying the binaries to all the nodes of
the cluster, it will bring up a pop-up window.
At this point you may need to run orainstRoot.sh on all cluster nodes (if this is the first installation of
an Oracle product on this system).
root.sh script output on Node 1
[root@cehaovmsp145 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@cehaovmsp145 ~]# /u01/app/12.1.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Action:
Wait for the OUI to finish the cluster configuration.
Action:
You should see the confirmation that installation of the Grid Infrastructure was successful. Click 'Close'
to finish the install.
[root@cehaovmsp145 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.OCRVD.dg
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.asm
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             Started,STABLE
ora.net1.network
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
ora.ons
               ONLINE  ONLINE       cehaovmsp145             STABLE
               ONLINE  ONLINE       cehaovmsp146             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       cehaovmsp145             169.254.41.177 10.64
                                                             .131.119,STABLE
ora.cehaovmsp145.vip
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.cehaovmsp146.vip
      1        ONLINE  ONLINE       cehaovmsp146             STABLE
ora.cvu
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       cehaovmsp145             Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       cehaovmsp145             STABLE
--------------------------------------------------------------------------------

[root@cehaovmsp145 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1456
         Available space (kbytes) :     408112
         ID                       : 1201040793
         Device/File Name         :     +OCRVD
                                    Device/File integrity check succeeded
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@cehaovmsp145 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ----------
 1. ONLINE   ae201939c23b4f12bf57fceabf2ad60f (/dev/oracleasm/disks/OCR_VOTE01) [OCRVD]
Located 1 voting disk(s).
[root@cehaovmsp145 ~]# crsctl check cluster -all
**************************************************************
cehaovmsp145:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
cehaovmsp146:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
5.
As the oracle user (rdbms software owner) start the installer by running "runInstaller" from the staged
installation media.
NOTE: Be sure the installer is run as the intended software owner; the only supported method to
change the software owner is to reinstall.
Change into the directory where you staged the RDBMS software
./runInstaller
Action:
Provide your e-mail address, tick the check box and provide your Oracle Support Password if you want
to receive Security Updates from Oracle Support and click ' Next> '
Action:
Select the option 'Install Database software only' and click ' Next> '
Action:
Select the option 'Oracle Real Application Clusters database installation' and click ' Next> '
Action:
Select all nodes.
Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your
nodes.
Type in the OS password for the oracle user and click 'Setup'
Action:
To confirm English as selected language click ' Next> '
Action:
Make sure radio button 'Enterprise Edition' is ticked, click ' Next> '
Action:
Specify the path to your Oracle Base and, below it, the location where you want to store the software
(Oracle home). Click ' Next> '
Action:
Use the drop-down menus to select the names of the Database Administrators and Database Operators
groups and click ' Next> '
Action:
Check that the status of all checks is 'Succeeded' and click ' Next> '
Note:
If you are sure the unsuccessful checks can be ignored, tick the box 'Ignore All' before you click ' Next> '
Action:
Perform a last check that the information on the screen is correct before you click ' Finish '
Action:
Log in to a terminal window as the root user and run the root.sh script on the first node. When finished,
do the same for all other nodes in your cluster. When finished, click 'OK'
NOTE: root.sh should be run on one node at a time.
[root@cehaovmsp145 ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh
Performing root user operation.
/u01/app/oracle/product/12.1.0/db_1
/u01/app/oracle/product/12.1.0/db_1
Action:
Click ' Close ' to finish the installation of the RDBMS Software.
6.
Action:
Click 'Create' to create a new diskgroup
Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box
for the disks you want to assign to the new diskgroup.
Action:
Click 'OK'
Note:
It is Oracle best practice to store an OCR mirror in a second diskgroup. To follow this
recommendation, add an OCR mirror. Note that you can have only one OCR location per diskgroup.
Action:
1. To add an OCR mirror to an Oracle ASM diskgroup, ensure that the Oracle Clusterware stack is running
and run the following commands as root from the $GRID_HOME/bin directory:
2. # ocrconfig -add +DATA
3. # ocrcheck
7.
Action:
Choose option 'Create a Database' and click 'Next'
Action:
Choose option 'Advanced Mode' and click 'Next >'
Action:
Select 'Oracle Real Application Clusters (RAC) database' and 'Admin Managed', then click 'Next'
Action:
Type in the name you want to use for your database and select 'Create As Container Database', then click
'Next >'
Action:
Select all nodes before you click 'Next>'
Action:
Select the options you want to use to manage your database and click 'Next'
Action:
Type in the passwords you want to use and click 'Next'
Action:
Select the diskgroup you created for the database files; optionally select FRA and Enable Archiving if you
wish to configure them. Click 'Next >'
Action:
Select the appropriate option if you wish to configure Database Vault and Label Security, then click 'Next >'
Action:
Review and change the settings for memory allocation, character sets etc. according to your needs
and click 'Next >'
Action:
Make sure the tick box 'Create Database' is ticked; optionally you may select 'Generate Database
Creation Scripts'. Click 'Next >'
Action:
Review the validation results; if you are sure they can be ignored, you can go with the 'Ignore All' option. Click 'Next >'
Action:
Review the database configuration details again and click 'Finish'
Action:
The database is now created. You can either change or unlock your passwords, or just click 'Exit' to finish
the database creation.
8.
Please download the latest patch from MOS Doc ID 756671.1. At the time of writing, the latest
PSU release was Patch 20996835: GRID INFRASTRUCTURE PATCH SET UPDATE 12.1.0.2.4 (JUL2015).
The above patch contains both the GI and RDBMS patches and can be applied in a rolling manner.
Please note:
1. Make sure that there are no active sqlplus sessions open through PuTTY (ps -ef | grep sqlplus).
2. Make sure that the EM agent / EM DB Control is stopped manually before you execute the patch steps
(ps -ef | grep em).
Steps for applying the patch
=======================
1. Download the latest OPatch (patch 6880880) utility and replace the existing OPatch directory with the
new one. (It will not let you overwrite in place; move the old directory aside as the root user and change
the ownership of the new OPatch to grid:oinstall.)
https://updates.oracle.com/download/6880880.html
2. Create the OCM response file for silent installation of "opatch auto" features as a root user.
$ $GRID_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocmconf.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
<<<< Press Enter/Return key and don't provide any input >>>>
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y <<< type
Y/Yes >>>
The OCM configuration response file (/tmp/ocmconf.rsp) was successfully created.
3. Apply the patch as the root user with "opatch auto", passing the response file via
-ocmrf /tmp/ocmconf.rsp
Datapatch is run to complete the post-install SQL deployment for the PSU.
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
Please refer to the readme.html file of the patch for complete details.