
By Rajani Kumar Katam, Oracle RAC DBA, Satyam Computer Services Private Ltd.

Step-by-step installation of Oracle 11g (11.1.0.6.0) RAC on Red Hat Enterprise Linux AS 4, with screenshots.

The following is the sequence of steps to be executed on the nodes.

Install the Linux Operating System.

Install the Required Linux Packages for Oracle RAC (refer to the Oracle documentation for the required packages; the packages vary depending on the version of the operating system).

Network Configuration: using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts settings are the same on both nodes.

For example, specify the entries in the /etc/hosts file as below on both nodes.

/etc/hosts

127.0.0.1       localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.100   linux1
192.168.1.101   linux2

# Private Interconnect - (eth1)
192.168.2.100   linux1-priv
192.168.2.101   linux2-priv

# Public Virtual IP (VIP) addresses - (eth0)
192.168.1.200   linux1-vip
192.168.1.201   linux2-vip
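A quick sanity check, assuming the example addresses above, is to ping each name from both nodes (the VIP addresses will not respond yet; Oracle Clusterware brings them online later):

$ ping -c 1 linux1
$ ping -c 1 linux2
$ ping -c 1 linux1-priv
$ ping -c 1 linux2-priv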

Create "oracle" User and Directories


# groupadd -g 501 oinstall
# groupadd -g 502 dba
# groupadd -g 504 asm

# useradd -m -u 501 -g oinstall -G dba,asm -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app

Create a directory for Oracle Clusterware:

# mkdir -p /u01/app/crs
# chown -R oracle:oinstall /u01/app/crs
# chmod -R 775 /u01/app/crs
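To confirm the user and group setup, check the IDs (sample output, assuming the numeric IDs used above):

$ id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),504(asm)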

Create Mount Point for OCFS2 / Clusterware

Let's now create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files (the OCR file and the voting disk file).
# mkdir -p /u02/oradata/orcl
# chown -R oracle:oinstall /u02/oradata/orcl
# chmod -R 775 /u02/oradata/orcl

Configure the Linux Servers for Oracle

Edit the .bash_profile file and set the required environment variables on both nodes.

PATH=$PATH:$HOME/bin
export ORACLE_SID=hrms1        # use hrms2 on the second node; each RAC instance has its own SID
export ORACLE_HOME=/u02/app/oracle/db_home
export ORA_CRS_HOME=/u02/app/oracle/crs_home
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib
unset USERNAME
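Reload the profile in the current session and confirm the settings took effect:

$ . ~/.bash_profile
$ echo $ORACLE_HOME
/u02/app/oracle/db_home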

Swap Space Considerations

Installing Oracle Database 11g Release 1 requires a minimum of 1 GB of memory. Check the available physical memory and swap space with:

# cat /proc/meminfo | grep MemTotal
# cat /proc/meminfo | grep SwapTotal
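If swap falls short of the requirement, a temporary swap file can be added until proper swap is provisioned; a minimal sketch (the file name and the 1 GB size are illustrative):

# dd if=/dev/zero of=/swapfile bs=1M count=1024
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile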

Configuring Kernel Parameters and Shell Limits

On both nodes, add the following parameters to /etc/sysctl.conf, then apply them; sysctl -p reads the file and echoes each setting as it is applied:


# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
kernel.shmmax = 1073741823
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Setting Shell Limits for the oracle User


# cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

Configure RAC Nodes for Remote Access using SSH

(Refer to any UNIX documentation or to the Oracle RAC installation guide for configuring SSH.)
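In outline, key-based user equivalence can be set up as follows; this is only a sketch using this guide's example hostnames, and the referenced documentation covers the details:

$ mkdir -p ~/.ssh && chmod 700 ~/.ssh     (as oracle, on each node)
$ ssh-keygen -t rsa                       (accept the default file; choose a passphrase)
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys                    (on linux1)
$ ssh oracle@linux2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys linux2:~/.ssh/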

Enabling SSH User Equivalency for the Current Shell Session


$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa: xxxxx

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
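Each node should now reach the other without a password or passphrase prompt; a quick test from the same shell session:

$ ssh linux1 date
$ ssh linux2 date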

Reboot both nodes after configuring the kernel parameters.

Install & Configure Oracle Cluster File System (OCFS2)

# rpm -Uvh ocfs2-tools-1.2.6-1.el5.i386.rpm
# rpm -Uvh ocfs2-2.6.18-8.el5-1.2.6-1.el5.i686.rpm
# rpm -Uvh ocfs2console-1.2.6-1.el5.i386.rpm

Configure OCFS2
$ su -
# ocfs2console &

Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 cluster stack and bring up the "Node Configuration" dialog.

On the "Node Configuration" dialog, click the [Add] button. This will bring up the "Add Node" dialog. In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using linux1 / 192.168.1.100 for the first node and linux2 / 192.168.1.101 for the second node Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active". After verifying all values are correct, exit the application using [File] -> [Quit].

This needs to be performed on both Oracle RAC nodes in the cluster.
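For reference, the console records these settings in /etc/ocfs2/cluster.conf on each node; with the example values above it should look roughly like this (exact layout can vary by OCFS2 version):

node:
        ip_port = 7777
        ip_address = 192.168.1.100
        number = 0
        name = linux1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 1
        name = linux2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2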

Configure O2CB to Start on Boot


# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure

Format the OCFS2 Filesystem

Create a partition on the SAN or shared storage for storing the OCR file and voting disk files that are created at the time of the Clusterware installation (use the fdisk command as the root user to create the partition).

NOTE: It is always recommended to create 4 partitions so that redundant copies of the voting disk file and the OCR file can be maintained.
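A sketch of the fdisk session (/dev/sde is this guide's example shared device; yours may differ):

# fdisk /dev/sde
   n   (create a new partition)
   p   (primary)
   ... follow the prompts for partition number and size ...
   w   (write the partition table and exit)

# partprobe    (run on the other node so it re-reads the partition table)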
$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfs2 /dev/sde2

Mount the OCFS2 File system


$ su -
# mount -t ocfs2 -o datavolume,nointr -L ocfs2 /u02/oradata/orcl

(Here "ocfs2" is the label of the file system and /u02/oradata/orcl is the mount point where you want to mount that partition.)

Configure OCFS2 to Mount Automatically at Startup

We can do that by adding the following line to the /etc/fstab file on both Oracle RAC nodes in the cluster:
LABEL=ocfs2 /u02/oradata/orcl ocfs2 _netdev,datavolume,nointr 0 0
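After a reboot (or a manual mount), the volume can be verified with mount; sample output, assuming the example device and options above:

# mount | grep ocfs2
/dev/sde2 on /u02/oradata/orcl type ocfs2 (rw,_netdev,datavolume,nointr)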

10.Install & Configure Automatic Storage Management libraries (ASMLib 2.0)


# rpm -Uvh oracleasm-support-2.0.4-1.el5.i386.rpm
# rpm -Uvh oracleasm-2.6.18-8.el5-2.0.4-1.el5.i686.rpm
# rpm -Uvh oracleasmlib-2.0.3-1.el5.i386.rpm

Configuring and Loading the ASMLib Packages


$ su -
# /etc/init.d/oracleasm configure
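The configure script prompts for the interface owner and boot behavior; answers along these lines match the user and groups created earlier (exact prompt wording may vary between ASMLib versions):

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y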

Create ASM Disks for Oracle


$ su -
# /etc/init.d/oracleasm createdisk VOL1 /dev/sde1
# /etc/init.d/oracleasm createdisk VOL2 /dev/sde2

NOTE: Create the number of disks depending on your requirement.

# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4

10. Pre-Installation Tasks for Oracle Clusterware 11g


Verifying the Hardware and Operating System Setup with CVU
$ ./runcluvfy.sh stage -post hwos -n hcslinux1,hcslinux2 -verbose
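CVU can also be run just before the Clusterware installation itself; for example:

$ ./runcluvfy.sh stage -pre crsinst -n hcslinux1,hcslinux2 -verbose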

11. Installing Oracle Clusterware Software


Note: Before installing Clusterware, verify remote host access and user equivalence using ssh.
$ sh runInstaller

When the installer prompts for it, run root.sh as root on each node. Output from the first node:

[root@hcslnx01 crs_home]# sh root.sh
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /ocfs2/voting_file
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        hcslnx01
Cluster Synchronization Services is inactive on these nodes.
        hcslnx02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@hcslnx01 crs_home]#

Then run root.sh on the second node:

[root@hcslnx02 crs_home]# sh root.sh
WARNING: directory '/' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        hcslnx01
        hcslnx02
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
[root@hcslnx02 crs_home]#

12. Verify Oracle Clusterware Installation

Check Cluster Nodes


$ $ORA_CRS_HOME/bin/olsnodes -n
linux1  1
linux2  2

Confirm Oracle Clusterware Function


$ $ORA_CRS_HOME/bin/crs_stat -t -v
Name            Type          R/RA   F/FT   Target   State    Host
----------------------------------------------------------------------
ora.linux1.gsd  application   0/5    0/0    ONLINE   ONLINE   linux1
ora.linux1.ons  application   0/3    0/0    ONLINE   ONLINE   linux1
ora.linux1.vip  application   0/0    0/0    ONLINE   ONLINE   linux1
ora.linux2.gsd  application   0/5    0/0    ONLINE   ONLINE   linux2
ora.linux2.ons  application   0/3    0/0    ONLINE   ONLINE   linux2
ora.linux2.vip  application   0/0    0/0    ONLINE   ONLINE   linux2

Check CRS Status


$ $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy

Check Oracle Clusterware Auto-Start Scripts


$ ls -l /etc/init.d/init.*
-rwxr-xr-x 1 root root  2236 Oct 12 22:08 /etc/init.d/init.crs
-rwxr-xr-x 1 root root  5290 Oct 12 22:08 /etc/init.d/init.crsd
-rwxr-xr-x 1 root root 49416 Oct 12 22:08 /etc/init.d/init.cssd
-rwxr-xr-x 1 root root  3859 Oct 12 22:08 /etc/init.d/init.evmd

13. Install Oracle Database 11g Software


$ sh runInstaller

Installing Oracle 11g RAC software

NOTE: The above command has to be executed manually on the failed node by connecting as the oracle user.

Create the 11g RAC database by invoking the DBCA utility.
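DBCA is started from the database home as the oracle user; for example:

$ $ORACLE_HOME/bin/dbca

In the wizard, choose the Oracle Real Application Clusters database option and select both nodes.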

NOTE:

There are five node-level tasks defined for SRVCTL:


- Adding and deleting node-level applications
- Setting and unsetting the environment for node-level applications
- Administering node applications
- Administering ASM instances
- Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)
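For example, an individual instance or the node applications can be stopped and started with commands like these (database and host names from this guide's example):

$ srvctl stop instance -d orcl -i orcl2
$ srvctl start instance -d orcl -i orcl2
$ srvctl stop nodeapps -n linux1
$ srvctl start nodeapps -n linux1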

Status of all instances and services


$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2

Status of a single instance


$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node linux2

Status of node applications on a particular node


$ srvctl status nodeapps -n linux1

VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1

Status of an ASM instance


$ srvctl status asm -n linux1
ASM instance +ASM1 is running on node linux1.

List all configured databases


$ srvctl config database
orcl

Display configuration for our RAC database


$ srvctl config database -d orcl
linux1 orcl1 /u01/app/oracle/product/11.1.0/db_1
linux2 orcl2 /u01/app/oracle/product/11.1.0/db_1

Display the configuration for node applications - (VIP, GSD, ONS, Listener)
$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.

Display the configuration for the ASM instance(s)


$ srvctl config asm -n linux1
+ASM1 /u01/app/oracle/product/11.1.0/db_1

By Rajani Kumar Katam, Satyam Computer Services Private Ltd.
