
Installing Oracle RAC 10g Release 1 Standard Edition

on Windows - (2003 Server)


by Jeff Hunter, Sr. Database Administrator

Contents

1. Overview
2. Oracle Real Application Clusters (RAC) 10g Introduction
3. FAQs about Oracle10g Standard Edition and RAC
4. Configuring the Operating System and Hardware
5. Downloading Oracle RAC 10g Software
6. Installing Oracle Cluster Ready Services (CRS) Software
7. Stamp the Logical Drives for ASM
8. Installing Oracle10g Database Software with RAC
9. Ensure Valid Environment Variables on Both Nodes
10. Creating / Altering Tablespaces
11. Applying the Oracle10g Release 1 (10.1.0.4) Patch Set 2 for Microsoft Windows
12. Verifying TNS Networking Files
13. Verifying the RAC Cluster / Database Configuration
14. Starting & Stopping the Cluster
15. Creating Second Disk Group for Flash Recovery Area
16. Enterprise Manager - Database Console
17. Transparent Application Failover - (TAF)
18. About the Author

Overview

Oracle extended its high availability product offerings by licensing Oracle
RAC 10g with Standard Edition. This move has been welcomed by many customers,
who can now take advantage of an active-active database clustering solution without
having to move to the more expensive Enterprise Edition.

This article provides a detailed guide to the tasks required to install and configure
Oracle RAC 10g using Standard Edition on the Microsoft Windows Server 2003
operating environment. The configuration will consist of a two-node cluster, using
Oracle Real Application Clusters (RAC) and Automatic Storage Management (ASM)
for the physical database files. (Using ASM is a requirement when configuring Oracle
RAC 10g Standard Edition!)

Before discussing the details of the installation, let's take a conceptual look at what
the environment will look like:
[Figure: conceptual overview of the two-node Oracle RAC 10g environment]

The complete installation will consist of two phases. The first phase will be the
installation and configuration of the Cluster Ready Services (CRS) software. The
second phase will consist of installing the Oracle Database / RAC software. (All
software components are available from the Oracle Technology Network (OTN).) At
the end of the installation, we will then create a general purpose clustered database
with all sample schemas.

This article assumes the following:

• Neither of the nodes has any Oracle software installed.


• ASM will be used to store the physical clustered database files as this
is a requirement for using Oracle RAC 10g with Oracle Standard
Edition.
• The two files required to be shared for Cluster Ready Services will be
stored on logical (RAW) devices on shared storage: the Oracle Cluster
Registry (OCR) file and the CRS Voting Disk file. Oracle10g Standard
Edition does not support the use of the Oracle Cluster File System
(OCFS) for these two files, nor for any of the physical database
files. This is documented in Metalink Note ID: 358248.1.
• The operating system on both nodes will be Microsoft Windows Server
2003 - Enterprise Edition (32-bit). (This installation will also work
with Microsoft Windows 2000 (32-bit) with service pack 1 or higher.
RAC for Windows 64-bit is also supported on Windows Server 2003
Datacenter Edition for 64-bit Itanium and Windows Server 2003
Enterprise Edition for 64-bit Itanium)
• Installing the Oracle CRS and 10g Software only needs to be
performed from one of the nodes in the cluster. (The Oracle Universal
Installer will contact and copy the required software to all other nodes
in the cluster provided the privileges are configured correctly.) For the
purpose of this article, I will be initiating and performing all installs
from the node windows1. It is important to take note of the node that
you will be performing all installs from. All patches and further
installations need to also be initiated from this node. (Although this can
be reconfigured using the OUI.)
• The CRS and Oracle installation require local Administrator privileges
on both nodes. I will be using the local Administrator account to
perform all installations. Also, verify that the password for the account
is the same on both nodes.
• Because of several notable bugs in the Oracle 10.1.0.2 base release for
Windows, I will be applying the 10.1.0.4 patchset. This patchset will
be first applied to the Oracle Cluster Ready Services (CRS) software
and then to the Oracle database software.

Oracle Real Application Clusters (RAC) 10g Introduction

Oracle Real Application Clusters (RAC) is the successor to Oracle Parallel Server
(OPS) and was first introduced in Oracle9i. RAC allows multiple instances to access
the same database (storage) simultaneously. RAC provides fault tolerance, load
balancing, and performance benefits by allowing the system to scale out; at the
same time, because all nodes access the same database, the failure of one instance
does not cause the loss of access to the database.

At the heart of Oracle RAC 10g is a shared disk subsystem. Both nodes in the cluster
must be able to access all of the data, redo log files, control files and parameter files.
The data disks must be globally available in order to allow both nodes to access the
database. Each node has its own redo log and control files, but the other nodes must
be able to access them in order to recover that node in the event of a system failure.

Not all clustering solutions use shared storage. Some vendors use an approach known
as a federated cluster, in which data is spread across several machines rather than
shared by all. With Oracle RAC 10g, however, multiple nodes use the same set of
disks for storing data. With Oracle RAC 10g, the data files, redo log files, control
files, and archived log files reside on shared storage on raw-disk devices, a NAS,
ASM, or on a clustered file system. Oracle's approach to clustering leverages the
collective processing power of all the nodes in the cluster and at the same time
provides failover security.

The biggest difference between Oracle RAC and OPS is the addition of Cache Fusion.
With OPS, a request for data from one node to another required the data to be written
to disk first before the requesting node could read it. With Cache Fusion, data is
passed between nodes across the interconnect along with the locks that protect it.

Pre-configured Oracle RAC 10g solutions are available from vendors such as Dell,
IBM and HP for production environments.

FAQs about Oracle10g Standard Edition and RAC

One of the key reasons for delivering this article is the popularity RAC is
enjoying now that it is packaged free with Oracle10g Standard Edition. I have,
however, been fielding many questions regarding the constraints and supported
configurations for this type of install. In this section, I attempt to lay out some
of the more popular questions and answers that I have seen. I have taken great care
to make sure that the answers I provide here are accurate. While some of them rely on
official Oracle documentation, others were obtained after talking with Oracle
Corporation's technical support staff. Please let me know if any of the questions and
answers appear to be incorrect.

1. Can I use the Oracle Cluster File System (OCFS) with 10g
Standard Edition RAC?

No. OCFS is not supported with Standard Edition RAC. All database
files must use ASM (redo logs, recovery area, datafiles, control files,
etc.). It is recommended that the binaries and trace/log files (non-ASM
supported files) be replicated on both nodes rather than placed on any
shared disk. This is performed automatically by the installer.

2. Where is OCFS for 10g Standard Edition RAC?

OCFS is part of the Cluster Ready Services (CRS) installation and is
normally chosen (and then installed) via the "Disk Formatting Options"
page. This, however, is not available or supported for Standard
Edition. This is noted in the official Oracle documentation -
Oracle® Real Application Clusters - Quick Installation Guide for
Oracle Database Standard Edition 10g Release 1 (10.1.0.2.0) for
Windows / Part No. B13889-01:

On the "Select Disk Formatting Options" page, you must select
the "Do not format any logical drives" option.

Note: Do not select one of the options that require a formatted
drive because these options are implemented only in Oracle
Database 10g Enterprise Edition.

3. How can the Cluster Registry File and Voting Disk be put on an
Oracle cluster file system?

This is not possible with Standard Edition. (See the above answer for
the reference.) See the manual referenced above, section 7.2 "Prepare
Disks for Cluster Ready Services" for the only supported method of
preparing and creating the OCR and voting disks with Standard Edition.

4. Can the CRS Home (%ORA_CRS_HOME%) and Oracle10g
Software (%ORACLE_HOME%) be stored on an Oracle Cluster
File System?

The CRS home can never be on an OCFS drive. The Oracle home and
software may be on OCFS drives in Enterprise Edition, but not in
Standard Edition.

5. Can both nodes in the cluster share the same binaries for CRS and
for the Oracle10g install?

The CRS and Software installs must have separate homes but they can
be on a shared drive. Simply specify a shared drive at installation time.
However if you place them on shared drives then all instances of your
database will have to be stopped if you upgrade the software. The
database can be more available during software upgrades if the homes
are on local drives. Whether this is of consequence to you, only you
can decide.

6. Has OCFS V2 for Windows been released?

No. It is only in beta for Linux. See
http://oss.oracle.com/projects/ocfs2.

7. I believe that in OCFS V1 for Windows, you could not use it for
sharing the ORACLE_HOME for either the CRS binaries or the
10g ORACLE_HOME binaries.

In Windows, the latest OCFS versions delivered with V9205 and 10g
allow you to use OCFS for the ORACLE_HOME. In neither OCFS V1 nor
V2 are you able to place the CRS home on an OCFS drive.

8. In Oracle10g RAC Standard Edition, is the "only" place to put
the Voting Disk and Cluster Registry File on a logical (RAW)
partition? From what I understand, it cannot be placed on OCFS
for Standard Edition. Does this rule out "any" clustered file
system then, or just OCFS? (i.e. Polyserve, NAS using NFS)

It rules out OCFS, not shared file systems in general. Standard Edition
does, however, assume that the files are RAW. Any shared disks
supported by your hardware vendor, other than Network Attached
Storage (NAS), can be used for Windows.
9. The OUI does not seem to "prevent" us from using the OCFS as a
storage mechanism for the Voting Disk and Cluster Registry File
when installing Standard Edition. (Not that we would do that - we
would always honor the license agreement we have - just trying to
better understand our options!)

True, OUI does not prevent you from selecting OCFS but it won't work
with Standard Edition.

10. Is it possible to put the Voting Disk and Cluster Registry File on
OCFS in Enterprise Edition?

Yes, in Enterprise Edition you can use OCFS for the OCR and voting
disks.

11. You mention that the CRS Home (%ORA_CRS_HOME%) can
never be located on OCFS. Is this a limitation only for Standard
Edition? Can it be done in Enterprise Edition? Is it just not
possible on any platform (Solaris, Linux, Windows)? I believe this
is not possible simply from the fact that it would be a catch-22 -
You must install CRS and "that" is where the OCFS is located.
(There is no special install just for OCFS like there was in 9i.)

The CRS home can NEVER be stored on OCFS, in either SE or EE. In
Windows, the OCFS software is part of the Windows CRS installation,
but even on Linux this is not supported. The other platforms don't
support OCFS at the moment and I don't know if they ever will.

12. As for the Oracle RAC 10g software (%ORACLE_HOME%), I
read somewhere that it is not possible to use OCFS V1 for "any"
%ORACLE_HOME%.

This restriction is only valid for the Linux version of OCFS. In
Windows, you can use OCFS for the Oracle home (in Enterprise Edition)
and this has been true since V9.2.

13. I have been following the activities on OCFS V2 for Linux, but
where do we look to find information on OCFS V2 for Windows?
(i.e. Beta versions, release dates?)

The Linux versions of OCFS are open source and the progress can be
followed via http://oss.oracle.com.

The Windows version of OCFS, like the OS itself, is proprietary, and
Oracle has no separate information or installation site. For Windows
there really is no V2 OCFS. OCFS is always part of the CRS
installation on Windows, and newer versions of OCFS for Windows will
be distributed as part of the CRS patchset releases.
14. From what I understand, OCFS V1 is not acceptable for "any"
ORACLE_HOME.

The OCFS V1 restrictions regarding the Oracle home only apply to
Linux. In Windows, the current versions of OCFS (V9205+ and 10g)
allow you to use OCFS for the Oracle home, BUT only for Enterprise
Edition.

On no version or platform can you place the CRS home on an OCFS
drive. The CRS installation always comes before the database
software installation.

Hardware & Costs

The hardware used to build our Oracle RAC 10g environment consists of two
Windows servers and components that can be purchased over the Internet.
Server 1 - (windows1)

  Dell Dimension 2400 Series - $620
    - Intel Pentium 4 Processor at 2.80GHz
    - 1GB DDR SDRAM (at 333MHz)
    - 40GB 7200 RPM Internal Hard Drive
    - Integrated Intel 3D AGP Graphics
    - Integrated 10/100 Ethernet
    - CDROM (48X Max Variable)
    - 3.5" Floppy
    - No monitor (Already had one)
    - USB Mouse and Keyboard

  1 - Ethernet LAN Card - $20
    - Linksys 10/100 Mbps - (Used for Interconnect to windows2)
    Each Windows server should contain two NIC adapters. The Dell Dimension
    includes an integrated 10/100 Ethernet adapter that will be used to connect
    to the public network. The second NIC adapter will be used for the private
    interconnect.

  1 - SCSI Card - $195
    - Dual Differential Ultra/Wide SCSI (PCI) [595-4414] (X6541A) [Manf# 375-0006]
    Note that you will need to choose a host adapter that is compatible with
    your shared storage subsystem.

Server 2 - (windows2)

  Dell Dimension 2400 Series - $620
    - Intel Pentium 4 Processor at 2.80GHz
    - 1GB DDR SDRAM (at 333MHz)
    - 40GB 7200 RPM Internal Hard Drive
    - Integrated Intel 3D AGP Graphics
    - Integrated 10/100 Ethernet
    - CDROM (48X Max Variable)
    - 3.5" Floppy
    - No monitor (Already had one)
    - USB Mouse and Keyboard

  1 - Ethernet LAN Card - $20
    - Linksys 10/100 Mbps - (Used for Interconnect to windows1)
    Each Windows server should contain two NIC adapters. The Dell Dimension
    includes an integrated 10/100 Ethernet adapter that will be used to connect
    to the public network. The second NIC adapter will be used for the private
    interconnect.

  1 - SCSI Card - $195
    - Dual Differential Ultra/Wide SCSI (PCI) [595-4414] (X6541A) [Manf# 375-0006]
    Note that you will need to choose a host adapter that is compatible with
    your shared storage subsystem.

Miscellaneous Components

  Shared Storage / Disk Array - $499
    - Sun StorEdge D1000 Disk Array (JBOD)
    I am using a Sun StorEdge D1000 Disk Array in a JBOD configuration with
    12 x 9GB 10000 RPM UltraSCSI hard drives.

  2 - SCSI Cables
    - 2 Meter External SCSI Cable - [530-2453] [HD68 to VHDCI68] (X3832A) - $40
    - 2 Meter External SCSI Cable - [530-2453] [HD68 to VHDCI68] (X3832A) - $40
    Note that you will need to choose cables that are compatible with your
    shared storage subsystem and I/O host adapter.

  1 - Ethernet hub or switch - $30
    - Linksys EtherFast 10/100 5-port Ethernet Switch - (Used for interconnect
      int-windows1 / int-windows2)

  4 - Network Cables
    - Category 5e patch cable - (Connect windows1 to public network) - $5
    - Category 5e patch cable - (Connect windows2 to public network) - $5
    - Category 5e patch cable - (Connect windows1 to interconnect ethernet switch) - $5
    - Category 5e patch cable - (Connect windows2 to interconnect ethernet switch) - $5

  Total - $2,299

A question I often receive is about substituting the Ethernet switch (used for
interconnect int-windows1 / int-windows2) with a crossover CAT5 cable. I would
not recommend this. I have found that when using a crossover CAT5 cable for the
interconnect, whenever I took one of the PCs down, the other PC would detect a
"cable unplugged" error, and thus the Cache Fusion network would become
unavailable.

Configuring the Operating System and Hardware

In this section, we look at the operating system and talk a little about the hardware
requirements for installing Oracle RAC 10g (Standard Edition) on the Microsoft
Windows Server 2003 platform. All topics discussed in this section need to be
performed on both nodes in the cluster.

Memory

Verify that both nodes are equipped with a minimum of 512MB of RAM (for 32-bit
systems) and 1GB (for 64-bit systems). If either of the nodes fail to meet the memory
requirements, you will need to install more RAM before continuing.

Verify that the size of the configured swap space (also known as the paging file size)
is at least twice the amount of physical RAM. From the Control Panel, open the
System applet, select the Advanced tab, and then click the Settings button in the
Performance section.

For Windows 2003 users, click on the Advanced tab from the Performance Options
dialog.
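To check the installed physical memory from the command line instead, here is a
quick sketch (systeminfo ships with Windows XP and Windows Server 2003):

   C:\> systeminfo | findstr /C:"Total Physical Memory"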

Hard Drives

Each of the nodes in the cluster will need to have a local hard drive installed as well
as access to a set of disks that can be shared between both nodes in the cluster. Any
shared disks that can be supported by your vendor can be used with the exception of a
Network Attached Storage (NAS) device. The shared disk subsystem must be
connected to both nodes in the cluster and all nodes must be able to read and write to
the disks.

The following table describes where the different software components will be stored:

Component                    Free Disk Required                Location
---------------------------  --------------------------------  -------------------------------------------------
Cluster Ready Services       100MB on both nodes for 32-bit    The storage can be any local system drive, but
software                     systems and 140MB on both nodes   the named device (the drive letter) must be the
                             for 64-bit systems.               same on both nodes.

Oracle10g Database software  1GB on both nodes.                The storage can be any local system drive, but
                                                               the named device (the drive letter) must be the
                                                               same on both nodes.

Database files               7GB                               The database files will be hosted on the shared
                                                               disks using ASM.

CRS Shared Files             150MB                             The two CRS shared files will be hosted on the
                                                               shared disks using a logical (RAW) volume. The
                                                               Oracle Cluster Registry (OCR) file takes ~100MB
                                                               while the CRS Voting Disk takes ~20MB.

Network

We now look at the network configuration for both nodes. The nodes in the cluster
need to be able to communicate with each other (known as the interconnect) and to
the public network where external clients can establish connections to Oracle through
using TCP/IP. Although not a strict requirement, it is highly recommended that two
NIC interfaces be installed in both nodes: one for the public network and another
(preferably on a different / private subnet) for the interconnect.

A virtual IP address (VIP) will be configured on both nodes in the cluster to provide
for high availability and failover. This VIP address can be moved between the nodes
in a cluster in case of failure. The VIP addresses are managed by the Cluster Ready
Services (CRS) software component. In order to support the VIP address feature, each
of the nodes will require an unused IP address that is compatible with the public
network's subnet and netmask. Like the publicly accessible IP address for the nodes in
the cluster, the VIP address and hostname should be stored in the domain name
system (DNS).

Communication between the two nodes (or all of the nodes in the cluster) requires a
private network. The private network should only allow traffic for the interconnect
and should not be allowed access from outside of the cluster. Both nodes should have
a separate network adapter configured for this private network. While the public
network (and VIP address) should be entered into DNS, the private network should
not be. The network configuration (host name and IP address) for the private network
should be done on each node in the cluster using their hosts file:
%SystemRoot%\system32\drivers\etc\hosts. The following table displays my
network configuration used for this article:

Host Name Type IP Address Registered In


windows1.idevelopment.info Public 192.168.1.121 DNS
windows2.idevelopment.info Public 192.168.1.122 DNS
vip-windows1.idevelopment.info Virtual 192.168.1.221 DNS
vip-windows2.idevelopment.info Virtual 192.168.1.222 DNS
int-windows1.idevelopment.info Private 192.168.2.121 %SystemRoot%\system32\drivers\etc\hosts
int-windows2.idevelopment.info Private 192.168.2.122 %SystemRoot%\system32\drivers\etc\hosts
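
For reference, here is a sketch of the corresponding private-network entries in the
hosts file on both nodes (using the addresses from the table above):

   # %SystemRoot%\system32\drivers\etc\hosts
   192.168.2.121    int-windows1.idevelopment.info    int-windows1
   192.168.2.122    int-windows2.idevelopment.info    int-windows2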

Finally, verify the following items to complete the network configuration:

1. Check that the public network interfaces have the same
interface names on both nodes in the cluster. For the purpose of
this example, I am using the first NIC interface on each of the
nodes for the public network with a name of Local Area
Connection. Similarly, check that the private network
interfaces have the same interface names on both nodes in the
cluster. I am using the second NIC interface on each of the
nodes for the private network with a name of Local Area
Connection 2.

To change the name of a network interface, go to the Network
Connections window (or Network and Dial-up Connections for
Windows 2000) and right-click the icon of the network
interface for which you need to change the name. Select
Rename and enter the new name.
2. Verify that your public interface appears FIRST in your
ipconfig list. Perform the following steps:
o Open the Network Connections window (or Network
and Dial-up Connections for Windows 2000).
o In the Advanced pull-down menu, click Advanced
Settings...
o If the public interface name (which is Local Area
Connection for my example) is not listed as the first
under the Adapters and Bindings tab, then select it and
click the arrow to move it to the top of the list.
o Click OK to save the settings and then exit the network
setup dialog.
3. On both nodes, verify that the private network node name and
IP address are configured in the
%SystemRoot%\system32\drivers\etc\hosts file. Since
nothing outside of the cluster needs access to this network, it
does not need to be configured in DNS.
4. From a Command window on each of the nodes, execute the
ping command, using the node name alias for the other node's
private and public network:

   C:\> ping windows1.idevelopment.info
   C:\> ping windows2.idevelopment.info
   C:\> ping vip-windows1.idevelopment.info
   C:\> ping vip-windows2.idevelopment.info
   C:\> ping int-windows1.idevelopment.info
   C:\> ping int-windows2.idevelopment.info

If any of the ping commands fail, there is a network
configuration problem that needs to be resolved.

5. Verify that you have administrative privileges on the other node
from the node where you will be performing the installation. To do
this, enter the following command for the second node that is a
part of the cluster from the node you will be installing from:

C:\> net use \\<node_name>\C$

where node_name is the node name. For example, from windows1, enter:

C:\> net use \\windows2\C$


Oracle Corporation strongly recommends using the same user name and password
on each node in a cluster or use a domain user name. If you use a domain user
name, log on under a domain with a username and password that has administrative
privileges on each node.

Configuring the Disk System

We now look at one of the more critical steps in the process: configuring
the shared disk subsystem for use with RAC Standard Edition.
1. Disable Write Caching

The first step is to disable write caching on all shared disks that
are intended to be used for database files. This needs to be
performed on both nodes in the cluster:

o Navigate to [Start] -> [Settings] -> [Control Panel] ->
[Administrative Tools] -> [Computer Management] ->
[Device Manager] -> [Disk Drives]
o Expand the Disk drives hive and double-click the first
listed shared drive.
o Under the Disk Properties tab (or the Policies tab for
Windows 2003) for the selected drive, uncheck the
option that enables the write cache.
o Continue this process for all other shared drives in the
Disk drives hive to disable write cache as previously
described.
2. Enable Automount on Each Node (Windows 2003 Only)

For Windows 2003 systems only, enable the disk automount
option by performing the following steps. This also will need to
be performed on both nodes in the cluster.

o From a Command window, enter the command diskpart.
o From the DISKPART> prompt, enable the disk automount
  option by entering the automount enable command
  and then confirming its successful execution as follows:

  DISKPART> automount enable
  Automatic mounting of new volumes enabled.

o Type exit to exit from the Diskpart session.


o After performing the above actions on both nodes in the
cluster, each of the machines will need to be rebooted.
3. Prepare Disks for Cluster Ready Services (CRS)

Oracle Cluster Ready Services (CRS) provides the overall
framework and management for all cluster activities. CRS
requires two critical files to be placed in logical drives on one
of the shared disks: one logical drive for the Oracle Cluster
Registry (OCR) file and another logical drive for the CRS
Voting Disk. The following steps need to be performed from
one of the nodes in the cluster in order to create and configure
the two logical drives for these files:

o From one of the nodes in the cluster, run the Windows Disk
  Administration tool by clicking [Start] -> [Settings] ->
  [Control Panel] -> [Administrative Tools] -> [Computer
  Management] -> [Storage]. Then expand the [Storage]
  folder to access [Disk Management].
o Now, identify a shared disk that does not contain a
primary partition and has free space available in one or
more extended partitions.

In this article, I will choose the entire first disk (from
the twelve disks in the array) to create the two logical
drives. None of the twelve disks in my array have been
initialized and are completely Unallocated. So to start,
I needed to create an extended partition for the entire
disk. This is done by right clicking on the first disk and
selecting New Partition... and clicking [Next] on the
Welcome screen. In the "Select Partition Type" screen,
choose the "Extended partition" radio button. On the
next screen, keep the default size of the partition which
contains the entire disk and hit [Next]. Finally, close
out the wizard by clicking the [Finish] button. The
disk will now be displayed with a bright green band
(free space) surrounded by a darker green border
(Extended partition).

o To create the first logical drive, right click inside an
  unused part of an extended partition (the bright green
  band indicating free space) and choose New Logical
  Drive....

  Click [Next] on the Welcome screen to continue.

  On the next screen, "Select Partition Type", ensure that
  the option "Logical drive" is selected and click [Next].
  (It should be the only option available!)

  On the next screen, "Select Partition Size", enter 20MB as
  the size that you want for the Voting Disk logical drive
  and click [Next].

  On the next screen, "Assign Drive Letter", choose the
  option Do not assign a drive letter or drive path and
  click [Next].

  On the next screen, "Format Partition", choose the option
  Do not format this partition and click [Next].

  This is the last page of the wizard. Click [Finish] to
  create the logical partition.

o Repeat the previous task, but this time you will be
  creating a logical drive for the Oracle Cluster Registry
  (OCR) file. The only change to make is to replace
  20MB with 100MB for the size of the logical drive.
o Reboot both nodes in the cluster.
o Verify that both new logical drives are visible to both
  nodes in the cluster and that neither of the machines has
  drive letters assigned to them. (This happens to me a
  lot: the second node often assigns a drive letter to one
  of the two new logical drives!) If this happens, simply
  right click on the logical drive that was created and
  select Change Drive Letter and Paths.... In the dialog
  box that comes up, click on the drive letter and click
  the Remove button, then confirm the warning dialog box to
  remove the drive letter.
4. Prepare Disks for Database Storage (for use with ASM)

The last step is to configure the shared disks for use with
Automatic Storage Management (ASM). Oracle's ASM storage
product consists of one or more disk groups, each of which can
span multiple disks. I will be using Normal Redundancy when
creating my ASM disk group. This type of configuration
requires at least two logical drives. For the purpose of this
example, I will be using a total of four disks for one ASM disk
group, named ORCL_DATA1, to store all physical database files. I
will then be creating another ASM disk group, named ORCL_FRA1,
consisting of four disks for the flash recovery area. So, I will
need to initialize a total of eight drives from the array. To
prepare each logical drive, follow the steps below for each drive.

Keep in mind that Oracle highly recommends that each disk in the disk group be the
same size. When configuring disks for an ASM disk group, it is best practice to
create extended partitions of the same size on each of the disks. There should only
be one extended partition on each disk, and it should take up the entire disk.

o From one of the nodes in the cluster, run the Windows Disk
  Administration tool by clicking [Start] -> [Settings] ->
  [Control Panel] -> [Administrative Tools] -> [Computer
  Management] -> [Storage]. Then expand the [Storage]
  folder to access [Disk Management].
o Identify a shared disk that does not contain a primary
partition. If the disk is Unallocated (has no partitions at
all), right click the disk and select New Partition....
Traverse the New Partition wizard to create an
Extended partition the size of the entire disk. Once you
have all shared disks configured with extended
partitions, right click inside an unused part of an
extended partition.
o Choose Create Logical Drive in the option window
and a new wizard appears for configuring a new logical
drive. Click [Next] at the Welcome screen.
o On the "Select Partition Type" screen, choose Logical
drive for the partition type to create and click [Next].
o On the "Specify Partition Size" screen, enter the size for
the partition to create. All of the partitions that you
create for the ASM disk groups should include as much
of the free space on the disk as possible. Additionally,
the partitions for all disks in the ASM disk group should
be the same size. Click [Next].
o On the next two screens, choose the option to Do not
assign a drive letter or drive path and then choose Do
not format this partition. Click [Finish] on the last
page of the wizard.
o Continue to create Logical drives for all disks that you
intend to use as ASM disks. For my example, I have
seven more to go!
o After creating all Logical drives, reboot both nodes in
the cluster.
o Verify that all new logical drives are visible to both
  nodes in the cluster and that neither of the machines has
  drive letters assigned to them. (This happens to me a
  lot: the second node often assigns a drive letter to one
  of the new logical drives!) If this happens, simply right
  click on the logical drive that was created and select
  Change Drive Letter and Paths.... In the dialog box that
  comes up, click on the drive letter and click the Remove
  button, then confirm the warning dialog box to remove the
  drive letter. (A command-line alternative to the
  partitioning steps above, using diskpart, is sketched
  after this list.)
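
For those who prefer the command line, the extended/logical partition steps above
can also be scripted with diskpart. A minimal sketch, assuming the shared disk
appears as disk 1 (run list disk first and adjust; sizes are in MB and mirror the
20MB Voting Disk and 100MB OCR drives used earlier):

   C:\> diskpart

   DISKPART> list disk
   DISKPART> select disk 1
   DISKPART> create partition extended
   DISKPART> create partition logical size=20
   DISKPART> create partition logical size=100
   DISKPART> exit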

Downloading Oracle RAC 10g Software

Overview

The next logical step is to install Oracle Cluster Ready Services and the Oracle
Database 10g software. However, we must first download and extract the required
Oracle software packages from the Oracle Technology Network (OTN).
If you do not currently have an account with Oracle OTN, you will need to create
one. This is a FREE account!

In this section, we will be downloading and extracting the required software from
Oracle to only one of the Windows nodes in the RAC cluster - namely windows1. This
is the machine where I will be performing all of the installs from. The Oracle installer
will copy the required software packages to all other nodes in the RAC configuration.
Verify that you have administrative privileges on the second node from the node
where you will be performing the installation. To do this, enter the following
command for each node that is a part of the cluster from the node you will be
installing from:
C:\> net use \\<node_name>\C$
where node_name is the node name. For example, from windows1, enter:
C:\> net use \\windows2\C$
Also, the password for the account (the local Administrator account in my
example) should be the same on both nodes in the cluster!

Login to one of the nodes in the Windows RAC cluster as the user that will be
performing the installs. In this example, I will be downloading the required Oracle
software to windows1 and saving them to "C:\orainstall\crs" and
"C:\orainstall\db".

Oracle Database 10g Release 1 (10.1.0.2) for Microsoft Windows (32-bit)

First, download the Oracle Database 10g Release 1 (10.1.0.2) for Microsoft Windows
(32-bit).

Oracle Database 10g Release 1 (10.1.0.2) (Enterprise, Standard, Personal
Edition for Microsoft Windows (32-bit))

• 10g_win32_db.zip (604,263,765 bytes)

Oracle Cluster Ready Services Release 1 (10.1.0.2) for Microsoft Windows (32-bit)

Next, we need to download Oracle Cluster Ready Services (CRS) for Windows
32-bit. This can be downloaded from the same page used to download the Oracle
Database Server:

Oracle Cluster Ready Services Release 1 (10.1.0.2) for Microsoft Windows (32-bit)

• 10g_win32_crs.zip (87,117,880 bytes)

Extract the two packages you downloaded to a temporary directory. In this example, I
will use "C:\orainstall\crs" and "C:\orainstall\db".

Installing Oracle Cluster Ready Services (CRS) Software

This section describes the first phase of the installation of Oracle RAC 10g - installing
the Cluster Ready Services (CRS).
1. We start by running the setup.exe command from within the staging
directory we downloaded the software to (or from the CD-ROM if you are
installing from one). This will start the Oracle Universal Installer (OUI).

2. cd \orainstall\crs
   setup.exe

3. Click [Next] on the Welcome page.
4. On the next page, "Specify File Locations", you can simply accept the
default values for the CRS Home (and path) or supply your own
values. When finished, click [Next] to continue.
5. Select the Language (or languages) for your CRS installation on the
"Language Selection" page, then click [Next] to continue.
6. The next page, "Cluster Configuration" is where we start to configure
the actual cluster software. You will need to provide a cluster name.
The default value of crs is acceptable in most configurations. It should
only be changed if the name is not unique.

Next, enter the public and private node name for both nodes in the
cluster. Neither of the node names should include the domain qualifier.
The values I supplied are as follows:

Public Node Name Private Node Name


windows1 int-windows1
windows2 int-windows2

Click [Next] to continue.

7. On the next page, "Specify Network Interface Usage", the OUI displays a
list of cluster-wide interfaces. The default value for each of the
interfaces is "Do Not Use". For each of the Interface Names, select
one to be Public and the other to be Private:
   Interface Name             Subnet         Interface Type
   Local Area Connection      192.168.1.0    Public
   Local Area Connection 2    192.168.2.0    Private
8. Click [Next] to continue.
9. On the next screen, "Select Disk Formatting Options", you MUST select
the Do not format any logical drives option. Oracle does not support
using the Oracle Cluster File System (OCFS) in Standard Edition.
Click [Next] to continue.
Do not select one of the options that require a formatted drive because these options
are implemented only in Oracle Database 10g Enterprise Edition.
10. On the next page, "Disk Configuration - Oracle Cluster Registry
(OCR)", locate the partition that we created to hold the OCR (100MB)
file and select that partition's disk number and partition number from
the list. Click [Next] to continue.
11. On the next page, "Disk Configuration - Voting Disk", locate the
partition that we created to hold the Voting disk (20MB) file and select
that partition's disk number and partition number from the list. Click
[Next] to continue.
12. On the Summary page, click [Install] to start the installation
process. The OUI displays the Install page with an installation progress
bar.
13. At the end of the installation phase, the OUI runs a series of
configuration tools, during which it displays the "Configuration
Assistants" page.
14. After the configuration tools complete their processing, (which the user
can monitor on the "Configuration Assistants" page), the OUI displays
the "End of Installation" page.
15. Click the [Exit] button to exit the OUI.

This completes phase one of the Oracle RAC 10g installation.

Four new services should now be running on both nodes in the cluster.

• Oracle Object Service
• OracleCRService
• OracleCSService
• OracleEVMService

Verify this by opening the Windows "Services" application or running the net start
command from a Command window on both nodes in the cluster.
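
From the command line, a quick way to confirm this is to filter the list of running
services for Oracle entries (a sketch; net start lists all started services):

   C:\> net start | findstr /i "Oracle"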

Stamp the Logical Drives for ASM

With Oracle RAC 10g Standard Edition, Oracle requires you to use ASM for all
database (and flash recovery) files. To enable disk discovery during the Oracle
database installation, the logical drives to be used to store the database files (and
those to be used for the flash recovery area) must be stamped with an ASM header
using a GUI tool called asmtoolg. All disk names created by this tool begin with the
prefix ORCLDISK for identification purposes.

The following procedure should be used to stamp all logical drives that will be used
to store database and flash recovery files. These actions should only be performed
on one of the nodes in the cluster, preferably the node from which you performed the
CRS installation.

1. Open the tool by running the %ORA_CRS_HOME%\bin\asmtoolg
command, where %ORA_CRS_HOME% is the path you used to install the
Cluster Ready Services software.
2. On the first screen, select Add or change label and click [Next].
3. On the next screen, select the disks to stamp. Hold down the Control
key when selecting multiple disks. All of the logical disks you created
previously should have a status of Candidate device. Leave the
checkbox checked and the value for Generate stamps with this
prefix set to DATA. Click [Next] to continue.
4. On the next screen, leave all ASM link names to their default value and
click [Next] to continue.
5. Click [Finish] on the final screen to save your work and exit from the
tool.
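
A command-line sibling of asmtoolg, named asmtool, ships in the same bin
directory and can be used instead. A minimal sketch (the device path below is
illustrative only; run the -list option first to see the actual candidate devices):

   C:\> %ORA_CRS_HOME%\bin\asmtool -list
   C:\> %ORA_CRS_HOME%\bin\asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0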

Installing Oracle10g Database Software with RAC

We now get to start the second and final stage for a fully configured Oracle RAC 10g
environment - installing the Oracle10g database software with RAC.

1. As with installing the CRS software, navigate to the staging directory
we downloaded the Oracle database software to and run the setup.exe
file (or run it from the CD-ROM if you are installing from one). This
will start the Oracle Universal Installer (OUI).

2. cd \orainstall\db\Disk1
   setup.exe

3. Click [Next] on the Welcome page.


4. On the next page, "Specify File Locations", you can simply accept the
default values for the Oracle Home (and path) or supply your own
values. When finished, click [Next] to continue.
The Oracle home name and path that you use in this step MUST be different from
the home that you used for the CRS installation. You must not install the
Oracle10g database software with RAC into the same home where you installed the
CRS software.
5. On the next screen, the "Specify Hardware Cluster Information Mode"
page, choose the default, which is "Cluster Installation", when the OUI
detects that you are performing this installation on a cluster. Also,
check the checkboxes for both nodes that you will be performing the
install to. By default, the node you are performing the install from (the
local node) is already checked. After checking both nodes, click
[Next] to continue.
6. On the next screen, "Select Installation Type", select the Standard
Edition option and click [Next] to continue.
7. On the next screen, "Select Database Configuration", select the type of
database you would like to create. For the purpose of this example, I
want to perform a "General Purpose" configuration, which is the
default option. After clicking the [Next] button, the OUI will display
the "Specify Database Configuration Options" page:
o In the "Database Naming" section, enter a value for the Global
Database Name and accept or change the generated value for
the SID. For this example, I will be using:
 Global Database Name:
ORCL.IDEVELOPMENT.INFO
 SID: ORCL
o In the "Database Character Set" section, accept the default
value, which is based on your system locale.
o In the "Database Examples", section, select the Create
database with sample schemas option. Click [Next] to
continue.
8. On the "Select Database Management Option" page, accept the default
values and click [Next] to continue.
9. On the "Specify Database File Storage Option" page, you must select
Automatic Storage Management (the other options are only
supported by the Enterprise Edition of Oracle10g). Click [Next] to
continue.
10. On the "Specify Backup and Recovery Options" page, you should first
select Enable Automated Backups. The Automatic Storage
Management option is selected by default and must be used with
Standard Edition of Oracle10g. In the "Backup Job Credentials"
section, enter the username and password of your current Windows
session user. Click [Next] to continue.
11. On the "Configure Automatic Storage Management" page, the OUI
will display all available disk partitions and locations. In this section, I
wanted to create two ASM disk groups: one for the physical database
files and another for the flash recovery area. The OUI, however, only
allows is to create a single disk group to be used for the physical
database files and names it DATA by default. With this restriction, I only
created the one disk group. I did change the name of the disk group to
ORCL_DATA1 and only selected four of the member disks. I will be
manually creating the second disk group and assigning it to the flash
recovery area a bit later in this article.
The OUI will only list logical drives located on disk that have been stamped with
the asmtoolg utility.
If is possible that the OUI will not find the disks using the default location. If this is
case, you can click on the "Change Disk Discovery Path..." button and enter the
value "\\.\ORCLDISK*".
13. As for the "Redundancy" section, keep the default choice of Normal.
For this type of configuration, you will need to select at least two disk
partitions. Click [Next] to continue.
14. On the "Specify Database Schema Passwords" page, enter and then
confirm passwords for all of the privileged database accounts. Click
[Next] to continue.
15. On the "Summary" page, the OUI lists all of the software components
to be installed on both nodes in the cluster. After verifying the
installation details, click [Install] to start the installation process!
During the installation process, the OUI displays a progress bar.
16. When the installation is complete, the OUI displays the Welcome
screen for the Virtual IP Configuration Assistant (VIPCA).
o On the VIPCA Welcome screen, click [Next].
o On the "Network Interfaces" page, identify and select
(highlight) the network interface card (NIC) to which you want
to assign your VIP addresses. Click [Next] to continue.
I have no idea why the VIPCA does this, but it selects both interfaces "Local Area
Connection" and "Local Area Connection 2" by default. You must only select your
public NIC interface. For me, I select only "Local Area Connection". If you do
happen to select your private NIC on this page, (which will be highlighted by
default), your cluster will not function correctly.
o On the "Virtual IPs for cluster nodes" page, enter the virtual IP
(VIP) addresses that identified earlier for both nodes.
Node name IP Alias Name IP address Subnet Mask
windows1 vip-windows1 192.168.1.221 255.255.255.0
windows2 vip-windows2 192.168.1.222 255.255.255.0
o Click [Next] to continue.
o After verifying all information in the VIPCA Summary page,
click the [Finish] button. A Configuration Assistant Progress
Dialog page is then displayed with a progress bar.
o When the VIPCA completes, click [OK] on the "Configuration
Assistant Progress Dialog" page to see the results window.
Click Exit to exit the VIPCA and return to the OUI dialog.
17. When the VIPCA is complete, it will run the Oracle Network
Configuration Assistant (NetCA) and then the Database Configuration
Assistant (DBCA). There is no manual intervention required during
these two configuration tools.
18. Toward the end of the DBCA configuration session, the DBCA will
display the "Password Management" option. You can click the
"Password Management" button to display the "Password
Management" dialog. Keep in mind that this is optional. From here,
you can unlock or change the password for any of the default database
accounts. When you are completed with the "Password Management"
dialog, click OK to finish.
19. You can then click the OK button on the DBCA final screen. Although
it will seem like the OUI has stalled, it is preparing to start the Cluster
Database. The "Start Cluster Database" dialog will appear during this
stage. The appearance of the "Start Cluster Database" dialog indicates
the completion of the creation of the database. The dialog also starts
the cluster database instance on both nodes. When this process
completes, the "End of Installation" page will appear. Click Exit to exit
the OUI.

And that's it! The second and final phase of the installation is complete. The
remainder of this article is dedicated to the final configuration steps that should be
performed.
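
As a quick sanity check after the installation, you can query the cluster resources
with srvctl from either node. A sketch (orcl, windows1, and windows2 are the
database and node names used in this article):

   C:\> srvctl status database -d orcl
   C:\> srvctl status nodeapps -n windows1
   C:\> srvctl status nodeapps -n windows2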

Ensure Valid Environment Variables on Both Nodes


It is important to verify the correct environment variables (path name for the Oracle
home and ORACLE_SID) on both nodes in the cluster. Complete the following on
both nodes:

1. Navigate to [Start] -> [Settings] -> [Control Panel] -> [System] ->
[Advanced] -> [Environment Variables]
2. In the "System variables" dialog, select the Path variable and ensure
that the value for the Path variable contains %ORACLE_HOME%\bin,
where %ORACLE_HOME% is the new Oracle home for the Oracle10g
database software. If the variable does not contain this value (and the
following values), then click Edit and add this value to the start of the
Path variable definition in the "Edit System Variable" dialog. Click OK
when complete. Here is a list of other values that I have defined at the
start of my Path environment variable for each node:
   C:\oracle\product\10.1.0\db_1\bin;
   C:\oracle\product\10.1.0\db_1\jre\1.4.2\bin\client;
   C:\oracle\product\10.1.0\db_1\jre\1.4.2\bin;
   C:\oracle\product\10.1.0\crs\jre\1.4.2\bin\client;
   C:\oracle\product\10.1.0\crs\jre\1.4.2\bin;

3. I also add the ORACLE_SID variable to both nodes in the cluster.
Keep in mind that the SID I configured during the database
configuration was ORCL. The SID on each node, however, will be
configured as follows when the instances are created:

   Node 1 (windows1): ORACLE_SID=ORCL1
   Node 2 (windows2): ORACLE_SID=ORCL2

4. When finished verifying the Path variable and adding the
ORACLE_SID variable, click OK in the "System Properties" page,
and then finally close the Control Panel.
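
To sanity-check the settings, open a new Command window on each node and connect to
the local instance. A minimal sketch (use ORCL2 on windows2; the instance must be
running for the query to succeed):

   C:\> set ORACLE_SID=ORCL1
   C:\> sqlplus /nolog

   SQL> connect / as sysdba
   SQL> select instance_name, host_name from v$instance;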

Creating / Altering Tablespaces

When creating the clustered database, we left all tablespaces set to their default size.
Since I am using a fairly large disk group for the shared storage, I like to make a
sizable testing database.

This section provides several optional SQL commands I used to modify and create all
tablespaces for my testing database. Please keep in mind that the database file names
(OMF files) I used in this example may differ from what Oracle creates for your
environment.

C:\> sqlplus "/ as sysdba"

SQL> alter user scott account unlock;
SQL> grant dba, resource, connect to scott;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/users.259.1' resize 1024m;
SQL> alter tablespace users add datafile '+ORCL_DATA1' size 1024m autoextend off;

SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/system.256.1' resize 800m;
SQL> alter database datafile '+ORCL_DATA1/orcl/datafile/sysaux.257.1' resize 500m;

SQL> alter tablespace undotbs1 add datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize 2048m;

SQL> alter tablespace undotbs2 add datafile '+ORCL_DATA1' size 1024m
  2  autoextend on next 50m maxsize 2048m;

SQL> alter database tempfile '+ORCL_DATA1/orcl/tempfile/temp.266.1' resize 1024m;

Here is a snapshot of the tablespaces I have defined for my test database environment:

Status  Tablespace Name  TS Type    Ext. Mgt.  Seg. Mgt.  Tablespace Size  Used (in bytes)  Pct. Used
------  ---------------  ---------  ---------  ---------  ---------------  ---------------  ---------
ONLINE  EXAMPLE          PERMANENT  LOCAL      AUTO           157,286,400       83,820,544         53
ONLINE  SYSAUX           PERMANENT  LOCAL      AUTO           524,288,000      258,080,768         49
ONLINE  SYSTEM           PERMANENT  LOCAL      MANUAL         838,860,800      458,162,176         55
ONLINE  UNDOTBS1         UNDO       LOCAL      MANUAL       1,105,199,104        5,832,704          1
ONLINE  UNDOTBS2         UNDO       LOCAL      MANUAL       1,099,956,224       11,862,016          1
ONLINE  USERS            PERMANENT  LOCAL      AUTO         2,147,483,648        2,949,120          0
ONLINE  TEMP             TEMPORARY  LOCAL      MANUAL       1,073,741,824        1,048,576          0
                                                           ---------------  ---------------  ---------
avg                                                                                                 23
sum                                                          6,946,816,000      821,755,904

7 rows selected.
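
For reference, a report like the one above can be pieced together from the data
dictionary. A minimal sketch (the Used / Pct. Used columns would additionally
require dba_segments and dba_temp_files; these queries only cover the tablespace
attributes and allocated size):

   SQL> select tablespace_name, status, contents,
     2         extent_management, segment_space_management
     3    from dba_tablespaces
     4   order by tablespace_name;

   SQL> select tablespace_name, sum(bytes) as tablespace_size
     2    from dba_data_files
     3   group by tablespace_name;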

Applying the Oracle10g Release 1 (10.1.0.4) Patch Set 2 for Microsoft Windows

Overview

At the time of this writing, the latest patchset for Oracle10g running on Microsoft
Windows (32-bit) is 10.1.0.4 (also known as patch 4163362). This is an important
patchset that fixes many bugs related to 10g RAC. In particular, there is a major bug
named "TAF Connections to a Standard Edition Database are Incorrectly Rejected".
This is documented in bug 3549731 and was fixed in 10.1.0.3.0. Here is the error you
will get when attempting a TAF connection using Oracle10g Standard Edition:
C:\> sqlplus scott/tiger@orcltest

SQL*Plus: Release 9.2.0.5.0 - Production on Tue Jun 21 17:06:50 2005

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

ERROR:
ORA-01012: not logged on

Error accessing PRODUCT_USER_PROFILE
Warning: Product user profile information not loaded!
You may need to run PUPBLD.SQL as SYSTEM
ERROR:
ORA-01012: not logged on

Error accessing package DBMS_APPLICATION_INFO
SP2-0575: Use of Oracle SQL feature not in SQL92 Entry Level.

Connected to:
Oracle Database 10g Release 10.1.0.2.0 - Production
With the Real Application Clusters option

SQL>
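
Once the 10.1.0.4 patchset has been applied (as described in the rest of this
section), the new software level can be confirmed from SQL*Plus with a simple
query, for example:

   SQL> select banner from v$version;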

It is also important to note that we will be applying the 10.1.0.4 patchset to both
the Oracle Cluster Ready Services (CRS) and the database software. The CRS software
must be at the same or newer level as the Oracle database software in a RAC
configuration. Therefore, you should always upgrade the CRS software before
upgrading the Oracle database software. Finally, before installing either of the
patches (CRS and Oracle database software), we will need to download and install the
"Oracle Database 10g Companion CD Release 1 (10.1.0.2) for Microsoft Windows
(32-bit)".

Downloading and installing the Oracle Database 10g Companion CD is only
required if the database being upgraded uses the Java Virtual Machine (Java VM) or
Oracle interMedia.

This 10.1.0.4 patch will need to be downloaded from Oracle Metalink, while the
Oracle Database 10g Companion CD can be downloaded from OTN.

Downloading the 10.1.0.4 Patchset

After logging in to the Oracle Metalink website (http://metalink.oracle.com), click
on the "Patches & Updates" button on the topics menu (left) and use the following to
download the correct database patchset:
1. On the "Select a Patch Search Area" screen, select the link for
Simple Search.
2. From the Simple Search screen, select the "Product or Family"
option in the Search By pull-down listing. In the text field to the
right, enter "RDBMS Servers".
3. In the Release option, select "Oracle 10.1.0".
4. For the Patch Type option, select "Patchset/Minipack".
5. Finally, for the Platform or Language option, select
"Microsoft Windows (32-bit)".
6. When all of the options are filled out, click the "Go" button.
7. This will bring up the available patchsets in the Results section.
For the purpose of this article, I will be downloading and
installing patch: 4163362. Select this patchset in the Results
section and then click the Download button to start the
download process.

The file name of the patchset to download is p4163362_10104_WINNT.zip and is
520MB in size. This may take some time to download.

As the Administrator user account (or the account you installed the Oracle software
as), extract the patch file to a temporary directory:

mkdir C:\orainstall\patches\10.1.0.4
move p4163362_10104_WINNT.zip C:\orainstall\patches\10.1.0.4
cd C:\orainstall\patches\10.1.0.4
unzip p4163362_10104_WINNT.zip

Downloading and Installing the Oracle Database 10g Companion CD

The steps in this section are only required if the database being upgraded uses the
Java Virtual Machine (Java VM) or Oracle interMedia. For the purpose of this article,
my database does make use of the Java Virtual Machine (Java VM) and will therefore
require the installation of the Oracle Database 10g Companion CD.
The type of installation to perform will be the Oracle Database 10g Products
installation type.

This installation type includes the Natively Compiled Java Libraries (NCOMP) files
to improve Java performance. If you do not install the NCOMP files, the
"ORA-29558: JAccelerator (NCOMP) not installed" error occurs when a database that
uses Java VM is upgraded to the patch release.

Downloading the Oracle Database 10g Companion CD

In this section, we will be downloading and extracting the required software from
Oracle to only one of the Windows nodes in the RAC cluster - namely windows1. This
is the machine where I will be performing the install from. The Oracle installer will
copy the required software packages to all other nodes in the RAC configuration.
Login to the node in the Windows RAC cluster from which you performed the CRS and
Oracle database software installs. For me, that would be windows1. In this example,
I will be downloading the Oracle Database 10g Companion CD software to windows1
and saving it to "C:\orainstall\comp".

Oracle Database 10g Companion CD Release 1 (10.1.0.2) for Microsoft Windows (32-bit)

• 10g_win32_companion.zip (353,867,577 bytes)

mkdir C:\orainstall\comp
move 10g_win32_companion.zip C:\orainstall\comp
cd C:\orainstall\comp
unzip 10g_win32_companion.zip
Installing the Oracle Database 10g Companion CD

The next step is to install the Oracle Database 10g Companion CD software from
windows1.

cd C:\orainstall\comp\Disk1
setup.exe
Oracle Database 10g Companion CD Installation Screen Responses

Welcome Screen
    Click [Next].

Specify File Locations
    Leave the default value for the Source directory. By default, it should be
    pointing to the products.xml file from the stage directory where you
    unpacked the files. In most cases, the OUI will also select the correct
    destination name and ORACLE_HOME that you want to update.

    Here are the settings I used for this article:

        Source Path:      C:\orainstall\comp\Disk1\stage\products.xml
        Destination Name: OraDb10g_home1 (the one in which the DB software is installed)
        Destination Path: C:\oracle\product\10.1.0\db_1

    Click [Next].

Selected Nodes
    This screen simply lists the nodes that will be part of the install. There
    is nothing to edit here.

    The Selected Nodes screen lists the existing RAC release 10.1.0.2 nodes.
    The first node in the list is the node from where the release 10.1.0.2
    software was installed. You must install the software from this node. If
    this is not the node where you are running Oracle Universal Installer,
    exit and install the software from the first node in the list of nodes.

    Alternatively, reset the node order as follows:

    1. Exit Oracle Universal Installer.
    2. Enter a command similar to the following, where the node that you want
       to install from is the first node in the list of nodes:

       C:\Oracle_patch> setup -updateNodeList
                        "CLUSTER_NODES=node2,node1" -local
                        ORACLE_BASE\ORACLE_HOME=oracle_home_path

    3. In this example, the RAC 10g release 1 (10.1.x) software was installed
       from node1, but the install would now have to be performed from node2
       after running the command.

    Click [Next].

Select a Product to Install
    Select "Oracle Database 10g Products 10.1.0.2.0" and click [Next].

Summary
    On the Summary screen, click [Install] to start the installation process.

Exit from the OUI when the installation is complete!

Stop All Services

Before attempting to apply the patchset, we need to stop all Oracle related services.

1. Shutdown the Oracle Enterprise Manager Database Control on both nodes. Run
   the following command on each node, making sure to set the %ORACLE_SID%
   environment variable to orcl1 for windows1 and orcl2 for windows2:

   <FROM WINDOWS1>
   set oracle_sid=orcl1
   emctl stop dbconsole

   <FROM WINDOWS2>
   set oracle_sid=orcl2
   emctl stop dbconsole

2. Shut down all RAC services (Oracle instances) on all nodes. From one of
   the nodes in the cluster, run:

   srvctl stop database -d orcl

   (The above command will shutdown all instances. We still, however, need to
   shut down the OracleServiceSID service using the Services Control Panel on
   all nodes.)

3. Shut down all ASM services (ASM instances) on all the nodes. From one of
   the nodes in the cluster, run:

   srvctl stop asm -n windows1
   srvctl stop asm -n windows2

   (The above command will shutdown both ASM instances. We still, however,
   need to shut down the OracleASMService+ASMInstanceName service using the
   Services Control Panel on all nodes.)

4. Stop all node applications on all the nodes.

   srvctl stop nodeapps -n windows1
   srvctl stop nodeapps -n windows2

5. Shut down CRS services using the Services Control Panel on all nodes. (If
   you prefer the command line, see the net stop sketch following this list.)
   The following is a list of services that will need to be shut down on all
   nodes:
   o Oracle Object Service
   o OracleCRService
   o OracleCSService
   o OracleEVMService
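
As an alternative to clicking through the Services Control Panel, the same
Windows services can be stopped from a command prompt with net stop. This is
only a sketch: the exact service names depend on your SID and ASM instance
names (mine are ORCL1/ORCL2 and +ASM1/+ASM2), and the stop order below
reflects the typical service dependencies, so verify both against your own
Services Control Panel first. From windows1:

REM Database and ASM services for this node (names assume my SIDs)
net stop OracleServiceORCL1
net stop "OracleASMService+ASM1"

REM CRS-related services; stop the dependents (CR, EVM) before OracleCSService
net stop OracleCRService
net stop OracleEVMService
net stop OracleCSService
net stop "Oracle Object Service"

Repeat the same commands on windows2, substituting ORCL2 and +ASM2.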

Applying the Patchset - CRS Upgrade

Once all services running for CRS and the Oracle database software are stopped, we
can start the patch installation process. To do this, navigate to the directory where
you extracted the patch set and run the OUI installer:
You must install the patch set software from the node from where the release
10.1.0.2 software was installed!

Alternatively, reset the node order as follows:

1. Exit Oracle Universal Installer if you have already started it.


2. Enter a command similar to the following, where the node that you want to
install the patch set from is the first node in the list of nodes:

C:\Oracle_patch> setup -updateNodeList "CLUSTER_NODES=node2,node1"
                 -local ORACLE_BASE\ORACLE_HOME=oracle_home_path

3. In this example, the RAC 10g release 1 (10.1.x) software was installed from
node1, but the patch set would now have to be installed from node2 after
running the command.
cd C:\orainstall\patches\10.1.0.4
setup.exe
Oracle 10.1.0.4.0 Patchset Installation Screen Responses - CRS
Screen Name Response
Welcome Screen Click <Next>
Specify File Locations
Leave the default value for the Source directory. By default, it should be
pointing to the products.xml file from the stage directory where you unpacked
the patch set files.

In most cases, the OUI will also select the correct destination name and
ORACLE_HOME that you want to update with this patch set.

Here are the settings I used for this article:
Source Path: C:\orainstall\patches\10.1.0.4\stage\products.xml
Destination Name: OraCr10g_home
(The one in which CRS is installed)
Destination Path: C:\oracle\product\10.1.0\crs
Click <Next>

Selected Nodes
This screen simply lists the nodes that will be part of the install. There is
nothing to edit here.

The Selected Nodes screen lists the existing RAC release 10.1.0.2 nodes. The
first node in the list is the node from where the release 10.1.0.2 software
was installed. You must install the patch set software from this node. If this
is not the node where you are running Oracle Universal Installer, exit and
install the patch set software from the first node in the list of nodes.

Alternatively, reset the node order as follows:

1. Exit Oracle Universal Installer.

2. Enter a command similar to the following, where the node that you want to
   install the patch set from is the first node in the list of nodes:

   C:\Oracle_patch> setup -updateNodeList "CLUSTER_NODES=node2,node1"
                    -local ORACLE_BASE\ORACLE_HOME=oracle_home_path

3. In this example, the RAC 10g release 1 (10.1.x) software was installed
   from node1, but the patch set would now have to be installed from node2
   after running the command.

Click <Next>
Summary On the Summary screen, click <Install> to start the installation process.

At the end of the patch set installation, you will be prompted with a reminder to patch
your CRS installation in a rolling manner, one node at a time. You can now exit from
the OUI by clicking the [Exit] button then click Yes on the confirmation dialog. The
remainder of this section contains the steps that will need to be performed on both
nodes in the cluster to complete the 10.1.0.4 CRS patchset install.

One of the tasks is to shutdown the four services:


• Oracle Object Service
• OracleCRService
• OracleCSService
• OracleEVMService

We already performed this on all nodes at the beginning of this section, so let's move
on.

The next step is to run the batch file %ORA_CRS_HOME%\install\patch10104.bat on
each node in the cluster. This batch file will automatically start the CRS services on
the patched node upon completion.

<FROM WINDOWS1>
C:\orainstall\patches\10.1.0.4>
C:\oracle\product\10.1.0\crs\install\patch10104.bat
Successful validation check of Oracle CRS services status
Successful binary patch of the C:\oracle\product\10.1.0\crs
Successful cleanup of patch subdirectory
Successful startup of OracleCSService
Successful startup of OracleEvmService
Successful startup of OracleCRService
Successful upgrade of this node to Oracle Cluster Ready Services
10.1.0.4

<FROM WINDOWS2>
C:\orainstall\patches\10.1.0.4>
C:\oracle\product\10.1.0\crs\install\patch10104.bat
Successful validation check of Oracle CRS services status
Successful binary patch of the C:\oracle\product\10.1.0\crs
Successful cleanup of patch subdirectory
Successful startup of OracleCSService
Successful startup of OracleEvmService
Successful startup of OracleCRService
Successful upgrade of this node to Oracle Cluster Ready Services
10.1.0.4

The Oracle Database 10g Patch Set 2 has now been successfully applied on both
nodes for the CRS software! We now need to patch the Oracle database software.

Applying the Patchset - Database Upgrade

After you apply the 10.1.0.4 patchset and run the post-installation batch file for the
CRS software, the batch file starts all services (CRS and Oracle database services).
Before applying the patchset to the Oracle database software, we will need to stop all
related services (yes, again):

1. Verify that the Oracle Enterprise Manager Database Control is shutdown on
   both nodes. (The previous post-installation batch file from the CRS
   install does not start the Oracle Enterprise Manager Database Control
   application so it should already be down.)

2. Shut down all RAC services (Oracle instances) on all nodes. From one of
   the nodes in the cluster, run:

   srvctl stop database -d orcl

   (The above command will shutdown all instances. We still, however, need to
   shut down the OracleServiceSID service using the Services Control Panel on
   all nodes.)

3. Shut down all ASM services (ASM instances) on all the nodes. From one of
   the nodes in the cluster, run:

   srvctl stop asm -n windows1
   srvctl stop asm -n windows2

   (The above command will shutdown both ASM instances. We still, however,
   need to shut down the OracleASMService+ASMInstanceName service using the
   Services Control Panel on all nodes.)

4. Stop all node applications on all the nodes.

   srvctl stop nodeapps -n windows1
   srvctl stop nodeapps -n windows2

Once all services running for the Oracle database software are stopped, we can
start the patch installation process. To do this, navigate to the directory where you
extracted the patch set and run the OUI installer:

You must install the patch set software from the node from where the release
10.1.0.2 software was installed!

Alternatively, reset the node order as follows:

1. Exit Oracle Universal Installer if you have already started it.


2. Enter a command similar to the following, where the node that you want to
install the patch set from is the first node in the list of nodes:

C:\Oracle_patch> setup -updateNodeList "CLUSTER_NODES=node2,node1"
                 -local ORACLE_BASE\ORACLE_HOME=oracle_home_path

3. In this example, the RAC 10g release 1 (10.1.x) software was installed from
node1, but the patch set would now have to be installed from node2 after
running the command.

cd C:\orainstall\patches\10.1.0.4
setup.exe
Oracle 10.1.0.4.0 Patchset Installation Screen Responses - Database Software
Screen Name Response
Welcome Screen Click <Next>
Specify File Locations
Leave the default value for the Source directory. By default, it should be
pointing to the products.xml file from the stage directory where you unpacked
the patch set files.

In most cases, the OUI will also select the correct destination name and
ORACLE_HOME that you want to update with this patch set.

Here are the settings I used for this article:
Source Path: C:\orainstall\patches\10.1.0.4\stage\products.xml
Destination Name: OraDb10g_home1
(The one in which the DB software is installed)
Destination Path: C:\oracle\product\10.1.0\db_1
Click <Next>

Selected Nodes
This screen simply lists the nodes that will be part of the install. There is
nothing to edit here.

The Selected Nodes screen lists the existing RAC release 10.1.0.2 nodes. The
first node in the list is the node from where the release 10.1.0.2 software
was installed. You must install the patch set software from this node. If this
is not the node where you are running Oracle Universal Installer, exit and
install the patch set software from the first node in the list of nodes.

Alternatively, reset the node order as follows:

1. Exit Oracle Universal Installer.

2. Enter a command similar to the following, where the node that you want to
   install the patch set from is the first node in the list of nodes:

   C:\Oracle_patch> setup -updateNodeList "CLUSTER_NODES=node2,node1"
                    -local ORACLE_BASE\ORACLE_HOME=oracle_home_path

3. In this example, the RAC 10g release 1 (10.1.x) software was installed
   from node1, but the patch set would now have to be installed from node2
   after running the command.

Click <Next>
Summary On the Summary screen, click <Install> to start the installation process.

The Oracle Database 10g Patch Set 2 has now been successfully applied on both
nodes for the Oracle database software! You can now exit from the OUI. We now
need to perform several postinstallation tasks.

Required Postinstallation Tasks

We are almost there! The patchset is now installed and we now have the task of
upgrading the database.

The first step is to start all services that were stopped for the patchset installation.
Perform the following:

1. Start node applications on each node of the cluster:

   srvctl start nodeapps -n windows1
   srvctl start nodeapps -n windows2

2. Start the ASM instance on each node of the cluster:

   srvctl start asm -n windows1
   srvctl start asm -n windows2

3. Start the database instance for only the local instance. For my case, I am
   on windows1:

   srvctl start instance -d orcl -i orcl1

4. Login to the local database:

   set oracle_sid=orcl1
   sqlplus "/ as sysdba"

5. Set the CLUSTER_DATABASE initialization parameter to FALSE and other
   instance parameters:

   SQL> alter system set cluster_database=false scope=spfile;
   SQL> alter system set java_pool_size=150m scope=spfile sid='orcl1';
   SQL> alter system set shared_pool_size=150m scope=spfile sid='orcl1';

6. Shut down the database:

   SQL> shutdown immediate

7. Enter the following SQL*Plus commands:

   SQL> startup upgrade
   SQL> spool patch.log
   SQL> @?\rdbms\admin\catpatch.sql
   SQL> spool off

   Review the patch.log file for errors and inspect the list of components
   displayed at the end of the catpatch.sql script.

8. Restart the database:

   SQL> shutdown
   SQL> startup

9. Recompile all invalid PL/SQL packages:

   SQL> @?\rdbms\admin\utlrp.sql

   When the 10.1.0.4 patch set is applied to an Oracle Database Standard
   Edition or Standard Edition One database, there may be 42 invalid objects
   after the utlrp.sql script runs. These objects belong to the unsupported
   components and do not affect the database operation.

   Ignore any messages indicating that the database contains invalid recycle
   bin objects similar to the following:

   BIN$4lzljWIt9gfgMFeM2hVSoA==$0

   (A query for reviewing any remaining invalid objects follows this list.)

10. Reset the CLUSTER_DATABASE initialization parameter to TRUE:

    SQL> alter system set cluster_database=true scope=spfile;

11. Restart the database:

    SQL> shutdown
    SQL> startup

12. Perform a restart of all Oracle instances:

    srvctl stop instance -d orcl -i orcl1
    srvctl start database -d orcl
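
Once utlrp.sql completes (step 9 above), I like to double-check what is still
invalid. A simple query against DBA_OBJECTS, run as SYSDBA, will list anything
that did not recompile; on a Standard Edition database, expect to see the
unsupported components and recycle bin entries mentioned above:

SQL> SELECT owner, object_type, object_name
  2  FROM dba_objects
  3  WHERE status = 'INVALID'
  4  ORDER BY owner, object_type;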

Running the configPatch.pl Script on a RAC Cluster

These instructions do not apply to RAC installations where the nodes of the cluster
share the same Oracle home. In this article, I am not sharing the Oracle home and
will therefore need to perform this section.

The configPatch.pl script updates the Oracle Enterprise Manager Database Control
files. Although Oracle Universal Installer copies the configPatch.pl script to all of
the Oracle homes on the cluster, it only runs the script on the node running Oracle
Universal Installer.

If you install this patch set on a RAC installation that does not use a shared Oracle
home directory, then you must manually run
%ORACLE_HOME%\sysman\install\configPatch.pl on each node of the cluster,
except the node from which you ran Oracle Universal Installer.

FROM WINDOWS2
cd C:\oracle\product\10.1.0\db_1\perl\5.6.1\bin\MSWin32-x86
perl C:\oracle\product\10.1.0\db_1\sysman\install\configPatch.pl

Automatically Restarting ONS

For Oracle Database 10g release 1 (both 10.1.0.2 and 10.1.0.3) installations, the
Oracle Notification Service (ONS) AUTO_START parameter is set to 0 on each node
of the cluster. This bug seems to exist for all UNIX platforms (Solaris, Linux, etc.)
and MS Windows. For this reason, CRS does not automatically start the ONS
component when the node is restarted. This issue is documented and being tracked
with Oracle bug 4011834.

You can verify this using the following command:

%ORA_CRS_HOME%\bin\crs_stat
...
NAME=ora.windows1.ons
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
...
NAME=ora.windows2.ons
TYPE=application
TARGET=OFFLINE
STATE=OFFLINE
To work around this issue, perform the following steps as the CRS owner for each
ONS resource. For the purpose of this article, I will also be giving the commands I ran
on one of the nodes in my cluster, windows1. The same tasks need to be run on the
second node windows2:

1. Change directory to the %ORA_CRS_HOME%\crs\profile directory.

   cd %ORA_CRS_HOME%\crs\profile

2. Use the following command to create a text file containing the profile
   resources:

   crs_home\bin\crs_stat -p ora.nodename.ons > ora.nodename.ons.cap

   For example,

   %ORA_CRS_HOME%\bin\crs_stat -p ora.windows1.ons > ora.windows1.ons.cap

3. Enter the following command to change the AUTO_START parameter value to 1
   in the ora.nodename.ons.cap file:

   crs_home\bin\crs_profile -update ora.nodename.ons -o as=1

   For example,

   %ORA_CRS_HOME%\bin\crs_profile -update ora.windows1.ons -o as=1

4. Enter the following command to update the resource profile:

crs_home\bin\crs_register -u ora.nodename.ons

For example,

%ORA_CRS_HOME%\bin\crs_register -u ora.windows1.ons
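
Once the profile has been updated and re-registered on both nodes, you can
confirm the change by querying the resource again (crs_stat accepts a resource
name as an argument). After the next restart of the node, or once the ONS
resource has been started, the target should read ONLINE; the output below is
what I would expect to see rather than a capture from my system:

%ORA_CRS_HOME%\bin\crs_stat ora.windows1.ons
NAME=ora.windows1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE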

Verifying TNS Networking Files


Configuring the Oracle networking files is critical in a RAC environment. During the
Oracle10g database install, the OUI performed all required networking configuration
for a clustered environment on both nodes. For the most part, no manual intervention
is required. However, it is important to verify the networking files on each node, and
in some cases, you may want to add an additional entry into the tnsnames.ora file
for testing Transparent Application Failover (TAF). In this section, I will be adding
an entry to the tnsnames.ora file named ORCLTEST.

listener.ora

Let's first take a look at the listener.ora file that was created during the install. The
listener.ora file should be properly configured on both nodes and no modifications
should be needed.

For clarity, I included a copy of the listener.ora file from my node windows1:

listener.ora
# listener.ora.windows1 Network Configuration File:
# C:\oracle\product\10.1.0\db_1\network\admin\listener.ora.windows1
# Generated by Oracle configuration tools.

LISTENER_WINDOWS1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.121)(PORT = 1521)(IP = FIRST))
      )
    )
  )

SID_LIST_LISTENER_WINDOWS1 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = C:\oracle\product\10.1.0\db_1)
      (PROGRAM = extproc)
    )
  )

tnsnames.ora

Here is a copy of my tnsnames.ora file that was configured by Oracle and can be
used for testing the Transparent Application Failover (TAF). This file should already
be configured on both nodes, but you will want to add the new ORCLTEST entry.
You can include any of these entries on other client machines that need access to the
clustered database.

tnsnames.ora
# tnsnames.ora Network Configuration File:
# C:\oracle\product\10.1.0\db_1\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENERS_ORCL =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
  )

ORCL2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
      (INSTANCE_NAME = orcl2)
    )
  )

ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
      (INSTANCE_NAME = orcl1)
    )
  )

ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.idevelopment.info)
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )

Creating the orcltest Service

We added a service named "orcltest" to the tnsnames.ora that will be used to
connect to the database with TAF enabled. The service was added to the
tnsnames.ora, but was never updated as a service for each Oracle instance.

If the only service defined is for orcl.idevelopment.info, then you will need to
manually add the service to both instances:

SQL> show parameter service

NAME                 TYPE        VALUE
-------------------- ----------- --------------------------
service_names        string      orcl.idevelopment.info

SQL> alter system set service_names =
  2  'orcl.idevelopment.info, orcltest.idevelopment.info' scope=both;
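
Because the alter system command used scope=both, the running instances will
dynamically register the new service with their listeners within a minute or
so. As an optional sanity check from windows1 (using the listener name the OUI
generated for my install), run:

C:\> lsnrctl status LISTENER_WINDOWS1

The "Services Summary" section of the output should now include entries for
both orcl.idevelopment.info and orcltest.idevelopment.info.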

Connecting to Clustered Database From an External Client

This is an optional step, but I like to perform it in order to verify my TNS files are
configured correctly. Use another machine (e.g. a Windows machine connected to the
network) that has Oracle installed (either 9i or 10g) and add the TNS entries (in the
tnsnames.ora) from either of the nodes in the cluster that were created for the
clustered database.

Then try to connect to the clustered database using all available service names defined
in the tnsnames.ora file:

C:\> sqlplus system/manager@orcl2
C:\> sqlplus system/manager@orcl1
C:\> sqlplus system/manager@orcltest
C:\> sqlplus system/manager@orcl

Verifying the RAC Cluster / Database Configuration

The following RAC verification checks should be performed on all nodes in the
cluster! For this article, I will only be performing checks from windows1.

Overview

This section provides several srvctl commands and SQL queries that can be used to
validate your Oracle RAC 10g configuration.
There are five node-level tasks defined for SRVCTL:

• Adding and deleting node level applications.
• Setting and unsetting the environment for node-level applications.
• Administering node applications.
• Administering ASM instances.
• Starting and stopping a group of programs that includes virtual IP
  addresses, listeners, Oracle Notification Services, and Oracle Enterprise
  Manager agents (for maintenance purposes).

Status of all instances and services

srvctl status database -d orcl

Instance orcl1 is running on node windows1
Instance orcl2 is running on node windows2

Status of a single instance

srvctl status instance -d orcl -i orcl2

Instance orcl2 is running on node windows2

Status of a named service globally across the database

srvctl status service -d orcl -s orcltest

Service orcltest is running on instance(s) orcl2, orcl1

Status of node applications on a particular node

srvctl status nodeapps -n windows1

VIP is running on node: windows1
GSD is running on node: windows1
Listener is running on node: windows1
ONS daemon is running on node: windows1

Status of an ASM instance


srvctl status asm -n windows1
ASM instance +ASM1 is running on node windows1.

List all configured databases

srvctl config database

orcl

Display configuration for our RAC database

srvctl config database -d orcl

windows1 orcl1 C:\oracle\product\10.1.0\db_1
windows2 orcl2 C:\oracle\product\10.1.0\db_1

Display all services for the specified cluster database

srvctl config service -d orcl

orcltest PREF: orcl2 orcl1 AVAIL:

Display the configuration for node applications - (VIP, GSD, ONS, Listener)

srvctl config nodeapps -n windows1 -a -g -s -l

VIP exists.: /vip-windows1/192.168.1.221/255.255.255.0/Local Area Connection
GSD exists.
ONS daemon exists.
Listener exists.

Display the configuration for the ASM instance(s)

srvctl config asm -n windows1

+ASM1 C:\oracle\product\10.1.0\db_1

All running instances in the cluster

SELECT
inst_id
, instance_number inst_no
, instance_name inst_name
, parallel
, status
, database_status db_status
, active_state state
, host_name host
FROM gv$instance
ORDER BY inst_id;

INST_ID INST_NO INST_NAME PAR STATUS DB_STATUS STATE  HOST
------- ------- --------- --- ------ --------- ------ --------
      1       1 orcl1     YES OPEN   ACTIVE    NORMAL WINDOWS1
      2       2 orcl2     YES OPEN   ACTIVE    NORMAL WINDOWS2

All data files which are in the disk group

select name from v$datafile
union
select member from v$logfile
union
select name from v$controlfile
union
select name from v$tempfile;

NAME
-------------------------------------------
+ORCL_DATA1/orcl/controlfile/current.260.3
+ORCL_DATA1/orcl/controlfile/current.261.3
+ORCL_DATA1/orcl/datafile/example.267.1
+ORCL_DATA1/orcl/datafile/sysaux.257.1
+ORCL_DATA1/orcl/datafile/system.256.1
+ORCL_DATA1/orcl/datafile/undotbs1.258.1
+ORCL_DATA1/orcl/datafile/undotbs1.275.1
+ORCL_DATA1/orcl/datafile/undotbs2.268.1
+ORCL_DATA1/orcl/datafile/undotbs2.276.1
+ORCL_DATA1/orcl/datafile/users.259.1
+ORCL_DATA1/orcl/datafile/users.274.1
+ORCL_DATA1/orcl/onlinelog/group_1.262.1
+ORCL_DATA1/orcl/onlinelog/group_1.263.1
+ORCL_DATA1/orcl/onlinelog/group_2.264.1
+ORCL_DATA1/orcl/onlinelog/group_2.265.1
+ORCL_DATA1/orcl/onlinelog/group_3.269.1
+ORCL_DATA1/orcl/onlinelog/group_3.270.1
+ORCL_DATA1/orcl/onlinelog/group_4.271.1
+ORCL_DATA1/orcl/onlinelog/group_4.272.1
+ORCL_DATA1/orcl/tempfile/temp.266.1

20 rows selected.

All ASM disks that belong to the 'ORCL_DATA1' disk group

SELECT path
FROM v$asm_disk
WHERE group_number IN (select group_number
from v$asm_diskgroup
where name = 'ORCL_DATA1');

PATH
----------------------------------
\\.\ORCLDISKDATA0
\\.\ORCLDISKDATA1
\\.\ORCLDISKDATA2
\\.\ORCLDISKDATA3
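
While you are verifying the configuration, it can also be handy to check the
overall space usage of each ASM disk group. This quick query is my own
addition to the standard checks; it can be run from either the ASM instance or
one of the database instances:

SELECT name, total_mb, free_mb, state, type
FROM v$asm_diskgroup;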
Starting & Stopping the Cluster

At this point, everything has been installed and configured for Oracle RAC 10g. We
have all of the required software installed and configured plus we have a fully
functional clustered database.

With all of the work we have done up to this point, a popular question might be,
"How do we start and stop services?". If you have followed the instructions in this
article, all services should start automatically on each reboot of the Windows nodes.
This would include CRS, all Oracle instances, Enterprise Manager Database Console,
etc.

There are times, however, when you might want to shutdown a node and manually
start it back up. Or you may find that Enterprise Manager is not running and need to
start it. This section provides the commands (using SRVCTL) responsible for starting
and stopping the cluster environment.

Ensure that you are logged in as the "administrator" user. I will be running all of
the commands in this section from windows1:

Stopping the Oracle RAC 10g Environment

The first step is to stop the Oracle instance. Once the instance (and related services) is
down, then bring down the ASM instance. Finally, shutdown the node applications
(Virtual IP, GSD, TNS Listener, and ONS).
set oracle_sid=orcl1
emctl stop dbconsole
srvctl stop instance -d orcl -i orcl1
srvctl stop asm -n windows1
srvctl stop nodeapps -n windows1

Starting the Oracle RAC 10g Environment

The first step is to start the node applications (Virtual IP, GSD, TNS Listener, and
ONS). Once the node applications are successfully started, then bring up the ASM
instance. Finally, bring up the Oracle instance (and related services) and the
Enterprise Manager Database console.
set oracle_sid=orcl1
srvctl start nodeapps -n windows1
srvctl start asm -n windows1
srvctl start instance -d orcl -i orcl1
emctl start dbconsole

Start / Stop All Instances with SRVCTL

Start / Stop all of the instances and their enabled services. I just included this for fun as
a way to bring down all instances!
srvctl start database -d orcl
srvctl stop database -d orcl
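
As a side note, srvctl also accepts a shutdown option for the database that is
passed through to the instances. For example, the following (which I did not
need for this article; verify the option against your srvctl version) performs
an immediate shutdown of all instances:

srvctl stop database -d orcl -o immediate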

Creating Second Disk Group for Flash Recovery Area

At the start of this article, we mentioned that we wanted to create two disk groups:
one for the actual physical database files named ORCL_DATA1 and another disk
group for the flash recovery area named ORCL_FRA1. During the creation of the
clustered database, we only had an option to create one disk group for the physical
database files. In this section, I will be manually creating another disk group using
SQL.
All of the SQL commands for adding a disk group need to be performed from one
of the ASM instances: +ASM1 or +ASM2.
set oracle_sid=+ASM1
sqlplus "/ as sysdba"

SQL> SELECT group_number, disk_number, mount_status, header_status, state, path
  2  FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE  PATH
------------ ----------- ------- ------------ ------ ------------------
           0           4 CLOSED  PROVISIONED  NORMAL \\.\ORCLDISKDATA4
           0           5 CLOSED  PROVISIONED  NORMAL \\.\ORCLDISKDATA5
           0           6 CLOSED  PROVISIONED  NORMAL \\.\ORCLDISKDATA6
           0           7 CLOSED  PROVISIONED  NORMAL \\.\ORCLDISKDATA7
           1           0 CACHED  MEMBER       NORMAL \\.\ORCLDISKDATA0
           1           1 CACHED  MEMBER       NORMAL \\.\ORCLDISKDATA1
           1           2 CACHED  MEMBER       NORMAL \\.\ORCLDISKDATA2
           1           3 CACHED  MEMBER       NORMAL \\.\ORCLDISKDATA3

8 rows selected.

SQL> CREATE DISKGROUP orcl_fra1 NORMAL REDUNDANCY
  2  DISK '\\.\ORCLDISKDATA4', '\\.\ORCLDISKDATA5',
  3       '\\.\ORCLDISKDATA6', '\\.\ORCLDISKDATA7';

Diskgroup created.

The above commands created a new disk group named ORCL_FRA1. We now want
to assign the Database Flash Recovery Area to this new disk group and to also adjust
the size that I would like to allow for this area. To perform these actions, I will need to
update the SPFILE that is being used for each of the instances. To do this, I will need
to be logged into one of the instances: ORCL1 or ORCL2.

set oracle_sid=orcl1
sqlplus "/ as sysdba"

SQL> alter system set db_recovery_file_dest='+ORCL_FRA1' scope=both;

System altered.

SQL> alter system set db_recovery_file_dest_size=30g scope=both;

System altered.
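
To confirm the new settings took effect, you can query V$RECOVERY_FILE_DEST
from the same session. The output below illustrates what a freshly assigned,
empty recovery area should look like rather than an exact capture from my
system:

SQL> SELECT name, space_limit, space_used FROM v$recovery_file_dest;

NAME          SPACE_LIMIT SPACE_USED
------------- ----------- ----------
+ORCL_FRA1     3.2212E+10          0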

Enterprise Manager - Database Console

The DBCA creates a fully functional Oracle Database Console configuration with
support for RAC. The DBCA creates an instance of the Oracle Database Console on
both nodes in the cluster. Simply point a web browser to either of the machines using
the following URL:
http://<rac_node>:5500/em
For my example, I can navigate to:
http://windows1:5500/em
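
If the console page does not respond on a given node, the Database Console may
simply not be running there. You can check its status, and start it if
necessary, from that node (shown here for windows1; set the SID to orcl2 on
windows2):

set oracle_sid=orcl1
emctl status dbconsole
emctl start dbconsole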

Transparent Application Failover - (TAF)

Overview

It is not uncommon for businesses of today to demand 99.99% or even 99.999%
availability for their enterprise applications. Think about what it would take to ensure
that level of uptime: 99.99% availability allows for roughly 52 minutes of downtime
per year, while 99.999% allows barely five. To answer many of these high availability
requirements, businesses are investing in mechanisms that provide for automatic
failover when one participating system fails. When
considering the availability of the Oracle database, Oracle RAC 10g provides a
superior solution with its advanced failover mechanisms. Oracle RAC 10g includes
the required components that all work within a clustered configuration responsible for
providing continuous availability - when one of the participating systems fails within
the cluster, the users are automatically migrated to the other available systems.

A major component of Oracle RAC 10g that is responsible for failover processing is
the Transparent Application Failover (TAF) option. All database connections (and
processes) that lose their connection are reconnected to another node within the
cluster. The failover is completely transparent to the user.

This final section provides a short demonstration on how automatic failover works in
Oracle RAC 10g. Please note that a complete discussion on failover in Oracle10g
RAC would be an article of its own. My intention here is to present a brief overview
and example of how it works.

One important note before continuing is that TAF happens automatically within the
OCI libraries. This means that your application (client) code does not need to change
in order to take advantage of TAF. Certain configuration steps, however, will need to
be done on the Oracle TNS file tnsnames.ora.

Keep in mind that at the time of this article, the Java thin client cannot participate
in TAF since it never reads the tnsnames.ora file.

Setup tnsnames.ora File

Before demonstrating TAF, we need to verify that a valid entry exists in the
tnsnames.ora file on a non-RAC client machine (if you have a Windows machine
lying around). Ensure that you have Oracle RDBMS software installed. (Actually, you
only need a client install of the Oracle software.)

During the creation of the clustered database in this article, I created a new service
that will be used for testing TAF named ORCLTEST. It provides all of the necessary
configuration parameters for load balancing and failover. You can copy the contents
of this entry to the %ORACLE_HOME%\network\admin\tnsnames.ora file on the client
machine (my Windows laptop is being used in this example) in order to connect to the
new Oracle clustered database:

tnsnames.ora File Entry for Clustered Database


...
ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows1)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = vip-windows2)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest.idevelopment.info)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
...
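
Before running the demonstration, it is worth confirming that the new entry
resolves from the client. The tnsping utility, shipped with both the 9i and
10g client software, is a convenient way to do this:

C:\> tnsping orcltest

A successful response (ending in something like "OK (30 msec)") confirms name
resolution and basic connectivity to one of the listeners; it does not
exercise TAF itself.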

SQL Query to Check the Session's Failover Information


The following SQL query can be used to check a session's failover type, failover
method, and if a failover has occurred. We will be using this query throughout this
example.
COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11

SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';

Transparent Application Failover Demonstration

From a Windows machine (or other non-RAC client machine), login to the clustered
database using the orcltest service as the SYSTEM user:
C:\> sqlplus system/manager@orcltest

COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11

SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';

INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
orcl1         windows1  SELECT        BASIC           NO

DO NOT logout of the above SQL*Plus session! Now that we have run the query
(above), we will shutdown the instance orcl1 on windows1 using the abort
option. To perform this operation, we can use the srvctl command-line utility
as follows:
srvctl status database -d orcl
Instance orcl1 is running on node windows1
Instance orcl2 is running on node windows2

srvctl stop instance -d orcl -i orcl1 -o abort

srvctl status database -d orcl

Instance orcl1 is not running on node windows1
Instance orcl2 is running on node windows2

Now let's go back to our SQL session and rerun the SQL statement in the buffer:
COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11

SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';

INSTANCE_NAME HOST_NAME FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------- ------------- --------------- -----------
orcl2         windows2  SELECT        BASIC           YES

SQL> exit
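
One last housekeeping item: the orcl1 instance is still down from the abort we
used in the demonstration. Bring the cluster back to full strength by
restarting it, after which srvctl should once again report both instances
running:

srvctl start instance -d orcl -i orcl1

srvctl status database -d orcl

Instance orcl1 is running on node windows1
Instance orcl2 is running on node windows2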
