
How to Configure LUNs for ASM Disks using WWID, DM-Multipathing, and ASMLIB on RHEL 5/OEL 5 [ID 1365511.1]


In this Document
Goal
Solution
1. Configure SCSI_ID to Return Unique Device Identifiers:
2. Configure LUNs for ASM:
3. Automatic Storage Management Library (ASMLIB) setup:
4. Create ASM diskgroups:
5. To make the disk available enter the following commands:
6. Check the ASM diskgroups:
7. Ensure that the allocated devices can be seen in /dev/mpath:
8. Ensure that the devices can be seen in /dev/mapper:
9. Check the device type:
10. Set up the ASM parameter (ORACLEASM_SCANORDER) in the ASMLIB configuration file, /etc/sysconfig/oracleasm, to force ASM to bind with the multipath devices
References

Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.4 to 11.2.0.2 - Release: 10.2 to 11.2
Linux x86-64

Goal
This document details "how to" steps by using an example that creates devices for Automatic
Storage Management (ASM) using World Wide Identifier (WWID), DM-Multipathing, and
ASMLIB, utilizing a Hitachi Storage Sub-system. The simplified "how to" steps are for Red Hat
Enterprise Linux version 5 (RHEL 5) and Oracle Enterprise Linux version 5 (OEL 5) on Linux
x86-64 for preparing storage to use ASM.
Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally
unique and unchanging. By default, the name of a multipath device is set to its WWID.
Alternately, you can set the user_friendly_names option in the multipath configuration file,
which sets the alias to a node-unique name of the form mpathn. When the user_friendly_names
configuration option is set to yes, the name of the multipath device is set to /dev/mpath/mpathn.
Configuring multipathing by modifying the multipath configuration file, /etc/multipath.conf, will not be addressed in this document.


The WWID is a persistent, system-independent ID that the Small Computer Storage Interface
(SCSI) Standard requires from all SCSI devices. Each disk attached to a Linux-based server has
a unique SCSI ID. The WWID identifier is guaranteed to be unique for every storage device, and
independent of the path that is used to access the device. This identifier can be obtained by
issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or
Unit Serial Number (page 0x80). The mappings from these WWIDs to the current /dev/sd names
can be seen in the symlinks maintained in the /dev/disk/by-id/ directory.
In this document, the accessible disks are connected via a Host Bus Adapter Card (HBA) to a
Storage Area Network (SAN) or switch. If the disks are attached via Hitachi SAN, the path and
port information is also extracted. Disk arrays that are grouped together as Logical Unit Number
(LUN) storage in SANs can also present themselves as SCSI devices on Linux servers. The
command, "fdisk -l", lists attached SCSI disk devices, including those from a SAN. Multiple
devices share common SCSI identifiers.
Automatic Storage Management Library (ASMLIB) driver is a support library for the Automatic
Storage Management (ASM) feature of the Oracle Database and is available for the Linux
operating system. This document discusses usage with Red Hat Enterprise Linux version 5 and
Oracle Enterprise Linux version 5.

Solution
1. Configure SCSI_ID to Return Unique Device Identifiers:
1a. Whitelist SCSI devices
(System Administrator's Task)
Before being able to configure udev to explicitly name devices, SCSI_ID (scsi_id(8)) should first
be configured to return their device identifiers. SCSI commands are sent directly to the device
via the SG_IO ioctl interface. Modify the /etc/scsi_id.config file: add the 'options=-g'
parameter/value pair, or replace an existing 'options=-b' entry with 'options=-g', for example:
# cat /etc/scsi_id.config
vendor="ATA",options=-p 0x80
options=-g
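
A minimal sketch of making that change from the command line (assumes an existing 'options=-b' line and that a backup is wanted first):
# cp /etc/scsi_id.config /etc/scsi_id.config.orig
# sed -i 's/^options=-b/options=-g/' /etc/scsi_id.config
# grep '^options=' /etc/scsi_id.config
options=-g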

1b. List all SCSI devices


Running the scsi_id command for each of /block/sd[a-h] (for example, for /dev/sdb we type scsi_id -g -s
/block/sdb) generates the following output:
SATA WD2502ABYS-23B7 WD-WCAT1H504691
SATA HUA721075KLA330 GTA260P8H8893E
360060e80045b2b0000005b2b000006c4
360060e80045b2b0000005b2b000006d8

360060e80045b2b0000005b2b00001007
360060e80045b2b0000005b2b00001679
360060e80045b2b0000005b2b0000163c
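
For reference, a loop like the following (a sketch; the device letters a-h are simply those present on this example system) produces the listing above:
# for i in a b c d e f g h; do scsi_id -g -s /block/sd$i; done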

The first two SCSI IDs represent the local disks (/dev/sda and /dev/sdb). The remaining five are
the SCSI IDs of the fibre channel attached LUNs; the string that scsi_id generates for each fibre
LUN matches its World Wide Identifier (WWID). A simple example would be a disk connected to
two Fibre Channel ports: should one controller, port, or switch fail, the operating system can
route I/O through the remaining controller transparently, with no change visible to applications
other than perhaps incremental latency.
Important: If Real Application Clusters (RAC), Clusterware devices must be visible and
accessible to all cluster nodes. Typically, cluster node operating systems need to be updated in
order to see newly provisioned (or modified) devices on shared storage, e.g. use '/sbin/partprobe
<device>' or '/sbin/sfdisk -r <device>', etc., or simply reboot. Resolve any issues preventing
cluster nodes from correctly seeing or accessing Clusterware devices before proceeding.
1c. Obtain Clusterware device unique SCSI identifiers:
Run the scsi_id(8) command against Clusterware devices from one cluster node to obtain their
unique device identifiers. When running the scsi_id(8) command with the -s argument, the
device path and name passed should be relative to the sysfs directory /sys/, i.e. /block/<device>
when referring to /sys/block/<device>. Record the unique SCSI identifiers of the Clusterware
devices - these are required later when configuring multipathing, for example:
# for i in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
...
### sdh: 360060e80045b2b0000005b2b0000163c
### sdh1:
### sdi: 360060e80045b2b0000005b2b0000163c
### sdi1:
...
### sdk: 360060e80045b2b0000005b2b00001679
### sdk1:
...
### sdm: 360060e80045b2b0000005b2b000006c4
### sdm1:
### sdn: 360060e80045b2b0000005b2b000006d8
### sdn1:
### sdo: 360060e80045b2b0000005b2b00001007
### sdo1:
...
### sdz: 360060e80045b2b0000005b2b00001679
### sdz1:

From the output above, note that multiple devices share common SCSI identifiers. It should now
be evident that devices such as /dev/sdh and /dev/sdi refer to the same shared storage device
(LUN).
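
One way to see at a glance how many paths map to each identifier (a sketch; whole-disk devices only, using the same RHEL 5 scsi_id syntax):
# for i in /sys/block/sd*; do scsi_id -g -u -s /block/`basename $i`; done | sort | uniq -c
Each multipathed LUN should appear with a count of 2 (one per path) in this configuration.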

Another command can be used for listing the SCSI identifiers.


# ll /dev/disk/by-id/
lrwxrwxrwx 1 root root  9 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e -> ../../sda
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part4 -> ../../sda4
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part5 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part6 -> ../../sda6
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-3600508e000000000158d6d2169801c0e-part7 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006b0 -> ../../sdaa
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006b0-part1 -> ../../sdl1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006c4 -> ../../sdab
lrwxrwxrwx 1 root root 11 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006c4-part1 -> ../../sdab1
lrwxrwxrwx 1 root root  9 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006d8 -> ../../sdn
lrwxrwxrwx 1 root root 10 Jun 27 07:17 scsi-360060e80045b2b0000005b2b000006d8-part1 -> ../../sdn1

1d. Run fdisk to create partitions for ASM disks:


(System Administrator's Task)
# fdisk -l
Disk /dev/sdi: 590.5 GB, 590565212160 bytes
255 heads, 63 sectors/track, 71798 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdi1 1 130543 1048586616 83 Linux

fdisk MUST be run on each of the respective devices:

fdisk /dev/<device>

Example:
# fdisk /dev/sdh
Command (m for help): n

Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1): [use default]
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011): [use
default]
Using default value 1011
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

1e. Run the fdisk(8) and/or 'cat /proc/partitions' commands to ensure devices are visible.
(If Real Application Clusters (RAC), ensure Clusterware devices are visible on each node.) For
example:
# cat /proc/partitions
major minor #blocks name
8 0 142577664 sda
8 1 104391 sda1
8 2 52428127 sda2
8 3 33551752 sda3
8 4 1 sda4
8 5 26218048 sda5
8 6 10482381 sda6
8 7 8385898 sda7
8 16 263040 sdb
8 17 262305 sdb1
8 32 263040 sdc
8 33 262305 sdc1
8 48 263040 sdd
8 49 262305 sdd1
8 64 263040 sde
8 65 262305 sde1
8 80 263040 sdf
8 81 262305 sdf1
8 96 263040 sdg
8 97 262305 sdg1
8 112 576723840 sdh
8 113 576709402 sdh1
8 128 576723840 sdi
8 129 576709402 sdi1
8 144 576723840 sdj
8 145 576709402 sdj1
8 160 52429440 sdk
8 161 52428096 sdk1

8 176 524294400 sdl


8 177 524281275 sdl1
8 192 524294400 sdm
8 193 524281275 sdm1
...
65 208 262147200 sdad
65 209 262132605 sdad1
65 224 262147200 sdae
65 225 262132605 sdae1
253 0 524294400 dm-0
253 1 524294400 dm-1
253 2 524294400 dm-2
253 3 262147200 dm-3
253 4 262147200 dm-4
253 5 263040 dm-5
253 6 263040 dm-6
253 7 263040 dm-7
253 8 263040 dm-8
253 9 263040 dm-9
253 10 263040 dm-10
253 11 576723840 dm-11
253 12 576723840 dm-12
253 13 576723840 dm-13
253 14 52429440 dm-14
253 15 524281275 dm-15
253 16 262305 dm-16
253 17 524281275 dm-17
253 19 262132605 dm-19
253 20 524281275 dm-20

Note: At this point, if Real Application Clusters (RAC), each node may refer to the would-be
Clusterware devices by different device file names. This is expected. Irrespective of which node
the scsi_id command is run from, the value returned for a given device (LUN) should always be
the same.
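
A quick cross-node comparison can be scripted along these lines (a sketch; 'racnode2' is a placeholder host name and passwordless ssh between nodes is assumed):
# for i in /sys/block/sd*; do scsi_id -g -u -s /block/`basename $i`; done | sort -u > /tmp/wwids.local
# ssh racnode2 'for i in /sys/block/sd*; do scsi_id -g -u -s /block/`basename $i`; done' | sort -u > /tmp/wwids.remote
# diff /tmp/wwids.local /tmp/wwids.remote
An empty diff indicates that both nodes see the same set of LUN identifiers, even if the /dev/sd* names differ.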

2. Configure LUNs for ASM:


(System Administrator's Task)
2a. Verify Multipath Devices:
Once multipathing has been configured and the multipathd service started, the multipathed
devices should now be available.
For detailed multipathing commands, please refer to http://magazine.redhat.com/2008/07/17/tips-and-tricks-how-do-i-setup-device-mapper-multipathing-in-red-hat-enterprise-linux-4/
Update the kernel partition table with the new partition as follows (If Real Application Clusters
(RAC), do on each node.):
# /sbin/partprobe
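
If in doubt whether the multipath daemon is running and enabled, it can be checked first with the standard RHEL 5 service commands (a sketch):
# /sbin/service multipathd status
# /sbin/chkconfig --list multipathd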

Then verify that all multipaths are active by executing:


# multipath -ll
360060e80045b2b0000005b2b000006c4dm-1 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 3:0:0:17 sdab 65:176 [active][ready]
\_ 1:0:0:17 sdm 8:192 [active][ready]
360060e80045b2b0000005b2b000006d8dm-2 HITACHI,OPEN-V*20
[size=500G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 3:0:0:18 sdac 65:192 [active][ready]
\_ 1:0:0:18 sdn 8:208 [active][ready]
360060e80045b2b0000005b2b00001679dm-14 HITACHI,OPEN-V
[size=50G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:9 sdk 8:160 [active][ready]
\_ 3:0:0:9 sdz 65:144 [active][ready]
360060e80045b2b0000005b2b0000312edm-9 HITACHI,OPEN-V
[size=257M][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:4 sdf 8:80 [active][ready]
\_ 3:0:0:4 sdu 65:64 [active][ready]
360060e80045b2b0000005b2b00001007dm-3 HITACHI,OPEN-V*5
[size=250G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 3:0:0:19 sdad 65:208 [active][ready]
\_ 1:0:0:19 sdo 8:224 [active][ready]
360060e80045b2b0000005b2b0000163cdm-20 HITACHI,OPEN-V*11 ---> multipathed
[size=550G][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
\_ 1:0:0:7 sdh 8:128 [active][ready]
---> Required to be [active][ready]
\_ 3:0:0:7 sdi 8:112 [active][ready]
---> Required to be [active][ready]

Note: DM-Multipath provides a way of organizing the I/O paths logically, by creating a single
multipath device on top of the underlying devices. Each device presented by the Hitachi Storage
Sub-system (e.g. WWID 360060e80045b2b0000005b2b0000163c) is backed by two underlying
physical devices. These two physical devices have to be partitioned (e.g. sdh1 and sdi1), and the
WWID device has to be partitioned as well (e.g. 360060e80045b2b0000005b2b0000163cp1). The
two physical devices within a group are also required to be the same size, both non-partitioned
and partitioned, and across nodes in Real Application Clusters (RAC).
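
If the partition mapping for the multipath device (the <WWID>p1 name) does not appear automatically after partitioning the underlying paths, it can usually be (re)created with kpartx; a sketch against the example LUN:
# /sbin/kpartx -a /dev/mapper/360060e80045b2b0000005b2b0000163c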
In fact, various device names are created and used to refer to multipathed devices, for
example:
# dmsetup ls | sort
360060e80045b2b0000005b2b000006b0 (253, 0)
360060e80045b2b0000005b2b000006b0p1 (253, 15)
360060e80045b2b0000005b2b000006c4 (253, 1)
360060e80045b2b0000005b2b000006c4p1 (253, 17)

360060e80045b2b0000005b2b000006d8 (253, 2)
360060e80045b2b0000005b2b000006d8p1 (253, 26)
360060e80045b2b0000005b2b0000163c (253, 11)
360060e80045b2b0000005b2b0000163cp1 (253, 20)
# ll /dev/mpath/
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006b0 -> ../dm-0
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1 -> ../dm-15
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006c4 -> ../dm-1
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1 -> ../dm-17
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006d8 -> ../dm-2
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006d8p1 -> ../dm-26
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b0000163c -> ../dm-11
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1 -> ../dm-20

# ll /dev/mapper/
brw-rw---- 1 root disk 253, 0 Jun 27 07:17 360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17
360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17 360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17
360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17 360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17
360060e80045b2b0000005b2b0000163cp1
# ls -lR /dev|more
/dev:
drwxr-xr-x 3 root root 60 Jun 27 07:17 bus
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrom-hda -> hda
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom-sr0 -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw-hda -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter-hda -> hda
crw------- 1 root root 5, 1 Jun 27 07:18 console
lrwxrwxrwx 1 root root 11 Jun 27 07:17 core -> /proc/kcore
drwxr-xr-x 10 root root 200 Jun 27 07:17 cpu
drwxr-xr-x 6 root root 120 Jun 27 07:17 disk
brw-rw---- 1 root root 253, 0 Jun 27 07:17 dm-0
brw-rw---- 1 root root 253, 1 Jun 27 07:17 dm-1
brw-rw---- 1 root root 253, 10 Jun 27 07:17 dm-10
brw-rw---- 1 root root 253, 11 Jun 27 07:17 dm-11
brw-rw---- 1 root root 253, 12 Jun 27 07:17 dm-12
brw-rw---- 1 root root 253, 13 Jun 27 07:17 dm-13
brw-rw---- 1 root root 253, 14 Jun 27 07:17 dm-14
brw-rw---- 1 root root 253, 15 Jun 27 07:17 dm-15

brw-rw---- 1 root root 253, 16 Jun 27 07:17 dm-16
brw-rw---- 1 root root 253, 17 Jun 27 07:17 dm-17
brw-rw---- 1 root root 253, 18 Jun 27 07:17 dm-18
brw-rw---- 1 root root 253, 19 Jun 27 07:17 dm-19
brw-rw---- 1 root root 253, 2 Jun 27 07:17 dm-2
brw-rw---- 1 root root 253, 20 Jun 27 07:17 dm-20
brw-rw---- 1 root root 253, 21 Jun 27 07:17 dm-21
brw-rw---- 1 root root 253, 22 Jun 27 07:17 dm-22
brw-rw---- 1 root root 253, 23 Jun 27 07:17 dm-23
brw-rw---- 1 root root 253, 24 Jun 27 07:17 dm-24
brw-rw---- 1 root root 253, 25 Jun 27 07:17 dm-25
brw-rw---- 1 root root 253, 26 Jun 27 07:17 dm-26
...
/dev/disk/by-label:
lrwxrwxrwx 1 root root 10 Jun 27 07:17 1 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 boot1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 optapporacle1 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 SWAP-sda3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 tmp1 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 var1 -> ../../sda6

3. Automatic Storage Management Library (ASMLIB) setup:


Note: For improved performance and easier administration, Oracle recommends that the
Automatic Storage Management Library (ASMLIB) driver be used instead of raw devices to
configure Automatic Storage Management disks.
3a. Verify that ASMLIB is not already installed before installing it (If Real Application
Clusters (RAC), run this command on each node):
For example (as root):
# rpm -qa | grep oracleasm
i. Output if installed:
oracleasm-2.6.18-164.el5PAE-2.0.5-1.el5 -----> optional
oracleasm-2.6.18-164.el5debug-2.0.5-1.el5 -----> optional
oracleasm-2.6.18-164.el5-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-164.el5xen-2.0.5-1.el5 -----> optional
ii. Output if not installed
package not installed
a. Install (if not installed). The install MUST match the kernel version. (System Administrator's Task)
b. Verify kernel version:
# uname -r
2.6.18-164.el5PAE

c. Install the correct packages for the kernel version.


# rpm -i oracleasm-support-2.1.3-1.el5.i386.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i386.rpm

Important: The ASMLIB driver version has to be the same as the kernel version. Download the
matching ASMLIB driver version from Oracle's website for ASMLIB drivers:
http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
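
A quick consistency check, assuming the stock package naming (oracleasm-<kernel version>), which on this example system would report the kernel-module package shown in step 3a:
# rpm -q oracleasm-`uname -r`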
3b. Check status (If Real Application Clusters (RAC), run this command on each
node):
Failed example:

# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no

If status failed ("no" displayed), then:

configure the Oracle ASM library driver, for example:


# /etc/init.d/oracleasm configure (If RAC, run this command on all nodes)
Default user to own the driver interface []: grid          -----> input Grid Infrastructure/ASM user name
Default group to own the driver interface []: asmadmin     -----> input Grid Infrastructure/ASM group name
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Note: This will configure the on-boot properties of the Oracle ASM library driver. The following
questions will determine whether the driver is loaded on boot and what permissions it will have.
The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer
will keep that current value. Ctrl-C will abort.
3c. Check status again:
# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

4. Create ASM diskgroups:

Important: The /dev/dm-n devices are internal to device-mapper-multipath and are non-persistent,
so they should not be used. The /dev/mpath/ devices are created so that multipath devices are
visible together; however, they may not be available during the early stages of the boot process,
so they should not typically be used either. The /dev/mapper/ devices, on the other hand, are
persistent and are created early during boot. These are the only device names that should be used
to access multipathed devices.
Note: Use /dev/mapper/[WWID]p1 as the device name for createdisk. If Real Application
Clusters (RAC), do only on the first node. All commands done as root.
4a. Check prior to createdisk command:
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" does not exist or is not instantiated
# /etc/init.d/oracleasm querydisk
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is not marked as an
ASM disk

4b. After Check, do createdisk command:


# /etc/init.d/oracleasm createdisk DAT
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Marking disk "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" as an ASM disk: [ OK ]

Note: If using multiple devices/disks within an ASM diskgroup, a good practice would be to use
/etc/init.d/oracleasm createdisk with numbers appended to the ASM alias name to create the
members of the diskgroup, for example:
# /etc/init.d/oracleasm createdisk DAT01
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
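
A second member would then be added the same way (a sketch; the WWID shown is a placeholder for another LUN):
# /etc/init.d/oracleasm createdisk DAT02 /dev/mapper/<WWID_of_second_LUN>p1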

5. To make the disk available enter the following commands:


Note: If Real Application Clusters (RAC), run the following two commands, in the order of first
node, second node, etc. All commands done as root.
5a. Scan ASM disks:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]

5b. List ASM disks:


# /etc/init.d/oracleasm listdisks
DAT

6. Check the ASM diskgroups:


For example (as root) (If Real Application Clusters (RAC), do on each node.):
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" is a valid ASM disk on device [253, 20]
or
# /etc/init.d/oracleasm querydisk -d DAT
Disk "DAT" is a valid ASM disk on device [253, 20]
# /etc/init.d/oracleasm querydisk
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is marked as an ASM
disk

Note: The numbers, [253, 20], indicate major and minor numbers that correspond to the major
and minor numbers in the file, /proc/partitions. These numbers can be used to validate the
multipathed device by cross-referencing these numbers with the file, /proc/partitions, and the
output of "multipath -ll" to ensure that the major and minor numbers match.
# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20
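
The same entry can be pulled out directly with a simple filter (a sketch using the major/minor numbers reported by querydisk above):
# awk '$1 == 253 && $2 == 20' /proc/partitions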

7. Ensure that the allocated devices can be seen in /dev/mpath:


For example (as root) (If Real Application Clusters (RAC), do on each node.):
# cd /dev/mpath
# ls -l
total 0
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000163c ->
../dm-11
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000163cp1
-> ../dm-20
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000155a ->
../dm-14

lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b0000155ap1 -> ../dm-22
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b00001584 -> ../dm-16
lrwxrwxrwx 1 root root 8 May 8 10:32 360060e80045b2b0000005b2b00001584p1 -> ../dm-23
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003130 -> ../dm-1
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003131 -> ../dm-2
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003132 -> ../dm-3
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003133 -> ../dm-4
lrwxrwxrwx 1 root root 7 May 8 10:32 360060e80045b2b0000005b2b00003134 -> ../dm-5

8. Ensure that the devices can be seen in /dev/mapper:


For example (as root) (If Real Application Clusters (RAC), do on each node.):
# ls -l /dev/mapper
total 0
brw-rw---- 1 root disk 253, 0 Jun 27 07:17 360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17
360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17 360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17
360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 2 Jun 27 07:17 360060e80045b2b0000005b2b000006d8
brw-rw---- 1 root disk 253, 26 Jun 27 07:17
360060e80045b2b0000005b2b000006d8p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17 360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17
360060e80045b2b0000005b2b0000163cp1
brw-rw---- 1 root disk 253, 12 Jun 27 07:17 360060e80045b2b0000005b2b0000155a
brw-rw---- 1 root disk 253, 29 Jun 27 07:17
360060e80045b2b0000005b2b0000155ap1
brw-rw---- 1 root disk 253, 14 Jun 27 07:17 360060e80045b2b0000005b2b00001679
brw-rw---- 1 root disk 253, 25 Jun 27 07:17
360060e80045b2b0000005b2b00001679p1
brw-rw---- 1 root disk 253, 13 Jun 27 07:17 360060e80045b2b0000005b2b00001584
brw-rw---- 1 root disk 253, 24 Jun 27 07:17
360060e80045b2b0000005b2b00001584p1
brw-rw---- 1 root disk 253, 3 Jun 27 07:17 360060e80045b2b0000005b2b00001007
brw-rw---- 1 root disk 253, 19 Jun 27 07:17
360060e80045b2b0000005b2b00001007p1
brw-rw---- 1 root disk 253, 4 Jun 27 07:17 360060e80045b2b0000005b2b0000189c
brw-rw---- 1 root disk 253, 18 Jun 27 07:17
360060e80045b2b0000005b2b0000189cp1

9. Check the device type:

For example (as root) (If Real Application Clusters (RAC), do on each node.):
# /sbin/blkid | grep oracleasm
/dev/dm-20: LABEL="DAT" TYPE="oracleasm"   ---> multipathed
/dev/dm-22: LABEL="ARC" TYPE="oracleasm"   ---> multipathed
/dev/dm-23: LABEL="FRA" TYPE="oracleasm"   ---> multipathed
/dev/sdh1: LABEL="DAT" TYPE="oracleasm"    ---> physical
/dev/sdx1: LABEL="ARC" TYPE="oracleasm"    ---> physical
/dev/sdj1: LABEL="FRA" TYPE="oracleasm"    ---> physical
/dev/sdi1: LABEL="DAT" TYPE="oracleasm"    ---> physical
/dev/sdy1: LABEL="ARC" TYPE="oracleasm"    ---> physical
/dev/sdz1: LABEL="FRA" TYPE="oracleasm"    ---> physical

10. Set up the ASM parameter (ORACLEASM_SCANORDER) in the ASMLIB configuration file,
/etc/sysconfig/oracleasm, to force ASM to bind with the multipath devices
Note: If Real Application Clusters (RAC), do commands on each node. All commands done as
root.
10a. Check the file, /etc/sysconfig/oracleasm:
# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Jun 13 09:58 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

10b. Make a backup of the original file, /etc/sysconfig/oracleasm-_dev_oracleasm:

# cp /etc/sysconfig/oracleasm-_dev_oracleasm /etc/sysconfig/oracleasm-_dev_oracleasm.orig

10c. Modify the ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE


parameters in /etc/sysconfig/oracleasm:
# vi /etc/sysconfig/oracleasm-_dev_oracleasm
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="mpath dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan

ORACLEASM_SCANEXCLUDE="sd"
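
After editing, the effective values can be confirmed through the /etc/sysconfig/oracleasm symlink (a sketch):
# grep '^ORACLEASM_SCAN' /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER="mpath dm"
ORACLEASM_SCANEXCLUDE="sd"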

Note: Another valid value can be used for ORACLEASM_SCANORDER:


ORACLEASM_SCANORDER="dm"
10d. Save the file.
10e. Restart oracleasm:
# service oracleasm restart
or
# /etc/init.d/oracleasm restart

10f. Check the multipath device against the /proc/partitions file:


# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20

10g. Check the multipath device against the directory, /dev/oracleasm/disks:


# ls -ltr /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 253, 20 Oct 4 13:37 DAT

10h. Check oracleasm disks again:


# /etc/init.d/oracleasm listdisks
DAT

Note: ASMLIB first tries/scans all disks that are listed in the /proc/partitions file. Within the
multipath directory, /dev/mpath, the alias names and the WWIDs are linked to the multipathed
dm-n names. Furthermore, because of ORACLEASM_SCANEXCLUDE, ASMLIB does not scan
any disks whose names start with "sd", i.e. all of the underlying SCSI disks.

References
NOTE:564580.1 - Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
NOTE:603868.1 - How to Dynamically Add and Remove SCSI Devices on Linux
NOTE:555603.1 - Configuration and Use of Device Mapper Multipathing on Oracle Enterprise Linux (OEL)
NOTE:743949.1 - Unable To Create ASMLIB Disk
NOTE:967461.1 - "Multipath: error getting device" seen in OS log causes ASM/ASMlib to shutdown by itself
NOTE:580153.1 - How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices?
NOTE:1089399.1 - Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
NOTE:602952.1 - How To Setup ASM & ASMLIB On Native Linux Multipath Mapper disks?

Related Products
Oracle Database Products > Oracle Database > Oracle Database > Oracle Server Enterprise Edition > STORAGE > ASM Installation and Patching Issues

Keywords
ASM; ASMLIB; CLUSTER; DISKGROUP; ENTERPRISE LINUX; LINUX; MULTIPATH; ORACLEASM; SCSI; STORAGE
