SAP/ORACLE/AIX
Performance Tuning Tips
Top 40 - 1997
John Oustalet & Walter Orb
IBM SAP Competency Center
Foster City, California
Table of Contents
General
o Edition Notice (February 1998)
o Scope
o Acknowledgments
o Feedback
o Copyright 1998 IBM Corporation. All Rights Reserved.
o References
Introduction
AIX - Disk I/O
o Tip 1: Keep Disk Utilization Below 15%
o Tip 2: Pseudo-Stripe Hot Logical Volumes
o Tip 3: Carefully Place Hot Logical Volumes On Disk
o Tip 4: Disable Mirror Write Consistency
o Tip 5: Use Asynchronous I/O With AIX 4
o Tip 6: Use Raw Devices
AIX - CPU, Memory, and Paging
o Tip 7: Use vmtune To Reduce Paging
o Tip 8: Paging Spaces Should Be The Same Size And Placed On Separate Disks
Oracle SQL*Net V2
o Tip 9: Use TCP_NODELAY
Oracle - Redo Log Files
o Tip 10: Isolate Redo Log Files
o Tip 11: Use LVM Mirroring For The Redo Log Files
o Tip 12: Increase The Size Of The Redo Log Files
o Tip 13: If the Archiver Can't Keep Up, Add Another Set of Log Files
Oracle - Rollback Segments
o Tip 14: Isolate Rollback Segments
o Tip 15: Increase The Number Of Rollback Segments
o Tip 16: Create Multiple Rollback Segment Tablespaces
Oracle SGA
o Tip 17: Reduce Redo Log Buffer Contention
o Tip 18: Maintain A High Data Buffer Cache Quality
o Tip 19: Reduce Recursive Calls
Oracle - Database Administration
o Tip 20: Reorganize !!!
o Tip 21: Avoid Aged Indexes
o Tip 22: Don't Put Data Files Of The Same Tablespace On The Same Disk
o Tip 23: Move Large And Highly Accessed Tables To Their Own Tablespaces
o Tip 24: Isolate The Archive Log File System
o Tip 25: Turn Off Timed Statistics
SAP - Application Servers
o Tip 26: Move The SAP Main Instance Off The Database Server
o Tip 27: Consider Installing Two Application Instances On The Same Node
o Tip 28: Update The SAP Kernel To The Latest Available Release
o Tip 29: It's OK To Over-Allocate Memory Until You Start Paging
o Tip 30: Maintain Sufficient Extended Memory
o Tip 31: Use Logon Groups
o Tip 32: Set ztta/roll_first = 1
o Tip 33: Group Key 2 And Key 6 In The Same Shared Memory Pool
SAP - ABAP/4 Programming
o Tip 34: Transfer As Little Data As Possible
o Tip 35: Avoid Nested Select Loops And Logical Databases
o Tip 36: Create Secondary Indexes Where Needed But Be Careful!
o Tip 37: Use Table Buffering
o Tip 38: Avoid Using ORDER BY
o Tip 39: Choose A Suitable OCCURS
o Tip 40: Update Locally In Batch
APPENDIX A
o Mirror Write Consistency
What is MWC?
What problem does MWC solve?
How does MWC work?
Implications of turning on MWC
APPENDIX B
o Installing the ORACLE Database on Raw Devices
APPENDIX C
o R/3 note no. 72638
o 7.3.3.X
APPENDIX D
o Enlarging the Redo Logs - Moving Redo Logs to Other Disks
APPENDIX E
o Creating Additional Rollback Tablespaces
APPENDIX F
o R/3 note no. 79376 abbreviated for Oracle and AIX
APPENDIX G
o Table Buffering
Version: 1.0
General
Edition Notice (February 1998)
Scope
This document is written for IBM personnel involved in the installation, configuration, and tuning
of SAP R/3, release 3.x. Its purpose is to provide an experienced SAP BASIS specialist with a
series of "tips" intended to optimize the performance of SAP on Oracle7 and AIX 4. The tips are
based on current information, which may change with experience and new software revision
levels.
Standard disclaimer: This document is not a comprehensive tuning guide. It is a quick reference
of recommendations given to our customers during several performance engagements last year.
Use your best judgement.
Acknowledgments
The authors gratefully acknowledge the work of Jim Dilley, who took the time to review,
critique, and contribute to the contents of this document.
IBM makes no warranties or representations with respect to the content hereof and specifically
disclaims any implied warranties of merchantability or fitness for any particular purpose. IBM
assumes no responsibility for any errors that may appear in this document. The information
contained in this document is subject to change without any notice. IBM reserves the right to
make any such changes without obligation to notify any person of such revision or changes. IBM
makes no commitment to keep the information contained herein up to date.
SAP and ABAP/4 are registered trademarks of SAP AG.
IBM and AIX are registered trademarks of International Business Machines Corporation.
References
. "AIX/6000 Internals and Architecture", David A. Kelly, copyright 1996 by McGraw-Hill
. "Oracle Performance Tuning", Mark Gurry and Peter Corrigan, copyright 1996 by O'Reilly &
Associates
. "Advanced Oracle Tuning and Administration", Aronoff, Loney, and Sonawalla, copyright 1997
by Aronoff, Loney, and Sonawalla
. "Developing SAP's R/3 Applications with ABAP/4", Kretschmer and Weiss, copyright 1996 by
SYBEX
. "Tuning ABAP/4 Programs for Performance", Udo Laub, copyright 1997 by SAP Technology,
Inc.
Introduction
After reviewing several SAP systems last year, we recognized a common set of
recommendations given to almost all of our customers. In a moment of inspiration, we decided to
compile our recommendations into this document in the form of easily referenced "tips". We
hope to update this document frequently with new tips and coverage.
Since this isn't a book, we must assume you are familiar with:
* AIX commands and utilities (e.g., vmstat, iostat, netstat, filemon, netpmon, tprof, trace)
related to performance tuning and administration.
* Oracle7 performance monitoring and administration.
This document does not:
* address the performance implications of configuring SAP modules to meet specific business
process requirements.
* take into account all possible uses and modifications of SAP standard transactions and their
impact on performance.
Hopefully, this document will help you get the most out of your SAP system.
Good Luck,
AIX - Disk I/O
Tip 1: Keep Disk Utilization Below 15%
Use iostat to monitor disk utilization during peak workload (e.g., iostat 30 60). Look for any disk
that has sustained utilization (% tm_act) above 15%. Find out what's on the disk and why it's
working so hard. If you have multiple data files on the same disk, run filemon and look for high
seek times. Hot logical volumes can be placed next to each other to help reduce seek time. You
might have fragmented logical volumes, file systems, and/or files. Use lspv and fileplace to get a
detailed layout of the disk. If fragmented, consider reorganization.
If high I/O volume is causing high disk utilization, look for heavily accessed data files sharing
the same disk. If found, separate the data files by moving them to different disks. Ideally, the I/O
workload should be evenly distributed over all available disks and controllers.
If utilization is too high and there's only one data file on the disk, or if the majority of I/O is
going to one file, consider striping the tablespace or moving "hot" tables out into their own
tablespaces.
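The 15% check above is easy to automate. The sketch below flags disks whose sustained utilization (% tm_act) exceeds the threshold; the sample text mimics `iostat -d` output, but the exact column layout, and the `hot_disks` helper itself, are assumptions for illustration.

```python
# Sketch: flag disks whose sustained utilization (% tm_act) exceeds 15%.
# The sample mimics AIX `iostat -d` output; real column layouts may differ.

BUSY_THRESHOLD = 15.0  # the 15% rule for tablespace data file disks

sample = """\
Disks:        % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk0           3.2      41.5      8.0      12040      9321
hdisk3          27.8     512.3     96.1     401220    388812
hdisk5          16.4     220.7     44.9     150233    141020
"""

def hot_disks(iostat_text, threshold=BUSY_THRESHOLD):
    """Return (disk, % tm_act) pairs above the utilization threshold."""
    hot = []
    for line in iostat_text.splitlines():
        fields = line.split()
        # Expect: name, % tm_act, Kbps, tps, Kb_read, Kb_wrtn
        if len(fields) == 6 and fields[0].startswith("hdisk"):
            if float(fields[1]) > threshold:
                hot.append((fields[0], float(fields[1])))
    return hot

print(hot_disks(sample))  # [('hdisk3', 27.8), ('hdisk5', 16.4)]
```

Any disk the helper reports is a candidate for the filemon/fileplace investigation described above.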
The 15% rule is valid for Oracle tablespace data files. However, the I/O to redo and archive log
files is sequential. With sequential I/O, disks can sustain higher utilization - up to 50-60%.
Consider striping the redo log logical volumes if disk utilization exceeds 60%. Use a stripe width
of 64K and stripe the logical volumes over 6 to 8 disks. Since AIX logical volume mirroring and
striping are mutually exclusive, striping will require the Oracle multiplexing of the redo log files.
Tip 2: Pseudo-Stripe Hot Logical Volumes
Initially, create a single copy of the logical volume specifying the primary set of physical
volumes in the "PHYSICAL VOLUME names" field. Also, set the logical volume option
"RANGE of physical volumes" to maximum to distribute I/O load evenly across the physical
volumes. To mirror, add a copy of the logical volume specifying the mirrored set of physical
volumes and set the interdisk range to maximum.
Tip 3: Carefully Place Hot Logical Volumes On Disk
When multiple logical volumes are defined on a physical volume, highly accessed logical
volumes should be placed at the center of the physical volume to take advantage of the lower
seek times. Set the logical volume option "POSITION on physical volume" to center.
When a single logical volume is defined on a physical volume, the logical volume should be
placed on the outer edge of the disk to take advantage of the higher data transfer rate. In this
case, the average seek time is only influenced by seeks within that logical volume. Set the logical
volume option "POSITION on physical volume" to outer edge.
If you are mirroring logical volumes, disable Mirror Write Consistency (see Tip 4).
Tip 4: Disable Mirror Write Consistency
MWC tells the Logical Volume Manager (LVM) to keep track of the status of mirrored writes
(see Appendix A). Each write to a mirrored logical volume results in a seek to the outer edge of
the disk to update the status block and then a seek back to the data area to do the actual write.
You end up with two physical writes for each request along with additional head movement. As a
result, this parameter can result in I/O performance degradations of 20 to 50%.
Note: while turning off Mirror Write Consistency improves performance, it exposes you to
mirror synchronization issues if the system abends. After an abend, you must manually resync
the mirrors, which can take minutes to hours depending on the size of the Oracle partitions
(contact me for the manual procedures, baspence@us.ibm.com). I do not recommend turning off
MWC without careful consideration of the performance/administration tradeoffs. Bruce Spencer
(10/29/99)
Tip 5: Use Asynchronous I/O With AIX 4
use_async_io = TRUE
db_writers = 1
When using AIO, do not set the number of db_writers > 1. Data corruption could occur.
AIX 4 supports AIO for both file systems and raw devices. After issuing AIO requests, the
Oracle processes can continue running. A kernel process (kproc), called a server, is in charge of
each request until it completes. The number of servers limits the number of AIO requests that can
be executed concurrently.
You can set the number of servers with smit or chdev. The minimum is the number of servers to
start at system boot. The maximum limits the number of servers that can be started in response to
a large number of concurrent requests. The default values are minservers=1 and
maxservers=10.
Derive the optimum values for minservers and maxservers by monitoring a running system.
Initially, set minservers=10 and maxservers=50, then check the number of active AIO servers
periodically during the day; the servers appear as kernel processes (kprocs) in the process table.
Tip 6: Use Raw Devices
I/O performance can be improved by setting up the Oracle data files and redo log files as raw
devices (see Appendix B). Doing so, you avoid Journaled File System (JFS) overhead and the
buffering of data in the AIX file system cache. Reduced JFS overhead results in less CPU
activity. With raw devices, Oracle can write directly from the SGA data block buffer to disk.
Using JFS I/O, each time Oracle data is transferred to or from the I/O subsystem, buffered copies
are created in both the Oracle buffer cache (SGA) and the AIX file system cache. This double
buffering of data increases the total virtual memory requirement of an Oracle instance and can
lead to excessive paging in a memory-constrained environment. I/O performance can worsen as
the I/O subsystem is more heavily exercised. Also, the pagedaemon has to work much harder
flushing modified memory pages to disk.
The authors would also like to point out that file system journaling does not protect user data!
Journaling will only protect the file system's structure and lists. In other words, if the system
crashes in the middle of a database write, the file system will mount, but the data may be
corrupted.
By using raw devices, you might lose JFS advantages for sequential I/O (e.g., sequential read
ahead). However, in SAP R/3, most Oracle data accesses result in random I/O operations.
Also, the use of raw devices gives you better control of data placement making it easier to
balance disk utilization and avoid fragmentation.
Overall, Oracle Corporation's experience on the RS/6000 platform shows better performance
and scalability with raw devices than with file systems. Therefore, in a heavily used database
configuration, the use of raw devices is encouraged.
Moving from file systems to raw devices can be done with the AIX dd command.
After the copy, create a symbolic link from the old file name to the new raw device.
ln -s /dev/<my_new_raw_device> <my_old_JFS_file>
AIX - CPU, Memory, and Paging
Tip 7: Use vmtune To Reduce Paging
The vmtune utility can limit the amount of real memory AIX can use for file cache. The vmtune
parameters limiting file cache are minperm (-p) and maxperm (-P). Parameter values are in
percent of real memory. The minperm default is 20% and the maxperm default is 80%. The
vmtune utility can only be called by root.
Ex: vmtune -p 5 -P 10
VMM operational parameters modified by vmtune are only effective until the next boot. To make
the change permanent, create a /etc/rc.local shell script and insert the vmtune call.
#!/bin/ksh
if [ -f /usr/samples/kernel/vmtune ] ; then
/usr/samples/kernel/vmtune -p 5 -P 10
fi
Add an entry for the script to /etc/inittab so that /etc/rc.local is called by init during boot.
For central systems, vmtune changes need to be applied more carefully especially for systems
reading large external flat files. If excessive paging is still observed after changing vmtune
parameters, consider adding an application server and moving the SAP main instance to it (see
Tip 26).
The vmtune utility is a Licensed Program Product (LPP) installed in the directory
/usr/samples/kernel. If not found, it can be installed from the AIX installation CD
(bos.adt.samples).
Tip 8: Paging Spaces Should Be The Same Size And Placed On Separate Disks
The initial allocation of the default paging space logical volume (hd6) is too small. After
installation, there is usually enough free space available on hdisk0 to enlarge hd6 first (smit
chps) before adding additional paging space logical volumes on other disks. Ideally, a paging
space logical volume should be contiguous on disk and may require reorganization (smit
reorgvg).
As a rule of thumb, the total paging space should equal 3 x Real Memory or at least 1 GB,
whichever is larger. SAP recommends a total paging space of 2-3 GB for optimum performance.
The minimum and maximum (worst-case) paging space requirements of an R/3 application
server can be determined with the SAP utility sappfpar.
From the AIX command prompt, run sappfpar to check the instance profile you intend to use.
In the output generated by sappfpar, the minimum paging space, maximum heap space, and
worst case paging space requirements are listed.
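The sizing rule of thumb above can be sketched as a one-line calculation; the helper name is an assumption, and the 3 x real memory / 1 GB floor values are taken directly from the text.

```python
# Sketch of the paging-space rule of thumb: total paging space should be
# 3 x real memory or at least 1 GB, whichever is larger. Values are in MB.

FLOOR_MB = 1024  # the 1 GB minimum

def recommended_paging_space_mb(real_memory_mb):
    return max(3 * real_memory_mb, FLOOR_MB)

# A 512 MB server needs 1536 MB of paging space; a 256 MB server
# still gets the 1 GB floor, since 3 x 256 = 768 MB falls below it.
print(recommended_paging_space_mb(512))  # 1536
print(recommended_paging_space_mb(256))  # 1024
```

Compare the result against the worst-case figure reported by sappfpar before settling on a final size.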
Oracle SQL*Net V2
Tip 9: Use TCP_NODELAY
On startup, Oracle opens an operating system socket in "TCP Delay" mode for communication
between an R/3 work process and an Oracle shadow process on the database server. In "TCP
Delay" mode, TCP/IP uses the "Nagle algorithm" for controlling the data traffic. Nagle does the
following:
* If there are no outstanding packets on a socket and a client wants to send a data set that is
smaller than the existing segment size, it waits up to 200ms for more data.
* If additional data is sent from the same socket, all the data is collected together to form a larger
packet to reduce the number of packets on the network.
* The MTU size of the network adapter determines the segment size that is used for data traffic.
The duration of a data transfer between client and server is nonlinearly dependent on the data
volume to be transferred. Put simply, for a relatively small data set, data transfer over the
network can take as long as 200ms.
The above-mentioned factors can be controlled at socket level without influencing the rest of the
system. To do this, Oracle would have to open the socket in "TCP No Delay" mode (see
Appendix C). As a result, any data set put on a socket would be sent immediately, no matter how
small. With no TCP/IP delay, average response time would improve.
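The socket option involved is TCP_NODELAY. The sketch below only illustrates the OS-level option itself on a plain socket; it is not Oracle code, and how SQL*Net actually enables the option is covered in Appendix C.

```python
# Sketch: "TCP Delay" vs "TCP No Delay" mode on an ordinary TCP socket.
# Setting TCP_NODELAY disables the Nagle algorithm, so small writes are
# sent immediately instead of being held back for coalescing.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# By default the socket is in "TCP Delay" mode (Nagle enabled, option off).
nodelay_default = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
# Switch to "TCP No Delay" mode.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay_set = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
print(nodelay_default, nodelay_set != 0)
```

With the option set, each small request/response pair avoids the potential 200ms Nagle delay described above.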
Oracle - Redo Log Files
Tip 10: Isolate Redo Log Files
By placing redo log logical volumes on their own disks, seek times are virtually eliminated,
providing the best possible sequential I/O performance. No other logical volumes should be
placed on the redo log disks. Allocate redo log logical volumes starting from the outer edge of
the disk to take advantage of better data transfer rates.
Tip 11: Use LVM Mirroring For The Redo Log Files
LVM mirroring is faster and more efficient than Oracle multiplexing for redo log files. Use
Oracle multiplexing only if high disk utilization requires LVM striping.
Tip 12: Increase The Size Of The Redo Log Files
Monitor V$LOGHIST for the frequency of redo log switches. Increase the size of your redo log
files so log switches occur every 10 to 15 minutes (see Appendix D).
Tip 13: If the Archiver Can't Keep Up, Add Another Set of Log Files
Ideally, the redo log disks should never experience simultaneous read and write operations.
Simultaneous operations could result in an excessive number of slow seeks reducing a disk's
sequential I/O performance.
Under load, determine if the Oracle archiver is fast enough to save an inactive redo log file
before the log writer can switch back to the same disk. If the switch occurs while archiver is still
reading from an inactive redo log file, add another set of redo logs (e.g. origlogC). That way, the
log writer switches from origlogA, to origlogB, to origlogC, and then back to origlogA giving the
archiver plenty of time to complete its process.
Oracle - Rollback Segments
Tip 15: Increase The Number Of Rollback Segments
SAP's rule of thumb is one rollback segment for every four SAP work processes.
Start with SAP's recommendation then monitor V$WAITSTAT. If there are any "undo header" or
"undo block" waits, add more rollback segments until the undo waits are as close to zero as
possible.
Each rollback segment has a transaction table that controls access. Check V$ROLLSTAT to
determine the number of waits on a slot for each transaction table. If the hit ratio ((gets -
waits) *100) /gets is less than 99%, add additional rollback segments until contention is
reduced. Increase the number of rollback segments by the number of rollback segments
experiencing waits.
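The V$ROLLSTAT check above reduces to the hit-ratio formula ((gets - waits) * 100) / gets. The sketch below applies it to hypothetical statistics; the segment names and numbers are invented for illustration.

```python
# Sketch: transaction-table hit ratio per rollback segment, from V$ROLLSTAT.
# A ratio below 99% indicates contention on that segment's transaction table.

def hit_ratio(gets, waits):
    return (gets - waits) * 100.0 / gets

# Hypothetical V$ROLLSTAT rows: (segment name, gets, waits)
rollstat = [("RBS01", 150000, 120), ("RBS02", 98000, 2100)]

needs_more = [name for name, gets, waits in rollstat
              if hit_ratio(gets, waits) < 99.0]
print(needs_more)  # RBS02's ratio is about 97.9%, so it shows contention
```

Per the rule above, one segment below 99% means adding one more rollback segment, then re-checking.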
Tip 16: Create Multiple Rollback Segment Tablespaces
The Private Rollback Segments (PRS) should be spread over two or more isolated tablespaces on
separate disks (e.g., PSAPROLL, PSAPROLL2, PSAPROLL3, ...). The rollback segments of
each tablespace will be used alternately in the order in which they are listed in the initSID.ora
file. This approach will distribute rollback I/O load equally over multiple disks (see Appendix E).
Oracle SGA
Tip 17: Reduce Redo Log Buffer Contention
The redo log buffers hold database changes before they are written to the on-line redo log files.
The log writer process writes the contents of the buffer to the active redo log file when the buffer
reaches one-third full (two-thirds full in Oracle 7.3), when a commit occurs, or when a database
write takes place. It also writes from the log buffer every few seconds if there is no activity.
Redo log buffers operate in a circular fashion and are controlled via internal database latches.
SAP can perform several simultaneous update transactions resulting in poor database
performance if these buffers are not large enough to keep up.
Check V$LATCH. If the ratio of misses to gets or immediate_misses to immediate_gets for the
redo allocation and redo copy latches exceeds 1%, consider tuning the redo buffer.
* Increase the size of the redo log buffer to 1048576 (1 MB) to reduce the number of log buffer
allocation retries.
Tip 18: Maintain A High Data Buffer Cache Quality
For good R/3 performance, it is important to maintain a high data buffer cache quality. A well-
tuned Oracle database can have a data buffer cache quality as high as 100%. Check the
V$WAITSTAT table for buffer busy waits on "data block". Increase the initSID.ora parameter
DB_BLOCK_BUFFERS until buffer busy waits are less than 1% of logical reads. This should
produce data buffer cache hit ratios close to 100%. A reasonable size for medium systems is
about 200 - 300 MB; for large SMP servers, a data buffer size of 600 - 800 MB might be
adequate.
As you increase the size of the data buffer cache, watch for paging. You may have to reduce the
AIX file system cache (see Tip 7), or if that's not possible, you might have to reduce the data
buffer cache size to avoid paging.
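The two checks above (cache quality and buffer busy waits relative to logical reads) can be sketched as simple ratios. The statistic values below are hypothetical, and the helper names are assumptions for illustration.

```python
# Sketch: data buffer cache quality and the "buffer busy waits below 1% of
# logical reads" check, using hypothetical v$ statistics.

def cache_quality(logical_reads, physical_reads):
    """Percentage of logical reads satisfied from the buffer cache."""
    return (logical_reads - physical_reads) * 100.0 / logical_reads

def busy_wait_pct(buffer_busy_waits, logical_reads):
    return buffer_busy_waits * 100.0 / logical_reads

logical, physical, busy = 5_000_000, 150_000, 12_000
print(round(cache_quality(logical, physical), 1))  # 97.0
print(busy_wait_pct(busy, logical) < 1.0)          # True
```

A quality of 97% with busy waits under 1% would be acceptable; if quality drifts lower, grow DB_BLOCK_BUFFERS while watching for paging as described above.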
Tip 19: Reduce Recursive Calls
As with regular calls, blocks read from the dictionary tables are loaded into the SGA. If the data
dictionary information required by a query is already in the dictionary cache of the shared pool,
then no recursive call is necessary. A large number of recursive calls is an indication that the
dictionary cache of the shared pool is too small. From an application point of view, a recursive
call is unproductive work, consuming CPU and I/O resources. Unfortunately, some recursive
activity is unavoidable. A healthy ratio of user calls to recursive calls in R/3 is about 4:1.
The library cache, dictionary cache, and session information are all contained within the shared
pool. To increase the size of the dictionary cache, simply increase the size of the shared pool with
the initSID.ora parameter SHARED_POOL_SIZE. Reasonable sizes for medium systems are
about 150 - 200 MB; shared pool sizes for large systems can go as high as 300 MB. If the shared
pool size is already greater than 200 MB, only increase it if the value "free memory" in V$SGA
is less than 1 MB.
Oracle - Database Administration
Tip 20: Reorganize !!!
When R/3 is installed, most of the Oracle tables and indexes are fully stored within their initial
extent. Each tablespace consists of exactly one data file.
When data is added to the database, its status changes. As tables and indexes grow, additional
extents are created to make space for new data. This may result in higher data access times.
When a tablespace is full (there is not enough freespace to create a new extent), an additional
data file must be added.
Internal fragmentation can occur if the fill level of the database blocks develops differently. The
fill level of the individual blocks is initially identical. Creating and deleting data causes some
blocks to be filled completely, while others may be relatively empty. As a result, space is used
inefficiently.
If a data record does not fit into a database block, block chaining occurs. The database must then
follow a chain from the first block of the data record to the further blocks. As a result, more time
is needed for reading data from the disk.
Since R/3 usually accesses a table entry using an index, the above changes to the database
usually have little influence on the time required to access data. However, the changes may
adversely affect the run time of a query using a full table scan.
During a reorganization, freespace fragments are merged into larger freespace segments (external
defragmentation).
Tip 21: Avoid Aged Indexes
Reorganize the indexes of the update tables VBHDR, VBMOD, and VBDATA weekly.
Tip 22: Don't Put Data Files Of The Same Tablespace On The Same Disk
A frequently extended tablespace usually indicates a high level of use. Placing the data files of a
highly accessed tablespace on the same disk might result in high disk utilization and seek time.
I/O load should be balanced evenly over all available disks and controllers (see Tips 1-3).
Tip 23: Move Large And Highly Accessed Tables To Their Own Tablespaces
Try to keep tablespace size under 2GB to ease administration and reorganization. If a
tablespace's size exceeds 2GB, consider moving large tables and indexes out to their own
tablespaces.
Tip 24: Isolate The Archive Log File System
Sequential write performance of the archive log file system is critical. Writing of an archive log
file should not be interrupted by a seek to another area of the disk. The archive log file system
should be isolated on its own disk.
SAP - Application Servers
Tip 26: Move The SAP Main Instance Off The Database Server
Separating the main instance from the database server provides easier maintenance,
administration, and tuning of servers with only one type of workload. Dedicate your database
server to running only Oracle.
Tip 27: Consider Installing Two Application Instances On The Same Node
An alternative to a single, big instance is two smaller ones. With two application instances, you
have two sets of semaphores and two ABAP program buffers (512MB total). Semaphore
contention would be reduced providing better CPU utilization and throughput. The required
number of work processes could be split over two instances. Workload can be segregated
resulting in more efficient buffer utilization and better cache hit rates.
When allocating R/3 work processes and buffers, the combined allocation should not force
operating system paging (see Tip 29).
Tip 28: Update The SAP Kernel To The Latest Available Release
The SAP kernel should be maintained at the latest level available. From release 3.0C to 3.1H, the
kernel is backward compatible and can be safely updated without requiring an R/3 upgrade (see
Appendix F). The latest R/3 kernel incorporates fixes for problems found in earlier versions;
some fixes are performance related.
If a large extended memory requirement is contributing to a SAP over-allocation, there are job
scheduling and programming techniques that can be applied to help reduce this requirement.
Check the SAP transaction ST02 for extended memory utilization; the maximum used size
should always be lower than the available size. If the used heap memory size is greater than zero,
investigate in more detail. Transaction RZ03 > Edit > Views > Memory Management > Absolute
Values shows detailed statistics about current memory usage. The number of restarted work
processes (dialog and non-dialog) should be equal to zero. If not, work processes were forced
into private mode.
Work processes that go into private mode might also influence the wait times observed in ST03.
As work processes go into private mode, fewer are available to handle the remaining user
workload.
Tip 31: Use Logon Groups
R/3 logon load balancing is used to dynamically distribute R/3 users across application server
instances. Logon groups are defined consisting of one or more application instances having
similar processing workloads. When logging on, users are evenly distributed across the available
application servers of a logon group. This increases the efficiency of the defined groups in terms
of performance and system resources.
By grouping users into functional areas, SAP buffer requirements may be reduced, helping to
eliminate object swaps. Only objects relevant to a specific functional area would be buffered. For
example, an "FI application server" would only load FI ABAPs in its program buffer.
Logon groups are installed and maintained centrally within the R/3 System (transaction SMLG).
You can define the maximum response time for each application server group, and the maximum
number of users that can log on to any particular group server. This means you can set up
application servers with improved response time behavior for important workgroups with time-
critical transactions. Setting up and using logon groups means that the behavior of each server
becomes more predictable allowing you to tune more effectively.
Tip 33: Group Key 2 And Key 6 In The Same Shared Memory Pool
Example:
ipc/shm_psize_02 = -10
ipc/shm_psize_06 = -10
If you need to enlarge the ABAP program buffer and pool 10 has already reached 256 MB, move
the ABAP program buffer and the Dispatcher Administration table out of pool 10, provided a
shared memory segment is available.
Example:
ipc/shm_psize_02 = -30
ipc/shm_psize_06 = -30
ipc/shm_psize_30 = 256000000
abap/buffersize = 245000
SAP - ABAP/4 Programming
Tip 34: Transfer As Little Data As Possible
Using an asterisk in the SELECT clause of a SELECT statement requests all fields of the table
specified in the FROM clause (e.g., SELECT * FROM KNA1). Most often, the calling program
doesn't need all the fields of a table.
Unwanted fields can be avoided by specifying fields in a SELECT clause field list or in the
definition of a view. By eliminating extraneous fields, you reduce CPU, memory, network, and
I/O overhead. On the other hand, extensive use of field lists (not views) might lead to a large
variety of statements. This would influence the performance of the statement cache managed by
the database interface.
A SELECT statement without a WHERE clause may result in an expensive index range scan on
a client-dependent table. Be aware that SAP will add a WHERE clause specifying MANDT =
SY-MANDT.
If you expect to fetch less than 20% of a table, specify all known conditions in the WHERE
clause. With a WHERE clause, Oracle can choose the optimum execution plan and select the best
index. Index range scans are great if you expect a small result set.
Note: If the full key is known, use SELECT SINGLE (Index Unique Scan).
If you expect to fetch more than 20% of a table, consider doing a full table scan with
SELECT ... CLIENT SPECIFIED. This will force Oracle into table scan mode avoiding any
index overhead. Also, data blocks are read sequentially instead of randomly resulting in less head
movement (seeks) and better data transfer.
If the entire set of fields retrieved in a SELECT statement is stored in an index, do a full table
scan only if you expect to fetch more than 40% of the table. All of the required data can be
retrieved from the index without going to the table. Obtaining data directly from an index usually
requires significantly fewer physical reads.
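The access-path rules of thumb above can be summarized as a small decision helper. The 20% and 40% break-even points and the access-path names come from the text; the function itself, and treating the thresholds as exact cutoffs, are illustrative assumptions.

```python
# Sketch: choosing an access path from the expected fraction of the table
# to be fetched, per the rules of thumb in the text.

def access_path(expected_fraction, full_key_known=False, index_only=False):
    if full_key_known:
        return "SELECT SINGLE (index unique scan)"
    # When the index covers every selected field, a full table scan only
    # pays off above roughly 40% of the table; otherwise about 20%.
    threshold = 0.40 if index_only else 0.20
    if expected_fraction <= threshold:
        return "index range scan (specify WHERE conditions)"
    return "full table scan (SELECT ... CLIENT SPECIFIED)"

print(access_path(0.05))                   # index range scan
print(access_path(0.30))                   # full table scan
print(access_path(0.30, index_only=True))  # index range scan
```

Note how the same 30% fetch flips from a full table scan to an index range scan once the index covers all the selected fields.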
If you don't specify all known conditions in the WHERE clause, extraneous rows may be fetched
creating unnecessary CPU, memory, network, and I/O overhead. Don't rely on the CHECK
statement; be specific.
Tip 35: Avoid Nested Select Loops And Logical Databases
For example:
SELECT * FROM TABLE1 WHERE ...
  SELECT * FROM TABLE2 WHERE ...
    SELECT * FROM TABLE3 WHERE ...
      SELECT * FROM TABLE4 WHERE ...
        PROCESS ...
      ENDSELECT.
    ENDSELECT.
  ENDSELECT.
ENDSELECT.
If the TABLE1 SELECT returns 1,000 rows, the TABLE2 SELECT will be performed 1,000
times. If each TABLE2 SELECT returns 100 rows, the TABLE3 SELECT will be performed
100,000 times. If each TABLE3 SELECT returns 100 rows, the TABLE4 SELECT will be
performed 10,000,000 times.
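The multiplication behind that explosion is worth making explicit. The sketch below computes how many times the SELECT at each nesting level runs, using the row counts from the example above; the variable names are assumptions.

```python
# Sketch: how many times each nested SELECT executes. Each level's row
# count multiplies the number of executions of the level below it.

rows_per_level = [1000, 100, 100]  # rows returned per SELECT at each level

executions = 1
counts = []
for rows in rows_per_level:
    counts.append(executions)  # times the SELECT at this level runs
    executions *= rows         # rows driving the next level down
counts.append(executions)      # times the innermost (TABLE4) SELECT runs

print(counts)  # [1, 1000, 100000, 10000000]
```

Four levels of modest row counts already produce ten million executions of the innermost statement, which is why the view/FOR ALL ENTRIES rewrite below matters.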
Also, logical database accesses (GETs) are nested SELECT loops in disguise. As you traverse
the hierarchy of a logical database, SELECT-ENDSELECT loops are nested behind the scenes.
For example, accessing a table five levels deep in a logical database may result in nested select
loops five levels deep. Also, a significant amount of unwanted processing may occur due to code
embedded in the logical database for other programs. If you want to write modular code, use
function modules, not logical databases.
SELECT from a view directly into an internal table, then loop process.
Example:
1. SELECT from the view (join) into the internal table I_TABLES.
2. Sort I_TABLES.
3. SELECT ... FOR ALL ENTRIES IN I_TABLES.
PROCESS ...
ENDSELECT.
-----------------------------------------------------------------------------------------------------
The first SELECT will do an array fetch from a view (join) into an internal table. The second
SELECT will UNION all of the resulting SELECT statements from FOR ALL ENTRIES. It will
then do an array fetch and individually process each row of the result set. In this example, two
array operations should run much quicker than twelve million individual database operations.
The use of views and SELECT ... FOR ALL ENTRIES has reduced ABAP run times from hours
to minutes, minutes to seconds.
Tip 36: Create Secondary Indexes Where Needed But Be Careful!
Secondary indexes can speed up queries dramatically. The ABAP/4 data dictionary allows 16
secondary indexes for each table. But be very careful! Adding secondary indexes will slow
inserts and updates. Also, the database optimizer may take longer to choose the best execution
plan.
Consider a secondary index when fields are selected without index support, resulting in long
sequential reads. An index must be selective; an index range scan should not return thousands of
rows.
Tip 37: Use Table Buffering
Typical candidates for buffering include customizing and system tables. In certain cases, master
data with customizing character can also be buffered.
Tip 39: Choose A Suitable OCCURS
An ABAP program will run faster if you know the maximum size of an internal table when
defining it. The estimated number of rows is specified with the OCCURS parameter (the value
after addition OCCURS in the defining DATA statement of the table).
The estimated size of the table in bytes is the main factor influencing the speed of your program.
The size is calculated as the product of the width and number of rows in the table.
When the first table entry is created (APPEND, COLLECT), the SAP kernel allocates enough
memory space to hold the whole table or at least a block of approximately 8-16K. Making a good
guess at the OCCURS parameter prevents the kernel from allocating unnecessary memory space.
The OCCURS value is especially important if you define internal tables within internal tables. If
the OCCURS parameter is set too large for an inner table, memory is wasted, since blocks of at
least 8K are allocated for each row of the outer table.
Since the kernel allocates memory as needed, use the following guidelines when determining the
value of OCCURS:
* The OCCURS parameter does not limit the actual size of a table; it only affects the
performance of table operations.
* If you cannot estimate the size of the table, set OCCURS to zero and let the system do it for
you.
* If the estimated size of the table is below 8K, set OCCURS equal to the estimated number of
rows.
* If the estimated size of the table is larger than 8K, set OCCURS to zero.
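As a sketch, these guidelines can be condensed into a small helper function (the 8K block size comes from the text above; the function itself is only illustrative, not part of the SAP kernel):

```python
BLOCK_SIZE = 8 * 1024  # minimum allocation unit mentioned in the text

def suggest_occurs(row_width_bytes, estimated_rows):
    """Apply the OCCURS guidelines: use the row estimate for tables
    below 8K total, otherwise let the kernel allocate (OCCURS 0)."""
    if estimated_rows is None:                       # size cannot be estimated
        return 0
    if row_width_bytes * estimated_rows < BLOCK_SIZE:
        return estimated_rows                        # table fits below one 8K block
    return 0                                         # large table: allocate as needed

print(suggest_occurs(100, 50))    # 5000 bytes < 8K -> 50
print(suggest_occurs(100, 1000))  # 100000 bytes >= 8K -> 0
print(suggest_occurs(100, None))  # unknown size -> 0
```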
Example (the function module name is a placeholder):
CALL FUNCTION '<update_function>' IN UPDATE TASK
     EXPORTING ...
COMMIT WORK.
When function modules are called in update task, field values and internal tables are written to
the database (VBLOG tables). When the COMMIT WORK occurs, the queued function modules
and data are passed to an update work process for processing.
For transactions run in dialog, asynchronous updates make sense when trying to provide the end
user with the fastest possible response time. For transactions run in batch, end-to-end response
time is no longer a concern; throughput is the challenge. If your goal is to maximize batch
throughput, it makes sense to update locally instead of asynchronously. By disabling
asynchronous update, update log overhead and contention are eliminated. Updates are done
directly by the batch work process.
To disable asynchronous update, insert the line "SET UPDATE TASK LOCAL" just prior to the
function module call in update task.
Example (the function module name is a placeholder):
SET UPDATE TASK LOCAL.
CALL FUNCTION '<update_function>' IN UPDATE TASK
     EXPORTING ...
COMMIT WORK.
When disabling asynchronous update, we strongly recommend that you modify a copy of the
SAP standard transaction and submit it only in batch.
CALL TRANSACTION
If a program submitted in batch uses CALL TRANSACTION and the called transaction uses
asynchronous updates, the transaction can be called with update mode 'L', making all
asynchronous updates local.
Example (transaction code and BDC table are placeholders):
CALL TRANSACTION '<tcode>' USING <bdc_table> UPDATE 'L'.
APPENDIX A
Mirror Write Consistency
What is MWC?
Mirror write consistency is a parameter of an AIX logical volume (LV) which can be set to either
YES or NO. It's meaningless unless the logical volume is mirrored.
Consider a write in progress to a logical volume that has mirror copies. Before the write
completes, power is lost, so the write never finishes. When the system reboots, as with an
unmirrored logical volume, the data read from the area addressed by the unfinished write may be
old data, new data, or a mixture of the two. The difference mirroring introduces is that if the
data is read twice, the two reads may give different results, because they may read from
different copies of the data. In many environments this is not a problem, but some applications
are sensitive to it; this is why MWC is provided.
Near the beginning (outer edge) of each disk, AIX keeps the mirror write consistency cache
record. This is a single disk block regarded as an array of 1-bit records. Each bit in the
block represents the MWC state of one logical track group (LTG) on the disk. An LTG is 128K in
size.
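Assuming the figures above (one 512-byte cache record, one bit per 128K LTG), the record position touched by a write could be computed as follows (an illustrative sketch, not AIX source code):

```python
LTG_SIZE = 128 * 1024        # logical track group size from the text
MWC_RECORD_BYTES = 512       # one disk block, treated as a bit array

def mwc_bit_for_offset(byte_offset):
    """Return (byte index, bit index) in the MWC record for the LTG
    containing the given byte offset on the logical volume."""
    ltg = byte_offset // LTG_SIZE
    assert ltg < MWC_RECORD_BYTES * 8, "offset beyond what one record can track"
    return ltg // 8, ltg % 8

# A 512-byte record tracks 512 * 8 = 4096 LTGs, i.e. 4096 * 128K = 512 MB.
print(mwc_bit_for_offset(0))           # first LTG -> (0, 0)
print(mwc_bit_for_offset(129 * 1024))  # second LTG -> (0, 1)
```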
When a write must be performed to a mirrored logical volume with MWC turned on, the LVM
first checks the in-memory version of the MWC record. If it shows that any of the LTG areas
affected by the write are in sync, the LVM must update the in-memory version of the record and
flush it to disk, marking the new LTGs as out of sync. The write of the MWC record involves a
seek to the beginning of the disk. Once the MWC record has been flushed to disk, the actual
mirror writes can take place; this obviously involves a seek back to the data area of the disk.
When the writes of the mirrored data complete, the LVM does not immediately update the MWC
record on disk. That is handled later since it is not dangerous to have areas marked as out of sync
when really they are in sync. If the system should crash, when it reboots, the MWC record on
each disk is examined and if any LTGs are found to be marked out of sync they are synchronized
by reading either copy and writing it to the other. The copy to read is chosen randomly since each
copy of the data is equally valid (or invalid).
If MWC is turned on for a logical volume, performance of writes to that logical volume will be
severely impacted because each write will result in a seek to the beginning of the disk and a 512
byte write before seeking to the data to be written. If it can be determined that MWC can be
turned off then a substantial performance gain will result.
APPENDIX B
Installing the ORACLE Database on Raw Devices
Prerequisites
Caution
The database should only be installed and handled by personnel familiar with raw devices.
Installing an ORACLE database on raw devices increases the complexity of the R/3 installation,
and increases the complexity of handling the database instance.
Note
Before you install your ORACLE database with raw devices, please check whether it would not
be better to increase performance by transferring the database instance to a separate host or by
adding processors to your host.
The administration of a system which combines a file system with raw devices must be handled
very carefully.
The installation of the ORACLE database on raw devices is only recommended for large
databases.
This means:
* If the database server has 2 - 4 CPUs -> both versions can be considered
* If the database server has more than 4 CPUs -> we recommend raw devices
Before you start to install the R/3 System, read this chapter for information on how to prepare the
normal installation procedure.
The person conducting the installation must be familiar with a standard ORACLE database
configuration (redo logs, security, performance issues and disk configuration as described in the
section "Overview: Database Installation" in the main manual).
As opposed to a standard (file system) installation, the ORACLE database (redo logs, ORACLE
data and control information) uses raw devices here.
The ORACLE database raw devices are character devices accessed via links after the
installation. Each tablespace can be extended by additional raw devices (this might be compared
with creating several data files for each tablespace when data files are used).
The links for the database log files are located in the directories
/oracle/<SAPSID>/origlogA
/oracle/<SAPSID>/origlogB
If the ORACLE feature for mirroring logs is used, the following directories contain the links to
the character devices for the mirror logs:
/oracle/<SAPSID>/mirrlogA
/oracle/<SAPSID>/mirrlogB
All other links (e.g. system_1, roll_1, temp_1) are located in the directory
/oracle/<SAPSID>/sapraw
As mentioned above, these directories only contain links, so they need only a few bytes of
space. The actual space required for the data, control data, and logs of the database is located
on the raw devices.
The naming convention for the links is similar to that of ordinary files and can be viewed in the
create_RAW_links file that is written by R3INST (see description below).
Installation Procedure
The following steps need to be carried out in addition to those that are normally required for the
installation of file system based databases.
Caution
Before you start the installation, you must copy the file /<CD mount>/SAPFS.TPL from the CD
SAP KERNEL to your installation directory <INSTDIR>.
Edit the file <INSTDIR>/SAPFS.TPL with any editor. In the section [DB-OPTIONS], change
RAW-DEVICE = N
to
RAW-DEVICE = Y
[R3INST screen: No/Yes selection]
On the following screens you can increase tablespace sizes or change the mirroring mode.
Note that the SAPDATA numbering in the <INSTDIR>/SAPFS.TPL file still appears, but is not
used. The size of each tablespace can be modified. The sum of the current sizes is given under
<DBDATA>/sapraw.
Note
Ignore the values for the log and mirror log files, because the log files are created as
raw devices.
The size of /oracle/<SAPSID>/sapraw is not the final size. The correct size of the raw devices
is calculated when the link script is created (see below).
After modifying the size, the following question is shown on the screen (see the section
"Creating UNIX Users" of chapter "Installing the Central Instance" in the main manual):
[R3INST screen prompt]
In the next step, the link script is created. Restart R3INST and select:
Continue Interrupted Installation -> Cancel the Installation -> DB-TOOLS -> Create Link
Script for RAW device-links.
The link script file create_RAW_links is then created in the directory <INSTDIR>.
#Generated
#at <Date>
ln -s <RAWDEV>/<SAPSID>cntrl1 /oracle/<SAPSID>/sapraw/cntrl1.dbf
ln -s <RAWDEV>/<SAPSID>cntrl2 /oracle/<SAPSID>/sapraw/cntrl2.dbf
ln -s <RAWDEV>/<SAPSID>cntrl3 /oracle/<SAPSID>/sapraw/cntrl3.dbf
ln -s <RAWDEV>/<SAPSID>_g11m1 /oracle/<SAPSID>/origlogA/log_g11m1.dbf
ln -s <RAWDEV>/<SAPSID>_g13m1 /oracle/<SAPSID>/origlogA/log_g13m1.dbf
...
The required raw device sizes are shown as a comment for each link.
Note
Add at least 1 MB per raw device to the disk space requirements given in SAPFS.TPL for
administrative purposes. Otherwise, the raw devices might be slightly too small and errors will
occur when the database is created.
Exit R3INST and copy the script used to create the links into the sapraw directory with:
cp <INSTDIR>/create_RAW_links /oracle/<SAPSID>/sapraw
Then edit the copy to match your installation requirements.
In every line beginning with ln -s, replace the part after ln -s, for example
<RAWDEV>/<SAPSID>cntrl1, with the real raw device name.
Example
If you use logical volumes to create the raw devices on several operating systems, replace the
part after ln -s with /dev/<VGName>/R<lvName>. If the raw device name cannot be determined
(e.g. <RAWDEV>/<SAPSID>cntrl1), the link must be set to the real raw device name.
Note
If you intend to use redo log mirroring, you have to add the following entries to the
create_RAW_links script.
When raw devices are being used, the default for R3INST is hardware mirroring (replace the part
after ln -s with the real raw device name and <SAPSID> with the SAP system ID):
ln -s <RAWDEV>/<SAPSID>_g11m2 /oracle/<SAPSID>/mirrlogA/log_g11m2.dbf
ln -s <RAWDEV>/<SAPSID>_g13m2 /oracle/<SAPSID>/mirrlogA/log_g13m2.dbf
ln -s <RAWDEV>/<SAPSID>_g12m2 /oracle/<SAPSID>/mirrlogB/log_g12m2.dbf
ln -s <RAWDEV>/<SAPSID>_g14m2 /oracle/<SAPSID>/mirrlogB/log_g14m2.dbf
The raw device files and the redo log files have the same size.
Create the required raw devices with the sizes described in the create_RAW_links script file.
If the required sizes do not correspond to the exact sizes of the raw devices because the
extent size of logical volumes is defined as 4 MByte, you have to round the raw device size up.
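The rounding can be sketched as follows (the 4 MB extent size and the 1 MB administrative reserve come from this appendix; the helper itself is only illustrative):

```python
EXTENT_MB = 4    # logical volume extent (physical partition) size from the text
RESERVE_MB = 1   # administrative reserve per raw device, as recommended above

def raw_device_size_mb(required_mb):
    """Round the required size plus reserve up to a whole number of extents."""
    needed = required_mb + RESERVE_MB
    extents = -(-needed // EXTENT_MB)   # ceiling division
    return extents * EXTENT_MB

print(raw_device_size_mb(20))   # 21 MB -> 6 extents -> 24 MB
print(raw_device_size_mb(80))   # 81 MB -> 21 extents -> 84 MB
```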
Execute the link script:
sh /oracle/<SAPSID>/sapraw/create_RAW_links
Caution
Change the owner of the links and of the raw devices to ora<sid>:dba.
Restart R3INST and continue the installation using the same procedure as for file system
installations.
APPENDIX C
R/3 note no. 72638
20.01.1998
________________________________________________________________________
Number 0072638
Language E
________________________________________________________________________
Long text
Symptom
This behavior first occurred with SQL*Net V2 and Oracle release 7.2.3.
Apply the patches listed below to solve the problem with 7.2.3; for 7.3.3, see the separate
section.
The cause of this problem is that ORACLE opens the operating system socket for
the communication between the client process (R/3 work process) and the ORACLE
shadow process on the database server in the "TCP Delay" mode. This mode
leaves it to the TCP/IP implementation of the operating system to delay
sending half-empty TCP packets (so that a packet may be filled). The wait
cycle is approximately 200ms long.
The duration of a data transfer between the client and server is nonlinearly
dependent on the data volume to be transferred (for example, on the results of
a 'Fetch' operation). Put more simply, this means that for relatively small
datasets the data transfer can take 200ms.
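The underlying socket option is standard TCP; a minimal sketch in Python (not the actual SQL*Net code) of opening a socket with TCP_NODELAY, i.e. with the Nagle delay disabled:

```python
import socket

# Create a TCP socket and disable the Nagle algorithm (the "TCP Delay"
# mode described above), so small packets are sent immediately instead
# of being held back up to ~200 ms waiting to be coalesced.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)
s.close()
```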
This behavior can basically be observed on both SQL*Net V1 and SQL*Net V2. The
difference between these SQL*Net versions is that this effect occurs for
smaller datasets in SQL*Net V2 (from 2K) and more frequently than for SQL*Net
V1 (effect is only visible from approximately 4K). This is due to the fact
that SQL*Net V1 works with an internal 4K buffer, whereas SQL*Net V2 uses a 2K
buffer.
Up to now this performance problem was mainly observed on HP-UX but can also
occur on other platforms.
Solution
In the meantime, there is a patch for SQL*Net V2 from ORACLE which allows the
sockets to be opened with TCP_NODELAY. However, this patch is only available
for Version 7.2.3!
Unfortunately, for this patch to be effective for the communication from the
client to the server (thus, for example, for insert or update operations), it
is not sufficient simply to import the patch in the database, but the R/3
kernel must also be linked with the patched ORACLE libraries. This has already
been done with the standard 3.1G kernels for HP-UX. For all other platforms,
this is the case with the 3.1G kernel as of patch level 26.
Preconditions for the patch are
1. ORACLE 7.2.3
2. SQL*Net V2
**************************************************************
Caution!!!
The ORACLE patch only solves a very specific performance problem of SQL*Net.
It cannot be seen as a solution for every type of performance problem. A
precise analysis of the respective problem is absolutely necessary, and the
patch should only be applied if the specified symptoms cause a significant
deterioration in the remote database access when compared to the local
database access.
***************************************************************
1. If you have used SQL*Net V1 so far, you must first switch to SQL*Net V2. To
do this, proceed as described in Note 61041.
2. Download the corresponding ORACLE patch for your hardware platform from
sapservX. Follow the instructions in the respective README files when
importing the patch.
AIX : ~ftp/general/3rdparty/oracle/AIX/7.2.3/bug330388
~ftp/general/3rdparty/oracle/AIX/7.2.3/bug475453
4. Download the latest 3.1G kernel for your hardware platform from sapservX
and import it according to the description in Note 72342. You can also use
the 3.1G kernel without any problems in R/3 Releases 3.0D/E/F.
7.3.3.X
As of Release 7.3, no patch is necessary to configure the nodelay behavior.
Solution 1: Create the file protocol.ora in the directory /oracle/<SID> containing the entry
tcp.nodelay = yes
OR
Solution 2: Set the SDU size in tnsnames.ora and listener.ora:
tnsnames.ora:
(DESCRIPTION =
  (SDU = 32768)
  (ADDRESS_LIST = ...
listener.ora:
(SID_DESC =
  (GLOBAL_DBNAME = C11.world)
  (SDU = 32768)
  (SID_NAME = ...
The SDU size is the communication packet size of SQL*Net V2. You can set it (in bytes) as of
Release 7.3.
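The effect of a larger SDU on the number of packets can be estimated with simple arithmetic (an illustrative sketch; the 2K default and the 32K value come from the text above):

```python
import math

def sqlnet_packets(result_bytes, sdu_bytes):
    """Number of SQL*Net packets needed to ship a result set of the
    given size with the given session data unit (SDU) size."""
    return math.ceil(result_bytes / sdu_bytes)

# A 64 KB fetch result: default 2K SDU vs. the 32K SDU shown above.
print(sqlnet_packets(64 * 1024, 2048))    # 32 packets
print(sqlnet_packets(64 * 1024, 32768))   # 2 packets
```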
We recommend that you use solution 1, since it produces the same behavior as
under 7.2.3.
Note that in any case, an installation specific to each application server will be
made in the near future (migration to Oracle8).
________________________________________________________________________
300 to 30F
________________________________________________________________________
OP system AIX
DEC-UNIX
HP-UX
RELIANT
SOLARIS
DB system ORACLE
APPENDIX D
Enlarging the Redo Logs - Moving Redo Logs to Other Disks
During a standard installation, four redo log members are installed with a size of 20 MB each.
The size of the redo log files directly affects database performance and security.
Caution
Large redo log files give better performance, but there are three disadvantages:
* If the redo log file currently in use is lost, the amount of lost data is large.
1. Start the database administration tool in line mode:
sqldba lmode=y
2. Backup the whole database. (This step is necessary to have a restart point if any of the
following actions should fail)
sqldba lmode=y
Example
6. Create new log members with the new size (or at a new file location).
Example:
ALTER DATABASE ADD LOGFILE GROUP 101
  ('/oracle/<SID>/origlogA/log_g101_m1',
   '/oracle/<SID>/mirrlogA/log_g101_m2')
  SIZE 80M;
Naming convention:
log_g<group_number>_m<member_number>
7. Repeat step 5 and 6 for the log groups 102, 103 and 104.
If you try to drop the logfile currently being written to, you get an ORACLE error message.
9. Backup the whole database. (Since any change of the redo log files results in a
change of the structural database description in the control files, this backup will be
necessary.)
APPENDIX E
Creating Additional Rollback Tablespaces
CREATE TABLESPACE PSAPROLL2 DATAFILE '/oracle/SID/sapdatax/roll2_1/roll2.data1'
SIZE 204800K REUSE DEFAULT STORAGE ( INITIAL 1024K NEXT 1024K MINEXTENTS
2 MAXEXTENTS 121 PCTINCREASE 0 );
CREATE ROLLBACK SEGMENT PRS2_1 TABLESPACE PSAPROLL2;
A third rollback tablespace (e.g. PSAPROLL3 with datafile
'/oracle/SID/sapdatax/roll3_1/roll3.data1') can be created in the same way.
initSID.ora entry:
rollback_segments =
(PRS_1,PRS2_1,PRS3_1,PRS_2,PRS2_2,PRS3_2,PRS_3,PRS2_3,PRS3_3, ...)
APPENDIX F
R/3 note no. 79376 abbreviated for Oracle and AIX
20.01.1998
________________________________________________________________________
Number 0079376
Language E
________________________________________________________________________
Long text
Symptom
Error in the SAP kernel of Releases 3.0C, 3.0D, 3.0E, 3.0F, 3.10 or 3.1G
Solution
with Releases 3.0C, 3.0D, 3.0E, 3.0F, 3.10 and 3.1G. Therefore, you can
affected system.
The general recommendation from SAP that you keep up to date with the
Definition of terms:
o still apply the Hot Packages for your R/3 Release for corrections
to ABAP/4 Repository objects (that is, apply the same Hot Packages
as before).
Caution: If the system has Release 3.0D or 3.0E and you are working with
Preconditions
o If you are using DBMS ORACLE under UNIX and the initial release is
There is a guide and the software for this in the 3.1H upgrade
package.
ORACLE 7.3.3 with the 3.1H kernel for all R/3 releases since 3.0D.
o If you are using the operating system AIX, you must ensure that
o Make sure that the languages and character sets (code pages) are
Note 83196.
Check
zcsa/installed_languages.
- The number of the character set that your system uses. Table
Installation
You can install a 3.1H kernel from a regular 3.1H kernel CD for by
under AS/400) on the central instance host and then on all application
o Stop saposcol
Under UNIX:
cd /usr/sap/<SAPSID>/SYS/exe/run
./saposcol -k
rm saposcol
- Under UNIX:
Change to the directory /sapmnt/<SAPSID>/exe
Sections of the old kernel are copied back into the directory
saproot.sh):
mkdir oldkernel
cp -p run/* oldkernel
su - root
exit
su - root
cd /sapmnt/<SAPSID>/exe
./saproot.sh <SAPSID>
exit
- Under UNIX, you can download the frontend software from the
/<sapcd>/GUI/UNIX//R3UP readunixgui
o Very important: For individual executables (particularly
under general/R3server/patches/rel31H/OS/OPS/DBMS/*
OS = UNIX, NT or OS400
follows:
- Under UNIX:
1. cd /usr/sap/<SAPSID>/SYS/exe/run
2. /<CD-Dir>/SDK/UNIX/<OS>/CAR -xvf /<CD-Dir>/SDK/UNIX/<OS>/<additional package>.CAR
o Only for initial Releases 3.0C and 3.0D: After importing the
automatically.
o Only for initial Releases 3.0C and 3.0D: In 3.0C and 3.0D, it is
possible that Batch/CPIC user IDs could also log on online via RFC
kernel does not meet your expectations, the old 3.0C/D/E/F or 3.10/3.1G
Procedure:
You do not have to reset the database upgrades (DB patches) or the
Only for initial Releases 3.0C and 3.0D: In Release 3.0E, the algorithm
for password encryption has changed. This means that the password must
be reset for all users who were created in the system with the new
newer or not. The system must still be operated with the 3.1H
kernel.
________________________________________________________________________
30C to 30F
APPENDIX G
Table Buffering
Only transparent tables and pooled tables can be buffered. Buffering is not possible for cluster
tables.
The following two points indicate when a table should not be buffered:
* Data read by the application must always be up-to-date. Delays due to synchronization are
unacceptable.
* Table data is frequently changed. In this case the work involved in synchronization can be
greater than the improvement in performance gained by buffering. In general, buffering a table is
not a good idea if more than one percent of the accesses involve changes.
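The one-percent rule of thumb can be written down directly (an illustrative sketch, not an SAP tool):

```python
def buffering_advisable(reads, changes, max_change_ratio=0.01):
    """Per the rule of thumb above: do not buffer a table if more than
    one percent of all accesses are changes."""
    total = reads + changes
    return total > 0 and changes / total <= max_change_ratio

print(buffering_advisable(reads=100000, changes=500))   # ~0.5% changes -> True
print(buffering_advisable(reads=100000, changes=5000))  # ~4.8% changes -> False
```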
* Buffering not allowed: Buffering is not allowed for this table, for example because
application programs always require the most up-to-date data from the table or because it is too
frequently changed.
* Buffering allowed but switched off: There is no commercial or technical reason not to buffer
the table. Applications accessing the table run correctly whether it is buffered or not. Buffering
for the table is not activated in the standard profile because it is not possible to estimate the table
size and the access profile (frequency of the different ways of accessing the table). In the relevant
installation buffering can be activated at any time by specifying a buffering type.
* Buffering active: The table is to be buffered. How this is achieved must in this case be
determined by specifying a buffering type.
* Full buffering: With full buffering, either the whole table or none of the table is located in the
buffer. When a read access takes place, the entire table is loaded into the buffer.
* Generic buffering: When you access a record from a table that is generically buffered, all
records with corresponding generic key fields are loaded into the buffer. In other words, the
generic areas of a table are fully buffered. Within the buffer, the generic areas are treated as
objects in their own right (like fully buffered tables).
* Single-record buffering: Only those records of a table actually being accessed are loaded into
the buffer. This saves storage space in the buffer. However, this type of buffering involves more
administrative tasks than full buffering. In addition, many more database accesses are needed to
load the records of the table.
Tables with single-record buffering by record are located in the single-record table buffer
TABLP, fully and generically buffered tables are both located in the full table buffer TABL.
Full buffering is recommended for:
* Tables up to 30 KB in size. If a table is accessed frequently, but all accesses are read accesses,
this value can be exceeded. However, you should always pay attention to the buffer utilization
factor.
* Larger tables with several frequently accessed rows. However, if the application program is
able to formulate an extremely selective WHERE condition for these multiple accesses using a
database index, it may be advisable to dispense with buffering.
* Tables where frequent attempts are made to access data not contained in the table (resulting in
a "No record found" message). With full buffering, all records of a table are always contained in
the buffer. A direct decision can then be made as to whether or not the table contains a record for
a specific key.
When considering whether a table should be fully buffered, you should take into account the size
of the table, the number of read accesses, and the number of write accesses. Tables best suited to
full buffering are small and are subjected to a large number of read accesses, but are rarely
written to.
A table should be buffered generically if only certain generic areas of the table are generally
needed in an application server.
* Client-dependent, fully buffered tables are automatically buffered generically (even if full
buffering was explicitly selected). The client field is the generic key. It is assumed that work is
not being carried out in all clients simultaneously on an application server.
The individual generic areas are treated as independent, fully buffered tables.
Generic buffering only makes sense if the corresponding table is accessed with fully qualified
generic key fields. If a field of the generic key is not assigned a value in a SELECT statement, it
is read directly from the database, circumventing the buffer.
If you access a generic area that is not yet buffered with a fully specified key, the area is
loaded with a database call. If the table does not contain any records in the specified area ("No
Record Found"), the area in the buffer is marked as non-existent. This eliminates further
attempts to access the area.
If a generic key is too short for a large table, very few large areas will be held in the buffer. When
an access is made, too much data may then be loaded to the buffer.
If the generic key is too long, many small generic areas are buffered. If the full generic key is
not specified, requests will circumvent the buffer, resulting in database calls. If there are only
a few records in each generic area, it may be more efficient to fully buffer the table.
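The trade-off can be illustrated by grouping invented record keys on a key prefix, standing in for the generic key fields (a sketch only):

```python
from collections import defaultdict

def generic_areas(keys, prefix_len):
    """Group records by the first prefix_len characters of their key,
    the way generic buffering groups them by the generic key fields."""
    areas = defaultdict(int)
    for k in keys:
        areas[k[:prefix_len]] += 1
    return dict(areas)

# 200 invented keys: "A000".."A099" and "B000".."B099".
keys = [c + f"{n:03d}" for c in "AB" for n in range(100)]

short = generic_areas(keys, 1)   # short generic key: few, large areas
long_ = generic_areas(keys, 3)   # long generic key: many, small areas
print(len(short), max(short.values()))   # 2 areas of 100 records each
print(len(long_), max(long_.values()))   # 20 areas of 10 records each
```

A short prefix yields two areas of 100 records (much data loaded per access); a longer prefix yields twenty areas of 10 records (finer granularity, more buffer administration).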
Single-record buffering is recommended for large tables with a small number of frequently
accessed records. The total size of records accessed should be between 100 and 200 KB.
In single-record buffering, each SELECT statement without the suffix SINGLE proceeds straight
to the database. This applies even if the complete key is specified in the SELECT statement. In
single-record buffering, therefore, the buffer can be used only for accesses with SELECT
SINGLE.
If you access a record that is not yet buffered with SELECT SINGLE, there is a database call to
load the record. This record is marked as nonexistent in the buffer if the table does not contain a
record for the specified primary key ("No Record Found"). This prevents further attempts to
access the record.