ibm.com/redbooks
Redpaper
REDP-4797-00
Note: Before using this information and the product it supports, read the information in Notices on
page vii.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team who wrote this paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in
other operating environments may vary significantly. Some measurements may have been made on development-level
systems and there is no guarantee that these measurements will be the same on generally available systems.
Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this
document should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Memory
AIX
Electronic Service Agent
EnergyScale
Focal Point
IBM Systems Director Active Energy
Manager
IBM
Micro-Partitioning
POWER Hypervisor
Power Systems
POWER6+
POWER6
POWER7
PowerHA
PowerVM
Power
POWER
pSeries
Redbooks
Redpaper
Redbooks (logo)
System Storage
System x
System z
Tivoli
Preface
This IBM Redpaper publication is a comprehensive guide covering the IBM Power 720
and Power 740 servers supporting AIX, IBM i, and Linux operating systems. The goal of this
paper is to introduce the innovative Power 720 and Power 740 offerings and their major
functions, including these:
The IBM POWER7 processor available at frequencies of 3.0 GHz, 3.55 GHz, and
3.7 GHz.
The specialized POWER7 Level 3 cache that provides greater bandwidth, capacity,
and reliability.
The 2-port 10/100/1000 Base-TX Ethernet PCI Express adapter included in the base
configuration and installed in a PCIe Gen2 x4 slot.
The integrated SAS/SATA controller for HDD, SSD, tape, and DVD. This controller
supports built-in hardware RAID 0, 1, and 10.
The latest IBM PowerVM virtualization, including PowerVM Live Partition Mobility and
PowerVM IBM Active Memory Sharing.
Active Memory Expansion technology that provides more usable memory than is
physically installed in the system.
IBM EnergyScale technology that provides features such as power trending, power saving,
power capping, and thermal measurement.
Professionals who want to acquire a better understanding of IBM Power Systems
products can benefit from reading this Redpaper publication. The intended audience includes
these roles:
Clients
Sales and marketing professionals
Technical support professionals
IBM Business Partners
Independent software vendors
This paper complements the available set of IBM Power Systems documentation by providing
a desktop reference that offers a detailed technical description of the Power 720 and
Power 740 systems.
This paper does not replace the latest marketing materials and configuration tools. It is
intended as an additional source of information that, together with existing sources, can be
used to enhance your knowledge of IBM server solutions.
is also skilled on IBM System Storage, IBM Tivoli Storage Manager, IBM System x,
and VMware.
Carlo Costantini is a Certified IT Specialist for IBM and has over 33 years of experience
with IBM and IBM Business Partners. He currently works in Italy Power Systems Platforms
as Presales Field Technical Sales Support for IBM Sales Representatives and IBM
Business Partners. Carlo has broad marketing experience, and his current major areas of
focus are competition, sales, and technical sales support. He is a Certified Specialist for
Power Systems servers. He holds a master's degree in electronic engineering from
Rome University.
Steve Harnett is a Senior Accredited Professional, Chartered IT Professional, and member
of the British Computing Society. He currently works as a pre-sales Technical Consultant in
the IBM Server and Technology Group in the UK. Steve has over 16 years of experience
working in post sales supporting Power Systems. He is a product Topgun and a recognized
SME in Electronic Service Agent, Hardware Management Console, and High end Power
Systems. He also has several years of experience in developing and delivering education to
clients, IBM Business Partners, and IBMers.
Volker Haug is a certified Consulting IT Specialist within IBM Systems and Technology
Group, based in Ehningen, Germany. He holds a bachelor's degree in business management
from the University of Applied Studies in Stuttgart. His career has included more than 24
years working in the IBM PLM and Power Systems divisions as a RISC and AIX Systems
Engineer. Volker is an expert in Power Systems hardware, AIX, and PowerVM virtualization.
He is a POWER7 Champion and also a member of the German Technical Expert Council, an
affiliate of the IBM Academy of Technology. He has written several books and white papers
about AIX, workstations, servers, and PowerVM virtualization.
Craig Watson has 15 years of experience working with UNIX-based systems in roles
including field support, systems administration, and technical sales. He has worked in the
IBM Systems and Technology Group since 2003. Craig is currently working as a Systems
Architect, designing complex solutions for customers that include Power Systems, System x,
and System Storage. He holds a master's degree in electrical and electronic engineering
from the University of Auckland.
Fabien Willmann is an IT Specialist working with Techline Power Europe in France. He
has 10 years of experience with Power Systems, AIX, and PowerVM virtualization. After
teaching hardware courses on Power Systems servers, he joined ITS as an AIX consultant
where he developed his competencies in AIX, HMC management, and PowerVM
virtualization. Building new Power Systems configurations for STG pre-sales is his major area
of expertise today. Recently he gave a workshop on the econfig configuration tool, focused on
POWER7 processor-based BladeCenters during the symposium for French Business
Partners in Montpellier.
The project that produced this publication was managed by:
Scott Vetter, IBM Certified Project Manager and PMP.
Thanks to the following people for their contributions to this project:
Larry Amy, Gary Anderson, Sue Beck, Terry Brennan, Pat Buckland, Paul D. Carey,
Pete Heyrman, John Hilburn, Dan Hurlimann, Kevin Kehne, James Keniston, Jay Kruemcke,
Robert Lowden, Hilary Melville, Thoi Nguyen, Denis C. Nizinski, Pat O'Rourke, Jan Palmer,
Ed Prosser, Robb Romans, Audrey Romonosky, Todd Rosedahl, Melanie Steckham,
Ken Trusits, Al Yanes
IBM U.S.A.
Stephen Lutz
IBM Germany
Tamikia Barrow
International Technical Support Organization, Poughkeepsie Center
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. General description
The IBM Power 720 (8202-E4C) and IBM Power 740 (8205-E6C) servers utilize the latest
POWER7 processor technology designed to deliver unprecedented performance, scalability,
reliability, and manageability for demanding commercial workloads. The new Power 720 and
Power 740 servers provide enhancements that can be particularly beneficial to customers
running applications driving very high I/O or memory requirements.
The performance, availability, and flexibility of the Power 720 server can enable companies to
spend more time running their business utilizing a proven solution from thousands of ISVs
that support the AIX, IBM i, and Linux operating systems. The Power 720 server is a
high-performance, energy efficient, reliable, and secure infrastructure and application server
in a dense form factor. As a high-performance infrastructure or application server, the
Power 720 contains innovative workload-optimizing technologies that maximize performance
based on client computing needs and Intelligent Energy features that help maximize
performance and optimize energy efficiency, resulting in one of the most cost-efficient
solutions for UNIX, IBM i, and Linux deployments.
As a distributed application server, the IBM Power 720 is designed with capabilities to deliver
leading-edge application availability and enable more work to be processed with less
operational disruption for branch office and in-store applications. As a consolidation server,
PowerVM Editions provide the flexibility to use leading-edge AIX, IBM i, Linux applications
and offer comprehensive virtualization technologies designed to aggregate and manage
resources while helping to simplify and optimize your IT infrastructure and deliver one of the
most cost-efficient solutions for UNIX, IBM i, and Linux deployments.
The Power 740 offers the performance, capacity, and configuration flexibility to meet the most
demanding growth requirements, and combined with industrial-strength PowerVM
virtualization for AIX, IBM i, and Linux, it can fully utilize the capability of the system. These
capabilities are designed to satisfy even the most demanding processing environments and
can deliver business advantages and higher client satisfaction.
The Power 740 is designed with innovative workload-optimizing and energy management
technologies to help clients get the most out of their systems (that is, running applications
rapidly and energy efficiently to conserve energy and reduce infrastructure costs). It is fueled
by the outstanding performance of the POWER7 processor, making it possible for applications
to run faster with fewer processors, resulting in lower per-core software licensing costs.
Note: The Integrated Virtual Ethernet (IVE) adapter is not available for the Power 720.
The Power 720 also implements Light Path diagnostics, which provides an obvious and
intuitive means to positively identify failing components. Light Path diagnostics allow system
engineers and administrators to easily and quickly diagnose hardware problems.
An upgrade is available from a POWER6 processor-based IBM Power 520 server
(8203-E4A) to the Power 720 (8202-E4C). A Power 520 (9408-M25) can be converted to a
Power 520 (8203-E4A) and then be upgraded to a Power 720 (8202-E4C). You can also
upgrade directly from a Power 520 (8203-E4A) to the Power 720 (8202-E4C), preserving the
existing serial number.
The Power 720 system's Capacity Backup (CBU) designation can help meet your
requirements for a second system to use for backup, high availability, and disaster recovery. It
enables you to temporarily transfer IBM i processor license entitlements and IBM i
The Power 740 server supports a maximum of 32 DDR3 DIMM slots, with eight DIMM slots
included in the base configuration and 24 DIMM slots available with three optional memory
riser cards. A system with three optional memory riser cards installed has a maximum
memory of 512 GB.
The Power 740 system comes with an integrated SAS controller, offering RAID 0, 1, and 10
support, and two storage backplanes are available. The base configuration supports up to six
SFF SAS HDDs/SSDs, a SATA DVD, and a half-high tape drive. A higher-function backplane
is available as an option; it supports up to eight SFF SAS HDDs/SSDs, a SATA DVD, a
half-high tape drive, Dual 175 MB Write Cache RAID with RAID 5 and 6 support, and an
external SAS port.
All HDDs/SSDs are hot-swap and front accessible. If the internal storage capacity is not
sufficient, there are also four disk-only I/O drawers supported, providing large storage
capacity and multiple partition support.
The Power 740 comes with five full-height PCI Express (PCIe) Gen2 slots for installing
adapters in the system. Optionally, an additional riser card with four PCIe Gen2 low-profile
slots can be installed in a GX++ slot available on the backplane. This extends the number of
slots to nine. The system also comes with a PCIe x4 Gen2 slot containing a 2-port
10/100/1000 Base-TX Ethernet PCI Express adapter.
If additional slots are required, the Power 740 supports external I/O drawers, allowing for a
maximum of four feature code #5802/#5877 PCIe drawers or four #5796 PCI-X drawers. This
makes the server capable of 20 PCIe slots or 24 PCI-X slots.
Note: The Integrated Virtual Ethernet (IVE) adapter is not available for the Power 740.
The Power 740 also implements Light Path diagnostics, which provides an obvious and
intuitive means to positively identify failing components. Light Path diagnostics allows system
engineers and administrators to easily and quickly diagnose hardware problems.
Note: The Power 740 Capacity Backup capability is available for IBM i. For information
about registration and other topics, visit:
http://www.ibm.com/systems/power/hardware/cbu
Figure 1-2 shows the Power 740 rack model.
The following table lists the recoverable physical and environmental specifications for the
operating systems:

Description            Power 720                          Power 740
Relative humidity      8 - 80%                            8 - 80%
Operating voltage      100 - 127 V ac or 200 - 240 V ac   200 - 240 V ac
Operating frequency    47 - 63 Hz                         47 - 63 Hz
Power consumption      840 watts maximum                  1400 watts maximum
Power source loading   0.857 kVA maximum                  1.428 kVA maximum
Thermal output         2867 Btu/hour maximum              4778 Btu/hour maximum
Maximum altitude       3050 m (10,000 ft)                 3050 m (10,000 ft)
Noise level            Tower system: 5.6 bels (operating),   Rack system: 5.6 bels (operating),
reference point        5.5 bels (idle)                       5.5 bels (idle)
                       Rack system: 6.0 bels (operating),
                       5.9 bels (idle)
Note: The maximum measured value is expected from a fully populated server under an
intensive workload. The maximum measured value also accounts for component tolerance
and non ideal operating conditions. Power consumption and heat load vary greatly by
server configuration and utilization. Use the IBM Systems Energy Estimator to obtain a
heat output estimate based on a specific configuration:
http://www-912.ibm.com/see/EnergyEstimator
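The thermal output and power source loading figures listed above follow directly from the maximum power consumption. The short sketch below is only an illustration of that arithmetic; the 3.412 Btu/hour-per-watt factor is the standard conversion, and the power factor is inferred here from the watt and kVA values above rather than quoted from IBM documentation.

# Rough conversion of maximum power draw to heat load and apparent power.
BTU_PER_WATT_HOUR = 3.412          # 1 W dissipated is about 3.412 Btu/hour
ASSUMED_POWER_FACTOR = 0.98        # 840 W / 0.857 kVA and 1400 W / 1.428 kVA both imply ~0.98

def heat_load_btu_per_hour(watts):
    return watts * BTU_PER_WATT_HOUR

def apparent_power_kva(watts, power_factor=ASSUMED_POWER_FACTOR):
    return watts / power_factor / 1000.0

for name, watts in (("Power 720", 840), ("Power 740", 1400)):
    print(f"{name}: {heat_load_btu_per_hour(watts):.0f} Btu/hour, "
          f"{apparent_power_kva(watts):.3f} kVA")
# Power 720: 2866 Btu/hour, 0.857 kVA   (the table lists 2867 Btu/hour)
# Power 740: 4777 Btu/hour, 1.429 kVA   (the table lists 4778 Btu/hour, 1.428 kVA)

For configuration-specific values, use the IBM Systems Energy Estimator referenced in the note above; this sketch only reproduces the ratios implied by the stated maximums.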
#7567
#7568
Depth
Height
Power 720
(8202-E4C)
Power 740
(8205-E6C)
#7134
#7131
#7135
#7132
Dimension
Width
Depth
Dimension
Height
Weight
Figure 1-3 shows the rear view of a Power 740 with the optional PCIe expansion.
(The rear view identifies the external SAS port, the optional four PCIe x8 Gen2 slots installed
in GX++ slot 1, GX++ slot 2, the system ports, SPCN ports, HMC ports, and USB ports.)

Integrated features include:
Service Processor
EnergyScale technology
Hot-swap and redundant cooling
Three USB ports and two system ports
Two HMC ports and two SPCN ports
Table 1-6 summarizes the processor features available for the Power 720.
Table 1-6 Processor features for the Power 720
Feature code   Description
#EPC5          4-core 3.0 GHz POWER7 processor module
#EPC6          6-core 3.0 GHz POWER7 processor module
#EPC7          8-core 3.0 GHz POWER7 processor module
The Power 740 requires that one or two processor modules be installed. If two
processor modules are installed, they have to be identical. Table 1-7 lists the available
processor features.
Table 1-7 Processor features for the Power 740
Feature code   Min/max modules
#EPC9          1/2
#EPC8          1/2
#EPCA          1/2
#EPCB          1/2
Feature code   Feature capacity   Access rate   DIMMs
#EM04          4 GB               1066 MHz      2 x 2 GB DIMMs
#EM08          8 GB               1066 MHz      2 x 4 GB DIMMs
#EM16 (a)      16 GB              1066 MHz      2 x 8 GB DIMMs
#EM32 (a)      32 GB              1066 MHz      2 x 16 GB DIMMs

a. A Power 720 system with 4-core processor module feature #EPC5 cannot be ordered with the
16 GB memory feature #EM16 or 32 GB memory feature #EM32.
It is generally best that memory be installed evenly across all memory riser cards in the
system. Balancing memory across the installed memory riser cards allows memory access
in a consistent manner and typically results in the best possible performance for your
configuration. However, the performance difference between balancing memory fairly evenly
and balancing it exactly evenly across multiple memory riser cards is typically very small.
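As an illustration of the memory features and the balancing guidance above, the following sketch (a hypothetical helper, not an IBM configuration tool) totals an order of memory features and spreads the resulting DIMMs evenly across the installed riser cards:

# Memory feature codes listed above: capacity and DIMM count per feature.
MEMORY_FEATURES = {
    "#EM04": {"capacity_gb": 4,  "dimms": 2},   # 2 x 2 GB DIMMs
    "#EM08": {"capacity_gb": 8,  "dimms": 2},   # 2 x 4 GB DIMMs
    "#EM16": {"capacity_gb": 16, "dimms": 2},   # 2 x 8 GB DIMMs
    "#EM32": {"capacity_gb": 32, "dimms": 2},   # 2 x 16 GB DIMMs
}

def plan_memory(order, riser_cards):
    """Total an order (feature -> quantity) and spread the DIMMs round-robin
    across riser cards, as evenly as the quantities allow."""
    total_gb = sum(MEMORY_FEATURES[f]["capacity_gb"] * qty for f, qty in order.items())
    dimms = []
    for feature, qty in order.items():
        spec = MEMORY_FEATURES[feature]
        size_gb = spec["capacity_gb"] // spec["dimms"]
        dimms.extend([size_gb] * (spec["dimms"] * qty))
    placement = {card: [] for card in range(1, riser_cards + 1)}
    for i, dimm in enumerate(sorted(dimms, reverse=True)):
        placement[i % riser_cards + 1].append(dimm)
    return total_gb, placement

total, layout = plan_memory({"#EM08": 4}, riser_cards=2)
print(total)    # 32 (GB)
print(layout)   # {1: [4, 4, 4, 4], 2: [4, 4, 4, 4]}  -- four 4 GB DIMMs per riser card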
Feature code   Split backplane   JBOD   RAID 0, 1, and 10   RAID 5 and 6   External SAS port
#5618          No                Yes    Yes                 No             No
(split option) Yes               Yes    Yes                 No             No
#EJ01          No                No     Yes                 Yes            Yes
Table 1-10 lists the disk drive feature codes available for installation in a Power 720 or
Power 740 server.

Table 1-10 Disk drive feature codes
Supported by AIX and Linux: #1917, #1886, #1775, #1793, #1995, #1925, #1953, #1885,
#1880, #ES0A, #ES0C, #1790, #1964, #1751, #1752
Supported by IBM i: #1888, #1947, #1787, #1794, #1996, #1956, #1911, #1879, #1948,
#ES0B, #ES0D, #1916, #1962, #1909, #1737, #1738
Table 1-11 lists the disk drive feature codes available for installation in an I/O enclosure
external to a Power 720 or Power 740 server.

Table 1-11 Disk drive feature codes for external I/O enclosures
Supported by AIX and Linux: #3586, #3647, #3648, #3649
Supported by IBM i: #3587, #3677, #3678, #3658
Certain adapters are available for order in large quantities. Table 1-12 lists the Gen2 disk
drives available in a quantity of 150.
Table 1-12 Available Gen2 disk drives in quantity of 150
#1817, #1818, #1844, #1866, #1868, #1869, #1887, #1927, #1929, #1958, #EQ0C, #EQ0D,
#EQ38, #EQ52
Disk mirroring (default), which requires feature code #0040, #0043, or #0308
SAN boot (#0837)
RAID, which requires feature code #5630
Mixed Data Protection (#0296)
If you need more disks than available with the internal disk bays, you can attach additional
external disk subsystems.
SCSI disks are not supported in the Power 720 and 740 disk bays. However, if you want to
use SCSI disks, you can attach existing SCSI disk subsystems.
For more detailed information about the available external disk subsystems, see 2.11,
External disk subsystems on page 77.
The Power 720 and 740 have a slim media bay that can contain an optional DVD-RAM
(#5762) and a half-high bay that can contain a tape drive or removable disk drive.
Table 1-13 shows the available media device feature codes for Power 720 and 740.
Table 1-13 Media device feature codes for the Power 720 and 740
#1103, #1104, #5619, #5638, #5746, #5762
Additional considerations: Take notice of these considerations for tape drives and USB
disk drives:
If tape device feature #5619, #5638, or #5746 is installed in the half-high media bay,
feature #3656 must also be selected.
A half-high tape feature and the feature #1103 Removable USB Disk Drive Docking
Station are mutually exclusive; one or the other can be installed in the half-high bay, but
not both. Unlike the tape drives, feature #1103 does not require #3656.
1.6 I/O drawers for Power 720 and Power 740 servers
The Power 720 and Power 740 servers support the following 12X attached I/O drawers,
providing extensive capability to expand the overall server expandability and connectivity:
Feature #5802 provides 10 PCIe slots and 18 SFF SAS disk slots.
Feature #5877 provides 10 PCIe slots.
Feature #5796 provides six PCI-X slots (supported but not orderable).
The 7314-G30 drawer provides six PCI-X slots (supported but not orderable).
Three disk-only I/O drawers are also supported, providing large storage capacity and multiple
partition support:
The feature #5886 EXP 12S SAS drawer holds a 3.5-inch SAS disk or SSD.
The feature #5887 EXP24S SFF Gen2-bay drawer for high-density storage holds SAS
hard disk drives.
The feature #5786 TotalStorage EXP24 disk drawer and #5787 TotalStorage EXP24 disk
tower hold 3.5-inch SCSI disks (used for migrating existing SCSI drives; supported but
not orderable).
The 7031-D24 holds a 3.5-inch SCSI disk (supported but not orderable).
The Power 720 provides one GX++ slot, offering one connection loop. The Power 740 has
one GX++ slot if one processor module is installed, and two GX++ slots when two processor
modules are installed. Thus, the Power 740 provides one or two connection loops.
Table 1-14 summarizes the maximum number of I/O drawers supported and the total number
of PCI slots available when expansion consists of a single drawer type. The table covers the
Power 720 with one processor card and the Power 740 with one or two processor cards, and
gives the maximum number of #5796 drawers, the maximum number of #5802 and #5877
drawers, and the resulting totals of PCI-X and PCIe slots.
Table 1-15 summarizes the maximum number of disk-only I/O drawers supported.

Table 1-15 Maximum number of disk-only I/O drawers supported
Server      Processor cards   Max #5886 drawers   Max #5887 drawers
Power 720   One               28                  14
Power 740   One               28                  14
Power 740   Two               28                  14
Note: The 4-core Power 720 does not support the attachment of 12X I/O drawers or the
attachment of disk drawers such as the #5886 EXP 12S SAS drawer, #5887 EXP24S SFF
Gen2-bay drawer, #5786 Totalstorage EXP24 disk drawer, or #5787 Totalstorage EXP24
disk tower.
The SFF bays of the EXP24S differ from the SFF bays of the POWER7 system units and
12X PCIe I/O drawers (#5802 and #5803). The EXP24S uses Gen2 (SFF-2) SAS drives that
physically do not fit in the Gen1 (SFF-1) bays of the POWER7 system unit or 12X PCIe I/O
drawers, and vice versa.
The EXP24S includes redundant AC power supplies and two power cords.
To use the no-charge features on your initial order of 6-core and 8-core Power 720 Express
Editions (#0779), you must order:
3.0 GHz 6-core processor module (#EPC6) or 3.0 GHz 8-core processor module (#EPC7)
IBM i Primary Operating System Indicator (#2145)
16 GB minimum memory: 4 x 4 GB (#EM04), or 2 x 8 GB (#EM08), or 1 x 16 GB
(#EM16), or 1 x 32 GB (#EM32)
Minimum of two HDD, or two SSD, or two Fibre Channel adapters, or two FCoE adapters.
You only need to meet one of these disk/SSD/FC/FCoE criteria. Partial criteria cannot
be combined.
If the above requirements are met, the following are included:
Three no-charge activations (3 x #EPE6) with feature #EPC6 or four no-charge activations
(4 x #EPE7) with feature #EPC7
Thirty IBM i user entitlements (charged)
One IBM i Access Family license with unlimited users (57xx-XW1)
Reduced price on 57xx-WDS and 5733-SOA
Note: The Power 740 does not have an Express Edition for the IBM i feature
code available.
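The ordering rules above amount to a simple eligibility test. The sketch below is only an illustration of those rules as stated in this section; the feature codes are the ones named above, and the function is hypothetical, not part of any IBM configurator.

def qualifies_for_720_express_no_charge(order):
    """Check the no-charge feature requirements for the 6-core and 8-core
    Power 720 Express Editions (#0779), as described above."""
    has_processor = any(fc in order for fc in ("#EPC6", "#EPC7"))   # 6- or 8-core module
    has_ibmi_primary = "#2145" in order                             # IBM i primary OS indicator
    memory_ok = order.get("memory_gb", 0) >= 16                     # 16 GB minimum memory
    # Only one of the disk/SSD/FC/FCoE criteria has to be met in full.
    storage_ok = (order.get("hdd", 0) >= 2 or order.get("ssd", 0) >= 2 or
                  order.get("fc_adapters", 0) >= 2 or order.get("fcoe_adapters", 0) >= 2)
    return has_processor and has_ibmi_primary and memory_ok and storage_ok

example = {"#EPC6": 1, "#2145": 1, "memory_gb": 16, "hdd": 2}
print(qualifies_for_720_express_no_charge(example))   # True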
1.9 IBM i Solution Edition for Power 720 and Power 740
The IBM i Solution Editions for Power 720 and Power 740 are designed to help you take
advantage of the combined experience and expertise of IBM and independent software
vendors (ISVs) in building business value with your IT investments. A qualifying purchase of
software, maintenance, services or training for a participating ISV solution is required when
purchasing an IBM i Solution Edition.
The Power 720 IBM i Solution Edition feature code #4928 supports the 4-core configuration,
and feature code #4927 supports both 6-core and 8-core configurations. The Power 720
Solution Edition includes no-charge features resulting in a lower initial list price for qualifying
clients. Also included is an IBM Service voucher to help speed implementation of the
ISV solution.
The Power 740 IBM i Solution Edition (#4929) supports 4-core to 16-core configurations. The
Power 740 Solution Edition includes no-charge features resulting in a lower initial list price for
qualifying clients. Also included is an IBM Service voucher to help speed implementation of
the ISV solution.
For a list of participating ISVs, a registration form, and additional details, visit the Solution
Edition website:
http://www-03.ibm.com/systems/power/hardware/editions/solutions.html
Feature codes: #4934, #4935, #4936
Note: The IBM i for Business Intelligence solution is not available for the Power 740.
Power 720
1.11.2 Features
The following features present on the current system can be moved to the new system:
The Power 720 can support the following 12X drawers and disk-only drawers:
Console (SDMC) is required to manage the Power 720 and Power 740 servers. Multiple
POWER6 and POWER7 processor-based servers can be supported by a single HMC
or SDMC.
Note: If you do not use an HMC, IVM, or SDMC, the Power 720 and Power 740 run in
full system partition mode, meaning that a single partition owns all the server resources
and only one operating system can be installed.
If an HMC is used to manage the Power 720 and Power 740, the HMC must be a rack-mount
CR3 or later or a deskside C05 or later.
The IBM Power 720 and IBM Power 740 servers require the Licensed Machine Code
Version 7 Revision 740.
Remember: You can download or order the latest HMC code from the Fix Central website:
http://www.ibm.com/support/fixcentral
Existing HMC models 7310 can be upgraded to Licensed Machine Code Version 7 to support
environments that can include IBM POWER5, IBM POWER5+, POWER6, and POWER7
processor-based servers. Licensed Machine Code Version 6 (#0961) is not available for
7042 HMCs.
When IBM Systems Director is used to manage an HMC, or if the HMC manages more than
254 partitions, the HMC must have a minimum of 3 GB RAM and must be a rack-mount CR3
model, or later, or deskside C06 or later.
Future enhancements: At the time of writing, the SDMC is not supported for the
Power 720 (8202-E4C) and Power 740 (8205-E6C) models.
IBM intends to enhance the IBM Systems Director Management Console (SDMC) to
support the Power 720 (8202-E4C) and Power 740 (8205-E6C). IBM also intends for the
current Hardware Management Console (HMC) 7042-CR6 to be upgradable to an IBM
SDMC that supports the Power 720 (8202-E4C) and Power 740 (8205-E6C).
Remember: It is the client's responsibility to ensure that the installation of the drawer in the
preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and
compatible with the drawer requirements for power, cooling, cable management, weight,
and rail security.
Four PDUs can be mounted vertically in the back of the T00 and T42 racks. See Figure 1-4
for the placement of the four vertically mounted PDUs. In the rear of the rack, two additional
PDUs can be installed horizontally in the T00 rack and three in the T42 rack. The four vertical
mounting locations will be filled first in the T00 and T42 racks. Mounting PDUs horizontally
consumes 1U per PDU and reduces the space available for other racked components. When
mounting PDUs horizontally, it is best to use fillers in the EIA units occupied by these PDUs to
facilitate proper air-flow and ventilation in the rack.
For detailed power cord requirements and power cord feature codes, see the IBM Power
Systems Hardware Information Center at the following website:
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp
Note: Ensure that the appropriate power cord feature is configured to support the power
being supplied.
The Intelligent PDU+, base option, 1 EIA Unit, Universal, UTG0247 Connector (#5889), the
Base/Side Mount Universal PDU (#9188) and the optional, additional, Universal PDU (#7188)
and the Intelligent PDU+ options (#7109) support a wide range of country requirements and
electrical power specifications. The #5889 and #7109 PDUs are identical to #9188 and #7188
PDUs but are equipped with one Ethernet port, one console serial port, and one RS232 serial
port for power monitoring.
The PDU receives power through a UTG0247 power line connector. Each PDU requires one
PDU-to-wall power cord. Various power cord features are available for various countries and
applications by varying the PDU-to-wall power cord, which must be ordered separately. Each
power cord provides the unique design characteristics for the specific power requirements. To
match new power requirements and save previous investments, these power cords can be
requested with an initial order of the rack or with a later upgrade of the rack features.
The PDU has 12 client-usable IEC 320-C13 outlets. There are six groups of two outlets fed by
six circuit breakers. Each outlet is rated up to 10 amps, but each group of two outlets is fed
from one 15 amp circuit breaker.
Note: Based on the power cord that is used, the PDU can supply from 4.8 kVA to 19.2 kVA.
The power of all the drawers plugged into the PDU must not exceed the power
cord limitation.
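A configuration can be checked against these limits with simple arithmetic. The following sketch is illustrative only: the per-cord kVA capacity must be taken from the actual power cord feature, and the line voltage used for the kVA estimate is an assumption, not an IBM value.

# PDU limits described above: 12 IEC 320-C13 outlets in six groups of two,
# each outlet rated up to 10 A, each group fed by one 15 A breaker, and a
# total capacity of 4.8 - 19.2 kVA depending on the PDU-to-wall power cord.
def check_pdu_loading(outlet_amps, pdu_kva_limit, line_voltage=208):
    """outlet_amps: list of 12 per-outlet current draws in amps."""
    assert len(outlet_amps) == 12, "the PDU has 12 client-usable outlets"
    problems = []
    for i, amps in enumerate(outlet_amps, start=1):
        if amps > 10:
            problems.append(f"outlet {i} exceeds its 10 A rating")
    for group in range(6):
        if sum(outlet_amps[2 * group:2 * group + 2]) > 15:
            problems.append(f"breaker group {group + 1} exceeds its 15 A breaker")
    total_kva = sum(outlet_amps) * line_voltage / 1000.0
    if total_kva > pdu_kva_limit:
        problems.append(f"total load {total_kva:.1f} kVA exceeds the power cord limit")
    return problems or ["configuration is within the stated limits"]

print(check_pdu_loading([4.2] * 12, pdu_kva_limit=9.6))
# ['total load 10.5 kVA exceeds the power cord limit']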
The Universal PDUs are compatible with previous models.
Note: Each system drawer to be mounted in the rack requires two power cords, which are
not included in the base order. For maximum availability, it is highly desirable to connect
power cords from the same system to two separate PDUs in the rack, and to connect each
PDU to independent power sources.
To attach a 7216 Multi-Media Enclosure to the Power 720 and Power 740, consider the
following cabling procedures:
Attachment by an SAS adapter
A PCIe Dual-X4 SAS adapter (#5901) or a PCIe LP 2-x4-port SAS Adapter 3 Gb (#5278)
must be installed in the Power 720 and Power 740 server in order to attach to a 7216
Model 1U2 Multi-Media Storage Enclosure. Attaching a 7216 to a Power 720 and
Power 740 through the integrated SAS adapter is not supported.
For each SAS tape drive and DVD-RAM drive feature installed in the 7216, the appropriate
external SAS cable will be included.
An optional Quad External SAS cable is available by specifying (#5544) with each 7216
order. The Quad External Cable allows up to four 7216 SAS tape or DVD-RAM features to
attach to a single System SAS adapter.
Up to two 7216 storage enclosure SAS features can be attached per PCIe Dual-X4 SAS
adapter (#5901) or the PCIe LP 2-x4-port SAS Adapter 3 Gb (#5278).
Attachment by a USB adapter
The Removable RDX HDD Docking Station features on 7216 only support the USB cable
that is provided as part of the feature code. Additional USB hubs, add-on USB cables, or
USB cable extenders are not supported.
For each RDX Docking Station feature installed in the 7216, the appropriate external USB
cable will be included. The 7216 RDX Docking Station feature can be connected to the
external, integrated USB ports on the Power 710 and Power 730 or to the USB ports on
the 4-Port USB PCI Express Adapter (#2728).
The 7216 DAT320 USB tape drive or RDX Docking Station features can be connected to
the external, integrated USB ports on the Power 710 and Power 730.
The two drive slots of the 7216 enclosure can hold the following drive combinations:
One tape drive (DAT160 SAS or Half-high LTO Ultrium 5 SAS) with second bay empty
Two tape drives (DAT160 SAS or Half-high LTO Ultrium 5 SAS) in any combination
One tape drive (DAT160 SAS or Half-high LTO Ultrium 5 SAS) and one DVD-RAM SAS
drive sled with one or two DVD-RAM SAS drives
Up to four DVD-RAM drives
One tape drive (DAT160 SAS or Half-high LTO Ultrium 5 SAS) in one bay, and one RDX
Removable HDD Docking Station in the other drive bay
One RDX Removable HDD Docking Station and one DVD-RAM SAS drive sled with one
or two DVD-RAM SAS drives in the right bay
Two RDX Removable HDD Docking Stations
Figure 1-5 shows the 7216 Multi-Media Enclosure.
The vertical distance between the mounting holes must consist of sets of three holes
spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.67 mm
(0.5 in.) on center, making each three-hole set of vertical hole spacing 44.45 mm (1.75 in.)
apart on center. Rail-mounting holes must be 7.1 mm ± 0.1 mm (0.28 in. ± 0.004 in.) in
diameter. Figure 1-7 shows the top front specification dimensions.
Chapter 2. Architecture and technical overview
Figure 2-1 shows the logical system diagram of the Power 720: a single POWER7 chip (4, 6,
or 8 cores) with its memory controller and two memory riser cards of buffered DIMMs; the
service processor with two system ports, two HMC ports, two SPCN ports, and the VPD chip;
P7-IOC I/O controllers (one in the optional PCIe Gen2 riser with its PCIe Gen2 x8 low-profile
slots); a USB controller with four USB ports; the TPMD; and the integrated SAS controller
(RAID 0, 1, and 10) driving six disk bays and the DVD.
Figure 2-2 shows the logical system diagram of the Power 740: two POWER7 chips (4, 6, or
8 cores each) connected by 2.9 Gbps links, each with its own memory controller and memory
riser cards of buffered DIMMs; the service processor with two system ports, two HMC ports,
two SPCN ports, and the VPD chip; GX++ slot 1; four USB ports; the TPMD; and the
integrated SAS controller (RAID 0, 1, and 10) driving six disk bays and the DVD.
The superscalar POWER7 processor design also provides a variety of other capabilities:
Binary compatibility with the prior generation of POWER processors
Support for PowerVM virtualization capabilities, including PowerVM Live Partition Mobility
to and from POWER6 and POWER6+ processor-based systems
Figure 2-3 shows the POWER7 processor die layout with the major areas identified:
Processor cores
L2 cache
L3 cache and chip interconnection
Symmetric Multi Processing (SMP) links
Memory controllers
POWER7 processor characteristics:
Die size: 567 mm2
Fabrication technology: 45 nm lithography, copper interconnect, Silicon-on-Insulator, eDRAM
Processor cores per chip: 4, 6, or 8
Maximum execution threads (core/chip): 4/32
L2 cache (core/chip): 256 KB/2 MB
L3 cache (core/chip): 4 MB/32 MB
Memory controllers: 1 or 2
Maximizing throughput
SMT4 mode enables the POWER7 processor to maximize the throughput of the processor
core by offering an increase in core efficiency. SMT4 mode is the latest step in an evolution of
multithreading technologies introduced by IBM. Figure 2-4 shows the evolution of
simultaneous multithreading.
Figure 2-4 Multi-threading evolution (utilization of the FX0, FX1, FP0, FP1, LS0, LS1, BRX,
and CRL execution units by threads 0 through 3, from single-threaded out-of-order execution
in 1995 through to SMT4)
The various SMT modes offered by the POWER7 processor allow flexibility, enabling users to
select the threading mode that meets an aggregation of objectives such as performance,
throughput, energy use, and workload enablement.
Intelligent Threads
The POWER7 processor features Intelligent Threads that can vary based on the workload
demand. The system either automatically selects (or the system administrator can manually
select) whether a workload benefits from dedicating as much capability as possible to a
single thread of work, or if the workload benefits more from having capability spread across
two or four threads of work. With more threads, the POWER7 processor can deliver more
total capacity as more tasks are accomplished in parallel. With fewer threads, those
workloads that need very fast individual tasks can get the performance that they need for
maximum benefit.
A figure here illustrates the POWER7 memory subsystem: the processor cores and two
on-chip memory controllers connect to DDR3 DRAMs through advanced buffer ASIC chips.
Figure 2-6 outlines the physical packaging options that are supported with POWER7
processors.
(The packaging options differ in whether one or two memory controllers are active and
whether only local broadcast SMP links, or also global broadcast SMP links, are active.)
POWER7 processors have the unique ability to optimize to various workload types. For
example, database workloads typically benefit from very fast processors that handle high
transaction rates at high speeds. Web workloads typically benefit more from processors with
many threads that allow the breaking down of web requests into many parts and handle them
in parallel. POWER7 processors uniquely have the ability to provide leadership performance
in either case.
TurboCore mode
Users can choose to run selected servers in TurboCore mode. This mode uses four cores per
POWER7 processor chip with access to the entire 32 MB of L3 cache (8 MB per core) and at
a faster processor core frequency, which delivers higher performance per core, and might
save on software costs for those applications that are licensed per core.
Note: TurboCore is available on the Power 780 and Power 795.
MaxCore mode
MaxCore mode is for workloads that benefit from a higher number of cores and threads
handling multiple tasks simultaneously, taking advantage of increased parallelism. MaxCore
mode provides up to eight cores and up to 32 threads per POWER7 processor.
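The trade-off between the two modes is easy to quantify from the figures above: TurboCore halves the active cores in exchange for more L3 cache per core and a higher core frequency, while MaxCore keeps all eight cores with four threads each. A minimal sketch of that arithmetic (the thread counts simply multiply the active cores by the SMT4 figure given earlier):

L3_PER_CHIP_MB = 32
THREADS_PER_CORE = 4   # SMT4

def mode_summary(active_cores):
    return {
        "cores": active_cores,
        "l3_mb_per_core": L3_PER_CHIP_MB / active_cores,
        "threads": active_cores * THREADS_PER_CORE,
    }

print(mode_summary(4))   # TurboCore: {'cores': 4, 'l3_mb_per_core': 8.0, 'threads': 16}
print(mode_summary(8))   # MaxCore:   {'cores': 8, 'l3_mb_per_core': 4.0, 'threads': 32}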
Innovation using eDRAM on the POWER7 processor die is significant for these reasons:
Latency improvement
A six-to-one latency improvement occurs by moving the L3 cache on-chip compared to L3
accesses on an external (on-ceramic) ASIC.
Bandwidth improvement
A 2x bandwidth improvement occurs with on-chip interconnect. Frequency and bus sizes
are increased to and from each core.
Comparison of the POWER7 and POWER6 processors:

Characteristic                 POWER7            POWER6
Technology                     45 nm             65 nm
Die size                       567 mm2           341 mm2
Maximum SMT threads per core   4 threads         2 threads
Maximum frequency              4.25 GHz          5 GHz
L2 cache                       256 KB per core   4 MB per core
Memory support                 DDR3              DDR2
I/O bus                        Two GX++          One GX++
Sleep and nap modes (a)        Both              Nap only

a. For more information about sleep and nap modes, see 2.15.1, IBM EnergyScale technology
on page 97.
Figure 2-8 POWER7 processor card shown in a Power 740 with processor module

The following table summarizes the POWER7 processor options for the Power 720 system.

Feature   Cores per POWER7 processor   Frequency (GHz)   Min/Max cores per system   Min/Max processor modules
#EPC5     4                            3.0               4/4                        1/1
#EPC6     6                            3.0               6/6                        1/1
#EPC7     8                            3.0               8/8                        1/1
Table 2-4 summarizes the POWER7 processor options for the Power 740 system.

Table 2-4 Summary of POWER7 processor options for the Power 740 system
Feature   Cores per POWER7 processor   Frequency (GHz)   Processor activation   Min/Max cores per system   Min/Max processor modules
#EPC9     4                            3.3               #EPE9                  4/8                        1/2
#EPC8     4                            3.7               #EPE8                  4/8                        1/2
#EPCA     6                            3.7               #EPEA                  6/12                       1/2
#EPCB     8                            3.55              #EPEB                  16/16                      2/2
The Power 740 is a two-socket system supporting up to two POWER7 processor modules.
The server supports a maximum of 32 DDR3 DIMM slots, with eight DIMM slots included in
the base configuration and 24 DIMM slots available with three optional memory riser cards.
Memory features (two memory DIMMs per feature) of 4 GB, 8 GB, 16 GB, and 32 GB are
supported and run at a speed of 1066 MHz. A system with three optional memory riser cards
installed has a maximum memory of 512 GB.
The minimum memory capacity for the Power 720 and Power 740 systems is 4 GB (2 x 2 GB
DIMMs). Table 2-5 shows the maximum memory supported on the Power 720.
Table 2-5 Power 720 maximum memory
Processor cores   One memory riser card   Two memory riser cards
4-core            32 GB                   64 GB
6-core            128 GB                  256 GB
8-core            128 GB                  256 GB
Note: A system with the 4-core processor module (#EPC5) does not support the 16 GB (#EM16) and
32 GB (#EM32) memory features.
Table 2-6 shows the maximum memory supported on the Power 740.

Table 2-6 Power 740 maximum memory
Processor cores                          One memory    Two memory    Three memory    Four memory
                                         riser card    riser cards   riser cards     riser cards
1 x 4-core, 1 x 6-core, or 1 x 8-core    128 GB        256 GB        Not available   Not available
2 x 4-core, 2 x 6-core, or 2 x 8-core    128 GB        256 GB        384 GB          512 GB
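The maximums in Table 2-5 and Table 2-6 follow from the DIMM slot counts and the largest supported DIMM: each memory riser card carries eight DIMM slots, and the largest memory feature uses 16 GB DIMMs, while the 4-core Power 720 is limited to the 4 GB DIMMs of #EM08. A quick check of that arithmetic:

# Maximum memory = riser cards x 8 DIMM slots per card x largest supported DIMM.
def max_memory_gb(riser_cards, largest_dimm_gb):
    return riser_cards * 8 * largest_dimm_gb

# Power 740, four riser cards, 16 GB DIMMs (2 x 16 GB per #EM32 feature):
print(max_memory_gb(4, 16))   # 512 -- matches Table 2-6
# Power 740 or Power 720 with two riser cards and 16 GB DIMMs:
print(max_memory_gb(2, 16))   # 256
# 4-core Power 720, which supports at most 4 GB DIMMs (#EM08):
print(max_memory_gb(2, 4))    # 64  -- matches Table 2-5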
Figure 2-9 shows the logical memory DIMM topology for the POWER7 processor card.
Figure 2-10 shows the memory location codes and how the memory riser cards are divided
into quads, with each quad attached to a memory buffer.
A figure here shows the memory DIMM plugging order by pairs (pair 1 through pair 16),
identified by slot location code on the memory riser cards (P1-C15, P1-C16, P1-C17, and
P1-C18) attached to the two POWER7 modules (P1-C10 and P1-C11).
The following bandwidth estimates apply to the Power 720 (3.00 GHz processor):
L1 (data) cache: 144 GBps
L2 cache: 144 GBps
L3 cache: 96 GBps
System memory: 68.22 GBps

Table 2-8 Power 740 processor, memory, and I/O bandwidth estimates (3.55 GHz processor)
L1 (data) cache: 170.4 GBps
L2 cache: 170.4 GBps
L3 cache: 113.6 GBps

The I/O bus bandwidths listed are 10 GBps simplex (20 GBps duplex) per bus, up to a total of
30 GBps simplex (60 GBps duplex).
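The cache bandwidth estimates scale directly with the processor frequency. Assuming per-cycle transfer widths of 48 bytes for the L1 data and L2 caches and 32 bytes for the L3 cache (widths inferred here only because they reproduce the published numbers, not quoted from the tables above), the figures can be recomputed as follows:

# Bandwidth estimate = bytes transferred per cycle x processor frequency (GHz -> GBps).
def bandwidth_gbps(bytes_per_cycle, frequency_ghz):
    return bytes_per_cycle * frequency_ghz

for model, ghz in (("Power 720 (3.00 GHz)", 3.00), ("Power 740 (3.55 GHz)", 3.55)):
    print(f"{model}: L1/L2 {bandwidth_gbps(48, ghz):.1f} GBps, "
          f"L3 {bandwidth_gbps(32, ghz):.1f} GBps")
# Power 720 (3.00 GHz): L1/L2 144.0 GBps, L3 96.0 GBps
# Power 740 (3.55 GHz): L1/L2 170.4 GBps, L3 113.6 GBps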
Slot      Description    Location code   PCI host bridge (PHB)         Slot size
Slot 1    PCIe Gen2 x8   P1-C2           P7IOC PCIe PHB5               Full height, short
Slot 2    PCIe Gen2 x8   P1-C3           P7IOC PCIe PHB4               Full height, short
Slot 3    PCIe Gen2 x8   P1-C4           P7IOC PCIe PHB3               Full height, short
Slot 4    PCIe Gen2 x8   P1-C5           P7IOC PCIe PHB2               Full height, short
Slot 5    PCIe Gen2 x8   P1-C6           P7IOC PCIe PHB1               Full height, short
Slot 6    PCIe Gen2 x4   P1-C7           P7IOC multiplexer PCIe PHB0   Full height, short
Slot 7    PCIe Gen2 x8   P1-C1-C1        P7IOC PCIe PHB1               Low profile, short
Slot 8    PCIe Gen2 x8   P1-C1-C2        P7IOC PCIe PHB4               Low profile, short
Slot 9    PCIe Gen2 x8   P1-C1-C3        P7IOC PCIe PHB2               Low profile, short
Slot 10   PCIe Gen2 x8   P1-C1-C4        P7IOC PCIe PHB3               Low profile, short
Remember: Full-height PCIe adapters and low-profile PCIe adapters are not
interchangeable. Even if the card was designed with low-profile dimensions, the tail stock
at the end of the adapter is specific to either low-profile or full-height PCIe slots.
The Power 720 and Power 740 system enclosure is equipped with five PCIe x8 Gen2
full-height slots. There is a sixth PCIe x4 slot dedicated to the PCIe Ethernet card that is
standard with the base system. An optional PCIe Gen2 expansion feature is also available
that provides an additional four PCIe x8 low profile slots.
IBM offers only PCIe low-profile adapter options for the Power 720 and Power 740 systems.
All adapters support Extended Error Handling (EEH). PCIe adapters use a different type of
slot than PCI and PCI-X adapters. If you attempt to force an adapter into the wrong type of
slot, you might damage the adapter or the slot.
Note: IBM i IOP adapters are not supported in the Power 720 and Power 740 systems.
Many of the full-height card features are available in low-profile format. For example, the
#5273 8 Gb dual port Fibre channel adapter is the low-profile adapter equivalent of the #5735
adapter full height. They have equivalent functional characteristics.
Table 2-11 provides a list of low-profile adapter cards and their full-height equivalents.

Table 2-11 Equivalent adapter cards
Low profile feature code (CCIN)   Full height feature code (CCIN)
#2053 (57CD)                      #2054 or #2055 (57CD)
#5269 (5269)                      #5748 (5748)
#5270 (2B3B)                      #5708 (2BCB)
#5271 (5271)                      #5717 (5717)
#5272 (5272)                      #5732 (5732)
#5273 (577D)                      #5735 (577D)
#5274 (5768)                      #5768 (5768)
#5275 (5275)                      #5769 (5769)
#5276 (5774)                      #5774 (5774)
#5277 (57D2)                      #5785 (57D2)
#5278 (57B3)                      #5901 (57B3)
Before adding or rearranging adapters, you can use the System Planning Tool to validate the
new adapter configuration. See the System Planning Tool website:
http://www.ibm.com/systems/support/tools/systemplanningtool/
If you are installing a new feature, ensure that you have the software required to support the
new feature and determine whether there are any existing update prerequisites to install. To
do this, use the IBM Prerequisite website:
https://www-912.ibm.com/e_dir/eServerPreReq.nsf
The following sections discuss the supported adapters and provide tables of orderable feature
numbers. The tables indicate operating system support, AIX (A), IBM i (i), and Linux (L), for
each of the adapters.
Table 2-12 Available LAN adapters
Feature code   CCIN   Slot    Size                 OS support
#5260          576F   PCIe    Low profile, short   A, i, L
#5271          5271   PCIe    Low profile, short   A, L
#5272          5272   PCIe    Low profile, short   A, L
#5274          5768   PCIe    Low profile, short   A, i, L
#5275          5275   PCIe    Low profile, short   A, L
#5279          2B43   PCIe    Low profile
#5280          2B44   PCIe    Low profile
#5284          5287   PCIe    Low profile, short   A, L
#5286          5288   PCIe    Low profile, short   A, L
#5287          5287   PCIe    Full height, short   A, L
#5288          5288   PCIe    Full height, short   A, L
#5706          5706   PCI-X   Full height, short   A, i, L
#5717          5271   PCIe    Full height, short   A, L
#5732          5732   PCIe    Full height, short   A, L
#5740                 PCI-X   Full height, short   A, L
#5744          2B44   PCIe    Full height
#5745          2B43   PCIe    Full height
#5767          5767   PCIe    Full height, short   A, i, L
#5768          5768   PCIe    Full height, short   A, i, L
#5769          5769   PCIe    Full height, short   A, L
#5772          576E   PCIe    Full height, short   A, i, L
#5899          576F   PCIe    Full height          A, i, L
#9055 (a)      5767   PCIe    Full height, short   A, i, L
#EC27          EC27   PCIe    Low profile          A, L
#EC28          EC27   PCIe    Full height          A, L
#EC29          EC29   PCIe    Low profile          A, L
#EC30          EC29   PCIe    Full height          A, L

a. This adapter is required in the Power 720 and Power 740 systems.
Note: For IBM i OS, Table 2-12 on page 56 shows the native support of the card. All
Ethernet cards can be supported by IBM i through the VIOS server.
Feature code   CCIN   Slot   Size                 OS support
#5269          5269   PCIe   Low profile, short   A, L
#5748 (a)      5748   PCIe   Full height, short   A, L
Table 2-14 Available SAS adapters
Feature code   CCIN   Slot    Size                 OS support
#5278          57B3   PCIe    Low profile, short   A, i, L
#5805 (b)(c)   574E   PCIe    Full height, short   A, i, L
#5900 (a)      572A   PCI-X   Full height, short   A, L
#5901 (b)      57B3   PCIe    Full height, short   A, i, L
#5908          575C   PCI-X   Full height, short   A, i, L
#5912          572A   PCI-X   Full height, short   A, i, L
#5913 (c)      57B5   PCIe    Full height, short   A, i, L
#ESA1                 PCIe    Full height          A, i, L
For detailed information about SAS cabling of external storage, see the IBM Power Systems
Hardware Information Center:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp
Table 2-15 compares parallel SCSI with SAS; the comparison covers architecture,
performance, scalability (parallel SCSI is limited to 15 drives), compatibility, hot pluggability,
device identification, and termination.
Figure 2-13 The PCIe RAID and SSD SAS Adapter and 177 GB SSD modules
To connect to external SCSI or SAS devices, the adapters listed in Table 2-14 on page 58
are available.
Table 2-16 Available PCIe RAID and SSD SAS adapters
Feature code   CCIN   Slot   Size                               OS support
#2053 (a)      57CD   PCIe   Low profile, double wide, short    A, i, L
#2054          57CD   PCIe   Double wide, short                 A, i, L
#2055 (b)      57CD   PCIe                                      A, i, L
a. Only supported in the Rack-mount configuration. VIOS attachment requires Version 2.2
or later.
b. Only supported in a #5802/#5877 PCIe I/O drawer. Not supported in the Power 720 and
Power 740 CEC. If used with the Virtual I/O server, the Virtual I/O server Version 2.2 or later
is required.
Note: For a Power 720 tower configuration, it is possible to place PCIe-based SSDs in a
#5802/#5877 PCIe I/O drawer.
The 177 GB SSD Module with Enterprise Multi-level Cell (eMLC) uses a new enterprise-class
MLC flash technology, which provides enhanced durability, capacity, and performance. One,
two, or four modules can be plugged onto a PCIe RAID and SSD SAS adapter, providing up
to 708 GB of SSD capacity on one PCIe adapter.
Because the SSD modules are mounted on the adapter, to service either the adapter or one
of the modules, the entire adapter must be removed from the system. Although the adapter
can be hot plugged when installed in a #5802 or #5877 I/O drawer, removing the adapter also
removes all SSD modules. So, to be able to hot plug the adapter and maintain data
availability, two adapters must be installed and the data mirrored across the adapters.
Under AIX and Linux, the 177 GB modules can be reformatted as JBOD disks, providing
200 GB of available disk space. This removes RAID error correcting information, so it is best
to mirror the data using operating system tools in order to prevent data loss in case of failure.
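The capacity figures above are simple multiples of the 177 GB module. The following sketch (illustrative only) computes the usable capacity per adapter and the effective capacity when identical data is mirrored across two adapters for hot-plug serviceability, as described above:

MODULE_GB_RAID = 177   # capacity per eMLC module with RAID formatting
MODULE_GB_JBOD = 200   # capacity per module when reformatted as JBOD (AIX and Linux)

def adapter_capacity_gb(modules, jbod=False):
    """One, two, or four modules can be plugged onto one PCIe RAID and SSD SAS adapter."""
    assert modules in (1, 2, 4)
    return modules * (MODULE_GB_JBOD if jbod else MODULE_GB_RAID)

print(adapter_capacity_gb(4))             # 708 GB on one fully populated adapter
# Mirroring across two adapters keeps the data available while one adapter
# (and all of its modules) is removed, at half the combined raw capacity:
print(2 * adapter_capacity_gb(4) // 2)    # 708 GB usable from 1416 GB raw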
2.8.7 iSCSI
iSCSI adapters in Power Systems provide the advantage of increased bandwidth through
hardware support of the iSCSI protocol. The 1 Gigabit iSCSI TOE (TCP/IP Offload Engine)
PCI-X adapters support hardware encapsulation of SCSI commands and data into TCP, and
transports them over the Ethernet using IP packets. The adapter operates as an iSCSI TOE.
This offload function eliminates host protocol processing and reduces CPU interrupts. The
adapter uses a small form factor LC type fiber optic connector or a copper RJ45 connector.
Table 2-17 lists the orderable iSCSI adapters.
Table 2-17 Available iSCSI adapters
Feature code   CCIN   Slot    Size                 OS support
#5713          573B   PCI-X   Full height, short   A, i, L
Table 2-18 Available Fibre Channel adapters
Feature code   CCIN         Slot    Size                 OS support
#5273                       PCIe    Low profile, short   A, i, L
#5276                       PCIe    Low profile, short   A, i, L
#5729 (a)                   PCIe    Full height, short   A, L (b)
#5735 (c)      577D         PCIe    Full height, short   A, i, L
#5749          576B         PCI-X   Full height, short
#5759          1910, 5759   PCI-X   Full height, short   A, L
#5774          5774         PCIe    Full height, short   A, i, L
#EN0Y          EN0Y         PCIe    Low profile          A, i, L
a. A Gen2 PCIe slot is required to provide the bandwidth for all four ports to operate at full speed.
b. The usage within IBM i is not supported. Instead, use it with the Virtual I/O server.
c. At the time of writing the IBM i device driver does not support this card at PCIe slot 6, P1-C7.
Note: The usage of NPIV through the Virtual I/O server requires 8 Gb Fibre Channel
adapters such as the #5273, #5735, and #5729.
Figure 2-14 Comparison between existing FC and network connection and FCoE connection
Table 2-19 lists the available Fibre Channel over Ethernet Adapters. They are
high-performance Converged Network Adapters (CNA) using SR optics. Each port can
provide Network Interface Card (NIC) traffic and Fibre Channel functions simultaneously.
Table 2-19 Available FCoE adapters
Feature code | CCIN | Slot | Size | OS support
#5708 | 2B3B | PCIe | Full height, short | A, L
#5270 | 2B3B | PCIe | Low profile, short | A, L
For more information about FCoE, see An Introduction to Fibre Channel over Ethernet, and
Fibre Channel over Convergence Enhanced Ethernet, REDP-4493.
InfiniBand provides high-bandwidth, low-latency communication between devices. Each link can
support multiple transport services for reliability and multiple prioritized virtual communication channels.
IBM offers the GX++ 12X DDR Adapter (#EJ04) that plugs into the system backplane
(GX++ slot). One GX++ slot is available on the Power 720. One or two GX++ slots are
available on the Power 740, if used with one or two processor cards. Detailed information can
be found in 2.6, System bus on page 51.
By attaching a 12X to 4X converter cable (#1828, #1841, or #1842) to #EJ04, a supported IB
switch can be attached. AIX, IBM i, and Linux operating systems are supported.
A new PCIe Gen2 LP 2-Port 4X InfiniBand quad data rate (QDR) Adapter 40 Gb (#5283) is
available. The PCIe Gen2 low-profile adapter provides two high-speed 4X InfiniBand
connections for IP over IB usage in the Power 720 and Power 740. On the Power 720 and
Power 740, this adapter is supported in PCIe Gen2 slots. The following types of QDR IB
cables are provided for attachment to the QDR adapter and its QSFP (Quad Small
Form-Factor Pluggable) connectors:
Copper cables provide 1-meter, 3-meter, and 5-meter lengths (#3287, #3288, and #3289).
Optical cables provide 10-meter and 30-meter lengths (#3290 and #3293). These are
QSFP/QSFP cables that also attach to QSFP ports on the switch.
The feature #5283 QDR adapter attaches to the QLogic QDR switches. These switches can
be ordered from IBM using the following machine type and model numbering:
7874-036 is a QLogic 12200 36-port, 40 Gbps InfiniBand Switch that cost-effectively links
workgroup resources into a cluster.
7874-072 is a QLogic 12800-040 72-port, 40 Gbps InfiniBand switch that links resources
using a scalable, low-latency fabric, supporting up to four 18-port QDR Leaf Modules.
7874-324 is a QLogic 12800-180 324-port 40 Gbps InfiniBand switch designed to maintain
larger clusters, supporting up to eighteen 18-port QDR Leaf Modules.
Note: The feature #5283 adapter has two 40 Gb ports, and a PCIe Gen2 slot has the
bandwidth to support one port. This means that the benefit of two ports is redundancy
rather than additional performance.
Table 2-20 lists the available InfiniBand adapters.
Table 2-20 Available InfiniBand adapters
Feature code | CCIN | Slot | Size | OS support
#5283 | 58E2 | PCIe | Low profile, short | A, L
#5285 | 58E2 | PCIe | Full height | A, L
#EJ04 | | GX++ | | A, L
For more information about InfiniBand, see HPC Clusters Using InfiniBand on IBM Power
Systems Servers, SG24-7767.
Feature code | CCIN | Slot | Size | OS support
#5277 | 57D2 | PCIe | Low profile, short | A, L
#5289 | 57D4 | PCIe | Full height, short | A, L
#5290 | 57D4 | PCIe | Low profile, short | A, L
#5785 | 57D2 | PCIe | Full height, short | A, L
Figure 2-15 details an internal topology overview for the #5618 backplane.
Figure 2-16 shows an internal topology overview for the #EJ01 backplane.
Drive protection
HDD/SSD drive protection can be provided by AIX, IBM i, and Linux software or by the
HDD/SSD hardware controllers. Mirroring of drives is provided by AIX, IBM i, and Linux
software. In addition, AIX/Linux supports controllers providing RAID 0, 1, 5, 6, or 10. IBM i
integrated storage management already provides striping, so IBM i also supports controllers
providing RAID 5 or 6. To further augment HDD/SSD protection, hot spare capability can be
used for protected drives. Specific hot spare prerequisites apply.
An integrated SAS controller offering RAID 0, 1, and 10 support is provided in the Power 720
and Power 740 system unit.
It can be optionally augmented by RAID 5 and RAID 6 capability when storage backplane
#EJ01 is added to the configuration. In addition to these protection options, mirroring of drives
by the operating system is supported. AIX or Linux supports all of these options. IBM i does
not use JBOD, and uses embedded functions instead of RAID 10, but does leverage the
RAID 5 or 6 function of the integrated controllers.
Other disk/SSD controllers are available as PCIe SAS adapters. PCI controllers both with and
without write cache are supported. RAID 5 and 6 on controllers with write cache are also
supported, with one exception: the PCIe RAID and SSD SAS adapter has no write cache,
but it supports RAID 5 and RAID 6.
Table 2-22 lists the RAID support by backplane.
Table 2-22 RAID configurations for the Power 720 and Power 740
Feature code | Split backplane | JBOD | RAID 0, 1, and 10 | RAID 5 and 6 | External SAS port
#5618 | No | Yes | Yes | No | No
#5618 with split backplane (#EJ02) | Yes | Yes | Yes | No | No
#EJ01 | No | No | Yes | Yes | Yes
AIX and Linux can use disk drives formatted with 512-byte blocks when being mirrored by the
operating system. These disk drives must be reformatted to 528-byte sectors when used in
RAID arrays. Although a small percentage of the drive's capacity is lost, additional data
protection such as ECC and bad block detection is gained in this reformatting. For example, a
300 GB disk drive, when reformatted, provides around 283 GB. IBM i always uses drives
formatted to 528 bytes. Solid state disks are formatted to 528 bytes.
Power 720 and Power 740 support a dual write cache RAID feature that consists of an
auxiliary write cache for the RAID card and the optional RAID enablement.
Figure 2-17 Internal topology overview for #5618 DASD backplane with split backplane feature #EJ02
Feature code | DASD | PCI slots
#5796 | None | 6 x PCI-X
#5802 | 18 SFF disk bays | 10 x PCIe
#5877 | None | 10 x PCIe
#5887 | 24 SFF SAS disk bays | None
#5886 | 12 SAS disk bays | None
Each processor card feeds one GX++ adapter slot. On the Power 720, there can be one
GX++ slot available, and on the Power 740, there can be one or two, depending on whether
one or two processor modules are installed.
Note: The attachment of external I/O drawers is not supported on the 4-core Power 720.
Figure 2-18 PCI-X DDR 12X Expansion Drawer rear side
12X cables are available in lengths of 0.6 meter (#1861), 1.5 meters (#1862), 3.0 meters
(#1865), and 8 meters (#1864).
Figure 2-19 shows the front view of the 12X I/O Drawer PCIe (#5802). The front of the drawer
provides access to the disk drives, the service card, the port cards, and the power cables.
Figure 2-19 Front view of the 12X I/O Drawer PCIe
Figure 2-20 shows the rear view of the 12X I/O Drawer PCIe (#5802). The rear of the drawer
provides access to the 10 PCIe slots, the SAS connectors, the 12X connectors, the mode
switch, and the SPCN connectors.
Figure 2-20 Rear view of the 12X I/O Drawer PCIe
Note: Mode change using the physical mode switch requires power-off/on of the drawer.
Figure 2-20 on page 71 indicates the Mode Switch in the rear view of the #5802 I/O Drawer.
Each disk bay set can be attached to its own controller or adapter. The #5802 PCIe 12X I/O
Drawer has four SAS connections to drive bays. It connects to PCIe SAS adapters or
controllers on the host system.
Figure 2-21 shows the configuration rule of disk bay partitioning in the #5802 PCIe 12X I/O
Drawer. There is no specific feature code for mode switch setting.
Note: The IBM System Planning Tool supports disk bay partitioning. Also, the IBM
configuration tool accepts this configuration from the IBM System Planning Tool and passes
it through IBM manufacturing using the Customer Specified Placement (CSP) option.
Figure 2-21 Disk bay partitioning in #5802 PCIe 12X I/O drawer
The SAS ports associated with the mode selector switch map to the disk bays as shown in
Table 2-24.
Table 2-24 SAS connection mappings
Location code | Mappings | Number of bays
P4-T1 | P3-D1 to P3-D5 | 5 bays
P4-T2 | P3-D6 to P3-D9 | 4 bays
P4-T3 | P3-D10 to P3-D14 | 5 bays
P4-T4 | P3-D15 to P3-D18 | 4 bays
Figure 2-22 and Figure 2-23 provide the location codes for the front and rear views of the
#5802 I/O drawer.
2.10.4 12X I/O drawer PCIe and PCI-DDR 12X Expansion Drawer 12X cabling
I/O drawers are connected to the adapters in the CEC enclosure with data transfer cables:
12X DDR cables for the #5802 and #5877 I/O drawers
12X SDR and/or DDR cables for the #5796 I/O drawers
The first 12X I/O Drawer that is attached in any I/O drawer loop requires two data transfer
cables. Each additional drawer, up to the maximum allowed in the loop, requires one
additional data transfer cable. Note the following information:
A 12X I/O loop starts at a CEC bus adapter port 0 and attaches to port 0 of an I/O drawer.
The I/O drawer attaches from port 1 of the current unit to port 0 of the next I/O drawer.
Port 1 of the last I/O drawer on the 12X I/O loop connects to port 1 of the same CEC bus
adapter to complete the loop.
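The loop rules above can be expressed as a simple connection list. The following Python sketch is a minimal illustration of those rules only (the drawer names and port labels are assumptions, and it is not an IBM configuration tool):

def twelve_x_loop_connections(num_drawers):
    """Return the 12X cable connections for one I/O loop.

    Follows the rules above: the loop starts at CEC bus adapter port 0,
    chains drawer port 1 to the next drawer's port 0, and closes from the
    last drawer's port 1 back to CEC port 1.
    """
    if num_drawers < 1:
        return []
    connections = [("CEC GX++ adapter port 0", "drawer 1 port 0")]
    for i in range(1, num_drawers):
        connections.append((f"drawer {i} port 1", f"drawer {i + 1} port 0"))
    connections.append((f"drawer {num_drawers} port 1", "CEC GX++ adapter port 1"))
    return connections

# Example: a loop with two drawers needs three cables, matching the rule that
# the first drawer requires two data transfer cables and each additional
# drawer requires one more.
for cable, (a, b) in enumerate(twelve_x_loop_connections(2), start=1):
    print(f"cable {cable}: {a} <-> {b}")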
Figure 2-24 shows typical 12X I/O loop port connections.
Table 2-25 shows various 12X cables to satisfy the various length requirements.
Table 2-25 12X connection cables
Feature code | Description
#1861 | 0.6-meter 12X cable
#1862 | 1.5-meter 12X cable
#1865 | 3.0-meter 12X cable
#1864 | 8-meter 12X cable
Figure 2-25 12X I/O Drawer configuration for a Power 720 with one GX++ slot
The configuration rules are the same for the Power 740. However, because the Power 740
can have up to two GX++ slots, you have various options available to attach 12X I/O drawers.
Figure 2-26 shows four of them, but there are more options available.
Figure 2-26 12X I/O Drawer configurations for a Power 740 with two GX++ slots
2.10.5 12X I/O Drawer PCIe and PCI-DDR 12X Expansion Drawer SPCN cabling
The System Power Control Network (SPCN) is used to control and monitor the status of
power and cooling within the I/O drawer.
SPCN cables connect all ac powered expansion units (Figure 2-27):
1. Start at SPCN 0 (T1) of the CEC unit to J15 (T1) of the first expansion unit.
2. Cable all units from J16 (T2) of the previous unit to J15 (T1) of the next unit.
3. To complete the cabling loop, from J16 (T2) of the final expansion unit, connect to the
CEC, SPCN 1 (T2).
4. Ensure that a complete loop exists from the CEC, through all attached expansions and
back to the CEC drawer.
The SPCN cables are available as feature codes #6006, #6007, #6008, and #6029.
IBM 7031 TotalStorage EXP24 Ultra320 SCSI Expandable Storage Disk Enclosure (no
longer orderable)
IBM System Storage
The following sections describe the EXP 12S Expansion Drawer, the EXP24S SFF
Gen2-bay Drawer, the 12X I/O Drawer PCIe with SFF Disks drawer, and IBM System Storage
in more detail.
With proper cabling and configuration, multiple wide ports are used to provide redundant
paths to each dual-port SAS disk. The adapter manages SAS path redundancy and path
switching in case a SAS drive failure occurs. The SAS Y cables attach to an EXP 12S Disk
Drawer. Use SAS cable (YI) system to SAS enclosure, single controller/dual path 1.5 M
(#3686, supported but no longer orderable) or SAS cable (YI) system to SAS enclosure,
single controller/dual path 3 M (#3687) to attach SFF SAS drives in an EXP12S Drawer.
Figure 2-28 illustrates connecting a system external SAS port to a disk expansion drawer.
Use a SAS cable (YO) system to SAS enclosure, single controller/dual path 1.5 M (#3450) or
SAS cable (YO) system to SAS enclosure, single controller/dual path 3 M (#3451) to attach
SFF SAS drives in an EXP12S Drawer. In the EXP 12S Drawer, a high-availability I/O
configuration can be created using a pair of #5278 adapters and SAS X cables to protect
against the failure of a SAS adapter.
Various disk options are available to be installed in the EXP 12S drawer. Table 2-28 shows
the available disk drive feature codes.
Table 2-28 Disk options for the EXP 12S drawer
Feature code | OS support
#3586 | AIX, Linux
#3647 | AIX, Linux
#3648 | AIX, Linux
#3649 | AIX, Linux
#3646 | AIX, Linux
#3587 | IBM i
#3676 | IBM i
#3677 | IBM i
#3678 | IBM i
#3658 | IBM i
Note: An EXP 12S Drawer containing SSD drives cannot be attached to the CEC external
SAS port on the Power 720 and Power 740 or through a PCIe LP 2-x4 port SAS adapter
3 Gb (#5278). If this configuration is required, use a high-profile PCIe SAS adapter or a
PCI-X SAS adapter.
A second EXP 12S drawer can be attached to another drawer using two SAS EE cables,
providing 24 SAS bays instead of 12 bays for the same SAS controller port. This is called
cascading. In this configuration, all 24 SAS bays are controlled by a single controller or a
single pair of controllers.
The EXP 12S Drawer can also be directly attached to the SAS port on the rear of the
Power 720 and Power 740, providing a low-cost disk storage solution. The rear SAS port is
provided by the storage backplane, eight SFF Bays/175MB RAID/Dual IOA (#EJ01).
A second unit cannot be cascaded to an EXP 12S Drawer attached in this way.
Note: If the internal disk bay of the Power 720 or Power 740 contains any SSD drives, an
EXP 12S Drawer cannot be attached to the external SAS port on the Power 720 or
Power 740 (this applies even if the I/O drawer only contains SAS disk drives).
For detailed information about the SAS cabling, see the Serial-attached SCSI cable planning
documentation at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7had/p7
hadsascabling.htm
Note: A single #5887 drawer can be cabled to the CEC external SAS port when a #5268
DASD backplane is part of the system. A 3Gb/s YI cable (#3686/#3687) is used to connect
a #5887 to the CEC external SAS port.
A single #5887 will not be allowed to attach to the CEC external SAS port when a #EPC5
processor (4-core) is ordered/installed on a single socket Power 720 system.
The EXP24S can be ordered in one of three possible manufacturing-configured MODE
settings (not customer set-up): 1, 2, or 4 sets of disk bays.
With IBM AIX, Linux, or the Virtual I/O Server, the EXP24S can be ordered with four sets of six
bays (mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). With IBM i, the
EXP24S can be ordered as one set of 24 bays (mode 1).
Note: Note the following information:
The modes for the EXP24S SFF Gen2-bay Drawer are set by IBM Manufacturing.
There is no option to reset after the drawer has been shipped.
If you order multiple EXP24S, avoid mixing modes within that order. There is no
externally visible indicator as to the drawer's mode.
Several EXP24S cannot be cascaded on the external SAS connector. Only one #5887
is supported.
The Power 720 and Power 740 support up to 14 EXP24S drawers.
There are six SAS connectors on the rear of the EXP 24S drawer to which SAS
adapters/controllers are attached. They are labeled T1, T2, and T3, and there are two T1, two
T2, and two T3 (Figure 2-29):
In mode 1, two or four of the six ports are used. Two T2 ports are used with a single SAS
adapter, and two T2 and two T3 ports are used with a paired set of two adapters or a
dual-adapter configuration.
In mode 2 or mode 4, four ports are used, two T2 and two T3, to access all SAS bays.
An EXP24S in mode 4 can be attached to two or four SAS controllers, providing considerable
configuration flexibility. An EXP24S in mode 2 has similar flexibility. Up to 24 HDDs can be
supported with any of the supported SAS adapters/controllers.
Include EXP24S no-charge specify codes with EXP24S orders to indicate to IBM
Manufacturing the mode to which the drawer should be set and the adapter/controller/cable
configuration that will be used. Table 2-29 lists the no-charge specify codes and the
corresponding physical adapters/controllers/cables, which have their own chargeable feature numbers.
Table 2-29 EXP 24S Cabling
Feature code | Adapter/controller | Cable to drawer | Environment
#9359 | One #5278 | 1 YO cable | A, L, VIOS
#9360 | Pair #5278 | 2 YO cables | A, L, VIOS
#9361 | Two #5278 | 2 YO cables | A, L, VIOS
#9365 | Four #5278 | 2 X cables | A, L, VIOS
#9366 | | 2 X cables | A, L, VIOS
#9367 | Pair #5805 | 2 YO cables | A, i, L, VIOS
#9368 | Two #5805 | 2 X cables | A, L, VIOS
#9382 | One #5904/06/08 | 1 YO cable | A, i, L, VIOS
#9383 | Pair #5904/06/08 | 2 YO cables | A, i, L, VIOS
#9384 | | 1 YI cable | A, i, L, VIOS
#9385 | Two #5913 | 2 YO cables | A, i, L, VIOS
#9386 | Four #5913 | 4 X cables | A, L, VIOS
The following cabling options for the EXP24S Drawer are available:
X cables for #5278: 3 m (#3661), 6 m (#3662), 15 m (#3663)
X cables for #5913 (all 6 Gb except for the 15 m cable): 3 m (#3454), 6 m (#3455), 10 m (#3456)
YO cables for #5278: 1.5 m (#3691), 3 m (#3692), 6 m (#3693), 15 m (#3694)
YO cables for #5913 (all 6 Gb except for the 15 m cable): 1.5 m (#3450), 3 m (#3451), 6 m (#3452), 10 m (#3453)
YI cables for system unit SAS port (3 Gb): 1.5 m (#3686), 3 m (#3687)
Note: IBM plans to offer a 15-meter, 3 Gb bandwidth SAS cable for the #5913 PCIe2
1.8 GB Cache RAID SAS Adapter when attaching the EXP24S Drawer (#5887) for large
configurations where the 10-meter cable is a distance limitation.
The EXP24S Drawer rails are fixed length and designed to fit Power Systems provided
racks that are 28 inches (711 mm) deep. The EXP24S uses 2 EIA units of space in a
19-inch-wide rack. Other racks might have different depths, and these rails will not adjust.
No adjustable-depth rails are orderable at this time.
For detailed information about the SAS cabling, see the serial-attached SCSI cable planning
documentation at:
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7had/p7
hadsascabling.htm
Note: A new IBM 7031 TotalStorage EXP24 Ultra320 SCSI Expandable Storage Disk
Enclosure cannot be ordered for the Power 720 and Power 740, and thus only existing
7031-D24 drawers or 7031-T24 towers can be moved to the Power 720 and 740 servers.
AIX and Linux partitions are supported along with the usage of an IBM 7031 TotalStorage
EXP24 Ultra320 SCSI Expandable Storage Disk Enclosure.
supporting a greater potential return on investment (ROI). For more information about
Storwize V7000, see:
http://www.ibm.com/systems/storage/disk/storwize_v7000/index.html
Type-model | Availability
7310-C05 | Withdrawn
7310-C06 | Withdrawn
7042-C06 | Withdrawn
7042-C07 | Withdrawn
7042-C08 | Available
7310-CR3 | Withdrawn
7042-CR4 | Withdrawn
7042-CR5 | Withdrawn
7042-CR6 | Available
At the time of writing, the HMC must be running V7R7.4.0. It can also support up to 48
POWER7 systems. Updates of the machine code, HMC functions, and hardware
prerequisites can be found on Fix Central at this address:
http://www-933.ibm.com/support/fixcentral/
Server management
The first group contains all functions related to the management of the physical servers under
the control of the HMC:
System password
Status Bar
Power On/Off
Capacity on Demand
Error management
System indicators
Error and event collection reporting
Dump collection reporting
Call Home
Customer notification
Hardware replacement (Guided Repair)
SNMP events
Concurrent Add/Repair/Upgrade
Redundant Service Processor
Firmware Updates
Virtualization management
The second group contains all of the functions related to virtualization features, such as
partition configuration or the dynamic reconfiguration of resources:
System Plans
System Profiles
Partitions (create, activate, shutdown)
Profiles
Partition Mobility
DLPAR (processors, memory, I/O, and so on)
Custom Groups
Figure 2-30 shows a simple network configuration to enable the connection from HMC to
the server and to enable Dynamic LPAR operations. For more details about HMC and the
possible network connections, see Hardware Management Console V7 Handbook,
SG24-7491.
The default mechanism for allocation of the IP addresses for the service processor HMC
ports is dynamic. The HMC can be configured as a DHCP server, providing the IP address at
the time the managed server is powered on. In this case, the FSPs are allocated IP
addresses from a set of address ranges predefined in the HMC software. These predefined
ranges are identical for Version 710 of the HMC code and for previous versions.
If the service processor of the managed server does not receive a DHCP reply before time
out, predefined IP addresses will be set up on both ports. Static IP address allocation is also
an option. You can configure the IP address of the service processor ports with a static IP
address by using the Advanced System Management Interface (ASMI) menus.
Note: The service processor is used to monitor and manage the system hardware
resources and devices. The service processor offers two Ethernet 10/100 Mbps ports as
connections. Note the following information:
Both Ethernet ports are visible only to the service processor and can be used to attach
the server to an HMC or to access the ASMI options from a client web browser using
the HTTP server integrated into the service processor internal operating system.
When not configured otherwise (DHCP or from a previous ASMI setting), both Ethernet
ports of the first FSP have predefined IP addresses:
Service processor Eth0 or HMC1 port is configured as 169.254.2.147 with netmask
255.255.255.0.
Service processor Eth1 or HMC2 port is configured as 169.254.3.147 with netmask
255.255.255.0.
For the second FSP of IBM Power 770 and 780, these default addresses are:
Service processor Eth0 or HMC1 port is configured as 169.254.2.146 with netmask
255.255.255.0.
Service processor Eth1 or HMC2 port is configured as 169.254.3.146 with netmask
255.255.255.0.
For more information about the service processor, see Service processor on page 146.
Figure 2-31 shows one possible highly available HMC configuration managing two servers.
These servers have only one CEC and therefore only one FSP. Each HMC is connected to
one FSP port of all managed servers.
For simplicity, only the hardware management networks (LAN1 and LAN2) are shown as
highly available in Figure 2-31. However, the management network (LAN3) can be made
highly available by using a similar concept and adding more Ethernet adapters to the LPARs
and HMCs.
Both HMCs must be on a separate VLAN to protect from any network contention. Each HMC
can be a DHCP server for its VLAN.
For more details about redundant HMCs, see the Hardware Management Console V7
Handbook, SG24-7491.
HMC
Virtual I/O Server
System firmware
Partition operating systems
To check which combinations are supported, and to identify required upgrades, you can
use the Fix Level Recommendation Tool web page:
http://www14.software.ibm.com/webapp/set2/flrt/home
If you want to migrate an LPAR from a POWER6 processor-based server onto a POWER7
processor-based server using PowerVM Live Partition Mobility, consider this information: If
the source server is managed by one HMC and the destination server is managed by a
different HMC, ensure that the HMC managing the POWER6 processor-based server is at
V7R7.3.5 or later and the HMC managing the POWER7 processor-based server is at
V7R7.4.0 or later.
The SDMC can be obtained as a hardware appliance in the same manner as an HMC.
Hardware appliances support managing all Power Systems servers. The SDMC can
optionally be obtained in a virtual appliance format, capable of running on VMware (ESX/i 4,
or later) and KVM (Red Hat Enterprise Linux (RHEL) 5.5). The virtual appliance is only
supported for managing small-tier Power servers and Power Systems blades.
Note: At the time of writing, the SDMC is not supported for the Power 720 (8202-E4C) and
Power 740 (8205-E6C) models.
IBM intends to enhance the IBM Systems Director Management Console (SDMC) to
support the Power 720 (8202-E4C) and Power 740 (8205-E6C). IBM also intends for the
current HMC 7042-CR6 to be upgradable to an IBM SDMC that supports the Power 720
(8202-E4C) and Power 740 (8205-E6C).
Table 2-31 details whether the SDMC software appliance, hardware appliance, or both are
supported for each model.
Table 2-31 Type of SDMC appliance support for POWER7-based server
The IBM SDMC Hardware Appliance requires an IBM 7042-CR6 rack-mounted Hardware
Management Console with the IBM SDMC indicator (#0963).
Note: When ordering #0963, the features #0031 (No Modem), #1946 (additional 4 GB
memory), and #1998 (additional 500 GB SATA HDD) are configured automatically.
Feature #0963 replaces the HMC software with IBM Systems Director Management
Console Hardware Appliance V6.7.3 (5765-MCH).
Neither an external modem (#0032) nor an internal modem (#0033) can be selected with
the IBM SDMC indicator (#0963).
To run HMC LMC (#0962), you cannot order the additional storage (#1998). However, you
can order the additional memory (#1946) if wanted.
The IBM SDMC Virtual Appliance requires IBM Systems Director Management Console
V6.7.3 (5765-MCV).
Note: If you want to use the software appliance, you have to provide the hardware and
virtualization environment.
At a minimum, the following resources must be available to the virtual machine:
2.53 GHz Intel Xeon E5630, Quad Core processor
500 GB storage
8 GB memory
The following hypervisors are supported:
VMware (ESXi 4.0 or later)
KVM (RHEL 5.5)
The SDMC on POWER6 processor-based servers and blades requires eFirmware level 3.5.7.
An SDMC on Power Systems POWER7 processor-based servers and blades requires
eFirmware level 7.3.0.
For more detailed information about the SDMC, see IBM Systems Director Management
Console: Introduction and Overview, SG24-7860.
IBM periodically releases maintenance packages (service packs or technology levels) for the
AIX operating system. Information about these packages, and about downloading and
obtaining them on CD-ROM, is available on the Fix Central website:
http://www-933.ibm.com/support/fixcentral/
The Fix Central website also provides information about how to obtain the fixes shipping
on CD-ROM.
The Service Update Management Assistant (SUMA), which can help you automate the task of
checking for and downloading operating system fixes, is part of the base operating system.
For more information about the suma command, go to the following website:
http://www14.software.ibm.com/webapp/set2/sas/f/genunix/suma.html
When a system is idle, the system firmware will lower the frequency and voltage to Power
Energy Saver Mode values. When fully utilized, the maximum frequency will vary,
depending on whether the user favors power savings or system performance. If an
administrator prefers energy savings and a system is fully utilized, the system is designed
to reduce the maximum frequency to 95% of nominal values. If performance is favored
over energy consumption, the maximum frequency can be increased to up to 109% of
nominal frequency for extra performance.
Dynamic Power Saver Mode is mutually exclusive with Power Saver Mode. Only one of
these modes can be enabled at a given time.
Power Capping
Power Capping enforces a user-specified limit on power usage. Power Capping is not a
power-saving mechanism. It enforces power caps by throttling the processors in the
system, degrading performance significantly. The idea of a power cap is to set a limit that
must never be reached but that frees up extra power never used in the data center. The
margined power is this amount of extra power that is allocated to a server during its
installation in a datacenter. It is based on the server environmental specifications that
usually are never reached because server specifications are always based on maximum
configurations and worst-case scenarios. The user must set and enable an energy cap
from the IBM Director Active Energy Manager user interface.
Soft Power Capping
There are two power ranges into which the power cap can be set. One is Power Capping,
as described previously, and the other is Soft Power Capping. Soft Power Capping
extends the allowed energy capping range further, beyond a region that can be
guaranteed in all configurations and conditions. If the energy management goal is to meet
a particular consumption limit, then Soft Power Capping is the mechanism to use.
Processor Core Nap Mode
The IBM POWER7 processor uses a low-power mode called Nap that stops processor
execution when there is no work to do on that processor core. The latency of exiting Nap is
very small, typically not generating any impact on applications running. Because of that,
the POWER Hypervisor can use Nap mode as a general-purpose idle state. When the
operating system detects that a processor thread is idle, it yields control of a hardware
thread to the POWER Hypervisor. The POWER Hypervisor immediately puts the thread
into Nap mode. Nap mode allows the hardware to turn the clock off on most of the circuits
inside the processor core. Reducing active energy consumption by turning off the clocks
allows the temperature to fall, which further reduces leakage (static) power of the circuits,
causing a cumulative effect. Nap mode saves from 10 - 15% of power consumption in the
processor core.
Processor core Sleep Mode
To be able to save even more energy, the POWER7 processor has an even lower power
mode called Sleep. Before a core and its associated L2 and L3 caches enter Sleep mode,
caches are flushed and transition lookaside buffers (TLB) are invalidated, and hardware
clock is turned off in the core and in the caches. Voltage is reduced to minimize leakage
current. Processor cores inactive in the system (such as CoD processor cores) are kept in
Sleep mode. Sleep mode saves about 35% of power consumption in the processor core
and associated L2 and L3 caches.
Fan control and altitude input
System firmware will dynamically adjust fan speed based on energy consumption, altitude,
ambient temperature, and energy savings modes. Power Systems are designed to
operate in worst-case environments, in hot ambient temperatures, at high altitudes, and
with high-power components. In a typical case, one or more of these constraints are not
valid. When no power savings setting is enabled, fan speed is based on ambient temperature.
A new power savings mode called Inherit Host Setting is available and is only applicable
to partitions. When configured to use this setting, a partition adopts the power savings
mode of its hosting server. By default, all partitions with dedicated processing units, and
the system processor pool, are set to the Inherit Host Setting.
On POWER7 processor-based systems, several EnergyScale features are embedded in
the hardware and do not require an operating system or external management
component. More advanced functionality requires Active Energy Manager (AEM) and
IBM Systems Director.
Table 2-32 provides a list of all features supported, underlining all cases where AEM is not
required. Table 2-32 also notes the features that can be activated by traditional user
interfaces (that is, ASMI and HMC).
Table 2-32 AEM support
Feature | ASMI | HMC
Power Trending
Thermal Reporting
Power Capping
Energy-optimized Fans
Processor Folding
Partition Power Management
The Power 720 and Power 740 systems implement all the EnergyScale capabilities listed
in 2.15.1, IBM EnergyScale technology on page 97.
Chapter 3.
Virtualization
As you look for ways to maximize the return on your IT infrastructure investments,
consolidating workloads becomes an attractive proposition.
IBM Power Systems combined with PowerVM technology is designed to help you consolidate
and simplify your IT environment, with the following key capabilities:
Improve server utilization and share I/O resources to reduce total cost of ownership and
make better use of IT assets.
Improve business responsiveness and operational speed by dynamically re-allocating
resources to applications as needed, to better match changing business needs or handle
unexpected changes in demand.
Simplify IT infrastructure management by making workloads independent of hardware
resources, thereby enabling you to make business-driven policies to deliver resources
based on time, cost, and service-level requirements.
This chapter discusses the virtualization technologies and features on IBM Power Systems:
POWER Hypervisor
POWER Modes
Partitioning
Active Memory Expansion
PowerVM
System Planning Tool
Installed memory | Default memory block (LMB) size
Greater than 8 GB, up to 16 GB | 64 MB
Greater than 16 GB, up to 32 GB | 128 MB
Greater than 32 GB | 256 MB
In most cases, however, the actual minimum requirements and recommendations of the
supported operating systems are above 256 MB. Physical memory is assigned to partitions in
increments of LMB.
The POWER Hypervisor provides the following types of virtual I/O adapters:
Virtual SCSI
Virtual Ethernet
Virtual Fibre Channel
Virtual (TTY) console
Virtual SCSI
The POWER Hypervisor provides a virtual SCSI mechanism for virtualization of storage
devices. The storage virtualization is accomplished using two paired adapters:
A virtual SCSI server adapter
A virtual SCSI client adapter
A Virtual I/O Server partition or an IBM i partition can define virtual SCSI server adapters.
Other partitions are client partitions. The Virtual I/O Server partition is a special logical
partition, as described in 3.4.4, Virtual I/O Server on page 119. The Virtual I/O Server
software is included in all PowerVM editions. When using the PowerVM Standard Edition or
PowerVM Enterprise Edition, dual Virtual I/O Servers can be deployed to provide
maximum availability for client partitions when performing Virtual I/O Server maintenance.
Virtual Ethernet
The POWER Hypervisor provides a virtual Ethernet switch function that allows partitions on
the same server to use a fast and secure communication without any need for physical
interconnection. The virtual Ethernet allows a transmission speed in the range of 1 - 3 Gbps,
depending on the maximum transmission unit (MTU) size and CPU entitlement. Virtual
Ethernet support began with IBM AIX Version 5.3, or an appropriate level of Linux supporting
virtual Ethernet devices (see 3.4.9, Operating system support for PowerVM on page 130).
The virtual Ethernet is part of the base system configuration.
Virtual Ethernet has these major features:
The virtual Ethernet adapters can be used for both IPv4 and IPv6 communication and can
transmit packets with a size up to 65 408 bytes. Therefore, the maximum MTU for the
corresponding interface can be up to 65 394 (65 390 if VLAN tagging is used); see the
calculation after this list.
The POWER Hypervisor presents itself to partitions as a virtual 802.1Q-compliant switch.
The maximum number of VLANs is 4 096. Virtual Ethernet adapters can be configured as
either untagged or tagged (following the IEEE 802.1Q VLAN standard).
A partition can support 256 virtual Ethernet adapters. Besides a default port VLAN ID,
the number of additional VLAN ID values that can be assigned per virtual Ethernet
adapter is 20, which implies that each virtual Ethernet adapter can be used to access
21 virtual networks.
Each partition operating system detects the virtual local area network (VLAN) switch
as an Ethernet adapter without the physical link properties and asynchronous data
transmit operations.
Any virtual Ethernet can also have connectivity outside of the server if a layer-2 bridge to a
physical Ethernet adapter is set in one Virtual I/O Server partition (this bridge is also known
as a Shared Ethernet Adapter). See 3.4.4, Virtual I/O Server on page 119, for more details
about shared Ethernet.
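The relationship between the maximum packet size and the maximum MTU quoted above can be checked with a short calculation. The Python sketch below assumes the usual 14-byte Ethernet header and 4-byte IEEE 802.1Q tag; this interpretation of the overheads is ours, not taken from the virtual Ethernet documentation:

MAX_VIRTUAL_ETHERNET_FRAME = 65408   # bytes, the hypervisor limit quoted above
ETHERNET_HEADER = 14                 # destination MAC, source MAC, EtherType (assumption)
VLAN_TAG = 4                         # IEEE 802.1Q tag (assumption)

mtu_untagged = MAX_VIRTUAL_ETHERNET_FRAME - ETHERNET_HEADER
mtu_tagged = mtu_untagged - VLAN_TAG

print(mtu_untagged)  # 65394, matching the maximum MTU quoted above
print(mtu_tagged)    # 65390, the maximum MTU when VLAN tagging is used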
Note: Virtual Ethernet is based on the IEEE 802.1Q VLAN standard. No physical I/O
adapter is required when creating a VLAN connection between partitions, and no access to
an outside network is required.
Figure 3-1 Connectivity between virtual Fibre Channel adapters and external SAN devices
On Power Systems servers, partitions can be configured to run in several modes, including:
POWER6 compatibility mode
This execution mode is compatible with Version 2.05 of the Power Instruction Set
Architecture (ISA). For more information, visit the following address:
http://www.power.org/resources/reading/PowerISA_V2.05.pdf
POWER6+ compatibility mode
This mode is similar to POWER6, with eight additional Storage Protection Keys.
POWER7 mode
This is the native mode for POWER7 processors, implementing the v2.06 of the Power
Instruction Set Architecture. For more information, visit the following address:
http://www.power.org/resources/downloads/PowerISA_V2.06_PUBLIC.pdf
The selection of the mode is made on a per-partition basis, from the HMC, by editing the
partition profile (Figure 3-2).
Figure 3-2 Configuring partition profile compatibility mode from the HMC
POWER6 and POWER6+ mode | POWER7 mode | Customer value
2-thread SMT | 4-thread SMT | Throughput performance, processor core utilization
| | High-performance computing
Barrier Synchronization, Fixed 128-byte Array, Kernel Extension Access | Enhanced Barrier Synchronization, Variable Sized Array, User Shared Memory Access | High-performance computing parallel programming synchronization facility
The percentage of CPU utilization required for expansion is compared for two cases:
1 = plenty of spare CPU resource available, where expansion is very cost effective, and
2 = constrained CPU resource that is already running at significant utilization.
Both cases show that there is a knee-of-curve relationship for CPU resource required for
memory expansion:
Busy processor cores do not have resources to spare for expansion.
The more memory expansion done, the more CPU resource required.
The knee varies depending on how compressible the memory contents are. This example
demonstrates the need for a case-by-case study of whether memory expansion can provide a
positive return on investment.
To help you perform this study, a planning tool is included with AIX 6.1 Technology Level 4,
allowing you to sample actual workloads and estimate how expandable the partition's
memory is and how much CPU resource is needed. Any Power System server can run the
planning tool. Figure 3-4 shows an example of the output returned by this planning tool. The
tool outputs various real memory and CPU resource combinations to achieve the desired
effective memory. It also recommends one particular combination. In this example, the tool
recommends that you allocate 58% of a processor, to benefit from 45% extra memory
capacity.
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size: 8.00 GB

Expansion    True Memory     Modeled Memory     CPU Usage
Factor       Modeled Size    Gain               Estimate
---------    ------------    ----------------   ---------
1.21         6.75 GB         1.25 GB [ 19%]     0.00
1.31         6.25 GB         1.75 GB [ 28%]     0.20
1.41         5.75 GB         2.25 GB [ 39%]     0.35
1.51         5.50 GB         2.50 GB [ 45%]     0.58
1.61         5.00 GB         3.00 GB [ 60%]     1.46
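The modeled memory values above follow directly from the expansion factor: the true (physical) memory is roughly the expanded size divided by the factor, and the gain is the difference. The following Python sketch reproduces that arithmetic; the rounding of true memory to 0.25 GB steps is an assumption made to match the sample output, not documented behavior of the planning tool, and the CPU usage estimate depends on workload compressibility, so it is not modeled here:

import math

def model_expansion(expanded_gb, factor, granularity_gb=0.25):
    """Estimate true memory and memory gain for one expansion factor."""
    true_gb = math.ceil(expanded_gb / factor / granularity_gb) * granularity_gb
    gain_gb = expanded_gb - true_gb
    gain_pct = round(100 * gain_gb / true_gb)
    return true_gb, gain_gb, gain_pct

expanded = 8.00  # modeled expanded memory size from the sample above
for factor in (1.21, 1.31, 1.41, 1.51, 1.61):
    true_gb, gain_gb, gain_pct = model_expansion(expanded, factor)
    print(f"factor {factor}: true {true_gb:.2f} GB, gain {gain_gb:.2f} GB [{gain_pct}%]")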
After you select the value of the memory expansion factor that you want to achieve, you can
use this value to configure the partition from the HMC (Figure 3-5).
Figure 3-5 Using the planning tool result to configure the partition
On the HMC menu describing the partition, check the Active Memory Expansion box and
enter true and maximum memory, and the memory expansion factor. To turn off expansion,
clear the check box. In both cases, a reboot of the partition is needed to activate the change.
In addition, a one-time, 60-day trial of Active Memory Expansion is available to provide more
exact memory expansion and CPU measurements. The trial can be requested using the
Capacity on Demand web page.
http://www.ibm.com/systems/power/hardware/cod/
Active Memory Expansion can be ordered with the initial order of the server or as an MES
order. A software key is provided when the enablement feature is ordered that is applied to
the server. Rebooting is not required to enable the physical server. The key is specific to an
individual server and is permanent. It cannot be moved to a separate server. This feature is
ordered per server, independently of the number of partitions using memory expansion.
From the HMC, you can see whether the Active Memory Expansion feature has been
activated (Figure 3-6).
Note: If you want to move an LPAR using Active Memory Expansion to another system
using Live Partition Mobility, the target system must support AME (the target system must
have AME activated with the software key). If the target system does not have AME
activated, the mobility operation fails during the pre-mobility check phase, and an
appropriate error message displays to the user.
For detailed information regarding Active Memory Expansion, you can download the
document Active Memory Expansion: Overview and Usage Guide from this location:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=SA&subtype=WH&appname=S
TGE_PO_PO_USEN&htmlfid=POW03037USEN
3.4 PowerVM
The PowerVM platform is the family of technologies, capabilities, and offerings that deliver
industry-leading virtualization on the IBM Power Systems. It is the new umbrella branding
term for Power Systems Virtualization (Logical Partitioning, Micro-Partitioning, Power
Hypervisor, Virtual I/O Server, Live Partition Mobility, Workload Partitions, and more). As with
Advanced Power Virtualization in the past, PowerVM is a combination of hardware
enablement and value-added software. Section 3.4.1, PowerVM editions on page 111,
discusses the licensed features of each of the three separate editions of PowerVM.
PowerVM editions | Express | Standard | Enterprise
Power 720 | #5225 | #5227 | #5228
Power 740 | #5225 | #5227 | #5228
For more information about the features included on each version of PowerVM, see IBM
PowerVM Virtualization Introduction and Configuration, SG24-7940.
Note: At the time of writing, the IBM Power 720 (8202-E4C) and Power 740 (8205-E6C)
have to be managed by the Hardware Management Console or by the Integrated
Virtualization Manager.
Micro-Partitioning
Micro-Partitioning technology allows you to allocate fractions of processors to a logical
partition. This technology was introduced with POWER5 processor-based systems. A
logical partition using fractions of processors is also known as a Shared Processor Partition
or micro-partition. Micro-partitions run over a set of processors called a Shared Processor Pool.
Virtual processors are used to let the operating system manage the fractions of processing
power assigned to the logical partition. From an operating system perspective, a virtual
processor cannot be distinguished from a physical processor, unless the operating system
has been enhanced to be made aware of the difference. Physical processors are
abstracted into virtual processors that are available to partitions. The meaning of the term
physical processor in this section is a processor core. For example, a 2-core server has two
physical processors.
When defining a shared processor partition, several options have to be defined:
The minimum, desired, and maximum processing units
Processing units are defined as processing power, or the fraction of time that the partition
is dispatched on physical processors. Processing units define the capacity entitlement of
the partition.
The Shared Processor Pool
Pick one from the list with the names of each configured Shared Processor Pool. This list
also displays the pool ID of each configured Shared Processor Pool in parentheses. If the
name of the desired Shared Processor Pool is not available here, you must first configure
the desired Shared Processor Pool using the Shared Processor Pool Management
window. Shared processor partitions use the default Shared Processor Pool called
DefaultPool by default. See 3.4.3, Multiple Shared Processor Pools on page 114, for
details about multiple Shared Processor Pools.
Whether the partition will be able to access extra processing power to fill up its virtual
processors above its capacity entitlement (selecting either to cap or uncap your partition)
If there is spare processing power available in the Shared Processor Pool or other
partitions are not using their entitlement, an uncapped partition can use additional
processing units if its entitlement is not enough to satisfy its application processing
demand.
The weight (preference) in the case of an uncapped partition
The minimum, desired, and maximum number of virtual processors
The POWER Hypervisor calculates a partition's processing power based on its minimum,
desired, and maximum values, its processing mode, and the requirements of other active
partitions. The actual entitlement is never smaller than the desired processing units value,
but can exceed that value in the case of an uncapped partition, up to the number of virtual
processors allocated.
A partition can be defined with a processor capacity as small as 0.10 processing units. This
represents 0.10 of a physical processor. Each physical processor can be shared by up to 10
shared processor partitions, and a partition's entitlement can be incremented fractionally by
as little as 0.01 of a processor. The shared processor partitions are dispatched and
time-sliced on the physical processors under control of the POWER Hypervisor. The shared
processor partitions are created and managed by the managed console or the Integrated
Virtualization Manager.
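As a rough illustration of the sizing rules just described (0.10 processing unit minimum, 0.01 increments, and an uncapped partition limited by its virtual processors), the following Python sketch validates a shared-processor partition definition and reports the capacity range it can use. It is a simplified model under those stated rules, not HMC validation logic:

import math

def check_micro_partition(desired_units, virtual_processors, capped=True):
    """Validate a shared-processor partition and return (guaranteed, ceiling).

    Rules modeled from the text above: entitlement starts at 0.10 processing
    units and changes in 0.01 steps, a virtual processor can represent at most
    one processing unit, and an uncapped partition can grow up to its number
    of virtual processors when spare pool capacity exists.
    """
    if desired_units < 0.10:
        raise ValueError("entitlement must be at least 0.10 processing units")
    if abs(desired_units * 100 - round(desired_units * 100)) > 1e-9:
        raise ValueError("entitlement must be a multiple of 0.01 processing units")
    if virtual_processors < math.ceil(desired_units):
        raise ValueError("too few virtual processors for this entitlement")
    ceiling = desired_units if capped else float(virtual_processors)
    return desired_units, ceiling

# Example: 1.5 processing units on 3 virtual processors, uncapped: the
# partition is guaranteed 1.5 processors' worth of time and can consume
# up to 3.0 when other partitions leave capacity unused.
print(check_micro_partition(1.5, 3, capped=False))   # (1.5, 3.0)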
The IBM Power 720 supports up to eight cores, and has these maximums:
Up to eight dedicated partitions
Up to 80 micro-partitions (10 micro-partitions per physical active core)
The Power 740 allows up to 16 cores in a single system, supporting the following maximums:
Up to 16 dedicated partitions
Up to 160 micro-partitions (10 micro-partitions per physical active core)
An important point is that the maximums stated are supported by the hardware, but the
practical limits depend on the application workload demands.
Additional information about virtual processors includes:
A virtual processor can be running (dispatched) either on a physical processor or as
standby waiting for a physical processor to become available.
Virtual processors do not introduce any additional abstraction level. They are only a
dispatch entity. When running on a physical processor, virtual processors run at the same
speed as the physical processor.
Each partition's profile defines a CPU entitlement that determines how much processing
power any given partition should receive. The total sum of CPU entitlement of all partitions
cannot exceed the number of available physical processors in a Shared Processor Pool.
The number of virtual processors can be changed dynamically through a dynamic
LPAR operation.
Processing mode
When you create a logical partition you can assign entire processors for dedicated use, or you
can assign partial processing units from a Shared Processor Pool. This setting defines the
processing mode of the logical partition. Figure 3-7 shows a diagram of the concepts
discussed in this section.
Dedicated mode
In dedicated mode, physical processors are assigned as a whole to partitions. The
simultaneous multithreading feature in the POWER7 processor core allows the core to
execute instructions from two or four independent software threads simultaneously. To
support this feature we use the concept of logical processors. The operating system (AIX,
IBM i, or Linux) sees one physical processor as two or four logical processors if the
simultaneous multithreading feature is on. It can be turned off and on dynamically while the
operating system is executing (for AIX, use the smtctl command). If simultaneous
multithreading is off, each physical processor is presented as one logical processor, and thus
only one thread.
Shared mode
In shared mode, logical partitions use virtual processors to access fractions of physical
processors. Shared partitions can define any number of virtual processors (the maximum
number is 10 times the number of processing units assigned to the partition). From the
POWER Hypervisor point of view, virtual processors represent dispatching objects. The
POWER Hypervisor dispatches virtual processors to physical processors according to the
partition's processing unit entitlement. One processing unit represents one physical
processor's processing capacity. At the end of the POWER Hypervisor's dispatch cycle
(10 ms), all partitions should receive total CPU time equal to their processing unit
entitlement. The logical processors are defined on top of virtual processors. So, even with a
virtual processor, the concept of a logical processor exists and the number of logical
processors depends on whether the simultaneous multithreading is turned on or off.
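A small worked example makes the dispatch arithmetic concrete. The Python sketch below is a simplification of the scheduler described above, not hypervisor code: it converts a partition's entitlement into physical processor time per 10 ms dispatch window and derives the logical processor count from the SMT setting:

DISPATCH_WINDOW_MS = 10  # POWER Hypervisor dispatch cycle from the text above

def dispatch_budget_ms(processing_units):
    """Physical processor time owed per dispatch window for this entitlement."""
    return processing_units * DISPATCH_WINDOW_MS

def logical_processors(virtual_processors, smt_threads=4):
    """Logical processors seen by the OS (SMT off means 1 thread per processor)."""
    return virtual_processors * smt_threads

# A partition entitled to 1.5 processing units receives 15 ms of physical
# processor time in every 10 ms window, spread across its virtual processors.
print(dispatch_budget_ms(1.5))                # 15.0
# With 3 virtual processors and SMT4, AIX sees 12 logical processors.
print(logical_processors(3, smt_threads=4))   # 12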
To implement MSPPs, there is a set of underlying techniques and technologies. Figure 3-8
shows an overview of the architecture of multiple Shared Processor Pools.
Micro-partitions are created and then identified as members of either the default Shared
Processor Pool (SPP0) or a user-defined Shared Processor Pool (SPPn). The virtual processors that exist
within the set of micro-partitions are monitored by the POWER Hypervisor, and processor
capacity is managed according to user-defined attributes.
If the Power Systems server is under heavy load, each micro-partition within a Shared
Processor Pool is guaranteed its processor entitlement plus any capacity that it might be
allocated from the reserved pool capacity if the micro-partition is uncapped.
If some micro-partitions in a Shared Processor Pool do not use their capacity entitlement, the
unused capacity is ceded and other uncapped micro-partitions within the same Shared
Processor Pool are allocated the additional capacity according to their uncapped weighting.
In this way, the entitled pool capacity of a Shared Processor Pool is distributed to the set of
micro-partitions within that Shared Processor Pool.
All Power Systems servers that support the multiple Shared Processor Pools capability will
have a minimum of one (the default) Shared Processor Pool and up to a maximum of 64
Shared Processor Pools.
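The ceding and redistribution mechanism described above can be sketched as a simple weighted allocation. The following Python example is a conceptual, single-pass model only (the real hypervisor works per dispatch cycle and also honors reserved pool capacity); the partition names and data layout are assumptions:

def redistribute(partitions):
    """Share unused entitled capacity among uncapped partitions by weight.

    Each partition is a dict with 'entitled', 'demand', 'uncapped', 'weight'.
    Returns the processing units each partition actually receives.
    """
    granted = {name: min(p["entitled"], p["demand"]) for name, p in partitions.items()}
    spare = sum(p["entitled"] - granted[name] for name, p in partitions.items())

    # Uncapped partitions that still want more capacity share the ceded units
    # in proportion to their uncapped weighting.
    hungry = {name: p for name, p in partitions.items()
              if p["uncapped"] and p["demand"] > granted[name]}
    total_weight = sum(p["weight"] for p in hungry.values())

    for name, p in hungry.items():
        share = spare * p["weight"] / total_weight if total_weight else 0.0
        granted[name] += min(share, p["demand"] - granted[name])
    return granted

pool = {
    "lpar1": {"entitled": 1.6, "demand": 0.4, "uncapped": False, "weight": 0},
    "lpar2": {"entitled": 0.8, "demand": 2.0, "uncapped": True, "weight": 128},
    "lpar3": {"entitled": 0.5, "demand": 1.0, "uncapped": True, "weight": 64},
}
print(redistribute(pool))   # lpar1 keeps 0.4; lpar2 and lpar3 absorb the ceded 1.2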
Figure 3-9 shows the levels of unused capacity redistribution implemented by the POWER
Hypervisor.
Because the Virtual I/O Server is an operating system-based appliance server, redundancy
for physical devices attached to the Virtual I/O Server can be provided by using capabilities
such as Multipath I/O and IEEE 802.3ad Link Aggregation.
Installation of the Virtual I/O Server partition is performed from a special system backup DVD
that is provided to clients who order any PowerVM edition. This dedicated software is only for
the Virtual I/O Server (and IVM in case it is used) and is only supported in special Virtual I/O
Server partitions. Three major virtual devices are supported by the Virtual I/O Server:
Shared Ethernet Adapter
Virtual SCSI
Virtual Fibre Channel adapter
The Virtual Fibre Channel adapter is used with the NPIV feature, described in 3.4.8, N_Port
ID virtualization on page 129.
A single SEA setup can have up to 16 Virtual Ethernet trunk adapters, and each virtual
Ethernet trunk adapter can support up to 20 VLAN networks. Therefore, a possibility is for a
single physical Ethernet to be shared between 320 internal VLAN networks. The number of
shared Ethernet adapters that can be set up in a Virtual I/O Server partition is limited only by
the resource availability, because there are no configuration limits.
Unicast, broadcast, and multicast are supported, so protocols that rely on broadcast or
multicast, such as Address Resolution Protocol (ARP), Dynamic Host Configuration
Protocol (DHCP), Boot Protocol (BOOTP), and Neighbor Discovery Protocol (NDP) can
work on an SEA.
Note: A Shared Ethernet Adapter does not need to have an IP address configured to be
able to perform the Ethernet bridging functionality. Configuring IP on the Virtual I/O Server
is convenient because the Virtual I/O Server can then be reached by TCP/IP, for example,
to perform dynamic LPAR operations or to enable remote login. This task can be done
either by configuring an IP address directly on the SEA device or on an additional virtual
Ethernet adapter in the Virtual I/O Server. This leaves the SEA without the IP address,
allowing for maintenance on the SEA without losing IP connectivity in case SEA failover
is configured.
For a more detailed discussion about virtual networking, see:
http://www.ibm.com/servers/aix/whitepapers/aix_vn.pdf
Virtual SCSI
Virtual SCSI is used to refer to a virtualized implementation of the SCSI protocol. Virtual SCSI
is based on a client/server relationship. The Virtual I/O Server logical partition owns the
physical resources and acts as a server or, in SCSI terms, a target device. The client logical
partitions access the virtual SCSI backing storage devices provided by the Virtual I/O Server
as clients.
The virtual I/O adapters (virtual SCSI server adapter and a virtual SCSI client adapter) are
configured using a managed console or through the Integrated Virtualization Manager on
smaller systems. The virtual SCSI server (target) adapter is responsible for executing any
SCSI commands that it receives. It is owned by the Virtual I/O Server partition. The virtual
SCSI client adapter allows a client partition to access physical SCSI and SAN-attached
devices and LUNs that are assigned to the client partition. The provisioning of virtual disk
resources is provided by the Virtual I/O Server.
Physical disks presented to the Virtual I/O Server can be exported and assigned to a client
partition in a number of ways:
The entire disk is presented to the client partition.
The disk is divided into several logical volumes, which can be presented to a single client
or multiple clients.
As of Virtual I/O Server 1.5, files can be created on these disks, and file-backed storage
devices can be created.
The logical volumes or files can be assigned to separate partitions. Therefore, virtual SCSI
enables sharing of adapters and disk devices.
Figure 3-12 shows an example where one physical disk is divided into two logical volumes by
the Virtual I/O Server. Each of the two client partitions is assigned one logical volume, which
is then accessed through a virtual I/O adapter (VSCSI Client Adapter). Inside the partition,
the disk is seen as a normal hdisk.
Figure 3-12 Architectural view of virtual SCSI
At the time of writing, virtual SCSI supports Fibre Channel, parallel SCSI, iSCSI, SAS, SCSI
RAID devices and optical devices, including DVD-RAM and DVD-ROM. Other protocols such
as SSA and tape devices are not supported.
For more information about the specific storage devices supported for Virtual I/O Server, see:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
Includes IBM Systems Director agent and a number of pre-installed Tivoli agents, such
as these:
Tivoli Identity Manager (TIM), to allow easy integration into an existing Tivoli Systems
Management infrastructure
Tivoli Application Dependency Discovery Manager (ADDM), which creates and
automatically maintains application infrastructure maps including dependencies,
change-histories, and deep configuration values
vSCSI eRAS.
Additional CLI statistics in svmon, vmstat, fcstat, and topas.
Monitoring solutions to help manage and monitor the Virtual I/O Server and shared
resources. New commands and views provide additional metrics for memory, paging,
processes, Fibre Channel HBA statistics, and virtualization.
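For example, Fibre Channel HBA and memory statistics can be displayed from the Virtual I/O Server command line with commands such as the following (fcs0 is an example adapter name; the interval and count values are arbitrary):

$ fcstat fcs0
$ vmstat 2 5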
For more information about the Virtual I/O Server and its implementation, see IBM PowerVM
Virtualization Introduction and Configuration, SG24-7940.
The managed console is used to configure, validate, and orchestrate the migration. Use the
managed console to configure the Virtual I/O Server as an MSP and to configure the VASI device.
A managed console wizard validates your configuration and identifies issues that can
cause the migration to fail. During the migration, the managed console controls all phases
of the process.
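A migration can also be validated and then started from the HMC command line. As an illustration (the managed system and partition names are placeholders), the first command below validates the move of partition mobile_lpar and the second performs it:

$ migrlpar -o v -m source_system -t target_system -p mobile_lpar
$ migrlpar -o m -m source_system -t target_system -p mobile_lpar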
[Figure: LPAR logical memory mappings without Active Memory Deduplication; duplicate pages and unique pages from each LPAR are mapped to separate physical pages in the shared memory pool.]
Active Memory Deduplication allows the Hypervisor to dynamically map identical partition
memory pages to a single physical memory page within a shared memory pool. This enables
better utilization of the AMS shared memory pool, increasing the system's overall
performance by avoiding paging. Deduplication can also cause the hardware to incur fewer
cache misses, which further improves performance.
Figure 3-14 shows the behavior of a system with Active Memory Deduplication enabled on its
AMS shared memory pool. Duplicated pages from different LPARs are stored just once,
providing the AMS pool with more free memory.
Figure 3-14 Identical memory pages mapped to a single physical memory page with Active Memory
Deduplication enabled
Active Memory Deduplication depends on the Active Memory Sharing (AMS) feature being
available, and it consumes CPU cycles donated by the AMS pool's VIOS partitions to identify
duplicate pages. The operating systems running on the AMS partitions can hint to the PowerVM
Hypervisor that certain pages (such as frequently referenced read-only code pages) are
particularly good candidates for deduplication.
To perform deduplication, the Hypervisor cannot compare every memory page in the AMS
pool with every other page. Instead, it computes a small signature for each page that it visits,
and stores the signatures in an internal table. Each time that a page is inspected, its signature
is looked up against the known signatures in the table. If a match is found, the memory pages
are compared to be sure that the pages are really duplicates. When an actual duplicate is
found, the Hypervisor remaps the partition memory to the existing memory page and returns
the duplicate page to the AMS pool.
Chapter 3. Virtualization
127
Figure 3-15 shows two pages being written in the AMS memory pool and having its
signatures matched on the deduplication table.
Page A
Signature
Function
Sign A
Page A
Page B
AMS
Memory
Pool
Dedup
Table
Signature
Function
Sign A
ure
nat
Sig ction
n
Fu
AMS
Memory
Pool
Dedup
Table
Figure 3-15 Memory pages having their signatures matched by Active Memory Deduplication
From the LPAR point of view, the AMD feature is completely transparent. If an LPAR attempts
to modify a deduplicated page, the Hypervisor takes a free page from the AMS pool, copies
the duplicate page content into the new page, and maps the LPAR's reference to the new
page so that the LPAR can modify its own unique page.
System administrators can dynamically configure the size of the deduplication table, ranging
from 1/8192 up to 1/256 of the configured maximum AMS memory pool size. For example, with a
maximum AMS pool size of 256 GB, the table can range from 32 MB (1/8192) up to 1 GB (1/256).
Having this table too small might lead to missed deduplication opportunities. Conversely,
having a table that is too large might waste a small amount of overhead space.
The management of the Active Memory Deduplication feature is done via a managed
console, allowing administrators to:
Enable and disable Active Memory Deduplication at an AMS Pool level.
Display deduplication metrics.
Display and modify the deduplication table size.
Figure 3-16 shows the Active Memory Deduplication function being enabled for a shared
memory pool.
Figure 3-16 Enabling the Active Memory Deduplication for a shared memory pool
NPIV is supported in PowerVM Express, Standard, and Enterprise Editions, on the IBM
Power 720 and Power 740 servers.
PowerVM features supported by AIX, IBM i, and Linux operating system levels:

Feature                           AIX     AIX     AIX     IBM i    IBM i    RHEL    RHEL    SLES 10  SLES 11
                                  V5.3    V6.1    V7.1    6.1.1    7.1      V5.7    V6.1    SP4      SP1
Virtual SCSI                      Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Virtual Ethernet                  Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Shared Ethernet Adapter           Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Virtual Fibre Channel             Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Virtual Tape                      Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Logical Partitioning              Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
...                               Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
DLPAR I/O processor add/remove    Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
DLPAR I/O memory add              Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
DLPAR I/O memory remove           Yes     Yes     Yes     Yes      Yes      Yes     Yes     No       Yes
Micro-Partitioning                Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Shared Dedicated Capacity         Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Multiple Shared Processor Pools   Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
...                               Yes     Yes     Yes     Yes      Yes      Yes     Yes     Yes      Yes
Suspend/Resume                    No      Yes     Yes     No       No       No      No      No       No
Shared Storage Pools              Yes     Yes     Yes     Yes      Yes(a)   No      No      No       No
Thin Provisioning                 Yes     Yes     Yes     Yes(b)   Yes(b)   No      No      No       No
Active Memory Sharing and
Active Memory Deduplication       No      Yes     Yes     Yes      Yes      No      Yes     No       Yes
Live Partition Mobility           Yes     Yes     Yes     No       No       Yes     Yes     Yes      Yes
Simultaneous Multi-Threading
(SMT)                             Yes(c)  Yes(d)  Yes     Yes(e)   Yes      Yes(c)  Yes(c)  Yes(c)   Yes
Active Memory Expansion           No      Yes(f)  Yes     No       No       No      No      No       No
Linux feature support by Linux release:

Feature                          SLES 10 SP4   SLES 11   RHEL 5.7   RHEL 6.1   Comments
POWER6 compatibility mode        Yes           Yes       Yes        Yes
POWER7 mode                      No            Yes       No         Yes
Strong Access Ordering           No            Yes       No         Yes
...                              No            Yes       No         Yes        Base OS support available
4-way SMT                        No            Yes       No         Yes
VSX Support                      No            Yes       No         Yes        Full exploitation requires Advance Toolchain
Distro toolchain mcpu/mtune=p7   No            Yes       No         Yes        SLES11/GA toolchain has minimal P7 enablement
                                                                               necessary to support kernel build
Advance Toolchain Support        Yes, execution    Yes    Yes, execution    Yes
                                 restricted to            restricted to
                                 Power6 instructions      Power6 instructions
...                              No            Yes       Yes        Yes
Tickless idle                    No            Yes       No         Yes        Improved energy utilization and virtualization
                                                                               of partially to fully idle partitions
You can use the SPT before you order a system to determine what you must order to support
your workload. You can also use the SPT to determine how you can partition a system that
you already have.
The System Planning tool is an effective way of documenting and backing up key system
settings and partition definitions. It allows the user to create records of systems and export
them to their personal workstation or backup system of choice. These same backups can
then be imported back onto the same managed console when needed. This can be useful
when cloning systems, because the user can import the system plan onto any managed console
multiple times.
The SPT and its supporting documentation can be found on the IBM System Planning
Tool site:
http://www.ibm.com/systems/support/tools/systemplanningtool/
Chapter 4. Continuous availability and manageability
4.1 Reliability
Highly reliable systems are built with highly reliable components. On IBM POWER
processor-based systems, this basic principle is expanded upon with a clear design for
reliability architecture and methodology. A concentrated, systematic, architecture-based
approach is designed to improve overall system reliability with each successive generation of
system offerings.
[Figure: chart of relative failure rates (scale 0 to 0.8) for Grade 1, Grade 3, and Grade 5 components.]
4.2 Availability
IBM hardware and microcode capability to continuously monitor execution of hardware
functions is generally described as the process of First Failure Data Capture (FFDC). This
process includes the strategy of predictive failure analysis, which refers to the ability to track
intermittent correctable errors and to vary components off-line before they reach the point of
hard failure, causing a system outage, and without the need to recreate the problem.
The POWER7 family of systems continues to offer and introduce significant enhancements
designed to increase system availability to drive towards a high-availability objective with
hardware components that can perform the following automatic functions:
Self-diagnose and self-correct during run time.
Automatically reconfigure to mitigate potential problems from suspect hardware.
Self-heal or automatically substitute good components for failing components.
Note: POWER7 processor-based servers are independent of the operating system for
error detection and fault isolation within the central electronics complex.
Persistent deallocation
To enhance system availability, a component that is identified for deallocation or
deconfiguration on a POWER processor-based system is flagged for persistent deallocation.
Component removal can occur either dynamically (while the system is running) or at boot
time (IPL), depending both on the type of fault and when the fault is detected.
In addition, runtime unrecoverable hardware faults can be deconfigured from the system after
the first occurrence. The system can be rebooted immediately after failure and resume
operation on the remaining stable hardware. This prevents the same faulty hardware from
affecting system operation again. The repair action is deferred to a more convenient, less
critical time.
The following persistent deallocation functions are included:
Processor.
L2/L3 cache lines. (Cache lines are dynamically deleted.)
Memory.
Deconfigure or bypass failing I/O adapters.
Figure 4-2 shows a POWER7 processor chip, with its memory interface, consisting of two
controllers and four DIMMs per controller. Advanced memory buffer chips are exclusive to
IBM and help to increase performance, acting as read/write buffers. The Power 720 and
Power 740 use one memory controller. Advanced memory buffer chips are on the system
planar and support two DIMMs each.
[Figure 4-2: POWER7 processor chip with eight cores, 256 KB of L2 cache per core, 32 MB of L3 cache, SMP fabric and GX interfaces, and two memory controllers, each with ports connecting through buffer chips to DIMMs.]
POWER Hypervisor. This deallocation will take effect on a partition reboot if the logical
memory block is assigned to an active partition at the time of the fault.
In addition, the system will deallocate the entire memory group associated with the error on
all subsequent system reboots until the memory is repaired. This precaution is intended to
guard against future uncorrectable errors while waiting for parts replacement.
L2 and L3 deleted cache lines are marked for persistent deconfiguration on subsequent
system reboots until they can be replaced.
[Figure: I/O error protection: POWER7 GX buses connect through 12X channel hubs to PCIe and PCI-X bridges and adapters; the 12X channels are protected by CRC with retry or ECC, the PCI buses by parity error detection, and the bridges support PCI Bridge Enhanced Error Handling.]
4.3 Serviceability
IBM Power Systems design considers both IBM and client needs. The IBM Serviceability
Team has enhanced the base service capabilities and continues to implement a strategy that
incorporates best-of-breed service characteristics from diverse IBM systems offerings.
Serviceability includes system installation, system upgrades and downgrades (MES), and
system maintenance and repair.
The goal of the IBM Serviceability Team is to design and provide the most efficient system
service environment that includes:
Easy access to service components, design for Customer Set Up (CSU), Customer
Installed Features (CIF), and Customer Replaceable Units (CRU)
On demand service education
Error detection and fault isolation (ED/FI)
First-failure data capture (FFDC)
An automated guided repair strategy that uses common service interfaces for a converged
service approach across multiple IBM server platforms
By delivering on these goals, IBM Power Systems servers enable faster and more accurate
repair, and reduce the possibility of human error.
Client control of the service environment extends to firmware maintenance on all of the
POWER processor-based systems. This strategy contributes to higher systems availability
with reduced maintenance costs.
This section provides an overview of the progressive steps of error detection, analysis,
reporting, notifying, and repairing that are found in all POWER processor-based systems.
4.3.1 Detecting
The first and most crucial component of a solid serviceability strategy is the ability to
accurately and effectively detect errors when they occur. Although not all errors are a
guaranteed threat to system availability, those that go undetected can cause problems
because the system does not have the opportunity to evaluate and act if necessary. POWER
processor-based systems employ System z server-inspired error detection mechanisms
that extend from processor cores and memory to power supplies and hard drives.
Service processor
The service processor is a microprocessor that is powered separately from the main
instruction processing complex. The service processor provides the capabilities for:
POWER Hypervisor (system firmware) and Hardware Management Console
connection surveillance
Several remote power control options
Reset and boot features
Environmental monitoring
The service processor monitors the server's built-in temperature sensors, sending
instructions to the system fans to increase rotational speed when the ambient temperature
is above the normal operating range. Using an architected operating system interface, the
service processor notifies the operating system of potential environmentally related
problems so that the system administrator can take appropriate corrective actions before a
critical failure threshold is reached.
The service processor can also post a warning and initiate an orderly system shutdown in
these cases:
The operating temperature exceeds the critical level (for example, failure of air
conditioning or air circulation around the system).
The system fan speed is out of operational specification (for example, because of
multiple fan failures).
The server input voltages are out of operational specification.
The service processor can immediately shut down a system when the following
cases occur:
Temperature exceeds the critical level or remains above the warning level for too long.
Internal component temperatures reach critical levels.
Non-redundant fan failures occur.
Placing calls
On systems without a Hardware Management Console, the service processor can place
calls to report surveillance failures with the POWER Hypervisor, critical environmental
faults, and critical processing faults even when the main processing unit is inoperable.
Mutual surveillance
The service processor monitors the operation of the POWER Hypervisor firmware during
the boot process and watches for loss of control during system operation. It also allows the
POWER Hypervisor to monitor service processor activity. The service processor can take
appropriate action, including calling for service, when it detects that the POWER
Hypervisor firmware has lost control. Likewise, the POWER Hypervisor can request a
service processor repair action if necessary.
Availability
The auto-restart (reboot) option, when enabled, can reboot the system automatically
following an unrecoverable firmware error, firmware hang, hardware failure, or
environmentally induced (AC power) failure.
Note: The auto-restart (reboot) option has to be enabled from the Advanced System
Manager Interface or from the Control (Operator) Panel. Figure 4-4 shows this option
using the ASMI.
Fault monitoring
Built-in self-test (BIST) checks processor, cache, memory, and associated hardware that
is required for proper booting of the operating system when the system is powered on at
the initial installation or after a hardware configuration change (for example, an upgrade).
If a non-critical error is detected or if the error occurs in a resource that can be removed
from the system configuration, the booting process is designed to proceed to completion.
The errors are logged in the system nonvolatile random access memory (NVRAM). When
the operating system completes booting, the information is passed from the NVRAM to the
system error log where it is analyzed by error log analysis (ELA) routines. Appropriate
actions are taken to report the boot-time error for subsequent service, if required.
Error checkers
IBM POWER processor-based systems contain specialized hardware detection circuitry that
is used to detect erroneous hardware operations. Error checking hardware ranges from parity
error detection coupled with processor instruction retry and bus retry, to ECC correction on
caches and system buses. All IBM hardware error checkers have distinct attributes:
Continuous monitoring of system operations to detect potential calculation errors.
Attempts to isolate physical faults based on runtime detection of each unique failure.
Ability to initiate a wide variety of recovery mechanisms designed to correct the problem.
The POWER processor-based systems include extensive hardware and firmware
recovery logic.
[Figure: error checkers throughout the system (CPU, L1, L2/L3 cache, memory, and disk) report detected errors to the service processor, which logs the error data in non-volatile RAM.]
Fault isolation
The service processor interprets error data that is captured by the FFDC checkers (saved in
the FIRs or other firmware-related data capture methods) to determine the root cause of the
error event.
Root cause analysis might indicate that the event is recoverable, meaning that a service
action point or need for repair has not been reached. Alternatively, it could indicate that a
service action point has been reached, where the event exceeded a predetermined threshold
or was unrecoverable. Based on the isolation analysis, recoverable error threshold counts
might be incremented. No specific service action is necessary when the event is recoverable.
When the event requires a service action, additional required information is collected to
service the fault. For unrecoverable errors or for recoverable events that meet or exceed their
service threshold (meaning that a service action point has been reached), a request for
service is initiated through an error logging component.
4.3.2 Diagnosing
Using the extensive network of advanced and complementary error detection logic that is built
directly into hardware, firmware, and operating systems, the IBM Power Systems servers can
perform considerable self-diagnosis.
Boot time
When an IBM Power Systems server powers up, the service processor initializes the system
hardware. Boot-time diagnostic testing uses a multi-tier approach for system validation,
starting with managed low-level diagnostics that are supplemented with system firmware
initialization and configuration of I/O hardware, followed by OS-initiated software test routines.
Boot-time diagnostic routines include:
Built-in self-tests (BISTs) for both logic components and arrays ensure the internal
integrity of components. Because the service processor assists in performing these tests,
the system is enabled to perform fault determination and isolation, whether or not the
system processors are operational. Boot-time BISTs can also find faults undetectable by
processor-based power-on self-test (POST) or diagnostics.
Wire-tests discover and precisely identify connection faults between components such as
processors, memory, or I/O hub chips.
Initialization of components such as ECC memory, typically by writing patterns of data and
allowing the server to store valid ECC data for each location, can help isolate errors.
To minimize boot time, the system determines which of the diagnostics are required to be
started in order to ensure correct operation, based on the way the system was powered off or
on the boot-time selection menu.
Run time
All Power Systems servers can monitor critical system components during run time, and they
can take corrective actions when recoverable faults occur. IBM hardware error-check
architecture provides the ability to report non-critical errors in an out-of-band communications
path to the service processor without affecting system performance.
A significant part of IBM runtime diagnostic capabilities originates with the service processor.
Extensive diagnostic and fault analysis routines have been developed and improved over
many generations of POWER processor-based servers, and enable quick and accurate
predefined responses to both actual and potential system problems.
The service processor correlates and processes runtime error information, using logic
derived from IBM engineering expertise to count recoverable errors (called thresholding) and
predict when corrective actions must be automatically initiated by the system. These actions
can include these:
Requests for a part to be replaced
Dynamic invocation of built-in redundancy for automatic replacement of a failing part
Dynamic deallocation of failing components so that system availability is maintained
Device drivers
In certain cases diagnostics are best performed by operating-system-specific drivers, most
notably I/O devices that are owned directly by a logical partition. In these cases, the operating
system device driver often works in conjunction with I/O device microcode to isolate and
recover from problems. Potential problems are reported to an operating system device driver,
which logs the error. I/O devices can also include specific exercisers that can be invoked by
the diagnostic facilities for problem recreation if required by service procedures.
4.3.3 Reporting
In the unlikely event that a system hardware or environmentally induced failure is diagnosed,
IBM Power Systems servers report the error through a number of mechanisms. The analysis
result is stored in system NVRAM. Error log analysis (ELA) can be used to display the failure
cause and the physical location of the failing hardware.
With the integrated service processor, the system has the ability to automatically send out an
alert through a phone line to a pager, or call for service in the event of a critical system failure.
A hardware fault also illuminates the amber system fault LED located on the system unit to
alert the user of an internal hardware problem.
On POWER7 processor-based servers, hardware and software failures are recorded in the
system log. When an HMC is attached, an ELA routine analyzes the error, forwards the event
to the Service Focal Point (SFP) application running on the HMC or SDMC, and has the
capability to notify the system administrator that it has isolated a likely cause of the system
problem. The service processor event log also records unrecoverable checkstop conditions,
forwards them to the Service Focal Point (SFP) application, and notifies the system
administrator. After the information is logged in the SFP application, if the system is properly
configured, a call-home service request is initiated and the pertinent failure data with service
parts information and part locations is sent to the IBM service organization. This information
will also contain the client contact information as defined in the Electronic Service Agent
(ESA) guided setup wizard.
Remote support
The Remote Management and Control (RMC) subsystem is delivered as part of the base
operating system, including the operating system running on the Hardware Management
Console. RMC provides a secure transport mechanism across the LAN interface between the
operating system and the Hardware Management Console and is used by the operating
system diagnostic application for transmitting error information. It performs a number of other
functions also, but these are not used for the service infrastructure.
When a local or globally reported service request is made to the operating system, the
operating system diagnostic subsystem uses the Remote Management and Control
Subsystem (RMC) to relay error information to the Hardware Management Console. For
global events (platform unrecoverable errors, for example) the service processor will also
forward error notification of these events to the Hardware Management Console, providing a
redundant error-reporting path in case of errors in the RMC network.
The first occurrence of each failure type is recorded in the Manage Serviceable Events task
on the HMC/SDMC. This task then filters and maintains a history of duplicate reports from
other logical partitions on the service processor. It then looks at all active service event
requests, analyzes the failure to ascertain the root cause, and, if enabled, initiates a call home
for service. This methodology ensures that all platform errors will be reported through at least
one functional path, ultimately resulting in a single notification for a single problem.
4.3.4 Notifying
After a Power Systems server has detected, diagnosed, and reported an error to an
appropriate aggregation point, it then takes steps to notify the client, and if necessary the IBM
support organization. Depending on the assessed severity of the error and support
agreement, this can range from a simple notification to having field service personnel
automatically dispatched to the client site with the correct replacement part.
Client Notify
When an event is important enough to report, but does not indicate the need for a repair
action or the need to call home to IBM service and support, it is classified as Client Notify.
Clients are notified because these events might be of interest to an administrator. The event
might be a symptom of an expected systemic change, such as a network reconfiguration or
failover testing of redundant power or cooling systems. These are examples of these events:
Network events such as the loss of contact over a local area network (LAN)
Environmental events such as ambient temperature warnings
Events that need further examination by the client (although these events do not
necessarily require a part replacement or repair action)
Client Notify events are serviceable events, by definition, because they indicate that
something has happened that requires client awareness in the event the client wants to take
further action. These events can always be reported back to IBM at the client's discretion.
Call home
A correctly configured POWER processor-based system can initiate an automatic or manual
call from a client location to the IBM service and support organization with error data, server
status, or other service-related information. The call-home feature invokes the service
organization in order for the appropriate service action to begin, automatically opening a
problem report, and in certain cases, also dispatching field support. This automated reporting
provides faster and potentially more accurate transmittal of error information. Although
configuring call-home is optional, clients are strongly encouraged to configure this feature to
obtain the full value of IBM service enhancements.
Light Path
The Light Path LED feature is for low-end systems, including Power Systems up to
models 750 and 755, that might be repaired by clients. In the Light Path LED implementation,
when a fault condition is detected on the POWER7 processor-based system, an amber FRU
fault LED is illuminated, which is then rolled up to the system fault LED. The Light Path
system pinpoints the exact part by turning on the amber FRU fault LED that is associated with
the part to be replaced.
The system can clearly identify components for replacement by using specific
component-level LEDs, and can also guide the servicer directly to the component by
signaling (staying on solid) the system fault LED, the enclosure fault LED, and the component
FRU fault LED.
After the repair, the LEDs shut off automatically if the problem is fixed.
Guiding Light
Midrange and high-end systems, including models 770 and 780 and later, are usually
repaired by IBM Support personnel.
The enclosure and system identify LEDs that turn on solid and that can be used to follow the
path from the system to the enclosure and down to the specific FRU.
Guiding Light uses a series of flashing LEDs, allowing a service provider to quickly
and easily identify the location of system components. Guiding Light can also handle
multiple error conditions simultaneously, which might be necessary in certain complex
high-end configurations. In these situations, Guiding Light waits for the servicer's indication
of which failure to attend to first and then illuminates the LEDs leading to the failing component.
Data centers can be complex places, and Guiding Light is designed to do more than identify
visible components. When a component might be hidden from view, Guiding Light can flash a
sequence of LEDs that extend to the frame exterior, clearly guiding the service representative
to the correct rack, system, enclosure, drawer, and component.
Service labels
Service providers use these labels to assist them in performing maintenance actions.
Service labels are found in various formats and positions and are intended to transmit
readily available information to the servicer during the repair process.
Several of these service labels and the purpose of each are described in the following list:
Location diagrams are strategically located on the system hardware, relating information
regarding the placement of hardware components. Location diagrams can include location
codes, drawings of physical locations, concurrent maintenance status, or other data that is
pertinent to a repair. Location diagrams are especially useful when multiple components
are installed, such as DIMMs, CPUs, processor books, fans, adapter cards, LEDs, and
power supplies.
Remove or replace procedure labels contain procedures often found on a cover of the
system or in other spots that are accessible to the servicer. These labels provide
systematic procedures, including diagrams, detailing how to remove and replace certain
serviceable hardware components.
Numbered arrows are used to indicate the order of operation and serviceability
direction of components. Various serviceable parts such as latches, levers, and touch
points must be pulled or pushed in a certain direction and certain order so that the
mechanical mechanisms can engage or disengage. Arrows generally improve the ease
of serviceability.
Concurrent maintenance
The IBM POWER7 processor-based systems are designed with the understanding that
certain components have higher intrinsic failure rates than others. The movement of fans,
power supplies, and physical storage devices naturally make them more susceptible to
wearing down or burning out. Other devices such as I/O adapters can begin to wear from
repeated plugging and unplugging. For these reasons, these devices have been specifically
designed to be concurrently maintainable when properly configured.
In other cases, a client might be in the process of moving or redesigning a data center, or
planning a major upgrade. At times like these, flexibility is crucial. The IBM POWER7
processor-based systems are designed for redundant or concurrently maintainable power,
fans, physical storage, and I/O towers.
The most recent members of the IBM Power Systems family, based on the POWER7
processor, continue to support concurrent maintenance of power, cooling, PCI adapters,
media devices, I/O drawers, GX adapter, and the operator panel. In addition, they support
concurrent firmware fix pack updates when possible. The determination of whether a
firmware fix pack release can be updated concurrently is identified in the readme file that is
released with the firmware.
Firmware updates
System Firmware is delivered as a Release Level or a Service Pack. Release Levels support
the general availability (GA) of new function or features, and new machine types or models.
the service action is always performed from the operating system of the partition owning
that resource.
Clients can subscribe through the subscription services to obtain notifications about the latest
updates available for service-related documentation. The latest version of the documentation
is accessible through the internet.
4.4 Manageability
Several functions and tools help manageability and enable you to efficiently and effectively
manage your system.
Service processor
The service processor is a controller that is running its own operating system. It is a
component of the service interface card.
The service processor operating system has specific programs and device drivers for the
service processor hardware. The host interface is a processor support interface that is
connected to the POWER processor. The service processor is always working, regardless of
the main system unit's state. The system unit can be in the following states:
Standby (power off)
Operating, ready-to-start partitions
Operating with running logical partitions
Functions
The service processor is used to monitor and manage the system hardware resources and
devices. The service processor checks the system for errors, ensuring the connection to the
HMC for manageability purposes and accepting ASMI Secure Sockets Layer (SSL) network
connections. The service processor provides the ability to view and manage the
machine-wide settings by using the ASMI, and enables complete system and partition
management from the HMC.
Note: The service processor enables a system that does not boot to be analyzed. The
error log analysis can be performed from either the ASMI or the HMC.
The service processor uses two Ethernet 10/100Mbps ports. Note the following information:
Both Ethernet ports are only visible to the service processor and can be used to attach the
server to an HMC or to access the ASMI. The ASMI options can be accessed through an
HTTP server that is integrated into the service processor operating environment.
Both Ethernet ports support only auto-negotiation. Customer selectable media speed and
duplex settings are not available.
Both Ethernet ports have a default IP address, as follows:
Service processor Eth0 or HMC1 port is configured as 169.254.2.147.
Service processor Eth1 or HMC2 port is configured as 169.254.3.147.
The functions available through service processor include:
Call Home
Advanced System Management Interface (ASMI)
Error Information (error code, PN, Location Codes) menu
View of guarded components
Limited repair procedures
Generate dump
LED Management menu
Remote view of ASMI menus
Firmware update through USB key
c. From the System Management tasks list, select Operations → Advanced System
Management (ASM).
Access the ASMI using a web browser.
The web interface to the ASMI is accessible by running Microsoft Internet Explorer 7.0,
Opera 9.24, or Mozilla Firefox 2.0.0.11 running on a PC or mobile computer that is
connected to the service processor. The web interface is available during all phases of
system operation, including the initial program load (IPL) and run time. However, a few of
the menu options in the web interface are unavailable during IPL or run time to prevent
usage or ownership conflicts if the system resources are in use during that phase. The
ASMI provides an SSL web connection to the service processor. To establish an SSL
connection, open your browser using this address:
https://<ip_address_of_service_processor>
Where <ip_address_of_service_processor> is the address of the service processor of
your Power Systems server, such as 9.166.196.7.
Tip: To make the connection through Internet Explorer, click Tools → Internet Options.
Clear the Use TLS 1.0 check box, and click OK.
Access the ASMI using an ASCII terminal.
The ASMI on an ASCII terminal supports a subset of the functions that are provided by the
web interface and is available only when the system is in the platform standby state. The
ASMI on an ASCII console is not available during various phases of system operation,
such as the IPL and run time.
Figure 4-6 Operator panel is pulled out from the chassis (release lever: slide left to release the operator panel and pull out from the chassis)
Error Information
Generate dump
View Machine Type, Model, and Serial Number
Limited set of repair functions
error log and the AIX configuration data. IBM i has a service tools problem log, IBM i history
log (QHST), and IBM i problem log.
The available modes are as follows:
Service mode
This requires a service mode boot of the system and enables the checking of system
devices and features. Service mode provides the most complete checkout of the system
resources. All system resources, except the SCSI adapter and the disk drives used for
paging, can be tested.
Concurrent mode
This enables the normal system functions to continue while selected resources are being
checked. Because the system is running in normal operation, certain devices might
require additional actions by the user or diagnostic application before testing can be done.
Maintenance mode
This enables the checking of most system resources. Maintenance mode provides the
same test coverage as service mode. The difference between the two modes is the way
that they are invoked. Maintenance mode requires that all activity on the operating system
be stopped. The shutdown -m command is used to stop all activity on the operating system
and put the operating system into maintenance mode.
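For example, on an AIX partition the operating system can be brought down to maintenance mode and the hardware diagnostic menus started with a sequence similar to the following (run as root):

# shutdown -m
# diag

The shutdown -m command stops activity and places the system in maintenance (single-user) mode; diag then starts the diagnostic menus.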
The System Management Services (SMS) error log is accessible on the SMS menus.
This error log contains errors that are found by partition firmware when the system or
partition is booting.
You can access the service processor's error log on the ASMI menus.
You can also access the system diagnostics from an AIX Network Installation Management
(NIM) server.
Note: When you order a Power System, a DVD-ROM or DVD-RAM might be optional. An
alternate method for maintaining and servicing the system must be available if you do not
order the DVD-ROM or DVD-RAM.
The IBM i operating system and associated machine code provide Dedicated Service Tools
(DST) as part of the IBM i licensed machine code (Licensed Internal Code) and System
Service Tools (SST) as part of the IBM i operating system. DST can be run in dedicated mode
(no operating system loaded). DST tools and diagnostics are a superset of those available
under SST.
The IBM i End Subsystem (ENDSBS *ALL) command can shut down all IBM and customer
application subsystems except the controlling subsystem QCTL. The Power Down System
(PWRDWNSYS) command can be set to power down the IBM i partition and restart the
partition in DST mode.
You can start SST during normal operations, which leaves all applications up and running,
using the IBM i Start Service Tools (STRSST) command (when signed onto IBM i with the
appropriately secured user ID).
With DST and SST, you can look at various logs, run various diagnostics, or take various
kinds of system dumps or other options.
Depending on the operating system, the service-level functions that you typically see when
using the operating system service menus are as follows:
Product activity log
Trace Licensed Internal Code
Work with communications trace
Display/Alter/Dump
Licensed Internal Code log
Main storage dump manager
Hardware service manager
Call Home/Customer Notification
Error information menu
LED management menu
Concurrent/Non-concurrent maintenance (within scope of the OS)
Managing firmware levels
Server
Adapter
Remote support (access varies by OS)
For access to the initial web pages that address this capability, see the Support for IBM
Systems web page:
http://www.ibm.com/systems/support
For Power Systems, select the Power link (Figure 4-7).
Although the content under the Popular links section can change, click the Firmware and
HMC updates link to go to the resources for keeping your systems firmware current.
If there is an HMC to manage the server, the HMC interface can be used to view the levels of
server firmware and power subsystem firmware that are installed and that are available to
download and install.
Each IBM Power Systems server has the following levels of server firmware and power
subsystem firmware:
Installed level
This level of server firmware or power subsystem firmware has been installed and will be
installed into memory after the managed system is powered off and then powered on. It is
installed on the temporary side of system firmware.
Activated level
This level of server firmware or power subsystem firmware is active and running
in memory.
Accepted level
This level is the backup level of server or power subsystem firmware. You can return to this
level of server or power subsystem firmware if you decide to remove the installed level. It is
installed on the permanent side of system firmware.
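If an AIX partition is running on the server, the firmware images on the temporary and permanent sides can also be displayed from the operating system. As one approach (output format varies by system), a command such as lsmcode reports the firmware levels, and its -c flag displays them without the interactive menus:

# lsmcode -c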
IBM provides the Concurrent Firmware Maintenance (CFM) function on selected Power
Systems. This function supports applying nondisruptive system firmware service packs to the
system concurrently (without requiring a reboot operation to activate changes). For systems
that are not managed by an HMC, the installation of system firmware is always disruptive.
The concurrent levels of system firmware can, on occasion, contain fixes that are known as
deferred. These deferred fixes can be installed concurrently but are not activated until the
next IPL. Deferred fixes, if any, will be identified in the Firmware Update Descriptions table of
the firmware document. For deferred fixes within a service pack, only the fixes in the service
pack that cannot be concurrently activated are deferred. Table 4-1 shows the file-naming
convention for system firmware.
Table 4-1 Firmware naming convention

PPNNSSS_FFF_DDD

PP    Package identifier        01, 02
NN    Platform designation      AL (Low End), AM (Mid Range), AS (Blade Server), AH (High End), AP, AB
SSS   Release indicator
FFF   Current fix pack
DDD   Last disruptive fix pack
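As a hypothetical illustration (this is not an actual released firmware level), a level named 01AL740_088_045 would decode as follows:

01  = package identifier
AL  = Low End platform
740 = release indicator
088 = current fix pack
045 = last disruptive fix pack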
[Tables: RAS feature support by operating system (AIX 5.3, AIX 6.1, AIX 7.1, IBM i, RHEL 5.7, RHEL 6.1, SLES 11 SP1), covering system deallocation of failing components, memory availability (hardware scrubbing, CRC, Chipkill), runtime diagnostics, serviceability (dynamic trace, wire tests, component initialization, inventory collection, Light Path LEDs), and redundant HMCs. Footnote: Electronic Service Agent via a managed HMC will report platform-level information but not Linux operating system detected errors.]
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
IBM BladeCenter PS700, PS701, and PS702 Technical Overview and Introduction,
REDP-4655
IBM BladeCenter PS703 and PS704 Technical Overview and Introduction, REDP-4744
IBM Power 710 and 730 (8231-E1C, 8231-E2C) Technical Overview and Introduction,
REDP-4796
IBM Power 750 and 755 (8233-E8B, 8236-E8C) Technical Overview and Introduction,
REDP-4638
IBM Power 770 and 780 (9117-MMC, 9179-MHC) Technical Overview and Introduction,
REDP-4798
IBM Power 795 (9119-FHB) Technical Overview and Introduction, REDP-4640
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
IBM PowerVM Live Partition Mobility, SG24-7460
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices, REDP-4194
PowerVM Migration from Physical to Virtual Storage, SG24-7825
IBM System Storage DS8000: Copy Services in Open Environments, SG24-6788
IBM System Storage DS8700 Architecture and Implementation, SG24-8786
PowerVM and SAN Copy Services, REDP-4610
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
IBM Power Systems Facts and Features POWER7 Blades and Servers
http://www.ibm.com/systems/power/hardware/reports/factsfeatures.html
Specific storage devices supported for Virtual I/O Server
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
IBM Power 710 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03048usen/POD03048USEN.PDF
IBM Power 720 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03048usen/POD03048USEN.PDF
IBM Power 730 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03050usen/POD03050USEN.PDF
IBM Power 740 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03051usen/POD03051USEN.PDF
IBM Power 750 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03034usen/POD03034USEN.PDF
IBM Power 755 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03035usen/POD03035USEN.PDF
IBM Power 770 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03035usen/POD03035USEN.PDF
IBM Power 780 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03032usen/POD03032USEN.PDF
IBM Power 795 server Data Sheet
http://public.dhe.ibm.com/common/ssi/ecm/en/pod03053usen/POD03053USEN.PDF
Active Memory Expansion: Overview and Usage Guide
http://public.dhe.ibm.com/common/ssi/ecm/en/pow03037usen/POW03037USEN.PDF
Migration combinations of processor compatibility modes for active Partition Mobility
http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/topic/p7hc3/iphc3pcmcombosact.htm
Advance Toolchain for Linux website
http://www.ibm.com/developerworks/wikis/display/hpccentral/How+to+use+Advance+Toolchain+for+Linux+on+POWER
Online resources
These websites are also relevant as further information sources:
IBM Power Systems Hardware Information Center
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp
IBM System Planning Tool website
http://www.ibm.com/systems/support/tools/systemplanningtool/
IBM Fix Central website
http://www.ibm.com/support/fixcentral/
Power Systems Capacity on Demand website
http://www.ibm.com/systems/power/hardware/cod/
Support for IBM Systems website
http://www.ibm.com/support/entry/portal/Overview?brandind=Hardware~Systems~Power
IBM Power Systems website
http://www.ibm.com/systems/power/
IBM Storage website
http://www.ibm.com/systems/storage/
Back cover

Redpaper

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.