Professional Documents
Culture Documents
Technical Documentation
X6 Implementation Guide
1.9.96-13
Technical Documentation
X6 Implementation Guide
1.9.96-13
II
Technical Documentation
Lenovo, the Lenovo logo, System x and For Those Who Do are trademarks or registered trademarks
of Lenovo in the United States, other countries, or both. Other product and service names might be
trademarks of Lenovo or other companies.
A current list of Lenovo trademarks is available on the web at:
http://www.lenovo.com/legal/copytrade.html.
IBM, the IBM logo, and ibm.com are trademarks of International Business Machines Corp., registered in
the United States and/or other countries.
Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in
the United States and/or other countries.
Fusion-io is a registered trademark of Fusion-io, in the United States.
Intel, Intel Xeon, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or
its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SAP HANA is a trademark of SAP Corporation in the United States, other countries, or both.
Other company, product or service names may be trademarks or service marks of others.
X6 Implementation Guide
1.9.96-13
III
Technical Documentation
Contents
1 Abstract
1.1 Preface & Scope .
1.2 Acknowledgements
1.3 Feedback . . . . .
1.4 Disclaimer . . . . .
1.5 Support . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1
2
2
3
3
3
. . . . .
. . . . .
Versions
. . . . .
. . . . .
. . . . .
. . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
6
6
6
6
6
6
6
7
3 Solution Overview
3.1 The SAP HANA Appliance Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3.2 Definition of SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7
7
7
4 Hardware Configurations
4.1 SAP HANA Platform Edition T-Shirt Sizes . . . . . . . . . . . . . . . . . . . . . . . . . .
4.2 Single Node versus Clustered Configuration . . . . . . . . . . . . . . . . . . . . . . . . . .
4.2.1 Network Switch Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.3 SAP HANA Optimized Hardware Configurations . . . . . . . . . . . . . . . . . . . . . . .
4.3.1 System x3850 X6 Single Node Configurations . . . . . . . . . . . . . . . . . . . . .
4.3.2 System x3950 X6 Single Node Configurations . . . . . . . . . . . . . . . . . . . . .
4.3.3 System x3850 X6 Single Node Four Socket Configurations with Storage Expansion
4.3.4 System x3950 X6 SAP ERP on SAP HANA Single Node Configurations . . . . . .
4.3.5 System x3850 X6 Cluster Node Configurations with Storage Expansion . . . . . .
4.3.6 System x3950 X6 Cluster Node Configurations . . . . . . . . . . . . . . . . . . . .
4.4 Card Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.4.1 Network Interface Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
4.4.2 Slots for additional Network Interface Cards . . . . . . . . . . . . . . . . . . . . . .
4.4.3 RAID Adapter Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9
10
10
11
13
13
13
14
14
14
15
15
15
15
16
5 Networking
5.1 Networking Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.2 Jumbo Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.3 Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.4 Network Switch Configuration For Clustered Installations . . . . . . . . . . . . . . . . . .
5.5 Customer Site Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.6 Network Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.6.1 Numbering conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.6.2 Internal Networks Option 1 G8264 RackSwitch 10Gbit . . . . . . . . . . . . . . .
5.6.3 Internal Networks Option 2 G8124 RackSwitch 10Gbit . . . . . . . . . . . . . . .
5.6.4 Internal Networks Option 3 G8272 RackSwitch 10Gbit . . . . . . . . . . . . . . .
5.6.5 Internal Networks Option 4 G8296 RackSwitch 10Gbit . . . . . . . . . . . . . . .
5.6.6 Administrative, SAP-Access and Backup Networks Option G8052 RackSwitch
1Gbit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
5.6.7 Network Configurations in a Clustered Environment . . . . . . . . . . . . . . . . .
5.7 Setting up the Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
21
21
21
22
23
24
24
24
24
26
27
28
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
2 Introduction
2.1 Purpose . . . . . . . . . . . . . . . .
2.2 Applicability . . . . . . . . . . . . .
2.2.1 SAP HANA Platform Edition
2.3 Exclusions and Exceptions . . . . . .
2.4 Conventions . . . . . . . . . . . . . .
2.4.1 Icons Used . . . . . . . . . .
2.4.2 Code Snippets . . . . . . . .
X6 Implementation Guide
1.9.96-13
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
29
30
31
IV
Technical Documentation
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
31
32
33
33
33
33
33
33
36
36
36
37
38
38
39
39
39
40
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
41
42
42
42
43
45
45
48
48
51
52
53
58
60
61
62
62
62
62
65
7 After Installation
7.1 Actions to insure the correctness of the installation . . . . . . . . . . . . . . . . . . . . . .
7.2 HANA Network Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
66
66
67
8 Disaster Recovery
8.1 Architecture . . . . . . . . . . . . . . . . . . . . . .
8.1.1 Terminology . . . . . . . . . . . . . . . . .
8.1.2 Architectural overview . . . . . . . . . . . .
8.1.3 Three site/Tiebreaker node architecture . .
8.2 Mixing eX5/X6 Server in a DR Cluster . . . . . .
8.3 Hardware Setup . . . . . . . . . . . . . . . . . . . .
8.3.1 Site A and B . . . . . . . . . . . . . . . . .
8.3.2 Tiebreaker Site C (optional) . . . . . . . . .
8.3.3 Acquire TCP/IP addresses and host names
68
68
68
69
71
71
71
71
71
72
5.8
5.9
X6 Implementation Guide
1.9.96-13
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Technical Documentation
8.4
8.5
8.6
8.7
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
72
72
73
73
73
74
76
77
78
79
81
81
83
83
83
83
85
86
86
87
87
88
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
91
91
91
91
92
94
94
97
97
97
98
99
99
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
103
103
104
105
106
107
107
108
108
110
110
110
110
111
111
112
113
X6 Implementation Guide
1.9.96-13
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
Replacement
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
Replacement
. . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
VI
Technical Documentation
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
128
128
128
129
129
129
133
133
135
135
136
136
137
137
138
150
150
151
152
152
152
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
154
154
154
156
156
156
157
158
158
158
158
159
160
161
13 Software Updates
162
13.1 Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
X6 Implementation Guide
1.9.96-13
VII
Technical Documentation
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
4.1
. .
. .
. .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
162
162
164
164
164
165
165
166
166
167
168
171
172
174
176
177
177
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
178
178
178
178
179
179
179
180
180
181
181
181
181
182
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
183
183
183
186
187
187
187
187
188
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
189
189
190
191
191
192
193
195
195
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
X6 Implementation Guide
1.9.96-13
197
VIII
Technical Documentation
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
204
204
204
204
205
205
205
205
Appendices
207
207
207
C Quotas
209
C.1 Quota Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
C.2 Quota Calculation Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
D Performance Settings
211
214
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
216
216
216
216
217
217
220
220
221
221
222
222
223
223
224
225
225
G References
G.1 Lenovo References . . . . . . . . . . . . . . . . . . . . . . . . . .
G.2 IBM References . . . . . . . . . . . . . . . . . . . . . . . . . . . .
G.3 SAP General Help (SAP Service Marketplace ID required) . . . .
G.4 SAP Notes (SAP Service Marketplace ID required) . . . . . . . .
G.5 Novell SUSE Linux Enterprise Server References . . . . . . . . .
G.6 Red Hat Enterprise Linux References (Red Hat account required)
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
226
226
226
226
227
228
228
H Changelog
X6 Implementation Guide
1.9.96-13
229
IX
Technical Documentation
List of Figures
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
X6 Implementation Guide
1.9.96-13
Technical Documentation
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
login to ESXi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
configure management network . . . . . . . . . . . . . . . . . . . . . . . . . . .
display network adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
display network adapters 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IP configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Set IP,NETMASK,GW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Set DNS and Hostname . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Set DNS suffix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to
device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
ESXi5.5 Storage Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
ESXi 5.1 WEB Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Create new virtual machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose custom configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose a name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose disk storage for VM files . . . . . . . . . . . . . . . . . . . . . . . . . .
Newest virtual machine hardware version . . . . . . . . . . . . . . . . . . . . .
Configure the use of more than 32 CPUs . . . . . . . . . . . . . . . . . . . . . .
Choose Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose number of CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose Network Cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose SCSI controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Create new HANA datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose datastore size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Choose SCSI Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Add a new CD/DVD device . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Select ISO image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Select IDE device 0:0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Finish creation of SLES ISO mount . . . . . . . . . . . . . . . . . . . . . . . . .
Upgrade virtual hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Confirm upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Upgrade virtual hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Changing the autoyast parameter for installation . . . . . . . . . . . . . . . . .
Adding kickstart parameter for install . . . . . . . . . . . . . . . . . . . . . . .
Overview of Backup/Restore Operations . . . . . . . . . . . . . . . . . . . . . .
Sample GRUB boot loader screen . . . . . . . . . . . . . . . . . . . . . . . . . .
X6 Implementation Guide
1.9.96-13
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
the USB
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
. . . . . .
129
130
130
131
131
132
132
133
135
137
138
139
139
140
140
141
141
142
142
143
143
144
144
145
145
146
146
147
147
148
148
149
149
150
151
190
196
XI
Technical Documentation
List of Tables
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
X6 Implementation Guide
1.9.96-13
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
12
13
13
13
14
14
14
15
16
16
17
19
19
22
23
24
25
27
28
29
30
41
42
43
44
46
47
47
47
48
49
50
51
52
72
72
76
91
93
95
98
100
101
105
106
106
107
128
155
160
160
166
XII
Technical Documentation
53
54
55
56
57
58
59
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
168
172
200
208
214
215
222
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
183
183
184
191
192
192
192
192
193
193
193
194
195
195
196
196
196
List of Listings
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
X6 Implementation Guide
1.9.96-13
XIII
Technical Documentation
List of Abbreviations
ASU
BIOS
DR
DT
SAP Dynamic Tiering (not to be confused with Disaster Recovery (DR), previously
Disaster Tolerance (DT))
ELILO
IBM GPFS
GRUB
GSS
IMM
LILO
Linux Loader
MTM
NIC
OLAP
OLTP
OS
Operating System
RHEL
SAP HANA
SLES
SLES for SAP SUSE Linux Enterprise Server for SAP Applications
UEFI
UUID
VLAG
VLAN
X6 Implementation Guide
1.9.96-13
XIV
Technical Documentation
Abstract
This document provides general information specific to the Lenovo Systems Solution for SAP HANA
Platform Edition (short: Lenovo Solution). This document assumes that the reader understands the
basic structure and components of the SAP HANA Platform Edition (SAP HANA) software, that he
has a solid understanding of Linux administration processes, and that he has been instructed how to
install the SAP HANA1 software on Lenovo Systems hardware.
Lenovo Solution is built with Lenovo Systems hardware based on Intel Xeon Architecture as building
blocks for a scale-up or scale-out SAP HANA system. These provide a highly-scalable infrastructure for
SAP HANA. The Lenovo Systems servers with local storage and Lenovo Systems Networking switches
will be used to run SAP HANA.
Lenovo has created orderable models upon which you may install and run the SAP HANA according to
the sizing charts coordinated with SAP AG. For each workload type, special ordering options for the
Lenovo System servers, storage and switches have been approved by SAP and Lenovo to accommodate
the requirements for the SAP HANA.
Attention
IMPORTANT: Please do not attempt to install a system without having been instructed
about the content of this document.
Note
It is considered best practice to create backups before and recover the SAP HANA system
after a major failure instead of relying on a fresh install with the help of this document. For
details on Backup and Recovery please refer to the Lenovo Solution Backup & Restore Guide
as well as the Lenovo Solution Hardware, Operating System & GPFS Operations Guide (SAP
Note 1650046).
Copyright 2014-2015 Lenovo. All Rights Reserved.
Neither this documentation nor any part of it may be copied or reproduced in any form or by any means
or translated into another language, without the prior consent of Lenovo.
Lenovo makes no warranties or representations with respect to the content hereof and specifically disclaims
any implied warranties of merchantability or fitness for any particular purpose. Lenovo assumes no
responsibility for any errors that may appear in this document. The information contained in this
document is subject to change without any notice. Lenovo reserves the right to make any such changes
without obligation to notify any person of such revision or changes. Lenovo makes no commitment to
keep the information contained herein up to date.
Edition Notice: 3rd July 2015
This is the published edition of this document. The online copy is the master.
1 SAP
X6 Implementation Guide
1.9.96-13
Technical Documentation
1.1
The objective of this paper is to document the installation and configuration of the SAP HANA Platform
Edition (SAP HANA) on System x hardware using a managed set up rather than manually installing
each node from scratch. The major products installed here are SAP HANA, IBM General Parallel File
System (IBM GPFS) and the operating systems SUSE Linux Enterprise Server for SAP Applications
(SLES for SAP), or Red Hat Enterprise Linux (RHEL).
For instructions how to administrate SAP HANA Platform Edition (SAP HANA) please refer to the
SAP HANA Technical Operations Manual2 . Instructions how to administrate and maintain the other
components delivered with the System x solution can be found in the SAP Note 1650046 Lenovo Systems
Solution Hardware, Operating System & GPFS Operations Guide. The Lenovo System x solution for
SAP HANA Quick Start Guide provides an overview of the complete solution and instructions how to
find service and support for your Lenovo Solution.
1.2
Acknowledgements
X6 Implementation Guide
1.9.96-13
Technical Documentation
1.3
Feedback
We are interested in your comments and feedback. Please send it to sapsolutions@lenovo.com. The full
guidebook can be downloaded, depending on its version, from following community (SAP HANA Support
Document section) SAP Solutions at Lenovo Community.
1.4
Disclaimer
This document is subject to change without notification and will not cover the issues encountered in
every customer situation. It should be used only in conjunction with the official product literature. The
information contained in this document has not been submitted to any formal test and is distributed AS
IS.
All statements regarding Lenovo future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized
reseller for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in Lenovo product announcements.
The information is presented here to communicate Lenovos current investment and development activities
as a good faith effort to help with our customers future planning.
This document is for educated service personnel only. If you are not familiar with the described system,
we will ask you to restrain from trying to apply what is described herein you could void the preloaded
system installation and void the SAP certified configuration. This will void the warranty and support of
said machine. Please contact the sapsolutions@lenovo.com to get enrolled for education prior to installing
an Lenovo Solution appliance.
In case of issues with the SAP HANA appliance, the customer is asked to open a SAP Help Desk request
(OSS ticket) first and foremost. Only by following this path, can we ensure the proper configuration of
the Lenovo Solution. If the customer would open an Lenovo support ticket for the system, he might be
requested to perform system upgrades to firmware or software to the latest available levels which might
not be supported with the SAP HANA appliance. If identified as a hardware or file system issue, the
ticket will be forwarded to the Lenovo support team and handled appropriately. Although this may be
contrary to standard Lenovo Support processes, it is the approved and accepted support process for all
SAP Appliances including the SAP HANA appliance.
1.5
Support
The System x SAP HANA development team provides new images for the SAP HANA appliance at regular intervals. These images have dependencies regarding the hardware, operating systems, and hardware
drivers. The use of the latest image for maintenance and installation of SAP HANA appliance is highly
recommended.
Whenever the firmware level recommendations (fixes known firmware issues) for the Lenovo components
of the SAP HANA appliance are given by the individual System x support representatives, it is the
customers responsibility to upgrade (or downgrade) to the recommended levels as instructed by System
x support representatives. A list of the minimally required versions can be found in SAP Note 1880960
Lenovo Systems Solution for SAP HANA Platform Edition FW/OS/Driver Maintenance.
X6 Implementation Guide
1.9.96-13
Technical Documentation
Whenever the operating systems recommendations (fixes known operating systems issues) for the SUSE
Linux components of the SAP HANA appliance are given by the SAP, SUSE, or IBM/Lenovo support
representatives, it is the customers responsibility to upgrade (or downgrade) to the recommended levels
as instructed by SAP through an explicit SAP Note or a Customer OSS Message. SAP describes their
operational concept, including updating of the operating system components in SAP Note 1599888 SAP
HANA: Operational Concept. If the Linux kernel is updated, you have to recompile IBM GPFS software
as well.
Whenever other hardware or software recommendations (that fix known issues) for components of the
SAP HANA appliance are given by the individual Lenovo support representatives, it is the customers
responsibility to upgrade (or to downgrade) to the recommended levels as instructed by Lenovo support
representatives.
If software and documentation updates are available, you can download them from the respective Lenovo,
IBM, SUSE or SAP website. To check for updates, go to the following websites. Follow the procedure in
the included documentation to update the software.
Firmware and drivers for System X6 Servers
You can obtain updates for System x3850/x3950 X6 servers on the IBM support website (Fix
Central) at http://www.ibm.com/support/fixcentral using the the Find product tab.
IBM General Parallel File System (IBM GPFS3 ) and IBM Spectrum Scale updates
You can obtain updates for GPFS on the IBM support website for GPFS 3.5.0, GPFS 4.1.0
and IBM Spectrum Scale/GPFS 4.1.1
SUSE Linux Enterprise Server for SAP Applications 11 SP3
You can download the installation package from the SUSE website at http://download.
novell.com/Download?buildid=XL0RqEykZpc~
SUSE Linux patches and updates
You can obtain the latest code updates for SUSE from the SUSE website at http://download.
novell.com/patch/finder/
Red Hat Enterprise Linux 6.5 and 6.6
You can download the installation package from the Red Hat website at http://www.redhat.
com/en/technologies/linux-platforms/enterprise-linux
VMware ESX Server patches and updates
You can obtain the latest code updates for vSphere ESX server from the VMware website at
http://www.vmware.com/support/
SAP HANA appliance updates
You can obtain the latest code updates from SAP at the SAP Service Marketplace at http:
//service.sap.com/swdc
Lenovo recommends that customers follow the software upgrade recommendations set out by SAP in the
SAP HANA Technical Operations Manual4 (TOM). It is important to understand that the corrections
listed in this note are those known to be a solution to a definite problem when running SAP HANA
appliance on the System x solutions. This knowledge was derived from internal testing, or customers who
ran into a specific problem. In parallel, the organizations owning the individual products provide a lot
more fixes that are unknown to the Lenovo-SAP team, yet are recommend to be applied, nevertheless. In
particular, there are fixes that IBM/Lenovo recommends to install that are not listed here. It is expected
3 IBM
4 http://help.sap.com/hana/SAP_HANA_Technical_Operations_Manual_en.pdf
X6 Implementation Guide
1.9.96-13
Technical Documentation
that you contact your IBM/Lenovo service contact to get a list of those fixes as well as a reasonably
current service level in general.
X6 Implementation Guide
1.9.96-13
Technical Documentation
Introduction
2.1
Purpose
This document is intended to provide a single point of reference for techniques and product behaviors
when dealing with SAP HANA.
2.2
Applicability
The techniques and product behaviours outlined in this document apply to:
SAP HANA appliance Platform Edition v1.0
SLES for SAP5 11 SP3
RHEL6 6.5 and 6.6
IBM GPFS 3.5 and 4.1
Lenovo Systems solution for SAP HANA appliance based on the:
System x3850/x3950 X6 Workload Optimized Server
2.2.1
In this document, we reference to several different versions of the Lenovo Solution guided installation
software. The following numbering refers to the corresponding SAP HANA Platform Edition version.
1.7.x SAP HANA Platform Edition v 1.0 SPS07 - First release on IBM/Lenovo Systems X6 hardware
1.8.x SAP HANA Platform Edition v 1.0 SPS08
1.9.x SAP HANA Platform Edition v 1.0 SPS09
2.3
The techniques and product behaviours outlined in this document may not be applicable to future releases.
2.4
Conventions
This guide uses several conventions to improve the readers experience and the ease of understanding.
2.4.1
Icons Used
The following information boxes indicate important information you should follow according to the level
of importance.
Attention
ATTENTION pay close attention to the instructions given
5 SUSE
6 Red
X6 Implementation Guide
1.9.96-13
Technical Documentation
Warning
WARNING this is something to take into consideration
Note
INFORMATION extra information describing in detail
2.4.2
Code Snippets
When reading code snippets you have to note the following: Lines of code that are too long to be shown
in one line will be automatically broken. This line break is indicated by an arrow at the end of the first
and an arrow at the start of the second line:
1
This is a code snippet that is too long to be printed in one single line, therefore ,you will see an automatic line break.
There are also line numbers at the left side of each code snippet to improve the readability.
Code examples that contain commands that have to be executed on a command line follow these rules:
Lines beginning with a # indicate commands to be executed by the root user.
Lines beginning with a $ indicate commands to be executed by an arbitrary user.
Solution Overview
This document provides general information specific to the Lenovo Solution. This document assumes
that the reader understands the basic structure and components of the SAP HANA Platform Edition.
SAP HANA should be installed on hardware that has been specifically certified for SAP HANA by SAP.
This hardware may not be configured from individual parts, rather it is to be ordered and delivered as a
single unit using an Lenovo manufacturer type/model number specified later.
3.1
The Lenovo Solution is based on building blocks that provide a highly scalable infrastructure for SAP HANA
based on the System x architecture: x3850/x3950 X6 as well as software, such as IBM GPFS, that will
be used to run SAP HANA.
Lenovo has created several system models upon which you may install and run SAP HANA according to
the sizing charts coordinated with SAP. For each workload type a special System x type/model has been
approved by SAP and Lenovo to accommodate the requirements for the SAP HANA Platform Edition.
3.2
The following picture defines the current SAP HANA scenarios that can be leveraged through the System
x solution for the SAP HANA Platform Edition.
X6 Implementation Guide
1.9.96-13
Technical Documentation
SAP
HANA
Local BI
SAP ERP
(CRM
SRM,SCM)
Data Mart
SAP HANA
1.0
SAP HANA DB
Appliance
1.0 SPS 05
SAP ERP n
(CRM,
SRM,SCM)
SAP
HANA
Customer
Application
Data Mart
SAP HANA DB
Appliance
1.0 SPS 05
SAP HANA
1.0
SAP
HANA
Data Mart
SAP HANA DB
Appliance
1.0 SPS 05
SAP HANA
1.0
X6 Implementation Guide
1.9.96-13
Technical Documentation
Hardware Configurations
The System X6 Workload Optimized servers for SAP HANA are based upon two building blocks that
can be used to fulfill the hardware requirements for SAP HANA. The SAP HANA appliance software
must be installed only on a certified and tested hardware configuration based on one of these two models.
Lenovo provides a model/type number for four (4) socket and eight (8) socket systems that are to be
setup for each certified model by SAP. A customer needs only to choose the model and the extra options
to fulfill their requirements. Models created manually will neither be supported by Lenovo nor SAP due
to the high-performance criteria set out by SAP during certification.
X6 Implementation Guide
1.9.96-13
Technical Documentation
Internal Storage:
121.2TB 2.5" HDD for RAID1 and RAID5
4400GB SSD for LSI CacheCade
One (1) External Storage (EXP2524) for systems 3TB (stand-alone configurations) or > 1024GB
(cluster configurations)
2 Dual-Port 10GbE NICs
1 Quad-Port 1GigE NICs
IBM General Parallel File System
Certified for SLES for SAP OS and SAP HANA appliance software
4.1
Lenovo and SAP have certified a set of configurations to be used with the SAP HANA Platform Edition
that are based on the Intel Xeon IvyBridge EX E7-4880v2, E7-4890v2, E7-8880v2, E7-8890v2 or Intel
Xeon Haswell EX E7-8880v3, E78880Lv3, E7-8890v3 processor family.
4.2
Clients
(Prod)
Server 1
(Production)
Production)
SAP ERP
Clients
(Test)
Server 2
(Test)
SAP ERP
Clients
(Dev)
Server 3
(Development)
SAP HANA
database
SAP HANA
database
SAP HANA
database
GPFS
GPFS
GPFS
Internal
Internal
storage
storage
Internal
storage
Internal
storage
X6 Implementation Guide
1.9.96-13
10
Technical Documentation
2. As a clustered configuration with a distributed HANA instance across servers. All server (nodes)
form one HANA cluster. All servers (nodes) form one GFPS cluster. These should be installed as
clustered servers.
Clients
SAP BW
SAP ERP
SAP
HANA
Cluster
Server 1
Server 2
Server 3
SAP HANA
Database
SAP HANA
Database
SAP HANA
Database
Master node
Worker node
GPFS
Primary
node
GPFS
Secondary
node
Additional node
Internal Storage
Internal Storage
Internal Storage
SAP HANA
Data&Log
SAP HANA
Data&Log
SAP HANA
Data&Log
GPFS
Standby node
Backup/Recovery
SAN Storage
SAN
storage
GPFS
SAN
storage
SAN
storage
SAN
storage
Cluster
For clustered configurations, extra hardware such as network switches and adapters need to be purchased in addition to the clustered appliances. Currently, the supported network switches for the Lenovo
Workload Optimized server in a clustered configuration are:
X6 Implementation Guide
1.9.96-13
11
Technical Documentation
Network
10Gb Ethernet
1Gb Ethernet
Description
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
RackSwitch
G8296 (Rear-to-Front)
G8296 (Front-to-Rear)
G8272 (Rear-to-Front)
G8272 (Front-to-Rear)
G8264 (Rear-to-Front)
G8264 (Front-to-Rear)
G8124E (Rear-to-Front)
G8124E (Front-to-Rear)
G8052 (Rear-to-Front)
G8052 (Front-to-Rear)
Part Number
7159GR6
7159GF5
7159CRW
7159CFV
7159G64
715964F
7159BR6
7159BF7
7159G52
715952F
X6 Implementation Guide
1.9.96-13
12
Technical Documentation
4.3
SEO models exist for certain configurations, please see the E: Lenovo X6 Server MTM List & Model
Overview on page 214 for more details.
4.3.1
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
256
512
256
512
x3850 X6
6241AC3
2 Intel Xeon E7-8880v2/v3
4 Intel Xeon E7-8880v2/v3
256GB
384GB
512GB
256GB
512GB
61.2TB HDD 2400GB SSD
1 M5210
3.6 TB RAID5 for SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
128GB
Network
384
1024
1536
2048
x3950 X6
6241AC4
4 Intel Xeon E7-8880v2/v3
256GB
512GB
768GB
1024GB
1536GB
2048GB
61.2TB HDD 2400GB SSD
121.2TB HDD 4400GB SSD
1 M5210
2 M5210
3.6 TB RAID5 for
9.6 TB RAID5 for
SAP HANA data/log
SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
512
768
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
512
1024
1536
2048
x3950 X6
6241AC4
8 Intel Xeon E7-8880v2/v3
512GB
1TB
1.5TB
2TB
61.2TB HDD 2400GB SSD
121.2TB HDD 4400GB SSD
1 M5210
2 M5210
3.6 TB RAID5 for SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
X6 Implementation Guide
1.9.96-13
13
Technical Documentation
4.3.3
System x3850 X6 Single Node Four Socket Configurations with Storage Expansion
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
768
768GB
1024
1536*
2048*
x3850 X6
6241AC3
4 Intel Xeon E7-8880v2/v3
1TB
1.5TB
2TB
151.2TB HDD & 4400GB SSD
1 M5210 & 1 M5120/M5225
13.2 TB RAID5 for SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
Table 5: System x3950 X6 Single Node Four Socket Configurations with Storage Expansion
* For Suite on HANA only, not Datamart and BW
4.3.4
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
3TB
4TB
6TB
x3950 X6
6241AC4
8 Intel Xeon E7-8880v2/v3
3TB
4TB
6TB
211.2TB HDD & 6400GB SSD
301.2TB HDD & 8400GB SSD
2 M5210 & 1 M5120/M5225
19.2 TB RAID5 for SAP HANA data/log 28.8 TB RAID5 for SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
Table 6: System x3950 X6 SAP ERP on SAP HANA Single Node Configurations
4.3.5
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
256
512
1024
x3850 X6
6241AC3
2 Intel Xeon E7-8880v2
4 Intel Xeon E7-8880v2/v3
256GB
512GB
1TB
151.2TB HDD & 4400GB SSD
1 M5210 & 1 M5120/M5225
13.2 TB RAID5 for SAP HANA data/log
2 Dual-Port 10GbE
1 Quad-Port 1GigE
X6 Implementation Guide
1.9.96-13
14
Technical Documentation
4.3.6
SAP Models
Product
Type/Model
CPU
Memory
Disk
Controller
Disk Layout
Network
512
1024
1024
x3950 X6
6241AC4
2048
4.4
Card Placement
Attention
You need to make sure, that the cards are placed in the correct PCI slot. Please refer to the
tables below for the assignment regarding in which slot a certain card should be. This step
must be done before the installation. Please be aware, that only with the correct card layout
your machine is supported by Lenovo.
Depending on having two, four or eight socket machines, there is a different card placement. Please refer
to figure 5 and table 10 two socket machines, figure 6 and table 11 on page 17 regarding four socket
machines and figure 8 and table 12 on page 19. Concerning the numbering of the slots please note that
PCI slots 11 and 12 are located in the Storage Book, see figure 7. A x3950 X6 machine has an additional
Storage Book containing PCI slots 43 and 44. The Storage Books are accessible from the front.
4.4.1
The x3850 X6 machine comes with two Mellanox Connect X-3 10GbE adapters that provide two 10GbE
ports or two Mellanox ConnectX-3 FDR IB VPI adapters that provide two QSFP ports. With QSA
adapters the QSFP ports support SFP+ transceivers for 10GbE connectivity. A quad port Intel I-350
provides four 1GbE ports and is placed in slot 10. In a x3950 X6 an additional I-350 card can be placed
in slot 42. Intel I-340 PCI cards is available optionally, if more 1GbE ports are needed.
Please see the tables and figures below regarding the assignment regarding in which slot a certain card
should be, depending on your machine type and configuration.
4.4.2
If the customer needs more network ports, the PCI slots shown in table 9: Slots which may be used for
additional NICs on page 16 may be used for additional NICs.
X6 Implementation Guide
1.9.96-13
15
Technical Documentation
Machine
x3850 X6 two sockets
x3850 X6 four sockets
x3950 X6 four sockets
x3950 X6 eight sockets
PCI Slots
9, 10
2, 3, 5, 6, 10
9, 10, 41, 42
5, 6, 10, 37, 38, 42
4.4.3
The internal RAID adapter is a ServeRAID M5210 which resides in slot 12 in the Storage Book. Regarding
the x3950 X6, there are two internal RAID adapter used, residing in slot 12 and 44.
The first external RAID adapter (ServeRAID M5120 or M5225) in a x3850 X6 will be placed in slot 8,
the second in slot 7 and then slot 9 for the third. Regarding a x3950 X6 machine, placement will start
in slot 40, then 39, then 41 and finally 7 and 8, refer to table 13 for details.
Card
Port Label
Slot
E
F
G
H
A
B
C
D
12
10
8
7
Ethernet
Device
eth4
eth5
eth6
eth7
eth8
eth9
eth10
eth11
eth0
eth1
eth2
eth3
X6 Implementation Guide
1.9.96-13
16
Technical Documentation
Card
Port Label
Slot
E
F
G
H
C
D
A
B
12
10
9
8
7
5
4
1
Ethernet
Device
eth4
eth5
eth6
eth7
eth8
eth9
eth10
eth11
eth2
eth3
eth0
eth1
X6 Implementation Guide
1.9.96-13
17
Technical Documentation
Figure 7: Workload Optimized System Storage Book. This contains slots 11, 12 and slots 43, 44 on x3950
X6 in an additional Storage Book
X6 Implementation Guide
1.9.96-13
18
Technical Documentation
Card
Port Label
E
F
G
H
C
D
A
B
10
36
4
Ethernet
Device
eth4
eth5
eth6
eth7
eth2
eth3
eth0
eth1
K
L
M
N
Slot
42
e.g.
e.g.
e.g.
e.g.
e.g.
e.g.
e.g.
e.g.
eth8
eth9
eth10
eth11
eth8
eth9
eth10
eth11
Table 12: Network interface card assignments for an eight socket x3950 X6
* This cards is optional, please refer to table 13 for details
4 processors
512GB 4S
1TB 4S
MLNX
MLNX
10
12
36
I350
I350
M5210
39
MLNX
MLNX
Slot
4
7
8 processors
1TB
2TB
MLNX
MLNX
4TB
MLNX
6TB*
MLNX
I350
M5210
MLNX
S/C
M5120/
M5225
I350
M5210
MLNX
S/C
M5120/
M5225
C M5120/
M5225
C M5120/
M5225
I350
M5210
C M5120/
M5225
I350
M5210
C
M5120/
M5225
40
I350
M5210
MLNX
C
M5120/
M5225
I350
M5210
MLNX
C
M5120/
M5225
41
42
44
I350
M5210
I350
M5210
I350
M5210
I350
M5210
12TB*
MLNX
S/C
M5120/
M5225
S/C
M5120/
M5225
I350
M5210
MLNX
C M5120/
M5225
S/C
M5120/
M5225
C M5120/
M5225
I350
M5210
Table 13: Card placement for x3950 X6 four socket and eight socket
X6 Implementation Guide
1.9.96-13
19
Technical Documentation
X6 Implementation Guide
1.9.96-13
20
Technical Documentation
Networking
5.1
Networking Requirements
The networking for the Lenovo Solution, the Integrated Management Module (IMM) and the corresponding switches should be set up and integrated into the customer network environment according to the
customers requirements and the recommendations from SAP. SAP currently recommends that individual
workloads are separated by either physical or virtual LAN addresses or subnets.
The individual workloads described by SAP are:
SAP HANA internal communication via SAP HANA private networking
Customer access to the SAP HANA appliance via:
SAP Landscape Transformation Replication (LT)
Sybase Replication (SR)
SAP Business Objects Data Services (DS)
Business Objects XI, Microsoft Excel, etc.
Server data management tools for:
System/DB backup and restore operations
Logical server application management (can be partially accomplished via Integrated Management Module)
SSH access, VNC access, SAP Support access
We strongly recommend that the following SAP Workloads are dedicated and distinct subnets using
separate Ethernet adapters (NICs). If not, the network setup will become more complicated.
SAP HANA client access
Server data management
Server application management
Additionally to the SAP workloads the Lenovo Solution defines two additional workloads:
IBM clustered files system communications for GPFS
Physical server management via the Integrated Management Module
Hardware support, console web access and SSH access
It is necessary to separate the IBM GPFS and SAP HANA internal networks from all other networks as
well as from each other. Servers being configured in a clustered scenario require two dedicated high speed
NICs (e.g. 10GbE) with separate physical private LANs for the internal communication of GPFS and SAP
HANA. In addition external networks, e.g. for SAP Client/BW and SAP management communication
should be separated as well. If not, SAP HANA performance may be compromised and the system is not
supported by SAP nor Lenovo.
5.2
Jumbo Frames
It is possible and allowed to activate the usage of so-called jumbo frames for the HANA and GPFS
networks. Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) up to 9000
bytes. The standard MTU is 1500.
X6 Implementation Guide
1.9.96-13
21
Technical Documentation
The advantage of jumbo frames is less overhead for headers and checksum computation. This can lead
to a better network performance on the HANA and GPFS networks.
Attention
Jumbo frames can only be used, if all network components (for example networking adapters
and switches) that have to process these jumbo frames support the usage.
If erroneously activated, jumbo frames cause the loss of network connectivity.
The switches G8264, G8272, G8296 and G8124E are certified for the usage in the Lenovo Solution
appliance with jumbo frames. In a standard cluster setup jumbo frames can be activated. In DR11 or
High Availability setups the HANA and GPFS networks may communicate via non-Lenovo customer
switches that cannot handle jumbo frames, therefore it is recommended to not use jumbo frames in these
setups.
To change this behaviour, you have to change the MTU size. This can be done like the following:
SUSE: in the YaST module for networking: General tab in the configuration of the network device/bond
Red Hat: changing the MTU size in the file /etc/sysconfig/network-scripts/ifcfg-* of the
interface/bond
Warning
Jumbo frames are activated during the installation phase for bond0 and bond1. You may have
to deactivate the usage of jumbo frames in certain scenarios.
5.3
Network Configuration
Before you configure the server and install the Lenovo Solution, please gather the following network
information from your network administrator where indicated with the b symbol. Please use only IPv4
addresses.
Note
In case the customer plans to install a single node configuration, but would like to scale it out
to a cluster by adding more severs: plan the network configuration for the GPFS and HANA
networks as if the cluster would be already existing to simplify a later scale out.
IP Address
Default Network Prefix
Default Netmask
Default Gateway
Primary DNS IP
Secondary DNS IP
Domain Search
NTP Server
b
b
b
b
b
b
b
b
X6 Implementation Guide
1.9.96-13
22
Technical Documentation
Network
IBM GPFS
Private Network (predefined)
SAP HANA
Private Network (predefined)
Customer
Network
IMM
IBM GPFS
Private Network (predefined)
SAP HANA
Private Network (predefined)
Port Label
Single
Cluster IP Address
Hostname
Server Node 01 (Worker/Stand-By/Single)
127.0.1.1 (default)
gpfsnode01
192.168.10.101 (exany
A/C
(mandatory)
ample)
b
b
127.0.2.1 (default)
hananode01
192.168.20.101 (exany
B/D
(mandatory)
ample)
b
b
Any of the remaining
NIC
b
b
ports
b
b
I
Server Node 02 (Worker/Stand-By)
Netmask
Gateway
255.255.255.0
(recommended)
b
None
(recommended)
255.255.255.0
(recommended)
b
None
(recommended)
any
A/C
127.0.1.1 (default)
192.168.10.102 (example)
gpfsnode02
(mandatory)
255.255.255.0
(recommended)
None
(recommended)
any
B/D
127.0.2.1 (default)
192.168.20.102 (example)
hananode02
(mandatory)
255.255.255.0
(recommended)
None
(recommended)
..
.
for all other nodes
..
.
Table 15: IP address configuration
5.4
In a clustered configuration with high availability, the internal networks of the appliance for GPFS and
HANA are set up with redundant links. These connect to redundant G8264, G8272, G8296 or G8124E
10GigE switches. Both switches are connected with a minimum of two ISL ports. It is recommended
to use the 40GbE ports for the ISLs. On host side the two corresponding ports of each network are
configured as Linux bond devices. The data replication connection to the primary data source can also
be set up in a redundant fashion and connects directly to the appliance internal 10GigE HANA network.
The details for this setup depend strongly on the customers network infrastructure and need to be planned
accordingly. Details to the exact configuration can be found in chapter 5.6.7: Network Configurations in
a Clustered Environment on page 30.
Warning
When connecting the data replication network directly to the internal 10GigE network, an
ACL needs to be configured on the uplink port to isolate the internal networks (e.g. 127.0.n.24)
from the customer network.
If a network adapter or one of the switches fail, the SAP HANA network and the GPFS network are
taken over by the remaining switch and network adapter.
It is recommended to establish redundant network connections for the other networks (e.g. client network)
as well. This setup is similar to the internal networks and requires two identical 1GigE or 10GigE switches
X6 Implementation Guide
1.9.96-13
23
Technical Documentation
(e.g. G8052 1GigE or G8264 10GigE). As long as there is one redundant path to each server the remaining
appliance and data management networks can be implemented with a single link. Each of the networks
will then connect to one of the two switches.
To implement network redundancy on the switch level, a Virtual Link Aggregation Group (VLAG) needs
to be created on the two network switches. A VLAG requires a dedicated inter-switch link (ISL) for
synchronization. More details can be found in Chapter 5.6.7: Network Configurations in a Clustered
Environment on page 30.
Note
For more details on VLAGs please obtain the Application Guide respective to the RackSwitch
model and N/OS you have installed and consult chapter "Virtual Link Aggregation Groups"
(e.g. "RackSwitch G8272 Application Guide").
5.5
We allow the customer to define and use their own networks and connect them to the dedicated customer
network NICs using their own switch infrastructure. Please ensure the proper IP address setup on the
Lenovo Solution server. This guide does not go into detail regarding the customers switch configuration,
nor for the configuration in the cluster.
5.6
5.6.1
Network Definitions
Numbering conventions
Network
(G8264)
(G8272)
(G8296)
(G8124)
(G8052)
This option is defined to use the G8264 RackSwitch 10Gbit Ethernet switch as a private network landscape
for IBM GPFS and SAP HANA. This allows up to 24 Lenovo Solution servers (or 26 servers with "40G
-> 4x 10G" breakout cable on ports 9 or 13) to be connected. The setup is as follows:
X6 Implementation Guide
1.9.96-13
24
Technical Documentation
18,20,22,24,26,28...64 (HANA)
.----------------------,5_____
MGMT|
G8264 Switch
|1_____\__ Inter-Switch 40Gb Link (ISL)
----------------------
\_\_____Port 5 bonded ISL
17,19,21,23,25,27...63 (GPFS) / \
18,20,22,24,26,28...64 (HANA)/
\___Port 1 bonded ISL
.----------------------,5____/
/
MGMT|
G8264 Switch
|1_________/
----------------------
17,19,21,23,25,27...63 (GPFS)
Port
MGMT
17
18
19
20
..
.
VLAN
4095
100
200
100
200
..
.
IP Address
<customer-mgmt IP1>
192.168.10.101
192.168.20.101
192.168.10.102
192.168.20.102
..
.
Hostname
<switch1>
gpfsnode01
hananode01
gpfsnode02
hananode02
..
.
Server NIC
n/a
bond0
bond1
bond0
bond1
..
.
g8264-1
g8264-1
g8264-2
g8264-2
g8264-2
g8264-2
g8264-2
..
.
63
64
MGMT
17
18
19
20
..
.
100
200
4095
100
200
100
200
..
.
192.168.10.124
192.168.20.124
<customer-mgmt IP2>
192.168.10.101
192.168.20.101
192.168.10.102
192.168.20.102
..
.
gpfsnode24
hananode24
<switch2>
gpfsnode01
hananode01
gpfsnode02
hananode02
..
.
bond0
bond1
n/a
bond0
bond1
bond0
bond1
..
.
g8264-2
g8264-2
63
64
100
200
192.168.10.124
192.168.20.124
gpfsnode24
hananode24
bond0
bond1
X6 Implementation Guide
1.9.96-13
25
Technical Documentation
5.6.3
This option is defined to use the G8124 RackSwitch 10Gbit Ethernet switch as a private network landscape
for IBM GPFS and SAP HANA. This allows up to 7 Lenovo Solution servers to be connected. The setup
is as follows:
2,4,6,8,10,12,14 (HANA)
.----------------------,24____
MGMT|
G8124 Switch
|23____\__ Inter-Switch 10Gb Link (ISL)
----------------------
\_\_____Port 24 bonded ISL
1,3,5,7,9,11,13 (GPFS) / \
2,4,6,8,10,12,14 (HANA)/
\___Port 23 bonded ISL
.----------------------,24___/
/
MGMT|
G8124 Switch
|23________/
----------------------
1,3,5,7,9,11,13 (GPFS)
X6 Implementation Guide
1.9.96-13
26
Technical Documentation
Switch   Port    VLAN  IP Address           Hostname    Server NIC
g8124-1  MGMT-b  4095  <customer-mgmt IP1>  <switch1>   n/a
g8124-1  1       100   192.168.10.101       gpfsnode01  bond0
g8124-1  2       200   192.168.20.101       hananode01  bond1
g8124-1  3       100   192.168.10.102       gpfsnode02  bond0
g8124-1  4       200   192.168.20.102       hananode02  bond1
g8124-1  5       100   192.168.10.103       gpfsnode03  bond0
g8124-1  6       200   192.168.20.103       hananode03  bond1
...      ...     ...   ...                  ...         ...
g8124-1  13      100   192.168.10.107       gpfsnode07  bond0
g8124-1  14      200   192.168.20.107       hananode07  bond1
g8124-2  MGMT-b  4095  <customer-mgmt IP2>  <switch2>   n/a
g8124-2  1       100   192.168.10.101       gpfsnode01  bond0
g8124-2  2       200   192.168.20.101       hananode01  bond1
g8124-2  3       100   192.168.10.102       gpfsnode02  bond0
g8124-2  4       200   192.168.20.102       hananode02  bond1
g8124-2  5       100   192.168.10.103       gpfsnode03  bond0
g8124-2  6       200   192.168.20.103       hananode03  bond1
...      ...     ...   ...                  ...         ...
g8124-2  13      100   192.168.10.107       gpfsnode07  bond0
g8124-2  14      200   192.168.20.107       hananode07  bond1
5.6.4
This option is defined to use the G8272 RackSwitch 10Gbit Ethernet switch as a private network landscape
for IBM GPFS and SAP HANA. This allows up to 24 Lenovo Solution servers (or 32 servers with "40G
-> 4x 10G" breakout cables on ports 49,50,51 or 52) to be connected. The setup is as follows:
[Diagram: Two G8272 switches joined by a bonded 40Gb Inter-Switch Link (ISL) on ports 53 and 54 of each switch. On each switch the odd ports 1,3,5,...,47 carry GPFS traffic and the even ports 2,4,6,...,48 carry SAP HANA traffic; the MGMT ports connect to the customer management network.]
This guide defines the IBM GPFS network as 192.168.10.0/24 and the SAP HANA network as 192.168.20.0/24. If the customer wants to use a different IP range, they may do so, but it should then be used consistently as the internal (private) network throughout this guide.
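For illustration, the private host names from the tables in this chapter would then appear in every node's /etc/hosts with the chosen ranges; an excerpt following the default convention:

192.168.10.101 gpfsnode01
192.168.20.101 hananode01
192.168.10.102 gpfsnode02
192.168.20.102 hananode02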
Switch   Port  VLAN  IP Address           Hostname    Server NIC
g8272-1  MGMT  4095  <customer-mgmt IP1>  <switch1>   n/a
g8272-1  1     100   192.168.10.101       gpfsnode01  bond0
g8272-1  2     200   192.168.20.101       hananode01  bond1
g8272-1  3     100   192.168.10.102       gpfsnode02  bond0
g8272-1  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8272-1  47    100   192.168.10.124       gpfsnode24  bond0
g8272-1  48    200   192.168.20.124       hananode24  bond1
g8272-2  MGMT  4095  <customer-mgmt IP2>  <switch2>   n/a
g8272-2  1     100   192.168.10.101       gpfsnode01  bond0
g8272-2  2     200   192.168.20.101       hananode01  bond1
g8272-2  3     100   192.168.10.102       gpfsnode02  bond0
g8272-2  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8272-2  47    100   192.168.10.124       gpfsnode24  bond0
g8272-2  48    200   192.168.20.124       hananode24  bond1
5.6.5
This option is defined to use the G8296 RackSwitch 10Gbit Ethernet switch as a private network landscape for IBM GPFS and SAP HANA. This allows up to 43 Lenovo Solution servers (or 47 servers with "40G -> 4x 10G" breakout cables on ports 87 and 88) to be connected. The setup is as follows:
[Diagram: Two G8296 switches joined by a bonded 40Gb Inter-Switch Link (ISL) on ports 95 and 96 of each switch. On each switch the odd ports 1,3,...,47 and 49,51,...,85 carry GPFS traffic and the even ports 2,4,...,48 and 50,52,...,86 carry SAP HANA traffic; the MGMT ports connect to the customer management network.]
Switch   Port  VLAN  IP Address           Hostname    Server NIC
g8296-1  MGMT  4095  <customer-mgmt IP1>  <switch1>   n/a
g8296-1  1     100   192.168.10.101       gpfsnode01  bond0
g8296-1  2     200   192.168.20.101       hananode01  bond1
g8296-1  3     100   192.168.10.102       gpfsnode02  bond0
g8296-1  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8296-1  85    100   192.168.10.143       gpfsnode43  bond0
g8296-1  86    200   192.168.20.143       hananode43  bond1
g8296-2  MGMT  4095  <customer-mgmt IP2>  <switch2>   n/a
g8296-2  1     100   192.168.10.101       gpfsnode01  bond0
g8296-2  2     200   192.168.20.101       hananode01  bond1
g8296-2  3     100   192.168.10.102       gpfsnode02  bond0
g8296-2  4     200   192.168.20.102       hananode02  bond1
...      ...   ...   ...                  ...         ...
g8296-2  85    100   192.168.10.143       gpfsnode43  bond0
g8296-2  86    200   192.168.20.143       hananode43  bond1
5.6.6
The G8052 RackSwitch 1Gbit Ethernet switch is mainly used for the administrative networks. It can also be used for SAP-Access, backup, or other client-specific networks. These networks are both public and private and need to be carefully separated with VLANs. The landscape is as follows:
[Diagram: Two G8052 switches joined by a bonded 1Gb Inter-Switch Link (ISL) on ports 49 and 50 of each switch. On each switch the odd ports 1,3,5,...,47 connect the server IMMs and the even ports 2,4,6,...,48 carry the other administrative networks; ports 51 and 52 are used for switch management/uplink.]
Switch 1:
Port  VLAN  IP Address           Hostname             Server NIC
52    4092  <customer-mgmt IP1>  <switch1>            n/a
1     300   192.168.30.101       cust-imm01.site.net  sys-mgmt
3     300   192.168.30.102       cust-imm02.site.net  sys-mgmt
...   ...   ...                  ...                  ...
47    300   192.168.30.124       cust-imm24.site.net  sys-mgmt

Switch 2:
Port  VLAN  IP Address           Hostname             Server NIC
52    4092  <customer-mgmt IP2>  <switch2>            n/a
1     300   192.168.30.125       cust-imm25.site.net  sys-mgmt
3     300   192.168.30.126       cust-imm26.site.net  sys-mgmt
...   ...   ...                  ...                  ...
47    300   192.168.30.148       cust-imm48.site.net  sys-mgmt
5.6.7
The networking in the clustered environment is the cornerstone of the Lenovo Solution. It is therefore important to ensure that the network (switches, cabling, etc.) has been set up before starting the installation of the servers. Below is one example of how to connect the customer's network infrastructure to the clustered environment; see figure 14.
Please read section 5.7: Setting up the Switches on page 31 for the RackSwitch setup.
[Figure 14: Example connection of the customer network infrastructure to the clustered environment. Per node (Node1..NodeN), the bonded 10GigE interfaces (HANA on ports 6/8, GPFS on ports 7/9) attach to the HANA and GPFS switch pairs, which are joined by inter-switch links. The IMM and system-management 1GigE interfaces, the SAP client access, and optional interfaces (e.g. towards SAP Business Suite) attach to switches of the customer's choice in the customer interface zone. The legend distinguishes 1GbE, 10GbE, and 40GbE links as well as bonded, optional, and inter-switch interfaces.]
5.7 Setting up the Switches
5.7.1 Basic Switch Configuration Setup
5.7.1.1 Configuring SSH/SCP Features on the Switch
SSH and SCP features are disabled by default. To change the SSH/SCP settings, connect to the switch via a serial console and execute the following commands:
RS 8XXX> enable
RS 8XXX# configure terminal
RS 8XXX(config)# ssh enable
RS 8XXX(config)# ssh scp-enable
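It is worth verifying the result before proceeding, for example by opening an SSH session to the switch management address from a workstation (admin is the default N/OS account):

# ssh admin@<customer-mgmt IP1>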
[Diagram: Cabling of a G8264 switch pair (G8264 #1 and #2). On every node, eth0+eth2 form bond0 (GPFS) and eth1+eth3 form bond1 (HANA); each bond has one leg on each of the two switches — node 1 on ports 17 (GPFS) and 18 (HANA), node 2 on ports 29 and 30, and so on. eth4/eth5 serve system management; eth6-eth9 are additional interfaces. The mgt port and the port groups 1-4, 5-8, 9-12, and 13-16 are shown separately on each switch.]
Note
The management IP addresses are examples and need to be customized according to the customer's network.
These instructions are for RackSwitch N/OS version 8.2. Newer versions may have different commands. Please check the RackSwitch Industry-Standard CLI Reference for the version of the CLI that corresponds to the switch N/OS version.
5.7.3
5.7.4
5.7.5
5.7.6 Disable Routing
RS 8XXX (config)# no ip routing
5.7.7 Add Networking
For each subnetwork, create the VLAN and trunk/VLAG configurations described in the following subsections; a minimal VLAN sketch follows.
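As a sketch of the VLAN part for a single subnetwork (ISCLI syntax as in the other examples of this chapter; VLAN 100 and port 17 are taken from the GPFS numbering convention and are illustrative only):

RS 8XXX (config)# vlan 100
RS 8XXX (config-vlan)# exit
RS 8XXX (config)# interface port 17
RS 8XXX (config-if)# switchport access vlan 100
RS 8XXX (config-if)# exit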
5.7.8 VLAN configurations
5.7.8.1
RS 8XXX (config)# vlag tier-id 10
RS 8XXX (config)# vlag hlthchk peer-ip <customer-mgmt IP2>
RS 8XXX (config)# vlag isl adminkey 5094
# ... on the ports in <VLAN ports>:
RS 8XXX (config)# vlag adminkey 1000+<VLAN port> enable
# Define Switch 2
RS 8XXX (config)# vlag tier-id 10
5.8 Inter-Site Portchannel Configuration
In a stretched HA or DR scenario an inter-site port channel needs to be configured. The inter-site port channel configuration depends on the customer premise equipment and infrastructure. This chapter describes several options for implementing this configuration. The following examples are based on the G8264 port layout. For the other supported RackSwitch models, the following ports should be used:
G8124 solution: depending on the connection type, switch port 22, or ports 21-22 respectively
G8272 solution: depending on the connection type, switch port 48, or ports 47-48 respectively
G8296 solution: depending on the connection type, switch port 86, or ports 85-86 respectively
If the port channel configuration is needed for a stretched HA setup, the HANA and the GPFS VLANs have to be enabled on the trunk interfaces. If the port channel trunk is for a DR setup, only the GPFS VLANs have to be enabled on the trunk interfaces.
5.8.1 Single Inter-Site Link
If there is just one single site-interconnect available — as shown in the drawing below — the following configuration has to be applied to the switches to establish a static inter-site connection.
[Diagram: Single inter-site link. Each site has a pair of G8264 switches (1a/1b at site 1, 2a/2b at site 2) joined site-internally by a bonded ISL on ports 1 and 5; on every switch GPFS uses the odd ports 17-63 and HANA the even ports 18-64, with the MGMT ports on the management network. A single fibre connects switch 1a to switch 2a as the inter-site link.]
Switchport Portchannel Configuration

# Define Switch 1a,2a
#   RS 8264 port 64
#   RS 8272 port 48
#   RS 8296 port 86
#   RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
5.8.2 Redundant Inter-Site Link
If there are two site-interconnect fibres — as shown in the drawing below — each cable should connect a different switch of each site (1a to 2a and 1b to 2b), instead of attaching both cables to just one switch pair. The following configuration has to be applied to the switches to establish one logical static inter-site connection over two cables.
[Diagram: Redundant inter-site link (one on each switch). Same per-site switch layout as above; one inter-site fibre connects switch 1a to switch 2a and a second connects switch 1b to switch 2b (port 64 on each), forming one logical connection.]
Switchport Portchannel Configuration

# Define Switch 1a,2a,1b,2b
#   RS 8264 port 64
#   RS 8272 port 48
#   RS 8296 port 86
#   RS 8124 port 22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
5.8.3 Portchannel over Four Inter-Site Links
If there are four site-interconnect fibres — as shown in the drawing below — two of them should be connected to port 63 and port 64 on each switch. The following configuration has to be applied to the switches to establish one logical static inter-site connection over four cables.
[Diagram: Portchannel over four inter-site links (two on each switch). Same per-site layout as above; two fibres per switch pair, using ports 63 and 64 on each of the four G8264 switches.]
Switchport Portchannel Configuration

# Define Switch 1a,1b,2a,2b
#   RS 8264 port 63,64
#   RS 8272 port 47,48
#   RS 8296 port 85,86
#   RS 8124 port 21,22
RS 8XXX (config)# interface port <port>
RS 8XXX (config-if)# switchport mode trunk
# The next 2 configuration statements are valid in case of a stretched HA solution. In a
# stretched HA scenario HANA and GPFS VLANs must be enabled on the trunk interface.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN,HANA VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN,HANA VLAN]
# The next 2 configuration statements are valid in case of a DR solution. Only the GPFS VLAN
# must be enabled on the trunk interface in a DR scenario.
RS 8XXX (config-if)# switchport trunk allowed vlan [GPFS VLAN]
RS 8XXX (config-if)# switchport trunk native vlan [GPFS VLAN]
RS 8XXX (config-if)# exit
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 port <port>
RS 8XXX (config)# portchannel 63 enable
RS 8XXX (config)# vlag portchannel 63 enable
5.8.4
5.8.4.1
# scp admin@switch.example.com:getcfg .
5.8.4.2
5.9
The script SwitchAutoConfig.sh can be used to create a basic configuration for the switch models G8124 and G8264. We recommend copying and pasting the created configuration into the serial console of the switches.
SwitchAutoConfig.sh can be found in /opt/lenovo/saphana/bin/.
As a prerequisite for SwitchAutoConfig.sh, the switches must have the basic configuration applied as described in chapter 5.7.1: Basic Switch Configuration Setup on page 31, and they must be reachable via SSH over the network.
5.9.1 Script Usage
./SwitchAutoConfig.sh -h
usage: ./SwitchAutoConfig.sh [-c type] [-d type]
styletypes=[G8264|G8052|G8124]
5.9.2 Examples
The following command will create the configurations for a G8264 switch pair. You will be asked to enter configuration details like IP addresses. After the configuration part you have to enter the SSH password of the switches, twice per switch: the first time, the script checks the firmware version of the switches; the second time, the password is needed for the deployment process.
./SwitchAutoConfig.sh -c G8264
The following command will create and deploy the configurations for a G8264 switch pair.
./SwitchAutoConfig.sh -d G8264
Attention
Please be very careful if you create the configuration for a switch connected to the customer network. In this case, make sure that the switch is disconnected during the setup. Only when the configuration is complete and matches the customer requirements should you bring up the connection to the customer network.
After the configuration deployment, the switches should be checked manually. Afterwards the configuration can be saved as described in chapter 5.7.9: Save changes to switch FLASH memory on page 36.
5.9.3 Input Values
All default values are based on the Networking Guide standards but can be changed if needed. Most input values, like hostname or IP address, need to be provided by the customer. A portchannel is only needed in the case of a DR or HA cluster. If a portchannel is to be configured, the script will ask which type of port channel to configure; there are two options, HA or DR. The GPFS, HANA, xCat, and IMM VLAN IPs are IPs that reside within those VLANs. Their purpose is to make server addresses within these VLANs pingable from the switch. For the G8052 the script will ask for a MGMT port, because the G8052 has no dedicated management port.
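For example, once the switch has an IP address inside the GPFS VLAN, node reachability can be verified directly from the switch prompt (node address taken from the numbering convention):

RS 8XXX# ping 192.168.10.101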
6 Guided Install of the Lenovo Solution
This section describes the installation and configuration of HANA on SUSE Linux Enterprise Server for SAP Applications 11 SP3 and on Red Hat Enterprise Linux 6.6. Subsections that apply to only one of these operating systems are marked accordingly. This section applies starting from non-OS component DVD version 1.9.96-13.
The software installation and configuration is executed at the customer site. This includes networking customization, IBM GPFS cluster setup, and SAP HANA installation. It does not include the connection and replication to SAP Business Suite back-end systems (such as ERP or BW).
Attention
Please read SAP Note 2001528 Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11, and SAP Note 2159166 SAP HANA SPS 09 Database Revision 96, to learn about known issues and recommendations by SAP.
Note
It is highly recommended to check the system setup and the software versions of installed components after the complete installation process. See section 15.2: Basic System Check on page 183 for how to do this.
[Table: Installation overview — Phase / Actions for phases 1 to 3, including the BoMC firmware update step.]
6.1 Preparation
As you might not be able to access online documentation at the customer site, please familiarize yourself with the following links and downloads before arriving. Please note that these documents might in turn reference other documentation not mentioned here, which you would then need to obtain as well. We highly recommend the SAP HANA Installation Guides as well as the SAP HANA TOC Manual.
Experience SAP HANA          http://experiencesaphana.com
SAP Service Marketplace      https://service.sap.com/hana*
SAP Help Portal SAP HANA     http://help.sap.com/hana_appliance
SAP HANA 1.0: Central Note   https://service.sap.com/sap/support/notes/1514967*
SAP HANA Sizing Guide        https://service.sap.com/sap/support/notes/1514966*
Release Restrictions Note    https://service.sap.com/sap/support/notes/1513496*
Depending on the customer's operational guidelines it might be necessary to prepare the customer infrastructure beforehand so that the HANA appliance can be integrated in a smooth and timely manner. What follows are a few tips we have collected while talking with SAP.
6.1.1 Firewall Preparations
If the customer has firewalls running between the HANA appliance and the connected components (ERP, clients, backup & restore server, etc.), make sure that the appropriate network ports are opened. For details on the relevant ports please refer to the SAP HANA security guide at http://help.sap.com/hana_appliance under Security.
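As a hedged illustration only — the authoritative port list is in the SAP HANA security guide referenced above — SAP HANA instance ports follow a 3<instance-number>xx scheme, so a Linux-based firewall between SQL clients and an instance 00 might need a rule such as:

# allow SQL clients to reach the indexserver of HANA instance 00 (port 30015)
iptables -A FORWARD -p tcp --dport 30015 -j ACCEPT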
6.1.2
The customer needs to have the "Non OS content for Lenovo Systems solution for SAP HANA appliance additional software stack" DVD before the service person arrives. A DVD should have arrived with every system. For legal reasons, the DVD cannot be downloaded from the Internet. If a customer has lost the DVD or did not receive one, it must be ordered directly from Lenovo. To do this, please direct the customer to contact Lenovo support and provide the part number (p/n) of the latest version from the table below. The other numbers are listed for reference.
P/N      Remarks
00MV674  latest version
6.1.3
The System x server software, firmware, and driver versions should either be at the exact level given here or above, where indicated. For details please refer to table 25. The versions listed in that table have been certified with SAP. If an upgrade to a higher version is supported without consultation of Lenovo/SAP, this is indicated with a check mark. Updates that require a statement from Lenovo or SAP before upgrading are marked accordingly. Certain firmware levels have been declared static; an upgrade to a higher version is then not supported, which is indicated with a cross.
In general you should use BoMC12 to apply the newest firmware versions before starting the OS installation, unless there are restrictions for certain firmware packages in table 25: Supported Firmware, Software and Driver Levels on page 44. If unsure, first contact SAP Support (via the SAP OSS system) with a direct question regarding the latest drivers and their support.
Attention
Mandatory kernel update after installation on SLES for SAP 11 SP3 to kernel
version 3.0.101-0.47.52, or higher.
Attention
Mandatory kernel update after installation on RHEL 6.6 to 2.6.32-504.16.2.el6, or
higher. See SAP Note 2136965 SAP HANA DB: Recommended OS settings for RHEL 6.6.
Attention
Mandatory update of the GCC runtime environment for SAP HANA SPS08 (Revision 80) or higher. See SAP Note 2001528 Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or SLES 11 for details.
Attention
Mandatory update of the GNU C Library is required after installation when
installing SAP HANA Database revision 80 or higher. See SAP Note 1888072 SAP
HANA DB: Indexserver crash in __strcmp_sse42 for details.
12 https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC
Note
UEFI and IMM firmware levels should always be updated in parallel to avoid possible contention problems between the two.
Note
When installing or performing upgrades, the operator should be prepared to expect multiple
reboots. Please refer to chapter 12.2: Reboot Behavior on page 154.
Warning
Do not downgrade existing firmware levels unless otherwise explicitly recommended to do
so by Lenovo.
6.1.4 Card Placement
Attention
You may need to change the card placement. The machine coming from the factory may have a different card layout than we require. Please refer to section 4.4: Card Placement on page 15 for the assignment of cards to slots. This step must be done before the installation. Please be aware that your machine is supported by Lenovo only with the correct card layout.
6.1.5
These steps are necessary before the operating system can be installed. When the system comes from Lenovo, it should already be set to the settings listed, but after UEFI firmware updates these parameters may be reset. Follow the instructions below on how to configure the server's UEFI parameters correctly for use with the SAP HANA appliance.
In this step, please also check the power policy settings as described in chapter 12.1: Power Policy Configuration on page 154.
6.1.5.1 Obtaining web interface access for IMM
To access the web interface of the IMM and use the remote presence feature, you need the IP address of the IMM. You can modify the IMM IP address through the UEFI Setup utility. To locate or change the IP address, complete the following steps:
1. Turn on the server.
2. When the prompt <F1> Setup is displayed, press F1 .
3. From the setup utility main menu, select System Settings → Network Configuration .
4. Obtain or change the network settings (IP address, host name, subnet mask, gateway).
5. Save the network settings and confirm to restart the IMM.
6. Press Esc to get back to the main menu.
6.1.5.2 Feature on Demand Activation
To be able to configure the RAID adapters correctly, some Feature on Demand (FoD) keys need to be activated. It is possible that they are already activated when shipped.
- ServeRAID M5100/M5200 Series Performance Key for Lenovo System x
- ServeRAID M5100/M5200 Series SSD Caching Enabler for Lenovo System x
- (optional, only if RAID6 is required by the customer) ServeRAID M5100/M5200 Series RAID 6 Upgrade for Lenovo System x (RAID6 can only be configured on external M5120/M5225 RAID adapters.)
The necessary documentation was shipped with the servers to the customer. You can activate the FoDs via the IMM: after login, go to IMM Management → Activation Key Management .
Note
We recommend that the customer keeps a backup of the Feature on Demand keys.
6.1.5.3
2. Choose None .
3. Press Esc three times.
4. Select Save Settings and press Enter .
6.1.5.4
Please check and set the settings in UEFI according to the following tables.
Note
Please be aware that not every setting is available on every platform.
Section Operating Modes

Setting                                Value
OperatingModes.ChooseOperatingMode     Custom Mode
Memory.MemorySpeed                     Max Performance
Memory.MemoryPowerManagement           Automatic
Processors.ProcessorPerformanceStates  Enable
Processors.C1EnhancedMode              Disable
Processors.QPILinkFrequency            Max Performance
Processors.TurboMode                   Enable
Processors.C-States                    Enable
Processors.PackageACPIC-StateLimit     ACPI C3
Power.PowerPerformanceBias             Platform Controlled
Power.PlatformControlledType           Max Performance
Section Processors

Setting                          Value            UEFI Setting
Turbo Mode                       Enable           Processors.TurboMode
Processor Performance States     Enable           Processors.ProcessorPerformanceStates
C-States                         Enable           Processors.C-States
Package ACPI C-State Limit       ACPI C3          Processors.PackageACPIC-StateLimit
C1 Enhanced Mode                 Disable          Processors.C1EnhancedMode
Hyper Threading                  Enable           Processors.Hyper-Threading
Execute Disable Bit              Enable           Processors.ExecuteDisableBit
Intel Virtualization Technology  Enable           Processors.IntelVirtualizationTechnology
Enable SMX                       Disable          Processors.EnableSMX
Hardware Prefetcher              Enable           Processors.HardwarePrefetcher
Adjacent Cache Prefetch          Enable           Processors.AdjacentCachePrefetch
DCU Streamer Prefetcher          Enable           Processors.DCUStreamerPrefetcher
DCU IP Prefetcher                Enable           Processors.DCUIPPrefetcher
Direct Cache Access (DCA)        Enable           Processors.DirectCacheAccessDCA
Cores in CPU Package             All              CoresinCPUPackage
QPI Link Frequency               Max Performance  Processors.QPILinkFrequency
Energy Efficient Turbo           Enable           Processors.EnergyEfficientTurbo
Uncore Frequency Scaling         Enable           Processors.UncoreFrequencyScaling
MWAIT/MMONITOR                   Enable           Processors.MWAITMMONITOR
Section Power

Setting                         Value
Power.ActiveEnergyManager       Capping Disable
Power.PowerPerformanceBias      Platform Controlled
Power.PlatformControlledType    Max Performance
Power.WorkloadConfiguration     I/O sensitive
Power.10GbMezzCardStandbyPower  Disable
Section Memory

Setting                  Value            UEFI Setting
Memory Mode              Independent      Memory.MemoryMode
Memory Speed             Max Performance  Memory.MemorySpeed
Memory Power Management  Automatic        Memory.MemoryPowerManagement
Socket Interleave        NUMA             Memory.SocketInterleave
Memory Data Scrambling   Enable           Memory.MemoryDataScrambling
Patrol Scrub             Enable           Memory.PatrolScrub
Mirroring                Disable          Memory.Mirroring
Sparing                  Disable          Memory.Sparing
Rank Margining Test      Disable          Memory.RankMarginingTest
6.1.5.5 Boot Order
The installer supports (starting from release 1.8.80-10) only installation in UEFI mode. For the boot loaders used, see table 30.
Note
When you reinstall a system but have changed the Legacy/UEFI mode, make sure the partition table is cleared, either by wiping it or by recreating the RAID1 VD for the OS.
Type         Supported from  Boot loader
SLES 11 SP3  1.7.70-8        ELILO
RHEL 6.6     1.9.96-13       Grub
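As a hedged sketch of clearing the partition table mentioned in the note above (destructive — verify the device first; /dev/sda holds the OS RAID1 in the partition scheme used by this guide):

# remove all partition-table and filesystem signatures from the OS device
wipefs -a /dev/sda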
6.2 Phase 1
The Lenovo Systems Solution for SAP HANA appliance is ready for an installation with the factory-provided image.
6.2.1
The RAID configuration of all RAID5 and RAID6 arrays is executed by the automated installer starting with release 1.8.80-10. The only manual step for the installing person is to configure the RAID1 for the OS.
The following tables are meant as an overview and a reference in case the automated RAID configuration does not work properly.
Tables 31: x3850 X6 RAID Controller Configuration on page 49 and 32: x3950 X6 RAID Controller Configuration on page 50 describe possible configurations of the RAID controllers. There are different possible setups for the RAID controllers with different numbers of SSDs and HDDs:
- M5210 (on x3950 X6: first internal)
  - 2 SSDs + 6 HDDs: 1 RAID1 for OS, 1 RAID5 for GPFS
- M5210 (only x3950 X6, second internal)
  - 2 SSDs + 6 HDDs: 1 RAID5 for GPFS
- M5120/M5225
  - 2 SSDs + 9 HDDs: 1 RAID5 for GPFS
  - 2 SSDs + 10 HDDs: 1 RAID6 for GPFS
  - 2 SSDs + 18 HDDs: 2 RAID5 for GPFS
  - 2 SSDs + 20 HDDs: 2 RAID6 for GPFS
  - Optionally: +2 SSDs13
Controller   Models               VD ID  Type  Physical Drives  Config                      Comment
M5210        all                  0      HDD   2                RAID1                       VD for OS
                                  1      HDD   (3+p)            RAID5                       GPFS, CacheCade enabled
                                  0*     SSD   2                RAID0                       CacheCade of VD1
M5120/M5225  Single node: 768GB,  0      HDD   9 / 10           (8+p) RAID5 / (8+2p) RAID6  GPFS, CacheCade enabled
             Cluster: 512GB       0*     SSD   2                RAID0                       CacheCade of VD0
Controller       Models                                   VD ID  Type  Physical Drives  Config                      Comment
1st M5210        all                                      0      HDD   2                RAID1                       VD for OS
                                                          1      HDD   (3+p)            RAID5                       GPFS, CacheCade enabled
                                                          0*     SSD   2                RAID0                       CacheCade for VD1
2nd M5210        all                                      1      HDD   (5+p)            RAID5                       GPFS, CacheCade enabled
                                                          1*     SSD   2                RAID0                       CacheCade for VD1
1st M5120/M5225  Single node: 768GB, Cluster: 512GB       0      HDD   9 / 10           (8+p) RAID5 / (8+2p) RAID6  GPFS, CacheCade enabled
                 Single node: 3072GB, Cluster: 2048GB
                 Single node: 6144GB, Cluster: 3072GB     1/2**  HDD   9 / 10           (8+p) RAID5 / (8+2p) RAID6  GPFS, CacheCade enabled
                 Single node: 12.288GB, Cluster: 4096GB
                                                          0*     SSD   2/4*             RAID0                       CacheCade for VD0&1
2nd M5120/M5225  Single node: 12.288GB, Cluster: 6144GB   0      HDD   9 / 10           (8+p) RAID5 / (8+2p) RAID6  GPFS, CacheCade enabled
                                                          1*     SSD                    RAID0                       CacheCade for VD0&1
3rd M5120/M5225  Single node: 12.288GB, Cluster: 6144GB   0      HDD   9 / 10           (8+p) RAID5 / (8+2p) RAID6  GPFS, CacheCade enabled
                                                          0*     SSD                    RAID0                       CacheCade for VD0
Device        Partition #  Partition Name*  Size   File system  Mount Point
/dev/sda      1            /dev/sda1        148MB  vfat         /boot/efi
/dev/sda      2            /dev/sda2        64GB   ext3/4       /
/dev/sda      3            /dev/sda3        32GB   swap         (none)
/dev/sda      4            /dev/sda4        148MB  vfat         /var/backup/boot/efi
/dev/sda      5            /dev/sda5        64GB   ext3/4       /var/backup
/dev/sd[b-z]                                100%   GPFS         /sapmnt (sapmntdata)

Table 33: Partition Scheme for Single Node and Cluster Installations
* The actual partition numbers may vary depending on whether you use RHEL or SLES for SAP.
Warning
At this point, only the RAID1 for the OS will be configured. The other RAID arrays are generated automatically in phase 3 of the setup.
6.2.1.1
1. Select Storage .
2. Select the internal RAID controller. If your server has two M5210 controllers, only configure the first controller as described here. You can determine the first internal controller by the smaller bus number on the right side of the "Storage" view.
3. Select Main Menu → Configuration Management .
6.2.2
Using the IMM, the machine can be booted into the installation media. Directions on how to use the IMM can be found in the Lenovo server installation guidelines for the System x model purchased.
The server software installation process varies slightly depending on how the mounted software images
are attached to the server. This section describes the different image mounting methods and the available
options to install the images for each method. See table 34: DVD/ISO Media Install Options on page
52. Installations via USB drives are supported.
There are two Lenovo DVDs shipped besides the DVDs of the operating system media kit. The "Lenovo Installation" DVD (Lenovo non-OS components) contains all files that are needed for a successful installation of the appliance. The "Additional Products" DVD contains additional files for SAP HANA that are not required for a successful installation. If you want these files automatically transferred to the server(s) during installation, you must use option 1 in table 34; otherwise we recommend not mounting this DVD.
When installing RHEL, an additional RHEL for HANA DVD is shipped containing necessary compatibility RPMs.
When installing SLES for SAP, an additional SLES DVD is shipped containing necessary compatibility RPMs.
DVD/ISO Media Option               Order in Virtual Media Manager  USB Stick
Option 1:
  SLES for SAP/RHEL                (1st)
  Lenovo non-OS Components         (2nd)
  RHEL for HANA or
    compat. files for SLES         (3rd)
  Additional Products              (4th, optional)
Option 2:
  SLES for SAP                     (1st)
  Lenovo non-OS Components                                         yes
  RHEL for HANA or
    compat. files for SLES         (2nd)

Table 34: DVD/ISO Media Install Options
6.2.3
SLES, UEFI Mode: After you mount the software images for the execution of the phase one install, restart the system and wait until the black boot-option screen from SUSE is displayed. In the boot-option screen, use ... and then b .
SLES and RHEL: The media will automatically install the SLES for SAP or RHEL operating system. The installer will copy the extra software necessary for the SAP HANA product (GPFS and other software add-ons). The machine will be properly partitioned, installed, and initially configured. You will not need to touch the system at this point. After the system reboots, phase two of the installation will begin.
Note
Continue with Section 6.3: Phase 2 SLES for SAP on page 53, or 6.4: Phase 2 RHEL on
page 58.
6.3 Phase 2 SLES for SAP
1. At the welcome screen select Next .
   Note: if you had to restart the server during one of the later steps and you see this screen again, change into a console or open a terminal and execute service openibd start. If you do not do this, you will not be able to configure the network correctly in later steps. To open a console, press Ctrl + Alt + X, then enter the command, and then enter exit to close the console.
2. Ensure that the customer accepts the SUSE(R) Linux Enterprise Server for SAP Applications 11 SP3 SUSE Software License Agreement. Select Next .
6.4 Phase 2 RHEL
8. Select the timezone tab and select the correct timezone. Select Forward .
9. Deselect "Enable kdump?". Select Finish , then select No .
10. Log in as root user.
11. Configure /etc/hosts: Add a line for gpfsnodeXX and hananodeXX (where XX is the node number,
e.g. 01) and a line for the external IP and hostname, for example:
192.168.10.110 gpfsnode10
192.168.20.110 hananode10
10.10.10.10    myhananode10.domainname myhananode10
12. Execute system-config-network and select DNS configuration. Do not use the Device configuration option.
    - As "Hostname" enter the fully qualified domain name.
    - Enter the DNS servers.
    - As "DNS search path" enter the domain.
13. Edit the configuration file of the network device for the external communication, e.g. ifcfg-eth4, in /etc/sysconfig/network-scripts/. (Do not change the settings for eth0-3; they are the slaves of bond0-1.) Make sure that the file contains the line ONBOOT=yes and that the line HWADDR= is deleted. In the end the file should look like this:
DEVICE=eth[X]
TYPE=Ethernet
UUID=[UUID]
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPV6INIT=no
IPADDR=[IP address]
NETMASK=[netmask]
GATEWAY=[gateway]
The networking adapters need to be configured for the customer's network landscape. Depending on the customer's network infrastructure, the other Ethernet adapters need to be modified according to table 15: IP address configuration on page 23. This is left to the customer and service personnel to properly define in advance.
(a) There are two bonded devices configured for the Mellanox adapters. These are used by default for the IBM GPFS and SAP HANA private networks and should not be changed. This is a private network and does not need to be connected to the customer's network landscape.
(b) Single node: If the customer wishes to use the 10Gb adapters for client access, then you need to change the adapter used for each of these bonded adapters. In a single node installation it is not necessary to use two adapters; only one adapter needs to be assigned the correct private networking host names and IP addresses. Configure the interfaces via the files ifcfg-bond0 and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.
Note
In case the customer plans to scale out the single node installation to a cluster by adding more servers: plan the network configuration for the GPFS and HANA networks as if the cluster were already present, to simplify a later scale-out.
(c) Single node without Mellanox cards: If the machine is configured without a Mellanox card, bond0 and bond1 will be empty (i.e. have no slave interfaces) but will still be present. There is no need to change the IP addresses of both bonded interfaces; they can remain 127.0.1.1 and 127.0.2.1. Ports of NICs that are placed in the server as a replacement for the Mellanox cards will be named starting from eth100.
(d) Cluster node: It is important to modify the host name/IP address pair gpfsnodeNN / 127.0.1.1 (e.g. 192.168.10.101/24) and the pair hananodeNN / 127.0.2.1 (e.g. 192.168.20.101/24) in order to properly auto-configure the private network. Follow the advice given by the customer in table 15: IP address configuration on page 23. Configure the interfaces via the files ifcfg-bond0 and ifcfg-bond1 in the directory /etc/sysconfig/network-scripts/.
Warning
If these values are not changed, the installation will fail at a later point. Please see figure 19 on page 56. Please change the value in the marked black box to reasonable values, e.g. 192.168.10.101/24 gpfsnode01 for bond0 and 192.168.20.101/24 hananode01 for bond1. Do not use the preset values in the fields IP address and hostname.
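A hypothetical ifcfg-bond0 for cluster node 01 following the values above (the bonding options line is an assumption; keep whatever bonding mode the installer preconfigured):

DEVICE=bond0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=192.168.10.101
NETMASK=255.255.255.0
# assumption: LACP bonding, matching the VLAG-enabled switch setup
BONDING_OPTS="mode=802.3ad miimon=100"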
Execute
6.5 Interim Check
Before starting phase three, it is good practice to ensure that you can access all machines on the network and that each node is ready to install and configure the SAP HANA appliance software. You can use the following commands to determine that each system is ready for the cluster install.
On every node run the following commands and check that they are consistent with the cluster you are about to install:
1. Review the physical partitions (sdx):
2. This command must properly show the node itself (not every node):
3. This command must properly show the node itself (not every node):
4. The following command lists all reachable servers in both internal networks. Ensure all servers are reachable. Except for the server's own adapter, MAC addresses are shown and can be used to verify that the right servers were found, and not other servers in the same network reachable through other network connections:
# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1
# ntpq -p
# date
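As a hedged example — not necessarily the tool intended above — a comparable reachability check that also reports MAC addresses on the local segment can be done with nmap's ping scan, run against both private subnets:

# nmap -sn 192.168.10.0/24
# nmap -sn 192.168.20.0/24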
If any of these values are not as expected, you should correct them and repeat this test before starting
with phase three.
6.5.1
6.5.1.1 SLES for SAP 11 SP3
Install the updates for libgcc_s1 and libstdc++6 shipped on the extra DVD delivered with the appliance.
6.5.1.2 RHEL 6.6
6.5.2
Add the external host names specified in step 9 of phase 2 (dialog "SAP HANA Configuration", see screenshot above) to the /etc/hosts file on all nodes, so that every node can resolve the external host names of the other nodes. Test this by pinging the external host name of every node from every node before continuing with the next phase.
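A small loop makes this check less error-prone; the host names below are placeholders for the external names entered during phase 2:

for h in myhananode01 myhananode02 myhananode03; do
    ping -c 1 "$h" >/dev/null && echo "$h reachable" || echo "$h FAILED"
done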
6.6 Phase 3
6.6.1 Verification of RAID Controller and HDD/SSD Firmware
Ensure that the RAID controllers, HDDs, and SSDs run the latest firmware. If you used BoMC14 in an earlier step to install all available firmware updates on this server, skip this step.
Note
Firmware bugs in older firmware versions may lead to decreasing performance or even data
loss.
6.6.2 HANA Installation
Attention
The SAP HANA installation packages are copied to the node in this step. Make sure that
the Lenovo non-OS components DVD is still mounted via IMM (or USB thumb drive), or the
installation will fail.
Phase three starts after the machine has rebooted and you have ascertained that all networking is working. Either from the console or from an SSH connection, you may call the Lenovo SAP HANA appliance configuration tool. It is recommended to call the configuration tool on the first node, but it can be started on any node of the cluster.
Attention
In case, you are connecting via SSH from a machine that is not set for the English language,
you must set the LANG environment variable to "C" beforehand. If not, the SAP HANA
Database Installation may break while trying to determine the hardware requirements.
# export LANG=C
Download the latest hardware check script from SAP Note 1658845 Recently certified SAP HANA hardware/OS not recognized. Copy the ZIP file to /root/HanaHwCheck.zip on the server. The automated (Lenovo) installer will update the HANA hardware check script automatically if it finds this file at this location.
Attention
Not providing the most recent HANA hardware check script may cause the HANA installation
to fail.
14 https://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-BOMC
6.6.2.1
# saphana-setup-saphana.sh
1. Read the License Agreement. Use ... to select OK .
2. Check that the appliance was detected correctly and confirm with OK .
Note
Currently the default and recommended value is /sapmnt. Nowadays SAP recommends using /hana, and this may become the default path in future releases.
Both paths are supported, but for new installations in legacy environments /sapmnt is strongly recommended.
The IBM GPFS internal name of this file system will still be sapmntdata in any case.
9. Enter a SID. Select OK .
10. Enter an Instance Number. Select OK .
11. Confirm the User ID of the HANA user, or enter a customized value. Select OK .
12. Confirm the Group ID of the HANA user, or enter a customized value. Select OK .
13. Enter the SAP HANA password. Select OK .
14. Confirm the password. Select OK .
Note
Follow the instructions in Section 7: After Installation on page 66.
Please also review SAP Note 1906381 Network setup for external communication for an overview of how HANA can connect to the client network.
6.6.3
Adding a second node for high availability is described in section 10.1: Single Node with HA Installation with Side-car Quorum Solution on page 103. Please refer to that section when installing a simple single node HA solution.
7 After Installation
After the installation of the Lenovo Solution you have to take several actions to ensure that the installation is correct.
7.1
First, execute a system check (see Section 15.2: Basic System Check on page 183) with the latest version of the check script.
Follow the instructions given by the check script to prevent unwanted behaviour of the appliance.
Warning
Update the kernel and IBM GPFS to the suggested levels. Earlier versions of GPFS
and the kernel have known bugs that may cause the appliance to stop working.
Attention
Do not change the SSH configuration for the root user (e.g. not allowing SSH logins).
SSH is required for IBM GPFS and is configured accordingly.
On x3850 X6 and x3950 X6 servers you can create a symbolic link from /sapmnt/<SID> to /sapmnt/shared/<SID> to simulate the GPFS filesystem layout of eX5-based appliances, if you use scripts or other tools that have this path hard-coded:
ln -s /sapmnt/shared/<SID> /sapmnt/<SID>
Install the SAP Solution Manager Diagnostics Agent (SMD). If the customer plans to integrate the new HANA server(s) into their existing SAP management infrastructure (SAP Solution Manager, System Landscape Directory), the SMD must be installed in preparation. The SAP Solution Manager Diagnostics Agent can be installed via the SAP HANA Lifecycle Manager (HLM).
To install the SMD via the HANA Lifecycle Manager, open a browser, navigate to https://<HANAServerHostname>:1129/lmsl/HLM/<SID>/ui?sid=<SID>, choose Add Solution Manager Diagnostics Agent (SMD), and follow the instructions on screen. Skip the registration forms for the Solution Manager and the System Landscape Directory if you do not wish to register the HANA installation at this time. For other means to use the HLM, or if the HLM is not accessible, please refer to the SAP HANA Update and Configuration Guide15.
The installation of the SAP Solution Manager Diagnostics Agent is documented in the chapter Adding a Solution Manager Diagnostics Agent on an SAP HANA System in the aforementioned guide.
Check that the HANA log mode is configured correctly.
If the log mode is wrong, the appliance will experience an out-of-space condition on the IBM GPFS (/sapmnt/). See SAP Note 1642148 FAQ: SAP HANA Database Backup & Recovery (No. 26: What general configuration options do I have for saving the log area?).
Make sure that the backup paths are configured correctly.
They are only allowed to point to the GPFS filesystem if it is used as a staging area for a third-party backup solution. Permanent backups on the GPFS are unsupported.
Check that the SAP Host Agent is running on every server.
If not, you can either reboot every server in the cluster or start the agent with the start command on every server.
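As a hedged aside — assuming the standard SAP Host Agent installation path — the agent's state can be queried with:

# /usr/sap/hostctrl/exe/saphostexec -status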
15 Obtainable from http://help.sap.com/hana_appliance
7.2
There are several options for connecting HANA to the client network, depending largely on the setup of the customer network. A good overview of the possibilities is given in:
SAP Note 1906381 Network setup for external communication
8 Disaster Recovery
The scope of this section is to provide a guide for the Lenovo Disaster Recovery (previously SAP Disaster Tolerance) solution for SAP HANA. The solution is implemented in two physically independent locations, with one location used as the production site and the second serving as the backup or disaster site. A third, optional location is possible for a tie-breaking (quorum) feature of GPFS.
The goal of DR is to enable a secondary data center to take over production services with exactly the same set of data as stored in the primary site's data center. Synchronous data replication between the primary and secondary site ensures zero data loss (RPO=0). This allows the protection of a data center against events like power outage, fire, flood, or hurricane. The time required to recover the services (RTO) differs for each installation, depending upon the exact client implementation.
8.1 Architecture
This section briefly explains the architecture of the Lenovo DR solution for SAP HANA and provides examples of how it can be installed in a standard two-tier or three-tier data center environment.
[Figure: DR architecture. Site A and Site B each hold the file system sapmntdata, kept identical by synchronous replication; an optional quorum node at Site C provides the GPFS tiebreaker.]
8.1.1 Terminology
The terms site A, primary site, and active site are used interchangeably in this document to refer to the location where the productive SAP HANA HA system is initially set up and used.
Similarly, site B, backup site, and passive site all refer to the second location that the productive SAP HANA HA system is copied to in the case of a disaster.
After a failover, the naming of these two sites may be swapped, depending on whether the customer wants to switch back as soon as possible or keep using the former backup site as the primary site.
Site C will refer to the quorum or tiebreaker site.
SAP also uses the terms Disaster Recovery (DR) and Disaster Tolerant (DT) interchangeably. We will try to be consistent and use DR in this document.
X6 Implementation Guide
1.9.96-13
68
Technical Documentation
8.1.2 Architectural overview
The Lenovo DR solution for SAP HANA can be thought of as two standard Lenovo HA clusters in two different sites combined into one large cluster. Each site can be planned as a standard Lenovo HA cluster with the same hardware requirements as the standard solution. Currently, the only architectural requirement is that both sites have the same number of server nodes and each site has the same number of network switches as the existing Lenovo HA cluster offering.
The idea of the Lenovo DR solution for SAP HANA is to have one stretched IBM GPFS cluster spanning both sites and providing one file system for SAP HANA. There are two separate SAP HANA clusters at the two sites that can access data in this single shared file system. Synchronous data replication built into the file system ensures that at any given point in time the exact same data exists in both data centers. Figure 27: DR Data Distribution in a Four Node Cluster on page 69 shows the high-level architecture.
Warning
As of December 2012, SAP has published an end-to-end value of 320s latency between the synchronous sites of a DR cluster. It is known by both SAP and Lenovo that this number by itself is not enough to describe whether the SAP HANA database can recover from a disaster or not.
Latency is a term that can be split into many different categories, such as network latency or application latency, each of which has its own values necessary for a proper DR setup. It also depends on whether you run On Line Analytical Processing (OLAP) or On Line Transaction Processing (OLTP) workloads.
Currently SAP is considering this value on a case-by-case basis, and it is important that you discuss these values with your customer and the SAP consultant on site.
The Lenovo DR solution for SAP HANA works with a total of three data copies. The first copy is kept local to the writing node. The second copy is stored on any other node except the writing node, and the third copy is always stored on a node at the remote site. Depending on the file size and the actual disk space usage of a certain node, GPFS tends to either cluster blocks on a node or stripe them across multiple nodes. The same applies to the distribution over disks within a node.
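The replication level that implements these three copies can be inspected on a running cluster with standard GPFS tooling; sapmntdata is the file system name used throughout this guide:

# show default (-m/-r) and maximum (-M/-R) metadata/data replica counts
mmlsfs sapmntdata -m -M -r -R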
[Figure 27: DR Data Distribution in a Four Node Cluster. For a write on node 1 at site A, the first replica is stored on the writing node's local disks, the second replica on another node of the same site, and the third replica synchronously on a node at site B; metadata is replicated synchronously as well. Each node's disks (sda/sdb partitions for OS and GPFS, plus fio devices) form their own GPFS failure group (FG 1,0,1 through FG 2,1,2 for nodes 1-8 across the two sites).]
The details of the network setup are not strictly defined. It is up to the project team to develop a solution that suits the customer's existing network infrastructure. This must be discussed well in advance together with the customer networking team.
The basic requirement is to have at least two sites; a third network site is needed if a so-called tiebreaker node will be part of the Disaster Tolerance architecture.
Each site will use a standard HA setup with its own dedicated GPFS and SAP HANA network. This can be provided by using the standard IBM RackSwitch G8264 10 Gbps Ethernet switches, which are part of the standard SAP HANA HA offering of Lenovo. The standard network requirements of a HA solution regarding the customer's uplink connectivity also apply to DR.
For the tiebreaker node at site C, there are no special network requirements as there is only one server.
For the connectivity between the two main sites, at least one dedicated optical fibre connection end-to-end between both sites is recommended. A routed or non-dedicated connection may be used, but no guarantees about performance or operation can be made. Using redundant optical fibres end-to-end may improve performance and reliability. The project team is responsible for working out a solution with respect to the customer's infrastructure and requirements. A dedicated Ethernet network needs to be provided for the GPFS network. For the configuration of the inter-site portchannel see Section 5.8: Inter-Site Portchannel Configuration on page 36.
[Figure 28: Logical DR Network Setup — the stretched GPFS network connects the HANA clusters at both sites, with the optional quorum node at Site C reachable over the GPFS network.]
Figure 27 on page 69 shows a scenario with four nodes on each site. Only the HANA-internal network and the GPFS network are shown; the uplinks connecting the HANA cluster to the client network are omitted.
In a solution with a quorum site, the tiebreaker node must be reachable from within the internal GPFS IPv4 network: each node must be able to reach the tiebreaker node and vice versa. There are no other special requirements on this connection; neither bandwidth nor latency guarantees are needed. It is acceptable to use a routed connection through the customer's internal network as long as it is reliable.
[Figure: Physical DR network. Four IBM RackSwitch G8264 switches (two per site) carry the 10Gbit HANA-internal and GPFS networks for nodes 1-4 (site A) and nodes 5-8 (site B); each site's switch pair is joined by a 40Gbit ISL, and the GPFS network is stretched between the sites.]
8.1.3
If the customer decides to use a tiebreaker node in a third site, an additional server with an appropriate GPFS license is required. Although any server could be used, we recommend using the Side-Car Quorum Node x3550 M3/M4 defined in section 10.1.2: Prepare quorum node on page 105. This definition includes the necessary licenses and services required for the tiebreaker node. This node is optional but recommended for increased reliability and simplicity in the case of a disaster.
The solution has been tested in setups with and without this additional node. The rationale for this node
is the split-brain scenario where the connection between the two main sites is lost. The tiebreaker node
helps in deciding which site is the active site and, thus, prevents the primary site from going down for
data integrity reasons. Additionally, this server eases some operational procedures by reducing both the
time needed for recovery and the likelihood of operating errors.
This document will describe the use of the tiebreaker node and explain the deviations when it is not
necessary.
8.2
Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence
over the instructions below.
8.3 Hardware Setup
This section describes how to physically install the System x machines and how to prepare UEFI for HANA. It also provides information about how the network has to be set up.
8.3.1 Site A and B
The hardware setup of the nodes at each site has to be performed as described in section 6: Guided Install of the Lenovo Solution on page 41. The following list summarises these steps:
- Ensure certified hardware is available and connected to power
- Verify firmware levels; they must be identical on all nodes
- Modify/check UEFI settings; they must be identical on all nodes
- Configure storage (RAID setup)
8.3.2
It is recommended to set up the tiebreaker node according to the description in section 10.1.2: Prepare quorum node on page 105.
The tiebreaker node must have a small partition (50 MB is sufficient) to hold a replica of the GPFS file system descriptors. It will not contain any data or metadata information. The node must be able to reach all other nodes at both site A and site B of the GPFS cluster. The partition can reside on a logical volume (LVM) if desired. However, GPFS must be able to recognize the partition, so when using LVM the name /dev/dm-X must be used instead of the logical volume name. Performance is not critical for this partition.
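When LVM is used, the matching /dev/dm-X name can be looked up as follows (illustrative commands; the actual names depend on the volume group and logical volume chosen):

# the symlink target under /dev/mapper shows the dm-X device for each LV
ls -l /dev/mapper/
# alternatively, list all device-mapper devices with their names
dmsetup info -c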
8.3.3
Refer to section 5.3: Network Configuration on page 22, which contains a template that can be used to gather all required networking parameters. Ideally, this is done before the installation starts at the customer location.
8.3.3.1
cluster:
Tiebreaker node
Parameter
Hostname
IP address for Hostname
IP address for GPFS Network
Value
Value
The setup of the switches used for the GPFS and SAP HANA network is described in section 5.4: Network
Switch Configuration For Clustered Installations on page 23. For the link between the switches on both
sites refer to the next sections.
8.3.5
The GPFS network will be stretched over site A and B, while the SAP HANA network must not. This
means that the GPFS network on both sites will be one subnet and each node can reach all other nodes
on both sites; whereas, the SAP HANA networks on site A and B are isolated from each other.
The GPFS network on both sites should be connected with at least a dedicated 10GBit connection. A
routed network is not recommended as it may have severe impact on the synchronous replication of the
data.
The SAP HANA network is separated on both sites. This is due to SAP HANA being operated in a cold
standby mode. For this reason, both sites will use the same hostnames and IP adresses for SAP HANA.
This requires a strict isolation of these two networks.
X6 Implementation Guide
1.9.96-13
72
Technical Documentation
8.3.6
The network connections in the customer network for SAP HANA access, management, backup and other
connections depends very much on the customer network and his requirements. General guidance can be
found in section 6: Guided Install of the Lenovo Solution on page 41.
8.3.7
The tiebreaker node at site C needs to be integrated as well into the GPFS cluster. Every node in the
cluster must be able to contact the tiebreaker node and vice versa.
This depends on the configuration of the tiebreaker node (one or more network interfaces), the subnet
used for GPFS traffic (private or public) and other parameters. It is up to the project team to come to
an agreed solution with the customer.
Possible setups include a multi-homed tiebreaker node or static host routes when private address ranges
are used. VPN, NAT or router capabilities are further options.
The following is an example for a setup with a GPFS subnet of 192.168.10.x and a tiebreaker node with
one network adapter and a public IP address in a 10.x.x.x range:
1. On the tiebreaker node add the GPFS address as an alias to the NIC attached to the public network
e.g.
1
IPADDR_1='192.168.10.99/24'
2. Add host routes on every node in the GPFS cluster to this IP alias.
1
3. Add host routes on the tiebreaker node for every node in the cluster.
1
2
3
4
4. Verify that the newly created alias is reachable throughout the cluster and all nodes can be pinged
from the tiebreaker node via the internal GPFS network addresses.
8.4
Software Setup
Note
The base installation changed with the advent of the new text based installer which also allows
the installation on Red Hat Enterprise Linux. This replaces the manual installation described
here in earlier releases.
X6 Implementation Guide
1.9.96-13
73
Technical Documentation
Note
Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata
is user configurable during installation. SAP HANA will be also installed into this path.
Lenovo currently recommends to use /sapmnt, while SAP promotes /hana.
The following commands and code snippets use /sapmnt. For any other path please replace
/sapmnt with the chosen path.
Install all standard DR servers as described in section 6: Guided Install of the Lenovo Solution on page
41. In phase 3 choose the role Cluster Node (Worker) for all servers. Please note that in the interim
check in section 6.5: Interim Check on page 60 each site is only expected to see only the site-local nodes
in the HANA network test.
For the optional quorum node, please follow the instructions given in section 10.1.2: Prepare quorum
node on page 105 and following to install the base operating system and software.
8.4.1
192.168.10.1XX gpfsnodeXX
192.168.20.1XX hananodeXX
The tiebreaker node only has a gpfsnode name as it is used solely for GPFS communication
192.168.10.1XX gpfsnodeXX
The GPFS network spans both sites, which means in an example with four nodes per site you have
gpfsnode01 up to gpfsnode08 (gpfsnode01-04 at site A, gpfsnode05-08 at site B).
The SAP HANA network is restricted to only one site, which in turn means you should use each hananodeXX entry twice (once per site). This effectively couples any active SAP HANA node to a backup node
on the second site. In the example with four nodes on each site you have hananode01 to hananode04 at
site A and hananode01 to hananode04 at site B.
8.4.1.1
1
2
3
4
5
6
7
8
9
10
11
12
13
...
# Second node on first site:
192.168.10.102 gpfsnode02
192.168.20.102 hananode02
192.168.10.101 gpfsnode01
192.168.20.101 hananode01
192.168.10.103 gpfsnode03
192.168.20.103 hananode03
192.168.10.104 gpfsnode04
192.168.20.104 hananode04
...
# Second node on second site (physically the sixth node)
192.168.10.106 gpfsnode06
X6 Implementation Guide
1.9.96-13
74
Technical Documentation
14
15
16
17
18
19
20
21
192.168.20.102
192.168.10.105
192.168.20.101
192.168.10.107
192.168.20.103
192.168.10.108
192.168.20.104
...
hananode02
gpfsnode05
hananode01
gpfsnode07
hananode03
gpfsnode08
hananode04
The optional tiebreaker node only has GPFS addresses. This has two consequences: the tiebreaker
node only has gpfsnodeXX entries in the /etc/hosts file for all nodes; and, all other nodes have no
hananodeXX entry for this special node. In our example above, a tiebreaker node would get allocated
gpfsnode99.
After editing the /etc/hosts entries it is a good idea to verify network connectivity. To do so, execute
the following command to list all nodes of the DR clusters attached to the GPFS network:
1
Generate a new SSH key for passwordless ssh access, authorize it and distribute it to the other nodes:
1
2
3
# ssh-keygen -q -b 4096 -N "" -C "Unique SSH key for root on DR Cluster" -f /root/.,ssh/id_rsa
# cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
# for node in gpfsnode0{1..8} ; do scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /root,/.ssh/authorized_keys root@$node:.ssh/ ; done
Distribute the known_hosts file to the other nodes:
X6 Implementation Guide
1.9.96-13
75
Technical Documentation
Note
In previously releases of this document the shipped SSH root key was used and distributed
among the nodes in the DR-enabled. This imposes a security risk and you should consider
replacing this key with a new unique key. Please contact support.
8.4.2
Create the necessary configuration files. On the first node (which will be the primary configuration
server), create a file /var/mmfs/config/nodes.cluster and add one line per node containing its GPFS
network hostname. If applicable, add the tiebreaker node as last node.
Next append ":quorum" (no spaces) to the end of line for some hosts, according to the following rules:
a) Distribute all available nodes (except tiebreaker) in four equal sized groups and append ":quorum" to
the first node of each group.
b) If a quorum node is available, mark it as quorum.
c) Without a quorum node, mark the second node of the first group as a quorum.
With an example of 8 nodes, you should have 5 nodes marked as quorum nodes. See the following example
for an 8 node DR cluster without and with a dedicated tiebreaker node (gpfsnode99):
Failure group 1
Topology
Vector
1,0,x
Failure group 2
2,0,x
Failure group 3
1,1,x
Failure group 4
2,1,x
Failure group 5
(tie breaker)
3,0,1
gpfsnode99:quorum
(not applicable)
gpfsnode01:quorum-manager
gpfsnode02:quorum-manager
gpfsnode03:quorum-manager
gpfsnode04:
gpfsnode05:quorum-manager
gpfsnode06:
gpfsnode07:quorum-manager
gpfsnode08:
Note
Adding node designation manager is optional as quorum nodes are automatically eligible to
be chosen as cluster manager.
One comment regarding the topology vectors, as they will be used in a later step. The value of x has to
be replaced with the number of the node within the failure group. If you have 3 nodes in each failure
X6 Implementation Guide
1.9.96-13
76
Technical Documentation
group, and the number of the nodes is from 1 to 3 in each failure group, then the second node in the first
failure group will be 1,0,2; the second node in the third failure group will be 1,1,2.
Create the GPFS cluster with the first node of each site as primary (-p) resp. secondary server (-s)
1
1
2
# mmstartup -a
Apply the following cluster configuration changes
1
2
3
# mmchconfig unmountOnDiskFail=meta -i
# mmchconfig panicOnDiskFail=meta -i
# /usr/bin/yes 999 | /usr/lpp/mmfs/bin/mmchconfig dataStructureDump=/tmp/GPFSdump,,pagepool=4G,maxMBpS=2048,maxFilesToCache=4000,skipDioWriteLogWrites=1,,nsdInlineWriteMax=1M,prefetchAggressivenessWrite=2,readReplicaPolicy=local,,enableRepWriteStream=false,enableLinuxReplicatedAIO=yes,nsdThreadsPerDisk=24
After this last command you need to restart GPFS with
1
2
# mmshutdown -a
# mmstartup -a
8.4.3
On the first node, create a file /var/mmfs/config/disk.list.data.fs. For each node add entries as
described in the following section, but replace the failureGroup with the correct topology vector for the
particular node. Make sure that the pool definitions are only once in this file.
8.4.3.1 GPFS 3.5 Disk Definitions For every HDD RAID device /dev/sdb and subsequent devices
add a NSD definition like the following template:
1
2
3
4
5
6
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1,0,1
pool=system
Please dont forget to increment the first number in the nsd line, e.g. data02node01 for the second HDD
block device. You can get a device list with lsscsi.
Then after adding als device stanzas add these lines unaltered:
X6 Implementation Guide
1.9.96-13
77
Technical Documentation
1
2
3
4
5
6
7
8
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1
When using a tiebreaker node add the following lines to the stanza file:
1
2
3
4
5
6
%nsd: device=/dev/sda3
nsd=desc01node99
servers=gpfsnode99
usage=descOnly
failureGroup=3,0,1
pool=system
Replace device, nsd name, server with the correct values where necessary.
If your setup includes a tiebreaker node determine the device name of the partition allocated for the
descriptor-only NSD and change the line in disk.list.data.fs starting with %nsd: device= accordingly.
8.4.4
Filesystem Creation
# mmcrnsd -F /var/mmfs/config/disk.list.data.fs -v no
Create the filesystem
# mmcrfs sapmntdata -F /var/mmfs/config/disk.list.data.fs -A no -B 512k -N 3000000 -,v no -m 3 -M 3 -r 3 -R 3 -j hcluster --write-affinity-depth 1 -s ,failureGroupRoundRobin --block-group-factor 1 -Q yes -T /sapmnt
Create filesets
1
2
3
# mmmount sapmntdata -a
To verify the file system is successfully mounted execute
# mmlsmount sapmntdata -L
Link the filesets in the filesystem
1
2
3
4
5
6
#
#
#
#
#
#
X6 Implementation Guide
1.9.96-13
78
Technical Documentation
8.4.5
We recommend to install SAP HANA on the backup site first and thereafter on the primary site. This is
safer to install because your backup site installation cannot accidentally make changes to your production
environment.
8.4.5.1 Install HANA on backup site Before continuing with the installation make sure that the
GPFS file system sapmntdata is mounted at /sapmnt. In order to prepare the backup site, it is necessary
to do a standard HANA installation and then delete the installed content on the shared filesystem.
8.4.5.1.1 Install SAP HANA software on backup site Please install SAP HANA on the backup
site as described in the official SAP documentation available here: http://help.sap.com/hana_appliance.
The location of the SAP HANA installation files is /var/tmp/saphana.
The roles (worker or standby) are not important, except that the first one needs to be a worker. We
recommend to install all other nodes as standby, as this installation type is faster.
8.4.5.1.2 Stop HANA and SAP Host agent on backup site
and stop SAP HANA:
1
$ HDB stop
Then log in as root and stop SAP Host agent and other services:
# /etc/init.d/sapinit stop
Afterwards disable the autostart of the sapinit service
X6 Implementation Guide
1.9.96-13
79
Technical Documentation
8.4.5.1.3 Delete SAP HANA shared content The purpose of this installation is to install the
node local parts of a SAP HANA system. After installing SAP HANA on all backup site nodes the data
in /sapmnt must be deleted:
1
2
3
# rm -r /sapmnt/data/<SID>
# rm -r /sapmnt/log/<SID>
# rm -r /sapmnt/shared/<SID>
8.4.5.1.4 Disable mmfsup script on backup site nodes An installation with the Recovery Image
will install a mmfsup script which will automatically start SAP HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames.)
The script resides in /var/mmfs/etc. Disable it on all cluster nodes.
1
# id <SID>adm
and compare the numerical IDs of <SID>adm and group sapsys. You can specify the ids in the SAP
HANA Installation process either over a configuration file or a commandline parameter, you find details
in the SAP documentation: SAP HANA Server Installation and Update Guide.
8.4.5.3 Disable mmfsup script on production site nodes An installation with the Recovery
Image will install an mmfsup script which will automatically start HANA after the file system comes up.
This must be deactivated as it may start SAP HANA on both sites (using the same hostnames.)
The script resides in /var/mmfs/etc. Remove it on all cluster nodes.
X6 Implementation Guide
1.9.96-13
80
Technical Documentation
8.4.6
8.4.6.1 Quorum node setup using a new node The setup of a new server can be done by following
the instructions in section 10.1.2: Prepare quorum node on page 105 excluding the setup of the switches
which does not apply to a DR configuration.
8.4.6.2 Tiebreaker node setup using an existing node If an existing node will be used as the
tiebreaker node, please consult the system administrator and ask him to:
Provide a partition which will be used for to hold the GPFS file descriptor information
Install GPFS
Build the GPFS portability layer. Note: This may require the installation of the kernel header files
/ sources and some development tools (compiler, make...)
Setup network access to all other GPFS cluster nodes in the GPFS network
Exchange ssh keys so that the tiebreaker node root account can be accessed without a password
from the other GPFS cluster nodes.
Follow the instructions in sections 10.1.6: Quorum Node IBM GPFS setup on page 108 and 10.1.7:
Quorum Node IBM GPFS installation on page 108.
General information how to install and setup GPFS can be found online in the Information Center section
Installing GPFS on Linux nodes.
8.4.7
Verify Installation
8.4.7.1
# mmgetstate -a
# mmlscluster
# mmgetstate -aLs
The cluster configuration is listed with
# mmlscluster
When using the tiebreaker node check that the tiebreaker node is a quorum node and that the
remaining quorum nodes are distributed evenly among the other file system failure groups. You see
the failure groups with
# mmlsdisk sapmntdata
X6 Implementation Guide
1.9.96-13
81
Technical Documentation
Information about the failure group setting can be found in section 8.4.2: GPFS Server configuration
on page 76. If not using the tiebreaker make sure that the active site has at least one more quorum
node than the passive site. In general, try to keep an odd number of quorum nodes.
Verify cluster manager location
Verify the location of the cluster manager depending on the use of the tiebreaker node
1
# mmlsmgr
If the solution uses a tiebreaker node, the cluster manager must be on the passive/backup site, in a
solution without a tiebreaker node, the cluster manager must be on the active site. To change the
cluster manager issue
# mmchmgr -c <node>
Verify replication factor 3 (= three copies, two local and one remote copy)
1
# mmlsfs sapmntdata
Verify that the following values are all set to 3:
1
2
3
4
-m
-M
-r
-R
Default
Maximum
Default
Maximum
number
number
number
number
of
of
of
of
metadata replicas
metadata replicas
data replicas
data replicas
# mmlsdisk sapmntdata
Make sure that the server nodes are distributed evenly among the failure groups.
# mmlsdisk sapmntdata -e
All disks up and ready
If there are disks down or suspended, check the reason (eg. hardware failure, system reboot, ...)
and restart them once the problem has been resolved.
The following command will try to start all disks in the file system. This has no effect on already
started disks.
X6 Implementation Guide
1.9.96-13
82
Technical Documentation
8.5
Extending a DR-Cluster
This section describes how to grow a DR cluster. Growing a DR enabled cluster requires that both sites
grow by the same number of nodes. In general the installation of each active/backup server couple needs
not to be done at the same time, but its highly recommended. The overcautious technician may also
decide to install the backup node prior to the active node.
The following sections will only explain the differences from the basic DR installation in the sections
before.
8.6
Please read chapter 9.2: Mixed eX5/X6 DR Clusters on page 97. Information given there takes precedence
over the instructions below.
8.6.1
Hardware Setup
Please refer to 8.3: Hardware Setup on page 71 and follow the instructions there. Ping the new machine
on the GPFS network from all machines to test if the network configuration is correct. Ping the new
machine on the HANA network from all servers, it is supposed to be reachable only from nodes on the
same site.
8.6.2
GPFS Part 1
1. First step is to add /etc/hosts entries on every machine. Lets assume that the new nodes are the
9th and 10th nodes with node09 going to the active site and 10 into the backup site. Distribute any
new nodes evenly into the existing failure groups (topology), so that a failure group has at most
one more node than the other, put the backup server into the corresponding FG on the backup site.
In the example above, the 9th node will go into failure group 1 (1,0,x) getting the topology vector
1,0,3 and the 10th node will go into failure group 3 (1,1,x) with topology vector 1,1,3.
On all existing nodes, add host entries for the the GPFS network, .e.g.:
1
2
192.168.1.109 gpfsnode09
192.168.1.110 gpfsnode10
On the new nodes add entries for all other nodes. Copying the entries from one of the existing
nodes is the easiest way.
First add host keys for the new nodes to the existing machines. Run on any existing node
# for srcnode in gpfsnode0{1..8} ; do echo node $srcnode ; ssh $srcnode 'for ,target in gpfsnode0{9,10} ; do echo -n $target ; ssh-keygen -R $target ; ,ssh-keyscan -t rsa target >> /root/.ssh/known_hosts ; done '; done
The value gpfsnode01..8 will generate a list from gpfsnode01 to gpfsnode08, if the host names differ
or are not consecutive, replace this with a space separated list of host names. The same applies to
gpfsnode09,10 which are the new nodes in this example.
X6 Implementation Guide
1.9.96-13
83
Technical Documentation
Then copy the root SSH key to the new news. Issue these command on one of the existing cluster
nodes:
1
# for node in gpfsnode{01..10} ; do echo -n $node ; ssh-keygen -R $node ; ssh-,keyscan -t rsa $node >> /root/.ssh/known_hosts ; done
Test the SSH key exchange by runnign this command on any node
# for srcnode in gpfsnode{01..10} ; do echo from node $srcgpfsnode ; ssh ,$srcnode 'for target in gpsfnode{01..10} ; do echo To node $target ; ssh ,$target hostname ; done '; done
The command should run without interaction and errors.
# cd /var/tmp/install/gpfs-<GPFS-RELEASE>
# rpm -ivh gpfs.base-<GPFS-RELEASE>-0.x86_64.rpm
#
#
#
#
cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
6. To add the new nodes to the cluster run on any running node
1
# mmaddnode -N gpfsnode09,gpfsnode10
X6 Implementation Guide
1.9.96-13
84
Technical Documentation
# mmstartup -N gpfsnode09,gpfsnode10
9. Create the disk descriptor files. Before adding the disks to the shared file system, you must create
the disk descriptor or stanza files. You can create them on any node on the cluster, but it is
preferably done on the node where the files for the initial cluster creation are located. Please see
chapter 8.4.3: GPFS Disk configuration on page 77 for a description of the stanza files. You only
need to create entries for the drives on the new nodes and you can omit the pool configuration
entries. Let us assume the new file is /var/mmfs/config/disk.list.data.gpfsnode0910.
10. Create NSDs
1
8.6.3
# mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnode0910
Skip this for a node on the active site. For the HANA installation on the backup site, we need a temporary
filesystem which must satisfy some requirements. RAM based filesystems are not sufficient, so we use the
fresh created NSDs for a temporary filesystem, install the backup instance, and destroy the temporary
filesystem afterwards before continuing with the installation.
1. Create a temporary filesystem
1
$ HDB stop
Then log in as root and stop SAP Host agent and other services:
# /etc/init.d/sapinit stop
Afterwards disable the autostart of the sapinit service
X6 Implementation Guide
1.9.96-13
85
Technical Documentation
# rm /var/mmfs/etc/mmfsup
6. Delete temporary filesystem After installing all new backup nodes, unmount temporary Filesystem
on all nodes
1
mmmumount sapmnttmp -a
and delete it
mmdelfs sapmnttmp
This will delete all shared HANA content and will leave the node specific HANA parts installed.
8.6.4
GPFS Part 2
# mmadddisk sapmntdata
-F /var/mmfs/config/disk.list.data.gpfsnode0910
# mmlsdisk sapmntdata
HANA
8.6.5.1
1. Please make sure that you have mounted the shared file system on the new nodes.
1
# mmlsmount sapmntdata -L
# cd /var/tmp/install/saphana/DATA_UNITS/SAP_HOST_AGENT_LINUX_X64
# rpm -ihv saphostagent.rpm
As recommended by the RPM installation, a password for sapadm may be set.
X6 Implementation Guide
1.9.96-13
86
Technical Documentation
4. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide".
Warning
SAP HANA in this DR solution must be installed using the hostname of the HANAinternal network (usually on bond1, hostname hananodeXX). The host based routing
used in the HA solution is not applicable for the DR solution.
8.7
IBM supports the installation of storage expansions in a DR scenario to allow clients to run a nonproductive SAP HANA instance on idling DR-site nodes. During normal operation in a DR scenario, all
nodes at one of the two sites are only receiving data from the active site and store them on their local
disks.
SAP is tolerating to run a non-productive SAP HANA instance on those nodes. The local disks of the
nodes are used for production data. A storage expansion is used to provide enough local storage for those
non-productive instances.
In the event of a disaster, when the backup site becomes the active site, all non-productive SAP HANA
instances have to be shut down to allow production to continue to run.
8.7.1
Architecture
This section briefly explains how IBM enables the use of idling DR-site nodes to run non-productive
SAP HANA instances.
8.7.1.1 Prerequisites The use of a storage expansion is only supported in a DR scenario. No
expansions can be used when running in an HA environment unless being part of the certified server
models.
All nodes on the DR-site must have a storage expansion connected. Having only a subset of the DR-site
nodes equipped with storage expansions is not a supported environment. Furthermore, all expansions
must have identical disk drives installed.
If the customer considers both participating data centers to be equal (which means that after a fail-over
of his production instances to the DR-site he will not manually fail production back to his site A data
center), then you must have storage expansion connected also to all primary site nodes. This storage
expansion will remain unused until you actually need to move data away from DR-site nodes which are
now being used to host SAP HANA production instances.
8.7.1.2 Architectural overview The following illustration shows you how IBMs solution for SAP HANA
DR with storage expansions looks like:
The expansion storage is visible as local storage only and connected via the SAS interface. The storage
is not shared by multiple nodes.
X6 Implementation Guide
1.9.96-13
87
Technical Documentation
node1
Site A
node2
node3
node4
node5
node6
Site B
node7
node8
HDD
HDD
HDD
HDD
HDD
HDD
HDD
fio
fio
fio
fio
fio
fio
fio
fio
third
replica
meta
data
Production
file
system
second
replica
first
replica
HDD
sda1 sda2
sda1 sda2
sda1 sda2
sda1 sda2
sda1 sda2
sda1 sda2
sda1 sda2
sda1 sda2
OS
OS
OS
OS
OS
OS
OS
OS
RAID Ctrl
RAID Ctrl
RAID Ctrl
RAID Ctrl
First replica
Second replica
...
...
...
...
Second file system spanning only expansion box drives (metadata and data)
Attention
The external storage can only be used to host data of non-productive SAP HANA instances.
The storage must not be used to expand space of the production file system or to store
backups.
8.7.1.3 Architectural comments IBM only support running GPFS with a replication factor of 2 for
the non-productive instance. This means, outages of a single node can be handled and no data is lost. We
do not support a replication factor of 3 because the scope of non-productive SAP HANA environments
does not include disaster recovery.
There will be exactly one new file system spanning all DR-site expansion box drives. While we do
not support a multi SID configuration it is a valid scenario to run, e.g., on some DR-site nodes a QA
environment and on other DR-site nodes development. This, however, has to be done on the same file
system.
IBM does not enable quotas on the new expansion box file system. Make sure to have either a valid
backup procedure in place or to regularly delete old backups.
8.7.2
Setup
This section assumes that the nodes have been successfully installed with an operating system already
(as required for a backup DR site).
8.7.2.1 Hardware setup Connect the EXP2524 SAS port labeled In to one of the M5120 or M5225
ports. For details, see the EXP2524 Installation Guide. Configure the drives as described in the section
6: Guided Install of the Lenovo Solution on page 41. Either reboot or rescan the SCSI bus and verify
that Linux recognizes the new drives.
X6 Implementation Guide
1.9.96-13
88
Technical Documentation
8.7.2.2 GPFS configuration You reuse the existing GPFS cluster and create a second file system
spanning only the expansion drives of the DR-site nodes.
Even if your setup includes expansions on the primary site, execute the procedure only on the DR-site
expansions. The primary site expansion drives will not be used in the beginning.
1. On each DR-site node, collect the device names of all expansion drives. When using the M5225
Controller you can get the drive names with the this command:
1
1
2
3
4
/dev/sde
/dev/sdf
/dev/sdg
/dev/sdh
for each of DR-site node. Note: After sdz, Linux wraps around and continues with sdaa, sdab, ...
/dev/sde:gpfsnode04::dataAndMetadata:4:ext01gpfsnode04:system
/dev/sdf:gpfsnode04::dataAndMetadata:4:ext02gpfsnode04:system
/dev/sdg:gpfsnode04::dataAndMetadata:4:ext03gpfsnode04:system
/dev/sdh:gpfsnode04::dataAndMetadata:4:ext04gpfsnode04:system
/dev/sde:gpfsnode05::dataAndMetadata:5:ext01gpfsnode05:system
/dev/sdf:gpfsnode05::dataAndMetadata:5:ext02gpfsnode05:system
/dev/sdg:gpfsnode05::dataAndMetadata:5:ext03gpfsnode05:system
/dev/sdh:gpfsnode05::dataAndMetadata:5:ext04gpfsnode05:system
/dev/sde:gpfsnode06::dataAndMetadata:6:ext01gpfsnode06:system
/dev/sdf:gpfsnode06::dataAndMetadata:6:ext02gpfsnode06:system
/dev/sdg:gpfsnode06::dataAndMetadata:6:ext03gpfsnode06:system
/dev/sdh:gpfsnode06::dataAndMetadata:6:ext04gpfsnode06:system
Store as /tmp/nsdlistexp.txt. Then create NSDs using those disks
# mmcrnsd -F /tmp/nsdlistexp.txt
# mmcrfs /dev/sapmntext -F /tmp/nsdlistexp.txt -A no -B 512k -N 3000000 -v no -,m 2 -M 2 -r 2 -R 2 -j hcluster --write-affinity-depth 1 -s ,failureGroupRoundRobin --block-group-factor=1 -T /sapmntext
X6 Implementation Guide
1.9.96-13
89
Technical Documentation
Warning
Be sure to use nsdlistexp.txt and not your list with internal drives! Using the wrong
drives can destroy your production data!
4. Mount file system on DR-site nodes only.
1
5. Install SAP HANA worker and standby nodes as described in the guide "SAP HANA Administration
Guide". Take care to install HANA on /sapmntext and not on /sapmnt.
Also take care that you dont use the UID (user id) and GID (group id) of the DR HANA instance
especially when installing non-productive HANA instances before installing the DR instance.
If you have expansion boxes connected also to your primary site nodes, they get activated only when you
need to migrate non-productive SAP HANA instances data away from DR-site notes. See the Lenovo
SAP HANA Appliance Operations Guide 16 for details.
When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the guide "SAP HANA Administration Guide".
16 SAP
X6 Implementation Guide
1.9.96-13
90
Technical Documentation
9.1
9.1.1
A mixed eX5/X6 cluster is a System x Solution for SAP HANA cluster consisting of eX5 based servers
(Intel Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivybridge, MT 3837 and 6241). Another
term used is "hybrid cluster". Due to the new storage layout for X6-only installations, an X6 configuration
must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is considered
to be configured in legacy or compatibility mode.
Besides the different storage layout, there are some minor configuration changes between the older Westmere appliance releases and the first X6 appliance versions. These will be explained below. Future
releases will level the differences.
9.1.2
9.1.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 cluster
is limited by the number of eX5 servers within that cluster. The number of X6 server must always be
less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported
options are either to increase the number of eX5 server so that they are still the majority or to switch to
a pure X6 cluster which requires a reinstallation.
For each eX5 server model exists a corresponding X6 server model which is permitted as a replacement:
eX5 T-Shirt Size
SSD (x3690, 7147-H3X, Generation 1)
S (x3690, 7147-HBX, Generation 2)
M (x3950, 7143-H2X or 7143-HBX)
L (x3950, 7143-H3X or 7143-HBX)
X6 Server Model
AC32S256C (2 CPUs, 256GB RAM)
AC34S512C (4 CPUs, 512GB RAM)
AC48S1024C (8 CPUs, 1024GB RAM)
9.1.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum
supported GPFS versions for the cluster are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8 (4.1.0.8)
which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix 8 can be
used. Contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8!
It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced
RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient
than the previously used RAID0 configuration. When installing a new cluster please use appliance version
1.6.60-7 or later for the eX5 servers.
Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system
quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance
X6 Implementation Guide
1.9.96-13
91
Technical Documentation
version. If this script is not available, please calculate the quotas manually following the instructions in
the appendix of the eX5 Operations Guide.
Since Appliance version 1.7.70-9 an updated quota calculation help script is installed which can detect a
hybrid cluster environment enabling it to use the correct formulas even when called on X6 nodes.
9.1.3
New Installation
In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For
eX5 servers, please use the installation description in Lenovo eX5 Systems Solution for SAP HANA Implementation Guide.
For the installation of the X6 server, please use the Lenovo X6 Systems Solution for SAP HANA Implementation Guide for System x X6 Servers and read the instructions below. Please read these
instructions before installing the new server and take care to implement them correctly.
Follow the Implementation Guide until (including) the call of the script saphana-setup-saphana.sh
with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option. This means
the script is only called once.
9.1.3.1 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first
internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase
2, login to the server and run
1
# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart, system2 ext2 "" 1675 3350
For SSD/S sized clusters this is not necessary.
9.1.3.2 Adapting the GPFS stanza file After configuring the base system and the subsequent
reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage
layout. For S/SSD model based cluster no change is needed as these models use only one GPFS storage
pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS
storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This
is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the 2nd
RAID array in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C)
to the storage pool hddpool.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the
usage and pool parameters as shown in table 39: Stanza file for X6 servers in eX5 clusters on page 93.
Please set the nsd, servers and failureGroup to their correct values.
Complete the installation as described in the eX5 Implementation Guide and run phase 3 (of the cluster
configuration) from any eX5 node. Do not run the cluster configuration on an X6 machine as this will
result in a misconfigured cluster. It is safe to install the whole cluster including the X6 servers from any
eX5 node.
9.1.3.3 Enable automatic restripe for whole cluster eX5 models up to appliance software version
1.6.60-7 installed a script which attempts to start all NSDs and restripes the GPFS filesystem if any NSD
was not up. This script was installed as a GPFS callback which gets triggered upon every node start. Since
appliance version 1.7.70-8 the script and the callback are no longer installed and replaced by a GPFSinternal restripe mechanism. The GPFS-internal restripe is enabled by setting the cluster configuration
value restripeOnDiskFailure=yes.
X6 Implementation Guide
1.9.96-13
92
Technical Documentation
Model
1
2
AC32S256C
(S/SSD)
3
4
5
6
Generated File
Change To
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
1
2
3
1
2
3
4
5
AC34S512C
(M)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
nsd=data01node04
5
servers=gpfsnode04
6
usage=dataAndMetadata 7
failureGroup=1004
8
pool=system
9
%nsd: device=/dev/sdc 10
nsd=data02node04
11
servers=gpfsnode04
12
usage=dataAndMetadata 13
failureGroup=1004
14
pool=system
15
16
17
18
1
2
3
1
2
3
4
5
AC48S1024C
(L)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
5
nsd=data01node04
6
servers=gpfsnode04
usage=dataAndMetadata 7
8
failureGroup=1004
9
pool=system
%nsd: device=/dev/sdc 10
11
nsd=data02node04
12
servers=gpfsnode04
usage=dataAndMetadata 13
14
failureGroup=1004
15
pool=system
16
17
18
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1004
pool=hddpool
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1004
pool=hddpool
X6 Implementation Guide
1.9.96-13
93
Technical Documentation
In a mixed cluster you must delete the callback and enable the new GPFS internal restripe.
Deactivate the callback and enable the automatic restripe with the following commands:
1
2
# mmdelcallback start-disks-on-startup
# mmchconfig restripeOnDiskFailure=yes
Both commands need to be run only once on any active cluster node.
9.1.4
When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the eX5
Implementation & Operations Guides.
No special handling is required besides using the saphana-quota-calculation.sh script only on eX5
nodes or X6 nodes installed with appliance version 1.7.70-9 or later. Do not run the quota calculator on
any X6 node installed with appliance version 1.7.70-8.
When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6
nodes according to the X6 Implementation Guide. After Phase 2 (the basic configuration) for X6 nodes
in M/L (x3950 based) clusters the first internal RAID array needs to be partitioned at the OS level.
Login to the server and run
1
# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart, system2 ext2 "" 1675 3350
For SSD/S sized clusters this is not necessary. Afterwards adapt the generated stanza file on each node
before adding these node to the cluster.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the
usage and pool parameters as shown in table 40: Stanza file for X6 servers in eX5 clusters on page 95.
Please set the nsd, servers and failureGroup to their correct values.
Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node.
Afterwards either run the quota calculation script from any eX5 nodes, from any X6 node installed with
appliance version 1.7.70-9 or later or do the manual calculation described in the appendix section of the
eX5 Operations Guide.
9.1.5
In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.
9.1.5.1 Quota Calculation eX5 based servers have used two so called fileset for a logical separation
of HANA data volumes and log files. Each fileset is limited with a quota. X6 servers use three filesets
for separating HANA data volumes, log files and the shared parts (like binaries, config, trace, backups).
When using X6 servers in a eX5 cluster, the two fileset setup is used on all nodes, so for the quotas the
eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix
of the guide. On any eX5 node and on X6 nodes with appliance version 1.7.73-9 you can use the quota
calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the
quota chapter in the appendix.
9.1.5.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP
HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.
X6 Implementation Guide
1.9.96-13
94
Technical Documentation
Model
1
2
AC32S256C
(S/SSD)
3
4
5
6
Generated File
Change To
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
1
2
3
1
2
3
4
5
AC34S512C
(M)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
nsd=data01node04
5
servers=gpfsnode04
6
usage=dataAndMetadata 7
failureGroup=1004
8
pool=system
9
%nsd: device=/dev/sdc 10
nsd=data02node04
11
servers=gpfsnode04
12
usage=dataAndMetadata 13
failureGroup=1004
14
pool=system
15
16
17
18
1
2
3
1
2
3
4
5
AC48S1024C
(L)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
5
nsd=data01node04
6
servers=gpfsnode04
usage=dataAndMetadata 7
8
failureGroup=1004
9
pool=system
%nsd: device=/dev/sdc 10
11
nsd=data02node04
12
servers=gpfsnode04
usage=dataAndMetadata 13
14
failureGroup=1004
15
pool=system
16
17
18
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1004
pool=hddpool
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1004
pool=hddpool
X6 Implementation Guide
1.9.96-13
95
Technical Documentation
9.1.5.3 Storage Device Failure For any failed storage device in a eX5 based node, the Implementation & Operation Guides for eX5 are fully applicable.
For X6 based nodes please use the Operation Guide for X6. The only difference in handling is that the
stanza files given in 9.1.4: Existing Cluster Extension/Node Replacement on page 94 must be used.
Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.
X6 Implementation Guide
1.9.96-13
96
Technical Documentation
9.2
9.2.1
A mixed eX5/X6 DR cluster is a Lenovo Solution DR-enabled cluster consisting of eX5 based servers (Intel
Westmere, MT 7143 and 7147) and X6 based servers (Intel Ivybridge, MT 3837 and 6241). Another term
used is "hybrid DR cluster". Due to the new storage layout for X6-only installations, an X6 configuration
must be slightly modified before an X6 node can be added to an eX5 cluster. Such an X6 node is
considered to be configured in legacy or compatibility mode.
Besides the different storage layout, there are some minor configuration changes between the older Westmere appliance releases and the first X6 appliance versions. These will be explained below. Future
releases will level the differences.
9.2.2
9.2.2.1 Limit of X6 nodes in a cluster The maximum number of X6 servers in an eX5 DR cluster
is limited by the number of eX5 servers within that cluster. The number of X6 server must always be
less than the number of eX5 nodes. If you plan to use more X6 servers in a cluster, the only supported
options are either to increase the number of eX5 server so that they are still the majority or to switch to
a pure X6 cluster which requires a reinstallation.
For DR-clusters we require that both sites (primary & secondary) must consist only of eX5 server or only
of X6 servers or of a mix of eX5 and X6 server where the eX5 servers have the majority on each site.
For example these combinations are allowed:
Primary site: 6 eX5, secondary site: 6 X6 servers
This is allowed as no site is mixed.
Primary site: 6 eX5, secondary site: 4 eX5 & 2 X6 servers
This is allowed as the first site is not mixed and the eX5 have the majority on the secondary site.
Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 & 1 X6
While both sites are mixed, but in each site the eX5 are the majority.
These combinations are not allowed:
Primary site: 3 ex5 & 3 X6, secondary site: 6 eX5 servers
This is not allowed as on the first site the eX5 servers are not the majority.
Primary site: 4 eX5 & 3 X6, secondary site: 6 eX5 servers
The eX5 servers are the majority on both sites, but the sites differ in size.
For each eX5 server model exists a corresponding X6 server model which is permitted as a replacement:
X6 Implementation Guide
1.9.96-13
97
Technical Documentation
X6 Server Model
AC32S256C (2 CPUs, 256GB RAM)
AC34S512C (4 CPUs, 512GB RAM)
AC48S1024C (8 CPUs, 1024GB RAM)
9.2.2.2 Prerequisites Before deploying any X6 server to an eX5 cluster, the GPFS filesystem software on the eX5 servers must be updated to the same version installed on the X6 models. The minimum
supported GPFS versions for hybrid DR clusters are GPFS 3.5 PTF 19 (3.5.0-19) or GPFS 4.1 PTF 8
(4.1.0.8) which may require an update even on the X6 nodes. Alternatively PTF 17 (3.5.0-17) with eFix
8 can be used. Contact IBM support to obtain this eFix. Do not use plain 3.5.0-17 without eFix 8!
It is required to use only eX5 servers installed with appliance version 1.6.60-7 or later, which introduced
RAID5 in cluster configurations. The RAID5 setup is perceived as being more reliable and convenient
than the previously used RAID0 configuration. When installing a new cluster please use appliance version
1.6.60-7 or later for the eX5 servers.
Appliance versions 1.6.60-7 and later contain a helper script for calculating the necessary file system
quotas. In a hybrid cluster please use the script on the eX5 cluster node installed with the latest appliance
version. If this script is not available, please calculate the quotas manually following the instructions in
the appendix of the eX5 Operations Guide.
Since Appliance version 1.7.70-9 an updated quota calculation help script is installed which can detect a
hybrid cluster environment enabling it to use the correct formulas even when called on X6 nodes.
9.2.3
New Installation
In general, the installation and operation instructions for eX5 and X6-based servers remain valid. For
eX5 servers, please use the installation description in Lenovo eX5 Systems Solution for SAP HANA Implementation Guide.
For the installation of the X6 server, please use the Lenovo X6 Systems Solution for SAP HANA Implementation Guide for System x X6 Servers and read the instructions below. Please read these
instructions before installing the new server and take care to implement them correctly.
Follow the Implementation Guide until (including) the call of the script saphana-setup-saphana.sh
with the Cluster (Worker) option. Do not execute the script with the Cluster (Master) option. This means
the script is only called once.
9.2.3.1 Partitioning for M/L sized clusters For X6 nodes in M/L (x3950 based) clusters the first
internal RAID array needs to be partitioned at the OS level. After finishing the base installation in phase
2, login to the server and run
1
# parted /dev/sdb --script mklabel gpt unit gib mkpart system1 ext2 "" 0 1675 mkpart, system2 ext2 "" 1675 3350
For SSD/S sized clusters this is not necessary.
9.2.3.2 Adapting the GPFS stanza file After configuring the base system and the subsequent
reboot in phase 2 of the installation, the GPFS stanza files need to be adapted to the older eX5 storage
layout. For S/SSD model based cluster no change is needed as these models use only one GPFS storage
pool like the new X6 models. In clusters based on x3950 models, storage is divided into two GPFS
X6 Implementation Guide
1.9.96-13
98
Technical Documentation
storage pools. The new X6 servers must provide these two storage pools in order to be compatible. This
is achieved by assigning the internal RAID array to the GPFS storage pool system and assigning the 2nd
RAID array in the external SAS enclosure (AC34S512C) resp. in the upper storage book (AC48S1024C)
to the storage pool hddpool.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on all X6 nodes and change the
usage and pool parameters as shown in table 42: Stanza file for X6 servers in eX5 clusters on page 100:
Please set the nsd, servers and failureGroup to their correct values.
Complete the installation as described in the chapter "Disaster Recovery" in the Implementation Guide
for eX5.
9.2.4
When expanding a mixed cluster with additional eX5 servers, please follow the instructions in the Disaster
Recovery sections of the eX5 Implementation & Operations Guides. Do not run the quota calculator on
any X6 node installed with appliance version 1.7.70-8.
When adding new X6 nodes to an existing hybrid cluster or an eX5-only cluster, please install the X6
nodes according to the X6 Implementation Guide. After Phase 2 (the basic configuration) adapt the
generated stanza file on each node before adding these node to the cluster.
Edit the stanza file (/var/mmfs/config/disk.list.data.gpfsnode*) on the X6 nodes and change the
usage and pool parameters as shown in table 43: Stanza file for X6 servers in eX5 clusters on page 101.
Please set the nsd, servers and failureGroup to their correct values.
Follow the normal instructions given in the eX5 Operations Guide in chapter 4.2 Adding a cluster node.
Afterwards either run the quota calculation script from any eX5 nodes, from any X6 node installed with
appliance version 1.7.70-9 or later or do the manual calculation described in the appendix section of the
eX5 Operations Guide.
9.2.5
In general the eX5 Operations Guide is applicable for the whole cluster including the new X6 servers.
9.2.5.1 Quota Calculation eX5 based servers have used two so called fileset for a logical separation
of HANA data volumes and log files. Each fileset is limited with a quota. X6 servers use three filesets
for separating HANA data volumes, log files and the shared parts (like binaries, config, trace, backups).
When using X6 servers in a eX5 cluster, the two fileset setup is used on all nodes, so for the quotas the
eX5 version of the Operations Guide is applicable. The quota calculation is explained in the appendix
of the guide. On any eX5 node and on X6 nodes with appliance version 1.7.73-9 you can use the quota
calculation script saphana-quota-calculator.sh. The usage of this script is also documented in the
quota chapter in the appendix.
9.2.5.2 HANA installation When installing additional SAP HANA instances or reinstalling SAP
HANA, SAP HANA must be installed into /sapmnt as described in the eX5 documentation.
Note
In the DR solution only for the hanalog fileset a quota is set.
X6 Implementation Guide
1.9.96-13
99
Technical Documentation
Model
Generated File
1
2
AC32S256C
(S/SSD)
3
4
5
6
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
Change To
1
2
3
4
5
6
1
2
3
1
2
3
4
5
AC34S512C
(M)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
nsd=data01node04
5
servers=gpfsnode04
6
usage=dataAndMetadata 7
failureGroup=1004
8
pool=system
9
%nsd: device=/dev/sdc 10
nsd=data02node04
11
servers=gpfsnode04
12
usage=dataAndMetadata 13
failureGroup=1004
14
pool=system
15
16
17
18
1
2
3
1
2
3
4
5
6
7
8
AC48S1024C 9
(L)
10
11
12
13
14
15
16
17
18
%nsd: device=/dev/sdb 4
5
nsd=data01node04
6
servers=gpfsnode04
usage=dataAndMetadata 7
failureGroup=1004
8
9
pool=system
%nsd: device=/dev/sdc 10
nsd=data02node04
11
12
servers=gpfsnode04
usage=dataAndMetadata 13
failureGroup=1004
14
15
pool=system
%nsd: device=/dev/sdd 16
nsd=data03node04
17
servers=gpfsnode04
18
usage=dataAndMetadata 19
20
failureGroup=1004
21
pool=system
22
23
24
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
%nsd: device=/dev/sdd
nsd=data02node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
X6 Implementation Guide
1.9.96-13
100
Technical Documentation
Model
Generated File
1
2
AC32S256C
(S/SSD)
3
4
5
6
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1004
pool=system
Change To
1
2
3
4
5
6
1
2
3
1
2
3
4
5
AC34S512C
(M)
6
7
8
9
10
11
12
%nsd: device=/dev/sdb 4
nsd=data01node04
5
servers=gpfsnode04
6
usage=dataAndMetadata 7
failureGroup=1004
8
pool=system
9
%nsd: device=/dev/sdc 10
nsd=data02node04
11
servers=gpfsnode04
12
usage=dataAndMetadata 13
failureGroup=1004
14
pool=system
15
16
17
18
1
2
3
1
2
3
4
5
6
7
8
AC48S1024C 9
(L)
10
11
12
13
14
15
16
17
18
%nsd: device=/dev/sdb 4
5
nsd=data01node04
6
servers=gpfsnode04
usage=dataAndMetadata 7
failureGroup=1004
8
9
pool=system
%nsd: device=/dev/sdc 10
nsd=data02node04
11
12
servers=gpfsnode04
usage=dataAndMetadata 13
failureGroup=1004
14
15
pool=system
%nsd: device=/dev/sdd 16
nsd=data03node04
17
servers=gpfsnode04
18
usage=dataAndMetadata 19
20
failureGroup=1004
21
pool=system
22
23
24
%nsd: device=/dev/sdb
nsd=data01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
%nsd: device=/dev/sdb1
nsd=MDdata01node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdb2
nsd=MDdata02node04
servers=gpfsnode04
usage=dataAndMetadata
failureGroup=1,0,4
pool=system
%nsd: device=/dev/sdc
nsd=data01node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
%nsd: device=/dev/sdd
nsd=data02node04
servers=gpfsnode04
usage=dataOnly
failureGroup=1,0,4
pool=hddpool
X6 Implementation Guide
1.9.96-13
101
Technical Documentation
9.2.5.3 Storage Device Failure For any failed storage device in a eX5 based node, the Implementation & Operation Guides for eX5 are fully applicable.
For X6 based nodes please use the Operation Guide for X6. The only difference in handling is that the
stanza files given in 9.2.4: Existing Cluster Extension/Node Replacement on page 99 must be used.
Please also ensure that CacheCade acceleration is enabled for newly created RAID devices on X6.
X6 Implementation Guide
1.9.96-13
102
Technical Documentation
10
This section covers installations that consist of just one single node in production and need to have HA
or DR features using SAP System Replication or IBM GPFS Storage replication.
10.1
A single node with high availability (HA) describes the smallest possible configuration for a highly
available Lenovo solution for a SAP HANA system. In principle, this can be described as a cluster where
only a single node is highly available, since there is only one SAP HANA worker node. There is no
distribution of information across the nodes as there is no secondary worker node attached. Figure 31:
Single Node with High Availability on page 103 shows a high level overview of the system landscape with
two SAP HANA appliances and an IBM GPFS Quorum node.
Worker Node
Standby Node
Quorum Node
GPFS Links
SAP HANA Links
Inter-Switch Link (ISL)
G8264 switches
X6 Implementation Guide
1.9.96-13
103
Technical Documentation
node1
node2
node3
Quorum
Data
second
replica
first
replica
HDD
Data
File
System
Descriptor
FS Desc
FS Desc
meta
data
HDD
Meta
data
Meta
data
LG1
FG1
LG1
FG2
sda1
OS
sda2
sda1
sda2
FS Desc
LG1
FG3
sda1
OS
sda2
OS
10.1.1
To begin the installation, you need to install both Lenovo Workload Optimized Systems using the steps
at the beginning of chapter 6: Guided Install of the Lenovo Solution on page 41. Configure the network
interfaces (internal and external) and the NTP server(s) as described there.
1. Start the text based installer as follows on each of the two nodes:
1
saphana-setup-saphana.sh -H
The switch -H prevents SAP HANA from being installed automatically. This needs to be done
manually later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 64
together with the steps described below.
X6 Implementation Guide
1.9.96-13
104
Technical Documentation
10.1.2
The quorum node used can be, e.g. an Lenovo System x3550 M4 with a single CPU and three local disks
configured in a RAID5 configuration. It also contains an Emulex Virtual Fabric Adapter II with two 10
Gigabit Ethernet ports. We recommend the following server to be used as quorum nodes for the best
price/performance of this node. Bigger systems only require a larger cost for the GPFS license and are
not needed. See table 44 on page 105.
Part
Number
Qty.
1
6
3
1
1
1
1
1
28
1
1
1
1
2
10.1.2.1 Install the Operating System  You may use SLES 11 to install the OS on this machine using the default settings. While installing Linux, please select the pattern "C/C++ Compiler and Tools" in addition to the default software selection. If you do not do this at install time, open the YaST software panel and install the above pattern before installing and compiling GPFS.
Note
SLES 11 does not contain RAID drivers for the IBM ServeRAID M5110 RAID controller (see table 44). In order to install this driver at the same time, you must prepare a USB drive with the appropriate ServeRAID device update driver (dud) file, which can be found on IBM Fix Central. Load it during the installation by pressing <F6> at the boot splash screen. Please refer to the driver README instructions for further details.
Note
We recommend always using the latest version of SLES for the quorum node.
You can download the IBM ServeRAID drivers from the IBM support sites, e.g. http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5082165. If you install using the SLES for SAP Applications 11 DVD, you will be able to install with this dud file, but you will not be able to reboot the system, because the device driver used during installation is not compatible with the newer kernel delivered on the SLES for SAP Applications 11 installation media. Therefore we do not recommend using the SLES for SAP Applications 11 installation media for this server.
10.1.2.2 Disk partitioning  The SLES 11 installation media will automatically partition your hard drive if you do not remove the boot option "autoyast=usb:///" completely. Although this is not dramatic, it would mean you would have to resize the partitions afterwards with a tool like gparted, which is not described in this document.
We recommend removing the boot option "autoyast=usb:///" completely and manually configuring the partitions as described in table 45: Single Node with HA OS Partitioning on page 106.
Device      Size   Mount point
/dev/sda1   rest   /
/dev/sda2   10GB   swap
/dev/sda3   10GB   not mounted - not formatted - used for GPFS NSD

Table 45: Single Node with HA OS Partitioning
10.1.2.3 Firewall  Disable the integrated firewall during the network configuration steps, or else you won't be able to connect to the server until the firewall has been configured correctly. It may be turned on later and configured according to the SAP HANA Security Guidelines.
10.1.3
Follow the information in table 46: Single Node with HA OS Networking Setup on page 106 to set up the networking during the SLES for SAP Applications OS installation.
Network            Description
10GbE port 0       Connect 10GigE port to the first G8264 switch
10GbE port 1       Connect 10GigE port to the second G8264 switch
bond0              Bond port 0 and port 1 together. Set the bonding options to: mode=4 xmit_hash_policy=layer3+4
Host Name          gpfsnode99
GPFS IP address    Place at the end of the range (e.g. 192.168.10.253)
HANA IP address    Not needed, as this node will not run SAP HANA

Table 46: Single Node with HA OS Networking Setup
(Figure: network overview of the SAP HANA Single Node with HA appliance: node1, node2, and the quorum node connect their bonded 10GbE GPFS and SAP HANA interfaces to two G8264 switches joined by inter-switch links; the 1GbE IMM/system management ports and the SAP client/SAP Business Suite networks attach through switches of the customer's choice)
10.1.3.1 Switch configuration  The network switches need to be configured in the standard scale-out configuration described in section 5.6.7: Network Configurations in a Clustered Environment on page 30. The 10GigE connections of the additional quorum node are configured as an extension of the existing vLAG configuration: the ports of the new network links need to be added to the correct VLANs, and the vLAG and LACP settings need to be made, as sketched below the table.
Description        Ports   vLAG - LACP key   PVID
G8264 Switch #1    22      1002              101
G8264 Switch #2    22      1002              101
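A minimal sketch of the corresponding switch commands, assuming the IBM Networking OS ISCLI of the G8264 and the port/key/PVID values from the table above (verify the exact syntax against your switch firmware documentation):

interface port 22
  pvid 101
  lacp mode active
  lacp key 1002
  exit
vlag adminkey 1002 enable

Run the equivalent commands on both switches so that the two links of the quorum node's bond form one LACP/vLAG trunk.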
10.1.4
The host file /etc/hosts on all three cluster nodes needs to have the following entries. Change the IP addresses to the ones used in your scenario, and add any missing entries, for instance external hostnames.
192.168.10.101 gpfsnode01 gpfsnode01
192.168.10.102 gpfsnode02 gpfsnode02
192.168.10.253 gpfsnode99 gpfsnode99

10.1.5 SSH configuration
The SSH configuration also needs to be extended to the third node. Each node needs to have the public SSH keys of every other node so that the communication between the GPFS nodes is guaranteed.
Run the following commands on the new quorum node:

ssh-copy-id gpfsnode01
ssh-copy-id gpfsnode02

Run the following command on each of the first two nodes with the GPFS private network hostname of the new quorum node:

ssh-copy-id gpfsnode99
10.1.6
Update the file /var/mmfs/config/nodes.cluster on the first node (gpfsnode01) to the following content, as it may be needed later:

gpfsnode01:quorum
gpfsnode02:quorum
gpfsnode99:quorum
Besides the necessary number of quorum nodes, it is also required to have a quorum on the file system descriptor. The number of copies of the file system descriptor depends on the number of disks in different failure groups. To maintain file system operations, GPFS requires a quorum of the majority of the replicas of the file system descriptor. For a two node HA cluster it is therefore necessary to also have a copy of the descriptor on the quorum node. A disk needs to be made available to GPFS on the additional quorum node which will only hold a copy of the file system descriptor; it does not hold any data or metadata.
10.1.7
Copy the GPFS installation packages from the first node:

mkdir -p /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS-4.1* /var/tmp/install/gpfs-4.1
scp gpfsnode01:/var/tmp/install/gpfs-4.1/GPFS_4.1* /var/tmp/install/gpfs-4.1

This should give you the base installer archive GPFS_4.1_STD_LSX_QSG.tar.gz and the PTF GPFS-4.1.0.<PTF>-x86_64-Linux.standard.tar.gz.
Extract the IBM GPFS archives and start the installer:
cd /var/tmp/install/gpfs-4.1
tar xvf GPFS_4.1_STD_LSX_QSG.tar.gz
tar xvf GPFS-4.1.0.1-x86_64-Linux.standard.tar.gz
./gpfs_install-4.1.0-0_x86_64 --dir . --text-only
Accept the license by pressing "1". Then install the RPMs:
rpm -ivh gpfs.base-${gpfs_release}-0.x86_64.rpm
rpm -ivh gpfs.gpl-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.msg.en_US-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.docs-${gpfs_release}-0.noarch.rpm
rpm -ivh gpfs.gskit-*.x86_64.rpm
rpm -ivh gpfs.ext-${gpfs_release}-0.x86_64.rpm
rpm -Uvh gpfs.base-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
rpm -Uvh gpfs.ext-${gpfs_release}-${gpfs_update_fixpack}.x86_64.update.rpm
rpm -Uvh gpfs.gpl-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.msg.en_US-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm
rpm -Uvh gpfs.docs-${gpfs_release}-${gpfs_update_fixpack}.noarch.rpm

mkdir -p /usr/lpp/mmfs/4.1/
cp -pr license /usr/lpp/mmfs/4.1/
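The shell variables used in the rpm file names above must match the installed release; a minimal sketch, assuming GPFS 4.1.0 with PTF 1 (adjust to the versions you actually downloaded):

# release and fixpack used in the rpm file names above
gpfs_release=4.1.0
gpfs_update_fixpack=1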
10.1.7.1 Build the IBM GPFS Portability Layer  Follow the instructions in /usr/lpp/mmfs/src/README. In general, you may build the IBM GPFS libraries as follows:

cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
10.1.7.2
1. Create /etc/profile.d/saphana-profile.sh with the following content:

PATH=$PATH:/usr/lpp/mmfs/bin

2. Source the profile and create the required directories:

source /etc/profile.d/saphana-profile.sh
mkdir /tmp/GPFSdump
mkdir /var/mmfs/config
10.1.8
1. Add the new node to the existing cluster (run this on one of the existing nodes):

mmaddnode gpfsnode99

2. Mark backup and quorum node as quorum nodes for the cluster, e.g. with:

mmchnode --quorum -N gpfsnode02,gpfsnode99

3. Start GPFS on the new node:

mmstartup
10.1.9
Create a disk descriptor file /var/mmfs/config/disk.list.quorum.gpfsnode99 in the configuration directory of the quorum node. It should contain the following line, which defines the disk partition on the quorum node as an NSD with the explicit function to hold the file system descriptor:

/dev/sda3:gpfsnode99::descOnly:1099:quorum01node99

Create the NSD by running the mmcrnsd command on the quorum node:

mmcrnsd -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no
10.1.10
After creating the NSD, the disk needs to be added to the file system by running the mmadddisk command, e.g.:

mmadddisk sapmntdata -F /var/mmfs/config/disk.list.quorum.gpfsnode99 -v no
10.1.11
Execute the command mmlscluster on one of the cluster nodes. The output should look similar to this:
GPFS cluster information
========================
  GPFS cluster name:         HANAcluster.gpfsnode01
  GPFS cluster id:           12394192078945061775
  GPFS UID domain:           HANAcluster.gpfsnode01
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

 Node  Daemon node name  IP address       Admin node name  Designation
 ----------------------------------------------------------------------
    1  gpfsnode01        192.168.10.101   gpfsnode01       quorum
    2  gpfsnode02        192.168.10.102   gpfsnode02       quorum
    3  gpfsnode99        192.168.10.253   gpfsnode99       quorum
10.1.11.1 List the IBM GPFS Disks  Check the disks in the cluster. There are 2 data devices on each of the NSD servers and none on the quorum node, which holds only the descriptor-only NSD. The listing of the command mmlsdisk sapmntdata -L shows that there is one disk per failure group which contains a file system descriptor. This ensures that a quorum may be reached if a node fails.
disk            driver  sector  failure  holds     holds                        disk
name            type    size    group    metadata  data   status  availability  id   pool    remarks
--------------  ------  ------  -------  --------  -----  ------  ------------  ---  ------  -------
data01node01    nsd     512     1001     yes       yes    ready   up            1    system  desc
data02node01    nsd     512     1001     yes       yes    ready   up            2    system
data01node02    nsd     512     1002     yes       yes    ready   up            3    system  desc
data02node02    nsd     512     1002     yes       yes    ready   up            4    system
quorum01node99  nsd     512     1003     no        no     ready   up            5    system  desc
Number of quorum disks: 3
Read quorum value:  2
Write quorum value: 2

10.1.12
10.2 Single Node with Stretched HA
This solution is designed to provide improved high-availability capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. There is one active SAP HANA instance running on the primary node, and the database data is replicated by IBM GPFS to the secondary node. The secondary node runs in hot-standby, ready to take over operation if the primary node experiences any failure. In such a 1+1 stretched HA scenario the secondary node is usually located at a distance from the primary node, for example in a different fire compartment zone or at the other end of the campus; depending on distances it can also be on a different campus in the same city. No non-production SAP HANA instance is allowed to run in this scenario.
Because of the importance of the quorum node, it is recommended to place it at a third site. We understand, however, that this is not always feasible. This leads to the following two designs. In the first, figure 34: Single Node with stretched HA - Two Site Approach on page 112, the quorum node is placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running even if the link to the DR-site node gets interrupted.
Figure 34: Single Node with stretched HA - Two Site Approach (diagram: worker node and quorum node at the primary site, standby node at site B; GPFS links, SAP HANA links, and an inter-switch link (ISL) between the G8264 switches)

The second design places the quorum node at a third site (diagram: quorum node at its own site, worker node at the primary site, standby node at site B, connected via GPFS links and G8264 switches).
10.2.1
This scenario must be installed like a conventional 1+1 HA scenario, as shown above in 10.1.1: Installation of SAP HANA appliance single node with HA on page 104. The major difference is the network setup. It can be either routed or switched, depending on the client's environment (in conventional 1+1 HA scenarios there is only one IBM-provided switch between the hops). Usually, clients have different types of links spanning the two sites and use different network equipment technologies. The client is allowed to use its own network equipment (i.e. switches) on the secondary site. Ensure that the separation of network interfaces is kept across both nodes (distinct switches or VLANs for each IBM GPFS and HANA network port per node); this guarantees the high availability of the solution. The file system layout is shown in figure 36: File System Layout - Single Node stretched HA on page 113.
Figure 36: File System Layout - Single Node stretched HA (diagram: node1 and node2 each hold one data replica, metadata, and a file system descriptor on their HDDs in failure groups FG1 and FG2; the quorum node (node3) holds only a file system descriptor in FG3; each node boots its OS from sda1/sda2)
10.2.2
10.3 Single Node with Disaster Recovery
This solution is designed to provide disaster recovery capabilities for a single node SAP HANA installation. It can be applied to any SAP HANA machine size. There is one active SAP HANA instance running on the primary site node, and a standby node on the backup site is ready to take over operation in case of a disaster. The difference between a single node with stretched HA and a single node with DR installation is that automatic failover is sacrificed for the possibility to run a non-production SAP HANA instance on the DR-site node. Otherwise, the two setups are identical. The setup of this solution is a manual process after SLES has been installed.
Because of the importance of the quorum node, it is recommended to place it at a third site. We understand, however, that this is not always feasible. This leads to the following two designs. In the first, figure 37: Single Node with Disaster Recovery - Two Site Approach on page 114, the quorum node is placed at the primary site. This ensures that IBM GPFS on the primary site node stays up and running even if the link to the DR-site node gets interrupted.
Figure 37: Single Node with Disaster Recovery - Two Site Approach (diagram: worker node and quorum node at the primary site, DR node with storage expansion for a non-prod DB instance at site B; GPFS links, SAP HANA links, and an inter-switch link (ISL) between the G8264 switches)
The second approach places the quorum node at a third site. The network architecture can be seen in
figure 38: Single Node with Disaster Recovery - Three Site Approach on page 114.
Figure 38: Single Node with Disaster Recovery - Three Site Approach (diagram: quorum node at site C, worker node at the primary site, standby node at site B; GPFS links and G8264 switches)
10.3.1
This scenario has to be installed in the exact same way as described in 10.1.1: Installation of SAP HANA
appliance single node with HA on page 104. IBM GPFS replicates data to the backup site node. The
difference is in the configuration of SAP HANA.
10.3.2
This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives. The overall file system architecture is illustrated in figure 39: File System Layout - Single Node with DR with Storage Expansion on page 115.

Figure 39: File System Layout - Single Node with DR with Storage Expansion (diagram: node1 and node2 each hold one data replica, metadata, and a file system descriptor in FG1/FG2; the quorum node holds only a file system descriptor in FG3; the DR node additionally attaches M5120-based expansion storage for the non-production instance; OS on sda1/sda2 of each node)
10.4 Single Node with HADR using IBM GPFS Storage Replication
This solution is designed to provide the maximum level of redundancy for a single node SAP HANA installation. It can be applied to any SAP HANA configuration size. High availability concepts ensure that the database stays up if the primary node has an issue. Disaster recovery concepts ensure that the database stays up if the first two SAP HANA nodes (residing in the primary customer data center) become unavailable. Figure 40: Single Node with HADR using IBM GPFS Storage Replication on page 116 illustrates the overall architecture of the solution.
Figure 40: Single Node with HADR using IBM GPFS Storage Replication (diagram: worker node and standby node at the primary site, DR node with storage expansion for a non-prod DB instance at site B; GPFS links and G8264 switches)
10.4.1
Install the latest supported IBM Systems Solution for SAP HANA on all three nodes by using the latest supported SLES for SAP Applications DVD and the latest non-OS component DVD. The procedure is similar to that described in Installation of SAP HANA appliance single node with HA. The final file system layout is shown in figure 41 on page 117.
(Figure 41: file system layout of the single node HADR setup: node1, node2, and node3 each hold one of three data replicas (first, second, third) plus metadata and a file system descriptor, in failure groups FG1, FG2, and FG3; OS on sda1/sda2 of each node)
Start the text based installer on the nodes:

saphana-setup-saphana.sh -H

The switch -H prevents SAP HANA from being installed automatically; this needs to be done manually later. Refer to the steps as stated in section 6.6.2.2: Cluster Installation on page 64 together with the steps described below.
Set the replication factors of the file system to three and verify them:

mmchfs sapmntdata -m 3 -r 3
mmlsfs sapmntdata
...
 -m  3   Default number of metadata replicas
 -M  3   Maximum number of metadata replicas
 -r  3   Default number of data replicas
 -R  3   Maximum number of data replicas
...
6. Restripe the data on the IBM GPFS file system so that all files have the required three replicas:

mmrestripefs sapmntdata -R

7. Set the GPFS behavior on disk failure:

mmchconfig unmountOnDiskFail=meta
mmchconfig panicOnDiskFail=meta

8. Adjust the quotas on the file system. The log quota is set to 1 TB regardless of memory size.
9. Install SAP HANA similarly as described in section 8.4.5: SAP HANA appliance installation on page 79.
10.4.2
This solution supports the additional use of the DR-site node to host a non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up the additional disk drives. The overall file system architecture is illustrated in figure 42: File System Layout - Single Node HADR with Storage Expansion on page 119.
Figure 42: File System Layout - Single Node HADR with Storage Expansion (diagram: node1, node2, and node3 each hold one of three data replicas plus metadata and a file system descriptor in FG1/FG2/FG3; node3 additionally attaches M5120-based expansion storage for the non-production instance; OS on sda1/sda2 of each node)
10.5 Single Node DR Installation with SAP HANA System Replication
This solution provides redundancy at the application layer. It can be applied to any SAP HANA configuration size. For details, see the official SAP HANA documentation on http://help.sap.com/hana. There are two ways to design the network for such a DR solution based on System Replication. As the IBM GPFS interfaces on the DR-site node are not connected to the primary site, a set of redundant switches is optional. This leads to one architecture with switches and one architecture without switches between the SAP HANA nodes. Figure 43: Single Node DR with SAP System Replication on page 120 shows the solution with switches.
Figure 43: Single Node DR with SAP System Replication (diagram: worker node at the primary site and DR node with storage expansion for a non-prod DB instance at site B, connected through G8264 switches)

The second variant omits the switches (diagram: worker node at the primary site connected directly to the DR node with storage expansion for a non-prod DB instance at site B).
10.5.1
Each site is considered to be a single node, as far as SLES and IBM GPFS are concerned. The final
file system layout can be seen in figure 45: File System Layout of Single Node DR with SAP System
Replication on page 121.
Figure 45: File System Layout of Single Node DR with SAP System Replication (diagram: node1 forms GPFS Cluster A with file system A, node2 forms GPFS Cluster B with file system B; each file system holds a single data replica, metadata, and a file system descriptor in FG1 on local HDDs; SAP HANA System Replication connects the two nodes; OS on sda1/sda2 of each node)
Perform a single node installation on both nodes as described in 6.6.2.1: Single Node Installation on page 63, but start the installer with the -H option:

saphana-setup-saphana.sh -H

In the option list select Single Node. The switch -H prevents HANA from being installed automatically; this needs to be done manually later. Data replication is handled at the application level by SAP HANA and can happen synchronously or asynchronously. Configure the network connection for SAP HANA and verify the connectivity.
10.5.2
This setup supports the additional use of the DR-site node to host a non-production SAP HANA instance.
The layout of the two file systems (production and non-production) is illustrated in figure 46: File System
Layout of Single Node DR with SAP System Replication with Storage Expansion on page 122.
Figure 46: File System Layout of Single Node DR with SAP System Replication with Storage Expansion (diagram: as in figure 45, node1 and node2 each form their own GPFS cluster with a local file system holding one data replica, metadata, and a file system descriptor; node2 additionally provides a second file system on M5120-based expansion storage for the non-production instance)
On the remote site node (receiving the replication data from the primary SAP HANA instance) you will have two file systems configured. The primary file system spans local disks only and is to be configured in the exact same way as the primary site file system; it will host the replicated data coming in from the active production SAP HANA instance. The second file system consists only of storage expansion box drives attached to the remote site node and will host the data of the non-production SAP HANA instance. Follow the instructions in 10.7: Expansion Storage Setup for Non-productive SAP HANA Instance on page 126 to set up these additional disk drives.
10.6 Single Node with HA using IBM GPFS Storage Replication and DR using SAP HANA System Replication
This approach also provides maximum redundancy for single node SAP HANA installations. We use the term 1+1/1 to describe this style of single node installation. It can be applied to any SAP HANA configuration size. 1+1/1 combines the IBM GPFS storage replication feature with the SAP HANA System Replication feature. For HA (1+1) it uses IBM GPFS storage replication: the active and the standby node are in the same IBM GPFS cluster and have access to the same file system, and whenever the active node writes data to disk, IBM GPFS replicates it to the standby node.
In addition to that, SAP HANA System Replication transfers data to a DR node on a remote site. In case of a disaster in the primary site data center the DR node can be used to host SAP HANA. SAP HANA System Replication can run either in synchronous or in asynchronous replication mode. The DR node forms a separate IBM GPFS cluster consisting just of itself, with its own file system on local disk. There is no logical connection to the primary site IBM GPFS cluster; as a consequence, the IBM GPFS network adapter on the DR node is to be left unconnected. This leads to two possible network architectures. The first one provides redundant switches on both sites; figure 47: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication on page 123 shows this design.
Figure 47: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication (diagram: worker node, standby node, and quorum node at the primary site; DR node with storage expansion for a non-prod DB instance at site B; GPFS links and G8264 switches)
The second architecture drops the switches on the DR site and instead connects the only required network
interfaces (the 10 Gbit connection for SAP HANA communication) directly to the primary site switches.
This is illustrated in figure 48: Single Node with HA using IBM GPFS Storage Replication and DR using
System Replication without remote site Switches on page 124.
Figure 48: Single Node with HA using IBM GPFS Storage Replication and DR using System Replication without remote site Switches (diagram: as in figure 47, but the DR node's 10GbE SAP HANA interfaces connect directly to the primary site G8264 switches)
10.6.1
The two nodes on the primary site are to be installed in the exact same way as a 1+1 HA environment, described in 10.1.1: Installation of SAP HANA appliance single node with HA on page 104. There is one IBM GPFS cluster and one file system spanning both nodes, with IBM GPFS taking care of replicating the data to the standby node (r=2, m=2).
To install the DR node, follow all steps of a standard SAP HANA single node installation apart from installing SAP HANA itself (use the -H option). Please refer to 10.5: Single Node DR Installation with SAP HANA System Replication on page 119 for details. The OS and IBM GPFS on the DR node have no logical dependency on the primary site node; the coupling is established at the application level with SAP HANA in the next step.
The final file system layout is shown in figure 49: File System of Single Node with HA and DR with System Replication on page 125, which illustrates the use of the two technologies, IBM GPFS storage replication and SAP HANA system replication.
Figure 49: File System of Single Node with HA and DR with System Replication (diagram: GPFS Cluster A contains node1, node2, and the quorum node (node3); the data is replicated twice across FG1 and FG2, with file system descriptors in FG1, FG2, and FG3; GPFS Cluster B consists only of the DR node with a single-replica file system B on local HDDs; SAP HANA System Replication links the two clusters; OS on sda1/sda2 of each node)
10.6.2
Install two separate instances of SAP HANA, one in each site. For the primary site please follow the corresponding steps for a clustered HA installation.
On the DR node you have to follow all steps of a standard SAP HANA single node installation. This includes installing all components of SAP HANA and making sure that it runs self-contained. You then have to follow the official SAP HANA documentation to enable SAP HANA System Replication between the instance on the primary site node and the instance on the DR node.
Figure 50: File System of Single Node with HA and DR with System Replication and Storage Expansion (diagram: as in figure 49, with the DR node in GPFS Cluster B additionally attaching M5120-based expansion storage for the non-production instance)
10.7 Expansion Storage Setup for Non-productive SAP HANA Instance
This section describes how to set up the disks in an expansion storage unit that hosts a non-productive SAP HANA instance. Expansion storage is supported in environments where the nodes at a DR site would otherwise be idle.
Depending on the memory size of the nodes you have a different number of drives in the expansion units. Create as many (8+p) RAID 5 arrays as possible and declare the remaining drives as hot spares. For details on how to use the RAID configuration utility see 6.2.1: Storage Configuration RAID Setup on page 48. Each RAID 5 device will be given to IBM GPFS as an NSD.
Collect the device names of all newly created virtual drives. Then create NSDs on them according to the following rules:
1. all NSDs will be dataAndMetadata
2. all NSDs go into the system pool
3. the naming scheme is extXXnodeYY, with XX being the two-digit drive number and YY the node number
4. use one single failure group for all expansion box drives; make sure it is unique within your cluster
Store a disk descriptor file similar to the following as /tmp/nsdlistexp.txt:
%nsd: device=/dev/sdd
nsd=ext01node02
servers=gpfsnode02
usage=dataAndMetadata
failureGroup=2
pool=system
%nsd: device=/dev/sde
nsd=ext02node02
servers=gpfsnode02
usage=dataAndMetadata
failureGroup=2
pool=system
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1
Create the NSDs:

# mmcrnsd -F /tmp/nsdlistexp.txt

Create the file system and mount it:
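A minimal sketch of the file system creation, assuming the device name sapmntext, the stanza file above, and the mount point /sapmntext used later in this section (verify the options against your GPFS release):

# mmcrfs sapmntext -F /tmp/nsdlistexp.txt -A yes -T /sapmntext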
# mmmount sapmntext
If your client has a storage expansion connected to both nodes, primary site and backup site, then you need to apply the above procedure twice, once for each node. Each expansion box file system is to be handled separately. Do not create a single file system that spans the disks of both expansion boxes!
This scenario is used if both data centers, and thus both nodes, are considered equal and you want to be able to run production SAP HANA out of both data centers. In this case non-production SAP HANA instances must also be able to run on both nodes, hence the need for a dedicated /sapmntext file system on both sides.
11 Virtualization
The Lenovo Solution can be installed inside of a VMware virtual machine starting with Support Package Stack (SPS) 05. Currently SAP supports the following virtualization solutions:
VMware vSphere 5.1 and SAP HANA SPS05 (or later releases) for non-production use cases
VMware vSphere 5.5 and SAP HANA SPS07 (or later releases) for production and non-production use cases
For non-production use multiple virtual machines may be deployed. For production use only single node installations are supported. See SAP Note 1788665 SAP HANA Support for VMware Virtualized Environments.
For the VMware vSphere configuration please see SAP Note 1122388 Linux: VMware vSphere configuration guidelines.
Attention
For Lenovo servers with Intel Haswell EX processors the minimum supported version of VMware vSphere is 5.5U2.
The sizing of a virtual machine has to be done according to the existing SAP HANA sizing guidelines
for single node installations. The CPU/RAM ratio has to be met. In general SAP HANA virtualized
with VMware vSphere is sized the same as non-virtualized SAP HANA deployments. In other words,
for sizing the virtual machine (VM) the CPU/memory ratio as used for bare-metal sizing is taken into
account to ensure locality of memory access on the underlying hardware resources.
Name  vCPUs  Virtual memory (GB)  Ratio  Total HDD for OS (GB)  Total HDD for GPFS (GB)
VM1   10     64                   1      128                    416
VM2   20     128                  2      128                    736
VM3   30     192                  3      128                    1056
VM4   40     256                  4      128                    1376
VM5   50     320                  5      128                    1696
VM6   60     384                  6      128                    2016
11.1 Getting Started
11.1.1 Memory Overhead
CPU and memory overcommitment is not allowed in virtual HANA environments. For this reason memory has to be set aside for the ESXi hypervisor to run and manage the virtual machines.
A very conservative estimate for the amount of memory that needs to be left unassigned to the SAP HANA virtual machines for overhead is 3 to 4 percent. For example, on a system having 1 TB of RAM, approximately 30 to 40 GB would need to be left unassigned to the virtual machines.
In a system with 1TB of RAM a single VM6 machine with 384GB RAM could be installed, leaving the rest of the system unused. Even two VM6 machines would still leave enough unassigned memory for the hypervisor and the virtual machine memory overhead.
11.1.2
Configure UEFI
Apply the UEFI configuration as described in section 6: Guided Install of the Lenovo Solution on page
41.
11.1.3
The VMware ESXi 5.5 hypervisor is to be installed on a USB pen drive. The drive is located at an internal USB port in the server, which prevents an unintended removal of the pen drive.
Boot the server with the USB pen drive attached. Enter the BIOS and select Boot Manager, then Boot from embedded hypervisor. VMware ESXi 5.5 does not boot from a USB drive when the BIOS is in legacy mode; it must be in UEFI mode.
11.1.4
To be able to connect to the ESXi hypervisor you have to configure the management network. By default, ESXi connects to the first available network adapter via DHCP, which is not always desired.
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
7. Set the primary and secondary DNS server and the hostname, and press <Enter>.
11.1.5
By default, remote command execution is disabled on an ESXi host, and you cannot log in to the host using a remote shell. You can enable remote command execution from the direct console or from the vSphere Client.
To enable SSH access in the direct console:
1. At the direct console of the ESXi host, press F2 and provide credentials when prompted.
2. Scroll to Troubleshooting Options and press <Enter>.
3. Choose "Enable SSH" and press <Enter>. On the left, "Enable SSH" changes to "Disable SSH". On the right, "SSH is Disabled" changes to "SSH is Enabled".
4. Press Esc until you return to the main direct console screen.
11.1.6
To be able to use the storage on an X6 machine you have to configure the RAID adapters. You can install the StorCLI tool directly under VMware ESXi 5.5; as a prerequisite, SSH has to be enabled on the VMware ESXi 5.5 host.
You can download the latest StorCLI version from http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5092951. Copy the files to the VMware ESXi host via SCP.
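The StorCLI package for ESXi ships as a VIB; a minimal sketch of the installation, assuming a hypothetical file name under /tmp (esxcli requires an absolute path):

# install the StorCLI VIB copied over via SCP; --no-sig-check skips signature validation
esxcli software vib install -v /tmp/vmware-esx-storcli-1.07.07.vib --no-sig-check

After installation the tool is typically found under /opt/lsi/storcli/storcli.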
11.1.6.1
esxcfg-module -d <mod-name>
11.1.7
Since the ESXi hypervisor runs on standard System x HANA hardware, there is no external storage attached. Open an SSH session on the ESXi hypervisor.
List the installed storage devices and file systems (figure 59), then create a VMFS5 filesystem on a partition of the internal storage; a sketch for a System x3850 X6 follows below.
Figure 59: ESXi 5.x filesystems on a System x3850 X6. The VFAT filesystems belong to the USB device.
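A minimal sketch of both steps, assuming hypothetical device and datastore names (take the actual naa.* identifier from the device listing on your system):

# list storage devices and existing file systems
esxcli storage core device list
df -h

# create a VMFS5 datastore labeled "datastore1" on partition 1 of the chosen device
vmkfstools -C vmfs5 -S datastore1 /vmfs/devices/disks/naa.600605b008e2a2f000000000:1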
11.1.8 Setting up vSwitches
The core of VMware vSphere networking is the virtual switch. vSphere supports two types of switches: the standard switch (VSS) and the distributed switch (VDS). The latter is needed for vMotion; however, since vMotion is not supported in this solution, we only describe standard switches. Virtual switches are necessary for the virtual machines to connect to each other or to the outside world.
The VMs reach the physical adapters through vSwitches. The communication uplink, i.e. the network interface to which you assigned the IP address for the management network, is always vSwitch0.
A newly created standard vSwitch has no connection to a physical interface by default. This can be useful if you want to have an isolated VM.
## adding switches
esxcli network vswitch standard add --vswitch-name=vSwitchGPFS --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchHANA --ports=24
esxcli network vswitch standard add --vswitch-name=vSwitchKOM --ports=24
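The MTU and port group steps are sketched below, assuming jumbo frames for the GPFS vSwitch and the "GPFS Network" port group name used later in the *.vmx examples; adapt names and MTU to your environment:

# changing MTU (jumbo frames for the GPFS network)
esxcli network vswitch standard set --vswitch-name=vSwitchGPFS --mtu=9000

# adding portgroups so that VMs can attach to the vSwitches
esxcli network vswitch standard portgroup add --portgroup-name="GPFS Network" --vswitch-name=vSwitchGPFS
esxcli network vswitch standard portgroup add --portgroup-name="HANA Network" --vswitch-name=vSwitchHANA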
11.1.9
Teaming must be set up on the ESXi hypervisor; setting up teaming inside the VMs has no effect. Teaming here always means HA (failover) teaming. To set up teaming you add a NIC (uplink) to a vSwitch, and you can then inspect and set the failover policy of that vSwitch.
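A minimal sketch of these two steps with esxcli, assuming hypothetical uplink names vmnic1 and vmnic2 on vSwitchGPFS:

# add physical NICs as uplinks to the vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchGPFS
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitchGPFS

# show the current failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitchGPFS

# set both uplinks active for HA teaming
esxcli network vswitch standard policy failover set --vswitch-name=vSwitchGPFS --active-uplinks=vmnic1,vmnic2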
11.1.10
There are two ways to provide the needed ISOs for the virtual machines: NFS-connected storage from an external source, or a datastore on the server.
11.1.10.1 Setting up NFS datastore  It is easier to store the SLES for SAP 11 and the non-OS component ISOs on a separate filesystem and mount it via NFS on the ESXi hypervisor. To create an NFS mount, log in to the hypervisor via SSH and execute:
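A minimal sketch, assuming a hypothetical NFS server address and export path:

# mount an NFS export as datastore "isos" on the ESXi host
esxcli storage nfs add --host=192.168.1.10 --share=/export/isos --volume-name=isos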
11.1.10.2 Setting up a local datastore A datastore is a directory on the ESXi hypervisor in which
you copy the SLES and non-OS component ISOs. Therefore the filesystems must be created first. Connect
via SSH to the ESXi hypervisor.
All mounted volumes are available at /vmfs/volumes.
11.1.11
To restart the ESXi 5.5 hypervisor press F12 at the ESXi prompt. You have to authenticate before you can actually restart the hypervisor.
11.1.12
VMware vSphere Client is required to perform many of the tasks described in this document. Complete the following steps to install VMware vSphere Client on a suitable system in your network.
Note
To avoid any unexpected behavior, it is strongly recommended that you use the VMware vSphere Client that matches the version of the SAP HANA system hardware's VMware ESXi 5 hypervisor. If you already have an appropriate version of the VMware vSphere Client installed, skip to the next section 11.2: Configuring and Starting VMs with vSphere Client on page 138.
1. Boot the system hardware to the VMware ESXi 5 hypervisor. The IP address of the VMware ESXi
5 hypervisor is displayed on the console.
Note
If you have already added a host name to your DNS, you can use the host name instead
of the IP address.
2. On the Microsoft Windows system where VMware vSphere Client will be installed, open a secure
web connection (HTTPS) and enter the IP address of VMware ESXi 5 hypervisor in the browser
address bar. The VMware ESXi 5 welcome screen is displayed.
3. Download the vSphere client and follow the on-screen instructions to install the client. Note: If a
security warning window opens, click the Ignore button.
Note
VMware vCenter server also provides a web based vSphere Client that can be
used. Open a secure web connection (HTTPS) to the vCenter server to the address
https://<address to vCenter server>/vsphere-client/
11.2 Configuring and Starting VMs with vSphere Client
To configure and start the virtual machines, complete the following steps.
Note
The illustrations in this document might differ slightly from what you see on your screen.
1. Log in to the VMware vSphere Client. Type the IP address or host name of the host system, and
your user name and password and click the Login button.
(a) If a security warning window opens, ignore the warning and install the certificate.
(b) On a new server, you might also see a warning that there is no datastore; ignore this warning,
too.
2. The virtual machine is created with the aid of the vCenter GUI. You can use the WEB-GUI as
well, if you prefer it.
virtualHW.version = "10"
memsize = "<sizeoframyouneed>"
numvcpus = "<numofcpuyouneed>"

For the changes to take effect you must reload the VM.
11.3 Operating System (SLES for SAP) Installation
After starting the virtual machine the installation prompt appears. Use the arrow keys to select the line "SLES for SAP Applications - Installation with external profile". Move the cursor to the boot options and change the autoyast parameter to autoyast=device://sr1/. See figure 84: Changing the autoyast parameter for installation on page 150.
Note
Please continue with the installation instructions in section 6.3: Phase 2 SLES for SAP on page 53.
11.4 Operating System (Red Hat Enterprise Linux 6.5 and 6.6) Installation
After starting the virtual machine the installation prompt appears. Edit the boot options and add "ks=cdrom://ks.cfg". See figure 85: Adding kickstart parameter for install on page 151.
Note
Please continue with the installation instructions in section 6.4: Phase 2 RHEL on page 58, but also execute the steps in the following section.
11.4.1
After the installation you need to log in as root and perform the following tasks:
Remove the file /etc/modprobe.d/bonding.conf.
Remove the files ifcfg-bond0, ifcfg-bond1, and ifcfg-eth3 in /etc/sysconfig/network-scripts.
Edit ifcfg-eth0 and remove the lines MASTER=bond0 and slave=yes.
The ifcfg-eth0 file should look like this:
DEVICE=eth0
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
IPADDR=[IPADDR of Server]
NETMASK=[netmask]
IPV6INIT=no
The configuration for eth1 and eth2 is similar. Please keep in mind that eth1 is the GPFS network interface (gpfsnode01) and eth2 is the HANA network interface (hananode01).
Edit /etc/hosts and add the IP address and full name of your server, the IP and name of gpfsnode01, and the IP and name of hananode01.
Reboot the VM. After the reboot continue with the installation as described in section 6.6: Phase 3 on page 62.
11.5
11.5.1
After installation the I/O scheduler should be noop. To check the scheduler for the running system (this checks drive sdb), run:

# cat /sys/block/sdb/queue/scheduler

If noop is not the active scheduler, change it in the running system.
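A minimal sketch of changing the scheduler at runtime (per block device; the setting does not survive a reboot):

# echo noop > /sys/block/sdb/queue/scheduler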
Additionally, set the following vmw_pvscsi module parameters on the kernel command line:

vmw_pvscsi.cmd_per_lun=1024
vmw_pvscsi.ring_pages=32

The complete kernel line in /boot/grub/menu.lst will look like this:
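A sketch of such a kernel line, assuming the SLES kernel version named in section 13.4, a hypothetical root device, and the noop scheduler made persistent via elevator=noop:

kernel /boot/vmlinuz-3.0.101-0.47.52-default root=/dev/sda2 elevator=noop vmw_pvscsi.cmd_per_lun=1024 vmw_pvscsi.ring_pages=32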
Parameters in the *.vmx file  Memory preallocation: it is sensible to allocate all memory at boot time. This is done with the sched.mem.prealloc parameter set to TRUE. It is mandatory to set the sched.mem.min parameter as well; if you do not, the VM will fail to start. Usually the sched.mem.min parameter equals the amount of memory in MB assigned to the VM.
sched.mem.min = "xxx"
sched.mem.prealloc = "TRUE"
sched.swap.vmxSwapEnabled = "FALSE"
If the parameter sched.mem.prealloc is set, it takes a little longer for the VM to start. This is not a bug.
All System x servers are multi-socket, multicore servers. This can cause latency issues if the needed memory pages are not local to the CPU executing the workload. To mitigate this NUMA (non-uniform memory access) problem, VMware has developed sophisticated NUMA-aware schedulers. A VM with more than 8 vCPUs is considered a wide virtual machine. However, to reduce latency it may be sensible to bind a virtual machine to a CPU. These are the numa.* parameters in the *.vmx file of the VM. The numa.autosize.vcpu.maxPerVirtualNode parameter is set automatically if the number of vCPUs is more than 8.

numa.autosize.vcpu.maxPerVirtualNode = "20"
numa.autosize.cookie = "200001"
numa.nodeAffinity = "0"
numa.vcpu.preferHT = "TRUE"
sched.cpu.latencySensitivity = "HIGH"
NIC optimization  For performance- and latency-sensitive VMs it is recommended to use the vmxnet3 vNIC driver. In the *.vmx configuration file of the VM, change the driver for the GPFS and HANA Ethernet cards to vmxnet3:

ethernet1.virtualDev = "vmxnet3"
ethernet1.present = "TRUE"
ethernet1.networkName = "GPFS Network"

The VM must be powered off to change these parameters. Leave the eth0 device at e1000. On the Linux side you have to install the VMware tools, because these provide the vmxnet3 kernel driver.
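A minimal sketch of the classic VMware tools installation inside the guest, assuming the tools ISO has been attached to the VM via the vSphere Client ("Install VMware Tools"):

# mount the attached VMware tools ISO and run the installer
mount /dev/cdrom /mnt
tar xzf /mnt/VMwareTools-*.tar.gz -C /tmp
/tmp/vmware-tools-distrib/vmware-install.pl --default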
12
There are several possibilities to upgrade the appliances. You can either upgrade the RAM of your appliance (scale-up) or add servers to create or increase the size of a cluster (scale-out). Table 49: RAID array and RAID controller overview on page 155 lists the defined models according to number of CPUs, memory, and number of RAID arrays.
An upgrade from the 4U chassis (x3850 X6) to the 8U chassis (x3950 X6) is possible with some extra effort. Upgrades from 2 CPU sockets to 4, and from 4 to 8 sockets, are possible. Please note that PCI-e slot assignment changes (section 4.4: Card Placement on page 15) are required.
When scaling out a stand-alone installation (single server) to a cluster without changing the RAM, it might be necessary to add additional storage to the servers. Please note the different lines for stand-alone and scale-out, which might list different numbers of RAID arrays. Additional storage can mean adding 9 HDDs to an existing storage expansion, adding a new storage expansion, or (only for the 8U chassis) adding a second internal M5210 RAID controller. If your upgrade path requires new RAID controllers, please follow the instructions in section 4.4: Card Placement on page 15.
12.1
Unless specified to manufacturing, systems shipped from the factory have default settings that may not meet customer-desired settings. It is strongly recommended that during pre-installation setup, or after installing additional hardware options, the power policy and power management selections be checked to ensure that:
sufficient power is available for the configuration
the desired power redundancy and throttling settings have been selected
Note
Failure to properly set values can prevent the system from booting or log error events.
For more information on how to perform this task, refer to the section "Setting power supply power policy and system power configurations" of the System x3850 X6 and x3950 X6 Installation and Service Guide [19].
12.2 Reboot Behavior
When installing or performing upgrades, the operator should be prepared to expect multiple reboots during the POST process as the system performs the required configuration and setting changes. A lack of understanding of this reboot behavior could cause the operator to suspect bad or misbehaving hardware or firmware and interrupt the required process. Interrupting the process will result in increased time to complete the installation and may require service, depending on what actions the operator has performed improperly.
The number of reboots will vary depending upon the type (hardware vs. firmware) and number of changes. Firmware changes (primary bank, secondary bank, both, option) have the most effect; the number of reboots may be as high as seven. The number and size of installed memory DIMMs affects the time between reboots, not their number.
19 http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.ibm.sysx.3837.doc/nn1hu_install_and_service_guide.pdf
Table 49: RAID array and RAID controller overview. For each combination of chassis (x3850 X6 or x3950 X6), number of CPUs (2, 4, or 8), usage (stand-alone or scale-out), and memory size (128GB up to 12TB), the table lists the required numbers of RAID arrays (columns IA and EA) and of M5120/M5225 RAID controllers, together with configuration notes.
Note
Before adding or removing any hardware, remove AC power and wait for the LCD display
and all Light Emitting Diodes (LEDs) to turn off.
For more information on this topic and to see a reboot guideline chart, refer to RETAIN tip MIGR-5096873 [20].
12.3 Adding storage
12.3.1 Adding storage via EXP2524
The second M5210 will be connected to 6 HDDs for a RAID5 array and 2 SSDs for CacheCade [21].
1. Install the M5210 in the server.
2. Install the HDDs and SSDs.
3. Configure the RAID array(s) as described in 12.3.3 on page 157.
4. Configure GPFS as described in 12.3.8 on page 159.
20 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5096873
21 For details on hardware configuration and setup see the Operations Guide for X6 based models, section CacheCade RAID1 Configuration.
12.3.3 Configure RAID array(s)
The command line tool storcli is installed on your appliance. It will be used to configure the RAID arrays.
Note
All commands were tested with storcli version 1.07.07. The syntax of other versions may vary.
Look in the output of storcli64 /call show for the controller with the unconfigured drives (UGood). The actual enclosure IDs (EID), slot numbers (Slt), and the ID of the controller may vary in your setup.

Controller = 1
Status = Success
Description = None
...
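The RAID 5 creation itself is sketched below, assuming controller 1 and a hypothetical drive list (enclosure 8, slots 3-10) taken from the show output:

# create a RAID5 virtual drive from eight unconfigured-good drives
storcli64 /c1 add vd type=raid5 drives=8:3-10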
12.3.4
You can configure the CacheCade RAID arrays either with RAID1 or RAID0. Depending on the hardware
setup you have to decide which RAID level you have to configure.
1 M5210: only RAID0
1 M5210 + 1 M5120/M5225 (with 2 SSDs): only RAID0
1 M5210 + 1 M5120/M5225 (with 4 SSDs): RAID0 or RAID1
1 M5210 + 2 or more M5120/M5225: RAID0 or RAID1
2 M5210: RAID0 or RAID1
Please keep in mind that all CacheCade VDs must have the same RAID level. This means that you have
to recreate existing CacheCade arrays that have the wrong RAID level.
12.3.5
Create the CacheCade device, where assignvds=X assigns it to the RAID 5 array (with X as the logical/virtual drive ID). If you created 2 RAID5 arrays, use assignvds=X,Y to assign the CacheCade VD to both arrays. 8:1-2 is an example list of SSDs used. For the RAID level (raidX) see the previous section.
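A minimal sketch of the creation command, assuming controller 0, write-back caching, and the example SSD list from above (verify the WT/WB policy for your setup):

storcli64 /c0 add vd cachecade type=raidX drives=8:1-2 WB assignvds=X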
12.3.6
If you added storage to an existing EXP, the CacheCade VD is already configured. Assign the CacheCade VD to the newly created RAID5 array, where /cX is the controller and /vX the RAID5 array:
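A sketch of the assignment, using the storcli per-VD SSD caching switch:

storcli64 /cX/vX set ssdcaching=on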
12.3.7
To change the RAID level of an existing CacheCade VD you have to delete and recreate the CacheCade VD.
At first, find the CacheCade VD ID and the slots of the SSDs. Use the following command, where /cX is the RAID controller:

storcli64 /cX/vall show

Then recreate the CacheCade VD, where /cX is the RAID controller and drives=12:1-2 is an example list of the SSD drives used.
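A sketch of the delete-and-recreate sequence, assuming the VD ID found above and the example SSD list (the exact CacheCade keywords may vary between storcli versions):

# delete the existing CacheCade virtual drive
storcli64 /cX/vX del cachecade

# recreate it with the desired RAID level and re-assign it
storcli64 /cX add vd cachecade type=raidY drives=12:1-2 WB assignvds=Z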
12.3.8
Configuring GPFS
Find the block device that belongs to the newly created RAID array. mmlsnsd -X, lsscsi, and lsblk
may be helpful.
Find the name of the new NSD(s). For example: If you are on gpfsnode01, execute mmlsnsd | grep
gpfsnode01 to find out the names that are already in use for the existing NSDs.
Create a stanza file (/var/mmfs/config/disk.list.data.gpfsnodeZZ.new) containing the information
about the new GPFS NSD(s). Repeat this block for all newly created RAID arrays accordingly. ZZ is
the node number (e.g. 01 in gpfsnode01).
%nsd: device=/dev/sdX
nsd=dataYYnodeZZ
servers=gpfsnodeZZ
usage=dataAndMetadata
failureGroup=10ZZ
pool=system
Execute:
mmcrnsd -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
mmadddisk sapmntdata -F /var/mmfs/config/disk.list.data.gpfsnodeZZ.new -v no
Attention
The following command must only be executed on stand-alone configurations. Do not execute
it in a cluster environment!
mmrestripefs sapmntdata -b
This will balance the data between the used and unused disks equally.
Change the GPFS quotas to match the new requirements. Run the quota calculator and you will see a result like this:

# saphana-quota-calculator.sh
Please set the Shared quota to 8187 GB
Please set the Data quota to 3072 GB
Please set the Log quota to 1024 GB
12.4 Adding memory
Note
The installation of additional memory requires a system downtime.
When the customer decides on a scale-up, i.e. adding RAM to the server(s), you have to follow the memory DIMM placement rules for the X6 servers to get the best performance. The DIMMs must be placed equally over all CPU books: each CPU book must contain the same number of DIMMs in the same slots.
Tables 50: x3850 X6 Memory DIMM Placement on page 160 and 51: x3950 X6 Memory DIMM Placement on page 160 show which slots must be populated for specific configurations. The number of memory DIMMs can be computed as "RAM size"/"DIMM size".
Table 50: x3850 X6 Memory DIMM Placement (matrix of DIMM slots versus configuration, marking for 2-socket configurations with 8 to 48 DIMMs and 4-socket configurations with 16 to 96 DIMMs which slots must be populated and which must stay empty)

Table 51: x3950 X6 Memory DIMM Placement (the same matrix for 4-socket configurations with 16 to 96 DIMMs and 8-socket configurations with 32 to 192 DIMMs)
12.5
mmchfs sapmntdata -A no
saphana-udev-config.sh -sw
mmmount sapmntdata
13 Software Updates
Note
Starting with appliance version 1.9.96-13 the mount point for the GPFS file system sapmntdata is user-configurable during installation. SAP HANA will also be installed into this path. Lenovo currently recommends using /sapmnt, while SAP promotes /hana.
The following commands and code snippets use /sapmnt. For any other path please replace /sapmnt with the chosen path.
13.1 Warning
Please be careful with updates of the software stack. Update the software and driver components only for a good reason, i.e. because you are affected by a bug or have a security concern, and only after Lenovo or SAP support has advised you to upgrade, or after requesting approval from support via the SAP OSS ticket system on the queue BC-OP-LNX-LENOVO. Be defensive with updates, as updates may affect the proper operation of your SAP HANA appliance, and the System x SAP HANA development team does not test every released patch or update.
13.2 Update Variants
This subsection first gives a general overview of how updates should be applied. It then presents two ways to update a cluster environment: disruptively, with a downtime, or rolling, where one node is updated at a time and then re-added to the cluster.
Before performing a rolling update (non-disruptive, one node at a time) in a cluster environment, make sure that your cluster is in good health and all server nodes and storage devices are running.
13.2.1
This is the generic procedure for any kind of update which requires a system restart.
1. (on the target node) Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check that all nodes are active with the command

# mmgetstate -a

then verify that all disks are active:

# mmlsdisk sapmntdata -e

The disks on the node to be taken down do not need to be in the up state, but make sure that all other disks are up.
Warning
If disks of more than one server node are down, the file system will be shut down, causing all other SAP HANA nodes to fail.
2. Stop SAP HANA and verify that SAP HANA and sapstartsrv are not running anymore:

# ps ax | grep sapstart
# ps ax | grep hdb

No processes should be found; if any processes are found, please retry stopping SAP HANA.
3. Unmount the GPFS file system:

# mmumount sapmntdata

Take care that no open process is preventing the file system from unmounting. If that happens, use

# lsof /sapmnt

to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.). Other nodes within the cluster can still mount the shared file system.
4. Shutdown GPFS

# mmshutdown
5. Perform upgrades
Now perform the necessary updates.
6. Restart the system
Restart the server if necessary. GPFS and SAP HANA should start automatically during reboot. Skip step 7.
7. Restart GPFS
If you did not restart the whole server in step 6, start GPFS and mount the file system:

# mmstartup
# mmmount sapmntdata

Then check the status of all disks:

# mmlsdisk sapmntdata -e
If any disks are down, restart them with the command
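(a minimal sketch, assuming the standard GPFS command for starting all stopped disks)

# mmchdisk sapmntdata start -a

Afterwards restore the replication level of any data written while the disks were down: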
# mmrestripefs sapmntdata -r
Warning
Currently the FPO feature used in the appliance is not compatible with file system
rebalancing. Do not use the -b parameter!
# mmcheckquota -a
13.2.2
In the disruptive cluster update scenario, one would shut down the whole cluster and apply all updates. This will cause a downtime.
13.2.3
This update procedure applies when you are performing updates which either need a server restart, like a Linux kernel update, or need a restart of specific server software (e.g. GPFS) on the affected nodes.
The idea of a rolling update is to update only one server at a time and, after the server is back online in the cluster, proceed with the next node in the same way. By doing so, you can avoid downtime.
For updating the SAP HANA software in a SAP HANA cluster, please refer to the SAP HANA Technical Operations Manual. This can be done independently of other updates.
13.3 RHEL versionlock
RHEL has a mechanism to lock the versions of specified packages. Without this mechanism, a yum update would, without further notice, update you from RHEL 6.5 to RHEL 6.6.
SAP HANA is only released for dedicated RHEL versions. It is therefore advisable to restrict updates of the kernel version. You can find examples for RHEL 6.5 and RHEL 6.6 below.
If not already done, this mechanism can be activated by installing two packages and creating the file /etc/yum/pluginconf.d/versionlock.list in the following way:
1
2
3
4
X6 Implementation Guide
1.9.96-13
164
Technical Documentation
5
6
7
8
9
kernel-firmware-2.6.32-431.*
kernel-headers-2.6.32-431.*
kernel-devel-2.6.32-431.*
redhat-release-*
# Keep packages for RHEL 6.5 (end)
or for RHEL 6.6 like this
1
2
3
4
5
6
7
8
9
13.4
At the time this document is created, kernel version 3.0.101-0.47.52.1 is mandatory for SLES for SAP 11
SP3. Please consult SAP if there is now a higher version recommended.
Warning
If the Linux kernel is updated, it is mandatory to recompile the GPFS portability layer kernel
module. Otherwise the system will not work anymore!
13.4.1
There are multiple methods to update a SLES for SAP installation. Possible update sources include
updating by using kernel RPMs copied onto the target server, using a corporate-internal installed SLES
update server/repository or by using Novells update server via the Internet (requires registration of the
installation). Possible methods include command line based tools like rpm -Uvh or CLI/X11 based GUI
tools like SUSEs YaST2.
Please refer to Novells official SLES documentation. A good starting point is the chapter "Installing
or Removing Software" in the SLES 11 Deployment guides obtainable from https://www.suse.com/
documentation/sles11/.
If you decide to update from RPM files, you need to update at least the following files:
kernel-default-<kernelversion>.x86_64.rpm
kernel-default-base-<kernelversion>.x86_64.rpm
kernel-default-devel-<kernelversion>.x86_64.rpm
kernel-source-<kernelversion>.x86_64.rpm
kernel-syms-<kernelversion>.x86_64.rpm
X6 Implementation Guide
1.9.96-13
165
Technical Documentation
kernel-trace-devel-<kernelversion>.x86_64.rpm
kernel-xen-devel-<kernelversion>.x86_64.rpm
Updating using YAST is recommended over updating from files.
13.4.2
There are multiple methods to update a RHEL installation. Possible update sources including updating
by using kernel RPMs copied onto the target server, using a corporate-internal installed RHEL update
server/repository or by using Red Hats update server via the Internet (requires registration of the
installation).
Please refer to Red Hats official RHEL documentation. A good starting point is the Red Hat Deployment
Guide22 (chapter 27 "Manually Upgrading The Kernel").
If you decide to update from RPM files, you need to update at least the following files
kernel-<kernelversion>.el6.x86_64.rpm
kernel-devel-<kernelversion>.el6.x86_64.rpm
kernel-firmware-<kernelversion>.el6.noarch.rpm
kernel-headers-<kernelversion>.el6.x86_64.rpm4
There are two sources for Kernel upgrades on Red Hat Linux: http://www.redhat.com/security/
updates/, and http://www.redhat.com/docs/manuals/RHNetwork/
Download the kernel RPMs necessary for your system. Red Hat recommends to keep the old kernel
packages as a fallback in case there are problems with the new kernel.
Updating using repositories is recommended over updating from files.
Please refer to chapter 13.3: RHEL versionlock on page 164 how to check, if a versionlock mechanism is
implemented and how to allow kernel updates.
13.4.3
Title
Stop SAP HANA
Unmount GPFS file systems, stop GPFS
Update Kernel Packages
Build new GPFS portability layer
Restart GPFS & check GPFS status
Start SAP HANA
22 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.
html
X6 Implementation Guide
1.9.96-13
166
Technical Documentation
Older versions of the appliance may not have this script, so please stop SAP HANA and other SAP
software manually.
Stop of SAP HANA is documented in the SAP HANA administration guidelines at the SAP Help
Portal23 or SAP Service Marketplace24 .
Make sure no process has files open on /sapmnt, you can test that with the command:
1
# lsof /sapmnt
# mmumount all
# mmshutdown
#
#
#
#
cd /usr/lpp/mmfs/src/
make Autoconfig
make World
make InstallImages
#
#
#
#
mmstartup
mmmount all
mmgetstate
mmlsmount all
13.5
Updating GPFS
Note
Upgrading GPFS requires a rebuild of the portability layer. The same applies if the Linux
kernel was upgraded.
23 https://help.sap.com/hana
24 https://service.sap.com/hana
X6 Implementation Guide
1.9.96-13
167
Technical Documentation
13.5.1
Title
Stop SAP HANA
Unmount GPFS file systems, stop GPFS
Upgrade to new GPFS Version
Build new GPFS portability layer
Update cluster and file system information
Restart GPFS, mount GPFS file systems
Check Status of GPFS
Start SAP HANA
# lsof /sapmnt
# mmumount all -a
# mmshutdown -a
3. Upgrade to new GPFS version. This step may be skipped if only the portability layer needs to be
re-compiled due to a Linux kernel update. (Replace <newgpfsversion> with GPFS version number
of the update.)
1
2
3
4
#
#
#
#
rpm
rpm
rpm
rpm
-Uvh
-Uvh
-Uvh
-Uvh
gpfs.base-<newgpfsversion>.x86_64.update.rpm
gpfs.docs-<newgpfsversion>.noarch.rpm
gpfs.gpl-<newgpfsversion>.gpl.noarch.rpm
gpfs.msg.en_US-<newgpfsversion>.noarch.rpm
#
#
#
#
cd /usr/lpp/mmfs/src/
make Autoconfig
make World
make InstallImages
26 https://service.sap.com/hana
X6 Implementation Guide
1.9.96-13
168
Technical Documentation
1
2
3
4
#
#
#
#
mmchconfig release=LATEST
mmstartup -a
mmchfs sapmntdata -V full
mmmount all -a
# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel
13.5.1.1
# mmgetstate -a
and check that all nodes are active, then verify that all disks are active:
# mmlsdisk -e
The disks on the node to be taken down do not need to be in the up state, but make sure that all
other disks are up.
Warning
If disks of more than one server node are down, the file system will be shut down causing
all other SAP HANA nodes to fail.
# lsof /sapmnt
No processes should be found, if any processes are found please retry stopping SAP HANA and any
other process accessing /sapmnt.
169
Technical Documentation
# mmumount sapmntdata
and take care that no open process is preventing the file system from unmounting. If that happens
use
# lsof /sapmnt
to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.) close
them and retry. Other Nodes within the cluster can still mount the shared file system.
4. Shutdown GPFS
1
# mmshutdown
GPFS should unload its kernel modules during its shutdown, so check the output of this command.
#
#
#
#
rpm
rpm
rpm
rpm
-Uvh
-Uvh
-Uvh
-Uvh
gpfs.base-3.X.0-xx.x86_64.update.rpm
gpfs.docs-3.X.0-xx.noarch.rpm
gpfs.gpl-3.X.0-xx.gpl.noarch.rpm
gpfs.msg.en_US-3.X.0-xx.noarch.rpm
#
#
#
#
cd /usr/lpp/mmfs/src/
make Autoconfig
make World
make InstallImages
6. Restart GPFS
1
# mmstartup
Verify that the node started up correctly
# mmgetstate
During the startup phase the node is shown in the state arbitrating, this changes to active when
GPFS completed startup.
# mmmount sapmntdata
# mmlsdisk sapmntdata -e
If any disks are shown as down, restart them with the command
X6 Implementation Guide
1.9.96-13
170
Technical Documentation
# mmrestripefs sapmntdata -r
Warning
Currently the FPO feature used in the appliance is not compatible with file system
rebalancing. Do not use the -b parameter!
# mmcheckquota -a
After all nodes are updated you can update the GPFS cluster configuration and the GPFS "on disk
format" (the data structures written to disk) to the newer version. Not all updates require this update
steps but it is safe to do them in any case. This update is non-disruptive and can be performed while
the cluster is active.
1. Update the cluster configuration with the newest settings
1
# mmchconfig release=LATEST
13.6
# mmlsfs sapmntdata -V
This section applies to single node and cluster installations. For single node installations only a disruptive
upgrade can be done.
Cluster installations can be upgraded either all at once (disruptive) or node-by-node (rolling).
DR installations can also be upgraded either all at once (disruptive) or node-by-node (rolling). Additionally, it is possible to upgrade the DR site first and the primary site at a later point. If the DR site
hosts a non-productive SAP HANA instance this approach can be used to verify the new code level in
pre-production.
X6 Implementation Guide
1.9.96-13
171
Technical Documentation
Note
GPFS 4.1 is only supported with PTF 8 or higher (that is 4.1.0-8).
Make sure you have the required GPFS packages before continuing. GPFS has introduced three editions
with different content. GPFS 4.1 Standard Edition is required (Express is not sufficient). If you have a
gpfs.ext RPM file then you have Standard Edition.
Existing GPFS 3.5 clients are entitled to GPFS 4.1 Standard Edition. For further information, including
how to migrate licenses, see GPFS FAQ27
13.6.1
Title
Stop SAP HANA
Unmount GPFS file systems, stop GPFS
Remove GPFS 3.5 packages, install GPFS 4.1 packages
Build new GPFS portability layer
Update cluster and file system information
Restart GPFS, mount GPFS file systems
Check Status of GPFS
Start SAP HANA
# lsof /sapmnt
# mmumount all -a
# mmshutdown -a
3. Remove all GPFS 3.5 packages and install new 4.1 packages.
Get a list of all installed GPFS 3.5 packages
1
27 http://www-01.ibm.com/support/knowledgecenter/api/content/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/
gpfsclustersfaq.html#migto41
28 https://help.sap.com/hana
29 https://service.sap.com/hana
X6 Implementation Guide
1.9.96-13
172
Technical Documentation
1
2
3
4
5
6
#
#
#
#
#
#
rpm
rpm
rpm
rpm
rpm
rpm
-ivh
-ivh
-ivh
-ivh
-ivh
-ivh
gpfs.base-4.1.0-0.x86_64.rpm
gpfs.docs-4.1.0-0.noarch.rpm
gpfs.ext-4.1.0-0.x86_64.rpm
gpfs.gpl-4.1.0-0.noarch.rpm
gpfs.gskit-8.0.50-16.x86_64.rpm
gpfs.msg.en_US-4.1.0-0.noarch.rpm
#
#
#
#
#
rpm
rpm
rpm
rpm
rpm
-Uvh
-Uvh
-Uvh
-Uvh
-Uvh
gpfs.base-4.1.0-8.x86_64.update.rpm
gpfs.docs-4.1.0-8.noarch.rpm
gpfs.ext-4.1.0-8.x86_64.update.rpm
gpfs.gpl-4.1.0-8.noarch.rpm
gpfs.msg.en_US-4.1.0-8.noarch.rpm
# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
(optionally) # make rpm
5. Update cluster and file system information to current GPFS version. Activate new cluster configuration repository (CCR) feature.
1
2
3
4
5
#
#
#
#
#
mmstartup -a
mmchconfig release=LATEST
mmchcluster --ccr-enable
mmchfs sapmntdata -V full
mmmount all -a
# mmgetstate -a
# mmlsmount all -L
# mmlsconfig | grep minReleaseLevel
X6 Implementation Guide
1.9.96-13
173
Technical Documentation
13.6.2
To minimize downtime distribute the GPFS 4.1 packages on all nodes before starting.
1. Check GPFS cluster health
Before performing any updates on any node, verify that the cluster is in a sane state. First check
that all nodes are running and active with the command
1
# mmgetstate -a
Then verify that all disks are active
# mmlsdisk -e
The disks on the node to be taken down do not need to be in the up state, but make sure that all
other disks are up.
Warning
If disks of more than one server node are down, the file system will be shut down causing
all other SAP HANA nodes to fail.
# lsof /sapmnt
No processes should be found. If any processes are found please retry stopping SAP HANA and all
other processes accessing /sapmnt.
# mmumount sapmntdata
and take care that no open process is preventing the file system from unmounting. If that happens
use
# lsof /sapmnt
to find processes still accessing the file system, e.g. running shells (root, <SID>adm, etc.) close
them and retry. Other Nodes within the cluster still have /sapmnt mounted.
# mmshutdown
GPFS unloads its kernel modules during its shutdown, so check the output of this command carefully.
X6 Implementation Guide
1.9.96-13
174
Technical Documentation
1
2
3
4
5
6
#
#
#
#
#
#
rpm
rpm
rpm
rpm
rpm
rpm
-ivh
-ivh
-ivh
-ivh
-ivh
-ivh
gpfs.base-4.1.0-0.x86_64.rpm
gpfs.docs-4.1.0-0.noarch.rpm
gpfs.ext-4.1.0-0.x86_64.rpm
gpfs.gpl-4.1.0-0.noarch.rpm
gpfs.gskit-8.0.50-16.x86_64.rpm
gpfs.msg.en_US-4.1.0-0.noarch.rpm
#
#
#
#
#
rpm
rpm
rpm
rpm
rpm
-Uvh
-Uvh
-Uvh
-Uvh
-Uvh
gpfs.base-4.1.0-8.x86_64.update.rpm
gpfs.docs-4.1.0-8.noarch.rpm
gpfs.ext-4.1.0-8.x86_64.update.rpm
gpfs.gpl-4.1.0-8.noarch.rpm
gpfs.msg.en_US-4.1.0-8.noarch.rpm
# cd /usr/lpp/mmfs/src/
# make Autoconfig
# make World
# make InstallImages
(optional) # make rpm
6. Restart GPFS
1
# mmstartup
Verify that the node started up correctly
# mmgetstate
During the startup phase the node is shown in state arbitrating for a short period of time. This
changes to active when GPFS completed startup successfully.
# mmmount sapmntdata
9. Verify GPFS disks are active again (this command can be executed on any node)
1
# mmlsdisk sapmntdata -e
If any disks are shown as down, restart them with the command
X6 Implementation Guide
1.9.96-13
175
Technical Documentation
10. Restore correct replication level (this command can be executed on any node)
Start a restripe so that all data is properly replicated again
1
# mmrestripefs sapmntdata -r
Warning
Do not use the -b parameter!
# mmcheckquota -a
After all nodes have been updated successfully you can update the GPFS cluster configuration and
the GPFS "on disk format" (the data structures written to disk) to the newer version. This update is
non-disruptive and can be performed while the cluster is active.
1. Update the cluster configuration to the newest version
1
# mmchconfig release=LATEST
# mmchcluster --ccr-enable
13.7
# mmlsfs sapmntdata -V
chmod +x mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin
Then you can start the installation with:
./mlnx-lnvgy_fw_nic_2.4-1.0.0.4_rhel6_x86-64.bin --enable-affinity
If this step fails, you may have to install the python-devel package from the official SLES or RHEL
repositories.
This will upgrade your driver and firmware of the Mellanox network cards. Please review the output of
the above program for possible errors. After a successful upgrade, a reboot will be necessary.
X6 Implementation Guide
1.9.96-13
176
Technical Documentation
13.8
SAP HANA
Warning
Make sure that the packages listed in Appendix F.5: FAQ #5: Missing RPMs on page 217
are installed on your appliance. An upgrade may fail without them.
Please refer to the official SAP HANA documentation for further steps.
13.9
For a detailed description for upgrades of VMware vSphere ESXi 5.5 to 5.5U2 please consult the VMware
vsphere-esxi-vcenter-server-552-upgrade-guide. If your ESXi host is connected to a vCenter Server you
can performe an online update of the ESXi host. The procedure is described in the vsphere-esxi-vcenterserver-552-upgrade-guide. In this section we describe the update with a reboot of the ESXi host. You
need to be able to log into the IMM and start a remote console. Mount the update ISO from ESXi 5.5U2
at the remote console.
1. Shutdown all running VMs
2. reboot ESXi host
3. boot from the ESXi 5.5U2 ISO
4. choose the the USB Storage device for your update
5. choose upgrade
6. confirm upgrade
7. reboot
8. boot from USB stick
Todo after reboot All of the shown commands are for the CLI. Please do a ssh login as root to the ESXi
host to be able to perform the commands
1. check license <vim-cmd vimsvc/license show>
2. check firewall setting <esxcli network firewall ruleset list>
3. check disks <esxcli storage filesystem list>
4. check installed VMs <vim-cmd vmsvc/getallvms>
5. check vswitches <esxcli network vswitch standard list>
6. check RAID controller <storcli /call show>
X6 Implementation Guide
1.9.96-13
177
Technical Documentation
14
This section describes the steps needed to perform an upgrade of RHEL 6.5 to RHEL 6.6.
14.1
For the upgrade a maintenance downtime is needed with a least one reboot of the servers. If you have
installed software that was not part of the initial installation from Lenovo, please make sure that this
software is compatible with RHEL 6.6.
Note
Testing in a non-productive environment before upgrading productive systems is highly recommended. As always backing up the system before performing changes is also highly recommended.
14.2
Rolling Upgrade
In a cluster environment a rolling upgrade (one node at a time) is possible as long as you are running a
HA environment with IBM GPFS 3.5 and with at least a standby node.
See section 13.5: Updating GPFS on page 167 for information on the IBM GPFS upgrade.
In any case you can perform a non-rolling upgrade, taking all nodes down for maintenance.
14.3
Upgrade Overview
The following tested and recommended upgrade steps require one reboot. The tasks are mostly the same
for cluster and single node systems, if there is an operational difference between these two types, it will
be noted. This list shows the upgrade steps.
1. Stop IBM GPFS & HANA.
2. Upgrade IBM GPFS if necessary.
3. Update Mellanox Drivers
4. Upgrade from RHEL 6.5 to RHEL 6.6.
5. Kernel upgrade if necessary
6. Install Compability Pack
7. Recompile kernel module for IBM GPFS
8. Adapt Configuration
9. Upgrade complete: Start IBM GPFS & HANA.
When doing a rolling upgrade or the upgrade of a single node, do the steps described in this section only
on the server currently being updated.
When updating all nodes in a cluster at the same time, do you can perform the steps on all nodes in
parallel: step 1 on all nodes, then step 2 on all nodes and then step 3 on all nodes and so on.
X6 Implementation Guide
1.9.96-13
178
Technical Documentation
14.4
Prerequisites
You are running Lenovo Systems Solution for SAP HANA appliance system and want to to upgrade the
RHEL 6.5 operating system to RHEL 6.6.
You should run at least IBM GPFS version 4.1.0-8. If your system is running a IBM GPFS version below
that, you should upgrade IBM GPFS.
You can find out your IBM GPFS version with the command
1
# rpm -q gpfs.base
For RHEL 6.6 version 2.4-1.0.0 of the Mellanox-Drivers is needed. You can check the version using:
# ethtool -i eth0
For the Upgrade the following DVDs or images are needed:
RHEL 6.6-DVD
nss-softokn packages
nss-softokn-freebl-3.14.3-19.el6.x86_64
nss-softokn-freebl-3.14.3-19.el6.i686
e.g as part of the RHEL 6.6 compability pack
Other ways of providing the images to the Server (e.g. locally, FTP, SFTP, etc) are possible but not
explained as part of this guide.
Also other upgrade mechanism like e.g. using a satellite-server are out of scope of this guide.
14.5
1. Shutdown HANA
Shutdown HANA and all other SAP software running in the whole cluster or on the single node
cleanly. Login in as root on each node and execute
1
# lsof /sapmnt
2. Unmount the IBM GPFS file system Unmount the IBM GPFS file system /sapmnt by issuing
1
# mmumount all
# mmshutdown -a
to shutdown the IBM GPFS software on all cluster nodes.
14.6
You should run at least IBM GPFS version 4.1.0-8. If your system is running a IBM GPFS version below
that, you should upgrade IBM GPFS first, see 13.5: Updating GPFS on page 167.
X6 Implementation Guide
1.9.96-13
179
Technical Documentation
14.7
For RHEL 6.6 at least version 2.4-1.0.0 of the Mellanox-Drivers is needed. If you have a version below
that, you should upgrade the Mellanox drivers first, see 13.7: Update Mellanox Network Cards on page
176.
14.8
# vi /etc/yum/pluginconf.d/versionlock.list
2
3
4
5
6
7
8
9
10
11
1
2
# ls /media/
RHEL-6.6 Server.x86_64
This information is needed for the baseurl-part below. Now create a repository file rhel-dvd66.
repo in /etc/yum.repos.d
# vi /etc/yum.repos.d/rhel-dvd66.repo
with the following content:
1
2
3
4
5
[dvd66]
name=Red Hat Enterprise Linux Installation DVD
baseurl=file:///media/RHEL-6.6\ Server.x86_64/
gpgcheck=0
enabled=0
1
2
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
X6 Implementation Guide
1.9.96-13
180
Technical Documentation
2
3
vi /etc/yum/pluginconf.d/versionlock.list
4
5
6
7
8
9
10
11
12
13
14.9
Please consult SAP if there is now a higher version of the kernel recommended. Please check also chapter
13.4.2: RHEL Kernel Update Methods on page 166.
14.10
A update of the nss-softokn packages is mandatory. More information can be found in:
SAP Note 2001528 Linux: SAP HANA Database SPS 08 revision 80 (or higher) on RHEL 6 or
SLES 11
Why can I not install or start SAP HANA after a system upgrade?30
1
14.11
IBM GPFS need self-compiled (so called "out-of-tree" drivers) Linux kernel modules to operate properly.
To compile IBM GPFS kernel module execute the following commands
1
2
3
4
#
#
#
#
cd /usr/lpp/mmfs/src
make Autoconfig
make World
make InstallImages
14.12
Adapting Configuration
Please review the performance settings in D: Performance Settings on page 211 because they might have
changed.
30 https://access.redhat.com/solutions/1236813
X6 Implementation Guide
1.9.96-13
181
Technical Documentation
14.13
Start IBM GPFS and HANA by either rebooting the machine (recommended) or starting the daemons
manually:
1. Restart GPFS
1
2
# mmstartup
# mmmount all
Verify status of IBM GPFS and if the file system is mounted:
1
2
# mmgetstate
# mmlsmount all
2. Start HANA
1
X6 Implementation Guide
1.9.96-13
182
Technical Documentation
15
This chapter describes different steps to check the appliances health status. The script described here
should be updated and executed in regular intervals by a system administrator. The other sections present
additional information and give deeper insight into the system.
Note
SAP Note 1661146 Lenovo/IBM Check Tool for SAP HANA appliances provides details for
downloading and using the following scripts to catalog the hardware and software configurations and create a set of information to assist service and support of the machine by SAP and
Lenovo.
We highly recommend that a SAP HANA system administrator regularly downloads and
updates these scripts to ensure to obtain the latest support information for the servers.
15.1
System Login
The latest version of the Lenovo Solution installation also adds a message of the day that shows the
current status of the GPFS filesystems, and memory usage. This will pop up once each login for every
user. The message is created by a cron job that runs once an hour, this means that the information is
not real time and the system status may have changed in the meantime.
____
_
____
/ ___| / \ | _ \
\___ \ / _ \ | |_) |
___) / ___ \| __/
|____/_/
\_\_|
1
2
3
4
5
_
_
_
_
_
_
| | | | / \ | \ | | / \
| |_| | / _ \ | \| | / _ \
| _ |/ ___ \| |\ |/ ___ \
|_| |_/_/
\_\_| \_/_/
\_\
7
8
9
10
11
12
13
14
15
16
17
18
!
!
!
!
15.2
Included with the installation is a script that will inform you and the customer that all the hardware
requirements and basic operating system requirements have been met. Using the option -h, you can see
the various ways to call the saphana-support-lenovo.sh script.
Note
It is highly recommended to work with the latest version of the system check script. You can
find it in SAP Note 1661146 Lenovo/IBM Check Tool for SAP HANA appliances.
1
2
# saphana-support-lenovo.sh -h
Usage: saphana-support-lenovo [OPTIONS]
X6 Implementation Guide
1.9.96-13
183
Technical Documentation
3
4
Lenovo Systems solution for SAP HANA appliance System Checking Tool
to check hardware system configuration for Lenovo and SAP Support teams.
5
6
7
8
Options:
-c
-s
9
10
-h
11
12
13
14
15
16
17
18
19
20
21
22
1
2
3
4
5
6
7
# saphana-support-lenovo.sh -c
===================================================================
# LENOVO SUPPORT TOOL Version 1.9.96-13.2406.2b5da57 -- 2015-06-15
# (C) Copyright IBM Corporation 2011-2014
# (C) Copyright Lenovo 2015
# Analysis taken on:
20150622-1522
===================================================================
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
X6 Implementation Guide
1.9.96-13
184
Technical Documentation
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
Installation configuration:
---------Parameter clustered is standby
Parameter exthostname is ......
Parameter cluster_ha_nodes is 1
Parameter cluster_nr_nodes is 2
Parameter hanainstnr is 12
Parameter hanasid is FLO
Parameter hanauid is 1100
Parameter hanagid is 111
Parameter shared_fs_mountpoint is /sapmnt
Parameter gpfs_node1 is gpfsnode01 192.168.212.101
Parameter gpfs_node2 is gpfsnode02 192.168.212.102
Parameter hana_node1 is hananode01 192.168.213.201
Parameter hana_node2 is hananode02 192.168.213.202
Parameter step is 11
-------------------------------------------------------------------
45
46
47
48
49
Hardware analysis:
---------CPU Type: Pentium 4 Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz OK
# of CPUs: 4, threads: 144 OK
50
51
52
53
ServeRAID: 2 adapters OK
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
X6 Implementation Guide
1.9.96-13
185
Technical Documentation
79
80
81
82
---------NOTE: The following checks are for known problems of the system.
See the FAQ section of the Lenovo - SAP HANA Operations Guide
SAP Note 1661146 found at https://service.sap.com/notes
83
84
85
86
87
88
89
90
91
92
93
15.3
System Support
In case of a problem with the Lenovo Systems Solution for SAP HANA Platform Edition, you should
always direct the customer to open an OSS Message, whether or not it is an obvious problem with the
hardware. Lenovo, IBM, and SAP have an agreement that all problems with the Lenovo Solution are to
come first through SAP Support process, where there are Lenovo L3 Support members who will help the
customer determine what the root cause of the problem is. If it is determined that there is a problem with
an Lenovo Solution, then the Lenovo L3 support person will instruct and guide the customer in opening
the correct IBM PMR and help ensure that the appropriate attention has been given to the problem.
In order to make this process for all involved easier, Lenovo delivers a special program that can gather
much of the data necessary in an initial support call. Using this script the customer can help streamline
the support process in order to obtain the fastest and most competent support available.
This script is found in the directory /opt/lenovo/saphana/bin and is called saphana-support-lenovo.
sh. In order to collect support data, the customer should run this command from the shell as follows:
1
# saphana-support-lenovo.sh -s
Note
-[1.8.80-12]: These appliances were shipped with the script /opt/ibm/saphana/bin/
saphana-support-ibm.sh. When installing the latest support script version you will get the
new script saphana-support-lenovo.sh. Do not remove the script saphana-support-ibm.
sh.
This script, along with the Linux SAP System Information Tool, can be found in the SAP OSS Notes
1661146 and 618104 respectively. When the SAP System Information Tool is placed in /opt/lenovo/
saphana/bin, it will be automatically called from this script and its input will be also collected.
X6 Implementation Guide
1.9.96-13
186
Technical Documentation
15.4
15.4.1
In some cases it might be useful to check the UEFI settings of the HANA servers. Therefore, the
saphana-support-lenovo.sh script uses the Lenovo Advanced Settings Utility (ASU), if it is installed,
and prints out warnings, if there is a misconfiguration. This check can be enabled via the -e parameter.
Download the latest Linux 64-bit RPM from https://www-947.ibm.com/support/entry/myportal/
docdisplay?lndocid=LNVO-ASU and install the RPM.
Before upgrading the ASU tool remove the old version. Find the installed version via rpm -qa | grep
asu.
15.4.2
The saphana-support-lenovo.sh script also analyzes the status of the ServeRAID controllers and the
controller-internal batteries to check whether the controllers are in a working and performing state.
For activation of this feature the StorCLI (Command Line) Utility for Storage Management software
must be installed. Go to https://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=
migr-5092950 and download the file locally and install the RPMs.
Before upgrading the StorCLI tool remove the old version. Find the installed version via rpm -qa |
grep storcli.
Warning
[1.6.60-7]+ With the change to RAID5 based storage configuration, installing the MegaCLI
Utility is even more important as a HDD/SSD failure is not directly visible with standard
GPFS commands until a whole RAID array has failed.
15.4.3
For models of the Lenovo Solution that come with SSDs it might be useful to check the state of the SSDs.
This includes all x3850 X6 and x3890 X6 servers, and eX5 SSD, XS, and S models.
Go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5090923 and download the latest binary of the SSD Wear Gauge CLI utility (lnvgy_utl_ssd_-<version>_linux_32-64.
bin). Copy it to the machine to be checked.
When upgrading the tool remove existing binaries from /opt/ibm/ssd_cli/ and/or /opt/lenovo/ssd_
cli/.
Copy the bin file into /opt/lenovo/ssd_cli/:
X6 Implementation Guide
1.9.96-13
187
Technical Documentation
1
2
3
# mkdir -p /opt/lenovo/ssd_cli/
# cp lnvgy_utl_ssd_-*_linux_32-64.bin /opt/lenovo/ssd_cli/
# chmod u+x /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin
Execute the binary:
# /opt/lenovo/ssd_cli/lnvgy_utl_ssd_-*_linux_32-64.bin -u
Sample output:
1
2
3
4
15.5
31 SAP
32 http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087035
X6 Implementation Guide
1.9.96-13
188
Technical Documentation
16
This section provides enough instructions necessary to create a simple system copy of the base operating
system found on the first hard drive, or primary partition. This image can then be used for a basic backup/
restore solution of the primary partition. This image, once copied initially, should also be transferred to
offline storage to ensure that data does not get lost due to irreparable hard drive failures. The intent
of this section is that the user can have a simple backup and restore solution using the tools available
within Linux to protect their system. For enterprise backup and restore solutions, we recommend to use
an enterprise backup and restore option to ensure backup/restore operations for the operating system as
well as the IBM General Parallel File System and SAP HANA file systems.
What follows is a description, how to create a backup of the operating system. We also describe how to
restore these items in case of a planned or unplanned disaster with the original Operating System (OS).
This is valid for systems installed with at least the version 1.8.80-10 of the System x automated installer.
Earlier Systems may require extra effort for OS backup partition creation. The following System x server
models can be used:
System x3950 eX5 Workload Optimized System (7143) for SAP HANA Platform Edition,
System x3690 eX5 Workload Optimized System (7147) for SAP HANA Platform Edition,
System x3850/x3950 X6 Workload Optimized System (3837) for SAP HANA Platform Edition.
System x3850/x3950 X6 Workload Optimized System (6241) for SAP HANA Platform Edition.
Warning
Do not go into production without verifying a full backup and a full restore of the operating
system!
16.1
Description
In order to perform a simple backup and restoration of the OS, excluding the SAP HANA executables,
configuration, data or logs, you need to run a few commands in Linux in order to set up a working copy
of the OS. What we will explain here is a method of copying the Linux file system that is contained on
two partitions of the first hard drive.
Using the Linux command rsync, you are able to intelligently copy a file system from one partition to
another quickly and with little effort. This tool can also be set up in nightly cron schedules to happen
automatically and semi-automate the process of taking a backup image of the OS. As seen in 86: Overview
of Backup/Restore Operations on page 190, the general concept is that the user uses rsync to copy the
contents of the root (/) and boot (/boot/efi) directories from their original partitions onto two newly
created partitions on the same hard drive.
X6 Implementation Guide
1.9.96-13
189
Technical Documentation
rsync
/dev/sda5
(backroot)
/dev/sda4
(backboot)
/dev/sda2
(hanaroot)
/dev/sda2
(hanaroot)
/dev/sda3
(hanaboot)
/dev/sda3
(hanaboot)
Normal Operation
/dev/sda4
(backboot)
/dev/sda5
(backroot)
rsync
Boot Loader
The server can use two different methods to boot. For X6 based systems, the default method is using
the Unified Extensible Firmware Interface, or UEFI. According to Wikipedia33 , the Unified Extensible
Firmware Interface is a specification that defines a software interface between an operating system and
platform firmware. The second method is using the legacy method of BIOS, which was a typical way to
boot SAP HANA on eX5 based systems.
Linux requires a boot loader that understands the specific boot method. Two options are available: Grand
Unified Bootloader (GRUB) and Linux Loader (LILO) . The way a server boots; and, subsequently,
installs the boot loader determines some of the system partitioning and file system layout of the installed
server. Although it is possible to use both methods to boot and install the Lenovo Solution server, this
document will only cover the steps necessary to create a restore image using the UEFI boot mechanism
with either the GRUB or LILO boot loader. If you are using the Legacy Boot option, you will need
to become familiar with how each distribution handles the boot procedure with the Legacy BIOS boot
option as this is not part of this documentation.
EFI Linux Loader (ELILO) is the interface the Lenovo System x UEFI uses to talk to the LILO
boot loader. The Linux installation will place the boot loader under the directory /boot/efi. The
configuration file for ELILO can be found in /etc/elilo.conf. Using GRUB, the Linux installation will
place the boot loader under the directory /boot/efi. The configuration file for GRUB can be found in
/boot/grub/menu.conf or /boot/grub/grub.conf, depending on the version of GRUB
33 http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
X6 Implementation Guide
1.9.96-13
190
Technical Documentation
16.1.2
Drive Partitions
Starting with version 1.8.80-10 of the the Lenovo Solution installation media, it will create five (5)
partitions on the first drive (sda). Each partition has a specific label and purpose for the system backup
and restore. The labels are: hanaboot, hanaroot, hanaswap, backboot and backroot. The correlation
of these labels to the appropriate devices can be found by listing the symbolic links in the directory
/dev/disk/by-label.
An example partition layout for systems is shown below. The first device is partitioned into several
physical and logical partitions and named with a label, a simple identifier, and a Universally Unique
Identifier (UUID) . Only the UUID is promised to remain connected to the proper partition as it was
created.
Partition
/dev/disk/by-label
/dev/disk/by-id
/dev/disk/by-uuid
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
/dev/sda5
hanaboot
hanaroot
swap
backboot
backroot
scsi-{33-hexadecimal-number}-part1
scsi-{33-hexadecimal-number}-part2
scsi-{33-hexadecimal-number}-part3
scsi-{33-hexadecimal-number}-part4
scsi-{33-hexadecimal-number}-part5
hexadecimal
hexadecimal
hexadecimal
hexadecimal
hexadecimal
number
number
number
number
number
Attention
Pay special attention on systems installed earlier than version 1.8.80-10. These systems may
have been installed with extra partitions that are used for other auxillary file systems unrelated
to SAP HANA. If this is the case, then you should be certain to first create enough free space
in order to create new backup partitions and also determine a way to backup and save off the
data in these auxillary partitions.
The backup and recovery of these drives is not part of this document, but similar rules can
be applied.
16.2
Prerequisites
The Lenovo Solution server should also have been installed using the included automatic installer program.
If not, some of the names of the partitions might be different and these directions may not work correctly.
16.2.0.1 SUSE Linux Enterprise Server Partition Labels In a system installed with the SUSE
Linux Enterprise Server OS, not all partitions are labeled. This seems to be an issue with how SLES
handles the creation of labels for VFAT file system partitions. By default, SLES uses the values found
under the /dev/disk/by-id directory when describing specific partitions. This document will continue
to use the /dev/disk/by-label values, and it will be expected that these are translated to /dev/disk/
by-id values when implementing this backup solution on SLES.
16.2.0.2 Create entries in /etc/fstab for new mounts Before you start with the OS portion
of this procedure, you should ensure that the backboot and backroot devices are mounted to the file
system as /var/backup/root and /var/backup/boot/efi. These mount points should already exist in
the file /etc/fstab similar to the example (for SLES) below:
1
2
3
vfat umask=0002,utf8=true
ext3 acl,user_xattr
0 0
1 1
X6 Implementation Guide
1.9.96-13
191
Technical Documentation
Note
The
hexadecimal
portion
of
the
value
of
/dev/disk/by-id/
scsi-3600605b0038ac2601a9a1f01cc74cf23-partx will be different for every individual drive and installation. We recommend to read the contents of /etc/fstab before and
copy only the value for the stated partitions for all new backup partitions. Pay particular
notice to rename the partition to the correct partition created!
1
2
3
1 2
0 0
After each time the rsync command has completed, the root file system has now been copied exactly
from / into /var/backup/. In order to boot from the backup partition backroot, we want to switch the
partition labels (or ids) from hana* to the back* labelled partitions. The hana* partitions should now
be mounted as the file system /var/backup in order to restore from the backed up image in the case of
a recovery.
We recommend to slightly modify the message of the day (motd) so that you can visually see that you
are using the backup image. Since this is also copied on top of any previous images, it is best to use a
symbolic link to keep both the backup and original motd file.
1
2
3
4
5
touch /etc/motd.{bak,orig}
echo "## !!!!! T H E B A C K U P M E S S A G E !!!!! ##" > /etc/motd.bak
cat /etc/motd >> /etc/motd.{bak,orig}
rm /etc/motd
ln -s /etc/motd.orig /etc/motd
Listing 6: Creating a copy of the motd file
After every rsync run, the fstab needs to be adopted as shown here. We recommend to create a copy
of the origial and backup so that you can easily switch between the two after a call to rsync. You can
copy the original file /etc/fstab to /etc/fstab.orig and create a new copy called /etc/fstab.bak.
1
2
3
4
5
touch /etc/fstab.{bak,orig}
echo "## !!!!! T H E B A C K U P F S T A B
cat /etc/fstab >> /etc/fstab.{bak,orig}
rm /etc/fstab
ln -s /etc/fstab.orig /etc/fstab
X6 Implementation Guide
1.9.96-13
/var/backup
/var/backup/boot/efi
swap
/boot/efi
ext3
vfat
swap
vfat
acl,user_xattr
umask=0002,utf8=true
defaults
umask=0002,utf8=true
1
0
0
0
1
0
0
0
192
Technical Documentation
/dev/disk/by-id/scsi-3600605b0038ac2601a9a1f01cc74cf23-part5 /
ext3 acl,user_xattr
1 1
acl,user_xattr
umask=0002,utf8=true
defaults
acl,user_xattr
umask=0002,utf8=true
1
0
0
1
0
1
0
0
1
0
rm /var/backup/etc/fstab
cd /var/backup/etc
ln -s fstab.bak fstab
Listing 10: Changing files for backup partition
16.2.2
Original name:
name###
on these installs. Otherwise, YaST will not see this option in the boot list for ELILO and may not present
it to you during boot.
1
2
3
4
5
6
7
8
9
X6 Implementation Guide
1.9.96-13
193
Technical Documentation
If you update the kernel, you will also need to update the lines image = and initrd = in this file for the
backup entry.
After changing the elilo.conf run
1
elilo --verbose
to update the boot loader. The intention is that you will be able to start up the backup partition in
order to copy the saved state in the backup partition over top of the primary partition.
Grub installed systems
In systems installed using the GRUB boot loader (by default all Red Hat based installs and SUSE installs
on System eX5 hardware), edit the contents of /boot/grub/grub.cfg (RHEL), or /boot/grub/menu.lst
(SLES), and copy the section for the primary partition to edit it as the new backup partition.
This is a copy of the default boot line with the title, root and kernel lines changes to match the
partition used for the backup partition.On RHEL replacing the label and root values with the value
backup and backroot partition ID. On SLES the according scsi-<id>-part<X> has to be changed to
fit the <id> and partition <X> on the given system.
1
2
3
4
5
6
7
8
yast2 bootloader
to update the boot loader, on RHEL:
grub-install /dev/sda
Note
The partition number for a GRUB installed partition is based on the device syntax
of (device[,partmap-name1part-num1[,partmap-name2part-num2[,...]]]). The syntax
(hd0) represents using the entire disk of the first device, for example sda, while the syntax
(hd0,1) represents using the second partition of the device, for example sda2. Notice that
GRUB identifies the first partition on the first device as (hd0,0) or (hd0) for short.
Note
In our example, we presume that the hanaroot partition is (hd0,1) and the backroot partition is (hd0,4).
Append or change these lines in /var/backup/etc/grub.conf. Here, we exchange the meanings of the
hanaroot and backroot partitions. When booting into this kernel, the hanaroot is the partition to be
restored, and the backroot is the default partition to be booted. The title, root and kernel lines are
changed to match the partition used for the backroot partition. We should also change the parameter
default in the header subsection to point to the Restore image (usually the subsection number 2) rather
than the original SAP HANA image.
X6 Implementation Guide
1.9.96-13
194
Technical Documentation
1
2
3
4
5
6
7
8
9
default=2
title Restore from SAP HANA Platform Edition Backup Image
root (hd0,<PARTITION NR, see above>)
kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=LABEL=backroot
KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 crashkernel=auto
processor.max_cstate=0 intel_idle.max_cstate=0
transparent_hugepage=never SYSFONT=latarcyrheb-sun16
rd_NO_LUKS rd_NO_LVM rd_NO_DM rd_NO_MD rhgb quiet
initrd /boot/initramfs-2.6.32-431.el6.x86_64.img
Listing 13: Example GRUB Configuration for Backup Partition
16.3
In order to perform an initial backup run as root the following commands. The initial backup will take a
long time as it is copying the entire file system under the hanaroot partition into the backroot partition.
Subsequent executions of the rsync command will be shorter as it is intelligent enough to only copy what
has changed between calls of the command.
As the system administrator (root) run:
1
2
3
4
5
6
7
8
9
10
11
12
start_stamp=$(date +%s)
# Begin backup of root file system
rsync -aAXxv --delete / /var/backup --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/boot/efi*,/run/*,/mnt/*,/media/*,\
/lost+found,/var/backup/*,/sapmnt/*,/var/lib/ntp/proc/*,/etc/fstab}
middle_stamp=$(date +%s)
echo "Root file system completed in $( echo \
"(${middle_stamp}-${start_stamp})/60"| bc ) minutes $( echo "(${middle_stamp}-${start_stamp})%60"| bc ) seconds"
# Begin backup of /boot/efi file system
rsync -aAXxv --delete /boot/efi/ /var/backup/boot/efi/
end_stamp=$(date +%s)
echo "Boot file system completed in $( echo \
"(${end_stamp}-${start_stamp})/60"| bc ) minutes $( echo "(${end_stamp}-${start_stamp})%60"| bc ) seconds"
16.4
In case of a planned or unplanned system outage, the wish to recover the last known good backup of the
root and boot file system partitions that have been copied on to the backup partitions is possible. In the
case of a hard drive failure where the backup partitions have been lost, the copies stored on an external
storage must be recopied into the backup partitions after the hard drive failure has been resolved by the
hardware support team. After that, the restore can take place as described here.
Restart the machine and boot the backup OS. While booting, select the created boot option for the
backup partition from the list given by the ELILO boot loader menu. By default there is no menu
congigured, but if you press the TAB key while you see the text ELILO Booting:... you will be given
the options you can choose. The newly created option of "backup" should be visible. If not, rerun the
elilo verbose command in the original OS and restart.
The GRUB boot loader menu is shown by default (see 87: Sample GRUB boot loader screen on page 196.
You can use the arrow keys to select the newly created option "backup". This should be done only after
checking that the boot loader menu in the backup partition has been properly updated according to the
directions in 16.2.1: Correcting the backup fstab on page 192 above.
X6 Implementation Guide
1.9.96-13
195
Technical Documentation
/boot/efi
/
swap
/var/backup/boot/efi
/var/backup/root
vfat
ext3
swap
vfat
ext3
umask=0002,utf8=true
acl,user_xattr
defaults
umask=0002,utf8=true
acl,user_xattr
0
1
0
0
1
0
1
0
0
1
0
1
0
0
1
0
1
0
0
1
Warning
Be careful after using the rsync command to pay attention to the files /var/backup/etc/
fstab and the boot loaders /var/backup/boot/grub/grub.cfg or /var/backup/etc/elilo.
conf. Ensure that they have the reverse meaning to that described in the previous section.
On the primary partition, you should now be able to boot into the primary partition using the boot
loaders default menu item.
X6 Implementation Guide
1.9.96-13
196
Technical Documentation
17
This section provides instructions necessary to create a simple SAP HANA Platform Edition backup and
restore procedure. These images can then be used for a basic backup/restore solution. Initially, they are
copied locally and must be transferred to an offline storage for any real use. The intent is that the user
can have a simple backup and restore solution using the tools available with IBM GPFS and SAP HANA.
For advanced backup and restore solutions, we recommend to use an enterprise backup solution to ensure
backup/restore operations for IBM GPFS and SAP HANA.
What follows is a description how to take snapshots of the IBM GPFS file system and the SAP HANA
database. We also describe how to restore SAP HANA in case of a planned or unplanned disaster. This
enables the administrator to take backups of the SAP HANA data without interrupting the database
service (so called online backups of the database). The time it takes to actually backup the data afterwards
to a secure place does not affect SAP HANA operation.
Note
Features from SAP HANA Studio for snapshot generation are described as well. Identical
results can be achieved using the command-line SQL interface found in the SAP HANA guide
books.
17.1
Description
The procedure to backup SAP HANA and IBM GPFS only applies to SAP HANA 1.0 SPS 07 and later.
These instructions are also included in the SAP HANA Operations Guide. All screenshots were taken
with this release. The GUI may change with newer releases.
This procedure can restore data:
on the very same environment the snapshot was taken from,
on an environment that copies the landscape of the original system.
A change in landscape (mton copy) is not supported.
Make sure to always check the following locations for latest information:
http://help.sap.com/hana/SAP_HANA_Administration_Guide_en.pdfSAP HANA Administration Guide, Chapter: Backup and Recovery,
http://www.saphana.com/docs/DOC-1220SAP HANA Backup and Recovery Overview,
IBM GPFS snapshot http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/
com.ibm.cluster.IBMGPFS.v4r1.IBMGPFS200.doc/bl1adv_logcopy.htmdocumentation.
Warning
Do not go into production without verifying a full backup and restore procedure!
17.2
X6 Implementation Guide
1.9.96-13
197
Technical Documentation
In SAP HANA Studio either right-click on Backup and choose "Manage Storage Snapshots" from the
context menu or click on "Storage Snapshots" on the right. This allows to generate a snapshot. The
following dialog opens:
Click on Prepare. You are then asked to give this snapshot a name. This name will be stored in the
SAP HANA backup catalog. It does not appear outside of SAP HANA.
After clicking the OK button the snapshot is generated. Any log entries are merged into the data area
so that it has a consistent state that can be recovered from.
X6 Implementation Guide
1.9.96-13
198
Technical Documentation
While the snapshot is active you can not have further snapshots or backups taken from this SAP HANA
instance. Notice file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 this file
indicates that the content of this directory is a valid SAP HANA snapshot and can be used to recover
from.
The next step is to take a IBM GPFS snapshot. Login to any server of the SAP HANA installation. It
does not matter on which server you issue the IBM GPFS snapshot commands.
1
2
3
4
5
6
After this command has finished you have a new folder <snapshotname> in /sapmnt/.snapshots This
subfolder contains all files that you can then use to copy to a safe place. The IBM GPFS snapshot is
taken from the entire GPFS file system.
If the IBM GPFS snapshot has finished successfully confirm this fact and release the SAP HANA snapshot. In SAP HANA Studio click on Confirm. This opens the following dialog:
X6 Implementation Guide
1.9.96-13
199
Technical Documentation
We recommend to use the name given to the IBM GPFS snapshot as part of the mmcrsnapshot command.
After you acknowledge this window the wizard finishes and you can leave the storage snapshot dialog.
If the IBM GPFS snapshot did not finish successfully or was manually aborted, click on the Abandon
button and act accordingly.
Copy the IBM GPFS snapshot data to a safe place on an external storage device. E.g. this could be an
NFS export on a storage server. For instance, this can be done with the following tools: simple Linux
copy (cp), secure copy (scp) or rsync command. On the other hand, integration into IBM Tivoli Storage
Manager or other automated file backup tools is also possible. This depends highly on the customer
demands and availabilities regarding hardware and backup requirements.
See table 55 for the files and directories which need to be copied to an external storage in order to have
a full SAP HANA backup.
Path
Exclude
/sapmnt/.snapshot/<snapshotname>/shared
/sapmnt/.snapshot/<snapshotname>/shared/
<SID>/HDB<INST_NR>/backup
/sapmnt/.snapshot/<snapshotname>/data/
<SID>
Having more than one active snapshots at a time is supported by IBM GPFS. The maximum number
of snapshots in sapmntdata is 256 (this applies to IBM GPFS 3.5 and 4.1). You can list all existing
IBM GPFS snapshots with mmlssnapshot sapmntdata
X6 Implementation Guide
1.9.96-13
200
Technical Documentation
However, keep in mind that all IBM GPFS snapshots still remain on the same physical disks as your
production SAP HANA data. This does by no means represent a valid backup location! Moreover,
having IBM GPFS snapshots will lead to a slightly decreased file system performance. Therefore it is
essential to move and archive such backup to a remote device and to delete the snapshot.
17.3
There are two ways to restore the SAP HANA snapshot. Either with SAP HANA Studio or with a
command line statement.
Restore with SAP HANA Studio
In SAP HANA Studio right-click on the SAP HANA instance you want to recover to and select Recover.
The recovery wizard appears.
X6 Implementation Guide
1.9.96-13
201
Technical Documentation
Specify Snapshot as the type of backup to recover from. This disables the location box.
If you restore on the same system from which the snapshot was taken you can skip the license key question.
If you are restoring to a different system you need to provide a license key. If you do not specify a valid
key the restore still completes successfully but the database instance will be locked afterwards. It is
possible to specify a valid license key later on.
X6 Implementation Guide
1.9.96-13
202
Technical Documentation
In the next step, restore takes places. Restore time depends on the amount of data being recovered and
the number of servers involved.
Restore via command line
In order to restore the SAP HANA snapshot, execute the following commands as <sid>adm:
1
2
su - nktadm
./HDBSettings.sh recoverSys.py --command "RECOVER DATA USING SNAPSHOT CLEAR LOG"
After the restore completes successfully the procedure automatically starts the SAP HANA instance. The
file snapshot_databackup_0_1 in /sapmnt/data/<SID>/mnt00001/hdb00001 is automatically removed
upon a successful restore.
X6 Implementation Guide
1.9.96-13
203
Technical Documentation
18
Troubleshooting
For the Lenovo Systems Solution for SAP HANA Platform Edition the installation of SLES for SAP as
well as the installation and configuration of IBM GPFS, and SAP HANA has been greatly simplified
by an installation process with an accompanying guided installation. This process automatically installs
and configures the base OS components necessary for the SAP HANA appliance software. It is no longer
supported to install the OS manually for the Lenovo Solution.
18.1
When configuring a clustered configuration by hand, install SAP HANA worker and standby nodes as
described in the Lenovo SAP HANA Appliance Operations Guide 34 (Section 4.3 Cluster Operations
Adding a cluster node).
18.2
If you updated the Linux kernel, you will have to update the portability layers for GPFS before starting
SAP HANA. After a kernel reboot, you will not see the GPFS mount points available. Follow the
directions above in section regarding updating both portability layers.
18.3
One possible reason for degrading disk I/O on the HDDs or SSDs could be a discharged or disconnected
battery on the RAID controller. In that case the cache policy is changed from "WriteBack" (default) to
WriteThrough, meaning that the data is written to disk instead to the cache. This will have a significant
I/O performance impact.
To verify, please proceed as follows:
1. The StorCLI tool (see section 15.4.2: ServeRAID StorCLI Utility for Storage Management on page
187) is installed during HANA setup. The path is /opt/MegaRAID/storcli/. If you have been
using the MegaCli64 client before, you dont have to learn new commands. The commands are the
same.
2. Determine current cache policy:
1
3. Depending on the model there is a varying number of output lines. Sample output:
1
2
3
4
Default
Current
Default
Current
Cache
Cache
Cache
Cache
Policy:
Policy:
Policy:
Policy:
WriteBack,
WriteBack,
WriteBack,
WriteBack,
ReadAhead,
ReadAhead,
ReadAhead,
ReadAhead,
Direct,
Direct,
Cached,
Cached,
No
No
No
No
Write
Write
Write
Write
Cache
Cache
Cache
Cache
if
if
if
if
Bad
Bad
Bad
Bad
BBU
BBU
BBU
BBU
If the output contains "WriteThrough" for the "Current Cache Policy" while the previous "Default
Cache Policy" defines "WriteBack", the cache policy has been switched from the "WriteBack" default
due to some issue.
You can then check each batterys status. For example, with the sample output above you would
check the status of the first two adapters batteries (the third one is OK).
34 SAP
X6 Implementation Guide
1.9.96-13
204
Technical Documentation
1
2
18.4
When a IBM Certified Engineer exchanges a system board, he is required only to reset the Manufacturer
Type and Model (MTM) and serial number of the machine inside of the EEPROM Settings. SAP HANA
hardware checker (before revision 27) looks at the description of the string instead of the MTM.
To workaround this issue a Lenovo services person can use the Lenovo Advanced Settings Utility (ASU)
tool (see section 15.4.1: Lenovo Advanced Settings Utility on page 187) to reset the system product data to
the correct data for the SAP installer to work. ASU is installed under /opt/lenovo/toolscenter/asu.
The tool can then be used to view or set the firmware settings of the IMM from the command line. For
example to show and subsequently reset the System Product Identifier required by SAP HANA, you can
use the following commands:
1
18.5
18.6
You can find a list of SAP Notes in Appendix G.4: SAP Notes (SAP Service Marketplace ID required)
on page 227. This chapter is to describe some of these SAP Notes in more detail.
18.6.1
https://service.sap.com/sap/support/notes/1641148
18.6.1.1 Symptom You are running a SAP HANA scale out landscape and see different time zone
settings for the sidadm user.
18.6.1.2 Reason and Prerequisites Your SAP HANA scale out landscape shows different time
zone settings for at least one server, i.e. the master node shows time zone UTC and all other nodes
show time zone CET. This may be caused by an inconsistency in the installation process and should be
corrected.
X6 Implementation Guide
1.9.96-13
205
Technical Documentation
18.6.1.3 Solution To change the time zone settings of the sidadm user: go to the home directory
/usr/sap/
1
2
X6 Implementation Guide
1.9.96-13
206
Technical Documentation
Appendices
A
GPFS 3.5 introduced a new disk descriptor format called stanzas. The old disk descriptor format is
deprecated since GPFS 3.5. This stanza format is also valid for GPFS 4.1 (introduced with release 1.8).
Create the file /var/mmfs/config/disk.list.data.gpfsnode01 by concatenating the following parts:
1. Always add
%nsd: device=/dev/sdb
nsd=data01node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
2. When having one RAID array in the SAS expansion unit, add
%nsd: device=/dev/sdc
nsd=data02node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
3. When having two RAID arrays in the SAS expansion unit, also add
%nsd: device=/dev/sdd
nsd=data03node01
servers=gpfsnode01
usage=dataAndMetadata
failureGroup=1001
pool=system
4. Always add these lines at the end
%pool:
pool=system
blockSize=1M
usage=dataAndMetadata
layoutMap=cluster
allowWriteAffinity=yes
writeAffinityDepth=1
blockGroupFactor=1
B Topology Vectors
This is currently valid only for DR-enabled clusters; for standard HA-enabled clusters use the plain single number failure groups as described in the instructions above.
With GPFS 3.5 TL2 (the base version for DR) a new failure group (FG) format called "topology vectors" was introduced, which is used for the DR solution. A more detailed description of topology vectors can be found in the GPFS 3.5 Advanced Administration Guide, chapter "GPFS File Placement Optimizer".
In short, the topology vector is a replacement for the old FGs, storing more information about the infrastructure of the cluster. Topology vectors are used for NSDs, but as the same topology vector is used for all disks of a server node, it is explained here in the context of a server node.
In a standard DR cluster setup all nodes are grouped evenly into four FGs (five when using the tiebreaker node) with two FGs on each site.
A topology vector consists of three numbers separated by commas. The first of the three numbers is either 1 or 2 for all the SAP HANA nodes, or 3 for the tiebreaker node. The second number is 0 (zero) for all site A nodes and 1 for all site B nodes. The third number enumerates the nodes in each of the failure groups, starting from 1.
In a standard eight node DR-cluster (4 nodes per site) we would have these topology vectors:
Site    | Failure Group                         | Topology Vector | Node
Site A  | Failure group 1 (1,0,x)               | 1,0,1           | gpfsnode01 / hananode01
        |                                       | 1,0,2           | gpfsnode02 / hananode02
        | Failure group 2 (2,0,x)               | 2,0,1           | gpfsnode03 / hananode03
        |                                       | 2,0,2           | gpfsnode04 / hananode04
Site B  | Failure group 3 (1,1,x)               | 1,1,1           | gpfsnode05 / hananode01
        |                                       | 1,1,2           | gpfsnode06 / hananode02
        | Failure group 4 (2,1,x)               | 2,1,1           | gpfsnode07 / hananode03
        |                                       | 2,1,2           | gpfsnode08 / hananode04
Site C  | Failure group 5 (tiebreaker) (3,0,x)  | 3,0,1           | gpfsnode99
C Quotas
C.1 Quota Calculation
Note
This section is only for information purposes. Please use the quota calculator in the next
section C.2: Quota Calculation Script on page 209.
The quota calculation for this and the following appliance releases is more complex than the quota calculations in previous releases. A utility script is provided to make the calculation easier.
In general the quota calculation follows the SAP recommendations for HANA 1.4 and later.
For HANA single nodes and HA-enabled clusters, quotas are set for HANA log files, HANA data volumes and the shared HANA data. In DR-enabled clusters a quota should be set only for SAP HANA's log files.
The formula for the quota calculation is
C.2 Quota Calculation Script
A script is available to ease the quota calculation. The standard installation uses this script to calculate the quotas during installation, and the administrator can also call this script to recalculate the quotas after a topology change, e.g. installation of additional HANA instances, a change of node roles, or shrinking or growing the cluster.
Most values are read from the system or guessed. For a cluster the standard assumption is to have one dedicated standby node. For a DR solution no reliable guess on the nodes can be made and the manual override must be used.
The basic call is:

# saphana-quota-calculator.sh
As a result it will print the calculated quotas and the commands to set them. After reviewing these you can add the -a parameter to the call, which will automatically set the quotas as calculated.
If you are running a cluster and the number of dedicated standby nodes is not one, use the parameter -s <# standby> to set a specific number of standby hosts. 0 is also a valid value.
In the case of a DR-enabled cluster, the guess for the active worker nodes will always be wrong. Please also use the parameter -w <# workers> to set the number of nodes running HANA as active workers. The number of workers and standbys should equal the number of nodes on a site.
Additional parameters are -r to get a more detailed report on the quota calculation and -c to verify the currently set quotas (allows a deviation of 10%, which is too inaccurate for larger clusters with more than 8 nodes).
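For example, a sketch based on the parameters described above (verify the options against your script version):

# saphana-quota-calculator.sh -r            # detailed report only, no changes
# saphana-quota-calculator.sh -s 2 -a       # cluster with two standby nodes, set quotas
# saphana-quota-calculator.sh -w 3 -s 1 -a  # DR cluster: three workers, one standby per site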
D Performance Settings
Please review the following configuration settings if the support script indicates it:
1. Change Processor C-State Boot Parameter
This disables the use of some processor C-states, which reduce power consumption but can lower performance. This boot parameter should not have any effect on Lenovo solutions, as restricting the processor C-states is already done in other settings. However, SAP requires that this parameter be set at boot.
(a) ELILO installed systems (SLES based systems)
Change line 12 in /etc/elilo.conf from

kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet

to

kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 ro root=UUID=3d420911-eef8-46de-b019-aff9d6e7d36a rd_NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_LVM rd_NO_DM intel_idle.max_cstate=0 processor.max_cstate=0 transparent_hugepage=never crashkernel=auto rhgb quiet rhgb quiet
2. Set the SAP HANA file I/O parameters as the database administrator user:

su - [sid]adm
hdbparam --paramset fileio.async_write_submit_active=on
hdbparam --paramset fileio.async_write_submit_blocks=all
hdbparam --paramset fileio.async_read_submit=on
There are two additional parameters that are not available in HANA revision 80 but are available in revisions 90 and above.
#!/bin/bash
sysctl -w net.ipv4.tcp_rmem="8388608 8388608 8388608"
sysctl -w net.ipv4.tcp_wmem="8388608 8388608 8388608"
Make the file executable:
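The command itself is a hedged sketch (the script path is a placeholder):

# chmod +x /path/to/script.sh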
for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo $QUEUESIZE > $i/queue/nr_requests
        echo $QUEUEDEPTH > $i/device/queue_depth
    fi
done
Afterwards, lines 26-32 look like:
for i in /sys/block/sd* ; do
    if [ -d $i ]; then
        echo $QUEUESIZE > $i/queue/nr_requests
        echo $QUEUEDEPTH > $i/device/queue_depth
        echo noop > ${i}/queue/scheduler
    fi
done
To temporarily apply the settings immediately without a reboot, perform the following command for each disk entry (sda, sdb, etc.) in /sys/block/:
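The command is a hedged sketch for /dev/sda, matching the scheduler setting from the block above:

# echo noop > /sys/block/sda/queue/scheduler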
E SAP HANA T-Shirt Sizes and Models
Starting with the support of the Intel Xeon IvyBridge EX family of processors, SAP has changed their naming of the models. Previously, SAP had named these "T-Shirt" sizes S, M, L, XL, etc. The new naming convention is purely based on the amount of memory each predefined configuration should contain, for example 128, 256, 512, etc. Each of these servers is orderable with the proper components to fulfill the SAP pre-configured system sizes.
The following table shows the SAP HANA T-Shirt Sizes to Machine Type Model (MTM) code mapping.
The last x in the MTM is a placeholder for the region code the server was sold in, for example, a U for
the USA. While the Machine Type is 6241, the different Models are shown below.
Chassis | CPUs | Memory | Usage      | Model      | Possible Model
4U      | 2    | 128GB  | Standalone | AC32S128S  | 6241-AC3, -H2x1, -HZx2, -HUx3
4U      | 2    | 256GB  | Standalone | AC32S256S  | 6241-AC3, -H3x1, -HYx2, -HTx3
4U      | 2    | 256GB  | Scale-out  | AC32S256C  | 6241-AC3
4U      | 2    | 384GB  | Standalone | AC32S384S  | 6241-AC3
4U      | 2    | 512GB  | Standalone | AC32S512S  | 6241-AC3, -H4x1, -HXx2, -HSx3
4U      | 2    | 512GB  | Scale-out  | AC32S512C  | 6241-AC3
4U      | 4    | 256GB  | Standalone | AC34S256S  | 6241-AC3
4U      | 4    | 512GB  | Standalone | AC34S512S  | 6241-AC3, -H5x1, -HWx2, -HRx3
4U      | 4    | 512GB  | Scale-out  | AC34S512C  | 6241-AC3
4U      | 4    | 768GB  | Standalone | AC34S768S  | 6241-AC3
4U      | 4    | 1TB    | Standalone | AC34S1024S | 6241-AC3, -H6x1, -HVx2, -HQx3
4U      | 4    | 1TB    | Scale-out  | AC34S1024C | 6241-AC3
4U      | 4    | 1.5TB  | Standalone | AC34S1536S | 6241-AC3
4U      | 4    | 2TB    | Standalone | AC34S2048S | 6241-AC3
4U      | 4    | 3TB    | Standalone | AC34S3072S | 6241-AC3
4U      | 4    | 4TB    | Standalone | AC34S4096S | 6241-AC3
4U      | 4    | 6TB    | Standalone | AC34S6144S | 6241-AC3
Chassis | CPUs | Memory | Usage      | Model       | Possible Model
8U      | 4    | 256GB  | Standalone | AC44S256S   | 6241-AC4
8U      | 4    | 512GB  | Standalone | AC44S512S   | 6241-AC4, -HBx1, -HEx2, -HHx3
8U      | 4    | 512GB  | Scale-out  | AC44S512C   | 6241-AC4
8U      | 4    | 768GB  | Standalone | AC44S768S   | 6241-AC4
8U      | 4    | 1TB    | Standalone | AC44S1024S  | 6241-AC4, -HCx1, -HFx2, -HIx3
8U      | 4    | 1TB    | Scale-out  | AC44S1024C  | 6241-AC4
8U      | 4    | 1.5TB  | Standalone | AC44S1536S  | 6241-AC4
8U      | 4    | 2TB    | Standalone | AC44S2048S  | 6241-AC4
8U      | 8    | 512GB  | Standalone | AC48S512S   | 6241-AC4
8U      | 8    | 1TB    | Scale-out  | AC48S1024C  | 6241-AC4
8U      | 8    | 1.5TB  | Standalone | AC48S1536S  | 6241-AC4
8U      | 8    | 2TB    | Standalone | AC48S2048S  | 6241-AC4, -HDx1, -HGx2, -HJx3
8U      | 8    | 2TB    | Scale-out  | AC48S2048C  | 6241-AC4
8U      | 8    | 3TB    | Standalone | AC48S3072S  | 6241-AC4
8U      | 8    | 3TB    | Scale-out  | AC48S3072C  | 6241-AC4
8U      | 8    | 4TB    | Standalone | AC48S4096S  | 6241-AC4
8U      | 8    | 4TB    | Scale-out  | AC48S4096C  | 6241-AC4
8U      | 8    | 6TB    | Standalone | AC48S6144S  | 6241-AC4
8U      | 8    | 6TB    | Scale-out  | AC48S6144C  | 6241-AC4
8U      | 8    | 8TB    | Standalone | AC48S8192S  | 6241-AC4
8U      | 8    | 8TB    | Scale-out  | AC48S8192C  | 6241-AC4
8U      | 8    | 12TB   | Standalone | AC48S12288S | 6241-AC4
8U      | 8    | 12TB   | Scale-out  | AC48S12288C | 6241-AC4
F Frequently Asked Questions
The support script saphana-support-ibm.sh can detect various known problems in your appliance. In case such a problem is found, the support script will give an FAQ entry number. Please follow only the instructions given in that particular entry. When in doubt please contact Lenovo support via SAP's OSS ticket system.
Information on how to run the support script can be found in the Operations Guide, section 2.3 Basic System Check. Please always use the latest support script, which may detect new issues found after your appliance was installed. You can find the latest version attached to SAP Note 1661146 Lenovo Check Tool for SAP HANA appliances.
F.1 FAQ #1: SAP HANA Memory Limits
Problem: If left unconfigured, each installed and running HANA instance may use up to 97% (90% in older HANA revisions) of the system's memory. If multiple unconfigured or misconfigured HANA systems are running on the same machine(s), "Out of Memory" situations may occur. In this case the so-called "OOM killer" of Linux is triggered, which will terminate running processes at random and in most cases will kill SAP HANA or GPFS first, leading to service interruption. An unconfigured HANA system is a system lacking a global_allocation_limit setting in the HANA system's global.ini file. Misconfigured SAP HANA systems are multiple systems running at the same time with a combined memory limit over 90% of the physically installed memory.
Solution: Please configure the global allocation limit for all systems running at the same time. This can be done by setting the global_allocation_limit parameter in the systems' global.ini configuration files. Please calculate the combined memory allocation for HANA so that at least 25GB remain free for other programs. Please use only the physically installed memory for your calculation.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
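For illustration only, a hedged sketch of such a setting in global.ini (the value is given in MB; here a 128GB single node limited to 100GB, in line with FAQ #3 below):

[memorymanager]
global_allocation_limit = 102400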
F.2 FAQ #2: GPFS Parameter readReplicaPolicy
Problem: Older cluster installations do not have the GPFS parameter readReplicaPolicy set to "local", which may improve performance in certain cases. Newer cluster installations have this value set, and single nodes are not affected by this parameter at all. It is recommended to configure this value.
Solution: Execute the following command on any cluster node at any time:
# mmchconfig readReplicaPolicy=local
This can be done during normal operation. The change becomes effective immediately for the whole GPFS cluster and persists across reboots.
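To verify the setting afterwards, a hedged sketch:

# mmlsconfig | grep -i readReplicaPolicy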
F.3 FAQ #3: SAP HANA Memory Limits on XS Sized Servers
Problem: For a general description of the SAP HANA memory limit see Appendix F.1: FAQ #1: SAP HANA Memory Limits on page 216. XS sized servers have only 128GB RAM installed, of which even a single SAP HANA system will use up to 93.5%, equaling 119GB (older revisions of HANA used 90%, i.e. 115GB), if no lower memory limit is configured. This leaves too little memory for other processes, which may trigger Out-Of-Memory situations causing crashes.
Solution: Please configure the global allocation limit for the installed SAP HANA system to a more appropriate value. The recommended value is 112GB if the GPFS pagepool size is set to 4GB (see FAQ #12: GPFS pagepool should be set to 4GB) and 100GB or less if the GPFS pagepool is set to 16GB. If multiple systems are running at the same time, please calculate the total memory allocation for HANA so the sum does not exceed the recommended value. Please use only the physically installed memory for your calculation.
More information on the parameter global_allocation_limit can be found in the "HANA Administration
Guide" at http://help.sap.com/hana_appliance/. Please configure the memory limits as described
there.
F.4 FAQ #4: Overlapping NSDs
Problem: Under some rare conditions single node SSD or XS/S gen 2 models may be installed with overlapping NSDs. Overlapping means that the whole drive (e.g. /dev/sdb) as well as a partition on the same device (e.g. /dev/sdb2) may be configured as NSDs in GPFS. As GPFS writes data to both NSDs, each NSD will overwrite and corrupt data on the other NSD. At some point the whole-device NSD will overwrite the partition table, the partition NSD is lost, and GPFS will fail. This is the most common situation in which the problem is noticed.
Consider any data stored in /sapmnt to be corrupted, even if the file system check finds no errors.
Solution: The only solution is to reinstall the appliance from scratch. To prevent installing with the
same error again, the single node installation must be completed in phase 2 of the guided installation.
Do not deselect "Single Node Installation".
F.5 FAQ #5: Missing Packages for SAP Software Upgrades
Problem: An upgrade of SAP HANA or another SAP software component fails because of missing dependencies. As some of these package dependencies were added by SAP HANA after your system was initially installed, you may install the missing packages and still receive full support for the Lenovo Systems solution. If you no longer have the SLES for SAP DVD or RHEL DVD (depending on which OS you are using) that was delivered with your system, you may obtain it again from the SUSE Customer Center or Red Hat, respectively.
Solution: Ensure that the packages listed below are installed on your appliance.
SUSE Linux Enterprise Server for SAP Applications
libuuid
gtk2 - Added for HANA Developer Studio
java-1_6_0-ibm - Added for HANA Developer Studio
libicu - Added since revision 48 (SPS04)
mozilla-xulrunner192-* - Added for HANA Developer Studio
ntp
sudo
syslog-ng
tcsh
libssh2-1 - Added since revision 53 (SPS05)
expect - Added since revision 53 (SPS05)
autoyast2-installation - Added since revision 53 (SPS05)
yast2-ncurses - Added since revision 53 (SPS05)
Red Hat Enterprise Linux: At the moment there are no known packages that have to be installed additionally.
Missing packages can be installed from the SLES for SAP DVD shipped with your appliance using the following instructions. It is possible to add the DVD that was included with your appliance as a repository and install the necessary RPM packages from there. First, check whether the SUSE Linux Enterprise Server medium is already added as a repository:
# zypper repos
# | Alias          | Name           | Enabled | Refresh
--+----------------+----------------+---------+--------
1 | SUSE-Linux-... | SUSE-Linux-... | Yes     | No
If it doesn't exist, please place the DVD in the drive (or add it via the Virtual Media Manager) and add it as a repository. This example uses the SLES for SAP 11 SP3 media.
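A hedged sketch of a typical invocation (the alias is illustrative and matches the listing below):

# zypper addrepo "cd:///?devices=/dev/sr0" "SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138"

Afterwards, verify the repository list: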
# zypper lr -u
# | Alias                                             | Name                                              | Enabled | Refresh | URI
--+---------------------------------------------------+---------------------------------------------------+---------+---------+-------------------------
1 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | Yes     | No      | cd:///?devices=/dev/sr0
# cp -r /media/SLES-11-SP3-DVD*/* /var/tmp/install/sles11/ISO/
Register the directory as a repository with zypper:
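The command itself is a hedged sketch (the alias matches the listing below):

# zypper addrepo "file:/var/tmp/install/sles11/ISO/" "SUSE-Linux-Enterprise-Server-11-SP3"

Then list the repositories again: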
# zypper lr -u
# | Alias                                             | Name                                              | Enabled | Refresh | URI
--+---------------------------------------------------+---------------------------------------------------+---------+---------+-----------------------------------
1 | SUSE-Linux-Enterprise-Server-11-SP3               | SUSE-Linux-Enterprise-Server-11-SP3               | Yes     | Yes     | file:/var/tmp/install/sles11/ISO/
2 | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138  | Yes     | No      | cd:///?devices=/dev/sr0
Then search to ensure that the package can be found. This example searches for libssh.
# zypper search libssh

S | Name      | Summary                             | Type
--+-----------+-------------------------------------+--------
  | libssh2-1 | A library implementing the SSH2 ... | package
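The install step that follows is a hedged sketch:

# zypper install libssh2-1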
F.6 FAQ #6: CPU Governor Settings
Problem: Linux uses a power-saving technology called "CPU governors" to control CPU throttling and power consumption. By default Linux uses the "ondemand" governor, which dynamically throttles CPUs up and down depending on CPU load. SAP advises using the "performance" governor, as the ondemand governor impacts HANA performance by scaling CPUs up too slowly. Since appliance version 1.5.53-5 (that is, SLES for SAP 11 SP2 based appliances) we changed the CPU governor to performance. In case of an upgrade you also need to change the governor setting. If you are still running SLES for SAP 11 SP1 based appliances, you may also change this setting to trade power saving for performance. This performance boost was not quantified by the development team.
Solution: On all nodes append the following lines to the file /etc/rc.d/boot.local:

bios_vendor=$(/usr/sbin/dmidecode -s bios-vendor)
# Phoenix Technologies LTD means we are running in a VM and governors are not available
if [ $? -eq 0 -a ! -z "${bios_vendor}" -a "${bios_vendor}" != "Phoenix Technologies LTD" ]; then
    /sbin/modprobe acpi_cpufreq
    for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    do
        echo performance > $i
    done
fi
The setting will take effect on the next reboot. You can also safely change the governor settings immediately by executing the same lines in a shell. Copy and paste all the lines at once, or type them one by one.
F.7 FAQ #7: HANA Fails to Start Due to Insufficient Disk Space
Problem: Starting HANA fails due to insufficient disk space. The following error message will be found in the indexserver or nameserver trace:

Run

HDB info

to see whether any HANA processes are running. If there are, run

kill -9 proc_pid

to shut them down, one by one.
Download and apply GPFS version 3.4.0.23. Refer to section 13.5: Updating GPFS on page 167 for information about how to upgrade GPFS.
Note
It is recommended that you consider upgrading your GPFS version from 3.4 to 3.5, as support for GPFS 3.4 has been discontinued by IBM.
SAP highly recommends that you run the uniqueChecker.py script after patching GPFS to make sure that your database is consistent.
F.8 FAQ #8: intel_idle Driver Ignores C-State Settings
The Linux kernel used by SAP HANA includes a built-in driver (intel_idle) which will ignore any C-state limits imposed by the Basic Input/Output System (BIOS)/Unified Extensible Firmware Interface (UEFI) when it is active.
This driver may cause issues by enabling C-states even though they are disabled in the BIOS or UEFI. This can cause minor latency as the CPUs transition out of a C-state and into a running state. This is not the preferred state for the SAP HANA appliance and must be changed.
To prevent the intel_idle driver from ignoring BIOS or UEFI settings for C-states, add the following start parameter to the kernel's boot loader configuration file:

intel_idle.max_cstate=0

Append the parameter to the end of the kernel command line of your boot loader (/boot/grub/menu.lst) and reboot the server.
Warning
For clustered configurations, this change needs to be done on each server of the cluster. Only make this change when all servers can be rebooted at once, or when you have an active standby node that can take over the rebooting system's HANA services. Do not reboot more servers than there are active standby nodes.
For further information please refer to the SUSE knowledgebase article.
F.9 FAQ #9: ServeRAID M5120 Controller Resets and Degraded I/O Performance
Problem: After the initial release of the new X6-based servers (x3850 X6, x3950 X6), a serious issue in various firmware versions of the ServeRAID M5120 RAID adapter was found which can trigger continuous controller resets. This happens only under heavy load, and each controller reset may cause a service interruption. Certain firmware versions do not exhibit this issue, but these versions show severely degraded I/O performance. Only servers using the ServeRAID M5120 controller for attaching an external SAS enclosure are affected.
Future appliance versions will have the workaround for the controller reset issue preinstalled, while the performance issue can only be solved by an up- or downgrade to an unaffected firmware version.
35 http://www.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5091901
Affected versions: 23.7.1-0010, 23.12.0-0011, 23.12.0-0016, 23.12.0-0019, 23.16.0-0018, 23.16.0-0027
Table 59: ServeRAID M5120 Firmware Issues
Solution: The current recommendation is to use firmware version 23.22.0-0024 (or newer, if listed as
stable by Lenovo SAP HANA Team) and to change the following configuration value in the installed OS.
Both can be done after installation.
F.9.1 Queue Depth Workaround
On the installed appliance, please edit /etc/init.d/ibm-saphana and change the lines

1 function start() {
2     QUEUESIZE=1024
3     for i in /sys/block/sd* ; do
4         if [ -d $i ]; then
5             echo $QUEUESIZE > $i/queue/nr_requests
6         fi
7     done
to this version (if not already set)

1 function start() {
2     QUEUESIZE=1024
3     QUEUEDEPTH=250
4     for i in /sys/block/sd* ; do
5         if [ -d $i ]; then
6             echo $QUEUESIZE > $i/queue/nr_requests
7             echo $QUEUEDEPTH > $i/device/queue_depth
8         fi
9     done
by inserting lines 3 & 7. The new settings will be set on the next reboot or by calling
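the init script, shown here only as a hedged sketch (assuming the start function applies the settings):

# /etc/init.d/ibm-saphana start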
You can verify the installed firmware package with the adapter information command of your RAID management tool; sample output:

Adapter #1
==============================================================================
                    Versions
                ================
Product Name    : ServeRAID M5120
Serial No       : xxxxxxxxxx
FW Package Build: 23.22.0-0024
Currently, version 23.22.0-0024 is recommended. Download the 23.22.0-0024 FW package for ServeRAID M5100 series SAS/SATA adapters via IBM Fix Central or use the following direct link: https://ibm.biz/BdRatD.
To check the current queue depth of all devices attached to the M5120 controller, run:

for dev in $(lsscsi | grep -i m5120 | grep -E -o '/dev/sd[a-z]+' | cut -d '/' -f3) ; do cat /sys/block/${dev}/device/queue_depth ; done
F.10 FAQ #10: GPFS Parameter enableLinuxReplicatedAIO
With GPFS version 3.5.0-13 the new GPFS parameter enableLinuxReplicatedAIO was introduced.
Please note the following:
Single node installations: Single node installations are not affected by this parameter. It can
be set to "yes" or "no".
Cluster installations:
GPFS 3.5.0-13 - 3.5.0-15: The parameter must be set to "no". When upgrading to GPFS
3.5.0-16 or higher you have to manually set the value to "yes".
Warning
Instead of setting the parameter to "no" we highly recommend upgrading GPFS to 3.5.0-16 or higher.
GPFS 3.5.0-16 or higher: The parameter must be set to "yes".
DR cluster installations: The parameter must be set to "yes".
The support script (saphana-support-ibm.sh) checks whether the parameter is set correctly. If it is not, adjust the setting accordingly:

# mmchconfig enableLinuxReplicatedAIO=no
# mmchconfig enableLinuxReplicatedAIO=yes
F.11 FAQ #11: GPT Recovery Can Destroy GPFS NSDs
Problem: In some very rare occasions GPFS NSDs may be created on devices with a GUID Partition Table (GPT). When the NSD is created, parts of the primary GPT header are overwritten. Newer UEFI firmware releases offer an option to repair damaged GPTs, and if this option is activated, the UEFI may try to recover the primary GPT from the backup copy during boot-up. This will destroy the NSD header, and in the case of single nodes this leads to the loss of all data in the GPFS filesystem.
For this issue to occur, the following prerequisites must all apply:
A storage device used as an NSD in a GPFS filesystem must have had a GPT before the NSD was created. This can only happen if the drive or RAID array was used before and has not been wiped or reassembled. As part of the HANA appliance, GPT labels on non-OS disks are only created as part of the mixed eX5/X6 clusters. If a system was only used for the HANA appliance, this cannot occur unless there was a misconfiguration.
GPFS 3.4 or GPFS 3.5 was used when the NSD and the filesystem were created, either during installation or manually after installation, regardless of the currently running GPFS version. GPFS 4.1 uses protective partition tables to prevent this issue when creating new NSDs.
A UEFI version with GPT recovery functionality is either installed or an upgrade to such a version is planned. Further risk comes from the UEFI upgrade, as these new UEFI versions enable the GPT recovery by default.
The probability for this combination is very low.
Solution: If the support script pointed you to this FAQ entry, please contact Lenovo Support via SAP's OSS ticket system and put the message in the queue BC-OP-LNX-IBM. Please prepare a support script dump as described in SAP Note 1661146 Lenovo Check Tool for SAP HANA appliances. Lenovo support will then devise a solution for your installation.
When the ASU tool is installed, run the command
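The command is a hedged sketch; the UEFI setting name is an assumption and should be verified with the ASU show command first:

# /opt/lenovo/toolscenter/asu/asu64 set DiskGPTRecovery.DiskGPTRecovery "None"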
F.12 FAQ #12: GPFS Pagepool Should Be Set to 4GB
Problem: GPFS is configured to use 16GB RAM for its so-called pagepool. Recent tests showed that the size of this pagepool can be safely reduced to 4GB, which yields 12GB of memory for other running processes. Therefore it is recommended to change this parameter on all appliance installations and versions. Updated versions of the support script will warn if the pagepool size is not 4GB and will refer to this FAQ entry.
Solution: Please change the pagepool size to 4GB. Execute

# mmchconfig pagepool=4G

to change the setting cluster-wide. This means this command needs to be run only once, on both single node and clustered installations.
The pagepool is allocated during the startup of GPFS, so a GPFS restart is required to activate the new
setting. Please stop HANA and any processes that access GPFS filesystems before restarting GPFS. To
restart GPFS execute
# mmshutdown
# mmstartup
In clusters all nodes need to be restarted. You can do this one node at a time, or restart all nodes at once by adding the parameter -a to both commands. In the latter case please make sure no program is accessing GPFS filesystems on any node.
To verify the active setting, execute

# mmdiag --config

and search for the pagepool line. The value is shown in bytes.
F.13 FAQ #13: Limit Page Cache Pool to 4GB (SAP Note #1557506)
Problem: SLES offers an option to limit the size of the page cache pool. By default the page cache size is unlimited. SAP recommends in SAP Note 1557506 Linux paging improvements to limit this page cache to 4GB of RAM. This may improve resilience against Out-Of-Memory events.
Future appliance software versions will set this value by default. RHEL currently does not offer this option.
Solution: Add the following line to the file /etc/sysctl.conf:

vm.pagecache_limit_mb = 4096

and run

# sysctl -e -p

to activate this value without a reboot. This change can be done without downtime.
F.14 FAQ #14: GPFS Parameter restripeOnDiskFailure
GPFS 3.5 and higher come with the new parameter restripeOnDiskFailure. The GPFS callback script start-disks-on-startup automatically installed on the Lenovo Solution is superseded by this parameter: GPFS NSDs are automatically started on startup when restripeOnDiskFailure is activated.
On DR cluster installations, neither the callback script nor restripeOnDiskFailure should be activated.
Solution: To remove the old callback script on all nodes in the cluster, execute:

# mmdelcallback start-disks-on-startup
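Enabling the parameter itself is shown here only as a hedged sketch (parameter name as documented by IBM for GPFS 3.5 and higher):

# mmchconfig restripeOnDiskFailure=yes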
G References
G.1 Lenovo References
G.2 IBM References
G.3
G.4 SAP Notes (SAP Service Marketplace ID required)
SAP Note 2051052 GPFS "No space left on device" when df shows free space
SAP Notes regarding Virtualization
SAP Note 1122387 Linux: SAP Support in virtualized environments
G.5 SUSE References
Currently Supported:
SUSE Linux Enterprise Server 11 SP3 Release Notes
SUSE Linux Enterprise Server for SAP Applications 11 SP3 Media
G.6 Red Hat References
Red Hat Enterprise Linux 6: Why can I not install or start SAP HANA after a system upgrade?
Red Hat Enterprise Linux 6: Red Hat Enterprise Linux for SAP HANA: system updates and supportability
Changelog
This section describes the changes that have been made within a release version since it was published.