Contents
Chapter 1: Save New Spooled Files .......................................... 1
    The QSRSAVO API ........................................................ 1
    RcdLen and DataLen Fields .............................................. 4
    The SPLFDTA (Key #35) Parameter ........................................ 5
    Save Your Spooled Files ................................................ 8
Chapter 2: IBM i Access for Windows: Silent Install, Selective Install ..... 9
    How IBM i Access Is Distributed ....................................... 10
    Passing Parameters to the Installation Program ........................ 10
    Custom Installations .................................................. 12
    Using the setup.ini File .............................................. 12
    Many More Options ..................................................... 14
Chapter 5: How to Implement Open-Source Solutions: Laying the Linux Foundation .. 27
    Exploring the Capabilities ............................................ 27
    Simplifying the Linux Installation .................................... 28
    Operating System Replication .......................................... 28
    What About VIOS? ...................................................... 29
Chapter 6: Optimize System i Performance Adjuster and Shared Memory Pools .. 35
    Some Background ....................................................... 35
    Buying vs. Tuning ..................................................... 36
    What Performance Adjuster Does ........................................ 36
    Managing Activity Levels .............................................. 37
    Rationalizing and Implementing Shared Pools ........................... 38
    Initializing Performance Adjuster ..................................... 39
    An Explanation of Parameters .......................................... 40
    One Note .............................................................. 41
    In Conclusion ......................................................... 41
Chapter 1: Save New Spooled Files
The QSRSAVO API has just two parameters: a fully qualified user space name and an error code data structure. Figure 1 shows the prototype for ILE RPG. Any program using this API must store the parameters for performing the save in the user space before calling the API. (These are somewhat different
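The fully qualified user space name is a single 20-byte field: a 10-character object name followed by a 10-character library name, each blank-padded. For illustration only (the article's code is ILE RPG; the helper name and sample values here are hypothetical), the layout can be sketched in Python:

```python
def qualified_name(obj: str, lib: str) -> str:
    """Build a 20-byte qualified object name: 10-char object name followed
    by 10-char library name, each left-justified and blank-padded."""
    if len(obj) > 10 or len(lib) > 10:
        raise ValueError("IBM i object and library names are at most 10 characters")
    return obj.upper().ljust(10) + lib.upper().ljust(10)

# Example: a user space named SAVPARMS in library QTEMP
print(repr(qualified_name("SavParms", "QTEMP")))  # 'SAVPARMS  QTEMP     '
```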
Systems Management
1. Initialize the SavObj.RcdNbr integer to 1 for the first record.
2. Calculate the pointer for the SavObjParms data structure (SavObjParmsPtr) to position it just past the SavObj.RcdNbr integer.
3. Initialize the RcdLen, KeyNbr, and DataLen fields to the record length, the key number for the OBJ/OBJTYPE parameter (1), and the length of the data, respectively.
4. Calculate the pointer for the SavObjParmList data structure (SavObjParmPtr) to position it just past the DataLen integer.
5. In this case the data is a list, so the first item of that data is a four-byte integer that specifies how many elements there are in the list. Here it's initialized to 1 (ElemNbr), as there is only one element in the list.
6. Calculate the pointer for the ObjObjTypeParm data structure shown in Figure 4 (SavObjElemPtr) to position it just past the ElemNbr integer.
7. Finally, initialize the two elements of the ObjObjTypeParm data structure to *ALL objects and *ALL object types.
[Figure 3: the RPG code that builds the OBJ/OBJTYPE parameter record; Figure 4: the ObjObjTypeParm data structure, qualified and based(SavObjElemPtr)]
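The record layout those steps build can also be sketched outside RPG. The following Python sketch is illustrative only (the article's actual code is the RPG in Figure 3; field names follow its data structures, and the byte layout is a sketch, not the API's authoritative definition). It packs one OBJ/OBJTYPE record: three 4-byte integers, then the list data.

```python
import struct

def build_obj_objtype_record() -> bytes:
    """Sketch of one QSRSAVO parameter record (big-endian, as on IBM i):
    three 4-byte integers (RcdLen, KeyNbr, DataLen) followed by the data,
    which for a list parameter is ElemNbr plus the list elements."""
    key_nbr = 1                                      # key 1 = OBJ/OBJTYPE parameter
    element = b"*ALL".ljust(10) + b"*ALL".ljust(10)  # object name + object type
    data = struct.pack(">i", 1) + element            # ElemNbr = 1, one element
    data_len = len(data)
    rcd_len = 12 + data_len                          # three 4-byte integers + data
    return struct.pack(">iii", rcd_len, key_nbr, data_len) + data

print(len(build_obj_objtype_record()))  # 36
```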
Even though the program only saves spooled files, the first parameter specifying *ALL objects and *ALL object types must be specified. The code to build the second parameter, for the list of libraries, is in Figure 5, and it's similar to the code in Figure 3, except for the following:

- The SavObj.RcdNbr integer is incremented by one for the second parameter record.
- The pointer calculation for the SavObjParms data structure (SavObjParmsPtr) uses the RcdLen value from the previous parameter record to set it to a new position just past it.
- The KeyNbr is initialized with the key number for the LIB parameter (2).
- The data portion contains a list of libraries to be saved. Because I'm only saving spooled files, I substitute the special value *SPLF.

Figure 6 shows the data structure for the LIB parameter data.
Field                      Human Readable
SavObj.RcdNbr              2
SavObjParms.RcdLen         28
SavObjParms.KeyNbr         2 (LIB parameter)
SavObjParms.DataLen        14
SavObjParmList.ElemNbr     1
LIB() parameter value      *SPLF
SavObjParms.RcdLen         28
SavObjParms.KeyNbr         3 (DEV parameter)
SavObjParms.DataLen        14
SavObjParmList.ElemNbr     1
DEV() parameter value      TAP01
Why, you might be asking, does the SavObjParms data structure need a size for both the data and the whole record? After all, the API knows the size of the RcdLen and KeyNbr fields; it would be easy enough to subtract them from the record length to get the data length, or vice versa. It's a good question, and it mostly has to do with boundary issues. Four-byte binary integers are handled more efficiently if they're all on a four-byte boundary. For example, if you've got data that is 14 bytes (say, a list of device or library names with a single element), then together with the three integers the record length comes out to 26 bytes, which would put the next record in the middle of a four-byte boundary (Figure 7). At address x'0021' in the dump you can see that the x'1A', which is the end of the integer, is in the middle of the first column. This is because it starts at address x'001E'. To maintain a four-byte boundary for better efficiency, you could increase the record length to 28 bytes while the data length would, of course, remain at 14 bytes (Figure 8). This leaves an extra two bytes unused, but that's really no big deal. Now in the dump, that same integer starts at x'0020' and ends at x'0023', and all is right with the world. To make it even more efficient I've also created an internal function called SetIntBoundary() (Figure 9). If the length isn't evenly divisible by four, then it will return an increased length that is. If it is evenly divisible, then it will return the original length. Now the length calculation in Figure 3 can be changed to include this new function (A in Figure 10).

Figure 9: SetIntBoundary() function

P SetIntBoundary  b
D SetIntBoundary  pi            10i 0
D  ElementLength                10i 0 const
D BoundarySize    c             4
 /free
   if %rem(ElementLength: BoundarySize) = *zero;
     return ElementLength;
   else;
     return (%int(ElementLength / BoundarySize) + 1) * BoundarySize;
   endif;
 /end-free
P SetIntBoundary  e
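The same rounding logic is easy to express in other languages; here is an illustrative Python equivalent of the Figure 9 RPG function (the function is a sketch, not part of the article's program):

```python
def set_int_boundary(element_length: int, boundary: int = 4) -> int:
    """Round a length up to the next multiple of `boundary`,
    mirroring the SetIntBoundary() RPG function in Figure 9."""
    if element_length % boundary == 0:
        return element_length
    return (element_length // boundary + 1) * boundary

print(set_int_boundary(26))  # 28 -- the 26-byte record from Figure 7
print(set_int_boundary(28))  # 28 -- already aligned, returned unchanged
```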
Most of the parameters added to the user space are mundane single or list parameters that are easy to figure out. SPLFDTA (Key #35) is more complex because there are multiple ways to select spooled files. Figure 11 shows the data structures used to handle this parameter, and Figure 12 shows the code. The first pieces of data are the same as the other parameters (i.e., the RcdLen, KeyNbr, and DataLen fields) in the SavObjParms data structure. At A in Figure 12 the DataLen is the sum total of the sizes of the SplfDtaParm, SplfDtaSelect, and SplfDtaAttr data structures, because that is all you need to save new spooled files. Other uses of this parameter can be quite long and complex, and you can get as granular as necessary about what you save, but you don't need to get that in-depth for this program.

Figure 11: The structures needed for the spooled file data (SPLFDTA) parameter

D SplfDtaParm     ds                  qualified based(SavObjParmPtr)
D  SplfData                     10i 0
D  SplfHdrLen                   10i 0
D  SplfOffset                   10i 0
D SplfDtaSelect   ds                  qualified based(SavObjElemPtr)
D  Length                       10i 0
D  Offset                       10i 0
D  Include                      10i 0
D  Format                       10i 0
D  SelectOffset                 10i 0
D  NewAttrOffset                10i 0
D SplfDtaAttr     ds                  qualified based(SavObjElemPtr)
D  Length                       10i 0
D  OutqName                     10a
D  OutqLib                      10a
D  SplfName                     10a
D  JobName                      10a
D  UserName                     10a
D  JobNbr                        6a
D  UserData                     10a
D  JobSysName                    8a
D  FormType                     10a
D  StrCrtDate                   13a
D  EndCrtDate                   13a

Once the SavObjParms data structure is initialized, the program then calculates a pointer to just past it so that it can start filling the SplfDtaParm data structure (B in Figure 12). The first field to be initialized is SplfData, and it can have one of three values: *ZERO (no spooled files are saved), 1 (for every output queue saved, the spooled files contained within will be saved), or 2 (only selected spooled files are saved, and additional selection criteria are required). The program example is using option 2.
The next field to be initialized is the SplfHdrLen field, which is the length of the SplfDtaParm data structure. It can have two possible values, depending on what is specified in the aforementioned SplfData field. If *ZERO or 1 is specified, then this value must be 8, which is the length that would cover only the SplfData and SplfHdrLen fields in the data structure. (I'm not sure why the developers put the SplfHdrLen field second in the data structure. It would have made more logical sense if it had
The fourth field is called Format, and it's the numeric format of the structure that provides the selection criteria for selecting (or omitting) spooled files. The allowable values are 1 and 2. Specifying 1 tells the API that I'm going to provide the spooled file ID (i.e., full job name, spooled file name, number, etc.) to select a single spooled file, which would be cumbersome for this application. In the code I specify 2 (C in Figure 12), which tells the API that I'm going to provide selection criteria. The fifth field is named SelectOffset, and it's the offset to the selection criteria. I calculate it by adding the size of the SplfDtaSelect structure to the SplfOffset field. Finally, the sixth field is called NewAttrOffset, and it's the offset to the aforementioned new attributes, which I set to *ZERO because I don't want the spooled file expiration dates to be changed after they have been saved.

The last data structure that needs to be initialized for this parameter is named SplfDtaAttr, and it's similar in concept to the selection parameters on many of the spooled file commands: Work With Spooled Files (WRKSPLF), Hold Spooled File (HLDSPLF), Release Spooled File (RLSSPLF), Delete Spooled File (DLTSPLF), etc. You can see the data structure at C in Figure 11 and the code to initialize it at D in Figure 12. The selection field names should be pretty self-explanatory. Most of the fields expect a name, generic name, or other appropriate special value. (If, for example, you specified MYOUTQ* in the OutqName field and *LIBL in the OutqLib field, then it would save all spooled files found in output queues in the library list whose names started with MYOUTQ.) The only exceptions are the fields StrCrtDate and EndCrtDate, which would be used to enter a date/time range and select spooled files based on their creation date/time. They are both 13-byte fields, and the date/time data must be in the format CYYMMDDHHMMSS (where C is the century digit: 0=19xx, 1=20xx, 2=21xx, etc.).
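For illustration (the article's program is RPG; this helper is hypothetical), building a CYYMMDDHHMMSS value from an ordinary date looks like this in Python:

```python
from datetime import datetime

def to_cyymmddhhmmss(dt: datetime) -> str:
    """Convert a datetime to the 13-byte CYYMMDDHHMMSS format, where the
    century digit C is 0 for 19xx, 1 for 20xx, 2 for 21xx."""
    century_digit = dt.year // 100 - 19
    if not 0 <= century_digit <= 9:
        raise ValueError("year out of range for the C century digit")
    return f"{century_digit}{dt:%y%m%d%H%M%S}"

print(to_cyymmddhhmmss(datetime(2011, 3, 15, 14, 30, 0)))  # 1110315143000
```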
Regardless of what would normally be stored in these fields, the special value *ALL can be specified in any of them to tell the API to disregard it when making the selection. At D in Figure 12 I specify *ALL in every field except StrCrtDate, which is initialized by a date/time field that was calculated previously. In the command and program source that you can download (from systeminetwork.com/code or peterlevy.com), the value that is inserted into StrCrtDate is built from two command parameters: REFDATE and REFTIME. If REFDATE(*SAVLIB) is specified, then the program will calculate it by retrieving the last save date/time from the QUSRSYS library using the Retrieve Object Description (QUSROBJD) API. If a date or time has been provided in the parameters, then it will use those values instead. Either way, this criterion is used by the QSRSAVO API to save all spooled files created on or after this date/time. If the last save date/time in the QUSRSYS library doesn't match the last time that all spooled files were backed up, then your program should pass the real date/time in these two command parameters. After the program is finished, the spooled files will appear on the media (or in a save file) as if they were saved from an output queue named *SPLF in library *SPLF. This, of course, is all a front, because once you take option 5 to display the spooled files, you'll see that they all came from separate output queues. Another possible source of irritation is that when the SAVCHGxxx commands are used with the SAVNEWSPLF command, all of the saved spooled files will end up at the end of the media instead of peppered throughout like they would be with the SAVxxx commands. I also tried to determine whether you could combine both saving changed objects and new spooled files using the QSRSAVO API, but alas, it can't be done. It's a little too ironic that the API can save new spooled files but not changed objects, while the commands can save changed objects but not new spooled files.
You may not realize it but, as I wrote in the introduction, your users do consider spooled files to be very important. If your shop isn't saving them, then I encourage you to start. If you're only saving them on weekends, during the full system backup, you can now save the new ones during the daily SAVCHGxxx with the QSRSAVO API. From then on, if important spooled files get accidentally deleted or purged, you'll be the hero because you've been backing them up all the time, not just on the weekends.
Peter Levy (pklevy@gmail.com) graduated with a computer science degree from Rutgers in 1982 and has been working on the System/38, AS/400, IBM i platform since 1984. He has worked for companies in printing, consumer electronics, chemicals, apparel, transportation, and computer consulting.
Chapter 2: IBM i Access for Windows: Silent Install, Selective Install
IBM i Access is distributed both on the installation media (DVD) and in the IBM i IFS. Working with the DVD install is easy; simply insert the DVD, and the setup program starts automatically. This option is good if you do not have a network connection to the install image in the IFS. The IFS install requires a network connection from the PC to the installation directory /QIBM/ProdData/Access/Windows, shown in Figure 1. Once you've opened that directory, you can run the cwblaunch.exe program or navigate to the version-specific directory and run the setup.exe program, as in Figure 2. When you run cwblaunch, the program identifies the correct version of the product to install on your PC or server. Figure 3 shows the directories and the available versions of IBM i Access for Windows.
Figure 2: Each install image directory contains a setup.exe program and a setup.ini file
Description
- Used for any of the supported 32-bit versions of Windows.
- Used for any of the supported 64-bit versions of Windows on a PC that uses AMD 64-bit or Intel Xeon 64-bit processors.
- Used for 64-bit versions of Windows on a PC that uses Intel Itanium 64-bit processors (typically only server systems). This directory is not provided with IBM i Access V7R1.
Regardless of how you install IBM i Access for Windows (DVD or IFS, cwblaunch or setup), you can pass parameters to the installation program. Figure 4 shows a summary of the command line parameters for the programs. The two general categories of parameters are:

- parameters that control the user interface level displayed during the installation.
- parameters that specify what is installed and the installation options.

Figure 4: Command line parameters that can be passed to cwblaunch or setup.exe

Simply starting cwblaunch or setup.exe without specifying any parameters displays the entire sequence of installation panels and dialogs. By default, the program performs a complete install of all the components of IBM i Access for Windows.

Parameters that control the user interface. Figure 5 shows the parameters that you can pass to either cwblaunch or setup and the effect of the parameters. Figure 6 is the Choose Setup Language dialog that you can suppress with the /S parameter. Note that if you use the /v/qn (no user interface) parameter, users are not prompted to reboot their PCs; however, if any open Windows programs have unsaved work, users are prompted to close the open programs so that the reboot can occur.

Parameters that specify what is installed. In addition to controlling how the installation program interacts with you, you can specify which features of IBM i Access for Windows to install. In the IBM documentation, the parameters are called public properties (this is based on Windows Installer terminology). Figure 7 shows the CWBINSTALLTYPE public property and the three values that you can specify for the property. The property value corresponds to the Setup Type dialog in Figure 8. To combine the user interface parameters with the public property, you can specify multiple /v parameters like this:
cwblaunch /S /v"/qn CWBINSTALLTYPE=PC5250User"
Parameter   Description
(none)      Default; displays all installation panels and dialogs. You can change any of the installation options, select features to install, and cancel the install.
/S          Suppresses the Choose Setup Language dialog (see Figure 6). The language to install is selected based upon the Windows default language selection.
/v/qr       Reduced user interface. Displays a progress bar during the install; prompts for reboot at end of install.
/v/qb       Basic user interface. Displays a progress bar during the install; prompts for reboot at end of install.
/v/qn       No user interface displayed during install. Does not prompt for reboot at end of install; reboot occurs automatically. If there are any open programs, the user is prompted to end the open programs.

Notes: You can include the /S parameter with any of the other parameters. Example: cwblaunch /S /v/qn performs a truly silent install. Do not enter a space between the /v characters and the characters that follow. There is no practical difference between the /v/qr and /v/qb options.
Figure 6: Choose Setup Language dialog is displayed first in the install process
Figure 7: Parameters that control the features that are installed
Parameter                       Description
(none)                          Default; installs all features of IBM i Access.
/vCWBINSTALLTYPE=Complete       Installs all features of IBM i Access (same as the default option).
/vCWBINSTALLTYPE=Custom         Indicates that the Custom Setup dialog (see Figure 9) is to be displayed, allowing selection of features to install.
/vCWBINSTALLTYPE=PC5250User     Installs the PC5250 Display and Print Emulator.
Note that the parameters following the /v are enclosed within double-quotation characters.
Notes The CWBINSTALLTYPE values correspond to the options on the Setup Type dialog shown in Figure 8. The property name CWBINSTALLTYPE is case-sensitive.
Custom Installations
Selecting the installation type option Custom in Figure 7 (or choosing the Custom option from the Setup Type dialog in Figure 8) displays the Custom Setup dialog in Figure 9. But what happens if you also use the silent install feature, for example, by running the following command?
cwblaunch /S /v"/qn CWBINSTALLTYPE=Custom"
The answer is that the silent install proceeds, and the default features specified in an interactive custom setup are selected. Figure 10 shows the features included with each of the three standard setup types. Sometimes, you will want more control over the features that are installed. Using the ADDLOCAL public property and the identifiers in Figure 10, you can specify a comma-delimited list of features to be installed. For example, to install the 5250 emulator, SSL, data transfer, and ODBC, use a command such as this:
cwblaunch /S /v"/qn ADDLOCAL=emu,ssl,dt,odbc"
If you use ADDLOCAL, do not use the CWBINSTALLTYPE property. The Required Programs feature (identifier req) will always be installed automatically, so you need not specify it in the ADDLOCAL list.
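If you generate install commands from a script, the ADDLOCAL list is just a comma-delimited join of the Figure 10 identifiers. A hypothetical Python sketch (the function name is an assumption, and the quoting convention follows the double-quote note in this chapter):

```python
def build_install_command(features: list, silent: bool = True) -> str:
    """Assemble a cwblaunch command line with an ADDLOCAL feature list.
    'req' (Required Programs) is always installed automatically, so it is
    filtered out of the list."""
    wanted = [f for f in features if f != "req"]
    ui = '/S /v"/qn ' if silent else '/v"'
    return f'cwblaunch {ui}ADDLOCAL={",".join(wanted)}"'

print(build_install_command(["emu", "ssl", "dt", "odbc", "req"]))
# cwblaunch /S /v"/qn ADDLOCAL=emu,ssl,dt,odbc"
```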
The previous examples show how to pass parameters to the cwblaunch program. To run it, enter the command directly in the Windows Run program or in a Command Prompt window, or create a batch file that contains the program name and the parameters.

Figure 9: Custom Setup dialog used in an interactive install to select features to install

Another technique is to embed the parameters in the setup.ini file associated with the install image. For example, Figure 2 shows the location of the setup.ini file used with the 32-bit install image. There are corresponding setup.ini files for the 64-bit install images, as well. Before making any modifications to a setup.ini file, you may want to make a backup copy of the file. Figure 11 shows an excerpt of the default setup.ini file for the 32-bit install image. The following are keys in the [Startup] section that you can modify:

- CmdLine: Modify this to specify the user interface level and the features to install.
- EnableLangDlg: Modify this to suppress the language selection dialog (see Figure 6).

Figure 12 shows the modifications to setup.ini to perform the following during the installation:

- suppress user interface dialogs and panels (/qn parameter added to the CmdLine value).
- install the PC5250 emulator, SSL, data transfer, and ODBC (the ADDLOCAL property added to the CmdLine value).
- suppress the language selection dialog (EnableLangDlg value set to N).

After modifying setup.ini, save it to the installation directory. (Do not change the name of the file; the installation program looks specifically for a file named setup.ini.) You can now run the cwblaunch or setup programs without specifying any parameters.
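As a sketch only (the key names come from this chapter, but the exact default values are not reproduced here; leave any other keys in your setup.ini intact), the edited [Startup] section might look like this:

```ini
[Startup]
; /qn suppresses all dialogs; ADDLOCAL lists the features to install
CmdLine=/qn ADDLOCAL=emu,ssl,dt,odbc
; N suppresses the Choose Setup Language dialog
EnableLangDlg=N
```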
Figure 10: Feature identifiers by setup type (Complete, Custom, PC5250User)

Optional Features: tbj, emu, ssl, oc, viewer, req
System i Navigator: irc, inav, inavbo, inavwm, inavcfg, inavnet, inavisa, inavsec, inavug, inavdb, inavfs, inavback, inavcmd, inavpp, inavmon, inavlog, inavafp, inavad, dir
Data Access: dt, dtexcel, odbc, oledb, dotnet, lotus123
Printer Drivers: afp, scs
Programmers Toolkit: Headers, Libraries and Documentation (hld); Java Programmers Tools (jpt)

Note: The Custom setup type assumes that no additional features are selected on the Custom Setup dialog.
The installation process provides many more options and features that you may want to investigate. One helpful tool for doing so is IBM's PDF manual IBM i Access for Windows: Installation and setup (publib.boulder.ibm.com/infocenter/iseries/v6r1m0/topic/rzaij/rzaij.pdf). Using the examples I've shown in this article and the additional information from the IBM manual, you can create tailored installation images for your IBM i Access 6.1 or 7.1 users.
Craig Pelkie (craig@web400.com) is a technical editor for System iNEWS and has worked as a programmer with IBM midrange computers for many years. He has also written and lectured extensively on IBM i technologies, including client/server programming, Client Access, Java, WebSphere, .NET applications for IBM i, and web development.
Chapter 3:
If you are already using Systems Director 6.2, then updating to version 6.2.1 is quite simple. Just select Update IBM Systems Director from the top of the Systems Director welcome page. This action will connect you to ibm.com and show all the available Systems Director updates. Click Download and Install, and you'll be updated in no time. If you are new to Systems Director, then you will want to use the IBM Systems Director Pre-Installation Utility. This tool analyzes the physical or virtual server onto which you selected to install the Systems Director Server, then shows the results (Figure 1) to ensure that all the requirements are met. This tool works for every OS on which you can install the server: AIX, Linux, and Windows. Although Systems Director Server does not install on IBM i, Systems Director can manage IBM i systems very well. The following are all the elements analyzed to ensure the Systems Director Server installation will go smoothly:

- Runtime authentication
- OS compatibility
- Host architecture
- Processors
- Disk space available
- Memory available
- Software required
- Port availability
- Promotion validity
- Intelligent Platform Management Interface (IPMI) status (Linux only)
- Security-Enhanced Linux (SELinux) status (Linux only)
Although Service and Support Manager is not included in the Systems Director base installation, there is no additional charge to use it, so I highly recommend downloading and installing it. Service and Support Manager analyzes events received from your managed systems; then, if they are deemed serviceable, it automatically collects data and creates a service request at IBM. In Systems Director 6.2.1, Service and Support Manager also collects performance management data for Power Systems with an AIX OS. Once collected, the data is securely transmitted to IBM support. In addition, with Systems Director 6.2.1 you can now manually open a service request through Service and Support Manager. If you determine that an event is serviceable but has not been processed, then you can collect service data and have it sent to IBM along with a service request. A common request from customers is that although they want to monitor many kinds of systems, they also want different kinds of data collected on specific systems. Systems Director 6.2.1 makes this possible. In the properties of each system, the Service and Support tab is enhanced to show selections for problem reporting, inventory reporting, and performance management data reporting. An important change in Systems Director 6.2.1 is that as soon as the plug-in is installed, serviceable problems are monitored, and if problems are found, data is collected. You no longer have to activate specific systems. However, in order to have the data sent to IBM support, you need to open the Service and Support Getting Started wizard.
Increasing numbers of customers are interested in using Systems Director through their own custom scripts and applications. There are two ways to do this. The first is through the command-line interface. A newer method is through the IBM Systems Director Software Development Kit (SDK). Although you must register to use the SDK, there is no fee. To register, go to ibm.com/vrm/4api1; to learn more about the SDK, go to publib.boulder.ibm.com/infocenter/director/sdk/index.jsp; for access to the SDK forums, go to ibm.com/developerworks/forums/forum.jspa?forumID=1852. The SDK lets you use web-based APIs to pull data from Systems Director or push down to run Systems Director tasks. You can also register your own applications in the Systems Director UI through the External Application Launch. This lets
Active Energy Manager adds support for the latest systems, as well as additional UPS and PDU power management devices. The hardware list can be found in the Information Center. Active Energy Manager also adds new hardware monitoring features, such as the ability to monitor power usage of attached I/O drawers. In addition, you can monitor aggregated power values across the whole server. One of the top customer requests has also been added: the ability to manage power usage per partition. This allows processors allocated to some partitions to be dynamically throttled while processors allocated to others run at full capacity. Other attractive enhancements that I'll cover in more detail in future columns include:

- Cost calculator: Provides additional calculations to help determine how much money you are actually saving by using power savings and power capping. It also now provides estimates of future savings with continued use of power savings mode.
- New performance monitor views.
VMControl Enhancements
Almost all the enhancements in VMControl 2.3.1 are based on direct customer requests, which are the most valuable improvements for real-world customer needs. I'll go into more detail in a future column, but the following is a summary:

Deploying images in a virtual appliance. VMControl Standard Edition is dedicated to deploying new workloads onto target systems. VMControl 2.3.1 adds new and simplified deployment methods. Now with VMControl 2.3.1 you can deploy AIX without needing to configure Network Installation Manager (NIM) by using a storage-based image repository. Simply select an existing Virtual I/O Server (VIOS) as your repository, and you can capture AIX or Linux virtual servers (partitions), then deploy them later. The only requirement is that your VIOS needs to be configured to use SAN storage. The VIOS repository also adds fast copy capabilities, but I'll discuss those in a future column. You can also use an existing partition with a SAN Volume Controller to achieve similar results. For those using VMware and Hyper-V, VMControl 2.3.1 has started integrating Tivoli Provisioning Manager for Images. z/VM users will see enhancements, as well.

System pools. VMControl Enterprise Edition has additional capabilities for adding pre-existing virtual servers into system pools. This can be done for multiple virtual servers by grouping them into a workload. In addition, you can now choose to optimize a server system pool manually or on a repeating basis. This ensures that all the servers in the pool are being utilized equally.
Storage Enhancements
The biggest change in storage is the new plug-in called Storage Control. If that sounds familiar, it is because Storage Control was embedded in VMControl. However, customers asked for many storage enhancements that did not directly relate to VMControl, so Storage Control was enhanced and can be installed on its own. Storage Control uses Tivoli Storage Productivity Center technology to discover and manage midrange storage, including the new IBM Storwize V7000. With this integration, you now get a common management interface for storage, along with your server and network management for midrange and most high-end storage systems, as well. One look at the inventory for an IBM Storwize V7000 (Figure 2) shows that a broad set of data is collected and can be used to monitor and manage storage that your servers are using.
Although Systems Director had some performance issues in early releases, each new release is focused on improving that. In Systems Director 6.2.1, you can customize how many job instances to save when you schedule a job. For example, if you schedule to collect inventory every week, after a year you will have 52 job instances stored in Systems Director. To improve performance and reduce the database storage needs, you could reduce that number to four so that you have the previous month's worth for troubleshooting. The Performance Tuning and Scaling Guide for IBM Systems Director 6.2 is available at www-03.ibm.com/systems/software/director/downloads/mgmtservers.html. This guide is kept up to date and is a good resource if you have questions or need tips on performance and scalability. One of the security enhancements in Systems Director 6.2.1 is the requirement for configuring a 1:1 credential mapping for single sign-on (SSO) when launching the Hardware Management Console (HMC). Otherwise, you are prompted for a password.
Other Enhancements
All the new functionality in Systems Director 6.2.1 comes with new command-line interfaces. A command that was long overdue is the new Revoke Access command. If you need to write a script to revoke access to a system, you can use the following command:
system/revokeaccesssys
Finally, network enhancements have been made for both the base Network Management function and the Standard Edition's Network Control plug-in. In the base network management, enhancements add support for stacked switches as well as other BladeCenter configurations. In addition, third-party switches can be configured by downloading partner plug-ins. This was already supported for BladeCenter switches but is now supported for standalone switches. For those who have Network Control, version 1.2.1 includes new features as well, including support for BladeCenter Power Blade virtual switches and virtual network adapters without the HMC.
Systems Management
Systems Director 6.2.1 includes numerous updates and enhancements. For more information about these improvements, or to provide feedback, see the sidebar IBM Systems Director Website Links.
Greg Hintermeister (gregh@us.ibm.com) works at IBM as a user experience designer and is an IBM master inventor. He has extensive experience designing user interaction for IBM Systems Director, IBM Virtualization Manager, System i Navigator, mobile applications, and numerous web applications. Greg is a regular speaker at user groups and technical conferences.
Chapter 4:
Personalize Startup
The first thing I suggest is to define your startup pages, or those tasks you commonly use and want to see immediately after you log on. My startup pages include Welcome, Health Summary, Monitors, and Virtual Servers and Hosts, as Figure 1 shows. These tabs show up when I log on, and I can minimize the navigation area so that I have more space to work in and can get to what I need more quickly. I chose these four tabs for the following reasons:

Welcome. I really like the Welcome page's Manage tab because it shows me the categories of tasks, or activities, I can work in. Although in many cases I like to get directly to a particular system to work with it, in other cases I prefer to, for example, click on Update Manager to be guided through managing updates across multiple systems.

Health Summary. Health Summary is where I can see all the resources I'm interested in. I have a whole section on this topic later in the article, but in one screen I can see a dashboard of important metrics, my favorite systems and groups, any system with problems, and other custom groups added as thumbnails.
drop-down menu. You need to do this in the order in which you want the tabs to appear. You can also find the tasks in the navigation area on the left. However, the Welcome page doesn't show up as a tab, so my instructions are the only way to make the Welcome task a tab. To determine which tab should be the default to display after logon, select My Startup Pages from the navigation area, as Figure 3 shows, then select the default. My suggestion is to make the Health Summary tab your default, which will give you instant access to your personalized resource list; in addition, the dashboard graphs will automatically start collecting data.
Personalize Resources
After you personalize what you see when you log on, you'll want to personalize which resources you see. To do this, I suggest opening the Navigate Resources task from the navigation area and clicking All Systems. Browse the list; when you find a favorite, right-click that system and select Add To, Favorites. This will instantly add your system to the Health Summary tab's Favorites section. You can also add groups to your favorites list. As an example, select Find a Resource and enter HMC and Managed Power Systems. Select the group name and add it to your favorites list. An added feature is that if a group is in the favorites list, the Problems and Compliance columns aggregate the problems of any member in that group. Next, create a dynamic group. A dynamic group analyzes the criteria you specify. When it finds a hit in the database, it adds that system
After the group is created, you can edit the description of any system. The system will then instantly appear in your new group. Finally, right-click the group itself and select Add To, Health Summary. This will add this new group as a thumbnail in the Health Summary task, as Figure 5 shows. You can also personalize the list of key metrics that appear in the dashboard. I use this space to monitor my management server's CPU utilization. Notice in Figure 5 how my AIX partition is using shared processors to add resources when necessary and take them away when no longer needed. To add metrics of interest, click the Monitors tab, select your management server OS, then right-click CPU Utilization %. Select the Add to Dashboard option.
One last thing about the Health Summary: You can personalize how many rows appear in the embedded tables. Open the navigation area and select Navigation Preferences in the Settings category.
Now that you've personalized the systems and groups you care about, let's prune the tasks you don't care about. To do this, you need to create a user role and assign it to the user ID you use to sign in to Systems Director. In the navigation area, select Roles in the Security category, then click the Create button. Once you're in the wizard, give the role a name such as Greg Tasks. On the Permissions page, you can select all tasks, then remove the tasks you don't want to see, as Figure 6 shows. I selected many categories in this example, but the categories not selected (still in the Available list) won't show up in the UI after I assign this role. Another idea to consider is to create different user profiles for different tasks. For example, if you want to focus on managing updates, you could create a role that has permission to view Release Management, General, Inventory, and System Status and Health. This will make the UI much more streamlined for your update tasks. After a role is created, go to your Users list and assign the role to a user. After the role is assigned, simply sign off and then sign on again. When you sign on, you'll see the subset list of tasks in the navigation area and in your context menu.
Customization
With just a bit of work on your part, you can personalize how Systems Director looks. You'll get a lot more out of Systems Director if you customize it to suit your needs. For more information about using Systems Director, see the sidebar IBM Systems Director Website Links.
Greg Hintermeister (gregh@us.ibm.com) works at IBM as a user experience designer and is an IBM master inventor. He has extensive experience designing user interaction for IBM Systems Director, IBM Virtualization Manager, System i Navigator, mobile applications, and numerous web applications. Greg is a regular speaker at user groups and technical conferences.
Chapter 5:
In this first article I want to take some time to go over capabilities of Power Systems and IBM i that make the implementation of network appliance-type solutions attractive as well as practical. Let's start
A Linux installation on POWER using IBM i for Virtual I/O requires a number of steps, including:

- Logical partition creation
- Virtual I/O definition
- Virtual network support
- Linux installation
- Installation of the Service and Productivity Tools

I am not going to attempt to go over the steps needed to create the logical partition, define the Virtual I/O, or set up support for virtual networks; however, I would like to discuss how to simplify the installation of Linux itself as well as the Service and Productivity Tools. The Service and Productivity Tools (www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html) provide a number of additional functions and capabilities specific to the POWER architecture, including the ability to respond to a power-off request with a clean shutdown of the operating system, as well as support for Dynamic LPAR functions. One can certainly perform the Linux installation and then, after the installation is complete, download and install the utilities; however, an easier way is to take advantage of the IBM Installation Toolkit for Linux (www14.software.ibm.com/webapp/set2/sas/f/lopdiags/installtools/home.html). In a nutshell, the PowerPack CD front-ends the Linux installation with a number of questions related to the desired configuration of the Linux instance being built. From the answers, a response file is built that is used to perform a silent (no user interaction) installation. The PowerPack CD also takes care of installing the necessary Service and Productivity Tools. One additional benefit of the PowerPack CD is that you have the same look and feel for the Linux installation regardless of whether the RedHat or Novell/SuSE Linux distribution is being installed.
By having the operating system on its own disk, we can have an environment in which we install the operating system a single time and then copy it as we wish to implement additional open-source solutions. There are a couple of keys to making this work. First, the installer generates the network configuration based on the MAC address of the network adapter. If you're using virtual Ethernet, the
MAC address will be different for any replicated images that are associated with a different LPAR. This will result in a new device handle being created for the Ethernet adapter (/dev/eth#), and the original device handle will remain on the system even though it is no longer valid. This can be corrected by renaming the network configuration file as well as removing the name association. First, let's change the name of the network configuration file. Currently, the network configuration file's name will include the MAC address:
cd /etc/sysconfig/network
ls ifcfg-eth*
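On SuSE distributions the MAC-based file name follows the convention ifcfg-eth-id-&lt;MAC address&gt;; the exact file name and MAC address below are hypothetical, so substitute whatever the listing above actually shows. Renaming it back to the generic name might look like this:

```
cd /etc/sysconfig/network
mv ifcfg-eth-id-00:09:6b:6b:02:e1 ifcfg-eth0
```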
Also, edit the ifcfg-eth0 file and comment out the UNIQUE entry (put a # at the start of the line). Now let's remove the mapping of the MAC address to the device handle:
rm /etc/udev/rules.d/30-net_persistent_names.rules
This file will be re-generated with the correct MAC address of the Ethernet adapter the next time the system boots. In addition to the network device mapping, the boot loader's reference to the bootable disk needs to be changed. The installer configures the boot loader to point to a specific SCSI device/address to boot from; the following steps will change this to a generic name:

- Edit the /etc/lilo.conf file
- Ensure that the boot line indicates boot = /dev/sda1
- Ensure that the root line indicates root = /dev/sda3
- Re-generate the yaboot.conf configuration file with the lilo command

That's it! The disk with the Linux OS is now unique. The partition should be shut down and the storage space saved for later use.
Keep in mind that the above steps are for Linux implementations that are using I/O hosted by IBM i. If your storage is hosted by VIOS, you can accomplish the same thing, a replicable Linux image, by using the capture and deploy features of VMControl in IBM Systems Director (I will cover VMControl capture and deploy in a future article).
Storage Management
One of the keys to implementing these workloads is to make the resource allocation as flexible as possible. We already mentioned configuring the partition for memory and processor flexibility; now let's spend some time looking at how we can make the storage for the workloads flexible as well. With Linux as the operating system and IBM i providing the storage virtualization, we have the ability to leverage Logical Volume Manager (LVM) along with Network Server Storage Spaces (NWSSTG) to implement a storage scheme for the open-source solution that can grow over time. I recommend that the installation of Linux and the open-source application being implemented be stored on a disk that is separate from the storage that will be used for the data. As an example, when we implement a file-serving solution, we will have one Network Server Storage Space that Linux and SAMBA (the open-source file server) will be installed on and a second storage space that the file resources being shared
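A storage space like this is created on the IBM i side with the Create Network Server Storage Space (CRTNWSSTG) command. For a 10GB disk named DATA01, the command would look something like the following; note that NWSSIZE is specified in megabytes and that FORMAT(*OPEN) leaves the space unformatted for Linux to use (verify the parameters against your release):

```
CRTNWSSTG NWSSTG(DATA01) NWSSIZE(10240) FORMAT(*OPEN)
```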
The above command creates a new virtual disk called DATA01 that is 10GB in size. Now the storage space needs to be linked to the Network Server being used to provide I/O to the Linux operating system:
ADDNWSSTGL NWSSTG(DATA01) NWSD(LINUX)
The above command links the virtual disk DATA01 to the Network Server LINUX. You can think of the link process as inserting a device onto a SCSI bus. The link is done dynamically, so the disk is immediately available to the Linux operating system. Now, in Linux, we need to re-scan the SCSI bus in order to discover the new disk:
echo - - - > /sys/class/scsi_host/host0/scan
While the above command may look a bit cryptic, it is actually quite simple. The three dashes indicate the starting SCSI address, ending SCSI address, and device type that you want to scan for. In this case we are scanning from the starting address to the ending address for all device types. The /sys/class/scsi_host/host0/scan path is simply a handle in the operating system that represents the scan command for the first SCSI bus (host0). Now we need to put a partition on the disk. Since we are going to use the entire disk for LVM, we will simply create a single partition on the disk. We will use the fdisk command to create the partition:
fdisk /dev/sdb
The above statement starts fdisk on the second disk (sdb) on the system. This assumes that there was only a single disk on the system prior to creation and linking of the new virtual disk. To make the disk usable in LVM, it first needs to be initialized; this is done with the pvcreate command:
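For reference, the fdisk dialog itself is interactive; creating a single full-size partition and tagging it for LVM typically takes a keystroke sequence like this (8e is the Linux LVM partition type):

```
n        # new partition
p        # primary partition
1        # partition number 1
<Enter>  # accept the default first cylinder
<Enter>  # accept the default last cylinder (use the whole disk)
t        # change the partition type
8e       # Linux LVM
w        # write the partition table and exit
```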
Administration Tools
While the series of articles on implementing open-source solutions has an emphasis on ease of administration, the reality is that you will still need to perform some level of administration in three key areas: the operating system, the environment, and the application.
Administration of both the operating system and the environment can be handled via IBM Systems Director. The intent of IBM Systems Director is to provide a single pane of glass for administration of a company's IT environment. When it comes to Linux on Power, there are a number of functions that can be performed with IBM Systems Director. As an example, one can take advantage of the Update Manager in IBM Systems Director to build compliance policies that check software levels of the installed software against a known list of updates. The updates themselves come from the distributor-provided update process, and the Linux distribution must be registered with the distributor's update server; however, by using Systems Director's update function you have a unified method for checking for and applying updates across all of your Power operating systems. Another useful function from Systems Director for the Linux environment is the ability to monitor the health and status of various aspects of the server. As an example, a monitor can be established for file system usage that sends an alert when file system usage reaches a certain point, as shown in Figure 1. In the above scenario, the I/O for the Linux partition is being virtualized from the IBM i partition. An event monitor could be established in Systems Director that triggers when the Linux file system reaches a defined threshold; at that point an event is raised in Systems Director. The event trigger could cause a script to be started in the Linux partition. The script in the Linux partition could then make an ssh call to the IBM i partition to create a new virtual disk and link it to the Network Server. Finally, the script could take the additional virtualized storage, add it to the logical volume, and increase the size of the file system.
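As a rough sketch of that automation chain: the host name, user name, and the storage-space, server, and device names below are all hypothetical, and the IBM i side assumes an SSH server with the PASE `system` utility available for running CL commands:

```
#!/bin/sh
# Hypothetical script launched by a Systems Director event action when
# file system usage crosses the defined threshold.

# 1. Ask the hosting IBM i partition to create and link a new 10GB virtual disk.
ssh admin@ibmi "system \"CRTNWSSTG NWSSTG(DATA02) NWSSIZE(10240) FORMAT(*OPEN)\""
ssh admin@ibmi "system \"ADDNWSSTGL NWSSTG(DATA02) NWSD(LINUX)\""

# 2. Re-scan the SCSI bus so Linux sees the new disk.
echo "- - -" > /sys/class/scsi_host/host0/scan

# 3. Fold the disk into the volume group and grow the logical volume.
pvcreate /dev/sdc          # assumes the new disk surfaces as sdc
vgextend datavg /dev/sdc
lvextend -L+10G /dev/datavg/data

# 4. Resize the file system using the steps shown in the Storage Management section.
```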
Talk about seamless integration and autonomics: with IBM Systems Director and a bit of scripting, it's possible to make the Linux server self-healing for storage (and other) related issues. Another cool thing that IBM Systems Director brings to the Linux on Power environment is the set of functions provided by the VMControl plug-in, which gets us into administration of the environment. The Express edition of VMControl provides the ability to create and modify the logical partition that the Linux instance will run in. Where it starts to get interesting is in the Standard Edition, which provides the ability to capture and deploy Linux instances. This greatly enhances the ability for IT environments to implement Linux-based network appliances. Again, if we have built a Linux instance as a base operating system installation (without the installation/configuration of a solution), then we can use VMControl to capture that instance; when we are ready to deploy a solution (like file serving), we can use VMControl to deploy the Linux instance. The deployment function creates the logical partition, restores a new Linux instance (based on the captured image), configures networking in Linux, and starts the new server, all at the click of a button! The captured image doesn't need to be just the operating system; it can in fact be the operating system and any software applications you wish to have installed. As an example, if you want a captured file-serving appliance, you could establish a Linux partition with the SAMBA file server installed and configured and then capture that image as a deployable file server appliance. VMControl has the ability to capture the Linux image either to a Linux image repository or to a VIOS server. A future article will delve more deeply into how to use VMControl to capture and deploy Linux-based network appliances. There are also a number of tools and utilities that can be used for application administration.
Each distributor provides its own set of tools. As an example, Novell/SuSE provides yast (Yet Another Setup Tool) with its SuSE Linux Enterprise Server (SLES) distributions, and RedHat provides a number of separate utilities whose names are all prefixed with the characters system-config-. Additionally, many applications provide their own administration tools; as an example, the SAMBA file server (which we will cover in the next article in the series) provides a web-based tool called SWAT (SAMBA Web Administration Tool) for working with the overall configuration of the file server as well as configuration of the file shares. A free web-based tool that brings together a lot of the operating system management as well as application management is WebMin (webmin.com). The idea behind WebMin is to remove the need to edit configuration files directly (which is exactly what we want to stay away from) and to manage the system from a console or remotely. I will be highlighting WebMin functions throughout the Implementing Open-Source Applications series to show how it can be used to simplify the management and configuration of the specific application being discussed and to provide a unified management tool for each Linux-based application you decide to implement within your environment.
pvcreate /dev/sdb
With the above command, LVM will now recognize the disk as a Physical Volume (PV). Now the Volume Group itself is created with the vgcreate command:
vgcreate datavg /dev/sdb
The above command creates a volume group called datavg using the physical volume on the disk (sdb). We are now ready to create the logical volume itself. In this case we are going to use the entire space in the volume group so we need to find out exactly what space is available:
vgdisplay datavg
The above command will display information about volume group datavg, including the free space. Finally, let's create the logical volume:
lvcreate -L10G -ndata datavg
The above command creates a logical volume called data in the datavg volume group. The size of the logical volume is 10GB. To make the logical volume available to Linux, it needs to be formatted with a file system and mounted:
mke2fs -j /dev/datavg/data
mkdir /mnt/data
mount /dev/datavg/data /mnt/data
Notice the device path: the volume group is the second element in the path, and the logical volume is the third. Now that we have the volume group structure in place, we can take advantage of it to grow the resulting file system dynamically. First, a new virtual disk will need to be created and linked to the network server in the hosting IBM i partition. Once the virtual disk has been linked, the SCSI bus needs to be re-scanned in Linux using the same command I showed earlier. In Linux, the new disk will need to be initialized using the pvcreate command shown earlier, this time replacing the disk identifier with the new disk name. As an example, if this is the third disk in the system, the path would be /dev/sdc. To add the disk to the volume group, the vgextend command is used:
vgextend datavg /dev/sdc
The above command adds the physical volume created on /dev/sdc to the datavg Volume Group. Now the newly created free space in the Volume Group can be added to the Logical Volume:
lvextend -L+10G /dev/datavg/data
The above command adds an additional 10GB to the data volume in the datavg volume group. This assumes that the virtual disk created was 10GB in size. Finally, to make the additional space available to Linux and the application using it, the file system needs to be resized:
umount /mnt/data
e2fsck -f /dev/datavg/data
resize2fs /dev/datavg/data
mount /dev/datavg/data /mnt/data
In order, the above commands do the following: unmount the file system (so any application that makes use of the file system should be stopped prior to the resize); check the file system for errors; resize the file system to use all available disk space; and finally remount the file system.
In addition to the commands used above, you could also set up and maintain LVM through a number of GUI and web-based administration tools.
Management
There are several good tools for day-to-day management of the Linux environment. Two that I recommend are IBM Systems Director and WebMin. IBM Systems Director (www-03.ibm.com/systems/software/director/downloads/index.html) can be used to administer Linux as well as the logical partition Linux is running in. WebMin is a free open-source utility that provides web-based management of the Linux operating system as well as numerous open-source applications. For an example of using WebMin and LVM, see the sidebar in the online version of this article at SystemiNetwork.com.
In this article I walked you through creating a Linux environment that can be replicated, as well as leveraging capabilities of Linux and IBM i to provide a dynamic storage environment. I know some of the commands may have seemed a bit daunting; however, once you've done them once or twice, they are fairly straightforward. This lays the groundwork for implementing open-source solutions without the necessity of becoming a Linux guru.
Erwin Earley (opensolutions@askerwin.com) is a managing consultant at IBM who has worked with the Rochester, Minnesota, development lab since 1996. Erwin currently heads up the Open Community Center of Competency in the IBM i Technology Center. He has worked in the IT industry since 1980 and has experience with several Unix variants as well as Linux and IBM i.
Chapter 6:
Some Background
The physical resources that contribute to the performance of any platform are CPU, memory, disk arms, and network bandwidth, although the latter is external and out of scope for this discussion. Server performance can only be as good as its weakest processing resource, so it's important to measure and tune them all effectively. Some non-physical resource factors that can negatively contribute to
A lot of shops without adequate technical expertise tend to assume that poor performance is an indication a CPU upgrade is needed and don't re-evaluate their I/O, which is the most commonly untuned resource. There can be an impulsive tendency to invest capital in unnecessary processing resources, which may or may not help, as opposed to effectively measuring and tuning all existing resources. When tuning a server, you need to keep in mind that these various resources are equally important and that relieving one resource bottleneck, like memory faulting or disk arm utilization, can create a bottleneck in another resource, like CPU, which had previously been underutilized or running efficiently. Imagine you're grinding wheat to produce flour at a mill driven by a water wheel. The water wheel is either underutilized or functioning within spec but needs to be spun faster to increase flour production. The spin of the wheel depends on the amount of water passing through it, but a dam upstream is restricting the flow of water. Until that dam is cleared, it wouldn't make sense to upgrade the water wheel to a larger size. If you consider the water wheel as a CPU unable to be driven at capacity, the water as your workload, and the dam as an I/O bottleneck, you can follow the analogy and see that it also doesn't make sense to upgrade to a larger CPU until the associated I/O bottlenecks are remediated. Once you break I/O bottlenecks, such as memory faulting and excessive or unbalanced disk arm utilization, the work will flow faster and drive the CPU. Until that happens, you can't accurately measure the CPU to determine whether it also needs an upgrade. In other words, you wouldn't want to upgrade to a POWER7 water wheel until you relieve the I/O dam upstream.
System i's Performance Adjuster can help you manage this situation. Performance Adjuster is enabled by setting system value QPFRADJ and is dependent on the thresholds, as shown in Figure 1,
within the Work with Shared Pools screen, as shown in Figure 2. Performance Adjuster constantly measures these shared pool thresholds and dynamically reallocates memory resources to relieve faulting or adjusts activity levels to stabilize transitions. When the faulting and/or transition thresholds are reached, Performance Adjuster reassigns memory resources based on the minimum and maximum ranges defined for each shared pool. Two issues limit Performance Adjuster's effectiveness out of the box: most work executes by default in the *BASE pool, and Performance Adjuster ranges default to very low minimums and high maximums, which are too open-ended and can allow an overreaction that does more harm than good. Performance Adjuster is dependent on shared pools to be most effective. Allowing unrestricted memory ranges is like arbitrarily opening and closing the dam in the analogy above, which makes resource utilization unpredictable. Open-ended adjustment ranges can cause Performance Adjuster to overreact to temporary events, such as an ad hoc interactive query. Performance Adjuster, for example, could react to an interactive event, reassigning memory to shared pool *INTER even after the ad hoc event ends, only to react the other way to put things back as they were. This situation can be exacerbated further if the upper range (Max %) for *INTER is too high, causing the other pools to go too low. These transitions can create unpredictable results and make performance difficult to measure. Establishing accurate minimum and maximum ranges enables the server to gracefully transition from an interactive-intensive workload during the day to a more batch-intensive workload after business hours and over the weekend.
Performance Adjuster is also tasked with managing activity levels, which is the only thing it performs efficiently out of the box. Back in the day, before Performance Adjuster was available and manual tuning was required, an inadequate activity level would cause Wait to Ineligible (Wait-Inel) and Active to Ineligible (Act-Inel) transitions on the Work with System Status (WRKSYSSTS) screen, as shown in Figure 3, which resulted in serious performance problems caused by jobs unable to get access to the CPU. Although it's extremely important to maintain activity levels in proportion to the number of active jobs and/or threads in a memory pool, Performance Adjuster adds very little additional value out of the box because, by default, the server ships with all work running out of the *BASE memory pool,
So to effectively tune your server, even before enabling Performance Adjuster, you must rationalize the different types of work running on your server and route them into memory pools sharing similar characteristics. The three main types of work are interactive, batch, and what I call asynchronous: batch work that doesn't necessarily start and stop, but rather remains active and waits for work to come in, which it processes, then waits some more. Asynchronous work sometimes executes at different priorities than batch work. One good example of asynchronous work is third-party Independent Software Vendor (ISV) subsystems, as shown in Figure 4, which may or may not need to be moved out of *BASE depending
on the amount of resources they utilize. The commands below can be used to route subsystem work out of the *BASE pool into separate shared pools. Note: You must execute the Change Subsystem Description (CHGSBSD) command once for each subsystem, and you must execute the Change Routing Entry (CHGRTGE) command once for each routing entry in a given subsystem.

1. Route batch work from *BASE to Shared Pool 1:
CHGSBSD SBSD(QBATCH) POOLS((1 *BASE) (2 *SHRPOOL1))
CHGRTGE SBSD(QBATCH) SEQNBR(nnn) POOLID(2)

2. Route asynchronous work from *BASE to Shared Pool 2:
CHGSBSD SBSD(&ASYNCHSBS) POOLS((1 *BASE) (2 *SHRPOOL2))
CHGRTGE SBSD(&ASYNCHSBS) SEQNBR(nnn) POOLID(2)

3. Route HTTP work from *BASE to Shared Pool 3:
CHGSBSD SBSD(QHTTPSVR/QHTTPSVR) POOLS((1 *BASE) (2 *SHRPOOL3))
CHGRTGE SBSD(QHTTPSVR/QHTTPSVR) SEQNBR(10) POOLID(2)
The shared pool characteristics that govern and enforce boundaries around Performance Adjuster can be interactively initialized via the Work with Shared Pools (WRKSHRPOOL) command or programmatically via the Change Shared Pool (CHGSHRPOOL) command. The WRKSHRPOOL command has three views (Pool Data, Tuning Data, and Text), which you toggle between via the F11 key after executing the command. The WRKSHRPOOL command can only be executed interactively but gives you the ability to view and modify all shared pools in a single place. You can execute CHGSHRPOOL interactively or programmatically, but it is pool specific and only allows you to manipulate a single pool at a time. An example of the Change Shared Pool screen is shown in Figure 5. The following examples use the CHGSHRPOOL command to introduce adjustment range baselines and eliminate *MACHINE faulting, which can drive up CPU utilization, disk arm utilization, and across-the-board faulting. It's important to remember that these examples should be used only as a guideline, with the understanding that the needs of a particular server may vary. Best practice is to measure and determine adequate upper boundaries during peak processing times, like the end of the month, because acceptable performance at peak utilization usually guarantees the same during non
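As one illustration of setting such a baseline for the *MACHINE pool, a CHGSHRPOOL invocation might look like the following. The percentage and fault values here are placeholders, not recommendations; measure your own workload first and verify the parameter names against your release:

```
CHGSHRPOOL POOL(*MACHINE) PRIORITY(1) MINPCT(10.00) MAXPCT(12.00) MINFAULT(10.0) MAXFAULT(10.0)
```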
An Explanation of Parameters
The following is a brief explanation of each parameter of the WRKSHRPOOL and CHGSHRPOOL commands. These parameters are the same for both commands but are represented as columns of the WRKSHRPOOL command and rows of the CHGSHRPOOL command:

Pool identifier: The name of the storage pool (*MACHINE, *BASE, *INTERACT, *SPOOL, *SHRPOOLn).
Storage size: The desired size of the storage pool expressed in kilobyte (1KB = 1024 bytes) multiples.
Activity level: The maximum number of threads that can simultaneously run in the pool.
Paging option: Determines whether the system does (*CALC) or does not (*FIXED) dynamically adjust the paging characteristics of the storage pool for optimum performance.
Text description: Verbiage associated with this storage pool.
Minimum page faults: The minimum page faults per second to use as a guideline for adjustment of this storage pool.
Per-thread page faults: The page faults per second for each active thread to use as a guideline for adjustment of this storage pool. Each job is comprised of one or more threads.
Maximum page faults: The maximum page faults per second to use as a guideline for adjustment of this storage pool.
Priority: The priority given to this pool by Performance Adjuster relative to the priority of the other storage pools being adjusted.
Minimum size %: The minimum amount of storage to allocate to this storage pool as a percentage of total main storage.
Systems Management
Maximum size %: The maximum amount of storage to allocate to this storage pool as a percentage of total main storage.
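Putting the parameters above together, a single CHGSHRPOOL command can set every characteristic of one pool in one step. The pool name, size, and tuning values below are hypothetical placeholders, not recommendations:

```
/* Hypothetical example: configure *SHRPOOL1 for batch work.       */
/* SIZE is in KB (1048576 KB = 1 GB); all values are placeholders. */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(1048576) ACTLVL(50) +
             PAGING(*CALC) TEXT('Batch work pool') +
             MINFAULT(10.0) JOBFAULT(0.5) MAXFAULT(100.0) +
             PRIORITY(2) MINPCT(5.00) MAXPCT(25.00)
```

With MINPCT and MAXPCT set, Performance Adjuster can move storage into and out of the pool only within that band, which is exactly the kind of boundary this article advocates.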
One Note
Some of the issues discussed in this article with regard to dynamic memory reallocation within a single LPAR containing static resources should be considered before adopting newer hardware-management functionality, such as uncapped processor and memory resources across multiple LPARs. Performance tuning and measurement is a challenge in a single LPAR when resources are static, so imagine the issues that could arise if the physical resources suddenly become dynamic across multiple LPARs. It could become difficult, if not impossible, to accurately analyze performance against moving targets, let alone undertake a capacity-planning effort. For example, if you're viewing performance data for an interval where CPU is running at 90 percent, the question becomes 90 percent of whatever amount of CPU happened to be assigned to the LPAR during that interval. Ad hoc events on one LPAR could also set off a chain of events across multiple LPARs, turning physical resource reallocation into a horse race that is difficult to rationalize. I'm not suggesting that you never enable uncapped resources, only that you understand the interdependencies between the participating LPARs and the potential ramifications, and that you implement boundaries on uncapped resource reallocation. Physical resources work together, and increasing or decreasing one could have an effect on the others. For that reason, take care before deciding to allow physical CPU and memory resources to be manipulated dynamically and separately.
In Conclusion
The primary and most desirable goal of tuning a server is to achieve good performance at all times. Best practice is to tune the server to handle processing peaks, on the assumption that this will also provide good performance at off-peak periods. If that's simply not possible (for example, because budget constraints limit your ability to acquire additional processing resources), a secondary goal is to at least make performance predictable in order to manage business and end-user expectations. Poor performance is bad enough; the only thing worse is unpredictable poor performance. Properly implemented, Performance Adjuster can remediate excessive faulting, stabilize end-user and batch performance, allow the server to transition gracefully between interactive and batch workloads, and smooth drastic transitions to achieve that much-desired server predictability. Performance Adjuster is a powerful tool, but boundaries must be established around its ability to adjust resources to prevent it from overreacting, and shared pools should also be implemented to make it most effective. Don't be afraid to experiment with shared pools and Performance Adjuster ranges.
Tom Reilly (TomReilly418@gmail.com) has more than 25 years' experience in IT, working on the System i platform since its inception as the AS/400 and, before that, on the System/38. Tom provides engineering, delivery, operational automation, and technical writing support for an international pharmaceutical company and specializes in large MRP, ERP, and messaging implementations running on System i.