
Oracle Hyperion Enterprise Performance Management System

High Availability and Disaster Recovery Guide RELEASE 11.1.2.1 Updated: November 2011

EPM System High Availability and Disaster Recovery Guide, 11.1.2.1

Copyright 2008, 2011, Oracle and/or its affiliates. All rights reserved.

Authors: EPM Information Development Team

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS: Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

Contents

Documentation Accessibility
Chapter 1. About High Availability and Disaster Recovery
    Assumed Knowledge
    Installation Documentation Roadmap
    High Availability and Disaster Recovery Comparison
    High Availability and Load Balancing for EPM System Components
        Components Clustered with EPM System Configurator
        Components Clustered Outside EPM System Configurator
    General Clustering Considerations
Chapter 2. Support Matrix for High Availability and Load Balancing
Chapter 3. Disaster Recovery
    General Information About Disaster Recovery
    Disaster Recovery Architecture
    Disaster Recovery for EPM System Components
        Environment Configuration
        Host Name Requirements
        Database Recommendations
    Disaster Recovery Without File System and Database Replication
    Additional Information
Chapter 4. Foundation Services Clustering
    Configuring Lifecycle Management for Shared Services High Availability
    Performance Management Architect Dimension Server Clustering and Failover
        Task Sequence
        VIP Resources
        Action Scripts
        Application Resources
        Setting the Performance Management Architect Server Logical Web Address
Chapter 5. Essbase Server Clustering and Failover
    Essbase Server Clustering Configurations
    Active-Passive Essbase Clusters
    Active-Active Essbase Clusters
        Configuring Active-Active Clusters with Provider Services
        Adding Servers to Active-Active Essbase Clusters
        Removing Active-Active Essbase Clusters
        Adding Components to Active-Active Essbase Clusters
        Removing Database Components
        Enabling Clustered Database Components
        Disabling Cluster Components
        Active-Active Essbase Clustering Examples
    Connections to Essbase Clusters
Chapter 6. Reporting and Analysis Services Clustering
    Reporting and Analysis Configuration for a Distributed Environment
    Clustering Reporting and Analysis Framework Services and Common Libraries
    Clustering GSM
    Clustering Interactive Reporting Services
    Financial Reporting Print Server Clusters
Chapter 7. Data Management Services Clustering
    FDM Clusters
    Data Relationship Management Clusters
Chapter 8. Clustering EPM System Web Applications
    Prerequisites
    Clustering Web Applications in a Manual Deployment
Appendix A. Additional Information
Glossary

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support


Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.


Chapter 1. About High Availability and Disaster Recovery

In This Chapter
    Assumed Knowledge
    Installation Documentation Roadmap
    High Availability and Disaster Recovery Comparison
    High Availability and Load Balancing for EPM System Components
    General Clustering Considerations

Assumed Knowledge
This guide is for administrators who install, configure, deploy, and manage Oracle Hyperion Enterprise Performance Management System products. It assumes the following:
- Security and server administration skills
- Windows or UNIX administration skills, or both, depending on your computing environment
- Web application server administration skills, including familiarity with WebLogic
- A strong understanding of your organization's security infrastructure, including authentication providers such as Microsoft Active Directory, Lightweight Directory Access Protocol (LDAP)-enabled providers, Oracle Internet Directory, and SunONE LDAP Directory, and use of Secure Sockets Layer (SSL)
- A strong understanding of your organization's database and server environments, including file systems
- A strong understanding of your organization's network environment and port usage

Installation Documentation Roadmap


You can find EPM System installation documentation in the Oracle Documentation Library (http://www.oracle.com/technology/documentation/epm.html) on Oracle Technology Network. For faster access to the documentation for a specific release, you can use the Enterprise Performance Management Documentation Portal (http://www.oracle.com/us/solutions/ent-performance-bi/technical-information-147174.html), which also contains links to EPM Supported Platform Matrices, My Oracle Support, and other information resources.


Note: Always check the Oracle Documentation Library (http://www.oracle.com/technology/documentation/epm.html) on Oracle Technology Network to see whether an updated version of a guide is available.

Table 1 lists the documents to consult for instructions on performing essential installation tasks.
Table 1. Documentation That You Need

- Task: Meeting system requirements and understanding release compatibility. Related documentation: Oracle Hyperion Enterprise Performance Management System Certification Matrix (http://www.oracle.com/technology/software/products/ias/files/fusion_certification.html).
- Task: Planning the installation. Related documentation: Oracle Hyperion Enterprise Performance Management System Installation Start Here.
- Task: Installing, configuring, and deploying EPM System products; starting EPM System products; validating the installation; upgrading EPM System products. Related documentation: Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Task: Securing EPM System; provisioning users. Related documentation: Hyperion Security Administration Guide; Oracle Hyperion Enterprise Performance Management System User and Role Security Guide.

Table 2 lists the documents to consult for additional installation tasks that you might need to perform.
Table 2. Documentation That You Might Need

- Task: Troubleshooting installations. Related documentation: Oracle Hyperion Enterprise Performance Management System Installation and Configuration Troubleshooting Guide.
- Task: Creating a backup of product and application data. Related documentation: Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide.
- Task: Migrating from one environment to another. Related documentation: Oracle Hyperion Enterprise Performance Management System Lifecycle Management Guide.
- Task: Clustering EPM System applications for high availability and disaster recovery. Related documentation: Oracle Hyperion Enterprise Performance Management System High Availability and Disaster Recovery Guide.

Additional content is available in the White Papers Library at Oracle Enterprise Performance Management / Business Intelligence White Papers (http://www.oracle.com/technetwork/middleware/bi-foundation/resource-library-090986.html).

High Availability and Disaster Recovery Comparison


High Availability and Disaster Recovery (sometimes also known as business continuity) address different requirements, as shown in Table 3.
Table 3. High Availability and Disaster Recovery Compared

- High Availability: Addresses service availability, providing redundancy so that if one infrastructure component (network, servers, processes) becomes unavailable, overall service remains available. Disaster Recovery: Addresses service continuity, so that in case of disaster, service is maintained through a standby site.
- High Availability: A single system contains its own data (in the file system and database) and executables. Data replication is unnecessary (although data should be backed up). Disaster Recovery: Two independent environments, typically in separate and distinct facilities, each contain their own data (in the file system and database) and executables. Data and configuration information are replicated between the production and standby sites.

For information on setting up Disaster Recovery for EPM System components, see Chapter 3, Disaster Recovery. For general information on setting up Disaster Recovery, see the Oracle Fusion Middleware Disaster Recovery Guide (http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15250/toc.htm).

High Availability and Load Balancing for EPM System Components


Most EPM System components support clustering in active-active configurations to remove single points of failure from the architecture, maintain consistent performance through load balancing, or both. Components that support clustering include Web applications, Oracle Essbase Server, and Oracle Hyperion Financial Management, Fusion Edition, server, which are clustered with Oracle's Hyperion Enterprise Performance Management System Configurator.
Note: You can cluster Essbase Server in an active-passive configuration with EPM System Configurator. To cluster Essbase Server in an active-active configuration, you use Oracle Hyperion Provider Services. See Chapter 5, Essbase Server Clustering and Failover.

Other components can be clustered outside EPM System Configurator.

Components Clustered with EPM System Configurator


The following EPM System components can be clustered with EPM System Configurator:
- Oracle's Hyperion Foundation Services
  - Foundation Services Managed Server (including Oracle's Hyperion Shared Services, Oracle Enterprise Performance Management Workspace, Fusion Edition, and Foundation Web Service)
  - Oracle Hyperion EPM Architect, Fusion Edition
  - Performance Management Architect Data Synchronization
  - Hyperion Calculation Manager
- Essbase
  - Essbase Server (active-passive configuration)
  - Oracle Essbase Administration Services
  - Provider Services
- Oracle's Hyperion Reporting and Analysis
  - Oracle's Hyperion Reporting and Analysis Framework
  - Oracle Hyperion Financial Reporting, Fusion Edition
  - Oracle's Hyperion Web Analysis
- Oracle's Hyperion Financial Performance Management Applications
  - Oracle Hyperion Planning, Fusion Edition
  - Financial Management:
    - Financial Management Server
    - Financial Management Web application
    - Financial Management Web Services Web Application (IIS)
    - Financial Management LCM Web Services Web Application (IIS)
    - Financial Management Oracle Hyperion Smart View for Office, Fusion Edition, Web Services Web Application (IIS)
    - Financial Management Web Application (IIS)
  - Oracle Hyperion Profitability and Cost Management, Fusion Edition
  - Oracle Hyperion Performance Scorecard, Fusion Edition
  - Oracle Hyperion Financial Close Management
  - Oracle Hyperion Disclosure Management
- Data Management
  - Oracle Hyperion Financial Data Quality Management, Fusion Edition, Web Application (IIS)
  - Oracle Hyperion Financial Data Quality Management ERP Integration Adapter for Oracle Applications (ERP Integrator) Web Application

See Chapter 2, Support Matrix for High Availability and Load Balancing and Chapter 8, Clustering EPM System Web Applications.


Components Clustered Outside EPM System Configurator


The following EPM System components support clustering outside EPM System Configurator for removing single points of failure from the architecture, maintaining consistent performance through load balancing, or both.
- Foundation
  - Performance Management Architect Dimension Server. See Chapter 4, Foundation Services Clustering.
- Essbase Server (active-active cluster configuration). See Chapter 5, Essbase Server Clustering and Failover.
- Reporting and Analysis
  - Reporting and Analysis Framework Services and Common Libraries. See Clustering Reporting and Analysis Framework Services and Common Libraries on page 46.
  - Oracle's Hyperion Interactive Reporting. See Clustering Interactive Reporting Services on page 47.
  - Financial Reporting Print Server. See Financial Reporting Print Server Clusters on page 47.
- Data Management:
  - FDM Application Server. See FDM Clusters on page 49.
  - Oracle Hyperion Data Relationship Management, Fusion Edition, Web Application (IIS). See Data Relationship Management Clusters on page 50.
  - Data Relationship Management Application Server. See Data Relationship Management Clusters on page 50.

See Chapter 2, Support Matrix for High Availability and Load Balancing.

General Clustering Considerations


Note these general points when installing EPM System components in a distributed environment:
- If you have more than one Oracle HTTP Server or IIS Web server, you must use a load balancer (hardware or software) to route traffic to the servers, and the logical Web address for the Web application cluster should be the load balancer. If you have only one Oracle HTTP Server or IIS Web server, the logical Web address for the Web application cluster should be the Oracle HTTP Server or IIS server.
- Foundation Services is required on only one machine in the deployment, unless multiple instances are required for clustering.
- There is a required configuration sequence for EPM System components installed in a distributed environment. In particular, you must configure Foundation Services first. See Configuration Sequence in a Distributed Environment in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- In a distributed environment, EPM Oracle home must be the same on all machines.
- When configuring EPM System for high availability where multiple instances of services are running, you must point to the same location on a shared disk in these fields in EPM System Configurator:
  - (Reporting and Analysis Framework Services) Repository Directory
  - (Essbase Server) File path to application location (ARBORPATH)
  - Performance Scorecard - Configure Attachment Files Location
  For example:
  - Repository Directory: s:/pkt7119/user_projects/epmsystem1/ReportingAnalysis/data/RM1
  - Full path to application location (ARBORPATH): s:/pkt7119/user_projects/epmsystem1/EssbaseServer/essbaseserver1
  - Performance Scorecard - Configure Attachment Files Location: s:/pkt7119/user_projects/epmsystem1/HPS/hpsfiles
- On the machine on which you plan to administer the WebLogic Server, you must install all Web applications that you plan to deploy on any machine in the environment. (The WebLogic Administration Server is installed and deployed on the Foundation Services machine.) On each remote machine in a distributed environment, install the Web applications that you plan to run on that machine, and then use EPM System Configurator to deploy the Web applications automatically, or manually deploy the Web applications.
  Note: Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition, installs WebLogic Server on each machine (for Web tier and Service tier components) in a distributed environment.
- If you are deploying Web applications on a machine other than the WebLogic Administration Server machine, the WebLogic Administration Server must be running.
- All Web applications in an EPM System deployment must be deployed on either all Windows machines or on all UNIX machines. However, because Financial Management runs only on Windows, if you are using Financial Reporting with Financial Management, you must install them together on a Windows machine. (Financial Management is not supported as a data source on a UNIX platform.) If your other Web applications are deployed to UNIX machines, deploy Financial Reporting and Web Analysis on Windows using a manual process. See Deploying Financial Reporting and Web Analysis on Windows for use with Financial Management in Chapter 6, Manually Deploying EPM System Web Applications, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- You can have more than one Web server in a deployment for load balancing and failover. In this scenario, configure the Web server on each machine in the environment.
- For IIS 6.0, you cannot install 32-bit components on a 64-bit system on which 64-bit components are installed. On 32-bit platforms, all EPM System products can coexist. For IIS 7 (the default on Windows 2008 systems), 32-bit and 64-bit components can coexist.
- EPM System static content, including product online help, is installed with Oracle HTTP Server.
- If you are using FDM and IIS as the Web server, you must install the FDM Web application and the Web server on the same machine. If you are using Financial Management and IIS as the Web server, you must install the Financial Management Web applications and the Web server on the same machine.

See Installing EPM System Products in a Distributed Environment in Chapter 3, Installing EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.


Chapter 2. Support Matrix for High Availability and Load Balancing

The tables in this chapter list the supported clustering methodologies for EPM System components by product group and indicate whether high availability and load balancing are supported for each component. The tables also include notes and references to additional information.
Table 4. Foundation Services Clustering

- Foundation Services Managed Server (including Shared Services, EPM Workspace, and Foundation Web Services Web applications). Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: To configure Oracle Hyperion Enterprise Performance Management System Lifecycle Management for high availability when Shared Services is set up for high availability, you must set up a shared disk. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; Configuring Lifecycle Management for Shared Services High Availability in Chapter 4, Foundation Services Clustering, of this guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Performance Management Architect Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Performance Management Architect Data Synchronizer Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Calculation Manager Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Performance Management Architect Dimension Server and other processes. Supported methodology: Oracle Clusterware clustering for failover. High availability: Yes. Load balancing: No. Notes: None. References: Chapter 4, Foundation Services Clustering, in this guide; Oracle Clusterware documentation.

Table 5. Essbase Clustering

- Essbase Server. Supported methodology: Active-passive clustering with EPM System Configurator; active-active clustering with Provider Services. High availability: Yes. Load balancing: Active-active clusters configured with Provider Services support load balancing. Notes: Active-passive clusters support failover with write-back; active-active clusters are read-only. References: Active-passive clustering: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; active-active clustering: Chapter 5, Essbase Server Clustering and Failover, in this guide.
- Administration Services Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: Session failover is not supported. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Provider Services Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Oracle Essbase Integration Services. Supported methodology: None. High availability: No. Load balancing: No. Notes: None. References: None.
- Oracle Essbase Studio. Supported methodology: None. High availability: No. Load balancing: No. Notes: None. References: None.

Table 6. Reporting and Analysis Clustering

- Reporting and Analysis Framework Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Financial Reporting Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Web Analysis Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Reporting and Analysis Framework Services and Common Libraries. Supported methodology: Virtual clustering through EPM Workspace. High availability: Yes. Load balancing: Yes. Notes: None. References: Chapter 6, Reporting and Analysis Services Clustering, in this guide.
- Interactive Reporting Services. Supported methodology: Virtual clustering through EPM Workspace. High availability: Yes. Load balancing: Yes. Notes: None. References: Chapter 6, Reporting and Analysis Services Clustering, in this guide.
- Financial Reporting Print Server. Supported methodology: Installation on different machines for physical clustering. High availability: Yes. Load balancing: Yes. Notes: None. References: Chapter 6, Reporting and Analysis Services Clustering, in this guide.

Table 7. Financial Performance Management Applications Clustering

- Planning Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Planning RMI Registry. Supported methodology: None. High availability: No. Load balancing: No. Notes: None. References: None.
- Financial Management Server. Supported methodology: Clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: In EPM System Configurator, use the Register Application Servers/Clusters task. References: Clustering Financial Management Servers in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Financial Management Web Services Web Application. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Financial Management Web Services Web Application (IIS). Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: Yes. Load balancing: Yes. Notes: None. References: Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Financial Management Web Application (IIS). Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: Yes. Load balancing: Yes. Notes: None. References: Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Financial Management Lifecycle Management Web Services Web Application (IIS). Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: Yes. Load balancing: Yes. Notes: None. References: Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Financial Management Smart View Web Services (IIS). Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: Yes. Load balancing: Yes. Notes: None. References: Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- Performance Scorecard. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Profitability and Cost Management. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Disclosure Management. Supported methodology: None. High availability: No. Load balancing: No. Notes: None. References: None.
- Financial Close Management. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Automatic deployment: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide; load balancing: Configuring High Availability for Oracle Fusion Middleware SOA Suite in the Oracle Fusion Middleware High Availability Guide.

Table 8. Data Management Products Clustering

- FDM Application Server. Supported methodology: Clustering with the FDM proprietary load balancer. High availability: Yes. Load balancing: Yes. Notes: None. References: Database software documentation; FDM Clusters on page 49; Oracle Hyperion Financial Data Quality Management, Fusion Edition, Configuration Guide.
- FDM proprietary load balancer. Supported methodology: None. High availability: Yes. Load balancing: No. Notes: The load balancer is designed to be installed in more than one place in an environment. If the primary load balancer becomes unavailable, clients use a secondary load balancer. References: None.
- FDM IIS Web Application. Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: Yes. Load balancing: Yes. Notes: None. References: Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
- FDM Task Manager. Supported methodology: None. High availability: No. Load balancing: No. Notes: None. References: None.
- ERP Integrator. Supported methodology: WebLogic clustering with EPM System Configurator. High availability: Yes. Load balancing: Yes. Notes: None. References: Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide; manual deployment: Chapter 8, Clustering EPM System Web Applications, in this guide.
- Data Relationship Management IIS Web Application. Supported methodology: Clustering with Oracle HTTP Server or third-party load balancers. High availability: No. Load balancing: Yes. Notes: Multiple Microsoft IIS instances are deployed in an active-active configuration. References: Data Relationship Management Clusters on page 50; Configuring Load Balancing for Data Relationship Management Web Applications in the Oracle Hyperion Data Relationship Management Installation Guide.
- Data Relationship Management Application Server. Supported methodology: Clustering with Data Relationship Management proprietary load balancing. High availability: No. Load balancing: Yes. Notes: Multiple application servers are deployed in a primary-secondary configuration. References: Data Relationship Management Clusters on page 50; Configuring Host Machines in the Oracle Hyperion Data Relationship Management Installation Guide.

Chapter 3. Disaster Recovery

In This Chapter
    General Information About Disaster Recovery
    Disaster Recovery Architecture
    Disaster Recovery for EPM System Components
    Disaster Recovery Without File System and Database Replication
    Additional Information

General Information About Disaster Recovery


This chapter contains information that is specific to EPM System Disaster Recovery configurations. The Oracle Fusion Middleware Disaster Recovery Guide (http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15250/toc.htm) is the primary reference for design considerations, recommendations, setup procedures, troubleshooting steps, and other information that you need to deploy and manage the Oracle Fusion Middleware Disaster Recovery solution.


Disaster Recovery Architecture


Figure 1 EPM System Disaster Recovery Architecture

Note: Although the deployment shown in Figure 1 uses symmetric topology, with the same number of servers at the production and standby sites, deployment using asymmetric topology (with fewer servers at the standby site than at the production site) is also possible. Deployment with asymmetric topology requires a server at the standby site for each logical server cluster at the production site. Use of a shared or replicated disk requires a common share across machines; for example, the share can be under /user_projects/data.


Disaster Recovery for EPM System Components


Subtopics
    Environment Configuration
    Host Name Requirements
    Database Recommendations

Environment Configuration
Configuring a Disaster Recovery environment requires these steps:

1. Install and configure EPM System at the production site.
   - Runtime executables and data should be on a partition that can be replicated.
   - Distributed services must be clustered to form a logical service.
2. If the host names at the standby site differ from the host names at the production site, set up host name aliases at the standby site. See Host Name Requirements on page 23.
3. When the EPM System configuration at the production site is complete, install and configure EPM System at the standby site.
4. Set up database replication.
   Note: You can use a backup and restoration procedure for replication.
5. Enable the standby site:
   - Disable mirroring between the production and standby sites.
   - Run the crash-recovery procedure for each application to recover Essbase. See Chapter 4, Essbase Components, in the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide.
   - Start the services on the standby hosts.

Host Name Requirements


An EPM System Disaster Recovery deployment requires a means of resolving host references between the production and standby sites. Ensure that your configuration uses one of these options, listed in order of preference:
- Production and standby sites are on separate networks. The fully qualified host names can be the same in both sites.
- Production and standby sites have different DNS servers that resolve the host names to the correct IP address in their network. The standby site can have a standby DNS that is activated when a disaster occurs.
- Production host names are resolved to a local IP address at the standby site by means of an /etc/hosts file.

If the host names must differ between the production and standby sites and there is no separate DNS for the standby site, set up an alias for the production site servers in the standby site as shown in Figure 2, so that the main server is the first entry in the alias.
Figure 2 Host Name Alias Setup
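For example, an /etc/hosts entry on a standby host might look like the following sketch. The IP address and host names are hypothetical; the standby host's own name appears first, and the production host name follows as an alias that resolves to the standby host's local IP address:

    10.10.1.21   stbyepm1.example.com   stbyepm1   prodepm1.example.com   prodepm1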

Database Recommendations
Database recommendations for a Disaster Recovery environment:
- Use the database host name alias on the standby site.
- Use an Oracle Data Guard configuration for data repositories.
- For planned configuration changes, force database synchronization with Oracle Data Guard.

See the Oracle Data Guard documentation at http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardOverview.html.

Disaster Recovery Without File System and Database Replication


You can set up Disaster Recovery using backup instead of file system and database replication. With replication, any changes made on the production site are also applied to the standby site. Backup is less costly than replication but enables you to recover only backed-up data. For example, if data was last backed up on a Friday and the production site is damaged on the following Thursday, data changes that occurred between the two dates are lost. More frequent backups enable you to recover more data.

The file system backup and the database backup must be synchronized. Backing up the file system and the database at approximately the same time, when there is relatively little activity, ensures that they are synchronized.

For Disaster Recovery without file system and database replication, take one of these steps:

- Replicate the installation image to ensure that all patches applied to the production site after the initial setup are also applied to the standby site.
- Promptly apply to the standby site, manually, all patches that are applied at the production site.

Additional Information
For more information about setting up a Disaster Recovery environment, see these documents:
- The Oracle Fusion Middleware Disaster Recovery Guide (http://download.oracle.com/docs/cd/E14571_01/doc.1111/e15250/intro.htm#BABHCEJJ)
- The Oracle Data Guard documentation at http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardOverview.html
- Oracle Fusion Middleware DR Solution Using NetApp Storage at http://www.netapp.com/us/library/technical-reports/tr-3672.html
- The Disaster Recovery guide for the RDBMS that you use


Chapter 4. Foundation Services Clustering

In This Chapter
    Configuring Lifecycle Management for Shared Services High Availability
    Performance Management Architect Dimension Server Clustering and Failover

This chapter provides information about configuring Lifecycle Management for Shared Services high availability and setting up Performance Management Architect Dimension Server for failover. For information about clustering Foundation Services Web applications through EPM System Configurator, see Chapter 8, Clustering EPM System Web Applications.

Configuring Lifecycle Management for Shared Services High Availability


This section describes how to configure Lifecycle Management when Shared Services is set up for high availability and is started as a Windows service. After this configuration is completed, when artifacts are exported using Lifecycle Management, the content is exported to a path on a shared disk; when artifacts are imported, the content is read from that exported location on the shared disk.

To configure Lifecycle Management for high availability:


1. Set up a shared disk/folder that is accessible to all Shared Services nodes.
2. On each node, start Shared Services as a service, using the login of a domain user who has access to the shared disk/folder.
3. On one node, launch Oracle's Hyperion Shared Services Console and expand the Deployment Metadata node under the Foundation application group.
4. Expand the Shared Services Registry node, then Foundation Services, and then Shared Services.
5. Under the Shared Services node, right-click the Properties node and select Export for Edit.
6. Save the component.properties file to a location on the file system.
7. Open the saved file in a text editor and search for the property filesystem.artifact.path.
8. Change the value associated with the filesystem.artifact.path property. UNIX-style UNC paths with forward slashes must be defined for the shared disk; for example (see also the sketch following this procedure):
   filesystem.artifact.path=//hostname/share
9. Save the changes.
10. From Oracle's Hyperion Shared Services Console, right-click the Properties node under Shared Services, and select Import after Edit.
11. Browse to the location of the updated file and select the file. This action updates the property in Oracle's Hyperion Shared Services Registry.
12. Restart Shared Services on this node and all other nodes, using the domain user login.
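As a concrete illustration of step 8, assuming a hypothetical file share named \\epmshare01\lcm_artifacts to which the domain user from step 2 has read and write access, the edited entry in component.properties would use forward slashes:

    filesystem.artifact.path=//epmshare01/lcm_artifacts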

Performance Management Architect Dimension Server Clustering and Failover


Subtopics
    Task Sequence
    VIP Resources
    Action Scripts
    Application Resources
    Setting the Performance Management Architect Server Logical Web Address

Task Sequence
You use Oracle Clusterware to cluster Performance Management Architect Dimension Server for failover in an active-passive configuration. Oracle Clusterware documentation is available at http://www.oracle.com/pls/db112/portal.portal_db?selected=16&frame=#oracle_clusterware. For information about clustering the Performance Management Architect Web Application and the Performance Management Architect Data Synchronizer Web Application, see Chapter 8, Clustering EPM System Web Applications.

Clustering Performance Management Architect Dimension Server for failover involves this task sequence:

1. Installing the Performance Management Architect Dimension Server component in the Oracle Clusterware shared folder on a clustered disk, or in a subfolder of that folder.
2. Configuring Performance Management Architect with EPM System Configurator. See Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
3. Creating and registering a virtual Internet Protocol (VIP) resource with Oracle Clusterware. See VIP Resources on page 29.
4. Creating an action script. See Action Scripts on page 30.
5. Creating and registering an application resource with Oracle Clusterware. See Application Resources on page 31.
6. Setting the Performance Management Architect Dimension Server logical Web address. See Setting the Performance Management Architect Server Logical Web Address on page 33.

VIP Resources
Subtopics
    Editing EPMA_CreateAndStartVIPResource.bat
    Stopping and Unregistering VIP Resources
    Checking VIP Resource Status

You run EPMA_CreateAndStartVIPResource.bat, in EPM_ORACLE_HOME/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover, to create, register, and start a VIP resource. The VIP resource is paired with an application resource to provide a single point of access. The batch file runs in a command window and pauses when finished. Pressing any key closes the command window. Before running EPMA_CreateAndStartVIPResource.bat, you can edit it to conform with your environment. You use a different batch file to stop and delete the VIP resource after deleting the application resource.

If clients access the application through a network, and failover to another node is enabled, you must register a VIP address for the application. Oracle Clusterware provides a standard VIP agent for application VIPs. Basing any new application VIPs on the VIP type that is referenced in EPMA_CreateAndStartVIPResource.bat ensures consistent behavior among all VIPs deployed in a cluster.

Editing EPMA_CreateAndStartVIPResource.bat
You can edit EPMA_CreateAndStartVIPResource.bat to specify values for these variables, which are listed at the top of the script:
- ACTION_SCRIPT: Full path and file name for usrvip.bat, which is in the Oracle Clusterware installation folder. This batch file is the action script that Oracle Clusterware uses to manage the VIP resource.
- VIP_IP: A cluster VIP, registered in DNS.
- START_TIMEOUT: Number of seconds that Oracle Clusterware waits for the VIP resource to start before declaring a failed start.
- STOP_TIMEOUT: Number of seconds that Oracle Clusterware waits for the VIP resource to stop before declaring a failed stop.
- CHECK_INTERVAL: Number of seconds between repeated checks. Shortening intervals for more frequent checks increases resource consumption if you use the script agent. To reduce resource consumption, use an application-specific agent.
- SCRIPT_TIMEOUT: Maximum time, in seconds, for an action to run. Oracle Clusterware returns an error message if the action script does not finish within the specified time. The timeout applies to all actions (start, stop, check, and clean).
- RESTART_ATTEMPTS: Number of times Oracle Clusterware attempts to restart a resource on the resource's current server before attempting to relocate it. For example, if the value is 1, Oracle Clusterware attempts to relocate the resource after a second failure. A value of 0 indicates that there is no attempt to restart; Oracle Clusterware always attempts to fail the resource over to another server.
- CRS_HOME: Full path to the BIN folder for your Oracle Clusterware installation.
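As a rough sketch only, the variable block at the top of EPMA_CreateAndStartVIPResource.bat might look like the following after editing. Every value shown (the paths, the VIP address, and the timing values) is a hypothetical example for a Windows Oracle Clusterware installation, not a required or verified setting:

    rem Hypothetical example values; adjust for your environment
    set ACTION_SCRIPT=C:/app/grid/11.2.0/bin/usrvip.bat
    set VIP_IP=10.20.30.40
    set START_TIMEOUT=180
    set STOP_TIMEOUT=60
    set CHECK_INTERVAL=30
    set SCRIPT_TIMEOUT=60
    set RESTART_ATTEMPTS=1
    set CRS_HOME=C:/app/grid/11.2.0/BIN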

Stopping and Unregistering VIP Resources


After you unregister an application resource, you can stop and delete the associated VIP resource, which unregisters the resource. Deleting a VIP resource does not affect the Performance Management Architect installation.

To stop and unregister a VIP resource, run EPMA_StopAndDeleteVIPResource.bat, in EPM_ORACLE_HOME/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover. The batch file runs in a command window and pauses when finished. Pressing any key closes the command window.

Checking VIP Resource Status


After running EPMA_CreateAndStartVIPResource.bat or EPMA_StopAndDeleteVIPResource.bat, you can run this command from the command line to check the status of the VIP resource:
crsctl status resource epmavip -v

A status of STATE=ONLINE indicates that the resource is running correctly. After you run EPMA_StopAndDeleteVIPResource.bat, the VIP resource should no longer exist.

Action Scripts
Oracle Clusterware calls an action script to stop or start an application resource (for example, Performance Management Architect Dimension Server) or to check the status of the application. You can run the action script from Oracle Clusterware or from the command line. The action script logs the date, time, action being performed (start, stop, clean, or check), and action result (success or failure). You create the action script by editing EPMA_ActionScript.bat, in EPM_ORACLE_HOME/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover, to conform to your environment.


You can edit EPMA_ActionScript.bat to specify these variables, which are listed at the top of the script:
- LOG_PATH: Full path to a local folder where the application resource action script logs information. Example: set LOG_PATH=C:/CRS_ACTION/EPMA. Assuming that you provide a path with a valid drive letter, the action script creates the path at runtime if the path does not exist.
- LOGSCR: A concatenation of the LOG_PATH value and a valid file name for the environment. Example: set LOGSCR=%LOG_PATH%/ClusterActionEPMA.log
- SECONDS_TO_WAIT_FOR_START: Number of seconds that the action script waits for the application resource to start before declaring a failed start and returning a 0 to the calling process (Oracle Clusterware). Example: set SECONDS_TO_WAIT_FOR_START=180
- SECONDS_TO_WAIT_FOR_STOP: Number of seconds that the action script waits for the application resource to stop before declaring a failed stop and returning a 0 to the calling process (Oracle Clusterware). Example: set SECONDS_TO_WAIT_FOR_STOP=60
Note: If your Performance Management Architect release is 11.1.2.1, the two sections labeled EPMA pre-11.1.2.1 section should be commented out.

If your Performance Management Architect release is 11.1.2.0 or earlier, the section labeled EPMA 11.1.2.1 section should be commented out.

Application Resources
Subtopics
    Editing EPMA_CreateAndStartAppResource.bat
    Stopping and Unregistering Application Resources
    Checking Application Resource Status

You run EPMA_CreateAndStartAppResource.bat, in EPM_ORACLE_HOME/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover, to create, register, and start an application resource. The application resource is paired with a VIP resource to provide a single point of access. Before running EPMA_CreateAndStartAppResource.bat, you can edit it to conform with your environment. You use a different batch file to stop and delete the application resource.

If you stop the application resource by running crsctl stop resource EPMAServer -f, or by shutting down the Hyperion EPMA Server service directly using the Windows Services applet, Oracle Clusterware automatically attempts to restart it on another node in the cluster. For the application resource to stay idle, you must run EPMA_StopAndDeleteAppResource.bat. To restart an application resource after deleting it with EPMA_StopAndDeleteAppResource.bat, you must run EPMA_CreateAndStartAppResource.bat to re-create and start it. Deleting the VIP and application resources has no effect on the Performance Management Architect installation.
Caution!

After running EPMA_CreateAndStartAppResource.bat, which registers the application with Oracle Clusterware as a resource, use Oracle Clusterware commands to start and stop the Performance Management Architect server. Do not stop or start the application resource directly (for example, in the Windows services applet).

Editing EPMA_CreateAndStartAppResource.bat
You can edit EPMA_CreateAndStartAppResource.bat to specify values for these variables, which are listed at the top of the script:
- ACTION_SCRIPT: Full path and file name for the EPMA_ActionScript.bat file provided with your Performance Management Architect installation. This batch file is the action script that Oracle Clusterware uses to manage the application resource (for example, Performance Management Architect Server).
- FAILOVER_DELAY: Number of seconds to wait before starting the failover process after a failure is detected.
- FAILURE_THRESHOLD: Number of failures detected within a specified failure interval for a resource before Oracle Clusterware marks the resource as unavailable and stops monitoring it. If a resource fails the specified number of times, then Oracle Clusterware stops the resource. If the value is 0, then failure tracking is disabled. The maximum value is 20.
- FAILURE_INTERVAL: Interval, in seconds, during which Oracle Clusterware applies the FAILURE_THRESHOLD attribute. If the value is 0, failure tracking is disabled.
- START_TIMEOUT: Number of seconds that Oracle Clusterware waits for the application resource to start before declaring a failed start.
- STOP_TIMEOUT: Number of seconds that Oracle Clusterware waits for the application resource to stop before declaring a failed stop.
- CHECK_INTERVAL: Number of seconds between repeated checks. Shortening intervals for more frequent checks increases resource consumption if you use the script agent. To reduce resource consumption, use an application-specific agent.
- RESTART_ATTEMPTS: Number of times Oracle Clusterware attempts to restart a resource on the resource's current server before attempting to relocate it. For example, if the value is 1, Oracle Clusterware attempts to relocate the resource after a second failure. A value of 0 indicates that there is no attempt to restart; Oracle Clusterware always attempts to fail the resource over to another server.
- CRS_HOME: Full path to the BIN folder for your Oracle Clusterware installation.
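As with the VIP resource script, the following is a rough, hypothetical sketch of how the variable block at the top of EPMA_CreateAndStartAppResource.bat might be edited. The path and all numeric values are illustrative assumptions, not recommended settings:

    rem Hypothetical example values; adjust for your environment
    set ACTION_SCRIPT=C:/Oracle/Middleware/EPMSystem11R1/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover/EPMA_ActionScript.bat
    set FAILOVER_DELAY=0
    set FAILURE_THRESHOLD=2
    set FAILURE_INTERVAL=300
    set START_TIMEOUT=180
    set STOP_TIMEOUT=60
    set CHECK_INTERVAL=30
    set RESTART_ATTEMPTS=1
    set CRS_HOME=C:/app/grid/11.2.0/BIN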

Stopping and Unregistering Application Resources


To stop and unregister an application resource, run EPMA_StopAndDeleteAppResource.bat. The batch file runs in a command window and pauses when finished. Pressing any key closes the command window.

Checking Application Resource Status


After running EPMA_CreateAndStartAppResource.bat, you can run these commands from the command line, one at a time, to display the status of your application resources:

  crsctl status resource epmavip -v
  crsctl status resource EPMAServer -v

Tip: Instead of running the commands individually, you can run EPMA_Status.bat, in EPM_ORACLE_HOME/products/Foundation/BPMA/AppServer/DimensionServer/ServerEngine/Failover, which runs both commands.

When the resources are running correctly, their status is STATE=ONLINE. Oracle Clusterware runs the action script EPMA_ActionScript.bat with the check parameter at the check interval that is set when the application resource is created. If the action script returns 1, indicating that the application is not running, Oracle Clusterware attempts to start the application on another node in the cluster.

After running EPMA_StopAndDeleteAppResource.bat, you can run this command from the command line to ensure that the resource no longer exists and that the Hyperion EPMA Server service is not running on any node in the cluster:
crsctl status resource EPMAServer -v
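The exact output depends on your Oracle Clusterware version; as a rough illustration only (the node name is an assumption), a healthy resource reports something like:

  NAME=EPMAServer
  TYPE=cluster_resource
  TARGET=ONLINE
  STATE=ONLINE on epmanode1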

Setting the Performance Management Architect Server Logical Web Address


You use EPM System Configurator to set the Performance Management Architect Server logical Web address for the Hyperion EPMA Web Tier Web Application service to the cluster address or name.

To set the logical Web address in EPM System Configurator:


1. Select the EPM Oracle instance to configure, and then click Next.
2. Click Uncheck All.
3. Expand the tree.


4. Select Hyperion Foundation | Configure Logical Address for Web Applications, and then click Next.
5. For each Web application:
   a. Select Set the logical web address.
   b. For the Product Component: DimensionServer, double-click the value in the Host column.
   c. Change the value to specify one of these items:
      - SCAN (Single Client Access Name), if your RAC is Oracle 11g Release 2 or later
      - Application VIP
      - Host name alias that points to the application VIP
   d. Click Next.
6. Click Next to finish the configuration.
7. Start the Hyperion EPMA Web Tier - Web Application service.
8. Wait a few minutes, and then log on to EPM Workspace.


Chapter 5. Essbase Server Clustering and Failover

In This Chapter:
- Essbase Server Clustering Configurations
- Active-Passive Essbase Clusters
- Active-Active Essbase Clusters
- Connections to Essbase Clusters

This chapter discusses active-active and active-passive clustering of Essbase Server. For information about clustering Administration Services Web Application and Provider Services Web Application, see Chapter 8, Clustering EPM System Web Applications.

Essbase Server Clustering Configurations


Essbase Server clustering can be active-passive or active-active.
Table 9  Essbase Server Clustering Configurations

Capability          Active-Passive    Active-Active
Write-back          Yes               No
Failover            Yes               Yes
Load balancing      No                Yes
High availability   Yes               Yes

Active-passive Essbase clusters support failover with write-back to databases, but they do not support load balancing. Essbase failover clusters use the service failover functionality of Oracle Process Manager and Notification Server (OPMN). A single Essbase installation is run in an active-passive deployment, and one host runs the Essbase agent and two servers. Oracle Process Manager and Notification Server stops, starts, and monitors the agent process. See Active-Passive Essbase Clusters on page 36.

Active-active Essbase clusters support high availability and load balancing. An active-active Essbase cluster supports read-only operations on the databases and should be used only for reporting. Because active-active Essbase clusters do not support data write-back or outline modification, and because they do not manage database replication tasks such as synchronizing the changes in one database across all databases in the cluster, they do not support Planning. When Planning is configured to use an Essbase cluster as a data source, it cannot launch business rules with Oracle's Hyperion Business Rules or Calculation Manager as the rules engine. You can use Provider Services to set up active-active Essbase clusters. See Active-Active Essbase Clusters on page 36.

Active-Passive Essbase Clusters


An active-passive Essbase cluster can contain two Essbase servers. To install additional Essbase servers, you must install an additional instance of Essbase. The application must be on a shared drive, and the cluster name must be unique within the deployment environment. These types of shared drive are supported:

- SAN storage device with a shared disk file system supported on the installation platform, such as OCFS
- NAS device over a supported network protocol

Note: Any networked file system that can communicate with an NAS storage device is supported, but the cluster nodes must be able to access the same shared disk over that file system. SAN or a fast NAS device is recommended because of shorter I/O latency and failover times.

You set up active-passive Essbase clusters with EPM System Configurator. You specify the Essbase cluster information for each Essbase instance. You define the cluster when you configure the first instance of Essbase. When you configure the second instance, you associate the instance with the cluster.

Note: For a given physical Essbase server that Administration Services is administering, Administration Services displays only the name of the cluster to which that Essbase server belongs.

For instructions, see Clustering Essbase Server in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
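Because OPMN manages the Essbase agent in a failover cluster, routine checks and restarts should go through opmnctl rather than starting the agent directly. The following is a minimal sketch; the ias-component name shown is an assumption (it is generated from your cluster or instance configuration), and the opmnctl location depends on your EPM Oracle instance:

  opmnctl status
  opmnctl stopproc ias-component=EssbaseCluster-1
  opmnctl startproc ias-component=EssbaseCluster-1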

Active-Active Essbase Clusters


Using Provider Services, you can create an active-active cluster of identical databases belonging to one Essbase server, to multiple Essbase servers on the same computer, or to Essbase servers distributed across multiple computers over the network.


Note: Essbase servers may be subject to licensing restrictions.

Provider Services clients include Smart View clients, custom Java application programming interface (API) clients, and XML for Analysis (XMLA) clients. Provider Services distributes client requests to database instances belonging to the cluster. An active-active Essbase cluster supports read-only operations on the databases; it does not support data write-back or outline modification. An active-active Essbase cluster does not manage database replication capabilities, such as synchronizing the changes in one database across all databases in the cluster. After configuring a set of Essbase servers for active-active clustering, you must define and enable the cluster under the Provider Services node in the Enterprise View of Administration Services Console. See Enabling Clustered Database Components on page 39.

Configuring Active-Active Clusters with Provider Services


If Essbase is clustered with Provider Services and no third-party tool:

- Smart View must be used rather than Oracle Essbase Spreadsheet Add-in.
- Essbase has no write-back capability and should be used for reporting only; therefore, Planning is not supported.
- Nodes must be loaded and calculated individually.

Adding Servers to Active-Active Essbase Clusters


You must specify which servers a cluster includes.

To add servers to an Essbase cluster, from Administration Services Console:

1. From Enterprise View or a custom view, select Essbase Servers.
2. For each server to be added:
   a. Right-click, and select Add Essbase Servers.
   b. In Add Essbase Server, enter the Essbase server name, user name, and password.
   c. Confirm the password that you entered in the preceding step.
3. From Enterprise View or a custom view, under the Provider Services node, select a provider.
4. Right-click and select Create, then Create Essbase Cluster.
5. Select Add Essbase Cluster, then Cluster name, and then enter a name for the cluster; for example, East Coast Sales.
6. Enter a short description; for example, East Coast sales databases.
7. Click Add to add servers to the cluster.
8. In Select Cluster Component Database, specify the Essbase server, application, and database names, and then click OK.


   The Essbase server and associated application and database names are displayed under the cluster component list; for example, localhost.Demo.Basic. A cluster component comprises the Essbase server, application, and database name.
9. Repeat step 7 and step 8 to add any other components.
10. In Add Cluster, click OK.
   The new cluster name is displayed under Essbase Clusters.

Removing Active-Active Essbase Clusters


To remove an active-active Essbase cluster:

1. From Enterprise View or a custom view in Administration Services Console, under the Provider Services node, select a provider.
2. Under the provider node, select Essbase Clusters.
3. Under Essbase Clusters, select a cluster.
4. Right-click, and select Remove.
5. In Remove Essbase Cluster, click Yes.

The removal takes effect when you restart Provider Services.

Adding Components to Active-Active Essbase Clusters


When creating an Essbase cluster, you specify the associated Essbase servers, applications, and databases.

To add components to a cluster, from Administration Services Console:

1. From Enterprise View or a custom view, under the Provider Services node, select a provider.
2. Under the provider node, select the Essbase Clusters node.
3. Under the Essbase Clusters node, select the cluster.
4. Right-click, and select Edit.
5. In the Essbase Cluster panel, click Add.
6. In Select Cluster Component Database, specify the Essbase server, application, and database names.
7. Click OK.
   The database component is listed in the Essbase Cluster panel.
8. To add more components, repeat step 5 through step 7 for each component.
9. Click Apply.
10. Click Close.


Removing Database Components


To remove a database component from an active-active cluster, from Administration Services Console:

1. From Enterprise View or a custom view, under the Provider Services node, select a provider.
2. Under the provider node, select the Essbase Clusters node.
3. Under the Essbase Clusters node, select a cluster.
4. Right-click, and select Edit.
5. For each database component to be removed, in the Essbase Cluster panel, select the component, and click Remove.
6. Click Apply.
7. Click Close.

Enabling Clustered Database Components


You can reenable a database component after disabling it.
Note: Components that were part of the cluster definition when Provider Services was started can be enabled and disabled dynamically with no need to restart Provider Services. However, if you add a component to a cluster or create a cluster, you must restart Provider Services for the new cluster definition to take effect. You can enable or disable the newly added components after restarting Provider Services.

To enable clustered database components, from Administration Services Console:

1. From Enterprise View or a custom view, under the Provider Services node, select a provider.
2. Under the provider node, select the Essbase Clusters node.
3. Under the Essbase Clusters node, select a cluster.
4. Right-click, and select Edit.
5. For each database component to be enabled, in the Essbase Cluster panel, select the component, and click Enable.
   The status of the database component changes to Enabled.
6. Click Close.

Note: Components that were part of the cluster definition when Provider Services was started can be enabled and disabled dynamically without restarting Provider Services. However, if you add a component to an existing cluster or create a cluster, you must restart Provider Services for the new cluster definition to take effect. You cannot enable or disable the newly added cluster components until you restart Provider Services.


Disabling Cluster Components


You can disable individual database components in a cluster. For example, you can take the component offline to update the database.

To disable a database component in a cluster, from Administration Services Console:

1. From Enterprise View or a custom view, under the Provider Services node, select a provider.
2. Under the provider node, select the Essbase Clusters node.
3. Under the Essbase Clusters node, select a cluster.
4. Right-click, and select Edit.
5. For each component to be disabled, in the Essbase Cluster panel, select the component, and click Disable.
6. Click Close.

Active-Active Essbase Clustering Examples


For simplicity, all examples in this section use Smart View.

Essbase Server Clusters


Provider Services enables you to group sets of Essbase servers running applications with identical databases and use them as one resource.
Note: When adding or deleting an Essbase server in a cluster, restart the server to reflect changes to the group. You can enable or disable components in the group without restarting the server.

Essbase Database Clusters


Clustering Essbase databases enables load balancing and failover support. Provider Services provides parallel clustering, in which a series of active, duplicate databases respond to user requests. Which database is accessed is transparent to users, who connect to and retrieve data from one data source. Provider Services facilitates the routing of connections between databases in a cluster, based on availability and precedence rules.


Figure 3  Essbase Database Clustering with Provider Services

In Figure 3, Smart View users connect to Essbase through Provider Services. Each user connection is assigned to a server during the Essbase session. Provider Services uses session-level load balancing. For example, in Figure 3, User 1's connection is mapped to Data Source A, User 2's connection is mapped to Data Source B, and User 3's connection is mapped to Data Source C. All requests from User 1 are handled by Data Source A for the duration of the connection. If Data Source A fails:
- User 1 times out at Data Source A.
- User 1 is rerouted to the next available data source, which is Data Source C in Figure 4.

Figure 4 illustrates what happens when Data Source A goes offline.


Figure 4  Database Cluster with One Data Source Offline

In Figure 4, the state of query 1 is maintained at the middle tier and rerouted. Provider Services also provides load balancing across servers. Figure 5 depicts clustered databases deployed on one server.
Figure 5 Essbase Database Cluster on One Server


In Figure 5, two servers contain Essbase databases. Server 1 has four processors and 8 GB of RAM. Server 2 has eight processors and 16 GB of RAM. Because Server 2 has more resources, it contains Data Sources B and C. Therefore, Server 2 can handle both connections. Failover support also applies for database clusters on one server. In Figure 6, Server 2 goes offline. User 2 and User 3 are then rerouted to the next available server, Server 1.
Figure 6 Failover for Database Cluster on One Server

Connections to Essbase Clusters


Essbase clients and servers can connect to an Essbase cluster by way of a URL in this format:
http(s)://host:port/aps/Essbase?ClusterName=clusterName.
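For example, assuming a Provider Services host named apshost listening on port 13080 and a cluster named EastCoastSales (all names and the port are illustrative assumptions), the connection URL would look like this:

  http://apshost:13080/aps/Essbase?ClusterName=EastCoastSales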

You can also connect to an Essbase cluster using only the cluster name, but you must first enable this by modifying a configuration file to specify the Provider Services server that resolves the cluster name in the URL. The Provider Services server is specified in these configuration files:
- For server-to-server communication: essbase.cfg. Use this format:

  ApsResolver http(s)://host:port/aps

  You can specify several Provider Services servers in essbase.cfg, using a semicolon (;) between server names.

- For client-to-server communication: essbase.properties. Use this format:

  ApsResolver=http(s)://host:port/aps
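For illustration, assuming two Provider Services servers named apshost1 and apshost2 listening on port 13080 (hypothetical names and port), the entries would look like this:

  In essbase.cfg (server-to-server):
  ApsResolver http://apshost1:13080/aps;http://apshost2:13080/aps

  In essbase.properties (client-to-server):
  ApsResolver=http://apshost1:13080/aps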


To connect to a Provider Services Essbase cluster using Financial Reporting or Web Analysis, you must configure Financial Reporting or Web Analysis for three-tier mode.

To configure Financial Reporting for three-tier mode:

1. Start MIDDLEWARE_HOME/EPMSystem11R1/products/financialreporting/bin/FRConfig.cmd.
2. Specify the EssbaseJAPIServer as the Provider Services server.
3. Restart Financial Reporting, and enter the Provider Services cluster name as the Server Name.

To configure Web Analysis for three-tier mode:

1. Log on to EPM Workspace as an admin user.
2. Select Navigate, then Administer, then Reporting and Analysis, and then Web Applications.
3. Right-click WebAnalysis Web-Application, and select Properties.
4. On the Essbase Configuration tab, set these properties:
   - EESEmbeddedMode=false (The default setting is true.)
   - EESServerName=Provider Services server name (The default setting is localhost.)
5. Click OK, and restart the Web Analysis server for changes to take effect.


Chapter 6. Reporting and Analysis Services Clustering

In This Chapter:
- Reporting and Analysis Configuration for a Distributed Environment
- Clustering Reporting and Analysis Framework Services and Common Libraries
- Clustering GSM
- Clustering Interactive Reporting Services
- Financial Reporting Print Server Clusters

This chapter discusses clustering Reporting and Analysis services-tier components outside EPM System Configurator. See Chapter 8, Clustering EPM System Web Applications, for information about clustering Reporting and Analysis Web applications through EPM System Configurator.

Reporting and Analysis Configuration for a Distributed Environment


Considerations if you are installing Reporting and Analysis in a distributed environment:
- Install only one instance of Reporting and Analysis Framework services and Interactive Reporting services on each host, and run EPM System Configurator on each machine. You can then use EPM Workspace to replicate services on each host. Each instance is part of the cluster and is used for load balancing and high availability. See Clustering Reporting and Analysis Framework Services and Common Libraries on page 46.

- The GSM and ServiceBroker services must be enabled on all instances of the Reporting and Analysis services for high availability of Reporting and Analysis. By default, the GSM and ServiceBroker services are enabled only on the first instance of the Reporting and Analysis services.

  Note: Clustering without high availability or failover does not require that the GSM and Service Broker services be enabled on all instances.

- If you are running multiple instances of the Reporting and Analysis Repository Service, all instances should share the file system location. Specify the file system location during configuration with EPM System Configurator, on the Configure Reporting and Analysis Framework Services page, or with the Administer section of EPM Workspace. If you are running this service as a Windows service, use a UNC path instead of a mapped drive. This prevents potential permissions errors that can occur when Windows attempts to create a mapped drive at startup. See Configure Reporting and Analysis Framework Services in Chapter 4, Configuring EPM System Products, of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

  On Windows platforms, when replicating common Reporting and Analysis services and using the network shared folder for the repository location, run the Reporting and Analysis agent Windows service under a user account with sufficient privileges for the network shared folder (not under a Local System account).

- For the Financial Reporting Web application, you can have only one active instance of the Scheduler component in a clustered environment.

- Use the same path to MIDDLEWARE_HOME on all machines. (Otherwise, multiple Reporting and Analysis nodes are displayed in Shared Services.)

For more information about configuring Reporting and Analysis for a distributed environment, see the Hyperion Reporting and Analysis Framework Administrator's Guide.

Clustering Reporting and Analysis Framework Services and Common Libraries


You can cluster Reporting and Analysis Framework Services and Common Libraries by using EPM Workspace to configure multiple instances of a service on a computer.

To cluster Reporting and Analysis Framework Services and Common Libraries:

1. Log on to EPM Workspace as an administrator.
2. Select Navigate, then Administer, then Reporting and Analysis, and then Services.
3. Right-click an agent for a Reporting and Analysis Framework service, and select Copy.
4. Enter a name and port range for the new configuration, and then click OK.

Clustering GSM
You can cluster GSM after installing and configuring Reporting and Analysis on two machines.

To cluster GSM:

1. On the second machine where you have configured Reporting and Analysis, log on to EPM Workspace.
2. Select Navigate, then Reporting and Analysis, and then Services.
3. Right-click Reporting and Analysis, and then select Properties.
4. On the Services tab, set GSM to Enabled.


Clustering Interactive Reporting Services


You can cluster Interactive Reporting services through EPM Workspace to create multiple instances of a service on a computer.

To cluster Interactive Reporting services:

1. Log on to EPM Workspace as an administrator.
2. Select Navigate, then Administer, then Reporting and Analysis, and then Services.
3. Right-click an agent for an Interactive Reporting service, and select Copy.
4. Enter a new port range, and then click OK.

Financial Reporting Print Server Clusters


You can deploy the Financial Reporting Print Server in an active-active configuration, with one installation on each machine. No manual steps are required to achieve load balancing and failover, but you must configure and register Financial Reporting Print Server manually before you set up clustering.

To configure and register the Financial Reporting Print Server:

1. From a command line, navigate to Financial_Reporting_Studio_Installation_Directory/products/financialreporting/install/bin, and open FRSetupPrintServer.properties in a text editor.
   The default installation directory for Financial Reporting Studio is c:/Program Files/Oracle/FinancialReportingStudio.
2. Specify the Financial Reporting Server URL and the administrator credentials used to register the Financial Reporting Print Server (an illustrative example follows this procedure):
   - FRWebServer=http://server:port
     Specify the same server URL that is used for connecting from Financial Reporting Studio, and ensure that the server is running.
   - AdminUser=user name
   - AdminPassword=password
3. From a command line, navigate to Financial_Reporting_Studio_Installation_Directory/products/financialreporting/install/bin, and run this command:
   FRSetupPrintServer.cmd
4. Ensure that the Financial Reporting Print Server service has been created and started.

Note: You might need to start the service manually the first time.
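As an illustration only, a completed FRSetupPrintServer.properties might contain entries like the following; the host name, port, and credentials are assumptions that you must replace with the values for your deployment:

  FRWebServer=http://frhost.example.com:8200
  AdminUser=admin
  AdminPassword=password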



Chapter 7. Data Management Services Clustering

In This Chapter:
- FDM Clusters
- Data Relationship Management Clusters

This chapter discusses clustering Data Management product components outside EPM System Configurator. See Chapter 8, Clustering EPM System Web Applications, for information about clustering ERP Integrator, which is done through EPM System Configurator.

FDM Clusters
FDM Application Server can be clustered with the FDM proprietary load balancer. For instructions on configuring the load balancer, see the Oracle Hyperion Financial Data Quality Management, Fusion Edition Configuration Guide.

You can set up Oracle HTTP Server as a load balancer for FDM IIS Web applications. For instructions, see Load Balancing Financial Management or FDM Web Applications on IIS in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

Using EPM System Configurator, you can cluster the FDM Web application for high availability with either Oracle HTTP Server or a third-party load balancer. For instructions, see Load Balancing Financial Management or FDM Web Applications on IIS in Chapter 4, Configuring EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

For instructions on clustering the FDM relational database, see the documentation for the database software. Figure 7 shows a scenario with an FDM relational database clustered for failover and high availability on proprietary EPM System application servers.


Figure 7  FDM Clustered for Failover and High Availability

Data Relationship Management Clusters


You can cluster Data Relationship Management Web applications with either Oracle HTTP Server or third-party load balancers. For instructions on clustering with Oracle HTTP Server, see Configuring Load Balancing for Data Relationship Management Web Applications in the Oracle Hyperion Data Relationship Management Installation Guide. Data Relationship Management Server applications can be clustered for load-balancing only, using a primary-secondary machine configuration. Long-running read-only operations can be processed on secondary application servers, to reduce the processing load on the primary application server that is handling write operations. For instructions on configuring Data Relationship Management Server applications for load-balancing, see Configuring Host Machines in the Oracle Hyperion Data Relationship Management Installation Guide.


Note: The processing of requests by application servers may not be distributed evenly among the machines in the cluster. Routing to a specific machine is based on the data being accessed and the type of operation being performed.

With Data Relationship Management installed in a clustered database environment, you can select Generate scripts to be run by a database administrator when creating a database from the Repository Wizard in the Data Relationship Management Configuration Console. Two scripts are generated: one for creating the schema owner, or database, and one for creating the database schema objects. For instructions on clustering the Data Relationship Management repository, see the documentation for the database software being used.



Chapter 8. Clustering EPM System Web Applications

In This Chapter:
- Prerequisites
- Clustering Web Applications in a Manual Deployment

This chapter assumes that you are familiar with WebLogic administration and clustering. If you are unfamiliar with these tasks, Oracle urges you to seek technical assistance before attempting to cluster an EPM System Web application.

Prerequisites
Note: The information in this section assumes that you have installed your Web applications on each node to be included in the cluster, using procedures provided in Chapter 3, Installing EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

Complete these tasks before setting up a cluster for an EPM System Web application:

- Enable either session persistence or sticky sessions (which direct all requests for a specific session to the same server) on the load balancer.

- Ensure that all the computers to be included in the cluster use either Windows or UNIX, but not both.

- Install the EPM System product on each node that the cluster will include. Install to the same file system location on each machine. Using the same file system path on each physical machine in a cluster is important so that these environment variables can be set once for the entire cluster, rather than set and customized for each node in the cluster (a brief sketch follows this list):
  - All OS: CLASSPATH and PATH
  - UNIX: LD_LIBRARY_PATH, LIBPATH, or SHLIB_PATH

- For information about additional requirements, see these sections in Chapter 3, Installing EPM System Products, in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide:
  - Installing EPM System Products in a Distributed Environment


  - Configuring EPM System Products in a Distributed Environment
  - Configuring Products in a Clustered Environment
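For example, on UNIX nodes you might set the shared variables once in a profile that is identical on every machine in the cluster. The paths below are assumptions for illustration only:

  # Same values on every node in the cluster (assumed install location)
  EPM_ORACLE_HOME=/u01/Oracle/Middleware/EPMSystem11R1
  export EPM_ORACLE_HOME
  export PATH=$EPM_ORACLE_HOME/bin:$PATH
  export LD_LIBRARY_PATH=$EPM_ORACLE_HOME/bin:$LD_LIBRARY_PATH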

Clustering Web Applications in a Manual Deployment


You can cluster a manually deployed Web application using WebLogic. This section provides a general overview of clustering Web applications. See the WebLogic documentation for more details on this procedure. For information about setting up load balancing for a Financial Management or FDM Web application, see Load Balancing Financial Management or FDM Web Applications on IIS in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.
Note: If you deployed Web applications using EPM System Configurator, EPM System Configurator creates the cluster and adds servers to the cluster. You need not perform additional tasks in WebLogic. See Clustering Web Applications in the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

To cluster Web applications:


1. Start the WebLogic Administration Console.
2. If you manually deployed the Web applications, in the Domain Structures pane, click Clusters and create a cluster.
   If you deployed the Web applications with EPM System Configurator and clicked Setup to specify the logical address for the Web application, this step is not necessary, because EPM System Configurator created the cluster for you.
3. If you manually deployed the Web applications, select the cluster, click the HTTP tab, and for Frontend Host, enter the host name and port of the load balancer.
   If you deployed the Web applications with EPM System Configurator and clicked Setup to specify the logical address for the Web application, this step is not necessary, because EPM System Configurator entered this information during configuration.
4. Click the Servers tab, click Add, and on the Add a Server to Cluster page, select a server from the list, and then click Finish.
5. Click the Deployments tab, select an EPM System Web application, click the Targets tab, and for the cluster this Web application is deployed to, select All Servers in the Cluster.
   Repeat this step for all EPM System Web applications. In a distributed environment, the Node Manager propagates changes to all the machines in the cluster.
6. To add another server to the cluster to scale out the deployment:
   a. Select the server, and select Clone.
   b. Select the server that you just cloned, and change the machine on which the server is running.

54

Clustering EPM System Web Applications

7. Repeat step 2 through step 6 as needed.
8. Start the servers from the WebLogic Administration Console.
9. Launch EPM System Configurator, and perform the Configure Web Server task.
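If you prefer scripting over the Administration Console, the cluster creation and server assignment in steps 2 and 4 can also be done with WLST. The sketch below is illustrative only and assumes example names (EPMCluster, FoundationServices1) and an example Administration Server URL; it is not part of the documented procedure:

  connect('weblogic', 'password', 't3://adminhost:7001')  # assumed credentials and URL
  edit()
  startEdit()
  # Create the cluster (skip if EPM System Configurator already created one)
  cd('/')
  cmo.createCluster('EPMCluster')
  # Assign an existing managed server to the cluster
  cd('/Servers/FoundationServices1')
  cmo.setCluster(getMBean('/Clusters/EPMCluster'))
  save()
  activate()
  disconnect()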



Appendix A. Additional Information

For more information about installing, configuring, and using these Oracle Hyperion Enterprise Performance Management System products, see the product guides in the Oracle Documentation Library (http://www.oracle.com/technology/documentation/epm.html) on the Oracle Technology Network.

- Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition; Oracle's Hyperion Enterprise Performance Management System Configurator; Oracle's Hyperion Enterprise Performance Management System Diagnostics; Oracle's Hyperion Shared Services Registry
- Foundation Services
  - Oracle's Hyperion Foundation Services (includes Oracle's Hyperion Shared Services, Oracle Hyperion Enterprise Performance Management System Lifecycle Management, and Oracle Enterprise Performance Management Workspace, Fusion Edition)
  - Oracle HTTP Server
  - Oracle WebLogic Server
  - Oracle Hyperion EPM Architect, Fusion Edition
  - Hyperion Calculation Manager
  - Oracle Hyperion Smart View for Office, Fusion Edition
- Essbase
  - Oracle Essbase Server
  - Oracle Essbase Administration Services
  - Oracle Essbase Integration Services
  - Oracle Hyperion Provider Services
  - Oracle Essbase Studio
- Oracle's Hyperion Reporting and Analysis
  - Oracle's Hyperion Reporting and Analysis Framework
  - Oracle's Hyperion Interactive Reporting
  - Oracle Hyperion Financial Reporting, Fusion Edition
  - Oracle's Hyperion SQR Production Reporting
  - Oracle's Hyperion Web Analysis
- Oracle's Hyperion Financial Performance Management Applications
  - Oracle Hyperion Planning, Fusion Edition
  - Oracle Hyperion Financial Management, Fusion Edition
  - Oracle Hyperion Performance Scorecard, Fusion Edition
  - Oracle Hyperion Profitability and Cost Management, Fusion Edition
  - Oracle Hyperion Disclosure Management
  - Oracle Hyperion Financial Close Management
- Data Management Products
  - Oracle Hyperion Financial Data Quality Management, Fusion Edition
  - Oracle Hyperion Financial Data Quality Management ERP Integration Adapter for Oracle Applications
  - Oracle Hyperion Data Relationship Management, Fusion Edition

Glossary

active-active high availability system: A system in which all the available members can service requests, and no member is idle. An active-active system generally provides more scalability options than an active-passive system. Contrast with active-passive high availability system.

active-passive high availability system: A system with active members, which are always servicing requests, and passive members that are activated only when an active member fails. Contrast with active-active high availability system.

application server cluster: A loosely joined group of application servers running simultaneously, working together for reliability and scalability, and appearing to users as one application server instance. See also vertical application cluster and horizontal application cluster.

assemblies: Installation files for EPM System products or components.

asymmetric topology: An Oracle Fusion Middleware Disaster Recovery configuration that is different across tiers on the production site and standby site. For example, an asymmetric topology can include a standby site with fewer hosts and instances than the production site.

backup: A duplicate copy of an application instance.

cluster: An array of servers or databases that behave as a single resource which share task loads and provide failover support; eliminates one server or database as a single point of failure in a system.

cluster interconnect: A private link used by a hardware cluster for heartbeat information, to detect node failure.

cluster services: Software that manages cluster member operations as a system. With cluster services, you can define a set of resources and services to monitor through a heartbeat mechanism between cluster members and to move these resources and services to a different cluster member as efficiently and transparently as possible.

Disaster Recovery: The ability to safeguard against natural or unplanned outages at a production site by having a recovery strategy for applications and data to a geographically separate standby site.

EPM Oracle home: A subdirectory of Middleware home containing the files required by EPM System products. The EPM Oracle home location is specified during installation with EPM System Installer.

EPM Oracle instance: A directory containing active, dynamic components of EPM System products (components that can change during run-time). You define the EPM Oracle instance directory location during configuration with EPM System Configurator.

external authentication: Logging on to Oracle EPM System products with user information stored outside the application. The user account is maintained by the EPM System, but password administration and user authentication are performed by an external service, using a corporate directory such as Oracle Internet Directory (OID) or Microsoft Active Directory (MSAD).

failover: The ability to switch automatically to a redundant standby database, server, or network if the primary database, server, or network fails or is shut down. A system that is clustered for failover provides high availability and fault tolerance through server redundancy and fault-tolerant hardware, such as shared disks.

hardware cluster: A collection of computers that provides a single view of network services (for example, an IP address) or application services (such as databases and Web servers) to clients of these services. Each node in a hardware cluster is a standalone server that runs its own processes. These processes can communicate with one another to form what looks like a single system that cooperatively provides applications, system resources, and data to users.


high availability: A system attribute that enables an application to continue to provide services in the presence of failures. This is achieved through removal of single points of failure, with fault-tolerant hardware, as well as server clusters; if one server fails, processing requests are routed to another server.

horizontal application server cluster: A cluster with application server instances on different machines.

identity: A unique identification for a user or group in external authentication.

installation assemblies: Product installation files that plug in to EPM System Installer.

Java application server cluster: An active-active application server cluster of Java Virtual Machines (JVMs).

lifecycle management: The process of migrating an application, a repository, or individual artifacts across product environments.

load balancer: Hardware or software that directs the requests to individual application servers in a cluster and is the only point of entry into the system.

load balancing: Distribution of requests across a group of servers, which helps to ensure optimal end user performance.

locale: A computer setting that specifies a location's language, currency and date formatting, data sort order, and the character set encoding used on the computer. Essbase uses only the encoding portion. See also encoding, ESSLANG.

logical Web application: An aliased reference used to identify the internal host name, port, and context of a Web application. In a clustered or high-availability environment, this is the alias name that establishes a single internal reference for the distributed components. In EPM System, a nonclustered logical Web application defaults to the physical host running the Web application.

managed server: An application server process running in its own Java Virtual Machine (JVM).

Middleware home: A directory that includes the Oracle WebLogic Server home and can also include the EPM Oracle home and other Oracle homes. A Middleware home can reside on a local file system or on a remote shared disk that is accessible through NFS.

migration: The process of copying applications, artifacts, or users from one environment or computer to another; for example, from a testing environment to a production environment.

migration log: A log file that captures all application migration actions and messages.

migration snapshot: A snapshot of an application migration that is captured in the migration log.

native authentication: The process of authenticating a user name and password from within the server or application.

Oracle home: A directory containing the installed files required by a specific product, and residing within the directory structure of Middleware home. See also Middleware home.

permission: A level of access granted to users and groups for managing data or other users and groups.

provisioning: The process of granting users and groups specific access permissions to resources.

proxy server: A server acting as an intermediary between workstation users and the Internet to ensure security.

relational database: A type of database that stores data in related two-dimensional tables. Contrast with multidimensional database.

repository: Storage location for metadata, formatting, and annotation information for views and queries.

restore: An operation to reload data and structural information after a database has been damaged or destroyed, typically performed after shutting down and restarting the database.

role: The means by which access permissions are granted to users and groups for resources.

security agent: A Web access management provider (for example, Oracle Access Manager, Oracle Single Sign-On, or CA SiteMinder) that protects corporate Web resources.

security platform: A framework enabling Oracle EPM System products to use external authentication and single sign-on.

shared disks: See shared storage.


Shared Services Registry: The part of the Shared Services repository that manages EPM System deployment information for most EPM System products, including installation directories, database settings, computer names, ports, servers, URLs, and dependent service data.

shared storage: A set of disks containing data that must be available to all nodes of a failover cluster; also called shared disks.

silent response files: Files providing data that an installation administrator would otherwise be required to provide. Response files enable EPM System Installer or EPM System Configurator to run without user intervention or input.

single point of failure: Any component in a system that, if it fails, prevents users from accessing the normal functionality.

single sign-on (SSO): The ability to log on once and then access multiple applications without being prompted again for authentication.

symmetric topology: An Oracle Fusion Middleware Disaster Recovery configuration that is identical across tiers on the production site and standby site. In a symmetric topology, the production site and standby site have the identical number of hosts, load balancers, instances, and applications. The same ports are used for both sites. The systems are configured identically and the applications access the same data.

token: An encrypted identification of one valid user or group on an external authentication system.

upgrade: The process of deploying a new software release and moving applications, data, and provisioning information from an earlier deployment to the new deployment.

user directory: A centralized location for user and group information, also known as a repository or provider. Popular user directories include Oracle Internet Directory (OID), Microsoft Active Directory (MSAD), and Sun Java System Directory Server.

vertical application server cluster: A cluster with multiple application server instances on the same machine.

WebLogic Server home: A subdirectory of Middleware home containing installed files required by a WebLogic Server instance. WebLogic Server home is a peer of Oracle homes.

