
HYPERION SYSTEM 9 BI+

WORKSPACE
RELEASE 9.2

ADMINISTRATOR'S GUIDE

Copyright © 1989–2006 Hyperion Solutions Corporation. All rights reserved. Hyperion, the Hyperion logo, and Hyperion's product names are trademarks of Hyperion. References to other companies and their products use trademarks owned by the respective companies and are for reference purposes only. No portion hereof may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or information storage and retrieval systems, for any purpose other than the recipient's personal use, without the express written permission of Hyperion. The information contained herein is subject to change without notice. Hyperion shall not be liable for errors contained herein or consequential damages in connection with the furnishing, performance, or use hereof. Any Hyperion software described herein is licensed exclusively subject to the conditions set forth in the Hyperion license agreement. Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the applicable Hyperion license agreement and as provided in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (Oct 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14, as applicable.

Hyperion Solutions Corporation
5450 Great America Parkway
Santa Clara, California 95054

Printed in the U.S.A.

Contents

Preface ... xvii
    Purpose ... xvii
    Audience ... xviii
    Document Structure ... xviii
    Where to Find Documentation ... xix
    Help Menu Commands ... xx
    Conventions ... xxi
    Additional Support ... xxii
        Education Services ... xxii
        Consulting Services ... xxii
        Technical Support ... xxii
    Documentation Feedback ... xxii

PART I  Administering Workspace ... 23

CHAPTER 1  Hyperion System 9 BI+ Architecture Overview ... 25
    About Hyperion System 9 ... 26
    About Hyperion System 9 BI+ Reporting Solution ... 26
    Hyperion System 9 BI+ Reporting Solution Architecture ... 27
        Client Layer ... 27
        Application Layer ... 29
        Database Layer ... 35

CHAPTER 2  Administration Tools and Tasks ... 37
    Understanding Hyperion Home and Install Home ... 38
    Administration Tools ... 38
        Administer Module ... 38
        Impact Manager Module ... 39
        Job Utilities Calendar Manager ... 39
        Service Configurators ... 39
        Servlet Configurator ... 40
    Starting and Stopping Services ... 40
        Before Starting Services ... 41
        Starting Core Services ... 41
        Starting a Subset of Services ... 42
        Starting Services and server.dat ... 43
        Starting Services Individually ... 43
        Starting Services in Order ... 45
        Stopping Services ... 46
        Example of How Services Start ... 47
    Changing Service Port Assignments ... 47
    Starting Workspace Servlet ... 47
    Implementing Process Monitors ... 48
        Configuring Process Monitors ... 48
        Starting Services with Process Monitors ... 50
    Quick Guide to Common Administrative Tasks ... 51

CHAPTER 3  Administer Module ... 53
    Overview ... 54
    Setting General Properties ... 54
        General Properties ... 55
        User Interface Properties ... 55
    Managing Users ... 55
    Assigning Hyperion System 9 BI+ Default Preferences ... 55
    Managing Physical Resources ... 56
        Viewing Physical Resources ... 57
        Access Control for Physical Resources ... 57
        Adding Physical Resources ... 57
        Modifying Physical Resources ... 57
        Deleting Physical Resources ... 58
        Printer Properties ... 58
        Output Directory Properties ... 58
    Managing MIME Types ... 59
        Defining MIME Types ... 59
        Modifying MIME Types ... 59
        Inactivating or Re-activating MIME Types ... 60
        Deleting MIME Types ... 60
    Managing Notifications ... 60
        Understanding Subscriptions and Notifications ... 61
        Modifying Notification Properties ... 62
    Managing SmartCuts ... 63
    Managing Row-Level Security ... 64
    Tracking System Usage ... 65
        Managing Usage Tracking ... 66
        Tracking Events and Documents ... 66
        Sample Usage Tracking Reports ... 67

CHAPTER 4  Using Impact Management Services ... 69
    About Impact Management Services ... 70
    Impact Management Assessment Services ... 70
        About Impact Management Metadata ... 70
        The Metadata Service ... 70
    Impact Management Update Services ... 71
    Running the Update Services ... 72
    Update Data Model Transformation ... 72
        Link Between Data Models and Queries ... 72
    Access to Impact Management Services ... 73
    Synchronize Metadata Feature ... 73
        Using the Run Now Option ... 74
        Using the Schedule Option ... 74
    Update Data Model Feature ... 75
        Specifying a Data Model ... 75
        Viewing Candidates to Update ... 76
        Reviewing the Confirmation Dialog Box ... 77
    Accessing Updated Documents ... 78
    Connecting Interactive Reports ... 78
        Step 1: Configuring the Hyperion Interactive Reporting Data Access Service ... 78
        Step 2: Creating Interactive Reporting Database Connections ... 78
        Step 3: Importing Interactive Reporting Database Connections into Workspace ... 79
        Step 4: Associating Interactive Reporting Database Connections with Interactive Reports ... 79
    Using Show Task Status Interactive Report ... 80
    Using Show Impact of Change Interactive Report ... 82
    Creating the New Data Model ... 84
        Renaming Tables or Columns ... 84
        Using Normalized and Denormalized Data Models ... 88
        Deleting Columns ... 90
        Changing Column Data Types ... 96
    Changing User IDs and Passwords for Interactive Reporting Documents ... 97
    Service Configuration Parameters ... 98

CHAPTER 5  Managing Shared Services Models ... 99
    Overview ... 100
    About Models ... 100
    Prerequisites ... 100
    Registering Applications ... 100
    About Managing Models ... 101
    About Sharing Metadata ... 101
    About Sharing Data ... 101
    Working with Applications ... 102
        Working with Private Applications ... 102
        Working with Shared Applications ... 103
        Managing Applications for Metadata Synchronization ... 104
    Working with Models ... 106
        Synchronizing Models and Folders ... 108
        Sync Operations ... 110
        Model Naming Restrictions ... 112
        Comparing Models ... 112
        Compare Operations ... 113
        Viewing and Editing Model Content ... 115
        Renaming Models ... 119
        Sharing Models ... 120
        Filtering the Content of Models ... 122
        Tracking Model History ... 125
        Managing Permissions to Models ... 126
        Viewing and Setting Model Properties ... 131
    Sharing Data ... 133
        Prerequisites for Moving Data Between Applications ... 134
        Assigning Access to Integrations ... 134
        Accessing Data Integration Functions ... 134
        Filtering Integration Lists ... 135
        Creating or Editing a Data Integration ... 137
        Deleting Integrations ... 144
        Scheduling Integrations ... 145
        Managing Scheduled Integrations ... 146
        Grouping Integrations ... 149

CHAPTER 6  Automating Activities ... 153
    Managing Calendars ... 154
        Viewing Calendar Manager ... 154
        Creating Calendars ... 154
        Deleting Calendars ... 155
        Modifying Calendars ... 155
        Calendar Manager Properties ... 155
        Viewing the Job Log ... 156
        Deleting Job Log Entries ... 157
    Managing Time Events ... 158
        Managing Public Recurring Time Events ... 158
        Creating Externally Triggered Events ... 158
        Triggering Externally Triggered Events ... 159
    Administering Public Job Parameters ... 159
    Managing Interactive Reporting Database Connections ... 159
    Managing Pass-Through for Jobs and Interactive Reporting Documents ... 160
    Managing Job Queuing ... 160
        Scheduled Jobs ... 160
        Background Jobs ... 161
        Foreground Jobs ... 161

CHAPTER 7  Administering Content ... 163
    Organizing Items and Folders ... 164
    Administrating Pushed Content ... 164
    Administering Personal Pages ... 164
        Configuring the Generated Personal Page ... 165
        Understanding Broadcast Messages ... 166
        Providing Optional Personal Page Content to Users ... 168
        Displaying HTML Files as File Content Windows ... 168
        Configuring Graphics for Bookmarks ... 168
        Configuring Exceptions ... 169
        Viewing Personal Pages ... 169
        Publishing Personal Pages ... 169
        Configuring Other Personal Pages Properties ... 169

CHAPTER 8  Configuring RSC Services ... 171
    About RSC ... 172
        Starting RSC ... 172
        Logging On to RSC ... 172
        Using RSC ... 173
    Managing Services ... 174
        Adding RSC Services ... 174
        Deleting RSC Services ... 174
        Pinging RSC Services ... 175
    Modifying RSC Service Properties ... 175
        Common RSC Properties ... 176
        Job Service Properties ... 178
    Managing Hosts ... 182
        Adding Hosts ... 182
        Modifying Hosts ... 183
        Deleting Hosts ... 183
    Managing Repository Databases ... 183
        Defining Database Servers ... 184
        Changing the Services Repository Database Password ... 187
        Changing the Repository Database Driver or JDBC URL ... 187
    Managing Jobs ... 189
        Optimizing Enterprise-Reporting Applications Performance ... 189
        From Adding Job Services to Running Jobs ... 190
    Using the ConfigFileAdmin Utility ... 190
        About config.dat ... 191
        Modifying config.dat ... 192
    Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output ... 193
    Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled ... 193

CHAPTER 9  Configuring LSC Services ... 195
    About LSC ... 196
        Starting LSC ... 197
        Using LSC ... 197
    Modifying LSC Service Properties ... 198
        Common LSC Properties ... 198
        Assessment and Update Services Properties ... 199
        Hyperion Interactive Reporting Service Properties ... 199
        Hyperion Interactive Reporting Data Access Service ... 201
    Modifying Host Properties ... 203
        Host General Properties ... 203
        Host Database Properties ... 204
        Host Shared Services Properties ... 205
        Host Authentication Properties ... 205
    Modifying Properties in portal.properties ... 206

CHAPTER 10  Configuring the Servlets ... 207
    Using Servlet Configurator ... 208
    Modifying Properties with Servlet Configurator ... 209
        User Interface Properties ... 209
        Personal Pages Properties ... 213
        Internal Properties ... 215
        Cache Properties ... 216
        Diagnostics Properties ... 218
        Applications Properties ... 218
    Zero Administration and Interactive Reporting ... 220
        6x Server URL Mapping ... 220
        Client Processing ... 221
    Load Testing Interactive Reporting ... 221
        Data Access Servlet Property ... 222
        Hyperion Interactive Reporting Data Access Service Property ... 222
        Hyperion Interactive Reporting Service Property ... 222

CHAPTER 11  Troubleshooting ... 223
    Logging Architecture ... 224
        Log4j ... 224
        Logging Service ... 224
        Log Management Helper ... 224
        Server Synchronization ... 225
    Log File Basics ... 225
        Log File Location ... 225
        Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service Local Log Files ... 225
        Log File Naming Convention ... 226
        Log Message File Format ... 227
        Configuration Log ... 228
    Configuring Log Properties for Troubleshooting ... 228
        Configuration Files ... 228
        Configuring Logging Levels ... 229
        Configuring Appenders ... 230
        Configuring Log Rotation ... 231
    Analyzing Log Files ... 233
        Viewing Log Files ... 233
        Standard Console Log File ... 234
        Logs for Importing General Content ... 234
        Logs for Importing Interactive Reporting Content ... 234
        Logs for Running Jobs ... 234
        Logs for Logon and Logoff Errors ... 235
        Logs for Access Control ... 235
        Logs for Configuration ... 236
    Information Needed by Customer Support ... 236

PART II  Administering Enterprise Metrics ... 237

CHAPTER 12  Understanding Enterprise Metrics ... 239
    Enterprise Metrics Components ... 240
    Metrics and Configuration Environments ... 240
    Database Overview ... 241
        Application Data ... 241
        Catalogs ... 242
    Enterprise Metrics Servers ... 242
    Servlets ... 244
    Clients and Tools ... 244
    Implementation and Administration Process Overview ... 245
        Installation ... 245
        Implementation ... 245
        Administration ... 246
        Troubleshooting ... 246

CHAPTER 13  Enterprise Metrics Security ... 247
    Provisioning Users and Groups to Access Enterprise Metrics ... 248
    Using Analytic Services Security ... 248
        Supported Security Rule Sets in Enterprise Metrics ... 249
        Granting Data Security in Enterprise Metrics ... 249
        Enabling Analytic Services Data Security ... 250
    About Database Security ... 250
    About Application-Level Security ... 251
        Authorization ... 251
        Data Level Security ... 251

CHAPTER 14  Supporting Clips in Enterprise Metrics ... 253
    Overview ... 254
    Authentication and Authorization Requirement ... 254
    Preference Settings Requirement ... 255

CHAPTER 15  Enterprise Metrics Server Administration ... 257
    Administration Overview ... 258
    Launching the Server Console ... 258
    Monitoring Server Statistics ... 259
    Shutting Down the Server ... 260
    Restarting the Server ... 260
    Viewing the Server Log ... 261
    Monitoring Server Settings ... 263
        Changing Server Settings ... 264
        Setting Passwords ... 264
    Exporting Settings to Preference Files ... 264
    Monitoring Users ... 265
    Exiting the Server Console ... 265

CHAPTER 16  Enterprise Metrics Load Support Programs ... 267
    Load Process Overview ... 268
    Scheduling the Load Support Programs ... 269
    Preference File Settings ... 269
    BeginLoad Program ... 271
    FinishLoad Program ... 271
    Publish Program ... 273
    Processed Enrichment Overview ... 273
        Roles ... 273
        Enrichment Process ... 274
    Enrichment Versus ETL ... 276
    Enrich Program ... 277
    Failure During Enrichment Job Processing ... 278
    Studio Utilities in Stand-alone Mode ... 279
        Responding to a Finish Load Failure ... 280
        Viewing Catalog Metadata ... 280
        Running the Studio Utilities in Stand-alone Mode ... 281
    Reviewing the Load Support Logs ... 283
        mb.Loads.log ... 283
        mb.Publish.log ... 284
        mb.Enrich.log ... 285

CHAPTER 17  Troubleshooting Enterprise Metrics ... 287
    Using Log Files for Tuning and Troubleshooting ... 288
    Locating and Viewing the Logs ... 288
        Enterprise Metrics Server Logs ... 288
        Tools and Client Logs ... 289
        Servlet Logs ... 289
        Thin Client Logs ... 290
    Understanding Which Logs to View ... 290
    Reading Log Files ... 291
        Log Formats ... 291
        Specific Scenarios and Tips ... 295
    Using the Deployment Logs ... 301
    Using the Metadata Export Utility ... 301
        Metadata Export Utility Files ... 302
        Configuring the Metadata Export Utility ... 305
        Running the Metadata Export Utility ... 305

CHAPTER 18  Evaluating Enterprise Metrics Performance ... 307
    Introduction ... 308
    Statistics Reporting Background ... 308
    Launching the Performance Statistics Utility ... 309
    Understanding the Enterprise Metrics Performance Statistics Utility ... 310
        Star Stats Summary Pivot ... 311
        Query Performance Analysis Pivot ... 312
        Query Performance Analysis Over Time Pivot ... 313
        Agg Usage Analysis Pivot ... 313
        User Performance Analysis Pivot ... 314
        Slowest Queries Pivot ... 315
        Query Performance Analysis Over Publish Time Pivot ... 316
        Query Performance Analysis Using Max Start_Time Pivot ... 316
        Query Performance Using Parameter Pivot ... 317
        Hierarchy Levels and Column Reference Pivot ... 317
        Star Supported Levels Reference Pivot ... 318
        Star Levels and Columns Reference Pivot ... 319
        Reference of Bursted Supported Levels Pivot ... 319
        Query Performance with Reject Reason Pivot ... 320
    Using the Performance Statistics Utility to Tune and Troubleshoot ... 321
        Star and Aggregate Performance ... 322
        Slow Queries ... 322
        Needed Versus Supported Levels ... 323
        Carpooling ... 324
        A Star is Picked but Not Used or Rejected ... 324
        Needed Columns and Levels ... 324
        Frequently Used Stars ... 325
        User Complaints ... 326
        Analyze the Performance After Tuning ... 326
    Preference File Settings ... 327

CHAPTER 19  Enterprise Metrics Preference File Settings ... 329
    Overview ... 330
    Metrics_Server.prefs Settings ... 331
    Configuration_Server.prefs Settings ... 346
    Client.prefs Settings ... 347
    Metadata_export.prefs ... 352

PART III  Administering Financial Reporting ... 355

CHAPTER 20  Administrative Tasks for Financial Reporting ... 357
    Deleting User POVs ... 358
    Report Server Tasks ... 359
        Specifying the Maximum Number of Calculation Iterations ... 359
        Log File Output Management ... 359
        Periodic Log File Rolling ... 360
        Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution ... 362
        Accessing Server Components Through a Device that Performs NAT ... 364
        Adding Required Java Arguments on UNIX Systems ... 366
    Analytic Services Ports ... 367
        Differences Between Analytic Services Ports and Connections ... 367
    Scheduler Command Line Interface ... 370
        Creating Batch Input Files ... 370
        Launching Batches from a Command Line ... 371
        Scheduling Batches Using an External Scheduler ... 371
        Encoding Passwords ... 371
        Modifying Attributes ... 372
    Batch Input File XML Tag Reference ... 374
    Setting XBRL Schema Registration ... 377
    RMI Encryption Implementation ... 378

PART IV  Administering Interactive Reporting ... 379

CHAPTER 21  Understanding Connectivity in Interactive Reporting Studio ... 381
    About Connection Files ... 382
    Working with Interactive Reporting Database Connections ... 383
        Creating Interactive Reporting Database Connections ... 383
        Setting Connection Preferences ... 385
        Creating an OLAP Connection File ... 390
        Modifying Interactive Reporting Database Connections ... 391
    Connecting to Databases ... 392
        Monitoring Connections ... 392
        Connecting with a Data Model ... 393
        Connecting Without a Data Model ... 393
        Setting a Default Interactive Reporting Database Connection ... 394
        Logging On Automatically ... 394
    Using the Connections Manager ... 395
        Logging On to a Database ... 395
        Logging Off of a Database ... 396
        Modifying an Interactive Reporting Database Connection Using the Connections Manager ... 396
        Changing Database Password ... 396
    Working with an Interactive Reporting Document and Connecting to a Database ... 397
    Connecting to Web Clients ... 399
    Connecting to Workspace ... 400

CHAPTER 22  Using Metatopics and Metadata in Interactive Reporting Studio ... 401
    About Metatopics and Metadata ... 402
    Data Modeling with Metatopics ... 402
        Creating Metatopics ... 403
        Copying Topic Items to a Metatopic ... 403
        Creating Computed Metatopic Items ... 404
        Customizing or Removing Metatopics and Metatopic Items ... 404
        Viewing Metatopics ... 405
    Metadata in Interactive Reporting Studio ... 405
    Using the Open Metadata Interpreter ... 406
        Accessing the Open Metadata Interpreter ... 406
        Configuring the Open Metadata Interpreter ... 407


CHAPTER 23 Data Modeling in Interactive Reporting Studio . . . . . . . . . . 415
About Data Models . . . . . . . . . . 416
Building a Data Model . . . . . . . . . . 417
Adding Topics to a Data Model . . . . . . . . . . 417
Removing Topics from a Data Model . . . . . . . . . . 417
Understanding Joins . . . . . . . . . . 418
Simple Joins . . . . . . . . . . 419
Cross Joins . . . . . . . . . . 419
Automatically Joining Topics . . . . . . . . . . 420
Specifying an Automatic Join Strategy . . . . . . . . . . 420
Manually Joining Topics . . . . . . . . . . 421
Showing Icon Joins . . . . . . . . . . 421
Specifying Join Types . . . . . . . . . . 422
Removing Joins . . . . . . . . . . 422
Using Defined Join Paths . . . . . . . . . . 423
Using Local Joins . . . . . . . . . . 423
Working with Topics . . . . . . . . . . 427
Changing Topic Views . . . . . . . . . . 428
Modifying Topic Properties . . . . . . . . . . 429
Modifying Topic Item Properties . . . . . . . . . . 430
Restricting Topic Views . . . . . . . . . . 430
Working with Data Models . . . . . . . . . . 431
Changing Data Model Views . . . . . . . . . . 431
Setting Data Model Options . . . . . . . . . . 432
Automatically Processing Queries . . . . . . . . . . 436
Promoting a Query to a Master Data Model . . . . . . . . . . 436
Synchronizing a Data Model . . . . . . . . . . 437
Data Model Menu Command Reference . . . . . . . . . . 438

CHAPTER 24 Managing the Interactive Reporting Studio Document Repository . . . . . . . . . . 439
About the Document Repository . . . . . . . . . . 440
Administering a Document Repository . . . . . . . . . . 440
Creating Repository Tables . . . . . . . . . . 441
Confirming Repository Table Creation . . . . . . . . . . 442
Managing Repository Inventory . . . . . . . . . . 443
Managing Repository Groups . . . . . . . . . . 444
Working with Repository Objects . . . . . . . . . . 445
Uploading Interactive Reporting Documents to the Repository . . . . . . . . . . 445
Modifying Repository Objects . . . . . . . . . . 446
Controlling Document Versions in Interactive Reporting Studio . . . . . . . . . . 448
BRIOCAT2 Document Repository Table . . . . . . . . . . 449
BRIOOBJ2 Document Repository Table . . . . . . . . . . 449


BRIOBRG2 Document Repository Table . . . . . . . . . . 450
BRIOGRP2 Document Repository Table . . . . . . . . . . 450
Controlling Document Versions in Interactive Reporting Web Client . . . . . . . . . . 450

CHAPTER 25 Auditing with Interactive Reporting Studio . . . . . . . . . . 453
About Auditing . . . . . . . . . . 454
Creating an Audit Table . . . . . . . . . . 455
Defining Audit Events . . . . . . . . . . 456
Auditing Keyword Variables . . . . . . . . . . 457
Sample Audit Events . . . . . . . . . . 458

CHAPTER 26 IBM Information Catalog and Interactive Reporting Studio . . . . . . . . . . 459
About the IBM Information Catalog . . . . . . . . . . 460
Registering Documents to the IBM Information Catalog . . . . . . . . . . 460
Defining Properties . . . . . . . . . . 461
Selecting Subject Areas . . . . . . . . . . 461
Administering the IBM Information Catalog . . . . . . . . . . 461
Creating Object Type Properties . . . . . . . . . . 462
Deleting Object Types and Properties . . . . . . . . . . 462
Administering Documents . . . . . . . . . . 463
Setting Up Object Types . . . . . . . . . . 464

CHAPTER 27 Row-Level Security in Interactive Reporting Documents . . . . . . . . . . 465
About Row-Level Security . . . . . . . . . . 466
The Row-Level Security Paradigm . . . . . . . . . . 466
Hyperion System 9 BI+ and Row-Level Security . . . . . . . . . . 466
Row-Level Security Tables . . . . . . . . . . 468
Creating the Row-Level Security Tables . . . . . . . . . . 469
The BRIOSECG Table . . . . . . . . . . 469
The BRIOSECP Table . . . . . . . . . . 471
The BRIOSECR Table . . . . . . . . . . 471
OR Logic Between Groups . . . . . . . . . . 473
Row-Level Security Examples . . . . . . . . . . 474
Defining the Users and Groups . . . . . . . . . . 475
Dealing with "The Rest of the Users" . . . . . . . . . . 476
Overriding Constraints . . . . . . . . . . 476
Cascading Restrictions . . . . . . . . . . 477
Other Important Facts . . . . . . . . . . 479
Custom SQL . . . . . . . . . . 479
Limits . . . . . . . . . . 479
Naming . . . . . . . . . . 480


CHAPTER 28 Troubleshooting Interactive Reporting Studio Connectivity . . . . . . . . . . 483
Connectivity Troubleshooting with dbgprint . . . . . . . . . . 484
dbgprint and Interactive Reporting Studio . . . . . . . . . . 484
dbgprint and the Interactive Reporting Web Client . . . . . . . . . . 485

CHAPTER 29 Interactive Reporting Studio INI Files . . . . . . . . . . 487

PART V Administering Web Analysis . . . . . . . . . . 489

CHAPTER 1 Web Analysis Configuration Options and Utilities . . . . . . . . . . 491
Web Analysis Configuration Options . . . . . . . . . . 492
Controlling Result Sets . . . . . . . . . . 493
Configuring Java Plug-in Versions . . . . . . . . . . 493
Configuring the Repository . . . . . . . . . . 494
Configuring Hyperion System 9 BI+ Analytic High Availability Services . . . . . . . . . . 494
Considerations for Configuring Analytic High Availability Services . . . . . . . . . . 495
Resolving Analytic Services Subscriptions in Web Analysis . . . . . . . . . . 496
Configuring a Web Analysis Mail Server . . . . . . . . . . 496
Formatting Data Value Tool Tips . . . . . . . . . . 496
Setting Web Analysis to Log Queries . . . . . . . . . . 496
Exporting Raw Data Values to Excel . . . . . . . . . . 497
Web Analysis Utilities . . . . . . . . . . 497
Repository Password Encryption Utility . . . . . . . . . . 497
Web Analysis Configuration Test Servlet . . . . . . . . . . 498
Changing Web Analysis Ports . . . . . . . . . . 499

APPENDIX A Backup Strategies . . . . . . . . . . 501
What to Backup . . . . . . . . . . 502
General Backup Procedure . . . . . . . . . . 502
Backing Up the Workspace File System . . . . . . . . . . 502
Complete Backup . . . . . . . . . . 503
Post-Installation . . . . . . . . . . 503
Weekly Full and Daily Incremental . . . . . . . . . . 504
As Needed . . . . . . . . . . 504
Reference Table for All File Backups . . . . . . . . . . 504
Sample Backup Script . . . . . . . . . . 505
Backing Up the Repository Database . . . . . . . . . . 506
Backing Up Clients . . . . . . . . . . 506

Glossary . . . . . . . . . . 507

Index . . . . . . . . . . 521


Preface

Welcome to the Hyperion System 9 BI+ Workspace Administrator's Guide. This preface discusses these topics:

- Purpose on page xvii
- Audience on page xviii
- Document Structure on page xviii
- Where to Find Documentation on page xix
- Help Menu Commands on page xx
- Conventions on page xxi
- Additional Support on page xxii
- Documentation Feedback on page xxii

Purpose
This guide provides the information that you need to administer the entire Hyperion System 9 BI+ Workspace of services, modules, and tools. It explains Workspace features and options and contains the concepts, processes, procedures, formats, tasks, and examples that you need to administer the software. This guide also provides information on administering Hyperion System 9 BI+ Enterprise Metrics, Hyperion System 9 BI+ Financial Reporting, Hyperion System 9 BI+ Interactive Reporting, and Hyperion System 9 BI+ Web Analysis.

This guide does not cover end-user tasks. It assumes that you have read the Hyperion System 9 BI+ Workspace Getting Started Guide and the Hyperion System 9 BI+ Workspace User's Guide.
Note: This book covers the entire Workspace system, although you may have installed only a subset of it; therefore, it may discuss components and features that your system does not include. For more information, see About Hyperion System 9 BI+ Reporting Solution on page 26.


Audience
This guide is written for all levels of administrators, from those who administer a subset of Workspace to those who oversee the entire Workspace system. In addition, some information is intended for developers of Hyperion System 9 BI+ Production Reporting programs or system customizations, for advanced users of Interactive Reporting, and for administrators of Enterprise Metrics, Financial Reporting, and Web Analysis.

Document Structure
This document contains the following information:

- Part I, "Administering Workspace," introduces the architecture, administrative tools, and administrative tasks available in Workspace, a DHTML-based, zero-footprint client that provides the user interface for viewing and interacting with the content created by the authoring studios, in addition to enabling users to create queries against relational and multidimensional data sources. It covers administration related to documents and jobs in Workspace, and explains how to configure and maintain the Workspace services, applications, and tools, and how to optimize, back up, and troubleshoot Workspace.
- Part II, "Administering Enterprise Metrics," provides information on installing, implementing, administering, and troubleshooting Enterprise Metrics, a toolset for creating, configuring, and delivering metrics that enable organizations to assess and improve business performance.
- Part III, "Administering Financial Reporting," describes administrative tasks specific to Financial Reporting, which provides scheduled or on-demand highly formatted financial and operational reporting from most data sources.
- Part IV, "Administering Interactive Reporting Studio," explains advanced features, such as auditing, connectivity, and data modeling, used to administer Interactive Reporting, which provides ad hoc relational query and self-service reporting from ODBC data sources.
- Part V, "Administering Web Analysis," describes files and utilities used to configure, maintain, and optimize Web Analysis, which provides interactive ad hoc analysis, presentation, and reporting of multidimensional data.
- Glossary contains a list of key terms and definitions.
- Index contains a list of Workspace terms and page references.


Where to Find Documentation


All Workspace documentation is accessible from the following locations:

- The HTML Information Map is available from the Workspace Help menu for all operating systems; for products installed on Microsoft Windows systems, it is also available from the Start menu.
- Online help is available from within Workspace. After you log on to the product, you can access online help by clicking the Help button or selecting Help from the menu bar.
- The Hyperion Download Center can be accessed from the Hyperion Solutions Web site.

To access documentation from the Hyperion Download Center:


1. Go to the Hyperion Solutions Web site and navigate to Services > WorldWide Support > Download Center.
Note: Your Login ID for the Hyperion Download Center is your e-mail address. The Login ID and Password required for the Hyperion Download Center differ from the Login ID and Password required for Hyperion Support Online through Hyperion.com. If you are not sure whether you have a Hyperion Download Center account, follow the on-screen instructions.

2. Enter your e-mail address and password.
3. Select a language and click Login.
4. If you are a member on multiple Hyperion Solutions Download Center accounts, select an account for the current session.
5. To access documentation online, from the Product List, select a product and follow the on-screen instructions.


Help Menu Commands


The following table describes the commands that are available from the Help menu in Workspace.
Help on This Topic: Launches a help topic for the window or Web page.

Contents: Launches the Workspace help.

Information Map: Launches the Workspace Information Map, which provides the following assistance:
- Online help in PDF and HTML format
- Links to related resources to assist you in using Workspace

Technical Support: Launches the Hyperion Technical Support site, where you can submit defects and contact Technical Support.

Hyperion Developers Network: Launches the Hyperion Developer Network site, where you can access information about known defects and best practices. This site also provides tools and information to assist you in getting started using Hyperion products:
- Sample models
- A resource library containing FAQs, tips, and technical white papers
- Demos and Webcasts demonstrating how Hyperion products are used

Hyperion.com: Launches Hyperion's corporate Web site, where you can access a variety of information about Hyperion:
- Office locations
- The Hyperion Business Intelligence and Business Performance Management product suite
- Consulting and partner programs
- Customer and education services and technical support

About Workspace: Launches the About Workspace dialog box, which contains copyright and release information, along with version details.


Conventions
The following table shows the conventions that are used in this document:
Arrows: Indicate the beginning of procedures consisting of sequential steps or one-step procedures.

Brackets []: In examples, brackets indicate that the enclosed elements are optional.

Bold: Bold in procedural steps highlights user interface elements on which the user must perform actions.

CAPITAL LETTERS: Capital letters denote commands and various IDs. (Example: CLEARBLOCK command)

Ctrl+0: Keystroke combinations shown with the plus sign (+) indicate that you should press the first key and hold it while you press the next key. Do not type the plus sign.

Ctrl+Q, Shift+Q: For consecutive keystroke combinations, a comma indicates that you press the combinations consecutively.

Example text: Courier font indicates that the example text is code or syntax.

Courier italics: Courier italic text indicates a variable field in command syntax. Substitute a value in place of the variable shown in Courier italics.

ARBORPATH: When you see the environment variable ARBORPATH in italics, substitute the value of ARBORPATH from your site.

n, x: Italic n stands for a variable number; italic x can stand for a variable number or a letter. These variables are sometimes found in formulas.

Ellipses (...): Ellipsis points indicate that text was omitted from an example.

Mouse orientation: This document provides examples and procedures using a right-handed mouse. If you use a left-handed mouse, adjust the procedures accordingly.

Menu options: Options in menus are shown in the following format; substitute option names in the placeholders, as indicated: Menu name > Menu command > Extended menu command. For example: Select File > Desktop > Accounts.


Additional Support
In addition to providing documentation and online help, Hyperion offers the following product information and support. For details on education, consulting, or support options, click the Services link on the Hyperion Web site at http://www.hyperion.com.

Education Services
Hyperion offers instructor-led training, custom training, and e-Learning covering all Hyperion applications and technologies. Training is geared to administrators, end users, and information systems professionals.

Consulting Services
Experienced Hyperion consultants and partners implement software solutions tailored to clients' reporting, analysis, modeling, and planning requirements. Hyperion also offers specialized consulting packages, technical assessments, and integration solutions.

Technical Support
Hyperion provides enhanced telephone and electronic-based support to clients to resolve product issues quickly and accurately. This support is available for all Hyperion products at no additional cost to clients with current maintenance agreements.

Documentation Feedback
Hyperion strives to provide complete and accurate documentation. Your opinion on the documentation is of value, so please send your comments by going to
http://www.hyperion.com/services/support_programs/doc_survey/index.cfm.


Part I

Administering Workspace

In Administering Workspace:

Chapter 1, "Hyperion System 9 BI+ Architecture Overview"
Chapter 2, "Administration Tools and Tasks"
Chapter 3, "Administer Module"
Chapter 4, "Using Impact Management Services"
Chapter 5, "Managing Shared Services Models"
Chapter 6, "Automating Activities"
Chapter 7, "Administering Content"
Chapter 8, "Configuring RSC Services"
Chapter 9, "Configuring LSC Services"
Chapter 10, "Configuring the Servlets"



Chapter 1

Hyperion System 9 BI+ Architecture Overview

This chapter describes the Hyperion System 9 BI+ architecture.

In This Chapter

About Hyperion System 9 . . . . . . . . . . 26
About Hyperion System 9 BI+ Reporting Solution . . . . . . . . . . 26
Hyperion System 9 BI+ Reporting Solution Architecture . . . . . . . . . . 27


About Hyperion System 9


Hyperion System 9 is a comprehensive Business Performance Management (BPM) system that consists of these products:

- Hyperion System 9 BI+: Management reporting, including query and analysis, in one coordinated environment
- Hyperion System 9 Applications+: Coordinated planning, consolidation, and scorecarding applications
- Hyperion System 9 Foundation Services: Used to ease installation and configuration, provide metadata management, and support a common Microsoft Office interface

About Hyperion System 9 BI+ Reporting Solution


Hyperion System 9 BI+ is a modular business intelligence platform that provides management reporting, query, and analysis capabilities for a wide variety of data sources in one coordinated environment. One zero-footprint thin client provides users with access to content:

- Enterprise metrics for management metrics and analysis presented in easy-to-use, personalized, interactive dynamic dashboards
- Financial reporting for scheduled or on-demand highly formatted financial and operational reporting from most data sources, including Hyperion System 9 Planning and Hyperion System 9 Financial Management
- Interactive reporting for ad hoc relational query, self-service reporting, and dashboards against ODBC data sources
- Production reporting for high-volume enterprise-wide production reporting
- Web analysis for interactive ad hoc analysis, presentation, and reporting of multidimensional data

Hyperion System 9 BI+, which includes Hyperion System 9 BI+ Analytic Services, is part of a comprehensive BPM system that integrates this business intelligence platform with Hyperion financial applications and Hyperion System 9 Performance Scorecard.


Hyperion System 9 BI+ Reporting Solution Architecture


The Hyperion System 9 BI+ reporting environment is organized into three layers:

- Client Layer on page 27
- Application Layer on page 29
- Database Layer on page 35

Client Layer
The client layer refers to local interfaces used to author, model, analyze, present, report, and distribute diverse content, and to third-party clients, such as Microsoft Office:

- Workspace: DHTML-based, zero-footprint client that provides the user interface for viewing and interacting with content created by the authoring studios, and enables users to create queries against relational and multidimensional data sources:
  - Analytic Services: High-performance multidimensional modeling, analysis, and reporting
  - Hyperion System 9 BI+ Enterprise Metrics: Management metrics and analysis presented in personalized, interactive dashboards


  - Hyperion System 9 BI+ Financial Reporting: Highly formatted financial reporting
  - Hyperion System 9 BI+ Interactive Reporting: Ad hoc query, analysis, and reporting, including dashboards
  - Hyperion System 9 BI+ Production Reporting: High-volume enterprise production reporting
  - Hyperion System 9 BI+ Web Analysis: Advanced interactive ad hoc analysis, presentation, and reporting against multidimensional data sources

- Hyperion System 9 BI+ Interactive Reporting Studio: Highly intuitive and easy-to-navigate environment for data exploration and decision making. With a consistent design paradigm for query, pivot, charting, and reporting, all levels of users move fluidly through cascading dashboards, finding answers fast. Trends and anomalies are automatically highlighted, and robust formatting tools enable users to easily build free-form, presentation-quality reports for broad-scale publishing across their organization.
- Hyperion System 9 BI+ Interactive Reporting Web Client: Read-only Web plug-in for viewing Interactive Reporting Studio reports.
- Hyperion System 9 BI+ Financial Reporting Studio: Windows client for authoring highly formatted financial reports from multidimensional data sources, which features easy, drag-and-drop, reusable components to build and distribute HTML, PDF, and hardcopy output.
- Hyperion System 9 BI+ Web Analysis Studio: Java applet that enables you to create, analyze, present, and report multidimensional content. The studio offers the complete Web Analysis feature set to designers creating content, including dashboards for information consumers.
- Hyperion System 9 BI+ Production Reporting Studio: Windows client that provides the design environment for creating reports from a wide variety of data sources. Reports can be processed in one pass to produce a diverse array of pixel-perfect output. Processing can be scheduled and independently automated, or designed to use form templates that prompt dynamic user input.
- Hyperion System 9 BI+ Enterprise Metrics Personalization Workspace: Java applet that enables you to define metrics that allow users to view business information and trends to better understand business performance. Dynamic charts and reports provide up-to-date information and expedite performance analysis.
- Hyperion System 9 BI+ Enterprise Metrics Studio: Java applet for creating personal News pages and customizing Metrics pages.
- Hyperion System 9 BI+ Dashboard Development Services: Enables creation of dashboards:

  - Dashboard Studio: Windows client that utilizes extensible and customizable templates to create interactive, analytical dashboards without the need to code programming logic.
  - Dashboard Architect: Windows-based integrated development environment that enables programmers to swiftly code, test, and debug components utilized by Dashboard Studio.


- Hyperion System 9 Smart View for Office: Hyperion-specific Microsoft add-in and toolbar from which users can query Hyperion data sources, including Analytic Services, Financial Management, and Planning. Users can use this environment to interact with Financial Management and Planning forms for data input, and can browse the BI+ repository and embed documents in the Office environment. Documents are updated by user request.
- Performance Scorecard: Web-based solution for setting goals and monitoring business performance using recognized scorecarding methodologies. Provides tools that enable users to formulate and communicate organizational strategy and accountability structures:
  - Key Performance Indicators (KPIs): Create tasks and achievements that indicate progress toward key goals
  - Performance indicators: Indicate good, acceptable, or poor performance of accountability teams and employees
  - Strategy maps: Relate high-level mission and vision statements to lower-level actionable strategy elements
  - Accountability maps: Identify those responsible for actionable objectives
  - Cause and Effect maps: Depict interrelationships of strategy elements and measure the impact of changing strategies and performance

Application Layer
The application layer, a middle tier that retrieves requested information and manages security, communication, and integration, contains two components:

- Application Layer Web Tier on page 29
- Application Layer Services Tier on page 30

Because the business intelligence platform is modular, it may consist of various combinations of components, configured in numerous ways. The end result is a comprehensive, flexible architecture that accommodates implementation and business needs.

Application Layer Web Tier


The application layer relies upon a J2EE application server and Web server to send and receive content from Web clients. An HTTP connector is required to link the Web server and the application server. The Web tier hosts the Workspace, Interactive Reporting, Financial Reporting, and Web Analysis Web applications. For a complete description of supported Web tier hardware and software, see the Hyperion System 9 BI+ Financial Reporting, Interactive Reporting, Production Reporting, Web Analysis Installation Guides for Windows and UNIX.


Application Layer Services Tier


The application layer services tier contains services and servers that control functionality of various Web applications and clients. Most services fall into two main groups, depending on the tool used to configure their properties:

- Local services: Services in the local Install Home that are configured using the Local Service Configurator (LSC). Referred to as LSC services.
- Remote services: Services on a local or remote host that are configured using the Remote Service Configurator (RSC). Referred to as RSC services.

Because most of these services are replicable, you may encounter multiple instances of a service in a system.

Core Services
Core Services are mandatory for authorization, session management, and document publication:

- Repository Service: Stores Hyperion system data in supported relational database tables, known collectively as the repository. A system can have only one Repository Service.
- Publisher Service: Handles repository communication for other LSC services and some Web application requests; forwards repository requests to Repository Service and passes replies back to initiating services. A system can have only one Publisher Service.
- Global Service Manager (GSM): Tracks system configuration information and monitors registered services in the system. A system can have only one GSM.
- Local Service Manager (LSM): Created for every instance of an LSC or RSC service, including GSM. When system servers start, they register their services and configuration information with GSM, which supplies and maintains references to all other registered services.
- Authentication Service: Checks user credentials at logon time and determines whether users can connect; determines group memberships, which, along with roles, affect what content and other system objects (resources) users can view and modify. Authentication Service is replicable and does not have to be co-located with other services.
- Authorization Service: Provides security at the level of resources and actions; manages roles and their associations with operations, users, groups, and other roles. A system must have at least one Authorization Service.
- Session Manager Service: Monitors and maintains the number of simultaneous system users. Monitors all current sessions and terminates sessions that are idle for more than a specified time period. While Session Manager is replicable, each instance independently manages a set of sessions.
- Service Broker: Supports GSM and LSMs by routing client requests and managing load balancing for RSC services. A system can have multiple Service Brokers.


- Name Service: Monitors registered RSC services in the system, and provides them with system configuration information from server.xml. Works in conjunction with Service Broker to route client requests to RSC services. A system can have only one Name Service.

Management Services
Management services are Core Services that collect and distribute system messages and events for troubleshooting and usage analysis:

- Logging Service: Centralized service for recording system messages to log files. A system can have only one Logging Service.
- Usage Service: Records the number and nature of processes addressed by Hyperion Interactive Reporting Service, which enables administrators to review usage statistics such as the number of logons, the most-used files, the most-selected MIME types, and what happens to system output. Systems can have multiple Usage Services.

Functional Services
Functional services are Core Services that are specific to various functional modules:

- Job Service: Executes scripts that create reports, which can be prompted by users with permissions or by Event Service. Report output is returned to initiating users or published to the repository. Job Services can be created and configured for every executable.
- Event Service: Manages subscriptions to system resources. Tracks user subscriptions, job parameters, events and exceptions, and prompts Job Service to execute scheduled jobs. Event Service is configured to distribute content through e-mail and FTP sites, and to notify users with subscriptions about changing resources. A system can have only one Event Service.

Interactive Reporting Services


Interactive Reporting services are Core Services that support Interactive Reporting functionality by communicating with data sources, starting RSC services, and distributing Interactive Reporting client content:

- Hyperion Interactive Reporting Service: Runs Interactive Reporting jobs and delivers interactive HTML content for Interactive Reporting files. When actions involving Interactive Reporting documents are requested, Hyperion Interactive Reporting Service fulfills such requests by obtaining and processing the documents and delivering HTML for display.
- Hyperion Interactive Reporting Data Access Service: Provides access to relational and multidimensional databases, and carries out database queries for the plug-in, Hyperion Interactive Reporting Service, and Interactive Reporting jobs. Each Hyperion Interactive Reporting Data Access Service supports connectivity to multiple data sources, using the connection information in one or more Interactive Reporting database connection files, so that one Hyperion Interactive Reporting Data Access Service can process a document whose sections require multiple data sources. Hyperion Interactive Reporting Data Access Service maintains a connection pool for database connections.


- Extended Access for Hyperion Interactive Reporting Service: Enables users to jointly analyze multidimensional and relational sources in one document. It retrieves flattened OLAP results from Web Analysis documents, Production Reporting job output, or Financial Reporting batch reports in the BI+ repository and imports data into Interactive Reporting documents (.bqy) as Results sections.
- Hyperion Interactive Reporting Base Service: Starts all LSC and RSC services in one Install Home.

Financial Reporting Servers


Financial Reporting servers support Financial Reporting functionality by processing batch requests, generating output, and distributing Financial Reporting client content:

- Hyperion Financial Reporting Server: Generates and formats dynamic report or book results, including specified calculations. Hyperion Financial Reporting Server can handle numerous simultaneous requests for report execution from multiple clients, because each request runs on its own execution thread. Hyperion Financial Reporting Server caches data source connections, so multiple requests by the same user do not require a reconnection. Financial Reporting servers are replicable; the number necessary depends on the number of concurrent users who want to execute reports simultaneously through the clients. Multiple Financial Reporting servers can be configured to report against one repository.
- Hyperion Financial Reporting Communication Server: Provides a Java RMI registry to which other Financial Reporting servers are bound.
- Hyperion Financial Reporting Print Server: Enables Financial Reporting content to be compiled as PDF output. Runs only on supported Windows platforms, but is replicable to provide scalability for PDF generation.
- Hyperion Financial Reporting Scheduler Server: Responds to Financial Reporting scheduled batch requests. At the specified time, Hyperion Financial Reporting Scheduler Server prompts the other Financial Reporting servers to fulfill the request.

Production Reporting Service


Production Reporting Service responds to scheduled and on-demand requests by Job Service to run jobs, process data, and generate reports. Production Reporting Service is optimized for high volume reporting through the use of native drivers, array processing for large data sets, and cursor management. It processes time-saving data manipulation operations in one pass of the data source and produces large quantities of reports in online and printed formats. Production Reporting Service is a replicable service.


Hyperion System 9 BI+ Impact Manager Services


Impact Manager services enable you to harvest, update, and publish new Interactive Reporting content from old Interactive Reporting repository resources. These services must be used in conjunction with Interactive Reporting services. Both services perform automatic load balancing and fault tolerance when multiple instances are running:

- Assessment (Harvester) Service: Harvests metadata from published Interactive Reporting repository documents.
- Update (Transformer) Service: Updates published and harvested Interactive Reporting documents or publishes new versions to the repository.

Enterprise Metrics Servers


Metrics Server and Configuration Server support Enterprise Metrics client functionality used in conjunction with Hyperion System 9 BI+:

- Metrics Server: Metrics engine that issues queries against a data warehouse and one or more Analytic Services sources. It combines result sets, calculates requested metrics, and displays Enterprise Metrics content in Workspace or Personalization Workspace.
- Configuration Server: Used solely in conjunction with Enterprise Metrics Studio and Studio utilities to develop and test new catalog content. When appropriate, Configuration Catalog content is published to the Metrics Catalog for production use by Metrics Server.

Performance Scorecard Services


Scorecard Module services support Performance Scorecard client functionality used in conjunction with BI+.

Common Administration Services


Common Administration Services include Hyperion System 9 Shared Services that support authentication and user provisioning for all Hyperion products, and Hyperion License Server used for product licensing. See the Shared Services documentation set.

Smart View Services


Smart View Services provide a common Microsoft Office interface for Hyperion products. See the Smart View documentation set.


Services Tier Summary

LSC or RSC Service | Type                           | Name                                               | Instances
LSC                | Core                           | Authentication Service                             | Multiple
LSC                | Core                           | Authorization Service                              | Multiple
LSC                | Core                           | Global Service Manager                             | 1 per system
LSC                | Core                           | Local Service Manager                              | Multiple
LSC                | Core                           | Publisher Service                                  | 1 per system
LSC                | Core                           | Session Manager                                    | Multiple
LSC                | Impact Management Services     | Assessment (Harvester) Service                     | Multiple
LSC                | Impact Management Services     | Update (Transformer) Service                       | Multiple
LSC                | Interactive Reporting          | Extended Access for Interactive Reporting Service  | Multiple
LSC                | Interactive Reporting          | Hyperion Interactive Reporting Base Service        | Multiple
LSC                | Interactive Reporting          | Hyperion Interactive Reporting Data Access Service | Multiple
LSC                | Interactive Reporting          | Hyperion Interactive Reporting Service             | Multiple
LSC                | Management                     | Logging Service                                    | 1 per system
LSC                | Management                     | Usage Service                                      | Multiple
RSC                | Core                           | Name Service                                       | 1 per system
RSC                | Core                           | Repository Service                                 | 1 per system
RSC                | Core                           | Service Broker                                     | Multiple
RSC                | Functional                     | Event Service                                      | 1 per system
RSC                | Functional                     | Job Service                                        | Multiple
N/A                | Common Administration Services | Hyperion License Server                            | 1 per system
N/A                | Common Administration Services | Shared Services                                    | 1 per system
N/A                | Enterprise Metrics Servers     | Configuration Server                               | 1 per system
N/A                | Enterprise Metrics Servers     | Metrics Server                                     | 1 per system
N/A                | Financial Reporting Servers    | Hyperion Financial Reporting Communication Server  | 1 per system
N/A                | Financial Reporting Servers    | Hyperion Financial Reporting Print Server          | Multiple
N/A                | Financial Reporting Servers    | Hyperion Financial Reporting Scheduler Server      | Multiple
N/A                | Financial Reporting Servers    | Hyperion Financial Reporting Server                | Multiple
N/A                | Performance Scorecard Services | Scorecard Module Services                          | Multiple
N/A                | Production Reporting Service   | Production Reporting Service                       | Multiple
N/A                | Smart View Services            | Smart View Services                                | Multiple


Database Layer
Architecturally, databases fall into two fundamental groups: repositories that store Hyperion system data; and data sources that are the subject of analysis, presentation, and reporting. There are three important repositories for information storage:

- Common repository: Hyperion system data in supported relational database tables
- Shared Services: User, security, and project data that can be used across Hyperion products
- Common Hyperion License Server: Licensing information

Database layer components:


- Relational data sources, for example, Oracle, IBM DB2, and Microsoft SQL Server
- Multidimensional data sources, for example, Analytic Services
- Hyperion applications, for example, Financial Management and Planning
- Data warehouses
- ODBC data sources

For a complete description of supported data sources, see the Hyperion System 9 BI+ Financial Reporting, Interactive Reporting, Production Reporting, Web Analysis Installation Guides for Windows and UNIX.



Chapter 2

Administration Tools and Tasks

Administrative tools enable you to configure and administer Workspace.

In This Chapter

Understanding Hyperion Home and Install Home . . . . . . . . . . 38
Administration Tools . . . . . . . . . . 38
Starting and Stopping Services . . . . . . . . . . 40
Starting Workspace Servlet . . . . . . . . . . 47
Implementing Process Monitors . . . . . . . . . . 48
Quick Guide to Common Administrative Tasks . . . . . . . . . . 51


Understanding Hyperion Home and Install Home


When multiple Hyperion products are installed on the same computer, common internal and third-party components used by the products are installed to a central location, called Hyperion Home. On Windows platforms, the Hyperion Home location is defined in the system environment variable called HYPERION_HOME. The default location, defined during product installation, is C:\Hyperion for Windows and $HOME/Hyperion for UNIX. Hyperion Home contains a \common directory.

A Workspace installation adds a \BIPlus directory to Hyperion Home, which is the default installation location (Install Home) for Workspace; that is, C:\Hyperion\BIPlus on Windows or $HOME/Hyperion/BIPlus on UNIX. It is possible to have multiple Install Homes on one physical host.

All Java services in an Install Home run in one process space and share a GSM (not necessarily on one host). If a host has multiple Install Homes, each Install Home requires its own separate services process space and is managed by its own GSM. Services in an Install Home are referred to collectively as an Install Home.
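For example, on a UNIX host you can confirm the shared and product-specific locations before adding another Install Home. This is a minimal sketch that assumes the HYPERION_HOME variable is also set in your UNIX environment; the paths shown are the defaults described above, not values from your site:

    # Shared Hyperion Home (components common to all Hyperion products)
    echo $HYPERION_HOME              # typically $HOME/Hyperion
    ls $HYPERION_HOME/common
    # Workspace Install Home added by the Workspace installation
    ls $HYPERION_HOME/BIPlus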

Administration Tools
Topics that describe Workspace administration tools:

- Administer Module on page 38
- Impact Manager Module on page 39
- Job Utilities Calendar Manager on page 39
- Service Configurators on page 39
- Servlet Configurator on page 40

Administer Module
Properties managed using the Administer module (accessed from the view pane Navigate panel):

- General properties
- Your organization, including adding and modifying users, groups, and roles, through the User Management Console
- Physical resources, including printers and output directories
- MIME types
- Notifications
- SmartCuts


- Row-level security
- Usage tracking
- Event tracking

For detailed information on managing these items, see Administer Module on page 53. For information about common user-interface features among the modules, see the Hyperion System 9 BI+ Workspace Users Guide or the Hyperion System 9 BI+ Workspace Getting Started Guide.

Impact Manager Module


Impact Manager module enables users to replace Interactive Reporting data models. Changing a data model enables global changes across all Interactive Reporting documents, without requiring that every document that references the data source be edited individually. See Using Impact Management Services on page 69.

Job Utilities Calendar Manager


You create, modify, and delete custom calendars using Job Utilities Calendar Manager. You can create calendars to schedule jobs based on fiscal or other internal or organizational calendars. See Viewing Calendar Manager on page 154.

Service Configurators
All Workspace services have configurable properties that you modify using Local Service Configurator (LSC) or Remote Service Configurator (RSC). LSC and RSC handle different services.

RSC
RSC provides a graphical interface to manage a subset of Workspace service types referred to as RSC (or remote) services. You use RSC to configure services on all hosts in the system:

- Modify or view RSC service properties
- Ping services
- Add, modify, or delete hosts
- Add, modify, or delete database servers in the system
- Delete services

See Configuring RSC Services on page 171.


LSC
LSC enables you to configure and manage a subset of Workspace services on a local host, referred to as LSC (or local) services:

- View or modify properties of LSC services
- View or modify properties of the local Install Home
- Configure pass-through settings

See Configuring LSC Services on page 195.

Servlet Configurator
Servlet Configurator enables you to customize the Browse, Personal Pages, Scheduler, and Administration servlets for your organization. Settings include the length of time to cache various types of data on the servlets, the colors of user interface elements, and the locale and language. See Configuring the Servlets on page 207.

Starting and Stopping Services


To start Workspace, you start the services in each Install Home and start each installation of the Workspace servlets (usually on a Web server). This section focuses on how to start the services of an Install Home; a discussion at the end covers starting the Install Home services and hosts of a distributed system.

In an Install Home, you can start all installed services, a subset of them, or an individual service. Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service should always be started separately. How you start services depends on your operating system, Workspace system configuration, and objectives. How you stop services depends on how you started them.

Topics that explain prerequisites and methods for starting and stopping services:

- Before Starting Services on page 41
- Starting Core Services on page 41
- Starting a Subset of Services on page 42
- Starting Services Individually on page 43
- Starting Services in Order on page 45
- Stopping Services on page 46
- Example of How Services Start on page 47


Before Starting Services


Before starting services, ensure that all required network resources are available to the services. For example, Hyperion Interactive Reporting Service may need to create job output on printers or file directories belonging to network hosts other than the one where the service is running. These connections must be established before Hyperion Interactive Reporting Service can start.

On Windows, a service may need to log on as a user account rather than as the local system account to establish connections to shared resources on the network. ODBC data sources must be configured as system data sources rather than user data sources. Consult with the site's network administrators to configure the correct environment.

On UNIX platforms, make all necessary environment settings before starting services. Consult with the site's network administrators to create the necessary software configuration.

Regardless of your method for starting Workspace services, you must first start the repository database.
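For example, on UNIX a site might wrap the start sequence in a small script that sets the database-client environment first. This is a minimal sketch; the variable names and paths are assumptions that depend on your ODBC driver and database client:

    #!/bin/sh
    # Hypothetical pre-start environment for an Install Home on UNIX;
    # adjust variable names and paths for your site's drivers.
    ODBCINI=/opt/odbc/odbc.ini; export ODBCINI
    LD_LIBRARY_PATH=/opt/odbc/lib:${LD_LIBRARY_PATH}; export LD_LIBRARY_PATH
    # The repository database must already be running at this point.
    /opt/Hyperion/BIPlus/bin/startCommonServices.sh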

Starting Core Services


Regardless of whether you installed the complete set of services in an Install Home, a few services, or one service, the methods presented in these topics can be used to start Core Services of a given Install Home:

- startCommonServices Method on page 41
- Windows Service Methods (Windows Only) on page 42

For a usable system, all Core Services must be started (see Core Services on page 30).
Note: Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service must be started separately (see Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service on page 44). Hyperion recommends that you restart your Web server after restarting Workspace services. If you do not restart the Web server, a delay of several minutes occurs before users can log on.

startCommonServices Method
The startCommonServices method of starting services is the preferred method for UNIX and an alternative method for Windows. To start Workspace Core Services (that is, all services except Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service), run the startCommonServices script in Install Home\bin:

- UNIX: startCommonServices.sh
- Windows: startCommonServices.bat

startCommonServices starts the Java services in an Install Home, except for inactivated ones. Inactivating services is discussed in Starting a Subset of Services on page 42.


Table 1: Flags Used in startCommonServices Start Scripts

-Dminimum_password_length
    Length of database passwords. Default=5.

-Ddisable_htmlemail
    Format for e-mails (HTML or text file). Default is HTML format.

-DPerformance.MaxSTWorkers
    Number of job worker threads. Determines the speed at which jobs are built and sent to Job Service. Configure based on the number of Job Services, schedules, and events, and on the size of the connection pool for the repository. Default=2.

-DPerformance.SchedulerBatchSize
    Number of schedules processed at one time by the scheduler worker thread. Default=15.

-DPerformance.SchedulerDelay
    Number of seconds job execution is delayed when Job Services are busy. Default=300.

-Djob_limit
    Number of concurrent jobs for each Job Service. No default limit.
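For example, to build jobs faster and cap per-service job concurrency, a site might add several Table 1 flags to the Java command line that startCommonServices launches. The sketch below assumes a UNIX Install Home and that the local start script honors a JAVA_OPTS-style variable; exactly where the flags are set inside the shipped script can vary by release, so adapt it to your installation:

    #!/bin/sh
    # Illustrative only: tuned system properties for Core Services startup.
    JAVA_OPTS="-DPerformance.MaxSTWorkers=4 \
    -DPerformance.SchedulerBatchSize=30 \
    -DPerformance.SchedulerDelay=120 \
    -Djob_limit=10"
    export JAVA_OPTS
    /opt/Hyperion/BIPlus/bin/startCommonServices.sh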

Windows Service Methods (Windows Only)


On Windows, the preferred method for starting Core Services is by running Hyperion Interactive Reporting Base Service from Windows Services or from the Start menu.

To start Core Services, use one of these methods:

- From Administrative Tools, select Services, select Hyperion Interactive Reporting Base Service n, and click Start.
- Select Start > Programs > Hyperion System 9+ > Utilities and Administration > Start BI+ Core Services.
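You can also start or stop the Windows service from a command prompt with the net command. The exact service name, including its instance number, is whatever appears in the Services control panel, so the name below is illustrative:

    net start "Hyperion Interactive Reporting Base Service 1"
    net stop "Hyperion Interactive Reporting Base Service 1"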

Starting a Subset of Services


You can start a subset of LSC and RSC services by inactivating those you do not want to start.

To start a subset of Workspace services:


1. Inactivate services that you do not want to start:
   - LSC services: Using LSC, set Run Type to Hold for each service.
   - RSC services: In server.dat, delete the names of services you want to inactivate. Before modifying this file, save a copy of the original. Details about server.dat are provided in Starting Services and server.dat on page 43.

2. Run the startCommonServices script, or start Core Services by another method. See Starting Core Services on page 41.


Starting Services and server.dat


When Core Services are started, only RSC services listed in Install Home\common\config\server.dat are started. Each line in server.dat is formatted as:
serviceType:serviceName

serviceType must be one of the strings shown in the first column of Table 2.
Table 2 Service Types

com.sqribe.transformer.NameServerImpl: Name Service
com.sqribe.transformer.RepositoryAgentImpl: Repository Service
com.sqribe.transformer.MultiTypeServerAgentImpl: Event Service
com.sqribe.transformer.SQRJobFactoryImpl: Job Service
com.sqribe.transformer.ServiceBrokerImpl: Service Broker

The serviceName is the service name in the form:

serviceAbbrev#_localHost

where:

serviceAbbrev is an abbreviation listed in Table 3, Abbreviations for Service Names Used in Start Scripts, on page 45
# is a number uniquely identifying the service
localHost is the name of the computer where the service is installed, in the form hostname.domain.com

For example, to inactivate only Service Broker and Event Service on host apollo, remove the following lines from server.dat:
com.sqribe.transformer.ServiceBrokerImpl:SB1_apollo.Hyperion.com
com.sqribe.transformer.MultiTypeServerAgentImpl:ES1_apollo.Hyperion.com
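Because a mistake in server.dat prevents the affected RSC services from starting, keep a restorable copy before editing. A minimal sketch on UNIX (the Install Home path is illustrative):

cd /InstallHome/common/config
cp server.dat server.dat.orig    # keep a restorable copy of the original
vi server.dat                    # delete the lines for services to inactivate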

Starting Services Individually


Some services have their own start scripts. These single-service start scripts make it possible to start a service in a separate process from other services in an Install Home, but this is desirable only for certain services and situations. Topics that discuss starting services individually:

Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service on page 44
RSC Services Individual Start Scripts on page 44


Starting Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service
You must start Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service individually. This is true whether the service is installed in an Install Home with the Workspace services or alone in its own Install Home.
Note: When you connect to a computer to start Hyperion Interactive Reporting Service on Windows, make sure the color property setting for the display is 16 bits or higher. If the color property setting is less than 16 bits, users may encounter extremely long response times when opening Chart sections of Interactive Reporting documents in Workspace. This is an important prerequisite, especially when starting the services remotely (for example using VNC, Terminal Services, Remote Administrator or Timbuktu, and so on), because many remote administration clients connect with only 8-bit colors by default.

To start Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service):

1 In LSC, verify that Run Type for Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service) is set to Start.
2 Start the common services.
3 Start Hyperion Interactive Reporting Service (or Hyperion Interactive Reporting Data Access Service) in its own process using a process monitor (see Implementing Process Monitors on page 48).

For Windows, to start these services without a process monitor, run the corresponding start script:

Hyperion Interactive Reporting Service: \BIPlus\bin\startIntelligenceService.bat
Hyperion Interactive Reporting Data Access Service: \BIPlus\bin\startDataAccessService.bat

For UNIX, see Starting Services with Process Monitors on page 50.

RSC Services Individual Start Scripts


Some RSC services have individual start scripts, which are useful for debugging or isolating issues. Because of complex dependencies between the Workspace services, however, the order in which services are started is critical. You should use these scripts only (1) if you completely understand system interdependencies, or (2) with the guidance of Hyperion Customer Support. For information about sequential requirements, see Starting Services in Order on page 45.

Each start script is an executable Bourne shell script file with the extension .sh (UNIX) or a batch file with the extension .bat (Windows). Start scripts are stored in /BIPlus/bin. Each start script name is composed of an abbreviation for the service type, a number uniquely identifying the service, an underscore, and the string start, followed by the extension .sh or .bat.


Table 3 Abbreviations for Service Names Used in Start Scripts

DAS: Hyperion Interactive Reporting Data Access Service
ES: Event Service
BI: Hyperion Interactive Reporting Service
JF: Job Service
NS: Name Service
RM: Repository Service
SB: Service Broker

Example: Start script for the first Name Service installed on a UNIX host named apollo:
NS1_apollo_start.sh

Start script for the third Job Service installed on a Windows host named zeus:
JF3_zeus_start.bat

Starting Services in Order


When you start all services in one process, the start script or service controls the start sequence and timing to ensure that all dependencies are met. For example, if service A's start depends on services B and C being available, then B and C must start successfully before A tries to start. When you start services in separate processes, the proper timing and sequence are your responsibility.

To use individual start scripts:


1 Start LSC services using startCommonServices.
2 Make certain that GSM and Session Manager have started.
To find out whether they are running, view stdout_console.log, which is in \BIPlus\logs.

3 Start Name Service.


Verify that it is running by viewing stdout_console.log before proceeding.

4 Start Service Broker.


Verify that it is running by viewing stdout_console.log before proceeding.

5 Start other RSC services, in any order.
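Taken together, a hypothetical UNIX session for one Install Home on a host named apollo might look like the following. The script names follow the conventions in Table 3, and each service should be verified in stdout_console.log before the next command is run:

./startCommonServices.sh    # LSC services; wait for GSM and Session Manager
./NS1_apollo_start.sh       # Name Service; verify before proceeding
./SB1_apollo_start.sh       # Service Broker; verify before proceeding
./RM1_apollo_start.sh       # remaining RSC services, in any order
./ES1_apollo_start.sh
./JF1_apollo_start.sh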


Stopping Services
You stop all Workspace services, and services started individually, by stopping their processes. Do so at each service's host computer. In all cases, stopping the services constitutes a hard shutdown and causes the services to stop immediately. In a hard shutdown, all work in progress stops. The method for stopping a service must match how it was started:

Individual RSC services started with a start script: Run its stop script. The name of a service's stop script matches that of its start script except for the substitution of stop for start. For example, if Job Service's start script is JF1_apollo_start.bat (or .sh), the stop script is JF1_apollo_stop.bat (or .sh).

Caution! Use a services stop script only if the service was started with its start script. A stop script cannot

be used to terminate one service within a multi-service process. The stop script stops all services running in that process.

Process running in a console window: Use a shutdown command, such as shutdown or [Ctrl+C] on Windows. Using an operating system kill command (such as kill on UNIX) to stop the Workspace services does not cause damage to the system; however, do not use kill -9.
Windows service: Use the Stop command in the Services tool.

If you are running services as different servers (that is, as separate processes), you must stop Repository Service last.
Note: Do not terminate Job Service while it is executing a job. If you do, you cannot restart Job Service until the job exits (or until you terminate the job). If a job never terminates, you can restart Job Service by terminating the Job Service process: on Windows systems, use Task Manager; on UNIX systems, use the kill command. As a last resort, you can reboot the computer on which Job Service resides.


Example of How Services Start


At start time, all services go through similar start procedures. For example, Job Service does the following:

1. Locates its local config.dat file. config.dat contains information for connecting with Name Service, which has Job Service configuration data. For more information about config.dat, see About config.dat on page 191.
2. Reads from the config.dat file.
3. Establishes a connection with Name Service to download Job Service configuration information.

Because a service looks up its configuration information only when it starts, it does not learn about subsequent changes made to the environment. Therefore, if you change a service's configuration and want it to take effect immediately, restart the service.

Changing Service Port Assignments


Common services ports are defined in these locations:

server.xml, in /common/config
config.dat
v8_serviceagent, in the repository
server.xml entries, through LSC

To change ports in config.dat, use the ConfigFileAdmin utility found in \bin. To change v8_serviceagent, use RSC.
See also Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution on page 362 and Changing Web Analysis Ports on page 499.

Starting Workspace Servlet


Start Workspace servlet according to the instructions given in your Web server documentation. Make the URL available to your system's end users. For Workspace, enter the following URL:
http://localhost:port/workspace

where localhost is the name of the Workspace server, and port is the TCP port on which the application server is listening. The default port for Workspace is 19000 if using Apache Tomcat.


Implementing Process Monitors


Process monitors enable you to periodically restart services (Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service) whose performance might deteriorate over time. Use a process monitor to start and stop services that need to be monitored and controlled; each service needs its own process monitor. Process monitors gracefully shut down a service while starting another in its place, so services are automatically started, stopped, and restarted with less down time.

Currently, process monitors are available to start and stop Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service only. Process monitors ensure that a system has only one registered Hyperion Interactive Reporting Service and one registered Hyperion Interactive Reporting Data Access Service with the same instance ID processing incoming requests.

In server.xml, you select the event and event threshold that trigger a service restart. For Hyperion Interactive Reporting Service, you can choose maximum number of documents retrieved, maximum number of jobs run, or maximum amount of time running the service, or simply state a time. For Hyperion Interactive Reporting Data Access Service, you can choose maximum number of database requests, maximum number of other database requests, or maximum amount of time running the service, or simply state a time.

To configure and start process monitors:


1 Configure process monitor properties in BIprocessmonitor.properties or DASprocessmonitor.properties.

See Configuring Process Monitors on page 48.

2 Configure the event trigger thresholds in server.xml.


See Hyperion Interactive Reporting Service Process Monitor Event Thresholds on page 49 and Hyperion Interactive Reporting Data Access Service Process Monitor Event Thresholds on page 50 for details.

3 Start Hyperion Interactive Reporting Service or Hyperion Interactive Reporting Data Access Service using process monitor scripts.

See Starting Services with Process Monitors on page 50 for details.

4 Monitor services using the process monitor log files.

Configuring Process Monitors


You configure process monitor properties in server.xml and the BIprocessmonitor.properties file or DASprocessmonitor.properties file in the \BIPlus\common\config directory. server.xml stores event threshold information, while the properties files store configurable properties for Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service process monitors.


Table 4 Configurable Properties in the Properties Files

MONITOR_THREAD_INTERVAL: Interval for polling the internal status of the service, in seconds. Minimum and default=30, maximum=300.
MONITOR_THREAD_TIMEOUT: Number of seconds the service is stopped if the polling is not working. Minimum and default=300, maximum=600.
HARD_SHUTDOWN_TIMEOUT: Number of seconds the process continues before a hard shutdown. Maximum and default=30, no minimum.
GRACEFUL_SHUTDOWN_TIMEOUT: Number of seconds the process continues during a graceful shutdown. Allows a service to continue processing in the background. Default=14400 (4 hours), minimum=3600 (1 hour), maximum=86400 (1 day).
IOR-FILE_NAME: Path to the service's generated data file. Default is C:\\IOR.txt.
SERVICES_STDOUT_FILE_PATH: Path to the service's standard output file location. Default is C:\\DAS_stdout.txt.
SERVICE_STDERR_FILE_PATH: Path to the service's standard error file location. Default is C:\\DAS_stderr.txt.

You set process monitor logging levels in remoteServiceLog4jConfig.xml (see Configuring Log Properties for Troubleshooting on page 228).
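As a sketch only, a DASprocessmonitor.properties file assembled from the properties in Table 4 might look like this. The property names are those listed above, but the values and exact file layout are assumptions; verify them against the file installed in \BIPlus\common\config:

MONITOR_THREAD_INTERVAL=30
MONITOR_THREAD_TIMEOUT=300
HARD_SHUTDOWN_TIMEOUT=30
GRACEFUL_SHUTDOWN_TIMEOUT=14400
IOR-FILE_NAME=C:\\IOR.txt
SERVICES_STDOUT_FILE_PATH=C:\\DAS_stdout.txt
SERVICE_STDERR_FILE_PATH=C:\\DAS_stderr.txt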

Hyperion Interactive Reporting Service Process Monitor Event Thresholds


You set threshold events to trigger process monitors to stop and restart services. Threshold events for Hyperion Interactive Reporting Service are in server.xml in a property list called BQ_EVENT_MONITOR_PROPERTY_LIST. Set the first property, EVENT_MONITORING, to ON to enable threshold event usage. Comment out or delete the thresholds not in use.
Table 5 Threshold Events for Hyperion Interactive Reporting Service Process Monitors

EVENT_MONITORING: Set to ON to use the following events.
MAXIMUM_DOCUMENTS_THRESHOLD: Number of Interactive Reporting documents retrieved.
MAXIMUM_JOBS_THRESHOLD: Number of Interactive Reporting jobs run.
MAXIMUM_UP_TIME_THRESHOLD: Total service running time since its first request.
SPECIFIC_SHUTDOWN_THRESHOLD: Time of day that the service is not available, in minutes after midnight. For example, 150 means 2:30 AM.
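The exact markup used in server.xml is installation-specific, so the following is only a conceptual sketch of the property list; the property names come from Table 5, but the element syntax is an assumption, not the literal file format:

<!-- BQ_EVENT_MONITOR_PROPERTY_LIST (hypothetical markup) -->
<property name="EVENT_MONITORING">ON</property>
<property name="MAXIMUM_DOCUMENTS_THRESHOLD">500</property>
<!-- comment out or delete thresholds that are not in use -->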


Hyperion Interactive Reporting Data Access Service Process Monitor Event Thresholds
You set threshold events to trigger process monitors to stop and restart the service. Threshold events for Hyperion Interactive Reporting Data Access Service are in server.xml in a property list called DAS_EVENT_MONITOR_PROPERTY_LIST. Set the first property, EVENT_MONITORING, to ON to enable threshold event usage. Comment out or delete the thresholds not in use.
Table 6 Threshold Events for Hyperion Interactive Reporting Data Access Service Process Monitors

EVENT_MONITORING: Set to ON to use one of the following events.
MAXIMUM_RELATIONAL_PROCESS_THRESHOLD: Number of relational database process requests, including Oracle, SQL Server, Sybase, DB2, and so on.
MAXIMUM_MDD_PROCESS_THRESHOLD: Number of MDD database process requests, including Essbase, MSOLAP, SAP, and so on.
MAXIMUM_RELATIONAL_OTHER_THRESHOLD: Number of all other relational database requests, such as stored procedure calls and get function lists.
MAXIMUM_MDD_OTHER_THRESHOLD: Number of all other MDD database requests, such as build outline, get members, and show values.
MAXIMUM_UP_TIME_THRESHOLD: Total service running time since its first request.
SPECIFIC_SHUTDOWN_THRESHOLD: Time of day that the service is not available, in minutes after midnight. For example, 150 means 2:30 AM.

Starting Services with Process Monitors


To start services with process monitors, use the start scripts in \BIPlus\common\config, but do not pass command line parameters. (To start services without a process monitor, pass any command line parameter, such as nopm.) Each start script is an executable Bourne shell script file with the extension .sh (UNIX) or a batch file with the extension .bat (Windows). Each start script name begins with the string start, followed by the service name, followed by the extension .sh or .bat. Typical start script names:

startIntelligenceService.bat: Hyperion Interactive Reporting Service
startDataAccessService.sh: Hyperion Interactive Reporting Data Access Service

For Windows, you can start Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service with process monitors in the Services tool. In the Services tool, in addition to the Workspace Server, there is a Windows service for each Hyperion Interactive Reporting Service and for each Hyperion Interactive Reporting Data Access Service that uses process monitors.
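For example, on UNIX (illustrative invocations; see the scripts in \BIPlus\common\config):

./startDataAccessService.sh         # no parameter: starts the service under its process monitor
./startDataAccessService.sh nopm    # any parameter, such as nopm, starts the service without a monitor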


Quick Guide to Common Administrative Tasks


Use this section to quickly locate instructions for common administrative tasks. Table 7 lists tasks involved in initially configuring and populating your system and the system component used for each task. Table 8 gives the same information for tasks involved in maintaining a system. These tables do not include all tasks covered in this Administrator's Guide.
Table 7 System Configuration Tasks (component shown in parentheses)

Start or stop a server: Starting and Stopping Services on page 40; Stopping Services on page 46
Provision users, groups, and roles (User Management console): Hyperion System 9 Shared Services User Management Guide
Configure generated Personal Page (Explore module): Configuring the Generated Personal Page on page 165
Configure Broadcast Messages (Explore module): Understanding Broadcast Messages on page 166
Provide optional Personal Page content (Explore module): Providing Optional Personal Page Content to Users on page 168
Provide graphics for bookmarks (Explore module): Configuring Graphics for Bookmarks on page 168
Create custom calendars for scheduling jobs (Calendar Manager): Creating Calendars on page 154
Create public job parameters (Schedule module): Administering Public Job Parameters on page 159
Create or modify printers or directories for job output (Administer module): Managing Physical Resources on page 56
Define database servers (RSC): Adding Database Servers on page 184
Configure services (RSC, LSC): Chapter 8, Configuring RSC Services; Chapter 9, Configuring LSC Services
Set system properties (Administer module): Setting General Properties on page 54
Configure servlets (Servlet Configurator): Chapter 10, Configuring the Servlets


Table 8 System Maintenance Tasks (component shown in parentheses)

Change which services run in a server: Starting Services Individually on page 43 or Starting Services and server.dat on page 43
Modify services (RSC, LSC): Chapter 8, Configuring RSC Services; Chapter 9, Configuring LSC Services
Modify Job Service (RSC): Managing Jobs on page 189
Modify system properties (Administer module): Setting General Properties on page 54
Delete services (RSC or the installation program): Chapter 8, Configuring RSC Services, or the Hyperion System 9 BI+ Installation Guide
Modify users, groups, or roles (User Management console): Hyperion System 9 Shared Services User Management Guide
Inactivate obsolete users (User Management console): Hyperion System 9 Shared Services User Management Guide
Create MIME types (Administer module): Defining MIME Types on page 59
Modify MIME types (Administer module): Modifying MIME Types on page 59
Inactivate obsolete MIME types (Administer module): Inactivating or Re-activating MIME Types on page 60
Add hosts (RSC): Adding Hosts on page 182
Add services (installation program): Hyperion System 9 BI+ Installation Guide
Configure common Metadata Services (Administer module): Host Shared Services Properties on page 205


CHAPTER 3 Administer Module
Use the Administer module to manage settings that control how end users interact with Hyperion System 9 BI+ Workspace.
Note: You can use various methods to perform most Administer module tasks. For a complete list of all toolbars, menus, and shortcut menus, see the Hyperion System 9 BI+ Workspace Getting Started Guide.

See also Chapter 4, Using Impact Management Services, and Chapter 5, Managing Shared Services Models.

In This Chapter

Overview . . . 54
Setting General Properties . . . 54
Managing Physical Resources . . . 56
Managing MIME Types . . . 59
Managing Notifications . . . 60
Managing SmartCuts . . . 63
Managing Row-Level Security . . . 64
Tracking System Usage . . . 65


Overview
The Administer module, available from the Workspace View pane and toolbar, enables you to manage Workspace properties, performance, and user interaction. Toolbar icons represent Administer module panel items.
Table 9 Activities Available from Administer Module Toolbar Icons and Panel Items (a toolbar icon accompanies each panel item)

General Properties: Define general system and user interface properties
User Management console: Provision users, groups, and roles
Physical Resources: Specify printers and output directories for job output
MIME Types: Create, modify, and delete Workspace MIME types
Notifications: Define mail server properties and how end users receive e-mail notifications about jobs
SmartCuts: Specify how to construct SmartCuts (shortcuts to imported documents in Workspace) for inclusion in e-mail notifications
Row-level Security: Manage row-level security settings in data sources used by Interactive Reporting documents
Usage Tracking: Track system usage and define related properties
Event Tracking: Track events, such as document opens, document closes for selected MIME types, and jobs run

Setting General Properties


To set general and user interface properties:

1 Navigate to Administer and select General.
2 Modify properties.
3 Click Save Properties.


General Properties

System Name: Distinguishes the current installation from other Workspace installations. (An installation is defined as a system served by one GSM.)
Broadcast Messages: Specifies the folder in which to store broadcast messages.
Enable users to use Subscription and Notification: Activates import event logging, which enables Event Service to identify subscription matches and notify users of changes in subscribed items. (Effective Date: when logging begins.)
Enable Priority Ratings: Enables users to set priority ratings on items imported to the Explore module.
Enable Harvesting: Activates Harvester Service, which enables users to use Impact Manager to extract and save Interactive Reporting metadata to relational data sources for use in other formats (see Chapter 4, Using Impact Management Services).

User Interface Properties

Display all users, groups, or roles in the system: Lists all available users, groups, and roles when end users set access control on repository items. Selecting this option may impact system performance.
List up to nn users, groups, or roles: Number of users, groups, or roles displayed when end users set access control on repository items. The default setting is 100. Specifying too low a number may prevent end users from seeing all users, groups, and roles to which they have access.

Managing Users
For information on managing users, groups, and roles, see the Hyperion System 9 Shared Services User Management Guide.

Assigning Hyperion System 9 BI+ Default Preferences


User Management Console enables users with Provisioning Manager and Explorer roles to set the default folder, desktop folder, new document folder, and start page application preferences for users and groups. Individual and group preferences have precedence over default preferences. For default preferences to take effect, users and groups must have the roles and permissions to access the specified folders and interface elements.

To assign default preferences for Hyperion System 9 BI+:


1 Log on to User Management console using a user ID provisioned with Provisioning Manager and Explorer
roles.

2 Access the View pane.


3 Expand the Projects node until a BI+ application is displayed.
4 Right-click the application name and select Assign Preferences.

A three-step wizard is displayed in the Process bar.

5 For step 1 of the wizard, Select Users, select Available Users or Available Groups.
6 From the left panel, select user names or group names and click the right arrow.
To select consecutive names, select the first name, press and hold down Shift, and select the last name. To select names that are not consecutive, press and hold down Ctrl, and select each item. Use Add All to select all names.

7 Repeat steps 5 and 6 to select a combination of users and groups.
8 When all user and group names are displayed in Selected Users and Groups, click Next.
9 For step 2 of the wizard, Manage Preferences, specify these default preferences for the selected users and groups:

Default Folder: Repository location of the default folder.
Desktop Folder: Used as a scratch pad or to store items for easy access from the Viewer module. From the Viewer module, all Desktop folder items are displayed as icons. From the Explore module, the name of the folder specified as the default Desktop folder is displayed; for example, /Sample Content, not /Desktop.
New Document Folder: Default folder in which the new document wizard searches for valid data sources, that is, Web Analysis database connection files and Interactive Reporting documents.
Start Page: Hyperion System 9 BI+ interface displayed after logging on. Select None, Explore, Document, Favorite, Desktop, Enterprise Metrics, or Scorecard. If you select Explore or Document, you must specify a repository location.

10 When all preferences are specified, click Next.
11 For step 3 of the wizard, Finish, choose among three tasks:

To configure options for another application, select one from the View pane.
To change preferences for currently selected users and groups, click Back.
To specify another set of users and groups and set their preferences, click Continue.

Managing Physical Resources


Physical resources, such as printers and directories, are used as destinations for Interactive Reporting and Production Reporting job output. Physical resources must be accessible to each server that is running Hyperion Interactive Reporting Service. You should assign access control and notify end users about which physical resources to use. Users should see only the physical resources that they can use.


Viewing Physical Resources


To view physical resources defined for Workspace:
1 Navigate to Administer and select Physical Resources.
2 From Display, select All, Only Printer, or Only Output Directory, and click Update List.

To view properties settings for physical resources, click a resource name.

Access Control for Physical Resources


Unlike other Workspace objects, which offer several access levels, physical resources have only two access levels: Access and No Access. You add roles, groups, or users to the Access Privileges list and set their access privileges as you do for other objects. See the Hyperion System 9 BI+ Workspace User's Guide for instructions on setting access privileges.

Adding Physical Resources


To add physical resources:
1 Navigate to Administer and Physical Resources. 2 In the Content pane, click Go next to Add Printer or Add Output Directory. 3 Specify required properties and optional properties.
See Printer Properties on page 58 and Output Directory Properties on page 58.
Note: Physical resources must be accessible to each server on which Hyperion Interactive Reporting Service is running.

4 Set access control for this resource (see Access Control for Physical Resources on page 57).
5 Click Finish.

Modifying Physical Resources


To modify physical resources:
1 Navigate to Administer and select Physical Resources. 2 Click Modify next to a resource name or select the resource name. 3 Make changes and click OK.
See Access Control for Physical Resources on page 57, Printer Properties on page 58 and Output Directory Properties on page 58.


Deleting Physical Resources


To delete physical resources:
1 Navigate to Administer and select Physical Resources. 2 Click Delete next to a resource name. 3 Confirm the deletion when prompted.

Printer Properties
Printers are used for Interactive Reporting job output:

Type: Read-only property; set as Printer.
Name: Name for the printer; visible to end users.
Description: Helps administrators and end users identify the printer.
Printer Address: Network address of the printer (for example, \\f3prt\techpubs); not visible to end users.

Output Directory Properties


Output directories are used for Interactive Reporting and Production Reporting job output. They can be located locally or on a network and can be FTP directories:

General properties:

Type: Read-only property; set as Output Directory.
Name: Name for the output directory; visible to end users.
Description: Helps administrators and end users identify the directory.
Path: Directory's full network path (for example, \\apollo\Inventory_Reports).

FTP properties:

Directory is on FTP Server: Enable if the output directory is located on an FTP server, and set these options:

FTP server address: Address of the FTP server where the output directory is located (for example, ftp2.hyperion.com).
FTP username: Username used to access the FTP output directory.
FTP password: Password for FTP username.
Confirm password: Retype the password entered for FTP password.


Managing MIME Types


Before you can import items into the repository, their MIME types must be defined in Workspace. Although Workspace has many built-in MIME types, you may need to define others.

You can associate a MIME type with multiple file extensions. For example, you can associate the extensions .txt, .bat, and .dat with the text MIME type. Multiple MIME types can also use one extension. For example, if your organization uses multiple versions of a program, you can define a MIME type for each version, even though the file names of all versions use the same extension. When users open files with extensions that belong to multiple MIME types, they are prompted to select a program executable.

In the MIME type list, traffic-light icons indicate active (green) or inactive (red) MIME types; see Inactivating or Re-activating MIME Types on page 60.

Defining MIME Types


To define MIME types:
1 Navigate to Administer and select MIME Types. 2 At the bottom of the content pane, click Go (to the right of Add MIME Type). 3 Supply a name and description. 4 In the file extensions box, enter an extension and click >> .
When entering extensions, type only the extension letters. Do not include a period (.).

5 Click Finish.
Note: Newly defined MIME types are active by default.

Modifying MIME Types


To modify MIME types:
1 Navigate to Administer and select MIME Types. 2 In the listing of MIME types, click Modify Properties. 3 Change properties.
To remove a file extension, select it in the <Extensions> list and click <<.

4 Click OK.


Inactivating or Re-activating MIME Types


To prevent items from being imported to the repository, inactivate their MIME types. Although repository items with inactive MIME types are still accessible, end users must specify which programs to use when opening them. You can re-activate an inactive MIME type at any time.

To inactivate or re-activate MIME types:


1 Navigate to Administer and select MIME Types.
2 In the MIME type list, click Modify Properties.
3 Change the Active setting:

To inactivate a MIME type, clear Active and click OK. Its traffic-light icon changes to red.
To re-activate a MIME type, select Active and click OK. Its traffic-light icon changes to green.

Deleting MIME Types


Unlike inactivating MIME types, deletion is permanent and affects associated items. You cannot import files that have extensions associated with a deleted MIME type. For items associated with a deleted MIME type, the text unknown file type is displayed instead of MIME type icons. When users open these items, they are prompted to select a program executable. You can delete MIME types that you define; however, you cannot delete built-in Workspace MIME types.

To delete MIME types:


1 Navigate to Administer and select MIME Types.
2 Click Delete next to a MIME type.

Managing Notifications
Notification properties control how users receive notifications about the jobs and documents to which they subscribe:

Understanding Subscriptions and Notifications on page 61
Modifying Notification Properties on page 62


Understanding Subscriptions and Notifications


Subscriptions and notifications are handled by Event Service. Topics that discuss how Event Service handles subscriptions and notifications:

Subscription Types on page 61
How Event Service Obtains Information on page 61
Notification Mechanisms on page 62

Subscription Types
Subscription types that users can subscribe to and receive notifications about:

New or updated versions of items
Changed content in folders
Job completion
Job exceptions

Independent of subscriptions, Event Service sends notifications to these users:

Owners of scheduled jobs, when job execution finishes
Users who run background jobs, when job execution finishes

How Event Service Obtains Information


When users subscribe to items or folders, Workspace sends subscription information through LSM to Event Service, which adds the subscriptions to its subscriptions list.

Repository Service maintains a list of imported and updated objects, which includes all imported items, folders, and job output; modified item properties; updated versions; and object metadata. Repository Service includes in its list both imported or modified items or folders, and the folders that contain them.

Every 60 seconds, Event Service obtains Repository Service's list of new and modified items and compares it to the subscription list. Event Service then sends notifications to subscribed users. Repository Service discards its list after giving it to Event Service, which, in turn, discards the list after it notifies subscribers of changes.

Other services notify Event Service when they complete actions that may trigger subscriptions, such as successful job execution. Event Service checks these events against the subscription list and sends notifications to subscribers.


Notification Mechanisms
Ways in which Event Service notifies users:

Send e-mails with embedded SmartCuts to notify users about changes to items or folders, new report output, job completion, or exception occurrences. Optionally, Event Service may send file attachments, based on how users chose to be notified on the Subscribe page.
Display notifications of completed scheduled jobs or background jobs in the Schedule module.
Display notification of job completion after a job runs in the foreground.
Display a red-light icon in the Exceptions Dashboard when output.properties indicates that exceptions occurred.

When exceptions occur, the importer of the file sets properties to indicate the presence of exceptions and to specify exception messages. The importer is usually Job Service, and the file is usually job output. Exceptions can be flagged by any of these methods:

Production Reporting code
Manually, by users who import files or job output
APIs that set exception properties on files or output

Hyperion Interactive Reporting Service does not support exceptions, but you can set exceptions on Interactive Reporting documents using the API or manual methods. Users choose whether to include the Exceptions Dashboard on Personal Pages and which jobs to include on the Exceptions Dashboard.

Modifying Notification Properties


To modify Notification properties:
1 Navigate to Administer and select Notifications.
2 Modify Notification properties and mail server options:

Note: If you change the Enable e-mail attachment, Maximum attachment size, Mail server host name for sending e-mail notifications, or E-mail account name for sending e-mail notifications property, you must restart Core Services for the setting to take effect. For information on starting services, see Starting Core Services on page 41.

Notification properties:

Enable e-mail attachment: Allows end users to send file attachments with their e-mail notifications. If jobs generate only one output file, that file is attached to the e-mail. If jobs generate multiple output files including PDF files, the PDF files are attached to e-mails; otherwise, no files are attached.
Maximum attachment size: Maximum allowed size for attachments, in bytes.


Time to live for entries in the notification log: Number of minutes after which events are removed from the notification log and are no longer displayed in the Explore module.
Expiration times for scheduled jobs and background jobs.

Mail server options:

Mail server host name for sending e-mail notifications.

Note: The e-mail service must be installed on the Financial Management Server computer to e-mail batch output correctly.

E-mail account name for sending e-mail notifications.

Note: To send e-mails with embedded SmartCuts, you must also set SmartCut properties.

Require authentication: Makes authentication (ASMTP) mandatory. Enter a user name and password when enabled. Default is disabled.

After specifying notification properties, you can click Send Test E-mail to view your mail server entries and enter a destination e-mail address.

3 Click Save Properties.

Managing SmartCuts
SmartCuts are shortcuts, in URL form, to imported documents in Workspace. SmartCut properties are used to construct SmartCuts that are included in e-mail notifications. SmartCut URLs take this form:

http://Host:IP Port/workspace/browse/get/Smartcut

For example:

http://pasts402:19000/workspace/browse/get/Patty/Avalanche_CUI_Style_Guidelines.pdf/

(Alternatively, a SmartCut may start with https instead of http.)

To modify SmartCut properties:


1 Navigate to Administer and select SmartCuts.
2 Modify SmartCut properties:

Note: If you change any SmartCut property except the URL encoding, you must restart the Workspace server and Job Service for the settings to take effect.

Name: Web component for the SmartCut.
Description: Workspace description.


Host: Host on which UI Services reside.
IP Port: Port number on which Workspace runs.
Root: Web application deployment name for Workspace, as set in your Web server software. Typically, this is workspace/browse. The last segment (browse) must match the servlet name specified during installation.

Encoding for URLs: How Workspace encodes (and decodes) URLs. This property can take one of two values (a worked example follows this procedure):

Default (Hexadecimal): Uses the standard encoding of URLs as defined in RFC 2396. The subset of ASCII characters that are valid in URLs are left as-is. The space character is converted to %20. All other characters are converted to the 3-character string %xy, where xy is the two-digit hexadecimal representation of the lower 8 bits of the character. Because this encoding uses only the lower 8 bits of a character, it is suitable only for Latin-1 language installations.
UTF-8: Uses the encoding of URLs as recommended in RFC 2718. Non-allowable characters are first converted into UTF-8, and each resulting byte is converted to its %xy representation. This encoding must be used for installations supporting non-Latin-1 languages or installations using the WebSphere or Sun ONE native servlet engines.

Protocol for SmartCuts generated in e-mail notifications: HTTP or HTTPS.

3 Click Save Properties.
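For example, a document named Q1 Report.pdf yields the SmartCut path segment Q1%20Report.pdf under either setting, because the space becomes %20. A name containing the Latin-1 character é, however, encodes as %E9 under Default (Hexadecimal), which uses the lower 8 bits of the character, but as %C3%A9 under UTF-8, which encodes the character's two UTF-8 bytes.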

Managing Row-Level Security


Row-level security enables users to view only those records that match their security profile, regardless of their search criteria. It enables administrators to tag data at the row level of a database, thus controlling who has read access to information. Row-level security is critical for applications that display sensitive data, such as employee salaries, sales commissions, or customer details, and for organizations that distribute information to their user community over the Internet and intranets.

If you want to implement row-level security in Workspace, keep these points in mind:

At least one Hyperion Interactive Reporting Data Access Service instance must be configured to access the data source storing your row-level security information.
The database client library should be installed on the computer where Hyperion Interactive Reporting Data Access Service is running.
The data source for the Workspace repository that holds the row-level security table information must be configured. For security reasons, the user name and password used to access the data source should differ from those used for the Workspace user account.

See Chapter 27, Row-Level Security in Interactive Reporting Documents, for information about implementing row-level security in Interactive Reporting documents.


Row-level security properties are stored in the repository; however, the rules about how to give access to the data are stored in the data source.

To modify row-level security properties:


1 Navigate to Administer and select Row Level Security.
2 Modify these row-level security properties:

Enable Row Level Security: Row-level security is disabled by default.
Connectivity: Database connectivity information for the report source data.
Database Type: Type of database that you are using. Database types available depend on the connectivity selection.
Data Source Name: Host of the report data source database.
Username: Default database user name used by Job Service for running Production Reporting jobs on this database server; used for jobs that were imported with no database user name and password specified.
Password: Valid password for Username.

3 Click Save Properties.

Tracking System Usage


Usage tracking records information about Workspace activities as they occur and provides a historical view of system usage. This information answers questions such as:

Who logged in yesterday?
Which Workspace reports are accessed most frequently?

You can configure your system to track numerous activities. For example, you can track opening, closing, and processing Interactive Reporting documents, or you can track only opening Interactive Reporting documents.

Activities are recorded as events in the repository database. Events are recorded with pertinent details and information that distinguishes them from each other. Event times are stored in GMT. Events are deleted from the database after a configurable time frame.

Usage Service must be running to track events set in the user interface. Usage Service can be replicated, and all Usage Services access one database. The user name and password used to access the usage tracking information may differ from those used for Workspace. Hyperion recommends that usage tracking use its own schema in the repository database; however, an alternate schema is not required. For more information about configuring the usage tracking schema, see the Hyperion System 9 BI+ Installation Guide for Windows and UNIX.

Topics that provide detailed information about tracking usage and events:

Managing Usage Tracking on page 66
Tracking Events and Documents on page 66
Sample Usage Tracking Reports on page 67

Managing Usage Tracking


Usage tracking is managed through the Administer module and LSC. All configurable properties, except run type, are managed in the Administer module. To modify the run type, see Common LSC Properties on page 198.

To manage usage tracking:


1 Navigate to Administer and select Usage Tracking.
2 Change these properties:

General preferences:

Usage Tracking Active: Select to turn on usage tracking.
Mark records ready for deletion after _ days: Number of days after which usage tracking events are marked for deletion by the garbage collection utility. Default is 30 days.
Delete records every _ days: Number of days after which the garbage collection utility runs. Default is 7 days.

Connectivity preferences: Username and password are populated from the usage tracking database and should be changed only if the database is moved.

3 Click Apply.

Tracking Events and Documents


Usage Service keeps records about logon instances, document opens, document closes for selected MIME types, jobs run, job output views, and queries processed by Workspace. Usage Service must be running to track events. By default, events are not tracked.

To track events:
1 Navigate to Administer and select Event Tracking. 2 Select an event to track it:

System Logons Database Logons Timed Query Event Open Interactive Reporting Document Process Interactive Reporting Document Close Interactive Reporting Document Run Interactive Reporting Job View Interactive Reporting Job Output Run Production Reporting Job


View Production Reporting Job Output
Run Generic Job
View Generic Job Output

3 To track documents, move one or more available MIME types to the Selected MIME Types list.
Tracking occurs each time a document of the selected MIME types is opened.

4 Click Apply.

Sample Usage Tracking Reports


Sample usage tracking reports provide immediate access to standard Workspace usage reports. You can modify the standard reports or create your own. The Interactive Reporting document that generates the usage tracking reports, sample_usage_tracking.bqy, is in the \Root\Administration folder in Explore. A copy of this file is in the BIPlus\docs\en installation folder.

To view the \Administration folder, from Explore in Viewer, select View > Show Hidden.

Caution! The sample reports could contain sensitive company information when used with your data. Use access control when importing the reports so only the intended audience has access.


CHAPTER 4 Using Impact Management Services

Impact Management Services, introduced with the Impact Manager module, enable you to collect and report on metadata and to update the data models that imported documents use. Impact Management Assessment Services and Impact Management Update Services perform these tasks. Task results are displayed in the Show Task Status and Show Impact of Change interactive reports.

In This Chapter

About Impact Management Services . . . 70
Impact Management Assessment Services . . . 70
Impact Management Update Services . . . 71
Running the Update Services . . . 72
Update Data Model Transformation . . . 72
Access to Impact Management Services . . . 73
Synchronize Metadata Feature . . . 73
Update Data Model Feature . . . 75
Accessing Updated Documents . . . 78
Connecting Interactive Reports . . . 78
Using Show Task Status Interactive Report . . . 80
Using Show Impact of Change Interactive Report . . . 82
Creating the New Data Model . . . 84
Changing Column Data Types . . . 96
Changing User IDs and Passwords for Interactive Reporting Documents . . . 97
Service Configuration Parameters . . . 98


About Impact Management Services


The Impact Manager module consists of two services, Impact Management Assessment Services and Impact Management Update Services, and provides two interactive reports: the Show Task Status interactive report and the Show Impact of Change interactive report.

Impact Management Services are fault tolerant; for example, they detect and finish tasks that were left incomplete following an unplanned system shutdown. Their deployment is flexible; for example, the feature can be installed on computers in a Hyperion System 9 cluster or on a computer recently added to the cluster. They are also scalable; for example, they can be run on one computer or on multiple computers to accommodate escalating performance requirements.

Impact Management Assessment Services


Impact Management Assessment Services parse imported documents to extract and store metadata. Metadata includes sections that are in the document; tables and columns that are used by each data model, query, and results section; and section dependencies (for example, Results A depends on Query A depends on Data Model A).

About Impact Management Metadata


Extracted metadata is stored in metadata tables, which share a database with the repository tables. Impact Management Assessment Services can be invoked automatically when a new document is imported or when a new version of a document is imported. Configure Impact Management Assessment Services by selecting Enable Harvesting in Manage General Properties of the Administer module, or configure the service to synchronize at a specific time (see Synchronize Metadata Feature on page 73).

If harvesting is enabled, Impact Management Assessment Services examine the task queue at set intervals, by default every 30 seconds. Impact Management Assessment Services can also synchronize the metadata tables with the repository tables. The synchronize operation harvests documents that have not been harvested, or that have been reimported since they were last harvested.

The Metadata Service


The Metadata Service is an interface to the metadata tables maintained by Impact Management Assessment Services. It is also used by Impact Management Update Services, and by the user interface of the services.


The Metadata Service can perform various queries on the metadata, such as determining which documents have been harvested, retrieving section names from a document for a particular section type, and retrieving sections that depend on a particular section.

Impact Management Update Services


Impact Management Update Services are responsible for updating imported documents according to prewritten instructions, referred to as transformations.
Note: The only transformation available in this release is Update Data Model. See Update Data Model Transformation on page 72.

The update services work in the following way:

1. Original documents are imported.
2. Documents are harvested as part of import or through a synchronize operation.
3. Documents are used to perform daily tasks until the database requires change.
4. Use an Impact of Change report to identify the documents impacted by proposed changes.
5. Create data models to update impacted imported documents.
6. Documents with replacement data models are harvested as part of import or through a synchronize operation.
7. Transformation parameters are specified.
   a. Select a document. The selection criterion is that the document contains an impacted data model.
   b. Select a replacement data model.
   c. The Impact Manager module displays Interactive Reporting documents that match the selection criteria.
   d. Documents selected from the list are composed into a task and are queued for transformation. Currently only Interactive Reporting documents are processed.
8. Transformation is applied to elements of the Impact Manager task.
   a. Documents are converted to XML.
   b. Transformation is performed on the XML.
   c. The XML is converted back to Interactive Reporting documents.
   d. Transformed documents are reimported as new versions of the original documents.
9. Documents are available for use against the new database definition.


Running the Update Services


The user interface uses the Impact Management Services to compose tasks and to queue them in the repository database. Recording the request in the database is a relatively quick operation, and the response time to the user interface is fast.

Concurrently, in a separate activity, the Impact Management Services process the queue of tasks and record the results in database tables and service logs, if the logging level is set high enough. Where possible, tasks are divided in a manner that enables them to be performed in parallel, by multiple worker threads, on multiple service instances.

Update Data Model Transformation


The Update Data Model transformation is used to replace one or more data model sections with another data model. The transformation is most useful when a database changes, causing the documents that use the database to break. Transformation can also reduce the number of distinct data models in use, to accommodate future upgrades.

Link Between Data Models and Queries


Data model sections are referred to only by query sections. Therefore, as long as the new data model can be attached to the query sections correctly, the rest of the document continues to work as expected.

The coupling between a query section and a data model section is through symbolic references, based on the names for the tables and columns exposed by the data model section. A small number of more complex dependencies exist regarding filters; however, coupling basically relies on names. If two data models expose the names required by a query (for example, the names used in the Request and Filter lines), then either data model can support that query. If data model A exposes equivalent or more names than data model B, A is a valid replacement for B.

The concept of exposed names is vital. Data model sections detect names of tables and columns as defined in the database and expose a second set of internal names that are similar to the database names. Each table and column name is exposed to the dependent query section, typically by using upper and lower case letters and replacing underscores with spaces. Duplication is necessary because a table may be present multiple times in a data model, and a query must be able to reference each instance unambiguously. Therefore, if the Dept table is displayed twice in a data model, the query sees two tables named Dept and Dept2. These are default names that can be modified (see Renaming Tables or Columns on page 84).

The Update Data Model transformation leverages this symbolic coupling, using names that are independent of the database names to perform its tasks.


Access to Impact Management Services


Only users who are assigned the BI+ Administrator role and who hold appropriate licensing can access the Impact Management Services. After logging on to Workspace, select the Impact Manager module.
Note: When one of the Impact Manager module options is selected, other module content is hidden in the content area.

Synchronize Metadata Feature


The Synchronize Metadata feature enables you to ensure that metadata in the repository is up-to-date with the documents in the repository. The synchronize action is not required if Enable Harvesting is selected in Manage General Properties of the Administer module. The only option for the feature is when to perform the synchronization: request that it run now, or schedule the operation to occur later. See Using the Run Now Option on page 74 and Using the Schedule Option on page 74.
Note: Scheduling the synchronize operation with a date and time in the past, is equivalent to requesting the operation to run now.

Figure 1    Synchronize Metadata User Interface

Using the Run Now Option


This option enables you to process the synchronization now.

To synchronize the metadata to run immediately:


1 From Impact Manager, select Synchronize Metadata.
2 Select Run now.
3 Click Submit.
A Confirmation dialog box is displayed.

Using the Schedule Option


This option enables you to process the synchronization at a specified date and time in the future.

To schedule the synchronization of the metadata for later:


1 From Impact Manager, select Synchronize Metadata.
2 Select Schedule.
The date and time drop-down lists and calendar are displayed.

3 Select a date and time.
4 Click Submit.


A Confirmation dialog box is displayed.

Whether the synchronization is run immediately or scheduled for the future, clicking Submit causes the Impact Management Assessment Services to receive the request and return a numeric request identifier. The identifier is used to filter the Impact Management Assessment Services task log. See Using Show Task Status Interactive Report on page 80. When the Impact Management Assessment Services synchronize the metadata, each imported document is compared with the metadata tables. If an imported document has been modified since it was last parsed, or is not yet in the metadata tables, the document is added to the queue of documents to be parsed next.

5 Click OK to return to the main Impact Manager module screen.
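The comparison amounts to a freshness check per document. A minimal sketch, assuming hypothetical field names:

metadata_tables = {"doc-1": {"last_parsed": 100}}   # document uuid -> metadata entry
imported_documents = [
    {"uuid": "doc-1", "modified": 150},   # changed since last parse: queue it
    {"uuid": "doc-2", "modified": 90},    # not yet in the metadata tables: queue it
]

def needs_parsing(document):
    entry = metadata_tables.get(document["uuid"])
    return entry is None or document["modified"] > entry["last_parsed"]

parse_queue = [d for d in imported_documents if needs_parsing(d)]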


Update Data Model Feature


The Update Data Model transformation feature enables data models in documents to be updated to reflect changes in underlying databases. You must select which data models are to be updated and supply new data models to replace the original. This process is described in these topics:

Specifying a Data Model on page 75
Viewing Candidates to Update on page 76
Reviewing the Confirmation Dialog Box on page 77

Specifying a Data Model


This procedure enables you to specify an original and a replacement data model.
Note: The documents that contain both data models must have been harvested. If a selected document has not been harvested, an error is displayed.

Figure 2    Specifying a Data Model

To specify the original data model and the replacement:


1 From Impact Manager, select Update Data Model.
2 Click Browse next to Please select file containing original data model.
3 Navigate to the file, and click OK.
File details are displayed.

4 Choose a data model from Select original data model from list.
Data model sections are created when a query section is created. Because the data model section is not visible as a unique section, users may not be aware that data models are in separate sections under default names.


Use Promote to Master Data Model to make a data model section visible. To assist with specifying which data model is to be updated, query names are displayed after the data model in the drop-down list. See Link Between Data Models and Queries on page 72.

5 Repeat steps 2-4 to select the replacement data model.

6 When both data models are selected, click Next to go to Step 2, Candidates.

Viewing Candidates to Update


This procedure enables you to view a list of candidates that match the original data model and to select data models to update.
Note: For convenience, the Update Data Model transformation service searches for all data models that are identical to the data model to be replaced. Any or all of them can be updated simultaneously.

Figure 3    Candidates to Update

To use the candidate list to select data models for update:


1 Select a document in the list to be updated.
Other options for selecting data models for update are:

Click Select All to update all candidates.
Use Ctrl+click or Shift+click to highlight and select individual or all documents in the list.

At least one data model must be selected before clicking Finish.

2 Optional: To return to Step 1, Specify Data Model, click Back.
3 Optional: To activate the sort feature, in the candidate list table, click a column header.
For example, click Document to sort candidates by document title. The sort feature reorders the selected candidates to be updated.

4 Click Finish.
A Confirmation dialog box is displayed.

Reviewing the Confirmation Dialog Box


The dialog box provides a numeric request identifier or a task reference number that can be used to filter the Impact Management Update Services task log. See Using Show Task Status Interactive Report on page 80. To close the Confirmation dialog box and return to the main Impact Manager module screen, click OK.


Accessing Updated Documents


After the Update Data Model process runs, the changed documents are available to users at their next logon. If Workspace sessions were active when Update Data Model transformed documents, and one or more documents were referenced from your session, you must refresh Workspace to synchronize the document references to the latest versions in the repository. If multiple document references are active, it may be more convenient to log on again to ensure that all aspects of the Workspace session are updated.

Connecting Interactive Reports


The Impact Management Services include two prebuilt interactive report dashboards to report the impact of change and the current status of tasks (transformations and harvests). The dashboards use the Hyperion System 9 BI+ platform repository as a data source, so the dashboards must be configured correctly before they can report on the repository.

Step 1: Configuring the Hyperion Interactive Reporting Data Access Service


Use the LSC to configure the Hyperion Interactive Reporting Data Access Service so the service references the database system that contains the Hyperion System 9 BI+ repository tables. The configuration must match the way that business reporting data sources are configured. For example, if the repository is implemented using SQL Server, the Hyperion Interactive Reporting Data Access Service configuration is displayed as illustrated.

Note: The data source name is metadata, as created in the ODBC configuration, and references the database instance in MS SQL Server.
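As a sanity check outside the product, the DSN can be exercised with any ODBC client. A hypothetical sketch using the pyodbc package (the DSN name metadata comes from the note above; the credentials are placeholders):

import pyodbc

connection = pyodbc.connect("DSN=metadata;UID=repo_user;PWD=repo_password")
print(connection.getinfo(pyodbc.SQL_DBMS_NAME))  # e.g. "Microsoft SQL Server"
connection.close()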

Step 2: Creating Interactive Reporting Database Connections


Use Interactive Reporting Studio to create an Interactive Reporting database connection (.oce extension) that references repository tables using a matching data source name (for example, metadata) as selected in the Hyperion Interactive Reporting Data Access Service configuration.


Step 3: Importing Interactive Reporting Database Connections into Workspace


Import the Interactive Reporting database connection that you created.

To import an Interactive Reporting database connection into Workspace:


1 Log in to Workspace.
2 Select Viewer module and click Explore to view the Root folder.
3 Select View > Show Hidden to display the Administration folder.
4 Expand Administration.
5 Expand Impact Manager.
6 Import the Interactive Reporting database connection created in the Step 2: Creating Interactive Reporting Database Connections procedure.

7 Name the imported file metadata.oce.
8 Specify a default user ID and name to connect reports to the repository tables.
The user ID requires select access to the repository tables.

Step 4: Associating Interactive Reporting Database Connections with Interactive Reports


Associate the Interactive Reporting database connection that you imported.

To associate the Interactive Reporting database connection:


1 From Root > Administration > Impact Manager, select the document named Impact of Change.
2 Right-click and select Properties.
Properties is displayed.

3 Select
4 From the Connection drop-down list, for each Query/DataModel Name, select metadata.oce.
5 From the Options drop-down list, select Use default username/password.
6 Click OK.
7 Repeat steps 1-6 for the document named Task Status.
The Interactive Reporting documents are ready to deliver output.


Using Show Task Status Interactive Report


Show Task Status is an Impact Manager module option that displays the status of tasks performed by the Impact Management Services. The interactive report is based on the logging tables, which list the Impact Management Assessment Services and Impact Management Update Services tasks that have been processed or are currently processing within the Impact Management Services.

To use Show Task Status:


1 Click Show Task Status.
The Task Status interactive report is displayed.

2 Use the Task Status controls.


For example, select a date in the calendar control.

3 Limit the rows returned by using one or a combination of these actions:

Use the calendars to select time ranges for tasks. Task times are recorded in UTC format in the database.

Use the lists to select the user who submitted the tasks and the task statuses.
4 Click to process the query.

Tasks are displayed in a table.


Table 10    Task Status Interactive Report Column Descriptions

Column Name       Description
Task Submitted    Local submit time and date for the task request
Task Type         Type of task request
Req               Task request number
Command           Request command, for example, harvest or DM update
Document          Document name
Ver               Document version number
Stat              Color code for the status: Green = successful, Yellow = pending, Red = failed
Time (ms)         Time taken in milliseconds to perform the request
Proc              Name of processor
Pri               Priority status of the task
Run By            Name of requester
Description       Description of requested task
Path              Path of files for request
Status            Status of request, for example, Execution successful
Task Completed    Local completion time and date for the task request
UTC Offset        Coordinated Universal Time, based on the time zone of the application server. (A computed item extracts the time zone offset from a time string. The offset is used to translate the display of the Task Submitted column into local time. The assumption is that the server and client share a time zone; if this is not the case, the computed item can be edited to reflect the time zone difference between server and clients.)
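The translation performed by the UTC Offset computed item can be approximated as follows (a sketch; the offset value is illustrative, and the real item parses the offset out of the stored time string):

from datetime import datetime, timedelta, timezone

task_submitted_utc = datetime(2006, 5, 1, 12, 0, tzinfo=timezone.utc)
client_offset = timedelta(hours=-8)                 # assumed shared server/client time zone
task_submitted_local = task_submitted_utc + client_offset
print(task_submitted_local.strftime("%Y-%m-%d %H:%M"))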

Using Show Impact of Change Interactive Report


Show Impact of Change is an Impact Manager module option that displays, on the Query Panel, the tables, columns, and joins used in documents. Selecting items displays the effects that changes to those items have on the documents.

To use Show Impact of Change:


1 Click Show Impact of Change.
The Impact of Change interactive report is displayed.

2 Select items from the lists, and click to apply the selections.

Selections are displayed in Currently Selected Query Limits. In this example, PCW_CUSTOMERS and PCW_SALES are selected.


3 Click to process the query.

The table tabs display the items selected in the Query Panel. For example, PCW_CUSTOMERS and PCW_SALES are selected. The Impact of Change interactive report contains seven content tabs to assist in anticipating changes to the schema:

Documents with RDBMS tables selected: Impacted documents that use the selected tables and columns.
RDBMS/Topic column mappings: Interactive Reporting document topics or items mapped to RDBMS tables or columns.
Topic/RDBMS column mappings: Reverse map of RDBMS tables or columns to Interactive Reporting document topics or items.
Data Models with topics in common: Common data models where impacted tables or columns are used; for example, how many Interactive Reporting documents are updated with one replacement data model.
RDBMS table usage details: Documents and sections in which tables and columns are used.
Custom request items: Custom SQL in request items that Update Data Model may impact.
Custom query limits: Custom SQL in filter items that Update Data Model may impact.


Creating the New Data Model


You must create the new data model and ensure that it exposes all the internal table and column names that are exposed by the data model it replaces.

Renaming Tables or Columns


To build the new data model, you recreate or synchronize the existing data model against the new database, and change the names of the tables or columns in the new data model to match those in the existing data model. For example, a column orders.orddate is renamed orders.order_date (physical name). The original data model exposed this column as Orders.Orddate (display name). The new data model gives the column a default name of Orders.Order Date (display name). To replace the original data model with the new one, edit the properties of the column and change the display name to Orders.Orddate. An example of changing physical and display names is provided in Figure 4.
Figure 4    Physical and Display Names Example
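In data terms, the fix is a one-field edit. A schematic sketch using the names from the example above (the real change is made through the column properties dialog box, not through code):

column = {
    "physical_name": "orders.order_date",  # new name in the changed database
    "display_name": "Orders.Order Date",   # default display name derived from it
}
column["display_name"] = "Orders.Orddate"  # restore the name existing queries expect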

Access database software and Interactive Reporting Studio are used in these procedural examples.

To copy a table and make initial changes to the column names:


1 In a database, for example Access, open the Sample Database.
2 Right-click a table, and select Copy.
For example, select the PCW_CUSTOMERS table.


3 Right-click again and select Paste.
4 In Paste Table As, enter a Table Name.
For example, type Outlets. Ensure that Structure and Data is selected.

5 Click OK.
A copy of the PCW_CUSTOMERS table called Outlets is created.

6 Right-click Outlets, and select Design View.


The table is opened in design mode.

7 Overwrite Field Name to change the column names.


For example, overwrite STORE_ID with outlet_id, STORE with outlet, and STORE_TYPE with outlet_type.

8 Close the Outlets table, and click Yes to save changes.

To change the physical name of a table:


1 Open Interactive Reporting Studio, select Sample.oce, and click OK.
2 On the Sample.oce Host User and Host Password dialog box, click OK without entering any text.
3 From the catalog, expand Tables, and drag a topic onto the content area.
For example, select PCW_CUSTOMERS.


4 Right-click the topic header, and select Properties.


Topic Properties is displayed.

5 Enter a new Physical Name.


For example, type outlets to replace PCW_CUSTOMERS.

6 Click OK.


To synchronize the data model with the database:


1 In Interactive Reporting Studio, select the topic with the new physical name, for example PCW Customers, and select DataModel > Sync with Database.

Data Model Synchronization is displayed.

If Show Detail Information is selected, this dialog box provides information on changes that were made with the synchronization.

2 Click OK.

To change the display names of columns:


1 In Interactive Reporting Studio, from the topic in the content area, right-click a column name, and select Properties.

For example, from the PCW Customers topic, right-click Outlet Id. Topic Item Properties is displayed.

2 Change the column name, and click OK.


For example, change Outlet Id to Store Id.


3 Repeat steps 1-2 to change the other column names.


For example, change Outlet to Store and Outlet Type to Store Type. The display names of the columns are renamed.

4 Optional: Alternatively, to achieve an equivalent end result of changing the display names, perform these actions:

a. Drag a topic, for example Orders, onto the Interactive Reporting Studio content area.
b. Rename the display names of the renamed columns and the topic.

For example, a data model is created that can replace another data model that uses only the Pcw Customers topic. The edited topic now exposes names matching the original topic and is a valid replacement.

Using Normalized and Denormalized Data Models


If a data model requires change because tables are being consolidated or divided, the creation of the new data model involves additional steps. To create a data model that is a superset of the original table structure, use metatopics. You must give metatopics and their columns correct names, so that the new data model is a true superset of the original data model. When the names are correct, use the new data model in place of the original. For information on metatopics, see Chapter 22, Using Metatopics and Metadata in Interactive Reporting Studio. Figure 5 and Figure 6 illustrate the impact of change on the schema and the creation of metatopics to mask changes.

Figure 5    Impact of Change on the Schema

Figure 6    Create Metatopics to Mask Change

Deleting Columns
Deleted columns are replaced by a computed item with a constant value. For example, string columns may return n/a, and numeric columns may return 0. Replacement enables reports to continue working and display the constant value (for example, n/a) for the deleted columns.
Note: If an entire table is deleted, it is treated as if the table has all columns deleted.
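Schematically, the computed item reintroduces the missing column as a constant. A sketch with hypothetical rows (the real mechanism is a data model item, not code):

CONSTANT_FOR_DELETED = {"string": "n/a", "numeric": 0}

def mask_deleted_column(rows, column, column_type):
    for row in rows:
        row[column] = CONSTANT_FOR_DELETED[column_type]
    return rows

rows = [{"Item": "Widget"}, {"Item": "Gadget"}]
mask_deleted_column(rows, "Dealer Price", "numeric")  # every row now reports 0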

These procedures describe creating a computed item to mask the deletion of columns. Before creating the computed item, a series of processes, such as copying tables, changing names, and synchronizing data models, must be performed.

To copy a table and make initial changes to the column names:


1 In a database, for example Access, open the Sample Database.
2 Right-click a table, and select Copy.
For example, select the PCW_Items table.

3 Right-click again and select Paste.
4 In Paste Table As, enter a Table Name.
For example, type Goods. Ensure that Structure and Data is selected.

5 Click OK.
A copy of the PCW_Items table called Goods is created.

6 Right-click Goods, and select Design View.


The table is opened in design mode.


7 Select a row, for example Dealer Price, and delete it.

8 Save and close the database.

To change the physical name of a table:


1 Open Interactive Reporting Studio, select Sample.oce, and click OK.
2 In the Sample.oce Host User and Host Password dialog box, click OK without entering any text.
3 From the catalog, expand Tables, and drag a topic onto the content area.
For example, select PCW Items.

4 Right-click the topic header, for example PCW Items, and select Properties.
Topic Properties is displayed.

5 Enter a new Physical Name.


For example, change the physical name to Goods.


6 Click OK.

To synchronize the data model with the database:


1 In Interactive Reporting Studio, select a topic, for example PCW Items, and select DataModel > Sync with Database, to perform a synchronization.

Data Model Synchronization is displayed.

If Show Detail Information is selected, the dialog box provides information on synchronization changes. For example, Dealer Price was deleted from the Goods topic.

2 Click OK.


To use a computed item to mask deletion of columns:


1 In Interactive Reporting Studio, right-click a topic header, for example PCW Items, and select Promote to Meta Topic.

Another topic is added to the content area. In this example, the topic is called Meta PCW Items.

2 Right-click the original topic header, for example PCW Items, and select Properties.
Topic Properties is displayed.

3 Change the topic name, and click OK.


For example, change the name to PCW Items topic. Two topics are now displayed. In this example, the topics are PCW Items topic and Meta PCW Items.

4 Right-click the topic header, for example Meta PCW Items, and select Properties.
Topic Properties is displayed.

5 Remove Meta from Topic Name, and click OK.


6 Select the topic from step 5, for example PCW Items, and select DataModel > Add Meta Topic Item > Server.

Modify Item is displayed.

7 Enter the Name of the row that was deleted in the database, and enter a definition.
For example, type Dealer Price in Name, and type 0 as the Definition.

8 Click OK.
The computed item is added to the topic. In this example, Dealer Price is added to PCW Items.


9 Select the topic with the computed item added, for example PCW Items, and select DataModel > Data Model View > Meta.

The selected topic is displayed in Meta View, for example PCW Items, and the other topics are removed.


Changing Column Data Types


Changes to a database schema may result in changes to the data types of columns. For example, strings become integers or, conversely, integers become strings. When this occurs, additional actions may be required to complete the migration of an Interactive Reporting document to the new schema. If the type change affects a filter, the data type of the data model column is applied to the filter in the Interactive Reporting document. The filter type in an Interactive Reporting document is copied from the data model when the filter is created and cannot be accessed by developers or users. Some data type changes require no action and are unaffected; those changes are marked as OK in Table 11. The changes marked as Warn require attention, because values cached in the Interactive Reporting document may not be migrated correctly.
Table 11    Data Type Changes

From/To      string    int       real      date      time      timestamp
string       OK        Warn      Warn      Warn      Warn      Warn
int          OK        OK        OK        Warn      Warn      Warn
real         OK        Warn      OK        Warn      Warn      Warn
date         Warn      Warn      Warn      OK        Warn      OK
time         Warn      Warn      Warn      Warn      OK        Warn
timestamp    Warn      Warn      Warn      Warn      Warn      OK
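When auditing a planned schema change, Table 11 can be treated as a lookup. A sketch (the table above is the authoritative content):

OK_CHANGES = {
    ("string", "string"),
    ("int", "string"), ("int", "int"), ("int", "real"),
    ("real", "string"), ("real", "real"),
    ("date", "date"), ("date", "timestamp"),
    ("time", "time"),
    ("timestamp", "timestamp"),
}

def change_status(from_type, to_type):
    return "OK" if (from_type, to_type) in OK_CHANGES else "Warn"

print(change_status("date", "timestamp"))  # OK: no action required
print(change_status("string", "int"))      # Warn: cached values need attention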

If the type change affects a Request line item, no action is taken, because request item data types are accessed by clicking Option in Item Properties. If the Impact Manager module changes the data types, unforeseen effects in results, tables, charts, pivots, or reports may occur, especially if computations are applied to the column that is returned.

Figure 7    Item Properties Datatypes

Changing User IDs and Passwords for Interactive Reporting Documents


An Interactive Reporting document can be imported so that the credentials its queries use to connect to the data source are obtained in a variety of ways:

1. Credentials are specified explicitly for the Interactive Reporting document.
2. Credentials are obtained from the Interactive Reporting database connection.
3. The user is prompted for credentials.

No action is required where query credentials are obtained from the Interactive Reporting database connection or where the user is prompted for them; the queries that are replaced continue to prompt, or to reference the Interactive Reporting database connection, for the credentials. Explicitly configured credentials may require changes, because these credentials may stop working against the new data source. By changing the way the queries are imported in the replacement Interactive Reporting document, you can alter how credentials are handled in the updated Interactive Reporting document.

Table 12 illustrates what happens to an Interactive Reporting document that was originally imported to connect to a data source with explicit credentials, for example, user name=scott and password=tiger.


Table 12    Interactive Reporting Document Before and After Update

Imported Replacement Interactive Reporting Document / Interactive Reporting Document After Update

Explicit Credentials
    Connects the query to the data source using the new credentials, user name=sa and password=secret, and processes without asking the user for values and without regard to the contents of the Interactive Reporting database connection.

Prompt User
    Displays a logon dialog box, and the user supplies a user ID and password to connect.

Use Interactive Reporting database connection (Default)
    Connects the query to the data source using the definition in the Interactive Reporting database connection at the time the connection is attempted.
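In pseudocode terms, credential resolution after the update behaves as follows (a sketch with hypothetical field names, mirroring Table 12):

def resolve_credentials(query, oce):
    if query["mode"] == "explicit":
        return query["user"], query["password"]       # e.g. sa / secret
    if query["mode"] == "use_oce":                    # the default
        return oce["user"], oce["password"]
    return prompt_for_logon()                         # "prompt user" mode

def prompt_for_logon():
    return input("User ID: "), input("Password: ")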

Service Configuration Parameters


For information on service configuration parameters, see Chapter 9, Configuring LSC Services.


Chapter 5

Managing Shared Services Models

This chapter explains Hyperion System 9 Shared Services (formerly called Hyperion Hub) models as they are shared between multiple Hyperion products.

In This Chapter
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
About Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Registering Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
About Managing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
About Sharing Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
About Sharing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Working with Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Working with Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Sharing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Overview
Shared Services enables multiple applications to share information within a common framework. The following table lists the high-level tasks that you can perform with Shared Services.
Task                For Information
Managing Models     About Managing Models on page 101
Sharing Metadata    About Sharing Metadata on page 101
Sharing Data        About Sharing Data on page 101

About Models
Shared Services provides a database, organized into applications, in which applications can store, manage, and share metadata models. A model is a container of application-specific data, such as a file or string. There are two types of models: dimensional hierarchies, such as entities and accounts, and nondimensional objects, such as security files, member lists, rules, scripts, and Web forms.

Some Hyperion products require that models be displayed within a folder structure (similar to Windows Explorer). Folder views enable the administrator to migrate an entire folder structure, or a portion of one, easily using Shared Services.

The process of copying a model or folder from a local application to Shared Services is known as exporting. The process of copying a model or folder from Shared Services to a local application is known as importing.

Prerequisites
Shared Services supports external directories for user authentication. To use Shared Services functionality, you must configure Workspace to use external authentication.
Note: After installation of Shared Services, you must configure external authentication. For more information about installation and configuration of Shared Services, see the Hyperion System 9 Shared Services Installation Guide.

Registering Applications
Before you can use Shared Services, you must register your product with Shared Services using the Configuration Utility. For more information about using the Configuration Utility to register your product with Shared Services, see the Hyperion System 9 BI+ Workspace Installation Guide.


About Managing Models


Shared Services enables you to store metadata models and application folders. A separate application is provided for each product. Shared Services provides some of the basic management functionality for models and folders:

Version tracking
Access control
Synchronization between models and folders in the application and corresponding models and folders in Shared Services
Ability to edit model content and set member properties of dimensional models
Ability to rename and delete models

Users must be assigned the Manage Models user role to perform the preceding actions on Shared Services models.
Note: The Manage Models user must have Manage permission for a model via the Shared Services Model Access window in order to assign permissions to it.

See Working with Models on page 106 for detailed information about models. For more information about assigning user roles, see the Hyperion System 9 Shared Services User Management Guide available on the Hyperion Download Center.

About Sharing Metadata


Shared Services enables products to store private, nonshared models in private applications and to share common models through shared applications. Shared applications free multiple applications from maintaining a web of connections. To share models, an administrator first shares the application with a common application and exports the models to the application directory in Shared Services; the administrator then specifies which models are shared. Because filters specify the model content during import, they also enable the sharing of asymmetric models between applications. See Working with Shared Applications on page 103 for the basic process of setting up and implementing sharing among applications.

About Sharing Data


In addition to sharing application metadata, Shared Services enables you to move data between applications. The method used to move data is called data integration. Data integration definitions specify the data moving between a source application and a destination application, and enable the data movements to be grouped, ordered, and scheduled. A data integration wizard is provided to facilitate the process of creating a data integration.


Users must be assigned the Create Integrations user role to create Shared Services data integrations. As a Create Integrations user, you can perform the following actions on data integrations:

Assign access to integrations
Create an integration
Edit an integration
Copy an integration
Delete an integration
Create a data integration group
View (including filtering the view of) an integration

To view and run Shared Services data integrations, users must be assigned the Run Integrations user role. As a Run Integrations user, you can perform the following actions on data integrations:

View (including filtering the view of) an integration
Run, or schedule to run, an integration
Run, or schedule to run, a group integration

Before data can be moved between applications, the models for both the source and destination application must be synchronized between Shared Services and the product. See Sharing Data on page 133 for details about moving data between applications. For more information about assigning user roles, see the Hyperion System 9 Shared Services User Management Guide available on the Hyperion Download Center.

Working with Applications


Metadata models are stored in directories in Shared Services. Shared Services provides two types of applications: private applications and shared applications. Private applications are used by applications to store their models. Shared applications enable private applications to share models with other applications.

Working with Private Applications


Shared Services manages models at the application level. Each application that is registered with Shared Services has a corresponding application in Shared Services, in which it stores its Shared Services models. An application has exclusive use of the application models. To put a local copy of a model under control of Shared Services, you export the model to the application directory in Shared Services. To make a model in Shared Services available to an application, you import the model to the application.


Hyperion Shared Services provides capabilities for managing models. For example, you can perform the following tasks, among others:

Track model versions
Control access to models
Edit member properties in dimensional models
Synchronize models between the application and Shared Services

See Working with Models on page 106 for detailed information about how to manage models.

Working with Shared Applications


Shared applications enable you to share models among applications and with other products. Shared Services uses shared applications to support sharing models. A shared application defines the information that is common between two or more applications. Within a shared application, an application can contain private models or shared models.

The following example outlines the process for sharing models between applications or products:

1. App1 exports its models to Shared Services. The models are stored in the private application for App1 in Shared Services.
2. App1 selects a shared application to share with, for example, Common.
3. App1 designates specific models for sharing. When a model is shared, it is available for use with other applications.
4. App2 selects the Common.Shared application (the same application that is shared by App1).
5. App2 selects models in the shared application. The shared models are displayed in the Model Listing view for App2.

An application that is shared can contain both private models and shared models in the Model Listing view. Private models are for the exclusive use of the individual application. Shared models are available to any application that shares the same shared application.

Filters enable you to designate which part of a shared model to use in an application. In this way, you can share models with other applications if the models share a core set of common members; the models are not required to be identical. When you import a shared model, the filter removes members that you have not designated as common. See Filtering the Content of Models on page 122 for information on creating filters.


Managing Applications for Metadata Synchronization


Shared Services enables you to create, rename, and delete shared applications for metadata synchronization. You can view a list of all applications on the Manage Applications Browse tab. The Share tab enables you to manage sharing of an application. Figure 8 shows a sample Application Listing view.

Figure 8    Manage Applications Browse Tab

Creating Applications
Shared Services enables you to create a shared application. Shared Services provides one shared default application called Common. Should additional shared applications be needed, they must be created by application users or administrators.

To create a shared application:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Projects.

2 If it is not already selected, select the Browse tab.
3 Click Add.
4 In the Shared Application Name text field, type a name for the application.
See Application Naming Restrictions on page 105 for a list of restrictions on application names.

5 Click one of these options:

Add to add the application
Cancel to cancel the operation


Application Naming Restrictions


The following list specifies restrictions on application names:

The maximum length is limited to 80 characters regardless of the application in which you are working.
Names are not case sensitive.
All alphanumeric and special characters can be used, with the exception of the forward slash (/) and double quotation (") characters.
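Stated as a check, the restrictions come down to the following (a sketch; the same rules reappear for model names later in this chapter):

def is_valid_shared_services_name(name):
    return len(name) <= 80 and "/" not in name and '"' not in name

assert is_valid_shared_services_name("Common")
assert not is_valid_shared_services_name("Sales/2006")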

Deleting Applications
You need Manage permission on an application to delete an application.
Note: Users must have the appropriate product-specific user roles to delete an application. For a listing of product user roles, see the appropriate product-specific appendix in the Hyperion System 9 User Management Guide.

To delete an application:

1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Projects.

2 If it is not already selected, select the Browse tab.


Ensure that applications currently using the shared application are no longer sharing access to the application that you want to delete.

3 Select the application to delete and click Delete.
4 Click OK to confirm deletion of the application.

Sharing Applications
To be able to share models with other applications, you must share a private application with a shared application in Shared Services. Figure 9 shows a sample Select Shared Application window.

Figure 9    Select Shared Application Window

To share an application with a shared application in Shared Services:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Projects.

2 If it is not already selected, select the Share tab.


A list of shared applications is displayed, including Common, which is the default shared application provided by Shared Services.

3 Select the application with which you want to share.
4 Click Share to begin sharing the application with the shared application that you specified.

After you have set up access to a shared application, you can designate models to be shared. See Sharing Models on page 120. You can stop sharing access to a shared application at any time. When you do so, models that are shared with the current application are copied into the application.

To stop sharing with a shared application:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Projects.

2 If it is not already selected, select the Share tab.


A list of shared applications is displayed.

3 Select the application with which you want to stop sharing.
4 Click Stop Share to stop sharing with the designated application.

Working with Models


Shared Services enables you to store and manage models in Shared Services. The Manage Models Browse tab lists the models that are in the product.

To list the models that are in Shared Services:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.


Figure 10 shows a sample Manage Models Browse tab.
Note: Some Hyperion products do not display folders in the Manage Models Browse tab.

Figure 10    Manage Models Browse Tab

Note: If the current application is new, the view might not show models. Application models are displayed in the Browse tab after you explicitly export them to Shared Services. See Synchronizing Models and Folders on page 108 for information.

All models are displayed in ascending order. The Manage Models Browse tab provides the following information about each model in Shared Services:

Model name
Model type
Last time the model was updated
Whether the model is locked and who locked it
Whether a filter is attached to the model and whether the filter is enabled (one icon indicates a filter that is enabled; another indicates a filter that is disabled)

You can see only the models to which you have at least Read access. If you do not have access to a model, it is not displayed in the Manage Models Browse tab. Icons also indicate where models are located: one icon indicates a private model, and another indicates a shared model.

Some Hyperion products require that models be displayed within a folder structure (similar to Windows Explorer). Folder views enable the administrator to migrate an entire folder structure, or a portion of one, easily using Shared Services. Folders are visible on the Manage Models Browse tab, Manage Models Sync tab, and Manage Models Share tab. Path information for folders is displayed directly above the column headers, and the path text is hyperlinked to refresh the page within the context of the selected folder.


Icons likewise indicate where folders are located: one icon indicates a private folder; another indicates a shared folder.

From the Manage Models Browse tab, you can perform any of the following operations:

View and edit members and member properties in dimensional models. See Viewing and Editing Model Content on page 115.
Filter content that is imported to an application from a shared model. See Filtering the Content of Models on page 122.
Compare the latest application version of a model to the latest version stored in Hyperion Shared Services. See Comparing Models on page 112.
Track model history. See Tracking Model History on page 125.
View model properties. See Viewing and Setting Model Properties on page 131.
Rename models. See Renaming Models on page 119.
Delete models. See Deleting Models on page 120.

You can synchronize the Shared Services version of a model with the application version, by importing the model from Shared Services to the application, or by exporting the model from the application to Shared Services. To do so, select the Manage Models Sync tab. See Synchronizing Models and Folders on page 108. You can share a model with other applications. To do so, select the Manage Models Share tab. See Sharing Models on page 120.

Synchronizing Models and Folders


The Manage Models window lists the latest version of each model in Shared Services. Shared Services also tracks models in the application and determines whether a version of each model resides in the BI+ application only, in Shared Services only, or in both places. When the latest version of a model resides in both the BI+ application and in Shared Services, the BI+ application and Shared Services are said to be synchronized, or in sync, with regard to that model. If a model is out of sync, you can synchronize it by importing the model to the application or exporting the model to Shared Services, depending on where the latest version resides. You need Write permission to synchronize a model.
Note: Models within folders can also be synchronized using the Shared Services sync operation. If a folder is selected, then all models within that folder and within any subfolders will be synchronized.
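The recommended operation follows directly from where the latest version of a model resides; a minimal sketch of that decision:

def recommended_sync_operation(in_application, in_shared_services):
    if in_application and not in_shared_services:
        return "Export to Hyperion Hub"        # fixed; cannot be changed
    if in_shared_services and not in_application:
        return "Import From Hyperion Hub"      # fixed; cannot be changed
    return "Select Sync Operation"             # model exists in both; user chooses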


To synchronize models and folders:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 Select the Sync tab.


Figure 11 shows a sample Sync Preview Models window.

Figure 11    Sync Preview Models Window

The Sync Preview window lists all models and folders in Shared Services and in the BI+ application. The Sync Operation field provides a recommended operation to apply to each model or folder. For more information about sync operations, see Sync Operations on page 110.

3 Optional: For models with Select Sync Operation, you can compare the latest version of the model in Shared Services to the model in the application by clicking the Compare button. Before clicking Compare, you must select a Sync Operation in the drop-down list box.

The latest version of the model in Shared Services is compared to the latest version in the application. The contents of the two models are shown line-by-line in a side-by-side format. Hub Version refers to the model in Shared Services. Application Version refers to the model in the application. For information on resolving differences between the models, see Comparing Models on page 112. After you resolve the differences in a model, you are returned to the Sync Preview page.

4 Select each model that you want to synchronize.


For models with Select Sync Operation, select a Sync Operation depending on whether the application or Shared Services has the latest version of the model.
Note: Before exporting a model from an application to Shared Services, check Model Naming Restrictions on page 112 to verify that the model names do not violate Shared Services naming restrictions.


5 Synchronize the selected models.


a. Click Sync. A window is displayed that enables you to enter comments for each of the selected models.
b. Type comments for each model, or type comments for one model and click Apply To All to apply the same comment to all models.
c. Click Sync. A progress message is displayed during the sync operation.

6 Click Report to see a report of the operations that have been completed.
7 Click Refresh to update the message.
8 Click Close to return to the Sync Preview window.

Sync Operations
The Sync Preview window lists all models in Shared Services and in the application. The Sync Operation field provides a recommended operation to apply to each model, as follows:

If a model exists in the application but not in Shared Services, the sync operation is Export to Hyperion Hub. You cannot change this operation. If you select the model, when you synchronize, the specified model is copied to Shared Services.

Note: Keep in mind when exporting that Shared Services supports dimensions that contain up to 100,000 members.

If a model exists in Shared Services but not in the application, the sync operation is Import From Hyperion Hub. You cannot change this operation. If you select the model, when you synchronize, the specified model is copied to the application.

If a model exists in both the application and Shared Services, the sync operation is selectable. Select one of the following options:

Note: Remember these factors when deciding which compare operation to perform. With export, the compare operation considers the application model to be the master model. With import, the compare operation considers the Shared Services model to be the master model.

Export with Merge: Merges the application model content with the content in Shared Services. Notice the following factors:

This option considers any filters during the merge process and ensures that filtered members are not lost.
If a property exists only in the application model, the property is retained in the merged model.
If a property exists only in the Shared Services model, the property is retained in the merged model.

110

Managing Shared Services Models

If a property exists in both models, the value of the property in the application model will be retained in the merged model.
A member in the application model but not in the Shared Services model will be retained in the merged model.
A member in the Shared Services model but not in the application model will not be retained in the merged model.
A member that exists both in the Shared Services model and in the application model, but at different generation levels, will be merged, and the position in the application model will be maintained.
If an application system member exists only in a Shared Services model, export with merge will not delete this member.
If an application system member exists both in a Shared Services model and in the application model, export with merge will merge the properties as usual and take the system member-specific attributes from the application model. For more information, see Application System Members on page 118.

For properties with attributes, the merge is based on the attribute value. For example, if the following Alias attribute exists in the Shared Services model:
<Alias table="French">Text in French</Alias>

and if the following Alias attribute exists in the application model:


<Alias table="English">Text in English</Alias>

then the merged result will contain both attributes and will look like the following example:
<Alias table="French">Text in French</Alias>
<Alias table="English">Text in English</Alias>

If the value for both Alias attributes is the same in both models, then the value for the application model will be retained in the merged model.

Export with Overwrite: Replaces the Shared Services model with the application model.
Import and Merge: Merges the content from the Shared Services model with the application model content.
Import and Replace: Replaces the application model with the Shared Services model.
Clear before Import: Removes the existing content of the application model and replaces it with the content from the Shared Services model.
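The property rules for Export with Merge listed above can be summarized in miniature (a sketch; the real merge also handles member positions, filters, and attribute-keyed properties such as Alias):

def merge_properties(application_props, shared_services_props):
    merged = dict(shared_services_props)  # properties only in Shared Services are retained
    merged.update(application_props)      # app-only properties kept; app value wins on conflict
    return merged

merged = merge_properties(
    {"Currency": "USD", "Owner": "App1"},
    {"Currency": "EUR", "Description": "Entities"},
)
# -> {"Currency": "USD", "Description": "Entities", "Owner": "App1"}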


Model Naming Restrictions


The following list specifies restrictions on model names:

The maximum length is limited to 80 characters regardless of the application in which you are working.
Names are not case sensitive.
You can use all alphanumeric and special characters, with the exception of the forward slash (/) and double quotation (") characters. Therefore, you cannot export a dimension to Shared Services that contains forward slash or double quotation characters.

Note: The restrictions on names listed in this section are enforced explicitly by Shared Services. <Hyperion Product Name> may enforce additional restrictions on names. If you are sharing models with one or more other products, you should be aware of additional naming restrictions that may be enforced by those products.

Comparing Models
At any time, you can compare a model in Shared Services to its corresponding version in the application. The latest version in Shared Services is compared to the model in the application. To compare different versions in Shared Services, see Tracking Model History on page 125.

To compare the application representation of a model with the Shared Services representation of the model:

1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 Select the Sync tab.


The Sync Preview window lists all models in Shared Services and in the application. The Sync Operation field provides a recommended operation to apply to each model or folder. For more information about sync operations, see Sync Operations on page 110.

3 Select a Sync Operation in the drop-down list box for the model of interest.
4 Click the Compare button next to the Sync Operation box.
The latest version of the model in the application is compared with the latest version in Shared Services.

5 Perform any compare operations.


For a detailed description of the compare operations, see Compare Operations on page 113.

6 Click OK to return to the Sync Preview window.


Compare Operations
The contents of the two models are shown line-by-line in a side-by-side format. Application Version refers to the model in the application. Application versions of a model are displayed on the left side of the Resolve Models (Compare) window. Hub Version refers to the model in Shared Services. Hub versions of a model are displayed on the right side of the Compare Models window. Figure 12 shows a sample Resolve Models (Compare) window.

Figure 12    Resolve Models (Compare) Window

By default, the Resolve Models window displays up to 50 rows per page, displays any folders in an expanded format, and displays only those models with differences. Color coding highlights any differences between the content of the two models, as follows:

Red indicates that the element has been deleted from the model.
Green indicates that the element has been inserted into the model.
Blue indicates that the element has been changed.

Note: The compare operation filters out any application system members that are not relevant to the product being viewed. For example, if viewing HFM models, Shared Services will filter out any application system members that are not valid for HFM. For more information about application system members, see Application System Members on page 118.


Table 13 describes the Resolve Models (Compare) window elements.

Table 13    Resolve Models (Compare) Window Elements

Element                   Description
Expand All button         Click to display the selected member and any children under the selected member in an expanded format (default).
Collapse All button       Click to display the selected member and any children under the selected member in a collapsed format.
<<FirstDiff button        Click to jump to the first model element with a difference.
<PrevDiff button          Click to display the difference immediately previous to the current difference.
NextDiff> button          Click to display the next difference after the current difference.
LastDiff>> button         Click to jump to the last model element with a difference.
View All button           Click to display all model elements, not just the elements with differences.
Show Diff Only button     Click to display only the model elements with differences (default). Note: For contextual purposes, Show Diff Only also displays the members immediately previous to and immediately after the member with a difference.
View button               Click to display the member property differences for a selected element.
Red and green arrows      A red arrow indicates a deleted element in the Application Version of a model; a green arrow indicates an inserted element in the Application Version of a model.
Page navigation buttons   Click to jump to the first page of the model, display the previous page, display the next page, or jump to the last page.
Page drop-down list box   Select a page to display in the Taskflow Listing area, and click the adjacent button to display the selected page.
Rows                      The number of rows displayed on each page (default is 50).

114

Managing Shared Services Models

Viewing and Editing Model Content


The Shared Services interface provides an editor that enables you to view and edit the content of models directly in Shared Services. You can use the editor only with dimensional models. You need Read permission to view the content of a model. You need Write permission to edit or delete a model. Figure 13 shows a sample View Model window.

Figure 13    View Model Window

The editor enables you to manage dimension members by performing these tasks:

View all members for a model, including application system members
Add a sibling or a child to a member
Change the description of a member
Rename a member
Move a member up or down in the hierarchy
Move a member left or right (across generations) in the hierarchy
Edit dimension member properties
Enable or disable a filter

If you are renaming a member, keep the following rules in mind:
a. You cannot rename a shared member.
b. You cannot create a duplicate member name (the rename operation performs a uniqueness check).
c. You cannot rename an application system member.
Note: Renaming a member and moving a member across generations within Shared Services enables products to retain the member properties for a shared model. Therefore, if you want to retain member properties across all products for a shared model, perform the rename or move member operation within Shared Services rather than within the individual product.


To view or edit dimension members:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.
3 Select a model and click View.
The dimension editor shows the members of the selected model, including any application system members. For more information, see Application System Members on page 118.

4 Use the editing keys to make the following changes:


Add a child or sibling member
Rename a member (notice the rules about renaming members in the previous section)
Delete a member
Move a member up, down, left, or right in the dimensional hierarchy
Edit member properties (for more information, see Editing Member Properties on page 117)
If a filter exists for a model, enable or disable the filter (for more information about filters, see Filtering the Content of Models on page 122)

Note: If you click on a member and it is not editable, then the member is an application system member. For more information about application system members, see Application System Members on page 118.

5 Click Validate to perform a simple validation check.


The validation check verifies the following facts and lists any exceptions:

- That you have not created names that are too long (the limit is, for example, 20 characters for Hyperion Financial Management and 80 characters for Hyperion Planning)
- That you have not created any duplicate names

Note: Shared Services does not perform validations for Alias/UDA uniqueness.

6 Click Save to save the changes that you have made and to create a new version of the model in Shared Services.
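For administrators who pre-check member lists in scripts before saving, the two checks in step 5 are straightforward to reproduce. The following Python sketch is illustrative only; the per-product limits dictionary and the function name are assumptions for the example, not a Shared Services API.

    # Hypothetical per-product name-length limits (see step 5 above).
    MAX_NAME_LENGTH = {"HFM": 20, "Planning": 80}

    def validate_members(names, product):
        """Return validation exceptions: over-long names and duplicates."""
        exceptions = []
        limit = MAX_NAME_LENGTH[product]
        seen = set()
        for name in names:
            if len(name) > limit:
                exceptions.append("Name too long for %s: %s" % (product, name))
            if name in seen:
                exceptions.append("Duplicate name: %s" % name)
            seen.add(name)
        return exceptions

    # Example: one duplicate and one over-long name are both reported.
    print(validate_members(["Sales", "Sales", "X" * 25], "HFM"))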


Editing Member Properties


You can make changes to the settings of member properties of dimensional models and save the changes to a new version of the model.

To edit member property settings:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model name and click View.
The dimension editor shows the members of the selected model, including any application system members. For more information, see Application System Members on page 118.

4 Select a member and click Edit.


The Edit Member window provides separate tabs for properties that are unique to a particular product. For example, the HP tab contains properties used by Hyperion Planning, and the HFM tab contains properties used by Hyperion Financial Management. Each tab also displays properties that are common to, or shared between, multiple products. Shared properties are preceded by an icon that indicates that the property is shared.

Note: You cannot edit properties for an application system member. For more information about application system members, see Application System Members on page 118.

Figure 14 shows a sample Edit Member window.

Figure 14    Edit Member Window


To view which products share a particular shared property, hover the cursor over the shared property icon. A tool tip is displayed with the names of the products that share the property.

5 Select a tab and use the editing keys to change member property settings as you prefer.
Note: Alias properties may be displayed in a different order in Hyperion Shared Services than in <Hyperion Product Name>. See the discussion following the procedure for details.

6 In the Edit Member window, click Save to save the property settings that you have made.

7 In the Edit Member window, click Close to close the window.
Note: The Edit Member window remains open unless you manually close it.

8 Click one of these options:

- Save to save the changes you have made and create a new version of the model
- Close to return to the Model Listing view

If a member has an alias property, all the aliases and alias table names for the member are displayed in the Edit Member window. For example:

<Hyperion Product Name>:

    <Alias table="English">MyAlias in English</Alias>
    <Alias table="German">MyAlias in German</Alias>
    <Alias table="French">MyAlias in French</Alias>

Shared Services:

    Alias (English): MyAlias in English
    Alias (German): MyAlias in German
    Alias (French): MyAlias in French

The order in which Shared Services reads the alias tables is not necessarily the order in which the aliases are shown in <Hyperion Product Name>, because that order can be changed by user preferences.

Application System Members


Application system members store critical system information such as currency rates and ownership information. Each product has certain application system members that, when exported to Shared Services, are displayed in the model hierarchy. You can view the details and properties of an application system member; however, you cannot delete, edit, add children to, or rename an application system member in Shared Services.

Application system members are filtered out of the hierarchy if they are not relevant to the product being viewed. The compare operation filters out any application system members that are not valid for your product. For example, if you are viewing HFM models, Shared Services filters out any application system members that are not valid for HFM. For more information about compare operations, see Compare Operations on page 113.


You can import and export models that contain application system members. Keep the following in mind when performing sync operations:

- Import operations import application system members only if they are valid for your product. For instance, if a shared model has a system member called active that is valid only for HFM, Planning ignores this member when it imports the model.
- Export with Overwrite replaces the Shared Services model with the application model, including any application system members.
- Export with Merge merges the application model content with the content in Shared Services. Note the following behavior:
  - If an application system member exists only in Shared Services, export with merge does not delete this member.
  - If an application system member exists both in Shared Services and in the product, export with merge merges the properties as usual and takes the system member-specific attributes from the product side of the model.
  - All other export with merge scenarios behave exactly the same way for system members as they do for normal members.

For more information, see Sync Operations on page 110.
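As a rough illustration of the merge rules for system members, consider the following Python sketch. The dictionary-based property representation and the function name are assumptions made for the example; they are not Shared Services data structures.

    def merge_member(shared_props, product_props):
        """Sketch of export-with-merge for one member's properties."""
        if product_props is None:
            # Member exists only in Shared Services: merge does not delete it.
            return shared_props
        merged = dict(shared_props or {})
        # Product-side values win for overlapping keys, which matches taking
        # system member-specific attributes from the product side.
        merged.update(product_props)
        return merged

    # Example: the rate source comes from the product; the description is kept.
    print(merge_member({"description": "FX rates"}, {"rate_source": "HFM"}))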

Renaming Models

Shared Services enables you to rename models. You might want to rename a model if two applications want to share dimensional models that are named differently; for example, one application uses plural dimension names and the other application uses singular names. Sharing the models then requires renaming one or both of them to a common name.

Renaming a model changes the name only in Shared Services; the internal representation of the name does not change. If you import a new version of a renamed model to the application, the new version retains the original name. You need Write access to a model to rename it.

To rename a model:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click Rename.

4 Type a new name in the New Name text box.

5 Click one of these options:

- Rename to save the new name
- Cancel to cancel the name change

See Model Naming Restrictions on page 112 for a list of restrictions on model names.


Deleting Models
You can delete a model if you have Write access to it.

To delete a model:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click Delete.

4 Click OK to confirm deletion.

Sharing Models
You set up the sharing of models between applications by designating a common shared application to be used by two or more applications. See Working with Shared Applications on page 103 and Sharing Applications on page 105 for details about shared applications. You can select two types of models to share:

- Models in the private application in Shared Services that you designate to share with other applications
- Models from a shared application that have been made available for sharing by another application

Note: Models within folders can also be shared using the Shared Services share operation. If a folder is selected, then all the models within that folder and within any subfolders will be shared.

To share models:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 Select the Share tab.


The sample Share Models window shown in Figure 15 lists both private and shared models.


Figure 15    Share Models Window

Icons indicate whether a model is shared:

- A private model icon indicates a model that is not shared
- A shared model icon indicates a shared model
- A conflict icon indicates a model that exists in both the private application and in the shared application in Shared Services

The Share Operation column provides a recommended operation to apply to each model, as follows:
Note: The Share Operation column displays only the first 10 characters of the shared application name. If the shared application name exceeds 10 characters, then Shared Services appends ellipses (...) to the end of the application name.

- Share to <shared_application_name>: Copies the content of the model in the private application to the shared application. The share operation also deletes the model in the private application and creates a link in the private application to the model in the shared application.
- Unshare from <shared_application_name>: Copies the content of the model in the shared application to the private application and removes the link to the shared application.

Note: The model remains in the shared application. A copy of the previously shared model is available in the user's private (working) application.

If there is a conflict and the model exists in both a private application and a shared application, the share operation is selectable. This conflict sometimes occurs because a model was previously shared and then unshared. Selecting a share operation enables you to reshare a model that was previously shared. Use the drop-down list box to select one of the following options:

Working with Models

121

- Share from <shared_application_name> (Overwrite): Deletes the model in the private application and creates a link to the model in the shared application.
- Share to <shared_application_name> (Merge): Merges the content of the model in the private application with the content of the model in the shared application. The model in the private application is then deleted and a link is created to the model in the shared application.
- Share to <shared_application_name> (Overwrite): Replaces the content of the model in the shared application with the content of the model in the private application. The model in the private application is then deleted and a link is created to the model in the shared application.

3 Select one or more models to share and, if the share operation for a model is selectable, choose a share operation.

4 Click Share to begin the sharing operation.

5 Click Refresh to update the status of the operation.

6 Click Report to view information about the status of the operation, including whether it was successful and the reason for failure if the operation failed.

7 Click OK to return to the Share Models view.


You can stop sharing a model at any time. When you stop sharing a model, a copy of the model is created in the private application in Shared Services.

To stop sharing a model:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 Select the Share tab.

3 In Share Models, select one or more models to remove from sharing.

4 Click Share.

5 When the status is complete, click OK.

Sharing stops for the selected models, and a copy of each model is created in the private application in Shared Services.

Filtering the Content of Models


When you share models with other applications or products, the models may have most members in common, or a common core of members, but not all members in common. Shared Services enables you to write a filter that specifies members of the shared model to remove when the model is imported to an application.

For example, a Hyperion Financial Management application exports an account dimension and shares it in a common shared directory. A Hyperion Planning application decides to use the account dimension from the Hyperion Financial Management application and links to the shared account dimension.


The Hyperion Planning application conducts budgeting on profit and loss accounts only and therefore does not require any balance sheet accounts from the account dimension. The Hyperion Planning application writes a filter that removes the Total Assets member and all of its descendants and the Total Liabilities member and all of its descendants.

You can write filters for dimensional models only, and you cannot have multiple filters on a particular dimension. Writing filters requires Write access to a model.

To write a new filter or to modify an existing filter:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click Filter.

In the Create/Edit Filter window, the Members List area shows the members of the model, and the Filtered Out Members text box shows members that are to be removed from the model on import. Figure 16 shows a sample Members List area of the Create/Edit Filter window.

Figure 16    Create/Edit Filter Window: Members List Area

4 From the Members List area, select a member.

5 Click Add to move the selected member from the Members List area to the Filtered Out Members text box.

The Select Member drop-down list box indicates how much of the hierarchy is to be filtered, as follows:

- Descendants (Inc). Filters the selected member and all of its descendants.
- Descendants. Filters descendants of the selected member (but not the member itself).
- Member. Filters the selected member only.


You can move selected members back to Members List from Filtered Out Members with the Remove and Remove All buttons.
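The three scopes map naturally onto simple set operations over the dimension hierarchy. The following Python sketch is purely illustrative; the children mapping and function names are assumptions for the example, not part of Shared Services.

    def descendants(children, member):
        """All descendants of member, given a dict of member -> child list."""
        result = []
        for child in children.get(member, []):
            result.append(child)
            result.extend(descendants(children, child))
        return result

    def filter_scope(children, member, scope):
        if scope == "Descendants (Inc)":
            return [member] + descendants(children, member)
        if scope == "Descendants":
            return descendants(children, member)
        if scope == "Member":
            return [member]
        raise ValueError("unknown scope: " + scope)

    # Example hierarchy: Total Assets with two children.
    children = {"Total Assets": ["Cash", "Receivables"]}
    print(filter_scope(children, "Total Assets", "Descendants (Inc)"))
    # ['Total Assets', 'Cash', 'Receivables']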

6 Repeat the two previous steps until you have selected as many members to filter as needed.

7 Click one of these options:

- Save to save the filter
- Close to cancel the changes you have made

The filter icon in the Model Listing view indicates that a model has an attached filter.

After a filter is applied to a model, you will see only those members within a model that are not filtered out. If you would like to see all the members in a filtered model, you can disable the filter and then, after viewing, enable the filter again.

To disable an existing filter:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a filtered model and click Filter.

4 Click Disable.

5 Click Save to view the model in the Model Listing view.

The disabled filter icon in the Model Listing view indicates that a model has an attached filter, but the filter is disabled.

To enable an existing filter:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a filtered model with a disabled filter icon and click Filter.

4 Click Enable.

5 Click Save to view the model in the Model Listing view.

The enabled filter icon in the Model Listing view indicates that the filter is enabled.

To delete an existing filter:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a filtered model and click Filter.

4 Click Delete.

5 When prompted to confirm the deletion of the filter, click OK.

Tracking Model History


Shared Services maintains a version history for each model in the product, if versioning is enabled for the model. To see if versioning is enabled for a model and to enable versioning if it is not enabled, see Viewing and Setting Model Properties on page 131. Figure 17 shows a sample Model History window.

Figure 17    Model History Window

To view the version history of a model in Shared Services:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click History.
Shared Services displays a list of model versions, including the name of the person who updated the version, the update date, and comments for each model.

4 From the version list, you can perform any of the following tasks:

- View the contents of any model:
  i. Click a version.
  ii. Click View.
  See Viewing and Editing Model Content on page 115 for more information.

- Compare any two model versions to each other:
  i. Select any two versions.
  ii. Click Compare.
  The contents of the two model versions are shown line-by-line in a side-by-side format. See Comparing Models on page 112 for more detailed information.

- Replace the current model in the application with a version in the list:
  i. Select any version.
  ii. Click Import.
  The specified version is imported to the application and replaces the current model. If a filter was applied to a previous version of a model, the model is imported with the filter applied.

- View the properties of any model:
  i. Click a version.
  ii. Click Properties.
  See Viewing and Setting Model Properties on page 131 for more information.

Managing Permissions to Models


Shared Services enables you to manage access permissions to models in applications independent of any <Hyperion Product Name> application. You assign permissions on a model-by-model basis to individual users or to groups of users. You can also assign permissions at the application level.

User names and passwords are managed by an external authentication provider, so they must be created externally, using NTLM, LDAP, or MSAD, before they can be added to <Hyperion Product Name>. The Shared Services administrator adds authenticated users to Shared Services by using the Shared Services User Management Console. The Shared Services administrator also creates and manages groups by using the User Management Console. See the Hyperion Shared Services User Management Guide for details.

When applications are created in the Shared Services repository, all permissions are denied for all users except the user who created the application. The creator of the application or the administrator (the admin user) must assign permissions for the new application. Similarly, when data integrations are created, all permissions are denied for all users (via a group called Users), except the user who created the data integration. To change the default setting for a user, you must explicitly add the user to the DataBroker application and apply the desired access rights.
Note: To override the DataBroker application access settings, it is possible to apply access rights to individual integration models. This can be done in the Manage Models page that displays the integration models in the DataBroker application. How each product navigates the user to this page differs, so refer to each product's documentation for instructions on accessing the Manage Models page.

To access specific models in Shared Services, users must be assigned access rights individually or inherit access rights by being part of a group that is assigned access rights. If an individual user is assigned to a group and the access rights of the individual user conflict with those of the group, the rights of the individual user take precedence. To give users access to models other than their own, an administrator must add the users and assign their permissions.


Permissions
Model management provides the following types of permissions:

Read. The ability to view the contents of a model. You cannot import a model if you have only Read access to it.

Write. The ability to change a model. Write access includes the ability to export, import, and edit a model. Write access does not automatically include Read permission. You must assign Read permission explicitly, in addition to Write permission, if you want a user to have these permissions.

Manage. The ability to create new users and change permissions for users. Manage access does not automatically include Read and Write permissions. You must assign Read and Write permissions explicitly, in addition to Manage permission, if you want a user to have all these permissions.

The following table summarizes the actions that a user can take in regard to a model with each of the permissions.

Table 14    Access Permissions

Action               Read   Write   Manage
Sync                 No     Yes     Yes
Import               No     Yes     Yes
Export               No     Yes     Yes
View                 Yes    Yes     Yes
Filter               No     Yes     Yes
Compare              Yes    Yes     Yes
History              Yes    Yes     Yes
Set Properties       No     Yes     Yes
Assign Access        No     Yes     Yes
Share                No     Yes     Yes
Assign Permissions   No     No      Yes
Edit                 No     Yes     Yes
Rename               No     Yes     Yes
Delete               No     Yes     Yes


You can apply permissions to groups and to individual users. Users are automatically granted the permissions of the groups to which they belong. You can, however, explicitly grant or deny permissions to a user to override group permissions. For each type of access permission (Read, Write, and Manage), you must apply one of the following actions:

Grant. Explicitly grant the permission to the user or group. Granting permissions to a member of a group overrides permissions inherited from the group. For example, if a group is denied a permission, you can explicitly grant the permission to a member of the group.

Deny. Explicitly deny the permission to the user or group. Denying permissions to a member of a group overrides permissions inherited from the group. For example, if a group is granted a permission, you can explicitly deny the permission to a member of the group.

None. Do not apply the permission to the user or group. Not applying a permission is different from denying a permission. Not applying a permission does not override permissions inherited from a group. Specifying None for particular permissions for individual users enables you to apply permissions on a group basis.

Note: If a user belongs to groups with mutually exclusive permissions to the same model, permissions that are assigned override permissions that are denied. For example, if a user belongs to a group that denies Read access to a particular model and belongs to another group that assigns Read access to the model, the user in fact is granted Read access to the model.
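Taken together, these precedence rules amount to a small decision procedure. The Python sketch below is an illustration under assumed data shapes (the setting strings and function name are invented for the example); it is not the Shared Services implementation.

    def effective_permission(user_setting, group_settings):
        """Resolve one permission (Read, Write, or Manage).

        user_setting: "Grant", "Deny", or "None" for the individual user.
        group_settings: list of "Grant"/"Deny"/"None" from the user's groups.
        """
        # Explicit user settings override anything inherited from groups.
        if user_setting == "Grant":
            return True
        if user_setting == "Deny":
            return False
        # Across groups, a granted permission overrides a denied one.
        return "Grant" in group_settings

    # Example: denied in one group, granted in another; access is granted.
    print(effective_permission("None", ["Deny", "Grant"]))  # True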

Assigning Permissions to Models


You assign permissions on individual models in applications. You assign permissions on the models to individual users or to groups of users. You must have Manage permission for a model to assign permissions to it. Users inherit the permissions of the groups to which they belong. Permissions that you assign to an individual user, however, override any group permissions that the user inherits.
Note: The following procedure can be used to assign permissions to metadata models or to data integrations. To assign access to metadata models, begin the procedure when metadata models are displayed. To assign access to integrations, begin the procedure when integrations are displayed.

To assign permissions to models:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click Access.


You can view the permissions that are assigned to users and groups for the selected model in the Model Access window. Figure 18 shows a sample Model Access window.

Figure 18    Model Access Window

4 To add users or groups, click Add.


The Add Access window is displayed. The Available Users/Groups text box lists users who are authenticated as Hyperion Shared Services users. If a user that you want is not on the list, contact the Hyperion Shared Services administrator. The administrator can use Hyperion Shared Services Configuration Console to add authenticated users.

Figure 19    Add Access Window: Select Available Users Area

5 In the Available Users/Groups text box, select users or groups to assign to this model (press Ctrl to select multiple users). Click Add to move the selected users and groups to the Selected Users/Groups text box, or click Add All to move all users and groups to the Selected Users/Groups text box.


Note: Group names are preceded by an asterisk (*).

6 Assign permissions to the selected users and groups by selecting one of the Grant, Deny, or None option buttons for the Read, Write, and Manage permissions.

Figure 20    Add Access Window: Type of Access Area

Note: Assigning (or denying) a permission does not implicitly assign (or deny) any other permissions; that is, assigning Write permission does not implicitly assign Read permission, and assigning Manage permission does not implicitly assign Read and Write permissions. Likewise, denying Read permission does not implicitly deny Write and Manage permissions, and denying Write permission does not implicitly deny Manage permission. You must explicitly assign all permissions that you want a user to have.

See Permissions on page 127 for details about the Read, Write, and Manage permissions and the Grant, Deny, and None actions that you can apply to each permission.

7 Click Add to assign the new permissions.

Editing Permissions to Models


You can edit the permissions of individual users and groups on individual models. You must have Manage permission for a model to change permissions for it.

To edit permissions to models:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model name and click Access.
You can view the permissions that are assigned to users and groups for the selected model.

4 Select the check box next to one or more users or groups and click Edit.
The window shows the permissions currently assigned to the selected users or groups.

Note: Icons indicate whether each entry is an individual user or a group of users.

5 Change permissions for the selected user or group by selecting one of the Grant, Deny, or None option buttons for the Read, Write, and Manage permissions.


See Permissions on page 127 for details about the Read, Write, and Manage permissions and the Grant, Deny, and None actions that you can apply to each permission.

6 Click one of these options:

- Update to accept the changes
- Close to cancel the changes

To view any changes made to model access, you must log out of the product application, close the browser, and then log in to the product application again.

Deleting Permissions to Models


You can delete all permissions for users and groups to individual models. You must have Manage permission for a model to delete access to it.

To delete access to a model:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model name and click Access.
You can view the permissions that are assigned to users and groups for the selected model.

4 Select the check box next to one or more users or groups and click Delete.
Note: When you click Delete, the permissions are immediately removed without a warning message being displayed.

Viewing and Setting Model Properties


Shared Services provides property data for each model in the product. You can view all model properties and set selected properties.


Figure 21    Model Properties Window

Shared Services displays the following model properties:

- Creator. Name of the user who created the model.
- Updated By. Name of the person who last updated the model. If there have been no updates, the name of the creator is listed and the Updated Date is the same as the Create Date.
- Create Date. The date on which the model was created in (exported to) Shared Services.
- Updated Date. The date on which the model was last updated in Shared Services.
- Versioning. Whether versioning is enabled. If versioning is not enabled, you can enable it by changing this setting. Once versioning is enabled, however, you cannot disable it.
- Lock Status. Whether the model is locked or unlocked. You can change this setting to lock the model for your exclusive use or to unlock the model to allow other users to work with it. Models are locked for only 24 hours; after 24 hours, a model is automatically unlocked.
- Share Information. Provided if the model is shared with a shared application:
  - Source Application. The name of the shared application.
  - Source Model. The path to the model in the shared application.
  - Transformation. The name of the transformation, if any, that Shared Services applies to the model to make it usable to the application.
- Dimension Properties. Provided only if the model is a shared dimension model:
  - Dimension Type. The name of the dimension type.


  If the Dimension Type value is None, you can select a new dimension type in the Dimension Type drop-down list box next to the Change To button.

  - Change To. Shown only if the Dimension Type value is None. Click the Change To button after you select a new dimension type value in the Dimension Type drop-down list box.
  - Dimension Type drop-down list box. Shown only if the Dimension Type value is None. Use the drop-down list box to select a new dimension type, then click Change To to change the dimension type.

You need Read access to view model properties and Write access to change model properties.

To view or change model properties:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Models.

2 If it is not already selected, select the Browse tab.

3 Select a model and click Properties.
You can view the properties for the model.

4 You can make the following changes to model properties:

- If versioning is not enabled, enable it by clicking the Enable button next to Versioning. After versioning is enabled, model management maintains a version history for the model. You cannot disable versioning for a model after you enable it.
- Lock or unlock the model by clicking the Lock or Unlock button next to Lock Status.
- If the Dimension Type value is None, select a new dimension type in the drop-down list box next to the Change To button. After you select a new dimension type, click Change To and accept the resulting confirmation dialog box to invoke the change.

5 Click Close to return to the previous page and save any changes that you have made.

Sharing Data
Shared Services enables you to move data between applications. The method used to move data is called a data integration. A data integration specifies the following information:

- Source product and application
- Destination product and application
- Source dimensions and members
- Destination dimensions and members

A data integration wizard is provided to facilitate the process of creating a data integration.


Any user with Write access to the DataBroker.DataBroker application can create data integrations, and users with Read access to the DataBroker.DataBroker application can run data integrations. Access rights to this application are granted through the Shared Services User Management Console. For more information, see the Hyperion Shared Services User Management Guide. By default, all Shared Services users have full access (Read, Write, and Manage) to all integrations.

A data integration can be run manually or scheduled to run at a specific time. Data integrations can also be placed in groups and run sequentially.

Prerequisites for Moving Data Between Applications


Before data can be moved between applications, the models for both the source and destination application must be synchronized between Shared Services and the application. See Synchronizing Models and Folders on page 108 for instructions on synchronizing models. Although <Hyperion Product Name> and Shared Services provide tools to create integrations, the movement of data between applications, particularly applications from different products, requires that you be very familiar with the data.

Assigning Access to Integrations


You assign access to integrations using a similar process to assigning access to metadata models.
Note: By default, all users except the creator of the integration are denied access rights to data integrations (via a group called Users). To change the default setting for a user, you must explicitly add the user to the DataBroker application and apply the desired access rights. To override the DataBroker application access settings, it is possible to apply access rights to individual integration models. For more information, see Assigning Permissions to Models on page 128.

To assign access to integrations:


1 With the Integrations page displayed, click Workspace > Manage Models.

2 Go to Assigning Permissions to Models on page 128 and complete the procedure to assign permissions to integrations of your choosing.

Accessing Data Integration Functions


Write privileges are required to perform most data integration functions. A user with Read-only privileges can view (including filtering the view) and run integrations. A user with Write privileges, however, can view, run, create, edit, copy, and delete an integration.

Note: To prevent a user from seeing a specific integration model, the user must be explicitly assigned Deny for Read, at a minimum (Write and Manage can be any setting), for the DataBroker application.


To access all data integration functionality, in the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

Figure 22 shows a sample Manage Data window.

Figure 22    Manage Data Window

A list of integrations is displayed. The list includes names, source applications, and destination applications. An application name identifies a product, application, and a shared application in the form: <Product.Application.Shared Application>, for example, HFM.App1.beta.
Note: When viewing a list of integrations, performance may become slower as you add more integrations and as more users view the list.
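When scripting against such a list, the three-part application name splits cleanly on the periods. A trivial Python illustration (the variable names are mine, and this assumes none of the parts themselves contain a period):

    product, application, shared_app = "HFM.App1.beta".split(".")
    # product == "HFM", application == "App1", shared_app == "beta"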

Group integrations do not have a source and destination; each integration in a group specifies its individual source and destination. A group icon in the source and destination columns identifies a group integration. The link, View group details, lists the integrations in the group.

You can perform any of the following functions from the Integrations page:

- Create, edit, or copy an integration (see Creating or Editing a Data Integration on page 137)
- Create a data integration group (see Grouping Integrations on page 149)
- Delete an integration (see Deleting Integrations on page 144)
- Run, or schedule to run, an integration (see Scheduling Integrations on page 145)

Filtering Integration Lists


By default, all available integrations are displayed in the list. You can filter the list based on the source product and application or on the destination product and application.

To filter an integration list:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.


A list of integrations is displayed. The integrations for all products and applications are shown by default. For a sample Manage Data window, see Figure 22 on page 135. Two combination boxes, Source and Destination, are displayed above the Filter View button. Each combination contains two drop-down list boxes, the first to specify a product and the second to specify an application.
Note: A list of integrations is displayed when you create an integration group. If you are creating a group, begin with step 2.

2 Select a product from the product Source or Destination drop-down list box, or from both the product Source and Destination drop-down list boxes.

The second Source or Destination drop-down list box is populated with the applications for the selected product.

3 Select a particular application or leave All Registered Applications selected.

You can select only one application.

4 Click Filter View to update the list based on the selections that you made.

The filter enables the display of integrations that act on a particular source product or application, on a destination product or application, or on a combination of both. For example, if you specify Hyperion Business Modeling (HBM) as the source product and Hyperion Planning as the destination product, the list includes all integrations whose source is HBM or whose destination is Hyperion Planning. The following examples illustrate the different combinations of product and application that you can specify in the Source and Destination combination boxes:

- If a source product is specified and the three other drop-down boxes specify all, the list displays all integrations with the specified source product.
- If a source product and a source application are specified and the two destination drop-down boxes specify all, the list displays all integrations with the specified source application.
- If a source product and destination product are specified and the two application drop-down list boxes specify all, the list displays all integrations from the given source product to the given destination product. If an integration is bidirectional (can be transposed) and either source-to-destination or destination-to-source matches the given products, the integration is listed.
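The matching logic behind these combinations can be pictured as a simple predicate. The following Python sketch assumes an invented record shape for an integration ("All" stands in for All Registered Applications); it illustrates the rules above rather than any actual Shared Services code.

    def pair_ok(selected, actual):
        """selected and actual are (product, application) pairs."""
        prod, app = selected
        return prod in ("All", actual[0]) and app in ("All", actual[1])

    def matches(integration, src=("All", "All"), dst=("All", "All")):
        if pair_ok(src, integration["source"]) and pair_ok(dst, integration["destination"]):
            return True
        # A bidirectional integration also matches with the ends swapped.
        if integration.get("bidirectional"):
            return pair_ok(src, integration["destination"]) and pair_ok(dst, integration["source"])
        return False

    # Example: an HBM -> Planning integration matches a filter on source product HBM.
    integ = {"source": ("HBM", "App1"), "destination": ("Planning", "Plan1")}
    print(matches(integ, src=("HBM", "All")))  # True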


Creating or Editing a Data Integration


Shared Services provides a data integration wizard to facilitate the process of moving data between applications.
Note: To create a data integration, you must be assigned Write access to the DataBroker application. Access rights to this application are granted through the Shared Services User Management Console. For more information, see the Hyperion Shared Services User Management Guide.

To create a new integration or edit an existing integration:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

A list of integrations is displayed. For a sample Manage Data window, see Figure 22 on page 135.

2 Take one of the following actions:

- To create an integration, click New.
- To edit an integration, select an integration and click Edit.

Note: Locking of integration models in edit mode is not supported. As a consequence, it is possible for multiple users to simultaneously open an integration and make changes. If more than one administrator edits the same integration simultaneously, the last one to save takes precedence; the entire integration is overwritten with the last version saved. No warning message is displayed.

- To use an existing integration to create a new integration, select an existing integration and click Copy.

Note: Action buttons (New, Edit, Delete, Copy, and Run) that are enabled for a user are defined at the DataBroker application level, not at the model level. However, for existing integration models, the actions that a user can perform are controlled at the model level. For example, if a user has full access rights to the DataBroker application but Read access to a specific integration model, all buttons are enabled, but when the user tries to edit and save this integration, an error is displayed.

The first page of the wizard is displayed.


Figure 23 shows a sample Create Integration window.

Figure 23    Create Integration Window

For a new integration, the fields are blank. For an integration to be edited or copied, the fields are populated with existing values.


Figure 24 on page 139 shows a sample Edit Integration window.

Figure 24    Edit Integration Window

3 Enter information and select or clear check boxes.

- Integration Name: A text box for a unique name for the integration. For an edited integration, the text box is read-only and cannot be changed. To rename an integration, copy the integration to a new name and delete the original integration.
- Source: A combination text box to identify the source for the data. The first box contains a drop-down list of products registered with Shared Services. When you specify a product, the second box is populated with applications belonging to the product. When you specify an application, the third box is populated with data sources. Data sources include elements like Hyperion Planning Plan Type. If you select an application that does not require or support data sources, Default Data Source is displayed in a disabled field that you cannot change.
- Destination: A combination text box to identify the destination for the data. The first box contains a drop-down list of products registered with Shared Services. When you specify a product, the second box is populated with applications belonging to the product.
- Bidirectional: A check box that determines the direction in which the integration can be run. If the box is not checked, data is moved from the source to the destination. If the box is checked, the user can choose a direction (source to destination or destination to source) when scheduling or running the integration.
- System Override: A check box that enables writing to read-only fields in the destination application.
- Suppress Empty: A check box that enables the integration, for performance reasons, not to transfer missing cell (#missing) values. If the box is checked, to ensure that data is transferred successfully, you must prepare the destination database before running the integration. See Prerequisites for Moving Data Between Applications on page 134 for details.
- Scale: A text box for a value that acts as a multiplier for the data. Enter a value with which you want to scale the integration data. For example, to convert data from positive to negative values during the data transfer, specify a scale value of -1. Each transferred data value is then multiplied by -1, in effect converting it to a negative value.
- Notes: A text box for optional comments and notes.

4 Click Next to go to the second page of the wizard.

The second page of the create integration wizard enables you to specify the dimensions that are equivalent (shared) for purposes of the current integration. By default, the wizard identifies dimensions of the same name (in the source and destination applications) as shared. A line between dimensions indicates that they are shared.

To optimize performance when creating an integration that has more than one shared dimension, dimension order is important. The order of shared dimensions in the integration is controlled by the order in which you share dimensions. In general, place the shared dimension that has the largest number of selected members for the integration as the last shared dimension.

To do this, first determine which members of each dimension will be used in the integration. Of the dimensions that will be shared, identify the one whose member selection yields the largest number of members. On the second page of the wizard, if that dimension is automatically shared, unshare it. Then share all desired dimensions except that one dimension, and share the remaining dimension last before moving on to the third page of the wizard. This ensures that the dimension is displayed last in the Shared Dimension Members panel on the third page of the wizard.

5 Take one or more of the following actions:

- To specify the shared dimensions, select one or more pairs of dimensions (one in the source and one in the destination application) and click Share. A dimension can be shared with only one dimension in the other application. A line is drawn between any two dimensions that are shared.
- To unshare any dimensions that are shared by default, select one or more dimensions in either application and click Unshare, or click Unshare All to remove sharing from all dimensions.
- To return to the default shared dimensions, click Default.

Note: You are not required to identify every dimension that is in fact identical. The reason to identify shared dimensions is to specify the dimensions for which you want to move a range of members. For any particular integration, if you are interested in only one member for a dimension, you can leave the dimension unshared.


The third page of the wizard enables you to pick ranges of members from the shared dimensions to define the slice of data that will be transferred.

6 Click Next to go to the third page of the wizard.

The window displays dimensions in two categories:

- Shared Dimension Members. Dimensions identified as shared on the previous wizard page.
- Common POV. Dimensions not identified as shared. Each POV (point of view) uses the same background POV members and a unique set of dynamic POV members. You specify the dynamic POVs.

7 Select shared dimension members:

a. Next to a shared dimension from which you want to select members, select the Select Members menu control.
b. Select From Source or From Destination. A list of members from the appropriate application (source or destination) is displayed.
c. Select one or more members.
d. Click Select.

You can also specify a function to identify a set of members. For example, type Match(*) or AllMembers() to specify all members of the same name and the same dimension in the source and destination applications. When you run an integration with a function, the function transfers data only for members that are common to both applications. For both the source and destination applications, however, the integration must iterate through every member identified by the function. For example, the following table shows members from a time dimension for two applications:

Application 1 (Source)    Application 2 (Destination)
1997                      1999
1998                      2000
1999                      2001
2000                      2002
2001                      2003
--                        2004

If you specify an AllMembers() function, the integration must check all 11 members; however, data is transferred only for 1999, 2000, and 2001, because these years are common to both applications. Warning messages are returned for the other years.

Note: You must select at least one member from each dimension in Common Members or specify a function that identifies a common member.
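The behavior is essentially a set intersection with warnings for everything else. A small Python sketch of the example above (the lists repeat the table; the logic is illustrative, not a Shared Services API):

    source = ["1997", "1998", "1999", "2000", "2001"]
    destination = ["1999", "2000", "2001", "2002", "2003", "2004"]

    common = [m for m in source if m in destination]   # data moves only for these
    warned = sorted(set(source + destination) - set(common))

    print("transferred:", common)   # ['1999', '2000', '2001']
    print("warnings for:", warned)  # ['1997', '1998', '2002', '2003', '2004']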


When using double quotation marks (") and parentheses in member names in the Create Integration Wizard, follow these guidelines. The following examples illustrate valid use of these characters in member names:

abc
"abc"
func(abc)
func("abc")
func(a,b,c)
func("a(b)c")
func(a(b)c)

These are examples of invalid member strings:

func("abc)
func(a,"b,c)
func(a("b)c)
func(abc")

If you select invalid member names from the Data Integration Wizard, it automatically adjusts the syntax to be valid before passing the name on. However, if you manually type an incorrect name, the wizard does not correct the invalid name, and an error is returned.

The following members may be valid within an application but may behave differently: a,b,c is treated as three members, not one member named "a,b,c". Different styles can be mixed in a single shared pair of dimensions value input box, for example: a, b, c, "a,b,c", Children(a,b,c), iDescendants("a(b)c"), Ancestors("a(bc")
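The practical rule is that double quotes group an entire name and unquoted commas separate members. The following toy Python splitter illustrates just that convention; it is a simplified assumption about the syntax, not the wizard's actual parser.

    def split_members(text):
        """Split a member string, honoring double-quoted names."""
        members, current, in_quotes = [], "", False
        for ch in text:
            if ch == '"':
                in_quotes = not in_quotes        # quotes group but are not kept
            elif ch == "," and not in_quotes:
                members.append(current.strip())  # unquoted comma ends a member
                current = ""
            else:
                current += ch
        if current.strip():
            members.append(current.strip())
        return members

    print(split_members('a,b,c'))    # ['a', 'b', 'c']  -> three members
    print(split_members('"a,b,c"'))  # ['a,b,c']        -> one member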

8 Optional: Create POVs.

You can create one or more POVs but are not required to create any. That is, you can create a single POV using background POV dimensions, you can use dynamic POVs to create multiple POVs, or you can leave the POV dimensions blank. The Common POV area shows all dimensions not identified as shared on the previous wizard page. Dimensions from the source and destination applications are shown in separate columns. Initially, all dimensions are in the Background POV area and the Dynamic POV area is blank.

a. Next to a dimension in the source application, select the magnifying glass.
b. From the list of members, select a single member.
c. Repeat steps a and b for all other dimensions in the source application for which you want a member selected.

Note: You can leave background POV dimensions blank if the application does not require a value for them.


d. Next to a dimension in the destination application, select the magnifying glass.
e. From the list of members, select a single member.
f. Repeat steps d and e for all other dimensions in the destination application for which you want a member selected.

Note: You can leave background POV dimensions blank if the application does not require a value for them.

g. Click the Dynamic POV icon next to a dimension to move the dimension from the Background POV area to the Dynamic POV area.
h. Click Add to create a POV that is based on the static and dynamic members that you have selected.
i. Optional: To create another POV, select a different member and click Add. You can repeat this step by selecting different members for the dynamic POV and clicking Add for each selection. The numbering in the lower right corner identifies the POV, for example, POV 3 of 5. You can navigate to each POV by using the left and right arrow keys. You can also move the dimension in the Dynamic POV area back to the Background POV area, move a different POV to the dynamic area, and create another set of POVs.
j. Optional: To replace the content of any existing POV that you have access to, complete the following steps:
   i. Use the arrow keys in the lower right corner to navigate to a POV.
   ii. Change the content in one of the Dynamic POV areas.
   iii. Click Replace.

When the integration is run, it copies the data from the dimension member or members in the source application list to the dimension member or members in the destination application list.

9 Optional: To see a list of POVs, click View All.

10 Optional: To remove a POV, complete the following steps:

a. Click the left (<) or right (>) paging icon to navigate to a POV.
b. Click Remove.

11 Save the integration, or cancel the changes that you made, by taking one of the following actions:

- Click Save to save the integration. The Create Integration window remains open. You can make additional changes to the integration and save it again when finished.
- Click Save and Close. The integration is saved and the list of integrations is displayed. To schedule the new integration to run, see Scheduling Integrations on page 145.
- Click Save and Run. The integration is saved and the page to schedule an integration to run is displayed; see Scheduling Integrations on page 145.
- Click Close. Any changes that you made since the last save are lost. Any new group that has not been saved is not created.

Note: Case sensitivity in integration and integration group names is handled differently depending on the relational database. For Oracle configurations, if you save a new integration or group with a name comprised of the same characters but different case, such as ABC overwriting Abc, you are prompted to overwrite the existing one. After you overwrite, two integrations exist: Abc with the old contents and a new integration or group named ABC with the new contents. For non-Oracle configurations, if you try to overwrite Abc with ABC, an initial message warning about overwriting is displayed. If you continue, an exception states that the name already exists, and you are forced to select a new name.

Deleting Integrations
You can delete integrations that are no longer useful.

To delete an integration:
1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

A list of integrations is displayed.


Note: If the integration of interest is not displayed, check the Filter View Source and Destination drop-down boxes to see if the list of integrations is filtered. See Filtering Integration Lists on page 135 for information about how to filter an integration list.

2 Select one or more integrations to delete.

3 Click Delete.


A delete confirmation message is displayed.

4 Click one of these options:

- OK to delete the selected integration or integrations
- Close to cancel the delete operation


Scheduling Integrations
You can run an integration immediately or schedule it to run at a particular date and time. You can also place an integration in a group and schedule the group to run. See Grouping Integrations on page 149 and Scheduling Group Integrations on page 151.

To schedule an integration to run:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

A list of integrations and integration groups is displayed.


Note: If the integration of interest is not displayed, check the Filter View Source and Destination drop-down boxes to see if the list of integrations is filtered. See Filtering Integration Lists on page 135 for information about how to filter the integration list.

2 Select an integration to run.


Note: You can select only one integration at a time to schedule for running.

3 Optional: If the integration is bidirectional, the source and destination applications can be reversed. Selecting an application from the Source drop-down list box automatically shows, in the Destination drop-down list box, the other application that will be used as the destination.
Note: If the source and destination applications are the same, it can be confusing with a bidirectional integration to know which way the data is being moved. The first entry in the Source drop-down box is the original, default source application.

4 Click Run.
A popup window is displayed to schedule the integration to run. Figure 25 shows a sample Run Integration window.

Figure 25    Run Integration Window


5 To run the integration immediately, click OK. Immediately is selected by default.

6 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month, day, and time in the drop-down list boxes.

7 Click one of these options:

- OK to schedule the integration
- Close to cancel the operation

The integration you scheduled is added to the list of scheduled integrations. For information on viewing scheduled integrations, see Viewing the Status of an Integration on page 146.

Managing Scheduled Integrations


When you schedule an integration, the integration is added to the list of scheduled integrations. The scheduled integration list includes integrations that are waiting to run (pending), integrations that are currently running (running), integrations that have been cancelled (cancelled), and integrations that have already run (completed or failed). Integrations remain on the list until you remove them. You can perform the following actions on integrations on the scheduled integration list:

- View the status of a running, completed, or failed integration; see Viewing the Status of an Integration on page 146.
- Cancel a running integration; see Canceling an Integration on page 147.
- Run a copy of an integration; see Copying an Integration to Run on page 147.
- Reschedule an integration; see Rescheduling an Integration on page 148.
- Remove an integration from the list of scheduled integrations; see Removing an Integration on page 148.

Viewing the Status of an Integration


The scheduled integration page lists all integrations.

To view all scheduled integrations, click Workspace > Scheduled Integrations.


Figure 26 shows a sample Scheduled Integrations window.

Figure 26    Scheduled Integrations Window


The Status column indicates whether an integration is pending, running, completed, or failed.

To view details about a completed or failed integration, click the Failed or Completed link in the Status column.

Note: Data integrations that contain members with parentheses in the name, for example Account1(), will fail. If this is the reason for the failure, you will see an "Unknown function name Account1" error.

Canceling an Integration
You can cancel an integration that is scheduled to run or in progress (running).

To cancel an integration:
1 Click Workspace > Scheduled Integrations.

2 Select an integration. You can select a single integration only.

3 Click Cancel.
A confirmation message is displayed.

4 Click one of these options:


OK to cancel the integration
Close to cancel the operation

Copying an Integration to Run


You can schedule the same integration to run multiple times by making a copy of the integration and scheduling the copy to run.

To schedule a copy of an integration to run:


1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select a single integration only.

3 Click Run Copy.


A popup window is displayed to schedule the integration to run.

4 To run the integration immediately, click OK.


Immediately is selected by default.

5 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month,
day, and time in the drop-down list boxes.


6 Click one of these options:


OK to schedule the integration
Close to cancel the operation

The integration you scheduled is added to the list of scheduled integrations. You can schedule an integration multiple times, which results in the integration being listed multiple times on this page.

Rescheduling an Integration
You can reschedule an integration that is waiting to run so that it runs at a different date or time.

To reschedule an integration:
1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select a single integration only.

3 Click Run Copy.


A popup window is displayed to reschedule the integration to run.

4 To run the integration immediately, click OK.


Immediately is selected by default. The currently scheduled month, day, and time are displayed in the Schedule for combination boxes.

5 To schedule the integration to run at a particular time, click Schedule for and scroll to select the month,
day, and time in the drop-down list boxes.

6 Click one of these options:


OK to schedule the integration
Close to cancel the operation

Removing an Integration
You can remove an integration that is waiting to run (pending) or one that has already run (completed or failed).

To remove an integration:
1 Click Workspace > Scheduled Integrations.
2 Select an integration.
You can select multiple integrations to remove.


3 Click Remove.
A confirmation message is displayed.

4 Click one of these options:


OK to remove the integration or integrations you have selected
Close to cancel the operation

Note: In some cases, attempting to remove an integration or group that has already run from the Scheduled Integrations page results in a blank screen. If this occurs, click the Back button in your browser and refresh the screen using either F5 or your browser's Refresh button.

Grouping Integrations
You can create groups of integrations to run at the same time. Before creating a group, you must first create individual integrations that can be added to a group; see Creating or Editing a Data Integration on page 137. In the group, you specify the order in which to run the integrations.

To create or edit an integration group:


1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

A list of integrations is displayed.

2 Take one of the following actions:

To create a blank new group, click New Group. A Create Integration Group page with blank fields is displayed.

To create a new group with a list of integrations, select one or more integrations from the list of saved integrations, and click New Group. A Create Integration Group page with populated fields is displayed.

To edit an existing group, select the group and click Edit. A Create Integration Group page with populated fields is displayed.

Figure 27 on page 150 shows a sample Create Integration Group window.


Figure 27: Create Integration Group Window

3 Type a name for the group, or change the name for an existing group.
The name must be unique among existing group and integration names.

4 Optional: Type or change comments in the notes field.
5 Click Next to go to the next page.
Note: If you click Save or Save and Close, the group (name and notes) is saved. You can edit the group later and add integrations.

6 Select one or more integrations from Available Integrations.


You can nest an integration group within another group; therefore, the list of integrations includes integration groups in addition to individual integrations. For an individual integration, the list displays the source and destination product and directory. For a group, click the View group details link to see the integrations contained in the group.
Note: You can filter the list of available integrations to show a more useful list. See Filtering Integration Lists on page 135.

7 Click Add to copy the selected integrations to Selected Integrations.


The selected integrations are copied, not moved, to Selected Integrations. You can add an integration multiple times if you want to run it more than once.

8 Optional: If you are editing an existing group, or if you add integrations that you want to remove, select one
or more integrations in Selected Integrations and click Remove to remove them from the group.

You can click Remove All to remove all integrations from the group. Integrations are run in the order that they are shown in Selected Integrations.

9 Optional: Select an integration and click the up or down arrow to move the integration up or down in the list, changing the order in which it is run.

10 Save the group, or cancel the changes you have made by taking one of the following actions:

Click Save to save the group. The Create Integration Group window remains open. You can make additional changes to the group and save it again when finished.

Click Save and Close. The group is saved and the list of integrations is displayed. To schedule the new group to run, see Scheduling Group Integrations on page 151.

Click Save and Run. The group is saved and the page to schedule a group to run is displayed; see Scheduling Group Integrations on page 151.

Click Close. Any changes you made since the last save are lost. If it is a new group and it has not been saved yet, no group is created.

Scheduling Group Integrations


You can run an integration group immediately or schedule it to run at a particular date and time.

To schedule a group to run:


Note: If you selected Save and Run when you created the integration group, the page to run the integration group is displayed. Skip to step 4.

1 In the View Pane Navigation panel, select Administer to activate the Administer module, then select Manage Data.

A list of integrations and integration groups is displayed.


Note: If the group of interest is not displayed, check the Filter View Source and Destination drop-down list boxes to see if the list of integrations is filtered. See Filtering Integration Lists on page 135 for information about how to filter the integration list.


2 Select a group to run.


Note: You can select only one group at a time to schedule for running.

3 Click Run.
A page is displayed to schedule the group to run. Figure 28 shows a sample Run Group Integration window.

Figure 28: Run Group Integration Window

4 To run the group immediately, click OK.


Immediately is selected by default.

5 To schedule the group to run at a particular time, click Schedule for and scroll to select the month, day,
and time in the drop-down list boxes.

6 Click one of these options:


OK to schedule the group
Close to cancel the operation

The group you scheduled is added to the list of scheduled integrations. For information on viewing scheduled integrations, see Viewing the Status of an Integration on page 146.
Note: If one of the integrations within a group encounters an error while running, the entire group stops running.


Chapter 6
Automating Activities

As administrator, you can automate activities related to Interactive Reporting, Production Reporting, and generic jobs, schedules, and physical resources used for job output.

In This Chapter
Managing Calendars . . . . . . . . . . 154
Managing Time Events . . . . . . . . . . 158
Administering Public Job Parameters . . . . . . . . . . 159
Managing Interactive Reporting Database Connections . . . . . . . . . . 159
Managing Pass-Through for Jobs and Interactive Reporting Documents . . . . . . . . . . 160
Managing Job Queuing . . . . . . . . . . 160


Managing Calendars
You can create, modify, and delete custom calendars using Calendar Manager. You can create calendars to schedule jobs based on fiscal or other internal or organizational calendars. Jobs scheduled with custom calendars resolve dates and variable date limits against quarterly and monthly dates specified in the custom calendars, rather than the default calendar. Topics that provide information on Calendar Manager:

Viewing Calendar Manager on page 154
Creating Calendars on page 154
Deleting Calendars on page 155
Modifying Calendars on page 155
Calendar Manager Properties on page 155
Viewing the Job Log on page 156

Viewing Calendar Manager


You invoke Calendar Manager by clicking its icon on the Job Utilities toolbar. At the Job Utilities Logon screen, supply a user name, password, Workspace host, and port number. After the system confirms these values, Calendar Manager is displayed. It lists the default calendar and custom calendars, with the years for each calendar in subfolders. You choose a calendar name or year to modify or delete it. Selecting Calendars displays a blank Calendar Properties tab. Selecting a calendar name displays the Calendar Properties tab with the selected calendar record. Selecting a year displays the calendar Periods and Years tab with the selected calendar and year. You need the Job Manager role (see the Hyperion System 9 Shared Services User Management Guide) to create, modify, or delete calendars.

Creating Calendars
By default, Calendar Manager uses the standard Gregorian calendar, which cannot be modified except for holiday designations and the day on which the week starts.

To create a calendar:
1 Invoke Calendar Manager.
2 Select Calendars from the left navigation pane.
3 Enter a name for the calendar.
4 Enter information on the Calendar Manager windows, clicking Save on each window.
You must select New Year and enter a year before you can save the calendar. For field information, see Calendar Manager Properties on page 155.


Deleting Calendars
You can delete whole calendars or individual calendar years.

To delete calendars or years:


1 Navigate to a calendar or year.
2 Click Delete.

3 When prompted, verify deletion of the calendar or year.


You cannot delete a calendar's last remaining year. To delete the last year, you must delete the entire calendar.

Modifying Calendars
You can modify or add years to calendars.

To modify calendars:
1 In Calendar Manager, navigate to a calendar.

Select a calendar name to view calendar properties.
Select a year to modify periods or years and non-working days. When modifying periods or years, be sure the dates for weeks or periods are consecutive.

For field information, see Calendar Manager Properties on page 155.

2 Select New Year to add a year to this calendar, and modify properties.
3 Click Save.

Calendar Manager Properties


Topics that present properties available in Calendar Manager:

Calendar Properties on page 155
Custom Calendar Periods and Years Properties on page 156
Custom Calendar Non-Working Days Properties on page 156

Calendar Properties

Calendar Name: Name cannot be changed after it is saved.

User Defined Weeks: Enables selection of week start day. The default week contains seven days and is not associated with other time periods. User-defined weeks can be associated with periods, quarters, or months, but cannot span multiple periods. Start and end dates cannot overlap and must be sequential.

Week Start: If using user-defined weeks, select a starting day for the week.


Custom Calendar Periods and Years Properties

New Year: Any year is valid if no other years are defined. If this is not the first year defined, the year entered must be sequential.

Quarter/Period/Week: The system automatically assigns sequential numbers to quarters. All calendars contain 12 periods.

Start and End: Enter initial Start and End dates. The system automatically populates the remaining periods and start and end dates, and assigns quarters logically. After the fields are populated, you can edit start and end dates, which cannot overlap and must be sequential.

Custom Calendar Non-Working Days Properties

Days of the week: Selecting days of a week populates the calendar automatically. You can select non-working days by day or by day of the week.

Calendar: The calendar reflects the day starting the week, as previously selected. Clicking the arrows moves the calendar forward or back one month. You indicate working and non-working days on a day-by-day basis by selecting and deselecting days.

Viewing the Job Log


Calendar Manager keeps a job log that contains information about schedule execution including job names, start and stop times, names of users who executed jobs, reasons why jobs were executed, whether output was viewed, and directories where output is located. Jobs that are not complete have no stop time value.

To view the Job Log:


1 Click View Job Execution Log Entries.

The Job Log Retrieval Criteria dialog box is displayed.

2 Optional: Specify start and end dates and user information.


You can choose to view all log entries or only those for specific dates or users. See Job Log Retrieval Criteria on page 157.

3 Click OK to retrieve the log (see Job Log Entries on page 157).


Job Log Retrieval Criteria


To limit Job Log entries:
1 Select Start Date or End Date.
A calendar is displayed from which you can select a date. If you omit a start date, Calendar Manager retrieves all entries up to the defined end date, and vice versa.

2 Select All users, or select User and enter a user name.
3 Click OK.

Job Log Entries


The Job Log Entries window contains information about the execution of schedules, including schedule name, job name, start time, and the name of the user who executed the job. Users can view only those log portions that pertain to their schedules. Administrators can view all log entries, but can limit their log view by requesting to view only those entries related to specific users. Log entries are initially sorted in ascending order by schedule name. You can sort by columns (Schedule Name, Job Name, Start Time, User, Mark for Deletion) by selecting a column heading. To sort a column in descending order, Shift+click a column heading. To change the column display order, select a column heading and drag it to the desired location.

Job Log Entry Details


To view Job Log entry details, select a log entry and click Detail.
Information displayed includes schedule name, job name, output object, start time, stop time, user, and times executed.

Deleting Job Log Entries


To delete job log entries:
1 In Job Log Entries, select a log entry, and select Mark for Deletion.
To select multiple log entries, use the Shift or Ctrl key.

2 Click Yes when prompted to confirm the deletion.


Entries marked for deletion are not deleted until the next Workspace server cycle, which is a recurring event where Workspace performs routine maintenance tasks.


Managing Time Events


Public recurring time events and externally triggered events, both of which can be viewed and accessed by users, are managed only by users with the Schedule Manager role (see the Hyperion System 9 Shared Services User Management Guide). Topics that provide information on managing time events:

Managing Public Recurring Time Events on page 158
Creating Externally Triggered Events on page 158
Triggering Externally Triggered Events on page 159

Managing Public Recurring Time Events


To create, modify, and delete public recurring time events, follow the procedures for personal recurring time events described in the Hyperion System 9 BI+ Workspace Users Guide.

Creating Externally Triggered Events


Externally triggered events are non-time events against which jobs are scheduled. Jobs scheduled against externally triggered events run after the event is triggered.

To create an externally triggered event:


1 Navigate to Schedule and select Manage Events.
2 Select Externally Triggered Event from Add Another and click Go.
3 From Create Externally Triggered Event, perform these tasks:
a. Enter a unique name and description for the event.
b. Make sure Active is selected. If the Active option is not selected, the externally triggered event does not work.
c. Optional: Select an Effective starting date and ScheduleTimeAt. The default is the current date and time.
d. Optional: Select Inactive after and select a date and ScheduleTimeAt.

4 Set access control (see the Hyperion System 9 BI+ Workspace Users Guide) to enable roles, users, or groups to view and use the externally triggered event.

5 Click Finish.


Triggering Externally Triggered Events


Externally triggered events are non-time events that are triggered manually through the Schedule module. You can also trigger an external event programmatically using the triggerETE() method of the Externally Triggered Event interface of the Interactive Reporting SDK (see the Hyperion System 9 BI+ Workspace Developers Guide).
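For orientation, here is a minimal sketch of what programmatic triggering might look like. Only the triggerETE() method and the Externally Triggered Event interface are named by this guide; the session and lookup names below (SDKSession, getEventRepository, findExternallyTriggeredEvent) are hypothetical placeholders, so consult the Hyperion System 9 BI+ Workspace Developers Guide for the actual SDK signatures.

// Hypothetical sketch; only triggerETE() is taken from this guide.
public class TriggerEventExample {
    public static void main(String[] args) throws Exception {
        // Placeholder logon API; the real SDK session class may differ.
        SDKSession session = SDKSession.logon("admin", "password", "wshost", 6800);
        // Placeholder lookup of the externally triggered event by name.
        ExternallyTriggeredEvent event =
            session.getEventRepository().findExternallyTriggeredEvent("NightlyLoadComplete");
        // Jobs scheduled against the event run after this call.
        event.triggerETE();
        session.logoff();
    }
}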

To trigger externally triggered events:


1 Navigate to Schedule and select Manage Events.
2 Select Modify next to an event, and click Trigger Now.
A message is displayed verifying that the event triggered.

3 Click OK to close the verification message.
4 Click OK.

Administering Public Job Parameters


Public job parameters are managed by users with the Schedule Manager role (see the Hyperion System 9 Shared Services User Management Guide). To create, modify, and delete public job parameters, follow the procedures for personal job parameters described in the Hyperion System 9 BI+ Workspace Users Guide, except save the parameters as public instead of personal. Apply access control to allow roles, groups, and users to use public job parameters.

Managing Interactive Reporting Database Connections


Interactive Reporting documents use Interactive Reporting database connections to connect to databases. Separate Interactive Reporting database connections can be specified for each query in Interactive Reporting documents. If no Interactive Reporting database connection is specified for a query when a document is imported, users cannot process that query unless it uses only local results. It is, therefore, important that you import Interactive Reporting database connections and allow access to them by users who import Interactive Reporting documents.

To process Interactive Reporting documents in Workspace, no explicit access to the Interactive Reporting database connection is required when the SC_ENABLE flag is set to true (the default). When the SC_ENABLE flag is set to false, only users given explicit access by the importer to the Interactive Reporting database connection associated with the Interactive Reporting document have access. Use the ConfigFileAdmin utility to toggle the flag and to set the ServletUser password. See Using the ConfigFileAdmin Utility on page 190 for detailed instructions.


Managing Pass-Through for Jobs and Interactive Reporting Documents


Pass-through enables users to log on once to Workspace and access their reports databases without additional authentication. As the administrator, you can provide transparent access to databases for foreground jobs and for Interactive Reporting documents by enabling pass-through globally. When pass-through is enabled globally, item owners can enable or disable pass-through for jobs and Interactive Reporting documents. You can configure pass-through with user logon credentials or credentials set in Preferences, or you can leave the credential choice up to item owners (see Host Authentication Properties on page 205).

Managing Job Queuing


Job queuing occurs when no Job Service is available to process a job. Administrators can control Job Service availability using the Job Limit and Hold properties (see Job Service Dynamic Properties on page 178). Topics that explain how job queuing works for specific job types:

Scheduled Jobs on page 160
Background Jobs on page 161
Foreground Jobs on page 161

Scheduled Jobs
Scheduled jobs are queued when all Job Services are processing the maximum concurrent jobs defined. The queue is maintained by Event Service. Schedules in the queue are sorted based on priority and by the order in which they are triggered.

When a schedule is ready for processing, Event Service builds the job and submits it to Service Broker. Service Broker gets a list of all Job Services that can process the job and checks availability based on the number of concurrent jobs that each Job Service is processing. This information is obtained dynamically from each Job Service. If Service Broker cannot find a Job Service to process a job, it gives a Job Limit Reached exception, which enables queuing in Event Service. The schedule is added to the queue, and job data (including job application and executable information) for selecting a Job Service is cached.

When the next schedule is ready for processing, Event Service builds the job and determines if that job type is in the queue (based on cached job data). If the job type matches, the job is added to the queue. If not, the job is submitted to Service Broker for processing.


When Event Service queuing is enabled, a Job Service polling thread is initialized that checks for available Job Services. If one is available, that Job Service processes the first schedule it can, based on job data cached in Event Service. Scheduled job data is removed from the cache after the schedule is submitted to Job Service. Modified job properties are used only if the changes were made after the schedule was activated and added to the queue. Scheduled jobs are managed through the Schedule module (see the Hyperion System 9 BI+ Workspace Users Guide).
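The queuing behavior described above can be summarized in pseudocode. This is an illustrative sketch only, not product source; apart from the service names, the identifiers (onScheduleReady, JobLimitReachedException, and so on) are assumptions.

// Illustrative sketch of Event Service scheduling and queuing, as described above.
void onScheduleReady(Schedule schedule) {
    Job job = eventService.buildJob(schedule);
    // If a job of this type is already queued (compared against cached job data),
    // queue this one behind it rather than dispatching.
    if (eventService.queueContainsJobType(job)) {
        eventService.enqueue(schedule, job);
        return;
    }
    try {
        // Service Broker picks a Job Service below its concurrent-job limit.
        serviceBroker.dispatch(job);
    } catch (JobLimitReachedException e) {
        // Every eligible Job Service is at its limit: queue the schedule,
        // cache its job data, and poll for a Job Service to become available.
        eventService.enqueue(schedule, job);
        eventService.startJobServicePollingThread();
    }
}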

Background Jobs
If a Job Service is not available to process a background job (which means job limits are reached), a command is issued to Event Service to create a schedule with a custom event that runs at that time. This command persists schedule information in the database. The schedule uses job parameters associated with the background job, and Event Service processes the job as it does other scheduled jobs.

Foreground Jobs
If a Job Service is not available to process a foreground job, an exception occurs notifying the user that Job Service is busy. The user is given the option to queue the job for processing by the next available Job Service. If the user decides to queue the job, a schedule is created with a custom event that runs at that time, and Event Service processes the job as it does other scheduled jobs. The schedule and event are deleted after the job is submitted to Job Service.



Chapter 7
Administering Content

This section explains administrative tasks associated with system content stored in the repository.

In This Chapter
Organizing Items and Folders . . . . . . . . . . 164
Administering Pushed Content . . . . . . . . . . 164
Administering Personal Pages . . . . . . . . . . 164


Organizing Items and Folders


For efficient Workspace functioning, structure folders so users can access items quickly and easily. Within the folder hierarchy, balance folder size against hierarchy depth. Do not let folders contain huge numbers of items, nor let the number of levels in the folder hierarchy become excessive.
Note: If you frequently import content into Workspace, run a virus scan regularly on the root folder.

A hidden folder named System is designed for administrator use. It is visible only to administrators, and only when hidden items are revealed. Use System to store files you do not want users to see, such as icon files for MIME types. You cannot rename, delete, or move the System folder.

To view the System folder:


1 Navigate to Viewer and click Explore.
2 Select View > Show Hidden.
The System folder is now displayed in the folder list.

The import function enables you to import Interactive Reporting, Production Reporting, and generic files to the repository directly from Workspace. To import Financial Reporting files and Web Analysis files into the repository, you must use Financial Reporting Studio and Web Analysis Studio.

Administering Pushed Content


You can push content to add it to users' Favorites. For example, Chris, the marketing manager, wants everyone in marketing to access the marketing schedule document easily. Chris imports the schedule and pushes this item to the marketing group. Now members of the marketing group can view the schedule from Favorites rather than having to navigate through Explore to view the document. For instructions on how to push items, see the Hyperion System 9 BI+ Workspace Users Guide.

Administering Personal Pages


Administrators configure the generated Personal Page and content for users' Personal Pages. For information about using Personal Pages, see the Hyperion System 9 BI+ Workspace Users Guide. For details about the configuration properties of the Personal Pages servlet, see Personal Pages Properties on page 213.


Tasks involved with administering personal pages:


Configuring the Generated Personal Page on page 165
Understanding Broadcast Messages on page 166
Providing Optional Personal Page Content to Users on page 168
Displaying HTML Files as File Content Windows on page 168
Configuring Graphics for Bookmarks on page 168
Configuring Exceptions on page 169
Viewing Personal Pages on page 169
Publishing Personal Pages on page 169
Configuring Other Personal Pages Properties on page 169

Configuring the Generated Personal Page


When users first log on to Workspace, a default generated Personal Page, which Workspace automatically creates and saves as part of the user's Personal Pages, is listed under Favorites. Changes the administrator makes do not affect users who previously logged on. Therefore, the exact content of a user's generated Personal Page depends on when that user first logs on. After logging on initially, users modify their own Personal Pages. They can also create additional Personal Pages. Due to access privileges, the generated page may differ between users. By carefully setting the access control on files used for the generated page, you can arrange, for example, for users in the Sales department to see different content on the generated page than users in the Production department. Items included on the generated Personal Page by default:

One Broadcast Messages content window with links to all items in /Broadcast Messages
One Broadcast Messages file content window for each displayable item in /Broadcast Messages
One content window for each of the first two pre-configured folders
The first (as sorted) displayable HTML item in any pre-configured folder
My Bookmarks content window
Exceptions Dashboard content window

You can customize items included by default by setting Generated Personal Page properties in Servlet Configurator (see Personal Pages: Generated Properties on page 214).


To configure the generated Personal Page, do any or all of the following:


Set Generated Personal Page properties in Servlet Configurator.

Populate /Broadcast Messages with combinations of nondisplayable items, for which links display on the generated Personal Page, and displayable HTML files or external links, whose content displays there. All these items appear as links and constitute one content window under the Broadcast Messages heading. Some displayable items may be displayed as file content windows, depending on configuration settings in Generated Personal Page properties.

In /Broadcast Messages, create pre-configured subfolders that are displayed when users first log on. Populate these folders with displayable HTML items and nondisplayable items. Each pre-configured folder has a corresponding content window that contains links to all items in the folder. Each displayable item is displayed as a file content window.

Tip: As with any content, only users with required access privileges can see items and folders in /Broadcast Messages and other pre-configured folders. To tailor the generated page for groups, put folders and items intended for those groups in /Broadcast Messages and pre-configured folders, and assign access privileges to the target groups. For example, if each group accesses different subsets of pre-configured folders, then users in each group see different content windows when they first log on.

Understanding Broadcast Messages


The /Broadcast Messages folder disseminates messages to all system users, except as restricted by access privileges granted on individual content items. Put announcements and documents for wide distribution in this folder. The terms Broadcast Messages content windows and Broadcast Messages refer only to the content of the Broadcast Messages folder itself, excluding the content of its subfolders (the preconfigured folders). Broadcast Messages include:

One content window that displays links to all items in /Broadcast Messages
File content windows for each displayable item in /Broadcast Messages

Unlike other content window types, Broadcast Messages cannot be deleted from users' Personal Pages. If users make another page their default Personal Page, Broadcast Messages remain on the originally generated Personal Page. Users can delete the generated page only if they added the /Broadcast Messages folder to another Personal Page. (A user can acquire multiple pages containing the Broadcast Messages by copying pushed Personal Pages.)


Configuring Content for Broadcast Messages


/Broadcast Messages is your vehicle for customizing what users see according to enterprise or administration needs. By including content for various groups and setting the access control on each item or folder to ensure that only its intended group has access, you push content to users' browsers.

Configuring Pre-Configured Folders


To configure pre-configured folders for Broadcast Messages, add subfolders to /Broadcast Messages.

To add folders to /Broadcast Messages:


1 From Viewer, select Broadcast Messages.
Tip: To view the /Broadcast Messages folder, select View > Show Hidden.

2 Select File > New Folder.
3 Enter a folder name and click OK.
The folder you created is displayed in /Broadcast Messages in Viewer.

Configuring Folder Items


To configure folder items:
1 From Explore, select a folder in /Broadcast Messages.
2 Select File > Import > Item.
3 Select Tools > Personalize > Manage Personal Pages.
4 Select My Personal Page, and click Content.
5 Move the Broadcast Message subfolder from Select Content to My Personal Page Content, and click Save Settings.

6 Select Favorites > My Personal Page to view the added content.


Follow the directions for adding content to Personal Pages in the Hyperion System 9 BI+ Workspace Users Guide.


Renaming Broadcast Messages Folders


When you rename the /Broadcast Messages folder, the changed folder name is displayed in the title bar of the Broadcast Messages content window in Viewer and on users' Personal Pages. The system property Folder containing broadcast messages automatically reflects the changed name. After renaming /Broadcast Messages or its subfolder, /Personal Page Content, you must manually change another property, Location. The Location property is found in Servlet Configurator, in the Personal Pages/Publish section (see Personal Pages: Publish Properties on page 214).

Providing Optional Personal Page Content to Users


Beyond what you configure for the generated Personal Page, you can configure optional content for users to include on their Personal Pages. All pre-configured folders are optional content for users and are displayed on the Content page for users to add to Personal Pages. A pre-configured folder is displayed on a Personal Page as a content window when added, with links to the items it contains. Import all content to pre-configured folders using Viewer (see the Hyperion System 9 BI+ Workspace Users Guide).

Displaying HTML Files as File Content Windows


Workspace allows users to display HTML files on their Personal Pages as file content windows. This means that, rather than having links to HTML files, the file contents are displayed on Personal Pages. By default, the first displayable item in a pre-configured folder is automatically displayed as a file content window on each user's generated Personal Page. As an administrator, you ensure that users with the required access privileges see the content of HTML items by subscribing to them. See the Hyperion System 9 BI+ Workspace Users Guide for information on displaying HTML files as file content windows.

Configuring Graphics for Bookmarks


To provide graphics that users can use for image bookmarks, place graphic files in /wsmedia/personalize in the servlet's deployment directory. You can add customized icon files for users upon request. Add these image files to /wsmedia or folders that are within the scope of the Context root (/Hyperion), and give the user a URL that points to that file; for example, /wsmedia/sqr/vcr.gif.
Note: Icons do not display on Personal Pages if the file names or directories contain double-byte character set (DBCS) characters.


Configuring Exceptions
To enable exceptions to be added to the Exceptions Dashboard, select the Advanced Option Allow users to add this file to the Exceptions Dashboard when importing through Viewer. For information on how users can add exception-enabled jobs or items to their Exceptions Dashboard, see the Hyperion System 9 BI+ Workspace Users Guide. To give jobs exceptions capability, you must design jobs (usually, Production Reporting programs or Interactive Reporting jobs) to write exceptions to the output.properties file. For programmers' information about supporting exceptions in jobs, see the Hyperion System 9 BI+ Workspace Users Guide.
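As a rough illustration, an exception-enabled job writes its exceptions into the output.properties file alongside its other output. The key names below are invented placeholders for illustration only; the actual property keys that Workspace recognizes are documented in the Hyperion System 9 BI+ Workspace Users Guide referenced above.

# Hypothetical output.properties fragment written by a job;
# the key names shown are placeholders, not the documented Workspace keys.
exception.count=2
exception.1=Inventory below reorder point for 14 items
exception.2=3 purchase orders missing approval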

Viewing Personal Pages


Content that you defined in Viewer is displayed in the Personal Page generated by Workspace for first-time users.

Publishing Personal Pages


You can publish Personal Pages so that users can copy them to their own Personal Pages, and you can change the default Publish properties for publishing Personal Pages (see Personal Pages: Publish Properties on page 214). When Personal Pages are published, they are added to the Personal Page Content folder in /Broadcast Messages (default folder location is root/Broadcast Messages/Personal Page Content). Users with modify access to /Personal Page Content can publish Personal Pages (see the Hyperion System 9 BI+ Workspace Users Guide).
Note: Make sure that users understand that even though two users can copy a published page, they are not guaranteed identical results. Access privileges on items included on the published page determine what users see.

Configuring Other Personal Pages Properties


Use Servlet Configurator to set Personal Page configuration properties (see Personal Pages Properties on page 213); for example:

Color schemes
Maximum number of Personal Pages
Visibility of content window headings (colored bars that resemble title bars)


Chapter 8
Configuring RSC Services

Administrators configure RSC services and their properties using RSC.

In This Chapter
About RSC . . . . . . . . . . 172
Managing Services . . . . . . . . . . 174
Modifying RSC Service Properties . . . . . . . . . . 175
Managing Hosts . . . . . . . . . . 182
Managing Repository Databases . . . . . . . . . . 183
Managing Jobs . . . . . . . . . . 189
Using the ConfigFileAdmin Utility . . . . . . . . . . 190


About RSC
RSC is a utility that enables you to manage remote (RSC) services. RSC can configure services on all hosts of a distributed Workspace system. RSC modifies the config.dat file that resides on the target host. You can run RSC from all server hosts in the system. In addition to modifying services, you can use RSC for these tasks:

Adding, deleting, and modifying hosts
Adding, deleting, and modifying database servers
Changing the database password used by RSC services

To remove RSC services, use the ConfigFileAdmin utility (see Using the ConfigFileAdmin Utility on page 190).

Starting RSC
To start RSC:
1 Start Service Configurator.

Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Tools > Service Configurator.
UNIX: Run ServiceConfigurator.sh, installed in Install Home/bin.

2 From the Service Configurator toolbar, select Module > Remote Service Configurator, or click the RSC icon.
Logging On to RSC
To log on to RSC, enter the requested information:

Administrative user ID
Password for user name
Workspace host of the services to configure
Workspace port number for the server host


Using RSC
When you first log on to RSC, the services that are installed on the host that you are logged on to, and basic properties of the highlighted service, are displayed. Toolbar icons represent functions you perform using RSC.
Table 15: RSC Toolbar Icons (Tooltip: Description)
Exit Remote Service Configurator: Closes RSC after user confirmation
Refresh item listing: Updates the list of services and basic properties of the selected service
Ping service: Checks whether a service is alive
Show defined hosts: Displays the Defined Hosts window, where you define, delete, or modify hosts
Show defined database servers in the system: Displays the Defined Database Servers window, where you add, delete, and modify database servers
Delete selected item: Deletes a service after user confirmation
Show item properties: Displays properties of a service for editing
Show Help for Remote Service Configurator: Displays online help for RSC
About RSC

173

Managing Services
With RSC, you can modify properties or delete installations of these services:

Event Service
Job Service
Name Service
Repository Service
Service Broker

Topics that provide information about working with services:


Adding RSC Services on page 174
Deleting RSC Services on page 174
Pinging RSC Services on page 175

See also Modifying RSC Service Properties on page 175.

Adding RSC Services


To add RSC services to Workspace:
1 Run the Workspace installation program to install the service software on a host computer.
2 Configure the service during the installation process.
For information about adding Workspace services, see the Hyperion System 9 BI+ Workspace Installation Guide. After the service is installed, you can reconfigure it using RSC.
Note: After adding RSC services, all Service Brokers in your system are notified and begin dispatching requests to the services.

Deleting RSC Services


To delete RSC services:
1 Select a service from the Services list and click the Delete icon.
Note: You cannot delete Name Service, Repository Service, or Event Service.

2 When prompted, click Yes to delete the service.


When you delete a service, a warning appears letting you know that an error message will be displayed during uninstallation. You can safely ignore that message.


Pinging RSC Services


Errors sometimes occur because services did not start correctly or stopped working properly. An easy way to test service availability is to ping it; that is, to send a message and see if it responds.

To ping a service from RSC, select the service and click the Ping service icon.

If the service is responsive, a message is displayed; for example:
Service NS1_stardust is alive.

If the service is not responsive, a message is displayed indicating that ping could not connect to the service; for example:
A Brio.Portal error occurred in Ping: ConnectionException: Connection refused: connect

This indicates that the service is not running. If you receive this error, refer to the service log file to investigate why the error occurred.

Modifying RSC Service Properties


When you modify service properties, the service receives notification of changes and immediately updates its configuration. Most properties are used while the service is running and take effect immediately. Examples of such properties are Max Connections and Logging Levels. Properties that are only used at start time, however, do not take effect until the next time the service starts. Such properties include Host, Directory, Log File, and IP Port. Not every service has all property groups. The groups of properties that all or most RSC services have, and other properties of each service, are described in these topics:

Common RSC Properties on page 176
Job Service Properties on page 178

Note: RSC services not mentioned explicitly in this section have only common properties.

To view or modify service properties:


1 Double-click the service name, or select the service name and click the Show item properties icon.
A properties page is displayed.

2 Select a tab to view or modify a group of properties.


For example, select the Storage tab to modify properties that define where persistent storage of business data is located.


Common RSC Properties


All RSC services have general and advanced properties and most have storage properties:

General RSC Properties on page 176
Advanced RSC Properties on page 176
RSC Storage Properties on page 177

General RSC Properties


General properties of all RSC services:

Description: Brief description of the service.

Host: Host on which the service resides. You can select or define a host. If you define a host, enter a name that makes the service easily identifiable within your organization. The maximum number of characters allowed is 64. See Managing Hosts on page 182.

IP Port: Service IP port number. The wizard assigns a unique port to each service. Even if you install multiple services of one type (Job Service, for example) on one host, the wizard automatically enters a unique IP port number for each one.

Directory: Location where the service resides. Adopt a convention for naming the directories where you store service information. For example, for Event Service named ES_apollo, the directory might be j:\Brio\Brio8\server\ES_apollo.

Note: Changes to Host, IP Port, and Directory properties do not take effect until the service is restarted.

Advanced RSC Properties


Advanced properties describe the service's logging level and the maximum number of connections the service supports. All services have advanced properties.

Log Levels: Level at which service errors are logged. See Configuring Logging Levels on page 229. A change to this property takes effect immediately. Therefore, when errors occur and you want more debugging information, you can change the logging level without restarting the service.

Max Connections: Maximum number of connections allowed. Consider memory allocation for the connections you allow. You must increase the maximum number of file descriptors on some systems, such as UNIX. A change to this property takes effect immediately. Changing the Max Connections setting without restarting the service is useful to dynamically tune the service at run time.


RSC Storage Properties


Storage properties are used by a service to connect to the database where it stores its data. These services store data of their own:

Name Service: General configuration information, such as lists of hosts and database servers
Repository Service: Workspace content metadata
Event Service: Schedules and subscriptions

Service Broker and Job Service do not have storage properties. Data for all these services is stored in the repository database, for which storage properties define connectivity:

DB Driver: Name of the driver used to access the database. This is database-dependent and should only be changed by an experienced administrator. If you change DB Driver, you must change other files, properties, data in the database, and the Java classpath. See Changing the Repository Database Driver or JDBC URL on page 187.

JDBC URL: URL for Java access to the database using the JDBC driver. The services use this URL to connect to the database server. If you change JDBC URL, you must change other files, properties, and data in the database. For details, see Changing the Repository Database Driver or JDBC URL on page 187.

User Name: User name for the database account. All services should use one database account.

Password: Password for the database account.

Caution! Workspace only supports configurations in which all services connect to one database. For this reason, change the settings on this tab only if you are an experienced Workspace administrator; otherwise, request assistance from Hyperion Solutions Customer Support.

Storage property settings rarely should be changed. The circumstances that would require changes include, for example, assignment of host names on your network, changes to a database user account (name or password), or changes to database type (as from Oracle to Sybase). Such changes require extensive changes to external systems configuration as well.
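For example, storage values for a repository hosted on Oracle might look like the following. The driver class and JDBC URL follow standard Oracle thin-driver conventions and are shown for illustration only; substitute the host, port, SID, and account used at your site.

DB Driver:  oracle.jdbc.driver.OracleDriver
JDBC URL:   jdbc:oracle:thin:@dbhost.example.com:1521:BIPLUS
User Name:  biplus_repo
Password:   ********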


Job Service Properties


Topics that describe properties unique to Job Service and other tasks that start from the Job Service Properties dialog box:

Job Service Dynamic Properties on page 178
Job Service Database Properties on page 178
Job Service Production Reporting Properties on page 179
Job Service Application Properties on page 179
Executable Job Service Properties on page 182

When you modify properties of Job Service, the service receives change notifications and updates its configuration immediately. Properties used while the service is running take effect immediately. Such properties include Max Connections, Logging Levels, and all properties on the Database, Production Reporting, Application, and Executable tabs. Properties only used at start time, however, do not take effect until the next time Job Service starts. Such properties include Directory, Log File, and IP Port.

Job Service Dynamic Properties


Dynamic properties provide information about how Job Service processes jobs:

Job Limit: Maximum number of concurrent jobs to be run by Job Service. If this value is 0 or -1, an unlimited number of concurrent jobs can be run. Changes made to Job Limit are picked up by Job Service dynamically, although the limit for jobs that are already running cannot be modified at runtime.

Hold: Determines whether Job Service can accept jobs for processing. When set to true, a Job Service continues to process jobs that are running, but does not process any new jobs.

Both properties can be changed without restarting Job Service. Only Job Service has Dynamic properties.

Job Service Database Properties


Database properties provide information needed for Job Service to connect to the databases against which it runs jobs, including Server Name, Type, and Location of the host where the database server resides. Adding connectivity to local database servers enables Job Service to run programs that connect directly to local databases. (For information on adding database servers, see Adding Database Servers on page 184.)


To define connectivity between Job Service and an additional database:


1 Click Add.
A list of Workspace database servers is displayed.

2 Select a database and define connection strings or environment variables.

To delete a database's connectivity from Job Service, click Delete.

To modify the connectivity properties of a database:
1 Select a database from the list and click Modify.
2 Modify or create environment variables using Name and Value.

For example, name=ORACLE_SID, value=PAYROLL.


Note: The Database Servers list combined with the Production Reporting servers list is used to construct data sources for importing Production Reporting documents.

Job Service Production Reporting Properties


The Production Reporting page lists Production Reporting servers that are currently defined and available on the same host as Job Service. Production Reporting properties define the Production Reporting servers in the system that are used to run Production Reporting jobs. You can add and delete Production Reporting servers, and modify the path of Production Reporting servers by clicking the corresponding button.

Job Service Application Properties


Application properties describe the applications used by Job Service. You can configure Job Service to run combinations of three job types: Production Reporting, Interactive Reporting, and generic.

Application: Name of the application. Select an application or add one. All applications defined in Workspace are listed. Applications can have multiple executables, each on a different Job Service to distribute the load.

Description: Optional read-only description of the application. Click Modify to change the description.

Command String: Read-only command string to pass to the application when it runs. Click Modify to change the command string.

You can add applications to Job Service, delete applications that have no associated executables, and modify application properties by clicking the corresponding button. The Add button is available only when you must define executables for applications (see Adding Applications for Job Service on page 180). After you add applications, you must define their executable properties (see Executable Job Service Properties on page 182).


Adding Applications for Job Service


When adding applications, you must specify the application and an executable. An application may be installed on multiple hosts. Each installation of the application has a different executable, or program file and path, which you define on the Executable page. For example, Oracle Reports (an application) might be installed on two hosts, apollo and zeus. The Job Services on apollo and zeus might have identical application properties, but their executables would differ, because each host has its own executable file. For more information about Executable properties, see Executable Job Service Properties on page 182.

To add applications:
1 Display the Job Service application properties.
2 Click Add to open Application Properties.
3 Supply a name and description.
4 Enter a command string to pass to the application when it runs.
Use one method:

Select a pre-defined template.
Enter a command string in the field provided.
Build a command string using command tokens.

5 Click OK, then click the Executable tab to define the executable properties for the application.
See Executable Job Service Properties on page 182.

Command Tokens
You can use command tokens to build command strings to pass to applications when they run:

$CMD: Full path and name of the executable.

$PARAMS: Parameters defined for the program. You can set prompt and default values for individual parameters in program properties.

$PROGRAM: Program to run. Examples of programs include shell scripts, SQL scripts, or Oracle Reports.

$BPROGRAM: Program name with the file extension removed. Use this in combination with hardcoded text to specify a name for an error file, a log file, or another such file. An example would be log=$BPROGRAM.log.

$FLAGS: Flags associated with the program.

$EFLAGS: Flags associated with the executable or an instance of it. All jobs associated with this executable use these flags.

$DBCONNECT: Database connect string associated with the program. If set, end users cannot specify a connect string at runtime.

$DBUSERNAME: Database user name associated with the program. If set, end users cannot specify a user name at runtime.

$DBPASSWORD: Database password associated with the program. If set, end users cannot specify a password at runtime.

$BPUSERNAME: User name. If the user name is required as an input parameter to the job, specifying this token instructs the system to include the user name in the command line automatically, rather than prompting the user.

Command String Examples

Example 1
Command string template that runs Oracle Reports:
$CMD userid=$DBUSERNAME/$DBPASSWORD@$DBCONNECT report=$PROGRAM destype=file desname=$BPROGRAM.html batch=yes errfile=$BPROGRAM.err desformat=html
When the tokens in the above command string for Oracle Reports are replaced with values, the command executed in Job Service looks like this:
r30run32 userid=scott/tiger@Brio8 report=inventory destype=file desname=inventory.html batch=yes errfile=inventory.err desformat=html

Example 2
Command string template that runs shell scripts on a Job Service running on UNIX:
$CMD $PROGRAM $PARAMS
When the tokens in the above command string for running shell scripts are replaced with values, the command executed in Job Service looks like this:
sh runscript.sh p1 p2 p3

Example 3
Command string template that runs batch files on a Job Service running on a Windows system:
$PROGRAM $PARAMS
When the tokens in the above command string for running batch files are replaced with values, the command executed in Job Service looks like this:
Runbat.bat p1 p2 p3

Executable Job Service Properties


Executable properties provide information about running applications used by Job Service:

Executable: Location of the executable program for the application (full path and executable name); must be co-located with Job Service.

Flags: Value used in the command line for the token $EFLAGS, which represents the flags associated with the program.

Environment Variables: Environment variables associated with the application, for example, $PATH, $ORACLE_HOME.

Only Job Service has executable properties.


Managing Hosts
The Defined Hosts dialog box lists the currently-defined hosts in Workspace and identifies the host name and platform. Topics that describe how to add, modify, and delete hosts:

Adding Hosts on page 182
Modifying Hosts on page 183
Deleting Hosts on page 183

Adding Hosts
After you install services on a computer, you must add the computer as a host in Workspace.

To add hosts:
1 Click the Show defined hosts icon, and click Add.

2 Supply a host name and the platform used by the host.

Caution! The host name cannot start with numerals. Hyperion Interactive Reporting Data Access Service and Hyperion Interactive Reporting Service do not work if host names start with numerals.

3 Click OK.
Workspace pings the host to make sure it is on the network. If the ping fails, an error message is displayed. After Workspace successfully pings the host and validates the host name, Workspace adds the host and lists it in the Defined Hosts dialog box.

4 Click OK.
Note: If you change the host name, you must restart Workspace services and Job Service for the change to take effect.


Modifying Hosts
You modify a host to change its platform designation.

To modify hosts:
1 Click .
2 Select a host from the list, and click Modify.
3 Select a platform for the host, and click OK.

Deleting Hosts
You cannot delete a host if services are installed on it.

To delete hosts:
1 Click .
2 Select a host from the list, and click Delete.
3 When prompted, click Yes to delete the host, and click OK.

Managing Repository Databases


Workspace uses repository databases to store and manage application metadata:

Defining Database Servers on page 184
Changing the Services Repository Database Password on page 187
Changing the Repository Database Driver or JDBC URL on page 187

Defining Database Servers


The Defined Database Servers dialog box lists the currently defined Workspace repository database servers, identifying the database server name, type, and location (host) of each. Topics that describe how to manage database servers using RSC:

Database Server Properties on page 184
Adding Database Servers on page 184
Adding Job Service Database Connectivity on page 185
Modifying Database Servers on page 185
Deleting Database Servers on page 186


Database Server Properties


Properties for all repository database servers are Database type, User name, and Password. The user name is the default user name that Job Service uses for running Production Reporting programs on the database server; it is used when a database user name and password are not supplied when storing jobs in the repository.

Adding Database Servers


To add database servers:
1 Click .
2 Click Add.
3 Supply this information:

Name: Alphanumeric name for the database server you want to add; must be at least five characters long.
Database type: Type of database server you are using.
Host: Host where the database server resides.
User name: Default user name used by Job Service for running Production Reporting programs on the database server. Used if the job owner does not supply a database user name and password when importing a given job.
Password: Valid password for the user name.

4 Click OK.

Adding Job Service Database Connectivity


To facilitate database connectivity, after you add a database server, you must associate it with Job Service. Doing so enables Job Service to eliminate network traffic by running a program that connects directly to a local database. Multiple Job Services can access the same database. For example, you can define three Job Services within one Workspace domain, and each Job Service can point to a given XYZ database loaded on a large UNIX server. When asked to run a given report that uses data on the XYZ database, Service Broker dispatches the job to one of three Job Services associated with the database. Should a computer hosting one Job Service go down, Service Broker automatically routes the job to another Job Service.


To associate database servers with Job Service:


1 Select a Job Service to associate with the database server.
2 Click .
3 Select the Database tab, and click Add.
4 Select the database server to associate with Job Service, and click OK.

5 Supply this information:

Connectivity information: Information needed depends on the database type; for example, for an Oracle database, enter a connect string.
Environment variables: Required only to execute Production Reporting jobs against the database. Used to specify database information and shared library information that may be required by Production Reporting; for example, name=ORACLE_SID, value=PAYROLL.

6 Click OK.

Modifying Database Servers


To modify database servers:
1 Click .
2 Select a database server from the list, and click Modify.
3 Make changes as necessary (see Database Server Properties on page 184), and click OK.

Deleting Database Servers


To delete database servers:
1 Click .
2 Select a database server from the list, and click Delete.
3 When prompted, click Yes to verify database deletion, and click OK.


Changing the Services Repository Database Password


When you change the password that the services use to access the repository database, the order of steps is critical. Carefully read all instructions before performing them.

Caution! Make sure to change the password in Workspace before changing it in the database. If you perform the steps in the wrong order, you may lose the ability to run Workspace.

To change the repository database password:


1 From RSC, select Name Service, Repository Service, or Event Service.
2 Click Show item properties, and select the Storage tab.
3 Change the password and click OK.
4 Repeat step 1 through step 3 for all Name Services, Repository Services, and Event Services with the same database account, making certain to enter the same password for each one.

If these services use different database accounts, perform this step only for those that use the account whose password you are changing.

5 Close RSC.
6 In LSC, click Show host properties, and select the Database tab.
7 Change the password and click OK.
This password property (like the other properties on the Database tab) applies to all LSC services on the local host, all of which use one database account. For more information about LSC, see Chapter 9, Configuring LSC Services.

8 Repeat step 6 and step 7 on every host that contains LSC services, making certain to enter the password the same way each time.

9 If you are using the same database for row-level security, change the password for row-level security from the Administer module.

10 Stop the Workspace services.

11 Change the password in the database, making certain it matches the password entered for Workspace services.

12 Restart the services.

Changing the Repository Database Driver or JDBC URL


When you change the driver for the repository database or its URL, the order of steps is critical. Carefully read all instructions before performing them.

Caution! If you perform steps in the wrong order, you may lose the ability to run Workspace.


If parts of the JDBC URL change, such as the database server name, port number, or SID, you must update the JDBC URL property. To do so, perform the JDBC URL portions of the instructions.

To change the database driver:


1 Stop Workspace services.
2 Back up config.dat and server.xml, in \BIPlus\common\config.
3 Start LSC.
4 Click Show host properties, and select the Database tab.
5 Update the database driver and JDBC URL properties.
6 Start the ConfigFileAdmin utility.
See Modifying config.dat on page 192.

7 Type 3 to select Get Name Server Data.


You can use this data listing to preserve all Name Service properties you do not wish to change.

8 Type 4 to select Modify Name Server Data.
9 As the program prompts you for each property, refer to the listing you just displayed, and enter the same values for all properties except Name Server JDBC URL and Name Server JDBC Driver.

10 Enter the values for the Name Server JDBC URL and Name Server JDBC Driver properties; for example:

Name Server JDBC URL: jdbc:brio:oracle://brio8host:1521;SID=brio8
Name Server JDBC driver: com.brio.jdbc.Oracle.OracleDriver

11 Run this SQL against the repository database:

update v8_jdbc set jdbc_driver='newDriverName', jdbc_url='newJdbcUrl'

For example:

update v8_jdbc set jdbc_driver='com.hyperion.jdbc.Oracle.OracleDriver', jdbc_url='jdbc:hyperion:oracle://hyperionhost:1521;SID=hyperion'

12 Update the variable BP_DBDRIVER.

BP_DBDRIVER is defined in Install Home/bin/set_common_env.bat (or set_common_env.sh). By default, it is set to:

Hyperion Home\common\JDBC\DataDirect\3.4.1\lib\hyjdbc.jar

13 Add a JDBC driver to Hyperion Home\common\JDBC and set BP_DBDRIVER to the full path of the JAR files.
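For example, after installing a new driver, the line in set_common_env.bat might be changed along these lines (the directory and JAR name are illustrative; substitute the actual path of the driver you installed):

set BP_DBDRIVER=Hyperion Home\common\JDBC\NewDriver\lib\newdriver.jar

On UNIX, make the equivalent change in set_common_env.sh and export the variable.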

14 Restart the services.


Managing Jobs
Job Service compiles and executes content-creation programs or jobs. Job Service listens for Workspace job requests (such as requests initiated by users from the Scheduler module), manages program execution, returns the results to the requester, and stores the results in the repository. Three job types that Workspace can store and run:

Interactive Reporting: Jobs created with Interactive Reporting Studio.
Production Reporting: Secure or nonsecure jobs created with Production Reporting Studio.
Generic: Jobs created using other applications (for example, Oracle or Crystal Reports) through a command-line interface.

For Interactive Reporting jobs, no special configuration is necessary; every Job Service is preconfigured to run them. For users to run Production Reporting or generic jobs, you must configure a Job Service to run the report engine or application program. One Job Service can run multiple types of jobs, as long as it is configured for each type (except Interactive Reporting). Topics that explain how to configure Job Service to run jobs:

Optimizing Enterprise-Reporting Applications Performance on page 189
From Adding Job Services to Running Jobs on page 190

See also Adding Applications for Job Service on page 180 and Executable Job Service Properties on page 182.
Note: The system automatically creates a textual log file (listed beneath the job) for every job it runs. You can suppress all job log files by adding the Java system property -Dbqlogfile_isprimary=false to the common services and Job Service startup scripts. You must then stop and restart all services. See Chapter 2, Administration Tools and Tasks, for more information on stopping and starting the services.
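For example, if a startup script launches the service with a direct java invocation, the property is simply appended to the existing options. A hypothetical invocation (the bracketed placeholders stand for whatever options and main class the script already uses) would change along these lines:

java -Dbqlogfile_isprimary=false <existing JVM options> <existing main class>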

Optimizing Enterprise-Reporting Applications Performance


The Workspace architecture is designed for distributed, enterprise implementations. For optimum performance:

Replicate Job Services (multiple Job Services assigned to a given data source on different computers) to increase overall reliability and decrease job turnaround time.
Install Job Service on the same computer as the database to conserve valuable network resources.

Note: Normally, there should be one Job Service on a given host. You can configure a Job Service to run multiple applications.


To run jobs against an enterprise application, configure these parameters:


Host: Physical computer identified to the system by host name.

Job Service: Job Service configured on the host using RSC.

Application: Third-party vendor application designed to run in the background. Application examples include Production Reporting, Oracle Reports, or public domain application shells such as PERL.

Program: Source used to drive an invocation of an application. For example, a user might submit a Production Reporting program that generates a sales report to a Production Reporting application on a given host through Job Service.

From Adding Job Services to Running Jobs


This topic synthesizes the configuration process for running jobs into one set of steps, taking you from service installation to successful job execution. To run jobs in Workspace, complete these steps, which are explained in detail in other topics throughout this document:

1. Install the report application's executable on the host where you run Job Service. Use the installation program that comes with the report application.
2. On the host, install Job Service software from the Workspace services installation CD.
3. Configure Job Service using RSC.
4. Start Job Service.
5. For generic jobs, add an application and executable. For Production Reporting jobs, add a Production Reporting executable, and define a database server and database connectivity properties. Interactive Reporting jobs need no special configuration.
6. Import a job (a report or program) to run against the application. This can be an Interactive Reporting, Production Reporting, or generic job.
7. Users can now run the job from Viewer.

Using the ConfigFileAdmin Utility


Topics that explain how to modify the config.dat file and how to configure access to process documents and job output using the ConfigFileAdmin utility:

About config.dat on page 191
Modifying config.dat on page 192
Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output on page 193
Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled on page 193


About config.dat
Regardless of whether services are running on Windows or UNIX, and whether they are running in the common services process or in separate processes, RSC services always use config.dat to begin their startup process.
config.dat resides in \BIPlus\common\config. All RSC services on a host (within an Install Home) share a config.dat file. If you distribute RSC services across several computers, each computer has its own config.dat.

When Name Service starts, it reads config.dat to get database connectivity and logon information. All other RSC services read this file to get their password, host, and port for Name Service. Name Service gets its configuration information directly from the database; other RSC services connect to Name Service to get their configuration information.
config.dat uses plain ASCII text. Passwords contained in the file are encrypted, and you can modify them only with RSC or the ConfigFileAdmin utility. This ensures that only people who know the config.dat password can modify the service passwords in the file. See Modifying config.dat on page 192.

To modify configuration information in config.dat, modify service properties using RSC; RSC writes your changes to config.dat.

Sample config.dat File


[Setup]
Key=30721481
Password=AD5FA5E0B71DE9E7F142DD39548571725AC01E801EBAB4345FEA58B8317398F002C8E468B6F8D7AE
[NameServer]
Name=NS1_ggodbee1.hyperion.com
Host=ggodbee1.hyperion.com
Password=36873B0EB76584AC386F1BBB20A2D4E702C8E468B6F8D7AE
SAPassword=C0CBBC76515E6DCEB7A75E3DF4017F56AC54BA329010DA0979489183D1BFF61A02C8E468B6F8D7AE
JDBC_URL=jdbc:hyperion:sqlserver://localhost:1433;DatabaseName=db446
Login=db446
JDBC_DRIVER=hyperion.jdbc.sqlserver.SQLServerDriver
Port=1498
[AGENT=RM1_ggodbee1.hyperion.com]
Password=16B3CA62244959C8F1F35F3FF7064A42AC54BA329010DA0979489183D1BFF61A02C8E468B6F8D7AE
[AGENT=SB1_ggodbee1.hyperion.com]
Password=F2D10A63E8C641C938C47417F4A5F01BAC54BA329010DA0979489183D1BFF61A02C8E468B6F8D7AE
[AGENT=ES1_ggodbee1.hyperion.com]
Password=02BB7F0E8241FB2CB0123177FC5C48ADAC54BA329010DA0979489183D1BFF61A02C8E468B6F8D7AE
[AGENT=JF1_ggodbee1.hyperion.com]
Password=D8DD3759EA06983E5689A5B24D3AE557AC54BA329010DA0979489183D1BFF61A02C8E468B6F8D7AE


Modifying config.dat
You view or modify information in config.dat by using a simple utility run from a command line, named ConfigFileAdmin.bat (Windows) or ConfigFileAdmin.sh (UNIX). This file is in Install Home\bin. To run the ConfigFileAdmin utility, specify the config.dat password on a command line after the file name. For example, with the default password, you would type configfileadmin.bat administrator (on Windows) or ConfigFileAdmin.sh administrator (on UNIX). Tasks you can accomplish with the ConfigFileAdmin utility:

Deleting services
Changing service passwords
Changing the password for access to config.dat
Changing the ServletUser password

The main menu of the ConfigFileAdmin utility offers these commands:


0) Exit
1) Create New Config File
2) Load Existing Config File
3) Get Name Server Data
4) Modify Name Server Data
5) Add Service Agent
6) Delete Service Agent
7) List Service Agents
8) Get Service Agent Password
9) Change Service Agent Password
10) Change Config File Password
11) Validate Password
12) Encode Password
13) Encrypt Password
14) Miscellaneous Commands Menu

To list the properties of Name Service, such as its database logon name and password, select option 3.

When the Workspace installation creates a config.dat file, it assigns a default password, administrator. This differs from the admin account password. As a matter of system security, you should change the config.dat password using the ConfigFileAdmin utility, by selecting option 10.

You can use option 4 to modify the database password that Name Service uses to connect to the repository database, or you can use RSC to do so.


Specifying Explicit Access Requirements for Interactive Reporting Documents and Job Output
By default, no explicit access to Interactive Reporting database connections is required to process Interactive Reporting documents or job outputs using the plug-in or Workspace. To require explicit access, as when a database is associated with Interactive Reporting documents or job output, use the ConfigFileAdmin utility.

To require explicit Interactive Reporting database connection access to process documents and job output:

1 At a command line, go to the Install Home\bin directory of the Workspace server and enter:

configfileadmin password

2 Type 14.

. . .
11) Validate Password
12) Encode Password
13) Encrypt Password
14) Miscellaneous Commands Menu

Supply the requested information for the database (user) name, database password, database URL, and database driver. You can find this information in the server.xml file.

3 Type 1.

0) Exit
1) Toggle the SC_ENABLED flag for ServletUser (enables/disables feature)
2) Update the ServletUser password and re-generate properties file.

This flag is stored in the repository.

4 After toggling, restart the server, because Repository Service caches this information.

Setting the ServletUser Password when Interactive Reporting Explicit Access is Enabled
The special user ServletUser has read-only administrative privileges. When the SC_ENABLED flag is set to true, ServletUser sends a request for access to Interactive Reporting documents or job output on behalf of users without explicit access to the Interactive Reporting database connection associated with the document or job output. When the SC_ENABLED flag is set to false, ServletUser cannot make such a request; only users given explicit access by the importer to the Interactive Reporting database connection associated with the Interactive Reporting document or job output have access.


The password for ServletUser is updated in the repository and stored, encrypted, in the sc.properties file. The directory in which this file is located depends on the servlet engine you are using. For example, for Apache Tomcat, this file is in:

Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\workspace\WEB-INF\config\sc.properties

To change the ServletUser password:


1 At a command line, go to the \BIPlus\bin directory of the Workspace server and enter:

configfileadmin password

2 Type 14.

. . .
11) Validate Password
12) Encode Password
13) Encrypt Password
14) Miscellaneous Commands Menu

3 Type 2.

0) Exit
1) Toggle the SC_ENABLED flag for ServletUser (enables/disables feature)
2) Update the ServletUser password and re-generate properties file.

4 Enter the information requested.
5 Manually update the sc.properties file on all Workspace servlet installations.


Chapter 9 Configuring LSC Services

Administrators configure LSC services and their properties using LSC and the portal.properties file.

In This Chapter
About LSC . . . . . 196
Modifying LSC Service Properties . . . . . 198
Modifying Host Properties . . . . . 203
Modifying Properties in portal.properties . . . . . 206


About LSC
LSC enables you to modify properties of installed LSC services:

Analytic Bridge Service (ABS), also known as Extended Access for Hyperion Interactive Reporting Service
Assessment (Harvester) Service (HAR)
Authentication Service (AN)
Authorization Service (AZ)
Global Service Manager (GSM)
Hyperion Interactive Reporting Service (BI)
Hyperion Interactive Reporting Data Access Service (DAS)
Local Service Manager (LSM)
Logging Service (LS)
Publisher Service (PUB)
Session Manager (SM)
Super Service (BPS), also known as Hyperion Interactive Reporting Base Service
Update (Transformer) Service (TFM)
Usage Service (UT)

LSC only modifies LSC service properties; it neither creates nor removes LSC services. To add services, use the Workspace installation program. To remove services, see Using the ConfigFileAdmin Utility on page 190. LSC cannot configure services on a remote host (or in another Install Home on the same host), or on a system with no GUI capability. LSC edits repository information and server.xml (in Install Home\common\config), which holds configuration information only for services in that Install Home.
Note: Multiple Workspace installations, or Install Homes, may reside on one host computer. A server installation is a set of installed services in one Install Home directory that run in one process space. If a host has two Install Home directories, they require two separate process spaces. LSC always edits server.xml for its own Install Home.


Starting LSC
To start LSC:
1 Start Service Configurator.

Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Tools > Service Configurator.
UNIX: Run the ServiceConfigurator.sh file, installed in Install Home/bin.

2 Select Module > Local Service Configurator, or click the LSC icon.
3 Enter your user ID and password.

Note: If you log on with a normal user account, some fields, such as the Trusted Password and Pass-through configuration information, are read-only. For full access to all functionality, you must be logged in as a user who is provisioned with the BI+ Global Administrator role.

Using LSC
LSC lists the services that are installed in the Workspace installation (Install Home) from which LSC is running, along with basic properties of the highlighted service. Toolbar icons represent functions you perform using LSC.
Table 16 LSC Toolbar Icons (tooltip and description)

Exit: Closes LSC after user confirmation
Show host properties: Displays the properties of the current Install Home for editing
Show item properties: Displays properties of the selected service for editing
Show Help for Local Service Configurator: Displays online help for LSC


Modifying LSC Service Properties


Administrators use LSC to modify LSC service properties. Not every service has all property groups. The property groups that all or most LSC services have, and other properties of each service, are described in these topics:

Common LSC Properties on page 198
Assessment and Update Services Properties on page 199
Hyperion Interactive Reporting Service Properties on page 199
Hyperion Interactive Reporting Data Access Service on page 201

To view or modify most LSC service properties, double-click the service name, or select the service name and click .

To view or modify GSM or LSM properties (which do not appear in the Local Service list box), click the Show host properties icon to display the General Properties tab, which contains these service properties.

To modify host properties, click .

A properties page is displayed. Select a tab to view or modify a group of properties.

Common LSC Properties


All LSC services have general properties, of which three are standard:

Service Name: Read-only name of the service, assigned during installation.

Run Type: Controls whether a service is started with other services (by the startCommonServices script or Hyperion Interactive Reporting Base Service). Setting Run Type to Start makes the service active, so it starts with the others. Setting Run Type to Hold inactivates the service, so it does not start with the others. The Hold setting is useful for troubleshooting, to temporarily limit which services start.

Log Level: See Configuring Logging Levels on page 229.

Services with only standard general properties:

Analytic Bridge Service
Authentication Service
Authorization Service
Hyperion Interactive Reporting Base Service (starts all LSC and RSC services in one Install Home)


Logging Service
Publisher Service
Session Manager
Usage Service

Assessment and Update Services Properties


In addition to standard general properties (that is, Service Name, Run Type, and Log Level; see Common LSC Properties on page 198), Assessment Service and Update Service have these general properties:

Work directory: Name of the directory where the service's temporary files are stored.

Max concurrent threads: Maximum number of concurrent threads the service supports.

Request Queue polling interval: Frequency with which the service checks for the Request Queue lock timeout setting. For example, to set the service to poll every 30 seconds, type 30.

Request Queue lock timeout: Number of seconds after which the Request Queue lock timeout expires.

Clear log entries after: Number of hours after which log entries should be cleared.

Hyperion Interactive Reporting Service Properties


This service has additional general properties, and special font considerations for UNIX systems, as described in these topics:

Hyperion Interactive Reporting Service General Properties on page 199
Fonts for UNIX on page 200

Hyperion Interactive Reporting Service General Properties


In addition to standard general properties (that is, Service Name, Run Type, and Log Level; see Common LSC Properties on page 198), Hyperion Interactive Reporting Service has these general properties:

Cache Location: Directory where the service's temporary files are stored. For example, to set the cache location to the D drive, type D:\\temp.

Max Concurrent Requests: Maximum number of concurrent requests the service supports; requests that exceed this setting are blocked. For example, to block concurrent requests after 4999, type 5000.

Polling Interval: Frequency with which the service checks the Document Unload Timeout setting. For example, to set the service to poll every 180 seconds, type 180.


Min. Disk space (MB): Minimum disk space required to service requests. For example, to allocate 10 MB as the minimum disk space, type 10.

Document Unload Timeout: Inactive time in seconds after which documents are unloaded from memory to conserve system resources. For example, to retain documents in memory no longer than 30 minutes after last use, type 1800.

Document Unload Threshold: Number of open documents that activates the document unloading mechanism. For example, to set the maximum number of open documents to 15, type 15.

Fonts for UNIX


If UNIX users want Interactive Reporting documents to have a consistent look and feel, you must make Type1, TrueType, or OpenType fonts available to Hyperion Interactive Reporting Service. For a Windows-like look and feel, download Microsoft's TrueType Web fonts. If you have Type1, TrueType, or OpenType fonts and a fonts.dir file, perform step 5 and step 6 to make these fonts available to Hyperion Interactive Reporting Service. If you have Type1, TrueType, or OpenType fonts but no fonts.dir file, perform step 4, step 5, and step 6.

To make Microsoft's TrueType Web fonts available to Hyperion Interactive Reporting Service when you do not have Type1, TrueType, or OpenType fonts:

1 Download Microsoft TrueType Web fonts from http://sourceforge.net/projects/corefonts/ or another source.

2 Create a directory.

3 Extract each CAB file (*.exe) into the newly created directory using the cabextract utility in \BIPlus\bin:

\BIPlus\bin/cabextract -d directory <CAB file>

4 Create a fonts.dir file in the directory containing the font files using the ttmkfdir utility in \BIPlus\bin:

\BIPlus\bin\ttmkfdir -d directory -o directory\fonts.dir

5 Set the environment variable BQ_FONT_PATH to the directory where fonts.dir was created. Add this variable to the start-up script to save your changes: in the start-up script, set BQ_FONT_PATH=directory and export BQ_FONT_PATH. This environment variable can contain colon-separated paths to directories containing fonts.dir.
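As a minimal illustration, assuming the fonts were extracted to /usr/local/fonts/msttf (the path is hypothetical), the start-up script would contain:

BQ_FONT_PATH=/usr/local/fonts/msttf
export BQ_FONT_PATH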

6 Restart Hyperion Interactive Reporting Service.


Hyperion Interactive Reporting Data Access Service


Topics that discuss Hyperion Interactive Reporting Data Access Service properties:

Hyperion Interactive Reporting Data Access Service General Properties on page 201
Hyperion Interactive Reporting Data Access Service Data Source Properties on page 201
Adding Data Sources for Hyperion Interactive Reporting Data Access Service on page 202

Hyperion Interactive Reporting Data Access Service General Properties


In addition to standard general properties (that is, Service Name, Run Type, and Log Level; see Common LSC Properties on page 198), these general properties can be used to fine-tune Hyperion Interactive Reporting Data Access Service performance:

Relational Partial Result Cell Count: Maximum number of relational data table cells that a block of results data from a query can contain when sent from Hyperion Interactive Reporting Data Access Service to the client. Default value is 2048; minimum is 1.

Multidimensional Partial Result Row Count: Maximum number of multidimensional data table rows that a block of results data from a query can contain when sent from Hyperion Interactive Reporting Data Access Service to the client. Default value is 512; minimum is 1.

Reap Interval: Frequency in seconds with which Hyperion Interactive Reporting Data Access Service clears query data from memory when the requesting client seems to be disconnected. Default value is 180; minimum is 5.

Minimum Idle Time: Minimum number of seconds to retain query data in memory for client retrieval before assuming that the client is disconnected. Default value is 180; minimum is 0.

Hyperion Interactive Reporting Data Access Service Data Source Properties


The Data Source properties page lists all defined data sources for Hyperion Interactive Reporting Data Access Service. From this page, you can modify data source properties, or create and remove data sources. These properties apply to all Hyperion Interactive Reporting Data Access Service data sources:

Connectivity Type: Data source database driver; must be installed on the host for Hyperion Interactive Reporting Data Access Service.

Database Type: Database type for the data source. Whether Hyperion Interactive Reporting Data Access Service can connect to databases is determined by Interactive Reporting database connections and the database drivers installed.


Hostname/Provider: Database host name or logical data source name. For OLE DB database connections, this is the OLE DB Provider identifier.

Server/File (OLE DB only): Server file or data source name used for database connections.

Note: Connectivity Type, Database Type, Name of Data Source, and Server/File properties are used only to route requests to Hyperion Interactive Reporting Data Access Service. Database client software to connect to the requested database must be installed and properly configured on each host where Hyperion Interactive Reporting Data Access Service is configured to accept routed requests for database access.

Maximum Connections to DB: Maximum number of connections permitted from a Hyperion Interactive Reporting Data Access Service process to the data source, using the current driver. Default value is 2048; minimum is 0.

Maximum Queue Size: Maximum number of requests that can simultaneously wait to obtain a connection to the database server. Default value is 100; minimum is 0.

Minimum Idle Time: Minimum number of seconds to keep open unused database connections. Default value is 180; minimum is 0.

Reap Interval: Frequency (in seconds) at which the system checks for unused database connections and closes them. Default value is 180; minimum is 5.

Maximum Connections in Pool: Maximum number of unused database connections to keep open for a database user name and Interactive Reporting database connection combination. Default value is 1000; minimum is 0.

Minimum Pool Idle Time: Minimum number of seconds to keep unused connections for a database user name and Interactive Reporting database connection combination in memory. Default value is 180; minimum is 0.

Adding Data Sources for Hyperion Interactive Reporting Data Access Service
When adding data sources, these Hyperion Interactive Reporting Data Access Service properties, which are set using LSC, must match the corresponding Interactive Reporting database connection properties, which are set in Interactive Reporting Studio:


Hyperion Interactive Reporting Data Access Service property (in LSC) and the matching Interactive Reporting database connection property (in Interactive Reporting Studio):

Connectivity type corresponds to Connection software
Database type corresponds to Database type
Hostname/Provider corresponds to Host or provider (OLE DB)

Interactive Reporting Studio uses Interactive Reporting database connections to determine which Hyperion Interactive Reporting Data Access Service to use; Hyperion Interactive Reporting Data Access Service uses Interactive Reporting database connections to connect to databases.

Modifying Host Properties


Services that are installed in one Install Home directory on a host computer are collectively called an Install Home and run in one process space. Most hosts have only one Install Home. For hosts that have multiple Install Homes, host properties belong to the Install Home whose LSC you are running, rather than to the host computer. Use LSC to modify these host properties:

Host General Properties on page 203
Host Database Properties on page 204
Host Shared Services Properties on page 205
Host Authentication Properties on page 205

To modify host properties:


1 From the LSC main window, click .
2 Modify General, Database, Shared Services, or Authentication properties as necessary.
3 Click OK.

Host General Properties


Host general properties include specification of the system's GSM and LSM:

Installation Directory: Read-only path to the directory where Workspace services are installed.
Cache Files Directory: Directory where temporary files are stored for caching of user interface elements and content listings.
Root Log Level: Logging level for all services (see Configuring Logging Levels on page 229).


GSM: Name: Read-only name of the GSM that manages this Install Home's services.
GSM: Service Test Interval: Frequency in minutes with which GSM checks that registered services on all hosts are running.
GSM: Host: Computer on which GSM is installed.
GSM: Port: Port number on which GSM is running.
LSM: Log Level: Logging level for LSM (see Configuring Logging Levels on page 229).
LSM: Service Test Interval: Frequency in minutes with which LSM checks that other services are running.
LSM: GSM Sync Time: Frequency in seconds with which LSM synchronizes its information with GSM.

Host Database Properties


Host database properties, such as the database driver and the database password used by the services, relate to the repository database:

Database Driver: Name of the driver used to access the database. This is database-dependent and should be changed only by experienced administrators. If you change the database driver, you must change other files, properties, data in the database, and the Java classpath. See Changing the Repository Database Driver or JDBC URL on page 187.

JDBC URL: URL for Java access to the database using the JDBC driver. If you change the JDBC URL, you must change other files, properties, and data in the database. See Changing the Repository Database Driver or JDBC URL on page 187.

User Name: User name that services use to access the database that contains their metadata. This name must match for all installations using the same GSM.

Password

Host database properties rarely need to be changed, but if modifications are necessary, edit these files, which contain database information for services, to keep them in sync:

server.xml: Modify using LSC
config.dat: Modify using the ConfigFileAdmin utility
Every RSC service: You must set properties on every RSC service individually
The startCommonServices script
All service-specific start scripts

Instructions for changing some of the database properties are given in Changing the Services Repository Database Password on page 187, and in Changing the Repository Database Driver or JDBC URL on page 187.


Host Shared Services Properties


Shared Services properties provide information about the computer that hosts the Shared Services installation to which this Workspace installation (Install Home) is registered. Modifying the Host, Port, and CSS Config File URL properties changes only the values stored in the repository; it does not re-register the application with Shared Services on the specified host and port. To re-register, use Hyperion Configuration Utility (see the Shared Services Registration and Deregistration tasks).
Note: You can edit Shared Services properties only if you have the BI+ Global Administrator role.

Host: Name of the computer hosting Shared Services.
Port: Port for the Shared Services User Management Console; the default port number is 58080.
Project name: Shared project name; defined through Shared Services.
Application name: Shared application name; defined through Shared Services.
CSS Config File URL: URL used to retrieve external configuration information from Shared Services. Two related settings:
Default URL: URL stored in the database and used by all services.
Use this URL instead for this server: Used to override the URL just for this Install Home (typically, it is not necessary to set this property).

The CSS Config File URL is stored in BpmServer.properties, the location of which depends on your servlet engine. For example, with Apache Tomcat, this file is in:

Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\WEB-INF\conf
Note: If the Host, Port, or CSS Config File URL changes, you must update the BpmServer.properties file.

Host Authentication Properties


Host authentication properties relate to the trusted password and pass-through configuration values, which apply to jobs and Interactive Reporting documents:

Set trusted password: Enables the use of a trusted password.
Use user's login credentials for pass-through: Enables pass-through using the user's logon credentials.
Allow users to specify credentials for pass-through: Enables pass-through using the credentials the user specifies in Preferences. If no credentials are specified in Preferences, an error message is displayed each time users attempt to open Interactive Reporting documents or run jobs.


Modifying Properties in portal.properties


A few properties are modified by editing the portal.properties text file, in Install Home\lib\msgs. To edit portal.properties, use a plain text editor. To change a property value, edit the string that follows its equal sign (=). Change only value strings; do not modify the file in any other way. When saving the file, be sure to preserve its name and file extension. Properties configured in portal.properties (see the example after this list):

defaultCalendarName
listenerThreadPollingPeriod: Frequency in minutes with which the system should poll for externally triggered events.
multiValueSQRParamSeparator: Character to use as a separator between values of a multi-value parameter in Production Reporting jobs.
bqDocsTimeOut: Interval in seconds that services should wait for Hyperion Interactive Reporting Service to open Interactive Reporting documents.
defaultCategoryUuid: Root folder name.
outputLabel: Name of a set of job output files, which is composed of the outputLabel value followed by the job name.
outputLabel1: Part of a job output label identifying a cycle of an Interactive Reporting job.
bqlogfilenameprefix: Log file name for Interactive Reporting job output, without the file extension.
bqlogfileext: File extension of the log file for Interactive Reporting job output.
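For instance, a fragment of portal.properties might look like the following; the values shown are illustrative examples, not product defaults:

listenerThreadPollingPeriod=5
multiValueSQRParamSeparator=,
bqDocsTimeOut=300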


Chapter 10 Configuring the Servlets
Configuring the servlets enables Workspace to more precisely meet the needs of your organization. Configuration settings depend on aspects of your organization's environment, such as how the system handles user passwords, usage volume, and how users interact with Workspace.
Note: For information on customizing parameter forms for Production Reporting and generic jobs, see the Hyperion System 9 BI+ Workspace User's Guide. For information on customizing Web module user interfaces, refer to the Hyperion System 9 BI+ Workspace Developer's Guide.

In This Chapter

Using Servlet Configurator . . . . . 208
Modifying Properties with Servlet Configurator . . . . . 209
Zero Administration and Interactive Reporting . . . . . 220
Load Testing Interactive Reporting . . . . . 221


Using Servlet Configurator


You can configure many details of the servlets' behavior with Servlet Configurator, which configures all locally installed servlets. Servlet Configurator and the configuration file it edits, ws.conf, are installed automatically when the servlets are deployed. The location of ws.conf depends on your servlet engine. For example, for Apache Tomcat, ws.conf is in:

Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\workspace\WEB-INF\config
Note: If you replicated the servlets in your system and want to make the configurations match, copy the ws.conf file from one servlet host to the other, and check for host-specific settings.
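As a minimal sketch (the host name and deployment path are hypothetical), copying the file between two UNIX servlet hosts could look like this:

scp WEB-INF/config/ws.conf admin@servlethost2:/opt/hyperion/deployment/WEB-INF/config/ws.conf

Review the copied file for host-specific settings before restarting the servlets.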

To start Servlet Configurator:

Windows: Select Start > Programs > Hyperion System 9 BI+ > Utilities and Administration > Servlet Configurator.
UNIX: Run the config.sh file, installed in Install Home/bin.

The configuration toolbar is displayed above the navigation pane and contains these icons:
Icon descriptions:

Saves all configuration settings (keyboard shortcut: Alt+S)
Sets the visible configuration settings (that is, those currently displayed in the right-hand frame) to their default values
Sets all configuration settings to their default values
Displays the online help


Modifying Properties with Servlet Configurator


Servlet Configurator displays a list of Property folders that you use to view and modify servlet properties.

To view or modify the properties in a folder:


1 Click the magnifying glass icon next to the folder.
2 Make changes. See these topics for property descriptions:

User Interface Properties on page 209
Personal Pages Properties on page 213
Internal Properties on page 215
Cache Properties on page 216
Diagnostics Properties on page 218
Applications Properties on page 218

3 Save your settings.
4 Make the settings effective by restarting the servlets.

User Interface Properties


User Interface: Login Properties on page 210
User Interface: Localization Properties on page 211
User Interface: Subscription Properties on page 212
User Interface: Job Output Properties on page 212
User Interface: SmartCut Properties on page 212
User Interface: Color Properties on page 212


User Interface: Login Properties


Login properties pertain to the common logon mechanism for all servlets:

LoginPolicy class for $CUSTOM_LOGIN$: Name of the class that implements the LoginPolicy interface (the fully package-qualified name, without the .class extension); specify this only if you are using a custom logon implementation. For more information about custom logon, see the loginsamples.jar file in Install Home\docs\samples.

Custom username policy: Possible values are $CUSTOM_LOGIN$ (the custom policy), $HTTP_USER$, $REMOTE_USER$, $SECURITY_AGENT$, or $NONE$:

Set to $NONE$ unless you implement a custom logon or configure transparent logon.
If set to a value other than $NONE$, the specified user name policy is used to obtain the user name for all users logging on to Workspace servlets.
Use $CUSTOM_LOGIN$ only if you use a custom implementation for the username value.
If set to $SECURITY_AGENT$, the Custom password policy must be set to $TRUSTEDPASS$.

Custom password policy: Possible values are $CUSTOM_LOGIN$ (the custom policy), $HTTP_PASSWORD$, $TRUSTEDPASS$, $USERNAME$, or $NONE$:

Set to $NONE$ unless you implemented a custom logon or configured transparent logon.
If set to a value other than $NONE$, the specified password policy is used to obtain the password for all users logging on to Workspace servlets.
Use $CUSTOM_LOGIN$ only if you use a custom implementation for the password value.
If the Custom username policy is set to $SECURITY_AGENT$, the Custom password policy must be set to $TRUSTEDPASS$.

Allow users to change their password: Displays the Change Password link in Workspace Preferences for native users in Shared Services:

If you do not select this option, the Change Password link is not available to users.
If you configured transparent logon, do not select this option.

Set default server to: IP address or name of the server hosting GSM, and optional port number, where server and port are separated by a colon (:). If the port number is omitted, the default GSM port number of 1800 is used. For example:

apollo:2220 (uses port 2220)
apollo (uses default port 1800)


User Interface: Localization Properties


Localization properties enable you to customize time, date, and language settings for locales:

Format times using: Servlets can display time fields in a 12-hour (AM/PM) format or in a 24-hour format; for example, in a 24-hour format, the servlets display 6:30 PM as 18:30.

Date display order: Servlets can display dates in month day year order (for example, May 1 2004) or day month year order (for example, 1 May 2004).

Use locale-sensitive sort: Sorts names using the default locale. Locale-sensitive sorts are slightly slower but more user-intuitive; for example, A and a are sorted together in a locale-sensitive sort, but not in a lexicographical sort. If no locale-sensitive sort is defined, the servlets use a lexicographical sort.

Default local language code: Lowercase, two-letter code for the language most commonly used by servlet end users (for example, en for English or fr for French). For a complete list of codes, go to:

http://www.ics.uci.edu/pub/ietf/http/related/iso639.txt

Users can use the servlets in the language of their choice (if templates exist in that language) by setting their browser language option. (In Internet Explorer, select Tools > Internet Options, General tab, Languages button. In Firefox, select Tools > Options, Language button.)

The language code is used in conjunction with country codes and local variants to determine (1) the set of templates the servlet reads upon startup, and (2) in what language to display pages. The system checks for localization settings in this order (until a non-default value is found):

a. User browser
b. Localization properties for the servlet (iHTML or Data Access)
c. Default localization properties for Workspace servlets
d. Default locale specified on the Web server

Localization settings found are used in this order (until a default value is found):

a. Language code
b. Country code
c. Local variant

For example, Viewer checks the user browser first. If it has no language setting, then Viewer, which does not have its own localization settings, checks the default localization settings. This check begins with Default local language code. If that setting is specified (is not Default), Viewer checks Default local country code to refine localization. If it too is specified, Viewer checks Default local variant. If, on the other hand, Default local language code is set to Default, Viewer skips the default localization settings and checks the locale for which the servlets host is configured.

Default local country code: Uppercase, two-letter code for the country (for example, US for United States or CA for Canada). Used in conjunction with the language code and local variant parameters to obtain and display user data.


For a complete list of codes, go to:

http://ftp.ics.uci.edu/pub/websoft/wwwstat/country-codes.txt

Used only if Default local language code is specified (is not set to Default); if the country code is set to Default, the iHTML servlet uses the language code value to determine the user locale.

Default local variant: Optional localization property used for a finer granularity of localization in messages for a user audience with matching language and country codes. For example, if you specify a variant of WEST_COAST, the system uses it to deliver specialized data, such as time for the local time zone. Used only if Default local country code is not set to Default; if Default local variant is set to Default, the servlet uses the Default local language code and Default local country code values to determine the user locales.

User Interface: Subscription Properties


The Enable subscription features option enables users to subscribe to items from Viewer. If this option is not selected, users cannot receive notifications when items are modified.

User Interface: Job Output Properties


Job Output properties enable you to customize the format and display of job output:

Display HTML icon when displaying Production Reporting job output in listing pages
Display SPF icon when displaying Production Reporting job output in listing pages
Output format to display after a Production Reporting job is run

User Interface: SmartCut Properties


The Show SmartCut as link property displays SmartCuts as links. If set to off, which is the default setting, SmartCuts display as plain text.

User Interface: Color Properties


Color properties enable you to customize the colors of the servlets' user interface main frame. These properties apply only to user interface HTML templates, not to JSPs. Therefore, for consistency across the user interface, when you change colors here (in Servlet Configurator), also change them in the stylesheets, in the css directory (for example, Install Home\servlets\deployment\CSS).

General Properties

Main frame: Background color: Background color of the main frame (or pane). Does not apply to Personal Pages. If you leave this option blank, your platform's default background color is used.


Personal Page wizard: Background color: The Personal Page wizard is the sequence of pages displayed after a user chooses New Personal Page. Wizard pages have two colors, a main background color and the color of the top and bottom borders.

Personal Page wizard: Border color: See the preceding paragraph.

Title Property: Sets the underline color when titles are underlined.

Text Properties:

Regular text color: Regular text is most of the text on servlet pages. If you leave this option blank, the browser default is used.

Link text color: Color of links that the user has not (recently) chosen.

Personal Pages Properties


Personal Pages properties enable you to control user capabilities in Personal Pages:

Personal Pages: General Properties on page 213
Personal Pages: Publish Properties on page 214
Personal Pages: Generated Properties on page 214
Personal Pages: Syndicated Content Property on page 214
Personal Pages: Color Scheme Properties on page 214

Personal Pages: General Properties


General properties for Personal Pages:

Max Personal Pages per user: Set to 20 or less; default is 5.

Max initial published Personal Pages: Maximum number of Personal Pages to be copied from published Personal Pages when a user first logs on; set to at least 1 less than the value of Max Personal Pages per user; default is 2.

Users can choose default Personal Page: Default is enabled. Users change their default by putting the desired default Personal Page at the top of the list on the My Personal Pages page in the servlets. When disabled, users cannot delete or reorder the default Personal Page. To ensure that users see the Personal Page containing the Broadcast Messages every time they log on, disable this option.

Show headings of Content Windows on Personal Pages: Content windows are displayed with headings (title bars); enabled by default.

Modifying Properties with Servlet Configurator

213

Personal Pages: Publish Properties


Publish properties control the options available to end users for publishing their Personal Pages for others to use; at least one of the last three properties must be enabled:

Location: Folder path and name that contains published Personal Pages; must be located in the /Broadcast Messages folder. Default value is /Broadcast Messages/Personal Page Content, which is not browsable by default.

Show publisher's groups: Enables end users to give permissions to their own groups; enabled by default.

Allow publisher to enter group name: Enables end users to give permission to a specified group; enabled by default.

Allow publishing to all users: Enables end users to give permissions to all users; enabled by default.

Personal Pages: Generated Properties


Generated Personal Page properties involve the Personal Page that is generated by the servlet the first time a user logs on to Workspace. You can prepare versions of this page for different users.

Show My Bookmarks: Generated Personal Page includes the My Bookmarks content window; enabled by default.
Show Exceptions Dashboard: Generated Personal Page includes the Exceptions Dashboard; enabled by default.
Number of folders: Number of pre-configured folders (subfolders of the /Broadcast Messages folder) that are displayed on the generated Personal Page; default is 3.
Number of File Content Windows: Number of displayable items in pre-configured folders (subfolders of the /Broadcast Messages folder) that are displayed as content windows on the generated Personal Page; default is 1.
Default color scheme: Default color scheme for the generated Personal Page and the Edit Personal Page page.

Personal Pages: Syndicated Content Property


The Syndicated Content property specifies the location of syndicated content, the default for which is /Broadcast Messages/Syndicated Content.

Personal Pages: Color Scheme Properties


Color scheme properties let you prepare sets of colors for users to apply to their Personal Pages. Four color schemes are set to default values. You can rename each one and set its colors for parts of a Personal Page.

Name: Required.
Headings color: Background color of the heading (title bar) of each content window.


Background color: Background color of content windows in the main (wide) column.
Text color: Color of servlet-generated text on Personal Pages, such as the names of content windows.
Link color: Color of the text of servlet-generated links on a Personal Page, such as bookmarks in My Bookmarks.
Broadcast Messages color: Color of the heading of each Broadcast Messages content window.
Header background color: Background color of content windows in the optional header area at the top of a Personal Page.
Footer background color: Background color of content windows in the optional footer area at the bottom of the page.
Left column background color: Background color of content windows in the optional narrow column on the left side of a Personal Page.
Right column background color: Background color of content windows in the optional narrow column on the right side of a Personal Page.

Internal Properties
Internal properties control how the servlets or the Workspace server work:

Internal: Redirect Property on page 215
Internal: Cookies Properties on page 215
Internal: Transfer Property on page 216
Internal: Jobs Property on page 216
Internal: Upload Property on page 216
Internal: Temp Property on page 216

Note: The session time-out value is configured on the servlet engine; for example, on JRun, the HTTP session time-out value can be modified for the JVM. All Hyperion System 9 BI+ Web applications should have session time-outs set to greater than 10 minutes.

Internal: Redirect Property


The Redirect URLs using property enables the servlets to redirect URLs using HTTP or JavaScript. HTTP redirection is more efficient and therefore preferred.

Internal: Cookies Properties


These properties concern the cookies that the servlets create and use:

Keep cookies between browser sessions: Saves information between browser sessions. The user name last used to log on is saved and used for subsequent logon instances.
Encrypt cookies: Encrypts saved cookies.


Internal: Transfer Property


The transfer property, Pass data using streams instead of files, controls how data is passed between services and servlets.

If enabled, servlets retrieve files from services using streamed input and output (I/O) and a direct connection instead of temporary file system storage. Data is transferred out-of-band over a separate socket connection between Repository Service and the servlets. If disabled, data is transferred in-band and stored in a file (or in memory if the data is less than 500 KB) for the servlets (in a temporary directory) and Service Broker. Data is transferred from Repository Service to Service Broker, and then to the servlets.

In general, you should enable this option because streamed I/O is more efficient. If your system has a firewall between the servlets and the services, however, and the servlets cannot open additional sockets for file transfer, you should disable this option.

Note about firewalls: When this option is enabled, the system opens a socket for each file transfer. The operating system generates the port number, and you cannot control this number. A firewall, however, prohibits access through random port numbers. In that case you must disable this option, which causes file transfers to use the open socket already in use by Service Broker.

Internal: Jobs Property


The jobs property, Show confirmation screens for, sets the number of seconds that the confirmation screens are displayed when running a background job.

Internal: Upload Property


The upload property, Max file size allowed for publish, sets the maximum size for files that users can import into the repository. The default setting is 100 MB.

Internal: Temp Property


The temp property specifies the location of the Workspace /temp directory.

Cache Properties
Cache properties set limits on how long the servlets can cache various data. These properties affect the responsiveness of the user interface, so setting them involves a trade-off between performance and the freshness of displayed data. The Cache folders for property, for example, can be described in three equivalent ways: (1) the maximum time to cache folders, in seconds; (2) the maximum delay between a modification to a folder in the repository and the user seeing the change in Viewer; (3) the maximum time interval during which users see old folder contents.


Increasing the value of Cache folders for makes pages display more quickly to the user, but increases the length of time that the user sees stale folder contents. Decreasing the value reduces the time during which the user can see stale folder contents, but slows the display of pages.

Topics that describe Cache properties:

Cache: Objects Properties on page 217
Cache: System Property on page 218
Cache: Templates Property on page 218
Cache: Notification Property on page 218
Cache: Browser Property on page 218

Cache: Objects Properties


Objects properties concern the caching of object types:

Number of folders cached: Size of the cache for folders; default is 200
Cache folders for: Maximum time in seconds to cache folders (that is, the limit for the delay between changes to a folder's contents and Viewer's display of the changes); set to zero or greater; default is 3600. The user sees old folder contents for no more than the number of seconds specified here.
Cache browse queries for: Maximum time in seconds for changes to browse queries in the Workspace servers to be reflected in the servlets; set to zero or greater; default is 60
Cache jobs for: Maximum time in seconds for changes to jobs in the Workspace servers to be reflected in the servlets; set to zero or greater; default is 60
Cache parameter lists for: Maximum time in seconds that the servlets cache job parameter lists; default is 60
Cache published Personal Pages for: Maximum time in seconds that the servlets cache the content of the /Personal Page Content folder; must be greater than zero; default is 60. This cache is refreshed whenever a Personal Page is published using the Personal Pages servlet.
Cache Content Windows on Personal Pages for: Maximum time in seconds for changes to Broadcast Messages on a Personal Page to be reflected in the Personal Pages servlet; must be greater than zero; default is 60
Cache Content Windows being modified for: Maximum time in seconds that Viewer or the Administer module caches content while it is being modified; default is 180
Cache list items for: Maximum time in seconds that item or resource lists are cached; default is 900
Max items to cache for listing: Maximum number of items in a listing that are cached; default is 100


Cache: System Property


The system property, Cache system properties for, specifies the number of seconds the servlets should retain system property information before refreshing it from the server. The default value is 1200. Note that refreshing system properties makes the updated settings effective only for users who have not yet logged on. Users who are logged on when a refresh occurs are not affected.

Cache: Templates Property


The Templates property, Cache parsed HTML templates, controls whether the servlets cache templates. It is enabled by default. While testing customized templates, however, it is useful to disable this option so that template changes display immediately.

Cache: Notification Property


The Notification property, Refresh notifications every, specifies how frequently the View Jobs Status page in the Schedule module is refreshed; that is, the maximum amount of time between Event Service issuing a notification and the notification appearing on View Jobs Status pages. The value, in seconds, can be zero or greater.

Cache: Browser Property


The Browser property, Max Interactive Reporting job outputs listed for modification, specifies the maximum number of job output collections to list in the Versions area of an Interactive Reporting job's properties page. This is also the maximum number of output collections whose properties can be modified.

Diagnostics Properties
Configuration Log properties are used for diagnostic purposes:

Logging Service Server: Host name of the server on which Logging Service resides
Configuration: Path of the Servlet Configurator log configuration file, servletLog4jConfig.xml (the default can be used)

Applications Properties

Applications: URL Properties on page 219
Applications: iHTML Properties on page 219
6x Server URL Mapping on page 220


Applications: URL Properties


The URL properties must match the servlet locations specified in web.xml:

Browse
Administer
Personal Pages
Job Manager
iHTML
Data Access
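For reference, each of these locations corresponds in web.xml to a servlet mapping of the standard J2EE form; the servlet name and URL pattern in this sketch are hypothetical and must match your actual deployment:

<servlet-mapping>
    <servlet-name>BrowseServlet</servlet-name>
    <url-pattern>/browse/*</url-pattern>
</servlet-mapping>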

Applications: iHTML Properties


These properties pertain only to iHTML servlet:

Clear disk cache after: Maximum time interval between clearings of the disk cache, in seconds; default is 300
Terminate idle iHTML session after: Number of seconds for the iHTML servlet to wait for a response from Hyperion Interactive Reporting Service before timing out. Default is 1800. Changes the BQServiceResponseTimeout property in ws.conf. If the timeout is exceeded, Hyperion Interactive Reporting Service does not respond.

Applications: Data Access Properties


These properties pertain only to Data Access servlet:

DAS response timeout: Number of seconds that the Data Access servlet should wait for a response before timing out. Changes the DASResponseTimeout property in ws.conf. Default is 1800.

Hyperion Intelligence Backward Compatibility Support: Enables Hyperion Intelligence clients of prior versions (8.2.1 and earlier) to communicate with Workspace. Changes the BackwardCompatibility property in ws.conf. Default setting is false.

Enable backward compatibility only for testing or diagnostic purposes; it is not recommended for production environments.

Enable Zero AdministrationIdentifies the release number of the most up-to-date version of Interactive Reporting on the server and triggers the downloading of the Interactive Reporting Web Client when a user selects a link to an Interactive Reporting document


Zero Administration and Interactive Reporting


Zero Administration identifies the release number of the most up-to-date version of Interactive Reporting on the server. When a user chooses a link to an Interactive Reporting document or job from Workspace or by using a SmartCut, Zero Administration is triggered and the Interactive Reporting Web Client download starts. The user has the option to download the online help files or use the help files from the Web server.

Zero Administration files (JSP, HTML, XPI, and CAB files) are hosted on the Web server file system. Interactive Reporting release numbers are stored in the registry for Firefox and Internet Explorer browsers.

Available Interactive Reporting capabilities are determined by the user roles and adaptive states. The higher-level access functions include processing database queries and the full analytical features of Interactive Reporting Studio.

Topics that provide details on Zero Administration:

6x Server URL Mapping on page 220
Client Processing on page 221

6x Server URL Mapping


Users may encounter problems using locally saved, release 6x Interactive Reporting documents when their Web server deployment changes, or when you migrate to another release of Workspace. You can configure URL mappings to automatically redirect to other URLs when Interactive Reporting is installed. To configure URL redirection, add commands to zeroadmin.jsp that establish required redirections for each deployment of the servlets. (The location of this file depends on your servlet engine. For example, for Apache Tomcat, this file is in
Install Home\AppServer\InstalledApps\Tomcat\5.0.28\Workspace\webapps\workspace\jsp\dataaccess\zeroadmin.)

These mappings are made by adding calls to the Map6xUrlTo8() method inside the CustomizeInstallForIE(insight) function.

The Map6xUrlTo8(Old_URL, New_URL) method establishes a URL mapping. Passing an empty string as New_URL cancels the URL redirection. The Clear6xUrlMap() function removes all URL redirections established so far. The CustomizeInstallForIE(insight) function runs only when Interactive Reporting is downloaded. Mappings are saved in the Windows registry for use with locally saved documents. If the mappings are to be updated dynamically (once per session), the call to the CustomizeInstallForIE(insight) function should also be made from the Zero Administration main function.

Example:

function CustomizeInstallForIE(insight)
{
    insight.Map6xUrlTo8("http://<brio6x_host>:<brio6x_web_port>/odsisapi/ods.ods",
        "http://<hyperion9x_host>:<hyperion9x_web_port>/workspace/dataaccess/Browse")
}


Client Processing
When an Interactive Reporting document is opened in Viewer, the Web browser retrieves and parses the HTML documents from the Web server. The JSP logic for Zero Administration, which is included in these HTML files, runs in the client's Web browser. The zeroadmin.jsp file is retrieved from the Web server, and release numbers from that file are compared to release numbers on the client computer. There are three possible outcomes:

If no release number is found on the client, the user is prompted to install.
If the numbers are equal (the client release number matches the zeroadmin.jsp file), or if the client release is greater than the zeroadmin.jsp version, the Interactive Reporting document is opened using the previously installed Interactive Reporting release.
If the release number on the client is less than that in zeroadmin.jsp, the user is prompted to upgrade the client product.

Web browsers can interrogate Interactive Reporting to find out the release number. You can view this information by locating the DLL files (for example, axbqs32.dll under Internet Explorer, or npbqs32.dll under Firefox) and displaying their file properties. Most popular Web browsers allow automatic download and installation and provide a digital certificate for an extra layer of security. The JSP automatically provides the correct application (plug-ins for Windows in a browser-compatible file format).

Load Testing Interactive Reporting


Interactive Reporting uses unique request IDs for requests sent to the server, and expiring keys to encrypt the database credentials and SQL strings sent over the wire. Because of this, customers using load-testing tools such as Mercury LoadRunner, Segue SilkPerformer, and so on, have difficulty conducting their tests.

To load test Interactive Reporting:


1 Assign unique request IDs to the CURRENT_REQUEST_ID URL parameter.
One way to generate the unique IDs for LoadRunner is to use this date and time stamp, and virtual user ID:
Url=Browse?REQUEST_TYPE=getSectionMap&DOC_NAME={BQY_Files}.bqy&DOC_UUID={par_sDocUUID}&DOC_VERSION=1&MULTI_PART=0&CURRENT_REQUEST_ID={DateTimeStamp}{UserRuntimeID}", "Referer=", ENDITEM,

where:

DateTimeStamp is a date and time stamp parameter type with format %Y%m%d%H%M%S
UserRuntimeID is a virtual user ID parameter type with format %03s

2 Enable static key encryption for recording the scripts and running the scripts within Workspace.
This setting is not recommended for production environments.


3 Set the three properties described in these topics:


Data Access Servlet Property on page 222
Hyperion Interactive Reporting Data Access Service Property on page 222
Hyperion Interactive Reporting Service Property on page 222

Note: Setting only one of these properties can cause processing (running of Interactive Reporting jobs, querying from Interactive Reporting, querying from the Workspace) to fail, because the source and target encryption schemes do not match.

Data Access Servlet Property


To load test the Data Access servlet, add this property to ws.conf and restart the Web server:
WebClient.Applications.DAServlet.UseStaticKeyForEncryption=true

Hyperion Interactive Reporting Data Access Service Property


To load test the Hyperion Interactive Reporting Data Access Service:
1 Add this property to server.xml:
<property defid="0ad70321-0001-08aa-000000e738090110" name="USE_STATIC_KEY_FOR_ENCRYPTION">true</property>

Make sure the property is defined inside the <properties> subnode of the <service type="DataAccess"> node, and outside of this property list (see the sketch after this procedure):

<propertylist defid="0ad70321-0002-08aa-000000e738090110" name="DAS_EVENT_MONITOR_PROPERTY_LIST">

2 Restart the Hyperion Interactive Reporting Data Access Service for all Install Homes.
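A minimal sketch of the intended nesting follows, assuming the <properties> element is a direct child of the <service> node as the instructions above imply; surrounding elements and the list contents are omitted for brevity:

<service type="DataAccess">
    <properties>
        <!-- Place the static-key property here, as a direct child of <properties> -->
        <property defid="0ad70321-0001-08aa-000000e738090110"
                  name="USE_STATIC_KEY_FOR_ENCRYPTION">true</property>
        <!-- Do NOT place the property inside this list: -->
        <propertylist defid="0ad70321-0002-08aa-000000e738090110"
                      name="DAS_EVENT_MONITOR_PROPERTY_LIST">
            ...
        </propertylist>
    </properties>
</service>

The same nesting applies to the <service type="BrioQuery"> node described in the next topic.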

Hyperion Interactive Reporting Service Property


To load test Hyperion Interactive Reporting Service:
1 Add this property to server.xml:
<property defid="0ad70321-0001-08aa-000000e738090110" name="USE_STATIC_KEY_FOR_ENCRYPTION">true</property>

The property must be inside the <properties> subnode of the <service type="BrioQuery"> node, and outside of this property list (the nesting mirrors the Data Access sketch above):

<propertylist defid="0ad70321-0002-08aa-000000e738090110" name="BQ_EVENT_MONITOR_PROPERTY_LIST">

2 Restart Hyperion Interactive Reporting Service for all Install Homes.


Chapter 11

Troubleshooting

Administrators can generate log files throughout Workspace to help technicians identify system or environmental problems or to help developers debug reports or API programs.

In This Chapter
Logging Architecture .......................................... 224
Log File Basics ............................................... 225
Configuring Log Properties for Troubleshooting ................ 228
Analyzing Log Files ........................................... 233
Information Needed by Customer Support ........................ 236


Logging Architecture
All log messages are routed through Logging Service and stored in one location. Logging Service writes log messages to one or more files, which can be read using a viewer. Log4j (version 1.2) is used as the basis for the logging framework and configuration files. Log Management Helper is used by the C++ services (Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service) in conjunction with the log4j framework and Logging Service.

Workspace comes with preconfigured loggers and appenders. Loggers correspond to areas in code (classes) where log messages originate. Appenders correspond to output destinations of log messages. You can troubleshoot system components by setting the logging level of loggers.

Log4j
The log4j package enables logging statements to remain in shipped code without incurring heavy performance costs. As part of the Jakarta project, log4j is distributed under the Apache Software License, a popular open source license certified by the Open Source Initiative.

Logging behavior is controlled through XML configuration files at runtime. In configuration files, log statements can be turned on and off per service or class (through the loggers), and logging levels can be set for each logger, which provides the ability to diagnose problems down to the class level. Multiple destinations can be configured for each logger.

Main components of log4j:

Loggers: Control which logging statements are enabled or disabled. Loggers may be assigned the levels ALL, DEBUG, INFO, WARN, ERROR, FATAL, or INHERIT.
Appenders: Send formatted output to their destinations.

Go to www.apache.org or see The Complete Log4j Manual by Ceki Gülcü (QOS.ch, 2003).
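To make the relationship between loggers and appenders concrete, a log4j 1.2 XML configuration fragment has this general shape; the logger and appender names below are illustrative only, not the ones shipped with Workspace:

<!-- An appender: defines one output destination and its format -->
<appender name="EXAMPLE_FILE" class="org.apache.log4j.FileAppender">
    <param name="File" value="example.log"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
    </layout>
</appender>

<!-- A logger: controls one code area and routes its messages to appenders -->
<logger name="com.example.SomeService">
    <level value="DEBUG"/>
    <appender-ref ref="EXAMPLE_FILE"/>
</logger>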

Logging Service
Logging Service stores all log files in one location. If Logging Service is unavailable, log messages are sent to backup log files. When Logging Service is restored, messages in backup files are automatically sent to Logging Service, which stores them in log files and deletes the backup files. Logging Service cannot be replicated.

Log Management Helper


Log Management Helper (LMH) consolidates all logs from Hyperion Interactive Reporting Data Access Service or Hyperion Interactive Reporting Service and sends them to Logging Service.


One LMH process exists for each Hyperion Interactive Reporting Data Access Service and for each Hyperion Interactive Reporting Service per Install Home. Logging Service consolidates all log messages in separate log files for Hyperion Interactive Reporting Data Access Service and Hyperion Interactive Reporting Service per Workspace.

Server Synchronization
Because log files are time-stamped and written in chronological order, time synchronization between servers, which is the responsibility of the administrator, is important. Many products, free and commercial, are available to manage server clock synchronization.

Log File Basics


Topics that provide information about using log files for troubleshooting:

Log File Location on page 225
Log File Naming Convention on page 226
Log Message File Format on page 227

Log File Location


All log files are in \BIPlus\logs on the computer where Logging Service is running. Services, servlets, process monitors, and Web services log messages centrally using Logging Service. LSC, RSC, and Calendar Manager log messages locally.

Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service Local Log Files
Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service have additional log files that are stored in the directory where the services are run, and which collect log messages before these services connect to Logging Service. Log messages in these files are not routed to Logging Service log files. Start-up problems are collected in BIstartup.log and DASstartup.log. Other log messages generated when Logging Service is unavailable are collected in these log files:

For services started without process monitors: BI1_hostname.log and DAS_hostname.log
For services started with process monitors: 0_BI1_hostname.log and 0_DAS_hostname.log

If you change the name or location of these files, you must change the entry in server.xml that points to them. server.xml resides in \BIPlus\common\config.


Log File Naming Convention


Each service or servlet has its own log file. In a multi-Install Home installation, all services of one type log their messages to one file. Separate log files are generated for license information, configuration and/or environment information, and stdout messages.

Service and servlet log filenames use the format server_messages_OriginatorType.log, where OriginatorType is one of these components:

Servlets:
BrowseServlet
AdministrationServlet
PersonalPagesServlet
DataAccessServlet
iHTMLServlet

Services:
AnalyticBridgeService
AuthenticationService
AuthorizationService
CommonServices
DataAccessService
EventService
GSM
HarvesterService
IntelligenceService
JobService
LSM
NameService
PublisherService
RepositoryService
SessionManager
ServiceBroker
TransformerService
Usage Service

Miscellaneous:
BIProcessMonitor


DASProcessMonitor
CalendarManager
WebService
SDK
EventComponent
LocalServiceConfigurator
RemoteServiceConfigurator
Installer

Special log files:

license_messages.log: Contains license information
configuration_messages.log: Contains basic environment and configuration information
name_backupMessages_ip-address_port.log (where name is the process name): Contains logging messages generated when Logging Service is unavailable (for example, BI_PM_sla1_backupMessages_10_215_34_160_1800.log)
stdout_console.log: Contains messages sent to stdout and stderr

Log Message File Format


All log messages contain this information in the order shown:

Logger: Name of the logger that generated the logging message
Time stamp: Time stamp in coordinated universal time (UTC); ensures that messages from differing time zones can be correlated. The administrator is responsible for time synchronization between servers.
Level: Logging level
Thread: Thread name
Sequence number: Unique number to identify messages with matching time stamps
Time: Time the log message was generated
Context: Information about which component generated the log message:
  Subject: User name
  Session ID: UUID of the session
  Originator Type: Component type name
  Originator Name: Component name
  Host: Host name
Message: Log message
Throwable: Stack trace of a throwable error

The format for backup log files matches the format for regular log files.

Configuration Log
Basic configuration information is logged to configuration_messages.log in BIPlus/logs. The file format matches service and servlet log file formats. This log file contains Java system property information, JAR file version information, and database information.

Configuring Log Properties for Troubleshooting


To troubleshoot Workspace, you can configure these logging properties:

Logging levels
Loggers
Appenders
Log rotation

Loggers, logging levels, and appenders are configured in XML files. The log rotation property is a Java system property and is configured in startcommonservices.bat. Logging levels for LSC services, RSC services, and the root logger are configured using LSC and RSC. All other configuration changes are made by editing XML files.

Configuration Files
Configuration file types are main and imported: imported files are used by main files and organize the loggers and appenders into separate XML files.

Main configuration files:

serviceLog4jConfig.xml: Main configuration file for services; in \BIPlus\common\config\log4j
remoteServiceLog4jConfig.xml: Main configuration file for Hyperion Interactive Reporting Service and Hyperion Interactive Reporting Data Access Service, and for RSC services when started remotely; in \BIPlus\common\config\log4j
adminLog4jConfig.xml: Main configuration file for LSC, RSC, and Calendar Manager
servletLog4JConfig.xml: Main configuration file for the servlets; in \WEB-INF\config of the servlet engine deployment

Note: If you change the location of serviceLog4jConfig.xml or remoteServiceLog4jConfig.xml, you must update the path information stored in server.xml. If you change the location of servletLog4jConfig.xml, you must update the path information in ws.conf.

Imported configuration files:

appenders.xml: Imported by serviceLog4jConfig.xml, servletLog4JConfig.xml, and remoteServiceLog4jConfig.xml.


Appenders can be added by referencing them in <logger> and <root> elements using <appender-ref> elements.

serviceloggers.xml: Imported by serviceLog4jConfig.xml and remoteServiceLog4jConfig.xml; configure through LSC
debugLoggers.xml (services): Contains definitions for loggers that can be enabled to debug problems in the services; imported by serviceLog4jConfig.xml and remoteServiceLog4jConfig.xml; in \BIPlus\common\config\log4j
debugLoggers.xml (servlets): Contains definitions for loggers that can be enabled to debug problems in the servlets; imported by servletLog4jConfig.xml; in the \WEB-INF\config folder of your servlet engine deployment

Configuring Logging Levels


Logging levels specify the amount and type of information to write to log files. Except for the inherit level, levels in Table 17 are listed from most verbose to least verbose, and logging levels are cumulative. The default logging level, which is set on root, is WARN; therefore, messages at that level or lower (ERROR, FATAL) appear in the log. You can change this for the entire system or per service or servlet. If a given logger is not assigned a level (or its level is set to INHERIT), it inherits the level from its closest ancestor with an assigned level. The root logger resides at the top of the logger hierarchy and always has an assigned level.
Table 17: Logging Levels

INHERIT: Uses the logging level set at its closest ancestor with an assigned level; not available at the root level
ALL: All message levels
DEBUG: Minor and frequently occurring normal events; use only when troubleshooting
INFO: Normal significant events of the application
WARN: Minor problems caused by factors external to the application
ERROR: Usually, Java exceptions that do not necessarily cause the application to crash; the application may continue to service subsequent requests
FATAL: Implies the imminent crash of the application or the relevant sub-component; rarely used

Configuring Loggers
Use RSC to configure RSC service logging levels, which are stored in the database (see Advanced RSC Properties on page 176). Use LSC to configure LSC service logging levels (stored in serviceLoggers.xml) and the root logger (see Host General Properties on page 203). Configure the servlet root logger level in servletLog4JConfig.xml. Configure other servlet loggers in the servlet debug configuration file (debugLoggers.xml).


To configure the servlet root logger:


1 Open \WEB-INF\config\servletLog4JConfig.xml.
2 Scroll to the end of the file and change the root logging level.
For example, change WARN to INFO:
<root>
    <level value="WARN"/>
    <appender-ref ref="LOG_REMOTELY"/>
</root>

3 Save the file.

Configuring Debug Loggers


Debug loggers are activated by changing the logging level from INHERIT to DEBUG. Use these loggers only with help from Hyperion Solutions Customer Support.

Note: Some Java properties, such as print_config, print_query debug, and echo, are mapped to debug loggers in \BIPlus\common\config\log4j\debugLoggers.xml.
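For illustration, activating one of these loggers amounts to an edit of this form in debugLoggers.xml; the logger name shown here is hypothetical, and only the level value changes:

<logger name="com.brio.one.service.SomeDebugArea">
    <!-- Change INHERIT to DEBUG to activate this logger -->
    <level value="INHERIT"/>
</logger>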

Configuring Appenders
You can send log messages to multiple destinations by adding appenders, defined in appenders.xml, to loggers.

To add appenders to loggers:


1 Locate an appender in appenders.xml and copy its name.
2 Open the XML file of the logger to which you want to add this appender.
3 Paste the name of the appender after <appender-ref ref= under the logger to which you want to add this appender.

For example:
<appender-ref ref="LOG_LOCALLY_BY_LOGGING_SERVICE"/>

4 Save the file.

Configuring Synchronous or Asynchronous Messaging


Log messages can be sent synchronously (the default) or asynchronously. Asynchronous mode offers performance advantages, while synchronous mode provides reliability in that all messages get logged. You can change the BufferSize parameter to limit message loss.


To enable asynchronous messaging:


1 Open appenders.xml and locate the asynchronous appender.
<appender name="SEND_TO_LOGGING_SERVICE_ASYNC" class="org.apache.log4j.AsyncAppender">

2 Optional: Change BufferSize.


<param name="BufferSize" value="128" />

3 Copy the appender name, "SEND_TO_LOGGING_SERVICE_ASYNC".
4 Locate the root logger.


You can change the default appender for the service or the servlet root logger in the XML file.

5 Replace the name of the default appender, "LOG_LOCALLY_BY_LOGGING_SERVICE", with the name of the asynchronous appender, "SEND_TO_LOGGING_SERVICE_ASYNC".

6 Save the file.
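After these steps, the root logger in the modified configuration file should look similar to this sketch (the WARN level shown is the default and is unchanged by this procedure):

<root>
    <level value="WARN"/>
    <!-- Messages are now buffered and sent asynchronously -->
    <appender-ref ref="SEND_TO_LOGGING_SERVICE_ASYNC"/>
</root>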

Configuring Root Level Appenders


In the services main configuration file, serviceLog4jconfig.xml, the default appender for the root level logs locally by Logging Service. If the server does not contain Logging Service, the appender LOG_REMOTELY is uncommented. You can also uncomment the second appender, LOG_LOCALLY, to log messages remotely and locally. This code, from serviceLog4jconfig.xml, shows the root level appenders:
<!-- The following appender should be enabled if the server does not contain the logging service -->
<!-- <appender-ref ref="LOG_REMOTELY"/> -->
<!-- The following appender can be enabled in conjunction with the remote appender to also send log messages locally -->
<!-- <appender-ref ref="LOG_LOCALLY"/> -->
<!-- The following appender should only be enabled if the server contains the logging service -->
<appender-ref ref="LOG_LOCALLY_BY_LOGGING_SERVICE"/>

Configuring Log Rotation


You can roll and delete log files by time intervals or by file size. File size log rotation is controlled by CompositeRollingAppender. Time interval log rotation is controlled by CompositeRollingAppender and a Java property in the common services start file. By default, the system rolls logs every 12 hours, and deletes the oldest log file when the number of logs exceeds five. Log files are created and deleted by originator type (see Log File Naming Convention on page 226).


All appenders in XML configuration files are configured to use default values for CompositeRollingAppender. You can configure CompositeRollingAppender properties for each appender separately.
Note: If you want all log files to rotate using matching criteria, change the configuration for each CompositeRollingAppender defined in both appenders.xml files.

To change log rotation settings:


1 Open appenders.xml in \BIPlus\common\config\log4j (for services) or in \WEB-INF\config (for servlets).

2 Locate the CompositeRollingAppender definition and change the properties.


RollingStyle: There are three rolling styles:
1 - Roll the logs by size
2 - Roll the logs by time
3 - Roll the logs by size and time
RollingStyle 3 can produce confusing results because naming conventions for logs rolled by time and by size differ, and deletion counters do not count logs rolled differently together.

DatePattern: If RollingStyle=2 or 3, sets the time interval at which log messages are written to another log file. Set the DatePattern value using the string yyyy-MM-dd-mm; for example, yyyy-MM-dd-mm means every 60 minutes, yyyy-MM-dd-a means every 12 hours, and yyyy-MM-dd means every 24 hours. Default is every 12 hours.

MaxFileSize: If RollingStyle=1 or 3, when the maximum file size is reached, the system writes log messages to another file. Default is 5MB. You can use KB (kilobyte), MB (megabyte), or GB (gigabyte).
MaxSizeRollBackups: If RollingStyle=1 or 3, when the maximum number of log files per originator type (plus one for the current file) is reached, the system deletes the oldest file. Default is 5. Log files rolled by time are not affected by this setting.

The appenders.xml files for the services and servlets tell the server when to create another log file, using these parameters. The best-practice rolling style is 3, which rolls log files by time or size, whichever limit is reached first. The default 5MB log file size matches defaults used by software packages such as e-mail and Web servers.

Note: Best practices recommend that RollingStyle for all entries be set to 3, and that the default log file size be set to 1 MB. Log files that exceed 1 MB may slow down the server, with possible outages (the service crashes or needs to be restarted) occurring after the log exceeds 25 MB. Large log files can also be problematic to open in a text editor such as Notepad or vi.

Sample CompositeRollingAppender definition:

<appender name="BACKUP_MESSAGES_FILE" class="org.apache.log4j.CompositeRollingAppender">
    <param name="File" value="${directory}/log/${name}_backupMessages.log"/>
    <!-- Select rolling style (default is 2): 1=rolling by size, 2=rolling by time, 3=rolling by size and time. -->
    <param name="RollingStyle" value="1"/>
    <!-- If rolling style is set to 2 then by default log file will be rolled every 12 hours. -->
    <param name="DatePattern" value="'.'yyyy-MM-dd-a"/>
    <!-- If rolling style is set to 1 then by default log file will be rolled when it reaches size of 5MB. -->
    <param name="MaxFileSize" value="5MB"/>
    <!-- This is the log file rotation number. This only works for log files rolled by size. -->
    <param name="MaxSizeRollBackups" value="5"/>
    <layout class="com.brio.one.mgmt.logging.xml.XMLFileLayout">
    </layout>
</appender>

3 If RollingStyle is 2 or 3, set the maximum log rotation number in /BIPlus/bin/startcommonservices.bat.


set BP_ROTATIONNUM=-Dlog_rotation_num=5

Analyzing Log Files


This section describes how to view log files, which log files are always generated, and which log files to examine when troubleshooting. For information about logs for Shared Services, users, groups, or roles, see the Hyperion System 9 Shared Services Installation Guide.

Viewing Log Files


You can view log messages directly in log files or by using a log viewer. This version of Workspace contains the log4j viewer, LogFactor5, which provides a way to filter and sort log messages.

To use LogFactor5:
1 Copy the name of the LogFactor5 appender, <appender-ref ref="LF5APPENDER"/>.
2 Paste the copied code line under the logger in which to use LogFactor5.

<root>
    <level value="WARN"/>
    <appender-ref ref="LF5APPENDER"/>
    <appender-ref ref="LOG_REMOTELY"/>
</root>

LogFactor5 starts automatically when the component to which you added the appender is started. If the component is already running, LogFactor5 starts within 30 seconds. The LogFactor5 screen is displayed when logging initializes, and log messages are displayed as they are posted.


Standard Console Log File


The stdout_console.log file is always generated regardless of the operation being performed or the logging level, and represents standard output and standard errors (console output). Some errors that are caught by the application are logged here, as are start-up failures.

Logs for Importing General Content


When creating, modifying, and deleting files or folders, use these logs to analyze errors:

Server logs:
server_messages_PublisherService.log
server_messages_RepositoryService.log
server_messages_ServiceBroker.log

Client log: server_messages_BrowseServlet.log

Logs for Importing Interactive Reporting Content


When creating, modifying, and deleting Interactive Reporting documents or jobs, use these logs to analyze errors:

The logs for importing general content (see the previous topic)

Server logs:
BI1_hostname.log
DAS1_hostname.log
0_DAS_hostname.log (when using a process monitor)
0_BI_hostname.log (when using a process monitor)
hostname_BI1_LSM.log
hostname_DAS1_LSM.log
server_DataAccessService.log
server_IntelligenceService.log

Client logs:
server_messages_DataAccessServlet.log
server_messages_iHTMLServlet.log

Logs for Running Jobs


Job Service runs jobs directly or through Event Service. Use these logs to analyze errors:

Server logs:
server_messages_EventService.log
server_messages_JobService.log
server_messages_ServiceBroker.log
server_messages_DataAccessService.log
server_messages_IntelligenceService.log
DAS1_hostname.log
BI1_hostname.log

Client logs:
server_messages_BrowseServlet.log
server_messages_JobManager.log

Logs for Logon and Logoff Errors


User logon requires information from multiple areas of the system, each of which can cause errors and logon attempts to fail. Use these logs to analyze logon and logoff errors:

Server logs:
server_messages_SessionManager.log
server_messages_GSM.log
server_messages_LSM.log
server_messages_Authentication.log
server_messages_Authorization.log
server_messages_Publisher.log
server_messages_ServiceBroker.log
server_messages_RepositoryService.log

Client logs (servlet):
server_messages_BrowseServlet.log
server_messages_AdministrationServlet.log
server_messages_PersonalPagesServlet.log

Logs for Access Control


Access control is maintained by Authorization Service. Use these logs to analyze access control errors:

Server logs:
server_messages_Authorization.log
Logs for the service involved in the operation being performed

Client logs (servlet):
server_messages_BrowseServlet.log
server_messages_AdministrationServlet.log
server_messages_PersonalPagesServlet.log


Logs for Configuration


Configuration errors for RSC services appear at startup in stdout_console.log or server_messages_NameService.log; configuration_messages.log might also be helpful.

Information Needed by Customer Support


If a problem occurs and you need help from Hyperion Solutions Customer Support, send all application server logs for the instance being used. If applicable, compress the log directory. For services and servlets, compress and send all logs under \BIPlus\logs.


Part II

Administering Enterprise Metrics

In Administering Enterprise Metrics:

Chapter 12, Understanding Enterprise Metrics
Chapter 13, Enterprise Metrics Security
Chapter 14, Supporting Clips in Enterprise Metrics
Chapter 15, Enterprise Metrics Server Administration
Chapter 16, Enterprise Metrics Load Support Programs
Chapter 17, Troubleshooting Enterprise Metrics
Chapter 18, Evaluating Enterprise Metrics Performance
Chapter 19, Enterprise Metrics Preference File Settings


Chapter 12

Understanding Enterprise Metrics

This chapter provides an overview of the components of Enterprise Metrics. It introduces the major components that are installed and configured, and it explains what functions are available in the resulting environments.

In This Chapter
Metrics and Configuration Environments ........................ 240
Database Overview ............................................. 241
Enterprise Metrics Servers .................................... 242
Clients and Tools ............................................. 244
Implementation and Administration Process Overview ............ 245


Enterprise Metrics Components


Enterprise Metrics is a toolset for creating, configuring, and delivering metrics that enable organizations to assess and improve business performance. Enterprise Metrics consists of three elements, Workspace, Personalization Workspace, and Metrics Server, which are configured as part of a standard three-tier architecture for accessing a database.

Enterprise Metrics Workspace and Personalization Workspace are front-end applications that allow users to view charts and reports and navigate through analytical paths by linking and drilling through various pages of information.

The Enterprise Metrics Server is the metrics engine, which sits between the client and the Application Data. It uses the metrics configuration information to convert client display requests into generated SQL or MDX queries, processes results to derive sophisticated metrics, and returns the information to the client for display. Other components in this layer are used to maintain and extend the catalog of metrics and to administer the server.

Metrics and Configuration Environments


A standard installation includes two parallel environments: a metrics environment for production use by end-users, and a configuration environment in which the Editor may extend the catalog to develop and deliver enhanced metrics capabilities.

The Editor uses the configuration environment as a development area to build and test new metrics without affecting production users in the metrics environment. When satisfied, the Editor publishes the changes, which migrates the metadata from the Configuration Catalog to the Metrics Catalog. The new metrics then become available the next time the Enterprise Metrics Server is restarted, usually the next day.

The two environments share the Application Data (which is read-only to the servers), but have their own Enterprise Metrics Solution, Enterprise Metrics Server, and client instances. The Personalization Workspace, Studio, and Servers are the same code, but with slightly different configuration parameters to define the connection paths and to enable additional functions in the configuration environment.

Both environments also include a Server Console application, which allows an authorized user to monitor activity on the server, shut down or restart the server, dynamically configure certain tuning and logging parameters, and view server logs. The configuration environment includes one additional component, the Studio Utilities.

This portion of the book describes the administration of Enterprise Metrics. The following sections provide further background on the nature and implementation of the individual components.


Database Overview
Enterprise Metrics uses four sets of database tables, described in these sections:

Application Data on page 241
Catalogs on page 242

You can manually install the catalogs and a small number of required system tables after you complete the Enterprise Metrics installation. The Application Data and the staging database are defined by the customer as part of the application development process or by installation of a preconfigured Enterprise Metrics Solution. Depending on the data extraction strategy, the staging database tables can be distributed in a separate database instance or reside in the same instance as the Application Data and libraries.

The database tables are easily distinguishable because each set includes a different prefix in the table names. For example, all tables in the Configuration Catalog have the prefix PUB_ and all tables in the Metrics Catalog have the prefix PRD_.

Application Data
The Application Data contains the data that is viewed and analyzed by general end-users of Enterprise Metrics. The data for a customer's application may reside in relational star schema tables, Analytic Services cubes, or both, as defined by the customer during the application development process. The Application Data also must include the relational system tables (such as BAP_PERIOD and BAP_LOAD) that are created during the Enterprise Metrics installation.

Relational-Only Application Data


When all of the data is stored in relational tables, the application is considered relational-only. Table names are prefixed with BAP_ (such as BAP_PERIOD), and view names are prefixed with VAP_ (such as VAP_PERIOD).

Cube-Only Application Data


When all of the data is stored in Analytic Services cubes, the application is considered cube-only. Note that Enterprise Metrics still requires a relational database to store the required system tables. The Analytic Services cubes may reside on any machine, as long as the Enterprise Metrics Server and Configuration Server can connect to that machine.
Note: The BAP_PERIOD table must be populated with records (days) that span the entire range of time represented in the cubes. Time is an important dimension in Enterprise Metrics that requires this system table, despite the fact that data might already be aggregated in the cube(s) along various time members. (See the Hyperion System 9 BI+ Enterprise Metrics Users Guide for more information.)


Mixed Application Data


When data is stored in both relational tables and Analytic Services cubes, the application is considered mixed. For example, a customer may wish to use relational tables to store detailed data, and cubes to store higher aggregates of that same data. In addition, a customer may store data on one subject entirely in relational tables (actual sales), and data on another subject entirely in cubes (sales forecasts). When the Application Data is mixed, the information in both preceding sections (Relational-Only Application Data and Cube-Only Application Data) is pertinent. In addition, any hierarchies that are shared across both cubes and relational tables need special attention.

Catalogs
The catalogs each contain a set of relational tables. The tables share names (except for prefixes) and columns. The tables in the catalogs contain configuration information that controls many aspects of the Enterprise Metrics application. These aspects include the following:

The definition of metrics, measures, pages in the Monitor Section, pages in the Investigate Section, Report pages (in the Pinpoint Section), enrichment rules, and much more, including behaviors associated with those objects (such as the link from a chart on a page)
The appearance of the charts (such as chart colors, scaling, and number formatting)
The layout and format of mini reports

The Metrics Catalog represents the production metadata. The Metrics Catalog tables affect what end-users see and are handled by the Server. This set of tables duplicates those in the Configuration Catalog, except that the table names are prefixed with PRD_ (such as PRD_STAR_HIERARCHY).

The Configuration Catalog represents the publishing metadata. The Configuration Catalog tables are handled by the Configuration Server and can be viewed only by those with publishing privileges, such as the Editor. This set of tables allows Editors to make configuration changes to the application, then view those changes without affecting the production application (Personalization Workspace) that end-users access. The tables in the Configuration Catalog are prefixed with PUB_ (such as PUB_METRIC).

Enterprise Metrics Servers


The Enterprise Metrics Server is written in Java, uses JDBC to access the databases, and interacts with clients (and the thin client servlet) using RMI. The Servers are simply two copies of the same program. During the installation process, you configure parameters that provide each server instance with the URL for accessing the corresponding metadata catalog, and determine the port number on which the server will listen for client connections. The server that is acting as the Configuration Server also disables the system cache (to permit faster restarts while testing) and provides some additional safeguards, such as ensuring that no other testing clients are active while the Editor is logged in.


The servers are designed to run continuously, without intervention. They poll the database to detect database and network outages, and they close and re-establish database connections as necessary. More importantly, they monitor a group of flags in the BAP_LOAD table (in the Application Data area) to detect when new data is being loaded, or that metadata publishing is in progress, and automatically re-initialize when these processes complete.

The Application Data, and most of the catalog tables, are treated as read-only by the servers. Each time a server initializes, it reads in all of the metadata, performs various consistency checks, and then permits clients to connect. In the case of the Enterprise Metrics Server, before accepting client connections, the server preloads the system cache with some of the pages most likely to be accessed by users. Typical initialization time for the Enterprise Metrics Server is from two to seven minutes, depending on the amount of data to preload.

The primary function of the server is to act as the metrics engine, using the catalog definitions to convert a client request for a complex set of metrics into an optimal set of generated SQL or MDX queries, to access the required columns in the Application Data. The results are then used to calculate the desired metrics, return them to the client, and cache them for possible future use. The server also implements a variety of functions related to performance, scalability, and security, including:

Aggregate navigation and query consolidation, to minimize query times
Management of a dynamically adjusted connection pool
Enforcement of data-level (row and column) security restrictions, on a per-user basis
Client authentication, authorization, and idle session timeouts
Personalization functions, allowing users to customize their pages and links in the Personalization Workspace (this information is also stored in the catalog, not on the client machines)
Activity logging and statistics collection for use in performance tuning

The Enterprise Metrics Server is remarkably efficient and does not require a large investment in CPU, memory, or network resources. After the databases have been created, all it takes to bring up a server is to define the database connections, assign a port number, and invoke the startup script.

When the Workspace, the Personalization Workspace, or one of the Enterprise Metrics Servers is started, it reads a preference file to determine its settings. Preference settings include information such as the user ID and password that the server uses when connecting to the database, and the page that appears initially when a user starts the Workspace. Many preference settings (prefs) are available for fine-tuning the installation; see Chapter 19, Enterprise Metrics Preference File Settings.


Servlets
Servlets are used to support three functions for Enterprise Metrics.

Launcher Servlets: Enterprise Metrics has two Launcher Servlets, one for the configuration environment and one for the metrics environment. These servlets are responsible for handling the login process, authentication, and single sign-on across Enterprise Metrics clients and with other single sign-on applications.
Thin Client Servlet: The Thin Client Servlet handles dynamic HTML and image generation for running the Workspace.

The Enterprise Metrics servlets are designed to run in a dedicated JVM without other servlets.

Clients and Tools


The primary front-end component of Enterprise Metrics is the client (Workspace, Personalization Workspace or Enterprise Metrics Studio), which allows users to navigate through an analytical process in a simple and intuitive fashion.

Enterprise Metrics Workspace: The Workspace uses pure HTML.
Enterprise Metrics Personalization Workspace: The Personalization Workspace uses a Java applet and requires a one-time setup of the Java plug-in. The Personalization Workspace allows end-users to create personal pages in the Monitor Section and to customize pages in the Investigate Section.
Enterprise Metrics Studio: The Studio is used by the Editor to configure pages in the Monitor and Investigate Sections.

The client tier has several other significant components:

Server Consoles: Allow an Administrator to manually restart the associated server, adjust various preference settings dynamically for tuning or monitoring purposes, and view server logs remotely.
Studio Utilities: A collection of functions used by the Editor to edit the definitions in the Configuration Catalog, and eventually publish them to the Metrics Catalog. Although a few of these functions require some database knowledge, the majority are designed to be used by a business analyst rather than an information services staff member.
Log files: Track activity on the Workspace, Personalization Workspace, Enterprise Metrics Studio, and Servers, as well as the Studio Utilities. Data in the log files is used for troubleshooting.
Technical Utilities: A set of tools that includes a Calendar Utility to generate the time (or period) dimension table, a Performance Statistics Utility to gather statistics, and a Metadata Export Utility to extract metadata for troubleshooting purposes.

With the exception of the Enterprise Metrics Workspace and Technical Utilities, all front-end components run as Java applets. There is a one-time setup process to install the Java plug-in.


Implementation and Administration Process Overview


As the administrator, you are typically involved (at various stages) in installing, implementing, administering, and troubleshooting Enterprise Metrics. Each of these aspects is described briefly in the following sections, and in more detail throughout these sections:

Installation on page 245
Implementation on page 245
Administration on page 246
Troubleshooting on page 246

Installation
As the administrator, you may be involved in the installation and initial configuration of Enterprise Metrics. There are a number of prerequisites that must be addressed before installing the software. After you install the software, there are manual configuration steps that you may need to perform, depending upon your system configuration. In addition, there are steps that you should follow to verify the installation. See the Hyperion System 9 BI+ Enterprise Metrics Installation Guide for additional information.

Implementation
After completing the installation process (including verification and testing of installed components) there are certain tasks that must be performed to complete the initial implementation of Enterprise Metrics. These tasks include:

Setting up the Technical Utilities (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
Provisioning users and groups to access Enterprise Metrics (Chapter 13, Enterprise Metrics Security)
Meeting the requirements to support Analytic Services, if you plan to use Analytic Services as a data source (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
Generating the period table information using the Calendar Utility (required) (Hyperion System 9 BI+ Enterprise Metrics Users Guide)
Configuring Enterprise Metrics to support clips on Interactive Reporting Studio Dashboards, if desired (Chapter 14, Supporting Clips in Enterprise Metrics)
Performing load balancing for the Enterprise Metrics Workspace


Administration
Typically, there are two types of administration you will perform on the Enterprise Metrics system: daily administration and periodic maintenance. Daily administration may include:

Starting and stopping the Enterprise Metrics Servers
Starting and stopping the dedicated servlet JVM
Scheduling ETL jobs
Enrichment job processing
Standard and enrichment publishing

Periodic maintenance may include:


Adding new Enterprise Metrics users
Updating calendar information using the Calendar Utility (for example, adding an additional five years of calendar data)
Assisting the Editor with enrichment functions, such as adding new tables or columns to the Application Data area to be used for data enrichment, or modifying the ETL jobs as necessary
Modifying the server preferences, if necessary, for troubleshooting or other purposes
Monitoring performance statistics

Troubleshooting
The Enterprise Metrics log files are the primary source of information if you need to troubleshoot problems in Enterprise Metrics. These log files provide detailed information about activity in the Workspace or Studio, the Servers, and the Studio Utilities. In addition, there is a set of log files for the dedicated servlet JVMs that can be used to troubleshoot problems with the Enterprise Metrics servlets.


Chapter 13

Enterprise Metrics Security
This chapter provides information on Enterprise Metrics authentication and security. It also includes information on how to use Analytic Services security.
In This Chapter

Provisioning Users and Groups to Access Enterprise Metrics . . . . . . . . 248
Using Analytic Services Security . . . . . . . . 248
About Database Security . . . . . . . . 250
About Application-Level Security . . . . . . . . 251


Provisioning Users and Groups to Access Enterprise Metrics


Access to Enterprise Metrics clients can be controlled by provisioning users and/or groups with adequate Metrics roles in the Shared Services User Management Console. The following three roles apply to Enterprise Metrics:

● Metrics Viewer: Review Enterprise Metrics content. This role allows a Hyperion System 9 BI+ user to view Enterprise Metrics content within the BI+ Workspace. Users who are not granted this role do not see the option to launch Enterprise Metrics from the Workspace.
● Metrics Analyst: Personalize the Enterprise Metrics Workspace. This role allows a Hyperion System 9 BI+ user to launch the Enterprise Metrics Personalization Workspace. By definition, this role includes the Metrics Viewer role.
● Metrics Editor: Create and distribute Enterprise Metrics, generate the content used to create Enterprise Metrics, and assign data security to users. This role must be assigned only to Enterprise Metrics administrators and Editors, and includes the Metrics Analyst and Metrics Viewer roles.

Unless users are granted one of these roles, they cannot access any of the Enterprise Metrics clients.

Using Analytic Services Security


In cube-only configurations, you have the option to use Analytic Services data-level security restrictions. Enterprise Metrics supports Analytic Services data-level security in cube-only environments; this includes managing a separate connection pool for each user, for each cube that the user accesses. Analytic Services security is more flexible for cube security because it allows more complex logic for cube data. Enterprise Metrics itself supports complex logic in security rules for relational data but not for cube data, because for cubes it is limited to what can be specified in an MDX query that returns a single sub-cube. Analytic Services uses a different mechanism to enforce its security and therefore can support more complex rules for cube security.
Note: If you plan to use Enterprise Metrics security rule set definitions (rather than Analytic Services security) and any cubes are present in the configuration, the server rigorously validates that the security rule set definitions are simple enough to be expressed in MDX syntax.

If you plan to use Analytic Services security, review the detailed guidelines in the following sections:

● Supported Security Rule Sets in Enterprise Metrics
● Provisioning Users and Groups to Access Enterprise Metrics
● Enabling Analytic Services Data Security


Supported Security Rule Sets in Enterprise Metrics


Enterprise Metrics enables you to control end-users' access to data using advanced security features such as hierarchical security, time restrictions on unreported periods, and fact security. Enterprise Metrics does not support fact security against cubes. Any rule sets found to violate this restriction are treated as invalid, and users in those rule sets are denied login, with a dialog box stating that their security rule set definition is invalid when using Analytic Services security. See the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
Note: If you plan to use Enterprise Metrics security, you can use equals or IN in a security rule, but only if there is one value in the list. You cannot use not-equals, OR, or NOT logic with cubes if you plan to use Enterprise Metrics security; if you need this type of logic, use Analytic Services security. For example, a user could be restricted to seeing only the data for the United States. However, Enterprise Metrics cannot restrict the user to seeing only the data for the United States OR Canada, the data for any country that is NOT the United States, or the data for the United States AND for Desktop products. Keep in mind that Enterprise Metrics can implement these more complex security restrictions for relational data marts.
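To make the cube restriction concrete, the following sketch contrasts rule predicates in the WHERE-clause style used elsewhere in this chapter; the column name sales_country is invented for the example:

-- Supported against cubes (single-value equality, or IN with one value):
sales_country = 'United States'
sales_country IN ('United States')

-- Not supported against cubes (use Analytic Services security instead):
sales_country IN ('United States', 'Canada')
NOT (sales_country = 'United States')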

Granting Data Security in Enterprise Metrics


If you plan to use Analytic Services security, follow these guidelines:

● You must grant data security to each user in Analytic Services using the Enterprise Metrics Studio Utilities Security tool.
● Within Analytic Services (using an Analytic Services administration tool such as EAS or MaxL), the user must be assigned a minimum access level of read for all of the cubes in the configuration.
● When you grant data security to a user, you must place the user in the UNRESTRICTED rule set. Although you are placing users in the UNRESTRICTED rule set in Enterprise Metrics, each user's security restrictions are determined by Analytic Services.
● Do not assign a user to more than one hierarchical rule set.
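For example, a minimal MaxL statement granting the read access described above might look like the following sketch; the application, database, and user names are placeholders:

grant read on database Sample.Basic to jsmith;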

For information on creating users, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.


Enabling Analytic Services Data Security


To enable Analytic Services data security in cube-only configurations, you must add a preference setting to the server preference file.

To modify server preference settings:


1 Locate the Configuration_server.prefs file in the /Server directory.
2 Using a text editor, open the file.
3 Scroll to the end of the file and add this line:
CUBE.USE_ESSBASE_SECURITY=TRUE

4 Verify that AUTH_METHOD=CSS. (You must use external authentication in order to use Analytic Services data security.)

5 Save and close the file.
6 Repeat the above steps for the Metrics_server.prefs file.
Users must still be authorized to use Enterprise Metrics by defining rule sets in the Security tool. All users are treated as if they were in the UNRESTRICTED rule set, and data security of all types is provided solely by Analytic Services. In the Personalization Workspace, the security restriction display shows "Using Analytic Services security". See the Hyperion System 9 BI+ Enterprise Metrics Users Guide for information on the Security tool.
Note: Do not enable this feature in a mixed relational data mart and cube environment. If you request a relational query with this option configured, no data security is applied.
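For reference, after these edits the tail of each preference file would contain lines like the following (a sketch; surrounding settings omitted):

AUTH_METHOD=CSS
CUBE.USE_ESSBASE_SECURITY=TRUE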

About Database Security


Database security is associated with database user IDs (logins). Certain user IDs are required by the Metrics Server or Configuration Server to interact with the database. Other user IDs confirm the rights of a user to use the application; that is, the right of a typical end-user to view and analyze data, or the right of the Editor to access the Studio Utilities. Enterprise Metrics uses four database user IDs for connecting to the database:

● CDB_USER: Used for a pool of read-only connections to the Application Data tables and views. These connections are used for generating SQL queries against the Application Data that gather the data to display pages in the Monitor, Investigate, and Pinpoint Sections, and possibly to create views in the Application Data for building reports.
● DB_USER: Used for two connections to the Application Data database. One connection is used only during server initialization for the purpose of reading constraint values, reading values from BAP_PERIOD or BAP_PERIOD_TIME, checking column names in tables, and so on. One connection remains open continuously for polling the BAP_LOAD table.
● MDB_USER: Used for a single connection to read the data in the catalog tables during server initialization, which is closed at the end of initialization.


● UMDB_USER: Used for a single connection for access to the catalog tables. This connection is used during server initialization to write report constraints, and during normal operation for saving user changes to the pages in the Monitor and Investigate Sections, saving preferences, and accumulating some statistics. Although this connection is open continuously, there are rare cases that require updates to catalog tables by more than one user, in which case one or more additional connections may be acquired briefly, then released.

The standard installation uses only two database user IDs: one for the Application Data, and another for the Metrics and Configuration Catalog tables. CDB_USER and DB_USER are set to the user ID for the Application Data, and MDB_USER and UMDB_USER are set to the user ID for the Metrics and Configuration Catalog tables. These two database IDs are a prerequisite to the installation of Enterprise Metrics.
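As a sketch, the corresponding entries in the server preference file for such a standard installation might look like the following; the user IDs smpl and smpl_cat are placeholders:

CDB_USER=smpl
DB_USER=smpl
MDB_USER=smpl_cat
UMDB_USER=smpl_cat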

About Application-Level Security


Application-level security restricts users' access to particular rows or columns of data in the Application Data tables. Behind the scenes, this is implemented by attaching a WHERE clause to the SQL query that the Enterprise Metrics Server issues when gathering data from the database. For example, if users in a particular rule set should see only information regarding the Northern sales territory, the Editor can define a rule set at the application level with this restriction, resulting in a WHERE clause like sales_territory = 'Northern' (a SQL sketch follows below). This section discusses the following topics:

● Authorization
● Data Level Security

Detailed information on application-level security is provided in the Hyperion System 9 BI+ Enterprise Metrics Users Guide.
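As an illustration (not taken from the product), the following sketch shows how such a rule-set predicate might be appended to a generated fact query; the table and column names are invented for the example:

SELECT SUM(F1.revenue_amt), D1.SALES_TERRITORY_NAME
FROM bap_revenue_fact F1, bap_sales_territory D1
WHERE F1.TERRITORY_KEY = D1.TERRITORY_KEY
  AND D1.SALES_TERRITORY_NAME = 'Northern'   -- predicate appended by the rule set
GROUP BY D1.SALES_TERRITORY_NAME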

Authorization
In a standard configuration, the Editor defines rule sets using the Security tool in the Enterprise Metrics Studio Utilities. The access rights associated with general users differ from those associated with the Editor. The two special predefined rule sets in Enterprise Metrics are: Reported Periods Only and Unrestricted. See the Hyperion System 9 BI+ Enterprise Metrics Users Guide.

Data Level Security


Enterprise Metrics enables the Editor to control provisioned end-users' access to data, using the following advanced data security features:


● Hierarchical security: Row-level security that restricts users to specific members of a particular hierarchy level. For example, product managers can be restricted to seeing data only for the product families they manage.
● Time restriction on unreported periods: A special type of row-level security that limits users' access to unreported fiscal periods. For example, you can prevent non-insiders from seeing certain data for unreported quarters.
● Fact security: Column-level security that restricts users from seeing certain factual data. For example, you might want all cost and revenue figures to be accessible only to upper management.

See the Hyperion System 9 BI+ Enterprise Metrics Users Guide.


Chapter 14

Supporting Clips in Enterprise Metrics

Enterprise Metrics clips are Enterprise Metrics charts or mini reports that are defined and invoked via a URL. Enterprise Metrics allows users to copy the URL of charts and mini reports from the Monitor Section in Enterprise Metrics Personalization Workspace to external Web pages or Interactive Reporting Studio dashboards. This chapter provides important requirements that are necessary to support Enterprise Metrics clips with Interactive Reporting Studio.

In This Chapter

Overview . . . . . . . . 254
Authentication and Authorization Requirement . . . . . . . . 254
Preference Settings Requirement . . . . . . . . 255


Overview
Enterprise Metrics clips allow end-users to copy URLs of charts or mini reports in the Monitor Section and paste the URLs into an external Web page or Interactive Reporting Studio dashboard. Enterprise Metrics clips:

● Present live (current) data when viewed.
● Apply the security rules of the currently logged-in user when viewed.
● Launch Enterprise Metrics with the context of that object, taking end-users directly to the specified target page. The clip specifies whether to launch Enterprise Metrics within Hyperion System 9 BI+ Workspace or in the Enterprise Metrics Personalization Workspace.
● Can target a page in the Monitor, Investigate, or Pinpoint Section.
● Can target several reports, with different constraints applied.

All Enterprise Metrics clips display a Tooltip when the end-user positions the mouse pointer over the clip. The Tooltip includes important information on security restrictions and on where the clip links to in Enterprise Metrics. The following sections describe requirements that are necessary for end-users to use Enterprise Metrics clips with Interactive Reporting Studio.

Authentication and Authorization Requirement


Enterprise Metrics clips can be placed in Hyperion System 9 BI+ Workspace (in Interactive Reporting dashboards, Personal Pages, and the View Manager) and in external single sign-on environments supported by Netegrity. To support clips on Interactive Reporting dashboards, Enterprise Metrics has the following requirements:

● Authentication and Provisioning: Enterprise Metrics must use external authentication using the same Shared Services instance that is used by Hyperion System 9 BI+ Workspace. This is configured automatically (by default) when you run the Configuration Utility after installing Enterprise Metrics.
● Hyperion System 9 Roles: Users must be granted adequate roles and access control to view Interactive Reporting documents. In addition, they must be granted at least one of the following roles: Metrics Viewer, Metrics Analyst, or Metrics Editor.
● Enterprise Metrics Security: Users must be assigned adequate data security in the Enterprise Metrics Security tool.

For additional information on assigning rule sets to provisioned users and groups in the Data Security tool, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.


When an end-user clicks an Enterprise Metrics clip on an Interactive Reporting document, the link either opens a new Enterprise Metrics tab within the Workspace or opens a new browser window and starts a session of the Enterprise Metrics Personalization Workspace, depending on the options used when generating the clip URL. In addition, when the Enterprise Metrics clients are launched, the context of the clip is automatically displayed in a Monitor, Pinpoint, or Investigate Section. The user is not prompted to log in.
Note: Enterprise Metrics clips do not contain the User ID or any data security restrictions. The User ID and corresponding data security restrictions are applied for the logged-in user when the clip is viewed.

Preference Settings Requirement


When a user generates a clip in the Personalization Workspace, the Clip Generation Options dialog box appears indicating the types of clips available for generation. The Clip Generation Options dialog box contains two to four options, depending on settings in the Enterprise Metrics Client.prefs file. The following figure shows an example of the Clip Generation Options dialog box.

Figure 29   Clip Generation Options Dialog Box

Two Client preference settings determine which options you see in the Clip Generation Options dialog box. They are:

● CLIP.URL_TYPE: Controls the options displayed in the Clip Generation Options dialog box. This preference setting has one of the following values:

● GENERAL: The default option. Allows the user to generate URLs for clips in the standard format. You can use these URLs to embed clips in a single sign-on Web environment other than Hyperion System 9.
● PREFIX: Enables clip URLs to be generated in the format required for Interactive Reporting. In this mode, you must also set the CLIP.URL_PREFIX value. Essentially, the standard URL is URL-encoded and appended to the prefix.
● BOTH: Enables the user to generate clip URLs in any of the above formats. In this mode, you must also set the CLIP.URL_PREFIX value.


● CLIP.URL_PREFIX: Contains the prefix to use for the clip URL when the URLs are generated for the clip.

Note: The Metrics Server, however, can automatically derive the values for these preference settings if the AV_URL setting is set in the Server preference file. In a typical Enterprise Metrics installation, the AV_URL preference setting is set when the server setup is completed using the Configuration Utility. In the default scenario, you do not need to update the preference settings listed above. However, you may choose to suppress the last two options for your installation, which you can do by explicitly setting the values for CLIP.URL_TYPE and CLIP.URL_PREFIX.

For additional information, see Chapter 19, Enterprise Metrics Preference File Settings.

To modify the options in the Clip Generation Options dialog box:


1 Locate the two settings in the Client.prefs file in the EnterpriseMetrics/Server directory.
2 Make the following changes to support Enterprise Metrics clips in Interactive Reporting Studio and the Web Client:

a. Add the CLIP.URL_TYPE= setting to the file.
b. Indicate the value BOTH or PREFIX. Because BOTH is what the Metrics Server defaults to when AV_URL is specified, you may want to set this to PREFIX to reduce the options to only the first two.
c. Add the CLIP.URL_PREFIX= setting to the file to use a custom prefix on the URL. For example:
CLIP.URL_PREFIX=http://<System 9 BI+ Web server:port>/workspace/Hyperion/browse/extRedirect?extUrl=

You must specify the <System 9 BI+ Web server> exactly as you expect end-users to type it when launching the Hyperion System 9 BI+ Workspace.
Note: If these two preference settings are already present in the Client.prefs file, modify the existing settings.

3 Save and close the file.


The settings will take effect when the Enterprise Metrics server is next restarted.
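For reference, a completed pair of settings might look like the following sketch (the server name and port remain placeholders):

CLIP.URL_TYPE=PREFIX
CLIP.URL_PREFIX=http://<System 9 BI+ Web server:port>/workspace/Hyperion/browse/extRedirect?extUrl=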


Chapter 17

Troubleshooting Enterprise Metrics

This chapter describes how to troubleshoot problems using the Enterprise Metrics log files. It also provides information on how to use the Metadata Export Utility if a Hyperion Solutions Customer Support staff member requests to view your catalog (metadata).

In This Chapter

Using Log Files for Tuning and Troubleshooting . . . . . . . . 288
Locating and Viewing the Logs . . . . . . . . 288
Understanding Which Logs to View . . . . . . . . 290
Reading Log Files . . . . . . . . 291
Using the Deployment Logs . . . . . . . . 301
Using the Metadata Export Utility . . . . . . . . 301


Using Log Files for Tuning and Troubleshooting


Enterprise Metrics components, specifically the servers, clients, and Studio Utilities, write detailed activity logs. Any problem referred to Hyperion Solutions Customer Support should be accompanied by the log file(s); however, you will also find the logs useful for doing your own performance tuning and for troubleshooting simple problems. To use the logs effectively, you should understand:

● Where the logs are stored, and how to identify them
● The best ways to locate and view them
● Which log(s) are most likely to contain relevant information

The three types of logs are server, client, and tools. Each log is identified with a prefix, and each server maintains its own logs. Server logs are distinguished by the port number in the third segment of the filename. To manage disk space effectively, a log rotation scheme is used: each component is configured to maintain some number of log files (two or three); when the current log file exceeds a configurable size limit, the file is closed, a new one is started, and older ones are removed. For this purpose, each log filename includes a date and timestamp indicating when the log was created (first written to). Sample log names are:

● Workspace and Personalization Workspace log: mb.client.20020130.042005.log
● Server log: mb.server.2005.20020130.041841.log
● Studio Utilities log: mb.tools.20020130.114913.log
● Studio log: mb.client.20020301.035903.log
● Configuration Server log: mb.server.2006.20020130.112647.log
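Because the filenames embed the creation date and time, a quick way on Windows to list a component's logs newest-first is a date-sorted directory listing; for example, a hypothetical command for the Metrics Server on port 2005:

dir /b /o-d mb.server.2005.*.log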

Locating and Viewing the Logs


After you locate the log, you can open it in a text editor (if the log is in use, you should close and re-open it to see more current entries). However, there are more convenient methods of locating and viewing the logs.

Enterprise Metrics Server Logs


Server logs are stored in the \Server directory. The simplest way to review these is to use the Server Console (Metrics or Configuration). You can run this applet remotely and take advantage of features such as asking only for entries since last viewed. See Viewing the Server Log on page 261.


If an end-user runs Personalization Workspace, the log file is stored on their computer. The exact location is browser-dependent, so the simplest way to find and view these is to use the View Log link on the Login page. This function locates the current log (provided the client has been run at least once in the current browser session). For example, on Windows 2000, the log is stored in the temporary directory:
C:\Documents and Settings\<userid>\Local Settings\Temp.

Note: If an end-user runs the Workspace using Tomcat, the activity is written to a central log file on the machine where the Enterprise Metrics Web components are installed, in the <Hyperion_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics\server folder.

Tools and Client Logs


The Editor has both a tools.log and a client.log on their computer. You can view the tools.log and client.log using the View Log link on the Editor launch page; if running the Workspace or Personalization Workspace, use the View Log link on the Viewer launch page. The Studio Utilities log and Studio log are stored in the same directory as the Personalization Workspace log.
Note: Due to limitations in the ability to redirect system output when running multiple applets in a browser, at times the tools might write to the client.log, and vice versa, so you may need to check both. Also, if the Editor runs both the Personalization Workspace and the Studio, they will both write to the same, single client.log. So, if you are investigating a problem, run only one applet at a time to avoid confusion, and always be sure to check the date/time stamps to ensure you are looking at the correct entries. Another option is to check both logs if you cannot find what you are looking for.

Servlet Logs
The location of the Servlet log, mb.servlets.log, depends on your Web environment:

● Tomcat: <Hyperion_Home>\<EM_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics\server
● WebLogic 8.1: <BEA_HOME>/user_projects/domains/HMB/MBServletJVM
● WebLogic 7.0: <BEA_HOME>/user_projects/HMB/MBServletJVM
● WebSphere: <WAS_HOME>/logs/MBServletJVM


Thin Client Logs


The Thin Client log, mb.servlets.log, is not written until the Thin Client Servlet loads. The location of the log depends on your web environment. For example:

● For Tomcat, the log is written in the deployment directory: <Hyperion_Home>\<EM_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics\server
● For WebLogic, the log is written in the directory containing the WebLogic startup script: <BEA_HOME>/user_projects/domains/HMB
● For WebSphere, the log is written in the WebSphere Application Server home directory: <WAS_HOME>

Understanding Which Logs to View


Use these guidelines to determine which logs to view:

● If the server does not fully initialize, all of the information you need should be available in the server.log.
● For performance tuning, use the server logs to review the issued database queries and aggregate table usage.
● If an end-user complains of slow response time, first review the user's client.log to determine which page or item is causing the delay, and then match it to the entries in the server.log for further analysis (further details are explained below).
● If an end-user has a chart or report that will not display, first review the client.log to identify which specific chart or report is causing the problem, and then trace it back to the server.log (where you might find that a query was failing, or perhaps the metadata was configured improperly). In such cases, it usually helps to have the user begin a new session and recreate the problem in the most direct manner possible, to simplify your search.
● If the tools are misbehaving, view the tools.log (unless you have a problem with authentication or authorization). With the exception of the initial login authentication, the tools interact with the database directly, so it is unnecessary to view the server.log.
● If the problem is launching the clients or using the Thin Client, check the mb.servlets.log.

All Enterprise Metrics applets display error dialog boxes if they have issues starting. For example, the server cannot be located, the database is down, or the server is still initializing. However, in rare cases the applet may not start, which means that no information appears in the log file. Typically this is due to browser or Web server configuration issues. To investigate these, you must enable and open the Java Console Window (in the browser), and watch for messages while the browser is connecting and downloading the applet from the Web server.


Reading Log Files


This section explains how to read the information contained in Enterprise Metrics log files. It contains:

● Log Formats
● Specific Scenarios and Tips

Keep in mind the following tips when reading the logs:

● Always start from the bottom and work your way back up to the point of interest, and carefully check the date/time stamps to ensure that you are not reading old data. If the client is in a different time zone than the server, look for an entry at the beginning of the client.log that notes the corresponding time on the server.
● Requests from the client to the server are identified by a user ID and a request ID within a client session. This enables you to match client and server activity.
● If performance is slow, compare the time stamps on consecutive entries to see if you can determine where the time was spent.
● Read carefully, and do not be intimidated. At first the amount of information may seem overwhelming, but with a little practice you may be surprised at how much you can determine on your own.
● Most importantly, if it seems like you may need assistance from Hyperion Solutions Customer Support, save the relevant logs before they are overwritten.
● When reviewing the Enterprise Metrics log files, be aware of the following terminology:

● The term "dash" corresponds to a page in the Investigate Section
● The term "graph" corresponds to a chart
● The term "database measure" corresponds to a measure

Log Formats
All three types of logs begin with a standard set of information about the system environment and the current prefs settings, and the majority of the activity log entries follow a standard format that includes the date, time, severity code, user ID (or function name in some cases), and message text.


System Environment Information


All log files start with a few basic, yet important details about the system environment. Some of the general information that appears at the beginning of the log is shown in the following excerpt.
****** LOGGING starts at 2005-06-08 00:08:15 ******
System properties:
java.version         1.4.2_05
java.class.version   48.0
java.fullversion     null
java.vm.info         interpreted mode
java.vendor          Sun Microsystems Inc.
java.compiler        null
java.home            C:\Hyperion\common\JRE\Sun\1.4.2
java.class.path      pb_dashall.jar;hydb_sig.jar;ess_japi_sig.jar;log4j-1.2.8_sig.jar;adm_sig.jar;ap_sig.jar;xerces_sig.jar;admees_sig.jar;admodbo_sig.jar;admfiles_sig.jar;xercesImpl.jar;jdom.jar;dom.jar;sax.jar;C:\Hyperion/common/CSS/3.0.0\lib\css-3_0_0.jar;jakarta-regexp-1.3_sig.jar;comutil1_01.jar;foundation.jar;C:\Hyperion/common/CLS/1.0.0\lib\cls-2_0_0.jar;C:\Hyperion/common/CLS/1.0.0\lib\EccpressoAll.jar;C:\Hyperion/common/CLS/1.0.0\lib\flexlm.jar;C:\Hyperion/common/CLS/1.0.0\lib\flexlmutil.jar
java.io.tmpdir       C:\Documents and Settings\build_bmb\Local Settings\Temp
user.name            build_bmb
user.home            C:\Documents and Settings\build_bmb
user.dir             C:\Hyperion\EnterpriseMetrics\Server
os.arch              x86
sun.arch.data.model  32
os.name              Windows 2000
os.version           5.0
default locale       en_US

The log file excerpt shows information pertaining to the Java software installation, the user name and home directory, the Enterprise Metrics installation directory, and other environment variables, such as the operating system (os.name) and operating system version (os.version).


Prefs Settings
The next section of the log file shows information about the current prefs settings. The following lines show an excerpt from the server.log.
Current preferences are:
AUTH_AUTO_REGISTER=TRUE
AUTH_AV_PROD_ID=
AUTH_DEF_FILTER_CRIT=*
AUTH_DEF_FILTER_TYPE=GROUPS
AUTH_METHOD=CSS            default was: DATABASE
AUTH_METHOD_CLASS=         default was:
AUTH_PROVISIONED=FALSE
BALLPARK=DUMPTOFILE
BILLIONS_SYSTEM=AMERICAN
CACHE_DEBUG_USER=
CACHE_ENABLE_PURGE=FALSE
CACHE_PRELOAD_LIMIT=0
CACHE_SIZE_METRICS=50
CACHE_SIZE_REPORT=50
CACHE_SYS_METRICS_MAX=200
CACHE_SYS_METRICS_MIN=150
CDB_PASS=
CDB_USER=smpl              default was: admin
CHECK_PERIOD_TABLE=FALSE
CLEANUP_WAIT=1800
CLIENT_PREFS=Client.prefs
CONFIG_PORT_NUMBER=2006    default was: 2006
CONFIG_SERVER=TRUE         default was: FALSE

As you scroll down the server.prefs settings, you will find the following prefs settings relating to the logs. The LOG_LEVEL is typically set to 3, which is the recommended setting.
LOG_FILE_MAX=3000000
LOG_LEVEL=3
LOG_SAVE_COUNT=3

In addition, the following server.prefs settings affect logging:


SQL.PRINT_SQL=TRUE
SQL.TIME_MINIS=TRUE
SQL.TIME_QUERIES=TRUE
VERBOSE_INIT=FALSE

See Chapter 19, Enterprise Metrics Preference File Settings.


Detailed Information
After the environment and prefs settings information, the initialization messages appear. The following lines show an excerpt from the client.log.
06/08 00:08:15 I *** Server is Initializing, Code Level 90J8, Version 9.0.0.0.0.08
06/08 00:08:15 W *** LICENSING DISABLED ***
Read prefs from C:\Hyperion\EnterpriseMetrics\Server\Client.prefs
DashServer.main: creating registry
DashServer.main: binding server as //carson.hyperion.com:2006
DashServer.main: initializing LocalServer
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for polling loads table
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for data access
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for metadata access
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
Connecting to database: jdbc:hyperion:oracle://carson:1521;SID=orcl, for metadata update
...using driver: hyperion.jdbc.oracle.OracleDriver, as user: smpl
06/08 00:08:19 I TABLEMAP Reading DB_MAP_TABLE named <pub_map_table> for entries tagged as DB_MAP_NAME <pub>
06/08 00:08:19 I TABLEMAP Schema = SMPL, Catalog = null, finding columns using SELECT *
06/08 00:08:20 I TABLEMAP Mapping turned on, using <pub>
06/08 00:08:20 I 'loads' table name is <bap_load>
06/08 00:08:20 I SERVER Database connections established, and load complete - (re)starting Connection Pool
06/08 00:08:22 I ADM Initializing multidimensional application info
06/08 00:08:25 I ADM Established connections to 0 multidimensional application(s)
06/08 00:08:25 W AUTH Pref Setting AUTH_MODE is CSS. Resetting server prefs USER_NAME_POLICY to CUSTOM_LOGIN and CUSTOM_LOGIN_CLASS to launcher.LoginCSSImpl.
Reading trusted password...
06/08 00:08:25 W AUTH Using default trusted password. TP from database is null.

Each line in the detailed area of the log is typically in the following format:
date: timestamp: I : User ID

Keep in mind that not every line item has a date stamp. Table 23 shows some hints and tips that might help you interpret information displayed in the log.
Table 23   Hints and Tips for Reading Log Lines

Item/Symbol: I
Description: The I appearing after the date and time stamp represents an informational message. An example is:
    03/15 07:04:01 I QUERY

Item/Symbol: E
Description: The E represents an error.

Item/Symbol: W
Description: The W represents a warning.

Item/Symbol: <1>
Description: The first and second sets of angle brackets indicate the request and return for data. The following lines show an excerpt from the client.log. In this case, the number is 9, meaning this is the ninth time the user requested data in this session. Each request is assigned a number sequentially.
    03/13 07:43:04 I ashah <9> Requesting reportData
    03/13 07:43:04 I ashah <9> Returning reportData
When a user views a page in the Monitor Section and drills down to a chart or mini report, a request and return data message shows for each individual chart and mini report that the user requests. Any requests for reports and metrics in the Investigate and Pinpoint Sections show a single request and return message.

Item/Symbol: ***
Description: Below are excerpts from the log representing a warning, an error, and invalid constraints:
    *** Warning, no constraint_name specified for clickable <3> in mini <105>, hopefully this is a crosstab data cell...
    *** invalid constraints specified for [page: test, position(380, 515), size(427, 113), MINI, mini_id <50> common.NewsObj@497934]
    *** Error, invalidating graph_template <3> due to missing measures in metric <Bookings ASP Qago>

Specific Scenarios and Tips


The following sections offer some guidance in how to get started, depending on the nature of your problem.

Crashes, Hangs, Initialization Failures


Serious problems such as crashes, server hangs, or initialization failures are quite rare, but they can happen. For example, you might click a button somewhere in the Studio Utilities and the button might appear to stick in the down position, and it just will not come back. In such cases, the first step is to look at the bottom of the log and see whether a stack trace has been logged. A stack trace is written when the program encounters an unexpected exception, and records the program calling sequence leading to the point of the error. The following lines show a stack trace in an excerpt from a server.log.
03/04 11:38:11 E #admin stacktrace follows for: null
java.lang.NullPointerException
  at server.MetricDefTemplate.checkMdefStuff(MetricDefTemplate.java:1128)
  at server.MetricDefTemplate.createMDefParms(MetricDefTemplate.java:1101)
  at server.MetricSetCreator.createClientSet(MetricSetCreator.java:496)
  at server.MetricSetCreator.getMetricSet(MetricSetCreator.java:943)
  at server.LocalServer.getClientData(LocalServer.java:1294)
  at java.lang.reflect.Method.invoke(Native Method)
  at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:241)
  at sun.rmi.transport.Transport$1.run(Transport.java:152)
  at java.security.AccessController.doPrivileged(Native Method)
  at sun.rmi.transport.Transport.serviceCall(Transport.java:148)
  at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:465)
  at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:706)
  at java.lang.Thread.run(Thread.java:484)


Obviously, if it really is a programming error, you are not going to fix it, and the method names and line numbers will not be useful to you (though they help Hyperion Solutions tremendously). However, if you scroll up and read several lines before the exception, you might get a very good idea of what the problem is. For example, in the Studio Utilities you may see an exception error if a SQL query or update fails, and the cause might be as simple as the database being down. It might also be related to a metadata problem, in which case you may be able to determine which particular item (metric, chart template, and so on) was involved. On the client, you might be able to determine that a particular mini report caused the problem, see what constraint settings were being used, and then investigate the mini report further in the Studio Utilities, or check for corresponding SQL errors in the server.log.

Stack traces are gross indicators: they are easy to spot, almost always indicate a problem, and might give you a clue about what is happening based on the surrounding context. If the problem persists and is not obvious, forward the logs and associated details to Hyperion Solutions Customer Support for investigation.

Server Returns a Metadata Initialization Failed Message


Whenever the server is restarted, it reads in all of the metadata from the catalog and validates it for consistency. In most cases, problems are handled by simply ignoring a problematic item (see A Chart or Report is Missing in the Client on page 297), but in extreme cases it may just give up.
Note: In the case described above, the server still authenticates users for the Studio Utilities and Configuration Server Console, so that you have a chance to review the log, possibly correct a problem in the catalog, and restart the server.

The log should at least give you a good idea of what the server was trying to do at the time it failed. So if it returned the error while Reading hierarchies, for example, and you know that you did something a bit unusual with hierarchies yesterday, you might review that and try restoring the hierarchies to the way they were. Usually, this type of error does not result from a database or network connectivity problem; the server expects to run continuously for weeks or months, and is quite robust about handling these conditions.
Tip: When you first install the application, only the Configuration Catalog is populated with metadata, and you must publish (using the Publishing Control tool) to migrate the information to the Metrics Catalog before attempting to start the Server.


A Chart or Report is Missing in the Client


For the server's own protection, it is quite rigorous about validating the catalog definitions, and it does so in a logical sequence. It helps to understand this process, because to determine why something is missing, you typically need to work backwards through the log to find the root cause of the problem. For example, a page consists of multiple charts, using various metrics, computed from several measures, which are accessed using StarGroups, consisting of one or more stars, ultimately leading to particular columns in the Application Data area; if something looks questionable at any step in this process, you end up with a column on the page that says Invalid chart.

If you browse through the initialization portion of the log, you will see that the server processes these items in the logical sequence: find all the valid measures, then build the metric definitions, then the charts, and so on. What you want to do is find the place where it is finally processing the pages, and then work upward to determine the source of the problem. The following example was created by renaming a column in a fact table, such that a particular measure could not be retrieved from the Application Data. Omitting many intervening log entries, you might see something like this:
...
Reading measure_stargroup (and measure, star, and stargroups)...
...
Discarding star Revenue aggr1 for use with measure <Revenue $>
Discarding star Revenue for use with measure <Revenue $>
*** Error, no useful info found for measure <Revenue $>, discarding
...
(created 26 measures)
Reading metric...
*** Error, can't find measure <Revenue $> requested by metric <Revenue $> operand 1, skipping metric
*** Error, can't find measure <Revenue $> requested by metric <Revenue $ Contribution> operand 1, skipping metric
...
(created 69 metrics, 3 of which are invalid)
Reading blowup_lines...
Reading graph_templates...
...
*** Error, metric <Revenue $ Contribution> not found, for graph_template <83>
*** Error, invalidating graph_template <83> due to missing measures in metric <*MISSING*>
...
(created 78 templates)
Reading dash_pages...
...
04/03 18:43:06 E #admin *** METADATA Error in <#admin/Contribution/Contribution> - chart 83 contains no valid metrics. Invalidating column.
...
(created 11 #admin metrics sets, plus 0 shadows and 8 clones)
(created 10 news sets)

Starting at the bottom, you see that the page named Contribution (owned by user #admin, meaning it is visible to all users) has an invalid chart in some column, because one or more of its metrics was invalid. Working up, you find that chart <83> had two missing metrics, in turn


due to the missing measure Revenue $. Finally, you see that this measure was discarded because none of the associated stars were able to access the required column (because it had been renamed). At this point, you would want to review the definition of the measure to determine whether the problem was with the fact snippet or with the star/StarGroup definitions. Note that in this case, it is not a problem with the application; you have a problem with how the catalog has been configured, and you also have enough information to track it down.

A similar sequence is used with reports: a report is ignored if it contains an invalid mini report, which might occur because the SQL was deemed invalid, possibly because it used a constraint that was not properly associated with a hierarchy or was missing a parent constraint. The report simply does not appear in the menu on the client, but the server.log gives you a very clear indication of why it was rejected.
Tip: Periodically, review the initialization sequence in the server.log and clean up any errors (you may have some errors without realizing it, since the server can apply temporary corrections in some cases). This will make it much easier to spot real problems, should they occur.

Performance is Poor
Generally speaking, the server is sophisticated enough that it does not have to do very much work. If performance seems slow, it is usually because the Application Data area is taking a long time to execute a query. You need to determine which query is involved, and why, and then find a way to make it faster.

For example, suppose an end-user complains that a page takes too long to display. This is a fairly complicated problem, because the page contains multiple charts, each using multiple metrics/measures, and the server combines queries to the Application Data across all the different measures involved, wherever possible. Also, it must mean that the page has not already been cached, so either it is a private page belonging to that end-user, or the user has drilled further than anyone else, or perhaps the user has some unusual security restrictions.

It usually helps to isolate the problem as much as possible before looking at specific database query timings. You can look at what slicing/drilling constraints were being used, and try creating a series of pages, each containing a single chart from the original page. If you can narrow the problem down to a particular column, that makes the rest of the analysis much easier, because you will be able to quickly pinpoint the relevant SQL statement without being distracted by server optimizations.

The following example is a bit more complex. In this case, the server is preloading the cache (hence the user ID PRELOAD), but the processing is mostly the same as if an individual client had made a request for the same page, with the same constraints.
Note: The following example shows a sample log, however, you may notice minor wording differences in your log.


After you locate the query that is causing the problem, you then have to consider the possible need for more aggregate tables, restricting unreasonable drilling levels, creating more indexes, and so forth. But the first step is understanding what the problem is, and the following log sample includes comments along the way.
01/04 09:09:49 I PRELOAD <2> Metrics data requested for #admin/Opportunity-Qtr/Opportunity-Qtr

Metrics data requested... appears at the start of the process, and at the very end there is a corresponding Returning metrics data... (or sometimes Returning cached data...).

Notice the Request ID in angle brackets, <2>: this ID is used to tag many of the entries. Request IDs are unique only within a single client's login session, so you often need to consider both the USERID and the Request ID to match things up. Also note the string at the end, #admin/Opportunity-Qtr/Opportunity-Qtr. This identifies the page, as <userid>/<metrics page name>/<metrics page title>.
#admin indicates that it is a system, or Editor, page, so <metrics page title> would appear in the page selector menu with no asterisk after it.


--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: , interval: ,
**** START QUERY: Opportunity-Qtr ****

This is the start of the **** START QUERY timer, which covers the entire page process. The total accumulates, while the interval shows just the time since the previous timing entry. All times are in milliseconds (60,000 = 1 minute).
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 143, interval: 143, all SQL generated for PRELOAD:Opportunity-Qtr

The server has now generated all required SQL for the entire page and has performed the carpooling function: combining multiple select items into a single query wherever possible, or reusing an item that has already been selected for some other chart on the page.
01/04 09:09:49 I PRELOAD Final SQL for DETAILS (using star Opp Rev Line):
SELECT SUM(F1.opp_actual_amt), COUNT(distinct(F1.opp_key)), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:49 I PRELOAD Connection requested
01/04 09:09:49 I PRELOAD Connection obtained
01/04 09:09:52 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 2651, interval: 2508, finished query 0, results saved

This group of entries covers the execution of the first query for the page. It begins by showing the SQL that will be executed, and also notes which star was selected. A connection was obtained from the pool, the query executed, and it took 2.508 seconds. This includes the time for the SELECT clause, and the time to retrieve all of the result rows. Several more queries for the same page follow.


01/04 09:09:52 I PRELOAD Final SQL for DETAILS (using star Opp Rev Line):
SELECT COUNT(DISTINCT F1.cust_key), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.Day_Last_Of_Qtr_Ind = 1 AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:52 I PRELOAD Connection requested
01/04 09:09:52 I PRELOAD Connection obtained
01/04 09:09:54 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 4995, interval: 2344, finished query 1, results saved
01/04 09:09:54 I PRELOAD Final SQL for TOTALS (using star Opp Rev Line):
SELECT COUNT(DISTINCT F1.cust_key), P.QTR_OVERALL_NO
FROM bap_opportunity_revenue_line F1, brio_mart.bap_fiscal_period P
WHERE F1.Period_Key=P.Period_Key AND P.Day_Last_Of_Qtr_Ind = 1 AND P.QTR_OVERALL_NO IN (11,12,13,14,15)
GROUP BY P.QTR_OVERALL_NO
01/04 09:09:54 I PRELOAD Connection requested
01/04 09:09:54 I PRELOAD Connection obtained
01/04 09:09:55 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 6286, interval: 1291, finished query 2, results saved
01/04 09:09:55 I PRELOAD Final SQL for DETAILS (using star Booking Header):
SELECT SUM(F1.order_ext_actual_amt), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_order_header_fact F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME

The SQL generation above is the typical case, where three items appear in the SELECT. With rare exceptions, the last item is the current hierarchy or slice level (here, sliced by customer country), the second-to-last item is the time value (the quarter number), and any items before that are the actual facts or measures. In this case, there is only one (the sum of the actual amount), but there may be several; for example, the first query at 09:09:49 has two measures being selected.
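As a reading aid, these roles can be annotated directly on the SELECT items of the query above; the comments are ours and are not part of the log:

SELECT SUM(F1.order_ext_actual_amt),  -- measure (fact)
       P.QTR_OVERALL_NO,              -- time value (quarter number)
       D1.CUST_SITE_COUNTRY_NAME      -- current hierarchy/slice level (customer country)
...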
01/04 09:09:55 I PRELOAD Connection requested
01/04 09:09:55 I PRELOAD Connection obtained
01/04 09:09:59 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 9624, interval: 3338, finished query 3, results saved
01/04 09:09:59 I PRELOAD Final SQL for DETAILS (using star Billing Header):
SELECT SUM(F1.invoice_ext_actual_amt), P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
FROM bap_invoice_header_fact F1, brio_mart.bap_fiscal_period P, bap_customer D1
WHERE F1.Period_Key=P.Period_Key AND F1.CUST_KEY=D1.CUST_KEY AND P.QTR_OVERALL_NO IN (12,13,14,15)
GROUP BY P.QTR_OVERALL_NO, D1.CUST_SITE_COUNTRY_NAME
01/04 09:09:59 I PRELOAD Connection requested
01/04 09:09:59 I PRELOAD Connection obtained
01/04 09:12:29 I PRELOAD Connection returned, idle conns=5 numConns=5
--> timing metrics query <2> for user PRELOAD, page #admin/Opportunity-Qtr/Opportunity-Qtr, total: 159850, interval: 150226, finished query 4, results saved


Notice that the excerpt above shows 150 seconds for this one query, while the others totaled less than 10 seconds. Having retrieved all of the data, the server finally performs any necessary calculations to construct metrics and so forth.
01/04 09:12:29 I PRELOAD <2> Returning metrics data for Opportunity-Qtr, size 13280

That is the end of processing for this page, when the results would normally be returned to the client (but in this case are just going into the cache). The size refers to the storage, in bytes, required for the full set of results.

Using the Deployment Logs


In addition to the Enterprise Metrics server, client, and tools logs, a log file also exists for the Enterprise Metrics deployment of Tomcat. This file is stored in <Hyperion_Home>\AppServer\InstalledApps\Tomcat\5.0.28\EnterpriseMetrics\server. You can use this log to determine whether an error occurred during servlet initialization.

In addition, there are log files generated by your Web environment software that may also contain valuable information if you are experiencing a problem. The types and locations of these log files vary by vendor. Refer to your vendor documentation for specific details.

Using the Metadata Export Utility


The Metadata Export Utility is used to troubleshoot problems. It allows you to export metadata from the Metrics and Configuration Catalog tables. After the export, the data is captured in an output file that can be sent to Hyperion Solutions Customer Support or Consulting Services for analysis. You can also use this tool to move metadata between test environments. The Metadata Export Utility is designed to be run on Microsoft Windows; Hyperion currently does not support running the Metadata Export Utility on UNIX. This section contains the following topics:

● Metadata Export Utility Files
● Configuring the Metadata Export Utility
● Running the Metadata Export Utility


Metadata Export Utility Files


The Metadata Export Utility is included in the TechnicalUtilities.zip file (accessible via the Technical Utilities link on the Editor launch page). When you unzip the Technical Utilities, a \MetadataExport folder is created. The files included in the folder are:

● metadata_export.prefs: Preferences file that defines database and driver information, log file settings, input file settings, and extract parameters.
● metadata_export_table_list.txt: Export table list file that defines the tables from which the records are extracted. The list may contain one or more tables.
● metadata_export_presql.sql: Pre-SQL processing file that defines SQL that should be added to the beginning of the output file.
● metadata_export_postsql.sql: Post-SQL processing file that defines SQL to be appended to the output file.
● metadata_export.jar: The JAR file containing the Java code for the Metadata Export Utility.
● run_metadata_export.bat: Runs the Metadata Export Utility. The BAT file contains the path to the JRE and the preference files. The Java plug-in available from the Enterprise Metrics launch pages is sufficient to run the Metadata Export Utility.

This folder also contains database drivers for each supported database. The following sections provide more detail on the files associated with the Metadata Export Utility.

Metadata_export.prefs File
There are preferences that you need to define to run the Metadata Export Utility. These preferences specify directories where the source files or log files are located:

● TABLES_DIR
● LOG_DIR
● SQL_DIR
● OUT_DIR

These preferences pertain to the source database you are using:


● DRIVER
● URL
● USER & PWD
● SRC_DBTYPE

These preferences pertain to the specifics of what table(s) you are exporting from and what database you will be importing to:

● TABLE_PREFIX
● UPDATE_USER_ID


● TGT_DBTYPE
● COMMIT_INTERVAL=1000
● COMMIT_TEXT= COMMIT;

For a complete list of preference settings and descriptions, see Chapter 19, Enterprise Metrics Preference File Settings.
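As a sketch, a metadata_export.prefs file for an Oracle source might combine a subset of these settings as follows; every value shown is a hypothetical placeholder:

TABLES_DIR=C:\Hyperion\EnterpriseMetrics\MetadataExport
LOG_DIR=C:\Hyperion\EnterpriseMetrics\MetadataExport
SQL_DIR=C:\Hyperion\EnterpriseMetrics\MetadataExport
OUT_DIR=C:\Hyperion\EnterpriseMetrics\MetadataExport
DRIVER=hyperion.jdbc.oracle.OracleDriver
URL=jdbc:hyperion:oracle://carson:1521;SID=orcl
USER=smpl
PWD=password
SRC_DBTYPE=ORACLE
TABLE_PREFIX=PUB_
TGT_DBTYPE=ORACLE
COMMIT_INTERVAL=1000
COMMIT_TEXT= COMMIT;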

Export Table List File


The export table list file, metadata_export_table_list.txt, contains a complete list of PUB metadata tables. The tables are listed in the processing order required to maintain foreign keys. The file consists of three comma-separated columns of information:

● Column one, Table Name: Lists the metadata table name without the normal prefix of PUB_ or PRD_. By default, the table prefix is set to PUB_. You can change the table prefix setting to PRD_ in the metadata_export.prefs file. The Metadata Export Utility concatenates the table prefix setting (from the preference file) to the name of each table listed in the export table list file; for example, with the default prefix, the entry HIERARCHY resolves to PUB_HIERARCHY. See Chapter 19, Enterprise Metrics Preference File Settings.
● Column two, Action Flag: Contains an action flag that determines whether an insert (I) statement is to be generated or the table should be skipped (S). The metadata tables that are core to Enterprise Metrics are marked with an I. The metadata tables that are separate and distinct are marked with an S.

Note: I and S are the only valid values for the action flag. If the action column is blank, the Metadata Export Utility logs an error and processing stops. You must enter a valid value before processing can continue.

If you want to use the insert statements to populate a complete set of metadata tables, the tables must be listed in foreign key order, so that the insert statements work if the foreign keys are enabled (the parent records are inserted before the child records).

Column three: Where Clause. Defines an optional SQL WHERE clause that limits the rows written to the output file for a table. This is useful when you are exporting constraint items and do not need to output generated items.

Example Export Table List File


STAR_STATS_DETAIL,S
STAR_STATS_SUMMARY,S
HIERARCHY,I
HIERARCHY_LEVEL,I
CONSTRAINT_ITEM,I,item_id<1000


Refer to the metadata_export_table_list.txt file for a complete list of metadata tables. The following tables exist only in the PUB environment; there is no comparable table in the PRD environment:

PUB_USERID
PUB_BUSINESS_NAME_TBL_MAP
PUB_BUSINESS_NAME_COL_MAP
PUB_MAP_TABLE
PUB_VERSION

Pre-SQL and Post-SQL Files


The pre-SQL and post-SQL files allow you to add SQL lines at the beginning or end of the output file. For example, with Oracle it may be necessary to add pre-SQL to apply a date format. Similarly, with Oracle, it is necessary to add a commit statement at the end of the SQL. The format of these files is one SQL statement per line, each ending with a semicolon. Pre- and post-SQL files with relevant sample content for Oracle are packaged with Enterprise Metrics.
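As an illustration only (the packaged Oracle samples are the reference), a pre-SQL file might set the session date format and a post-SQL file might supply the closing commit:

-- metadata_export_presql.sql (illustrative content for Oracle)
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';

-- metadata_export_postsql.sql (illustrative content for Oracle)
COMMIT;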

Output File
Each time you run the Metadata Export Utility, it creates an output file named metadata_export.sql. The same file name is used each time the output file is generated; therefore, if you plan to run the Metadata Export Utility more than once, you should rename the output file after each run (before the next run).

Log File
The Metadata Export Utility creates a log file, metadata_export.log, that supports three levels of detail. The default logging level is 5. Table 24 describes the detail associated with each logging level.
Table 24: Metadata Export Utility Logging Levels

Logging Level 0: Minimal information
Logging Level 5: Intermediate
Logging Level 10: Detail


Configuring the Metadata Export Utility


To configure the Metadata Export Utility:
1 List the tables from which to extract metadata in the export table list file.
2 Save metadata_export_table_list.txt.
3 Define pre-SQL processing logic in the pre-SQL file, if necessary, based on your database platform.
4 Define post-SQL processing logic in the post-SQL file, if necessary, based on your database platform.
5 Enter the settings in the preference file. See Chapter 19, "Enterprise Metrics Preference File Settings."
6 Save the preference file.

Running the Metadata Export Utility


To run the Metadata Export Utility:
1 Open run_metadata_export.bat.
2 Verify the path to the Java plug-in.
@C:\Progra~1\Java\j2re1.4.1_03\bin\java -cp %cp% com.brio.bin.metadata_export C:\Hyperion\EnterpriseMetrics\MetadataExport\metadata_export.prefs @echo %error_level%

3 Verify the preference file name and path.


@C:\Progra~1\Java\j2re1.4.1_03\bin\java -cp %cp% com.brio.bin.metadata_export C:\Hyperion\EnterpriseMetrics\MetadataExport\metadata_export.prefs @echo %error_level%

4 Save and run the batch file.


The Metadata Export Utility:

Reads the parameters from the preference file, the pre- and post-SQL files, and the export table list file.
Reads all rows from the specified tables, applying any filters specified in the preference files.
Appends all pre-SQL processing statements to the output file.
Creates and outputs insert statements to the output file.
Appends post-SQL processing statements to the output file.

5 After you run the Metadata Export Utility, open the metadata_export.log file and review the log activity, making sure there are no errors.

If you find errors in the log file, you generally need to address the error, and then rerun the tool and verify that data from all tables identified has been correctly exported.


CHAPTER 15
Enterprise Metrics Server Administration

This chapter provides information on Enterprise Metrics server administration.

In This Chapter
Administration Overview . . . . . 258
Launching the Server Console . . . . . 258
Monitoring Server Statistics . . . . . 259
Shutting Down the Server . . . . . 260
Restarting the Server . . . . . 260
Viewing the Server Log . . . . . 261
Monitoring Server Settings . . . . . 263
Exporting Settings to Preference Files . . . . . 264
Monitoring Users . . . . . 265
Exiting the Server Console . . . . . 265


Administration Overview
You can use the Server Consoles to:

Monitor server statistics
Shut down the server
Restart the server
View the server log
Monitor server settings
Monitor user activity

The Metrics Server Console allows you to administer the Metrics Server, and the Configuration Server Console allows you to administer the Configuration Server. This section shows you how to work with the Configuration Server Console; the process for working with the Metrics Server Console is identical.
Note: Before you launch the Server Console, you must make sure that the server is running (Metrics or Configuration).

Launching the Server Console


To launch the Server Console:
1 Enter the URL to access the Editor page.
For example, http://<webserverhost>:<port>/metrics/editor.

2 Click Configuration Console.


The Server Console appears.

258

Enterprise Metrics Server Administration

Monitoring Server Statistics


The first tab on the Server Console is Statistics. While viewing Statistics, you can shut down and restart the server, view the server log, and monitor server statistics. Specific values shown include the port number for the server, the number of users currently connected to the server, and whether the server is accepting connections. If you restart the server from within the Server Console, you must click the Refresh button to update the Server State.
Table 18: Server Statistics in the Server Console

Server Host Name: The host name of the Configuration Server.
Port Number: The port number of the Configuration Server. Typically, the Configuration Server uses port 2006 and the Metrics Server uses port 2005.
Version Number: The Enterprise Metrics version number.
Server Up Time: The amount of time that the server has been running, formatted as Hours:Minutes:Seconds.
Number of Users: The number of users currently logged in to the Configuration Server. Since there is typically only one Editor, this shows only one user.
Connection Pool Size: The current number of connections in the pool.
Idle Connections: The number of connections in the pool not currently being used.
Minimum Connections: The smallest size the pool has reached since the server was last started.
Maximum Connections: The largest size the pool has reached since the server was last started.
Sys Cache-Metrics Size (KB): The current size of the system Metrics pages cache in kilobytes (KB).
Sys Cache-Metrics Pages: The current size of the system Metrics pages cache in pages.
Meta DB Name: The name of the Metrics and Configuration Catalog database.
Data DB Name: The name of the Application Data database.
Config Server: If TRUE is displayed, the console is running against the Configuration Server. If FALSE is displayed, the console is running against the Metrics Server.
Server State: Indicates whether the server is accepting connections. When you restart, the server state shows that the server is initializing. You must click Refresh to determine if the server has restarted and is accepting connections.

If you plan to shut down or restart the server, the Statistics tab shows the number of users currently connected. You can also click Users to show the users currently using the application. Shutting down and restarting the server both drop all users that are currently logged in. When you click Shutdown or Restart, a dialog box indicates how many users are currently logged in. If you shut down or restart the Metrics Server, the dialog box shows how many users are currently logged in to Personalization Workspace. Similarly, if you shut down or restart the Configuration Server, the dialog box shows how many users are logged in to Studio Utilities or Enterprise Metrics Studio.


Shutting Down the Server


The Shutdown button is used to shut down the server. After the server has been shut down, you cannot start it from within the Server Console. If the server is installed on a Windows machine, you can start it using the program shortcut entries that are created by the Enterprise Metrics installer. If the server is on a UNIX platform, use the start script installed in the \EnterpriseMetrics\Server folder.

Restarting the Server


The Restart button is used to restart the server, which causes the catalog to reload. Typically, you restart the Configuration Server if you make Configuration Catalog changes and want to see the updated metadata in Enterprise Metrics Studio, or if you modify the Configuration Server preference file and want the new preference settings to take effect. Restarting the server allows you to do these things without shutting it down. To restart the server:

Use the Metrics or Configuration Server Console.
If you want to restart the Configuration Server only, use the Studio Utilities Publishing tool. On the Config Server tab, you can click Restart or Restart Fast. Use Restart when hierarchy changes have been made in the metadata since the last restart. Otherwise, use Restart Fast.

To start the server:

Use a UNIX command. For example, on UNIX, type start_config (for the Configuration Server) or start_metrics (for the Metrics Server) and press [Enter].

Use the shortcut in the Windows Start menu.

To shut down the server:

Use the Metrics or Configuration Server Console.
Use a UNIX command. For example, on UNIX, to shut down the server, type stop_config (for the Configuration Server) or stop_metrics (for the Metrics Server) and press [Enter].

To restart the Configuration Server:


1 On Statistics, click Restart.
When you restart the server, the Server State indicates that the server is initializing.

2 Click Refresh to refresh the Statistics tab with current information.


When the server is available for connectivity, the Server State shows Accepting Connections.
Note: If you attempt to restart the server with users logged in, a dialog box is displayed, indicating how many users are currently logged in.


Viewing the Server Log


The Configuration Server Log contains information regarding the activity on the Configuration Server. Activity includes the status of the connections to the database, the reading of the Configuration Catalog, and the requests sent from Enterprise Metrics Studio and Studio Utilities to the Configuration Server. See Reading Log Files on page 291. You can view the complete server log, apply filters to view sections of the log based on date and time stamps, or change the maximum size of the log using the Configuration Server Console.

To view the server log:


1 On Statistics, click Server Log.
The Retrieve Server Log dialog box is displayed.

To modify the size limit, specify a maximum size in kilobytes.
To view a specific portion of the log based on a date and/or time stamp, click the Specify Date and Time option. By default, the Enter Earliest Date field is populated with yesterday's date and the Enter Earliest Time field is populated with the current time. You can change the settings by clicking the field and entering a new value. This is useful if you are troubleshooting a problem that can be isolated to a specific date or time.
If you have previously viewed the server log through the console window and now want to view only new activity, click Since Last Viewed. Only the new activity appears in the log. This button is enabled for the duration of your logon to the Server Console. If you log out of the console and log back in, you must view the server log through the console to enable this button.

2 After making your selection, click OK.


The Save As dialog box is displayed.

3 Choose a location to save the log as a text file.
4 Click Save.
5 Using Windows Explorer, open the file from the saved location.
6 After you open server.log.txt, scroll to the end to view the most current information.


7 Then, scroll from the bottom up to locate the date stamp of the portion of the log you want to view.
An example of server.log.txt is shown in the following figure.


Monitoring Server Settings


The Settings tab on the Server Console window displays the server preference settings that can be changed dynamically, as described in the following sections:

Changing Server Settings on page 264
Setting Passwords on page 264

You can change only the values displayed in blue.


Changing Server Settings


To change settings:
1 Click the value.
The Change Setting dialog box is displayed.

2 Enter the new value and click Set.


The dialog box closes and the new value shows on the Settings screen. If you do not see a new value, click Refresh.
Note: Any changes you make on this screen are lost when the server is restarted. To make a permanent change, you must edit the preference file directly.

Setting Passwords
The Settings tab contains a Set Passwords button. This is used to administer the trusted password for embedded mode, or the LDAP Directory Manager password if you are using LDAP authentication in stand-alone mode.

See Chapter 13, "Enterprise Metrics Security."

Exporting Settings to Preference Files


The export function allows you to write to a file all of the server preference settings that are not set to the system default values. When you click Export, the settings are saved to the directory that contains the log files. The name of the export file is identified in the last setting, "Export settings to file," for example: saved.server.prefs.
Note: The saved settings file may contain back slashes not found in the original file. These are escape characters provided by the save operation, and will not affect usability.
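For example, assuming a hypothetical path-valued setting (the setting name here is invented for illustration; the escaping pattern is what matters), a line that reads as follows in the original preference file:

LOG_DIR=C:\Hyperion\EnterpriseMetrics\Server\logs

might appear in the saved settings file as:

LOG_DIR=C\:\\Hyperion\\EnterpriseMetrics\\Server\\logs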


Monitoring Users
The Users tab allows you to monitor the number of users logged in to Enterprise Metrics.

Table 19 describes each column on the Users tab.


Table 19: User Information

User ID: The user ID of the person logged in. You can sort the list of users by clicking the User ID column.
Duration: The amount of time the user has been logged in.
Login Host: The name of the server hosting the Configuration Server (or Metrics Server).
Idle Time: The amount of time since the server last heard from the client.
Met Qrys: The number of queries for pages in the Investigate section and individual charts in the Monitor section.
Met Hits: The number of metric queries that were satisfied by the server.
Rpt Qrys: The number of queries for Pinpoint section pages and individual mini reports in the Monitor section.
Rpt Hits: The number of report queries that were satisfied by the server.
Other Reqs: The number of other requests from the client to the server, not including metric or report requests.

Exiting the Server Console


To close the Server Console, click the Windows Close button.


CHAPTER 16
Enterprise Metrics Load Support Programs

Four programs delivered with Enterprise Metrics support the load processes that maintain the data in the Application Data area: BeginLoad, FinishLoad, Publish, and Enrich. These load support programs must execute as part of the extract, transform, and load (ETL) process that moves data from your source system(s) to the Application Data area. The functionality of each of these load support programs is explained in this chapter.

In This Chapter
Load Process Overview . . . . . 268
Scheduling the Load Support Programs . . . . . 269
Preference File Settings . . . . . 269
BeginLoad Program . . . . . 271
FinishLoad Program . . . . . 271
Publish Program . . . . . 273
Processed Enrichment Overview . . . . . 273
Enrichment Versus ETL . . . . . 276
Enrich Program . . . . . 277
Failure During Enrichment Job Processing . . . . . 278
Studio Utilities in Stand-alone Mode . . . . . 279
Reviewing the Load Support Logs . . . . . 283


Load Process Overview


To fully understand the load support programs, it is first important to understand the overall load process. The data in the Application Data area is usually populated and maintained by a nightly extract, transform, and load (ETL) process, which includes the following steps:

1. Source-to-stage extraction programs execute to populate data in the staging tables from the source system(s).
2. The BeginLoad program executes, causing Enterprise Metrics to become inaccessible.
3. Stage-to-mart extraction programs execute to populate data in the Application Data tables from the staging database and build any aggregate tables that are critical to the execution of Enterprise Metrics.
4. The FinishLoad program executes, performing the following tasks:
a. Executes Standard Publishing (if standard publishing was requested), which copies all metadata tables except enrichment from the Configuration Catalog to the Metrics Catalog.
b. Executes Enrichment Publishing (if enrichment publishing was requested), which copies all enrichment metadata tables from the Configuration Catalog to the Metrics Catalog.
c. Executes the Enrich program to enrich data in the Application Data area based on the enrichment job definitions stored in the catalog. Enrichment job processing occurs with every load, irrespective of whether enrichment publishing was requested.
d. Updates period-related information and other flags in the BAP_LOAD system table, which resides in the Application Data area. If FinishLoad is successful, the BAP_LOAD flags are set in a manner that causes Enterprise Metrics to become accessible. If FinishLoad is unsuccessful, the load process halts and the BAP_LOAD flags are set in a manner that causes Enterprise Metrics to remain inaccessible.
5. Additional aggregate programs may execute to build any aggregates that are not critical to the execution of Enterprise Metrics. Enterprise Metrics users may experience slower query times before these aggregates become available.

For more detailed information on the system tables, the load process, and how to build and schedule the source-to-stage, stage-to-mart, and aggregate programs, see the Hyperion System 9 BI+ Enterprise Metrics User's Guide.


Scheduling the Load Support Programs


The load support programs are written in Java and are delivered as part of the loadSupport.jar file, located in the <HMB_HOME>\server\ folder. Two command files, BeginLoad.bat (BeginLoad.sh on UNIX) and FinishLoad.bat (FinishLoad.sh on UNIX), are provided in the \server\ folder to execute the BeginLoad and FinishLoad programs.

You can schedule execution of the BeginLoad and FinishLoad programs through the same scheduling mechanism that you use to execute your ETL load process, assuming that the ETL scheduler has adequate authority to execute these programs on the Enterprise Metrics server. On occasion, you may also need to manually execute the BeginLoad and FinishLoad programs, especially when re-running to correct an error.

In addition to the required preference file argument, which is hard-coded within the BeginLoad and FinishLoad program command files, the FinishLoad program takes an optional argument that identifies whether the Application Data load succeeded (Y) or failed (N). This optional argument is not hard-coded within the FinishLoad program command file. Instead, it can be passed as a parameter when the FinishLoad program command file is called, as shown below. If this parameter is not passed, the Application Data load is assumed to have succeeded. If this parameter is passed, the Application Data load is assumed to have failed unless the parameter is Y. Keep in mind that this parameter is not case sensitive.
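For example, an ETL scheduler that detects a failed mart load could signal the failure when it calls the command file; the call syntax here is illustrative and depends on your scheduler:

FinishLoad.bat N        (Windows)
./FinishLoad.sh N       (UNIX)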

Preference File Settings


The BeginLoad and FinishLoad programs use a preferences file to identify the connection parameters for the Application Data and metadata connections. By default, the BeginLoad and FinishLoad programs are configured to use the Metrics_server.prefs file. (If you wish to change the name of the preferences file, edit this argument within the BeginLoad and FinishLoad program command files.)

In addition to the standard preference settings stored in the Metrics_server.prefs file, there are a few optional preference settings that the load support programs use. Table 20 shows the optional preference settings used by the load support programs. The standard preference settings are documented in Chapter 19, "Enterprise Metrics Preference File Settings."


Table 20: Load Support Programs Optional Preference Settings

LOADS.LOG_TO_FILE=TRUE: Defines whether the output from the BeginLoad and FinishLoad programs should be written to a separate log file (mb.Loads.log) or to the same output stream as the calling program. If the setting is TRUE, output is written to the mb.Loads.log file. If the setting is FALSE, output is written to the system console.

LOADS.LOG_LEVEL=2: Defines whether the output from the BeginLoad and FinishLoad programs should include SQL statements, commit, and rollback points. If the setting is 1, SQL statements are not included unless an error occurs. If the setting is 2, SQL statements are included.

PUBLISH.LOG_TO_FILE=TRUE: Defines whether the output from the Publish program should be written to a separate log file (mb.Publish.log) or to the same output stream as the calling program. If the setting is TRUE, output is written to the mb.Publish.log file. If the setting is FALSE, output is written to the mb.Loads.log file.

PUBLISH.LOG_LEVEL=2: Defines whether the output from the Publish program should include SQL statements, commit, and rollback points. If the setting is 1, the Publish program does not include SQL statements unless an error occurs. If the setting is 2, the program includes SQL statements.

ENRICH.LOG_TO_FILE=TRUE: Defines whether the output from the Enrich program should be written to a separate log file (mb.Enrich.log) or to the same output stream as the calling program. If the setting is TRUE, output is written to the mb.Enrich.log file. If the setting is FALSE, output is written to the mb.Loads.log file.

ENRICH.LOG_LEVEL=2: Defines whether the output from the Enrich program should include SQL statements, commit, and rollback points. If the setting is 1, the Enrich program does not include SQL statements unless an error occurs. If the setting is 2, the Enrich program includes SQL statements.

If any of the preference settings are missing from the Metrics_server.prefs file, or if the value does not match one of the possible values shown in Table 20, then the default value applies for that setting. The preference setting values are not case sensitive. By default, three separate logs are written that include SQL statements, commit, and rollback points. Any combination of these settings is considered valid. For example, to create a separate log for Enrich but not for Publish, and to show SQL in the Enrich log but not the others, you would use the following settings:

LOADS.LOG_TO_FILE=TRUE
LOADS.LOG_LEVEL=1
PUBLISH.LOG_TO_FILE=FALSE
PUBLISH.LOG_LEVEL=1
ENRICH.LOG_TO_FILE=TRUE
ENRICH.LOG_LEVEL=2


BeginLoad Program
The BeginLoad program is required to execute just before any Application Data ETL loads are executed. When executed, this program reads the preference settings from the preferences file, sets up the metadata and Application Data connections, and defines the logging level and output stream according to the preference file settings. Then, the BeginLoad program sets the following flags in the BAP_LOAD system table, which cause the Enterprise Metrics Server to become inaccessible:

loading_flag: Set to Y (loading)
load_compl_flag: Set to N (not complete)
load_error_flag: Set to N (no errors)
last_load_event_name: Set to BeginLoad Succeeded

If the BeginLoad program runs successfully, the transaction is committed in the database. If any errors occur, then all processing is immediately halted, any pending transactions are rolled back and error messages are written to the log, and the BAP_LOAD table is not modified.
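Conceptually, the flag update that BeginLoad performs resembles the following SQL sketch; the actual statement is generated by the program (compare the similar UPDATE shown in the sample log output at the end of this chapter):

-- Sketch of the BeginLoad flag update against the Application Data area
UPDATE bap_load
   SET LOADING_FLAG = 'Y',
       LOAD_COMPL_FLAG = 'N',
       LOAD_ERROR_FLAG = 'N',
       LAST_LOAD_EVENT_NAME = 'BeginLoad Succeeded';
COMMIT;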

FinishLoad Program
The FinishLoad program is required to execute just after the Application Data ETL load has completed, including any required aggregate loads. When executed, the FinishLoad program reads the preference settings from the preferences file and sets up the metadata and Application Data connections. In addition, it defines the logging level and output stream according to the preference file settings. Then, the FinishLoad program sets the following flags in the BAP_LOAD system table, which define the load time and prevent execution of the FinishLoad program if the program is already running:

loading_flag: Set to Y (loading)
load_compl_flag: Set to N (not complete)
load_error_flag: Set to N (no errors)
last_etl_load_time: Set to the current date and time
last_load_event_name: Set to FinishLoad Started

If the FinishLoad program starts successfully, the transaction is committed in the database. If any errors occur up to this point in the processing, all processing immediately halts, any pending transactions are rolled back, error messages are written to the log, and the BAP_LOAD table is not modified.

The FinishLoad program then checks the optional argument that indicates whether the Application Data load succeeded. If the argument exists and it is not Y, the Application Data load is assumed to have failed, which causes the FinishLoad program to fail. On the other hand, if the optional argument does not exist or is set to Y, the Application Data load is assumed to have succeeded, which causes the FinishLoad program to continue.


After the FinishLoad program checks the optional argument, it reads the publish_meta_flag and the publish_enrich_flag from the BAP_LOAD system table. These flags are set to Y by Enterprise Metrics to indicate that publishing is requested. If standard publishing has been requested, the FinishLoad program calls the Publish program twice in order to publish the standard metadata:

First, the Publish program is called to save any user-defined metadata (such as personal pages in the Monitor section) from the Metrics Catalog to the Configuration Catalog.
Then, the Publish program is called to copy all metadata (except enrichment metadata) from the Configuration Catalog to the Metrics Catalog.

If both of these processes succeed, the publish_meta_flag is reset to N and the transaction is committed in the database. If enrichment publishing has been requested, the Publish program is called to copy the enrichment metadata from the Configuration Catalog to the Metrics Catalog. If this process succeeds, the publish_enrich_flag is reset to N and the transaction is committed in the database.

Next, the FinishLoad program calls the Enrich program to enrich the Application Data based on the enrichment job definitions in the catalog. The Enrich program runs every time the FinishLoad program is executed, regardless of whether enrichment publishing has been requested.

Finally, the FinishLoad program determines the "as of" date for the Application Data based on the VAP_LOAD_DONE view and updates the BAP_LOAD system table to set the period information for the "as of" date. In addition, the BAP_LOAD flags are updated and committed in the following manner to indicate the success of the load:

loading_flag: Set to N (not loading)
load_error_flag: Set to N (no errors)
last_load_event_name: Set to FinishLoad Succeeded

If the FinishLoad program fails for any reason after the FinishLoad Started event has occurred, all processing is immediately halted, any pending transactions are rolled back to the last commit point, errors are logged, and the BAP_LOAD system table flags are updated and committed in the following manner to indicate the failure of the load:

loading_flag: Set to N (not loading)
load_error_flag: Set to Y (errors)
last_load_event_name: Set to FinishLoad Failed

Note: The failure of the Publish and Enrich programs automatically causes the failure of the FinishLoad program in the manner described above.


Publish Program
The Publish program is automatically called by the FinishLoad program when standard or enrichment publishing has been requested. When the FinishLoad program calls the Publish program, the preferences file and a publishing group code are passed as arguments. The publishing group code identifies which subset of the metadata tables to publish (standard versus enrichment). To publish the standard metadata tables, the Publish program must execute twice: once to save the user-defined metadata (such as personal pages) from the Metrics Catalog to the Configuration Catalog, and once to copy all metadata (except enrichment metadata) from the Configuration Catalog to the Metrics Catalog. To publish the enrichment metadata tables, the Publish program is executed only once, to copy the enrichment metadata from the Configuration Catalog to the Metrics Catalog.

When executed, the Publish program reads the preference settings from the preferences file and sets up the metadata and Application Data connections. In addition, it defines the logging level and output stream according to the preference file settings. Then, the Publish program reads the list of metadata tables to be published (standard versus enrichment) based on the publish group code in the PUB_MAP_TABLE. Additional columns in the PUB_MAP_TABLE define the publish order and any filters that should be applied when the data is published. For each metadata table to be published, the Publish program deletes the metadata from the target table and then inserts the metadata into the target table from the source table.

When enrichment publishing is requested, the publishing process also updates a flag in the PUB_ENRICHMENT_JOB metadata table to indicate that the enrichment job definitions in the Configuration and Metrics Libraries are consistent with one another. In the Enterprise Metrics Processed Enrichment tool, this flag is used to set the Edited column on the Processed Enrichment Administration window. The Edited column tells the Editor whether the enrichment job has been edited since it was last published.

If the publish process succeeds, the transactions are committed in the database. If any errors occur, then all processing is immediately halted, all transactions initiated by the Publish program are rolled back, and error messages are written to the log. In effect, either the Publish program succeeds or all transactions are rolled back, meaning that no changes occur to the metadata in the Libraries.
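In outline, each published table is processed with a delete-then-insert pair like the following sketch; the base table name is invented for illustration, and the real statements are driven by the order and filter columns in PUB_MAP_TABLE:

-- Sketch of publishing one metadata table from the Configuration Catalog
-- (PUB_ tables) to the Metrics Catalog (PRD_ tables); "metric_def" is a
-- hypothetical base table name.
DELETE FROM prd_metric_def;
INSERT INTO prd_metric_def
  SELECT * FROM pub_metric_def;  -- a filter from PUB_MAP_TABLE may restrict the rows
COMMIT;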

Processed Enrichment Overview


Processed Enrichment allows data in the Application Data area to be augmented based on enrichment job definitions, which are defined by the Editor and stored in the catalog.

Roles
To fully understand processed enrichment, it is important to understand the role of the database administrator, business analyst, and Enterprise Metrics Editor in the enrichment process.


Database Administrator
The database administrator prepares the Application Data to receive enriched data. For example, the database administrator may add a column to a table to receive the enriched data or add indexes to the database in order to improve performance.

Business Analyst
The business analyst may provide expert knowledge or specific business information that defines the data mappings.

Enterprise Metrics Editor


The Editor defines the enrichment by using the Processed Enrichment tool to create the enrichment jobs that are processed during the regular load. The Editor also serves as a liaison between the business analyst and database administrator.

Enrichment Process
The above roles are reflected in the enrichment process described in the following steps:

1. The Editor coordinates with the database administrator to add any new columns that are required to receive the enriched data and any new indexes that are required to support enrichment processing.

When choosing columns for data enrichment, consider that most databases restrict the use of very large data types in SQL sub-queries and WHERE clauses. To enrich the data in the mart, the enrichment program generates UPDATE statements and executes them in the database. Any columns in the database that cannot be used directly in UPDATE statements, sub-queries, or WHERE clauses are not supported for enrichment.

Furthermore, the database administrator should ensure that the fields update_time and update_user_id are included in all tables that are used in enrichment. The update_time column stores a timestamp for each row of data, identifying when the row was loaded or when changes were last made to the row. The update_user_id column stores the ID of the user that loaded or modified the row of data. The ETL should be designed to populate these two columns when data is loaded into the Application Data area. These columns are also automatically populated when data is uploaded via manual enrichment. Processed enrichment does not modify the values in these columns; however, it does rely on the update_time when determining which rows have been modified since the last successful enrichment processing.

To achieve optimal performance for processed enrichment, Hyperion Solutions recommends that the DBA create indexes on the update_time column for all source and target tables. In addition, for table-to-table enrichment, indexes should be created for all the columns in the source table that are used in joins. Often, primary or alternate key columns, which are already indexed, are used in the table-to-table joins.


2. The Editor defines the enrichment job definitions using the Processed Enrichment tool. See the Processed Enrichment chapter of the Hyperion System 9 BI+ Enterprise Metrics User's Guide for more information on defining enrichment jobs and the enrichment functionality that is available through Enterprise Metrics.
3. The Editor requests Enrichment Publishing through the Publishing Control tool.
4. The FinishLoad program is executed as part of the nightly load process, which:
a. Calls the Publish program to copy the enrichment job definitions from the Configuration Catalog to the Metrics Catalog.
b. Calls the Enrich program, which reads the enrichment job definitions from the catalog and builds SQL update statements to modify the data in the Application Data area accordingly. For additional information, see Enrich Program on page 277.
5. The Editor reviews the output logs and handles any error conditions. At times, the database administrator may be asked to assist in this process. For additional information, see Reviewing the Load Support Logs on page 283.

Figure 30 shows the enrichment process.

Figure 30: Enrichment Process


Enrichment Versus ETL


It is important to understand that enrichment in Enterprise Metrics is not analogous to a general-purpose ETL tool. ETL tools allow companies to extract, transform, and load large volumes of data from a source system into another data warehouse, data mart, or multi-dimensional OLAP data source, such as Analytic Services. For use with Enterprise Metrics, ETL tools are ideal for loading the bulk of a company's data into the Application Data area on a regular basis. ETL tools provide sophisticated data transformations and features for loading large volumes of data safely and efficiently. Enrichment, by contrast, is used for augmenting data that has already been loaded into the Application Data area. Enrichment functionality should be used to tweak the data, add small amounts of data, or inject expert knowledge into the data.
Table 21: ETL Tools versus Enrichment Functionality

Purpose
ETL Tools: Automate the bulk loading of data (extract, transform, and load).
Enrichment: Tweak the data, add small amounts of data, or inject expert knowledge into the data.

User
ETL Tools: System.
Enrichment: Analyst.

Data Source
ETL Tools: Information systems or warehouses, such as ERP, CRM, SCM.
Enrichment: Desktop, such as forecasts stored in spreadsheets, dimensional attributes, or hierarchy mappings based on rules that only the analyst knows.

Volume of Data
ETL Tools: Can be very large; thousands to millions of rows per night.
Enrichment: Fairly small; typically tens or hundreds of rows of data or rules (occasionally thousands of rows, using manual enrichment).

Process/Speed of implementing a requirement
ETL Tools: Longer process; more involved and more formal. Includes formal requirements, design, coding, testing, validating data, operations procedures, and migrating to production. Requires heavy involvement from the information technology, database, or systems administrator.
Enrichment: Shorter process, driven by the business analyst. Involves obtaining the source data, possibly adding columns or indexes to the Application Data, and defining enrichment mappings in the Enrichment tool.

Functionality
ETL Tools: Extensive extract, transform, and load functionality. Transformations may be complex, with coded transformations as well as the use of predefined functions.
Enrichment: Functionality is limited to satisfying the main enrichment use cases. Transformations supported by enrichment are fairly straightforward. Furthermore, there are a few technical restrictions:
1. The number of distinct target values for a column cannot exceed 999.
2. The UPDATE statement from an enrichment job cannot exceed the maximum statement size allowed by the database.
3. For UPDATE statements from an enrichment job, there must be sufficient rollback space in the database to perform the update as a single transaction.


Enrich Program
The FinishLoad program automatically calls the Enrich program, passing the preference settings as arguments. When executed, the Enrich program:

Reads the preference settings and sets up the metadata and Application Data connections.
Defines the logging level and output stream according to the preference file settings.
Reads the active enrichment job definitions from the catalog and builds update statements to enrich the Application Data, which are executed in the Application Data area based on the sequence specified by the Editor.

For Direct and Rule-Based enrichment jobs, the default value defined by the Editor is used whenever a row does not qualify for an explicit value assignment. For Table-to-Table enrichment jobs, only rows that join are enriched and all other rows are not modified.
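For a Direct job, the generated statement is conceptually similar to the following sketch; the table, columns, and values are invented for illustration, and the real statement is built from the job definition in the catalog:

-- Hypothetical Direct enrichment: derive a region grouping, with the
-- Editor-defined default applied to rows that match no explicit value.
UPDATE sales_fact
   SET region_group = CASE region_code
                        WHEN 'NE' THEN 'East'
                        WHEN 'SE' THEN 'East'
                        WHEN 'NW' THEN 'West'
                        ELSE 'Unassigned'  -- default value defined by the Editor
                      END;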

When you modify the enrichment job definitions, you can determine whether all rows should be processed or only new rows, meaning those rows that have been added to the Application Data area since the last successful processing for this job. If new-rows processing has been selected for an enrichment job, a filter is added to the enrichment update statement to select those target rows with an update_time greater than the prd_enrichment_job.max_target_update_time. For table-to-table jobs, the filter also selects any target rows that join to a source table row with an update_time greater than the prd_enrichment_job.max_source_update_time.

The prd_enrichment_job.max_target_update_time and prd_enrichment_job.max_source_update_time columns are maintained by the Enrich program. These columns are updated at the end of each successful enrichment job processing by selecting the max(update_time) from the source and target tables. As a special consideration, the database administrator can manually manipulate these dates to have more control over which rows of data get enriched (beyond the simplistic all-rows-versus-new-rows approach). By manipulating the max_target_update_time and the max_source_update_time in the PUB_ENRICHMENT_JOB and PRD_ENRICHMENT_JOB tables, rows can be skipped or reprocessed as desired (when the enrichment job is set to process new rows). For example:

Your Editor has created a new enrichment job for a table that contains a large amount of historical data. The Editor is only interested in enriching current and future data within this table, and you would like to eliminate unnecessary processing time that would be needed to enrich all of the historical data. In this case, you can set the max_source_update_time and max_target_update_time to a current timestamp (since the job has never been processed before), and make sure the Editor has set the enrichment job to process only new rows before requesting enrichment publishing.

You have an enrichment job that is based on a product category code, and this job has been used to successfully enrich data for many months. Now, your Editor learns of a new product category code that has been in use since the beginning of last month, yet the enrichment job has not been using that code. The Editor could immediately modify the enrichment job to include the new product category code, request enrichment publishing, and reprocess all rows of data. Alternatively, the Editor could set the job to process only new rows and then ask the database administrator to manipulate the enrichment job processing to enrich only rows of data that have been loaded since the beginning of last month. In this case, the database administrator would update the PUB_ENRICHMENT_JOB and PRD_ENRICHMENT_JOB tables to set the max_source_update_time and max_target_update_time to the beginning of last month for the desired enrichment job.

Note that if you plan to manually manipulate the max_source_update_time and max_target_update_time fields, it is important to ensure that a very recent database backup exists.

If the enrichment processing succeeds for an enrichment job, the all-rows flag is reset to new, the max_source_update_time and max_target_update_time columns are updated, and all transactions for that job are committed in the database. If any errors occur, then all processing is immediately halted, all transactions for the current job are rolled back, and error messages are written to the log. In effect, each enrichment job succeeds and is committed until a failure occurs, which causes the rollback of the current job; no additional enrichment jobs are processed.
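For the second scenario above, the manual adjustment would look roughly like the following SQL sketch; the job-identifying column is hypothetical, so confirm the actual key columns of PUB_ENRICHMENT_JOB and PRD_ENRICHMENT_JOB in your catalog before running anything similar:

-- Reprocess rows loaded since the beginning of last month for one job.
-- "enrichment_job_name" is an assumed identifying column.
UPDATE pub_enrichment_job
   SET max_source_update_time = {ts '2003-07-01 00:00:00'},
       max_target_update_time = {ts '2003-07-01 00:00:00'}
 WHERE enrichment_job_name = 'Product Category Mapping';

UPDATE prd_enrichment_job
   SET max_source_update_time = {ts '2003-07-01 00:00:00'},
       max_target_update_time = {ts '2003-07-01 00:00:00'}
 WHERE enrichment_job_name = 'Product Category Mapping';
COMMIT;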

Failure During Enrichment Job Processing


We recommend that you review the enrichment log for errors on a regular basis. If there is a failure in the enrichment job processing, the entire enrichment process comes to a halt. The target table for the failed job is restored to its original state (that is, the state it was in before the enrichment job), the max_target_update_time and the max_source_update_time for that job are not refreshed, and the all-rows flag is not reset. Jobs that were sequenced before the failed job remain in their enriched state, with updated enrichment timestamps. Jobs sequenced after the failed job remain untouched; that is, enrichment is not executed for those jobs. (This is necessary because subsequent jobs may be contingent on previous jobs.) In the case of failure, an error message is written to the enrichment log file. Suppose, for example, there are five enrichment jobs, and a failure occurs part way through the third job. Table 22 shows the state of the five enrichment jobs after the failure.
Table 22: Enrichment Jobs After a Failure Example

Sequence  State of Data Relative to this Load  max_source_update_time  max_target_update_time
1         Enriched                                                     1/23/03 10:06 p.m.
2         Enriched                                                     1/23/03 9:49 p.m.
3         Not Enriched                                                 1/22/03 8:51 p.m.
4         Not Enriched                         1/22/03 11:45 p.m.      1/22/03 8:23 p.m.
5         Not Enriched                                                 1/22/03 9:02 p.m.

Note: Jobs are either enriched or not enriched and are never partially enriched.

Once a failure occurs, the Editor is responsible for determining the best course of action. One approach might be to ignore the failure and mark the load process as done despite incomplete enrichment. This would be appropriate in cases where the Editor does not expect heavy use of the enriched data by the user community before the next load. The administrator and/or Editor would then fix the enrichment problem by the next load, at which point the processing would begin again with the job sequenced first.

Another approach is to fix the enrichment problem and then re-run the enrichment jobs. If the failure occurred because of a problem with the underlying data, the administrator could fix the data manually and then run the FinishLoad program again. If the failure occurred because of the way the enrichment job was defined, the following steps could be used to fix the problem and complete the load process:

1. The Editor launches the Studio Utilities in stand-alone mode (since the server is down).
2. In the Studio Utilities, the Editor edits the problematic enrichment job and requests enrichment publishing.
3. The Editor re-runs the FinishLoad program.
4. The FinishLoad program publishes the new job definition(s) to the catalog.
5. The FinishLoad program processes the enrichment job. The enrichment timestamps for each job, along with the rows-to-enrich flag, ensure that the proper rows of data are processed for each job.

Studio Utilities in Stand-alone Mode


Normally, the Editor launches Studio Utilities as an applet via the Editor launch page. The Editor then creates and modifies metadata (metrics, enrichment jobs, etc.) in the Configuration Catalog, and afterwards requests publishing to copy the changes to the Metrics Catalog during the next load. There are several circumstances where you may want to use the Studio Utilities in stand-alone mode instead. This means running the Studio Utilities as an application that you launch separately (not via the Editor launch page), without requiring the Configuration Server to be running. There are two main situations in which you may want to do this:

The FinishLoad program failed due to faulty metadata; as a result, the Configuration Server will not start, but you need to modify that metadata through the Studio Utilities.
You need to view the metadata in the Metrics Catalog using the Studio Utilities.

Each situation is described in more detail in the following sections, which include instructions on how to run the Studio Utilities in standalone mode:

Responding to a Finish Load Failure
Viewing Catalog Metadata
Running the Studio Utilities in Stand-alone Mode


Responding to a Finish Load Failure


If the FinishLoad program fails, it may be due to an error in publishing or enrichment processing. If you can fix the problem by modifying the metadata (for example, an enrichment job), then you can do so through the Studio Utilities. However, since the Configuration Server is down due to the load failure, you must launch the Studio Utilities in stand-alone mode. In general, if the FinishLoad program fails due to a publishing or enrichment error, and you determine that the problem occurred due to faulty metadata, the safest way to fix the problem is to follow these steps:

1. Launch the Studio Utilities in stand-alone mode (since the Configuration Server is down), using the steps described in "Running the Studio Utilities in Stand-alone Mode."
2. Fix the problem. For example, edit the enrichment job that failed, using the Processed Enrichment tool.
3. In the Publishing Control tool, request publishing (standard publishing or enrichment publishing, depending upon where you have made modifications).
4. Exit the Studio Utilities.
5. Run the FinishLoad program.

Viewing Catalog Metadata


When you launch the Studio Utilities from the Editor launch page, you automatically access the Configuration Catalog. In some cases, however, you may want to view the metadata in the Metrics Catalog instead. For example:

You want to examine the enrichment jobs that are in the Metrics Catalog (and therefore running nightly), because you cannot remember how they were defined. That is, you have not yet published some recent changes to the enrichment job definitions, and you need to view the jobs that are currently being executed with each load.
You want to look up some standard metadata definition (metric, chart, etc.) that was unintentionally deleted or modified in the Configuration Catalog.

To view Metrics Catalog metadata, you may launch the Studio Utilities in stand-alone mode while pointing to the Metrics Catalog. Note that this is recommended solely for viewing metadata, and should never be used as a normal means of creating or modifying metadata. The steps for launching the Studio Utilities in stand-alone mode are covered in the next section, "Running the Studio Utilities in Stand-alone Mode."


Running the Studio Utilities in Stand-alone Mode


To run the Studio Utilities in stand-alone mode, you basically copy a server startup script and change it into a Studio Utilities startup script. The following specific steps are necessary:

1. Determine which metadata catalog (Configuration Catalog vs. Metrics Catalog) you want to access with the Studio Utilities, based on your needs as described in the scenarios above.
Note: Recall that to fix a metadata problem that caused a FinishLoad error, you will want to point to the Configuration Catalog. To view Metrics Catalog metadata, you will want to point to the Metrics Catalog.

2. Go to the server that hosts the Enterprise Metrics Servers.
3. Copy the server startup script. The specific server startup script you should copy depends upon which metadata catalog you want to access with the Studio Utilities (determined in Step 1), and whether you are using Windows or UNIX. The following table identifies the startup script you should copy.
To access the Configuration Catalog:
Windows: Start > Programs > Hyperion Solutions > Enterprise Metrics > Start Configuration Server
UNIX: \Hyperion_Home\EnterpriseMetrics\Server\start_config.sh

To access the Metrics Catalog:
Windows: Start > Programs > Hyperion Solutions > Enterprise Metrics > Start Metrics Server
UNIX: \Hyperion_Home\EnterpriseMetrics\Server\start_metrics.sh
Note: Each startup script's java command line ends with the name of a prefs file: Metrics_server.prefs in the Metrics Server startup script, and Configuration_server.prefs in the Configuration Server startup script. Copying the correct server startup script ensures that you reference the correct prefs file when you launch the Studio Utilities in stand-alone mode. (The Studio Utilities look at settings within that prefs file to determine whether to point to the Metrics Catalog or the Configuration Catalog. In the prefs file, if either CONFIG_SERVER=FALSE, or DB_MAP_NAME includes the string prd as one of the values, then the Studio Utilities point to the Metrics Catalog.)

4. Name the copied script. The following names are suggested, depending on which script you copied in Step 3:

To access the Configuration Catalog:
Windows: Start Config Tools Stand-alone
UNIX: start_config_tools_standalone.sh

To access the Metrics Catalog:
Windows: Start Config Utilities Analytic View
UNIX: start_config_tools_analytic_view.sh


5. Edit the java command line in the copied script to change the second-to-last item from DashServer to admin.DashAdmin, matching case exactly. (This change is what causes the Studio Utilities to launch instead of the server.)
6. Save the copied script.
7. If you need to run the tools in stand-alone mode from a different machine (that is, a machine other than the one that hosts the Enterprise Metrics Servers), copy the database driver JAR, the dashall.jar, the server prefs file, and the startup script created in Steps 4-6 to the target machine. You may also need to adjust the classpath setting in the startup script to indicate the location of these files.
8. Execute the script as follows:
a. If you are using Windows, select the entry in the Start > Programs menu. For example, if you used the name suggested in Step 4 above, and you are pointing the Studio Utilities to the Metrics Catalog, select Start > Programs > Hyperion System 9 BI+ Enterprise Metrics > Start Config Utilities Analytic View.
b. If you are using UNIX, run the file. Move to the \Hyperion_Home\EnterpriseMetrics\Server directory and, at the prompt, type a period, forward slash, and the name of the script you created in Steps 4-6. For example, if you used the name suggested in Step 4 above, and you are pointing the Studio Utilities to the Metrics Catalog, type ./start_config_tools_analytic_view.sh.
9. Use the Studio Utilities as needed. Upon executing the script, the Studio Utilities should launch and operate as documented in the Hyperion System 9 BI+ Enterprise Metrics User's Guide.

If you are running the tools against the Metrics Catalog, you will see several additional warnings as you enter the tools. First, you will see a warning about connecting to production metadata, with an option to exit. If you click Yes to continue, you will see a second dialog box indicating that you should not save any changes; that is, you will simply be viewing the metadata. It is strongly encouraged that you click Yes, thereby disabling all Save functions. If you want to make changes to the catalog, you should do so using the standard method: use the Configuration environment to configure your metadata and then request publishing. Assuming you click Yes, the main Studio Utilities window should appear and you should be able to proceed with viewing the metadata. All functionality of the Studio Utilities is available except saving changes.
Note: If you do try to save a change, you will see two error dialogs: one stating that updates to production metadata are prohibited, and another indicating a database error (simply noting that the database update did not occur).

If you override the recommendation on the Prohibit Updates dialog box shown above by clicking No, the Studio Utilities allow you to save changes directly to the catalog. This is strongly discouraged; therefore, a third dialog box appears that displays this message:

If you do make any changes, you must go back and make those same changes to the Configuration Catalog as soon as possible. Enterprise Metrics does not provide a reverse publishing mechanism, and you may encounter serious problems at a later time.

Reviewing the Load Support Logs


We strongly recommend that you review the load support logs regularly for error conditions. As mentioned earlier, the output from the load support programs may be written to a single log file, to three separate log files, or to some combination in between, depending on the preference file settings. See Enterprise Metrics Preference File Settings on page 329 for additional information. The following sections describe each output log based on the default preference settings:

- mb.Loads.log
- mb.Publish.log
- mb.Enrich.log

Each major event performed by the load support programs is logged to a log file along with the timestamp of the event. Depending on the preference file settings, SQL statements, commit and rollback statements, and row counts may also be included.

mb.Loads.log
By default, the mb.Loads.log file contains the output from the BeginLoad and FinishLoad programs. For the BeginLoad program, the following major events are logged.
Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Starting BeginLoad Processing
Setting load flags in table <bap_load>
BeginLoad Successfully Completed

For the FinishLoad program, the following major events are logged:
Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Starting FinishLoad Processing
Setting last load time in table <bap_load>
Reading publish flags from table <bap_load>
Calling Publish Program for user-defined metadata
Calling Publish Program to publish standard metadata
Setting publish_meta_flag to 'N'
Calling Publish Program to publish enrichment metadata
Setting publish_enrich_flag to 'N'
Calling Processed Enrichment Program
Reading the as of date using view <VAP_LOAD_DONE>

Begin reading period data for the as of date <1978-01-01>
Begin Reading period data for the year ago date <1977-01-01>
Updating period data in table <bap_load>
FinishLoad Successfully Completed

If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log and possibly a rollback statement as mentioned earlier in this chapter. In the case of the FinishLoad program, additional output is included in the log to indicate that the flags in the BAP_LOAD table are being updated to indicate the failure. As an example:
********** Error: No row found in <vap_fiscal_period> for the as of date at 2003-08-02 19:09:33 **********
Setting load error flag in table <bap_load> to 'Y'
UPDATE bap_load SET LOAD_ERROR_FLAG='Y', LOADING_FLAG='N', LAST_LOAD_EVENT_NAME = 'FinishLoad Failed', UPDATE_TIME= {ts '2003-08-02 19:05:37'}, UPDATE_USER_ID='FinishLoad'
<1> rows effected.
COMMIT
Completed update of load error flag

mb.Publish.log
By default, the mb.Publish.log file contains the output from the Publish program including the following major events:
Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Begin Publishing for Publish Group <P>
Begin reading metadata table information for publish group <P>
Begin deleting metaData for publish group <P>
[each metadata table will be listed as it is deleted]
Begin inserting metaData for publish group <P>
[each metadata table will be listed as it is inserted]
Successfully Completed Publishing for Publish Group <P>

Note: In the mb.Publish.log file, the publish group could be P, S, or E.

If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log, possibly with a rollback statement, as indicated earlier in this chapter.

mb.Enrich.log
By default, the mb.Enrich.log file contains the output from the Enrich program including the following major events:
Logging Starts
Begin setup of DB connection
Begin setup of MDB connection
Loading table map from table <pub_map_table>, selecting <prd>, using
Begin Enrichment Processing
Begin Reading Enrichment Source Criteria
Begin Reading Enrichment Target Criteria
Begin Enriching Data
[each enrichment job is listed as it is processed]
Enrichment Processing was Successfully Completed

If an error occurs, events are logged as shown above to the point of the error, where an error message is included in the log, possibly along with a rollback statement, as indicated earlier in this chapter.

Chapter 18
Evaluating Enterprise Metrics Performance

This chapter describes how to use the Performance Statistics tool to gather performance statistics and use that information to identify performance problems, determine their causes, and design solutions for tuning Enterprise Metrics.

In This Chapter

Introduction .................... 308
Statistics Reporting Background .................... 308
Launching the Performance Statistics Utility .................... 309
Understanding the Enterprise Metrics Performance Statistics Utility .................... 310
Using the Performance Statistics Utility to Tune and Troubleshoot .................... 321
Preference File Settings .................... 327

Introduction
The most important factor for Enterprise Metrics users, besides accuracy, is database query response time. Data mart performance tuning is therefore one of the most important administration tasks. Enterprise Metrics provides a comprehensive tool to monitor Enterprise Metrics query performance and to provide all necessary information for quick and effective database tuning. The Enterprise Metrics Server continuously records performance statistics in two metadata database tables and records the SQL and timing for every query in the server log (see Statistics Reporting Background on page 308). The Performance Statistics tool helps you organize and understand this historical data, isolate the root causes of problems, determine the tuning actions to take, and check the effectiveness of your actions. Using this tool requires knowledge of common star schema tuning practices, such as aggregation and indexing techniques. You should also be familiar with how Enterprise Metrics represents hierarchies and stars and configures aggregate navigation using StarGroups. This chapter is organized into these topics:

- Statistics Reporting Background: Explains how the tool gathers and presents information
- Launching the Performance Statistics Utility: Describes how to run the tool
- Understanding the Enterprise Metrics Performance Statistics Utility: Provides details on interpreting each pivot table
- Using the Performance Statistics Utility to Tune and Troubleshoot: Shows how to use each pivot table to understand specific types of performance problems and scenarios
- Preference File Settings: Describes how to configure the Enterprise Metrics Server to record statistics

Statistics Reporting Background


It is important to capture a baseline of Enterprise Metrics performance statistics before tuning. After you begin tuning, you can compare the new statistics against that baseline. Enterprise Metrics gathers query statistics related to performance and aggregate usage into two tables in the Metrics Catalog:

- PRD_STAR_STATS_DETAIL: stores statistics at the query transaction level
- PRD_STAR_STATS_SUMMARY: stores statistics at a summary level by star

Preference files are text files containing settings that affect the appearance and functionality of Enterprise Metrics. Specific settings in the Enterprise Metrics Server preference file are used to trigger Enterprise Metrics to collect performance statistics data and store the data in the Metrics Catalog tables. For information about settings that gather performance statistics, see Preference File Settings on page 327.
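If you want to inspect the gathered statistics directly, you can query these tables with SQL along the following lines. The table names come from this guide; the column names here are assumptions based on the pivot table columns described later in this chapter, so check your catalog for the actual names:

    -- Sketch: which stars account for the most total query time?
    SELECT star_name, times_picked, total_query_secs
      FROM prd_star_stats_summary
     ORDER BY total_query_secs DESC;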

The Enterprise Metrics Server log file contains detailed information on each query that is executed by Enterprise Metrics. This information includes the time of the query execution, the SQL statement issued to the database, and the user that requested the query. For detailed information on reading log files, see Using Log Files for Tuning and Troubleshooting on page 288. The Enterprise Metrics Technical Tools include a Performance Statistics tool that provides pivot tables combining the performance statistics with basic hierarchy level and StarGroup information. These pivot tables are provided in the format of an Interactive Reporting document (.bqy extension). You can use the pivot tables to analyze the statistics. During your analysis, you may need to correlate information from multiple reports or use the query details from the Enterprise Metrics Server log file to determine the cause of a specific problem and resolve it. The following sections provide detailed information to guide you through this process and describe possible causes of, and solutions to, typical problems.

Launching the Performance Statistics Utility


After you gather performance statistics in the Metrics Catalog tables and retain the corresponding Enterprise Metrics Server log files, you can examine the pivot tables contained in the Performance Statistics tool (Perf stats.bqy).
Note: The following instructions assume that you have followed the steps outlined in the Hyperion System 9 BI+ Enterprise Metrics Users Guide regarding setting up the Technical Utilities. This includes downloading the Technical Tools Zip file, extracting the contents of the Zip file, and creating the OCE connections.

To launch the Performance Statistics tool:


1. In the Hyperion System 9 BI+ Viewer module, choose File > Import, and select C:\Hyperion\EnterpriseMetrics\TechnicalUtilities\Perf stats.bqy.

2. Follow the steps for the import process and click Finish.

3. Right-click the Perf stats.bqy file and choose Open.
The document opens and displays one of the Query sections. Before you start analyzing and tuning your installation, Hyperion recommends that you make a backup copy of the Perf stats.bqy file in case you need to restore the original file.
Note: If you want to restore the Performance Statistics tool BQY file, open the Technical Tools Zip file from the Enterprise Metrics Editor launch page and extract the Perf stats.bqy document.

In addition, as a precaution, copy the pivot tables before you drill down on any cells in the BQY file.

To make a copy of a pivot, duplicate the Pivot section in your Interactive Reporting document.
This creates a duplicate of the Pivot section with a numeric suffix appended to the original section label. For example, if you duplicate a section named SalesPivot one time, the Section pane shows SalesPivot and SalesPivot2. You can now use the duplicated section for analysis while ensuring that the original pivot table remains intact. You can delete the duplicated section after you complete your analysis and easily recreate it from the original. Also note that the default retention of the statistics is 14 days, set by the STAR_STATS.DELETE_DAYS preference setting. You can change the setting to a maximum of 90 days, depending upon how many days of history are needed for your tuning tasks. When you save the Interactive Reporting document, the data is saved with it, so you need to reprocess the Query sections to refresh the statistics.
Note: If stars or hierarchies are deleted and the new metadata is published, the performance statistics BQY history for those stars and hierarchies is orphaned. If orphaned, some of the pivots that displayed data for those objects can no longer do so. This is how the tool is designed. Accordingly, we recommend that you keep versions of your Perf stats.bqy file so that you can get to this history if necessary. The history does not reside in the performance tables in the database: the information becomes obsolete with the metadata changes, and detail records are deleted by default after 72 hours.

To reprocess a Query section, select the Query section label, right-click the Request line, and choose Process Query from the shortcut menu.
Note: If you want to restore the Performance Statistics tool file to its original state, open the Technical Utilities Zip file from the Enterprise Metrics Editor launch page and extract the Perf stats.bqy document. Before doing so, you should rename the previous version of the file.

Understanding the Enterprise Metrics Performance Statistics Utility


The Performance Statistics tool contains performance pivot tables organized into general categories called pivot sections. Table 25 lists the pivot sections contained in the Performance Statistics Utility and briefly describes their contents.
Table 25: Pivot Sections in the Performance Statistics Utility

- Star Stats Summary Pivot (page 311): Contains performance information about each StarGroup broken down to the individual star level.
- Query Performance Analysis Pivot (page 312): Shows the performance of queries related to each star and StarGroup.
- Query Performance Analysis Over Time Pivot (page 313): Shows the performance of queries in relation to time.
- Agg Usage Analysis Pivot (page 313): Shows the performance of queries, specifically how the aggregates are used, based on the needed versus supported levels.
- User Performance Analysis Pivot (page 314): Shows the performance of queries based on each user.
- Slowest Queries Pivot (page 315): Lists the slowest running queries for your Enterprise Metrics installation.
- Query Performance Analysis Over Publish Time Pivot (page 316): Shows the performance of queries in relation to the last time publishing occurred.
- Query Performance Analysis Using Max Start_Time Pivot (page 316): Shows the performance of queries only for the most recent set of statistics written.
- Query Performance Using Parameter Pivot (page 317): Shows the performance of queries related to each star and StarGroup; accepts a parameter to filter (customize) the query.
- Hierarchy Levels and Column Reference Pivot (page 317): Shows the hierarchy levels for each hierarchy in terms of level number and level name.
- Star Supported Levels Reference Pivot (page 318): Shows the supported level number along with the supported level for each StarGroup broken down by aggregate rank and star.
- Star Levels and Columns Reference Pivot (page 319): Shows the same information as the previous pivot table, but displays it vertically.
- Reference of Bursted Supported Levels Pivot (page 319): Lists each of the supported level codes on the left (sorted), and lists the slices across the top in the same order they appear in the needed level code.
- Query Performance with Reject Reason Pivot (page 320): Shows the status of each star in a StarGroup: whether it was picked or rejected.

The following sections provide specific information relating to each pivot table, including detailed column descriptions.

Star Stats Summary Pivot


The Star Stats Summary pivot table contains performance information about each StarGroup broken down to the individual star level. It contains information about how often a given star was picked or rejected and time statistics for query performance. You can use this pivot table when you need to get a high level picture of how your aggregates and stars are being used and the query performance against these stars and aggregates. The Star Stats Summary pivot table contains the following columns:

- StarGroup Descr: Contains the name of the StarGroup.
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Star Name: The name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
- Times Rejected: The number of times the star was rejected for use by queries.
- Times Picked: The number of times the star was picked for use by queries.
- Times Used: The number of times the star was used by a query after it was picked.
- Percent of Usage: The percentage this star was used as compared to other stars.
- Total Query Secs: Total number of seconds that the associated queries took to run.
- Percent of Query Secs: The percentage of time (in seconds) the queries took for this star as compared to other stars.
- Avg Query Secs: The average time (in seconds) the queries took for this star as compared with other stars.

Query Performance Analysis Pivot


The Query Performance Analysis pivot table shows the performance of queries related to each star and StarGroup. The stars are shown in the ranked order within a StarGroup. Displaying by ranked order helps you analyze the performance of your aggregates. You can reference the supported levels code by looking at the Reference Of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code. The Query Performance Analysis pivot table contains the following columns:

- StarGroup Descr: Contains the name of the StarGroup.
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
- Total Query Duration: Total number of seconds of all queries combined for this star.
- Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
- Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
- Number of Queries: The total number of queries that were issued against this star.
- Percent of Queries: The percentage of queries issued against this star as compared with other stars.
- Avg Query Duration: The average time (in seconds) the queries ran for this star.

Query Performance Analysis Over Time Pivot


The Query Performance Analysis Over Time pivot table shows the performance of queries in relation to time. The Enterprise Metrics Server refreshes the performance statistics: a new set of statistics records is written when the load_compl_flag in the BAP_LOAD table is updated. The Enterprise Metrics Server resets this flag after successful completion of the FinishLoad program, which is executed as the last step of the ETL process. This pivot table selects data based on the most recent start_time field, which shows when the next set of performance statistics records was written. Using this pivot table, you can see the performance over time based on the ETL process. The Query Performance Analysis Over Time pivot table contains the following columns:

- Start Time: The last time statistics were updated and a new set of statistics was gathered. This is dependent upon the ETL load schedule.
- StarGroup Descr: Contains the name of the StarGroup.
- Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
- Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.

Agg Usage Analysis Pivot


The Agg Usage Analysis pivot table shows the performance of queries, specifically how the aggregates are used, based on the needed versus supported levels. This pivot table also shows whether a star was picked, used, rejected, or was offline, and the reason if it was rejected. The supported levels code can be determined by reviewing the Reference Of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code. For more information about the needed levels, see Hierarchy Levels and Column Reference Pivot on page 317. The needed levels are the levels that were requested in each query. By comparing the needed and supported levels, you can understand if you have the right aggregates or if you need to build additional aggregates. The Agg Usage Analysis pivot table contains the following columns:

- StarGroup Descr: Contains the name of the StarGroup.
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
- Needed Levels: The levels that were required based on the query issued. To find the levels that correspond to each digit in the needed levels code, see Hierarchy Levels and Column Reference Pivot on page 317.
- Status: Indicates whether the star was rejected, picked, used, or offline.
- Reject Cause: The reason the star was rejected. The possible values are: Not Applicable, Missing Needed Levels, Needed ok Missing Cols, and Offline. The value Not Applicable appears when the star was either picked or used.
- Rejected Star: The number of times the star was rejected because it was not suitable based on the request. The most likely cause of a rejected star is that the needed levels were not available in the supported levels.
- Picked Star: The number of times the star was picked because it was a candidate star to satisfy the request. You may not see a record in Used Star for every entry in Picked Star, since a star could get picked but the query could ultimately be carpooled, writing one record for the used star.
- Used Star: The number of times that the star was actually used to satisfy a query.
- Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
- Total Duration: Total number of seconds of all queries combined for this star.
- Avg Query Duration: The average time (in seconds) the queries took to run for this star.

User Performance Analysis Pivot


The User Performance Analysis pivot table shows the performance of queries based on each user. You should use this pivot table when you need to drill down and find the cause of a performance problem. You will eventually use the information in the log file to find the specific query or performance problem. The User Performance Analysis pivot table contains the following columns:

- User ID: A unique identifier of a user of the application. A user ID of #admin indicates that this is the chart as defined by the page publisher. Any other user ID means that this definition is specific to the user.
- Total Query Duration: Total number of seconds of all queries combined for this star.
- Percent of Query Duration: Percent of duration (in seconds) of all queries for this star as compared with other stars.
- Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.
- Number of Queries: The total number of queries that were issued against this star.
- Percent of Queries: The percentage of queries that were issued against this star as compared with other stars.
- Avg Query Duration: The average time (in seconds) the queries took to run for this star.

Slowest Queries Pivot


You can use the Slowest Queries pivot table to look at the slowest running queries for your Enterprise Metrics installation. This pivot table provides information about the query time (in seconds) and shows you the individual user with the slowest query. It also shows information about the star, Request ID and Item ID. You can drill down to get information about the StarGroup description and needed levels if required. The Slowest Queries pivot table contains the following columns:

- Query Duration: Total number of seconds for each query, sorted so you can see the slowest query at the top.
- Query Time: The time the query was issued. This helps in correlating this information with the log file.
- User ID: Unique identifier of a user of the application. A user ID of #admin indicates that this is the chart as defined by the page publisher. Any other user ID means that this definition is specific to the user.
- Request ID: Unique identifier of a query request within a single client session. This information is important when you look at the log file and need to quickly locate the query.
- Item ID: Unique identifier of a sub-activity within a single query. This information is also required when you look at the log file and need to locate the query and make a correlation between the log and the pivot table.
- Number of Queries: The total number of queries for this star.
- Star Name: The name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- This Star Avg Query Secs: The average time (in seconds) the queries took to run for this star.

Query Performance Analysis Over Publish Time Pivot


The Query Performance Analysis Over Publish Time pivot table shows the performance of queries in relation to the last time publishing occurred. The performance statistics are refreshed based on the ETL schedule as described in Query Performance Analysis Over Time Pivot on page 313. This pivot table selects data based on the most recent publish time. Therefore, it is possible to see the effects of adding or modifying aggregates and see the performance before and after the change to the metadata. The Query Performance Analysis Over Publish Time pivot table contains the following columns:

- Last Publish Meta Time: The last time a publish occurred; shows the effect of modifying the Metrics Catalog (metadata) by adding or removing aggregates, indexes, and so on.
- StarGroup Descr: Contains the name of the StarGroup.
- Percent of Query Duration: Percent of duration (in seconds) of all queries for this star in comparison to other stars.
- Slowest Query: Query time (in seconds) for a given star to help find the slowest running query.

Query Performance Analysis Using Max Start_Time Pivot


The Query Performance Analysis Using Max Start_Time pivot table is similar to the Query Performance Analysis pivot table, except that this pivot table shows you the performance only for the most recent set of statistics written. Each time an ETL process completes, the load_compl_flag is updated, and this causes a new set of records to be written to the statistics tables with a new start_time. This pivot table shows only the most recent data by taking the max of the start_time column. It shows you the performance of queries related to each star and StarGroup. The stars are shown in ranked order within a StarGroup, which helps you analyze the performance of your aggregates. The supported levels code can be determined by looking at the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code. The columns for the Query Performance Analysis Using Max Start_Time pivot table are the same as for Query Performance Analysis Pivot on page 312.
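The equivalent restriction in SQL looks roughly like the following sketch. The detail table name and the start_time column come from this chapter; the other column names are assumptions:

    -- Sketch: limit detail statistics to the most recent statistics set
    SELECT star_name, query_duration
      FROM prd_star_stats_detail
     WHERE start_time = (SELECT MAX(start_time)
                           FROM prd_star_stats_detail);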

Query Performance Using Parameter Pivot


The Query Performance Using Parameter pivot table is similar to the Query Performance Analysis pivot table; however, this pivot table is designed to accept a parameter to filter (customize) the query.

To use this pivot table:


1. Click the Query Performance Using Parameter Query section.

2. In the Contents pane (on the right), right-click anywhere in the Request line and choose Process Query.

3. From the dialog box, choose the start time.
The start time updates each time an ETL process completes. The Query Performance Using Parameter pivot table shows only the most recent data by performing a max function on the start_time column. It also shows the performance of queries related to each star and StarGroup. The stars are shown in the ranked order within a StarGroup. Displaying the stars in ranked order helps you analyze the performance of your aggregates. The supported levels code can be determined by looking at the Reference Of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code. The columns for the Query Performance Using Parameter pivot table are the same as for Query Performance Analysis Pivot on page 312.

Hierarchy Levels and Column Reference Pivot


The Hierarchy Levels and Column Reference pivot table shows the hierarchy levels for each hierarchy in terms of level number and level name. This pivot table helps you interpret the supported and needed levels when you use the log files to view the name of a level or column. Each of the cell values contains the level name with the column name in parentheses. You can use this pivot table when you define aggregates, since it shows the hierarchies, supported levels, and column names. The Hierarchy Levels and Column Reference pivot table contains the following columns:

- Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
- Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
- Hier Name: The name of the hierarchy.
- Default Alias: The alias used in the SQL that you will find in the log file. You can easily refer to this pivot table when you are looking at the log file to check the name of the hierarchy based on the alias.
- Level Number: Each hierarchy contains at least one level, but generally two or more levels. The level number indicates the corresponding level for each item that comprises a hierarchy.
- Level Name and Column: The business name for a level followed by the name of the database column for that level. This is helpful when you are reviewing the SQL in the log file.

Star Supported Levels Reference Pivot


The Star Supported Levels Reference pivot table shows the supported level number along with the supported level for each StarGroup broken down by aggregate rank and star. This pivot table presents information along each hierarchy based on the slice indicator and slice order. It is useful when you are looking at the supported levels and need to understand why a given star was rejected based on the aggregate level. The Star Supported Levels Reference pivot table contains the following columns:

- StarGroup Descr: Contains the name of the StarGroup.
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
- Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
- Hier Name: The name of the hierarchy.
- Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Supported Level-Level Name: The lowest supported level for each hierarchy of this star. This shows the level number with the level name.

Star Levels and Columns Reference Pivot


The Star Levels and Columns Reference pivot table is similar to the Star Supported Levels Reference pivot table, except it displays information vertically. This pivot table can be useful when you want to print the pivot table, since it presents information along each hierarchy based on the slice indicator and slice order. This pivot table is also useful when you want to review the supported levels and need to understand why a given star was rejected based on the aggregate level. The Star Levels and Columns Reference pivot table contains the following columns:

- StarGroup Descr: Contains the name of the StarGroup.
- Aggregate Rank: The rank of a specific star. Denotes the order that the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, this is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
- Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
- Hier Name: The name of the hierarchy.
- Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (that is, the combination of a single fact table and all related dimension tables).
- Supported Level-Level Name: The lowest supported level for each hierarchy of this star. This shows the level number with the level name.
- Column Name: The physical name of the database column for each level-level name.

Reference of Bursted Supported Levels Pivot


The Reference of Bursted Supported Levels pivot table lists each of the supported level codes on the left (sorted), and lists the slices across the top in the same order they appear in the needed level code. Each cell contains the supported level number followed by the level name. This is a useful reference when you want to look at the SQL and determine what the supported levels really are based on the supported level code. The Reference of Bursted Supported Levels pivot table contains the following columns:

- Slice Ind: The first row, sorted so that the order matches the order of supported levels in the statistics. This forces the sliceable hierarchies to display first, followed by the nonsliceable hierarchies.
- Slice Order: The order in which sliceable hierarchies appear on the pages in the Investigate section, and the hierarchies that are available for constraining reports. The order is helpful in the pivot table so you can verify that the sliceable hierarchies are ordered correctly. A nonsliceable hierarchy value displays as -1.
- Hier Name: The name of the hierarchy.
- Supported Levels: The lowest supported level for each hierarchy of this star.
- Supported Level-Level Name: The level number followed by the level name. This information is sorted to match the supported levels in the left column.

Query Performance with Reject Reason Pivot


This pivot table shows the status of each star in a StarGroup: whether it was picked or rejected. In addition, it interprets the reject_reason column from the PRD_STAR_STATS_DETAIL table and shows why a given star was rejected. This pivot table is similar to the other query performance pivot tables but has a column that shows the reason for the rejection of a star (if it was rejected). The Query Performance with Reject Reason pivot table contains the following columns:

- StarGroup Descr: The name of the StarGroup.
- Aggregate Rank: The rank of the star. Denotes the order in which the star should be selected for use in a query in relation to other stars in the same StarGroup. (A StarGroup generally contains stars with the same facts but at different levels of grain.) Typically, it is used for the purpose of aggregate navigation. The higher the rank number, the better the choice, because the fact table contains the least number of rows. This means that a rank of 1 would be at the base level.
- Star Name: Name of the star within a StarGroup. This is a unique identifier of the star schema (for example, the combination of a single fact table and all related dimension tables).
- Supported Levels: The lowest supported level for each hierarchy of this star. You can refer to the Reference of Bursted Supported Levels pivot table to find the levels that correspond to each digit in the supported levels code.
- Needed Levels: The levels that are required based on the query issued. You can refer to the Hierarchy Levels and Column Reference pivot table to find the levels that correspond to each digit in the needed levels code.
- Status: Shows whether the star was rejected, picked, used, or offline.
- Explain Reject: Shows the reason for the rejection of the star. There are four possible values: Not Applicable, Missing needed levels, Needed ok missing cols, and Offline. Not Applicable indicates that the star was used or picked, so the reject reason does not apply. Missing needed levels indicates that the star was missing some levels that were required by the query. Needed ok missing cols indicates that the star had the needed levels, but other columns were required in addition to the needed levels (possibly because of query carpooling) that this star could not satisfy, so a different star was picked. Offline indicates that the star could still be loading or could be offline for other reasons.
- Rejected Star: The number of times the star was rejected because it was not suitable based on the request. The most likely cause is that the needed levels were not available in the supported levels.
- Picked Star: The number of times the star was picked because it was a candidate star to satisfy the request. Keep in mind that you do not necessarily see a record in Used Star for every entry in Picked Star, because a star could get picked but the query could ultimately be carpooled, with one record finally written for the used star.
- Used Star: The number of times that the star was actually used in order to satisfy a query.
- Slowest Query: Query time (in seconds) for a given star. This helps find the slowest running query.
- Total Duration: Total number of seconds of all queries combined for this star.
- Avg Query Duration: The average time in seconds the queries took to run for this star.

Using the Performance Statistics Utility to Tune and Troubleshoot


After you gather the performance statistics in the Metrics Catalog tables and familiarize yourself with the pivot tables, you are ready to begin tuning and troubleshooting. The following sections describe some typical performance issues that you may want to diagnose using the pivot tables contained in the Performance Statistics tool. Each issue is outlined along with possible causes and solutions. This section contains the following topics:

- Star and Aggregate Performance on page 322
- Slow Queries on page 322
- Needed Versus Supported Levels on page 323
- Carpooling on page 324
- A Star is Picked but Not Used or Rejected on page 324
- Needed Columns and Levels on page 324
- Frequently Used Stars on page 325
- User Complaints on page 326
- Analyze the Performance After Tuning on page 326

Star and Aggregate Performance


To understand (at a high level) how your stars and aggregates are being used and how they perform against queries, view the Star Stats Summary pivot table. You can see the performance of each star in a StarGroup and the query performance for these stars. Based on the information presented in this pivot table, you can also look at the Agg Usage Analysis pivot table or the Query Performance Analysis pivot table to further investigate the problem.

Slow Queries
Some of the causes for slow-running queries include:

- Lack of indexes or unused indexes.
- No aggregate table available.
- The star does not support specific levels.
- A page in the Investigate section is carpooling queries, and some of the queries need to go to the base star.
- Multiple queries are running at the same time, which slows down Enterprise Metrics.
- Query design is not optimal. There could be an expensive join between a large dimension table and a large fact table which you could avoid by joining the fact to a different, smaller dimension. For example, if you need to select recently shipped orders from a large fact table, you can join the fact table to a large Order dimension table to obtain the ship date and use that column to filter the fact rows. However, a more optimal query design would use a ship_period_key within the fact table to join to a small Period dimension and use the corresponding calendar date from the Period dimension table to filter the fact rows, as sketched below.
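The last cause above can be made concrete with a sketch. All table and column names here are illustrative, not taken from any particular data mart:

    -- Less optimal: join a large Order dimension to the fact just to filter on ship date
    SELECT SUM(f.shipped_amount)
      FROM order_fact f
      JOIN order_dim o ON f.order_key = o.order_key
     WHERE o.ship_date >= DATE '2006-01-01';

    -- More optimal: carry a ship_period_key in the fact table and filter through the
    -- small Period dimension instead
    SELECT SUM(f.shipped_amount)
      FROM order_fact f
      JOIN period_dim p ON f.ship_period_key = p.period_key
     WHERE p.calendar_date >= DATE '2006-01-01';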

To find the slowest running query:


1. Click Slowest Queries Pivot and examine the information in the first column: Query Duration.

2. Click the number in the Query Duration cell.

3. Right-click and choose Focus on Items to narrow down the search.

4. Locate and open the server log.
Server logs are stored in the \Server directory by default. The simplest way to review the log is to use the Server Console. The log file name is similar to mb.server.2005.20020130.041841.log. For additional information on locating and viewing log files, see Locating and Viewing the Logs on page 288.

5. In the log file, locate the query that was running slow to analyze the problem.

Look at the Query Time, User ID, and Request ID columns in the Slowest Queries pivot table, where you have narrowed down your search, and match the entries in the log file to find the query.

Once you have found the slow-running query, there could be many reasons why it is running slow, as described earlier in this chapter. Use the information in the following sections to further investigate the cause.

Needed Versus Supported Levels


The Agg Usage Analysis pivot table helps you determine why an aggregate was not used. You can view information about all the StarGroups and stars and specifically determine whether a star was picked, used, or rejected. If a star was rejected, it shows you the reason the star was rejected. To understand when a star was rejected, you can look at the Supported Levels and Needed Levels columns. Reasons why a star might be rejected include:

- The query from Enterprise Metrics may require some levels that are not supported by the aggregate star.
- In some cases, the star is rejected even if the needed and supported levels match; this information is shown in the Reject Cause column.

You can determine whether a star supports the levels by reviewing the needed versus supported levels. The set of numbers that you see in the pivot table is a group of digits where each digit corresponds to the lowest supported level of a hierarchy for the given star. They are in the order of the hierarchy slice order. To decode the digits that display in the Agg Usage Analysis pivot table for the needed and supported levels, refer to the Star Supported Levels Reference pivot table.

If you review the Star Supported Levels Reference pivot table, the very first column on the left is the StarGroup reference, where you locate the StarGroup that contains your star. The Star Name column shows the star. To the right, you can review the supported level for each hierarchy. Use the following hints to decode the lowest supported level number (53136501111011):

- The very first hierarchy listed is the hierarchy with the slice order of 0, which is the Period hierarchy. The pivot table shows that for this Period hierarchy, the lowest supported level is level 5, which is the Day level. The 5 here corresponds to the first digit in the supported level combination of 53136501111011.
- The second number displayed is 3, which in the Star Supported Levels Reference pivot table is the second hierarchy for the star that you are checking. Thus, this would be the lowest supported level that corresponds to the slice order of 1.

You can continue doing this for the remaining digits to determine the lowest supported level for all the hierarchies of the star. Now, you can look at the Needed Levels in the Agg Usage Analysis pivot table and perform the same exercise to determine the levels required by the query. As you traverse the digits from left to right, you may find that a given level was needed but was not supported in the star. This means that the aggregate star is unable to satisfy the request and Enterprise Metrics needs to go against the base star to satisfy the query. This could provide you with a clue as to whether you need to build an aggregate star and add it to the StarGroup.

Carpooling
Typically, many of the metrics on a page in the Investigate section require the use of the same StarGroup. Enterprise Metrics is designed for optimal performance: the server automatically combines the separate select items of each metric into a single query (if the queries are against the same StarGroup) and issues a single query that avoids multiple round trips to the database. This is called carpooling. Because all the queries are against the same StarGroup, if some queries need to access a given star at the base level whereas other queries could use stars at an aggregated level, Enterprise Metrics chooses the base star: it must access the base star anyway, and accessing the other aggregate stars as well would add overhead.
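Conceptually, carpooling transforms several per-metric queries into one combined statement, as in this sketch (the schema and names are illustrative):

    -- Without carpooling: one round trip per metric
    SELECT p.month_name, SUM(f.revenue)
      FROM sales_fact f JOIN period_dim p ON f.period_key = p.period_key
     GROUP BY p.month_name;
    SELECT p.month_name, SUM(f.units)
      FROM sales_fact f JOIN period_dim p ON f.period_key = p.period_key
     GROUP BY p.month_name;

    -- With carpooling: the select items ride together in a single query
    SELECT p.month_name, SUM(f.revenue), SUM(f.units)
      FROM sales_fact f JOIN period_dim p ON f.period_key = p.period_key
     GROUP BY p.month_name;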

A Star is Picked but Not Used or Rejected


When you review the Agg Usage Analysis pivot table, you may notice that at times a star is picked but is neither used nor rejected. This happens when a query from a page in the Investigate section is carpooled. When there are multiple queries from a page for the same StarGroup, Enterprise Metrics checks each query (one at a time), picks a star, and writes a record. However, if after looking at all the queries it determines that they can benefit from carpooling, it writes a used record only once, against the star it picked. In this case, you could see some picked records for a star where that star was neither rejected nor used.

Needed Columns and Levels


The Agg Usage Analysis pivot table shows Needed Levels, which covers columns that are needed and are part of a hierarchy of a star. If SQL uses tables or columns that are not part of the hierarchy of a star, Enterprise Metrics does not consider them for the display of needed levels. You can review the Reject Cause column in the Agg Usage Analysis pivot table to determine whether the value shows Needed ok missing cols: the needed levels and supported levels did match up (the needed levels could be satisfied by the supported levels), but other columns were also required. In some cases you may see that the needed levels for a query could be satisfied by the supported levels for that star, but the star was still not picked. This typically occurs in a mini report SQL query where you are using a non-hierarchical constraint (for example, an ordering or date range constraint) for the mini report.

To determine if the needed levels for a query can be satisfied by the supported levels of the star:
1. Look at the query for the mini report in the server log file and locate the query that was running slow. If it is a report query, identify the star against which the query was issued.

2. Launch the Enterprise Metrics Studio Utilities and click Mini Reports. For detailed information on using the Studio Utilities, see the Hyperion System 9 BI+ Enterprise Metrics Users Guide.

3. Locate the mini report that you found in the log file, then click the Edit button to view the details of the mini report.

4. Click the View Constraints button to see which constraints are being used. The constraints may be hierarchical or non-hierarchical.

5. To determine whether a constraint is hierarchical, close the Mini Report tool and return to the Studio Utilities main window.

6. Click the Constraints button to view all constraints. For each constraint you can view the hierarchy that the constraint is based on. If a constraint is non-hierarchical, nothing appears in the Hierarchy column.

7. For each constraint, check the level being used, then close the Constraints tool to return to the Studio Utilities main window.

8. Click the Stars button to launch the Star tool and click the aggregate star that you suspect should have been used.

9. Check the hierarchies of the star to find out whether the star supports the levels (columns) that the constraint was based on.

If the star does not support the hierarchy or the level for the constraint, then the star is not used by Enterprise Metrics despite the fact that the supported levels matched the needed levels. If the query is performing slowly because the star is rejected due to your constraint, improve the performance by either appropriately indexing the table you are using for the non-hierarchical constraint or adding the hierarchy of the constraint to the aggregate star.

Frequently Used Stars


You can use the Query Performance Analysis pivot table (Percent of Queries column) to find which stars are most frequently used and review the percentage of queries issued against a given star. If you notice that there are a large number of queries against a given star (aggregate or base star), you can evaluate whether it would be beneficial to build an aggregate star for a given base star or an aggregate star on top of an existing aggregate star.

To determine if you can add an aggregate star:


1. Using the Query Performance Analysis pivot table, find the most frequently used star or StarGroup.

2. Click the Star Name or StarGroup Descr cell for the frequently used star or StarGroup you want to analyze.

3. Right-click and choose Drill Anywhere.

4. Select the Query Time.

5. Right-click additional times, as needed, to select Request ID and User ID to obtain the details for this star or StarGroup.

6. After you have collected the details for the star, view the server log file and search for the queries by looking at the query time, request ID, and user ID.

7. Compare the queries that are issued for this star and look for common hierarchy levels (columns) across all the queries.

If you find that there are queries accessing a subset of levels that are common across all the queries, it may be beneficial to build an aggregate star, either on the base star or over an existing aggregate star.

User Complaints
At times, you may receive complaints from users about Enterprise Metrics performance. You can use the User Performance Analysis pivot table to analyze Enterprise Metrics performance by user. First, locate the user ID of the user experiencing performance problems, then compare the information with the other users to see if it is a widespread problem or is isolated to only one user. If the issue is isolated to only one user, drill down on the StarGroup name, star name, request ID, or query time to narrow down the problem. You can also check the Star Stats Summary or Query Performance Analysis pivot tables to further analyze the problem. Typically, the reasons for slow performance may be that there are no aggregate stars supporting the query or that the tables being accessed need to be indexed. You can find the query by looking at the server log file and searching based on the information gathered using the Performance Statistics tool.

Analyze the Performance After Tuning


After you determine the cause of the performance problems, you can fix the problems by creating additional aggregate stars, adding indexes, and so on. You can check the performance and compare with the past performance using these three pivot tables:

Query Performance using Parameter: Allows you to pick the date for which you want to see performance statistics. For detailed information on setting the start date, see Query Performance Using Parameter Pivot on page 317.

Query Performance Analysis over Time: Shows performance based on information captured at each ETL load job, and reflects changes according to your ETL schedule.

Query Performance Analysis over Publish Time: Shows performance statistics based on when you publish the catalog (metadata). Typically, you use this pivot table first to determine the changes in performance as you add or remove aggregates, indexes, and so on.


Preference File Settings


There are several preference settings that are used to gather performance statistics. These settings include:

STAR_STATS.COLLECT_DETAIL: Specifies whether collection of star usage statistics is enabled at the detail level. By default, this setting is TRUE.

STAR_STATS.COLLECT_SUMMARY: Specifies whether collection of star usage statistics is enabled at the summary level. By default, this setting is TRUE.

STAR_STATS.DELETE_DAYS: Whenever detail records are written to star usage statistics, existing records older than the number of days specified by this setting are deleted. The minimum setting is 0 (which means never delete) and is not recommended. The maximum setting is 90. The default is 14 days.

STAR_STATS.DETAIL_WRITE_EVERY: If detail star usage statistics records are being collected, they are written to the catalog database each time this number of records has been accumulated, or when the server is shut down or restarted. The default is 1000 records.

STAR_STATS.SUMMARY_INTERVAL_SECS: When summary star usage statistics collection is enabled, statistics are accumulated in memory and written to the catalog database only when this specified interval (in seconds) expires or the server is shut down or restarted. The default is 1800 seconds.

For additional information regarding each of the above preference settings, see Chapter 19, Enterprise Metrics Preference File Settings.

To begin the tuning process, verify that the STAR_STATS.COLLECT_DETAIL and STAR_STATS.COLLECT_SUMMARY preferences are set to TRUE. Since these are the default values, if you open the Metrics_server.prefs file and do not see an entry for either preference, the default values are being used. You can confirm this by opening the Enterprise Metrics Server log and reviewing the preference settings at the beginning of the log file. Then set STAR_STATS.DELETE_DAYS appropriately, so that the records are retained for the period over which you want to tune and troubleshoot Enterprise Metrics. Depending upon user load and activity, you may also want to change the defaults for STAR_STATS.DETAIL_WRITE_EVERY and STAR_STATS.SUMMARY_INTERVAL_SECS.

After you verify or change the preference settings, statistics data is captured in the metadata tables and server logs based on usage of the application. Ensure that these statistics are retained in the tables for the duration of the tuning process. It is also important to archive (and not purge) the Enterprise Metrics Server log files in the \Server folder and to retain these archives for the duration of the tuning process.
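A sketch of the corresponding Metrics_server.prefs lines for a tuning window follows; the retention and interval values shown are illustrative choices for this example, not recommendations:

STAR_STATS.COLLECT_DETAIL=TRUE
STAR_STATS.COLLECT_SUMMARY=TRUE
STAR_STATS.DELETE_DAYS=30
STAR_STATS.DETAIL_WRITE_EVERY=500
STAR_STATS.SUMMARY_INTERVAL_SECS=900

Lowering STAR_STATS.DETAIL_WRITE_EVERY flushes detail records to the catalog more often, which is convenient while tuning but adds some write overhead.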


Chapter 19
Enterprise Metrics Preference File Settings

This chapter describes preference settings for the Enterprise Metrics Server and Configuration Server, Workspace and Personalization Workspace, Studio Utilities, and Metadata Export Utility.

In This Chapter:
Overview
Metrics_Server.prefs Settings
Configuration_Server.prefs Settings
Client.prefs Settings
Metadata_export.prefs


Overview
Preference files are text files that contain settings that affect the appearance and functionality of Enterprise Metrics. Four preference files supply settings for Enterprise Metrics.
Table 26: Enterprise Metrics Preference Files

Preference File Name          Application/Server
Metrics_Server.prefs          Metrics Server
Configuration_Server.prefs    Configuration Server
Client.prefs                  Workspace and Personalization Workspace
metadata_export.prefs         Metadata Export Utility

When Personalization Workspace or an Enterprise Metrics server is started, it reads the preference file to determine the settings. If you change settings in a preference file, the changes take effect when the server is started or restarted. The following lines show an excerpt from the Configuration_Server.prefs file.
CONFIG_SERVER=TRUE
STAR_STATS.COLLECT_SUMMARY=TRUE
# logging is DUMPTOFILE by default
# uncomment to METABOLISM to write to console window
#BALLPARK=METABOLISM
# by default, save three logs up to 3 Meg each
#LOG_FILE_MAX=3000000
#LOG_SAVE_COUNT=3
SQL.PRINT_SQL=TRUE
SQL.TIME_MINIS=TRUE
SQL.TIME_QUERIES=TRUE
# for normal operation, default log level of 3 is best.
# for debugging purposes, up to a value of 6 may be useful.
#LOG_LEVEL=6
# for demo and personal systems, should probably change to TRUE
SERVER_WINDOW=TRUE
# Only allow login by userids in the security tables
REQUIRE_UTABLE=TRUE
CLIENT_PREFS=Client.prefs
DB_MAP_NAME=pub
DB_MAP_TABLE=pub_map_table


Note: Lines in a preference file that begin with the pound sign (#) are comments; comments and blank lines are ignored by the Enterprise Metrics Server.

Many of the settings are intended for use only by development or support and should not be altered; these are marked (Do not edit) in the entries that follow. Some settings are too long to fit on one line here, whereas in the preference file each setting appears on a single line. The value following the equal sign (=) must be entered without a space between the equal sign and the value.
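For example, using a setting documented below, the first line follows this rule, while the second does not, because of the space after the equal sign:

CACHE_SIZE_METRICS=50
CACHE_SIZE_METRICS = 50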

Metrics_Server.prefs Settings
The Metrics_server.prefs file is stored in the same directory where you installed the Metrics Server. Settings prefixed with TOOLS control the catalog (metadata) configuration functions of Enterprise Metrics. Settings prefixed with DB relate to the Application Data, and settings prefixed with MDB relate to the catalog (often referred to as metadata).
Table 27: Enterprise Metrics Metrics_server.prefs Settings

AUTH_DEF_FILTER_TYPE=GROUPS
    When you launch the Enterprise Metrics Security tool, this setting determines whether to display a list of group entries. If you set AUTH_DEF_FILTER_TYPE to USERS, the Security tool initially displays a list of users. This setting works in conjunction with the AUTH_DEF_FILTER_CRIT setting. If you click Reset Filter in the Security tool, it reverts to these defaults. The default is GROUPS.

AUTH_DEF_FILTER_CRIT=*
    Use this setting to provide a different search string (0 or more characters, with an optional wildcard asterisk at the end) to work with AUTH_DEF_FILTER_TYPE. This affects only the list of USERS/GROUPS initially displayed in the Security tool.

AUTH_MAX_DISPLAY_USERS=50
    Limits the number of users displayed within a group when displaying the properties of a user.

AUTH_METHOD=CSS (Do not edit)
    Specifies that the server should use CSS authentication when logging on users. If you have special authentication requirements, contact Hyperion Customer Support.

AUTH_METHOD_CLASS (Do not edit)
    Used to provide the name of a custom authentication driver. This setting is ignored unless AUTH_METHOD is set to OTHER.

AUTH_PROVISIONED=TRUE (Do not edit)
    When set to TRUE, Enterprise Metrics synchronizes authentication and authorization (for Roles) with the rest of the Hyperion System 9 BI+ modules, using Shared Services to obtain role information for a user trying to log in to metrics. Other related settings for provisioning configuration are AV_URL, AUTH_METHOD, AUTH_AV_PROD_ID, and CSS.CONFIG_FILE. The default is TRUE. Setting it to FALSE is not recommended for a customer installation.


AUTH_AV_PROD_ID=
    The default is blank. This setting is the ID that was used to register the Hyperion System 9 BI+ instance. Enterprise Metrics uses this ID to access user provisioning information. The setting appears as <Product code>:<local_id> and is specified during registration. If AV_URL is provided, you do not need to set this. If both this setting and AV_URL are supplied, the setting for AUTH_AV_PROD_ID overrides. We recommend that you specify AV_URL only and let the server derive this setting.

AV_URL=
    Identifies the Hyperion System 9 BI+ URL, exactly as the end user is expected to use it to launch the system; for example, http://<machinename:19000>/workspace/. Normally, this value is set when you run the Hyperion System 9 Configuration Utility and configure Enterprise Metrics. You must do this whenever the System 9 BI+ URL has changed, and then also repeat server setup and Shared Services registration. This setting is used to synchronize User Management/Provisioning settings with those used by the rest of the Hyperion System 9 BI+ modules. When set, Hyperion System 9 BI+ must be up and running at the time of the Enterprise Metrics server's startup. The value here is used to derive values for CSS.CONFIG_FILE, AUTH_AV_PROD_ID, and CLIP.URL_PREFIX, unless they have been set explicitly. The recommended approach is to use the AV_URL setting rather than set the other preference settings explicitly.

BALLPARK=DUMPTOFILE
    A value of METABOLISM causes logging to the console window for debug mode; DUMPTOFILE causes logging to a file for normal operation.

BILLIONS_SYSTEM=AMERICAN
    Determines what system settings to apply for scale codes used in the Metrics tool. You can change the setting to BILLIONS_SYSTEM=BRITISH to apply European abbreviations for thousands and millions.

CACHE_DEBUG_USER= (Do not edit)
    Specifies the user for which to trace cache activity, or * (asterisk) to trace all users. For development purposes only.

CACHE_ENABLE_PURGE=FALSE (Do not edit)
    Do not change this setting.

CACHE_PRELOAD_LIMIT (Do not edit)
    The default value is zero and should not be changed for normal operation. A non-zero value limits the cache preload on the Metrics Server to only the first <n> global Monitor Section pages.

CACHE_SIZE_METRICS=50
    Specifies the number of private (Investigate Section) pages cached on the Server per user. A larger number can improve performance, but at the cost of a larger memory footprint for the Server. The default is 50. The minimum value is 1, and the maximum value is 100.

CACHE_SIZE_REPORT=50
    Specifies the number of (Pinpoint Section) pages cached on the Server per user. A larger number can improve performance, but at the cost of a larger memory footprint for the Server. The default is 50. The minimum value is 1, and the maximum value is 100.

CACHE_SYS_METRICS_MAX=200
    Specifies the maximum number of global (Investigate Section) pages to cache. When the maximum is reached, the server purges older pages down to the minimum setting (see CACHE_SYS_METRICS_MIN). The maximum is 1000.

CACHE_SYS_METRICS_MIN=150
    Specifies the minimum number of global (Investigate Section) pages to cache. The maximum allowed is 900.
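For illustration only, a fragment for a server with ample memory and many concurrent users might look like the following; the values are examples and must stay within the documented minimums and maximums:

CACHE_SIZE_METRICS=75
CACHE_SIZE_REPORT=75
CACHE_SYS_METRICS_MAX=400
CACHE_SYS_METRICS_MIN=300

With these values, each user may hold up to 75 cached Investigate Section pages and 75 cached Pinpoint Section pages, and the global Investigate Section cache is purged from 400 pages back down to 300 when the maximum is reached.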


CDB_PASS=$J(pbDbPwd)
    Specifies the corresponding password for the user ID set in CDB_USER.

CDB_USER=$J(pbDbOwner)
    Specifies the user ID the servers use in establishing the connection pool for accessing the Application Data. This user ID requires only read-only database privileges, since it is used only for making SQL queries. Some IS groups may find it acceptable to use the same user ID for CDB_USER as for DB_USER.

CHECK_PERIOD_TABLE=FALSE
    Normally set to FALSE; however, if you use the Calendar Utility to generate the bap_period table, you should set this preference to TRUE. If TRUE, the server performs a number of consistency checks on the BAP_PERIOD table during initialization. Note: This setting should be set to TRUE the first time you start the Metrics or Configuration server after data has been loaded into or modified in the BAP_PERIOD (calendar) table. You should also set it to TRUE if you make changes to the init scripts that might possibly be related, including the population of period_trans, ago_code_lookup, epc_lookup, and so on.

CLEANUP_WAIT=1800
    Specifies, in seconds, how long the sweeper should wait between periodic housekeeping checks. The default is a half hour. Sweeper functions include such things as logging off idle users, writing summary statistics to the log, and checking whether the log should roll over to a new file.

CLIENT_PREFS=Client.prefs
    Specifies the name of the client preference file. The standard installation sets this to Client.prefs. Note: This setting is case-sensitive.

CONFIG_PORT_NUMBER=2006
    Specifies the port number for the Configuration Server. This setting is used only when CONFIG_SERVER is set to TRUE.

CONFIG_SERVER=FALSE
    Set to FALSE to cause the server to launch in Server mode. If set to TRUE, this server launches in Configuration Server mode, which allows the user to log in as Editor to define global pages for the Monitor and Investigate Sections.

CONNPOOL.IDLE_CLOSE=3600
    Specifies the connection pool idle limit in seconds (that is, how long a connection to the Application Data is kept open if it is idle).

CONNPOOL.SIZE=5
    Specifies the initial number of connections to be created in the Application Data connection pool upon server initialization.

CONNPOOL.WAIT_LOOPS=2
    Specifies how many wait loops the server should run, when the connection pool is found to be empty, before extending the size of the pool. See CONNPOOL.WAIT_TIME.

CONNPOOL.WAIT_TIME=5
    Specifies how many seconds the server should wait during each wait loop before checking whether a connection has become available in the pool.

CSS.CONFIG_FILE
    Points to the XML file that contains configuration information for external authentication. It contains details of all the authentication sources and the order in which they are accessed for authentication.
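To illustrate how the CONNPOOL settings combine (the values below are examples, not recommendations): with this fragment, the server opens ten connections at startup and, when the pool is empty, waits two loops of five seconds each, up to ten seconds in total, before extending the pool.

CONNPOOL.SIZE=10
CONNPOOL.IDLE_CLOSE=3600
CONNPOOL.WAIT_LOOPS=2
CONNPOOL.WAIT_TIME=5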


CUBE.ADS_ALTERNATE=
    If the server or tools fail to connect to a cube using the native Analytic Services driver, they attempt to connect using the thin EDS driver. This setting allows you to specify where to attempt to contact the EDS server for this alternate connection. With the default setting of <nothing>, an alternate URL is constructed that assumes the EDS server is on the same machine as the Analytic Services host; alternatively, you may specify an explicit EDS server to use for all fallback connection attempts. (This option is intended primarily for the case where the Editor might be temporarily working from a machine that does not have the native Analytic Services driver installed, so that sources may still be configured to use the native driver.)

CUBE.AUTO_CONNECT_CODE=1
    Determines which of the defined cube data sources the server and Studio Utilities connect to automatically when they initialize, in conjunction with the auto connect code specified in the Sources function of the Cube tool. The default setting of 1 causes the server and tools to attempt automatically to connect to any cube source with an auto connect code greater than or equal to 1, while a zero or negative setting means that all source definitions are ignored (not connected). A value greater than 1 connects only sources with an identical auto connect code.

CUBE.DEBUG=FALSE (Do not edit)
    For development purposes only.

CUBE.INFO=FALSE (Do not edit)
    For development use only. A value of TRUE dumps detailed information about how metrics query result cubes are constructed.

CUBE.MAX_CHILD_NODES=20
    At various times, member names are retrieved from a cube for some dimension (mostly in the tools, for display or selection purposes). This setting limits the number of child nodes retrieved beneath any particular node, to minimize problems with extremely large dimensions. When appropriate, the tools provide an option to override this setting. This setting does not affect server behavior or limit data query results.

CUBE.MEASURE_DIMS=Accounts,Measures
    These are suggested values, to help the Cube Wizard pre-select the Use As settings when creating a cube-star. If a dimension in the cube has one of the names in this list, it is pre-selected as the dimension containing measures, provided no dimension is explicitly flagged as Accounts. Suggested names must be separated by commas, without extraneous spaces.

CUBE.MISSING_STRING=#MISSING
    This string indicates the value returned from cube data queries that should be interpreted as no data.

CUBE.SCENARIO_DIMS=Scenario,Scenarios
    Similar to CUBE.MEASURE_DIMS; this identifies likely names for the scenario dimension.

CUBE.SLICE_ALIAS_TABLE=Default
    Identifies the name of the Analytic Services alias table to be used, only in the case where CUBE.USE_SLICE_ALIAS_TABLE=TRUE. Note that the same alias table name is used for all cubes in the configuration, and the aliases in those cubes must be consistently defined.

CUBE.SLICE_NAME_TRANSFORM=FALSE
    Setting this to TRUE enables member name transformations using either the CUBE_NAME_TRANSFORM or CUBE_TRANSFORM_MAP tables. Defaults to FALSE. This setting is useful only in a mixed environment with cubes and a data mart, where member name transformations were applied in the process of loading the cubes.


CUBE.TIME_DIMS=Time,Period,Year,Year Total
    Similar to CUBE.MEASURE_DIMS; this identifies likely names for the detail time dimension.

CUBE.TIME_PREFIX=HMB
    Used internally, and should not be changed.

CUBE.USE_NATIVE_SECURITY=FALSE
    Configures the Enterprise Metrics Server(s) to use Analytic Services data security, rather than the security group definitions offered by the Security tool. To use Analytic Services security for all metrics queries, in addition to setting this to TRUE, you must also set AUTH_METHOD=CSS, and must not be using any relational stars (other than perhaps the initial Days star). All system caching and preload is disabled, and each user effectively has a private connection pool for each cube accessed.

CUBE.USE_SLICE_ALIAS_TABLE=FALSE
    Specifies the display of alias names for cube members. This setting is used in conjunction with CUBE.SLICE_ALIAS_TABLE, which defaults to the value Default. You have the option to display alias names for cube members, rather than the member names, if all of the following conditions are satisfied:
    - You are not using member name transformations (relational-cube).
    - All cube sources are actually Analytic Services cubes.
    - All cube sources have an alias table of the same name (such as 'Default').
    - All alias tables contain consistent values, across cubes.
    If all these conditions are met and you wish to see aliases (when they exist), set CUBE.USE_SLICE_ALIAS_TABLE to TRUE, make sure the setting of CUBE.SLICE_ALIAS_TABLE is accurate, and be sure that CUBE.SLICE_NAME_TRANSFORM is set to FALSE. Note that this setting applies to all cube queries issued by the server, and you can use only a single alias table.

CUBE.YEARS_DIMS=Years
    Similar to CUBE.MEASURE_DIMS; this identifies likely names for the years dimension.

CUSTOM_POLICY_CLASS (Do not edit)
    Used to provide the name of a class file to implement a custom user policy. This setting is ignored unless USER_NAME_POLICY is set to CUSTOM_LOGIN. If you have special requirements, contact Hyperion Customer Support.

DATA_MGR_REGRESSION=FALSE (Do not edit)
    For development purposes only.

DATE_ORDER
    Specifies the date formats available in the Personalization Workspace. You can set the format to MDY (month before day) or DMY (day before month). The default is MDY. If the setting is MDY, time settings appear with AM/PM indicators, whereas DMY does not.

DB_CATALOG=NULL
    (See the description of SQL.FIND_COLUMNS.) The default setting is NULL and is the only value that works with the DataDirect drivers.


DB_DATABASE=$J(pbDbJdbcURL)
    Specifies the location of the Application Data. For example:
    DB_DATABASE=jdbc:hyperion:oracle://ravtar2:1521;SID=ora9i
    Note: The DB_ prefix for this group of settings indicates that they all refer to the Application Data, not the catalog.

DB_DRIVER=$J(pbDbJdbcDriver)
    Specifies the driver used for accessing the Application Data. For example:
    DB_DRIVER=hyperion.jdbc.oracle.OracleDriver

DB_MAP_NAME=dev
    Specifies which entries in the catalog table named PUB_MAP_TABLE are read by this server, selecting only rows in which the column db_version contains this value. The standard settings are prd for the Server and pub for the Configuration Server. You can also specify multiple map table version names. For example, DB_MAP_NAME=pub,lq would first read all map table entries where db_version='pub', and then apply any where the version was lq as overrides (adding or replacing). The overrides are listed in the log, and you can apply as many different versions as you like, separated by commas. Naturally, this applies to both the tools and the servers. Note: These settings are case-sensitive and must be lowercase.

DB_MAP_TABLE=pub_map_table
    Specifies the name of the catalog table that provides a map between the catalog table names that the server's Java code uses and the physical names of the tables in the database. The default is pub_map_table.

DB_PASS=$J(pbDbPwd)
    Specifies the corresponding password for DB_USER.

DB_SCHEMA=$J(pbDbOwner)
    The default setting is set by the Enterprise Metrics installer. You should update the DB_SCHEMA setting with the Application Data user ID.

DB_USER=$J(pbDbOwner)
    Specifies the user ID that the server uses to connect to the Application Data. This user ID is usually the Editor's database user ID. The DB_USER user ID requires database read-write privileges on the BAP_LOAD and BAP_DUMMY tables and read-only privileges for all other tables.

DEBUG_MOVING_AGGREGATES (Do not edit)
    Development use only.

DECIMAL_FORMAT=COMMA_PERIOD
    Controls number formatting in the placement of the comma and decimal. The default is COMMA_PERIOD. You can also set this to PERIOD_COMMA (for example, 0.000,00), SPACE_COMMA (for example, 0 000,00), or APOSTROPHE_COMMA (for example, 0'000.00).

DUMP_ALL=TRUE (Do not edit)
    Causes the server to record all of these prefs settings in the log. A value of FALSE logs only settings with non-default values.

ENRICH.LOG_LEVEL=2 (Do not edit)
    Used by the enrichment process and should not be changed.

ENRICH.LOG_TO_FILE=TRUE (Do not edit)
    Used by the enrichment process and should not be changed.

EPN_DUMP=FALSE (Do not edit)
    If TRUE, the server dumps the generated ending period numbers during initialization.

EXPORT_SETTINGS_TO_FILE=saved.server.prefs
    Specifies the file name used to save non-default settings when the Export function is used from the Server Console and Configuration Server Console. The default filename is saved.server.prefs.
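Taken together, an Application Data connection block for an Oracle source might look like the following sketch; the host name, SID, user ID, and password shown are placeholders for this example, not values from an actual installation:

DB_DRIVER=hyperion.jdbc.oracle.OracleDriver
DB_DATABASE=jdbc:hyperion:oracle://dbhost1:1521;SID=ora9i
DB_USER=metrics_editor
DB_PASS=editor_password
DB_SCHEMA=metrics_editor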


FAKE_AS_OF_DATE= (Do not edit)
    Do not change this setting without assistance from Hyperion Solutions Customer Support.

GEN_CONSTRAINTS=TRUE (Do not edit)
    Specifies whether to dynamically rebuild all the constraints that appear in the (Pinpoint Section) page menus when initializing the server. This should always be left as TRUE. Note: The Studio Utilities have a Restart Fast option, which temporarily overrides this setting for the Configuration Server only.

GEN_CONSTRAINTS_LIMIT=2000
    If the number of constraint items to be generated for a single hierarchy (dimension) would exceed the specified limit, new constraint generation for that entire hierarchy is skipped. If constraints had been generated previously, the old constraint items remain in place and a warning appears in the log.

HPS.SAVE_MEMBER_TREES=TRUE
    The default is TRUE. Communicates to the server to keep any dimension member trees, requested by Hyperion System 9 BI+ Scorecard, in memory. May be set to FALSE to release them upon return to Scorecard; has no effect unless you are using Scorecard integration.

IDLE_TIME_OUT=3600
    Specifies the number of seconds a user may be idle before the server terminates the session. The default setting is 3600 seconds (one hour). This setting applies in all authentication modes; if the Enterprise Metrics Server has not seen a request for a given client in the specified time, the client's session is invalidated, and the next request results in the user being logged out with a timeout message.

IGNORE_SCALING=FALSE
    Allows you to disable number scaling in charts, ZoomCharts, and reports. Setting this to TRUE causes the server to ignore all scaling codes, as if you had gone through all ZoomChart line and report definitions and selected NONE as the scaling code. This may be useful for data validation purposes, but not for normal operation.

LICENSE.DEPLOYMENT_ID=
    The Enterprise Metrics installer populates this setting with the deployment ID.

LICENSE.SERVER_PATH=
    The Enterprise Metrics installer populates this setting during installation (port@host).

LOADS.LOG_LEVEL=2 (Do not edit)
    Used by the load process and should not be changed.

LOADS.LOG_TO_FILE=TRUE (Do not edit)
    Used by the load process and should not be changed.

LIVE_DATA=FALSE
    A value of TRUE forces the server to disable all caching, so that every client request results in fresh queries to the Application Data to retrieve the most current data. This has severe performance implications and is intended only for very limited use.

LOGIN_REPROMPT=7200
    Applies only when using AUTH_METHOD=DATABASE or LDAP, and provides a limited form of SSO when running in standalone mode. The default setting is 2 hours (7200 seconds). Once you supply your user ID and password to the Launcher Servlet in this mode, as long as you keep your browser window open, you may launch other Enterprise Metrics applets without re-entering your user ID and password for two hours (at which time the next launch prompts you, and then you are logged in for another two hours).
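For example, to double the idle timeout to two hours and, in standalone DATABASE or LDAP authentication, re-prompt for credentials every four hours, you might set (illustrative values):

IDLE_TIME_OUT=7200
LOGIN_REPROMPT=14400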


LOG_ALL_CALLS=FALSE (Do not edit)
    Setting to TRUE causes a log entry to be written for every single request from a client to the server. This is intended only for debugging.

LOG_DATE=TRUE (Do not edit)
    By default, all logs include a MM/DD prefix before the timestamp on each log entry. This setting should not be changed; doing so reduces the effectiveness of the log viewing utility.

LOG_DIRECTORY=
    Specifies the directory in which server logs should be stored. You can use forward slashes to separate elements in the path, even on Windows. If you prefer to use backslashes, you must double them in the prefs file setting (for example, C:\\Documents and Settings\\All Users); the doubling of backslashes is necessary because Java uses '\' as an escape character. The default value is empty, which causes logs to be written to the current directory (from which the server was launched).

LOG_FILE_MAX=3000000
    Specifies the approximate maximum size of one log file in bytes, defaulting to 3 MB. When the server notices, at CLEANUP_WAIT intervals, that the current log exceeds this setting, it switches logging to a new file. This setting has no effect if LOG_SAVE_COUNT is set to 1.

LOG_LEVEL=3
    Specifies the level of logging detail for the server. The default is 3 and should not be changed unless recommended by development or support.

LOG_SAVE_COUNT=3
    Specifies the maximum number of log files maintained by the server, erasing older logs when the count is exceeded. The default is 3. Changing it to 1 (not recommended) causes the server to write indefinitely to a single log with no timestamp in the name.

LOG_USERID_LENGTH=16
    Determines the number of characters to use for the user ID field in server log entries.

MAX_MINI_EXPORT_ITEMS=100000
    Limits the number of items returned by the server when a user requests a tab-delimited export of a report.

MAX_MINI_ITEMS=5000
    Provides a default limit on the number of cells (rows x columns) to be returned for a single mini report, when the Row Limit for the mini report itself is left blank. An explicit setting for a mini report overrides this value.

MDB_DATABASE=$J(pbMdbJdbcURL)
    Specifies the location of the catalog database. See DB_DATABASE for examples. Note: The MDB_ prefix for this group of settings indicates that they all refer to the catalog.

MDB_DRIVER=$J(pbMdbJdbcDriver)
    Specifies the driver to use for accessing the catalog database. See DB_DRIVER for examples.

MDB_PASS=$J(pbMdbPwd)
    Specifies the corresponding password for MDB_USER.

MDB_USER=$J(pbMdbOwner)
    Specifies the user ID that the server should use to access the catalog database. This user ID is usually the Editor's ID, and must have read-write access to all catalog tables.

MODULE_ID=HMB.send (Do not edit)
    MODULE_ID is used as part of the interface and should not be changed.
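For instance, both of the following LOG_DIRECTORY forms point at the same hypothetical Windows folder; the first uses forward slashes, the second doubles each backslash:

LOG_DIRECTORY=C:/Hyperion/Metrics/logs
LOG_DIRECTORY=C:\\Hyperion\\Metrics\\logs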


POLL_DB=60
    Specifies how often, in seconds, the server should poll the Application Data. Polling checks the status of the database, so the server can take the right action when the database server is available, unavailable for loading, or unavailable due to network or system maintenance or failure conditions.

POLL_DB_FOR_TABLES=900
    Specifies the polling interval for interrogating the BAP_TABLE_LIST table to determine whether fact tables should be considered available for use. The default setting is 900 seconds (15 minutes). This setting supports the delayed loading of aggregate tables. At the end of each interval, the server also attempts to connect to any cube data sources that were previously unavailable.

POLL_DB_WHEN_DOWN=300
    Specifies the polling interval to use during periods when connection to the database is unsuccessful, instead of the standard polling interval (POLL_DB=60) that is used while the database is responsive. The default is 300 seconds (5 minutes). This setting applies only to checking the flags in the BAP_LOAD table in the Application Data.

PORT_NUMBER=2005
    Specifies the port number the Server uses for accepting client connections. This setting is used when CONFIG_SERVER is set to FALSE. The default is 2005. The client applets must be assigned to connect to this same port in their anchor Web pages.

PUBLISH.LOG_LEVEL=2 (Do not edit)
    Used by the publish process and should not be changed.

PUBLISH.LOG_TO_FILE=TRUE (Do not edit)
    Used by the publish process and should not be changed.

READ_ONLY=FALSE
    May be set to TRUE to place the server in read-only mode, in which case it does not update any catalog tables (as a result of user configuration changes, and so on). Note that you would typically be setting up a new server instance, for which you would likely want to change the port number; other configuration changes are also required.

READ_ONLY_DLG=TRUE
    If READ_ONLY=TRUE, the default value of READ_ONLY_DLG causes a dialog box to be displayed to users at login, reminding them that any user configuration changes will be lost at logout.

READ_PERIOD_LEVEL=Day
    Restricts the Enterprise Metrics Server to reading the BAP_PERIOD (calendar) table only down to the level of day, sampling data at finer levels, such as hour, if present. This setting should never be set to anything other than Day without guidance from Hyperion Solutions Customer Support.

REGISTER_IP=FALSE
    Normally the server registers itself by host name. Setting this to TRUE causes it to register by IP address instead (and requires that the connection parameters in the Web site be changed accordingly). There is no good reason for doing this.

REQUIRE_UTABLE=TRUE
    With the default setting of TRUE, all Enterprise Metrics users must be granted data security through the Security tool to be allowed to launch Metrics clients. Rule sets must be associated with the user or with at least one of the direct groups to which the user belongs (in addition to the user being authenticated and having valid Metrics roles). If the setting is changed to FALSE, users need only pass authentication, and are assumed to have unrestricted data access if they were not defined through the Security tool.


ROLE_CACHE_SECONDS=300
    Controls how long the server remembers a user's role and data security information before acquiring them again from CSS and the security tables (minimum 0, maximum 1800).

SERVER_WINDOW=TRUE
    Specifies whether to show the small server status window. This is typically set to TRUE on NT servers and FALSE on UNIX servers.

SQL.ALWAYS_RESET=FALSE (Do not edit)
    For development only. Should never be set to TRUE.

SQL.CANCEL_STATEMENTS=TRUE
    Specifies whether you want to allow users to cancel queries on client (Investigate Section) pages. To disable the Cancel feature, change this setting to FALSE.

SQL.CLOSE_IF_CANCELLED=FALSE
    If TRUE, the connection used when a metrics query was cancelled is closed and discarded, rather than returned to the connection pool. There currently appear to be no reasons to do this, and it is highly recommended that the setting be left as FALSE.

SQL.DEBUG_CANCEL_STMT_DELAY_SECS=0 (Do not edit)
    For development and testing purposes only.

SQL.DELAY_SECS=0 (Do not edit)
    For development only. Should never be set to a non-zero value.

SQL.EXTRA_WHERE_FOR_CUMES=FALSE
    If the bap_period table represents a non-fiscal calendar (for example, a given week starts in one month and ends in the next), this setting should be set to TRUE to force an additional constraint to be used for database queries that retrieve cumulative data, to ensure consistent results.

SQL.FIND_COLUMNS=SELECT
    Determines what technique the Enterprise Metrics Server uses to find out what columns exist in the Application Data tables. The default method is to issue a SELECT * SQL statement and interrogate the result set. The other possible setting is TABLE, through which the information is obtained by querying the database catalog; in this case, it may also be necessary to identify the database catalog and schema names using DB_CATALOG and DB_SCHEMA. However, not all database drivers support this function, and the recommended setting is SELECT.

SQL.FORCE_ALL_JOINS=FALSE
    For development only. Should never be set to TRUE.

SQL.MART_HAS_RAGGED_HIERARCHY=FALSE
    Enables special support for ragged relational hierarchies; defaults to FALSE. To enable it, you must also have SQL.MAX_DASH_REQ_THREADS set to a value greater than 0 (the default is 4), and also have a value set for SQL.SKIPPED_LEVEL_STRING.

SQL.MAX_DASH_REQ_THREADS=4
    Determines the number of parallel queries the server executes on behalf of a single client request for a page in the Monitor or Investigate Sections. If the database is running on a multi-processor machine with sufficient resources, increasing this number typically improves client response time for these requests dramatically. This is, however, a tuning issue that must be carefully coordinated with settings for the connection pool, database process limit, and various other settings. Note: The minimum value of 0 is not recommended.

SQL.PRINT_SQL=TRUE
    Specifies whether to include SQL statements in the log file. The logging level setting has no effect on this SQL logging. For typical installations, this should always be set to TRUE.


SQL.RAND_EXCEPTION=0 (Do not edit)
    For development only. Should never be set to a non-zero value.

SQL.RETRY_SQL=FALSE (Do not edit)
    For development only. Should never be set to TRUE.

SQL.SKIPPED_LEVEL_STRING=~skipped level~
    If you are using ragged relational hierarchies (SQL.MART_HAS_RAGGED_HIERARCHY=TRUE), this setting specifies the value that must be stored in the hierarchy level column(s) for the lowest <n> levels that do not exist in some particular path.

SQL.TIME_MINIS=TRUE
    The default setting logs the timing information on mini report query processing, which is recommended for typical installations.

SQL.TIME_QUERIES=TRUE
    Specifies whether to log the timing information of the query processing for Monitor or Investigate Section page requests, which is recommended for typical installations.

SQL.TIME_QUERY_DETAILS=FALSE (Do not edit)
    Specifies whether to log the detailed timing information of the query processing for pages in the Monitor and Investigate Sections.

SQL.USE_INS=TRUE (Do not edit)
    Determines whether SQL query generation for pages in the Monitor and Investigate Sections constructs the time period constraint as a single IN (l, m, n) phrase, or uses a series of greater-than or less-than comparisons. Based on experience to date with different databases, this should always be left TRUE.

SSL_DEBUG=FALSE (Do not edit)
    For development only. Should never be set to TRUE.

SSL_SOCKET=FALSE (Do not edit)
    For development only. Should never be set to TRUE.

STAR_STATS.COLLECT_DETAIL=TRUE
    Specifies whether collection of star usage statistics is enabled at the detail level. Results are stored in a catalog table and can be extremely useful for query tuning. There is, however, some overhead involved, and you may wish to change this to FALSE for normal operations. Note: Setting STAR_STATS.COLLECT_DETAIL to TRUE also forces STAR_STATS.COLLECT_SUMMARY to TRUE.

STAR_STATS.COLLECT_SUMMARY=TRUE
    Specifies whether collection of star usage statistics is enabled at the summary level. The overhead for this is minimal, and it should be left enabled.

STAR_STATS.DELETE_DAYS=14
    Whenever star usage statistics collection is started (normally, during server restart), any summary and detail statistics older than the specified number of days are deleted.

STAR_STATS.DETAIL_WRITE_EVERY=1000
    If detail star usage statistics records are being collected, they are written to the catalog database each time this number of records has been accumulated, or when the server is shut down or restarted.

STAR_STATS.SUMMARY_INTERVAL_SECS=1800
    When summary star usage statistics collection is enabled, statistics are accumulated in memory and written to the catalog database only when this specified interval (in seconds) expires, or when the server is shut down or restarted.

TGC_DUMP=FALSE (Do not edit)
    For development use only. Should never be set to TRUE.
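Once tuning is complete, a minimal fragment for normal operations might disable the higher-overhead detail collection while keeping the inexpensive summary statistics, as the STAR_STATS.COLLECT_DETAIL description suggests:

STAR_STATS.COLLECT_DETAIL=FALSE
STAR_STATS.COLLECT_SUMMARY=TRUE
STAR_STATS.DELETE_DAYS=14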


TOOLS.CREATE_TIME_COLUMN= (Do not edit)
    Provided for backward compatibility only. Allows different names to be specified for the columns in the Application Data that the standard data model refers to as create_time.

TOOLS.CUBE_TOOL=TRUE
    Enables the Cubes tool in the Enterprise Metrics Studio Utilities. Defaults to TRUE, so that the Cubes icon appears in the Studio Utilities main window. May be set to FALSE if you are not using multidimensional data sources.

TOOLS.FILTER_SYNTAX=$J(pbFilter)
    The Filter dialog box, used in both the Measures tool and the Mini Report SQL Editor, supports both CASE and DECODE SQL syntax, but only one at a time. The default is CASE, which causes the tools to generate the so-called simple form of a standard CASE statement, which is supported as a native database function by Oracle, DB2, and SQL Server. DECODE and IIF are still supported, but unless you have a specific need to use one of these forms, you should remove any explicit setting you might have in your server prefs. Note that the generated syntax is different than it used to be. The old form constructed by the Filter and Time dialogs, when set to CASE, was:
    CASE a WHEN b THEN c ELSE 0 END
    while the new form is:
    CASE WHEN a=b THEN c ELSE 0 END

TOOLS.GEN_COLOR.ACTUAL=GREEN
    Specifies the default chart color to be used for the actual metric, when generating metrics and chart templates.

TOOLS.GEN_COLOR.AGO=ORANGE
    Specifies the default chart color to be used when generating time offset metrics and chart templates.

TOOLS.GEN_COLOR.COMPARISON=RED
    Specifies the default chart color to be used when generating comparison metrics and chart templates.

TOOLS.GEN_CUME_LIMIT=3
    Specifies the default limit for generating cumulative charts, as a time grain code where 1=year, 2=quarter, 3=month, and so on. Used in the Cube, Measures, and Metrics tools to avoid generating week-to-date and day-to-date charts unless explicitly overridden (and the data supports it).

TOOLS.GEN_HEADER_VERSUS=' vs. '
    Offers flexibility in generating chart headers. If this setting is empty (nothing other than blanks, or an immediate return after the equal sign), then a two-metric chart uses the full metric labels for both metrics and does not insert a middle line (such as 'vs.'). Three-metric charts use the full metric names, adjusted for cumulative or time offsets as usual. If the value is non-blank, then a middle header is inserted with this value for two-metric charts (always black), and simplified names are used for the comparison and/or time offset metric. For example, [Sales][vs. Budget][vs. Prev Year], while an empty setting would instead produce [Sales][Budget Sales][Sales Prev Year].

TOOLS.GEN_VERSUS=' vs. '
    Provides the string to use for separating metric names when generating the name of a chart template containing more than one metric. Note that single quotes are required, assuming you wish to use leading and/or trailing blanks.

TOOLS.IMAGE.MAIN_CUBE_BACKGROUND=config_tools_cube_bgnd.jpg
    Identifies the image used for the background of the main configuration window. This image is used when TOOLS.CUBE_TOOL=TRUE.


TOOLS_IMAGE.MAIN_BACKGROUND=config_tools_bgnd.jpg (Do not edit)
    Identifies the image used for the background of the main configuration window. This setting must not be changed.

TOOLS.LOGON_PASS=$J(pbDbPwd)
    Used only in emergency situations, in which the Configuration Server is not available to authenticate login to the Studio Utilities. At installation, TOOLS.LOGON_PASS should be set to the same value as DB_PASS.

TOOLS.LOGON_USER=$J(pbDbOwner)
    Used only in emergency situations, in which the Configuration Server is not available to authenticate login to the Studio Utilities. At installation, TOOLS.LOGON_USER should be set to the same value as DB_USER.

TOOLS.LOG_DATE=TRUE
    Causes MM/DD to be prefixed on log entries for the Studio Utilities.

TOOLS.LOG_FILE_MAX=500000
    Specifies an approximate size limit, in kilobytes (KB), for the log file written by the Studio Utilities. Each time the Studio Utilities are launched (when logging to a file), the size of the most recent log file is compared to this value, and a new file is started if the size exceeds this value. However, this setting is ignored if TOOLS.LOG_SAVE_COUNT is set to 1.

TOOLS.LOG_LEVEL=6
    Determines the level of detail in the Studio log. Should not be changed without guidance from Hyperion Solutions Customer Support.

TOOLS.LOG_SAVE_COUNT=2
    Specifies the maximum number of log files maintained by the Studio, erasing older logs when the count is exceeded. The default is 2. If set to 1, the tools log indefinitely to a single file that omits the date/time usually indicated in the log filename.

TOOLS.LOG_TO_FILE=TRUE
    If changed to FALSE, log messages from the Studio Utilities are written to the Java console window instead of to a file. The recommended setting is TRUE.

TOOLS.LOG_USERID_LENGTH=12
    Determines the number of characters available for displaying a user ID in the log entries for the Studio Utilities (longer IDs are truncated).

TOOLS.MAX_DIRECT_VALUES=500
    When using the Processed Enrichment tool for editing Direct jobs, the initial display includes all distinct values found in the source column if there are fewer than this setting. If more values exist, only the Show Used view is enabled.

TOOLS.MAX_VALUES=100
    Allows you to adjust the limit that determines whether a menu of values is shown for a selected column, or whether you are required to simply type one in. For example, in the Enrichment tool, when you select a source column, you must also designate a source value; a menu of values is provided when the number of choices falls below this limit. By default, the limit is 100 distinct values. This limit applies to the Filter dialog box (used in the Measures tool and Mini Report SQL Editor), various menus showing column values in the Processed Enrichment tool, and the selection of comparison values in the Security tool.

TOOLS.MIN_AUTOGEN_ITEM_ID=1000
    The items that populate the menus on (Pinpoint Section) pages are, for the most part, automatically generated by the Enterprise Metrics Server. This setting establishes the minimum item ID that the server assigns, preserving lower ID values for use by manually defined constraints (such as a date range). Do not change this value unless directed to do so by Hyperion Solutions Customer Support.


TOOLS.PTD.DEFAULT_SUFFIX=' PTD'
    Used when generating metric and chart names for cumulative metrics. The default is PTD (period to date); however, you can change this setting to use a different suffix. If the specified suffix is used, Enterprise Metrics automatically converts (for example) PTD to YTD, QTD, and so forth for cumulative metrics. This suffix appears in metric names and chart template headers. This setting must be coordinated with the TOOLS.PTD.SUB_PATTERN preference setting for proper results. Note: Single quotes are necessary around the value in cases where leading or trailing blanks must be preserved, as in the default shown here.

TOOLS.PTD.SUB_PATTERN=-*--
    Used in conjunction with TOOLS.PTD.DEFAULT_SUFFIX; specifies which portion of that string should be substituted with a specific time grain value. Ignoring the surrounding single quotes (if any) on the TOOLS.PTD.DEFAULT_SUFFIX value, this substitution pattern should use asterisk character(s) to indicate the positions in the pattern to be replaced with a specific time grain name, and minus signs (hyphens, dashes) to fill in all other positions. If only a single asterisk is used, the substitution is done using the first character of the time grain name (Y/Q/M, and so on); if more than one consecutive asterisk is present, the full name (for example, Year) is substituted. As an example, if you were to set TOOLS.PTD.DEFAULT_SUFFIX=' so far this CHUNK' and also set TOOLS.PTD.SUB_PATTERN=------------*****, then the resulting names would be something like 'so far this Year'.

TOOLS.SET_THEME=TRUE (Do not edit)
    Determines the general appearance of the Studio Utilities.

TOOLS.SWING_LOOK_AND_FEEL= (Do not edit)
    Determines the general appearance of the Studio Utilities.

TOOLS.UPDATE_TIME_COLUMN= (Do not edit)
    Provided for backward compatibility only. Allows a different name to be specified for the columns in the Application Data that the standard data model specifies as update_time.

TOOLS.UPPER_SYNTAX=UPPER
    Specifies the syntax to be used when generating case-insensitive report constraints.

TOOLS.WARN_IF_PROD=TRUE
    The default is TRUE, which causes a warning dialog to be displayed if the Studio Utilities are connecting to the metrics catalog.

UMDB_PASS=$J(pbMdbPwd) (Do not edit)
    The password for UMDB_USER.

UMDB_USER=$J(pbMdbOwner) (Do not edit)
    Specifies the user ID that the server should use when connecting to the catalog database for UPDATE purposes. This user ID must have write privileges to a number of catalog tables, which differs from MDB_USER, which requires only read access to the catalog.
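To illustrate the defaults shown above: with the following two lines, the single asterisk in the pattern is replaced by the first character of the time grain name, so a cumulative monthly chart generated for a metric named Sales (a hypothetical name) would carry a name such as 'Sales MTD', and its yearly counterpart 'Sales YTD'.

TOOLS.PTD.DEFAULT_SUFFIX=' PTD'
TOOLS.PTD.SUB_PATTERN=-*--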


USER_NAME_POLICY=NONE (Do not edit)
    NONE is the default setting. Other valid options are HTTP_USER, REMOTE_USER, and CUSTOM_LOGIN. If you have special requirements, contact Hyperion Customer Support.

VERBOSE_INIT=FALSE
    If set to TRUE, the Enterprise Metrics Server logs an overwhelming (but sometimes useful) number of details about the catalog information processed during initialization. This setting can be useful for viewing detail on stars, measures, and chart templates. It affects the logs in the following ways:
    - For each star, it tells which facts are used.
    - For each star, it tells you the detail about the star.
    - It shows which measure is used.
    - It shows the fact snippet for the measure.
    For normal production use, this should be left as FALSE.


Configuration_Server.prefs Settings
The Configuration_Server.prefs file for the Configuration Server is analogous to Metrics_server.prefs, but the settings noted in Table 28 have different values. Configuration_Server.prefs resides in the same directory as the Configuration Server.
Table 28: Enterprise Metrics Configuration_Server.prefs Settings

CONFIG_PORT_NUMBER=2006
    Specifies the port number for the Configuration Server. This setting is used only when CONFIG_SERVER is set to TRUE.

CONFIG_SERVER=TRUE
    Causes the Enterprise Metrics Server to launch as a Configuration Server, rather than a Metrics Server. When a client connects to the Configuration Server, the Login dialog contains an extra check box, allowing a user to log in as the Editor and create or modify global pages in the Monitor and Investigate Sections. The Configuration Server also runs with caching disabled, to improve server restart performance, since the Studio Utilities are recycled every time metric and report metadata changes are to be tested. Note that the Configuration Server listens on the port specified by CONFIG_PORT_NUMBER.

DB_MAP_NAME=pub
    Specifies which db_version names in PUB_MAP_TABLE to use for this Enterprise Metrics Server. The default is pub.

MDB_DATABASE=
    Points to the Configuration Catalog, which is used for editing. This allows the Editor to change and test the metrics and pages before migrating the Configuration Catalog to production, where all users are affected by the change.


Client.prefs Settings
The settings in Client.prefs control Enterprise Metrics Workspace and Personalization Workspace. This file resides in the same directory as the Metrics and Configuration Servers.
Table 29: Enterprise Metrics Client.prefs Settings

BALLPARK=DUMPTOFILE
    The value METABOLISM causes client logging to be directed to the Java Console window on the client machine for debug mode. The standard setting, DUMPTOFILE, creates a log file on the client system during the client session. The location of the client log file is browser dependent unless specified by LOG_DIRECTORY.

BLOCK_SCROLL_AMOUNT=20
    Specifies how many rows to move when using the Fast Scrolling feature on an (Investigate Section) page. Ignored if the value exceeds MAX_DIMENSION_ROWS; if set to a value that is one less, for example, fast scrolling displays one row from the previous block and 19 new rows.

BLOWUP.MAX_AXIS_LABEL_WIDTH=80
    Limits the amount of staggering that occurs with ZoomChart detail lines. This applies only to Monitor Section charts that have the x-axis set to display by point of view (time period labels are never truncated). The labels are not limited unless they would cause staggering; for example, if you have two bars that are very wide, the available space is used. The minimum setting is 50; the maximum is 200.

CHART.GRAY_BARS=TRUE
    Specifies that any chart that draws bars for only a single metric draws the bars in gray. This is the default setting. If set to FALSE, the metric color specified in the Chart tool is always used for drawing bars, regardless of how many metrics are displayed that way.

CHART.KEYS.OTHER=(other)
    When a ZoomChart displays color keys for more than the maximum possible colors (20), the label specified identifies the remaining (black) area.

CHART.LIMIT_ENDOF=TRUE
    Specifies that the End of xxx labels in chart headers should be suppressed when they are not meaningful (that is, the chart displays months and the current month is right-most, so End of Month is unnecessary). This is the default. If set to FALSE, the End of xxx label is always displayed.

CLIP.CLIPS_CACHED=50
    The maximum number of objects (charts and mini reports) to be cached by the clipping servlet.

CLIP.DEBUG_CACHE=FALSE (Do not edit)
    For development/testing purposes only.

CLIP.URL_PREFIX=
    Contains the prefix used when generating the Clip URL. For example, if you generate clips to be used in a Hyperion System 9 BI+ Interactive Reporting dashboard environment, you would set this to:
    CLIP.URL_PREFIX=http://<HPSu web server:port>/Hyperion/browse/extRedirect?extUrl=
    This setting is relevant only if CLIP.URL_TYPE is set to PREFIX or BOTH. If AV_URL is set, you do not need to set this. If both this setting and AV_URL are supplied, the setting for CLIP.URL_PREFIX overrides. We recommend that you specify AV_URL only and let the server derive this setting.


CLIP.URL_TYPE=GENERAL
    Controls the options displayed on the Clip Generation options dialog box. This setting can have one of three possible values:
    - GENERAL: The default option. Allows the user to generate URLs for Clips in the standard format. These URLs can be used to embed Clips in a single sign-on web environment other than Hyperion Performance Suite.
    - PREFIX: Allows Clip URLs to be generated in the required format for the Interactive Reporting Studio. In this mode, you must also set the CLIP.URL_PREFIX value. Essentially, the standard URL is URL-encoded and appended to the prefix.
    - BOTH: Allows the user to generate Clip URLs in any of the above formats. In this mode, you must also set the CLIP.URL_PREFIX value.
    If AV_URL is set, you do not need to set this. If both this setting and AV_URL are supplied, the setting for CLIP.URL_TYPE overrides. We recommend that you specify AV_URL only and let the server derive this setting.

DEBUG_WIZ_GRAPHS=FALSE (Do not edit)
    The default is FALSE and should not be changed unless directed to do so by Hyperion Solutions Customer Support. When set to TRUE, additional debugging information is written to the log whenever a ZoomChart is displayed or a chart is previewed in the wizard.

DEFAULT_DIRECTORY=C:\
    The default directory on the user's computer for data that the user exports to a file from an (Investigate Section) or (Pinpoint Section) page. You can use forward slashes to separate elements in the path for Windows. If you use backslashes, you must double them in the prefs file setting, for example, C:\\Documents and Settings\\All Users; thus the default shown here is written as C:\\. The doubling of backslashes is necessary because Java uses '\' as an escape character. Though the default setting may not be appropriate for all users, you can change the destination if desired.

EXPORT.JPEG_QUALITY=95
    Controls JPEG quality for export images (not thin client images). The default of 95 reduces the size of the exported file by roughly a third of the best quality, with no serious compromise of the image. This is a global setting that applies to all full clients.

JAR.CLIENT_IMAGE=pb_client_sig.jar (Do not edit)
    The name of the JAR for the client applet, which contains default and required images.

JAR.CUSTOM_IMAGE=pb_custom.jar (Do not edit)
    The name of the JAR containing custom images.

JAR.FAT_CLIENT_PATH=../jars/ (Do not edit)
    The path to the directory containing the applet and image files, relative to the applet code base.

JAR.THIN_CLIENT_PATH=/jars/ (Do not edit)
    The path to the directory containing the applet and image files, relative to the Enterprise Metrics Web site URL.

LOG_DATE=TRUE (Do not edit)
    By default, all logs include a MM/DD prefix before the timestamp on each log entry. The default is TRUE; you can turn this off by changing this setting to FALSE.

LOG_DIRECTORY=
    The default directory of the log file; defaults to the user's temporary directory.


LOG_FILE_MAX=500000
  Approximate size limit, in bytes, for the log file written by the Personalization Workspace. Each time the client is launched (when logging to a file), the size of the most recent log file is compared to this value, and a new file is started if the size exceeds this value. This setting is ignored, however, if LOG_SAVE_COUNT is set to 1.

LOG_LEVEL=4
  The level of logging detail.

LOG_SAVE_COUNT=2
  By default, the client saves a maximum of two log files, erasing older ones as necessary. If set to 1, the client writes indefinitely to a single file named mb.client.log, ignoring the value of LOG_FILE_MAX.

LOG_USERID_LENGTH=16
  The number of characters to use for recording the user ID in log entries.

MAX_DIMENSION_ROWS=20
  The number of detail rows displayed in a (Investigate Section) page. Setting this to a smaller value somewhat reduces client memory requirements. Setting it higher is strongly discouraged.

MAX_METRIC_COLUMNS=7
  The maximum number of columns permitted in a (Investigate Section) page. Changing this is discouraged.

MAX_TITLE_LENGTH=480
  The number of characters permitted in a page title in the Monitor Section, and in page names and titles in the Investigate Section. Must not exceed the field widths in the catalog tables. May be set to a smaller value at installation to prevent the page selector menu from becoming too wide, but should never be set to a value smaller than existing titles and names.

PIXELS_FREE_EXTRA_HEIGHT=30
  The PIXELS_FREE settings are used to reserve some screen area around the client window, to avoid conflict with items such as a Windows Microsoft Office toolbar. If you prefer to have the client open with a maximized view, changing these settings will help, but may not be suitable for all users.

PIXELS_FREE_EXTRA_WIDTH=30
  See the description for PIXELS_FREE_EXTRA_HEIGHT.

PIXELS_FREE_TOP=1
  See the description for PIXELS_FREE_EXTRA_HEIGHT.

PRINT.FOOTNOTE.FONT=SansSerif-bold-9
  The font to use for the footer on printed pages.

PRINT.FOOTNOTE.TEXT=Confidential
  The text specified appears at the bottom of all printed pages.

PRINT.PREVIEW.HEIGHT=600
  Limits the height of the preview image. It is highly recommended that you not increase this setting.

PRINT.PREVIEW.WIDTH=800
  Limits the width of the preview image. It is highly recommended that you not increase this setting.

TAB_EXPORT_SUFFIX=.xls
  Specifies the filename suffix to be appended when exporting tab-delimited files from (Investigate Section) or (Pinpoint Section) pages. Note that the file is really just a text file, but the default extension of .xls simplifies the process of opening the file in a spreadsheet.


TABS.MIN_PRINT_WIDTH=450
  Specifies (in pixels) the minimum width to be used for all printing. For example, if you print a mostly empty page with just an object in the top left corner, the width of the printed image is still guaranteed to be this value. This may be useful for ensuring that enough of the background/foreground images above the tabs will appear. Note: The printed size is determined by the space required to show everything, not by the current size of the client window.

THIN.ANTI_ALIAS=TRUE
  The default is TRUE. If set to FALSE, this disables anti-aliasing when drawing lines on charts, and for the pie chart. If the servlet is generating GIF format images for charts, disabling anti-aliasing drastically reduces CPU utilization, because anti-aliasing pushes the image past the 256-color limit for a GIF and the quantization code is very expensive. This applies only to the thin client servlet.

THIN.CLIENT_DEBUG (do not edit)
  For development use only.

THIN.DISABLE_REQ_ID_CHECK=FALSE (do not edit)
  Do not set this to TRUE without explicit direction to do so from development.

THIN.GLASSPANE_VISIBLE (do not edit)
  The default is FALSE. If set to TRUE, the glass pane will be 50% opaque. This is useful in debugging any problems with the glass pane. For development use only.

THIN.JPEG_QUALITY=95
  When using JPEGs for thin client chart images, this setting controls the image quality, with permissible values of 50-100. We strongly advise against changing this setting, as 95 gives you most of the image size (and network traffic) reduction with minimal loss of quality.

THIN.LOG_FILE_MAX=500000 (do not edit)
  This should not be changed without direction from Hyperion Customer Support.

THIN.LOG_LEVEL=3
  The level of logging detail for the thin client servlet and the launcher servlets.

THIN.LOG_SAVE_COUNT=2
  By default, the thin client servlet saves a maximum of two log files, erasing older ones as necessary. If set to 1, the client writes indefinitely to a single file, ignoring the value of THIN.LOG_FILE_MAX.

THIN.LOG_TO_FILE=DUMPTOFILE
  Default setting.

THIN.PAGES_CACHED=10

THIN.PERFORMANCE_TEST_LOGIN=FALSE (do not edit)
  For development use only. If set to TRUE, userid and password are read from the HTTP request, rather than cookies. As the name implies, this is provided for performance testing only.

THIN.PERFORMANCE_TEST_NO_CACHE=FALSE (do not edit)
  If set to TRUE, only the current page is cached (effectively no caching). Note that in this case, Cancel (progress dialog) and Export are not expected to work.

THIN.SERVER_POLL_SECS=120
  Determines how often (in seconds) the thin servlet polls the Server to check for changes in state, such as restart after the nightly load, so that the servlet may determine when to re-initialize or prevent logins.


THIN.SUMMARY_LOG_SECS=1800
  Determines how often (in seconds) the thin servlet writes summary statistics (for example, number of users) to the servlet log file.

THIN.USE_JPEGS=TRUE
  By default, this setting causes the thin servlet to generate JPEG images (rather than GIFs) for charts and major portions of the Investigate Section. If set to FALSE, images are generated as GIFs, which preserves the color accuracy of the full client, but at a tremendous cost in CPU for the thin client servlet. In this case, it may be important to set THIN.ANTI_ALIAS to FALSE.
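For reference, these preferences are stored in the Client.prefs file as one key=value pair per line. A short hypothetical excerpt (the export directory shown is illustrative, not a default):

DEFAULT_DIRECTORY=C:\\Exports
EXPORT.JPEG_QUALITY=95
LOG_LEVEL=4
THIN.USE_JPEGS=TRUE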


Metadata_export.prefs
Table 30 lists the valid preference settings for the Metadata Export Utility preference file.
Table 30 Metadata Export Utility Preference File Settings

DRIVER=com.brio.jdbc.oracle.OracleDriver
  Driver syntax (differs per database type). Note: The driver and URL information for each supported database is included in the file. Only one DRIVER and URL needs to be uncommented for a run of metadata export; the values should be based on the source database.

URL=jdbc:brio:oracle://<host>:<port>;SID=<sid>
  JDBC URL reference (differs per database type). See the note above.

LOG_LEVEL=5
  Supports levels 0, 5, and 10.

LOG_DIR=C:\\Hyperion\\EnterpriseMetrics\\MetadataExport
  Location of the log file. You can use forward slashes to separate elements in the path even for Windows. If you use backslashes, you must double them in the prefs file setting, for example, C:\\Documents and Settings\\All Users; thus the default is "C:\\". The doubling of backslashes is necessary because Java uses '\' as an escape character.

LOG_FILE=metadata_export.log
  Name of the log file.

SQL_DIR=C:\\Hyperion\\EnterpriseMetrics\\MetadataExport
  Location of pre- and post-SQL files. The same forward-slash and doubled-backslash rules apply.

PRESQL_FILE=metadata_export_presql.sql
  Name of the pre-SQL file.

POSTSQL_FILE=metadata_export_postsql.sql
  Name of the post-SQL file.

OUT_DIR=C:\\Hyperion\\EnterpriseMetrics\\MetadataExport
  Location of the output file(s). The same forward-slash and doubled-backslash rules apply.

OUT_FILE=metadata_export.sql
  Output file name.

TABLES_DIR=C:\\Hyperion\\EnterpriseMetrics\\MetadataExport
  Location of the table list file. The same forward-slash and doubled-backslash rules apply.

TABLES_FILE=metadata_export_tables.txt
  Name of the extraction table list.


TABLE_PREFIX=PUB_
  Valid values are PUB_, PRD_, or blank. The Metadata Export Utility concatenates the prefix to the name of each table listed in the table files. If the setting is blank, the tool reads the table names from the table list; if it cannot find the tables in the specified database, it generates errors. You can leave the prefix blank if the PUB or PRD prefix is included in the name of each table in the export table file list.

UPDATE_USER_ID=
  Specify the value for the update_user_id column to filter the records based on that column value.

USER=
  The database user who owns the database tables. If set, the module list is ignored. Used with the user password (below) to determine the database.

PWD=
  Password for the user, used if USER is set. Used with the user ID (above) to determine the database.

SRC_DBTYPE=Oracle
  Source database type.

TGT_DBTYPE=Oracle
  Target database type.

COMMIT_INTERVAL=1000
  Number of rows after which commit text is added.

COMMIT_TEXT= COMMIT;
  Commit text to be added after each interval.
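Assembled into the preference file, a typical Oracle-to-Oracle configuration might look like this sketch (the host, port, and SID are placeholders to replace with your own):

DRIVER=com.brio.jdbc.oracle.OracleDriver
URL=jdbc:brio:oracle://dbhost:1521;SID=orcl
LOG_LEVEL=5
LOG_DIR=C:\\Hyperion\\EnterpriseMetrics\\MetadataExport
OUT_FILE=metadata_export.sql
TABLE_PREFIX=PUB_
SRC_DBTYPE=Oracle
TGT_DBTYPE=Oracle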


PART III

Administering Financial Reporting

In Administering Financial Reporting:
Chapter 20, Administrative Tasks for Financial Reporting

Chapter 20
Administrative Tasks for Financial Reporting

The following administrative tasks are specific to Financial Reporting.

In This Chapter:
Deleting User POVs, page 358
Report Server Tasks, page 359
Analytic Services Ports, page 367
Scheduler Command Line Interface, page 370
Batch Input File XML Tag Reference, page 374
RMI Encryption Implementation, page 378

Deleting User POVs


A system administrator can use the deletepov.cmd command line utility, included with the BI+ installation, to manually delete corrupt user POVs.

To delete user POVs:


1 From the command line, run the deletepov.cmd file.
2 Enter DeletePOV AdminID, AdminPassword, ReportServer, UserPattern, DBConnectionNamePattern [ShowOnly]. See Table 31 for a definition of the parameters.

Note: UserPattern and DBConnectionNamePattern formats support regular expressions.

For example:
DeletePov admin pass localhost user1.* .* ShowOnly
DeletePov admin pass localhost user .*
Table 31 Delete User POV Parameters

AdminID
  System administrator's ID used at login.
AdminPassword
  System administrator's password used at login.
ReportServer
  Name of the report server you are using.
UserPattern
  Pattern matching the user name.
DBConnectionNamePattern
  Pattern matching the data source user name, data source server name, application name, database name, and data source type name. Note: You must use double quotation marks on the command line for data source names with spaces and special characters.
ShowOnly
  Optional. Displays a list of matching user POVs without prompting to delete them. Result format: User, Db Connection Name, Server, App, Db, Type, ID.
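Because data source connection names can contain spaces, quote the pattern on the command line; for example (the connection name here is hypothetical):

DeletePov admin pass localhost .* "Sample Basic on essserver1.*" ShowOnly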


Report Server Tasks


Financial Reporting Server configuration is performed as part of the installation process. The following topics describe additional tasks that can be performed.

Specifying the Maximum Number of Calculation Iterations


You can specify the maximum number of calculation iterations for all grids and cells in the fr_repserver.properties file to resolve dependencies within references in formulas. During the calculation process of a grid, it may be necessary to evaluate a cell multiple times because of reference precedence. This usually occurs in grids with references to other grids. The maximum iteration property indicates the number of times a formula cell can be evaluated before it is marked as unresolved. Setting the maximum iteration property avoids the possibility of cells with circular references being evaluated an infinite number of times. Circular referencing occurs when one cell refers to another cell, which then refers to the original cell. If there are no circular references and calculation cells are returning #Error, you can increase the maximum iteration property value. The default value for the maximum number of calculation iterations is 10. This value is set in the fr_repserver.properties file, which is installed in the lib directory of your Financial Reporting installation. This file also contains comments to guide you in modifying the value, if necessary.

Note: Setting the maximum iteration property very high may degrade grid execution performance.
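The entry takes the standard key=value form of a Java properties file. A minimal sketch, using a hypothetical property name (the real name is given in the comments of fr_repserver.properties itself):

# Raise this if non-circular formula cells still return #Error (property name is illustrative)
MaxCalcIterations=20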

Log File Output Management


Financial Reporting uses the log4j logging mechanism to manage log file output. Logging settings are configured in the fr_repserver.properties file for each Financial Reporting server component. The following is an example of the Financial Reporting Server default settings in the fr_repserver.properties file. Settings that you might modify include the log file location (File), the maximum file size (MaxFileSize), and the number of backups (MaxBackupIndex).

# log4j.rootLogger=ERROR,dest1
# log4j.appender.dest1=org.apache.log4j.RollingFileAppender
# log4j.appender.dest1.ImmediateFlush=true
# log4j.appender.dest1.File=C:/hyperion/BIPlus/logs/FRReportSrv.log
# log4j.appender.dest1.Append=true
# log4j.appender.dest1.MaxFileSize=512KB
# log4j.appender.dest1.MaxBackupIndex=5
# log4j.appender.dest1.layout=org.apache.log4j.PatternLayout
# log4j.appender.dest1.layout.ConversionPattern=%d{MM-dd HH:mm:ss} %6p%c{1}\t%m%n


Note: See fr_global.properties for details on logging levels (FATAL, ERROR, WARN, INFO, DEBUG) and formatting options. The logging settings can be changed without restarting the servers; see Changing Logging Options below.

Periodic Log File Rolling



In addition to the default RollingFileAppender, there is a DailyRollingFileAppender option for making periodic backups of the current log file. The DailyRollingFileAppender rolls the log file over at a user-chosen frequency: monthly, weekly, half-daily, daily, hourly, or every minute. The rolling schedule is specified by the DatePattern option. An example of a schedule that rolls the log file on a daily basis:

log4j.rootLogger=ERROR,dest1
log4j.appender.dest1=org.apache.log4j.DailyRollingFileAppender
log4j.appender.dest1.ImmediateFlush=true
log4j.appender.dest1.File=d:\\Hyperion\\HR\\Logs\\Daily_HRReportSrv.log
log4j.appender.dest1.Append=true
log4j.appender.dest1.DatePattern='.'yyyy-MM-dd
log4j.appender.dest1.layout=org.apache.log4j.PatternLayout
log4j.appender.dest1.layout.ConversionPattern=%d{MM-dd HH:mm:ss} %-6p%c{1}\t%m%n

In the above example, at midnight on October 30, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-30. Logging for the current day continues in Daily_HRReportSrv.log until it rolls over the next day as Daily_HRReportSrv.log.2003-10-31.
Note: The current log file is not rolled to a daily backup file until a log entry needs to be written, so the roll may not happen right at midnight, and if a day passes without log entries, no backup log file is created for that day. This is done for efficiency. Regardless of the delay, all logging events are logged to the correct file.

Reports uses the standard Log4j package from the Apache group to handle logging duties. For more details on the DailyRollingFileAppender syntax and options, see:
http://jakarta.apache.org/log4j/docs/api/org/apache/log4j/DailyRollingFileAppender.html

Changing Logging Options


The Hyperion Reports logging settings can be changed without restarting the servers. The .properties files are monitored for changes to the logging settings every minute; the monitoring frequency is set in the fr_global.properties file. This is handy when troubleshooting a production environment, because you can set the logging level to DEBUG briefly and then change it back.
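For example, to capture verbose output temporarily, change the root logger line in fr_repserver.properties and save the file; within a minute the new level takes effect:

log4j.rootLogger=DEBUG,dest1

When you are done, restore the line to ERROR the same way.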


Application Server Logging


Each application server has its own mechanism for logging stdout and stderr for the JVM process(es) it starts:

JRun: \JRun4\logs
WebSphere: \WebSphere\AppServer\logs\<server name>\
WebLogic: <domain location>\wl-domain.log and/or \weblogic<version>\common\nodemanager\NodemanagerLogs (if using node manager to start/stop servers)
Tomcat: %CATALINA_BASE%\logs (for example, \Program Files\Hyperion Solutions\Hyperion Reports\HRWeb\logs)

While Financial Reporting logs nothing of interest to stdout or stderr, you can check this output in case of a JVM crash (blown heap or native thread dump).

Backing Up Current Log Files


You can make a backup of the current log file on a periodic basis by changing the default RollingFileAppender option to DailyRollingFileAppender. The DailyRollingFileAppender rolls the log file over at a user-chosen frequency of monthly, weekly, half-daily, daily, hourly, or every minute. The rolling schedule frequency is specified by the DatePattern option. See Table 32 for details on the DailyRollingFileAppender syntax and options.
Table 32 DatePattern Options

'.'yyyy-MM
  Rollover at the beginning of each month.
  Example: At midnight on October 31, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10. Logging for the month of November continues in Daily_HRReportSrv.log until it rolls over to Daily_HRReportSrv.log.2003-11.

'.'yyyy-ww
  Rollover on the first day of each week. The first day of the week depends on the locale.
  Example: Assuming the first day of the week is Sunday, at midnight on Saturday, October 9, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-23. Logging for the 24th week of 2003 is output to Daily_HRReportSrv.log until it is rolled over the next week.

'.'yyyy-MM-dd
  Rollover at midnight each day.
  Example: At midnight on October 30, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-30. Logging for the current day continues in Daily_HRReportSrv.log until it rolls over the next day to Daily_HRReportSrv.log.2003-10-31.

'.'yyyy-MM-dd-a
  Rollover at midnight and midday each day.
  Example: At noon on October 9, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-09-AM. Logging for the afternoon of the 9th is output to Daily_HRReportSrv.log until it is rolled over at midnight.

'.'yyyy-MM-dd-HH
  Rollover at the top of every hour.
  Example: At approximately 11:00 on October 9, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-09-10. Logging for the 11th hour of October 9 is output to Daily_HRReportSrv.log until it is rolled over at the beginning of the next hour.

'.'yyyy-MM-dd-HH-mm
  Rollover at the beginning of every minute.
  Example: At approximately 11:23 on October 9, 2003, Daily_HRReportSrv.log is copied to Daily_HRReportSrv.log.2003-10-09-10-22. Logging for the minute of 11:23 (October 9) is output to Daily_HRReportSrv.log until it is rolled over the next minute.

Assigning Financial Reporting TCP Ports for Firewall Environments or Port Conflict Resolution
By default, Financial Reporting components communicate with each other through Remote Method Invocation (RMI) on dynamically assigned Transmission Control Protocol (TCP) ports. To communicate through a firewall, you must specify the port of each Financial Reporting component separated by the firewall in its .properties file and then open the necessary ports in your firewall. These .properties files are located in the Financial Reporting lib directory. In addition, you may need to open ports for the Report Server RDBMS, for data sources that you report against, and for LDAP/NTLM external authentication.
Note: Ports should be opened in the firewall only for Financial Reporting components that must communicate across the firewall. If the Financial Reporting components are not separated by a firewall, they can use the default dynamic port setting.

You can change the port assignments for use in a firewall environment in the following Financial Reporting .properties files:

The Communication Server runs on each computer that runs any of the Financial Reporting server components listed below and requires one port. By default this is 1099, but it can be specified in the fr_global.properties file using RMIPort=.
The Report Server requires two ports, which are specified in the fr_repserver.properties file using HRRepSvrPort1= and HRRepSvrPort2=.


Workspace requires one port, which is specified in the fr_webapp.properties file using HRHtmlSvrPort=. The Scheduler Server requires one port, which is specified in the fr_scheduler.properties file using HRSchdSvrPort=. The Print Server requires one port, which is specified in the fr_printserver.properties file using HRPrintSvrPort=.

Note: When assigning static ports for each Financial Reporting component, the typical values are between 1024 and 32767.
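As an illustration only, a static assignment across the files might look like the following; the port numbers are arbitrary examples within the typical range, not product defaults:

In fr_global.properties:      RMIPort=1099
In fr_repserver.properties:   HRRepSvrPort1=8205
                              HRRepSvrPort2=8206
In fr_webapp.properties:      HRHtmlSvrPort=8207
In fr_scheduler.properties:   HRSchdSvrPort=8208
In fr_printserver.properties: HRPrintSvrPort=8209

Each assigned port would then be opened in the firewall.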

When the Financial Reporting server components are distributed among several machines, you may need to change the default RMIPort on one or more machines. For example, suppose you installed a Report Server on MachineA and left the default RMIPort configuration intact, but installed a Print Server on MachineB and had to change the default RMIPort assignment to 1100 to resolve a conflict with another application. In this case, you would need to reference the Print Server using hostname:port nomenclature in any .properties files that refer to the Print Server. In this example, you would assign printserver1=machineB:1100 in fr_repserver.properties.
Note: If you change the RMIPort for the Report Server component, users logging on through the Reports Desktop should use the same hostname:port nomenclature.

The following properties file entries require that :port be appended to the hostname if the target computer uses a different RMIPort. If all components define the same RMIPort, you need supply only the hostname in all properties files.

Table 33 Properties Files Entries

fr_repserver.properties: printserverx=, SchedulerServer=
fr_webapp.properties: HRWebReportServer=

Note: If RMIPort is changed in fr_global.properties on the computer where the Report Server is running, and Planning Details is a valid data source, then ADM_RMI_PORT should also be changed in ADM.properties. For example: C:\Hyperion\common\ADM\9.0.0\lib\ADM.properties.
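Continuing the example above, if the Report Server computer's RMIPort were changed to 1100, the corresponding (illustrative) entry in ADM.properties would be:

ADM_RMI_PORT=1100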


Accessing Server Components Through a Device that Performs NAT


The following topics discuss how to access server components through Network Address Translation (NAT).

Network Address Translation (NAT)


Network Address Translation (NAT) makes possible the use of a device, such as a router, to act as an agent between two networks whereby only one unique IP address is required to represent an entire group of computers.

Remote Method Invocation (RMI)


Communication between the client and server in Financial Reporting is achieved via Java's Remote Method Invocation (RMI) protocol. By default, an RMI server program communicates with clients using the IP address of the computer on which it is running.

Issue Using RMI Through Devices that Perform NAT


The combination of Java RMI and NAT has an inherent issue, in that Java attempts to route client requests to the IP of the computer on which the RMI server is running, rather than to the masqueraded address supplied by NAT. To work around this issue, the Java Virtual Machine (JVM) accepts two arguments that allow RMI server programs to communicate with clients using the IP of another computer or device, such as a router that performs NAT. If your Report Client computers access Report Server components through a device that masks outgoing packets through NAT, you must follow the procedures listed below.

Adding Required Java Arguments on Windows Systems


The following procedures explain how Java arguments are added to Windows. This applies to the Report Server, Print Server, and Scheduler Server.

To add Java arguments to Windows systems for the Report Server, Print Server, and Scheduler
Server:

1 Open the Windows registry, and navigate to HKEY_LOCAL_MACHINE\Software\Hyperion Solutions\Financial Reporting.


2 For each of the keys HRReportSrv, HRPrintSrv, and HRSchedSrv:

Add two new String Values, JVMOptionx and JVMOptiony, where x and y are replaced with the next available numbers in the JVMOption series. Assign the new entries these values:

JVMOptionx: -Djava.rmi.server.hostname=<IP or hostname of NAT device>
JVMOptiony: -Djava.rmi.server.useLocalHostname=false

Increase the value of JVMOptionCount by 2.

3 Start or restart the components.
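For instance, if HRReportSrv already defines JVMOption1 and JVMOption2, the additions would be as follows, where 192.0.2.10 is a placeholder for your NAT device's address:

JVMOption3 = -Djava.rmi.server.hostname=192.0.2.10
JVMOption4 = -Djava.rmi.server.useLocalHostname=false
JVMOptionCount = 4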

WebLogic
For the WebLogic server:


1 Take an action:

If you chose to run Workspace as a Windows service, open the Windows registry, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\beasvc hr_domain_HReports\Parameters, and edit the string value CmdLine by prepending -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false to the existing value.

If you did not choose to run Workspace as a Windows service, open ...\HyperionReports\HRWeb\hr_domain\startWeblogic.cmd in a text editor, and add -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false to the JAVA_OPTIONS variable declaration.

2 Start or restart the components.
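In startWeblogic.cmd, the edited declaration might look like the following sketch (the address is a placeholder, and any existing options are kept):

set JAVA_OPTIONS=%JAVA_OPTIONS% -Djava.rmi.server.hostname=192.0.2.10 -Djava.rmi.server.useLocalHostname=false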

WebSphere
For the WebSphere server:


1 Start the WebSphere Administrator's Console.
2 Navigate to Servers > Application Servers and select your server.
3 In the Additional Properties section, select Process Definition.
4 In the Process Definition's Additional Properties section, select Java Virtual Machine.
5 Append the following to the Generic JVM Argument property:
-Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false

6 Start/restart the components.


Tomcat
For the Tomcat server:


1 Open the Windows registry, and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HRWeb\Parameters.
2 Add two new String Values called JVM Option Number x and JVM Option Number y, where x and y are replaced with the next available numbers in the JVM Option Number series.
3 Assign the new entries these values:
JVM Option Number x: -Djava.rmi.server.hostname=<IP or hostname of NAT device>
JVM Option Number y: -Djava.rmi.server.useLocalHostname=false
4 Increase the value of JVM Option Count by 2.
5 Start/restart the components.

Adding Required Java Arguments on UNIX Systems


The following procedures explain how Java arguments are added to UNIX systems. This applies to the Report Server, Print Server, Scheduler Server, and Tomcat.

To add Java arguments to UNIX systems for the Report Server, Print Server, Scheduler Server,
and Tomcat:

1 Open .../BIPlus/bin/freporting for editing.
2 In the start block for each component installed, add the two new required entries, -Djava.rmi.server.hostname=<IP or hostname of NAT device> and -Djava.rmi.server.useLocalHostname=false, after the appropriate -c "${JAVA_HOME}/bin/java" line. For example:

-c "${JAVA_HOME}/bin/java -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false

3 Start/restart the components.


WebLogic
For the WebLogic Web server:
1 Open .../HyperionReports/HRWeb/hr_domain/startWeblogic.sh in a text editor, and add -Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false to the JAVA_OPTIONS variable declaration.

2 Start/restart the components.

WebSphere
For the WebSphere Web server:
1 Start the WebSphere Administrator's Console.
2 Navigate to Servers > Application Servers and select your server.
3 In the Additional Properties section, select Process Definition.
4 In the Process Definition's Additional Properties section, select Java Virtual Machine.
5 Append the following to the Generic JVM Argument property:
-Djava.rmi.server.hostname=<IP or hostname of NAT device> -Djava.rmi.server.useLocalHostname=false

6 Start/restart the components.

Analytic Services Ports


The following topics contain procedures that you might need to implement; alternatively, you may already have them implemented, or your system may not require them.

Differences Between Analytic Services Ports and Connections


This section describes the differences between Analytic Services ports and connections when running this release of Financial Reporting. Important considerations:

You are licensed by Analytic Services ports. A 100 concurrent user license for Analytic Services means 100 Analytic Services ports are licensed. An unlimited number of connections is allowed on each of those ports. The number of connections you open to Analytic Services is not relevant for licensing purposes. What matters is the number of Analytic Services ports.


When a user runs a report in Financial Reporting, connections are opened to Analytic Services. For performance optimization purposes, these connections are cached. When the connections become idle, a process is run periodically to close them. The system administrator can modify the length of time before a connection is considered inactive (MinimumConnectionInactiveTime, default of 5 minutes) and the length of time before inactive connections are closed (CleanUpThreadDelay, default of 5 minutes) in the fr_global.properties file. The number of ports used by Financial Reporting varies, depending on the configuration, as follows:

If a Report Client such as the Windows UI runs a report, two Analytic Services connections are made: one for the Report Client and one for the Report Server. If the Report Client and Report Server are on the same computer, two Analytic Services connections using one Analytic Services port are made.

The Report Client keeps the Analytic Services connection until the window with the report displayed is closed. The Report Server keeps this Analytic Services connection until the process is run to close idling open connections. When both connections are closed, the port is released.

If the Report Client and Report Server are on two different machines, two Analytic Services connections using two Analytic Services ports are made.

The Report Client keeps the Analytic Services connection until the window with the report displayed is closed. The Report Server keeps this Analytic Services connection until the process is run to close idle open connections. When the Report Client connection is closed, the corresponding port for that connection is released. When the Financial Reporting connection is closed, the corresponding port for that connection is released.

When a user runs a report through the Browser UI, two Analytic Services connections are made: one for the Web server and one for the Report Server.

If the Web server and Report Server are on the same computer, two Analytic Services connections using one Analytic Services port are made.

The Web server keeps the Analytic Services connection until the process is run to close idle open connections. The Report Server keeps this Analytic Services connection until the process is run to close idle open connections. When both connections are closed, the port is released.

If the Web server and Report Server are on two different computers, two Analytic Services connections using two Analytic Services ports are made.


The Web server keeps the Analytic Services connection until the process is run to close idle open connections. The Report Server keeps this Analytic Services connection until the process is run to close idle open connections. When the Web server connection is closed, the corresponding port for that connection is released. When the Report Server connection is closed, the corresponding port for that connection is released.

The recommended configuration is as follows:


The Report Server and Web server are installed on the same computer, and the Report Client is installed on several other computers. In this case, two Analytic Services ports are taken only for users working with the Report Client. All users connecting to view reports in Workspace take a single Analytic Services port for each Analytic Services user, because the Web server and Report Server are on the same computer.

Checking the Current Analytic Services Connections


To check for current connections, type USERS in the Analytic Services server command window. The list displays the current connections and the ports currently in use.

Changing Settings in Analytic Services Configuration File


The following settings in the Analytic Services configuration file reduce Analytic Services connection timeouts.

Add or increase the following Analytic Services client settings in the essbase.cfg file:

NETDELAY 1000
NETRETRYCOUNT 1000

Calculating the Formula for the Maximum Number of Analytic Services Ports
The basic formulas for calculating the maximum number of Analytic Services ports you need for Financial Reporting are as follows:

If Workspace and the Report Server are on the same computer:
Number of Analytic Services ports = 2 x (number of Report Clients) + (number of Workspace users)

If Workspace and the Report Server are on different computers:
Number of Analytic Services ports = 2 x (number of Report Clients) + 2 x (number of Workspace users)
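For example, with 10 Report Clients and 40 Workspace users, the same-computer configuration needs 2 x 10 + 40 = 60 ports, while the split configuration needs 2 x 10 + 2 x 40 = 100 ports.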


Note: This formula is for Financial Reporting and does not consider other ways users might be connecting to Analytic Services; for example, the Application Manager, Web Analysis, or the Excel Add-in. You must consider those potential port-takers separately. If they are used on the same computer as one of the Financial Reporting components, no extra ports are taken as long as the same Analytic Services user ID is being used.

Data source considerations are as follows:

If you run a report with two data sources, your number of connections doubles, but the number of ports remains the same as described previously. If you run a report with three data sources, your number of connections triples, but the number of ports remains the same as described previously. If, after closing the report with two data sources, you run a report with a third data source, your number of connections increases again, but the number of ports does not change.

A user's connection is open for at least five minutes and remains open for up to 10 minutes, assuming no new activity occurs during that time. If you have a limited number of Analytic Services ports and many users are accessing Financial Reporting, you may want to lower both values (MinimumConnectionInactiveTime and CleanUpThreadDelay) to 30 seconds (30000).
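In fr_global.properties, that adjustment would look like the following sketch (assuming, as the parenthetical 30000 indicates, that the values are expressed in milliseconds):

MinimumConnectionInactiveTime=30000
CleanUpThreadDelay=30000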

Scheduler Command Line Interface


The Scheduler Command Line Interface enables you to launch a Financial Reporting batch input file from a command line. You can automate the launching of batch input files using an external scheduler, or launch them after an external event occurs, such as the completion of a consolidation.

Creating Batch Input Files


The batch input file specifies the options for the scheduled batch, such as the name of the batch to be scheduled, output destinations, e-mail notification information, and POV settings.

To create a batch input file:


1 Right-click a previously scheduled batch in the Batch Scheduler dialog box and choose Export for Command Line Scheduling.
2 Open mybatch.xml, where mybatch is the name of your batch input file.
3 Modify the file as needed by editing the values in the tags; see Modifying Attributes on page 372 for the commonly used attributes.


Launching Batches from a Command Line


You can use the ScheduleBatch.cmd command file provided in the BIPlus\bin directory to launch the batch specified in the batch input file against a Financial Reporting scheduler server.

To launch a batch from a command line prompt in the BIPlus\bin directory, enter the
command by specifying the fully qualified name of the batch input file and the computer name or IP address of the Scheduler Server on which to schedule the batch, for example:
ScheduleBatch c:\DailyReports\mybatch.xml MySchedulerServer

where mybatch.xml is the name of your batch input file and MySchedulerServer is the name or IP address of your Scheduler Server, which is typically located on the same computer as the Report Server. This launches the batch to run immediately against the specified scheduler server.

Scheduling Batches Using an External Scheduler


You can launch a batch on a periodic basis from an external scheduler. To do this, you set up your own command files and call them from the external scheduler. For example, you might have a NightlyBatch.cmd file containing these lines:
ScheduleBatch MgtSummaryBatch.xml hr_Server
ScheduleBatch MgtDetailBatch.xml hr_Server
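To drive such a command file from the built-in Windows task scheduler, a hypothetical registration might be:

schtasks /create /tn NightlyBatch /tr C:\Hyperion\BIPlus\bin\NightlyBatch.cmd /sc daily /st 02:00

The task name, path, and start time are illustrative, and the exact /st time format varies by Windows version; any external scheduler that can invoke a .cmd file works the same way.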

Encoding Passwords
Your passwords are encoded when you export the batch input file. To specify another user ID or data source ID in the batch input file, you can use the following utility to produce an encoded password for use in the batch input file:

Windows: EncodePassword.cmd
UNIX: EncodePassword

Note: This procedure is optional.

To encode passwords:
1 Open the batch input file to modify the data source and user ID passwords.
2 From the command line, run the EncodePassword.cmd file.
3 Type EncodePassword Password, where Password is the new password you want to use.
4 Place the encoded password produced in the batch input file.
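For example (the password shown is a placeholder):

EncodePassword MyNewPassw0rd

The utility prints an encoded string; paste that string into the DS_PASSWD or HR_PASSWD attribute of the batch input file.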


Modifying Attributes
In a typical batch input file, there are very few attributes to modify. Most attributes are already set properly based on the originally scheduled batch. The following table lists attributes that you are most likely to modify for the associated XML tags.
Table 34 Commonly Used Attributes

General
  AUTHOR
    Displays in the batch scheduler's User ID column and is a useful place to show a comment or the name of the XML file that generated the batch.

E-mail
  ATTACH_RESULTS
    Enter a Yes or No value, depending on whether you want to attach the generated PDF or HTML files to the e-mail.
  FAILURE_RECIPIENTS
    E-mail recipients to notify if the scheduled batch fails.
  FAILURE_SUBJECT
    Subject text used if the scheduled batch fails.
  RECIPIENTS
    A comma-separated list of recipients' e-mail addresses.
  SENDER
    The sender's e-mail address.
  SUBJECT
    The subject of the e-mail.

Credentials
  DS_PASSWD
    The encrypted data source password, from an existing batch or generated using the command line utility.
  DS_USER_NAME
    The data source user whose credentials are used for running the reports/books in the batch.
  HR_PASSWD
    The encrypted Financial Reporting user password, from an existing batch or generated using the command line utility.
  HR_USER_NAME
    The Financial Reporting user whose credentials are used for running the reports/books in the batch.

HTML and PDF output
  HTML VALUE
    Enter a Yes or No value, depending on whether you want to generate HTML output for the batch.
  PDF VALUE
    Enter a Yes or No value, depending on whether you want to generate PDF output for the batch.
  HTML EXPORT_HTML_FOLDER_LABEL
    If exporting as HTML (VALUE=Yes), the path and folder of the external directory.
  PDF EXPORT_HTML_FOLDER_LABEL
    If exporting as PDF (VALUE=Yes), the path and folder of the external directory.

Snapshot Output
  SAVE_AS_SNAPSHOT VALUE
    Enter a Yes or No value, depending on whether you want to save the snapshot output in the repository.
  SAVE_NAME
    The folder name where the snapshots are to be stored. This must be specified in ReportStore:\\ format. If SAVE_NAME = , the snapshot output is saved to the same folder as the original object.
  USER_NAMES
    Comma-separated Financial Reporting user names who are granted access to the snapshot output.
  GROUP_NAMES
    Comma-separated Financial Reporting group names that are granted access to the snapshot output. A special system-defined group, called Everyone, includes all Financial Reporting users and can be used to ensure that all users have access to a snapshot output.

Printed Output
  PRINT NAME
    The printer name, if the PRINT VALUE attribute is set to Yes. Note: You must make sure that this printer is available to the scheduler server.
  PRINT VALUE
    Enter a Yes or No value, depending on whether you want to generate printed output for the batch.

Note: In the USER_POV section of the XML file, HIDDEN="0" indicates a dimension that is on the POV and is therefore a candidate for a value to be set in the XML file. The value to be changed is _ in this example.


Batch Input File XML Tag Reference


The following topics provide a complete listing of tags and values for the associated attributes. The structure of an XML file is similar to a tree level or directory structure. There is basically one parent-level node tag, and the tags that follow are child node tags.

BATCH_JOB_OBJECT - Node Tag


AUTHOR: Displays in the batch scheduler's User ID column and is a useful place to show a comment or the name of the XML file that generated the batch.
BATCH_JOB_ID: A random number assigned to the batch.
BATCH_NAME: The name of the batch, for example, "ReportStore:\\SchdApi\Batches\TestBatch3".
REPORT_SERVER_NAME: The name of the report server where the batch is located.
UNSAVED_BATCH: The value of this attribute must be set to "No".

RUN_OPTIONS - Child Node Tag


FREQUENCY: The value of this attribute should be 1.
RUN_IMMEDIATELY: The value of this attribute should be Yes.

NOTIFICATION, EMAIL - Child Node Tag


ATTACH_RESULTS: Enter a Yes or No value, depending on whether you want to attach the generated PDF or HTML files to the e-mail.
RECIPIENTS: A comma-separated list of recipients' e-mail addresses.
SENDER: The sender's e-mail address.
SUBJECT: The subject of the e-mail.

JOB_STATUS - Child Node Tag


This tag must be copied as shown in the following example:

JOB_STATUS CURRENT_STATUS="Pending"


JOB_OBJECT - Child Node Tag


OBJECT_ID: Leave this attribute blank.

DATA_SOURCE_USER_CREDENTIALS - Child Node Tag


DS_PASSWD: The encrypted data source password, from an existing batch or generated using the command line utility.
DS_USER_NAME: The data source user whose credentials are used for running the reports/books in the batch.

HR_USER_CREDENTIALS - Child Node Tag


HR_PASSWD: The encrypted Financial Reporting user password, from an existing batch or generated using the command line utility.
HR_USER_NAME: The Financial Reporting user whose credentials are used for running the reports/books in the batch.
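Putting the node tags described so far together, an exported batch input file has roughly the following shape. This is an abbreviated, hypothetical sketch: the exact nesting and the additional tags and attributes come from the exported file itself, and the server name, user names, and e-mail addresses shown are placeholders.

<BATCH_JOB_OBJECT AUTHOR="nightly.xml" BATCH_JOB_ID="12345"
    BATCH_NAME="ReportStore:\\SchdApi\Batches\TestBatch3"
    REPORT_SERVER_NAME="MyReportServer" UNSAVED_BATCH="No">
  <RUN_OPTIONS FREQUENCY="1" RUN_IMMEDIATELY="Yes"/>
  <NOTIFICATION>
    <EMAIL ATTACH_RESULTS="No" RECIPIENTS="admin@example.com"
        SENDER="scheduler@example.com" SUBJECT="Nightly batch"/>
  </NOTIFICATION>
  <JOB_STATUS CURRENT_STATUS="Pending"/>
  <JOB_OBJECT OBJECT_ID=""/>
  <DATA_SOURCE_USER_CREDENTIALS DS_USER_NAME="dsuser" DS_PASSWD="(encoded)"/>
  <HR_USER_CREDENTIALS HR_USER_NAME="hruser" HR_PASSWD="(encoded)"/>
</BATCH_JOB_OBJECT>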

OUTPUT_OPTIONS - Child Node Tag


This XML tag enables you to select the format of the batch output.

CHILD NODE - HTML


HTML VALUE: Enter a Yes or No value, depending on whether you want to generate HTML output for the batch.

CHILD NODE - PDF


PDF VALUE: Enter a Yes or No value, depending on whether you want to generate PDF output for the batch.


CHILD NODE - SAVE_AS_SNAPSHOT


Table 35 Child Node - Save as Snapshot

SAVE_AS_SNAPSHOT VALUE: Enter a Yes or No value, depending on whether you want to save the snapshot output in the repository.
SAVE_NAME: The folder name where the snapshots are to be stored. This must be specified in ReportStore:\\ format. If SAVE_NAME = , the snapshot output is saved to the same folder as the original object.
USER_NAMES: Comma-separated Financial Reporting user names who are granted access to the snapshot output.
GROUP_NAMES: Comma-separated Financial Reporting group names that are granted access to the snapshot output. A special system-defined group, called Everyone, includes all Financial Reporting users and can be used to ensure that all users have access to a snapshot output.
SUBJECT_TOKENS: This attribute can be left blank or removed from the text file. Note: This attribute is ignored if USER_NAMES or GROUP_NAMES is used.

CHILD NODE - PRINT


PRINT NAME: The printer name, if the PRINT VALUE attribute is set to Yes. Note: You must make sure that this printer is available to the scheduler server.
PRINT VALUE: Enter a Yes or No value, depending on whether you want to generate printed output for the batch.

USER_POV - Child Node


This node is optional. If the User POV is not specified here, the USER POV of the data source user specified in the text file is used instead.

Caution! This should be modified only by power users. Specifying a partial USER POV does not work.

Note: In the USER_POV section of the XML file, HIDDEN="0" indicates a dimension that is on the POV and is therefore a candidate for a value to be set in the XML file. The value to be changed is _ in this example.


Setting XBRL Schema Registration


This task is used to update and maintain XBRL schema registration. The update enables you to define XBRL line items for an XBRL instance report. For a description of updating XBRL schema registration, see the Hyperion System 9 BI+ Financial Reporting Studio User's Guide.

CHILD NODE - KEY


NAME: <datasourceServer>:<AppName>:<DatabaseName>:<DatasourceType>
REF_COUNT: Should be set to 1.

CHILD NODE - POV


ALIASTABLE: The alias table name.
APPNAME: The name of the data source application.
DBNAME: The name of the database.
DIMCOUNT: The total number of dimensions in the database.
DRIVERTYPE: The type of the data source driver.
SERVERNAME: The name of the data source server.
ORGBYPERIOD_ENABLED: 0 or 1, depending on whether Org By Period is enabled. This is used only if HFM is the data source.
ORGBYPERIOD_PERIOD: The period member name. This attribute is dependent on whether Org By Period is enabled.
ORGBYPERIOD_SCENARIO: The scenario member name. This attribute is dependent on whether Org By Period is enabled.
ORGBYPERIOD_YEAR: The year member name. This attribute is dependent on whether Org By Period is enabled.
SHOW_ALIAS: Leave this value as 0.
SHOW_DIMNAME: Leave this value as 1.
SHOW_MEMBERDESC: Leave this value as 0.
SHOW_MEMBERNAME: Leave this value as 1.
VERSION: The version number for Financial Reporting.


DIMENSION - Child Node


DISABLED: This should be set to 0.
HIDDEN: This should be set to 1.
NAME: The name of the dimension.

METADATAEXPRESSION - Child Node


DATA: The member selection query. It must be in the following format:

<DIMENSION DISABLED="0" HIDDEN="0" NAME="Market">
  <METADATAEXPRESSION VALUE="<?xml version="1.0" encoding="UTF-8"?>
    <COMPOSITEOPERATION TYPE="MemberQuery">
      <OPERATION TYPE="Select">
        <MULTIVALUE><STRING>Name</STRING></MULTIVALUE>
        <STRING>Market</STRING>
      </OPERATION>
      <COMPOSITEOPERATION TYPE="Filter">
        <OPERATION TYPE="Member">
          <STRING>New York</STRING>
          <STRING>Market</STRING>
        </OPERATION>
      </COMPOSITEOPERATION>
    </COMPOSITEOPERATION>" />
</DIMENSION>

RMI Encryption Implementation


You can encrypt passwords entered in the Win32 Client by uncommenting the following line in the fr_repserver.properties file:
RMI_Encryptor=com.hyperion.reporting.security.impl.HsRMICryptor

After enabling or disabling encryption, all Financial Reporting services must be restarted. The following text appears in the fr_repserver.properties file:

# Specify the class name of encryption algorithm to encrypt values
# passed in RMI calls.
#
# By default, no encryption is applied.
#
# To use the encryption provided with the product, set the value to
# com.hyperion.reporting.security.impl.HsRMICryptor
#
# To use any other custom encryption algorithm, extend your
# implementation from the
# com.hyperion.reporting.security.IHsRMICryptor interface.
# This interface defines two methods:
# public String encrypt(String value) throws HyperionReportException;
# public String decrypt(String value) throws HyperionReportException;
#
#RMI_Encryptor=com.hyperion.reporting.security.impl.HsRMICryptor
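A custom implementation is therefore a class exposing those two methods. The following is a minimal sketch, assuming the IHsRMICryptor interface named in the comments above; the package of HyperionReportException and the Base64 transformation (an encoding, not real encryption) are assumptions for illustration only:

package com.example.reporting.security;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import com.hyperion.reporting.security.IHsRMICryptor;       // interface named in fr_repserver.properties
import com.hyperion.reporting.util.HyperionReportException; // exception package is an assumption

public class Base64RMICryptor implements IHsRMICryptor {

    // Transform the outgoing RMI value; a real implementation would apply a cipher here.
    public String encrypt(String value) throws HyperionReportException {
        if (value == null) {
            return null;
        }
        return Base64.getEncoder().encodeToString(value.getBytes(StandardCharsets.UTF_8));
    }

    // Reverse whatever transformation encrypt() applied.
    public String decrypt(String value) throws HyperionReportException {
        if (value == null) {
            return null;
        }
        return new String(Base64.getDecoder().decode(value), StandardCharsets.UTF_8);
    }
}

You would then set RMI_Encryptor to this class name and restart the Financial Reporting services.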


PART IV

Administering Interactive Reporting Studio

In Administering Interactive Reporting Studio:
Chapter 21, Understanding Connectivity in Interactive Reporting Studio
Chapter 22, Using Metatopics and Metadata in Interactive Reporting Studio
Chapter 23, Data Modeling in Interactive Reporting Studio
Chapter 24, Managing the Interactive Reporting Studio Document Repository
Chapter 25, Auditing with Interactive Reporting Studio
Chapter 26, IBM Information Catalog and Interactive Reporting Studio
Chapter 27, Row-Level Security in Interactive Reporting Documents
Chapter 28, Troubleshooting Interactive Reporting Studio Connectivity
Chapter 29, Interactive Reporting Studio INI Files


Chapter 21
Understanding Connectivity in Interactive Reporting Studio

This chapter describes how to connect to a relational database and a multidimensional database using connection files, including how to set up connection files and connection preferences, and how to manage connections.

In This Chapter:
About Connection Files, page 382
Working with Interactive Reporting Database Connections, page 383
Connecting to Databases, page 392
Using the Connections Manager, page 395
Working with an Interactive Reporting Document and Connecting to a Database, page 397
Connecting to Web Clients, page 399
Connecting to Workspace, page 400

About Connection Files


Connectivity is generally one of the most difficult aspects of querying for end users to master. Client/server database applications rely on a complicated web of listeners, network addresses, and preferences that are difficult for anyone but a database administrator to troubleshoot. Fortunately, Interactive Reporting Studio users can sidestep these potential difficulties by using an Interactive Reporting database connection.

Interactive Reporting Studio uses Interactive Reporting database connections (.oce files) to define the terms, conditions, and methods for connecting to data sources. With a database administrator's assistance, Interactive Reporting database connections enable a stable connection to be set up once and then distributed and reused. End users need only supply a database user name and password each time they log on to query a database.

Interactive Reporting database connections retain all the information necessary to log on to a specific configuration of database and connection API software. In addition, they retain DBMS-specific connection preferences as well as specifications for automatic access to metadata (see Using Metatopics and Metadata in Interactive Reporting Studio on page 401). Interactive Reporting database connections store complete sets of connection parameters about:

Connection software
Database software
Database server hosts
Database user names (optional)

Note: For security reasons, user passwords are not saved with Interactive Reporting database connections.

Interactive Reporting database connections have significant advantages in network environments with many database users. One connection can be created for each database connection in the environment and shared with each end user. Interactive Reporting database connections simplify the connection process for company personnel by transparently handling host and configuration information. Each user can substitute his or her own database user name when using an Interactive Reporting database connection, which enforces security measures and privileges that are centralized at the database server. Because passwords are not saved with Interactive Reporting database connections, there is no danger that distribution will provide unauthorized access to any user who receives the wrong Interactive Reporting database connection or acquires it from other sources.

By default, no explicit access to an Interactive Reporting database connection is required to process Interactive Reporting documents or job outputs using the Workspace or Interactive Reporting Web Client. That is, a user is not required to have specific access privileges to process an Interactive Reporting document. However, a control setting of an Interactive Reporting document or job access can be defined to require explicit access. For more information, see the Hyperion System 9 BI+ Workspace Administrator's Guide and the Hyperion System 9 BI+ Workspace User's Guide.
Note: It is to your advantage to create and distribute Interactive Reporting database connections to facilitate the logon process when storing Interactive Reporting Studio data models.

Working with Interactive Reporting Database Connections


Interactive Reporting Studio provide a Database Connection Wizard to help you create new Interactive Reporting database connections. Before you create a new Interactive Reporting database connection, make sure to collect and verify the following connection information:

Connection API software and version (for example, Essbase, SQL*Net for Windows NT, and so on)
Database software and version (for example, MetaCube 4, Oracle 8, and so on)
IP address, database alias, or ODBC data source name for your database server
Database user name

Creating Interactive Reporting Database Connections


The Database Connection Wizard steps you through the Interactive Reporting database connection creation process and captures the connection parameters in a file that enables you to connect to a data source. Interactive Reporting Studio saves the connection file in the default Interactive Reporting database connection directory. With an advanced user's assistance, a connection file can be set up once, and then distributed and reused. You only have to supply a database user name and password each time you log on to query a database.

To create an Interactive Reporting database connection:


1 Select Tools > Connection > Create.
The Database Connection Wizard is displayed.

2 Select the connection software that you want to use to connect to the database server from the pull-down list in the What connection software do you want to use? field.

3 Select the database server that you want to use in the What type of database do you want to connect to?
field.

4 To configure metadata settings, select Show Meta Connection Wizard.
5 To configure advanced connection preferences, select Show advanced options.
6 Click Next.
The second dialog box of the wizard is displayed.


7 Depending on the database, enter your user name in the User Name field, your password in the Password field, and the IP address, ODBC data source, or server alias name in the Host field, and then click Next.

If you selected to work with metadata settings, the Meta Data Connection Wizard launches. See Accessing the Open Metadata Interpreter on page 406 for more information.

8 The wizard prompts you to save the connection file.
9 To save the connection file so that it can be reused or modified, click Yes.
The Save dialog box is displayed. Interactive Reporting Studio saves the connection file in the default Interactive Reporting database connection directory.

10 To save the connection file in a different directory, navigate to the desired directory and click Save.
Table 36  Database Connection Configuration Wizard options

What connection software do you want to use?
    Select the connection software with which you want to connect to the database from the pull-down list. Depending on the connection software you select, additional fields may be displayed in this dialog box. These fields enable you to customize the connection file, show metadata settings, and select ODBC logon dialogs.

What type of database do you want to connect to?
    Select the type of database to which you want to connect from the pull-down list.

Show Metadata Connection Wizard?
    To view and edit metadata settings, select this field. The Metadata Definitions dialog box is configured with specific SQL statements to read metadata on multiple databases.

Show advanced options?
    To select advanced preferences for the connection file, select this field. Connection preferences enable you to select what instructions and protocols the database connection should observe. The preferences are saved with the connection file and applied each time you use the connection. For example, you can use connection preferences to filter extraneous tables from the Table Catalog or specify how the connection software should manage SQL statements. Connection preferences vary depending on the connection software and database.

Prompt for database name
    To select the specific database name on the server, select this field.

Use ODBC Logon Dialogs?
    If you select ODBC connection software and want to use the ODBC logon dialog boxes instead of the Interactive Reporting Studio dialog boxes, select this field. To use the Interactive Reporting Studio connection dialog boxes, leave this field unchecked.

User Name
    Enter the name that you want to use to sign on to the database.

Password
    Enter the password that you want to use to sign on to the database.

Host
    Enter the IP address, database alias, or ODBC data source name.


Setting Connection Preferences


Connection preferences enable you to specify the way certain aspects of the database connection are managed. The preferences are saved with an Interactive Reporting database connection and are applied each time you use the connection. For example, you can use connection preferences to filter extraneous tables from the Table Catalog or change the way the connection software handles SQL transaction statements. Connection preferences differ depending on the Interactive Reporting Studio edition, connection API, and DBMS. Connection preferences are accessed by selecting the Show Advanced Options check box in the Database Connection Wizard. Table 37 lists all of the possible options that are available in the wizard; the options available to you depend on the connection configuration.
Table 37  Database Connection Configuration Wizard options

ALLOW SQL-92 Advanced Set Operations
    Enables support for the Intersection and Difference operators in the Append Query option.

Apply Filters to restrict the tables that are displayed in the table catalog
    Enables specification of table filter conditions for limiting or customizing the list of tables in the table catalog.

Exclude Hyperion Repository Tables
    Specifies exclusion of all repository tables from the table catalog. Filter by and metadata definitions override this preference.

Allow Non-Joined Queries
    Permits processing when topics are not joined in the Query Contents frame.

Use SQL to get Table Catalog
    Specifies use of SQL to retrieve tables, instead of using SQL Server sp_tables and sp_columns stored procedures. This option enables table filtering, but may be slower than stored procedures. (Sybase and MS SQL Server)

Choose the Data Retrieval Method
    Specifies how the server returns data. In most cases, Retrieve data as Binary is the most appropriate, and fastest, method. Select Retrieve data as Strings if the connection API does not support native datatype retrieval, or if queries return incorrect or unreadable data.

Time Limit ___ Minutes
    Establishes an automatic disconnect from the database after the specified period of inactivity.

Auto Commit After Select
    Sends a commit statement to the database server with each Interactive Reporting Studio SQL statement to unlock tables after they have been used. Use this feature if tables are locked after use or users experience long waits for tables.

Save Interactive Reporting database connection Without User Name
    Enables general distribution of an Interactive Reporting database connection by saving it generically, without a user name. Instead, any user can log on by typing his or her own user name.

Use Quoted Identifiers
    Specifies that internal keywords, or table, column, or owner names with special characters, sent to the server be enclosed in quotation marks. For example: SELECT SUM("AMOUNT"), "STORE_ID" FROM "HYPERION"."PCS_SALES" GROUP BY "STORE_ID". The default value for new connections is off.

Allow Change Database at Logon
    Adds a Database field to the logon dialog box, enabling the user to select a specific database when logging on to the DBMS. (Sybase and MS SQL Server)


Use large buffer query mode
    Specifies a binding process to retrieve more records per fetch call. If the ODBC driver supports binding, use this option for faster retrieval. (ODBC only) If this feature is turned on, the ODBC Extended Fetch call requests data 32K at a time.

Packet Size Setting: 512 * ___
    Enables Sybase DB-Lib users to set up a large buffer retrieval from the database so that more data can be transferred at one time. If this feature is selected, you can specify a multiple of 512 bytes for the number of bytes that you want to transfer at one time. Before you specify a multiple of 512 bytes, the server must have enough memory to allocate for the transmission of the selected packet size. To check which packet size the Sybase server supports, run the isql command sp_configure and type go. A list of parameters is returned; find the parameter showing the maximum network packet size. If the packet size you entered exceeds the maximum packet size, you have to reenter a smaller packet size. To change the packet size, issue the following command in isql: sp_configure "maximum network packet size", <new value> (where <new value> is the new size).

Oracle Buffer Size
    Determines the default buffer size when retrieving rows of data from an Oracle connection. The default size is 8000 bytes. A user can change this value to retrieve more rows per buffer, which may result in a performance improvement, but at the expense of additional memory requirements. The minimum size is 8000; if a user specifies a smaller value, no error is returned, but 8000 bytes is used. There is no hard-coded maximum size value for this field.

Disable Asynchronous Processing
    Turns off the ability to make simultaneous requests to the database server. This feature is available in Interactive Reporting Studio only.

Retain Data Formats
    Interactive Reporting Studio uses the default formats specified by the database server when handling date, time, and timestamp values. If the default formats of the server have been changed, you can retain or preserve these adjusted preferences to ensure that Interactive Reporting Studio interprets date/time values correctly.

Server Dates
    Enables alteration of internal Interactive Reporting Studio date handling to match server default settings in case of a discrepancy. For more information on this feature, see Modifying Server Date Formats on page 389.

Disable Transaction Mode
    On upload to the repository, Interactive Reporting Studio brackets SQL Insert statements with transaction statements. Disable Transaction Mode if the RDBMS does not support transactions. This feature is available in Interactive Reporting Studio only.

Do you want to save your Interactive Reporting database connection
    Enables you to save the connection file so that it can be reused at a later time.


Use outer join operator on limits
    Inserts an outer join operator (+) in the SQL on limits applied to the inner table for Oracle Net connection software to an Oracle database. By default this feature is enabled, and it is recommended; it is provided to work around Oracle restrictions when using outer joins with certain limit conditions, such as when an OR expression is needed. An outer join operator enables Interactive Reporting Studio to retrieve all rows from the left or right table, matching joined column values if found, or retrieving nulls for non-matching values. If this feature is disabled, nulls for non-matching values are not retrieved. Use the Join Properties dialog box to assist in determining which is the left and right table. Oracle does not support full (left AND right) outer joins with the (+) operator. When an ODBC driver is used, this feature is greyed out.

Use ODBC outer join syntax on limits
    When a limit has been applied to an inner table of an outer join, this feature enables the limit to be placed on the On clause of the SQL statement instead of the Where clause. The default setting for this feature is unchecked.

Use ODBC outer join escape syntax
    Inserts ODBC outer join escape syntax in the SQL statement.
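To make the three outer-join options concrete, the following sketch shows the general shape of the SQL each one produces. The table and column names (CUSTOMERS, ORDERS, STATUS) are hypothetical illustrations, not names from this guide, and the exact SQL Interactive Reporting Studio generates depends on the data model and driver:

    -- Use outer join operator on limits (Oracle): the (+) operator marks the
    -- inner table, and the limit on that table also receives (+)
    SELECT C.NAME, O.ORDER_ID
    FROM CUSTOMERS C, ORDERS O
    WHERE C.CUSTOMER_ID = O.CUSTOMER_ID (+)
      AND O.STATUS (+) = 'OPEN'

    -- Use ODBC outer join syntax on limits: the limit moves from the Where
    -- clause to the On clause of the join
    SELECT C.NAME, O.ORDER_ID
    FROM CUSTOMERS C LEFT OUTER JOIN ORDERS O
      ON C.CUSTOMER_ID = O.CUSTOMER_ID AND O.STATUS = 'OPEN'

    -- Use ODBC outer join escape syntax: the join is wrapped in the standard
    -- ODBC {oj ...} escape sequence for the driver to translate
    SELECT C.NAME, O.ORDER_ID
    FROM {oj CUSTOMERS C LEFT OUTER JOIN ORDERS O
         ON C.CUSTOMER_ID = O.CUSTOMER_ID}
    WHERE O.STATUS = 'OPEN'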

Filtering Tables
For databases with many tables, it can help to filter out tables you do not need from the Table catalog. The table filter enables you to specify filter conditions based on table name, owner name, or table type (table or virtual views).
Note: The table filter works with all database server connections except ODBC. If you are working with a Sybase or Microsoft SQL Server database, modify the connection and specify that Interactive Reporting Studio use SQL statements to retrieve the Table catalog before filtering tables.

Typically, you filter tables when creating a connection file, although you can modify an existing connection file later to filter tables.

To filter tables from the Table Catalog when creating a connection file:
1 Select Tools > Connection > Create.
The Database Connection Wizard is displayed.

2 Select the connection software that you want to use to connect to the database server from the pull-down list in the What connection software do you want to use? field.

3 Select the database server that you want to use in the What type of database do you want to connect to?
field.

4 Select Show Advanced Options and click Next.
5 Connect to the data source and click Next.
The dialog box varies according to the connection software you are using. In most cases, you need to specify a user name, password, and host name.

6 Click Define next to a table name, table owner, or table type filter check box.
The Limit:Filter Table dialog box is displayed.


7 Select a comparison operator from the drop-down box. The filter constraints determine which tables are
included in the Table catalog.

Complete a filter definition by doing one of the following:


Enter constraining values in the edit field and select the check mark.
Click Show Values to display a list of potential database values and select values from the list.
If you are comfortable writing your own SQL statements, click Custom SQL to directly code table filters that have greater flexibility and detail.

8 Click OK.
Interactive Reporting Studio prompts you to save the filter settings. Once saved, a check mark displays in the appropriate filter check box, which you can use to toggle the filter on and off.
Note: After you complete the Database Connection Wizard, verify that the filter conditions screen out the correct tables. In the Catalog frame, select Refresh on the pop-up menu.

To filter tables from the Table Catalog when modifying a connection file:
1 To filter tables for the current connection, select Tools > Connection > Modify.
The Meta Connections Wizard dialog box is displayed.

2 If you want to filter tables for another connection, select Tools > Connections Manager > Modify.
The Connections Manager dialog box is displayed. In the Document Connections frame, select the connection file that you want to modify and click Modify. The Meta Connections Wizard dialog box is displayed.

3 Configure the first Wizard as necessary, and then click Next to go to the second Meta Connections Wizard
dialog box.

4 Configure the second Wizard as necessary, and then click Next to go to the third Meta Connection Wizard
dialog box.

5 On the third Meta Connection Wizard dialog box, click Define next to an owner, table, or type filter check box.
A Filter dialog box is displayed. The Filter dialog boxes resemble and operate using the same principles as the Limit dialog box.

6 Select a comparison operator from the drop-down box. The filter constraints determine which tables are
included in the Table Catalog.

7 Complete a filter definition by doing one of the following:


Enter constraining values in the edit field and select the check mark.
Click Show Values to display a list of potential database values and select values in the frame.
If you are comfortable writing your own SQL statements, click Custom SQL to code table filters directly with greater flexibility and detail. For example, you can write a SQL filter that enables only tables beginning with Sales to be displayed in the table catalog (see the sketch after this list). As new Sales tables are added to the database, they are automatically displayed in the Table Catalog.
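As a point of reference, the following is a minimal sketch of the kind of condition the Custom SQL option accepts. The filter is an SQL condition evaluated against the server's table catalog; the column name TABLE_NAME is an assumption for illustration, since the actual catalog column varies by DBMS:

    -- show only tables whose names begin with SALES
    TABLE_NAME LIKE 'SALES%'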


8 Select any other customizing options to apply, and click OK.


You are prompted to save the filter settings. Once saved, a check mark is displayed in the appropriate filter check box, which you can use to toggle the filter on and off.

9 Click Next to continue through each dialog box, selecting any preferences for the connection file.
10 Click Finish.
11 In the Hyperion dialog box, click Yes to save the connection file.
12 In the Save Open Catalog dialog box, browse to a directory, enter the new connection name in the File Name field, and then click Save.

13 In the Table Catalog of the Query section, select Refresh on the pop-up menu to verify that the filter
conditions screen out the correct tables.

Modifying Server Date Formats


Interactive Reporting Studio uses the default formats specified by the database server when handling date, time, and timestamp values. If the default formats of the server have been changed, you can adjust preferences to ensure that Interactive Reporting Studio interprets date/time values correctly.

To modify server date formats:


1 Select Tools > Connection > Create.
The Database Connection Wizard is displayed.

2 Select Show Advanced Options and click Next.
3 Click Server Dates.
The Server Date Formats dialog box is displayed.

To Server Formats: Date and time formats submitted to the server (such as limit values for a date or time field).
From Server Formats: Formats Interactive Reporting Studio expects for date/time values retrieved from the server.

The default values displayed in the To and From areas are usually identical.

4 If the server defaults have changed, select the date, time, and timestamp formats that match the new
server defaults from the To and From format drop-down boxes.

If desired, click Default to restore all values to the server defaults stored in the connection file.

5 If you cannot find a format that matches the database format, click Custom.
The Custom Format dialog box is displayed.

6 Select a data type from the Type drop-down box.
7 Select a format from the Format drop-down box or type a custom format in the Format field.
8 Click OK.
The new format is displayed as a menu choice in the Server Date Formats dialog box.
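For context, here is a hedged, Oracle-specific example of the kind of server-side change that makes this adjustment necessary (the format value is illustrative only):

    -- Oracle: change the default format the session uses for date values
    ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';

If the server default no longer matches what Interactive Reporting Studio expects, select or define a matching format (here, a year-month-day pattern) in the To Server and From Server format fields.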


Creating an OLAP Connection File


To create an OLAP connection file:
1 Select Tools > Connection > Create.
The Database Connection Wizard is displayed.

2 Select the connection software that you want to use to connect to the OLAP database server from the drop-down box.

3 Select the OLAP database server that you want to use from the drop-down box and click Next.
Depending on the database you select in this field, you may have to specify a password to connect to the database. Enter your user name, password, and host address information. The sequence of dialog boxes that is displayed depends on the multidimensional database server to which you are connecting. The following sections provide connection information for these multidimensional databases:

Connecting to Essbase or DB2 OLAP
Connecting to an OLE DB Provider

Connecting to Essbase or DB2 OLAP


To connect to an Essbase or a DB2 OLAP database:
1 Follow the instructions in Creating an OLAP Connection File on page 390.

2 Select the application/database name to which you want to connect and click Next.
This is the cube from which you want to retrieve values.

3 Select the measures dimension for the cube in the Dimension Name field and click Next.
This is the specific measure group from which you want to retrieve values.

4 Click Finish to save the connection file.

Connecting to an OLE DB Provider


To connect to an OLE DB provider:
1 Follow the instructions in Creating an OLAP Connection File on page 390. 2 Select the database to which you want to connect.
NT domain authentication is performed for OLAP cube files (.cub). If the username and password provided (when attempting to process or retrieve dimensions) are not a valid NT domain username and password, an access error is returned and the user cannot access the file. To access the file, provide a valid NT domain username and password. To specify a domain, enter it in the username field in the form DOMAIN\jdoe.


Note: By default, Interactive Reporting Web Client users are prompted to enter their Windows credentials (user ID, password, and optionally Windows domain) when logging on to Microsoft OLAP databases. The domain can be specified in the login user ID prompt field, preceding the user ID text and delimited by a backslash (\); for example, if the domain is HyperionDomain and the user ID is user1, HyperionDomain\user1 can be specified in the user ID field. These changes are enforced to provide more secure access to these databases. If prompted, the user must enter credentials that can be successfully authenticated by the Windows operating system at the database server. Failure to provide credentials that can be successfully authenticated by Windows results in an error message being returned to the user and login to the database being denied. If the user's credentials are successfully authenticated, the database login proceeds, and any role-based security on cube data granted at the database level for the specified user ID is invoked and honored. If no role-based security is implemented at the database level (the database cubes and their data are available to all users), the database administrator can choose to publish an Interactive Reporting database connection for the database with a pre-assigned system-administrator-level user ID and password. Thus, if users access the database using this Interactive Reporting database connection, they are not prompted to enter any login credentials; they are passed through to the database, where access to all cube data is allowed. These statements also apply to Interactive Reporting Web Client users who access local cube files created from Microsoft OLAP or other OLE DB for OLAP databases (such as the sample cube files provided with the installation).

3 If the OLE DB for OLAP database provides the ability to retrieve dimension properties and you want to work
with them, click Enable Retrieval Of Dimension Properties and click Next.

4 Select the name of the Provider from the drop-down box and click Next.
For more information about the remaining dialog boxes, consult the database documentation of the provider.

Modifying Interactive Reporting Database Connections


When you create an Interactive Reporting database connection, you establish a working database connection for data modeling and querying. You may need to modify an Interactive Reporting database connection to reflect changes in the network or hardware configuration, or to manage other connection information.
Note: Changes to basic connection configuration, such as new database or host name, require you to log off and rebuild the Interactive Reporting database connection.

To modify an Interactive Reporting database connection:


1 Close any open Interactive Reporting documents.
2 Select Tools > Connection > Modify.
The Modify Connection dialog box is displayed.

3 Select the connection file that you want to modify and click Open.
The Database Connection Wizard is displayed showing the information for the Interactive Reporting database connection you selected.

4 Make any desired changes and then save the Interactive Reporting database connection when prompted.


Connecting to Databases
In Interactive Reporting Studio, you use an Interactive Reporting database connection whenever you perform tasks that require you to connect to a database, such as:

Downloading a data model
Processing a query to retrieve a data set
Showing values for a server limit
Using server functions to create computed items
Scheduling an Interactive Reporting document

The way you select an Interactive Reporting database connection depends on which edition of Interactive Reporting Studio you are using and the data model or Interactive Reporting document with which you are working. If a data model is present in the Query section workspace, Interactive Reporting Studio automatically prompts you with the correct Interactive Reporting database connection when your actions require a database connection. When you open Interactive Reporting Studio to begin a work session (for example, by downloading a data model from an Interactive Reporting Studio repository or creating a data model from scratch), you must select the correct Interactive Reporting database connection for the targeted database.

Monitoring Connections
Before you attempt to connect to a database, make sure you are not already connected. You can monitor the current connection status by observing the connection icon on the lower right side of the Status bar. An X over the icon indicates that there is no current database connection.

To check the connection information, position the cursor over the connection icon. The Interactive Reporting database connection in use and the database name are displayed on the left side of the Status bar.


Connecting with a Data Model


Once a data model is downloaded to or created in the Interactive Reporting document, the Interactive Reporting document is associated with the Interactive Reporting database connection used to create the data model. Interactive Reporting documents store a reference that calls the associated Interactive Reporting database connection whenever you need to log on to the database to build or process a query.

To log on to a database from an existing Interactive Reporting document:


1 Select Tools > Connection > Logon or double-click the connection icon on the Status bar.
The Interactive Reporting database connection dialog box is displayed with the Interactive Reporting database connection name in the title bar.

2 Enter the user name and password and click OK.

Connecting Without a Data Model


Interactive Reporting Studio users have the option of creating new data models in an empty Interactive Reporting document. Other users download prebuilt data models from the repository. In either situation, you need to select an Interactive Reporting database connection and connect to a database before you proceed. The database you select contains either the source tables for the data model you plan to create, or the repository that contains the data models you need to download.

To select an Interactive Reporting database connection when you create a new Interactive
Reporting document:

1 Select File > New to display the New File dialog box. 2 Select the Recent Database Connection Files radio button and select a connection file from the list, then
click OK.

If the Interactive Reporting database connection that you want to use is not displayed, click Browse to display the Select Connection dialog box. Navigate to the connection file that you want to use and click Open. Interactive Reporting Studio prompts you for a user name and password.

3 Enter the user name and password and click OK.


If you do not have the right Interactive Reporting database connection for a particular database, ask the database administrator to provide one or help you create an Interactive Reporting database connection.
Note: You can create new blank Interactive Reporting documents without connecting to a database. Blank Interactive Reporting documents are useful for importing data files such as Excel spreadsheets; for creating a Dashboard master Interactive Reporting document; and for performing tasks you do not necessarily want to associate with a database.


Setting a Default Interactive Reporting Database Connection


If you log on to one database more frequently than others, you should set the Interactive Reporting database connection for that particular database as the default connection. Whenever you log on to create a new data model, the default Interactive Reporting database connection loads automatically. If you frequently use different databases in your work, you may not want to set a default Interactive Reporting database connection. If you leave the default Interactive Reporting database connection preference blank, Interactive Reporting Studio prompts you to select an Interactive Reporting database connection each time you log on.

To set a default Interactive Reporting database connection:


1 Select Tools > Options > Program Options.
The Interactive Reporting Studio Options dialog box is displayed.

2 Click the File Locations tab.
3 Under Connections Directory, enter the default connection directory that contains the Interactive Reporting database connection files you use to connect to different databases.
4 Under Default Connection, enter the full path and file name of the Interactive Reporting database connection that you want to use as the default connection, and then click OK.

The next time you log on (and create a new Interactive Reporting document), the default connection is used automatically. Be sure to store your default Interactive Reporting database connection in your connections directory so that Interactive Reporting Studio can find it when you or users of your distributed Interactive Reporting documents attempt to log on.

Logging On Automatically
Interactive Reporting Studio provides an Auto Logon feature that maintains the current database connection when you create a new Interactive Reporting document. Auto Logon is enabled by default.

To toggle Auto Logon:


1 Select Tools > Options > Program Options.
The Interactive Reporting Studio Options dialog box is displayed.

2 Click the General tab.
3 Select the Auto Logon check box and click OK.


To use Auto Logon when creating a new Interactive Reporting document:


1 Select the connection icon on the Status bar to verify that Interactive Reporting Studio is connected to the
database.

2 Select File > New.


The Auto Logon dialog box is displayed.

3 Click Yes to accept the existing connection.


Interactive Reporting Studio opens the new document. If Auto Logon was accepted, you are connected to the database server automatically. Otherwise, you can select a different Interactive Reporting database connection.

Using the Connections Manager


The Connections Manager enables you to view the status of all connection files in all open Interactive Reporting documents. Use the Connections Manager to check or change database connection status, to modify connection preferences in Interactive Reporting database connection files, or to change database passwords. The Document Connections frame of the Connections Manager lists each open Interactive Reporting document and its associated Interactive Reporting database connections. The right frame shows the connection information for the selected Interactive Reporting database connection:

Connection: Name of the selected Interactive Reporting database connection
Status: Connection status (connected or disconnected)
Used By: Name of the Interactive Reporting document section that accesses the database

Use the plus (+) and minus (-) signs to navigate through the tree structure.

Logging On to a Database
To log on to a database:
1 Select Tools > Connections Manager or press [F11].
The Connections Manager dialog box is displayed.

2 Select the Interactive Reporting database connection associated with the database that you want to use
and click Logon.

The Database Password dialog box is displayed.

3 Enter your user name and password and click OK.


Once connected, the X is removed from the connection icon on the tree.


Logging Off of a Database


To log off of a database:
1 Select Tools > Connections Manager or press [F11].
The Connections Manager dialog box is displayed.

2 Select the Interactive Reporting database connection associated with the database that you want to log off
of and click Logoff.

Modifying an Interactive Reporting Database Connection Using the Connections Manager


You can use the Connections Manager to change your connection file preferences, depending on the database and connection software.
Note: If you are not familiar with the preferences and their effects, ask the database administrator for assistance before changing the default settings.

To modify an Interactive Reporting database connection using the Connections Manager:


1 Select Tools > Connections Manager or press F11.
The Connections Manager dialog box is displayed.

2 Select the connection file that you want to modify and click Modify.
The Database Connection Wizard is displayed showing the information for the Interactive Reporting database connection you selected.

3 Make any desired changes and then save the Interactive Reporting database connection when prompted.

Changing Database Password


You can change the database password if you are connected to any of these database servers: Essbase, Oracle, Red Brick Warehouse, Microsoft SQL Server, or Sybase.

To change the password:


1 Select Tools > Connections Manager or press [F11].
The Connections Manager dialog box is displayed.

2 Select the connection file associated with the database whose password you want to change and click Change Database Password.

3 Type the requested information and click OK.


Note: Some database servers support case-sensitive passwords and/or require a minimum password length. For more information, see the documentation for the database server.


Working with an Interactive Reporting Document and Connecting to a Database


Interactive Reporting documents consolidate instructions and specifications for querying, limiting, sorting, and computing data stored on the database server. An Interactive Reporting document is centered on a Data Model and queries. You can build Data Models and queries yourself using Interactive Reporting Studio, or download shared Data Models from a document repository into an empty Interactive Reporting document. Interactive Reporting documents are associated with a database server by way of a separate connection file. You use a separate connection file for each database server that you connect to in your work. Each connection file retains routines, instructions, protocols, and parameters in a small file, also called an Open Catalog Extension. This connection file also preserves DBMS-specific connection preferences and specifications for automatic access to metadata. For Interactive Reporting Studio users, the process of creating a new Interactive Reporting document and logging on to a database is simple. You select a connection file for the database server you plan to use and enter the database password. You can select either a new or an existing connection.

To create a new Interactive Reporting document using an existing connection file:


1 Select File > New.
The New File dialog box is displayed.

2 Select the Recent Connection Files field and select a connection file from the list. 3 If the connection file that you want to use is not displayed, click the Browse button to display the Select
Connection dialog box. Navigate to the connection file that you want to use and click Open.

The Connection Password dialog box is displayed.

4 Type your user name in the Host User field and your password in the Host Password field, and then click
OK.

If you do not have the right connection file to connect to a particular database, ask your administrator to provide or help you create a connection file.

To create a new Interactive Reporting document using a new database connection file:
1 Select File > New.
The New File dialog box is displayed.

2 Select A New Database Connection File field and then click OK.
The Database Connection Wizard is launched.

3 Follow the instructions provided by the Database Connection Wizard.


To create a blank Interactive Reporting document with no database connection file:


1 Select the Other check box.
2 Select the Blank Document field and click OK.
Blank Interactive Reporting documents are useful for importing data files such as Excel spreadsheets; for creating a Dashboard master Interactive Reporting document; and for performing tasks you don't necessarily want to associate with a database.

To open an existing Interactive Reporting document:


Select Open an Existing Document and select an Interactive Reporting document file from the Recent Connect Documents list. If the Interactive Reporting document that you want to use is not displayed, click Locate File. When the browse box is activated, click the Browse button to display the Open File dialog box. Navigate to the Interactive Reporting document that you want to use and click Open. Interactive Reporting documents are saved with a BQY extension on Windows.

To select a connection file from the document repository:


1 On the File menu, select Open from Repository, and then click Select.
The Select Connection dialog box is displayed.

2 Navigate to the connection file that you want to use and click Open.
When querying the database, you first select the data items that interest you from a Data Model, Standard Query, or Standard Query with Reports. You can find a repository object to start with by selecting one from the Repository Catalog and downloading it to the desktop. When you download the object to the Contents frame, the object becomes the basis of a new Interactive Reporting document.

3 If you are not connected, log on to the database containing the document repository by selecting a
connection file from the Select Connection dialog box and entering your database user name and password.

The Open from Repository dialog box is displayed, showing the Repository Catalog in the left frame and description information in the right frame. The Repository Catalog is in directory tree format, which enables you to navigate through the repository structure. Repositories are organized into subdivisions which, depending on the database, may include subdivisions called databases, and will most likely have subdivisions called owners. Databases and owners can be departmental headings, people in your organization, or other criteria established by the administrator. You cannot access repository versions 4.0 and older.

4 Under each owner name in the repository, there are user groups.
User groups are established by an advanced user to categorize and store repository objects by content and access privileges. You have been granted access to only the items you see in the Repository Catalog.


5 Select the document icons in the directory tree to display profiles in the Model Info and Description Areas
to the right.

6 When you have navigated to the correct repository owner and user group and found the repository object
that you want, select the object in the directory tree and click Open.

Interactive Reporting Studio downloads the repository object to the appropriate section.

Connecting to Web Clients


Connections made through Interactive Reporting Web Client and the Hyperion System 9 BI+ Workspace can be made immediately to a database, or deferred until a query is actually processed.
Note: A locally saved Interactive Reporting document does not prompt to connect to the Workspace when opened by dragging the Interactive Reporting document into a Web browser. A message is displayed stating that the Interactive Reporting document is opening in offline mode. This is part of Windows XP SP2's new pop-up blocker feature. The workaround is to disable the pop-up blocker. In Microsoft Internet Explorer, select Tools > Pop-Up Blocker > Turn Off Pop-up Blocker.

To select a Web client connection method:


1 Select Tools > Connect or press F11.
The Connect drop-down box is displayed.

2 Select Web Clients.


The Web Clients dialog box is displayed.

3 Select the connection method that you want to use for the Web client:

Immediately connect to database: Select this method to immediately connect to a database using genuine database authentication. You are prompted for the logon credentials to the database being accessed. The value set here for the Interactive Reporting document in Interactive Reporting Studio cannot be changed in Interactive Reporting Web Client. This connection method is the preferred method for Interactive Reporting documents created in Hyperion Intelligence version 8.2 and later.

Defer connection to database until used to process SQL: Select this method to defer making a connection to a database until the query is processed. You are prompted for logon credentials to the database without using genuine database authentication. That is, no actual database connection is attempted until the query is processed.


Connecting to Workspace
Use the Connect to Workspace dialog box to specify the Data Access Servlet URL required to launch Workspace. Workspace consists of services, applications, and tools for those users who need to find and view Interactive Reporting documents, and for users who need to import files, schedule jobs, and distribute the output. For more information on Workspace, see Hyperion System 9 BI+ Workspace Users Guide.
Note: To use the Connect to Workspace dialog box to connect to the repository (that is, for embedded browser/hyperlink content in Interactive Reporting Studio), see the Hyperion System 9 BI+ Interactive Reporting Studio Developer's Guide.

To connect to Workspace:
1 Select Tools > Connect to Workspace.
The Connect to Server dialog box is displayed.

2 Specify the Data Access Servlet URL required to launch Workspace in the Server Address field.


Chapter 22

Using Metatopics and Metadata in Interactive Reporting Studio

This section explains how to use metatopics and metadata to simplify data models for end users.
Note: Most of the information in this section is intended for Interactive Reporting Studio advanced users and does not apply to Interactive Reporting Web Client users.

In This Chapter

About Metatopics and Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 Data Modeling with Metatopics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402 MetaData in Interactive Reporting Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405 Using the Open Metadata Interpreter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406


About Metatopics and Metadata


Metatopics and metadata enable advanced users to mask the more confusing technical aspects of databases for non-technical users. While data models are already simplified views of the database, they sometimes still present a challenge to novice users. This is especially true when confusing database names and complicated strategies are visible in the data model. For most end users, the confusing aspects of query building stem from two sources:

Data model topic and join structures
Database naming conventions

Interactive Reporting Studio provides two solutions to deal with each of these problems. These complementary solutions can be integrated to shield company personnel from the technical aspects of the query process and make end-user querying completely intuitive:

Metatopics: Topics created from items in other topics. Metatopics are higher-level topics, or virtual topics, that simplify the data model structure and make joins transparent. A metatopic looks and behaves like any other topic and can accept modifications and metadata.

Metadata: Data about data. Typically stored in database tables, and often associated with data warehousing, metadata describes the history, content, and function of database tables, columns, and joins in understandable business terms. Metadata is useful for overcoming the awkward names or ambiguous abbreviations often used in a database. For example, for a database table named CUST_OLD, metadata can substitute a descriptive business name for the table, such as Inactive Customers, when it is viewed by the end user. Metadata may also include longer comments. Because most businesses maintain their metadata on a database server, it is a potentially useful guide to the contents of the database, if it can be synchronized and used in conjunction with the data it describes.

Data Modeling with Metatopics


As noted earlier, metatopics allow you to create higher-level topics that can greatly simplify the appearance of a data model. Unlike other topics, metatopics are independent of actual database tables. You can use metatopics to make the column and join structure of an underlying database transparent, substituting instead streamlined and intuitive topics adapted to the way users conceptualize information. For example, you can replace a data model of joined topics with a single metatopic that contains only the items business personnel need in their queries. The joins are completely transparent.
Tip: Metatopics do not support detail view.


Creating Metatopics
You can create a new, empty metatopic or copy an existing topic to use as the basis for a metatopic.

To create a new, empty metatopic:


1 Select DataModel > Add Metatopic.
2 Type the name of the new topic in the Topic Properties dialog box and click OK.

To create a metatopic from an existing topic:


1 Select a topic in the Content frame.
2 Select DataModel > Promote To Metatopic.
A new metatopic is displayed in the Content frame with the default name: Meta_TopicName. The new topic contains the same items defined in the source topic.

Copying Topic Items to a Metatopic


After you create a metatopic, you can rebuild its structure by copying topic items from other topics. Once the topic items are in place, you can view the data model solely at the metatopic level, excluding the original topics in favor of a single metatopic or multiple unjoined metatopics.

To copy items from other topics to a metatopic:


Select the item that you want to add from an existing topic and drag it to the metatopic. To select and drag multiple topic items from the same topic, press and hold down the modifier key (Windows [Alt], Mac [Option], Motif [Ctrl+Alt]) while using the mouse.
Note: You can select items from only one topic at a time.

Caution! If a metatopic contains items copied from an original source topic, do not remove the original

topic from the workspace or use the icon view. Because metatopic items model data through the original source topics, removing the original source topics or using an icon view also removes the copied topic items from the metatopic.


Creating Computed Metatopic Items


You can customize metatopics by adding computed items that do not exist in the database. Computed metatopic items provide end users with access to information they need without storing the data in the database or forcing them to master complicated computations. Computed metatopic items can be calculated either by the database server or locally. Locally computed metatopic items are restricted to referencing items drawn from the metatopic where the item is placed. Server computed items can reference any items in the original topics or metatopics of the data model.

To create a computed metatopic item:


1 Select the metatopic for which you want to create a computed metatopic item.
2 Select Data Model > Add Metatopic Item > Server or Local.
The server or local version of the Modify Item dialog box for computed items is displayed.

3 Enter a descriptive item name in the Name field.
4 Type the computed item expression, or use the following buttons to create it (a brief sketch follows this list):

Functions button: Applies scalar functions to data items.
Reference button: Adds Request Items to the expression.
Options button: Specifies a data type.
Operator buttons: Add logical and arithmetic operators to the expression.
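As a brief sketch of what such an expression can look like (the item names here are hypothetical, not from this guide), a server computed item is evaluated in SQL on the database server and can therefore combine scalar SQL functions with arithmetic operators:

    -- hypothetical server computed item expressions
    UPPER(CUSTOMER_NAME)
    UNITS * UNIT_PRICE

A local computed item uses the same kinds of operators but may reference only items drawn from the metatopic in which it is placed.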

Customizing or Removing Metatopics and Metatopic Items


You can apply to metatopics and metatopic items the same customization options that you use to make original topics and items more intuitive. For more information on customizing topics, see Modifying Topic Item Properties on page 430.

To remove a metatopic or metatopic item, use one of these options:

Select the metatopic or topic item that you want to remove and select Remove on the pop-up menu.
Press [Del].
Press the Delete button.

Caution! If you remove a metatopic item, it cannot be restored to the metatopic. You must copy the

item back to the metatopic or recreate it.


Viewing Metatopics
There are a number of ways to view a data model. By default, database-derived source topics and any metatopics you have created are displayed together in the Content frame in Combined view.

To change the data model view:


Select DataModel > Data Model View > Option. Options include:

Combined: Displays both original (database-derived) topics and metatopics in the Content frame.
Original: Displays only database-derived topics in the Content frame.
Meta: Displays only metatopics in the Content frame.

Caution! If an original topic contains items that have been copied to a metatopic, do not iconize or

remove the original topic from the Content frame in Combined view. Metatopic items are based on original items and remain linked to them. If an original topic is iconized or removed, any metatopic items based on its contents become inaccessible.

MetaData in Interactive Reporting Studio


Interactive Reporting Studio utilizes available metadata to simplify data models. By applying metadata naming conventions and descriptive information, metadata makes the information locked away in database tables and columns more accessible. Metadata can be applied in several ways in Interactive Reporting Studio. If you have a source of metadata stored on a database server, Interactive Reporting Studio users can use the Open Metadata Interpreter to link it to data models and automatically apply the metadata information. The data modeling features of Interactive Reporting Studio provide ways to add the benefits of metadata if you don't have a centralized metadata source. Interactive Reporting Studio automatically makes topic and item names more intelligible, and enables you to customize and change the appearance of these entities on the workspace.


Using the Open Metadata Interpreter


The Open Metadata Interpreter is a powerful tool you can use to link Interactive Reporting Studio to metadata, or information about the database. By modifying the SQL that Interactive Reporting Studio sends to the database server, you can dictate where Interactive Reporting Studio finds the information it uses to create a data model from database tables. The Open Metadata Interpreter enables Interactive Reporting Studio users to draw this information from an enterprise source of business metadata. The Open Metadata Interpreter reads metadata from tables on a database and applies it to data models through a live database connection. The specifications for reading these tables are stored in Interactive Reporting database connections. Once configured, metadata definitions are available to anyone who uses the Interactive Reporting database connection.

Accessing the Open Metadata Interpreter


The Open Metadata Interpreter (OMI) is a feature of Interactive Reporting database connection files, which enable Interactive Reporting Studio to manage database connectivity. OMI is implemented using the Metadata Definition dialog box of the Database Connection Wizard.

To open the Metadata Definition dialog box:


1 If Interactive Reporting Studio is not connected to a database, select the Interactive Reporting database
connection that you want to direct to the metadata source and log on.

2 Select Tools > Connection > Modify.


The Database Connection Wizard is launched with the Meta Connection Wizard displayed.

3 Select whether to run the Meta Connection Wizard on the current connection or on a different connection.
If you select a different connection, the Select Metadata Interactive Reporting database connection field becomes active.
a. Enter the full path and file name of the connection file that you want to use. You can also click Browse to navigate to the location of the connection file.
b. Click Next. The Password dialog box is displayed.
c. Enter the database name in the Host Name field and the database password in the Host Password field and click OK.
d. Select the current database name and password to make the metadata connection, or specify an alternate name and password. If you specify an alternate user name and password, enter the name and password that you want to use for the metadata connection.

4 Click Next.


5 Select the metadata schema where the metadata settings are stored from the drop-down box.
Metadata schemas are provided by third-party vendors and saved in the bqmeta0.ini file. When you select a metadata schema, the predefined schema populates the fields in the Metadata Definition dialog box and is saved to the connection file. If you select another schema, the metadata definitions are overwritten in the connection file. If you want to customize the metadata settings, select Custom from the drop-down box and click Edit. The Metadata Definition dialog box is displayed, which contains tabs for tables, columns, joins, lookup, and remarks. For detailed explanations of the metadata definitions, see Configuring the Open Metadata Interpreter on page 407.

6 Enter the schema name or owner of the metadata repository table (for custom settings), or click Next to complete the Meta Connection Wizard and return to the Database Connection Wizard.

Configuring the Open Metadata Interpreter


The Open Metadata Interpreter is implemented using the Metadata Definitions dialog box. You add metadata definitions in the Metadata Definition dialog box, which contains five tabbed pages. The pages can be independently configured and are designed to assist you in creating SQL Select statements to extract and apply metadata from predefined source tables or from schemas provided by third-party vendors. Radio buttons at the top of certain pages enable you to specify naming based on actual default table and column names, or on a custom metadata source. When the custom option is selected, the SQL entry fields on the tab are activated, and you can enter SQL statements into the separate metadata definition areas.

Metadata Definition: SQL Entry Fields


Each Metadata Definition tab has up to three Metadata Table Definition SQL entry fields:

Select: Generates SQL Select statements, and is divided into distinct fields that specify the columns that store the metadata. The columns are located in the database table described in the From field. If necessary, you can use aliases in the Select fields to distinguish between multiple tables.

From: Generates an SQL From clause, and specifies the table(s) that contain metadata that applies to the database item described by the tab. You can also enter SQL to access system tables when necessary. If you need to reference more than one table in the From field, you can use table aliases in the SQL.

Where: Generates SQL Where clauses, and is used on the Columns and Joins pages to indicate which topic needs to be populated with item names or joined to another topic. It can also be used to establish relationships between multiple tables or to filter tables.
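Taken together, the three fields compose an ordinary Select statement that Interactive Reporting Studio runs against the metadata source. As a minimal sketch, assuming a hypothetical metadata table named META_TABLES with columns OWNER_NAME, TABLE_NAME, TABLE_ALIAS, and TABLE_TYPE (none of these names come from this guide), the entries combine into SQL along these lines:

    -- the Select fields supply the column list, From supplies the table,
    -- and Where supplies an optional filter
    SELECT OWNER_NAME, TABLE_NAME, TABLE_ALIAS, TABLE_TYPE
    FROM META_TABLES
    WHERE TABLE_TYPE = 'TABLE'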


Notes on Entering SQL:


Entries are required in all From entry fields, and in all fields marked with an asterisk (*).

Under default settings, Metadata Definition fields specify the system-managed directory tables (except when using ODBC). You cannot modify field values when the Default radio button is selected.

Clicking Reset at any time when defining a custom source populates the entry fields with the database default values. It may be helpful to start with the defaults when setting up metadata definitions.

You may sometimes use database variables when entering a Where clause. Interactive Reporting Studio provides :OWNER, :TABLE, :COLUMN, :LOOKUPID, :TABALIAS, and :COLALIAS variables, which temporarily store a database owner, table, column, or domain ID number and aliases of the active topic or item. Each variable must be entered in all caps with a leading colon.

Metadata Definition: Tables


Extracting and applying metadata to topics is the simplest metadata configuration. When metadata is defined for database tables, the tables are displayed in the Table catalog with the names supplied in an alternate table of tables, and topics drawn from those tables are renamed to reflect the metadata as well. Once the Tables tab is configured, all data models using the connection apply metadata names, instead of the default server names, to topics in the Content frame.

To apply metadata names to data model topics:


1 On the Tables tab, select Custom Definition.
The SQL entry fields activate and the system-managed information clears. Click Reset if you want to use the database default as a starting point.

2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
tables.

Owner Name: Name of the owner column in the alternate table of tables
Physical Table Name: Name of the column of physical table names in the alternate table of tables
Table Alias: Name of the column of metadata table aliases in the alternate table of tables
Table Type: Name of the column of physical table descriptions in the alternate table of tables

3 In the From field, enter the physical name of the alternate table of tables.
4 Use the Where fields to filter selected topics (for example, to limit the metadata mapping to include only certain owners).
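As a hedged sketch, suppose the alternate table of tables is named META_TABLES and stores its metadata in columns named OWNER_NAME, PHYS_NAME, ALIAS_NAME, and TABLE_DESC (all invented names). The Tables tab entries might then read:

    Owner Name:           OWNER_NAME
    Physical Table Name:  PHYS_NAME
    Table Alias:          ALIAS_NAME
    Table Type:           TABLE_DESC
    From:                 META_TABLES
    Where:                OWNER_NAME = 'SALES'

The Where entry in this sketch limits the metadata mapping to tables owned by SALES.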
Note: If multiple folders exist in the repository, the following modifications to the Interactive Reporting Studio bqmeta0.ini file are necessary in order to filter the list of tables by folder:


To filter Informatica tables:


1 Under the heading labeled [Informatica], change the TableWhere property as follows (do not include brackets):
TableWhere=SUBJECT_AREA='<folder name>'

2 Change the ColumnWhere property as follows (do not include brackets):
ColumnWhere=table_name=':TABLE' and SUBJECT_AREA='<folder name>'

Metadata Definition: Columns


On the Columns tab, you specify the topics in which items should be displayed. You may also need to refer to the system-managed table of columns (in addition to the alternate table of columns) for some specific column information. Once you configure the Columns tab, all data models using the connection apply metadata to topic items in the Content frame instead of using the default server names.

To apply metadata names to data model topic items:


1 On the Columns tab, select Custom Definition.
The SQL entry fields activate and the system-managed information clears. Click Reset if you want to use the database defaults as a starting point.

2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
columns and/or system-managed table of columns.

Physical Column Name: Name of the column of physical column names in the alternate table of columns
Column Alias: Name of the column of metadata column aliases in the alternate table of columns
Column Type: Name of the column of column data types
Byte Length: Name of the column of column data lengths
Fraction: Name of the column of column data scales
Total Digits: Name of the column of column precision values
Null Values: Name of the column of column null indicators

If you use more than one table in the From field, enter the full column name, preceded by the table name, in the Select field:
table_name.column_name

3 In the From field, enter the physical names of the alternate table of columns (and system-managed table
of tables, if necessary).

If you are using both tables in the From field, you can simplify SQL entry by using table aliases.

4 Use the Where field to relate columns in the alternate and system-managed tables of tables to ensure
metadata is applied to the correct columns.

Use the following syntax in the Where field (do not include brackets):
<table of columns>.<tables column>=:TABLE and <table of columns>.<owners column>=:OWNER.


Interactive Reporting Studio automatically populates a topic added to the Content frame with the metadata item names when it finds rows in the alternate table of columns that match the names temporarily stored in :TABLE and :OWNER. You can also use the variables :TABALIAS and :COLALIAS to specify table and column aliases in the SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
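Continuing the earlier sketch (META_COLUMNS and its columns are invented names, aliased here as MC), the Columns entries might read:

    Physical Column Name:  MC.COL_NAME
    Column Alias:          MC.COL_ALIAS
    From:                  META_COLUMNS MC
    Where:                 MC.TABLE_NAME = :TABLE and MC.OWNER_NAME = :OWNER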

Metadata Definition: Joins


You can use the auto-join feature to automatically join topics based not only on the best guess of Interactive Reporting Studio (see Automatically Joining Topics on page 420), but also on primary and foreign key information stored in an alternate table of joins. Join strategies include:

Best Guess: Automatically joins columns of similar name and data type.
Custom: Selects joins defined in a custom metadata source.
Server-Defined: Uses joins that have been established on the database server.

The Joins tab uses SQL instructions to employ a custom join strategy stored in metadata. Once Interactive Reporting Studio is directed to the metadata source, all data models using the connection apply specified join logic between topics.

To automatically join topics using metadata join information:


1 On the Joins tab, select Custom.
The SQL entry fields activate. (There are no system defaults for the Joins tab.) Click Clear to clear the entry fields if you make a mistake and want to start over.

2 In the Select fields, enter the appropriate column names as they are displayed in the alternate table of
joins. Interactive Reporting Studio requires data in the Primary Table and Primary Column fields to find the primary keys.

Primary Database Name: Sets the name of the column of databases for primary key tables in the alternate table of joins.
Primary Owner: Sets the name of the column of owners belonging to primary key tables in the table of joins.
Primary Table: Sets the name of the column of primary key tables in the table of joins.
Primary Column: Sets the name of the column of primary key items in the table of joins.
Foreign Database Name: Sets the name of the column of databases for foreign key tables in the alternate table of joins.
Foreign Owner: Sets the name of the column of owners belonging to foreign key tables in the table of joins.
Foreign Table: Sets the name of the column of foreign key tables in the table of joins.
Foreign Column: Sets the name of the column of foreign key items in the table of joins.


If you use more than one table in the From field, enter the full column name preceded by a table name in the Select fields.
table_name.column_name

3 In the From field, enter the physical name of the alternate table of joins.
4 Use the Where field to tell Interactive Reporting Studio which topics to auto-join.
Use the following syntax in the Where field (do not include brackets):
<owners column>=:OWNER and <tables column>=:TABLE

If Auto-Join is enabled, Interactive Reporting Studio automatically joins topics added to the Content frame when it finds rows in the alternate table of joins that match the names temporarily stored in :TABLE and :OWNER. You can also use the variables :TABALIAS and :COLALIAS to specify table and column aliases in the SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
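An illustrative set of entries, assuming an invented alternate table of joins named META_JOINS with invented column names:

    Primary Table:   PK_TABLE
    Primary Column:  PK_COLUMN
    Foreign Table:   FK_TABLE
    Foreign Column:  FK_COLUMN
    From:            META_JOINS
    Where:           OWNER_NAME = :OWNER and TABLE_NAME = :TABLE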

Metadata Definition: Lookup


Lookups apply metadata to values that are retrieved by the Show Values command in the Limit dialog box. If the database tracks data by codes, abbreviations, or ID numbers, lookup values can help users limit queries effectively. For example, a product table may track sales by product ID number. When the user attempts to limit the Product ID column in a query, a Show Values call to the database yields only ambiguous product ID numbers, and it can be hard to tell where to apply the limit. Using the Lookup tab, you can map the product ID values to a column of descriptive product names elsewhere in the database. When the user clicks Show Values, he or she selects among descriptive product names to set the limit on the underlying product ID numbers.
Note: To use this feature, you need a table of descriptive lookup values in the database, and an additional mapping table to verify which items are supported by lookup values and where the corresponding lookup values are stored.

To apply metadata to limit lookup values:


1 On the Lookup tab, select Use SQL Definition.
The SQL entry fields activate. Click Clear to clear the entry fields if you make a mistake and want to start over.

2 In the Select fields, enter the appropriate column names as they are displayed in the domain registry table.
The Lookup Table, Lookup Value Column, Lookup Description Column, and Lookup Domain ID Column are required for Interactive Reporting Studio to locate lookup values.

Lookup Database: Name of the column of databases in the domain registry table.
Lookup Owner: Name of the column of owners in the domain registry table.
Lookup Table: Name of the column of tables containing lookup domain description values in the domain registry table.


Lookup Description Column: Name of the column of columns containing descriptive lookup values in the domain registry table.
Lookup Value Column: Name of the column of columns of original column values in the domain registry table.
Lookup Domain ID Column: Name of the column of domain IDs in the domain registry table.

3 In the From field, enter the physical name of the domain registry table.
Interactive Reporting Studio first sends SQL to the domain registry table to see if Lookup values are available for a given item.

4 Use the Where field to identify which items have lookup values.
Use the following format (do not include brackets):
<tables column>=:TABLE and <columns column>=:COLUMN

When you limit an item and show values, Interactive Reporting Studio stores the physical table and column names of the item in the variables :TABLE and :COLUMN. Interactive Reporting Studio searches the domain registry table for a row that matches the values temporarily stored in :TABLE and :COLUMN. When it finds a matching row, it pulls lookup values from the specified columns in the domain descriptions table. You can also use the :LOOKUPID variable to store the lookup domain ID value.
Note: The database variables must be entered in upper case and preceded with a colon.

5 Use the Lookup Where field to synchronize the values in the domain registry and domain description tables.
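A hedged sketch, assuming an invented domain registry table named DOMAIN_REGISTRY with invented column names:

    Lookup Table:              LOOKUP_TABLE
    Lookup Value Column:       VALUE_COL
    Lookup Description Column: DESC_COL
    Lookup Domain ID Column:   DOMAIN_ID
    From:                      DOMAIN_REGISTRY
    Where:                     TABLE_NAME = :TABLE and COLUMN_NAME = :COLUMN
    Lookup Where:              DOMAIN_ID = :LOOKUPID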

Metadata Definition: Remarks


If remarks already exist in the database, you can configure the Interactive Reporting database connection to retrieve and display them as part of the data model. Database remarks function like context-sensitive help by providing detailed contextual information about a table or column, and can be very helpful to users navigating a large data model. The Remarks tab uses SQL instructions to direct Interactive Reporting Studio to the unified server source of remarks for tables and columns. Once the Remarks tab is configured, all data models using the connection have access to remarks (Query > Show Remarks).

To add remarks from stored metadata:


1 On the Remarks tab, select Table Remarks to set up remarks for tables, or select Column Remarks to set
up remarks for columns.

Click Clear to clear the entry fields if you make a mistake and want to start over.

2 In the Tab Name field, type the name of the tab that you want to be displayed in the Show Remarks dialog
box.


3 In the Select field, enter the name of the column of table or column remarks.
4 In the From field, enter the physical name of the table containing table or column remarks.
5 Use the Where field to link the selected topic to its corresponding remark.
Use the following syntax in the Where field:
Name of the Remarks Table=:TABLE

and
Name of the Remarks Column=:COLUMN

The dynamic variable automatically inserts the physical name of the object from which the user is requesting data in the application. Interactive Reporting Studio displays remarks when it finds rows in the remarks tables that match the names temporarily stored in :TABLE and :COLUMN. You can also use the variables :TABALIAS (displays the name of a table) and :COLALIAS (displays the name of a column) to specify table and column aliases in the SQL.
Note: The database variables must be entered in upper case and preceded with a colon.
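For instance, if table remarks were stored in an invented table named TABLE_REMARKS with columns REMARK_TEXT and TABLE_NAME, the entries for a table-remarks tab might be:

    Tab Name:  Table Descriptions
    Select:    REMARK_TEXT
    From:      TABLE_REMARKS
    Where:     TABLE_NAME=:TABLE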

6 Click Add to add the tab to the Remarks Tabs list.


The Remarks Tabs list shows all of the tabs you entered, in the order in which you entered them. The first tab in the list is the default, or first, tab to be displayed in the Show Remarks dialog box. Use the following buttons to reorder the Remarks tabs:

Up: Moves a tab up one position (toward the front of the Show Remarks dialog box).
Down: Moves a tab down one position (toward the back of the Show Remarks dialog box).

To update a Remarks tab:


1 On the Remarks tab, select the tab from the Remarks tabs list.
The information for the selected tab is displayed in Remarks SQL fields.

2 Enter the desired changes in the Select, From, and Where fields, and then click Update.

To delete a Remarks tab:


On the Remarks tab, select the tab from the Remarks tabs list and click Delete.


Chapter 23

Data Modeling in Interactive Reporting Studio

This section describes how to create data models from database tables. It provides detailed information on joins, topics, and views, and on data model properties and options.

In This Chapter
About Data Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Building a Data Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Understanding Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Working with Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Working with Data Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Data Model Menu Command Reference . . . . . . . . . . . . . . . . . . . . . . . . 438

About Data Models


When you use Interactive Reporting Studio to query a relational database and retrieve information, you work with a data model: a focused visual representation of the actual database tables. Interactive Reporting Studio users can create data models, selectively viewing and packaging the contents of a database for querying or distribution. Distributed or shared data models are beneficial for several reasons:

They substitute descriptive names for arcane database table and column names, enabling users to concentrate on the information rather than on the data retrieval.
They are customized for users' needs. Some data models include prebuilt queries that are ready to process, and may even include reports that are formatted and ready to use. Other data models may automatically deliver data to a user's computer.
They are standardized and up-to-date. A data model stored in the document repository can be used throughout the company and is easily updated by the database administrator to reflect changes in the database structure.

Note: You can create and modify data models only if you have the Data Model, Query, and Analyze adaptive state.

A data model displays database tables as topics in the Content frame. Topics are visually joined together like database tables and contain related items used to build a query. Multiple queries can be constructed against a single data model in the same Interactive Reporting document. If you modify the data model, any changes are automatically propagated to the corresponding queries.

In addition to standard data models derived from database tables, you can create metatopics: virtual views independent of the actual database. You use metatopics to standardize complex calculations and to simplify views of the underlying data with intuitive topics customized for business needs.

If you want to preserve a data model for future queries, you can promote it to a master data model and lock its basic property design. This feature enables you to generate future queries without having to recreate the data model. An Interactive Reporting document can contain any number of master data models, from which any number of queries can be generated.


Building a Data Model


Data models are the building blocks of queries. In a data model, database tables are represented by topics. A topic is a list of items, each corresponding to a column in the database tables.

Adding Topics to a Data Model


You create data models by choosing database tables from the Table Catalog and assembling them as topics in the Content frame. The Table catalog is a listing of the tables available in the database. Once connected to a database, you can display the Table catalog and drag the topics that you want to include in the data model to the Content frame.

To add a topic to a data model:


1 In the Query section, select DataModel > Table Catalog, or press F9.
If you are not connected to the database, you are prompted to log on. Once connected, the Table catalog is displayed listing the available database tables.
Note: Users can filter tables from the display as part of the database connection. See Filtering Tables on page 387 for more information.

2 Drag tables from the Table catalog to the Content frame.


Each database table you place in the Content frame is converted to a topic in a data model.

Removing Topics from a Data Model


To remove a topic from a data model, select the topic and select Remove on the pop-up menu
or press [Del].


Understanding Joins
Tables in relational databases share information through a conceptual link, or join, between related columns in different tables. These relationships are displayed in the data model through visual join lines between topic items. Joins enable you to connect or link records in two tables by way of a shared data field. Once a data field is shared, other data contained in the joined tables can be accessed. In this way, each record can share data with another record without storing and duplicating the same kind of information. Joins can be created automatically for you, or you can join topics manually.

Suppose you queried only the Customers table to determine the number of customers. You would retrieve 32 records with the names of the stores that purchase products, because exactly 32 stores have made a purchase. But suppose you made the same query with the Customers table and Sales table joined. This time you would retrieve 1,000 records, because each store made multiple purchases. Figure 31 shows the intersection of all records in the Sales table that mention stores listed in the Customers table.

Figure 31: Result of Join Between Two Tables

In other words, a database query returns the records at the intersection of the joined tables. If one table mentions stores 1-32 and the other table mentions those same stores repeatedly, each of these records is returned. If you join a third table, such as Items, records are returned from the intersection of all three. Figure 32 shows the intersection of all records in the Sales table that have stores in the Customers table and items in the Items table.

Figure 32: Result of Join Between Three Tables


The following sections discuss the types of joins available and how to use them:

Simple Joins on page 419
Cross Joins on page 419
Automatically Joining Topics on page 420
Specifying an Automatic Join Strategy on page 420
Manually Joining Topics on page 421
Showing Icon Joins on page 421
Specifying Join Types on page 422
Removing Joins on page 422
Using Defined Join Paths on page 423
Using Local Joins on page 423

Simple Joins
A simple join between topic items, shown in Figure 33, retrieves rows where the values in joined columns match.

Figure 33: Simple Join Between Identical Store Key Fields in Two Topics

Joins need to occur between items containing the same data. Often, the item names in two topics are identical, which can indicate which items to join. When selecting items to join, however, recognize that two items may share the same name but refer to completely different data. For example, an item called Name in a Customer table and an item called Name in a Product table are probably unrelated.
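In generated SQL, a simple join becomes an equality condition between the joined columns. The Customers and Sales tables and their columns below are hypothetical:

    SELECT Customers.Store_Name, Sales.Amount
    FROM Customers, Sales
    WHERE Customers.Store_Id = Sales.Store_Id

Only rows whose Store_Id values match in both tables are returned.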

Cross Joins
If topics are not joined, the database cannot correlate the information between the tables in the data model. In this case, the database creates a cross join between the non-joined tables, in which every row in one table is joined to every row in the other table. This leads to invalid datasets and runaway queries.
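In SQL terms, a cross join is simply a query with no join condition (again using the hypothetical Customers and Sales tables):

    SELECT Customers.Store_Name, Sales.Amount
    FROM Customers, Sales

With 32 customer rows and 1,000 sales rows, this query returns 32,000 row combinations, few of which are meaningful.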


Automatically Joining Topics


The Auto Join Tables option automatically joins database tables as they are added to the Content frame using one of three different join strategies. If Auto Join Tables is not selected, you can manually create joins between topics in the Content frame.

To automatically join topics as they are added to the Content frame:


1 Select DataModel > Data Model Options.
The Data Model Options dialog box is displayed.

2 Select the General tab.
3 Select the Auto Join Tables check box and then click OK.
When you add tables from the Table catalog to the Content frame, joins automatically display between topics. Clear the Auto Join Tables check box to turn off this feature and manually create joins yourself.
Note: Joins are not added for topics that are in the Content frame before you select the Auto Join Tables option.

Specifying an Automatic Join Strategy


You can instruct Interactive Reporting Studio to use one of three different strategies when automatically joining topics. The strategy chosen is employed with a particular connection and saved with the Interactive Reporting database connection.

To select an automatic join strategy for a database connection:


1 If you are not currently connected to the database, select an Interactive Reporting database connection
and log on.

2 Select Tools > Connections > Modify.


The Meta Connection Wizard is displayed with the On The Current Connection option selected.
Note: For information on metatopics and metadata, see Chapter 22, Using Metatopics and Metadata in Interactive Reporting Studio.

3 Click Next.
The Meta Connection Wizard displays the repository where the meta settings are stored.

4 Click Edit.
The Metadata Definition dialog box is displayed.

5 Select the Joins tab.


6 Select a join strategy. Join strategy options are:


Best Guess: Joins topics through two items that share the same name and data type.
Custom: Joins topics according to a specified schema coded in SQL in the Metadata Join Definitions area.
Server-Defined: Joins topics based on primary and foreign keys established in the underlying relational database.

7 When you have completed the selection, click OK.

Manually Joining Topics


You can create relationships between topics by manually joining topic items in the Content frame (see Figure 34).

Figure 34: Manually Created Join Between Two Related Data Items in Two Topics

To manually join two topics, select a topic item, drag it over a topic item in another topic, and
release. A join line is displayed, connecting the items in the different topics.

Showing Icon Joins


When a topic is iconized, you can toggle the display of joins to other topics in the Content frame.

To show icon joins:


1 Select DataModel > Data Model Options.
The Data Model Options dialog box is displayed.

2 Select the General tab.
3 Select the Show Icon Joins check box and click OK.
Clear the Show Icon Joins check box to turn off this feature and hide joins of iconized topics.


Specifying Join Types


Join types determine how data is retrieved from a database.

To specify a join type:


1 Select a join line and select View > Properties, or click the Properties icon.
The Join Properties dialog box is displayed.

2 Select a join type and click OK.


Four types of joins are supported:

Simple join (=, >, <, >=, <=): A simple (linear) join retrieves the records in both tables that have identical data in the joined columns. You can change the default operator for simple joins by choosing an operator from the drop-down box. The default setting, Equal, is preferred in most situations.

Left outer join (+=): A left join retrieves all rows from the topic on the left and any rows from the topic on the right that have matching values in the join column.
Right outer join (=+): A right join retrieves all rows from the topic on the right and any rows from the topic on the left that have matching values in the join column.
Outer or full outer join (+=+): An outer join combines the impact of a left and a right join. An outer join retrieves all rows from both tables, matching joined column values where found, and retrieves nulls for non-matching values. Every row represented in both topics is displayed at least once.
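These join types map to SQL roughly as follows; the exact syntax generated depends on the database server, and Customers and Sales are hypothetical tables:

    -- Simple join: only matching rows
    SELECT * FROM Customers C, Sales S
    WHERE C.Store_Id = S.Store_Id

    -- Left outer join: all customers, plus any matching sales
    SELECT * FROM Customers C
    LEFT OUTER JOIN Sales S ON C.Store_Id = S.Store_Id

    -- Full outer join: every row from both tables at least once
    SELECT * FROM Customers C
    FULL OUTER JOIN Sales S ON C.Store_Id = S.Store_Id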

Note: A fifth join type, Local Join, is available for use with local Results sets. See Using Local Joins as Limits on page 425 for more information.

Caution! Not all database servers support all join types. If a join type is not available for the database to which you are connected, it is unavailable for selection in the Join Properties dialog box.

Removing Joins
You can remove unwanted joins from the data model. Removing a join has no effect on the underlying database tables or any server-defined joins between them. A deleted join is removed from consideration only within the data model.

To remove a join from a data model, select the join and select Remove on the pop-up menu.


Using Defined Join Paths


Defined join paths are customized join preferences that enable you to include or exclude the appropriate tables based on the items referenced on the Request and Limit lines. Bridge tables, which are not explicitly referenced in the query, are transparently added to the SQL From clause. The net effect limits the query to the referenced tables, based on the available table groupings, and generates the most efficient SQL for queries off the data model.

To use defined join paths:


1 Select DataModel > Data Model Options.
The Data Model Options dialog box is displayed.

2 Select the Joins tab.
3 Select the Use Defined Join Paths option and click Configure.
The Define Join Paths dialog box is displayed.

4 In the Define Join Paths dialog box, click New Join Path to name and add a join path.
The New Join Path dialog box is displayed.

5 In the New Join Path dialog box, enter a descriptive name for the join path and click OK.
The join path name is highlighted in the Defined Join Paths dialog box.

6 Select a topic in the Available Topics list and use the move right (-->) button to move it to the Topics In Join Path list.

7 To remove topics from the Topics In Join Path list, select them and use the move left (<--) button.

8 When join paths are completely defined for the data model, click OK.
Tip: Join paths are not additive; Interactive Reporting Studio cannot determine which tables are common among several paths and link them on that basis. Join paths are not linear, and if selected, the simplest join between all tables in the path is included when processing a query.

Using Local Joins


You can add the results of one query to the results of another query in an Interactive Reporting document by using local joins. Rows from the data sources are joined in the Results section.
Note: No aggregation can be applied to local result tables and the local results data set cannot be processed to a table.

For example, you might want to see budget figures drawn from an MS SQL Server database and sales figures drawn from an Oracle database combined in one Results set.

Caution! Local joins are memory- and CPU-intensive operations. When using this feature, limit the local joins to a moderate number of rows.


The following sections explain how to work with local joins:


Creating Local Joins on page 424
Using Local Joins as Limits on page 425
Limitations of Local Results and Local Joins on page 426

Creating Local Joins


To create a local join:
1 Select Insert > Insert New Query to create the first query that you want to include in the Interactive
Reporting document:

a. Verify item data types and associated data values in the source documents so you will know how to join them in the Interactive Reporting document.
b. Build the Request line, and add server and local limits, data functions, and computations to the query as needed.
c. Process the query, which fills the Results section.
Tip: For consistent results, queries that use local joins should be placed after the queries that generate the needed results.

2 Select Insert > Insert New Query to create the second query.
Add topics from the Table catalog to the Content frame, and build the Request line.

3 In the Table catalog, select Local Results on the pop-up menu.
4 In the Table catalog of the second query, select Local Results on the pop-up menu.
A Local Results icon is displayed in the Catalog frame.

5 Expand the Local Results icon to display the Results table icon.
6 Double-click a Results set or drag it to the Content frame.
The Results set from the first query that you built is displayed as a topic in the Content frame.

7 In the Content frame, manually create a join between the Results set and another topic.
8 Build the Request line and click Process.
Local joins are processed on the client machine. You can use Process All to process the queries, in which case the queries are processed in the order in which they are displayed in the Section catalog. For example, in an Interactive Reporting document with three queries, Query1, Query2, and Query3, the queries are executed in the order shown. If Query1 is a local join of the results of Query2 and Query3, it is still processed first. If Query2 and Query3 have existing Results


sets, then the local join in Query1 will occur first, before processing Query2 or Query3. If the Results sets for either Query2 or Query3 are not available, then one or both of those queries will be processed first, in order to get the required results.

Using Local Joins as Limits


A limit local join is a variation of a local join. Instead of independently running two queries and then locally joining the data on the desktop, a limit local join runs the first query to retrieve a list of values, then uses those values to limit a column in the second query. For example, a query may be run against an inventory table in an Oracle database to retrieve a list of part numbers that are out of stock. The resulting part number list may then be used as a limit local join to define the list of values retrieved from a work_in_process table in another database, to determine the status of the stock replenishment.
Note: The second query could potentially be a very long SQL statement since using limit local joins generates an SQL Having clause for each item.

To use the values retrieved from one query as limit values for another query:
1 Build the first query that you want to include as a limit in the second query:
a. Verify item data types and associated data values in the source documents so you will know how to join them in the second query.
b. Build the Request line, and add server limits, data functions, and computations to the query as needed.
c. Click Process.

2 Select Insert > Insert New Query.
3 Build the second query.
a. Verify item data types and associated data values in the source documents so you will know how to join them to the first query.
b. Build the Request line, and add server and local limits, data functions, and computations to the query as needed.

4 In the Table catalog of the second query, select Local Results on the pop-up menu.
A Local Results icon is displayed in the Catalog frame.

5 Expand the Local Results icon to display the Results table icon.
6 Double-click the Results icon or drag it to the Content frame.
The Results set from the first query that you built is displayed as a topic in the Content frame.
Note: The purpose of embedding the Results set is to obtain a list of values. Do not include any Results set topic items on the Request line, and do not place any limits on topic items in this Results set; the Request line must not include any fields from the embedded Results section. If you do add a topic item from, or set a limit on, this Results set, you will not be able to set a Limit Local join.


7 In the Content frame, manually join the Results set to another topic in the second query.
A join line is displayed, connecting the different topics.

8 Double-click the join line that was created by joining the Results set and the other topic, or click the Properties icon.

The Join Properties dialog box is displayed.

9 Select Limit Local Join and click OK.


Note: If the Limit Local Join option does not display in the Join Properties dialog box, make sure that no Results set topic items are included in the Request line and that no limits have been placed on any Results set topic item.

10 Click Process to build the query and apply the limit constraint.

Limitations of Local Results and Local Joins


The following limitations apply to local results and local joins:

1. You cannot use any governors with local results topics as part of the query. The following governors are accessed from the Query Options dialog box:

Return Unique Rows
Row limit
Time limit
Auto-Process
Custom Group By

2. You cannot have more than one local join per local results topic. When setting up a query using a local results topic, you cannot have more than one local join between the local results topic and another topic or local results topic.
3. You cannot set query limits on local results topic items. Limits must be set in the query/results sections of the query that produces the local results. Attempting to set a query limit on a local results topic item invokes the following error message: Unable to retrieve value list for a computed or aggregate request item.
4. You cannot aggregate local results tables.
5. You cannot process local results data to a table.


6. You cannot have more than one limit local join. A limit local join involves two topics, one of which is a local results topic. A local results item is used as a limit on the other topic. Attempting to define more than one limit local join invokes the following error message: This query contains a local results object involved in a join limit. It is not possible to have other local results objects when you have a local join limit.
7. You cannot combine limit local joins with local joins. Attempting to combine a limit local join and a local join invokes the following error message: This query contains a local results object involved in a join limit. It is not possible to have other local results objects when you have a local join limit.
8. You should expect compromised performance when a query is associated with large local results sets. This is expected behavior, since Interactive Reporting Studio is not a database.
9. You cannot use metatopics with local results. You cannot promote a local results topic to a metatopic or add a local results topic item as a metatopic item. The Promote To Meta Topic and Add Meta Topic Item DataModel menu options are not available for local results topics and topic items.
10. You cannot access or change properties of local results topic items. Properties include remarks, number formatting, aggregate/date/string functions, data types, and name.
11. You cannot create query Request line computed columns from local results topic items. The Add Computed Item menu option is not available for local results topic items.
12. You cannot use the Append Query features of unions or intersections with local results topic items. The Append Query menu option is not available when a local results topic is part of a query.

Working with Topics


There are several features that enable you to customize the appearance of topics to make them easier for end users to work with. The following sections describe how to work with topics:

Changing Topic Views on page 428
Modifying Topic Properties on page 429
Modifying Topic Item Properties on page 430
Restricting Topic Views on page 430


Changing Topic Views


You can change how you view topics in the Content frame. There are three ways to view topics, as shown in Figure 35:

Figure 35: Structure (1), Detail (2), and Icon (3) Topic Views

Structure view: Displays a topic as a simple list of component data items. This is the default setting. Structure view enables you to view and select individual data items to include in a query. This is the easiest view to use if you are familiar with the information that a data model, topics, and topic items represent.

Detail view: Presents a topic in actual database view with a sample of the underlying data. When you change to Detail view, a small query is executed and a selection of data is loaded from the database server. The topic is displayed as a database table, with each topic item displayed as a database column field. Detail view is useful when you are unfamiliar with a topic. You can browse the first few rows of data to see exactly what is available before adding a topic item to the query.

Note: Detail view is not available for special items such as metatopics or computed data items.

Icon view: Deactivates a topic and reduces it to an icon in the Content frame. When a topic is displayed in Icon view, associated items are removed from the Request and Limit lines. The topic is not recognized as being joined to other topics, and is temporarily removed from the data model and the SQL statement. If no items from a topic are needed for a particular query and the topic does not link together other topics that are in use, reduce the topic temporarily to Icon view to make large queries run faster and consume fewer database resources.


To change a topic view:


1 Select a topic in the Content frame.
2 Select DataModel > Topic View > View.
The topic is displayed in the chosen view. In Icon view, you can restore the topic view by double-clicking the topic icon.

Modifying Topic Properties


Use the Topic Properties dialog box to customize the way a topic and its associated items are displayed in the data model. By default, items are displayed in the order in which they are defined in the underlying table, or the order in which they are added to a metatopic. You can change the way items are ordered or restrict the display of items within a topic.

To modify topic properties:


1 In the Catalog frame, select the topic and select View > Properties, or click the Properties icon.
The Topic Properties dialog box is displayed.

2 Change the properties to the desired settings and click OK.


Available options include:

Topic Name: The name of the topic that is displayed in the Catalog frame. You can change this field to display a more user-friendly name in the Content frame.
Physical Name: Full name of the underlying database table.
Items To Display: The topic items available for the selected topic.

Hide/Show All: Hides or actively displays all topic items.
Up/Down: Moves the selected item up or down one space in the topic display.
Sort: Alphabetically sorts the listed items.

Set As Dimension: Defines the drill-down path or hierarchy for dimensional analysis as shown in the data model. This feature is used in conjunction with the Set As Fact field in the Topic Item Properties dialog box.
Allow Icon View: Enables the icon view option for the topic.
Allow Detail View: Enables the detail view option for the topic.
Cause Reload: Specifies automatic reloading of server values the next time Detail view is activated.
Rows to Load: Specifies the number of rows to be loaded and displayed in Detail view.


Modifying Topic Item Properties


Topic items are discrete informational attributes of topics, such as Customer ID, Street Address, or Sales Revenue, and are the basic building blocks of a query. Topic items are organized within topics and represent the columns of data in database tables. You can modify the names of topic items to make them easier for users to understand and set drill-down path information.

To modify a topic item:


1 Select the topic item and select View > Properties, or click the Properties icon.

The Topic Item Properties dialog box is displayed, showing information about the source of the topic column in the database.

2 Change the topic item properties to the desired setting and click OK.
Available options include:

Item Name: Displays the name of the item.
Set As Fact: Eliminates items with integer or real values from a drill-down path. This feature is used in conjunction with the Set As Dimension field in the Topic Properties dialog box.
Information: Additional column information from the database. Information about keys is displayed only when server-defined joins are enabled.
Length: Enables you to change the string length of columns.

Restricting Topic Views


Individual topics within a data model can be restricted to control the availability of the Icon view and Detail view, or to limit the number of rows retrieved (which can consume network and server resources) for Detail view.

To set access to Icon or Detail views:


1 Double-click a topic to be view-restricted.
The Topic Properties dialog box is displayed with the view options shown toward the bottom of the dialog box. The dialog box also contains options for customizing topics.

2 Select the Allow Icon View or Allow Detail View check boxes to toggle the availability of either view.
3 If necessary, select Cause Reload to specify loading from the server when Detail view is selected.
New data is retrieved the next time Detail View is activated for the topic, after which Cause Reload will be toggled off automatically.

4 If desired, enter the number of rows to be returned from the server for Detail view, and click OK.

By default, the first ten rows of a table are retrieved for preview in Detail View.


Working with Data Models


You can customize data models in a number of ways. You can change how data models are displayed in the Content frame. You also can define other data model options, such as user access, feature availability, and query governors. Review the following sections for information on:

Changing Data Model Views on page 431
Setting Data Model Options on page 432
Automatically Processing Queries on page 436
Promoting a Query to a Master Data Model on page 436
Synchronizing a Data Model on page 437

Changing Data Model Views


There are a number of ways to view a data model. By default, database-derived source topics and any metatopics you have created are displayed together in the Content frame in Combined view.

To change the data model view:


Select DataModel > Data Model View > Option. Options include:

Combined: Displays both original (database-derived) topics and metatopics in the Content frame.
Original: Displays only database-derived topics in the Content frame.
Meta: Displays only metatopics in the Content frame.

Caution! If an original topic contains items that have been copied to a metatopic, do not iconize or remove the original topic from the Content frame in Combined view. Metatopic items are based on original items and remain linked to them. If an original topic is iconized or removed, any metatopic items based on its contents become inaccessible.


Setting Data Model Options


To set data model options:
1 Select DataModel > Data Model Options.
The Data Model Options dialog box is displayed.

2 Set the desired options for the data model and click OK.
Note: All users have access to the join preferences, but not to the limit, query governor, or auditing features, which are designed to customize data models stored for distribution.

Before applying any new features, be aware that:

One of the first three limit options (Show Values, Custom Values, or Custom SQL) must be enabled in order for users to apply limits in the Query section.
Changing join usage usually changes the number of rows retrieved from the database. It also introduces the possibility that novice users may create improperly joined queries.
If query governors are set as part of a data model, and end users set query governors on a query built from the data model, the more restrictive governor takes precedence.

The following sections provide additional information about data model options:

Saving Data Model Options as User Preferences on page 432
Saving Data Model Options as Profiles on page 433
Data Model Options: General on page 433
Data Model Options: Filters on page 434
Data Model Options: Auditing on page 436

Saving Data Model Options as User Preferences


You can save the data model options you specify as default user preferences by clicking the Save as Defaults button on any of the tabs in the Data Model Options dialog box.

To change the defaults without affecting any existing data models (including the current one), click Save as Defaults and then click Cancel.

To change the defaults and apply them to the current data model, click Save as Defaults and then click OK.
Note: The following data model options apply to the current data model only and cannot be saved as defaults: Topic Priority information and the Use Defined Join Paths option on the General tab.


Saving Data Model Options as Profiles


When you save data model options as default user preferences and apply them to a data model, you can save the Interactive Reporting document for use as a profile. Over time, you can build a set of profile Interactive Reporting documents. By opening a profile Interactive Reporting document and saving the options from its data model as defaults, users can switch between proven data model options appropriate to the task at hand.

A first-time profile Interactive Reporting document, created from a blank data model before any changes to the default settings have been saved, can be used to restore the data model options to the default settings. A more complete profile Interactive Reporting document, appropriately populated with topics, can be used to promulgate data model options for the Use Defined Join Paths feature.

Data Model Options: General


Use the General tab to select the following design options for the tables and the governors for the data model:

Design Options

Auto Alias Tables: Replaces underscores with spaces and displays item names in mixed upper/lower case when a table is added to the Content frame from the Table catalog.
Auto Join Tables: Automatically joins database tables as they are added to the Content frame, based on one of three join strategies (for example, the Best Guess strategy joins columns whose names and data types are identical). If Auto Join Tables is not selected, you must manually create joins between topics in the Content frame.
Show Icon Joins: Shows topic joins when a topic is in icon view (minimized). It is recommended that you activate this feature.
Allow Drill Anywhere: Activates the Drill Anywhere menu item on the menus within the Pivot and Chart sections. This option enables users to drill to any field.
Allow Drill To Detail: Activates the Drill to Detail menu item on the menus within the Pivot and Chart sections. This option enables users to query the database again once they have reached the lowest level of detail; it works only if the Allow Drill Anywhere option is selected.

Governors (Interactive Reporting Studio Only)

Return First ____ Rows: Specifies a cap on the number of rows retrieved by a query against the data model, regardless of the size of the potential Results set.

Note: All users can also set query governors, but data model options automatically override governors set at the query level. If row limits are also set at the query level, the lower number is enforced.

Time Limit ____ Minutes: Specifies a cap on the total processing time of a query against the data model. Seconds are entered as a decimal number. Available for asynchronous connection API software (for example, Open Client) that supports this feature.


Data Model Options: Filters


Use the Filters tab to specify limit browse-level preferences and to select global limit options. When you use Show Values to set filters, you may sometimes need to sift through a lot of data to find the particular values you need. Limit preferences enable you to dictate the way existing limits reduce the values available through the Show Values command.

For example, suppose you want to retrieve customer information only from selected cities in Ohio, but the database table of customer addresses is very large. Because Interactive Reporting Studio applies a default limit preference, once you place the initial limit on State, the Show Values set returned for City is automatically narrowed to the cities located in Ohio. This saves you from retrieving thousands of city values from all states and sales regions. You can adjust this preference so that the initial limit selection has no effect on the potential values returned for the second limit (all cities are returned regardless of state).

Filter Options

Show Minimum Value Set: Displays only values that are applicable given all existing filters. This preference takes into account limits on all tables related through all joins in the data model (which could potentially be a very large and long-running query).
Show Values Within Topic: Displays values applicable given existing limits in the same topic. This preference does not take into account limits associated by joins in the data model.
Show All Values: Displays all values associated with an item, regardless of any established limits.

Tip: When setting these preferences for metatopics, be sure to display the data model in Original view.

Global Limit Options (Interactive Reporting Studio Only)

Show Values: Globally restricts use of the Show Values command in the Limit dialog box, which is used to retrieve values from the server.
Custom Values: Globally restricts use of the Custom Values command in the Limit dialog box, which is used to access a custom values list saved with the Interactive Reporting document or in a flat file.
Custom SQL: Enables the user to code a limit directly using SQL.

Note: The Topic Priority dialog box is displayed only if you first select a join in the data model.

Note: Since most data models do not have the same set of topics, you cannot save changes to the topic priority as default user preferences. (For more information on default user preferences, see Saving Data Model Options as User Preferences on page 432.)


Data Model Options: Joins


Use the Joins tab to select join usage preferences.

Use All Joined Topics: Specifies the use of all joined (non-iconized) topics in the data model.
Use The Minimum Number Of Topics: Specifies the use only of topics represented by items on the Request line.
Use All Referenced Topics: Specifies the use only of topics represented by items on the Request or Limit lines. Changing join usage usually changes the number of rows retrieved from the database. It also introduces the possibility that novice users may create improperly joined queries.
Use Defined Join Paths: Specifies the use of a predefined join path that groups the joins necessary to query from the data model. Click Configure to create a custom join path. Note that since most data models do not have the same predefined join paths, you cannot save the Use Defined Join Paths option as a default user preference. (For more information on default user preferences, see Saving Data Model Options as User Preferences on page 432.)
Use Automatic Join Path Generation: Instructs Hyperion Intelligence Clients to dynamically generate joins based on the context of user selections on the Request and Limit lines.

Data Model Options: Topic Priority


Use the Topic Priority tab to define the order in which tables are included in the Hyperion Intelligence Clients SQL statement. Defining a topic priority can significantly speed up large queries. When defining topic priorities, remember that the centralized fact topic in your data model is the largest and receives the most use during a query. By prioritizing this topic first, followed by the remaining topics in descending order of magnitude, the database server can more efficiently use the internal join logic between tables.

To set topic priorities in a data model:


1 Select DataModel > Data Model Options.
The Data Model Options dialog box is displayed.

2 Select the Topic Priority tab.
Topics in the data model are listed in the Tables list, in the order in which they were placed in the Content frame.

3 Rank the topics in the desired order. Click the arrow buttons to move selected topics up or down in the list.


4 Click Auto-Order to automatically detect the magnitude of each topic and rank the topics accordingly, in descending order.

5 When the topics appear in the desired order, click OK.

Note: Since most data models do not have the same set of topics, you cannot save changes to the topic priority as default user preferences. (For more information on default user preferences, see Saving Data Model Options as User Preferences.)

Data Model Options: Auditing


Use the Auditing tab to monitor user events within a managed query environment. By attaching SQL statements to specific Interactive Reporting document events, an advanced user can record how Interactive Reporting Studio, a database server, and network resources are being used. When triggered, the SQL statements update an audit log table, which the administrator can query independently to track and analyze usage data. For detailed information about auditing, see Auditing with Interactive Reporting Studio on page 453.
Note: Although you can save the definitions of specific audit events as default user preferences, you cannot save the enabled/disabled state of the audit events as defaults. (For more information on default user preferences, see Saving Data Model Options as User Preferences on page 432.)

Automatically Processing Queries


Use the auto-process feature to have a standard query Interactive Reporting document automatically process when it is downloaded from the repository.

To set Auto-Process:
1 Open a standard query Interactive Reporting document in the Content frame.
2 Select Query > Query Options.
The Query Properties dialog box is displayed.

3 Select the Auto-Process check box, and then click OK.
4 Select File > Save To Repository to upload the Interactive Reporting document to the repository.
The query automatically processes when a user opens the Interactive Reporting document from the repository.

Promoting a Query to a Master Data Model


A query may be promoted to a master data model. Doing so essentially separates the data model from the query, enables multiple queries to be based on a single master data model, and creates a new data-model-only section in the Interactive Reporting document. Master data models do not contain Request lines.


The benefit is that any changes to the master data model are propagated to all dependent queries that are based on it. Each time a new query is inserted into an Interactive Reporting document that contains a master data model, you are prompted to link the new query to the master data model. When a query is promoted to a master data model, it is added to the Section frame as a new section. Once you promote a query to a master data model, you cannot undo it.

To promote a query to a master data model:


1 In the Query section, select or build a data model.
2 Select DataModel > Promote To Master Data Model.
Data models in Query sections that are linked to the master data model are locked and cannot be changed. They display with a gray background and the message: Locked Data Model.

Synchronizing a Data Model


If data models are distributed to company personnel, it is important to keep them updated to reflect system changes. Data models provide a visual understanding of the database; if they are corrupted, users can become lost and frustrated. For example, consider the situation in which a database administrator structurally alters a database table by adding columns, modifying data, or renaming a field. If the changes are not registered to data models, then Dashboard sections, metatopics, or intranet-distributed reports become obsolete.

Advanced users can ensure data model integrity using the Sync With Database command, a one-step integrity check and update. Sync With Database detects inconsistencies with the database, updates the data model, and provides an itemized list of the changes made. The list can be used to update metatopics and report sections quickly and without interrupting workflow.

To synchronize a data model:


1 Open the data model and log on to the database.
2 Select DataModel > Sync With Database.
Interactive Reporting Studio compares original topics with their corresponding database tables. If the structure of the tables has changed, Interactive Reporting Studio modifies data model topics to reflect the changes. The Data Model Synchronization dialog box is displayed, describing changes to the database. Select the Show Detail Information check box for an itemized list.
Tip: Because metatopics are a separate logical layer constructed from original topics, they are not automatically updated. The Sync With Database feature removes any altered items from metatopics, but preserves the remaining structure so that repairs are minor. Sync With Database works transparently with most other customized attributes of a data model.


Data Model Menu Command Reference


The following table provides a quick reference to the commands available on the Data Model menu and lists any related shortcuts.
Table 38 Data Model Menu Commands

Table Catalog - Expands the Table catalog in the Catalog frame. (Keyboard shortcut: F9)
Data Model View - Enables you to select combined, original (database-derived), or metaviews of topics.
Topic View - Enables you to select structure, detail, or icon views of topics.
Promote to Metatopic - Creates a metatopic from an existing topic.
Add Metatopic - Adds a metatopic to the data model.
Add Metatopic Item - Enables you to add either a server or local metatopic item.
Sync With Database - Detects inconsistencies with the database, updates the data model, and provides an itemized list of the changes.
Promote To Master Data Model - Promotes the current query to a master data model.
Data Model Options - Enables you to specify options for General, Limits, Joins, Topic Priority, and Auditing.


Chapter 24 Managing the Interactive Reporting Studio Document Repository

This section describes how to create and manage the document repository, including how to upload Interactive Reporting documents to the repository, how to open Interactive Reporting documents from it, and how to control document versions. Note that most of the features described in this section are available only to advanced users of Interactive Reporting Studio.

In This Chapter

About the Document Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440 Administering a Document Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440 Working with Repository Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445 Document Repository Table Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448


About the Document Repository


The document repository provides an efficient way to manage and distribute data model objects for end-user querying. By storing standardized objects in a document repository located on the database server, advanced users can provide version-controlled data models for entire workgroups to access as needed.
Note: Interactive Reporting documents are no longer published and distributed through the OnDemand Server. Instead, these functions have been integrated into Hyperion System 9 BI+ Workspace. For more information, see the Hyperion System 9 BI+ Workspace Administrators Guide. The document repository described in this section is a place for an Interactive Reporting Studio user to store and maintain data models and standard reports.

Objects that can be stored in the document repository are:

Data model: A basic data model that is a group of related topics designed as a starting point for building a query. A basic data model opens in the Content frame of the Query section, in which a group of joined topics is displayed.

Standard query: A data model with a query already assembled. After the query is downloaded, you simply process the query to retrieve data. Standard queries are ideal for users who use the same data on a regular basis; for example, to get inventory updates that fluctuate from day to day. A standard query opens in the Results section. If a standard query has the auto-process feature enabled, the query automatically runs when it is downloaded and populates the Results and report sections with data.

Standard query with reports: A standard query that includes preformed reports, which enable you to process the query and view the data using customized report sections. A formatted standard query with reports is displayed in the Pivot, Chart, Dashboard, or Report sections.

Administering a Document Repository


Use the Administer Repository dialog box as an access point to create and maintain repositories and the objects stored inside the repositories. You can use this dialog box to inventory the contents of all repositories on a database server, and update the descriptions of the stored contents.
Note: Repository administration is the province of the Interactive Reporting Studio advanced user. The data model contents of a document repository are available to end users, but only an advanced user can store and manage shared repository objects.

The following sections describe the tasks associated with administering a document repository:

Creating Repository Tables on page 441
Confirming Repository Table Creation on page 442
Managing Repository Inventory on page 443
Managing Repository Groups on page 444


Creating Repository Tables


A repository is a central place in which an aggregation of data is kept and maintained in an organized way. A document repository is a group of specialized database tables used to store different kinds of data models. A document repository can be located on any database in the network environment, and can even store data models associated with any other database in the environment.

To create repository tables:


1 Select Tools > Administer Repository.
2 Choose Select to open the Select Connection dialog box and select the Interactive Reporting database connection for the database on which you want to create repository tables, or select the Interactive Reporting database connection for the active Interactive Reporting document.

The Administer Repository dialog box is displayed.

3 Click Create to open the Create Repository Tables dialog box.
4 Change the default configuration as needed:

Owner Name: Enter the database and owner names (if applicable) under which you want to create the tables. If both database and owner are specified, separate them with a period (for example, Sales.GKL).

Grant Tables to Public: Check Grant Tables to Public to grant general access to the repository tables at the database level. You must grant access to the repository tables in order for users to download data models; otherwise, you need to manually grant access to all authorized users using a database administration tool. Do not grant tables to public if you need to maintain tight database security and upload privileges are permitted for only a small group of users.

Data Type Fields: Change default data types for column fields to match the data types of the database server. If the DBMS and middleware support a large binary data type, use it for VarData columns. If not, use the largest character data type.

5 Click Create All to create the repository tables under the specified user.
The All Tables Created dialog box is displayed.
Note: If table creation fails, make sure the database logon ID of the server has been granted Table Create privileges.

6 Click OK, and then click Close to close the Create All dialog box.
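If you did not grant tables to public, access can instead be granted selectively with standard SQL from a database administration tool. A minimal sketch, using the standard repository table names and hypothetical user names; exact GRANT syntax varies by database:

GRANT SELECT ON BRIOCAT2 TO ellen;
GRANT SELECT ON BRIOOBJ2 TO ellen;
GRANT SELECT ON BRIOBRG2 TO ellen;
GRANT SELECT ON BRIOGRP2 TO ellen;
-- Users who upload objects to the repository also need write access:
GRANT INSERT, UPDATE, DELETE ON BRIOCAT2 TO gavin;
GRANT INSERT, UPDATE, DELETE ON BRIOOBJ2 TO gavin;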


Confirming Repository Table Creation


Repository tables are hidden in the Table Catalog by default. To confirm that the repository tables were created (or if you would prefer to display the tables), you can modify the connection preferences of the Interactive Reporting database connection, and include the repository tables in the Table catalog.

To include repository tables in the Table catalog:


1 Select Tools > Connection > Modify.
The Database Connection Wizard is displayed.

2 Select the Show Advanced Options check box, and then click Next.
3 Enter a user name and password to connect to the data source, and then click Next.
4 Clear the Exclude Hyperion Repository Tables check box and click Next.
5 Click Next through the rest of the wizard dialog boxes, and then click Finish.
6 Save the Interactive Reporting database connection file.
7 Select DataModel > Table Catalog or press F9 to view the Table catalog, including the document repository tables.

The following document repository tables should be displayed:


BRIOCAT2
BRIOGRP2
BRIOBRG2
BRIOOBJ2

For detailed information on the document repository tables, see Document Repository Table Definitions on page 448.
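You can also confirm table creation directly from an SQL tool. A minimal sketch; it assumes you are connected under the owner specified when the tables were created:

SELECT COUNT(*) FROM BRIOCAT2;
SELECT COUNT(*) FROM BRIOGRP2;
SELECT COUNT(*) FROM BRIOBRG2;
SELECT COUNT(*) FROM BRIOOBJ2;

Each statement should succeed, returning a count of zero for a newly created, empty repository.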


Managing Repository Inventory


Use the Administer Repository dialog box to create and maintain document repositories and the objects stored inside the repositories. You can use this dialog box to inventory the contents of all repositories on a database server and update the descriptions of the stored contents.

To update a repository object description:


1 Select Tools > Administer Repository, and select the connection file associated with the repository object with which you want to work.

The Administer Repository dialog box is displayed.

2 Click the Inventory tab.
3 Select a model type from the Model Type drop-down box.
The Model Type drop-down box shows the model type folders that contain the repository objects. Interactive Reporting Studio supports three types of repository objects: Data Model, Standard Query and Standard Query with Reports. When you select a model type, the description for that model type becomes active.

4 Edit the description in the Description frame of the BRIOCAT2 area.


The BRIOCAT2 area shows the following catalog details for the selected model:

Unique Name: Name of the object as it is displayed in the repository.
Creator: Name of the person who created the object.
Created: Date on which the object was saved to the repository.
Version: Version number of the object.

5 Click Update.
To modify the attributes of a document object itself, download the object, alter the document and upload it to the repository. For more information, see Modifying Repository Objects on page 446.

To delete a repository object:


1 Select Tools > Administer Repository, and select the connection file associated with the repository object with which you want to work.

The Administer Repository dialog box is displayed.

2 On the Inventory tab, select the model type to be deleted from the Model Type drop-down box.
3 Select a repository object from the Model List and click Delete.
The object is deleted from the repository.


Managing Repository Groups


The repository group feature enables you to classify stored objects by their availability to distinct workgroups that you define. Users can download repository objects provided that they have access privileges in an authorized workgroup. This feature complements the open repository by adding a security layer, which enables you to consolidate objects into a single repository while selectively restricting access to certain objects as needed.

For example, suppose you are the database administrator at a software firm. Ellen needs access to sales and marketing data models to complete a customer survey presentation. Gavin, a product manager, uses these and product management data models to complete his competitive analyses. Jason, a salesperson, needs access only to the standard query with reports for sales. The solution is to create three groups: Product Management, Marketing, and Sales, and give each group access to the objects that it needs. Then assign users to appropriate groups: Ellen would have access to both sales and marketing, Jason to sales, and Gavin to all three.

To set up a repository group:


1 Select Tools > Administer Repository, and select the connection file associated with the repository group with which you want to work.

The Administer Repository dialog box is displayed.

2 Select the Groups Setup tab.
3 In the Groups field, enter the name of the group that you want to add to the repository structure, and click Add.
Tip: If you enabled Grant Tables To Public when creating the repository, the default group, Public, is in the Groups list.

4 Select the group with which you want to associate a user name or names.
5 Enter the user name(s) in the Users field, and click Add to add the names to the group.
Add multiple users by delimiting the names with commas in the edit field; for example: user1, user2, user3.

6 All users with access to the repository, regardless of other grouping affiliations, have default access to documents placed in the Public group.

To remove a user group or user, select the user name in the Users list and click Remove.


Working with Repository Objects


The following sections discuss how to create and modify repository objects, and how to use Automatic Distributed Refresh (ADR), a sophisticated version control feature, to control document versions:

Uploading Interactive Reporting Documents to the Repository on page 445
Modifying Repository Objects on page 446
Controlling Document Versions in Interactive Reporting Studio on page 448
Controlling Document Versions in Interactive Reporting Web Client on page 450

Uploading Interactive Reporting Documents to the Repository


After you have created a document repository, you can upload repository objects (data models, standard queries, and standard queries with reports) for version-controlled distribution to networked Interactive Reporting users.
Note: When you store objects in the document repository for user access, make the connection file available to users as well.

To upload an object to the repository:


1 With the repository object that you want to upload open in the Content frame, select File > Save To Repository, and select the connection file that you want to associate with the object.

If necessary, click Select to launch the Select Connection File dialog box, navigate to the connection file that you want to use, and click OK. The Save To Repository dialog box is displayed showing the Model tab.

2 In the Model Type area, select the type of object you are saving to the repository.
Select Data Model, Standard Query, or Standard Query with Reports.

3 In the Model Info area, enter information about the repository object.

Unique Name: Name that you want to show for the object in the repository.
Creator: Name of the person who created the object. This information is useful in tracing the document source for updates and so on.
Created: Date on which the object was saved to the repository.
Locked/Linked Object (Required For ADR): Toggles repository object locking. Previously, repository models were locked to maintain versions (see Controlling Document Versions in Interactive Reporting Studio on page 448) and could not be modified by the end user. Unlocked data models can be downloaded as usual and the query modified. However, once saved outside the repository, an unlocked model loses its automatic version control.

Working with Repository Objects

445

Prompt For Sync On Download: Prompts users with the request: A newer version of the object exists in the repository; downloading the changes may overwrite changes you have made to the local file. Would you like to make a copy of the current document before proceeding? If the user selects Yes, a copy of the locally saved object is made, Automatic Distributed Refresh is disabled for the copy, and the object is synchronized with the newer version of the object.

Description: Enter a description of the repository object and what it can be used for. The maximum length is 255 characters.

4 Select the Groups tab.


Groups associated with the owned repository are displayed in the Groups list. The PUBLIC group is included by default.

5 Use the arrow buttons to grant access to repository groups by adding them from the Available Groups list to the Selected Groups list.

Available Groups: Available user groups from which access can be granted.
Selected Groups: Groups added to the granted-access list for the stored object.

Tip: You must move the PUBLIC group to the Selected Groups list if you want to provide general, unrestricted access to the repository object.

6 Click OK to save the object to the repository.
7 Distribute the connection file to end users as needed to access both the object source database and, if necessary, the document repository used to store the object.

Modifying Repository Objects


You can make modifications to document objects stored in the repository by downloading, modifying, and uploading them again. You can save the object under a new name, but if the object is not significantly altered, it is best to retain consistency by reloading the document under the same name. This ensures that linked documents are automatically updated.

Caution! Modifications made to repository objects propagate throughout the user environment via Automatic Distributed Refresh (ADR), which tracks objects by unique ID and version number. Each time an object is uploaded to the repository, it is assigned a new version number. For ADR to work properly, you must upload a modified repository object with the same name as the original.


To modify a repository object:


1 Select File > Open From Repository > Select.
The Select Connection dialog box is displayed.
Note: You can also select the connection file currently in use if there is one. Current Interactive Reporting database connections are listed below the Select menu item.

2 Select the connection file that you want to use and click OK.
3 In the Password dialog box, type your user name and password and click OK.
The Open From Repository dialog box is displayed.

4 Navigate through the repository tree and select the repository object that you want to use.
The Open From Repository dialog box displays information about the selected object.

Unique Name: Name of the repository object.
Creator: Creator of the repository object.
Created: Date on which the repository object was created.
Description: General description of the repository object, its contents, and the type of information that can be queried.

5 Click Open.
The repository object is downloaded to the appropriate section.

6 Make the desired changes to the object, and then select File > Save To Repository.
7 Select the correct Interactive Reporting database connection for the repository object, and enter the user name and password if prompted.

The Save To Repository dialog box is displayed.

8 Select the Model tab and verify the correct document type in the Model Type field.
If the Model Type field is grayed out, the object has not been modified and cannot be saved to the repository at this time.

9 Add any object information in the Model Info area and then click OK.
You are asked whether you want to enter a unique name for the object. Click No to replace the current object with the object you just modified. Click Yes to save the modified object under a different name. For Automatic Distributed Refresh to work properly, you must save a modified object with the original object name and model type, and save it in the same user-owned repository.

10 If you assigned another name to the object, you are prompted to associate the modified object with a group. Click OK.

The Group tab is displayed automatically so that you can associate the object with a group.

11 Use the arrow buttons to grant access to repository groups by adding them from the Available Groups list to the Selected Groups list.

12 Click OK.


Controlling Document Versions in Interactive Reporting Studio


Automatic Distributed Refresh (ADR) is a sophisticated version control feature that transparently updates Interactive Reporting documents when the data model or standard query is changed in the document repository. ADR operates completely in the background without any user interaction. ADR assumes that:

Each object in the BRIOOBJ2 table has a unique ID number.
Each object is assigned an iterative version number each time it is altered and uploaded.

Data model objects are typically downloaded from the document repository into Interactive Reporting documents that are used to analyze data through pivots, charts, and other reports. When a user saves work to an Interactive Reporting document on disk (either a local hard disk or a file server), Interactive Reporting Studio stores both a link to the source object (which was downloaded from the document repository) and the connection information needed to reconnect to the repository.

When the Interactive Reporting document is reopened, Interactive Reporting Studio reads the link information, connects to the repository, checks whether the object still exists, and checks whether it has the same version number stored in the document file. If the object in the repository has been modified, it has a new version number, which tells Interactive Reporting Studio to update the old version saved in the Interactive Reporting document. For ADR to work properly, you must save a modified object with the original object name and model type, and save it in the same user-owned repository. Data models and standard queries (with or without reports) are synchronized using ADR.

Document Repository Table Definitions

The document repository tables are detailed in the following sections:

BRIOCAT2 Document Repository Table on page 449
BRIOOBJ2 Document Repository Table on page 449
BRIOBRG2 Document Repository Table on page 450
BRIOGRP2 Document Repository Table on page 450

Note: The following tables, which were created in Hyperion Intelligence version 6.6 and prior, are no longer used, nor are they referenced by any aspect of Interactive Reporting Studio: BRIOOCE2, BRIODMQ2, BRIOUSR2, BRIOSVR2, and BRIOUNIQ.


BRIOCAT2 Document Repository Table


Table 39 lists the details of the BRIOCAT2 table, which records a description of the repository objects and local documents loaded in the document repository.
Table 39 BRIOCAT2 Table

UNIQUE_ID (NUM): Unique identifier for a stored repository object.
OWNER (CHAR): Creator of the object.
APP_VERSION (CHAR): Version used to upload the object.
CREATE_DATE (DATE): Most recent date of upload for the object.
ROW_SIZE (NUM): Number of rows occupied by the stored object in the BRIOOBJ2 table.
READY (CHAR): Indicates whether the previous upload of the stored object was completed successfully.
FILE_NAME (CHAR): Descriptive name of the stored object.
FILE_TYPE (CHAR): File type of the stored object, such as data model, locked query, locked report, LAN-based, folder.
DESCRIPTION (CHAR): Description of the object.
VERSION (CHAR): Latest version number of the object, used for ADR.
TOTAL_SIZE (NUM): Total size of the stored object in bytes.
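Because BRIOCAT2 is an ordinary database table, an administrator can inventory the repository directly with SQL. A minimal sketch, assuming you have read access to the table:

SELECT FILE_NAME, FILE_TYPE, VERSION, CREATE_DATE, TOTAL_SIZE
FROM BRIOCAT2
ORDER BY CREATE_DATE DESC;

This lists each stored object with its current ADR version number, most recent upload date, and size in bytes.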

BRIOOBJ2 Document Repository Table


Table 40 lists the details of the BRIOOBJ2 table, which stores the actual objects loaded in the document repository.
Table 40 BRIOOBJ2 Table

UNIQUE_ID (NUM): Unique identifier for a stored repository object.
ROW_NUM (NUM): Sequence ID for a segment of the object.
VAR_DATA (BLOB or LONG RAW): Data model object in binary chunk format.

Working with Repository Objects

449

BRIOBRG2 Document Repository Table


Table 41 lists the details of the BRIOBRG2 table, which stores the associations between registered documents and repository groups.

Table 41 BRIOBRG2 Table

UNIQUE_ID (NUM): Unique identifier for a repository document.
GROUP_NAME (CHAR): Name of a repository group.

BRIOGRP2 Document Repository Table


Table 42 lists the details of the BRIOGRP2 table, which maintains the list of repository groups and their associated users and privileges.
Table 42 BRIOGRP2 Table

GROUP_NAME (CHAR): Name of a repository group.
USER_NAME (CHAR): Name of a document repository user assigned to the group.
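Taken together, the four tables answer access questions with a single join. A minimal sketch, assuming the standard table names, that lists which users can reach which stored objects through which groups:

SELECT c.FILE_NAME, b.GROUP_NAME, g.USER_NAME
FROM BRIOCAT2 c
JOIN BRIOBRG2 b ON b.UNIQUE_ID = c.UNIQUE_ID    -- document-to-group links
JOIN BRIOGRP2 g ON g.GROUP_NAME = b.GROUP_NAME  -- group membership
ORDER BY c.FILE_NAME, g.USER_NAME;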

Controlling Document Versions in Interactive Reporting Web Client


Automatic Document Refresh (ADR) in the Interactive Reporting Web Client enables the end user to merge and synchronize a locally saved Interactive Reporting document (.bqy) with the latest version of the repository document as soon as they connect to the repository. This feature applies only to documents opened for the first time.

When a locally saved document is opened, a connection dialog box for the Interactive Reporting document connection file in the repository prompts the user to connect to the repository. The version information in the Interactive Reporting document is compared with the current version information for the document in the repository. If ADR has been enabled for the document, the user is prompted to update to the latest version and can proceed with the refresh or not. After the document has been refreshed, the refresh cannot be undone. If the ADR flag is disabled for the document, document refresh is not available and the locally saved document is opened as is.

The ADR synchronizing procedure is controlled at the system level and at the document level. Interactive Reporting Web Client ADR always refreshes the whole document. All documents are published and stored in the repository with ADR control flags enabled or disabled. Unlike in the Designer version, in Interactive Reporting Web Client there is no concept of Model Type.

450

Managing the Interactive Reporting Studio Document Repository

ADR Control Flags


Control flags determine if an Interactive Reporting document is eligible for ADR. These flags include:

ADR Global Flag: This flag controls the availability of the ADR feature. For a new installation of Interactive Reporting Studio, this flag defaults to enabled. For an upgrade installation, this flag is disabled. Your system administrator can enable or disable this feature as needed.

ADR BQY Metadata: This flag is enabled or disabled when an Interactive Reporting document is published to the repository. If the flag is enabled, then only this particular document is allowed for ADR. For simple ADR, this flag is always enabled. ADR for job output defaults to a disabled flag when an Interactive Reporting document is published by a job action; in this case, a user can enable this flag by modifying the properties of the Interactive Reporting document. This flag is always disabled for a job output collection.

ADR Behavior
The following table shows how ADR behaves with documents in different scenarios.
Table 43 Simple ADR Behavior

Local section does not exist; Repository section does not exist: No action.
Local section exists; Repository section does not exist: Add from local document.
Local section does not exist; Repository section exists: Add from Repository document.
Local section exists; Repository section exists: Write Repository version.


Chapter 25 Auditing with Interactive Reporting Studio

This section provides information on the Interactive Reporting Studio auditing features, including how to track and log who uses data models, how database resources are allocated and consumed, and how to optimize the allocation and availability of data models. Note that most of the features described in this section are available only to advanced users of Interactive Reporting Studio.

In This Chapter

About Auditing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Creating an Audit Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455 Defining Audit Events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456 Auditing Keyword Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 Sample Audit Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458


About Auditing
Auditing enables information to be collected about data models downloaded from the repository. You can use auditing features to track how long queries take to process, which tables and columns are used most often, and even record the full SQL statement that is sent to the database. Audit information can help the database administrator monitor not only the effectiveness of each distributed data model, but also the weaknesses and stress points within a database. The results are useful for performing impact analysis to better plan changes to the database.

Auditing requires minimal additional setup and can be implemented entirely within Interactive Reporting Studio. The steps required for auditing data models are:

Create a document repository with an inventory of distributed data models.
Create a database table in which to log audit events.
Use data model options to define events that you want to audit for each data model.
Save the audited data models to the document repository.
Use Interactive Reporting Studio to query the audit table and to analyze the data it contains (a sample summary query follows this list).
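As a sketch of that last step, a simple usage summary can be pulled from the audit table with ordinary SQL; this assumes the sample BQAUDIT table described in Creating an Audit Table on page 455:

SELECT USERNAME, COUNT(*) AS QUERIES_RUN, SUM(NUM_ROWS) AS TOTAL_ROWS
FROM BQAUDIT
WHERE EVENT_TYPE = 'Post Process'
GROUP BY USERNAME
ORDER BY QUERIES_RUN DESC;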

Special Considerations

The Audit log may fill up. Monitor it regularly and delete any entries that are no longer needed (a sample purge statement follows this list).
Before uploading your audited data model to the document repository, log in as a user and test each auditing event to verify that your SQL statements are not generating any errors.
Auditing is not supported for the Process Results To Database Table feature, nor for Essbase data models. However, scheduled Interactive Reporting documents containing linked data models are audited normally.
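A periodic purge keeps the log manageable. A minimal sketch against the sample BQAUDIT table, using Oracle date arithmetic as an assumption; adjust the retention window to suit your site:

DELETE FROM BQAUDIT
WHERE DAY_EXECUTED < SYSDATE - 90;  -- remove events older than 90 days
COMMIT;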


Creating an Audit Table


Before you enable auditing of data models, you need to identify the events that you want to track and create a database table to record the information. Use an SQL editor to create the audit table. Because a query accesses only one database, the audit table must reside on the database where the query is processed. Create columns that reflect the types of information that you want to record. Table 44 provides a sample structure for a table named BQAUDIT. You can customize your audit table and columns to store information related to any events that you can define.
Table 44 Sample Structure for the BQAUDIT Table

EVENT_TYPE (data source: text): Events that occur within the context of a query session, such as Logon, Logoff, and Post Process.

USERNAME (data source: SQL function): Database user information returned by a database SQL function, such as user (Oracle), user_name (Sybase), or CURRENT_USER (Red Brick).

DAY_EXECUTED (data source: SQL function): Date, time, and duration information returned by a database SQL function, such as sysdate (Oracle), getdate (Sybase), or CURRENT_TIMESTAMP (Red Brick).

SQL_STMT (data source: Interactive Reporting Studio keyword): SQL statements generated by the user, captured from the Interactive Reporting Studio SQL log, and returned by the keyword variable :QUERYSQL.

DATAMODEL (data source: Interactive Reporting Studio keyword): Data models accessed by the user, returned by the keyword variable :REPOSITORYNAME.

NUM_ROWS (data source: Interactive Reporting Studio keyword): Query information returned by the keyword variable :ROWSRETRIEVED.
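A concrete rendering of this sample structure, sketched in Oracle syntax as an assumption; the column lengths are illustrative and should match your database server and the substring limits you use when logging SQL:

CREATE TABLE BQAUDIT (
  EVENT_TYPE    VARCHAR2(30),    -- for example, 'Logon', 'Logoff', 'Post Process'
  USERNAME      VARCHAR2(30),    -- populated by an SQL function such as user
  DAY_EXECUTED  DATE,            -- populated by an SQL function such as sysdate
  SQL_STMT      VARCHAR2(200),   -- populated via :QUERYSQL
  DATAMODEL     VARCHAR2(100),   -- populated via :REPOSITORYNAME
  NUM_ROWS      NUMBER           -- populated via :ROWSRETRIEVED
);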


Defining Audit Events


After you create the audit table on the database, you can begin defining the events that you want to track for each data model.
Note: To log audit data, you must give Interactive Reporting Studio users the database authority to execute each SQL statement that you define for auditing events. For example, all users must have insert or update authority on the audit table that you create.

To define audit events:


1 Download an existing data model that you want to track from the document repository, or create a new data model in the Content frame using the Table catalog.

For more information about creating a new data model, see Building a Data Model on page 417.

2 In the Query section, select DataModel > Data Model Options.


The Data Model Options dialog box is displayed.

3 Select the Auditing tab.


The Auditing tab displays the events you can audit.

4 Click Define to define the way in which an event is audited.


The SQL For Event dialog box is displayed.

5 Enter one or more SQL statements to update the audit table when the event occurs, and click OK.
A check mark is displayed next to the event on the Auditing tab in the Data Model Options dialog box. You can use the check box to enable or disable the event definition without reentering the SQL statement. You can also click Define again at any time to modify the SQL statement.

6 Select File > Save to Repository to save the audited data model to the document repository.
The SQL statement is sent to the database whenever a user triggers the event while using the data model.


Auditing Keyword Variables


Interactive Reporting Studio provides keyword variables (see Table 45) that can be used to help define audit events. The keywords can be inserted into audit event SQL statements to return specific data each time the event is triggered.
Tip: When entering an auditing keyword variable, always precede it with a colon (:) and enter all keyword text in uppercase. Other items in the SQL statement may also be case sensitive, depending on your database software.

Table 45 Auditing Keywords

:ROWSRETRIEVED - Number of rows retrieved by the most recently executed query.
:REPOSITORYNAME - Name of the repository object in use (data model or standard query with reports).
:QUERYSQL - (Pre Process, Limit: Show Values, and Detail View only) Complete SQL text of the most recently executed query statement. Tip: Consider the maximum column length when using :QUERYSQL. You may want to use a substring function to limit the length of the SQL being logged; for example: SUBSTR(:QUERYSQL, 1, 200).
:SILENT - Restricts display of the audit-generated SQL statement within the user's SQL log. When the :SILENT keyword variable is included in the audit statement, the SQL log output reads Silent SQL sent to server instead of the SQL statement. This keyword variable provides a security feature when the triggered SQL statement is sensitive or should remain undetected.
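Once events are logged, these keywords make usage analysis straightforward. A minimal sketch, assuming the sample BQAUDIT table and the New Data Model event shown in Sample Audit Events, that ranks data models by how often they are downloaded:

SELECT DATAMODEL, COUNT(*) AS TIMES_DOWNLOADED
FROM BQAUDIT
WHERE EVENT_TYPE = 'New Data Model'
GROUP BY DATAMODEL
ORDER BY TIMES_DOWNLOADED DESC;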


Sample Audit Events


Table 46 provides examples of audit events. Most examples use Oracle SQL database functions.
Table 46

Sample Audit Events Description Executed each time a successful logon occurs.
insert into <owner>.bqaudit (username, day_executed, event_type) values (user, sysdate, 'Logon')

Audit Event Logon

Note: The logon audit event fires for each action when used with the Data Access Service. Because of connection pooling, it is not possible, at the Data Model level, to determine when an actual logon event is required.

Logoff

Executed each time a successful logoff occurs.


insert into <owner>.bqaudit (username, day_executed, event_type) values (user, sysdate, 'Logoff')

Note: The logoff audit events fires for each action when used with the Data Access Service. A logoff event does not happen until the connection reaches the configured idle time.

Pre Process

Executed after Process is selected, but before the query is processed. It is useful to track the date and time of both Pre Process and Post Process in order to determine how long a query takes to process.
insert into <owner>.bqaudit (username, day_executed, event_type) values (user, sysdate, 'Pre Process')

Post Process

Executed after the final row in the result set is retrieved at the user's workstation. It is useful to track the date and time of both Pre Process and Post Process in order to determine how long a query takes to process.
insert into <owner>.bqaudit (username, day_executed, event_type, num_rows, sql_stmt) values (user, sysdate, 'Post Process', :ROWSRETRIEVED, SUBSTR(:QUERYSQL, 1, 200))

Limit:Show Values

Executed after selecting the Show Values button when setting a Limit.
insert into <owner>.bqaudit (username, day_executed, event_type, datamodel, sql_stmt) values (user, sysdate, 'Show Values', :REPOSITORYNAME, :QUERYSQL)

Detail View

This statement is executed when a user toggles a topic to Detail View and a sampling of data from the database is loaded. Remember that values are only loaded when you first toggle to Detail View, or when Cause Reload is selected in the Topic Properties dialog box. This statement is executed when the Data Model is downloaded from the document repository into a

New Data Model

Interactive Reporting document.


insert into <owner>.bqaudit (username, day)_executed, event_type, datamodel) values (user, sysdate, 'New Data Model', :REPOSITORYNAME)

Data Model Refresh

This statement is executed after a Data Model is refreshed through ADR.


insert into <owner>.bqaudit (username, day_executed, event_type, datamodel) values (user, sysdate, 'Data Model Refresh', :REPOSITORYNAME)


Chapter 26 IBM Information Catalog and Interactive Reporting Studio

This section provides instructions for registering and managing client objects in the IBM Visual Warehouse Information Catalog.
Note: The information in this section applies only to Interactive Reporting Studio.

In This Chapter

About the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Registering Documents to the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460 Administering the IBM Information Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461


About the IBM Information Catalog


IBM's Visual Warehouse (VW) is a family of products that design, load, manage, and retrieve information from data warehouses. Interactive Reporting Studio is a component of the IBM VW solution and is resold by IBM as part of VW bundles. To further extend the capabilities of the solution, you can register, administer, and distribute Interactive Reporting documents (.bqy file extension) in the VW Information Catalog.

The Information Catalog is a repository of document information with pointers to the physical objects. The Catalog also enables you to categorize content stored in documents by specific subject area, and a full-text search engine in the repository enables you to search for information stored in the documents. For example, you could search on all documents associated with sales; the search results could yield Word files, Excel files, and Interactive Reporting documents. When you find a document that you want to work with, the IBM Information Catalog launches the appropriate application and opens the document.

Registering Documents to the IBM Information Catalog


This section explains how to register documents to the Catalog. It includes the following sections:

Defining Properties on page 461
Selecting Subject Areas on page 461

Visual Warehouse must already be installed before you can register or administer this feature. Also, the client document object types must already exist before completing the following steps. For more information, see Creating Object Type Properties on page 462.

To register an Interactive Reporting document:


1 Display the repository object that you want to register open in the Content pane.
2 Select File > Register To IBM Information Catalog.
The Save File dialog box is displayed.

3 Type the name of the Interactive Reporting document in the File Name field.
4 In the Save As Type field, leave the default .bqy file type and click Save.
The Connect To Information Catalog Repository dialog box is displayed.

5 Type your user identification in the User field.
6 Type your password in the Password field.
7 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.

The Register To Information Catalog dialog box is displayed, showing the Properties and Subject Areas tabs. Use these tabs to describe the properties and subject matter of the Interactive Reporting document.

8 Click the Properties tab.


9 In the Available Properties list, select a property of the Interactive Reporting document to which you want to add a value.

10 In the Enter Value for Selected Property edit box, type a value for the property.
11 Repeat Step 9 through Step 10 for all properties.
12 Click the Subject Areas tab.
13 In the Specify The Subject Area list, use the plus (+) and minus (-) signs to navigate through the subject area structure (Grouping Category) and select the subject area folder to which you want to add the Interactive Reporting document.

The Subject Area tab displays a tree view of eligible subject area folders to which you can add the Interactive Reporting document.

14 Click Add to add the Interactive Reporting document or instance to the subject area specified in Step 13.
15 Click OK.

Defining Properties
You can define the values of selected properties for a document when registering to the catalog. Use the Properties tab to show and edit properties, data types, and lengths:

Available Properties: Displays a list of available properties that you can specify.
Enter Value: Edit any available value by typing the information in this edit box. For a description of eligible values for the properties, see the Description field.

Selecting Subject Areas


Use the Subject Area tab to display and select a subject area for the document that you are registering. By including the document in a Subject Area folder, you can later search for the document by topic.

Specify The Subject Area: Displays a tree view of eligible subject area folders to which you can add the document. Use the plus (+) and minus (-) signs to navigate through the folders. To add a document to a folder, select the subject area folder and click Add.
Subject Areas Containing: Displays the subject area folder to which the document has been added.

Administering the IBM Information Catalog


This section explains how to administer the IBM information catalog, including:

Creating Object Type Properties on page 462
Deleting Object Types and Properties on page 462
Administering Documents on page 463
Setting Up Object Types on page 464


Creating Object Type Properties


Use the Setup Object Types tab under Administer IBM Information Catalog to create an object type and specify its properties. An object type represents a category of business information, for example, an Interactive Reporting document or an image. An object type property describes an attribute of the object type, for example, its name or data type. Once an object type has been created, you cannot modify its existing properties or add new properties. You can, however, delete the entire object type, but not the individual properties of a selected object type.

To set up the BQY object type and properties:


1 Choose File > Administer IBM Information Catalog.
The Connect To Information Catalog Repository dialog box is displayed.

2 Type your user identification in the User field.
3 Type your password in the Password field.
4 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.

5 Click OK.
The Administer Information Catalog dialog box is displayed.

6 Click the Setup Object Types tab.
7 In the Object Type drop-down box, select Interactive Reporting document.
8 In the Name field, type the name of the property that you want to associate with the object type.
9 In the Short Name field, type an abbreviated version of the property name.
10 In the Datatype drop-down list box, select the data type classification of the property (for example, character-based).

11 In the Length field, type the maximum character length of the property.
12 To require that the property be completed when a user registers a document, select the Entry Required check box.

13 To add the object type property to the Properties for Object Type list box, click Set.
14 Repeat Step 8 through Step 12 for each property that you want to associate with the selected object type.
15 To create the object type, click Create Object Type.

Deleting Object Types and Properties


Once an object type has been created, you can delete the entire object type but not its individual properties.


To delete a BQY object type and its properties:


1 Choose File > Administer IBM Information Catalog.
The Connect To Information Catalog Repository dialog box is displayed.

2 Type your user identification in the User field.
3 Type your password in the Password field.
4 Type the ODBC data source name in the Database Alias field if it is different from the default database alias value.

5 Click OK.
The Administer Information Catalog dialog box is displayed.

6 Click the Setup Object Types tab.
7 In the Object Type drop-down list box, select Interactive Reporting document.
8 Click Delete Object Type.

Administering Documents
Use the Administer Documents tab to search for a specific document based on an object type, property, and other selected criteria (see Table 47). After the document has been located, you can either delete or edit the associated properties.
Table 47 Object Type Search Criteria

Object Type: Interactive Reporting document object type.
Select Property: Property by which you want to search on the document, chosen from the pull-down list. Complete the search condition by entering a value in the Search Criterion field. For example, if you specify the Name property, type the name of the document in the Search Criterion field.
Search Criterion: Use this field in conjunction with the Select Property field. Once you have selected a property, complete the search condition by specifying the value of the property. For example, if you selected the Order Type property, you might type Interactive Reporting document in this field.
Case-sensitive Search: Click this field if you want the search engine to distinguish between uppercase and lowercase letters when determining which documents to retrieve.
Wildcard Search: A wildcard is a special symbol that represents one or more characters and expands the range of your searching capabilities. You can use the % wildcard symbol to match any value of zero or more characters. For example, to find documents whose properties contain 1997 Sales, type 1997 Sales % in the Search Criterion field.
Search: Retrieves the search results.
Clear Search: Clears the results of the current search.


Table 47 Object Type Search Criteria (Continued)

Search Results: Results of the search.
Delete: Deletes a selected document from the repository.
Edit: Enables you to edit the value properties of a document through the Properties tab of the Register To IBM Information Catalog option.

Setting Up Object Types


Use the Set Up Object Types tab to set up object types and their properties (see Table 48). An object type represents a category of business information, for example, a document or an image. An object type property describes an attribute of the object type, for example, its name or data type.
Note: You can create and delete only the Interactive Reporting document object types and properties through the Interactive Reporting Studio Setup Object Types features. For more information, see Creating Object Type Properties on page 462.

Table 48 Object Types and Properties

Object Type: Interactive Reporting document object types.
Name: Name of the property that you want to associate with the object type.
Short Name: Short name of the property that you want to associate with the object type.
Datatype: Data type of the property.
Length: Length of the property.
Entry Required: Requires a user to complete the property when registering a document to the DataGuide repository.
Set: Adds a new object type property to the Properties For Object Type list. If an object type has already been created, this button is unavailable.
Remove: Removes a new object type property from the Properties For Object Type list. If an object type has already been created, this button is unavailable. Once an object type has been created, you cannot remove its properties; the entire object type must be deleted.
Properties For Object Type: Properties defined for the object type. To show the entire definition for a property, click a property in the list.
Create Object Type: Creates an Interactive Reporting document (.bqy) object type. Once an object type has been created, you cannot modify its existing properties or add new properties.
Delete Object Type: Deletes an Interactive Reporting document (.bqy) object type. You cannot delete the individual properties of a selected object type.
Clear: Clears the definition fields of a property.


Chapter 27 Row-Level Security in Interactive Reporting Documents

This appendix explains the row-level security feature: what it is and how to implement it for Interactive Reporting documents.

In This Chapter

About Row-Level Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466 Row-Level Security Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468 Row-Level Security Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 Other Important Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479


About Row-Level Security


Properly implemented, row-level security gives individuals in an organization access to only the information they need to make informed decisions. For example, managers need payroll information on their direct reports. Managers do not need to know payroll information for other departments within the organization. Row-level security allows this level of granularity.

The Row-Level Security Paradigm


Most database administrators understand the concept of row-level security. Returning to the payroll example, all detailed compensation data on the employees of an organization is stored in the same table (or tables) within the database. Typically, some column within this table can be used as a limit, either directly or by a join to another table with its own set of limits, to restrict access to the data within the table based on the identification of the user accessing the data. Following the payroll example, an employee ID often identifies the sensitive compensation data. A join to a separate employee information table, which contains noncompensation information such as home address and title, would include a department number. A manager would be limited to details on the employees of her or his particular department.

Row-level security, implemented at the database level, is often done by means of a view. To an application, accessing a view is no different than accessing a table. However, the view is instantiated based on the appropriate limits. Coupled with the GRANT and corresponding REVOKE data definition statements available with the prevalent Relational Database Management Systems (RDBMS), the base tables can be made inaccessible to most users, and the views on that data, filtered based on user identification, made accessible instead. Multiple views may sometimes be required to fully implement a security scheme, depending on how the tables are defined and how the information contained therein must be shared. For instance, a different view for managers versus those in human resources might be required.

Column-level security, a companion concept to row-level security, can be similarly enforced. Views can easily hide a piece of information; in the payroll example, the salary column can be left out of the view while other types of information about an employee remain accessible to those who need it. This type of security can pose a special problem for standardized reports that are available throughout the organization but for audiences with different access permissions. The reporting software might have difficulty dealing with the missing information, consequently requiring different implementations of these otherwise similar reports.
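A minimal sketch of the view-based approach just described; the payroll, employee, and manager table names and columns are hypothetical, and the USER keyword follows Oracle conventions:

-- Hide the base table from general users.
REVOKE SELECT ON payroll FROM PUBLIC;

-- Expose a filtered view: managers see only their own department.
CREATE VIEW payroll_for_manager AS
  SELECT p.employee_id, p.salary, e.name, e.department_id
  FROM payroll p
  JOIN employee e ON e.employee_id = p.employee_id
  WHERE e.department_id IN
    (SELECT m.department_id FROM manager m WHERE m.username = USER);

GRANT SELECT ON payroll_for_manager TO PUBLIC;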

Hyperion System 9 BI+ and Row-Level Security


The Hyperion System 9 BI+ approach to data security is server based. Both the Interactive Reporting Web Client and Hyperion System 9 BI+ Workspace are designed to fully implement a secure data access platform. The non-server-based clients do not participate in this security mechanism. Users of Interactive Reporting Studio need access beyond that of most users to effectively create the dashboards and analytic reports required by the majority of the data consumers. In addition, the security information can be placed in a centralized location for the servers (the repository). For the desktop clients, it would in some cases need to be dispersed to multiple databases and maintained separately.

To effectively control access, the servers key off the user's identification when connecting. This is the user's logon name, used to establish a session with the Hyperion System 9 BI+ services. Beyond this user name, the servers make no assumptions about the user's place within the organization.

A security system can be built entirely independent of any existing grouping of users, and new groupings can be defined in lieu of existing ones. This is especially important where groups were not defined with data security as a primary goal. Row-level security can also take full advantage of the existing structures where data security was built into the user and group structure. In many cases, row-level security works within existing database role definitions and third-party software security systems.

Column Level Security


Interactive Reporting servers can easily handle the challenge of column level security. Such hidden information is replaced with an administrator-supplied value (possibly NULL, zero, blanks, or a similar placeholder value), so existing reports do not fail when they encounter this type of security constraint.

Performance Issues
The system is designed to not impose any significant performance penalty. The security information is collected at the time the user opens an Interactive Reporting document from the server's repository, and only then if the server knows the security controls are enabled. When a user opens a locally saved Interactive Reporting document from a previous session with the Hyperion System 9 BI+ services, the security information is recollected when reconnecting to the server, in case it has changed.

Publishing in a Secure Environment


A powerful feature of Interactive Reporting is the ability to take data on the road. Once data has been extracted from the database, which is where the row-level security restrictions are enforced, that data can be saved with the Interactive Reporting document for offline analysis and reporting. Users who publish should be aware of the implications for their audience when publishing data and reports. If the publication of the data is difficult to control in the current configuration of users and groups known to the server, consider the following options:

Publish without the detailed results of the queries, leaving only the summary charts and Pivots for the general audience. If users need to drill into the summary data, they will need to rerun the queries, at which time their particular security restrictions will be applied. (Even some charts and Pivots can reveal too much, so there is still a need for prudence when publishing these Interactive Reporting documents.)

Create the Interactive Reporting documents with OnStartup scripts that reprocess queries as the Interactive Reporting document is opened. This always gives users only the data to which they are entitled.


All users should take similar precautions when sharing information generated from Interactive Reporting. This includes exchanging the Interactive Reporting documents (.bqy files) themselves by e-mail or shared network directories, exporting the data as HTML files and publishing them to a web site, posting the data on FTP servers as the result of a job action, and creating PDF files from the reports.

Securing the Security Information


The row-level security feature is implemented by means of database tables. The servers read this data and never update it. Hyperion Solutions recommends that these tables be defined somewhere other than the repository schema, and that access be granted to only the select few who should be able to update the security information.

As additional protection, the actual tables can be hidden behind views, and a WHERE clause can be added to each view definition so that only the server's user identification, under which it connects to the database to read the row-level security tables, can read the content, if the database supports it. Table 49 shows examples of WHERE clauses if the repository connection is made as user brioserver.
Table 49    Sample WHERE Clause on CREATE VIEW When the Repository Connection Is Made as brioserver

Database        Sample WHERE Clause on CREATE VIEW
DB2             WHERE USER = 'BRIOSERVER'
Oracle          WHERE USER = 'BRIOSERVER'
SQL Server      WHERE USER = 'brioserver'

Note: Be aware of case sensitivity with the user name, and allow for the possibility that, for SQL Server, the user might be dbo.

Each view has the same name as its underlying table, and all available columns from that table would be selected.
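For example, a protective view over BRIOSECG might look like the following sketch. The owning schema name (RLSADMIN) is an assumption for illustration; substitute the schema that actually holds the security tables:

-- Only the server's logon can read the security data through this view.
-- RLSADMIN is a hypothetical owning schema.
CREATE VIEW BRIOSECG (BUSER, BGROUP) AS
    SELECT BUSER, BGROUP
    FROM RLSADMIN.BRIOSECG
    WHERE USER = 'BRIOSERVER'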

Row-Level Security Tables


Three tables implement the row-level security features for Interactive Reporting Web Client and the Workspace. Because the tables can be populated in a manner appropriate to the site's requirements, no predefined user interface to maintain the tables is provided as part of the Interactive Reporting product. However, the row_level_security.bqy document can be modified to suit an ad hoc implementation of the row-level security feature, and it can serve as a tutorial or test tool when setting up a production system. The only requirement is that the basic set of column definitions be retained. The sample Interactive Reporting document can be used in all cases as a reporting tool for the row-level security data as the servers see it.


Implementing a secure data access environment using row-level security requires an understanding of SQL. First, knowing how the database relationships are defined is critical. Second, the restrictions you specify are translated directly into the SQL ultimately processed at the database.

Creating the Row-Level Security Tables


When the Interactive Reporting components are installed using the custom option, the installer prompts whether to create the row-level security tables and in what database. If the Interactive Reporting components are installed using the express option, or if you elect not to create the row-level security tables during a custom install, then the tables must be created manually. The Interactive Reporting document mentioned above can create the tables, the necessary SQL DDL can be written based on the table definitions found in this documentation, or the script that creates them during the install can be run. To locate these scripts, look on the install CD under the DATA directory, and then under the appropriate database brand or vendor. The script of interest is named CreateRLS.sql.

When creating the tables post-install, use the web-based Administration module to tell the system where the tables are located. The information required includes the data source name, the database type, the API used to access the database, and the database credentials needed to access the row-level security tables. For details on setting the row-level security properties, see Managing Row-Level Security on page 64.

There is also a setting to enable or disable the row-level security feature. This setting is intended to enhance performance in systems where the feature is not needed. When disabled, no attempt is made to access these tables. The feature should always be enabled if data security is to be enforced.
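If you choose to write the DDL yourself, the following is a minimal sketch derived from the column definitions in this chapter. The VARCHAR lengths are assumptions (the server does not fix a maximum length; pick practical values), and the authoritative definitions remain those in the CreateRLS.sql script for your database:

-- Users and groups (see "The BRIOSECG Table")
CREATE TABLE BRIOSECG (BUSER VARCHAR(64), BGROUP VARCHAR(64))

-- AND/OR logic between groups (see "The BRIOSECP Table")
CREATE TABLE BRIOSECP (BCONJUNC CHAR(3))

-- Restrictions (see Table 50 for the functional use of each column)
CREATE TABLE BRIOSECR (
    UNIQUE_ID INT,
    USER_GRP  VARCHAR(64),
    SRCDB     VARCHAR(64),
    SRCOWNER  VARCHAR(64),
    SRCTBL    VARCHAR(64),
    SRCCOL    VARCHAR(64),
    JOINDB    VARCHAR(64),
    JOINOWNR  VARCHAR(64),
    JOINTBL   VARCHAR(64),
    JOINCOLS  VARCHAR(64),
    JOINCOLJ  VARCHAR(64),
    CONSTRTT  CHAR(1),
    CONSTRTC  VARCHAR(64),
    CONSTRTO  VARCHAR(64),
    CONSTRTV  VARCHAR(255),
    OVRRIDEG  VARCHAR(64)
)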

The BRIOSECG Table


The BRIOSECG table defines the users and groups that are subject to row-level security restrictions. There are two columns, BUSER and BGROUP, both of varying character length (VARCHAR(n)). The maximum length is not fixed by the server; set it to a practical value. A user name is defined as the server authentication name (ODSUsername is the property of the ActiveDocument object in the Brio Object Model). For jobs, it is the user who scheduled the job. Group names are arbitrary; the data security administrator is free to define these as required. When both columns of a row are populated with non-null values, the user name defined in the BUSER column is a member of the group name defined in BGROUP.

As maintained by the sample Interactive Reporting document, row_level_security.bqy, when a user is added, a row is added to the table with a NULL value in the BGROUP column. When a group is added, a NULL value is stored in the BUSER column. This is a device used by the sample Interactive Reporting document to maintain the table and is recommended practice, but it is not a requirement for correct operation of row-level security.
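If you maintain BRIOSECG directly rather than through the sample document, that convention translates into statements like the following (the user and group names are hypothetical):

INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES ('jsmith', NULL)       -- define a user
INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES (NULL, 'PAYROLL')      -- define a group
INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES ('jsmith', 'PAYROLL')  -- make jsmith a member of PAYROLL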


This table is theoretically optional. Without it, however, all users exist as single individuals; they cannot be grouped to apply a single set of restrictions to all members. For example, Vidhya and Chi are members of the PAYROLL group. If this relationship is not defined in BRIOSECG, then any restrictions that apply to Vidhya and should also apply to Chi have to be defined twice. By defining the PAYROLL group and its members, Vidhya and Chi, the restrictions can be defined only once and applied to the PAYROLL group.

A group name cannot be used in BUSER; that is, groups cannot be members of other groups. Users, of course, can be members of multiple groups, and this can effectively set up a group/subgroup hierarchy. For example, a PAYROLL group might contain users Sally, Michael, Kathy, David, Bill, Paul, and Dan. Sally, Dan, and Michael are managers, so they can be made members of a PAYROLL MANAGER group. Certain restrictions on the PAYROLL group can be overridden by the PAYROLL MANAGER group, and Dan, to whom Sally and Michael report, can have specific overrides to those restrictions placed explicitly on the PAYROLL MANAGER group.

Where the database supports it, and if the user's authentication name in Hyperion System 9 BI+ corresponds, this table can be a view created from the roles this user has in the database. For example, in Oracle:
CREATE VIEW BRIOSECG (BGROUP, BUSER) AS SELECT GRANTED_ROLE, GRANTEE FROM DBA_ROLE_PRIVS

DBA_ROLE_PRIVS is a restricted table. Because the server reads the view using a configured database logon, it would not be appropriate to use USER_ROLE_PRIVS instead of DBA_ROLE_PRIVS: that user view would reflect only the server's roles, not those of the user on whose behalf the server is operating. Again, this is an Oracle example; other RDBMSs may or may not provide a similar mechanism. In some cases, depending on the database, a stored procedure could collect the role information for the users and populate a BRIOSECG table if a simple SELECT is inadequate to collect the information. This would require some means to invoke the procedure each time role definitions were changed. When using the database's catalog or some other means to populate BRIOSECG, the sample Interactive Reporting document, row_level_security.bqy, cannot be used to maintain user and group information.

A special group, PUBLIC, exists. It does not need to be explicitly defined in BRIOSECG. All users are members of the PUBLIC group. Any data access restriction defined against the PUBLIC group applies to every user unless explicitly overridden, as described later. All users can be made part of a group at once by inserting a row where BUSER is PUBLIC and BGROUP is that group name (see the sketch after this list). While this may seem redundant, given the existence of the PUBLIC group, it offers some benefits:

It allows the database catalog technique described above to work. For example, in Oracle, a role can be granted to PUBLIC.

It allows restrictions for a group other than PUBLIC to be applied to or removed from everyone in an instant.

It provides more flexibility when using override specifications as described later.
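As a minimal sketch, assuming a hypothetical ALL_EMPLOYEES group, the technique is a single row:

-- Every user becomes a member of the hypothetical ALL_EMPLOYEES group at once
INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES ('PUBLIC', 'ALL_EMPLOYEES')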

Note: Restrictions are never applied against a user named PUBLIC, but only the group PUBLIC. For this reason, do not use PUBLIC as a user name. Similarly, to avoid problems, do not name a group the same as a user name.


The BRIOSECP Table


The BRIOSECP table (P for parameter) has one column, named BCONJUNC, with a data type of CHAR(3). Its value is either the word AND or the word OR, and the Interactive Reporting document that administers the row-level security creates and populates the table.
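If you populate the table by hand instead of through the administering document, the content amounts to a single row; see "OR Logic Between Groups" later in this chapter for what the value controls:

INSERT INTO BRIOSECP (BCONJUNC) VALUES ('OR')  -- disjunctive logic between a user's groups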

The BRIOSECR Table


The BRIOSECR table is the heart of the row-level security feature. It defines the specific restrictions to be applied to users and the groups (including PUBLIC) to which they belong. These restrictions take the form of join operations (a user cannot access a column in the employee salary table unless it is joined to the employee table) and limits (WHERE clause expressions) to be applied to either the source table (SALARY) or the table(s) (EMPLOYEE) to which it is joined. Table 50 lists the columns in the BRIOSECR table.

As suggested earlier, existing security definitions can sometimes be translated into the format described here. A view, stored procedure, or programmatic mechanism can be used to translate and/or populate the information needed in BRIOSECR by the Workspace servers. When these methods are used, the servers require the column names and data types defined in Table 50. And again, do not attempt to use the sample Interactive Reporting document, row_level_security.bqy, to manage this information.

If a join table is specified and it does not already exist in the data model the user accesses, it will still be added to the final SQL generated, to ensure the security restrictions are enforced. This process is iterative. When a table is added and the present user, either directly or by group membership, has restricted access to that added table, those restrictions will also be applied, which may mean additional tables will be added, and those restrictions will also be checked, and so on. Circular references, if defined, will result in an error.
Table 50    Columns in the BRIOSECR Table

UNIQUE_ID (INT)
This column contains an arbitrary numeric value. It should be unique, and it is useful for maintaining the table by whatever means the customer chooses. The servers do not rely upon this column and never access it. To that extent, it is an optional column, but it is recommended. (It is required when using the sample Interactive Reporting document, row_level_security.bqy.) When the RDBMS supports it, a unique constraint or unique index should be applied to the table on this column.

USER_GRP (VARCHAR)
The name of the user or the name of a group to which a user belongs. If PUBLIC, the restrictions are applied to all users.

SRCDB (VARCHAR, can be null)
Used to identify a topic in the Data Model. (In Interactive Reporting, a topic typically corresponds to a table in the database, but it could be a view in the database.) If the physical name property of the topic is of the form name1.name2.name3, this represents name1. Most often, this represents the database in which the topic exists. This field is optional unless required by the connection in use. The most likely circumstance in which to encounter this requirement is with Sybase or Microsoft SQL Servers where the Interactive Reporting database connection (the connection definition file) is set for access to multiple databases.

SRCOWNER (VARCHAR, can be null)
Used to identify the owner/schema of the topic in the Data Model. This would be name2 in the three-part naming scheme shown above. If the topic property physical name contains an owner, then it must be used here as well.

SRCTBL (VARCHAR)
Used to identify the table/relation identified by the topic in the Data Model. This is name3 in the three-part naming scheme.

SRCCOL (VARCHAR)
Used to identify a column in SRCTBL. This is a topic item in Data Model terminology, and is an item that might appear on the Request line in a query built from the Data Model. In the context of the security implementation, the item named here is the object of the restrictions being defined by this row of the security table BRIOSECR. If this column contains an *, all columns in SRCTBL are restricted.

JOINDB (VARCHAR, can be null)
If present, defines the database name qualifier of a table/relation that must be joined to SRCTBL.

JOINOWNR (VARCHAR, can be null)
If present, defines the schema/owner name qualifier of a table/relation that must be joined to SRCTBL.

JOINTBL (VARCHAR, can be null)
If present, names the table/relation that must be joined to SRCTBL.

JOINCOLS (VARCHAR, can be null)
If present, names the column from SRCTBL to be joined to a column from JOINTBL.

JOINCOLJ (VARCHAR, can be null)
If present, names the column in JOINTBL that will be joined (always an equal join) to the column named in JOINCOLS.

CONSTRTT (CHAR(1), can be null)
If present, identifies a table/relation to be used for applying a constraint (limit). This is a coded value. If the value in this column is S, the column to be limited is in SRCTBL. If J, a column in JOINTBL is to be limited. If the value in this column is O, it indicates that for the current user/group, the restriction on the source column for the group/user named in column OVRRIDEG is lifted, rendering it ineffective. If this value is NULL, then no additional restriction is defined. If the JOIN* columns are also all NULL, the column is not accessible at all to the user/group. This implements column level security. See the functional use description of CONSTRTV for more information on column level security.

CONSTRTC (VARCHAR, can be null)
The column in the table/relation identified by CONSTRTT to which a limit is applied.

CONSTRTO (VARCHAR, can be null)
The constraint operator, such as = or <> (not equal). BETWEEN and IN are valid operators. Basically, any valid operator for the database can be supplied.

CONSTRTV (VARCHAR, can be null)
The value(s) to be used as a limit. The value(s) properly form a condition that, together with the content of the CONSTRTC and CONSTRTO columns, creates valid SQL syntax for a condition in a WHERE clause. Subquery expressions, therefore, are allowed. Literal values should be enclosed in single quotes or whatever delimiter is needed by the database for the type of literal being defined. If the operator is BETWEEN, the AND keyword would separate values. If :USER is used in the value, then the user name is the limit value. If :GROUP is used, all groups of which the user is a member are used as the limiting values. Both :USER and :GROUP can be specified, separated by commas. The PUBLIC group must be named explicitly; it is not supplied by reference to :GROUP. When applying column level security, CONSTRTV provides the SQL expression that will effectively replace the column on the Request line. For example, the value zero (0) might appear to replace a numeric value that is used in the Interactive Reporting document but should not be accessible by the specified user/group. While any valid SQL expression that can be used in a SELECT list is permitted, pick a value that is acceptable for the likely use. For example, the word NULL is permitted, but note that in some cases it might not be the appropriate choice, as it could also end up in a GROUP BY clause.

OVRRIDEG (VARCHAR, can be null)
The name of a group or user. Used when CONSTRTT is set to O. If the group named in OVRRIDEG has a restriction on the source element, then this restriction is effectively ignored for the user/group named in USER_GRP. SRCDB, SRCOWNER, SRCTBL, and SRCCOL as a collection must be equal between the row specifying the override and the row specifying the conditions to be overridden. (See examples.)

OR Logic Between Groups


Each permission/restriction applied to the same group or user is separated by AND logic, but for users who belong to multiple groups, these sets of permissions/restrictions can optionally be separated by OR conditions or by AND conditions. This option makes the logic disjunctive or conjunctive, respectively. For example, a user who is in group A and is allowed to see sales data in the truck category for the Eastern Region, and who also belongs to group B and is allowed to see sales data in the minivan category for the Eastern Region, can see both truck and minivan data from one query using OR logic. If conjunctive logic were used, that user would see no data, since the category could not be truck and minivan simultaneously.


Row-Level Security Examples


The examples are based on the sample Access database provided as an option when installing Interactive Reporting on a Windows platform, using the connection file Sample.oce. For these examples, the users BRIO and VIEW&PROCESS require access to data that is denied to the rest of the users. These two users both belong to the group AMERICAS, which corresponds to a region of the same name. However, the user BRIO is a corporate officer who should be able to see all data. Only one piece of data will be accessed in the course of these examples: the amount sales column from the sales fact table. The examples are more far-reaching than this scope might suggest.

Screenshots for these examples come from the Interactive Reporting document to which processing restrictions are applied, and from the sample Interactive Reporting document, row_level_security.bqy, mentioned earlier. In the screenshots from the sample Interactive Reporting document, the columns in the BRIOSECR table correspond, for the most part, top to bottom and left to right, with the fields on the screen. Deviations from this are noted where possible. In particular, though, note that the UNIQUE_ID column is not shown in this sequence of fields, consistent with its optional role in the functionality of the software, although it is used behind the scenes by the sample Interactive Reporting document.

Figure 36 shows the layout of the data in the database, to illustrate the possible joins as intended by the database designer.

Figure 36

Database Layout Showing Possible Joins

Figure 37 shows the data model in the published Interactive Reporting document.


Figure 37

Data Model in the Published Interactive Reporting Document

Defining the Users and Groups


Based on the above description of the users and groups involved in this example, insert a minimum of two rows into the BRIOSECG table, as shown in Table 51.
Table 51    Rows to Insert in the BRIOSECG Table

BUSER           BGROUP
BRIO            AMERICAS
VIEW&PROCESS    AMERICAS
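Expressed as SQL, assuming direct access to the table rather than the sample maintenance document:

INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES ('BRIO', 'AMERICAS')
INSERT INTO BRIOSECG (BUSER, BGROUP) VALUES ('VIEW&PROCESS', 'AMERICAS')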

In the sample Interactive Reporting document for maintaining row-level security information, once the information has been added, it would look something like Figure 38 when the AMERICAS group is selected.

Figure 38

Sample Interactive Reporting Document with AMERICAS Group Selected


Dealing with The Rest of the Users


The requirement here is that users who are not part of the AMERICAS group should have no access to this information. There are several ways to do this, and in part, the best way depends on who the rest of the users are. If they are extranet users, this probably means no access at all; users outside of the corporate network should not get sales data, even summary data, as this might be considered proprietary and certainly not for any potential competitors. Using the PUBLIC group, restrict access to the entire SALES_FACT table by using the asterisk to reflect all columns (see Figure 39).

Figure 39

Restriction on SALES_FACT Table

(This is an example of column level security. All values from this table, if they appear on the Request line, will be substituted with NULL.) Where there are no extranet concerns, and it might be appropriate for all employees to know how their company is doing overall, such a blanket restriction is not recommended. Instead, restrict the use of the STORE_ID column, the only means by which the sales information can be tied back to any particular store, country, region, and so on. This will look identical to the case above, except that STORE_ID is specified instead of an asterisk for the Source Column Name.
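In BRIOSECR terms, the narrower restriction reduces to a row like the following sketch. The UNIQUE_ID value is arbitrary, and leaving the JOIN* and CONSTRT* columns NULL is what makes the column inaccessible (column level security):

-- PUBLIC cannot see SALES_FACT.STORE_ID
INSERT INTO BRIOSECR (UNIQUE_ID, USER_GRP, SRCTBL, SRCCOL)
VALUES (1, 'PUBLIC', 'SALES_FACT', 'STORE_ID')
-- The blanket restriction from Figure 39 would use '*' as SRCCOL instead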

Overriding Constraints
Obviously, members of the AMERICAS group are also members of PUBLIC. So, regardless of the way the PUBLIC group was restricted, those restrictions are not to be applied to the AMERICAS group for the sales information. That group might be restricted in different ways, or not at all, and the same mechanism ensures that happens while PUBLIC restrictions are in place. Figure 40 shows this when using the sample Interactive Reporting document, row_level_security.bqy.


Figure 40

Overriding Constraint on User/Group

This only overrides PUBLIC constraints for this particular column. Restrictions on PUBLIC against other columns are still enforced against members of the AMERICAS group as well. If the restriction is on all columns of a table, designated by an asterisk, the override must also be specified with an asterisk, and then specific column constraints can be reapplied to groups as needed.
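As a BRIOSECR row, the override shown in Figure 40 corresponds to a sketch like the following: CONSTRTT is O, OVRRIDEG names the group whose restriction is lifted, and the source columns must match the restricted row exactly:

-- Lift PUBLIC's restriction on SALES_FACT.STORE_ID for AMERICAS members
INSERT INTO BRIOSECR (UNIQUE_ID, USER_GRP, SRCTBL, SRCCOL, CONSTRTT, OVRRIDEG)
VALUES (2, 'AMERICAS', 'SALES_FACT', 'STORE_ID', 'O', 'PUBLIC')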

Cascading Restrictions
In order to give the members of the AMERICAS group access to sales information only for the appropriate region, the query must include references to columns in other tables that are not necessarily part of the existing data model. The row-level security functions the same whether or not the tables already existed in the data model. As seen in the table relationships pictured above, the region information is bridged to the sales information by the store table. To implement a constraint that makes only sales information available for a particular region requires two entries in the BRIOSECR table: one to join sales to stores, and one to join stores to regions. This latter case also requires a limit value for the region name. (A limit on REGION_ID could also accomplish the same goal, but it is not as readable, especially in an example. See the discussion of subqueries that follows for another perspective on limits on ID-type columns.)

The first restriction required for this example is on the STORE_ID column. In order to use that column, a join must be made back to the STORES table. Figure 41 shows how this join would be specified.


Figure 41

Join to STORES Table

Now, the join to the Regions table is added, with the appropriate constraining value, as shown in Figure 42.

Figure 42

Join to Regions Table
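Together, Figure 41 and Figure 42 correspond to BRIOSECR rows like the sketch below. The join and constraint column names (STORE_ID, REGION_ID, REGION_NAME) follow the sample database layout in Figure 36, but treat the exact names and the 'Americas' literal as illustrative assumptions:

-- Reading SALES_FACT.STORE_ID requires a join back to STORES
INSERT INTO BRIOSECR (UNIQUE_ID, USER_GRP, SRCTBL, SRCCOL,
                      JOINTBL, JOINCOLS, JOINCOLJ)
VALUES (3, 'AMERICAS', 'SALES_FACT', 'STORE_ID',
        'STORES', 'STORE_ID', 'STORE_ID')

-- STORES must in turn join to REGIONS, limited to the appropriate region
INSERT INTO BRIOSECR (UNIQUE_ID, USER_GRP, SRCTBL, SRCCOL,
                      JOINTBL, JOINCOLS, JOINCOLJ,
                      CONSTRTT, CONSTRTC, CONSTRTO, CONSTRTV)
VALUES (4, 'AMERICAS', 'STORES', 'STORE_ID',
        'REGIONS', 'REGION_ID', 'REGION_ID',
        'J', 'REGION_NAME', '=', '''Americas''')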

The only remaining part of the example is letting user BRIO, also a member of the AMERICAS group, see the data in an unrestricted way. Handling this case is left as an exercise for the reader.


Other Important Facts


This section contains miscellaneous information about the implementation of row-level security.

Custom SQL
Custom SQL is used to provide special SQL syntax that the software does not generate. In the absence of row-level security, users with proper permissions on the Interactive Reporting document can modify custom SQL to produce ad hoc results. When row-level security is in place, Custom SQL is affected in two ways:

If the published Interactive Reporting document contains an open Custom SQL window, it is used as is when the user processes a query. No restrictions are applied to the SQL; however, the user cannot modify the SQL. While this can be a handy feature, take care when publishing Interactive Reporting documents that require custom SQL that they don't compromise the security requirements.

If the user chooses the Reset button on the Custom SQL window, the SQL shown includes the data restrictions. The original intent of the Custom SQL is lost, and the user cannot get it back except by requesting the Interactive Reporting document from the server again.

Similar issues apply to the use of imported SQL.

Limits
The row-level security feature affects limits in three ways.

First, if a user is restricted from accessing the content of certain columns, and the user attempts to show values when setting a limit on the restricted column, the restrictions are applied to the SQL used to get the show-values list. That way, the user cannot see and specify a value they would not otherwise be permitted to access.

Second, setting limits can result in some perhaps unexpected behavior when coupled with row-level security restrictions. This is best explained by example. In order to read the amount of sales, the user is restricted to a join on the STORE_ID column back to the stores table, and in addition, the user can only see information for the STORE_ID when the state is Ohio. This user tries to set a limit on the unrestricted column STATE and chooses something other than Ohio, thinking this a way to subvert the data restrictions. Unfortunately for that user, no sales amount information will be returned at all in this case. The SQL will specify WHERE STATE = <user-selected value> AND STATE = 'OH'. Obviously, the state cannot be two different values at the same time, so no data will be returned. Of course, a user may try to set a limit on the CITY column instead of the STATE column, thinking the city name might exist in multiple states. As long as the need exists to access the amount of sales column in the SALES table with identifying store information, though, the state limit will still be applied, and no data the user should not be able to see will be accessible to that user. It just will not prevent a user from getting a list of stores when sales data is not part of that list.

Generally speaking, restricting access to facts based on the foreign key in the fact table(s) works best. If it is necessary to restrict the user's access to a list of stores, these dimension restrictions work best when applied to all columns in the dimension table with a limit on the source table. For example, using the requirements described above to restrict the amount of sales information to Ohio only, with the same restriction on the dimension-only queries, do not apply any limit on access to the amount sales information except that it must be joined back to the STORES table on STORE_ID. Then, add a restriction for all columns in the STORES table, limiting it to only stores in Ohio. This limits access to both fact and dimension data.

Third, when setting a limit using Show Values, it has already been noted that any restrictions on the column to be limited are applied to the SQL that generates the show-values list. For example, using the restrictions described in the previous paragraph, attempting to show the values list for the CITY column would be constrained to those cities in Ohio. Now, consider the following scenario. The SALES_FACT table also has a TRANSACTION_DATE and a PRODUCT_ID column. The transaction date column is tied back to a PERIODS table, where dates are broken down into quarters, fiscal years, months, and so on. In this somewhat contrived example, a restriction is placed on the PERIODS table, where values there are joined back to the sales transaction table and restricted by PRODUCT_ID values in a certain range. The user sets a limit on fiscal year in the PERIODS table and invokes show values in the Limit dialog box to pick the range. Because of the restrictions in place, only one fiscal year is available, and the user picks it. Now, the user builds a query that does not request the fiscal year column itself but does reference the PRODUCT_ID field, and processes it. This query returns, for the sake of argument, 100 rows. Now the user decides there is a need to see the fiscal year value and adds it to the Request line. Reprocessing the query returns only 50 rows. Why? In the first case, PRODUCT_ID values outside of the range allowed when querying the fiscal year column appear in the results. In the second case, the query causes the restriction on the PRODUCT_ID range to be included. Restrictions are only applied when a user requests to see data. There was no request to see the fiscal year column in the first case, except while setting the limit, and there is no restriction on seeing PRODUCT_ID values. This example is contrived because restricting access to a dimension based on data in a fact table would be extremely unusual. Nevertheless, it illustrates a behavior that should be kept in mind when implementing restrictions.

Naming
Another way to set the restrictions described above is by a subquery. Instead of directly setting the limit on the STATE column, limit the values in the STORE_ID column in the STORES table. The constraint operator would be IN, and the constraint values field might look something like this:
(SELECT S.STORE_ID FROM STORES S WHERE S.STATE = 'OH')

Now, no matter what limit the user sets in the STORES table, they will always be constrained to the set of store IDs that are allowed based on their group memberships and their own user name. Even if a city outside of the allowed state is chosen, such as a city that exists in more than one state, any stores in that other city will not show up in the results.


Using a subquery can be useful when incorporating existing security systems into the row-level security feature of Interactive Reporting. When constructing constraints of this type, it is especially important to know SQL. For example, to specify a subquery, it helps to know that a subquery is always enclosed in parentheses. It is also important to know how the Workspace generates SQL and to follow its naming conventions to make sure the syntax generated is appropriate.

Table and Column Names


For the most part, simple security constraints reference directly the actual object names in the database. Case sensitivity in names should be observed when and where required. For subqueries and other SQL constructs used to specify the constraint values, it is sometimes useful to refer to objects already used by the software's SQL generation process. To do this:

For table references in the FROM clause, use From.tablename, where tablename is the display name seen in the Interactive Reporting document's data model. If the display name contains a space, use the underscore to represent the space.

For column names, use tablename.columnname, following the same rule as above, except that the From. prefix should not be used.

Alias Names
By default, when processing user queries, table references in the SQL are always given alias names. Alias names are convenient shorthand for long table references, and they are required when trying to build correlated subqueries. These alias names take the form ALn, where n is replaced by an arbitrary number. These numbers are usually based on the topic priority properties of the data model and can easily change based on several factors. For example, a user with the proper permissions can rearrange the topics, thus giving them different priorities. Because these numbers are dynamic, constraint specifications should never rely on them. Instead, by using the naming scheme above, the appropriate alias will be added to the constraints. So, if the requirement is a correlated subquery, the appropriate name will be given to the column in the outer query when referenced by the correlated subquery.

In the example above, using a subquery to restrict STORE_ID values to those in a specific state, it was neither necessary nor desirable to use the Hyperion Solutions naming conventions. There, the set of values was to be derived in a subquery that operated independently of the main query. Consequently, the From. prefix was not used in the FROM clause of the subquery, and the alias names were given in a way that does not conflict with the alias names generated automatically by the software. To use a correlated subquery, then, consider syntax like the following:
FROM STORES S WHERE S.STORE_ID = Stores.Store_Id

The reference to the right of the equal sign will pick up the alias name from the outer query and thus provide the correct correlation requirements.


Chapter 28

Troubleshooting Interactive Reporting Studio Connectivity

This section describes how to use the dbgprint tool to diagnose connectivity problems in the Interactive Reporting products.

In This Chapter

Connectivity Troubleshooting with dbgprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
dbgprint and Interactive Reporting Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
dbgprint and the Interactive Reporting Web Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485


Connectivity Troubleshooting with dbgprint


If you experience difficulties logging on to or querying a database, you may be able to solve the problem with the help of a dbgprint (debug print) file. The dbgprint file automatically logs detailed status information that can assist you when troubleshooting platform configuration and connectivity problems. Hyperion Solutions Customer Support personnel will usually request a dbgprint file when they help you solve a connectivity-related problem. Although this topic is written with reference to Interactive Reporting, the dbgprint instructions apply to other Hyperion tools as well. If you experience continued connectivity problems with any of these tools, or have difficulty understanding the contents of a dbgprint file, you can forward the contents of the dbgprint file to Hyperion Solutions Customer Support for assistance.
Note: dbgprint is strictly a diagnostic tool, and the information contained is useful only for troubleshooting. Because Hyperion tools repeatedly log information to the file, dbgprint considerably slows application performance and should only be used if you encounter connectivity problems.

dbgprint and Interactive Reporting Studio


dbgprint is a text file. When the file is placed in the directory containing the Interactive Reporting Studio executable (brioqry.exe), Interactive Reporting Studio automatically writes internal status information to it.

To use dbgprint with Interactive Reporting Studio:


1 Exit Interactive Reporting Studio if it is still running.

2 Start a text editor (for example, Notepad, Simple Text, or WordPad).

3 Save an empty file as dbgprint (with no file extension) to the directory that contains the Interactive Reporting Studio executable.

Typically, brioqry.exe is located at HYPERION_HOME\BIPlus\Client\bin\brioqry.exe.

If you are using Notepad, you first have to type a space or character before you can save the file. Do not save the file with a file extension. If you are operating in a Windows environment, make sure that no extension is appended to the end of the file name; Notepad automatically appends the .txt extension to the saved file, so remove any extension before you proceed to the next step.

In the UNIX environment, you need to create a file named DbgPrint (note the capitalization) and place it in the bin directory for Interactive Reporting Studio.

4 Close the text editor and start Interactive Reporting Studio by opening the actual application file.


In some instances, dbgprint does not log information if Interactive Reporting Studio was started through an alias or shortcut. Instead, start Interactive Reporting Studio using the Finder (Macintosh) or Windows Explorer. Clicking a shortcut only works if the Start In field in the Properties dialog box for the shortcut shows the path to the brioqry.exe file.

5 Once Interactive Reporting Studio is running, recreate the steps that resulted in the previous error, or follow any instructions given to you by a Hyperion Solutions customer support representative.

Typical things you may be asked to do are:


Connect to the database
Retrieve a list of tables
Add tables to the work space
Create and process a query
Set a limit

6 Once you have completed the above tasks, quit Interactive Reporting Studio and open the dbgprint file.

7 View the contents of the dbgprint file.
The file should contain status information detailing the Interactive Reporting Studio logon session. You will probably be asked to either fax or email the contents of the dbgprint file to Hyperion Solutions. If the file is blank, review the previous steps and repeat the process.
Note: If you need to run another dbgprint file, save the contents of the file with a unique name. Each time you run the brioqry.exe file, the existing dbgprint file is overwritten.

dbgprint and the Interactive Reporting Web Client


DbgPrint files can also be used with Interactive Reporting Web Client.

To use dbgprint with Interactive Reporting Web Client:


1 Shut down the Web browser.

2 Start a text editor (for example, Notepad, Simple Text, or WordPad).

3 Save an empty file as dbgprint to the folder where the browser executable (.exe) resides (for example, the C:\Program Files\Internet Explorer directory).

If you are using Notepad, you first have to type a space or character before you can save the file. If you are operating in a Windows environment, make sure that no extension is appended to the end of the file name; Notepad automatically appends the .txt extension to the saved file, so remove any extension before you proceed to the next step.

4 Start the Web browser.


The DbgPrint file starts collecting debug information about the processing of the queries.


Chapter 29

Interactive Reporting Studio INI Files

The Interactive Reporting Studio INI files are simple text files that are used to store system and application settings. Table 52 shows each INI file used by each application and the type of information it contains.

Table 52    INI Files Used in Interactive Reporting

Interactive Reporting Studio:
BQFORMAT.INI: Stores locale and custom numeric formats.
BQMETA0.INI: Stores OMI metadata settings for supported metadata sources.
BQTOOLS.INI: Stores Custom Menu definitions (only present if custom menus are defined).

Interactive Reporting Web Client:
BRIOQPLG.INI: Stores regional settings for the plug-in.
BQFORMAT.INI: Stores locale and custom numeric formats.
BQTOOLS.INI: Stores Custom Menu definitions (only present if custom menus are defined).

Workspace:
BQFORMAT.INI: Stores locale and custom numeric formats for use with the Workspace.
INTELLIGENCE.INI: Stores default configuration settings for the UI.
SQR.INI: Stores SQR configuration info for the Job Factory.

Note: All server INI files are stored in subdirectories off of the HYPERION_HOME directory, and not in the Windows OS directory. Internationalized versions of the BQFORMAT.INI and BQMETA0.INI files are also present.


Part

Administering Web Analysis

Chapter 1, Web Analysis Configuration Options and Utilities


Chapter 1

Web Analysis Configuration Options and Utilities

Administrators use the WebAnalysis.properties file and related Web Analysis utilities to configure, maintain, and optimize Web Analysis behavior in BI+.

In This Chapter

Web Analysis Configuration Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Web Analysis Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Changing Web Analysis Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499


Web Analysis Configuration Options


WebAnalysis.properties contains variables that control Web Analysis functionality.

Administrators must modify this file to support their specific implementations.


WebAnalysis.properties location when using the Apache Tomcat 4.0.1 application server: C:\Hyperion\WebAnalysis\deployments\webapps\WebAnalysis\WEB-INF\conf

WebAnalysis.properties location when using IBM WebSphere application servers: C:\Hyperion\appserver\hosts\default_host\WebAnalysis\classes

To edit WebAnalysis.properties variables:


1 Stop the application server.

2 In a text editor, open WebAnalysis.properties.

The default installation location is \\Hyperion\WebAnalysis\conf\WebAnalysis.properties.

3 Edit variables and save changes.

4 Restart the application server.


Topics that explain Web Analysis configuration options:

Controlling Result Sets on page 493
Configuring Java Plug-in Versions on page 493
Configuring the Repository on page 494
Configuring Hyperion System 9 BI+ Analytic Deployment Services on page 494
Considerations for Configuring Analytic Deployment Services on page 495
Resolving Analytic Services Subscriptions in Web Analysis on page 496
Configuring a Web Analysis Mail Server on page 496
Formatting Data Value Tool Tips on page 496
Setting Web Analysis to Log Queries on page 496
Exporting Raw Data Values to Excel on page 497


Controlling Result Sets


Users and system administrators can set row limits to control relational and OLAP query result set size. This protects server and network resources from being consumed by large query result sets. The Relational Drill-through dialog box features a Max rows to return field that controls the number of rows returned during relational drill-through. You must set other OLAP and relational row limits in WebAnalysis.properties, the path to which varies by application server configuration.

WebAnalysis.properties contains variables that control the query result sets in terms of rows:

MaxDataCellLimit: OLAP database connection query result set size; default is 50000
MaxJdbcCellCount: Relational database connection query result set size; default is 50000

Note: MaxJdbcCellCount is located in the Analytic Services Config section.

Configuring Java Plug-in Versions


Web Analysis Studio is configured to use a specific Sun Java Plug-in, which administrators can update by editing the static versioning statement in WebAnalysis.properties. This mandates loading the specified version of the Sun Java Plug-in when clients log on to Web Analysis Studio, but does not impact the JDK used by the server.

To edit the WebAnalysis.properties static versioning statement:


1 Stop the application server.

2 In a text editor, open WebAnalysis.properties.
The default location is \\Hyperion\WebAnalysis\conf\WebAnalysis.properties. At the top of the file are three variables:
_JREVersion=1.4 _JREClassID=CAFEEFAC-0014-0002-0006-ABCDEFFEDCBA _JRECodeBaseVersion=j2re-1_4_2_06-windows-i586-p.exe

3 Edit these values and remove the preceding underscore to change the Sun Java Plug-in version.
Alphabetic characters at the beginning and end of the string are set by Sun. Do not change these characters. The first two sets of numeric digits indicate the Sun Java plug-in version. The third set of numeric digits indicates the patch number. For example, this is the class identifier (CLSID) value for Sun Java plug-in 1.3.1_10:
clsid:CAFEEFAC-0013-0001-0010-ABCDEFFEDCBA

4 Save your changes.

5 Restart the application server.


Configuring the Repository


The Repository Config section of WebAnalysis.properties sets a series of variables supporting JDBC connectivity for supported repositories:
db.type=
db.subprotocol=
db.driver=
db.alias=
db.user=
db.password=
db.password-encrypted=true

Valid alternate values for each variable are commented out on the line after the variable. To use an alternate value, remove the pound sign (#) and place it before the old value. When moving, migrating, or upgrading the repository, you may need to edit these variables and re-encrypt the password (see Repository Password Encryption Utility on page 497).

Configuring Hyperion System 9 BI+ Analytic Deployment Services


WebAnalysis.properties supports variables that enable Analytic Deployment Services, which provides Analytic Services connection alternatives for administrators running Web Analysis on Solaris operating systems. Review Considerations for Configuring Analytic Deployment Services on page 495 before configuring Analytic Deployment Services. See the Hyperion System 9 BI+ Analytic Deployment Services Installation Guide for complete information on installing, configuring, and using this service.
Table 53    EDS Variables

UseEES: A value of true prompts ADM to use Analytic Deployment Services to access Analytic Services; a value of false enables ADM to use the default JNDI driver.

EESDriverName: Analytic Deployment Services driver to ADM; do not modify.

EESServerName: Server running Analytic Deployment Services.

EESLocale: Locale for Analytic Deployment Services.

EESDomain: Domain for Analytic Deployment Services; do not modify this variable.

EESORBType: ORB type for Analytic Deployment Services; only TCP/IP is supported.

EESPort: Analytic Deployment Services communication port.

EESUseConnPool and EESConnPerOp: Method ADM uses for Analytic Deployment Services connection pooling. A connection pool is a set of login sessions from Analytic Deployment Services to an Analytic Services server. Analytic Deployment Services uses a connection pool to process requests for Analytic Services services. There are three valid combinations of these properties:

EESUseConnPool=false (EESConnPerOp is ignored): Connection pooling is not used.
EESUseConnPool=true, EESConnPerOp=false: Connection pooling is used; a connection is held from when a cube view is opened until it is closed.
EESUseConnPool=true, EESConnPerOp=true: Connection pooling is used; the connection is released immediately after each operation.

EESUseReportOption: Set to true only if using Microsoft's JVM.

Considerations for Configuring Analytic Deployment Services


Keep in mind the following items when configuring Analytic Deployment Services:

User name and password used by the ADM Analytic Deployment Services driver must be valid on the Analytic Deployment Services server and the Analytic Services server.

ChangePassword and SetPassword server actions attempt to modify both Analytic Deployment Services and Analytic Services OLAP server passwords. To be successful, olap.server.autoChangePassword must be set to true, and the administrator user ID specified in the EDS_ES_HOME/bin directory (olap.server.admin.name=admin) must differ from the user ID being passed by the action.

Two archives installed with Analytic Deployment Services must be defined in the Web Analysis classpath: ess_es_server.jar and ess_japi.jar.

Hyperion does not recommend implementing Analytic Deployment Services in conjunction with AIX platforms.

Web Analysis Configuration Options

495

Resolving Analytic Services Subscriptions in Web Analysis


Administrators can set the WebAnalysis.properties file variable FastResolveAnalyticServicesSubscriptions=true to realize a Web Analysis performance improvement, as long as they are not conducting hybrid analysis. Setting the variable to true indicates use of Analytic Services pass-through methods to generate dimension member lists for subscription controls. Because there is no way for Web Analysis to detect hybrid analysis, this variable is set to false by default.

FastResolveAnalyticServicesSubscriptions=false indicates use of standard Analytic Services resolve-member methods to generate dimension member lists for subscription controls.

Configuring a Web Analysis Mail Server


To configure a Web Analysis mail server:
1 Stop the application server.

2 In a text editor, open WebAnalysis.properties.
The default location is \\Hyperion\WebAnalysis\conf\WebAnalysis.properties.

3 Scroll to the end of the file.

4 For the MailServer=<localhost> variable, remove the pound signs (#) and enter a value for localhost.

5 Save the file.

6 Restart the application server.

Formatting Data Value Tool Tips


WebAnalysis.properties contains a variable that controls the format of data value tooltips, which display as small boxes over data cells when the cursor triggers a float-over event. When the variable FormatToolTips=true, tooltips display data values in scientific notation, unformatted up to 1E7. When FormatToolTips=false, or when the variable is not specified, tooltips display data values in a format that matches the spreadsheet grid.

Setting Web Analysis to Log Queries


Setting the WebAnalysis.properties variable LogQueries=true redirects the ALE query report and Analytic Services report specification created by ADM to the Web Analysis output log. This variable is set to false by default, to minimize the amount of logged information.


Exporting Raw Data Values to Excel


Setting the WebAnalysis.properties variable ExportDataFullPrecision=true exports data values directly from data sources to Microsoft Excel (in lieu of data values with client-based formatting).

Web Analysis Utilities


Topics that explain the Web Analysis utilities:

Repository Password Encryption Utility on page 497
Web Analysis Configuration Test Servlet on page 498

Repository Password Encryption Utility


When moving, migrating, and upgrading repositories, users may change the repository user ID and password values listed in WebAnalysis.properties. Because these file values are viewable over the Web using the Configuration Test Servlet, a method exists to encrypt password values.

To change and encrypt repository passwords:


1 Stop the application server.

2 In a text editor, open WebAnalysis.properties.
The default location is \\WebAnalysis\conf\WebAnalysis.properties.

3 In the Repos Config section, locate these variables:


db.user=<userID>
db.password=<encrypted password>
db.password-encrypted=true

4 Edit values for user ID and password.


Note that the password is not encrypted.

5 Change the db.password-encrypted value to false.

6 Save your changes.

7 Navigate to \\WebAnalysis\conf\ and run EncryptUtil.bat or EncryptUtil.sh.
You may use alternative methods to execute this file. EncryptUtil locates the user ID, password, and encryption variable, encrypts the password, and resets db.password-encrypted to true. To review the changes, open WebAnalysis.properties.

8 Restart the application server.


Web Analysis Configuration Test Servlet


Use Web Analysis Configuration Test Servlet to diagnose and resolve connectivity issues. The servlet displays links that centrally report environmental variables and WebAnalysis.properties parameters, and test connectivity to the class factory, the repository, the external authentication configuration file, and the Analytic Services driver.

To launch Configuration Test Servlet, open a Web browser and type this URL:
http://<hostname>/WebAnalysis/Config

Configuration Test Servlet provides links to configuration information as discussed in these topics:

List Environment Variables on page 498
View Web Analysis Property Files on page 498
Services Framework Test on page 498
Test Pages for Analytic Services, Financial Management, and SAP BW ODBO on page 499

Tip: Use the browser's Back button or the Available Tests link at the page bottom to return to the Configuration Test Servlet home page.

List Environment Variables


The List Environment Variables page provides information about Java system properties and system environment variables, such as user.name, java.class.path, java.home, HYPERION_HOME, LOGONSERVER, and CLASSPATH.

View Web Analysis Property Files


The Web Analysis Property Files page provides links to and locations for WebAnalysis.properties and CssConfig.xml.

Services Framework Test


The Test ATF Configuration page retrieves information from the repository and tests the repository connection. The last line on the page indicates whether the test executed successfully. If the test failed, a stack trace is displayed to help you troubleshoot problems.


Test Pages for Analytic Services, Financial Management, and SAP BW ODBO
The test pages for Analytic Services, Financial Management, and SAP BW ODBO provide this configuration information:

ADM Environment Variables
ADM Property File Locations (click a link to view the property file)
ADM Jar Locations
Version Information

You use these pages to test your connectivity (using ADM) to Analytic Services, Financial Management, and SAP BW ODBO.

Changing Web Analysis Ports


You can change Web Analysis port numbers without rerunning the Configuration Utility. This procedure describes how to change Web Analysis port numbers for Tomcat Application Servers. You must match port numbers in three locations to successfully change Web Analysis port numbers.
Note: Web Analysis must not be running during this process. At the end of the process, you must restart any applications using the HTTP Server.

To change Web Analysis port numbers:


1 Change the Web Analysis port number on Tomcat Application Servers:
a. In Windows Explorer, navigate to
%BIPlus_Home%\AppServer\InstalledApps\Tomcat\5.0.28\WebAnalysis\conf and open the Server.xml file for editing.

b. Change the values for the shutdown and service connector ports:
At the top: Server port=port_number
At the bottom: Connector port=port_number
c. Save and close Server.xml.
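As a rough sketch of step 1b (the XML is abbreviated, and the port values 8005 and 8009 are placeholders; your installation's defaults may differ), the two entries in Server.xml look like this:

<!-- Near the top: the shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">
  ...
  <!-- Near the bottom: the AJP service connector that the HTTP Server forwards to -->
  <Connector port="8009" protocol="AJP/1.3" />
</Server>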

2 Update the HTTP Server with the Web Analysis port number:
a. In Windows Explorer, navigate to
%HYPERION_HOME%\common\httpServers\Apache\2.052\conf and open HYSLWorkers.properties for editing.

b. Change the Web Analysis port number to match the AJP port specified by the Service Connector port parameter in Server.xml (Connector port=).
c. Save and close HYSLWorkers.properties.
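HYSLWorkers.properties follows the Apache mod_jk workers format. As a sketch only (the worker name and values below are assumptions; edit the worker entries already present in your file), the relevant lines look like:

# AJP worker that routes HTTP Server requests to Web Analysis;
# the port must match the Connector port value in Server.xml
worker.WebAnalysis.type=ajp13
worker.WebAnalysis.host=localhost
worker.WebAnalysis.port=8009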


3 Update the Hyperion Apache HTTP Server configuration with the Web Analysis port number:
a. In Windows Explorer, navigate to
%HYPERION_HOME%\common\httpServers\Apache\2.052\conf and open HTTP.conf for editing.

b. Change the Listen port number to match the port specified by the Service Connector port parameter in Server.xml (Connector port=).
c. Save and close HTTP.conf.
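Again as a sketch, with 8009 standing in for your actual connector port value, the directive in HTTP.conf would read:

# Port on which the Hyperion Apache HTTP Server listens;
# per step 3b, it must match the Connector port in Server.xml
Listen 8009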

4 Restart the Hyperion Apache HTTP Server and any applications using the HTTP Server.


APPENDIX A

Backup Strategies

Standard data center policies for database backups include incremental daily backups and weekly full backups, with off-site storage, to protect an organization's investment. When you back up Workspace, you should plan the backup in the same way that you plan other database backups.

In This Chapter

What to Back Up, page 502
General Backup Procedure, page 502
Backing Up the Workspace File System, page 502
Sample Backup Script, page 505
Backing Up the Repository Database, page 506
Backing Up Clients, page 506


What to Back Up
You must back up the following items in your system:

File system, which contains Workspace content and other system information (including files in other directories and on other hosts)
Repository database, which contains user and item metadata
Registry keys from the same point in time (Windows only)
Shared Services

Note: For information about backing up Shared Services, see the Hyperion System 9 Shared Services Installation Guide.

Workspace maintains an item repository in the native file system and stores metadata, or descriptive information, about each user and object in an RDBMS.
Note: To recover data, restore the database and file system backups (and registry if required), and restart the services.

General Backup Procedure


To back up Workspace:
1 Shut down Workspace services.
2 Back up the Workspace file system.
3 Back up the Workspace repository database.
4 Save the backup (on tape or CD).
Note: If you use Windows, export the Workspace registry key. If you use UNIX, back up the /etc/rc or /etc/init.d boot startup scripts.
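As a sketch of that note (the registry key path and destination paths are assumptions; export whichever key your installation actually uses):

:: Windows: export the Workspace registry key to a file
reg export "HKLM\SOFTWARE\Hyperion Solutions" C:\backup\workspace-registry.reg

# UNIX: copy the boot startup scripts alongside the file-system backup
cp -p /etc/init.d/* /backup/boot-scripts/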

Backing Up the Workspace File System


There are five backup types, distinguished by when you perform them:

Complete: Backs up the entire system. Your organization's policies and procedures determine whether and how often you perform a complete backup.
Post-installation: Backs up certain directories; performed after completing an installation and before using the system.


Daily incremental: Backs up only files that are new or modified since the previous day. Daily incremental backups involve directories that contain frequently changing information, such as repository content and log files.
Weekly full: Backs up all files in the directories for which you do incremental backups on a daily basis.
As needed: Backs up data only after changes are made, rather than on a regular schedule. As-needed backups involve directories containing files that are customizable but are not modified regularly.

The Hyperion Home directory contains the Workspace products you installed on the host. Subdirectories of Hyperion Home include \BIPlus and \common, among others.

Complete Backup
To back up your system comprehensively, back up the Hyperion Home directory. This is the default installation directory for all Hyperion Solutions products on a given host.
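For example (a sketch only; the /Hyperion path matches the default installation directory used in the sample script later in this appendix, and the destination path is an assumption):

# Archive the entire Hyperion Home directory
tar -cvf /backup/hyperion_home_full.tar /Hyperion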

Post-Installation
Immediately after installing, back up these directories:
Directory: BIPlus\Install
Contents: All configuration information defined during installation; back up on all hosts and compress each backup

Directory: BIPlus\bin
Contents: Start batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password)

Directory: BIPlus\common\config
Contents: Service configuration files used at service startup: server.xml, config.dat, and server.dat

Directory: BIPlus\lib
Contents: JAR files required by one or more Workspace components, and library files for Job Utilities, LSC, and RSC

Directory: Hyperion Home\common\JDBC
Contents: JDBC drivers required to run the Workspace services

Directory: Hyperion Home\common\ODBC
Contents: Required ODBC drivers

Directory: BIPlus\common\sqr\lib
Contents: Files necessary to manipulate the metadata for versions of Production Reporting


Weekly Full and Daily Incremental


Back up these directories fully once a week and incrementally every day:
Directory: BIPlus\logs
Contents: Log files for services operating on a computer

Directory: BIPlus\data\RM1_host
Contents: Content (repository files)

As Needed
Back up the following directories as needed:
Directory: BIPlus\bin
Description: Start batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password)

Directory: BIPlus\common\config
Description: Service configuration files used at service startup, server.xml, and config.dat

Directory: BIPlus\data
Description: Directories associated with services

Reference Table for All File Backups


The following table lists the directories for all types of backups.
Table 54: Backup Reference

Directory: BIPlus\lib
Contents: Library files for Job Utilities
Backup requirements: After initial installation

Directory: BIPlus\bin
Contents: Workspace startup batch scripts for each service, and the ConfigFileAdmin utility used by the administrator to decode and change passwords (typically, the only password of interest is the RDBMS login password). On Windows systems, also the Setup.exe program file, used to create or delete services running as Windows Services and to update the Windows Registry information.
Backup requirements: After initial installation and after any changes are made to start scripts

Directory: Hyperion Home\common\JDBC and Hyperion Home\common\ODBC
Contents: All drivers required to run the Workspace services
Backup requirements: After initial installation

Directory: BIPlus\data\RMx_hostName
Contents: Content (repository files)
Backup requirements: Daily incremental, weekly full (consistent with company backup policy)

Directory: BIPlus\common\config
Contents: Service configuration files used at service startup, server.xml, and config.dat
Backup requirements: After initial installation, and before and after subsequent service configuration changes that focus on adding and removing services in a given domain

Directory: BIPlus\Install
Contents: Configuration information defined during installation
Backup requirements: After initial installation on each host; back up on each host and compress each backup

Directory: BIPlus\lib
Contents: JAR files required by Hyperion components
Backup requirements: After initial installation

Directory: BIPlus\logs
Contents: Log files for services operating on a computer
Backup requirements: Daily incremental, weekly full (consistent with company backup policy)

Directory: BIPlus\common\sqr\lib
Contents: Files necessary to manipulate the metadata for versions of Production Reporting
Backup requirements: After initial installation

Sample Backup Script


Sample script for a file-system backup of a Sun Solaris deployment of Workspace:
Backup Utility: Solaris Dump
Backup Type: Full Level 0 Dump
Backup Frequency: Weekly, run on Saturday at 1 AM

#!/bin/sh
PARMS="0ucbsdf 126 5000 61000"
DEVISE="/dev/rmt/0hn"
CMD="/usr/sbin/ufsdump"
FileSystems="/Hyperion/BIPlus/logs /Hyperion/BIPlus/data/RM1_Solar12"
# ---------------------------------------------------------
# Perform Level 0 Dump of all listed filesystems
# ---------------------------------------------------------
echo "Starting Backup set for the following filesystems:"
echo ""
for i in $FileSystems
do
    echo backing up filesystem:
    echo $i
    $CMD $PARMS $DEVISE $i
done
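The script above performs only the weekly level 0 (full) dump. A companion daily script could raise the dump level so that ufsdump writes only files changed since the most recent lower-level dump; this sketch, under the same device and file-system assumptions, uses level 9:

#!/bin/sh
# Daily incremental (level 9) dump of the same file systems;
# level 9 backs up only files changed since the last lower-level dump
PARMS="9ucbsdf 126 5000 61000"
DEVISE="/dev/rmt/0hn"
CMD="/usr/sbin/ufsdump"
FileSystems="/Hyperion/BIPlus/logs /Hyperion/BIPlus/data/RM1_Solar12"
for i in $FileSystems
do
    echo "backing up filesystem: $i"
    $CMD $PARMS $DEVISE $i
done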


Backing Up the Repository Database


Back up the repository database according to your company policy for database backups, taking into account repository usage volume. A backup of the Workspace repository database is RDBMS- (or vendor-) dependent. For details about the backup procedure for your particular RDBMS, see its documentation.
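For example (a sketch only; the database name and file path are invented, and the correct procedure is whatever your RDBMS vendor and company policy prescribe), a full backup of a SQL Server repository database might be:

-- Full backup of a hypothetical Workspace repository database
BACKUP DATABASE BIPlusRepository
TO DISK = 'D:\backups\BIPlusRepository.bak'
WITH INIT;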

Backing Up Clients
The backup needs of Workspace client installations are minimal. You should perform a standard post-installation full backup according to your company policy. Thereafter, the only files you need to back up are these servlet files:

Servlet configuration file, ws.conf on Windows or wsconf_platform on UNIX, located in the /WEB-INF/config directory of your servlet engine deployment (under BIPLUS/Appserver)
/WEB-INF/conf/BpmServer.properties
Modified files in /BIPLUS/Appserver
/WEB-INF/web.xml
Customized JSPs
Customized HTML templates


Glossary

access control A security mechanism that manages a user's privileges or permissions for viewing, modifying, and importing files or system resources.
access privileges The level of access (for example, view, modify, run, or full control) that the importer of an item grants to others.
accountability map A visual, hierarchical representation of the responsibility, reporting, and dependency structure of your organization. An Accountability map depicts how each accountability team in your organization interacts to achieve strategic goals. An accountability team is also known as a critical business area (team, department, office, and so on).
action A task or group of tasks executed to achieve one or more strategic objectives. In a Hyperion Performance Scorecard application, each action box represents an activity or task that helps to accomplish a strategic objective. Each action is usually assigned measures.
actions Job output for an Interactive Reporting job is defined in terms of a series of actions.
active group A group that is entitled to access the system.
active service A service whose Run Type is set to Start rather than Hold.
active user A user who is entitled to access the system.
active user/user group The user or user group identified as the current user by user preferences. Determines default user preferences, dynamic options, access, and file permissions. You can set the active user to your user ID or any user group to which you belong.
adaptive states Interactive Reporting levels of permission. There are six levels of permission: view only, view and process, analyze, analyze and process, query and process, and datamodel and analyze.
aggregate cell A cell comprising several cells. For example, a data cell that uses Children(Year) expands to four cells containing Quarter 1, Quarter 2, Quarter 3, and Quarter 4 data.
aggregate limit A limit placed on an aggregated request line item or aggregated metatopic item.
alias An alternative name.
Analysis Server Web Analysis Server; an application server program that distributes report information and enables Web client communication with data sources.
Analyze The main Web Analysis interface for analysis, presentation, and reporting.
appender A Log4j term for destination.
application A program running within a system.
application server A middle-tier server that is used to deploy and run Web-based application processes.
asymmetric analysis A report characterized by groups of members that differ by at least one member across groups. The number and names of members can differ.
attribute A characteristic of a dimension member that is not stored in the data source but is calculated on demand. You can select, group, or calculate members that have a specified attribute. For example, an Employee Number dimension member may have attributes of Name, Age, or Address.
attribute dimension A type of dimension that enables analysis based on the attributes or qualities of dimension members.
authentication service A core service that manages one authentication system.


authentication service repository (ASR) A database that contains a complete model of users/groups in an external system.
authentication system A security measure designed to validate and manage users and groups.
axis A two-dimensional report aspect used to arrange and relate multidimensional data, such as filters, pages, rows, and columns.
bar chart A chart that can consist of one to 50 data sets, with any number of values assigned to each data set. Data sets are displayed as groups of corresponding bars, stacked bars, or individual bars in separate rows.
batch POV A collection of all the dimensions on the user POV of every report and book in the batch. While scheduling the batch, you can set the members selected on the batch POV.
book A container that holds a group of similar Financial Reporting documents. Books may specify dimension sections or dimension changes.
book POV The dimension members for which a book is run. A book is a collection of Financial Reporting documents that may have dimensions on the user POV. Any dimension on a report's user POV is added to the book POV and defined there. The member for a dimension on the book POV can be one of the following items: (a) User POV, meaning the member is set by the end user just before the book is run; (b) a specific member, in which case the selection is stored in the book definition and can be altered only in the Book Editor; (c) a set of member selections, so that a dimension left on the user POV of a report may be iterated over within the book. For example, a report may be run for four entities within one book.
bookmark A link to a reporting document or a Web site, displayed on a personal page of a user. The two types of bookmarks are My Bookmarks and image bookmarks.
bounding rectangle The perimeter that encapsulates the Interactive Reporting document content when embedding Interactive Reporting document sections in a personal page. It is required by Interactive Reporting to generate HTML and is specified in pixels for height and width or rows per page.
calculation The process of aggregating data, or of running a calculation script on a database.
calculation script A set of instructions telling Hyperion Essbase how to aggregate and extrapolate the values of a database.
Catalog pane A pane displaying a list of elements available to the active section. For example, if Query is the active section, the Catalog pane displays a list of database tables. If Pivot is the active section, the Catalog pane displays a list of results columns. If Dashboard is the active section, the Catalog pane displays a list of embeddable sections, graphic tools, and control tools.
categories Groupings by which data is organized (for example, month).
cause and effect map A map that depicts how the elements that form your corporate strategy are interrelated and how they work together to meet your organization's strategic goals. A Cause and Effect map tab is automatically created for each of your Strategy maps.
cell A unit of data representing the intersection of dimensions in a multidimensional database; the intersection of a row and a column in a worksheet.
chart A graphical representation of spreadsheet data. The visual nature of charts expedites analysis; color-coding and visual cues aid comparisons. There are many different chart types.
chart cell value A value that appears in the lower right corner of a chart on pages in the Monitor and Investigate Sections. The Editor defines the chart cell value that you see in Enterprise Metrics. The chart cell value might display a metric on the chart, such as Booking $, or a calculation based on the metrics displayed on the chart, such as the ratio of Booking $ to Forecast $.
chart column In Enterprise Metrics, Detail charts are displayed in columns below each Summary chart.
Chart section With a varied selection of chart types and a complete arsenal of OLAP tools such as group and drill-down, the Chart section is built to support simultaneous graphic reporting and ad hoc analysis.
Chart Spotlighter A feature that enables you to color-code charts based on some condition in Interactive Reporting Studio.
chart template A template that defines the metrics to display in Workspace charts.


child A member that has a parent above it in the database outline.
choice list A list of members that a report designer can specify for each dimension when defining the report's point of view. A user who wants to change the point of view for a dimension that uses a choice list can select only the members specified in that defined member list or those members that meet the criteria defined in the function for the dynamic list.
client A client interface, such as Web Analysis Studio, or a workstation on a local area network.
clustered bar charts Charts in which categories are viewed side by side within a given category; useful for side-by-side category analysis. Clustering is done only with vertical bar charts.
column A vertical display of information in a grid or table. A column can contain data from a single field, derived data from a calculation, or textual information.
column heading A part of a report that lists members across a page. When columns are defined that report on data from more than one dimension, nested column headings are produced. A member that is listed in a column heading is an attribute of all data values in its column.
computed item A virtual column (as opposed to a column that is physically stored in the database or cube) that can be calculated by the database during a query, or by Interactive Reporting Studio in the Results section. Computed items are calculations of new data based on functions, data items, and operators provided in the dialog box and can be included in reports or reused to calculate other data.
connection file A file used to connect to a data source.
console The console is displayed on the left side of the Enterprise Metrics workspace. The console is context sensitive, depending on the page displayed.
content Information stored in the repository for any type of file.
content area The Contents pane appears on the right side of the Workspace and provides specific information for the page that you are using.
cookie A small piece of information placed on your computer by a Web site.
correlated subqueries Subqueries that are evaluated once for every row in the parent query. A correlated subquery is created by joining a topic item in the subquery with one of the topic items in the parent query.
critical business area (CBA) An individual or a group organized into a division, region, plant, cost center, profit center, project team, or process; also called accountability team or business area.
critical success factor (CSF) A capability that must be established and sustained to achieve a strategic objective. A CSF is owned by a strategic objective or a critical process and is a parent to one or more actions.
cube The query result set from a multidimensional (OLAP) data source; a logically organized subset of OLAP database dimensions and members.
custom calendar Any calendar created by an administrator.
custom report A complex report from the Design Report module, composed of any combination of components.
cycle An Interactive Reporting job parameter that is used when scheduled Interactive Reporting jobs need to process and produce different job output with one job run.
Dashboard A collection of metrics and indicators that provide an interactive summary of your business. Dashboards enable you to build and deploy analytic applications.
Dashboard Home A button that returns you to the Dashboard section designated as the Dashboard Home section. If you have only one Dashboard section, Dashboard Home returns to that section. If you have several Dashboard sections, the default Dashboard Home is the top Dashboard section in the Catalog pane. In Design mode, you can specify another Dashboard section to be the Dashboard Home section.
data The values (monetary or non-monetary) associated with the query intersection.
data function A function that computes aggregate values, including averages, maximums, counts, and other statistics that summarize groupings of data. You can use data functions to aggregate and compute data from the server before it reaches the Results section, or to compute different statistics for aggregated totals and items in the other analysis sections.


data layout The interface used to edit a query, arrange dimensions, make alternative dimension member selections, or specify query options for the current section or data object.
data model Any method of visualizing the informational needs of a system.
data object A report component that displays the query result set. The display type of a single conventional data object can be set to spreadsheet, chart, or pinboard, and it displays OLAP query result sets. A SQL spreadsheet data object displays the result set of a SQL query, and the freeform grid data object displays the result set of any data source included in it.
data source 1. A data storage application. Varieties include multidimensional databases, relational databases, and files. 2. A named client-side object connecting report components to databases. Data source properties include database connections and queries.
database A repository within Essbase Analytics that contains a multidimensional data storage array. Each database consists of a storage structure definition (outline), data, security definitions, and optional scripts.
database connection A file that stores definitions and properties used to connect to data sources. Database connections enable database references to be portable and widely used.
database function A predefined formula in a database.
default folder A user's home folder.
descendant Any member below a parent in the database outline. For example, in a dimension that includes years, quarters, and months, the members Qtr2 and April are descendants of the member Year.
Design Report An interface in Web Analysis Studio for designing custom reports from a library of components.
Desktop An interface that presents the icons used to open items.
detail chart A chart that provides the detailed information that you see in a Summary chart. Detail charts appear in the Investigate Section in columns below the Summary charts. For example, if the Summary chart shows a Pie chart, the Detail charts below represent each piece of the pie.
dimension A data category used to organize business data for retrieval and preservation of values. Each dimension usually contains a hierarchy of related members grouped within it. For example, a Year dimension often includes members for each time period, such as quarters and months.
dimension tab In the Pivot section, the tab that enables you to pivot data between rows and columns.
dimension table 1. A table that includes numerous attributes about a specific business process. 2. In Enterprise Metrics, a table in a star schema with a single-part primary key.
display type One of three Web Analysis formats saved to the repository: spreadsheet, chart, and pinboard.
dog-ear The flipped page corner in the upper right corner of the chart header area. You can click the dog-ear to display a shortcut menu. The dog-ear is displayed only on charts in the Investigate Section.
drill A way to investigate results reflected by a chart in the Investigate Section. You can click a chart that hyperlinks to a lower (more detailed) level in the Investigate Section. This concept is called drilling.
drill anywhere A feature that enables you to drill into and add items to pivot reports residing in the Results section without returning to the Query section or trying to locate the item in the Catalog pane. Drill Anywhere items are broken out as new pivot label items.
drill target The data to which you are drilling. Specifying a drill target automatically creates a hyperlink enabling you to click the chart to obtain additional detail.
drill to detail A feature that enables you to retrieve items from a data model that are not in the Results section without rerunning the original query. This feature provides the ability to query the database interactively and filter the data that is returned. Drill-to-detail sets a limit on the query based on your selection and adds the returned value as a new pivot label item automatically.


drill-down Navigation through the query result set using the organization of the dimensional hierarchy. Drilling down moves the user perspective from general aggregated data to more detailed data. While default drill-down typically refers to parent-child navigation, drilling can be customized to use other dimension member relationships. For example, drilling down can reveal the hierarchical relationships between year and quarters or between quarter and months.
drill-through The navigation from a data value in one cube to corresponding data in another cube. For example, you can access context-sensitive transactional data. Drill-through usually occurs from the lowest point of atomicity in a database (detail) to a next level of detail in an external data source.
dynamic report A report containing current data. A report becomes a dynamic report when you run it.
Edit Data An interface for changing data values and sending edits back to Essbase Analytics.
employee A user responsible for, or associated with, specific business objects. Employees do not necessarily work for an organization; an employee may be, for example, an analyst or consultant. An employee must be associated with a user account for authorization purposes.
ending period The ending chart period, which allows you to adjust the date range shown in the chart. For example, an ending period of month produces a chart that shows information through the end of the current month.
exceptions Values that satisfy predefined conditions. You can define formatting indicators or notify subscribing users when an exception has been generated.
external authentication Logging on to Hyperion applications by means of user information stored outside the application, typically in a corporate authentication provider such as LDAP or Microsoft Windows NTLM.
externally triggered events Non-time-based events that are used to schedule job runs.
Extract, Transform, and Load Data source-specific programs that are used to extract and migrate data to an application.
extrapolation A means of showing projected figures. Extrapolation from the current date to the end of the current period is displayed on Enterprise Metrics charts with a white area of the bar. If a line chart shows extrapolation, the line that is extrapolated is dotted.
fact table The central table in a star join schema, characterized by a foreign key and elements drawn from a dimension table. This table typically contains numeric data that can be related to all other tables in the schema.
filter A mechanism used to limit data. While every dimension in the cube must participate in every intersection, you can make filter selections that focus the intersections on a smaller portion of the cube. For example, in Interactive Reporting Studio you use a filter to exclude certain tables or data values; in Enterprise Metrics Studio you implement a filter by adding a where clause on a join statement.
folder A file that contains other files for the purpose of ordering and structuring a hierarchy.
footer The text or images that are displayed at the bottom of each page in a report. A footer can contain a page number, date, company logo, document title or file name, author name, and so on. Footers can contain dynamic functions as well as static text.
format The visual characteristics of a document or a report object.
free-form grid A data object that presents OLAP, relational, and manually entered data together and enables you to leverage all these data sources in integrated dynamic calculations.
generic jobs Jobs that are neither Production Reporting nor Interactive Reporting jobs.
grid POV A means for specifying members for a dimension on a grid without placing the dimension on the row, column, or page intersection. A report designer can set the POV values at the grid level, preventing the user POV from affecting that particular grid. If a dimension has only one value for the entire grid, the dimension should be put into the grid POV instead of the row, column, or page.
group A construct that enables the assignment of users with similar system access requirements.


grouping columns A feature in the Results and Table sections that creates a new column in a dataset by grouping data from an already existing column. Grouping columns consolidate nonnumeric data values into more general group values and map the group values to a new column in the dataset.
header The text or images that are displayed at the top of each page in a report. A header can contain a page number, date, company logo, document title or file name, author name, and so on. Headers can contain dynamic functions as well as static text.
highlighting Depending on your configuration, you may see highlighting applied to a chart cell value or ZoomChart detail values. A value can be highlighted in red (indicating the value is bad), yellow (indicating that the value is a warning), or green (indicating the value is good).
host A server on which applications and services are installed.
host properties Properties pertaining to a host or, if the host has multiple Install_Homes, to an Install_Home. The host properties are configured from the LSC.
hyperlink A link to a file, Web page, or an HTML page on an intranet.
Hypertext Markup Language A markup language of tags that specify how Web browsers display data.
image bookmarks Graphic links on personal pages to Web pages or repository items.
implied share A member with only one child, or a member with multiple children of which only one child is consolidated. For this reason, the parent and child share the same value.
inactive group A group that cannot access the system because an administrator has inactivated it.
inactive service A service that has been placed on hold or excluded from the list of services to be started.
inactive user A user who cannot access the system because an administrator has inactivated the user account.
Install_Home A variable name for the path and directory where Hyperion applications are installed. Refers to a single instance of a Hyperion application when multiple applications have been installed on the same machine.
Interactive Reporting document sections Divisions of an Interactive Reporting document that are used to display and analyze information in different formats (such as the Chart section and Pivot section).
Interactive Reporting files or jobs Files created by Interactive Reporting and published into the repository as files or as jobs. Files and jobs have different capabilities.
intersection A unit of data representing the intersection of dimensions in a multidimensional database; also, a worksheet cell.
Java Database Connectivity A client-server communication protocol used by Java-based clients and relational databases. The JDBC interface provides a call-level API for SQL-based database access.
job output Files or reports produced from running a job.
job parameters The compile-time and runtime values necessary to run a job.
job parameters Reusable, named job parameters that are accessible only to the user who created them.
jobs A collection of documents that have special properties and can be executed to generate output. A job can contain Interactive Reporting documents, Production Reporting documents, or generic documents.
join A link between two relational database tables based on common content in a column or record, or a relational database concept indicating a link between two topics. A join typically occurs between identical or similar items within different topics. Joins enable row records in different tables to be linked on the basis of shared information in a column field. For example, a row record in the Customer table is joined to a related record in the Orders table when the Customer ID value for the record is the same in each table. This enables the order record to be linked with the record of the customer who placed the order. If you request items from unjoined topics, the database server has no way to correlate the information between the two tables, which leads to awkward datasets and run-on queries.
join path A predetermined join configuration for a data model. Administrators create join paths so that users can select the type of data model needed in a user-friendly prompt upon processing a query. Join paths ensure that the correct tables in a complex data model are used in a query.


JSP JavaServer Pages.
layer The stacking position of an object relative to other objects; you can send an object to the back or front, or move it backward or forward, relative to other objects.
legend box An informative box containing color-keyed labels to identify the data categories of a given dimension.
level A hierarchical layer within the database outline or tree structure.
line chart A chart that displays one to 50 data sets, with automatic, uniform spacing along the X-axis. Each data set is rendered by a line. A line chart can optionally show each line set stacked on the preceding ones, using either the absolute value or a normalized value from 0 to 100 percent.
link Link files are fixed references to a specific object in the repository. Links can reference folders, files, shortcuts, and other links using unique identifiers. Links present their targets in the current folder, regardless of where the targets are located or how the targets are renamed.
linked data model Documents that are linked to a master copy in a repository. When changes are made to the master, users are automatically updated with the changes when they connect their duplicate copy to the database.
linked reporting object A cell-based link to an external file in the Analytic Services database. Linked reporting objects can be cell notes, URLs, or files that contain text, audio, video, or pictures. Note that support of Analytic Services LROs in Financial Reporting applies only to cell notes at this time (by way of Cell Text functions).
local report object A report object that is not linked to a Financial Reporting report object in Explorer.
local results Results of other queries within the same data model. These results can be dragged into the data model to be used in local joins. Local results are displayed in the catalog when requested.
locked data model A data model that cannot be modified by a user.
logger A Log4j term for where the logging message originates: the class or component of the system in which a log message originated.
LSC services The services that are configured with the Local Service Configurator. They include Global Services Manager (GSM), Local Services Manager (LSM), Session Manager, Authentication Service, Authorization Service, Publisher Service, and, in some contexts, Data Access Service (DAS) and Interactive Reporting Service.
Map Navigator A feature that displays your current position on a Strategy, Accountability, or Cause and Effect map. Your current position is indicated by a red outline on the Map Navigator.
master data model A data model that exists independently and has multiple queries that reference it as a source. When you use a master data model, the text Locked Data Model is displayed in the Content pane of the Query section. This means that the data model is linked to the master data model displayed in the Data Model section, which may be hidden by an administrator.
MDX (multidimensional expression) The language used to give instructions to OLE DB for OLAP-compliant databases (MS Plato), as SQL is the language used for relational databases. When you build the OLAPQuery section's Outliner, Intelligence Clients translate your requests into MDX instructions. When you process the query, MDX is sent to the database server. The server returns a collection of records to your desktop that answer your query.
measures Numeric values in an OLAP database cube that are available for analysis. Measures may be margin, cost of goods sold, unit sales, budget amount, and so on.
member A discrete component within a dimension. A member identifies and differentiates the organization of similar units. For example, a time dimension might include such members as Jan, Feb, and Qtr1.
member list A named group that references members, functions, or other member lists within a dimension. A member list can be system- or user-defined.
metadata A set of data that defines and describes the properties and attributes of the data stored in a database or used by an application. Examples of metadata are dimension names, member names, properties, time periods, and security.


metric A numeric measurement computed from your business data. Metrics help you assess the performance of your business and analyze trends in your company. For immediate and intuitive understanding, Enterprise Metrics metrics are displayed visually in charts.
MIME Type (Multipurpose Internet Mail Extension) An attribute that describes the format of data in an item, so that the system knows which application to launch to open the object. A file's MIME type is determined either by the file extension or by the HTTP header. Plug-ins tell browsers what MIME types they support and what file extensions correspond to each MIME type.
minireport A component of a report that includes layout, content, hyperlinks, and the actual query or queries to load the report. Each report can include one or more minireports.
missing data A marker indicating that data in the labeled location either does not exist, contains no meaningful value, or was never entered.
model In Shared Services, a file or string of content containing an application-specific representation of data. Models are the basic data managed by Shared Services. Models are of two types: dimensional hierarchies and nondimensional application objects. Dimensional hierarchies include information such as entities and accounts. Nondimensional application objects include security files, member lists, calculation scripts, and web forms.
multidimensional database A method of organizing, storing, and referencing data through three or more dimensions. An individual value is the intersection of a point for a set of dimensions.
multithreading A client-server process that enables multiple users to work on the same applications without interfering with each other.
native authentication The process of authenticating a user ID and password from within the server or application.
note Additional information associated with a box, measure, scorecard, or map element.
null value A value that is absent of data. Null values are not equal to zero.
OLAPQuery section A document section that analyzes and interacts with data stored in an OLAP cube. When you use Intelligence Clients to connect to an OLAP cube, the document immediately opens an OLAPQuery section. The OLAPQuery section displays the structure of the cube as a hierarchical tree in the Catalog pane.
online analytical processing (OLAP) A multidimensional, multiuser, client-server computing environment for users who analyze consolidated enterprise data in real time. OLAP systems feature drill-down, data pivoting, complex calculations, trend analysis, and modeling.
Open Catalog Extension (OCE) files Files that encapsulate database connection information. OCE files specify the database API (ODBC, SQL*Net, and so on), database software, the network address of the database server, and your database user name. Administrators create and publish OCE files.
origin The intersection of two axes.
page A display of information in a grid or table, often represented by the Z-axis. A page can contain data from a single field, derived data from a calculation, or text.
page member A member that is displayed on the page axis.
palette A JASC-compliant file with an extension of PAL. Each palette contains 16 colors that complement each other and can be used to set the color elements of a dashboard.
performance indicator An image file used to represent measure and scorecard performance based on a range you specify; also called a status symbol. You can use the default performance indicators or create an unlimited number of your own.
period A time interval that is displayed along the x-axis of a chart. Periods might be days, weeks, months, quarters, or years.
personal pages Your personal window to information in the repository. You select what information to display, as well as its layout and colors.
personal recurring time events Reusable time events that are accessible only to the user who created them.
personal variable A named selection statement of complex member selections.


perspective A category used to group measures on a scorecard or strategic objectives within an application. A perspective can represent a key stakeholder (such as a customer, employee, or shareholder/financial) or a key competency area (such as time, cost, or quality).
pie chart A chart that shows one data set segmented in a pie formation.
pinboard One of the three data object display types. Pinboards are graphics, composed of backgrounds and interactive icons called pins. Pinboards require traffic lighting definitions.
pins Interactive icons placed on graphic reports called pinboards. Pins are dynamic. They can change images and traffic lighting color based on the underlying data values and analysis tools criteria.
plot area The area bounded by the X, Y, and Z axes; for pie charts, the rectangular area immediately surrounding the pie.
predefined drill paths Paths that enable you to drill directly to the next level of detail, as defined in the data model.
presentation A playlist of Web Analysis documents. Playlists enable reports to be grouped, organized, ordered, distributed, and reviewed. Presentations are not reports copied into a set. A presentation is a list of pointers referencing reports in the repository.
primary measure A high-priority measure that is more important to your company and business needs than many other measures. Primary measures are displayed in the Contents frame and have Performance reports.
private application An application for the exclusive use of a product to store and manage Shared Services models. A private application is created for a product during the registration process.
Production Reporting A specialized programming language for data access, data manipulation, and creating Production Reporting documents.
property A characteristic of an object, such as size, color, or type.
proxy server A server that acts as an intermediary between a workstation user and the Internet to ensure security.
public job parameters Reusable, named job parameters created by an administrator and accessible to users who have the requisite access privileges.
public recurring time events Reusable time events created by an administrator and accessible through the access control system.
range A set of values that includes an upper and lower limit, and the values that fall between the limits. A range can consist of numbers, amounts, or dates.
reconfigure URL A URL used to reload servlet configuration settings dynamically when a user is already logged in to the Workspace.
recurring time event An event that specifies a starting point and the frequency for running a job.
relational database A database that stores its information in tables related, or joined, to each other by common pieces of information called keys. Tables are subdivided into column fields that contain related information. Column fields have parents and children. For example, the Customer table may have columns including Name, Address, and ID number. Each table contains row records that describe information about a singular entity, object, or event, such as a person, product, or transaction. Row records are segmented by column fields. Rows contain the data that you retrieve from the database. Database tables are linked by joins. (See also join.)
report footer See footer.
report header See header.
report object A basic element in report designs. Report objects have specific properties that define their behavior or appearance. Report objects include text boxes, grids, images, and charts.
Reports section A dynamic, analytical report writer that provides users with complex report layouts and easy-to-use report-building tools. Pivot tables and charts can be embedded in a report. The report structure is divided into group headers and body areas, with each body area containing a table of data. Tables are created with dimension columns and fact columns. These tables are elastic structures. Multiple tables can be ported into each band, each originating from the same or different result sets.


request line A line that holds the list of items requested from the database server and that will appear in the user's results.
request line items Columns listed in the request line.
resources Objects or services that the system manages. Examples of a resource include a role, user, group, file, job, publisher service, and so on.
result A value that an application collects for measures. If you have the required permissions, you can use the Result Collection report to enter or modify measure results.
result frequency The algorithm used to create a set of dates for either the collection of data (collection frequency) or the display of data (result frequency). The result frequency's algorithm is defined by major type (for example, weekly or monthly), minor type (for example, first, last, last Friday, or 5th day of period), and interval (for example, every one, every two, or every five).
Results section A section in an Interactive Reporting document that contains the dataset derived from a query. Data is massaged in the Results section for use in the report sections.
role A construct that defines the access privileges granted in order to perform a business function; for example, the job publisher role grants the privilege to run or import a job.
row heading A report heading that lists members down a report page. The members are listed under their respective row names.
RSC services The services that are configured with the Remote Service Configurator. They include Repository Service, Service Broker, Name Service, Event Service, and Job Service.
scale The range of values on the Y axis of a chart.
scale code A specification of how an individual metric or minireport field is scaled; it may be displayed in thousands, or multiplied by 100 in conjunction with a percent format.
schedule A specification of the job that you want to run, as well as the time and the job parameter list for running the job.
score The level at which specified targets are being achieved. It is usually expressed as a percentage of the target for a given time period.
scorecard A business object used to represent the progress of an employee, strategy element, or accountability element toward specific goals. Scorecards ascertain this progress based on the data collected for each measure and child scorecard you add to the scorecard.
scorecard report A report that presents the results and detailed information about scorecards attached to employees, strategy elements, and accountability elements.
secondary measure A low-priority measure that is less important to you than primary measures. Secondary measures do not have Performance reports but can be used on scorecards and to create dimension measure templates.
Section pane A pane that lists all the sections that are available in the current Intelligence Client document.
security agent A Web access management solutions provider employed by companies to protect Web resources; also known as a Web security agent. The Netegrity SiteMinder product is an example of a security agent.
security platform A framework enabling Hyperion applications to use external authentication and single sign-on using the security platform driver.
security rights Rights defined by a user's data access permissions and activity-level privileges, as explicitly defined for a user and as inherited from other user groups.
services Resources that provide the ability to retrieve, modify, add, or delete business items. Some services are Authorization, Authentication, and Global Service Manager (GSM).
servlet A piece of compiled code executable by a Web server.
Servlet Configurator A software utility for configuring all locally installed servlets.
shortcut A pointer to an actual program or file that is located elsewhere. You can open the program or file through the shortcut, if you have permission.


shortcut menu A menu that is displayed when you right-click a selection, an object, or a toolbar. A shortcut menu lists commands pertaining only to that screen region or selection.
sibling A child member at the same generation as another child member and having the same immediate parent. For example, the members Florida and New York are both children of East and siblings of each other.
Single Sign-On A feature that enables you to access multiple Hyperion products after logging on just once using external credentials.
SmartCut A link to an item in the repository in the form of a special URL.
snapshot Read-only data from a specific point in time. See snapshot report.
snapshot report A report that has been generated and that stores static data. Any subsequent change of the data in the data source does not affect the report content. A snapshot report is portable and can be stored on the network, locally, or e-mailed. See snapshot.
sort To reorder or rank result sets in ascending or descending order.
sort order An indicator specifying the method by which you want your data to be presented. Data is typically shown in one of two sort orders. Ascending sort order presents data from lowest to highest, earliest to latest, first to last, A to Z, and so on. Descending sort order presents data from highest to lowest, latest to earliest, last to first, Z to A, and so on.
SPF files Printer-independent files created by a Production Reporting server that contain a representation of the actual formatted report output, including fonts, spacing, headers, footers, and so on.
spreadsheet One of the three data object display types. Spreadsheets are tabular reports of rows, columns, and pages.
SQL spreadsheet A data object that displays the result set of a SQL query.
stacked charts Charts in which the categories are viewed on top of one another for visual comparison. This type of chart is useful for subcategorizing within the current category. Stacking can be used from the Y and Z axis in all chart types except pie and line. When stacking charts, the Z axis is used as the Fact/Values axis.
Start in Play The quickest method for creating a Web Analysis document. The Start in Play process requires you to specify a database connection, then assumes the use of a spreadsheet data object. Start in Play uses the highest aggregate members of the time and measures dimensions to automatically populate the rows and columns axes of the spreadsheet.
strategic objective (SO) A long-term goal defined for an organization, stated in concrete terms, whose progress is determined by measuring results. Each strategic objective is associated with one perspective in your application, has one parent, the entity, and is a parent to critical success factors or other strategic objectives. It also has measures associated with it.
Strategy map A detailed representation of how your organization translates its high-level mission and vision statements into lower-level, constituent strategic goals and objectives.
structure view A view that displays a topic as a list of component items, allowing users to see and quickly select individual data items. Structure view is the default view setting.
Structured Query Language The language used to give instructions to relational databases. When you build the Query section's Request, Limit, and Sort lines, Interactive Reporting translates your requests into SQL instructions.
subscribe To register an interest in an item or folder, in order to receive automatic notification whenever the item or folder is updated.
subset A group of members selected by specific criteria.
substitution variable A variable that acts as a global placeholder for information that changes regularly. You set the variable and a corresponding string value; the value can be changed at any time.


Summary chart A chart that is displayed at the top of each chart column in the Investigate Section and plots metrics at the summary level, meaning that it rolls up all Detail charts shown below in the same column. All colors shown in a stacked bar, pie, or line Summary chart also appear above each Drill button of the Detail charts and extend across the row, acting as the key.
super service A special service used by the startCommonServices script to start RSC services.
table The basic unit of data storage in a database. Database tables hold all user-accessible data. Table data is stored in rows and columns.
Table catalog A display of the tables, views, and synonyms to which users have access. Users drag tables from the Table catalog to the Content pane to create data models in the Query section.
Table section The section used to create tabular-style reports. It is identical in functionality to the Results section, including grain level (table reports are not aggregated). Other reports can stem from a Table section.
target The expected result for a measure for a specified period of time, such as a day, quarter, or month. You can define multiple targets for a single measure.
time events Triggers for execution of jobs.
time scale A scale that enables you to see the metrics by a specific period in time, such as monthly or quarterly.
token An encrypted identification of one valid user or group existing on an external authentication system.
toolbar A series of shortcut buttons providing quick access to the most frequently used commands.
top and side labels In the Pivot section, the column and row headings on the top and sides of the pivot. These define categories by which the numeric values are organized.
top-level member A dimension member at the top of the tree in a dimension outline hierarchy, or the first member of the dimension in sort order if there is no hierarchical relationship among dimension members. The top-level member name is generally the same as the dimension name if a hierarchical relationship exists.
trace level A means of defining the level of detail captured in the log file.
traffic lighting Color-coding of report cells or pins based on a comparison of two dimension members or on fixed limits. Traffic lighting definitions are created using the Web Analysis Traffic Light Analysis Tool.
transparent login A mechanism that enables users who have been previously authenticated by external security criteria to log in to a Hyperion application, bypassing the login screen.
trend How the performance of a measure or scorecard has changed since the last reporting period or a date that you specify.
trusted password A password that enables users who have been previously authenticated in another system to have access to other applications without reentering their passwords.
trusted user A user authenticated by some mechanism in the environment.
Uniform Resource Locator The address of a resource on the Internet or an intranet.
variable A value that can be modified when you run a report. String variables are useful for concatenating two or more database columns. Numeric variables can calculate values based on other values in the database. Encode variables are string variables that contain nondisplay and other special characters.
variable limits Limits that prompt users to enter or select limit values before the queries are processed on the database.
Web server Software or hardware hosting intranet or Internet Web pages or Web applications. This term often refers to the Interactive Reporting servlets' host, because in many installations the servlets and the Web server software reside on a common host. This configuration is not required, however; the servlets and the Web server software may reside on different hosts.
weight A value assigned to an item on a scorecard that indicates the relative importance of that item in the calculation of the overall scorecard score. The weighting of all items on a scorecard accumulates to 100%. For example, to recognize the importance of developing new features for a product, the measure for New Features Coded on a developer's scorecard would be assigned a higher weighting than a measure for Number of Minor Defect Fixes.

ws.conf A configuration file for Windows platforms.

wsconf_platform A configuration file for UNIX platforms.

Y axis scale The range of values on the Y axis of the charts displayed in the Investigate Section. You can use a unique Y axis scale for each chart, the same Y axis scale for all Detail charts, or the same Y axis scale for all charts in the column. Often, using a common Y axis scale improves your ability to compare charts at a glance.

Zero Administration A software tool that identifies the version number of the most up-to-date plug-in on the server.

zoom A feature that sets the magnification of a report. The report can be magnified to fit the whole page, to fit the page width, or by a percentage based on 100%.

ZoomChart A feature that makes it easy to view detailed information by enlarging a chart displayed on a page in the Monitor or Investigate Section. Zooming in on a chart enables you to see detailed numeric information on the metric displayed in the chart. You can click the + (plus sign) in the lower right corner of the chart, or right-click anywhere on the chart, to enlarge it.
