
Case Analyzer

Version 5.2.1

Case Analyzer is a FileNet P8 process component that monitors and analyzes case and
workflow business processes. Case Analyzer collects data from event logs and audit logs and
stores the data in the Case Analyzer store. OLAP cubes are generated from this data, and
business process analytic reports are produced from the multidimensional information in the
OLAP cubes.
In the Content Platform Engine environment, multiple object stores in a single database are
supported. Similarly, multiple Case Analyzer stores are allowed, with each Case Analyzer store
dedicated to an object store. The Case Analyzer store and object store are identified and defined
by database connection and schema name.
The database connection is an object that represents the JDBC data source connection to the
database. The database connection enables object stores and isolated regions to share a database.
Hence, within a single database, you can have multiple Case Analyzer stores with corresponding
object stores.
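As a rough illustration of this arrangement (not Case Analyzer product code), the following Java sketch looks up a shared JDBC data source by JNDI name and opens a connection. The JNDI name jdbc/CAStoreDS is a made-up placeholder, and the lookup assumes the code runs inside an application server where the data source is registered.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceLookup {
        public static void main(String[] args) throws Exception {
            // Look up the JDBC data source by its JNDI name (placeholder).
            // A database connection object in the FileNet P8 domain points
            // at a data source like this one.
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/CAStoreDS");

            // Several Case Analyzer stores and object stores can share this
            // one data source; each store is distinguished by its schema
            // name within the database.
            try (Connection conn = ds.getConnection()) {
                System.out.println("Connected: " + conn.getMetaData().getURL());
            }
        }
    }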

Case Analyzer can be configured to service multiple Case Analyzer stores


Through Administration Console for Content Platform Engine, you can do the following actions:

Create a Case Analyzer store and specify the store name, store type, database connection,
and schema name.
Configure Case Analyzer for processing of production or simulation data
Schedule event pruning and publishing intervals
Configure OLAP database integration to specify OLAP database host, name, user name,
and OLAP connector host
Use the Process Task Manager to do the following tasks:
Specify and configure Case Analyzer settings to process workflow event logs for isolated
regions
Define data fields so that you can use the values of data fields in Case Analyzer reports.
Manage the Case Analyzer store by processing OLAP cubes.
Manage the Case Analyzer store by initializing the store, pruning a region, and pruning
events.

Case Analyzer store security requirements


To administer a Case Analyzer store, you must be a GCD administrator (gcd_admin). For more
information about GCD administrative privileges, see GCD administrator.

Case Analyzer publishing in a high availability environment


Case Analyzer publishing is supported in a Content Platform Engine high availability
environment. The Case Analyzer publishing service runs as a background process in Content
Platform Engine and handles multiple Case Analyzer stores for each server instance. Content
Platform Engine handles the load balancing that is provided through the web server cluster
environment.

Creating a Case Analyzer store


Create the Case Analyzer store and specify the store properties.

Configuring the Case Analyzer store


Configure the Case Analyzer store properties to specify the source for workflow event data.

Compress the Case Analyzer store


Use the Case Analyzer compression wizard to compress the Case Analyzer store. You can use
the Case Analyzer compression wizard only on Microsoft SQL Server databases.

Case Analyzer

Version 5.1.0

You can use Case Analyzer to monitor and analyze case and business processes. Case Analyzer collects
events from the Process Engine event logs and the Content Engine audit log. Case Analyzer generates chart-based
statistical reports from active case and workflow data, as well as from historical data.
Case Analyzer uses OLAP (On-Line Analytical Processing) technology for fast analysis of multidimensional
information, which enables you to drill down from a summary view to details and to interactively explore case and
business process data from different perspectives.
The following diagram shows the architecture of Case Analyzer and how data flows from the Process
Engine and Content Engine servers to Case Analyzer.

Event dispatchers retrieve events from the Process Engine event logs and the Content Engine audit log. A
dispatcher thread is created for each Process Engine event log and for the Content Engine audit log.
Publisher threads process events from the logs and publish the analytical results to the Case
Analyzer database. You can configure the number of publisher threads.
At the end of the publishing interval, the fact tables in the Case Analyzer database are updated with the raw
statistical data from the processed events. The statistical data in the fact tables is used to build the OLAP
cubes in the Case Analyzer OLAP database. The OLAP cubes provide the data required to generate Cognos
Business Intelligence reports or Excel charts for the user.
The Case Monitor Dashboard uses the data from the Case Analyzer database to monitor case and workflow
events.
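The dispatcher/publisher threading described above can be sketched with ordinary Java concurrency primitives. This is an illustrative model of the data flow only, with made-up log names and console output standing in for fact-table writes; it is not the product's implementation.

    import java.util.concurrent.*;

    public class DispatcherPublisherSketch {
        public static void main(String[] args) {
            BlockingQueue<String> events = new LinkedBlockingQueue<>();

            // One dispatcher thread is created per source log (made-up names).
            ExecutorService dispatchers = Executors.newFixedThreadPool(2);
            for (String log : new String[] {"PE event log 0", "CE audit log"}) {
                dispatchers.submit(() -> {
                    for (int i = 0; i < 3; i++) {
                        events.add(log + " event " + i); // retrieve and enqueue
                    }
                });
            }

            // The number of publisher threads is configurable in the product;
            // here they drain the queue and stand in for fact-table updates.
            int publisherThreads = 2;
            ExecutorService publishers = Executors.newFixedThreadPool(publisherThreads);
            for (int p = 0; p < publisherThreads; p++) {
                publishers.submit(() -> {
                    try {
                        String e;
                        while ((e = events.poll(1, TimeUnit.SECONDS)) != null) {
                            System.out.println("publish -> fact tables: " + e);
                        }
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            dispatchers.shutdown();
            publishers.shutdown();
        }
    }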

Creating a Case Analyzer store


Version 5.2.1

Create the Case Analyzer store and specify the store properties.

Procedure

To create the Case Analyzer store and specify the properties, start the Administration Console for Content
Platform Engine and log in as the gcd_admin user.
1. Start the Case Analyzer store wizard in the administration console.
   a. In the domain navigation pane, select the Case Analyzer folder.
   b. Right-click the Case Analyzer folder and click New to start the wizard.
2. Complete the Case Analyzer store wizard steps.

Parent topic: Case Analyzer

Configuring the Case Analyzer store


Version 5.2.1

Configure the Case Analyzer store properties to specify the source for workflow event data.

Procedure
To access and configure the Case Analyzer store properties:
1. Start the Administration Console for Content Platform Engine and log in as the gcd_admin user.
2. In the domain navigation pane, select Case Analyzer. To access the Case Analyzer store properties,
   expand Case Analyzer and select a Case Analyzer store.
3. On the General tab, enter the required information.
4. Continue through the Case Analyzer store tabs to complete the configuration of the Case
   Analyzer store.

Parent topic: Case Analyzer

Compress the Case Analyzer store


Version 5.2.1

Use the Case Analyzer compression wizard to compress the Case Analyzer store. You can use the Case
Analyzer compression wizard only on Microsoft SQL Server databases.

About this task


Important: The Case Analyzer compression wizard is not supported on operating systems such as AIX and
Linux. However, there is an alternate procedure that you can use to compress the Case Analyzer database
when Content Platform Engine server is running on these types of operating systems. For more information,
see the following technical support document: http://www.ibm.com/support/docview.wss?uid=swg27046421.

As the fact tables in the Case Analyzer store grow over time, they require more disk space and longer cube
processing times. Compressing the Case Analyzer store reduces the fact table sizes by aggregating, or "rolling
up," measures across common dimensional values, with a resulting loss of information.
For example, consider a system that contains insurance claim data.

Table 1. Sample insurance claims data

Date      Claim Type (dimension)   Claim Number (dimension)
1-2-12    Homeowners               18275
1-5-12    Auto                     67251
1-26-12   Auto                     36185
2-22-12   Auto                     47477
4-13-12   Auto                     92487
4-28-12   Auto                     37530
5-15-12   Homeowners               88357

Compressed by month and claim type, the data is aggregated by month. The precise date and actual claim
number information is lost.

Table 2. Sample insurance data that is aggregated by month

Date         Claim Type (dimension)   Claim Number (dimension)
Jan 2012     Homeowners               <unknown>
Jan 2012     Auto                     <unknown>
Feb 2012     Auto                     <unknown>
April 2012   Auto                     <unknown>
May 2012     Homeowners               <unknown>
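At the database level, this kind of roll-up is essentially a GROUP BY over the fact data. The following Java sketch runs such an aggregation against a hypothetical claims_fact table; the table, columns, connection URL, and credentials are all illustrative, not the actual Case Analyzer schema, and it assumes the Microsoft SQL Server JDBC driver is on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RollupExample {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details.
            String url = "jdbc:sqlserver://dbhost:1433;databaseName=CASTORE";

            // Grouping by month and claim type keeps the aggregate counts but
            // discards the precise dates and claim numbers, which is exactly
            // the information that compression trades away.
            String sql =
                "SELECT YEAR(claim_date) AS yr, MONTH(claim_date) AS mo, " +
                "       claim_type, COUNT(*) AS claim_count " +
                "FROM claims_fact " +
                "GROUP BY YEAR(claim_date), MONTH(claim_date), claim_type";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.printf("%d-%02d %s: %d claims%n",
                        rs.getInt("yr"), rs.getInt("mo"),
                        rs.getString("claim_type"), rs.getInt("claim_count"));
                }
            }
        }
    }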

Procedure

1. Before you start the Case Analyzer store compression, do the following steps:
   a. Back up the Case Analyzer store. Compressing the Case Analyzer store cannot be undone or
      canceled.
   b. Set the Microsoft SQL Server database timeout setting to unlimited (a programmatic
      alternative is sketched after this procedure):
      i. Open Microsoft SQL Server Management Studio.
      ii. Navigate to the database instance. Right-click and select Properties.
      iii. On the Connections tab, make a note of the current setting for the Query time-out,
           and then set it to 0 (unlimited).
      iv. Click OK.
      v. Close Microsoft SQL Server Management Studio.

2. Set the file path for the JDBC driver in the cacompression.bat file:
   a. Download the Microsoft SQL Server JDBC driver.
   b. Edit the cacompression.bat file, and set the JDBC driver JAR file path:

      set JDBC_DRIVER=installation_directory\sqljdbc_4.0\enu\sqljdbc.jar

      For example:

      set JDBC_DRIVER=c:\sqljdbc_4.0\enu\sqljdbc.jar

3. Run the cacompression.bat file that is in the default path:

      C:\Program Files\IBM\FileNet\ContentEngine\tools\PE\cacompression.bat Case_Analyzer_store_name

   Important: When you issue the cacompression command to compress the Case Analyzer store
   against a workflow system in a tenant domain of a multi-tenant environment, you must specify
   the N tenant_domain_name parameter in the command. To specify this parameter, you must be a
   member of the workflow administration master group. This group is in the master domain directory
   service provider and has permission to run workflow system administrative tools in the tenant domain
   in which the group is designated a master group.
4. Log in as gcd_admin.
5. Enter the following information in the Connect to Database window.

Table 3. Case Analyzer database connection properties

Database Type
    SQL Server is the only valid database type.
Database Server
    Server name of the Case Analyzer store.
Database Instance
    Case Analyzer database instance name. For the default instance, leave this field blank.
Database Name
    Name of the Case Analyzer database.
Database Port
    Port number of the Case Analyzer database server.
Schema Name
    Schema name for the Case Analyzer store.
Database User Name
    Case Analyzer database user name.
Database User Name Password
    Password for the Case Analyzer database user name.

6. Set the Case Analyzer compression intervals.


   a. Select one or more time intervals, and specify the start and end dates. You cannot compress
      the current month data.
      You can choose to compress data by monthly or daily intervals, or a combination of the two.
      For example, if you have data that ranges from the date of July 16, 2008, to the current date
      of November 12, 2012, you can compress the older data (from 2008 through 2011) into
      monthly intervals. More recent data (from January 2012 to October 2012) can be compressed
      into daily intervals. The current month data (November 1, 2012, to November 12, 2012)
      remains uncompressed. The following table demonstrates the example time intervals.

      Table 4. Compression interval example

      Interval   Start Date     End Date
      Month      Jul 16, 2008   Dec 31, 2011
      Day        Jan 1, 2012    Oct 31, 2012

   b. Modify the Temp Directory if necessary. The temporary directory must have available space
      equivalent to the size of the current Case Analyzer store.
   c. Click Next.
7. Select the dimensions to be compressed. You can compress dimensions for user-defined data fields
   only. By default, only dimensions that have more than 10,000 rows are shown.
   a. Click Show all dimensions to display all dimensions.
   b. Click Next.
8. Review the compression settings on the summary page.
   a. Click Back to make setting changes.
   b. Click Next to start the compression process.
   Important: You can stop the compression process after it is started; however, if you do so, you must
   restore the Case Analyzer store from a backup.
9. Click Finish when the process is complete.
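For step 1b, the Query time-out field on the Connections tab corresponds to the SQL Server remote query timeout option, so the same change can be made programmatically. The following Java sketch is an alternative under stated assumptions, not part of the documented procedure: it requires the Microsoft SQL Server JDBC driver on the classpath, an administrative login, and the placeholder connection details shown.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SetQueryTimeout {
        public static void main(String[] args) throws SQLException {
            // Placeholder server and credentials; use an administrative login.
            String url = "jdbc:sqlserver://dbhost:1433";
            try (Connection conn = DriverManager.getConnection(url, "admin_user", "password");
                 Statement stmt = conn.createStatement()) {
                // 'remote query timeout' backs the Query time-out field on the
                // Connections tab of the server properties; 0 means unlimited.
                stmt.execute("EXEC sp_configure 'remote query timeout', 0");
                stmt.execute("RECONFIGURE");
            }
        }
    }

Remember to restore the original value that you noted in step 1b the same way after the compression completes.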

Case Analyzer event logs

Version 5.2.1

You can configure the Case Analyzer store to process specific isolated regions or event logs. Configure the
settings on the Workflow Event Log Source tab. The default setting is to process all events from all isolated
regions.
To specify which isolated regions or event logs are to be processed by the Case Analyzer store, do the
following steps:
1. In the navigation pane, select a Case Analyzer store.
2. On the Workflow Event Log Source tab, choose one of the two options:

   Process all events
       All of the event logs from all isolated regions for the Case Analyzer store are processed.
   Process events from specific regions and event logs
       You can choose the event logs of an isolated region to process.
       o Select one or more isolated regions.
       o Select <ALL EVENT LOGS> to process all of the event logs for an isolated region.
       o To specify individual event logs for a region, clear <ALL EVENT LOGS> and select the
         individual event logs.
       Important: After you clear <ALL EVENT LOGS> and click Apply, you cannot return and select
       <ALL EVENT LOGS>. If you later decide that you want all of the event logs for the region to be
       processed, you can return and select each event log individually.

If you change a region or event log from not selected to selected, only workflows that are run after the
change are processed by Case Analyzer for that region or event log.

If you configure a new isolated region and you want Case Analyzer to process events for that region,
use the Event Log Configuration window to specify processing for that isolated region.
If you configure a new event log for an isolated region that is configured for Case Analyzer processing
and <ALL EVENT LOGS> is selected, the events in the new event log are processed. Otherwise, if
specific event logs are selected, use the Event Log Configuration window to specify the new event log.
Important: After you configure Case Analyzer to process events from specific regions, you cannot undo this
selection by selecting Process all events. However, you can configure Case Analyzer to process all events by
selecting <ALL EVENT LOGS> for every region.

GCD administrator
Version 5.2.1

A directory service account that has Full Control access to the Content Platform Engine domain object.
GCD administrator

Unique identifier
gcd_admin
Description
The gcd_admin is able to create, modify, and delete Content Platform Engine domain resources.
The gcd_admin account must reside in the directory service realm specified in Configuration
Manager's Configure LDAP task.
A GCD administrator can grant Full Control rights to additional users and groups, thereby making them
GCD administrators as well. Being a GCD administrator does not automatically make you
an object_store_admin, which is assigned on the object store's own property sheet.
Log on to IBM Administration Console for Content Platform Engine as gcd_admin in order to:
Create the GCD by launching the Configure New Domain Permissions wizard the first time
you start IBM Administration Console for Content Platform Engine to establish the FileNet
P8 domain.
Carry out administrative tasks for the FileNet P8 domain.
Minimum required permissions
Use IBM Administration Console for Content Platform Engine to grant Full Control access to
the Content Platform Engine domain object

Managing the Case Analyzer store


Version 5.2.1

You can perform the following actions on the Case Analyzer store: initialize a store, process the cubes, prune
events, prune isolated region data, and stop and restart a Case Analyzer store.

Table 1. Actions you can perform to manage the Case Analyzer store

Resetting the Case Analyzer store
    Initialize a Case Analyzer store with the Reset Database option.
Process cubes
    Update the OLAP cubes with the latest information stored in the fact tables.
Prune events
    Remove processed data from the Case Analyzer event table.
Prune isolated region data
    Remove all data that is related to a specific isolated region from the Case Analyzer store.
Taking the Case Analyzer store offline
    Stop and take a Case Analyzer store offline.
Bringing the Case Analyzer store online
    Start a Case Analyzer store and bring it online.

Remember: The Case Analyzer store files can grow large over time. You can reduce the size of the Case
Analyzer store by aggregating measures across common dimensional values. For more information,
see Compress the Case Analyzer store.

Managing Case Analyzer data fields


Version 5.2.1

To use the values of case and workflow data fields in Case Analyzer reports, you must identify which data fields
will be exposed, specify their properties as dimensions or measures, and specify the appropriate OLAP cube to
store the data. The values for case and workflow data fields are stored in the Case Analyzer store.
Before you make a case or task property or a data field available to those who will use this information to
analyze workflows and cases, the property or field must be captured in the workflow system event log or object
store audit log.
To make a data field available from workflows, do the following steps:
In Process Designer, define a data field in a workflow definition. For more information, see Workflow
properties - data fields.
In the administration console, create a database field in the event log for that data field. For more
information, see Managing user database fields.
To make case or task properties available from Case Manager systems, see Integrating IBM case analytics
tools.
If a data field that you want to make available occurs in both the event log and audit log, then you need to
create only a single Case Analyzer data field to retrieve the data values from both logs. To enable this, the data
field name and type must be the same in the event log and audit log. For example, if a data field named
LoanAmt of data type float is exposed in the event log and audit log, then you will create a single Case
Analyzer data field of type float named LoanAmt to pull the data from both sources to the Case
Analyzer database.
Data fields can be created as dimensions or measures:

Dimensions provide meaningful statistical information about an item of business significance. A large
dimension (a dimension with many members) is hard for a user to comprehend unless the dimension
provides meaningful data. For example, defining the social security number (SSN) as a dimension
results in a large number of dimension members, with little or no statistical value per member. On the
other hand, defining a dimension as the first three numbers of the SSN, which indicate the issuing
state, can provide meaningful groupings of statistical information where there are many workflow
events with different SSNs. Statistical analysis can then be performed on the resulting groups.

Any data field type can be a dimension. For data fields of type float, integer, and time, you have the
option of aggregating the data. For example, if a data field is an amount, you can categorize the
amount field into ranges of 0-10, 10-100, 100-1000, and above 1000 (see the sketch after this list).
Aggregating dimension data saves on storage space; if you choose not to aggregate the data, all the
values are stored as members in the dimension, which yields large dimensions.
Important: Large dimensions (even fewer than 64,000 members) can be problematic for Excel. Consider
a third-party application if Excel does not serve your purpose with large dimensions. Large dimensions
also increase the memory footprint of Analysis Services.

Measures provide an aggregate value for a data field, such as a sum or average. Because measures
are used for aggregation functions, only data fields of type integer or float can be created as
measures. The default aggregation function for the measure is Sum.
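To make the dimension-versus-measure distinction concrete, the following plain-Java sketch buckets a hypothetical LoanAmt field into the ranges mentioned above (an aggregated dimension) and applies Sum as the measure's aggregation function. It works on in-memory sample data and is not Case Analyzer API code.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class DimensionMeasureExample {
        // Bucket an amount into ranges, the way an aggregated dimension
        // collapses many raw values into a few members.
        static String bucket(double amount) {
            if (amount < 10) return "0-10";
            if (amount < 100) return "10-100";
            if (amount < 1000) return "100-1000";
            return "above 1000";
        }

        public static void main(String[] args) {
            // Hypothetical LoanAmt values from processed events.
            List<Double> loanAmts = List.of(5.0, 42.0, 250.0, 980.0, 12500.0);

            // Dimension: the range bucket. Measure: Sum of LoanAmt per bucket.
            Map<String, Double> sumByRange = loanAmts.stream()
                .collect(Collectors.groupingBy(
                    DimensionMeasureExample::bucket,
                    Collectors.summingDouble(a -> a)));

            sumByRange.forEach((range, sum) ->
                System.out.printf("%s: %.2f%n", range, sum));
        }
    }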

Adding data fields with the Data Field Wizard


You can add data fields to the Case Analyzer store. Data fields that are added for case management
activities are stored in the Content Platform Engine. Case Analyzer retrieves case data from
the Content Platform Engine and places it in the Case Analyzer store for generating OLAP cubes
and fact tables.

Modifying data fields


You can modify existing data fields in the Case Analyzer store.

Deleting data fields


At some point, you might decide to delete a user-defined data field, either because it is no longer
needed or it was created in error. You can delete one field at a time.

Importing data fields


Use this action to import data fields into a Case Analyzer database.

Exporting data fields


Use this action to export Case Analyzer data fields. For example, you can export data fields from a
development environment in preparation for importing them into a production environment.

Load balancing and farming


Version 5.2.1

You can use load balancers to manage client requests across all of the nodes in a FileNet P8 server farm.
Farming requires a mechanism to balance the load across all the nodes in a farm, and to redirect client
connections to surviving nodes in case of failure. This section summarizes the available load balancing options.
A number of hardware and software load-balancing products are available for server farm configurations,
including IBM, Oracle, F5 Big IP, and JBoss.

Table 1. Tested load-balancing/farming solutions

Vendor
    Oracle WebLogic Server clusters
    F5 Big IP
    IBM WebSphere Application Server Network Deployment clusters (formerly called server group)
    JBoss Application Server cluster (also called High Availability)

Important: Layer 7 load balancers are supported in FileNet P8, but the header or packet modification
capabilities of layer 7 load balancers have not been tested and should be used with caution.

WebSphere Application Server and WebLogic Server


Content Platform Engine is hosted on a Java application server. Both the Oracle and IBM Java
application servers have built-in capabilities for providing highly available web services. Each
application server product can configure a collection of server instances that function as a
single entity to provide an application to a user. Both Oracle WebLogic Server and IBM
WebSphere Application Server call this collection of server instances a cluster. Both products
function like a server farm.

JBoss Application Server


JBoss Application Server clusters do not use a separate administrative server. In a JBoss cluster, all
nodes grouped together in a partition are equivalent. Applications must be deployed on each node
individually unless the optional JBoss farm service is used, and individual nodes are started and
stopped independently. For details on the JBoss farm service, see the JBoss Application
Server documentation.

Load balancer support for FileNet P8


You can use different load-balancing strategies to manage requests to the Content Platform Engine,
clients, and the database.

Use the Case Analyzer reports


Version 5.2.1

The Case Analyzer preconfigured reports are organized to focus on the following areas of your system:

Case - In-progress and historical information about cases, such as the current number of cases, and
the average time to complete cases during a specified time period.

Task - In-progress and historical information about tasks, such as the current number of tasks, and the
average time to complete tasks during a specified time period.
Workflow - In-progress and historical information about workflows in your system, such as the current
number of workflows in the system, and the average time to complete workflows during a specified
time period.
Queue - In-progress and historical information about work items in various queues, such as the current
number of work items in each queue, and the number of work items completed during a specified time
period.
Step - In-progress and historical information about work items at various steps, such as the average
time spent to complete work at a step, and the percentage of work items taking each route from a step.
User - In-progress and historical information about each user, such as the average time to complete
work during a specified time period.

Using IBM Cognos Business Intelligence to display the reports


A set of preconfigured IBM Cognos Business Intelligence reports is provided with Case Analyzer to display
information in chart form. However, you must have the IBM Cognos Business Intelligence software installed
and the reports deployed on your system to use the reports. When you open a sample IBM Cognos Business
Intelligence Case Analyzer report, the data displayed is based on sample data to produce the charts; however,
you can customize the reports for your business analysis purposes. Refer to your IBM Cognos Business
Intelligence documentation for more information about creating and customizing these reports.

Using Microsoft Excel to display the reports


A set of preconfigured reports to use the Microsoft Excel pivot chart feature is provided to display information.
However, before you can view or modify the reports provided with the Case Analyzer Client software, you must
either install the Case Analyzer Client reports on your PC or have access to the installed reports on a network
or in an object store. See your system administrator to determine the proper procedure for your site. When you
open a sample Case Analyzer report in Excel, the data displayed is based on sample data. To display data from
your Case Analyzer database, click the Refresh button. When you refresh the display, dimension levels that do
not exist in your database are rolled up into the next level. See the Microsoft Excel online help for full details on
the use of pivot charts.
If you installed the Case Analyzer Client software locally, you can find these reports in the step folder under the
directory where you installed the product. If not, ask your Case Analyzer administrator for the location of the
reports.
Note: Refreshing a report using Excel 2000 where dimension levels do not exist results in an error. In this case,
you must create your own report.
Case Analyzer uses online analytical processing (OLAP) technology to provide the data for your charts. Before
modifying charts, you should familiarize yourself with the basic OLAP concepts and terminology.

Case-related reports
Case-related reports provide you with information about the status and processing of cases in your
system. A case can comprise tasks, content, processes, and views.

Task-related reports
Task-related reports provide you with information about the status and processing of tasks in your
system. The distinction between a case and a task is that a case can comprise one or more tasks. A
task is a process fragment, which can be a set of items to be completed.

Workflow-related reports
Workflow-related reports provide you with information about the status and processing of workflows in
your system. To make sense of the reports, it helps to understand the distinction between a workflow
and a work item. A workflow is a single instance of a workflow definition. It consists of one or more
work items. A work item is the smallest individual piece of a workflow.

Queue-related reports
Queue-related reports provide you with information about the status of work items in specific queues in
your system.

Step-related reports
Step-related reports provide you with information about the status of work items at specific steps in
your system.

User-related reports
User-related reports provide you with information about the status and processing of work items by
specific users in your system.

Case-related reports
Version 5.2.1

Case-related reports provide you with information about the status and processing of cases in your system. A
case can comprise tasks, content, processes, and views.
The reports in chart form show the case status as described in the following table.

Table 1. Case reports

Current Number Of Cases
    The number of cases currently in the system. The information is grouped by case definition.
Average Age of Current Cases
    The average age of all cases currently in the system. Age is computed from the time the case
    was launched.
Average Time Spent To Complete Cases During Time Period
    The average amount of time, in minutes, that it took to complete cases during the specified
    time period. The information is grouped by time and case definition.
Number Of Cases Created During Time Period
    The number of cases that were created in the system during the specified time period. The
    information is grouped by time and case definition.
Number Of Cases In Progress During Time Period
    The number of cases in progress for the specified time period. The information is grouped by
    time and case definition.
Number Of Cases Completed During Time Period
    The number of cases that were completed during the specified time period. The information is
    grouped by time and case definition.

Task-related reports
Version 5.2.1

Task-related reports provide you with information about the status and processing of tasks in your system. The
distinction between a case and a task is that a case can comprise one or more tasks. A task is a process
fragment, which can be a set of items to be completed.
The following table lists the preconfigured reports that provide the task data in chart form.

Table 1. Task reports

Current Number Of Tasks
    The number of tasks currently in the system. The information is grouped by task definition.
Average Age of Current Tasks
    The average age of all tasks currently in the system. Age is computed from the time the task
    was launched.
Average Time Spent to Complete Tasks During Time Period
    The average amount of time, in hours, that it took to complete tasks during the specified time
    period. The information is grouped by time and task definition.
Number Of Tasks Created During Time Period
    The number of tasks that entered the system during the specified time period. The information
    is grouped by time and task definition.
Number Of Tasks In Progress During Time Period
    The number of tasks in the system at the end of the specified time period. The information is
    grouped by time and task definition.
Number Of Tasks Completed During Time Period
    The number of tasks that were completed during the specified time period. The information is
    grouped by time and task definition.
Average Time In Each State For Current Tasks
    The average amount of time that current tasks are in each state.
    Working state indicates that the task is in progress.
    Wait state indicates that the task is dependent on another item or task to complete before the
    task can continue.
    Ready state indicates that the task is ready to begin; however, the task is waiting for either a
    process to start it automatically or for user intervention to manually start the task.
    Failed state indicates that the processing of the task stopped and requires user intervention
    for the task to continue towards completion.
Average Time In Each State For Tasks During Time Period
    The average amount of time that tasks have been in each state for a specified time period.
    Working state indicates that the task was in progress.
    Wait state indicates that the task was dependent on another item or task to complete before
    the task could continue.
    Ready state indicates that the task was ready to begin; however, the task was waiting for
    either a process to start it automatically or for user intervention to manually start the task.
    Failed state indicates that the processing of the task stopped and required user intervention
    for the task to continue towards completion.

Known issue: Case Analyzer is out of sync with IBM Case Manager after a test environment has
been reset.

Release notes

Abstract

Case Analyzer might not pick up new events after a reset of the Case Manager test
environment.
Content

In IBM Case Manager you can reset a test environment by using Case Manager Builder.
When you reset the test environment, the target object store in Content Engine and the
region data in Process Engine are reinitialized. However, Case Analyzer still
contains object store data and region data from events that occurred before the
environment reset. Therefore, Case Analyzer might not pick up new events after the
reset.
To correct this, you must reset the Case Analyzer database after the test environment is
reset in Case Manager. In Case Analyzer Version 5.0, the only way to reset the
database is to use a command-line script.
Complete these steps to reset the Case Analyzer database:
1. From the command line, navigate to your system's equivalent of the following
directory:
C:\Program Files\IBM\FileNet\Case Analyzer Engine\jpa\scripts\sqlserver

2. Run the following command:


%Systemroot%\SysWOW64\cscript.exe setupsqlserver.wsf

The command will rebuild the Case Analyzer relational and OLAP database.

Things to check when configuring Case Foundation Case Analyzer

Technote (troubleshooting)

Problem (Abstract)

Case Analyzer has several layers, so when problems occur, this guide provides several
places to check.
Symptom

PAAMO connection errors:

    CAPublisher 9d099aa0 [Trace] CA_OLAP, [ca_store] PAAMO::PAAMO-Connecting to
    :rmi://<ssas_hostname>:32771/FileNet.PA

OLAP cube processing errors:

    CAPublisher 87446338 [Error] FNRPE0911843017E [ca_store]
    filenet.eventexporter.ca.main.DTSRunner::processWIP - [FNRPE0911843017E] Error
    while processing WIP cubes

Cause

Case Analyzer is not processing events, or the OLAP cubes cannot be processed.

Resolving the problem

1. Verify the host name along with other CA and OLAP configuration parameters in CA
datastore configuration in Administrative Console for the Content Platform Engine
(ACCE).
2. Ensure that the Case Analyzer service is running on the Case Analyzer SSAS
(typically OLAP) server and that you can connect (telnet) to the Case Analyzer SSAS
server on port 32771. (A quick programmatic check is sketched after this list.)
3. Verify that the Case Analyzer database user exists and has all roles except the DENY
roles selected.
4. Verify that the Case Analyzer OLAP user is a domain account, has administrative
rights on the MSSQL Analysis server, and has a matching domain account on Case
Analyzer DB with all except DENY roles.
5. If the OLAP database does not exist, create a new one. Check the OLAP database
for the out-of-the-box cubes (for example, Work In Progress). If they do not exist, run the
setupolap batch script and specify the CA datasource name. After you enter the
command, it appears to hang (no output); this is actually a prompt for credentials,
so enter the CA user name and password.

6. Once the OLAP database exists and has the cubes, check the VMAE datasource
credentials and, if needed, change them to use the correct account. Manually process
the cubes to verify that things are working.
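For step 2, a quick programmatic alternative to telnet is a plain TCP connect test. The following Java sketch uses a placeholder host name; 32771 is the port from the trace messages above.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder host; replace with your Case Analyzer SSAS server.
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress("ssas_hostname", 32771), 5000);
                System.out.println("Port 32771 is reachable");
            }
        }
    }

If the connection fails, an exception is thrown, which indicates that the Case Analyzer SSAS server is not reachable on that port.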
