
Running Global Projects

AVEVA Solutions Ltd

Disclaimer
Information of a technical nature, and particulars of the product and its use, is given by AVEVA
Solutions Ltd and its subsidiaries without warranty. AVEVA Solutions Ltd and its subsidiaries disclaim
any and all warranties and conditions, expressed or implied, to the fullest extent permitted by law.
Neither the author nor AVEVA Solutions Ltd, or any of its subsidiaries, shall be liable to any person or
entity for any actions, claims, loss or damage arising from the use or possession of any information,
particulars, or errors in this publication, or any incorrect use of the product, whatsoever.

Copyright
Copyright and all other intellectual property rights in this manual and the associated software, and every
part of it (including source code, object code, any data contained in it, the manual and any other
documentation supplied with it) belongs to AVEVA Solutions Ltd or its subsidiaries.
All other rights are reserved to AVEVA Solutions Ltd and its subsidiaries. The information contained in
this document is commercially sensitive, and shall not be copied, reproduced, stored in a retrieval
system, or transmitted without the prior written permission of AVEVA Solutions Ltd. Where such
permission is granted, it expressly requires that this Disclaimer and Copyright notice is prominently
displayed at the beginning of every copy that is made.
The manual and associated documentation may not be adapted, reproduced, or copied, in any material
or electronic form, without the prior written permission of AVEVA Solutions Ltd. The user may also not
reverse engineer, decompile, copy, or adapt the associated software. Neither the whole, nor part of the
product described in this publication may be incorporated into any third-party software, product,
machine, or system without the prior written permission of AVEVA Solutions Ltd, save as permitted by
law. Any such unauthorised action is strictly prohibited, and may give rise to civil liabilities and criminal
prosecution.
The AVEVA products described in this guide are to be installed and operated strictly in accordance with
the terms and conditions of the respective license agreements, and in accordance with the relevant
User Documentation. Unauthorised or unlicensed use of the product is strictly prohibited.
First published September 2007
AVEVA Solutions Ltd, and its subsidiaries
AVEVA Solutions Ltd, High Cross, Madingley Road, Cambridge, CB3 0HB, United Kingdom

Trademarks
AVEVA and Tribon are registered trademarks of AVEVA Solutions Ltd or its subsidiaries. Unauthorised
use of the AVEVA or Tribon trademarks is strictly forbidden.
AVEVA product names are trademarks or registered trademarks of AVEVA Solutions Ltd or its
subsidiaries, registered in the UK, Europe and other countries (worldwide).
The copyright, trade mark rights, or other intellectual property rights in any other product, its name or
logo belong to its respective owner.

Contents

Introduction

Global Mode of Operation

Global Daemon
   Location of the Daemon
   Daemon Access Rights
   Daemon Buffer Size

Daemon Diagnostics
   Tracing
   Logging
      Diagnostic Level

Database Allocation
   Allocating Databases to Locations
   Checking Database Allocations
   De-allocating Databases
   De-allocating a Database from a Location
      admnew Files
      Using Areas in Global

Merging Databases
   Remote Merging on Non-Extract Databases
      Procedure for Merging Changes
   Merging Extract Databases

Transaction Audit Trail
   Writing to the Database
   Reading from the Database
      Program Initialisation
      Reading Command Data
   Following the Audit Trail
      As seen from the TRINCO
      As seen from the TROUCO
      As seen from the TROPER
   Audit Trail Dates and Counts
   Cancelled Commands
   Processing of Results and Messages
   Transaction Success and Failure Messages
      Scheduled Updates - Successes
      Scheduled Update - Failures
      Failed File Copies
   Reasons Other ADMIN Commands Can Fail
   Automatic Merging and Purging of a Transaction Database

Pending File

Changing the Hub
   Preparation for Changing the Hub
   Recovering from Change Hub Failure

Updates and Synchronisation
   Synchronisation
   Manual Updates
   Update and Timing Considerations
   Propagation of Picture and other Drawing-files
   Propagation of Final Designer, Schematics and Marine Hull Drawing Files
   Propagation of Inter-Database Macros
   Update Timings
   Transfer of Other Data
   Reverse Propagation Prevention

Deleting Databases

Database Recovery
   Recovering Secondary Databases
   Recovering Primary Databases
   Recovering Database Primary Locations
   Recovering the Global Database
   Transaction Database Management
      Renewing the Transaction Database
      Merging the Transaction Database
      Reconfiguring the Transaction Database at a Satellite

Recommendations for Reconfiguring (User dBs)

Copying Global Projects

Backing Up Global Projects

Using Extracts with Global Projects
   Using Extracts
      Extract Families
      Querying Extract Families
   Creating Master and Extract Databases
      Creating Master Databases
      Creating Extracts
      Creating Working Extracts
      Extract Numbers
      Reference Blocks
   Setting up an Extract Hierarchy
   Using DACs with Extracts
   Using Extracts in DESIGN
      Managing Extracts
      User Claims
      Extract Claims
      Command Syntax
      Extract Flush Commands Failing
      Relationship between Extract and User Claims
      How to Find Out What You Can Claim
      Flushing Changes
      Releasing Claims
      Issuing Changes
      Dropping Changes
      Refreshing an Extract
   Partial Operations
   Extract Sessions
      Merging Changes
   Deleting a Database that owns Extracts
   Variant Extracts
   Reasons Claims and Flushes can Fail

Off-line Locations
   Working Practices with Off-line Locations
   Change Primary to Offline location
   Change Primary from Offline location
   Deallocation from Offline location

Firewall Configuration
   Limiting Ports Used

Suggested Housekeeping Guidelines for Projects
   Introduction
   Dice
   Global
      Update Frequency
      Timing of Updates
      Checking Locations are Aligned
      Change Primary - Repair Process
      Risks of Aligning Databases Across Locations by File Copying
      Flushing/Issuing
      Transaction Database
      Daemon Log File
      admnew Files
   Session Management - Merging
   Database File Locks
      Background
      Avoiding Random File locks
      Locating and Closing Locked and/or Open Database Files
   Distributed Extract Hierarchy
   Project Administration Team
      ADMIN Lead
      Discipline SMEs (Subject Matter Experts)
      Global Satellite Co-ordinator

Appendix A: Project Setup Guidelines

Appendix B: Recovery from Reverse Propagation Errors
   Background - Propagation Process
   Identifying the Problem
   Querying Database Properties
      Automating Checks For Failure

Appendix C: Using Global to Distribute Catalogue Data

Appendix D: Example Macro for Collecting and Deleting Old Commands
1      Introduction
This document proposes a set of guidelines for the effective use of the AVEVA Global
product. The guidelines result from current working experience and may be amended in the
light of future experience. Global manages a project distributed over several different
geographical locations connected by a Wide Area Network (for example the Internet) and so
presents special situations for the administrator and engineering user, which the guidelines
address.
Note: References to 'Windows' in this document mean MS Windows 2000 or MS Windows
XP.
AVEVA Global can be used to enhance projects created in either the AVEVA Plant or
AVEVA Marine group of products - henceforth known as the base product in this
document.

1:1

12.0

Running Global Projects


Introduction

1:2

12.0

Running Global Projects


Global Mode of Operation

2      Global Mode of Operation


In standard projects, commands are processed one at a time, so that the next command
cannot begin until the previous one has finished. In principle, the state of the system is
therefore always known. In Global, remote commands are processed in parallel, and so the
next command may be initiated before the previous one has finished. This mode of
operation is called non-blocking, and its advantage in Global is to prevent a slow
long-transaction command from blocking the user. Its disadvantage is that the user needs to work
in a new way to exploit this parallel nature of Global.
If a remote command traversing the Global network becomes held up at a particular location
(for example due to a comms line fault) then, for most commands, the command is placed in
a transaction database at that location for later processing. A small number of commands,
known as kernel commands, bypass the transaction database and are stored in a pending
file for later processing. The use of the transaction database and the pending file means
that commands are guaranteed to complete, but some commands may not succeed. Some
may roll back, while others may just fail.
For further information about the transaction database, see Transaction Audit Trail, and
Transaction Database Management.

2:1

12.0

Running Global Projects


Global Mode of Operation

2:2

12.0

Running Global Projects


Global Daemon

3      Global Daemon
The Global daemon (sometimes referred to as the ADMIN daemon) is supplied with the
Global product, in the default install folder. It uses RPC, which is part of the standard
Windows software, and so no additional software has to be installed.
There must be one Global daemon running for each Project at a Location.
Installing the Global daemon is described in the Global Installation Guide; configuring and
starting the daemon is described in the Global User Guide.

3.1    Location of the Daemon


We recommend that the daemon is run on the file server, thus reducing the risk of it being
accidentally stopped by a user, which could result in missed updates.
The Global daemon can be run as a background service. This allows the program to persist
after a user logs out; and to start automatically when a machine is restarted. When the
daemon is run as a service, it must be run on the file server.

3.2    Daemon Access Rights


To enable updates to function correctly between locations, the user who starts the daemon
must have sufficient access rights to all project databases at the current location. For
example, a daemon running at Location A will need to have Read/Write access to all of the
databases at Location A.

3.3    Daemon Buffer Size


The Dabacon buffer size can be set in the GLOBALDAEMON module definition. The default
value is 2560000, but the Administrator may specify a larger or smaller value. Note
that the buffer size should be at least this value in projects where distributed Extracts are
being used.

The Dabacon buffer size can be changed by using the MODULE command. See the
Administrator Command Reference Manual for details.

4      Daemon Diagnostics
The Global daemon has the following types of diagnostic output:

•  Tracing
•  Logging

4.1    Tracing
Tracing can be switched on when you start the daemon. If you are running the Global
daemon as a service, add a line to the startup batch file singleds.bat to set the environment
variable DEBUG_ADMIND as follows:

DEBUG_ADMIND=1023
If you are not using the Global daemon as a service, you can set DEBUG_ADMIND from the
command line.
The value of the DEBUG_ADMIND variable determines the type of activities that are traced:
Value   Activity traced
0       Not used
1       Not used
2       Trace
4       Remote Procedure Calls
8       Thread Library
16      Systems DB Access
32      Dabacon Thread
64      Event Loop Thread
128     Operation Thread
256     Trans DB I/O
512     Not used
1024    Not used
2048    Dabacon Detail

These values are bit settings, so if you want to trace a combination of activities, you add the
above values together. For example, to trace Systems DB access and the Event Loop
thread only, you would set DEBUG_ADMIND as follows:


DEBUG_ADMIND=80
To enable tracing for all activities, you would set the DEBUG_ADMIND value to 3071. A
useful level of tracing for tracking commands is 896. Full tracing can be verbose and fill disk
space rapidly; the recommended value of 896 allows the administrator to gauge the number
of commands currently running through the system, which may help when bringing down a
daemon at a particular location. Further tracing may be required when investigating a
particular problem.
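For illustration, a minimal fragment of the singleds.bat start-up script, assuming the
recommended command-tracking level of 896; the daemon start-up line itself is
site-specific and is only sketched here as a comment:

   rem Set the trace level before the daemon starts
   set DEBUG_ADMIND=896
   rem ... existing daemon start-up command follows here ...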

4.2    Logging

It is beneficial to have the Daemon log setting activated, both for troubleshooting purposes
and to help the System Administrator to know how the Global daemons are functioning. The
diagnostics are activated by configuring the Global ADMIN comms log.
The Global ADMIN comms log is activated from Daemon>Daemon Settings. This will
display the Local Daemon Settings form. In the appropriate text boxes, enter the
Diagnostic Logfile name, the Diagnostic Level (see below), and finally Enable the
Diagnostic Logging using the drop-down list. (Note: If you use an environment variable in
the log file path, it must be defined in the daemon script or in the window from which you
started the daemon.)

4.2.1  Diagnostic Level

The number to be entered is the sum of the code numbers for the individual requirements
shown below:
Value   Logged output
0       None
1       Received summary
2       Received detail
4       Send summary
8       Send detail
16      Dabacon thread summary
32      Dabacon thread detail
64      Propagation thread summary
128     Propagation thread detail
255     Full logging
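For example, a hypothetical configuration that logs received summaries, send summaries
and propagation thread detail would use a Diagnostic Level of 1 + 4 + 128 = 133.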

Alternatively, log values can be selected by clicking the Define button.

The log files can be sent to the administering location at regular intervals.
The log file will get bigger over time. If you want to keep the log record but start a new file,
move the log file to another directory. The daemon checks the log file location every
15 minutes: it will keep writing to the moved log file until it notices the move, and then a new
log file will be generated.
Note: Logging does not capture the same data as tracing; for full debugging purposes the
trace facility provides much more comprehensive internal diagnostics.

5      Database Allocation

5.1    Allocating Databases to Locations

Before allocating databases, ensure that both daemons are running by selecting
Query>Global States>Communications or by issuing a Ping command.
ALLOCATE commands can be given in sequence without waiting for the first allocate to
finish. However, the same Allocate command should not be done twice, unless you are sure
that the allocation has failed and that there is no entry in the transaction databases at either
of the locations affected by the allocation.
While databases are being allocated to a location, you cannot add databases to MDBs
until all allocations have been completed, so it is advisable to check the progress of
allocations first.
It is advisable to use macro input for long lists of database allocations, which will most likely
be the case when the Global project is initially created; for example:

   ALLOCATE <db name> PRIMARY at /Satellite
   ALLOCATE <db name> at /Satellite

and so on for each allocation.
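As a concrete sketch, a hypothetical allocation macro might read as follows; the database
and location names are illustrative only:

   $* Allocate the piping and structural databases to the satellite
   ALLOCATE /PIPE-AREA1 PRIMARY at /Satellite
   ALLOCATE /PIPE-AREA2 at /Satellite
   ALLOCATE /STRUC-AREA1 at /Satellite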
Note: Once all allocations have been committed, it is worth checking that all commands are
complete, whether the command has been executed through the GUI, or as a
manual command. This is described in the next section.
If a de-allocation is in progress (see the DEALLOCATION command), then the allocation
will stall, and will not commence until the de-allocation is complete.

5.2    Checking Database Allocations

Once you have finished database allocation, you must check that the commands have been
completed. You can do this by selecting DB & Extracts from the Admin Elements form,
which displays a list of all the allocated databases at the location where dBs are being
allocated. Alternatively, you can check that the allocation has completed successfully from
the command line by listing the elements under the DBALL (the allocation list; see below),
under the LOC element, at both the Hub and the Satellite locations. Until a database has been
physically allocated to the location, the DBALL will not own its allocation entry.
The allocation process may take some time if there is a slow link between Hub and Satellite
and/or if database sizes are large.

A Get Work must be done prior to listing the DBALL (that is, you must carry on doing a Get
Work to see when the databases have been allocated). Allocation is successful when the
DBALL list contains all of the databases allocated.
   getwork
   /Satellite      - Navigate to the location (LOC element)
                   - Go to the first member, i.e. the DBALL
   q mem           - List the members, i.e. the allocated databases

5.3    De-allocating Databases
The same principles for allocating databases, as described above, apply to de-allocating
databases.
If users are reading a database at any satellite location and it is de-allocated at that location
by the hub while it is being read, then the database(s) de-allocated will not immediately be
deleted from the satellite locations.
The command will be stalled in the transaction database and, once all users at the location
exit their session, the database(s) will be de-allocated and the database files deleted.
Note: Only secondary databases can be de-allocated. If a database is primary at a satellite,
first make it secondary, then de-allocate it. If you change a database from primary to
secondary while a user is reading/writing to it, the user will be able to write to the
database until that user changes modules. A database does not need to be
primary at the Hub, just as long as it is not primary at the location where it is being
de-allocated.
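A hedged sketch of the sequence for a database that is currently primary at the satellite;
the command forms are assumptions by analogy with ALLOCATE, and the names are
illustrative, so verify the exact syntax in the Administrator Command Reference:

   $* Make the database secondary at the satellite, then de-allocate it
   CHANGE PRIMARY /PIPE-AREA1 at /Hub
   DEALLOCATE /PIPE-AREA1 at /Satellite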

5.4    De-allocating a Database from a Location

5.4.1  admnew Files
.admnew files are created when the whole database needs to be propagated. This may
occur:

•  Whenever a database is allocated to a location. The database is copied from the hub to
   the new location by the Global daemon. As the file is copied over the network
   connection, a file named prjnnnn.admnew is created. Once copying is complete, this
   file is renamed automatically to prjnnnn. For example: abc0001.admnew is created
   while copying, and it is renamed to abc0001 once copying is complete.

•  Whenever a primary database is merged. The next update will force the entire
   database to be propagated.

•  If the RECOVER command is used to recover a database from a specified location.


.admnew files are managed entirely by the system; there should be no reason to delete
them, except in extreme cases. If the daemon is running continuously and a Copy dB
operation fails, then the system should tidy up. Normally .admnew files are retained for
later use. However, if the daemon dies during the process (such as in a power-cut), this
may result in an invalid .admnew file. In this case the file should be removed from the
operating system to avoid possible problems on a repeat operation.
In the case of a copy after a database has been merged, it may not be possible for the
.admnew file to be renamed immediately. This will happen:

•  If there are READERS of the database - users are accessing an MDB which contains
   the database (even if they are only in MONITOR). In this case Global will not attempt to
   rename the .admnew file until all such users have exited or switched to an MDB which
   does not include the database. Once all such users have exited, the copy will normally
   succeed.

•  If the database is locked by a dead user - a session for a user which has been
   expunged. In this case Global attempts to rename the .admnew file, but it fails.

To resolve the second situation, you must do one of the following:

•  Ensure that the sessions for all dead users have been killed. Also ensure that no
   foreign projects are reading this database; or

•  Use the NET FILE command in a cmd window (or a suitable third-party tool) to identify
   network access to the file, and close it (a sketch is given below).

If the project is not used as a foreign project, there is an additional alternative. The
Overwrite DB Users flag - the attribute LCPOVW of the LOC element - for a location
controls whether a locked file may be overwritten. If this attribute is set TRUE and
there are no database READERS in the project, then Global will overwrite the locked
file with the .admnew file.

Note: This should not be done if other projects include this database as a foreign project,
since these are valid READERS that are not recorded in the session data for the
Global project.
Overwriting of locked databases may be enabled by using the MODIFY dialogue for the
location on the Admin Elements form to enable Overwriting, or by setting the Overwrite DB
Users flag (LCPOVW) to TRUE for the appropriate LOC element on the command-line.
See also Database File Locks.
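By way of illustration, the lock can be tracked down and released from a cmd window as
follows; the file ID (42 here) is hypothetical and comes from the listing produced by the
first command:

   rem List files opened over the network on this server, with their IDs
   net file

   rem Close the file with ID 42, releasing the lock
   net file 42 /close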

5.4.2  Using Areas in Global
Areas are available within Global and can be used in the same way as in non-Global projects.
However, for the daemon to manage updates and Global functionality, the area environment
variables must be set before starting the daemon. If the daemon is run as a service, then the
environment variables must be set within the kick-off batch script, as sketched below.
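A minimal sketch of such a kick-off script, assuming a hypothetical area variable name and
path (both illustrative - use your project's actual definitions):

   rem Define the project area variables before the daemon starts
   set ABC001=D:\projects\abc\area1

   rem Start the Global daemon as usual (site-specific start-up script)
   call singleds.bat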

6      Merging Databases
When setting up a project in a Global environment, you are likely to create many sessions in
the Global database. This is because when ADMIN issues a Daemon command, it first does
a SAVEWORK to give the Daemon an up-to-date view of the Global database. The Daemon
also may add sessions to the Global database.
We recommend that you should merge changes for the Global database and possibly the
system database after setting up a Global project. This should also be done after making
significant changes to the project setup.

6.1    Remote Merging on Non-Extract Databases
Take care when merging project databases. Databases can only be merged at their primary
locations. It is important to note that when a project database is merged, the database
sessions will effectively be lost, and thus the ability for Global to send only session changes is
lost too.

It is therefore recommended that you use the REMOTE MERGE command (or select
Remote > Remote Change Management > Merge Changes from the ADMIN menu bar) to
synchronise and merge the database at all secondary locations (unless the database is
non-propagating). This prevents propagation of the entire database on the next update.
If there are any users in a database at its primary location, you cannot merge the database.
REMOTE MERGE merges the database at secondary locations after it has been merged at
the primary location in order to prevent unnecessary copying of the entire database when it
is next updated.
You are advised to stop scheduled updates and avoid ad hoc updates when using REMOTE
MERGE. If scheduled updates are left in place, then unnecessary copying of entire
databases will be undertaken. There is also a danger of reverse propagation from the
secondary to the primary location, with the result that changes made by users at the primary
location would be lost.
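For illustration only - the database name is hypothetical and the exact command form
should be verified in the Administrator Command Reference:

   $* Merge sessions of /PIPE-AREA1 at its primary location, then
   $* synchronise and merge it at all secondary locations
   REMOTE MERGE /PIPE-AREA1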
Note that database extracts which own other extracts cannot be merged using REMOTE
MERGE. See Merging Extract Databases.

6.1.1  Procedure for Merging Changes

6.2    Merging Extract Databases


The REMOTE MERGE command cannot be used on extract databases if they own other
extracts. Instead, ADMIN must be used to merge these. The extract to be merged and all its
immediate children must be primary at the same location. This is because the child
extracts contain references to sessions in the owning extract.
Procedure to merge an individual extract database (a macro sketch follows the notes
below):

•  Select a suitable time to execute the merge, and ensure that all users have left the
   project.

•  Use CHANGE PRIMARY to bring all its immediate child extracts to the primary location
   of the extract being merged.

•  MERGE the database.

•  Use CHANGE PRIMARY to return the child extracts back to their original primary
   location.

Optionally, the databases could be copied (by ftp or similar) to all secondary locations
manually after the MERGE (and before the second set of CHANGE PRIMARY
commands). This avoids the need for the next Update to copy the entire file.

Normally merging would be carried out on the entire extract hierarchy at the project
Hub. However, if an extract database owns working extracts, it must be merged at its
original primary location, since the working extract files only exist at that location.
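A hedged macro sketch of the whole sequence, with hypothetical extract, database and
location names; the command forms are assumptions, so verify them against the
Administrator Command Reference:

   $* Bring the immediate child extracts to /LOCA, the primary
   $* location of the extract being merged
   CHANGE PRIMARY /PIPE/EXTR1 at /LOCA
   CHANGE PRIMARY /PIPE/EXTR2 at /LOCA

   $* Merge the owning extract database at /LOCA
   MERGE CHANGES /PIPE/MASTER

   $* Return the children to their original primary location
   CHANGE PRIMARY /PIPE/EXTR1 at /LOCB
   CHANGE PRIMARY /PIPE/EXTR2 at /LOCB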

7      Transaction Audit Trail


The Global Daemon stores most of the commands that it is asked to perform in its
transaction database. Kernel commands (high level control commands) are stored in the
pending file until complete.
The transaction database can be navigated from the ADMIN command line using standard
navigation commands. The information in the database gives the system administrator more
detail about the progress of commands, and the reasons why commands have failed. Much
of this information is available through the user interface, but this section is included to
instruct the system administrator on how to interpret the transaction database and the audit
trail information stored there.
Each Location in the Global Project has a Global Daemon (also known as ADMIN Daemon)
running. The Daemons at each location communicate and co-operate with each other to
perform actions that a user at a particular site wishes to effect: for example to allocate
databases (from ADMIN), or to claim elements (say from DESIGN).

7.1    Writing to the Database
The Daemon writes a persisted copy of each input command to the database as soon as it
is received; each operation and output command is written as soon as all have been
successfully created, at both the create operations and the create post operations stages.

State changes for input and output commands and for operations are written immediately after
the state change. However, optimisation may reduce the number of changes that are
actually committed to the database.

All results from an operation are written after the operation completes; results from an
output command are written as soon as they have been received from the location to which
the command was sent, once that location has replied.

Messages are written only as they are received, and only at the ORIGINATING location.
Successes and failures may also be visible at other locations involved in the command.

7.2    Reading from the Database

The Daemon reads from the transaction database for just two purposes:

7.2.1  Program Initialisation
The program reads out of the database all input commands not in a final state (Processed,
Timed Out, Cancelled), together with all the operations and output commands they own, and
starts progressing these commands. Only unfinished commands will be read; all others will be
ignored and not validated for errors.
If any errors are found in reading the database, the daemon will not start. It will then be
necessary to provide a (probably) empty database so that the daemon starts afresh
and does not progress any previously running commands.

7.2.2  Reading Command Data
Whenever command data is needed, such as to generate an output command to send on,
this data is read from the database. This is the only time that data is read from the database,
except at boot up, and is done purely as an optimisation to limit the amount of memory used
by the daemon.
The storage of the information in the database has additional advantages:

•  It provides the user with information on the progress of his commands, allowing him to
   browse the messages that have been received and persisted in this database.

•  It provides the administrator with an audit trail to determine if and where a problem has
   occurred.

•  It provides other modules of the base product with a store in which to deposit local
   administration information such as element claims. However, these commands are
   totally ignored by the daemon.

7.3    Following the Audit Trail


The following section refers to elements within the Transaction Database. For more detail
about Transaction Database Elements refer to the Administrator Command Reference
Manual.

7.3.1  As seen from the TRINCO
A TRINCO is created when an input command is received. It has a creation date (DATECR),
an initial state (INCSTA) of Received, and a command type (TRCNUM).
If the command came from the user, (or from TIMEDUPDATES) then the command UID
(COMUID) is set to a null reference. Otherwise it will be the reference of the output
command (TROUCO) that sent the command. If this is from another location it is an
unknown reference as it is in another transaction database that is not in the current MDB
and so cannot be navigated to.
The originating location of the command (where a user first issued it) is ORILOC. The
location that sent the command to this location is PRVLOC. And the destination location to
which it is eventually heading is DESLOC. Normally, commands are passed from location to
location until the destination is reached where operations take place. For some commands,
the operations take place at the ORILOC and DESLOC is used as a location to which to
pass on some other commands. It is not obvious which commands are exceptions to the
normal rule.
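For example, with a TRINCO as the current element, its routing can be inspected using
standard attribute queries (a minimal sketch):

   q INCSTA    $* current state of the input command
   q ORILOC    $* location where the command was first issued
   q PRVLOC    $* location that sent the command to this location
   q DESLOC    $* destination location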
If INCSTA = Acknowledged then an acknowledgement has gone to the sender of the
command. This is only relevant if the sender was a TROUCO and not a base product user.
If the state INCSTA = Ready then the input command has created its operations
successfully and TROPERs and/or TROUCOs will exist.
If there was a failure in the create processes, the INCSTA may be Stalled and no operations
will exist. After waiting for the standard time, the command will try again. Alternatively, this
failure can terminate the TRINCO: its TRPASS will be set to FALSE, its state will be
Complete or a later state, and it may own a TRMLST/TRMESS and perhaps a TRFAIL but
no TROPERs or TROUCOs.

Input commands can be given a delayed start time (EXTIME), after which operations will be
generated. The command will wait in the Waiting state until this time has passed. This stay of
execution will persist until EXTIME has expired, even if this is a longer period than the Time
out.
The TRINCO stays in Ready state for as long as all its operation and output commands
take to complete. Once the TRINCO has been set to ready the command cannot time out
until all operations have also timed out.
When all member operations and output commands have completed INCSTA is set to
Complete. All failures and successes generated by them are collected together and
handed on to the sending TROUCO (which stores them). The success state of the
command (TRPASS) is set to true if all operations have succeeded. INCSTA is now at
Replied.
Once a reply acknowledgement has been received back from the previous location, INCSTA
is set to Processed and no more actions will take place.
There are other terminating conditions of a TRINCO; Timed Out means that the command
did not manage to start before either its end time was reached, or the number of retries
allowed was exceeded. It will not own any TROUCOs or TROPERs.
The state is set to Cancelled if the command is cancelled before any significant action took
place. Owned TROUCOs and TROPERs may be set to cancelled if they have not yet
started work: subsequent operations that depend on them will be set to Redundant.

7.3.2  As seen from the TROUCO
Output commands may be created when input commands create their operations and
possibly again when post operations are generated. These TROUCOs have ORILOC set to
the current site, DESLOC the ultimate processing destination of the command and are sent
to NXTARL, the next destination en route. The command type is the TRCNUM attribute.
If the current location is not the destination of the incoming TRINCO then a TROUCO is
created to manage the progression of the command to its destination when all other (if any)
operations and output commands on which it is dependent have completed. This TROUCO
is a duplicate of the owning TRINCO and when it is passed on its ORILOC is that of its
owning TRINCO.
TROUCOs store the progress of the communication of the command with the location to
which it is sent (NXTARL) and the reply.
OUTSTA is the state of processing of a TROUCO. Its value is Waiting until the command is
ready for processing. This may be delayed because of dependency on other operations or
output commands. These dependencies are stored in DEPCOU, DEPEND and DEPTYP
attributes - the number of dependencies, the elements depended on, and whether it is on
success or failure.
When all previous commands and operations on which the TROUCO is dependent have
completed, and when the conditions of the dependencies are met, the TROUCO's state
changes to Ready. After this point the command is sent to its target location and its state is
set to Sent. If the sending fails, the TROUCO is Stalled, and remains so until the time
between retries (WAITIM) for that location has passed, when it goes back to Ready again.
The receiving location must acknowledge the command (OUTSTA = Acknowledged). The
acknowledgement is sent with the dbReference of the TRINCO at the receiving location that
has been created to store the command. This is stored in the TROUCO's CMREF attribute.
For remote locations this will usually be an unknown reference since the specific transaction
database is not visible. It can be used to track the command down the chain of locations if
the administrator can see all the databases.
When a reply is received, OUTSTA becomes Replied. Any reply data is stored under
TRFLST and TRSLST elements and the TRPASS attribute, and OUTSTA goes to
Processed.

TROUCOs can terminate by timing out if they fail to send in the lifetime prescribed (Timed
Out). They may never be sent if dependencies are not met, in which case they terminate as
Redundant.

7.3.3  As seen from the TROPER
Operations are created during the create operations process and possibly again when post
operations are generated. Operation information is stored in TROPER elements. This is
used to execute an operation at a location, to progress the operation and store any results,
errors, messages it may generate.
It is only operations that actually do anything in the daemon; input and output commands
(TRINCOs and TROUCOs) are just the means of marshalling instructions between users
and locations, and between locations. Extra commands may be generated, but in the end it
is the operations that are created at locations that do the work.

The operation type is the TRONUM attribute.

An operation starts in state (OPSTAT) Waiting until it is no longer dependent on any prior
operation or command and can be executed. It then goes to Ready and is put into a list
ready for execution. Execution takes place in a separate thread and the state is then
Running.

During the running state Failures and Successes may be generated, as well as messages,
but these are not stored in the database immediately. When the operation finishes (TRPASS
set to true or false) the OPSTAT is set to Complete, and successes and failures are stored in
the database under the TROPER (and not before this). The finishing may also set a string in
MSTEXT, but this is not passed on.

The execution of an operation can stall, for example due to an inability to access data. In this
case the OPSTAT is set to Stalled and will be reset back to Ready after time WAITIM. The
number of retries after stalling is stored in NRETRY. Operations will time out if still stalled at
date ENDTIM or if the number of retries exceeds MAXTRY. In this case OPSTAT goes to
Complete and finally Timed Out.

When an operation is complete it may need to generate extra operations and commands,
the form of which depends on the results of the operation. If this create operation is stalled,
then the operation goes to OPSTAT Stalled Post Operation and will go back to Complete
after WAITIM. When post operations are successfully created, or none are needed, the
operation goes to a final state of Processed or Timed Out.

TROPERs may never be needed if dependencies are not met, in which case they terminate
in state Redundant. This may happen if a command is cancelled.

7.4    Audit Trail Dates and Counts

States of input and output commands and operations are often progressed a number of
times, and for some, though not all, states a count attribute is incremented. This gives
information about, for example, the number of times commands are sent, tried or stalled.
A number of states also have an associated date attribute which is set each time the state is
reached, so that it records, for example, the date when a command was last sent or
acknowledged.

The states that set associated dates and counts are:

TRINCO:

State          Attribute  Meaning
RECEIVED       DATECR     Date command received from user or other location and created
ACKNOWLEDGED   DATEAK     Date acknowledgement for command sent
               NACKN      Number of times acknowledgement sent
READY          DATERD     Date command made ready (after EXTIME has been reached)
COMPLETE       DATECM     Date command completed
REPLIED        DATERP     Date reply sent with results of command
               NREPLY     Number of times reply sent
TIMEDOUT       DATEND     Date command timed out. No operations created
CANCELLED      DATEND     Date command cancelled by user
PROCESSED      DATEND     Date all processing of command finished, including reply
                          acknowledgement of command received
               NREPAK     Number of times reply acknowledgement received

TROUCO:

State          Attribute  Meaning
WAIT           DATECR     Date command created by owning TRINCO or previous TROPER
                          or TROUCO
READY          DATERD     Date command made ready to be sent when dependencies satisfied
SENT           DATESN     Date command sent to target location
               NRETRY     Number of times command sent and stalled
ACKNOWLEDGED   DATEAK     Date command acknowledgement received
               NACKN      Number of times acknowledgement received
REPLIED        DATERP     Date reply received with results of command
               NREPLY     Number of times reply received
COMPLETE       DATERK     Date command completed and reply acknowledgement sent
               NREPAK     Number of reply acknowledgements sent
TIMEDOUT       DATEND     Date command timed out. Could not be sent
CANCELLED      DATEND     Date command cancelled by owning TRINCO
REDUNDANT      DATEND     Date command discovered to be redundant
PROCESSED      DATEND     Date all processing of command finished, including post
                          operations generated

TROPER:

State          Attribute  Meaning
WAIT           DATECR     Date operation created by owning TRINCO or previous TROPER
                          or TROUCO
READY          DATERD     Date operation set ready when all dependencies satisfied
RUNNING        DATERN     Date operation started running
               NRETRY     Number of times operation was set running
STALLED        DATESL     Date operation stalled
COMPLETE       DATECM     Date operation completed
TIMEDOUT       DATEND     Date operation timed out. Could not be run
CANCELLED      DATEND     Date operation cancelled by owning TRINCO
REDUNDANT      DATEND     Date operation discovered to be redundant
PROCESSED      DATEND     Date all processing of operation finished, including post
                          operations generated

7.5    Cancelled Commands
Commands can be cancelled at the location where they were first input. There are rules as
to what a particular user may cancel, but this section describes what happens in the
daemon once a cancel command has been passed to it.
Cancellation only applies to TRINCOs and not to any particular operation it has. The
cancellation is immediately effected if the TRINCO has INCSTA of state Waiting or
Stalled.
If the TRINCO is Ready, then all of its operations and output commands are inspected. If
these TROPERs and TROUCOs are all Waiting, Ready or Stalled, then those in the Ready
or Stalled state are set to Cancelled and the waiting ones become Redundant. The
TRINCO then becomes Complete and then Cancelled.
TRINCOs in other, later states are not cancellable and the cancellation is rejected.
A Message is stored with the command as to whether the cancellation was effected, or
rejected.

7.6    Processing of Results and Messages
When operations execute they may generate successes and failures. These are buffered
and written to the transaction database as TRSUCC and TRFAIL elements when the
operation becomes Complete. Specific messages may also be generated: messages are
automatically generated for each success and failure, and each time an operation or output
command stalls.

Input commands can have messages. These are automatically generated if the create
operations stage stalls or if a cancel is attempted. There may also be failures and
successes, but these are rarely generated.

Messages are not stored in the database except under the TRINCO that was originally
received from a user (not another location's TROUCO) and under the TROPERs and
TROUCOs that the TRINCO owns. This is because messages are collected together
regularly by each TRINCO as its operations progress, and these are passed back to the
TRINCO's originating TROUCO. If the sender was the user, then the messages are stored in
the database for review under the relevant element. In particular, when these messages are
passed between sites, the TROUCO receives a set of messages each of which may have
been generated by different operations, and yet they will now all belong to the single
TROUCO. The messages contain sufficient attribute information to indicate the location that
the message originated from, the operation type, etc.

When the messages are finally stored below the originating command, successes and
failures are persisted as TRSUCC and TRFAIL elements under a TRMLST element. This
distinguishes them from the result successes and failures that are persisted when the
operation or output command finally completes.

The diagram on page … describes the elements created for a simple command claim
between two locations. It provides an idea of the elements created in both transaction DBs.

7.7    Transaction Success and Failure Messages
The scheduled update is a complex operation so it is not always possible for full detail to be
reported for database file copies. The sections below give several examples of what you
can expect to see after various successful and unsuccessful transactions.

7.7.1  Scheduled Updates - Successes
A successful scheduled update normally reports two messages, and each successful
update also generates a success message.

In this case, all successful database updates report no data to send since the database
was up to date. This is reflected in the summary, which reports the number of successful
Copies and Updates. Note that the success for the Global db is also reported as database
=0/0.
A scheduled update normally only sends the latest sessions for a database - this is an
Update. However, if the database has been merged or had another non-additive change
(reconfigure, backtrack), then the entire database file must be copied. Database copies are
always executed at the destination (the location to which the file must be copied).
The file is copied from the remote location to a temporary file with the suffix .admnew and
then committed. The database copy cannot be committed in the following circumstances:

•  There are users in the database (recorded in the Comms db).

•  There are dead users (the file is locked) and Overwriting is disabled (see below).

If the commit fails, the .admnew file will be retained. The next copy attempt will test this
file against the remote original to see whether the remote copy stage must be repeated.

In the case of updates, the number of sessions and pages sent is also reported in the
success for each database, as well as accumulated in the update summary. In the case of
copies, the number of pages sent will only be reported if the copy is executed locally. For
DRAFT databases, the number of picture-files sent is also reported.

The update summary also reports the number of other data files transferred (see also the
success for Exchange of other data). Note that this will always report a success, even if
there is nothing to transfer or Other data transfer is not set up.
7.7.2    Scheduled Update - Failures


Generally, a scheduled update will always succeed overall, but it may report a number of individual database failures.


Failure messages will contain detailed reasons:

In this case, the databases could not be propagated, since the secondary database had a
higher compaction number than the primary database. This may happen when a remote
merge is executed without stopping scheduled updates. Normally it will be necessary to
recover the database to resolve this error.
Prevention of Reverse propagation may also be reported in the following situation - a
satellite has executed a direct update (UPDATE DIRECT from the command-line) with a
non-neighbour satellite. The next scheduled update with the intermediate location will report
Prevented reverse propagation. In this case, scheduled updates will eventually resolve the
situation.
The following table summarises Failure messages that can be generated for Scheduled
updates. This does not include all possible failures that may be generated from failed file
copies.
Error No | Symptom | Reason
-    | Scheduled update was suppressed | Attribute LNOUPD set TRUE on LCOMD to disable scheduled update
-    | Remote location CAM is unavailable. Update will not report results to CAM | Daemon for CAM is not available; this failure cannot be reported at CAM - usually due to location unavailable
612  | Prevented reverse copying | Secondary location has a higher compaction number than the primary location. Database may need recovering
611  | Prevented reverse update | Secondary location has a higher session number than the primary location. Database may need recovering
613  | Unable to check update direction - update skipped | The Global database is in use. This is normally temporary, due to another command
610  | Update skipped - cannot get local details for <database> | The specified database is in use at the current location. This is normally temporary, due to another command using the database
610  | Update skipped - cannot get remote details for <database> | The specified database is in use at the remote location. This is normally temporary
610  | Update skipped - cannot get local/remote details for CAM system DB | In the case of system databases, if one system db is in use, then the update will fail for any system db (they all have the same DB number)
-    | Failed database copy - file may be in use at HUB | Unspecified COPY failure; compaction numbers are still out of step. If the copy destination was the update location, then additional failures will give further detail
619  | Cannot verify success - may be failed COPY | Unspecified COPY failure; compaction numbers are still out of step. No further detail is available
-    | Missing remote/local file. Prevented reverse propagation | System databases only. A system database file is missing at the specified location. This may need to be recovered
615  | Update failure - possibly database error | A database error was encountered during the update. Full detail will be in the daemon log
614  | Update failure - database pages are not contiguous | The database file is corrupt at the destination. This database must be recovered from its primary location
628  | Failed database copy. File in use. Cannot remove | Database file is locked and overwriting is disabled. File copy has failed to commit the .admnew file. The .admnew file will be retained for later use
630  | Failed database copy - update for previous extract failed | Prevention of an inconsistent extract hierarchy. A file copy for an extract db has not been attempted because of an update failure on another extract of the same database. (Not fully working at Global 2.4)

7.7.3    Failed File Copies

In the case of a failed copy, 2-3 failure messages may be generated to report detail. In the
example below, a SYNCHRONISE command was used and took 4 attempts to succeed. (A
failed SYNCHRONISE or UPDATE on an individual database will retry until it succeeds or
times out.) For scheduled updates, these 4 attempts would be spread over several different
update events.


In this example, the database still had readers, so the copy could not be completed. An
additional failure reports that 18 pages have been copied from the remote location. The next
retry validates the .admnew file, but still cannot commit it due to readers. A further retry
validates the .admnew file again and attempts to commit it. In this case there are no
readers, but the file is locked.

In this case, the SYNCHRONISE command eventually succeeded, since Overwriting was
enabled. Note that the 'Successful file copy' success reports that nothing has been copied,
since the remote copy stage was executed successfully on an earlier attempt, even though
that attempt failed to commit.
Detailed failures for file copies can only be reported at the destination. During a scheduled
update, the success of a copy is verified by checking that the compaction number has
changed. If the copy was executed at the location which executes the scheduled update,
then additional failures may show more detail. (Note this is the partner location for a
scheduled update, not the originator!)

7.8    Reasons Other ADMIN Commands Can Fail


Many daemon commands you will use for project administration include operations that
change the Global database. These operations fail if they cannot claim the appropriate
element, typically because ADMIN still has the element claimed.
Action | Cause of Failure
Commit Allocate DB | Changes DBALL members, owned by LOC
Commit Allocate All | Changes DBALL members, owned by LOC
Commit Allocate Primary DB | Changes DBALL members, owned by LOC
Initialise | Changes LOC element
Set Systemloc | Changes LOC element
Set Primary | Changes DBLOC element, owned by DB
Remove all DBs from MDBs | Changes MDB elements at satellite
Remove from MDB | Changes MDB elements at satellite
Delete from MDB | Changes MDB elements at satellite
Change Hub | Changes GLOCWL /*GL
Recover Hub | Changes GLOCWL /*GL
Unlock DB allocation | Changes DBLOC element, owned by DB
Unlock All db allocation | Changes DBALL element, owned by LOC

Refer to Extract Flush Commands Failing and Reasons Claims and Flushes can Fail for
non-Admin command failures.

7.9    Automatic Merging and Purging of a Transaction Database
Facilities are available for automatically merging and purging each transaction database, in
order to reduce its size and, consequently, improve its efficiency.
The principle of operation is that the system administrator can set rules that determine how
long successful commands and/or failed commands can remain in the transaction
database. At regular intervals, set by the administrator, the system deletes the expired
commands, merges the database to a new file, deletes the existing file and then copies the
merged file to the original filename.
As with all databases, the merging of a transaction database can be carried out only when
no other users are currently accessing it.
The procedure for initiating automatic merging and purging and setting the rules is in the
Global User Guide.
The Global User Guide also contains information on manually merging and purging
databases.


8    Pending File
On a Global network, most remote commands that are stalled for any reason at a location
are placed in the transaction database at that location for later processing (see next
chapter).
A small number of commands that cannot be carried out at once, known as kernel
commands, are instead stored in a location's pending file for later processing. There are
various situations where kernel commands may be added to a pending file. For example:

Too many commands have been issued in quick succession.

A communication link is down.

The kernel commands are:

ISOLATION TRUE/FALSE

LOCK/UNLOCK

PREVOWNER HUB

Also, for a Satellite's transaction database:

ALLOCATE (PRIMARY)

CHANGE PRIMARY
All other commands use the transaction database to achieve a similar effect (see next
chapter).

Once a pending file has been created at a location, it will continue to exist. When the kernel
commands stored in it have been executed, they will be deleted from the file. You can tell if
there are any outstanding commands by the size of the file: if it is empty, it will be zero size.
You can read the contents of the pending file using a utility available from AVEVA.
The pending file is named pending, and it will be saved in the project directory (for example,
abc000). It can be read using the glbpend.exe utility provided in the Global install folder.
For example, if the pending file is C:\AVEVA\projects\abc000\pending, the command to read
it is:

install path\glbpend.exe C:\AVEVA\projects\abc000\pending


You will be able to read the pending commands from the output.


9    Changing the Hub

9.1    Preparation for Changing the Hub


Changing the Hub is a straightforward process, but any break in communication links could
potentially complicate matters. Global can handle and recover from communication failure,
but we recommend that you take the following preliminary steps to minimise the risk of Hub
change failure:

Ensure that the daemon is running at both the current Hub and the Satellite which will
become the new hub. This can be done by selecting Query>Global
States>Communications from the ADMIN, DRAFT, DESIGN, DIAGRAMS or
SPOOLER module or by issuing a Ping command.

Ensure that you have the project at the Hub backed up and that at least the Global
database (i.e. prjglb) at the satellites is backed up.
Once these steps have been taken, you can change the Hub through the command
line. The GUI will automatically change at the old Hub. At the new Hub, re-enter
ADMIN.
Output to the Global daemon window will indicate that the location is now the Hub or
Satellite.

Note that all databases, including non-propagating databases, must be allocated to the
proposed new Hub, for example /Tokyo, before changing the Hub. This may be done
using the command:

Allocate all at /Tokyo override propg

or by checking the option Allocate all to allocate non-propagating databases on the
Database Allocation (by location) form.

You should make sure that the change of Hub location is complete before working with
either the new or old Hub. Check the following attribute to confirm that the hub change has
been successful. For example, if you are changing the Hub from London to Tokyo, then
navigate to the location world /*GL and query the Hubrf attribute:

/*GL
q hubrf
The Hubrf should be set to the name of the new Hub location; in this example, /Tokyo.
You will also see that the location parent attribute of each location (locrf) has changed. This
is a secondary effect, because the Hub location can have no parent. In the above example,
navigate to the location of the old Hub and query the Locrf attribute:

/London
q locrf
The Locrf should be set to the name of the new Hub location; in this example, /Tokyo.
(Previously, London, as the old Hub, had no parent location.)


Now, navigate to the location of the new Hub and query its Locrf; for example:

/Tokyo
q locrf
If the Locrf of Tokyo is set to Nulref, then the hub change has been successful. The new
hub, Tokyo, has no parent location.

9.2    Recovering from Change Hub Failure


Very rarely, the project may be in a state where a Hub change has been carried out but has
failed. In this situation, you may effectively have no Hub.
During a Hub change, or when a Hub change has failed, the identities of the old and new
hubs are recorded on the Location World /*GL. Navigate to the location world and query
Hubrf, Prvrf and Nxthb attributes:

/*GL
q hubrf
q prvrf
q nxthb
When there is no Hub, then Prvrf records the name of the previous hub and Nxthb records
the name of the next hub. The Hubrf attribute is set to Nulref. During a Hub change from
London to Tokyo, Prvrf would be /London and Nxthb would be /Tokyo.
Normally, when a Hub change fails, the previous Hub will be restored automatically as part
of the failure operations of the Hub change. You should check progress of the command in
the transaction database. If this recovery fails, then the System Administrator must recover
the previous Hub as described below. This will be necessary in the following circumstances:

If the daemon is down

For offline locations


In all other circumstances it is better to await the completion of the in-built recovery
operation, since this prevents incompatible changes being made by two competing
users at different locations.
To force recovery from a failed Hub change, at the original Hub, use the
Data>Recover>Hub Location option from the ADMIN menu bar, or type the following
command:

PREVOWNER HUB
Re-enter the ADMIN module. This will restore the Hub location and the Hub GUI. (Note: if
daemons are running, then the original Hub location command may still be in progress and
will attempt to commit the hub change or recover the original hub as appropriate.)
Make sure that the PREVOWNER command is complete before working with either the new
or old Hub, as otherwise it is possible to end up with two Hubs. If this happens, the Global
database must be propagated (or physically copied) from the new Hub to the old before
further administration is carried out. If the new Hub were to merge changes while the old
Hub was still active, the system would not be able to recover. It would be necessary to
reinstate the Global database from the backup taken before the change of Hub location was
undertaken.

10    Updates and Synchronisation


Within Global there is the functionality to update databases automatically or manually.
Additionally, we can manually synchronise databases.

10.1    Synchronisation
Synchronisation can be carried out at both Hub and Satellite locations. This process can be
used to synchronise databases at one location with the corresponding databases at a
different location. This is a one-way process: project data is only received.

10.2    Manual Updates
Manual updates can also be carried out at both Hub and Satellite locations. This is a
two-way process that can take place between neighbouring Locations. Data will be both
sent and received from the location initiating the update, according to which Location has
the most up-to-date version of the database.
If update is used between two locations which are not neighbours, then Global will attempt
to synchronise the database at the two locations as follows:

If the sending location is the primary location, it will update the database at each
location along the network path to the destination;

If the receiving location is the primary location, it will execute a SYNCHRONISE
command to request an update from the primary location;

If the primary location lies between the two locations, then it will synchronise the
database at the sending location with the primary location and update the database
from the primary location to the destination location.

It is also possible to do a direct update between two non-neighbour locations using the
UPDATE DIRECT command. However this is not recommended, since it can result in
Reverse propagation errors from scheduled updates. This happens because UPDATE
DIRECT results in the database being more recent at the secondary destination of the
update than at the intermediate satellite through which scheduled updates are routed.
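For illustration, the two forms of the command might be issued as follows (a hypothetical sketch: PIPE/PIPE and /SatB are example names, and the exact syntax of the UPDATE command is given in the Administrator Command Reference Manual):

    UPDATE PIPE/PIPE AT /SatB           -- routed via intermediate locations
    UPDATE PIPE/PIPE AT /SatB DIRECT    -- direct; can cause reverse propagation errors

The first form lets Global synchronise the intermediate locations as described above; the second bypasses them, and is therefore not recommended.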

To learn more about Reverse Propagation errors, see Recovery from Reverse Propagation
Errors.

10.3    Update and Timing Considerations


Remember that changes to automatic updates may take up to 15 minutes to take effect.
This is because the daemon checks the pending file every 2.5 minutes, and only after six
such checks (6 x 2.5 = 15 minutes) are the System databases and the update information
re-read. Thus there is a possible maximum delay of 15 minutes before updates are started.
If necessary, this delay can be reduced by stopping and re-starting the daemon.

10.4    Propagation of Picture and other Drawing-files


Picture, Schematics and Neutral Format files are propagated at the same time as a Database
is propagated. If the Daemon finds that a Database has been modified at its primary location,
it will query the Picture directory (referenced by variable %ABCPIC% in the case of a PADD
database) and the Neutral Format directory (referenced by variable %ABCDIA% in the
case of a SCHE database) at both locations to work out any changes. New and missing files
will be copied from the Primary Location to all Secondary Locations, and any old files will be
deleted.
If the Database is found to be the same at both locations, then it is considered not to require
Propagation, and the Picture and Neutral Format File directories are not compared.
Therefore, if there is a genuine mismatch in the file directories, this will not be resolved.
By default, Picture and Neutral Format File Propagation is disabled (non-propagating). It is
possible to enable the Propagation of these files by ticking the check box on the Modify/
Create dB form, as below. This will allow Picture and Neutral Format files to be propagated
to any other location.

If this is done it is possible to regenerate all Picture and Neutral Format Files at the satellite,
even though the Database is secondary.
For Picture and Neutral Format Files to be successfully propagated the environment
variables %ABCPIC% and %ABCDIA% must be set in the Daemon kick-off script.
Final Designer, Schematics and Marine Drawings files are always propagated, even if
Picture/Neutral Format File Propagation is disabled.

10.5    Propagation of Final Designer, Schematics and Marine Hull Drawing Files
Final Designer, Schematics and Marine Hull Drawing files differ from Picture and Neutral
Format Files because they are always propagated, regardless of whether the Propagate
Picture/Neutral Format Files check box is checked. The reason for this is that, unlike
picture files, they cannot be regenerated at a secondary location.

For these files to be successfully propagated, the following environment variables must be
set in the Daemon kick-off script:

File type | Variable/folder
Final Designer Drawing | {ABC} DWG
Marine Hull Drawing objects (SDB files) | {ABC} DRG
Schematic Diagrams | {ABC} DIA
Stencil | {ABC} STE
Template | {ABC} TPL

The {ABC} DIA folder can contain Neutral Format (SVG) files as well as, or instead of, Visio
Schematic Diagram files. For a detailed description of the file formats that are monitored
within the above folders, refer to the Administrator Command Reference Manual.
Only PDMSDWG files that are associated with a DRAFT Database are propagated.
Associated PDMSDWG Files are Sheet and Overlay drawings. Other DWG files, such as
Backing Sheets and Symbols, need to be propagated through Transfer of other Data. See
Transfer of Other Data.
This is also the case for AVEVA Marine, where there are drawings located in the ASSI,
ASSP, BACK, BTEM, CPAR, MARK, NPLD, NSKE, PDB, PICT, PINJ, PLIS, PLJI, PPAR,
PRSK, RECE, SETT, STD and WCOG directories.
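As an illustration, these directories might be defined in a Windows daemon kick-off script as follows (a minimal sketch: the paths are hypothetical, and the variable names assume the %ABCDWG%-style pattern for a project ABC; check the Administrator Command Reference Manual for the exact names):

    rem assumed variable names for project ABC; paths are examples only
    set ABCDWG=C:\AVEVA\projects\abcdwg
    set ABCDRG=C:\AVEVA\projects\abcdrg
    set ABCDIA=C:\AVEVA\projects\abcdia
    set ABCSTE=C:\AVEVA\projects\abcste
    set ABCTPL=C:\AVEVA\projects\abctpl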

10.6    Propagation of Inter-Database Macros

As in non-Global projects, inter-DB macros will be created, for example, when a user tries
to connect a pipe in one database to a nozzle in another database for which that user only
has Read access.
Within a Global environment, inter-db macros may need propagating to various locations. In
order for this to be successful, the following must be ensured:

The macro directory (for example, abcmac) must exist at all locations where macros
will be created.

The project variables for the macro directory (for example, ABCMAC) are set for the
daemons. (All project environment variables must also be set for users, of course.)

10.7    Update Timings
It is extremely difficult to predict the length of time that an update will take to complete. It will
depend upon the bandwidth that is dedicated to the update process at the time it is run.
Therefore, if the line is shared with other communications programs (mail, internet, etc.),
the update performance will be affected. The timings described below were undertaken on
a line that had no other competing process and that was extremely clean - that is, its rate of
failure would be near zero. On a normal WAN line, the collision and failure rate would not
achieve anywhere near such a low level.

Update Timing Key

These test timings were taken when propagating 11080 pages (22695936 bytes) of data
between two machines.

10.8    Transfer of Other Data


Files such as ISODRAFT files, external PLOT files and DESIGN manager files are not
propagated automatically by the Global daemon. However, there is a mechanism in the
daemon to allow such files to be transferred to and from neighbouring locations. These files
are transferred by scheduled updates and the UPDATE ALL command.
The daemon uses environment variables to define import and export directories for other
data files. At a location, there is a single directory to receive data imported from other
locations; and a set of export directories, one per neighbouring location, from which data
can be exported to those locations.
For the current project, the import directory at a location is defined by variable %IMPORT%;
the export directory for neighbouring location ABC is defined by variable %EXP_ABC%, etc.
If these variables are defined at each location, then the daemon will automatically transfer
files in these directories from one Satellite to another during scheduled updates (or when

the UPDATE ALL command is used). Files can only be transferred between neighbouring
locations, and this method cannot be used to send files to/from off-line locations.
For example, myfile has been produced at Satellite AAA and is needed at neighbouring
location BBB. The user at AAA must ensure that myfile has been placed in directory
%EXP_BBB%. During the next scheduled update with BBB, this file will be sent to BBB, and
received in directory %IMPORT% at location BBB. A user at BBB can then use myfile. If
myfile is to be sent on to other locations, it will need to be copied into the export directories
at BBB for those locations.
Offline locations: The TRANSFER command only copies databases and picture files to or
from the transfer directory, ready for onward manual transfer to the specified location.
Transfer of other data files must be done manually.
It is possible to assign a batch script to run both before and after the Update Event occurs.
This can be used to copy data into the EXPORT directories before the Update is executed,
and then copy it out of the IMPORT directory once the Update Event has completed. This
process will include the transfer of Other Data.
The batch scripts are assigned to an Update Event through the Create/Modify Update form,
see below.

Batch Scripts

The script itself can be of any type of batch script, for instance perl, and can be as complex
as required.
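For instance, a pre-update script might stage outgoing files into an export directory, and a post-update script might collect arrivals from the import directory. A minimal Windows batch sketch, assuming a neighbouring location BBB and hypothetical staging folders:

    rem pre-update script: stage outgoing files for neighbouring location BBB
    copy C:\work\outgoing\*.* %EXP_BBB%

    rem post-update script: collect files received from other locations
    move %IMPORT%\*.* C:\work\incoming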


Note: Transferring of other Data uses the same communication line as Updates and all
other Global functionality. Transferring too many other files may have an impact on
the Window of Opportunity for updates.

10.9    Reverse Propagation Prevention


When a database update is initiated, a check is automatically made to ensure that the
update direction is correct. If the check implies that the propagation is in the reverse
direction (that is from out-of-date database to up-to-date database) the update is aborted
and an error message 'Prevented reverse update' or 'Prevented reverse copying' is
displayed. Note that this check cannot be made if the primary location is being changed.
The check involves an analysis of the status of the two databases. The result of the
analysis determines the appropriate propagation direction. This propagation direction is
compared with that expected at the primary location and the update is allowed to continue
only when these propagation directions are the same.
The database status that is analysed comprises a Non-additive Changes Count (NACCNT),
Header Changes Count (HCCNT), Claim List Changes Count (CLCCNT) and the Latest
Session. The three counts are pseudo-attributes of the DB element, and can be queried.
The message 'Prevented reverse update' means that the session data or the header page
at the secondary location is greater than at the primary location. This may be investigated
by checking the session number at each location, and, if necessary, by querying the HCCNT
attribute at the DB elements of the databases.
The message 'Prevented reverse copying' means that the NACCNT at the secondary
location is greater than at the primary location. This attribute may be queried at the DB
element of the database at each location.
It may be necessary to recover the database at the secondary location to solve the problem.
If either of the above messages is reported for a system or Global database, a similar
investigation can be made. For a system database the HCCNT and NACCNT attributes
should be queried at the STAT /*S element. For a Global database, they should be queried
at the GSTAT /*GS element.
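For example, to inspect these counters for an ordinary database, navigate to its DB element at each location and query the pseudo-attributes, following the query convention used elsewhere in this manual:

    Q HCCNT
    Q NACCNT
    Q CLCCNT

Comparing the values reported at the primary and secondary locations shows which side is ahead, and therefore why the update was prevented.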
To resolve the problem, the database at the appropriate location must be recovered.
You can find guidelines on how to recover from Reverse Propagation Errors in Recovery
from Reverse Propagation Errors.

11    Deleting Databases
The procedure for deleting a database is summarised below. If the database owns extracts,
see Deleting a Database that owns Extracts.

Note: A dB does not need to be primary at the HUB, just as long as it is not primary at the
location where it is being deallocated.

12    Database Recovery
If for any reason a database at a location is corrupt, it can be recovered by transferring the
database from a neighbouring location. It is important to remember that this could result in
loss of work. The main objective when a recovery is carried out is obviously to restore the
database(s) and minimise the work loss.
Global does not verify that the file from which the database is being recovered is a valid
database. It is the user's responsibility to ensure that this is the case. Remote DICE
checking may be used to verify the state of the database at the remote location from
which the database is to be recovered.

12.1    Recovering Secondary Databases


You would recover a secondary database from a neighbouring location that has the most
up-to-date database. This will usually be from the Primary Database. For example, in the
diagram below, if Database 1 is corrupt at Sat 1, recover from the Hub.

Location of Corrupt DB | Corrupt DB | Recover Corrupt DB From
Hub | 3, 4, 5 | Sat 1, Sat 2, Sat 3 respectively
Sat 1 | 1+2, 4, 5 | Hub, Sat 2, Sat 3 respectively
Sat 2 | 1+2, 3, 5 | Hub, Sat 1, Sat 3 respectively
Sat 3 | 1+2, 3, 4 | Hub, Sat 1, Sat 2 respectively

12.2    Recovering Primary Databases


Recovering primary databases requires a little more consideration. Because the primary
database is the most up-to-date version of that database in the project, there is inevitably
going to be some work loss.
The main aim here is to recover the database from the most recent backup or from a
secondary location with minimal work loss.

Restore from backup if all secondary databases are older than backups or if they had
not been synchronised with the primary database before it was corrupted.

Restore from a secondary location if this database is an uncorrupted and newer
database than the one in the backup. (You should restore from the latest secondary
database.)
Some corruption can be fixed by DICE, so try using DICE first if you are recovering to a
primary location.
This method can also be used to recover the system database. However, the system
database cannot be recovered by the daemon unless it is readable, since the daemon
uses it to understand the network. If the system database cannot be read, then it must
be restored manually.

Note: When a DICE report indicates that Refblocks have been lost, normally this would
require the master database to be patched. However, in a Global project this error is
non-fatal if there are working extracts. These databases are non-propagating and
only exist at the primary location of their extract owner. This results in the error
report, since the Refblocks for working extracts are not accessible at the primary
location of the master db. (Refblocks are blocks of reference numbers available for
use in the extract.)

12.3    Recovering Database Primary Locations


If changing the Primary location of a database fails for some reason, you can recover the
Primary location by using the PREVOWNER command.

12.4    Recovering the Global Database


If for any reason the Global database at a Satellite becomes out of date, first try fixing the
problem by using DICE, the base product Data Integrity Checker. If you cannot fix the
problem using DICE, the database at a Satellite can be recovered by the following
procedure:
1. Stop the Global daemon at the hub and at the satellite location.
2. Inform all users, including the Administrator, at both the Satellite and the Hub to save
work and exit their sessions.
3. Physically copy the Global database (for example abcglb) from the hub to the satellite; a minimal sketch of this step follows the list.
4. Restart the daemons and allow users to start work again.
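A minimal sketch of step 3 on Windows, assuming the hub project directory is shared as \\hub\abc000 and the satellite project directory is C:\AVEVA\projects\abc000 (both paths are hypothetical):

    copy \\hub\abc000\abcglb C:\AVEVA\projects\abc000\abcglb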


12.5    Transaction Database Management


The transaction database records details of Global daemon commands at a location. Each
location has its own transaction database.
The transaction database is unusual in that the base product treats it both as a system
database and as an ordinary user database. By default, it is a non-propagating database,
thus secondary databases at other locations are likely to be very out of date with the primary
version at the owning location.
Note: To avoid data consistency errors, changes to the transaction database should not be
made while the daemon is running.

12.5.1    Renewing the Transaction Database


If the transaction database becomes corrupt, it may be necessary to delete and re-create it.
At the Hub, this may be done using the ADMIN Module. However, at satellites it is not
possible to delete and create databases directly.
Instead, the satellite transaction database must be renewed using the daemon. This may be
done in one of the following ways:

The RENEW DELETE <DB> command may be used. This is the preferred method of
re-creating the transaction database file, as it works even when the database is too
corrupt for the daemon to run. Note that <DB> must be the transaction database for the
current location, as the command cannot be executed remotely.
Following the command, ADMIN checks that all users are logged out and that the
daemon has been shut down. Note that the check on the daemon takes up to 3
minutes. ADMIN then deletes the file for the transaction database (not its DB entry)
and prompts the user to quit and to stop and re-start the daemon. When the daemon is
restarted, it will automatically recreate the transaction database file.

The RENEW <DB> AT <loc> command may be used to renew a transaction database
remotely. Note that this command may fail, if the database corruption is severe and the
daemon at <loc> cannot be started.

If the transaction database file for a location is missing, the daemon will create a fresh
version automatically when it is started up. This means that if for some reason the
database is corrupt, the transaction database can still be renewed by deleting the
corrupt version of the file manually. (You should not delete its definition in the Global
database.)

After renewing, you should copy the location's transaction database to all secondary
locations (such as the Hub), using the RECOVER command; a hypothetical invocation is
sketched below. This will prevent reverse propagation when the database is synchronised
at a secondary location.
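A hypothetical invocation, assuming the location's transaction database is named abc/transSAT and the Hub holds the secondary copy (the exact RECOVER syntax is given in the Administrator Command Reference Manual):

    RECOVER abc/transSAT AT /Hub    -- assumed form; check the reference manual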

Note: The RENEW command may remove running commands because it deletes the
transaction database.

12.5.2    Merging the Transaction Database


The transaction database may be merged from the ADMIN Module. This requires the
daemon to have been closed down at that location, so that there are no write-access users
of the location's transaction database.
It may also be merged remotely at the daemon, using the REMOTE MERGE command. The
Global daemon detects that the database involved is the location's own transaction


database. The daemon will close the transaction database before the merge, and re-open it
afterwards.
However, the REMOTE MERGE command cannot be used when the transaction database
is full, since this command cannot be recorded properly. In this case, it may be necessary to
merge it by reconfiguring. To manage the transaction dB efficiently, TRINCOs (and their
child elements) need to be deleted at regular intervals. Only completed transactions should
be deleted. It only makes sense to merge the transaction dB after TRINCOs have been
deleted, otherwise the dB will not be compacted.

12.5.3    Reconfiguring the Transaction Database at a Satellite


It is possible to recover a corrupt transaction database at a satellite by reconfiguring it. This
is more complicated than reconfiguring it at the Hub, because the TO NEW command
cannot be used at a satellite.
The procedure is as follows:
1. In the ADMIN Module at the satellite, reconfigure FROM the corrupt transaction
database TO files.
2. Renew the transaction database by one of the methods described in Renewing the
Transaction Database.
3. Stop the daemon at the satellite (if this has not already been done).
4. In the ADMIN Module at the satellite, reconfigure FROM the files created above TO the
transaction database.
After reconfiguring, you should copy the location's transaction database to all
secondary locations (such as the Hub), using the RECOVER command. This will
prevent reverse propagation when the database is synchronised at a secondary
location.
It is essential when reconfiguring the transaction dB that the SAMEREF option be used.
Transaction dBs from other locations have references direct into a local transaction dB, and
failure to use SAMEREF will result in those references changing.

13    Recommendations for Reconfiguring (User dBs)
It is recommended that, when reconfiguring databases, the SAMEREF option be used.
Using this option ensures that referencing databases are unaffected.
Take care when you reconfigure project databases. Databases can only be reconfigured at
their primary locations. It is important to note that when a project database is reconfigured,
the database sessions will effectively be lost. Thus the ability for Global to send only session
changes is lost as well.
It is therefore recommended that you use the REMOTE MERGE command to synchronise
and merge the database at all secondary locations (unless the database is non-propagating). This prevents propagation of the entire database on the next update.
It is also recommended that there are no users in the database at the primary location when
reconfiguring back to the Sameref database.
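As an illustration only, a reconfigure that preserves references might take the following shape in ADMIN (a hypothetical sketch: PIPE/PIPE is an example database, and the exact reconfigurer command forms, including how the SAMEREF option is specified, must be checked in the Administrator Command Reference Manual):

    FROM DB PIPE/PIPE
    TO DB PIPE/PIPE SAMEREF
    RECONFIG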

14    Copying Global Projects


Global projects may be copied. Facilities are provided to:

replicate the complete Global project, including all data (except the ISO
subdirectories), to a new project

replicate the structure of a Global project to file

This is achieved using the REPLICATE command.

For full details of the REPLICATE command, see the Administrator Command
Reference Manual.

Note: It is very important to ensure that the replicated project has a different project UUID
to the original project, otherwise the Daemon will not run correctly.
The UUID for the project is stored in the ADUUID attribute of /*GL.
If this is unset or has not been changed, use the NEWUID attribute:
/*GL
!NEW=NEWUID
ADUUID $!NEW
SAVEWORK

15    Backing Up Global Projects


Backing up projects regularly is good practice in any environment, including Global projects.
With a Global project, extra attention has to be given to any restore process that is carried
out. The following guidelines apply to backing up Global projects:

Back up all files at all locations regularly.

When restoring a project, be aware that you may be able to restore project databases
by using Global's Recover functionality. This may give you the opportunity to minimise
work loss.

Use the backups for a location only for that location. (In some cases your only option
may be to use backups from other locations. In this case, be aware of the implication
this could have on the amount of work lost.)

Remember, your Global database (for example abcglb) at the Hub is your master
Global database. Back this up before you carry out any major Global administration
work.

When you use databases from backups, it is feasible for a secondary database to have
newer sessions than a primary database. If so, at the next update, changes may be posted
back from the secondary database to the primary database. If new sessions have been
written at the primary location, this could cause corruption. You should therefore ensure that
your secondary database backups do not have newer sessions than the primary database.
To resolve this, it may be necessary to RECOVER some databases from the primary
location after the restore.

16    Using Extracts with Global Projects


An extract is created from an existing database. When an Extract is created, it will be empty,
with pointers back to the owning or master database. Thus all data visible in the master will
be visible in the extract. Extracts can only be created from Multiwrite databases, and all
extracts are themselves Multiwrite. You can create Extract DBs from any type of database
that can be multiwrite, that is DESI, PADD, CATA and ISOD, and in the case of Marine
projects, MANU and SCHE.

You cannot create extracts from foreign DBs.

You cannot create extracts from copy DBs.

You can work on an extract at the same time as another user is working on the master or
another extract. When a user works on the extract, elements are claimed to the extract in a
similar way to simple multiwrite databases, so no other User can work on them. When an
extract User does a SAVEWORK, the changed data will be saved to the Extract. The
unchanged data will still be read via pointers back to the master DB. When appropriate, the
changes made to the extract are written back to the master. Also, the extract can be
updated when required with changes made to the master.

16.1    Using Extracts
You can use extract databases both with standard (non-Global) projects and with Global
projects. This chapter gives information about the use of extracts with Global projects. Refer
to the Administrator User Guide for information about the use of extracts with standard
projects.

16.1.1    Extract Families
A Master DB may have many extract DBs. You can create an extract from another extract,
forming a hierarchy of extracts. The hierarchy can be up to 10 levels deep. The extracts
derived from the same master are defined as an Extract Family. The maximum number of
extracts at all levels in an extract family is 8191.
The original database is known as the Master database. The Master database is the parent
of the first level of extracts. If a more complex hierarchy of extracts is created, the lower
level extracts will have parent extracts which are not the master.
The extracts immediately below an extract are known as extract children. The maximum
number of extract children is 408.
If a hierarchy of extracts is created, the parent of an extract, and its parents up to and
including the Master DB, are known collectively as the Extract Ancestors.
The following diagram illustrates an example of an extract family hierarchy:


In this example:
Label | Description
PIPES | is the Master and the parent of PIPES_X1.
PIPES_X1 | is a child of PIPES and the parent of PIPES_X10.
PIPES_X10 | is a child of PIPES_X1.

Note: The children of PIPES are PIPES_X1 and PIPES_X2. PIPES and PIPES_X1 are the
ancestors of PIPES_X10.
Write access to extracts is controlled in the same way as any other database:

The user must be a member of the Team owning the Extract. Extracts in the same
family can be owned by the same team or by different teams.

The user must select an MDB containing the extract (or containing its parent, if the
extract is a working extract).

Data Access Control can be applied.

An extract database cannot be opened in a constructor module (such as DESIGN) at a
satellite unless all its parent extracts are also allocated to that satellite.

Note: At this release, you can only create an extract at the bottom of an extract tree: you
cannot insert a new extract between existing generations. At the Hub, you can also
create a new master database above the original master.

16.1.2    Querying Extract Families


You can query the following attributes to get information about the structure of an extract
family, at a database element in the Global database.

EXTNO | Extract Number
EXTOWN | Extract Owner
EXTMAS | Extract Master
EXTALS | Extract Ancestors
EXTCLS | Extract Children
EXTDES | Extract Descendants
EXTFAM | Extract Family
ISEXOP | Is Owner Primary Here?
ISEXMP | Is Parent Primary Here?
ISEXAP | Is All Ancestry Primary Here?
LVAR | Variant
LCTROL | Controlled

16.2    Creating Master and Extract Databases

16.2.1    Creating Master Databases


All master databases (that is, normal multiwrite databases which are going to have extracts
created from them) must be created at the Hub. They can be created before giving the
make global command.
You may specify the database numbers for each database. For example:

cr db pipe/pipe-a desi acc multiwrite dbno n fino n


(If fino is specified, then this will be used to define the filename, otherwise the database will
be named using its dbno and extract number - default 1).

16.2.2    Creating Extracts
Extracts can be created at any authorised Location: the parent extract must be allocated to
the Location first.
Like other databases in a Global project, extracts have a primary Location, and this need not
be the same as the Primary location of the parent database. By default, the primary location
of the new extract will be the current location.

If you are at the Hub and creating an extract for a satellite, use the AT option in the
CREATE command. The extract will be created with its primary location at the Satellite
specified.

If you are at an administering location, you must also use the AT option if you want to
specify that the extract will be created at the administered location, otherwise the
extract will be created at the administering location (that is, the true current location,
queried using Q CURLOC). The parent extract must be allocated to the administered
location.

When you are creating an Extract at a satellite, make sure you give the CREATE
EXTRACT command only once and check that the command has completed by
issuing a Q DB dbname command. You may issue further CREATE EXTRACT
commands provided that you do not use the same db name or db number (if specified).
The daemon will assign a db number (dbno) if none is specified.

The CREATE EXTRACT command will be executed by the Daemon (which will imply a
delay in executing the command) if any of the following is true:

If the master database is primary somewhere else


If the current location is a satellite

If the parent extract is primary at another location

If the new child extract is specified to be primary at another location (AT loc option).

Note: An in-built recovery operation exists for CREATE EXTRACT and, therefore, the
PREVOWNER command is not usually needed after a failure of the CREATE
EXTRACT command. However, the automatic recovery operation does not cover
the CREATE command Allocate operation and PREVOWNER may be needed in the
unlikely event of this failing.
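As an illustration, an extract creation command might take the following shape, modelled on the cr db form shown earlier (a hypothetical sketch: the database names, the location /Tokyo and the REFBLOCK value are examples, and the exact syntax is given in the Administrator Command Reference Manual):

    cr db pipe/pipe-x1 extract of pipe/pipe at /Tokyo refblock 5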
Note that the ALLOCATE command allows child extracts to be allocated to a satellite
without their parent being allocated, but you will not be able to open the extract until all its
ancestors have been allocated to the location. Also note that the ancestor extracts may
need to be synchronised if timed updates of extracts have not been implemented.
Extract creation is controlled by the NOEXTC attribute of a location. If this is TRUE, then
extract creation is disabled and extracts cannot be created by that location. However the
Hub or its administering location (if authorised) may create extracts.
The purpose of the NOEXTC attribute is to prevent a satellite from creating databases on
the fly without authorisation, and it applies to the administering location, not the
administered location. However, if the HUB is doing it, it is by definition authorised. Thus the
HUB is always able to create extracts.
Similarly, we could have a situation where one satellite AAA is administering another BBB.
Satellite AAA might have NOEXTC false, and BBB might have NOEXTC true. In this case,
AAA would be allowed to create extracts for itself and for satellite BBB.
But BBB would not be allowed to create any extracts itself. The screenshots below show
how you set the NOEXTC attribute in the Modify Location form.


16.2.3    Creating Working Extracts


Working extracts can only be created at the location where the parent extract is primary.
Working extracts do not need to be added to MDBs. When you select an MDB that includes
databases for which you have working extracts, you will actually write your data to the
working extract.
When you issue data from a working extract, it will be issued to the database from which the
working extract has been created.
In order to issue data from an extract which has working extracts to an extract further up the
extract tree, there must be a user who does not have a working extract: see the following
diagram.


A working extract inherits the write access of its parent extract. That is, if the parent is
primary at the location of the working extract then it can be written to; otherwise the user will
only have read access.

16.2.4    Extract Numbers
Before you start creating extracts, you should work out an extract numbering system, and
set the extract numbers explicitly when the extracts are created.
Extract numbers must be between 1 and 8191 inclusive, for each database. You must set
the range of extract numbers available for normal extracts, and for working extracts at each
location (see the diagram below). You can do this by setting the EXTLO and EXTHI
attributes for LOCLI and LOC elements as follows:

The available numbers for extract databases at a location are defined by the EXTLO
and EXTHI attributes for the LOCLI element under the /*GL element. You must define
the range of extract numbers so that there are enough left for working extracts: see
next point.

The available numbers for working extracts at a location are defined by the EXTLO and
EXTHI attributes for the LOC elements under the LOCLI element: for each Location
you must select a range of numbers which lies within the range you have left for
working extracts, and which does not overlap with the range for working extracts at any
other Location.

Note: You can query extract number ranges by navigating to the appropriate element and
giving the commands:

Q EXTLO
Q EXTHI
When you are using the ADMIN menu bar, you can use the Location version of the Admin
Elements form to create or modify a Location. On the form, you specify the range of
numbers available for working extracts at the location. See the Global Management User
Guide for details.
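For example, the current ranges can be queried as described above, and by the same attribute convention they might be set from the command line (a sketch: the Q commands follow the note above, while the attribute-setting form and the numeric values are assumptions; the Admin Elements form is the supported route):

    /*GL
    Q EXTLO
    Q EXTHI
    /London
    EXTLO 4001
    EXTHI 4500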


16.2.5    Reference Blocks
The allocation of reference numbers is controlled by the master database. Each extract may
be allocated reference blocks from the master. Elements created in the extract will be
allocated reference numbers from the local reference block(s). If no reference block is
allocated manually, the system will allocate reference blocks as required. For a Global
project, this may require daemon activity.
To avoid this, we recommend that you should assign a block of reference numbers to the
extract when you create it, using the REFBLOCK n option. The block of reference numbers
will then be available locally. n should reflect the number of users writing to the extract, for
example, if you expect to have five users writing to the extract, set n to 5.
Note: There are 8191 reference blocks available for each extract hierarchy, so there is no
need to be conservative when allocating them.

16.3    Setting up an Extract Hierarchy


Before you start creating extracts and working extracts, you should have a clear plan of how
they will be used. You should maintain diagrams of the basic master / extract / working
extract database organisation. These help when devising flush and release procedures,
and defining MDBs.
For project ttt:
Note: The databases shown are all part of the same extract family, and so they will all have
the same database number as part of their filenames, for example ttt0200_0001.


16.4    Using DACs with Extracts

You can use Data Access Control (DAC) and Access Control Rights (ACRs) with extracts
and working extracts to control workflow. For detailed information about DACs, see the
ADMIN User Guide. When you use DACs with extracts, you can set the Aclass (Attribute
Class) to restrict access to elements with particular attributes, which can include UDAs.

Modify the ADMIN Module definition to give access to DICT databases using the EDIT
MODULE command in ADMIN as follows:

EDIT MOD ADMIN MODE DICT READ

Set up an MDB containing the DICT database in which the UDAs are stored, and make
sure you select it as you enter ADMIN. Users will also need to have read access to the
DICT database via their MDBs.
The following simple scenario illustrates how you could use UDAs in Data Access
Control combined with extracts, to control workflow. The Designer Role would give
access to all Piping elements except those with the UDA :ISSUED set to TRUE.

16.5    Using Extracts in DESIGN


A useful querying command when you are using extracts in DESIGN is:

Q DBNAME
This command will return the name of the database that you are actually writing to. If the
extract is a working extract, then the name of the parent extract is returned.
Another useful querying command is:

Q WDBNAME


This command will return the name of the working extract that you are actually writing to, if
there is a working extract. If there is no working extract, then the result is the same as for Q
DBNAME.

16.5.1    Managing Extracts
If the extract hierarchy has different primary locations for different extracts, then both the
parent and child databases must be both propagating and allocated at each other's
locations. If this isn't done, then Claims and Flushes will fail.
Because of this, Claiming, Flushing, and Issuing should be managed by a Supervisor to
ensure Claims are handled in batches in a planned and controlled manner.

16.5.2    User Claims
Normal multiwrite databases require the user to claim an element before changing it. This is
known as a user claim. Depending on how the database is set up when it is created, user
claims can be implicit or explicit, and in either case, when a new element is created, it will be
claimed to the user who created it.
Note: In a Global project, we recommend that multiwrite databases should be created with
EXPLICIT claim mode, unless all the children are primary at the same location.
User claims can be explicitly released (unclaimed) by the user during a session, and
elements are always unclaimed when the user changes or exits from a module.
The commands for user claims are:

CLAIM . . .
UNCLAIM . . .
Extract Users can check daemon availability before claiming or flushing using the following
command line syntax:

Q COMMS (TO) <loc>


Q COMMS (TO) <loc> PATH
PING <loc>
Q ISOLAT AT <loc>
Q PROJ LOCK AT <loc>
These commands are now available in DESIGN and other modules. This is particularly
useful for Claiming/Flushing, since commands fail if the connection is down.

16.5.3    Extract Claims
When you are using extracts, another type of claim, known as an extract claim, is made as
well as user claims.

If an element is claimed to an extract, only users with write access to the extract will be
able to make a user claim and start work on the element.

Once a user has made a user claim, no other users will be able to work on the
elements claimed, as in a normal multiwrite database.

If a user unclaims an element, it will remain claimed to the extract until the extract claim
is released.
Extract claims allow persistent claims across sessions.


16.5.4    Command Syntax
The command syntax for handling extract claims in DESIGN is as follows:
>- EXTRACT -+- CLAIM --------.
            |- FLUSH --------|
            |- FLUSHW -------|
            |- RELEASE ------|
            |- ISSUE --------|      .-----<----.
            |                |      |          |
            |- DROP ---------+---*- element ---+- HIERARCHY -.
            |                                  |             |
            |                                  '-------------|
            |- FULLREFRESH --.                               |
            '- REFRESH ------+--- DB dbname -----------------+--->

>- FLUSH RESET ------ DB dbname ----------------------------->

CLAIM         Claims the element or the whole database to the extract.

FLUSH         Writes the changes back to the parent extract. The extract claim is
              maintained. The extract is refreshed with changes that have been made
              to its owning database.

FLUSHW        Writes the changes back to the parent extract. The extract claim is
              maintained. The extract is not refreshed.

FLUSH RESET   Resets the database after a failed EXTRACT FLUSH command. (See
              the note below under Flushing Changes.)

REFRESH       Refreshes an extract with changes that have been made to its parent
              extract.

FULLREFRESH   Refreshes an extract and all its ancestors. A full refresh takes place
              from the top of the database hierarchy downwards, ending with a
              refresh of the extract itself. Each extract is refreshed with changes that
              have been made to its parent extract.

ISSUE         Writes the changes back to the owning extract, and releases the extract
              claim.

RELEASE       Releases the extract claim: this command can only be used to release
              changes that have already been flushed.

DROP          Drops changes that have not been flushed or issued. The user claim
              must have been unclaimed before this command can be given.

The HIERARCHY keyword must be the last on the command line. It will attempt to claim to
the extract all members of the elements listed in the command which are not already
claimed to the extract.
The elements required can be specified by selection criteria, using a PML expression. For
example:
EXTRACT CLAIM ALL PIPE WHERE (:OWNER EQ USERA) HIERARCHY


16.5.5    Extract Flush Commands Failing


Extract flush commands across a Global network may fail for legitimate reasons (see Reasons Claims and Flushes can Fail for details); for instance, a naming clash (two users trying to issue a part of the same name: one will succeed, while the other will fail). It is also important to ensure that flush commands are processed in the right order (a later flush command may overtake one entered earlier). When either of these cases occurs, all subsequent commands will fail. To resolve this, a user needs to reset the database using the EXTRACT FLUSH RESET command. A claim will fail if the element is already claimed by another user or extract.
Note that claims are treated as succeeding if partially successful: if one claim has succeeded and ten have failed, the Claim command is treated as successful. The Extract Control form in DESIGN and DRAFT includes more detailed reporting.
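For example, if a flush of a hypothetical extract database PIPE/PIPE-A2 fails, subsequent flushes and refreshes of that database will also fail until it is reset (the database name is illustrative):

   EXTRACT FLUSH RESET DB PIPE/PIPE-A2

After the reset, the original flush can be retried.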

16.5.6 Relationship between Extract and User Claims
If the databases are set up with implicit claim, then when the user modifies an element, the element is claimed first to the extract and then to the user. If the element is already claimed to the extract, the claim is only made to the user.
If the databases are set up with explicit claim, the user must use the CLAIM command before modifying the element.
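For example, in an explicit-claim database a user working in an extract might claim the element to the extract and then make the user claim before editing. A hedged sketch (the element name is illustrative):

   EXTRACT CLAIM /100-B-1
   CLAIM /100-B-1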

16.5.7 How to Find Out What You Can Claim
This section explains what different users will see as a result of Q CLAIMLIST commands.
For this example, take the case of a database PIPE/PIPE, accessed by USERA, with two
extracts. Users USERX1 and USERX2 are working on the extracts.

USERA creates a Pipe and flushes the database back to the parent database, PIPE/PIPE. The following notes summarise the results of various Q CLAIMLIST commands by the three users, together with the extract control commands which they have to give to make the new data available.


Note that:

   Q CLAIMLIST EXTRACT

tells you what you can flush, and:

   Q CLAIMLIST OTHERS

tells you what you cannot claim.
You can query the extract claimlist for a named database. The database can be the current one or its parent:

   Q CLAIMLIST EXTRACT DB dbname

When you create an element, it is only seen as a user claim, not an extract claim, until a
SAVEWORK. It will then be reported as an extract claim (as well as a user claim, if it has not
been unclaimed).
Note that a change in the claim status of an existing element will be shown by the
appropriate Q CLAIMLIST command as soon as appropriate updates take place, but a user
will have to GETWORK as usual to see the changes to the DESIGN model data.
We recommend that:


• Databases that are going to own extracts which are primary at other locations should be created with explicit claim mode.
• Before you make an extract claim, you should do an EXTRACT REFRESH (or an EXTRACT FULLREFRESH, if necessary) and GETWORK.
• If you need to claim many elements to an extract, it improves performance if the elements are claimed in a single command, for example, by using a collection:

   EXTRACT CLAIM ALL FROM !COLL


The Global daemon will only be involved in the claiming process if the user is claiming an element from a secondary database/extract to their current primary extract. In this instance, the user will be warned that the element is now being claimed by the Global daemon. The user will know when the claim is completed by using GETWORK and checking the claim list.

16.5.8 Flushing Changes
When an extract user makes changes and saves them, they are stored in the extract. These changes can be made available to users in other extracts using the EXTRACT FLUSH command.
The FLUSH command operates on a single element, a database, or a collection of elements. The changes to these elements are made available in the parent extract.
If changes need to be made available in the master database, it is necessary to flush the changes up through each level of extracts. Users accessing extracts in other branches of the extract tree will need to use EXTRACT REFRESH to see the changes (or EXTRACT FULLREFRESH, if the user's extract is part of a multi-level extract hierarchy and is itself owned by another extract).
The following illustrates the sequence of commands that need to be given so that a user working on extract B2 will be able to see the changes made by a user working on extract A2.
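A hedged sketch of such a sequence, assuming A2 and B2 are extracts in sibling branches below a common parent, with illustrative database names:

   At A2:  EXTRACT FLUSH DB PIPE/PIPE-A2      (changes become visible in the parent)
   At B2:  EXTRACT REFRESH DB PIPE/PIPE-B2    (pick up the parent's latest state)
   At B2:  GETWORK

If A2 and B2 are more than one level apart, the flush must be repeated at each intermediate level, and B2 may need EXTRACT FULLREFRESH instead of EXTRACT REFRESH.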

The Global daemon will only be involved in the flush process if the user is flushing changes
to a secondary database / extract from their current primary extract.
Note: If a flush fails, the database needs to be reset, because the failed flush causes
subsequent flushes and refreshes to fail. The FLUSH RESET command is used to
undo the failed flush.


This situation can arise when more than one user is issuing to the same database extract. Flush and release commands might then be processed in the wrong order, causing a flush to fail and preventing subsequent refreshes of the extract.

16.5.9 Releasing Claims
Elements that have been claimed to an extract will remain claimed to that extract until they are released. Any changes must have been flushed to the parent extract before the extract claim is released.
The EXTRACT RELEASE command operates on a single element, a database, or a collection of elements. The elements claimed will be released from (that is, no longer claimed in) the current extract, at which point they will be claimed by the owning extract.
If elements need to be made available in the master database, it will be necessary to release the elements up through each level of extracts.
The Global daemon will only be involved in the release process if the user is releasing elements to a secondary database/extract from their current primary extract.
When you are flushing or releasing data from a satellite to another location, you should check that the flush has been successful before releasing the changes.
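A hedged sketch of a cautious sequence for a single element (the element name is illustrative); the success check could be made on the Extract Control form or with Q CLAIMLIST EXTRACT:

   EXTRACT FLUSH /100-B-1
   Q CLAIMLIST EXTRACT
   EXTRACT RELEASE /100-B-1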

16.5.10 Issuing Changes


The ISSUE command is simply a combination of FLUSH and RELEASE.

16.5.11 Dropping Changes


You can drop changes that have not been flushed. The EXTRACT DROP command operates on a single element, a database, or a collection of elements.

• You cannot drop an element if it owns new significant elements. You have to list all the elements in the same EXTRACT DROP command, or drop the lower-level elements first.
• You must UNCLAIM any user claim on an element before you can drop it.
• The DROP command should be used with care. Once the changes have been dropped they can only be retrieved using session data or from backup.

16.5.12 Refreshing an Extract


If changes have been made to the data in an extract, the changes will not be visible in lower
extracts unless these extracts are refreshed. The REFRESH command operates on a
database, and it refreshes the database to look at the latest state of the parent extract.
If the changes are in the master and need to be made available in an extract several levels
down, the FULLREFRESH command should be issued at the extract. This refreshes each
extract in turn, starting with the extract directly beneath the master database, continuing
down to the lower extracts, and ending with a refresh of the extract itself.
Note the difference between GETWORK, REFRESH and FULLREFRESH:

• GETWORK will get changes made to databases in the current MDB.
• REFRESH will get changes made to the parent extract only of an extract in the MDB.
• FULLREFRESH will get changes made to all the extract ancestors of an extract in the MDB.


The REFRESH command will only refresh from databases local to the satellite. Therefore, if a secondary database has not yet been automatically updated with changes made to the database at the primary location, those changes will not yet be visible at the local satellite. Extracts below the database will only see the latest version of the secondary database when they are refreshed. To see the changes made to the primary database, you must wait for the next scheduled automatic update before refreshing.

16.6 Partial Operations
When named elements are specified in an ISSUE, DROP or FLUSH command, it is known as a partial issue, drop or flush. There are some restrictions on what you can do, as follows:

• Where a non-primary element has changed owner, the old primary owner and the new primary owner must both be issued back at the same time. Otherwise there is potential for inconsistencies to occur.
• If an element has been unnamed, and the name reused, then both elements must be flushed back together.
• If an element and its owner have been created, then:
  • If the element is included in a partial flush, then its owner must also be included.
  • If the owner is included in a partial drop, then the element itself must be included.
• If an element and its owner have been deleted, then:
  • If the element is included in a partial drop, then its owner must also be included.
  • If the owner is included in a partial flush, then the element itself must be included.

The HIERARCHY option will scan elements in both the extract and the owned extract, so deleted/moved elements will be included as part of the issue/drop/flush.
You can use selection criteria to specify partial issues and flushes.
Deleted elements will be issued/dropped/flushed when the owning element is issued/dropped/flushed. Alternatively, the reference number of the deleted element may be given in the ISSUE/DROP/FLUSH command, as in the sketch below.
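A hedged example of a partial flush naming one element and giving the reference number of a deleted element (both identifiers are illustrative):

   EXTRACT FLUSH /100-B-1 =12345/678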

16.7 Extract Sessions
When an extract is created, it is created at a particular session number in the parent extract.
This is called the linked session. As the owner extract is modified, and new sessions
added, the linked session on the child extract will not change until a refresh or flush is made.
Note that ISSUE, DROP and FLUSH cause an automatic refresh.
The following example illustrates how extract session numbers and linked session numbers
change as an extract is created and modified:
Linked session no. in owner   Comment
10                            Extract created
10                            Modification made on extract
10                            Modification made on extract
15                            Refreshed from owner (sessions 11 to 15 created by other users)
15                            Further modification
18                            Issued (sessions 16 and 17 created by other users)
18                            Further modification
18                            Further modification
25                            Issued (sessions 19 to 24 created by other users)

While a user is making changes only to the extract, the linked session number in the owner
stays the same. On refreshing, the local extract is linked to the most recent version of the
parent extract.
The new session number linked to in the owner depends on the number of flushes done by
other users. In the example the linked session number goes from 10 to 15, indicating that
five flushes have been made by other users in the meantime (assuming that no work is
being done directly on the owner).

16.7.1 Merging Changes
When a MERGE CHANGES command is given on a DB with extracts, all the lower extracts have to be changed to take account of this. Thus doing a MERGE CHANGES on a DB with extracts should not be undertaken lightly.
The following restrictions apply:

• Any sessions linked to owned extracts must be preserved.
• There may be no users on any lower extracts.

We recommend that you MERGE CHANGES at the lowest level of extracts first, and then work up the tree, as in the sketch below.
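A hedged sketch of that order for a two-level extract family (the database names are illustrative):

   MERGE CHANGES PIPE/PIPE-EX2     (leaf extract first)
   MERGE CHANGES PIPE/PIPE-EX1     (its owning extract next)
   MERGE CHANGES PIPE/PIPE         (the owning database last)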

In a Global project, MERGE CHANGES can only be carried out at the location at which the
database and all its descendant extracts are primary. The REMOTE MERGE command
currently only handles leaf extracts and databases which do not own extracts.
See Merging Extract Databases for more information on merging extract databases.
Note: BACKTRACK is not allowed for extract databases. You must use REVERT instead.

16.8 Deleting a Database that owns Extracts

Deleting Databases describes how to delete a database in a Global project, but the procedure is different if the database owns extracts. The conditions required to de-allocate and delete a database are quite complicated:

• To create, delete or modify a working extract at a satellite:
  • the parent database of the working extract must be primary at the satellite
• To de-allocate a database from a satellite:
  • the database must not be primary at the satellite
  • the database must not own working extracts at the satellite


• To delete a database:
  • the database must not be allocated to any locations other than the Hub
  • the database must not own any extracts, either working or standard ones

Thus deleting a database that owns extracts (and may own working extracts) may involve doing a number of CHANGE PRIMARY commands to get rid of any working extracts at satellites where the database is secondary.
The procedure for deleting a database that owns extracts is summarised below.

1. If the DB owns extracts, remove them first. If they are working extracts, DELETE the working extracts at the location where the DB is primary; otherwise DELETE the extract tree from the DB you wish to delete.
2. If the DB is primary at a satellite, make sure no one is accessing the db at that location, then change the db to be primary at the Hub:

   CHANGE dbname PRIMARY AT hub

   Wait for this command to complete. Check as follows: GETWORK, then navigate to the DBLOC of the DB (DBLOC 1 of dbname) and Q ATT. The DB is primary at the hub when the LOCRF is set to the Hub and the PRVRF is unset.
3. If the DB is allocated to any location, de-allocate the db from all the locations:

   DEALLOCATE dbname AT loc

   Check that this command has completed. The db is de-allocated when it is removed from the location DBALL list.
4. Finally, delete the database:

   DELETE DB dbname

Note: A DB does not need to be primary at the HUB, just as long as it is not primary at the location where it is being de-allocated.


16.9 Variant Extracts
Variants are a special type of extract, with less rigorous control of claiming elements and writing data back to the owning extract. They are designed to allow users to try out different designs, which then may or may not be written back to the master.
Variants are different from normal extracts (including working extracts) in the following ways:

• Any element can be modified without being claimed, and so different users can modify the same element in different variants.
• When data is written back to the owning database, it will overwrite any conflicting data in the owner.

A variant can have normal extracts created from it. Note that in this case, the variant forms a new root for claiming elements: claims in extracts below the variant will not be visible from other parts of the extract family, and claims in other parts of the family will not be visible in extracts owned by the variant.
It is possible to have working variants.

16.10 Reasons Claims and Flushes can Fail

Claims and flushes may fail for valid reasons. If a flush has failed (e.g. due to a name clash), all subsequent flushes will stall ("Previous flush failed") until the failed flush has been reset. Claims are treated as succeeding even if they are only partially successful: if 1 claim has succeeded and 10 failed, the Claim command is treated as successful. The Extract Control form in DESIGN and DRAFT includes more detailed reporting.
Here is a list of some common symptoms and the underlying causes:

Symptom: Unable to savework. Perhaps you have been Expunged
Cause:   The user has been expunged (via the Daemon). Modifications to the database (other than updates) will fail.

Symptom: Previous flush could not be found
Cause:   Flush may have overtaken another flush. In this case, the flush will stall for a retry.

Symptom: Previous flush failed
Cause:   Subsequent flushes will fail until the failed flush has been reset.

Symptom: Unable to claim <item> because element is already claimed by <extract or user> from Extract <no>
Cause:   Valid failure: another extract or user has it claimed.

Symptom: Unable to claim <item> from parent extract <no> because element is modified in a later session
Cause:   EXTRACT REFRESH is required, to bring the child extract's view of the parent up to date.

Symptom: Nothing to claim locally - all claims failed in owning extract
Cause:   Cannot claim to the child extract, because nothing could be claimed from its parent.

Symptom: You cannot claim <item> without doing an extract claim from the parent extract
Cause:   The item has not been claimed into the extract before the user claimed it. This is only applicable to explicit-claim dBs.

Symptom: Unable to claim <item> from parent extract <no> as element has been deleted in a later session
Cause:   The item has been deleted in the parent, and the child extract has not been brought up to date yet.

Symptom: Element reference <item> is invalid or has been deleted
Cause:   The reference number of <item> cannot be found in the database; it is an invalid reference number.

Symptom: Element <item> has been modified, so cannot be released. Savework must be done first
Cause:   The item must be saved to the database before an extract operation can be undertaken on it.

Symptom: Element <item> has been deleted by another User
Cause:   The item you are trying to claim has been deleted by another user.

Symptom: Name clash on <item>. Please rename
Cause:   The name of the item that has just been created already exists.

Symptom: Cannot flush/abandon <item> as old and new owners must both be in the list, or neither in the list
Cause:   The parent of the owner has been changed. Both the old and the new owners need to be flushed/issued/abandoned at the same time, and the list currently only contains one or the other.

Symptom: Cannot flush/abandon <item> without its owner
Cause:   The item is either new or has been moved to another item. Both need to be flushed/issued/abandoned at the same time.

Symptom: Cannot flush/abandon <item> without its members
Cause:   The member list of the item has changed in some way. The item needs to be flushed/issued/abandoned with its members.

Symptom: Cannot abandon/release <item>. Element is claimed out by a user (maybe yourself) or to an extract
Cause:   The item is claimed by a user (possibly the user doing the EXTRACT ABANDON/RELEASE) or to a child extract.

Symptom: Element <item> kerror <no>
Cause:   Internal error. Please contact your AVEVA support desk for more information.

Symptom: Dabacon error <NUMBER> for DB <item>
Cause:   Internal error. Please contact your AVEVA support desk for more information.


17 Off-line Locations
Normally there is a communications link between pairs of locations, and these locations are
referred to as on-line. (Their ICONN attribute is 1, and RHOST points to a valid computer
name.) However, Global can operate if there is no direct communications link between the
Hub and certain locations. These locations are referred to as off-line. (Their ICONN is 0,
and RHOST may be unset.)
A tape, CD or other medium is used to copy the databases from one location to the other.
It should be noted that:

• The TRANSFER command copies databases to or from the project directory to a special transfer directory, ready for the physical transfer to another location. The physical transfer must be made as well as using the TRANSFER command from ADMIN.
• The existence of off-line locations limits the administration capabilities of a project.
• Off-line locations can only be children of the Hub. An on-line satellite cannot have off-line children.
• Database transfer to and from the media used for communication with an off-line location can only be made at the Hub and the off-line location.
• Commands such as ALLOCATE and CHANGE PRIMARY are not self-contained. Working practices are required to ensure the correct transfer of data.

The transfer folder is a holding area for data going to and from the satellite:

• TRANSFER TO the off-line satellite, given at the Hub, copies the satellite's secondary dbs to the transfer folder for the satellite (at the Hub).
• The contents of this folder are transferred to the satellite's transfer folder.
• TRANSFER FROM HUB, given at the off-line satellite, copies the satellite's secondary dbs from the transfer folder at the satellite to the satellite project.
• TRANSFER TO HUB, given at the off-line satellite, copies the satellite's primary dbs to the transfer folder for the Hub.
• The contents of this folder are transferred to the Hub's transfer folder for the satellite.
• TRANSFER FROM the off-line satellite, given at the Hub, copies the satellite's primary dbs from the transfer folder at the Hub.

A round trip is sketched below.
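A hedged sketch of one full exchange with a hypothetical off-line satellite OFF1 (the location name is illustrative; the command forms follow the descriptions above):

   At the Hub:   TRANSFER TO OFF1
                 (physically move the Hub's transfer folder contents to OFF1, e.g. on CD)
   At OFF1:      TRANSFER FROM HUB
   At OFF1:      TRANSFER TO HUB
                 (physically move OFF1's transfer folder contents back to the Hub)
   At the Hub:   TRANSFER FROM OFF1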

It is potentially unsafe to assume that samsys in a transfer folder is the satellite system database. If the TRANSFER FROM step is omitted, the local system database could be corrupted, because the meaning of the file samsys is ambiguous in TRANSFER functionality.
For this reason, the functionality of TRANSFER has been changed since previous versions of Global to enforce the use of a location suffix in the transfer folder. All system databases in the transfer folder always have a location qualifier, even the system database for the off-line satellite.


It is not recommended that users omit the TRANSFER FROM step:

• Potentially, inter-db macro changes could be lost. TRANSFER FROM merges the macros from the transfer folder into the satellite's MISC database, which might already contain local inter-db macros.
• If the satellite system database is secondary, then the incoming system db transferred from the Hub will be named with a location suffix. This would need renaming to become the local system db.

17.1 Working Practices with Off-line Locations


The following list highlights practices that should be observed:

• All users must exit before the TRANSFER TO command is initiated. Otherwise, the command may fail.
• To ensure that the database file is copied to the satellite when a new database is allocated to an off-line location, the TRANSFER TO command must be used at the Hub after the ALLOCATE command. The TRANSFER FROM command is then used at the satellite. The TRANSFER should precede a CHANGE PRIMARY command.
• Before a CHANGE PRIMARY command to or from an off-line location is initiated, all users of that database should exit. TRANSFER TO (at the old primary location) and TRANSFER FROM (at the new primary location) must be used to ensure the database is up-to-date. The CHANGE PRIMARY command may then be issued. This should be followed by a TRANSFER TO command (at the Hub) and a TRANSFER FROM command (at the off-line location).
• Extract hierarchies must never be partially off-line. An entire hierarchy must be on-line or off-line.
• Working extracts cannot be created for off-line locations, unless they are self-administered. (The file for the working extract cannot be created if the Hub administers the off-line location.)
• Transfer of other data, such as ISODRAFT files, external PLOT files and DESIGN manager files, must be done manually to and from an off-line location.
• To change a satellite from on-line to off-line, shut down its daemon and change ICONN to 0. You should then manually copy the Global database to the off-line location. The TRANSFER command will then work.
• Picture files and Final Designer DWG files are transferred to off-line locations and should be copied on CD or through ftp when they reside in the location's transfer area.

17.2 Change Primary to Offline location

CHANGE PRIMARY for an off-line location is a manual two-stage process. Off-line TRANSFER commands will NEVER copy databases to their primary location. Therefore creation of a PRIMARY db for an off-line location must be done as follows:
1. CREATE the database at the HUB.
2. ALLOCATE the db to the off-line location.
3. TRANSFER TO the off-line location; at the satellite, TRANSFER FROM the Hub.
4. CHANGE PRIMARY to the off-line location.
5. TRANSFER TO the off-line location; at the satellite, TRANSFER FROM the Hub.

If step (3) is omitted, then the database will NEVER be copied to the satellite.


17.3 Change Primary from Offline location

This is also a manual two-stage process. For a database that is primary at the satellite:
1. Make sure that all users have left the database at the satellite.
2. At the off-line satellite, TRANSFER TO the Hub; at the Hub, TRANSFER FROM the off-line satellite.
3. CHANGE PRIMARY to the HUB.
4. TRANSFER TO the off-line location; at the satellite, TRANSFER FROM the Hub.
5. It is now safe to let users re-enter the database.

17.4 Deallocation from Offline location

Deletion of the database file for a de-allocated database at the satellite can only be done manually. Off-line TRANSFER only changes the contents of the Global database, not the file system at off-line satellites. It cannot remove files that are no longer recorded in the Global database.
1. Make sure that all users have left the database at the off-line satellite.
2. At the Hub, DEALLOC AT the off-line satellite.
3. Note the file-name for this database.
4. TRANSFER TO the off-line location; at the satellite, TRANSFER FROM the Hub.
5. The database is now not available at the satellite.
6. Using Windows, delete the database file at the satellite.


18 Firewall Configuration
The primary objective of a firewall implementation is to provide security to an organization's network. In simple terms, a firewall solution enables only certain applications to communicate from the outside world (for example the Internet) to the organization's network, and vice versa. To enable these applications to function, specific communication ports need to be open. The fewer ports open within a firewall, the less chance there is of security breaches.
Where Global is implemented within an environment that has no firewall set up, Global will function without any specific network configuration (other than the requirements outlined under Global > IT Configuration on the AVEVA Support website and in the Global User Guide). However, when a Global project is to be deployed between two or more locations that have firewall implementations, certain ports need to be open in order for Global to function.
RPC communications are an integral part of Global. Global uses TCP port 135 and a dynamic range of ports above 1024 to communicate from one location to another (i.e. through the Global daemons running at each location).
The dynamic range of ports required to be open (i.e. 1024 and above) poses a security risk. To reduce this, we can force the operating system's RPC communications to use only a specified range of ports. This drastically reduces the risk of intrusion from third parties.
Firewall rules can also be specified to limit access to these ports to a specific program. Global has a unique identifier (UUID) which can be used when defining firewall rules. For further details, contact AVEVA Support.

18.1 Limiting Ports Used

It is possible to limit the destination port that RPC uses when communicating with another machine. The source port used will still be in the range above 1024, but for security reasons firewalls are primarily only concerned with destination ports.
The Ports value specifies the range of ports that RPC will use; in this case the 21 ports from 5000 to 5020 inclusive. You will need to configure this on all systems running the Global daemon across a firewall. On Windows, a reboot of the system is required after registry modifications.
Once the RPC ports are defined, the firewall can be configured. The firewalls for both organisations are opened to allow only communications to and from each other's Global servers on TCP ports 135 and 5000-5020.
These ports must be opened bi-directionally to allow Global to operate.


The following solution can be applied to any modern firewall with packet-filtering functionality.
The procedure for restricting the use of dynamic ports for RPC is through additions to the Microsoft Windows registry.
Changing the registry should not be undertaken lightly. Please note that incorrect modification of the registry could lead to serious problems with your system. It is therefore recommended that you back up your registry before making changes.
To change the registry, you must use REGEDT32 and not REGEDIT, as the latter does not allow you to modify the string data type. If you do not use REGEDT32, the following message will appear on daemon startup:

   Can't establish protocol sequences: Not enough resources
   are available to complete this operation
You must add a subkey and three values to the registry. Under the following key, add a subkey called Internet:

   HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc

Under this subkey create three values with the corresponding string data:

   "Ports" (type MULTI_SZ):                 5000-5020
   "PortsInternetAvailable" (type REG_SZ):  Y
   "UseInternetPorts" (type REG_SZ):        Y
Note: The RPC configuration procedure described in this document can also be found in the Microsoft TechNet Knowledge Base, article number Q154596. Note that Microsoft recommend a minimum of 20 ports to be open for other services; for more information please refer to the article, which is available on the Internet at http://www.microsoft.com/technet. The number of open ports suggested in the example above is just that: a suggestion. However, it is generally true that the more Global projects you are using, the more ports you will require to be open.


19 Suggested Housekeeping Guidelines for Projects

19.1 Introduction
This section gives general advice on the housekeeping activity for Administrators running projects. We use the term housekeeping as a metaphor, comparing the work you would perform to create and maintain a house and its contents in a state of good repair, well organised, tidy and clean, with the similar goals for the data of an engineering project. As with a house, the larger the scale, the more substantial this task can be. However, if you establish the basis and practices early, you can keep the task to a routine activity that increases in efficiency with practice over the duration of the project.
This section should be seen as supplementary material to the standard Administration documentation, not a replacement.
As with all advice, it is not mandatory and should be taken as points for consideration in creating a stable Administration environment. Although efforts have been made to be as comprehensive as possible, it is not exhaustive, and it will be subject to modification and addition as use of the base product across wide industry sectors increases and experience in good practice improves to match.
It is written for an audience who are assumed to have undertaken training in Administration and to have a thorough background in maintaining projects. Moreover, not everything described here necessarily applies to all project set-ups under all circumstances. However, IT managers may find it useful as background information in deciding how to organise base product Administration.

19.2 Dice
This is the Data Integrity Checking tool supplied as part of the ADMIN module.
Its purpose is to provide a report on the base product Dabacon databases that informs the administrator of any issues with the database that require extra attention. In addition, you can also run it in a patch mode that will actually facilitate a repair on the database.
It is recommended that a full Dice report is run routinely, daily, on all databases in the project. This includes the full extract family and secondary databases if Global is in use.
Foreign projects, such as a centralised Catalogue, should also be Dice checked, although the checks need not be so frequent if the projects are not being updated on a daily basis. Often this is done as a scheduled batch routine during non-working periods.


However, if the project is in a period of intense activity and the window for running bulk processes for reports, drawings and material take-off is small, it can be run with users and batch processes continuing to run on the model.
Having produced the report, it is imperative that it is closely scanned for issues of concern and that action is then taken to address them. Ideally, the Administrator should take action to remove all errors and warnings; however, some warnings can be deemed acceptable and of no risk to the healthy running of the project, e.g.:

   Element =18585/38329 Warning - Attribute TREF contains invalid ref =18585/74770

This error will also be highlighted to the normal users as they check their designs, so it will be picked up there. However, if the identical reference numbers in these messages recur, the Administrator should follow up with the last user to access the element (information is in the session data) to ensure it is cleared.
The Fatal Errors listed in a Dice report are usually ones that need immediate attention, and action to repair the database will be needed. Nevertheless, on occasion the error can either be tolerated for a period, as it is not truly critical, or may have been wrongly categorised as Fatal and constitutes only a warning, e.g.:

   Error in level 2 NAME table, session no. 10469, page no. 42385 - incorrect value of first key on lower level page no. 42386 (extract 1)

While AVEVA provide analysis of each error message outlining how it should be addressed, the nature of an individual project set-up can make the method by which they should be addressed variable. Therefore it is recommended that, as the Administrator becomes familiar with the action needed to address each warning or error, it is documented and recorded in project work instructions.
Certain database errors can be fixed by running Dice again against the problem database, this time in patch mode to effect the repair. Two typical examples are:

   Child extract 12 not listed on header page
   Element SBFITTING /SBFIT99 needs clearing from mainlist in header extract

This should normally be done when there is no write access to the database. Even though the Dice report will report the problem cleared, it may be a good idea to rerun a full Dice check on the repaired db with patch mode disabled to be 100% sure the problem is cured.
Other database errors can only be fixed by a reconfiguration of the database. For example:

   Element =35021/13323 has an inconsistent entry in the name table. Name exists on the element but is not in the name table itself. Thus the element can not be navigated to by name. Please reconfigure this DB to resolve the problem

This work should be done when there is no read or write access to the database, but to avoid a complete project shutdown it is possible to remove the problem db from all MDBs, do the repair and then replace it. Because of the additional complexity this may involve, looking for a window in the project workload is normally the preferred choice.
Two or three days before a phase of major deliverable production, it is recommended to be especially diligent in Dice checking to ensure that all databases are in good shape, reducing the risk of an interruption in the bulk process.
If a user reports an unusual problem with part of the project data, such as a Dabacon crash, the first step should always be to perform a Dice check on the database(s) involved. If the report shows issues that cannot be repaired by patching or reconfiguration, the Dice report should be sent immediately to AVEVA support.
If, after repair, the database is OK for a few days and then Dice reports errors again, this may indicate a deeper issue. The Dice report, together with any background information on circumstances common to the error occurring (e.g. same users, same UI menu), should be reported to AVEVA support, who may then request that the databases be sent in for fuller investigation.

19.3 Global

This section provides information to advise Administrators on good practices. We recommend you read it fully.

19.3.1 Update Frequency
The idea of Global is that it provides the ability for a project split across several locations to behave just as if it were located in one location. Therefore it is assumed that most deployments of Global will have this objective in mind and will ensure that each location is updated with changes from the other locations on a frequent basis, especially when they are in similar time zones. This is particularly important when the locations are operating in the same physical space, e.g. in a compressor house where one location covers the steam lines and the other the utility lines; the aim is, of course, to avoid routing pipe in the same space as the other location. This also ties in with the idea of keeping an extract database local to only one location: if the project is process-split, this incurs a higher risk of clash issues when the data eventually migrates to a higher-level database shared between the locations. If the project is split geographically, e.g. each location covering complete units, then this particular risk is reduced.
As a baseline, updates between locations occurring around 4 times per working day are reasonable, with a possible escalation if significant change is occurring at critical times and data is needed by one location faster than normal, e.g. at a fabrication yard when the project data is reaching design completion. Updates every 15 minutes have been seen in this particular scenario.
Where the project is split across time zones, timing updates to ensure data is exchanged to suit the start and close of work, with attention to any time overlap, is recommended.
However, when selecting update frequencies, the quantity of data moving across the network should be considered too. The other idea of Global is to allow smaller chunks of data to be transferred rather than whole databases. If only one transfer is done per day, the quantity of data will be large, and if there has been an intense period of modelling in one location then the update may take longer, possibly not completing in time for drawing or review file production as expected. Doing several updates will therefore reduce the risk of update overlap or incompletion before deliverable production.
When different time zones are involved, it may be useful to use an intermediate satellite. This will make it easier to transfer large amounts of data outside working hours.

19.3.2 Timing of Updates
The batches of updates that are run in one update session to keep all locations synchronised do not have to be run sequentially. However, updates should not be started at exactly the same time, to avoid file contention on the Global database.
If it is felt desirable to run the updates sequentially, then a script will be required that uses the EXECAfter and EXECBefore script attributes on the Update event (LCOMD) to run pre- and post-execution scripts on a scheduled update. This could also:

• Record update start and finish times
• Report on database sessions
• Lock out other updates by creating/deleting a lock file

This script is not a standard delivery, as it needs tailoring for each project set-up. If required, the customer can request services from AVEVA to deliver this.

19.3.3 Checking Locations are Aligned


A simple check to ensure that locations are aligned is to run a macro which captures the latest session number of each secondary database as compared to the latest session of the primary database.
Note: Such a macro could use the Q REMOTE command to query the remote location (Q REMOTE <loc> <dbname> LASTSESSION). The session number can be extracted from the query result.

   Q REMOTE ABC MYTEAM/DESI LASTSESSION
   Q REMOTE ABC MYTEAM/DESI FILEDETAILS

Assuming it is run when there are no users on the project, the session numbers at all the locations should be identical. However, if the primary db is in use, it is likely that the primary session number will be slightly higher than that of the secondary databases; since the intention is that Global runs while users continue to work, this will be the most common scenario.
If for some reason a secondary database is higher than the primary, detailed investigation will be required to realign that database. An example of the type of output from such a macro follows:
DBnumber  DBname             Session_SAT1  Session_SAT2  Session_SAT3  Session_HUB
2001      SAT1PIPING/UNITA   P 1134        S 1128        S 1128        S 1128
2000      SAT1PIPING/UNITB   P 684         S 679         S 679         S 679
2002      SAT1PIPING/UNITC   P 106         S 106         S 107         S 106
2003      SAT1PIPING/UNITD   P 758         S 742         S 692         S 742
2004      SAT1PIPING/UNITE   P 533         S 517         S 467         S 517
2457      SAT3PIPING/UNITA   S 1164        S 1164        P 1169        S 1164
2451      SAT3PIPING/UNITB   S 814         S 793         P 849         S 814
2432      SAT3PIPING/UNITC   S 131         S 131         P 133         S 131
2431      SAT3PIPING/UNITD   S 451         S 451         P 453         S 451
2100      HUBPIPING/UNITA    S 1148        S 1148        S 1148        P 1175
2102      HUBPIPING/UNITB    S 212         S 212         S 212         P 213
2104      HUBPIPING/UNITC    S 231         S 231         S 231         P 234
2355      SAT2PIPING/UNITA   S 560         P 562         S 560         S 560
2353      SAT2PIPING/UNITB   S 288         P 328         S 288         S 324
2351      SAT2PIPING/UNITC   S 513         P 578         S 484         S 541
2351      SAT2PIPING/UNITD   S 79          P 101         S 79          S 79
2343      SAT2PIPING/UNITE   S 174         P 176         S 174         S 174

Legend:
P - Primary location
S - Secondary location
Matching session numbers - locations aligned.
Secondary locations not aligned - update manually to synchronise.
Secondary location ahead of Primary - investigate and repair.

This macro is not a standard delivery as it needs tailoring for each project set-up. If required
you can request services from AVEVA to deliver this.

19.3.4 Change Primary - Repair Process


If there is a need to move the location of the primary database, the Change Primary functionality should be used. However, it is important that the Change Primary operation completes before a Deallocate is done, because otherwise the system will lose identification of where the primary actually is and Deallocate will fail. The term Transient is used to describe this scenario; ADMIN reports:

   (1,529) Cannot deallocate DB /*CTBATEST/ISOD at LOC /CAMBRIDGE - relocation already in progress
The following recovery procedure is necessary:

1. RECOVER <db> AT <primary-location> FROM <a selected secondary location>
2. PREVOWN <db> - to recover its previous primary location
3. Then CHANGE PRIMARY can be repeated
4. Wait for its completion before issuing DEALLOCATE

19.3.5 Risks of Aligning Databases Across Locations by File Copying


On occasion, if there have been unexpected interrupts in the Daemon service, the secondary databases can become completely misaligned with the primary databases. To repair this, there is a temptation to simply delete the secondary databases and file-copy the primary database out to the secondary location. The risk in this is that, in spite of a very diligent approach, human error can occur, and there is a risk of reverse propagation, i.e. a secondary database thinking it is primary. In this scenario the Recover process should be followed; see Change Primary - Repair Process.
Therefore, if the primary and secondary databases do become broadly misaligned on the project, it is advised that realignment is done by executing the updates manually, ensuring each update process completes successfully and that the realignment has been successful before kicking off the next update.

19.3.6 Flushing/Issuing
It is common practice for all users on a project that uses extract databases, whether Global or not, to follow common flushing practices. Generally, each user will be expected to Claim, Flush and/or Issue on an object-by-object basis (or in small groups of objects). However, some customers may decide to manage the Flush and Issue on a collective basis at managed intervals, say once a day. If this is done, the Flush or Issue should be done at as high a level in the database as possible, e.g. SITE, as in the sketch below. This reduces both the number of sessions created and the database file size.
Note that if the Model Object Manager software is in use, the program does background flushing and issuing to keep the primary data as synchronised with the Oracle data as possible. If Model Object Manager is in use, regular Global updates will also reduce the risk of the user viewing Oracle data that is not aligned with the secondary view of the PDMS data.
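A hedged example of such a high-level collective flush (the SITE element name is illustrative):

   EXTRACT FLUSH /SITE-AREA1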

19.3.7 Transaction Database
This database holds all the information about the success or failure of the updates and remote claiming, and it is the first place to go to check that Global is operating successfully. Ideally it should be regularly monitored by the Administrator responsible for each location.
Note that if an automated update fails for any reason, there is always the option to perform the update manually rather than waiting for the automated update to try to align things again. By doing the update manually, the duration of locations being out of synch is reduced, and the automated update process does not get loaded with two or more lots of update data to deal with.
On a large and busy project the transaction database can become very large, so it should be compacted on a regular basis. The recommended method of doing this is to use the Merge-and-Purge function from the Daemon, or by selecting Utilities>Transactions in the Admin module and then selecting the Purge/Merge transactions DB tab.
For a detailed description see Merging and Purging from ADMIN in the Global User Guide.
Daemon merge-and-purge can be done when DESIGN users are in the project (but not ADMIN users) provided that they do not have the Transaction db in their MDB. If a module (e.g. ADMIN) is accessing the transaction db when the merge-and-purge is attempted, then nothing will be purged.


If the merge-and-purge is interrupted, e.g. by a crash of the Daemon, then one of the two following methods could be used at each satellite, after all users are out of the project and the Daemon has been stopped. Either:

• carry out a normal merge (MERGE CHANGES TRANSACTION/SAT). It will be necessary first to run a macro to collect and delete old commands, otherwise the merge will achieve nothing. AVEVA can assist with writing such a macro. An example of a similar macro you can use as a basis is shown in Example Macro for Collecting and Deleting Old Commands.

or:

• rename the database (e.g. ABC0001_0001 to ABC0001_0001-ORI) and restart the Global daemon (a new clean database will be created automatically). The problem with this method is that incomplete transactions are lost and updates are therefore missed, which may contribute to misaligned primary and satellite locations.

The ADMIN UI provides a view of the updates from the Transaction db, and it is important that the administrator checks the actual messages from these updates, because the update may not have successfully updated ALL databases even though the overall command has been successful.
If the MESSAGE reads 'Update All succeeded (NNNN DBs) with MMMM failures', then the administrator MUST investigate the failures. The FAILURES pane of the Transaction messages form indicates this. If this check is considered worth separating into a distinct procedure, a macro may be written to collect TRFAIL elements below the TRINCO for the TIMEDUPDATES user.

19.3.8 Daemon Log File


If the Global Daemon crashes unexpectedly, there may be two types of cause: 1. an unexpected problem with the databases that is not being picked up by Dice, or 2. an IT infrastructure issue. To determine which, a log file can be activated via a system environment variable (evar) that will capture detail of all Daemon activity, in particular exactly what was happening leading up to the crash. The evar is DEBUG_ADMIND - see chapter 4. Output from the daemon should be piped to a file.
This log file should be sent, together with the Transaction db and the Global and System dbs for the Hub and satellites, to the AVEVA support desk, who can work with R&D to determine whether the cause is 1 or 2. If 1, then a repair can be advised on the problem database and/or a fix in code to prevent it recurring can be deduced. If 2, then further investigation into what was happening in the IT infrastructure needs to be done by the customer. Some evidence may be found in the Windows System Event Log, which should also be shared with AVEVA to help isolate the issue.
The log file can grow quite large if left running for a period, so it should be cleaned down daily by renaming it, during a period of Daemon inactivity, with a time and date stamp; when the Daemon restarts, a new file will be created. Should holding the renamed file on disk be a problem on space, it can be archived off and deleted.

19.3.9 admnew Files

When the Daemon copies a database, usually after a session merge (as opposed to a session-based update), it copies to a temporary file with the suffix .admnew. When the copy is complete, this file is renamed to replace the old database file.


These .admnew files are normally tidied up automatically. However, if the daemon has
crashed, it may leave unwanted .admnew files behind, which can prevent a subsequent
Daemon attempt to copy the database from running.
It should be ensured that the satellites and hub remove such files after a crash.
See admnew Files for a full description of .admnew files.

19.4 Session Management - Merging


It is recognised good practice to reduce database file size on occasion, and this can be done using the Merge functionality to compact the session data. The frequency of the merge depends on the activity in the db and its rate of growth.
On a Global project, it should be remembered that after a merge on a primary database, the updates will transfer the whole db anew to the secondary locations. Because of this, the update will take longer than a simple session update, and it is therefore recommended that it is done at a weekend.
Merge has to be done at the primary location, unless a leaf extract organisation has been used, in which case the Remote Merge functionality can be used from the Hub. Remote Merge can be done with the Daemon running, but for normal merge operations it is recommended that the Daemon is stopped, to prevent any updates occurring. The steps to be taken prior to a merge are covered more thoroughly in the Database File Locks section of this document.
Note: A leaf extract is a database which does not own other database extracts.

19.5 Database File Locks

19.5.1 Background

On occasion there may be circumstances where, after a piece of Administration work such as session merging, the database file is found to be locked by the Windows operating system. To resolve file locks, the Administrator has two options:

• Reboot the computer where the databases reside (this assumes the Administrator has the privilege to do this; in many cases this is not a practical solution).
• Resolve the file locks using a specific tool, as described in admnew Files.

Also note that if the project is not used as a foreign project, you have a third choice in the Overwrite DB Users flag, which is the LCPOVW attribute of the LOC element. This attribute controls whether a locked file at a location may be overwritten. If this attribute is set TRUE and there are no database READERS in the project, then Global will overwrite the locked file with the .admnew file.
Important: Do not do this if other projects include this database as a foreign project, since these are valid READERS that are not recorded in the session data for the Global project.


19.5.2 Avoiding Random File Locks

Do not exit Modules illegally. Ensure ALL users are aware of and follow the instructions below:

• Exit cleanly through a Module GUI or using FINish from the Command Line.
• Do not kill (close via the top-right X button) the background DOS window.
• Do not kill the main Module windows and then directly re-enter that Module. If for some reason this has to be done, then use the Windows Task Manager to check that the Module process has closed, and if it has not, End the process.

19.5.3 Locating and Closing Locked and/or Open Database Files


Before performing any ADMIN work on databases such as session merging or
reconfiguration, it is important that there are no users accessing the database and that the
database file is closed.

Removing Users

After a session has been illegally exited, either deliberately or due to an unexpected system fault, the users who were accessing the databases may be left as phantom users (also known as dead users) in the system. To clear these users from the databases and release their claims, the Administrator can use the Expunge syntax for all users or specific dbs (see the ADMIN Command Reference Manual for details of all Expunge options, including how to set the Overwrite DB Users option to allow non-foreign projects to copy over locked files provided there are no users recorded in the COMMS db; overwriting is disabled by default because it may cause sessions of dead users to crash). You can also use the ADMIN module for this.
To force live rogue users out of the system who have not followed the request to leave the system before Admin work is carried out, the Expunge User Process can be used. This will not stop the process on the workstation, but it will sever the link with the database file, and the next time the user tries to access the process (Module window) it will crash. After the Expunge User Process has been done, it is common practice to then use Expunge All Users to remove any lingering phantom users and release all claims.
However, it is necessary after the Expunge processes (or other illegal exits) to ensure that the database files have not been locked by Windows or left open; they should be closed so that further work on the databases can be done. As the files normally reside on a separate File Server, administration access to that server will be required.

Managing Database File Status

To monitor for locked and/or open files, Windows' own Open Files screen on the File Server can be used. However, there is no standard Windows tool to close the locked or open files, and additional utilities are needed. The utility used by AVEVA and other customers is PsFile, part of the pstools suite that can be obtained from Sysinternals. Using this, the files can be monitored and closed. Note: PsFile only shows files opened remotely, so it won't show files opened by processes running on the File Server itself, e.g. scheduled jobs being run on the File Server.
Alternatively, you can use the Microsoft NETFILE API on the server to free locked db files.
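A hedged sketch of PsFile usage (the server name and path are illustrative; check the Sysinternals documentation for the exact options on your version):

   psfile \\FILESRV01                          (list files opened remotely on the server)
   psfile \\FILESRV01 C:\projects\abc000 -c    (close open files matching the path)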
Summary of steps before conducting an ADMIN task on a database


1. Broadcast a message to all users on the project telling them that they should cleanly exit by a required time. If the ADMIN MESSAGE command is used, note that it will only be visible to those logged in at the time, and when they change module.
2. At the advised time, lock the project via ADMIN to prevent any users accessing the databases further.
3. If it is a Global project, stop the Daemon to stop updates and/or remote claiming.
4. Check the project for any users still logged in; try to get in contact with them and ask them to leave the project cleanly.
5. Any users who cannot be contacted should be severed from the project by Expunge User Process.
6. Expunge All Users to remove any phantom users and release any claims.
7. Using pstools PsFile, check for any open or locked db files on the db File Server.
8. Using pstools PsFile, close any open or locked db files on the db File Server.
Note: Strictly, the only databases that must not be accessed in Read or Write mode are
      those on which an ADMIN task such as reconfiguration or session merging is being
      undertaken. To secure this without forcing all users out of the project, isolate the
      databases (including the whole extract family) from use by removing them from all
      MDBs, then perform steps 1-8 above, omitting step 6. Deferring the databases is not
      recommended, as a user can overwrite the deferral. After the ADMIN task has been
      performed on the specific databases, they can be re-added to the MDBs.
As this adds an extra level of complexity to the ADMIN task, it is suggested that a window
of time is sought in which the whole project can be shut down.
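A minimal command-line sketch of steps 1 and 2 follows. The message wording is
illustrative and the exact MESSAGE syntax should be checked in the ADMIN Command
Reference; LOCK and UNLOCK are used as in Project Setup Guidelines later in this manual.

$* Step 1: warn logged-in users - seen when they change module
MESSAGE 'Project will be locked at 18:00 - please SAVEWORK and exit'

$* Step 2: at the advised time, prevent further access
LOCK

$* ... carry out the ADMIN work, then re-open the project
UNLOCK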

19.6  Distributed Extract Hierarchy


Where work needs to be executed on one database by two locations, a Leaf Extract
organisation is required. Below is a sketch that outlines how this should be set up in a
two-level extract organisation:

In this scenario the SAT2 users working on the EX2_SAT1 db are claiming objects from EX1
Primary at SAT1.


This can be done dynamically in Explicit Claim mode over the Daemon. However, the
response time can be variable, leaving the SAT2 users unsure of the status of their claim.
It is therefore recommended that the project is organised so that the EX1 Primary objects
to be worked on at SAT2 are identified and marked by the SAT1 users, and an Admin process
is then run to Extract Claim the collection to the EX2_SAT1 Primary database at SAT2. When
the work on the objects is complete, the SAT2 users mark the objects as ready to be
issued, and an Admin process is run to Extract Issue the collection back to EX1 Primary.
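A sketch of such an Admin process is shown below. The collection rule (a PURP flag) and
the site name are hypothetical marking conventions for illustration only; the loops simply
apply Extract Claim, and later Extract Issue, to each marked object:

$* At SAT2: claim the marked collection from EX1 into the EX2_SAT1 extract
VAR !marked COLLECT ALL BRAN WITH (PURP EQ 'SAT2') FOR SITE /AREA1
do !item values !marked
   EXTRACT CLAIM $!item
enddo

$* When the work is complete: issue the changes back to EX1
do !item values !marked
   EXTRACT ISSUE $!item
enddo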

19.7  Project Administration Team


Each customer has its own internal organisation for the management and coordination of IT
systems. However, experience shows that if the Administration roles for a project are not
firmly established, especially across multi-location Global projects, the Housekeeping
activity can become disjointed and the project may suffer downtime while repair work is
undertaken in normal working time. The aim is to keep coordination and decision-making
streamlined, with a minimum overhead of debate and dialogue.
AVEVA suggests the following for a major project:

19.7.1  ADMIN Lead
A single technical expert, with in-depth knowledge of the application from both a User and
an Administration background, is placed in charge of the whole project, has decision-making
authority, and is the contact for communication with the engineering and IT management for
the project. This person should have a full-time Deputy who can stand in during planned
and unexpected absence. In a Global project the Hub should be sited at this person's
location.
This role, including the Deputy, requires a high level of IT knowledge; the holder should
be a trusted partner of the IT group, with permissions to access the application server(s)
to perform specific tasks.
This role has the main contact with AVEVA Support, unless an issue pertains to a specific
discipline need, in which case the Discipline SME role comes into play.
This is a full-time role on a major project.

19.7.2  Discipline SMEs (Subject Matter Experts)


Each major discipline should have a nominated SME who coordinates all discipline activities
with the ADMIN Lead and the other SMEs. This may include a Catalogue and Spec SME, who
performs the specific task of servicing the Discipline SMEs with skills in this specialist
area. The SME should have a high level of expertise at User level, with a firm
understanding of the principles of projects. If the project is Global, it is still
suggested that a single SME is nominated for each discipline in order to maintain a common
project approach.
Depending on project scale and the level of discipline activity, the team of discipline
users should include trusted senior users who can stand in for the SME in the event of
absence.
For Discipline specific issues this role should be in touch with a discipline SME counterpart
in AVEVA Support.
This is a part-time role on the project. Activity intensity on specific issues will vary
with the project schedule. At project start it is most likely to be a virtually full-time
role for the opening weeks, moving to an irregular pattern of high and low workloads, to be
balanced with the primary role of engineering and/or design on the project.

19.7.3  Global Satellite Co-ordinator


On a Global project it is recommended that each location nominates a senior user from the
project to act as the local administrative support for Global. This role reports to the
ADMIN Lead. Ideally the person fulfilling it should have both end-user and administrative
experience, together with a high level of IT knowledge, and should be a trusted partner of
the IT group, with permissions to access the application server(s) to perform specific
tasks under the direction of the ADMIN Lead.
Depending on project scale, the team of users at the satellite should include trusted
people who can stand in for the Global Satellite Co-ordinator in the event of absence.
It is not expected that this role will need to contact AVEVA Support directly, as issues
should be escalated through the ADMIN Lead or Discipline SMEs depending on the topic.
However, where there are time-zone issues this may change, to ensure that contact with
local AVEVA Support is established.
This is a part-time role on the project following a similar pattern of workload to the Discipline
SMEs.


Project Setup Guidelines


For a standard project setup refer to Chapter 5 of the Administrator User Guide. There is
one extra project setup step for Global:

Set up environment variables for the location transfer directories. The environment
variables must be added to the {proj}evars.bat file in the project folder:

set {PROJ}_HUB=C:\{your project path}\TRANSFER\HUB
set {PROJ}_PFB=C:\{your project path}\TRANSFER\PFB
etc.

These directories will hold the project files transferred to and from each location.
Launch PDMS and, at the login screen, make sure that the project that is to be made Global
is selected, and that the Admin module is selected.
In the Admin module select Display > Command.
At the command line type the following commands in sequence:

Lock
make Global

Note: The user will be prompted to close and re-open the Admin module.

unlock
savework

Then quit the Admin module and reload as prompted.


Note: When re-starting the Admin module, a prompt will inform the user that the Location is
uninitialised.
In the Admin module select Locations from the Elements pulldown and highlight /projecthub.
Click Modify, rename /projecthub to /hub, then click Apply.
A prompt will ask if the user wants to initialise the location. Click Yes.
A prompt will be displayed indicating that a new transaction database has been created.
Click OK, then Dismiss on the Modify Location window.
Start the Global daemon by typing the following from the Windows command line. (Click
Start > Run and type CMD to open a Windows command line window.)

C:\AVEVA\Global{version}\admind start {proj}


To verify that the command has run successfully the user can query the linit flag of the
location. To do this:


Open the Admin module.

Select Location from the Element pulldown.

Select Display > Command to open the command window, then type the following command:

q linit
Example PML to wait for the command to complete:

!c = curloc
do
   pause 1
   session comment 'Interim savework at HUB after INITIALISE'
   savework
   getwork
   $* Leave the loop once the location reports itself initialised
   break if (!c.linit)
   $* Escape hatch: create a 'skip' file to abandon the wait
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
   skip
enddo
savework
***** Generate the locations at the HUB *****

/*GL
LOCLI 1
NEW LOC /PFB
LOCID PFB
DESC Piping Fabrication
RHOST sg132
CR DB TRANSACTION/PFB
GENERATE LOCATION PFB NOALLOCATE

Note: ALLOCATE will copy all the project files to the location defined by the variable
{proj}_PFB. NOALLOCATE will only copy the system DB files.

At the Satellite, use Windows Explorer to copy the files in {proj}_PFB to the location
directory where the project will reside as {proj}000 (i.e. the Satellite).
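The copy can equally be scripted from a command prompt; a sketch, assuming the project
code is ABC and the satellite project directory shown here (both hypothetical):

rem Copy the transfer files to the satellite project directory
xcopy "%ABC_PFB%" "C:\net\project\abc000" /E /I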
Set up the base product environment at the satellite location (executables, project
directories etc.):

set PDMSEXE=C:\ita_test_env\pdms
set {proj}000=C:\net\project\{proj}000 ..etc.

Start the daemon at the location PFB:

%PDMSEXE%\admind start {proj}

(This command is run at location PFB.)


If still in the base product at the HUB, check the PFB daemon by:

PING PFB
Example PML to wait for the command to complete:

do
   pause 1
   ping PFB
   handle ANY
      $* Ping failed - keep waiting, unless the escape file exists
      !f = object FILE(!!itaSkipPath + '/skip')
      break if (!f.exists())
      skip
   elsehandle NONE
      $* Ping succeeded - the PFB daemon is up
      break
   endhandle
enddo

Log in to the Admin Module at location PFB (admin):

INITIALISE

Having set up the environment at the location, log in to the Admin Module at the HUB (if
not in already):

savework
getwork
/PFB
Q LINIT
** If LINIT TRUE then PFB has Initialised
Example PML to check that initialisation is complete:

!loc = /PFB
do
   pause 1
   session comment 'Interim savework at HUB after initialisation of PFB'
   savework
   getwork
   break if (!loc.linit)
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming initialisation of PFB'
savework
getwork
Now allocate the required DBs to the location PFB:

ALLOCATE pipeapproved/master SECONDARY AT PFB
ALLOCATE pipereview/siteufa/A SECONDARY AT PFB
ALLOCATE pipeworkarea/fabwork/A PRIMARY AT PFB
etc.


session comment 'Savework at HUB after allocations to PFB'
savework

Wait until all dbs have been allocated at PFB:

/PFB
1    $* navigate to the first member of the location (the DBALL)

The number of members in the DBALL should match the number of DBs allocated.
Example PML to wait until all databases have been allocated:

do
   pause 2
   session comment 'Interim savework at HUB - waiting for allocations to PFB'
   savework
   getwork
   !location = /PFB
   q var !location.members[1].members
   break if (!location.members[1].members.size() ge 28) $* no. of allocates
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming allocations to PFB'
savework
Create Teams and Databases at the Hub, and create Users and MDBs locally.
REPEAT FROM ****GENERATE LOCATION****, for all locations required.


Recovery from Reverse Propagation Errors

B.1  Background - Propagation Process


When the Global Daemon attempts to update a database with another location, it uses two
sets of data:

•  The required propagation direction - always away from the Primary location of the
   database.
•  The detailed information in the database header. This includes:
   -  Compaction number (Non-additive changes count, NACCNT)
   -  Latest session number
   -  Claim-list changes count (CLCCNT)
   -  Header changes count (HCCNT, also known as ELCCNT)


The header information is compared at the two locations to determine what sort of update is
required. If the Compaction number is different, then the entire database file must be
copied, rather than just sending the required sessions. Otherwise only the required pages
in the database are sent.
If the propagation direction implied by the header information at the two locations is
incorrect, then the Daemon will report errors such as:

Prevented reverse propagation, should be From Remote not Update To
Prevented reverse propagation, should be To Remote not Copy From
The words From and To indicate the direction implied by the Primary location, and that
inferred from the database header. These messages are output as Errors to the daemon
window, as well as being recorded as Failures in the Transaction database.
The word Copy means that the compaction number at the secondary location is higher than
that at the primary location; the word Update means that the latest session or counters are
higher at the secondary location than at the primary location. (If neither of the locations
is the primary location, then the database at the location nearest to the primary location
is the one that is used.)
A third message is also possible, for another location's system database where a file is
missing:

Missing file. Prevented reverse propagation, should be To Remote not Copy From


B.2  Identifying the Problem


When the Transaction database reports that an Update has failed due to Reverse
Propagation, the following should be done for the Database element:

•  Query the primary location of the database (Q PRMLOC at the DB element for the
   database; Q DB <name> also contains this information).
•  Query the Filename - this is useful in identifying the database in the daemon trace.
•  Query the NACCNT, Latest session number, HCCNT and CLCCNT for the database.
•  Do the same at the remote location.
•  Decide from this in which direction to RECOVER the database, and recover the database.

Note that the command Q REMOTE <locname> <dbname> FILEDETAILS may be used to gather this
information for both locations.

Note: The RECOVER command is the only command which is allowed to copy the file without a
check on the propagation direction.
In general, if the Prevented Reverse Propagation message contains Copy, it is the
NACCNT attribute that is the problem. This counter is incremented by a database MERGE,
BACKTRACK (but not REVERT - the Appware uses REVERT) or Reconfiguration. In this
case, the propagation needs to copy the entire database file. However the copy has failed,
because the NACCNT is higher at the secondary location than the primary location.
The other properties are used to control normal database propagation, where only the
required sessions and the database header are sent. If the Latest session number is higher
at the secondary location than at the primary location, then database recovery is required. If
the session numbers are equal, but the HCCNT and CLCCNT attributes are higher at the
secondary location than at the primary location, then a database recovery is also required.
Usually, recovery should be made from the Primary location, unless there are good reasons
why a secondary location has the correct version of the database.

RECOVER AT <secondary location>

It may be necessary to recover the database at more than one satellite location.
If the secondary database is correct for some reason, then recovery should be from the
required secondary location:

RECOVER AT <Primary> FROM <Secondary>

This deliberately replaces the database at the primary location with the version at a
secondary location.
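For example, assuming the RECOVER command accepts a database name alongside the forms
shown above (the database and location names here are illustrative; check the exact syntax
in the ADMIN Command Reference):

$* Recover the copy at secondary location SAT1 from the primary
RECOVER MYTEAM/DESI AT SAT1

$* Deliberately replace the primary copy at the HUB with SAT1's version
RECOVER MYTEAM/DESI AT HUB FROM SAT1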

B.3  Querying Database Properties


The latest session number and the NACCNT, HCCNT and CLCCNT attributes are properties of
the database file, and must be queried at each location. One way to do this is to include
these queries in a post-update propagation macro, which can be run from a script specified
by the EXECB attribute of the Update timer (LCOMD).
The latest session for a database may be queried by using Q SESSIONS <dbname>, for example:

Q SESSIONS MYTEAM/DESI


The database properties NACCNT, HCCNT and CLCCNT may be queried in the normal way by
navigating to the DB element for the database (for example, /*MYTEAM/DESI) and querying its
attributes. It should be emphasised that these attributes are properties of the database
file, and may differ at each location.
Alternatively, a PML object of type DB may be constructed for the database:

!DD = OBJECT DB (/*MYTEAM/DESI)

Then the properties may be queried:

!DD.NACCNT
!DD.HCCNT
!DD.CLCCNT
!DD.LatestSession()

Note that the last of these is a method, unlike the others, which are members. The primary
location and filename may also be queried:

!DD.FileName
!DD.Prmloc
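Putting this together, a minimal sketch (using the same example database name) that dumps
all four values at the current location; q var prints each property whatever its type:

!DD = OBJECT DB (/*MYTEAM/DESI)
$* Print the counters used to decide the propagation direction
q var !DD.NACCNT
q var !DD.HCCNT
q var !DD.CLCCNT
q var !DD.LatestSession()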
The same properties may be queried for a database at a remote location ABC by using:

Q REMOTE ABC MYTEAM/DESI FILEDETAILS


Q REMOTE ABC MYTEAM/DESI LASTSESSION
The FILEDETAILS option returns the Compaction number (NACCNT), last session, Extract
list/Header changes count (HCCNT) and Claim changes count (CLCCNT) for the database
at the specified location. These may be compared with the local values.
Note: These commands return data in CSV format when used with a variable e.g.
!REMOTE ABC MYTEAM/DESI FILEDETAILS
However, the Daemon trace log does include this information if the right trace level is
turned on (Trace bit 3). The information is only present in the log of the location which
issued the Update. The relevant lines might read:

(6) At Tue Oct 04 01:03:24 2005 Processing DB %ABC000%/abc2315_0001
(6) At Tue Oct 04 01:03:24 2005 Compaction numbers: local 0 remote 0
(6) At Tue Oct 04 01:03:24 2005 Session numbers: local 3 remote 2
(6) At Tue Oct 04 01:03:24 2005 Claim Changes counts: local 17 remote 1
(6) At Tue Oct 04 01:03:24 2005 Extract List counts: local 3 remote 10

In this case this indicates that the current location has a more recent session than the
remote location. The Claim count only applies to a session, so its value will be ignored
unless the session numbers are the same. In this example, the implied propagation direction
is from the current location to the remote location.
However, before making the update, the Daemon checks the update direction, to ensure
that the propagation direction is consistent with the direction away from the primary location
of the database. If this check fails, then the Prevented reverse propagation error causes
the update to fail.
Occasionally, it is not possible for the daemon to check the Update direction (Global db may
be in use). In this case, the failure will read Update skipped. This is normally a temporary
problem, and the database will be propagated as normal on the next scheduled update.


B.3.1  Automating Checks For Failure


To reduce the administration of updates, it is possible to write a macro to collect
failures and messages from the Transaction database. This manual includes a chapter on the
structure of this database (see Transaction Audit Trail); that chapter contains detail that
will be useful to a macro writer.
The structure of a Transaction command can be quite complex. Each command consists of a
number of operations, some of which may be commands for a remote location. Each of these in
turn may also contain a set of operations and remote commands. Operations may be dependent
on the success (or failure) of earlier operations. Some commands, such as REMOTE MERGE or
EXTRACT CLAIM, are quite complex; others, such as scheduled updates (UPDATE ALL), are
relatively simple. (See also the PowerPoint presentation on Transactions.)
Each scheduled update has an associated command element (TRINCO) which is stored under the
TIMEDUPDATES user for the date:

/2005/OCT/5/TIMEDUPDATES/ABC

where ABC is the LOCID of the location owning the Update event (LCOMD). PML collection
syntax can be used to extract the Failures:

COLLECT ALL TRFAIL WITH (TYPE OF OWNER NEQ |TRMLST|) FOR !DBREF

where !DBREF refers to the timed update element above.
Generally, successes (TRSUCC) and failures (TRFAIL) can be ignored when they are
owned by TRMLST, since these are progress messages. Only those in the Success list
(TRSLST) and Failure list (TRFLST) need to be considered.
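A minimal PML sketch along these lines, using the COLLECTION and EXPRESSION objects in the
same way as the example macro later in this manual (the date path is the example one from
above):

$* Gather the genuine failures for one day's timed updates
!updates = /2005/OCT/5/TIMEDUPDATES/ABC
!coll = object COLLECTION()
!coll.scope(!updates)
!coll.type('TRFAIL')
!filter = object EXPRESSION('TYPE OF OWNER NEQ |TRMLST|')
!coll.filter(!filter)
!failures = !coll.results()
!n = !failures.size().string()
$P Found $!n update failures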
Alternatively, the Transactions Utility Appware could be used as the basis for a suitable
macro, since its embedded methods extract this information. There are two main forms
involved:

!!glbtransactions - transaction command summary
!!glbtransactionmessages - transaction messages, failures and successes

These forms are files in %PMLLIB%\global\forms with the suffix .pmlfrm.
These forms use the Appware object GLBTRANSACTION. This contains suitable methods using the
COLLECTION object and EXPRESSION filters to collect successes and failures.
When a command is stalled, this is only reported as a Message (TRMESS) in the Message list
(TRMLST). There is no corresponding success or failure, since the command may well complete
on a re-try.
Note: Some commands (such as Claims) use Successes as a way of passing data
between operations, so contain fairly obscure data.


Using Global to Distribute Catalogue Data


Global can be used to distribute catalogue databases around the world, so that projects can
include them as foreign databases.
A project cannot include a database from a Global project unless the project itself is
Global (that is, the MAKE GLOBAL command has been executed). Therefore, in order to have a
set-up where several projects all use a Globally distributed project, they must themselves
be Global in the MAKE GLOBAL sense; they do not have to be distributed themselves.
Single-location Global projects do not require a Global license.
Single-location Global projects may be created and made Global at their resident location.
You will then be able to include the catalogue data in the usual way.
If there are many multiple-location projects that share this project, then it will be
necessary (because of your HUB license) to create and make each project Global at the HUB
and then copy them to their eventual resident locations. You will then be able to include
the catalogue data in the usual way.
Note: If a Global project is being used to distribute catalogue databases for other projects
to include, the Overwrite DB Users flag (see admnew Files) should be disabled.

Example Macro for Collecting and Deleting Old Commands
The PML function below allows transactions older than a specified number of days to be
deleted. This is an alternative to using the Transactions Merge/Purge facility described in
Automatic Merging and Purging of a Transaction Database. The function must be copied into
PMLLIB (under global\functions). It may be run using !!purgeTransaction(value), where value
is the number of days to retain:
define function !!PurgeTransaction(!days is REAL)
   if (!days gt 28) then
      !!Alert.error('Maximum purge time is 28 days')
      return
   endif
   if (not !!Alert.Confirm('The local daemon must be shut down before you can continue with the purge/merge operation. Do you wish to continue?').Boolean()) then
      return
   endif
   $P Searching for complete transactions...
   $* Work out the cut-off date, allowing for month and year boundaries
   !monlengths = '31,28,31,30,31,30,31,31,30,31,30,31'
   !today = object DATETIME()
   !year = !today.year()
   !month = !today.month()
   !day = !today.date()
   !hour = !today.hour()
   !minute = !today.minute()
   !second = !today.second()
   !day = !day - !days
   if (!day lt 1) then
      !month = !month - 1
      if (!month lt 1) then
         !year = !year - 1
         !month = 12
      endif
      if (!month eq 2) then
         $* Leap-year test (valid for the years 2000-2099)
         !leaptest = (!year - 2000) / 4
         if (!leaptest eq !leaptest.int()) then
            !day = 29 + !day
         else
            !day = 28 + !day
         endif
      else
         !day = !monlengths.split(',')[!month].real() + !day
      endif
   endif
   !date = object DATETIME(!year,!month,!day,!hour,!minute,!second)
   $* Collect all completed transaction commands (TRINCO elements)
   !collection = object COLLECTION()
   GOTO FRSTW TRAN
   !collection.scope(!!ce)
   !filter = object EXPRESSION('upc(TSTATE) eq |COMPLETE|')
   !collection.filter(!filter)
   !collection.type('TRINCO')
   !trincos = !collection.results()
   !promptstr = 'Found ' & !trincos.size().string() & ' complete transactions...'
   $P $!promptstr
   !promptstr = 'Deleting obsolete transactions more than ' & !days.string() & ' days old...'
   $P $!promptstr
   !numdel = 0
   !numh = 0
   do !trinco values !trincos
      !datecm = object DATETIME(!trinco.datecm)
      !datend = object DATETIME(!trinco.datend)
      if (!trinco.incsta.upcase() eq 'PROCESSED' and !datecm.lt(!date) or !trinco.incsta.upcase().inset('TIMED OUT','CANCELLED','REDUNDANT') and !datend.lt(!date)) then
         !numdel = !numdel + 1
         !!CE = !trinco
         DELETE TRINCO
         $* Delete any hierarchy elements left empty by the deletion
         if (!!CE.members.size() eq 0) then
            DELETE TRLOC
            !numh = !numh + 1
            if (!!CE.members.size() eq 0) then
               DELETE TRUSER
               !numh = !numh + 1
               if (!!CE.members.size() eq 0) then
                  DELETE TRDAY
                  !numh = !numh + 1
                  if (!!CE.members.size() eq 0) then
                     DELETE TRMONT
                     !numh = !numh + 1
                     if (!!CE.members.size() eq 0) then
                        DELETE TRYEAR
                        !numh = !numh + 1
                     endif
                  endif
               endif
            endif
         endif
      endif
   enddo
   $P $!numdel obsolete transactions deleted
   $P $!numh associated hierarchy elements deleted
   if (!numdel eq 0) then
      $P No merge necessary
      !!Alert.Message('No obsolete transactions found')
   else


      $* Merge the sessions of the local transaction database
      !cs = CURRENT SESSION
      !locrf = !cs.locationname.dbref()
      !transdbstr = 'TRANSACTION/' & !locrf.locid
      !promptstr = 'Merging all sessions of transaction DB ' & !transdbstr & '...'
      $P $!promptstr
      MERGE CHANGES $!transdbstr
      $P Merge complete
      !!Alert.Message(!numdel.string() & ' obsolete transactions deleted - transaction database purge/merge complete')
   endif
endfunction
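For example, after copying the file under %PMLLIB%\global\functions, the function can be
loaded and run as follows (14 days is an arbitrary retention choice):

pml rehash all
!!purgeTransaction(14)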


Index

A
ADMIN Daemon . . . . . . . . . . . . . . . . . . . 3:1
Areas . . . . . . . . . . . . . . . . . . . . . . . . 5:4

C
Command Processing . . . . . . . . . . . . . . . . . 2:1

D
Database
   allocation check . . . . . . . . . . . . . . . . 5:1
   allocation to location . . . . . . . . . . . . . 5:1
   creating extract . . . . . . . . . . . . . . . . 16:3
   creating master . . . . . . . . . . . . . . . . 16:3
   de-allocation . . . . . . . . . . . . . . . 5:2, 5:3
   deleting . . . . . . . . . . . . . . . . . . . . 11:1
   macros . . . . . . . . . . . . . . . . . . . . . 10:4
   manual update . . . . . . . . . . . . . . . . . 10:1
   master of extract . . . . . . . . . . . . . . . 16:1
   merging . . . . . . . . . . . . . . . . . . . . . 6:1
   reconfiguring . . . . . . . . . . . . . . . . . 13:1
   recovery . . . . . . . . . . . . . . . . . . . . 12:1
   recovery of global . . . . . . . . . . . . . . . 12:2
   recovery of primary . . . . . . . . . . . . . . 12:2
   recovery of primary location . . . . . . . . . . 12:2
   recovery of secondary . . . . . . . . . . . . . 12:1
   synchronisation . . . . . . . . . . . . . . . . 10:1
   update delay . . . . . . . . . . . . . . . . . . 10:2
   update protection . . . . . . . . . . . . . . . 10:7
   update timing . . . . . . . . . . . . . . . . . 10:4
   updating . . . . . . . . . . . . . . . . . . . . 10:1
DESIGN Manager files . . . . . . . . . . . . . . . 10:5

E
Extracts . . . . . . . . . . . . . . . . . . . . . 16:1
   access . . . . . . . . . . . . . . . . . . . . . 16:8
   children . . . . . . . . . . . . . . . . . . . . 16:1
   claim restrictions . . . . . . . . . . . . . . 16:11
   creating . . . . . . . . . . . . . . . . . . . . 16:3
   creating working . . . . . . . . . . . . . . . . 16:5
   dropping changes . . . . . . . . . . . . . . . 16:14
   explicit claim . . . . . . . . . . . . . . . . 16:11
   extract claim . . . . . . . . . . . . . . . . . 16:9
   flushing . . . . . . . . . . . . . . . . . . . 16:13
   flushing command failure . . . . . . . . . . . 16:11
   hierarchy . . . . . . . . . . . . . . . . . . . 16:7
   implicit claim . . . . . . . . . . . . . . . . 16:11
   issuing changes . . . . . . . . . . . . . . . . 16:14
   master . . . . . . . . . . . . . . . . . . . . . 16:1
   merging changes . . . . . . . . . . . . . . . . 16:16
   numbers . . . . . . . . . . . . . . . . . . . . 16:6
   parent database . . . . . . . . . . . . . . . . 16:1
   partial operations . . . . . . . . . . . . . . 16:15
   querying family . . . . . . . . . . . . . . . . 16:2
   reference blocks . . . . . . . . . . . . . . . . 16:7
   refreshing . . . . . . . . . . . . . . . . . . 16:14
   releasing claims . . . . . . . . . . . . . . . 16:14
   sessions . . . . . . . . . . . . . . . . . . . 16:15
   user claim . . . . . . . . . . . . . . . . . . . 16:9
   using in . . . . . . . . . . . . . . . . . . . . 16:8
   variant . . . . . . . . . . . . . . . . . . . 16:18

F
Firewall . . . . . . . . . . . . . . . . . . . . . 18:1

G
Global Daemon
   access rights . . . . . . . . . . . . . . . . . 3:1
   diagnostics . . . . . . . . . . . . . . . . . . 4:1
   location . . . . . . . . . . . . . . . . . . . . 3:1

H
Hub
   changing . . . . . . . . . . . . . . . . . . . . 9:1
   recovering . . . . . . . . . . . . . . . . . . . 9:2

I
ISODRAFT files . . . . . . . . . . . . . . 10:5, 17:2

K
Kernel Command . . . . . . . . . . . . . . . 2:1, 7:1

L
Locations
   off-line . . . . . . . . . . . . . . . . . . . . 17:1

M
Macros . . . . . . . . . . . . . . . . . . . . . . 10:4

P
Pending file . . . . . . . . . . . . . . . . 2:1, 8:1
PLOT files . . . . . . . . . . . . . . . . 10:5, 17:2
Projects
   backing up . . . . . . . . . . . . . . . . . . . 15:1

T
Transaction Audit . . . . . . . . . . . . . . . . . 7:1
Transaction database
   audit trail cancelled commands . . . . . . . . . 7:7
   audit trail dates and counts . . . . . . . . . . 7:5
   audit trail from TRINCO . . . . . . . . . . . . 7:2
   audit trail from TROPER . . . . . . . . . . . . 7:4
   audit trail from TROUCO . . . . . . . . . . . . 7:3
   audit trail results and messages . . . . . . . . 7:7
   commands . . . . . . . . . . . . . . . . . 2:1, 7:1
   management . . . . . . . . . . . . . . . . . . . 12:3
   merging . . . . . . . . . . . . . . . . . . . . 12:3
   merging and purging . . . . . . . . . . . . . . 7:13
   reading from . . . . . . . . . . . . . . . . . . 7:1
   reconfiguring . . . . . . . . . . . . . . . . . 12:4
   renewing . . . . . . . . . . . . . . . . . . . . 12:3
   writing to . . . . . . . . . . . . . . . . . . . 7:1
