
CERTIFICATE

This is to certify that the thesis entitled Finitely Recursive Process Models for
Discrete Event Systems - Analysis and Extension, submitted by Mr. Supratik
Bose to the Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, for the award of the degree of Doctor of Philosophy, is a bona fide record of research work carried out by him under our supervision during the period from
January 1992 to June 1995. The results embodied in this thesis have not been submitted
for the award of any other degree.

Dr. Amit Patra


Kharagpur
Dated

Dr. Siddhartha Mukhopadhyay

Abstract
The field of Discrete Event Systems (DES) has emerged in the last decade to provide
formal methodologies for modelling, analysis and control of man-made technological systems. The complexity of these systems makes it imperative to adopt an appropriate level
of abstraction depending upon the issues being investigated. The logical behaviour of such
systems is usually characterised by sequences of events occurring at discrete instants of
time.
The recently proposed Finitely Recursive Process (FRP) model captures the dynamic
behaviour of DES in terms of processes, in contrast to the commonly adopted concept of
states. A process is characterised by a set of event sequences, along with associated functions, which determine the immediate course of system behaviour. Operators on processes
have been defined to build new processes from the elementary ones. Finite descriptions of
complex processes can be obtained by recursive application of these operators.
It is not decidable whether an arbitrary FRP is bounded, i.e., whether it can be implemented using a finite number of states. This is a major deterrent to further analysis and practical use of the model. Therefore, in this work, subclasses of the FRP model have been identified for whose processes boundedness is either guaranteed or at least decidable. For such subclasses, restrictions are naturally imposed on the use of different process operators in recursive descriptions to ensure boundedness; this impairs modelling flexibility to an extent. Later, it has also been shown how processes belonging to these subclasses can be hierarchically combined, resulting in a better trade-off between modelling flexibility and tractability of the model.
The FRP model, being a deterministic framework, cannot capture nondeterminism such as that arising out of event concealment. To overcome this limitation, a nondeterministic extension of the FRP model is proposed in this work. Here, a nondeterministic process is
characterised in terms of a set of deterministic futures possible at a given instant. For
an underlying deterministic process, these may be the ones that are reachable via hidden
events. Later, using a collection of operators, a finitely recursive characterisation of the
nondeterministic process space has been obtained.
In the FRP model there is no provision to handle numerical or non-numeric information, or logical decisions based on the status of system variables. This poses difficulties for the modeller of any real-world system. To include these features, the idea of extended processes
has been used in this work to provide a state-based extension to the FRP model. Since
it is assumed that the extended processes can share variables during concurrent as well
as sequential operations, a number of state-continuity constraints have been obtained over
the extended process operators. The concept of a silent transition has been introduced
for modelling logical decisions, which are triggered by occurrences of events and associated
state changes in the environment of a process, running concurrently with others. Here
also, a finitely recursive characterisation of the extended process space has been obtained
in terms of a collection of operators.
Finally, a number of practical application examples including a manufacturing job-shop,
a fault diagnosis session and a robot controller have been modelled using the proposed
modelling frameworks.
Keywords: Discrete Event Systems, Process Algebra, Finitely Recursive Processes,
Boundedness analysis, Nondeterminism.

Contents

1 Introduction
  1.1 Discrete Event Systems (DES)
  1.2 Nature of DES dynamics
  1.3 Logical Models of DES: State of the Art
  1.4 Motivation and Contribution

2 Mathematical Background  13
  2.1 Marked Processes  13
  2.2 Recursive Characterisation  20
  2.3 Deterministic Process Space  25

3 Boundedness Analysis of FRP  37
  3.1 Introduction  37
  3.2 Boundedness Characterisation: A(<(GD, ·D^k)>)  44
  3.3 Finiteness Characterisation: <(HD, ·D^k)>  61
  3.4 Boundedness Characterisation: A(<(GD^;, ·D)>)  67
  3.5 Bounded Subclasses of A(<(GD, ·D^k)>): Some Examples  78
  3.6 Conclusion  85

4 Nondeterministic Extension of FRP  87
  4.1 Introduction  87
  4.2 Nondeterministic Process Space  89
  4.3 Process Operators  92
  4.4 Recursive Characterisation  100
  4.5 Assessment  102
    4.5.1 Comparison with CSP  103
    4.5.2 Comparison with CCS  108

5 State-based Extension of FRP  109
  5.1 Introduction  109
  5.2 Extended Process Space  111
  5.3 Process Operators  113
  5.4 Recursive Characterisation  123
  5.5 Assessment  127

6 Case Studies  131
  6.1 Modelling of a Shop-Floor  131
  6.2 Modelling of a Fault Diagnosis Session  136
  6.3 Modelling of a Robot Controller  143

7 Conclusions  149

A Proofs of Some Results  153

Preface
Discrete Event Systems (DES) are dynamic systems that evolve with the abrupt occurrences of physical events, at possibly unknown irregular intervals. Such systems arise in a
variety of contexts ranging from computer operating systems to the control of substations
in an electric supply network. In the past, DES were usually simple enough that intuitive or ad hoc solutions to various problems were adequate. The increasing complexity of man-made systems, made a reality by the widespread application of computer technology, has elevated them to a level where more detailed formal methods become necessary for their analysis and design.
The models designed to capture logical behaviour of DES can be categorised broadly
into two classes: state-based models and process algebra models. The state-based models
use the ideas of states and state-transition functions as fundamental components, whereas
the process algebra models use the ideas of traces (event sequences) and associated marking
functions.
The Deterministic Finitely Recursive Process (DFRP) model is an important process algebra model that captures the dynamics and logical behaviour of DES recursively in terms of processes and process operators. The DFRP model has good describing power: it can model arbitrary Finite State Machines, Petri nets and Deterministic Communicating Sequential Processes. However, it cannot model nondeterministic behaviour that arises out of hiding of events. State-based features, like variables and logical decisions based on the state of these variables, cannot be modelled in this framework, as it has no means to store numerical or non-numeric information. Moreover, it has been shown that issues like reachability, boundedness and deadlock are undecidable even in the simple DFRP model. These problems have motivated us to work in the following two directions: (i) analysis of the boundedness problem and identification of bounded subclasses of the DFRP framework; (ii) extension of the DFRP formalism to include nondeterminism, variables and states.
The thesis is organised as follows. The first chapter introduces the basic features of DES and presents a brief idea of the state of the art of the logical models of DES. The basic mathematical ideas behind process algebra models and their recursive characterisation are presented in the second chapter. Previous research has shown that it is not decidable whether a general DFRP model can be implemented using a finite number of states. This is a major deterrent to further analysis and practical use of the model. In the third chapter, two subclasses of the DFRP model have been identified, for whose processes boundedness is either guaranteed or at least decidable. It has also been shown how processes of these
subclasses can be hierarchically combined, resulting in a better trade-off between modelling
flexibility and tractability of the model. A nondeterministic extension of the DFRP model
is presented in the fourth chapter. Here, a nondeterministic process is characterised in
terms of possible deterministic futures that are reachable via hidden events. Later, using
a collection of operators, a finitely recursive characterisation of the nondeterministic process
space has been obtained. In the fifth chapter a state-based extension of the DFRP model
has been proposed using the idea of extended processes. Since it is assumed that the
extended processes can share variables during concurrent as well as sequential operations,
a number of state-continuity constraints have been obtained over the extended process
operators. A concept of silent transition has been introduced for effective modelling of
logical decisions, made by concurrently running processes, based on global variables. Here
also, a finitely recursive characterisation of the extended process space has been obtained in
terms of a collection of operators. A number of real world DES including a manufacturing
job-shop, a fault diagnosis session and a robot controller have been modelled in the sixth
chapter using the proposed bounded subclasses as well as extensions of the DFRP model.
Conclusions and future problems related to this work are discussed in the seventh chapter.
A wise man once said, "Fine words butter no parsnips." I do not pretend to know what that means, but I have a sneaking suspicion that it applies to this document. Worse, I fear, as all writers of technical material must, that people will have the same quizzical reaction to this thesis that I have to the proverb. In any case, insofar as the ideas presented herein have any currency and consequence (butter any parsnips), I suppose I owe a debt to a great many people.
I am grateful to my father, Mr. Amal Bose, and my mathematics teachers Mr. H.S. Mallik and Dr. M.R. Jash, who instilled in me a liking for mathematical and abstract thinking, without which I would not have thought of doing research in a subject like Discrete Event Systems.
I consider myself fortunate to have guides like Dr. Amit Patra and Dr. Siddhartha
Mukhopadhyay, each of whom played the dual roles of my supervisor and elder brother
to perfection. It was they who introduced me to this challenging research area. Each

problem tackled in this thesis is a result of endless hours of discussions and arguments
which I had with them. I am grateful to them for the inspiration, patient hearing and
criticism which they provided throughout the course of this work.
I am grateful to a number of people who, over the last few years, have helped me by providing valuable ideas and documents. They include Dr. P.P. Chakrabarty, Dr. P.P. Das, Dr. C.M. Özveren, Mr. G. Biswas, Mr. R.S. Mitra, Professors D. Sarkar, A. Basu, G.P. Rao, S. Sinha, K. Lodaya, P. Varaiya, K. Inan, S. Schneider, W.M. Wonham, S.D. Brookes, A.W. Roscoe and C.G. Cassandras.
Submission of this thesis also marks the end of my long and fruitful association with
I.I.T. Kharagpur in general and the Department of Electrical Engineering in particular.
The teachers of this Department have taught me some of the most important lessons of
life. I am grateful to the Department and to the Institute for allowing me to pursue my
research in such an excellent academic atmosphere with complete freedom. I also take this
opportunity to thank Professors K. Venkataratnam, A.K. Chattopadhyay and Y.P. Singh,
who, in their capacities as the Heads of the Department, have made available to me all the
departmental facilities that were required to conduct this work.
The nine years of my stay at I.I.T. Kharagpur have seen the entrance and exit of many friends who have influenced me in no uncertain way. I am thankful to Sujoy Bhattacharya, who is an excellent friend and has probably suffered most from my Cancerian characteristics. Sujoy, Partha Choudhury, Biswarup Chatterjee, Anindya Basu and
Subhrakanti De in the earlier days, and Debasish Bhattacharya, P.K. Dutta, Dibyendu
Baksi, Sutanu Ghosh, P.K. Maity, Atanu Datta and Partha Bannerjee in the later days,
have made my stay here all the more enjoyable. V.V. Patel, A.V.B.Subrahmanyam,
S.P.Das, J.K. Mandal, A.C. Bishwal, A.B. Chattopadhyay, and B.K. Roy were excellent
colleagues with whom I had many interesting discussions.
I am thankful to Mr. Shamit Patra who, besides demonstrating the finer points of
laboratory management, has helped me in many ways during the production of this thesis
and other documents in the TDM laboratory.
Finally, I would like to express my gratitude to my parents and other family members
who provided me with all the support and encouragement that were required to start and
finally complete such a project.

Kharagpur
Dated

Supratik Bose

Glossary

Logic

Notation    Meaning
=           equal
≠           not equal
□           end of a proof
P ∧ Q       P and Q (both true)
P ∨ Q       P or Q (one or both true)
P ⇒ Q       if P then Q
P iff Q     P if and only if Q
∃x.P        there exists x such that P
∄x.P        there does not exist x such that P
∀x.P        for all x, P

Sets

Notation        Meaning
∈               is a member of
∉               is not a member of
{}, ∅           empty set
{a}             singleton set, containing a
{a, b, c}       the set containing a, b, c as members
{x | P(x)}      the set of all x such that P(x)
A ∪ B           union of sets A and B
A ∩ B           intersection (common elements) of A and B
A \ B           set of elements of A that are not in B
A ⊆ B           A is contained in B
A ⊇ B           A contains B
2^A             family of subsets of A
∪{A | P(A)} A   union of the sets A such that P(A) is true
∩{A | P(A)} A   intersection of the sets A such that P(A) is true

Functions

Notation        Meaning
f : A → B       f is a function that maps elements of A to those of B
f(x)            that member of B to which f maps x ∈ A
injection       f such that x ≠ y ⇒ f(x) ≠ f(y)
surjection      f such that ∀y ∈ B, ∃x ∈ A with y = f(x)
bijection       injection and surjection
f⁻¹             inverse of an injection f
{f(x) | P(x)}   set of all f(x) such that P(x)
f(C)            the image of C under f
f ∘ g           f composed with g, f ∘ g(x) = f(g(x))

Traces and Events

Section   Notation         Meaning
2.1       Σ                a fixed finite collection of event symbols
2.1       Σ*               set of finite length strings (also called traces) formed with elements of Σ
2.1       <>               null string
2.1       <σ>              the string containing a single event σ
2.1       <σ1 σ2 σ3>       the string containing σ1, σ2, σ3
2.1       s^t              concatenation of the string s followed by string t
2.1       Language L       L ⊆ Σ*
2.1       prefix-closed    L is prefix closed if s^t ∈ L ⇒ s ∈ L
2.1       prefix-closure   L̄ := {s′ | s′ is a prefix of some s ∈ L}
2.1       C(Σ)             family of prefix closed languages over Σ
2.1       #s               length of the string s
2.3       s^n              s repeated n times
2.3       s↑P              projection of s on the process P
4.3       s↑C              projection of s on C
4.5       √                successful termination event
5.2       τ                silent transition

Processes and Operators

Section   Notation                          Meaning
2.1       M                                 set of marks
2.1       ·                                 fixed family of marking functions
2.1       w                                 marked process
2.1       tr w                              traces of w
2.1       ·w                                marking function of w
2.1       (W, M, ·)                         embedding set
2.1       w/s                               post-process of w after s
2.1       w = (σ1 → w1 | … | σn → wn)m      choice function
2.1       ↑n                                projection operator
2.1       ≤                                 partial order
2.1       (W, ≤, {↑n})                      embedding space
2.1       HALTm                             the Halt process with mark m
2.1       ·                                 marked process space
2.1       Q                                 state-space
2.1       W^Q                               extended process space
2.1       ·^Q                               extended marked process space
2.1       w                                 extended process
2.1       D(w)                              domain of w
2.2       ·^n                               cartesian product of n process spaces
2.2       C·                                set of constant symbols of ·
2.2       Proj^i                            i-th projection symbol
2.2       Proj(n)                           set of n projection symbols
2.2       G                                 signature over ·
2.2       G                                 set of composite functional symbols over G
2.2       <G>                               corresponding set of functions over G
2.2       ·^n(G, ·)                         syntax set over G, ·
2.2       <·^n(G, ·)>                       semantics set over G, ·
2.2       (F, <g>)                          realisation of the FRP Y = <g>(X), X = F(X)
2.2       A(<·^n(G, ·)>)                    algebraic process space over <·^n(G, ·)>
2.3       WD                                deterministic embedding set
2.3       MD                                set of deterministic marks
2.3       ·D                                set of deterministic marking functions
2.3       tr P                              traces of the process P
2.3       αP                                alphabet function of the process P
2.3       ·P                                termination function of the process P
2.3       ↑D n                              deterministic projection operator
2.3       ≤D                                deterministic partial order
2.3       ·D                                deterministic marked process space
2.3       ·D                                deterministic marked nonterminating process space
2.3       STOPA                             constant nonterminating process STOPA
2.3       SKIPA                             constant terminating process SKIPA
2.3       P = (σ1 → P1 | … | σn → Pn)A,0    deterministic Choice Operator
2.3       P = P1 ; P2                       sequential Composition Operator
2.3       P = P1 ∥ P2                       parallel Composition Operator
2.3       P^[B]                             local Change Operator LCO[B]
2.3       P[+C]                             local Change Operator LCO[+C]
2.3       P^[[B]]                           global Change Operator GCO[[B]]
2.3       P^[[+C]]                          global Change Operator GCO[[+C]]
2.3       CD                                set of constant symbols of ·D
2.3       C·D                               set of constant symbols of ·D
2.3       GD                                signature over ·D
2.3       f                                 functional symbol (also called process expression) f
2.3       g                                 g = f if g = f
2.3       len(g)                            length of g
2.3       Sub(g)                            subexpressions of g
2.3       f = g                             f and g are syntactically equivalent
2.3       f = g                             f has a structure g
2.3       f ≡ g                             f and g are semantically equivalent
3.1       GD^;                              GD ∪ {PCO}
3.1       HD^;                              · ∪ {SCO}
3.1       GD^k                              GD ∪ {SCO}
3.1       SY                                set of post-process expressions of Y
3.1       MP                                minimum present alphabet function
3.1       MA                                minimum absent alphabet function
3.1       ·F(f)                             <f>(X)(<>) where X = F(X)
3.1       ·F(f)                             <f>(X)(<>) where X = F(X)
3.1       ·F(f)                             {σ | <σ> ∈ tr <f>(X) where X = F(X)}
3.2       CF1, CF2, CF                      syntactic transformations over ·^n(GD, ·D^k)
3.2       U1, U2, V1                        subsets of ·^n(GD, ·D^k)
3.2       comp(f)                           component expressions of f ∈ CF1(·^n(GD, ·D^k))
3.2       mR                                merging relation over f ∈ CF1(·^n(GD, ·D^k))
3.2       Mm(·, ·)                          corresponding result of merging
3.2       Tree(f)                           tree representation of f
3.2       Node(f)                           nodes of Tree(f)
3.2       L(f)                              leaf nodes of Tree(f)
3.2       |L(f)|                            number of leaf nodes of Tree(f)
3.2       Closure                           closure algorithm over f ∈ CF1(·^n(GD, ·D^k))
3.2       M                                 recursive merging procedure over CF1(·^n(GD, ·D^k))
3.2       COMP(F, <g>)                      component expressions obtained from realisation (F, <g>)
3.2       VAR(F, <g>)                       variants of expressions of COMP(F, <g>)
3.3       HD                                GD ∪ {SCO, LCO[B]}
3.3       MLP                               minimum locally present alphabet function
3.3       C1, C2, C                         syntactic transformations over ·^n(HD, ·D^k)
3.3       U, V                              subsets of ·^n(HD, ·D^k)
3.3       Child(f)                          child expressions of f ∈ ·^n(HD, ·D^k)
3.3       m′R                               merging relation over C1(·^n(HD, ·D^k))
3.3       M′m(·, ·)                         corresponding result of merging
3.3       Closure′                          closure algorithm over C1(·^n(HD, ·D^k))
3.3       M′                                recursive merging procedure over C1(·^n(HD, ·D^k))
3.4       DF                                syntactic transformation over ·^n(GD^;, ·D)
3.4       R, N, N, Nnew                     nodes of the tree of post-process expressions
3.4       N.E                               process expression contained in node N
3.4       R.E                               process expression contained in root node R
3.5       B                                 bounded subclass B
3.5       H                                 hierarchy tree
3.5       V                                 set of nodes of a hierarchy tree
3.5       `                                 hierarchy relation
3.5       `+                                transitive closure of `
3.5       `*                                reflexive-transitive closure of `
3.5       H                                 bounded hierarchical subclass H
4.2       MN                                set of nondeterministic marks
4.2       WN                                nondeterministic embedding set
4.2       CHAOS                             maximal nondeterministic process
4.2       ↑N n                              nondeterministic projection operator
4.2       ≤N                                nondeterministic partial order
4.2       ·N                                nondeterministic process space
4.2       F                                 transformation from ·D to ·N
4.3       P \ C                             event concealment operator
4.3       P1 ⊓ P2                           nondeterministic choice operator
4.3       P1 ∥A P2                          asynchronous concurrency composition operator
4.3       P{·}                              local deletion operator
4.3       P[· = 0]                          local nontermination operator
4.4       GN                                signature over nondeterministic process space
4.5       L                                 subset of nonterminating, constant alphabet processes from ·N
4.5       G, G                              transformations from NCSP to L
5.2       V                                 set of variables
5.2       Q                                 underlying statespace
5.2       ·DA                               marked augmented process space
5.2       ·P                                state transition function
5.2       ·DA^Q                             marked extended process space
5.2       hP                                effect of · in process P on the state-space
5.3       P{r}                              state modification operator
5.3       P[h]                              assignment operator
5.3       PT ◁ b ▷ PF                       logical branching operator
5.3       N                                 set of natural numbers
5.4       GE                                signature over extended process space
Abbreviations : Models
Abbreviations
CCS
CFG
CFL
CSP
CVS
DCSP
DES
DFRP
EFRP
FRP
FSM
NCSP
NCSP(WOD)
NFRP
PN
SC
TTM

Full Form
Calculus of Communicating Systems
Context Free Grammar
Context Free Language(s)
Communicating Sequential Process(es)
Continuous Variable Systems
Deterministic CSP
Discrete Event System(s)
Deterministic Finitely Recursive Process(es)
Extended Finitely Recursive Process(es)
Finitely Recursive Process(es)
Finite State Machine(s)
Nondeterministic CSP
NCSP WithOut Divergence
Nondeterministic FRP
Petri net(s)
State Chart(s)
Timed Transition Model(s)

Abbreviations : Operators
Abbreviations
AO
ACO
(A/E)DCO
(A/E)GCO
(A/E)LCO
(A/E)PCO
(A/E)SCO
ASMO/ESMO
CO
ECO
LBO
LDO

Full Form
Assignment Operator
Asynchronous Concurrency Operator
(Augmented/Extended) Deterministic Choice Operator
(Augmented/Extended) Global Change Operator
(Augmented/Extended) Local Change Operator
(Augmented/Extended) Parallel Composition Operator
(Augmented/Extended) Sequential Composition Operator
(Augmented/Extended) State Modification Operator
Choice Operator
Event Concealment Operator
Logical Branching Operator
Local Deletion Operator

LNO
NCO

Local Nontermination Operator
Nondeterministic Choice Operator

Abbreviations : Properties
Abbreviations
con
ndes
sndes
MR
MRS
MRWS

Full Form
constructive
nondestructive
strictly nondestructive
Mutually Recursive
Mutually Recursive Spontaneous
Mutually Recursive Weakly Spontaneous

Chapter 1
Introduction
1.1

Discrete Event Systems (DES)

Discrete Event Systems (DES) are characterised by a set of discrete states and a set of
sequences of events, which can take place at random instants of time, causing transitions
between the states. Examples of such systems include manufacturing systems, chemical
processes, traffic systems, communication networks, robotic systems etc. Characteristically,
they are viewed as having a discrete statespace derived from associated physical variables
and are asynchronous, event driven, often nondeterministic or admit choice of events by
some unmodelled mechanisms. They consist of subsystems which evolve concurrently and
with interactions in the form of interlocks, communications via channels or shared physical
variables. The physical variables include numeric continuous variables such as a liquid level,
numeric discrete variables such as machine parts in a buffer, or non-numeric variables such
as traffic lights, relay flags or valve positions. Although the term DES has been adopted only recently, such systems are not new; researchers and practising engineers have been dealing with them for a long time in fields as diverse as manufacturing, networking, power systems and computer operating systems. In the past, however, DES were usually simple enough that intuitive or ad hoc solutions to various problems were possible. The increasing size, capability and complexity of such man-made systems, made possible by the widespread application of computer technology, have elevated them to a level where more detailed formal methods become necessary for their analysis, design and operation.
These systems stand in contrast with Continuous Variable Systems (CVS), which have continuous state spaces, execute continuous (at least piecewise) trajectories, and are described by ordinary or partial differential or difference equations. While all physical systems, either man-made or natural, behave as a CVS at a sufficiently high level of resolution, it
may be unnecessary to be concerned with such a description for a given purpose. For
example, while designing the basic configuration of a digital circuit, one is not concerned
with the analog innards of the integrated circuits. In fact, the man-made systems are
often so complex that, in order to study many aspects of their design and operation, it
becomes necessary to adopt a high level of abstraction, wherein the underlying continuous
dynamics is completely obliterated. This is often achieved by quantizing the continuous
statespace into a set of zones labelled as discrete states, while events label the (discrete)
state transitions that occur under the continuous dynamics. The need arises, therefore, for
a new modelling paradigm that can represent such systems.

1.2

Nature of DES dynamics

Typical features of a DES can be classified under the following heads.


i) Asynchronism and instantaneity of events: The evolution of the system dynamics
is viewed as a sequence of events or state transitions. Asynchrony arises because components of the system, including the environment that interacts with it, often do not possess
synchronized or global clocks. There are also possibilities of random events, such as a
machine breakdown, which are fundamentally asynchronous. The events are assumed to
be instantaneous mainly for convenience; when one does not want to model the duration
of occurrence of the event and when there is no ambiguity about the sequence of events. If
necessary, one can model the duration of an event by two subevents, namely, start-event
and finish-event, each of which can occur asynchronously, only maintaining the correct
sequence. These features also arise naturally, when an underlying CVS exists; a DES state
is associated with an (in)equality involving a point function of CVS signals and the DES
events label transitions from one DES state to another.
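The device of replacing a durative activity by an instantaneous start-event and finish-event, mentioned above, can be sketched as follows. The activity names and helper functions are hypothetical illustrations, not notation from this thesis:

```python
def split_duration(activity):
    """Replace a durative activity by two instantaneous subevents."""
    return (f"start_{activity}", f"finish_{activity}")

def well_sequenced(trace, activity):
    """Check the one constraint the split imposes: at every point of the
    trace, finishes of the activity never outnumber its starts."""
    start, finish = split_duration(activity)
    depth = 0
    for e in trace:
        if e == start:
            depth += 1
        elif e == finish:
            depth -= 1
        if depth < 0:
            return False
    return True

# The two subevents may interleave freely with other events, as long as
# the correct start-before-finish sequence is maintained.
trace = ["start_drill", "start_weld", "finish_drill", "finish_weld"]
print(well_sequenced(trace, "drill"))           # True
print(well_sequenced(["finish_drill"], "drill"))  # False
```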
ii) Reactivity: A reactive system is one that has to respond frequently to stimuli from the
environment. As a result, such systems typically possess features of real-time constraints,
task preemption, prioritized interrupts etc.
iii) Selectivity: This implies that a DES often has to choose from alternatives in a
deterministic or nondeterministic manner. A deterministic choice arises, when out of a set
of distinct events, any one can take place, but the actual choice is made by the environment
in which it is placed. On the other hand, an event taking place in a nondeterministic choice situation is actually prompted by some internal mechanism which is not visible to
the modeller.

iv) Concurrency with communication, synchronization and interlocks: Quite often a DES contains many subsystems that evolve concurrently. However, the dynamics of these
subsystems are generally interrelated, since all are required to work for the same overall
objective. The concurrent operation then appears as an interleaving of the individual
event sequences. However, the interrelations restrict the possible interleavings among the
individual sequences. Thus a particular task may require joint participation of two or
more components, in which case, the corresponding events in the respective processes get
synchronized. Similarly, an interlock may prevent or block the occurrence of an event in
a subprocess until another event occurs in another given subprocess. Interlocks are often
needed when a single resource is to be shared among a number of processes. Communication
among the subprocesses is necessary to ensure synchronization or blocking. This can take
place either as direct broadcast or point-to-point communication or indirectly via shared
variables.
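The way shared events restrict the possible interleavings can be illustrated with a small sketch. The function below enumerates the interleavings of two event sequences that synchronise on their common events, in the general spirit of parallel composition; it is an illustrative sketch, not the composition operator developed in this thesis:

```python
def interleavings(s, t, shared):
    """All interleavings of traces s and t in which events from `shared`
    occur jointly (synchronised) and all other events interleave freely."""
    if not s and not t:
        return [[]]
    out = []
    # A private event of either component may occur on its own.
    if s and s[0] not in shared:
        out += [[s[0]] + rest for rest in interleavings(s[1:], t, shared)]
    if t and t[0] not in shared:
        out += [[t[0]] + rest for rest in interleavings(s, t[1:], shared)]
    # A shared event needs joint participation of both components.
    if s and t and s[0] == t[0] and s[0] in shared:
        out += [[s[0]] + rest for rest in interleavings(s[1:], t[1:], shared)]
    return out

# "load" is shared, so both components must take part in it jointly;
# only the private events "a" and "b" interleave freely.
print(interleavings(["a", "load"], ["b", "load"], {"load"}))
# [['a', 'b', 'load'], ['b', 'a', 'load']]
```

Without the synchronisation constraint there would be six interleavings of two two-event traces; the shared event cuts these down to two.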
v) Modularity and hierarchical features: These are typical means of handling large and
complex systems, and man-made systems almost always exhibit these features. Modularity
results when a system is decomposed into units that are largely autonomous, have well-defined input-output interfaces and perform complete well-defined subtasks. Hierarchy is
an attempt to aggregate these modules, retaining sufficient details for a given problem,
but allowing the flow of the minimum necessary information only from a lower to a higher
level.

1.3

Logical Models of DES: State of the Art

Research activities in the area of DES can be categorised broadly into two parts: (i) Analysis
and synthesis of logical behaviour of DES, (ii) Quantitative analysis and performance
evaluation of timed and stochastic behaviour of DES.
In the first case one is interested in the logical structure of the models, addressing
qualitative issues such as logical correctness, reachability, deadlock avoidance, language
equivalence, liveness, fairness etc. These are structural properties and are tested for arriving at admissible designs for manufacturing cells, sequential control logic or communication
protocols etc.
In the second case, one is interested in the deterministic and stochastic behaviour of
the systems, computing performance measures such as throughput, mean time between
failures, average up and down time etc. For example, in a flexible manufacturing system
(FMS), one may like to know the production rate or the capacity utilization. Such quantitative measures are necessary for solving high-level problems such as production planning
and resource scheduling in FMS, message routing in communication networks, traffic management in transportation systems, etc. Most often these questions are answered through
extensive simulation [1]. Standard simulation languages like SIMULA [2], GPSS [3] are
used for this purpose. Different tools from the field of Operations Research, like perturbation analysis [4], Markov chains and queueing models [5, 6, 7, 8], min-max algebra [9]
etc. are also used for analytical solutions. A comprehensive introduction to the problem
of performance evaluation and analysis of timed behaviour can be found in [10].
There are two distinct approaches to modelling and analysis of logical behaviour of
DES: state-based approaches and process algebraic approaches.
The first class of models explicitly captures the states and associated transitions in a
graphical structure. The state here refers to the physical state of the system resources
and variables, as viewed from a certain level of abstraction. State transitions take place in
response to events occurring in the system. Typical examples of this class of models are
Finite State Machines (FSM) [11], Petri Nets (PN) [12], Timed Transition Models (TTM)
[13], State Charts (SC) [14, 15], Temporal Logic (TL) [16] etc.
The other class of models does not use the concept of state explicitly, but provides a
recursive algebraic description of the event dynamics in terms of elementary processes
and a set of process operators. Such models are called process algebra models. Early
formalisms in this class include the Communicating Sequential Processes (CSP) [17] and
Calculus of Communicating Systems (CCS) [18] which were used for mathematical modelling of concurrent and communicating systems. Comprehensive introduction to process
algebra models, as applied mainly to model parallel and distributed computing, can be
found in [19, 20]. More recently, the framework of Deterministic Finitely Recursive Processes (DFRP) [21, 22] has been derived from CSP and proposed as a general modelling
tool for deterministic DES. Most of the process algebra models are theoretical and abstract
in nature. Based on them and their variations, a number of programming languages including Ada [23], OCCAM [24], LOTOS [25], ESTEREL [26], SIGNAL [27], LUSTRE [28]
and CSML [29] have been developed for parallel and distributed computing. Though these
programming languages themselves can describe DES behaviour, they contain too much
details and are not considered suitable formalisms for conceptualisation, analysis of dynamics and synthesis of supervisors, which are major aims of studying DES. Once a strategy
of supervision or the complete dynamic description is obtained using some abstract model
of DES, these languages may be used for the actual implementation of a simulation or
real-time control using a computer.
In state based models, it is easy to specify system dynamics via the one step transition
functions. Relatively easy extensions to stochastic models such as Markov chains are also
possible in these models. More often than not, the state based models have graphical
representations, which are helpful for an intuitive understanding of the dynamics of the
systems. Handling numerical information is also quite easy in these models. Finally, the
rich repertoire of concepts and techniques developed in traditional continuous variable
system theory can be utilised for analysis and synthesis of state based DES models. Under
the constraint of finite state space, in these models, different properties are solvable in
principle. In fact many problems related to control and observation have been posed and
solved in these frameworks [11, 30].
In practice however, even for a reasonably sized physical system, a state based description becomes unwieldy because of the large number of states that are required to model
the system. Consequently, associated algorithms for, say, constructing controllers may become computationally prohibitive. One solution to this problem may be the computation
of online but partial solutions [31]. The other possibility is to use structured frameworks
like State Charts instead of flat FSM type models and to use efficient graph algorithms
that exploit the structures and avoid exhaustive state-space explorations [32].
On the other hand, the process algebra models offer the advantage of compact description of DES. A set of operators is available for composition of processes in these
models. The different operators are useful in capturing different features like concurrency,
re-entrant and recursive mechanisms, reusability of descriptions etc. These operators result in an algebra of process expressions. The most powerful feature of these models is the
facility of defining processes via recursion and fixed point equations. Together, it is often
possible to describe DES more compactly than in a state-based formalism. Problems related
to program verification, termination, synthesis have been attempted in CSP, a well known
process algebra model.
The price to be paid for this compact and enhanced descriptive power is the tractability
of problems using these models. Many important issues like boundedness, reachability,
deadlock etc. are undecidable [33] in these models. In cases where they are decidable, it
is often necessary to convert these high level models into low level state based models
before analysis of these issues can be attempted. A major research problem is therefore
to identify subclasses of general process algebraic models which can strike a good tradeoff between modelling flexibility on one side and solvability and effective computability of
basic issues on the other side. Moreover, as in FSM, PN or TTM, extensions and tools are
required to capture features of time, numerical information, nondeterminism etc., within
the framework of process based models.
Some of the typical problems that are sought to be addressed in the context of modelling
(specification), analysis (verification) and synthesis of logical behaviour of DES, arising in
various application areas, are mentioned below. While there may be several other domain-specific questions, in the following, a few problems of generic significance are mentioned.
i) Analysing Logical Behaviour: Some important problems under this category are those
of:
a) Boundedness: This property implies that the state space of the model is finite. A
finite state-space ensures that problems on the model are solvable at least in principle.
However, it is possible to describe the behaviour of the same system using both bounded
and unbounded models [33]. In such a case, the use of an unbounded model deprives
one of the assurance of existence of solutions. Therefore, before attempting to construct
algorithms for solution of any problem on a class of system models, one should at least know
whether the problem is decidable for that class. Detection of boundedness is a sufficient
condition for decidability of these problems.
b) Reachability: One is often interested in determining whether a particular state is
reachable from another or any arbitrary initial state. The state information may be coded
in terms of numerical parameters. A major problem of reachability analysis is that of
state explosion, which leads to high computational complexity. To improve the situation,
techniques have been suggested to search the most probable paths [34], to exploit structure
and symmetry in the model [32], etc.
c) Liveness/Deadlock: A system is said to be live, if under any circumstances, there is
at least one event that can take place. Deadlock is the reverse situation, where the system
has halted and cannot proceed further. Deadlock arises when all component subprocesses
of the system have enabled events, but each is blocked by at least one of the other
sub-processes. Such situations are obviously undesirable. The deadlock problem can be considered a special case of the reachability problem. The problem of designing computationally
efficient algorithms to identify and prevent deadlocked situations has received wide attention in the process algebra and parallel programming language research community. The
majority of early work on deadlock and liveness was cast in terms of resource allocation
problems, as in operating systems [35]. At present, most of the work on deadlock is formulated using Hoare's CSP formalism [17, 36, 37, 38]. A comprehensive
treatment of deadlock can be found in [39].
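For a model that has already been reduced to an explicit finite transition system, both reachability and deadlock detection come down to graph search. The sketch below is purely illustrative; the representation and all names (`transitions`, `reachable_states`, `deadlock_states`) are our own and do not come from any of the cited formalisms:

```python
from collections import deque

def reachable_states(transitions, initial):
    """Breadth-first search over an explicit finite transition relation.

    transitions: dict mapping state -> {event: next_state}.
    Returns the set of states reachable from `initial`.
    """
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def deadlock_states(transitions, initial):
    """A reachable state with no enabled event is a deadlock."""
    return {s for s in reachable_states(transitions, initial)
            if not transitions.get(s, {})}

# A toy two-state machine: state 'd' is reachable and has no enabled event.
T = {'a': {'start': 'b'}, 'b': {'done': 'a', 'fail': 'd'}, 'd': {}}
assert reachable_states(T, 'a') == {'a', 'b', 'd'}
assert deadlock_states(T, 'a') == {'d'}
```

For finite state spaces this search always terminates; the state-explosion problem mentioned above shows up as the size of the `seen` set.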
ii) Program Verification and Real-time Properties: Process algebra based DES models
(CCS, CSP, DFRP, etc.) as well as the TTM model contain many general purpose programming language features and can be used as rudimentary specification languages in order to
make unambiguous specifications. Elaborate specification languages like Ada, OCCAM,
LUSTRE, LOTOS, ESTEREL, SIGNAL, CSML, etc. have been developed, using process algebra models as a basis, and additionally providing syntax and semantics. Different
program verification techniques [16, 40, 41] can be used to verify properties of the process
algebra models. In these methods, one makes assertions about program execution, before
and after every instruction, which are also known as pre-conditions and post-conditions
of the instruction respectively. Inference rules are used to reason about the whole program by combining assertions about the instructions. Since in many cases, real-time and
safety-critical reactive systems are modelled using the process algebra and TTM models,
real-time properties, like safety and timing constraints, are also verified using the program
verification methods [16, 42].
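The pre-condition/post-condition style of assertion described above can be mimicked directly with executable checks. The `checked` decorator below is a hypothetical construction of our own for illustration; it is not part of any of the cited verification systems:

```python
def checked(pre, post):
    """Enforce a pre-condition on the argument and a post-condition
    relating argument and result, in the spirit of program-verification
    assertions (illustrative only)."""
    def wrap(f):
        def g(x):
            assert pre(x), "pre-condition violated"
            y = f(x)
            assert post(x, y), "post-condition violated"
            return y
        return g
    return wrap

# Hoare-style triple:  {x >= 0}  y := x + 1  {y > x and y > 0}
@checked(pre=lambda x: x >= 0, post=lambda x, y: y > x and y > 0)
def increment(x):
    return x + 1

assert increment(41) == 42
```

Inference rules then play the role of combining such per-instruction assertions into a claim about the whole program.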
iii) Supervision/Control: The kind of control one can exercise in a DES is mainly of a
restrictive nature, since there is no mechanism to force an event to take place in general.
This has been adopted to recognise the existence of events which are uncontrollable (such as
faults) and therefore cannot be prevented. In the absence of such events, however, one can force
an event to occur by preventing all other enabled events from occurring. One, therefore,
is interested to know, whether, prevention of inadmissible or non-optimal behavior of the
system is possible by systematically disabling a set of controllable events [31], [43]–[49].
The DES, whose task is to keep track of the dynamic evolution of the controlled DES,
and decide on the control action to be exercised, is known as a supervisor. The design of a
supervisor is difficult because of many reasons like nondeterminism in the plant, existence
of uncontrollable events such as machine breakdown, hard time-constraints, insufficient
information feedback from the plant, etc. The desired behavioural pattern can be stated in
several ways, for example in the form of a set of reference trajectories or by specifying a set
of states to be avoided/visited infinitely often. The last requirement is sometimes related
to a notion of stability of the system, and the system is said to be stabilizable if this can
be achieved under control [50]–[53].
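The restrictive nature of DES control can be illustrated on a single step: the supervisor sees the set of currently enabled events and may remove only the controllable ones. A minimal sketch, with names of our own choosing:

```python
def supervise(enabled, controllable, allowed):
    """Restrictive control: the supervisor may disable controllable
    events only; enabled uncontrollable events always get through.
    (Illustrative sketch; names are ours.)"""
    uncontrollable = enabled - controllable
    return uncontrollable | (enabled & controllable & allowed)

enabled = {'start', 'stop', 'breakdown'}
controllable = {'start', 'stop'}
# Policy: forbid 'stop' now; 'breakdown' is uncontrollable and survives.
assert supervise(enabled, controllable, {'start'}) == {'start', 'breakdown'}
```

Note that forcing an event, as discussed above, corresponds to an `allowed` set that leaves exactly one controllable event enabled, while uncontrollable events such as `breakdown` can never be removed.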
iv) Observation: The events taking place in a DES may not always be observed by other
agents in its environment, for example when the system is physically distributed over a vast
geographical area, such as communication networks. It may then be necessary to design a
supervisor under partial observation. One is therefore interested in determining whether
the DES is observable, i.e., whether one can determine its state from the observed sequence
of events alone. If it is not possible to ensure this at every instant, it may sometimes suffice
to know the state intermittently with only a finite number of events in between [54]–[57].
In order to get an overview of the various modelling frameworks, a table is presented
at the end of this section comparing them on the basis of the answers to the following
questions:
Q1. Is the reachability problem decidable?
Q2. Is the liveness/deadlock problem decidable?
Q3. What is the language complexity?
Q4. Can it handle nondeterministic behaviour?
Q5. Does it support hierarchical structure?
Q6. Is the problem of control well-solved?
Q7. Is the problem of observation well-solved?
Q8. Is the problem of stabilization well-solved?
Q9. Can real-time features be handled ?
Q10. Do there exist good analytical tools for analyzing the model?
The possible answers in most cases are Y (Yes), QY (Qualified Yes), N (No) and ?
(Unknown). Question 3, however, has the following possible answers: R (Regular), RE
(Recursively Enumerable), CS (Context Sensitive), CF (Context Free), DPN (Deterministic
PN) and + (more than). The frameworks which are compared are the FSM, TTM, PN,
Statecharts, CSP, CCS and DFRP. Since one is in general concerned with finite state versions of the first four systems, most of the questions related to them are decidable. Things
are exactly the opposite for the last three. Work on control and stabilization has till
now been restricted only to the first four, although not yet well-solved in all of them.
Hierarchy is inherently supported only in the Statecharts model, while real time
features are inherent only in the TTM. Such features have however been imposed on some
of the other frameworks as well. Finally, graph and set-theoretic tools are available for the
first four frameworks, in addition to temporal logic for TTM, while for the last three only
logic and proof rules can be applied.

Model   Q1   Q2   Q3     Q4   Q5   Q6   Q7   Q8   Q9   Q10
FSM     Y    Y    R      Y    N    Y    Y    Y    QY   Y
TTM     QY   QY   RE     Y    QY   QY   N    N    Y    Y
PN      QY   QY   RE     QY   QY   Y    ?    ?    Y    Y
SC      QY   QY   ?      Y    Y    QY   QY   QY   ?    Y
CSP     N    N    RE     Y    QY   N    QY   N    QY   QY
CCS     N    N    RE     Y    QY   N    Y    N    ?    QY
DFRP    N    N    DPN+   N    QY   N    N    N    QY   QY

1.4 Motivation and Contribution

In the present work, investigation has been carried out on the DFRP model of DES.
The DFRP model, designed by Inan and Varaiya ([21]), is a relatively new entrant in
the field of DES. The model is an enhancement over the Deterministic CSP (DCSP) model
of Hoare ([17]) to include the concept of variable alphabet and a number of new operators.
The model aims at describing complex DES in terms of an algebra consisting of a few basic
terms (also known as processes) and a number of operators, which can be used to form
complex terms from simpler ones. The basic terms represent some simple DES behaviour.
The different operators arise naturally from the physical fact that simpler DES indeed
operate concurrently or sequentially producing complex behaviour. The operators can
model the choice of an event among different alternatives, concurrent operation of processes,
sequential operation of processes and local and global modification of the event sets of
processes. In a subsequent work [22], the DFRP model has also been used to construct a
general process algebra, from which, different logical formalisms of DES can be obtained
by using suitable marking axioms. The DFRP model has large descriptive power and
it can model FSM, PN and DCSP. Cieslak, in a recent thesis [58], has treated three
aspects of DFRP formalism: simulation, logical analysis and real-time semantics. A LISP
based universal simulator [59] has been designed to simulate the traces of a DFRP by
performing symbolic manipulations. Regarding logical analysis, it has been shown that
Finitely Recursive Processes are powerful enough to model any Turing Machine [60]. As
a result, several important problems such as deadlock, boundedness, reachability etc. are
undecidable, i.e., there is no algorithmic solution to them. Finally, Cieslak has suggested a
method of adding real-time semantics to a process in a way that depends on its realizations.
Since a process may have several realizations, the problem of finding the fastest realization
is important for implementation considerations. A special case of the problem has been
solved by him. Low has shown the use of DFRP in specification of communication protocols
[61].
In spite of its modelling advantages, the DFRP model lacks several features that are considered useful in modelling practical situations, as described below.
Since major system properties like boundedness, reachability, deadlock, language
equivalence etc. are not decidable in this model, one cannot validate and analyse systems,
expressed in this model, let alone construct supervisors and observers.
Nondeterminism is an important feature of DES models. Sometimes a system has a
range of possible behaviour, but the environment or the user may not have the ability
to influence, or even observe the selection between the alternative courses of behaviour.


Nondeterministic models arise either from a deliberate decision to ignore the factors that
influence this selection, or from the part of the dynamics that is not visible from the given
vantage point of the user. Thus, nondeterminism is necessary in maintaining a high level
of abstraction in description of the behaviour of physical processes. Also, for construction
of a supervisor that is required to work under partial observation, it is imperative that a
nondeterministic model or observer be used, possibly along with the underlying deterministic model of the process. DFRP, being a deterministic framework, cannot capture event
concealment and nondeterminism.
Another important handicap of the DFRP model is that it does not have any provision
to handle numerical as well as non-numeric information. Logical decisions based on this
kind of information (state), which are so common in real world DES, cannot be modelled by
DFRP. Also, clocks and real time features like scheduling driven by hard deadlines cannot
be modelled here.
The above observations motivate us to work in the following two directions:
(i) analysis of the DFRP framework to find out the basic reason for undecidability of
the associated problems and to identify general subclasses for which these problems are
solvable or at least decidable;
(ii) extension of the formalism to include nondeterminism and variables (state).
We end this section with a chapterwise summary of contribution of the thesis.
Chapter 1. The present chapter has discussed the basic features of DES and has
given a brief survey of the state of the art of the logical models of DES. This chapter
has also explained the motivation behind the current work.
Chapter 2. The basic mathematical ideas behind process algebra models in general, and for the DFRP model in particular, are presented here. While presenting
the general algebra of DES models (following [22]), some modifications have been
introduced. These include (a) a modification regarding a general property of the projection operators in the definition of the embedding space; (b) distinctions between
syntax and semantics of process based models; (c) a generalised concept of mutually
recursive family of functions; (d) relaxation of the condition for existence of unique
solution of recursive equations to include weak spontaneous functions.
Chapter 3. In this chapter, an analysis has been carried out to characterise the
boundedness property of the DFRP model and its subclasses. It has been shown


that (a) for the subclass of DFRPs built with operators for sequential composition
and all types of event set modification, boundedness is decidable, in spite of the infiniteness of the underlying set of functions; (b) for the subclass of DFRPs built with
operators for concurrent operation and all types of event set modification, the underlying set of functions is infinite. However, in spite of this, DFRPs belonging to
this subclass are bounded under a suitable post-process computation procedure; (c)
the set of functions built with operators for concurrent operation and some types
of event set modification, is finite; (d) a number of semantics preserving syntactic
transformations are necessary to prove the above results. These have been defined
and they play a role, similar to the canonical transformations in linear system theory,
in process simplification; (e) the above two bounded classes can be used to construct
bounded and hierarchical subclasses of DFRP in which both parallel and sequential
composition operators can be used in a way such that a good trade-off between modelling flexibility and tractability of problems can be achieved. The results obtained
in this work have been communicated as [62, 63].
Chapter 4. Motivated by the need for nondeterminism, a nondeterministic extension of the DFRP model has been attempted in this chapter. The main features are
the following. (a) A possible future based model of nondeterministic processes is
introduced. (b) A substantial collection of operators is defined and their properties
are discussed in detail. (c) A finitely recursive characterisation of the nondeterministic process space has been obtained in terms of these operators. The results obtained
in this work have been communicated as [64].
Chapter 5. In this chapter, to address the problem of introducing variables and
states in DFRP model, the idea of extended processes [22] has been used to provide
a state based extension of the DFRP model. The major contributions are as follows. (a) Unlike [22], concurrent processes have been allowed to share variables in
this model. (b) A number of operators have been defined and different constraints
that are required to maintain the state continuity in a shared memory framework
are discussed. A finitely recursive characterisation has been obtained in terms of
these operators. (c) The concept of silent transition presented here enables constant monitoring of state-based conditions in the face of state transitions caused by
environmental actions. (d) The enhanced framework has been shown to be able to
model arbitrary TTMs. The results obtained in this work are published in [65, 66].
Chapter 6. The utility of the proposed hierarchical, nondeterministic and state-based
extensions of the DFRP model has been illustrated here through a number
of practical examples drawn from various fields. (a) A manufacturing shop floor
has been modelled using a bounded hierarchical DFRP which uses both parallel and
sequential operators. (b) The dynamics of a fault diagnosis session has been modelled
using the nondeterministic extension. (c) Finally, the dynamics of a robot controller
that interfaces between a robot and its environment has been modelled using the
state-based extension of the DFRP model.
Chapter 7. Conclusions from the work are presented in this chapter along with
some suggestions for future work.
In the next chapter we review the mathematical framework to be adopted in this thesis.

Chapter 2
Mathematical Background
In this chapter, a comprehensive mathematical background is presented which will
be useful in understanding the DFRP model of DES and the subsequent developments
on the model that have been carried out in this work. Most of its contents are from [21] and
[22]. It was however necessary to introduce some modifications and generalisations without
changing the basic theme of [21] and [22].
To begin with, following [22], a general process algebra for logical models of DES is
presented here, which can be tailored to recover existing formalisms and to obtain new
model families.

2.1 Marked Processes

There exist a number of frameworks to model the logical behaviour of DES. Most of them
share some common major components, like a collection of event symbols representing
individual significant events in the physical system. The behaviour of a particular DES
is determined by the set of sequences of these event symbols that are possible in it,
along with some representation of the state or mark of the system after the occurrence of
an event.
Let Σ be a fixed finite collection of event symbols. Σ* is the set of all finite length
strings formed with elements of Σ, including the null string <>.
As an example, let Σ = {a, b}. Then Σ* = {<>, <a>, <b>, <ab>, <ba>, <aa>, <bb>, ...}.
Given any two strings s and t, s^t denotes the concatenation of the strings s and t.
For example, <ab> ^ <ac> = <abac>. Also <> ^ s = s ^ <> = s. L ⊆ Σ* is called a
language over Σ and it is prefix closed if s^t ∈ L ⇒ s ∈ L. C(Σ*) denotes the family of
prefix closed languages over Σ. For example, L = {<>, <a>, <b>, <aa>, <aab>, <aaa>}
is a prefix closed language over {a, b}.
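Prefix closure is easy to state computationally. In the sketch below, an illustration of our own, traces are encoded as Python tuples of event symbols, with the empty tuple () playing the role of the null string <>:

```python
def prefixes(s):
    """All prefixes of a trace, including the null string ()."""
    return {s[:i] for i in range(len(s) + 1)}

def prefix_closure(language):
    """Smallest prefix-closed language containing `language`."""
    closed = set()
    for s in language:
        closed |= prefixes(s)
    return closed

def is_prefix_closed(language):
    return prefix_closure(language) == set(language)

# The example language L over {a, b} from the text.
L = {(), ('a',), ('b',), ('a', 'a'), ('a', 'a', 'b'), ('a', 'a', 'a')}
assert is_prefix_closed(L)
assert not is_prefix_closed({('a', 'b')})   # missing ('a',) and ()
```

A trace set such as tr w below is required to be prefix closed in exactly this sense: every partial history of an observed event sequence is itself an observable history.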
Marking corresponds to an assignment of values to a set of mathematical objects associated with a process after some event has taken place. This in turn determines the
immediate future of the process. Each such assignment is called a mark.
Definition 2.1.1 (Marking:) Let M be a set of marks and Φ be a fixed family of functions
from Σ* to M such that φ ∈ Φ, s ∈ Σ* ⇒ φ/s ∈ Φ, where φ/s(t) := φ(s^t), for all t ∈ Σ*.
Definition 2.1.2 (Marked Process and Embedding Set:) A marked process is a
tuple w = (tr w, φw) where tr w ∈ C(Σ*) and φw : tr w → M; φw is the marking
function that determines the mark after s. The underlying global set of processes, also
called an embedding set, is denoted as W_{Σ,M,Φ} (often simply as W) and is defined as the
cartesian product C(Σ*) × Φ. The prefix closed language tr w is also known as the set of
traces of w, and φw(s) is called the marking of the trace s.
Definition 2.1.3 (Post Process:) A process w, after executing a string s, behaves as its
post-process w after s, denoted by w/s. It is formally defined as
tr w/s := {t | s^t ∈ tr w} and φ(w/s)(t) := φw/s(t) = φw(s^t).
Definition 2.1.4 (Choice Function:) Given, for i = 1, ..., k, wi ∈ W, σi ∈ Σ, and an
initial mark m ∈ M, a new process w, denoted as w = (σ1 → w1 | ... | σk → wk)_m, is
defined as follows.
tr w := {<>} ∪ ⋃_{i=1}^{k} {<σi>^s | s ∈ tr wi}. Also
φw(<>) := m; φw(<σi>^s) := φwi(s).
The process w captures the fact that its environment can choose any of the events from
{σ1, ..., σk} to take place in it at the initial instant. It behaves as the process wi after
the occurrence of the event σi. In other words, processes unfold their traces like a tree.
Any process first exercises its choice of events from those that are possible (i.e., events
included in traces of length 1) and then behaves as the corresponding post process.
Definition 2.1.5 (One Step Expansion Formula:) For any w ∈ W, if tr w ≠ {<>}
then w = (σ1 → w/<σ1> | ... | σk → w/<σk>)_{φw(<>)} where
{<σi> | i = 1, ..., k} = {s ∈ tr w | #s = 1}. Here #s denotes the length of s.
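The choice function, post-process and one-step expansion can be played out concretely. In the sketch below, a process is modelled simply as a dict mapping each trace (a tuple of events) to its mark; this finite-trace encoding is our own illustrative simplification of the pair (tr w, φw), not the notation of [21, 22]:

```python
def choice(branches, m):
    """The choice construct (sigma_1 -> w_1 | ... | sigma_k -> w_k)_m:
    prefix each branch's traces with its event; the empty trace gets m."""
    w = {(): m}
    for event, wi in branches.items():
        for s, mark in wi.items():
            w[(event,) + s] = mark
    return w

def post(w, s):
    """Post-process w/s: the behaviour of w after executing s."""
    n = len(s)
    return {t[n:]: mark for t, mark in w.items() if t[:n] == s}

HALT = lambda m: {(): m}   # the do-nothing process HALT_m

w = choice({'a': HALT(1), 'b': choice({'c': HALT(2)}, 3)}, 0)
assert post(w, ('b',)) == choice({'c': HALT(2)}, 3)

# One-step expansion: w is rebuilt from its length-1 traces
# and the corresponding post-processes.
first = {t[0] for t in w if len(t) == 1}
assert choice({e: post(w, (e,)) for e in first}, w[()]) == w
```

The final assertion is exactly Definition 2.1.5: any non-halting process equals the choice over its initial events of its post-processes, marked with its initial mark.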
To relate the elements of the embedding set W, a partial order ⪯ on W is introduced.
The complete definition of ⪯ may be made for specific cases appropriately. As will be seen
later, depending upon its exact definition, ⪯ may capture concepts of sub-processes, more
nondeterministic processes etc.
Also a sequence of projection operators {↑n : W → W | n = 0, 1, ...} is defined,
with the implied meaning that w↑(n+1) is a process that has the same dynamics (trace
and mark) as w for strings up to length n+1. Beyond that, w↑(n+1) may differ from w.
Obviously w↑(n+1) is a better approximation of w compared to w↑n. The exact definition
of {↑n} is again made appropriately for specific cases.
Definition 2.1.6 (Embedding Space:) An embedding space is a triple (W, ⪯, {↑n}),
where W = W_{Σ,M,Φ} is an embedding set, ⪯ is a partial order and {↑n : W → W | n =
0, 1, ...} is a family of projections mapping W into itself, satisfying the following properties:
1) (w↑0)↑n = w↑0, ∀n ≥ 0. In a sense the ↑n operator is causal on the indexing
sequence.
2) φw(<>) = φv(<>) ⇒ w↑0 = v↑0.
3) If w = (σ1 → w/<σ1> | ... | σk → w/<σk>)_{φw(<>)} then
w↑n = (σ1 → w/<σ1>↑(n−1) | ... | σk → w/<σk>↑(n−1))_{φw(<>)}. Here w↑n
is the image of w under ↑n. If tr w := {<>} then ∀n > 0, w↑n := HALT_{φw(<>)}, where,
for m ∈ M, HALT_m is the process having tr HALT_m := {<>}, φHALT_m(<>) := m.
4) If {wi} is a chain (that is, ∀i, wi ⪯ wi+1) then there exists a least upper bound (l.u.b.)
of {wi} in W. Thus (W, ⪯) is a complete partial order.
5) If {wi} converges to w then <σ> ∈ tr w ⇒ ∃l such that ∀i ≥ l, <σ> ∈ tr wi. Also wi/<σ>
⪯ wi+1/<σ> and {wi/<σ>} converges to w/<σ>.
6) If {wi} is a chain converging to w, u1, ..., uk are processes in W, σ, σ1, ..., σk are
distinct events, m is a mark, v := (σ → w | σ1 → u1 | ... | σk → uk)_m, and, for each i,
vi := (σ → wi | σ1 → u1 | ... | σk → uk)_m, then {vi} is a chain converging to v. Thus the choice
function preserves ordering of processes.
7) {w↑n}n≥0 is a chain converging to w.
8) If {wi}i≥0 is a chain converging to w then ∀n, {wi↑n}i≥0 is a chain converging to w↑n.
Remark 2.1.1 In the 3rd condition of definition 2.1.6 a change has been made in the
definition of w↑n from the one that is given in [22]. There, for all n, w↑n was defined as w↑0
whenever tr w = {<>}. Here we have defined that if tr w := {<>} then, for all n > 0, w↑n :=
HALT_{φw(<>)}.
Intuitively, since the information tr w = {<>} is available, the reasonable approximation w↑n of w for n > 0 should be a do-nothing HALT_m process. It can be easily
checked that this modification doesn't affect any result in [22]. Moreover, if the old definition is retained, then difficulties are encountered, as will be shown later, while describing
the process operators for our nondeterministic process space.
Fact 2.1.1 If ↑0 : W → W satisfies (w↑0)↑0 = w↑0 and (2) of definition 2.1.6, then there
exists a unique collection of maps {↑n, n ≥ 0} satisfying (1) to (3) of definition 2.1.6.
Also
w↑n↑m = w↑min(m, n).
For a proof of this, see [22].
Definition 2.1.7 (Marked Process Space:) A subset Π = (Π, ⪯, {↑n}) of the embedding space (W, ⪯, {↑n}) is called a marked process space if it is closed with respect to
the following operations.
a) Projection: P ∈ Π ⇒ ∀n ≥ 0, P↑n ∈ Π. It is known as the axiom of projection.
b) Postprocess: P ∈ Π, s ∈ tr P ⇒ P/s ∈ Π. It is called the axiom of post-process.
c) Limit: if {Pi}i≥0 is a chain converging to P such that ∀i ≥ 0, Pi ∈ Π, then P ∈ Π. It
is known as the axiom of completeness.
d) Choice: for i = 1, ..., k, the existence of σi ∈ Σ and processes R, P1, ..., Pk ∈ Π such
that R/<σi>↑0 = Pi↑0 and
R = (σ1 → R/<σ1> | ... | σk → R/<σk>)_{φR(<>)}
implies P ∈ Π, where
P := (σ1 → P1 | ... | σk → Pk)_{φR(<>)}.
It is called the axiom of prefixing.
Elements of Π are called marked processes. Any subset of Π which also satisfies the
above axioms is called a process subspace.
In general, a process space is specified by an embedding space W (by specifying its
partial order and projection operators that satisfy the conditions of definition 2.1.6) and a
set of marking axioms that define a subset Π of W which is closed w.r.t. the above four
operations.
The following important fact, proved in [22], shows that the marking axioms need only
specify the local behaviour.
Fact 2.1.2 Let W be an embedding space. Let W0 ⊆ W↑0 and W1 ⊆ W↑1 be subsets
satisfying the following consistency conditions:
1) W1↑0 = W0.
2) (w ∈ W1) ∧ (<σ> ∈ tr w) ⇒ (w/<σ> ∈ W0).
Then there exists a unique marked
process space Π ⊆ W such that Π↑0 = W0 and Π↑1 = W1. The converse is also true,
i.e., if Π ⊆ W is a marked process space, then W0 := Π↑0 and W1 := Π↑1 satisfy the
above consistency conditions.
The above fact states that the marked process space Π is determined by specifying all
the processes of the type w := (σ1 → w01 | ... | σk → w0k)_m where m ∈ M, σi ∈ Σ,
w0i ∈ W0. Other processes in Π are obtained from these one shot processes using cut
(projection and postprocess) and paste (prefixing) operations.
The following example, taken from [22], shows how an established logical framework
like the Petri Net (PN) can be viewed as a marked process space, with suitable marking
axioms.
Example 2.1.1 (Petri Nets:) The structure of a Petri Net [12] (with injective labelling
of transitions) is indexed by a set of transitions labelled by event symbols σ ∈ Σ, a collection
of places, namely {p1, ..., pN}, and a family of input and output transitions, Li, Ki, i =
1, ..., N, where Li and Ki denote, respectively, the set of transitions into and out of the
place pi. The state of the net is given by (n1, ..., nN) where ni ≥ 0 is the number of tokens
in place pi. Any Petri Net P also has a fixed initial state given by some (nP01, ..., nP0N).
Let ZN be the set of all N-tuples zN = (n1, ..., nN), ni ≥ 0. Also let TN be the set of all
N-tuples tN = ((A1, B1), ..., (AN, BN)), where Ai, Bi are subsets of Σ.
One can express the set of Petri Nets as a marked process space satisfying the following
axioms.
(i) M := ⋃_{N>0} {(A, tN, zN) | A ⊆ Σ; tN ∈ TN, zN ∈ ZN};
(ii) each Petri Net process P = (tr P, φP), where tr P ∈ C(Σ*) and φP : tr P → M, is
such that φP(s) = (A, tN, zN) implies A is the set of enabled events, tN is the set of
incoming arcs (joining transitions to places and labelled by the transition symbol) and
outgoing arcs (joining places to transitions and labelled by the transition symbol) related
with the N places, and zN is the state of the Petri Net after s has been generated;
(iii) φP(s) = (A, tN, zN) and φP(t) = (B, t′N′, zN′) ⇒ (N = N′) ∧ (tN = t′N′); in other
words, the Petri Nets considered here have a fixed topology;
(iv) φP(s) = (A, tN, zN) ⇒ A = {σ | ni > 0 for every i with σ ∈ Ki, where zN = (n1, ..., nN) and
tN = ((L1, K1), ..., (LN, KN)) is the fixed topology};
(v) s^<σ> ∈ tr P ⇔ σ ∈ A, where φP(s) = (A, tN, zN);
(vi) s^<σ> ∈ tr P and σ ∈ Li − Ki ⇒ ni(s^<σ>) = ni(s) + 1, where ni(s) represents
the i-th component of the N-tuple zN such that φP(s) = (A, tN, zN);
(vii) s^<σ> ∈ tr P and σ ∈ Ki − Li ⇒ ni(s^<σ>) = ni(s) − 1;
(viii) s^<σ> ∈ tr P and σ ∈ Li ∩ Ki ⇒ ni(s^<σ>) = ni(s).
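Axioms (iv)-(viii) amount to the usual Petri net token game. The following sketch, with a representation and names of our own choosing, plays it for a net with fixed topology given by the lists L and K of input and output transition sets:

```python
def enabled(tokens, K):
    """Axiom (iv): an event is enabled iff every place it consumes from
    (every place p_i with the event in K_i) holds at least one token."""
    events = set().union(*K)
    return {e for e in events
            if all(tokens[i] > 0 for i, Ki in enumerate(K) if e in Ki)}

def fire(tokens, L, K, e):
    """Axioms (vi)-(viii): firing e adds a token to pure output places
    (e in L_i only), removes one from pure input places (e in K_i only),
    and leaves places with e in both L_i and K_i unchanged."""
    assert e in enabled(tokens, K)
    return [n + (e in Li and e not in Ki) - (e in Ki and e not in Li)
            for n, Li, Ki in zip(tokens, L, K)]

# Two places: 'a' moves the single token from p0 to p1, 'b' moves it back.
L = [{'b'}, {'a'}]   # L_i: transitions feeding place p_i
K = [{'a'}, {'b'}]   # K_i: transitions consuming from place p_i
assert enabled([1, 0], K) == {'a'}
assert fire([1, 0], L, K, 'a') == [0, 1]
```

In the marked process space view, `tokens` before and after a firing are the zN components of the marks φP(s) and φP(s^<σ>), while `enabled` computes the component A.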


For this marked process space, the projection operator and the partial order are defined
as follows.
Given any Petri Net process P, P ↾ 0 := HALTµP(⟨⟩). For n > 0, P ↾ n :=
HALTµP(⟨⟩) if tr P = {⟨⟩}. Otherwise
P ↾ n := (σ1 → P/⟨σ1⟩ ↾ (n − 1) | … | σk → P/⟨σk⟩ ↾ (n − 1))µP(⟨⟩)
where P = (σ1 → P/⟨σ1⟩ | … | σk → P/⟨σk⟩)µP(⟨⟩).
Also, P1 ⊑ P2 iff tr P1 ⊆ tr P2 and ∀s ∈ tr P1, µP1(s) = µP2(s). This definition leads to
the definition of the limit of a chain Pi ⊑ Pi+1 as lim{Pi}i≥0 = P, where tr P := ∪i tr Pi
and s ∈ tr P ∩ tr Pi ⟹ µP(s) := µPi(s).
A process P = (tr P, µP) can be viewed as a transformation which converts the initial
marking µP(⟨⟩) into the marking µP(s) after the process P executes the event sequence
s. In [22] a natural extension has been suggested where, instead of a single initial mark,
a process may have an arbitrary set of initial marks. Such an extension helps in modelling
cases in which each mark represents an assignment of values to variables associated with
some computation. The extension is formalised below.
Let W be an embedding space and Q an arbitrary set, representing the set of possible
assignments of values to a number of variables.
Definition 2.1.8 Let W^Q := {w | w : Q → W is any partial function}. D(w) denotes
the domain of w and, for q ∈ D(w), w(q) ∈ W is the evaluation of the (extended) process
w at q.
Definition 2.1.9 The (extended) post-process of w ∈ W^Q, denoted by w/s, is defined as
w/s(q) := (w(q))/s if s ∈ tr w(q); undefined otherwise.
Note that D(w/s^t) ⊆ D(w/s) ⊆ D(w).
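The domain-shrinking behaviour of Definition 2.1.9 can be sketched in Python, representing each ordinary process simply by its prefix-closed trace set (the dictionary encoding is our simplification, not the thesis notation):

```python
# Sketch of Definition 2.1.9: an extended process w maps each q to an
# ordinary process, here represented only by its trace set.
# w/s is defined exactly at those q whose process can perform s.

def post(w, s):
    """(w/s)(q) := w(q)/s where s is a trace of w(q); undefined elsewhere."""
    return {q: {t[len(s):] for t in traces if t[:len(s)] == s}
            for q, traces in w.items() if s in traces}

w = {1: {(), ('a',), ('a', 'b')},   # the process at q=1 can do a then b
     2: {(), ('b',)}}               # the process at q=2 can only do b

wa = post(w, ('a',))
assert set(wa) == {1}               # the domain shrinks: D(w/s) is a subset of D(w)
assert wa[1] == {(), ('b',)}
```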


Definition 2.1.10 Given w1, w2, …, wk in W^Q, distinct events σ1, σ2, …, σk from Σ and
a partial function v : Q → M, the (extended) choice function, denoted as
w = (σ1 → w1 | … | σk → wk)v,
is defined as
w(q) := (σk1 → wk1(q) | … | σkl → wkl(q))v(q)
where
{k1, …, kl} = {j | q ∈ D(wj) ∩ D(v)}.
If this index set is empty, then w is not defined for that q ∈ Q.

2.1. MARKED PROCESSES


The following one-step expansion holds for (extended) processes. For w ∈ W^Q,
w := (σ1 → w/⟨σ1⟩ | … | σk → w/⟨σk⟩)v
where
{⟨σ1⟩, …, ⟨σk⟩} = ∪q∈D(w) {s ∈ tr w(q) | #s = 1}
and v is defined by v(q) := µw(q)(⟨⟩).


The partial order and the projection operators on W Q are induced from those on W .
Formally they are defined as follows.
Definition 2.1.11 For any n ≥ 0,
(w ↾ n)(q) := (w(q)) ↾ n, D(w ↾ n) = D(w).
Also
(w1 ⊑ w2) ⟺ (D(w1) = D(w2) ∧ ∀q ∈ D(w1), w1(q) ⊑ w2(q)).
For the sake of convenience, the same symbols for the partial order and the projection
functions have been used for both W and W^Q.
If Π ⊆ W is a marked process space, then
Π^Q := {w ∈ W^Q | w(q) ∈ Π, ∀q ∈ D(w)}
is defined as the corresponding marked extended process space.
Example 2.1.2 (Petri Nets as Extended Processes:) In a previous example it has
been shown how a Petri Net with a fixed topology and initial state (token distribution)
can be modelled as a marked process. To express Petri Nets with a fixed topology but
arbitrary initial token distribution, one needs the concept of extended processes.
Referring to the previous example, let PN_N denote the set of processes corresponding to all
N-place fixed-topology Petri Nets. Also let Π_tN be the subset of PN_N denoting the processes
corresponding to Petri Nets having the topology tN ∈ TN. Clearly
PN_N = ∪tN∈TN Π_tN.
Each element of Π_tN is denoted as P_{tN,zN}, where tN is the topology of the Petri Net
corresponding to P_{tN,zN} and zN is the initial token distribution. Then one can form
extended processes as
P̄_tN : ZN → Π_tN, P̄_tN(zN) := P_{tN,zN}.


Π_tN^ZN denotes the collection of such extended processes. Finally,
Π_Z := ∪N≥0, tN∈TN Π_tN^ZN,
together with the standard projection operators and partial order (induced from those of Π),
forms an extended process space.

2.2 Recursive Characterisation

In general a process P is an infinite object because of its infinite set of trajectories. For the
purpose of computation it is necessary to have a finite description of processes of at least
some restricted process space. The basic idea is to use recursion in a way analogous to the
method of differential or difference equations for describing an infinite set of trajectories.
Definition 2.2.1 (Function Properties:) Given a process space Π = (Π, ⊑, {↾n}), a
process operator f : Π → Π is said to be:
continuous: if for every chain {Pi}i≥0, {f(Pi)}i≥0 is also a chain and f(⊔Pi) = ⊔f(Pi).
(Note that P1 ⊑ P2 ⟹ P1 ⊔ P2 = P2 and P2 ⊑ P1 ⟹ P1 ⊔ P2 = P1; otherwise ⊔ between
processes is not well defined.)
nondestructive (ndes): if ∀P, ∀n, f(P) ↾ n = f(P ↾ n) ↾ n.
constructive (con): if ∀P, ∀n, f(P) ↾ (n + 1) = f(P ↾ n) ↾ (n + 1).
It is intuitively clear that f is con if the (n + 1)-st event of f(P) is determined by the
first n events of P. Also, con implies ndes. If f has several arguments, these definitions apply
if they apply to each argument when the others are fixed.
Fact 2.2.1 The properties of continuity, nondestructiveness and constructiveness are preserved
under function composition. Furthermore, if f1 is con and f2 is ndes, then both
f1 ∘ f2 and f2 ∘ f1 are con.
It is easy to see that the post-process function P ↦ P/s is usually destructive (not
ndes) unless s = ⟨⟩. Also, from the conditions of definition 2.1.6, it is easy to see that (a) the choice
function is continuous and con; (b) every projection operator ↾n is continuous and ndes;
and (c) the post-process operation is a continuous partial function: if P1 ⊑ P2 ⊑ … is a
chain converging to P, and s ∈ tr P, then there is an integer l such that Pl/s ⊑ Pl+1/s ⊑ …
is a chain converging to P/s.
The following theorem specifies the conditions for unique solution of recursive equations
defined on processes.


Theorem 2.2.1 Let Π be a process space and consider the recursive equation

P = F(P, U)    (2.1)

where F = (F1, …, Fn), a (vector) function from Π^n × Π^k to Π^n, and U = (U1, …, Uk) ∈ Π^k
are given, along with a set of consistent initial conditions:

∀i, Pi ↾ 0 = Z0i = Fi(Z01, …, Z0n, U) ↾ 0.    (2.2)

(c1) existence: If each Fi is continuous and ndes in P, then the process

Z = ⊔k Z^k

is well defined, where

Z^0 := (Z01, …, Z0n), Z^{k+1} := F(Z^k, U), k ≥ 0.

Also Z is the minimal solution to equations (2.1, 2.2), i.e., for any other P = (P1, …, Pn)
satisfying the above equations, Zi ⊑ Pi, i = 1, …, n. Moreover this minimal solution is
continuous, con or ndes in Ui according as F is continuous, con or ndes in Ui.
(c2) uniqueness: If F is constructively guarded, i.e., there do not exist indices i1, …, im,
im = i1, such that Fik is not con in Pik+1, 1 ≤ k < m, then the above equations have a
unique solution.
Proof:
See [22].
□
By the above theorem, one finds that processes can be defined in a recursive way in the
form
P = F(P), P ↾ 0 = Z0.
It is necessary to develop a framework for specifying a family of functions in a finite way.
The basic idea is to start with some primitive functions and combine them by algebraic
operations to produce a larger family. Throughout our discussion a process space Π =
(Π, ⊑, {↾n}) is assumed. Also Π↾0 := {P ↾ 0 | P ∈ Π}.
To begin with, we define a set of functional symbols over Π, also known as a
signature over Π. To every element of the signature we assign a function (process operator).
For several reasons, distinctions have been made between the functional symbols (syntax)
and the functions (semantics) themselves.
Definition 2.2.2 (Signature:) Let G be a set of functional symbols, called a signature over Π. With every g ∈ G, one associates a function ⟨g⟩ : Π^k → Π of arity k for
some k. Let ⟨G⟩ be the family of functions ⟨g⟩, defined over the signature G.


Definition 2.2.3 (Spontaneous Family:) ⟨G⟩ is said to be a spontaneous family
of functions if every ⟨g⟩ ∈ ⟨G⟩ is a spontaneous function, i.e., ⟨g⟩ is continuous and
ndes, and if it is a k-ary function then ∀(P1, …, Pk) ∈ Π^k,
⟨g⟩(P1 ↾ 0, …, Pk ↾ 0) = (⟨g⟩(P1 ↾ 0, …, Pk ↾ 0)) ↾ 0.
⟨G⟩ is said to be a weakly spontaneous family of functions if every ⟨g⟩ ∈ ⟨G⟩
is weakly spontaneous, i.e., just continuous and ndes.
For recursive characterisation, it is required that ⟨G⟩ has the property of Mutual
Recursiveness, so that any post-process of any process composition obtained using operators of ⟨G⟩ can be expressed in terms of post-processes of the individual processes and
suitable operators of ⟨G⟩. The formal definitions are as follows.
Definition 2.2.4 Let G∘ be the set of symbols that can be formed recursively with elements
of G and the function composition symbol ∘, as given below. ⟨G∘⟩ denotes the corresponding set of functions. Formally:
(a) Id ∈ G∘ and ⟨Id⟩ ∈ ⟨G∘⟩, where ⟨Id⟩ is the identity map on Π.
(b) G ⊆ G∘ and ⟨G⟩ ⊆ ⟨G∘⟩.
(c) If f1, …, fk are in G∘ and g ∈ G so that ⟨g⟩ is a k-ary function, then g∘(f1, …, fk)
is in G∘ and ⟨g∘(f1, …, fk)⟩ is in ⟨G∘⟩. The former is a functional symbol and the
latter is the corresponding function, having arity m = m1 + … + mk, where mi is the
arity of ⟨fi⟩. Finally, ⟨g∘(f1, …, fk)⟩(P1, …, Pm) := ⟨g⟩(⟨f1⟩(P1, …, Pm1),
…, ⟨fk⟩(P(m−mk)+1, …, Pm)).
Definition 2.2.5 (Mutually Recursive (MR) Family:) ⟨G⟩ is said to be a Mutually Recursive family of functions if for any k-ary function ⟨g⟩ ∈ ⟨G⟩,
∀(P1, …, Pk) ∈ Π^k and ∀⟨σ⟩ ∈ tr(⟨g⟩(P1, …, Pk)), there exists some m′-ary function
⟨g′⟩ in ⟨G∘⟩ such that
(⟨g⟩(P1, …, Pk))/⟨σ⟩ = ⟨g′⟩(P′1, …, P′m′)
where for each j, 1 ≤ j ≤ m′, there is an i, 1 ≤ i ≤ k, with P′j = Pi or P′j = Pi/⟨σ⟩.
The operators of ⟨G⟩ and ⟨G∘⟩ have different arities. The ideas of projection
symbols and constant symbols are now introduced, which, along with the signature G, can
produce an infinite set of fixed-arity functional symbols.


Definition 2.2.6 (Constant Symbols:) Given a process space Π, let C be a set of
symbols which is equinumerous with Π↾0; that is, there exists a bijection r : Π↾0 → C
such that, for every unique P ↾ 0 in Π↾0 for some P ∈ Π, there is a unique symbol c ∈ C
with r(P ↾ 0) = c and r⁻¹(c) = P ↾ 0. C is called the set of constant symbols of Π.
For every c ∈ C and for any n ≥ 0 one can define a constant function ⟨c⟩ : Π^n → Π
such that ∀P = (P1, …, Pn) ∈ Π^n, ⟨c⟩(P) := r⁻¹(c).
Definition 2.2.7 (Projection Symbols:) For any n > 0, the set of projection symbols, namely Proj(n), is the set of symbols {Proj_i | 1 ≤ i ≤ n}.
For every Proj_i ∈ Proj(n) one defines a projection function ⟨Proj_i⟩ : Π^n → Π
such that for any P ∈ Π^n, ⟨Proj_i⟩(P) := Pi.
Though ⟨Proj_i⟩ and ↾n are both called projection functions, it should be kept in
mind that their meanings are completely different.
Naturally, it is assumed that G, C and Proj(n) are all disjoint. It is also to be noted
that the constant functions and the projection functions can be defined over any process
space Π, irrespective of what other operators one may define in ⟨G⟩.
Definition 2.2.8 (Syntax Γn(G, Π):) Once a family of functions ⟨G⟩ over a signature
G is defined, for any n > 0 one can define recursively the infinite class of functional
symbols Γn(G, Π) (or simply Γn, if G and Π are understood) as follows:
(a) C ⊆ Γn(G, Π).
(b) Proj(n) ⊆ Γn(G, Π).
(c) f1, …, fk ∈ Γn(G, Π) and g ∈ G such that ⟨g⟩ has arity k implies g∘(f1, …, fk)
∈ Γn(G, Π).
Definition 2.2.9 (Semantics ⟨Γn(G, Π)⟩:) With each element of the symbol set
Γn(G, Π) one can assign a function (semantics) in a natural way and create the set of
functions ⟨Γn(G, Π)⟩ (or simply ⟨Γn⟩) as follows:
⟨Γn(G, Π)⟩ := {⟨c⟩ : Π^n → Π | c ∈ C} ∪ {⟨Proj_i⟩ : Π^n → Π | Proj_i ∈ Proj(n)}
∪ {⟨g∘(f1, …, fk)⟩ | g∘(f1, …, fk) ∈ Γn(G, Π) and ⟨g∘(f1, …, fk)⟩(P) :=
⟨g⟩(⟨f1⟩(P), …, ⟨fk⟩(P))}.
It should be noted that the functional form ⟨g∘(f1, …, fk)⟩ appears both in ⟨G∘⟩
as well as in ⟨Γn(G, Π)⟩. In the former, f1, …, fk involve the identity symbol Id and
no projection symbol; there, the overall arity depends upon those of ⟨f1⟩, …, ⟨fk⟩
and is not constant. In the latter case, f1, …, fk involve the projection symbols
Proj_1, …, Proj_n and no identity symbol; as a result, the overall arity is always n.


Fact 2.2.2 ⟨Γn(G, Π)⟩ is (weakly) spontaneous if ⟨G⟩ is (weakly) spontaneous.
Definition 2.2.10 (Mutually Recursive Processes:) A finite set of processes {P1, …,
Pn} is called a family of Mutually Recursive Processes (MRP) over ⟨Γn(G, Π)⟩ if
∀j, Pj ∈ Π, s ∈ tr Pj ⟹ Pj/s = ⟨f⟩(P1, …, Pn) for some ⟨f⟩ ∈ ⟨Γn(G, Π)⟩.
Theorem 2.2.2 Let either (a) ⟨G⟩ be a mutually recursive spontaneous (MRS) family
of functions, or (b) ⟨G⟩ be a mutually recursive weakly spontaneous (MRWS) family of
functions such that for the embedding space (W, ⊑, ↾n) in question, the projection operator
↾0 is a constant function, i.e., for any w1, w2 in W, w1 ↾ 0 = w2 ↾ 0.
Then {P1, …, Pn} is a family of MRP w.r.t. ⟨Γn(G, Π)⟩ iff P = (P1, …, Pn) is the
unique solution of the recursive equation with consistent initial condition

(X1, …, Xn) = X = F(X), X ↾ 0 := P ↾ 0    (2.3)

where F : Π^n → Π^n and each component Fi of F is guarded, that is,

Fi(X) = (σi1 → ⟨fi1⟩(X) | … | σiki → ⟨fiki⟩(X))mi    (2.4)

with each ⟨fij⟩ ∈ ⟨Γn(G, Π)⟩ and mi a suitable mark.
Though the proof follows reasoning similar to that in [22], it is presented in
detail in the Appendix for two reasons: (a) the concept of an MR family of functions is
generalised; (b) the requirement on ⟨G⟩ is relaxed by allowing it to be just an MRWS
family instead of an MRS family, under the condition that ↾0 is a constant function.
The necessity of these changes will become clear while defining the process operators of our
nondeterministic process space in chapter 4.
Definition 2.2.11 (Finitely Recursive Processes:) A process Y is said to be a
Finitely Recursive Process (FRP) with respect to ⟨Γn(G, Π)⟩ if it can be represented
as
X = F(X), Y = ⟨g⟩(X)
where X = (X1, …, Xn), F is of the form (2.4) and ⟨g⟩ ∈ ⟨Γn(G, Π)⟩. (F, ⟨g⟩) is
said to be a realisation of the process Y.
Definition 2.2.12 (Algebraic Process Space:) The collection of all possible FRPs w.r.t.
⟨Γn(G, Π)⟩ for arbitrary n is called the algebraic process space and is denoted as
A(⟨Γ(G, Π)⟩).


Fact 2.2.3 Because of the MR-ness of the functions of ⟨Γ(G, Π)⟩, A(⟨Γ(G, Π)⟩) is closed
under the post-process operation, i.e., if Y ∈ A(⟨Γ(G, Π)⟩) with realisation (F, ⟨g⟩) for
some n, then for any s ∈ tr Y, Y/s can be represented as Y/s = ⟨gs⟩(X) for some
⟨gs⟩ ∈ ⟨Γn(G, Π)⟩, so that Y/s ∈ A(⟨Γ(G, Π)⟩).

2.3 Deterministic Process Space

In this section the Deterministic (Inan-Varaiya) Process Space [21] is described as
a typical example of the general process space defined in the earlier section. The specific
embedding space, the marked process space, the operators and the recursive characterisation
of the deterministic process space are described here.
Definition 2.3.1 (Basic Objects:) Let Σ be a fixed finite collection of events. Let MD
be a set of deterministic marks. Let ΛD be a fixed family of functions from Σ* to MD such
that λ ∈ ΛD ⟹ λ/s ∈ ΛD, where λ/s(t) := λ(s^t) for all t ∈ Σ*. WΣ,MD,ΛD
(or simply WD) denotes the suitable deterministic embedding set, and it is defined as
WD := C(Σ*) × ΛD.
To characterise the suitable embedding space, one needs to define the projection operators {↾D n | n ≥ 0} and the partial order ⊑D over WD.
Definition 2.3.2 (Deterministic Projection Operators {↾D n | n ≥ 0}:) For any w ∈
WD, w ↾D 0 := HALTµw(⟨⟩), where for any m ∈ MD, HALTm is the process defined formally as
tr HALTm := {⟨⟩}, µHALTm(⟨⟩) := m.
For n > 0,
w ↾D n := (σ1 → w/⟨σ1⟩ ↾D (n − 1) | … | σk → w/⟨σk⟩ ↾D (n − 1))µw(⟨⟩)
where
w = (σ1 → w/⟨σ1⟩ | … | σk → w/⟨σk⟩)µw(⟨⟩).
If tr w = {⟨⟩}, then for n > 0, w ↾D n := HALTµw(⟨⟩).
Definition 2.3.3 (Deterministic Partial Order ⊑D:) The partial order ⊑D over WD
is defined as: w1 ⊑D w2 iff tr w1 ⊆ tr w2 and ∀s ∈ tr w1, µw1(s) = µw2(s). This definition
leads to the straightforward definition of the limit of a chain w0 ⊑D w1 ⊑D w2 ⊑D … as
lim{wi}i≥0 = w
where
tr w := ∪i≥0 tr wi, and s ∈ tr w ∩ tr wi ⟹ µw(s) := µwi(s).

Finally, the marking axioms which define the Deterministic Process Space
ΠD are described.
Definition 2.3.4 (Marking Axioms:) Let ΠD be the subset of WD that satisfies the following marking axioms.
(a) tr w ∈ C(Σ*).
(b) MD := 2^Σ × {0, 1}. µw : tr w → MD is a tuple of two functions, namely µw(s) =
(αw(s), τw(s)). αw : tr w → 2^Σ is the alphabet function, so that αw(s) is the set of events
w can execute or block after it has generated the event sequence s. τw : tr w → {0, 1}
is the termination function, where τw(s) = 1 represents successful termination of w after
generation of the event sequence s in w.
(c) s^⟨σ⟩ ∈ tr w ⟹ σ ∈ αw(s). That is, the events that w can execute after generating
s are from the set of events in which it can engage at that instant.
(d) τw(s) = 1 ∧ s^t ∈ tr w ⟹ t = ⟨⟩. It means that, once w terminates successfully, no further event
can take place in it.
From now on an element of ΠD is denoted by the symbol P. Elements of ΠD are known
as deterministic (Inan-Varaiya) processes.
Two processes P1 and P2 are said to be equal if P1 ⊑D P2 and P2 ⊑D P1. In other words,
equal processes are those which have identical trace, alphabet and termination functions.
Definition 2.3.5 Let
Π̄D := {P ∈ ΠD | ∀s ∈ tr P, τP(s) = 0}.

Fact 2.3.1 It can be easily verified that (a) (WΣ,MD,ΛD, ⊑D, {↾D n}) satisfies the conditions of
definition 2.1.6 and thus is a suitable embedding space; (b) ΠD satisfies the conditions
of definition 2.1.7 and thus (ΠD, ⊑D, {↾D n}) is a valid process space; and (c) by restricting
the domains of each ↾D n to Π̄D, as well as considering ⊑D as a partial order on Π̄D, it
can be shown easily that (Π̄D, ⊑D, {↾D n}) is a valid process space, and hence a process
subspace of (ΠD, ⊑D, {↾D n}).
Next, the different constant processes and process operators defined over ΠD are described.


Definition 2.3.6 (Constant Processes:) The basic processes in this model are STOPA
and SKIPA, for some A ⊆ Σ:
STOPA := (tr STOPA = {⟨⟩}, αSTOPA(⟨⟩) = A, τSTOPA(⟨⟩) = 0).
SKIPA := (tr SKIPA = {⟨⟩}, αSKIPA(⟨⟩) = A, τSKIPA(⟨⟩) = 1).
Note that any HALTm, or P ↾D 0 for some P, is either STOPA or SKIPA for some
A ⊆ Σ. If P is from Π̄D, then P ↾D 0 is STOPA for some A ⊆ Σ.
The standard operators that have been defined over this model are as follows.
Definition 2.3.7 (Deterministic Choice Operator (DCO):)
Given P1, …, Pn and distinct events σ1, …, σn from A ⊆ Σ, the deterministic choice
operator (denoted as P = (σ1 → P1 | … | σn → Pn)A,τ) is defined as follows.
If τ = 1, then P := SKIPA.
If τ = 0, then
tr P := {⟨⟩} ∪ {⟨σi⟩^s | s ∈ tr Pi, 1 ≤ i ≤ n},
αP(⟨⟩) := A, αP(⟨σi⟩^s) := αPi(s), τP(⟨⟩) := 0, τP(⟨σi⟩^s) := τPi(s).
Example 2.3.1 For example, the dynamics of a simple machine that offers two possible
ways of giving change for 10 Rupees (5 + 5 or 2 + 2 + 1 + 5) can be described as follows.
CH10i = (in10 → CH10o){in10},0.
CH10o = (out5 → CH5o | out2 → CH8o){out5,out2},0.
CH5o = (out5 → CH10i){out5},0.
CH8o = (out2 → CH6o){out2},0.
CH6o = (out1 → CH5o){out1},0.
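A minimal Python sketch of this recursion (the dictionary encoding of the choice operators is ours, not the thesis notation) generates the machine's traces up to a length bound:

```python
# Sketch of Example 2.3.1: each process name maps to its deterministic
# choice (event -> successor process name).

MACHINE = {
    'CH10i': {'in10': 'CH10o'},
    'CH10o': {'out5': 'CH5o', 'out2': 'CH8o'},
    'CH5o':  {'out5': 'CH10i'},
    'CH8o':  {'out2': 'CH6o'},
    'CH6o':  {'out1': 'CH5o'},
}

def traces(proc, depth, defs=MACHINE):
    """All event sequences of length <= depth generated from `proc`."""
    if depth == 0:
        return [()]
    out = [()]
    for event, nxt in defs[proc].items():
        out += [(event,) + t for t in traces(nxt, depth - 1, defs)]
    return out

full = traces('CH10i', 4)
assert ('in10', 'out5', 'out5', 'in10') in full   # the 5 + 5 change
assert ('in10', 'out2', 'out2', 'out1') in full   # the 2 + 2 + 1 (+ 5) change
```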
Definition 2.3.8 (Local Change Operator (LCO):)
Given a process P and collections of events B and C, the LCO P^[−B+C] is defined as follows:
tr P^[−B+C] := {s ∈ tr P | the first event of s is not in B}
αP^[−B+C](⟨⟩) := (αP(⟨⟩) \ B) ∪ C
αP^[−B+C](s) := αP(s), for s ≠ ⟨⟩
τP^[−B+C](s) := τP(s), for all s including ⟨⟩.
Definition 2.3.9 (Global Change Operator (GCO):)
Given a process P and collections of events B and C, the GCO P^[[−B+C]] is defined as
follows:
tr P^[[−B+C]] := {s ∈ tr P | no event of B occurs in s}
αP^[[−B+C]](s) := (αP(s) \ B) ∪ C
τP^[[−B+C]](s) := τP(s).

28

CHAPTER 2. MATHEMATICAL BACKGROUND

For our convenience, from now on two special cases of the LCO and GCO will be in use:
LCO[−B] or P^[−B] (meaning P^[−B+∅]) and LCO[+C] or P^[+C] (meaning P^[−∅+C]), and
similarly GCO[[−B]] and GCO[[+C]].
The LCO and GCO are useful as they support reusability of FRP models with minor
modification. For example, if because of a shortage of coins the change-giving machine
(of example 2.3.1) fails to give the 2-Rupee change, the truncated behaviour can simply be
modelled as
Tr_CH10i = CH10i^[[−{out2}]].
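On the trace level, the effect of this truncation can be sketched as a simple filter, assuming traces are encoded as tuples of event names (the function name is ours):

```python
# Sketch of the GCO truncation Tr_CH10i = CH10i[[-{out2}]]: globally
# removing B keeps exactly the traces that contain no event of B.

def gco_minus(traces, B):
    return {s for s in traces if not any(e in B for e in s)}

ts = {(), ('in10',), ('in10', 'out5'), ('in10', 'out2'),
      ('in10', 'out2', 'out2')}
assert gco_minus(ts, {'out2'}) == {(), ('in10',), ('in10', 'out5')}
```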
Definition 2.3.10 (Sequential Composition Operator (SCO):)
Given P1 and P2, the sequential composition of the processes P1 and P2 is denoted as
P1; P2. It is defined as:
tr(P1; P2) := {s | (s ∈ tr P1 ∧ τP1(s) = 0) ∨ (s = r^t ∧ r ∈ tr P1 ∧ τP1(r) = 1
∧ t ∈ tr P2)}.
α(P1;P2)(s) := αP1(s) if s ∈ tr P1 ∧ τP1(s) = 0; αP2(t) if s = r^t ∧ r ∈ tr P1 ∧ τP1(r) = 1 ∧ t ∈ tr P2.
τ(P1;P2)(s) := 1 if s = r^t ∧ r ∈ tr P1 ∧ τP1(r) = 1 ∧ t ∈ tr P2 ∧ τP2(t) = 1; 0 otherwise.

Intuitively, P1; P2 behaves initially as P1. Once P1 terminates successfully, it behaves
as P2. The sequential composition operator is particularly important as it allows
recursive and re-entrant descriptions.
Example 2.3.2 The behaviour of an unbounded stack can be modelled as
Stack_in = (push → Stack_in; Stack_out | retrieve → SKIP{}){push,retrieve},0
Stack_out = (pop → SKIP{}){pop},0.
The traces generated by the process Stack_in are given by the prefix closure of the
set {(push)^n retrieve (pop)^n | n ≥ 0}. This however does not specify all possible stack
behaviour. For example, ⟨push, pop, push, pop⟩ is not modelled here.
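A small Python interpreter (a hypothetical mini-encoding, not the thesis notation) for deterministic choice, SKIP and sequential composition reproduces the claimed trace set of Stack_in:

```python
# Sketch of Example 2.3.2: process terms are ('skip',), ('name', n) or
# ('seq', p, q). step(p) lists (event, successor); term(p) tests termination.

DEFS = {
    'Stack_in':  [('push', ('seq', ('name', 'Stack_in'), ('name', 'Stack_out'))),
                  ('retrieve', ('skip',))],
    'Stack_out': [('pop', ('skip',))],
}

def term(p):
    return p == ('skip',) or (p[0] == 'seq' and term(p[1]) and term(p[2]))

def step(p):
    if p[0] == 'name':
        return DEFS[p[1]]
    if p[0] == 'seq':
        if term(p[1]):                 # first component done: behave as second
            return step(p[2])
        return [(e, ('seq', q, p[2])) for e, q in step(p[1])]
    return []                          # skip: no further events

def traces(p, depth):
    out = [()]
    if depth:
        for e, q in step(p):
            out += [(e,) + t for t in traces(q, depth - 1)]
    return out

ts = traces(('name', 'Stack_in'), 7)
assert ('push', 'push', 'retrieve', 'pop', 'pop') in ts
assert ('push', 'pop') not in ts       # a pop can only follow a retrieve
```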
Example 2.3.3 Re-entrant code and Interrupt Service Routines (ISR) in a microprocessor can also be modelled easily using SCO. For example, suppose a microprocessor is
executing the programme of a clock: it reads the value of time from a timer and displays
it repeatedly. An interrupt (event intr1) is provided for the user to interrupt this process
to do some other work. The behaviour can be modelled as

2.3. DETERMINISTIC PROCESS SPACE

29

C_read = (read → C_display | intr1 → ISR1^[[−{intr1}+{intr1}]]; C_read){read,intr1},0.
C_display = (display → C_read | over → SKIP{}){display,over},0.
It should be noted that within the ISR the corresponding interrupt is disabled, and this
is captured using the change operators.
Also, within the first interrupt service routine (ISR1), another interrupt may take place
(event intr2), in response to which the microprocessor may re-enter the C_read process
within the 2nd ISR, namely ISR2. Such a call is modelled as a process P1 within
ISR1 and P2 within ISR2 as follows.
…
P1 = (… | intr2 → ISR2^[[−{intr2}+{intr2}]]; P1){…,intr2},0.
…
P2 = (… | read_timer → C_read^[[−{display}]]; P2){…,read_timer},0.
…
For the sake of brevity, ISR1 and ISR2 have not been elaborated.
The parallel composition operator between two processes P1 and P2 generates a new process
(denoted by P1 ‖ P2) which depicts the concurrent operation of the two processes. Naturally
its traces are generated via interleaving of traces from the component processes under the
constraints of synchronisation and blocking of identically labelled events. Events of the
concurrent processes which have a common label are intended to occur synchronously,
and their simultaneous occurrence is represented by a single event label in the overall trace.
If a commonly labelled event cannot occur in one process but is present in the alphabet
of that process, it blocks the occurrence of the identically labelled events in the other
process. The formal definition of this operator is given below.
Definition 2.3.11 (Parallel Composition Operator (PCO):)
To begin with, the projection of a string s on a process P (denoted by s↾P) is defined
as follows.
⟨⟩↾P := ⟨⟩, and
s^⟨σ⟩↾P := undefined if s↾P ∉ tr P; (s↾P)^⟨σ⟩ if σ ∈ αP(s↾P); (s↾P) if σ ∉ αP(s↾P).
Then, for processes P1 and P2, the parallel composition P1 ‖ P2 is defined as:
⟨⟩ ∈ tr(P1 ‖ P2).
If s ∈ tr(P1 ‖ P2), then s^⟨σ⟩ ∈ tr(P1 ‖ P2) iff (s^⟨σ⟩↾P1 ∈ tr P1,
s^⟨σ⟩↾P2 ∈ tr P2 and σ ∈ αP1(s↾P1) ∪ αP2(s↾P2)).
α(P1‖P2)(s) := αP1(s↾P1) ∪ αP2(s↾P2). Finally,
τ(P1‖P2)(s) := 1 if (τP1(s↾P1) = 1 ∧ τP2(s↾P2) = 1)
∨ (τP1(s↾P1) = 1 ∧ αP2(s↾P2) ⊆ αP1(s↾P1))
∨ (τP2(s↾P2) = 1 ∧ αP1(s↾P1) ⊆ αP2(s↾P2)); 0 otherwise.
The process P1 ‖ P2, after generating an event sequence s, can execute an event σ from
αP1(s↾P1) ∩ αP2(s↾P2) iff both P1 and P2 execute the event synchronously; otherwise
such an event will be blocked because of lack of participation from one of the components.
If instead σ belongs to αPi(s↾Pi) \ αPj(s↾Pj), i, j = 1, 2, i ≠ j, then it can
take place in P1 ‖ P2 if it takes place in Pi, since Pj cannot block such events.
Example 2.3.4 Consider two robots, each of which picks up objects and places them
somewhere. The first robot either picks up a light object of type 1 and places it or it
picks up a heavy object with the help of the second robot and places it. The second robot
also either picks up a light object of type 2 and places it or it picks up a heavy object
with the help of the first robot and places it. Thus picking and placing of heavy object is
synchronous between the robots. The behaviour can be modelled as follows. For i = 1, 2,
Robo_i = (pick_i → Temp_i | pick_heavy → Temp_heavy_i){pick_i,pick_heavy},0
Temp_i = (place_i → Robo_i){place_i,pick_heavy},0
Temp_heavy_i = (place_heavy → Robo_i){place_heavy},0
Robo_1 ‖ Robo_2 gives the concurrent behaviour of the two robots. Note that when the first
robot picks up an object of type 1, it prevents the occurrence of the event pick_heavy in
Robo_2, by including the event pick_heavy in the initial alphabet of Temp_1. A similar
arrangement has been made in Robo_2.
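The synchronisation and blocking rules of Definition 2.3.11 can be sketched on this example in Python (the transition-map encoding and helper names are ours; each state carries its alphabet and its executable events):

```python
# Sketch of the PCO rule on Example 2.3.4: each component is a map
# state -> (alphabet, {event: next_state}); an event in both alphabets
# needs both components to move, an event in only one needs just that one.

def robo(i):
    return {
        f'Robo_{i}':  ({f'pick_{i}', 'pick_heavy'},
                       {f'pick_{i}': f'Temp_{i}', 'pick_heavy': f'TempH_{i}'}),
        f'Temp_{i}':  ({f'place_{i}', 'pick_heavy'}, {f'place_{i}': f'Robo_{i}'}),
        f'TempH_{i}': ({'place_heavy'}, {'place_heavy': f'Robo_{i}'}),
    }

def par_step(d1, s1, d2, s2, e):
    """Successor of (s1, s2) under event e in d1 || d2, or None if blocked."""
    a1, t1 = d1[s1]
    a2, t2 = d2[s2]
    in1, in2 = e in a1, e in a2
    if in1 and in2:                        # shared event: both must execute it
        return (t1[e], t2[e]) if e in t1 and e in t2 else None
    if in1:
        return (t1[e], s2) if e in t1 else None
    if in2:
        return (s1, t2[e]) if e in t2 else None
    return None

d1, d2 = robo(1), robo(2)
s = par_step(d1, 'Robo_1', d2, 'Robo_2', 'pick_1')   # robot 1 grabs a light object
assert s == ('Temp_1', 'Robo_2')
# pick_heavy is now blocked: Temp_1 keeps it in its alphabet but cannot do it.
assert par_step(d1, s[0], d2, s[1], 'pick_heavy') is None
```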
DCO is con and continuous. P CO, SCO, LCO and GCO are all ndes and continuous.
P CO is commutative and associative. SCO is associative but not commutative.
Next some examples of operators are presented to show the distinction between continuity and nondestructiveness.
Example 2.3.5 Consider the following operators.
(i) The post-process operator is continuous but destructive.
(ii) Let F1 : ΠD → ΠD be such that
F1(P) := STOP{} if P = STOPA or SKIPA for some A ⊆ Σ; P otherwise.

F1(·) is neither continuous nor ndes.
(iii) Let F2 : ΠD → ΠD be defined as follows.
⟨⟩ ∈ tr F2(P). αF2(P)(⟨⟩) := αP(⟨⟩) and τF2(P)(⟨⟩) := τP(⟨⟩).
s ∈ tr F2(P) ⟹ (s^⟨σ⟩ ∈ tr F2(P) ⟺ (σ ∈ αF2(P)(s)) ∧ (τF2(P)(s) = 0)).
αF2(P)(s^⟨σ⟩) := αP(s^⟨σ⟩) if s^⟨σ⟩ ∈ tr P; αF2(P)(s) otherwise.
τF2(P)(s^⟨σ⟩) := τP(s^⟨σ⟩) if s^⟨σ⟩ ∈ tr P; 0 otherwise.
It can easily be seen that F2(·) is ndes. But to see that it is not continuous, consider the
following.
Let P1 := (a → P1){a,b},0,
P2 := (a → P1 | b → P3){a,b},0, where
P3 := (b → P3){b},0.
Clearly P1 ⊑D P2. But F2(P1) ⋢D F2(P2), as F2(P1) = P4 and F2(P2) = P5, where
P4 = (a → P4 | b → P4){a,b},0 and
P5 = (a → P4 | b → P3){a,b},0.
Next, following [21], a recursive characterisation of deterministic processes using the
operators defined above is presented. Some modifications have been made to facilitate the
presentation of the current work.
Definition 2.3.12 Let GD be the set of functional symbols defined as:
GD := {SCO, PCO, LCO[−B], LCO[+C], GCO[[−B]], GCO[[+C]] | B, C ⊆ Σ}.
Corresponding to each element g ∈ GD, the function ⟨g⟩ and the set ⟨GD⟩ are
constructed as follows:
⟨SCO⟩ : Π^2_D → ΠD such that ⟨SCO⟩(P1, P2) := P1; P2.
⟨PCO⟩ : Π^2_D → ΠD such that ⟨PCO⟩(P1, P2) := P1 ‖ P2.
⟨LCO[−B]⟩ : ΠD → ΠD such that ⟨LCO[−B]⟩(P) := P^[−B].
⟨LCO[+C]⟩ : ΠD → ΠD such that ⟨LCO[+C]⟩(P) := P^[+C].
⟨GCO[[−B]]⟩ : ΠD → ΠD such that ⟨GCO[[−B]]⟩(P) := P^[[−B]].
⟨GCO[[+C]]⟩ : ΠD → ΠD such that ⟨GCO[[+C]]⟩(P) := P^[[+C]].
Definition 2.3.13 Let ⟨GD⟩ be the set of functions defined as:
⟨GD⟩ := {⟨SCO⟩, ⟨PCO⟩, ⟨LCO[−B]⟩, ⟨LCO[+C]⟩, ⟨GCO[[−B]]⟩,
⟨GCO[[+C]]⟩ | B, C ⊆ Σ}.


Following definition 2.2.6, the sets of constant symbols of the process spaces ΠD and
Π̄D are constructed as below.
Definition 2.3.14 Let ⟨STOPA⟩ : Π^n_D → ΠD and ⟨SKIPA⟩ : Π^n_D → ΠD be such
that ⟨STOPA⟩(P) := STOPA and ⟨SKIPA⟩(P) := SKIPA. Then
CD = {STOPA, SKIPA | A ⊆ Σ},
⟨CD⟩ = {⟨STOPA⟩, ⟨SKIPA⟩ | A ⊆ Σ},
C̄D = {STOPA | A ⊆ Σ},
⟨C̄D⟩ = {⟨STOPA⟩ | A ⊆ Σ}.
Theorem 2.3.1 ⟨GD⟩ is an MRS family of functions.
Proof: It can be easily verified that every ⟨g⟩ ∈ ⟨GD⟩ is continuous, ndes and satisfies
⟨g⟩(P ↾D 0) = (⟨g⟩(P ↾D 0)) ↾D 0.
Now the mutual-recursiveness property of the operators is described.
SCO: (P1; P2)/⟨σ⟩ := (P1/⟨σ⟩); P2 if ⟨σ⟩ ∈ tr P1; P2/⟨σ⟩ if τP1(⟨⟩) = 1 ∧ ⟨σ⟩ ∈ tr P2; undefined otherwise.
PCO: (P1 ‖ P2)/⟨σ⟩ := (P1/⟨σ⟩) ‖ P2 if ⟨σ⟩ ∈ tr P1 ∧ σ ∉ αP2(⟨⟩); (P2/⟨σ⟩) ‖ P1 if ⟨σ⟩ ∈ tr P2 ∧ σ ∉ αP1(⟨⟩); (P1/⟨σ⟩) ‖ (P2/⟨σ⟩) if ⟨σ⟩ ∈ tr P1 ∩ tr P2; undefined otherwise.
LCO[−B]: P^[−B]/⟨σ⟩ := P/⟨σ⟩ if ⟨σ⟩ ∈ tr P ∧ σ ∉ B; undefined otherwise.
LCO[+C]: P^[+C]/⟨σ⟩ := P/⟨σ⟩ if ⟨σ⟩ ∈ tr P; undefined otherwise.
GCO[[−B]]: P^[[−B]]/⟨σ⟩ := (P/⟨σ⟩)^[[−B]] if ⟨σ⟩ ∈ tr P ∧ σ ∉ B; undefined otherwise.
GCO[[+C]]: P^[[+C]]/⟨σ⟩ := (P/⟨σ⟩)^[[+C]] if ⟨σ⟩ ∈ tr P; undefined otherwise.
Clearly ⟨GD⟩ is an MRS family.
□
By theorem 2.3.1 it can be seen that it is possible to construct the MRS family of recursive functions ⟨Γn(GD, ΠD)⟩ using ⟨CD⟩, ⟨Proj(n)⟩ and ⟨GD⟩. Since the first
assumption of theorem 2.2.2 is satisfied, it is also possible to build the deterministic algebraic process space A(⟨Γ(GD, ΠD)⟩). The space A(⟨Γ(GD, ΠD)⟩) is more commonly
known as the space of Deterministic Finitely Recursive Processes (DFRP). Formally speaking, A(⟨Γ(GD, ΠD)⟩) is the collection of processes P ∈ ΠD such that, for some n ≥ 1, it
is possible to find P = (P1, …, Pn) ∈ Π^n_D so that P can be expressed in terms of P as in
equations (2.3) and (2.4). In the case of equation (2.4), a suitable mark mi is characterised by
a tuple (Ai, τi) such that Ai ⊆ Σ, {σi1, …, σiki} ⊆ Ai, and τi = 0. For each i, j, ⟨fij⟩ as well as
⟨g⟩ are from ⟨Γn(GD, ΠD)⟩. (F, ⟨g⟩) is said to be a realisation of the DFRP P.
Remark 2.3.1 (Notational Simplification:) From now on, for the purpose of easy understanding, the notations of Γn(GD, ΠD) and ⟨Γn(GD, ΠD)⟩ will be simplified by using
standard infix notation recursively, as follows.
Proj_i is written as "Pi" in simplified form.
If f is an expression of Γn(GD, ΠD) expressed in notationally simplified form, then f̂ is
the expression obtained from f by removing the outermost quotation marks (" "). In other
words, f and f̂ are stringwise identical. For example, if f is "Pi" then f̂ is just Pi. It
should be noted that f̂ is not an element of Γn(GD, ΠD) (though f is), but it represents
a process in ΠD, depending upon the choice of the individual Pi ∈ ΠD. Also, for P ∈ Π^n_D,
⟨f⟩(P) = ⟨f̂⟩(P) = f̂.
For any g ∈ Γn(GD, ΠD), if g is LCO[−B]∘(f), first f is expressed in notationally simplified form and then g is simplified as "(f̂)^[−B]". The same rule is applied for LCO[+C],
GCO[[−B]] and GCO[[+C]]. Thus LCO[−B]∘(GCO[[+C]]∘(Proj_i)) is simplified as
"Pi^[[+C]][−B]".
For g ∈ Γn(GD, ΠD), if g is PCO∘(f1, f2) or SCO∘(f1, f2), first f1 and f2 are expressed
in notationally simplified form. Then g is written as "(f̂1) ‖ (f̂2)" or "(f̂1); (f̂2)" respectively.
After simplification, whenever the removal of the braces does not cause any confusion regarding the scope of any of the functions, they will be removed. For example
(P1 ‖ (P2; P3)) ‖ P4 will be expressed as P1 ‖ (P2; P3) ‖ P4.
As an example, PCO∘(PCO∘(PCO∘(Proj_2, Proj_3), STOPA), SCO∘(SCO∘(Proj_4,
LCO[+C]∘(Proj_2)), Proj_1)) will be written as
P2 ‖ P3 ‖ STOPA ‖ (P4; P2^[+C]; P1).

Definition 2.3.15 (Length of Expressions:) For g ∈ Γn(GD, ΠD), the length of g (denoted as len(g)) is defined recursively as follows.
len(STOPA) = len(SKIPA) = len(Pi) := 1.
len(h^[[−B]]) = len(h^[[+C]]) = len(h^[−B]) = len(h^[+C]) := len(h) + 1.
len(h1 ‖ … ‖ hm) = len(h1; …; hm) := (Σi=1..m len(hi)) + (m − 1).
Definition 2.3.16 (Sub-expressions:) For f ∈ Γn(GD, ΠD), the set of subexpressions
of f, namely Sub(f), is defined recursively as follows:
Sub(f) := {f} if f = STOPA or SKIPA or Pi;
Sub(f) := {f} ∪ Sub(g) if f = g^[[−B]] or g^[[+C]] or g^[−B] or g^[+C];
Sub(f) := {f} ∪ ∪i=1..m Sub(vi) if f = v1 ‖ … ‖ vm or v1; …; vm.
Here f = g means that f has the same structure as g.
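Definitions 2.3.15 and 2.3.16 translate directly into recursive functions over a toy expression tree (the tuple encoding is ours; `'chg'` stands for any of the four change operators):

```python
# Sketch of Definitions 2.3.15-2.3.16: atoms (STOP_A, SKIP_A, P_i) are
# strings; ('chg', h) is a change operator applied to h; ('par', ...) and
# ('seq', ...) are parallel and sequential composition.

def length(f):
    if isinstance(f, str):                      # an atom has length 1
        return 1
    if f[0] == 'chg':
        return length(f[1]) + 1
    args = f[1:]                                # 'par' or 'seq'
    return sum(length(h) for h in args) + (len(args) - 1)

def sub(f):
    if isinstance(f, str):
        return {f}
    if f[0] == 'chg':
        return {f} | sub(f[1])
    return {f} | set().union(*(sub(v) for v in f[1:]))

f = ('par', 'P2', ('seq', 'P4', ('chg', 'P2'), 'P1'))
assert length(f) == 8
assert 'P4' in sub(f) and ('chg', 'P2') in sub(f)
```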


It is easy to see that there will be (syntactically) distinct elements of Γn(GD, ΠD)
which are semantically equivalent. For example, Pi and Pi ‖ Pi are syntactically distinct.
But they are semantically equivalent, meaning that ∀P ∈ Π^n_D, ⟨Pi⟩(P) and
⟨Pi ‖ Pi⟩(P) are identical processes. In other words, ⟨Pi⟩ and ⟨Pi ‖ Pi⟩ are
identical functions, i.e., the same element of ⟨Γn(GD, ΠD)⟩.
To formalise this, two equivalence relations over Γn(GD, ΠD) are defined.
Definition 2.3.17 (Syntactic Equivalence:) Given f, g ∈ Γn(GD, ΠD), f and g are
syntactically equivalent (denoted as f ≡ g) if any of the following is true:
(a) f and g are both STOPA or SKIPA or Pi.
(b) f = f1^[[−B1]] (meaning f has the structure f1^[[−B1]] for some f1 and B1),
g = g1^[[−B2]], f1 ≡ g1, and B1 and B2 are identical sets.
(c) f = f1^[[+C1]], g = g1^[[+C2]], f1 ≡ g1, and C1 and C2 are identical sets.
(d) f = f1^[−B1], g = g1^[−B2], f1 ≡ g1, and B1 and B2 are identical sets.
(e) f = f1^[+C1], g = g1^[+C2], f1 ≡ g1, and C1 and C2 are identical sets.
(f) f = f1; …; fm and g = g1; …; gm and ∀i, fi ≡ gi.
(g) f = f1 ‖ … ‖ fm, g = g1 ‖ … ‖ gm, such that for every fi there exists gj with
fi ≡ gj, and vice-versa.
The above definition implies that P1 kP2 kP2 is syntactically equivalent to both
P2 kP2 kP1 and P1 kP2 kP1 . Also clearly = is an equivalence relation. Because of the

notational simplification one can assume that f = ||m


i=1 fi implies that none of the fi

has a structure fi = fi1 || fi2 and similarly for ; also.


Definition 2.3.18 (Semantic Equivalence:) Given f, g n (GD , D ), f and g are
semantically equivalent, ( denoted by f g) if P nD , < f > (P) =< g > (P).
Again it is easy to see that is an equivalence relation over n (GD , D ). Moreover
= is finer than , i.e., f = g implies f g, but not vice-versa.
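Syntactic equivalence can be checked mechanically. The sketch below (over a hypothetical tuple encoding of ours, with each change operator's event set stored explicitly as a frozenset) implements the clauses of Definition 2.3.17; note that the PCO clause matches components order-insensitively, while the SCO clause keeps components in order.

```python
# Hypothetical encoding (not from the thesis): atoms are strings;
# ('chg', op, evset, body) for the four change operators, evset a frozenset;
# ('par', (...)) for PCO and ('seq', (...)) for SCO.

def syn_eq(f, g):
    """Syntactic equivalence of Definition 2.3.17."""
    if isinstance(f, str) or isinstance(g, str):
        return f == g                                   # clause (a)
    if f[0] != g[0]:
        return False
    if f[0] == 'chg':                                   # clauses (b)-(e)
        return f[1] == g[1] and f[2] == g[2] and syn_eq(f[3], g[3])
    if f[0] == 'seq':                                   # clause (f): order matters
        return len(f[1]) == len(g[1]) and all(
            syn_eq(a, b) for a, b in zip(f[1], g[1]))
    # clause (g): PCO, every component on one side matches some on the other
    return (len(f[1]) == len(g[1])
            and all(any(syn_eq(a, b) for b in g[1]) for a in f[1])
            and all(any(syn_eq(a, b) for a in f[1]) for b in g[1]))

f = ('par', ('P1', 'P2', 'P2'))
g = ('par', ('P2', 'P2', 'P1'))
```

As in the text, P1||P2||P2 comes out syntactically equivalent to both P2||P2||P1 and P1||P2||P1, while P1; P2 and P2; P1 do not.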
With this mathematical background, in the next chapter we begin the formal analysis of the DFRP framework.


Chapter 3

Boundedness Analysis of FRP

3.1 Introduction
In general, a DES can be described more compactly in the DFRP model than in state-based models like the FSM. For the practical applicability of the model, it is important to know whether typical properties associated with a system described in this model can be computed algorithmically in finite time, i.e., whether these properties are decidable. One inherent requirement in this computation is to have an idea of the complete trace, marks and underlying state space. In models like the FSM, this requirement is satisfied easily. However, in models like PN, TTM and DFRP, one needs to expand the given model to extract this information. Moreover, because of repetitive behaviour, the traces of a system may be infinite even when the underlying state space is finite. As a result, most of the algorithms developed for analysing logical properties of DES use the underlying state space of the system.
A property is said to be decidable if there exists some generic algorithm which can produce an answer about the property, for an arbitrary member of the particular class of systems, in a finite number of steps. More often than not, the decidability of a property is guaranteed by the finiteness of the underlying state space. As shown in [21], a (possibly infinite) state machine F_P = (Q, Σ, q0, δ, Q_f), with state space Q, alphabet set Σ, initial state q0, (partial) state transition function δ : Q × Σ → Q and set of final states Q_f, can be associated with any P ∈ Π_D as follows:

Q = {P/s | s ∈ tr P},  Σ = ∪{α_P(s) | s ∈ tr P},
Q_f = {P/s | s ∈ tr P, τ_P(s) = 1},  q0 = P/<>,

δ(P/s, σ) := P/s^<σ>  if s^<σ> ∈ tr P,  and is undefined otherwise.

Note that here the post-processes play the role of states. In view of this, state-based algorithms can be applied to deterministic processes to determine many logical properties. These properties become decidable if F_P is an FSM, i.e., if the set of post-processes of P is finite. In this case P is said to be a bounded process.
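For a bounded process, the state machine F_P defined above can be constructed by breadth-first search over the post-processes. The sketch below is a minimal illustration under an abstraction of our own: a process is given by an initial state together with hypothetical enabled/post/final callbacks standing in for α_P, the post-process map P/s^<σ> and τ_P.

```python
from collections import deque

def build_fsm(q0, enabled, post, final):
    """Recover F_P = (Q, Sigma, q0, delta, Qf) by BFS over post-processes."""
    Q, Sigma, delta, Qf = {q0}, set(), {}, set()
    work = deque([q0])
    while work:
        q = work.popleft()
        if final(q):                    # tau_P = 1 at this post-process
            Qf.add(q)
        for sigma in enabled(q):        # events possible after reaching q
            Sigma.add(sigma)
            nxt = post(q, sigma)        # the post-process P/s^<sigma>
            delta[(q, sigma)] = nxt
            if nxt not in Q:
                Q.add(nxt)
                work.append(nxt)
    return Q, Sigma, q0, delta, Qf

# The process X with tr X = {a^n | n >= 0}: a single self-looping state.
Q, Sigma, q0, delta, Qf = build_fsm('X', lambda q: {'a'},
                                    lambda q, s: 'X', lambda q: False)
```

The loop terminates exactly when the set of post-processes is finite, which is the boundedness condition discussed next.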
In the case of any DFRP Y ∈ A(<Φ^n(G_D, Π_D)>) with realisation (F, <g>), Fact 2.2.3 ensures that for any s ∈ tr Y, Y/s can be represented as Y/s = <g_s>(X) for some <g_s> ∈ A(<Φ^n(G_D, Π_D)>). Here X is the unique solution of P = F(P). Clearly, Y is bounded (F_Y is an FSM) if the set of post-processes {<g_s>(X) | s ∈ tr Y} is a finite set. But there is no direct way of computing and representing the post-processes <g_s>(X) other than by computing the symbol g_s for each s ∈ tr Y. So we redefine the property of boundedness in terms of the syntax set, and say that a DFRP Y is bounded if S_Y = {g_s | s ∈ tr Y, Y/s = <g_s>(X)} is finite.

But {g_s | s ∈ tr Y, Y/s = <g_s>(X)} and {<g_s>(X) | s ∈ tr Y, Y/s = <g_s>(X)} are not equinumerous. The former is the set of post-process expressions (syntax), whereas the latter is the set of post-processes (semantics). And once we define the boundedness property in terms of the syntax set, it becomes dependent on (i) the realisation of the process and (ii) the exact procedure of computation of the post-process expressions. The following two examples illustrate these points.

Consider the process Y in A(<Φ^1(G_D, Π_D)>), with two realisations. In the first realisation, Y = <P1>(X) where X = (a → <P1>(X))_{{a},0}. Here the set of post-process expressions is the singleton set {P1}. In the second realisation, Y = <P1>(X) where X = (a → <P1; P1>(X))_{{a},0}. Here the set of post-process expressions is an infinite set containing P1, P1; P1, P1; P1; P1, etc. But in both cases it can easily be seen that the set of post-processes is a singleton containing the single process X with tr X = {a^n | n ≥ 0}, where α_X and τ_X are two constant functions, equal to {a} and 0 respectively. Thus, speaking in terms of the syntax set, the former realisation identifies the process as bounded, whereas the latter identifies the same process as unbounded.
As a second example, consider the simple process Y = <P1>(P), given recursively as P = (a → <P1^[[+{b}]]>(P))_{{a},0}. If we compute the post-process expressions by performing syntactic substitutions mechanically, the set of post-process expressions will be an infinite set containing P1, P1^[[+{b}]], (P1^[[+{b}]])^[[+{b}]], ..., and Y will be termed unbounded. However, except for P1, all the other symbols are semantically equivalent. If we use a semantics-preserving syntactic transformation in the post-process expression computation, which converts all these semantically equivalent expressions into P1^[[+{b}]], the set of post-process expressions will be finite, containing only P1 and P1^[[+{b}]], and Y will be identified as a bounded process. Thus boundedness becomes dependent on the post-process expression computation procedure as well.
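The effect described in this example can be reproduced in a few lines. In the toy sketch below (the encoding is ours), naive syntactic substitution wraps one more GCO[[+{b}]] at each unfolding, while a semantics-preserving collapse of nested GCO[[+C]] applications keeps the expression set finite.

```python
# Toy form of the second example: P1^[[+{b}]] is encoded as
# ('gco+', frozenset({'b'}), body).  Naive substitution keeps wrapping another
# [[+{b}]] at every recursion; the transformation collapses nested copies.

def wrap(e, C=frozenset({'b'})):
    return ('gco+', C, e)

def collapse(e):
    """Merge nested GCO[[+C]]: (g^[[+C1]])^[[+C2]] -> g^[[+(C1 u C2)]]."""
    if isinstance(e, tuple) and e[0] == 'gco+' and \
       isinstance(e[2], tuple) and e[2][0] == 'gco+':
        return collapse(('gco+', e[1] | e[2][1], e[2][2]))
    return e

# Ten rounds of "unfolding" the recursion P = (a -> <P1^[[+{b}]]>(P)):
naive, reduced = 'P1', 'P1'
naive_set, reduced_set = {naive}, {reduced}
for _ in range(10):
    naive = wrap(naive)                 # syntactic substitution alone
    reduced = collapse(wrap(reduced))   # substitution + transformation
    naive_set.add(naive)
    reduced_set.add(reduced)
```

The naive expression set keeps growing with the unfolding depth, while the transformed one stabilises at the two expressions named in the text.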
About the non-unique realisation of processes we cannot do much. But if one defines general syntactic transformations that are capable of identifying semantically equivalent expressions for the argument X and converting them into unique expressions, and uses these transformations, along with syntactic substitutions, in computing the set of post-process expressions, then one has a better chance of identifying a process as bounded than one using syntactic substitutions alone. Therefore, the final definition of boundedness captures this dependence on the existence of a suitable computation procedure for post-process expressions, which naturally involves syntactic transformations.

Definition 3.1.1 (Boundedness:) An FRP Y ∈ A(<Φ^n(G_D, Π_D)>), with realisation (F, <g>), is said to be bounded if there exists some suitable semantics-preserving post-process computation procedure under which the set of post-process expressions of Y, namely S_Y, is finite.
In [33] it has been shown that the boundedness of a general DFRP is undecidable. That is, there does not exist any general algorithm which can determine whether S_Y is finite for an arbitrary DFRP Y. This has been a deterrent to the practical use of the DFRP formalism, since problems related to a system cannot be solved algorithmically for arbitrary processes.

It is therefore worthwhile to investigate whether there exist subclasses of DFRPs for which boundedness is either guaranteed or at least decidable. Such subclasses must also retain sufficient modelling features in order to be able to capture real-world DES behaviour. In general, such classes will be characterised by appropriate restrictions on the use of operators in the construction of process expressions. Since the reason behind the undecidability of boundedness for general DFRPs is the unrestricted use of both PCOs and SCOs, two subclasses of DFRPs are formed, one containing the SCO and the different change operators, and the other containing the PCO and the change operators. In this chapter the boundedness properties of these two subclasses are explored.

For the subclass of DFRPs built around the PCO and the different change operators, it has been found that the number of distinct function expressions can become infinite, even though the SCO is not used. This is a new result, contrary to what was conjectured in [21], and it leads to a deeper investigation into the question of unboundedness in the absence of the SCO.


It has been shown that if LCO[-B] is removed along with the SCO, the resultant set of functions is finite, and the DFRPs built over this set of functions are naturally bounded. More importantly, it has been found that, even if LCO[-B] is not removed, because of the recursive structure of the DFRPs built over the PCO, GCOs and LCOs, each DFRP is bounded, though the underlying function space is infinite. Thus, boundedness for this subclass of DFRPs is guaranteed. This constitutes the first contribution of this chapter.
For the subclass of DFRPs built around the SCO and the different change operators, it is already well known that infinitely many distinct post-process expressions can be generated recursively. Consequently these processes may be unbounded. Processes of this subclass have a structural similarity with the grammars of simple deterministic languages, defined by Korenjak and Hopcroft [67], who have studied the problem of language equivalence for such unbounded grammars. The second contribution of this chapter is to show that boundedness is decidable for processes of this subclass. In other words, a general procedure has been found which, in a finite number of steps, can determine whether or not a DFRP of this subclass is indeed unbounded.
In the course of proving the above two results, this work also points out that if post-process expressions are computed using simple syntactic substitution, then very simple DFRPs may appear to be unbounded because of semantically redundant expressions. In both cases, semantics-preserving syntactic transformations are proposed which, along with syntactic substitutions in the computation of post-process expressions, reduce the redundant expressions to a single expression and yield a bounded implementation. These transformations are of significance independently of the present context of boundedness analysis. They play roles similar to those of canonical forms of transformations in general system theory and would be useful in many other contexts. These form the third contribution made in this chapter.
The final contribution of this chapter is to show how both the PCO and the SCO can be used together in a specific way, giving rise to a bounded hierarchical subclass of DFRPs.
Below, a few definitions and basic results are presented that will be useful in the subsequent analysis of boundedness.
Definition 3.1.2 Let G_D^; := G_D - {PCO}; also <G_D^;> := <G_D> - {<PCO>}. Also let H_D^; := {SCO}; <H_D^;> := {<SCO>}.

It is easy to see that the function space <Φ^n(G_D^;, Π_D)> (as well as <Φ^n(H_D^;, Π_D)>) is infinite, as each of <P_i>, <P_i; P_i>, <P_i; P_i; P_i>, ..., is a distinct function and belongs to <Φ^n(G_D^;, Π_D)> (and also to <Φ^n(H_D^;, Π_D)>). In [21] it was conjectured that if the sequential composition operator is taken out of <G_D>, then the resulting set of functions is finite. However, a counterexample will be presented later to show that an infinite set of functions can be composed even without using the SCO.
Definition 3.1.3 Let G_D^|| := G_D - {SCO}; also <G_D^||> := <G_D> - {<SCO>}.

Before showing the infiniteness of <Φ^n(G_D^||, Π_D)>, the following facts regarding the distributivity of the GCOs and LCOs over the PCO are presented.
Fact 3.1.1 Contrary to what was conjectured in [21], it has been shown in [58] that, in general, P1^[-B] || P2^[-B] ≠ (P1 || P2)^[-B]. For example, let
P1 = (a → SKIP_{a})_{{a},0},  P2 = (b → SKIP_{b})_{{b},0}.
Then (P1 || P2)^[-{b}] = (a → (b → SKIP_{a,b})_{{a,b},0})_{{a},0}, but P1^[-{b}] || P2^[-{b}] = P1.

Similarly, (P1 || P2)^[+C] ≠ P1^[+C̄] || P2^[+C̄], where C̄ = C - ∪_{i=1}^2 α_{P_i}(<>). In the above example, if C = {a, b, c} then C̄ = {c}. Now
(P1 || P2)^[+C] = (a → (b → SKIP_{a,b})_{{a,b},0} | b → (a → SKIP_{a,b})_{{a,b},0})_{{a,b,c},0},
whereas
P1^[+C̄] || P2^[+C̄] = (a → (b → SKIP_{a,b})_{{a,b,c},0} | b → (a → SKIP_{a,b})_{{a,b,c},0})_{{a,b,c},0}.
(P1
Fact 3.1.2 In the present investigation it has also been found that GCO[[-B]] does not distribute over the PCO either: P3^[[-B]] || P4^[[-B]] ≠ (P3 || P4)^[[-B]]. For example, let
P3 = (g → (b → SKIP_{b})_{{b,a},0})_{{g},0},  P4 = (g → SKIP_{b,d})_{{g},0}.
Then ((P3 || P4)^[[-{a,d}]])/<g> = STOP_{b}, but (P3^[[-{a,d}]] || P4^[[-{a,d}]])/<g> = SKIP_{b}.

If P1 ≠ SKIP_A for any A ⊆ Σ, then (P1; P2)^[-B+C] = P1^[-B+C]; P2. Also, (P1; P2)^[[-B+C]] = P1^[[-B+C]]; P2^[[-B+C]].
From the above facts it can be seen that the LCO and GCO operators in general do not distribute over the PCO. A sufficient condition under which the GCO[[-B]] operator distributes over the PCO is, however, the following.

Lemma 3.1.1 (a) For P1, P2 ∈ Π_D and B ⊆ Σ, if there exists no s ∈ tr (P1 || P2)^[[-B]] ⊆ tr (P1 || P2) such that τ_{P_i}(s↾P_i) = 0, τ_{P_j}(s↾P_j) = 1, α_{P_i}(s↾P_i) ⊈ α_{P_j}(s↾P_j) but (α_{P_i}(s↾P_i) - B) ⊆ (α_{P_j}(s↾P_j) - B), for some i, j ∈ {1, 2}, i ≠ j, then P1^[[-B]] || P2^[[-B]] = (P1 || P2)^[[-B]].
(b) For P1, P2 in Π̄_D and B ⊆ Σ, (P1 || P2)^[[-B]] = P1^[[-B]] || P2^[[-B]].

Proof: The first part of the lemma can be proved easily by induction on the length of the strings generated by both sides. Intuitively, P1^[[-B]] || P2^[[-B]] and (P1 || P2)^[[-B]] always match in alphabet and traces. They can differ only in the termination function. The condition in the lemma guarantees the matching of the termination function also. (In fact, the example mentioned in Fact 3.1.2 is a result of violation of the condition of this lemma.) Part (b) is straightforward from the fact that processes from Π̄_D trivially satisfy the condition mentioned in (a). □
Example 3.1.1 (Counterexample) Let P = (a → STOP_{} | b → STOP_{})_{{a,b},0} and let two sets of events be A = {a} and B = {b}. We first show that each element of the sequence of processes {P_n}_{n≥1} is distinct, where
P_n := (((...(((P^[-B] || P)^[-A] || P)^[-B] || P)^[-A] || ... || P)^[-B] || P)^[-A] || P)^[-B]
such that there are n applications of LCO[-A] and n+1 applications of LCO[-B] in the definition.

Proof: To prove the above claim, we show, using induction, that each
P_n = (a → (b → (a → (b → ... (a → STOP_{})_{{a},0} ...)_{{b},0})_{{a},0})_{{b},0})_{{a},0}
where there are altogether n b's and n+1 a's.
Basis: P_1 := ((P^[-B] || P)^[-A] || P)^[-B] = (a → (b → (a → STOP_{})_{{a},0})_{{b},0})_{{a},0}.
Hypothesis: For some k ≥ 1, let
P_k = (a → (b → (a → (b → ... (a → STOP_{})_{{a},0} ...)_{{b},0})_{{a},0})_{{b},0})_{{a},0}
where there are altogether k b's and k+1 a's.
Induction Step: By definition it is easy to see that
P_{k+1} = ((P_k || P)^[-A] || P)^[-B]
= (((a → (b → (a → (b → ... (a → STOP_{})_{{a},0} ...)_{{b},0})_{{a},0})_{{b},0})_{{a},0} || P)^[-A] || P)^[-B]  (by the induction hypothesis)
= ((b → P_k)_{{b},0} || P)^[-B] = ((b → P_k | a → (b → P_k)_{{b},0})_{{a,b},0})^[-B]  (as P_k || STOP_{} = P_k)
= (a → (b → P_k)_{{b},0})_{{a},0}.
By induction, our claim about the structure of P_n is true. Hence each element of {P_n}_{n≥1} is a distinct process.

As a result, each element of {<f_k>}_{k≥1} is a distinct member of <Φ^n(G_D^||, Π_D)>, where
f_k := (((...(((P1^[-B] || P1)^[-A] || P1)^[-B] || P1)^[-A] || ... || P1)^[-B] || P1)^[-A] || P1)^[-B]
such that A = {a}, B = {b}, and there are k applications of LCO[-A] and k+1 applications of LCO[-B] in the definition. This is because applying the sequence {f_k}_{k≥1} of functions on some argument P ∈ Π_D^n, where the first argument P1 is the process (a → STOP_{} | b → STOP_{})_{{a,b},0}, results in the sequence of distinct processes {P_k}_{k≥1} defined above. Thus <Φ^n(G_D^||, Π_D)> is infinite.
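The induction above shows that P_n unwinds to the single maximal trace a(ba)^n, so distinctness of the processes reduces to distinctness of these strings, which the following lines illustrate.

```python
def maximal_trace(n):
    """The unique maximal trace of P_n: one 'a', then n copies of 'ba'."""
    return 'a' + 'ba' * n

# Distinct n give distinct traces, hence pairwise distinct processes P_n
# (and hence pairwise distinct functions f_k), as the counterexample needs.
traces = {maximal_trace(n) for n in range(1, 8)}
```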

Next we present the concept of the minimum present and minimum absent alphabet functions of process expressions. Because of the GCO[[-B]] and GCO[[+C]] operators, given some function f ∈ Φ^n(G_D, Π_D), it is possible to have a lower bound on the alphabet that is present (or absent) everywhere along the traces of <f>(P), irrespective of which P is chosen as the argument of <f>. Formally, the definitions are as follows:

Definition 3.1.4 (Minimum Alphabets) :
Minimum Present Alphabet: M_P : Φ^n(G_D, Π_D) → 2^Σ, defined as
M_P(f) := {σ ∈ Σ | σ ∈ α<f>(P)(s), ∀P ∈ Π_D^n, ∀s ∈ tr <f>(P)}.
Minimum Absent Alphabet: M_A : Φ^n(G_D, Π_D) → 2^Σ, defined as
M_A(f) := {σ ∈ Σ | σ ∉ α<f>(P)(s), ∀P ∈ Π_D^n, ∀s ∈ tr <f>(P)}.
The following lemma shows how to compute the minimum alphabets. Before applying the lemma it is, however, assumed that in any subexpression g of any f, if g = g_1; ...; g_m, then g is both STOP- and SKIP-reduced ([33]). In other words, within g, all possible reductions, like converting STOP_A; g_i into STOP_A and SKIP_A; g_i into g_i, have been made before arriving at the expression g_1; ...; g_m, and no such simplification is possible any further.
Lemma 3.1.2 For f ∈ Φ^n(G_D, Π_D),

M_P(f) =
  A                          if f = STOP_A or SKIP_A
  ∅                          if f = P_i
  M_P(g)                     if f = g^[+C]
  M_P(g) - B                 if f = g^[[-B]] or g^[-B]
  M_P(g) ∪ C                 if f = g^[[+C]]
  ∪_{i=1}^m M_P(g_i)         if f = ||_{i=1}^m g_i
  ∩_{i=1}^m M_P(g_i)         if f = g_1; ...; g_m

M_A(f) =
  Σ - A                      if f = STOP_A or SKIP_A
  ∅                          if f = P_i
  M_A(g)                     if f = g^[-B]
  M_A(g) ∪ B                 if f = g^[[-B]]
  M_A(g) - C                 if f = g^[+C] or g^[[+C]]
  ∩_{i=1}^m M_A(g_i)         if f = ||_{i=1}^m g_i
  ∩_{i=1}^m M_A(g_i)         if f = g_1; ...; g_m


Proof: From the definition of minimum alphabets, it is easy to see that M_P(SKIP_A) = M_P(STOP_A) = A and M_A(STOP_A) = M_A(SKIP_A) = Σ - A. Also, since the argument processes can be arbitrary, we have M_P(P_i) = M_A(P_i) = ∅. The rest follows naturally from the recursive structure of Φ^n(G_D, Π_D) and the definition of minimum alphabets. □
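Lemma 3.1.2 is itself an algorithm. The sketch below transcribes both tables over a hypothetical AST encoding of ours: ('stop', A)/('skip', A) carry the alphabet A as a frozenset, projection symbols are strings, the four change operators carry their event set, and SIGMA stands for the universal alphabet Σ.

```python
# Illustrative encoding (ours, not the thesis's):
# ('stop', A) / ('skip', A), 'P1', ... for projections,
# ('gco-', B, g), ('gco+', C, g), ('lco-', B, g), ('lco+', C, g),
# ('par', (...)), ('seq', (...)).

SIGMA = frozenset('abcdefg')    # assumed universal alphabet for the example

def MP(f):
    if isinstance(f, str):                   # projection: argument arbitrary
        return frozenset()
    tag = f[0]
    if tag in ('stop', 'skip'):
        return f[1]
    if tag == 'lco+':
        return MP(f[2])
    if tag in ('gco-', 'lco-'):
        return MP(f[2]) - f[1]
    if tag == 'gco+':
        return MP(f[2]) | f[1]
    parts = [MP(p) for p in f[1]]
    if tag == 'par':
        return frozenset().union(*parts)
    return frozenset.intersection(*parts)    # 'seq'

def MA(f):
    if isinstance(f, str):
        return frozenset()
    tag = f[0]
    if tag in ('stop', 'skip'):
        return SIGMA - f[1]
    if tag == 'lco-':
        return MA(f[2])
    if tag == 'gco-':
        return MA(f[2]) | f[1]
    if tag in ('gco+', 'lco+'):
        return MA(f[2]) - f[1]
    return frozenset.intersection(*[MA(p) for p in f[1]])   # 'par' and 'seq'

example = ('gco+', frozenset('c'),
           ('par', (('stop', frozenset('a')), ('skip', frozenset('b')))))
```

For the example (STOP_{a} || SKIP_{b})^[[+{c}]], the guaranteed-present alphabet is {a, b, c} and the guaranteed-absent alphabet is everything else.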
Also, given any f ∈ Φ^n(G_D, Π_D), let α_F(f), τ_F(f) and Σ_F(f) represent respectively the initial alphabet (α<f>(X)(<>)), the initial termination (τ<f>(X)(<>)) and the set of initially possible events of <f>(X), where X ∈ Π_D^n is the unique solution of the equation P = F(P) and each equation of F is of the form (2.4). These quantities can be computed easily, in a recursive manner (as in [59]), using the local information present in F in the form of the initial marking and choices of the DCOs and the definitions of the different operators.
In the following two sections, we investigate the boundedness issue of two major subclasses of A(<Φ^n(G_D, Π_D)>), namely A(<Φ^n(G_D^||, Π̄_D)>) and A(<Φ^n(G_D^;, Π_D)>). Both subclasses are built around infinite sets of functions; the SCO, in the case of the former, and the PCO, in that of the latter, is absent from the set of operators.

It should be noted that, in the case of the operator set G_D^||, the algebraic process space we have considered is built around Π̄_D and not Π_D. Since for general processes of Π_D even GCO[[-B]] does not distribute over the PCO (see Fact 3.1.2), it is not known whether the processes from the function space A(<Φ^n(G_D^||, Π_D)>) are bounded. But this distributivity can be achieved if we restrict the process space (and the domain of the different functions) to Π̄_D (see Lemma 3.1.1). Now, using <C>, <Proj(n)> and <G_D^||>, and by restricting the domain of each function to the processes of Π̄_D, we can define a set of functions from Π̄_D^n to Π̄_D, namely <Φ^n(G_D^||, Π̄_D)>, in the same way as <Φ^n(G_D, Π_D)> is defined. The absence of SKIP_A in the counterexample shows that <Φ^n(G_D^||, Π̄_D)> is also infinite. The choice of Π̄_D is not so restrictive in terms of the event dynamics, due to the fact that <G_D^||> does not contain <SCO>, which is the major operator that uses the termination functions of its argument processes.

3.2 Boundedness Characterisation: A(<Φ^n(G_D^||, Π̄_D)>)

In this section we examine boundedness of the process space A(<Φ^n(G_D^||, Π̄_D)>).

Given a process realisation (F, <g>) and the unique set of processes X satisfying X = F(X), we first define a syntactic transformation C_F on elements of the symbol set Φ^n(G_D^||, Π̄_D). It converts a large class of function symbols, which are semantically equivalent at least for the argument X, into a unique function symbol. The suffix F of C_F indicates that the transformation C_F uses the information present in the equation X = F(X); as a result, it is not semantics-preserving for arbitrary processes. However, for the argument X it does preserve the semantics, i.e., <f>(X) = <C_F(f)>(X). The transformation C_F is in turn composed of two syntactic transformations, C_F^1 and C_F^2.
Informally speaking, C_F^1 makes the following conversions. (i) Multiple applications of the same type of change operator (LCOs and GCOs) are converted into a single application of that operator. For example, P_i^[[+C1]][[+C2]] will be converted into P_i^[[+(C1 ∪ C2)]]. (ii) Redundant event symbols in the event sets of GCO[[-B]] and GCO[[+C]] are removed with the help of the minimum present and absent alphabets. Thus f^[[+C]] is converted into f^[[+(C - M_P(f))]]. If the event set becomes empty, the corresponding operator symbol itself is removed. Similarly, in the case of LCO[+C] and LCO[-B], instead of the minimum present and minimum absent alphabets, α_F(f) and Σ_F(f) are used. For example, (P1||P2)^[-B][+C] will be converted into (P1||P2)^[-B̄][+C̄], where B' := B ∩ α_F(P1||P2), B̄ := (B' - C) ∪ (B' ∩ C ∩ Σ_F(P1||P2)) and C̄ := (B' ∩ C ∩ Σ_F(P1||P2)) ∪ (C - α_F(P1||P2)).
The explanation of the last conversion is as follows. First, LCO[-B] is transformed into LCO[-B'], because there is no need for keeping any event in B which is not in α_F(P1||P2). Then, because of the application of LCO[+C], both B' and C are modified to B̄ and C̄ in the following manner: any event σ which is in α_F(P1||P2) but not in Σ_F(P1||P2), and which also belongs to B' ∩ C, is removed from both B' and C. This is because removing any such σ from the initial alphabet by LCO[-B'] and then again adding it by LCO[+C] is a redundant process. (iii) Among the different change operators, an order is imposed, with GCO[[-B]] as the innermost operator, followed by GCO[[+C]], LCO[-B] and LCO[+C]. The reader may convince himself that the chosen ordering among the change operator symbols is unique, in the sense that there does not exist any other ordering into which an arbitrary arrangement of change operator symbols can be converted while preserving the semantics. Thus, for example, P_i^[+C2][[+C1]][[-B]] will be converted into P_i^[[-B]][[+(C1-B)]][+(C2-B-Σ_F(P_i))]. (iv) The GCO[[-B]] operator is made to distribute over the PCO. Thus (P_i||P_j)^[[-B1]] is rewritten as (P_i^[[-B1]] || P_j^[[-B1]]). In this case, preservation of semantics is possible as the processes are obtained from Π̄_D and not from Π_D (see Lemma 3.1.1). The formal definition of C_F^1 is given below.

Definition 3.2.1 (Transformation C_F^1:) Given f ∈ Φ^n(G_D^||, Π̄_D), C_F^1(f) is defined as follows:

C_F^1(f) := STOP_{α_F(f)} if Σ_F(f) = ∅. Otherwise:

C_F^1(f) := f  if f = P_i.

C_F^1(f) := ||_{i=1}^m C̃_F^1(f_i)  if f = ||_{i=1}^m f_i. Here C̃_F^1(·) is the expression obtained by removing the outermost parentheses from C_F^1(·). In other words, C_F^1(f) = (g) implies C̃_F^1(f) = g, and otherwise C̃_F^1(f) = C_F^1(f).

C_F^1(f^[[-B]]) := C_F^1(f) if B ⊆ M_A(C_F^1(f)). Otherwise, C_F^1(f^[[-B]]) :=
  P_i^[[-B]]                                if C_F^1(f) = P_i
  ||_{i=1}^m C̃_F^1(f_i^[[-B]])              if C_F^1(f) = ||_{i=1}^m f_i
  g^[[-(B' ∪ (B - M_A(C_F^1(f))))]]          if C_F^1(f) = g^[[-B']]
  (C_F^1(g^[[-B]]))^[[+(C'-B)]]              if C_F^1(f) = g^[[+C']]
  (C_F^1(g^[[-B]]))^[-(B'-B)]                if C_F^1(f) = g^[-B']
  (C_F^1(g^[[-B]]))^[+(C'-B)]                if C_F^1(f) = g^[+C']

C_F^1(f^[[+C]]) := C_F^1(f) if C ⊆ M_P(C_F^1(f)). Otherwise, C_F^1(f^[[+C]]) :=
  P_i^[[+C]]                                 if C_F^1(f) = P_i
  (||_{i=1}^m f_i)^[[+(C - M_P(C_F^1(f)))]]   if C_F^1(f) = ||_{i=1}^m f_i
  g^[[-B']][[+(C - M_P(C_F^1(f)))]]           if C_F^1(f) = g^[[-B']]
  g^[[+(C' ∪ (C - M_P(C_F^1(f))))]]           if C_F^1(f) = g^[[+C']]
  (C_F^1(g^[[+C]]))^[-B̄][+C̄]                 if C_F^1(f) = g^[-B']
  (C_F^1(g^[[+C]]))^[-B̄][+(C̄ ∪ (C'-C))]      if C_F^1(f) = g^[-B'][+C']

Here B̄ := (B' - C) ∪ C̄ and C̄ := B' ∩ C ∩ Σ_F(g). Also, if B' is empty, so is B̄, and then any occurrence of LCO[-B'] and LCO[-B̄] would be removed.

C_F^1(f^[-B]) := C_F^1(f) if B ∩ α_F(C_F^1(f)) = ∅. Otherwise, C_F^1(f^[-B]) :=
  P_i^[-(B ∩ α_F(C_F^1(f)))]                  if C_F^1(f) = P_i
  (||_{i=1}^m f_i)^[-(B ∩ α_F(C_F^1(f)))]      if C_F^1(f) = ||_{i=1}^m f_i
  g^[[-B']][-(B ∩ α_F(C_F^1(f)))]              if C_F^1(f) = g^[[-B']]
  g^[[+C']][-(B ∩ α_F(C_F^1(f)))]              if C_F^1(f) = g^[[+C']]
  g^[-(B' ∪ (B ∩ α_F(C_F^1(f))))]              if C_F^1(f) = g^[-B']
  (C_F^1(g^[-B]))^[+(C'-B)]                    if C_F^1(f) = g^[+C']

C_F^1(f^[+C]) := C_F^1(f) if C ⊆ α_F(C_F^1(f)). Otherwise, C_F^1(f^[+C]) :=
  P_i^[+(C - α_F(C_F^1(f)))]                   if C_F^1(f) = P_i
  (||_{i=1}^m f_i)^[+(C - α_F(C_F^1(f)))]       if C_F^1(f) = ||_{i=1}^m f_i
  g^[[-B']][+(C - α_F(C_F^1(f)))]               if C_F^1(f) = g^[[-B']]
  g^[[+C']][+(C - α_F(C_F^1(f)))]               if C_F^1(f) = g^[[+C']]
  g^[-B̄][+C̄]                                  if C_F^1(f) = g^[-B']
  g^[-B̄][+(C̄ ∪ C')]                            if C_F^1(f) = g^[-B'][+C']

Here B̄ := (B' - C) ∪ (B' ∩ C ∩ Σ_F(g)) and C̄ := (B' ∩ C ∩ Σ_F(g)) ∪ (C - α_F(g)). Also, if the event set associated with any change operator symbol is empty, the corresponding symbol is redundant and is therefore removed.
An informal explanation is due for the above transformation. C_F^1(f) is defined as STOP_{α_F(f)} if Σ_F(f) = ∅. Otherwise, C_F^1 is defined recursively, with the arguments STOP_A and P_i acting as the bottom of the recursive definition, where the recursion terminates. In the case of ||_{i=1}^m f_i, the result of application of C_F^1 is determined by each of the C_F^1(f_i).
While computing C_F^1(f^[[-B]]), one first checks whether B ⊆ M_A(C_F^1(f)). If so, then C_F^1(f^[[-B]]) = C_F^1(f). Otherwise, different possibilities arise depending upon the structure of C_F^1(f) (and not of f!). For P_i, no structural change is made. For ||_{i=1}^m v_i, GCO[[-B]] distributes over the PCO. Note, however, that after distribution, C_F^1 is applied once again over each individual v_i^[[-B]] to check whether GCO[[-B]] can penetrate further. In case of arguments like g^[[+C']], g^[+C'] or g^[-B'], the GCO[[-B]] penetrates into the scope of GCO[[+C']], LCO[+C'] and LCO[-B'] respectively, while preserving the semantics. This is necessary in order to achieve the required ordering among the change operators, where GCO[[-B]] is the innermost operator. One can then easily check that after application of C_F^1, in the result, GCO[[-B]] appears only over the individual projection symbols P_i.
While computing C_F^1(f^[[+C]]), one checks whether C ⊆ M_P(C_F^1(f)) or not. If C is a subset, then it is redundant and GCO[[+C]] is removed. Otherwise, different possibilities arise depending upon the structure of C_F^1(f). For P_i, no structural change is made. If it is g^[[-B']] or ||_{i=1}^m v_i, the event set C of GCO[[+C]] is replaced by C - M_P(C_F^1(f)). If C_F^1(f) has a structure g^[[+C']], then f^[[+C]] is transformed into g^[[+(C' ∪ (C - M_P(C_F^1(f))))]]. In case the structure is g^[-B'] or g^[-B'][+C'], GCO[[+C]] penetrates into the scope of the LCOs, while preserving semantics.
C_F^1(f^[-B]) can be explained in a similar fashion as C_F^1(f^[[-B]]), except that LCO[-B] can penetrate only the LCO[+C] operator, and from B all events that are not in α_F(C_F^1(f)) are removed. Finally, C_F^1(f^[+C]) can also be explained in a similar way.
To illustrate the transformation C_F^1, we give an example here.
Example 3.2.1 Let P = F(P) be the collection of the following equations, with P = (P1, P2, P3):
P1 = (a → <P2>(P) | b → <P3||P2>(P) | c → <P1>(P))_{{a,b,c,g},0}.
P2 = (d → <P2>(P) | e → <STOP_{}>(P))_{{d,e,f},0}.
P3 = (f → <P1^[-{c,g}]>(P) | e → <STOP_{}>(P) | g → <P2>(P))_{{e,f,g},0}.
Also let
v = ((P1 || P2^[[+{a,b}]] || P3^[[+{a,c}]])^[+{a,f}][-{g}] || (STOP_{c,h} || P2^[-{g}][+{d,h}]))^[[-{b,c,e}]].
Then
C_F^1(v) = (P1^[[-{b,c,e}]] || P2^[[-{b,c,e}]][[+{a}]] || P3^[[-{b,c,e}]][[+{a}]])^[-{g}] || STOP_{h} || P2^[[-{b,c,e}]].

Note that GCO[[-{b, c, e}]] distributes over the PCOs. While distributing, it takes out the event symbols b and c from STOP_{c,h}, GCO[[+{a, b}]] and GCO[[+{a, c}]]. Also, the presence of a in GCO[[+{a, b}]] and GCO[[+{a, c}]] and the presence of f in the initial alphabet of P3 make LCO[+{a, f}] redundant. Similarly, the absence of g from and the presence of d in the initial alphabet of P2, and the presence of h in STOP_{c,h}, make the LCO[-{g}] over P2 and LCO[+{d, h}] redundant.
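As a small taste of C_F^1, the sketch below implements only conversions (i) and (ii) for towers of GCO[[+C]], over our illustrative encoding and with a minimal stand-in for the minimum present alphabet M_P of Lemma 3.1.2; the full transformation additionally reorders the change operators and distributes GCO[[-B]] over the PCO.

```python
# Illustrative fragment of C_F^1 (rules (i) and (ii) only).  The AST encoding
# and the cut-down MP below are our assumptions, not the thesis's notation.

def MP(f):
    """Minimal stand-in for the minimum present alphabet: only what is
    needed for towers of GCO[[+C]] over a projection symbol."""
    if isinstance(f, str):
        return frozenset()              # projection: argument arbitrary
    if f[0] == 'gco+':
        return MP(f[2]) | f[1]
    return frozenset()

def cf1_gco_plus(f):
    """Normalise towers of GCO[[+C]]: rule (i) merging, rule (ii) pruning."""
    if isinstance(f, tuple) and f[0] == 'gco+':
        g = cf1_gco_plus(f[2])              # normalise the body first
        C = f[1] - MP(g)                    # rule (ii): drop redundant events
        if not C:
            return g                        # empty event set: operator removed
        if isinstance(g, tuple) and g[0] == 'gco+':
            return ('gco+', g[1] | C, g[2]) # rule (i): single application
        return ('gco+', C, g)
    return f

tower = ('gco+', frozenset('ab'), ('gco+', frozenset('bc'), 'P1'))
```

On `tower`, which encodes (P1^[[+{b,c}]])^[[+{a,b}]], the redundant b is pruned and the two operators merge into a single GCO[[+{a,b,c}]].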
To express the properties of the transformation C_F^1 formally, we define the following subsets of Φ^n(G_D^||, Π̄_D).

Definition 3.2.2 Let U1 := {STOP_A | A ⊆ Σ} ∪ {P_i^[[-B1]][[+C1]][-B2][+C2] | B1, C1, B2, C2 are subsets of Σ}. Also let U2 := {(f_1||f_2||...||f_m)^[[+C1]][-B2][+C2] | m > 1, f_i ∈ U1, and C1, B2, C2 are subsets of Σ}.
Finally, V1 is defined recursively as follows:
(g_1||g_2||...||g_m)^[[+C1]][-B2][+C2] ∈ V1 whenever g_i ∈ U1 ∪ U2, m > 1, no g_i can be decomposed further as g_{i1}||g_{i2}, and not all the g_i's are in U1;
(g_1||g_2||...||g_m)^[[+C1]][-B2][+C2] ∈ V1 whenever g_i ∈ U1 ∪ U2 ∪ V1, m > 1, and no g_i can be decomposed further as g_{i1}||g_{i2}.

Note that U1, U2 and V1 together include arbitrary process expressions with two characteristic features: the first is the fixed order among the change operator symbols, and the second is the appearance of GCO[[-B]] in distributed form over the PCO. It is also assumed that if the associated event set of any change operator is empty, then that operator symbol is actually absent from the syntax.
Next we present the lemma that formally expresses the properties of C_F^1.
Lemma 3.2.1 For h ∈ Φ^n(G_D^||, Π̄_D), X = F(X), f = C_F^1(h) and any subexpression f' ∈ Sub(f), we have:
(1) f' ∈ U1 ∪ U2 ∪ V1,
(2) f' = g^[[-B]] implies B ∩ M_A(g) = ∅,
(3) f' = g^[[+C]] implies C ∩ M_P(g) = ∅,
(4) f' = g^[-B] implies B ⊆ α_F(g),
(5) f' = g^[+C] with g ≠ v^[-B] implies C ∩ α_F(g) = ∅,
(6) f' = g^[-B][+C] implies C = (C - α_F(g)) ∪ (B ∩ C ∩ Σ_F(g)),
(7) f = C_F^1(f), and
(8) <h>(X) = <f>(X).

Proof: The claims can be proved together with the help of structural induction on the construction of h ∈ Φ^n(G_D^||, Π̄_D). The proof is mechanical but tedious, as it involves exhaustive enumeration and analysis of all possible structures that f = C_F^1(h) might possess. The formal proof is given in the Appendix. Informally, however, the results are easy to see. Claims (1)-(6) describe the structure obtained after application of C_F^1 on h. As is evident from the steps of the transformation, these claims are true because each step of C_F^1 converts an expression by imposing the particular order among the change operators and removing the redundant event symbols. Since the transformation acts recursively on every subexpression, each subexpression also satisfies these claims. Claim (7) is satisfied since, obviously, C_F^1 cannot make any structural conversion of an expression which already satisfies (1)-(6), and f naturally satisfies these. Finally, the last claim follows from the fact that each step of the transformation is semantics-preserving, at least for the argument processes X satisfying X = F(X). □
Though C_F^1 removes a substantial amount of redundancy from Φ^n(G_D^||, Π̄_D), it fails to capture the underlying boundedness of the processes from this process space. To this end, a major remaining task is to prevent redundant function symbols from appearing as arguments of the PCO. Symbols like P_i^[+C], P_i^[+C]||P_i and ((P_i||P_i^[+C1])^[+C2]||P_i) with C1 ∪ C2 = C, which are semantically equivalent (but syntactically distinct), will not be identified as such even after application of C_F^1. Moreover, for a given realisation (F, <g>) and X = F(X), if B ∩ α_F(P_i) is empty, then <P_i^[-B]||P_i>(X) and <P_i>(X) represent identical processes, a fact which C_F^1 does not capture. We define another transformation, C_F^2, with its domain over the elements of C_F^1(Φ^n(G_D^||, Π̄_D)), that will detect such semantic equivalence using the local information present in F.

To begin with, given a function symbol from C_F^1(Φ^n(G_D^||, Π̄_D)), we identify its minimal trace-generating subexpression(s), called component expression(s) (or just components for brevity), which, when applied on X, generate the same traces as the original symbol.
Definition 3.2.3 (Component and Variant expressions:)
For f ∈ CF1(Φⁿ(G_D^‖, Σ_D)), the set of trace generating component expressions of f, namely
Comp(f), is defined recursively as follows:

Comp(f) :=
    {STOP_∅}                                        if f = STOP_A
    {Pi^[[B1]][B2 ∩ σF(Pi^[[B1]])]}                 if f = Pi^[[B1]][[+C1]][B2][+C2]
    {(‖_{i=1}^m vi)^[B2 ∩ σF(‖_{i=1}^m vi)]}        if f = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] and B2 ∩ σF(‖_{i=1}^m vi) ≠ ∅
    ⋃_{i=1}^m Comp(vi)                              if f = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] and B2 ∩ σF(‖_{i=1}^m vi) = ∅

CHAPTER 3. BOUNDEDNESS ANALYSIS OF FRP

In the first three cases Comp(f) is a singleton, and in these cases f is said to be a variant
of the component expression g, where Comp(f) = {g}.
Referring to example 3.2.1, we find that f = CF1(v) has three components, which are
(P1^[[{b,c,e}]] ‖ P2^[[{b,c,e}]][[+{a}]] ‖ P3^[[{b,c,e}]][[+{a}]])^[{g}], P2^[[{b,c,e}]] and STOP_∅. It is easy
to see that being a variant of the same component expression is an equivalence relation, and
any component expression and its variants will have identical traces. Also note that if f
has a structure of the fourth type above, then it cannot be a variant of any component
expression. Thus (P1 ‖ P1)^[[+C1]] cannot be a variant of any component expression even if
its component set is a singleton containing P1.
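The four-way case split that defines Comp can be read as a short structural recursion. The following Python sketch (illustrative only; the tuple encoding of expressions and the set `alpha`, standing in for the alphabet information σF of a parallel body, are assumptions of the sketch, not the thesis's notation) mirrors the four cases:

```python
# Sketch of Comp(f) from definition 3.2.3 over a toy expression type:
#   ("STOP", A)                    -> {STOP with empty alphabet}
#   ("P", i, ops)                  -> singleton (its own variant)
#   ("PCO", kids, B2, alpha, ops)  -> singleton if B2 meets alpha (third case),
#                                     union over the children otherwise (fourth case)
def comp(f):
    if f[0] == "STOP":
        return {("STOP", frozenset())}
    if f[0] == "P":
        return {f}              # stands for the P_i variant expression
    _, kids, B2, alpha, _ = f
    if B2 & alpha:              # third structure: a single component
        return {f}
    return set().union(*(comp(v) for v in kids))   # fourth structure

p1 = ("P", 1, "")
f = ("PCO", (p1, ("STOP", frozenset("h"))), frozenset(), frozenset("ab"), "")
print(sorted(c[0] for c in comp(f)))   # ['P', 'STOP']
```

The recursion bottoms out exactly where the definition does: at STOP, at process variables, and at parallel compositions whose local blocking set meets the alphabet.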
Next, using the concept of component expressions, we identify a set of conditions under
which a functional symbol g2 can merge into another symbol g1 to form a single symbol,
say, g3 ≠ g1 ‖ g2, such that ⟨g1 ‖ g2⟩(X) = ⟨g3⟩(X) for X = F(X). We say g2
merges into g1 and express it as g1 mR g2, where mR is the binary merging relation. The
result of merging is denoted as Mm(g1, g2). We also require Mm(g1, g2) to be an element
of CF1(Φⁿ(G_D^‖, Σ_D)). To keep the conditions general and amenable to easy computation,
we use only local information from F, in the form of σF(·) and MP(·). This information
can be obtained without computing any postprocess expression.
Definition 3.2.4 (Merging:) Given g1, g2 ∈ CF1(Φⁿ(G_D^‖, Σ_D)), g1 mR g2 if any of the
following is satisfied. In each case the result of merging is also indicated.

(i) g1 = STOP_A, g2 = STOP_B. Then Mm(g1, g2) := STOP_{A∪B}.

(ii) gi = h^[[+C1i]][B2i][+C2i], with B21 ∩ σF(h) = B22 ∩ σF(h) =: B2t (say), and h = Pi or Pi^[[B1]]
or v1 ‖ ⋯ ‖ vm. Then Mm(g1, g2) := h^[[+C11 ∪ C12]][B̄][+C̄], where
B̄ = B2t ∪ (B2a − (C21 ∪ C22)), such that B2ai = B2i − σF(h), i = 1, 2, and
B2a = (B2a1 ∩ B2a2) ∪ (B2a1 ∩ (C11 − (σF(h) ∪ C12))) ∪ (B2a2 ∩ (C12 − (σF(h) ∪ C11))).
Also, C̄ = ((C21 ∪ C22) ∩ B2t) ∪ ((C21 ∪ C22) − (σF(h) ∪ C11 ∪ C12)).

(iii) g1 = (‖_{i=1}^m vi)^[[+C1′]][B2′][+C2′], B2′ ∩ σF(‖_{i=1}^m vi) = ∅, and Comp(g2) − {STOP_∅} ⊆
Comp(g1) − {STOP_∅}.
Then Mm(g1, g2) := (v1 ‖ ⋯ ‖ vm ‖ g2)^[[+(C1′ − MP(g2))]][(B2′ − σF(g2))][+(C2′ − σF(g2))].

The following lemma formally states the properties of the merging relation mR.

Lemma 3.2.2 Given X = F(X) and g1, g2 ∈ CF1(Φⁿ(G_D^‖, Σ_D)), if g1 mR g2, then
(a) ⟨g1 ‖ g2⟩(X) = ⟨Mm(g1, g2)⟩(X); (b) Mm(g1, g2) ∈ CF1(Φⁿ(G_D^‖, Σ_D));
(c) Comp(g1 ‖ g2) = Comp(Mm(g1, g2)); and (d) mR is reflexive and transitive, i.e.,
g mR g, and if g1 mR g2 and g2 mR g3 then g1 mR g3. However, it is not symmetric
in general.

3.2. BOUNDEDNESS CHARACTERISATION : A(⟨Φⁿ(G_D^‖, Σ_D)⟩)

Proof: The claims are obvious for the first condition of merging. In case of the second
condition, note that ⟨g1⟩(X) and ⟨g2⟩(X) have identical traces. This is because
B21 ∩ σF(h) = B22 ∩ σF(h). The termination function is identically equal to 0. Clearly their
parallel composition will also have the same trace and termination function. Now, first, we
have to show that ⟨Mm(g1, g2)⟩(X), as defined in the definition, and ⟨g1 ‖ g2⟩(X)
are identical processes. To see this, we first explain how B̄ and C̄ are formed. Since both
gi's are from CF1(Φⁿ(G_D^‖, Σ_D)), from lemma 3.2.1 we have, for i = 1, 2: (a) B2i ∩ σF(h) ⊆
C1i; (b) B2i = B2t ∪ B2ai and B2t ∩ B2ai = ∅, where B21 ∩ σF(h) = B22 ∩ σF(h) =: B2t and
B2ai = B2i − σF(h); (c) C2i = (C2i − (σF(h) ∪ C1i)) ∪ (C2i ∩ B2t) and (C2i − (σF(h) ∪ C1i))
∩ (C2i ∩ B2t) = ∅. Now σF(g1 ‖ g2) is rewritten as follows.

σF(g1 ‖ g2) = ⋃_{i=1,2} (((σF(h) ∪ C1i) − B2i) ∪ C2i)    (from definition of PCO)

= ⋃_{i=1,2} (((σF(h) ∪ C1i ∪ C1j) − (B2i ∪ (C1j − (σF(h) ∪ C1i)))) ∪ C2i),   (j = 1, 2, j ≠ i)
(Note: for any three sets X, Y, Z, we have X − Y = (X ∪ Z) − (Y ∪ (Z − X)). Here we
take X = σF(h) ∪ C1i, Y = B2i and Z = C1j.)

= ((σF(h) ∪ C11 ∪ C12) − ⋂_{i=1,2} (B2i ∪ (C1j − (σF(h) ∪ C1i)))) ∪ C21 ∪ C22,
(Note: for any three sets X, Y1, Y2, (X − Y1) ∪ (X − Y2) = X − ⋂_{i=1,2} Yi. We take
X = σF(h) ∪ C11 ∪ C12 and Yi = B2i ∪ (C1j − (σF(h) ∪ C1i)).)

= ((σF(h) ∪ C11 ∪ C12) − (B2t ∪ ⋂_{i=1,2} (B2ai ∪ (C1j − (σF(h) ∪ C1i))))) ∪ C21 ∪ C22
(Note: Yi = B2t ∪ B2ai ∪ (C1j − (σF(h) ∪ C1i)) and B2t ∩ (B2ai ∪ (C1j − (σF(h) ∪ C1i))) = ∅
for both i = 1, 2. Then Y1 ∩ Y2 = B2t ∪ ⋂_{i=1,2} (B2ai ∪ (C1j − (σF(h) ∪ C1i))).)

= ((σF(h) ∪ C11 ∪ C12) − (B2t ∪ B2a)) ∪ C21 ∪ C22

= ((σF(h) ∪ C11 ∪ C12) − B̄) ∪ C̄

= σF(Mm(g1, g2)), where B̄ = B2t ∪ (B2a − (C21 ∪ C22)) and C̄ = ((C21 ∪ C22) ∩ B2t) ∪
((C21 ∪ C22) − (σF(h) ∪ C11 ∪ C12)).
(Note: the last expression has been obtained by modifying the associated event sets of
the local change operators, while preserving the semantics, so that the first six conditions
of lemma 3.2.1 are satisfied. In other words, this is equivalent to applying CF1(·) on
h^[[+C11 ∪ C12]][B2t ∪ B2a][+C21 ∪ C22].)

Clearly B̄ ∩ σF(h) = B2t. It is then obvious that ⟨Mm(g1, g2)⟩(X), as defined in the
second condition, and ⟨g1 ‖ g2⟩(X) have identical trace, termination and initial alphabet.
Also, for any string s other than the empty string ⟨⟩, ⟨g1 ‖ g2⟩(X)(s)
= ⟨h⟩(X)(s) ∪ C11 ∪ C12 = ⟨Mm(g1, g2)⟩(X)(s). Hence ⟨g1 ‖ g2⟩(X) =
⟨Mm(g1, g2)⟩(X). From the last step of the above derivation, it can be easily seen that
Mm(g1, g2) belongs to CF1(Φⁿ(G_D^‖, Σ_D)), as it satisfies the first six conditions
of lemma 3.2.1. From the second condition of mR, it is easy to see that Comp(g1) =
Comp(g2) = Comp(g1 ‖ g2) = Comp(Mm(g1, g2)) = {h^[B2t]} (if B2t ≠ ∅) or Comp(h)
(otherwise). From the above it is also obvious that mR is reflexive and transitive.
We now proceed to prove the lemma for the third condition of mR. Because of the
special structure obtained after application of CF1, given any f ∈ CF1(Φⁿ(G_D^‖, Σ_D)), the
elements of Comp(f) can be considered as the set of trace generating components of f. This
is because tr_f(X) is formed via interleaving of the traces of the processes corresponding to
these individual component expressions. The STOP_A symbols and the different LCO[+C] and
GCO[[+C]] symbols only add to the blocking capability of the different components joined by
PCO. For example, consider the symbol f = (P1 ‖ P2^[[B]][+C1])^[[+C2]] ‖ P2^[[B]][+C3], which
clearly belongs to CF1(Φⁿ(G_D^‖, Σ_D)) when F is as given in example 3.2.1 and B = {b, c, e},
C1 = {a}, C2 = {g} and C3 = {e}. It should be noted that it is never possible for an
event to occur in ⟨f⟩(X) such that it occurs in the component P2^[[B]][+C3] alone.
This is simply because P2^[[B]][+C3] and P2^[[B]][+C1] have identical traces. Thus the
application of GCO[[+C2]] does not represent any extra blocking constraint for the component
P2^[[B]][+C3] as far as its operation in ⟨f⟩(X) is concerned. According to condition
(iii) of mR (definition 3.2.4), in this case (P1 ‖ P2^[[B]][+C1])^[[+C2]] and P2^[[B]][+C3] are
merging compatible, and they can be merged to form (P1 ‖ P2^[[B]][+C1] ‖ P2^[[B]][+C3])^[[+C2]].

To proceed formally, assume h1 = ‖_{i=1}^m vi. Consider any s ∈ tr⟨g1 ‖ g2⟩(X). If
s^⟨σ⟩ ∈ tr⟨g1 ‖ g2⟩(X), then, due to (iii) of definition 3.2.4, only the following two
cases are possible:
1. (s^⟨σ⟩)↾⟨g1⟩(X) = (s↾⟨g1⟩(X))^⟨σ⟩ and (s^⟨σ⟩)↾⟨g2⟩(X) = s↾⟨g2⟩(X);
2. (s^⟨σ⟩)↾⟨g1⟩(X) = (s↾⟨g1⟩(X))^⟨σ⟩ and (s^⟨σ⟩)↾⟨g2⟩(X) = (s↾⟨g2⟩(X))^⟨σ⟩.
The third case, namely (s^⟨σ⟩)↾⟨g1⟩(X) = s↾⟨g1⟩(X) and (s^⟨σ⟩)↾⟨g2⟩(X) =
(s↾⟨g2⟩(X))^⟨σ⟩, is not possible. This is because any event that appears in the
traces of some ⟨g⟩(X), for g ∈ CF1(Φⁿ(G_D^‖, Σ_D)), actually occurs in some component
process(es) ⟨f⟩(X), such that f ∈ Comp(g). And, if the third condition of
mR is satisfied, it will not be possible for any event to take place asynchronously in
⟨g2⟩(X) alone, in the concurrent combination ⟨g1 ‖ g2⟩(X). More specifically,
it will not be possible to get an event which will satisfy all of the following
simultaneously: (a) it will take place in ⟨g2⟩(X); (b) neither will it be blocked, nor will it
be executed by ⟨h1⟩(X) (which has the same set of component processes as that of
⟨g1⟩(X), and hence contains all those of ⟨g2⟩(X)); (c) it will be blocked due to the
application of the operators GCO[[+C1′]], LCO[B2′] and LCO[+C2′], where B2′ ∩ σF(h1) =
∅. Had the above three requirements been satisfied simultaneously by any event, then
the event would have appeared in the traces of ⟨(h1 ‖ g2)^[[+C1′]][B2′][+C2′]⟩(X) but would
have failed to appear in ⟨g1 ‖ g2⟩(X). However, such is not the case, due to condition
(iii). Also, under this condition, ⟨g1 ‖ g2⟩(X) = ⟨(h1 ‖ g2)^[[+C1′]][B2′][+C2′]⟩(X)
= ⟨(h1 ‖ g2)^[[+(C1′ − MP(g2))]][(B2′ − σF(g2))][+(C2′ − σF(g2))]⟩(X) =: ⟨Mm(g1, g2)⟩(X). To
prove the second part of the lemma for condition (iii), note that, since both g1 and g2
are in CF1(Φⁿ(G_D^‖, Σ_D)), by lemma 3.2.1 we can conclude the following: (a) h1 ‖ g2 ∈
CF1(Φⁿ(G_D^‖, Σ_D)); (b) C1′, B2′ and C2′ satisfy the different structural conditions mentioned in
lemma 3.2.1 with respect to h1. Together, it is easy to see that (C1′ − MP(g2)), (B2′ − σF(g2))
and (C2′ − σF(g2)) will also satisfy the constraints mentioned in lemma 3.2.1, with
respect to h1 ‖ g2. As a result, Mm(g1, g2) ∈ CF1(Φⁿ(G_D^‖, Σ_D)). It can also be easily seen
that, as B2′ ∩ σF(‖_{i=1}^m vi) = ∅, Comp(g1 ‖ g2) = Comp(Mm(g1, g2)) = Comp(g1) ∪
Comp(g2). By the third condition this is also equal to either Comp(g1) or Comp(g1) ∪
{STOP_∅}, when STOP_∅ ∈ Comp(g2) − Comp(g1). Satisfaction of the reflexivity
and transitivity of mR is then obvious.
□
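The two set identities invoked in the derivation above can be verified mechanically; the following Python fragment (illustrative only, not part of the development) checks them exhaustively over all subsets of a three-element universe:

```python
from itertools import combinations

# Exhaustively verify the two set identities used in the derivation of
# sigma_F(g1 || g2), over all subsets of a small universe U.
U = {0, 1, 2}
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for X in subsets:
    for Y in subsets:
        for Z in subsets:
            # Identity 1: X - Y = (X | Z) - (Y | (Z - X))
            assert X - Y == (X | Z) - (Y | (Z - X))
            # Identity 2: (X - Y) | (X - Z) = X - (Y & Z)
            assert (X - Y) | (X - Z) == X - (Y & Z)

print("both identities hold on all subsets of", U)
```

Since both identities are built from Boolean combinations of three sets, exhaustiveness over a three-element universe already constitutes a proof for arbitrary sets.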
In case of condition (iii) of definition 3.2.4, if Comp(g2) − {STOP_∅} is a proper
subset of Comp(g1) − {STOP_∅} (which also equals Comp(‖_{i=1}^m vi) − {STOP_∅}), then
Mm(g1, g2) is defined, but Mm(g2, g1) is not defined. In case these two sets are equal, both
Mm(g1, g2) and Mm(g2, g1) will be defined. However, they will be syntactically distinct
even though ⟨g1 ‖ g2⟩(X) = ⟨Mm(g1, g2)⟩(X) = ⟨Mm(g2, g1)⟩(X).
Example 3.2.2 Let B = {b, c, e}, C1 = {a, b}, C2 = {b, c}. Given the equations F
in example 3.2.1, h = (P1 ‖ P2^[[B]])^[[+C1]] ‖ (P1 ‖ P2^[[B]])^[[+C2]] is in CF1(Φⁿ(G_D^‖, Σ_D)) and it
can be expressed in merged form either as (P1 ‖ P2^[[B]] ‖ (P1 ‖ P2^[[B]])^[[+C2]])^[[+(C1−C2)]] or
as (P1 ‖ P2^[[B]] ‖ (P1 ‖ P2^[[B]])^[[+C1]])^[[+(C2−C1)]]. Further merging among subexpressions in
both cases results in the unique expression (P1 ‖ P2^[[B]])^[[+C1∪C2]].
Remark 3.2.1 (Tree representation:) Any f ∈ CF1(Φⁿ(G_D^‖, Σ_D)) can be visualised as
a tree, namely Tree(f), as follows. The root node contains the expression f. Now, given
any node of the tree, we construct its children recursively as below. If the node contains
some expression of the form STOP_A or Pi^[[B1]][[+C1]][B2][+C2] or (‖_{i=1}^m vi)^[[+C1]][B2][+C2]
such that B2 ∩ σF(‖_{i=1}^m vi) ≠ ∅, then the node is not expanded further. On the other
hand, if the node contains some such expression such that B2 ∩ σF(‖_{i=1}^m vi) = ∅, then
we create m children, one each containing each vi. Let Node(f) be the set of nodes in
Tree(f) and L(f) ⊆ Node(f) be the set of leaf nodes. For any node n ∈ Node(f), h_n
denotes the expression contained in it. Naturally, for any leaf node l ∈ L(f), h_l is a variant
of some component expression of f.
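A minimal sketch of this tree construction collects the leaf-node expressions h_l in one recursive walk (the tuple encoding and the set `alpha`, standing in for σF of the parallel body, are assumptions of the sketch):

```python
# Sketch of the leaf nodes of Tree(f) from remark 3.2.1, over a toy
# expression type: ("STOP", A), ("P", i, ops), or
# ("PCO", kids, B2, alpha, ops) where alpha plays the role of sigma_F.
def leaves(f):
    """Collect the leaf-node expressions h_l of Tree(f)."""
    if f[0] in ("STOP", "P"):
        return [f]
    _, kids, B2, alpha, _ = f
    if B2 & alpha:              # B2 meets the alphabet: not expanded
        return [f]
    out = []
    for v in kids:              # B2 disjoint from the alphabet: expand
        out.extend(leaves(v))
    return out

inner = ("PCO", [("P", 2, ""), ("STOP", frozenset("h"))],
         frozenset(), frozenset("bc"), "")
f = ("PCO", [("P", 1, ""), inner], frozenset(), frozenset("abc"), "")
print(len(leaves(f)))   # 3 leaf nodes: P1, P2 and the STOP symbol
```

Every leaf produced this way is, as the remark states, a variant of some component expression of f.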


Example 3.2.3 Consider the expression h presented in example 3.2.2. If we construct
Tree(h), it will have a root node containing h and two immediate child nodes, containing
(P1 ‖ P2^[[B]])^[[+C1]] and (P1 ‖ P2^[[B]])^[[+C2]] respectively. Each of these nodes will have an
identical set of two children (leaf) nodes, containing P1 and P2^[[B]] respectively.
Next, we present the transformation CF2 on CF1(Φⁿ(G_D^‖, Σ_D)) that merges, wherever
possible, among all subexpressions of the type h = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] in a recursive
way. After application of the transformation, h will be modified into a unique expression
(‖_{i=1}^r gi)^[[+C1]][B2][+C2] in CF1(Φⁿ(G_D^‖, Σ_D)), while preserving the semantics for the
argument X.
Definition 3.2.5 (Transformation CF2:) The transformation CF2 : CF1(Φⁿ(G_D^‖, Σ_D))
→ CF1(Φⁿ(G_D^‖, Σ_D)) is defined as follows.

CF2(f) :=
    f                               if f ∈ U1
    CF1(h^[[+C1]][B2][+C2])         if f = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] ∈ U2 ∪ V1

where h = Closure(‖_{i=1}^m vi) is computed as follows.

Algorithm: Closure(‖_{i=1}^m vi).
Step 1: Compute the set of expressions S as follows. For each vi, compute CF2(vi).
If CF2(vi) ∈ U1 or CF2(vi) = (‖_{j=1}^r wj)^[[+C1]][B2][+C2] such that at least one of C1,
B2, C2 is nonempty, then for every node n in Node(CF2(vi)) of Tree(CF2(vi)), h_n ∈ S.
Otherwise, if CF2(vi) = ‖_{j=1}^r wj, then for each wj, and for every node n in Node(wj),
h_n ∈ S. Let S = {g1, …, gr}.
(Note: S has been constructed by applying CF2 recursively on all arguments of the PCO
and then collecting the syntactically distinct expressions.)
Step 2: Closure(‖_{i=1}^m vi) := g1 if S = {g1} is a singleton. Otherwise,
Closure(‖_{i=1}^m vi) := ‖_{h ∈ M(‖_{i=1}^r gi)} h,
where M is the set of expressions obtained by a recursive merging procedure defined below.
(Note: Among the elements of S we apply the recursive merging procedure M, so that in
the resultant expression no further merging among arguments of the PCO is possible.)
End of Algorithm
Definition 3.2.6 (Recursive Merging Procedure M:) If f ∈ U1 then M(f) := {f}.
If f = ‖_{i=1}^m vi then M(f) is computed using the steps given later. Finally, if f =
(‖_{i=1}^m vi)^[[+C1]][B2][+C2] and at least one of C1, B2, C2 is non-empty, then M(f)
:= {CF1(h^[[+C1]][B2][+C2])}, where h := g if M(‖_{i=1}^m vi) = {g}, and h = ‖_{g ∈ M(‖_{i=1}^m vi)} g
otherwise.

M(‖_{i=1}^m vi) is computed using the following steps.
Step M.1: Let V denote the set of syntactically distinct elements among v1, …, vm.
While (∃ g1, g2 ∈ V, g1 ≠ g2, g1 mR g2 and g2 mR g1) do {
1. V1 := V;
2. Partition V1 such that g1, g2 ∈ V1 are in the same partition iff both g1 mR g2 and
g2 mR g1. Let there be p partitions and the j-th partition be V1j = {g_{j1}, …, g_{jk_j}};
3. Compute
V2 := {g_{j1} | V1j = {g_{j1}} is a singleton and 1 ≤ j ≤ p}
      ∪ ⋃_{V1j = {g_{j1}, …, g_{jk_j}} | 1 ≤ j ≤ p, k_j > 1} M(Mm(⋯ Mm(Mm(g_{j1}, g_{j2}), g_{j3}) ⋯, g_{jk_j}));
4. V := V2 } end (While)
(Note: In order to compute M(‖_{i=1}^m vi), we first partition the set V using both-way
merging compatibility as an equivalence relation. All the components of a single partition
are first merged in some arbitrary order, and then M is applied on the result so that we get a
unique collection of expressions. At the end, among elements of V, no both-way merging
is possible.)
Step M.2: Compute V3 and V4 as follows:
V3 := {v ∈ V | ∄ h ∈ V, h ≠ v, such that v mR h or h mR v}.
V4 := {v ∈ V | ∄ h ∈ V, h ≠ v, such that h mR v, but ∃ h ∈ V, h ≠ v, such that v mR h}.
Step M.4: ∀ v ∈ V4 compute V4v := {h ∈ V | h ≠ v and v mR h} = {h_{v1}, …, h_{vk_v}} (say).
(Note: V3 is the set of elements of V each of which neither merges into any other element
of V, nor does any other element of V merge into it. V4 is the set of elements of V each
of which does not merge into any other element of V, but there are other element(s) of V
which merge into it. Naturally, each element of V which is not in V3 ∪ V4 merges into one
or more elements of V4. In the following step the above merging takes place recursively,
using the transformation M, which results in a unique collection of expressions irrespective
of the order of merging.)
Step M.5: Compute
V5 := V3 ∪ ⋃_{v ∈ V4, V4v = {h_{v1}, …, h_{vk_v}}} M(Mm(⋯ Mm(Mm(v, h_{v1}), h_{v2}) ⋯, h_{vk_v})).
Step M.6: If (∃ g1, g2 ∈ V5, g1 ≠ g2, g1 mR g2) go to Step M.1 with V := V5. Otherwise
M(‖_{i=1}^m vi) := V5.
End of Procedure
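Stripped of the bookkeeping that guarantees order-independence (the partitioning of steps M.1–M.5), the procedure M is a fixed-point loop: repeatedly pick a pair related by the merging relation, replace it by the merged symbol, and stop when no pair remains. The sketch below shows only this skeleton; `merges` and `merge` are passed in as abstract stand-ins for mR and Mm, and the toy instance (frozensets merged by union when one contains the other) is an assumption of the sketch:

```python
# Abstract fixed-point skeleton of procedure M. merges(a, b) plays the
# role of "b merges into a" (a mR b); merge(a, b) plays the role of Mm.
def M(V, merges, merge):
    V = set(V)
    changed = True
    while changed:
        changed = False
        for a in list(V):
            for b in list(V):
                if a != b and a in V and b in V and merges(a, b):
                    V.discard(a)
                    V.discard(b)
                    V.add(merge(a, b))   # replace the pair by the merge
                    changed = True
    return V

# Toy instance: b merges into a iff b's symbols are contained in a's.
merges = lambda a, b: b <= a
merge = lambda a, b: a | b
out = M([frozenset("ab"), frozenset("a"), frozenset("c")], merges, merge)
print(sorted("".join(sorted(s)) for s in out))   # ['ab', 'c']
```

Unlike the thesis's M, this skeleton does not fix the order of merges, so it reproduces the fixed point but not the uniqueness argument of lemma 3.2.3(a).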
Example 3.2.4 Given the process equations F in example 3.2.1, consider the following
functional symbols in CF1(Φⁿ(G_D^‖, Σ_D)). Let g1 = (P1 ‖ P2^[+{h}])^[[+{a}]], g2 = (P3 ‖ P2^[+{g}])^[[+{b}]],
g3 = (P1 ‖ P2^[+{g}])^[[+{a}]], g4 = (P3 ‖ P2^[+{h}])^[[+{b}]]. Finally, let us compute CF2(f) where
f = (g1 ‖ g2)^[{g}] ‖ (g3 ‖ g4)^[{g}]. It can be easily seen that CF2(gi) = gi for 1 ≤ i ≤ 4.
Now consider Closure(g1 ‖ g2). Following the first steps of the algorithm, we immediately
find it to be (P1 ‖ P2^[+{h,g}])^[[+{a}]] ‖ (P3 ‖ P2^[+{h,g}])^[[+{b}]]. Note how the arguments of the
outermost PCO share identical variants P2^[+{g,h}] for the identical component P2. Similarly,
Closure(g3 ‖ g4) results in the same expression. As a result, CF2((g1 ‖ g2)^[{g}]) and
CF2((g3 ‖ g4)^[{g}]) are both equal to ((P1 ‖ P2^[+{h,g}])^[[+{a}]] ‖ (P3 ‖ P2^[+{h,g}])^[[+{b}]])^[{g}].
Clearly, CF2(f) also equals this expression.
Example 3.2.5 As another example, consider the following expressions in CF1(Φⁿ(G_D^‖, Σ_D))
for the same F as in example 3.2.1:
f1 = ((P1^[[{b}]] ‖ STOP_{h})^[[+{b}]] ‖ (P2 ‖ P3)^[[+{c}]])^[+{m}];
f2 = (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e})^[+{a}];
f3 = ((P1^[[{e}]] ‖ P2)^[+{n}] ‖ P3)^[[+{k}]];
f4 = P1^[[{b}]][+{m}].
To find CF2(‖_{i=1}^4 fi), note that CF2(fi) = fi for i = 2, 3, 4, but
CF2(f1) = ((P1^[[{b}]] ‖ STOP_{h})^[[+{b}]] ‖ (P2 ‖ P3 ‖ STOP_{h})^[[+{c}]])^[+{m}]. Next we compute
Closure(‖_{i=1}^4 fi) according to the algorithm.

S = {CF2(f1), f2, f3, f4, (P1^[[{b}]] ‖ STOP_{h})^[[+{b}]], P1^[[{b}]], (P2 ‖ P3 ‖ STOP_{h})^[[+{c}]],
STOP_{h}, P2, P3, P2^[+{g}], P3^[+{d}], STOP_{e}, (P1^[[{e}]] ‖ P2)^[+{n}], P1^[[{e}]]}.

In the first iteration, we find
V = {CF2(f1), f3, f4, (P1^[[{b}]] ‖ STOP_{h})^[[+{b}]], (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}],
STOP_{e,h}, P2^[+{g}], P3^[+{d}], (P1^[[{e}]] ‖ P2)^[+{n}], P1^[[{e}]]}.
V3 = ∅.
V4 = {CF2(f1), f3}.

The expressions of V that merge into CF2(f1) are f4, (P1^[[{b}]] ‖ STOP_{h})^[[+{b}]],
(P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}], STOP_{e,h}, P2^[+{g}], and P3^[+{d}]. Merging
them into CF2(f1) in any order and then applying M, we get the set of expressions
{(P1^[[{b}]][+{m}] ‖ STOP_{e,h})^[[+{b}]], (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}]}.

Similarly, the expressions of V that merge into f3 are STOP_{e,h}, P2^[+{g}], P3^[+{d}],
(P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}], (P1^[[{e}]] ‖ P2)^[+{n}], P1^[[{e}]]. Merging them
into f3 in any order and then applying M, we get the singleton set
{((P1^[[{e}]] ‖ P2^[+{g}] ‖ STOP_{e,h})^[+{n}] ‖ (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}])^[[+{k}]]}.

So V5 = {(P1^[[{b}]][+{m}] ‖ STOP_{e,h})^[[+{b}]], (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}],
((P1^[[{e}]] ‖ P2^[+{g}] ‖ STOP_{e,h})^[+{n}] ‖ (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}])^[[+{k}]]}.
In this set, the second element clearly merges into the third one. So in the second
iteration, using this V5 as V, we get the unique collection of expressions (the new V5) containing
the first and third expressions of the previous V5, in which no further merging is possible.
Then CF2(‖_{i=1}^4 fi) = (P1^[[{b}]][+{m}] ‖ STOP_{e,h})^[[+{b}]] ‖
((P1^[[{e}]] ‖ P2^[+{g}] ‖ STOP_{e,h})^[+{n}] ‖ (P2^[+{g}] ‖ P3^[+{d}] ‖ STOP_{e,h})^[[+{c}]][+{a}])^[[+{k}]].
Lemma 3.2.3 The transformation CF2 satisfies the following properties.
(a) Irrespective of the order of merging in the steps of M, CF2 results in a unique expression.
(b) For h ∈ CF1(Φⁿ(G_D^‖, Σ_D)), if f = CF2(h), then (i) f ∈ CF1(Φⁿ(G_D^‖, Σ_D)); (ii) ⟨f⟩(X)
= ⟨h⟩(X), where X = F(X); and (iii) if f′ = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] ∈ Sub(f), then
∄ i, j ∈ {1, …, m}, i ≠ j, such that vi mR vj.
(c) For v1, …, vm ∈ CF2(CF1(Φⁿ(G_D^‖, Σ_D))), such that vi ≠ v_{i1} ‖ v_{i2},
Comp(Closure(‖_{i=1}^m vi)) = Comp(‖_{i=1}^m vi).
Proof Outline: Intuitively, the results of the lemma are easy to see from the following
facts. (i) From lemma 3.2.2 we see that the merging operation preserves the components
and the semantics for the argument X, and results in an expression in CF1(Φⁿ(G_D^‖, Σ_D)). (ii)
The transformation CF2 uses the recursive merging procedure M to replace different variants
of common components of the vi's (in an expression (‖_{i=1}^m vi)^[[+C1]][B2][+C2]) by a
unique variant of the same component, while preserving the semantics for the argument
X. Then it uses the procedure M over the arguments of the PCO, so that in the resultant
expression no further merging among the arguments of the PCO is possible. (iii) Finally,
the recursive merging procedure M merges recursively among expressions as well as among
the subexpressions of the resulting expressions obtained from previous-level mergings, so
that at the end we get a unique expression in which no further merging is possible among
arguments of a PCO. The formal proof of the lemma is presented in the Appendix.
□
Definition 3.2.7 (Transformation CF:) Let CF : Φⁿ(G_D^‖, Σ_D) → Φⁿ(G_D^‖, Σ_D) be such that
CF(h) := CF2(CF1(h)).

Lemma 3.2.4 Given X = F(X) and h ∈ Φⁿ(G_D^‖, Σ_D), f = CF(h) satisfies the following: (a)
f ∈ CF1(Φⁿ(G_D^‖, Σ_D)); (b) ⟨h⟩(X) = ⟨f⟩(X).

Proof: Straightforward from lemmas 3.2.3 and 3.2.1.


Definition 3.2.8 (Postprocess Computation for A(⟨Φⁿ(G_D^‖, Σ_D)⟩):) Given a set of equations
X = F(X) and h ∈ CF(Φⁿ(G_D^‖, Σ_D)) such that σF(h) ≠ ∅, for some σ ∈ σF(h)
the postprocess expression (denoted as h/σ) is an element of CF(Φⁿ(G_D^‖, Σ_D)) such that
(⟨h⟩(X))/⟨σ⟩ = ⟨h/σ⟩(X), and it is computed recursively as follows.

• h = Pi ⟹ h/σ = CF(fij), where σ = σij and the equation describing Pi in F is of the
form of equation (2.4), i.e., Pi = (σi1 → ⟨fi1⟩(X) | ⋯ | σik_i → ⟨fik_i⟩(X))_{Ai, 0}.
• h = g^[[B]] ⟹ h/σ = CF((g/σ)~^[[B]]), where (g/σ)~ is the result of removal of the
outermost quote marks of (g/σ). Here σ ∉ B.
• h = g^[[+C]] ⟹ h/σ = CF((g/σ)~^[[+C]]).
• h = g^[B] ⟹ h/σ = CF(g/σ). (Here σ ∉ B.)
• h = g^[+C] ⟹ h/σ = CF(g/σ).
• h = ‖_{i=1}^p gi ⟹ h/σ = CF(‖_{i=1}^q fi), where for 1 ≤ i ≤ p, gi′ = gi if σ ∉ σF(gi) and gi′
= gi/σ if σ ∈ σF(gi).
(Note: (a) σ ∈ σF(h) implies that there is no i for which σ is blocked by ⟨gi⟩(X).
(b) ‖_{i=1}^q fi = ‖_{i=1}^p gi′, no fi can be expressed as f_{i1} ‖ f_{i2}, and q ≥ p. Note here that, when
gi′ = gi/σ, it may have the structure gi′ = ‖_{j=1}^{m_i} g_{ij}′, where the individual g_{ij}′ cannot be
decomposed further into parallel components. Each of these g_{ij}′ will be considered as some fi,
and as a result, q ≥ p.)

Finally, for P ∈ A(⟨Φⁿ(G_D^‖, Σ_D)⟩) with realisation (F, ⟨g⟩), the set of postprocess
expressions SP ⊆ CF(Φⁿ(G_D^‖, Σ_D)) is computed recursively as follows:
• CF(g) ∈ SP.
• (h ∈ SP) ∧ (σ ∈ σF(h)) ⟹ h/σ ∈ SP.
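The recursive construction of SP is an ordinary closure computation: start from CF(g) and keep forming postprocesses until nothing new appears. A sketch with `alphabet` and `post` as abstract stand-ins for σF(·) and h/σ (both are assumptions of the sketch, not the thesis's definitions):

```python
# Closure computation for S_P: start from an initial expression and
# apply the postprocess map until no new expressions are produced.
# alphabet(h) stands for sigma_F(h); post(h, s) stands for h/s.
def postprocess_set(g, alphabet, post):
    SP, frontier = {g}, [g]
    while frontier:
        h = frontier.pop()
        for s in alphabet(h):
            h2 = post(h, s)
            if h2 not in SP:
                SP.add(h2)
                frontier.append(h2)
    return SP

# Toy instance: "expressions" are integers mod 5 and the single event
# 'a' advances by one; the closure is finite, as boundedness requires.
SP = postprocess_set(0, lambda h: {"a"}, lambda h, s: (h + 1) % 5)
print(sorted(SP))   # [0, 1, 2, 3, 4]
```

The loop terminates exactly when the postprocess map ranges over a finite set, which is the situation established for CF-normalised expressions in the results that follow.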
We now present the main result of this section. For any process P in A(⟨Φⁿ(G_D^‖, Σ_D)⟩)
with realisation (F, ⟨g⟩), if SP is computed using the above procedure, then SP is a finite
set and therefore P is bounded. To show this, we first define the set of possible components
of all postprocess expressions of the process with realisation (F, ⟨g⟩) as follows.

Definition 3.2.9 COMP(F, ⟨g⟩) :=
{STOP_A | A ⊆ Σ} ∪ ⋃_{f ∈ {g} ∪ {fij | 1 ≤ j ≤ ki, 1 ≤ i ≤ n}, B ⊆ Σ} Comp(CF(f^[[B]])),
where the fij expressions are obtained from the equations in F.

Naturally, COMP(F, ⟨g⟩) is finite. Next we show that for any expression f in SP,
Comp(f) is a subset of COMP(F, ⟨g⟩).


Lemma 3.2.5 For P ∈ A(⟨Φⁿ(G_D^‖, Σ_D)⟩) with realisation (F, ⟨g⟩), f ∈ SP ⟹
Comp(f) ⊆ COMP(F, ⟨g⟩).
Proof: The proof of the lemma follows from the two claims given below.
Claim 1: Comp(CF(g)) ⊆ COMP(F, ⟨g⟩).
The proof of this claim is straightforward from the definition of COMP(F, ⟨g⟩) and by
taking g as g^[[∅]].
Claim 2: For P ∈ A(⟨Φⁿ(G_D^‖, Σ_D)⟩) with realisation (F, ⟨g⟩), if h ∈ SP and
Comp(h) ⊆ COMP(F, ⟨g⟩), then ∀ σ ∈ σF(h), Comp(h/σ) ⊆ COMP(F, ⟨g⟩).
The claim is proved by induction on the structure of h, which belongs to CF(Φⁿ(G_D^‖, Σ_D)).

Basis: Let h = Pi^[[B1]][[+C1]][B2][+C2] ∈ SP and Comp(h) ⊆ COMP(F, ⟨g⟩). For
σ ∈ σF(h), we have h/σ = CF((CF((Pi/σ)^[[B1]]))^[[+C1]])
= CF((CF((CF(fij))^[[B1]]))^[[+C1]]) = CF((CF(fij^[[B1]]))^[[+C1]]). This is because, for
any f, CF((CF(f))^[[B]]) = CF(f^[[B]]). Now, by definition, Comp(CF(fij^[[B1]])) is a
subset of COMP(F, ⟨g⟩). To find out the components of h/σ, we have to consider the
different possible structures of CF(fij^[[B1]]).

Case B1: CF(fij^[[B1]]) = STOP_A. Then h/σ = STOP_{A∪C1} and trivially Comp(h/σ) ⊆
COMP(F, ⟨g⟩).

Case B2: CF(fij^[[B1]]) = Pj^[[B1′]][[+C1′]][B2′][+C2′]. Then h/σ = Pj^[[B1′]][[+C1′ ∪ C1]][B̄2][+C̄2].
Here B̄2 = (B2′ ∩ σF(Pj^[[B1′]])) ∪ ((B2′ − σF(Pj^[[B1′]])) − C1),
C̄2 = (C2′ ∩ B2′ ∩ σF(Pj^[[B1′]])) ∪ (C2′ − σF(Pj^[[B1′]]) − C1′ − C1). From the structure of
B̄2 it can be easily concluded that Comp(h/σ) = Comp(CF(fij^[[B1]])), and hence it is a
subset of COMP(F, ⟨g⟩).

Case B3: CF(fij^[[B1]]) = (‖_{i=1}^m vi)^[[+C1′]][B2′][+C2′]. Then h/σ = (‖_{i=1}^m vi)^[[+C̄1]][B̄2][+C̄2].
Here C̄1 = C1′ ∪ (C1 − MP((‖_{i=1}^m vi)^[[+C1′]])), B̄2 = (B2′ ∩ σF(‖_{i=1}^m vi)) ∪ ((B2′ − σF(‖_{i=1}^m vi)) − C1),
C̄2 = (C2′ ∩ B2′ ∩ σF(‖_{i=1}^m vi)) ∪ (C2′ − σF(‖_{i=1}^m vi) − C1′ − C1). Again, from the structure
of B̄2 it can be concluded that Comp(h/σ) = Comp(CF(fij^[[B1]])), and hence it is a subset
of COMP(F, ⟨g⟩).

Hypothesis: Let h = (‖_{i=1}^m vi)^[[+C1]][B2][+C2] be in SP, with Comp(h) ⊆ COMP(F, ⟨g⟩) and,
for any i and any σ ∈ σF(h) such that vi/σ is defined, Comp(vi/σ) ⊆ COMP(F, ⟨g⟩).
Step: In the induction step, it is to be proved that Comp(h/σ) ⊆ COMP(F, ⟨g⟩). Now,
by definition, h/σ = CF((CF(‖_{i=1}^q fi))^[[+C1]]), where for 1 ≤ i ≤ m, vi′ = vi if σ ∉ σF(vi)
and vi′ = vi/σ if σ ∈ σF(vi).
Here ‖_{i=1}^q fi = ‖_{i=1}^m vi′, no fi can be expressed as f_{i1} ‖ f_{i2}, and q ≥ m. Note here that,
when vi′ = vi/σ, it may have the structure vi′ = ‖_{j=1}^{m_i} v_{ij}′, where the individual v_{ij}′ cannot
be decomposed further into parallel components. Each of these v_{ij}′ will be considered as some
fi, and as a result, q ≥ m.
Since h ∈ SP, and CF has been applied while computing vi/σ, it is obvious that each
fi belongs to CF(Φⁿ(G_D^‖, Σ_D)). Then h/σ = CF(ḡ^[[+C1]]), where ḡ = Closure(‖_{i=1}^q fi).
As each fi belongs to CF(Φⁿ(G_D^‖, Σ_D)), from lemma 3.2.3,
Comp(Closure(‖_{i=1}^q fi)) = Comp(‖_{i=1}^q fi) = Comp(‖_{i=1}^m vi′).

Also, by the induction hypothesis, this is a subset of COMP(F, ⟨g⟩). It is then trivial to see
that
Comp(h/σ) = Comp(CF(ḡ^[[+C1]])) (where ḡ = Closure(‖_{i=1}^q fi))
= Comp(Closure(‖_{i=1}^q fi)) ⊆ COMP(F, ⟨g⟩).
Thus, by induction, Claim 2 is found to be true. The lemma then follows from the two
claims.
□
Our next result shows that the set of expressions of CF(Φⁿ(G_D^‖, Σ_D)) whose components
come from a finite set such as COMP(F, ⟨g⟩) is itself finite. Now, let the set of
possible variants of expressions of COMP(F, ⟨g⟩) be denoted as VAR(F, ⟨g⟩). Because
of the finiteness of the symbol set Σ and of COMP(F, ⟨g⟩), it is obvious that VAR(F, ⟨g⟩)
is a finite set.

Lemma 3.2.6 The set S = {f ∈ CF(Φⁿ(G_D^‖, Σ_D)) | Comp(f) ⊆ COMP(F, ⟨g⟩)} is
finite.

Proof: To prove this lemma, given any element f of S, we first represent it as a tree
as mentioned in remark 3.2.1. Note that all the leaf-node expressions of Tree(f) are in
VAR(F, ⟨g⟩). Since f ∈ CF(Φⁿ(G_D^‖, Σ_D)), by lemma 3.2.3, for any (‖_{i=1}^m vi)^[[+C1]][B2][+C2]
∈ Sub(f), ∄ v_{i1}, v_{i2}, 1 ≤ i1, i2 ≤ m, i1 ≠ i2, such that v_{i1} mR v_{i2}. More loosely
speaking, it is never possible that {h_n | n ∈ L(v_{i1})} ⊆ {h_n | n ∈ L(v_{i2})} or vice versa, where
L(vi) denotes the set of leaf nodes of Tree(vi). Since VAR(F, ⟨g⟩) is finite, from
the above statement we can easily conclude that for any f ∈ CF(Φⁿ(G_D^‖, Σ_D)) the number
of leaf nodes |L(f)| of Tree(f) is bounded. The upper bound is computed as
follows. Let F denote a family of subsets of VAR(F, ⟨g⟩). For any F ∈ F, |F| denotes
the number of expressions in F. Then

|L(f)| ≤ max_{F = {F1, F2, …, Fr} | ∀ i ≠ j, Fi ⊈ Fj} ( Σ_{i=1}^r |Fi| ).

Since for any f ∈ S, |L(f)| is bounded, from the particular structure of CF(Φⁿ(G_D^‖, Σ_D))
it can be easily concluded that there is an upper bound on the length len(f) of f. This, in
turn, implies that S is finite.
□
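For a small variant set the bound used in the proof can be evaluated by brute force; the following sketch (illustrative only, and exponential in the number of subsets, so usable for toy sizes only) maximises Σ|Fi| over families of pairwise ⊈-incomparable subsets:

```python
from itertools import combinations

# Brute-force the bound from lemma 3.2.6: the maximum of sum |F_i| over
# families {F_1, ..., F_r} of subsets of a variant set with no F_i
# contained in an F_j for i != j.
def leaf_bound(var):
    subs = [frozenset(c) for r in range(1, len(var) + 1)
            for c in combinations(var, r)]
    best = 0
    for r in range(1, len(subs) + 1):
        for fam in combinations(subs, r):
            # keep only antichains: no subset relation between members
            if all(not (a <= b) for a in fam for b in fam if a is not b):
                best = max(best, sum(len(F) for F in fam))
    return best

print(leaf_bound({"x", "y", "z"}))   # 6: the three 2-element subsets
```

For three variants the maximum is attained by the family of all 2-element subsets, giving 6, which illustrates that the bound grows with the middle layers of the subset lattice rather than with the full power set.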
Finally, we state the main result about the boundedness of processes of A(⟨Φⁿ(G_D^‖, Σ_D)⟩).

Theorem 3.2.1 Any P in A(⟨Φⁿ(G_D^‖, Σ_D)⟩) is bounded under the postprocess computation
procedure described above.

Proof: From lemma 3.2.5 and lemma 3.2.6, it can be easily seen that SP ⊆ S, and hence
it is finite. Thus P is bounded.
□

3.3  Finiteness Characterisation : ⟨Φⁿ(H_D^‖, Σ_D)⟩

In the previous section it has been shown that ⟨Φⁿ(G_D^‖, Σ_D)⟩ is infinite, though the
DFRPs of A(⟨Φⁿ(G_D^‖, Σ_D)⟩) are all bounded. We now identify a more restricted subclass
of ⟨Φⁿ(G_D^‖, Σ_D)⟩ which is finite. To this end we remove both SCO and LCO[B] from
G_D.

Definition 3.3.1 Let H_D := G_D − {SCO, LCO[B] | B ⊆ Σ, B ≠ ∅}. Also let
⟨H_D⟩ := ⟨G_D⟩ − {⟨SCO⟩, ⟨LCO[B]⟩ | B ⊆ Σ, B ≠ ∅}.

In this section we shall show that ⟨Φⁿ(H_D^‖, Σ_D)⟩ is a finite set of functions. The
procedure is almost identical to the proof of boundedness of the DFRPs of A(⟨Φⁿ(G_D^‖, Σ_D)⟩),
and we shall present only a proof outline. The following definitions are introduced.
Definition 3.3.2 (Minimum Locally Present Alphabet:) MLP : Φⁿ(H_D^‖, Σ_D) → 2^Σ
is defined as MLP(f) := {σ | ∀ P ∈ Πⁿ_D, σ ∈ ⟨f⟩(P)(⟨⟩)}. It is computed as follows.

MLP(f) =
    A                     if f = STOP_A
    ∅                     if f = Pi
    MLP(g) ∪ C            if f = g^[+C]
    MLP(g) − B            if f = g^[[B]]
    MLP(g) ∪ C            if f = g^[[+C]]
    ⋃_{i=1}^m MLP(gi)     if f = ‖_{i=1}^m gi
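The case analysis defining MLP is a direct structural recursion. A Python sketch over a toy expression syntax follows (the tuple encoding is an assumption of the sketch; STOP_A contributes its full alphabet A, and a bare process variable Pi contributes nothing, since its initial alphabet varies with the argument process):

```python
# Sketch of MLP (definition 3.3.2) as a structural recursion over a toy
# expression type.
def mlp(f):
    tag = f[0]
    if tag == "STOP":
        return f[1]                  # STOP_A always offers A initially
    if tag == "P":
        return frozenset()           # a bare P_i guarantees nothing
    if tag in ("LCO+", "GCO+"):      # g^[+C] and g^[[+C]] both add C
        return mlp(f[1]) | f[2]
    if tag == "GCO-":                # g^[[B]] removes B
        return mlp(f[1]) - f[2]
    if tag == "PCO":                 # g1 || ... || gm: union of the parts
        return frozenset().union(*(mlp(g) for g in f[1]))
    raise ValueError(tag)

f = ("PCO", [("GCO+", ("P", 1), frozenset("ab")),
             ("GCO-", ("STOP", frozenset("bc")), frozenset("b"))])
print(sorted(mlp(f)))   # ['a', 'b', 'c']
```

Because only local syntactic information is consulted, MLP can be evaluated without computing any postprocess expression, which is what makes it usable inside the syntactic transformation C1 below.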

As in definition 3.2.2, we define two subsets U and V of the set of functional symbols
Φⁿ(H_D^‖, Σ_D), which are characterised by the particular order in which the LCOs and the
GCOs are arranged in any functional symbol. In a sense they are canonical forms of
syntactic process expressions.


Definition 3.3.3 (U and V) :

U ⊆ Φⁿ(H_D^‖, Σ_D) is defined as:
U := {Pi, Pi^[[B]], Pi^[[B]][[+C1]], Pi^[[B]][[+C1]][+C2], Pi^[[+C1]], Pi^[[+C1]][+C2], Pi^[+C2]
| 1 ≤ i ≤ n; B, C1, C2 are nonempty subsets of Σ} ∪ {STOP_A | A ⊆ Σ}.

V ⊆ Φⁿ(H_D^‖, Σ_D) is defined recursively as follows.
(a) {(‖_{i=1}^m ui), (‖_{i=1}^m ui)^[[+C1]], (‖_{i=1}^m ui)^[[+C1]][+C2], (‖_{i=1}^m ui)^[+C2] | u1, …, um ∈
U, m > 1, and C1, C2 are nonempty subsets of Σ} ⊆ V.
(b) For m > 1, v1, …, vm ∈ U ∪ V ⟹ (‖_{i=1}^m vi), (‖_{i=1}^m vi)^[[+C1]], (‖_{i=1}^m vi)^[[+C1]][+C2],
(‖_{i=1}^m vi)^[+C2] ∈ V, where C1, C2 are nonempty subsets of Σ.
For any functional symbol f ∈ U ∪ V, a set of functional symbols is defined recursively
as follows.

Definition 3.3.4 (Child Expressions:) For f ∈ U ∪ V,

Child(f) :=
    {f}                               if f ∈ U
    {f} ∪ ⋃_{i=1}^m Child(vi)         if f = (‖_{i=1}^m vi)^[[+C1]][+C2] ∈ V

Here v1, …, vm are called the immediate children of f.


Remark 3.3.1 Given any f ∈ U ∪ V, the set Child(f) can also be represented in a tree
structure where each node of the tree contains a subexpression of f, as follows. The root
node contains the expression f itself. If f ∈ U, the tree contains only the root node and
nothing else. On the other hand, recursively, if any node contains some f ∈ V whose
immediate children are v1, …, vm, then that node will have m children nodes, one
for each vi, 1 ≤ i ≤ m. Tree(f) denotes the tree corresponding to the expression
f. Node(f) denotes the set of nodes of Tree(f) and L(f) ⊆ Node(f) denotes the set of
leaf nodes of the tree. For any n ∈ Node(f), h_n represents the expression contained in
n. Clearly n ∈ L(f) ⟹ h_n ∈ U. It can be easily seen that, in the absence of LCO[B], the
tree structure mentioned here is a special case of the one presented in the previous section.
This is because, if we apply definition 3.2.3 on the elements of Φⁿ(H_D^‖, Σ_D), any variant
expression will belong to U.
expression will belong to U.
Example 3.3.1 Consider f = ((P_i^[[B₁]] ‖ P_j)^[+C₁] ‖ P_k^[+C₃])^[[+C₂]] ‖ P_l. Then Child(f) := {f, ((P_i^[[B₁]] ‖ P_j)^[+C₁] ‖ P_k^[+C₃])^[[+C₂]], P_l, (P_i^[[B₁]] ‖ P_j)^[+C₁], P_k^[+C₃], P_i^[[B₁]], P_j}.
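The recursion of Definition 3.3.4 and the tree view of Remark 3.3.1 can be sketched in code. The representation below is illustrative only and not from the thesis: atoms of U are plain strings, while PCO expressions of V are tuples carrying their arguments and any trailing change-operator suffix.

```python
# Illustrative sketch (not from the thesis): expressions in U are strings,
# expressions in V are tuples ("PCO", [v1, ..., vm], suffix), where `suffix`
# records any trailing [[+C1]][+C2] change operators.

def child(f):
    """Child(f) per Definition 3.3.4: f itself plus, recursively,
    the child sets of its immediate children when f lies in V."""
    if isinstance(f, str):          # f in U: only the expression itself
        return [f]
    _, args, _ = f
    out = [f]
    for v in args:                  # immediate children v1..vm
        out.extend(child(v))
    return out

def leaves(f):
    """Expressions at the leaf nodes of Tree(f); these always lie in U."""
    return [g for g in child(f) if isinstance(g, str)]

# Example 3.3.1, with the atomic subexpressions written as plain strings:
inner = ("PCO", ["Pi^[[B1]]", "Pj"], "[+C1]")
mid   = ("PCO", [inner, "Pk^[+C3]"], "[[+C2]]")
f     = ("PCO", [mid, "Pl"], "")
assert leaves(f) == ["Pi^[[B1]]", "Pj", "Pk^[+C3]", "Pl"]
```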
In order to show that <Π^n(H_D, ‖_D)> is finite, we shall construct a syntactic transformation C over Π^n(H_D, ‖_D), which identifies and converts many semantically equivalent expressions of Π^n(H_D, ‖_D) into a single expression while preserving the semantics. We then show that C(Π^n(H_D, ‖_D)) is a finite set. Since semantics is preserved, the set of functions <Π^n(H_D, ‖_D)> is identical to <C(Π^n(H_D, ‖_D))>. Since the latter is finite by the finiteness of the corresponding symbol set (syntax) C(Π^n(H_D, ‖_D)), the former is also finite. The transformation C is composed of two semantics-preserving transformations C₁ and C₂.
It should be remembered that in the previous section the transformation C_F was designed for a particular DFRP with realisation (F, <g>). Local information derived from F was used in designing the transformation. Naturally, it was semantics preserving for the processes P = F(P), but not for arbitrary processes in Π^n_D. In order to show the finiteness of a function space, however, we have to construct syntactic transformations which are semantics preserving in general.

The first transformation C₁ is almost identical to the one defined in definition 3.2.1 and converts any expression into some expression in U ∪ V while preserving the semantics. It also takes care of redundancies arising out of the knowledge of minimum alphabets. Note that, instead of the information from F, it uses M^L_P(·) to remove redundancies in LCO[+C].
Definition 3.3.5 (Transformation C₁:) Formally, C₁ : Π^n(H_D, ‖_D) → Π^n(H_D, ‖_D) is defined recursively as follows.

C₁(f) := f for f = STOP_A or P_i.

C₁(f₁ ‖ ⋯ ‖ f_m) := C̄₁(f₁) ‖ ⋯ ‖ C̄₁(f_m), where for any f ∈ Π^n(H_D, ‖_D), C̄₁(f) is the expression obtained by removing the outermost parentheses (if any) of C₁(f); in particular C̄₁(f) denotes the same function as C₁(f).

C₁(f^[[B]]) := STOP_∅ if M_A(C₁(f)) ∖ B = ∅. Otherwise

    C₁(f^[[B]]) :=
        STOP_{A∖B}                          if C₁(f) = STOP_A
        P_i^[[B]]                           if C₁(f) = P_i
        ‖ᵢ₌₁ᵐ C̄₁(fᵢ^[[B]])                 if C₁(f) = ‖ᵢ₌₁ᵐ fᵢ
        g^[[B′ ∪ (B ∩ M_A(C₁(f)))]]         if C₁(f) = g^[[B′]]
        (C̄₁(g^[[B]]))^[[+(C′∖B)]]          if (C₁(f) = g^[[+C′]]) ∧ (M_A(g) ∖ B ≠ ∅)
        STOP_{C′∖B}                         if (C₁(f) = g^[[+C′]]) ∧ (M_A(g) ∖ B = ∅)
        (C̄₁(g^[[B]]))^[+(C′∖B)]            if (C₁(f) = g^[+C′]) ∧ (M_A(g) ∖ B ≠ ∅)
        STOP_{C′∖B}                         if (C₁(f) = g^[+C′]) ∧ (M_A(g) ∖ B = ∅)

If for some instance of C′ and B, C′ ∖ B = ∅, the corresponding LCO [+(C′∖B)] or GCO [[+(C′∖B)]] will be absent from the expression.


C₁(f^[[+C]]) := C₁(f) if C ⊆ M_P(C₁(f)). Otherwise

    C₁(f^[[+C]]) :=
        STOP_{A∪C}                          if C₁(f) = STOP_A
        P_i^[[+C]]                          if C₁(f) = P_i
        (‖ᵢ₌₁ᵐ fᵢ)^[[+(C∖M_P(C₁(f)))]]      if C₁(f) = ‖ᵢ₌₁ᵐ fᵢ
        g^[[B′]][[+(C∖M_P(C₁(f)))]]         if C₁(f) = g^[[B′]]
        g^[[+(C′ ∪ (C∖M_P(C₁(f))))]]        if C₁(f) = g^[[+C′]]
        (C̄₁(g^[[+C]]))^[+(C′∖C)]           if C₁(f) = g^[+C′]

Again, for some C′, C, if C′ ∖ C = ∅, the corresponding LCO [+(C′∖C)] will be absent from the expression.
C₁(f^[+C]) := C₁(f) if C ⊆ M^L_P(C₁(f)). Otherwise

    C₁(f^[+C]) :=
        STOP_{A∪C}                          if C₁(f) = STOP_A
        P_i^[+C]                            if C₁(f) = P_i
        (‖ᵢ₌₁ᵐ fᵢ)^[+(C∖M^L_P(C₁(f)))]      if C₁(f) = ‖ᵢ₌₁ᵐ fᵢ
        g^[[B′]][+(C∖M^L_P(C₁(f)))]         if C₁(f) = g^[[B′]]
        g^[[+C′]][+(C∖M^L_P(C₁(f)))]        if C₁(f) = g^[[+C′]]
        g^[+(C′ ∪ (C∖M^L_P(C₁(f))))]        if C₁(f) = g^[+C′]

Formally, the properties of C₁ are described in the following lemma.

Lemma 3.3.1 For h ∈ Π^n(H_D, ‖_D), f = C₁(h) satisfies the following:
(a) f ∈ U ∪ V;
(b) f′ ∈ Child(f) ∩ U and f′ = P_i^[[B]][[+C₁]][+C₂] implies C₁ ∩ C₂ = ∅;
(c) f′ ∈ Child(f) ∩ V and f′ = (‖ᵢ₌₁ᵐ vᵢ)^[[+C₁]][+C₂] implies (C₁ ∩ M_P(v₁ ‖ ⋯ ‖ v_m) = ∅) ∧ (C₂ ∩ M^L_P((‖ᵢ₌₁ᵐ vᵢ)^[[+C₁]]) = ∅);
(d) f = C₁(f);
(e) h ≡ f.

The proof of this lemma is similar to that of lemma 3.2.1.


k
As in the previous section, a binary relation m0R is defined below on C1 (n (HD ,
D ))
which can identify redundant arguments joined by P CO symbol. Its definition is quite
similar to the one defined in definition 3.2.4.
Definition 3.3.6 For two elements g₁, g₂ ∈ C₁(Π^n(H_D, ‖_D)), g₂ merges into g₁ (written g₁ m′_R g₂) if one of the following conditions is satisfied. The result of the merging operation, denoted as M′_m(g₁, g₂), is also indicated.

(i) g₁ = STOP_A and g₂ = STOP_B. Then M′_m(g₁, g₂) := STOP_{A∪B}.

(ii) g₁ = STOP_Σ. Then M′_m(g₁, g₂) := STOP_Σ.

(iii) g₁ = h^[[+C₁]][+C₂] and g₂ = h^[[+C₁′]][+C₂′], where h = P_i or P_i^[[B]] or v₁ ‖ ⋯ ‖ v_m, B, C₁, C₂, C₁′, C₂′ ⊆ Σ, B ≠ ∅. Then M′_m(g₁, g₂) := h^[[+(C₁∪C₁′)]][+((C₂∪C₂′)∖(C₁∪C₁′))]. Note that a change operator with an empty parameter is absent, i.e., h^[[+∅]][+∅] = h.

(iv) (g₁ = h₁^[[+C₁′]][+C₂′]) ∧ (h₁ = v₁ ‖ ⋯ ‖ v_m) ∧ (g₂ is neither STOP_Σ nor h₁^[[+C₁″]][+C₂″] for any C₁″ and C₂″) ∧ ({P_i^[[B]] such that P_i^[[B]][[+C₁]][+C₂]] ∈ Child(g₂), B, C₁, C₂ ⊆ Σ} ⊆ {P_i^[[B]] such that P_i^[[B]][[+C₁]][+C₂]] ∈ Child(g₁), B, C₁, C₂ ⊆ Σ}). Then M′_m(g₁, g₂) := (h₁ ‖ ḡ₂)^[[+(C₁′∖M_P(g₂))]][+(C₂′∖M^L_P(g₂))].

Lemma 3.3.2 For g₁, g₂ ∈ C₁(Π^n(H_D, ‖_D)), g₁ m′_R g₂ implies (a) (g₁ ‖ g₂) ≡ M′_m(g₁, g₂), (b) M′_m(g₁, g₂) ∈ C₁(Π^n(H_D, ‖_D)), and (c) m′_R is reflexive and transitive, i.e., g m′_R g, and if g₁ m′_R g₂ and g₂ m′_R g₃ then g₁ m′_R g₃.

Proof of this lemma is similar to that of lemma 3.2.2.
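The simplest cases of the merging relation can be sketched as follows. The encoding of STOP_A by its alphabet and the helper name `merges` are assumptions of this sketch, and the structural cases (iii) and (iv) are omitted.

```python
# A hedged sketch of the easy cases (i) and (ii) of Definition 3.3.6.
# STOP processes are modelled by their alphabets; SIGMA is the full alphabet.

SIGMA = frozenset("abc")

def merges(g1, g2):
    """Return the merged expression M'_m(g1, g2) when g2 merges into g1
    under cases (i)/(ii), else None. STOP_A is encoded as ("STOP", A)."""
    if g1 == ("STOP", SIGMA):                # case (ii): STOP_Sigma absorbs any g2
        return ("STOP", SIGMA)
    if g1[0] == "STOP" and g2[0] == "STOP":  # case (i): alphabets are united
        return ("STOP", g1[1] | g2[1])
    return None                              # cases (iii)/(iv) omitted here

assert merges(("STOP", frozenset("a")), ("STOP", frozenset("b"))) == ("STOP", frozenset("ab"))
```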


k
Next, we define the transformation C2 on C1 (n (HD ,
D )), that merges wherever pos[[+C1 ]][+C2 ]
sible among all subexpressions of the type h = (km
in a recursive way. The
i=1 vi )
definition of this transformation is similar to the one defined in definition 3.2.1. After application of the transformation, h will be modified into a unique expression (kri=1 gi )[[+C1 ]][+C2 ]
k
in C1 (n (HD ,
D )), while preserving the semantics.
Definition 3.3.7 (Transformation C₂:) The transformation C₂ : C₁(Π^n(H_D, ‖_D)) → C₁(Π^n(H_D, ‖_D)) is defined as follows.

    C₂(f) := f                        if f ∈ U
    C₂(f) := C₁(h^[[+C₁]][+C₂])       if f = (‖ᵢ₌₁ᵐ vᵢ)^[[+C₁]][+C₂] ∈ V

where h = Closure′(‖ᵢ₌₁ᵐ vᵢ) is computed as follows.

Algorithm: Closure′(‖ᵢ₌₁ᵐ vᵢ).

Step 1: Compute the set of expressions S as follows. For each vᵢ, compute C₂(vᵢ). If C₂(vᵢ) ∈ U, or C₂(vᵢ) = (‖ⱼ₌₁ʳ wⱼ)^[[+C₁]][+C₂] such that at least one of C₁, C₂ is nonempty, then for any node n in Node(C₂(vᵢ)) of Tree(C₂(vᵢ)), h_n ∈ S. Otherwise, if C₂(vᵢ) = ‖ⱼ₌₁ʳ wⱼ, then for each wⱼ and for any node n in Node(wⱼ), h_n ∈ S. Let S = {g₁, …, g_r}. (Note: S has been constructed by applying C₂ recursively on all arguments of the PCO and then collecting the syntactically distinct expressions.)

Step 2: Closure′(‖ᵢ₌₁ᵐ vᵢ) := g₁ if S = {g₁} is a singleton. Otherwise,

    Closure′(‖ᵢ₌₁ᵐ vᵢ) := ‖_{h ∈ M′(‖ᵢ₌₁ʳ gᵢ)} h

where M′ is the set of expressions obtained by a recursive merging procedure defined identically as in the earlier section (procedure M), except that here one uses the merging relation m′_R and the merging function M′_m(·, ·) instead of m_R and M_m(·, ·) respectively. (Note: Among the elements of S we apply the recursive merging procedure M′, so that in the resultant expression no further merging among arguments of the PCO is possible.)

End of Algorithm
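The merging step of Closure′ is a fixed-point computation: keep merging pairs until no pair of remaining arguments is related. A minimal sketch, with the concrete merging relation abstracted as a parameter and a toy relation standing in for m′_R:

```python
# A sketch of the recursive merging step used by Closure' (procedure M'):
# repeatedly merge any pair related by the merging relation until no pair
# of remaining arguments can be merged. The concrete relation is supplied
# as `merge(g1, g2) -> merged-or-None`; a toy relation on sets is used below.

def merge_fixpoint(exprs, merge):
    work = list(exprs)
    changed = True
    while changed:
        changed = False
        for i in range(len(work)):
            for j in range(len(work)):
                if i == j:
                    continue
                m = merge(work[i], work[j])
                if m is not None:
                    # replace the pair by its merge and restart the scan
                    work = [w for k, w in enumerate(work) if k not in (i, j)]
                    work.append(m)
                    changed = True
                    break
            if changed:
                break
    return work

# toy relation: sets merge when one contains the other (result: the larger set)
toy = lambda a, b: a if b <= a else None
assert merge_fixpoint([{1}, {1, 2}, {3}], toy) == [{3}, {1, 2}]
```

Lemma 3.3.3(a) below asserts that, for the real relation m′_R, the outcome of this loop does not depend on the order in which pairs are picked.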
Lemma 3.3.3 The transformation C₂ satisfies the following properties.
(a) Irrespective of the order of merging in the steps of M′, C₂ results in a unique expression.
(b) For h ∈ C₁(Π^n(H_D, ‖_D)), if f = C₂(h), then (i) f ∈ C₁(Π^n(H_D, ‖_D)), (ii) f ≡ h, and (iii) if f′ = (‖ᵢ₌₁ᵐ vᵢ)^[[+C₁]][+C₂] ∈ Child(f) then there exist no i, j ∈ {1, …, m}, i ≠ j, such that vᵢ m′_R vⱼ.

Proof of this lemma is similar to that of lemma 3.2.3.
Definition 3.3.8 (Transformation C:) Let C : Π^n(H_D, ‖_D) → Π^n(H_D, ‖_D) be such that C(h) := C₂(C₁(h)).
Lemma 3.3.4 For h ∈ Π^n(H_D, ‖_D), f = C(h) satisfies the following:
(a) f ∈ C₁(Π^n(H_D, ‖_D)),
(b) h ≡ f, and
(c) C(Π^n(H_D, ‖_D)) is a finite set.

Proof: Results (a) and (b) follow easily from lemmas 3.3.3 and 3.3.1. Result (c) follows from the argument presented below. For a finite Σ, U ⊆ C₁(Π^n(H_D, ‖_D)) is finite; let the number of expressions in it be N, which can be computed from |Σ| and n. Here |Σ| denotes the number of event symbols in Σ. Since, from lemma 3.3.3, for any h ∈ C(Π^n(H_D, ‖_D)) and (‖ᵢ₌₁ᵐ vᵢ)^[[+C₁]][+C₂] ∈ Child(h) there exist no v_{i₁}, v_{i₂}, 1 ≤ i₁, i₂ ≤ m, i₁ ≠ i₂, with v_{i₁} m′_R v_{i₂}, it is never possible that {h_n | n ∈ L(v_{i₁})} ⊆ {h_n | n ∈ L(v_{i₂})} or vice versa, where L(vᵢ) denotes the set of leaf nodes of Tree(vᵢ). Because if it were so, then v_{i₁} m′_R v_{i₂} would have been true. Together, we can conclude that for any h ∈ C(Π^n(H_D, ‖_D)) the number of leaf nodes |L(h)| of Tree(h) is bounded. Since |L(h)| is bounded, from the particular structure of C(Π^n(H_D, ‖_D)) it can easily be concluded that there is an upper bound on len(h). This, in turn, implies that C(Π^n(H_D, ‖_D)) is finite. □

Finally, we prove the finiteness of the function space <Π^n(H_D, ‖_D)>.
Theorem 3.3.1 <Π^n(H_D, ‖_D)> is a finite set of functions.

Proof: Since the set of functional symbols C(Π^n(H_D, ‖_D)) is finite (by (c) of lemma 3.3.4), the set of corresponding functions <C(Π^n(H_D, ‖_D))> is finite. From (b) of lemma 3.3.4 it follows that for every h in Π^n(H_D, ‖_D) there is an element f = C(h) in C₁(Π^n(H_D, ‖_D)) such that h ≡ f. From this we can conclude that <Π^n(H_D, ‖_D)> and <C(Π^n(H_D, ‖_D))> are identical sets of functions. Together, we can conclude that <Π^n(H_D, ‖_D)> is also a finite set of functions. □

3.4  Boundedness Characterisation : A(<Π^n(G_D, ;_D)>)

In this section, we prove that for the processes which are built without any application of PCO, boundedness is decidable. Following definition 3.1.2, let A(<Π^n(G_D, ;_D)>) be the set of DFRPs built with SCO and the different change operators, and A(<Π^n(H_D, ;_D)>) be the set of DFRPs built with SCO alone.

As in the case of A(<Π^n(G_D, ‖_D)>), we need to define a syntactic transformation D_F for the reduction of redundant symbols. The transformation D_F will be almost similar to C_F¹, except that here SCO will be used in place of PCO, over which all the change operators distribute. Also, as mentioned in relation to the computation of minimum alphabets, expressions joined by SCO symbols should be STOP and SKIP reduced. Because of the close similarity between D_F and C_F¹, instead of giving a complete definition of D_F, we just mention the points where the two transformations differ.
Given f ∈ Π^n(G_D, ;_D), D_F(f) is defined as follows. Let Σ_F(f), σ_F(f) and λ_F(f) denote, respectively, the alphabet, the set of possible initial events and the termination status of f under F.

If λ_F(f) = 1 then D_F(f) := SKIP_{Σ_F(f)}.
If λ_F(f) = 0 and σ_F(f) = ∅ then D_F(f) := STOP_{Σ_F(f)}. Otherwise:

D_F(f) = f if f = P_i.

D_F(f₁; ⋯; f_m) := T_;(D̄_F(f₁); ⋯; D̄_F(f_m)), where T_;(·) is the procedure of SKIP and STOP reduction mentioned before definition 3.1.4. Here D̄_F(·) is D_F(·) with its outermost parentheses removed, so that D̄_F(·) ≡ D_F(·).

D_F(f^[[B]]), D_F(f^[[+C]]), D_F(f^[B]) and D_F(f^[+C]) are defined recursively in a manner identical to that of C_F¹, except with the following differences.

(i) The transformation is not defined for expressions containing the PCO (‖) symbol.

(ii) If D̄_F(f) = f₁; ⋯; f_m, then
    D_F(f^[[B]]) := T_;(D̄_F(f₁^[[B]]); ⋯; D̄_F(f_m^[[B]])).
    D_F(f^[[+C]]) := D̄_F(f₁^[[+C]]); ⋯; D̄_F(f_m^[[+C]]).
    D_F(f^[B]) := (D̄_F(f₁^[B]); f₂; ⋯; f_m).
    D_F(f^[+C]) := (D̄_F(f₁^[+C]); f₂; ⋯; f_m).

The following example illustrates the transformation D_F.
Example 3.4.1 Let P = F(P) be the collection of the following equations, with P = (P₁, P₂, P₃):
P₁ = (a → <P₂>(P) | b → <P₃; P₂>(P) | c → <P₁>(P))_{{a,b,c,g},0}.
P₂ = (d → <P₂>(P) | e → <SKIP_∅>(P))_{{d,e,f},0}.
P₃ = (f → <P₁^[{c,g}]>(P) | e → <SKIP_∅>(P) | g → <P₂>(P))_{{e,f,g},0}.

Let f = (SKIP_∅; P₃; P₂^[{d}]; P₁)^[[{e}]][+{g,b}]. Then D_F(f) = (P₃)^[[{e}]][+{b}]; STOP_{{f}}. Here SKIP_∅; P₃ is reduced to P₃ before distribution of the LCO symbols over SCO. Also, after distribution, P₂^[[{e}]][{d}] is reduced to STOP_{{f}}, as the set of possible initial events becomes empty after application of the change operators. As a result, P₁^[[{e}]] is removed from the expression.

Definition 3.4.1 Let G₁ := {STOP_A | A ⊆ Σ} ∪ {SKIP_A | A ⊆ Σ} ∪ {P_i^[[B₁]][[+C₁]][B₂][+C₂] | B₁, B₂, C₁, C₂ ⊆ Σ}.
G₂ := {f₁; f₂; ⋯; f_m | m > 1, fᵢ ∈ G₁, and the expression is STOP and SKIP reduced}.
Lemma 3.4.1 For h ∈ Π^n(G_D, ;_D), X = F(X), and f = D_F(h), we have (a) f ∈ G₁ ∪ G₂; (b) f = D_F(f); and (c) <h>(X) = <f>(X).

Proof: Similar to that of lemma 3.2.1. □
The post process computation procedure for A(<Π^n(G_D, ;_D)>) also being almost identical to that of A(<Π^n(G_D, ‖_D)>), we only mention the differences. Given P ∈ A(<Π^n(G_D, ;_D)>) with realisation (F, <g>), the set of equations X = F(X), and h ∈ D_F(Π^n(G_D, ;_D)) such that σ_F(h) ≠ ∅, the post process h/σ is computed in a similar way as mentioned in definition 3.2.8, except that (i) D_F is used instead of C_F, (ii) it is not defined over processes containing the PCO symbol (‖), and (iii) if h = g₁; ⋯; g_p then h/σ := D_F((g₁/σ); g₂; ⋯; g_p). Finally, S_P ⊆ D_F(Π^n(G_D, ;_D)) is computed as follows: D_F(g) ∈ S_P, and if (h ∈ S_P) ∧ (σ ∈ σ_F(h)) then h/σ ∈ S_P.
Though A(<Π^n(H_D, ;_D)>) is definitely a subset of A(<Π^n(G_D, ;_D)>), we shall first prove that boundedness is decidable for processes of A(<Π^n(H_D, ;_D)>), by constructing a procedure to decide on boundedness. Later it will be shown how the same procedure can be used, with some modification, to decide on boundedness in the case of A(<Π^n(G_D, ;_D)>).
Lemma 3.4.2 Given any P ∈ A(<Π^n(H_D, ;_D)>) with realisation (F, <g>), if f ∈ S_P, then f will have one of the following structures: (i) STOP_A or SKIP_A, A ⊆ Σ, or (ii) P_i or P_i; STOP_A or P_i; SKIP_A, 1 ≤ i ≤ n, or (iii) P_{i₁}; ⋯; P_{i_m} or P_{i₁}; ⋯; P_{i_m}; STOP_A or P_{i₁}; ⋯; P_{i_m}; SKIP_A, where m > 1 and 1 ≤ i_j ≤ n, j = 1, …, m.

Proof: Straightforward from lemma 3.4.1. □

Lemma 3.4.3 For arbitrary P ∈ A(<Π^n(H_D, ;_D)>) with realisation (F, <g>), boundedness is decidable iff for any P ∈ A(<Π^n(H_D, ;_D)>) with realisation (F, <P_i>), 1 ≤ i ≤ n, boundedness is decidable.

Proof: (Only if:) Straightforward, as one can take <P_i> as <g>.

(If:) Suppose, for any P ∈ A(<Π^n(H_D, ;_D)>) with realisation (F, <P_i>), 1 ≤ i ≤ n, boundedness is decidable. We now have to prove that for arbitrary P ∈ A(<Π^n(H_D, ;_D)>) with realisation (F, <g>), boundedness is decidable. From lemma 3.4.2, as D_F(g) ∈ S_P, it can have one of the three types of structures mentioned there. If D_F(g) is either STOP_A or SKIP_A, boundedness is trivially decidable. If D_F(g) is any P_i, or P_i; STOP_A or P_i; SKIP_A, boundedness is decidable by the hypothesis of the proof. Finally, suppose D_F(g) has some structure of type (iii). Then boundedness of P can be decided as follows. Starting from j = 1 and going at most up to m, we repeat the following procedure. First we check the boundedness of the process with realisation (F, <P_{i_j}>), which is clearly decidable by hypothesis. If it is unbounded, P is unbounded. Otherwise, if it is bounded, we can easily decide whether its set of post-process expressions contains any SKIP_A, as that set is finite. If it does not, there is no need to check further and P can be termed bounded. Otherwise, we increment j and repeat the procedure. Since m is finite, in finitely many steps we can decide on the boundedness of P. □
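The decision loop in the proof can be sketched as follows, assuming the two oracles that the hypothesis of the lemma provides: `bounded(i)` decides boundedness of the process realised as (F, <P_i>), and `has_skip(i)` reports whether its (then finite) set of post-process expressions contains some SKIP_A. Both names are illustrative, not from the thesis.

```python
# A sketch of the decision loop from the proof of Lemma 3.4.3, assuming two
# oracles that exist by hypothesis: `bounded(i)` decides boundedness of the
# process realised as (F, <P_i>), and `has_skip(i)` reports whether its
# (then finite) set of post-process expressions contains some SKIP_A.

def decide(indices, bounded, has_skip):
    """Decide boundedness of (F, <P_i1; ...; P_im>)."""
    for i in indices:                # j = 1, ..., m in the proof
        if not bounded(i):
            return False             # some factor unbounded => P unbounded
        if not has_skip(i):
            return True              # this factor never terminates, so the
                                     # later factors are never reached
    return True                      # all factors bounded

# toy oracles: process 2 is bounded and never terminates, process 3 is unbounded
assert decide([1, 2, 3], bounded=lambda i: i != 3, has_skip=lambda i: i != 2)
```

Note how the early `return True` on a never-terminating factor is what keeps the loop from ever consulting the (unbounded) third factor.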
In view of lemma 3.4.3, to prove decidability of boundedness of arbitrary processes, we now need a procedure which will be able to determine in finite time whether an arbitrary process P ∈ A(<Π^n(H_D, ;_D)>) with realisation of the form (F, <P_i>), 1 ≤ i ≤ n, is bounded or not. For any process P, it will either compute the set S_P if it is finite (i.e., P is bounded) or declare P as unbounded, and it terminates in finitely many steps. The procedure is based on the following theorem.
Definition 3.4.2 Given some process P ∈ A(<Π^n(H_D, ;_D)>) with realisation of the form (F, <P_i>), an expression P_j is said to be a derivative of P_i if there exists some post process expression beginning with P_j in the set S_{<P_i>(X)}, where X = F(X). Trivially, P_i is its own derivative.
Theorem 3.4.1 Given some process P ∈ A(<Π^n(H_D, ;_D)>) with realisation of the form (F, <P_i>), P will be unbounded iff there exists some derivative P_j of P_i such that, for some m ≥ 2 and W = P_{j₂}; ⋯; P_{j_m}, either P_j; W̄ or P_j; W̄; SKIP_B (where W̄ = W) belongs to S_{<P_j>(X)}.

Proof: (If:) Suppose P_j is a derivative of P_i after some event sequence s, i.e., P/s = <P_j>(X) or <P_j; Ȳ>(X) for some expression Y. Also, by the hypothesis of the sufficiency condition, for some event sequence t, let <P_j>(X)/t = <P_j; W̄>(X) (or <P_j; W̄; SKIP_B>(X)). Clearly then, for every positive integer n, s^tⁿ ∈ tr P. Also, for each n, the expression P_j; (W̄)ⁿ; Ȳ (or P_j; (W̄)ⁿ, or P_j; (W̄)ⁿ; SKIP_B if Ȳ is absent) will be present in S_P, and P will be unbounded. It should, however, be noted that the presence of P_j; W̄; STOP_B in S_{<P_j>(X)} does not indicate unboundedness, as, in this case, even though s^tⁿ ∈ tr P for all n, the corresponding post process expression is always P_j; W̄; STOP_B.

(Only if:) A formal proof of the necessity part of the theorem will be given after describing the procedure. □
Boundedness Checking Procedure: An informal description
The above theorem motivates us to suggest the following procedure to check boundedness of processes. Given any process P, we construct a tree whose nodes contain different post-process expressions of P = <P_i>(X). While building the tree, at each step we check whether the equivalent condition of unboundedness, as mentioned above, is satisfied. If the process is bounded, the algorithm will terminate in finite time after generating the set S_P. On the other hand, if it is unbounded, then also in finite time the algorithm will detect the satisfaction of the sufficient condition and will terminate after declaring P as unbounded.

The root node contains the expression P_i. For any node N of the tree, let N.E denote the expression contained in N. R denotes the root node of the tree, and R.E = P_i if <g> = <P_i> for some i. For some node N′ ≠ N, R ≺ N′ ≺ N denotes that N′ is a node which is different from N and lies in the path from the root node to the node N. N and N′ are called descendants of R. At any node, whenever N.E is different from STOP_A or SKIP_A, for each σ ∈ σ_F(N.E) we compute the expression (N.E)/σ and put it in a new node N_new. To understand where this new node should be attached in the tree, we have to consider the different possibilities that arise as a result of cross combination between N.E and P_j/σ, where P_j is the first (or only) process symbol in N.E. The resultant expression (N.E)/σ (i.e., N_new.E) for the different cases is tabulated in Table 3.1. Here the rows (1)-(4) indicate the various possibilities for N.E, whose first term is P_j. The columns (a)-(f) correspond to the different cases for P_j/σ. For example, the combination (d) vs (2) indicates that if P_j/σ is P_l; Ȳ; SKIP_B then (P_j; STOP_A)/σ is P_l; Ȳ; STOP_A, where a SKIP reduction has been performed. The other entry within parentheses in the table indicates at which node this new node should be connected, provided the expression in the new node has not already occurred in some child of the indicated node. In most cases it is connected to the node N itself. Our intention is to build the tree in such a way that, along any path in the tree, whenever we find a repetition of the first term and the length of the expression in the descendant node is larger, unboundedness is inferred. To ensure this, we connect the new node either to the root node R or to some node N★ in the path from R to N, in the following way.
Combinations (a)-(d) vs (1)-(4): In these cases, after the occurrence of some event σ in the process <P_j>(X), the process does not terminate, and the length of N_new.E is at least equal to that of N.E. In these cases (except the case (b) vs (1)), N_new is attached to N itself. In the case of (f) vs (1) also, N_new is attached to N. In all these cases N_new is termed a primary child of N.

Combinations (b) vs (1) and (e) vs (1)-(4): In the case of (b) vs (1), though there is an increase in length of expression from N.E to N_new.E, a repetition of the first term (i.e., from some P_l to P_l; SKIP_B) does not indicate unboundedness. Here N_new is attached to the root node, indicating that expansion of the tree starting from that node is necessary in order to arrive at some conclusion about unboundedness of the system. The same is true for the cases (e) vs (1)-(4), when <P_j>(X)/σ results in a post process expression Z̄ ending with STOP_B. Here, after substitution of P_j/σ in N.E, we get N_new.E as the expression Z̄ itself. As stated in the proof of theorem 3.4.1, a repetition in the first term in these cases does not indicate unboundedness. Here also N_new is attached to the root node. In all these cases N_new is termed a secondary child of R.

Combinations (f) vs (2)-(4): Clearly in these cases, the length of N_new.E is less than that of N.E, as <P_j>(X), where P_j is the first element of N.E, terminates. Further, for all the nodes preceding N whose expression lengths are equal to that of N.E (that is, for the nodes between N★ and N, excluding N★), the processes corresponding to the first elements of the corresponding expressions terminate. In these cases, we have to look for repetition between the first elements of expressions of nodes preceding (and including) N★ and that of N_new. Note that for all the nodes between N★ and N, excluding N★, the expressions following the first element will be identical and equal to N_new.E. To facilitate checking, we attach N_new to N★, indicating that the tree is to be expanded along that path also. Obviously, before attaching N_new to any node it is to be ensured that no child of that node already bears the same expression as that of N_new. In these cases also, N_new is termed a secondary child of N★.
The above procedure of attaching the nodes as primary or secondary children of existing nodes ensures the following. (i) Along any path of the tree, if any node N bears an expression N.E beginning with symbol P_l, and for any other node N′, R ≺ N′ ≺ N, N′.E begins with symbol P_j, then P_l is a derivative of P_j. In other words, along any path of the tree, no process corresponding to the first element of the expressions in the nodes of that path (except the case when the node bears an expression SKIP_A, for some A) has terminated. (ii) The lengths of the expressions at the nodes are non-decreasing along any path. (iii) Finally, it is also ensured that any repetition in the first element of expressions of the nodes, along any path, with a strictly increased length of the later expression (barring the case of repetition of the first term between P_l and P_l; SKIP_B for any P_l) is equivalent to the satisfaction of the sufficient condition of unboundedness.

(1) N.E = P_j:         (a) P_l (N) | (b) P_l; SKIP_B (R) | (c) P_l; Ȳ (N) | (d) P_l; Ȳ; SKIP_B (N) | (e) Z̄ (R) | (f) SKIP_B (N)
(2) N.E = P_j; STOP_A: (a) P_l; STOP_A (N) | (b) P_l; STOP_A (N) | (c) P_l; Ȳ; STOP_A (N) | (d) P_l; Ȳ; STOP_A (N) | (e) Z̄ (R) | (f) STOP_A (N★)
(3) N.E = P_j; SKIP_A: (a) P_l; SKIP_A (N) | (b) P_l; SKIP_A (N) | (c) P_l; Ȳ; SKIP_A (N) | (d) P_l; Ȳ; SKIP_A (N) | (e) Z̄ (R) | (f) SKIP_A (N★)
(4) N.E = P_j; W:      (a) P_l; W (N) | (b) P_l; W (N) | (c) P_l; Ȳ; W (N) | (d) P_l; Ȳ; W (N) | (e) Z̄ (R) | (f) W (N★)

Here the columns are (a) P_j/σ = P_l, (b) P_j/σ = P_l; SKIP_B, (c) P_j/σ = P_l; Ȳ, (d) P_j/σ = P_l; Ȳ; SKIP_B, (e) P_j/σ = Z̄, (f) P_j/σ = SKIP_B, and
W = P_{j₂}; ⋯; P_{j_m} or P_{j₂}; ⋯; P_{j_m}; STOP_A or P_{j₂}; ⋯; P_{j_m}; SKIP_A.
Y = P_{l₂}; ⋯; P_{l_r}.
Z = STOP_B or P_l; STOP_B or P_l; Ȳ; STOP_B.
W̄ = W, and similarly for Ȳ and Z̄.
N★ is the node with R ≺ N★ ≺ N such that len(N★.E) < len(N.E) and, for every N′ ≠ N★ with N★ ≺ N′ ≺ N, len(N′.E) = len(N.E).

Table 3.1: Table of (N.E)/σ

It should be noted that, in case some child of the tree contains an expression of the type (e) (see Table 3.1), the checking of repetition in the path through it will commence from that child node, instead of the root node directly. This is because, if the child contains an expression of type (e) beginning with P_i, where the root node also contains P_i, the repetition, even with increased length, does not indicate unboundedness.
The tree is built using a recursive depth-first procedure. Before expanding the tree from any node N, we check the following. If N bears a STOP_B or SKIP_B type of expression, it cannot be expanded further and N is marked as expanded. Similarly, if N.E = N′.E for some N′ with R ≺ N′ ≺ N, a loop is indicated. In that case also the node is marked as expanded. Finally, barring the case of repetition of the first term between P_l and P_l; SKIP_B, if N.E has, as its first argument, some P_l such that for some N′, R ≺ N′ ≺ N, N′.E also begins with P_l but len(N′.E) < len(N.E), unboundedness is indicated. Otherwise, for each event σ that is possible from <N.E>(X), N.E/σ is created and a new unexpanded node N_new, bearing N.E/σ, is attached to N, N★ or R, as indicated in the table. Since the tree is built using a depth-first procedure, a parent node is termed expanded either when tree building is complete from all its children nodes, i.e., they have been termed expanded, or when, from some of its children nodes, unboundedness has been detected. The algorithm terminates after finding the status of the root node.

A formal description of the procedure, in the form of an algorithm, can be found in the Appendix.
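A much-simplified sketch of the procedure (assumed encoding; the SKIP/STOP bookkeeping, the primary/secondary-child distinction and the node N★ are all omitted): post-process expressions are tuples of process symbols, and along each depth-first path we stop on a loop and report unboundedness when the head symbol repeats with a strictly longer expression, as in theorem 3.4.1.

```python
# A much-simplified sketch of the tree-building check. A post-process
# expression is a tuple of process symbols; `step[p]` maps a symbol to the
# expressions reachable from <p> in one event (the empty tuple stands for
# termination). Along each path we stop on a loop and report unboundedness
# when the head symbol repeats with a strictly longer expression.

def unbounded(start, step):
    def dfs(expr, path):
        if not expr:
            return False                           # fully terminated
        for anc in path:
            if anc == expr:
                return False                       # loop: this path closes, bounded here
            if anc and anc[0] == expr[0] and len(expr) > len(anc):
                return True                        # head repeats, length grew
        for nxt in step[expr[0]]:
            if dfs(nxt + expr[1:], path + [expr]): # substitute for the head symbol
                return True
        return False
    return dfs(start, [])

# Example 3.4.3 with the SKIP/STOP branches dropped: P3 = e -> P3; P3 grows forever.
step = {"P1": [("P2",), ("P2", "P3")],
        "P2": [("P4",), ()],
        "P3": [("P3", "P3")],
        "P4": [("P4",)]}
assert unbounded(("P1",), step)
```

The `step` map and the simplifications are assumptions of this sketch; the full procedure in the Appendix also has to track where each new node is attached, which this version sidesteps by comparing against every ancestor on the path.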
In the following examples we apply the above procedure to build the tree from the recursive descriptions and decide on their boundedness.

Example 3.4.2 Consider the process P with realisation (F, <P₁>), where P = (P₁, …, P₅) = F(P) is given by the following collection of equations:
P₁ = (a → <P₅>(P) | b → <P₂; P₃>(P))_{{a,b},0}
P₂ = (c → <P₄>(P) | d → <SKIP_∅>(P))_{{c,d},0}
P₃ = (e → <P₃>(P) | f → <P₄>(P))_{{e,f},0}
P₄ = (g → <P₄>(P) | h → <P₃; STOP_∅>(P))_{{g,h},0}
P₅ = (i → <P₂; P₂>(P))_{{i},0}
The tree generated by the above procedure for this example (see Fig. 3.1) shows P to be a bounded process.

Example 3.4.3 Let P = (F, <P₁>), where P = (P₁, …, P₄) = F(P) is given by:
P₁ = (a → <P₂>(P) | b → <P₂; P₃>(P))_{{a,b},0}
P₂ = (c → <P₄>(P) | d → <SKIP_∅>(P) | e → <P₁; STOP_∅>(P))_{{c,d,e},0}
P₃ = (e → <P₃; P₃>(P))_{{e},0}
P₄ = (g → <P₄>(P))_{{g},0}
The tree generated by the above procedure for this example (see Fig. 3.2) shows P to be an unbounded process.
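The unbounded behaviour of example 3.4.3 can also be seen directly through theorem 3.4.1: the only move of P₃ is P₃ --e--> P₃; P₃, so repeated occurrences of e produce strictly growing post-process expressions (the P_j; (W̄)ⁿ pattern), and S_P cannot be finite. A small sanity check of that growth:

```python
# The only move of P3 in example 3.4.3 is  P3 --e--> P3; P3,  so after n
# occurrences of `e` the post-process expression is P3 repeated n+1 times,
# i.e. the P_j; (W)^n pattern of Theorem 3.4.1. Hence S_P cannot be finite.

expr = ("P3",)
sizes = []
for _ in range(4):                 # fire event e four times
    expr = ("P3", "P3") + expr[1:] # substitute P3; P3 for the leading P3
    sizes.append(len(expr))
assert sizes == [2, 3, 4, 5]       # strictly growing post-process expressions
```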
Finally we prove the necessity part of the theorem 3.4.1.
Proof: ( Only if:) We assume the contrary, i.e., for any Pj reachable from Pi ,
nor Pj ; W;
SKIPB belongs to S<P >(X) for any W = Pj2 ; ; Pjm
neither Pj ; W
j

with m 2, 1 j2 , , jm n (where W = W). We have to show that P is bounded


in this case.
We prove the above by showing that the tree building procedure mentioned above,
always terminates in a finite number of steps and if the above hypothesis is satisfied, then
the termination takes place only after generation of all possible post-process expressions
(which is obviously finite because of termination of the algorithm in finite steps) of P .
Hence P would be bounded.
First we prove that the depth of the tree is finite. For this let us consider the path
{N0 , , NL } in the tree, where N0 = R. There are two possibilities. (1) If N1 is a
primary child of R, i.e., N1 .E is different from any of Pl ; SKIPB , Pl ; ST OPB or
Pl ; Pl2 ; ; Plr ; ST OPB , then the checking for boundedness (i.e., checking for repetitions
in first term of expression of nodes) starts from N0 itself. (2) Otherwise N1 is a secondary
child of R and checking starts from the node N1 . In both cases we first show that there
is an upper bound on L. In the tree construction procedure, since none of the processes
corresponding to the first elements of Ni .E, 1 i L has terminated (except the case when
NL .E is SKIPA for some A) if there is a repetition in the first elements of Ni .E and
Nj .E, 0 i < j L, (i strictly greater that 0 in the second case) it must either indicate
a loop (i.e., Ni .E = Nj .E) or len(Ni .E) will be strictly greater than len(Nj .E) and it will
indicate unboundedness (except the case of repetition of the first term between Pl and
Pl ; SKIPB ) by satisfying the condition of unboundedness mentioned in theorem 3.4.1.
In either case the tree is not expanded further. Under the hypothesis of this proof, the
second case is ruled out. So each path will terminate either in a loop or with a SKIP

3.4. BOUNDEDNESS CHARACTERISATION : A(< (G;D , D ) >)

75

 




9

 P1
 
H

H
Q
Q HH
Q HH
Q
Q HH
HH
Q
Q
HH
QQ
s



HH

j




P5

 XX 
XXX

X
z
X




P2 ; P2

P2




 B

B

B

BBN




 ?

P4 SKIP{}




P3 ; ST OP{}




P4 ; P2






P3 ; ST OP{}

? 


P4




?


P4 ; P2

P2 ; P3

 ?

J
J
J
J
J

^





 



P3

 
B
 B

B
BN
 





P4 ; P3

P3

P4



 
 


P4 ; ST OP{}











P4 ; ST OP{}


?


P4 ; P3




STATUS = BOUNDED

Figure 3.1: Tree generated for example 3.4.2

?
  

P4
  

76

CHAPTER 3. BOUNDEDNESS ANALYSIS OF FRP

 
 P1 PPP

 Q
P

PP
 
A Q
PP

Q

PP 

A


Q
q
P

A
Q


P3 ; ST OP{}
Q

A

Q





U
A

Q
=


 



QQ

*
s


P3
P2 ; P3
  ST OP
P2
{}
A



 ?


A


B
*

A
P1 ; ST OP{}
B

A
B



AU
BBN
#


?
J







J
P
;
P

P
;
P

4
3
3
3
SKIP{}
P4
J
J







"
!
J



^

P2 ; ST OP{} P2 ; P3 ; ST OP{}

? 


P4








?




P4 ; P3 










P4 ; ST OP{}






=

P4 ; ST OP{}

P4 ; P3 ; ST OP{}

 XX
z
X
  X




P4 ; P3 ; ST OP{}

 







Unmarked nodes

Unboundedness is detected here.


STATUS = UNBOUNDED

Figure 3.2: Tree generated for example 3.4.3

3.4. BOUNDEDNESS CHARACTERISATION : A(< (G;D , D ) >)

77

expression. Since there are all together n distinct process symbols, namely P1 to Pn ,
the maximum value of L within which a loop will be indicated is n in case of the first
possibility and n + 1 in case of the second possibility.
Next we prove that the number of paths in the tree is also finite, i.e., each node may
have only a finite number of children. Clearly the number of primary children of any node
is bounded by the number of choices of events present in any of the equations of F. The
number of secondary children is also finite, as can be understood from the following. For
any N , different from R, a secondary child node is attached to N whenever the process
corresponding to the first element of the expression of any primary child node or some
other already-attached secondary child node of N , having an expression of length strictly
greater than that of N , terminates. In these cases, N is considered as N for that child
of N . The newly created secondary node contains the remaining expression. Thus if N
bears an expression Pj1 ; ; Pjm and some primary child node of N bears an expression,
say, Pl1 ; ; Plr , such that m < r, it may produce at most r m secondary children
for N , one each for each Pli ; ; Plr , 2 i (r m) + 1. Because of finiteness of
depth, length of expression of any primary child node is bounded. Clearly the number of
secondary children is then also finite. Next consider the root node. Other than the primary
children generated by events of F (Pi ), there are a finite number of cases ( (b) vs (1) and
(e) vs (1)-(4) from the table) when a node is attached to the root. Given any F, number
of this type of nodes is also finite. From all these nodes, once again, only finite number
of secondary children can be generated where R behaves as N for some of its children.
Thus, there is an upper bound on the number of children of any node.
From the above argument we find that the tree-building procedure always generates a finite tree. Along any path of the tree, either unboundedness is detected within a finite depth and the algorithm terminates, or the path ends with a loop or with some STOPA or SKIPA type expression. Since, by the hypothesis of this proof, unboundedness is detected along no path of the tree, the tree-building procedure can terminate only after exploring the set of all possible post-process expressions, namely SP . The finiteness of the tree guarantees that in this case SP will be finite. Hence P is bounded. □
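The depth-bounding argument above can be illustrated by a small search sketch. The following is not the thesis's construction: the tuple encoding of expressions and the `children` successor function are assumptions made purely for illustration.

```python
def explore(expr, children, seen=frozenset(), depth=0):
    """expr: tuple of process symbols; children(expr) -> successor expressions.
    Descend only while the first symbol of the expression has not occurred
    on the current path, so every path has bounded depth (n or n + 1)."""
    head = expr[0]
    if head in seen:          # repetition of the first element: loop detected
        return depth
    best = depth
    for child in children(expr):
        best = max(best, explore(child, children, seen | {head}, depth + 1))
    return best
```

With n distinct symbols, any path revisits a head symbol after at most n steps, which is exactly why the tree has finite depth.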
Theorem 3.4.2 Boundedness for arbitrary processes in A(< n (H;D , D ) >) is decidable.
Proof: From theorem 3.4.1 we find that there exists a procedure to decide (in finite time) whether an arbitrary process P ∈ A(< n (H;D , D ) >) with P =< Pi > (X), X = F(X), is bounded. Then, together with the procedure mentioned in the proof of lemma 3.4.3, we can always decide in finite time whether any process P ∈ A(< n (H;D , D ) >) with realisation (F, < g >) is bounded or not. Thus boundedness of this space is decidable. □

78

CHAPTER 3. BOUNDEDNESS ANALYSIS OF FRP

At the end we show how the same algorithm can be used to decide on boundedness of
processes of A(< n (G;D , D ) >).
Lemma 3.4.4 Given any P ∈ A(< n (G;D , D ) >) with realisation (F, < g >), f ∈ SP implies that f has one of the following structures: (i) STOPA or SKIPA , A ⊆ Σ, or (ii) f or f ; STOPA or f ; SKIPA , or (iii) f1 ; …; fm or f1 ; …; fm ; STOPA or f1 ; …; fm ; SKIPA , where each f or fi is some Pi[[B1]][[+C1]][B2][+C2] such that if any parameter of any change operator symbol is empty, then the corresponding change operator symbol is absent. Also m > 1 and 1 ≤ ij ≤ n, j = 1, …, m.
Proof: Straightforward from the post-process expression computation procedure defined earlier, as the change operator symbols distribute over the SCO symbol. □
Theorem 3.4.3 Boundedness for arbitrary processes in A(< n (G;D , D ) >) is decidable.
Proof: Given any P ∈ A(< n (G;D , D ) >) with realisation (F, < g >), the number of distinct symbols of the form Pi is finite and equal to n. Then, for a finite Σ, the number of distinct symbols f of the form Pi[[B1]][[+C1]][B2][+C2] (where B1 , C1 , B2 , C2 are subsets of Σ and, if any of them is empty, the corresponding change operator symbol is nonexistent in f ), that may appear in elements of post-process expressions of P , is also finite and equal to N (say). It is easy to see that the procedure designed for processes of A(< n (H;D , D ) >) can be used for this space also, except that here repetition in the first element of expressions means repetition in f and not just in the Pi symbols. Thus the tree depth will be bounded by N instead of n. Thus, for this space also, boundedness is decidable. □
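The counting step of this proof can be made concrete. The following sketch (an illustration, not part of the proof) bounds N by letting each of B1 , C1 , B2 , C2 range over all subsets of Σ, with the empty subset standing for an absent change operator:

```python
def decorated_symbol_bound(n, sigma_size):
    """Upper bound N on distinct symbols Pi[[B1]][[+C1]][B2][+C2]:
    n choices for Pi, and each of the four change-operator parameters
    ranges over the 2**sigma_size subsets of Sigma."""
    subsets = 2 ** sigma_size
    return n * subsets ** 4
```

The bound is crude but finite, which is all the decidability argument needs.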

3.5 Bounded Subclasses of A(< (GD , D ) >) : Some Examples

In the previous sections we have studied the boundedness property of two classes of DFRPs, namely A(< n (G‖D , D̃ ) >) and A(< n (G;D , D ) >). The modelling flexibility of both subclasses is restricted. The former does not include SCO and cannot model stacks, re-entrant code, etc., while the latter does not include PCO and cannot model concurrency. On the other hand, for processes of A(< n (GD , D ) >), where both operators can be used in an unrestricted fashion, modelling flexibility is assured, but several problems, including boundedness, are undecidable. This is expected and merely shows that the tractability of


problems in a modelling paradigm can be improved only at the cost of design flexibility, and vice versa. The problem, therefore, is to establish boundedness and decidability results for more general subclasses of A(< n (GD , D ) >), possibly by combining the subclasses presented here in a judicious manner, so as to be flexible enough to model a wide variety of real-world processes. Almost certainly such classes will be characterised by restricted use of both SCO and PCO. Below, a few such examples of bounded DFRPs are presented, where a good trade-off between modelling flexibility and tractability is obtained.
The Subclass B
As a first example, we present a subclass B of A(< (GD , D ) >), where, for any process Y ∈ B with realisation (F, < g >), the processes X satisfying X = F(X) can be partitioned into a number of bounded modules. As originally introduced in [21], for some DFRP Y =< g > (X), X = F(X), a subset X0 of X is called a module if X0 is mutually recursive and it cannot be written as the union of two disjoint strictly proper subsets X01 , X02 of X0 where both are mutually recursive. If X = F(X), then the subset of equations from F describing the processes of the module X0 can be expressed as X0 = F0 (X0 ). We define a module to be bounded if each process in it is bounded. In the following we present a formal definition of the subclass B. The problem of deciding the boundedness of a general module is obviously as complex as that of a DFRP, and hence undecidable. To guarantee boundedness we impose further restrictions on these modules, which are also mentioned below.
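Since a module is a minimal mutually recursive subset of X, it corresponds, under the assumption that mutual recursion coincides with cyclic dependency among the equations, to a strongly connected component of the dependency graph. The following sketch uses Tarjan's algorithm; the dict encoding of F's dependencies is an assumption of this illustration, not the thesis's formalism.

```python
def modules(deps):
    """deps: dict mapping each process name to the set of names appearing
    on the right-hand side of its equation. Returns the strongly connected
    components of the dependency graph (Tarjan's algorithm)."""
    index, low, on_stack, stack, comps = {}, {}, set(), [], []
    counter = [0]
    def visit(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in deps.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            comps.append(comp)
    for v in deps:
        if v not in index:
            visit(v)
    return comps
```

Each returned component is a candidate module whose equations X0 = F0 (X0 ) can then be tested for boundedness separately.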
Definition 3.5.1 B is the subset of A(< (GD , D ) >) such that any process Y ∈ B, with realisation (F, < g >), satisfies the following. The set of equations X = F(X) can be partitioned into p disjoint modules, p ≥ 1, such that
(a) X = (X1 , …, Xp ), where Xi = (X1i , …, Xmi i ) = Fi (Xi ). Each Fi is a collection of equations of the form
Xji = Fij (Xi ) = (σji1 < fji1 > (Xi ) | … | σjinij < fjinij > (Xi ))Aij ,0 .
Here Aij ⊇ {σji1 , …, σjinij }.
[Note: To maintain rigour, any fjik is written, after choosing the projection symbols suitably, as some f ∈ m (GD , D ), where m = m1 + … + mp and < fjik > (Xi ) = < f > (X). For example, the process X12 ; X32 can equivalently be expressed as either < P1 ; P3 > (X12 , X22 , X32 ) or < P3 ; P5 > (X11 , X21 , X12 , X22 , X32 ).]
(b) Any module either belongs to A(< (G‖D , D ) >) (where boundedness of each process


in the module is guaranteed), or it belongs to A(< (G;D , D ) >) along with the condition that each process in the module is bounded (which is decidable).
(c) Y = < g > (X1 , …, Xp ), where < g > can be any arbitrary function of < m (GD , D ) >.
Note that, in a very simple case, a process in B may have just a single module in it.
Lemma 3.5.1 For any Y ∈ B, Y is bounded.
Proof: By condition (b) of definition 3.5.1, each process Xji of X is bounded. However, to obtain a bounded implementation, a suitable post-process computation procedure should be used. In other words, for processes of modules in A(< (G‖D , D ) >), the syntactic transformation CF should be used in the computation of post-process expressions. Similarly, for processes of modules in A(< (G;D , D ) >), the syntactic transformation DF should be used. For Y = < g > (X), the post-process expressions of Y can be written as compositions of post-process expressions of the individual Xji . Since each individual Xji is bounded, it is easy to see that Y will have a finite set of post-process expressions and is hence bounded. □
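The final finiteness step admits a simple quantitative form; the sketch below assumes (as an illustration only) that the composition in the output equation is fixed and that component j has at most sizes[j] post-process expressions:

```python
from math import prod

def output_expression_bound(sizes):
    """If component j of a fixed composition has at most sizes[j]
    post-process expressions, the composed output process Y has at most
    their product, hence a finite set of post-process expressions."""
    return prod(sizes)
```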
Remark 3.5.1 In condition (b), for modules in A(< (G;D , D ) >), if individual processes are not constrained to be bounded, then the resulting structure is quite general. In [33], a Turing machine has been modelled using this type of structure, where Y is a parallel composition of a number of processes, one from each module, and each module is from A(< (G;D , D ) >), with processes not constrained to be bounded. Consequently, boundedness of Y is in general undecidable if condition (b) is relaxed.
Below, we present a typical example of a process from the subclass B.
Example 3.5.1 Let Y =< (P1 ; P7 )‖P10 > (X) = (X11 ; X12 )‖X13 , where X = (X11 , …, X61 , X12 , …, X32 , X13 , …, X53 ) and
X11 = (σ1 < (P2 ; P4[[{σ9}]] )[[+{σ10,σ11,σ12}]] > (X) | σ2 < (P3 ; P4[[{σ9}]] )[[+{σ10,σ11,σ12}]] > (X)){σ1,σ2,σ10,σ11,σ12},0
X21 = (σ3 < P2 > (X) | σ4 < SKIP{} > (X)){σ3,σ4},0
X31 = (σ5 < STOP{} > (X) | σ6 < SKIP{} > (X)){σ5,σ6},0
X41 = (σ7 < P5 > (X) | σ8 < P6 > (X) | σ9 < P4 > (X)){σ7,σ8,σ9},0
X51 = (σ10 < SKIP{} > (X) | σ9 < P5 > (X)){σ9,σ10},0
X61 = (σ11 < SKIP{} > (X) | σ12 < P6 > (X)){σ11,σ12},0
X12 = (σ11 < P8[[+{σ11,σ12}]] ‖P7 > (X) | σ12 < P9[[+{σ11,σ12}]] ‖P7 > (X)){σ11,σ12},0
X22 = (σ13 < P7 > (X)){σ13},0
X32 = (σ14 < P7 > (X)){σ14},0



X13 = (σ15 < (P11 ; P12 )[[+{σ10}]] > (X) | σ16 < SKIP{σ10} > (X)){σ10,σ15,σ16},0
X23 = (σ17 < P10[{σ15}] > (X) | σ18 < P11 > (X)){σ17,σ18},0
X33 = (σ19 < P13 > (X) | σ20 < P14 > (X)){σ19,σ20},0
X43 = (σ10 < SKIP{} > (X)){σ10},0
X53 = (σ21 < P13 > (X)){σ21},0

In example 3.5.1, the output process is a concurrent operation of two processes, X11 ; X12 and X13 , where the former is in turn a sequential operation of two processes, namely X11 and X12 . The first module (X11 , …, X61 ) and the third module (X13 , …, X53 ) are in A(< (G;D , D ) >), whereas the second module (X12 , …, X32 ) is in A(< (G‖D , D ) >). Clearly Y ∈ B.
Since the processes of B are bounded, a finite-state implementation is possible. Naturally, reachability, deadlock, liveness, language equivalence, etc. are also decidable. In many cases, each module represents a task. The i-th task is accomplished by the process X1i . Also, the output equation Y becomes a function of the processes {X11 , …, X1p }, i.e., Y =< g > (X11 , …, X1p ), and a task-level integrity is maintained [21].
Often, if-then-else type constructs among processes can be achieved in the output equation by blocking. For example, let X11 , X12 , X13 be three processes from three disjoint modules with the following properties. Let αX13 (<>) = A. For any s ∈ tr X11 , if X11 /s ≠ STOPB , then A ⊆ αX11 (s) but, for σ ∈ A, s^< σ > ∉ tr X11 . On the other hand, if X11 /s = STOPB , then B = ∅. Finally, ∀s ∈ tr X12 , A ⊆ αX12 (s) but, for σ ∈ A, s^< σ > ∉ tr X12 . Now Y = (X11 ; X12 )‖X13 will achieve the following: the task X11 will begin first; if it terminates successfully, X12 will take over; however, in case X11 terminates unsuccessfully, X13 will take over.
The Subclass H
The bounded subclass B has the limitation that it allows the combination of SCO and PCO only in the output equation Y =< g > (X). Since each task is modelled as a module, the single output equation represents a flat combination of tasks; it does not allow any explicit hierarchy among the tasks. However, any complex system can in general be modelled as a hierarchical combination of tasks, where each task consists of sequences of events interspersed with suitable subtasks. Next, we present a bounded subclass H of processes from A(< (GD , D ) >), wherein a hierarchical ordering among tasks is brought out explicitly.
The informal idea is as follows. Given any module X0 , there may exist some proper subset X0′ of X0 which is itself a module. Naturally, in this case Z0 := X0 \ X0′ cannot


be a module. Also, the processes of Z0 can be expressed as Z0 = F̂(Z0 , X0′ ), where F̂ is the set of equations from F representing the processes of Z0 . Since each module can be viewed as the representation of a task, the task X0 is here said to be placed hierarchically above the task X0′ . Also, the task X0′ can be viewed as an external input to the recursive process equations of Z0 . In a hierarchical characterisation, one or more output processes joining a number of bounded sub-modules (as in the case of B) are used as inputs to the rest of the equations of a module, which is placed hierarchically above these sub-modules.
To formally describe the subclass H, we use the concept of a hierarchy tree H = (V, `), which was originally introduced in [32]. Let V be a set of nodes and ` be a binary relation (called the hierarchy relation) over V satisfying the following properties:
(i) There exists a unique node, called the root node of the hierarchy tree H. It is denoted as r (= r(H)), and there does not exist any node v ∈ V such that v ` r. In other words, the root node does not have any parent.
(ii) For every node v ∈ V , v ≠ r, there exists a unique node u ∈ V such that u ` v. The node u is called the parent of v and v is called a child of u.
(iii) The nodes which do not have any child are called the leaf nodes of the hierarchy tree.
Clearly, H = (V, `) defines a tree. Let l denote the depth of the tree and Vj denote the set of all nodes at level j, 0 ≤ j ≤ l, where the root node is the unique node at level 0. The transitive closure of ` is denoted as `+ and the reflexive transitive closure as `∗ . Thus u `∗ v denotes that u is some ancestor of v (possibly v itself), and u `+ v denotes that u is some ancestor of v with u ≠ v.
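The hierarchy tree H = (V, `) and the relations `+ and `∗ admit a direct encoding; in the sketch below the tree is given as a child-to-parent map, which is an assumption of this illustration rather than the thesis's formalism.

```python
class HierarchyTree:
    def __init__(self, parent):
        """parent: dict mapping every non-root node to its unique parent."""
        self.parent = parent
        nodes = set(parent) | set(parent.values())
        roots = nodes - set(parent)
        assert len(roots) == 1, "a hierarchy tree has a unique root"
        self.root = roots.pop()
        self.nodes = nodes

    def children(self, u):
        return {v for v, p in self.parent.items() if p == u}

    def leaves(self):
        return {v for v in self.nodes if not self.children(v)}

    def ancestor(self, u, v, strict=True):
        """u `+ v when strict, u `* v otherwise."""
        if not strict and u == v:
            return True
        w = v
        while w in self.parent:
            w = self.parent[w]
            if w == u:
                return True
        return False

    def level(self, v):
        """Depth of v; the root is the unique node at level 0."""
        k = 0
        while v in self.parent:
            v = self.parent[v]; k += 1
        return k
```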
Definition 3.5.2 H is the class of processes Y from A(< (GD , D ) >) whose realisation (< g >, F) (with Y = < g > (X), X = F(X)) can be mapped onto some hierarchy tree H = (V, `) as follows.
(i) With each node v ∈ V , it is possible to associate an output process Y v and a unique subset of processes Xv from the set X satisfying the following:
(a) Y v = < g v > (Xv ) and Xv = Fv (Xv , Uv ), where Uv is the set of output processes associated with the children of v, i.e., Uv := {Y w | v ` w}. In case v is a leaf node, Uv is empty.
(b) The set of equations Xv = Fv (Xv , Uv ) can be partitioned into pv parts, pv ≥ 1, such that Xv = (Xv1 , …, Xvpv ), where Xvi ∩ Xvj = ∅ for vi ≠ vj, and Uv = (Uv1 , …, Uvpv ), where the Uvi need not be disjoint. For each i, 1 ≤ i ≤ pv , Xvi = (X1vi , …, Xmvi vi ) =


Fvi (Xvi , Uvi ).
For each i, Xvi ∪ (∪u|Y u ∈Uvi Zu ) is a module, where Zu is defined recursively as follows:
Zu := Xu if u is a leaf node; Zu := Xu ∪ (∪u`w Zw ) otherwise.
Each Fvi is a collection of equations of the form
Xjvi = Fjvi (Xvi , Uvi ) = (σjvi1 < fjvi1 > (Xvi , Uvi ) | … | σjvinvij < fjvinvij > (Xvi , Uvi ))Avij ,0 .
Here Avij ⊇ {σjvi1 , …, σjvinvij }. Also, for 1 ≤ k ≤ nvij , 1 ≤ j ≤ mvi , each < fjkvi > ∈ < (G;D , D ) >. However, < g v > is from < (GD , D ) >.

(ii) For v1 , v2 ∈ V , v1 ≠ v2 , Xv1 and Xv2 are disjoint. Also, ∪v∈V Xv = X. Finally, Y and Y r are identical processes, where r is the root node. In other words, < g > (X) and < g r > (Xr , Ur ) are the same process.
[Note: In the original realisation of the process Y , for the sake of rigour, no Y v is present explicitly in the right-hand side of any equation. Instead they appear as < g v > (Xv ).]
(iii) For any leaf node v ∈ V , if the associated set of equations of the i-th partition is Xvi = Fvi (Xvi ), then each Xjvi is bounded. Also, for any non-leaf node v ∈ V , if the associated set of equations of the i-th partition is Xvi = Fvi (Xvi , Uvi ), then, under the condition that all input processes of Uv are replaced by SKIP{} , each Xjvi is bounded.
Lemma 3.5.2 For any Y ∈ H, Y is bounded.
Proof: By the third condition of definition 3.5.2, in any leaf node each Xjvi is bounded. Note that this fact can be verified using the algorithm, presented earlier, for deciding boundedness of processes of A(< (G;D , D ) >). We can then easily conclude that Y v is bounded for each leaf node v.
Also, for any non-leaf node v ∈ V , if the associated set of equations of the i-th partition is Xvi = Fvi (Xvi , Uvi ), then, under the condition that all input processes of Uv are replaced by SKIP{} , each Xjvi is bounded. This fact can also be decided using the same algorithm. It is then obvious that, even if each input process from Uv is an arbitrary bounded process instead of SKIP{} , each Xjvi remains bounded. It can then


easily be concluded that, for any non-leaf node v, if each process of Uv is bounded, then Y v is bounded.
Combining the above two facts, we can easily conclude that Y = Y r is bounded. It should, however, be remembered that the post-process computation procedure involves the syntactic transformation DF . □
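The proof suggests a bottom-up decision procedure over the hierarchy tree. In the sketch below, the tree is given as a child-to-parent map and `bounded_with_skip_inputs` stands for the boundedness algorithm of A(< (G;D , D ) >) applied with the inputs of Uv replaced by SKIP{} ; both encodings are assumptions of this illustration.

```python
def hierarchy_bounded(parent, nodes, bounded_with_skip_inputs):
    """Return True iff every node passes the boundedness test of
    definition 3.5.2(iii), checking leaves before their ancestors."""
    def level(v):
        return 0 if v not in parent else 1 + level(parent[v])
    for v in sorted(nodes, key=level, reverse=True):   # deepest nodes first
        if not bounded_with_skip_inputs(v):
            return False
    return True
```

Checking deepest nodes first mirrors the proof: once every child output Y w is known to be bounded, the test for the parent is justified.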
Unlike the processes in B, the processes in H immediately allow a hierarchy of tasks, where, if u ` v, then the task Y v can be accomplished independently, but the task Y u , along with its own dynamics, uses Y v as a subtask.
The PCO has not been used explicitly in any process equation Xjvi = Fjvi (Xvi , Uvi ).
This is because the processes of Uvi are not necessarily from D̃, and we have so far been able to characterise boundedness of A(< n (G‖D , D̃ ) >) and not that of A(< n (G‖D , D ) >). However, if it can somehow be guaranteed that the processes of Uvi are from D̃, then, simultaneously for all 1 ≤ k ≤ nvij , 1 ≤ j ≤ mvi , < fjkvi > may belong to < (G‖D , D ) >.
Example 3.5.2 Consider the DFRP Y =< P1 ; P2 > (X), where X = (X1 , X2 , X3 ) is given by the process equations
X1 = (a < P1 ‖P2 > (X) | b < P3 > (X)){a,b},0 .
X2 = (c < P1 > (X) | d < P3 ; P1 > (X)){c,d},0 .
X3 = (a < P1 ; (P2[{d}] ‖P3 ) > (X) | e < SKIP{} > (X)){a,e},0 .
Clearly X is a module having no sub-module in it. Moreover, the unrestricted use of PCO and SCO in the process equations prevents any hierarchical characterisation of Y , and Y ∉ H.
Example 3.5.3 On the other hand, consider the DFRP Y =< P1 ‖P3 > (X), where X = (X1 , …, X12 ) is given by the following process equations.
X1 = (a < P1 > (X) | b < ((P5 ‖P6 ); (P7 ‖P8 ); P2 )[[+{a,b}]] ; P1 > (X)){a,b},0
X2 = (c SKIP{} ){c},0 .
X3 = (b < P3 > (X) | a < ((P5 ‖P6 ); (P7 ‖P8 ); P4 )[[+{a,b}]] ; P3 > (X)){a,b},0
X4 = (d SKIP{} ){d},0 .
X5 = (e < P6 > (X) | f SKIP{} ){e,f },0 .
X6 = (g < P5 > (X)){g},0 .
X7 = (h < (P10 ‖P12 ); P7 > (X)){h},0 .
X8 = (i < P9 > (X) | j SKIP{h} ){i,j},0 .
X9 = (j < P8 > (X)){j},0 .
X10 = (k < P11 > (X)){k,l},0 .
X11 = (l < P10 > (X)){k,l},0 .
X12 = (l < P12 > (X) | m SKIP{k,l} ){l,m},0 .



[Figure 3.3 (diagram): the hierarchy tree of example 3.5.3. The root node carries X1 = (X1 , X2 | X3 , X4 ) with output Y = Y 1 = X1 ‖X3 ; its children carry X2 = (X5 , X6 ) with Y 2 = X5 ‖X6 , and X3 = (X7 | X8 , X9 ) with U3 = (Y 4 , …) and Y 3 = X7 ‖X8 ; the latter has a child carrying X4 = (X10 , X11 | X12 ) with Y 4 = X10 ‖X12 .]
Figure 3.3: Hierarchy Tree for example 3.5.3


By identifying the modules and their submodules of X, one can easily fit the above realisation into a hierarchy tree, as shown in Fig. 3.3. Then Y ∈ H.

3.6 Conclusion

In this chapter we have studied the boundedness property of two classes of DFRPs, namely A(< n (G‖D , D̃ ) >) and A(< n (G;D , D ) >). For both cases the underlying set of functions is shown to be infinite.
In the case of the former class, we have proposed a syntactic transformation and a rigorous post-process computation procedure which is powerful enough to capture the fact that each process of this class is bounded. The result is important, as it guarantees a finite representation of these processes in spite of the infiniteness of the underlying function space. It has also been shown that if LCO[B] is taken out of G‖D , then the set of functions built on the resultant collection of operators is finite.
In the latter case also, we have followed the same procedure of using a syntactic transformation, along with syntactic substitution, in post-process computation. Using a suitable procedure we have shown that boundedness is decidable for this subclass.
Several semantics-preserving syntactic transformations of process expressions have been proposed in the course of the boundedness analysis.
Finally, it has been shown how both the PCO and the SCO can be used together in a specific way, giving rise to bounded hierarchical subclasses of DFRP.
In the case of DFRPs built with PCO and change operators, one major limitation


of the present work is the applicability of the boundedness results on D̃ instead of D . As explained in fact 3.1.2, for processes in D , GCO[[B]] fails to distribute over PCO in general, as it may change some of the post-processes that are STOPA into SKIPA , for some A, though traces and alphabets always remain the same after distribution. D̃ is the class of processes that definitely satisfies the condition of distribution as given in lemma 3.1.1.
For some process P in A(< n (G‖D , D ) >), if it can somehow be guaranteed beforehand that the condition of lemma 3.1.1 is met, then CF can be used in the post-process computation of that process in a straightforward way. A second option is to consider modifying the definition of PCO, or more specifically the definition of the termination function. But this may pose difficulties in the construction of supervisor processes, which normally run concurrently with plant processes and decide on overall termination via blocking.
Also, in the previous section, the subclasses B and H were presented only as examples to show how both modelling flexibility and boundedness can be retained simultaneously by imposing suitable restrictions on the realisation of DFRPs.
There are important issues that remain to be investigated regarding these subclasses. For example, given the realisation of an arbitrary DFRP, it is not yet known whether there exists an algorithm that can determine whether the DFRP can be mapped into one of these subclasses. However, it is rather artificial to design an arbitrary DFRP and then map it onto a hierarchy tree. In chapter 6, it is shown how the logical and physical structure of a system naturally leads to a hierarchical process model.
In general, hierarchical models give computational advantages over flat models for those issues which can be investigated independently, or at least semi-independently, at different levels of the hierarchy [32]. It would be quite useful to identify such issues in the case of DFRP and study their computational complexities.
B and H are not the only bounded classes using both PCO and SCO. Governed by applications, other such general subclasses should be identified, and efficient algorithms that use structural information particular to a subclass should be developed to study the properties of these subclasses.
In the next chapter we shall present a nondeterministic extension of the DFRP model.

Chapter 4
Nondeterministic Extension of FRP
4.1 Introduction

Nondeterminism is a useful and important characteristic of DES models that arises in a system due to partial abstraction of the system dynamics. Sometimes a system has a range of possible behaviours, but the environment or the user of the system may not have the ability to influence, or even observe, the selection between the alternative courses of behaviour. Nondeterministic models arise either from a deliberate decision to ignore the factors that influence this selection, or from the merely partial observation possible from the given vantage point of the user.
Nondeterministic models are useful for different reasons.
• To the implementor of a system, a nondeterministic model often serves as a specification. As a result, the implementor has the freedom to choose from a range of deterministic implementation details, each of which satisfies the higher-level nondeterministic specification. On the other hand, the specifier of the system is not interested in the implementation details and is solely interested in verifying whether the implementation satisfies the nondeterministic specification.
• Sometimes the supervisor of a system is forced to perform control tasks with a partial observation of the system dynamics because of some physical or operational constraints. Such is the situation in distributed processes controlled from a remote control room. In order to design such a supervisor, both the actual deterministic model of the system and the nondeterministic observation model are necessary.
DFRP, being a deterministic framework, does not have the ability to capture these effects of event concealment. In the present work, therefore, a nondeterministic extension

of the DFRP model is proposed.

The state-based models such as FSM, SC, TTM, etc., incorporate nondeterminism by making the transition relation nondeterministic. In PNs it may be achieved via a non-injective labelling of the transitions [12]. In CCS, nondeterminism arises out of hidden transitions. In CSP, nondeterminism is described in terms of the externally observable behaviour of a process to engage in an event or to refuse it. A nondeterministic CSP (NCSP) is therefore characterised merely in terms of its traces and refusals, without regard for its internal state, giving rise to the so-called failure-based model. In contrast, here we argue that the underlying deterministic dynamics of a process is often at least partially available, from, say, the designer of the process. The system, however, while in action, may only be partially observable to an observer and/or supervisor. Since the underlying deterministic model is available at least to some extent, one can use this knowledge, in addition to external observations, to create an observed nondeterministic model of the system for practical use. This model approximates the original deterministic system as closely as possible, since it utilises all available information, losing only the minimum amount that is unavoidable due to under-observation. This kind of reasoning leads us to a possible-future based model of nondeterministic FRP (NFRP). Interestingly, it turns out that, in the case of a constant alphabet, the treatments of nondeterminism in NCSP and NFRP are equivalent. But in the case of a variable alphabet, given an extent of lack of observation, an NFRP in general results in less uncertainty than it would have, had a failure-type characterisation been used.

The introduction of the nondeterministic process space as a special case of the general marked process space is the first contribution of this work. As already mentioned, the representation is different from that of the standard failure model of NCSP. Next, a substantial collection of process operators is defined over the nondeterministic process space and their properties are discussed. These operators are shown to form a weakly spontaneous mutually recursive family of functions, and they are used to give a finitely recursive characterisation of the nondeterministic process space. The work ends with a detailed discussion covering assessment and comparison of this model with other nondeterministic models.

4.2 Nondeterministic Process Space

In this section we define the specific embedding space and the marked process space of concern here.
Definition 4.2.1 (Basic Objects:) Let Σ be a fixed finite collection of events. Let MD be a set of deterministic marks and MN be the family of nonempty subsets of MD . That is, MN = 2MD \ ∅, and it denotes the collection of nondeterministic marks. Let N be a fixed family of (set-valued) functions from Σ∗ to MN such that ν ∈ N ⟹ ν/s ∈ N , where ν/s(t) := ν(s^t) for all t ∈ Σ∗ . WΣ,MN ,N (or simply WN ) denotes the suitable nondeterministic embedding set, defined as WN := C(Σ∗ ) × N .
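These basic objects admit a direct set-theoretic encoding. The tuple/frozenset representation below is an assumption of this illustration, not the thesis's formalism: a deterministic mark is a triple (α, τ, f ) and a nondeterministic mark is a nonempty set of such triples.

```python
def det_mark(alpha, tau, f):
    """A deterministic mark (alpha, tau, f): alpha and f are sets of events,
    tau is 0 or 1; hashable so it can be a member of a nondeterministic mark."""
    return (frozenset(alpha), tau, frozenset(f))

def nd_mark(*dets):
    """A nondeterministic mark: a nonempty subset of M_D."""
    m = frozenset(dets)
    assert m, "M_N contains only nonempty subsets of M_D"
    return m
```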
In WN we define two types of nondeterministic constant processes, namely CHAOS and HALTm for some m ∈ MN .
Definition 4.2.2 (The Constant Nondeterministic Processes:) CHAOS is the maximally nondeterministic process of WN , defined as tr CHAOS := Σ∗ and, ∀s ∈ Σ∗ , CHAOS(s) := MD . Also, for some m ∈ MN , HALTm is the process defined as tr HALTm := {<>}, HALTm (<>) := m. It is, however, not necessary that HALTm be defined for every m ∈ MN .
Definition 4.2.3 (Nondeterministic Projection Operators {↑N n | n ≥ 0}:) For any w ∈ WN , w ↑N 0 := CHAOS. For n > 0, if tr w = {<>} then w ↑N n := HALTw(<>) . Otherwise
w ↑N n := (σ1 w/ < σ1 > ↑N (n − 1) | … | σk w/ < σk > ↑N (n − 1))w(<>) , where
w = (σ1 w/ < σ1 > | … | σk w/ < σk >)w(<>) .
The projection operator ↑N n gives an approximation of a nondeterministic process, and ↑N (n + 1) is a better approximation than ↑N n. ↑N 0 gives the worst possible approximation of a process and is naturally defined as the maximally nondeterministic process CHAOS.
Definition 4.2.4 (Nondeterministic Partial Order ⊑N :) The partial order ⊑N over WN is defined as: w1 ⊑N w2 iff tr w2 ⊆ tr w1 and, ∀s ∈ tr w2 , w2 (s) ⊆ w1 (s).
In other words, w1 ⊑N w2 implies that w1 is less deterministic than w2 . This leads to the straightforward definition of the limit of a chain wi ⊑N wi+1 as follows:
lim{wi }i≥0 = w, where tr w := ∩i tr wi and, ∀s ∈ tr w, w(s) := ∩i wi (s).
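On finite processes, the partial order ⊑N can be checked directly. The dictionary encoding below (trace tuple to nondeterministic mark) is an assumption of this illustration:

```python
def leq_N(w1, w2):
    """w1 <=_N w2 iff tr w2 is contained in tr w1 and, for every s in tr w2,
    w2(s) is contained in w1(s); i.e. w2 is more deterministic than w1.
    Processes are encoded as {trace tuple: frozenset of (alpha, tau, f)}."""
    return set(w2) <= set(w1) and all(w2[s] <= w1[s] for s in w2)
```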
Finally, we have to specify the marking axioms that will define a suitable (marked) nondeterministic process space.
Definition 4.2.5 (Marking Axioms:) The marking axioms of our required process space are:
(a) MD = {(α, τ, f ) ∈ (2Σ × {0, 1} × 2Σ ) | f ⊆ α ∧ (τ = 1 ⟹ f = α)}. Also MN = 2MD \ ∅.
(b) For any process P in our required process space, ∀s ∈ tr P ,
(∃(α, τ, f ) ∈ P (s) | (τ = 0 ∧ σ ∈ (α \ f ))) ⟺ s^ < σ > ∈ tr P .
Let N be the subset of WN that satisfies the above marking axioms.
Fact 4.2.1 It can be verified that (a) (WΣ,MN ,N , ⊑N , {↑N n}) satisfies the conditions of definition 2.1.6 and is thus a suitable nondeterministic embedding space, and (b) N satisfies the conditions of definition 2.1.7, and thus (N , ⊑N , {↑N n}) is a suitable nondeterministic marked process space.
It should also be noted that the second marking axiom poses some restriction on the choice of m ∈ MN for which HALTm is a valid element of N . Only for those m ∈ MN such that α = f for all triples in m is HALTm a valid process. This is because tr HALTm is defined to be a singleton, containing only the null trace <>.
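Axiom (b) can be checked mechanically on a finite process. The encoding below (trace tuple to a frozenset of (α, τ, f ) triples) is an assumption of this illustration:

```python
def satisfies_axiom_b(P):
    """Marking axiom (b): an event sigma extends a trace s iff some future
    (alpha, tau, f) in P(s) has tau = 0 and sigma in alpha minus f."""
    for s, mark in P.items():
        enabled = {e for (alpha, tau, f) in mark if tau == 0 for e in alpha - f}
        successors = {t[-1] for t in P if len(t) == len(s) + 1 and t[:-1] == s}
        if enabled != successors:
            return False
    return True
```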
The above definitions are extensions of the deterministic processes defined in [21]. There, any deterministic process P has been defined as a 3-tuple P = (tr P, αP , τP ) satisfying certain axioms. The set tr P is the set of traces of P , αP : tr P → 2Σ is the alphabet function, and τP : tr P → {0, 1} is the termination function, where τP (s) = 1 represents successful termination of P after generation of the event sequence s in P . The axioms satisfied by any deterministic process P are: (i) <> ∈ tr P and s^t ∈ tr P ⟹ s ∈ tr P . (ii) s^ < σ > ∈ tr P ⟹ σ ∈ αP (s). (iii) (τP (s) = 1 ∧ s^t ∈ tr P ) ⟹ t = <>. D denotes the set of deterministic processes. Here, a mark after a string s consists of a set of triples of the form (α, τ, f ), where α and τ determine the alphabet and termination after s in the usual sense of deterministic processes, and f is the set of events that will be blocked. Thus each (α, τ, f ) is a possible deterministic future. A nondeterministic mark is represented as a set of deterministic marks, one of which gets chosen in a nondeterministic fashion. If a particular future is chosen, then the corresponding α is the instantaneous alphabet of the process, f is the collection of events that can be blocked by the process in that future,


and τ = 1 implies that the process terminates successfully in that future. As mentioned earlier, nondeterminism essentially arises due to low resolution of modelling or observation. Thus the process may engage in unmodelled or unobserved actions between modelled or observed events and arrive at new marks (futures). Since we are interested in describing the modelled behaviour of the system, after every modelled event we collect the set of deterministic marks that may be arrived at via possible internal (hidden) actions to form the nondeterministic mark or future. It is to be noted that in the deterministic case (that is, in DFRP), since the blocking function f is computable from the trace and the alphabet function, it is not included explicitly. It is, however, necessary to include it in the mark in the case of a nondeterministic process, since, depending upon the mark chosen (by some internal unmodelled action), an event may either take place or be blocked, and it is not possible to compute the chosen mark from the trace. We feel that this characterisation of nondeterminism is appropriate, since nondeterminism essentially arises due to hiding or undermodelling of events of a deterministic dynamics. It should also be noted that the above definition cannot be treated as a special case of the Canonical Nondeterministic Embedding Space (W) defined in [22], mainly for two reasons. Firstly, divergence has not been treated in this work. In this sense, the present work is similar to the original Nondeterministic Communicating Sequential Processes (Without Divergence) (NCSP(WOD)) of [68]. Secondly, the definition of W in [22] basically generalises the refusal-based postulates of CSP as well as NCSP(WOD). Instead of refusals, we have adopted a framework based on possible futures. A detailed comparison between NCSP(WOD) and the nondeterministic process space N presented here will be given in a later section.
Remark 4.2.1 (Comparison with Deterministic Processes:) Any P ∈ D can be expressed as a special nondeterministic process F(P) ∈ N by the transformation rule F, defined as follows.
tr F(P) := tr P. ∀s ∈ tr F(P),
F(P)(s) := {(α_P(s), τ_P(s), α_P(s) ∩ {σ | s^⟨σ⟩ ∉ tr P})}.
Thus ∀s ∈ tr F(P), F(P)(s) is a singleton (contains a single future), since F(P) is actually deterministic.
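The embedding F can be sketched computationally. The following is an illustrative sketch, not taken from the thesis: the names `embed`, `alpha` and `tau` are ours, traces are modelled as tuples over a finite prefix-closed set, and the blocked set is computed from the trace set and the alphabet function as in the deterministic case.

```python
def embed(traces, alpha, tau):
    """Return the marking function s -> {(Sigma, delta, f)} of F(P).

    Each mark is a singleton set holding one deterministic future:
    (instantaneous alphabet, termination flag, blocked events), where
    the blocked events are those of the alphabet that cannot extend s."""
    marking = {}
    for s in traces:
        sigma = frozenset(alpha(s))
        blocked = frozenset(e for e in sigma if s + (e,) not in traces)
        marking[s] = {(sigma, tau(s), blocked)}
    return marking

# A tiny process: <> -a-> <a> -b-> <a,b>, terminating after <a,b>.
traces = {(), ("a",), ("a", "b")}
alpha = lambda s: {"a", "b"}
tau = lambda s: 1 if s == ("a", "b") else 0
marks = embed(traces, alpha, tau)
```

As the remark states, every mark produced this way is a singleton, which is what distinguishes embedded deterministic processes inside N.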
The partial order on deterministic processes D is defined in [21] as P1 ⊑_D P2 if tr P1 ⊆ tr P2 and ∀s ∈ tr P1, α_{P1}(s) = α_{P2}(s) and τ_{P1}(s) = τ_{P2}(s). However, P1 ⊑_D P2 does not imply that F(P2) ⊑_N F(P1), even though tr F(P2) ⊇ tr F(P1). This is because ⊑_D relates single futures of two deterministic processes (after a common string), which have the same Σ and δ components and may differ only in the set of events they block. Its nondeterministic counterpart ⊑_N, on the other hand, essentially tests whether (after a common string) all the possible futures of one process are included in those of the other or not.
The projection operator on deterministic processes (denoted ↓_D n and defined in [21]) truncates a deterministic process up to string length n. By comparison, we find that ↓_D n on a deterministic process P and its nondeterministic counterpart ↓_N n on F(P) behave identically up to string length n − 1. We also have:
tr(P ↓_D n) := {s ∈ tr P | #s ≤ n} = {s ∈ tr(F(P) ↓_N n) | #s ≤ n}.
∀s ∈ tr P | #s ≤ n − 1, F(P ↓_D n)(s) = (F(P) ↓_N n)(s).
In this section we have completely defined the nondeterministic process space. It has
also been shown how deterministic processes can be treated as a special case of nondeterministic processes.

4.3 Process Operators

In this section we present a variety of process operators which will be used later to give a
recursive characterisation of N .
The Event Concealment Operator generates a (more) nondeterministic process from a
(non-)deterministic process through concealment of events.
Definition 4.3.1 (The Event Concealment Operator (ECO):) Given P ∈ N and C ⊆ Σ, P\C is defined in terms of a hiding operator ↓C : Σ* → (Σ\C)*, defined as below:
⟨⟩↓C := ⟨⟩ and (s^⟨σ⟩)↓C := (s↓C if σ ∈ C) or ((s↓C)^⟨σ⟩ if σ ∉ C).
Finally, tr P\C := {s↓C | s ∈ tr P} and
(P\C)(t) := ⋃_{s ∈ tr P | s↓C = t} {((Σ\C), δ, (f\C)) | (Σ, δ, f) ∈ P(s)}.
The ECO is valid (i.e., P\C ∈ N) and continuous, but destructive in nature. Also, (P\C1)\C2 = P\(C1 ∪ C2).
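The string-level part of the ECO is easy to visualise. The following is a minimal sketch in our own notation (function names `hide` and `hide_traces` are ours): strings are tuples of events, hiding erases the concealed events, and the trace set of P\C is the image of tr P under hiding.

```python
def hide(s, concealed):
    """The hiding operator on one string: drop every event of C from s."""
    return tuple(e for e in s if e not in concealed)

def hide_traces(traces, concealed):
    """tr(P \\ C): the image of the trace set under the hiding operator."""
    return {hide(s, concealed) for s in traces}

# Concealing the internal event "t" collapses <a,t> onto <a>.
traces = {(), ("a",), ("a", "t"), ("a", "t", "b")}
observed = hide_traces(traces, {"t"})
```

Note that distinct strings of P may collapse onto the same observed string, which is exactly why the marks of P\C are unions over all s with s↓C = t, and why hiding introduces nondeterminism.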
Definition 4.3.2 (The Nondeterministic Choice Operator (NCO):) Given two processes P1 and P2, the process P = P1 ⊓ P2 is defined as:
tr P := tr P1 ∪ tr P2.
P(s) := (P1(s) if s ∈ tr P1 \ tr P2) or (P2(s) if s ∈ tr P2 \ tr P1) or (P1(s) ∪ P2(s) if s ∈ tr P1 ∩ tr P2).
The NCO is valid, continuous and ndes. The ECO distributes over the NCO, i.e., (P1 ⊓ P2)\C = (P1\C) ⊓ (P2\C).
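The three cases of the NCO definition reduce to one set-theoretic rule on marking functions: union the traces and, where a trace is shared, union the mark sets. A small sketch (our names; marks are stand-in tuples, not real futures):

```python
def nco(p1, p2):
    """Nondeterministic choice on marking functions (dict: trace -> set
    of marks). Traces are unioned; on a shared trace the mark sets merge,
    on an exclusive trace the single owner's marks survive unchanged."""
    merged = {}
    for s in set(p1) | set(p2):
        merged[s] = p1.get(s, set()) | p2.get(s, set())
    return merged

m1 = {(): {("Sig1", 0, "f1")}, ("a",): {("Sig1", 1, "f1")}}
m2 = {(): {("Sig2", 0, "f2")}}
m = nco(m1, m2)
```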


Definition 4.3.3 (The Choice Operator (CO):) Given processes P1, ..., Pn in N, distinct events σ1, ..., σn and a consistent mark m ∈ 2^{2^Σ × {0,1} × 2^Σ} \ {∅} such that m = {(Σ, 0, f)} is a singleton, f ⊆ Σ and {σi | i = 1, ..., n} = (Σ \ f), the process P, denoted by P = (σ1 → P1 | ... | σn → Pn)_m, is defined as:
tr P := {⟨⟩} ∪ ⋃_{i=1}^{n} {⟨σi⟩^s | s ∈ tr Pi}.
P(⟨⟩) := m; P(⟨σi⟩^s) := Pi(s).
The definition is similar to the one in the deterministic case of FRP. The choice operator is valid, continuous and con. Note that the initial mark of a CO is a singleton, indicating that the initial event can be chosen deterministically by the environment.
The ECO distributes over the CO according to the following law. Let P = (σ1 → P1 | ... | σm → Pm)_{(Σ,0,f)} and C ⊆ Σ. Then
P\C = (P1\C) ⊓ ... ⊓ (Pm\C) ⊓ HALT_{(Σ\C, 0, f\C)}, if Σ\f ⊆ C;
P\C = (σ_{i1} → P_{i1}\C | ... | σ_{ir} → P_{ir}\C)_{(Σ\C, 0, f\C)} ⊓ (P_{j1}\C) ⊓ ... ⊓ (P_{jk}\C), if {σ_{i1}, ..., σ_{ir}} = (Σ\f)\C ≠ ∅ and {σ_{j1}, ..., σ_{jk}} = (Σ\f) ∩ C.
Note that f ⊆ Σ and Σ\f ⊆ C together imply that Σ\C = f\C.
Definition 4.3.4 (The Sequential Composition Operator (SCO):) Given two processes P1 and P2, the process P = P1; P2 is defined as:
tr P := {s ∈ tr P1 | ∃(Σ, 0, f) ∈ P1(s)} ∪ {s = r^t | r ∈ tr P1, ∃(Σ, 1, f) ∈ P1(r), t ∈ tr P2}.
P(s) := M1s ∪ M2s, where
M1s := {(Σ, 0, f) ∈ P1(s)} if s ∈ tr P1, or ∅ otherwise. It is the collection of nonterminating marks of P1 after s, provided the event sequence s has taken place in P1.
M2s := ⋃_{r,t} {P2(t) | s = r^t, r ∈ tr P1, ∃(Σ, 1, f) ∈ P1(r), t ∈ tr P2}, or ∅ otherwise. It is the collection of marks of P2 after t, when s = r^t, P1 may have terminated after r and in P2 the sequence t has taken place.
A consequence of nondeterminism here is that, after generating a prefix r of s, the process P1 may terminate and the remaining part of s gets generated in P2; otherwise P1 itself may continue to generate the full string s.
The sequential composition operator is used to achieve modularity of description. It also brings extra descriptive power and attendant complexities, such as unboundedness of process descriptions. The SCO is valid, continuous and ndes. Moreover, the ECO distributes over the SCO, i.e., (P1; P2)\C = (P1\C); (P2\C).
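The trace-level content of the SCO can be sketched for finite processes. This is our own illustration (names `sco_traces`, `tr1`, `marks1` are ours), where only the termination flag δ of each future matters: traces of P1 that carry a nonterminating future survive as-is, and every trace r after which P1 may terminate is extended by every trace of P2.

```python
def sco_traces(tr1, marks1, tr2):
    """Traces of P1 ; P2, following the two clauses of the definition:
    (i) traces of P1 possessing a future with delta = 0;
    (ii) r^t for every r after which some future of P1 has delta = 1."""
    out = {s for s in tr1 if any(d == 0 for (_, d, _) in marks1[s])}
    for r in tr1:
        if any(d == 1 for (_, d, _) in marks1[r]):
            out |= {r + t for t in tr2}
    return out

# P1 performs <a> and may then terminate; P2 performs <b>.
tr1 = {(), ("a",)}
marks1 = {(): {("S", 0, "f")}, ("a",): {("S", 1, "f")}}
tr2 = {(), ("b",)}
```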


Definition 4.3.5 (The Asynchronous Concurrency Operator (ACO):) For two processes P1, P2 ∈ N, the ACO P1 ‖_A P2 is defined in terms of an asynchronous interleaving relation I_A on strings of the two processes P1 and P2, given below:
⟨⟩ ∈ I_A(⟨⟩, ⟨⟩).
s ∈ I_A(t1, t2) ⇒ s ∈ I_A(t2, t1).
If s ∈ I_A(t1, t2) and ti ∈ tr Pi, then
s^⟨σ⟩ ∈ I_A(ti^⟨σ⟩, tj) if ti^⟨σ⟩ ∈ tr Pi, for some i, j = 1, 2, i ≠ j.
Following this, P1 ‖_A P2 is defined as:
tr(P1 ‖_A P2) := {s | s ∈ I_A(t1, t2), ti ∈ tr Pi}.
(P1 ‖_A P2)(s) := ⋃_{t1, t2 | s ∈ I_A(t1, t2)} (P1(t1) ∪ P2(t2)).
The ACO is valid, continuous and ndes. It has similarity with the shuffle operator defined on two FSMs with disjoint alphabets; here, however, the alphabets of the two processes need not be disjoint. Events, even with identical names, take place sequentially and without synchronization. Also, the ECO distributes over the ACO, i.e., (P1 ‖_A P2)\C = (P1\C) ‖_A (P2\C).
Example 4.3.1 Below are a few examples of the CO, NCO, SCO, ACO and ECO.
P0 = (a → HALT_{({},1,{})} | b → HALT_{({},0,{})})_{({a,b},0,{})}.
P1 = P0 ⊓ HALT_{({c},0,{c})}.
P2 = (a → HALT_{({b},0,{b})} | c → HALT_{({},0,{})})_{({a,c},0,{})}.
P3 = (d → HALT_{({},1,{})})_{({d},0,{})} ⊓ HALT_{({},1,{})}.
P3; P1 = P1 ⊓ ((d → P1)_{({d},0,{})}).
Let P = P1 ‖_A P2. Then P(⟨⟩) = P1(⟨⟩) ∪ P2(⟨⟩). Also P/⟨a⟩ = (P1/⟨a⟩ ‖_A P2) ⊓ (P2/⟨a⟩ ‖_A P1), P/⟨b⟩ = (P1/⟨b⟩ ‖_A P2) and P/⟨c⟩ = (P2/⟨c⟩ ‖_A P1).
Finally, P1\{a} = (b → HALT_{({},0,{})})_{({b},0,{})} ⊓ HALT_{({c},0,{c}),({},1,{})}.
The parallel composition operator (P CO) captures concurrent behaviour of two nondeterministic processes. This is a natural extension over its deterministic counterpart defined
in [21]. We give a formal definition below.
Definition 4.3.6 (The Parallel Composition Operator (PCO):) To begin with, we first define an operator I_s that interleaves two given strings with synchronisation, recursively as follows:
a) ⟨⟩ ∈ I_s(⟨⟩, ⟨⟩).
b) s ∈ I_s(t1, t2) ⇒ s ∈ I_s(t2, t1).
c) If s ∈ I_s(t1, t2), t1 ∈ tr P1 and t2 ∈ tr P2, then
(i) s^⟨σ⟩ ∈ I_s(ti^⟨σ⟩, tj), i, j = 1, 2, i ≠ j, if ti^⟨σ⟩ ∈ tr Pi and ∃(Σ, δ, f) ∈ Pj(tj) such that σ ∉ Σ, and
(ii) s^⟨σ⟩ ∈ I_s(t1^⟨σ⟩, t2^⟨σ⟩) if ti^⟨σ⟩ ∈ tr Pi, i = 1, 2.
Here s ∈ I_s(t1, t2) implies that there exists at least one pair (t1, t2), and sequences of deterministic marks along (t1, t2), which give rise to s in the same way as in deterministic processes.
Given the processes P1 and P2, we now define the PCO P := P1 ‖ P2. Firstly,
tr P := {s | s ∈ I_s(t1, t2), ti ∈ tr Pi, i = 1, 2}.
In the nondeterministic case, all possible futures of all possible t1, t2 such that s ∈ I_s(t1, t2) are considered while constructing P(s).
P(⟨⟩) := {(Σ1 ∪ Σ2, δ, f1 ∪ f2) | ((Σk, δk, fk) ∈ Pk(⟨⟩), k = 1, 2) and δ = 1 ⇔ ((δ1 = δ2 = 1) ∨ (δi = 1 ∧ Σj ⊆ Σi; i, j = 1, 2; i ≠ j))}.
Finally, ∀t1, t2 such that ti ∈ tr Pi, i = 1, 2, and s ∈ I_s(t1, t2), if s^⟨σ⟩ ∈ tr P then
P(s^⟨σ⟩) := A1 ∪ A2, where
A1 := ⋃_{t1^⟨σ⟩, t2^⟨σ⟩ | s^⟨σ⟩ ∈ I_s(t1^⟨σ⟩, t2^⟨σ⟩)} A1(s, t1^⟨σ⟩, t2^⟨σ⟩),
A1(s, t1^⟨σ⟩, t2^⟨σ⟩) := {(Σ1 ∪ Σ2, δ, f1 ∪ f2) | ((Σk, δk, fk) ∈ Pk(tk^⟨σ⟩), k = 1, 2) and δ = 1 ⇔ ((δ1 = δ2 = 1) ∨ (δi = 1 ∧ Σj ⊆ Σi; i, j = 1, 2; i ≠ j))}.
A2 := ⋃_{ti^⟨σ⟩, tj | s^⟨σ⟩ ∈ I_s(ti^⟨σ⟩, tj); i, j = 1, 2; i ≠ j} A2(s, ti^⟨σ⟩, tj),
A2(s, ti^⟨σ⟩, tj) := {(Σi ∪ Σj, δ, fi ∪ fj) | (Σi, δi, fi) ∈ Pi(ti^⟨σ⟩), (Σj, δj, fj) ∈ Pj(tj), σ ∉ Σj, and δ = 1 ⇔ ((δi = δj = 1) ∨ (δi = 1 ∧ Σj ⊆ Σi) ∨ (δj = 1 ∧ Σi ⊆ Σj))}.
The PCO is valid, continuous and ndes. However, unlike the deterministic case, P ‖ P ≠ P. Also, (P1 ‖ P2)\C = (P1\C) ‖ (P2\C) if, ∀si ∈ tr Pi and ∀(Σi, δi, fi) ∈ Pi(si), Σ1 ∩ Σ2 ∩ C is empty. In case this condition is not satisfied, (P1 ‖ P2)\C can be computed after obtaining (if possible) a finite state characterisation (in terms of post process expressions) of P1 ‖ P2 and then applying the ECO on it.
Example 4.3.2 (Example of PCO:) Consider the process P = P1 ‖ P2. It can be checked easily that tr P = {⟨⟩, ⟨a⟩, ⟨b⟩, ⟨c⟩, ⟨b,a⟩, ⟨b,c⟩, ⟨c,a⟩, ⟨c,b⟩}. Also
P/⟨⟩ = P; P(⟨⟩) = {({a,b,c}, 0, {}), ({a,c}, 0, {c})}.
P/⟨a⟩ = ((P1/⟨a⟩) ‖ (P2/⟨a⟩)) ⊓ (P1^{{a}} ‖ (P2/⟨a⟩)); P(⟨a⟩) = {({b}, 0, {b}), ({b,c}, 0, {b,c})}.
P/⟨b⟩ = ((P1/⟨b⟩) ‖ P2); P(⟨b⟩) = {({a,c}, 0, {})}.
P/⟨c⟩ = (P1^{{c}} ‖ (P2/⟨c⟩)); P(⟨c⟩) = {({a,b}, 0, {})}.
P/⟨b,a⟩ = ((P1/⟨b⟩) ‖ (P2/⟨a⟩)); P(⟨b,a⟩) = {({b}, 0, {b})}.
P/⟨b,c⟩ = ((P1/⟨b⟩) ‖ (P2/⟨c⟩)); P(⟨b,c⟩) = {({}, 0, {})}.
P/⟨c,a⟩ = ((P1^{{c}}/⟨a⟩) ‖ (P2/⟨c⟩)); P(⟨c,a⟩) = {({}, 1, {})}.
P/⟨c,b⟩ = ((P1^{{c}}/⟨b⟩) ‖ (P2/⟨c⟩)); P(⟨c,b⟩) = {({}, 0, {})}.
The operator P^{{σ}} used above (the LDO) is defined later.
Remark 4.3.1 The following example shows that if we allow, as in [21], that for tr P = {⟨⟩}, P ↓_N n = CHAOS ∀n, then the parallel composition fails to be ndes. Let
P4 = (a → HALT_{({},1,{})})_{({a},0,{})} and
P5 = (a → (b → HALT_{({},1,{})})_{({b},0,{})})_{({a},0,{})}.
So P4 ‖ P5 = (a → (b → HALT_{({},1,{})})_{({b},0,{})})_{({a},0,{})}. According to [22], we have
P4 ↓_N 2 = (a → CHAOS)_{({a},0,{})}.
P5 ↓_N 2 = (a → (b → CHAOS)_{({b},0,{})})_{({a},0,{})}.
(P4 ‖ P5) ↓_N 2 = (a → (b → CHAOS)_{({b},0,{})})_{({a},0,{})}.
(P4 ↓_N 2 ‖ P5 ↓_N 2) = (a → (CHAOS ‖ (b → CHAOS)_{({b},0,{})}))_{({a},0,{})}.
In (P4 ↓_N 2 ‖ P5 ↓_N 2), and hence in (P4 ↓_N 2 ‖ P5 ↓_N 2) ↓_N 2, the second event can be any σ ∈ Σ. But this is not the case in (P4 ‖ P5) ↓_N 2, where the only possible second event is b. Clearly, therefore, the PCO is not ndes, as (P4 ↓_N 2 ‖ P5 ↓_N 2) ↓_N 2 ≠ (P4 ‖ P5) ↓_N 2 under this definition. However, if we apply our modified definition 2.1.6, then P4 ↓_N 2 = P4 and the PCO is ndes.
Definition 4.3.7 (The Local Deletion Operator (LDO):) Given P ∈ N and σ ∈ Σ such that ∃(Σ, δ, f) ∈ P(⟨⟩) with σ ∉ Σ, the LDO P^{{σ}} is defined as follows:
⟨⟩ ∈ tr P^{{σ}}.
P^{{σ}}(⟨⟩) := {(Σ, δ, f) ∈ P(⟨⟩) | σ ∉ Σ}.
⟨σ'⟩ ∈ tr P^{{σ}} ⇔ ∃(Σ, 0, f) ∈ P^{{σ}}(⟨⟩) | σ' ∈ (Σ \ f).
P^{{σ}}(⟨σ'⟩) = P(⟨σ'⟩).
∀(s ∈ tr P^{{σ}} | #s ≥ 1), (s^⟨σ'⟩ ∈ tr P^{{σ}} ⇔ s^⟨σ'⟩ ∈ tr P).
P^{{σ}}(s^⟨σ'⟩) = P(s^⟨σ'⟩).
If ∀(Σ, δ, f) ∈ P(⟨⟩), σ ∉ Σ, then P^{{σ}} = P.
If ∀(Σ, δ, f) ∈ P(⟨⟩), σ ∈ Σ, then P^{{σ}} is undefined.
The LDO deletes, from the initial mark of the process, those deterministic futures which contain σ in their alphabet component. In the resultant process, at the initial state, σ can neither take place nor be blocked. This operator is valid, continuous and ndes.
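The action of the LDO on the initial mark is a simple filter, which the following sketch illustrates (the name `ldo` and the use of `None` for "undefined" are ours):

```python
def ldo(initial_mark, sigma):
    """Local deletion at <>: keep only the futures (alphabet, delta, f)
    whose alphabet does not contain sigma.

    Returns the unchanged mark when no future contains sigma (P^{sigma}=P),
    and None when every future contains sigma (operator undefined)."""
    kept = {m for m in initial_mark if sigma not in m[0]}
    return kept if kept else None

# Two futures: one over {a, b}, one over {c}.
mark = {(frozenset({"a", "b"}), 0, frozenset()),
        (frozenset({"c"}), 0, frozenset({"c"}))}
```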


[Figure: two loop-lines through the loading stations A and B share the one-way track J1–J7, divided into sections S1–S6 by the junctions J2–J6; the junctions carry stoplights (*) and detectors (!), and side branches lead to the unloading stations C and D.]
Figure 4.1: Structure of the shared track in example 4.3.3


Definition 4.3.8 (The Local Non-termination Operator (LNO):) Given P ∈ N such that ∃(Σ, 0, f) ∈ P(⟨⟩), P^{[δ=0]} is defined as follows:
tr P^{[δ=0]} := tr P.
P^{[δ=0]}(⟨⟩) := P(⟨⟩) \ {(Σ, 1, f) | (Σ, 1, f) ∈ P(⟨⟩)}.
P^{[δ=0]}(s) := P(s), ∀s ≠ ⟨⟩.
The LNO removes the possibility of the process terminating successfully at the very start without generating any observable event. However, deadlock or unsuccessful termination (STOP) remains a possibility. The operator is valid, continuous and ndes.
The last two operators have mainly been introduced to ensure mutual recursiveness of the process operators, as will be seen later.
Example 4.3.3 Consider the following modified version of the example taken from [11]. Two loop-lines (shown in figure 4.1), A-J1-...-J7-A (L1) and B-J1-...-J7-B (L2), share a common single one-way track from J1 to J7. The track consists of six sections (S1 to S6) separated by five junctions (J2 to J6), which are equipped with stoplights (*) and detectors (!). Simultaneously, two vehicles, V^1 and V^2, traverse the loops L1 and L2 respectively in the directions shown in figure 4.1. Vehicle V^1 (resp. V^2) loads material from A (resp. B) if it is empty and enters the common track. On the common track, after reaching J3 (resp. J5), it may either continue its journey directly and enter S3 (resp. S5), or it may take a left (resp. right) turn, arrive at C (resp. D), unload material, come back to J3 (resp. J5) and then continue its journey forward. The movement of vehicle V^i from section Sj to Sj+1, j = 1, ..., 5, is represented by the event σ^i_{j,j+1}. The rest of the event symbols are self-explanatory. The overall dynamics is expressed as the following deterministic process Plant. However, since each deterministic process is a special case of a nondeterministic process, here we express Plant as a nondeterministic FRP (NFRP), i.e., as an FRP built over N.
Plant = V^1_{A1} ‖ V^2_{B1}.
V^1_{A1} = (check^1 → V^1_{A2})_{({check^1},0,{})}.
V^1_{A2} = (empty^1 → V^1_{A3} | non_empty^1 → V^1_{A4})_{({empty^1, non_empty^1},0,{})}.
V^1_{A3} = (load^1 → V^1_{A4})_{({load^1},0,{})}.
V^1_{A4} = (start^1 → V^1_{A5})_{({start^1},0,{})}.
V^1_{A5} = (arrive^1_{J1} → V^1_{J1})_{({arrive^1_{J1}},0,{})}.
V^1_{J1} = (enter^1_{S1} → V^1_{S1})_{({enter^1_{S1}},0,{})}.
V^1_{S1} = (σ^1_{1,2} → V^1_{S2})_{({σ^1_{1,2}},0,{})}.
V^1_{S2} = (σ^1_{2,3} → V^1_{S3} | turn^1 → W^1_1; V^1_{S2}^{[−{turn^1}]})_{({σ^1_{2,3}, turn^1},0,{})}.
V^1_{S3} = (σ^1_{3,4} → V^1_{S4})_{({σ^1_{3,4}},0,{})}.
V^1_{S4} = (σ^1_{4,5} → V^1_{S5})_{({σ^1_{4,5}},0,{})}.
V^1_{S5} = (σ^1_{5,6} → V^1_{S6})_{({σ^1_{5,6}},0,{})}.
V^1_{S6} = (arrive^1_7 → V^1_{J7})_{({arrive^1_7},0,{})}.
V^1_{J7} = (return^1_A → V^1_{A1})_{({return^1_A},0,{})}.
W^1_1 = (arrive^1_C → W^1_2)_{({arrive^1_C},0,{})}.
W^1_2 = (unload^1_C → W^1_3)_{({unload^1_C},0,{})}.
W^1_3 = (return^1_{J3} → HALT_{({},1,{})})_{({return^1_{J3}},0,{})}.
The process V^2_{B1} can be constructed in an identical way with suitable changes in process and event symbols (changing the superscript 1 to 2, the subscript A to B and C to D, and return^1_{J3} to return^2_{J5}). However, it should be noted that the event turn^2 (followed by the process W^2_1; V^2_{S4}^{[−{turn^2}]}) now takes place as a possible choice in V^2_{S4}, instead of V^2_{S2}.

In the above plant, for those junctions which are not equipped with detectors, junction-crossing events are unobservable. The observable behaviour of the plant becomes important in the context of controller synthesis under partial observation. For example, one may try to construct a supervisor, based on the event information supplied by the detectors, which controls the stoplights in such a way that the two vehicles never ply in the same section of the guideway simultaneously. For the given plant, the observed behaviour, obtained by hiding the unobservable events, is naturally a nondeterministic
process. In general, the conditions under which a nondeterministic process, obtained by hiding events of an NFRP, can be expressed again as another NFRP are not known, let alone the construction of a supervisor. Only in the context of FSMs is this problem well studied [54, 56]. In this example, however, we can use the distribution laws of the ECO over the different operators and obtain an NFRP, as described below.
The set of unobservable events is Σ_uo = {σ^i_{1,2}, σ^i_{2,3}, σ^i_{4,5}, σ^i_{5,6}, turn^i | i = 1, 2}. The observed behaviour of the plant is described by the nondeterministic process OP = Plant\Σ_uo. Now OP = (V^1_{A1} ‖ V^2_{B1})\Σ_uo = (V^1_{A1}\Σ_uo) ‖ (V^2_{B1}\Σ_uo). The process V^1_{A1}\Σ_uo = P^1_{A1} is described as follows.
P^1_{A1} = (check^1 → P^1_{A2})_{({check^1},0,{})}.
P^1_{A2} = (empty^1 → P^1_{A3} | non_empty^1 → P^1_{A4})_{({empty^1, non_empty^1},0,{})}.
P^1_{A3} = (load^1 → P^1_{A4})_{({load^1},0,{})}.
P^1_{A4} = (start^1 → P^1_{A5})_{({start^1},0,{})}.
P^1_{A5} = (arrive^1_{J1} → P^1_{J1})_{({arrive^1_{J1}},0,{})}.
P^1_{J1} = (enter^1_{S1} → (HALT_{({},0,{})} ⊓ P^1_{S123} ⊓ (W^1_1; (HALT_{({},0,{})} ⊓ P^1_{S123}))))_{({enter^1_{S1}},0,{})}.
P^1_{S123} = (σ^1_{3,4} → (P^1_{S456} ⊓ HALT_{({},0,{})}))_{({σ^1_{3,4}},0,{})}.
P^1_{S456} = (arrive^1_7 → P^1_{J7})_{({arrive^1_7},0,{})}.
P^1_{J7} = (return^1_A → P^1_{A1})_{({return^1_A},0,{})}.
W^1_1 = (arrive^1_C → W^1_2)_{({arrive^1_C},0,{})}.
W^1_2 = (unload^1_C → W^1_3)_{({unload^1_C},0,{})}.
W^1_3 = (return^1_{J3} → HALT_{({},1,{})})_{({return^1_{J3}},0,{})}.
Similarly, the process V^2_{B1}\Σ_uo = P^2_{B1} can also be written as an NFRP as above. Based on Plant and OP, one can construct the supervisor.
Definition 4.3.9 (The Local Change Operator (LCO):) Given P ∈ N and B, C ⊆ Σ, P^{[−B+C]} is defined as follows:
⟨⟩ ∈ tr P^{[−B+C]}.
⟨σ⟩ ∈ tr P^{[−B+C]} ⇔ (⟨σ⟩ ∈ tr P ∧ σ ∉ B).
∀(s ∈ tr P^{[−B+C]} | #s ≥ 1), (s^⟨σ⟩ ∈ tr P^{[−B+C]} ⇔ s^⟨σ⟩ ∈ tr P).
P^{[−B+C]}(⟨⟩) := {((Σ\B) ∪ C, δ, (f\B) ∪ C) | (Σ, δ, f) ∈ P(⟨⟩)}.
P^{[−B+C]}(s) := P(s), ∀s ≠ ⟨⟩.
The LCO is valid, continuous and ndes.
Definition 4.3.10 (The Global Change Operator (GCO):) Given P ∈ N and B, C ⊆ Σ, P^{[[−B+C]]} is defined as follows:
⟨⟩ ∈ tr P^{[[−B+C]]}.
∀(s ∈ tr P^{[[−B+C]]}), (s^⟨σ⟩ ∈ tr P^{[[−B+C]]} ⇔ (s^⟨σ⟩ ∈ tr P ∧ σ ∉ B)).
P^{[[−B+C]]}(s) := {((Σ\B) ∪ C, δ, (f\B) ∪ C) | (Σ, δ, f) ∈ P(s)}.


The GCO is valid, continuous and ndes. Both the LCO and GCO are natural extensions of similar operators defined for the FRP.
For our convenience, from now on we will use two special forms of the LCOs and GCOs: LCO[−B] or P^{[−B]} (meaning P^{[−B+∅]}) and LCO[+C] or P^{[+C]} (meaning P^{[−∅+C]}), and similarly GCO[[−B]] and GCO[[+C]].
Example 4.3.4 The following examples show the use of the LNO, LDO, LCO and GCO.
P1^{{a}} = HALT_{({c},0,{c})}.
(P3^{[δ=0]}); P1 = (d → P1)_{({d},0,{})}.
P0^{[−{a}+{d}]} = (b → HALT_{({},0,{})})_{({b,d},0,{d})}.
P3^{[[+{b}]]} = (d → HALT_{({b},1,{b})})_{({b,d},0,{b})} ⊓ HALT_{({b},1,{b})}.
Distribution Laws: The following laws describe the distribution of the unary operators over some of the binary ones.
(i) NCO:
(a) (P1 ⊓ P2)^{[[−B+C]]} = P1^{[[−B+C]]} ⊓ P2^{[[−B+C]]}. (b) (P1 ⊓ P2)^{[−B+C]} = P1^{[−B+C]} ⊓ P2^{[−B+C]}.
(c) (P1 ⊓ P2)^{[δ=0]} = P1^{[δ=0]} ⊓ P2^{[δ=0]} if Pi^{[δ=0]} is defined for both i = 1, 2. If Pi^{[δ=0]} is not defined, then the r.h.s. will be Pj^{[δ=0]}, where j = 1, 2, j ≠ i.
(d) (P1 ⊓ P2)^{{σ}} = P1^{{σ}} ⊓ P2^{{σ}} if Pi^{{σ}} is defined for both i = 1, 2. If Pi^{{σ}} is not defined, then the r.h.s. will be Pj^{{σ}}, where j = 1, 2, j ≠ i.
(ii) SCO:
(a) (P1; P2)^{[[−B+C]]} = P1^{[[−B+C]]}; P2^{[[−B+C]]}.
(b) (P1; P2)^{[−B+C]} = (P1^{[δ=0][−B+C]}; P2 if P1^{[δ=0]} is defined) ⊓ (P2^{[−B+C]} if ∃(Σ, 1, f) ∈ P1(⟨⟩)).
(c) (P1; P2)^{[δ=0]} = (P1^{[δ=0]}; P2 if P1^{[δ=0]} is defined) ⊓ (P2^{[δ=0]} if ∃(Σ, 1, f) ∈ P1(⟨⟩)).
(d) (P1; P2)^{{σ}} = (P1^{[δ=0]{σ}}; P2 if P1^{[δ=0]{σ}} is defined) ⊓ (P2^{{σ}} if ∃(Σ, 1, f) ∈ P1(⟨⟩)).
(iii) ACO and PCO: The GCO[[−B+C]] distributes over the ACO. Over the PCO, only GCO[[−B]] distributes, and that only when every future of P1 and P2 has a zero termination component. The other local operators do not, in general, distribute over these two operators.

4.4 Recursive Characterisation

In this section, we establish the main property of mutual recursiveness, necessary for a
recursive characterisation of nondeterministic processes using the operators defined in the
previous section.


Definition 4.4.1 Let G_N be the set of functional symbols defined as:
G_N := {NCO, SCO, PCO, ACO, LNO, LDO{σ}, LCO[−B+C], GCO[[−B+C]], HALT_m | σ ∈ Σ, B, C ⊆ Σ, m ∈ M_N such that for any (Σ, δ, f) ∈ m, Σ = f}.
Corresponding to each element g ∈ G_N, we construct the function <g> and the set <G_N> as follows:
<HALT_m> : N → N such that <HALT_m>(P) := HALT_m.
<NCO> : N × N → N such that <NCO>(P1, P2) := P1 ⊓ P2.
<SCO> : N × N → N such that <SCO>(P1, P2) := P1; P2.
<PCO> : N × N → N such that <PCO>(P1, P2) := P1 ‖ P2.
<ACO> : N × N → N such that <ACO>(P1, P2) := P1 ‖_A P2.
<LDO{σ}> : N → N such that <LDO{σ}>(P) := P^{{σ}}.
<LNO> : N → N such that <LNO>(P) := P^{[δ=0]}.
<LCO[−B+C]> : N → N such that <LCO[−B+C]>(P) := P^{[−B+C]}.
<GCO[[−B+C]]> : N → N such that <GCO[[−B+C]]>(P) := P^{[[−B+C]]}.
Definition 4.4.2 Let <G_N> be the set of functional symbols defined as:
<G_N> := {<HALT_m>, <NCO>, <SCO>, <PCO>, <ACO>, <LNO>, <LDO{σ}>, <LCO[−B+C]>, <GCO[[−B+C]]> | σ ∈ Σ, B, C ⊆ Σ, m ∈ M_N such that for any (Σ, δ, f) ∈ m, Σ = f}.
Theorem 4.4.1 <G_N> is an MRWS family of functions.
Proof: It can easily be verified that every <g> ∈ <G_N> is continuous and ndes. However, many of them do not satisfy the third requirement of spontaneity. For example, <LDO{σ}> does not satisfy this requirement, as (CHAOS)^{{σ}} ≠ CHAOS for σ ∈ Σ. The same holds for <LNO>, <LCO[−B+C]> and <GCO[[−B+C]]>. However, CHAOS ‖ CHAOS = CHAOS. Thus <G_N> is weakly spontaneous. We now show the mutual recursiveness of its members as follows.
HALT_m: It is trivially MR.
NCO: (P1 ⊓ P2)/⟨σ⟩ = (P1/⟨σ⟩ if ⟨σ⟩ ∈ tr P1 \ tr P2) or (P2/⟨σ⟩ if ⟨σ⟩ ∈ tr P2 \ tr P1) or (P1/⟨σ⟩ ⊓ P2/⟨σ⟩ if ⟨σ⟩ ∈ tr P1 ∩ tr P2).
SCO: (P1; P2)/⟨σ⟩ = ((P1/⟨σ⟩)^{[δ=0]}; P2, if (P1/⟨σ⟩)^{[δ=0]} is defined)
⊓ (P2, if P1/⟨σ⟩ is defined and ∃(Σ, 1, f) ∈ (P1/⟨σ⟩)(⟨⟩))
⊓ (P2/⟨σ⟩, if ∃(Σ, 1, f) ∈ P1(⟨⟩) and P2/⟨σ⟩ is defined).
PCO: (P1 ‖ P2)/⟨σ⟩ =
(P1/⟨σ⟩ ‖ P2^{{σ}} if ⟨σ⟩ ∈ tr P1 and ∃(Σ, δ, f) ∈ P2(⟨⟩) | σ ∉ Σ)
⊓ (P2/⟨σ⟩ ‖ P1^{{σ}} if ⟨σ⟩ ∈ tr P2 and ∃(Σ, δ, f) ∈ P1(⟨⟩) | σ ∉ Σ)
⊓ (P1/⟨σ⟩ ‖ P2/⟨σ⟩ if ⟨σ⟩ ∈ tr P1 ∩ tr P2).
ACO: (P1 ‖_A P2)/⟨σ⟩ = (P1/⟨σ⟩ ‖_A P2 if ⟨σ⟩ ∈ tr P1) ⊓ (P2/⟨σ⟩ ‖_A P1 if ⟨σ⟩ ∈ tr P2).
LCO: P^{[−B+C]}/⟨σ⟩ = P/⟨σ⟩.
GCO: P^{[[−B+C]]}/⟨σ⟩ = (P/⟨σ⟩)^{[[−B+C]]}.
LNO: P^{[δ=0]}/⟨σ⟩ = P/⟨σ⟩.
LDO: P^{{σ'}}/⟨σ⟩ = P/⟨σ⟩, σ ≠ σ'.
This completes the theorem.

Definition 4.4.3 C_N := {CHAOS}. <CHAOS>(P) := CHAOS.
By theorem 4.4.1, we see that it is possible to construct an MRWS family of recursive functions <Φ_n(G_N, N)> using <C_N>, <Proj(n)> and <G_N>. Since the second assumption of theorem 2.2.2 is satisfied, it is also possible to build the nondeterministic algebraic process space A(<Φ(G_N, N)>) of all possible nondeterministic FRPs (NFRPs) with respect to <Φ_n(G_N, N)> for some n.
Note that the ECO cannot be included in G_N, as it is destructive in nature.
In this section, the conditions under which a collection of mutually recursive process descriptions admits a unique solution have been described. It has also been shown that the operators defined in the previous section form a consistent family in terms of mutual recursion.

4.5 Assessment

In this section we make a critical assessment of the proposed nondeterministic extension and compare it with other nondeterministic frameworks such as CSP and CCS.
Advantages and Disadvantages
The major disadvantages of the NFRP framework are (i) complexity of operator definitions
because of variable alphabet and (ii) undecidability of analysis results [33]. The complexity
of operator definitions is however unavoidable and would show up in any other framework
that attempts to deal with features such as concurrency, modular sequencing etc., in the
face of nondeterminism. Similarly, undecidability is also inevitable in such a powerful
framework. The solution to this problem lies in identifying bounded subclasses, as in DFRP [63], for which analysis will be possible.


Among the advantages we have the following. (i) This is the first attempt at introducing
nondeterminism in a variable alphabet situation which itself leads to modelling advantages
over the constant alphabet case of CSP. (ii) It provides a much needed platform over which
problems of observation [30], control under partial observation [54] etc., can be formulated.
(iii) Because of the mutual recursiveness of the operators, the model can be simulated on a computer. (iv) Finally, it generates a rich language; for example, as shown below, any context-free language (CFL) can be modelled in this framework.
Modelling CFLs with NFRP
Given any context-free language L without the null string ⟨⟩, we can construct an NFRP PL ∈ A(<Φ_n(G_N, N)>), for some finite n, such that tr PL = L and ∀s ∈ L, PL/s ⊒_N HALT_{({},1,{})}, as follows.
By the Chomsky Normal Form, any such L can be generated by a grammar GL whose production rules are of the form A → BC or A → a, where A, B, C are nonterminals and a ∈ Σ. We formulate PL to emulate the operation of GL. For every nonterminal A, a process PA is created.
If A → BC ∈ GL then PA = PB; PC.
If A → a ∈ GL then PA = (a → HALT_{({},1,{})})_{({a},0,{})}.
If A → BC | a ∈ GL then PA = (PB; PC) ⊓ ((a → HALT_{({},1,{})})_{({a},0,{})}).
If, in GL, the set of nonterminals is V = {S, A1, ..., An} and the set of terminals is T = {a1, ..., am}, then we get a vector recursive equation X = F(X, U), where X = (PS, PA1, ..., PAn) and U = ((ai → HALT_{({},1,{})})_{({ai},0,{})}, 1 ≤ i ≤ m). By the structure of a valid CFG GL, there cannot be an unguarded loop of recursive definitions among the nonterminals of the grammar. Hence F must satisfy the conditions of theorem 2.2.1, and there is a unique solution process that mimics GL. Y = PS will generate the required language.
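The heart of this construction — A → BC becoming sequential composition of languages and A → a becoming a one-event process — can be sketched directly. The following is our own illustration (the function `language` and the bound k are ours, introduced to keep the enumeration finite): it enumerates the strings derivable from a Chomsky-normal-form grammar up to a given length, treating a two-nonterminal body exactly as language concatenation, the trace-level effect of the SCO.

```python
from functools import lru_cache

def language(rules, start, k):
    """Strings of length <= k derivable from `start`.

    rules: dict nonterminal -> list of bodies, each ("B", "C") or ("a",).
    A -> a contributes the single event a; A -> B C contributes every
    concatenation u + v of strings of B and C (the P_B ; P_C rule)."""

    @lru_cache(maxsize=None)
    def lang(sym, n):
        if n <= 0:
            return frozenset()
        out = set()
        for body in rules[sym]:
            if len(body) == 1:                 # A -> a
                out.add(body[0])
            else:                              # A -> B C
                b, c = body
                for u in lang(b, n - 1):       # C contributes >= 1 symbol
                    for v in lang(c, n - len(u)):
                        out.add(u + v)
        return frozenset(out)

    return set(lang(start, k))

# CNF grammar for {a^n b^n | n >= 1}: S -> AB | AT, T -> SB, A -> a, B -> b.
rules = {"S": [("A", "B"), ("A", "T")], "T": [("S", "B")],
         "A": [("a",)], "B": [("b",)]}
```

Since a valid CNF grammar has no unguarded recursion among nonterminals, the bounded enumeration always terminates, mirroring the guardedness argument made above for the process equations.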

4.5.1 Comparison with CSP

Although the DFRP has its origin in DCSP, an important extension introduced there is the concept of a variable alphabet. The nondeterministic process framework proposed here, naturally, is an extension of its corresponding CSP counterpart, namely Nondeterministic CSP (NCSP).
In NCSP, nondeterminism has been treated in terms of the set of refusals of a process.
A refusal of a process is a collection of events, which when offered by the environment to the
process, may be refused by the latter. In NCSP, nondeterminism arises from the fact that,


at a time, a process may have multiple refusal possibilities. The set of all possible refusals (which is actually a family of sets of events) captures the immediate nondeterministic behaviour of the process. A subset of a refusal is also considered a separate refusal, and these refusals are used to represent the dynamics of a process. It is as if the description of the process has been arrived at by conducting experiments (like offering a collection of events to the process) and observing its external behaviour.
On the other hand, the NFRP framework takes the viewpoint of internal behaviour. It is as if the deterministic process model were known and, by some concealment operation, a nondeterministic model had been arrived at, where a collection of possible deterministic futures, characterised by the corresponding alphabet, termination and blocking behaviour, has been clubbed together to represent a nondeterministic future.
Thus, given any nondeterministic mark of any NFRP, one can construct the immediate one-length strings of events that are possible in the process at that stage. This is, however, not possible in NCSP, where information about both the refusals and the trace is necessary to determine the one-length strings. This is because, by definition, any subset of a refusal is also a refusal, and hence an individual refusal may not qualify as a possible deterministic future.
In order to gain an understanding of the relation between NCSP and NFRP, we make the following assumptions: (a) constant alphabet in the case of NFRP; (b) both types of processes are nonterminating; (c) the divergence component of NCSP processes, which has been defined to capture the behaviour of processes in case of infinite occurrence of hidden events, is taken to be empty. This results in the class of NCSP without divergence (NCSP(WOD)). In other words, we restrict our discussion to the class of nonterminating NCSP(WOD) processes on one side and the nonterminating, constant-alphabet nondeterministic processes of N on the other. For detailed definitions of NCSP(WOD) and CSP we refer to [68] and [17].
Formally, a nonterminating NCSP(WOD) P is a CSP P = (α(P), F(P), D(P)) such that D(P) = ∅ and ✓ ∉ α(P), where α(P), F(P) and D(P) represent the alphabet, failure and divergence components of the CSP P, and ✓ represents successful termination. Further, F(P) = {(s, X) | s ∈ tr P, X ∈ refusals(P/s)}. Also we define the set of impossible events, Imp(P), as Imp(P) := {σ ∈ α(P) | ⟨σ⟩ ∉ tr P}.
On the other hand, let L be the subset of N whose processes are characterised by constant alphabets and the absence of termination. Formally,
L = {P ∈ N | ∀s1, s2 ∈ tr P, ∀(Σ1, δ1, f1) ∈ P(s1), ∀(Σ2, δ2, f2) ∈ P(s2): Σ1 = Σ2 = Σ_P (say), and δ1 = δ2 = 0}. Here also Imp(P) := {σ ∈ Σ_P | ⟨σ⟩ ∉ tr P}.
Under the above assumptions, processes of L can be converted into NCSP(WOD) processes and vice versa while preserving behaviour in some sense. For this reason, as also to compare similar operators defined in the two formalisms, we consider some transformations.
The following transformation rule G converts an NCSP(WOD) P into a nondeterministic process G(P) ∈ L. G preserves traces, and G(P) contains the maximum number of deterministic futures in each nondeterministic mark, each of which corresponds to a suitable refusal of P.
tr G(P) := tr P.
G(P)(s) := {(Σ, 0, f) | Σ = α(P), (s, f) ∈ F(P), Imp(P/s) ⊆ f}.
The idea behind the transformation rule G is that only a refusal of a process which contains all the impossible events will correspond to an individual deterministic future of a nondeterministic mark, in which it contributes the set of blocked events.
Actually, several such transformations can be defined, depending on their properties. For example, the following transformation rule G′ is similar to G in that it is trace-preserving. But instead of the maximum number of refusals, it retains only the maximal refusals, each of which naturally contains all the impossible events.
tr G′(P) := tr P.
G′(P)(s) := {(Σ, 0, f) | Σ = α(P), (s, f) ∈ F(P), ∀σ ∈ α(P)\f, (s, f ∪ {σ}) ∉ F(P)}.
On the other hand, the transformation rule G⁻¹ converts a process PF ∈ L into an NCSP(WOD) G⁻¹(PF) as follows:
α(G⁻¹(PF)) := Σ_{PF}.
F(G⁻¹(PF)) := {(s, X) | s ∈ tr PF, X ⊆ f for some (Σ_{PF}, 0, f) ∈ PF(s)}.
D(G⁻¹(PF)) := ∅.
G⁻¹ actually creates the refusal set by taking the subset closure of the blocking components of all the futures present in the mark. It is also trace-preserving.
For any NCSP(WOD) P we have G⁻¹(G(P)) = P. But for any PF ∈ L, in general G(G⁻¹(PF)) ≠ PF. Note that the same holds for G′ and G⁻¹, i.e., G⁻¹(G′(P)) = P but G′(G⁻¹(PF)) ≠ PF. In both cases G⁻¹ fails to be a right inverse because, in general, there exist distinct NFRPs of L having the same trace but different marking functions, which upon application of G⁻¹ give rise to the identical NCSP(WOD) G⁻¹(PF). G or G′ maps back to one specific NFRP among these, which may not be the original one.
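The refusal/future correspondence behind G and G⁻¹ can be sketched for finite processes. This is our own illustration (names `g`, `g_inv`, `subsets` are ours): failures are represented as a dict from traces to subset-closed families of refusal sets, `g` keeps exactly the refusals containing all impossible events as blocking components, and `g_inv` rebuilds the subset-closed refusals, so that the round trip G⁻¹(G(P)) = P can be checked.

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets (the subset closure of one refusal)."""
    s = tuple(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def g(alphabet, failures, traces):
    """G: one future (alphabet, 0, f) per refusal f that contains every
    impossible event after s."""
    marks = {}
    for s in traces:
        imp = {e for e in alphabet if s + (e,) not in traces}
        marks[s] = {(frozenset(alphabet), 0, f)
                    for f in failures[s] if imp <= f}
    return marks

def g_inv(marks):
    """G^-1: refusals as the subset closure of the blocking components."""
    return {s: {x for (_, _, f) in futures for x in subsets(f)}
            for s, futures in marks.items()}

# Alphabet {a, b}; only a is possible initially, nothing after <a>.
traces = {(), ("a",)}
alphabet = {"a", "b"}
failures = {(): {frozenset(), frozenset({"b"})},
            ("a",): set(subsets({"a", "b"}))}
marks = g(alphabet, failures, traces)
```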
The transformations preserve the sense of a number of similar operators defined in the two frameworks. We attach the suffix C and the suffix F respectively to the similar operators of NCSP(WOD) and L (or N). Then it can easily be checked that:
(a) P1 ⊓_C P2 = G⁻¹(G(P1) ⊓_F G(P2));
(b) P1 ‖_C P2 = G⁻¹(G(P1) ‖_F G(P2));
(c) (σ1 → P1 | ... | σk → Pk) = G⁻¹((σ1 → G(P1) | ... | σk → G(Pk))_m), where α(P1) = α(P2) = ... = α(Pk) = Σ_P (say), {σ1, ..., σk} ⊆ Σ_P and m = {(Σ_P, 0, Σ_P \ {σ1, ..., σk})};
(d) however, P \_C C ≠ G⁻¹(G(P) \_F C).
Finally, we make the following qualitative remarks regarding the advantages and disadvantages of NFRP vis-a-vis CSP. The advantages include the following.
(+) The language generating power of NFRP is greater than that of DFRP or deterministic CSP (DCSP). Consider the following example.
Example 4.5.1 Let L be the prefix closure of the language {a^n b^n | n ≥ 0}. L cannot be the trace set of any DFRP or DCSP defined in a finitely recursive way. But the following NFRP Y generates L as its trace set: Y = P1 ⊓ HALT_{({a,b},1,{a,b})}, where
P1 = (a → (P1 ⊓ HALT_{({a,b},1,{a,b})}); P2)_{({a,b},0,{b})} and
P2 = (b → HALT_{({},1,{})})_{({b},0,{})}. It can easily be seen that tr Y = L.
(+) The exact language-generating powers of NCSP and NFRP are not known, to the best
of our knowledge. However, the flexibility of having a variable alphabet in NFRP normally
helps in obtaining a more natural and elegant description.
Example 4.5.2 Here we present an NFRP process and a recursively defined NCSP process, both of which generate the language L, the prefix closure of the context-sensitive
language {aⁿbⁿaⁿ | n ≥ 0}, as their trace set. The NFRP process Y_F is given by the following equations: Y_F = (P₁ ⊓ HALT_{({a,b},1,{a,b})}) ‖ (P₃ ⊓ HALT_{({},1,{})}) where
P₁ = (a → (P₁ ⊓ HALT_{({a,b},1,{a,b})}); P₂)_{({a,b},0,{b})},
P₂ = (b → HALT_{({b},1,{b})})_{({a,b},0,{a})},
P₃ = (b → (P₃ ⊓ HALT_{({},1,{})}); P₄)_{({b},0,{}),({},1,{})}, P₄ = (a → HALT_{({},1,{})})_{({a},0,{})}.
It can be seen that tr Y_F is the prefix closure of {aⁿbⁿaⁿ | n ≥ 0}. There is, however, one problem with the above description. We desire that, only for those strings s such that s = aⁿbⁿaⁿ
for some n > 0, Y_F/s = HALT_{({},1,{})}. However, the fact that Y_F/<aⁿbⁿ> =_N
HALT_{({b},1,{b})} is an undesirable outcome of the above description, which seems unavoidable.
Next we try to capture the same language as the trace of a recursively defined NCSP.
In the NFRP model we designed two processes, P₁ and P₃, to generate the languages
{aⁿbⁿ | n ≥ 0} and {bⁿaⁿ | n ≥ 0} respectively; parallel composition then
yields the required language. Unfortunately, the same scheme does not work for NCSP.
Because of the constant alphabet, the related restrictions on the choice operator [17] of NCSP and the
synchronization constraints of PCO, the event a of the first process (P₁) remains synchronous with
that of the second process (P₃) in case of NCSP, which is not intended in the above scheme.

4.5. ASSESSMENT

While designing, the similar restrictions on PCO and SCO [17] should also be taken into
account. The following NCSP Y_C emerges as a result.
P₁ = (a → (P₁ ⊓ SKIP_{a,b}); P₂)
P₂ = (b → SKIP_{a,b})
P₃ = (b → (P₃ ⊓ SKIP_{b,c}); P₄)
P₄ = (c → SKIP_{b,c})
where αP₁ = αP₂ = {a, b, ✓} and αP₃ = αP₄ = {b, c, ✓}. Also let f : {a, b, c, ✓} → {a, b, ✓}
be the symbol change operator, defined as f(σ) := σ for σ = a, b, ✓ and f(c) := a.


Finally, Y_C is defined as Y_C = f(P₁ ‖ P₃). It can be checked that tr Y_C = L. Note that, to
tackle the synchronisation problem mentioned above, the second process generates the language
{bⁿcⁿ | n ≥ 0}; with the help of the symbol change operator we then obtain the desired language
in a roundabout way.
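The synchronization scheme just described can be sketched operationally (our own illustration, with hypothetical helper names): a string is a candidate trace of the composition when its projections onto the two alphabets are component traces, with the shared event b synchronized, and f then renames c to a.

```python
def pref_xy(s: str, x: str, y: str) -> bool:
    """True iff s is a prefix of x^n y^n for some n >= 0."""
    p = 0
    while p < len(s) and s[p] == x:
        p += 1
    q = 0
    while p + q < len(s) and s[p + q] == y:
        q += 1
    return p + q == len(s) and q <= p

def par_trace(s: str) -> bool:
    """Candidate trace of P1 || P3: project onto each alphabet and check."""
    proj1 = ''.join(e for e in s if e in 'ab')   # alphabet of P1
    proj2 = ''.join(e for e in s if e in 'bc')   # alphabet of P3
    return pref_xy(proj1, 'a', 'b') and pref_xy(proj2, 'b', 'c')

def f(s: str) -> str:
    """Symbol change operator: f(c) = a, identity on a and b."""
    return s.replace('c', 'a')

assert par_trace("aabbcc") and f("aabbcc") == "aabbaa"  # a^2 b^2 a^2 after renaming
assert not par_trace("acab")  # c cannot precede b in P3's projection
```

The sketch only exercises the projection and renaming mechanics on sample strings; the full termination and deadlock structure of the processes is not modelled here.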
(+) In case of NFRP, knowledge about the presence of different deterministic futures
with different alphabet components in the nondeterministic mark can sometimes help in
resolving nondeterminism. This may not be possible in case of NCSP.
Example 4.5.3 Consider the following NFRP processes. Let P₁ = (a → P₁)_{({a,c},0,{c})},
P₂ = (a → P₂)_{({a},0,{})} and P₃ = (c → P₃)_{({c},0,{})}. P₁ and P₂ have identical traces, but
their blocking capabilities are different. Then ((P₁ ⊓_F P₂) ‖ P₃)/<c> = P₁ ‖ P₃. However,
in case of NCSP, if two processes P₁ and P₂ have identical traces and can be
arguments of NCO (⊓_C), then they must have the same alphabet (see Hoare 85, art. 3.2,
p. 102). In other words, they will have the same blocking capability. In such a case it is not
possible to distinguish between P₁ and P₂ in P₁ ⊓ P₂ by putting it in some environment
and just observing the trace. This fact is likely to be of use if the problem of observation
(see [30] and [57]) is defined for process algebra models.
However, the above advantages have been achieved at the cost of the simplicity and elegance
inherent in a constant-alphabet NCSP. The disadvantages of this model are as follows.
(−) The operators of the NFRP model are more complicated than their NCSP
counterparts, in order to take care of the variable alphabet while preserving the sense of the
operators. As a result, implementation of the NFRP operators will be complex and post-process
computation will be computationally expensive.
(−) The general choice operator in NCSP allows the environment to choose an event
in a process only at the very first action. A corresponding concept is difficult to define
in case of NFRP because of the use of a variable alphabet and possible futures in the model,
instead of a constant alphabet and refusals. It is difficult even if the alphabet components
of the different futures are identical at the first instant. In the absence of this, the ACO of NFRP
and its counterpart in NCSP do not carry completely identical meanings.

4.5.2 Comparison with CCS

Relations between CCS and CSP have been discussed at length in both [17] and [18].
Here we briefly mention a few features in which NFRP differs from CCS.
The CCS formalism was designed to provide a mathematical framework for studying different
kinds of equivalence among process algebra models of concurrent nondeterministic systems. Here nondeterminism is introduced by a special symbol τ (different from the termination component of the deterministic futures of our NFRP model) which corresponds to
the occurrence of a hidden event. Unlike in CSP or NFRP, in CCS these hidden events are
not ignored, and they can even be used for guarding recursive equations involving process expressions. Different kinds of process equivalences are defined, which differ from each
other in their treatment of the hidden events. These include strong bisimulation,
weak bisimulation, etc. In NFRP, however, the only equivalence defined between processes
is that of equality (P₁ ⊑_N P₂ and vice versa). In fact, trace equivalence (identical sets
of observed strings) can also be defined in both models. Trace equivalence cannot
distinguish between models having identical traces but different deadlocking properties. Failure
equivalence can be compared between the two models only under the restriction
of a constant alphabet. It is, however, difficult, if not impossible, to define weak bisimulation in NCSP or in NFRP, as these frameworks are not distinguishing enough; in fact,
they were not meant to be. Finally, as for NCSP, the PCO of NFRP and the corresponding conjunction operator of CCS are not comparable. The latter does not have any
blocking property; on the other hand, it includes aspects of hiding, nondeterminism, and
interleaving as well as synchronization. In NFRP we have separate operators for all of these.
This work has presented a formalism for capturing nondeterministic behaviours of DES
in general and of the FRP model in particular. Nondeterminism is often unavoidable in hierarchical supervision, where low-level deterministic descriptions may become nondeterministic
due to deliberate undermodelling or lack of observation. Together with the DFRP model,
the present work forms a uniform process algebraic approach for modelling untimed logical
discrete event systems. However, problems related to control and observation remain to be
worked out.
In the next chapter we equip the DFRP formalism with concepts of states and variables.

Chapter 5
State-based Extension of FRP
5.1 Introduction

One important limitation of the DFRP model is that it does not support any feature
to store and update numeric and non-numeric information about the state of different
physical and logical variables associated with a system. In man-made real-world systems,
the concept of states and associated variables emerges quite naturally. One can also evaluate
system performance through simulation if such variables are available. Many logical decisions are taken based on the state of these variables. Naturally, DFRP cannot model such
logical decisions directly and as a result poses considerable difficulty to the modeller of
a system. The motivation behind this work emerged from the recognition that both the
events and the collection of system variables are independent real entities and therefore
need to be modelled while modelling the logical behaviour of Discrete Event Systems.
An important step towards integrating features of both state- and trace-based models
has been taken through the notion of extended processes in [22], where both events and
state transformations are dealt with simultaneously. However, only a restricted class of
operators, to be used for recursive definition, has been mentioned. Moreover, the possibility of
shared variables among subprocesses, and the different constraints that may arise due to them,
has not been discussed.
In this work, a second attempt has been made to integrate both trace and state based
features using the concept of extended processes. The major difference between the current
work and the existing one [22] stems from the fact that a shared memory architecture has
been used in the current work unlike in [22] where a message passing architecture is used.
In [22] it has been shown how a shared variable can be modelled, not as an event, but as
a buffer process.

The relative advantages and disadvantages of using a message-passing / local-memory
architecture versus a global-memory architecture in parallel and distributed computation are
well known to the computer science community. The former has the advantage that parallel
computation assigns values to any variable in a deterministic way, since each variable is
local in nature; but it has a large communication burden. In case of the latter there is
little communication burden, but strategies like semaphores and mutual exclusion must
be enforced in order to ensure a deterministic effect of parallel computation on shared
variables. In most cases, therefore, the choice of an architecture is determined by the
application at hand as well as physical constraints.
Extended FRP (EFRP) can be considered a generalised computational model, and
the above discussion applies to it as well. The choice of a global-memory architecture
in this work for modelling DES has been motivated by the following facts.
• The state-based extension of the DFRP model is intended to capture real-life discrete
event systems, and not just programming languages. While modelling programming
languages the only purpose is to capture a logical class of computations, and even
there several hardware and software systems support the concept of
shared memory. On the other hand, while modelling real-life dynamic systems,
a good model should reflect the structure of the dynamics as it exists in reality.
The shared nature of physical variables is often an intrinsic feature of several systems.
For example, consider the behaviour of a set of generators in an interconnected
power grid. Through quantization of the state space, one may be able to describe
the behaviour in a DES framework, by modelling each generator as a process which
evolves through a sequence of events such as tripped-on-underfrequency, on-full-load,
on-no-load, tripped-on-overspeed, etc. In such a system, note that the frequency
is a true shared variable in the sense that it is global and all generators contribute
to its value. Modelling such a system with message passing would imply either a frequency
variable local to each generator or a buffer process for the frequency, both of which
are quite artificial.
• A second problem arises if each shared variable is to be modelled by a buffer process.
If the number of shared variables is large, an equally large number of processes has
to be constructed. In any implementation this will lead to a significant storage
and communication overhead. It is just for this reason, notwithstanding the many
problems of shared memory, that many multiprocessing systems are implemented
with a shared-memory architecture, to save on memory and communication.


• Modelling with buffer processes becomes complex if an event, shared or otherwise,
modifies a set of shared variables. Since with buffer processes each individual shared
variable is modelled as a process with associated read and write, an event modifying
one or more shared variables must be artificially broken up into a number of events,
each modifying one shared variable. The problem aggravates with an increasing number
of variables and degree of sharing.
• A shared-memory architecture is often discouraged from the point of view of maintaining variable integrity (see [17]); this is also known as the mutual exclusion problem. But the
well-known solution of semaphores, which lock a shared resource during a critical
section of a program and which is inherent in a buffer process, can be implemented
in the proposed framework as well.
• Finally, it should be pointed out that a message-passing architecture with no shared
memory can always be considered a special case of the global-memory architecture.
In this work, the possibility of sharing variables between concurrently or sequentially
running processes has led to the development of a number of state-continuity constraints.
The concept of a silent transition has been introduced, which is quite useful to depict situations where the event dynamics within one process is likely to be affected by transformations
in shared variables caused by events occurring in the environment of the given process.
Finally, a finitely recursive characterisation of the extended processes has been obtained
using a collection of operators. It has been shown that arbitrary TTMs can be modelled
as extended processes in the proposed framework.

5.2 Extended Process Space

In order to model state variables explicitly in a trace-based formalism we shall use the idea
of an extended process space as originally envisaged in [22]. To begin with, we shall augment
the deterministic process space of Inan and Varaiya with an explicit state component. The
resultant Augmented Process Space will have many similarities to, as well as some important
differences from, the Deterministic Process of Inan–Varaiya with Memory [22].
Let V be a set of variables. For each v ∈ V, let type(v) denote the set from which v
can take values. The underlying state space Q is defined as
Q := ∏_{v∈V} type(v).


Also let Σ be a collection of event symbols. It includes a distinguished event symbol τ,
which is also known as a silent event. This event takes place in a process only in response
to (and in synchronization with) events taking place in the environment of a given
process. We defer a detailed discussion of the features and role of τ to the part on the
Parallel Composition Operator, later in this chapter.
Finally, let (W, M_D, ≤_D, /_D, {↓_D n}) be the deterministic embedding space as defined
in Definition 2.3.1. The augmented process space (D_A, ≤_D, {↓_D n}), which augments the
standard deterministic process space D, is a subset of the deterministic embedding space
(and hence uses the same partial order and projection operators). It is characterised by a
collection of marking axioms which also imply the satisfaction of the conditions for a marked
process space.
Definition 5.2.1 (Marking Axioms:) The marking axioms of the augmented process
space D_A are as given below.
(i) M_{D_A} := 2^Σ × {0, 1} × Q.
(ii) For any P = (tr P, π_P) ∈ D_A, tr P ∈ C(Σ*) and π_P(s) = (α_P(s), ω_P(s), σ_P(s)).
Here α_P and ω_P are the alphabet and the termination functions, the same as defined
for D, and σ_P : tr P → Q is the state-transition function, with σ_P(s) denoting the state
of the process after it has executed the event sequence s. Naturally, σ_P(<>) denotes the
initial state.
(iii) s^<σ> ∈ tr P ⇒ (s ∈ tr P) ∧ (σ ∈ α_P(s)).
(iv) (s^t ∈ tr P) ∧ (ω_P(s) = 1) ⇒ t = <>.
To capture the effect of the occurrence of an event on the global state space Q, we use
a partial function h_P : tr P × Q → Q. Naturally, σ_P(s^<σ>) = h_P(s, σ_P(s)). For a
single process the whole treatment of state trajectories can be made in terms of σ_P(·)
only, without introducing h_P(·,·) at all. However, cases arise where other processes also
independently affect the state along with P. In such a situation it becomes necessary
to use the state-transition function of P as well as those of the other agents. In such cases,
because of the shared-variable structure, building the overall state-transition function becomes
much easier if we use h_P(·,·), a function of individual components, instead of the σ_P(·)
function. This fact is explained while defining the parallel composition operator. It is for
the same reason that both arguments of h_P(·,·), namely s and q, are necessary.
In D_A, explicit state-dependent decisions (branching) cannot be modelled. Also, elements of D_A are defined to have unique initial states. This imposes constraints on
operator definitions and their applicability to real systems. To circumvent these problems
we use the concept of extended processes already introduced in Chapter 2.


Definition 5.2.2 Let
D_A^Q := {P̄ | P̄ : Q → D_A is any partial function}.
D(P̄) denotes the domain of the (extended) process P̄ and, for q ∈ Q, P̄(q) ∈ D_A is the
evaluation of the (extended) process P̄ at q.
In the same way as for the general extended embedding space and extended marked process
space (see Chapter 2), the post-process operator, partial order and projection operators can
be defined over D_A^Q, and (D_A^Q, ≤_D, {↓n}) can be associated with a marked process space.

5.3 Process Operators

In this section we define constant processes and a number of different operators on both
the augmented and extended process space. The operators can be used to build complex
models from simpler ones.
The constant processes of D_A are STOP_{A,q} and SKIP_{A,q}, for some A ⊆ Σ and q ∈ Q.
Definition 5.3.1 (Constant Processes:)
STOP_{A,q} := ({<>}, α_{STOP_{A,q}}(<>) = A, ω_{STOP_{A,q}}(<>) = 0, σ_{STOP_{A,q}}(<>) = q)
SKIP_{A,q} := ({<>}, α_{SKIP_{A,q}}(<>) = A, ω_{SKIP_{A,q}}(<>) = 1, σ_{SKIP_{A,q}}(<>) = q)
Naturally, their extended-process counterparts will be
STOP_A : Q → D_A with D(STOP_A) = Q and STOP_A(q) := STOP_{A,q};
SKIP_A : Q → D_A with D(SKIP_A) = Q and SKIP_A(q) := SKIP_{A,q}.
Note that any HALT_m, or P↓0 for some P ∈ D_A, is either STOP_{A,q} or SKIP_{A,q}
for some A ⊆ Σ, q ∈ Q. Similarly, P̄↓0 for some P̄ ∈ D_A^Q is either STOP_A or SKIP_A
for some A ⊆ Σ.
Next, following DFRP, we present a collection of operators and associated restrictions.
Definition 5.3.2 (Augmented Deterministic Choice Operator (ADCO):) Given
P₁, …, P_n ∈ D_A, distinct events σ₁, …, σ_n from some A ⊆ Σ, a collection of
associated state-transition functions hᵢ : Q → Q, 1 ≤ i ≤ n, and q ∈ Q, the ADCO is
defined as:
P = (σ₁ → P₁ | ⋯ | σ_n → P_n)_{A,ω,q}


If ω = 1,
P := SKIP_{A,q}.
If ω = 0 then
tr P := {<>} ∪ {<σᵢ>^s | s ∈ tr Pᵢ, 1 ≤ i ≤ n};
α_P(<>) := A, α_P(<σᵢ>^s) := α_{Pᵢ}(s);
ω_P(<>) := 0, ω_P(<σᵢ>^s) := ω_{Pᵢ}(s);
σ_P(<>) := q, σ_P(<σᵢ>^s) := σ_{Pᵢ}(s).
The continuity of state is achieved under the following constraints. (i) For the given
q ∈ Q, hᵢ(q) must be defined for all i. (ii) The initial state of each Pᵢ must satisfy the
condition that σ_{Pᵢ}(<>) = hᵢ(q). Note that the update associated with σᵢ as the first
event of P is hᵢ(·); but if σᵢ occurs in P after some <σⱼ>^s, then the corresponding update
is determined by Pⱼ, as h_P(<σⱼ>^s, ·) = h_{Pⱼ}(s, ·).
ADCO describes the many courses of events that are possible at the beginning of a
process. It is continuous and constructive. Using ADCO, we define its extended version.
Definition 5.3.3 (Extended Deterministic Choice Operator (EDCO):) Given extended processes P̄₁, …, P̄_n, distinct events σ₁, …, σ_n, a collection of associated state-transition functions hᵢ : Q → Q, 1 ≤ i ≤ n, and a partial function v : Q → 2^Σ × {0, 1} × Q,
the EDCO P̄ = (σ₁ → P̄₁ | ⋯ | σ_n → P̄_n)_v is defined as
D(P̄) := {q ∈ Q | v(q) = (A_q, ω_q, q′_q) implies that ω_q = 0 and there exists a nonempty
{σ_{k₁}, …, σ_{k_m}} ⊆ A_q such that for all l, 1 ≤ l ≤ m, q ∈ D(P̄_{k_l}), h_{k_l}(q′_q) is defined and
σ_{P̄_{k_l}(q)}(<>) = h_{k_l}(q′_q)}. For q ∈ D(P̄),
P̄(q) := (σ_{k₁} → P̄_{k₁}(q) | ⋯ | σ_{k_m} → P̄_{k_m}(q))_{(A_q, 0, q′_q)}
where {k₁, …, k_m} = {j | q ∈ D(P̄_j), h_j(q′_q) is defined and σ_{P̄_j(q)}(<>) = h_j(q′_q)}.
We denote v as (A, 0) if for all q ∈ D(P̄), v(q) = (A, 0, q). Also, if for all q ∈ D(P̄),
v(q) = ({σ₁, …, σ_n}, 0, q) and {k₁, …, k_m} = {1, …, n}, we can simply drop the initial
marking symbol v. This operator is also continuous and constructive.
According to the definition of the choice operator on extended processes (see Definition
2.1.10), the extended processes P̄₁, …, P̄_n are to be evaluated at the initial mark. Since we
have retained the same spirit here in Definition 5.3.3, for meaningful operation it becomes
necessary that the state-continuity constraint σ_{P̄_j(q)}(<>) = h_j(q′_q) be satisfied.
The following Assignment Operator (AO), also originally defined in [22], is widely used
for satisfying this state-continuity constraint in EDCO.


Definition 5.3.4 (Assignment Operator (AO):) Given a partial function h : Q → Q
with domain D(h) and an extended process P̄, the AO, denoted as P̄[h], is defined as:
D(P̄[h]) := {q ∈ D(h) | h(q) ∈ D(P̄)}. Also P̄[h](q) := P̄(h(q)).
For example, if hᵢ is the state-modification function associated with event σᵢ, then
P̄ = (σ₁ → P̄₁[h₁] | ⋯ | σ_n → P̄_n[h_n])
automatically satisfies the state-continuity constraints of EDCO.
AO is continuous and ndes.
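How AO discharges the constraint can be sketched in a toy model (our own encoding; an extended process is just a function of the state q, and only the initial-state component of its evaluation is modelled):

```python
def AO(P, h):
    """Assignment operator P[h]: evaluate P at the modified state h(q)."""
    return lambda q: P(h(q))

# toy extended process: its evaluation at q is an augmented process whose
# initial-state component sigma(<>) equals q itself
P = lambda q: {"initial_state": q}

h = lambda q: q + 1        # state-transition function tied to some event
P_h = AO(P, h)

# state continuity: the process selected after the event starts at h(q)
assert P_h(4)["initial_state"] == h(4) == 5
```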
Definition 5.3.5 (Augmented Local Change Operator (ALCO):) Given a process
P and collections of events B and C, the ALCO P[B+C] is defined as follows:
tr P[B+C] := {s ∈ tr P | the first event of s is not in B}
α_{P[B+C]}(<>) := (α_P(<>) − B) ∪ C
α_{P[B+C]}(s) := α_P(s), for s ≠ <>
ω_{P[B+C]}(s) := ω_P(s)
σ_{P[B+C]}(s) := σ_P(s)
Definition 5.3.6 (Extended Local Change Operator (ELCO):) Given an extended
process P̄ and collections of events B and C, the ELCO P̄[B+C] is defined as:
D(P̄[B+C]) := D(P̄). For q ∈ D(P̄[B+C]), P̄[B+C](q) := (P̄(q))[B+C].
Definition 5.3.7 (Augmented Global Change Operator (AGCO):) Given a process
P and collections of events B and C, the AGCO P[[B+C]] is defined as follows:
tr P[[B+C]] := {s ∈ tr P | no event of B occurs in s}
α_{P[[B+C]]}(s) := (α_P(s) − B) ∪ C
ω_{P[[B+C]]}(s) := ω_P(s)
σ_{P[[B+C]]}(s) := σ_P(s)
Definition 5.3.8 (Extended Global Change Operator (EGCO):) Given an extended
process P̄ and collections of events B and C, the EGCO P̄[[B+C]] is defined as:
D(P̄[[B+C]]) := D(P̄). For q ∈ D(P̄[[B+C]]), P̄[[B+C]](q) := (P̄(q))[[B+C]].
Both local and global change operators are continuous and ndes.
Definition 5.3.9 (Augmented Sequential Composition Operator (ASCO):) Given
P₁ and P₂, P₁; P₂ is defined as follows:
tr(P₁; P₂) := A₁ ∪ A₂ where A₁, A₂ are defined as


A₁ := {s ∈ tr P₁ | (ω_{P₁}(s) = 0) ∨ (ω_{P₁}(s) = 1 ∧ σ_{P₁}(s) ≠ σ_{P₂}(<>))};
A₂ := {s = r_s^t_s | (r_s ∈ tr P₁) ∧ (ω_{P₁}(r_s) = 1) ∧ (σ_{P₁}(r_s) = σ_{P₂}(<>)) ∧ (t_s ∈ tr P₂)}.
Note that A₁ ∩ A₂ = ∅ and, for s ∈ A₂, the components r_s and t_s are unique.
α_{P₁;P₂}(s) := α_{P₁}(s) if s ∈ A₁; α_{P₂}(t_s) where s = r_s^t_s ∈ A₂.
ω_{P₁;P₂}(s) := 0 if s ∈ A₁; ω_{P₂}(t_s) if s = r_s^t_s ∈ A₂.
σ_{P₁;P₂}(s) := σ_{P₁}(s) if s ∈ A₁; σ_{P₂}(t_s) if s = r_s^t_s ∈ A₂.

Note that ASCO is quite restrictive in the sense that, even if P₁ terminates successfully,
if there is a mismatch between the terminating state of P₁ and the initial state of P₂, the overall
sequential composition deadlocks. The extended version can remove this kind of restriction.
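The trace partition of Definition 5.3.9, including the deadlock on a state mismatch, can be sketched on toy finite processes (our own encoding: a process is a dict mapping each trace, a tuple of events, to its (ω, σ) marking components):

```python
def asco_traces(P1, P2):
    """tr(P1;P2) = A1 ∪ A2, following Definition 5.3.9."""
    init2 = P2[()][1]                        # sigma_{P2}(<>)
    A1 = {s for s, (w, st) in P1.items()
          if w == 0 or (w == 1 and st != init2)}
    A2 = {r + t for r, (w, st) in P1.items()
          if w == 1 and st == init2 for t in P2}
    return A1 | A2

# P1 does event 'a' and terminates in state 1; P2 does 'b' from initial state 1
P1 = {(): (0, 0), ('a',): (1, 1)}
P2 = {(): (0, 1), ('b',): (1, 2)}
assert asco_traces(P1, P2) == {(), ('a',), ('a', 'b')}

# state mismatch: P2 now expects initial state 5, so control never passes
P2_bad = {(): (0, 5), ('b',): (1, 2)}
assert asco_traces(P1, P2_bad) == {(), ('a',)}
```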
Definition 5.3.10 (Extended Sequential Composition Operator (ESCO):) Given
extended processes P̄₁ and P̄₂, P̄₁; P̄₂ is defined as follows.
D(P̄₁; P̄₂) = D(P̄₁). For q ∈ D(P̄₁; P̄₂), (P̄₁; P̄₂)(q) is defined as
tr(P̄₁; P̄₂)(q) := A₁q ∪ A₂q where A₁q, A₂q are defined as
A₁q := {s ∈ tr P̄₁(q) | (ω_{P̄₁(q)}(s) = 0) ∨ (ω_{P̄₁(q)}(s) = 1 ∧ ((q′ = σ_{P̄₁(q)}(s) ∉ D(P̄₂))
∨ (q′ ≠ σ_{P̄₂(q′)}(<>))))};
A₂q := {s = r_s^t_s | (r_s ∈ tr P̄₁(q)) ∧ (ω_{P̄₁(q)}(r_s) = 1) ∧ (q′ = σ_{P̄₁(q)}(r_s) ∈ D(P̄₂))
∧ (q′ = σ_{P̄₂(q′)}(<>)) ∧ (t_s ∈ tr P̄₂(q′))}.
Here also A₁q ∩ A₂q = ∅ and, for s ∈ A₂q, the components r_s and t_s are unique.
α_{(P̄₁;P̄₂)(q)}(s) := α_{P̄₁(q)}(s) if s ∈ A₁q; α_{P̄₂(q′)}(t_s) where s = r_s^t_s ∈ A₂q.
ω_{(P̄₁;P̄₂)(q)}(s) := 0 if s ∈ A₁q; ω_{P̄₂(q′)}(t_s) if s = r_s^t_s ∈ A₂q.
σ_{(P̄₁;P̄₂)(q)}(s) := σ_{P̄₁(q)}(s) if s ∈ A₁q; σ_{P̄₂(q′)}(t_s) if s = r_s^t_s ∈ A₂q.

Note that the state-matching requirement for augmented processes is relaxed to some
extent by allowing the extended processes to take the variable state q′ as the initial state.
It is in the definition of the Augmented Parallel Composition Operator (APCO) that
augmented processes differ most from the deterministic processes, both without and with
memory. The difference emerges due to the introduction of the silent event τ. The APCO
captures the concurrent evolution of two or more processes. Synchronisation of certain
tasks is brought about by naming the two tasks in the two processes identically. Such
synchronous events appear only once in the trace. At times, when one process generates
an event σ, the other process may execute τ synchronously, irrespective of the event in
the environment, but only σ appears in the trace. This is explained in the following
formal definition.
Definition 5.3.11 (Augmented Parallel Composition Operator (APCO):) To define the APCO we first define the projection of a string s on a process P (denoted
by s↾P) as follows.
<>↾P := <>.
For σ ≠ τ,
s^<σ>↾P := undefined if s↾P ∉ tr P;
(s↾P)^<σ> if σ ∈ α_P(s↾P);
(s↾P)^<τ> if (σ ∉ α_P(s↾P)) ∧ (τ ∈ α_P(s↾P));
(s↾P) if σ, τ ∉ α_P(s↾P).

Given P₁ and P₂, P₁ ‖ P₂ is defined as follows:
<> ∈ tr(P₁ ‖ P₂).
For (s ∈ tr(P₁ ‖ P₂)) ∧ (q = σ_{P₁‖P₂}(s)), s^<σ> ∈ tr(P₁ ‖ P₂) iff the following hold.
alphabet condition: σ ∈ α_{P₁}(s↾P₁) ∪ α_{P₂}(s↾P₂).
trace condition: (s^<σ>)↾Pᵢ ∈ tr Pᵢ, i = 1, 2.
state condition(s): For σ ≠ τ,
i) if (s^<σ>↾Pᵢ, s^<σ>↾Pⱼ) = ((s↾Pᵢ)^<σ>, (s↾Pⱼ)^<σ>), then for i, j = 1, 2,
i ≠ j, h_{Pᵢ}(s↾Pᵢ, h_{Pⱼ}(s↾Pⱼ, q)) and h_{Pⱼ}(s↾Pⱼ, h_{Pᵢ}(s↾Pᵢ, q)) are both defined and equal;
ii) same as i) with the second σ replaced by τ;
iii) same as i) with the first σ replaced by τ;
iv) for σ including τ, if (s^<σ>↾Pᵢ, s^<σ>↾Pⱼ) = ((s↾Pᵢ)^<σ>, s↾Pⱼ), then
h_{Pᵢ}(s↾Pᵢ, q) is defined, for i, j = 1, 2, i ≠ j.
Also α_{P₁‖P₂}(s) := α_{P₁}(s↾P₁) ∪ α_{P₂}(s↾P₂), and
ω_{P₁‖P₂}(s) := 1 if (ω_{P₁}(s↾P₁) = 1 ∧ ω_{P₂}(s↾P₂) = 1)
∨ (ω_{P₁}(s↾P₁) = 1 ∧ α_{P₂}(s↾P₂) ⊆ α_{P₁}(s↾P₁))
∨ (ω_{P₂}(s↾P₂) = 1 ∧ α_{P₁}(s↾P₁) ⊆ α_{P₂}(s↾P₂));
0 otherwise.


Finally, σ_{P₁‖P₂}(<>) := σ_{P₁}(<>) = σ_{P₂}(<>) (see the constraints of APCO below). For σ,
including τ, if q = σ_{P₁‖P₂}(s) and s^<σ> ∈ tr(P₁ ‖ P₂), then for i, j = 1, 2, i ≠ j,
σ_{P₁‖P₂}(s^<σ>) :=
h_{Pᵢ}(s↾Pᵢ, q) if (s^<σ>↾Pᵢ, s^<σ>↾Pⱼ) = ((s↾Pᵢ)^<σ>, s↾Pⱼ);
h_{Pᵢ}(s↾Pᵢ, h_{Pⱼ}(s↾Pⱼ, q)) if (s^<σ>↾Pᵢ, s^<σ>↾Pⱼ) = ((s↾Pᵢ)^<σ>, (s↾Pⱼ)^<σ>);
h_{Pᵢ}(s↾Pᵢ, h_{Pⱼ}(s↾Pⱼ, q)) if (s^<σ>↾Pᵢ, s^<σ>↾Pⱼ) = ((s↾Pᵢ)^<σ>, (s↾Pⱼ)^<τ>).

Thus an event σ ≠ τ can take place in a component process provided it is not blocked
by the environment. Events common to the alphabets of the component processes must
occur synchronously. If an event σ is possible in P₁ and is not blocked by P₂, and at
the same time the silent transition τ is possible in P₂, then σ in P₁ and τ in P₂ will
take place synchronously and only σ will appear in the trace of P₁ ‖ P₂. Note that this occurs
even though τ is in the alphabet (and trace) of P₂. Essentially, τ has been introduced to
model the fact that at certain points in the dynamics of processes, some change in the
state variables, caused by events occurring in other external processes operating in parallel, may
spontaneously cause a change in the event dynamics. By spontaneous we imply one without
any deliberate action by the process itself or external manifestation. A typical analogy is
that of a hardware interrupt, where an event (interrupt) by another processor can simply
cause a change in the event dynamics (a jump to the interrupt service subroutine) without
any external manifestation. Note that, just as an interrupt can be disabled, a similar effect
can be achieved by not including τ in the alphabet. Further use of τ will be found in
the context of an extended process in a later part. Though, according to the definition,
one may not always be able to block independent occurrences of τ in traces of P₁ ‖ P₂,
meaningfully τ should appear in traces of P₁ ‖ P₂ only when the latter is acting as a component
process in (P₁ ‖ P₂) ‖ P₃ for some P₃ and τ is induced in P₁ ‖ P₂ by some event in P₃. This
meaningful behaviour can be achieved by applying the AGCO, with B = {τ} and C = ∅, to the process P̄ describing
the physical behaviour of the overall system.
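The projection s↾P of Definition 5.3.11 can be sketched as follows (our own encoding; the process is abstracted to a function returning its current alphabet after a projected trace, the undefined case is omitted, and TAU is a stand-in for the silent event):

```python
TAU = "tau"

def project(s, alpha):
    """Compute s|P: alpha(t) gives the alphabet of P after projected trace t."""
    t = []
    for e in s:
        A = alpha(t)
        if e in A:              # e belongs to P, so P takes part in it
            t.append(e)
        elif TAU in A:          # e is external but tau is enabled:
            t.append(TAU)       # P moves silently, in synchrony with e
        # otherwise e is invisible to P and the projection is unchanged
    return t

# a process that always offers {a, tau}: external events show up as tau
assert project(["a", "c", "a"], lambda t: {"a", TAU}) == ["a", TAU, "a"]

# with tau not in the alphabet (the "interrupt disabled" case),
# external events are simply dropped from the projection
assert project(["a", "c", "a"], lambda t: {"a"}) == ["a", "a"]
```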
The constraints that must be satisfied by APCO are as follows.
i) σ_{P₁}(<>) = σ_{P₂}(<>). This is because P₁ and P₂ should have the same initial global state
for a well-defined P₁ ‖ P₂.
ii) All the function compositions used in finding σ_{P₁‖P₂}(s) must be commutative. We
emphasize here that this requirement is both natural and reasonable from the point of
view of deterministic modelling of real-world DES. Recall that either events are physically
synchronous, or they are modelled as such because their exact order of occurrence is deemed
to be unimportant for the particular application of the model. In the former case commutativity is physical, while in the latter the events are modelled as synchronous only because
commutativity is satisfied. Note also that we do not require that the sets of variables accessed and
manipulated by the components of APCO be disjoint, as in [22], where the commutativity requirement is thereby obviated. In case neither of the situations holds, loss of commutativity
results in nondeterminism in the state trajectory. Since in this work we concern ourselves
with a deterministic framework, the commutativity of the maps has been assumed, to remove
the possible nondeterminism when shared events modify common variables.
iii) For any component process Pᵢ, τ ∈ α_{Pᵢ}(s↾Pᵢ) ⇒ (s↾Pᵢ)^<τ> ∈ tr Pᵢ. To see the
importance of this constraint, assume that it is violated in P₁ ‖ P₂ for some s, with
s↾Pᵢ ∈ tr Pᵢ, i = 1, 2. Then, for an event σ ∈ α_{P₁}(s↾P₁) with σ ∉ α_{P₂}(s↾P₂), the fact that
(s↾P₂)^<τ> ∉ tr P₂ implies s^<σ> ∉ tr(P₁ ‖ P₂). This is physically absurd.
It is interesting to observe that, as far as the state dynamics is concerned, (P ‖ P) ≠ P.
This equality holds in the case of CSP and DFRP, where the concept of state is absent.
For augmented processes, too, tr(P ‖ P) = tr P. However, even with a simple state-transition
function such as h_P(·, q) = q + 1 with Q = Z, the set of integers, h_P(s↾P, h_P(s↾P, q))
≠ h_P(s↾P, q).
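A minimal numeric sketch of this observation, using the increment example above:

```python
Q0 = 0
h = lambda q: q + 1      # h_P(., q) = q + 1 on Q = Z

# P alone: one occurrence of the event updates the state once
q_single = h(Q0)

# P || P: the event is synchronous in both copies, so the two updates are
# composed on the common state (second case of the trajectory definition)
q_parallel = h(h(Q0))

assert q_single == 1 and q_parallel == 2 and q_single != q_parallel
```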
Definition 5.3.12 (Extended Parallel Composition Operator (EPCO):) Given extended processes P̄₁ and P̄₂, P̄₁ ‖ P̄₂ is defined as below:
D(P̄₁ ‖ P̄₂) := {q ∈ D(P̄₁) ∩ D(P̄₂) | P̄₁(q) ‖ P̄₂(q) is defined}. Also (P̄₁ ‖ P̄₂)(q) :=
P̄₁(q) ‖ P̄₂(q).
ASCO, ESCO, APCO and EPCO are all continuous and ndes.
Definition 5.3.13 (Augmented State Modification Operator (ASMO):) Given
some process P ∈ D_A and a function r : Q → Q, with domain D(r), such that σ_P(<>) ∈ D(r),
the ASMO, denoted by P{r}, modifies the initial state and consequently the state trajectory
of a process. Formally,
<> ∈ tr P{r} and σ_{P{r}}(<>) := r(σ_P(<>)).
If s ∈ tr P{r} then s^<σ> ∈ tr P{r} iff (s^<σ> ∈ tr P) ∧ (h_P(s, σ_{P{r}}(s)) is defined).
σ_{P{r}}(s^<σ>) := h_P(s, σ_{P{r}}(s)).
For s ∈ tr P{r}, α_{P{r}}(s) := α_P(s) and ω_{P{r}}(s) := ω_P(s).
Definition 5.3.14 (Extended State Modification Operator (ESMO):) Given some
extended process P̄ ∈ D_A^Q and a function r : Q → Q with domain D(r), the ESMO, denoted
by P̄{r}, is defined as:
D(P̄{r}) := {q ∈ D(P̄) | (P̄(q)){r} is defined}. Also (P̄{r})(q) := (P̄(q)){r}.


The difference between ESMO and AO should be noted carefully. In the former,
the initial state of the augmented process P̄(q), namely σ_{P̄(q)}(<>), gets modified to
r(σ_{P̄(q)}(<>)) and the subsequent state trajectory is also modified. In the latter, the argument of the extended process P̄ itself is changed from q to h(q), and a different augmented
process P̄(h(q)) is selected.
The logical branching operator is another extended operator; it was originally introduced in [22], and it captures the standard if-then-else construct of programming languages.
Definition 5.3.15 (Logical Branching Operator (LBO):) Given a boolean condition
b : Q → {True, False} and extended processes P̄_T, P̄_F, the LBO P̄ = (P̄_T ◁ b ▷ P̄_F) :
Q → D_A is such that
P̄(q) := P̄_T(q) if b(q) = True; P̄_F(q) if b(q) = False.
Also
D(P̄) := {q ∈ D(b) | (b(q) = True ⇒ q ∈ D(P̄_T)) ∧ (b(q) = False ⇒ q ∈ D(P̄_F))}.
The state modification operators and the LBO are continuous and ndes.
The following property is satisfied by LBO:
(P̄_T ◁ b ▷ P̄_F) ‖ R̄ = ((P̄_T ‖ R̄) ◁ b ▷ (P̄_F ‖ R̄)).
That is, within a parallel composition, if one component faces a logical branching, the
branching takes place instantaneously. This may, however, pose modelling difficulties, as
evidenced in the following example.
Example 5.3.1 Let
P := ((b → P_1[∅])^{{b},0} ◁ x = 0 ▷ (a → P_2[∅])^{{a},0})
R := (c → P_3[(x = 1) → x := 0, (x = 0) → x := 1])^{{c},0}
Consistent with the definition of P, let it be desired that b should take place in P only from the state x = 0. Consider the parallel composition of P and R at the state q_0 := (x = 0). We have
(P ‖ R)(q_0) := ((b → P_1[∅])^{{b},0}(q_0)) ‖ (R(q_0)).
In this process, suppose the first event is c and it sets x := 1. At this state the event b is possible in the concurrent process even with the state x = 1. From the way P is defined, this is not desired. To prevent this kind of situation without using Hoare's restriction of disjoint variable sets (see [17]), the silent event ε can be helpful. What we want is that, since P shares variables with the environment and P faces a logical decision based on these global variables, actions in the environment should not go unnoticed by P. It should be able to reconsider its decision whenever there is a change of global state due to actions in the environment. Clearly, if P is modified to P′ with
P′ := ((b → P_1[∅] | ε → P′[∅])^{{b,ε},0} ◁ x = 0 ▷ (a → P_2[∅] | ε → P′[∅])^{{a,ε},0})
then (P′ ‖ R)^{[[−{ε}]]}(q_0) will not have such a problem. It however remains the designer's responsibility to provide such a solution when necessary.

[Figure 5.1: Schematic Diagram of the batch plant in example 5.3.2]
Example 5.3.2 This example shows the modelling of a batch process using the process operators mentioned above.
We consider a multipurpose batch plant which consists of N product streams of equal volume that have to be processed in L distinct batch units. The processing route of each product is fixed and known, and the processing times of the product streams in each unit are constant. The system works in a cycle that consists of a number of predetermined tasks. The time between two consecutive cycles is the cycle time of the system. In Fig. 5.1 a schematic diagram of a multipurpose batch plant with 3 products and 3 units is presented. The example is taken from [69]. Using the methods of [9], the processing sequence of each unit is found so as to minimise the cycle time under the constraint of fixed processing routes of the individual products. For the example system, the production sequences in the units R_1, R_2 and R_3 are respectively θ_1 = (A, B), θ_2 = (A, B, C) and θ_3 = (C, A).
We shall now see how the production sequences, along with the timing information, can be modelled in our EFRP framework. In general, let there be L units, R_1, …, R_L, and N product streams, M_1, …, M_N. The processing of the product M_m in unit R_r is marked by two events, namely s^m_r (start of processing of M_m in R_r) and e^m_r (end of processing of M_m in R_r). The time gap between these two events is fixed and equal to t(m, r) time units (assumed to be discrete). For the m-th product the predetermined resource utilisation sequence π_m is expressed as the event sequence
π_m = < s^m_{r_m1}, e^m_{r_m1}, …, s^m_{r_mk_m}, e^m_{r_mk_m} >.
On the other hand, using the methods of [9], the product sequence in the individual units is also found. For R_r, this sequence is expressed as
θ_r = < s^{m_r1}_r, e^{m_r1}_r, …, s^{m_rk_r}_r, e^{m_rk_r}_r >.
The plant is modelled as an extended process, namely Batch_process, which is the parallel operation of the unit processes R_r, 1 ≤ r ≤ L, along with the supervisor (constraint π_m, 1 ≤ m ≤ N) or specification processes P_m, 1 ≤ m ≤ N. The state space is formed with N boolean variables v_m, 1 ≤ m ≤ N, each representing whether M_m is being processed by some unit or not, and L counters c_r, 1 ≤ r ≤ L (type(c_r) = ℕ), each capturing the timing constraints t(m, r) for different m and r. Finally,
Batch_process := ((R_1 ‖ ⋯ ‖ R_L) ‖ (P_1 ‖ ⋯ ‖ P_N))^{[[−{ε}]]}
The process P_m is determined by the sequence π_m and is given as
P_m = (s^m_{r_m1} → (e^m_{r_m1} → ⋯ (s^m_{r_mk_m} → (e^m_{r_mk_m} → P_m[∅])) ⋯ ))^{[[+A_m]]}
where A_m is the collection of events present in π_m. The process P_m does not modify any variables. It just ensures that the product sequence π_m is maintained.
The process R_r is determined by the sequence θ_r and is given as
R_r = (U^{m_r1}_r ; ⋯ ; U^{m_rk_r}_r)^{[[+(A_r ∪ {tick})]]} ; R_r
Here A_r is the collection of events present in θ_r. R_r is the extended process that simulates the timed behaviour of the unit R_r. Also, for j ∈ {m_r1, …, m_rk_r}, U^j_r is given below.
U^j_r = (W^j_r ◁ (v_1 = 1) ∨ ⋯ ∨ (v_N = 1) ▷ V^j_r)
V^j_r = (ε → U^j_r[∅] | s^j_r → X^j_r[c_r := 0; v_j := 1])
W^j_r = (ε → U^j_r[∅] | s^j_r → X^j_r[c_r := 0; v_j := 1] | tick → U^j_r[∅])
If all the units and products are idle (v_1 = ⋯ = v_N = 0), then in each U^j_r the process V^j_r is activated. In this process, either unit r starts working on product j, or some other unit in its environment starts working on some product. The event tick is not allowed in V^j_r, to represent the physical fact that no time is wasted while all the units are idle. In other words, the counting of time starts only after some unit starts working. However, if some unit is busy, the event tick is allowed to take place simultaneously in all the R_r, as captured by the process W^j_r. The effect of the event s^j_r is expressed with the help of the assignment operator. Finally,
X^j_r = ((e^j_r → SKIP_∅[v_j := 0]) ◁ c_r = t(j, r) ▷ (tick → X^j_r[c_r := c_r + 1]))
In X^j_r, the product stays for t(j, r) time units before the r-th unit finishes its work and sets v_j := 0. The global change operator ensures that the events shared by any P_m and R_r are always synchronized, and that tick is globally synchronized among all the R_r's.
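The interplay of s^j_r, tick and e^j_r can be illustrated by replaying a trace against the counter c_r. This is a hedged Python sketch of the timing discipline only, not the formal EFRP semantics; `run_unit` and the event names 's', 'tick', 'e' are our own stand-ins for s^j_r, tick and e^j_r.

```python
# Sketch of the timing discipline of X_jr: after s (start) resets the counter
# c_r to 0, each global 'tick' increments c_r until it reaches the fixed
# processing time t(j, r), at which point only e (end of processing) may occur.

def run_unit(t_jr, events):
    """Replay a trace for one (unit, product) pair; return the counter history.
    events is a list drawn from {'s', 'tick', 'e'}."""
    c, busy, history = 0, False, []
    for ev in events:
        if ev == 's':
            c, busy = 0, True                 # s: start of processing, reset counter
        elif ev == 'tick':
            assert busy and c < t_jr, "tick not allowed at the upper bound"
            c += 1                            # time advances while the unit is busy
        elif ev == 'e':
            assert busy and c == t_jr, "e allowed only exactly at t(j, r)"
            busy = False                      # end of processing
        history.append(c)
    return history

trace = run_unit(2, ['s', 'tick', 'tick', 'e'])
```

For t(j, r) = 2 the trace ⟨s, tick, tick, e⟩ is accepted, and the counter history is [0, 1, 2, 2].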

5.4 Recursive Characterisation
As in the case of DFRP, we present a recursive characterisation of extended processes using the operators defined above. Some minor modifications have been made to incorporate the silent transition. Also, to keep the discussion simple, we have not discussed the syntax of the proposed model.
Let G_E be the family of operators on extended processes, consisting of ESCO, EPCO, ESMO, ELCO, EGCO, AO and LBO.
Definition 5.4.1 (Semantics ⟨Φⁿ(G_E, D_A^Q)⟩): It is the collection of functions from (D_A^Q)ⁿ to D_A^Q, defined recursively as follows. Let P = (P_1, …, P_n) ∈ (D_A^Q)ⁿ. Then
(i) ⟨STOP_A⟩, ⟨SKIP_A⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨STOP_A⟩(P) := STOP_A and ⟨SKIP_A⟩(P) := SKIP_A; A is any subset of Σ.
(ii) ⟨Proj_i⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨Proj_i⟩(P) := P_i.
(iii) If ⟨f⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ then, for B, C ⊆ Σ, ⟨f^{[B+C]}⟩, ⟨f^{[[B+C]]}⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨f^{[B+C]}⟩(P) := (⟨f⟩(P))^{[B+C]} and similarly for the global change operator.
(iv) If ⟨f⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ then, for any partial function h : Q → Q, ⟨f{h}⟩, ⟨f[h]⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨f{h}⟩(P) := (⟨f⟩(P)){h} and similarly for the assignment operator.
(v) If ⟨f_1⟩, ⟨f_2⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ then ⟨f_1 ‖ f_2⟩, ⟨f_1; f_2⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨f_1 ‖ f_2⟩(P) := ⟨f_1⟩(P) ‖ ⟨f_2⟩(P) and similarly for the sequential operator.
(vi) If ⟨f_1⟩, ⟨f_2⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ then ⟨f_1 ◁ b ▷ f_2⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩, where ⟨f_1 ◁ b ▷ f_2⟩(P) := (⟨f_1⟩(P)) ◁ b ▷ (⟨f_2⟩(P)).
Definition 5.4.2 (Extended Mutually Recursive Processes (EMRP)): A finite collection of processes {(P_1, …, P_n)} is called a family of Extended Mutually Recursive Processes (EMRP) with respect to ⟨Φⁿ(G_E, D_A^Q)⟩ if, ∀j and ∀s ∈ tr P_j, we have P_j/s = ⟨f⟩(P_1, …, P_n) for some ⟨f⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩.
Theorem 5.4.1 {(P_1, …, P_n)} is a family of EMRP with respect to ⟨Φⁿ(G_E, D_A^Q)⟩ iff P = (P_1, …, P_n) ∈ (D_A^Q)ⁿ is the unique solution of the recursive equation, with consistent initial condition,
(X_1, …, X_n) = X = F(X),  X↾D_0 := P↾D_0      (5.1)
where F : (D_A^Q)ⁿ → (D_A^Q)ⁿ and each component F_i of F is guarded, that is,
F_i(X) = (σ_{i1} → ⟨f_{i1}⟩(X) | ⋯ | σ_{iK_i} → ⟨f_{iK_i}⟩(X))^{m_i}      (5.2)
with each ⟨f_{ij}⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ and m_i a suitable partial function from Q to 2^Σ × {0, 1} × Q.
Proof: We first prove that, for any ⟨f⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩ and (P_1, …, P_n) ∈ (D_A^Q)ⁿ, if <σ> ∈ tr Y where Y = ⟨f⟩(P_1, …, P_n), then one can write
Y/<σ> = ⟨f′⟩(P_1, …, P_n, P′_{i_1}, …, P′_{i_k})
where ⟨f′⟩ ∈ ⟨Φ^{n+k}(G_E, D_A^Q)⟩ and each P′_{i_j}, 1 ≤ j ≤ k, is either some P_i/<σ> or some P_i/<ε>, for some i such that the corresponding post-process is well defined.
To prove the above claim we use induction on the number of steps needed to construct ⟨f⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩. The result is trivially true when ⟨f⟩ is ⟨STOP_A⟩, ⟨SKIP_A⟩ or some ⟨Proj_i⟩. Assuming the result to be true for some ⟨f⟩ (or ⟨f_1⟩ and ⟨f_2⟩), we have to prove that it holds for the different possible constructions, using ⟨f⟩ (or ⟨f_1⟩ and ⟨f_2⟩), that build up ⟨Φⁿ(G_E, D_A^Q)⟩. This is shown below. Assume
(⟨f⟩(P))/<σ> = ⟨f′⟩(P, P′), where P = (P_1, …, P_n) and P′ = (P′_{i_1}, …, P′_{i_k}).
Now consider the following possibilities.
ELCO: (⟨f^{[B+C]}⟩(P))/<σ> = (⟨f⟩(P))/<σ> = ⟨f′⟩(P, P′).
EGCO: (⟨f^{[[B+C]]}⟩(P))/<σ> = ((⟨f⟩(P))/<σ>)^{[[B+C]]} = ⟨f′^{[[B+C]]}⟩(P, P′).
AO: (⟨f[h]⟩(P))/<σ> = ((⟨f⟩(P))/<σ>)[h] = (⟨f′⟩(P, P′))[h] = ⟨f′[h]⟩(P, P′).
In the following cases the choice of ⟨f′⟩ will depend on the argument of the extended process.
ESMO: For any q ∈ D((⟨f{h}⟩(P))/<σ>),
((⟨f{h}⟩(P))/<σ>)(q) = ((⟨f{h}⟩(P))(q))/<σ> = (((⟨f⟩(P)){h})(q))/<σ> = (((⟨f⟩(P))(q)){h})/<σ>
= (((⟨f⟩(P))(q))/<σ>){h_q} = (((⟨f⟩(P))/<σ>)(q)){h_q} = ((⟨f′⟩(P, P′))(q)){h_q} = (⟨f′{h_q}⟩(P, P′))(q)
where h_q : Q → Q is such that ∀q′ ∈ Q, h_q(q′) := σ_{((⟨f⟩(P))(q)){h}}(<σ>).
LBO: ((⟨f_1 ◁ b ▷ f_2⟩(P))/<σ>)(q) :=
((⟨f_1⟩(P))/<σ>)(q)  if b(q) = T;
((⟨f_2⟩(P))/<σ>)(q)  if b(q) = F,
:= (⟨f′_1⟩(P, P′))(q)  if b(q) = T;
(⟨f′_2⟩(P, P′))(q)  if b(q) = F.
ESCO: ((⟨f_1; f_2⟩(P))/<σ>)(q) :=
((⟨f_1⟩(P))/<σ>; (⟨f_2⟩(P)))(q)  if <σ> ∈ tr (⟨f_1⟩(P))(q);
((⟨f_2⟩(P))/<σ>)(q′)  if ((⟨f_1⟩(P))(q) = SKIP_{A,q′}) ∧ <σ> ∈ tr (⟨f_2⟩(P))(q′),
which is also equal to
(⟨f′_1; f_2⟩(P, P′))(q)  if <σ> ∈ tr (⟨f_1⟩(P))(q);
(⟨f′_2[h]⟩(P, P′))(q)  if ((⟨f_1⟩(P))(q) = SKIP_{A,q′}) ∧ <σ> ∈ tr (⟨f_2⟩(P))(q′).
Note that, in the first case, ⟨f_2⟩(P) is trivially rewritten as ⟨f_2⟩(P, P′). But in the second case ((⟨f_2⟩(P))/<σ>)(q′) is first replaced, using the induction hypothesis, by (⟨f′_2⟩(P, P′))(q′), and then the argument q′ (the state in which (⟨f_1⟩(P))(q) terminates) is brought back to q using the AO with h : Q → Q such that ∀q̄ ∈ Q, h(q̄) := q′.
EPCO: Let <σ> ∈ tr (⟨f_1 ‖ f_2⟩(P))(q). Consider the following two possibilities.
(i) <σ>↾(⟨f_1⟩(P))(q) = <σ_1> and <σ>↾(⟨f_2⟩(P))(q) = <σ_2>, where the possible (σ, σ_1, σ_2) tuples are (σ, σ, σ), (σ, σ, ε), (σ, ε, σ) or (ε, ε, ε). Then
((⟨f_1 ‖ f_2⟩(P))/<σ>)(q) := ((((⟨f_1⟩(P))/<σ_1>){h_2}) ‖ (((⟨f_2⟩(P))/<σ_2>){h_1}))(q) := ((⟨f′_1{h_2}⟩(P, P′)) ‖ (⟨f′_2{h_1}⟩(P, P′)))(q).
Here h_i : Q → Q is such that h_i(q̄) := h^{σ_i}_{(⟨f_i⟩(P))(q)}(<>, q̄) for i = 1, 2.
(ii) For i, j = 1, 2, i ≠ j, if <σ>↾(⟨f_i⟩(P))(q) = <σ> and <σ>↾(⟨f_j⟩(P))(q) = <> then
((⟨f_i ‖ f_j⟩(P))/<σ>)(q) := (((⟨f_i⟩(P))/<σ>) ‖ ((⟨f_j⟩(P)){h_i}))(q) := ((⟨f′_i⟩(P, P′)) ‖ (⟨f_j{h_i}⟩(P, P′)))(q).
Here also h_i : Q → Q is such that h_i(q̄) := h^{σ}_{(⟨f_i⟩(P))(q)}(<>, q̄). Also, ⟨f_j⟩(P) is trivially rewritten as ⟨f_j⟩(P, P′). In this case σ includes ε also.
Thus, by the induction principle, the claim made at the beginning of the proof holds. The rest of the proof proceeds in a fashion identical to that of Theorem 3.2 in [21]. ✷
Definition 5.4.3 (Extended Finitely Recursive Processes (EFRP)): A process Y ∈ D_A^Q is said to be an Extended Finitely Recursive Process (EFRP) with respect to ⟨Φⁿ(G_E, D_A^Q)⟩ if it can be represented as
P = F(P),  Y = ⟨g⟩(P)
where P = (P_1, …, P_n), F is of the form (5.2) and ⟨g⟩ ∈ ⟨Φⁿ(G_E, D_A^Q)⟩. (F, ⟨g⟩) is said to be a realisation of the process Y.
Definition 5.4.4 (Extended Algebraic Process Space): The collection of all possible EFRPs w.r.t. ⟨Φⁿ(G_E, D_A^Q)⟩ for arbitrary n is called the Extended Algebraic Process Space and is denoted by A(⟨Φ(G_E, D_A^Q)⟩).
Like the space of DFRPs, A(⟨Φ(G_E, D_A^Q)⟩) is also closed under the post-process operation.
Finally, we mention the specific differences between the space A(⟨Φ(G_E, D_A^Q)⟩) introduced here and the similar one presented in [22].
• In [22], G_E consists of only EPCO and ESCO.
• In the present work the concept of a silent transition has been introduced. As a result, the definition of EPCO is modified.
• In [22], in the case of EPCO, the state space is modelled as the cross product of the individual state spaces. But this can make some well-defined EPCOs, in the sense of [22], physically meaningless, since independent transformation of multiple copies of shared variables is not meaningful in reality. In the case of ESCO also, no restriction is imposed on the domain even if two sequential processes share some common variables.
• A state modification operator has been introduced to maintain the mutual recursiveness of EPCO.
• In our model the continuity of state, the explicit mention of state transition functions associated with events, etc., have led to many constraints which are not mentioned in [22].
5.5 Assessment
In the present work, the DFRP framework presented in [21] has been augmented and extended to include different state-based features like variables, logical decisions, etc. A first attempt in this direction was made in [22], where the concept of extended memory was introduced. In the present work we have made a second attempt at introducing state-based features. This work differs from the one in [22] mainly in the use of shared memory. A larger class of operators has been used to give a recursive characterisation of the extended process. A number of constraints are introduced to maintain state continuity under the shared memory architecture. The concept of a silent transition has been introduced in this model to capture the effects on a process of the events that take place in its environment. With this extension the FRP framework becomes powerful enough to capture different numerical and real-time features like clocks, hard deadlines, logical decisions, etc. It can also mimic powerful state-based models like TTM, as shown in the following.
Modelling Arbitrary Timed Transition Models (TTM)
The TTM framework was proposed in [13]. Here we demonstrate the describing power of EFRP by modelling a general TTM. For a detailed exposition of TTM the reader is referred to [13].
Let Plant = M_1 ‖ ⋯ ‖ M_n be a TTM plant, where the i-th TTM M_i is described as M_i = (V_i, Θ_i, T_i). Here
T̄_i := T_i − {tick, initial} = {τ_ij | 1 ≤ j ≤ k_i, τ_ij = (e_ij, h_ij, l_ij, u_ij)}.
V̄_i := V_i − {ν}. C_i := {c_ij | 1 ≤ j ≤ k_i, type(c_ij) = ℕ}. V_i′ := V̄_i ∪ C_i.
V_Ex := V_1′ ∪ ⋯ ∪ V_n′. Q_Ex := ∏_{v ∈ V_Ex} type(v).
In the above description the plant is modelled as the parallel composition of n TTMs. An individual TTM is described as a tuple of a set of variables V_i, an initial condition Θ_i and a collection of transitions T_i. Each transition of T_i is a tuple consisting of an enabling condition e, a state transformation h, a lower time bound l and an upper time bound u. From T_i the two special transitions initial and tick are removed to form T̄_i, which contains k_i transitions. From V_i the next-transition variable ν is removed, and k_i counter variables are added, one for each transition of the TTM other than tick or initial. We now deal with an extended variable set V_Ex and state space Q_Ex. The central idea is to introduce an individual extended process for each transition of the TTM, which takes care of the respective enabling condition and time bounds using counter variables, silent transitions and the LBO. These processes are called event processes. However, initial and tick are modelled explicitly as events, since they don't have such constraints.
We now proceed to give the formal definition of the EFRP Plant that models a general TTM Plant. It is formulated as the concurrent operation of n extended processes M̄_i, 1 ≤ i ≤ n, each of which models the i-th TTM M_i.
Plant := (M̄_1 ‖ ⋯ ‖ M̄_n)^{[[−{ε}]]}
M̄_i := (STOP_{{initial}} ◁ ¬Θ_i ▷ (initial → Ȳ_i)).
At a state q, the realization M̄_i(q) of the extended process M̄_i either evaluates to a deadlocked process or is ready to execute initial and engage in purposeful activity, depending upon whether Θ_i(q) is false or true. If it is false for some i, then Plant is deadlocked due to STOP_{{initial}} in M̄_i. Otherwise Plant starts meaningful activity with initial taking place in all its components. Ȳ_i is a recursive process. In each call of Ȳ_i, a parallel composition of k_i event processes, namely P̄_ij, 1 ≤ j ≤ k_i, is invoked. Once this composition terminates, Ȳ_i is called again.
Ȳ_i := (P̄_i1 ‖ ⋯ ‖ P̄_ik_i).
Ȳ_i is modelled to be a nonterminating process, since the TTM is such. Each nonterminating event process P̄_ij plays the key role in the modelling. Formally,
P̄_ij = ((X4_ij ◁ c_ij = u_ij ▷ (X3_ij ◁ l_ij ≤ c_ij < u_ij ▷ X2_ij)) ◁ ē_ij ▷ X1_ij)^{[[+{tick, τ_ij}]]}. Here
X1_ij := (ε → SKIP_∅[c_ij := 0] | tick → SKIP_∅[t := t + 1, c_ij := 0]).
X2_ij := (ε → P̄_ij[∅] | tick → P̄_ij[t := t + 1, c_ij := c_ij + 1]).
X3_ij := (ε → P̄_ij[∅] | τ_ij → P̄_ij[h̄_ij, c_ij := 0] | tick → P̄_ij[t := t + 1, c_ij := c_ij + 1]).
X4_ij := (ε → P̄_ij[∅] | τ_ij → P̄_ij[h̄_ij, c_ij := 0]).
ē_ij := e_ij if τ_ij is not a shared transition; ē_ij := ⋀_{p=1}^{r} e_{i_p j_p} if τ_ij is shared as τ_{i_1 j_1}, …, τ_{i_r j_r} in M_{i_1}, …, M_{i_r} respectively.
h̄_ij := h_ij if τ_ij is not a shared transition; h̄_ij := h_{i_1 j_1} ∘ ⋯ ∘ h_{i_r j_r} if τ_ij is shared as τ_{i_1 j_1}, …, τ_{i_r j_r} in M_{i_1}, …, M_{i_r} respectively.
Now we explain the operation of the event process P̄_ij. The process X1_ij corresponds to the case when the enabling condition for τ_ij is false. At this stage, in P̄_ij, only tick or ε (caused by some τ_{i′j′} in some other P̄_{i′j′}) is allowed. Also, the counter is reset to zero. X2_ij applies when the enabling condition is true but the counter value is less than the lower bound. Here also the event τ_ij cannot take place; however, in response to tick, the counter is incremented. The idea behind keeping ε in each of these processes is to be able to check the enabling as well as the timing conditions after every event in the environment of P̄_ij. However, ε is prevented from appearing in the traces of the overall process Plant by the EGCO [[−{ε}]]. X3_ij pertains to the case when the enabling condition, as well as both the lower and upper time bound constraints, are satisfied, and τ_ij is allowed to happen. X4_ij is the case when the counter value equals u_ij. It is interesting to note that in this case tick is not allowed, as its occurrence would violate the upper time bound constraint of the TTM. Physically this means that, before the next time instant, either τ_ij will take place or the enabling condition will become false due to some event in the environment. The EGCO [[+{tick, τ_ij}]] ensures that tick is synchronized among all the processes and that τ_ij is also synchronized when it is a shared transition. Communicating transitions are also modelled as shared transitions. Finally, note that after each event in ‖_{i=1}^{n} Ȳ_i, the post-process is ‖_{i=1}^{n} Ȳ_i itself. This is because each process in ‖_{i=1}^{n} (‖_{j=1}^{k_i} P̄_ij) proceeds synchronously, with the occurrence of either a tick in all the components, or a τ_ij in one (or some, if shared) and ε in the others. In the next call (at a new state), all the conditions are rechecked. Thus Plant models the TTM Plant successfully. ✷
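The mode selection inside an event process P̄_ij can be summarised by a small function computing which events are currently on offer. This is an illustrative Python sketch under our own naming ('epsilon', 'tick' and 'sigma' stand for ε, tick and τ_ij); it is not the formal EFRP semantics.

```python
# Sketch of the mode selection in the event process modelling one TTM
# transition (e, h, l, u): which events are offered depends on the enabling
# condition and the counter value c, following the nested LBO above.

def offered(enabled, c, l, u):
    """Return the set of events the event process can engage in (modes X1-X4)."""
    if not enabled:
        return {'epsilon', 'tick'}            # X1: disabled; the counter resets
    if c < l:
        return {'epsilon', 'tick'}            # X2: enabled, below the lower bound
    if c < u:
        return {'epsilon', 'tick', 'sigma'}   # X3: within [l, u); transition may fire
    return {'epsilon', 'sigma'}               # X4: c = u; tick would violate u
```

For bounds l = 1 and u = 3 the transition is offered only while 1 ≤ c ≤ 3, and at c = 3 tick is withdrawn, which is exactly the upper-bound argument made above.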
However, the present framework suffers from the same problems as the original DFRP and NFRP models, in the sense that it has large modelling flexibility and a resultant loss of tractability. Moreover, for general EFRPs, it is not known how the different state-continuity constraints can be verified in a recursive way. To solve these problems, as well as to attempt the different control and observation problems, bounded and tractable subclasses of Φⁿ(G_E, D_A^Q), in the sense of those in Chapter 3, should be identified.
In Chapters 3, 4 and 5 we have defined three different variants of the DFRP framework and illustrated their use by means of simple, synthetic examples. In the next chapter, we take three examples drawn from practical real-world systems and see how they can be modelled using these frameworks.
Chapter 6
Case Studies
In this chapter we present three case studies showing the utility of the proposed hierarchical subclass and the extensions of DFRP in modelling real-world DES. The first example involves the modelling of a manufacturing shop-floor using the bounded subclass H. In the second, the nondeterministic extension of DFRP is used in modelling a fault diagnosis procedure adopted by an automatic diagnostic expert. Finally, the discrete dynamics of a robot controller is modelled using the state-based extension of the DFRP framework.

6.1 Modelling of a Shop-Floor
Many modern manufacturing shop-floors consist of a number of job-shops, where different operations are carried out or different parts are produced; buffers, where parts of a product are stored temporarily before being moved to other job-shops; and automatic transfer mechanisms, such as conveyor belts or automatic guided vehicles (AGVs), for moving the parts. In the present example we shall model the discrete event dynamics of one such small shop-floor as a hierarchical DFRP. The schematic diagram of the shop-floor is given in figure 6.1. It consists of two job-shops, producing part A and part B, a distributor of input parts to the job-shops, two buffers and an AGV. Part A is produced by combining two subparts A_1 and A_2, which are produced concurrently in the first job-shop. Part B is produced by combining two subparts B_1 and B_2, which are also produced concurrently in the second job-shop. Each subpart B_1 (respectively B_2) is in turn produced by combining two subparts B_3 and B_4 (respectively B_5 and B_6), which are produced concurrently after some direct processing. The parts are then inspected. If they are found to be satisfactory, they are stored in the corresponding buffer. If any of them turns out to be defective, an error recovery operation takes place before the production of that part can commence again.
[Figure 6.1: Schematic diagram of a shop floor]

When any of the buffers becomes full, a request is sent to the AGV to come and collect
the parts. Till then production of that particular part is temporarily blocked.
In response to a request signal, either from the first or the second buffer, the AGV
controller determines the destination path and returns an acknowledgement signal. After
that, the AGV arrives at the destination point, unloads the buffer contents into it, moves
to the deposit box and unloads the parts into it.
Here we model the above dynamics as a hierarchical process, whose underlying hierarchy tree is shown in figure 6.2.
Node 1: The shop-floor (SF) dynamics is modelled as a process Y_SF (= Y_1), which begins operation with the event start, followed by the actual nonterminating process Y_MC, depicting the normal concurrent operation of the major components (MC) of the given shop-floor. Formally, Y_1 = Y_SF = SF_1 where
SF_1 = (start → Y_MC)^{{start},0}.
Node 2: The major components of the shop-floor dynamics are the Distributor (D), Job-shop A (JSA), Buffer 1 (BU1), Job-shop B (JSB), Buffer 2 (BU2) and the AGV Controller (AGVC). Processes describing the dynamics of the individual components are given below. All these components are brought together in the second node of the hierarchy tree.

[Figure 6.2: Hierarchy tree for Case Study 1]

Distributor: The distributor supplies parts to the two job-shops in an arbitrary order. The supply of parts to a job-shop can repeat only after the work on the previously supplied parts to that job-shop is over. Formally, it is modelled as the process D_1 where
D_1 = (supply_A → D_2 | supply_B → D_3)^{{supply_A, supply_B},0}.
D_2 = (finish_A → D_1 | supply_B → D_4)^{{finish_A, supply_B},0}.
D_3 = (finish_B → D_1 | supply_A → D_4)^{{supply_A, finish_B},0}.
D_4 = (finish_A → D_3 | finish_B → D_2)^{{finish_A, finish_B},0}.
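The four distributor equations define a small labelled transition system whose traces can be checked directly. The following Python sketch is illustrative only; the dictionary encoding and the helper `accepts` are our own choices, not part of the DFRP formalism.

```python
# The distributor as a four-state machine (states D1-D4 above): each state
# maps an enabled event to its successor state.

DIST = {
    'D1': {'supplyA': 'D2', 'supplyB': 'D3'},
    'D2': {'finishA': 'D1', 'supplyB': 'D4'},
    'D3': {'finishB': 'D1', 'supplyA': 'D4'},
    'D4': {'finishA': 'D3', 'finishB': 'D2'},
}

def accepts(trace, state='D1'):
    """Check whether the distributor can perform the given event sequence."""
    for ev in trace:
        if ev not in DIST[state]:
            return False
        state = DIST[state][ev]
    return True
```

For example, a supply to job-shop A can be repeated only after the corresponding finish, so the trace ⟨supply_A, supply_A⟩ is rejected while ⟨supply_A, finish_A, supply_A⟩ is accepted.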

Job-shop A: In job-shop A, after the supply of parts (event supply_A), the parts are arranged for concurrent processing (event part_A). Once concurrent processing is over (represented by the subtask Y_3 = Y_JA), the processed parts are combined (event combine_A) and examined (event examine_A). If the result is found to be satisfactory, the event finish_A signals that job-shop A is ready for another round of processing. However, if some error is found (error_A), error recovery must take place (recovery_A) before the job-shop can start operation once again. This dynamics is modelled by the process JSA_1 where
JSA_1 = (supply_A → JSA_2; Y_JA; JSA_3)^{{supply_A},0}.
JSA_2 = (part_A → SKIP_∅)^{{part_A},0}.
JSA_3 = (combine_A → JSA_4)^{{combine_A},0}.
JSA_4 = (examine_A → JSA_5)^{{examine_A},0}.
JSA_5 = (finish_A → JSA_1 | error_A → JSA_6)^{{finish_A, error_A},0}.
JSA_6 = (recovery_A → JSA_1)^{{recovery_A},0}.
Job-shop B: The behaviour of job-shop B is similar to that of job-shop A. Formally,
JSB_1 = (supply_B → JSB_2; Y_JB; JSB_3)^{{supply_B},0}.
JSB_2 = (part_B → SKIP_∅)^{{part_B},0}.
JSB_3 = (combine_B → JSB_4)^{{combine_B},0}.
JSB_4 = (examine_B → JSB_5)^{{examine_B},0}.
JSB_5 = (finish_B → JSB_1 | error_B → JSB_6)^{{finish_B, error_B},0}.
JSB_6 = (recovery_B → JSB_1)^{{recovery_B},0}.
Buffer 1 (capacity n_A): After n_A occurrences of the event finish_A (each occurrence indicating that one more finished part is put into the buffer), the buffer becomes full and refuses to accept any more finished parts. It then requests (req_A) the AGVC to send the AGV to collect the parts. Once it receives an acknowledgement from the AGVC (ack_A) and the AGV arrives, the finished parts are loaded into the AGV (load). After that the buffer returns to its empty state, where it is ready to accept finished parts once again. Its dynamics is modelled as the process BU1_1 where
BU1_1 = (finish_A → BU1_2; ⋯ (n_A − 1 times) ⋯; BU1_2; BU1_3)^{{finish_A},0}.
BU1_2 = (finish_A → SKIP_∅)^{{finish_A},0}.
BU1_3 = (req_A → BU1_4)^{{req_A},0}.
BU1_4 = (req_A → BU1_4 | ack_A → BU1_5)^{{req_A, ack_A},0}.
BU1_5 = (load → BU1_1)^{{load},0}.
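The buffer's fill-request-load cycle can be illustrated by replaying an event trace. This is an illustrative Python sketch only (`run_buffer` and the short event names are ours); it mirrors the BU1_1 to BU1_5 cycle, with the capacity n as a parameter.

```python
def run_buffer(n, events):
    """Replay an event trace of Buffer 1 (capacity n); raise on a disallowed event.
    Returns the number of parts currently in the buffer."""
    count, requested, loading = 0, False, False
    for ev in events:
        if loading:                     # BU1_5: only load is enabled
            assert ev == 'load'
            count, requested, loading = 0, False, False
        elif count < n:                 # BU1_1/BU1_2: still filling
            assert ev == 'finish'
            count += 1
        elif not requested:             # BU1_3: first request to the AGVC
            assert ev == 'req'
            requested = True
        elif ev == 'ack':               # BU1_4: acknowledged, AGV arrives
            loading = True
        else:                           # BU1_4: request may be repeated
            assert ev == 'req'
    return count
```

For n = 2, the trace ⟨finish, finish, req, req, ack, load, finish⟩ is accepted and leaves one part in the freshly emptied buffer.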
Buffer 2 (capacity n_B): It has a structure similar to that of Buffer 1. Formally,
BU2_1 = (finish_B → BU2_2; ⋯ (n_B − 1 times) ⋯; BU2_2; BU2_3)^{{finish_B},0}.
BU2_2 = (finish_B → SKIP_∅)^{{finish_B},0}.
BU2_3 = (req_B → BU2_4)^{{req_B},0}.
BU2_4 = (req_B → BU2_4 | ack_B → BU2_5)^{{req_B, ack_B},0}.
BU2_5 = (load → BU2_1)^{{load},0}.
AGV Controller: After receiving a request from either buffer (events req_A or req_B), the AGVC sets its path (events set_path_A or set_path_B) and sends an acknowledgement (ack_A or ack_B). After this, the AGV arrives at the corresponding buffer (subtask Y_5 = Y_AGVA), collects the finished parts from the buffer (event load), goes to the deposit box (subtask Y_6 = Y_AGVD) and unloads its contents (unload). Only after this does the controller once again receive further requests. Formally,
AGVC_1 = (req_A → AGVC_2 | req_B → AGVC_3)^{{req_A, req_B},0}.
AGVC_2 = (set_path_A → AGVC_4)^{{set_path_A},0}.
AGVC_3 = (set_path_B → AGVC_5)^{{set_path_B},0}.
AGVC_4 = (ack_A → Y_AGVA; AGVC_6; Y_AGVD; AGVC_7; AGVC_1)^{{ack_A},0}.
AGVC_5 = (ack_B → Y_AGVA; AGVC_6; Y_AGVD; AGVC_7; AGVC_1)^{{ack_B},0}.
AGVC_6 = (load → SKIP_∅)^{{load},0}.
AGVC_7 = (unload → SKIP_∅)^{{unload},0}.
Finally, Y_2 = Y_MC is the concurrent operation of the processes representing these major components and is given by
Y_MC = D_1^{[[+Σ_D]]} ‖ JSA_1^{[[+Σ_JA]]} ‖ JSB_1^{[[+Σ_JB]]} ‖ BU1_1^{[[+Σ_BU1]]} ‖ BU2_1^{[[+Σ_BU2]]} ‖ AGVC_1^{[[+Σ_AGVC]]}
where
Σ_D = {supply_A, finish_A, supply_B, finish_B}.
Σ_JA = {supply_A, finish_A}.
Σ_JB = {supply_B, finish_B}.
Σ_BU1 = {finish_A, req_A, ack_A}.
Σ_BU2 = {finish_B, req_B, ack_B}.
Σ_AGVC = {ack_A, ack_B, load}.
Node 3: As mentioned earlier, part A is produced by combining two concurrently produced subparts, namely A1 and A2. Formally,
JA1_1 = (process_A1 → JA1_2)^{{process_A1},0}.
JA1_2 = (store_A1 → SKIP_∅)^{{store_A1},0}.
JA2_1 = (process_A2 → JA2_2)^{{process_A2},0}.
JA2_2 = (store_A2 → SKIP_∅)^{{store_A2},0}.
Finally, Y_3 = Y_JA = JA1_1 ‖ JA2_1.
Node 4: Part B is also produced by combining two concurrently produced subparts, namely B1 and B2. Individually, B1 (respectively B2) is produced by some direct processing (event d_process_B1 (resp. d_process_B2)) followed by combining (combine_B1 (resp. combine_B2)) two concurrently produced subparts B3 and B4 (resp. B5 and B6). The production of these subparts is captured by the process Y_7 = Y_HJB1 (Y_8 = Y_HJB2 respectively). Formally, for i = 1, 2,
JBi_1 = (d_process_Bi → JBi_2; Y_HJBi; JBi_3)^{{d_process_Bi},0}.
JBi_2 = (part_Bi → SKIP_∅)^{{part_Bi},0}.
JBi_3 = (combine_Bi → JBi_4)^{{combine_Bi},0}.
JBi_4 = (store_Bi → SKIP_∅)^{{store_Bi},0}.
Finally, Y_4 = Y_JB = JB1_1 ‖ JB2_1.
Node 5: The arrival of the AGV at the requesting buffer is modelled by the process Y_5 = Y_AGVA = AGVA_1, which is given by the following recursive equations with self-explanatory event labels.
AGVA_1 = (AGV_start → AGVA_2)^{{AGV_start}}.
AGVA_2 = (check_for_buffer → AGVA_3)^{{check_for_buffer}}.
AGVA_3 = (buffer_found → AGVA_4 | not_found → AGVA_2)^{{buffer_found, not_found}}.
AGVA_4 = (AGV_stop → SKIP_∅)^{{AGV_stop}}.
Node 6: The movement of the AGV from either buffer to the deposit position is modelled by a similar process Y_6 = Y_AGVD = AGVD_1, given as follows.
AGVD_1 = (AGV_start → AGVD_2)^{{AGV_start}}.
AGVD_2 = (check_deposit_position → AGVD_3)^{{check_deposit_position}}.
AGVD_3 = (position_found → AGVD_4 | not_found → AGVD_2)^{{position_found, not_found}}.
AGVD_4 = (AGV_stop → SKIP_∅)^{{AGV_stop}}.
Node 7: The concurrent production of parts B3 and B4 is modelled as Y_7 = Y_HJB1 = JB3_1 ‖ JB4_1, where JB3_1 and JB4_1 are given by the following processes.
JB3_1 = (process_B3 → JB3_2)^{{process_B3},0}.
JB3_2 = (store_B3 → SKIP_∅)^{{store_B3},0}.
JB4_1 = (process_B4 → JB4_2)^{{process_B4},0}.
JB4_2 = (store_B4 → SKIP_∅)^{{store_B4},0}.
Node 8: The concurrent production of parts B5 and B6 is similarly modelled as Y_8 = Y_HJB2 = JB5_1 ‖ JB6_1, where JB5_1 and JB6_1 are given by the following processes.
JB5_1 = (process_B5 → JB5_2)^{{process_B5},0}.
JB5_2 = (store_B5 → SKIP_∅)^{{store_B5},0}.
JB6_1 = (process_B6 → JB6_2)^{{process_B6},0}.
JB6_2 = (store_B6 → SKIP_∅)^{{store_B6},0}.
The other conditions regarding the boundedness of the different modules can easily be seen to hold. Thus, clearly, Y_SF ∈ H.
This example shows how the logical and physical structure of a system can naturally lead to a hierarchical description. Finally, a rough calculation shows that if the example is modelled using an FSM, a large number of states (around 10^5) will be required. Clearly, the hierarchical DFRP gives a much more compact description.

6.2 Modelling of a Fault Diagnosis Session
In this case study the proposed nondeterministic extension of the DFRP framework has
been used to model the dynamics of a fault-diagnosis mechanism in response to some fault
in a chemical system, namely a continuously stirred tank reactor (CSTR).
The original example of the plant has been taken from [70] and its schematic diagram is
given in Fig. 6.2(a). Here we are interested in modelling the sequences of observations made
by a fault diagnoser in response to logical sequences of process related queries which may


be made interactively with a human operator or automatically through process sensors.


The diagnoser starts the query upon receiving some symptoms of abnormality reported
to it manually or automatically. The observation of additional symptoms acquired from
the system, on-line or off-line, in response to queries, leads the diagnosis process along a
fault tree which eventually arrives at one or more root causes, termed as primary faults
here. This procedure is often nondeterministic, because a certain set of observation may
not uniquely pinpoint the source of a fault. The fault tree is a standard representation
for possible search paths during diagnosis. Here each node of the tree is labeled with
an abnormality symptom. A parent node is connected to a number of child nodes
in the tree, provided the symptoms labelling the child nodes are causes for that of the
parent node. There are three types of nodes in a fault tree: And, Or and Mixed nodes.
The occurrence of the symptoms of all the child nodes together is necessary to cause the symptom
associated with an And type parent node. Similarly, occurrence of symptoms of any child
node is a sufficient cause for that of an Or type parent node. Mixed nodes are those
parent nodes where abnormality symptoms are caused by any one of the different groups
of and children. In a diagnostic session, the diagnoser starts from a given node, indicating
the abnormality initially reported to it and makes a query for symptoms of all its child
processes. The children corresponding to the symptoms obtained in response to the query
that are causes for the abnormality represented by the parent node are marked. The
query process repeats from the marked children. The tree is traversed in a breadth-first
fashion.
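The breadth-first marking procedure described above can be sketched in Python. The tree fragment, the `diagnose` name and the restriction to pure or nodes are illustrative assumptions; the thesis models this traversal with processes rather than code.

```python
from collections import deque

# Hypothetical fault-tree fragment: each symptom maps to its child symptoms
# (pure "or" nodes for simplicity); symptoms without children are primary faults.
tree = {
    "RTH":  ["S3TH", "S4TH", "RPH"],
    "S7FL": ["S6FL", "V4C", "P7L", "P7B"],
    "S6FL": ["PuL", "S5FL", "P6L", "P6B", "V4C"],
    "S5FL": ["RLL", "PuL", "P5L", "P5B"],
}

def diagnose(start, observed):
    """Breadth-first marking: from the reported symptom, follow only those
    children whose symptoms were actually observed; return the leaves
    reached, i.e. the candidate primary faults."""
    primary, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        children = tree.get(node, [])
        if not children:                  # leaf: a primary fault
            primary.add(node)
        # query repeats from the marked (observed) children
        queue.extend(c for c in children if c in observed)
    return primary
```

For example, starting from S7FL with observations {S6FL, PuL, P6B}, the search marks S6FL, then PuL and P6B, and returns the primary faults {PuL, P6B}.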
As described above, the fault tree captures the knowledge about the effects of various
faults that can occur in the system, as a tree of cause-effect relationships. Thus, it is a
static description which is to be utilized by the fault diagnosis procedure in arriving at
a decision regarding which fault(s) might have taken place. Depending on the observed
symptoms, only a small fraction of the fault tree is to be dynamically searched in a given
situation. In this example, we show how to capture both the static and dynamic aspects
of fault diagnosis in the form of NFRPs. Thus, information encoded in a fault tree can
be compactly and elegantly captured by NFRPs and the fault diagnosis procedure can be
modelled by suitable parallel composition with appropriate synchronization and blocking,
of these elementary processes depending on the observed fault symptoms.
Since the diagnosis may start from any reported symptom, the query and the observation
sequences beginning from each node are modelled as a process.
Observation of symptoms in response to a query is modelled as an event similarly named.
From a child node, if no abnormality symptom is observed in response to a query, a Halt
process is substituted to stop further investigation from that node.


CHAPTER 6. CASE STUDIES

Once a particular level of the tree is searched, a synchronisation event takes place in
the model, in all the marked nodes. Then the query process from the marked child
nodes commences.
Finally, the search terminates at a set of leaf nodes, indicating the primary faults that
have taken place.
Each fault event is described in the format variable-name variable-condition. In many
cases we have complementary variable-conditions, like high (H) and low (L), or open (O)
and closed (C). We call such event pairs complementary event pairs, and if one event of
such a pair is symbolically denoted by α, then the other is called ᾱ. It is sometimes possible
that an event has more than one complementary event. In that case, if the event is α, the
non-singleton complementary set is denoted as ᾱ.
For each primary fault node α we define a process Pα = P(α), where, if α has a complementary event ᾱ, then
P(α) := (α → HALT{({ᾱ},1,{ᾱ})}){({α},0,{α})}.
else if it has a complementary event set ᾱ, then
P(α) := (α → HALT{(ᾱ,1,ᾱ)}){({α},0,{α})}.
else if it does not have any complementary event, then
P(α) := (α → HALT{({},1,{})}){({α},0,{α})}.
A diagnosis session stops successfully after identifying the primary fault processes Pα.
Blocking of complementary events indicates that in the fault diagnosis session, which proceeds after taking a snapshot of the physical conditions indicated by different sensors, we
do not get contradictory information about the occurrence of a fault. Note that, here we
do not consider the temporal behaviour of processes where some physical faulty condition
and its complementary condition can be observed at different times.
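The consistency requirement enforced by blocking can be rephrased as a simple trace check: no event may occur in the same session trace as a member of its complementary set. A minimal Python sketch (the table entries and function name are illustrative):

```python
def consistent(trace, complements):
    """Check that no event and a member of its complementary set both occur
    in one diagnosis trace (snapshot semantics: a physical variable cannot
    be reported both high and low at the same instant)."""
    seen = set()
    for event in trace:
        if seen & complements.get(event, set()):
            return False          # contradictory observation: trace blocked
        seen.add(event)
    return True

# Hypothetical complement table for a few of the events used above.
comp = {
    "RTH": {"RTL"}, "RTL": {"RTH"},
    "PuH": {"PuL", "PuN"}, "PuL": {"PuH", "PuN"}, "PuN": {"PuH", "PuL"},
}
```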
For each secondary node having the node structure shown in figure 6.2(b), we define the
process Pα = P(α | (α11 · · · α1n1) ∨ · · · ∨ (αm1 · · · αmnm)) as follows:
Pα := ((α → (σ → (‖k=1..m ((‖j=1..nk Pαkj) ⊓ HALTΔk))[LNO ≠ 0]){σ}){α,σ})GCO[[{α}+{ᾱ}]]
where
Δk := {(ᾱ, 1, ᾱ) | α ∈ {αk1 · · · αknk}, ᾱ ≠ ∅}.
{σ} := {({σ}, 0, {σ})}.


{α, σ} := {({α, σ}, 0, {σ})}.
If α has a complementary set of events, namely ᾱ, then GCO[[{α}+{ᾱ}]] will be replaced
by GCO[[{α} + ᾱ]]. If a secondary fault event (secondary node) does not have a complementary fault event, one should naturally do away with the GCO in the corresponding
process description.
In the above, arrival at a secondary node α (start of Pα) signifies a query. A positive
observation is denoted by the occurrence of the event α, after which the process Pα waits
for the next-level query of its children. This takes place upon the occurrence of the
synchronisation event σ, which denotes that queries at all the secondary nodes of that
level of the tree are completed. Assuming that there are m possible (or) groups of and
children, one or more of these groups may be the causes. If the k-th group (denoted by
‖j=1..nk Pαkj) is not a cause, then HALTΔk is substituted. Here Δk represents the fact that the
k-th group of and children is not a cause for α, because some non-empty subset of the
fault set {αk1 · · · αknk} has failed to take place. The LNO captures the fact that among
the m possible causes (or groups), at least one (group) must actually have taken place.
The GCO[[{α} + {ᾱ}]] reflects the fact that in the search subsequent to the occurrence of
α, not only will ᾱ not be encountered in Pα but it will also be blocked in the environment.
The same is true for multiple complementary events.
Before describing the fault diagnosis mechanism as a process, we describe the list of
faults (events/nodes).
STREAM FAULTS : S-i-j-k, where i ∈ {1, . . . , 7}, j ∈ {F (Flow), T (Temperature)}, and
k ∈ {H (High), L (Low)}. S-i-j-H and S-i-j-L are complementary events.
REACTOR FAULTS : R-i-j, where i ∈ {L (Level), T (Temperature), P (Pressure)}, and
j ∈ {H (High), L (Low)}. R-i-H and R-i-L are complementary events.
TANK FAULTS : T-i-j-k, where i ∈ {1, 2}, j ∈ {L (Level), T (Temperature)}, and k ∈
{H (High), L (Low)}. T-i-j-H and T-i-j-L are complementary events.
VALVE FAULTS : V-i-j, where i ∈ {1, . . . , 4}, j ∈ {O (Open), C (Closed)}. V-i-O and
V-i-C are complementary events.
PIPE FAULTS : P-i-j, where i ∈ {1, . . . , 7}, j ∈ {L (Leak), B (Block)}. There are, however,
no complementary event pairs here.
PUMP FAULTS : PuH (pump high), PuL (pump low) and PuN (pump normal). Out of
these three events, any two form a complementary event set for the third.
Thus, if α is PuH, then ᾱ consists of {PuN, PuL}. Also, in the primary fault structure
P(PuH), the marking of HALT will be ({PuN, PuL}, 1, {PuN, PuL}), and similarly for
P(PuL) and P(PuN).
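The complement relation induced by this naming scheme can be sketched as a small Python function. The rules below merely rephrase the lists above; the hyphen-free event names (as used in the process subscripts) and the function name are assumptions for illustration.

```python
def complement_set(event):
    """Complementary events implied by the fault-naming scheme above (sketch)."""
    pairs = {"H": "L", "L": "H", "O": "C", "C": "O"}
    if event in ("PuH", "PuL", "PuN"):        # pump: the other two events
        return {e for e in ("PuH", "PuL", "PuN") if e != event}
    # pipe faults P<i>L / P<i>B have no complementary events
    if event.startswith("P") and event[1].isdigit() and event[-1] in ("L", "B"):
        return set()
    last = event[-1]
    if last in pairs:                          # H<->L, O<->C pairs
        return {event[:-1] + pairs[last]}
    return set()
```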


Figure 6.3 (diagram, not reproducible in this text rendering): (a) Schematic Diagram of a CSTR; (b) Node structure of a general fault tree, where the reason of fault α is ∨i=1..m (αi1 · · · αini); (c) Part of a fault tree for the CSTR. The legend of (c) distinguishes primary reasons, secondary reasons, cause-effect relations and blocking relations.


Now we proceed to give the NFRP description of the fault tree. Since the structure
of the processes modelling primary and secondary faults has been explained in detail
before, these are not elaborated here. The reader will note that most of the nodes (processes)
are of pure or type, except those corresponding to the processes PS6FL, PS6FH and PS6TH.
All the STREAM and REACTOR faults are secondary faults, and all the VALVE, TANK,
PIPE and PUMP faults are primary faults. It should be noted that the choice of fault
events is only intended to capture the logical structure in a realistic case. Obviously, many
other faults could be included if an application demands.
SECONDARY FAULTS: Pα = P(α | (α11 · · · α1n1) ∨ · · · ∨ (αm1 · · · αmnm))
a) STREAM FAULTS:
FLOW TYPE FAULTS:
PS1FL = P(S1FL | (T1LL), (P1L), (P1B), (V1C)).
PS1FH = P(S1FH | (T1LH), (V1O)).
PS2FL = P(S2FL | (T2LL), (P2L), (P2B), (V2C)).
PS2FH = P(S2FH | (T2LH), (V2O)).
PS3FL = P(S3FL | (S1FL), (P3L), (P3B)).
PS3FH = P(S3FH | (S1FH)).
PS4FL = P(S4FL | (S2FL), (P4L), (P4B)).
PS4FH = P(S4FH | (S2FH)).
PS5FL = P(S5FL | (RLL), (PuL), (P5L), (P5B)).
PS5FH = P(S5FH | (RLH), (PuH)).
PS6FL = P(S6FL | (PuL), (PuN, S5FL), (P6L), (P6B), (PuN, V4C)).
PS6FH = P(S6FH | (PuN, S5FH), (PuH), (PuN, V4O)).
PS7FL = P(S7FL | (S6FL), (V4C), (P7L), (P7B)).
PS7FH = P(S7FH | (S6FH)).
TEMPERATURE TYPE FAULTS:
PS1TL = P(S1TL | (T1TL)).
PS1TH = P(S1TH | (T1TH)).
PS2TL = P(S2TL | (T2TL)).
PS2TH = P(S2TH | (T2TH)).
PS3TL = P(S3TL | (S1TL)).
PS3TH = P(S3TH | (S1TH)).
PS4TL = P(S4TL | (S2TL)).
PS4TH = P(S4TH | (S2TH)).


PS5TL = P(S5TL | (RTL)).
PS5TH = P(S5TH | (RTH)).
PS6TL = P(S6TL | (S5TL)).
PS6TH = P(S6TH | (S5TH), (PuN, V4C)).
PS7TL = P(S7TL | (S6TL)).
PS7TH = P(S7TH | (S6TH)).

b) REACTOR FAULTS:
LEVEL TYPE FAULTS:
PRLL = P(RLL | (S3FL), (S4FL), (S5FH)).
PRLH = P(RLH | (S3FH), (S4FH), (S5FL)).
TEMPERATURE TYPE FAULTS:
PRTL = P(RTL | (S3TL), (S4TL)).
PRTH = P(RTH | (S3TH), (S4TH), (RPH)).
PRESSURE TYPE FAULTS:
PRPL = P(RPL | (V3O)).
PRPH = P(RPH | (V3C)).
PRIMARY FAULTS: Pα = P(α)
c) TANK FAULTS: {T1TL, T1TH, T1LL, T1LH, T2TL, T2TH, T2LL, T2LH}.
d) PIPE FAULTS: {PiL, PiB such that 1 ≤ i ≤ 7}.
e) VALVE FAULTS: {V1O, V1C, V2O, V2C, V3O, V3C, V4O, V4C}.
f) PUMP FAULTS: {PuH, PuL, PuN}.
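The same cause-effect information can be tabulated directly: each secondary symptom maps to its or-groups of and-children. A Python sketch of a fragment of the table and of a "which groups are causes" test follows; the entries mirror PS6FL, PS7FL and PRTH above, and the names are illustrative.

```python
# Fragment of the fault tree as a table: symptom -> list of "and"-groups,
# any one group being a sufficient cause (sketch, not the NFRP itself).
FAULT_TREE = {
    "S6FL": [["PuL"], ["PuN", "S5FL"], ["P6L"], ["P6B"], ["PuN", "V4C"]],
    "S7FL": [["S6FL"], ["V4C"], ["P7L"], ["P7B"]],
    "RTH":  [["S3TH"], ["S4TH"], ["RPH"]],
}

def causing_groups(symptom, observed):
    """Return the and-groups whose members were all observed, i.e. the
    candidate causes for the given symptom."""
    return [g for g in FAULT_TREE.get(symptom, [])
            if all(e in observed for e in g)]
```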
One can now simulate the diagnosis process with any consistent initial abnormalities.
For example, if the starting symptoms observed in the CSTR are High Reactor Temperature
and Low Flow in Stream 7, then the fault diagnostic session will be modelled by the NFRP
Y = PRTH ‖ PS7FL.
By the natural assumption that top-level faults are non-conflicting, we ensure that the
overall process Y won't get blocked by attempting to generate both α and ᾱ in its trace
for some fault event α.
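One such session over a cause table can be sketched in Python as a level-synchronised parallel search with blocking. This is only an approximation of the NFRP semantics (there is no ⊓ or LNO here, and the tree, observed set and complement table are illustrative), but it mimics the level-by-level concurrent traversal with blocked complements.

```python
# Illustrative tree fragment and complement table (pure "or" children).
TREE = {"RTH": ["RPH", "S3TH"], "S7FL": ["V4C", "P7B"]}
COMP = {"V4C": {"V4O"}, "V4O": {"V4C"}}

def session(tree, starts, observed, complements):
    """Breadth-first searches from all reported symptoms advance in parallel,
    one level at a time (mimicking the synchronisation event); a child whose
    complement already occurred in the trace is blocked. Assumes an acyclic tree."""
    trace, frontier, primary = set(), set(starts), set()
    while frontier:
        level = set()
        for node in frontier:
            trace.add(node)
            children = tree.get(node, [])
            if not children:                      # leaf: a primary fault
                primary.add(node)
            for c in children:
                blocked = complements.get(c, set()) & trace
                if c in observed and not blocked:
                    level.add(c)
        frontier = level
    return primary
```

For the symptoms {RPH, P7B}, the two searches started from RTH and S7FL terminate with the primary faults {RPH, P7B}.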
From the above, the following points regarding the role of the NFRP are important to note. In
the design of a fault diagnostic system, such as a diagnostic expert, the usual procedure is to
capture the cause-effect relationships between the faults and their symptoms in some form,
such as a fault tree [71] or a signed directed graph [72]. Many such trees may arise due to the


various possible faults. In the next step, all interpretations of all collections of such graphs,
that may arise in a system due to possibly multiple faults, are computed by a suitable
graph traversal mechanism. Note that, this requires concurrent traversal of graphs with
constraints that arise due to interaction among the faults. Based on these interpretations,
a rule-base is compiled for subsequent on-line inferencing by the diagnostic expert. It
is not easy to construct a systematic description of this procedure in terms of graphical
methods such as fault trees, mainly because such paradigms do not support structures
for systematic composition of individual graphs. Constructing product graphs has the
problem of a combinatorial explosion. Moreover, the nondeterminism is not explicit in these
models. The NFRP, on the other hand, provides a modular, unified and compact description
for computing the interpretation. It explicitly supports description of event sequences,
concurrency with synchronization and nondeterminism. It is therefore an important tool
in describing the concurrent execution of individual fault diagnosis processes at a higher
level. In an implementation, such a description may be used to govern the execution of
elementary fault diagnosis processes or equivalently govern the traversal of the individual
fault tree to generate the overall sequence of fault symptoms, leading to the primary reasons
(faults).

6.3

Modelling of a Robot Controller

In this case study, we develop the EFRP model of a robot controller which carries out
some repetitive work on a job depending upon the program selected. The basic dynamics
of the robot controller are taken from an example in [73].
Briefly, the actions are as follows. After power on and system initialisation the robot
enters the manual mode and its state is inactive. In this mode the program to be run
can be selected and the valid command is start. In response to start the state becomes
establishing connection where the controller sends repeated job requests to a scheduler.
After a fixed number of requests, if the job is not granted, the robot gives a connection
failure signal and goes back to manual mode. If granted, the robot enters the working
state and loads the selected program in the buffer. Each program is assumed to be a
sequence of blocks of instructions for the axis actuator of the robot. At a time, one block
of instructions is sent. The program is repetitive, i.e., at the end of the sequence, the first
block is sent again. Before sending each block, data is read from a collection of sensors. If
some sensor shows an abnormal value, an alarm is signalled and the program halts. Otherwise a
block is sent and acknowledgement is received. After each acknowledgement the command


register is checked. In the absence of new commands, the controller proceeds to send the next
block. In the working mode, the valid commands are halt, end, or end on (external) request
(e_o_r). In case of halt, the controller enters the suspended mode and waits for new
valid commands, namely resume or end. In response to resume the robot enters the
working state and resumes sending instruction blocks to the actuator. In response to end
the controller enters the terminating state, loads a terminating program in the buffer
and sends it to the actuator. At the end of this program which is non-repetitive, the robot
controller goes back to manual mode.
For building the EFRP model of the above sequence of actions, we have used the following variables along with their ranges of values.
SV (State variable) := {I (inactive), M (manual), EC (establishing connection), W
(working), S (suspended), T (terminating)}.
Count (Counter for the number of requests) := {1, . . . , NR}.
Buffer (holds the sequence of executable blocks of the selected program).
C (Command register) := {S (start), H (halt), R (resume), E (end)}.
CF (Command flag) := {Old, New}.
PN (Program number) := {1, . . . , p}.
PL (Program length) := {N1, . . . , Np, NT}.
Connect (Connect flag showing the response to the job request signal) := {T, F}.
k (Sensor counter) := {1, . . . , NS}.
SVD (Dummy location for storing a sensor datum).
CVD (Dummy location for storing the critical value of some physical variable measured
by a sensor).
j (Counter for the number of blocks being sent).
SR (Sensor register, an array of length NS).
DSP (Display register, an array of length NS).
SF (Sensor flag indicating whether all sensors show normal values) := {1, 0}.
DR (External disconnect request flag) := {0, 1}.
lPower, lM, lEC, lCF, lW, lS, lT, lI: Light indicators (l.i.). Each can be either 1
(on) or 0 (off).
The different constants used are p (number of programs), Ni (length of the i-th program,
1 ≤ i ≤ p), NS (number of sensors), NR (maximum number of requests), NT (length of the
terminating program), and CR (array storing the critical values of different parameters).
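For reference, the controller's variables can be collected in a single record. The Python sketch below is illustrative only; the constant values and field defaults are assumptions, not taken from the thesis.

```python
from dataclasses import dataclass, field

# Illustrative constants (assumed values, for the sketch only).
p, NS, NR, NT = 3, 4, 5, 2

@dataclass
class ControllerState:
    """State variables of the robot controller, as listed above (sketch)."""
    SV: str = "I"           # I, M, EC, W, S, T
    Count: int = 0          # job-request counter, bounded by NR
    C: str = "S"            # command register: S, H, R, E
    CF: str = "Old"         # command flag: Old or New
    PN: int = 1             # selected program number, 1..p
    Connect: bool = False   # response to the job request signal
    SF: int = 1             # sensor flag: 1 = all sensors normal
    DR: int = 0             # external disconnect request flag
    lights: dict = field(default_factory=lambda: {"lPower": 0, "lM": 0})
```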
The set of event (σ) names, their interpretations and state transformations (hσ) are given
below.
on [lPower := 1] Power on.


off [all l.i. := 0] Power off.
s_init [SV := I; except lPower, all other l.i. := 0] System initialisation.
mm [SV := M, lM := 1, other l.i. := 0, CF := Old] Press Manual mode.
ps_i [PN := i, PL := Ni] Select the i-th program, 1 ≤ i ≤ p.
p_s [CF := New, C := S] Press Start button.
p_h [CF := New, C := H] Press Halt button.
p_r [CF := New, C := R] Press Resume button.
p_e [CF := New, C := E] Press End button.
start [SV := EC, lEC := 1, other l.i. := 0, CF := Old, Count := 0] Robot starts.
resume [SV := W, lW := 1, other l.i. := 0, CF := Old] Robot resumes.
halt [SV := S, lS := 1, other l.i. := 0, CF := Old] Robot halts.
end [SV := T, lT := 1, other l.i. := 0, CF := Old] Termination starts.
inv [lI := 1, other l.i. := 0, CF := Old] Invalid command sensed.
jr [Count := Count + 1] Request for job is sent.
jc [Connect := T, lEC := 0] Confirmation for job is received.
discon [Connect := F, lEC := 0, lCF := 1] Job request turned down.
set_t_1(2) [] Set Timer 1 (2).
out_t_1(2) [] Timer 1 (2) gives a timeout signal.
prog [j := 1, SV := W, lW := 1, Buffer := Program(PN)] The selected program is sent
to the buffer and the block counter of the program is initialised.
int_s [k := 1] Sensors are interrupted and the sensor counter := 1.
read_s [SVD := SR(k) := sensor data, CVD := CR(k)] The k-th sensor is read, and the
sensor datum and the corresponding critical value are loaded in the dummy locations for comparison.
disp [DSP(k) := SR(k)] Display the k-th sensor output.
alarm [SF := 0] Alarm signal, if some sensor output is abnormal.
poll_end [SF := 1] End of successful polling of all sensors.
int_A [] Interrupt the axis actuator.
int_ack [] Receive interrupt acknowledgement from the actuator.
send [] Send the j-th program block to the actuator from the buffer.
ack [j := j + 1; (j > PL) ⇒ (j := 1)] The axis actuator acknowledges completion of the task of
one block.
req_end [DR := 1] External request signal to release the job.
e_o_r [DR := 0, SV := T, lT := 1, other l.i. := 0] Robot starts the terminating operation
on an external request.
prog_T [j := 1, PL := NT, Buffer := Prog(terminate)] The terminating program is
loaded in the buffer.


ack_T [j := j + 1] The actuator acknowledges completion of a block of the terminating program.

Now we proceed to give the formal model of the robot controller. It is a nonterminating
process in which the event on starts a loop and the event off terminates it. In between these
events, the robot performs its activities.
R_Controller := (Robot ‖ (on → (off → SKIP[hoff])[hon])); R_Controller.
Note that, by use of the process SKIP, termination of the robot dynamics has been
achieved via blocking.
Robot := (on → (s_init → Manual[hs_init])[hon]).
Manual := (mm → (ps_1 → Wait[hps_1] | · · · | ps_p → Wait[hps_p])[hmm]).
Wait := (ps_1 → Wait[hps_1] | · · · | ps_p → Wait[hps_p] | p_s → Decide[hp_s] | p_r →
Decide[hp_r] | p_h → Decide[hp_h] | p_e → Decide[hp_e]).
Decide := (start → Pstart[hstart]) ◁ SV = M ∧ C = S ▷ P1.
P1 := (halt → Phalt[hhalt]) ◁ SV = W ∧ C = H ▷ P2.
P2 := (resume → Presume[hresume]) ◁ SV = S ∧ C = R ▷ P3.
P3 := (end → Pend[hend]) ◁ ((SV = W ∨ SV = S) ∧ C = E) ▷ (inv → Pinv[hinv]).
Pinv := (p_s → Decide[hp_s] | p_r → Decide[hp_r] | p_h → Decide[hp_h] | p_e →
Decide[hp_e]).
After power on, system initialisation takes place and manual mode becomes active.
One of the p programs can be selected, and the selected program can also be changed
before pressing one of the command buttons. The validity of the command is checked by
comparing the state variable and the command register. In case of invalid command the
robot waits for another command button to be pressed. Obviously in the manual mode
only the start command is valid.
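The validity check performed by Decide, P1, P2 and P3 amounts to a small decision table over the state variable SV and the command register C, which can be sketched as follows (a rephrasing of the processes above; the function name is illustrative):

```python
def valid_command(SV, C):
    """Which event results from comparing the state variable with the
    command register (sketch of the Decide/P1/P2/P3 cascade)."""
    if SV == "M" and C == "S":
        return "start"
    if SV == "W" and C == "H":
        return "halt"
    if SV == "S" and C == "R":
        return "resume"
    if SV in ("W", "S") and C == "E":
        return "end"
    return "inv"          # invalid command sensed
```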
Pstart := Ptry; Pconnect.
Ptry := (jr → Preq[hjr]) ◁ Count < NR ▷ (discon → SKIP{}[hdiscon]).
Preq := (set_t_1 → (out_t_1 → Ptry | jc → SKIP{}[hjc])).
Pconnect := (Pwork) ◁ Connect = T ▷ (Manual).
The process Ptry captures the repeated requests (NR times) with the help of a counter
variable. The robot sends a job request signal to a job scheduler (assuming a flexible
manufacturing framework), starts a timer and waits. If a timeout occurs before receiving
confirmation, the request is sent again. This is repeated at most NR times. In case of connection failure, the robot is to be restarted from the manual mode. Otherwise, the robot starts
working.
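The retry behaviour of Ptry/Preq can be sketched as a bounded loop; the reply sequence stands in for the race between the timer and the confirmation event, and the names are illustrative.

```python
def establish_connection(scheduler_replies, NR=5):
    """Send up to NR job requests (event jr); each entry of scheduler_replies
    says whether confirmation (event jc) arrived before Timer 1 expired
    (event out_t_1). Sketch of Ptry/Preq."""
    for count in range(NR):
        confirmed = scheduler_replies[count] if count < len(scheduler_replies) else False
        if confirmed:          # event jc: Connect := T
            return True        # robot proceeds to the working state
    return False               # event discon: connection failure, back to Manual
```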
Pwork := Pexe[[+{halt, ack}]] ‖ Ppanel.
Ppanel := (p_s → Ppanel[hp_s] | p_h → Ppanel[hp_h] | p_r → Ppanel[hp_r] | p_e → Ppanel[hp_e] |


req_end → SKIP{}[hreq_end] | ack → SKIP{}[hack] | halt → SKIP{}[hhalt]).
The GCO[[+{halt, ack}]] captures the fact that these events are synchronised between Pexe and
Ppanel. Ppanel captures the fact that the command buttons are active as long as a remote end
request, an acknowledgement from the axis actuator or a halt has not taken place. Once these events
take place, the panel is reactivated at the proper time.
Pexe := prog → Presume[hprog].
Presume := Psensor; Paxis.
Psensor := int_s → Psread[hint_s].
Psread := read_s → Pscheck[hread_s].
Pscheck := (disp → ((poll_end → SKIP{}[hpoll_end]) ◁ k > NS ▷ Psread)[hdisp])
◁ SVD < CVD ▷ (disp → (alarm → SKIP{}[halarm ◦ hdisp])).
In Pexe the selected program is loaded and an interrupt signal is sent to the sensors. The
different sensors, NS in number, are read one by one. The data obtained from each sensor
are compared with the critical value of the corresponding physical parameter, and the data
are displayed. If the sensed data show a value in the normal range, the next sensor is read, and if
the reading of all the sensors is complete, Psensor ends with a poll_end signal. However, if
an abnormal value is detected, instead of reading the rest of the sensors, an alarm signal
is given out.
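The polling loop of Psensor/Psread/Pscheck can be sketched as follows; the returned event list is only meant to mirror the order of events in the processes above, and the names are illustrative.

```python
def poll_sensors(readings, critical):
    """Read sensors one by one; stop with an alarm at the first abnormal
    value (SVD not below CVD), otherwise finish with poll_end. Returns the
    resulting sensor flag SF together with the event sequence (sketch)."""
    events = ["int_s"]
    for svd, cvd in zip(readings, critical):
        events += ["read_s", "disp"]
        if svd >= cvd:             # abnormal value detected
            events.append("alarm")
            return 0, events       # SF := 0
    events.append("poll_end")
    return 1, events               # SF := 1
```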
Paxis := (int_A → int_ack → send → ack → Pnb[hack]) ◁ SF = 1 ▷ (halt →
Phalt[hhalt]).
In case of an alarm, the robot halts. Otherwise, communication is established with the
axis actuator via proper interrupts, the j-th block of the program is sent to the actuator, and an
acknowledgement is received upon proper completion of the instructions given in the block.
Note that, when Paxis is taking place, Ppanel has terminated (due to the synchronous events
halt or ack), making the command panel temporarily inactive.
Pnb := Pe_o_r ◁ DR = 1 ▷ ((Presume ‖ Ppanel) ◁ CF = Old ▷ Decide).
Before sending the next block of motion instructions, the disconnect request flag is
checked, and in case of such a remote end request, the robot ends its operation. In the absence
of such requests, the command flag is checked. If no new command button has been pressed, the
robot prepares to send the next block and the command panel is reactivated. Note that
the programs are repetitive, i.e., if all the blocks of the program have been sent, the robot sends the
first block once again. If a new command button has been pressed, it proceeds to check the validity
of the new command.
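The dispatch loop around Pnb, including the wrap-around of the block counter and the command-flag check, can be sketched as follows; mapping step indices to button presses is an illustrative stand-in for the command flag.

```python
def send_blocks(PL, commands, max_steps=10):
    """Repetitive block dispatch (sketch of Pnb): after each ack the command
    flag is checked; the block counter j wraps around past the program
    length PL. `commands` maps a step number to a newly pressed command."""
    j, sent = 1, []
    for step in range(max_steps):
        sent.append(j)                    # event send of block j, then ack
        j = 1 if j + 1 > PL else j + 1    # repetitive program wraps around
        if step in commands:              # CF = New: go to Decide
            return sent, commands[step]
    return sent, None
```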
Phalt := (set_t_2 → (out_t_2 → Phalt | p_s → Decide[hp_s] | p_r → Decide[hp_r] | p_h →
Decide[hp_h] | p_e → Decide[hp_e])).
In the halt process a timer is set and the controller waits for some command button to


be pressed.
Pe_o_r := (e_o_r → Pend[he_o_r]).
Pend := prog_T → Pintr[hprog_T].
Pintr := (int_A → int_ack → send → ack_T → (Manual ◁ j > PL ▷ Pintr)[hack_T]).
In the terminating procedure, the special terminating program is brought to the buffer
and sent to the actuator blockwise. At the end of this procedure the robot goes back to
the manual mode.
In the above example we have seen how different features of EFRP, including variables
and logical decisions, can model the logical structure of the robot controller compactly.
The state-continuity constraints of the different EDCO operators are satisfied because of
the use of the AO operators. The constraints of EPCO and ESCO are also satisfied quite
naturally. However, the silent transition has not been used in this model, though its use has
been shown in the previous chapter.
Finally, in the next chapter, we present an assessment about the current work.

Chapter 7
Conclusions
The DFRP framework based on process algebra is a relatively new candidate for modelling
DES. The work presented in this thesis is only a first step towards establishing this as
a powerful but tractable modelling framework for DES. We conclude this work with an
assessment of the framework and some future research problems that will have to be tackled
to enhance its prospects in practical applications.
State- and graph-based formalisms such as Petri Nets (PN) and State Charts (SC) are
established frameworks which are popular for their inherent simplicity and graphical
interfaces. Languages like STATEMATE [15], based on SC, and Grafcet [74], based
on PN, have been developed and are used in industry. But these models become
unwieldy when the number of states of the system increases. The DFRP model with
its operators for process composition and recursive description is suitable for compact
descriptions of complex systems. Therefore, process algebraic frameworks should be
thought of as complementary models to the state based frameworks, instead of as
alternatives. But, in the absence of graphical interfaces, these are not very user-friendly.
Thus, user-friendly modelling and analysis tools for DES should be developed, which
will support multiple formalisms so that smaller systems can be modelled using graphical or state-based models for easy understandability. Later, these models can be
treated as processes and can be combined using process operators to build models of
complex systems.
Modularity and hierarchy are important requirements of any modelling framework
for complex systems. The original DFRP framework does not support these features
explicitly and is as flat as FSM and PN. Efforts have been made to include hierarchical features in PN [75] and FSM [32]. In the case of the DFRP framework, a concept
of module is defined in [21]. Using this concept, in this work, a bounded hierarchical
structure has been proposed for modelling of DES.
However, as mentioned earlier, there are important issues that have not been discussed in this thesis but should be addressed regarding hierarchical characterisation
of DFRPs. Firstly, it will be necessary to design algorithms to determine whether
bounded and hierarchical characterisation is possible for a given DFRP. Secondly, a
hierarchical DFRP, that is bounded, can always be implemented using a finite number of states and different properties like reachability, deadlock, liveness, language
equivalence etc., can be found out by adapting standard FSM-based algorithms to
this finite state implementation. However the associated state-space tends to assume
a large size due to the product spaces that arise in presence of concurrent subsystems and FSM-based algorithms become inefficient and computationally expensive
because of state explosion in the implementation. For hierarchical DFRPs, therefore,
properties, which are amenable to hierarchical analysis, should be identified. Efficient
algorithms, which make use of the underlying structure and may avoid computation
over product spaces, can then be designed to analyse these properties. This will result
in a significant reduction of the size of the search space and associated computational
advantages over flat DFRP.

The problem of supervision is well researched in the FSM model. It is of immediate interest to define this problem for unbounded and bounded subclasses of the
DFRP model and to solve it while taking advantage of the high-level, recursive
and hierarchical structure of DFRP. Once again, it should be noted that straightforward application of FSM based algorithms for controller synthesis will not be
computationally attractive for bounded DFRPs and will fail in case of an unbounded
DFRP. In this context, a compositional way of controller specification may be useful.
For individual small bounded modules, one may use FSM-based controller synthesis
approach. Later, these individual controllers can possibly be connected hierarchically using DFRP operators to meet the overall control requirement. For unbounded
DFRPs one perhaps cannot hope for a generic procedure for controller synthesis. But
specific subclasses of unbounded DFRP and associated control requirements may be
identified for which controller synthesis is possible in a finite number of steps.
A contribution of this work is the nondeterministic extension of the DFRP model

151
to include the effects of event concealment under the setting of variable alphabet.
Contrasted with the refusal-based external viewpoint of NCSP, a possible-futures-based
internal viewpoint of nondeterminism is used in this work. Also, for the sake of
simplicity, divergence (result of infinite hiding) has not been treated in this work.
The following research problems may be taken up to consolidate this work:
Firstly, the nondeterministic extension is too general and it is necessary to identify bounded subclasses (in the sense of DFRP) of the nondeterministic model for
further analysis. Secondly, concealment of events in any DFRP or NFRP leads to
new nondeterministic processes. The central problem of observation is to identify
whether infinite hiding is taking place and if not, how to construct a finitely recursive characterisation of the resultant nondeterministic process. It will be useful
to identify suitable subclasses of both DFRP and NFRP where this problem will be
solvable. Finally, a suitable formulation and solution to the problem of supervision
under partial observation for different subclasses of this model are other important
tasks.
Prominent DES researchers are concerned about the fact that at such an early stage
of research in this area, two distinct schools of thought are emerging, without much
mutual interaction. On one hand there is the logical model school, to which this
thesis belongs, which is engrossed in modelling and analysis of logical structure of
the event dynamics and associated problems. On the other hand the performance
model school is interested in numerical performance related analysis in a stochastic
setting. Queueing methods and Markov Chains are major methods in this analysis
and in most cases, analytical solutions are obtained only after assuming quite simple
logical structure of underlying event dynamics. The popularity enjoyed by Petri Nets
is to an extent derived from their ability to deal with structural issues [12] as well as
numerical performance related questions via generalised Stochastic Petri Nets [10].
In this work, a step towards combining these divergent issues in the DFRP formalism,
has been taken, via the introduction of the EFRP model. Besides structural issues like
liveness, deadlock, safety, reachability, etc., quantitative issues like timing constraints
such as deadlines, time bounds and deterministic performance measures can also be
modelled in this framework by virtue of the presence of variables. The following
problems should be investigated in order to make this extension more useful.
Firstly, for general EFRP, it is not known how the different state-continuity constraints
can be verified in a recursive way. To solve these problems, as well as to attempt the
different control and observation problems, bounded and tractable subclasses, in the
sense of those in Chapter 3, should be identified. Secondly, EFRP,
with its state-based features may be quite useful for formal specification. It will
be worthwhile to investigate whether formal specification languages like LOTOS [25]
and associated verification tools can be developed over the EFRP model for exploring
properties related with supervision, logical correctness etc. Thirdly, in EFRP, timing
features are modelled indirectly through TTM or logical branching operators. It will
be useful to compare this model with timed CSP [76] or with the real time semantics of
DFRP provided by Cieslak [58], and to integrate the different features if possible.

The DES modelling tools allow one to model a physical system at a certain level of abstraction such that the underlying continuous variable dynamics is often obliterated.
But for successful design and analysis of systems, it sometimes becomes necessary
to integrate continuous and discrete dynamics in a unified framework. Extension of
the process algebraic framework to handle such hybrid features will definitely be a
major challenge to future researchers.
Thus, we come to the end of this thesis. Looking back, the author wonders whether
this thesis has been able to bring some order out of the chaos that exists in this field.
This work would be considered to have had some success only if it can create some interest
in the process algebraic approach to DES modelling and motivate researchers to take on
the adventures of real world applications.

Appendix A
Proofs of Some Results
Proof of theorem 2.2.2:
(⇒) Let {P1, …, Pn} be a family of MRP w.r.t. <n(G, )>. By the one step expansion
formula, each process Pi can be expressed as
Pi = (σi1 → Pi/<σi1> | … | σiki → Pi/<σiki>)mi, where mi = Pi(<>).
By definition 2.2.10, each post-process Pi/<σij> can be expressed as <fij>(P) for some
<fij> ∈ <n(G, )>. Thus Pi = (σi1 → <fi1>(P) | … | σiki → <fiki>(P))mi. Now
we construct the collection of recursive equations X = F(X), X 0 := P 0, where each
component Fi of F is obtained from the corresponding one step post-process expression of
Pi given above, i.e.,
Xi = Fi(X) := (σi1 → <fi1>(X) | … | σiki → <fiki>(X))mi.
It is obvious that each Fi is guarded, continuous and ndes. To see that X 0 = P 0 is
consistent, all we have to show is that, for any i, Pi 0 = (Fi(P 0)) 0. If assumption
(b) of the theorem is satisfied, then by the constancy of the 0 function, the above consistency
requirement is trivially satisfied. On the other hand, suppose (a) is satisfied, i.e., 0 is not
necessarily constant but <G> is an MRS family. Also, by fact 2.2.2, each function in
<n(G, )> is spontaneous. Then note that
Pi 0 = (σi1 → <fi1>(P) | … | σiki → <fiki>(P))mi 0
= (σi1 → (<fi1>(P)) 0 | … | σiki → (<fiki>(P)) 0)mi 0   (as Fi is ndes)
= (σi1 → (<fi1>(P 0)) 0 | … | σiki → (<fiki>(P 0)) 0)mi 0   (as each <fij> is ndes)
= (σi1 → <fi1>(P 0) | … | σiki → <fiki>(P 0))mi 0   (as each <fij> is spontaneous)
= (Fi(P 0)) 0.
Clearly, the conditions of theorem 2.2.1 are satisfied by the above collection of recursive
equations, and (P1, …, Pn) is the unique solution.
(⇐) Conversely, suppose P is the unique solution of equations (2.3) and (2.4). For any
arbitrary Pi in P, one can show MR-ness using induction on the length of s ∈ tr Pi.
The basis case is trivially true, as Pi/<> = <Proji>(P). Assume that, for some
s ∈ tr Pi, Pi/s = <f>(P) for some <f> ∈ <n(G, )>. Then
Pi/(s^<σ>) = (Pi/s)/<σ> = (<f>(P))/<σ>. We claim that there exists
<f'> ∈ <n(G, )> such that (<f>(P))/<σ> = <f'>(P). This proves our
result for Pi/(s^<σ>), and the rest follows by induction.
All we need now is to prove the claim made above, and for this we again use induction
on the number of steps in definition 2.2.9 needed to construct <f> ∈ <n(G, )>.
If f ∈ C, i.e., <f>(P) = P 0 for some P 0, then tr <f>(P) ≠ {<>} implies
that, for any <σ> ∈ tr <f>(P), (<f>(P))/<σ> = (P 0)/<σ>. Now, using
condition (3) of definition 2.1.6, we have the following. If P 0 = (σ1 → (P 0)/<σ1> |
… | σk → (P 0)/<σk>)m with m = (P 0)(<>), then
(P 0) 1 = (σ1 → ((P 0)/<σ1>) 0 | … | σk → ((P 0)/<σk>) 0)m. However,
by (1) of definition 2.1.6, (P 0) 1 = P 0. Comparing, one can conclude that for any
<σ> ∈ tr P 0, (P 0)/<σ> = ((P 0)/<σ>) 0. Since the process on the right
hand side belongs to 0, one can find c' ∈ C such that c' = r(((P 0)/<σ>) 0).
Thus (<f>(P))/<σ> = <f'>(P) for some f' = c' ∈ C ⊆ n(G, ).
If f ∈ Proj(n), i.e., <f>(P) = Pi for some i, then by equation 2.4,
<f>(P)/<σ> = Pi/<σ> = <fij>(P) for some <fij> ∈ <n(G, )>, as σ = σij
for some j.
Finally, suppose the claim is true for some <f1>, …, <fk> in <n(G, )>, and let
f = g ∘ (f1, …, fk) for some k-ary function <g> ∈ <G>. Now
<f>(P)/<σ> = (<g>(<f1>(P), …, <fk>(P)))/<σ>. The MR-ness of
<G> guarantees the existence of <g'> ∈ <G> such that
<g>(<f1>(P), …, <fk>(P))/<σ> = <g'>(P'1, …, P'm'), where for each
j, 1 ≤ j ≤ m', there exists i, 1 ≤ i ≤ k, with P'j = <fi>(P) or P'j = <fi>(P)/<σ>. By the
induction hypothesis, any <fi>(P)/<σ> can be expressed as <f'i>(P) for some
<f'i> ∈ <n(G, )>. For each j, 1 ≤ j ≤ m', let hj be the element of n(G, ) such
that hj = fi if P'j = <fi>(P), or hj = f'i if P'j = <fi>(P)/<σ>. Next we construct
f' ∈ n(G, ) such that <f'>(P) = <g'>(<h1>(P), …, <hm'>(P)), using the
following recursive syntactic transformation G:
f' = G(g', h1, …, hm') :=
  g'(h1, …, hm')   if g' ∈ G,
  ḡ(G(ḡ1, h1, …, hm̄1), …, G(ḡk, hm'−m̄k+1, …, hm'))   otherwise,
where g' = ḡ ∘ (ḡ1, …, ḡk) is in Ḡ, ḡ ∈ G, each ḡj ∈ Ḡ, <ḡj> has an arity m̄j, and
m' = m̄1 + … + m̄k. Note that if, for some j, ḡj = Id, then m̄j = 1 and
G(ḡj, hmr) := hmr, where mr = m̄1 + … + m̄j−1 + 1. Clearly f' ∈ n(G, ) and
(<f>(P))/<σ> = <f'>(P). This completes our proof.
□
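To illustrate the forward construction, consider a hypothetical two-process MRP family (not taken from the thesis) in which every post-process happens to be one of the Pi themselves; the one step expansions then translate directly into guarded recursive equations:

```latex
\begin{align*}
  &\text{One step expansions of an assumed family } \{P_1, P_2\}:\\
  &P_1 = (a \to P_1/\langle a\rangle \;|\; b \to P_1/\langle b\rangle),
   \qquad P_2 = (c \to P_2/\langle c\rangle),\\
  &\text{with } P_1/\langle a\rangle = P_2,\quad P_1/\langle b\rangle = P_1,\quad
   P_2/\langle c\rangle = P_1.\\
  &\text{Constructed guarded recursive equations } X = F(X):\\
  &X_1 = F_1(X) := (a \to X_2 \;|\; b \to X_1),
   \qquad X_2 = F_2(X) := (c \to X_1).
\end{align*}
```

Here each post-process function <fij> is simply a projection, so the constructed F is trivially guarded, continuous and ndes, and (P1, P2) is its unique solution.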

Proof of Lemma 3.2.1:

The lemma is proved with the help of structural induction on the construction of
h ∈ n(GD, D)k. We use the following induction proposition involving h.
P(h): If f = CF1(h) then (i) f satisfies the claims (1)-(8); (ii) for any u ∈ (Sub(f) − {f})
and nonempty B', CF1(ū[[B']]) satisfies (1)-(8), where ū is the result of removal of the
outermost quotation marks from u.

Basis: If h = STOPA or Pi, it can be seen that P(h) is trivially true.

Hypothesis: Let P(h) be true for some h ∈ n(GD, D)k.

Step: We have to prove that, for any nonempty B and C, each of P(h[[B]]), P(h[[+C]]),
P(h[B]) and P(h[+C]) is true. Also, if the hypothesis is such that for some
h1, …, hm ∈ n(GD, D)k each of P(hi) is true, then P(‖i=1..m hi) must also be shown
to be true in the induction step.

Case 1: Let v = h[[B]]. To prove the veracity of P(v), we have to compute CF1(v)
first. Consider the following possibilities on CF1(h).

Case 1.1: CF1(h) = STOPA. Then CF1(v) = STOPA∪B. P(v) is trivially true.

Case 1.2: CF1(h) = Pi. Then CF1(v) = Pi[[B]]. Without losing any generality of
the result, we have assumed here that neither F(v) is empty (which makes it a STOP
expression) nor B ⊇ MA(CF1(h)) (in which case B is simply removed). Now Pi[[B]]
trivially satisfies (1)-(8). Also the only subexpression of Pi[[B]], different from the original, is
Pi. Then CF1(Pi[[B']]) = Pi[[B']], which also trivially satisfies (1)-(8). Thus both
(i) and (ii) of P(v) are satisfied in this case.

[[B ]][[+C1 ]][B2 ][+C2 ]


Case 1.3: CF1 (h) =Pi 1
. Note that we are representing a general
structure of U1 so that any of B1 , B2 , C1 , C2 can be , indicating that the corresponding

[[BB
1 ]][[+(C1 B)]][(B2 B)][+(C2 B)]
operator symbol is absent. Then CF1 (v) = Pi
, which is
clearly an element of U1 . Since by induction hypothesis P(h) is true, claims (2) (6)
[[B ]][[+C1 ]][B2 ][+C2 ]
are satisfied by every subexpression of Pi 1
. It is then easy to see that

156

APPENDIX A. PROOFS OF SOME RESULTS

CF1 (v) also satisfies (2) (6). Claim (7) follows from the fact that as claims (1) (6) are
already satisfied by CF1 (v), a further application of CF1 () on it does not make any more
structural change. Finally as by induction hypothesis < h > (X) = < CF1 (h) > (X) =

[[B ]][[+C1 ]][B2 ][+C2 ]


[[B]]
> (X), we have < v > (X) = < h
> (X) =
< Pi 1

[[B1 ]][[+C1 ]][B2 ][+C2 ]


[[B]]
[[B]]

(< h > (X))


= (< Pi
> (X))

[[BB
1 ]][[+(C1 B)]][(B2 B)][+(C2 B)]
= < Pi
> (X)= < CF1 (v) > (X). Hence (8) is satisfied.
Together (i) of P(v) is satisfied. To prove (ii), consider the different subexpressions u0

[[BB
[[BB
1 ]]
1 ]][[+(C1 B)]]
of CF1 (v), different from itself. They are Pi , Pi
, Pi
and

[[B 00 ]]
[[BB
1 ]][[+(C1 B)]][(B2 B)]
Pi
respectively. Corresponding CF1 (u0
) expressions will
00 ]]
00
00

[[B 00 ]]
[[BB
B
[[BB
1
1 B ]][[+(C1 (BB ))]]
be Pi
, Pi
, Pi
and
00
00
00

[[BB
1 B ]][[+(C1 (BB ))]][(B2 (BB ))]
Pi
respectively. Each of them satisfies (1) (8).
Thus (ii) of P(v) is also satisfied.

Case 1.4: Let CF1(h) = (‖i=1..m vi)[[+C1]][B2][+C2]. Then CF1(v) =
(‖i=1..m CF1(vi[[B]]))[[+(C1−B)]][(B2−B)][+(C2−B)]. Now, at this stage, we see the requirement
of the second part of the induction hypothesis. Since P(h) is true, by (ii) of P(h) (taking u =
vi and B' = B), each of CF1(vi[[B]]) satisfies (1)-(8). Also by the induction hypothesis
C1 ∩ MP(‖i=1..m vi) = ∅, B2 ⊆ F(‖i=1..m vi) and (B2 ∪ C2) ⊆ F(‖i=1..m vi). This implies
that (C1−B) ∩ MP(‖i=1..m CF1(vi[[B]])) = ∅, (B2−B) ⊆ F(‖i=1..m CF1(vi[[B]])) and
((B2−B) ∪ (C2−B)) ⊆ F(‖i=1..m CF1(vi[[B]])). Combining the above facts, one can easily
conclude that (i) of P(v) is satisfied.
To prove part (ii) of P(v), we consider any u' ∈ (Sub(CF1(v)) − {CF1(v)}) and nonempty
B''. It is easy to see that for any such u' there exists u ∈ (Sub(CF1(h)) − {CF1(h)})
such that u' = CF1(ū[[B]]), and then CF1(ū'[[B'']]) = CF1(ū[[B''∪B]]). Again by part
(ii) of P(h) (taking B' = B''∪B), CF1(ū[[B''∪B]]) satisfies (1)-(8). Hence so does
CF1(ū'[[B'']]). Thus (ii) of P(v) is also proved.
Combining the results of the above subcases, we find that P(v) is satisfied in Case 1.

Case 2: Let v = h[[+C]]. Again, to prove P(v), we first compute CF1(v). Consider
the following possibilities on CF1(h). Without losing any generality of the result, we can
assume C ⊄ MP(CF1(h)) (otherwise CF1(v) = CF1(h)).

Case 2.1: CF1(h) = STOPA. Then CF1(v) = STOPA∪C. P(v) is trivially true.

Case 2.2: CF1(h) = Pi. Then CF1(v) = Pi[[+C]]. Satisfaction of P(v) follows
identically as in Case 1.2.

Case 2.3: CF1(h) = Pi[[B1]][[+C1]][B2][+C2]. Then CF1(v) =
Pi[[B1]][[+(C1∪(C−C1))]][B̄][+(C̄∪(C2−C))], which is clearly an element of U1. Here B̄ :=
(B2−C) ∪ C̄ and C̄ := (B2 ∩ C ∩ F(Pi[[B1]][[+C1]])). Now, satisfaction of P(v) in this
case follows identically as in Case 1.3.

Case 2.4: CF1(h) = (‖i=1..m vi)[[+C1]][B2][+C2]. Then CF1(v) =
(‖i=1..m vi)[[+(C1∪(C−C1−MP(‖i=1..m vi)))]][B̄][+(C̄∪(C2−C))], which is clearly an element of U2 ∪ V1.
Here B̄ := (B2−C) ∪ C̄ and C̄ := (B2 ∩ C ∩ F((‖i=1..m vi)[[+C1]])). Since CF1(h) satisfies
(1)-(8), it is easy to see that CF1(v) will also satisfy them. To prove (ii) of P(v), we consider
any u' ∈ (Sub(CF1(v)) − {CF1(v)}) and nonempty B''. It is easy to see that any
such u' either belongs to Sub(‖i=1..m vi), or it is one of (‖i=1..m vi)[[+(C1∪(C−C1−MP(‖i=1..m vi)))]]
or (‖i=1..m vi)[[+(C1∪(C−C1−MP(‖i=1..m vi)))]][B̄]. After applying GCO[[B'']] on any of these
expressions, it can be easily seen that CF1(u'[[B'']]) is some subexpression of
(‖i=1..m CF1(vi[[B'']]))[[+((C1∪(C−C1−MP(‖i=1..m vi)))−B'')]][(B̄−B'')]. Then satisfaction of (1)-(8)
by CF1(u'[[B'']]) follows from part (ii) of the hypothesis itself.
Combining the results of the above subcases, we find that P(v) is satisfied in Case 2.

Case 3: Let v = h[B]. P(v) can be proved in a similar way as in Case 2 and the
proof is not elaborated here.

Case 4: Let v = h[+C]. P(v) can also be proved in a similar way as in Case 2.
Case 5: Let v = ‖i=1..m hi such that each of P(hi) is true. Now CF1(v) = ‖i=1..m CF1(hi).
Then (i) of P(v) follows immediately from the fact that each of the P(hi) is true.
To prove (ii) of P(v), we consider any u' ∈ (Sub(CF1(v)) − {CF1(v)}) and nonempty
B''. We have to show that CF1(u'[[B'']]) satisfies (1)-(8). There are two possibilities.
Case 5.1: The first possibility is that u' = CF1(hj) for some j. To compute CF1(ū'[[B'']]),
we have to consider the different possible structures that CF1(u') can possess. Note, however,
that as u' = CF1(hj) and P(hj) is true, by (i)(f) of P(hj), CF1(u') = u'.
Case 5.1.1: Let CF1(u') = STOPA. Then CF1(ū'[[B'']]) = STOPA∪B'', which satisfies (1)-(8).
Case 5.1.2: Let CF1(u') = Pi. Then CF1(ū'[[B'']]) = Pi[[B'']], which satisfies (1)-(8) trivially.
Case 5.1.3: CF1(u') = Pi[[B1]][[+C1]][B2][+C2]. Then CF1(u'[[B'']]) =
Pi[[B''∪B1]][[+(C1−B'')]][(B2−B'')][+(C2−B'')], which is clearly an element of U1. As CF1(u') =
u' = CF1(hj) and P(hj) is true, by Case 1 of this proof it is easy to see that CF1(u'[[B'']])
will also satisfy (1)-(8).
Case 5.1.4: CF1(u') = (‖i=1..m vi)[[+C1]][B2][+C2]. Then CF1(u'[[B'']]) =
(‖i=1..m CF1(vi[[B'']]))[[+(C1−B'')]][(B2−B'')][+(C2−B'')]. Since P(hj) is true, by part (ii) of
P(hj) (taking u = vi and B' = B''), each of CF1(vi[[B'']]) satisfies (1)-(8). By the induction
hypothesis C1 ∩ MP(‖i=1..m vi) = ∅, B2 ⊆ F(‖i=1..m vi) and (B2 ∪ C2) ⊆ F(‖i=1..m vi).
This implies (C1−B'') ∩ MP(‖i=1..m CF1(vi[[B'']])) = ∅, (B2−B'') ⊆
F(‖i=1..m CF1(vi[[B'']])) and ((B2−B'') ∪ (C2−B'')) ⊆ F(‖i=1..m CF1(vi[[B'']])). Combining
the above facts, one can easily conclude that (1)-(8) are satisfied by
CF1(u'[[B'']]).
Combining the results of the above subcases, we find that P(v) is satisfied in Case 5.1.
Case 5.2: The other possibility is that u' ∈ (Sub(CF1(hj)) − {CF1(hj)}) for some j.
Clearly, by (ii) of P(hj), CF1(u'[[B'']]) satisfies (1)-(8). Hence part (ii) of P(v) is
satisfied.
Combining the results of the above subcases, we find that P(v) is satisfied in Case 5.
By the principle of induction, the lemma is found to be true.
□

Proof of Lemma 3.2.3:

(a) First we show that CF2 results in a unique expression irrespective of the order of
merging that takes place in different steps of M. To prove this, we have to show that the
procedure M terminates in a finite time, resulting in a unique set of expressions. This is
proved using structural induction on the construction of elements in CF1(n(GD, D)k). We
use the following induction proposition.

Induction Proposition P(U): Let U be a subset of CF1(n(GD, D)k) that satisfies the following properties.
(i) M(u) is uniquely defined and finitely computable for any u ∈ U.
(ii) For some C1, B2, C2, if M(u) := {(‖i=1..r u'i)[[+C1]][B2][+C2]} or {u'1, …, u'r}, then no
u'i mR u'j.
(iii) M is component preserving, i.e., Comp(u) = ∪g∈M(u) Comp(g).
(iv) Let {u1, …, um} ⊆ U be such that for any i, j, ui mR uj.
Then W = M(Mm(… Mm(Mm(ui1, ui2), ui3), …, uim)) has the following properties:
(a) it is finitely computable and uniquely defined irrespective of the choice of i1, …, im, as long
as they are m distinct elements of {1, …, m}; (b) it is a subset of U; (c) ∪i=1..m Comp(ui) =
∪g∈W Comp(g); (d) for some C1, B2, C2, if W = {(‖i=1..r u'i)[[+C1]][B2][+C2]} or {u'1, …, u'r},
then no u'i mR u'j.

(v) For some u ∈ U such that, if u = (‖j=1..r u'j)[[+C1]][B2][+C2], no u'k mR u'l, let {u1, …, um}
be a set of elements of U such that (1) for all i, u mR ui but not ui mR u, and (2) there exist
no i, j such that both uj mR ui and ui mR uj.
Then W = M(Mm(… Mm(Mm(u, ui1), ui2), …, uim)) satisfies the following properties:
(a) it is finitely computable and uniquely defined irrespective of the choice of i1, …, im,
as long as they are m distinct elements of {1, …, m}; (b) Comp(u) ∪ (∪i=1..m Comp(ui)) =
∪g∈W Comp(g); (c) if W = {(‖i=1..r u'i)[[+C1]][B2][+C2]} or {u'1, …, u'r}, then each u'i ∈ U
and no u'i mR u'j.

Basis: Let U = U1. Note that each element of U1 is a variant of some component expression.
Therefore, for any two elements u1, u2, if u1 mR u2, then the reverse is also
satisfied, i.e., u2 mR u1, and Mm(u1, u2) = Mm(u2, u1), which itself is another variant of
the same component expression as that of u1, u2. Now it can be easily seen that P(U1) is
satisfied.
Hypothesis: Let P(U) be satisfied by some U.

Step: Let V ⊆ CF1(n(GD, D)k) be such that (a) U ⊆ V, and (b) for any C1, B2, C2 and
ui ∈ U, 1 ≤ i ≤ m, v = (‖i=1..m ui)[[+C1]][B2][+C2] ∈ V. We have to show that P(V) is
satisfied.

If v ∈ U, (i) of P(V) is trivially satisfied. Otherwise let v = (‖i=1..m ui)[[+C1]][B2][+C2]
∈ V − U, for some C1, B2, C2, where each ui ∈ U. Using the steps of M, we find that in
this case M(v) is determined by M(‖i=1..m ui). While computing this, note that each step
in the While-do loop, as well as in the overall iteration, is finitely computable and uniquely
defined because of the induction hypothesis. The While-do loop terminates when, in step
3 of the loop, each application of M() over merging in a partition results in a singleton
set. Note that there are only a finite number of repetitions (upper bounded by the number of
nodes in Tree(‖i=1..m ui)) of the loop in which, in step 3 of the loop, some application of
M() in a partition may result in a non-singleton set. For a similar reason, the overall
repetition, as indicated in Step M.5, also terminates in a finite time, giving a unique result.
Thus (i) of P(V) is satisfied.
Similarly, (ii) and (iii) of P(V) follow directly from the steps of M, lemma 3.2.2 and
(ii), (iii), (iv) and (v) of P(U).
To show that (iv) of P(V) is satisfied, consider {v1, …, vm} ⊆ V such that for any i, j,
vi mR vj. There are two possibilities.
The first possibility is that each vi is a variant of an identical component. Then either
each vi ∈ U and (iv) of P(V) is satisfied by (iv) of P(U), or, for each i, vi ∈ V − U and
it has a structure (‖j=1..mi uij)[[+C1^i]][B2^i][+C2^i], for some C1^i, B2^i and C2^i, where each uij ∈ U,
for all i the ‖j=1..mi uij are syntactically identical to each other (denote this as h), and the
sets B2^i ∩ F(h) are all equal (denote this as B2^t) and nonempty. Using the second condition of merging
(definition 3.2.4) and the definition of M, we find W = {CF1(ḡ[[+C1]][B2][+C2])}. Here ḡ = f
if M(h) = {f}; otherwise ḡ = ‖f∈M(h) f. Here C1 = ∪i=1..m C1^i, B2 = B2^t ∪ (B2^a − ∪i=1..m C2^i),
where
B2^a = ∪i=1..m ((B2^i − F(h)) ∪ ((∪j≠i C1^j) ∩ F(h) − C1^i)), and
C2 = ((∪i=1..m C2^i) − B2) ∪ ((∪i=1..m C2^i) ∩ F(h) ∩ (∪i=1..m C1^i)). Since h = ‖j=1..mi uij for any i and each
uij ∈ U, by the steps of M and (i), (ii), (iii) of P(V) we find that M(h) is a finitely computable,
unique subset of expressions from U such that if M(h) := {(‖i=1..r f'i)[[+C1]][B2][+C2]} or
{f1, …, fr}, then no fi mR fj. Together we find that the four sub-properties of condition
(iv) of P(V) are satisfied.
The other possibility is that, for each i, either vi ∈ U, or vi ∈ V − U has a structure
(‖j=1..mi uij)[[+C1^i]][B2^i][+C2^i], for some C1^i, B2^i and C2^i, where uij ∈ U and B2^i ∩ F(‖j=1..mi uij)
is empty. Also, for all i, the sets Comp(vi) − {STOP{}} are equal. Let {k1, …, kp} be the subset
of {1, …, m} where vkj ∈ V − U. The rest of the elements are (say) {vl1, …, vlq}, where
each vlj is in U. The merging of all these vi's in arbitrary order gives rise to nonunique
expressions. However, once we apply M, we immediately obtain an intermediate unique
result, where M will be applied on the unique expression of the form
(v̄l1 ‖ … ‖ v̄lq ‖ ūk1,1 ‖ … ‖ ūkp,mkp)[[+C1]][B2][+C2] = (‖i=1..n ūi)[[+C1]][B2][+C2] (say),
where naturally each ūi ∈ U, and C1, B2 and C2 are three unique subsets obtained (irrespective
of the order of merging) from repeated application of the third condition of merging
(definition 3.2.4) and CF1. Note that this expression contains all the components from
each vi. Also, for any li, lj, ki, vli mR vlj and also ‖j=1..mki ukij mR vli. Applying the steps of M
and using these facts, as well as (i), (ii), (iii) of P(V) and (iv) and (v) of P(U), we find
that (iv) of P(V) is also satisfied.

Finally, to show (v), consider some v ∈ V such that v = (‖i=1..r ui)[[+C1]][B2][+C2] for
some C1, B2, C2, where each ui ∈ U and no ui mR uj. Let {v1, …, vm} ⊆ V be such that (1) for all i,
v mR vi but not vi mR v, and (2) there exist no i, j such that both vj mR vi and vi mR vj. From the above,
we can conclude that v is not a variant of any component expression and B2 ∩ F(‖i=1..m ui)
is nonempty. Then
Mm(… Mm(Mm(v, vi1), vi2), …, vim) = (ū1 ‖ … ‖ ūr ‖ v̄1 ‖ … ‖ v̄m)[[+C̄1]][B̄2][+C̄2],
where C̄1 = C1 ∪ (∪i=1..m Mp(vi)), B̄2 = B2 − ∪i=1..m F(vi) and C̄2 = C2 − ∪i=1..m F(vi). It is a
unique expression irrespective of the order of merging. Note that, for any i, 1 ≤ i ≤ m, if vi ∉ U
then vi has a structure (‖j=1..mi uij)[[+C1^i]][B2^i][+C2^i], for some C1^i, B2^i and C2^i, where uij ∈ U.
Also, as vi merges into v, or rather into ‖i=1..m ui, we can conclude that B2^i ∩ F(‖j=1..mi uij)
is empty. Using these properties, the steps of M, (i), (ii) and (iii) of P(V) and (iv) and
(v) of P(U), we find that P(V) will be satisfied.
Together, the induction proposition is found to be true for V. Now, using the principle of
induction, we find that for every h ∈ CF1(n(GD, D)k), M(h) satisfies the above properties. This
in turn shows that (a) of lemma 3.2.3 is satisfied.
Finally, using the facts that CF2 is applied recursively on all arguments of PCO, as well
as the results of lemma 3.2.2 and the above induction, the other claims of lemma 3.2.3 can
easily be shown to be true.
□

Algorithm to decide boundedness of A(<n(HD; , D)>)

Input: The process realisation X = F(X), and Y = <Pi>(X), for some i, 1 ≤ i ≤ n.
(Remark: Data-structure)
Structure node {
integer :: count, level;
boolean :: mark;
expression:: E;
Array of pointers to nodes:: Child[ ]}
(Remark: A structure called node is used here. A node N has the following fields. N.count
keeps track of the number of children attached to N. N.level denotes the level of the tree
at which N is placed in the tree. N.E is the expression present in N. N.mark denotes
the status of the node, which is either expanded or unexpanded. Finally, for each i,
1 ≤ i ≤ N.count, N.Child[i] is a pointer that points to the i-th child node of N. The
node pointed to by this pointer is denoted as *(N.Child[i]).)
Procedure: Main
{
structure node :: Root;
boolean :: status;
boolean buildtree (structure node *root, *link, *current);
(Remark: The Main procedure is the top level procedure in the algorithm. It uses the
variable status to store the result of the recursive function buildtree which indicates whether
the process mentioned in the Input is bounded or unbounded. The pointer variable Root is
used to initialise the procedure by pointing to a node. This node is used as the root node
for the tree to be built by buildtree function.)
Create(Root);
(*Root).level := 0;
(*Root).count := 0;
(*Root).mark := unexpanded;
(*Root).E := Pi ;
status := buildtree (Root, Root, Root);
}
(Remark: The root node is created and assigned with level 0 and an expression Pi which
is mentioned in the Input of the procedure. Initially no child is attached to it and it is yet
to be expanded. The buildtree function is called with all arguments pointing to the root
node)


Procedure: Buildtree
boolean buildtree (structure node *root, *link, *current)
{
structure node :: *new;
boolean :: temp-status, new-status;
boolean attach(structure node n1 , n2 );
(Remark: The Buildtree procedure uses three pointers as its arguments. The pointers
point to the root node (R), the link node and the current node (N) respectively. Also,
the node pointer new points to the node Nnew which contains the post-process expression
(N.E)/σ for different σ. In this procedure, after finding the appropriate position for Nnew
from table 3.4, the function attach is used to attach this node to the tree. Before
attaching, however, it is checked whether Nnew.E is already present in some other child of
the node where Nnew is to be connected. If so, Nnew is not connected to the tree and
the attach function sets the new-status variable to old. Otherwise the node is connected to
the tree and new-status is assigned the value new.)
If (∃N' ≠ current | root → N' → current, N'.E = (*current).E) ∨
((*current).E = STOPA) ∨ ((*current).E = SKIPA) then
{ (*current).mark := expanded;
temp-status := bounded;
return temp-status
}
(Remark: If N.E indicates any loop, or bears a STOPA or SKIPA type of expression,
then it is not expanded further and the current (most recent) call of the recursive function
buildtree terminates, returning a bounded status.)
else
for (σ ∈ F((*current).E))
{
Create(new);
(*new).E := (*current).E/σ;
if (combination is any of (f ) vs (1) or (a)-(d) vs (1)-(4), except (b) vs (1)) then
new-status := attach(current, new);
else if (combination is any of (e) vs (1)-(4), or (b) vs (1)) then
new-status := attach(root, new);
else if (combination is any of (f ) vs (2)-(4)) then
new-status := attach(link, new);
If (new-status = new) ∧ (∃N' ≠ new | root → N' → new, N'.E and
(*new).E begin with the same argument, len(N'.E) < len((*new).E), and
it is not the case that N'.E = Pj and (*new).E = Pj ; SKIPA) then
{ temp-status := unbounded;
(*current).mark := expanded;
return temp-status
}
free(new);
}
temp-status := bounded;
(Remark: Once Nnew is attached to the tree, the path from R to Nnew is checked for unboundedness.
If the condition of unboundedness is satisfied, the for loop as well as the most recent call of
buildtree terminates, returning an unbounded status. Otherwise the for loop terminates,
indicating that the unboundedness condition has not yet been found true. The new pointer is
freed for further use. Since the search is depth-first, we now proceed to build the tree from
the different children of the current node N. A recursive call of buildtree will be used
in which some child of the current node N will be used as the new current node. Also,
depending upon the relative length of the expressions contained in N and that of its child,
the new location of the link node has to be fixed. If the length of the expression
of the child node is more than that of the current node, the current node itself will be used
as the link node in the next level of the recursive call. Otherwise, if it is equal, the old link
node will suffice. A node is marked as expanded only after tree building from its child nodes
is complete, or the unboundedness condition has been found to be satisfied in between.)
While((∃u, 1 ≤ u ≤ (*current).count | (*(current.Child[u])).mark = unexpanded)
∧ (temp-status = bounded))
{
If (len((*(current.Child[u])).E) > len((*current).E)) then
temp-status := buildtree(root, current, current.Child[u]);
else if (len((*(current.Child[u])).E) = len((*current).E)) then
temp-status := buildtree(root, link, current.Child[u]);
}
(*current).mark := expanded;
return temp-status
}
Procedure: Attach
boolean attach(structure node *n1 , *n2 )



{

boolean :: n2 -status;
(Remark: This procedure attaches the node n2 to n1 only if no existing child of n1 bears
the same expression as that of n2. It then makes appropriate changes in the different fields of n1
and returns n2-status as new. Otherwise, if the expression of n2 is a repetition of that of some
child of n1, it is not connected to n1 and n2-status is returned as old. In both cases
n2-status is returned to the caller.)
if (∄u, 1 ≤ u ≤ (*n1).count | (*(n1.Child[u])).E = (*n2).E) then
{
(*n2).level := (*n1).level + 1;
(*n2).count := 0;
(*n2).mark := unexpanded;
(*n1).count := (*n1).count + 1;
Create((*n1).Child[(*n1).count]);
(*n1).Child[(*n1).count] := n2;
n2 -status := new
}
else n2 -status := old;
return n2 -status
}
□
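The control flow of the three procedures above can be condensed into a short executable sketch. This is an illustration only: the string expression syntax, the successor function succ (standing in for the post-process computation (N.E)/σ and the case analysis of table 3.4), and the simplified unboundedness test (some expression on the path with the same leading symbol but a strictly shorter text, omitting the Pj versus Pj ; SKIPA exception and the root/link/current attachment rules) are assumptions made for the sketch, not the thesis's exact algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # mirrors the thesis's node structure: expression, level, children
    expr: str
    level: int
    children: list = field(default_factory=list)

def attach(parent, expr):
    """Mirror of procedure Attach: refuse a duplicate child expression."""
    for c in parent.children:
        if c.expr == expr:
            return None                      # n2-status = old
    child = Node(expr, parent.level + 1)
    parent.children.append(child)
    return child                             # n2-status = new

def buildtree(expr, succ, ancestors=()):
    """Depth-first expansion of the reachability tree.

    succ(e) returns the post-process expressions of e (assumed finite).
    Returns 'bounded' or 'unbounded'."""
    if expr in ancestors:                    # loop detected: this branch is finite
        return 'bounded'
    for nxt in succ(expr):
        # simplified unboundedness test: an expression on the path with the
        # same leading symbol but strictly shorter text means the expressions
        # keep growing along this path
        for anc in ancestors + (expr,):
            if anc[0] == nxt[0] and len(anc) < len(nxt):
                return 'unbounded'
        if buildtree(nxt, succ, ancestors + (expr,)) == 'unbounded':
            return 'unbounded'
    return 'bounded'
```

For example, a two-state loop P → Q → P is reported bounded, while a process whose expressions grow without bound, say succ(e) = [e + ';S'], is reported unbounded as soon as a longer expression with the same head appears on the path.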

Bibliography
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
[17]
[18]

[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
[40]
[41]

[42]
[43]
[44]
[45]
[46]
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
[55]
[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]

[65]
[66]
[67]
[68]
[69]
[70]
[71]
[72]
[73]
[74]
[75]
[76]


THE AUTHOR
Supratik Bose was born in Asansol, India, on July 10, 1968. He received his B.Tech
degree in Electrical Engineering and M.Tech degree in Instrumentation Engineering both
from I.I.T. Kharagpur, India, in 1990 and 1992 respectively. He has worked in the area of
microprocessor based control and discrete event systems. His hobbies include reading and
teaching high school mathematics.
Temporary Contact Address
(a) Department of Electrical Engineering
I.I.T. Kharagpur, W.B. 721302, INDIA
e-mail: supratik@ee.iitkgp.ernet.in
(b) T.D.B. College (Qr. No. 2)
Raniganj, Burdwan
W.B. 713347. INDIA Phone: (0341) 445526
Permanent Home Address
Q/2 Panchasyar, Garia
Calcutta 700084, INDIA

Publication
i) Microprocessor Based Gain Scheduling Control of A Coupled Tank System, by S. Bose,
S. Mukherjee and A. Patra in Proceedings of 15th National Systems Conference, Roorkee,
March 1992, pp 111-115.
ii) Modelling Discrete Event Systems: A Process Algebra Approach, by S. Bose and S.
Mukhopadhyay in Proceedings of 3rd National Seminar on Theoretical Computer Science,
Kharagpur, June 1993, pp 190-201.
iii) Comments on Observability of Discrete Event Dynamic Systems by S. Bose, A.
Patra and S. Mukhopadhyay in IEEE Transactions on Automatic Control, Vol 38, May
1993, pp 830.
iv) On Observability With Delay: Antitheses and Syntheses by S. Bose, A. Patra and
S. Mukhopadhyay in IEEE Transactions on Automatic Control, Vol 39, Apr. 1994, pp
803-806.
v) An Extended Finitely Recursive Process Model For Discrete Event Systems by S. Bose
and S. Mukhopadhyay in IEEE Transactions on Systems, Man and Cybernetics. (Accepted
for publication).
vi) Logical Models of Discrete Event System: A Comparative Exposition, by A. Patra,
S. Mukhopadhyay and S. Bose, communicated to CSI Journal of Computer Science and
Informatics.
vii) Finiteness Characterisation of Finitely Recursive Processes by S. Bose, S. Mukhopadhyay and A. Patra, communicated to IEEE Transactions on Automatic Control.
viii) Boundedness Characterisation of Finitely Recursive Processes by S. Bose, A. Patra
and S. Mukhopadhyay, communicated to IEEE Transactions on Automatic Control.
ix) A Nondeterministic Extension of the Finitely Recursive Process Model by S. Bose,
A. Patra and S. Mukhopadhyay, communicated to Discrete Event Dynamical Systems:
Theory and Applications.
