This is to certify that the thesis entitled Finitely Recursive Process Models for
Discrete Event Systems - Analysis and Extension, submitted by Mr. Supratik
Bose to the Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, for the award of the degree of Doctor of Philosophy, is a bona fide
record of research work carried out by him under our supervision during the period from
January 1992 to June 1995. The results embodied in this thesis have not been submitted
for the award of any other degree.
Abstract
The field of Discrete Event Systems (DES) has emerged in the last decade to provide
formal methodologies for modelling, analysis and control of man-made technological systems. The complexity of these systems makes it imperative to adopt an appropriate level
of abstraction depending upon the issues being investigated. The logical behaviour of such
systems is usually characterised by sequences of events occurring at discrete instants of
time.
The recently proposed Finitely Recursive Process (FRP) model captures the dynamic
behaviour of DES in terms of processes, in contrast to the commonly adopted concept of
states. A process is characterised by a set of event sequences, along with associated functions, which determine the immediate course of system behaviour. Operators on processes
have been defined to build new processes from the elementary ones. Finite descriptions of
complex processes can be obtained by recursive application of these operators.
It is not decidable whether an arbitrary FRP is bounded, i.e., whether it can be implemented using a finite number of states. This is a major deterrent to further analysis
and practical use of the model. Therefore, in this work, subclasses of the FRP model
have been identified for which boundedness of processes is either guaranteed or at least
decidable. Naturally, for such subclasses, restrictions are imposed on the use of different
process operators in recursive descriptions to ensure boundedness. However, this impairs
modelling flexibility to an extent. Later, it has also been shown how processes belonging
to these subclasses can be hierarchically combined, resulting in a better trade-off between
modelling flexibility and tractability of the model.
The FRP model, being a deterministic framework, cannot capture nondeterminism
such as that arising from event concealment. To overcome this limitation, a nondeterministic
extension of the FRP model is proposed in this work. Here, a nondeterministic process is
characterised in terms of a set of deterministic futures possible at a given instant. For
an underlying deterministic process, these may be the ones that are reachable via hidden
events. Later, using a collection of operators, a finitely recursive characterisation of the
nondeterministic process space has been obtained.
In the FRP model there is no provision to handle numerical or non-numeric information
and logical decisions based on the status of system variables. This poses difficulty for the
modeller of any real world system. To include these features, the idea of extended processes
has been used in this work to provide a state-based extension to the FRP model. Since
it is assumed that the extended processes can share variables during concurrent as well
as sequential operations, a number of state-continuity constraints have been obtained over
the extended process operators. The concept of a silent transition has been introduced
for modelling logical decisions, which are triggered by occurrences of events and associated
state changes in the environment of a process, running concurrently with others. Here
also, a finitely recursive characterisation of the extended process space has been obtained
in terms of a collection of operators.
Finally, a number of practical application examples including a manufacturing job-shop,
a fault diagnosis session and a robot controller have been modelled using the proposed
modelling frameworks.
Keywords: Discrete Event Systems, Process Algebra, Finitely Recursive Processes,
Boundedness analysis, Nondeterminism.
Contents

1 Introduction
   1.1
   1.2
   1.3
   1.4

2 Mathematical Background .......................... 13
   2.1 Marked Processes ............................ 13
   2.2 Recursive Characterisation .................. 20
   2.3 ............................................ 25

3 .................................................. 37
   3.1 Introduction ................................ 37
   3.2 Boundedness Characterisation ................ 44
   3.3 Finiteness Characterisation ................. 61
   3.4 ............................................ 67
   3.5 ............................................ 78
   3.6 Conclusion .................................. 85

4 .................................................. 87
   4.1 Introduction ................................ 87
   4.2 ............................................ 89
   4.3 Process Operators ........................... 92
   4.4
   4.5 Assessment .................................. 102
       4.5.1
       4.5.2

5 State-based Extension of FRP ..................... 109
   5.1 Introduction ................................ 109
   5.2 Extended Process Space ...................... 111
   5.3 Process Operators ........................... 113
   5.4 Recursive Characterisation .................. 123
   5.5 Assessment .................................. 127

6 Case Studies ..................................... 131
   6.1 Modelling of a Shop-Floor ................... 131
   6.2 Modelling of a Fault Diagnosis Session ...... 136
   6.3 Modelling of a Robot Controller ............. 143

7 Conclusions ...................................... 149
Preface
Discrete Event Systems (DES) are dynamic systems that evolve with the abrupt occurrences of physical events, at possibly unknown irregular intervals. Such systems arise in a
variety of contexts ranging from computer operating systems to the control of substations
in an electric supply network. In the past, DES were usually simple enough that intuitive
or ad hoc solutions to various problems were adequate. The increasing complexity of
man-made systems, made a reality by the widespread application of computer technology,
has elevated such systems to a level where more detailed formal methods become necessary
for their analysis and design.
The models designed to capture logical behaviour of DES can be categorised broadly
into two classes: state-based models and process algebra models. The state-based models
use the ideas of states and state-transition functions as fundamental components, whereas
the process algebra models use the ideas of traces (event sequences) and associated marking
functions.
The Deterministic Finitely Recursive Process (DFRP) model is an important process algebra
model that tries to capture the dynamics and logical behaviour of DES recursively in terms
of processes and process operators. The DFRP model has good describing power and it
can model arbitrary Finite State Machines, Petri nets and Deterministic Communicating
Sequential Processes. However, it cannot model nondeterministic behaviour that arises out
of hiding of events. State-based features like variables and logical decisions based on the
state of these variables cannot be modelled in this framework as it does not have any tool
to store numerical or non-numeric information. Moreover, it has been shown that different
issues like reachability, boundedness, deadlock etc. are undecidable even in the simple
DFRP model. These problems have motivated us to work in the following two directions:
(i) analysis of the boundedness problem and identification of bounded subclasses of the
DFRP framework; (ii) extension of the DFRP formalism to include nondeterminism,
variables and states.
The thesis is organised as follows. The first chapter introduces the basic features of the
DES and presents a brief idea of the state of the art of the logical models of DES. The basic
mathematical ideas behind process algebra models and their recursive characterisation are
presented in the second chapter. Previous research has shown that it is not decidable
whether a general DFRP model can be implemented using a finite number of states. This is
a major deterrent to further analysis and practical use of the model. In the third chapter,
two subclasses of the DFRP model have been identified for which boundedness
is either guaranteed or at least decidable. It has also been shown how processes of these
subclasses can be hierarchically combined, resulting in a better trade-off between modelling
flexibility and tractability of the model. A nondeterministic extension of the DFRP model
is presented in the fourth chapter. Here, a nondeterministic process is characterised in
terms of possible deterministic futures that are reachable via hidden events. Later, using
a collection of operators, a finitely recursive characterisation of the nondeterministic process
space has been obtained. In the fifth chapter a state-based extension of the DFRP model
has been proposed using the idea of extended processes. Since it is assumed that the
extended processes can share variables during concurrent as well as sequential operations,
a number of state-continuity constraints have been obtained over the extended process
operators. A concept of silent transition has been introduced for effective modelling of
logical decisions, made by concurrently running processes, based on global variables. Here
also, a finitely recursive characterisation of the extended process space has been obtained in
terms of a collection of operators. A number of real world DES including a manufacturing
job-shop, a fault diagnosis session and a robot controller have been modelled in the sixth
chapter using the proposed bounded subclasses as well as extensions of the DFRP model.
Conclusions and future problems related to this work are discussed in the seventh chapter.
A wise man once said "Fine words butter no parsnips." I do not pretend to know what
that means, but I have a sneaking suspicion that it applies to this document. Worse, I fear,
as all writers of technical material must, that people will have the same quizzical reaction
to this thesis that I have to the proverb. In any case, insofar as the ideas presented herein
have any currency and consequence (butter any parsnips), I suppose I owe a debt to a
great many people.
I am grateful to my father, Mr. Amal Bose, and my mathematics teachers Mr. H.S.
Mallik and Dr. M. R. Jash, who instilled in me a liking for mathematical and abstract
thinking, without which I would not have thought of doing research in a subject like
Discrete Event Systems.
I consider myself fortunate to have guides like Dr. Amit Patra and Dr. Siddhartha
Mukhopadhyay, each of whom played the dual roles of my supervisor and elder brother
to perfection. It was they who introduced me to this challenging research area. Each
problem tackled in this thesis is a result of endless hours of discussions and arguments
which I had with them. I am grateful to them for the inspiration, patient hearing and
criticism which they provided throughout the course of this work.
I am grateful to a number of people who, over the last few years, have helped me by
providing valuable ideas and documents. They include Dr. P.P. Chakrabarty, Dr. P.P.
Kharagpur
Dated
Supratik Bose
Glossary

Logic

Notation        Meaning
=               equal
≠               not equal
□               end of a proof
P ∧ Q           P and Q (both true)
P ∨ Q           P or Q (one or both true)
P ⇒ Q           if P then Q
P iff Q         P if and only if Q
∃ x . P         there exists x such that P
∄ x . P         there does not exist x such that P
∀ x . P         for all x, P
Sets

Notation         Meaning
∈                is a member of
∉                is not a member of
{}, ∅            empty set
{a}              singleton set, containing a
{a, b, c}        the set containing a, b, c as members
{x | P(x)}       the set of all x such that P(x)
A ∪ B            union of sets A and B
A ∩ B            intersection (common elements) of A and B
A − B            set of elements of A that are not in B
A ⊆ B            A is contained in B
A ⊇ B            A contains B
2^A              family of subsets of A
∪_{A|P(A)} A     union of the sets A such that P(A) is true
∩_{A|P(A)} A     intersection of the sets A such that P(A) is true
Functions

Notation         Meaning
f : A → B        f is a function that maps elements of A to those of B
f(x)             that member of B to which f maps x ∈ A
injection        f such that x ≠ y ⇒ f(x) ≠ f(y)
surjection       f such that ∀ y ∈ B, ∃ x ∈ A with y = f(x)
bijection        injection and surjection
f⁻¹              inverse of an injection f
{f(x) | P(x)}    set of all f(x) such that P(x)
f(C)             the image of C under f
f ∘ g            f composed with g, f ∘ g(x) = f(g(x))
Notation

Notation         Meaning
Σ                a fixed finite collection of event symbols
Σ*               set of finite length strings (also called traces) formed with elements of Σ
<>               null string
<σ>              the string containing a single event σ
<σ1 σ2 σ3>       the string containing σ1, σ2, σ3
s^t              concatenation of the string s followed by string t
Language L       L ⊆ Σ*
prefix-closed    L is prefix closed if s^t ∈ L ⇒ s ∈ L
prefix-closure   L := {s′ | s′ is a prefix of some s ∈ L}
C(Σ)             family of prefix closed languages over Σ
#s               length of the string s
s^n              s repeated n times
s ↑ P            projection of s on the process P
s ↑ C            projection of s on C
√                successful termination event
τ                silent transition
M                set of marks
Notation                              Meaning
Π_D                                   deterministic marked nonterminating process space
STOP_A                                constant nonterminating process STOP_A
SKIP_A                                constant terminating process SKIP_A
P = (σ1 → P1 | ⋯ | σn → Pn)_{A,φ0}    Deterministic Choice Operator
P = P1 ; P2                           Sequential Composition Operator
P = P1 ∥ P2                           Parallel Composition Operator
P^[B]                                 Local Change Operator LCO[B]
P^[+C]                                Local Change Operator LCO[+C]
P^[[B]]                               Global Change Operator GCO[[B]]
P^[[+C]]                              Global Change Operator GCO[[+C]]
C_D                                   set of constant symbols of Π_D
C_{Σ_D}                               set of constant symbols of Σ_D
G_D                                   signature over Π_D
f                                     functional symbol (also called process expression) f
len(g)                                length of g
Sub(g)                                subexpressions of g
f = g                                 f and g are syntactically equivalent
f ≐ g                                 f has a structure g
f ≡ g                                 f and g are semantically equivalent
G_D^; , H_D^; , G_D^∥                 restricted signatures (defined in Section 3.1)
Further notation introduced in Chapter 3 (Sections 3.1–3.5: e.g. SY, M_P, M_A, comp(f),
m_R, Tree(f), Node(f), L(f), |L(f)|, Closure, COMP(F, <g>)), Chapter 4 (Sections
4.2–4.5: e.g. H, M_N, W_N, CHAOS, Π_N, Σ_N, P\C, P1 ⊓ P2, P1 ∥_A P2, G_N) and
Chapter 5 (Sections 5.2–5.4: e.g. Π_DA, Σ_DA, P{r}, P[h], G_E) is defined where it first
appears in the text.
Abbreviations : Models

CCS          Calculus of Communicating Systems
CFG          Context Free Grammar
CFL          Context Free Language(s)
CSP          Communicating Sequential Process(es)
CVS          Continuous Variable Systems
DCSP         Deterministic CSP
DES          Discrete Event System(s)
DFRP         Deterministic Finitely Recursive Process(es)
EFRP         Extended Finitely Recursive Process(es)
FRP          Finitely Recursive Process(es)
FSM          Finite State Machine(s)
NCSP         Nondeterministic CSP
NCSP(WOD)    NCSP WithOut Divergence
NFRP         Nondeterministic FRP
PN           Petri net(s)
SC           State Chart(s)
TTM          Timed Transition Model(s)

Abbreviations : Operators

AO           Assignment Operator
ACO          Asynchronous Concurrency Operator
(A/E)DCO     (Augmented/Extended) Deterministic Choice Operator
(A/E)GCO     (Augmented/Extended) Global Change Operator
(A/E)LCO     (Augmented/Extended) Local Change Operator
(A/E)PCO     (Augmented/Extended) Parallel Composition Operator
(A/E)SCO     (Augmented/Extended) Sequential Composition Operator
(ASMO/ESMO)  (Augmented/Extended) State Modification Operator
CO           Choice Operator
ECO          Event Concealment Operator
LBO          Logical Branching Operator
LDO          Local Deletion Operator
LNO
NCO

Abbreviations : Properties

con          constructive
ndes         nondestructive
sndes        strictly nondestructive
MR           Mutually Recursive
MRS          Mutually Recursive Spontaneous
MRWS         Mutually Recursive Weakly Spontaneous
Chapter 1
Introduction
1.1
Discrete Event Systems (DES) are characterised by a set of discrete states and a set of
sequences of events, which can take place at random instants of time, causing transitions
between the states. Examples of such systems include manufacturing systems, chemical
processes, traffic systems, communication networks, robotic systems etc. Characteristically,
they are viewed as having a discrete state space derived from associated physical variables
and are asynchronous, event driven, often nondeterministic or admit choice of events by
some unmodelled mechanisms. They consist of subsystems which evolve concurrently and
with interactions in the form of interlocks, communications via channels or shared physical
variables. The physical variables include numeric continuous variables such as a liquid level,
numeric discrete variables such as machine parts in a buffer, or non-numeric variables such
as traffic lights, relay flags or valve positions. Although the term DES has been adopted
only recently, such systems are not new; researchers and practising engineers have been
dealing with them for a long time in diverse fields such as manufacturing, networking,
power systems and computer operating systems. But, in the past, DES were usually
simple enough that intuitive or ad hoc solutions to various problems were
possible. The increasing size, capabilities and complexity of such man-made systems, made
possible by the widespread application of computer technology, have elevated such systems
to a level where more detailed formal methods become necessary for their analysis, design
and operation.
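The state-transition view of a DES sketched above can be made concrete with a minimal sketch. The machine states and events below are invented for illustration and are not taken from the thesis:

```python
# A toy DES: a machine with a discrete state space whose transitions
# are triggered by asynchronous events (a partial map on state x event).
TRANSITIONS = {
    ("idle", "start"): "busy",
    ("busy", "finish"): "idle",
    ("busy", "breakdown"): "down",   # an uncontrollable event
    ("down", "repair"): "idle",
}

def run(initial, events):
    """Replay an event sequence; return the sequence of visited states."""
    state, visited = initial, [initial]
    for e in events:
        if (state, e) not in TRANSITIONS:
            raise ValueError(f"event {e!r} is not enabled in state {state!r}")
        state = TRANSITIONS[(state, e)]
        visited.append(state)
    return visited

print(run("idle", ["start", "breakdown", "repair"]))
# ['idle', 'busy', 'down', 'idle']
```

Logical analysis of DES asks questions about exactly such structures: which states are reachable, whether some event sequence leads to a deadlock, and so on.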
These systems contrast with Continuous Variable Systems (CVS), which have continuous
state spaces, execute continuous (at least piecewise) trajectories, and are described
by ordinary or partial differential or difference equations. While all physical systems, ei-
1.2
1.3
Research activities in the area of DES can be categorised broadly into two parts: (i) analysis
and synthesis of the logical behaviour of DES, and (ii) quantitative analysis and performance
evaluation of the timed and stochastic behaviour of DES.
In the first case one is interested in the logical structure of the models, addressing
qualitative issues such as logical correctness, reachability, deadlock avoidance, language
equivalence, liveness, fairness etc. These are structural properties and are tested for arriving at admissible designs for manufacturing cells, sequential control logic or communication
protocols etc.
In the second case, one is interested in the deterministic and stochastic behaviour of
the systems, computing performance measures such as throughput, mean time between
failures, average up and down time etc. For example, in a flexible manufacturing system
(FMS), one may like to know the production rate or the capacity utilization. Such quantitative measures are necessary for solving high-level problems such as production planning
and resource scheduling in FMS, message routing in communication networks, traffic management in transportation systems, etc. Most often these questions are answered through
extensive simulation [1]. Standard simulation languages like SIMULA [2] and GPSS [3] are
used for this purpose. Different tools from the field of Operations Research, like perturbation analysis [4], Markov chains and queueing models [5, 6, 7, 8], and min-max algebra [9],
are also used for analytical solutions. A comprehensive introduction to the problem
of performance evaluation and analysis of timed behaviour can be found in [10].
There are two distinct approaches to modelling and analysis of logical behaviour of
DES: state-based approaches and process algebraic approaches.
The first class of models explicitly captures the states and associated transitions in a
graphical structure. The state here refers to the physical state of the system resources
and variables, as viewed from a certain level of abstraction. State transitions take place in
response to events occurring in the systems. Typical examples of this class of models are
Finite State Machines (FSM) [11], Petri Nets (PN) [12], Timed Transition Models (TTM)
[13], State Charts (SC) [14, 15], Temporal Logic (TL) [16] etc.
The other class of models does not use the concept of state explicitly, but provides a
recursive algebraic description of the event dynamics in terms of elementary processes
and a set of process operators. Such models are called process algebra models. Early
formalisms in this class include the Communicating Sequential Processes (CSP) [17] and
Calculus of Communicating Systems (CCS) [18], which were used for mathematical modelling of concurrent and communicating systems. A comprehensive introduction to process
algebra models, as applied mainly to parallel and distributed computing, can be
found in [19, 20]. More recently, the framework of Deterministic Finitely Recursive Processes (DFRP) [21, 22] has been derived from CSP and proposed as a general modelling
tool for deterministic DES. Most of the process algebra models are theoretical and abstract
in nature. Based on them and their variations, a number of programming languages including Ada [23], OCCAM [24], LOTOS [25], ESTEREL [26], SIGNAL [27], LUSTRE [28]
and CSML [29] have been developed for parallel and distributed computing. Though these
programming languages themselves can describe DES behaviour, they contain too much
detail and are not considered suitable formalisms for conceptualisation, analysis of dynamics and synthesis of supervisors, which are major aims of studying DES. Once a strategy
of supervision or the complete dynamic description is obtained using some abstract model
of DES, these languages may be used for the actual implementation of a simulation or
real-time control using a computer.
In state based models, it is easy to specify system dynamics via the one step transition
functions. Relatively easy extensions to stochastic models such as Markov chains are also
possible in these models. More often than not, the state based models have graphical
representations, which are helpful for an intuitive understanding of the dynamics of the
systems. Handling numerical information is also quite easy in these models. Finally, the
rich repertoire of concepts and techniques developed in traditional continuous variable
system theory can be utilised for analysis and synthesis of state based DES models. Under
the constraint of finite state space, in these models, different properties are solvable in
principle. In fact many problems related to control and observation have been posed and
solved in these frameworks [11, 30].
In practice however, even for a reasonably sized physical system, a state based description becomes unwieldy because of the large number of states that are required to model
the system. Consequently, associated algorithms for say, constructing controllers, may become computationally prohibitive. One solution to this problem may be the computation
of online but partial solutions [31]. The other possibility is to use structured frameworks
like State Charts instead of flat FSM type models and to use efficient graph algorithms
that exploit the structures and avoid exhaustive state-space explorations [32].
On the other hand, the process algebra models offer the advantage of compact description of DES. A set of operators is available for composition of processes in these
models. The different operators are useful in capturing features like concurrency,
re-entrant and recursive mechanisms, reusability of descriptions etc. These operators result in an algebra of process expressions. The most powerful feature of these models is the
facility of defining processes via recursion and fixed point equations. Together, these
features often make it possible to describe a DES more compactly than in a state-based
formalism. Problems related to program verification, termination and synthesis have been
attempted in CSP, a well known process algebra model.
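The idea of defining a process by recursion can be illustrated with a small sketch. This is in the spirit of CSP-style prefix and choice, not the DFRP formalism itself; the vending-machine process and its events are invented:

```python
# A process is represented by a thunk returning its choice map:
# enabled event -> successor process. The recursive definition
#     VM = (coin -> (coffee -> VM | tea -> VM))
# is a finite description of an infinite, prefix-closed trace set.

def vm():
    return {"coin": drink}

def drink():
    return {"coffee": vm, "tea": vm}

def traces(proc, depth):
    """All traces of `proc` of length <= depth (prefix-closed by construction)."""
    result = [()]
    if depth > 0:
        for event, succ in proc().items():
            for t in traces(succ, depth - 1):
                result.append((event,) + t)
    return result

print(traces(vm, 2))
# [(), ('coin',), ('coin', 'coffee'), ('coin', 'tea')]
```

Note that `vm` refers to itself through `drink`; that self-reference is the fixed-point flavour of recursion described above, giving an infinite behaviour a finite description.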
The price to be paid for this compact and enhanced describing power is reduced tractability
of problems in these models. Many important issues like boundedness, reachability,
deadlock etc. are undecidable [33] in these models. In cases where they are decidable, it
is often necessary to convert these high level models into low level state based models
before analysis of these issues can be attempted. A major research problem is therefore
to identify subclasses of general process algebraic models which can strike a good trade-off between modelling flexibility on one side and solvability and effective computability of
basic issues on the other side. Moreover, as in FSM, PN or TTM, extensions and tools are
required to capture features of time, numerical information, nondeterminism etc., within
the framework of process based models.
Some of the typical problems that are sought to be addressed in the context of modelling
cess algebra models as basis, and additionally providing syntax and semantics. Different
program verification techniques [16, 40, 41] can be used to verify properties of the process
algebra models. In these methods, one makes assertions about program execution before
and after every instruction, known respectively as the pre-conditions and post-conditions
of the instruction. Inference rules are used to reason about the whole program by combining assertions about the instructions. Since in many cases, real-time and
safety-critical reactive systems are modelled using the process algebra and TTM models,
real-time properties like safety, realtime constraints etc. are also verified using the program
verification methods [16, 42].
iii) Supervision/Control: The kind of control one can exercise in a DES is mainly of a
restrictive nature, since in general there is no mechanism to force an event to take place.
This convention recognises the existence of events which are uncontrollable (such as
faults) and therefore cannot be prevented. In the absence of such events, however, one can force
an event to occur by preventing all other enabled events from occurring. One is therefore
interested in knowing whether prevention of inadmissible or non-optimal behaviour of the
system is possible by systematically disabling a set of controllable events [31], [43]–[49].
The DES whose task is to keep track of the dynamic evolution of the controlled DES,
and decide on the control action to be exercised, is known as a supervisor. The design of a
supervisor is difficult for many reasons, like nondeterminism in the plant, existence
of uncontrollable events such as machine breakdown, hard time-constraints, insufficient
information feedback from the plant, etc. The desired behavioural pattern can be stated in
several ways, for example in the form of a set of reference trajectories or by specifying a set
of states to be avoided/visited infinitely often. The last requirement is sometimes related
to a notion of stability of the system, and the system is said to be stabilizable if this can
be achieved under control [50]–[53].
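The restrictive style of control described above can be sketched in a few lines. The plant, its events and the forbidden state below are invented for illustration:

```python
# Restrictive supervision: the supervisor may disable controllable
# events, but uncontrollable events (e.g. faults) always stay enabled.
PLANT = {
    ("empty", "load"): "loaded",
    ("loaded", "process"): "done",
    ("loaded", "fault"): "jam",      # uncontrollable
    ("done", "unload"): "empty",
}
CONTROLLABLE = {"load", "process", "unload"}

def supervised_enabled(state, forbidden):
    """Events the supervisor allows in `state`: a controllable event is
    disabled if it would enter a forbidden state; an uncontrollable event
    can never be prevented."""
    enabled = []
    for (s, e), target in PLANT.items():
        if s == state and not (e in CONTROLLABLE and target in forbidden):
            enabled.append(e)
    return sorted(enabled)

print(supervised_enabled("loaded", forbidden={"done"}))
# ['fault']  -- 'process' is disabled, but the fault cannot be prevented
```

The example also shows why supervisor design is hard: even after all controllable events are disabled, the uncontrollable `fault` transition remains possible.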
iv) Observation: The events taking place in a DES may not always be observed by other
agents in its environment, for example when the system is physically distributed over a vast
geographical area, such as communication networks. It may then be necessary to design a
supervisor under partial observation. One is therefore interested in determining whether
the DES is observable, i.e., whether one can determine its state from the observed sequence
of events alone. If it is not possible to ensure this at every instant, it may sometimes suffice
to know the state intermittently with only a finite number of events in between [54]–[57].
In order to get an overview of the various modelling frameworks, a table is presented
at the end of this section comparing them on the basis of the answers to the following
questions:
[Table: comparison of the FSM, TTM, PN, SC, CSP, CCS and DFRP models on questions
Q1–Q10; entries are Y (yes), QY (qualified yes), N (no) and ? (undetermined).]
1.4
In the present work, investigation has been carried out on the DFRP model of DES.
The DFRP model, designed by Inan and Varaiya ([21]), is a relatively new entrant in
the field of DES. The model is an enhancement over the Deterministic CSP (DCSP) model
of Hoare ([17]) to include the concept of variable alphabet and a number of new operators.
The model aims at describing complex DES in terms of an algebra consisting of a few basic
terms (also known as processes) and a number of operators, which can be used to form
complex terms from simpler ones. The basic terms represent some simple DES behaviour.
The different operators arise naturally from the physical fact that simpler DES indeed
operate concurrently or sequentially producing complex behaviour. The operators can
model choosing of an event among different alternatives, concurrent operation of processes,
sequential operation of processes and local and global modification of the event sets of
processes. In a subsequent work [22], the DFRP model has also been used to construct a
general process algebra from which different logical formalisms of DES can be obtained
by using suitable marking axioms. The DFRP model has large describing power and
can model FSM, PN and DCSP. Cieslak, in a recent thesis [58], has treated three
aspects of the DFRP formalism: simulation, logical analysis and real-time semantics. A LISP
based universal simulator [59] has been designed to simulate the traces of a DFRP by
performing symbolic manipulations. Regarding logical analysis, it has been shown that
Finitely Recursive Processes are powerful enough to model any Turing Machine [60]. As
a result, several important problems such as deadlock, boundedness, reachability etc. are
undecidable, i.e., there is no algorithmic solution to them. Finally, Cieslak has suggested a
method of adding real-time semantics to a process in a way that depends on its realizations.
Since a process may have several realizations, the problem of finding the fastest realization
is important for implementation considerations. A special case of the problem has been
solved by him. Low has shown the use of DFRP in specification of communication protocols
[61].
In spite of its modelling advantages, the DFRP model lacks several features that are considered useful in modelling practical situations, as described below.
Since major system properties like boundedness, reachability, deadlock, language
equivalence etc. are not decidable in this model, one cannot validate and analyse systems
expressed in this model, let alone construct supervisors and observers.
Nondeterminism is an important feature of DES models. Sometimes a system has a
range of possible behaviour, but the environment or the user may not have the ability
to influence, or even observe the selection between the alternative courses of behaviour.
Nondeterministic models arise either from a deliberate decision to ignore the factors that
influence this selection, or from the part of the dynamics that is not visible from the given
vantage point of the user. Thus, nondeterminism is necessary in maintaining a high level
of abstraction in description of the behaviour of physical processes. Also, for construction
of a supervisor that is required to work under partial observation, it is imperative that a
nondeterministic model or observer be used, possibly along with the underlying deterministic model of the process. DFRP, being a deterministic framework, cannot capture event
concealment and nondeterminism.
Another important handicap of the DFRP model is that it does not have any provision
to handle numerical as well as non-numeric information. Logical decisions, based on this
kind of information (state), which is so common in real world DES, cannot be modelled by
DFRP. Also clocks and real time features like scheduling driven by hard deadline cannot
be modelled here.
The above observations motivate us to work in the following two directions:
(a) analysis of the DFRP framework, to find the basic reason for the undecidability of the associated problems and to identify general subclasses for which these problems are solvable or at least decidable;
(b) extension of the formalism to include nondeterminism and variables (state).
We end this section with a chapterwise summary of contribution of the thesis.
Chapter 1. The present chapter has discussed the basic features of DES and has
given a brief survey of the state of the art of the logical models of DES. This chapter
has also explained the motivation behind the current work.
Chapter 2. The basic mathematical ideas behind process algebra models in general, and for the DFRP model in particular, are presented here. While presenting
the general algebra of DES models (following [22]), some modifications have been introduced. These include (a) a modification regarding a general property of projection operators in the definition of the embedding space; (b) a distinction between the syntax and semantics of process based models; (c) a generalised concept of a mutually recursive family of functions; (d) relaxation of the condition for the existence of a unique solution of recursive equations to include weak spontaneous functions.
Chapter 3. In this chapter, an analysis has been carried out to characterise the boundedness property of the DFRP model and its subclasses. It has been shown that (a) for the subclass of DFRPs built with operators for sequential composition and all types of event set modification, boundedness is decidable, in spite of the infiniteness of the underlying set of functions; (b) for the subclass of DFRPs built with operators for concurrent operation and all types of event set modification, the underlying set of functions is infinite. However, in spite of this, DFRPs belonging to this subclass are bounded under a suitable post-process computation procedure; (c) the set of functions built with operators for concurrent operation and some types of event set modification is finite; (d) a number of semantics preserving syntactic transformations are necessary to prove the above results. These have been defined, and they play a role in process simplification similar to that of canonical transformations in linear system theory; (e) the above two bounded classes can be used to construct bounded and hierarchical subclasses of DFRP in which both parallel and sequential composition operators can be used in a way such that a good trade-off between modelling flexibility and tractability of problems can be achieved. The results obtained in this work have been communicated as [62, 63].
Chapter 4. Motivated by the need for nondeterminism, a nondeterministic extension of the DFRP model has been attempted in this chapter. The main features are
the following. (a) A possible future based model of nondeterministic processes is
introduced. (b) A substantial collection of operators is defined and their properties
are discussed in detail. (c) A finitely recursive characterisation of the nondeterministic process space has been obtained in terms of these operators. The results obtained
in this work have been communicated as [64].
Chapter 5. In this chapter, to address the problem of introducing variables and
states in DFRP model, the idea of extended processes [22] has been used to provide
a state based extension of the DFRP model. The major contributions are as follows. (a) Unlike [22], concurrent processes have been allowed to share variables in
this model. (b) A number of operators have been defined and different constraints
that are required to maintain the state continuity in a shared memory framework
are discussed. A finitely recursive characterisation has been obtained in terms of
these operators. (c) The concept of silent transition presented here enables constant monitoring of state-based conditions in the face of state transitions caused by
environmental actions. (d) The enhanced framework has been shown to be able to
model arbitrary TTMs. The results obtained in this work are published in [65, 66].
Chapter 6. The utility of the proposed hierarchical, nondeterministic and state-
based extensions of the DFRP model has been illustrated here through a number
of practical examples drawn from various fields. (a) A manufacturing shop floor
has been modelled using a bounded hierarchical DFRP which uses both the parallel and sequential operators. (b) The dynamics of a fault diagnosis session has been modelled
using the nondeterministic extension. (c) Finally, the dynamics of a robot controller
that interfaces between a robot and its environment has been modelled using the
state-based extension of the DFRP model.
Chapter 7. Conclusions from the work are presented in this chapter along with
some suggestions for future work.
In the next chapter we review the mathematical framework to be adopted in this thesis.
Chapter 2
Mathematical Background
In this chapter, a comprehensive mathematical background has been presented which will
be useful in understanding the DFRP model of DES and subsequent developments on the
model which have been carried out in this work. Most of its contents are from [21] and
[22]. It was however necessary to introduce some modifications and generalisations without
changing the basic theme of [21] and [22].
To begin with, following [22], a general process algebra for logical models of DES is
presented here, which can be tailored to recover existing formalisms and to obtain new
model families.
2.1 Marked Processes
There exist a number of frameworks to model the logical behaviour of DES. Most of them require some common major components, like a collection of event symbols representing individual significant events in the physical system. The set of sequences of these event symbols that are possible in a particular DES, along with some representation of the state or mark of the system after the occurrence of an event, determines the behaviour of the system.
Let Σ be a fixed finite collection of event symbols. Σ* is the set of all finite length strings formed with elements of Σ, including the null string <>.
As an example, let Σ = {a, b}. Then Σ* = { <>, <a>, <b>, <ab>, <ba>, <aa>, <bb>, … }.
Given any two strings s and t, s^t denotes the concatenation of the strings s and t. For example <ab> ^ <ac> = <abac>. Also <> ^ s = s ^ <> = s. L ⊆ Σ* is called a language over Σ and it is prefix closed if s^t ∈ L ⟹ s ∈ L. C(Σ*) denotes the family of prefix closed languages over Σ. For example, L = { <>, <a>, <b>, <aa>, <aab>, <aaa> } is a prefix closed language over {a, b}.
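As a quick illustration (a sketch of ours, not part of the thesis), strings over Σ can be modelled as Python tuples, with <> as the empty tuple, and prefix closure checked directly:

```python
# Strings over the alphabet are tuples of event symbols; <> is ().
def concat(s, t):
    """Concatenation s^t of two strings."""
    return s + t

def is_prefix_closed(language):
    """A language L is prefix closed if every prefix of a member is a member."""
    return all(s[:i] in language for s in language for i in range(len(s)))

assert concat(('a', 'b'), ('a', 'c')) == ('a', 'b', 'a', 'c')

L = {(), ('a',), ('b',), ('a', 'a'), ('a', 'a', 'b'), ('a', 'a', 'a')}
assert is_prefix_closed(L)
assert not is_prefix_closed({(), ('a', 'b')})   # the prefix <a> is missing
```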
Marking corresponds to an assignment of values to a set of mathematical objects associated with a process after some event has taken place. This in turn determines the
immediate future of the process. Each such assignment is called a mark.
Definition 2.1.1 (Marking:) Let M be a set of marks and Φ be a fixed family of functions from Σ* to M such that φ ∈ Φ, s ∈ Σ* ⟹ φ/s ∈ Φ, where φ/s(t) := φ(s^t) for all t ∈ Σ*.
Definition 2.1.2 (Marked Process and Embedding Set:) A marked process is a tuple w = (tr w, φw) where tr w ∈ C(Σ*) and φw : tr w → M; φw is the marking function that determines the mark after s. The underlying global set of processes, also called an embedding set, is denoted as WΣ,M,Φ (often simply as W) and is defined as the cartesian product C(Σ*) × Φ. The prefix closed language tr w is also known as the set of traces of w, and φw(s) is called the marking of the trace s.
Definition 2.1.3 (Post Process:) A process w, after executing a string s, behaves as its post-process w after s, denoted by w/s. It is formally defined as
tr w/s := {t | s^t ∈ tr w} and φ(w/s)(t) := φw/s(t) = φw(s^t).
Definition 2.1.4 (Choice Function:) Given, for i = 1, …, k, wi ∈ W, σi ∈ Σ, and an initial mark m ∈ M, a new process w, denoted as w = (σ1 → w1 | … | σk → wk)m, is defined as follows.
tr w := {<>} ∪ ⋃_{i=1}^{k} {<σi> ^ s | s ∈ tr wi}. Also
φw(<>) := m; φw(<σi> ^ s) := φwi(s).
The process w captures the fact that its environment can choose any of the events from {σ1, …, σk} to take place in it at the initial instant. It behaves as the process wi after the occurrence of the event σi. In other words, processes unfold their traces like a tree. Any process first exercises its choice of events from those that are possible (i.e., events included in traces of length 1) and then behaves as the corresponding post process.
Definition 2.1.5 (One Step Expansion Formula:) For any w ∈ W, if tr w ≠ {<>} then w = (σ1 → w/<σ1> | … | σk → w/<σk>)φw(<>) where
{<σi> | i = 1, …, k} = {s ∈ tr w | #s = 1}. Here #s denotes the length of s.
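For finite trace sets, the post-process and one-step expansion definitions can be mirrored in a small sketch (our illustration, not part of the thesis; `post_process` and `initial_events` are hypothetical helper names):

```python
# A marked process is represented as a pair (traces, mark): a prefix closed
# set of tuples plus a dict assigning a mark to each trace.
def post_process(traces, mark, s):
    """w/s: traces {t | s^t in tr w}, marked by mark(s^t)."""
    tr = {t[len(s):] for t in traces if t[:len(s)] == s}
    return tr, {t: mark[s + t] for t in tr}

def initial_events(traces):
    """Events sigma with <sigma> in tr w, i.e. the traces of length 1."""
    return {t[0] for t in traces if len(t) == 1}

traces = {(), ('a',), ('a', 'b')}
mark = {(): 'm0', ('a',): 'm1', ('a', 'b'): 'm2'}

tr2, mk2 = post_process(traces, mark, ('a',))
assert tr2 == {(), ('b',)} and mk2[()] == 'm1'   # w/<a> is marked m1 at <>
assert initial_events(traces) == {'a'}           # only <a> has length 1
```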
To relate the elements of the embedding set W, a partial order ⊑ on W is introduced. The complete definition of ⊑ may be made for specific cases appropriately. As will be seen later, depending upon its exact definition, ⊑ may capture concepts of sub-processes, more nondeterministic processes etc.
Also a sequence of projection operators {↑n : W → W | n = 0, 1, …} is defined, with the implied meaning that w↑(n+1) is a process that has the same dynamics (trace and mark) as w for strings up to length n+1. Beyond that, w↑(n+1) may differ from w. Obviously w↑(n+1) is a better approximation of w compared to w↑n. The exact definition of {↑n} is again made appropriately for specific cases.
Definition 2.1.6 (Embedding Space:) An embedding space is a triple (W, ⊑, {↑n}), where W = WΣ,M,Φ is an embedding set, ⊑ is a partial order and {↑n : W → W | n = 0, 1, …} is a family of projections mapping W into itself, satisfying the following properties:
1) (w↑0)↑n = w↑0, ∀n ≥ 0. In a sense the ↑n operator is causal on the indexing sequence.
2) φw(<>) = φv(<>) ⟹ w↑0 = v↑0.
3) If w = (σ1 → w/<σ1> | … | σk → w/<σk>)φw(<>) then
w↑n = (σ1 → (w/<σ1>)↑(n−1) | … | σk → (w/<σk>)↑(n−1))φw(<>). Here w↑n is the image of w under ↑n. If tr w := {<>} then ∀n > 0, w↑n := HALTφw(<>), where, for m ∈ M, HALTm is the process having tr HALTm := {<>}, φHALTm(<>) := m.
4) If {wi} is a chain (that is, ∀i, wi ⊑ wi+1) then there exists a least upper bound (l.u.b.) of {wi} in W. Thus (W, ⊑) is a complete partial order.
5) If {wi} converges to w then <σ> ∈ tr w ⟹ ∃l such that ∀i ≥ l, <σ> ∈ tr wi. Also wi/<σ> ⊑ wi+1/<σ> and {wi/<σ>} converges to w/<σ>.
6) If {wi} is a chain converging to w, u1, …, uk are processes in W, σ, σ1, …, σk are distinct events, m is a mark, v := (σ → w | σ1 → u1 | … | σk → uk)m, and, for each i, vi := (σ → wi | σ1 → u1 | … | σk → uk)m, then {vi} is a chain converging to v. Thus the choice function preserves the ordering of processes.
7) {w↑n}n≥0 is a chain converging to w.
8) If {wi}i≥0 is a chain converging to w then ∀n, {wi↑n}i≥0 is a chain converging to w↑n.
Remark 2.1.1 In the 3rd condition of definition 2.1.6 a change has been made in the definition of w↑n from the one that is given in [22]. There, ∀n, w↑n was defined as w↑0 whenever tr w = {<>}. Here we have defined that if tr w := {<>} then ∀n > 0, w↑n := HALTφw(<>).
Intuitively, since the information tr w = {<>} is available, the reasonable approximation w↑n of w for n > 0 should be a do-nothing HALTm process. It can be easily checked that this modification doesn't affect any result in [22]. Moreover, if the old definition is retained, then difficulties are encountered, as will be shown later, while describing the process operators for our nondeterministic process space.
Fact 2.1.1 If ↑0 : W → W satisfies (w↑0)↑0 = w↑0 and (2) of definition 2.1.6, then there exists a unique collection of maps {↑n, n ≥ 0} satisfying (1) to (3) of definition 2.1.6. Also
(w↑n)↑m = w↑min(m, n).
For a proof of this, see [22].
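For processes represented by finite trace sets, the projection ↑n and the identity (w↑n)↑m = w↑min(m, n) can be illustrated as follows (an informal sketch of ours; the HALT case for tr w = {<>} is not modelled):

```python
# Projection: keep the dynamics (traces and marks) only up to length n.
def project(traces, mark, n):
    tr = {s for s in traces if len(s) <= n}
    return tr, {s: mark[s] for s in tr}

traces = {(), ('a',), ('a', 'a'), ('a', 'a', 'a')}
mark = {s: len(s) for s in traces}

tr1, _ = project(traces, mark, 1)
assert tr1 == {(), ('a',)}

# (w up n) up m  =  w up min(m, n)
assert project(*project(traces, mark, 2), 1) == project(traces, mark, 1)
assert project(*project(traces, mark, 1), 2) == project(traces, mark, 1)
```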
Definition 2.1.7 (Marked Process Space:) A subset Π = (Π, ⊑, {↑n}) of the embedding space (W, ⊑, {↑n}) is called a marked process space if it is closed with respect to the following operations.
a) Projection: P ∈ Π ⟹ ∀n ≥ 0, P↑n ∈ Π. It is known as the axiom of projection.
b) Postprocess: P ∈ Π ⟹ ∀s ∈ tr P, P/s ∈ Π. It is called the axiom of post-process.
c) Limit: if {Pi}i≥0 is a chain converging to P such that ∀i ≥ 0, Pi ∈ Π, then P ∈ Π. It is known as the axiom of completeness.
d) Choice: if, for i = 1, …, k, R, P1, …, Pk ∈ Π are such that R/<σi>↑0 = Pi↑0 and
R = (σ1 → R/<σ1> | … | σk → R/<σk>)φR(<>),
then P ∈ Π, where
P := (σ1 → P1 | … | σk → Pk)φR(<>).
It is called the axiom of prefixing.
Elements of Π are called marked processes. Any subset of Π which also satisfies the above axioms is called a process subspace.
In general, a process space is specified by an embedding space W (by specifying its partial order and projection operators that satisfy the conditions of definition 2.1.6) and a set of marking axioms that define a subset Π of W which is closed w.r.t. the above four operations.
The following important fact, proved in [22], shows that the marking axioms need only specify the local behaviour.
Fact 2.1.2 Let W be an embedding space. Let W0 ⊆ W↑0 and W1 ⊆ W↑1 be subsets satisfying the following consistency condition:
1) W1↑0 = W0.
2) (w ∈ W1) ∧ (<σ> ∈ tr w) ⟹ (w/<σ>↑0 ∈ W0). Then there exists a unique marked
For this marked process space, the projection operator and the partial order are defined as follows.
Given any Petri Net process P, P↑0 := HALTφP(<>). For n > 0, P↑n := HALTφP(<>) if tr P = {<>}. Otherwise P↑n := (σ1 → (P/<σ1>)↑(n−1) | … | σk → (P/<σk>)↑(n−1))φP(<>), where P = (σ1 → P/<σ1> | … | σk → P/<σk>)φP(<>).
Also P1 ⊑ P2 iff tr P1 ⊆ tr P2 and ∀s ∈ tr P1, φP1(s) = φP2(s). This definition leads to the definition of the limit of a chain Pi ⊑ Pi+1 as lim{Pi}i≥0 = P, where tr P := ⋃_i tr Pi and s ∈ tr P ∩ tr Pi ⟹ φP(s) := φPi(s).
A process P = (tr P, φP) can be viewed as a transformation which converts the initial marking φP(<>) into the marking φP(s) after the process P executes the event sequence s. In [22] a natural extension has been suggested, where instead of a single initial mark, a process may have an arbitrary initial mark. Such an extension helps in modelling cases in which each mark represents an assignment of values to variables associated with some computation. The extension is formalized below.
Let W be an embedding space and Q an arbitrary set, representing the set of possible
assignments of values to a number of variables.
Definition 2.1.8 Let W^Q := {w | w : Q → W is any partial function}. D(w) denotes the domain of w, and for q ∈ D(w), w(q) ∈ W is the evaluation of the (extended) process w at q.
Definition 2.1.9 The (extended) post process of w ∈ W^Q, denoted by w/s, is defined as
w/s(q) := (w(q))/s if s ∈ tr w(q), and is undefined otherwise.
Each element of ΠtN is denoted as PtN,zN, where tN is the topology of the Petri Net corresponding to PtN,zN and zN is the initial token distribution. Then one can form extended processes as
PtN : ZN → ΠtN | PtN(zN) := PtN,zN.
ΠtN^ZN, N ≥ 0, tN ∈ TN, together with the standard projection operators and partial orders (induced from those of Π), forms an extended process space.
2.2 Recursive Characterisation
In general a process P is an infinite object because of its infinite set of trajectories. For the purpose of computation it is necessary to have a finite description of the processes of at least some restricted process space. The basic idea is to use recursion, in a way analogous to the method of differential or difference equations, for describing an infinite set of trajectories.
Definition 2.2.1 (Function Properties:) Given a process space Π = (Π, ⊑, {↑n}), a process operator f : Π → Π is said to be:
continuous: if for every chain {Pi}i≥0, {f(Pi)}i≥0 is also a chain and f(lim Pi) = lim f(Pi). (Note that P1 ⊑ P2 ⟹ lim{P1, P2} = P2 and P2 ⊑ P1 ⟹ lim{P1, P2} = P1; otherwise the limit of a pair of processes is not well defined.)
nondestructive (ndes): if ∀P ∈ Π, ∀n, f(P)↑n = f(P↑n)↑n.
constructive (con): if ∀P ∈ Π, ∀n, f(P)↑(n+1) = f(P↑n)↑(n+1).
It is intuitively clear that f is con if the (n+1)-st event of f(P) is determined by the n-th event in P. Also, con implies ndes. If f has several arguments, these definitions apply if they apply to each argument when the others are fixed.
Fact 2.2.1 The properties of continuity, nondestructiveness and constructiveness are preserved under function composition. Furthermore, if f1 is con and f2 is ndes, then both f1 ∘ f2 and f2 ∘ f1 are con.
It is easy to see that the post-process function P ↦ P/s is usually destructive (not ndes) unless s = <>. Also, from the conditions of definition 2.1.6, it is easy to see that (a) the choice function is continuous and con; (b) every projection operator ↑n is continuous and ndes; and (c) the post-process operation is a continuous partial function, and if P1 ⊑ P2 ⊑ … is a chain converging to P, and s ∈ tr P, then there is an integer l such that Pl/s ⊑ Pl+1/s ⊑ … is a chain converging to P/s.
The following theorem specifies the conditions for the existence of a unique solution of recursive equations defined on processes.
Theorem 2.2.1 Let Π be a process space and consider the recursive equation
P = F(P, U)    (2.1)
Fact 2.2.2 <Θn(G, C)> is (weakly) spontaneous if <G> is (weakly) spontaneous.
Definition 2.2.10 (Mutually Recursive Processes:) A finite set of processes {P1, …, Pn} is called a family of Mutually Recursive Processes (MRP) over <Θn(G, C)> if
∀j, Pj ∈ Π, ∀s ∈ tr Pj, Pj/s = <f>(P1, …, Pn) for some <f> ∈ <Θn(G, C)>.
Theorem 2.2.2 Let either (a) <G> be a mutually recursive spontaneous (MRS) family of functions, or (b) <G> be a mutually recursive weak spontaneous (MRWS) family of functions such that, for the embedding space (W, ⊑, {↑n}) in question, the projection operator ↑0 is a constant function, i.e., for any w1, w2 in W, w1↑0 = w2↑0.
Then {P1, …, Pn} is a family of MRP w.r.t. <Θn(G, C)> iff P = (P1, …, Pn) is the unique solution of the recursive equation with consistent initial condition,
(X1, …, Xn) = X = F(X), X↑0 := P↑0    (2.3)
Fi(X) := (σi1 → <fi1>(X) | … | σiki → <fiki>(X))mi    (2.4)
with each <fij> ∈ <Θn(G, C)> and mi a suitable mark.
Though the proof follows reasoning similar to that in [22], it is presented in detail in the Appendix for two reasons: (a) the concept of an MR family of functions is generalised; (b) the requirement on <G> is relaxed by allowing it to be just an MRWS family instead of an MRS family, under the condition that ↑0 is a constant function. The necessity of the above changes will be clear while defining the process operators of our nondeterministic process space in chapter 4.
Definition 2.2.11 (Finitely Recursive Processes:) A process Y ∈ Π is said to be a Finitely Recursive Process (FRP) with respect to <Θn(G, C)> if it can be represented as
X = F(X), Y = <g>(X)
where X = (X1, …, Xn), F is of the form (2.4) and <g> ∈ <Θn(G, C)>. (F, <g>) is said to be a realisation of the process Y.
Definition 2.2.12 (Algebraic Process Space:) The collection of all possible FRPs w.r.t. <Θn(G, C)>, for arbitrary n, is called the algebraic process space and is denoted as A(<Θ(G, C)>).
Fact 2.2.3 Because of the MR-ness of the functions of <Θ(G, C)>, A(<Θ(G, C)>) is closed under the post-process operation, i.e., if Y ∈ A(<Θ(G, C)>) with realisation (F, <g>) for some n, then for any s ∈ tr Y, Y/s can be represented as Y/s = <gs>(X) for some <gs> ∈ <Θn(G, C)>.
2.3 Deterministic Process Space
In this section the Deterministic (Inan-Varaiya) Process Space [21] is described as a typical example of the general process space defined in the earlier section. Here the specific embedding space, the marked process space, the operators and the recursive characterisation of the deterministic process space are described.
Definition 2.3.1 (Basic Objects:) Let Σ be a fixed finite collection of events. Let MD be a set of deterministic marks. ΦD is a fixed family of functions from Σ* to MD such that φ ∈ ΦD, s ∈ Σ* ⟹ φ/s ∈ ΦD, where φ/s(t) := φ(s^t) for all t ∈ Σ*. WΣ,MD,ΦD (or simply WD) denotes the suitable deterministic embedding set, and it is defined as
WD := C(Σ*) × ΦD.
To characterise the suitable embedding space one needs to define the projection operators {↑Dn | n ≥ 0} and the partial order ⊑D over WD.
Definition 2.3.2 (Deterministic Projection Operators {↑Dn | n ≥ 0}:) For any w ∈ WD, w↑D0 := HALTφw(<>), where, for m ∈ MD, HALTm is the process defined formally as
tr HALTm := {<>}, φHALTm(<>) := m.
For n > 0,
w↑Dn := (σ1 → (w/<σ1>)↑D(n−1) | … | σk → (w/<σk>)↑D(n−1))φw(<>)
where
w = (σ1 → w/<σ1> | … | σk → w/<σk>)φw(<>).
If tr w = {<>}, then for n > 0, w↑Dn := HALTφw(<>).
Definition 2.3.3 (Deterministic Partial Order ⊑D:) The partial order ⊑D over WD is defined as: w1 ⊑D w2 iff tr w1 ⊆ tr w2 and ∀s ∈ tr w1, φw1(s) = φw2(s). This definition leads to the straightforward definition of the limit of a chain w0 ⊑D w1 ⊑D w2 ⊑D … as:
lim{wi}i≥0 = w
where
tr w := ⋃_{i≥0} tr wi, and s ∈ tr w ∩ tr wi ⟹ φw(s) := φwi(s).
Finally the marking axioms are described which define the Deterministic Process Space ΠD.
Definition 2.3.4 (Marking Axioms:) Let ΠD be the subset of WD that satisfies the following marking axioms.
(a) tr P ∈ C(Σ*).
(b) MD := 2^Σ × {0, 1}. φw : tr w → MD is a tuple of two functions, namely φw(s) = (αw(s), τw(s)). αw : tr w → 2^Σ is the alphabet function, so that αw(s) is the set of events w can execute or block after it has generated the event sequence s. τw : tr w → {0, 1} is the termination function, where τw(s) = 1 represents successful termination of w after generation of the event sequence s in w.
(c) s^<σ> ∈ tr w ⟹ σ ∈ αw(s). That is, the events that w can execute after generating s are from the set of events in which it can engage at that instant.
(d) τw(s) = 1 ⟹ (s^t ∈ tr w ⟹ t = <>). It means that, once w terminates successfully, no event can take place in it.
From now on an element of ΠD is denoted by the symbol P. Elements of ΠD are known as deterministic (Inan-Varaiya) processes.
Two processes P1 and P2 are said to be equal if P1 ⊑D P2 and P2 ⊑D P1. In other words, equal processes are those which have identical trace, alphabet and termination functions.
Definition 2.3.5 Let
Π̄D := {P ∈ ΠD | ∀s ∈ tr P, τP(s) = 0}.
Fact 2.3.1 It can be easily verified that (a) (WΣ,MD,ΦD, ⊑D, {↑Dn}) satisfies the conditions of definition 2.1.6 and thus is a suitable embedding space; (b) ΠD satisfies the conditions of definition 2.1.7 and thus (ΠD, ⊑D, {↑Dn}) is a valid process space; and (c) by restricting the domains of each ↑Dn to Π̄D, as well as considering ⊑D as a partial order on Π̄D, it can be shown easily that (Π̄D, ⊑D, {↑Dn}) is a valid process space, and hence a process subspace of (ΠD, ⊑D, {↑Dn}).
Next the different constant processes and process operators defined over ΠD are described.
Definition 2.3.6 (Constant Processes:) The basic processes in this model are STOPA and SKIPA, for some A ⊆ Σ.
STOPA := (tr = {<>}, αSTOPA(<>) = A, τSTOPA(<>) = 0).
SKIPA := (tr = {<>}, αSKIPA(<>) = A, τSKIPA(<>) = 1).
Note that any HALTm, or P↑D0 for some P ∈ ΠD, is either STOPA or SKIPA for some A ⊆ Σ. If P is from Π̄D, then P↑D0 is STOPA for some A ⊆ Σ.
The standard operators that have been defined over this model are as follows.
Definition 2.3.7 (Deterministic Choice Operator (DCO):) Given P1, …, Pn ∈ ΠD and distinct events σ1, …, σn from A ⊆ Σ, the deterministic choice operator (denoted as P = (σ1 → P1 | … | σn → Pn)A,τ) is defined as follows.
If τ = 1, then P := SKIPA.
If τ = 0, then
tr P := {<>} ∪ {<σi>^s | s ∈ tr Pi, 1 ≤ i ≤ n}
αP(<>) := A, αP(<σi>^s) := αPi(s), τP(<>) := 0, τP(<σi>^s) := τPi(s).
Example 2.3.1 For example, the dynamics of a simple machine that gives two possible choices of change for 10 Rupees (5+5 or 2+2+1+5) can be described as follows.
CH10i = (in10 → CH10o){in10},0
CH10o = (out5 → CH5o | out2 → CH8o){out5,out2},0
CH5o = (out5 → CH10i){out5},0
CH8o = (out2 → CH6o){out2},0
CH6o = (out1 → CH5o){out1},0
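The recursion of example 2.3.1 can be simulated directly as a transition table (an illustrative sketch of ours, not part of the formalism):

```python
# Each process name maps to its deterministic choices (event -> next process).
machine = {
    'CH10i': {'in10': 'CH10o'},
    'CH10o': {'out5': 'CH5o', 'out2': 'CH8o'},
    'CH5o':  {'out5': 'CH10i'},
    'CH8o':  {'out2': 'CH6o'},
    'CH6o':  {'out1': 'CH5o'},
}

def run(start, events):
    state = start
    for e in events:
        state = machine[state][e]   # an event outside the choices raises KeyError
    return state

# 10 in, change 2 + 2 + 1 + 5 out, back to the idle process
assert run('CH10i', ['in10', 'out2', 'out2', 'out1', 'out5']) == 'CH10i'
```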
Definition 2.3.8 (Local Change Operator (LCO):) Given a process P and collections of events B and C, the LCO P[−B+C] is defined as follows:
tr P[−B+C] := {s ∈ tr P | the first event of s is not in B}
αP[−B+C](<>) := (αP(<>) − B) ∪ C
αP[−B+C](s) := αP(s), for s ≠ <>
τP[−B+C](s) := τP(s), for all s including <>.
Definition 2.3.9 (Global Change Operator (GCO):) Given a process P and collections of events B and C, the GCO P[[−B+C]] is defined as follows:
tr P[[−B+C]] := {s ∈ tr P | no event of B occurs in s}
αP[[−B+C]](s) := (αP(s) − B) ∪ C
τP[[−B+C]](s) := τP(s).
For our convenience, from now on two types of LCOs and GCOs will be in use: LCO[−B] or P[−B] (meaning P[−B+∅]) and LCO[+C] or P[+C] (meaning P[−∅+C]), and similarly GCO[[−B]] and GCO[[+C]].
LCO and GCO are useful as they support reusability of FRP models with minor modification. For example, if because of a shortage of change the change giving machine (of example 2.3.1) fails to give 2 Rupees change, the truncated behaviour can be simply modelled as
Tr_CH10i = CH10i[[−{out2}]].
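At the level of traces, the effect of GCO[[−B]] can be sketched as follows (our illustration; the alphabet change part of the definition is omitted):

```python
# GCO[[-B]] on a finite trace set: delete every trace in which some event of B
# occurs, which also removes all its extensions and so preserves prefix closure.
def gco_minus(traces, B):
    return {s for s in traces if not any(e in B for e in s)}

traces = {(), ('in10',), ('in10', 'out5'), ('in10', 'out2')}
assert gco_minus(traces, {'out2'}) == {(), ('in10',), ('in10', 'out5')}
```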
Definition 2.3.10 (Sequential Composition Operator (SCO):) Given P1 and P2, the sequential composition of the processes P1 and P2 is denoted as P1 ; P2. It is defined as:
tr(P1 ; P2) := {s | (s ∈ tr P1 ∧ τP1(s) = 0) ∨ (s = r^t ∧ r ∈ tr P1 ∧ τP1(r) = 1 ∧ t ∈ tr P2)}.
α(P1 ; P2)(s) := αP1(s) if s ∈ tr P1 ∧ τP1(s) = 0, and := αP2(t) if s = r^t, r ∈ tr P1, τP1(r) = 1, t ∈ tr P2.
τ(P1 ; P2)(s) := τP1(s) if s ∈ tr P1 ∧ τP1(s) = 0, and := τP2(t) if s = r^t, r ∈ tr P1, τP1(r) = 1, t ∈ tr P2.
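On finite trace sets, the trace part of this definition can be sketched as follows (our illustration; `term1` collects the traces of P1 at which τP1 = 1):

```python
# Sequential composition of trace sets: keep the non-terminated traces of P1,
# and continue every successfully terminated trace of P1 with a trace of P2.
def sco(tr1, term1, tr2):
    comp = {s for s in tr1 if s not in term1}
    comp |= {r + t for r in term1 for t in tr2}
    return comp

tr1, term1 = {(), ('a',)}, {('a',)}     # P1 terminates successfully after <a>
tr2 = {(), ('b',)}
assert sco(tr1, term1, tr2) == {(), ('a',), ('a', 'b')}
```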
Definition 2.3.11 (Parallel Composition Operator (PCO):) For a process P and a string s, the projection s↓P of s on P is defined as: <>↓P := <>, and s^<σ>↓P := (s↓P)^<σ> if σ ∈ αP(s↓P), and := (s↓P) if σ ∉ αP(s↓P); it is undefined if s↓P ∉ tr P.
Then, for processes P1 and P2, the parallel composition P1 || P2 is defined as:
<> ∈ tr(P1 || P2).
If s ∈ tr(P1 || P2), then s^<σ> ∈ tr(P1 || P2) iff s^<σ>↓P1 ∈ tr P1 and s^<σ>↓P2 ∈ tr P2.
τ(P1 || P2)(s) := 1 if (τP1(s↓P1) = 1 ∧ τP2(s↓P2) = 1), or (τP1(s↓P1) = 1 ∧ αP2(s↓P2) ⊆ αP1(s↓P1)), or (τP2(s↓P2) = 1 ∧ αP1(s↓P1) ⊆ αP2(s↓P2)); and τ(P1 || P2)(s) := 0 otherwise.
The process P1 || P2, after generating an event sequence s, can execute an event σ from αP1(s↓P1) ∩ αP2(s↓P2) iff both P1 and P2 execute the event synchronously. Otherwise such an event will be blocked because of the lack of participation of one of the components. If σ belongs to αPi(s↓Pi) − αPj(s↓Pj), i, j = 1, 2, i ≠ j, then it can take place in P1 || P2 if it takes place in Pi, since Pj cannot block such events.
Example 2.3.4 Consider two robots, each of which picks up objects and places them somewhere. The first robot either picks up a light object of type 1 and places it, or it picks up a heavy object with the help of the second robot and places it. The second robot also either picks up a light object of type 2 and places it, or it picks up a heavy object with the help of the first robot and places it. Thus the picking and placing of a heavy object is synchronous between the robots. The behaviour can be modelled as follows. For i = 1, 2,
Robo_i = (pick_i → Temp_i | pick_heavy → Temp_heavy_i){pick_i, pick_heavy},0
Temp_i = (place_i → Robo_i){place_i, pick_heavy},0
Temp_heavy_i = (place_heavy → Robo_i){place_heavy},0
Robo_1 || Robo_2 gives the concurrent behaviour of the two robots. Note that when the first robot picks up an object of type 1, it prevents the occurrence of the event pick_heavy in Robo_2, by including the event pick_heavy in the initial alphabet of Temp_1. A similar arrangement has been made in Robo_2 also.
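The synchronisation rule of PCO on this example can be sketched as a product transition function (our illustration; the `step` helper and the encoding of each state as a pair (transitions, alphabet) are ours):

```python
# Each local state carries (transitions, alphabet). An event in a component's
# alphabet needs that component's participation; otherwise it cannot block it.
robot = lambda i: {
    f'Robo_{i}': ({f'pick_{i}': f'Temp_{i}', 'pick_heavy': f'TempH_{i}'},
                  {f'pick_{i}', 'pick_heavy'}),
    f'Temp_{i}': ({f'place_{i}': f'Robo_{i}'}, {f'place_{i}', 'pick_heavy'}),
    f'TempH_{i}': ({'place_heavy': f'Robo_{i}'}, {'place_heavy'}),
}
R1, R2 = robot(1), robot(2)

def step(s1, s2, e):
    (t1, a1), (t2, a2) = R1[s1], R2[s2]
    if e not in (a1 | a2):
        return None                 # event not in the composite alphabet
    if (e in a1 and e not in t1) or (e in a2 and e not in t2):
        return None                 # blocked by a non-participating component
    return (t1.get(e, s1), t2.get(e, s2))

# pick_1 is private to robot 1; Temp_1's alphabet then blocks pick_heavy
assert step('Robo_1', 'Robo_2', 'pick_1') == ('Temp_1', 'Robo_2')
assert step('Temp_1', 'Robo_2', 'pick_heavy') is None
```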
DCO is con and continuous. PCO, SCO, LCO and GCO are all ndes and continuous. PCO is commutative and associative. SCO is associative but not commutative.
Next, some examples of operators are presented to show the distinction between continuity and nondestructiveness.
Example 2.3.5 Consider the following operators.
(i) The post-process operator is continuous but destructive.
(ii) Let F1 : ΠD → ΠD such that
F1(P) :=
It can be easily seen that F2(·) is ndes. But to see that it is not continuous, consider the following.
Let P1 := (a → P1){a,b},0,
P2 := (a → P1 | b → P3){a,b},0, where
P3 := (b → P3){b},0.
Clearly P1 ⊑D P2. But F2(P1) ⋢D F2(P2), as F2(P1) = P4 and F2(P2) = P5, where
P4 = (a → P4 | b → P4){a,b},0 and
P5 = (a → P4 | b → P3){a,b},0.
Next, following [21], a recursive characterisation of deterministic processes using the operators defined above is presented. Some modifications have been made to facilitate the presentation of the current work.
Definition 2.3.12 Let GD be the set of functional symbols defined as:
GD := {SCO, PCO, LCO[−B], LCO[+C], GCO[[−B]], GCO[[+C]] | B, C ⊆ Σ}.
Corresponding to each element g ∈ GD, the function <g> and the set <GD> are constructed as follows:
<SCO> : Π²D → ΠD such that <SCO>(P1, P2) := P1 ; P2.
<PCO> : Π²D → ΠD such that <PCO>(P1, P2) := P1 || P2.
<LCO[−B]> : ΠD → ΠD such that <LCO[−B]>(P) := P[−B].
<LCO[+C]> : ΠD → ΠD such that <LCO[+C]>(P) := P[+C].
<GCO[[−B]]> : ΠD → ΠD such that <GCO[[−B]]>(P) := P[[−B]].
<GCO[[+C]]> : ΠD → ΠD such that <GCO[[+C]]>(P) := P[[+C]].
Definition 2.3.13 Let <GD> be the set of functions defined as:
<GD> := {<SCO>, <PCO>, <LCO[−B]>, <LCO[+C]>, <GCO[[−B]]>, <GCO[[+C]]> | B, C ⊆ Σ}.
Following definition 2.2.6, the sets of constant symbols of both process spaces ΠD and Π̄D are constructed as below.
Definition 2.3.14 Let <STOPA> : ΠⁿD → ΠD and <SKIPA> : ΠⁿD → ΠD be such that <STOPA>(P) := STOPA and <SKIPA>(P) := SKIPA. Then
CD = {STOPA, SKIPA | A ⊆ Σ},
<CD> = {<STOPA>, <SKIPA> | A ⊆ Σ},
C̄D = {STOPA | A ⊆ Σ},
<C̄D> = {<STOPA> | A ⊆ Σ}.
Theorem 2.3.1 <GD> is an MRS family of functions.
Proof: It can be easily verified that every <g> ∈ <GD> is continuous and ndes, and
<g>(P↑D0) = (<g>(P↑D0))↑D0.
Now the mutual recursiveness property of the operators is described.
SCO: (P1 ; P2)/<σ> :=
undefined otherwise.
PCO: (P1 || P2)/<σ> :=
Since the notation does not cause any confusion, redundant parentheses may be removed. For example,
P2 || P3 || STOPA || (P4 ; P2 ; P1).
Definition 2.3.15 (Length of Expressions:) For g ∈ Θn(GD, CD), the length of g (denoted as len(g)) is defined recursively as follows.
len(STOPA) = len(SKIPA) = len(Pi) := 1
len(h[[−B]]) = len(h[[+C]]) = len(h[−B]) = len(h[+C]) := len(h) + 1
len(h1 || … || hm) = len(h1 ; … ; hm) := (Σ_{i=1}^{m} len(hi)) + (m − 1).
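The recursion of definition 2.3.15 can be transcribed over expression trees (a sketch using our own tuple encoding of expressions, which is not part of the thesis):

```python
# Expressions: ('atom', name) for STOP_A, SKIP_A or a variable Pi;
# ('mod', h) for an event set modification [[-B]], [[+C]], [-B], [+C];
# ('par', [h1, ...]) and ('seq', [h1, ...]) for parallel/sequential composition.
def length(g):
    kind = g[0]
    if kind == 'atom':
        return 1
    if kind == 'mod':
        return length(g[1]) + 1
    if kind in ('par', 'seq'):
        parts = g[1]
        return sum(length(h) for h in parts) + (len(parts) - 1)

# len(P1 ; P2[[-B]]) = 1 + (1 + 1) + 1 = 4
expr = ('seq', [('atom', 'P1'), ('mod', ('atom', 'P2'))])
assert length(expr) == 4
```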
Definition 2.3.16 (Sub-expressions:) For f ∈ Θn(GD, CD), the set of subexpressions of f, namely Sub(f), is defined recursively as follows:
Sub(f) := {f} if f = STOPA or SKIPA or Pi;
Sub(f) := {f} ∪ Sub(g) if f = g[[−B]] or g[[+C]] or g[−B] or g[+C];
Sub(f) := {f} ∪ ⋃_{i=1}^{m} Sub(vi) if f = v1 || … || vm or v1 ; … ; vm.
(b) f = f1[[−B1]] (meaning f has the structure f1[[−B1]], for some f1 and B1), g = g1[[−B2]], f1 = g1 and B1 and B2 are identical sets.
(c) f = f1[[+C1]], g = g1[[+C2]], f1 = g1 and C1 and C2 are identical sets.
(d) f = f1[−B1], g = g1[−B2], f1 = g1 and B1 and B2 are identical sets.
(e) f = f1[+C1], g = g1[+C2], f1 = g1 and C1 and C2 are identical sets.
(f) f = f1 ; … ; fm and g = g1 ; … ; gm and ∀i, fi = gi.
(g) f = f1 || … || fm, g = g1 || … || gm, such that for every fi there exists gj with fi = gj, and vice-versa.
The above definition implies that P1 || P2 || P2 is syntactically equivalent to both P2 || P2 || P1 and P1 || P2 || P1. Also, clearly = is an equivalence relation. Because of the
Chapter 3
Boundedness Analysis of FRP
3.1 Introduction
In general, a DES can be described more compactly in the DFRP model than in state based models like the FSM. For practical applicability of the model, it is important to know whether typical properties associated with a system described in this model can be computed algorithmically in finite time, i.e., whether these properties are decidable.
One inherent requirement in this computation is to have an idea about the complete trace,
marks and underlying state space. In models like FSM, this requirement is satisfied easily.
However in models like PN, TTM, DFRP one needs to expand the given model to extract
this information. Moreover, because of repetitive behaviour, traces of a system may be
infinite, even when the underlying state space is finite. As a result, most of the algorithms
developed for analysing logical properties of DES use the underlying state space of the
system.
A property is said to be decidable if there exists some generic algorithm which can produce an answer about the property, for an arbitrary member of the particular class of systems, in a finite number of steps. More often than not, the decidability of a property is
guaranteed by the finiteness of the underlying state space. As shown in [21], a (possibly infinite) state machine FP = (Q, Σ, q0, δ, Qf), with state-space Q, alphabet set Σ, initial state q0, (partial) state transition function δ : Q × Σ → Q and set of final states Qf, can be associated with any P ∈ ΠD as follows.
Q = {P/s | s ∈ tr P}, Σ = ⋃_{s ∈ tr P} αP(s),
Qf = {P/s | s ∈ tr P ∧ τP(s) = 1}, q0 = P/<>,
δ(P/s, σ) := P/(s^<σ>) if s^<σ> ∈ tr P, and is undefined otherwise.
Note that here the post-processes play the role of states. In view of this, state-based algorithms can be applied on deterministic processes to determine many logical properties. These properties become decidable if F_P is an FSM, i.e., if the set of post-processes of P is finite. In this case P is said to be a bounded process.
In the case of any DFRP Y ∈ A(<Φ^n(G_D, Δ_D)>), with realisation (F, <g>), fact 2.2.3 ensures that for any s ∈ tr Y, Y/s can be represented as Y/s = <g_s>(X) for some <g_s> ∈ A(<Φ^n(G_D, Δ_D)>). Here X is the unique solution of P = F(P). Clearly Y is bounded (F_Y is an FSM) if the set of post-processes {<g_s>(X) | s ∈ tr Y} is a finite set. But there is no direct way of computing and representing the post-processes <g_s>(X), other than by computing the symbol g_s, for any s ∈ tr Y. So we redefine the property of boundedness in terms of the syntax set, and say that a DFRP Y is bounded if S_Y = {g_s | s ∈ tr Y, Y/s = <g_s>(X)} is finite.
But {g_s | s ∈ tr Y, Y/s = <g_s>(X)} and {<g_s>(X) | s ∈ tr Y, Y/s = <g_s>(X)} need not be equinumerous. The former is the set of post-process expressions (syntax), whereas the latter is the set of post-processes (semantics). And once we define the boundedness property in terms of the syntax set, it becomes dependent on (i) the realisation of the process and (ii) the exact procedure of computation of the post-process expressions. The following two examples illustrate these points.
Consider the process Y in A(<Φ^1(G_D, Δ_D)>), with two realisations. In the first realisation, Y = <P1>(X) where X = (a → <P1>(X))_{{a},0}. Here the set of post-process expressions is the singleton set {P1}. In the second realisation, Y = <P1>(X) where X = (a → <P1; P1>(X))_{{a},0}. Here the set of post-process expressions is an infinite set containing P1, P1; P1, P1; P1; P1, etc. But in both cases it can easily be seen that the set of post-processes is a singleton containing the single process X with tr X = {a^n | n ≥ 0}, where α_X and τ_X are two constant functions, equal to {a} and 0 respectively. Thus, speaking in terms of the syntax set, the former realisation identifies the process as bounded whereas the latter identifies the same process as unbounded.
As a second example, consider the simple process Y = <P1>(P), given recursively as P = (a → <P1^[[+{b}]]>(P))_{{a},0}. If we compute the post-process expressions by performing syntactic substitutions mechanically, the set of post-process expressions will be an infinite set containing P1, P1^[[+{b}]], (P1^[[+{b}]])^[[+{b}]], etc., and Y will be termed unbounded. However, except for P1, all the other symbols are semantically equivalent. If we use a semantics-preserving syntactic transformation in the computation of the post-process expressions, collapsing every repeated application of GCO[[+{b}]] into a single one, the set of post-process expressions becomes finite and Y is identified as bounded.
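The effect of such a collapsing rewrite can be illustrated with a small executable sketch (plain Python, not part of the thesis; the tuple encoding of expressions is an assumption made here for illustration). `step` mimics the mechanical syntactic substitution performed after each occurrence of a, while `normalise` is a semantics-preserving rewrite that folds nested GCO[[+C]] applications into one:

```python
# Toy illustration: post-process expressions computed by blind syntactic
# substitution versus with a rewrite collapsing nested [[+C]] operators.
# Expressions are nested tuples; ("gco+", e, C) stands for e^[[+C]].

def step(expr):
    # after one 'a', the recursion P = (a -> <P1^[[+{b}]]>(P)) wraps the
    # current expression in one more GCO[[+{b}]]
    return ("gco+", expr, frozenset("b"))

def normalise(expr):
    # collapse ("gco+", ("gco+", e, C1), C2) into ("gco+", e, C1 | C2)
    if expr[0] == "gco+" and expr[1][0] == "gco+":
        inner = normalise(expr[1])
        return normalise(("gco+", inner[1], inner[2] | expr[2]))
    return expr

raw, reduced = {("P1",)}, {("P1",)}
e = ("P1",)
for _ in range(10):
    e = step(e)
    raw.add(e)
    reduced.add(normalise(e))
```

Ten substitution steps produce eleven distinct raw expressions but only two distinct normalised ones, mirroring the unbounded and bounded verdicts of the two computation procedures.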
It has been shown that if LCO[−B] is removed along with SCO, the resultant set of functions is finite and the DFRPs built over this set of functions are naturally bounded. More importantly, it has been found that, even if LCO[−B] is not removed, because of the recursive structure of the DFRPs built over PCO, GCO and LCO, each DFRP is bounded, though the underlying function space is infinite. Thus, boundedness for this subclass of DFRPs is guaranteed. This constitutes the first contribution of this chapter.
For the subclass of DFRPs built around SCO and the different change operators, it is already well known that infinitely many distinct post-process expressions can be generated recursively. Consequently these processes may be unbounded. Processes of this subclass have structural similarity with the grammars of simple deterministic languages, defined by Korenjak and Hopcroft [67], who have studied the problem of language equivalence for such grammars. The second contribution of this chapter is to show that boundedness is decidable for processes of this subclass. In other words, a general procedure has been found which, in a finite number of steps, can determine whether a DFRP of this subclass is bounded or not.
In the course of proving the above two results, this work also points out that if post-process expressions are computed using simple syntactic substitution, then very simple DFRPs may appear to be unbounded because of semantically redundant expressions. In both cases, semantics-preserving syntactic transformations are proposed which, along with syntactic substitutions in the computation of post-process expressions, reduce the redundant expressions to a single expression and yield a bounded implementation. These transformations are of significance independently of the present context of boundedness analysis. They play a role similar to that of canonical forms in general system theory and would be useful in many other contexts. They form the third contribution made in this chapter.
The final contribution of this chapter is to show how both the PCO and the SCO can be used together in a specific way, giving rise to a bounded hierarchical subclass of DFRPs.
Below, a few definitions and basic results are presented that will be useful in the subsequent analysis of boundedness.
Definition 3.1.2 Let G_D^; := G_D − {PCO}; also <G_D^;> := <G_D> − {<PCO>}. Also let H_D^; := {SCO}; <H_D^;> := {<SCO>}.

It is easy to see that the function space <Φ^n(G_D^;, Δ_D)> (as well as <Φ^n(H_D^;, Δ_D)>) is infinite, as each of <Pi>, <Pi; Pi>, <Pi; Pi; Pi>, ⋯, is a distinct function and belongs to <Φ^n(G_D^;, Δ_D)> (as well as to <Φ^n(H_D^;, Δ_D)>). In [21] it was conjectured that if the sequential composition operator is taken out from <G_D>, then the resulting set of functions is finite. However, a counterexample will be presented later to show that an infinite set of functions can be composed even without using the SCO.
Definition 3.1.3 Let G_D^‖ := G_D − {SCO}; also <G_D^‖> := <G_D> − {<SCO>}.

Before showing the infiniteness of <Φ^n(G_D^‖, Δ_D)>, the following facts regarding the distributivity of the GCOs and LCOs over PCO are presented.
Fact 3.1.1 Contrary to what has been conjectured in [21], it has been shown in [58] that, in general, P1^[−B] ‖ P2^[−B] ≠ (P1 ‖ P2)^[−B]. For example, let
P1 = (a → SKIP_{a})_{{a},0}, P2 = (b → SKIP_{b})_{{b},0}.
Then (P1 ‖ P2)^[−{b}] = (a → (b → SKIP_{a,b})_{{a,b},0})_{{a},0}, but P1^[−{b}] ‖ P2^[−{b}] = P1.
Similarly, (P1 ‖ P2)^[+C] ≠ P1^[+C̄] ‖ P2^[+C̄], where C̄ = C − ∪_{i=1}^2 α_{Pi}(<>). In the above example, if C = {a, b, c} then C̄ = {c}. Now
(P1 ‖ P2)^[+C] = (a → (b → SKIP_{a,b})_{{a,b},0} | b → (a → SKIP_{a,b})_{{a,b},0})_{{a,b,c},0},
whereas
P1^[+C̄] ‖ P2^[+C̄] = (a → (b → SKIP_{a,b})_{{a,b,c},0} | b → (a → SKIP_{a,b})_{{a,b,c},0})_{{a,b,c},0}.
Fact 3.1.2 In the present investigation it has also been found that GCO[[−B]] does not distribute over PCO either, i.e., in general P3^[[−B]] ‖ P4^[[−B]] ≠ (P3 ‖ P4)^[[−B]]. For example, let
P3 = (g → (b → SKIP_{b})_{{b,a},0})_{{g},0}, P4 = (g → SKIP_{b,d})_{{g},0}.
Then ((P3 ‖ P4)^[[−{a,d}]])/<g> = STOP_{b}, but (P3^[[−{a,d}]] ‖ P4^[[−{a,d}]])/<g> = SKIP_{b}.

If P1 ≠ SKIP_A for any A ⊆ Σ, then (P1; P2)^[−B+C] = P1^[−B+C]; P2. Also (P1; P2)^[[−B+C]] = P1^[[−B+C]]; P2^[[−B+C]].
From the above facts it can be seen that the LCO and GCO operators in general do not distribute over PCO. However, a sufficient condition under which the GCO[[−B]] operator distributes over PCO is the following.

Lemma 3.1.1 (a) For P1, P2 ∈ Δ_D, B ⊆ Σ, if there exists no s ∈ tr (P1 ‖ P2)^[[−B]] ⊆ tr (P1 ‖ P2) such that τ_{Pi}(s↾Pi) = 0, τ_{Pj}(s↾Pj) = 1, α_{Pi}(s↾Pi) ⊈ α_{Pj}(s↾Pj) but (α_{Pi}(s↾Pi) − B) ⊆ (α_{Pj}(s↾Pj) − B), for some i, j ∈ {1, 2}, i ≠ j, then P1^[[−B]] ‖ P2^[[−B]] = (P1 ‖ P2)^[[−B]].
(b) For P1, P2 in Δ̄_D, B ⊆ Σ, (P1 ‖ P2)^[[−B]] = P1^[[−B]] ‖ P2^[[−B]].

Proof: The first part of the lemma can be proved easily by induction on the length of the strings generated by both sides. Intuitively, P1^[[−B]] ‖ P2^[[−B]] and (P1 ‖ P2)^[[−B]] always match in alphabet and traces. They can differ only in the termination function. The condition in the lemma guarantees the matching of the termination functions also. (In fact, the example mentioned in fact 3.1.2 results from a violation of the condition of this lemma.) Part (b) is straightforward from the fact that processes from Δ̄_D trivially satisfy the condition mentioned in (a). □
Example 3.1.1 (Counterexample) Let P = (a → STOP_∅ | b → STOP_∅)_{{a,b},0} and let two sets of events be A = {a} and B = {b}. We first show that each element of the sequence of processes {Pn}_{n≥1} is distinct, where
Pn := (((⋯(((P^[−B] ‖ P)^[−A] ‖ P)^[−B] ‖ P)^[−A] ‖ ⋯ ‖ P)^[−B] ‖ P)^[−A] ‖ P)^[−B]
such that there are n applications of LCO[−A] and n+1 applications of LCO[−B] in the definition.
Proof: To prove the above claim, we show, using induction, that each
Pn = (a → (b → (a → (b → ⋯ (a → STOP_∅)_{{a},0} ⋯)_{{b},0})_{{a},0})_{{b},0})_{{a},0}
where there are altogether n b's and n+1 a's.
Basis: P1 := ((P^[−B] ‖ P)^[−A] ‖ P)^[−B] = (a → (b → (a → STOP_∅)_{{a},0})_{{b},0})_{{a},0}.
Hypothesis: For some k, k ≥ 1, let
Pk = (a → (b → (a → (b → ⋯ (a → STOP_∅)_{{a},0} ⋯)_{{b},0})_{{a},0})_{{b},0})_{{a},0}
where there are altogether k b's and k+1 a's.
Induction Step: By definition it is easy to see that
P_{k+1} = ((Pk ‖ P)^[−A] ‖ P)^[−B]
= (((a → (b → (a → (b → ⋯ (a → STOP_∅)_{{a},0} ⋯)_{{b},0})_{{a},0})_{{b},0})_{{a},0} ‖ P)^[−A] ‖ P)^[−B] (by the induction hypothesis)
= ((b → Pk)_{{b},0} ‖ P)^[−B] = ((b → Pk | a → (b → Pk)_{{b},0})_{{a,b},0})^[−B] (as Pk ‖ STOP_∅ = Pk)
= (a → (b → Pk)_{{b},0})_{{a},0}.
By induction, our claim about the structure of Pn is true. Hence each element of {Pn}_{n≥1} is a distinct process.

As a result, each element of {<f_k>}_{k≥1} is a distinct member of <Φ^n(G_D^‖, Δ_D)>, where
f_k := (((⋯(((P1^[−B] ‖ P1)^[−A] ‖ P1)^[−B] ‖ P1)^[−A] ‖ ⋯ ‖ P1)^[−B] ‖ P1)^[−A] ‖ P1)^[−B]
such that A = {a}, B = {b} and there are k applications of LCO[−A] and k+1 applications of LCO[−B] in the definition. This is because applying the sequence {f_k}_{k≥1} of functions on some argument P ∈ Δ_D^n, where the first component P1 is the process (a → STOP_∅ | b → STOP_∅)_{{a,b},0}, results in the sequence of distinct processes {Pk}_{k≥1} above.
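The induction above can also be checked mechanically. The following sketch (illustrative Python, not the thesis's notation; representing a process as an (alphabet, transitions) pair and synchronising on events shared between the current alphabets are modelling assumptions of this sketch) implements interleaving parallel composition and the local change operator LCO[−B] as a restriction of the initial step only:

```python
# A toy model of the processes used in Example 3.1.1. A process is a pair
# (alphabet, transitions); `transitions` maps an event to the successor
# process. Termination is identically 0 here and is therefore omitted.

STOP = (frozenset(), {})                      # STOP with empty alphabet

def P():
    # P = (a -> STOP | b -> STOP) with initial alphabet {a, b}
    return (frozenset("ab"), {"a": STOP, "b": STOP})

def par(p, q):
    """Parallel composition: synchronise on events lying in both current
    alphabets, interleave the rest."""
    pa, pt = p
    qa, qt = q
    trans = {}
    for e in pa | qa:
        if e in pa and e in qa:               # shared: both must move
            if e in pt and e in qt:
                trans[e] = par(pt[e], qt[e])
        elif e in pa and e in pt:             # private to p
            trans[e] = par(pt[e], q)
        elif e in qa and e in qt:             # private to q
            trans[e] = par(p, qt[e])
    return (pa | qa, trans)

def lco_minus(p, B):
    """LCO[-B]: remove the events of B from the *initial* step only."""
    pa, pt = p
    B = frozenset(B)
    return (pa - B, {e: s for e, s in pt.items() if e not in B})

def chain(p):
    """Read off the single maximal trace of a chain-shaped process."""
    out = ""
    _, pt = p
    while pt:
        (e, p), = pt.items()
        out += e
        _, pt = p
    return out

def P_n(n):
    # Pn = ((...((P[-B] || P)[-A] || P)[-B] ... || P)[-A] || P)[-B],
    # with n applications of LCO[-A] and n+1 of LCO[-B], A={a}, B={b}
    q = lco_minus(P(), "b")
    for _ in range(n):
        q = lco_minus(par(q, P()), "a")
        q = lco_minus(par(q, P()), "b")
    return q
```

Walking P_n yields the single trace a(ba)^n, e.g. chain(P_n(1)) evaluates to "aba", so the P_n are pairwise distinct, as claimed.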
Next we present the concept of the minimum present and minimum absent alphabet functions of process expressions. Because of the GCO[[−B]] and GCO[[+C]] operators, given some function f ∈ Φ^n(G_D, Δ_D), it is possible to have a lower bound on the alphabet that is present (or absent) everywhere along the traces of <f>(P), irrespective of which P is chosen as the argument of <f>. Formally, the definitions are as follows:

Definition 3.1.4 (Minimum Alphabets) :
Minimum Present Alphabet: M_P : Φ^n(G_D, Δ_D) → 2^Σ, defined as
M_P(f) := {σ | σ ∈ α_{<f>(P)}(s), ∀P ∈ Δ_D^n, ∀s ∈ tr <f>(P)}.
Minimum Absent Alphabet: M_A : Φ^n(G_D, Δ_D) → 2^Σ, defined as
M_A(f) := {σ | σ ∉ α_{<f>(P)}(s), ∀P ∈ Δ_D^n, ∀s ∈ tr <f>(P)}.
The following lemma shows how the minimum alphabets can be computed recursively.

Lemma 3.1.2 For f ∈ Φ^n(G_D, Δ_D),

M_P(f) =
  A                          if f = STOP_A or SKIP_A
  ∅                          if f = Pi
  M_P(g)                     if f = g^[+C]
  M_P(g) − B                 if f = g^[[−B]] or g^[−B]
  M_P(g) ∪ C                 if f = g^[[+C]]
  ∪_{i=1}^m M_P(gi)          if f = ‖_{i=1}^m gi
  ∩_{i=1}^m M_P(gi)          if f = g1; ⋯; gm

M_A(f) =
  Σ − A                      if f = STOP_A or SKIP_A
  ∅                          if f = Pi
  M_A(g)                     if f = g^[−B]
  M_A(g) ∪ B                 if f = g^[[−B]]
  M_A(g) − C                 if f = g^[+C] or g^[[+C]]
  ∩_{i=1}^m M_A(gi)          if f = ‖_{i=1}^m gi
  ∩_{i=1}^m M_A(gi)          if f = g1; ⋯; gm

Proof: From the definition of the minimum alphabets, it is easy to see that M_P(SKIP_A) = M_P(STOP_A) = A and M_A(STOP_A) = M_A(SKIP_A) = Σ − A. Also, since the argument processes can be arbitrary, we have M_P(Pi) = M_A(Pi) = ∅. The rest follows naturally from the recursive structure of Φ^n(G_D, Δ_D) and the definition of the minimum alphabets. □
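The case analysis of the preceding lemma translates directly into a recursive computation over expression syntax trees. The sketch below is illustrative Python, not part of the thesis; the tuple tags and the finite universal alphabet SIGMA are assumptions of this sketch:

```python
# Recursive minimum-alphabet computation. Expressions are nested tuples:
#   ("STOP", A) / ("SKIP", A) / ("var", i)
#   ("gco-", g, B)  ("gco+", g, C)  ("lco-", g, B)  ("lco+", g, C)
#   ("par", g1, ..., gm)  ("seq", g1, ..., gm)
# SIGMA is the (assumed finite) universal alphabet.

SIGMA = frozenset("abcdefgh")

def MP(f):
    """Minimum present alphabet M_P(f)."""
    tag = f[0]
    if tag in ("STOP", "SKIP"):
        return frozenset(f[1])
    if tag == "var":                       # argument is arbitrary
        return frozenset()
    if tag == "lco+":                      # local addition: unchanged
        return MP(f[1])
    if tag in ("gco-", "lco-"):
        return MP(f[1]) - frozenset(f[2])
    if tag == "gco+":
        return MP(f[1]) | frozenset(f[2])
    parts = [MP(g) for g in f[1:]]
    if tag == "par":
        return frozenset().union(*parts)
    return frozenset.intersection(*parts)  # "seq"

def MA(f):
    """Minimum absent alphabet M_A(f)."""
    tag = f[0]
    if tag in ("STOP", "SKIP"):
        return SIGMA - frozenset(f[1])
    if tag == "var":
        return frozenset()
    if tag == "lco-":                      # local removal: unchanged
        return MA(f[1])
    if tag == "gco-":
        return MA(f[1]) | frozenset(f[2])
    if tag in ("gco+", "lco+"):
        return MA(f[1]) - frozenset(f[2])
    parts = [MA(g) for g in f[1:]]
    return frozenset.intersection(*parts)  # "par" and "seq"
```

For instance, for f = Pi^[[−{b}]][[+{a,c}]] the computation gives M_P(f) = {a, c} and M_A(f) = {b}.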
Also, given any f ∈ Φ^n(G_D, Δ_D), let α_F(f), τ_F(f) and σ_F(f) represent respectively the initial alphabet (α_{<f>(X)}(<>)), the initial termination (τ_{<f>(X)}(<>)) and the set of initial possible events of <f>(X), where X ∈ Δ_D^n is the unique solution of the equation P = F(P) and each equation of F is of the form (2.4). These quantities can be computed easily, in a recursive manner (as in [59]), using the local information present in F in the form of the initial markings and choices of the DCOs, and the definitions of the different operators.
In the following two sections, we investigate the boundedness issue of two major subclasses of A(<Φ^n(G_D, Δ_D)>), namely A(<Φ^n(G_D^‖, Δ̄_D)>) and A(<Φ^n(G_D^;, Δ_D)>). Both subclasses are built around infinite sets of functions. The SCO, in the case of the former, and the PCO, in the case of the latter, is absent from the set of operators.
It should be noted that, in the case of the operator set G_D^‖, the algebraic process space we have considered is built around Δ̄_D and not Δ_D. Since for general processes of Δ_D even GCO[[−B]] does not distribute over PCO (see fact 3.1.2), it is not known whether the processes of A(<Φ^n(G_D^‖, Δ_D)>) are bounded. But this distributivity can be achieved if we restrict the process space (and the domain of the different functions) to Δ̄_D (see lemma 3.1.1). Now, using <C>, <Proj(n)> and <G_D^‖>, as well as by restricting the domain of each function to the processes of Δ̄_D, we can define a set of functions from Δ̄_D^n to Δ̄_D, namely <Φ^n(G_D^‖, Δ̄_D)>, in the same way as <Φ^n(G_D, Δ_D)> is defined. The absence of SKIP_A in the counterexample shows that <Φ^n(G_D^‖, Δ̄_D)> is also infinite. The choice of Δ̄_D is not very restrictive in terms of the event dynamics, due to the fact that <G_D^‖> does not contain <SCO>, which is the major operator that uses the termination functions of its argument processes.
3.2 Boundedness Characterisation : A(<Φ^n(G_D^‖, Δ̄_D)>)

In this section we examine the boundedness of the process space A(<Φ^n(G_D^‖, Δ̄_D)>).
Given a process realisation (F, <g>) and the unique set of processes X satisfying X = F(X), we first define a syntactic transformation C_F on the elements of the symbol set Φ^n(G_D^‖, Δ̄_D). It converts a large class of function symbols, which are semantically equivalent at least for the argument X, into a unique function symbol. The suffix F of C_F indicates that the transformation C_F uses the information present in the equation X = F(X) and, as a result, is not semantics preserving for arbitrary processes. However, for the argument X it preserves the semantics, i.e., <f>(X) = <C_F(f)>(X). The transformation C_F is in turn composed of two syntactic transformations, C_F1 and C_F2.
Informally speaking, C_F1 makes the following conversions. (i) Multiple applications of the same type of change operator (LCOs and GCOs) are converted into a single application of the same. For example, Pi^[[+C1]][[+C2]] will be converted into Pi^[[+C1∪C2]]. (ii) Redundant event symbols in the event sets of GCO[[−B]] and GCO[[+C]] are removed with the help of the minimum present and absent alphabets. Thus f^[[+C]] is converted into f^[[+(C−M_P(f))]]. If the event set becomes empty, the corresponding operator symbol itself is removed. Similarly, in the case of LCO[+C] and LCO[−B], instead of the minimum present and minimum absent alphabets, σ_F(f) and α_F(f) are used. For example, (P1 ‖ P2)^[−B][+C] will be converted into (P1 ‖ P2)^[−B̄][+C̄], where B′ := B ∩ α_F(P1 ‖ P2), B̄ := (B′ − C) ∪ (B′ ∩ C ∩ σ_F(P1 ‖ P2)) and C̄ := (B′ ∩ C ∩ σ_F(P1 ‖ P2)) ∪ (C − α_F(P1 ‖ P2)). The explanation of the last conversion is as follows. First, LCO[−B] is transformed into LCO[−B′]. This is because there is no need for keeping any event in B which is not in α_F(P1 ‖ P2). Then, because of the application of LCO[+C], both B′ and C are modified to B̄ and C̄ in the following manner. Any event σ which is in α_F(P1 ‖ P2) but not in σ_F(P1 ‖ P2), and which also belongs to B′ ∩ C, is removed from both B′ and C. This is because removing any such σ from the initial alphabet by LCO[−B′] and then again adding it by LCO[+C] is redundant. (iii) Among the different change operators, an order is imposed, with GCO[[−B]] as the innermost operator, followed by GCO[[+C]], LCO[−B] and LCO[+C]. The reader may convince himself that the chosen ordering among the change operator symbols is unique, in the sense that there does not exist any other ordering into which an arbitrary arrangement of change operator symbols can be converted while preserving the semantics. Thus, for example, Pi^[+C2][[+C1]][[−B]] will be converted into Pi^[[−B]][[+(C1−B)]][+(C2−B−α_F(Pi^[[−B]][[+(C1−B)]]))]. (iv) The GCO[[−B]] operator is made to distribute over the PCO. Thus (Pi ‖ Pj)^[[−B1]] is rewritten as Pi^[[−B1]] ‖ Pj^[[−B1]]. In this case, preservation of the semantics is possible as the processes are obtained from Δ̄_D and not from Δ_D (see lemma 3.1.1). The formal definition of C_F1 is given below.
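Conversion (ii) for a trailing LCO[−B][+C] pair can be sketched as a small set computation, assuming the stated definitions of B′, B̄ and C̄ (illustrative Python; `normalise_lco` is a name introduced here, not a function from the thesis):

```python
# Sketch of conversion (ii) for a trailing LCO[-B][+C] pair:
#   B'   = B ∩ alpha
#   Bbar = (B' - C) ∪ (B' ∩ C ∩ sigma)
#   Cbar = (B' ∩ C ∩ sigma) ∪ (C - alpha)
# where alpha and sigma are the initial alphabet and the set of initial
# possible events of the argument expression.

def normalise_lco(B, C, alpha, sigma):
    """Return the reduced event sets (Bbar, Cbar) of LCO[-B][+C]."""
    B, C = frozenset(B), frozenset(C)
    alpha, sigma = frozenset(alpha), frozenset(sigma)
    Bp = B & alpha                 # drop events not in the alphabet
    keep = Bp & C & sigma          # remove-then-add stays meaningful
                                   # only for executable events
    return (Bp - C) | keep, keep | (C - alpha)
```

For example, with alpha = {a, b, c}, sigma = {a, b}, B = {b, c, x} and C = {c, d}, the pair reduces to B̄ = {b} and C̄ = {d}: x is not in the alphabet, c is removed from both sets since it is not executable, and d alone is a genuine addition.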
Definition 3.2.1 (Transformation C_F1 :) Given f ∈ Φ^n(G_D^‖, Δ̄_D), C_F1(f) is defined as follows:
C_F1(f) := STOP_{α_F(f)} if σ_F(f) = ∅. Otherwise:
C_F1(f) := f if f = Pi.
C_F1(f) := ‖_{i=1}^m C̄_F1(fi) if f = ‖_{i=1}^m fi. Here C̄_F1(·) is the expression obtained by removing the outermost quote marks from C_F1(·). In other words, C_F1(f) = g implies C̄_F1(f) = g and C_F1(C̄_F1(f)) = C_F1(f).

C_F1(f^[[−B]]) := C_F1(f) if B ⊆ M_A(C̄_F1(f)). Otherwise, C_F1(f^[[−B]]) :=
  Pi^[[−B]]                                     if C̄_F1(f) = Pi
  ‖_{i=1}^m C̄_F1(fi^[[−B]])                     if C̄_F1(f) = ‖_{i=1}^m fi
  g^[[−(B′ ∪ (B − M_A(C̄_F1(f))))]]              if C̄_F1(f) = g^[[−B′]]
  (C̄_F1(g^[[−B]]))^[[+(C′−B)]]                  if C̄_F1(f) = g^[[+C′]]
  (C̄_F1(g^[[−B]]))^[−(B′−B)]                    if C̄_F1(f) = g^[−B′]
  (C̄_F1(g^[[−B]]))^[+(C′−B)]                    if C̄_F1(f) = g^[+C′]

C_F1(f^[[+C]]) := C_F1(f) if C ⊆ M_P(C̄_F1(f)). Otherwise, C_F1(f^[[+C]]) :=
  Pi^[[+C]]                                     if C̄_F1(f) = Pi
  (‖_{i=1}^m fi)^[[+(C−M_P(C̄_F1(f)))]]          if C̄_F1(f) = ‖_{i=1}^m fi
  g^[[−B′]][[+(C−M_P(C̄_F1(f)))]]                if C̄_F1(f) = g^[[−B′]]
  g^[[+C′ ∪ (C−M_P(C̄_F1(f)))]]                  if C̄_F1(f) = g^[[+C′]]
  (C̄_F1(g^[[+C]]))^[−(B′−C)]                    if C̄_F1(f) = g^[−B′]
  (C̄_F1(g^[[+C]]))^[−(B′−C)][+(C′−C)]           if C̄_F1(f) = g^[−B′][+C′]

C_F1(f^[−B]) :=
  Pi^[−(B ∩ α_F(C̄_F1(f)))]                      if C̄_F1(f) = Pi
  (‖_{i=1}^m fi)^[−(B ∩ α_F(C̄_F1(f)))]          if C̄_F1(f) = ‖_{i=1}^m fi
  g^[[−B′]][−(B ∩ α_F(C̄_F1(f)))]                if C̄_F1(f) = g^[[−B′]]
  g^[[+C′]][−(B ∩ α_F(C̄_F1(f)))]                if C̄_F1(f) = g^[[+C′]]
  g^[−(B′ ∪ (B ∩ α_F(C̄_F1(f))))]                if C̄_F1(f) = g^[−B′]
  (C̄_F1(g^[−B]))^[+(C′−B)]                      if C̄_F1(f) = g^[+C′]

C_F1(f^[+C]) :=
  Pi^[+(C − α_F(C̄_F1(f)))]                      if C̄_F1(f) = Pi
  (‖_{i=1}^m fi)^[+(C − α_F(C̄_F1(f)))]          if C̄_F1(f) = ‖_{i=1}^m fi
  g^[[−B′]][+(C − α_F(C̄_F1(f)))]                if C̄_F1(f) = g^[[−B′]]
  g^[[+C′]][+(C − α_F(C̄_F1(f)))]                if C̄_F1(f) = g^[[+C′]]
  g^[−B̄][+C̄]                                   if C̄_F1(f) = g^[−B′], with B̄ and C̄ obtained from B′ and C as in conversion (ii)
  g^[−B̄][+C̄]                                   if C̄_F1(f) = g^[−B′][+C′], with B̄ and C̄ obtained from B′ and C′ ∪ C as in conversion (ii)
Note that GCO[[−{b,c,e}]] distributes over the PCOs. While distributing, it takes the event symbols b and c out of STOP_{c,h}, GCO[[+{a,b}]] and GCO[[+{a,c}]]. Also, the presence of a in GCO[[+{a,b}]] and GCO[[+{a,c}]], and the presence of f in the initial alphabet of P3, make LCO[+{a,f}] redundant. Similarly, the absence of g from and the presence of d in the initial alphabet of P2, and the presence of h in STOP_{c,h}, make LCO[−{g}] over P2 and LCO[+{d,h}] redundant.
To express the properties of the transformation C_F1 formally, we define the following subsets of Φ^n(G_D^‖, Δ̄_D).

Definition 3.2.2 Let U1 := {STOP_A | A ⊆ Σ} ∪ {Pi^[[−B1]][[+C1]][−B2][+C2] | B1, B2, C1, C2 are subsets of Σ}. Also U2 := {(f1 ‖ f2 ‖ ⋯ ‖ fm)^[[+C1]][−B2][+C2] | m > 1, fi ∈ U1, and B2, C1, C2 are subsets of Σ}.
Finally, V1 is defined recursively as follows:
(g1 ‖ g2 ‖ ⋯ ‖ gm)^[[+C1]][−B2][+C2] ∈ V1 whenever gi ∈ U1 ∪ U2, m > 1, no gi can be decomposed further as gi1 ‖ gi2, and not all gi's are in U1.
(g1 ‖ g2 ‖ ⋯ ‖ gm)^[[+C1]][−B2][+C2] ∈ V1 whenever gi ∈ V1, m > 1, and no gi can be decomposed further as gi1 ‖ gi2.

Note that U1, U2 and V1 together include arbitrary process expressions with two characteristic features: the first is the fixed order among the change operator symbols, and the second is the appearance of GCO[[−B]] in distributed form over PCO. It is also assumed that if the associated event set of any change operator is empty, then that operator symbol is actually absent from the syntax.
Next we present the lemma that formally expresses the properties of C_F1.

Lemma 3.2.1 For h ∈ Φ^n(G_D^‖, Δ̄_D), X = F(X), f = C_F1(h) and any subexpression f′ ∈ Sub(f), we have
(1) f′ ∈ U1 ∪ U2 ∪ V1,
(2) f′ = g^[[−B]] ⟹ B ∩ M_A(g) = ∅,
(3) f′ = g^[[+C]] ⟹ C ∩ M_P(g) = ∅,
(4) f′ = g^[−B] ⟹ B ⊆ α_F(g),
(5) f′ = g^[+C], g ≠ v^[−B] ⟹ C ∩ α_F(g) = ∅,
(6) f′ = g^[−B][+C] ⟹ C = (C − α_F(g)) ∪ (B ∩ C ∩ σ_F(g)),
(7) f = C_F1(f), and
(8) <h>(X) = <f>(X).
Proof: The claims can be proved together with the help of structural induction on the construction of h ∈ Φ^n(G_D^‖, Δ̄_D). The proof is mechanical but tedious, as it involves exhaustive enumeration and analysis of all possible structures that f = C_F1(h) might possess. The formal proof is given in the Appendix. Informally, however, the results are easier to see. Claims (1)-(6) describe the structure obtained after the application of C_F1(·) on h. As is evident from the steps of the transformation, these claims are true because each step of C_F1(·) converts an expression by imposing the particular order among the change operators and removing the redundant event symbols. Since the transformation acts recursively on every subexpression, each of them also satisfies these claims. Claim (7) is satisfied since, obviously, C_F1(·) cannot make any structural conversion of an expression which already satisfies (1)-(6), and f naturally satisfies these. Finally, the last claim follows from the fact that each step of the transformation is semantics preserving at least for the argument processes X satisfying X = F(X). □
Though C_F1 removes a substantial amount of redundancy from Φ^n(G_D^‖, Δ̄_D), it fails to capture the underlying boundedness of the processes of this process space. To this end, a major remaining task is to prevent redundant functional symbols from appearing as arguments of PCO. Symbols like Pi^[+C], Pi^[+C] ‖ Pi and ((Pi ‖ Pi^[+C1])^[+C2] ‖ Pi) with C1 ∪ C2 = C, which are semantically equivalent (but syntactically distinct), will not be identified as such even after the application of C_F1. Moreover, for a given realisation (F, <g>) and X = F(X), if B ∩ α_F(Pi) is empty, then <Pi^[−B] ‖ Pi>(X) and <Pi>(X) represent identical processes, a fact which C_F1 does not capture. We define another transformation, C_F2, with its domain over the elements of C_F1(Φ^n(G_D^‖, Δ̄_D)), that will detect such semantic equivalence using the local information present in F.
To begin with, given a functional symbol from C_F1(Φ^n(G_D^‖, Δ̄_D)), we identify its minimal trace generating subexpression(s), called component expression(s) (or just components, for brevity), which, when applied on X, generate the same trace as that of the original symbol.
Definition 3.2.3 (Component and Variant expressions:)
For f ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), the set of trace generating component expressions of f, namely Comp(f), is defined recursively as follows:
Comp(f) :=
  {STOP_∅}                                            if f = STOP_A
  {Pi^[[−B1]][−(B2 ∩ σ_F(Pi^[[−B1]]))]}               if f = Pi^[[−B1]][[+C1]][−B2][+C2]
  {(‖_{i=1}^m vi)^[−(B2 ∩ σ_F(‖_{i=1}^m vi))]}        if f = (‖_{i=1}^m vi)^[[+C1]][−B2][+C2] and B2 ∩ σ_F(‖_{i=1}^m vi) ≠ ∅
  ∪_{i=1}^m Comp(vi)                                  if f = (‖_{i=1}^m vi)^[[+C1]][−B2][+C2] and B2 ∩ σ_F(‖_{i=1}^m vi) = ∅

In the first three cases Comp(f) is a singleton, and in these cases f is said to be a variant of the component expression g, where Comp(f) = {g}.
Referring to example 3.2.1, we find that f = C_F1(v) has three components, which are (P1^[[−{b,c,e}]] ‖ P3^[[−{b,c,e}]][[+{a}]])^[−{g}], P2^[[−{b,c,e}]][[+{a}]] and STOP_∅. It is easy to see that being variant to the same component expression is an equivalence relation, and that any component expression and its variants have identical traces. Also note that if f has a structure of the fourth type above, then it cannot be a variant of any component expression. Thus (P1 ‖ P1)^[[+C1]] cannot be a variant of any component expression even if its component set is a singleton containing P1.
Next, using the concept of component expressions, we identify a set of conditions under which a functional symbol g2 can merge into another symbol g1 to form a single symbol, say g3 ≠ g1 ‖ g2, such that <g1 ‖ g2>(X) = <g3>(X) for X = F(X). We say g2 merges into g1 and express it as g1 mR g2, where mR is the binary merging relation. The result of merging is denoted as M_m(g1, g2). We also require M_m(g1, g2) to be an element of C_F1(Φ^n(G_D^‖, Δ̄_D)). To keep the conditions general and amenable to easy computation, we use only local information from F, in the form of α_F(·) and σ_F(·). This information can be obtained without computing any post-process expression.

Definition 3.2.4 (Merging:) Given g1, g2 ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), g1 mR g2 if any of the following is satisfied. In each case the result of merging is also indicated.
(i) g1 = g2. The result of merging is M_m(g1, g2) := g1.
(ii) gi = h^[[+C1^i]][−B2^i][+C2^i] for i = 1, 2, B2^1 ∩ σ_F(h) = B2^2 ∩ σ_F(h) = B2^t (say), and h = Pj or Pj^[[−B1]]. The result of merging is M_m(g1, g2) := C̄_F1(h^[[+(C1^1∪C1^2)]][−(B2^t∪B2^a)][+(C2^1∪C2^2)]).
(iii) g1 = (h1)^[[+C1′]][−B2′][+C2′] with h1 = ‖_{i=1}^m vi, B2′ ∩ σ_F(h1) = ∅ and Comp(g2) ⊆ Comp(g1). The result of merging is M_m(g1, g2) := C̄_F1((h1 ‖ g2)^[[+C1′]][−B2′][+C2′]).
The following lemma formally states the properties of the merging relation mR.

Lemma 3.2.2 Given X = F(X) and g1, g2 ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), if g1 mR g2 then
(a) <g1 ‖ g2>(X) = <M_m(g1, g2)>(X), (b) M_m(g1, g2) ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)),
(c) Comp(g1 ‖ g2) = Comp(M_m(g1, g2)), and (d) mR is reflexive and transitive, i.e., g mR g, and if g1 mR g2 and g2 mR g3 then g1 mR g3. However, it is not symmetric in general.
Proof: The claims are obvious for the first condition of merging. In the case of the second condition, note that <g1>(X) and <g2>(X) have identical traces. This is because B2^1 ∩ σ_F(h) = B2^2 ∩ σ_F(h). The termination function is identically equal to 0. Clearly their parallel composition will also have the same trace and termination function. Now, first, we have to show that <M_m(g1, g2)>(X), as defined in the definition, and <g1 ‖ g2>(X) are identical processes. To see this, we first explain how B̄ and C̄ are formed. Since both gi's are from C_F1(Φ^n(G_D^‖, Δ̄_D)), from lemma 3.2.1 we have, for i = 1, 2: (a) B2^i ⊆ α_F(h) ∪ C1^i; (b) B2^i = B2^t ∪ B2^{a_i} and B2^t ∩ B2^{a_i} = ∅, where B2^1 ∩ σ_F(h) = B2^2 ∩ σ_F(h) =: B2^t and B2^{a_i} = B2^i − σ_F(h); (c) C2^i = (C2^i − (α_F(h) ∪ C1^i)) ∪ (C2^i ∩ B2^t) and (C2^i − (α_F(h) ∪ C1^i)) ∩ (C2^i ∩ B2^t) = ∅. Now α_F(g1 ‖ g2) is rewritten as follows.
α_F(g1 ‖ g2) = ∪_{i=1}^2 (((α_F(h) ∪ C1^i) − B2^i) ∪ C2^i)
= ∪_{i=1}^2 (((α_F(h) ∪ C1^i ∪ C1^j) − (B2^i ∪ (C1^j − (α_F(h) ∪ C1^i)))) ∪ C2^i), (j = 1, 2, j ≠ i)
(Note: for any three sets X, Y, Z, we have X − Y = (X ∪ Z) − (Y ∪ (Z − X)). Here we take X = α_F(h) ∪ C1^i, Y = B2^i and Z = C1^j.)
= ((α_F(h) ∪ C1^1 ∪ C1^2) − ∩_{i=1}^2 (B2^i ∪ (C1^j − (α_F(h) ∪ C1^i)))) ∪ C2^1 ∪ C2^2,
(Note: for any three sets X, Y1, Y2, (X − Y1) ∪ (X − Y2) = X − ∩_{i=1}^2 Yi. We take X = α_F(h) ∪ C1^1 ∪ C1^2 and Yi = B2^i ∪ (C1^j − (α_F(h) ∪ C1^i)).)
= ((α_F(h) ∪ C1^1 ∪ C1^2) − (B2^t ∪ ∩_{i=1}^2 (B2^{a_i} ∪ (C1^j − (α_F(h) ∪ C1^i))))) ∪ C2^1 ∪ C2^2
(Note: Yi = B2^t ∪ (B2^{a_i} ∪ (C1^j − (α_F(h) ∪ C1^i))) and B2^t ∩ (B2^{a_i} ∪ (C1^j − (α_F(h) ∪ C1^i))) = ∅ for both i = 1, 2. Then Y1 ∩ Y2 = B2^t ∪ ∩_{i=1}^2 (B2^{a_i} ∪ (C1^j − (α_F(h) ∪ C1^i))).)
= ((α_F(h) ∪ C1^1 ∪ C1^2) − (B2^t ∪ B2^a)) ∪ C2^1 ∪ C2^2
= ((α_F(h) ∪ C1^1 ∪ C1^2) − B̄) ∪ C̄
= α_F(M_m(g1, g2)), where B̄ = B2^t ∪ (B2^a − C2^1 − C2^2) and C̄ = ((C2^1 ∪ C2^2) ∩ B2^t) ∪ ((C2^1 ∪ C2^2) − (α_F(h) ∪ C1^1 ∪ C1^2)).
(Note: the last expression has been obtained by modifying the associated event sets of the local change operators, while preserving the semantics, so that the first six conditions of lemma 3.2.1 are satisfied. In other words, this is equivalent to applying C_F1(·) on h^[[+(C1^1∪C1^2)]][−(B2^t∪B2^a)][+(C2^1∪C2^2)].)
Clearly B̄ ∩ σ_F(h) = B2^t. It is then obvious that <M_m(g1, g2)>(X), as defined in the second condition, and <g1 ‖ g2>(X) have identical trace, termination and initial alphabet. Also, for any string s other than the empty string <>, α_{<g1‖g2>(X)}(s) = α_{<h>(X)}(s) ∪ C1^1 ∪ C1^2 = α_{<M_m(g1,g2)>(X)}(s). Hence <g1 ‖ g2>(X) = <M_m(g1, g2)>(X). From the last step of the above derivation, it can easily be seen that M_m(g1, g2) belongs to C_F1(Φ^n(G_D^‖, Δ̄_D)), as it satisfies the first six conditions of lemma 3.2.1. From the second condition of mR, it is easy to see that Comp(g1) = Comp(g2) = Comp(g1 ‖ g2) = Comp(M_m(g1, g2)) = {h^[−B2^t]} (if B2^t ≠ ∅) or Comp(h) (otherwise). From the above it is also obvious that mR is reflexive and transitive.
We now proceed to prove the lemma for the third condition of mR. Because of the special structure obtained after the application of C_F1, given any f ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), the elements of Comp(f) can be considered as the set of trace generating components of f. This is because tr <f>(X) is formed via interleaving of the traces of the processes corresponding to these individual component expressions. The STOP_A symbols and the different LCO[+C] and GCO[[+C]] symbols only add to the blocking capability of the different components joined by PCO. For example, consider the symbol f = (P1 ‖ P2^[[−B]][+C1])^[[+C2]] ‖ P2^[[−B]][+C3], which clearly belongs to C_F1(Φ^n(G_D^‖, Δ̄_D)) when F is as given in example 3.2.1 and B = {b,c,e}, C1 = {a}, C2 = {g} and C3 = {e}. It should be noted that it is never possible for an event to occur in <f>(X) such that it occurs in the component P2^[[−B]][+C3] alone. This is simply because P2^[[−B]][+C3] and P2^[[−B]][+C1] have identical traces. Thus the application of GCO[[+C2]] does not represent any extra blocking constraint for the component P2^[[−B]][+C3], as far as its operation in <f>(X) is concerned. According to condition (iii) of mR (definition 3.2.4), in this case (P1 ‖ P2^[[−B]][+C1])^[[+C2]] and P2^[[−B]][+C3] are merging compatible, and they can be merged to form (P1 ‖ P2^[[−B]][+C1] ‖ P2^[[−B]][+C3])^[[+C2]].
To proceed formally, assume h1 = ‖_{i=1}^m vi. Consider any s ∈ tr <g1 ‖ g2>(X). If s^<σ> ∈ tr <g1 ‖ g2>(X), then, due to (iii) of definition 3.2.4, only the following two cases are possible:
1. (s^<σ>)↾<g1>(X) = (s↾<g1>(X))^<σ> and (s^<σ>)↾<g2>(X) = s↾<g2>(X);
2. (s^<σ>)↾<g1>(X) = (s↾<g1>(X))^<σ> and (s^<σ>)↾<g2>(X) = (s↾<g2>(X))^<σ>.
The third case, namely (s^<σ>)↾<g1>(X) = s↾<g1>(X) and (s^<σ>)↾<g2>(X) = (s↾<g2>(X))^<σ>, is not possible. This is because any event that appears in the traces of some <g>(X), for g ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), actually occurs in some component process(es) <f>(X) such that f ∈ Comp(g). And, if the third condition of mR is satisfied, it will not be possible for any event to take place asynchronously in <g2>(X) alone in the concurrent combination <g1 ‖ g2>(X). More specifically, it will not be possible to get an event which satisfies all of the following simultaneously: (a) it takes place in <g2>(X); (b) it is neither blocked nor executed by <h1>(X) (which has the same set of component processes as <g1>(X), and hence contains all those of <g2>(X)); (c) it is blocked due to the application of the operators GCO[[+C1′]], LCO[−B2′] and LCO[+C2′], where B2′ ∩ σ_F(h1) = ∅. Had the above three requirements been satisfied simultaneously by any event, then the event would have appeared in the traces of <(h1 ‖ g2)^[[+C1′]][−B2′][+C2′]>(X) but not in those of <g1 ‖ g2>(X), which is a contradiction.
Definition 3.2.5 (Transformation C_F2 :) For f ∈ C_F1(Φ^n(G_D^‖, Δ̄_D)), C_F2(f) is defined as follows.
C_F2(f) :=
  f                                    if f ∈ U1
  C̄_F1(h^[[+C1]][−B2][+C2])            if f = (‖_{i=1}^m vi)^[[+C1]][−B2][+C2] ∈ U2 ∪ V1, where h = Closure(‖_{i=1}^m vi) is computed as follows.

Algorithm: Closure(‖_{i=1}^m vi).
Step 1: Compute the set of expressions S as follows. For each vi, compute C_F2(vi). If C_F2(vi) ∈ U1, or C_F2(vi) = (‖_{j=1}^r wj)^[[+C1]][−B2][+C2] such that at least one of C1, B2, C2 is nonempty, then for any node n in Node(C_F2(vi)) of Tree(C_F2(vi)), hn ∈ S. Otherwise, if C_F2(vi) = ‖_{j=1}^r wj, then for each wj and for any node n in Node(wj), hn ∈ S. Let S = {g1, ⋯, gr}.
(Note: S has been constructed by applying C_F2 recursively on all arguments of the PCO and then collecting the syntactically distinct expressions.)
Step 2: Closure(‖_{i=1}^m vi) := g1 if S = {g1} is a singleton. Otherwise,
Closure(‖_{i=1}^m vi) := ‖_{h ∈ M(‖_{i=1}^r gi)} h,
where M is the set of expressions obtained by the recursive merging procedure defined below.
(Note: among the elements of S we apply the recursive merging procedure M, so that in the resultant expression no further merging among the arguments of the PCO is possible.)
End of Algorithm
Definition 3.2.6 (Recursive Merging Procedure M :) If f ∈ U1 then M(f) := {f}. If f = ‖_{i=1}^m vi then M(f) is computed using the steps given later. Finally, if f = (‖_{i=1}^m vi)^[[+C1]][−B2][+C2], then M(f) := {C̄_F1(h^[[+C1]][−B2][+C2])}, where h := g if M(‖_{i=1}^m vi) = {g}, and h = ‖_{g ∈ M(‖_{i=1}^m vi)} g otherwise.
M (km
i=1 vi ) is computed using the following steps.
Step M.1: Let V denote the set of syntactically distinct elements among v_1, …, v_m.
While (∃ g_1, g_2 ∈ V, g_1 ≠ g_2, such that g_1 mR g_2 and g_2 mR g_1) do {
1. V_1 := V;
2. Partition V_1 such that g_1, g_2 ∈ V_1 are in the same partition iff both g_1 mR g_2 and g_2 mR g_1. Let there be p partitions and let the j-th partition be V_{1j} = {g_{j1}, …, g_{jk_j}};
3. Compute V_2 := {g_{j1} | V_{1j} = {g_{j1}} is a singleton, 1 ≤ j ≤ p} ∪ ⋃_{V_{1j} = {g_{j1}, …, g_{jk_j}}, k_j > 1} M(h_j), where h_j is obtained by merging the elements of V_{1j} in some arbitrary order;
4. V := V_2 } end (While)
(Note: In order to compute M(∥_{i=1}^{m} v_i), we first partition the set V using both-way merging compatibility as an equivalence relation. All the components of a single partition are first merged in some arbitrary order and then M is applied on the result, so that we get a unique collection of expressions. At the end, no both-way merging is possible among the elements of V.)
Step M.2: Compute V_3 and V_4 as follows:
V_3 := {v ∈ V | ∄ h ∈ V, h ≠ v, such that v mR h or h mR v}.
V_4 := {v ∈ V | ∄ h ∈ V, h ≠ v, such that h mR v, but ∃ h ∈ V, h ≠ v, such that v mR h}.
Step M.4: ∀ v ∈ V_4, compute V_{4v} := {h ∈ V | h ≠ v and v mR h} = {h_{v1}, …, h_{vk_v}} (say).
(Note: V_3 is the set of elements of V each of which neither merges into any other element of V, nor has any other element of V merging into it. V_4 is the set of elements of V each of which does not merge into any other element of V, but into which other element(s) of V merge. Naturally, each element of V which is not in V_3 ∪ V_4 merges into one or more elements of V_4. In the following step this merging takes place recursively using the transformation M, which results in a unique collection of expressions irrespective of the order of merging.)
Step M.5: Compute
V_5 := V_3 ∪ ⋃_{v ∈ V_4, V_{4v} = {h_{v1}, …, h_{vk_v}}} M(h̃_v),
where h̃_v is obtained by merging h_{v1}, …, h_{vk_v} into v in some arbitrary order.
M(∥_{i=1}^{m} v_i) := V_5.
End of Procedure
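The fixpoint loop of Step M.1 can be illustrated with a small sketch (hypothetical Python; `mergeable` plays the role of the mR relation, assumed to induce an equivalence on each partition, and `merge` collapses one partition into a single expression):

```python
def step_m1(V, mergeable, merge):
    # One fixpoint computation in the style of Step M.1: partition V into
    # classes of mutually mergeable expressions, collapse every non-singleton
    # class, and repeat until no two distinct elements are mutually mergeable.
    V = list(dict.fromkeys(V))  # syntactically distinct elements, order kept
    while any(mergeable(a, b) and mergeable(b, a)
              for a in V for b in V if a != b):
        classes = []
        for g in V:
            for cls in classes:
                rep = cls[0]
                if mergeable(g, rep) and mergeable(rep, g):
                    cls.append(g)
                    break
            else:
                classes.append([g])
        V = [cls[0] if len(cls) == 1 else merge(cls) for cls in classes]
    return V

# Toy relation: expressions sharing a leading symbol are mutually mergeable.
print(step_m1(["ax", "ay", "b"], lambda u, v: u[0] == v[0],
              lambda c: "+".join(c)))  # → ['ax+ay', 'b']
```

Classifying against one representative per class is valid only because, on each partition, the relation is treated as an equivalence — exactly the assumption the note above makes.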
Example 3.2.4 Given the process equations F in example 3.2.1, consider the following functional symbols in CF1(Φ^n_∥(G_D, Σ_D)). Let g_1 = (P1 ∥ P2^[+{g}])^[[+{b}]], g_2 = (P3 ∥ P2^[+{h}])^[[+{a}]], g_3 = (P1 ∥ P2^[+{h}])^[[+{b}]], g_4 = (P3 ∥ P2^[+{g}])^[[+{a}]]. Finally, let us compute CF2(f) where f = (g_1 ∥ g_2)^[{g}] ∥ (g_3 ∥ g_4)^[{g}]. It can be easily seen that CF2(g_i) = g_i for 1 ≤ i ≤ 4.
Now consider Closure(g_1 ∥ g_2). Following the first steps of the algorithm, we immediately find it to be (P1 ∥ P2^[+{h,g}])^[[+{b}]] ∥ (P3 ∥ P2^[+{h,g}])^[[+{a}]]. Note how the arguments of the outermost PCO share the identical variant P2^[+{g,h}] for the identical component P2. Similarly, Closure(g_3 ∥ g_4) results in the same expression. As a result, CF2((g_1 ∥ g_2)^[{g}]) and CF2((g_3 ∥ g_4)^[{g}]) are both equal to ((P1 ∥ P2^[+{h,g}])^[[+{b}]] ∥ (P3 ∥ P2^[+{h,g}])^[[+{a}]])^[{g}]. Clearly CF2(f) also equals this expression.
Example 3.2.5 As another example, consider the following expressions in CF1(Φ^n_∥(G_D, Σ_D)) for the same F as in example 3.2.1.
f_1 = ((P1^[[{b}]] ∥ STOP_{h})^[[+{b}]] ∥ (P2 ∥ P3)^[[+{c}]])^[+{m}];  f_4 = P1^[[{b}]][+{m}];
f_2 = (P2^[+{g}] ∥ P3^[+{d}] ∥ STOP_{e})^[[{e}]];  f_3 = ((P1^[+{a}] ∥ P2)^[+{n}] ∥ P3)^[[+{k}]].
To find CF2(∥_{i=1}^{4} f_i), note that CF2(f_i) = f_i for i = 2, 3, 4, but CF2(f_1) = ((P1^[[{b}]] ∥ STOP_{h})^[[+{b}]] ∥ (P2 ∥ P3 ∥ STOP_{h})^[[+{c}]])^[+{m}]. Next we compute Closure(∥_{i=1}^{4} f_i) according to the algorithm.
Closure(∥_{i=1}^{4} f_i) then merges these arguments, yielding an expression whose components are drawn from variants such as P1^[[{b}]], P2^[+{g}], P3^[+{d}] and P1^[[{b}]][+{m}] appearing above.
Definition 3.2.8 (Postprocess Computation for A(Φ^n_∥(G_D, Σ_D)):) Given a set of equations X = F(X) and h ∈ CF(Φ^n_∥(G_D, Σ_D)) such that ΣF(h) ≠ ∅, for any σ ∈ ΣF(h) the postprocess expression (denoted as h/σ) is an element of CF(Φ^n_∥(G_D, Σ_D)) such that (⟨h⟩(X))/⟨σ⟩ = ⟨h/σ⟩(X), and it is computed recursively as follows.
h = g^[[+C]] ⟹ h/σ = CF((g/σ)^[[+C]]).
h = g^[B] ⟹ h/σ = CF(g/σ). (Here σ ∉ B.)
h = g^[+C] ⟹ h/σ = CF(g/σ).
h = ∥_{i=1}^{p} g_i ⟹ h/σ = CF(∥_{i=1}^{q} f_i), where for 1 ≤ i ≤ p, g_i' = g_i if σ ∉ ΣF(g_i), and g_i' = g_i/σ if σ ∈ ΣF(g_i).
(Note: (a) σ ∈ ΣF(h) implies that σ ∈ ΣF(g_i) for every component g_i whose alphabet contains σ.
(b) ∥_{i=1}^{q} f_i = ∥_{i=1}^{p} g_i', no f_i can be expressed as f_{i1} ∥ f_{i2}, and q ≥ p. Note here that, when g_i' = g_i/σ, it may have the structure g_i' = ∥_{j=1}^{m_i} g_{ij}', where the individual g_{ij}' cannot be decomposed further into parallel components. Each such g_{ij}' is considered as some f_i, and as a result q ≥ p.)
Finally, for P ∈ A(⟨Φ^n_∥(G_D, Σ_D)⟩) with realisation (F, ⟨g⟩), the set of postprocess expressions S_P ⊆ CF(Φ^n_∥(G_D, Σ_D)) is computed recursively as follows:
CF(g) ∈ S_P.
(h ∈ S_P) ∧ (σ ∈ ΣF(h)) ⟹ h/σ ∈ S_P.
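Operationally, this recursive definition of S_P is a closure computation: start from CF(g) and keep applying the postprocess operation until no new expression appears. A minimal sketch (hypothetical Python; `events_of` returns ΣF(h) and `post` computes h/σ):

```python
from collections import deque

def postprocess_set(start, events_of, post):
    # Compute S_P by closing {CF(g)} under the postprocess operation h/sigma.
    # This terminates exactly when the process is bounded (S_P finite).
    S = {start}
    queue = deque([start])
    while queue:
        h = queue.popleft()
        for sigma in events_of(h):
            h2 = post(h, sigma)
            if h2 not in S:
                S.add(h2)
                queue.append(h2)
    return S

# A two-expression toy system:
events = {"A": ["x"], "B": ["y"]}
nxt = {("A", "x"): "B", ("B", "y"): "A"}
print(sorted(postprocess_set("A", events.__getitem__,
                             lambda h, s: nxt[(h, s)])))  # → ['A', 'B']
```

For an unbounded process this loop never terminates, which is precisely why the boundedness results of this chapter matter.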
We now present the main result of this section. For any process P in A(⟨Φ^n_∥(G_D, Σ_D)⟩) with realisation (F, ⟨g⟩), if S_P is computed using the above procedure, then S_P is a finite set and therefore P is bounded. To show this, we first define the set of possible components of all postprocess expressions of the process with realisation (F, ⟨g⟩) as follows.
Definition 3.2.9 COMP(F, ⟨g⟩) := {STOP_A | A ⊆ Σ} ∪ ⋃ Comp(CF(f^[[B]])), the union being taken over the relevant expressions f and sets B ⊆ Σ.
Basis: Let h = P_i^[[B1]][[+C1]][B2][+C2] ∈ S_P and Comp(h) ⊆ COMP(F, ⟨g⟩). For σ ∈ ΣF(h), we have h/σ = CF((CF((P_i/σ)^[[B1]]))^[[+C1]]) = CF((CF((CF(f_{ij}))^[[B1]]))^[[+C1]]) = CF((CF(f_{ij}^[[B1]]))^[[+C1]]). This is because, for any f, CF((CF(f))^[[B]]) = CF(f^[[B]]). Now, by definition, Comp(CF(f_{ij}^[[B1]])) is a subset of COMP(F, ⟨g⟩). To find the components of h/σ, we have to consider the different possible structures of CF(f_{ij}^[[B1]]).
Case B1: CF(f_{ij}^[[B1]]) = STOP_A. Then h/σ = STOP_{A∪C1} and trivially Comp(h/σ) ⊆ COMP(F, ⟨g⟩).
Case B2: CF(f_{ij}^[[B1]]) = P_j^[[B1']][[+C1']][B2'][+C2']. Then h/σ = P_j^[[B1']][[+(C1' ∪ C1)]][B̃2][+C̃2]. Here B̃2 and C̃2 are determined from B2', C2', C1', C1 and ΣF(P_j^[[B1']]). From the structure of B̃2 and C̃2 it can be easily concluded that Comp(h/σ) = Comp(CF(f_{ij}^[[B1]])) and hence is a subset of COMP(F, ⟨g⟩).
Case B3: CF(f_{ij}^[[B1]]) = (∥_{i=1}^{m} v_i)^[[+C1']][B2'][+C2']. Then h/σ = (∥_{i=1}^{m} v_i)^[[+(C1' ∪ C̃1)]][B̃2][+C̃2]. Here C̃1 = C1 − M_P((∥_{i=1}^{m} v_i)^[[+C1']]), and B̃2, C̃2 are determined from B2', C2', C1 and ΣF(∥_{i=1}^{m} v_i), exactly as in Case B2.
Here ∥_{i=1}^{q} f_i = ∥_{i=1}^{p} v_i', no f_i can be expressed as f_{i1} ∥ f_{i2}, and q ≥ p. Note here that, when v_i' = v_i/σ, it may have the structure v_i' = ∥_{j=1}^{m_i} v_{ij}', where the individual v_{ij}' cannot be decomposed further into parallel components. Each such v_{ij}' is considered as some f_i, and as a result q ≥ p.
Since h ∈ S_P, and CF has been applied while computing v_i/σ, it is obvious that each f_i belongs to CF(Φ^n_∥(G_D, Σ_D)). Then h/σ = CF(g̃^[[+C1]]), where g̃ = Closure(∥_{i=1}^{q} f_i).
As each f_i belongs to CF(Φ^n_∥(G_D, Σ_D)), from lemma 3.2.3,
Comp(Closure(∥_{i=1}^{q} f_i)) = Comp(∥_{i=1}^{q} f_i) = Comp(∥_{i=1}^{m} v_i').
Also, by the induction hypothesis, it is a subset of COMP(F, ⟨g⟩). It is then trivial to see that
Comp(h/σ) = Comp(CF(g̃^[[+C1]])) (where g̃ = Closure(∥_{i=1}^{q} f_i)) = Comp(Closure(∥_{i=1}^{q} f_i)) ⊆ COMP(F, ⟨g⟩).
Thus, by induction, Claim 2 is found to be true. The lemma then follows from the two claims. □
Our next result shows that the set of expressions of CF(Φ^n_∥(G_D, Σ_D)) whose components come from a finite set such as COMP(F, ⟨g⟩) is itself finite. Let the set of possible variants of expressions of COMP(F, ⟨g⟩) be denoted VAR(F, ⟨g⟩). Because of the finiteness of the symbol set and of COMP(F, ⟨g⟩), it is obvious that VAR(F, ⟨g⟩) is a finite set.
Lemma 3.2.6 The set S = {f ∈ CF(Φ^n_∥(G_D, Σ_D)) | Comp(f) ⊆ COMP(F, ⟨g⟩)} is finite.
Proof: To prove this lemma, given any element f of S, we first represent it as a tree as mentioned in remark 3.2.1. Note that all the leaf node expressions of Tree(f) are in VAR(F, ⟨g⟩). Since f ∈ CF(Φ^n_∥(G_D, Σ_D)), the number of arguments m of any subexpression (∥_{i=1}^{m} v_i)^[[+C1]][B2][+C2] of f is bounded (by a quantity determined by Σ_{i=1}^{r} |F_i|), and hence the number of leaf nodes |L(f)| of Tree(f) is bounded.
Since for any f ∈ S, |L(f)| is bounded, from the particular structure of CF(Φ^n_∥(G_D, Σ_D)) it can be easily concluded that there is an upper bound on the length len(f) of f. This, in turn, implies that S is finite. □
Finally, we state the main result about the boundedness of processes of A(Φ^n_∥(G_D, Σ_D)).
Theorem 3.2.1 Any P in A(Φ^n_∥(G_D, Σ_D)) is bounded under the postprocess computation procedure described above.
Proof: From lemma 3.2.5 and lemma 3.2.6, it can be easily seen that S_P ⊆ S and hence it is finite. Thus P is bounded. □
3.3 Finiteness Characterisation : ⟨Φ^n_∥(H_D, Σ_D)⟩
In the previous section it has been shown that ⟨Φ^n_∥(G_D, Σ_D)⟩ is infinite, though the DFRPs of A(⟨Φ^n_∥(G_D, Σ_D)⟩) are all bounded. We now identify a more restricted subclass of ⟨Φ^n_∥(G_D, Σ_D)⟩ which is finite. To this end, we remove both SCO and LCO[B] from G_D.
Definition 3.3.2 (Minimum Locally Present Alphabet:) MLP : Φ^n_∥(H_D, Σ_D) → 2^Σ is defined as MLP(f) := {σ | ∀ P ∈ Π^n_D, σ ∈ Σ_{⟨f⟩(P)}(⟨⟩)}. It is computed as follows.
MLP(f) =
∅                      if f = STOP_A
∅                      if f = P_i
MLP(g) ∪ C             if f = g^[+C]
MLP(g) − B             if f = g^[[B]]
MLP(g) ∪ C             if f = g^[[+C]]
⋃_{i=1}^{m} MLP(g_i)   if f = ∥_{i=1}^{m} g_i
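The recursion above translates directly into code. A sketch (hypothetical Python over nested-tuple expressions; as in the definition, STOP_A and bare process variables contribute the empty set, since nothing is guaranteed for an arbitrary P):

```python
def mlp(f):
    tag = f[0]
    if tag in ("STOP", "P"):          # STOP_A and process variables: empty set
        return set()
    if tag in ("LCO+", "GCO+"):       # f = g^[+C] or g^[[+C]]: add C
        _, g, C = f
        return mlp(g) | set(C)
    if tag == "GCO-":                 # f = g^[[B]]: remove B
        _, g, B = f
        return mlp(g) - set(B)
    if tag == "PCO":                  # f = g1 ∥ ... ∥ gm: union over components
        return set().union(*(mlp(g) for g in f[1]))
    raise ValueError(tag)

f = ("GCO-", ("LCO+", ("P", 1), {"a", "b"}), {"b"})
print(sorted(mlp(f)))  # → ['a']
```

The tuple encoding (`"LCO+"`, `"GCO-"`, etc.) is an assumption made for the sketch, not the thesis's own syntax.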
As in definition 3.2.2, we define two subsets U and V of the set of functional symbols Φ^n_∥(H_D, Σ_D), which are characterised by the particular order in which the LCOs and the GCOs are arranged in any functional symbol. In a sense, they are canonical forms of syntactic process expressions.
Child(f) :=
{f}                                  if f ∈ U
{f} ∪ ⋃_{i=1}^{m} Child(v_i)         if f = (∥_{i=1}^{m} v_i)^[[+C1]][+C2] ∈ V
Example 3.3.1 Consider f = ((P_i^[[B1]] ∥ P_j)^[+C1] ∥ P_k^[+C3])^[[+C2]] ∥ P_l. Then Child(f) := {f, ((P_i^[[B1]] ∥ P_j)^[+C1] ∥ P_k^[+C3])^[[+C2]], P_l, (P_i^[[B1]] ∥ P_j)^[+C1], P_k^[+C3], P_i^[[B1]], P_j}.
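For illustration, Child amounts to a shallow recursion that descends only through PCO arguments. A sketch (hypothetical Python; expressions are nested tuples and the [[+C1]][+C2] decorations are omitted):

```python
def child(f):
    # Child(f): f itself plus, when f is a (decorated) PCO, the Child sets
    # of its arguments, recursively.
    out = [f]
    if f[0] == "PCO":                 # f = (v1 ∥ ... ∥ vm)^[[+C1]][+C2]
        for v in f[1]:
            out.extend(child(v))
    return out

f = ("PCO", (("P", 1), ("PCO", (("P", 2), ("P", 3)))))
print(len(child(f)))  # → 5
```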
In order to show that ⟨Φ^n_∥(H_D, Σ_D)⟩ is finite, we shall construct a syntactic transformation C over Φ^n_∥(H_D, Σ_D), which identifies and converts many semantically equivalent expressions of Φ^n_∥(H_D, Σ_D) into a single expression while preserving the semantics. We then show that C(Φ^n_∥(H_D, Σ_D)) is a finite set. Since semantics is preserved, the set of functions ⟨Φ^n_∥(H_D, Σ_D)⟩ is identical to ⟨C(Φ^n_∥(H_D, Σ_D))⟩. Since the latter is finite by the finiteness of the corresponding symbol set (syntax) C(Φ^n_∥(H_D, Σ_D)), the former is also finite. The transformation C is composed of two semantics preserving transformations C1 and C2.
It should be remembered that in the previous section the transformation CF was designed for a particular DFRP with realisation (F, ⟨g⟩). Local information derived from F was used in designing the transformation. Naturally, it was semantics preserving for the processes P = F(P), but not for arbitrary processes in Π^n_D. In order to show the finiteness of a function space, however, we have to construct syntactic transformations which are semantics preserving in general.
The first transformation C1 is almost identical to the one defined in definition 3.2.1 and converts any expression into some expression in U ∪ V while preserving the semantics. It also takes care of redundancies arising out of the knowledge of minimum alphabets. Note that, instead of ΣF, it uses MLP(·) to remove redundancies in LCO[+C].
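The redundancy removal itself is a one-line set difference: events that MLP already guarantees to be locally present need not be added by LCO[+C]. A sketch (hypothetical Python; `mlp_f` is the precomputed set MLP(f)):

```python
def reduce_lco_plus(C, mlp_f):
    # Redundancy removal in LCO[+C]: events guaranteed locally present
    # contribute nothing, so only C − MLP(f) is kept.
    return set(C) - set(mlp_f)

print(sorted(reduce_lco_plus({"g", "b"}, {"e", "f", "g"})))  # → ['b']
```

This mirrors, for example, the reduction of [+{g,b}] to [+{b}] seen later in example 3.4.1.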
Definition 3.3.5 (Transformation C1:) Formally, C1 : Φ^n_∥(H_D, Σ_D) → Φ^n_∥(H_D, Σ_D) is defined recursively as follows.
C1(f) := f for f = STOP_A or P_i.
C1(f^[[B]]) :=
STOP_{A−B}                          if C1(f) = STOP_A
P_i^[[B]]                           if C1(f) = P_i
∥_{i=1}^{m} C1(f_i^[[B]])           if C1(f) = ∥_{i=1}^{m} f_i
g^[[B' ∪ (B ∩ M_A(C1(f)))]]         if C1(f) = g^[[B']]
(C1(g^[[B]]))^[[+(C'−B)]]           if C1(f) = g^[[+C']], and STOP_{C'−B} if C1(g^[[B]]) is a STOP expression
(C1(g^[[B]]))^[+(C'−B)]             if C1(f) = g^[+C'], and STOP_{C'−B} if C1(g^[[B]]) is a STOP expression
C1(f^[[+C]]) :=
STOP_{A∪C}                          if C1(f) = STOP_A
P_i^[[+C]]                          if C1(f) = P_i
∥_{i=1}^{m} C1(f_i^[[+C]])          if C1(f) = ∥_{i=1}^{m} f_i
g^[[B']][[+(C−M_P(C1(f)))]]         if C1(f) = g^[[B']]
g^[[+(C' ∪ C)]]                     if C1(f) = g^[[+C']]
(C1(g^[[+C]]))^[+(C'−C)]            if C1(f) = g^[+C']
C1(f^[+C]) :=
STOP_{A∪C}                          if C1(f) = STOP_A
P_i^[+C]                            if C1(f) = P_i
(∥_{i=1}^{m} f_i)^[+(C−MLP(C1(f)))] if C1(f) = ∥_{i=1}^{m} f_i
g^[[B']][+(C−MLP(C1(f)))]           if C1(f) = g^[[B']]
g^[[+C']][+(C−MLP(C1(f)))]          if C1(f) = g^[[+C']]
g^[+(C' ∪ (C−MLP(C1(f))))]          if C1(f) = g^[+C']
(b) ∀ f' ∈ Child(f) ∩ U, f' = P_i^[[B]][[+C1]][+C2] ⟹ C1 ∩ C2 = ∅;
(c) ∀ f' ∈ Child(f) ∩ V, f' = (∥_{i=1}^{m} v_i)^[[+C1]][+C2] ⟹ (C1 ∩ M_P(v_1 ∥ … ∥ v_m) = ∅) ∧ (C2 ∩ MLP((∥_{i=1}^{m} v_i)^[[+C1]]) = ∅);
(d) f = C1(f);
(e) h ≡ f.
(iii) g1 = h^[[+C1]][+C2] and g2 = h^[[+C1']][+C2'], where h = P_i or P_i^[[B]] or v_1 ∥ … ∥ v_m, for some B, C1, C2, C1', C2'. Then M'_m(g1, g2) := h^[[+(C1 ∪ C1')]][+((C2 ∪ C2') − (C1 ∪ C1'))]. Note that h̃ = h.
(iv) (g1 = h_1^[[+C1']][+C2']) ∧ (h_1 = v_1 ∥ … ∥ v_m) ∧ (g2 ≠ STOP or h^[[+C1'']][+C2''] for any C1'' and C2'') ∧ ({P_i^[[B]] such that P_i^[[B]][[+C1]][+C2] ∈ Child(g2), B, C1, C2 ⊆ Σ} ⊆ {P_i^[[B]] such that P_i^[[B]][[+C1]][+C2] ∈ Child(g1), B, C1, C2 ⊆ Σ}). Then M'_m(g1, g2) := (h_1 ∥ g̃2)^[[+(C1'−M_P(g2))]][+(C2'−MLP(g2))].
Lemma 3.3.2 For g1, g2 ∈ C1(Φ^n_∥(H_D, Σ_D)), g1 m'_R g2 implies (a) (g̃1 ∥ g̃2) ≡ M'_m(g1, g2), (b) M'_m(g1, g2) ∈ C1(Φ^n_∥(H_D, Σ_D)), and (c) m'_R is reflexive and transitive, i.e., g m'_R g, and if g1 m'_R g2 and g2 m'_R g3 then g1 m'_R g3.
C2(f) :=
f                            if f ∈ U
C1(h^[[+C1]][+C2])           if f = (∥_{i=1}^{m} v_i)^[[+C1]][+C2] ∈ V
where h = Closure'(∥_{i=1}^{m} v_i) is computed as follows.
Step 1: Compute the set of expressions S as follows. For each v_i, compute C2(v_i). If C2(v_i) ∈ U, or C2(v_i) = (∥_{j=1}^{r} w_j)^[[+C1]][+C2] such that at least one of C1, C2 is nonempty, then for every node n in Node(C2(v_i)) of Tree(C2(v_i)), h_n ∈ S. Otherwise, if C2(v_i) = ∥_{j=1}^{r} w_j, then for each w_j, and for every node n in Node(w_j), h_n ∈ S. Let S = {g_1, …, g_r}.
(Note: S has been constructed by applying C2 recursively on all arguments of the PCO and then collecting the syntactically distinct expressions.)
Step 2: Closure'(∥_{i=1}^{m} v_i) := g_1 if S = {g_1} is a singleton. Otherwise,
Closure'(∥_{i=1}^{m} v_i) := ∥_{h̃ ∈ M'(∥_{i=1}^{r} g_i)} h̃.
(iii) if f' = (∥_{i=1}^{m} v_i)^[[+C1]][+C2] ∈ Child(f), then ∄ i, j ∈ {1, …, m}, i ≠ j, such that v_i m'_R v_j.
The proof of this lemma is similar to that of lemma 3.2.3.
Proof: Results (a) and (b) follow easily from lemmas 3.3.3 and 3.3.1. Result (c) follows from the argument presented below. For a finite Σ, U ∩ C1(Φ^n_∥(H_D, Σ_D)) is finite; let the number of expressions in it be N, which can be computed from |Σ| and n. Here |Σ| denotes the number of event symbols in Σ. Since, from lemma 3.3.3, for any h ∈ C(Φ^n_∥(H_D, Σ_D)) and (∥_{i=1}^{m} v_i)^[[+C1]][+C2] ∈ Child(h), ∄ v_{i1}, v_{i2}, 1 ≤ i1, i2 ≤ m, i1 ≠ i2, with v_{i1} m'_R v_{i2}, it is never possible that {h_n | n ∈ L(v_{i1})} ⊆ {h_n | n ∈ L(v_{i2})} or vice versa, where L(v_i) denotes the set of leaf nodes of Tree(v_i). For if it were so, then v_{i1} m'_R v_{i2} would have been true. Together, we can conclude that for any h ∈ C(Φ^n_∥(H_D, Σ_D)), the number of leaf nodes |L(h)| of Tree(h) is bounded. Since |L(h)| is bounded, from the particular structure of C(Φ^n_∥(H_D, Σ_D)) it can be easily concluded that there is an upper bound on len(h). This, in turn, implies that C(Φ^n_∥(H_D, Σ_D)) is finite. □
Finally, we prove the finiteness of the function space ⟨Φ^n_∥(H_D, Σ_D)⟩.
Theorem 3.3.1 ⟨Φ^n_∥(H_D, Σ_D)⟩ is a finite set of functions.
Proof: Since the set of functional symbols C(Φ^n_∥(H_D, Σ_D)) is finite (by (c) of lemma 3.3.4), the set of corresponding functions ⟨C(Φ^n_∥(H_D, Σ_D))⟩ is finite. From (b) of lemma 3.3.4 it follows that for every h in Φ^n_∥(H_D, Σ_D) there is an element f = C(h) in C1(Φ^n_∥(H_D, Σ_D)) such that h ≡ f. From this we can conclude that ⟨Φ^n_∥(H_D, Σ_D)⟩ and ⟨C(Φ^n_∥(H_D, Σ_D))⟩ are identical sets of functions. Together, we can conclude that ⟨Φ^n_∥(H_D, Σ_D)⟩ is also a finite set of functions. □
3.4
In this section, we prove that boundedness is decidable for processes built without any application of PCO. Following definition 3.1.2, let A(⟨Φ^n_;(G_D, Σ_D)⟩) be the set of DFRPs built with SCO and the different change operators, and A(⟨Φ^n_;(H_D, Σ_D)⟩) be the set of DFRPs built with SCO alone.
As in the case of A(⟨Φ^n_∥(G_D, Σ_D)⟩), we need to define a syntactic transformation DF for the reduction of redundant symbols. The transformation DF is almost similar to CF1, except that here SCO is used in place of PCO, and all the change operators distribute over it. Also, as mentioned in relation to the computation of minimum alphabets, expressions joined by SCO symbols should be STOP- and SKIP-reduced. Because of the close similarity between DF and CF1, instead of giving a complete definition of DF, we just mention the points where the two transformations differ.
Given f ∈ Φ^n_;(G_D, Σ_D), DF(f) is defined as follows.
If F(f) has termination attribute 1, then DF(f) := SKIP_{αF(f)}.
If the attribute is 0 and ΣF(f) = ∅, then DF(f) := STOP_{αF(f)}. Otherwise:
DF(f) = f if f = P_i.
DF(f_1; …; f_m) := T_;(D̃F(f_1); …; D̃F(f_m)), where T_;(·) is the procedure of SKIP and STOP reduction mentioned before definition 3.1.4. Here D̃F(·) = DF(·).
DF(f^[[B]]), DF(f^[[+C]]), DF(f^[B]) and DF(f^[+C]) are defined recursively in a manner identical to that of CF1, except for the following differences.
(i) The transformation is not defined for expressions containing the PCO (∥) symbol.
(ii) If DF(f) = f_1; …; f_m, then
DF(f^[[B]]) := T_;(D̃F(f_1^[[B]]); …; D̃F(f_m^[[B]])).
DF(f^[[+C]]) := D̃F(f_1^[[+C]]); …; D̃F(f_m^[[+C]]).
DF(f^[B]) := (D̃F(f_1^[B]); f_2; …; f_m).
DF(f^[+C]) := (D̃F(f_1^[+C]); f_2; …; f_m).
The following example illustrates the transformation DF.
Example 3.4.1 Let P = F(P) be the collection of the following equations, with P = (P1, P2, P3).
P1 = (a → ⟨P2⟩(P) | b → ⟨P3; P2⟩(P) | c → ⟨P1⟩(P))_{{a,b,c,g},0}.
P2 = (d → ⟨P2⟩(P) | e → ⟨SKIP_∅⟩(P))_{{d,e,f},0}.
P3 = (f → ⟨P1^[{c,g}]⟩(P) | e → ⟨SKIP_∅⟩(P) | g → ⟨P2⟩(P))_{{e,f,g},0}.
Let f = (SKIP_∅; P3; P2^[{d}]; P1)^[[{e}]][+{g,b}]. Then DF(f) = (P3)^[[{e}]][+{b}]; STOP_{f}.
Here SKIP_∅; P3 is reduced to P3 before the distribution of the LCO symbols over SCO. Also, after distribution, P2^[[{e}]][{d}] is reduced to STOP_{f}, as its set of possible initial events becomes empty after application of the change operators. As a result, P1^[[{e}]], which follows this STOP expression, is removed by the STOP reduction.
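The SKIP/STOP reduction T_; used above can be sketched as follows (hypothetical Python over a list of SCO terms; a SKIP followed by another term vanishes, and a STOP cuts off everything after it):

```python
def t_seq(terms):
    # SKIP/STOP reduction of an SCO chain: SKIP; X reduces to X, and no term
    # after a STOP can ever execute (terms are plain strings in this sketch).
    out = []
    for i, t in enumerate(terms):
        if t.startswith("SKIP") and i + 1 < len(terms):
            continue                  # drop SKIP unless it is the final term
        out.append(t)
        if t.startswith("STOP"):
            break                     # cut off the unreachable tail
    return out

print(t_seq(["SKIP", "P3", "STOP_f", "P1"]))  # → ['P3', 'STOP_f']
```

This reproduces the shape of example 3.4.1: the leading SKIP disappears and P1 after the STOP is discarded. A trailing SKIP is kept, since it records successful termination.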
Lemma 3.4.3 For arbitrary P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation (F, ⟨g⟩), boundedness is decidable iff, for any P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation (F, ⟨P_i⟩), 1 ≤ i ≤ n, boundedness is decidable.
Proof: (Only if:) Straightforward, as one can take ⟨P_i⟩ as ⟨g⟩.
(If:) Suppose that, for any P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation (F, ⟨P_i⟩), 1 ≤ i ≤ n, boundedness is decidable. We have to prove that for arbitrary P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation (F, ⟨g⟩), boundedness is decidable. From lemma 3.4.2, as DF(g) ∈ S_P, it can have one of the three types of structures mentioned there. If DF(g) is either STOP_A or SKIP_A, boundedness is trivially decidable. If DF(g) is any P_i, or P_i; STOP_A, or P_i; SKIP_A, boundedness is decidable by the hypothesis of the proof. Finally, suppose DF(g) has some structure of type (iii). Then the boundedness of P can be decided as follows. Starting from j = 1 and going at most up to m, we repeat the following procedure. First we check the boundedness of the process with realisation (F, ⟨P_{i_j}⟩), which is clearly decidable by hypothesis. If it is unbounded, P is unbounded. Otherwise, if it is bounded, we can easily decide whether S_P contains any SKIP_A, as S_P is finite. If it does not, there is no need to check further and P can be declared bounded. Otherwise, we increment j in P_{i_j} and repeat the procedure. Since m is finite, in finitely many steps we can decide the boundedness of P. □
In view of lemma 3.4.3, to prove the decidability of boundedness of arbitrary processes, we now need a procedure which can determine in finite time whether an arbitrary process P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation of the form (F, ⟨P_i⟩), 1 ≤ i ≤ n, is bounded. For any process P, it will either compute the set S_P if it is finite (i.e., P is bounded), or declare P unbounded, terminating in finitely many steps. The procedure is based on the following theorem.
Definition 3.4.2 Given some process P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation of the form (F, ⟨P_i⟩), an expression P_j is said to be a derivative of P_i if there exists some postprocess expression beginning with P_j in the set S_{⟨P_i⟩(X)}, where X = F(X). Trivially, P_i is its own derivative.
Theorem 3.4.1 Given some process P ∈ A(⟨Φ^n_;(H_D, Σ_D)⟩) with realisation of the form (F, ⟨P_i⟩), P will be unbounded iff there exists some derivative P_j of P_i such that, for some m ≥ 2 and W = P_{j2}; …; P_{jm}, either P_j; W̃ or P_j; W̃; SKIP_B (where W̃ ≡ W) belongs to S_{⟨P_j⟩(X)}.
Proof: (If:) Suppose P_j is a derivative of P_i after some event sequence s, i.e., P/s = ⟨P_j⟩(X) or ⟨P_j; Ỹ⟩(X), for some expression Y. Also, by the hypothesis of the sufficiency condition, for some event sequence t, let ⟨P_j⟩(X)/t = ⟨P_j; W̃⟩(X) (or ⟨P_j; W̃; SKIP_B⟩(X)). Clearly then, for every positive integer n, s^t^n ∈ tr P. Also, for each n the expression P_j; (W̃)^n; Ỹ (or P_j; (W̃)^n, or P_j; (W̃)^n; SKIP_B if Ỹ is absent) will be present in S_P, and P will be unbounded. It should however be noted that the presence of P_j; W̃; STOP_B in S_{⟨P_j⟩(X)} does not indicate unboundedness, as in this case, even though ∀n, s^t^n ∈ tr P, the corresponding postprocess expression is always P_j; W̃; STOP_B.
(Only if:) A formal proof of the necessity part of the theorem will be given after describing the procedure. □
Boundedness Checking Procedure: An informal description
The above theorem motivates the following procedure for checking the boundedness of processes. Given any process P, we construct a tree whose nodes contain the different postprocess expressions of P = ⟨P_i⟩(X). While building the tree, at each step we check whether the equivalent condition of unboundedness mentioned above is satisfied. If the process is bounded, the algorithm terminates in finite time after generating the set S_P. On the other hand, if it is unbounded, then also in finite time the algorithm detects the satisfaction of the sufficient condition and terminates after declaring P unbounded.
The root node contains the expression P_i. For any node N of the tree, let N.E denote the expression contained in N. R denotes the root node of the tree, and R.E = P_i if ⟨g⟩ = ⟨P_i⟩ for some i. For some node N' ≠ N, R ⇒ N' ⇒ N denotes that N' is a node different from N lying on the path from the root node to the node N. N and N' are called descendants of R. At any node, whenever N.E is different from STOP_A or SKIP_A, for each σ ∈ ΣF(N.E) we compute the expression (N.E)/σ and put it in a new node N_new. To understand where this new node should be attached in the tree, we have to consider the different possibilities that arise as a result of cross combination between N.E and P_j/σ, where P_j is the first (or only) process symbol in N.E. The resultant expressions (N.E)/σ (i.e., N_new.E) for the different cases are tabulated in Table 3.1. Here the rows (1)-(4) indicate the various possibilities for N.E, whose first term is P_j. The columns (a)-(f) correspond to the different cases for P_j/σ. For example, the combination (d) vs (2) indicates that if P_j/σ is P_l; Ỹ; SKIP_B, then (P_j; STOP_A)/σ is P_l; Ỹ; STOP_A, where a SKIP reduction has been performed. The other entry, within parentheses in the table, indicates at which node this new node should be connected, provided the expression in the new node has not already occurred in some child of the indicated node. In most cases it is connected to the node N itself. Our intention is to build the tree in such a way that, along any path in the tree, whenever we find a repetition of the first term with a larger expression length in the descendant node, unboundedness is inferred. To ensure this, we connect the new node either to the root node R or to some node N* on the path from R to N, in the following way.
Combinations (a)-(d) vs (1)-(4): In these cases, after the occurrence of some event σ in the process ⟨P_j⟩(X), the process does not terminate, and the length of N_new.E is at least equal to that of N.E. In these cases (except the case (b) vs (1)), N_new is attached to N itself. In the case of (f) vs (1) also, N_new is attached to N. In all these cases N_new is termed a primary child of N.
Combinations (b) vs (1) and (e) vs (1)-(4): In the case of (b) vs (1), though there is an increase in length of expression from N.E to N_new.E, a repetition of the first term (i.e., from some P_l to P_l; SKIP_B) does not indicate unboundedness. Here N_new is attached to the root node, indicating that expansion of the tree starting from that node is necessary in order to arrive at some conclusion about the unboundedness of the system. The same is true for the cases (e) vs (1)-(4), when ⟨P_j⟩(X)/σ results in a postprocess expression Z̃ ending with STOP_B. Here, after substitution of P_j/σ in N.E, we get N_new.E as the expression Z̃ itself. As stated in the proof of theorem 3.4.1, a repetition of the first term in these cases does not indicate unboundedness. Here also, N_new is attached to the root node. In all these cases N_new is termed a secondary child of R.
Combinations (f) vs (2)-(4): Clearly, in these cases the length of N_new.E is less than that of N.E, as ⟨P_j⟩(X), where P_j is the first element of N.E, terminates. Further, for all the nodes preceding N whose expression lengths are equal to that of N.E (that is, for the nodes between N* and N, excluding N*), the processes corresponding to the first elements of the corresponding expressions terminate. In these cases, we have to look for repetition between the first elements of the expressions of nodes preceding (and including) N* and that of N_new. Note that for all the nodes between N* and N, excluding N*, the expressions following the first element will be identical and equal to N_new.E. To facilitate checking, we attach N_new to N*, indicating that the tree is to be expanded along that path also. Obviously, before attaching N_new to any node, it is to be ensured that no child of that node already bears the same expression as N_new. In these cases also, N_new is termed a secondary child, of N*.
The above procedure of attaching the nodes as primary or secondary children of existing nodes ensures the following: (i) Along any path of the tree, if any node N bears an expression N.E beginning with symbol P_l, and any other node N', R ⇒ N' ⇒ N, bears an expression N'.E beginning with symbol P_j, then P_l is a derivative of P_j. In other words, along any path of the tree, no process corresponding to the first element of the expressions in
Rows (1)-(4) give the form of N.E; columns (a)-(f) give the cases for P_j/σ, namely (a) P_l, (b) P_l; SKIP_B, (c) P_l; Ỹ, (d) P_l; Ỹ; SKIP_B, (e) Z̃, (f) SKIP_B. Each entry shows N_new.E and, in parentheses, the node to which N_new is attached.
(1) N.E = P_j:
  (a) P_l (N); (b) P_l; SKIP_B (R); (c) P_l; Ỹ (N); (d) P_l; Ỹ; SKIP_B (N); (e) Z̃ (R); (f) SKIP_B (N)
(2) N.E = P_j; STOP_A:
  (a) P_l; STOP_A (N); (b) P_l; STOP_A (N); (c) P_l; Ỹ; STOP_A (N); (d) P_l; Ỹ; STOP_A (N); (e) Z̃ (R); (f) STOP_A (N*)
(3) N.E = P_j; SKIP_A:
  (a) P_l; SKIP_A (N); (b) P_l; SKIP_A (N); (c) P_l; Ỹ; SKIP_A (N); (d) P_l; Ỹ; SKIP_A (N); (e) Z̃ (R); (f) SKIP_A (N*)
(4) N.E = P_j; W̃:
  (a) P_l; W̃ (N); (b) P_l; W̃ (N); (c) P_l; Ỹ; W̃ (N); (d) P_l; Ỹ; W̃ (N); (e) Z̃ (R); (f) W̃ (N*)
W = P_{j2}; …; P_{jm}, or P_{j2}; …; P_{jm}; STOP_A, or P_{j2}; …; P_{jm}; SKIP_A.
Y = P_{l2}; …; P_{lr}.
Z = STOP_B, or P_l; STOP_B, or P_l; Ỹ; STOP_B.
W̃ ≡ W, and similarly for Ỹ and Z̃.
R ⇒ N* ⇒ N, and N* is such that len(N*.E) < len(N.E) and, ∀ N' ≠ N*, N* ⇒ N' ⇒ N, len(N'.E) = len(N.E).
Table 3.1: Table of (N.E)/σ
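For reference, the attachment rule of Table 3.1 can be encoded as a small lookup (a sketch; rows (1)-(4) are the forms of N.E, columns (a)-(f) the cases of P_j/σ, and the values name the node — N, R or N* — that receives the new child, as described in the surrounding text):

```python
# Attachment targets, keyed by (column, row) of Table 3.1.
ATTACH = {
    ("a", 1): "N", ("a", 2): "N", ("a", 3): "N", ("a", 4): "N",
    ("b", 1): "R", ("b", 2): "N", ("b", 3): "N", ("b", 4): "N",
    ("c", 1): "N", ("c", 2): "N", ("c", 3): "N", ("c", 4): "N",
    ("d", 1): "N", ("d", 2): "N", ("d", 3): "N", ("d", 4): "N",
    ("e", 1): "R", ("e", 2): "R", ("e", 3): "R", ("e", 4): "R",
    ("f", 1): "N", ("f", 2): "N*", ("f", 3): "N*", ("f", 4): "N*",
}

print(ATTACH[("b", 1)], ATTACH[("f", 4)])  # → R N*
```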
73
the nodes of that path (except the case when the node bears an expression SKIPA , for
some A) has terminated. (ii) The lengths of the expressions at the nodes are non decreasing
along any path. (iii) Finally it is also ensured that any repetition in the first element of
expressions of the nodes, along any path, with a strictly increased length of the second
expression ( barring the case of repetition of the first term between Pl and Pl ; SKIPB
for any Pl ) is equivalent to the satisfaction of the sufficient condition of unboundedness.
It should be noted that in case some child of the tree contains an expression of the type
(e), (see Table 3.4) the checking of repetition in the path through it, will commence from
that child node, instead of the root node directly. This is because, if the child contains
an expression of type (e) beginning with Pi where root node also contains Pi , the
repetition, even with increased length does not indicate unboundedness.
The tree is built using a recursive depth-first procedure. Before expanding the tree from any node N, we check the following. If N bears a STOP_B or SKIP_B type of expression, it cannot be expanded further, and N is marked as expanded. Similarly, if N.E = N'.E for some N' such that R ⇒ N' ⇒ N, a loop is indicated; in that case also the node is marked as expanded. Finally, barring the case of a repetition of the first term between P_l and P_l; SKIP_B, if N.E has as its first element some P_l such that, for some N', R ⇒ N' ⇒ N, N'.E also begins with P_l but len(N'.E) < len(N.E), unboundedness is indicated. Otherwise, for each event σ that is possible from ⟨N.E⟩(X), N.E/σ is created and a new unexpanded node N_new, bearing N.E/σ, is attached to N or N* or R, as indicated in the table. Since the tree is built using a depth-first procedure, a parent node is termed expanded when either the tree building is complete from all its child nodes, i.e., they have all been termed expanded, or unboundedness has been detected from some of its child nodes. The algorithm terminates after finding the status of the root node.
A formal description of the procedure, in the form of an algorithm, can be found in the Appendix.
In the following examples we apply the above procedure to build the tree from the recursive descriptions and decide on their boundedness.
Example 3.4.2 Consider the process P with realisation (F, ⟨P1⟩), where P = (P1, …, P5) = F(P) is given by the following collection of equations.
P1 = (a → ⟨P5⟩(P) | b → ⟨P2; P3⟩(P))_{{a,b},0}
P2 = (c → ⟨P4⟩(P) | d → ⟨SKIP_∅⟩(P))_{{c,d},0}
P3 = (e → ⟨P3⟩(P) | f → ⟨P4⟩(P))_{{e,f},0}
P4 = (g → ⟨P4⟩(P) | h → ⟨P3; STOP_∅⟩(P))_{{g,h},0}
P5 = (i → ⟨P2; P2⟩(P))_{{i},0}
The tree generated by the above procedure for this example (see Fig. 3.1) shows P to be a bounded process.
Example 3.4.3 Let P = (F, ⟨P1⟩), where P = (P1, …, P4) = F(P) is given by:
P1 = (a → ⟨P2⟩(P) | b → ⟨P2; P3⟩(P))_{{a,b},0}
P2 = (c → ⟨P4⟩(P) | d → ⟨SKIP_∅⟩(P) | e → ⟨P1; STOP_∅⟩(P))_{{c,d,e},0}
P3 = (e → ⟨P3; P3⟩(P))_{{e},0}
P4 = (g → ⟨P4⟩(P))_{{g},0}
The tree generated by the above procedure for this example (see Fig. 3.2) shows P to be an unbounded process.
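The core repetition test that the procedure applies along each path — the same first symbol recurring with a strictly longer expression, as happens in example 3.4.3 — can be sketched as follows (hypothetical Python; expressions are lists of process symbols, and the P_l versus P_l; SKIP_B exception is ignored here):

```python
def check_path(path):
    # Scan one root-to-node path for the unboundedness pattern: a later
    # expression with the same leading symbol but strictly greater length.
    for i in range(len(path)):
        for j in range(i + 1, len(path)):
            ei, ej = path[i], path[j]
            if ei[0] == ej[0] and len(ej) > len(ei):
                return "UNBOUNDED"
    return "NO-REPETITION"

# P2;P3 later growing to P2;P3;P3, as in example 3.4.3:
print(check_path([["P1"], ["P2", "P3"], ["P2", "P3", "P3"]]))  # → UNBOUNDED
```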
Finally, we prove the necessity part of theorem 3.4.1.
Proof: (Only if:) We assume the contrary, i.e., for any P_j reachable from P_i, neither P_j; W̃ nor P_j; W̃; SKIP_B belongs to S_{⟨P_j⟩(X)} for any W = P_{j2}; …; P_{jm}.
[Figure 3.1: tree generated by the procedure for example 3.4.2; STATUS = BOUNDED]
[Figure 3.2: tree generated by the procedure for example 3.4.3]
Since there are altogether n distinct process symbols, namely P1 to Pn, the maximum value of L within which a loop will be indicated is n in the case of the first possibility and n + 1 in the case of the second possibility.
Next we prove that the number of paths in the tree is also finite, i.e., each node may have only a finite number of children. Clearly, the number of primary children of any node is bounded by the number of choices of events present in any of the equations of F. That the number of secondary children is also finite can be understood from the following. For any node N different from R, a secondary child is attached to N whenever the process corresponding to the first element of the expression of any primary child node, or of some other already-attached secondary child node of N, having an expression of length strictly greater than that of N, terminates. In these cases, N is considered as N* for that child of N. The newly created secondary node contains the remaining expression. Thus, if N bears an expression P_{j1}; …; P_{jm} and some primary child node of N bears an expression, say, P_{l1}; …; P_{lr}, such that m < r, it may produce at most r − m secondary children for N, one for each P_{li}; …; P_{lr}, 2 ≤ i ≤ (r − m) + 1. Because of the finiteness of the depth, the length of the expression of any primary child node is bounded. Clearly, the number of secondary children is then also finite. Next consider the root node. Other than the primary children generated by the events of ΣF(P_i), there are a finite number of cases ((b) vs (1) and (e) vs (1)-(4) from the table) in which a node is attached to the root. Given any F, the number of nodes of this type is also finite. From all these nodes, once again, only a finite number of secondary children can be generated, with R behaving as N* for some of its children. Thus, there is an upper bound on the number of children of any node.
From the above argument we find that the tree building procedure always generates a
finite tree. Along any path of the tree either unboundedness is detected within finite depth
and the algorithm terminates or each path ends either with a loop or some ST OPA or
SKIPA type of expression. Since by hypothesis of this proof, along no path of the tree,
unboundedness is detected, the tree building procedure can terminate only after exploring
the set of all possible post-process expressions, namely SP . The finiteness of the tree
guarantees that in this case SP will be finite. Hence P is bounded.
2
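The pigeonhole argument above suggests how the tree-building procedure can be mechanised. The sketch below is our own illustration, not the thesis's construction: equations and expressions are represented as plain tuples of process-symbol names, and a branch is cut as soon as the leading symbol repeats along the current path, which bounds the depth as in the proof.

```python
from typing import Dict, List, Tuple

Expr = Tuple[str, ...]  # a sequential post-process expression, e.g. ("P4", "P2")

def explore(eqs: Dict[str, List[Expr]], root: Expr) -> set:
    """Enumerate post-process expressions reachable from `root`.

    A branch is cut as soon as the leading symbol repeats along the
    current path; by the pigeonhole argument, this bounds the depth by
    the number of distinct process symbols.
    """
    seen: set = set()

    def visit(expr: Expr, heads_on_path: frozenset) -> None:
        if not expr or expr in seen:
            return
        seen.add(expr)
        head, rest = expr[0], expr[1:]
        if head in heads_on_path:  # loop detected along this path: stop the branch
            return
        for succ in eqs.get(head, []):
            # the successor replaces the head; an empty successor models
            # termination, exposing the remainder of the expression
            visit(tuple(succ) + rest, heads_on_path | {head})

    visit(root, frozenset())
    return seen

# toy system: P1 -> P2;P1 or terminate, P2 -> terminate
eqs = {"P1": [("P2", "P1"), ()], "P2": [()]}
reached = explore(eqs, ("P1",))
```

Here the finite set `reached` plays the role of the set of post-process expressions SP for this toy system.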
Theorem 3.4.2 Boundedness for arbitrary processes in A(< n(H;D, D) >) is decidable.
Proof: From theorem 3.4.1 we find that there exists a procedure to decide (in finite time) whether an arbitrary process P ∈ A(< n(H;D, D) >) with P = < Pi > (X), X = F(X), is bounded. Then, together with the procedure mentioned in the proof of lemma 3.4.3, we can always decide in finite time whether any process P ∈ A(< n(H;D, D) >) with realisation (F, < g >) is bounded or not. Thus boundedness for this space is decidable. 2
Finally, we show how the same algorithm can be used to decide the boundedness of processes of A(< n(G;D, D) >).
Lemma 3.4.4 Given any P ∈ A(< n(G;D, D) >) with realisation (F, < g >), f ∈ SP implies that f has one of the following structures: (i) STOPA or SKIPA, A ⊆ Σ, or (ii) f or f; STOPA or f; SKIPA, or (iii) f1; ...; fm or f1; ...; fm; STOPA or f1; ...; fm; SKIPA, where each f or fi is some Pi[[B1]][[+C1]][B2][+C2] such that if any parameter of any change operator symbol is empty, then the corresponding change operator symbol is absent. Also m > 1 and 1 ≤ ij ≤ n, j = 1, ..., m.
Proof: Straightforward from the post-process expression computation procedure defined earlier, as the change operator symbols distribute over the SCO symbol.
2
Theorem 3.4.3 Boundedness for arbitrary processes in A(< n(G;D, D) >) is decidable.
Proof: Given any P ∈ A(< n(G;D, D) >) with realisation (F, < g >), the number of distinct symbols of the form Pi is finite and equal to n. Then, for a finite Σ, the number of distinct symbols f of the form Pi[[B1]][[+C1]][B2][+C2] (where B1, C1, B2, C2 are subsets of Σ and, if any of them is empty, the corresponding change operator symbol is nonexistent in f) that may appear in elements of post-process expressions of P is also finite and equal to N (say). It is easy to see that the procedure designed for processes of A(< n(H;D, D) >) can be used for this space also, except that here a repetition in the first element of expressions means a repetition of the f symbols and not just of the Pi symbols. Thus the tree depth will be bounded by N instead of n. Hence, for this space also, boundedness is decidable.
2
3.5
In the previous sections we have studied the boundedness property of two classes of DFRPs, namely A(< n(G∥D, D) >) and A(< n(G;D, D) >). The modelling flexibility of both subclasses is restricted: the former does not include SCO and cannot model stacks, re-entrant code, etc., while the latter does not include PCO and cannot model concurrency. On the other hand, for processes of A(< n(GD, D) >), where both operators can be used in an unrestricted fashion, modelling flexibility is assured, but several problems, including boundedness, are undecidable. This is expected and merely shows that the tractability of problems in a modelling paradigm can be improved only at the cost of design flexibility, and vice versa. The problem is therefore to establish boundedness and decidability results for more general subclasses of A(< n(GD, D) >), flexible enough to model a wide variety of real-world processes, possibly by combining the subclasses presented here in a judicious manner. Almost certainly, such classes will be characterised by a restricted use of both SCO and PCO. Below, a few such examples of bounded DFRPs are presented, where a good trade-off between modelling flexibility and tractability is obtained.
The Subclass B
As a first example, we present a subclass B of A(< (GD, D) >) where, for any process Y ∈ B with realisation (F, < g >), the processes X satisfying X = F(X) can be partitioned into a number of bounded modules. As originally introduced in [21], for some DFRP Y = < g > (X), X = F(X), a subset X0 of X is called a module if X0 is mutually recursive and cannot be written as the union of two disjoint strictly proper subsets X01, X02 of X0 that are both mutually recursive. If X = F(X), then the subset of equations from F describing the processes of the module X0 can be expressed as X0 = F0(X0). We define a module to be bounded if each process in it is bounded. In the following we present a formal definition of the subclass B. The problem of deciding the boundedness of a general module is obviously as complex as that of a DFRP, and hence undecidable. To guarantee boundedness, we impose further restrictions on these modules, which are also mentioned below.
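Under one reasonable reading of this definition, a module corresponds to a strongly connected component of the dependency graph of the equations. The sketch below is our own interpretation, not a construction given in the thesis (all names are ours); it computes such a partition with Kosaraju's two-pass algorithm.

```python
def modules(deps):
    """Partition equation symbols into candidate modules, read here as the
    strongly connected components of the dependency graph, where
    deps[x] = set of symbols appearing on the right-hand side of F_x."""
    order, seen = [], set()

    def dfs(u):  # first pass: record finish order on the original graph
        seen.add(u)
        for v in deps.get(u, ()):
            if v not in seen:
                dfs(v)
        order.append(u)

    for u in deps:
        if u not in seen:
            dfs(u)

    transpose = {u: set() for u in deps}
    for u in deps:
        for v in deps[u]:
            transpose.setdefault(v, set()).add(u)

    comp, assigned = [], set()
    for u in reversed(order):  # second pass: sweep the transpose graph
        if u in assigned:
            continue
        stack, cur = [u], set()
        while stack:
            x = stack.pop()
            if x in assigned:
                continue
            assigned.add(x)
            cur.add(x)
            stack.extend(transpose.get(x, ()))
        comp.append(cur)
    return comp

# example: {X1, X2} mutually recursive, X3 depends on them but is on its own
deps = {"X1": {"X2"}, "X2": {"X1"}, "X3": {"X1"}}
parts = modules(deps)
```

For the example, the partition separates the mutually recursive pair {X1, X2} from the singleton {X3}.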
Definition 3.5.1 B is the subset of A(< (GD, D) >) such that any process Y ∈ B, with realisation (F, < g >), satisfies the following.
The set of equations X = F(X) can be partitioned into p disjoint modules, p ≥ 1, such that
(a) X = (X1, ..., Xp), where Xi = (X1i, ..., Xmi i) = Fi(Xi). Each Fi is a collection of equations of the form
Xji = Fij(Xi) = (σji1 < fji1 > (Xi) | ... | σjinij < fjinij > (Xi))Aij,0.
Here Aij ⊇ {σji1, ..., σjinij}.
[Note: To maintain rigour, any fjik is written, after choosing the projection symbols suitably, as some f ∈ < m(GD, D) >, where m = m1 + ... + mp and < fjik > (Xi) = < f > (X). For example, the process X12; X32 can equivalently be expressed as either < P1; P3 > (X12, X22, X32) or < P3; P5 > (X11, X21, X12, X22, X32).]
(b) Any module either belongs to A(< (G∥D, D) >) (where boundedness of each process in the module is guaranteed), or it belongs to A(< (G;D, D) >) along with the condition that each process in the module is bounded (which is decidable).
(c) Y = < g > (X1, ..., Xp), where < g > can be any arbitrary function of < m(GD, D) >.
Note that, in a very simple case, a process in B may have just a single module in it.
Lemma 3.5.1 For any Y ∈ B, Y is bounded.
Proof: By condition (b) of definition 3.5.1, each process Xji of X is bounded. However, to obtain a bounded implementation, a suitable post-process computation procedure should be used. In other words, for processes of modules in A(< (G∥D, D) >), the syntactic transformation CF should be used in the computation of post-process expressions. Similarly, for processes of modules in A(< (G;D, D) >), the syntactic transformation DF should be used. For Y = < g > (X), the post-process expressions of Y can be written as compositions of post-process expressions of the individual Xji. Since each individual Xji is bounded, it is easy to see that Y will have a finite set of post-process expressions and is hence bounded. 2
Remark 3.5.1 In condition (b), for modules in A(< (G;D, D) >), if the individual processes are not constrained to be bounded, then the resulting structure is quite general. In [33], a Turing machine has been modelled using this type of structure, where Y is a parallel composition of a number of processes, one from each module, and each module is from A(< (G;D, D) >) with processes not constrained to be bounded. Consequently, boundedness of Y is in general undecidable if condition (b) is relaxed.
Below, we present a typical example of a process from the subclass B.
Example 3.5.1 Let Y = < (P1; P7) ∥ P10 > (X) = (X11; X12) ∥ X13, where
X = (X11, ..., X61, X12, ..., X32, X13, ..., X53) and
X11 = (σ1 < (P2; P4[[{σ9}]])[[+{σ10,σ11,σ12}]] > (X) | σ2 < (P3; P4[[{σ9}]])[[+{σ10,σ11,σ12}]] > (X)){σ1,σ2,σ10,σ11,σ12},0
X21 = (σ3 < P2 > (X) | σ4 < SKIP{} > (X)){σ3,σ4},0
X31 = (σ5 < STOP{} > (X) | σ6 < SKIP{} > (X)){σ5,σ6},0
X41 = (σ7 < P5 > (X) | σ8 < P6 > (X) | σ9 < P4 > (X)){σ7,σ8,σ9},0
X51 = (σ10 < SKIP{} > (X) | σ9 < P5 > (X)){σ9,σ10},0
X61 = (σ11 < SKIP{} > (X) | σ12 < P6 > (X)){σ11,σ12},0
X12 = (σ11 < P8[[+{σ11,σ12}]] ∥ P7 > (X) | σ12 < P9[[+{σ11,σ12}]] ∥ P7 > (X)){σ11,σ12},0
X22 = (σ13 < P7 > (X)){σ13},0
X32 = (σ14 < P7 > (X)){σ14},0
X13 = (σ15 < (P11; P12)[[+{σ10}]] > (X) | σ16 < SKIP{σ10} > (X)){σ10,σ15,σ16},0
X23 = (σ17 < P10[{σ15}] > (X) | σ18 < P11 > (X)){σ17,σ18},0
X33 = (σ19 < P13 > (X) | σ20 < P14 > (X)){σ19,σ20},0
X43 = (σ10 < SKIP{} > (X)){σ10},0
X53 = (σ21 < P13 > (X)){σ21},0
In example 3.5.1, the output process is a concurrent operation of two processes, X11; X12 and X13, where the former is in turn a sequential operation of two processes, namely X11 and X12. The first module (X11, ..., X61) and the third module (X13, ..., X53) are in A(< (G;D, D) >), whereas the second module (X12, ..., X32) is in A(< (G∥D, D) >). Clearly Y ∈ B.
Since the processes of B are bounded, a finite-state implementation is possible. Naturally, reachability, deadlock, liveness, language equivalence, etc. are also decidable. In many cases, each module represents a task. The ith task is accomplished by the process X1i. Also, the output equation Y becomes a function of the processes {X11, ..., X1p}, i.e., Y = < g > (X11, ..., X1p), and a task-level integrity is maintained [21].
Often, by blocking, if-then-else type constructs among processes can be achieved in the output equation. For example, let X11, X12, X13 be three processes from three disjoint modules with the following properties. Let X13(<>) = A. For any s ∈ tr X11, if X11/s ≠ STOPB, then A ⊆ X11(s) but, for σ ∈ A, s^<σ> ∉ tr X11. On the other hand, if X11/s = STOPB, then B = ∅. Finally, ∀s ∈ tr X12, A ⊆ X12(s) but, for σ ∈ A, s^<σ> ∉ tr X12. Now Y = (X11; X12) ∥ X13 achieves the following: the task X11 begins first; if it terminates successfully, X12 takes over; however, in case X11 terminates unsuccessfully, X13 takes its place.
The Subclass H
The bounded subclass B has the limitation that it allows the combination of SCO and PCO only at the output equation Y = < g > (X). Since each task is modelled as a module, the single output equation represents a flat combination of tasks; it does not allow any explicit hierarchy among the tasks. However, any complex system can in general be modelled as a hierarchical combination of tasks, where each task consists of sequences of events interspersed with suitable subtasks. Next, we present a bounded subclass H of processes from A(< (GD, D) >) wherein a hierarchical ordering among tasks is brought out explicitly.
The informal idea is as follows. Given any module X0, there may exist some proper subset X00 of X0 which is itself a module. Naturally, in this case Z0 := X0 \ X00 cannot be a module. Also, the processes of Z0 can be expressed as Z0 = F̃(Z0, X00), where F̃ is the set of equations from F representing the processes of Z0. Since each module can be viewed as the representation of a task, the task X0 is here said to be placed hierarchically above the task X00. Also, the task X00 can be viewed as an external input to the recursive process equations of Z0. In a hierarchical characterisation, one or more output processes joining a number of bounded sub-modules (as in the case of B) are used as inputs to the rest of the equations of a module which is placed hierarchically above these sub-modules.
To formally describe the subclass H, we use the concept of a hierarchy tree H = (V, ⊢), originally introduced in [32]. Let V be a set of nodes and ⊢ a binary relation (called the hierarchy relation) over V satisfying the following properties:
(i) There exists a unique node, called the root node of the hierarchy tree H and denoted r = r(H), such that there does not exist any node v ∈ V with v ⊢ r. In other words, the root node does not have a parent.
(ii) For every node v ∈ V, v ≠ r, there exists a unique node u ∈ V such that u ⊢ v. The node u is called the parent of v, and v is called a child of u.
(iii) The nodes which do not have any child are called the leaf nodes of the hierarchy tree.
Clearly, H = (V, ⊢) defines a tree. Let l denote the depth of the tree and Vj the set of all nodes at level j, 0 ≤ j ≤ l, where the root node is the unique node at level 0. The transitive closure of ⊢ is denoted ⊢+ and the reflexive transitive closure ⊢*. Thus u ⊢* v denotes that u is an ancestor of v (possibly u = v), and u ⊢+ v denotes that u is an ancestor of v with u ≠ v.
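Properties (i)-(iii) translate directly into a small data structure. The following is a minimal sketch; the representation by a parent map and all names are ours, not the thesis's.

```python
class HierarchyTree:
    """A hierarchy tree given by a parent map: parent[v] = u iff u |- v."""

    def __init__(self, nodes, parent):
        self.nodes = set(nodes)
        self.parent = dict(parent)
        roots = [v for v in self.nodes if v not in self.parent]
        if len(roots) != 1:                 # property (i): unique parentless root
            raise ValueError("hierarchy tree needs exactly one root")
        self.root = roots[0]

    def children(self, u):                  # property (ii): parent map inverted
        return {v for v, p in self.parent.items() if p == u}

    def leaves(self):                       # property (iii): childless nodes
        return {v for v in self.nodes if not self.children(v)}

    def level(self, v):                     # root is the unique node at level 0
        n = 0
        while v != self.root:
            v = self.parent[v]
            n += 1
        return n

    def ancestor(self, u, v):               # u |-* v (reflexive transitive closure)
        while v != u and v in self.parent:
            v = self.parent[v]
        return v == u

t = HierarchyTree({"r", "a", "b", "c"}, {"a": "r", "b": "r", "c": "a"})
```

The `ancestor` method realises ⊢* by walking parent links; ⊢+ is then `ancestor(u, v) and u != v`.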
Definition 3.5.2 H is the class of processes Y from A(< (GD, D) >) whose realisation (F, < g >) (with Y = < g > (X), X = F(X)) can be mapped onto some hierarchy tree H = (V, ⊢) as follows.
(i) With each node v ∈ V it is possible to associate an output process Y^v and a unique subset of processes X^v from the set X satisfying the following:
(a) Y^v = < g^v > (X^v) and X^v = F^v(X^v, U^v), where U^v is the set of output processes associated with the children of v, i.e., U^v := {Y^w | v ⊢ w}. In case v is a leaf node, U^v is empty.
(b) The set of equations X^v = F^v(X^v, U^v) can be partitioned into pv parts, pv ≥ 1, such that X^v = (X^v1, ..., X^vpv) with X^vi ∩ X^vj = ∅ for vi ≠ vj, and U^v = (U^v1, ..., U^vpv), where the U^vi need not be disjoint. For each i, 1 ≤ i ≤ pv, X^vi = (X1^vi, ..., Xmvi^vi) = F^vi(X^vi, U^vi), where, for each u with Y^u ∈ U^vi,
Z^u := X^u if u is a leaf node, and Z^u := X^u ∪ (∪{Z^w | u ⊢ w}) otherwise.
Each F^vi is a collection of equations of the form
Xj^vi = Fj^vi(X^vi, U^vi) = (σjvi1 < fjvi1 > (X^vi, U^vi) | ... | σjvinvij < fjvinvij > (X^vi, U^vi))Avij,0.
Here Avij ⊇ {σjvi1, ..., σjvinvij}. Also, for 1 ≤ k ≤ nvij, 1 ≤ j ≤ mvi, each < fjk^vi > belongs to < (G;D, D) >. However, < g^v > is from < (GD, D) >.
easily be concluded that, for any non-leaf node v, if each process of U^v is bounded, then Y^v is bounded.
Combining the above two facts, we can easily conclude that Y = Y^r is bounded. It should, however, be remembered that the post-process computation procedure involves the syntactic transformation DF.
2
Unlike the processes in B, the processes in H immediately allow a hierarchy of tasks, where, if u ⊢ v, then the task Y^v can be accomplished independently, but the task Y^u, along with its own dynamics, uses Y^v as a subtask.
The PCO has not been used explicitly in any process equation Xj^vi = Fj^vi(X^vi, U^vi). This is because the processes of U^vi are not necessarily from D̃, and we have so far been able to characterise the boundedness of A(< n(G∥D, D̃) >) and not that of A(< n(G∥D, D) >). However, if it can somehow be guaranteed that the processes of U^vi are from D̃, then, simultaneously for all 1 ≤ k ≤ nvij, 1 ≤ j ≤ mvi, < fjk^vi > may belong to < (G∥D, D̃) >.
Example 3.5.2 Consider the DFRP Y = < P1; P2 > (X), where X = (X1, X2, X3) is given by the process equations
X1 = (a < P1 ∥ P2 > (X) | b < P3 > (X)){a,b},0.
X2 = (c < P1 > (X) | d < P3; P1 > (X)){c,d},0.
X3 = (a < P1; (P2[{d}] ∥ P3) > (X) | e < SKIP{} > (X)){a,e},0.
Clearly X is a module having no sub-module in it. Moreover, the unrestricted use of PCO and SCO in the process equations prevents any hierarchical characterisation of Y, and Y ∉ H.
Example 3.5.3 On the other hand, consider the DFRP Y = < P1 ∥ P3 > (X), where X = (X1, ..., X12) is given by the following process equations.
X1 = (a < P1 > (X) | b < ((P5 ∥ P6); (P7 ∥ P8); P2)[[+{a,b}]]; P1 > (X)){a,b},0
X2 = (c < SKIP{} > (X)){c},0
X3 = (b < P3 > (X) | a < ((P5 ∥ P6); (P7 ∥ P8); P4)[[+{a,b}]]; P3 > (X)){a,b},0
X4 = (d < SKIP{} > (X)){d},0
X5 = (e < P6 > (X) | f < SKIP{} > (X)){e,f},0
X6 = (g < P5 > (X)){g},0
X7 = (h < (P10 ∥ P12); P7 > (X)){h},0
X8 = (i < P9 > (X) | j < SKIP{h} > (X)){i,j},0
X9 = (j < P8 > (X)){j},0
X10 = (k < P11 > (X)){k,l},0
X11 = (l < P10 > (X)){k,l},0
X12 = (l < P12 > (X) | m < SKIP{k,l} > (X)){l,m},0
[Figure: Hierarchy tree for example 3.5.3. Root: X^1 = (X1, X2 | X3, X4), with the children outputs Y^2, Y^3, Y^4 as inputs and output Y = Y^1 = X1 ∥ X3. Children: X^2 = (X5, X6) with Y^2 = X5 ∥ X6; X^3 = (X7 | X8, X9) with U^3 = (Y^4, ...) and Y^3 = X7 ∥ X8; X^4 = (X10, X11 | X12) with Y^4 = X10 ∥ X12.]
3.6 Conclusion
In this chapter we have studied the boundedness property of two classes of DFRPs, namely A(< n(G∥D, D) >) and A(< n(G;D, D) >). For both cases the underlying set of functions is shown to be infinite.
In the case of the former class, we have proposed a syntactic transformation and a rigorous post-process computation procedure which is powerful enough to capture the fact that each process of this class is bounded. The result is important as it guarantees a finite representation of these processes in spite of the infiniteness of the underlying function space. It has also been shown that if LCO[B] is taken out of G∥D, then the set of functions built on the resultant collection of operators is finite.
In the latter case also we have followed the same procedure of using a syntactic transformation along with syntactic substitution in post-process computation. Using a suitable procedure we have shown that boundedness is decidable for this subclass.
Several semantics-preserving syntactic transformations of process expressions have been proposed in the course of the analysis of boundedness.
Finally, it has been shown how both the PCO and the SCO can be used together in a specific way, giving rise to bounded hierarchical subclasses of DFRP.
In the case of DFRPs built with PCO and change operators, one major limitation of the present work is the applicability of the boundedness results to D̃ instead of D. As explained in fact 3.1.2, for processes in D, GCO[[B]] fails to distribute over PCO in general, as it may change some of the post-processes that are STOPA into SKIPA, for some A, though traces and alphabets always remain the same after distribution. D̃ is the class of processes that definitely satisfies the condition of distribution as given in lemma 3.1.1. For some process P in A(< n(G∥D, D) >), if it can somehow be guaranteed beforehand that the condition of lemma 3.1.1 is met, then CF can be used in the post-process computation of that process in a straightforward way. A second option is to consider a modification in the definition of PCO, or more specifically in the definition of the termination function. But this may pose difficulties in the construction of supervisor processes, which normally run concurrently with plant processes and decide on overall termination via blocking.
Also, in the previous section the subclasses B and H were presented only as examples, to show how both modelling flexibility and boundedness can be retained simultaneously by imposing suitable restrictions on the realisation of DFRPs.
There are important issues that remain to be investigated regarding these subclasses. For example, given the realisation of an arbitrary DFRP, no algorithm is yet known that can determine whether the DFRP can be mapped into one of these subclasses or not. However, it is rather artificial to design an arbitrary DFRP and then map it onto a hierarchy tree. In chapter 6 it is shown how the logical and physical structure of a system naturally leads to a hierarchical process model.
In general, hierarchical models give computational advantages over flat models for those issues which can be investigated independently, or at least semi-independently, at different levels of the hierarchy [32]. It will be quite useful to identify such issues in the case of DFRP and study their computational complexities.
B and H are not the only bounded classes using both PCO and SCO. Guided by applications, other such general subclasses should be identified, and efficient algorithms that use structural information particular to a subclass should be found in order to study the properties of these subclasses.
In the next chapter we shall present a nondeterministic extension of the DFRP model.
Chapter 4
Nondeterministic Extension of FRP
4.1 Introduction
Nondeterminism is a useful and important characteristic of DES models, arising in a system due to partial abstraction of the system dynamics. Sometimes a system has a range of possible behaviours, but the environment or the user of the system may not have the ability to influence, or even observe, the selection between the alternative courses of behaviour. Nondeterministic models arise either from a deliberate decision to ignore the factors that influence this selection, or from the only partial observation possible from the given vantage point of the user.
Nondeterministic models are useful for different reasons.
- To the implementor of a system, a nondeterministic model often serves as a specification. The implementor then has the freedom to choose from a range of deterministic implementations, each of which satisfies the higher-level nondeterministic specification. The specifier of the system, on the other hand, is not interested in the implementation details and is solely interested in verifying whether the implementation satisfies the nondeterministic specification.
- Sometimes the supervisor of a system is forced to perform control tasks with only a partial observation of the system dynamics, because of physical or operational constraints. Such is the situation for distributed processes controlled from a remote control room. In order to design such a supervisor, both the actual deterministic model of the system and the nondeterministic observation model are necessary.
DFRP, being a deterministic framework, does not have the ability to capture these effects of event concealment. In the present work, therefore, a nondeterministic extension is proposed.
The state-based models such as FSM, SC, TTM, etc., incorporate nondeterminism by making the transition relation nondeterministic. In PNs it may be achieved via a non-injective labelling of the transitions [12]. In CCS, nondeterminism arises out of hidden transitions. In CSP, nondeterminism is described in terms of the externally observable behaviour of a process to engage in an event or to refuse it. A nondeterministic CSP (NCSP) is therefore characterised merely in terms of its traces and refusals, without regard for its internal state, giving rise to the so-called failure-based model. In contrast, here we argue that the underlying deterministic dynamics of a process is often at least partially available, from, say, the designer of the process. The system, however, while in action, may only be partially observable to an observer and/or supervisor. Since the underlying deterministic model is at least available to some extent, one can use this knowledge, in addition to external observations, to create an observed nondeterministic model of the system for practical use. This model approximates the original deterministic system as closely as possible, since it utilises all available information, losing only the minimum amount that is unavoidable due to under-observation. This kind of reasoning leads us to a possible-future-based model of nondeterministic FRP (NFRP). Interestingly, it turns out that, in the case of a constant alphabet, the treatments of nondeterminism in NCSP and NFRP are equivalent. But in the case of a variable alphabet, given an extent of lack of observation, an NFRP in general results in less uncertainty than would have been obtained had a failure-type characterisation been used.
4.2
In this section we define the specific embedding space and the marked process space of
concern here.
Definition 4.2.1 (Basic Objects:) Let Σ be a fixed finite collection of events. Let MD be a set of deterministic marks and MN the family of nonempty subsets of MD; that is, MN = 2^MD \ {∅}, and it denotes the collection of nondeterministic marks. Let N be a fixed family of (set-valued) functions from Σ* to MN such that ν ∈ N implies ν/s ∈ N, where ν/s(t) := ν(s^t) for all t ∈ Σ*. WΣ,MN,N (or simply WN) denotes the suitable nondeterministic embedding set, defined as WN := C(Σ*) × N.
In WN we define two types of nondeterministic constant processes, namely CHAOS and HALTm for m ∈ MN.
Definition 4.2.2 (The Constant Nondeterministic Processes:)
CHAOS is the maximally nondeterministic process of WN, defined as
tr CHAOS := Σ*, and ∀s ∈ Σ*, CHAOS(s) := MN.
Also, for some m ∈ MN, HALTm is the process defined as
tr HALTm := {<>}, HALTm(<>) := m.
It is, however, not necessary that HALTm be defined for every m ∈ MN.
Definition 4.2.3 (Nondeterministic Projection Operators {↑N n | n ≥ 0}:) For any w ∈ WN, w ↑N 0 := CHAOS.
For n > 0, if tr w = {<>}, then w ↑N n := HALTw(<>). Otherwise
w ↑N n := (σ1 → (w/<σ1>) ↑N (n−1) | ... | σk → (w/<σk>) ↑N (n−1))w(<>), where
w = (σ1 → w/<σ1> | ... | σk → w/<σk>)w(<>).
The projection operator ↑N n gives an approximation of a nondeterministic process, and ↑N (n+1) is a better approximation than ↑N n. ↑N 0 gives the worst possible approximation of a process, and naturally it is defined as the maximally nondeterministic process CHAOS.
Definition 4.2.4 (Nondeterministic Partial Order ⊑N:) The partial order ⊑N over WN is defined as: w1 ⊑N w2 iff tr w2 ⊆ tr w1 and, ∀s ∈ tr w2, w2(s) ⊆ w1(s).
In other words, w1 ⊑N w2 implies that w1 is less deterministic than w2. This leads to the straightforward definition of the limit of a chain wi ⊑N wi+1 as follows:
lim{wi}i≥0 = w, where tr w := ∩i tr wi and, ∀s ∈ tr w, w(s) := ∩i wi(s).
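On finite approximations, the order ⊑N and the chain limit can be checked directly. The following is a small sketch with a process represented as a dict from traces (tuples) to sets of marks; this representation and the mark names are ours, not the thesis's.

```python
def less_deterministic(w1, w2):
    """w1 is below w2 in the nondeterministic order: w2 has no more
    traces than w1, and at each common trace no more possible marks."""
    return set(w2) <= set(w1) and all(w2[s] <= w1[s] for s in w2)

def chain_limit(chain):
    """Limit of a chain w0, w1, ...: intersect the trace sets, and at
    each surviving trace intersect the mark sets."""
    traces = set.intersection(*(set(w) for w in chain))
    return {s: set.intersection(*(w[s] for w in chain)) for s in traces}

# w2 refines w1: fewer traces, smaller mark set at the null trace
w1 = {(): {"m1", "m2"}, ("a",): {"m1"}}
w2 = {(): {"m1"}}
```

Here `less_deterministic(w1, w2)` holds while the converse fails, matching the reading that w1 is the less deterministic of the two.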
Finally, we have to specify the marking axioms that will define a suitable (marked) nondeterministic process space.
Definition 4.2.5 (Marking Axioms:) The marking axioms of our required process space are:
(a) MD = {(α, τ, f) ∈ (2^Σ × {0,1} × 2^Σ) | f ⊆ α and (τ = 1 ⇒ f = α)}. Also MN = 2^MD \ {∅}.
(b) For any process P in our required process space, ∀s ∈ tr P,
(∃(α, τ, f) ∈ P(s) such that τ = 0 and σ ∈ (α − f)) ⇔ s^<σ> ∈ tr P.
Let N be the subset of WN that satisfies the above marking axioms.
Fact 4.2.1 It can be verified that (a) (WΣ,MN,N, ⊑N, {↑N n}) satisfies the conditions of definition 2.1.6 and is thus a suitable nondeterministic embedding space, and (b) N satisfies the conditions of definition 2.1.7, and thus (N, ⊑N, {↑N n}) is a suitable nondeterministic marked process space.
It should also be noted that the second marking axiom poses some restriction on the choice of m ∈ MN for which HALTm is a valid element of N. Only for those m ∈ MN such that α = f for all triples in m is HALTm a valid process. This is because tr HALTm is defined to be a singleton, containing only the null trace <>.
The above definitions are extensions of the deterministic processes defined in [21]. There, any deterministic process P has been defined as a 3-tuple P = (tr P, αP, τP) satisfying certain axioms. The set tr P is the set of traces of P, αP : tr P → 2^Σ is the alphabet function, and τP : tr P → {0, 1} is the termination function, where τP(s) = 1 represents successful termination of P after generation of the event sequence s in P. The axioms satisfied by any deterministic process P are: (i) <> ∈ tr P, and s^t ∈ tr P ⇒ s ∈ tr P; (ii) s^<σ> ∈ tr P ⇒ σ ∈ αP(s); (iii) (τP(s) = 1 and s^t ∈ tr P) ⇒ t = <>. D denotes the set of deterministic processes. Here a mark after a string s consists of a set of triples of the form (α, τ, f), where α and τ determine the alphabet and termination after s in the usual sense of deterministic processes, and f is the set of events that will be blocked. Thus each (α, τ, f) is a possible deterministic future. A nondeterministic mark is represented as a set of deterministic marks, one of which gets chosen in a nondeterministic fashion. If a particular future is chosen, then the corresponding α is the instantaneous alphabet of the process, f is the collection of events that can be blocked by the process in that future, and τ = 1 implies that the process terminates successfully in that future. As mentioned earlier, nondeterminism essentially arises due to low resolution of modelling or observation. Thus the process may engage in unmodelled or unobserved actions between modelled or observed events and arrive at new marks (futures). Since we are interested in describing the modelled behaviour of the system, after every modelled event we collect the set of deterministic marks that may be arrived at via possible internal (hidden) actions, to make up the nondeterministic mark or future. It is to be noted that in the deterministic case (that is, in DFRP), since the blocking function f is computable from the trace and the alphabet function, it is not included explicitly. It is, however, necessary to include it in the mark of a nondeterministic process, since, depending upon the mark chosen (by some internal unmodelled action), an event may either take place or be blocked, and it is not possible to compute the chosen mark from the trace. We feel that this characterisation of nondeterminism is appropriate, since nondeterminism essentially arises due to hiding or under-modelling of events of a deterministic dynamics. It should also be noted that the above definition cannot be treated as a special case of the Canonical Nondeterministic Embedding Space (W) defined in [22], mainly for two reasons. Firstly, divergence has not been treated in this work. In this sense, the present work is similar to the original Nondeterministic Communicating Sequential Processes (Without Divergence) (NCSP(WOD)) of [68]. Secondly, the definition of W in [22] basically generalises the refusal-based postulates of CSP as well as of NCSP(WOD). Instead of refusals we have adopted a framework based on possible futures. A detailed comparison between NCSP(WOD) and the nondeterministic process space N presented here will be given in a later section.
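The three axioms of a deterministic process quoted above can be verified mechanically on a finite candidate. The sketch below uses our own representation (traces as tuples, the alphabet and termination functions as dicts); it is an illustration, not the thesis's formalism.

```python
def is_deterministic_process(traces, alpha, tau):
    """Check axioms (i)-(iii) on a finite candidate deterministic process.
    traces: set of tuples; alpha: trace -> set of events; tau: trace -> 0/1."""
    if () not in traces:                        # (i) the null trace is present
        return False
    for s in traces:
        for k in range(len(s)):                 # (i) prefix closure
            if s[:k] not in traces:
                return False
        if s and s[-1] not in alpha[s[:-1]]:    # (ii) events lie in the alphabet
            return False
        if tau[s] == 1:                         # (iii) nothing follows termination
            if any(t != s and t[:len(s)] == s for t in traces):
                return False
    return True

# a two-event run that terminates successfully after <a, b>
traces = {(), ("a",), ("a", "b")}
alpha = {(): {"a"}, ("a",): {"b"}, ("a", "b"): set()}
tau = {(): 0, ("a",): 0, ("a", "b"): 1}
```

Dropping the null trace, or marking an intermediate string as terminated while extensions remain, makes the check fail.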
Remark 4.2.1 (Comparison with Deterministic Processes:) Any P ∈ D can be expressed as a special nondeterministic process F(P) ∈ N by the transformation rule F, defined as follows.
tr F(P) := tr P, and ∀s ∈ tr F(P),
F(P)(s) := {(αP(s), τP(s), αP(s) − {σ | s^<σ> ∈ tr P})}.
Thus, ∀s ∈ tr F(P), F(P)(s) is a singleton (contains a single future), since P is actually deterministic.
The partial order ⊑D on deterministic processes is defined in [21] as P1 ⊑D P2 if tr P1 ⊆ tr P2 and, ∀s ∈ tr P1, αP1(s) = αP2(s) and τP1(s) = τP2(s). However, P1 ⊑D P2 does not imply that F(P2) ⊑N F(P1), even though tr F(P1) ⊆ tr F(P2). This is because ⊑D relates single futures of two deterministic processes (after a common string), which have the same α and τ components and may differ only in the set of events they block. Its nondeterministic counterpart ⊑N, on the other hand, essentially tests whether (after a common string) all the possible futures of one process are included in those of the other.
The projection operator on deterministic processes (denoted ↑D n and defined in [21]) truncates a deterministic process up to string length n. By comparison we find that ↑D n on a deterministic process P and its nondeterministic counterpart ↑N n on F(P) behave identically up to string length n − 1. We also have:
tr(P ↑D n) := {s ∈ tr P | #s ≤ n} = {s ∈ tr(F(P) ↑N n) | #s ≤ n}.
∀s ∈ tr P with #s ≤ n − 1, F(P ↑D n)(s) = (F(P) ↑N n)(s).
In this section we have completely defined the nondeterministic process space. It has
also been shown how deterministic processes can be treated as a special case of nondeterministic processes.
4.3 Process Operators
In this section we present a variety of process operators which will be used later to give a
recursive characterisation of N .
The Event Concealment Operator generates a (more) nondeterministic process from a
(non-)deterministic process through concealment of events.
Definition 4.3.1 (The Event Concealment Operator (ECO):) Given P ∈ N and C ⊆ Σ, P\C is defined in terms of a hiding operator ↓C : Σ* → (Σ − C)*, defined as below:
<>↓C := <>, and (s^<σ>)↓C := s↓C if σ ∈ C, or s↓C ^ <σ> if σ ∉ C.
Finally, tr P\C := {s↓C | s ∈ tr P} and
(P\C)(t) := ∪{P(s) | s ∈ tr P, s↓C = t}.
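For finite trace sets, the hiding operator and the induced traces and marks of P\C can be computed directly. A sketch follows; the dict-based representation of a process and the mark names are ours.

```python
def hide(s, C):
    """The string projection s down C: erase every event of s that lies in C."""
    return tuple(e for e in s if e not in C)

def hidden_traces(traces, C):
    """tr(P\\C): the set of projections of the traces of P."""
    return {hide(s, C) for s in traces}

def hidden_mark(process, C, t):
    """(P\\C)(t): the union of marks over all traces projecting onto t,
    which is what makes the concealed process (more) nondeterministic."""
    marks = set()
    for s, m in process.items():
        if hide(s, C) == t:
            marks |= m
    return marks

# after hiding "h", the null trace collects the marks of both () and ("h",)
P = {(): {"m0"}, ("h",): {"m1"}, ("h", "a"): {"m2"}}
```

The union in `hidden_mark` shows concretely how concealment merges several deterministic futures into one nondeterministic mark.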
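On traces alone, the hiding map ↓C and the trace set of P\C admit a direct sketch (the event names are illustrative):

```python
def hide(s, C):
    # s "down" C: delete every occurrence of an event of C from the string s
    return tuple(e for e in s if e not in C)

def conceal_traces(traces, C):
    # tr P\C := { s down C | s in tr P }
    return {hide(s, C) for s in traces}

trP = {(), ("h",), ("h", "a"), ("h", "a", "h")}
hidden = conceal_traces(trP, {"h"})
```

Several distinct strings of P collapse onto the same hidden string, which is exactly why the mark of P\C at t must gather the futures of every s with s↓C = t — the source of the added nondeterminism.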
Definition 4.3.5 (The Asynchronous Concurrency Operator (ACO):) For two processes P1, P2 ∈ N, the ACO P1 ∥_A P2 is defined in terms of an asynchronous interleaving operator I_A on strings from the two processes P1 and P2, given below:
<> ∈ I_A(<>, <>).
s ∈ I_A(t1, t2) ⟹ s ∈ I_A(t2, t1).
If s ∈ I_A(t1, t2) and ti ∈ tr Pi, then s^<σ> ∈ I_A(ti^<σ>, tj) if ti^<σ> ∈ tr Pi, for some i, j = 1, 2, i ≠ j.
Following this, P1 ∥_A P2 is defined as:
tr (P1 ∥_A P2) := {s | s ∈ I_A(t1, t2); ti ∈ tr Pi}.
(P1 ∥_A P2)(s) := ⋯
The ACO is valid, continuous and ndes. It is similar to the shuffle operator defined on two FSMs with disjoint alphabets. Here, however, the alphabets of the two processes need not be disjoint: events, even with identical names, take place sequentially and without synchronization. Also, ECO distributes over ACO, i.e., (P1 ∥_A P2)\C = (P1\C) ∥_A (P2\C).
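The interleaving set I_A can be sketched directly from the clauses above; note that identical event names are shuffled, never synchronised (strings are tuples of event names in this sketch):

```python
def interleavings(t1, t2):
    # I_A(t1, t2): all shuffles of the two strings; no synchronization,
    # even for events with identical names
    if not t1:
        return {tuple(t2)}
    if not t2:
        return {tuple(t1)}
    return ({(t1[0],) + s for s in interleavings(t1[1:], t2)}
            | {(t2[0],) + s for s in interleavings(t1, t2[1:])})

shuffles = interleavings(("a", "b"), ("a", "c"))
```

Both strings contain an "a", yet every shuffle contains both copies of it, in line with the ACO's "no synchronization" reading.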
Example 4.3.1 Below are a few examples of CO, NCO, SCO, ACO, and ECO.
P0 = (a → HALT_{({},1,{})} | b → HALT_{({},0,{})})_{({a,b},0,{})}.
P1 = P0 ⊓ HALT_{({c},0,{c})}.
P2 = (a → HALT_{({b},0,{b})} | c → HALT_{({},0,{})})_{({a,c},0,{})}.
P3 = (d → HALT_{({},1,{})})_{({d},0,{})} ⊓ HALT_{({},1,{})}.
P3 ; P1 = P1 ⊓ ((d → P1)_{({d},0,{})}).
Let P = P1 ∥_A P2. Then P(<>) is determined by P1(<>) and P2(<>). Also P/<a> = (P1/<a> ∥_A P2) ⊓ (P2/<a> ∥_A P1), P/<b> = (P1/<b> ∥_A P2) and P/<c> = (P2/<c> ∥_A P1).
Finally, P1\{a} = (b → HALT_{({},0,{})})_{({b},0,{})} ⊓ HALT_{({c},0,{c}),({},1,{})}.
The Parallel Composition Operator (PCO) captures the concurrent behaviour of two nondeterministic processes. This is a natural extension of its deterministic counterpart defined in [21]. We give a formal definition below.
Definition 4.3.6 (The Parallel Composition Operator (PCO):) To begin with, we first define an operator I_s, that interleaves two given strings with synchronisation, recursively as follows:
a) <> ∈ I_s(<>, <>)
b) s ∈ I_s(t1, t2) ⟹ s ∈ I_s(t2, t1)
c) If s ∈ I_s(t1, t2), t1 ∈ tr P1, t2 ∈ tr P2, then ⋯
A1(s, t1^<σ>, t2^<σ>) := {(α1 ∪ α2, τ, f1 ∪ f2) | ((αk, τk, fk) ∈ Pk(tk^<σ>), k = 1, 2) and τ = 1 ⟺ ((τ1 = τ2 = 1) ∨ (τi = 1 ∧ αj ⊆ αi; i, j = 1, 2; i ≠ j))}.
A2 := ⋯
P/<b> = ((P1/<b>) ∥ P2); P(<b>) = {({a,c}, 0, {})}.
P/<c> = (P1^{−{c}} ∥ (P2/<c>)); P(<c>) = {({a,b}, 0, {})}.
P/<b,a> = ((P1/<b>) ∥ (P2/<a>)); P(<b,a>) = {({b}, 0, {b})}.
P/<b,c> = ((P1/<b>) ∥ (P2/<c>)); P(<b,c>) = {({}, 0, {})}.
P/<c,a> = ((P1/<a>)^{−{c}} ∥ (P2/<c>)); P(<c,a>) = {({}, 1, {})}.
P/<c,b> = ((P1/<b>)^{−{c}} ∥ (P2/<c>)); P(<c,b>) = {({}, 0, {})}.
The operator P^{−{c}} (an instance of the LDO) is defined later.
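At the level of traces, the synchronising interleaving underlying the PCO can be sketched as follows: events in the common alphabet occur once and must be taken jointly, while private events shuffle freely (the alphabets and strings below are illustrative assumptions, not the example above):

```python
def sync_interleavings(t1, t2, A1, A2):
    # strings projecting to t1 on alphabet A1 and to t2 on A2;
    # common events (A1 & A2) appear once, taken by both strings together
    if not t1 and not t2:
        return {()}
    out = set()
    common = A1 & A2
    if t1 and t1[0] not in common:
        out |= {(t1[0],) + s for s in sync_interleavings(t1[1:], t2, A1, A2)}
    if t2 and t2[0] not in common:
        out |= {(t2[0],) + s for s in sync_interleavings(t1, t2[1:], A1, A2)}
    if t1 and t2 and t1[0] == t2[0] and t1[0] in common:
        out |= {(t1[0],) + s for s in sync_interleavings(t1[1:], t2[1:], A1, A2)}
    return out

runs = sync_interleavings(("a", "x"), ("b", "x"), {"a", "x"}, {"b", "x"})
```

Here "a" and "b" are private and may occur in either order, but the shared "x" occurs exactly once, after both components are ready for it.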
Remark 4.3.1 The following example shows that if we allow, as in [21], that for tr P = {<>}, P ↓_N n = CHAOS, ∀n, then the parallel composition fails to be ndes. Let
P4 = (a → HALT_{({},1,{})})_{({a},0,{})} and
P5 = (a → (b → HALT_{({},1,{})})_{({b},0,{})})_{({a},0,{})}.
So P4 ∥ P5 = (a → (b → HALT_{({},1,{})})_{({b},0,{})})_{({a},0,{})}. According to [22], we have
P4 ↓_N 2 = (a → CHAOS)_{({a},0,{})}.
P5 ↓_N 2 = (a → (b → CHAOS)_{({b},0,{})})_{({a},0,{})}.
(P4 ∥ P5) ↓_N 2 = (a → (b → CHAOS)_{({b},0,{})})_{({a},0,{})}.
(P4 ↓_N 2 ∥ P5 ↓_N 2) = (a → (CHAOS ∥ (b → CHAOS)_{({b},0,{})}))_{({a},0,{})}.
In (P4 ↓_N 2 ∥ P5 ↓_N 2), and hence in (P4 ↓_N 2 ∥ P5 ↓_N 2) ↓_N 2, the second event can be any σ ∈ Σ. But this is not the case in (P4 ∥ P5) ↓_N 2, where the only possible second event is b. Clearly, therefore, PCO is not ndes, as (P4 ↓_N 2 ∥ P5 ↓_N 2) ↓_N 2 ≠ (P4 ∥ P5) ↓_N 2 under this definition. However, if we apply our modified definition 2.1.6, then P4 ↓_N 2 = P4 and PCO is ndes.
Definition 4.3.7 (The Local Deletion Operator (LDO):) Given P ∈ N and σ ∈ Σ such that ∃(α, τ, f) ∈ P(<>) with σ ∉ α, the LDO P^{−{σ}} is defined as follows:
<> ∈ tr P^{−{σ}}.
P^{−{σ}}(<>) := {(α, τ, f) ∈ P(<>) | σ ∉ α}.
<σ′> ∈ tr P^{−{σ}} ⟺ ∃(α, 0, f) ∈ P^{−{σ}}(<>) | σ′ ∈ (α − f).
P^{−{σ}}(<σ′>) = P(<σ′>).
∀(s ∈ tr P^{−{σ}} | #s ≥ 1), (s^<σ′> ∈ tr P^{−{σ}} ⟺ s^<σ′> ∈ tr P).
P^{−{σ}}(s^<σ′>) = P(s^<σ′>).
If ∀(α, τ, f) ∈ P(<>), σ ∉ α, then P^{−{σ}} = P.
If ∀(α, τ, f) ∈ P(<>), σ ∈ α, then P^{−{σ}} is undefined.
The LDO deletes from the initial mark of the process those deterministic futures which contain σ in their alphabet components. In the resultant process, at the initial state, σ can neither take place nor be blocked. This operator is valid, continuous and ndes.
[Figure 4.3.3: guideway layout with junctions J1–J7, sections S1–S6, and stations A, B, C, D; some junctions are marked (*) and (!).]
in the directions shown in Fig. 4.3.3. Vehicle V1 (resp. V2) loads material from A (resp. B) if it is empty and enters the common track. On the common track, after reaching J4 (resp. J5), it may either continue its journey directly and enter S3 (resp. S5), or it may take a left (resp. right) turn, arrive at C (resp. D), unload material, come back to J3 (resp. J5) and then continue its journey forward. The movement of vehicle Vi from section Sj to Sj+1, j = 1, …, 5, is represented by the event σ^i_{j,j+1}. The rest of the event symbols are self-explanatory. The overall dynamics is expressed as the following deterministic process Plant. However, since each deterministic process is a special case of a nondeterministic process, here we express Plant as a nondeterministic FRP (NFRP), i.e., as an FRP built over N.
Plant = V^1_{A1} ∥ V^2_{B1}.
V^1_{A1} = (check^1 → V^1_{A2})_{({check^1},0,{})}.
V^1_{A2} = (empty^1 → V^1_{A3} | non_empty^1 → V^1_{A4})_{({empty^1,non_empty^1},0,{})}.
V^1_{A3} = (load^1 → V^1_{A4})_{({load^1},0,{})}.
V^1_{A4} = (start^1 → V^1_{A5})_{({start^1},0,{})}.
V^1_{A5} = (arrive^1_{J1} → V^1_{J1})_{({arrive^1_{J1}},0,{})}.
V^1_{J1} = (enterS1^1 → V^1_{S1})_{({enterS1^1},0,{})}.
V^1_{S1} = (σ^1_{1,2} → V^1_{S2})_{({σ^1_{1,2}},0,{})}.
V^1_{S2} = (σ^1_{2,3} → V^1_{S3} | turn^1 → W^1_1 ; V^1_{S2}{}^{[−{turn^1}]})_{({σ^1_{2,3},turn^1},0,{})}.
V^1_{S3} = (σ^1_{3,4} → V^1_{S4})_{({σ^1_{3,4}},0,{})}.
V^1_{S4} = (σ^1_{4,5} → V^1_{S5})_{({σ^1_{4,5}},0,{})}.
V^1_{S5} = (σ^1_{5,6} → V^1_{S6})_{({σ^1_{5,6}},0,{})}.
V^1_{S6} = (arrive^1_7 → V^1_{J7})_{({arrive^1_7},0,{})}.
V^1_{J7} = (return^1_A → V^1_{A1})_{({return^1_A},0,{})}.
W^1_1 = (arrive^1_C → W^1_2)_{({arrive^1_C},0,{})}.
W^1_2 = (unload^1_C → W^1_3)_{({unload^1_C},0,{})}.
W^1_3 = (return^1_{J3} → HALT_{({},1,{})})_{({return^1_{J3}},0,{})}.
The process V^2_{B1} can be constructed in an identical way, with suitable changes in the process and event symbols (changing the superscript 1 to 2, the subscript A to B and C to D, and return^1_{J3} to return^2_{J5}). However, it should be noted that the event turn^2 (followed by the process W^2_1 ; V^2_{S4}{}^{[−{turn^2}]}) now takes place as a possible choice in V^2_{S4}, instead of V^2_{S2}.
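The cyclic behaviour of V1 can be sketched as an event generator; the spellings of the event names below are placeholders for the typeset symbols of the equations (only the ordering is taken from them):

```python
def v1_cycle(turn=False):
    # one loading-to-return cycle of vehicle V1 (event names are placeholders)
    yield from ["check1", "empty1", "load1", "start1",
                "arrive_J1", "enterS1", "sigma_12"]
    if turn:
        # detour to C; afterwards the run resumes with the turn choice
        # deleted, as in W1 ; V_S2^[-{turn1}]
        yield from ["turn1", "arrive_C", "unload_C", "return_J3"]
    yield from ["sigma_23", "sigma_34", "sigma_45", "sigma_56",
                "arrive_7", "return_A"]

with_turn = list(v1_cycle(turn=True))
```

The guard on `turn` mirrors the local change operator: once the detour has been taken, the remainder of the run no longer offers turn1.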
In the above plant, for those junctions which are not equipped with detectors, junction-crossing events are unobservable. The observable behaviour of the plant will become important in the context of controller synthesis under partial observation. For example, one may try to construct a supervisor, based on the event information supplied by the detectors, which controls the stop lights in such a way that the two vehicles never ply in the same section of the guideway simultaneously. For the given plant, the observed behaviour, obtained by hiding the unobservable events, is naturally a nondeterministic process.
The GCO is valid, continuous and ndes. Both the LCO and GCO are natural extensions of similar operators defined for FRP.
For convenience, from now on we will use two special forms of the LCO and GCO: LCO[−B], written P^{[−B]} (meaning P^{[−B+∅]}), and LCO[+C], written P^{[+C]} (meaning P^{[∅+C]}), and similarly GCO[[−B]] and GCO[[+C]].
Example 4.3.4 The following examples show the use of LNO, LDO, LCO and GCO.
P1^{−{a}} = HALT_{({c},0,{c})}.
(P3^{[τ=0]}) ; P1 = (d → P1)_{({d},0,{})}.
P0^{[−{a}+{d}]} = (b → HALT_{({},0,{})})_{({b,d},0,{d})}.
P3^{[[+{b}]]} = (d → HALT_{({b},1,{b})})_{({b,d},0,{b})} ⊓ HALT_{({b},1,{b})}.
Distribution Laws: The following laws describe the distribution of the unary operators over some binary ones.
(i) NCO:
(a) (P1 ⊓ P2)^{[[−B+C]]} = P1^{[[−B+C]]} ⊓ P2^{[[−B+C]]}. (b) (P1 ⊓ P2)^{[−B+C]} = P1^{[−B+C]} ⊓ P2^{[−B+C]}.
(c) (P1 ⊓ P2)^{[τ=0]} = P1^{[τ=0]} ⊓ P2^{[τ=0]} if Pi^{[τ=0]} is defined for both i = 1, 2. If Pi^{[τ=0]} is not defined, then the r.h.s. will be Pj^{[τ=0]}, where j = 1, 2, j ≠ i.
(d) (P1 ⊓ P2)^{−{σ}} = P1^{−{σ}} ⊓ P2^{−{σ}} if Pi^{−{σ}} is defined for both i = 1, 2. If Pi^{−{σ}} is not defined, then the r.h.s. will be Pi ⊓ Pj^{−{σ}}, where j = 1, 2, j ≠ i.
(ii) SCO:
(a) (P1 ; P2)^{[[−B+C]]} = P1^{[[−B+C]]} ; P2^{[[−B+C]]}. (b) (P1 ; P2)^{[−B+C]} = (P1^{[τ=0][−B+C]} ; P2 if P1^{[τ=0]} is defined) ⊓ (P2^{[−B+C]} if ∃(α, 1, f) ∈ P1(<>)).
(c) (P1 ; P2)^{[τ=0]} = (P1^{[τ=0]} ; P2 if P1^{[τ=0]} is defined) ⊓ (P2^{[τ=0]} if ∃(α, 1, f) ∈ P1(<>)).
(d) (P1 ; P2)^{−{σ}} = (P1^{[τ=0]−{σ}} ; P2 if P1^{[τ=0]−{σ}} is defined) ⊓ (P2^{−{σ}} if ∃(α, 1, f) ∈ P1(<>)).
(iii) ACO and PCO: Over ACO, GCO[[−B+C]] distributes. Over PCO, only GCO[[−B]] distributes, and only when every future of P1 and P2 has a zero termination. Other local operators do not in general distribute over these two operators.
4.4
Recursive Characterisation
In this section, we establish the main property of mutual recursiveness, necessary for a
recursive characterisation of nondeterministic processes using the operators defined in the
previous section.
4.5
Assessment
Among the advantages we have the following. (i) This is the first attempt at introducing nondeterminism in a variable-alphabet setting, which itself leads to modelling advantages over the constant-alphabet case of CSP. (ii) It provides a much-needed platform over which problems of observation [30], control under partial observation [54], etc., can be formulated. (iii) Because of the mutual recursiveness of the operators, the model can be simulated on a computer. (iv) Finally, it generates a rich language. For example, as shown below, any context-free language (CFL) can be modelled in this framework.
Modelling CFL with NFRP
Given any context-free language L without the null string <>, we can construct an NFRP PL ∈ A(<Φ^n(G_N, N)>) for some finite n such that tr PL = L and ∀s ∈ L, PL/s ⊒ HALT_{({},1,{})}, as follows.
By the Chomsky Normal Form, any such L can be generated by a grammar GL whose production rules are of the form A → BC or A → a, where A, B, C are nonterminals and a ∈ Σ. We formulate PL to emulate the operation of GL. For every nonterminal A, a process PA is created.
If A → BC ∈ GL then PA = PB ; PC.
If A → a ∈ GL then PA = (a → HALT_{({},1,{})})_{({a},0,{})}.
If A → BC | a ∈ GL then PA = (PB ; PC) ⊓ ((a → HALT_{({},1,{})})_{({a},0,{})}).
If, in GL, the set of nonterminals is V = {S, A1, …, An} and the set of terminals is T = {a1, …, am}, then we get a vector recursive equation X = F(X, U), where X = (PS, PA1, …, PAn) and U = ((ai → HALT_{({},1,{})})_{({ai},0,{})}, 1 ≤ i ≤ m). By the structure of a valid CFG GL, there cannot be an unguarded loop of recursive definitions among the nonterminals of the grammar. Hence F must satisfy the conditions of Theorem 2.2.1, and there is a unique solution process that mimics GL. Y = PS will generate the required language.
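The construction can be checked on a small grammar by iterating the vector equation to a bounded fixpoint. In this sketch, ";" becomes string concatenation and the choice ⊓ becomes set union, and only completed strings (not marks) are computed:

```python
def bounded_language(rules, terminals, max_len):
    # iterate X = F(X, U) to a fixpoint, keeping strings of length <= max_len
    L = {A: set() for A in rules}
    changed = True
    while changed:
        changed = False
        for A, bodies in rules.items():
            for body in bodies:
                combos = {""}
                for sym in body:
                    part = {sym} if sym in terminals else L[sym]
                    combos = {x + y for x in combos for y in part
                              if len(x + y) <= max_len}
                new = combos - L[A]
                if new:
                    L[A] |= new
                    changed = True
    return L

# CNF grammar for {a^n b^n | n >= 1}: S -> AB | AY, Y -> SB, A -> a, B -> b
rules = {"S": [("A", "B"), ("A", "Y")],
         "Y": [("S", "B")],
         "A": [("a",)],
         "B": [("b",)]}
L = bounded_language(rules, {"a", "b"}, 6)
```

The guardedness of the grammar is what makes each iteration produce strictly longer strings, so the bounded iteration terminates.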
4.5.1
Although the DFRP has its origin in DCSP, an important extension introduced there is the concept of a variable alphabet. The nondeterministic process framework proposed here, naturally, is an extension of its corresponding CSP counterpart, namely Nondeterministic CSP (NCSP).
In NCSP, nondeterminism has been treated in terms of the set of refusals of a process. A refusal of a process is a collection of events which, when offered by the environment to the process, may be refused by the latter. In NCSP, nondeterminism arises from the fact that, at a time, a process may have multiple refusal possibilities. The set of all possible refusals (which is actually a family of sets of events) captures the immediate nondeterministic behaviour of the process. A subset of a refusal is also considered a separate refusal, and these refusals are used to represent the dynamics of a process. It is as if the description of the process had been arrived at by conducting experiments (like offering a collection of events to the process) and observing its external behaviour.
The NFRP framework, on the other hand, takes the viewpoint of internal behaviour. It is as if the deterministic process model were known and, by some concealment operation, a nondeterministic model had been arrived at, in which a collection of possible deterministic futures, characterised by their alphabet, termination and blocking behaviour, have been clubbed together to represent a nondeterministic future.
Thus, given any nondeterministic mark of any NFRP, one can construct the immediate one-length strings of events that are possible in the process at that stage. This is not possible in NCSP, where information about both the refusals and the trace is necessary to determine the one-length strings. This is because, by definition, any subset of a refusal is also a refusal, and hence an individual refusal may not qualify as a possible deterministic future.
In order to gain an understanding of the relation between NCSP and NFRP, we make the following assumptions: (a) a constant alphabet in the case of NFRP; (b) both types of processes are nonterminating; (c) the divergence component of NCSP processes, which has been defined to capture the behaviour of processes in case of infinite occurrence of hidden events, is taken to be empty. This results in the class of NCSP without divergence (NCSP(WOD)). In other words, we restrict our discussion to the class of nonterminating NCSP(WOD) processes on one side and nonterminating, constant-alphabet nondeterministic processes of N on the other. For detailed definitions of NCSP(WOD) and CSP we refer to [68] and [17].
Formally, a nonterminating NCSP(WOD) P is a CSP P = (α(P), F(P), D(P)) such
process and vice versa, while preserving behaviour in some sense. For this reason, and also to compare similar operators defined in the two formalisms, we consider some transformations.
The following transformation rule G converts an NCSP(WOD) P into a nondeterministic process G(P) ∈ L. G preserves traces, and G(P) contains the maximum number of deterministic futures in each nondeterministic mark, each of which corresponds to a suitable refusal of P.
tr G(P) := tr P.
G(P)(s) := {(α, 0, f) | α = α(P), (s, f) ∈ F(P), Imp(P/s) ⊆ f}.
The idea behind the transformation rule G is that only a refusal of a process which contains all the impossible events will correspond to an individual deterministic future of a nondeterministic mark, in which it contributes the set of blocked events.
Actually, several such transformations can be defined, depending on their properties. For example, the following transformation rule G′ is similar to G in that it is trace-preserving. But instead of the maximum number of refusals, it retains only the maximal refusals, each of which naturally contains all the impossible events.
tr G′(P) := tr P.
G′(P)(s) := {(α, 0, f) | α = α(P), (s, f) ∈ F(P), ∀σ ∈ α(P) − f, (s, f ∪ {σ}) ∉ F(P)}.
On the other hand, the transformation rule G⁻¹ converts a process PF ∈ L into an NCSP(WOD) G⁻¹(PF) as follows:
α(G⁻¹(PF)) := α_{PF}.
F(G⁻¹(PF)) := {(s, X) | s ∈ tr PF, X ⊆ f, for some (α_{PF}, 0, f) ∈ PF(s)}.
D(G⁻¹(PF)) := ∅.
G⁻¹ actually creates the refusal set by taking the subset closure of the blocking components of all the futures present in the mark. It is also trace-preserving.
For any NCSP(WOD) P we have G⁻¹(G(P)) = P. But for any PF ∈ L, in general G(G⁻¹(PF)) ≠ PF. Note that the same holds for G′ and G⁻¹, i.e., G⁻¹(G′(P)) = P but G′(G⁻¹(PF)) ≠ PF. In both cases G⁻¹ fails to be a right inverse because, in general, there exist non-unique NFRPs of L having the same trace but different marking functions, which upon application of G⁻¹ give rise to the identical NCSP(WOD) G⁻¹(PF). G or G′ maps back to one specific NFRP among these non-unique processes, which may not be the original one.
The transformations preserve the sense of a number of similar operators defined in the two frameworks. We assign suffix C and suffix F respectively to the similar operators of NCSP(WOD) and L (or N). Then it can easily be checked that:
(a) P1 ⊓_C P2 = G⁻¹(G(P1) ⊓_F G(P2));
(b) P1 ∥_C P2 = G⁻¹(G(P1) ∥_F G(P2));
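At a single string, the pair (G, G⁻¹) can be sketched with refusal families represented as sets of frozensets; the alphabet and the two marks below are assumptions chosen to exhibit the one-sided invertibility:

```python
from itertools import combinations

def subset_closure(sets):
    # refusal families are closed under subsets
    closed = set()
    for f in sets:
        items = list(f)
        for r in range(len(items) + 1):
            closed.update(frozenset(c) for c in combinations(items, r))
    return closed

def G_inv(mark):
    # refusals of G^-1(PF) at a string: subset closure of the blocking sets
    return subset_closure(mark)

def G(refusals, impossible):
    # futures of G(P) at a string: refusals containing all impossible events
    return {f for f in refusals if impossible <= f}

imp = frozenset({"c"})                  # c cannot occur next
M1 = {frozenset({"c"})}                 # two different NFRP marks ...
M2 = {frozenset(), frozenset({"c"})}    # ... with the same traces
```

Both marks collapse to the same refusal family under G⁻¹, so applying G afterwards can only recover one of them, while a genuine refusal family is recovered exactly.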
Also, while designing, similar restrictions on PCO and SCO [17] should be taken into account. The following NCSP YC emerges as a result.
P1 = (a → (P1 ⊓ SKIP_{a,b}) ; P2)
P2 = (b → SKIP_{a,b})
P3 = (b → (P3 ⊓ SKIP_{b,c}) ; P4)
P4 = (c → SKIP_{b,c})
of the different futures are identical at the first instant. In the absence of this, the ACO of NFRP and its counterpart in NCSP do not carry completely identical meanings.
4.5.2
Relations between CCS and CSP have been discussed at length in both [17] and [18]. Here we briefly mention a few features in which NFRP differs from CCS.
CCS was designed to provide a mathematical framework for studying different kinds of equivalence among process algebra models of concurrent nondeterministic systems. Here nondeterminism is introduced by a special symbol τ (different from the termination component of the deterministic futures of our NFRP model), which corresponds to the occurrence of a hidden event. Unlike CSP or NFRP, in CCS these hidden events are not ignored, and they can be used even for guarding recursive equations involving process expressions. Different kinds of process equivalence are defined, which differ from each other in their treatment of the hidden events. These include strong bisimulation, weak bisimulation, etc. In NFRP, however, the only equivalence between processes defined is that of equality (P1 ⊑_N P2 and vice versa). In fact, trace equivalence (an identical set of observed strings) can also be defined in both models. Trace equivalence cannot distinguish between models having identical traces but different deadlocking properties. Failure equivalence can be compared between the two models only under the restriction of a constant alphabet. It is, however, difficult, if not impossible, to define weak bisimulation in NCSP and in NFRP, as these frameworks are not distinguishing enough; in fact, they were not meant to be. Finally, as for NCSP, the PCO of NFRP and the corresponding composition operator of CCS are not comparable. The latter does not have any blocking property; on the other hand, it includes aspects of hiding, nondeterminism, and interleaving as well as synchronization. In NFRP, we have separate operators for all of these.
This work has presented a formalism for capturing nondeterministic behaviours of DES in general and of the FRP model in particular. Nondeterminism is often unavoidable in hierarchical supervision, where low-level deterministic descriptions may become nondeterministic due to deliberate undermodelling or lack of observation. Together with the DFRP model, the present work forms a uniform process-algebraic approach for modelling untimed logical discrete event systems. However, problems related to control and observation remain to be worked out.
In the next chapter we equip the DFRP formalism with concepts of states and variables.
Chapter 5
State-based Extension of FRP
5.1
Introduction
One important limitation of the DFRP model is that it does not support any feature to store and update numeric and non-numeric information about the state of the different physical and logical variables associated with a system. In man-made real-world systems, the concept of states and associated variables emerges quite naturally. One can also evaluate system performance through simulation if such variables are available. Many logical decisions are taken based on the state of these variables. Naturally, DFRP cannot model such logical decisions directly, and as a result it poses considerable difficulty to the modeller of a system. The motivation behind this work emerged from the recognition that both the events and the collection of system variables are independent real entities and therefore need to be modelled while modelling the logical behaviour of Discrete Event Systems.
An important step towards integrating the features of both state-based and trace-based models has been taken through the notion of extended processes in [22], where both events and state transformations are dealt with simultaneously. However, only a restricted class of operators, to be used for recursive definition, has been mentioned. Moreover, the possibility of shared variables among subprocesses, and the different constraints that may arise due to them, has not been discussed.
In this work, a second attempt has been made to integrate both trace-based and state-based features using the concept of extended processes. The major difference between the current work and the existing one [22] stems from the fact that a shared-memory architecture has been used in the current work, unlike in [22], where a message-passing architecture is used. In [22] it has been shown how a shared variable can be modelled, not as an event, but as a buffer process.
The relative advantages and disadvantages of using a message-passing / local-memory architecture versus a global-memory architecture in parallel and distributed computation are well known to the computer science community. The former has the advantage that parallel computation assigns values to any variable in a deterministic way, since each variable is local in nature; but it has a large communication burden. In the case of the latter, there is little communication burden, but strategies like semaphores and mutual exclusion must be enforced in order to ensure a deterministic effect of parallel computation on shared variables. In most cases, therefore, the choice of an architecture is determined by the application at hand as well as physical constraints.
Extended FRP (EFRP) can be considered a generalised computational model, and the above discussion applies to it as well. The choice of a global-memory architecture in this work for modelling DES has been motivated by the following facts.
The state-based extension of the DFRP model is intended to capture real-life discrete event systems, and not just programming languages. While modelling programming languages, the only purpose is to capture a logical class of computations, and even there, several hardware and software systems support the concept of shared memory. On the other hand, while modelling real-life dynamic systems, a good model should reflect the structure of the dynamics as it exists in reality. The shared nature of physical variables is often an intrinsic feature of several systems. For example, consider the behaviour of a set of generators in an interconnected power grid. Through quantization of the state space, one may be able to describe the behaviour in a DES framework, by modelling each generator as a process which evolves through a sequence of events such as tripped-on-underfrequency, on-full-load, on-no-load, tripped-on-overspeed, etc. In such a system, note that the frequency is a true shared variable in the sense that it is global and all generators contribute to its value. Modelling such a system with message passing would imply either a frequency variable local to each generator, or a buffer process for the frequency, both of which are quite artificial.
A second problem arises if each shared variable is to be modelled by a buffer process. If the number of shared variables is large, an equally large number of processes has to be constructed. In any implementation, this will lead to a significant storage and communication overhead. It is just for this reason, notwithstanding the many problems of shared memory, that many multiprocessing systems are implemented with a shared-memory architecture, to save on memory and communication.
5.2
In order to model state variables explicitly in a trace-based formalism, we shall use the idea of an extended process space, as originally envisaged in [22]. To begin with, we shall augment the deterministic process space of Inan and Varaiya with an explicit state component. The resultant Augmented Process Space will have many similarities with, as well as some important differences from, the Deterministic Process of Inan-Varaiya with Memory [22].
Let V be a set of variables. For each v ∈ V, let type(v) denote the set from which v can take values. The underlying state-space Q is defined as
Q := ∏_{v∈V} type(v).
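With finite types, the product state-space can be sketched concretely; the variable names and type sets below are illustrative:

```python
from itertools import product

# V = {light, count}; type(light) and type(count) as finite value sets
types = {"light": ["red", "green"], "count": [0, 1, 2]}

# Q := the product over v in V of type(v), one dict per global state q
Q = [dict(zip(types, values)) for values in product(*types.values())]
```

Each element of Q fixes a value for every variable at once, which is exactly the role the global state q plays in the augmented processes below.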
5.3
Process Operators
In this section we define constant processes and a number of different operators on both
the augmented and extended process space. The operators can be used to build complex
models from simpler ones.
The constant processes of D_A are STOP_{A,q} and SKIP_{A,q}, for some A ⊆ Σ and q ∈ Q.
Definition 5.3.1 (Constant Processes:)
STOP_{A,q} := ({<>}, α_{STOP_{A,q}}(<>) = A, τ_{STOP_{A,q}}(<>) = 0, θ_{STOP_{A,q}} = q)
SKIP_{A,q} := ({<>}, α_{SKIP_{A,q}}(<>) = A, τ_{SKIP_{A,q}}(<>) = 1, θ_{SKIP_{A,q}} = q)
Naturally, their extended-process counterparts will be
STOP_A : Q → D_A | D(STOP_A) = Q and STOP_A(q) := STOP_{A,q}.
SKIP_A : Q → D_A | D(SKIP_A) = Q and SKIP_A(q) := SKIP_{A,q}.
Note that any HALT_m or P ↓_D 0 for some P ∈ D_A is either STOP_{A,q} or SKIP_{A,q} for some A ⊆ Σ, q ∈ Q. Similarly, P ↓_D 0 for some P ∈ Q → D_A is either STOP_A or SKIP_A for some A ⊆ Σ.
Next, following DFRP, we present a collection of operators and associated restrictions.
Definition 5.3.2 (Augmented Deterministic Choice Operator (ADCO):) Given P1, …, Pn ∈ D_A, distinct events σ1, …, σn from some A ⊆ Σ, a collection of associated state transition functions hi : Q → Q, 1 ≤ i ≤ n, and q ∈ Q, the ADCO is defined as:
P = (σ1 → P1 | ⋯ | σn → Pn)_{(A,τ,q)}
If τ = 1, P := SKIP_{A,q}.
If τ = 0, then
tr P := {<>} ∪ {<σi>^s | s ∈ tr Pi, 1 ≤ i ≤ n};
α_P(<>) := A, α_P(<σi>^s) := α_{Pi}(s);
τ_P(<>) := 0, τ_P(<σi>^s) := τ_{Pi}(s);
θ_P(<>) := q, θ_P(<σi>^s) := θ_{Pi}(s).
The continuity of state is achieved under the following constraints. (i) For the given q ∈ Q, hi(q) must be defined for all i. (ii) The initial state of each Pi must satisfy the condition θ_{Pi}(<>) = hi(q). Note that h^P_{σi}(<>, ·) = hi(·), but if, instead of being the first event, σi occurs in P after some <σj>^s, then h^P_{σi}(<σj>^s, ·) will be determined as h^P_{σi}(<σj>^s, ·) = h^{Pj}_{σi}(s, ·).
ADCO describes the many courses of events that are possible at the beginning of a process. It is continuous and constructive. Using ADCO, we define its extended version.
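The two continuity constraints can be sketched as a simple check on the branches of an ADCO; the integer state space and the branch data below are assumptions of this sketch:

```python
# each branch of the ADCO: event -> (state update h_i, initial state of P_i)
def adco_defined(q, branches):
    # (i) h_i(q) must be defined; (ii) P_i must start in state h_i(q)
    for event, (h, continuation_initial_state) in branches.items():
        hq = h(q)
        if hq is None or hq != continuation_initial_state:
            return False
    return True

branches = {"inc": (lambda q: q + 1, 6),   # P_inc starts in state 6
            "reset": (lambda q: 0, 0)}     # P_reset starts in state 0
```

Starting from q = 5 both branches line up with their continuations, while from q = 3 the "inc" branch would violate state continuity.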
Definition 5.3.3 (Extended Deterministic Choice Operator (EDCO):) Given extended processes P̄1, …, P̄n, distinct events σ1, …, σn, a collection of associated state transition functions hi : Q → Q, 1 ≤ i ≤ n, and a partial function v : Q → 2^Σ × {0, 1} × Q, the EDCO is defined as
D(P̄) := {q ∈ Q | v(q) = (Aq, τq, q′q) implies that τq = 0 and there exists a nonempty {σk1, …, σkm} ⊆ Aq such that for all l, 1 ≤ l ≤ m, q ∈ D(P̄kl), hkl(q′q) is defined and θ_{[P̄kl(q)]}(<>) = hkl(q′q)}. For q ∈ D(P̄),
P̄(q) := (σk1 → P̄k1(q) | ⋯ | σkm → P̄km(q))_{(Aq, 0, q′q)}
where {k1, …, km} = {j | q ∈ D(P̄j), hj(q′q) is defined, and θ_{[P̄j(q)]}(<>) = hj(q′q)}.
We denote v as (A, 0) if for all q ∈ D(P̄), v(q) = (A, 0, q). Also, if for all q ∈ D(P̄), v(q) = ({σ1, …, σn}, 0, q) and {k1, …, km} = {1, …, n}, we can simply drop the initial marking symbol v. This operator is also continuous and constructive.
According to the definition of the choice operator on extended processes (see Definition 2.1.10), the extended processes P̄1, …, P̄n are to be evaluated at the initial mark. Since we have retained the same spirit here in Definition 5.3.3, for meaningful operation it becomes necessary that the state continuity constraint θ_{[P̄j(q)]}(<>) = hj(q′q) be satisfied.
The following Assignment Operator (AO), also originally defined in [22], is widely used for satisfying this state continuity constraint in EDCO.
α_{(P1;P2)}(s) := α_{P1}(s) if s ∈ A1; α_{P2}(ts) where s = rs^ts ∈ A2.
τ_{(P1;P2)}(s) := 0 if s ∈ A1; τ_{P2}(ts) if s = rs^ts ∈ A2.
θ_{(P1;P2)}(s) := θ_{P1}(s) if s ∈ A1; θ_{P2}(ts) if s = rs^ts ∈ A2.
Note that the ASCO is quite restrictive in the sense that, even if P1 terminates successfully, if there is a mismatch between the terminating state of P1 and the initial state of P2, the overall sequential composition deadlocks. The extended version removes this kind of restriction.
Definition 5.3.10 (Extended Sequential Composition Operator (ESCO):) Given extended processes P̄1 and P̄2, P̄1 ; P̄2 is defined as follows.
D(P̄1 ; P̄2) = D(P̄1). For q ∈ D(P̄1 ; P̄2), (P̄1 ; P̄2)(q) is defined as
tr (P̄1 ; P̄2)(q) := A1q ∪ A2q, where A1q and A2q are defined as
A1q := {s ∈ tr P̄1(q) | (τ_{P̄1(q)}(s) = 0) ∨ (τ_{P̄1(q)}(s) = 1 ∧ ((q′ = θ_{P̄1(q)}(s) ∉ D(P̄2)) ∨ (q′ ≠ θ_{P̄2(q′)}(<>))))};
A2q := {s = rs^ts | (rs ∈ tr P̄1(q)) ∧ (τ_{P̄1(q)}(rs) = 1) ∧ (q′ = θ_{P̄1(q)}(rs) ∈ D(P̄2)) ∧ (q′ = θ_{P̄2(q′)}(<>)) ∧ (ts ∈ tr P̄2(q′))}.
Here also A1q ∩ A2q = ∅, and for s ∈ A2q the components rs and ts are unique.
α_{(P̄1;P̄2)(q)}(s) := α_{P̄1(q)}(s) if s ∈ A1q; α_{P̄2(q′)}(ts) where s = rs^ts ∈ A2q.
τ_{(P̄1;P̄2)(q)}(s) := 0 if s ∈ A1q; τ_{P̄2(q′)}(ts) if s = rs^ts ∈ A2q.
θ_{(P̄1;P̄2)(q)}(s) := θ_{P̄1(q)}(s) if s ∈ A1q; θ_{P̄2(q′)}(ts) if s = rs^ts ∈ A2q.
Note that the state-matching requirement for augmented processes is relaxed to some extent by allowing the extended process P̄2 to take the variable state q′ as its initial state.
It is in the definition of the Augmented Parallel Composition Operator (APCO) that augmented processes differ most from deterministic processes, both without and with memory. The difference emerges due to the introduction of the silent event ε. The APCO captures the concurrent evolution of two or more processes. Synchronisation of certain tasks is brought about by naming the two tasks in the two processes identically. Such synchronous events appear only once in the trace. At times, when one process generates an event σ, the other process may execute ε synchronously, irrespective of the event in the environment, but only σ appears in the trace. This is explained in the following formal definition.
Definition 5.3.11 (Augmented Parallel Composition Operator (APCO):) To define the APCO, we first define the projection of a string s on a process P (denoted by s↑P) as follows.
<>↑P := <>.
For σ ≠ ε,
s^<σ>↑P :=
  undefined, if s↑P ∉ tr P;
  (s↑P)^<σ>, if σ ∈ α_P(s↑P);
  (s↑P)^<ε>, if (σ ∉ α_P(s↑P)) ∧ (ε ∈ α_P(s↑P));
  s↑P, if σ, ε ∉ α_P(s↑P).
τ_{(P1 ∥ P2)}(s) :=
  1, if (τ_{P1}(s↑P1) = 1 ∧ τ_{P2}(s↑P2) = 1)
     ∨ (τ_{P1}(s↑P1) = 1 ∧ α_{P2}(s↑P2) ⊆ α_{P1}(s↑P1))
     ∨ (τ_{P2}(s↑P2) = 1 ∧ α_{P1}(s↑P1) ⊆ α_{P2}(s↑P2));
  0, otherwise.
θ_{(P1 ∥ P2)} after a string is obtained by composing the component transition functions: h^{Pi}(s↑Pi, q) when only Pi participates in the last event, and h^{Pi}(s↑Pi, h^{Pj}(s↑Pj, q)) when both participate.
Thus an event σ ≠ ε can take place in a component process provided it is not blocked by the environment. Events common to the alphabets of the component processes must occur synchronously. If an event σ is possible in P1 and is not blocked by P2, and at the same time the silent transition ε is possible in P2, then σ in P1 and ε in P2 will take place synchronously and only σ will appear in the trace of P1 ∥ P2. Note that this occurs even if σ is in the alphabet (and trace) of P2. Essentially, ε has been introduced to model the fact that, at certain points in the dynamics of processes, some change in the state variables, caused by events occurring in other external processes operating in parallel, may spontaneously cause a change in the event dynamics. By spontaneous we mean a change without any deliberate action by the process itself or any external manifestation. A typical analogy is that of a hardware interrupt, where an event (the interrupt) raised by another processor can simply cause a change in the event dynamics (a jump to the interrupt service subroutine) without any external manifestation. Note that, just as an interrupt can be disabled, a similar effect can be achieved by not including ε in the alphabet. Further use of ε will be found in the context of an extended process in a later part. Though, according to the definition, one may not always be able to block independent occurrences of ε in traces of P1 ∥ P2, meaningfully ε should appear in traces of P1 ∥ P2 only when the latter is acting as a component process in (P1 ∥ P2) ∥ P3 for some P3 and ε is induced in P1 ∥ P2 by some event in P3. This meaningful behaviour can be achieved by using AGCO[[−ε]] on the process P describing the physical behaviour of the overall system.
The constraints that must be satisfied by APCO are as follows.
i) Ω_P1(<>) = Ω_P2(<>). This is because P1 and P2 should have the same initial global state
for a well-defined P1‖P2.
ii) All the function compositions used in finding (P1‖P2)(s) must be commutative. We
emphasize here that this requirement is both natural and reasonable from the point of
view of deterministic modelling of real-world DES. Recall that either events are physically
synchronous, or they are modelled as such because their exact order of occurrence is deemed
to be unimportant for the particular application of the model. In the former case commutativity
is physical, while in the latter the events are modelled as synchronous only because
commutativity is satisfied. Note also that we do not require that the sets of variables accessed and
manipulated by the components of APCO are disjoint, as in [22], where the commutativity
requirement is thereby obviated. In case neither of the situations holds, loss of commutativity
results in nondeterminism in the state trajectory. Since in this work we concern ourselves
with a deterministic framework, the commutativity of maps has been assumed to remove
the possible nondeterminism when shared events modify common variables.
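The role of the commutativity requirement can be illustrated with a small sketch (the update maps below are hypothetical and not from the thesis): when two components of a synchronous event update a shared state, the post-event state is well defined only if the two update maps commute.

```python
# Sketch: state-update maps of two components for a shared, synchronously
# occurring event. If the maps commute, the resulting state does not depend
# on the (physically meaningless) order in which they are applied.

def h1(q):
    # hypothetical update of component 1: increment a shared counter
    return {**q, "c": q["c"] + 1}

def h2(q):
    # hypothetical update of component 2: raise a flag (does not touch "c")
    return {**q, "flag": True}

q0 = {"c": 0, "flag": False}
assert h1(h2(q0)) == h2(h1(q0))   # commutative: deterministic result

def h3(q):
    # a non-commuting update: doubles the shared counter
    return {**q, "c": q["c"] * 2}

# order of application now matters: the state trajectory is nondeterministic
assert h1(h3(q0)) != h3(h1(q0))
```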
iii) For any component process Pi, τ ∈ f_Pi(s↾Pi) ⟹ (s↾Pi)^<τ> ∈ tr Pi. To see the
importance of this constraint, assume that it is violated in P1‖P2, for some s and P1, with
s↾Pi ∈ tr Pi, i = 1, 2. Then τ ∈ f_P2(s↾P2) with (s↾P2)^<τ> ∉ tr P2 and σ ∈ f_P1(s↾P1) implies
s^<σ> ∉ tr(P1‖P2). This is physically absurd.
It is interesting to observe that, as far as the state dynamics is concerned, (P‖P) ≠ P.
This equality holds in the case of CSP and DFRP, where the concept of state is absent.
In augmented processes also, tr(P‖P) = tr P. However, even with a simple state transition
function such as h_P(σ, q) = q + 1 with Q = Z, the set of integers, h_P(s↾P, h_P(s↾P, q))
≠ h_P(s↾P, q).
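The point is easy to see concretely. A minimal sketch, using the transition function h_P(σ, q) = q + 1 quoted above: in P‖P each copy contributes its state update for the shared event, so the state moves twice as far as in P alone, even though the traces coincide.

```python
# Sketch: with h_P(sigma, q) = q + 1 on Q = Z, the composed process P || P
# applies the update once per copy, so its state after the shared event
# differs from that of P alone, although tr(P || P) = tr P.

def h_P(sigma, q):
    return q + 1  # the example transition function from the text

q = 0
state_P = h_P("a", q)              # state of P after event "a"
state_PP = h_P("a", h_P("a", q))   # state of P || P after the same event
assert state_P != state_PP
```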
Definition 5.3.12 (Extended Parallel Composition Operator (EPCO)): Given extended processes P1 and P2, P1‖P2 is defined as below:
D(P1‖P2) := {q ∈ D(P1) ∩ D(P2) | P1(q)‖P2(q) is defined}. Also (P1‖P2)(q) :=
P1(q)‖P2(q).
ASCO, ESCO, AP CO and EP CO are all continuous and ndes.
Definition 5.3.13 (Augmented State Modification Operator (ASMO)): Given
some process P ∈ D_A and a function r : Q → Q, with domain D(r), such that Ω_P(<>) ∈ D(r),
the ASMO, denoted by P{r}, modifies the initial state and consequently the state trajectory
of a process. Formally,
<> ∈ tr P{r} and Ω_P{r}(<>) := r(Ω_P(<>)).
If s ∈ tr P{r} then s^<σ> ∈ tr P{r} iff (s^<σ> ∈ tr P) ∧ (h_P(σ, Ω_P{r}(s)) is defined), and
Ω_P{r}(s^<σ>) := h_P(σ, Ω_P{r}(s)).
∀s ∈ tr P{r}, f_P{r}(s) := f_P(s) and c_P{r}(s) := c_P(s).
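The effect of ASMO can be sketched as follows (illustrative names and updates, not the thesis's notation): the trace structure is untouched, but the whole state trajectory is recomputed by folding the transition function from the modified initial state r(q0) instead of q0.

```python
# Sketch: the state trajectory of P{r} is obtained by iterating P's
# transition function h_P over the trace, starting from r(q0) rather
# than q0. The event updates below are hypothetical.

def h_P(sigma, q):
    return q + (1 if sigma == "inc" else -1)

def trajectory(trace, q0):
    """States visited along a trace, starting from initial state q0."""
    states = [q0]
    for sigma in trace:
        states.append(h_P(sigma, states[-1]))
    return states

r = lambda q: q + 10   # the state-modification map of the ASMO
trace = ["inc", "inc", "dec"]
assert trajectory(trace, 0) == [0, 1, 2, 1]
assert trajectory(trace, r(0)) == [10, 11, 12, 11]  # whole trajectory shifts
```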
Definition 5.3.14 (Extended State Modification Operator (ESMO)): Given some
extended process P ∈ (D_A)^Q and a function r : Q → Q, with domain D(r), the ESMO, denoted
by P{r}, is defined as:
D(P{r}) := {q ∈ D(P) | (P(q)){r} is defined}. Also (P{r})(q) := (P(q)){r}.
The difference between ESMO and AO should be noted carefully. In the former,
the initial state of the augmented process P(q), namely Ω_P(q)(<>), gets modified to
r(Ω_P(q)(<>)) and the subsequent state trajectory is also modified. In the latter, the argument
of the extended process P itself is changed from q to h(q), and a different augmented
process P(h(q)) is selected.
The logical branching operator is another extended operator; it was originally introduced
in [22] and captures the standard if-then-else construct of programming languages.
Definition 5.3.15 (Logical Branching Operator (LBO)): Given a boolean condition
b : Q → {True, False} and extended processes PT, PF, the LBO P = (PT ◁ b ▷ PF) :
Q → D_A is such that
[Figure: an example batch plant, with units R1, R2 and R3 processing products A, B and C.]
by two events, namely s_r^m (start of processing of M_m in R_r) and e_r^m (end of processing of
M_m in R_r). The time gap between these two events is fixed and is equal to t(m, r) units
(assumed to be discrete). For the m-th product the predetermined resource utilisation
sequence θ_m is expressed as the event sequence
θ_m = < s^m_{r_m1}, e^m_{r_m1}, ..., s^m_{r_mk_m}, e^m_{r_mk_m} >.
On the other hand, using methods of [9], the product sequence in individual units is also
found. For R_r, this sequence is expressed as
φ_r = < s_r^{m_r1}, e_r^{m_r1}, ..., s_r^{m_rk_r}, e_r^{m_rk_r} >.
The plant is modelled as an extended process, namely Batch_process, which is the parallel
operation of the unit processes R_r, 1 ≤ r ≤ L, along with the supervisor (constraint θ_m,
1 ≤ m ≤ N) or specification processes P_m, 1 ≤ m ≤ N. The state space is formed with N
boolean variables v_m, 1 ≤ m ≤ N, each representing whether M_m is being processed
by some unit or not, and L counters c_r, 1 ≤ r ≤ L (type(c_r) = ℕ), each capturing the
timing constraints t(m, r) for different m and r. Finally,
Batch_process := ((R_1 ‖ ··· ‖ R_L) ‖ (P_1 ‖ ··· ‖ P_N))[[−{τ}]]
The process P_m is determined by the sequence θ_m and is given as
P_m = (s^m_{r_m1} → (e^m_{r_m1} → (··· (s^m_{r_mk_m} → (e^m_{r_mk_m} → P_m[∅])) ···)))[[+A_m]]
where A_m is the collection of events present in θ_m. The process P_m does not modify any
variables. It just ensures that the product sequence θ_m is maintained.
The process R_r is determined by the sequence φ_r and is given as
R_r = (U^{m_r1}_r ; ··· ; U^{m_rk_r}_r)[[+(A_r ∪ {tick})]] ; R_r
Here A_r is the collection of events present in φ_r. R_r is the extended process that simulates
the timed behaviour of the unit R_r. Also for j ∈ {m_r1, ..., m_rk_r}, U^j_r is given below.
U^j_r = (W^j_r ◁ (v_1 = 1) ∨ ··· ∨ (v_N = 1) ▷ V^j_r)
V^j_r = (τ → U^j_r[∅] | s^j_r → X^j_r[c_r := 0; v_j := 1])
W^j_r = (τ → U^j_r[∅] | s^j_r → X^j_r[c_r := 0; v_j := 1] | tick → U^j_r[∅])
If all the units and products are idle (v_1 = ··· = v_N = 0) then in each individual U^j_r the process
V^j_r is activated. In this process, either unit r starts working on product j, or some other
unit in its environment starts working on some product. The event tick is not allowed in
V^j_r, to represent the physical fact that no time is wasted with all the units idle. In other
words, counting of time starts only after some unit starts working. However, if some unit
is busy, the event tick is allowed to take place simultaneously in all R_r, as captured by the
process W^j_r. The effect of the event s^j_r is expressed with the help of the assignment operator.
Finally,
X^j_r = ((e^j_r → SKIP_∅[v_j := 0]) ◁ c_r = t(j, r) ▷ (tick → X^j_r[c_r := c_r + 1]))
In X^j_r, the product stays for t(j, r) time units before the r-th unit finishes its work and
sets v_j := 0. The global change operator ensures that the events shared by any P_m and
R_r are always synchronized and also that tick is globally synchronized among all the R_r's.
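The timing discipline enforced by X^j_r can be sketched procedurally (the function and event names below are illustrative, not part of the model): after s^j_r the counter c_r is reset, each tick increments it, and e^j_r can occur only when c_r reaches t(j, r).

```python
# Sketch of the counter logic captured by X_jr: start event resets c_r,
# tick increments it, and the end event fires exactly at c_r = t(j, r).

def run_unit(t_jr):
    """Simulate one processing cycle of unit r on product j."""
    events = ["s_jr"]          # start of processing; counter c_r := 0
    c_r = 0
    while c_r < t_jr:          # before the deadline only tick is possible
        events.append("tick")
        c_r += 1
    events.append("e_jr")      # end of processing; v_j would be reset to 0
    return events

assert run_unit(3) == ["s_jr", "tick", "tick", "tick", "e_jr"]
```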
5.4
Recursive Characterisation
Definition 5.4.2 (Extended Mutually Recursive Processes (EMRP)): A finite collection of processes {(P_1, ..., P_n)} is called a family of Extended Mutually Recursive Processes (EMRP) with respect to <Σ^n(G_E, (D_A)^Q)> if ∀j, P_j, ∀s ∈ tr P_j we have P_j/s =
<f>(P_1, ..., P_n) for some <f> ∈ <Σ^n(G_E, (D_A)^Q)>.
Theorem 5.4.1 {(P_1, ..., P_n)} is a family of EMRP with respect to <Σ^n(G_E, (D_A)^Q)> iff
P = (P_1, ..., P_n) ∈ ((D_A)^Q)^n is the unique solution of the recursive equation with consistent
initial condition,
(X_1, ..., X_n) = X = F(X),   X↾D⁰ := P↾D⁰     (5.1)
where F : ((D_A)^Q)^n → ((D_A)^Q)^n and each component F_i of F is guarded, that is
F_i(X) = (σ_i1 → <f_i1>(X) | ··· | σ_iK_i → <f_iK_i>(X))_{m_i}     (5.2)
In the following cases the choice of <f'> will depend on the argument of the extended process.
ESMO: For any q ∈ D((<f{h}>(P))/<σ>),
((<f{h}>(P))/<σ>)(q) = ((<f{h}>(P))(q))/<σ>
= (((<f>(P)){h})(q))/<σ> = (((<f>(P))(q)){h})/<σ>
= ((((<f>(P))(q))/<σ>){h_q} = (((<f>(P))/<σ>)(q)){h_q}
= ((<f'>(P, P⁰)){h_q})(q) = (<f'{h_q}>(P, P⁰))(q)
where h_q : Q → Q is such that ∀q' ∈ Q, h_q(q') := Ω_{((<f>(P))(q)){h}}(<σ>).
LBO: ((<f_1 ◁ b ▷ f_2>(P))/<σ>)(q) :=
((<f_1>(P))/<σ>)(q) = (<f_1'>(P, P⁰))(q)   if b(q) = T,
((<f_2>(P))/<σ>)(q) = (<f_2'>(P, P⁰))(q)   if b(q) = F.
ESCO: ((<f_1 ; f_2>(P))/<σ>)(q) :=
((<f_1>(P))/<σ> ; (<f_2>(P)))(q)   if <σ> ∈ tr((<f_1>(P))(q)),
((<f_2>(P))/<σ>)(q')   if ((<f_1>(P))(q) = SKIP_{A,q'}) ∧ <σ> ∈ tr((<f_2>(P))(q')).
Note that, in the first case, <f_2>(P) is trivially written as <f_2>(P, P⁰). But in the
second case ((<f_2>(P))/<σ>)(q') is first replaced, using the induction hypothesis, by
(<f_2'>(P, P⁰))(q') and then the argument q' (which is the initial state of (<f_1>(P))(q))
is brought back to q using the AO h : Q → Q such that ∀q̃ ∈ Q, h(q̃) := q'.
EPCO: Let <σ> ∈ tr((<f_1 ‖ f_2>(P))(q)). Consider the following two possibilities.
(i) <σ>↾(<f_1>(P))(q) = <σ_1> and <σ>↾(<f_2>(P))(q) = <σ_2>, where the possible (σ, σ_1, σ_2)
tuples are (σ, σ, σ), (σ, σ, τ), (σ, τ, σ) or (τ, τ, τ). Then ((<f_1 ‖ f_2>(P))/<σ>)(q)
:= ((((<f_1>(P))/<σ_1>){h_2}) ‖ (((<f_2>(P))/<σ_2>){h_1}))(q)
:= ((<f_1'{h_2}>(P, P⁰)) ‖ (<f_2'{h_1}>(P, P⁰)))(q). Here h_i : Q → Q is such that
h_i(q̃) := h_{(<f_i>(P))(q)}(σ_i, q̃) for i = 1, 2.
(ii) For i, j = 1, 2, i ≠ j, if <σ>↾(<f_i>(P))(q) = <σ> and <σ>↾(<f_j>(P))(q) = <> then
((<f_i ‖ f_j>(P))/<σ>)(q) := (((<f_i>(P))/<σ>) ‖ ((<f_j>(P)){h_i}))(q)
:= ((<f_i'>(P, P⁰)) ‖ (<f_j{h_i}>(P, P⁰)))(q). Here also h_i : Q → Q is such that
h_i(q̃) := h_{(<f_i>(P))(q)}(σ, q̃). Also <f_j>(P) is trivially written as <f_j>(P, P⁰). In this case σ
includes τ also.
Thus, by the induction principle, the claim made at the beginning of the proof is found to
be true. The rest of the proof proceeds in an identical fashion to that of Theorem 3.2 in
[21].
□
Definition 5.4.3 (Extended Finitely Recursive Processes (EFRP)):
A process Y ∈ (D_A)^Q is said to be an Extended Finitely Recursive Process (EFRP) with
respect to <Σ^n(G_E, (D_A)^Q)> if it can be represented as
P = F(P),   Y = <g>(P)
where P = (P_1, ..., P_n), F is of the form (5.2) and <g> ∈ <Σ^n(G_E, (D_A)^Q)>. (F, <g>)
is said to be a realisation of the process Y.
Definition 5.4.4 (Extended Algebraic Process Space): The collection of all possible
EFRPs w.r.t. <Σ^n(G_E, (D_A)^Q)> for arbitrary n is called the Extended Algebraic Process
Space and is denoted as A(<Σ(G_E, (D_A)^Q)>).
Like that of DFRPs, A(<Σ(G_E, (D_A)^Q)>) is also closed under the post-process operation.
Finally, we mention the specific differences between the space A(<Σ(G_E, (D_A)^Q)>)
introduced here and the similar one presented in [22].
In [22], G_E consists of only EPCO and ESCO.
In the present work the concept of a silent transition has been introduced. As a
result the definition of EPCO is modified.
In [22], in the case of EPCO, the state space is modelled as the cross product of the
individual state spaces. But this can make some well-defined EPCOs, in the sense of
[22], physically meaningless, since independent transformations of multiple copies of
shared variables are not meaningful in reality. In the case of ESCO also, no restriction
is imposed on the domain even if two sequential processes share some common
variables.
A state modification operator has been introduced to maintain the mutual recursiveness
of EPCO.
In our model, the continuity of state, the explicit mention of state transition functions
associated with events, etc., have led to many constraints which are not mentioned in
[22].
5.5
Assessment
In the present work, the DFRP framework presented in [21] has been augmented and
extended to include different state-based features like variables, logical decisions etc. A
first attempt in this direction was made in [22], where the concept of extended memory was
introduced. In the present work we have made a second attempt at introducing state-based
features. This work differs from the one in [22] mainly in the use of shared memory. A
larger class of operators has been used to give a recursive characterisation of the extended
process. A number of constraints are introduced to maintain state continuity under the
shared memory architecture. A concept of a silent transition has been introduced in this
model to capture the effects on a process of the events that take place in the environment of
the process. With this extension the FRP framework becomes powerful enough to capture different
numerical and real-time features like clocks, hard deadlines, logical decisions etc. It can also
mimic powerful state-based models like TTM, as shown in the following.
Modelling Arbitrary Timed Transition Models (TTM)
The TTM framework is proposed in [13]. Here we demonstrate the describing power of
EFRP by modelling a general TTM. For a detailed exposition on TTM the reader is
requested to refer to [13].
Let Plant = M_1 ‖ ··· ‖ M_n be a TTM plant, where the i-th TTM M_i is described as
M_i = (V_i, Θ_i, T_i).
Here
T̃_i := T_i − {tick, initial} = {τ_ij | 1 ≤ j ≤ k_i, τ_ij = (e_ij, h_ij, l_ij, u_ij)}.
Ṽ_i := V_i with the next-transition variable removed. C_i := {c_ij | 1 ≤ j ≤ k_i, type(c_ij) = ℕ}. V̂_i := Ṽ_i ∪ C_i.
V_Ex := V̂_1 ∪ ··· ∪ V̂_n. Q_Ex := ∏_{v ∈ V_Ex} type(v).
In the above description the plant is modelled as the parallel composition of n TTMs.
An individual TTM is described as a tuple of a set of variables V_i, an initial condition Θ_i,
and a collection of transitions T_i. Each transition of T_i is a tuple consisting of an enabling
condition e, a state transformation h, a lower time bound l and an upper time bound u.
From T_i two special transitions, initial and tick, are removed to form T̃_i, which now
contains k_i transitions. From V_i the next-transition variable is removed and
k_i counter variables are added, one for each transition of the TTM other than tick or
initial. We now deal with an extended variable set V_Ex and state space Q_Ex. The central
idea is to introduce an individual extended process for each transition of the TTM, which
takes care of the respective enabling conditions and time bounds using counter variables, silent
transitions and the LBO. These processes are called event processes. However, initial and
tick are modelled explicitly as events since they do not have such constraints.
We now proceed to give the formal definition of the EFRP P̄lant that models a general
TTM Plant. It is formulated as a concurrent operation of n extended processes M̄_i,
1 ≤ i ≤ n, each of which models the i-th TTM M_i.
P̄lant := (M̄_1 ‖ ··· ‖ M̄_n)[[−{τ}]]
M̄_i := (STOP_{initial} ◁ ¬Θ_i ▷ (initial → Ȳ_i)).
At a state q, the realisation M̄_i(q) of the extended process M̄_i either evaluates to a
deadlocked process or is ready to execute initial and engage in purposeful activity,
depending upon whether Θ_i(q) is false or true. If it is false for some i then P̄lant is
deadlocked due to STOP_{initial} in M̄_i. Otherwise P̄lant starts meaningful activities with
initial taking place in all its components. Ȳ_i is a recursive process. In each call of Ȳ_i a
parallel composition of k_i event processes, namely ‖ P̄_ij, 1 ≤ j ≤ k_i, is called. Once this
composition terminates, Ȳ_i is called again.
Ȳ_i := (P̄_i1 ‖ ··· ‖ P̄_ik_i) ; Ȳ_i.
Ȳ_i is modelled to be a nonterminating process since the TTM is such. Each nonterminating
event process P̄_ij plays the key role in the modelling. Formally
P̄_ij = ((X̄4_ij ◁ c_ij = u_ij ▷ (X̄3_ij ◁ l_ij ≤ c_ij < u_ij ▷ X̄2_ij)) ◁ e_ij ▷ X̄1_ij)[[+{tick, τ_ij}]]. Here
X̄1_ij := (τ → SKIP_∅[c_ij := 0] | tick → SKIP_∅[t := t + 1, c_ij := 0]).
X̄2_ij := (τ → P̄_ij[∅] | tick → P̄_ij[t := t + 1, c_ij := c_ij + 1]).
X̄3_ij := (τ → P̄_ij[∅] | τ_ij → P̄_ij[h_ij, c_ij := 0] | tick → P̄_ij[t := t + 1, c_ij := c_ij + 1]).
X̄4_ij := (τ → P̄_ij[∅] | τ_ij → P̄_ij[h_ij, c_ij := 0]).
Here e_ij and h_ij denote the enabling condition and the transformation function of τ_ij,
interpreted over the extended variable set V_Ex.
Now we explain the operation of the event process P̄_ij. The process X̄1_ij corresponds
to the case when the enabling condition for τ_ij is false. At this stage, in P̄_ij, only tick
or τ (due to some τ_i'j' in some P̄_i'j') is allowed. Also the counter is reset to zero. X̄2_ij
takes over when the enabling condition is true but the value of time is less than the lower
bound. Here also the event τ_ij cannot take place. However, in response to tick, the counter
is incremented. The idea behind keeping τ in each of these processes is to be able to
check the enabling as well as the timing conditions after every event in the environment of P̄_ij.
However, τ is prevented from appearing in the traces of the overall process P̄lant using the
EGCO [[−{τ}]]. X̄3_ij pertains to the case when the enabling condition, as well as both
the lower and upper time bound constraints, are satisfied and τ_ij is allowed to happen.
X̄4_ij is the case when the counter value equals u_ij. It is interesting to note that in this
case tick is not allowed, as its occurrence would violate the upper time bound constraints of the
TTM. Physically this means that, before the next time instant, either τ_ij will take place
or the enabling condition will become false due to some event in the environment. The
EGCO [[+{tick, τ_ij}]] ensures that tick is synchronized among all the processes and that τ_ij is also
synchronized when it is a shared transition. Communicating transitions are also modelled
as shared transitions. Finally, note that after each event in ‖^n_{i=1} Ȳ_i, the post-process is
‖^n_{i=1} Ȳ_i itself. This is because each process in ‖^n_{i=1}(‖^{k_i}_{j=1} P̄_ij) proceeds synchronously, with
the occurrence of either a tick in all the components, or a τ_ij in one (or some, if shared)
and τ in the others. In the next call (at a new state), all the conditions are rechecked. Thus
P̄lant models Plant successfully.
□
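The case analysis above can be sketched as a small decision function (an illustration only; the names X1 to X4 refer to the processes defined earlier, and tau/tau_ij stand for the silent transition and the TTM transition respectively):

```python
# Sketch of the decision structure of the event process P_ij for a TTM
# transition (e_ij, h_ij, l_ij, u_ij): which of X1..X4 is active, and hence
# which events are currently possible. X1 and X2 differ only in whether the
# counter is reset or incremented on tick.

def enabled_events(e_ij_holds, c_ij, l_ij, u_ij):
    if not e_ij_holds:
        return {"tau", "tick"}              # X1: disabled; counter resets
    if c_ij < l_ij:
        return {"tau", "tick"}              # X2: too early for tau_ij
    if c_ij < u_ij:
        return {"tau", "tick", "tau_ij"}    # X3: within [l_ij, u_ij)
    return {"tau", "tau_ij"}                # X4: deadline; tick is blocked

assert enabled_events(True, 5, 2, 5) == {"tau", "tau_ij"}
```

Blocking tick in the last branch is exactly how the upper time bound is enforced: time cannot advance until either tau_ij fires or the enabling condition is withdrawn by the environment.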
However, the present framework suffers from the same problems as the original DFRP
and the NFRP models, in the sense that it has a large modelling flexibility and a resultant
loss of tractability. Moreover, for general EFRPs, it is not known how the different state-continuity
constraints can be verified in a recursive way. To solve these problems, as
well as to attack the different control and observation problems, bounded and tractable
subclasses of Σ^n(G_E, (D_A)^Q), in the sense of those in Chapter 3, should be identified.
In Chapters 3, 4 and 5 we have defined three different variants of the DFRP framework
and illustrated their use by means of simple and other synthetic examples. In the next
chapter, we take three examples drawn from practical real-world systems and show
how they can be modelled using these frameworks.
Chapter 6
Case Studies
In this chapter we present three case studies showing the utility of the proposed hierarchical
subclass and the extensions of DFRP in modelling real-world DES. The first example
involves modelling of a manufacturing shop-floor using the bounded subclass H. In the
second, the nondeterministic extension of DFRP is used in modelling a fault diagnosis
procedure adopted by an automatic diagnostic expert. Finally, the discrete dynamics of a robot
controller is modelled using the state-based extension of the DFRP framework.
6.1
Modelling of a Shop-Floor
[Figure: schematic of the shop-floor, showing a distributor feeding job-shops A and B, which deposit into buffers 1 and 2, served by an AGV.]
[Figure: hierarchical structure of the shop-floor process Y^1 = Y^SF = SF1, with subtasks Y^2 = Y^MC, Y^3 = Y^JA = JA11‖JA21, Y^4 = Y^JB = JB11‖JB21, Y^5 = Y^AGVA = AGVA1, Y^6 = Y^AGVD = AGVD1, Y^7, Y^8 and their expansions X^1 to X^6.]
Job-shop A: In Job-shop A, after the supply of parts (event supply_A), the parts are arranged
for concurrent processing (event part_A). Once concurrent processing is over (represented
by the subtask Y^3 = Y^JA), the processed parts are combined (event combine_A) and examined
(event examine_A). If the result is found to be satisfactory, then the event finish_A signals that
job-shop A is ready for another round of processing. However, if some error is found
(error_A), error recovery must take place (recovery_A) before the job-shop can start
operation once again. This dynamics is modelled by the process JSA1, where
JSA1 = (supply_A → JSA2 ; Y^JA ; JSA3)_{{supply_A},0}.
JSA2 = (part_A → SKIP_∅)_{{part_A},0}.
JSA3 = (combine_A → JSA4)_{{combine_A},0}.
JSA4 = (examine_A → JSA5)_{{examine_A},0}.
JSA5 = (finish_A → JSA1 | error_A → JSA6)_{{finish_A,error_A},0}.
JSA6 = (recovery_A → JSA1)_{{recovery_A},0}.
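The equations above induce a simple transition structure, sketched below (an illustration only; the subtask Y^JA between part_A and combine_A is collapsed into a single step here):

```python
# Sketch: the recursive equations JSA1..JSA6 as a transition table. The
# subtask Y_JA that runs between part_A and combine_A is abstracted away.

JSA = {
    "JSA1": {"supplyA": "JSA2"},
    "JSA2": {"partA": "JSA3"},      # SKIP, then Y_JA, then control reaches JSA3
    "JSA3": {"combineA": "JSA4"},
    "JSA4": {"examineA": "JSA5"},
    "JSA5": {"finishA": "JSA1", "errorA": "JSA6"},
    "JSA6": {"recoveryA": "JSA1"},
}

def run(trace, state="JSA1"):
    for e in trace:
        state = JSA[state][e]
    return state

# an error followed by recovery returns the job-shop to its initial process
assert run(["supplyA", "partA", "combineA", "examineA",
            "errorA", "recoveryA"]) == "JSA1"
```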
Y^MC = D1^[[+Σ_D]] ‖ JSA1^[[+Σ_JA]] ‖ JSB1^[[+Σ_JB]] ‖ BU1_1^[[+Σ_BU1]] ‖ BU2_1^[[+Σ_BU2]] ‖ AGVC1^[[+Σ_AGVC]]
where
Σ_D = {supply_A, finish_A, supply_B, finish_B}.
Σ_JA = {supply_A, finish_A}.
Σ_JB = {supply_B, finish_B}.
Σ_BU1 = {finish_A, req_A, ack_A}.
Σ_BU2 = {finish_B, req_B, ack_B}.
Σ_AGVC = {ack_A, ack_B, load}.
Node 3: As mentioned earlier, part A is produced by combining two concurrently
produced subparts, namely A1 and A2. Formally
JA11 = (process_A1 → JA12)_{{process_A1},0}.
JA12 = (store_A1 → SKIP_∅)_{{store_A1},0}.
JA21 = (process_A2 → JA22)_{{process_A2},0}.
JA22 = (store_A2 → SKIP_∅)_{{store_A2},0}.
Finally Y^3 = Y^JA = JA11 ‖ JA21.
Node 4: Part B is also produced by combining two concurrently produced subparts,
namely B1 and B2. Individually, B1 (respectively B2) is produced by some direct
processing (event d_process_B1 (event d_process_B2 resp.)) followed by combining (combine_B1
(combine_B2 resp.)) two concurrently produced subparts B3 and B4 (B5 and B6 resp.).
The production of these subparts is captured by the process Y^7 = Y^HJB1 (Y^8 = Y^HJB2
respectively). Formally, for i = 1, 2
JBi1 = (d_process_Bi → JBi2 ; Y^HJBi ; JBi3)_{{d_process_Bi},0}.
JBi2 = (part_Bi → SKIP_∅)_{{part_Bi},0}.
JBi3 = (combine_Bi → JBi4)_{{combine_Bi},0}.
JBi4 = (store_Bi → SKIP_∅)_{{store_Bi},0}.
Finally Y^4 = Y^JB = JB11 ‖ JB21.
Node 5: The arrival of the AGV at the requesting buffer is modelled by the process
Y^5 = Y^AGVA = AGVA1, which is given by the following recursive equations with
self-explanatory event labels.
AGVA1 = (AGV_start → AGVA2)_{{AGV_start}}.
AGVA2 = (check_for_buffer → AGVA3)_{{check_for_buffer}}.
AGVA3 = (buffer_found → AGVA4 | not_found → AGVA2)_{{buffer_found,not_found}}.
AGVA4 = (AGV_stop → SKIP_∅)_{{AGV_stop}}.
Node 6: The movement of the AGV from any of the buffers to the deposit position is
also modelled by a similar process Y^6 = Y^AGVD = AGVD1, which is given as follows.
AGVD1 = (AGV_start → AGVD2)_{{AGV_start}}.
AGVD2 = (check_deposit_position → AGVD3)_{{check_deposit_position}}.
AGVD3 = (position_found → AGVD4 | not_found → AGVD2)_{{position_found,not_found}}.
AGVD4 = (AGV_stop → SKIP_∅)_{{AGV_stop}}.
Node 7: The concurrent production of parts B3 and B4 is modelled as Y^7 = Y^HJB1 =
JB31 ‖ JB41, where JB31 and JB41 are given by the following processes.
JB31 = (process_B3 → JB32)_{{process_B3},0}.
JB32 = (store_B3 → SKIP_∅)_{{store_B3},0}.
JB41 = (process_B4 → JB42)_{{process_B4},0}.
JB42 = (store_B4 → SKIP_∅)_{{store_B4},0}.
Node 8: The concurrent production of parts B5 and B6 is similarly modelled as Y^8 =
Y^HJB2 = JB51 ‖ JB61, where JB51 and JB61 are given by the following processes.
JB51 = (process_B5 → JB52)_{{process_B5},0}.
JB52 = (store_B5 → SKIP_∅)_{{store_B5},0}.
JB61 = (process_B6 → JB62)_{{process_B6},0}.
JB62 = (store_B6 → SKIP_∅)_{{store_B6},0}.
The other conditions regarding the boundedness of the different modules can easily be seen
to hold. Thus clearly Y^SF ∈ H.
This example shows how the logical and physical structure of a system can naturally
lead to a hierarchical description. Finally, a rough calculation shows that if the example
is modelled using an FSM, a large number of states (around 10^5) will be required. Clearly,
the hierarchical DFRP gives a much more compact description.
6.2
In this case study the proposed nondeterministic extension of the DFRP framework has
been used to model the dynamics of a fault-diagnosis mechanism in response to some fault
in a chemical system, namely a continuously stirred tank reactor (CSTR).
The original example of the plant has been taken from [70] and its schematic diagram is
given in Fig. 6.3(a). Here we are interested in modelling the sequences of observations made
by a fault diagnoser in response to logical sequences of process-related queries which may
Once a particular level of the tree has been searched, a synchronisation event, ∆, takes place in
the model, in all the marked nodes. Then the query process from the marked children
nodes commences.
Finally, the search terminates at a set of leaf nodes, indicating the primary faults that
have taken place.
Each fault event is described in the format variable-name variable-condition. In many
cases we have complementary variable-conditions, like high (H) and low (L), or open (O)
and closed (C). We call these event pairs complementary event pairs, and if one event of
such a pair is symbolically denoted by σ then the other is called σ̄. It is sometimes possible
that an event has more than one complementary event. In that case, if the event is σ, the
non-singleton complementary set is denoted as Σ̄_σ.
For each primary fault node σ we define a process P_σ = P(σ), where, if σ has a
complementary event σ̄, then
P(σ) := (σ → HALT_{({σ̄},1,{σ̄})})_{({σ},0,{σ})}.
else if it has a complementary event set Σ̄_σ, then
P(σ) := (σ → HALT_{(Σ̄_σ,1,Σ̄_σ)})_{({σ},0,{σ})}.
else if it does not have any complementary event, then
P(σ) := (σ → HALT_{({},1,{})})_{({σ},0,{σ})}.
A diagnosis session stops successfully after identifying the primary fault processes P_σ.
Blocking of complementary events indicates that in the fault diagnosis session, which proceeds
after taking a snapshot of the physical conditions indicated by different sensors, we
do not get contradictory information about the occurrence of a fault. Note that here we
do not consider the temporal behaviour of processes, where some physical faulty condition
and its complementary condition can be observed at different times.
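The consistency property enforced by blocking can be sketched directly (illustrative fault names; the complement table is a small hypothetical fragment, not the full fault list):

```python
# Sketch: a diagnosis trace is consistent only if it never contains both a
# fault event and its complement (e.g. a HIGH and a LOW reading of the same
# variable). This is the property the HALT markings enforce by blocking.

COMPLEMENT = {"S1FH": "S1FL", "S1FL": "S1FH", "V1O": "V1C", "V1C": "V1O"}

def consistent(trace):
    seen = set()
    for ev in trace:
        if COMPLEMENT.get(ev) in seen:
            return False          # contradictory observation: blocked
        seen.add(ev)
    return True

assert consistent(["S1FL", "V1O"]) is True
assert consistent(["S1FL", "V1O", "S1FH"]) is False
```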
For each secondary node having the node structure shown in Figure 6.3(b), we define the
process P_σ = P(σ | (σ11 ∧ ··· ∧ σ1n_1) ∨ ··· ∨ (σm1 ∧ ··· ∧ σmn_m)) as follows:
P_σ := (σ → (∆ → (⊓^m_{k=1} ((‖^{n_k}_{j=1} P_σkj) ⊓ HALT_{Λ_k})))_{Λ_{σ}})_{Λ_{σ,∆}}^[[−{σ̄}+{σ}]]
where
Λ_k := {(σ̄, 1, σ̄) | σ ∈ {σk1, ..., σkn_k}, σ̄ ≠ ∅}.
Λ_{σ} := {({σ}, 0, {σ})}.
Λ_{σ,∆} := {({σ, ∆}, 0, {σ})}.
If σ has a complementary set of events, namely Σ̄_σ, then the GCO [[−{σ̄}+{σ}]] will be replaced
by the GCO [[−Σ̄_σ+{σ}]]. If a secondary fault event (secondary node) does not have a
complementary fault event, one should naturally do away with the GCO in the corresponding
process description.
In the above, arrival at a secondary node σ (start of P_σ) signifies a query. A positive
observation is denoted by the occurrence of the event σ, after which the process P_σ waits
for the next-level query of its children. This takes place upon the occurrence of the
synchronisation event ∆, which denotes that the queries at all the secondary nodes of that
level of the tree have been completed. Assuming that there are m possible (OR) groups of AND
children, one or more of these groups may be the causes. If the k-th group (denoted by
‖^{n_k}_{j=1} P_σkj) is not a cause, then HALT_{Λ_k} is substituted. Here Λ_k represents the fact that the
k-th group of AND children is not a cause for σ, because some non-empty subset of the
fault set {σk1, ..., σkn_k} has failed to take place. The LNO ⊓ captures the fact that among
the possible m causes (OR groups), at least one group must have actually taken place.
The GCO [[−{σ̄}+{σ}]] reflects the fact that in the search subsequent to the occurrence of
σ, not only will σ̄ not be encountered in P_σ, but it will also be blocked in the environment.
The same is true for multiple complementary events.
Before describing the fault diagnosis mechanism as a process, we describe the list of
faults (events/nodes).
STREAM FAULTS: S-i-j-k where i ∈ {1, ..., 7}, j ∈ {F (Flow), T (Temperature)}, and
k ∈ {H (high), L (low)}. S-i-j-H and S-i-j-L are complementary events.
REACTOR FAULTS: R-i-j where i ∈ {L (Level), T (Temperature), P (Pressure)}, and
j ∈ {H (high), L (low)}. R-i-H and R-i-L are complementary events.
TANK FAULTS: T-i-j-k where i ∈ {1, 2}, j ∈ {L (Level), T (Temperature)}, and k ∈
{H (high), L (low)}. T-i-j-H and T-i-j-L are complementary events.
VALVE FAULTS: V-i-j where i ∈ {1, ..., 4}, j ∈ {O (Open), C (Closed)}. V-i-O and
V-i-C are complementary events.
PIPE FAULTS: P-i-j where i ∈ {1, ..., 7}, j ∈ {L (Leak), B (Block)}. There is, however,
no complementary event pair here.
PUMP FAULTS: PuH (pump high), PuL (pump low) and PuN (pump normal). Out of
these three events, any two form a complementary event set for the third.
Thus if σ is PuH then Σ̄_σ consists of {PuN, PuL}. Also, in the primary fault structure
P(PuH), the marking of HALT will be ({PuN, PuL}, 1, {PuN, PuL}), and similarly for
P(PuL) and P(PuN).
[Figure 6.3 appears here. Part (a): schematic of the CSTR with Tanks 1 and 2, Pipes/Streams 1 to 7, Valves 1 to 4, the Stirrer and the Pump. Part (b): node structure of a general fault tree, with reason of fault σ = (σ11 ∧ ··· ∧ σ1n_1) ∨ ··· ∨ (σm1 ∧ ··· ∧ σmn_m). Part (c): part of the fault tree for the CSTR, rooted at S6FL, showing primary and secondary reasons, cause-effect relations and blocking relations.]
Figure 6.3: (a) Schematic Diagram of a CSTR, (b) Node structure of a general fault tree,
(c) Part of a fault tree for the CSTR.
Now we proceed to give the NFRP description of the fault tree. Since the structure
of the processes modelling primary and secondary faults has been explained in detail before,
these are not elaborated here. The reader will note that most of the nodes (processes)
are of pure OR type, except those corresponding to the processes P_S6FL, P_S6FH and P_S6TH.
All the STREAM and REACTOR faults are secondary faults, and all the VALVE, TANK,
PIPE and PUMP faults are primary faults. It should be noted that the choice of fault
events is only intended to capture the logical structure in a realistic case. Obviously, many
other faults could be included if an application demands.
SECONDARY FAULTS: P_σ = P(σ | (σ11 ∧ ··· ∧ σ1n_1) ∨ ··· ∨ (σm1 ∧ ··· ∧ σmn_m))
a) STREAM FAULTS:
FLOW TYPE FAULTS:
P_S1FL = P(S1FL | (T1LL), (P1L), (P1B), (V1C)).
P_S1FH = P(S1FH | (T1LH), (V1O)).
P_S2FL = P(S2FL | (T2LL), (P2L), (P2B), (V2C)).
P_S2FH = P(S2FH | (T2LH), (V2O)).
P_S3FL = P(S3FL | (S1FL), (P3L), (P3B)).
P_S3FH = P(S3FH | (S1FH)).
P_S4FL = P(S4FL | (S2FL), (P4L), (P4B)).
P_S4FH = P(S4FH | (S2FH)).
P_S5FL = P(S5FL | (RLL), (PuL), (P5L), (P5B)).
P_S5FH = P(S5FH | (RLH), (PuH)).
P_S6FL = P(S6FL | (PuL), (PuN, S5FL), (P6L), (P6B), (PuN, V4C)).
P_S6FH = P(S6FH | (PuN, S5FH), (PuH), (PuN, V4O)).
P_S7FL = P(S7FL | (S6FL), (V4C), (P7L), (P7B)).
P_S7FH = P(S7FH | (S6FH)).
TEMPERATURE TYPE FAULTS:
P_S1TL = P(S1TL | (T1TL)).
P_S1TH = P(S1TH | (T1TH)).
P_S2TL = P(S2TL | (T2TL)).
P_S2TH = P(S2TH | (T2TH)).
P_S3TL = P(S3TL | (S1TL)).
P_S3TH = P(S3TH | (S1TH)).
P_S4TL = P(S4TL | (S2TL)).
P_S4TH = P(S4TH | (S2TH)).
b) REACTOR FAULTS:
LEVEL TYPE FAULTS:
P_RLL = P(RLL | (S3FL), (S4FL), (S5FH)).
P_RLH = P(RLH | (S3FH), (S4FH), (S5FL)).
TEMPERATURE TYPE FAULTS:
P_RTL = P(RTL | (S3TL), (S4TL)).
P_RTH = P(RTH | (S3TH), (S4TH), (RPH)).
PRESSURE TYPE FAULTS:
P_RPL = P(RPL | (V3O)).
P_RPH = P(RPH | (V3C)).
PRIMARY FAULTS: P_σ = P(σ)
c) TANK FAULTS: {T1TL, T1TH, T1LL, T1LH, T2TL, T2TH, T2LL, T2LH}.
d) PIPE FAULTS: {PiL, PiB such that 1 ≤ i ≤ 7}.
e) VALVE FAULTS: {V1O, V1C, V2O, V2C, V3O, V3C, V4O, V4C}.
f) PUMP FAULTS: {PuH, PuL, PuN}.
One can now simulate the diagnosis process with any consistent initial abnormalities.
For example, if the starting symptoms observed in the CSTR are High Reactor Temperature
and Low Flow in Stream 7, then the fault diagnostic session will be modelled by the NFRP
Y = P_RTH ‖ P_S7FL.
By the natural assumption that the top-level faults are non-conflicting, we ensure that the
overall process Y will not get blocked by attempting to generate both σ and σ̄ in its trace
for some fault event σ.
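The composed session can be sketched as follows (a rough illustration: the two query traces are hypothetical descents of the trees rooted at RTH and S7FL, and the only mechanism shown is that shared fault events occur once, synchronously, in the composition):

```python
# Sketch: the parallel composition of two fault-tree processes synchronises
# on shared fault events, so a symptom common to both trees is queried only
# once in the overall session.

def parallel_traces(trace1, trace2):
    """Merge two query traces, synchronising on their shared events."""
    shared = set(trace1) & set(trace2)
    out, seen = [], set()
    for ev in trace1 + trace2:
        if ev in shared and ev in seen:
            continue              # already occurred synchronously
        out.append(ev)
        seen.add(ev)
    return out

# PuH is assumed here to be reachable from both roots, so it appears once
t = parallel_traces(["RTH", "S5FH", "PuH"], ["S7FL", "PuH"])
assert t == ["RTH", "S5FH", "PuH", "S7FL"]
```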
From the above, the following points regarding the role of NFRP are important to note. In
the design of a fault diagnostic system, such as a diagnostic expert, the usual procedure is to
capture the cause-effect relationships between the faults and their symptoms in some form,
such as a fault tree [71] or a signed directed graph [72]. Many such trees may arise due to the
various possible faults. In the next step, all interpretations of all collections of such graphs,
that may arise in a system due to possibly multiple faults, are computed by a suitable
graph traversal mechanism. Note that this requires concurrent traversal of graphs, with
constraints that arise due to interaction among the faults. Based on these interpretations,
a rule base is compiled for subsequent on-line inferencing by the diagnostic expert. It
is not easy to construct a systematic description of this procedure in terms of graphical
methods such as fault trees, mainly because such paradigms do not support structures
for systematic composition of individual graphs. Constructing product graphs has the
problem of combinatorial explosion. Moreover, the nondeterminism is not explicit in these
models. The NFRP, on the other hand, provides a modular, unified and compact description
for computing the interpretation. It explicitly supports description of event sequences,
concurrency with synchronization, and nondeterminism. It is therefore an important tool
in describing the concurrent execution of individual fault diagnosis processes at a higher
level. In an implementation, such a description may be used to govern the execution of the
elementary fault diagnosis processes, or equivalently to govern the traversal of the individual
fault trees to generate the overall sequence of fault symptoms, leading to the primary reasons
(faults).
6.3 Case Study: A Robot Controller
In this case study, we develop the EFRP model of a robot controller which carries out
some repetitive work on a job depending upon the program selected. The basic dynamics
of the robot controller are taken from an example in [73].
Briefly, the actions are as follows. After power on and system initialisation the robot
enters the manual mode and its state is inactive. In this mode the program to be run
can be selected and the valid command is start. In response to start the state becomes
establishing connection where the controller sends repeated job requests to a scheduler.
After a fixed number of requests, if the job is not granted, the robot gives a connection
failure signal and goes back to manual mode. If granted, the robot enters the working
state and loads the selected program in the buffer. Each program is assumed to be a
sequence of blocks of instructions for the axis actuator of the robot. One block of
instructions is sent at a time. The program is repetitive, i.e., at the end of the sequence, the first
block is sent again. Before sending each block, data is read from a collection of sensors. If
some sensor shows an abnormal value, an alarm is signalled and the program halts. Otherwise a
block is sent and acknowledgement is received. After each acknowledgement the command
register is checked. In absence of new commands, the controller proceeds to send the next
block. In the working mode, the valid commands are halt, end, or end on (external) request
(e_o_r). In case of halt the controller enters the suspended mode and waits for new
valid commands, namely resume or end. In response to resume the robot enters the
working state and resumes sending instruction blocks to the actuator. In response to end
the controller enters the terminating state, loads a terminating program in the buffer
and sends it to the actuator. At the end of this program which is non-repetitive, the robot
controller goes back to manual mode.
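The mode transitions just described can be collected into a small table-driven sketch. The transition table is our distillation of the prose; event labels such as granted and term_done are assumptions, not names taken from the EFRP definition.

```python
# Illustrative mode machine for the robot controller narrative above.
# Keys are (mode, event) pairs; values are the resulting modes.

TRANSITIONS = {
    ("manual", "start"):                      "establishing_connection",
    ("establishing_connection", "granted"):   "working",
    ("establishing_connection", "conn_fail"): "manual",   # request limit hit
    ("working", "halt"):                      "suspended",
    ("working", "end"):                       "terminating",
    ("working", "e_o_r"):                     "terminating",
    ("suspended", "resume"):                  "working",
    ("suspended", "end"):                     "terminating",
    ("terminating", "term_done"):             "manual",
}

def run(mode, events):
    """Drive the mode machine through a sequence of events."""
    for ev in events:
        try:
            mode = TRANSITIONS[(mode, ev)]
        except KeyError:
            raise ValueError(f"event {ev!r} invalid in mode {mode!r}")
    return mode

# After a granted connection, a halt, a resume and an end command,
# the terminating program runs and the robot returns to manual mode.
final = run("manual", ["start", "granted", "halt", "resume",
                       "end", "term_done"])
```

Invalid commands in a given mode simply raise an error here, mirroring the controller's notion of "valid commands" per mode.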
For building the EFRP model of the above sequence of actions, we have used the following variables along with their ranges of values.
SV (State variable) := {I (inactive), M (manual), EC (establishing connection), W (working), S (suspended), T (terminating)}.
Count (Counter for the number of requests) := {1, …, NR}.
Buffer (Holds the sequence of executable blocks of the selected program).
C (Command register) := {S (start), H (halt), R (resume), E (end)}.
CF (Command flag) := {Old, New}.
PN (Program number) := {1, …, p}.
PL (Program length) := {N1, …, Np, NT}.
Connect (Connect flag showing the response to the job request signal) := {T, F}.
k (Sensor counter) := {1, …, NS}.
SVD (Dummy location for storing a sensor datum).
CVD (Dummy location for storing the critical value of some physical variable measured by a sensor).
j (Counter for the number of blocks being sent).
SR (Sensor register, an array of length NS).
DSP (Display register, an array of length NS).
SF (Sensor flag indicating whether or not all sensors show normal values) := {1, 0}.
DR (External disconnect request flag) := {0, 1}.
lPower, lM, lEC, lCF, lW, lS, lT, lI: light indicators (l.i.). Each can be either 1 (on) or 0 (off).
The different constants used are p (number of programs), Ni (length of the i-th program, 1 ≤ i ≤ p), NS (number of sensors), NR (maximum number of requests), NT (length of the terminating program), and CR (an array storing the critical values of the different parameters).
The set of event (σ) names, their interpretations and transformations (h_σ) are given below.
on [lPower := 1] Power on.
req_end → SKIP{} [h_req_end] | ack → SKIP{} [h_ack] | halt → SKIP{} [h_halt]).
The GCO[[+{halt, ack}]] captures that these events are synchronised between Pexe and
Ppanel. Ppanel captures the fact that the command buttons are active as long as a remote end
request, an acknowledgement from the axis actuator or a halt has not taken place. Once these events
take place, the panel is reactivated at the proper time.
Pexe := prog → Presume [h_prog].
Presume := Psensor ; Paxis.
Psensor := int_s → Psread [h_int_s].
Psread := read_s → Pscheck [h_read_s].
Pscheck := (disp → ((pollend → SKIP{} [h_pollend]) ⟨k > NS⟩ Psread)[h_disp])
⟨SVD < CVD⟩ (disp → (alarm → SKIP{} [h_alarm ∘ h_disp])).
In Pexe the selected program is loaded and interrupt signal is sent to the sensors. The
different sensors, N S in number, are read one by one. The data obtained by each sensor
is compared with the critical value of the corresponding physical parameter and the data
is displayed. If the sensed data shows a value in normal range, next sensor is read and if
the reading of all the sensors is complete, Psensor ends with a pollend signal. However if
an abnormal value is detected, instead of reading the rest of the sensors, an alarm signal
is given out.
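The polling behaviour of Psensor described above can be sketched procedurally. The comparison direction (a reading above the critical value counts as abnormal) is an assumption of this sketch, as is the function name:

```python
# Illustrative sketch of the Psensor polling loop: NS sensors are read
# one by one; each reading is displayed (DSP) and compared against the
# critical value CR[k]. An abnormal value raises an alarm instead of
# reading the remaining sensors; otherwise polling ends with pollend.

def poll_sensors(read_sensor, CR, NS):
    """Return ('pollend', displays) or ('alarm', displays)."""
    displays = []
    for k in range(NS):
        svd = read_sensor(k)       # SVD: dummy store for sensor data
        displays.append(svd)       # DSP[k] := SVD
        if svd > CR[k]:            # abnormal value detected
            return "alarm", displays
    return "pollend", displays     # all NS sensors showed normal values

readings = [10, 20, 99]
signal, dsp = poll_sensors(lambda k: readings[k], CR=[50, 50, 50], NS=3)
```

Here the third sensor exceeds its critical value, so the loop stops early with an alarm after displaying the three readings taken so far.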
Paxis := (int_A → int_ack → send → ack → Pnb [h_ack]) ⟨SF = 1⟩ (halt → Phalt [h_halt]).
In case of alarm, the robot halts. Otherwise communication is established with the
axis actuator via proper interrupts, j_th block of program is sent to the actuator, and an
acknowledgement is received upon proper completion of the instructions given in the block.
Note that, when Paxis is taking place, Ppanel has terminated (due to synchronous events
halt or ack) making the command panel temporarily inactive.
Pnb := Pe_o_r ⟨DR = 1⟩ ((Presume ∥ Ppanel) ⟨CF = Old⟩ Decide).
Before sending the next block of motion instructions, the disconnect request flag is
checked, and in case of such a remote end request the robot ends its operation. In absence
of such requests, the command flag is checked. If no new command button is pressed, the
robot prepares to send the next block and the command panel is reactivated. Note that
the programs are repetitive, i.e., if all the blocks of the program are sent, robot sends the
first block once again. If a new command button is pressed, the controller proceeds to check the validity
of the new command.
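The order of checks in Pnb (disconnect flag first, then command flag, then the wrap-around of the block counter) can be summarised as a small decision function. The return labels are illustrative, not thesis notation:

```python
# Sketch of the Pnb decision sequence. DR and CF encodings (0/1,
# "Old"/"New") follow the variable list above; j is the block counter
# and PL the program length. Programs are repetitive, so j wraps to 1.

def next_block(j, PL, DR, CF):
    """Return ('end', j), ('decide', j) or ('send', next j)."""
    if DR == 1:                 # remote end request: terminate operation
        return "end", j
    if CF == "New":             # new command pressed: check its validity
        return "decide", j
    return "send", 1 if j >= PL else j + 1   # advance, wrapping to 1

action, j = next_block(j=5, PL=5, DR=0, CF="Old")
```

With the last block just sent (j = PL) and nothing new pending, the controller prepares to send the first block again.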
Phalt := set_t_2 → (out_t_2 → Phalt | p_s → Decide[h_p_s] | p_r → Decide[h_p_r] | p_h → Decide[h_p_h] | p_e → Decide[h_p_e]).
In the halt process, a timer is set and the controller waits for some command button to
be pressed.
Pe_o_r := (e_o_r → Pend [h_e_o_r]).
Pend := prog_T → Pintr [h_prog_T].
Pintr := (int_A → int_ack → send → ack_T → (Manual ⟨j > PL⟩ Pintr)[h_ack_T]).
In the terminating procedure, the special terminating program is brought to the buffer
and sent to the actuator blockwise. At the end of this procedure the robot goes back to
the manual mode.
In the above example we have seen how the different features of EFRP, including variables
and logical decisions, can model the logical structure of the robot controller compactly.
The state continuity constraints of the different EDCO operators are satisfied because of
the use of the AO operators. The constraints of EPCO and ESCO are also satisfied quite
naturally. However, the silent transition has not been used in this model, though its use has
been shown in the previous chapter.
Finally, in the next chapter, we present an assessment about the current work.
Chapter 7
Conclusions
The DFRP framework based on process algebra is a relatively new candidate for modelling
DES. The work presented in this thesis is only a first step towards establishing this as
a powerful but tractable modelling framework for DES. We conclude this work with an
assessment of the framework and some future research problems that will have to be tackled
to enhance its prospect in practical applications.
State and Graph based formalisms such as Petri Nets (PN) and State Charts (SC) are
established frameworks which are popular for their inherent simplicity and graphical
interfaces. Languages like STATEMATE [15], based on SC, and Grafcet [74], based
on PN, have been developed and are used in industry. But these models become
unwieldy when the number of states of the system increases. The DFRP model with
its operators for process composition and recursive description is suitable for compact
descriptions of complex systems. Therefore, process algebraic frameworks should be
thought of as complementary models to the state based frameworks, instead of as
alternatives. But, in the absence of graphical interfaces, these are not very user-friendly.
Thus, user-friendly modelling and analysis tools for DES should be developed, which
will support multiple formalisms so that smaller systems can be modelled using graphical or state-based models for easy understandability. Later, these models can be
treated as processes and can be combined using process operators to build models of
complex systems.
Modularity and hierarchy are important requirements of any modelling framework
for complex systems. The original DFRP framework does not support these features
explicitly and is as flat as FSM and PN. Efforts have been made to include hierarchical features in PN [75] and FSM [32]. In case of the DFRP framework, a concept
of module is defined in [21]. Using this concept, in this work, a bounded hierarchical
structure has been proposed for modelling of DES.
However, as mentioned earlier, there are important issues that have not been discussed in this thesis but should be addressed regarding hierarchical characterisation
of DFRPs. Firstly, it will be necessary to design algorithms to determine whether
bounded and hierarchical characterisation is possible for a given DFRP. Secondly, a
hierarchical DFRP, that is bounded, can always be implemented using a finite number of states and different properties like reachability, deadlock, liveness, language
equivalence etc., can be found out by adapting standard FSM-based algorithms to
this finite state implementation. However, the associated state space tends to assume
a large size due to the product spaces that arise in the presence of concurrent subsystems,
and FSM-based algorithms become inefficient and computationally expensive
because of state explosion in the implementation. For hierarchical DFRPs, therefore,
properties, which are amenable to hierarchical analysis, should be identified. Efficient
algorithms, which make use of the underlying structure and may avoid computation
over product spaces, can then be designed to analyse these properties. This will result
in a significant reduction of the size of the search space and associated computational
advantages over flat DFRP.
The problem of supervision is well researched in the FSM model. It is of immediate interest to define this problem for unbounded and bounded subclasses of the
DFRP model and solve them while taking advantage of the high-level, recursive
and hierarchical structure of DFRP. Once again, it should be noted that straightforward application of FSM based algorithms for controller synthesis will not be
computationally attractive for bounded DFRPs and will fail in case of an unbounded
DFRP. In this context, a compositional way of controller specification may be useful.
For individual small bounded modules, one may use FSM-based controller synthesis
approach. Later, these individual controllers can possibly be connected hierarchically using DFRP operators to meet the overall control requirement. For unbounded
DFRPs one perhaps cannot hope for a generic procedure for controller synthesis. But
specific subclasses of unbounded DFRP and associated control requirements may be
identified for which controller synthesis is possible in a finite number of steps.
A contribution of this work is the nondeterministic extension of the DFRP model
to include the effects of event concealment under the setting of variable alphabet.
In contrast to the refusal-based external viewpoint of NCSP, a possible-future
based internal viewpoint of nondeterminism is used in this work. Also, for the sake of
simplicity, divergence (the result of infinite hiding) has not been treated in this work.
The following research problems may be taken up to consolidate this work:
Firstly, the nondeterministic extension is too general and it is necessary to identify bounded subclasses (in the sense of DFRP) of the nondeterministic model for
further analysis. Secondly, concealment of events in any DFRP or NFRP leads to
new nondeterministic processes. The central problem of observation is to identify
whether infinite hiding is taking place and if not, how to construct a finitely recursive characterisation of the resultant nondeterministic process. It will be useful
to identify suitable subclasses of both DFRP and NFRP where this problem will be
solvable. Finally, a suitable formulation and solution to the problem of supervision
under partial observation for different subclasses of this model are other important
tasks.
Prominent DES researchers are concerned about the fact that at such an early stage
of research in this area, two distinct schools of thought are emerging, without much
mutual interaction. On one hand there is the logical model school, to which this
thesis belongs, which is engrossed in modelling and analysis of logical structure of
the event dynamics and associated problems. On the other hand the performance
model school is interested in numerical performance related analysis in a stochastic
setting. Queueing methods and Markov Chains are major methods in this analysis
and in most cases, analytical solutions are obtained only after assuming quite simple
logical structure of underlying event dynamics. The popularity enjoyed by Petri Nets
is to an extent derived from their ability to deal with structural issues [12] as well as
numerical performance related questions via generalised Stochastic Petri Nets [10].
In this work, a step towards combining these divergent issues in the DFRP formalism
has been taken, via the introduction of the EFRP model. Besides structural issues like
liveness, deadlock, safety, reachability, etc., quantitative issues like timing constraints
such as deadlines, time bounds and deterministic performance measures can also be
modelled in this framework by virtue of the presence of variables. The following
problems should be investigated in order to make this extension more useful.
Firstly, for general EFRP, it is not known how the different state-continuity constraints can be verified in a recursive way. To solve these problems, as well as to
attempt the different control and observation problems, bounded and tractable subclasses, in the sense of those in Chapter 3, should be identified. Secondly, EFRP,
with its state-based features may be quite useful for formal specification. It will
be worthwhile to investigate whether formal specification languages like LOTOS [25]
and associated verification tools can be developed over the EFRP model for exploring
properties related with supervision, logical correctness etc. Thirdly, in EFRP, timing
features are modelled indirectly through TTM or logical branching operators. It will
be useful to compare this model with timed CSP [76] or with the real-time semantics of
DFRP provided by Cieslak [58] and to integrate the different features if possible.
The DES modelling tools allow one to model a physical system at a certain level of abstraction such that the underlying continuous variable dynamics is often obliterated.
But, for successful design and analysis of systems, it sometimes becomes necessary
to integrate continuous and discrete dynamics in a unified framework. Extension of
the process algebraic framework to handle such hybrid features will definitely be a
major challenge to future researchers.
Thus, we come to the end of this thesis. Looking back, the author wonders whether
this thesis has been able to bring some order out of the chaos that exists in this field.
This work would be considered to have had some success only if it can create some interest
in the process algebraic approach to DES modelling and motivate researchers to take on
the adventures of real-world applications.
Appendix A
Proofs of Some Results
Proof of theorem 2.2.2:
(⇐) Let {P_1, …, P_n} be a family of MRP w.r.t. ⟨Φ^n(G, Σ)⟩. By the one step expansion formula, each process P_i can be expressed as
P_i = (σ_{i1} → P_i/⟨σ_{i1}⟩ | ⋯ | σ_{ik_i} → P_i/⟨σ_{ik_i}⟩)^{P_i(⟨⟩)}. By definition 2.2.10 each process P_i/⟨σ_{ij}⟩ can be expressed as ⟨f_{ij}⟩(P) for some ⟨f_{ij}⟩ ∈ ⟨Φ^n(G, Σ)⟩.
Thus P_i = (σ_{i1} → ⟨f_{i1}⟩(P) | ⋯ | σ_{ik_i} → ⟨f_{ik_i}⟩(P))^{m_i}, where m_i = P_i(⟨⟩). Now we construct the collection of recursive equations X = F(X), X ↓ 0 := P ↓ 0, where each component F_i of F is obtained from the corresponding one step post-process expression of P_i given above, i.e.,
X_i = F_i(X) := (σ_{i1} → ⟨f_{i1}⟩(X) | ⋯ | σ_{ik_i} → ⟨f_{ik_i}⟩(X))^{m_i}.
It is obvious that each F_i is guarded, continuous and ndes. To see that X ↓ 0 = P ↓ 0 is consistent, all we have to show is that for any i, P_i ↓ 0 = (F_i(P ↓ 0)) ↓ 0. If assumption (b) of the theorem is satisfied, then by the constancy of the ↓0 function, the above consistency requirement is trivially satisfied. On the other hand, suppose (a) is satisfied, i.e., ↓0 is not necessarily constant but ⟨G⟩ is an MRS family. Also, by fact 2.2.2, each function in ⟨Φ^n(G, Σ)⟩ is spontaneous. Then note that
P_i ↓ 0 = (σ_{i1} → ⟨f_{i1}⟩(P) | ⋯ | σ_{ik_i} → ⟨f_{ik_i}⟩(P))^{m_i} ↓ 0
= (σ_{i1} → (⟨f_{i1}⟩(P)) ↓ 0 | ⋯ | σ_{ik_i} → (⟨f_{ik_i}⟩(P)) ↓ 0)^{m_i} ↓ 0 (as F_i is ndes)
= (σ_{i1} → (⟨f_{i1}⟩(P ↓ 0)) ↓ 0 | ⋯ | σ_{ik_i} → (⟨f_{ik_i}⟩(P ↓ 0)) ↓ 0)^{m_i} ↓ 0 (as each ⟨f_{ij}⟩ is ndes)
= (σ_{i1} → ⟨f_{i1}⟩(P ↓ 0) | ⋯ | σ_{ik_i} → ⟨f_{ik_i}⟩(P ↓ 0))^{m_i} ↓ 0 (as each ⟨f_{ij}⟩ is spontaneous)
= (F_i(P ↓ 0)) ↓ 0.
Clearly the conditions of theorem 2.2.1 are satisfied by the above collection of recursive equations, and (P_1, …, P_n) is the unique solution.
(⇒) Conversely, suppose P is the unique solution of equations (2.3) and (2.4). For any arbitrary P_i in P, one can show MR-ness using induction on the length of s ∈ tr P_i.
The basis case is trivially true, as P_i/⟨⟩ = ⟨Proj_i⟩(P). Assume that, for some s ∈ tr P_i, P_i/s = ⟨f⟩(P) for some ⟨f⟩ ∈ ⟨Φ^n(G, Σ)⟩. Then P_i/(s^⟨σ⟩) = (P_i/s)/⟨σ⟩ = (⟨f⟩(P))/⟨σ⟩. We claim that there exists ⟨f′⟩ ∈ ⟨Φ^n(G, Σ)⟩ such that (⟨f⟩(P))/⟨σ⟩ = ⟨f′⟩(P). This proves our result for P_i/(s^⟨σ⟩) and the rest follows by induction.
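The induction step just described can be restated in display form (the symbols lost in typesetting are reconstructed here from the surrounding definitions):

```latex
\[
P_i/(s^\frown\langle\sigma\rangle)
  \;=\; (P_i/s)/\langle\sigma\rangle
  \;=\; (\langle f\rangle(\mathbf{P}))/\langle\sigma\rangle
  \;=\; \langle f'\rangle(\mathbf{P})
\quad\text{for some } \langle f'\rangle \in \langle\Phi^n(G,\Sigma)\rangle .
\]
```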
All we need now is to prove the claim made above, and for this we again use induction on the number of steps in definition 2.2.9 needed to construct ⟨f⟩ ∈ ⟨Φ^n(G, Σ)⟩.
If f ∈ C, i.e., ⟨f⟩(P) = Q ↓ 0 for some Q, then tr ⟨f⟩(P) ≠ {⟨⟩} implies that for any ⟨σ⟩ ∈ tr ⟨f⟩(P), (⟨f⟩(P))/⟨σ⟩ = (Q ↓ 0)/⟨σ⟩. Now, using condition (3) of definition 2.1.6 we have the following. If Q ↓ 0 = (σ_1 → (Q ↓ 0)/⟨σ_1⟩ | ⋯ | σ_k → (Q ↓ 0)/⟨σ_k⟩)^m with m = (Q ↓ 0)(⟨⟩), then
(Q ↓ 0) ↓ 1 = (σ_1 → ((Q ↓ 0)/⟨σ_1⟩) ↓ 0 | ⋯ | σ_k → ((Q ↓ 0)/⟨σ_k⟩) ↓ 0)^m. However, by (1) of definition 2.1.6, (Q ↓ 0) ↓ 1 = Q ↓ 0. Comparing, one can conclude that for any ⟨σ⟩ ∈ tr Q ↓ 0, (Q ↓ 0)/⟨σ⟩ = ((Q ↓ 0)/⟨σ⟩) ↓ 0. Since the process on the right hand side belongs to the image of ↓0, one can find c′ ∈ C such that c′ = r(((Q ↓ 0)/⟨σ⟩) ↓ 0). Thus (⟨f⟩(P))/⟨σ⟩ = ⟨f′⟩(P) for some f′ = c′ ∈ C ⊆ Φ^n(G, Σ).
If f ∈ Proj(n), i.e., ⟨f⟩(P) = P_i for some i, then by equation 2.4, ⟨f⟩(P)/⟨σ⟩ = P_i/⟨σ⟩ = ⟨f_{ij}⟩(P) for some ⟨f_{ij}⟩ ∈ ⟨Φ^n(G, Σ)⟩, as σ = σ_{ij} for some j.
Finally, suppose the claim is true for some ⟨f_1⟩, …, ⟨f_k⟩ in ⟨Φ^n(G, Σ)⟩ and let f = g ∘ (f_1, …, f_k) for some k-ary function ⟨g⟩ ∈ ⟨G⟩. Now ⟨f⟩(P)/⟨σ⟩ = (⟨g⟩(⟨f_1⟩(P), …, ⟨f_k⟩(P)))/⟨σ⟩. The MR-ness of ⟨G⟩ guarantees the existence of ⟨g′⟩ ∈ ⟨G⟩ such that
⟨g⟩(⟨f_1⟩(P), …, ⟨f_k⟩(P))/⟨σ⟩ = ⟨g′⟩(P′_1, …, P′_{m′}), where for each j, 1 ≤ j ≤ m′, there is an i, 1 ≤ i ≤ k, with P′_j = ⟨f_i⟩(P) or P′_j = ⟨f_i⟩(P)/⟨σ⟩. By the induction hypothesis any ⟨f_i⟩(P)/⟨σ⟩ can be expressed as ⟨f′_i⟩(P) for some ⟨f′_i⟩ ∈ ⟨Φ^n(G, Σ)⟩. For each j, 1 ≤ j ≤ m′, let h_j be the element of Φ^n(G, Σ) such that h_j = f_i if P′_j = ⟨f_i⟩(P), or h_j = f′_i if P′_j = ⟨f_i⟩(P)/⟨σ⟩. Next we construct f′ ∈ Φ^n(G, Σ) such that ⟨f′⟩(P) = ⟨g′⟩(⟨h_1⟩(P), …, ⟨h_{m′}⟩(P)), using the following recursive syntactic transformation G.
f′ = G(g′, h_1, …, h_{m′}) :=
g′(h_1, …, h_{m′}) if g′ ∈ G,
ḡ(G(ḡ_1, h_1, …, h_{m̄_1}), …, G(ḡ_k, h_{m′−m̄_k+1}, …, h_{m′})) otherwise,
where g′ = ḡ ∘ (ḡ_1, …, ḡ_k), ḡ ∈ G, each ⟨ḡ_j⟩ has an arity m̄_j, and m′ = m̄_1 + ⋯ + m̄_k. Note that if, for some j, ḡ_j = Id, then m̄_j = 1 and G(ḡ_j, h_{m_r}) := h_{m_r}, where m_r = m̄_1 + ⋯ + m̄_{j−1} + 1. Clearly f′ ∈ Φ^n(G, Σ) and (⟨f⟩(P))/⟨σ⟩ = ⟨f′⟩(P). This completes our proof.
□
Step: We have to prove that for any nonempty B, C ⊆ Σ_D, each of P(h^{[[B]]}), P(h^{[[+C]]}), P(h^{[B]}) and P(h^{[+C]}) is true. Also, if h is a parallel composition ∥_{i=1}^{k} h_i, the second part of the induction hypothesis will be required.
Case 1: Let v = h^{[[B]]}. To prove the veracity of P(v), we have to compute CF1(v) first. Consider the following possibilities on CF1(h).
Case 1.1: CF1(h) = STOP_A. Then CF1(v) = STOP_{A∪B}. P(v) is trivially true.
Case 1.2: CF1(h) = P_i. Then CF1(v) = P_i^{[[B]]}, which satisfies (1)-(8). Also, the only subexpression of P_i^{[[B]]}, different from the original, is P_i. Then CF1(P_i^{[[B′]]}) = P_i^{[[B′]]}, which also trivially satisfies (1)-(8). Thus both (i) and (ii) of P(v) are satisfied in this case.
Case 1.3: Let CF1(h) = P_i^{[[B_1]][[+C_1]][B_2][+C_2]}, where possibly some operator symbol is absent. Then CF1(v) = P_i^{[[B∪B_1]][[+(C_1−B)]][(B_2−B)][+(C_2−B)]}, which is clearly an element of U_1. Since by the induction hypothesis P(h) is true, claims (2)-(6) are satisfied by every subexpression of P_i^{[[B_1]][[+C_1]][B_2][+C_2]}. It is then easy to see that CF1(v) also satisfies (2)-(6). Claim (7) follows from the fact that, as claims (1)-(6) are already satisfied by CF1(v), a further application of CF1(·) on it does not make any more structural change. Finally, as by the induction hypothesis ⟨h⟩(X) = ⟨CF1(h)⟩(X) = ⟨P_i^{[[B∪B_1]][[+(C_1−B)]][(B_2−B)][+(C_2−B)]}⟩(X) = ⟨CF1(v)⟩(X), (8) is satisfied. Together, (i) of P(v) is satisfied.
To prove (ii), consider the different subexpressions u′ of CF1(v), different from itself. They are P_i, P_i^{[[B∪B_1]]}, P_i^{[[B∪B_1]][[+(C_1−B)]]} and P_i^{[[B∪B_1]][[+(C_1−B)]][(B_2−B)]} respectively. The corresponding CF1(u′^{[[B″]]}) expressions will be P_i^{[[B″]]}, P_i^{[[B∪B_1∪B″]]}, P_i^{[[B∪B_1∪B″]][[+(C_1−(B∪B″))]]} and P_i^{[[B∪B_1∪B″]][[+(C_1−(B∪B″))]][(B_2−(B∪B″))]} respectively. Each of them satisfies (1)-(8). Thus (ii) of P(v) is also satisfied.
Case 1.4: Let CF1(h) = (∥_{i=1}^{m} v_i)^{[[+C_1]][B_2][+C_2]}. Then CF1(v) = (∥_{i=1}^{m} CF1(v_i^{[[B]]}))^{[[+(C_1−B)]][(B_2−B)][+(C_2−B)]}. Now, at this stage we see the requirement of the second part of the induction hypothesis. Since P(h) is true, by (ii) of P(h) (taking u = v_i and B′ = B), each CF1(v_i^{[[B]]}) satisfies (1)-(8), and the sets MP(∥_{i=1}^{m} CF1(v_i^{[[B]]})) and F(∥_{i=1}^{m} CF1(v_i^{[[B]]})) satisfy the corresponding conditions. Combining the above facts one can easily conclude that (i) of P(v) is satisfied.
To prove part (ii) of P(v), we consider any u′ ∈ (Sub(CF1(v)) − {CF1(v)}) and nonempty B″. It is easy to see that for any such u′ there exists u ∈ (Sub(CF1(h)) − {CF1(h)}) for which CF1(u′^{[[B″]]}) satisfies (1)-(8).
Case 2: Let v = h^{[[+C]]}. Again, to prove P(v), we first compute CF1(v). Consider the following possibilities on CF1(h). Without losing any generality of the result, we can assume C ⊈ MP(CF1(h)) (otherwise CF1(v) = CF1(h)).
Case 2.1: CF1(h) = STOP_A. Then CF1(v) = STOP_{A∪C}. P(v) is trivially true.
Case 2.2: CF1(h) = P_i. Then CF1(v) = P_i^{[[+C]]}. Satisfaction of P(v) follows identically as in Case 1.2.
Case 2.3: This case follows identically as in Case 1.3.
Case 2.4: Let CF1(h) = (∥_{i=1}^{m} v_i)^{[[+C_1]][B_2][+C_2]}, and write C̃_1 = C_1 ∪ ((C−C_1)−MP(∥_{i=1}^{m} v_i)). Then CF1(v) = (∥_{i=1}^{m} v_i)^{[[+C̃_1]][B_2][+(C_2−C)]}, which is clearly an element of U_2 ∪ V_1. To prove (ii), consider any subexpression u′ of CF1(v) different from itself and any nonempty B″. Any such u′ either belongs to Sub(∥_{i=1}^{m} v_i), or it is one of (∥_{i=1}^{m} v_i)^{[[+C_1]]}, (∥_{i=1}^{m} v_i)^{[[+C̃_1]]} or (∥_{i=1}^{m} v_i)^{[[+C̃_1]][B_2]}. After applying GCO[[B″]] on any of these expressions, it can easily be seen that CF1(u′^{[[B″]]}) is some subexpression of (∥_{i=1}^{m} CF1(v_i^{[[B″]]}))^{[[+(C̃_1−B″)]][(B_2−B″)]}. Then satisfaction of (1)-(8) by CF1(u′^{[[B″]]}) follows from part (ii) of the hypothesis itself.
Combining the results of the above subcases, we find that P(v) is satisfied by Case 2.
Case 3: Let v = h^{[B]}. P(v) can be proved in a similar way as in Case 2 and the proof is not elaborated here.
Case 4: Let v = h^{[+C]}. P(v) can also be proved in a similar way as in Case 2.
Case 5: Let v = ∥_{i=1}^{m} h_i such that each of P(h_i) is true. Now CF1(v) = ∥_{i=1}^{m} CF1(h_i). Then (i) of P(v) follows immediately from the fact that each of the P(h_i) is true.
To prove (ii) of P(v), we consider any u′ ∈ (Sub(CF1(v)) − {CF1(v)}) and nonempty B″. We have to show that CF1(u′^{[[B″]]}) satisfies (1)-(8). There are two possibilities.
Case 5.1: The first possibility is that u′ = CF1(h_j) for some j. To compute CF1(u′^{[[B″]]}), we have to consider the different possible structures that CF1(u′) can possess. Note however that, as u′ = CF1(h_j) and P(h_j) is true, by (i)(f) of P(h_j), CF1(u′) = u′.
Case 5.1.2: Let CF1(u′) = P_i. Then CF1(u′^{[[B″]]}) = P_i^{[[B″]]}, which obviously satisfies (1)-(8) trivially.
Case 5.1.3: Let CF1(u′) = (∥_{i=1}^{m} v_i)^{[[+C_1]][B_2][+C_2]}. Then CF1(u′^{[[B″]]}) = (∥_{i=1}^{m} CF1(v_i^{[[B″]]}))^{[[+(C_1−B″)]][(B_2−B″)][+(C_2−B″)]}. Since P(h_j) is true, by part (ii) of P(h_j) (taking u = v_i and B′ = B″), each of the CF1(v_i^{[[B″]]}) satisfies (1)-(8). By the induction hypothesis, C_1 ∩ MP(∥_{i=1}^{m} v_i) = ∅, B_2 ⊆ F(∥_{i=1}^{m} v_i) and (B_2 ∪ C_2) ⊆ F(∥_{i=1}^{m} v_i). It implies (C_1−B″) ∩ MP(∥_{i=1}^{m} CF1(v_i^{[[B″]]})) = ∅, (B_2−B″) ⊆ F(∥_{i=1}^{m} CF1(v_i^{[[B″]]})) and ((B_2−B″) ∪ (C_2−B″)) ⊆ F(∥_{i=1}^{m} CF1(v_i^{[[B″]]})). Combining the above facts one can easily conclude that (1)-(8) are satisfied by CF1(u′^{[[B″]]}).
Combining the results of the above subcases, we find that P(v) is satisfied in Case 5.1.
Case 5.2: The other possibility is that u′ ∈ (Sub(CF1(h_j)) − {CF1(h_j)}) for some j. Clearly, by (ii) of P(h_j), CF1(u′^{[[B″]]}) satisfies (1)-(8). Hence part (ii) of P(v) is satisfied.
Combining the results of the above subcases, we find that P(v) is satisfied in Case 5.
By the principle of induction, the lemma is found to be true. □
(v) For some u ∈ U such that, if u = (∥_{j=1}^{r} u′_j)^{[[+C_1]][B_2][+C_2]}, no u′_k mR u′_l, let {u_1, …, u_m} be a set of elements of U such that (1) for every i, u mR u_i but not u_i mR u, and (2) there exist no i, j such that both u_j mR u_i and u_i mR u_j.
Then W = M(M_m(⋯ M_m(M_m(u, u_{i_1}), u_{i_2}), ⋯, u_{i_m})) satisfies the following properties: (a) it is finitely computable and uniquely defined irrespective of the choice of i_1, …, i_m, as long as they are m distinct elements of {1, …, m}; (b) Comp(u) ∪ ⋃_{i=1}^{m} Comp(u_i) = ⋃_{g∈W} Comp(g); (c) if W = {(∥_{i=1}^{r} u′_i)^{[[+C_1]][B_2][+C_2]}} or {u′_1, …, u′_r}, then each u′_i ∈ U and no u′_i mR u′_j.
Basis: Let U = U_1. Note that each element of U_1 is a variant of some component expression. Therefore, for any two elements u_1, u_2, if u_1 mR u_2 then the reverse is also satisfied, i.e., u_2 mR u_1 and M_m(u_1, u_2) = M_m(u_2, u_1), and this itself is another variant of the same component expression as that of u_1, u_2. Now it can easily be seen that P(U_1) is satisfied.
Hypothesis: Let P(U) be satisfied by some U.
Step: Let V ⊆ CF1(Φ^n(G_D, Σ_D)) be such that (a) U ⊆ V, and (b) for any C_1, B_2, C_2 and u_i ∈ U, 1 ≤ i ≤ m, v = (∥_{i=1}^{m} u_i)^{[[+C_1]][B_2][+C_2]} ∈ V. We have to show that P(V) is satisfied.
Here C̄_2 = ((⋃_{i=1}^{m} C_2^i) − B̄_2) ∪ ((⋃_{i=1}^{m} C_2^i) ∩ F(h) ∩ (⋃_{i=1}^{m} C_1^i)). Since h = ∥_{j=1}^{m_i} u_{ij} for any i and each u_{ij} ∈ U, by the steps of M and (i), (ii), (iii) of P(V) we find that M(h) is a finitely computable unique subset of expressions from U such that M(h) := {(∥_{i=1}^{r} f′_i)^{[[+C̄_1]][B̄_2][+C̄_2]}} or
(v_{l_1} ∥ ⋯ ∥ v_{l_q} ∥ u^{k_1}_1 ∥ ⋯ ∥ u^{k_p}_{m_{k_p}})^{[[+C̄_1]][B̄_2][+C̄_2]} = (∥_{i=1}^{n} u_i)^{[[+C̄_1]][B̄_2][+C̄_2]} (say),
where naturally each u_i ∈ U, and C̄_1, B̄_2 and C̄_2 are three unique subsets obtained (irrespective of the order of merging) from repeated application of the third condition of merging (definition 3.2.4) and CF1. Note that this expression contains all the components from each v_i. Also, for any l_i, l_j, k_i, v_{l_i} mR v_{l_j} and also ∥_{j=1}^{m_{k_i}} u^{k_i}_j mR v_{l_i}. Applying the steps of M and using these facts as well as (i), (ii), (iii) of P(V), and (iv) and (v) of P(U), we find that (iv) of P(V) is also satisfied.
Finally, to show (v), consider some v ∈ V such that v = (∥_{i=1}^{r} u_i)^{[[+C_1]][B_2][+C_2]} for some C_1, B_2, C_2, where each u_i ∈ U and no u_i mR u_j. Let {v_1, …, v_m} ⊆ V be such that (1) for every i, v mR v_i but not v_i mR v, and (2) there exist no i, j such that both v_j mR v_i and v_i mR v_j. From the above, we can conclude that v is not a variant of any component expression and that B_2 ∩ F(∥_{i=1}^{r} u_i) is nonempty. Then M_m(⋯ M_m(M_m(v, v_{i_1}), v_{i_2}), ⋯, v_{i_m}) =
(u_1 ∥ ⋯ ∥ u_r ∥ v_1 ∥ ⋯ ∥ v_m)^{[[+C̄_1]][B̄_2][+C̄_2]},
where C̄_1 = C_1 ∪ ⋃_{i=1}^{m} MP(v_i), B̄_2 = B_2 ∪ ⋃_{i=1}^{m} F(v_i) and C̄_2 = C_2 ∪ ⋃_{i=1}^{m} F(v_i). It is a unique expression irrespective of the order of merging. Note that, for any i, 1 ≤ i ≤ m, if v_i ∉ U then v_i has the structure (∥_{j=1}^{m_i} u_{ij})^{[[+C_1^i]][B_2^i][+C_2^i]}, for some C_1^i, B_2^i and C_2^i, where u_{ij} ∈ U. Also, as v_i merges into v, or rather into ∥_{i=1}^{r} u_i, we can conclude that B_2^i ∩ F(∥_{j=1}^{m_i} u_{ij}) is empty. Using these properties, the steps of M, (i), (ii) and (iii) of P(V), and (iv) and (v) of P(U), we can find that P(V) will be satisfied.
Together, the induction proposition is found to be true for V. Now, using the principle of induction, we find that for h ∈ CF1(Φ^n(G_D, Σ_D)), M(h) satisfies the above properties. This in turn shows that (a) of lemma 3.2.3 is satisfied.
Finally, using the facts that CF2 is applied recursively on all arguments of PCO, as well as the results of lemma 3.2.2 and the above induction, the other claims of lemma 3.2.3 can easily be shown to be true. □
Algorithm to decide boundedness of A(⟨Φ^n(H_D, Σ_D)⟩)
Input: The process realisation X = F(X), and Y = ⟨P_i⟩(X), for some i, 1 ≤ i ≤ n.
(Remark: Data-structure)
Structure node {
integer :: count, level;
boolean :: mark;
expression:: E;
Array of pointers to nodes:: Child[ ]}
(Remark: A structure called node is used here. A node N has the following fields. N.count
keeps track of the number of children attached to N. N.level denotes the level of the tree
at which N is placed. N.E is the expression present in N. N.mark denotes
the status of the node, which is either expanded or unexpanded. Finally, for different i,
1 ≤ i ≤ N.count, N.Child[i] is a pointer that points to the i-th child node of N. The
node pointed to by this pointer is denoted as *(N.Child[i]).)
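The data structure in the remark can be mirrored in runnable form. This is a sketch of the node record and the duplicate-child check of attach only; field names follow the pseudocode, while the implementation details are ours.

```python
# Runnable counterpart of the pseudocode's node structure: each node
# stores its level, its process expression E, an expanded/unexpanded
# mark and its children. 'count' is derived from len(children).

class Node:
    def __init__(self, expression, level=0):
        self.E = expression          # process expression at this node
        self.level = level           # depth of the node in the tree
        self.mark = "unexpanded"     # expanded / unexpanded
        self.children = []           # N.Child[ ] of the pseudocode

    @property
    def count(self):                 # N.count: number of children
        return len(self.children)

    def attach(self, expression):
        """Attach a child unless an equal expression is already a child
        (the attach function's 'old' case); return the status."""
        if any(c.E == expression for c in self.children):
            return "old"
        self.children.append(Node(expression, self.level + 1))
        return "new"

root = Node("P_i")
s1 = root.attach("P_i/<a>")
s2 = root.attach("P_i/<a>")   # duplicate expression is not attached
```

The duplicate check is what lets the tree stay finite when the same post-process expression recurs among siblings.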
Procedure: Main
{
structure node :: Root;
boolean :: status;
boolean buildtree (structure node *root, *link, *current);
(Remark: The Main procedure is the top level procedure in the algorithm. It uses the
variable status to store the result of the recursive function buildtree which indicates whether
the process mentioned in the Input is bounded or unbounded. The pointer variable Root is
used to initialise the procedure by pointing to a node. This node is used as the root node
for the tree to be built by buildtree function.)
Create(Root);
(*Root).level := 0;
(*Root).count := 0;
(*Root).mark := unexpanded;
(*Root).E := Pi ;
status := buildtree (Root, Root, Root);
}
(Remark: The root node is created and assigned with level 0 and an expression Pi which
is mentioned in the Input of the procedure. Initially no child is attached to it and it is yet
to be expanded. The buildtree function is called with all arguments pointing to the root
node)
Procedure: Buildtree
boolean buildtree (structure node *root, *link, *current)
{
structure node :: *new;
boolean :: temp-status, new-status;
boolean attach(structure node n1 , n2 );
(Remark: The Buildtree procedure uses three pointers as its arguments. They point to
the root node (R), the link node and the current node (N) respectively. Also,
the node pointer new points to the node Nnew which contains the post-process expression
(N.E)/σ for the different σ. In this procedure, after finding the appropriate position for Nnew
from table 3.4, the function attach is used to attach this node to the tree. Before
attaching, however, it is checked whether Nnew.E is already present in some other child of
the node where Nnew is to be connected. If so, Nnew is not connected to the tree and
the attach function sets the new-status variable to old. Otherwise the node is connected to
the tree and new-status is assigned the value new.)
If (∃N′ ≠ current, root → N′ → current | N′.E = (*current).E)
∨ ((*current).E = STOP_A) ∨ ((*current).E = SKIP_A) then
{ (*current).mark := expanded;
temp-status := bounded;
return temp-status
}
(Remark: If N.E closes a loop, or bears a STOP_A or SKIP_A type of expression,
then it is not expanded further and the current (most recent) call of the recursive function
buildtree terminates, returning a bounded status.)
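The termination test above can be sketched as follows. The parent field used here to walk the path back to the root is an illustrative addition (the pseudocode instead quantifies over nodes on the root-to-current path), and the STOP_A / SKIP_A expressions are modelled as plain strings:

```python
def is_terminal(current):
    """Return True when expansion of `current` should stop: its
    expression already occurs on the path back to the root (a loop),
    or it is a STOP_A / SKIP_A type of expression."""
    if current.E in ("STOP_A", "SKIP_A"):
        return True
    node = getattr(current, "parent", None)
    while node is not None:          # walk ancestors up to the root
        if node.E == current.E:      # expression seen earlier: loop closed
            return True
        node = getattr(node, "parent", None)
    return False
```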
else
for (σ ∈ F((*current).E))
{
Create(new);
(*new).E := (*current).E/σ;
if (combination is any of (f) vs (1) or (a)-(d) vs (1)-(4), except (b) vs (1)) then
new-status := attach(current, new);
else if (combination is any of (e) vs (1)-(4), or (b) vs (1)) then
new-status := attach(root, new);
else if (combination is any of (f) vs (2)-(4)) then
new-status := attach(link, new);
If (new-status = new) ∧ (∃N′ ≠ new, root → N′ → new | N′.E and
(*new).E begin with the same argument, len(N′.E) < len((*new).E), and
it is not the case that N′.E = Pj and (*new).E = Pj; SKIP_A) then
{ temp-status := unbounded;
(*current).mark := expanded
return temp-status
}
free(new);
}
temp-status := bounded;
(Remark: Once Nnew is attached to the tree, the path from R to Nnew is checked for
unboundedness. If the condition of unboundedness is satisfied, the for loop, as well as the
most recent call of buildtree, terminates, returning an unbounded status. Otherwise the for
loop terminates, indicating that the unboundedness condition has not yet been found to hold,
and the new pointer is freed for further use. Since the search is depth-first, we now proceed
to build the tree from the different children of the current node N. A recursive call of
buildtree is made in which some child of the current node N serves as the new current node.
Also, depending upon the relative lengths of the expressions contained in N and in its child,
the link node for the next level of recursion has to be fixed. If the length of the expression
of the child node exceeds that of the current node, the current node itself is used
as the link node in the next recursive call; if it is equal, the old link node suffices. A node
is marked as expanded only after tree building from its child nodes is complete, or the
unboundedness condition has been found to be satisfied in between.)
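The depth-first child-expansion loop described above can be sketched as follows, under the same modelling assumptions as before; buildtree is passed in as a callback purely for illustration:

```python
def expand_children(root, link, current, buildtree):
    """Depth-first expansion of the unexpanded children of `current`,
    mirroring the while loop of the Buildtree procedure: the link node
    moves down only when the child's expression is strictly longer."""
    temp_status = "bounded"
    for child in current.Child:
        if temp_status != "bounded":
            break                        # unboundedness already detected
        if child.mark != "unexpanded":
            continue
        if len(child.E) > len(current.E):
            # strictly longer expression: current becomes the new link node
            temp_status = buildtree(root, current, child)
        else:
            # equal length: the old link node still suffices
            temp_status = buildtree(root, link, child)
    current.mark = "expanded"
    return temp_status
```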
While ((∃u, 1 ≤ u ≤ (*current).count | (*(current.Child[u])).mark = unexpanded)
∧ (temp-status = bounded))
{
If (len((*(current.Child[u])).E) > len((*current).E)) then
temp-status := buildtree(root, current, current.Child[u]);
else if (len((*(current.Child[u])).E) = len((*current).E)) then
temp-status := buildtree(root, link, current.Child[u]);
}
(*current).mark := expanded;
return temp-status
}
Procedure: Attach
boolean attach(structure node *n1, *n2)
{
boolean :: n2-status;
(Remark: This procedure attaches the node n2 to n1 only if no existing child of n1 bears
the same expression as that of n2. It then makes appropriate changes in the different fields
of n1 and returns n2-status as new. Otherwise, if the expression of n2 is a repetition of that
of some child of n1, it is not connected to n1 and n2-status is returned as old. In both cases
the value of n2-status is returned.)
if (∄u, 1 ≤ u ≤ (*n1).count | (*(n1.Child[u])).E = (*n2).E) then
{
(*n2).level := (*n1).level + 1;
(*n2).count := 0;
(*n2).mark := unexpanded;
(*n1).count := (*n1).count + 1;
Create((*n1).Child[(*n1).count]);
(*n1).Child[(*n1).count] := n2;
n2-status := new
}
else n2-status := old;
return n2-status
}
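The attach procedure amounts to a duplicate-check before insertion; a Python sketch under the same modelling assumptions (string expressions, list-valued Child) might read:

```python
def attach(n1, n2):
    """Attach n2 as a child of n1 unless some existing child of n1
    already bears the same expression.  Returns "new" if attached,
    "old" otherwise, as in the pseudocode's n2-status variable."""
    if any(child.E == n2.E for child in n1.Child):
        return "old"                 # repeated expression: do not attach
    n2.level = n1.level + 1          # the child sits one level deeper
    n2.count = 0
    n2.mark = "unexpanded"
    n1.Child.append(n2)
    n1.count += 1
    return "new"
```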
THE AUTHOR
Supratik Bose was born in Asansol, India, on July 10, 1968. He received his B.Tech
degree in Electrical Engineering and M.Tech degree in Instrumentation Engineering, both
from I.I.T. Kharagpur, India, in 1990 and 1992 respectively. He has worked in the areas of
microprocessor-based control and discrete event systems. His hobbies include reading and
teaching high school mathematics.
Temporary Contact Address
(a) Department of Electrical Engineering
I.I.T. Kharagpur, W.B. 721302, INDIA
e-mail: supratik@ee.iitkgp.ernet.in
(b) T.D.B. College (Qr. No. 2)
Raniganj, Burdwan
W.B. 713347. INDIA Phone: (0341) 445526
Permanent Home Address
Q/2 Panchasyar, Garia
Calcutta 700084, INDIA
Publication
i) Microprocessor Based Gain Scheduling Control of A Coupled Tank System, by S. Bose,
S. Mukherjee and A. Patra in Proceedings of 15th National Systems Conference, Roorkee,
March 1992, pp. 111-115.
ii) Modelling Discrete Event Systems: A Process Algebra Approach, by S. Bose and S.
Mukhopadhyay in Proceedings of 3rd National Seminar on Theoretical Computer Science,
Kharagpur, June 1993, pp. 190-201.
iii) Comments on Observability of Discrete Event Dynamic Systems by S. Bose, A.
Patra and S. Mukhopadhyay in IEEE Transactions on Automatic Control, Vol 38, May
1993, p. 830.
iv) On Observability With Delay: Antitheses and Syntheses by S. Bose, A. Patra and
S. Mukhopadhyay in IEEE Transactions on Automatic Control, Vol 39, Apr., 1994, pp
803-806.
v) An Extended Finitely Recursive Process Model For Discrete Event Systems by S. Bose