
Quantitative Model Checking

Lecture Notes

Anne Remke

November 2nd, 2016


These lecture notes are based on a set of slides developed by Boudewijn R.
Haverkort and Joost-Pieter Katoen in 2007. The slides have since been used
extensively by different people to teach several courses, e.g., at the University of
Twente, at RWTH Aachen and at Oxford University. They have undergone several
rounds of editing and formatting. During the last years I have used them to teach
the 3TU course Quantitative Evaluation of Embedded Systems that is held at the
same time in Delft, Eindhoven and Twente. Recently, this course relies more and
more on video lectures, and hence it has become important to have a summary of
the course content, with some additional explanation, available in written form.
Last year a student from TU Delft, Frits Kastelein, took the initiative, started a
LaTeX document with lecture notes and was kind enough to send it to me for further
use. I have incorporated a chapter on Continuous Time Markov Chains and the
corresponding CSL model checking algorithms, taken from my PhD thesis, into the
document, and together with students from WWU Münster we have worked hard to
bring these lecture notes into the form they have today. I would like to thank David
Könning and Carina Pilch for their efforts.
I hope that, even though the document is far from complete, it proves useful
for students following the course Quantitative Evaluation of Embedded Systems as
part of the 3TU master on Embedded Systems, and to those following the course
Quantitative Model Checking, either at WWU Münster or in its online version on
Coursera.
Please feel free to contact me at anne.remke@wwu.de with any mistakes and spelling
errors you may find.
Contents

1 Introduction
  1.1 Reliability computations
    1.1.1 Reliability of serial systems
    1.1.2 Reliability of parallel systems
    1.1.3 Reliability of combined systems

2 Model checking labeled Transition Systems
  2.1 Labeled transition systems
  2.2 System Evolution
  2.3 Computational Tree Logic
  2.4 Model checking algorithms for CTL
    2.4.1 Existential Normal Form
    2.4.2 Computing Sat(Φ)
    2.4.3 Worst case time-complexity of CTL

3 Discrete-time Markov chains
  3.1 Definition of a DTMC
  3.2 Transient Evolution
  3.3 Steady-state distribution and State Classification

4 Probabilistic Model checking
  4.1 Syntax and semantics of PCTL
  4.2 Model checking the non-probabilistic part of pCTL
    4.2.1 Computing Sat(a)
    4.2.2 Computing Sat(¬Φ)
    4.2.3 Computing Sat(Φ ∨ Ψ)
    4.2.4 Computing Sat(Φ ∧ Ψ)
  4.3 Model checking the probabilistic part of pCTL
    4.3.1 Computing Sat(P⊴p(XΦ))
    4.3.2 Computing Sat(P⊴p(Φ U≤k Ψ))
    4.3.3 Computing Sat(P⊴p(Φ U Ψ))
    4.3.4 Worst case time-complexity of pCTL

5 CTMCs
  5.1 Continuous-time Markov chains
  5.2 Paths
  5.3 Probabilities
    5.3.1 Transient state probability
    5.3.2 Uniformization method
    5.3.3 Steady-state probability

6 CSL
  6.1 Continuous stochastic logic (CSL)
  6.2 Model checking finite state CTMCs
    6.2.1 Worst case time-complexity of CSL
  6.3 General model checking routine

Bibliography
Chapter 1

Introduction

As computer and communication systems keep growing rapidly, it is very important
to be able to analyze the performance of such systems before they are actually built.
It is possible to analyze computer and communication systems in a quantitative way
by using model-based performance evaluation. A variety of techniques and tools have
been developed to accommodate such evaluations, e.g., based on queueing networks
[10], stochastic Petri nets [1] and process algebras [17, 15]. The integration of ICT
(information and communications technology) in different applications is rapidly
increasing, e.g., in embedded and cyber-physical systems, communication protocols
and transportation systems. Hence, their reliability and dependability increasingly
depends on software. Defects can be fatal and extremely costly (with regard to
mass-produced products and safety-critical systems).
First, a model of the real system has to be built. In the simplest case, the model
reflects all possible states that the system can reach and all possible transitions
between states in a (labelled) State Transition System (LTS). When adding proba-
bilities and discrete time to the model, we are dealing with so-called discrete-time
Markov chains (DTMCs), which in turn can be extended with continuous timing
to continuous-time Markov chains (CTMCs). Both formalisms have been widely used
for the modeling and for the performance and dependability evaluation of computer
and communication systems in a wide variety of domains. These formalisms are well
understood and mathematically attractive, while at the same time flexible enough to
model complex systems. Once the Markov model has been constructed, it is possible
to calculate, e.g., performance measures of the system with a number of well-known
numerical methods. Such performance measures are for example the utilization of
the server, the time a customer has to spend waiting in line or the length of the
queue. For both finite and infinite Markov chains, solution methods exist to
calculate the probabilities of residing in each single state.
Model-based performance evaluation is a method to analyze the system in a
quantitative way. Model checking, however, traditionally focuses on the qualitative
evaluation of the model. As a formal verification method, model checking analyzes
the functionality of the system model. A property that needs to be analyzed has to
be specified in a logic with consistent syntax and semantics. For every state of the
model, it is then checked whether the property is valid or not.
Please note that the main focus of this course is on quantitative model checking
for Probabilistic Computational Tree Logic (PCTL) and for Continuous Stochastic
Logic (CSL). The former can be used to express quantitative properties on DTMCs
and the latter for CTMCs. Efficient computational algorithms have been devel-
oped for checking finite DTMCs and CTMCs against formally specified properties
expressed in these logics, cf. [4, 5], as well as supporting tools, cf. PRISM [20],
and ETMC2 [16], the APNN toolbox [7], and recently MRMC [19]. Other tools,
like GreatSPN [11] are used as front-end to model checking tools like PRISM and
MRMC. As such, we only introduce Labelled State Transition Systems, the Com-
putational Tree Logic and corresponding algorithms to provide a background and a
context for students. We do not aim to provide a complete coverage of this topic,
for which we refer to [18].
Quantitative model checking is one way to perform formal verification, which can
be seen as the application of rigorous, mathematics-based techniques to establish the
correctness of computerised systems. Essentially, formal verification is about proving
that a program satisfies its specification. Many techniques are used for formal
verification: manual proofs, automated theorem proving using a proof assistant or
proof checker tool, static analysis, or model checking, which performs systematic
checks of a given property for all states using model checking algorithms and tools.
Recent history is full of examples where software glitches caused massive safety
problems and have led either to large recall actions, and hence to considerable
financial and image loss for companies, or even to large outages and breakdowns.
In the following I would like to give a couple of examples:
The Toyota Prius was one of the first mass-produced hybrid vehicles. Unfor-
tunately, it suffered from several software glitches, concerning, e.g., the airbag control
in 2007, its anti-lock braking system (2010) and its hybrid system (2015). Eventually
these have been fixed via software updates, after in total 185,000 cars were recalled at
huge cost. Not only the incidents themselves but also their handling at Toyota prompted
much criticism and bad publicity for the company. One of the most well-known and
often used examples is the maiden flight of the rocket Ariane 5 on the 4th of June 1996
at ESA (European Space Agency). An uncaught exception, caused by a numerical
overflow in a conversion routine, resulted in incorrect altitude data being sent by the
on-board computer and, ultimately, in the self-destruction of the rocket after 37 seconds.
Many more of these examples exist and all these stories have in common that
programmable computing devices were in use (conventional computers and networks
and software embedded in devices like the airbag controllers or mobile phones) and
in all cases programming errors were the direct cause of failure. These failures were
critical for safety, business and performance and resulted in high costs.

1.1 Reliability computations


A system can often be represented as a network of independent components that
are connected in series, in parallel or both. This section shows how the reliability
of the whole system can be computed using simple computations, if the reliability
of the single components is known. For more information refer to [2, pp. 83–87].

1.1.1 Reliability of serial systems


First, we consider a series of n components as illustrated in Figure 1.1, where
R1, ..., Rn are the reliability probabilities of the n components. If the components
have different reliability probabilities, the total reliability RS of the whole system
can be calculated as:

RS = R1 · R2 · R3 · ... · Rn.   (1.1)

On the other hand, the above formula can be simplified if all components have
the same reliability Ri = R1 = ... = Rn:

RS = (Ri)^n.   (1.2)

Figure 1.1: Reliability of a series of n components

Example 1 (Series of 10 components). We consider a system with 10 identical
components connected in series, each with a reliability of 0.95. The total reliability
of the system is then given by

RS = (Ri)^10 = (0.95)^10 ≈ 0.5987.

1.1.2 Reliability of parallel systems


Consider a system of n components in parallel, as illustrated in Figure 1.2, where
R1, ..., Rn are the reliability probabilities of the n components. If these probabilities
differ per component, the total reliability RS of the parallel system can be calculated
as:

RS = 1 − ∏_{i=1}^{n} (1 − Ri) = 1 − (1 − R1) · (1 − R2) · ... · (1 − Rn).   (1.3)

If all components have the same reliability Ri = R1 = . . . = Rn , the computation


of the overall reliability simplifies to:
RS = 1 − ∏_{i=1}^{n} (1 − Ri) = 1 − (1 − Ri)^n.   (1.4)

Figure 1.2: Reliability of parallel systems

Example 2 (System of 4 parallel components). Consider a system with 4 components
connected in parallel with the reliability probabilities 0.99, 0.95, 0.98
and 0.97, respectively. The total reliability of the system is then given by:

RS = 1 − (1 − R1) · (1 − R2) · (1 − R3) · (1 − R4)
   = 1 − (1 − 0.99) · (1 − 0.95) · (1 − 0.98) · (1 − 0.97)
   = 1 − 0.0000003
   = 0.9999997

1.1.3 Reliability of combined systems


To compute the reliability of a system that combines serial and parallel components,
the approaches above can be combined. Subsystems, i.e., multiple parallel systems
that are connected in series or serial systems connected in parallel, can be handled
independently first, and the obtained sub-reliabilities can then be used to determine
the overall reliability of the main system. The following example makes clear how
this is done.
Example 3 (Combined system). See the system in Figure 1.3. Let
na be the number of components in subsystem A and let nb be the number of
components in subsystem B. The overall reliability is calculated as follows:

Ra = 1 − ∏_{i=1}^{na} (1 − Ri)
Rb = 1 − ∏_{i=1}^{nb} (1 − Ri)
R = Ra · Rb

Figure 1.3: Reliability of a combination of configurations
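The reliability formulas of this section are easy to evaluate mechanically. The following minimal Python sketch (the function names and the component values chosen for the combined system are illustrative assumptions, not taken from the notes) reproduces Examples 1 and 2 and shows how sub-reliabilities of a combined system are composed as in Example 3.

```python
from math import prod

def serial_reliability(rs):
    """Reliability of components in series: R_S = R_1 * ... * R_n (Equation 1.1)."""
    return prod(rs)

def parallel_reliability(rs):
    """Reliability of components in parallel: R_S = 1 - prod(1 - R_i) (Equation 1.3)."""
    return 1 - prod(1 - r for r in rs)

# Example 1: ten identical components in series, each with reliability 0.95.
print(serial_reliability([0.95] * 10))                 # ~0.5987

# Example 2: four components in parallel.
print(parallel_reliability([0.99, 0.95, 0.98, 0.97]))  # 0.9999997

# Combined system as in Example 3: two parallel subsystems A and B in series
# (the component reliabilities below are made up for illustration).
R_a = parallel_reliability([0.9, 0.8])
R_b = parallel_reliability([0.95, 0.7, 0.6])
print(serial_reliability([R_a, R_b]))                  # R = R_a * R_b
```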


Chapter 2

Model checking labeled Transition Systems

This chapter focuses on labeled transition systems (LTSs) in Section 2.1 and on their
evolution in Section 2.2. Section 2.3 introduces the Computational Tree Logic (CTL)
that is used to express properties of LTSs, and finally model checking algorithms for
CTL are presented in Section 2.4.

2.1 Labeled transition systems


Transition systems (TSs) form a standard class of models in computer science. They
can be seen as directed graphs, where nodes represent states and edges model tran-
sitions. While a state contains some information about the system, the transitions
model changes between states. Hence, transition systems can be used to describe
the behavior of systems. In the following we will assign atomic properties to states
to represent simple facts that hold in the state they have been assigned to. These
atomic properties can be seen as labels and hence turn the TS into a Labelled Transi-
tion System (LTS). All atomic properties are accumulated in the finite set of atomic
properties AP. An example of an LTS is shown in Figure 2.1 and formally an LTS is
defined as follows:

Definition 1 (LTS). A labeled transition system (LTS) is a triple (S, T, L) where:

S is a finite set of states,

T ⊆ S × S is a transition relation such that (s1, s2) ∈ T if there is a
transition possible from state s1 to s2,

L : S → 2^AP is a labeling function that assigns a set L(s) ⊆ AP to any state
s.


For more details on LTSs, see [6, pp. 19–23].


Example 4 (LTS). In the example given in Figure 2.1, the state space equals S =
{s1, s2, s3} and the set of transitions is T = {(s1, s3), (s3, s2), (s3, s1)}. The labeling
function is defined such that L(s1) = {red}, L(s2) = {green} and L(s3) = {yellow}.
Note that it is possible to add multiple labels to a single state.

Figure 2.1: An example of an LTS

2.2 System Evolution


For a given state s of the LTS one of the outgoing transitions is selected nonde-
terministically, that is, the outcome is unknown a priori and no probability can be
assigned to the different possibilities. Also, in case more than one starting state is
possible, the actual starting state is chosen nondeterministically. The evolution (or
execution) of a labeled transition system can be described as a path (or execution
fragment), that consists of a finite or infinite sequence of states:

Definition 2 (Path). A path of a labeled transition system T = (S, T, L) is a
finite or infinite sequence of states s1, s2, s3, s4, ... such that si, si+1 ∈ S and
(si, si+1) ∈ T for all i ≥ 1.

Alternatively, the evolution of the system can be represented by an infinite tree
of states, called the computation tree, where a state has as many successors as it
has outgoing transitions. This corresponds to a branching notion of time, where
each moment splits up into all possible futures. A path can be seen as a single
branch or execution of the computation tree, as in the example of Figure 2.2. The
tree itself then contains all possible executions of the system starting from an initial
state. The initial state forms the root of the computation tree, as shown in Figure
2.2; then each node splits up according to all possible transitions, maintaining the
nondeterministic choice between successors. Note that in the computation tree of an
LTS the states are usually represented by their atomic properties.

Figure 2.2: Computation tree of an LTS

2.3 Computational Tree Logic

The branching notion of time was introduced by Clarke and Emerson in the early
eighties, together with a temporal logic that allows expressing properties of some
or all computations that start in a particular state s. This so-called Computational
Tree Logic (CTL) allows reasoning about the future evolution of the system, not
about its history. Note that efficient model checking algorithms for CTL exist and
will be presented in Section 2.4.
The syntax of CTL distinguishes between state and path properties. The former
are logical combinations of atomic properties and the latter reason about temporal
properties of paths. Such path properties may reason about the validity of
a state formula in all states along a path, using the always-operator, or about the
validity of a state formula in some state along the path, before which a different
state formula has been valid in all preceding states, using the until-operator. In CTL
a path property always needs to be wrapped into either the exists or the forall
operator, which decide whether the property specified for paths needs to hold for at
least one path or for all possible executions, respectively.
An example for a state property is "The ball is red". An example for a path
property is "The traffic light will eventually turn green". Normally, we denote
state formulas with capital Greek letters, e.g. Φ and Ψ, and path formulas with
small Greek letters, e.g. ϕ.
In the following, we will provide a short summary of the syntax and semantics
of CTL. A more detailed definition and description of CTL can be found in [6, pp.
317–329].
Definition 3 (State formulas). A CTL state formula Φ is constructed as follows:

Φ ::= tt | a | Φ ∧ Φ | Φ ∨ Φ | ¬Φ | ∃ϕ | ∀ϕ,   (2.1)

where the true operator tt holds in every state and a ∈ AP denotes an atomic
property (see Section 2.1). We can use the Boolean operators ¬, ∧ and ∨
to combine state formulas. The exists quantifier ∃ and the for all quantifier ∀ are
always followed by a path formula, indicated by ϕ.

Definition 4 (Path formulas). A path formula ϕ for CTL is constructed using the
following grammar:

ϕ ::= XΦ | Φ U Ψ.   (2.2)

X denotes the next operator and U is the until operator. XΦ indicates that the
next state on the path fulfills the state formula Φ. Φ U Ψ denotes that Φ holds for
all states along the path until Ψ holds.

The following short-hand notations are used in the following; they are denoted even-
tually and always, respectively:

◊Φ := tt U Φ
□Φ := ¬◊¬Φ

◊Φ indicates that along the path eventually the state formula Φ holds. □Φ means
that Φ holds always, in every state on the path.
As mentioned above, a path formula has to be wrapped inside either ∃ or ∀
(see Definition 3) for obtaining a CTL formula, e.g., ∃◊Φ means that there exists a path
where eventually Φ is fulfilled. ∀□Φ denotes that for all states of all possible paths
the state formula Φ holds.
Example 5 (State formulas). The following expressions are examples for state for-
mulas:

The ball is not red.

¬(ball red)

The ball is not red or the ball has dots.

¬(ball red) ∨ (ball dots)

There exists a path, such that the traffic light will turn green.

∃◊(traffic light green)

For every path, the traffic light will turn green.

∀◊(traffic light green)



Example 6 (Path formulas). The following expressions are examples for path for-
mulas:

In the next state of the path, the state formula Φ = (traffic light green) is
fulfilled.
X(traffic light green)

The traffic light will eventually turn from red to green, without ever being
neither red nor green in between.
red U green

Example 7 (CTL model checking). See the computation trees of LTSs in Figures
2.3, 2.4, 2.5, 2.6, 2.7 and 2.8. All the computation trees represent the evolution
of LTSs that consist of the states {s1, s2, ..., s8, s9}. We would like to discuss the
validity of different CTL formulas for the initial state.

2.4 Model checking algorithms for CTL


To check if a state s ∈ S satisfies a state formula Φ, we need to compute the satisfac-
tion set of Φ. A satisfaction set is the set of states that satisfy the state formula
Φ. This set, also denoted as Sat(Φ), needs to be computed recursively. For more
details on CTL model checking, see [6, pp. 341–358].

Definition 5 (Satisfaction relation for CTL). Let a ∈ AP be an atomic proposition,
T = (S, T, L) a labeled transition system, s ∈ S a state, Φ, Ψ CTL state
formulas and ϕ a CTL path formula. The satisfaction relation |= is defined for state
formulas by

s |= tt
s |= a iff a ∈ L(s)
s |= Φ ∧ Ψ iff s |= Φ and s |= Ψ
s |= ¬Φ iff s ⊭ Φ
s |= ∃ϕ iff there is a path σ ∈ Paths(s) s.t. σ |= ϕ
s |= ∀ϕ iff for each path σ ∈ Paths(s): σ |= ϕ

The satisfaction set for a state formula is defined as: Sat(Φ) = {s ∈ S : s |= Φ}

Figure 2.3: ∃◊red holds: there exists a path starting from state s1, where eventually
red holds. Paths s1, s2, s6 and s1, s3, s8 fulfill this property. It does not matter that
state s8 is connected to more states, as one existing path is sufficient to fulfill the
formula.

Figure 2.4: ∃□red holds: there exists a path starting from state s1, where always
red holds. Path s1, s2, s6 fulfills this property. Path s1, s3, s8 does not fulfill this,
because state s8 is connected to further states, which may not have the label red.

Figure 2.5: ∃(yellow U red) holds: there exists a path starting from state s1, where
yellow holds for all states until red holds. Paths s1, s4 and s1, s2, s5 and s1, s2, s6
fulfill this until property. It is sufficient to find one fulfilling path. Whatever happens
after the state in which red holds does not influence the outcome.

Figure 2.6: ∀◊red holds: all the paths from state s1 have to fulfill that eventually
red holds along the path. The paths s1, s2, s5 and s1, s2, s6 and s1, s3 and s1, s4
fulfill this requirement. As these are all the paths starting from s1, the property
holds. Further connections from s3, s4, s5 are irrelevant, as ◊red holds as soon as
red holds.

Figure 2.7: ∀□red maybe holds: for all the paths starting from state s1, red has to
be always fulfilled. We do not have enough information to decide if the property
holds, since the states s5, s7, s8 and s9 have outgoing connections, where red also
has to hold. Note that if the system did not have any extra unknown states, this
property would hold.

Figure 2.8: ∀(yellow U red) holds: all the paths from state s1 need to have only
states fulfilling yellow until red holds. Paths s1, s2, s5 and s1, s2, s5 and s1, s3, s7
and s1, s2, s8 and s1, s4 fulfill this until property. These are all the paths starting
from state s1, thus the property holds. Whatever happens after the state where red
is fulfilled does not influence the outcome.

2.4.1 Existential Normal Form


It is sufficient to know how to compute the satisfaction set of CTL formulas that
are in the so-called Existential Normal Form (ENF). This is because each CTL
formula can be rewritten in existential normal form, i.e., a CTL formula with the
basic modalities ∃X, ∃U and ∃□. However, it is also possible to design analogous
algorithms for the CTL operators ∀U and ∀□.
The following definition provides the grammar that constructs CTL formulas in
ENF:

Definition 6 (Existential Normal Form). A formula for CTL in the existential
normal form is defined as:

Φ ::= tt | a | Φ ∧ Φ | ¬Φ | ∃XΦ | ∃(Φ U Φ) | ∃□Φ   (2.3)

A CTL formula ∀XΦ can be rewritten in the existential normal form as ¬∃X¬Φ.
A formula ∀(Φ U Ψ) can be expressed as ¬∃(¬Ψ U (¬Φ ∧ ¬Ψ)) ∧ ¬∃□¬Ψ. For
further normal forms and equivalence rules in CTL, refer to [6, pp. 329–331].

2.4.2 Computing Sat(Φ)


CTL is based on a branching notion of time. The following example illustrates how
the satisfaction set for an LTS T = (S, T, L) and a state formula can be computed
recursively. The formula in the example is:

∃◊a ∧ ∃(b U ∃□c)

As shown in the parse tree in Figure 2.9, the formula is first split into two parts,
separating the left and right side of the highest binding operator ∧. This yields
Φ1 ∧ Φ2 with Φ1 = ∃◊a and Φ2 = ∃(b U ∃□c). The left side, Φ1, can then be
further split up into ∃◊ and a. Note that all the nodes consist only of state formulas
(not path formulas). That is why ∃ and ◊ appear together in one node. The right side is a
bit more complex: Φ2 can be split up into the parts left and right of the until operator
U. Note that Φ U Ψ is a path formula and not a state formula, so it is necessary to have
the until operator appear in the node of the parse tree together with its corresponding quantifier, ∃.
Now Φ2 = ∃(Φ3 U Φ4) with Φ3 = b and Φ4 = ∃□c. Φ3 is already an atomic property
and Φ4 needs to be further split up, as shown in Figure 2.9.

Figure 2.9: A parse tree of a CTL state formula

To compute Sat(Φ), we perform a recursive computation: we start with the leaves
of the parse tree and compute Sat(Φx), where the Φx are the atomic properties in the
leaves. Then we go one level up and combine the satisfaction sets of the nodes
already computed according to the rule that corresponds to the operator that is
shown in the parent node, until we have reached the root of the parse tree. I.e., we
compute all the satisfaction sets of the leaves, go one level up and compute all
the satisfaction sets of the higher nodes, until we reach the top node.
In the following, we present the computations for all CTL operators, including
the cases of the all-quantifier combined with next, until and always:

Computing Sat(a)

Sat(a) = {si ∈ S | a ∈ L(si)}

So all states s ∈ Sat(a) in the satisfaction set of a have the label a.

Computing Sat(¬Φ)

Sat(¬Φ) = S \ Sat(Φ)

To compute the satisfaction set of a negated CTL formula Φ, we have to take
the complement of Sat(Φ) with respect to the state space S. For example, if S =
{0, 1, 2, 3} and Sat(Φ) = {0, 2}, then Sat(¬Φ) = S \ Sat(Φ) = {0, 1, 2, 3} \ {0, 2} =
{1, 3}.

Computing Sat(Φ ∨ Ψ)

Sat(Φ ∨ Ψ) = Sat(Φ) ∪ Sat(Ψ)

To compute the satisfaction set of two disjunct CTL formulas, we have to take the
union of their respective satisfaction sets. For example, if S = {0, 1, 2, 3, 4} and
Sat(Φ) = {0, 2} and Sat(Ψ) = {0, 2, 3}, then Sat(Φ ∨ Ψ) = Sat(Φ) ∪ Sat(Ψ) =
{0, 2} ∪ {0, 2, 3} = {0, 2, 3}, where ∪ is the union of the two sets.

Computing Sat(Φ ∧ Ψ)

Sat(Φ ∧ Ψ) = Sat(Φ) ∩ Sat(Ψ)

To compute the satisfaction set of two conjunct CTL formulas, we have to take
the intersection of their respective satisfaction sets. E.g., if S = {0, 1, 2, 3, 4} and
Sat(Φ) = {0, 2} and Sat(Ψ) = {0, 2, 3}, then Sat(Φ ∧ Ψ) = Sat(Φ) ∩ Sat(Ψ) =
{0, 2} ∩ {0, 2, 3} = {0, 2}, where ∩ is the intersection of the two sets.

Computing Sat(∃XΦ)

To compute this satisfaction set, we first need to define the so-called successor set
of a state s:

Definition 7 (Successor set). In a labeled transition system T = (S, T, L), the set
of successors of a state s ∈ S is defined as:

Post(s) = {s′ ∈ S | (s, s′) ∈ T}

So, Post(s) includes all the states s′ such that s and s′ are in the transition
relation, (s, s′) ∈ T. We can now determine Sat(∃XΦ) as the set of states that have
a successor in the satisfaction set of Φ:

Sat(∃XΦ) = {s ∈ S | (Post(s) ∩ Sat(Φ)) ≠ ∅}

The satisfaction set hence includes all the states s ∈ S for which the intersection,
∩, of the successor states Post(s) and the satisfaction set of Φ, Sat(Φ), is not
the empty set ∅.

Computing Sat(∀XΦ)

This satisfaction set can be determined along a similar reasoning as for Sat(∃XΦ).
The only difference is that all successor states have to be in Sat(Φ):

Sat(∀XΦ) = {s ∈ S | Post(s) ⊆ Sat(Φ)}

For a state s ∈ S, all successor states have to fulfill Φ, hence Post(s) has to be a
subset of Sat(Φ).

Computing Sat(∃(Φ U Ψ))

The satisfaction set Sat(∃(Φ U Ψ)) contains all states starting from which a path exists,
such that Φ holds until Ψ holds. Sat(∃(Φ U Ψ)) is defined as the smallest subset Q ⊆ S
for which the following two conditions hold:

1. Sat(Ψ) ⊆ Q
2. (s ∈ Sat(Φ) ∧ (Post(s) ∩ Q) ≠ ∅) ⟹ s ∈ Q

To compute Q recursively, we first start with Q as the set that contains all the
states s′ ∈ S where Ψ holds, so we already have Sat(Ψ) ⊆ Q. In the next iteration
we add each state s ∈ Sat(Φ) having a successor in Q, so (Post(s) ∩ Q) ≠ ∅.
The second part requires an iterative computation until the set does not increase
anymore:

S0 = Sat(Ψ)
S1 = S0 ∪ Φ-states that can directly move to S0
Repeat until Sk+1 = Sk

So, S0 contains all the states where Ψ holds, S0 = Sat(Ψ). Next, we need to
find all the states s ∉ S0 with s ∈ Sat(Φ), for which it holds that there exists a
successor s′ ∈ S0 with (s, s′) ∈ T. Finally, we need to compute S1 = S0 ∪ {s} for all
found states s. We then continue with S1 instead of S0 and S2 instead of S1, until
Sk+1 = Sk. Note that this technique always terminates after a finite number of steps.
It is S0 ⊆ S1 ⊆ ... ⊆ Sk = Sk+1 = Sat(∃(Φ U Ψ)).

Figure 2.10: An example of an LTS

Example: computing Sat(∃(yellow U blue)). See the LTS given in Figure 2.10.
We compute Sat(∃(yellow U blue)) for an LTS consisting of the states {s1, s2, s3, s4}
= S with L(s1) = {yellow}, L(s2) = {yellow}, L(s3) = {blue} and L(s4) = {white},
and with the transition relation T = {(s1, s2), (s2, s3), (s3, s2), (s4, s3)}.

S0 = Sat(blue) = {s3}

s2 ∈ Sat(yellow) and (s2, s3) ∈ T and s3 ∈ S0
⟹ S1 = {s3} ∪ {s2} = {s2, s3}

s1 ∈ Sat(yellow) and (s1, s2) ∈ T and s2 ∈ S1
⟹ S2 = {s2, s3} ∪ {s1} = {s1, s2, s3}

No further states can be added: Sk+1 = Sk ⟹ Sat(∃(yellow U blue)) = {s1, s2, s3}
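The iterative computation of Sat(∃(Φ U Ψ)) can be written as a short fixpoint loop. The following Python sketch is only an illustration (the set representation of the LTS and the function name are assumptions, not part of the notes); applied to the LTS of Figure 2.10 it reproduces the result {s1, s2, s3}.

```python
def sat_exists_until(transitions, sat_phi, sat_psi):
    """Least fixpoint: start with Sat(Psi) and repeatedly add Phi-states
    that have a successor in the current set."""
    current = set(sat_psi)
    while True:
        added = {s for s in sat_phi
                 if any(t in current for (u, t) in transitions if u == s)}
        if added <= current:        # no new states: fixpoint reached
            return current
        current |= added

# LTS of Figure 2.10
T = {('s1', 's2'), ('s2', 's3'), ('s3', 's2'), ('s4', 's3')}
sat_yellow = {'s1', 's2'}           # Sat(yellow)
sat_blue = {'s3'}                   # Sat(blue)

print(sat_exists_until(T, sat_yellow, sat_blue))   # {'s1', 's2', 's3'}
```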

Computing Sat(∀(Φ U Ψ))

Sat(∀(Φ U Ψ)) consists of all the states in Sat(Φ) from where on all paths Φ holds until
Ψ holds, united with the set Sat(Ψ). Sat(∀(Φ U Ψ)) is the smallest subset Q ⊆ S such
that:

Sat(Ψ) ⊆ Q and

(s ∈ Sat(Φ) and Post(s) ⊆ Q) ⟹ s ∈ Q

We start with Q ⊆ S as the set that contains all the states s′ ∈ S where Ψ holds,
so that Sat(Ψ) ⊆ Q. In the next iteration, we add each state s ∈ Sat(Φ) having
all its successors in Q. The second part requires an iterative computation until the set
does not increase anymore:

S0 = Sat(Ψ)
S1 = S0 ∪ Φ-states that only directly move to S0
Repeat until Sk+1 = Sk

S0 contains all the states where Ψ holds, i.e. S0 = Sat(Ψ). Next, we need to
find all the states s ∉ S0 with s ∈ Sat(Φ) for which all successors are in S0, i.e.,
for all s′ with (s, s′) ∈ T it is s′ ∈ S0. Finally we need to compute S1 = S0 ∪ {s} for all
found states s. We then continue with S1 instead of S0 and S2 instead of S1, until
Sk+1 = Sk. Note that this computation always terminates after a finite number of steps.
It is S0 ⊆ S1 ⊆ ... ⊆ Sk = Sk+1 = Sat(∀(Φ U Ψ)).

Example: computing Sat(∀(yellow U blue)). See the LTS given in Figure 2.10.
We compute Sat(∀(yellow U blue)) for an LTS consisting of the states {s1, s2, s3, s4}
= S with L(s1) = {yellow}, L(s2) = {yellow}, L(s3) = {blue} and L(s4) = {white},
and with the transition relation T = {(s1, s2), (s2, s3), (s3, s2), (s4, s3)}.

S0 = Sat(blue) = {s3}

s2 ∈ Sat(yellow) and (s2, s3) ∈ T and s3 ∈ S0
⟹ S1 = {s3} ∪ {s2} = {s2, s3}

s1 ∈ Sat(yellow) and (s1, s2) ∈ T and s2 ∈ S1
⟹ S2 = {s2, s3} ∪ {s1} = {s1, s2, s3}

No further states can be added: Sk+1 = Sk ⟹ Sat(∀(yellow U blue)) = {s1, s2, s3}

Note that this is the same result as for Sat(∃(Φ U Ψ)), because all the states
that were added iteratively had only transitions into S0, S1 and S2, respectively.
This is pure coincidence.

Computing Sat(∃□Φ)

Sat(∃□Φ) consists of all the states s ∈ S for which there exists a path starting
from state s such that Φ always holds. Sat(∃□Φ) is the largest subset Q ⊆ S such
that

Q ⊆ Sat(Φ) and

s ∈ Q ⟹ (Post(s) ∩ Q) ≠ ∅

Q is a subset of the satisfaction set of Φ, Sat(Φ), and if s ∈ Q, it follows that
this state has a successor in Q, so that (Post(s) ∩ Q) ≠ ∅. We compute this set by
using the following iterative process:

S0 = Sat(Φ)
S1 = S0 ∩ states that can directly move to S0
Repeat until Sk+1 = Sk

So, S0 contains all the states where Φ holds, i.e. S0 = Sat(Φ). Next, we need to
find all the states s ∈ S0 for which there exists a successor s′ ∈ S0 with (s, s′) ∈ T,
and s is in the satisfaction set of Φ, i.e. s ∈ Sat(Φ). Finally, we need to compute
S1 as the intersection of S0 with the set of all these found states s. We then continue
with S1 instead of S0 and S2 instead of S1, until Sk+1 = Sk. Note that this computation
always terminates after a finite number of steps. It is S0 ⊇ S1 ⊇ ... ⊇ Sk = Sk+1 =
Sat(∃□Φ). Also note that a state s ∈ Sat(∃□Φ) can have a successor s′ ∉ Sat(∃□Φ).

Example: computing Sat(∃□b). See the LTS given in Figure 2.11. We compute
Sat(∃□b) for an LTS consisting of the states {s1, s2, ..., s7, s8} = S with the labels
and transitions as shown in the figure.

Figure 2.11: Another example of an LTS

S0 = Sat(b) = {s3, s6, s7, s8}

S1 = S0 ∩ states that can directly move to S0 = {s3, s6, s7, s8} ∩ {s3, s6, s7} =
{s3, s6, s7}

S2 = S1 ∩ states that can directly move to S1 = {s3, s6, s7} ∩ {s3, s7} =
{s3, s7}

S3 = S2 ∩ states that can directly move to S2 = {s3, s7} ∩ {s3, s7} =
{s3, s7} = S2 = Sat(∃□b)

Computing Sat(∀□Φ)

Sat(∀□Φ) consists of all the states s ∈ S for which on all paths starting from s,
Φ always holds. Sat(∀□Φ) is the largest subset Q ⊆ S such that

Q ⊆ Sat(Φ) and

s ∈ Q ⟹ Post(s) ⊆ Q

Q is a subset of Sat(Φ) and if s ∈ Q, it follows that this state has all its successors
in Q, so that Post(s) ⊆ Q. We compute this set by using the following iterative
process:

S0 = Sat(Φ)
S1 = S0 ∩ states that only directly move to S0
Repeat until Sk+1 = Sk

So, S0 contains all the states where Φ holds, i.e. S0 = Sat(Φ). Next, we need to
find all the states s ∈ S0 for which all successors are in S0, i.e., for all s′ with (s, s′) ∈ T
it is s′ ∈ S0, and s ∈ Sat(Φ). Finally we need to compute S1 as the intersection of S0
with the set of all these found states s. We then continue with S1 instead of S0 and
S2 instead of S1, until Sk+1 = Sk. Note that this computation always terminates within
a finite number of steps. It is S0 ⊇ S1 ⊇ ... ⊇ Sk = Sk+1 = Sat(∀□Φ). Also note
that a state s ∈ Sat(∀□Φ) cannot have any successor s′ ∉ Sat(∀□Φ).

Example: computing Sat(∀□b). See the LTS given in Figure 2.11. We compute
Sat(∀□b) for an LTS consisting of the states {s1, s2, ..., s7, s8} = S with the labels
and transitions as shown in the figure.

S0 = Sat(b) = {s3, s6, s7, s8}

S1 = S0 ∩ states that only directly move to S0 = {s3, s6, s7, s8} ∩ {s3, s6, s7} =
{s3, s6, s7}

S2 = S1 ∩ states that only directly move to S1 = {s3, s6, s7} ∩ {s3} = {s3}

S3 = S2 ∩ states that only directly move to S2 = {s3} ∩ {} = {}

Since also S4 = S3 = {}, the fixpoint is reached and Sat(∀□b) = ∅.
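The two always-operators can be computed with analogous greatest-fixpoint loops: starting from S0 = Sat(Φ), states are removed until the set no longer changes. The Python sketch below is an illustration under assumed names; the tiny LTS at the end is hypothetical (it is not the LTS of Figure 2.11, whose transition relation is only given graphically).

```python
def post(s, transitions):
    """Successor set Post(s) = {s' | (s, s') in T}."""
    return {t for (u, t) in transitions if u == s}

def sat_exists_always(sat_phi, transitions):
    """Sat(∃□Φ): keep only those states that still have a successor
    inside the current set (greatest fixpoint)."""
    current = set(sat_phi)
    while True:
        kept = {s for s in current if post(s, transitions) & current}
        if kept == current:
            return current
        current = kept

def sat_forall_always(sat_phi, transitions):
    """Sat(∀□Φ): keep only those states whose successors all lie
    inside the current set (greatest fixpoint)."""
    current = set(sat_phi)
    while True:
        kept = {s for s in current if post(s, transitions) <= current}
        if kept == current:
            return current
        current = kept

# Hypothetical toy LTS: s1 -> s2, s2 -> s2, s1 -> s3, s3 -> s1; Sat(b) = {s1, s2}.
T = {('s1', 's2'), ('s2', 's2'), ('s1', 's3'), ('s3', 's1')}
print(sat_exists_always({'s1', 's2'}, T))   # {'s1', 's2'}
print(sat_forall_always({'s1', 's2'}, T))   # {'s2'}
```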

2.4.3 Worst case time-complexity of CTL


The worst case time-complexity of model checking a CTL formula Φ for a labeled
transition system T = (S, T, L) is linear in the size of the transition system and
linear in the number of operators in the formula:

O(|Φ| · (|S| + |T|))   (2.4)

|Φ| is the number of operators (as in the parse tree), |S| is the number of states
and |T| is the number of transitions. Refer to [6, p. 355].
Chapter 3

Discrete-time Markov chains

This chapter introduces discrete-time Markov chains (DTMCs). A definition is
given in Section 3.1. Section 3.2 considers the transient evolution of such DTMCs.
A classification of the states in a DTMC is given in Section 3.3.

3.1 Definition of a DTMC


A DTMC is defined as follows:
Definition 8 (DTMC). A (labeled) discrete-time Markov chain (DTMC) is de-
scribed as a tuple (S, P, AP, L) with:

S, a countable set of states,

P : S × S → [0, 1], a stochastic matrix with ∑_{s′∈S} P(s, s′) = 1 for all s ∈ S,

AP, the set of atomic properties,

L : S → 2^AP, a labeling function which assigns to each state s ∈ S the set L(s)
of atomic propositions that are valid in the state s.

P has the following properties:

0 ≤ P(s, s′) ≤ 1 for all s, s′ ∈ S   (3.1)

∑_{s′∈S} P(s, s′) = 1 for any s ∈ S   (3.2)

Example 8 (DTMC). See the example of a DTMC M = (S, P, AP, L) given in
Figure 3.1. We can derive the probability matrix P:

P = ( 0.9  0.1
      0.4  0.6 )

Figure 3.1: An example of a DTMC

P(0, 0) = 0.9, P(0, 1) = 0.1, P(1, 0) = 0.4 and P(1, 1) = 0.6. Note that
Equation 3.1 holds: all the values in the matrix are greater than or equal to zero and
less than or equal to 1. Equation 3.2 also holds: P(0, 0) + P(0, 1) = 0.9 + 0.1 = 1
and P(1, 0) + P(1, 1) = 0.4 + 0.6 = 1.

Definition 9 (Paths). An infinite path in a DTMC M = (S, P, AP, L) is an infinite
sequence s0, s1, s2, ... of states such that for all i ≥ 0: P(si, si+1) > 0. A finite path is a
finite prefix of an infinite path. Let Paths(s) denote the set of (infinite) paths in
M which start in state s.

3.2 Transient Evolution


In the previous section, we gave the definition of a DTMC. We are now interested in
its transient evolution. The transient evolution is used to calculate the probability
that the DTMC is in a specific state after a specific number of discrete time steps.
Refer to [14, pp. 38–41].

Definition 10 (Transient evolution). For a DTMC we define ps, for a state s ∈ S,
as a function that returns the probability of being in state s after n ∈ N steps. We
define p as the corresponding vector for all states. So, formally we define:

ps : N → [0, 1]   (3.3)
p : N → [0, 1]^|S|   (3.4)

so that p(n) = (p0(n), p1(n), ..., p|S|−2(n), p|S|−1(n))   (3.5)

For example, p0(4) is the probability of being in state 0 after 4 time steps. When
we look at the example in Figure 3.1, let the initial probability distribution be
p(0) = (p0(0), p1(0)) = (1, 0), i.e., the execution starts in state 0. Note that the
sum of all elements in p(n) should always add up to 1.

Example 9 (Transient evolution). The transient evolution after one time step is
calculated as follows:

p(1) = (0.9 · p0(0) + 0.4 · p1(0), 0.1 · p0(0) + 0.6 · p1(0)) = p(0) · P

This can be seen as a matrix operation:

p(1) = p(0) · P

p(0) = (1  0),   P = ( 0.9  0.1
                       0.4  0.6 )

p(1) = (1  0) · ( 0.9  0.1
                  0.4  0.6 ) = (0.9  0.1)

If we generalize this, we get the following (for n > 0):

p(n) = p(n − 1) · P = p(0) · P^n

The probabilities to move from one state to another are given by the so-called
1-step transition probabilities:

Definition 11 (1-step transition probabilities). We define the 1-step probabilities to
move from state s ∈ S to state s′ ∈ S for the transition matrix P as:

P(s, s′) = Pr{X(1) = s′ | X(0) = s}   (3.6)
         = Pr{X(n) = s′ | X(n − 1) = s} for all n = 1, 2, ... (time-homogeneity)   (3.7)

P has eigenvalue 1 and all eigenvalues are at most 1. P^n is a stochastic matrix for
all n. P fully characterizes the DTMC.

The following equation defines the probability to move from one state to another
in a fixed number of steps:

Definition 12 (Chapman-Kolmogorov equation). The probability to move from
state s ∈ S to s′ ∈ S in a DTMC in n ≥ 0 steps is:

p_{s,s′}(n) = ∑_{s″} p_{s,s″}(l) · p_{s″,s′}(n − l) for all 0 ≤ l ≤ n

The Chapman-Kolmogorov equation establishes a relation between the n-step
transition probabilities and the l- and (n − l)-step transition probabilities. In matrix-
vector form, we get the n-step transition probability matrix P^(n) by:

P^(n) = P^(l) · P^(n−l)

For l = 1, we have: P^(n) = P^(1) · P^(n−1) = P · P^(n−1). So it follows:

P^(n) = P · P^(n−1) = ... = P^(n−1) · P^(1) = P^n

Given an initial distribution, the probability to be in state s after n steps is:

ps(n) = Pr{X(n) = s} = ∑_{s0∈S} Pr{X(0) = s0} · Pr{X(n) = s | X(0) = s0}

In matrix-vector form, given p(0):

p(n) = p(0) · P^n
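The transient distribution p(n) = p(0) · P^n can be computed directly by repeated vector-matrix multiplication. A minimal numpy sketch (variable names are illustrative) for the DTMC of Figure 3.1:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # transition matrix of the DTMC in Figure 3.1
p0 = np.array([1.0, 0.0])           # initial distribution: start in state 0

def transient(p0, P, n):
    """Compute p(n) = p(0) * P^n by n successive vector-matrix products."""
    p = p0.copy()
    for _ in range(n):
        p = p @ P
    return p

print(transient(p0, P, 1))          # [0.9 0.1], as in Example 9
print(transient(p0, P, 10))         # approaches the steady state (0.8, 0.2)
```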

3.3 Steady-state distribution and State Classification

Definition 13 (Steady-state probability). If there is a limiting distribution v, see
Equation 3.8, then Equation 3.9 holds. Note that a limiting distribution does not
always exist! The right part of Equation 3.9 is also called the normalization equation.
The left part results in a linear system of equations that are not linearly independent;
therefore the normalization equation is needed to make the system uniquely solvable.

v = lim_{n→∞} p(n) = lim_{n→∞} p(0) · P^n   (3.8)

v · (P − I) = 0 and ∑_j v_j = 1   (3.9)

Example 10 (Calculating the steady state). See the example given in Figure 3.1.
If we now build a system of linear equations with v · (P − I) = 0 (note that v is
normalised), we get:

v · (P − I) = (v1  v2) · ( −0.1   0.1
                            0.4  −0.4 ) = (0  0) ⟹ (v1  v2) = (4/5  1/5)
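The steady-state vector of Example 10 can also be obtained numerically: one of the linearly dependent equations of v · (P − I) = 0 is replaced by the normalization equation. A small numpy sketch (illustrative, not part of the notes):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

A = P - np.eye(2)             # v (P - I) = 0
A[:, -1] = 1.0                # replace the last equation by the normalization sum(v) = 1
b = np.zeros(2)
b[-1] = 1.0
v = np.linalg.solve(A.T, b)   # v A = b  <=>  A^T v^T = b^T
print(v)                      # [0.8 0.2], i.e. (4/5, 1/5)
```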

Within this section, different classifications are given for the states in a DTMC.

Definition 14 (Transient and recurrent states). The probability of returning to
state s for the first time after exactly n steps is defined as:

fs(n) = Pr{X(n) = s, X(n − 1) ≠ s, ..., X(2) ≠ s, X(1) ≠ s | X(0) = s}



fs(n) equals the conditional probability that, starting in state s at step n = 0,
i.e., X(0) = s, we do not reach state s before step n, i.e., X(n − 1) ≠ s, ..., X(2) ≠ s,
X(1) ≠ s.
The probability to eventually return to state s is:

fs = ∑_{n=1}^{∞} fs(n)

State s ∈ S is called transient if fs < 1:

there is a non-zero probability that the DTMC will not return to s.

State s ∈ S is called recurrent if fs = 1:

it is impossible to never come back to a recurrent state.

For a recurrent state s, the mean number of steps between two successive visits
to s is:

ms = ∑_{n=1}^{∞} n · fs(n)

We call s positive recurrent if ms < ∞, otherwise null recurrent.

Example 11 (Transient and recurrent states). See Figure 3.2. We have a DTMC
with S = {0, 1, ..., 10, 11}. We define five subsets of S: S1 = {0, 5}, S2 = {1, 2, 6, 7},
S3 = {3, 4}, S4 = {10}, and S5 = {8, 9, 11}. S2 can be seen as a component of the
system which is transient, because it has outgoing connections such that once you
take such a path you can never reach any of the states s ∈ S2 again. For the other
subsets S1, S3, S4 and S5, it holds that once you are in one of these recurrent subsets
and take only paths that have their origin in the subset, you can always reach all
the states of the subset eventually. I.e., the states in S2 = {1, 2, 6, 7} are transient
and the other states, S1 ∪ S3 ∪ S4 ∪ S5 = {0, 3, 4, 5, 8, 9, 10, 11}, are recurrent.

Definition 15 (Absorbing states). A state s is absorbing if and only if P(s, s) = 1.

An absorbing state is always positive recurrent; this means that the mean number
of steps needed to come back to this state is 1, i.e., ms = 1.

Recall the 1-step transition probabilities in Equation 3.7. As an example, note
that in Figure 3.2, state 10 is absorbing.

Definition 16 (Irreducibility). The states of a DTMC can be classified depending
on their properties.

Figure 3.2: An example DTMC (presented without the probabilities)

A DTMC is called irreducible if from each state s, one can reach any other
state in a finite number of steps.

A DTMC is called reducible if it is not irreducible, i.e., if there is a state s
from which one cannot reach every other state in a finite number of steps.

In a finite, irreducible DTMC all states are recurrent. We define the function
p_{s,s′}, with states s, s′ ∈ S, as the function returning the probability of going from
state s to state s′ in exactly n steps:

p_{s,s′} : N → R   (3.10)
p_{s,s′}(n) = r   (3.11)

For example, p_{0,4}(6) returns the probability of going from state 0 ∈ S to
state 4 ∈ S in exactly 6 steps.

Definition 17 (Periodicity). A state s ∈ S is periodic if

∃d > 1 : (∀n : n mod d ≠ 0 ⟹ p_{s,s}(n) = 0).   (3.12)

A state s is called ergodic if it is aperiodic and positive recurrent. In a finite,
aperiodic and irreducible DTMC, all states are ergodic. If only d(s) = 1 exists,
i.e., the period of s is 1, state s is called aperiodic.

So, s is periodic if there exists a d > 1 for which it holds that for all n with
n mod d ≠ 0, the probability of returning to s after n steps equals zero, i.e.,
p_{s,s}(n) = 0. The period d is the greatest common divisor of all values n
with p_{s,s}(n) > 0. It also holds that:

In a DTMC all states in the same component have the same period.

An irreducible DTMC is periodic if all its states are periodic.


Example 12. See Figure 3.3. We have a DTMC with S = {0, 1, 2, 3}. The initial
state is 0 ∈ S. We want to find the values of n for which p_{0,0}(n) > 0.
Note that there exists a path from state 0 to state 2, to state 3 and back to state 0.
We now know that p_{0,0}(3) > 0. Since we can repeat this cycle infinitely
often, p_{0,0}(6) > 0, p_{0,0}(9) > 0, ..., we can directly see that the greatest common
divisor of all these values of n is 3. As 3 > 1, state 0 is periodic. Note that
this DTMC is irreducible, because every state can be reached from every other state.
All the states of this DTMC are in the same component, which is recurrent, so we
can conclude that all the states have the same period. We have found out that
state 0 has period 3, so the period of all the other states must also be 3! We
can compute the transient state probabilities p(n) = p(n − 1) · P with the initial
distribution p(0) = (1, 0, 0, 0). It follows: p(1) = (0, 1/3, 2/3, 0), p(2) = (0, 0, 0, 1),
p(0) = p(3) = p(3 · i), p(1) = p(4) = p(1 + 3 · i), p(2) = p(5) = p(2 + 3 · i).

Figure 3.3: Another example DTMC

It is

P = ( 0  1/3  2/3  0
      0  0    0    1
      0  0    0    1
      1  0    0    0 )

You can see that the transient evolution is also periodic, because the states are pe-
riodic. With a period of 3, you obtain the same transient distribution every three
steps. In this case lim_{n→∞} p(n) does not exist.

Let s and s′ be mutually reachable from each other. Then it holds that:

s is transient ⟺ s′ is transient
s is null-recurrent ⟺ s′ is null-recurrent
s is positive recurrent ⟺ s′ is positive recurrent
s has period d ⟺ s′ has period d

Definition 18 (Stationary distribution). The stationary distribution of a DTMC
with matrix P is defined as π = (..., πs, ...), such that π = π · P. Stated differently:

πs = ∑_{s′} πs′ · P(s′, s).   (3.13)

In an irreducible, aperiodic and positive recurrent DTMC, the following
holds:

The limiting distribution v = lim_{n→∞} p(n) exists.

v is independent of the initial probability distribution p(0).

v is equal to the unique stationary (steady-state) probability distribution π.

So, v = π means that there exists only one solution for the following set of
equations (see also Equation 3.9):

π · (P − I) = 0 and ∑_j πj = 1   (3.14)

An irreducible and positive recurrent DTMC has a unique stationary dis-
tribution π with:

πs = 1 / ms   (3.15)

Note that the limiting distribution does not need to exist in this case, since the
DTMC could be periodic.

For more details on DTMCs and further examples, see [6, pp. 747–780] and [14,
pp. 37–43].
Chapter 4

Probabilistic Model checking

This chapter introduces the probabilistic computational tree logic, PCTL. The syn-
tax and semantics of state and path formulas in PCTL are provided in Section
4.1; Sections 4.2 and 4.3 present algorithms for model checking the different kinds
of PCTL formulas.

4.1 Syntax and semantics of PCTL


The Probabilistic computational tree logic, PCTL, is a temporal logic for describing
properties of Markov chains, especially DTMCs. PCTL is an extension of CTL
and its main addition is the probabilistic operator P. The probabilistic operator P
replaces the universal, ∀, and existential, ∃, path quantification. Remember that a
path formula must always be wrapped in such a quantification to form a state formula.

Definition 19 (State formulas for PCTL). A state formula in PCTL is defined as:

Φ ::= a | ¬Φ | Φ ∧ Φ | Φ ∨ Φ | P⊴p(ϕ)   (4.1)

See Equation 4.1. a ∈ AP is an atomic property (recall Section 2.1). We can use
the Boolean operators ¬, ∧ and ∨ to combine several state formulas. p is
a probability and ⊴ is a comparison operator; P⊴p(ϕ) indicates that the probability
that paths fulfill ϕ is ⊴ p, where ⊴ can be replaced by >, <, ≥ and ≤.
Definition 20 (Path formulas for PCTL). A path formula in PCTL is defined as:

ϕ ::= XΦ | Φ U≤k Ψ   (4.2)

Recall from Section 2.3 for CTL that X denotes the next operator and U is
called the until operator. These path operators always need to be wrapped inside
a PCTL state formula. XΦ means that the next state on the path will fulfill Φ.


Φ U≤k Ψ means that Φ holds on all the states along the path until Ψ holds, within k
steps or less. Note that when k is not provided, it means that k = ∞, e.g., Φ U Ψ
means that Φ must hold until Ψ holds, but the number of steps is not bounded. Also
note the following short-hand notations, which are denoted eventually and always,
respectively:

◊≤k Φ = tt U≤k Φ,
□Φ = ¬◊¬Φ.

An additional short-hand notation is the following, denoted as implies:

Φ → Ψ = ¬Φ ∨ Ψ.
Example 13 (PCTL formulas). Please find below some examples for PCTL formu-
las, also containing path operators:

The probability of not going down in the next state is at least 95%.

P≥0.95(X ¬down)

The probability of going down within 5 steps is at most 1%.

P≤0.01(tt U≤5 down)

The probability of going down within 5 steps after continuously operating with
at least two processors is at most 1%.

P≤0.01(2up U≤5 down)

This can also be written as follows, in case more (e.g. four) processors are
available:

P≤0.01((2up ∨ 3up ∨ 4up) U≤5 down)


Definition 21 (Satisfaction relation for PCTL). Let a ∈ AP be an atomic propo-
sition, M = (S, P, AP, L) a discrete-time Markov chain, s ∈ S a state, Φ, Ψ
PCTL state formulas and ϕ a PCTL path formula. The satisfaction relation |= is
defined for state formulas by

s |= tt
s |= a iff a ∈ L(s)
s |= Φ ∧ Ψ iff s |= Φ and s |= Ψ
s |= ¬Φ iff s ⊭ Φ
s |= P⊴p(ϕ) iff Pr{σ ∈ Paths(s) | σ |= ϕ} ⊴ p

The satisfaction set for a state formula is defined as: Sat(Φ) = {s ∈ S : s |= Φ}

Read more about PCTL in [6, pp. 780–785] and in [21, pp. 6–8]. (Note that in
[21], a different notation is used.)

4.2 Model checking the non-probabilistic part of pCTL

Checking whether a state s ∈ S in a labeled discrete-time Markov chain satisfies a
PCTL formula Φ is done in the same way as for CTL for the non-probabilistic part:

1. Compute recursively the set Sat(Φ) of states that satisfy Φ.

2. Check whether state s ∈ S belongs to Sat(Φ).

Refer to [21, pp. 9–14] and [6, pp. 785–787].

In order to model check a PCTL formula Φ, we need to compute the parse
tree of Φ. Note that path operators can never appear on their own in the nodes of
the parse tree. They always remain together with the probabilistic operator, as P⊴p(XΦ)
or P⊴p(Φ U Ψ), and only the sub-formulas that are full state formulas on their own are
extracted into the sub-nodes of the parse tree. Then the model checking routine
takes place recursively, as explained for CTL in Section 2.4.2.

4.2.1 Computing Sat(a)

See Section 2.4.2, computing Sat(a). This approach works the same for PCTL as
for CTL.

Sat(a) = {s1, s2, ..., sn−1, sn} such that for every state s ∈ Sat(a) it holds that a ∈ L(s)
(4.3)

4.2.2 Computing Sat(¬Φ)

See Section 2.4.2, computing Sat(¬Φ). This approach works the same for PCTL as
for CTL.

Sat(¬Φ) = S \ Sat(Φ)   (4.4)

4.2.3 Computing Sat(Φ ∨ Ψ)

See Section 2.4.2, computing Sat(Φ ∨ Ψ). This approach works the same for PCTL
as for CTL.

Sat(Φ ∨ Ψ) = Sat(Φ) ∪ Sat(Ψ)   (4.5)

4.2.4 Computing Sat(Φ ∧ Ψ)

See Section 2.4.2, computing Sat(Φ ∧ Ψ). This approach works the same for PCTL
as for CTL.

Sat(Φ ∧ Ψ) = Sat(Φ) ∩ Sat(Ψ)   (4.6)

4.3 Model checking the probabilistic part of pCTL

A state s ∈ S fulfills the PCTL formula P⊴p(ϕ) if the probability of taking a path
that starts in state s and fulfills ϕ is ⊴ p. ϕ can contain either a next operator, XΦ,
or an until operator, Φ U≤k Ψ. Below it is shown how to compute the satisfaction
set for PCTL formulas containing the probability operator.

4.3.1 Computing Sat(P⊴p(XΦ))

First we define:

Prob(s, XΦ) = ∑_{s′∈Sat(Φ)} P(s, s′)   (4.7)

as the probability of moving to a state where Φ holds, starting from the state s ∈ S.
The sum goes over all the states s′ where Φ holds. Now we can define:

s ∈ Sat(P⊴p(XΦ)) iff Prob(s, XΦ) ⊴ p   (4.8)

Example 14 (Computing Sat(P≥0.8(Xb))). See the labeled DTMC in Figure 4.1:
a labeled DTMC with probabilities for the transitions and labels for
the states. This DTMC consists of the states {s1, s2, ..., s5, s6} = S with transitions
{t1, t2, ..., t8, t9} = T and probability matrix P as shown below. L(s1) = {},
L(s2) = {a}, L(s3) = {}, L(s4) = {a}, L(s5) = {b} and L(s6) = {b}.

P = ( 0    0.1  0.9  0    0    0
      0.4  0    0    0.6  0    0
      0    0    0.1  0.1  0.5  0.3
      0    0    0    1    0    0
      0    0    0    0    1    0
      0    0    0    0    0.7  0.3 )

Sat(b) = {s5, s6}

Computing Prob(s, Xb) = Prob(s, X{s5, s6}) for all states s ∈ S:

Figure 4.1: An example of a DTMC

Prob(s1, X{s5, s6}) = P(s1, s5) + P(s1, s6) = 0 + 0 = 0
Prob(s2, X{s5, s6}) = P(s2, s5) + P(s2, s6) = 0 + 0 = 0
Prob(s3, X{s5, s6}) = P(s3, s5) + P(s3, s6) = 0.5 + 0.3 = 0.8
Prob(s4, X{s5, s6}) = P(s4, s5) + P(s4, s6) = 0 + 0 = 0
Prob(s5, X{s5, s6}) = P(s5, s5) + P(s5, s6) = 1 + 0 = 1
Prob(s6, X{s5, s6}) = P(s6, s5) + P(s6, s6) = 0.7 + 0.3 = 1

Check for all s ∈ S if Prob(s, X{s5, s6}) ≥ 0.8:

Prob(s1, X{s5, s6}) < 0.8 ⟹ s1 ∉ Sat(P≥0.8(Xb))
Prob(s2, X{s5, s6}) < 0.8 ⟹ s2 ∉ Sat(P≥0.8(Xb))
Prob(s3, X{s5, s6}) ≥ 0.8 ⟹ s3 ∈ Sat(P≥0.8(Xb))
Prob(s4, X{s5, s6}) < 0.8 ⟹ s4 ∉ Sat(P≥0.8(Xb))
Prob(s5, X{s5, s6}) ≥ 0.8 ⟹ s5 ∈ Sat(P≥0.8(Xb))
Prob(s6, X{s5, s6}) ≥ 0.8 ⟹ s6 ∈ Sat(P≥0.8(Xb))

Sat(P≥0.8(Xb)) = {s3, s5, s6}

If a labeled DTMC becomes very large, a better approach to compute Sat(P⊴p(XΦ))
is to multiply the probability matrix P with the vector v = (v1, v2, ..., vn−1, vn)^T,
where vi is 1 if state si ∈ Sat(Φ) and 0 otherwise. If the entry wi of the vector
w = (w1, w2, ..., wn−1, wn)^T = P · v^T matches the probability bound, then si ∈
Sat(P⊴p(XΦ)). Computing Sat(P≥0.8(Xb)) for the example given in Figure 4.1
yields:


With the indicator vector v = (0, 0, 0, 0, 1, 1)^T of Sat(b) we obtain

P · v^T = (0, 0, 0.8, 0, 1, 1)^T = w   (4.9)

v1 = v2 = v3 = v4 = 0, v5 = v6 = 1. w3, w5 and w6 are greater than or equal to 0.8,
thus Sat(P≥0.8(Xb)) = {s3, s5, s6}!
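The matrix-vector computation of Equation 4.9 takes only a few lines of numpy. The sketch below (variable names are illustrative) reproduces Sat(P≥0.8(Xb)) for the DTMC of Figure 4.1:

```python
import numpy as np

P = np.array([[0.0, 0.1, 0.9, 0.0, 0.0, 0.0],
              [0.4, 0.0, 0.0, 0.6, 0.0, 0.0],
              [0.0, 0.0, 0.1, 0.1, 0.5, 0.3],
              [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.7, 0.3]])

v = np.array([0, 0, 0, 0, 1, 1], dtype=float)   # indicator vector of Sat(b) = {s5, s6}
w = P @ v                                       # w_i = Prob(s_i, X b)
sat = {f"s{i + 1}" for i, wi in enumerate(w) if wi >= 0.8}
print(w)     # [0.  0.  0.8 0.  1.  1. ]
print(sat)   # {'s3', 's5', 's6'}
```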

4.3.2 Computing Sat(P⊴p(Φ U≤k Ψ))

First we define:

Prob(s, Φ U≤k Ψ) =
  1,                                             if s ∈ Sat(Ψ),
  ∑_{s′∈S} P(s, s′) · Prob(s′, Φ U≤k−1 Ψ),       if k > 0 and s ∈ Sat(Φ) \ Sat(Ψ),
  0,                                             otherwise.
(4.10)

In the first case of Equation 4.10, it is considered that if s ∈ Sat(Ψ), then state
s fulfills Ψ, so Prob(s, Φ U≤k Ψ) = 1. In the second case, a state can be in Sat(Φ)
and in Sat(Ψ). Note that in the first case, we have already captured all the states
that are in Sat(Ψ), so we do not need to check these states again: we consider
s ∈ Sat(Φ) \ Sat(Ψ). We are not interested in states that are neither in Sat(Φ) nor in
Sat(Ψ), because they cannot fulfill the until property. If k > 0, we need to sum,
over all outgoing transitions of state s, the transition probability P(s, s′) times the
probability that, from the successor state s′, Φ holds until Ψ holds within at most
k − 1 steps.
The first two cases cover all the possibilities; thus Prob(s, Φ U≤k Ψ) = 0 other-
wise. Now we can define:

s ∈ Sat(P⊴p(Φ U≤k Ψ)) iff Prob(s, Φ U≤k Ψ) ⊴ p   (4.11)

Note that this calculation is often labour-intensive. Next we are going to look
at methods that are more efficient for computing Sat(P⊴p(Φ U≤k Ψ)).

Reduction to transient analysis

Recall Section 3.2 (transient evolution of DTMCs). First we need to transform the
labeled DTMC M into M′ by:

Making all the Ψ-states absorbing, i.e., removing all outgoing transitions and
adding a self loop with probability 1.

Making all the states that are neither Φ-states nor Ψ-states, the (¬Φ ∧ ¬Ψ)-states,
absorbing.

Doing transient analysis: check ◊=k Ψ (eventually Ψ in exactly k steps).

It is sufficient to do this for exactly k steps, because all the Ψ-states and (¬Φ ∧ ¬Ψ)-
states have been made absorbing, so once we hit one of those states we are sure
to stay there indefinitely. In the example below we are going to see how to
do the transient analysis.

Example 15 (Computing Sat(P≥0.4((green ∨ blue) U≤3 red))). See the DTMC M
given in Figure 4.2. We first transform M to M′; the result is shown in Figure 4.3. Note
that all the Ψ-states, s3 ∈ S, and the violating states, {s4, s5} ⊆ S, have been made
absorbing. Also note that probabilities have been added. Now the transient analysis
is:

Compute the probability matrix P for M′:

P = ( 0    2/3  1/3  0
      1/2  0    1/4  1/4
      0    0    1    0
      0    0    0    1 )

Compute (v1 v2 ... vn−1 vn) · P^k (where k is the until bound) n times, altering the
vector v in the following way: (1 0 ... 0 0), (0 1 ... 0 0),
..., (0 0 ... 1 0), (0 0 ... 0 1):

q1 = (1 0 0 0) · P^3 = (0, 2/9, 11/18, 1/6)
q2 = (0 1 0 0) · P^3 = (1/6, 0, 1/2, 1/3)
q3 = (0 0 1 0) · P^3 = (0, 0, 1, 0)
q4 = (0 0 0 1) · P^3 = (0, 0, 0, 1)

For all i with si ∈ Sat(Ψ): if the entry qj,i matches the probability bound ⊴ p, then sj ∈ Sat(P⊴p(Φ U≤k Ψ)):

  q1,3 = 11/18 ≥ 0.4  ⇒  s1 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  q2,3 = 1/2   ≥ 0.4  ⇒  s2 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  q3,3 = 1     ≥ 0.4  ⇒  s3 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  q4,3 = 0     < 0.4
  q1,4 = 1/6   < 0.4
  q2,4 = 1/3   < 0.4
  q3,4 = 0     < 0.4
  q4,4 = 1     ≥ 0.4  ⇒  s4 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))

Figure 4.2: An example of a DTMC.          Figure 4.3: An example of a transformed DTMC with probabilities.

We can now conclude: {s1, s2, s3, s4} = S = Sat(P≥0.4((green ∨ blue) U≤3 red)).
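The forward scheme can be sketched with a few lines of numpy (the 4×4 matrix is the reconstructed one from above; comparing the entries in the columns of the Ψ-states against the bound then yields the satisfaction set, as in the check above):

import numpy as np

# One-step matrix of the transformed DTMC M' (Figure 4.3), states s1, ..., s4.
P = np.array([
    [0.0, 2/3, 1/3,  0.0 ],
    [0.5, 0.0, 0.25, 0.25],
    [0.0, 0.0, 1.0,  0.0 ],
    [0.0, 0.0, 0.0,  1.0 ],
])
Pk = np.linalg.matrix_power(P, 3)        # P^k for the until bound k = 3
for j in range(4):
    print("q%d =" % (j + 1), Pk[j])      # row j of P^k equals q_j = e_j * P^k

Computing the full matrix power gives all vectors q_j at once; the row-by-row computation of the example corresponds to multiplying each unit vector with P three times.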




Backwards computation

First we need to transform the DTMC M = (S, P, AP, L) into M' by:

- Making all the Ψ-states absorbing, i.e., removing all outgoing transitions and adding a self loop with probability 1.

- Making all the states that are neither Φ-states nor Ψ-states, i.e., the (¬Φ ∧ ¬Ψ)-states, absorbing.

- Doing transient analysis: check ◊=k Ψ (eventually Ψ) in exactly k steps.

In the example below we are going to see how to do the transient analysis.

Example 16 (Computing Sat(P≥0.4((green ∨ blue) U≤3 red))). See the DTMC M given in Figure 4.2. We first transform M to M', the result is shown in Figure 4.3. Note that all the Ψ-states, s3 ∈ S, and the violating states, {s4, s5} ⊆ S, have been made absorbing. Now the transient analysis is:

Compute the probability matrix P for M':

        ( 0    2/3  1/3  0   )
  P  =  ( 1/2  0    1/4  1/4 )
        ( 0    0    1    0   )
        ( 0    0    0    1   )

Create the indicator vector x = (x1 x2 ... xn−1 xn)^T: x = (0 0 1 1)^T

Compute x^T := P·x^T k times, each time using the result of the previous calculation:

  P · (0, 0, 1, 1)^T   = (1/3, 1/2, 1, 1)^T
  P · (1/3, 1/2, 1, 1)^T = (2/3, 2/3, 1, 1)^T
  P · (2/3, 2/3, 1, 1)^T = (7/9, 5/6, 1, 1)^T = x

If xi matches the probability bound ⊴ p, then si ∈ Sat(P⊴p(Φ U≤k Ψ)):

  x1 = 7/9 ≥ 0.4  ⇒  s1 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  x2 = 5/6 ≥ 0.4  ⇒  s2 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  x3 = 1   ≥ 0.4  ⇒  s3 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))
  x4 = 1   ≥ 0.4  ⇒  s4 ∈ Sat(P≥0.4((green ∨ blue) U≤3 red))

We can now conclude: {s1, s2, s3, s4} = S = Sat(P≥0.4((green ∨ blue) U≤3 red)).

Note that this method is more efficient than the forward method and the formal method.
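A minimal numpy sketch of this backward scheme (matrix and indicator vector as in Example 16; only a single vector is updated per step):

import numpy as np

P = np.array([
    [0.0, 2/3, 1/3,  0.0 ],
    [0.5, 0.0, 0.25, 0.25],
    [0.0, 0.0, 1.0,  0.0 ],
    [0.0, 0.0, 0.0,  1.0 ],
])
x = np.array([0.0, 0.0, 1.0, 1.0])   # indicator vector of the Psi-states in M'
for _ in range(3):                   # k = 3 backward steps
    x = P @ x
print(x)                             # approx. [7/9, 5/6, 1, 1]
print({i + 1 for i, xi in enumerate(x) if xi >= 0.4})   # {1, 2, 3, 4}

Each step costs one matrix-vector product, so k products suffice for all states simultaneously.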

4.3.3 Computing Sat(P⊴p(Φ U Ψ))

See the formal method above in this section. Equation 4.10 results in the following set of equations when we take the limit k = ∞:

  x = P̃ x + i                                                                       (4.12)

where i is the indicator vector i = (i1, i2, ..., in−1, in)^T with ii = 1 if si is a Ψ-state and 0 otherwise. P̃ is a matrix for which holds:

  P̃(s, s') = P(s, s') if s ∈ Sat(Φ) \ Sat(Ψ), and 0 otherwise                        (4.13)

That is, all outgoing transitions of states that are not Φ-states, as well as those of Ψ-states, are removed.

Example 17 (Computing Sat(P>0.8(a U b))). See the DTMC M given in Figure 4.1.

Compute the probability matrix P:

        ( 0    0.1  0.9  0    0    0   )
        ( 0.4  0    0    0.6  0    0   )
  P  =  ( 0    0    0.1  0.1  0.5  0.3 )
        ( 0    0    0    1    0    0   )
        ( 0    0    0    0    1    0   )
        ( 0    0    0    0    0.7  0.3 )

Compute P̃:

        ( 0    0.1  0.9  0    0    0   )
        ( 0    0    0    0    0    0   )
  P̃  =  ( 0    0    0.1  0.1  0.5  0.3 )
        ( 0    0    0    0    0    0   )
        ( 0    0    0    0    0    0   )
        ( 0    0    0    0    0    0   )

Solve x = P̃ x + i with i = (0 0 0 0 1 1)^T:

  x = (0.8, 0, 8/9, 0, 1, 1)^T


If xi matches the probability bound > 0.8, then si ∈ Sat(P>0.8(a U b)):

  x1 = 0.8  ≤ 0.8  ⇒  s1 ∉ Sat(P>0.8(a U b))
  x2 = 0    < 0.8  ⇒  s2 ∉ Sat(P>0.8(a U b))
  x3 = 8/9  > 0.8  ⇒  s3 ∈ Sat(P>0.8(a U b))
  x4 = 0    < 0.8  ⇒  s4 ∉ Sat(P>0.8(a U b))
  x5 = 1    > 0.8  ⇒  s5 ∈ Sat(P>0.8(a U b))
  x6 = 1    > 0.8  ⇒  s6 ∈ Sat(P>0.8(a U b))

We can now conclude: Sat(P>0.8(a U b)) = {s3, s5, s6}.
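In matrix form, x = P̃x + i is the linear system (I − P̃)x = i, which a numerical solver handles directly (a sketch; the tolerance on the strict bound is an implementation choice):

import numpy as np

# P-tilde from Example 17: only the rows of Sat(a) \ Sat(b) = {s1, s3} keep
# their outgoing probabilities, all other rows are zeroed.
P_tilde = np.array([
    [0.0, 0.1, 0.9, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.1, 0.5, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
])
i_vec = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0])    # i_s = 1 iff s is a b-state
x = np.linalg.solve(np.eye(6) - P_tilde, i_vec)     # solves x = P_tilde x + i
print(x)                                            # [0.8, 0, 8/9, 0, 1, 1]
print({s + 1 for s, xs in enumerate(x) if xs > 0.8 + 1e-9})   # {3, 5, 6}, strict bound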




4.3.4 Worst case time-complexity of pCTL

The worst case time-complexity of model checking a pCTL formula Φ for a DTMC M = (S, P, AP, L) is:

  O(|Φ| · kmax · |P| · |S|)                                                          (4.14)

Here |Φ| is the number of operators (as in the computation tree), kmax is the maximal step bound appearing in a sub-until-formula Φ1 U≤k Φ2 of Φ, and kmax = 1 if Φ does not contain a step-bounded until operator. |P| is the number of transitions and |S| is the number of states. Refer to [6, p. 786].
Chapter 5

CTMCs

In this chapter, we introduce the foundations of continuous-time Markov chains


(CTMCs). In Section 5.1 the class of continuous-time Markov chains with both
finite and infinite state space is presented. Paths on CTMCs and their cylinder
set are discussed in Section 5.2, before the transient state probabilities and the
steady-state probabilities are introduced in Section 5.3.

5.1 Continuous-time Markov chains


A continuous-time Markov chain (CTMC ) is a stochastic process, characterized by
a discrete state space S = {0, 1, . . .}, the continuous time domain T = [0, ∞) and
the Markov property. This property states that the probability to reside in a given
state in the near future only depends on the current state and not on the states
visited before, and neither on the already passed residence time in the current state.
We first present the definition of a labeled CTMC before we discuss its properties.

Definition 22 (CTMC). A labeled CTMC M is a tuple (S, R, L) consisting of a


countable set of states S, a transition rate matrix R : S × S → R≥0 and a labeling function L : S → 2^AP that assigns atomic propositions from a fixed finite set AP to each state.
The value R(s, s') equals the rate at which the CTMC moves from state s to state s' in one step.

Based on the transition rate matrix it is possible to express a number of other


means to describe the behavior of the CTMC. The total rate at which any transition outgoing from state s is taken is denoted

  E(s) = Σ_{s'∈S, s'≠s} R(s, s').                                                    (5.1)


A CTMC as defined above is a stochastic process {X(t) | t ∈ T}, where X(t) ∈ S is a random variable that gives the state occupied by the process at time t. For non-negative t0 < t1 < ... < tn+1 and x0, x1, ..., xn+1 ∈ S, the Markov property for a CTMC can be stated as [23]:

Pr{X(tn+1 ) = xn+1 |X(t0 ) = x0 , . . . , X(tn ) = xn } = Pr{X(tn+1 ) = xn+1 |X(tn ) = xn }.

Furthermore, we require a CTMC to be time homogeneous, that is, invariant to time shifts (with t, s ∈ T, t > s):

  Pr{X(t) = x | X(s) = xs} = Pr{X(t − s) = x | X(0) = xs}.

In a CTMC, the state residence times must be exponentially distributed; this is


a result of the required memorylessness. The probability to leave state s before time
t is exponentially distributed:

  Pr{leave s before t} = 1 − e^{−E(s)·t}.

The embedded discrete-time Markov chain corresponding to the CTMC is de-


noted as

  N(s, s') = R(s, s') / E(s)                                                         (5.2)

and expresses the probability that the CTMC moves from state s to state s' in the next step.
The rate matrix R allows for self loops in the CTMC. This can be useful, because it is possible for a CTMC derived from a high-level specification to contain self loops. For the performance measures and most of the algorithms presented here, self loops do not matter, as residence times in a CTMC obey a memoryless distribution; hence self loops can be eliminated. However, if only one-step probabilities are analyzed, self loops can make a difference.
When self loops are removed from the transition matrix, we obtain a square generator matrix Q : S × S → R, defined by

  Q = R − E,   with   E(s, s') = E(s) for s = s', and 0 otherwise.

The value Q(s, s'), for s ≠ s', equals the rate at which a transition from state s to state s' occurs in the CTMC, whereas Q(s, s) denotes the negative sum of the off-diagonal entries in the same row of Q; its value represents the rate of leaving state s (in the sense of an exponentially distributed residence time).

Example 18 (CTMC). See the example of a CTMC M = (S, R, L) given in Figure


5.1.

Figure 5.1: An example of a CTMC

We can derive the square generator matrix Q:



       ( −5    4    1 )
  Q =  ( 10  −10    0 )
       (  0    4   −4 )
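As a hedged illustration (the rate matrix R below is read off the off-diagonal entries of Q, assuming the CTMC has no self loops and no absorbing states), the exit rates E(s), the embedded DTMC N and the generator Q can be derived mechanically:

import numpy as np

R = np.array([[0.0,  4.0, 1.0],
              [10.0, 0.0, 0.0],
              [0.0,  4.0, 0.0]])    # rate matrix, no self loops assumed
E = R.sum(axis=1) - np.diag(R)      # exit rates E(s) = sum_{s' != s} R(s, s')
N = R / E[:, None]                  # embedded DTMC: N(s, s') = R(s, s') / E(s)
Q = R - np.diag(E)                  # generator matrix Q = R - E
print(E)    # [ 5. 10.  4.]
print(Q)    # [[-5. 4. 1.], [10. -10. 0.], [0. 4. -4.]]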

Definition 23 (Irreducibility). A CTMC M = (S, R, L) is called irreducible if for any two states s, s' ∈ S, there exists n ∈ N such that R^n(s, s') > 0.

The CTMC may contain states that cannot be left anymore. In this case all
outgoing rates from this state equal zero. The state is then called absorbing:

Definition 24 (Absorbing state). A state s of a CTMC M = (S, R, L) is called


absorbing if R(s, s') = 0 for all s' ∈ S.

In order to describe the evolution of a CTMC in time completely, we need initial


probabilities for the individual states. These are given by way of the initial distri-
bution, that assigns an initial probability to every state.

Definition 25 (Initial distribution). An initial distribution on a CTMC M = (S, R, L) is a function α : S → [0, 1] such that Σ_{s∈S} α(s) = 1.

Definition 26 (Recurrence). A state s is said to be recurrent if the probability to


return to that state is one. A recurrent state s is said to be positive recurrent if the
mean time between two successive visits to state s is finite.

Definition 27 (Ergodicity). A state s ∈ S of a CTMC M = (S, R, L) is called ergodic if it is positive recurrent and aperiodic. A CTMC is called ergodic if and only if the state space consists of one irreducible set of ergodic states, where from every state i ∈ S every other state j ∈ S can be reached with a positive probability within a finite number of steps.

Note that CTMCs are aperiodic by definition. CTMCs can either have a finite or a countably infinite state space S. The latter can be used to model systems with infinite server capacity, as for example the infinite-server queue, or models with infinite buffers. An important difference between finite and infinite-state CTMCs is that the corresponding transition rate matrices of infinite CTMCs are of infinite size. In the following we will deal with finite CTMCs only.

5.2 Paths
While the generator matrix only considers the one-step behavior of the CTMC, the
actual evolution of the CTMC over time is specified in detail with a path. In a path
states and transitions alternate, where the rates between any two successive states
have to be positive to assure that the path can actually be taken.

Definition 28 (Infinite paths). Let M = (S, R, L) be a CTMC. An infinite path σ is a sequence s0 −t0→ s1 −t1→ s2 −t2→ ... with, for i ∈ N, si ∈ S and ti ∈ R>0, such that R(si, si+1) > 0 for all i. A finite path σ of length l + 1 is a sequence s0 −t0→ s1 −t1→ ... −t_{l−1}→ sl such that sl is absorbing and R(si, si+1) > 0 for all i < l.

For an infinite path σ, σ[i] = si denotes, for i ∈ N, the (i + 1)st state of path σ. The time spent in state si is denoted by δ(σ, i) = ti. Moreover, with i the smallest index such that t ≤ Σ_{j=0}^{i} tj, let σ@t = σ[i] be the state occupied at time t. For finite paths σ of length l + 1, σ[i] and δ(σ, i) are defined in the way described above for i < l only, and δ(σ, l) = ∞ and σ@t = sl for t > Σ_{j=0}^{l−1} tj. Path^M(s) is the set of all finite and infinite paths of the CTMC M that start in state s, and Path^M includes all (finite and infinite) paths of the CTMC M.
Now we need a way to state the probability for a given path to be taken while time proceeds. In order to define such a probability measure Pr_α on paths, for an initial distribution α, we need to define cylinder sets first.
The cylinder set is a set of paths that is defined by a sequence of states and
time intervals of a given length k. The cylinder set then consists of all paths that
visit the states stated in the defining sequence in the right order and that change
states during the specified time intervals. Thus, the first k states of the paths in the
cylinder fit into the special structure specified through the sequence of states and
time intervals, while the further behavior remains unspecified.

Definition 29 (Cylinder set). Let s0, ..., sk ∈ S be a sequence of states with positive rates R(si, si+1) > 0 for 0 ≤ i < k, and let I0, ..., Ik−1 be nonempty intervals in R≥0. Then the cylinder set C(s0, I0, s1, I1, ..., Ik−1, sk) consists of all paths σ in Path^M(s0) such that σ[i] = si for i ≤ k and δ(σ, i) ∈ Ii for i < k.

Let α be an initial distribution of the CTMC. Then the probability measure Pr_α on cylinder sets is defined by induction on the length k of the defining sequence, as follows:

  Basis: Pr_α(C(s0)) = α(s0)
  Induction step: Pr_α(C(s0, I0, ..., sk, I', s')) =
      Pr_α(C(s0, I0, ..., sk)) · N(sk, s') · (e^{−E(sk)·a} − e^{−E(sk)·b}),

with k ≥ 0, a = inf I' and b = sup I'. If s is the only possible initial state (α(s) = 1), we write Pr_s. Recall that N(sk, s') is the one-step probability in the embedded discrete-time Markov chain, as defined in (5.2). For more details on the probability measure on paths refer to [5] and [9].

5.3 Probabilities
Based on the probability measure on paths, two different types of state probabilities
can be distinguished for CTMCs. Transient state probabilities are presented in
Section 5.3.1 and the steady-state probabilities are presented in Section 5.3.3.

5.3.1 Transient state probability


The transient state probability is a time-dependent measure that considers the CTMC M at a given time instant t. The probability to be in state s' at time instant t, given initial state s, is denoted as

  V^M(s, s', t) = Pr_s{σ ∈ Path^M(s) | σ@t = s'}.

The transient probabilities are characterized by a linear system of differential equations of possibly infinite size. Let V(t) be the matrix of transient state probabilities at time t for all possible starting states s and all possible goal states s' (we omit the superscript M for brevity here); then the so-called Kolmogorov forward equations [14]

  d/dt V(t) = V(t) · Q

describe the transient probabilities, where the initial probabilities are given as V(0). For finite CTMCs this system of equations can be solved, for example, with a Taylor series expansion or, more efficiently, with uniformization [13], also known as Jensen's method [12].

5.3.2 Uniformization method

Uniformization is a well-known, computationally efficient method for performing transient analysis on finite CTMCs. To use uniformization, first the one-step transition probability matrix P needs to be defined as

  P = I + Q/Λ,   i.e.,   Q = Λ(P − I),                                               (5.3)

where the uniformization rate Λ ≥ max_i {|q_{i,i}|}. P is a stochastic matrix, because all entries lie between 0 and 1 and the rows sum up to 1, and it describes a DTMC. Using P, we get:

  V(t) = π(0) e^{Qt} = π(0) e^{Λ(P−I)t} = π(0) e^{−ΛIt} e^{ΛPt} = π(0) e^{−Λt} e^{ΛPt}.   (5.4)

The last matrix exponential can be rewritten using a Taylor-series expansion:

  V(t) = π(0) e^{−Λt} Σ_{n=0}^{∞} (Λt)^n / n! · P^n = π(0) Σ_{n=0}^{∞} ψ(Λt; n) P^n,   (5.5)

where the Poisson probabilities

  ψ(Λt; n) = e^{−Λt} (Λt)^n / n!,   n ∈ N,                                           (5.6)

state the probability of n events occurring in the interval [0, t) in a Poisson process with rate Λ. Up to time t, exactly n jumps have taken place with probability ψ(Λt; n). The change of probability in the DTMC after n steps can be described by π(0)P^n. Thus, by the law of total probability the transient probability vector V(t) is obtained as the weighted sum Σ_{n=0}^{∞} ψ(Λt; n) π(0)P^n over all possible numbers of steps. Let π_n be the state probability distribution vector after n epochs in the DTMC with transition matrix P, which can be derived recursively:

  π_0 = π(0)   and   π_n = π_{n−1} P,   n ∈ N+.                                      (5.7)

Then, equation (5.5) can be rewritten as

  V(t) = Σ_{n=0}^{∞} ψ(Λt; n) (π(0) P^n) = Σ_{n=0}^{∞} ψ(Λt; n) π_n.                 (5.8)

To avoid the infinite series, the sum can be truncated; the resulting error can be calculated in advance. If, for example, up to K transitions are considered,

  V(t) = Σ_{n=0}^{K} ψ(Λt; n) π_n + ε(K),                                            (5.9)

where ε(K) is the error that occurs when the series is truncated after K steps:

  ε(K) = ‖ Σ_{n=K+1}^{∞} ψ(Λt; n) π_n ‖ ≤ Σ_{n=K+1}^{∞} e^{−Λt} (Λt)^n / n! = 1 − Σ_{n=0}^{K} e^{−Λt} (Λt)^n / n!    (5.10)

Furthermore, it is possible to precompute the finite number of steps K for a given time instant t and a required accuracy criterion ε. Thus, the smallest K is needed that satisfies

  Σ_{n=0}^{K} (Λt)^n / n! ≥ (1 − ε) / e^{−Λt} = (1 − ε) e^{Λt}.                       (5.11)

Example 19 (Uniformization). Figure 5.2 shows a simple CTMC. Let the initial distribution be π_0 = [1, 0].

Figure 5.2: A simple example of a CTMC

We can derive the square generator matrix Q:

  Q = ( −3   3 )
      (  2  −2 )

For Λ = 3, we obtain the one-step transition matrix P:

  P = ( 0    1   )
      ( 2/3  1/3 )

We can now calculate the transient probabilities for time t = 1:

  V(1) = π(0) Σ_{n=0}^{∞} ψ(Λt; n) P^n
       = ψ(3; 0) π(0) I + ψ(3; 1) π(0) P + ψ(3; 2) π(0) P^2 + ...
       ≈ [0.404043, 0.595957]
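A compact sketch of the truncated sum (5.9) in numpy (K = 30 is chosen generously here; in practice K would come from (5.11)):

import numpy as np
from math import exp, factorial

Q = np.array([[-3.0,  3.0],
              [ 2.0, -2.0]])
pi0 = np.array([1.0, 0.0])
lam = 3.0                              # uniformization rate Lambda >= max_i |q_ii|
P = np.eye(2) + Q / lam                # uniformized DTMC
t, K = 1.0, 30                         # time instant and truncation point

V = np.zeros(2)
pin = pi0.copy()                       # pi_n, starting with pi_0
for n in range(K + 1):
    psi = exp(-lam * t) * (lam * t) ** n / factorial(n)   # Poisson weight psi(Lambda*t; n)
    V += psi * pin
    pin = pin @ P                      # pi_{n+1} = pi_n P
print(V)                               # approx. [0.404043, 0.595957]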

5.3.3 Steady-state probability


The steady-state probabilities to be in a state s', given initial state s, are defined as

  π^M(s, s') = lim_{t→∞} V^M(s, s', t),                                              (5.12)

and indicate the probabilities to be in some state s' in the long run. For CTMCs with a finite state space, the steady-state probabilities always exist. Furthermore, if the CTMC is strongly connected, the initial state does not influence the steady-state probabilities, as the probability distribution does not depend on the progress in time (we therefore often write π(s') instead of π(s, s') for brevity). The steady-state probability vector π then follows from the possibly infinite system of linear equations and its normalization:

  π · Q = 0,   and   Σ_s π_s = 1.                                                    (5.13)

For finite CTMCs this system of linear equations can be solved with numerical
means known from linear algebra [22].
Example 20 (Steady-state probabilities for strongly connected CTMCs). Consider the example CTMC M = (S, R, L) in Figure 5.1 above. Recall that the square generator matrix Q was:

  Q = ( −5    4    1 )
      ( 10  −10    0 )
      (  0    4   −4 )

From Q and Equation 5.13 we can derive the following system of linear equations:

  −5 π0 + 10 π1          = 0
   4 π0 − 10 π1 + 4 π2   = 0
     π0         − 4 π2   = 0
     π0 +    π1 +   π2   = 1

Solving the system results in:

  π1 = (5/10) π0 = (1/2) π0
  π2 = (1/4) π0
  π0 + (1/2) π0 + (1/4) π0 = 1

  π0 = 4/7,   π1 = 2/7,   π2 = 1/7
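Numerically, one balance equation of π·Q = 0 is redundant for an irreducible CTMC, so it can be replaced by the normalization condition; a small sketch:

import numpy as np

Q = np.array([[-5.0,   4.0,  1.0],
              [10.0, -10.0,  0.0],
              [ 0.0,   4.0, -4.0]])
A = Q.T.copy()
A[-1, :] = 1.0                  # replace one (redundant) balance equation by sum(pi) = 1
b = np.zeros(Q.shape[0]); b[-1] = 1.0
pi = np.linalg.solve(A, b)      # solves pi Q = 0 together with the normalization
print(pi)                       # approx. [4/7, 2/7, 1/7]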

Steady-state probabilities with multiple BSCCs

For CTMCs with a graph that is not strongly connected, we define the bottom
strongly connected component (BSCC) as the maximal strongly connected compo-
nent in the graph without any outgoing transitions to states of another component
[5]. Note that there might be incoming transitions. So, the transient states are not
contained in any BSCC and every absorbing state results in a BSCC on its own.
For a single BSCC B ⊆ S of a CTMC M = (S, R, L), the steady-state probabilities within B can be computed according to Equation 5.13. Let Pr{reach B from s} denote the probability of eventually reaching a state in B starting from state s. It can be determined by solving the following system of equations:

  Pr{reach B from s} = 1,                                               if s ∈ B,
  Pr{reach B from s} = Σ_{s'∈S} P(s, s') · Pr{reach B from s'},         otherwise.   (5.14)

Given initial state s ∈ S, the steady-state probability of a state s' ∈ B is:

  π^M(s, s') = Pr{reach B from s} · π^B_{s'}.                                        (5.15)

Example 21 (Steady-state probabilities with multiple BSCCs). Figure 5.3 shows


a labeled CTMC with multiple BSCCs: B1 = {s1}, B2 = {s3} and B3 = {s4, s5}. Starting from the initial state s0 ∈ S, we want to calculate the steady-state probability of being in a state with the label a.

Figure 5.3: An example of a CTMC with multiple BSCCs

Only the states s1, s2 and s5 carry the label a. Since s2 is a transient state, we are only interested in B1 and B3. The steady-state probability for state s1 is:

  π^M(s0, s1) = Pr{reach B1 from s0} · π^{B1}_{s1}

with:

  Pr{reach B1 from s0} = 1/2 · Pr{reach B1 from s1} + 1/2 · Pr{reach B1 from s2}
  Pr{reach B1 from s2} = 1/2 · Pr{reach B1 from s0}
  Pr{reach B1 from s0} = 1/2 · 1 + 1/2 · (1/2 · Pr{reach B1 from s0})
  Pr{reach B1 from s0} = 2/3

  π^M(s0, s1) = 2/3 · 1 = 2/3.

The steady-state probability for state s5 is:

  π^M(s0, s5) = Pr{reach B3 from s0} · π^{B3}_{s5}

with:

  Pr{reach B3 from s0} = 1/2 · Pr{reach B3 from s2}
  Pr{reach B3 from s2} = 1/2 · Pr{reach B3 from s0} + 1/4
  Pr{reach B3 from s0} = 1/2 · (1/2 · Pr{reach B3 from s0} + 1/4)
  Pr{reach B3 from s0} = 1/6

  π^M(s0, s5) = 1/6 · 2/3 = 1/9,

where π^{B3}_{s5} = 2/3 is the steady-state probability of s5 within B3.

So the steady-state probability of being in a state with the label a is:

  2/3 + 1/9 = 7/9.
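The reachability probabilities in this example satisfy a small linear system over the transient states; a numpy sketch (the one-step probabilities among {s0, s2} and into the BSCCs are read off the equations above and are otherwise an assumption, since Figure 5.3 is not reproduced here):

import numpy as np

P_TT = np.array([[0.0, 0.5],     # transient states: s0 -> s2 with prob. 1/2
                 [0.5, 0.0]])    #                   s2 -> s0 with prob. 1/2
into_B1 = np.array([0.5, 0.0])   # one-step probability of entering B1 = {s1}
into_B3 = np.array([0.0, 0.25])  # one-step probability of entering B3 = {s4, s5}

def reach(P_TT, into_B):
    # y = P_TT y + into_B   <=>   (I - P_TT) y = into_B, cf. Eq. (5.14)
    return np.linalg.solve(np.eye(len(into_B)) - P_TT, into_B)

r1, r3 = reach(P_TT, into_B1), reach(P_TT, into_B3)
print(r1[0], r3[0])                   # 2/3 and 1/6, starting from s0
print(r1[0] * 1.0 + r3[0] * (2/3))    # weighted by the BSCC-internal probabilities: 7/9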
Chapter 6

CSL

In this chapter the logic CSL is presented as a formalism to specify complex prop-
erties on states and paths of CTMCs. In Section 6.1, the syntax and semantics of
CSL formulas are given. Section 6.2 summarizes how the next operator, the time
bounded until, the interval until and the point interval until are model checked on
finite CTMCs. In Section 6.3, we discuss the general model checking routine via
satisfaction sets.

6.1 Continuous stochastic logic (CSL)


Now that we have defined labeled CTMCs we need a formalism to specify desirable
properties on states and paths. This can be done with the continuous stochastic
logic (CSL) [3], [5], which is a stochastic extension of CTL [8].

Definition 30 (CSL). Let p ∈ [0, 1] be a real number, ⋈ ∈ {≤, <, >, ≥} a comparison operator, I ⊆ R≥0 a nonempty interval and AP a set of atomic propositions with a ∈ AP. CSL state formulas are defined by

  Φ ::= tt | a | ¬Φ | Φ ∧ Φ | S⋈p(Φ) | P⋈p(φ),

where φ is a CSL path formula defined by

  φ ::= X^I Φ | Φ U^I Φ.                                                             (6.1)

Example 22 (CSL formulas). Some examples of CSL state formulas are:

- The probability of going down within 10 time units after having continuously operated with at least 2 processors is at most 1%:

    P≤0.01((2up ∨ 3up) U^[0,10] down)

- At least 99% of the time, at least 2 processors are operating:

    S≥0.99(2up ∨ 3up)

- The steady-state probability of being in state s is at most 0.001%:

    S≤0.00001(at_s)

For a CSL state formula Φ and a CTMC M, the satisfaction set Sat(Φ) contains all states of M that fulfill Φ. Satisfaction is stated in terms of a satisfaction relation, denoted |=, as follows.

Definition 31 (Satisfaction on state formulas). The relation |= for states of a CTMC M = (S, R, L) and CSL state formulas is defined as:

  s |= tt          for all s ∈ S,
  s |= a           iff a ∈ L(s),
  s |= ¬Φ          iff s ⊭ Φ,
  s |= Φ ∧ Ψ       iff s |= Φ and s |= Ψ,
  s |= S⋈p(Φ)      iff π^M(s, Sat(Φ)) ⋈ p,
  s |= P⋈p(φ)      iff Prob^M(s, φ) ⋈ p,

where π^M(s, Sat(Φ)) = Σ_{s'∈Sat(Φ)} π^M(s, s'), and Prob^M(s, φ) describes the probability measure of all paths σ ∈ Path^M(s) that satisfy φ when the system starts in state s, that is, Prob^M(s, φ) = Pr_s{σ ∈ Path^M(s) | σ |= φ}.

The steady-state operator S⋈p(Φ) denotes that the steady-state probability for the Φ-states meets the bound ⋈ p. P⋈p(φ) asserts that the probability measure of the paths satisfying φ meets the bound ⋈ p.

Definition 32 (Satisfaction on path formulas). The relation |= for paths in a CTMC M = (S, R, L) and CSL path formulas is defined as:

  σ |= X^I Φ     iff σ[1] is defined and σ[1] |= Φ and δ(σ, 0) ∈ I,
  σ |= Φ U^I Ψ   iff ∃t ∈ I : (σ@t |= Ψ ∧ (∀t' ∈ [0, t) : σ@t' |= Φ)).               (6.2)

We consider the time interval of the next operator to be I = [t1, t2] for t1, t2 ∈ R≥0. The next operator X^[t1,t2] Φ then states that a transition to a Φ-state is made during the time interval [t1, t2]. The until operator Φ U^I Ψ asserts that Ψ is satisfied at some time instant t ∈ I and that at all preceding time instants Φ holds.
In the following, we deal with five different time intervals for the until operator:

- the bounded until operator with interval I = [0, t] for t ∈ R>0,

- the time interval until with I = [t1, t2] for t1, t2 ∈ R>0 and t1 < t2,

- the point interval until with I = [t, t] for t ∈ R,

- the unbounded until operator with interval I = [0, ∞), and

- the unbounded until operator with I = [t, ∞) for t ∈ R>0.

Note that the path formula Φ U^I Ψ is not satisfiable for I = ∅. For a more detailed description of CSL, see [5].

6.2 Model checking finite state CTMCs


Baier et al. proposed numerical methods for model checking CSL formulas over finite-state CTMCs [5]. We briefly rehearse the approach developed there, as it also forms the basis for model checking infinite-state CTMCs.
To model check the next operator φ = X^I Φ we need the one-step probabilities to reach a state that fulfills Φ within a time in I.

Proposition 1 (The next operator [5]). For s ∈ S, an interval I ⊆ R≥0 and a CSL state formula Φ:

  Prob(s, X^I Φ) = (e^{−E(s)·inf I} − e^{−E(s)·sup I}) · Σ_{s'|=Φ} R(s, s') / E(s).   (6.3)
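As a hedged sketch (the rate values below are illustrative; only Eq. (6.3) itself is taken from the text), the next-operator probability can be evaluated directly:

import numpy as np

def prob_next(R, sat_phi, s, a, b):
    """Prob(s, X^[a,b] Phi) per Eq. (6.3): leave s within [a, b] and jump to a Phi-state."""
    E_s = R[s].sum() - R[s, s]                       # exit rate E(s)
    jump = sum(R[s, sp] for sp in sat_phi) / E_s     # embedded one-step probability
    return (np.exp(-E_s * a) - np.exp(-E_s * b)) * jump

R = np.array([[0.0,  4.0, 1.0],       # illustrative rate matrix
              [10.0, 0.0, 0.0],
              [0.0,  4.0, 0.0]])
print(prob_next(R, sat_phi={1}, s=0, a=0.0, b=0.5))   # P(jump to s1 during [0, 0.5])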

In [5], it is shown that model checking the time bounded until, the interval until,
and the point interval until can be reduced to the problem of computing transient
probabilities for CTMCs. The idea is to use a transformed CTMC where several
states are made absorbing. As introduced in [5] this proceeds as follows:

Definition 33 (Absorbing). For a CTMC M = (S, R, L) and a CSL state formula Φ, let the CTMC M[Φ] result from M by making all Φ-states in M absorbing, i.e., M[Φ] = (S, R', L), where R'(s, s') = R(s, s') if s ⊭ Φ and 0 otherwise.
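This transformation is a one-liner on the rate matrix (a sketch; the example matrix is the illustrative one used above):

import numpy as np

def make_absorbing(R, sat_phi):
    """Rate matrix of M[Phi]: zero all outgoing rates of the Phi-states (Definition 33)."""
    R_new = R.copy()
    for s in sat_phi:
        R_new[s, :] = 0.0
    return R_new

R = np.array([[0.0,  4.0, 1.0],
              [10.0, 0.0, 0.0],
              [0.0,  4.0, 0.0]])
print(make_absorbing(R, sat_phi={1}))    # state s1 becomes absorbing

Chaining two such calls, e.g. first for the Ψ-states and then for the (¬Φ ∧ ¬Ψ)-states, yields the transformed chain used in the propositions below.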

The CSL path formula φ = Φ U^[0,t] Ψ is valid if a Ψ-state is reached before time t via some Φ-path (that is, a path visiting only Φ-states). As soon as a Ψ-state is reached, the future behavior of the CTMC is irrelevant for the validity of φ. Thus all Ψ-states can be made absorbing without affecting the satisfaction set of formula φ. As soon as a (¬Φ ∧ ¬Ψ)-state is reached, φ will be invalid, regardless of the future evolution of the system. As a result we may switch from M to M[Ψ][¬Φ ∧ ¬Ψ] = M[¬Φ ∨ Ψ], as explained in [5].

Proposition 2 (Time bounded until [5]). For any CTMC M = (S, R, L):

  Prob^M(s, Φ U^[0,t] Ψ) = Prob^{M[¬Φ∨Ψ]}(s, Φ U^[0,t] Ψ) = Σ_{s'|=Ψ} V^{M[¬Φ∨Ψ]}(s, s', t).   (6.4)

For the interval until with time bound I = [t1, t2], 0 < t1 ≤ t2, we again follow the idea of CSL model checking. It is important to note that

  Prob(s, Φ U^[t1,t2] Ψ) ≠ Prob(s, Φ U^[0,t2] Ψ) − Prob(s, Φ U^[0,t1] Ψ).

For model checking a CSL formula that contains the interval until operator, we need to consider all possible paths that start in a Φ-state at the current time instant and reach a Ψ-state during the time interval [t1, t2] by only visiting Φ-states on the way. We can split such a path into two parts: the first part models the path from the starting state s to a Φ-state s', and the second part the path from s' to a Ψ-state s'' via Φ-states only. We therefore need two transformed CTMCs: M[¬Φ] and M[¬Φ ∨ Ψ], where M[¬Φ] is used for the first part of the path and M[¬Φ ∨ Ψ] for the second. In the first part of the path we only proceed along Φ-states; thus all states that do not satisfy Φ do not need to be considered and can be made absorbing. As we want to reach a Ψ-state via Φ-states in the second part, we can make all states that do not fulfill Φ absorbing, because we cannot proceed along these states, and also all states that fulfill Ψ, because we are done as soon as we reach such a state.
In order to calculate the probability for such a path, we accumulate the multiplied transition probabilities for all triples (s, s', s''), where s' |= Φ is reached before time t1 and s'' |= Ψ is reached before time t2 − t1. This can be done because we use CTMCs that are time homogeneous.

Proposition 3 (Interval until [5]). For any CTMC M = (S, R, L) and 0 < t1 < t2:

  Prob^M(s, Φ U^[t1,t2] Ψ) = Σ_{s'|=Φ} Σ_{s''|=Ψ} V^{M[¬Φ]}(s, s', t1) · V^{M[¬Φ∨Ψ]}(s', s'', t2 − t1).   (6.5)

The point interval until can be seen as a simplification of the interval until, where the second part of the computation does not need to be considered. The CSL path formula φ = Φ U^[t,t] Ψ is valid if a Ψ-state is reached at time t via Φ-states only; hence all states that do not satisfy Φ do not need to be considered and can be made absorbing. In the goal state s', both Φ and Ψ have to be valid.

Proposition 4 (Point interval until [5]). For any CTMC M = (S, R, L):

  Prob^M(s, Φ U^[t,t] Ψ) = Prob^{M[¬Φ]}(s, Φ U^[t,t] Ψ) = Σ_{s'|=Φ∧Ψ} V^{M[¬Φ]}(s, s', t).   (6.6)

6.2.1 Worst case time-complexity of CSL


The worst case time-complexity of model checking a CSL formula Φ for a CTMC M = (S, R, L) is:

  O(|Φ| · (|R| · Λ · tmax + |S|^2.81))                                               (6.7)

|Φ| is the number of operators (as in the computation tree), |R| is the number of non-zero entries in the rate matrix R and Λ is the uniformization rate (see Section 5.3.2). tmax is the maximal time bound appearing in a sub-until-formula Φ1 U^I Φ2 of Φ, and tmax = 1 if Φ does not contain a time-bounded until operator. |S| is the number of states. For a detailed explanation of the worst case time-complexity of CSL, refer to [5].

6.3 General model checking routine


One possibility for model checking that we are going to use is to compute the satisfaction set Sat(Φ) = {s ∈ S | s |= Φ} for a given CSL formula Φ. For every state s ∈ S it can then be checked whether s |= Φ by verifying whether s ∈ Sat(Φ).
The construction of Sat(Φ) is done recursively and follows the inductive structure of the CSL syntax. A CSL formula Φ is split into its sub-formulas and for every sub-formula the model checker is invoked recursively, as illustrated in Algorithm 1. All seven CSL operators, as addressed in Section 6.1, are covered and a satisfaction set is returned. The satisfaction set resulting from a steady-state formula is denoted Sat_S, the satisfaction set resulting from a next formula is denoted Sat_X and the satisfaction set resulting from an until formula Sat_U, respectively.

Algorithm 1 Sat(Φ : CSL state formula) : set of states

begin
  if Φ = tt then
    return S;
  else if Φ = a ∈ AP then
    return {s ∈ S | a ∈ L(s)};
  else if Φ = Φ1 ∧ Φ2 then
    return Sat(Φ1) ∩ Sat(Φ2);
  else if Φ = ¬Φ1 then
    return S \ Sat(Φ1);
  else if Φ = S⋈p(Φ1) then
    return Sat_S(⋈ p, Φ1);
  else if Φ = P⋈p(X^I Φ1) then
    return Sat_X(⋈ p, I, Φ1);
  else if Φ = P⋈p(Φ1 U^I Φ2) then
    return Sat_U(⋈ p, I, Φ1, Φ2);
  else
    no valid CSL operator;
  end if
end
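A minimal Python sketch of this recursion (the tuple encoding of formulas and the delegation points are assumptions; the propositional cases suffice to illustrate the structure):

def sat(phi, S, L):
    """Sat(phi) for CSL formulas encoded as nested tuples, e.g. ('and', ('ap', 'up'), ('not', ('ap', 'down')))."""
    kind = phi[0]
    if kind == 'tt':
        return set(S)
    if kind == 'ap':                         # atomic proposition
        return {s for s in S if phi[1] in L[s]}
    if kind == 'and':
        return sat(phi[1], S, L) & sat(phi[2], S, L)
    if kind == 'not':
        return set(S) - sat(phi[1], S, L)
    if kind in ('S', 'P'):                   # delegate to Sat_S / Sat_X / Sat_U (Section 6.2)
        raise NotImplementedError("numerical routines of Section 6.2 go here")
    raise ValueError("no valid CSL operator")

S = {0, 1, 2}
L = {0: {'up'}, 1: {'up'}, 2: {'down'}}
print(sat(('and', ('ap', 'up'), ('not', ('ap', 'down'))), S, L))   # {0, 1}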
Bibliography

[1] M. Ajmone Marsan, G. Balbo, G. Conte, S. Donatelli, and G. Franceschinis.


Modeling with Generalised Stochastic Petri Nets. John Wiley & Sons, 1995.
[2] R. Allan. Reliability Evaluation of Power Systems. Springer US, 2013.
[3] A. Aziz, K. Sanwal, and R. Brayton. Model checking continuous-time Markov chains. ACM Transactions on Computational Logic, 1(1):162-170, 2000.
[4] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. On the logical characterisation of performability properties. In Proc. 27th Int. Colloquium on Automata, Languages and Programming (ICALP'00), volume 1853 of LNCS, pages 780-792. Springer, 2000.
[5] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. Model-checking algorithms for continuous-time Markov chains. IEEE Transactions on Software Engineering, 29(6):524-541, 2003.
[6] C. Baier and J.-P. Katoen. Principles of Model Checking (Representation and
Mind Series). The MIT Press, 2008.
[7] P. Buchholz, J.-P. Katoen, P. Kemper, and C. Tepper. Model checking large structured Markov chains. Journal of Logic and Algebraic Programming, 56(1-2):69-97, 2003.
[8] E. Clarke, E. Emerson, and A. Sistla. Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Transactions on Programming Languages and Systems, 8(2):244-263, 1986.
[9] L. Cloth. Model checking algorithms for Markov Reward Models. PhD thesis,
University of Twente, 2006.
[10] A. Conway and N. Georganas. Queueing Networks: Exact Computational Anal-
ysis. MIT Press, 1989.
[11] D. D'Aprile, S. Donatelli, and J. Sproston. CSL model checking for the GreatSPN tool. In Proc. 19th Int. Symposium on Computer and Information Sciences (ISCIS'04), volume 3280 of LNCS, pages 543-552. Springer, 2004.


[12] W. Grassmann. Finding transient solutions in Markovian event systems through randomization. In Numerical Solution of Markov Chains, pages 411-420. Dekker, 1991.

[13] D. Gross and D. Miller. The randomization technique as a modeling tool and solution procedure for transient Markov processes. Operations Research, 32(2):343-361, 1984.

[14] B. Haverkort. Performance of Computer Communication Systems. John Wiley


and Sons, 1999.

[15] H. Hermanns, U. Herzog, and J.-P. Katoen. Process algebra for performance evaluation. Theoretical Computer Science, 274(1-2):43-87, 2002.

[16] H. Hermanns, J.-P. Katoen, J. Meyer-Kayser, and M. Siegle. A tool for model-checking Markov chains. International Journal on Software Tools for Technology Transfer, 4(2):153-172, 2003.

[17] J. Hillston. A Compositional Approach to Performance Modeling. Cambridge University Press, 1996.

[18] J.-P. Katoen. Concepts, Algorithms, and Tools for Model Checking. Arbeitsberichte des IMMD 32(1), Friedrich-Alexander-Universität Erlangen-Nürnberg, 1999.

[19] J.-P. Katoen, M. Khattri, and I. Zapreev. A Markov Reward Model Checker. In Proc. 3rd Int. Conference on the Quantitative Evaluation of Systems (QEST'06), pages 243-244. IEEE Press, 2005.

[20] M. Kwiatkowska, G. Norman, and D. Parker. Probabilistic symbolic model checking with PRISM: a hybrid approach. International Journal on Software Tools for Technology Transfer, 6(2):128-142, 2004.

[21] M. Kwiatkowska, G. Norman, and D. Parker. Stochastic model checking. In M. Bernardo and J. Hillston, editors, Formal Methods for the Design of Computer, Communication and Software Systems: Performance Evaluation (SFM'07), volume 4486 of LNCS (Tutorial Volume), pages 220-270. Springer, 2007.

[22] W. J. Stewart. Introduction to the Numerical Solution of Markov Chains.


Princeton University Press, 1994.

[23] K. Trivedi. Probability and Statistics with Reliability, Queueing and Computer Science Applications. John Wiley & Sons, 2002.
