Anne Ku
February 1995
ABSTRACT
Capacity planning has always been subject to major uncertainties, but privatisation of the
U.K. electricity supply industry (ESI) has introduced the additional risks of business and
market failure. To meet these broader modelling requirements, two radically different
approaches characterised by model synthesis and flexibility are investigated.
Ideally, by using more than one technique, model synthesis should be more capable of
meeting the conflicting criteria of comprehensibility and comprehensiveness. The
noticeable trend of building bigger energy models supports this view in practice. A case
study based modelling experiment was conducted to compare replications of traditional
approaches with prototypes of synthesis. The conclusion from this is that the pursuit of
greater model comprehensiveness through model synthesis is an elusive and ultimately
impractical objective.
Rather than rigorous modelling for completeness, flexibility introduces an entirely different
treatment of uncertainty. Flexibility has received much attention lately, but its usefulness
for modelling uncertainty in this context remains under-researched. In this respect, flexibility is
studied 1) as a decision criterion, 2) as a feature of the modelling approach, and 3) in
contrast to robustness.
The seemingly feasible answer of model synthesis is fraught with conceptual and
operational difficulties. The less obvious concept of flexibility offers a more promising
and useful framework. Instead of modelling uncertainty for completeness, this thesis
promotes modelling flexibility for contingency.
To my parents,
ACKNOWLEDGEMENTS
Writing this thesis was like putting together a big jigsaw puzzle. Without the clues along
the way, the final picture might have appeared much later and perhaps fuzzier. I would
like to thank the kind individuals who supplied these clues and some of the missing pieces.
First of all, I thank my supervisor Derek Bunn, for his patience and wisdom in guiding me
to the end and for his sustained interest as my thesis evolved. I would also like to thank
Kiriakos Vlahos for suggesting the idea of flexibility in the first place and for his considerable
help and feedback at various stages of my research.
I would like to thank the following individuals for intellectual input: Hans Christian
Reinhardt for stimulating discussions on the concept of flexibility and my research as a
whole; François Longin for application of flexibility and robustness; Jonathan Levie and
Mike Staunton for detailed comments on writing and content; Isaac Dyner for feedback on
model synthesis; and Stephen Watson for clarification of the initial literature review on
flexibility.
I would like to thank the LBS Library staff, in particular, John Hall and Lynne Powell for
inter-library loans. I am also deeply grateful to my friend Zakia Mehdi for lending me her
notebook computer in the final stages and Eduardo Ayrosa for configuring it to my
specifications. I also acknowledge the advice and encouragement of other colleagues and
friends who have shared the PhD experience with me.
Chance favours the prepared mind.
TABLE OF CONTENTS
Abstract ........................................................................................ 3
Acknowledgements ........................................................................................ 5
2.7.4 Technology ........................................................................................ 67
2.7.8 Environment........................................................................................ 75
3.5.1 Commercially Available Software........................................................ 116
4.6 Motivation for Flexibility.............................................................................. 171
A.5 Sensitivity Analysis....................................................................................... 200
B.4.1 Description of Approach ..................................................................... 257
5.6 Technology / Information Systems / Telecommunications .............................. 287
7.3.1 Relative Flexibility Benefit .................................................................. 335
8.5.1 Partitioning, Sequentiality, Staging ...................................................... 396
D.2 Simple Example: no lead time, demand = supply, planned = actual levels....... 410
D.5 The UK Electricity Supply Industry .............................................................. 424
Chapter 9 Conclusions
LIST OF TABLES
Chapter 1
Table 1.1 Research Questions and Methodology ............................................................36
Chapter 2
Table 2.1 Privatised Structure in England and Wales ....................................................40
Table 2.2 Comparison of Industry Structures................................................................46
Table 2.3 Evolution of Electricity Planning in the USA ..................................................52
Table 2.4 Important Factors in Capacity Planning..........................................................61
Table 2.5 Fuel/Technology Comparisons ......................................................................69
Table 2.6 Model Requirements for Capacity Planning....................................................79
Chapter 3
Table 3.1 Arguments For and Against Risk Analysis....................................................105
Table 3.2 Steps in Decision Analysis ...........................................................................106
Table 3.3 Pros and Cons of Decision Analysis .............................................................108
Table 3.4 Critique of Techniques .................................................................................126
Chapter 4
Table 4.1 Model Evaluation and Comparison Criteria ..................................................139
Table 4.2 Unprotected but Dominant Utility: National Power .......................................142
Table 4.3 Protected but Competitive Utility .................................................................143
Table 4.4 Unprotected but Encouraged Utility..............................................................144
Table 4.5 Comparison With Respect to Evaluation Criteria..........................................149
Table 4.6 Summary of Approaches..............................................................................150
Table 4.7 Major Concerns in Model Synthesis .............................................................155
Table 4.8 Structuring Issues ........................................................................................157
Table 4.9 Dependent Variables in the Reduced Model ..................................................163
Table 4.10 Independent Variables in the Reduced Model..............................................164
Table 4.11 Difficulties of Model Synthesis Implementation ..........................................169
Table 4.12 Completeness and Unease .......................................................................172
Appendix A
Table A.1 Consolidated Range .................................................................................... 200
Table A.2 UK Parameters ........................................................................................... 203
Table A.3 Base Costs for the UK ................................................................................ 204
Table A.4 Simulation Parameters for Nuclear.............................................................. 212
Table A.5 Simulation Values for Coal ......................................................................... 214
Table A.6 Simulation Values for Gas .......................................................................... 216
Appendix B
Table B.1 Sources of Information................................................................................ 229
Table B.2 Status of Plant ............................................................................................ 231
Table B.3 Existing Plant as at July 1993 ..................................................................... 232
Table B.4 Summary of All Plant in England and Wales NGC System as at July 1993 .. 236
Table B.5 Input Files to ECAP.................................................................................... 255
Table B.6 Output Files from ECAP............................................................................. 256
Chapter 5
Table 5.1 Uses of Flexibility ....................................................................................... 292
Chapter 6
Table 6.1 Flexibility and Robustness ........................................................................... 297
Table 6.2 Gerwin's (1993) Methods of Coping With Uncertainty................. 304
Table 6.3 Response to Areas of Uncertainties in Chapter 2 .......................................... 304
Table 6.4 Mandelbaum (1978) .................................................................................... 315
Chapter 7
Table 7.1 Elements and Indicators of Flexibility...........................................................335
Table 7.2 Annual Costs ...............................................................336
Table 7.3 Comparison of Expected Value Measures.....................................................353
Table 7.4 Equal Entropies for Different Number of States............................................369
Chapter 8
Table 8.1 Problem Categories and Expected Value Measures .......................................386
Table 8.2 Areas of Uncertainties Affecting Costs and Revenues ...................................388
Appendix D
Table D.1 Terminology and Notations .........................................................................411
Table D.2 Lead Time and Cost of Not Meeting Demand ..............................................418
Table D.3 Preferences with respect to Risk Attitude when P(Dt > Qmax) >0 ................419
Table D.4 Conditions for Robustness and Flexibility....................................................420
Chapter 9
Table 9.1 Research Questions and Answers .................................................................435
Table 9.2 Flexibility ..............................................................................................441
LIST OF FIGURES
Chapter 1
Figure 1.1 Organisation of Thesis ..................................................................................30
Chapter 2
Figure 2.1 Privatised Industry Structure in the UK........................................................40
Figure 2.2 Spiral of Impossibility .................................................................................54
Chapter 3
Figure 3.1 Decomposition Methods................................................................................89
Figure 3.2 Scenario Planning Process ..........................................................................100
Figure 3.3 The Over and Under Model........................................................................110
Figure 3.4 Matrix Model of Decisions and Outcomes..................................................112
Figure 3.5 Decision Tree in SMARTS ........................................................................114
Figure 3.6 Technology Choice Decision Tree ...............................................................115
Figure 3.7 Technology Choice Objectives Hierarchy ....................................................116
Figure 3.8 Optimisation Grid.......................................................................................118
Figure 3.9 Decision Tree with Optimisation Algorithm.................................................120
Figure 3.10 Scenario / Decision Analysis .....................................................................121
Figure 3.11 Decision Tree of New Technology Evaluation ...........................................123
Figure 3.12 SMARTE Methodology ............................................................................124
Figure 3.13 OR Techniques .........................................................................................130
Chapter 4
Figure 4.1 Experimental Protocol ................................................................................135
Appendix A
Figure A.1 Uncertainty Modelling................................................................................181
Figure A.2 Horizontal Analysis of Value Ranges .........................................................182
Figure A.3 Vertical Analysis of Cost Contribution .......................................................184
Figure A.4 Factors Influencing Cost ............................................................................ 185
Figure A.5 Carbon Tax Calculations for Coal-fired Plants ........................................... 193
Figure A.6 Contribution to Final Cost ......................................................................... 204
Figure A.7 UK Coal vs Nuclear Trade-off Curves with $3 Carbon Tax ....................... 205
Figure A.8 UK Coal vs Nuclear Trade-off Curves with $10 Carbon Tax...................... 206
Figure A.9 Coal ............................................................................................. 207
Figure A.10 Nuclear ............................................................................................. 208
Figure A.11 Risk Profiles for Nuclear ......................................................................... 213
Figure A.12 Risk Profiles for Coal .............................................................................. 215
Figure A.13 Risk Profiles for Gas ............................................................................... 217
Figure A.14 Trade-off Curves for Coal, Nuclear, and Gas (no tax) .............................. 218
Figure A.15 Most Likely Case..................................................................................... 218
Figure A.16 Most Expensive Case............................................................................... 219
Figure A.17 Carbon Tax on Coal ................................................................................ 220
Figure A.18 Carbon Tax on Gas ................................................................................. 220
Figure A.19 Modelling Directions................................................................................ 222
Appendix B
Figure B.1 Load Duration Curves for Demand Uncertainty.......................................... 239
Figure B.2 Scenario Generation................................................................................... 240
Figure B.3 Replication of the Probabilistic Approach................................................... 253
Figure B.4 Prototype One: Single Project.................................................................... 260
Figure B.5 Prototype Two: Marginal Cost Analysis .................................................... 261
Appendix C
Figure C.1 Similarities of Techniques.......................................................................... 265
Figure C.2 Risk Analysis and Decision Analysis.......................................................... 267
Figure C.3 Types of Model Linkages........................................................................... 272
Chapter 6
Figure 6.1 Conceptual Framework............................................................................... 295
Chapter 7
Figure 7.1 Hobbs Example.........................................................................................337
Figure 7.2 Expected Conditions ...................................................................................339
Figure 7.3 Investment Y ...........................................................341
Figure 7.4 General Structure of Normalisation.............................................................345
Figure 7.5 Expected Conditions ...................................................................................346
Figure 7.6 Schneeweiss and Kühn ...............................................347
Figure 7.7 EVPI ..............................................................................................350
Figure 7.8 Relative Flexibility Benefit..........................................................................350
Figure 7.9 Deterministic EV ........................................................................................352
Figure 7.10 Notation ..............................................................................................357
Figure 7.11 Maximum Entropy as a Function of States ................................................358
Figure 7.12 Entropy and Standard Deviation................................................................358
Figure 7.13 State Discrimination .................................................................................359
Figure 7.14 Decomposition Rule..................................................................................360
Figure 7.15 Decision Tree Transformation for Entropic Treatment...............................366
Chapter 8
Figure 8.1 Decision Tree of Generic Example ..............................................................383
Figure 8.2 Influence Diagram of Generic Example .......................................................384
Figure 8.3 Hirst's (1989) Example ..............................................................387
Figure 8.4 Electricity Planning Example ......................................................................389
Figure 8.5 Plant Economics Influence Diagram............................................................391
Figure 8.6 Plant Economics Decision Tree...................................................................392
Figure 8.7 Pool Price Influence Diagram......................................................................394
Figure 8.8 Pool Price Decision Tree.............................................................................395
Figure 8.9 Partitioning ..............................................................................................397
Figure 8.10 Sequentiality and Staging..........................................................................398
Figure 8.11 Flexibility by Plant Lives ..........................................................................399
Figure 8.12 Postponement and Deferral Decision Tree .................................................402
Figure 8.13 Deferral with respect to Market and Plant Uncertainty...............................403
Figure 8.14 Diversity Influence Diagram .....................................................................404
Figure 8.15 Diversity Decision Tree ............................................................................405
Appendix D
Figure D.1 Costs of holding and production: Ch and Cp ............................................. 412
Figure D.2 Relationship between Iopt and Ch, Cp ........................................................ 416
Figure D.3 Cost of extra production Cxp ..................................................................... 420
Chapter 9
Figure 9.1 Research Messages..................................................................................... 434
CHAPTER 1
1.1 Motivation
The electricity supply industry (ESI) is one of the most capital intensive of all
industries, with huge investments in power stations that are expected to pay off
over several decades. Long construction lead times and operating lives imply the
need for capacity planning to determine the types, sizes, and timing of new plants
to be built as older plants are retired. These decisions are made in the face of great
uncertainty, and the often irreversible commitments are translated into future costs.
attitudes, new commitments may quickly become obsolete and inadequate. The
privatisations of recent years, especially in the UK, have added to the uncertainties
and priorities in the UK and elsewhere have redefined what constitutes capacity
costly implications. One trend has been to build larger energy models, e.g. NEMS
project in the US (DOE, 1994). However, larger models are more difficult for the
those employing single techniques, have difficulty meeting the conflicting criteria of
overcome the deficiencies of individual techniques and yet exploit the synergies
between them. Other than developing new modelling languages to this end, the
having different options available. Flexible technologies, such as short lead time,
modular, dual-purpose plant, are promoted in the electricity planning literature, e.g.
CIGRE (1991, 1993). Under great uncertainty, it has been suggested that
flexibility has become an important operational objective, not only desirable but
also necessary (Slack, 1988). This suggests that flexibility may become more
Although the modelling literature tends to support model synthesis, its feasibility
has not yet been fully established, particularly the conceptual and operational
difficulties involved. Similarly, while the electricity industry calls for flexibility, its usefulness in modelling has not
been specifically demonstrated. The next section thereby lists a number of specific
research questions related to these two themes of model synthesis and flexibility.
1.2 Research Questions
How can we deal with the range of uncertainties more completely and adequately
than existing approaches in capacity planning, i.e. to meet the conflicting criteria of
comprehensibility and comprehensiveness? This thesis investigates model synthesis
and flexibility as two ways to answer this question, with emphasis on the
conceptual aspects.
Driving the main argument is a list of questions on the issues involved in model synthesis and flexibility:
1) What are the new requirements for capacity planning in the privatised and
restructured UK ESI?
2) What are existing approaches to this problem and how well do they treat these
uncertainties?
4) Is model synthesis feasible and practical for these purposes? What are the
conceptual and operational issues involved in model synthesis?
5) What is flexibility? How is it defined? How does it relate to other words and
concepts?
1.3 Research Methodology
the new requirements for capacity planning. A classification of areas and types of
and modelling methods used in the UK and elsewhere reveals the limitations of
existing approaches. These two reviews also provide the criteria for subsequent
evaluation of models.
To investigate the feasibility and practicality of model synthesis and to compare its
found in the literature and unifies the definitions of flexibility. It also provides the
favourability, distinction between options and strategies for operationalisation,
indicators also provide the basic terminology and framework for modelling
This thesis consists of two independent and separately argued texts corresponding
to two entirely different ways of approaching the problem. Part One investigates
unease and as a more practical means of coping with uncertainties. Initially, the
two themes seem totally unrelated. It is not until Chapter 9 that they are brought
first three appendices. Brief summaries of each chapter and appendix follow.
Figure 1.1 Organisation of Thesis
[Diagram: Chapter 2, Introduction: Uncertainties in Electricity Generation, branches into two parallel parts. Part One covers the pilot study (Appendix A), the three archetypal approaches (Appendix B), and a conceptualisation of model synthesis (Appendix C). Part Two covers measuring flexibility (Chapter 7), modelling flexibility (Chapter 8), and flexibility and robustness (Appendix D). The two parts converge on Chapter 9, Conclusions.]
This introductory chapter explains the title of this thesis by defining electricity
capacity planning and discussing the uncertainties which affect it. The
type, e.g. internal and external, quantitative and qualitative, etc. Uncertainties that
affect capacity planning in the ESI are classified according to area, e.g. demand,
fuel, and environment. It introduces the criteria of adequacy, completeness,
decision analysis, these techniques are briefly defined and evaluated on the basis of
complementary techniques.
This chapter documents the research into model synthesis, with supporting details
in the first three appendices (A, B, C). The investigation consists of two pilot
content, the first stage of three archetypal approaches to gain a more in-depth, fair,
and relevant critique than mere literature review, the second stage of
emerges as an important issue. The results of these experiments cast doubt upon
the general usefulness of model synthesis as a fully comprehensive modelling
goal of completeness (or the lack of it) and as a practical means to cope with
uncertainty.
This study is a first attempt in this thesis to address a range of uncertainties that
gives critical insight into the use of sensitivity analysis and risk analysis to model
uncertainty. Data from various OECD countries are consolidated from two
This study establishes the feasibility of model replication but also shows the level
This is the first stage of the two-stage model experiment. Three representative
approach is based on the use of decision trees and influence diagrams. The
associated input data and assumptions for the core optimisation model are given.
Model synthesis is defined as the use of two or more techniques in some integrated
structuring the components and strategies for synthesis. The frequently used terms
in this thesis are defined here, e.g. technique, model, and approach.
measures, and applications of flexibility since its earliest formal reference in the
1930s and as it appeared in various industries, business sectors, and academic
disciplines. The review is based on two types of sources: 1) journal articles in the
last two decades (1970 - 1994), which mention flexibility and closely related
dissertations on this topic. It shows the richness of the literature as well as the
the basis for subsequent clarification, analysis, and application in the remaining
three chapters.
flexibility and related words, e.g. adaptability, is applied to flexibility and more
together with conditions under which it is useful, the downside of flexibility, and
and practical aspects of flexibility. Two different kinds of operationalisation of
flexibility via options and strategies are discussed. The need for measuring
flexibility is proposed.
Using the four-step method of Chapter 4, i.e. criteria, replication, evaluation, and
comparison, three groups of measures are critiqued for their consistency with the
from the conceptual framework are translated into indicators, which support the
partial measures found in the literature. The second group is based on the decision
analysis notion of expected value. Three different expected value measures and a
proposal for an improved measure with features of the previous three are assessed.
The third group is based on the scientific concept of entropy, and two types of
entropic measures are assessed. This detailed analysis concludes that indicators
and expected values may be used to measure flexibility but not entropic measures.
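For reference, the second and third groups build on two standard quantities, the expected value of perfect information (EVPI) from decision analysis and Shannon entropy; the specific measures critiqued in Chapter 7 may refine or depart from these textbook forms:

```latex
\mathrm{EVPI} \;=\; \mathbb{E}_{\theta}\Big[\max_{a} V(a,\theta)\Big] \;-\; \max_{a}\,\mathbb{E}_{\theta}\big[V(a,\theta)\big],
\qquad
H(p) \;=\; -\sum_{i=1}^{n} p_i \ln p_i ,
```

where $a$ ranges over decisions, $\theta$ over uncertain states, and $p_i$ over state probabilities. Entropy attains its maximum $\ln n$ when all $n$ states are equally likely, which appears to be the relationship plotted in Figure 7.11 (Maximum Entropy as a Function of States).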
This chapter makes use of previously defined indicators and expected value
flexibility. Practical guidelines for this are developed and tested in a decision
tool for synthesis. It gives the circumstances under which to use indicators and
Appendix D Flexibility and Robustness: Response to Demand Uncertainty
The concepts of flexibility and robustness are applied to an analysis of over and
under capacity in production and inventory control, i.e. supply versus demand.
Measures of robustness and flexibility are derived with respect to costs of over and
under capacity. The original example is extended with additional detail to other
applications.
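One generic way to write the over/under capacity trade-off described above is sketched below, assuming a holding cost Ch penalises over-capacity and a shortage cost (labelled Cs purely for this illustration) penalises unmet demand; the exact formulation and notation in Appendix D may differ:

```latex
\mathbb{E}\big[C(Q)\big] \;=\; C_h\,\mathbb{E}\big[(Q-D)^{+}\big] \;+\; C_s\,\mathbb{E}\big[(D-Q)^{+}\big],
\qquad (x)^{+} = \max(x,0),
```

where $Q$ is the planned supply level and $D$ the uncertain demand. Loosely, robustness relates to how slowly the expected cost deteriorates away from its minimiser, while flexibility relates to how cheaply $Q$ can be revised once demand is observed.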
Chapter 9 Conclusions
The final chapter (9) summarises the main conclusions and research contributions.
The following table (1.1) provides a simple location guide of research questions
and methodology.
1) What are the new requirements for capacity planning in the privatised and
   restructured UK ESI? [Chapter 2: literature review, classification]
2) What are existing approaches to this problem and how well do they treat these
   uncertainties? [Chapter 3, Appendix B: literature review, evaluation, critique]
CHAPTER 2
2.1 Introduction
This chapter explains the title of this thesis, the motivation for and description of the
research problem, the definition of the main terms and issues, and the overall agenda
for the rest of the thesis. Beginning with the importance of electricity (section 2.2),
this chapter introduces the structure of the industry (section 2.3) and background
developments (section 2.4) that have led to the current emphasis on uncertainties.
classified in section 2.6. Difficulties in capacity planning are then reviewed with
respect to major areas or sources of uncertainty in section 2.7. The final section
2.2 Electricity
economies (Price, 1990). Compared to primary fuels such as coal and oil,
electricity is clean and safe. No waste is produced at the user's end. All pollution
is caused and borne by the producer, not the end-user. Unlike most other fuels
which require storage and processing, electricity is immediately available and easily
Precisely for these attractive characteristics, electricity has become the essential
electricity supply to be reliable, i.e. available when we need it, and affordable.
These requirements are summarised in the words of Allan and Billington (1992,
page 121): the primary technical function of a power system is to provide
have since restructured and deregulated their ESI to introduce competition, which
was believed to improve cost efficiency, increase diversity of fuel supply, and
In the UK, the recent privatisation of public sector companies has changed the
priorities of the industry and introduced new responsibilities. Companies are now
government funding. The new utilities must consider the interests of all
stakeholders, the higher cost of capital, and competitive forces that did not exist
before.
and transmission are wholesale functions while distribution and supply are
fixed costs of transmission lines. Distribution is transmission at the retail level, i.e.
revenue-collection.
instance, it ranges from the nationalised French industry to the very fragmented
private ownership in the Netherlands and Germany. The industry structure partly
political stability, total state ownership, over-capacity (removing the immediate risk
of power shortages), low indebtedness (with little plant built since 1979 to the time
of privatisation), and relatively high efficiency (due to the integrated grid system).
Before 1990, electricity in England and Wales was generated and transmitted
the government, and, as a result, was able to make long-term investment decisions
for the whole country. The twelve regional area boards then distributed and
sectors.
In 1990-1991, the UK ESI was restructured considerably and privatised with great
vertically integrated public utility CEGB was split and its assets sold to the private
sector as two generating companies, National Power and PowerGen, and twelve
newly formed public sector company Nuclear Electric. The responsibilities for
regulator, Office of Electricity Regulation (OFFER), was set up to look after the
restructured industry. Table 2.1 and Figure 2.1 illustrate the new structure in
England and Wales. Further details on the structure, organisation, and operation
of this market are given in Williams (1990), EA (1992), Energy Committee (1992),
[Figure 2.1: generators (National Power, PowerGen, Nuclear Electric, NGC Pumped Storage, and independent generators) sell output into the Pool at the Pool Input Price; the National Grid Company transmits the electricity, which consumers purchase at the Pool Output Price.]
After privatisation and deregulation, the new companies are commercially oriented
in the supply contracts between the end-users and the distribution companies, the
short-term and long-term contracts between the generators and the distributors,
contracts between the National Grid Company (NGC) and the generators,
contracts between the NGC and the distributors, direct sales contracts, and implied
contracts with all classes of consumers. The Regulator and the NGC, rather than
generators, are responsible for the reliability of the system and security of supply.
The NGC was set up to look after the operation of the bulk transmission system as
well as to administer the trading of electricity through the daily power pool. The
daily power pool is intended to serve three purposes. First, it determines which
generating stations are run, based on bid prices rather than the former merit order
ranking of costs. Secondly, the mechanics of the pool determine the cost and
price of electricity traded. Finally, the pool exists to ensure that sufficient
capacity is made available, by means of signals to existing and potential generators in the form of large or small
capacity payments.
The half-hourly spot market for electricity was designed to cope with the non-
storability of electricity. Therefore it is run for every single day of the year.
Generators tell the NGC how much electricity each of their generating units can
provide and their bid prices for each half-hour period for the next day. The NGC
ranks the stations in order of bid prices and selects the cheapest to meet the
forecast demand.
The NGC produces three different schedules (unconstrained, operational, and out-turn)
for all power stations called to run. The unconstrained schedule ranks all
power stations in ascending order of their offer prices (that is, in descending
order of merit) and is used to calculate the System Marginal Price (SMP). After taking into account
transmission constraints and inflexible plant, the NGC modifies this schedule into
an operational one. Since bidding occurs the day before generation, actual
electricity demand and plant availability may turn out differently from expected.
The actual order of plants called to run is the out-turn schedule. In the calculation
of the capacity payments, the out-turn availability is used if it is less than the declared availability.
The 24 hour market is governed by the fluctuations of Pool Input Price (PIP) also
known as the Pool Purchase Price (PPP) and the Pool Output Price (POP) also
known as the Pool Selling Price (PSP). The PIP is recalculated every half hour to
reflect the changing cost of generation with the fluctuation of demand. All sellers
of electricity receive the same PIP per unit of electricity, which is expressed in
pence per kilowatt hour. Likewise, the buyers of electricity purchase at the current
POP value. The difference between the PIP and the POP consists of a charge,
called the uplift, which covers all additional costs of keeping reserve power on line
and providing ancillary services. The size of the uplift varies throughout the day.
There are two components of the PIP which reflect the cost of energy production
(energy credit) and the provision of capacity (capacity credit). The cost of energy
is represented by the SMP. To cover the generators' investment costs, there is a capacity payment equal to
LOLP * (VOLL - SMP), where LOLP is the loss of load probability, VOLL the
value of loss of load, and SMP the system marginal price. The PIP is therefore
SMP + LOLP * (VOLL - SMP). These parameters are calculated by the NGC, as is
the schedule of plants and SMPs.
The LOLP is the probability within any half hour of demand exceeding available
capacity, given the reliability of individual plants in meeting the load as planned. It reflects the balance
of supply and demand. For any half hour, if demand is significantly higher than
available capacity, LOLP will be high. LOLP will be higher during the winter peak
than the summer trough. LOLP is intended to give the incentive to invest in future
plant capacity.
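The pricing rule above can be sketched in a few lines. This is an illustrative reconstruction of the published formula, not actual pool software, and all prices are invented:

```python
# Sketch of the E&W pool pricing formula described above:
# PIP = SMP + LOLP * (VOLL - SMP). All figures are illustrative.

def pool_purchase_price(smp, lolp, voll):
    """Half-hourly Pool Input Price in pence/kWh.
    smp: system marginal price; lolp: loss-of-load probability
    for the half hour; voll: value of lost load."""
    capacity_payment = lolp * (voll - smp)
    return smp + capacity_payment

VOLL = 200.0  # pence/kWh, i.e. the initial £2/kWh setting

# Quiet summer half hour: ample capacity, LOLP effectively zero.
print(pool_purchase_price(smp=2.0, lolp=0.0, voll=VOLL))  # 2.0

# Tight winter peak: higher SMP and a non-trivial LOLP.
print(round(pool_purchase_price(smp=4.0, lolp=0.01, voll=VOLL), 2))  # 5.96
```

The capacity payment rises with scarcity (high LOLP) and vanishes when capacity is plentiful, which is the investment signal described above.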
VOLL (or VLL) is the price that pool members have to pay to ensure that no
supply is lost. This was initially set at £2/kWh. VOLL is closely related to the
planning margin. Excess capacity causes planning margins to rise. To reduce the
margins, the NGC can set a lower VOLL so that the capacity credit will be low even if
the LOLP is high. Unlike in the former CEGB days, the planning margin is not pre-determined but market
dependent.
The SMP is the offer price of the most expensive station in operation in each half
hour, expressed in pence/kWh. Power stations that are bid into the operational
schedule will receive the SMP. The difference between the SMP and a station's own
offer price is its margin. A utility will bid as many power stations into the merit order as it can while keeping
its bids competitive, since its marginal cost of production should be lower than the SMP. The merit order is also determined by
location.
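The day-ahead ranking described above can be sketched as follows. Unit names, capacities, and bid prices are hypothetical, and the real scheduling process also handles constraints ignored here:

```python
# Sketch of day-ahead merit-order scheduling as described above:
# rank units by bid price, accept the cheapest until forecast demand
# is covered; the SMP is the bid of the last unit accepted.
# Units, capacities (MW), and bids (pence/kWh) are hypothetical.

def unconstrained_schedule(bids, demand_mw):
    """bids: list of (unit, capacity_mw, bid_pence_per_kwh).
    Returns (scheduled unit names, system marginal price)."""
    scheduled, supplied, smp = [], 0.0, 0.0
    for unit, capacity, bid in sorted(bids, key=lambda b: b[2]):
        if supplied >= demand_mw:
            break
        scheduled.append(unit)
        supplied += capacity
        smp = bid  # last accepted bid sets the SMP
    return scheduled, smp

bids = [
    ("nuclear-1", 1200, 1.0),    # baseload: low bid, always runs
    ("coal-1", 2000, 2.4),
    ("ccgt-1", 900, 2.1),
    ("oil-peaker", 400, 5.5),    # only called at the peak
]

units, smp = unconstrained_schedule(bids, demand_mw=3500)
print(units, smp)  # ['nuclear-1', 'ccgt-1', 'coal-1'] 2.4
```

Note how the expensive peaker is left out at this demand level, so it neither runs nor sets the SMP.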
While it was intended for the bulk of trading to be transacted through the pool,
initially buyers and sellers actually entered into contracts for differences to
reduce their exposure to pool price volatility. The risk averse attitude that prevails
in the industry leads buyers and sellers to enter into these contracts which effectively stabilise the price of electricity for both
parties. In the first year of privatisation, 95% of all electricity supply was covered
by such contracts so that less than 5% of electricity was actually traded through
the pool. Back-to-back contracts can also provide the necessary security to raise
funds for independent power producers if gas supplies are similarly hedged by
long-term contract. In the direct sales market, generation companies sell directly to large end-users.
The following summarises the state of the UK ESI four years after privatisation
(Reuters, May 1994). The market shares of the two major generators (duopolists)
National Power and PowerGen have fallen from a total of 73% at vesting to 61%
due to the entry of independent power producers, who own a total of 3,225 MW
of new CCGT plant. The total capacity ratio of National Power and PowerGen
has stayed at roughly two to one (40%:22%). Nuclear Electric's market share has
increased to 25%. Behaving like wholesale companies, the generation companies
have found lucrative business in supplying large energy consumers through direct
sales, thus taking business from the distribution companies. On the other hand, any
of the distribution companies (RECs) can generate up to 20% of their power needs
and sell it through the grid. They are keen to purchase electricity more cheaply,
and some have become generators in their own right. Likewise, non-energy producers can generate electricity for their
own use, provided they have a licence or exemption as stipulated by the Electricity
Act 1989. Deregulation has paved the way for more diverse solutions and participants.
While the effects of privatisation are still being felt, some obvious concerns face the
UK industry today: public awareness of the environment, the cost of cleaning up,
fuel switching from coal to gas, new entrants to the market, electricity trade in the
EC, potential over-capacity, and the future of nuclear power. The definition of
economic plant now includes greater consideration for the environment and
thermal efficiency as well as shorter lead-time and modular units. While privatisation
was intended to bring greater diversity of supply (or suppliers) and greater sensitivity to changing markets (Grimston, 1993),
it has also introduced considerable market uncertainties and a higher cost
of capital. One analysis (1991) identifies several sources of possible problems for potential market failure:
the need for continuous equilibrium in the network; the industry's capital intensity and level of sunk costs;
investment lead times and short-run capacity constraints; and natural monopoly
in transmission.
Since 1991, a number of new issues have surfaced: over-contracting for new gas
plants, protective contracts for British Coal, and instability of the LOLP and the
pool prices. Given the long-term fuel supply contracts, it remains to be seen how such long-term decisions
can be driven by the short-term bid prices. The fact that only a portion of all
electricity is actually traded through the pool gives an element of artificiality about
the pool prices that is unsettling for customers and generators. Furthermore, the
pool is only half a market, that is, supply-side bidding only. These weaknesses remain the subject of debate.
There are currently three types of ESI structures in the world, ranging from the unitary
integrated to the fully separated. The structure of the electricity supply industry varies greatly from country to country, as Table 2.2 shows.
In a vertically integrated industry, demand-side management is promoted as an
alternative to capacity expansion. Measures such as interruptible supply and real-time pricing are designed to change consumers'
utilisation behaviour so that the utilities can better manage load distribution. These
measures suit utilities who may generate and import power as well as supply to the end-user.
With separate ownership of and responsibility for generation and distribution, the
UK companies do not have such incentives and run the risk of over-capacity.
The UK relies instead on competitive forces in the electricity market, with the independent regulator OFFER
acting as the watchdog for the industry. The merit order dispatch principle, where
plants are ranked in ascending order of cost, still applies. However, unlike the US and most other countries where the
merit order is based on cost, the National Grid Company sets out the order based
on bid prices.
The imbalance of electricity needs in Europe is one reason for cross-border trade.
France, with excess capacity, exports cheap electricity to Germany, which faces a high cost of domestic power, and to Italy, which faces a
shortage of capacity. The UK buys electricity from France through the cross-channel
link. The transitions of the economies of Eastern and Central Europe offer
further opportunities. The European Commission would like to see the industry broken up in each country into a generating sector, a
supply sector, and a transmission sector, the latter being open to third-party access.
However, complex issues such as ownership of the grid and legislative mechanisms
have to be ironed out, not least to alleviate opposition from member countries and
special interest groups. The growing number of privatisations¹ around the world
supports the relevance of this thesis. Electricity supply industries in other countries
are described in Helm and McGowan (1989) and Joskow and Schmalensee (1983).
History suggests that we are more capable of reacting to and dealing with short-term
uncertainties than long-term ones (Senge, 1990). Indeed, gradual changes
over a long period of time seem to have less impact than sudden changes. While
we may react immediately to a price spike, we seldom react to slow and minor
changes. Similarly with pollution: as long as we are healthy, we are not too concerned. We run the risk of
being too reactive to short-term events and too inert to long-term trends. The
history of power generation is full of such tales, and these have determined the
attitudes that power companies have taken to capacity planning and uncertainty.
Before the oil shock of the 1970s, fuel prices and electricity demand were
relatively stable and predictable. For planning purposes, demand was easily
forecast by trend analysis using compounded rates of past growth. There was no
apparent need to plan for surprises.
In 1973 and 1974, oil prices quadrupled and led to unprecedented leaps in related
fuel prices. The resulting energy crisis, combined with the effects of a world-wide
recession, slowed demand growth unexpectedly.
1 During the period from February to August 1994, the following countries have privatised, started
privatising, or announced intentions of privatising full or parts of their electricity supply industries:
Argentina, Australia, Austria, Brazil, Canada, Chile, Congo, Czech Republic, Germany, Hungary,
India, Indonesia, Italy, Kuwait, Mexico, Morocco, New Zealand, Pakistan, Peru, Portugal, Slovakia,
Spain, Sweden, and Thailand. (Reuters, 1994)
Adjustments, such as cutting back by cancelling new orders or deferring construction, were costly ways to
cope with deviations from predictions. Over-supply in the 1970s led to caution in
the 1980s. Henderson et al (1988) outline the historic trends in capacity and load
and the resulting problems faced by some of the utilities in the US at the time.
Another disruption to the stable scene was the series of nuclear accidents which led
to a dramatic cancellation of new orders and cast serious doubt on the future of
nuclear power. The Three-Mile Island accident and the Chernobyl disaster caused
negative public reaction and government response. Public concern for health and
safety rose above the minimum cost objective of capacity planning. The promise of
cheap electricity from nuclear power was questioned as countries like Sweden put
their nuclear programmes on hold, partly as a result of the influential green movement that originated in the United States and
Germany. Special interest groups such as Greenpeace and Friends of the Earth
voiced their disapproval of nuclear power and actively campaigned for legislative
controls, such as protection of the ozone layer, the 1988 Toronto Treaty on the reduction of carbon
dioxide emissions, and the 1992 Rio Summit on global environmental concerns.
As emission limits are being discussed at the global level, individual countries are
translating the targets into national legislation. Talk of carbon tax and emissions
trading permits, for example, has made coal-fired plants potentially less
competitive. Squeezed from both sides by the hazards of nuclear and the adverse
environmental effects of fossil fuels, companies are turning to other forms of
energy.
Environmental legislation in the form of emission limits and fuel taxes favours
cleaner and more efficient plant. Confronted with increasingly stringent emission
controls, generating companies in the UK are considering the early closure of less
economic and dirty coal and oil fired stations, life extension of existing Magnox
nuclear power stations, and investment in cleaner plant. Concern for the
environment and competition in the new electricity market have led to the
phenomenon known as the 'dash for gas'. No longer restricted from use in
electricity generation, natural gas is now a much sought after fuel. The high
availability of cheap gas from the North Sea and the new technology of combined
cycle gas turbines (CCGT) answer the call for lower emissions, with negligible
sulphur and reduced carbon dioxide emissions. The high thermal efficiency of CCGT, typically
around 50%, gives greater electricity output and thus value for money. Shorter
construction times make CCGT an attractive and viable choice of new plant as well
as of replacement plant. In addition, both extra capacity in Scotland and cheap electricity from France threaten
potential over-capacity in England and Wales. The industry has nonetheless responded with a wave of new CCGT
projects.
Capacity planning has always been necessary because of long lead times and other
commitments, but it is even more important now to anticipate and prepare for surprises. For example, ten
years ago, there was no uncertainty surrounding the cost of capital; after privatisation it
became a big issue. The companies are now at risk of business failure, and this changes the nature of
planning. Planning to determine the right level of capacity to have at any time is
still essential. In the UK, a number of uneconomic and environmentally unfriendly plant have been
retired early. There are three
types of decisions about power plant investment: 1) what to build (choice and mix
of technology), 2) how much to build (capacity), and 3) when to build (timing and
scheduling). What to build depends on performance levels, expected operating lives, construction time and cost, fuel cost,
and other external factors. How much and when to build depend on demand
projections, existing capacity, and the retirement schedule. Combined together, these decisions determine what
facilities and equipment must be added to the power system in order to replace
worn-out facilities and equipment and to meet the changing demand for
electricity.
The resulting schedule of investment decisions indicates the dates for installing new
capacity and the dates for retiring old plants over a period of forty to fifty years. In
capacity planning, it is often necessary to focus on specific issues and decisions, such as
evaluating the costs and benefits of over- and under-capacity, and assessing the
impact of the major uncertainties.
Changing business environments have shifted the planning emphasis over the years,
leading to the development of more suitable planning methods, e.g. table 2.3 for
the US. The techniques used for planning purposes have evolved with the needs of
the industry. The dramatic restructuring of the UK ESI implies a similar evolution.
The traditional approach to capacity planning under uncertainty has been that of
optimising plant investment against a forecast of future electricity demand and fuel supply. Accurate, reliable
forecasts of demand and timely delivery of supply are needed; otherwise, under-capacity fails to meet
demand, and over-capacity ties up capital. The cost of investment must then be spread
over less output, resulting in higher unit costs. As a result of these uncertainties,
the effects of over- and under-investment are much greater (see Munasinghe). Past forecasts
demonstrate their inability to predict shocks to the system and the resulting
changes in demand. One way to ensure the security of supply is to keep a reserve margin: an excess of installed
capacity over expected peak demand, to cater for planned and unplanned
outages. Because plant must be ordered years in advance, the reserve margin is also intended to close the gap between actual and
forecast peak demand. More sophisticated approaches (Eden et al, 1981) use
probabilistic measures of unserved energy.
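The reserve margin arithmetic is simple; the sketch below uses invented capacity figures:

```python
# Sketch of the reserve-margin arithmetic described above.
# Capacity figures are illustrative, not actual system data.

def reserve_margin(installed_mw, forecast_peak_mw):
    """Reserve margin expressed as a fraction of forecast peak demand."""
    return (installed_mw - forecast_peak_mw) / forecast_peak_mw

installed = 60_000  # MW of installed capacity
peak = 50_000       # MW forecast winter peak demand

print(f"{reserve_margin(installed, peak):.0%}")  # 20%
```

A planner would compare the resulting margin with a target figure; the probabilistic approaches cited above refine this by weighting each MW of shortfall by its likelihood.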
Uncertainties are the reasons why planning is difficult and why plans are not
optimal (Dowlatabadi and Toman, 1990). Others view the acute areas of
uncertainty as being floating exchange rates, changing social and political values,
pollution control regulation, energy cost, and raw material availability. Volatility
and instability of fuel prices lead to more uncertainties, and the complex interactions
among these factors compound the difficulty.
It is important to identify and understand uncertainties, especially the major ones,
because they have potentially negative consequences. Too much or too little
capacity translates to higher costs, as over-investment raises electricity prices and
under-investment forces reliance on more expensive technologies. It is also important to watch for the knock-on effects of
uncertainty, as they may lead to further uncertainty and undesirable effects. For
example, Ford (1985) has identified a spiral of impossibility in figure 2.2, where
the plus signs indicate a positive relationship. As higher prices discourage demand,
a utility's capital costs must be spread over a smaller number of kilowatt hours
which in turn leads to still higher prices, inducing a loss of customers. Some of
them turn to building their own power plants; others switch to alternative forms of
energy.
Figure 2.2 Spiral of Impossibility [diagram: a feedback loop in which a higher price of electricity reduces customer demand and electricity consumption, which raises unit costs and the price of electricity further, while pushing customers towards alternative sources of electricity]
These distinctions parallel closely Morgan and Henrion's (1992) distinction between types and
sources of uncertainty. Types of uncertainty suggest the modelling treatment, i.e. how to model, while areas of uncertainty give
insight into the variables that must be included, i.e. what to model. Finally,
definitions from the literature help to clarify the terminology.
LITERATURE DEFINITIONS
Uncertainty is a generic term used to describe something that is not known. It
relates to the unknown at a given point in time, although it is not necessarily the
unknowable. The term uncertainty has also been used to mean an unknown that is
simply missing information; such incomplete information may come from many sources.
In the literature, uncertainty and risk are often used interchangeably. Knight
(1921) was the first to distinguish between measurable risk and unmeasurable
uncertainty: risk applies to situations in which alternative outcomes have been specified and probabilities assigned to
them. Strangert's concept of pure uncertainty, introduced around 1950, covers situations where
this cannot be done. Following Knight's definitions, Barbier and Pearce (1990) note that risk denotes broadly
measurable likelihoods, whereas under uncertainty the probabilities are not known. Hertz and Thomas (1984) associate risk with the lack of
predictability about outcomes or elements of the problem structure. Chapman and Cooper (1983) consider risk to
be the undesirable implication of uncertainty. Risk may also tend to focus on just
bad outcomes, i.e., what can go wrong. Choobineh and Behrens (1992) consider
uncertainty the broader concept. From an engineering perspective, Merrill and Wood (1991) observe the causal relationship
between uncertainty and risk: uncertainty refers to factors not under control,
and risk to their possible consequences.
TYPES OF UNCERTAINTY
Uncertainties within the planner's control are internal to
planning, while those outside are external; Hirst and Schweitzer (1990) describe both.
External uncertainties apply to load growth, fuel prices, and the availability and costs of
capital. Generation technologies with different lead times face demand forecasts with
differing accuracy. Uncertainties can also be divided into
short term and long term. Short-term uncertainties apply to factors which
cause demand to be uncertain on a time scale that is substantially shorter than the
time necessary to build even the shortest lead time power plant. Long-term
forecasts are more uncertain due to the additional consideration of factors and
events arising
during the time necessary to construct a power plant. Some performance
factors are also uncertain. Not all factors are measurable, especially in relation to the way uncertainties are
modelled; hence a further division into the quantifiable and the non-quantifiable. The less
quantifiable include future plant performance, retrofit or retirement of old plants, and the role of alternative energy.
Barbier and Pearce (1990) discuss three types of uncertainties surrounding
environmental effects, among them uncertainty about the scale of those effects; time-lag uncertainties are present in cause and effect
cycles.
IEA (1987) suggests two types of uncertainty that surround the value of a variable.
The first is that we cannot be certain of its value. The second, captured by Zadeh's fuzzy set theory, arises where
linguistic ambiguity gives rise to fuzziness. Choobineh and Behrens (1992) argue
that the principal sources of uncertainty are often non-random in nature and relate
to such ambiguity.
Gerking (1987) distinguishes between sources of uncertainty and the changing
impact of uncertainty over time. Sources include informational uncertainty (in
tracking the past and anticipating the future), decisional uncertainty (the potential
actions of other decision makers), and external uncertainty (events that are beyond the control of the system being
modelled). The changing impact of uncertainty over time is important in the modelling process. There are
four types, among them static uncertainty (several alternatives are recognised as possible when
the decision is taken, and there is no indication that the uncertainty may change over time or that it can be
resolved within a period of time relative to the decision alternatives) and dynamic uncertainty (as time
passes, the uncertainty changes or resolves).
In this thesis, uncertainty refers to factors that affect the outcomes of decisions
but which are not known at the time of planning. There are two kinds of factors: 1)
variables that enter into the planning model and, as such, can be specified and
estimated, although a variable subject to uncertainty may turn out quite different from its estimate; 2) variables or events that do
not enter into the planning model, and as such, cannot be predicted or even
foreseen at all. For example, the restructuring of the UK ESI and its implications
could not be foreseen two decades ago and therefore would not have been treated
in any planning model of the time.
This definition broadly captures the distinctions between types of uncertainty drawn in the literature.
Data uncertainty refers to the availability and accuracy of data. For example,
plant details are often inaccessible for commercial reasons, so data for modelling are frequently incomplete.
Uncertainty in the user refers to the hidden agenda the user has not
communicated to the model builder, i.e. what the final decision maker has not told
the developer of the planning model. It also refers to the gap between the model
and the user, i.e. what is not captured by the model but desired by the user.
Factors subject to uncertainty, from capital costs to fuel
prices (variables and alternatives), are listed in table 2.4. These factors differ in
degree of sensitivity and uncertainty. For example, capital cost has a high impact;
other factors have not only high impacts
but are also highly uncertain. Too many existing techniques focus on the former,
as evident in Chapter 3. Here, we discuss those factors that have potentially high
and uncertain impact. Relationships between factors are also important in the
The next sub-sections are grouped into factors that contribute directly to capacity
planning (plant economics, demand, fuel, and technology) and those that contribute indirectly (financing
and related considerations). These areas of uncertainty are by no means exhaustive but provide an insight into
the breadth of the problem.
2.7.1 Plant Economics
Central to capacity planning are the components that directly determine the cost of
electricity generation. Each major cost category contains fixed and variable
components, with the variable element tied to utilisation. Fixed cost is composed
mostly of capital cost incurred during the construction phase. Variable costs are
the running costs due mainly to fuel and operations and maintenance (O&M).
Within each type of plant or fuel, the range of technologies varies considerably.
The final costs are also highly affected by load factor, life, plant efficiency, and
discount rate. Factors that are highly variable and need to be considered include
inflation rates, interest rates, and technical and regulatory conditions in the electric utility industry.
Capital costs, which are committed years before a power plant begins operating,
must be recovered during its lifetime. Capital costs are sensitive to discount rates,
and plant types differ in their mix of capital and
generation costs. Baseload plants have high capital cost and low generating cost as
compared to peaking or peakload plants which have relatively low capital cost and
high generating cost. Because demand fluctuates throughout the day and year,
baseload plants are scheduled to supply the bulk of demand and peakload plants
the fluctuating remainder. The order in which
different plants are brought on-line depends on capital and operating costs as well
as technical characteristics of plants. Those plants with high capital costs are also
often difficult to switch on and off quickly, and therefore more suited for baseload.
To meet the restrictions on certain emissions, the less polluting plants tend to get
ordered first.
Uncertainties in construction costs and lead times are a major source of concern
when committing to new capacity. If additional funding is needed but not available, it could lead to the
undesirable result of project abandonment. The longer the time to commission, the
higher will be the interest during construction (interest on funds provided during
construction period).
Many uncertainties arise during the long planning horizon. Peck et al (1988)
mention the importance of assessing equipment life, which is affected by the cost of
maintenance and new technologies. When certain fuels become less favourable
plants will have to be retired early. Capital costs would then spread over a shorter
lifespan, thereby effectively increasing the generation cost. This is especially true of
capital-intensive plant. Power stations are function-specific infrastructure: once the maximum useful life
is reached, they cannot readily be put to other uses. Nuclear
plants in particular bear the burden of end-of-life uncertainty, which translates into
decommissioning costs and risks of safety. These concerns are not easily converted into monetary
units even though the common practice is to set aside a provision for decommissioning.
The capital intensive and function-specific nature of power plants is offset by the
lower costs achieved through economies of scale. In the past, the emphasis was therefore on
large plant. With uncertain demand and unstable conditions, however, the downside risk is expensive, and large,
long-lived commitments are more difficult to justify.
These commitments can be very costly over a long period of time. [For further
discussion, see Merrill et al 1982, Krautmann and Solow 1988, and Hobbs and
Maheshwari 1990.]
2.7.2 Fuel
Uncertainties that affect fuel price and supply are important as fuel related costs
make up the majority of the running costs of fossil fuel plants. National Power
(1992) attributes 53% of their operating cost to fuel, while PowerGen (1992)
claims close to 70%. Adverse shifts in relative fuel prices have a direct impact on
running costs, possibly changing the technology mix and the merit order of power stations.
The oil shocks of the 1970s warned electricity generators of the risks of over-
dependence on a single fuel source. The uncertainty associated with fuel has to do
with political and economic risks of the supplying countries. Price instability is
a constant threat, and even domestic primary fuel suppliers are not insulated from price competition, as the privatised
generators can choose to import from abroad. After several oil shocks and
continued unrest in the oil-producing countries, the world's oil reserves are still
concentrated in the politically unstable areas of the Middle East (65%) and South
America (13%). Gas reserves are distributed unevenly as well, with 38% in the
Soviet republics and 31% in the Middle East. Most countries have their own
reserves of coal with the majority in the USA (24%), Soviet republics (22%), and
mainland China (15%). Secure fuel supplies are necessary for the reliable provision
of electricity, yet even long-term contracts with fuel suppliers cannot prevent disruptions due to
war, strikes, embargoes, etc. Fuel diversity is a way to spread the risks and to avoid
over-dependence on a single fuel; Sizewell B, the first PWR (Pressurised
Water Reactor) to be built in the UK, was approved on grounds of fuel diversity.
The nuclear fuel cycle has its own uncertainties, particularly at the back-end. Such
concerns show that fuel cost and availability are not the only determinants of
technology choice.
The volatility of fuel prices, as illustrated in Stoll (1989), makes them difficult to predict;
oil prices, for example, fell by one-half from 1982 to 1986. Natural gas, as a derivative of oil, followed similar
patterns.
Supply-side forecasts are no easier. Experience (Balson and Barrager 1979; Energy
Business Review 1991) shows that accuracy varies greatly among the different fuels.
Forecasts for hydro-electric power supply and consumption are by far the most
accurate. This is probably due to the counting element. That is, sites are identified and
schemes are planned for many years ahead. But this may no longer hold if adverse shifts
in weather patterns are expected from global warming.
Nuclear power forecasts are highly unreliable for several reasons. Many are politically
motivated. There are great political and social uncertainties. Costs of R&D, operations,
and especially dismantling and decommissioning are either badly estimated or ignored.
Little is allowed for environmental problems and the prevailing NIMBY (Not In My Back Yard) attitude.
Forecasts for natural gas are also unreliable. Until gas discoveries are made, these
supplies are simply not included. Similarly with Liquid Natural Gas (LNG) imports,
unless contracts have been signed, it is considered too speculative to include it. These
event-driven uncertainties cause great discrepancies in forecasts.
Forecasts for coal vary. In the fifties and the sixties, the picture for Europe was over
optimistic due to the failure to anticipate the high cost of production in Germany and the
UK. There was also a panic response to the oil crises of 1973 and 1974. Increased use of
coal in the future depends on the successful development and acceptance of clean fuel
technologies.
Estimates of future oil prices must be guided by careful economic and geological analysis
as they are highly uncertain and subjective. The uncertainties are dependent on reserves,
recovery costs, world demand, and politics at the national and international levels. As a
result, the forecasts are revised almost as soon as new reserves are discovered.
In general, forecasting methods are strong in analysing and using historic trends but weak in
relating demand to prices, GNP, population growth, and weather, because the relationships are not
clear or stable.
Demand uncertainty is one of the major determinants of future capacity need. The
demand for electricity varies throughout the day, the week, and the year. Since
electricity cannot be stored, there must be sufficient capacity to generate the power
demanded at any time. The traditional approach of fitting plans to demand forecasts
is therefore vulnerable to forecast errors.
At the macro-level, electricity demand growth has been closely correlated with
GNP growth. Other factors also affect uncertainties in demand. The
demographic bulge, i.e. whether the next generation will have greater or lesser
electricity needs, is difficult to predict. The strength and persistence of the energy conservation movement may
also dampen demand growth.
Demand is recorded as load distribution curves and more commonly shaped into load duration curves, which are useful in
merit ordering. Load distribution curves map daily demand against time, e.g.
hours of the day, days of the week, months of the year, etc. Load duration curves
are aggregated and averaged from load distribution curves and are represented as
demand levels against the duration for which each level is exceeded. Some plants are
better at following the rises and falls of demand than others, and load demand changes are
difficult to adapt to because of the long lead times in construction. These issues
relating to uncertainty of load growth and shape, and their impacts, are discussed in
the literature.
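The relationship between the two curve types can be illustrated with a small sketch; the hourly demand profile below is invented:

```python
# Sketch of building a load duration curve from a load distribution
# curve, as described above. Demand values (MW) are illustrative.

def load_duration_curve(demands_mw):
    """Sort demand observations in descending order, so the n-th value
    is the load that was met or exceeded for n observation periods."""
    return sorted(demands_mw, reverse=True)

# A toy daily load distribution: demand (MW) against hour of the day.
hourly_demand = [22, 20, 19, 19, 21, 26, 33, 38, 40, 39, 37, 36,
                 35, 34, 34, 36, 41, 44, 43, 39, 34, 30, 26, 24]

ldc = load_duration_curve(hourly_demand)
print(ldc[0])   # peak load: 44
print(ldc[-1])  # base load: 19
```

The left end of the curve (the peak) indicates the capacity peakload plant must cover, while the flat right end corresponds to the baseload that runs continuously.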
2.7.4 Technology
The choice of technology, i.e. type of plant, is determined by the type of fuel used
and technical performance characteristics like heat rate, emission factors, operating
life, and black-start capability. Table 2.5 lists the benefits and costs of the main
technologies.
Plants with greater thermal efficiency, lower emission levels, better designs, and
shorter lead times are favoured. Moreover, their designs are the results of optimisation in plant size and multi-unit
configuration, which must be weighed against the uncertainties in electricity demand and fuel supply (Hirst 1990, Ford 1985).
Table 2.5 Fuel/Technology Comparisons
Plant ordered now will operate for decades, during which time changes in regulation and environmental standards are expected.
On the other hand, the newest and latest technologies take years before their cost
effectiveness and fuel efficiency are fully accepted. New installations usually have
lower load factors in the first two years, reducing electricity output and revenue.
Any new installation will always carry this performance uncertainty. Even
technology that has been accepted in other countries has to undergo many
technical tests, and policy analysis is required for each new technology.
decision to the nuclear industry could result in building of new pressurised water
reactors which will come into service early next century. If not, a large amount
of nuclear capacity may need to be retired, unless the industry can bear the
diversify its plant mix, e.g. build non-nuclear plant. The 1998 expiry of the
Non-Fossil Fuel Obligation (NFFO) and the Fossil Fuel Levy (FFL), which subsidise the
uncertain whether the European Commission will allow the extension of these
such as time-of-use pricing, dynamic and spot pricing, improved energy efficiency,
and conservation programmes, are attractive because they could provide a viable
effort and consumer education. [For energy efficiency, see McInnes and
1990b, Hayes 1989, Sim 1991, Gellings et al 1985; for other references on this
topic, see Henderson et al 1988, Hobbs and Maheshwari 1990, Berrie and
McGlade 1991.]
By the time a new power station is ready to commence operation, it has already
projects. The uncertainties surrounding the cost and availability of new debt and
competition, and the power pool induce risk averse investment behaviour which
translates to higher discount rates for capital investment. These impacts have
shifted investment to less capital intensive technologies. Discount rates are used to
translate tomorrow's costs and benefits into today's terms, reflecting market
perception of risks and returns. The choice of rate is less apparent than in the
public sector, where a uniform discount rate was set to value projects but not to
reflect business risk.
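The discounting calculation itself is simple; a minimal sketch with hypothetical cash flows and rates shows why a higher, risk-adjusted rate penalises capital-intensive plant whose benefits arrive late:

```python
# Discount a stream of future cash flows to today's terms. A higher
# discount rate (reflecting risk-averse private investors) reduces the
# present value of distant revenues. Cash flows here are hypothetical.
def present_value(cash_flows, rate):
    """cash_flows[t] is the amount received at the end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# Hypothetical plant: 100 (in £m) net revenue per year for 30 years.
revenues = [100.0] * 30
pv_public = present_value(revenues, 0.05)   # uniform public-sector rate
pv_private = present_value(revenues, 0.12)  # risk-adjusted private rate
print(f"PV at 5%: {pv_public:.0f}, PV at 12%: {pv_private:.0f}")
```

The same revenue stream is worth considerably less under the higher rate, which is the mechanism behind the shift towards less capital-intensive technologies noted above.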
Uncertain revenue requirements make financial planning difficult and may prevent
utilities from recovering all of their costs. Economic instability and inflation
produce higher than expected interest rates and unprecedented cost increases in
new facilities. Long lead times and regulatory delays are exacerbated by the
Hobbs and Maheshwari 1990, Jones and Woite 1990, Bunn et al 1991, and Merrill
et al 1982.]
2.7.6 Market
Shortly before privatisation, UBS Phillips and Drew (1990) foresaw increased
market risk for the newly privatised companies. In a privatised industry, the
possibility of business failure is real. Competition should give rise to more efficient
electricity markets, implying tighter reliability standards and reducing the spread
between cost and price. Deregulation also opens the markets to new entrants, thus
increasing the competition and eroding the profit margin. These privatisation
effects are discussed in Bunn et al (1991), Berrie and McGlade (1991), and UBS
Pool price volatility concerns all participants in this industry as the trading of
availability of plant (OFFER, 1992) has led not only to higher capacity payments
and higher pool prices but also increases in uplifts, resulting in very high, short
duration price spikes. Immediate demand responses to such high prices coupled with
feedback from other elements send rippling effects throughout the system. Supply
side disruptions such as plant retirement and reduced plant availability contribute to
increases in capacity payments, which in turn raise prices. The time difference
Unleashing the free market forces in the new UK ESI brings about market
uncertainties that are short-term in nature. Reacting to short-term needs runs the
risk of jeopardising long-term interests. This is one reason why different types of
New entrants and the expiry of fuel-supply contracts threaten the dominant players
in the UK ESI. As the independent generators commit their CCGT orders to
back-to-back contracts, they too wonder how the dominant generators will behave,
how the pool prices will change, whether transmission charges will be revised to
favour projects in the South and disadvantage those in the North, what new
time could deter the orderly investment at the beginning of the next century. There
is also a concern about the overall risk of poor business performance, hostile
takeovers, and further deregulation of the industry, such as through forced sell-offs.
The power planning life cycle begins with the first stage of feasibility analysis and
submission of a proposal. Approvals depend on the site selected, the type of plant
proposed, and other factors which are subject to many uncertainties. The long
planning horizons of the electricity industry, e.g. 30 to 40 years, mean that industry
life cycles are much longer than the length of a government in office. Political
policy legislation.
In the UK, UBS Phillips and Drew (1991) along with many other analysts
predicted that the 1992 general election could have a major effect on the industry
and the value of the firms. UBS Phillips and Drew (1990) warned of a political
risk, stemming from the changes that could be made to the industry if political
possible Labour government and how they would affect the status of conservation,
British Coal, nuclear policy, payment for renewables, the status of National Grid,
Regulatory uncertainty refers to the legislative changes that can impact at the firm
level. Governments have a wide variety of policy instruments they can use to bring
promotion, pricing, taxes, subsidies and other financial incentives, reforms in market
shape of the regulatory environment (Schroeder et al, 1981) depends on the speed
Some of the regulatory impacts on the actions of the US electric utilities are
between two types of effects: the transfer of ownership from the public to the
private sector and the competitive structure of the market. The former can be
(competitive market structure) has been analysed through the power pool
incentives to invest, regulatory measures, uncertainty and risk in the new markets,
uncertainty surrounds what the regulator will do. UBS Phillips and Drew (1992)
analyse the effects of changing the pool rules, the possible referral of the generators
to the Monopolies and Mergers Commission and the valuation of the generating
companies.
2.7.8 Environment
Increasingly, energy and the environment are perceived as directly linked. Four
1991) on electricity and the environment: 1) energy and electricity supply and
environmental and health impacts into policy planning and decision making for the
electricity sector. The symposium proposed that the electricity utility companies
take a longer planning perspective than just the 7-10 years for construction, in view
of the time scale of many health and environmental impacts, such as the irreversible
The Intergovernmental Panel on Climate Change (IPCC) warned the world of a global
(Leggett, 1990) will cause a rise in sea levels, higher global temperatures, and
changes in precipitation and seasonal patterns. Although the exact impact and time
frame are not certain, it is known that the largest contribution comes from energy
production and use. In the UK, for instance, the burning of fossil fuels in
electricity production accounts for 34% of carbon dioxide released, 72% of sulphur
Fossil-fuel burning gives off carbon dioxide, the main greenhouse gas. Reduction
will require market incentives or legislative measures, since there are no technical
means to reduce CO2 in fuel combustion other than using fuel with less carbon
coal and oil, and to a lesser extent, gas. [The effects of a carbon tax are discussed
in Grubb (1989), Hoeller and Wallin (1991), Cline (1992), and Kaufmann (1991).]
With the exception of CFCs and carbon dioxide, most emissions are difficult to
concentrations of some gases are more sensitive to emission rates than others due
to the different lifetimes in the atmosphere. The mechanisms and rate of removal
Much scientific uncertainty surrounds the impacts and timing of climatic changes.
The irreversibility of these effects implies that legislation should be passed now to
reduce or stop such emissions, which will impact on a generator's future plans.
SO2 emissions to 60% below 1980 levels by the year 2003 and that the NOx
emissions must be 30% lower than in 1980 by the year 1998. A government White
renewables by the year 2000 and to provide 24% of the UK energy by 2025. This
target adds to the growing list of objectives that planners must consider. The UK
has conditionally complied with the IPCC target to stabilise CO2 emissions at 1990
levels by 2005. These legislative requirements affect all power producers directly.
To meet the sulphur emission target, utilities use fuels with lower sulphur content
or fit flue-gas desulphurisation, which is so expensive that it is only cost
effective if installed on the newer and larger
plants to allow for economies of scale and longer operating time. Capital cost of
station Drax (National Power, 1992) is around £700 million. On a per kilowatt
of the plant.
anticipate for self-interest although accounting for such externalities is still fairly
of this, Markandya (1990) suggests identifying and accounting for the main sources of
electricity (oil, gas, coal, hydropower, nuclear) and their effect on the environment.
capacity in the UK ESI, combined with stricter environmental laws, impels power
generating companies to re-evaluate the options they have to meet the interests of
2.7.9 Public
In some countries, like the US, public opinion has frequently interfered with the
business of power generation itself. Foley and Prepdall (1990) cite public
electromagnetic fields, and public health. People are concerned about health and
Nuclear power is probably the energy source most influenced by public opinion
(Evans and Hope, 1984), but it was not until the 1970s that
opponents of nuclear power began to delay the development of this industry. The
professional response was that nuclear power was cheap and safe and that no other
energy source could meet the increasing demand forecast for world economic
growth. In spite of this reassurance, nuclear accidents and adverse public reaction
British Nuclear Fuels Limited (BNFL) opinion polls regularly show that two out of
every three people believe the risks associated with nuclear power outweigh the
Sweden. A recent survey (Nuclear Forum, 1992) shows that the more people
know about radiation the more likely they are to be in favour of nuclear power.
2.8 Conclusions
This chapter has listed and explained the areas of uncertainties in electricity
viewed from a modelling perspective, i.e. types of uncertainty. Emerging from this
Table 2.6 Model Requirements for Capacity Planning
The uncertainties discussed in this chapter do not exist in isolation. The public's
concern for the environment has often led to legislative action. New requirements
translate into new technologies, and these in turn fuel the competition.
Competitive forces spark off further uncertainties in the market. The enumeration
The first is the obvious and classic approach of modelling, ingrained in the
modelling approaches to capacity planning as defined by operational research
techniques and their application. We criticise their ability to capture the different
means to completeness but need to establish its feasibility and practicality, hence
The second is a non-traditional approach. Part Two of this thesis addresses the
desirable property of a system. However, it is not clear how such a vague and
Part One
3.1 Introduction
techniques that have been used to address the problems in capacity planning are
capture the areas of uncertainties with respect to completeness and the manner in
The next three sections present the most commonly used techniques and their
analysis in section 3.4 refers to the use of decision trees and other decision-
two or more techniques, which we call model synthesis. Here in section 3.5, the
same critique of areas and types of uncertainties is given. Modelling requirements,
section (3.6). Several proposals are made on the basis of this review.
3.2 Optimisation
As early as the 1950s, capacity planning was perceived as the problem of meeting
least-cost objectives within a highly constrained operational environment, and was
first modelled with linear programming (Massé and Gibrat, 1957). If the problem
becomes too large to handle, decomposition methods (Vlahos, 1990) could be used
recently, stochastic programming has been used to model the deterministic and
non-deterministic effects.
The objective cost function typically includes capital, fuel, and generation costs,
over the entire planning period. The constraints typically include forecast demand,
plant availability, and other technical performance parameters. The planning period
is usually split into sub-periods for modelling detail variations. The result is an
duration curve (LDC). However, use of the LDC implies that costs and availability
of supply depend only upon the magnitude of the load and not on the time at which
the load occurs. This assumption is approximate for hydro systems and accurate
for thermal systems only if seasonal availabilities are taken into account. Because
power demand is highly variable throughout the day and year, the resulting LDC
In their first trial in 1954, Électricité de France (EdF) developed a schematic linear
Described in a classic paper by Massé and Gibrat (1957), this is the earliest
Some early LP models are discussed and classified in Anderson (1972) into so-
called marginal analysis, simulation, and global models. In all cases, cost
exogenous, and all formulations are deterministic. The results must also satisfy
reliability of supply. Allowances for uncertainty are given in the form of simple
investment costs to meeting projected future demand. Projections of future
revenues, costs, and capital requirements are made using a corporate financial
known information and the impacts of variations in assumed trends. The results
are tested in the sensitivity analysis that typically follows an optimisation run. This
Greater accuracy and detail, such as capturing the non-linear effects of economies
of scale and merit ordering (load duration curves), impinge upon the computational
constraints and variables, are common, e.g. Vlahos (1990), but they require other
programming cannot be readily rewritten to address other issues, in the way that
the dual and slack variables. But this only gauges the sensitivity of the output
result to the input variables. Model validation is not the same as uncertainty
analysis.
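A toy version of the least-cost expansion LP described above can be sketched with SciPy's `linprog`, assuming that library is available; the two technologies, load blocks, and costs are hypothetical and are not taken from any cited model.

```python
# Toy least-cost capacity-planning LP: choose capacities of a baseload
# and a peaking technology to serve a two-block load duration curve at
# minimum annual cost. All data are illustrative.
from scipy.optimize import linprog

# Load blocks: (duration in hours, demand in MW)
blocks = [(7760, 25_000), (1000, 40_000)]
# Technologies: (capital cost £/MW-year, variable cost £/MWh)
techs = [(60_000, 15.0), (20_000, 45.0)]   # baseload, peaker

# Variables: [cap_0, cap_1, g_00, g_01, g_10, g_11],
# where g_ij = MW output of technology i in load block j.
c = [techs[0][0], techs[1][0]]
for _, var in techs:
    for h, _ in blocks:
        c.append(var * h)              # energy cost over block duration

A_ub, b_ub = [], []                    # g_ij <= cap_i
for i in range(2):
    for j in range(2):
        row = [0.0] * 6
        row[i] = -1.0
        row[2 + 2 * i + j] = 1.0
        A_ub.append(row); b_ub.append(0.0)

A_eq, b_eq = [], []                    # output in block j meets demand
for j, (_, d) in enumerate(blocks):
    row = [0.0] * 6
    row[2 + j] = 1.0                   # g_0j
    row[4 + j] = 1.0                   # g_1j
    A_eq.append(row); b_eq.append(float(d))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("capacities (MW):", res.x[:2].round(0))
```

With these numbers the LP assigns the year-round slice to the baseload plant and the short peak slice to the peaker, reproducing the screening-curve economics that the merit order embodies.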
pioneering work of Massé and Gibrat. The first part addresses the short-term
dispatching problem, which seeks the optimal utilisation of an existing array of
generation plants to meet short-term demand at minimal cost. The second part
generation capacity over a planning horizon of several years. As the short run
cyclical programming to minimise the cost per cycle over an infinite number of
distribution of future demand, equipment reliability, length of lead times for new
equipment, and possible changes in regulation which may affect plant efficiency.
To keep the model computationally tractable, reserve margins are used to deal with
The advantages of linear programming are nevertheless abundant. Great
piecewise linear functions. Dual variables are useful in post LP analysis though not
As the earliest and most popular of all optimisation techniques, linear programming
2) The optimal capacity size of a generating unit determined by LP is a continuous
function, and therefore must be rounded to the nearest multiple of a candidate unit.
This rounding may result in a sub-optimal solution.
4) None of the examples above have dealt with objectives other than cost
minimisation.
7) The strict linearity conditions imply that non-linear effects such as economies of
scale and reliability cannot be modelled accurately.
many smaller solvable ones, thereby reducing computer processing time. For
example, the linear approximation of the load duration curve can be divided into
In the context of linear programming, two types of decomposition are available,
namely the resource directive and the price directive. They differ in the way the
[Figure: a master programme coordinating peripheral units. In the resource-directive
scheme, resource availability levels are passed down and marginal prices of resources
returned; in the price-directive scheme, prices of resource utilisation are passed
down and resource levels returned.]
(1990) and Vlahos and Bunn (1988a). Different expansion plans are examined
iteratively, and the operating cost is calculated with marginal savings from extra
capacity. This information is fed back to the master programme at each stage, and
a new plan is proposed. Lower and upper bounds to the total cost are set at each
uncertainties. A primal-dual method solves the probabilistic problem using simple
into a set of linked static deterministic problems where linkages are enforced
through Lagrange multipliers. These problems are solved separately in a primal
Decomposition has several advantages over linear programming (Vlahos and Bunn
3) The advantage of decomposition lies in its efficiency and ability to handle
non-linearities.
nested sub-problems, and the solution of one sub-problem is derived from the
each stage involves an optimisation over one variable only. Preceding decisions are
An optimal sequence of decisions has the property that whatever the initial
circumstances and whatever the initial decision, the remaining decisions must be
optimal with respect to the circumstances that arise as a result of the initial
decision.
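The principle of optimality translates directly into a backward recursion. A minimal sketch for a toy expansion problem, with wholly hypothetical demands and costs, is:

```python
# Backward dynamic programming for a toy capacity-expansion problem.
# Stage t = year; state s = installed capacity (discrete GW units);
# decision = units built that year. All data are hypothetical.
DEMAND = [4, 5, 7, 8]          # GW required in years 0..3
BUILD_COST = 2.0               # per GW unit built
SHORTAGE_COST = 10.0           # per GW of unmet demand, per year
MAX_CAP = 10

def solve(initial_capacity=4):
    value = [0.0] * (MAX_CAP + 1)      # cost-to-go beyond the horizon
    policy = []
    for t in reversed(range(len(DEMAND))):
        new_value = [0.0] * (MAX_CAP + 1)
        stage_policy = [0] * (MAX_CAP + 1)
        for s in range(MAX_CAP + 1):
            best = float("inf")
            for build in range(MAX_CAP - s + 1):
                cap = s + build
                cost = (BUILD_COST * build
                        + SHORTAGE_COST * max(0, DEMAND[t] - cap)
                        + value[cap])
                if cost < best:                 # keep the cheapest move
                    best, stage_policy[s] = cost, build
            new_value[s] = best
        value, policy = new_value, [stage_policy] + policy
    # Roll the optimal policy forward from the initial state.
    s, plan = initial_capacity, []
    for t in range(len(DEMAND)):
        plan.append(policy[t][s])
        s += policy[t][s]
    return value[initial_capacity], plan

cost, plan = solve()
print(cost, plan)  # → 8.0 [0, 1, 2, 1]
```

Note how each stage optimises over one variable only (the units built that year), with all later consequences summarised in the cost-to-go table, exactly as the principle above requires.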
The least cost investment decision in Boyd and Thompson (1980) uses a
effect of demand uncertainty on the relative economics of electrical generation
technologies with varying lead times. The algorithm evaluates short versus long
lead time generation technologies, under different cases. Many assumptions were
made in the demand and supply models to reduce the stages, such as the
assumption of a linear cost model, e.g. a lump-sum investment due when the plant
comes on line, rather than the usual progressive payments during the construction
plan consists of lists of tables. To get the decision at any stage, one must read off
the tables corresponding to the right level of demand that might occur in that
aspects of the implementation have not been discussed. For example, it is not clear
whether this method has been completely computer-coded and tested against other
models. Extensions to the model are required to increase the level of detail in
the resolution of uncertainty was proposed, but flexibility was not defined or
demonstrated.
energy production model. This formulation is two-stage with recourse, but
contains too many assumptions to be useful for electricity generation. For one
thing, the power station must be able to store the energy produced!
The main modules of the commercially available software packages (all
and Reliability Evaluation System of Ohio State University) are based on dynamic
solution. The number of possible capacity mixes increases rapidly as the planning
Automatic System Planning) makes further use of constraints to limit the number
of expansion alternatives.
2) Load duration curves can be of any form, especially without the requirement of
linear approximation.
4) This approach can also incorporate uncertainties in demand and in fixed and
variable costs.
3) The sequential manner in which the recursive algorithm searches for the optimum
is not conducive to handling decisions in the multi-staged sense, that is, with the
stepped resolution of uncertainty.
5) In spite of this range of solutions, the data is not helpful in assessing the kinds of
decisions under different conditions in time.
uncertainty about the demand b, uncertainty about the input prices c, and
nature, i.e. ξ = ξ(s), s ∈ Ω, where Ω is the index set, i.e. the set of all possible
where x ∈ R = {x | Ax ≥ b, x ≥ 0}
(1980) and Modiano (1987), and the first two described here: two-stage
quadratic programming.
Dantzig (1955) was probably the first to demonstrate a way of incorporating
decision vector x is chosen. A random event (or a random vector of events) occurs
between the first and second periods. The algorithm solves the decision vector y in
the second stage after taking into account the values for x and the random events.
The decision vector x represents the investment decision which must be made well
before most uncertainties are resolved. Such investment decisions are classified as
here-and-now, i.e. taken before uncertainties are resolved. The second decision
vector typically describes the operating decisions which are taken after the random
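A minimal numerical sketch of this two-stage structure, with a hypothetical here-and-now capacity decision x and recourse purchases bought at a premium once demand is revealed:

```python
# Two-stage recourse sketch: choose capacity x "here and now"; after
# demand is revealed, buy any shortfall as recourse at a premium.
# Scenario probabilities, demands, and costs are all hypothetical.
SCENARIOS = [(0.3, 30.0), (0.5, 40.0), (0.2, 55.0)]  # (prob, demand GW)
CAPITAL = 3.0      # cost per GW of capacity built in stage one
PREMIUM = 8.0      # cost per GW bought as recourse in stage two

def expected_cost(x):
    recourse = sum(p * PREMIUM * max(0.0, d - x) for p, d in SCENARIOS)
    return CAPITAL * x + recourse

# Enumerate candidate capacities on a grid; a real model would solve
# the equivalent deterministic programme over all scenarios at once.
best_x = min((x / 10 for x in range(0, 701)), key=expected_cost)
print(best_x, expected_cost(best_x))
```

Capacity is added only while the expected recourse saving per GW exceeds the capital cost, so the here-and-now decision hedges across scenarios rather than fitting any single forecast.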
These two main approaches are also known as the passive and the active
(Sengupta, 1991). The passive approach requires the decision maker to wait for a
sufficient number of sample observations, hence, wait and see. The active
updating until more information becomes available. Thus it converts the original
While a formulation containing random variables (two-stage with recourse) can
always be solved by replacing the random variables by their expected values, data
the marginal cost-pricing strategy for sharing capital costs given an optimal
capacity plan. In many ways, the treatment of capacity constraints and demand
for the random load event. The main drawback of this technique lies in the large
number of constraints required, thus increasing the time to search for the feasible
region.
mathematical programming that take risk into account to see what elements of
1) the E-model which minimises expected costs by substituting mean values for
random variables;
2) the V-model which minimises the mean-squared error measured from a certain
level;
3) the P-model which minimises the probability that costs exceed a certain level;
4) the K-model which finds the minimum level of costs for which there is 100%
probability that costs do not exceed this level; and
5) the F-model which minimises expected dis-utility for a given risk aversion
coefficient.
not suitable for incorporating cost uncertainty. The most practical approach is to
minimisation for each alternative scenario. The class of two-stage with recourse is
In other words, dimensionality gets out of control. Non-linear feasible regions and
3.3 Simulation
simulation are driven by a different set of objectives: not to find the optimal
analysis and sensitivity analysis are included here, as the manual counterparts of the
Carlo simulation and other sampling techniques also fall into this category.
3.3.1 System Dynamics
factor. A key feature is the effect of feedbacks, of which there are two: a
reinforcing loop and a balancing loop. An example of the reinforcing loop is the
and flow diagrams can be used to structure the problem. Many off-the-shelf
software packages, such as DYNAMO (Richardson and Pugh, 1981) and ITHINK (High
graphic capabilities portray the complex interactions during the simulation runs and
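A minimal stock-and-flow sketch of such a feedback structure, assuming a hypothetical construction delay and demand growth rate, shows the balancing loop at work:

```python
# Minimal stock-and-flow sketch of a capacity-planning feedback loop:
# a balancing loop orders new plant when the reserve margin falls below
# target, but a construction delay means orders only come on line years
# later, so the margin erodes in the meantime. All parameters are
# hypothetical.
YEARS, DELAY = 30, 4            # horizon and build time, in years
TARGET_MARGIN = 0.2             # desired reserve over demand

demand, capacity = 50.0, 60.0   # GW
pipeline = [0.0] * DELAY        # plant under construction, by year due
history = []
for year in range(YEARS):
    capacity += pipeline.pop(0)          # completions come on line
    pipeline.append(0.0)
    demand *= 1.03                       # exogenous 3%/yr demand growth
    margin = (capacity - demand) / demand
    history.append(margin)
    # Balancing loop: order enough to restore the target margin,
    # allowing for what is already in the pipeline.
    shortfall = max(0.0, demand * (1 + TARGET_MARGIN)
                    - capacity - sum(pipeline))
    pipeline[-1] += shortfall

print(f"margin range: {min(history):.2f} to {max(history):.2f}")
```

Even this toy loop shows the margin sagging below target during the construction delay, the kind of dynamic behaviour that the causal models discussed below are built to expose.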
The system dynamics model of Ford and Bull (1989) represents eight categories of
units in the system. A spreadsheet pre-processor translates costs and savings into
cost efficiency curves which are then used to generate electric loads. The more
Two strategies for capacity planning are tested in Ford and Yabroff (1980) by
recession, demand, capacity, and other parameters. Trade-offs between short lead
time and long lead time technologies are also examined in the system dynamic
The industry simulation model of investment behaviour of Bunn and Larsen (1992)
considers the interplay of demand, investment, and capacity and their effects on the
determinants of price, i.e. LOLP and VOLL (explained earlier in Chapter 2). This
model has also been used to determine the optimal capacity level of each of the
players given the interactions of market forces as well as to hypothesise the effects
analysis within a scenario analysis. This type of causal modelling is suitable for the
The system dynamics approach has been greeted with mixed feelings by the
and very detailed, planners are suspicious of models which do not require that level
of detail in the input specifications. Yet at the same time, system dynamics models
generating scenarios of the future. A less formal and less structured method of
scenario generation and analysis, called simply scenario analysis, makes use of
judgement and discussion. Kahn and Wiener (1967), where scenario analysis
likely or least likely, is also very popular although very subjective. Others include
single, dominant issue, e.g. the economy, status quo or business as usual, and
domination.
Five scenarios of the future were constructed in CEGB's assessment of Sizewell B
(Greenhalgh 1985, Hankinson 1986). Firstly, the consumer sector was divided into
category, causal links were developed for a highly disaggregated matrix framework
which allowed for changes in activity level and energy efficiency. Against
forecast of future energy requirements was converted into each scenario and
translated into future electricity demand. The five scenarios were credible pictures
of the future: 1) a high growth and high services scenario, 2) high growth and high
industrial production, 3) a middle of the road, 4) stable but low growth, and 5)
unstable low growth scenario. CEGB's scenarios were based on forecasts, unlike
planning.
process (in figure 3.2) is a technique for analysing alternative futures and developing
Figure 3.2 Scenario Planning Process
[Figure: a three-step flow. Step 1, scenario development: identify world scenarios
against environmental constraints, electric loads, fuel prices, and other resource
options, then develop resource plans. Step 2, implications: generate and evaluate
indicators. Step 3, strategies: develop strategies.]
Source: Mobasheri et al (1989)
Other ways to conduct scenario analysis are discussed in Huss and Honton (1987),
O'Brien et al (1992), and Bunn and Salo (1993). They make use of intuitive logics,
analysis.
analysis to be one of the four most popular methods to assess uncertainty, the
2) It relies less on computing power and more on brainstorming and discussions than
other types of analysis. Scenario analysis encourages people to think more
creatively and broadly about the future, unlike traditional extrapolation techniques.
3) The use of scenarios is particularly helpful in the projection of long range, complex
and highly uncertain business environments.
5) It is most suitable for situations where few crucial factors can be identified but not
easily predicted, where uncertainty is high, where historical relationships are shaky,
and where the future is likely to be affected by events with no historical precedent.
6) It is based on the underlying assumption that if we are prepared to cope with a small
number of scenarios by testing plans against scenarios then we are prepared for
almost any outcome.
Although scenario analysis has become more popular as a corporate planning tool,
4) Care must be taken that there are not too many factors involved, else it would be
mere speculation. There is a trade-off between the feasibility of considering a large
number of factors and the validity of considering a few.
3.3.3 Sensitivity Analysis
identifying and screening the most important variables. Hirst and Schweitzer
(1990) describe this as a way to see which factors trigger the largest changes in
performance and which options are most sensitive to the change. If a change in an
estimate has very little effect on the output, then the result is not likely to depend
reasonable limits of change for each independent variable, the unit impact of these
changes on the present worth or other measure of quality, the maximum impact of
each independent variable on the outcome, and the amount of change required for
Sensitivity analysis is very rarely used as a stand-alone technique. Its main purpose
the end of a rigorous optimisation study or scenario analysis to validate the results,
considered and also one at a time. Other criticisms of sensitivity analysis follow.
1) Brealey and Meyers (1988) criticise the use of optimistic and pessimistic estimates
because their subjective nature gives ambiguous results.
3) A sensitivity analysis does not involve probabilities and therefore does not indicate
how likely it is that a parameter will take on a given value.
4) A percentage change in one variable may not be as likely or comparable with the
same percentage change in another variable.
5) Changing more than one variable at a time may not be feasible due to possible
dependence between variables.
6) It does not attempt to analyse risk in any formal way, leading some authors, such as
Hull (1980), to propose a follow-up with risk analysis.
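Despite these criticisms, the mechanics of a one-at-a-time sweep are simple; a sketch over a toy project-value model, with all inputs and limits hypothetical, is:

```python
# One-at-a-time sensitivity analysis on a toy project-value model.
# Each input is swung between pessimistic and optimistic limits while
# the others stay at base values; all figures are hypothetical.
def npv(price, fuel_cost, output, rate=0.1, years=20, capex=500.0):
    annual = (price - fuel_cost) * output
    return sum(annual / (1 + rate) ** t
               for t in range(1, years + 1)) - capex

BASE = {"price": 40.0, "fuel_cost": 20.0, "output": 5.0}
LIMITS = {"price": (30.0, 50.0), "fuel_cost": (15.0, 30.0),
          "output": (4.0, 6.0)}

base_value = npv(**BASE)
for name, (low, high) in LIMITS.items():
    swings = [npv(**dict(BASE, **{name: v})) - base_value
              for v in (low, high)]
    print(f"{name:10s} impact {min(swings):8.1f} to {max(swings):8.1f}")
```

Ranking the inputs by the width of their impact ranges gives the screening of "which factors trigger the largest changes" described above, while illustrating criticism 2: only one variable moves at a time.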
The injection of random elements such as probability distributions into scenario and
sensitivity analysis departs from the deterministic view of the world. Any
technique that assigns probabilities to critical inputs and calculates the likelihood of
Schweitzer (1990). These probabilities are subjective, i.e. based on the judgement
probability distributions mean that more iterations are necessary for completeness.
different way of incorporating stochastic sensitivity into their cost-minimisation LP
model. Instead of using point estimates for values of input parameters, they use
uncertainty surrounding central parameter values. Their model maps these input
distributions to outputs and employs statistical analysis to find the inputs which
While there may be many ways to analyse uncertainty, the term risk analysis
refers to a specific kind of analysis, first described in the classic paper by Hertz
understanding and awareness of risk associated with a variable. Hertz and Thomas
distribution theory to calculate the mean, variance, skewness, and other parameters
equations are specified, and the input probability distributions are sampled and put
especially when exact probabilities are not known. In this sense, it is
distribution-free. Risk analysis by Monte Carlo simulation or other sampling methods is called
@RISK (Palisade Corporation, 1992). The main arguments for and against using
risk analysis are given in Hertz and Thomas (1984) in table 3.1.
Table 3.1 Arguments For and Against Risk Analysis
For: provides a systematic and logical approach to decision-making.
Against: sometimes provides few guidelines to aid problem formulation.
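The sampling loop that packages such as @RISK automate can be sketched in a few lines; the input distributions and the toy cost model below are hypothetical and not drawn from any cited study:

```python
# Monte Carlo risk analysis sketch: sample the input distributions,
# push each draw through the cost model, and summarise the resulting
# output distribution. All distributions and parameters are invented.
import random
import statistics

random.seed(42)

def total_cost():
    capital = random.triangular(400, 700, 500)     # £m: low, high, mode
    fuel_price = random.lognormvariate(3.0, 0.25)  # skewed fuel price
    output = random.gauss(5.0, 0.5)                # TWh/yr
    return capital + fuel_price * output * 8       # toy cost model

samples = sorted(total_cost() for _ in range(10_000))
mean = statistics.fmean(samples)
p5, p95 = samples[500], samples[9500]
print(f"mean £{mean:.0f}m, 90% interval £{p5:.0f}m to £{p95:.0f}m")
```

The output is a full distribution rather than a point estimate, which is precisely the extra "understanding and awareness of risk" that distinguishes risk analysis from a single deterministic run.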
Between the extremes of hard and soft techniques lies decision analysis, a term
coined from the marriage of decision theory and system analysis. The explicit
decision analysis from optimisation and simulation. [See Raiffa 1968, Keeney
1982, Bunn 1984, Watson and Buede 1987, and Covello 1987 for descriptions.]
Thomas and Samson (1986) generalise the usual steps in decision analysis in table
3.2 below.
Table 3.2 Steps in Decision Analysis
1. Structuring the problem: definition of the set of alternative strategies, the key
uncertainties, the time horizon, and the attributes or dimensions by which
alternatives should be judged.
5. Sensitivity analysis: in relation to the optimal strategy, which may lead to
further information gathering.
trees and influence diagrams are complementary structuring tools for decision
analysis. Decision trees capture the chronological sequence of decision and chance
2) Capacity planning has traditionally been treated in the optimisation sense and is
computationally data intensive, with little role for the decision maker because of
the heavy data demands. Decision analysis essentially restructures the problem into
strict component parts of a decision: options, chance/uncertain events, outcome,
preferences etc. There is a concern that decision trees may over simplify the
problem and exclude some of the essential details.
3) Decision analysis assumes that the model is being developed in direct consultation
with the decision makers. This is often not the case with capacity planning models
where analysts build the models and present the results to the executive level.
This assumption ignores the possible gap between the analyst (modeller) and the
decision maker (user), e.g. whether the analyst is really modelling what the decision
maker wants and whether the decision maker really understands or uses the results
of the model. This aspect of decision making and modelling is discussed in Chapter
4 and again in Chapter 6.
4) Reduction methods are needed to screen out dominated options as the decision tree
can get messy or bushy very quickly. As mentioned before, the dimensionality
problem occurs when the permutation of the number of stages and branches
becomes too large to handle. In a decision tree, there is also the question of how to
propagate the stages, whether by time or by event.
5) Other drawbacks of decision tree analysis given by Thomas (1972) and others include
the need to specify discount rates beforehand, problems with probability
elicitation and assignment, and pre-specification of mutually exclusive scenarios.
Moore and Thomas (1973) also question the usefulness of decision analysis
techniques in practice and present the pros and cons in table 3.3 below.
Table 3.3 Pros and Cons of Decision Analysis
...
Pro 4: Allows decision-maker to judge how much information is worth gathering in a given decision problem.
Con 4: Evaluates the decision in soft number terms because input data at present are often soft in the sense that only crude measurement is possible.
In spite of the above criticisms, the use of decision analysis in capacity planning has
grown, as the following sections show. This suggests that it may become better
received in the UK now that privatisation has shifted attention towards
decisions, uncertainties, and strategic rather than operational issues. For these
reasons, we reserve the following sub-sections to illustrate and describe four
representative applications.
The first major study to harness decision analysis in the power planning context
was the so-called Over/Under Model (Cazalet et al, 1978). It centres around a
simple adjustment rule: if capacity falls short of the actual demand plus a margin,
additional capacity is planned. If capacity is greater than the actual demand plus a
margin, then the project is delayed. The main objective is reliability, i.e. to ensure
that supply meets demand with an adequate margin.
Depicted in figure 3.3, this model tracks the way the uncertainty of demand growth
is resolved over time, the role of uncertainty in load forecasts, and the effect of
plant lead times on capacity planning. Its primary objective is to estimate the
relative costs of over- and under-capacity.
Figure 3.3 The Over and Under Model: a decision tree of capacity outcomes in MW, with branch probabilities of 0.2, 0.6, and 0.2 at each stage and demand growth (MW) shown in parentheses.
Demand is the only uncertainty that drives the Over/Under Model from period to
period. Capacity is adjusted as demand uncertainty is resolved, but this assumes
that the adjustment is costless. Furthermore, it does not account for adjustments
in the technological mix. Although grossly over-simplified, the tree grows with
each stage representing a new period, so the time horizon is limited by the size of
the tree.
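The adjustment logic described above can be sketched as a short simulation. The capacity, growth, and margin figures are hypothetical, and the rule shown is a simplified reading of the Over/Under mechanism; as in the original model, the adjustment itself is treated as costless.

```python
import random

# Sketch of the Over/Under adjustment rule with hypothetical figures:
# each period demand growth is uncertain; the next plant is advanced
# or delayed depending on whether capacity falls short of or exceeds
# demand plus a planning margin. Adjustment is costless, as in the
# original model.

random.seed(1)
capacity, demand = 12000.0, 11000.0   # MW
margin, unit = 0.15, 600.0            # reserve margin, plant size (MW)

for year in range(5):
    demand *= 1 + random.choice([0.0, 0.02, 0.05])   # uncertain growth
    if capacity < demand * (1 + margin):
        capacity += unit      # bring the planned unit forward
        action = "advance"
    else:
        action = "delay"      # postpone the next unit
    print(year, round(demand), round(capacity), action)
```

Repeating the loop over sampled growth paths gives the distribution of over- and under-capacity costs the model is designed to estimate.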
3.4.2 An Extension of the Baughman-Joskow Model
Many of the earlier models addressed only one of three aspects of the planning
problem. Baughman and Joskow (1976) were the first to simultaneously address all
three aspects, by basing their approach on separate models which are integrated
together by a transfer of information through the use of a rolling horizon.
This was extended to a regional model in Baughman and Kamat (1980), which looks
at the rate of change of demand rather than absolute demand levels. This study
shows how easily the single utility case can be extended to cover a broader
geographic scope and compares relative costs and benefits of over- and
under-capacity. The analysis is communicated through the tree representation, the
payoff matrix of figure 3.4, and graphical results.
Figure 3.4 Matrix Model of Decisions and Outcomes

Rows give the capacity expansion decision (plan for SLOW, MODERATE, or RAPID growth); columns give the actual demand growth (slow, moderate, rapid). Each cell shows the shortage outcome and the resulting capacity position:

Plan for SLOW: no shortage (adequate capacity); shortage (under-capacity); severe shortage (under-capacity)
Plan for MODERATE: no shortage (over-capacity); no shortage (adequate capacity); shortage (under-capacity)
Plan for RAPID: no shortage (over-capacity); no shortage (over-capacity); no shortage (adequate capacity)
Decision analysis can also be used to examine the effect of pursuing different
planning objectives, as in the application of Hobbs and Maheshwari (1990). They
address the effects of using different planning objectives under uncertainty and the
impact upon costs and electricity prices of risk averse decision making. Four
conflicting objectives are traded off in the assessment of the benefits of small
power plants and conservation when load growth is uncertain. The study examines
the effect of uncertainty in demand growth, fuel prices, capital costs, and imported
power upon optimal utility plans, the value of information, and the variance of
electricity prices.
This example suggests the attractiveness of applying a simple model (figure 3.5)
to screen out the less important uncertainties, followed by a more
detailed model to focus on the critical ones. Once the range of uncertainties and
options is reduced, the problem can be addressed by the data intensive detailed
model, with additional uncertainties such as fuel price, capital cost, and imported power.
Figure 3.5 Decision Tree in SMARTS: seven stages spanning years 0 to 30, with high and low demand growth branches assigned three-point probabilities (e.g. 0.167, 0.333, 0.25).
Several criticisms are noted. 1) Only two time periods are used. 2) The
probabilities of growth rates for the two time periods are assumed independent,
clearly not the case in reality. 3) Perfect correlation was boldly assumed between
fuel and capital costs. 4) Three point probability distributions were used rather
than more refined continuous distributions.
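The kind of three-point reduction criticised here can be illustrated by collapsing a continuous distribution onto three weighted percentiles. This sketch uses the Pearson-Tukey weights on the 5th, 50th, and 95th percentiles and a hypothetical demand growth distribution, not the SMARTS data.

```python
import random
import statistics

# Sketch: collapsing a continuous uncertainty into a three-point
# approximation, here with Pearson-Tukey weights (0.185, 0.63, 0.185)
# on the 5th, 50th and 95th percentiles. The demand growth
# distribution is hypothetical.

random.seed(0)
samples = sorted(random.gauss(0.02, 0.01) for _ in range(10_000))

def percentile(xs, q):
    """Crude empirical percentile of a sorted list."""
    return xs[int(q * (len(xs) - 1))]

points = [percentile(samples, q) for q in (0.05, 0.50, 0.95)]
weights = [0.185, 0.630, 0.185]

approx_mean = sum(w * x for w, x in zip(weights, points))
true_mean = statistics.fmean(samples)
print(round(approx_mean, 4), round(true_mean, 4))
```

The approximation matches the mean closely for a symmetric distribution; the criticism bites when the underlying uncertainty is skewed or when tail behaviour drives the decision.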
Values and preferences of different decision makers and stakeholders translate into
more than one and possibly conflicting objectives. Keeney and Sicherman's (1983)
model of Utah Power and Light's planning process emphasizes this multiple
objective aspect. Subsequently, but still based on the original technology choice
decision, the Baltimore Gas Study (Keeney et al, 1986) extends the objective
hierarchy and adds further objectives.
The Baltimore Gas Study uses the decision tree in figure 3.6 to capture the choices
available to a utility and the uncertainties that result through time. This analysis
addresses both the dynamic and multiple objective aspects of the problem.
Break-even analyses are performed afterwards to confirm the best strategies chosen.
Figure 3.6 (Source: Keeney et al, 1986) sketches the choices for 1994 to 1998 of conventional coal (with or without scrubbers), purchased power, and new technology for 1998, against uncertainties in fuel availability, readiness in 1994, low demand, and whether the new technology facility meets expectations.
Figure 3.7 Technology Choice Objectives Hierarchy
Many models in practice, particularly at the national or regional system level,
involve more than one technique, which we call model synthesis. Model synthesis
refers to any formal attempt to use two or more of the above-mentioned techniques
to achieve the same end. Examples are drawn from practice to illustrate 1) the
kinds of techniques that are suitable for synthesis, 2) how well they address the
areas and types of uncertainties, and 3) the manner of synthesis.
Several commercial packages synthesise techniques for capacity planning problems.
Not all packages are integrated, as some would require the user to import data
from different modules and perform separate analyses. The Electric Generation
Expansion Analysis System (EGEAS), developed by the US-based Electric Power
Research Institute, includes modules for expansion analysis and data collapsing for
trade-off and uncertainty analysis. These options are selected by the user. For
uncertainty analysis, the user specifies subsets of uncertain input parameters. The
model then generates scenarios for each combination of these parameters.
These commercial packages are composed of modules which share the same data
base and have a common interface. Packages such as EGEAS and WASP run on
mainframe computers; they are not flexible for modification or customisation. These
packages are not transparent to the user and therefore not customisable or
extensible. They tie the user to their built-in techniques.
DEFNET is a planning tool that can be used to model uncertainties in load growth,
fuel and capital costs, and performance of new generation technologies. The power
system planner's risk attitudes and value judgements are captured in multi-attribute
utility functions, with the decision criterion to find the least expected cost or the
maximum expected utility. DEFNET can also incorporate attributes other than
cost, such as environmental impact and licensing and operating delays.
Figure 3.8 depicts possible decision paths which branch out from an initial state.
More information available in the near term translates to finer detail, as opposed to
the coarser or wider range of possibilities in the future. This is analogous to
planning in a rolling horizon mode.
Figure 3.8 Decision Paths from an Initial State: possible decisions branch out into resource plans over an extension period.
OPTIMISATION IN A DECISION TREE FRAMEWORK
Capacity expansion planning can be viewed as investment decisions made for the
long term, with the investment sub-problem solved by a branch and bound
algorithm. The operation sub-problem is a multi-stage, multi-period one. Within
this optimisation framework, a minimax decision rule is used to select the optimal
cost over a decision tree. The optimal expansion strategy minimises the maximum
regret obtained from evaluating the binary tree of different growth rates and
decisions. This elaborate methodology has nevertheless been applied only to the
Brazilian system and a single type of technology.
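The minimax regret rule described above can be sketched on a small matrix. The decisions, outcomes, and costs below are hypothetical, not the Brazilian system data.

```python
# Sketch of the minimax regret rule over a hypothetical cost matrix:
# rows are expansion decisions, columns are demand growth outcomes
# (slow, moderate, rapid). Regret is the excess cost over the best
# decision for that outcome; the chosen strategy minimises the
# maximum regret across outcomes.

costs = {
    "plan_slow":  [100, 180, 260],
    "plan_mod":   [130, 140, 200],
    "plan_rapid": [170, 175, 180],
}

n_outcomes = 3
best_per_outcome = [min(c[j] for c in costs.values()) for j in range(n_outcomes)]
regret = {d: max(c[j] - best_per_outcome[j] for j in range(n_outcomes))
          for d, c in costs.items()}

choice = min(regret, key=regret.get)
print(choice, regret[choice])   # plan_mod 30
```

Note that the rule needs no probabilities on the growth outcomes, which is precisely its appeal when demand growth is deeply uncertain.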
A decision tree structures the capacity planning problem in Mankki (1986). The
decision tree shown in figure 3.9 captures the uncertainties of demand, fossil fuel
prices, and nuclear capital costs, and is evaluated with an optimisation algorithm.
Figure 3.9 Decision Tree with Optimisation Algorithm
C1991 = coal power plant (production start 1991); CFFP = constant fossil fuel prices; RFFP = rising fossil fuel prices; LNCC = low nuclear capital costs; HNCC = high nuclear capital costs
Clark (1985) extends the Over/Under Model described earlier to generate scenarios
for the analysis of demand uncertainty. This integrated demand forecasting model
examines the relevant trade-offs, and is also used to examine excess capacity rules
in the adjustments of reserve margins.
DECISION ANALYSIS FOR SCENARIOS
Garver et al (1976) use a decision tree (figure 3.10) to generate three strategies
and five event scenarios for a five year period. The decision alternatives and
possible event scenarios are propagated yearly. Two criteria are used to evaluate
the strategies: 1) present worth costs, and 2) the cost of uncertainty, which
represents the opportunity loss. Although the time horizon is relatively short and
the scenarios few, this example illustrates the potential for structuring extensive
scenario analysis in a decision tree framework.
Figure 3.10 (Source: Garver et al, 1976) shows the 1976 to 1980 decisions branching into three strategies (minimum long range cost, short term oil conservation, short term capital conservation), each facing five event scenarios: business as usual; 50% oil price increase in 1980; 50% nuclear fuel price increase in 1980; 50% coal price increase in 1980; 20% cost of capital increase in 1980.
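The cost-of-uncertainty criterion can be read as expected opportunity loss, i.e. the expected value of perfect information, sketched below with hypothetical probabilities and costs standing in for the Garver et al figures.

```python
# Sketch: the "cost of uncertainty" as expected opportunity loss,
# i.e. the expected value of perfect information. The strategy names
# echo the Garver et al set-up, but all probabilities and costs here
# are hypothetical stand-ins.

probs = [0.6, 0.2, 0.2]       # business as usual, oil increase, capital increase
costs = {                     # present worth cost of each strategy per scenario
    "min_long_range_cost":  [100, 160, 150],
    "oil_conservation":     [110, 120, 155],
    "capital_conservation": [115, 150, 125],
}

expected = {s: sum(p * c for p, c in zip(probs, cs)) for s, cs in costs.items()}
best_strategy = min(expected, key=expected.get)

# With perfect information we could pick the cheapest strategy per scenario.
best_per_scenario = [min(cs[j] for cs in costs.values()) for j in range(3)]
ev_perfect = sum(p * c for p, c in zip(probs, best_per_scenario))

cost_of_uncertainty = expected[best_strategy] - ev_perfect
print(best_strategy, round(cost_of_uncertainty, 1))   # oil_conservation 12.0
```

The cost of uncertainty also bounds how much it is worth paying for better information, linking this criterion to Pro 4 of table 3.3.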
Another application combines multi-objective linear programming with sensitivity
analysis. For this study, country risk was assessed by a scoring system based on
the weighted sum of a subset of factors that affect legislation and regulation
related to the shipping of fuel. The scenarios cover demand for electricity, fuel
costs, controllability (how closely a power plant can follow the load duration
curve), utilisation rate for each type of plant, and daily load curve. However, the
dimensionality of extensive scenario analysis, arising from the number of factors,
limits the analysis.
Kreczko et al (1987) structure the decisions regarding whether or not to build a
new type of technology and then use a linear programme to calculate its effect on
costs and installed capacity. A utility function called the net decision benefit is
used to propagate the decision tree in figure 3.11. Lack of experience with the
operation of this new type of technology implies uncertainties in future costs.
Capital costs are derived from a multiple of related technological cost estimates.
Other costs are elicited from experts.
Figure 3.11 Decision Tree of New Technology Evaluation (Source: Kreczko et al, 1987): decisions on whether to build the CDFR and foreign fast reactors (CFRs), with chance nodes for uncertain performance and delay.
Results from different types of simulation in Merrill and Schweppe (1984) are
summarised by regression into SMARTE functions, whether manual or computer
based. The resulting functions are validated by goodness of fit to the data base.
Users' insights are introduced into the model, hence the large subjective element.
SMARTE applications are found in Merrill (1983) and Merrill et al (1982), which
feature the combined use of simulation and optimisation.
Figure 3.12 SMARTE Methodology (Source: Merrill et al, 1982): modelling and regression produce the SMARTE functions, which feed trade-off evaluation.
3.6 Conclusions
This review of techniques applied to capacity planning reveals the limitation of
existing approaches and the potential for greater model completeness through
synthesis. The main conclusions are presented and supported below.
1) All kinds of OR techniques have been applied to the problem of capacity planning,
although the areas and types of uncertainties are modelled with different degrees of
completeness and adequacy.
These techniques pictured in figure 3.13 span the range of optimisation, simulation,
and decision analysis. Table 3.4 summarises the critique of techniques with respect
to uncertainties.
2) Models (or applications) based on single techniques are able to capture some aspects
very well and others not at all. Adding greater detail does not compensate for what
the technique is not designed to do.
No single technique is comprehensive enough to address all the issues. For
example, models without a decision analytic focus have difficulty capturing the
multi-staged nature of decisions and the associated risks. One cannot use
optimisation techniques for decision analysis purposes as the assumptions are not
compatible. Likewise, one cannot use simulation to treat uncertainty in the same
sense as variability. Rather than building something intrinsic to deal with
uncertainty, some models ignore these uncertainties. Others give limited but
inadequate attention to this issue. Increasing uncertainties have often called for
softer approaches such as scenario analysis.
3) The critique of techniques with respect to areas and types of uncertainties reveals
the lack of completeness of coverage, inadequacy of treatment, and further difficulties
of manageability, computational tractability, and other problems.
The modelling critique supports the above conclusions. Table 3.4 presents the
critique of techniques with respect to uncertainties.
Table 3.4 Critique of Techniques
...
decision analysis: considers multi-staged resolution of uncertainty; decision criteria and multi-attribute utility functions; role of decision maker; not detailed enough; not optimal. Open questions: how many stages to consider; subjective probabilities or historic evidence for uncertain events; which decisions to consider; how to propagate decisions, by individual factors or scenarios.
4) The additional modelling difficulties translate into new modelling requirements, which
reflect the conflicting criteria of comprehensiveness, comprehensibility, and
practicality.
The above modelling difficulties can be condensed into five main areas:
1) Mathematical restrictions lay the rules and foundation for any technique and set
the boundaries and conditions for its functionality. Linearity and convexity
requirements of linear programming prevent its applicability to non-linear and non-
convex relationships.
2) These structural conditions for functionality determine the way in which the
problem can be formulated. For example, multi-stage resolution of uncertainty
cannot be achieved within a linear programming framework while optimisation
with respect to given constraints cannot be accomplished by decision analysis or
system dynamics.
4) Many of the problems of dimensionality and algorithm efficiency are related to data
specification, that is, the level of complexity and realism that can be modelled
without sacrificing tractability. Assumptions, approximations and reduction of
information which are undertaken to simplify the problem to a manageable level
must be assessed against the need for completeness. These considerations imply
judgement, trade-off evaluation, and decision making at the model construction
stage.
Uncertainty is commonly given representation by probabilities. However, not all
techniques are capable of representing and adequately treating uncertainties.
5) A balance of hard and soft techniques is needed to address the different aspects of
capacity planning.
We distinguish between hard and soft techniques for the purposes of this
discussion. Hard techniques are formal, quantitative, and computational in nature.
They are suitable for well-structured problems and are aimed at solving bigger
problems more quickly and efficiently. Towards the other end of the spectrum lie
softer methods that are less formal but more qualitative in scope, suitable for
uncertainty and strategic analysis. They are more descriptive than prescriptive.
Capacity planning involves operational scheduling and long-term investment and
retirement decisions, which entail many uncertain parameters, conflicting
objectives, rapid changes, high costs and risks. These aspects are not well
addressed by hard prescriptive techniques. Ill-representation of uncertainty has
often been preferred to make the problem tractable. The well-specified problems
that hard techniques solve differ from the ill-structured strategic problems,
requiring a balance of hard and soft approaches. Strategic decisions have high cost
implications, and the uncertainties that affect such decisions require soft
treatment. To model uncertainty in capacity planning, both types of techniques
are needed, i.e. the harder methods of production costing and scheduling and the
softer methods of uncertainty analysis. Hard and soft techniques are thus
complementary. In the past, the focus was towards efficiency of algorithms via the
continued refinement of hard techniques; synergies between hard and soft
techniques were rarely identified or exploited.
Some techniques that have been used in electricity planning share similar
characteristics, as figure 3.13 illustrates.
Figure 3.13 OR Techniques: the span of optimisation (linear programming, stochastic programming, decomposition, sensitivity analysis), simulation, and decision analysis (decision trees, influence diagrams, multi-attribute utility, multi-criteria analysis).
6) Combining different techniques or approaches can balance the objectives and assist
towards a more complete model. For example, system dynamics and optimisation
allow a balance of descriptive and prescriptive modelling.
7) Model synthesis has the potential to overcome the deficiencies of single technique
based models by exploiting synergies between techniques and achieving a balance of
techniques.
A synthesis should be capable of supporting the balance of hard and soft
techniques.
8) Decision analysis emerges as a versatile technique with proven potential for synthesis
with other techniques.
This review has deliberately steered towards decision analysis, which has not
received much attention in the UK (as compared to the US). Recent desk-top
decision software, such as DPL (ADA, 1992), has automated the previously
laborious construction and evaluation of decision trees, encouraging its use.
9) A fair method of model comparison beyond the literature review is needed to evaluate
model characteristics and performance to a greater level of detail and depth.
This review has attempted to assess the capacity planning models reported in the
literature. To get beneath the difficulties found in this review, it is necessary to
look into the model, e.g. via a case study based modelling experiment.
The strong engineering culture of electricity capacity planning has evolved from a
tradition of hard techniques, resulting in models with more detail. In light of this,
the following questions emerge.
1) How can a single utility, given its limited resources, expand existing techniques
to cope with all these uncertainties (in Chapter 2) given the modelling
difficulties listed above?
3) How can we compare models more fairly and in greater depth, to get beneath
what is written and reported?
CHAPTER 4
4.1 Introduction
Chapter 3 reviewed the range and evolution of techniques which have been applied
to capacity planning. Models based on more than one technique appear more
capable of capturing the different kinds of complexities and uncertainties than those
based on single techniques. This suggests that model synthesis (using more than
one technique) may help to achieve the ideal of a comprehensive yet comprehensible
model.
Yet the modelling literature has little to offer on strategies and criteria for model
synthesis.
This chapter gives an account of the investigations into the feasibility and
desirability of model synthesis. Replications of existing approaches (section 4.4)
and synthesis prototypes (section 4.5) are assessed according to the dominant
criteria (comprehensiveness, comprehensibility, and manageability).
That model synthesis, via the decision analytic framework proposed by this thesis,
proves difficult raises two questions: 1) Is model completeness a reasonable goal
in the first place? 2) Are there more practical means to compensate for the lack of
completeness or deal with the range of uncertainties? Section 4.6 discusses these
questions with respect to the concept of flexibility. The last section 4.7
summarises the main findings and proposes an alternative framework.
An experimental protocol was adopted to allow an objective and systematic
method of inquiry. Figure 4.1 shows the three main components, namely,
literature review, feasibility tests, and the modelling experiment.
First, extensive literature reviews from the previous two chapters identified
uncertainties and modelling requirements which formed the basis for a lengthy
checklist of model evaluation criteria (section 4.3.4). Second, two pilot studies
established the feasibility of the evaluation method (section 4.3.3). Third, a case
study (section 4.4.1) was developed to capture the current concerns in the UK ESI
and to inject energy policy insight into the subsequent analysis. Fourth, three
modelling approaches (section 4.4.2) representing those followed in the industry
were replicated and evaluated. Finally, model synthesis (section 4.5.4) was tested.
Insights from replicating existing approaches were transferred to the prototypes of
synthesis subsequently studied.
Figure 4.1 Experimental Protocol
This protocol provides a fair method of model comparison. The next three sections give the rationale, details, and results.
4.3.1 Rationale
A mere literature review of applications does not sufficiently expose the full
limitation and potential of existing approaches. What is written can be biased and
minimal, supporting the authors' intentions but not illuminating the intricacies of
their particular approach. Authors have discretion over what to reveal and
consequently may hide weaknesses of their approach while presenting it in a
favourable light; published accounts cannot easily be checked for thoroughness.

1 Circularity of the modeller as evaluator may cast doubt into the authenticity of the replication and the
objectivity of the evaluation. Model replication and synthesis depend greatly on modelling skills and
availability of modelling software. Depending on these conditions, different researchers may reach
different outcomes.
Reported applications are each based on a particular case study and are not always
comparable. These fundamental differences hinder fair comparison. Anchoring all
approaches to a single case study fixes the modelled content to enable ease and
fairness of systematic assessment. The first stage replicates and evaluates the three
modelling approaches; the second stage tests the feasibility of model synthesis.
Spreadsheet packages, e.g. Lotus 123, add-ins, e.g. @Risk, and decision software,
e.g. DPL, offer the kind of ease-of-use and multi-functionality which facilitate rapid
model construction. Most of the models reported in the literature have been
painstakingly computer coded from scratch and required major effort in
implementation. The new tools allow models to be conceptualised, constructed,
and tested much more quickly and effectively. Such tools can be used to mimic or
replicate the approaches reported in the literature.
The feasibility of model replication and evaluation has been established in two
unrelated pilot studies. The first study addresses specific issues of the Nuclear
Review in the UK. The well-known techniques of sensitivity analysis and risk
analysis were applied to a comparison of levelised costs of nuclear, coal, and gas
plant, based on data taken from various OECD countries. Documented fully in
Appendix A, this study exemplifies the level of detail pursued in the modelling
experiment. The second study used data from the Hinkley C public inquiry for
replicating the deterministic approach, which is later subsumed into the first stage
of the modelling experiment, for evaluation as well as replication.
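The risk analysis applied in the first pilot study can be sketched as Monte Carlo sampling through a levelised cost calculation. The formula below is deliberately simplified and every number is hypothetical, not the OECD data of Appendix A.

```python
import random

# Sketch of risk analysis on levelised costs: Monte Carlo sampling of
# uncertain inputs through a much simplified levelised cost formula.
# All figures are hypothetical stand-ins.

random.seed(42)

def lcoe(capital, fuel, load_factor, rate=0.08, life=30, o_and_m=5.0):
    """Levelised cost (money/MWh) for 1 kW of capacity."""
    # Capital recovery factor annualises the capital cost over plant life.
    crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)
    mwh_per_kw = 8760 * load_factor / 1000
    return capital * crf / mwh_per_kw + fuel + o_and_m

draws = sorted(
    lcoe(capital=random.uniform(900, 1500),   # uncertain capital cost (per kW)
         fuel=random.uniform(10, 25),         # uncertain fuel cost (per MWh)
         load_factor=random.uniform(0.60, 0.85))
    for _ in range(5_000)
)
median = draws[len(draws) // 2]
p95 = draws[int(0.95 * len(draws))]
print(round(median, 1), round(p95, 1))
```

The sorted draws form the cumulative risk profile from which cost ranges of competing plant types can be compared, as in the pilot study.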
This case study based modelling experiment combines several standard research
methods. Most model comparison studies are not case study based but mere
reviews of model specifications. The studies by Dixon (1989) and Davis and West
(1987) belong to the category of case study based comparative analysis of models.
While Dixon compares existing models to critique and improve upon them, Davis
and West compare to show off the model they developed. Dixon comments only
on the input and output, thus treating the models as black boxes. On the other
hand, Davis and West probe into the trade-offs of specific techniques employed in
the models. Neither study defines the criteria for comparison beforehand. The
basis of comparison is very general and superficial, e.g. strengths and weaknesses,
ease of use.
Not enough has been written about how to evaluate models. Mulvey (1979)
suggests that evaluation bias can be reduced provided a methodology exists for
evaluating models; models are scored on each dimension, with the final results
compared linearly by dominance. Morris (1967) suggests broad characteristics of
models, which are useful but not specific enough for the purposes of assessing
large complex models. Our evaluation criteria were derived from the literature
review, hence more detailed and comprehensive than earlier studies. Instead of
ranking all models on one criterion, our evaluation method assesses each model on
multiple criteria. The checklist derived from Chapters 2 and 3 proved feasible for
evaluation purposes in the second pilot study. This full checklist, however, proved
too detailed for practical purposes. For example, it was not always possible to
compare models with different assumptions and properties. A reduced checklist
was more workable but lacked the detail and comprehensiveness for which the
evaluation and synthesis were intended. The reduced criteria consist of five major
categories for evaluation, described below.
The level of detail relates to input variable specification, which directly contributes
to comprehensiveness. Although this category was largely ignored in the
evaluation, it directly affected the transparency of the model. This category
includes questions like how many variables are, can, or must be included; how
many types of plants are considered; and the level of operational and financial
detail. Operational detail relates to parameters such as load factor, utilisation rate,
and thermal efficiency. Financial detail covers costs, e.g. operations and
maintenance charges, interest during construction, tax, and discount rates.
The structure of the model determines its extensibility and reusability. A simple
model structure can hardly capture all aspects of the problem, such as risk attitude,
sequential stages, and uncertainty resolution. Risk attitude can be incorporated in
the discount rate, utility functions, and risk tolerance levels.
The quality of the output is related to the level of detail in inputs. The range of
insight and richness of solutions surface from the range and diversity of
alternatives. These and other aspects of the problem, such as business risk, reflect
representativeness, dimensionality, and computational tractability. These
pre-defined criteria act as cues for the investigation of the potential and limitation
of each approach.
The case study captures a snapshot of the UK electricity supply industry as at July
1993. The utilities involved in power generation in the UK Electricity Supply
Industry fall into three categories, broadly labelled as unprotected but dominant,
protected but competitive, and unprotected but encouraged. Each is briefly
described below, and their plant mix as at July 1993 summarised in the associated
tables.
The unprotected but dominant utility describes the two major power generators,
National Power and PowerGen. These duopolists are primarily concerned with
regulatory and competitive pressures, e.g. the threat of MMC referral, caps on
electricity prices, and the Regulator's scrutiny.
Table 4.2 Unprotected but Dominant Utility: National Power

Plant type   Stations   Capacity (MW)
...
Oil          3          4,484
Hydro        3          40
TOTAL        41         25,008
The protected but competitive utility characterises Nuclear Electric, which has not
been privatised. Despite heavy subsidy through the nuclear levy and other
government protective measures, the nuclear generator has an equally strong profit
motive, i.e. to show that it can eventually compete in the private sector when the
subsidy expires in 1998. Many of these uncertainties will be resolved with the
outcome of the Nuclear Review due in 1995, i.e. privatisation, subsidies, public
support, financing of power stations, and the future of nuclear power. Nuclear
Electric's plant mix is summarised in table 4.3.
Table 4.3 Protected but Competitive Utility

Plant type   Stations   Capacity (MW)
Magnox       7          3,293
AGR          5          6,039
Hydro        1          30
TOTAL*       13         9,362
Finally, the unprotected but encouraged utility reflects the views and opportunities
of the new independent entrants, which build CCGTs tied to back-to-back 15 year
fuel contracts as a way for the regional electricity companies to secure capacity.
Table 4.4 summarises their plant mix as at July 1993.
Table 4.4 Unprotected but Encouraged Utility
Given the uncertainties of demand, fuel prices, environmental regulation and
potential over-capacity, the case study addresses the question of which new plants
to invest in and which old ones to retire. Total system cost is calculated from
totalling all investment and operating costs. The relative economics of plant can
be determined by its merit order in the entire system. Two kinds of uncertainty
surrounding the decisions to invest in new plants and retire old ones are explicitly
considered: demand and fuel price. These uncertainties at the industry level
concern all types of power generators; therefore differentiation amongst the
utilities is not required. The categorisation above suggests differentiation by type
of utility; however, these firm level uncertainties have not been modelled here.
2) Fuel price uncertainty affects the types of fuels used in the technologies. The main
factors describing fuel price uncertainty are base price and subsequent escalation
rates. Emission regulation, spot prices, and related fuel prices determine the
direction and rate of fuel price escalation.
The consolidated industry data used in the replicated models cover 85 plants,
classified by type (Magnox, AGR, large coal, medium coal, small coal, oil, OCGT,
CCGT, hydro, and the Scottish and French links). Eight different seasonal load
duration curves were specified to correspond to peaks and troughs in demand
during the year. Linear trend forecasts were specified for peak demand and each
type of fuel for each period in the planning horizon. Capital and operating costs
were specified for each of the 85 plants. New alternatives included fossil fuel
plant, nuclear, and renewables.
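The way load duration curves and plant costs combine to determine relative plant economics by merit order can be sketched as follows; the plant set, costs, and load blocks are hypothetical, not the consolidated industry data.

```python
# Sketch: ranking plant economics by merit order against a stepped
# load duration curve. All plants, costs, and load blocks are
# hypothetical illustrations, not the thesis data set.

plants = [                        # (name, capacity MW, marginal cost per MWh)
    ("coal",    1500, 18.0),
    ("nuclear", 1200, 8.0),
    ("ocgt",     400, 45.0),
    ("ccgt",    1000, 22.0),
]
plants.sort(key=lambda p: p[2])   # merit order: cheapest runs first

blocks = [(1500, 8760), (1000, 5000), (1000, 1000)]  # (MW, hours), base to peak

remaining = {name: cap for name, cap, _ in plants}
dispatch = {name: 0.0 for name, _, _ in plants}      # energy in GWh

for mw, hours in blocks:
    need = mw
    for name, cap, cost in plants:                   # fill in merit order
        take = min(need, remaining[name])
        remaining[name] -= take
        dispatch[name] += take * hours / 1000
        need -= take
        if need == 0:
            break

for name, _, _ in plants:
    print(name, round(dispatch[name]))   # nuclear 10512, coal 7828, ccgt 800, ocgt 0
```

Plants low in the merit order run for many hours; peaking plant at the top of the order may not run at all, which is what determines their relative economics in the system.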
The first stage of the modelling experiment examines three modelling approaches,
labelled deterministic, probabilistic, and decision analytic. The first two have been
highlighted in public inquiries into proposals to build new nuclear power plant.
The deterministic and probabilistic approaches rely on demand and fuel forecasts.
In the US, on the other hand, regulatory hearings have increasingly made use of
the decision analytic approach, which combines some features of the other two.
3) The decision analytic approach is patterned after the North American decision
analysis school, a discipline actively practised by consulting firms such as SDG
(Strategic Decisions Group) and ADA (Applied Decision Analysis). We replicate a
variant of the Over/Under Model of Cazalet et al (1978). This kind of decision
analysis approach is illustrated in Anders (1990) and Peerenboom et al (1989). The
approach is heavily decision analysis oriented with emphasis on the technology
choice decision. A decision tree is structured to capture the major decisions and
uncertainties.
Details of these approaches, their replication, and evaluation are given in the
appendices. The deterministic and probabilistic approaches are essentially
replications of the Sizewell B and Hinkley C models, but with industry data
updated to 1993. The decision analytic approach required more extensive
prototyping, i.e. construction rather than replication. The three approaches were
replicated and evaluated independently of each other. The full system specification
of the first two approaches captures the operational and financial details missing in
the decision analytic approach, which relies on estimates of plant characteristics
and costs. In the absence of actual plant data, estimates may reduce the level of
accuracy and detail and adversely affect the reliability of the output. This aspect
of modelling, i.e. the inability to model the full system given the available data, is
a recurring limitation.
The level of detail leads to the comprehensiveness of specification, which is met by
the first two approaches. Both are developed from a full model specification of
the power system. The deterministic approach implies one-off decision making,
i.e. a single plan is produced rather than the usual multi-staged contingent
alternatives. The probabilistic and decision analytic approaches capture
uncertainty whilst saying nothing about the preferences and risk attitudes of the
decision maker. The probabilistic approach seeks an acceptable plan, i.e. results
that lie within an acceptable range. It gives no indication of the optimal plan.
Range and richness of insight in the output depend on the specification of the
inputs. Risk analysis handles many uncertainties at once and also gives risk
profiles of different output parameters at the end of the simulations. The decision
analytic approach requires the sequential resolution of uncertainties, giving a
smaller set of permutations than is possible from risk analysis. Risk profiles
constructed from cumulative output distributions can be compared for cost ranges,
whereas the decision tree produces discrete pictures of these alternatives, which
give much less information. Although the deterministic approach produces outputs
that are scenario dependent, it could in principle be recast as decision analysis. In
practice, finite states and the dimensionality of multiple stages prevent such a
formulation. The probabilistic approach, on the other hand, uses continuous
distributions of the uncertain inputs.
The deterministic approach treats uncertainty in a static and limited fashion. There
149
problem to be addressed in terms of controllable and uncontrollable events, i.e.
decisions and uncertainties. Table 4.6 summarises the main points in the evaluation
comprehensible model.
risk attitude
flexible construction
and sensitivity analyses give limited insight to the kinds of uncertainties that
results point to the strengths and weaknesses of each approach and the need to
turbulence of the last two decades. Inaccurate forecasts lead to sub-optimal plans.
The traditional approach of fitting plans to forecasts of fuel supply and electricity
difficult, because the optimisation programme optimises the entire system and not
analysis which, on the other hand, is incapable of modelling the intricacies of the
power system.
3) The decision analytic approach is unable to capture the details required in power
systems planning.
Given the above conclusions, the next stage of the modelling experiment attempts
through synthesis of the essential feature of the first two approaches (optimisation)
4.5.1 Rationale
Applications based on single techniques lack the breadth to cover the range of
problem. The three representative approaches are individually unable to meet the
model, should achieve the kind of balance and completeness unattainable by any
single technique. Some people, e.g. Linstone (1984) and Brown and Lindley
modelling all refer to the term we coined model synthesis. The final synthesized
1) By exploiting synergies between techniques, a synthesis reflects the notion that the
whole is greater than the sum of the parts.
3) Due to the high cost of development (Balci, 1986), it seems easier to use existing
readily available models and tools rather than to develop one from scratch.
Synthesis capitalises on familiarity and reusability of existing models. Reisman
(1987) urges a synthesis of models rather than the development of more models, as
there are too many models already. Synthesis through generalisation and
systematisation reduces the jargon and effort required to master new techniques.
4) Instead of new investment (of knowledge, resource, etc), synthesis involves issues of
integration and automation.
6) The above five reasons are supported by the noticeable modelling trend seen in
practice, as explained below.
NEMS, and WASP in Ruth-Nagel and Stocks (1982), Beaver (1993), and IAEA
(1984). While advances in software, hardware, and human capability may help to
achieve these modelling goals of completeness, the required amount of effort and
resource may well exceed that available to a single utility. This thesis addresses
The second stage of the experiment determines the feasibility of model synthesis
for a typical power company in the UK, given its limited resources. Different
within a decision analysis framework are conceived and tested. Models of models
(explained in section 4.5.4) are built to facilitate this synthesis. Beginning with a
full conceptualisation of the issues involved in model synthesis in section 4.5.2, the
4.5.5.
4.5.2 Conceptualisation of Model Synthesis
Applying more than one technique to achieve synthesis involves the following
briefly discuss these concerns and then summarise the conceptualisation of model
and assumptions, and ability to exploit synergies between techniques depend on his
familiarity with the model components. This technique-driven bias arises out of the
imply concerns of manageability of data and model, validation, error tracking and
model such that resulting changes in the components are consistent and the
specifications or using the synthesized model for different purposes will require re-
One of the difficulties of synthesis results from the (lack of) compatibility of
requirements cannot be resolved, the final model may not work. Ironically, the
strength and appeal of model synthesis lies in the diversity (and complementarity)
of model components!
techniques. However, methods to facilitate this are not always obvious. Appendix
compatibility
validation
extensibility of components
reusability
Compatibility: the co-existence of model components in a synthesis depends on their underlying assumptions and theoretical foundations; this determines the feasibility of synthesis and the effort required in resolving conflicts of interest.
literature is full of new modelling languages, most notably Geoffrion (1987), but
strategies for synthesis appear necessary to narrow down the possibilities and
avoid the costly method of trial and error. [Appendix C gives three main strategies
model components.
1) The required number of interfaces increases with the number of techniques, and this
contributes to the dimensionality problem.
2) Data transfer and sharing complicate the communication between different models.
Output data from a model component is rarely in a form acceptable by another,
hence requiring some transformation.
3) Any components requiring direct user input must have an appropriate user interface.
The above structuring issues are detailed in Appendix C, but summarised in table
4.8 below.
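The data-transfer issue in point 2 can be illustrated with a minimal adapter between two model components. All field names and figures here are invented for illustration; they do not come from any actual model in the study.

```python
# Hypothetical adapter: reshape one model component's output into the
# input form another component expects. Field names are invented.

def optimiser_output_to_tree_input(opt_out):
    """Map an optimisation result to the (payoff, capacity) pair a
    decision-node attachment expects, converting MW to GW on the way."""
    return {
        "payoff": opt_out["total_cost"] - opt_out["baseline_cost"],
        "capacity_gw": opt_out["new_capacity_mw"] / 1000.0,
    }

row = {"total_cost": 510.0, "baseline_cost": 480.0, "new_capacity_mw": 1200.0}
node_input = optimiser_output_to_tree_input(row)
```

Each pair of communicating components needs such a transformation, which is why the number of interfaces grows with the number of techniques.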
In addition to the above, a distinction between weak and strong forms of synthesis is
combined, for operational planning purposes. At the middle level, models are
aggregated to pull the information together. At the top level, models are
dependence than a strong synthesis and hence easier to build but possibly more
cumbersome to assimilate the results. Here, the model components are not tightly
represent weak forms of synthesis, whereas the second stage investigates stronger
forms of model synthesis. The strongest level of synthesis is full integration, where
components are no longer distinct from each other. While the strong form may
require more work for the modeller initially, the resulting synthesis provides less
work for the user. The modelling work involved in synthesizing is a fixed
investment cost, while the additional work involved in using the resulting model is
a variable operating cost. Thus model integration provides the rationale for
investigate the practical issues of synthesis. Stage two of this experiment attempts
4.5.3 Decision Analysis Framework
close to the decision maker in a reduced form. The decision tree structure is
linkage.
The decision tree structure of nodes and branches has synergies with other
decisions and uncertainties and the communication of strategic issues. Until the
advent of desktop decision software, decision trees were restricted to structuring
simple problems. The tedious task of expected value calculation especially in large
DPL (ADA, 1992). These developments motivate a re-use of the age-old decision
Decision trees have rarely been used as a framework for incorporating other
interface, i.e. dynamic or static linkages in the decision and chance nodes; a method
and the construction of a decision tree and its equivalent influence diagram.
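The expected-value calculation that made large trees so tedious by hand is a simple recursive rollback: chance nodes take probability-weighted averages, decision nodes take the best branch. A minimal sketch, on a hypothetical tree with costs to be minimised:

```python
# Recursive rollback of a decision tree: chance nodes take expected
# values, decision nodes take the best (here: minimum-cost) branch.
# The tree below is hypothetical.

def rollback(node):
    kind = node["kind"]
    if kind == "leaf":
        return node["value"]
    if kind == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        return min(rollback(child) for _, child in node["branches"])

tree = {"kind": "decision", "branches": [
    ("build", {"kind": "chance", "branches": [
        (0.6, {"kind": "leaf", "value": 100.0}),
        (0.4, {"kind": "leaf", "value": 160.0}),
    ]}),
    ("wait", {"kind": "leaf", "value": 130.0}),
]}
best = rollback(tree)  # build: 0.6*100 + 0.4*160 = 124, preferred to 130
```

Packages such as DPL automate exactly this rollback, plus sensitivity analysis over the probabilities.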
As concluded from model replications of the first stage of the experiment, decision
trees are too simplistic to incorporate the level of detail required of capacity
techniques of optimisation or simulation become overwhelming and troublesome.
Using the core optimisation model in its existing data-intensive form within a
2) Each optimisation gives results that require considerable reduction and conversion
for further use in decision nodes.
3) Interim processing is required to organise the inputs and outputs into an acceptable
form.
The above difficulties imply that any further sensitivity analysis or alternative
There are at least three ways to overcome these operational difficulties. 1) One is
Another is to design an interface with other models to generate the necessary data.
However, neither solution improves upon the speed and ease of the original
optimisation as they are both static links3. For these reasons, they have not been
3 Static ways of referencing a bigger and more complicated model include 1) setting up look-up tables, 2)
keeping a database of feasible solutions, 3) approximation, 4) using an aggregate function, and 5)
sampling and interpolation to extract or read off values from the original model.
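Options 1 and 5 of this footnote can be sketched together: a look-up table of pre-computed optimisation results, read off by linear interpolation. The table values are hypothetical; a real table would be filled from runs of the original model.

```python
# Static link: a look-up table of pre-computed (demand growth -> total
# cost) optimisation results, with linear interpolation between entries.
# The table values are hypothetical.

import bisect

table = [(0.00, 400.0), (0.01, 430.0), (0.02, 470.0), (0.03, 520.0)]
xs = [x for x, _ in table]

def lookup_cost(growth):
    """Interpolate total cost for a growth rate within the table's range."""
    i = bisect.bisect_left(xs, growth)
    if xs[i] == growth:  # assumes growth lies within [xs[0], xs[-1]]
        return table[i][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (growth - x0) / (x1 - x0)

cost = lookup_cost(0.015)  # midway between the 430 and 470 entries
```

The weakness is exactly the one noted above: the link is static, so any query outside the pre-computed grid requires re-running the original model.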
of model to facilitate dynamic linkages. The next section gives a detailed account
of this investigation.
4.5.4.1 Introduction
model. The reduced model answers the need for less input and output and more
3) The original model produces too much output, mostly unnecessary for its intended
use or requires further manipulation or reduction to be useful.
5) The original model requires too much input data which cannot be obtained easily.
6) The original model is not an end in itself, but a means to an end, therefore
approximation is acceptable.
7) The original model cannot be used in model synthesis in its existing form.
In the physical and engineering sciences, response surface methodology (Box and
Draper, 1987) is an established way of building a simpler model from the inputs
most common being the least square method for regression. The reduced
regression model can be compared with the full optimisation model for model fit,
whereas most regression models are fitted on data which cannot be generated at
will.
is related to the previous two criteria. A reduced model is intended for further use,
analysis. The additional effort required to adapt or transform this model must not
be excessive.
4.5.4.2 Methodology
repeated for different values of independent variables and different forms of the
final reduced model. The next paragraphs explain the seven main steps.
First we determined the payoffs and values needed in the decision analysis
framework, as listed in table 4.9 below. These incremental payoffs were intended
for attachment to nodes of the decision tree, subject to values of other independent
variables or nodes along the path. A regression equation was built for each
dependent variable.
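The regression step can be sketched with ordinary least squares; the variables and run results below are hypothetical stand-ins for the study's Xi and Y.

```python
# Fit one ordinary-least-squares regression per dependent variable.
# The data below are hypothetical stand-ins for optimisation run results.

import numpy as np

# Each row is one optimisation run; columns are independent variables
# (e.g. demand growth, fuel escalation). Values are invented.
X = np.array([[1.0, 0.05],
              [1.2, 0.05],
              [1.0, 0.08],
              [1.4, 0.10]])
y = np.array([100.0, 112.0, 103.0, 130.0])  # one dependent variable

A = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

One such equation would be fitted per dependent variable in Table 4.9, with the R square used as a first screen of model fit.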
Table 4.9 Dependent Variables in the Reduced Model
Investment cost
Total cost
Cumulative new plant installed capacities per plant type, for pre-selected time periods, taken from the Production Costing Results (*.PCR)
In the manner of the first pilot study (Appendix A), we used sensitivity analysis to
find the main independent variables that determine the previous dependent
variables. The reduced form should be much simpler than the full model, hence,
choice of which input variables to fix and which to vary in the optimisation model.
Table 4.10 Independent Variables in the Reduced Model
Once the dependent and independent variables have been selected, we determined
the value ranges for each Xi. We anchored the average value for each Xi, i.e.
E(Xi), then fixed a margin above and below it. Thereafter, we adjusted the
There are two main ways to generate data points for independent variables:
2) Probability distributions reflect how likely and how frequently the input data
(values of independent variables) will occur. Hence this is a more realistic (accurate)
reflection of how one would expect to get the data (if available) than the factorial
design. The distribution method also allows the use of sampling techniques, such
as Monte Carlo and Latin Hypercube Sampling within risk analysis. As a shortcut,
we used the sampled data generated from the Probabilistic Approach.
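Generating input data sets from distributions, rather than a factorial grid, can be sketched as below. The variables and distribution parameters are invented for illustration.

```python
# Generate input data sets by sampling probability distributions, as an
# alternative to a factorial design. Variables and parameters are invented.

import random

random.seed(42)  # reproducible draws

def sample_run():
    """Draw one set of independent-variable values for one optimisation run."""
    return {
        "demand_growth": random.gauss(0.02, 0.005),          # mean 2%, sd 0.5%
        "coal_price": random.uniform(1.3, 1.8),              # pence/kWh range
        "discount_rate": random.choice([0.05, 0.08, 0.10]),  # discrete scenarios
    }

runs = [sample_run() for _ in range(100)]  # one data set per optimisation run
```

Each sampled data set then feeds one execution of the optimisation programme, as described in the next step.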
This is supported by Morgan and Henrion (1990), who observed that the
extracted sets of data for independent variables into text input template files and
edited them into an acceptable form for the optimisation programme. Each data
set was used for one optimisation execution (run), resulting in one set of output
data. The relevant dependent variables were extracted from this output into a
spreadsheet. This was repeated for n sets of data and runs. One hundred to one
thousand runs were made for each combination of Y and Xs. After all data sets
have been processed, we modified the formats to prepare for regression analysis.
6) Regression Analysis
The latest releases in desktop statistical software offered not only statistical but
also visual model fitting facilities. We used the Curve Fit facility in the statistical
good regression model, we used facilities such as forced entry (all variables
square to get the overall fit. We also looked at possible interaction terms, outliers,
The R squares were very low, ranging from 0.05% to 32% at best, indicating a
poor fit. The variance of R squares was very large. This implied that the form of
7) Validation
These two kinds of validation (within and outside range) were aimed to show the
The within and out of range validation tests were unsatisfactory as there was no
pattern to the outcomes of using new data on reduced models. Along with large
residual variance, these results made the reduced models unacceptable and not
reusable.
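The two validation tests can be sketched as follows, assuming a fitted reduced model and access to the full model for comparison. Both functions here are hypothetical stand-ins, with the full model given a curvature the reduced model misses.

```python
# Within-range vs out-of-range validation of a reduced model against
# the full model. predict() and full_model() are hypothetical stand-ins.

def predict(x):          # reduced (regression) model, fitted for x in [1, 2]
    return 50.0 + 30.0 * x

def full_model(x):       # stand-in for an expensive optimisation run
    return 50.0 + 30.0 * x + 4.0 * (x - 1.5) ** 2

def max_abs_error(points):
    return max(abs(predict(x) - full_model(x)) for x in points)

within = max_abs_error([1.0, 1.25, 1.5, 1.75, 2.0])  # inside fitted range
outside = max_abs_error([0.0, 3.0])                  # extrapolation
acceptable = within < 2.0 and outside < 2.0          # tolerance assumed
```

In this stylised case the within-range error stays small while the out-of-range error explodes, mirroring the failure pattern reported above.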
Our negative results seem to contradict that of Bunn and Vlahos (1992) who
variables were varied: demand escalation rate, nuclear capacity cost, discount rate,
coal price, coal price escalation rate, and the level of Non-Fossil Fuel Obligation
(NFFO). The resulting dependent variable is the difference in total cost of the
optimal plan without the NFFO and with the NFFO. The regression model was
validated against a further 250 new scenarios. Building such a model was mainly
analysis. Our choice of independent variables is totally different from theirs as are
our dependent variables. The background scenarios (fixed variables not entered
into the regression) are also different. In addition to equal permutation (as they
have done), we also used probability distributions, which should give a more
4.5.4.3 Conclusions
After substantial modelling effort in which different data sets were produced, we
were unable to arrive at a convincing argument for model of model using the
costing detail of the optimisation programme for the decision analysis framework is
below.
1) INFEASIBLE
None of the regression models were reliably and consistently representative of the
original model. The residuals were large and varied, with no apparent pattern.
This made results unpredictable. Poor R square implied that regression may not be
a good basis for model building. These results were perhaps due to the parameters
chosen.
It was difficult to ensure that the artificial data generated from successive runs of
the original model could produce meaningful and admissible combinations for
regression analysis.
Both within range and out of range validation of reduced models failed to give
convincing results. This not only questions the acceptability of the reduced model
2) IMPRACTICAL
The effort in producing and validating a regression model was quite large. In fact,
it was greater than the sampling and risk simulation work involved in the
Probabilistic Approach. This effort should not exceed that of re-using the original
3) NOT REUSABLE
The previous two criticisms (infeasible and impractical) foreshadow this third one.
The reduced model, even if well-calibrated to the original, could only be used for
the background scenario given, hence of limited use. In other words, each form of
the reduced model is confined to the background scenario. This implies that any
model. Likewise, changing any parameter that was originally fixed to produce the
reduced model does not guarantee valid results as out of range validation showed
that the model was limited to the independent variables and ranges specified.
required for capacity planning. In its existing form, the core capacity optimisation
model was incompatible with decision analysis in data (input and output),
structure, and level of detail. The output of optimisation was not meaningful for
linking without further reduction. These complexities of data size and form added
One way to overcome the above problems was by a model of model. After
extensive tests, this approach failed to meet the criteria of feasibility, practicality,
These practical limitations of model synthesis are due to the conceptual difficulties
given in Appendix C and the operational difficulties which reveal the importance of
for further research, which together with our experimental findings help to explain
the impractical pursuit of model synthesis given the limitations of our current state-
1) Model synthesis requires the resources and capability beyond a single model
builder. Utilities in the UK ESI, especially new entrants, have limited resources.
Model synthesis may not be a practical solution. Furthermore, a single model
builder is biased by the technique familiarity, choosing only to use techniques that
are most familiar and available, thus unable to see the synergies for model
synthesis.
2) Even if model synthesis is workable, it is hard to say if the insights from the
resulting model are more useful than using different techniques, i.e. without any
synthesis. The fixed cost of synthesis may be too great especially if not re-usable.
On the basis of the above conclusions, we turn to other ways of dealing with the
literature has called for flexibility in planning and the consideration of flexible
can be used within this context. Flexibility as an end in itself radically departs from
4.6 Motivation for Flexibility
i.e. capturing all uncertainties, as a means to deal with strategic uncertainties. For
reasonable and achievable modelling goal in the first place? Aversion to large
models in strategic planning has led to simple models, e.g. Ward (1989), for the
for the user of the model, i.e. the final decision maker who relies on the model for
guidance and defensibility. A high level of confidence ensures that the resulting
model will be used and re-used. A low level of confidence suggests that the user
experiences unease in using the model fully or at all. The real question is: is it
statement flexibility compensates for model unease. The left-hand column gives
the users general belief about model completeness, i.e. whether or not models can
be complete. The top row gives what the user has been told about the model in
question. There is no unease if the user believes that models can be complete, and
this particular model is complete. There is model unease whenever the model in
We distinguish between intra- and extra-model unease. Intra-model unease refers
removing this kind of unease. Extra-model unease refers to the gap between the
user and the model, i.e. the user believes that models can never be complete.
Therefore there will always be an unease about what the model gives and what the
user desires from the model. This gap between the user and the model
characterises the style of decision making in this industry because the decision
maker is not the builder of capacity planning models. This gap can be argued as
follows.
1) Model and forecast errors always exist. Traditional approaches rely greatly on the
accuracy of forecasts. However, forecasts by definition are predictions.
Discrepancies (errors) between the actual and the forecast, no matter how small,
will always occur. Models are simplifications.
2) The dynamics of model building, decision making, and realisation of plans imply
that there is always a gap due to lead time. The nature of the generation business is
such that investments have to be made before they are needed, during which time
any number of factors may occur and change the expected performance. There are
lead times to the construction of a useful model, the effective communication of its
results, and understanding and acceptance by the final users.
3) Models do not supply everything the user wants. The user may not understand the
model fully, hence unease. The user may want to retain his own control, i.e. not rely
on the model completely. Other organisational and political reasons may prevent
full acceptance of the model.
If we believe that models can never be complete, then there will always be unease.
model unease. This also suggests that the real goal we should be aiming towards,
towards greater completeness, but modelling for (or to produce) practical solutions
to cope with uncertainty. Practical means to cope with uncertainty are given in the
next sub-section.
Practical means of coping with uncertainty have been suggested by Hertz and
Thomas (1983) and Hirst and Schweitzer (1990). 1) Ignoring uncertainty allows
one to focus on the complexities albeit at a high cost. 2) Building more accurate
forecasts may help to achieve more accurate optimisation, but this does not
form of robustness, but this does not eliminate uncertainties. The remaining
4) Defer decisions by waiting until additional information is available or until
important uncertainties are resolved. The cost of waiting includes the opportunity
uncertainties respectively.
6) Sell risks by conducting auctions for supply and demand resources. Negotiate
7) Adopt a flexible strategy that allows easy and inexpensive changes. One way is to
and small modular unit size. Recent electricity planning literature (e.g. CIGRE,
1991 and 1993) has also proposed technical means of achieving flexibility in
4.7 Conclusions
How can we cope with increasing uncertainty and meet the conflicting criteria
of comprehensiveness and comprehensibility?
suggested and two of them pursued in this thesis, namely decision analysis as an
We investigated the feasibility of model synthesis by conducting a two staged
1) The first stage of the modelling experiment showed that existing approaches were
incapable of dealing with the conflicting criteria of comprehensiveness and
comprehensibility.
2) The second stage revealed the limitations of model synthesis as a singular approach
to uncertainty modelling.
4) These results suggest that model synthesis is not a trivial undertaking, and the work
involved may well exceed the capacity of a single modeller and the limited
resources of a single utility.
We then examined the goal of model completeness and concluded that reducing or
removing model unease may be a more appropriate goal for dealing with the
Flexibility has been suggested as a hedge against model unease and as a practical
the decision maker. That such an ill-defined concept could complement or even
infrastructure), inflexibility (of long lead times and high sunk cost), and illiquidity
(as the sale of uneconomic plant is still a relatively new phenomenon). Assets in
the electricity supply industry are not as easily exchangeable or tradeable as those
requirements and data input; thus the vague concept of flexibility must be precisely
its practical usefulness. We need to be able to define, measure, and apply it to our
problem. The second part of the thesis clarifies the concept of flexibility through a
APPENDIX A
Pilot Study 1
A.1 Introduction
5) providing the rationale for model synthesis. The level of detail documented here
At time of writing in July 1992, the nuclear review in the UK ESI promised for
1994 provided a rich background and rationale for the comparison of plant
economics. Nuclear power has always been a highly controversial topic, with
and the evaluation of intangibles. It is a real event that is bound to provoke debate
up to the actual date of review, thus providing plenty of evidence and results for
our comparison. The following script (courtesy of Kiriakos Vlahos) gives a brief
While the Hinkley Point C inquiry was under way, the UK government took the decision
to postpone any decisions about the nuclear development programme until a review of
nuclear in 1994. It also withdrew the nuclear industry from the privatisation
programme and formed a new company, called Nuclear Electric which would operate
existing nuclear stations.
The Hinkley Point C inquiry approved the development of the new power station on the
grounds of the Non-Fossil Fuel Obligation, although the economics of nuclear at the
time looked desperate compared to either coal or gas.
Since then, concern about the environment and especially the greenhouse effect has been
growing, and the EEC is planning to introduce a carbon tax on fossil fuels. Such a tax
would improve the economics of nuclear power stations, since they do not produce the
main greenhouse gas CO2, nor do they produce SO2 and NOx in the generation of
power.
In addition, Nuclear Electric has been performing quite well in financial terms,
producing substantial profits, of course to a large extent due to the nuclear levy. But
they did manage to improve the availability of the AGR stations and to increase the
market share of nuclear overall. Nuclear Electric and BNFL, the two main companies of
the UK nuclear industry, are keen to build new nuclear power stations and they even
called for the review date to be brought forward. This has been declined by the
government.
Developments in the electricity and gas markets are also relevant. A large power station
building programme coincided with privatisation, and Combined Cycle Gas Turbine
(CCGT) is the type of plant that will inevitably dominate the new power station market.
If projections materialise, gas consumption will double in the UK by the year 2000.
Whether the gas industry can produce that much gas at competitive prices is an open
question. The UK and European gas supply and demand situations need to be carefully
examined.
The latest news is that Nuclear Electric wants to build Sizewell C, a successor to
Sizewell B but with double the size (about 2.5 GW). They estimate that this follow-up
will achieve significant economies of scale and will be economic compared to
competing electricity generation technologies.
nuclear power to compete against other plant especially in anticipation of the likely
over-capacity due to gas-fired plants, which are expected to dominate the early part
of the next century. This pilot study examines the economics of nuclear power and its
immediate competitors, coal and gas, the most influential factors affecting the final
cost of electricity, the impacts of the proposed EC carbon tax, and the overall
effect of uncertainties.
Marginal or levelised cost analysis is used for assessing plant economics rather than
constrained optimisation of the entire system. The main data used in this study
originates from UNIPEDE (1988) and OECD/NEA (1989) reports, which are
referred to as UNIPEDE and OECD throughout the study. Electricity costs are
components of cost are then presented and discussed. Uncertainties are assessed
by the techniques of sensitivity analysis and risk analysis. Extensions to this study
The levelised cost of electricity, also known as the average or uniform discounted
cost, is the accepted method for comparing the economics of different power
OECD. All the costs are discounted to the present value at a certain point in time
so that the terms, which are expressed in constant money of that given date, can be
summed and divided by the discounted electrical output. For a project in the
pence/kWh.
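The levelised cost calculation described above can be sketched directly: discount the annual cost streams and the annual electrical output to a common date, then divide. The plant figures below are hypothetical, not study data.

```python
# Levelised cost = discounted sum of costs / discounted sum of output.
# The plant figures below are hypothetical, not study data.

def levelised_cost(costs, outputs, rate):
    """costs: annual costs; outputs: annual kWh; rate: discount rate."""
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_output = sum(q / (1 + rate) ** t for t, q in enumerate(outputs))
    return pv_costs / pv_output

costs = [1000.0] + [80.0] * 30   # year-0 investment, then annual O&M plus fuel
outputs = [0.0] + [600.0] * 30   # kWh generated in each operating year
lc = levelised_cost(costs, outputs, 0.05)
```

Because both numerator and denominator are expressed in constant money of the same date, the result is a single comparable figure per technology.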
This study identifies the main components of the cost of electricity and the major
factors that influence them. It explores the extent to which these components
contribute to the final levelised cost under the impacts of varying discount rates
Instead of giving a point estimate for the cost of electricity, this study uses
sensitivity and risk analyses to give a realistic range of estimates. A realistic range
contains the most likely values, and in this case, is qualified within the international
context, specifically of plants to be commissioned in the last five years of this
of the major factors upon baseload coal-fired and nuclear power stations in the
United Kingdom. The same approach can be extended to other types of plant.
First, the factors that influence power plant economics are isolated by taking
possible interactions between these factors. Ranges for the main parameters are
common currency and then taking the minimum and maximum values for all OECD
countries surveyed. The range for a given parameter is then reduced by discarding
those outer values that reflect unrealistic circumstances in the UK sense, e.g.
ranges are then used as bounds in the sensitivity analysis. Data for UK coal and
nuclear power plants are taken from both studies and used as base cases in this
paper. Many countries are represented in both studies, thus allowing for data
comparison and validation. Where input data is not available, the parameters are
The approach of following sensitivity analysis with risk analysis has been
(1980) and Clemen (1991). Indeed, sensitivity analysis has been widely acclaimed,
e.g. by Rappaport (1967) and Hertz and Thomas (1984), as a logical adjunct to
The important factors that influence generation cost are first identified. The ranges
of values are extracted from published sources for sensitivity analysis and risk
analysis. The basic factors are defined in terms of the likely ranges of values and
their impacts, the nature of relationships (linear or non-linear), and the magnitude
of impacts. Progressively the assumptions are dropped and constraints tightened,
[Figure: analytical progression from scenario analysis (isolation of factors, ranges) through causal analysis and sensitivity analysis (ranking of factors) to risk analysis (dominance of technologies) and decision analysis.]
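One-at-a-time sensitivity analysis, used in this progression to rank factors, can be sketched as follows. The cost function and factor ranges are hypothetical stand-ins for the levelised-cost model and the published ranges.

```python
# One-at-a-time sensitivity: swing each factor over its range while
# holding the others at base values, then rank factors by cost swing.
# The cost function and ranges below are hypothetical.

base = {"fuel": 1.5, "load_factor": 0.7, "rate": 0.08}
ranges = {"fuel": (1.0, 2.2), "load_factor": (0.6, 0.85), "rate": (0.05, 0.10)}

def cost(p):  # stand-in for the levelised-cost model
    return p["fuel"] / p["load_factor"] + 20.0 * p["rate"]

def swing(factor):
    lo, hi = ranges[factor]
    values = [cost(dict(base, **{factor: v})) for v in (lo, hi)]
    return max(values) - min(values)

ranking = sorted(ranges, key=swing, reverse=True)  # biggest swing first
```

The ranking of factors by swing then determines which uncertainties carry forward into the risk analysis.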
The levelised costs of plants to be commissioned in the near term are the costs to
the generators, not the price charged to consumers. This study examines the
levelised costs of plants to be commissioned between the years 1995 and 2000 for
The levelised cost of electricity generation by coal-fired plants given in the OECD
and UNIPEDE reports ranges from 1.33 to 3.99 pence/kWh. These costs were
calculated from the raw data supplied by OECD countries discounted at 5% for 30
years to constant currency of January 1987. The levelised cost consists of three
components, namely, contributions from the initial investment, annual O&M, and
the variable fuel cost. The variability in contribution by fuel is the greatest for coal,
0.47 to 2.64 pence/kWh, over twice as much as investment, which ranges from
reveal a narrower range of costs, 1.33 to 2.94 pence/kWh, with the reverse order
for coal in relative contributions of investment and fuel. This is not surprising as
investment costs are much higher for nuclear than for coal plants. Fuel costs show
great variability because the expectations of future fuel prices differ widely
amongst these countries. Figure A.2 illustrates the range of values for all the
countries studied.
[Figure A.2: ranges of levelised cost components in pence/kWh for coal (investment, O&M, fuel, total) and nuclear (investment, O&M, fuel, total).]
This kind of horizontal analysis shows the range of costs across the countries
country, the general order and magnitude of difference can be used to assess
Variability across countries can be examined by comparing the ratio of the largest
to the smallest cost by component, e.g. the maximum divided by the minimum for
cost due to investment (or O&M or fuel) across all countries. Here, the greatest
difference is the contribution by the O&M cost of coal-fired plants, the largest
being 8.31 times the smallest. But the range is small: 0.11 to 0.90 pence/kWh in
absolute terms compared with other costs. Even the smallest discrepancy claims a
factor of two, i.e. the largest nuclear investment cost is twice the smallest. For
both coal and nuclear fuel cost contributions, the largest is 5.55 times the smallest.
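These ratio comparisons can be reproduced mechanically, as sketched below. The per-country figures are invented; note also that the quoted ratios (8.31, 5.55) were presumably computed on unrounded data, so recomputing from the rounded published figures gives slightly different values.

```python
# Variability across countries: ratio of the largest to the smallest
# cost per component. The per-country figures below are hypothetical.

costs_by_component = {            # pence/kWh, made-up per-country data
    "coal O&M": [0.11, 0.35, 0.90],
    "nuclear investment": [0.86, 1.20, 1.72],
}

ratios = {k: max(v) / min(v) for k, v in costs_by_component.items()}
```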
Although differences between countries are not the subject of this study, it is
exchange rates and different assumptions made by each country, a vertical analysis,
as shown in figure A.3, eliminates these issues by looking at each cost component
in relation to the total. Again the minimum and the maximum are taken of the
proportions for all OECD countries surveyed to set the maximum bounds used in
Figure A.3 Vertical Analysis of Cost Contribution
[Bar chart: percentage contribution to total cost by component: coal investment, O&M, fuel; nuclear investment, O&M, fuel.]
Compared to other components, O&M contributes least to the total cost for both
coal and nuclear. Investment contributes between 18 and 34% to the cost of coal,
while it is much greater for nuclear, being 47 to 70%. The relationship between
investment and fuel is again reversed for nuclear and coal, i.e. the contribution by
investment is much higher than that by fuel for nuclear (and the opposite for coal).
Such range diagrams depict the relative importance of different components within
a single technology and the same component between different technologies. The
length represents variability, the longer it is the greater is the range of possible
values. Nuclear fuel has the greatest variability, contributing anywhere from 12 to
43% of total cost. The height represents the importance of the component, the
higher it is the greater the contribution. The major components of coal and nuclear
are due to fuel and investment costs, respectively, each of which contributes up to
184
A.4 Major Components of Cost
The UNIPEDE and OECD studies were conducted in 1988 and 1989 respectively,
before discussions of a carbon tax came into full swing. Grubb (1989) and others
have discussed the effect of a carbon tax on the relative competitiveness of fossil
fuels for electricity generation and its effectiveness in curbing global warming.
Using raw data and established conversion rates from these reports, a carbon tax
can be calculated to see its effect on the final cost. The carbon tax component is
externalities.
Aside from the costs of investment, O&M, fuel, and carbon tax, the drivers of the
four cost components are discount rate, life, load factor, escalation rates,
efficiency, and carbon dioxide emission factor. These factors are depicted in the influence diagram below.
[Influence diagram: discount rate, total investment, lifetime, load factor, annual O&M, escalation, efficiency, carbon tax, emission factor, and tax rate, all feeding into the levelised cost.]
Each of these factors is defined next. The ranges of input values are taken from
the two reports and adjusted with assumptions and approximations. The costs are
expressed in constant US$ and ECU of January 1987 in OECD and UNIPEDE
reports respectively. These cost figures are converted into sterling using the
A.4.1 Assumptions
The UNIPEDE and OECD reports could not have based their calculations on fixed
uniform scenarios, for the figures were simply collected from the participants
without adherence to a priori rules. Instead their aim was solely to calculate
levelised costs using the input figures provided. The input figures, such as capital
costs, fuel costs, and load factors, vary from country to country, and the
differences are explained in the two reports. Each country has its own assumptions
about fuel prices and trends, especially for the stages of the nuclear fuel cycle.
Most countries expected future coal prices to soften such that nuclear generation
costs would not be that much cheaper than coal generated electricity. OECD goes
further to seek an independent view from the Coal Industry Advisory Board,
whose average of best estimates was significantly lower than the majority of the
Generally speaking, differences in capital costs are associated with costs of actual
updating of older data. Cost of labour and welfare charges factor into the running
costs and, along with other figures, are tempered by the economic situations in
each country. As all costs are converted into a common currency, some
currency to ECU or US$ in January 1987 when the exchange rates were taken.
Capital investment of new coal-fired plants is largely burdened by the additional
cleaning equipment, which depends on the type of sulphur and nitrogen oxide
removal processes. Other capital cost differences are due to the economies of
scale in unit sizes (315 to 840 MW for the UNIPEDE study, 165 to 850 MW for the OECD study). Similarly, the number of units on the same site contributes to the
scale effects in investment and running costs. In other words, marginal costs
With reference to long term uncertainties, UNIPEDE considered the risk of error
in determining total generating costs. This risk lies with the price of natural
uranium and the cost of irradiated fuel management for nuclear generation and in
the long term price of coal for coal-fired generation. However, it did not
investigate the magnitude or likelihood of the uncertainties nor the time scale of
Rather than inventing pseudo highs and lows for the different values needed to
calculate costs or basing the study on historic costs, we use realistic ranges of
similar plant types in OECD countries. These figures refer to plants that will be
commissioned around the same period of time. They have already been deflated
As mentioned previously, each country submitted its figures based on its own
assumptions and expectations of the future. Taking the range of input figures runs
the risk of ignoring conflicting assumptions about fuel prices, the regulatory scene, and other factors. For example, the low cost of fuel in one country may be due to its proximity to the source, whereas the high cost of fuel in another could be due to its unlucky experience in procuring a
different grade of fuel in the past. However, by taking the entire range in such a
sensitivity analysis, we can be sure of encompassing all possible values and some improbable ones.
Some cost differences are structural: the regulatory framework, the economic situation, the current and target plant mix, the status of over- or under-capacity, government subsidies, environmental concerns, and various other factors contribute to new plant costs.
These differences cannot be generalised for any ranges, but for the sake of
completeness, the range over all countries ensures that all possible values are
included.
Initially, ranges are taken from international studies for the sensitivity analysis.
Later, these ranges are adjusted for the UK case and revised by more current
views.
A.4.3 Investment
Most capital and related costs are incurred in the form of construction cost during the construction period, together with interest during construction (IDC), which depends on the investment schedule and prevailing interest rates. Both OECD and UNIPEDE
studies computed the interest during construction using an interest rate equal to the
discount rate. Therefore, when the discount rates were varied in the sensitivity
analysis, IDC would change accordingly. It may be argued that the use of an
interest rate in the calculation of the IDC relates to a financing decision whereas
the use of a discount rate in the calculation of levelised cost follows an investment
decision. By this token, the interest rate and the discount rate need not be
the same, especially since the interest rate used to calculate the IDC can vary with
time and the kind of financial arrangement. In contrast, a fixed discount rate is
used to revalue all other costs to a specific date. This aspect of capital cost
requires detailed modelling and in-depth investigation beyond the scope of this
pilot study. For simplicity, the IDC is taken as a lump sum and included in the
investment cost in the sensitivity analysis. Thus changing the discount rate would
not affect the IDC, and the financing and investment evaluations are kept separate.
We also include the IDC in the risk analysis that follows, as the IDC relates to the
for coal plants. It is assumed that the reduced plant efficiencies due to this
capital costs. For comparison purposes, this is included in the investment costs for
both coal and nuclear. The provision for nuclear is much greater than that for coal
and the magnitude is more uncertain. This provision depends on whether the
dismantling is partial or total and the time elapsed between the final shutdown of
the station and the start of decommissioning. The impact of this provision is not
The investment cost for coal ranges from £475 to £1,211 per kW of installed capacity.
A.4.4 Operations and Maintenance
In general, fuel costs make up 80% of the running costs, with the remaining 20% due to operations, maintenance, and labour. Although there are fixed and variable portions to the O&M cost, it is approximated as a single fixed annual cost in this study. The O&M cost component is the smallest of the three major components after
fuel and investment as shown in the previous section on costs. Specific cost of
labour and welfare charges depend on the economic situation in each country.
The OECD report gives total O&M costs. This figure is taken as an annual fixed
O&M cost with zero variable O&M cost. The bulk of this cost is due to the
portion of labour in fixed costs and the amount of labour employed at site.
Because the UNIPEDE report does not list O&M costs, a slight approximation
must be made to derive the O&M cost as an input. The annual fixed O&M cost is
calculated from multiplying the average discounted O&M cost per kWh (in the
reference case of 5% discount rate and 25 year life) by the annual utilisation of
6,600 hours. Again, the resulting annual fixed cost is assumed to include the
O&M costs for coal vary between £6.92 and £59.74 per kW per year, while the range is smaller for nuclear, being £16 to £49 per kW per year. To compensate for
the variability in O&M costs, a modest range of escalation rates is applied in the
sensitivity analysis.
A.4.5 Fuel
The treatment of nuclear fuel differs greatly from that of fossil fuels in generating
electricity. The calculation of the final cost of electricity generation due to the
complicated nuclear fuel cycle requires additional coefficients, which are not
evident in the two reports. For this reason, the output values of pence/kWh attributed to fuel were used. A more detailed study could calculate the conversion
from raw uranium concentrate, through the nuclear fuel cycle, to a more accurate
Fossil fuel prices are available by equivalent heat, weight or volume. For instance,
oil is typically measured in barrels, coal in tonnes, and natural gas in cubic metres
or therms. Measurement by heat content standardises for all fossil fuels. To relate
these fuel prices to the plant heat rate, the price per equivalent heat is used, e.g.
per GJ.
Coal prices vary considerably between countries depending on the location of the source, whether the coal is imported or domestic, and the subsidies and taxes on fuel. The high prices of domestic coal in
Germany and Spain are an order of three to four times the cost of imported coal
Canada is half the cost of the cheapest imported coal in the world. In both reports,
imported coal prices were given in the UK values. Given that the final analysis is
aimed at sensitivity of costs in the UK, a logical conclusion is to tighten the range
of possible coal prices by restricting the analysis to imported coal. These imported
coal prices varied from 89 pence to £2.24 per GJ, which compares reasonably with
The price of steam coal in the UK (IEA, 1991) increased from £1.26 to £1.81 per GJ between 1980 and 1990, with an average high of £1.86 in 1988. After deflating, the price has fallen in real terms. Negotiations between the major generators and British Coal (Financial Times, March 1992) indicated an expected price of £1.50 to the current £1.63 per GJ range, while Scottish Power had been able to procure coal at £1.00 per GJ. Average prices of coal purchased by the
reached as high as £1.99/GJ between 1986 and 1992. Thus the derived range of £0.89 to £2.24 per GJ is not unrealistic for evaluating the case of UK coal fired
stations. The levelised costs reported in OECD and UNIPEDE are most sensitive
to assumptions about future fuel prices. For this reason, comparisons with
A.4.6 Carbon Tax
The parameters specific to fossil fuels necessary to calculate the carbon tax are
absent from these two studies. Conversion rates such as carbon dioxide emission
factors, heat content, and plant efficiency are readily found in recent policy and
economic studies on the carbon tax. However, these policy-oriented papers do not
specifically state many of their assumptions for the numbers. Tax units are
expressed in $/ton and $/BOE. While BOE is understood to be the amount of fuel
equivalent to the CO2 released from burning a barrel of oil, it is unclear whether
the unit of ton refers to the long ton, the short ton, or the metric tonne. Not only
are the units misleading, there are at least two ways to calculate the tax effect: by
establish a common basis for the calculation of carbon tax, values for these
parameters of oil, gas, and coal are taken from various papers and recalculated to
tally against the original results. This type of multi-source analysis establishes the
such parameters can be found by using figures from various sources. [See Ontario
The EC carbon tax is a specific tax levied on the CO2 equivalent of the fuel burned, where one tonne of oil is taken as equivalent to 7.64 barrels. Given a heat content of 42.6 GJ/tonne and an emission factor of
75 kg CO2/GJ, burning a barrel of oil emits approximately 418.19 kg CO2. The
fuel equivalent emission factor used in the calculation for other fuels is 7.64 / 42.6 /
75 = 0.00239 BOE/kg CO2. Without specifying the exact grade of coal for the
range of emission factors (71.7 to 108 kg CO2/GJ) and plant efficiencies (25 to
45%) and $/£ exchange rates ($1.40 to $2 = £1.00), the effect of a $3 per BOE
carbon tax would range from a low of 0.21 to a high of 0.80 pence/kWh. For the
stations surveyed in the OECD and UNIPEDE reports, this study assumes that
similar types of coal could be used in all stations. In other words, an average
carbon dioxide emission factor of 88.11 kg CO2/GJ could be used to calculate
carbon taxes.
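As a check on the arithmetic, the carbon tax bounds quoted above can be reproduced; the function below is a sketch of the conversion chain (heat per kWh, CO2 per kWh, barrels of oil equivalent, tax in pence) using the parameter ranges stated in the text.

```python
# Effect of a carbon tax on levelised cost, following the BOE
# conversion described above. The 0.00239 BOE/kg CO2 factor comes
# from 7.64 barrels/tonne, 42.6 GJ/tonne, and 75 kg CO2/GJ.
BOE_PER_KG_CO2 = 7.64 / 42.6 / 75  # ~0.00239

def carbon_tax_pence_per_kwh(tax_usd_per_boe, emission_kg_per_gj,
                             efficiency, usd_per_gbp):
    heat_gj_per_kwh = 0.0036 / efficiency         # 3.6 GJ per MWh
    co2_kg = heat_gj_per_kwh * emission_kg_per_gj  # CO2 released per kWh
    tax_usd = tax_usd_per_boe * co2_kg * BOE_PER_KG_CO2
    return tax_usd / usd_per_gbp * 100             # pence/kWh

# Bounds quoted in the text for a $3/BOE tax:
low = carbon_tax_pence_per_kwh(3, 71.7, 0.45, 2.00)    # ~0.21 pence/kWh
high = carbon_tax_pence_per_kwh(3, 108.0, 0.25, 1.40)  # ~0.80 pence/kWh
print(round(low, 2), round(high, 2))
```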
The proposed EC carbon tax starts at $3 per BOE in 1993, $4 in 1994, rising by $1 each year until $10 in the year 2000. In this study,
the same amount of tax is applied to every single year for the entire economic life
of the plant. Thus the no tax scenario can be compared to the high tax scenario of
$10/BOE. The actual impact of an incremental carbon tax would lie in between the
two. Figure A.5 illustrates the impact of a carbon tax relative to plant efficiency.
[Figure A.5: carbon tax contribution in pence/kWh (0.00 to 1.20) against tax rates of $0 to $10 per BOE.]
As seen from above, the contribution of carbon tax is fairly insignificant at the $3
level. But at the $10/BOE level and assuming a low plant heat rate, it could double
A.4.7 Efficiency
The amount of carbon dioxide released when a fuel is burned depends on its carbon
content (translated into a carbon dioxide emission factor). Likewise, the amount of
useful energy converted from this parallel process depends on the plant heat rate.
Given that 3.6 GJ of heat is equivalent to 1 MWh of energy, the remaining factor in
the heat rate is simply the plant efficiency rate. Coal-fired stations in the UK have
If the plant heat rate is not given, efficiency is approximated by the contribution of
fuel to the final discounted cost of electricity generation and the raw fuel price.
Hence the heat rate, expressed in GJ/kWh, is derived by dividing the levelised fuel cost in pence/kWh by the raw fuel price in pence/GJ. This calculated efficiency
parameters such as carbon and heat content, all possibilities are considered from a
range of values.
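This back-calculation can be sketched as follows; the fuel component and fuel price used in the example are hypothetical, chosen only to illustrate the division.

```python
# Back out the plant heat rate (GJ/kWh) and efficiency from the
# levelised fuel component and the raw fuel price, as described above.
def efficiency_from_costs(fuel_pence_per_kwh, fuel_pence_per_gj):
    heat_rate = fuel_pence_per_kwh / fuel_pence_per_gj  # GJ/kWh
    return 0.0036 / heat_rate  # 3.6 GJ = 1 MWh, i.e. 0.0036 GJ = 1 kWh

# Hypothetical example: a 1.32 p/kWh fuel component at 165 p/GJ
# implies a heat rate of 0.008 GJ/kWh, i.e. 45% efficiency.
print(efficiency_from_costs(1.32, 165.0))
```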
Plant efficiency links the fuel to the calculation of the carbon tax component and
the fuel component as both require a heat to energy conversion. Plant efficiencies
derived from the two reports vary from 25 to 45.45% for coal-fired stations. Plant
efficiency is not required for nuclear, whose fuel cost is expressed in kWh terms (a crude approximation of the nuclear fuel cycle). Furthermore, carbon taxes do not apply to nuclear.
A.4.8 Load Factor
Fixed costs such as investment and O&M are divided by the actual generating
hours to arrive at the pence per kWh figure. This utilisation rate is determined by
the scheduled and unscheduled outage rates, in other words, the percentage of time
that a plant is scheduled to operate less the percentage of time it is out of service
due to planned and unplanned shutdowns for maintenance, refuelling, and other
reasons.
Plant utilisation rate is frequently expressed in terms of capacity factor, load factor,
and availability factor. The OECD report uses the term load factor as a percentage
of total hours in a year, while the UNIPEDE report uses hours in a year to reflect
utilisation. The load factor convention is chosen for this paper, and the UNIPEDE
hours are divided by 8,760 hours in a year to arrive at an annual percentage figure.
The UNIPEDE study uses an incremental utilisation rate in all scenarios, i.e. the
load factor is increased from 45% in the first year to 57% in the second year, and
finally 75% for the rest of the life time. On the other hand, the OECD study uses a
levelised load factor of 72%, which was derived from averaging the increasing load
factors after initial commissioning and the settled down load factor of 75.3%. For
simplicity, a constant load factor is used in this pilot study, with the assumption
that it represents the levelised lifetime annual utilisation rate. For coal, the
sensitivity ranges from 63% to 80%. For nuclear, it is slightly higher, from 65% to
85%.
Using the same O&M cost and the same load factor for every single year of a
plant's lifetime underestimates the cost in the initial years when O&M costs are
higher than usual and load factors are lower than usual. Also during the latter
years when mid-life refurbishment and additional maintenance costs are necessary,
O&M costs are expected to increase with the decrease in load factor.
A.4.9 Escalation Rates
It is unreasonable to expect all costs to remain the same for every single year of
plant operation. More likely, the O&M and fuel prices will fluctuate year by year.
Instead of computing yearly cashflows, the levelised method uses escalation rates.
A modest range of -1% to 3% per annum is assumed in the sensitivity analysis.
load. It is assumed that these detailed fluctuations have been averaged in this
study.
factors in the normal course of operations can be modelled with escalation rates.
A.4.10 Life
Economic or amortising life differs from technical life depending on the accounting
conventions practised in each country. For discounting purposes, the economic life
is used.
The two reports use standard lifetimes of 25 and 30 years to compute the levelised
costs. However, in the national calculations, the actual lifetimes used by each
performance of the plant and are usually longer than economic lives which are used
for accounting purposes. At the lower end are Italy (13 years) and Japan (15 and
16 years) for economic life. Since the UK falls on the higher side (45 years), the range
sector than the public sector, to reflect the degree of risk. A shorter life is
recovered more quickly, albeit at a higher cost to the consumer. The level of
business risk is captured in the choice of economic life and the choice of discount
rate.
A.4.11 Discount Rate
The UNIPEDE study used a 5% discount rate for two reference lifetimes of 25 and
30 years. OECD used a 5% and a 10% discount rate for a 30 year life. The choice
a utility. The rates also differ between the public and private sectors.
The former CEGB gave the public sector price of 3.22 pence/kWh (at 1987 prices)
for PWR in the House of Commons Energy Committee (1990) inquiry into the cost
(equivalent to discount rate) over a life of 20 years. The private sector price of
using a discount rate of 10% to reflect the degree of risk perceived by the private
sector. The two reports and this pilot study show that the levelised cost of
discount rate reflects not only the opportunity cost of capital but also the time
The discount rate used in the public sector tends to be much lower than that used
by the private sector because regulated monopolies with guaranteed rates of return
on capital can obtain low costs of borrowing. The discount rate perceived by the
private sector tends to reflect the return on capital that can be invested in various
markets, including the return to shareholders on the equity vested in the private
utility.
Each country used the same discount rate to calculate its coal and nuclear costs.
Across all countries, discount rates varied from 4 to 10%. These rates are based
upon market rates, reference rates used in previous energy plans, government
argued that higher discount rates should be applied to nuclear projects to reflect
the greater investment risk. One study (Virdis and Rieber, 1991) even proposed a
There are many ways to determine which discount rate to use. The OECD study
a) based on the real costs of investment funds over the time scale of the project,
The selection of a discount rate therefore would depend on the projected rates of
Ottinger et al (1990) list four different ways to measure future economic benefits
Aside from financial determinants of the discount rate, Ruth-Nagel and Stocks
For the moment, a modest sensitivity range of 4 to 15% is used for discounting
A.4.12 Consolidating the Range
One of the main interests of this study is to analyse the UK base case and how it
varies within the ranges given by the international context. The ranges, as
postulated previously, reflect the possible values and include the improbable as well
as the probable. It is assumed that the extreme anchors are less likely than the base
before, it is more informative to consider all possible values than a fixed percentage
It is possible that OECD and UNIPEDE reports may not capture the full range of
values for the UK. Comparisons with other published sources are required.
Furthermore, the ranges may vary for different periods in time. The ranges
captured in January 1987 reflect each country's expectation of the future at that
point in time. Many events have taken place since then, and the ranges should be
The bounds taken from the two reports are converted into sterling using the
exchange rates given at the beginning of January 1987, that is, £0.7241 = 1 ECU and £0.678 = $1.00, i.e. £1.00 = $1.475. The minimum is taken of the lower
bounds of both UNIPEDE and OECD studies. Likewise the maximum is taken of
the upper bounds. The minimum and maximum are re-adjusted so that the ranges
In addition, annual O&M and fuel escalation rates are assumed. Carbon tax rates
discussions.
Table A.1 Consolidated Range
                        Coal (min)   Coal (max)    Nuclear (min)  Nuclear (max)
load factor             63 %         80 %          65 %           85 %
investment cost         £475 /kW     £1,211 /kW    £868 /kW       £1,725 /kW
annual fixed O&M cost   £6.92 /kWa   £59.74 /kWa   £16 /kWa       £48.8 /kWa
fuel cost               £0.89 /GJ    £2.24 /GJ     £2.10 /MWh     £11.66 /MWh
O&M escalation          -1 % pa      3 % pa        -1 % pa        3 % pa
fuel escalation         -1 % pa      3 % pa        -1 % pa        3 % pa
While these ranges may appear too large for analysing the sensitivities of UK
parameters, it is more justifiable to reduce the range than to expand it later.
One of the main motivations of this study is to understand the factors that influence
the cost of electricity generation. The input parameters are assumed constant
throughout the plant life to simplify the NPV annuity method of computation. The
ranges selected from the UNIPEDE and OECD reports are applied to UK base
cases.
A.5.1 Calculation Method
The spreadsheet function PMT returns the annuity, or the equivalent constant annual value, of a lump sum spread over a given period at a given discount rate:

    annual value = PMT(discount rate %, years in period, present value of total payment)

This is applied to the investment cost. The fuel component is calculated as:

    cost due to fuel = fuel cost (£/GJ) x 3.6 (heat/energy conversion factor) / efficiency (%)

Therefore, the average discounted levelised cost = investment + O&M + fuel + tax.
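A minimal sketch of this levelised cost calculation; the pmt() below mirrors the spreadsheet PMT function but with a positive sign convention, and the example inputs (in particular the 40-year life and 40% efficiency) are illustrative placeholders, not the thesis base case.

```python
# Average discounted (levelised) cost: annuitise the investment,
# spread annual fixed costs over the kWh generated, and convert the
# fuel price from heat to electrical energy via the efficiency.
def pmt(rate, nper, pv):
    """Constant annual payment that repays pv over nper years at rate."""
    return pv * rate / (1 - (1 + rate) ** -nper)

def levelised_cost(invest_gbp_per_kw, om_gbp_per_kw_yr, fuel_gbp_per_gj,
                   efficiency, load_factor, rate, life, tax_pence=0.0):
    kwh_per_kw_yr = load_factor * 8760               # annual utilisation
    invest = pmt(rate, life, invest_gbp_per_kw) / kwh_per_kw_yr
    om = om_gbp_per_kw_yr / kwh_per_kw_yr
    fuel = fuel_gbp_per_gj * 0.0036 / efficiency     # 3.6 GJ per MWh
    return (invest + om + fuel) * 100 + tax_pence    # pence/kWh

# Illustrative coal figures (life and efficiency are assumptions):
print(round(levelised_cost(891, 30.59, 1.65, 0.40, 0.74, 0.08, 40), 2))
```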
When annual escalation rates for fuel and O&M are introduced, a leveling factor
must be multiplied to the existing formula to discount the compounded rates back
The general notation for this levelling factor is

    [ r(1 + r)^T / ((1 + r)^T - 1) ] x [ k(1 - k^T) / (1 - k) ]

where r = discount rate in %, T = life in years, and k = (1 + e)/(1 + r), with e = annual escalation rate in %. This is the closed form of the same expression in sigma notation:

    [ sum over t = 1..T of (1 + e)^t / (1 + r)^t ] / [ sum over t = 1..T of 1 / (1 + r)^t ]
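The closed form and the sigma form of the levelling factor should agree; below is a quick numerical check with sample values of r, e, and T (the specific values are arbitrary).

```python
# Levelling factor for an escalating cost stream: closed form versus
# the explicit ratio of discounted sums it abbreviates.
def levelling_factor_closed(r, e, T):
    k = (1 + e) / (1 + r)
    return (r * (1 + r) ** T / ((1 + r) ** T - 1)) * (k * (1 - k ** T) / (1 - k))

def levelling_factor_sum(r, e, T):
    num = sum(((1 + e) / (1 + r)) ** t for t in range(1, T + 1))
    den = sum(1 / (1 + r) ** t for t in range(1, T + 1))
    return num / den

print(levelling_factor_closed(0.08, 0.03, 25))
```

With zero escalation (e = 0) the factor reduces to 1, i.e. the levelised cost is unchanged.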
This calculation method is derived from the levelised bus-bar method explained in
IAEA (1984). It is broadly consistent with the methods used in the UNIPEDE and OECD studies.
The average discounted cost offers several advantages in comparing future power
plants. The ratio of discounted total generation cost over the plant's entire lifetime
to the discounted sum of electricity generated over the same period is independent
of the date of discounting and the current or future inflation rate. All figures are
A.5.2 UK Parameters
Figures for UK Coal and Nuclear were submitted to both UNIPEDE and OECD
reports. In the UNIPEDE study, data was provided for two units of 840 MW coal
plant, cooled by sea water and equipped with sulphur and nitrogen oxide removal.
The nuclear power plant is a 1,155 MW Pressurised Water Reactor (PWR) which
includes one reactor of total capacity and two turbo generators of half capacity.
Given the date of submission, it is probably the data for Sizewell B. Meanwhile,
data for Hinkley C PWR was provided for the OECD study, and the figures for this
1,175 MW reactor are consistent with those submitted in October 1988 to the
public inquiry.
Converting the units from ECU and US$ into sterling equivalents at the beginning of
January 1987 yields the following input values for the UK.
                        Coal          Coal          Nuclear       Nuclear
                        (UNIPEDE)     (OECD)        (UNIPEDE)     (OECD)
discount rate           8 %           8 %           8 %           8 %
load factor             74 %          75 %          72 %          75 %
investment cost         £891 /kW      £892 /kW      £1,578 /kW    £1,543 /kW
annual fixed O&M cost   £30.59 /kWa   £23.73 /kWa   £26.07 /kWa   £22.10 /kWa
fuel cost               £1.65 /GJ     £0.89 /GJ     £5.36 /MWh    £4.47 /MWh
O&M escalation          0 % pa        0 % pa        0 % pa        0 % pa
fuel escalation         0 % pa        0 % pa        0 % pa        0 % pa
These costs and assumptions were made in 1987 and 1988, after the Sizewell B
inquiry, during the Hinkley C inquiries, before privatisation, and before the decision
to retain nuclear in the public sector. These figures should be adjusted in light of
privatisation in 1990, the demise of new nuclear plant until the 1994 nuclear review
and the current dash for gas phenomenon. Instead of using 5% discount rate
purposes of modelling insight, only one set of values is necessary, thus UNIPEDE
is retained while OECD values are dropped for the rest of the study. The base
costs for the UK figures given above are summarised in table A.3.
Table A.3 Base Costs for the UK
[Stacked bar chart in 0.1 pence/kWh: investment, fuel, O&M, and carbon tax components of the base levelised cost for coal and nuclear.]
Although initially coal is more expensive than nuclear, the choice of discount rates
can change the relative attractiveness of coal and nuclear. Figure A.7 depicts the
effects of varying the discount rate. The slope of nuclear plants is steeper than that
of coal plants because the investment costs are considerably greater and the relative magnitudes of investment and fuel costs are reversed for the two plants. In the base
case with $3 carbon tax applied to coal, the cross-over or breakeven discount rate
occurs at approximately 13%. Only then does nuclear become more expensive
than coal.
[Figure A.7: trade-off curves of levelised cost in pence/kWh against discount rates of 4 to 24% for coal and nuclear.]
If this carbon tax is increased to $10/BOE, then coal will definitely be more
expensive than nuclear. As seen in figure A.8, even at the unlikely discount rate of
20%, coal is still more expensive than nuclear.
Figure A.8 UK Coal vs Nuclear Trade-off Curves with $10 Carbon Tax
[Line chart: levelised cost in pence/kWh against discount rates of 5 to 20% for coal and nuclear.]
Discount rates are particularly significant in capital intensive projects, like coal and
investment costs.
By applying the derived ranges from the two reports to the calculations of UK base
values, it is possible to find the impacts of different factors. The tornado diagram
of figure A.9 shows the importance and impact of various costs in descending
order.
Figure A.9 Coal
[Tornado diagram, pence/kWh: bars include carbon tax ($0 to $10 per BOE), life (13 to 50 years), and load factor (63 to 80%).]
Each bar denotes the range of costs computed by varying its corresponding factor
without changing the other parameters. In the coal example, lowering the discount
rate from the base case of 8% to 4% lowers the levelised cost from 3.56 to 3.11
pence/kWh. Similarly, increasing the discount rate to 15% while keeping all other
As discovered earlier, the effect of the discount rate is accentuated by higher capital costs. The nuclear case in figure A.10 shows the overwhelming importance of
discount rate as opposed to investment, life, and other factors. Costs are
comparison has minimal effect. Efficiency rates and carbon taxes do not apply in
Figure A.10 Nuclear
[Tornado diagram, pence/kWh: bars include life (13 to 45 years), load factor (65 to 85%), efficiency, and carbon tax.]
In reality, discount rates and lifetimes do not fluctuate. These factors are decisions, and so should not be treated on the same basis as other factors, which are highly affected by external circumstances.
importance of factors. Ranges give more information than point estimates. How
likely the parameter will take on a value in the range requires the additional
A sensitivity analysis tells us how much the output varies with variations in the input parameters. A one-way analysis shows the effect of varying one parameter at a time. A two-way analysis shows the
result of varying two parameters at a time. Useful insights can be drawn provided
the variables are independent of each other. The computation slows down with increasing dimensionality.
A.6 Risk Analysis
The external variables can be represented by probability distributions, where some values are more likely than others. The introduction of probability into this analysis describes the ranges in a more informative way. In the simplest case, every variable has an equal chance of
taking any value in a range. Since the previous sensitivity analysis was based on a base value within a range, a natural choice would be the triangular distribution, in which the base value is most likely and has a greater chance of occurring than any other value, with the lower and upper bounds as the extreme anchors.
A.6.1 Methodology
First of all, factors are distinguished between decision variables and uncontrollable
external events. Decision variables are treated in the same manner as in the sensitivity
that is, changing one value at a time, whereas external variables are approximated
using probability distributions and varied simultaneously. For this study the
discount rate is the only decision variable, the others are external variables. Life is
Simulation is preferred to the analytical approach in circumstances where the input distributions are not
symmetric or standard and where computing facilities are available. The Latin
Hypercube Sampling (LHS) method was chosen as it performs better than the
Monte Carlo method, which tends to take much longer to approximate a given
distribution. LHS divides the distribution into equal intervals of the number of
iterations selected. Sampling is then taken randomly within each interval, without replacement, which avoids the clustering problems found in the Monte Carlo method. LHS converges
The number of iterations required for accurate sampling depends on the number of uncertain inputs and their distributions. Beyond a certain number, output distributions cannot get any smoother. This can also be validated
gave jagged risk profiles. As a result, iterations were increased to 600 to reach
smoothness.
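A minimal sketch of Latin Hypercube Sampling over a triangular distribution, following the stratify-then-invert construction described above; the min, mode, and max are taken from the nuclear O&M row of Table A.4, and the inverse-CDF helper is a standard formula, not code from the thesis.

```python
import random

# Latin Hypercube sample of a triangular distribution: stratify [0,1]
# into n equal-probability intervals, draw one point per interval,
# shuffle the draws, then invert the triangular CDF.
def triangular_inv(u, a, c, b):
    """Inverse CDF of triangular(min=a, mode=c, max=b)."""
    f = (c - a) / (b - a)
    if u < f:
        return a + ((b - a) * (c - a) * u) ** 0.5
    return b - ((b - a) * (b - c) * (1 - u)) ** 0.5

def lhs_triangular(n, a, c, b, rng=random):
    us = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
    rng.shuffle(us)
    return [triangular_inv(u, a, c, b) for u in us]

# 600 iterations, as used in the thesis; parameters from Table A.4.
sample = lhs_triangular(600, 16.0, 26.8, 48.82)
print(sum(sample) / len(sample))  # close to the mean (a + c + b)/3 = 30.54
```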
The previous sensitivity analysis already established the ranges and the ranking of
important factors. Now it is necessary to use a coherent set of base values for the
different plant types. Rather than using base values from both reports, we used
values from UNIPEDE, which correspond closely to the values selected in a report
by Bunn and Vlahos (1989). Values for combined cycle gas-turbine plants have
Unlike the previous analysis, the range is considered for the construction cost
rather than the investment. This establishes the dependence of the interest during
construction (IDC) upon the discount rate and the construction cost. Here, the
discount rate is represented by the interest rate. The IDC is made a function of the
base IDC, old interest rate and the new interest rate, as follows:
    new interest during construction = construction cost x [ (1 + new interest rate)^n - 1 ]

where the exponent n recovers the effective compounding of the base schedule:

    n = log(1 + old interest during construction / old construction cost) / log(1 + old interest rate)
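A sketch of this re-derivation of the IDC at a new interest rate; the construction cost, base IDC, and the two rates below are illustrative placeholders, not figures from the study.

```python
import math

# Re-derive the interest during construction (IDC) at a new interest
# rate from a base IDC. The implied exponent n recovers the effective
# compounding periods of the base construction schedule.
def new_idc(construction_cost, old_idc, old_rate, new_rate):
    n = math.log(1 + old_idc / construction_cost) / math.log(1 + old_rate)
    return construction_cost * ((1 + new_rate) ** n - 1)

# Illustrative figures: £800/kW construction cost carrying £130/kW of
# IDC at a 5% rate, re-evaluated at 8%.
print(round(new_idc(800.0, 130.0, 0.05, 0.08), 2))
```

By construction, re-evaluating at the original rate returns the original IDC, and a higher rate yields a larger IDC.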
The external variables of load factor, construction cost, O&M, and fuel cost are
values are analysed for different discount rates and economic lives. For coal and
gas, carbon tax is also varied between the no tax case, the $3 minimum tax, and the
maximum $10 tax. Again, the incremental effect is not captured in this calculation
method.
A.6.3 Nuclear
was adequate. Privatisation introduced higher business risk and expected return on
Commons inquiry (Energy Committee, 1990) into the cost of nuclear power. Two
discount rates at 40 year lives are selected: 8% and 10%. The external variables
follow triangular distributions around the base value. Simulation was performed
Table A.4 Simulation Parameters for Nuclear
annual fixed O&M cost | base 26.8 /kWa | triang: 16, 48.82 | 30.54 /kWa
Note that provision for decommissioning is extracted from investment rather than
value 10.14 and maximum 20 should ideally be computed against a lower discount
rate than the rest of the project, as is the current practice. The wide range
reflects the uncertainty. The result after 600 iterations is depicted in the chart
below.
Figure A.11 Risk Profiles for Nuclear
[Chart: probability against levelised cost (0.1 pence/kWh, range 22 to 57) for
nuclear at 8% and 10% discount rates.]
The vertical axis gives the probability or frequency, while the horizontal axis gives
the output range expressed in 0.1 pence/kWh. The risk profile at the 10% discount
rate partially overlaps the 8% case. Reading from the chart, we can say with 100%
probability that nuclear will not cost more than 4.5 pence/kWh if calculated at the
8% discount rate. Compare this with cost estimates given at the Cost of Nuclear
inquiry by Energy Committee (1990): estimates for Hinkley Point C varied from
4.31 pence/kWh to 7.12 pence/kWh at 1987 prices. These differences were due to
uncertainty and adjustment for inflation. In fact, alternative estimates (other than
CEGB) for private sector prices ranged from 4.91 pence/kWh at 8% discount rate
to 5.62 pence/kWh at 10% discount rate. Our simulation is not that far off.
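Probability statements of this kind are read directly off the simulated sample; a minimal sketch of that calculation follows (the sample values are invented for illustration, standing in for 600 simulation iterations).

```python
def prob_within(samples, threshold):
    """Empirical probability that the simulated levelised cost does not
    exceed the threshold, i.e. the cumulative risk profile at that point."""
    return sum(1 for s in samples if s <= threshold) / len(samples)

# invented costs in pence/kWh in place of the real simulation output
costs = [3.1, 3.6, 3.8, 4.0, 4.2, 4.4]
```

Here `prob_within(costs, 4.5)` returning 1.0 mirrors the reading "with 100% probability nuclear will not cost more than 4.5 pence/kWh".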
A.6.4 Coal
The uncertainty in carbon tax is reflected discretely: $3 tax, $10 tax, or no tax.
According to current debate, the tax is expressed in US dollars, a currency that
has been highly susceptible to exchange rate fluctuations. Therefore an external
variable for the exchange rate is built into the model for coal and gas plants.
annual fixed O&M cost | base 33 /kWa | triang: 6.92, 59.74 | 33.22 /kWa
The risk profiles for coal with $3 carbon tax are shown in figure A.12. At the
higher discount rate of 10%, however, more uncertainty is seen in the larger output
range. At 8% discount rate, the most likely cost is 3.7 pence/kWh (peak of risk
profile). At 10%, the most likely cost lies between 4 and 4.5 pence/kWh.
Figure A.12 Risk Profiles for Coal
[Chart: probability against levelised cost (0.1 pence/kWh, range 22 to 57) for
coal at 8% and 10% discount rates, 45 year lives.]
A.6.5 Gas
In the last five years, the UK has seen a build-up of natural gas fired plant
(CCGT), which has the advantages of high efficiency (typically 45 to 55%), lower
carbon dioxide emissions, shorter construction lead times, and modularity of unit
size. For these reasons, it is included for completeness. The following base values
are taken from Bunn and Vlahos (1989) and the ranges subsequently adjusted to
UNIPEDE values.
Table A.6 Simulation Values for Gas
The greatest uncertainty lies in the fuel price, as natural gas is a premium fuel.
With the build-up of gas turbines in this country, there is speculation that the fuel
price may rise with increasing demand. Up to 60% overcapacity is expected in the
next decade according to the Financial Times (21 Sept 1992). Investment costs are
generally very low as the construction period is relatively short compared to coal
and nuclear, thus keeping the interest during construction very low. This is the
main reason why the costs are not as sensitive to discount rate as the other two
types of plants. Risk profiles in figure A.13 show little difference between the 8%
and 10% cases.
Figure A.13 Risk Profiles for Gas
[Chart: probability against levelised cost (0.1 pence/kWh, range 22 to 57) for
gas at 8% and 10% discount rates, 40 year lives.]
The risk profiles for all three types of plants are combined for a ranking of plant
types.
In the base case without carbon tax, gas is the cheapest option, with nuclear and
coal in competition. The overlap of risk profiles in figure A.14 shows a small
chance that gas may be more expensive than coal and nuclear.
Figure A.14 Trade-off Curves for Coal, Nuclear, and Gas (no tax)
[Chart: "Cheapest Case" risk profiles; probability against levelised cost
(0.1 pence/kWh, range 22 to 57).]
A carbon tax levy such as that proposed by the EC would invariably favour the less
polluting plants. However, the high capital cost of tax-free nuclear makes it more
costly than gas with tax. The most likely case is presented in figure A.15. Here
coal with carbon tax becomes more expensive than nuclear power.
[Figure A.15: risk profiles for the most likely case, including coal at 8%,
45 years with $3 tax; probability against 0.1 pence/kWh, range 22 to 57.]
In the extreme, i.e. most expensive case, we apply $10 carbon tax on gas and coal
and assume the risks of nuclear power translate into a 10% discount rate. The
results in figure A.16 show that the cost of nuclear is much more uncertain than
coal as it spreads over a larger range: 3.0 to 5.7 pence/kWh (nuclear) as compared
to 3.5 to 5.7 pence/kWh (coal). Coal is still more expensive than nuclear and gas.
[Figure A.16: risk profiles for the most expensive case, including nuclear at
10% discount rate, 40 years; probability against 0.1 pence/kWh, range 22 to 57.]
As stated earlier, the incremental nature of the proposed carbon tax is not modelled
in this study. The true effect of such a tax lies somewhere between the $3 and $10
case where a fixed amount is levied for every single year of the project. When
applied to coal, the risk profiles show the significance of a $10 tax, as seen in the
following chart.
Figure A.17 Carbon Tax on Coal
[Chart: probability against levelised cost (0.1 pence/kWh, range 22 to 57) for
coal under no tax, $3 tax, and $10 tax.]
The effect of a $10 tax on gas is not as great as that on coal. This is due to the
considerably lower emission factor as well as the higher plant efficiency. See the
chart below.
[Chart: probability against levelised cost (0.1 pence/kWh, range 22 to 57) for
gas under no tax and $3 tax.]
A.7 Summary and Conclusions
This study reveals the factors that influence the cost of electricity generation
by focusing on the major components of cost and isolating the important drivers.
Base values for UK coal and nuclear are extracted from OECD and UNIPEDE
reports; UNIPEDE is preferred over OECD because it uses constant values and
minimal escalation rates.
Nuclear, coal, and gas plants are compared. A ranking of technologies shows that
gas (CCGT) is cheapest of all three, even with a carbon tax levy.
At low discount rates, fuel cost has a greater impact than investment costs. At
high discount rates, the reverse is true. In practice, a firm faces the greatest
relative likelihood.
methodological issues. To compare with the current scene, the 1987 values used in
this study must be updated. We have used risk analysis with the bold assumption
that the input factors are independent. This assumption not only disregards the
dependence between the factors but also takes a single-staged view of the problem
rather than the multi-staged nature of capacity planning.
To achieve a more realistic and complete representation of power plant economics,
this study can be extended in three directions, as shown in figure A.19. Greater
detail not only improves completeness of modelling but also allows a closer
examination of the cost drivers. The nature of the values can be extended from the
constant to the varying: most parameters exhibit yearly fluctuations, while others
vary even more frequently during the operating life of a plant.
[Figure A.19: three directions of extension - NUMBER of PARAMETERS;
NATURE of VALUES (constant, escalation factors, varying according to profile);
DEGREE of UNCERTAINTY (probabilistic).]
Disaggregation means breaking down large components into smaller ones for
closer analysis. To understand investment cost, we must look at its components of
construction cost, provision for decommissioning, and other capital costs.
Similarly, operations and maintenance should be evaluated against its fixed and
variable components, unlike the aggregate treatment here. The discount rate
deserves particular attention as it reflects the cost of capital. The interest rates
used in the calculation
depend on the interpretation and the subsequent risks involved. If business risk is
interest rate also depends on the size and economic life of the project. The
construction period affects the size of the IDC, particularly in the form of
analysed by varying the interest applied to the investment schedule prior to the
commissioning date.
Actual utilisation of a power plant depends on its planned and unplanned down
time and the merit order. Strictly speaking, utilisation should come from the
combined effects of availability and load factor. In this case, load factor is given by
the merit order of operations, typically determined by the fixed and variable costs.
Ignoring emission constraints, plants with high fixed costs and low variable costs
are loaded before those with low fixed cost but high running costs. Alternatively,
we can introduce demand by way of load distribution curves, which are then
converted into load duration curves. Other financial considerations include tax
shields on depreciation, and inflation accounting. Although this study has not
modelled them, one could vary corporate tax and inflation rates to see the effects
on cashflow planning.
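The merit-order rule described above amounts to a sort on running (variable) cost; a tiny sketch makes this concrete. The cost figures are purely illustrative, not values from the study.

```python
# (name, fixed cost, variable/running cost) - illustrative numbers only
plants = [
    ("OCGT", 1.0, 3.5),
    ("coal", 5.0, 1.8),
    ("nuclear", 9.0, 0.6),
    ("CCGT", 3.0, 1.4),
]

# plants cheap to run are loaded first, regardless of their fixed costs
merit_order = sorted(plants, key=lambda p: p[2])
```

High fixed cost, low variable cost plant (nuclear here) comes first; low fixed cost, high running cost plant (OCGT) is loaded last, exactly the ordering the text describes.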
From a technical perspective, this study has restricted the power plants to typical
generic types. Different grades of coal have different levels of carbon, sulphur
and heat content and, in turn, burn at different efficiencies and release
different quantities of CO2, SOx, and other gases. Consequently, the resultant
carbon taxes will vary. Plant efficiencies are also related to operational efficiencies
The uncertainties at the back end of the nuclear fuel cycle and the last stage of
decommissioning are much cause for concern. This decade will witness the
decommissioning of the older nuclear reactors in the UK. These back-end costs
form a great proportion of total costs, which carry future risks and responsibilities.
With the exception of fixed annual escalation rates, all values have been kept
constant, a limiting assumption as it does not allow fluctuations in load factor and
fuel prices. To account for these fluctuations, the levelised cost approach must be
expanded to handle yearly cashflow calculations so that, at the very least, yearly
fluctuations can be captured, e.g. in load factor and availability. Some parameters
vary constantly, e.g. spot fuel prices. Where dependences exist between factors
over their entire ranges, these effects can be modelled by fitting suitable equations.
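The yearly-cashflow extension mentioned above reduces to discounting costs and output separately; a minimal sketch, with function and variable names of my own choosing:

```python
def levelised_cost(yearly_costs, yearly_output, rate):
    """Levelised cost = present value of yearly costs divided by
    present value of yearly output, so year-by-year fluctuations in
    load factor or fuel price enter the calculation naturally."""
    pv_cost = sum(c / (1 + rate) ** t for t, c in enumerate(yearly_costs, 1))
    pv_out = sum(q / (1 + rate) ** t for t, q in enumerate(yearly_output, 1))
    return pv_cost / pv_out
```

With constant yearly costs and output the discount factors cancel, recovering the simple constant-value result; letting the yearly entries fluctuate is exactly the extension proposed here.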
This pilot study has established the feasibility of model replication of sensitivity
and risk analyses with available desk-top computing tools. The incremental manner
in which details are added and complexity increased maintains the modelling at a
comprehensible level. By extracting values from different international sources, we
are able to get a range of possible values for each factor, which not only improves
upon the traditional point estimates but also gives us insight into causality and
broader perspectives.
However, we had to take a view on which base values to use and which extreme
values to include. These international reports are also seven years out of date.
Next, we used sensitivity analysis to rank the factors, thus allowing us to focus on
the important and highly sensitive variables, such as discount rates and capital cost.
Tornado diagrams are helpful aids for this analysis even though the ranking
depends on the base values and the extreme values. During this process, there
emerged a need for guidelines on the number of factors sufficient for an analysis.
Finally, we applied risk analysis to get the extra dimension of likelihood. The
two-dimensional risk profiles allow comparison between the different types of
plants, although at this stage only qualitatively. With triangular distributions, the
base and extreme values can equally define the finite bounds. For factors without
extreme bounds, it is unclear whether we should set a fixed percentage around the
base values or a variable percentage. Our experimentation determined that 600
iterations on one random seed were sufficient to get smooth
output profiles. We need to validate this by testing with other random seeds and
more iterations. Finally, we have approached this case study from a neutral
position, whereas the actual case study would undoubtedly be assessed very
differently by the interested parties. Outstanding issues fall along the lines of
financial, technological, and modelling concerns. The financial aspects relate to
isolating the discount rates used in calculating the interest during construction,
provision for decommissioning, and the other costs. In other words, the cost of
capital requires a much closer examination into what it represents. In the private
sector, corporation taxes and inflation impact cashflow management, which cannot
be divorced from the treatment and use of the discount rate. The technology
issues relate to the plant-specific characteristics discussed above.
The three directions of increasing the number of parameters, varying the values,
and increasing the uncertainties represented in the problem are a mere framework
within which to use other techniques to facilitate a greater level of detail and
modelling capability. In this respect, model synthesis may provide the answer to
greater completeness without sacrificing comprehensibility.
APPENDIX B
decomposition, sensitivity analysis, risk analysis, and decision analysis. The main
objectives of this stage of the two staged modelling experiment are 1) to determine
the limitations of each approach, 2) to assess the potential for synthesis, and 3) to
The next section describes the data used in the capacity planning optimisation
approaches. After this, the replication and evaluation of the three approaches are
documented.
Accurate details of all power plants in the UK, especially the status of new plants,
are highly confidential and proprietary. As a result, the task of data consolidation
relied on publicly available information. Before presenting the consolidated data
of all plants in the NGC system, we discuss some problems with obtaining scarce
data and reconciling conflicting figures.
Seven Year Statement by the National Grid Company (NGC). This document is
released in April each year and updated in July, October, and December. It
contains the status of every plant by ownership and technology type in the NGC-
operated transmission system as well as new plants that will be connected in the
future. However, it does not give details of actual load factors, capital costs, fuel
costs, thermal efficiencies, and emission factors. By the time it is released, some
details are already out of date, so it is supplemented by other sources, which are
listed in table B.1. In case of conflict, the more reliable and recent
publication is used.
Table B.1 Sources of Information
0 Offer report
10 White Paper: The Prospects for Coal Conclusions of the Government's Coal Review,
March 1993
12 Power in Europe 23 Apr 1993, Issue No. 147 (ILEX UK Power Station Monitor)
Reported capacities vary widely, according to individual approximations of either
registered or declared net capacities. Actual registered and declared net capacities
There is some confusion over a unique name for a plant, which is usually the name
of its location or owner. For new projects, sometimes no plant name is given, only
the owner's name, but owners keep changing as different joint ventures or
consortiums are formed. This is especially true of new plants which go through
various name changes in the early stages of the project. For example, Greystones
and Wilton, Teesside never appear together in the same source. So it can be
assumed that they refer to the same plant, that of the largest CCGT. Obvious
duplications have been eliminated where they correspond to different units of the
same plant.
The life of a plant depends on a number of factors. Owners need only to give six
months notice to the NGC for closure of plant, but permission to extend the life of
a plant may enter into a time-consuming public inquiry. Because plant closure
implies job losses, such announcements are not made in company newsletters.
A new plant, meanwhile, goes through a lengthy approval process, as indicated by
its status in table B.2. A company may sign a System
Connection Agreement with the NGC before Section 36 Consent is given by the
government. Transmission contracted plant (T) does not mean that it will go
ahead; the same applies to Section 36 consent given (S) and under construction (U).
In many cases, the announcement of new plant is merely a strategic move,
signalling the intention to build additional capacity. The major generators have
employed this market signalling strategy to deter new entrants. Information on
new plants in their early planning stages is scarce.
A has applied for S36 planning permission, government consent under consideration
Ap has applied for S36 planning permission, but pending results of public inquiry
I has import facilities, e.g. to import coal, according to Kleinwort Benson Securities
(1990) The Electricity Handbook
P postponed or deferred
U under construction
Z notified zero registered capacities for next 7 years (to 2000). The registered capacity
shown here is the remaining capacity of the plant in the system.
The decommissioning years for all nuclear plant were taken from National Audit
Office reports. Plant data consolidated from various publications are presented in
tables B.3 and B.4.
Table B.3 Existing Plant as at July 1993
SOURCE | STATUS | NAME | PLANT TYPE | CAPITAL COST (mio) | DNC [11] | REGISTERED CAPACITY (MW) | OWNER* | COMMISSIONED | DECOMMISSIONING | Bid Price /MWh
2.3, 2.4 | E I D | Thorpe Marsh | medium coal | 1,098 | 1,050 | NP | 1963 | 1994 | 13.48
2.3 | E I | Tilbury B | medium coal | 1,412 | 1,360 | NP | 1968 | | 15.26
2.3 | E | Willington B | medium coal | 376 | 376 | NP | 1962 | | 12.76
3, 3.2 but 2.3 incl | E D RI Z | Eggborough GT | OCGT, aux | 68 | NP | 1968 | 1993 |
6(9.4.93), 2.3 | E | Aberthaw B GT | OCGT, aux | 51 | NP | 1967 | | 93.81
0, 2.3 | E | Didcot GT | OCGT, aux | 100 | NP | 1972 | | 100.99
0, 2.3 | E Z | Drax GT | OCGT, aux | 140 | NP | 1974 | | 93.16
0, 2.3 | E | Fawley GT | OCGT, aux | 68 | NP | 1969 | | 94.30
0, 2.3 | E | Ironbridge GT | OCGT, aux | 34 | NP | 1967 | | 99.70
0, 2.3 | E | Littlebrook GT | OCGT, aux | 105 | NP | 1982 | | 93.01
0, 2.3 | E | Pembroke GT | OCGT, aux | 75 | NP | 1970 | | 95.76
0, 2.3 | E | Rugeley B GT | OCGT, aux | 50 | NP | 1969 | | 84.54
0, 2.3, 2.4 | E I D | Thorpe Marsh GT | OCGT, aux | 56 | NP | 1966 | 1994 | 79.78
0, 2.3 | E | Tilbury GT | OCGT, aux | 68 | NP | 1965 | | 130.82
0, 2.3 | E I | West Burton GT | OCGT, aux | 80 | NP | 1967 | | 94.14
0, 11, 2.3 | E | Cowes GT | OCGT, main | 140 | 140 | NP | 1982 | | 93.03
0, 2.3 | E* | Letchworth GT | OCGT, main | 140 | 140 | NP | 1979 | | 93.16
0, 2.3 | E* | Norwich GT | OCGT, main | 110 | 110 | NP | 1966 | | 94.46
0, 2.3 | E* | Ocker Hill GT | OCGT, main | 280 | 280 | NP | 1979 | | 97.37
2.3 | E R Z | Fawley | oil | 1,034 | 968 | NP | 1969 | | 40.76
6(9.4.93), 2.3 | E | Littlebrook D | oil | 2,160 | 2,055 | NP | 1982 | | 25.66
2.3 | E R Z | Pembroke | oil | 1,530 | 1,461 | NP | 1970 | | 25.64
0, 2.2, 3.2, 2.3 | E RI Z | Aberthaw A | small coal | 376 | 192 | NP | 1960 | 1993 | 18.07
2.3 | E | Blyth A | small coal | 448 | 456 | NP | 1958 | | 14.25
0, 2.2, 3, 9, 2.3 | E* Z | Rugeley A | small coal | 560 | 228 | NP | 1961 | 1993 | 17.67
0, 2.2, 3, 9, 2.3 | E R Z | Skelton Grange | small coal | 448 | 228 | NP | 1961 | 1993 | 16.41
2.3 | E* | Staythorpe B | small coal | 336 | 354 | NP | 1960 | | 15.59
2.2, 2.3 | E R Z | Uskmouth | small coal | 336 | 228 | NP | 1961 | 1993 | 18.03
0, 2.2, 3, 9, 2.3 | E R D Z | Willington A | small coal | 392 | 98 | NP | 1957 | 1993 | 16.84
TOTAL ABOVE | | | | 25,146 | 24,968
3.4, 6(20.4.93), 8, 1.2, 2.3 | E | Killingholme PG1 | CCGT | 300 | 900 | PG | 1992 | | 14.10
4.1, 11 | | Rheidol | Hydro | 49 | | PG | 1966 |
0, 2.3 | E* | Taylor's Lane GT | OCGT, main | 140 | 132 | PG | 1979 | | 152.00
0, 2.3 | E | Watford GT | OCGT, main | 70 | 70 | PG | 1979 | | 110.79
6(9.4.93), 7, 2.3 | E | Grain | oil | 2,068 | 2,700 | PG | 1979 | | 39.52
2.3 | E | Ince B | oil | 1,010 | 960 | PG | 1982 | | 34.84
TOTAL ABOVE | | | | | 2,822
Table B.4 Summary of All Plant in England and Wales NGC System as at July 1993
The Central Electricity Generating Board used scenario and sensitivity analyses to
assess the impacts of major uncertainties on its plans for capacity expansion.
Their scenario analysis rested on major scenario drivers and painted interesting
pictures of the future, typically reflecting most likely and extreme cases. A detailed
electricity planning model is then run for each scenario to determine the optimal
mix of capacity over the planning horizon. This optimisation programme is data-
intensive and non-transparent but enables the assessment of both marginal plant
economics and total costs. The main advantage of such an optimisation is that
different constraints can be included. While scenario analysis covers a range of
futures, it does not take into account fluctuations within or deviations from each
scenario. The deterministic approach is easy to follow, thus lending itself to
credibility and
immediate acceptance.
each with different capital and operating costs, to different periods with the further
marginal cost analysis in the manner of the first pilot study (Appendix A), the
amount (unit capacity size and total number of plant) to install in terms of total to
production costing which contribute to merit order will be lost, and seasonal plant
availabilities and load duration curves are absent. The timing decision is a function
of the lead time or construction period, expected demand, existing capacity, and
type of plant. Decisions are made at one point in time for all future periods. This
kind of deterministic optimisation treats the different periods equally and does not
consider contingencies.
We used spreadsheets to generate the scenarios and then spreadsheet macros to
translate them into input files for ECAP, a proprietary PC-based application which
runs in the Windows operating system and uses Benders decomposition. ECAP
has been validated against a traditional optimisation programme (Vlahos and
Bunn) and found to be about 100 times faster. It makes use of iterations to
close the gap between lower and upper bounds to a user-defined tolerance level.
The smaller the tolerance level, the closer to optimal the solution, and the longer
the run takes.
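The role of the tolerance can be illustrated with a stylised convergence loop. This is not ECAP's algorithm, only a stand-in in which each iteration shrinks the bound gap by a fixed factor.

```python
def iterations_to_converge(initial_gap, upper_bound, tolerance, shrink=0.5):
    """Count iterations until the relative gap between the lower and
    upper bounds falls within the user-defined tolerance."""
    gap, n = initial_gap, 0
    while gap / upper_bound > tolerance:
        gap *= shrink  # each pass closes part of the remaining gap
        n += 1
    return n
```

Halving the gap each pass, a 0.05 tolerance is met after 5 iterations while 0.005 needs 8, illustrating why tighter tolerances lengthen the runs.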
Permutations of demand growth over time, seasonal load duration curves, and fuel
price escalation rates generate the scenarios. More peaks in demand translate to a
steeper load duration curve for that particular season. The base case contained 8
load duration curves (LDC) to correspond to each of the seasons. On top of the
original LDC, we constructed the case for more base load and more peak load,
hence ending up with two additional LDCs for each season, totalling 24 LDCs
altogether. The LDC for season 1 is shown in figure B.1.
Figure B.1 Load Duration Curves for Demand Uncertainty
[Chart: LDC for season 1; load (% of peak, 0 to 100) against normalised duration
(0 to 1), with three curves: original, more base load, more peak load.]
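The study adjusted the LDCs by visual inspection; as a purely illustrative alternative, the two variants can be generated by transforming the original curve (the functions, factors, and curve values below are mine, not the study's).

```python
# hypothetical season-1 LDC: load as % of peak at increasing duration fractions
original = [100, 80, 65, 55, 48, 42]

def more_base(ldc, weight=0.5):
    """Flatten toward the curve's own mean: steadier, more base-load demand."""
    mean = sum(ldc) / len(ldc)
    return [weight * x + (1 - weight) * mean for x in ldc]

def more_peak(ldc, factor=1.3):
    """Stretch deviations above the minimum: a steeper, peakier curve."""
    lo = min(ldc)
    return [lo + factor * (x - lo) for x in ldc]
```

The flattened variant narrows the spread between peak and base load, while the steepened one widens it, matching the "trimming down" and "lifting higher" adjustments described in the text.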
Figure B.2 shows how these three factors are used to generate the scenarios.
These scenarios reflect status quo and extreme conditions. Within each scenario,
the optimisation programme is run to get the optimal capacity plan. Of the 36
outcome plans, those that generated the highest and lowest optimal expansion
costs are then analysed in depth. Next, inputs are toggled to create intermediate
plans. Robustness was also tested by comparing the selected scenarios under the
status quo 20% reserve margin conditions to 40% at low and high rates of demand
growth. As a short cut, the parameters varied in the sensitivity analysis were the
same ones used in scenario generation.
Figure B.2 Scenario Generation
[Diagram: scenarios are generated from three factors. Period demand growth
(PRD): status quo (continue as before, 1% annual growth), low (conservation,
consumer consciousness, energy efficient appliances, fuel switching, VAT on fuels),
and high (population explosion, shift to more energy intensive industries, economic
growth, 1.5% annual growth). Fuel price escalation (ESC): SET0 status quo
(HFO 3%, DIST 3%, AGR 1%, Gas 1%, Coal and nuclear 0%), SET1 no growth
(0% for all fuels), SET2 high gas (as status quo but Gas 4%), SET3 high coal
(as status quo but Coal 4%). Levels of peak demand expected in each period of
the planning horizon assume a 20% planning margin.]
The 65 existing plants in the system total 55,616 MW (or 55.6 GW) of capacity.
The breakdown by type of plant and ownership is given in table B.3. The base case
or status quo scenario takes the updated list of existing plants and subjects them to
the optimisation. Renewable technology, such as wind and tidal power, is assumed
to grow at a predetermined rate.
Availabilities, generation costs, basic seasonal load duration curves, and other
factors are assumed not to have deviated from the last CEGB run in 1990. New
plant options of the same technology have the same characteristics as existing
plants. For example, a new nuclear plant, whether AGR or PWR, has a technical
and economic life of 40 years, interest during construction1 of 4.2 years, and 1175 MW
capacity per unit. The six types of new plant options are nuclear, coal, CCGT for
baseload, CCGT for peakload, gas turbine (open cycle), and renewables. A further
renewables. The capacity cost per plant per season is the same for plants of the
same technology. Generation cost varies marginally for each season. Operations
and maintenance (O&M) costs represent an annual fixed cost per plant.
Factors that contribute towards the status quo scenario are called status quo files.
The status quo period demand grows at 1% per year starting from 51,400 MW
peak demand in 1994. The status quo fuel escalation rates are annually 3% for
heavy fuel oil, 3% for diesel, 1% for AGR, 1% for natural gas, and 0% for coal and
nuclear. There are two kinds of nuclear fuel escalation rates, one for AGR and the
other for Magnox and PWR; likewise, two kinds of coal escalation rates correspond
to British Coal and imported coal. All oil-fired stations take heavy fuel oil. All gas
turbines take diesel (DIST). All Combined Cycle Gas Turbines (CCGT) take
natural gas.
1 Interest during construction can be expressed as a lump sum monetary value, as a rate of
interest, or, in this case, a number of years of interest during construction. This is simply
half the construction period.
Plant availabilities and load duration curves are described for each of the eight
seasons. These eight seasons correspond to the weekday and weekends of four
kinds of season of the year with respect to peak and plateaus. Plant availabilities
have more variation than fuels, e.g. four kinds of availability patterns exist for
nuclear plants: magnox, AGR, nuclear A, and nuclear B. Availability for new
plant options is considerably lower than existing plants as seen in practice. The
possible, not more than three sequences per plant. The seasonal load duration
curves are taken from old CEGB statistics, with the assumption that overall
seasonal patterns of demand have not changed. For extreme scenarios, the LDCs
were varied towards more peak or more base-load. Minimum energy constraints
reflect fuel supply contracts in place, i.e. power stations called to run must meet
minimum utilisation levels.
Running with and without minimum energy constraints had little impact on the final
results, with the only interesting observation being those plants in the merit order
which satisfy the minimum energy constraints exactly. Hence, to speed up the
optimisation runs, capacity plans under all scenarios have been generated without
minimum energy constraints.
In the status quo case, a 20% planning or reserve margin was assumed for all 12
periods in demand growth. This means that minimum capacity required is 20%
above the peak demand expected in that period. This assumption was found to be
well justified by another study (Bunn et al, 1993) where a system dynamics model
found that a 24% planning margin achieved equilibrium conditions in the electricity
market. This planning margin is reflected by the setting of the Value of Loss of
Load (VOLL), so that the model does not build too much (above 20%) or too little
capacity in the long run. Other factors reflect the current assumptions of the
industry: 10% discount rate, 10% Non-Fossil Fuel Obligation (NFFO), and 33%
corporate tax.
The transient nature of the industry means that data accuracy and model precision
are not of high priority in this exercise, as the focus is on modelling methodology,
not policy insight. With 36 scenarios to configure, the replication was designed for
short runs: the tolerance level for convergence of the bounds was set at 0.05 instead
of the more precise 0.005 or 0.0005. Thus fewer iterations are required to close
the gap between the upper and lower bounds for the total cost of the final capacity
plan. Tightening the tolerance changes the optimal expansion plan by lowering the
total cost; a more optimal plan (say at 0.005 tolerance level) calls for building
CCGT for peakload as well as base load, building more gas turbines, but much less
CCGT capacity altogether. It is assumed that comparisons remain valid as long as
the tolerance level used is the same across scenarios.
The 35 non-status quo scenarios are generated by varying the period demand
growth in three ways, the seasonal load duration curve in three ways, and the fuel
escalation rates in four ways. The status quo period demand assumes a 1% annual
growth rate, while the low case assumes only a 0.5% annual growth rate, and the
high case of 1.5%. The low growth case is expected when any or all of the
following occur: conservation, energy-efficient appliances, fuel switching, and VAT
on fuels to curb electricity consumption. The high growth case is not very likely to
occur, since the economy is unlikely to take a sharp turn upwards and grow faster
than the previous decade, nor to shift to more energy intensive industries. The
deterministic approach does not take account of these likelihoods, so this aspect is
not explored further. Variations of load duration curves merely take the same
seasonal status quo, trimming it down for baseload or lifting it higher for peakload.
(This process began with a visual and graphical inspection and adjustment, instead
of taking demand data directly from the National Grid Company, which supplies
such information as forecasts for future periods and load distribution curves, and
then converting them into load duration curves.)
the CEGB had used negative growth rates for the low scenario and 2.6% per
annum for high demand growth. These seasonal LDCs may steer towards more
base load if any of the following occurs: mild weather conditions (more stable
demand), demand side management, tariff incentives, and better load management
on the supplier's side. If the industry evolves more vertically (or incentives for
cross functions exist) such that generators supply electricity and suppliers also
generate electricity, as is the likely trend observed now, then demand side
management may become popular.
Fuel price uncertainty is reflected in four ways. The status quo, as stated before,
contains escalation rates reflecting today's trends. The no growth case assumes
0% escalation for all types of fuels, a scenario that is likely if all existing and new
plants are tied to fixed fuel supply contracts. The high gas case uses 4% instead of
status quo 1% annual growth rate for natural gas, reflecting the premium on gas if
more CCGTs are built or if major interruptions in supply occur. The current
custom of back-to-back contracts for new CCGTs, however, makes it unlikely that
a steep growth in gas prices will occur. Finally, the high coal case escalates coal
prices at 4% instead of 0% pa. The escalation rates of domestic and imported coal are
assumed to be the same here. This situation may occur if the closure of coal pits in
the UK is a result of an under-estimation of production capacity and demand for
coal, leading to a scarcity of coal. Similarly imported coal may become more
expensive but necessary if such a scenario exists. In all scenarios, the discount rate
was set at 10%, NFFO at 10%, corporate tax at 33%, and tolerance at 0.05.
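The 36 combinations follow mechanically from the three varied factors; a sketch using the growth rates and fuel-set names given in the text (the dictionary structure and labels are my own framing).

```python
from itertools import product

demand_growth = {"status quo": 0.01, "low": 0.005, "high": 0.015}  # annual rates
ldc_shapes = ["original", "more base load", "more peak load"]
fuel_sets = ["SET0 status quo", "SET1 no growth", "SET2 high gas", "SET3 high coal"]

# every combination of the three factors yields one scenario: 3 * 3 * 4 = 36
scenarios = [
    {"demand": d, "ldc": shape, "fuel": f}
    for d, shape, f in product(demand_growth, ldc_shapes, fuel_sets)
]
```

One of the 36 is the status quo scenario itself; the other 35 are the non-status-quo cases analysed above.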
The status quo scenario was examined with respect to the optimal expansion plan,
the merit order of plants in the next 15 years, and the relative economics of each
plant. Three additional scenarios were generated for in-depth study: those giving
the highest and lowest costs and the third arising from the combination of high gas,
high period demand, and peakload duration curves. Each scenario was also tested
for sensitivity to discount rate.
During the next 15 years, the optimal expansion plan prescribes building 6014 MW
of open cycle gas turbine (OCGT) plant in the first three years. This amounts to an
investment cost of £12.652 billion over the 44 year planning horizon. By the year
2010, the newly installed CCGTs would have pushed the just installed CCGTs in
1993 down the merit order. Renewables would, of course, lead the merit order,
followed by nuclear plant, links to France and Scotland, coal station, CCGT, oil,
and new gas turbine. However, the marginal fuel saving (MFS) of CCGT is
substantially lower than that of the Scottish link and coal stations just before them
in the merit order. OCGTs offer no marginal fuel savings as they are retained for
peakload purposes. Presumably by then, all existing OCGTs would have reached their
end of life or else been pushed out of merit completely.
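The merit-order logic described above can be illustrated with a small dispatch sketch. This is not the thesis's ECAP/BENDERS code: the plant names, capacities, availabilities, and marginal costs below are hypothetical, chosen only to show how plants are stacked by ascending marginal cost until demand is met.

```python
# Illustrative merit-order dispatch; all figures are invented for this sketch.

def merit_order(plants):
    """Return plants sorted by marginal generating cost, cheapest first."""
    return sorted(plants, key=lambda p: p["marginal_cost"])

def dispatch(plants, demand_mw):
    """Stack plants up the merit order until demand is met."""
    schedule = []
    remaining = demand_mw
    for p in merit_order(plants):
        if remaining <= 0:
            break
        output = min(p["capacity_mw"] * p["availability"], remaining)
        schedule.append((p["name"], output))
        remaining -= output
    return schedule

plants = [
    {"name": "coal",    "capacity_mw": 8000, "availability": 0.80, "marginal_cost": 1.8},
    {"name": "CCGT",    "capacity_mw": 5000, "availability": 0.85, "marginal_cost": 2.0},
    {"name": "nuclear", "capacity_mw": 6000, "availability": 0.75, "marginal_cost": 0.9},
    {"name": "OCGT",    "capacity_mw": 2000, "availability": 0.90, "marginal_cost": 4.5},
]

print([name for name, _ in dispatch(plants, 12000)])  # → ['nuclear', 'coal', 'CCGT']
```

On these invented numbers the expensive OCGT is never called at 12,000 MW of demand, mirroring its peakload-only role in the text.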
This status quo scenario was also tested for sensitivity to the discount rate. At a
6% discount rate (interpretation: still in the public sector), more coal and less CCGT
should be built. While CCGT may suffice in the earlier years, i.e. the first 15 years,
it is more economical to build coal plants in the latter part of the 44 year planning
horizon. But as a result, the optimal expansion plan is more expensive by 64%.
Imposing minimum energy constraints gives slightly higher overall cost but also
This is an extreme but unrealistic scenario, which gives the most expensive option
(£45.4 billion, of which £13.95 billion is investment cost). The average cost of
all 36 scenarios was £36.7 billion. Given that coal is not a suitable alternative
because of its high price escalation, the need to install CCGT in almost every
period to meet high demand makes it very costly. By the year 2010, 18.3 GW of
then, all newly installed CCGT would be expected to move up in merit and become
baseload, hence preceding existing coal-fired stations in the merit order. The load
factor of these coal stations falls from 75% in the status quo case to 60% and down
existing coal plant in such a scenario. The sell-off option was not considered here.
Scenario 2: No growth in fuel prices, base load duration curves, and low demand
growth
This combination describes very stable circumstances, hence the plan is least costly
of all scenarios. As demand is not expected to grow much (0.5% pa), the total
investment cost for the entire planning horizon is only £9.478 billion. New gas
turbines in the first 25 years can cope with any peaks in demand. Thereafter,
CCGT for peaking load should be built. As with other scenarios, new renewables
are built at increasing capacities every year, to reflect the Non-Fossil Fuel
Obligation and also the industry's inclination towards more environmentally clean
installed. The annual fuel cost in the year 2010 (just after our 15 year evaluation)
is almost half that of the status quo case in the same year.
Scenario 3: High gas prices, peakload duration curves, and high demand growth
This is a scenario driven by extremely uncertain factors. However, the results are
not as extreme as expected. High gas prices make CCGTs unattractive. High
demand growth calls for new capacity. This combination makes coal an attractive
option. After the first twelve years, coal plants should be installed every year,
duration curves call for more gas turbines to be built, totalling 19.673 GW of
OCGT over the 44 year planning horizon. What is more interesting is that by the
year 2010, oil stations will have moved up the merit order, replacing CCGT.
However, the new open cycle gas turbines will still trail the build-up of CCGTs.
Further scenario analysis
The above analysis prompted further scenario generation to answer two questions.
1) What effect will falling coal prices have on capacity planning? 2) What
If coal prices fall at an annual rate of 1%, new coal installation becomes attractive
but not until later periods, the earliest being the 25th year. New CCGT should still
be installed in the early periods to meet rising demand and to replace retired plant
capacity. New capacity to meet the high rate of demand growth will thus be met
by new CCGT in the early periods and new coal in the later periods.
The easiest way to create a nuclear scenario, i.e. to make the nuclear option more
attractive, is to make other fuels less attractive. Hence, the fuel escalation rates for
coal and gas were raised to a high of 4% per annum, while no escalation was made
for nuclear. No coal plant but only 1.875 GW of CCGT should be built. Starting
in the 7th period (year 2012), a substantial 11.6 GW of nuclear capacity should be
installed, rising to a total of 37.5 GW by the end of the planning horizon. Diesel is
also more attractive, hence new gas turbines should be installed every year except
two periods. If we lower the discount rate from 10% to 6%, i.e. assuming a public
sector scenario, then even more nuclear capacity (total of 44.7 GW) should be built
and earlier too (4th period instead of 7th). This additional nuclear capacity
displaces much of the gas turbines in the 10% discount rate case. By the year
2010, the order of merit is obvious: renewables, nuclear, links, coal, oil, CCGT,
and OCGT. The high operating cost of OCGT forces it to the bottom of the order
in all cases.
Sensitivity Analysis
Sensitivity to tolerance level and minimum energy constraint has already been
A 40% margin raised the minimum capacity levels required in all periods in
the high growth and low growth cases. These were applied to the four scenarios
described above: status quo, high coal price and high demand growth, no growth,
and high gas price. These combinations led to 8 scenarios with 40% planning
margin implied in the period demand growth. The results were examined and
compared with scenarios closest to them, not necessarily mentioned above. The
striking outcome is that in all 8 scenarios, 16.3 GW of new gas turbine (OCGT)
plant should be built in the first three years. The fastest way to meet a 40%
planning margin is by constructing those fossil fuel plants which have the lowest
interest during construction (hence shortest construction time) and much lower
O&M cost than that of CCGT. In general, the additional capacity is met by
building more gas turbines. Other characteristics do not change very much, if at
all. Another way of looking at sensitivity to planning margin is to vary the planning
margin per period, e.g. start with 20% and gradually move up.
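The idea of varying the planning margin per period, rather than holding it at 40% throughout, can be sketched as follows. The demand figures, growth rate, and the linear 20%-to-40% ramp are hypothetical illustrations, not the thesis's data.

```python
# Hypothetical per-period planning margin ramping from 20% to 40%.

def capacity_requirements(peak_demand_mw, margins):
    """Minimum installed capacity per period = peak demand * (1 + margin)."""
    return [d * (1 + m) for d, m in zip(peak_demand_mw, margins)]

periods = 5
peaks = [50000 * 1.02 ** t for t in range(periods)]                     # 2% pa growth
margins = [0.20 + t * (0.40 - 0.20) / (periods - 1) for t in range(periods)]

for peak, margin, req in zip(peaks, margins, capacity_requirements(peaks, margins)):
    print(f"peak {peak:8.0f} MW  margin {margin:.0%}  required {req:8.0f} MW")
```

The required-capacity series then rises faster than demand itself, which is the effect the sensitivity test above probes.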
several directions towards more model completeness. The values we used have
been central estimates, hence the extreme scenarios are symmetric around the base
case. In reality, asymmetry is more likely and should be considered. We have only
uncertainties may change over time (in relative importance). Hence it may be
useful to include other uncertainties. We have taken a short cut of using the same
Some aspects of the model are more difficult to change because they require re-
the electricity market: different discount rates for different plants or ownership,
In the Sizewell B public inquiry, the inspector (Layfield, 1987) recommended that
the CEGB should include probabilities in their analysis of the capacity expansion
decision. A more rigorous analysis of uncertainty was only one of several reasons
for using a second model. A second model of the form described in Evans (1984)
was needed to test the CEGB's model because of the complexity and importance of
the calculations. It was also necessary to derive cost estimates for different sets of
results, i.e. some favourable, some not. A probabilistic method enables the results
views. Furthermore, a probabilistic approach would have merit if the results from
a deterministic approach were not robust. However, this implies that the CEGB's
approach must address robustness by accounting for very extreme and adverse
scenarios.
A probabilistic approach appeared more favourable given the harsh attacks on the
analysis but in actual fact used the techniques separately, decision analysis merely
to structure the problem without any computation in the decision analytic sense.
planning horizons imply a danger of using single estimates as a basis for decision
making. Evans' (1984) approach has been used in Kreczko et al (1987) and Evans
and Hope (1984). Evans did not consider uncertainties in discount rate or plant
lifetimes as they were not a concern ten years ago. However, the choice of discount
rate and length of operating lives of nuclear plants have become major issues now.
This goes to show that no matter how sophisticated the model, there are some
things that cannot be foreseen and the resulting model may be incomplete.
output are due entirely to the input configurations. In the extreme, uncertainty
analysis, according to Iman and Helton (1988), involves the determination of the
variation or imprecision in the output that results from the collective variation in
the inputs. The main departure from the deterministic approach comes from the
The replication follows that of Evans (1984), but with fewer inputs. Probability
distributions were assigned to a few uncertainties. After the runs had been
distributions, easy to specify and meaningful to the user. In practice, the choice of
Figure B.3 Replication of the Probabilistic Approach
[figure: @RISK is used to sample n data points from m input distributions]
output distributions given enough iterations. 100, 300, and 1000 iterations were
tested. Whilst 100 iterations can be completed in 2 days, 1000 iterations required
2 weeks. The number of iterations is the same as the number of data points in
number of iterations was not clear, more iterations were used than necessary.
The input distributions were specified in the @RISK add-in package (Palisade), which
offers Latin hypercube sampling and other types of stratified sampling methods. Fewer
iterations, i.e.
smaller samples, are needed to recreate the probability distribution. These data
points were then extracted from the @RISK output spreadsheet and consolidated
into an equivalent number of text files for input into the optimisation programme.
Automating the process of specifying input data files, running the optimisation
programme, and saving the output files accordingly was accomplished by writing a
were used for simulation: Excel, PFE file editor, and BENDERS (optimisation
files, in particular, the escalation rate for the fuel (diesel) used in gas turbines.
Because it was impossible for BENDERS to signal the end of its run to the parent
Excel macro, it was necessary to pre-specify how long Excel had to wait in this case.
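The batch loop described above — stage an input file, launch the optimiser, wait, collect the outputs — can be sketched in modern terms. Where the Excel macro had to pre-specify a fixed waiting time, a subprocess call can simply block until the programme exits or a timeout expires. The programme name "benders" and the runner hook are placeholders, not the real interface.

```python
# Sketch of automating repeated optimisation runs with a timeout instead of
# a fixed wait; the external programme and its interface are hypothetical.
import subprocess

def run_batch(scenarios, runner=None, timeout_s=3600):
    """Run the optimiser once per scenario, returning the completed run ids."""
    if runner is None:
        # default: invoke the external optimisation programme and block until it exits
        runner = lambda s: subprocess.run(["benders", s], timeout=timeout_s)
    completed = []
    for scenario in scenarios:
        try:
            runner(scenario)
            completed.append(scenario)
        except subprocess.TimeoutExpired:
            continue  # skip runs exceeding the waiting time, as the macro effectively did
    return completed

# dry run with a stand-in for the optimiser
print(run_batch(["status_quo", "high_gas"], runner=lambda s: None))
```

The injectable `runner` makes the loop testable without the optimiser present, a convenience the original Excel/BENDERS pairing lacked.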
The optimisation programme produced five different files for each run. Two were
discarded and three files retained for each run. The intermediate results file INR
was kept to track the duration of each run, informing subsequent adjustment of the
waiting time of the parent Excel macro. The OEP
file contained the optimal expansion plan in terms of investment, operating and
total costs and also newly installed capacities per type of plant per period. The
production costing results PCR file contained the merit order of all plants in the
system for the periods requested -- first, second, and fifth periods in the planning
horizon.
Inputs and outputs of the deterministic and probabilistic approaches are largely
determined by the input and output files of the core optimisation programme
ECAP. Table B.5 lists the input data files. Table B.6 lists the output data files.
Table B.5 Input Files to ECAP
LDC Load Duration Curve Demand for electricity described by the load duration
curve which is approximated by a step function. 8
seasons are specified.
PRD Period Definition Defines the periods of the planning horizon and
minimum and maximum total plant capacity.
Reserve margin.
ESC Escalation Rates Fuel escalation data which determines how the
variable plant operating costs are escalating, defined
by escalation codes and patterns.
OLD Existing Plant File Plants in the system, e.g. table B.3, containing name
of plant, scrapping life in years, capacity in MW,
availability, generating cost, escalating code for fuel
used.
Table B.6 Output Files from ECAP
OEP Optimal Expansion Plan Main output file: investment and retirement
schedule containing all existing plants and new
plants that meet objectives in cumulative block form.
NCC Net Capacity Costs Annual Capital Cost + Fixed operating Costs - Fuel
Savings for all new plants in the middle year of all
periods that the plants operate.
PCR Production Costing Results Details of plant operation in the middle year of each
period of planning horizon, displayed according to
merit order.
SMP System Marginal Price Information about SMPs in different seasons and
periods of the planning horizon.
Triangular distributions were used because they were simple to specify, requiring
only three parameters, and yet reflected asymmetry. There are other probability
distributions that may be more appropriate for different parameters, i.e. not all
used because it was most efficient. However, there are other sampling methods in
the domain of uncertainty analysis proper. Morgan and Henrion (1992) describe
and different utilities will have different views on which probabilities to use.
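The triangular input distributions discussed above are straightforward to reproduce. The sketch below samples a fuel-escalation rate from a (low, mode, high) triangle; the parameter values are invented for illustration, not taken from the thesis, and the sample mean is compared with the triangular distribution's theoretical mean (low + mode + high) / 3.

```python
# Sampling a triangular input distribution, as in the replication;
# the (low, mode, high) escalation figures are hypothetical.
import random

random.seed(1)
low, mode, high = 0.00, 0.01, 0.04   # asymmetric: upside risk is larger
samples = [random.triangular(low, high, mode) for _ in range(1000)]

mean = sum(samples) / len(samples)
print(f"sample mean {mean:.4f}, theoretical mean {(low + mode + high) / 3:.4f}")
```

Note that only three parameters are needed, and the asymmetry of the case (prices more likely to rise than fall) is captured directly by placing the mode near the lower bound.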
While we can increase the number of uncertain parameters considered, we must
consider. The few uncertainties considered here alone have already led to
problematic display of output due to the sheer volume of data points in the output
distributions.
Another interpretation of this approach includes the use of risk analysis to screen
out dominated or infeasible scenarios before the real analysis. This reduces the
also a facility to combine sensitivity analysis with risk analysis. But these tests
uncertainty is resolved in stages over time. The Electric Power Research Institute
(EPRI) in the US has funded and sponsored a number of planning projects which
used decision analysis as the main technique for structuring and analysis of
Kamat (1980), Hobbs and Maheshwari (1990), Keeney and Sicherman (1983), and
Keeney et al (1986).
Capacity planning can be defined by three types of decisions relating to the type,
size, and timing of plant investment. Sullivan and Claycombe (1977) suggest a
investment and operating cost and expected demand. Then decide on size, which is
requirements. Finally decide on timing. The timing decision is based on the type,
forecasts.
The Over and Under Model (Cazalet et al, 1978) can be adjusted to suit the UK
pricing formula (VOLL, value of lost load). This extends to the multi-staged
approach, the focus changes from the system as a whole to that of sequential
the art decision software DPL (ADA, 1992) enabled the use of decision trees and
action. The three types of decisions, namely, technology choice, capacity size, and
However, many other studies have argued the need to consider all three decisions
in the context of optimising the entire system (portfolio) because any optimal
allocation affects merit order. This multi-staged analysis of decisions results in the
introduction of a specific technology of a specific capacity size at a specific future
date. The decision model replicated follows that of Cazalet, not Keeney.
Unlike the previous two approaches, the decision analytic approach is not based on
in decision analysis centres around the structure rather than the data intensiveness
the decision tree, repetitive decision sequence, number of alternatives, and other
capacity planning in the UK context. The simplest is the single project timing
decision in which the type and size of plant have already been determined. Next is
the technology choice model, i.e. a decision regarding the selection of one of two
planning decisions. The second shows the resolution of uncertainty. The third
formulations are possible, these three prototypes capture the most important
Figure B.4 shows the first prototype configuration. At each period, the decision
continue, there is still an uncertainty about delay. Let the current stage be i. If the
next stage advances to i+1, i.e. progress, then a payment of the interest during
construction (IDC) is incurred. If the next stage remains at i, then there is a delay
and a delay penalty must be paid. If the following stage still remains at i, then there
has been no progress and a no-progress penalty must be paid. If this stage is 0,
then the project has been abandoned. The penalty values are levied as follows: a
delay implies interest accumulation (extra interest payment if the construction cost
is borrowed in full or else the cost of extending the borrowed funds to cover
capital expenditure) or the difference between interest earned and paid and ongoing
recurrent costs. No progress implies some kind of ongoing cost that has to be
contracts. Once the project is completed after 3 stages, it can begin to earn
revenue. The revenue is determined by the actual demand and level of total
conditional on the demand probability of the previous period. This model allows
[Figure B.4: decision tree for the project timing prototype — each year a
Start?/Continue? decision leads either to progress (incurring IDC), delay (delay
penalty), no progress (no-progress penalty), or abandonment]
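One simple way to value the staged decision of figure B.4 is backward induction. The sketch below is illustrative only: the probabilities, penalty levels, and completion value are hypothetical, and the delay branch (remaining at the same stage) is handled by solving the self-referential expectation algebraically rather than by enumerating the tree.

```python
# Illustrative backward induction over the staged construction decision;
# all probabilities and monetary figures are hypothetical.

P_PROGRESS, P_DELAY, P_NOPROG = 0.7, 0.2, 0.1
IDC, DELAY_PEN, NOPROG_PEN = 50.0, 20.0, 80.0   # per-stage costs (hypothetical)
COMPLETION_VALUE = 300.0                         # worth of the finished plant
STAGES = 3

def value(stage):
    """Expected value of the project with `stage` construction stages completed."""
    if stage == STAGES:
        return COMPLETION_VALUE
    # Continuing satisfies V = p(V_next - IDC) + d(V - delay_pen) + n(V - noprog_pen);
    # since p + d + n = 1, solving for V gives:
    cont = value(stage + 1) - IDC - (P_DELAY * DELAY_PEN
                                     + P_NOPROG * NOPROG_PEN) / P_PROGRESS
    return max(0.0, cont)   # abandon (value 0) if continuing is worth less

print(f"expected value of starting the project: {value(0):.1f}")
```

The abandonment option appears as the `max(0.0, ...)`: if the expected value of continuing at any stage turns negative, the project is dropped, exactly the role of the abandon branch in the tree.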
B.4.3 Marginal Cost Analysis
Marginal cost analysis as employed in the first pilot study (Appendix A) is useful
for comparing plant alternatives that differ in technology (type), size, and timing.
of uncertainties are inserted in the decision tree to show their impact on cost. In
figure B.5, the choice of discount rate is determined first, followed by the type of
The technology choice decision can also be evaluated by a typical annual cost
breakeven analysis as prototype three. Here fixed and variable costs for a typical
converted to an annual cost figure for comparison. A plant is only worth building
if it satisfies two conditions. One, it can be bid into the pool. Two, it can be
profitably operated.
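The annual-cost breakeven comparison of prototype three can be sketched by annuitising capital cost with a capital recovery factor and adding fixed and variable costs at a given utilisation. The plant figures below (capital cost per kW, O&M, variable cost, lives) are hypothetical, chosen only so that the usual crossover appears: the cheaper-to-build plant wins at low utilisation and the cheaper-to-run plant at high utilisation.

```python
# Sketch of an annual-cost breakeven comparison; all plant data are invented.

def crf(rate, years):
    """Capital recovery factor: converts capital cost to an equal annual charge."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_cost(capex_per_kw, fixed_om, var_cost_per_kwh, hours, rate, life):
    """Total annual cost per kW at a given utilisation (hours per year)."""
    return capex_per_kw * crf(rate, life) + fixed_om + var_cost_per_kwh * hours

# hypothetical CCGT vs coal: CCGT cheaper to build, coal cheaper to run
ccgt = lambda h: annual_cost(400, 10, 0.020, h, 0.10, 25)
coal = lambda h: annual_cost(900, 25, 0.012, h, 0.10, 40)

for hours in (2000, 5000, 8000):
    cheaper = "CCGT" if ccgt(hours) < coal(hours) else "coal"
    print(f"{hours} h/yr: {cheaper} is cheaper")
```

The two conditions in the text map onto this directly: being bid into the pool concerns the variable cost alone, while profitable operation requires the pool revenue to cover the full annual cost.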
The decision tree allows us to focus on different sequences and order of decisions
cannot capture the details of plant for merit ordering and other operational
APPENDIX C
fill the void in the modelling literature. There are many possibilities for synthesis
but no criteria or strategies to this end. The decision support literature, e.g. Dolk
(1993), discusses model integration and model management systems with respect
synthesize existing techniques and models. The host of conceptual issues suggests
This appendix is organised as follows. Section C.1 clarifies the concept of model
synthesis, e.g. the difference between a technique and a model. Section C.2 shows
the potential for synthesis between various techniques by illustrating the similarities
between them. Section C.3 examines model structuring issues of technique choice,
ordering, and linkage. Section C.4 proposes a distinction between weak and
strong forms of synthesis.
C.1 Definitions
simulation, and other types of analysis. Many operational research methods and
technique becomes a model when input and output variables are specified and data
is applied. A technique is the engine without the data, whereas a model employs a
technique with data. A model evolves into a new technique when its algorithm or
driving engine becomes generic, i.e. applicable to a class of problems but not
A composite model (Kydes and Rubin, 1981) is any model which is made up of
types of methodologies. Model synthesis concerns the use of more than one
distinct models, enabling results and insights that cannot be achieved by separate
approaches, not just at the output level of, say, combining forecasts.
deterministic base case model, where single mean values represent uncertainties, is
analysis, and risk analysis are single staged while decision analysis is multi-staged.
Another difference lies in the order of deterministic and uncertain nodes. Decision
analysis and risk analysis are able to consider continuous probability distributions
while scenario analysis and sensitivity analysis are limited to the discrete. The
[figure: comparison of Decision Analysis, Scenario Analysis, Sensitivity Analysis,
and Risk Analysis — single versus repeated runs, discrete versus continuous
distributions]
likelihood of possible outcomes and, in the case of power planning, may result in
selecting a technology even when its cost advantage is much smaller than the
that formal algorithms are unable to offer. Some compromise may be achieved by
inquiry (Layfield, 1987) concluded that probabilistic analysis is a helpful and
quantitative variables while decision analysis can deal with the non-quantitative,
subjective, and judgmental variables, thus incorporating the preferences and values
single point forecasts, thereby adding more information to the analysis of basic
project appraisal. Decision analysis brings in the values and preferences of the
decision maker through utility functions which reflect risk attitude. These
Figure C.2 Risk Analysis and Decision Analysis
[figure: two routes from a capital investment proposal to a decision — basic project
appraisal passes single valued forecasts of rate of return (with sensitivity
analyses) to managerial judgement and review, alongside intangibles and other
decision parameters; the risk analysis process instead passes probability
distributions for decision variables, yielding probability distributions for NPV,
IRR, and payback]
When the chance nodes of a conventional decision tree are replaced with probability
distributions, the result is a stochastic decision tree, as first described in Hespos
and Strassmann (1965). All quantities may be represented by probability
distributions. The information about the results from any or all possible
probabilistic form. The probability distribution of possible results from any
risk. However, like most decision trees, it quickly becomes messy, too large and
between variables. Reduction methods which screen out dominated options would
C.3 Structuring
are used so that the main task of synthesis becomes that of structuring and
Agarwal, 1991.) These factors determine the number and kinds of techniques to
The high cost of model development (Balci, 1986) is one of the main reasons for
using commercially available software packages that provide the algorithms and
has facilitated rapid model development as well as model synthesis. The use of
such software pushes verification and validation to others, reduces the overall
Therefore, software availability is an important determinant of technique choice.
The modeller's familiarity with model components greatly reduces modelling time
and standardisation of software and hardware help to flatten the learning curve and
steering the modeller away from those components that may be more conducive to
driven bias.
The level of detail that can be incorporated by each technique varies greatly. Some
are better at specifying technical and operational detail, while others are more
effects, dependence, uncertainty, and time dynamics. In fact, this is one of the
main reasons for synthesis, i.e. each model component is selected for its functional
(Greenberger, 1981) is to keep the model as simple as possible while still capturing
by way of balancing the hard and soft, descriptive and prescriptive, and the
techniques compromise their individual capabilities and limitations. For example,
balance of hard, prescriptive, and deterministic linear programme with the soft and
analysis, linear programming, and sensitivity analysis was actively used by the
Compatibility means the ability to co-exist and work together. In model synthesis,
compatibility resides at the data and theoretical levels. At the data level,
techniques must be able to share data in the same form or convert them into the
form they need. At the theoretical level, basic axioms must not be violated.
between the components. For instance, the heavy data demands of linear
C.3.2 Ordering
The order in which model components are activated in the synthesis relates to the
discuss two main kinds of ordering: increasing complexity and most relevant
aspect first.
simplicity, it is not clear whether we should build a crude but symbolic version of
the target model and add incremental detail, such as the way we add flesh to the
skeletal frame, or start with the smallest and simplest part of the problem to model
and increase in scope and complexity. From a technique point of view, it seems
reasonable to capture the main elements of the problem in a simple model initially
The strategy of decreasing importance calls for first capturing the most relevant
a less important aspect. For example, if the technology choice decision is the most
important, decision analysis should be selected. On the other hand, if the investment and
retirement of plants over the forty year planning horizon is more important, we will
There are other starting points. Using the most intuitive model first to get the decision
makers involved ensures that the basis for the model is user-driven and as a result
anchoring bias.
C.3.3 Linkage
addressed in whole. Models that coexist in a given framework are part of a larger
composite model only if their total contribution is greater than the sum of each.
Model synthesis reflects the definition of a system: the whole is greater than the
sum of its parts. While such components may co-exist and still stand alone and
not interact, some linkage is required to pull the outputs together. In most cases,
the components are linked in one or more of the following ways as illustrated in
figure C.3. Two techniques are linkable if they are compatible, that is,
communicable.
Figure C.3 Types of Model Linkages
[figure: (1) sequential, (2) parallel, (3) feedback, (4) embedded, (5) multi-level]
1) The easiest method of integration is sequential, whereby the results from one
component are fed into the next. The sequential manner in which data is passed
limits the number of model linkages. However, sequential modelling is quite time
consuming, because a new stage cannot begin until previous stages have ended.
Once a component has passed its output to the next, it can be shut down or
processing, i.e. several models are run simultaneously, and the results fed into a
final model. Because computer costs are quite high, in reality this parallel method
is achieved sequentially, with the results of each model saved and entered into the
final model at the end. Models in the same stage of analysis can commence in any
order.
3) A variation of the sequential method is the incorporation of feedback or iteration.
Here, results from one model are fed into a previous model, increasing the number
of interfaces to three if the previous model is not the first model used. Feedbacks
occur in real life; thus, feedback modelling helps to refine the data and correct
earlier assumptions.
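The feedback linkage just described can be sketched with two toy stand-in models: a downstream model's output is fed back to revise the upstream one until the two agree. The demand and price relations below are invented purely for illustration and are not components of any real synthesis.

```python
# Sketch of a feedback (iterative) linkage between two model components.

def demand_model(price):
    return 100.0 - 2.0 * price          # upstream: demand responds to price

def price_model(demand):
    return 10.0 + 0.1 * demand          # downstream: price responds to demand

price = 10.0                             # initial guess
for iteration in range(100):
    demand = demand_model(price)         # sequential pass forward...
    new_price = price_model(demand)      # ...then feed the result back
    if abs(new_price - price) < 1e-9:
        break                            # the two models now agree
    price = new_price

print(f"converged after {iteration} iterations: price {price:.2f}, demand {demand:.2f}")
```

Each pass refines the earlier assumption, which is exactly the corrective role the text assigns to feedback; when the interaction is a contraction, as here, the iteration settles quickly.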
The embedded model provides results which are needed by the larger model. In a
concentric structure like the layers of an onion, outer components are highly
5) A multi-level modelling approach to deal with the long range capacity expansion
intensive. A hierarchical modelling process, i.e. modelling at more than one level,
integrating module of the large energy model NEMS (DOE, 1994) is solely
Possibilities for synthesis increase with the number of techniques. The greater the
number of techniques and linkage methods, the greater are the number of
handle and the operational implications of the components themselves. We have
shown the possibilities, but there is a need for guidelines concerning which
ordering to follow, which linkage is best, the circumstances under which model
standardise the inputs and outputs and their types: different subsets of data may be
passed to different modules, or the same data output to more than one module.
We propose a distinction between weak and strong forms of synthesis to reflect the
Whether or not a model is weak or strong depends on the factors addressed in the
not highly dependent on each other. The weakest form is given by a model which
the strong form, model components are tightly integrated and contribute towards
each other's informational and functional needs. Factors that contribute towards
C.5 Strategies for Synthesis
The questions we have raised in model structuring pertain mostly to trading off
relate to an overall modelling strategy whose determinants are not yet clear. We
C.5.1 Modular
A modular structure allows parts of the model to be changed without affecting the rest. It is easy to
expand and contract. Different people can work on different parts of the model
the staged approach where modules can be run in stages if necessary. Each
module represents a complete, enclosed aspect of the problem. Both modular and
staged approaches help to reduce the complexity and increase the manageability.
C.5.2 Hierarchical
The concept of hierarchies is related to modularity but with the added dimensions
Among the many approaches, Thompson and Davis (1990) describe the problem-
driven method. A problem is broken into a series of decision levels, with the
highest being aggregate, that is, containing the smallest number of variables by
Nested techniques follow the hierarchical approach. Those at the top level are
getting the big picture right and adding the details later.
C.5.3 Evolutionary
an evolving model which is repeatedly redefined to reflect the new and increased
understanding of the problem, the changing objectives, and the availability of new
data. Ward (1989) suggests that models generating different levels of detail should
kept manageable. Starting from a simple model with few parameters but
encapsulating the big picture, additional factors and dimensions are introduced
with a view to test the feasibility and attractiveness of different techniques. The
In spite of these favourable characteristics, this method of investigation has several
runs the risk of never finding anything. Modelling without an end goal or without
In building decision support systems, Sprague and Carlson (1982) describe three
tactical options: the quick hit, staged development, and the complete system. 1)
The quick hit has the lowest risk in the short run but no re-assurance of re-
specific model using whatever is available quickly and without any plans for
the complete system approach is most comprehensive and ambitious and by default
most time-consuming. It requires a lot of foresight and planning but bears the risk
of technological obsolescence.
The need for a uniform modelling framework led Geoffrion (1987) to develop
structured modelling, to address the fragmented modelling world where low productivity and poor
managerial acceptance prevail. Structured modelling is a bold attempt to reduce
framework means that a consistent level of detail and scope cannot be maintained
develop criteria and methods to extract a subset of models from the grand design.
give an exhaustive list of criteria and strategies for model synthesis. Nonetheless,
these conceptual issues provide the basis for further research into model synthesis
Part Two
5.1 Introduction
flexibility suggests that the best way to allow such portability is to study how
that are lost or de-emphasized in any single discipline. The last major cross
(1982). Since then, new uses and measures of flexibility have appeared. An
investor's preference for flexibility translates into the notion of liquidity, or the
past. In the labour markets, employers allow flexible hours to attract better skilled
Flexible information systems offer users more functionality. The so-called dash
for gas in the present UK Electricity Industry refers to the rapid build-up of a
flexible technology called combined cycle gas turbine which can be quickly built
off-site in small but modular unit sizes. In all of these areas, flexibility represents a
In spite of its popularity, flexibility has not received the formal recognition worthy
(Orr, 1967) remain elusive. Its usefulness may have been overlooked, particularly
its potential contribution to aspects of modelling that are not well served by
existing ideas.
related words in the industries covered by the ABI-INFORM (1970 - 1994) CD-
ROM. Research implications from this review and specific studies of flexibility are
also noted. This review provides the basis for its conceptual development in the
next chapter and its operationalisation in chapters 7 and 8 via measuring and
modelling.
As early as the 1970s, utility executives had called for flexibility (Schroeder et
al, 1981). The need for flexibility in power generation planning has been suggested
by Berrie and McGlade (1991), Vlahos (1990), Clark (1985), Borison (1982) and
others, none of whom have defined it nor shown how it can be used. This supports
widely acknowledged, but the concept is rarely defined precisely, much less
quantified.
Flexibility in planning (Hirst, 1990) comprises the selection of a resource portfolio
that can be easily adapted to various conditions, e.g. small unit sizes, short lead
to continuous demand, small plants could more easily follow load growth than
large plants. Extending the argument to the extreme, a series of zero-lead time,
infinitesimally small power plants can completely eliminate all periods of excess or
deficit capacity. The drawback to maximal flexibility is that small plants cost more
per kW of capacity and per kWh of output, i.e. there are no economies of scale. Another
shortening the lead times for different types of plant through the use of option
concepts. Such a plan contains flexible elements (options) and uses a decision rule
to instil flexibility. CIGRE (1991) define flexibility as the ability of the power
Evans (1982) lists four ways to induce flexibility so that positive consequences
occur while avoiding negative ones. He considers them appropriate for research
into flexibility in technology assessment. To illustrate their uses, we add our own
3) Flexibility is the susceptibility to modification or the ability to effect alterations,
such as the liquidity of an investment, a business, or a technological portfolio.
Flexibility is achieved through the overall configuration (via balance, risk
diversification, fuel diversification) rather than through the individual properties of
various components. The flexibility of a capacity mix is determined by the ease in
meeting different conditions of shifting demand levels and load duration curves.
5.3 Economics
Since the 1930s, flexibility has been recognised in separate studies as a component
in this area, Jones and Ostroy (1984) claim that it plays a limited role in
One of the earliest advocates, Stigler (1939) discusses the relationship between
flexibility and adaptability by analysing average and marginal cost curves in the
flexibility built into a plant depends on the costs and gains of flexibility, hence
implying that flexibility is not a free good. A plant is flexible if it could produce a
cost. This translates to a flat average cost curve, i.e. the flatter it is, the more
From his chronological account of the development of flexibility in the economics
literature, Carlsson (1984) concludes that flexibility gives a firm the ability to deal
The firm's need for flexibility depends directly on the stability of the environment in
which it operates. Low flexibility is sufficient for stable environments, and high
Mascarenhas (1981) describes two ways to realise this: increase options (hence its
markets. Flexible specialisation and integration (Gertler, 1988) are strategies for
in market demand and to adopt new products quickly. Eppink (1978) notes that
the more uncertain the situation, the more an organisation will need flexibility as a
desirable trait of a planning strategy, structure, and plan. Later, Evans (1991) uses
flexibility.
value flexibility and tend to make commitments in stages thereby limiting the
5.5 Labour Markets
workforces, and career planning. Against increasing rates of change and volatility
of the labour market, flexible human resource policies have been devised to allow
various job functions to meet different needs of the organisation. Women are seen
and contract workers are also part of this group, as their working arrangements can
remaining flexible enough to switch plans in case one strategy does not work out.
labour markets and labour processes through increased versatility in design and
example, refers to a new form of skilled craft production made easily adaptable by
types of flexibility for employers and employees and has different international
contexts. In France and the UK, for example, employers view flexibility as fixed
term contracts, the ability to lay off workers, and the introduction of flexible
working hours for the employee. Flexi-time (Ullrich, 1980) allows employees to
vary their hours of work to suit their own preferences. On the other hand,
Flexibility applies to both employees and employers. Atkinson (1985) identifies
refers to the smooth and quick deployment of employees between activities and
tasks. Through multi-skilling and retraining, the same labour force may be
worked hours to be quickly, cheaply, and easily varied in line with short term
changes in the demand for labour. [The author might have equally used the term
manipulate labour costs according to the state of supply and demand in the external
based pay. Employees, on the other hand, desire two different kinds of flexibility
The term flexibility has been applied in contradictory contexts. On the one hand,
Flexible and open systems have the capacity to communicate with other types of
technology protocols, i.e. they are responsive to other types of signals. Flexible
materialise. On the other hand, an inflexible technology is slow and costly to adjust
to unexpected events.
In the design of effective decision support systems (DSS), Sprague and Carlson
evolutionary). At the first level, the flexibility to solve enables the user to confront
that it can handle a different or expanded set of problems. At the third level, the
flexibility to adapt refers to the DSS-builder's ability to adapt to changes that are
flexibility applies to the long term view, that is, the ability of the system to evolve
in response to changes in the basic nature of the technology on which the DSS is
based.
5.7 Manufacturing
most of the industrial era, followed by quality in the 1970s and 1980s, Chandra
and Tombak (1992) observe that flexibility is now the dominant theme in the
1990s because of shorter product life cycles, more competitive markets, and the
Flexibility is no longer just desirable but vitally necessary, to the extent that it
competitiveness.
economic advantages, such as the ability to rapidly introduce new parts and to
give trouble-free service and can accommodate faults. Flexible or robust designs
frameworks to capture the essential dimensions. Slack (1983) says flexibility refers
to how far and how easily one can change what one wants to achieve, thus
containing the dimensions of range and ease. Along the same lines, Kumar (1987)
options or the choices available and on the freedom with which various choices can
depends not only upon the number of alternatives but also on the extent to which
(1992) define flexibility as the ability of a system to cope with changes effectively.
definitions, and new applications, e.g. Sethi and Sethi (1990), Gupta and Goyal
(1989), and Bernardo and Mohamed (1992). A unique, undisputed definition does
operational flexibility, design flexibility, short term, long term, scheduling, job, mix,
etc. The same term, e.g. product mix flexibility, often has different meanings, e.g.
5.8 Other Areas
In other areas, flexibility communicates the same kind of diversity, variety, and
of systems under the following conditions seen today: high rate of environmental
As apparent from this review, the literature on flexibility is rich with definitions,
2) Identical terms used in different studies do not necessarily have the same meaning.
For example, decision flexibility, according to Heimann and Lusk (1976), is an
alternative criterion to the expectation model, whereas Merkhofer (1977) defines it
as the availability of alternatives or the size of choice set for a decision. Clearly,
criterion and choice set are not the same.
3) The interpretations of flexibility are often confusing. For example, Mandelbaum
(1978) considers diversity a source of flexibility, i.e. a way to achieve or increase
flexibility. On the other hand, DeGroote (1994) says that flexibility is a hedge
against diversity. Stirling (1994) says flexibility is a form of diversity, but at the
Sizewell B public inquiry, diversity was largely regarded as a hedge against
uncertainty.
4) In the extreme, the orthodoxy of flexibility has been criticised by Pollert (1991) as
being an ideological fetish. In connection with labour markets, for example, it is
ambiguous, multi-form, generic, confusing, and heavily value-laden.
rather inferred from its uses in a range of disciplines. Previous studies of flexibility,
notably Merkhofer (1975), Mandelbaum (1978), Eppink (1978), and Evans (1982),
1) Flexibility along with other closely related words is widely used across different
areas. Although we have only reviewed the uses and definitions of flexibility in
business-related fields, we can expect the notion of flexibility to exist everywhere.
Table 5.1 lists the main uses of flexibility as covered in this review. It shows that
flexibility is a characteristic of the product as well as the process, e.g. plan and
planning, decision maker and decision making.
2) Reference to flexibility has increased in the last two decades. The words
"flexibility" and "flexible" are used much more often today than they were a decade
ago (ABI-INFORM, 1970-1994). This is due both to new conditions under which
flexibility is useful and to new ways in which flexibility takes form. This review
suggests that the increased reference to flexibility is partly due to the more
competitive markets and uncertain environments as well as the greater availability
of technological options and increased functionality of systems to meet these needs.
4) The concept implies having or offering an abundance as well as a variety of
choices. It is a way of coping with unavoidable uncertainty by having or offering
many different choices or functions.
6) Flexibility seems to fill the gap between planned (intended) and actual outcomes,
especially in situations involving high cost, long lead time, and heavy
commitments.
[Table 5.1, main uses of flexibility (fragment): rows include plan (planning,
planning strategy, planning process, planning approach) and decision (choice),
marked against the areas reviewed.]
Five immediate directions for further research arise from this review.
1) Several concepts are closely related, e.g. flexibility increases with increasing
number of alternatives, diversity of choices, response to change, etc. These
relationships suggest that flexibility may be a function of more established concepts.
How does flexibility relate to more established concepts?
2) In all of the sectors reviewed, flexibility is advocated as desirable, albeit not without
negative effects and provisioning costs. This raises the question: is flexibility always
desirable? Under which conditions is flexibility useful? What happens when we
have too much or too little flexibility? How much flexibility is enough?
3) If flexibility is not always desirable, this poses another question. Under what
circumstances is it no longer desirable? Gerwin (1993) warns of the downside of
flexibility, to which little, if any, attention has been given. For instance, excess
amounts of flexibility, such as quick short-term responses to uncertainty, could lead
to wasteful activities.
4) While a unique and formal definition may not yet exist, the manner in which it is
used and promoted may help to elaborate the concept. By this, we mean identifying
necessary elements to define flexibility, such as Eppink's (1978) designation of
type, aspect, and components of flexibility.
5) To harness this concept for practical use, we need to know how to operationalise
and measure flexibility.
(1978) open research areas still hold today. For example, problems of continuous
action space make the range aspect of flexibility difficult to measure. Practical
development of flexibility attributes and implications of their use are required, thus
(1988) uses this technique as a means to capture the option value of flexibility.
CHAPTER 6
Conceptual Development
6.1 Introduction
relationships have been formally proven. Others are illustrated by examples. These
conceptual relationships are depicted by triangles in figure 6.1 and discussed in the
flexibility is useful (section 6.9), discuss its downside (section 6.10), distill
(section 6.13). The final section (6.14) concludes with the main findings and raises the
[Figure 6.1: conceptual relationships (depicted as triangles) among commitment,
options, confidence, flexibility, robustness, and regret, discussed in sections 6.3
to 6.8.]
6.2 Conceptual Analysis
related words. We briefly describe those words which share the closest meanings
and versatility.
similar in the context of return to a normal state. Liquidity, meaning the ease of
conversion, is also a kind of flexibility, being the ease of transition from one time
period to a desired position in the next period (Jones and Ostroy, 1975). In this
furnish a variety of consumption plans. Both plasticity and flexibility denote some
other states. Robustness and resilience are closely related; the former refers to the
ability to satisfactorily endure all envisioned contingencies while the latter refers to
6.3 Flexibility and Robustness
Gupta and Buzacott (1988), Mandelbaum (1978), Eppink (1978), and Ansoff
(1968) see two fundamental ways of responding to change and uncertainty, which
tolerance. It is the innate capacity to function well in more than one state and thus
Eppink (1978) distinguishes the ACTIVE form, the response capacity of the
organisation, from the PASSIVE form, the possibility to limit the relative impact of
a certain environmental change.
Mandelbaum (1978) observes that action flexibility is only needed when we have
less than perfect information. It is acquired by taking appropriate action after the
change takes place to take advantage of the new state. This kind of flexibility is
only desirable when there is uncertainty about what actions to take and useful if
State or passive flexibility, on the other hand, already exists in the new state when
the change takes place. Therefore it is not necessary to learn about the present
change.
These two types of flexibility agree with the conceptual analysis of Evans (1982).
this end readily encompasses both notions of flexibility and robustness and more.
The distinction between passive and active forms of flexibility becomes less clear
flexible and robust or maintain a system that is robust overall but containing
investigate the meaning of robustness in the next sub-section and then compare it
6.3.2 Robustness
Robustness is a term in its own right. It has several definitions. In quality control,
robust quality represents the zero defect concept. In statistics, robust regression
observations. Robustness to likely errors is the ability of a procedure to give good
results under less than ideal conditions. Accuracy in problem representation can
individual variables.
(1994) define a robust plan as one whose cost varies little with changes in
stability. Hashimoto et al (1982) associate robustness with the probability that the
actual cost of the system will not exceed some multiple of the minimum possible
cost of a system designed for the actual conditions that occur in the future.
flexibility, i.e. acceptable over a range. Merrill and Wood (1991) equate
robustness with the proportion of possible futures a given plan would be best in.
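Merrill and Wood's proportion-of-futures measure can be sketched numerically. The plans, futures, and all cost figures below are invented for illustration; only the measure itself comes from the text.

```python
# Hypothetical sketch of Merrill and Wood's (1991) robustness measure:
# the proportion of possible futures in which a given plan is best
# (here, achieves the lowest cost). All cost figures are invented.

costs = {                      # costs[plan] = cost under each of three futures
    "large coal":  [100, 130, 160],
    "small CCGT":  [110, 120, 140],
    "import only": [105, 135, 170],
}

def robustness(plan):
    """Fraction of futures in which `plan` achieves the minimum cost."""
    n_futures = len(costs[plan])
    best_count = sum(
        1 for f in range(n_futures)
        if costs[plan][f] == min(costs[p][f] for p in costs)
    )
    return best_count / n_futures

for plan in costs:
    print(plan, robustness(plan))
```

With these figures, the moderate "small CCGT" plan is best in two of the three futures, so it scores highest on this measure even though it is never the cheapest by a wide margin.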
Gupta and Rosenhead (1968) state that it is closely related to the term
made now versus the number and diversity of options left open. Paraskevopoulos
uncertainty which directly translates to testing a plan that is optimal under a given
demonstrates how powerful the method is, how applicable it is, and how well it
performs regardless of changes. Other words that are implied by the word
robustness are consistency, insensitivity, tolerance (of error and change), long-
The words flexibility and robustness often appear in the same articles and are even
used interchangeably, as if they meant the same thing. They have also been used to
example, CIGRE (1991) suggest that flexibility at the planning stage ensures
against expected value. Hobbs et al (1992) confuse this further by suggesting that
(the pursuit of) flexibility can result in a robust plan that will be satisfactory
under a range of possible market and regulatory conditions even if that plan fails
production level. Robustness, on the other hand, is associated with not needing to
change. Flexibility and robustness are neither opposites nor the same, but merely two
4) Over and Under Capacity
5) Response to Uncertainty
of power systems. The resulting plan will not be optimal for a particular future but
satisfactory for most of the possible futures. CIGRE (1991) propose two
indicates the overall power system strength to withstand external impacts. They
adequate when the development parameter variations become too large and that
each type of plant represents an open option. This aspect of flexibility is closely
number of different plants according to type of fuel, capacity size, and other
FUNCTIONAL REQUIREMENT OF SYSTEMS
The notions of flexibility and robustness describe two mutually exclusive functional
fool-proof characteristic that ensures the system will not crash. A robust
programming language has a consistent style to its syntax and semantics, whereas a
One of the main differences between robustness and flexibility is that the former
implies a present cost whereas flexibility implies a future cost. Robustness by means
which will not be eliminated until demand reaches that margin level in the future.
This is the opportunity cost of providing flexibility as indicated by the level of idle
On the other hand, flexibility is a potential change, reflecting a future cost that has
not occurred yet. But the provision of flexibility, that is, the capability to be
flexible, may incur a present cost. For example, importing electricity when demand
rises implies a future cost, but the option to import in the future implies a present
cost.
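The import example can be made concrete with a hedged numerical sketch. All figures and the demand probability are invented; the point is only the present-cost versus contingent-future-cost distinction drawn above.

```python
# Invented figures illustrating the present-cost / future-cost distinction
# between robustness and flexibility.

p_demand_rises = 0.3           # assumed probability that demand rises

# Robust response: carry a reserve margin now; its carrying cost is borne
# whether or not the extra demand ever materialises (a certain present cost).
robust_cost = 50.0

# Flexible response: pay a small fee now for the option to import, and
# incur the import cost only if demand actually rises (a future cost).
option_fee = 10.0
import_cost = 80.0
expected_flexible_cost = option_fee + p_demand_rises * import_cost

print(robust_cost, expected_flexible_cost)   # 50.0 34.0
```

Under these assumed numbers the flexible response is cheaper in expectation, but the comparison reverses if the option fee, the import cost, or the probability of needing to exercise is high enough.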
The use of fixed and variable costs to describe flexibility was first made by Stigler
(1939) who proposed that flexibility increases when resources are transferred
from the fixed to the variable. A firm is more flexible if it is able to incur
OVER- AND UNDER-CAPACITY
cost for under-utilisation (Son and Park, 1987). Reserve margins are typically built
Robustness and flexibility translate into over- and under-capacity to deal with
surprises. Flexibility, on the other hand, refers to adaptation, i.e. we expect there
Gerwin (1993) offers several delivery methods to cope with uncertainty, depending on the
nature of uncertainty. Although he does not call these two methods by the names
of robustness and flexibility, the parallelism is evident from his table on page 406,
as reproduced below.
Table 6.2 Gerwin's (1993) Methods of Coping With Uncertainty

Uncertainty: market acceptance of kinds of products
  robust response: long-term contracts with customers
  flexible response: small setup times, modular products
Uncertainty: length of product life cycles
  robust response: life extension practices
  flexible response: less hard tooling and backward integration
Uncertainty: plant economics (cost of production)
  robust response: contracts to stabilise pool prices
  flexible response: avoid plant with high capital cost
Uncertainty: technology (lead time)
  robust response: plants at different stages of construction
  flexible response: import or export power when needed
FEATURE OF MODELLING APPROACH
setting reserve margins based on past volatility of demand, and optimising against
this to produce a capacity expansion plan. This method of dealing with uncertainty
towards robustness, i.e. ensuring that the resulting decision or plan is acceptable
guarantee that the result will be optimal in the actual future, only that it is optimal
Hobbs and Maheshwari (1990) suggest the use of Monte Carlo simulation (or risk
analysis) and scenario analysis to check how well a plan performs under a particular future.
These three types of analyses are different ways to show how applicable the object
this sense (of Knight, 1921), only the area of uncertainty is identifiable, as the
respect to uncertainty and contingency (Dixit and Pindyck 1994, Kogut and
Kulatilaka 1994, Smith and Nau 1990, Triantis and Hodder 1990, Trigeorgis and
Mason 1987, and others). Flexibility, in this sense, is a feature of the modelling
The expected utility model follows the principle of optimisation, that is, maximising
dynamic situations, Heimann and Lusk (1976) argue that flexibility offers a better
expected value is a more feasible and sought after goal. When the decision maker
does not have full confidence in the model, Mandelbaum (1978) argues that
leaves as many options open in the future as possible, rather than immediately
seeking the course of action that will lead to the highest payoff.
The use of risk as a surrogate for uncertainty is seen in the utility theory of risk
defines flexibility as the management and engineering margins implemented in a
flexibility maximises the expected risk utility function of the decision maker. He
shares similar traits with energy systems, e.g. capital intensiveness, long planning
as opposed to risk profiles. The resulting system is robust because of the margins
understanding is given by Knight (1921) and discussed in Chapter 2.] Merrill and
Wood (1991) propose two measures for this type of risk: the likelihood of
making a regrettable decision (which they call robustness) and the amount by
which the decision is regrettable. Concurrently, Merrill and Wood also advocate
trade-off with 100% probability or for all possible values of uncertainties. A robust
plan is one which could be selected from every future, no matter how the
uncertainties turn out. The above analysis shows that robustness, risk, and regret
One way to reduce risk is to select a course of action that minimises future regret.
Minimax regret, otherwise known as the Savage criterion, follows the decision rule
of selecting the path of least regret. Regret is defined as the opportunity cost of an
action, i.e. the assessment of a lost or foregone opportunity made in hindsight. For
some individuals, regret or remorse ranks high in terms of emotional distress. The
robustness definition of Gerking (1987): a robust decision is one for which
decision maker is unsure about his preferences and also not confident of his model,
would seek flexibility, for instance, by delaying the decision until he gets more
commitment, he increases his flexibility, i.e. the ability to select another course of
action.
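The minimax regret rule described above can be sketched as follows. The actions and payoff figures are invented for illustration; the decision rule itself is the Savage criterion from the text.

```python
# Minimax regret (Savage criterion) on an invented payoff table.
# regret[a][f] = (best payoff achievable in future f) - (payoff of a in f);
# choose the action whose worst-case regret is smallest.

payoffs = {                     # payoffs[action] = payoff in each of three futures
    "build large plant": [90, 40, 10],
    "build small plant": [60, 55, 45],
    "wait and see":      [30, 35, 50],
}

n = 3
best = [max(payoffs[a][f] for a in payoffs) for f in range(n)]
regret = {a: [best[f] - payoffs[a][f] for f in range(n)] for a in payoffs}
worst_regret = {a: max(r) for a, r in regret.items()}

choice = min(worst_regret, key=worst_regret.get)
print(choice, worst_regret)
```

With these figures the worst-case regrets are 40, 30, and 60 respectively, so the middle course ("build small plant") is chosen: it is never far from the best action in hindsight, which is exactly the appeal of the criterion.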
element which prevents one from undoing the situation or choosing a different
gives up the opportunity to wait for new information that might affect his decision,
certainty the decision maker has in his preferences, his perception of the future, and
other specifications of the model. However, no one has subsequently made the link
an existing course of action, i.e. the degree of reversibility or opting out. Similarly,
the amount of desired flexibility varies directly with the amount of uncertainty
perceived by the decision maker, such as his confidence in the model, sureness of
Kreps (1979) asserts that the decision maker's preference for flexibility is increased
for model unease or lack of model confidence. Someone who is unsure is less
likely to commit himself than one who is sure. Jones and Ostroy (1984) observe
that the variability in a decision maker's beliefs is directly related to the amount of
flexibility he desires. A decision maker who is unsure is more likely to retain some
flexibility than one who is sure about his preferences. Similarly, a person with
minimal commitment, i.e. few and breakable obligations, has more flexibility than
one with more responsibility. The following relationship can be deduced from the
above analysis: lack of confidence reduces the desire for commitment and
flexibility refers to the ability to change. These are incorporated in the finance
definition of options, i.e. the right (ability) but not the obligation (need) to
transact (change).
In finance, an option is defined as the right to purchase or sell the underlying asset
at a specific price with the right lasting for a specific time period. The right can
i.e. the ability to respond to change by changing. Hirshleifer and Riley (1992)
maintain that remaining flexible is like buying an option as to what later action will
be taken: the more flexible position chosen, the greater is the value of the
option. This relationship suggests that techniques based on option pricing theory,
In the options framework, it can be shown that the value of managerial flexibility is
Volatility and risk are surrogate measures of uncertainty, and Tomkins (1991)
states that the greater the risk (or volatility), the higher the option value. The more
volatile the underlying market, the riskier it is. The riskier the market, the greater
the value of the option. Just as volatility determines the option value, we can
The traditional net present value (NPV) rule of investing in a project when the
present value of its expected cashflows is at least as large as its cost ignores the
opportunity cost of making a commitment now and giving up the option of waiting
for new information. NPVs do not take into account the considerable uncertainty,
decisions. Instead of using the NPV rule, Dixit and Pindyck (1994) advocate the
development is the existence of such options, e.g. legal contracts allowing the
arrangements, although most new ventures are still completely hedged in back-to-
back supply and fuel contracts. These completely hedged contracts reflect the
robust rather than the flexible response. The Regional Electricity Companies
Financial options are financial instruments that can be used for hedging purposes.
Marschak and Nelson (1962) point out the difference between hedging and
flexibility. One hedges because of the uncertainty and desire to avoid high
variance of returns. On the other hand, one takes flexible initial actions when
expecting to learn more about the world and to take advantage of that learning
before making subsequent moves. In short, flexibility can increase the variance
of expected payoffs whereas hedging cannot. Those who hedge are risk averse,
but those who take flexible actions want high payoff. Evans (1982) also notes the
difference between hedges and flexible responses. Hedges and compromises are
options that make the worst outcome a little better at the expense of making the
best outcome a little worse. Hedges involve a negative approach, while flexible
The relationship between uncertainty and flexibility was first formally noted by
Stigler (1939). Later, Marschak and Nelson (1962, page 52) state that the value of
flexibility is a function of variation in price and how well that variation can be
predicted before the decision is made, i.e. the specific uncertainty to which
flexibility deals with and the quality of information regarding this uncertainty.
Based on this relationship, they prove that the greater the uncertainty, the greater
flexibility of the technology makes it more attractive to operate in a more diverse
Fine and Freund (1990) suggest the use of flexible capacity to hedge against
introduce new product models, reduce the need for interperiod inventories, and
Related to uncertainty and flexibility are other concepts such as liquidity, learning,
Uncertainties of the capital market, Hart (1937) argues, require the maintenance of
next period and the more the investor expects to learn about them between today
and tomorrow, the more he should be willing to pay for flexibility (liquidity).
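The link between uncertainty, learning, and the value of flexibility can be illustrated with a hedged two-period sketch. The two actions, two states, and all payoffs are invented; the construction simply compares committing now (best expected payoff) with waiting to learn the state (expected payoff of the ex-post best action).

```python
# Invented two-period example: the value of flexibility (waiting to learn
# which state obtains before acting) grows with the spread of outcomes.

def value_of_flexibility(spread):
    # Two equally likely states; payoffs of actions A and B in each state.
    states = [(100 + spread, 100 - spread),   # state 1: (A, B)
              (100 - spread, 100 + spread)]   # state 2: (A, B)
    p = 0.5
    # Commit now: pick the action with the highest expected payoff.
    commit_now = max(sum(p * s[i] for s in states) for i in range(2))
    # Wait and learn: observe the state, then pick the best action in it.
    wait_and_learn = sum(p * max(s) for s in states)
    return wait_and_learn - commit_now

for spread in (0, 10, 30):
    print(spread, value_of_flexibility(spread))
```

In this symmetric example the value of waiting equals the spread itself: with no uncertainty flexibility is worthless, and its value rises one-for-one as outcomes become more dispersed, consistent with the relationship noted in the text.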
Carlsson (1989) assigns Klein's (1984) type I and type II flexibility to risk and
such can be built into production processes. Type II flexibility is built into
and their interactions in the long term. Firms must be alert to new opportunities
for new products and processes, i.e. they must be able to rapidly respond to
technology. These two types of flexibility are not equivalent to the passive and
consideration of flexibility: the environment, the model, and the decision maker
accuracy, or adequacy. Uncertainties in the user refer to his unease with the model,
unsureness about his own preferences, and his lack of (detailed) information.
synthesis as a means to this end, but our subsequent investigation revealed the
possible, then model synthesis and other types of rigorous modelling should
eventually produce a complete model. Until then, we need to look for other ways
the modelling approach may not be appropriate. The close relationship between
uncertainty and flexibility suggests that flexibility may be a way to compensate for
not captured in the model, and to deal with uncertainties independent of the
modelling approach.
We also argue that even if a model is complete, there may still exist a gap between
the model and the user. In other words, the user may not have full confidence in
the model and, according to our conceptual framework, may seek flexibility
the user regarding the model, and he argues that it motivates the consideration of
flexibility, but he does not elaborate on how flexibility can be used to compensate
perceived by the user, who believes that completeness can be achieved. The user
the user has in the model, regardless of his belief in model completeness. If the
model is incomplete, the user may turn to flexibility instead. However, even if the
model is complete, the user may wish to retain flexibility as he may not wish to
rely entirely on the model. In the latter case, robustness is no longer sufficient.
Because flexibility is not a free good (Stigler, 1939), there is no point being flexible
or having flexibility if it is not needed or desired. [We use the word useful as
question.] This implies that uncertainty must be important or costly enough for the
Flexibility can only be considered if there exists at least one alternative other
than the status quo, i.e. staying in the present state or continuing with the present
In table 6.4, we translate Mandelbaum's conditions under which flexibility is not
useful to their converse, i.e. conditions under which flexibility is useful, thereby
Flexibility is not useful: a well modelled and solved problem.
Converse: presence of new types of uncertainties and inadequacies of existing
techniques and modelling approaches.

Flexibility is not useful: the decision maker has enough faith in the model to
implement results with little or no allowance for unexpected changes.
Converse: allowance for unexpected changes is made in the form of reserve margins
or plants being built; even so this is very costly.

Flexibility is not useful: delays are not possible or have a detrimental or
negative impact.
Converse: regulatory delays are possible and sometimes inevitable.

Flexibility is not useful: complex situation with multiple interested parties,
where changes are not desirable because lengthy debates are necessary.
Converse: while limiting changes may reduce conflict of interest, it also reduces
responsiveness.
giving the illusion that it is always desirable. Little has been said about its
downside. In this section, we identify the negative aspects of flexibility and those
The two-way relationship between flexibility and uncertainty suggests that there
may exist an optimal level of flexibility, beyond which more flexibility is not useful.
It also suggests that too much flexibility may be harmful. Too much flexibility, e.g.
too many options, may complicate the analysis and confuse the decision maker.
The value of flexibility may follow the rule of diminishing marginal returns, i.e. the
monitoring, waiting, acquiring more information, and the regret of expired options.
There are also cost penalties to small unit sizes and shorter construction periods in
Gerwin (1993) notes that increasing (product) variety leads to complexity and
make existing flexible technology obsolete. Having flexibility makes one less
careful to get it right the first time and thus may be more costly in the long run.
The close relationship between flexibility and uncertainty merely establishes that 1)
fact, thinking about flexibility, i.e. brainstorming and permuting the number of
possible choices, may create more uncertainty for the decision maker.
Even though uncertainty is not necessarily undesirable, some decision makers seek
altogether. The cautious decision maker may prefer fewer decision choices to
individual may want less freedom of choice to avoid having to make decisions, i.e.
possibilities exist. Hence, even under conditions of uncertainty, not everyone
desires flexibility.
ability to perform well both in the old state before a change and in the new state
after the change;
ability to switch from the first period position to a second period position at low
cost;
set of remaining programmes after the initial choice has been made; and
system's ability to perform different jobs that may occur or to perform one job
under different environmental conditions.
decision perspective. These two ways of viewing flexibility are equivalent, i.e.
inferring one another. Collingridge (1979) shows that a system which is easy to
control can be seen as a sequential decision of high flexibility. The ease in control
is a function of the number of options open to the decision maker. To keep one's
options open is to invest in an easily controlled system. [In Chapter 7, we see that
the systems and decision views of flexibility with respect to entropy are not
equivalent.]
context-free. Types or kinds of flexibility relate to the conditions under which it is
required, i.e. the type of uncertainty addressed by the flexibility. [This is clarified and applied in Chapter 7.] For example, volume
flexibility addresses demand uncertainty. Types of flexibility have been defined and
used in manufacturing where it has received the most attention. It is not necessary
to have different types of flexibility to use the concept. However, any discussion of
flexibility should include the essential definitional elements, which many authors
components) that are not independent of each other and to-date only apply to an
organisational context.
Slack (1988) gives range and response dimensions for each type of flexibility he
defines. Range refers to the ability to adopt different states, while response refers
to the ability to move between states. In an earlier paper, Slack (1983) gives the
dimensions of range and ease, where ease is the cost and time to make the change.
Cost and time are frictional elements to do with the difficulty of changing.
time, and discretion. We interpret these as follows. Discretion refers to the ability
and potential (and willingness) to change or fill a gap. Range refers to the number
and diversity of choices available. Time refers to responsiveness, lead time, and
time to change.
In addition to range and time, Schneeweiss and Kühn (1990) add five more
(uncertainty), evaluation of elasticity, and the possibility to plan for it. They assert
The above elements of flexibility are closely related to Kogut and Kulatilaka's
(1994) three conditions under which options are valuable: uncertainty, time
dependence, and discretion (the ability to exercise and change). Decisions depend
on time, and the value of flexibility comes from investing in the capability to
1) Flexibility conveys a change, usually in the future tense, i.e. a potential. This is
implied by the transition between two states, choosing between alternatives, barriers
to change, and switching cost.
2) Flexibility denotes more than one way of responding to change, hence the notion of
range. Range includes the size of choice set, number of alternatives, the extent to
which demand can be met, and levels of change.
3) Flexibility is different from gradual change. The time element is very important
here, as typically we speak of a rapid response. Time includes responsiveness,
lead time, and time to change.
4) The fourth element relates to the conditions posed in the previous section, i.e.
existence of uncertainty and alternatives or strategies for the consideration of
flexibility.
The concept of favourability reflects the value or benefits of change. These are the
positive values associated with acquiring and realising the flexibility. Favourability
position to take advantage of the new situation and get a better outcome. We
move to a new position to avoid or minimise a bad outcome, such as the loss of
revenue or incurring higher costs or not being able to get out of the situation. If
there are several states we can move to, we will move to the one that gives the
most benefit. Similarly, the choice we select in the second stage will be the one
that flexibility and favourability are two separate decision criteria, requiring a
trade-off between the number of choices and expected value. They suggest either
function over the two attributes and then optimise on it. In the latter case, the kind
Heimann and Lusk (1976) give a treatment of satisficing on value but maximising
on flexibility.
aspects, e.g. the ability to change, number of choices, and responsiveness, which
making.
We propose a distinction between two types of operationalisations, namely 1)
more options in the future; and 2) strategies, which preserve, introduce, or increase
flexibility as courses of action. Short lead time, modular, and small unit
and limited commitment. Technical means of achieving system flexibility are listed
flexibility include selecting a portfolio that contains flexible elements (Hirst, 1990),
These sources of flexibility have also been suggested and confirmed by others such
2) Partitioning the action space, resources, or opportunities as time proceeds not only
enlarges the choice set but also allows more elements (members of the choice set) to
move freely. By dividing what seems like one capacity size decision into several
decision variables, we have more control over each unit. Partitioning also gives the
ability to decide sequentially. In the power industry, modular and small unit sized
combined cycle gas turbine plants exemplify this kind of flexibility as they can be
built incrementally. Gustavsson (1984) supports the use of standardised modular
components to increase flexibility in the design of products and systems. Standardisation
improves economy while modularity increases flexibility by the allowance of
different combinations of sizes and types of technology as well as incremental
additions.
3) Postponement of action gives time and opportunity to obtain more information, for
uncertainties to be resolved, and new options to open up and be developed
simultaneously, such as the use of temporary arrangements. This delay is not
usually free, neither is the additional information free, hence the value of this
information must be worth the delay. Paying a premium for the option to delay,
building reserves as in uncommitted funds (thereby enlarging the choice set), etc,
are all examples of postponement.
4) Searching for additional actions is a way to enlarge the choice set. One definition
of flexibility is the number and variety of choices available. Option-generating
techniques as described in Keller and Ho (1988) assist in the search for more
solutions to a problem. This is based on the rationale that the more choices
available, the more and different types of futures (uncertainties) can be met.
5) Reducing the resistance to change makes it easier and cheaper to change. This is
accomplished by removing or relaxing constraints as well as lowering the cost of
change. Together with the fourth source, this strategy enables decisions to be made
more frequently while increasing the number and quality of options available at
each point. The removal of technological barriers results in the development and
availability of new plants. Removing constraints enlarges the choice set. This is
equivalent to reducing lead times, cost of changing, and other barriers to change,
i.e. disablers.
having a balance of technologies. Tolerance is a way of increasing state flexibility,
i.e. catering to many. Generating companies typically have a good mix of
technologies by type and capacity size and timing (commission and retirement
dates).
These six sources of flexibility are closely linked to the five criteria proposed by
Collingridge and James (1991) for increasing flexibility in policy making. These
uncertainty.
1) Reducing the relative impact of external changes makes oneself less vulnerable.
For example, multi-product firms with highly diversified portfolios exhibit high
external flexibility or robustness.
3) Maximum diversity decreases the dependence on any fuel, hence lowering risk.
In the electricity context, Yamazee and Hashimmashhadi (1984) suggest four ways
1) Shortening the lead time to acquire a resource or plant reduces the uncertainties
surrounding future conditions.
2) Lowering capital costs limits fixed financial commitment. This re-iterates Stigler's
(1939) view of flexibility as the transfer of costs and resources from the fixed to the
variable.
6.14 Conclusions
development.
conceptual framework.
a) We identified two types of flexibility (passive and active) and clarified the
distinction between flexibility and robustness.
lack of confidence leads to less commitment and more flexibility.
considerations.
the downside of flexibility, which has not received enough attention in the
literature.
elements in its definition, rather than giving a single formal definition. These five
This conceptual development has unified and clarified the definitions and
applications of flexibility from the cross disciplinary review. To make use of this
perhaps even, measure flexibility. Measures may be needed to defend plans, make
the next chapter, we give a rigorous assessment of three groups of measures which
emerge as most popular and most promising from our cross disciplinary review.
CHAPTER 7
Measuring Flexibility
7.1 Introduction
The previous chapter indicated a need to measure flexibility. To make use of the
flexibility, we need to assess its costs and benefits. Measuring flexibility facilitates
One of the greatest challenges put forth by many researchers is that of unifying
have shown the difficulty of reconciling its many interpretations and applications.
Its multi-dimensional aspects make the quest for a single best measure impractical
recognised and necessary objective, new measures are continually being suggested.
There, Sethi and Sethi (1990) and Gupta and Goyal (1989) have provided the most
In the manufacturing sector, Gerwin (1993) believes that the research on flexibility
needs to have a more applied focus to complement the theoretical work, and that
the main barrier to advances on both theoretical and applied fronts stems from the
lack of measures for it and its economic value. On the contrary, our review has
disciplines, they individually do not reflect the rich cross-disciplinary interpretation
of flexibility.
Among others, Bernardo and Mohamed (1992) argue for an explicit consideration
2) Multi-dimensionality compounds the effort that goes into creating scales for
assessment. [As concluded from our cross disciplinary review and conceptual
development, the concept of flexibility is multi-faceted, containing both the
context-dependent types and context-free elements of flexibility.]
3) Because flexibility can be studied at different system levels, such measures require
collections of disparate data sets.
4) Operationalisations that span industries are more useful albeit more difficult to
create for research than those based on a single industry. [We have studied the uses
and definitions of flexibility from a broad basis, i.e. a cross disciplinary review that
spanned industries.]
suggested measuring this uncertainty as a substitute for flexibility. In this thesis,
developed from the cross disciplinary review. As such, there is a need to represent
4) A measure of flexibility should distinguish between the flexible and the inflexible,
and ideally distinguish between different degrees of flexibility.
5) Finally, such a measure should facilitate a trade-off between the notions of size of
choice set and value inherent in flexibility, that is, where flexibility conflicts with
favourability.
indicators, expected value, and entropy. Most measures belong to the first group.
The second group of measures are based on the concept of expected value in
section 7.3. 1) The relative flexibility benefit (Hobbs et al, 1994) puts a positive
value on the ability to take advantage of favourable uncertain conditions. 2) The
normalised flexibility measure (Schneeweiss and Kühn, 1990) deals with the
states of the uncertain condition matched to the states or choices of the second
stage decision, thereby conveying the notion of slack. 3) The expected value of
information. The first expected value measure deals with the number of
uncertainties, the second with the states of uncertainties, and the third with the
The third group of measures are based on the scientific concept of entropy (section
7.4), which has not received enough attention with respect to flexibility. Two
Section 7.5 compares expected values with entropy. The final section 7.6
summarises the critique and comparison of these three groups of measures and concludes.
We give a new terminology for partial measures of flexibility. These measures are
translated and organised into indicators which reflect those necessary definitional
elements of flexibility. The term indicator is used to indicate rather than measure flexibility.
Therefore any measure of flexibility should reflect the potential but not necessarily
the realisation.
In their analysis of flexibility and uncertainty, Jones and Ostroy (1984) observe that
Hobbs et al (1994), Hirst (1989), and Schneeweiss and Kühn (1990) as a sequence
of decisions in a minimum of two stages where the first stage is the initial position,
providing the flexibility which can be realised in the second stage. It has also been
interpreted as a state transition (Kumar 1987 and others), where the initial position
can move to another state. Either way, flexibility is associated with the initial state
but measured by the number of states it can move to or the number of choices
available. A default case serves for comparison purposes, i.e. the inflexible path
with no subsequent options.
The choices in the first stage decision lead to different levels of flexibility. An
initial position has flexibility if there is at least one other state it can move to.
Similarly, a first stage decision should have in the following second stage decision
at least two choices, with the minimum being change or do not change. The
proceeds from the first stage to the second stage do not change is similar to
want flexibility for no reason at all, for it is not free. This reflects our
Change inducers, i.e. relevant uncertainties, are called trigger events, and they
capacity. The change is the decision maker's response to the trigger event(s), and
there may be several events that trigger a second stage decision.
A trigger event represents an uncertainty that has two or more possible states. If a
state of the trigger event matches a subsequent decision choice, then it is called a
trigger state for that choice. For example, purchasing
additional capacity is associated with high demand while selling extra capacity is
associated with low demand, with demand being the trigger event. High and low
demand are trigger states for the purchasing and selling options, respectively.
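The matching of trigger states to second stage choices can be sketched as a simple lookup. The event and option names below are illustrative only, following the capacity purchase and sale example just given.

```python
# Trigger states mapped to the second stage choices they favour.
# Names are illustrative, following the capacity purchase/sale example.
trigger_states = {
    "high demand": "purchase additional capacity",
    "low demand":  "sell extra capacity",
}

def respond(observed_state):
    """Second stage decision: act on a trigger state, else do nothing."""
    return trigger_states.get(observed_state, "no change")

print(respond("high demand"))    # purchase additional capacity
print(respond("average demand"))  # no change
```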
Mandelbaum and Buzacott (1990) define flexibility as the number of options open
(1980) support this definition of flexibility, i.e. the size of a choice set associated
with alternative courses of action. The size of choice set can be directly deduced
from that meaning of flexibility which relates to having many choices and multi-
who admit that it is insufficient by itself and subject to the partitioning fallacy. In
fact, Evans (1982) accuses Marschak and Nelson of confusing the property of
flexibility with its measure, implying that the size of choice set is only one aspect of
choices is necessary to avoid triviality, e.g. choices that are feasible but unlikely to
be chosen.
According to Upton (1994, p. 77), flexibility is the ability to change or adapt with
little penalty in time, cost, effort, or performance. Reflecting the difficulties in
changing and the barriers to change, these penalties are what Slack (1983) calls
frictional elements. Reducing the lead time or response time makes it faster to
defines switching cost as the average sum of gains and losses in the transition,
which is only incurred if the change occurs. Removing barriers makes it easier to change.
There is a difference between the cost of providing the flexibility and the cost of
changing. The enabler reflects the cost of providing flexibility, i.e. it guarantees
future flexibility and reflects the premium on flexibility. The flexibility associated
with any investment or initial position is largely valued by the initial sunk cost.
With respect to costs, the enabler and disabler are similar to fixed and variable costs. The
enabler is the premium or cost associated with the first decision which guarantees
flexibility later on. The disablers indicate the availability of an option by minimal
cost, minimal time, and other reduction of barriers. The disabler includes the
switching cost when flexibility is realised. The enabler is like Stigler's (1939) fixed
cost, while disablers are variable costs which may or may not occur. These costs
are similar to the loss or benefit if the change occurs (Buzacott 1982, Gupta and
Goyal 1989) and Mitnick's (1992) marginal or incremental cost of the additional
We translate the favourability inherent in flexibility into positive values which are
desirable. We call these elements motivators. These are the benefits or payoffs
Mandelbaum (1978) suggests that flexibility can be measured by the effectiveness
and likelihood of change. But he does not decompose these elements further nor
apply them. From our perspective, effectiveness refers to the number of favourable
choices, while likelihood refers to the probability of the occurrence of the trigger state as well as the
reflect that element of potential. Eppink (1978) associates the likelihood of state
disablers (the more difficult the change, the less likely) and the motivators (the
position, but measured by the number of favourable choices that are available
costs and other frictional elements called disablers. The type of flexibility depends
on the type of uncertainty which triggers the subsequent choices. The likelihood
motivators of the choices. Flexibility is relative to other choices in the first stage.
Flexibility increases with the number of choices in the second stage, likelihood of
favourable choices, and ease of change. Table 7.1 translates essential elements of
flexibility into measurable indicators. Although not all are necessary, each
Table 7.1 Elements and Indicators of Flexibility
system performs under a single set of expected future conditions against how well
section 7.3.1.2, we extend this example to test their claim that the measure can be
7.3.1.1 Single Investment: Flexible vs Inflexible
A utility firm considers installing co-firing capability to add flexibility to cope
with future environmental restrictions. The default option
is to burn coal. If co-firing is installed, it can burn either gas or coal. The
flexibility of co-firing is merely the additional option of burning gas. Using our
terminology of section 7.2, the trigger event for the flexibility of co-firing is the
uncertainty of natural gas prices. The future price of natural gas is equally likely
to be one of the following: $2.40, $3.20, and $4.00 per million British Thermal
Units. If the capability to co-fire is not installed, the annual generation cost would
be $100M, for the default option of burning coal. With co-firing capability at an
annual investment cost of $0.5M, the firm can choose to co-fire and pay an annual
generation cost that varies with the gas price ($96.7M, $100.7M, or $104.7M). These
annual generation costs are added to the annual investment cost to get $97.2M,
$101.2M, and $105.2M respectively in table 7.2 and in brackets in figure 7.1. If
co-firing capability is installed but not used, i.e. natural gas prices make it too expensive, the
utility firm would incur the annuitised investment cost ($0.5M) plus the default
generation cost of $100M, totalling $100.5M. The firm gains flexibility from co-
firing because it can burn gas when natural gas price is low and burn coal
otherwise, i.e. two choices instead of one. This example treats burning gas and
burning coal as the two second stage choices.
Table 7.2

Trigger Event:      Probability   Second Stage Decision with Co-firing        No Co-firing
Natural Gas Price                 Burn Gas        Burn Coal                   Burn Coal
                                  (includes annual $0.5M investment)          (no investment)
At $2.4/mmBTU       1/3           $97.2M          $100.5M                     $100M
At $3.2/mmBTU       1/3           $101.2M         $100.5M                     $100M
At $4.0/mmBTU       1/3           $105.2M         $100.5M                     $100M
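The roll-back of this decision can be reproduced with a short calculation. This is a minimal sketch using only the figures in table 7.2 (all in $M per year); with co-firing installed, the firm simply picks the cheaper of the two fuels in each price state.

```python
# Expected annual cost of the co-firing decision in table 7.2 (all in $M).
# "Burn gas" costs already include the $0.5M annual investment.
prices = [2.4, 3.2, 4.0]                     # gas price states ($/mmBTU)
probs  = [1/3, 1/3, 1/3]                     # equally likely trigger states
burn_gas = {2.4: 97.2, 3.2: 101.2, 4.0: 105.2}
burn_coal_with_cofire = 100.5                # $100M default + $0.5M investment
burn_coal_no_cofire   = 100.0

# With co-firing installed, take the cheaper option in each state.
ev_cofire = sum(p * min(burn_gas[s], burn_coal_with_cofire)
                for s, p in zip(prices, probs))
ev_no_cofire = burn_coal_no_cofire

print(round(ev_cofire, 1))   # 99.4, as in the text
print(ev_no_cofire)          # 100.0
```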
Figure 7.1 depicts this example in a decision tree. The expected value associated
with the portion of the tree to the right of a node, i.e. all branches emanating from
the node, is enclosed in brackets above the branches. The direct payoff
assignments (annual costs) are given below the associated branches. The expected
values associated with end nodes (tip of the right most branches) are totals of the
individual payoffs along the path. For example, the total cost of Burn Gas given
Gas Prices are $2.4/mmBTU is [97.2] = 96.7 (the generation cost below the branch)
+ 0.5 (the annual cost of installing the capability to co-fire). All numbers in
brackets are calculated by rolling back expected values.
To read the decision tree, we start from the left. Invest $0.5M a year to install the
ability to co-fire. If the natural gas price is $2.40 (and this occurs with 1/3
probability), the best option is to burn gas and pay a further $96.7M (instead of
$100M). If the natural gas price is $3.20 or $4.00, it is cheapest not to burn gas
but burn coal instead. If co-firing is installed, one-third of the time it would cost an
extra $96.7M to burn gas compared to an extra $100M to burn coal. This gives a
savings of $3.3M. [$3.3M = $100M (not installed ) - $96.7M (install and run
favourably)]. Equivalently, the total annual cost of $100.5M - $97.2M = $3.3M.
Hobbs' relative flexibility benefit is the Value of Co-firing Under Expected
Conditions minus the Expected Value of Co-firing (given the Uncertain Gas
Price). The first term (value of co-firing under expected conditions) is taken from
the top portion of the same decision tree with new probability assignments (figure
7.2) which treats the uncertain event as certain on the average state and
improbable under other states. The second term (expected value (EV) of co-
firing given uncertain gas price) is taken from the top portion of the decision tree in
figure 7.1. Under expected conditions, the price of gas is 1/3 * ($2.4) + 1/3 *
($3.2) + 1/3 * ($4.0) = $3.20, which is assigned 100% probability of
occurrence. All other prices are set to 0% probability for the purposes of expected
[This works because the original probabilities were equal. If the original probabilities were not equal,
we may require a new state to represent the average.] In this example, the values
V and expected values E refer to costs and expected costs, respectively, in which
case, the best payoff comes from not burning gas. The benefit or cost-savings of
Figure 7.2 Expected Conditions
There is a difference between the value of installing the capability to co-fire and
the value of co-firing. The value of installing the capability to co-
fire is the difference between the expected value of the top branch (install) in figure
7.1 and the expected value of the bottom branch (no co-firing) less the investment
cost. In cost terms, -$99.4M - (-$100M) - $0.5M = cost savings of $0.1M. The
value of co-firing depends on the price of natural gas. In other words, the value is
highest when gas is cheapest, and lowest when gas is most expensive.
Hobbs' relative benefit of flexibility is the difference between the expected value
from considering the uncertainty of natural gas prices and that from treating natural
gas prices as one average value. It is the difference between considering several
futures (three states of the world) and one representative future (one average state
of the world.) The bottom portion of the decision trees (No_co-firing) serves as
the default case for comparison purposes, as flexibility is a relative value. The
value of flexibility is relative to the zero value of the default inflexible case. It
shows that the default case is not affected by the future price of natural gas.
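Using the figures of table 7.2, the relative flexibility benefit of the co-firing investment X can be sketched as follows; the value under expected conditions fixes the gas price at its $3.20/mmBTU average, as described above.

```python
# Hobbs et al's relative flexibility benefit for the co-firing investment X,
# F(X) = V(X | expected conditions) - E(X), costs in $M per year.
probs = [1/3, 1/3, 1/3]
burn_gas = [97.2, 101.2, 105.2]   # cost of burning gas at $2.4, $3.2, $4.0
stay     = 100.5                  # burn coal with co-firing installed

# E(X): expected cost with the uncertainty modelled explicitly.
e_x = sum(p * min(g, stay) for p, g in zip(probs, burn_gas))

# V(X|E): gas price fixed at its $3.20/mmBTU mean (the middle state),
# where the best choice is to burn coal at 100.5.
v_x_given_e = min(burn_gas[1], stay)

f_x = v_x_given_e - e_x
print(round(f_x, 1))   # 1.1, matching F(X) in the text
```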
The above analysis agrees with our understanding that flexibility is a potential and a
probabilistic weighting of the optimal realisation, i.e. burn gas when natural gas
price conditions are favourable. This flexibility is only favourable when the natural
gas price is low.
Hobbs et al claim that this relative flexibility benefit can be used to compare two
investments, X and Y. The relative flexibility benefit of X compared to Y is simply
the difference between them, i.e. F(X) - F(Y). Consider an alternative investment Y, which
gives cost-savings (compared to the No_co-firing case) for two out of three natural
gas price levels. Y is more flexible than X as it offers one more favourable option
than X does. As a result it costs more to invest in Y's capability, $0.6M per year
(instead of $0.5M for X). Figure 7.3 shows the relevant decision tree for Y.
Figure 7.3 Investment Y
Although Y gives favourable payoffs ($98.7M and $99.8M) more often than X
does, its expected cost ($99.7M) is greater than X's ($99.4M). The flexibility
benefit F(Y) (= $99.8M - $99.7M = $0.1M) is much lower than F(X), which was
$1.1M. Y is more flexible but less favourable than X. However, on grounds of
flexibility, the firm should choose Y, as it would be 2/3 favourable compared to
1/3 for X. The flexibility benefits F(X) and
costs, and uncertainty is not required for its assessment. In this example, X has
one out of three chances to give a favourable payoff of $97.2M, while Y has two
out of three chances ($98.7M and $99.8M). Although Y's payoffs are not as
favourable as X's, Y seizes more opportunity and offers a larger (favourable)
choice set than X. Therefore Y is more flexible by this definition.
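The comparison of X and Y can be reproduced with the same expected value mechanics. Note that Y's state-by-state payoffs ($98.7M and $99.8M favourable, $100.6M otherwise, with the $0.6M investment included) are reconstructed from the totals quoted above, so the assignment of payoffs to price states is an assumption.

```python
# Comparing investments X and Y on the relative flexibility benefit
# F = V(.|expected conditions) - E(.), costs in $M per year.
# Y's per-state payoffs are inferred from the text's totals
# (expected cost $99.7M; favourable payoffs $98.7M and $99.8M).
probs = [1/3, 1/3, 1/3]

# best payoff in each gas price state with the capability installed
x_payoffs = [97.2, 100.5, 100.5]   # favourable only at $2.4/mmBTU
y_payoffs = [98.7, 99.8, 100.6]    # favourable at $2.4 and $3.2 (assumed order)

e_x = sum(p * c for p, c in zip(probs, x_payoffs))   # expected cost of X
e_y = sum(p * c for p, c in zip(probs, y_payoffs))   # expected cost of Y
f_x = 100.5 - e_x                                    # V(X|E) = 100.5
f_y = 99.8 - e_y                                     # V(Y|E) = 99.8

print(round(e_y, 1), round(f_x, 1), round(f_y, 1))   # 99.7 1.1 0.1
```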
Why have the authors confused flexibility and favourability? Their argument is
sound: if flexibility is desirable, then it should give benefits. The more flexible it is,
the more benefits it should give. Expected value is a good candidate for measuring
these benefits. However, the relative flexibility benefit fails when comparing two
investments that conflict on
flexibility and favourability, i.e. X is more favourable (more cost-savings) but less
flexible (fewer favourable choices) than Y. The relative flexibility benefit measure
multitude of uncertainties and decisions prevail, the relative flexibility benefit could
lead to a poor recommendation. Hobbs et al have assumed that the more flexible it
is, the more favourable it is, i.e. no conflict between the two. In doing so, they
gas prices. This is the trigger event, without which flexibility cannot be valued.
They also fail to mention the implications of the number of trigger states and their
probability distribution, i.e. the gas prices. If $2.40/mmBTU is most likely, then X
gives the best outcome. If $3.20/mmBTU is most likely, then Y gives the best
outcome. If $4.00/mmBTU is most likely, then co-
firing capability is not valuable, as natural gas becomes too expensive for co-firing.
The trigger event (natural gas price uncertainty) is an indicator of flexibility. Ignoring the
final payoff, we can reduce this problem to the following. X gives one out of three
favourable options in the second stage. Y gives two out of three. Thus Y is the
more flexible investment, though this structure is averaged out when comparing X
and Y by expected value. The number of favourable choices is not that which is
offered by the co-firing option, i.e. burn gas or not, but in the context of natural
gas price uncertainty. So X actually offers fewer favourable options than Y does.
A decision tree analysis, without examining the structure of the tree itself but only
looking at expected values, will gloss over this. This of course depends on the
firm's view of natural gas price uncertainty, being only three states and of equal
probability. The normalised flexibility measure of Schneeweiss and Kühn (1990)
is intended for the comparison of more than two options. It is a ratio based on
the ideally flexible V+ and the most inflexible V- options considered in the analysis.
The numerator is the difference between the chosen investment V* and most
inflexible V-. So, N(V*) = (V* - V-) / (V+ - V-). A normalised measure provides an
index between 0 and 1; the difficulty lies in choosing appropriate values for V that
capture both flexibility and favourability for use as a basis of comparison.
The most flexible option is the first stage option which gives the greatest number of
choices in the second stage. Each choice (available in the second stage) meets each
uncertain condition favourably, i.e. gives the best payoff provided that the trigger
or favourable state occurs. The choice that gives the most flexibility is able to meet
(or take advantage of) each uncertain condition exactly. The less flexible options
One choice corresponds to the most inflexible case (C in figure 7.4), that is, where
uncertain conditions cannot be dealt with at all in the default case, i.e. there are no
future options available. Another choice represents the ideally flexible case (A)
which may or may not appear in the problem, as long as its value is known. The
ideally flexible case represents perfect mapping between states of uncertainty and
choices of the decision. The remaining choice (B) has a degree of flexibility
between the two extremes. Any path that gives some flexibility, i.e. more than one
option in the second stage, has a value between the two extremes. Thus the
normalised flexibility measure of the chosen branch (B) from the first decision is
N(B) = (V(B) - V(C)) / (V(A) - V(C)); the most flexible is N(A) = 1 and the most
inflexible is N(C) = 0. This type of flexibility
has to do with matching of trigger states to choices in the second stage decision.
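A minimal sketch of the normalisation N(V*) = (V* - V-) / (V+ - V-); the option values used here are hypothetical, chosen only to show the anchoring at N(C) = 0 and N(A) = 1.

```python
def normalised_flexibility(v_star, v_minus, v_plus):
    """Schneeweiss and Kühn style normalisation: 0 for the most
    inflexible option (C), 1 for the ideally flexible one (A)."""
    return (v_star - v_minus) / (v_plus - v_minus)

# Hypothetical option values: C (inflexible) = 0, A (ideal) = 10, B = 4.
print(normalised_flexibility(4, 0, 10))    # N(B) = 0.4
print(normalised_flexibility(10, 0, 10))   # N(A) = 1.0
print(normalised_flexibility(0, 0, 10))    # N(C) = 0.0
```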
Figure 7.4 General Structure of Normalisation
Using Hobbs' flexibility benefit measure, we would get figure 7.5 for the values
given expected conditions. That is the E(Conditions) = branch Three with 100%
probability, and A3 is the option that gives the most favourable outcome or B2 if B
is invested.
Figure 7.5 Expected Conditions
production level settings to meet different demand levels, with C as the default do
nothing case, in which case Pay C = 0. Pay A and Pay B are enablers. The
machine with the greatest number of levels minimises slack production in meeting demand.
Figure 7.6 Schneeweiss and Kühn
In their example, there is no second stage decision following the inflexible option
of C, implying that the payoffs cannot be adjusted like A and B. The expected
values of A, B, and C options include the amount of slack arising from the
mismatch between settings and demand. This measure has been applied in
the manufacturing sector. For example, Röller and Tombak (1990) use machines
loading order (black-start capability), and unit size as they correspond to demand
levels and load distribution. This measure is highly dependent on the partitioning
and specification of the states of the trigger event and mapping to the second stage
options. Implicitly, the more options available, the better is the match. As
decisions are discrete, in the limit, partitioning approximates to the continuous case,
giving a higher normalised flexibility measure. As the trigger event must be the
same for all options considered, this measure cannot be used to assess different
types of flexibility.
new information can be obtained, i.e. one expects to learn about the future. The
In decision analysis, the value of information is the most a decision maker would
pay for information before making a decision. The expected value of perfect
information (EVPI) is normally calculated by reconstructing the decision tree such
that all uncertainties
occur before the decision nodes, so that appropriate action may be taken to
optimise payoffs. The difference between the flipped tree and the original tree is
the value of delaying decisions. [Note: Delaying decisions is only one strategy for increasing flexibility.]
It is the maximum a decision maker can gain by taking the flexible initiative. He
defines flexibility as the size of choice set associated with a decision. [Note: The size of
choice set is only one element of flexibility.] The value of information, that is,
resolving the uncertainty before making the decision, depends on the amount of
decision flexibility. This can be interpreted as follows: the usefulness of putting
the chance node before the decision node depends on the degree to which the
made. Either way, there is a cost to uncertainty resolution, and this is equivalent to
(EVPIGUF) is the upper limit to the value of information given some level of
computed from flipping a decision tree gives the upper limit to information
gathering. [Note: The upper limit to information gathering is not the upper bound
to maximum flexibility.] His measure is most relevant when a decision maker has
several decision variables to manipulate, i.e. several decisions to make, the order
and the timing of which can be adjusted to allow the resolution of uncertainty
beforehand.
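Flipping the co-firing tree of section 7.3.1.1 illustrates the calculation. This is a sketch under the assumption that, with the gas price revealed first, the firm may also choose not to install at all in the unfavourable states.

```python
# EVPI for the co-firing example: compare the best expected cost of the
# original tree with the tree in which the gas price is revealed first.
probs  = [1/3, 1/3, 1/3]
gas    = [97.2, 101.2, 105.2]   # install and burn gas, by price state
coal_i = 100.5                  # install but burn coal
coal   = 100.0                  # never install

# Original tree: decide first, then the price is revealed.
ev_install = sum(p * min(g, coal_i) for p, g in zip(probs, gas))
ev_best    = min(ev_install, coal)   # about 99.4

# Flipped tree: the price is revealed, then the best action is taken
# (including, by assumption, the option of never installing).
ev_perfect = sum(p * min(g, coal_i, coal) for p, g in zip(probs, gas))

evpi = ev_best - ev_perfect
print(round(evpi, 3))   # about 0.333 ($M per year)
```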
The trigger event acts as perfect information for the subsequent second stage
decision. Hobbs' relative flexibility benefit is merely the difference between the
information is the exact, actual, and zero error value or prediction. Figure 7.7
illustrates the calculation of EVPI, which has similarities with Hobbs' relative
flexibility benefit.
Figure 7.7 EVPI
decisions. Further tests are required to determine its generalisability. In the next
section, we propose improvements.
7.3.4 Towards an Improved EV Measure
We propose and test two ways of improving upon the expected value measures
likelihood of the favourable trigger states. These measures are tested and
Putting all uncertain events before all decisions gives the highest expected value
(EVPI). Putting all uncertain events after all decision sequences gives the expected
most inflexible to most flexible), they are candidates for anchoring Schneeweiss
which decisions and uncertainties are reordered should give an overall expected
value that falls between the EVPI and Deterministic EV. Expected values that fall
outside the range defined by the Deterministic EV and EVPI are not worth
considering as these options are too unfavourable compared to the flexibility they
offer. In the flexibility decision tree configuration, both X and Y provide expected
values between EVPI and the Deterministic EV, thus indicating that X and Y
Figure 7.9 Deterministic EV
turn depends on the likelihood of the trigger states of the trigger event (natural gas
price uncertainty). This suggests that expected values should be weighted by the
Table 7.3 Comparison of Expected Value Measures

  Scenario                        A              B              C              D
  Natural Gas Price:
  P($2.4); P($3.2); P($4.0)  1/3; 1/3; 1/3  1/5; 2/5; 2/5  3/5; 1/5; 1/5  1/5; 1/5; 3/5
  F(Y) = V(Y|E) - E(Y)           0.1         -0.1, 0.7      -0.6, 0.5     -0.26, 0.54
probabilities. For example, $3.20 = 1/3 ($2.4) + 1/3 ($3.2) + 1/3 ($4.0).
V(X|E(gas price)) refers to the value of the best choice for second stage X given
the average gas price. The relative flexibility benefit gives negative values in some
The EVPI is the expected value of the decision tree with the natural gas price
uncertainty resolved before the decisions are made. The Deterministic EV refers to
the expected value of the decision tree under the deterministic configuration in
figure 7.13, i.e. all decision nodes followed by all chance nodes, not Hobbs' value
given expected conditions. The difference between the EVPI and Deterministic
decision of burning gas or burning coal. The normalised measure is the difference
between the expected value of the particular investment and the deterministic EV
conditions and the normalised measure. The weighting indicates how likely or,
more appropriately, how often the normalised measure is expected. This says that
if the three different natural gas prices are equally likely to occur (probability of
1/3), then X and Y are equal (0.2) or incomparable. In this scenario, we cannot
distinguish between the two investments on the basis of this measure. If the
probability assignments are 1/5, 2/5, and 2/5, this means that X will be favourable
1/5 of the time and Y will be favourable 3/5 of the time. The normalised measure
does not indicate this, but the weighted measure does. In this case, the weighted
outcomes. On the other hand, if the probabilities are distributed 3/5, 1/5, and 1/5,
could make. If the probability assignment shifts in the direction of the most
expensive natural gas price, where neither X nor Y could give favourable payoffs,
the normalised measure for Y becomes negative. Thus Y should not be considered
as the choice between $103.6M burn gas and $100.5M burn coal (given co-firing is
In this particular example, the weighted normalised measure outperforms the
relative flexibility benefit and the normalised flexibility measure. It correctly ranks
the same decision tree. The weighted measure is more meaningful than the
benefit measure. A zero or negative measure means that any flexibility provided by
1) It may not be possible to re-order the nodes so that all chance nodes occur before all
decisions to get the EVPI. This is due to the conditionality expressions defined
in the corresponding influence diagram. Thus we do not get the true EVPI as the
upper bound but a lower EV of less than perfect information.
2) Similarly, it may not be possible to put all chance nodes after all decision nodes to
get a Deterministic EV. The inflexible or status quo initial option may not be most
favourable. The deterministic EV does not necessarily refer to the worst expected
value, only that the inflexible case is the best because uncertainty is not considered
and no premium is included in its cost.
3) Weighting by the proportion of favourable states may also invite criticism. The
weighting does not reflect the degree of favourability, but only distinguishes
between the favourable and the not, i.e. the triggered and untriggered states.
The close relationship between uncertainty and flexibility suggests the use of
entropy to measure flexibility. Entropy originates from the Greek word meaning
number and balance of elements in a closed system. Kumar (1987) sees the
decision theoretic perspective, Pye (1978) associates entropy with the uncertainty
energy policy, Stirling (1994) uses entropy as an index for diversity, which is an
This section investigates the properties of entropy (section 7.4.1) that make it
(1987). Pye treats flexibility as the uncertainty of future decision sequences, and
where state transitions are reversible. Pye uses logarithm to the base 2 while
Kumar uses the natural logarithm. Their arguments are theoretically based though
not implemented in practice. What they overlook are those properties (section
flexibility. These criticisms support and expand those earlier attacks by White
more flexibility is required but not vice versa. A measure of uncertainty indicates
the required but not actual amount of flexibility. In this context, entropy has been
suggested by Pye (1978) and Kumar (1987) as a measure of the number of choices
and the freedom of choice. Freedom of choice can be interpreted as the probability
The basic formula for entropy is the negative sum of logarithms of the probabilities
weighted by the probabilities:

H(a) = - Σ (i=1..m) p(ai) LN(p(ai)) = Σ (i=1..m) p(ai) LN(1/p(ai))
where ai are states of the uncertain event following the previous node a, which is
the one stage case H(a) = S(a1, a2, ... , am) and the above formula applies. Two or
more stages can be computed using the decomposition rule explained in 5) below.
The number of states n(a) = m. The probability that the ai-th state occurs is
represented by p(ai). The basic formula has the property that 0 ≤ H(a) ≤ LN(n(a)).
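As a minimal numerical sketch (the probability vectors are arbitrary examples of our own), the basic formula, its bound, and the dispersion behaviour discussed under property 6) below can be checked directly:

```python
import math

def H(probs):
    """Basic entropy: H(a) = -sum p(ai) * LN(p(ai)); zero probabilities ignored."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Property: 0 <= H(a) <= LN(n(a)), with the maximum at equal probabilities.
assert 0 <= H([0.5, 0.25, 0.25]) <= math.log(3)
assert abs(H([1/3, 1/3, 1/3]) - math.log(3)) < 1e-12

# Bringing probabilities closer together increases entropy (property 6):
assert H([1/7, 6/7]) < H([2/7, 5/7]) < H([3/7, 4/7]) < H([1/2, 1/2])
```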
[Figure: generic two-stage tree, with first stage states a1, ..., am, and each state ai branching into second stage states ai1, ..., aiki.]
2) Entropy increases with the number of states. The more choices we have, the more
flexibility. Figure 7.11 shows the logarithmic relationship between the number of
states and the maximum entropy.
Figure 7.11 Maximum Entropy as a Function of States
[Plot: maximum entropy H(a) against the number of states n(a), shown for LOG(n,2), LN(n), and LOG10(n).]
3) Maximum entropy occurs if each state has equal probability. For any a with m
flexibility if each choice has an equal chance of being selected. For n = 2, figure
7.12 shows that entropy is greatest when p = 0.5, where the standard deviation of
the probabilities (p, 1 - p) is 0.
Figure 7.12 Measures of Dispersion [entropy and standard deviation of the probabilities (p, 1 - p) plotted against p]
4) Entropy is a symmetric function. If the state probabilities are permuted amongst
themselves, the entropy will not change. That is, S(1/3, 2/3) = S(2/3, 1/3) as in
entropy is independent of value or payoff and does not distinguish between states.
Entropy depends only on the total number of states and the permutation of
probabilities.
a1: 1/3        a1: 2/3
a2: 2/3        a2: 1/3
5) The decomposition rule states that every multi-staged tree can be reduced to its
equivalent one stage multi-state form. The three probability trees in figure 7.14
have equal entropies of LN(5). They follow the decomposition formula of between
groups and within groups. Within a group, the basic formula applies. Between
groups:

H(a) = Σ (i=1..m) p(ai) [LN(1/p(ai)) + H(ai)], where H(ai) = 0 if ai are terminal nodes.

The first tree is single staged, so the basic
formula applies: H(a) = 5*1/5*LN(5) = LN(5). The entropy for the second tree is
1/5LN(5) + 4/5LN(5/4) + 8/5LN(2) = LN(5). The entropy for the third tree is 2/5LN(5/2) +
[Figure 7.14: three probability trees with five equally likely end states, grouped into stages in different ways.]
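The decomposition rule can be verified with a short recursive sketch. The nested-list tree encoding below is our own illustrative representation, not notation from the text:

```python
import math

def tree_entropy(node):
    """Decomposition rule: H(a) = sum_i p(ai) * (LN(1/p(ai)) + H(ai)),
    with H(ai) = 0 at terminal nodes.
    A node is a list of (probability, child) pairs; child None = terminal."""
    h = 0.0
    for p, child in node:
        h += p * math.log(1 / p)          # between-group term
        if child is not None:
            h += p * tree_entropy(child)  # within-group term, weighted by p
    return h

# Two of the figure 7.14 trees: five equally likely end states, flat and grouped.
flat = [(1/5, None)] * 5
grouped = [(1/5, None), (4/5, [(1/4, None)] * 4)]
assert abs(tree_entropy(flat) - math.log(5)) < 1e-12
assert abs(tree_entropy(grouped) - math.log(5)) < 1e-12
```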
6) Entropy increases if the states are brought closer together, i.e. probabilities
becoming more equal. The more indifferent we are to the choices or the more
equally favourable the choices are to us, the greater the flexibility. In this sense,
entropy is also a measure of dispersion. For example, S(1/7, 6/7) < S(2/7, 5/7) <
S(3/7, 4/7) < S(1/2,1/2). Any averaging procedure that brings the probabilities
can be found. It is therefore more attractive than the size of choice set measure,
which is discrete, and other measures of dispersion, like standard deviation, which
Pye's notion of flexibility reflects our basic understanding: the ability to adapt or
of uncertainty which the decision maker retains concerning the future choices
he will make, however, deviates from the expected value treatment of flexibility.
uncertainty. Since the basic entropy formula contains probabilities not values, he
suggests two ways of incorporating value: weighting value as probabilities or
using a cut-off satisfactory value level to reduce the total number of choices.
1) Pye's first robustness measure reflects the independence of flexibility and value.
The most robust move is that which retains maximum flexibility. Maximum
uncertainty occurs when all moves are equally likely, in which case, entropy is
simply the logarithm of the number of moves. For any multi-staged probabilistic
2) The most robust move is one which retains the most flexibility subject to the
condition that the decision maker's estimate of the probability of choosing a future
sequence of moves depends on its value. The value associated with each move is
translated into a probability of the move by weighting the value by the sum of all
combined into a single utility function. However, Pye does not show how to
3) The third robustness measure combines probabilities and values using a cut-off
satisfactory level. If the value is above the cut-off level, the difference is taken and
weighted accordingly. Any value below this level is not included in the final
weighting.
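On one reading of these descriptions (the function names and the exact treatment of values below are our assumptions, not Pye's notation), the second and third measures can be sketched as:

```python
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def robustness_value_weighted(values):
    # Second measure, as we read it: positive values are normalised into
    # probabilities by dividing each value by the sum of all values.
    total = sum(values)
    return entropy([v / total for v in values])

def robustness_cutoff(values, cutoff):
    # Third measure, as we read it: only the excess over the satisfactory
    # cut-off level is weighted; values at or below the cut-off are dropped.
    excess = [v - cutoff for v in values if v > cutoff]
    total = sum(excess)
    return entropy([e / total for e in excess]) if total > 0 else 0.0
```

Both sketches presuppose positive values, which anticipates the difficulty with negative values raised in problem 4) below.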
There are five problems with the practical implementation of this theoretical
treatment.
1) The first robustness measure works only with equal probabilities. If the number of
choices are reduced, the remaining probabilities must be adjusted to equal
probabilities rather than re-weighted. In practice, we would re-weight the
remaining choices and get unequal probabilities.
2) The second robustness measure weights the individual values by total values and
performs expected value on probabilities and values. These two rounds of
averaging distort the real picture as entropy will undoubtedly increase.
3) The usefulness of the second measure for values that cannot be linearly combined
eludes the reader as Pye does not discuss the method of value combination either
before or after the weighted probability transformation.
4) Negative values cannot be weighted into probabilities, but Pye does not indicate
whether they should be rescaled to positive values or dropped from the calculation.
5) The third robustness measure combines aspects of the former two measures but
ignores the imbalance of re-weighted and rescaled value-transformed probabilities.
In addition to the above criticisms, Pye touches upon four issues that are
information, value, and flexibility. The second paradox surrounds decision analysis
and entropic treatment of flexibility. The third issue is about value and flexibility
trade-off. The fourth issue concerns the transformation of values into probabilities.
1) Pye states: the introduction of information about the value of sequences of moves
reduces the decision maker's generally desirable uncertainty concerning his future
moves and so reduces flexibility. This means that the decision maker retains
maximum flexibility when no values are known and all future moves are equally
likely. Thus the size of choice set is maximally large and maximally uncertain. As
soon as the value of any choice is revealed, the set becomes differentiated and
flexibility is reduced. However, Merkhofer and others state just the opposite. Any
new information resolves some degree of uncertainty before the decision is made
and consequently aids the decision maker in improving upon the outcome. The
value of this information depends on the degree to which the final payoffs can be
affected. Any information about value should help the decision maker assess the
amount of flexibility he has. These two views of information, value, and flexibility
reflect a paradox: value of information versus information about values and the
effect on flexibility and the value of flexibility. Pye's statement suggests that
flexibility is inversely related to value, but this contradicts the size of choice set
definition of flexibility, i.e. the number of favourable (valuable) choices.
2) A second paradox concerns ideological differences between decision analysis and
flexibility. Pye recalls: in classical decision analysis, value is maximised and a
dominated move would be eliminated from the set of moves under consideration on
the basis of estimates of value made at the time of the initial decision. When
maximising flexibility it is most inappropriate to eliminate a move, since the fewer
are the moves, the smaller is flexibility, unless the sequence of moves is rejected as
unsatisfactory. Decision analysis uses dominance of values to eliminate
unfavourable moves, while Pye's entropic treatment of flexibility leaves options
open. This paradox demarcates two ways of thinking: flexibility and favourability.
However, flexibility has no value without considering the favourability of options,
which is necessary to differentiate between the choices and eliminate the less
favourable ones.
4) Finally, Pye's method of linearly weighting values as probabilities assumes that the
probability of a move is linearly dependent on the value of the move. He does not
address the implications of eliminating unfavourable moves and rescaling values.
Any averaging method tends to increase entropy, and this is misleading for value
optimisers. Furthermore, the transformation of values to probabilities assumes that
the relative importance of values is the same as the freedom of choice. The relative
importance of a choice is not necessarily the same as the likelihood of selecting it.
Other factors such as the disablers of time, cost, and effort should be taken into
account in determining the freedom of choice. Even so, such value-probability
weighting does not transform the underlying process into a stochastic one, which is
the essence of entropy.
7.4.3 Systems View (Kumar, 1987)
Originally, the entropy concept was applied to a closed system, from which two
views emerge and often coincide. One concerns the uncertainty of selecting an
element from a system. The other concerns the transformation of the system into a
different state. Pye and Kumar do not distinguish these two system views nor the
shift into decision terminology whence states become choices and options. In
available and on the freedom with which various choices can be made. Entropy is
exactly such a function of the number of choices and the freedom of choice.
Kumar (1987) applies entropic concepts from a Markovian system (with reversible
enables the addition of between group and within group entropies, is attractive for
(but does not apply) four entropic measures to satisfy desirable properties of
flexibility measures.
Kumar claims that entropy offers an objective basis for measuring flexibility but
does not mention the way these probabilities are derived. The probability
associated with state ai (reflecting the probability that ai is chosen) is the same as
the transitional probability from a to ai. These transitional probabilities are not the
same as the probabilities associated with the states of the trigger event. Kumar's
probabilities reflect the freedom of choice, which seem to encompass our indicators
availability of options, probability distribution of the trigger event, and other
factors that affect the likelihood of choosing the second stage option.
treatment of values, the derivation of probabilities for freedom of choice, and the
rules for determining which choices to include have not been addressed by Kumar.
probabilities: transforming values such as switching cost and response time into
cannot distinguish between states. Therefore, any entropic measure will not be
able to trade-off value and probability. While it may be true that the greatest
flexibility exists when all choices are equally likely, entropy fails when the choices
are not equally likely. That is, it is possible to have H(a) > H(b) when n(a) < n(b)
because choices are not equally likely. These fundamental problems with entropy
literature: Dreschler (1968), White (1969), Wilson (1970), White (1970), and
White (1975). These, along with an excellent review by Horowitz and Horowitz
(1976), seemed to have sealed the fate of entropy in this field. As these criticisms
were not specific to flexibility, entropy made another comeback in Pye (1978)
and Kumar (1987). While heralding the properties of entropy, Pye and Kumar
have glossed over fundamental issues, such as the derivation of probabilities and
the importance of state discrimination. This section discusses these issues and their
1) Entropy concerns uncertainty, at best, probabilities. Flexibility is about choices,
which are states of a decision. The selection of choices depends on decision rules
the trigger event, as in figure 7.15. If they are represented as uncertainties, the
decision rules that determine their selection must somehow be transformed into
2) Entropy does not give any more information than the permutation of probabilities
(reflecting balance or dispersion) and the total number of states. If the same
probabilities are reassigned to different states, the entropy remains the same.
Adding new states with zero-probabilities does not change the entropy. Thus any
permutation or addition of zero probability states will not change the entropy.
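Points 2) and 3) can be confirmed directly (the example probabilities are arbitrary):

```python
import math

def H(probs):
    # Entropy ignores zero-probability states and state identity.
    return -sum(p * math.log(p) for p in probs if p > 0)

# Permuting the same probabilities across different states leaves H unchanged.
assert abs(H([0.2, 0.5, 0.3]) - H([0.5, 0.3, 0.2])) < 1e-12

# Padding with zero-probability states also leaves H unchanged.
assert abs(H([0.2, 0.5, 0.3]) - H([0.2, 0.5, 0.3, 0.0, 0.0])) < 1e-12
```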
3) Entropy does not discriminate between states. Any re-assignment of the same
discrete probabilities to different states will give the same entropy. For n = 2, it is
4) As the number of non-zero probability states increases, entropy will increase.
Intuitively, as the number of stages increases, entropy should also increase. A
multi-staged tree, however, may not necessarily give higher entropy than one with fewer stages.
5) H(a) is always greater than H(b) if n(a) > n(b) and p(ai) = 1/n(a) and p(bi) =
1/n(b). Without equal probability, H(a) is not necessarily greater than H(b). As
6) It is meaningless to use entropy in the expected value manner, i.e. to weight and
states. Several dependence relationships are thus ignored: state, probability, value.
within a stage by cumulative probabilities will change the entropy because the
10) Entropy is an absolute measure, not relative. Flexibility is relative with respect to
events and decisions, states and choices, probabilities, and values. These additional
straightforward as Pye has described. As negative values are not allowed, they
rescaling (adding a constant to all values) changes the probabilities and entropy but
does not change the standard deviation. Any averaging procedure brings
probabilities closer together hence distorting the original dispersion and balance.
These state probabilities depend on how easy, how favourable, and how likely we
are to choose them compared to other options available from the same decision stage.
12) The concept of entropy was originally developed for systems not decisions. The
system (state transition) and decision (stages) views are quite different. State
transitions in systems are reversible while most decisions are irreversible and the
13) The decomposition property separates the entropy between groups and the entropy
within groups. It is meaningful for the systems view but not the decision view.
For the decision view, stages represent passage of time, whereas in the systems
view, stages are only state transitions not chronological sequences. In decision
sequences, the number of stages reflects not only passage of time (the more stages,
of value of information as it pertains to the early resolution of uncertainty, because
symmetric reversals give the same entropy. In this respect, White (1975) observes
15) Entropy is not a unique measure. Table 7.4 shows different distributions of five,
four, and three states each giving the same entropy 1.099, while probability
            Five states               Four states               Three states
States    v(ai)    P(ai)   H(ai)    v(ai)    P(ai)   H(ai)    v(ai)   P(ai)   H(ai)
a1          1     0.0388  0.1262      1     0.0456  0.1409      5    0.3333  0.3662
a2          1     0.0388  0.1262      3     0.1369  0.2722      5    0.3333  0.3662
a3          1     0.0388  0.1262      6     0.2738  0.3547      5    0.3333  0.3662
a4       10.7476  0.4174  0.3647   11.9150  0.5437  0.3313
a5         12     0.4661  0.3558
entropy                    1.099                     1.099                    1.099
standard  5.6992  0.2214            4.7936  0.2187             2.7386  0.1826
deviation
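The non-uniqueness shown in Table 7.4 is easy to reproduce. The three probability columns below are taken from the table; despite very different values and numbers of states, all yield an entropy of about 1.099:

```python
import math

def H(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Probability columns from Table 7.4: five, four, and three states.
five = [0.0388, 0.0388, 0.0388, 0.4174, 0.4661]
four = [0.0456, 0.1369, 0.2738, 0.5437]
three = [1/3, 1/3, 1/3]

# All three distributions give (to rounding) the same entropy LN(3) = 1.099,
# so entropy alone cannot distinguish these very different choice sets.
for dist in (five, four, three):
    assert abs(H(dist) - 1.099) < 0.001
```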
16) Entropy is based on the analysis of a stochastic process. Decisions are not
stochastic. Weighting values so that they sum to one does not make the process
17) Entropy can be derived from parameters of moment distributions, so it does not
The above arguments suggest that entropy is not suitable for our conceptual
points out that Pye's model can lead to bad decisions. Entropic measures are only
there are many choices and many stages, thus necessary to prune the tree before
further analysis. However, entropy cannot deal with the trade-off between balance
(freedom of choice) and number of choices unless these choices are equally likely
indifferent to the choices, meaning that all choices are identical. This goes against
But entropy decreases as diversity (of probability assignments not value) increases.
The recursive equations of entropy and expected values for a multi-staged tree can
be written as:

H(a) = Σ (i=1..m) p(ai) [LN(1/p(ai)) + H(ai)]

E(a) = Σ (i=1..m) p(ai) E(ai), where E(ai) = Σ (j=1..ki) p(aij) E(aij).
E(aij) = V(aij) if aij is a terminal node. V(aij) = v(ai) + v(aij) [all values along the
Entropy and expected values are equal for a probabilistic decision tree (no decision
Other than the above commonalities, expected values and entropy have entirely
number of favourable choices may be more apparent and more widely applicable.
chapter defines flexibility differently. As such, they make use of different indicators
where a first stage decision guarantees the certainty (availability) of future choices.
Thus future decisions have no uncertainties other than the trigger events, and they
are only subject to the favourability criterion. Hobbs uses enabler and motivator,
disabler and motivator, but no enabler. Hobbs assumes that the first decision
guarantees future flexibility, while Pye views all future decisions as uncertain,
values discriminate between states because payoffs and their relative weights (as
between states. But expected values do not indicate the number of states.
Meanwhile, entropy reflects the number of states and their dispersion but does not
number and balance of states. Expected values mainly depend on values, with
Entropy could be a good indicator for that aspect of flexibility that is value-free
and a screening device for large multi-staged probabilistic trees which treat future
decisions as uncertain. However, flexibility is not value free, and the first stage
decisions as decisions and not uncertainties. Except for a few of its attractive
such as the size of choice set and standard deviation. In addition to the above,
various arguments in section 7.4.4 have shown the inadequacies of using entropy
7.6 Conclusions
Individually, each measure reviewed in this chapter does not capture the multi-
expected values to measure flexibility more fully. Indicators are individually not
sufficient, but together meet the criteria for measuring flexibility. Expected values
We caution against total reliance on expected values and propose the use of
fail to meet our criteria for measuring flexibility. We dismiss any measures based
INDICATORS
likelihood, and two stage decision sequence) describe the essence of flexibility and
provide a framework for structuring options and strategies for flexibility. Not all
indicators are necessary, but each indicator alone is not sufficient to capture the
Expected values disguise the number of options available and the uncertainty to
probabilities are aggregated but lost in the final expected value measure.
on a mapping between the decision that realises the flexibility and the uncertain
condition that provides the opportunity for it. The flexibility conveyed is that of
capitalising on the maximum number of uncertain events. It differentiates between
flexibility and no flexibility, but not the degree of flexibility. It does not resolve the
and second stage options. A poor match gives a large slack and less favourable
outcome than the tighter fit of a better match between the states of the condition
and the options available. The initial choice that leads to minimum total slack gives
the most flexibility. It provides an index between 0 and 1, and is useful for
The expected value of information explains the structuring of trigger events before
decisions that give flexibility. Defined for decision flexibility, it pertains only to
postponement of decisions.
performs better than existing expected value measures in our extension of Hobbs'
1) EVPI and Deterministic EV require restructuring the decision tree, which may not
be possible due to conditional dependence of events. Furthermore, restructuring the
tree may not be realistic, e.g. the uncertainty occurs only after the decision is made;
there is no way to postpone the decision or obtain information to resolve the
uncertainty; the decision structure is inflexible.
2) EVPI is meaningful only in Merkhofer's terms, i.e. in the context of decision
flexibility, not generalisable, thus missing out on the richness of flexibility.
3) The method of using loose indicators such as probability (likelihood) may be more
revealing, simpler, and more accurate than weighting and normalisation, which
tend to disguise the simple elements.
ENTROPY
The underlying theory behind entropy is not appropriate for our conceptual
does not recognise value and thus ignores the favourability inherent in flexibility.
treat the future as uncertain, thus removing the value preferences of the decision
maker.
flexibility, we propose to
1) use indicators and expected values to measure different flexibility options and
strategies, via
CHAPTER 8
Modelling Flexibility
8.1 Introduction
The title of this chapter, Modelling Flexibility, refers to the structuring and
assessment of flexibility. Decision trees and influence diagrams are used to structure flexibility.
via
the relevant context of capacity planning in the UK ESI to answer two outstanding
These guidelines have been developed from extensive analysis to test the breadth of
main points, as they have already been analysed to the same level of depth as those
This chapter is organised as follows. Section 8.2 presents the guidelines for
structuring. Section 8.3 presents the guidelines for assessment. Section 8.4
presents basic models of plant economics and pool prices, respectively, using the
postponement, and diversity. The final section 8.6 summarises the guidelines in
brief.
8.2 Structuring
uncertainty. We propose the use of 1) decision trees and influence diagrams for
(section 8.2.2) with 3) three types of uncertainties, namely trigger, local, and external.
The term decision analytic framework first appeared in Chapter 4 of this thesis as
a proposal to make use of decision trees and influence diagrams to organise other
techniques. Here the same term refers to a modelling framework of decision tree-
2) Decision trees and influence diagrams are structuring tools for uncertainties,
decisions, and contingency; and
Previous chapters have shown that flexibility has value only in the presence of
uncertainty. This precludes the deterministic approach, i.e. one that assumes all uncertainties do not exist or have
only one state. Other conceptual aspects of flexibility call for multi-contingency.
This precludes the probabilistic approach, where the expanded risk analysis
literature, are variants of the decision tree method of analysis, e.g. stochastic
and 6.
The decision analytic framework relies on influence diagrams and decision trees for
structuring the problem. Influence diagrams are used to define conditionality and
events can be assigned and expressed easily. Decision trees are used to define
decisions and uncertainties can be ordered as they occur. The combination of
OPERATIONALISATION OF FLEXIBILITY
ordering. The latter two strategies (illustrated in section 8.5) are based on
3) One way of obtaining flexibility is by examining the extent to which decisions and
trigger events can be re-ordered to get higher payoffs. Insight into timing gives the
possibility of postponing a decision until its trigger event occurs. Likewise,
flexibility is increased if trigger events can be identified and introduced.
capturing the favourability aspect of flexibility and generally perform well albeit
with caution. Since expected values are based on decision analysis, this suggests
that decision trees provide an automatic flexibility calculus. We also cite the
rationale for such a framework from Chapter 4, e.g. technique familiarity, software
minimum of two stages, in which flexibility is associated with the first stage but
FIRST STAGE
The first stage contains at least two choices, each providing a different level of
flexibility. The two choices correspond to the activating initial position and default
option. The choice that provides future flexibility, i.e. the activating initial
position, is assigned a payoff, i.e. the purchase of this flexibility at a cost, called the
enabler.
SECOND STAGE
decision. Within this mapping, there is also an implicit assignment of trigger state
states.
For example, flexibility of capacity size, as defined by the ability to adjust total
variation in demand levels can be met by adjusting capacity size. There may be
several choices in the second stage, but they lead to favourable payoffs only if
trigger states of the uncertain event occur. If demand is high, then high capacity is
useful. If demand is low, then low capacity is useful. Thus the flexibility of
capacity size purchased by the first decision is determined once the trigger event is
known. The alternative that does not lead to flexibility is not affected by the
trigger event. For example, if flexibility of capacity size is not provided, then
The choices available at the second stage depend on the choice selected in the first
stage. The second stage consists of the exercise decision which carries possible
further cost (disabler) and a return payoff (motivator). The payoff depends mainly
on the outcome of the trigger event, an uncertainty that is resolved after the first stage decision.
This two stage cycle can be repeated. Furthermore, each stage can be composed
Besides trigger events, there are other uncertainties that affect final payoffs. Local
and external events occur after the second stage decision. Local events are those
uncertainties whose outcome has no effect on the second stage decision but has consequences on the final
payoff. External events are those uncertainties that affect all choices in the first
stage decision: to provide or not to provide flexibility. For example, pool price
uncertainty affects all choices of plant investment. These uncertainties are not
Figure 8.1 depicts the above terminology. In Stage 1, the decision maker chooses
between Purchase flexibility A at a cost (enabler) called Premium and stay with
the status quo of No flexibility B. The existence of Stage 2 is only meaningful if
the Trigger Event precedes it. Option 1 in Stage 2 gives the best payoff if
Trigger state 1 occurs. Similarly Option 2 gives the best payoff if trigger State
2 occurs. If none of the trigger states occur, the Don't Exercise option in Stage
2 gives the most favourable payoff. The payoffs associated with the first stage
choice of Purchase flexibility A are affected also by the states of Local Event A
and External Event. For the No flexibility B case, the payoffs are affected by
The associated influence diagram in figure 8.2 shows that Trigger Event does not affect the payoffs for B. Similarly, Local Event B does not affect A.
Figure 8.2 Influence Diagram of Generic Example
Such a model of flexibility may contain several trigger events, local events, and external conditions associated with first and second stage decisions. There may be many stages in such a decision tree, but each stage that realises a type of flexibility follows this same two-stage pattern.
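The expected-value logic of this generic two-stage structure can be sketched in a few lines of code. All payoffs and probabilities below are invented for illustration; they do not come from the case study.

```python
# Expected-value evaluation of the generic two-stage flexibility tree.
# All payoffs and probabilities are illustrative assumptions.

PREMIUM = 10.0  # enabler: cost of "Purchase flexibility A" in stage 1

# Trigger event states and their probabilities.
P_STATE = {"state1": 0.3, "state2": 0.3, "none": 0.4}

# Stage-2 payoffs (motivators) under choice A, net of exercise cost
# (disabler): Option 1 suits Trigger state 1, Option 2 suits state 2.
PAYOFF_A = {
    "state1": {"option1": 50.0, "option2": 5.0, "dont_exercise": 0.0},
    "state2": {"option1": 5.0, "option2": 50.0, "dont_exercise": 0.0},
    "none":   {"option1": -20.0, "option2": -20.0, "dont_exercise": 0.0},
}
PAYOFF_B = 12.0  # "No flexibility B": unaffected by the trigger event

def expected_value_A():
    """Pay the premium, then exercise the best option in each state."""
    return -PREMIUM + sum(p * max(PAYOFF_A[s].values())
                          for s, p in P_STATE.items())

print(f"EV(A) = {expected_value_A():.1f}, EV(B) = {PAYOFF_B:.1f}")
```

Under these illustrative numbers the premium is worth paying; raising the probability of the "none" state erodes the value of the flexibility purchased.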
8.3 Assessment
Assessment consists of measuring flexibility and comparing options or strategies that differ in the degree of flexibility they provide. This assessment depends on the level of complexity, which spans a range from simple to complex problems.
8.3.1 Simple Problems
for additional options. The latter two examples are means to increase the boundaries of the solution space, thereby enlarging the choice set. We explain each case. We suggest the use of appropriate indicators to trade off conflicting criteria; methods for such trade-off analysis can be found in the decision analysis literature, e.g. dominance. Complex problems involve multiple stages, multiple states, etc. Because of the dimensionality implied by these problems, indicators may not be sufficient for assessment. In such cases, it is also necessary to examine the structure of the decision tree, e.g. counting and tracing the branches that correspond to flexible choices.
The type of measure to use for assessment depends on the structure of the
problem. With respect to the examples studied for the development of these
guidelines, we classify complex problem structure into four categories in table 8.1.
The first of these concerns the timing decision, which is one of the main decisions in capacity planning. In the decision analysis context, Hirst (1989) treats the timing decision as one relying on imperfect information (a forecast of future demand) and shows that plants with short lead times and small modular unit sizes are more flexible than those with long lead times and large unit sizes. Figure 8.3 illustrates his example, which demonstrates the trade-off between the costs and benefits of flexibility. A utility must provide for new load, the timing of which is uncertain. If it chooses to build a plant that takes ten years but new load arrives before the plant is ready, it will incur a high cost of 43.41 per kWh. On the other hand, if it chooses to build a short-lead-time plant, it can afford to wait three years to get more information. The uncertainty to flexibility mapping in this case corresponds to the timing of the new load.
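Hirst's trade-off can be sketched as a simple expected-cost comparison. The probability and the 5.0 and 8.0 unit costs below are our own illustrative assumptions; only the 43.41 figure echoes the example above.

```python
# Expected-cost comparison of long- and short-lead-time plants facing
# an uncertain arrival of new load, in the spirit of Hirst's example.
# The probability and the 5.0 / 8.0 costs are invented; only 43.41
# echoes the figure quoted above.

P_EARLY = 0.4           # chance the new load arrives before year 10
COST_LONG_READY = 5.0   # unit cost if the ten-year plant is ready in time
COST_LONG_LATE = 43.41  # unit cost if load arrives before it is ready
COST_SHORT = 8.0        # unit cost of the short-lead-time plant, which
                        # can be started after waiting for information

ev_long = P_EARLY * COST_LONG_LATE + (1 - P_EARLY) * COST_LONG_READY
ev_short = COST_SHORT   # waiting resolves the timing uncertainty first

print(f"expected cost, long lead time : {ev_long:.3f}")
print(f"expected cost, short lead time: {ev_short:.3f}")
```

Even though the short-lead-time plant is dearer per unit, its ability to wait for the uncertainty to resolve gives it the lower expected cost here.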
8.4 Capacity Planning in the UK Electricity Supply Industry
We consider a utility facing the decision of whether or not to add a plant to its existing portfolio, similar to Hobbs et al (1994). The uncertainties affecting the decision can be grouped by reclassifying the uncertainties in table 2.6 into uncertainties affecting costs (plant economics) and uncertainties affecting revenues (the pool price):
[Table fragment: uncertainties marked as affecting costs and/or revenues — Financing requirements (*); Political/regulatory (* *); Environment (*); Public (* *)]
Figure 8.4 depicts the decision tree, where the first stage decision consists of three choices: invest in plant X, invest in plant Y, or do not invest. Uncertainties surrounding the running cost of X, called Plant X, act as the local event for X, and uncertainties surrounding the pool price, called Pool, act as the external event for all three choices in the first stage. In the next two sub-sections, we decompose the corresponding chance nodes Plant and Pool to show how these uncertainties can be modelled.
8.4.1 Plant Economics
We discuss how to represent and deal with uncertainties that affect plant economics, drawing on the formulae developed in the first pilot study of this thesis (appendix A). These formulae are based on the IAEA's (1984) method of levelised costs to approximate the average lifetime cost of generation. We represent the model via an influence diagram in figure 8.5. Proceeding from left to right, this diagram shows how the variables are related. The left-most nodes are direct inputs whose uncertainties were described in Chapter 2. The nodes in the middle correspond to levelising constants. The nodes on the right are components of the final levelised cost, namely, Invest/kWh, Fixed O&M/kWh, Variable O&M/kWh, Fuel Cost/kWh, and Carbon Tax/kWh. The sum of these components gives the total levelised cost.
Figure 8.5 Plant Economics Influence Diagram
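A minimal sketch of how such levelised components might combine, in the spirit of the IAEA-style calculation described above. The capital-recovery treatment and every input value below are our own assumptions, not the thesis's formulae or data.

```python
# Sketch of levelised-cost components in the spirit of the IAEA-style
# calculation named above. The capital-recovery treatment and every
# input value are our own assumptions, not the thesis's data.

def capital_recovery_factor(r, n):
    """Annualise a capital sum over n years at discount rate r."""
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

def levelised_cost(capex_per_kw, r, life, fixed_om_per_kw_yr,
                   var_om_per_kwh, fuel_per_kwh, carbon_per_kwh,
                   load_factor):
    kwh_per_kw_yr = 8760 * load_factor  # annual output per kW installed
    invest = capex_per_kw * capital_recovery_factor(r, life) / kwh_per_kw_yr
    fixed_om = fixed_om_per_kw_yr / kwh_per_kw_yr
    # Total levelised cost = sum of the five per-kWh components.
    return invest + fixed_om + var_om_per_kwh + fuel_per_kwh + carbon_per_kwh

cost = levelised_cost(capex_per_kw=600.0, r=0.08, life=25,
                      fixed_om_per_kw_yr=20.0, var_om_per_kwh=0.002,
                      fuel_per_kwh=0.015, carbon_per_kwh=0.003,
                      load_factor=0.7)
print(f"levelised cost = {cost:.4f} per kWh (illustrative)")
```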
Figure 8.6 shows a possible configuration of the capacity planning problem without any operationalisation of flexibility. Typically, we would invest and then run the plant when it is ready. The local events for plant X include the uncertainties of lead time, fuel price of plant X, and fuel escalation rate for plant X. The local events for plant Y are lead time and fuel price. The no-investment option serves as the status quo. The external events Pool Price and Base_elec_costs affect the payoffs of all choices.
characteristics, such as lead-times, are changed; and 3) decisions on investment are staged over time.
To operationalise flexibility, we re-order the nodes so that the trigger event precedes the second stage decision. The first stage investment decision precedes the trigger event for the flexibility decision of whether to run or not. Without this trigger event, there is no additional flexibility that X and Y can introduce to the system. The flexibility that investment X or Y brings to the picture is simply the second stage decision of running the new plant if the favourable trigger state occurs. If the external event Pool Price is made a trigger event for both X and Y, we see immediately that X and Y are more flexible than the status quo none option. If lead time is made the trigger event, then the fixed capital cost is known before any running costs are incurred. However, the subsequent decision that determines the final payoff still depends on the remaining uncertainties.
8.4.2 Pool Price
The elements in the pool price have been designed to encourage or discourage new capacity investments. Described in greater detail in Chapter 2, the capacity payment, also called the availability payment, is the expected cost of unserved energy given to the generators in addition to the SMP for those plants called to run. It is composed of the Value of Loss of Load (VOLL), the System Marginal Price (SMP), and the Loss of Load Probability (LOLP). Together (SMP + capacity payment) they comprise the Pool Purchase Price.
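The resulting price can be sketched as follows, assuming the usual form of the capacity payment, LOLP × (VOLL − SMP), added to the SMP; the numerical inputs are illustrative.

```python
# Pool purchase price with the capacity payment added to the SMP,
# assuming the usual form LOLP * (VOLL - SMP) for the capacity
# payment; the input values are illustrative.

def pool_purchase_price(smp, voll, lolp):
    """SMP plus the capacity (availability) payment."""
    capacity_payment = lolp * (voll - smp)
    return smp + capacity_payment

smp = 2.4      # system marginal price, pence/kWh
voll = 200.0   # value of loss of load, pence/kWh
lolp = 0.001   # loss-of-load probability for the half-hour

ppp = pool_purchase_price(smp, voll, lolp)
print(f"capacity payment    = {lolp * (voll - smp):.4f} pence/kWh")
print(f"pool purchase price = {ppp:.4f} pence/kWh")
```

Note how a higher LOLP (tight capacity) raises the price and hence the incentive to invest, which is the design intention described above.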
A plant that has been declared available may or may not get bid into the pool. A plant that gets bid into the pool may or may not set the SMP for that half-hour. A plant that gets bid into the pool may or may not get called to run. These uncertainties, together with the actual demand level and total capacity in the system, are market uncertainties that affect the overall price of electricity. Those plants that have been bid into the pool but not called to run receive the capacity payment only.
The influence diagram of figure 8.7 shows the relationships between variables in
the pool price formula which indicate market uncertainty and also affect the
investment decision. This influence diagram is similar to the causal loop diagram
of the system dynamics study of Bunn and Larsen (1992) but without cycles.
We construct the decision tree (figure 8.8) to parallel the plant economics
formulation of the previous sub-section. The utility has three choices in the first
stage: invest in plant X, invest in plant Y, or do not invest at all. The levelised
investment costs (pence/kWh) for X and Y are Pay_X and Pay_Y respectively.
After investing in a plant, it can be declared available and bid into the pool.
Whether or not the plant gets bid depends on the expected demand and total
declared capacity available, which determine the LOLP. Whether or not it actually
gets called to run in the corresponding half-hour next day depends on the actual
demand.
We specify X and Y such that if X is successfully bid, it is almost surely the most expensive plant bid within the half-hour, in which case the SMP will be equivalent to X's bid price. Y is specified to be a less expensive plant bid, and therefore will not set the SMP for that half-hour. If a plant is bid into the pool but does not get called to run, the plant still receives the capacity payment. [For simplicity's sake, we have not included the possibility that plants not declared available will be called to run if actual demand is much higher than expected.] The do-nothing case assumes that the existing old plant has an equal chance of being bid into the pool and called to run, but the event has no effect on the level of SMP or LOLP. Focussing on market uncertainties for the moment, we assume that the running costs are of two states only (high or low) and not further decomposed. These states, like the other conditions, come from the extreme scenarios, i.e. minimum and maximum of plant running costs.
In other words, re-order or add nodes to the original structure to obtain a flexibility structure, and then decide which type of flexibility to use.
Figure 8.9 illustrates the meaning of partitioning. Partitioning the action space implies redefining the original choice set to enlarge it. The left decision with two choices A and B is partitioned into the right decision with five choices. The states of the trigger event may be defined to trigger the choices in the second stage decision.
Staging reduces the amount of commitment made at each stage and thereby frees up resources for later use. Sub-decisions that are spread over a period of time gain from the resolution of uncertainty between stages.
Figure 8.10 Sequentiality and Staging
Commitment can be reduced in several ways, such as 1) shortening the life of a plant, 2) adjusting the construction lead time, and 3) exploiting modularity of unit size. Plants which can be built in incremental modular units and run as they are built offer flexibility by minimising commitment. We discuss next an example that concerns flexibility of plant lives against the uncertainty of new competitive technology.
A plant whose life can be shortened or extended allows the firm to take advantage of new conditions. Figure 8.11 compares plants with varying lives and the costs of switching to a new technology at the end of a plant's life. It shows that plants with extendable lives provide more flexibility than those without. As plant lives reflect the amount of commitment, reducing commitment increases flexibility.
[Note: the chance node labelled "a" appears twice in the decision tree. It refers to the repetition of that portion of the tree to the right of the first labelled node.]
At time t, plants X and Y reach their end of life, while plant Z has not reached its end of life. At this time, there may be new technologies available for investment. Market and plant conditions may favour these new technologies, i.e. higher pool price and lower running costs. If the competing technology gives better performance, X and Y are both more flexible than Z. If the competing technology gives worse performance, X is more flexible than Y because X's life can be extended. We assume that it will not be economically feasible to retire early and invest in the better performing technology at the same time. If the utility firm switches to the better performing technology, it makes a third stage decision, that of deciding on lead time. If demand is high, then a zero-lead-time technology will capture the high revenue. If demand is low, then the less costly non-zero-lead-time switched technology is preferable. This demand uncertainty acts as the trigger event for the third stage decision. Plant Z with remaining life is neither able to take advantage of the competitive technology nor to respond to this demand uncertainty.
We analyse this example using expected value measures described in the previous
chapter. The relative flexibility benefit (Hobbs et al, 1994) and the normalised flexibility measure (Schneeweiss and Kühn, 1990) both indicate that investing in plant X provides the most flexibility out of all three choices in the first stage. If the arrival of a competitive technology is not likely at all, then plant Z could be the best initial choice, as the inflexible option gives the highest expected value when no uncertainty is considered. If there is no information about the competing technology until after the first stage decision, the comparison must again rest on expected values.
The timing decision is much discussed in option-pricing literature, that is, when is
the optimal time to invest and to exercise your option? The timing decision is
illustrated by reversing the order of decision and chance nodes. Better payoffs can
be attained if these uncertainties are trigger events for the decisions they precede.
During the period of delay, new options may arise as well as expire. For simplicity's sake, we assume that the choice set remains the same in spite of re-ordering. The timing decision of investing in the first stage or exercising in the second stage can be portrayed as a multi-staged decision tree, where the decision to invest or run a plant depends on the occurrence of the trigger state of the trigger event. Deferral can be achieved by adjusting the construction lead time of new plants, shortening or extending plant lives, or in any number of ways, at a deferral cost. The analysis reduces to a trade-off between the cost of deferral and the expected benefit of the information gained by waiting.
Figure 8.12 shows the basic structure of such a decision tree. Condition1 triggers the decision to run in the second stage. Condition2 triggers the decision to run if the initial investment decision is deferred. If Condition1 also triggers the Invest2 decision, then deferral is useful. By deferring the investment decision, one gains information: the condition that affects the decision to run after Invest in the first stage also provides information for Invest2. The choice that corresponds to the more favourable state of its preceding trigger condition gives a better payoff. If these conditions (trigger events) are independent of each other, then deferring the decision provides no flexibility.
Extending this to multiple stages captures the opportune time to invest or exercise.
We see that deferring only makes sense if the conditions are related and not
independent.
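This point can be illustrated numerically: deferral (at a cost) beats immediate investment only when Condition1 carries information about Condition2. The payoffs and the deferral cost below are invented.

```python
# Deferral pays only when the trigger conditions are related: a sketch
# with two binary conditions. Payoffs and the deferral cost are invented.

def ev_invest_now(p_good=0.5, pay_good=20.0, pay_bad=-10.0):
    """Invest before the uncertainty is resolved."""
    return p_good * pay_good + (1 - p_good) * pay_bad

def ev_defer(correlated, p_good=0.5, pay_good=20.0, pay_bad=-10.0,
             defer_cost=2.0):
    if correlated:
        # Condition1 reveals Condition2 exactly: invest only when good.
        return p_good * pay_good - defer_cost
    # Independent conditions: observing Condition1 tells us nothing,
    # so the deferred chooser faces the same lottery (or walks away).
    return max(ev_invest_now(p_good, pay_good, pay_bad), 0.0) - defer_cost

print("invest now         :", ev_invest_now())   # 5.0
print("defer, correlated  :", ev_defer(True))    # 8.0
print("defer, independent :", ev_defer(False))   # 3.0
```

With correlated conditions, waiting screens out the bad state and is worth the deferral cost; with independent conditions, the deferral cost buys nothing.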
Merkhofer (1977, p. 719) suggests that the decision maker's time preferences should be considered to determine the value of flexibility obtained from delaying a decision. In the deferral example of figure 8.13, we should therefore take the discount rate into consideration. Sunk1, Sunk2, and Sunk3 are sunk costs, and Plant1, Plant2, and Plant3 are plant costs incurred in each of the three periods. Market1, Market2, and Market3 are the corresponding market conditions.
Figure 8.13 Deferral with respect to Market and Plant Uncertainty
8.5.3 Diversity
Diversity in the plant mix provides several types of flexibility. Our example in figure 8.14 shows that adding plant X, with attributes that cater to several uncertain conditions, provides more flexibility than plant Y, which only caters to one uncertain condition. To build the decision model, we identify the uncertainties which flexibility answers by asking, "Which uncertainties can we manage with the attributes of our plant or portfolio?" A diverse portfolio contains different types and sizes of plants with different lives and retirement dates.
Figure 8.14 Diversity Influence Diagram
The three conditions A1, A2, and A3 trigger second stage decisions for plant A. If condition A1 is favourable, i.e. demand level goes up, then we can add another unit of plant A to meet the higher level of demand. The trigger state is high demand growth, and the associated flexibility choice is adding another unit to meet the extra demand. Similarly, when conditions A2 and A3 occur, appropriate action can be taken on plant A. These second stage decisions have not been explicitly illustrated in the decision tree but follow the same kind of decision sequences shown in the examples of plant economics and pool price in section 8.4.
Figure 8.15 Diversity Decision Tree
Selecting a plant with three attributes that contribute to flexibility is similar to selecting a machine whose production level can be adjusted: production level adjustment captures this source of flexibility. The machines differ in the appropriate trigger states of the trigger event (demand) preceding the second stage decision. Adding options that contribute to the overall diversity in the system,
plant mix, or portfolio contributes to overall flexibility because these options cater
to different states of external conditions. Adding such options will always increase
flexibility provided they are free. Because they are not free, it is necessary to make
cost and benefit trade-offs, and this assessment requires the use of indicators and
expected values.
8.6 Conclusions
We have shown the applicability and practicality of our guidelines for structuring flexibility:
3) Structure the problem in the decision analysis framework, i.e. with decision tree
and influence diagram, to include
a) uncertainty-flexibility mapping, and
b) minimum 2-stage decision sequence
The decision tree is asymmetric because of the default status quo case in the first
stage. Specify the decision tree with relevant indicators, i.e. enabler, disabler(s),
motivator(s), trigger event(s), trigger states, likelihood.
4) Assess flexibility with indicators and expected values. For simple problems, use
indicators. For complex problems, follow the categories in table 8.1.
The problems in capacity planning have been greatly simplified to focus on the structuring and assessment of flexibility. This approach does not replace the need for rigorous modelling, as completeness is still necessary. It merely compensates for the lack of completeness or the model unease found in
the decision making style of the industry. In other words, the traditional modelling
approach (with the trend in model synthesis) is still necessary but no longer
sufficient.
APPENDIX D
D.1 Introduction
This appendix relates the abstract concepts of flexibility and robustness to the familiar example of supply and demand in production. These concepts and their relationships are then applied to other areas to further illustrate the differences between them. Finally, flexibility and robustness are discussed within the context of capacity planning in the electricity supply industry.
Flexibility is the ability to react or change, and robustness is the lack of a need to change. On a windy day, neither the willow nor the oak tree will collapse, because the former bends with the wind (flexibility) and the latter withstands the wind (robustness). Flexibility implies a future cost, as it is a defence against the unexpected, e.g. the cost of extra production to meet unexpectedly high demand. Robustness, on the other hand, implies a present holding cost, typically incurred to cover the expected range of demand.
Although many types of flexibility exist, Gerwin (1993) insists that they are only meaningful in relation to specific uncertainties; volume flexibility, for example, concerns increases or decreases in aggregate production level. Cazalet et al (1978) have looked at the implications of building over- and under-capacity with respect to demand. We use the costs associated with such production levels to illustrate both flexibility and robustness. Flexibility has two aspects: range and time. Range refers to the amount of change, and time refers to the length of time to make the change. In this chapter, range refers to levels of production that can be assigned or achieved, while time is the lead time to produce. Gerwin also observed that the time aspect of flexibility has received much less attention than the range. For this reason, we will expand on the time aspect.
D.2.1 Production and Demand Levels
We take a simple case to illustrate the difference between flexibility and robustness.
A firm produces the quantity qt at time t to meet exactly the demand dt at time t, i.e. qt = dt. This occurs when lead time is zero and the actual quantity produced is the same as the actual demand at time t. Lead time T is defined as the time it takes to produce. Q and D denote planned production and forecasted demand, while q and d represent actual levels of production and demand respectively. Capital letters Q and D denote planned whereas small letters q and d denote actual. For the moment we do not distinguish between planned and actual, and use Qt and Dt for production and demand at time t. Let It denote the normal production level at time t. This firm chooses to fix It at a constant level I. Iopt is the optimal level of normal production which minimises the total cost. Ch is the cost of holding the unsold quantities when demand falls below I, and Cp is the cost of producing quantities above I when demand is greater than I. The following table D.1 lists these definitions.
In the simple case where production quantity equals demand and planned equals
actual, the demand curve is the same as the production curve, as illustrated in
figure D.1. The cost of holding and production, Ch and Cp, apply to the areas
delineated by the line I and the demand curve. Cp is not the cost of normal production Cnp, which is used to calculate I, but the cost of production beyond the normal level I.
Figure D.1 Costs of holding and production: Ch and Cp. [The figure plots quantity Qt against time t, with levels Qmin, I, and Qmax marked; Ch (cost of holding) applies to the area below I and Cp (cost of production) to the area above I.]
The average cost up to time t with normal output at I is the sum of the cost of normal production Cnp times the normal production level I, the holding cost Ch times the amount not sold (I - Qt for I > Qt), and the cost of production Cp times the extra amount produced (Qt - I for Qt > I).
D.2.2 Flexibility
Flexibility means the ability to change or react when necessary. This can be achieved in several ways. It does not matter what I is as long as the firm has the means to change it, e.g. by fluctuating It = Qt, though this may incur additional cost. Suppose we set I to the minimum demand level, I = Qmin. Then no holding cost is ever incurred, and the expected cost up to time t is

\[ C_t(Q_{min}) = \int_0^t \int_{Q_{min}}^{Q_{max}} C_p\,(Q_t - Q_{min})\, f_t(Q_t)\, dQ_t\, dt \]
D.2.3 Robustness
Robustness means the absence of a need to change or react. When the level of normal production is set at maximum demand Qmax, it is not necessary to change the level of production because demand will never exceed Qmax. (Recall that for the moment we assume demand never exceeds Qmax.) Setting I = Qmax and substituting,

\[ C_t(Q_{max}) = \int_0^t \int_{Q_{min}}^{Q_{max}} C_h\,(Q_{max} - Q_t)\, f_t(Q_t)\, dQ_t\, dt \]
D.2.4 Flexibility versus Robustness
From the above two equations for Ct, we can tell which is cheaper (hence better). The choice of an optimal level of normal production Iopt is determined by finding I such that Ct(I) is minimised, as follows:

\[ C_t(I^{opt}) = \min_I C_t(I) = \min_I \int_0^t \Big[ C_h\,(I - Q_t)\,\big|\,\{I > Q_t\} + C_p\,(Q_t - I)\,\big|\,\{I < Q_t\} \Big]\, dt \]
opt
To find Iopt, we consider two special cases of Ch and Cp. If the cost of holding and the cost of production are equal, Iopt cannot be uniquely determined. If these costs are not equal, then we can solve for Iopt by making a simplifying assumption, that demand is uniformly distributed:

\[ f(Q_t) = \frac{1}{Q_{max} - Q_{min}} \quad \text{(uniform distribution)} \]

\[ C_t(I) = \frac{t}{Q_{max} - Q_{min}} \left[ C_h \int_{Q_{min}}^{I} (I - Q_t)\, dQ_t + C_p \int_{I}^{Q_{max}} (Q_t - I)\, dQ_t \right] \]

\[ C_t(I) = \frac{t}{Q_{max} - Q_{min}} \left[ \frac{C_h (I - Q_{min})^2}{2} + \frac{C_p (I - Q_{max})^2}{2} \right] = \frac{t}{2(Q_{max} - Q_{min})} \left[ C_h (I - Q_{min})^2 + C_p (I - Q_{max})^2 \right] \]
Setting the first derivative to zero, we solve for Iopt:

\[ \frac{dC_t(I)}{dI} = 0 = \frac{t}{Q_{max} - Q_{min}} \big[ C_h (I - Q_{min}) + C_p (I - Q_{max}) \big] \]

\[ C_h (I - Q_{min}) + C_p (I - Q_{max}) = 0 \]
\[ C_h I - C_h Q_{min} + C_p I - C_p Q_{max} = 0 \]
\[ I^{opt} = \frac{C_h Q_{min} + C_p Q_{max}}{C_h + C_p} \]

If Ch = 0 then Iopt = Qmax (robustness). If Cp = 0 then Iopt = Qmin (flexibility).

How do Ch and Cp relate to Iopt? Differentiating Iopt with respect to Ch, we find a negative relationship:

\[ \frac{\partial I^{opt}}{\partial C_h} = \frac{C_p (Q_{min} - Q_{max})}{(C_h + C_p)^2} < 0 \]

But when we differentiate Iopt with respect to Cp, we find a positive one:

\[ \frac{\partial I^{opt}}{\partial C_p} = \frac{C_h (Q_{max} - Q_{min})}{(C_h + C_p)^2} > 0 \]

This implies that as the cost of holding increases, Iopt decreases, and we should hold less. Likewise, as the cost of production increases, we should increase Iopt and hold more. This agrees with common sense, and we can see it below in figure D.2.
These relationships can also be found by evaluating Iopt when Ch > Cp and when Ch < Cp. When Ch and Cp are unequal and nonzero, Iopt will lie somewhere between Qmin and Qmax.

[Figure D.2: Iopt plotted against cost — Iopt falls towards Qmin as Ch rises and rises towards Qmax as Cp rises.]
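The closed form for Iopt can be checked numerically against a direct grid minimisation of Ct(I); the cost and quantity values below are arbitrary.

```python
# Numerical check of I_opt = (Ch*Qmin + Cp*Qmax) / (Ch + Cp) under
# uniformly distributed demand, by direct minimisation of Ct(I).
# The cost and quantity values are arbitrary.

Q_MIN, Q_MAX = 40.0, 100.0
CH, CP = 1.0, 3.0  # holding and production costs per unit

def cost(i, t=1.0):
    # Ct(I) = t / (2 (Qmax - Qmin)) * [Ch (I - Qmin)^2 + Cp (I - Qmax)^2]
    return t / (2 * (Q_MAX - Q_MIN)) * (
        CH * (i - Q_MIN) ** 2 + CP * (i - Q_MAX) ** 2)

i_formula = (CH * Q_MIN + CP * Q_MAX) / (CH + CP)
i_grid = min((Q_MIN + 0.001 * k
              for k in range(int((Q_MAX - Q_MIN) / 0.001) + 1)), key=cost)

print(f"closed form I_opt = {i_formula:.3f}")
print(f"grid minimiser    = {i_grid:.3f}")
```

With production three times dearer than holding, the optimum sits nearer Qmax, as the comparative statics above predict.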
To make the problem more realistic, we vary the basic conditions and examine the effects on the choice between flexibility and robustness.
D.3.1 Levels of I
The range aspect of flexibility translates into the number of levels of production. Maximum flexibility occurs when normal production level It can be set to Dt. Alternatively, robustness also applies to the situation where Qmax > dmax, but this entails a correspondingly higher holding cost.
D.3.2 Cost of Not Meeting Demand Cd
Consider the situation where demand may not be met by existing production
capacity. If demand Dt is not met, future demand may fall, because customers can
switch to other suppliers. If this firm does not have means to meet demand above
Dt, it is inflexible. The cost of extra production beyond maximum capacity Cxp can
be assessed relative to the cost of not meeting demand Cd and the available means
of achieving this.
Suppose Dt is never more than Qmax, i.e. P (Dt > Qmax) = 0. This may happen for
the following reasons. In an efficient market, price rises as demand rises. A price
rise will deter further increases in demand beyond, say, Qmax. However, when prices are slow to adjust, demand may press against available capacity. Finally, it could well be that the producer has no interest in meeting the Dt that exceeds Qmax because it is too costly or impossible. For example, the lead time to acquire additional production capacity may be too long, or the producer, as a
ignore it. As long as the cost of not meeting demand is significant and nonzero, the
firm needs to consider flexibility. Thus Cd is not only an economic cost but also an
opportunity cost and one reflecting the cost of contractual or social obligations.
Consider the effect of lead time T, so that the quantity produced is not the same as the quantity demanded and, at best, Qt = Dt+T. As long as the cost of not meeting demand Cd is zero, i.e. the firm has no obligation to meet demand, the lead time to production has no effect on costs. However, if there is a cost to not meeting demand, then: if lead time is zero, flexibility is better; if lead time is nonzero, robustness is better. In the case of non-zero lead time and a zero cost of not meeting demand, the firm must weigh the benefits and costs of the additional revenue. This discussion is continued below.
The above analysis assumes that the decision maker, or the firm producing the goods, is risk neutral. The decision maker's risk attitude affects his preference for flexibility or robustness only when there is a chance that Dt may exceed Qmax. If lead time is zero, all else being equal, the decision maker's attitude to risk does not affect his preference for flexibility or robustness. For nonzero lead time and nonzero cost of not meeting demand, the risk averse decision maker would prefer robustness to flexibility, as the latter implies a risk that demand may not be met on time and that future demand may be affected as a result. Thus, even if the cost of production is lower than the cost of holding, the risk averse decision maker would prefer a higher level of normal production to avoid the risk that the lead time (for extra production) would entail. Implicitly, the cost of holding includes the cost of expired goods if not sold. Table D.3 classifies the decision maker's preferences with respect to risk attitude.
Table D.3 Preferences with respect to risk attitude when P(Dt > Qmax) > 0

Condition: T > 0 and Cd >> 0 — risk averse: robustness (set I > Qmax); risk neutral: robustness with flexibility; risk seeking: flexibility.
Robustness implies setting Qmin and Qmax to cover Dmin and Dmax. This translates
into minimising the probability that Dmax exceeds Qmax and ensuring that Qmin can
fall to Dmin without cost. Flexibility, on the other hand, implies fluctuating production levels to follow changing demand. Thus Dmax can be greater than Qmax and Dmin less than Qmin.
Suppose that actual demand dt may exceed expected maximum demand Dmax. In
figure D.3 below, the area between dmax and Qmax refers to the cost of extra
production Cxp. This is the average unit cost of producing beyond maximum
production capacity.
Figure D.3 Cost of extra production Cxp. [The figure plots quantity against time t, with levels Qmin, I, Qmax, and dmax marked; Ch applies below I, Cp between I and Qmax, and Cxp between Qmax and dmax.]
In this case, dmax cannot be predicted with accuracy. No matter what level of I or
the level of Qmax, there is always a chance that demand will exceed it. If we set I =
Qmax, and dmax turns out to be greater than Qmax, some demand will not be met. If
dmax < Dmax < Qmax, then we have incurred substantial holding cost, especially for the unused capacity between dmax and Qmax. To determine Iopt in this case, we need to consider the probability that dmax exceeds Qmax or dt exceeds Dmax, the cost of not meeting demand Cd, and the cost of extra production beyond the given maximum production capacity, Cxp.
From the above, we conclude that as long as there is a chance that demand may exceed maximum production capacity and the cost of not meeting demand is not zero, some flexibility is necessary. Where the cost of production beyond maximum capacity is prohibitive, robustness is preferred.
The existence of lead time, holding cost, production cost, extra production cost, and cost of not meeting demand implies a need to forecast and plan ahead. Typically, future demand is forecasted so that Dt can approximate dt, with forecast errors covered by a margin or by ensuring that Dmax exceeds dmax. Production levels are managed so that planned Qt approximates actual qt. The more complicated the system, the more likely we can expect errors in forecasting future demand, modelling of the system, and planning decisions to occur. Robustness gives a present known cost of holding; flexibility gives a future, less certain cost of reacting.
The above analysis can be applied to any situation involving the control of supply to meet demand. We illustrate this through two examples. The first example concerns bank customers' preferences for allocating money between current and savings accounts.
D.4.1 Example 1: Current and Savings Accounts
Consider a customer who wishes to keep as much money as possible earning interest in his savings account. In addition, he would like to reduce the amount of time spent on monitoring his current account. An insufficient balance in the current account means that a cheque will bounce and he will be charged a fee Cd (cost of not meeting
demand). The credit balance in the current account represents normal production
level It, the cost of which is the opportunity cost of not earning interest in the
savings account Ch. The cost of transferring between accounts is the cost of
production Cp. Lead time T is the amount of advance notice he has to give to the
bank or the length of time it takes to transfer between the accounts. The total
amount of money he has between the two accounts is Qmax. The minimum balance
in the current account is Qmin. The maximum total withdrawal from the current
account is Dmax. If the customer is flexible (has the capability to be flexible) and
prefers it, he would keep It (the balance in the current account) as low as possible.
If the customer is risk averse and also prefers not to have to monitor or transfer
between accounts too frequently, he would keep a high balance in the current
account, hence the robustness option. Thus, the cost of robustness is the cost of
holding, i.e., opportunity cost of the positive balance in the current account. The
cost of flexibility is the extra effort required to monitor and transfer between accounts.
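Under the simplifying assumption of uniformly distributed withdrawals, the earlier closed form for Iopt gives a cost-minimising current-account balance directly. All figures below are invented.

```python
# The current-account example in the earlier cost model: Ch is interest
# forgone on the current-account balance, Cp the cost of transferring
# on demand. Uniformly distributed withdrawals are assumed, so the
# closed form for I_opt applies. All figures are invented.

Q_MIN, Q_MAX = 0.0, 2000.0   # range of total withdrawals per month
CH = 0.05 / 12               # monthly interest forgone per pound held
CP = 0.002                   # per-pound cost of an on-demand transfer

i_opt = (CH * Q_MIN + CP * Q_MAX) / (CH + CP)
print(f"cost-minimising current-account balance: {i_opt:.0f} pounds")
```

Because the forgone interest (holding cost) here exceeds the transfer cost, the optimum sits well below the maximum withdrawal level, i.e. nearer the flexible end of the spectrum.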
Some banks are offering all kinds of financial packages to suit customers' preferences. The uncertainties and risks these banks face with regard to customers' frequency and amount of transfers between the accounts are implicitly built into the fees they charge. These fees include non-interest-bearing balance requirements and actual charges. Variations of the above include an interest-bearing current account with a minimum balance, where the interest is still lower than that of savings and other types of longer term accounts. There is also a facility for automatic transfer between accounts at the cost of keeping a minimum balance, a fixed set-up fee, or a transaction fee per transfer. Alternatively, negative production levels can be associated with an overdraft facility.
This analysis may also be applied to money market accounts, off-shore accounts, and financial instruments which give customers varying degrees of robustness and flexibility.
D.4.2 Example 2: Buying versus Renting
The buy or rent decision is not only an accounting issue but also a strategic one, affecting the way a firm can deal with future uncertainty. In accounting terms, buying machinery is very different from renting it: the difference is one of ownership and control versus borrowing. The former becomes an asset, gets entered into the balance sheet, gets depreciated, and eventually has scrap value. The latter becomes an expense, reduces taxable profit, and does not get carried over to the following year. Strategically speaking, buying ties the firm to a specific technology for the life of the machinery, whereas renting enables the firm to switch to new technologies, terminate its commitment at any time, and limit its technological confinement.
Ownership, particularly of a capital asset, pays off if the capital cost discounted over the life of the asset is less than the total cost of renting over the same number of years. In practice, a firm may choose to own most of the machinery and rent the rest: robustness to deal with expected demand and flexibility to deal with the unexpected. Here the cost of holding is the purchase cost, which reflects the opportunity cost of the machine. The cost of production is the cost of renting. Lead time is translated into how soon the firm can rent or return the extra machine required. The shorter the lead time, the more flexible the firm.
D.5 Capacity Planning in the Electricity Supply Industry
In the privatised electricity supply industry, the costs and responsibility of meeting demand are no longer borne by one single utility, i.e. the defunct Central Electricity Generating Board (CEGB). Qmax is the maximum electricity production capacity, but it is not a constant level, due to scheduled and unscheduled maintenance and different availabilities of plant. For this reason, normal production level It < Qmax. Furthermore, Qmax is the aggregate of the plant capacities of the utilities in this industry, and as plants are commissioned and retired it will change accordingly. The expression for Qmax thus approximates the actual qmax,t:

\[ Q_{max} \approx Q_{max,t} = \sum_{i=1}^{n} Q_{max,i,t} \]

where Qmax,i,t is the maximum capacity of each utility i at time t.
The cost of not meeting electricity demand is very high, especially in this
competitive environment. The cost of not meeting demand Cd is translated into the value of loss of load, which is always positive. Traditional approaches have tried to deal with this uncertainty by first forecasting demand, then setting a reserve margin R above expected peak demand based on forecast errors and plant availabilities.
The cost of robustness Cr is the holding cost of over-capacity. Cost of flexibility
Cr = Ch (robustness)
Cf = Cxp + Cx (flexibility)
The cost of flexibility Cf does not include the cost of holding, which is an ongoing present cost. Instead, Cf includes future transaction and production costs. Thus
else or if it is zero. A holding cost should also contain an interest or discount rate to reflect the time value of money over the period of holding. Robustness means the need not to
change, hence It = I. Robustness diminishes if It is not constant, as there is a cost
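The two cost measures above can be sketched directly; treating Cf's future components as discounted back over the lead time is an assumption of this illustration, not a formula from the text:

```python
def cost_of_robustness(holding_cost_per_year, years, rate):
    """Cr = Ch: the present cost of holding over-capacity, paid every year
    whether or not the capacity turns out to be needed."""
    return sum(holding_cost_per_year / (1 + rate) ** t for t in range(years))

def cost_of_flexibility(transaction_cost, production_cost, rate, lead_time):
    """Cf = Cxp + Cx: future transaction plus production costs, incurred
    only if the option is exercised, after `lead_time` years."""
    return (transaction_cost + production_cost) / (1 + rate) ** lead_time
```

With a zero discount rate the two reduce to simple sums; with a positive rate, Cf falls the further in the future the commitment lies, which is one way of seeing why flexibility defers cost.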
How does one gain flexibility? An extremely flexible option has no holding cost
and gives instant response. Importing power, for instance, shifts the holding cost
(of the plant) to someone else, e.g. Scotland or France. Combined Cycle Gas
Turbines (CCGT) are quick to build and require minimal warm-up time.
clause in building a new type of plant, also offer flexibility. Thus flexibility can be
Portfolio Theory (Markowitz, 1952) recommends keeping a well-balanced and
generation, diversity in the capacity mix ensures security of fuel supply. Diversity
utility with a diversified capacity mix, i.e. different types of technology, different fuels, and different plant lives and other characteristics, is not only protected from fuel supply disruptions (robustness) but also has different alternatives for coping with the unexpected (flexibility). But once again, due to the nonzero holding cost, uncertain Qmax, and the possibility of fuel supply disruption, diversity in the capacity
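One way to quantify the diversity of a capacity mix, in the spirit of the entropic measures discussed later in this thesis, is the Shannon entropy of the fuel shares; the mixes below are illustrative:

```python
import math

def mix_diversity(shares):
    """Shannon entropy of capacity shares: higher values indicate a more
    diverse (better balanced) capacity mix."""
    return -sum(p * math.log(p) for p in shares if p > 0)

balanced = mix_diversity([0.25, 0.25, 0.25, 0.25])      # four equal fuel shares
concentrated = mix_diversity([0.85, 0.05, 0.05, 0.05])  # one dominant fuel
# The even mix scores higher, i.e. it is the more diverse portfolio.
```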
analysis of choice and uncertainty. This motivates the use of decision analysis as
many options 1 to m that are available for any scenario that arises, but ultimately
D.6 Conclusions
We have examined the differences between flexibility and robustness, and the conditions under which they are useful, by deriving cost measures. We have shown that flexibility and robustness are not the same, nor are they
when more information can be expected, and implying a future cost. Robustness, on the other hand, is backward looking, minimising regret and implying a present cost. We
can only use it as it is, whereas flexibility offers the potential to change and
transform.
We have only looked at demand uncertainty, and at the case of demand being
greater than supply. This study can be extended to look at the other side, where
in the system, and other areas of application such as regulations, computer systems,
CHAPTER 9
Conclusions
This thesis has proposed and investigated the feasibility and practicality of model synthesis and of flexibility in capacity planning. The main conclusions are that 1) model synthesis is feasible but has practical limitations and 2) flexibility is useful, but not in the same sense as model synthesis. These conclusions are supported by the main themes listed below and
From the beginning, there appeared no link between model synthesis and
tradition. Until their contribution to this research problem was evident, it did not
seem feasible to consider both model synthesis and flexibility. Model synthesis
seems to fit into the discussion of model management systems and model
integration issues in the decision support systems literature. Yet there does not
exist a taxonomy suitable for it, hence the conceptualisation of model synthesis in
very confusing. Several prior attempts were made to reconcile the two apparently
unrelated concepts from the means-ends angle. In other words, is model synthesis
Modelling for completeness is still necessary. But it is not sufficient, as
between the decision maker (the user of the model) and the model itself. The
earlier discussed in chapters 4 and 6 resolves the themes of model synthesis and
comprehensiveness, comprehensibility
essential for a usable model. Model synthesis makes use of complementary and
balance of hard and soft techniques to address the intricacies of power generation
and the strategic nature of uncertainties in capacity planning. It reflects the idea of
completeness.
2) Synthesis capitalises on economies of scale, reflecting the idea that the whole is greater than the sum of its parts. It exploits the synergies between its component parts.
and functionality.
OPERATIONAL ISSUES
demonstrates the feasibility but not the practicality of model synthesis. This thesis
investigated one form of synthesis to capture the two important but complementary
tool to capture the details of the core capacity planning optimisation model. To
facilitate this, a "model of model" to reduce and approximate the inputs and outputs of the optimisation model was proposed and tested. Although regression analysis for model fitting is an established and accepted response surface method, and indeed a similar optimisation model has been successfully reduced in this manner, the series of experiments found that such a "model of model" is infeasible,
to achieve synthesis. These conceptual issues far outnumber the tests achievable
difficulties, this thesis concludes that model synthesis is impractical for a utility
faced with the kinds and range of uncertainties described in Chapter 2 in the UK
FLEXIBILITY AS A DECISION CRITERION
That flexibility has value when there is uncertainty has been proved by several authors (Marschak and Nelson 1962; Merkhofer 1975). However, the trade-off between flexibility
and optimality has not been sufficiently addressed in the literature, except for a
further, the elements necessary to define the multi-faceted concept of flexibility are
notions of range (size of choice set) and time, requires uncertainty conditions, and
flexibility, refers to the ability to react with minimal penalty on cost, time, and
FEATURE OF MODELLING APPROACH
aim for completeness of coverage, to ensure all likely ranges of uncertainty are
potential for using decision analysis as an organisational tool for synthesis. With
diagrams such as DPL (ADA, 1992), decision analysis becomes even more
attractive as a structuring tool. However, decision analysis assumes that the model
is built in direct consultation with decision makers, which is not the case with the
modelling and decision making styles of the electricity supply industry. Thus it is
to the modelling approach, i.e. to compensate for model unease, rather than as a
The argument of this thesis is dominated by the main themes discussed in section
9.1, woven by research questions and answers listed below, and distinguished by
types of contributions listed in section 9.3. Figure 9.1 illustrates how the chapters
are related to each other. The numbers correspond to chapters while the arrows
carry the messages. Chapter 2 links the two parts together by uncertainty. The
two themes of model synthesis and flexibility are related by the use of the decision
model synthesis addresses completeness (Chapters 2, 3, 4), flexibility addresses "coping" with uncertainty.

[Figure 9.1. Legend: U = uncertainty; F = flexibility; UF = uncertainty-flexibility mapping; also labelled are the decision analysis framework and the replication and evaluation method.]
Table 9.1 answers the ten questions raised initially in Chapter 1 (table 1.1). Each of
Table 9.1 Research Questions and Answers
1) Requirements
comes from. Areas of uncertainty refer to the sources of uncertainty, i.e. the factors which are uncertain. The different factors that affect capacity planning but are themselves uncertain are identified and discussed. This classification and enumeration is the
uncertainties, intricacies in power generation and other aspects of the business are
by replication.
Chapter 3 reviews the techniques (figure 3.13) used in electricity capacity planning, and concludes that all kinds of OR techniques have been used for this, but models based
due to an inappropriate level of detail, a lack of decision focus, and insufficient attention
models.
than by literature review and to enable a fair comparison. It concludes (in Chapter
four step method is proposed and tested. The four steps consist of the following:
The feasibility of this method has been established by two detailed pilot studies, the
first of which is documented in Appendix A. This method is used for the first stage
of the modelling experiment (Appendix B and Chapter 4). This four step method is
cited examples.
the conceptual and operational difficulties that must be overcome for feasibility.
concludes that model synthesis is feasible but not practical for a utility in the UK
ESI.
5) What is flexibility? How is it defined? How does it relate to other words and
concepts?
Many attempts at giving a precise definition of flexibility end up restricting its rich,
c) Robustness as safety or lack of risk in a decision; a robust decision is one for which
elements will not have to be regretted.
d) Lack of confidence reduces the desire for commitment and increases the preference
for flexibility.
e) Flexibility and robustness are embedded in the finance definition of an option: the
right but not the obligation
f) Close relationships exist between uncertainty and flexibility, liquidity and learning.
6) Usefulness of flexibility
Chapter 5 shows the wide range of contexts in which flexibility is found. Chapter 6
discusses and supports its use as a decision criterion (as opposed to optimality)
7) Conditions for usefulness and downside
not useful is translated into its converse in table 6.4, to show that capacity planning
These three conditions are suggestive but not guaranteed, i.e. the mere existence of
all types of flexibility, and indeed, all degrees of flexibility as useful or valuable. In
other words, there may be a limit to the usefulness of flexibility. There is also no
8) Operationalisation of flexibility
planning, etc.
9) Measuring flexibility
Instead of developing a single best overall measure, this thesis has found two
groups of measures which meet the criteria for measuring flexibility. Partial
measures form the largest group in the literature, and they support the classification
of indicators. Expected value based measures are useful, but caution is needed as
further use.
inventory control (Appendix D). This example shows another way to assess
flexibility.
it. Chapter 8 uses the terminology of Chapter 7 to develop practical guidelines for
number of choices and states. The practical guidelines consist of four steps:
influence diagrams, and assess with indicators and expected value-based measures
Table 9.2 summarises the questions raised in Chapter 4 and answers concluded in
Table 9.2 Flexibility
[Table 9.2. Entries include expected value and entropic measures (synthesis).]
1) CRITIQUE
A critique refers to an assessment against given criteria. Four critiques have been
a) A critical review of the industry and history of capacity planning provides the basis
assesses all kinds of OR techniques and models against the areas and types of
and evaluated against the criteria compiled from the list of uncertainties in Chapter
emerge. Grouped into three main categories, they are assessed according to a pre-
2) METHODOLOGY
this thesis. Three kinds of methods have been contributed: a) the four step model replication and evaluation method, b) the two-stage modelling experiment,
a) A method of model assessment that is more objective and fair than mere literature review is developed and successfully tested in two separate pilot studies. This four step method is applied in the two-stage modelling experiment and again in the
b) A two-stage case study based modelling experiment is designed and conducted to
approaches.
c) Practical guidelines for structuring and assessing flexibility are developed and
applied to UK ESI capacity planning examples of plant economics and pool price
3) CONCEPTUAL DEVELOPMENT
process of analysis. What emerged from this are a) the conceptual issues in model
terminology.
a) To fill the void in the literature, issues in model synthesis, including a taxonomy
b) Following the same manner of conceptual analysis of flexibility and closely related
words, Chapter 6 analyses the relationship between flexibility and more established
conceptual aspects of flexibility are developed. The following new terms are
4) SYNTHESIS
Three different kinds of synthesis have been examined in this thesis: a) model
synthesis, b) optimisation and decision analysis, and c) expected value and entropy.
While optimisation uses a large amount of data, decision analysis uses little. Optimisation is single-stage and deterministic, while decision analysis is multi-stage and contains
decision analysis. However, incompatible data sizes and interfaces prevented a direct
formulation. Furthermore, the multiple alternative stages of the decision tree are
and uncertainty aspects of flexibility. This at first suggested that a synthesis would
Our investigation into model synthesis and flexibility has opened up a number of
areas for further research. The following three areas are suggested.
ON MODEL SYNTHESIS
We have only examined the synthesis between optimisation and decision analysis
via a model of model. We cannot generalise from this limited experience that we
have found all the conceptual and operational difficulties in model synthesis.
The weak and strong forms of synthesis pertain to the degree of interaction
between the components. Does the level of synthesis (or integration) contribute to
ON FLEXIBILITY
1) Measuring Flexibility
Appendix D showed that flexibility is useful when there is a chance that actual
demand may exceed forecasted demand. The probability that actual demand
exceeds forecasted demand indicates a need for flexibility. This suggests that
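The indicator above, the probability that actual demand exceeds the forecast, can be computed directly once a distribution is assumed. The sketch below assumes normally distributed actual demand, with illustrative figures:

```python
import math

def prob_demand_exceeds_forecast(forecast, mean_actual, sd_actual):
    """P(actual demand > forecast) for normally distributed actual demand,
    using the standard normal CDF via math.erf."""
    z = (forecast - mean_actual) / sd_actual
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Forecast set at the expected level: a 50% chance of under-forecasting,
# signalling a need for flexibility (figures in MW, assumed).
p = prob_demand_exceeds_forecast(50_000, 50_000, 2_000)
```

Raising the forecast above the expected level shrinks this probability, and with it the indicated need for flexibility.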
Gerwin (1993) mentioned that the type of flexibility corresponds to the type of
does not imply that type of flexibility can only be analysed with respect to (area of)
assessment may be helpful, such as some kind of multi-attribute ranking and
trade-off to score the value of flexibility with respect to each of the key uncertainties.
The uncertainty-to-flexibility mapping assumes a one-to-one correspondence. How
to flexibility mapping.
We may need to use multi-attribute weighting and ranking. Guidelines are required
to its triggering uncertainty and reflected by the structure of the decision tree, i.e.
ON CAPACITY PLANNING
expectations about future market conditions. These models are thus important for
investment with long payout times, 2) low resale or scrap value of newly installed
should be applicable to capacity planning problems elsewhere, particularly in
electricity supply industries of other countries as they all share the unique features
The modelling approach is still necessary for completeness, and model synthesis is
a means to this. To hedge against model unease and to cope with uncertainties,
flexibility becomes necessary. This thesis has addressed the two themes separately.
To use model synthesis and flexibility together for model completeness and
REFERENCES
Allan, R.N. and R. Billinton (1992) Probabilistic methods applied to electric power
systems --- are they worth it? Power Engineering Journal, Vol. 6, No. 3, May, pp. 121 -
129
Amagai, Hisashi; Pingsun Leung (1989) Multi-Criteria Analysis for Japan's Electric
Power Generation Mix, Energy Systems and Policy, Vol. 13, No. 3, pp. 219 - 236
Anastasi, Anne (1990) Reaction: Diversity and Flexibility, The Counseling Psychologist,
Vol. 18, No. 2, pp. 248 - 261
Anders, George J. (1990) Probability Concepts in Electric Power Systems, John Wiley
and Sons
Atkinson, John (1985) Flexibility, Uncertainty and Manpower Management, IMS Report
No. 89, Institute of Manpower Studies
Atkinson, John (1989) Management Strategies for Flexibility and the Role of the Trade
Unions, IMS Paper No. 154, Institute of Manpower Studies
Balci, Osman (1986) Requirements for Model Development Environments, Computers and
Operational Research, Vol. 13, No. 1, pp. 53 - 67
Balson, W.E.; S.M. Barrager (1979) Uncertainty Methods in Comparing Power Plants,
EPRI, FFAS-1048 Technical Planning Study 78 - 797
Barbier, Edward; David Pearce (1990) Thinking economically about climate change,
Energy Policy, Vol. 18, No. 1, January/February, pp. 11 - 18
Baughman, M.L.; D.P. Kamat (1980) Assessment of the Effect of Uncertainty on the
Adequacy of the Electric-Utility Industry's Expansion Plans, 1983 - 1990, EPRI Project
EA 1446, 1153-1, Electric Power Research Institute, December 1979
Baughman, Martin L.; Paul Joskow (1976) Energy Consumption and Fuel Choice by
Residential and Commercial Consumers in the United States, Energy Systems and Policy,
Vol. 1, No. 4, pp. 305 - 323
Beaver, Ron (1993) Structural Comparison of the Models in EMF12, Energy Policy, Vol.
21, No. 3, March, pp. 238 - 248
Beck, P.W. (1982) Corporate Planning for an Uncertain Future, Long Range Planning,
Vol. 15, pp. 12 - 21
Bernardo, J.J.; Z. Mohamed (1992) The measurement and use of operational flexibility in
the loading of Flexible Manufacturing Systems, European Journal of Operational
Research, Vol. 60, No. 2, pp. 144 - 155
Berrie, T.W.; D. McGlade (1991) Electricity planning in the 1990s, Utilities Policy, Vol.
1, No. 3, April, pp. 199 - 211
Bonder, S. (1979) Changing the future of operations research, Operations Research, Vol.
27, pp. 209 - 224
Borison, A.B.; B.R. Judd, P.A.Morris, E.C. Walters (1981) Evaluating R&D Options
Under Uncertainty, Volume 3: An Electric-Utility Generation-Expansion Planning
Model, EPRI EA-1964, Vol 3 Research Project 1432-1, EPRI
Borison, Adam B.; Peter A. Morris, Shmuel S. Oren (1984) A State-of-the-World
Decomposition Approach to Dynamics and Uncertainty in Electric Utility Generation
Expansion Planning, Operations Research, Vol. 32, No. 5, pp. 1052 - 1068
Borison, Adam Bruce (1982) Optimal Electric Utility Generation Expansion Under
Uncertainty, PhD Thesis, Stanford University
Box, G.E.P.; N.R. Draper (1987) Empirical Model Building and Response Surfaces, John
Wiley, New York
Boyd, Robert; Roderick Thompson (1980) The Effect of Demand Uncertainty on the
Relative Economics of Electrical Generation Technologies with Differing Lead Times,
Energy Systems and Policy, Vol. 4, No. 1-2, pp. 99 - 124
Brill, E. Downey, Jr; John M. Flach; Lewis D. Hopkins; S. Ranjithan (1990) MGA: A
Decision Support System for Incompletely Defined Problems, IEEE Transactions on
Systems, Man, and Cybernetics, Vol. 20, No. 4, pp. 745 - 757
Brown, Rex V.; Dennis V. Lindley (1986) Plural Analysis: Multiple Approaches to
Quantitative Research, Theory and Decision, Vol. 20, pp. 133 - 154
Bunn, Derek W. (1984) Applied Decision Analysis, John Wiley and Sons
Bunn, Derek W.; Ahti A. Salo (1993) Forecasting with Scenarios, European Journal of
Operational Research, Vol. 68, No. 3, August, pp. 291 - 303
Bunn, Derek W.; Eric R. Larsen (1992) Sensitivity of Reserve Margins to Factors
Influencing Investment Behaviour in the Electricity Market of England and Wales, Energy
Policy, Vol. 20, No. 5, May, pp 420 - 429
Bunn, Derek W.; Erik R. Larsen; Kiriakos Vlahos (1991) Modelling the Effects of
Privatisation on Capacity Investment in the UK Electricity Industry, London Business
School, November
Bunn, Derek W.; Erik R. Larsen; Kiriakos Vlahos (1993) Complementary Modelling
Approaches for Analysing Several Effects of Privatization on Electricity Investment,
Journal of the Operational Research Society, Vol. 44, No. 10, pp. 957 - 971
Bunn, Derek W.; Kiriakos Vlahos (1989a) Evaluation of the Long-term Effects on UK
Electricity Prices following Privatisation, Fiscal Studies, Vol. 10, No. 4, pp. 104 - 116
Bunn, Derek W.; Kiriakos Vlahos (1989b) Evaluation of the Nuclear Constraint in a
Privatised Electricity Supply Industry, Fiscal Studies, Vol. 10, No. 1, pp. 41 - 52
Butler, Timothy; Kirk Karwan, James Sweigart (1992) Multi-Level Strategic Evaluation of
Hospital Plans and Decisions, Journal of the Operational Research Society, Vol. 43, No.
7, pp. 665-675
Carlsson, Bo (1989) Flexibility and the Theory of the Firm, International Journal of
Industrial Organisation, Vol. 7, pp. 179 - 203
Cazalet, E.G.; C.E. Clark, T.W. Keelin (1978) Costs and Benefits of Over/Under
Capacity in Electric Power System Planning, EPRI Research Project 1107, Electric Power
Research Institute, Palo Alto, California
Chandra, Pankaj; Mihkel M. Tombak (1992) Models for the evaluation of routing and
machine flexibility, European Journal of Operational Research, Vol. 60, pp. 156 - 165
Chapman, C.B.; Dale Cooper (1983) Risk analysis: Testing some prejudices, European
Journal of Operational Research, Vol. 14, No. 3, pp. 238 - 247
CIGRE (1991) Flexibility of power systems: Principles, means of achievement and
approaches for facing uncertainty by planning flexible development of power systems,
Electra, No. 135, Working Group 01 of Study Committee 37, pp. 76 - 101
CIGRE (1993) Dealing with Uncertainty in System Planning - Has Flexibility Proved to be
an Adequate Answer? Electra, No. 151, Working Group 37.10, December, pp. 53 - 65
Clark, Charles E., Jr (Decision Focus Inc) (1985) Decision Analysis in Strategic Planning,
Strategic Management and Planning for Electric Utilities, pp. 39 - 61
Cline, William R. (1992) Global Warming, the Economic Stakes, Institute for
International Economics
Collingridge, David (1979) The Fallibist Theory of Value and its Applications to Decision
Making, PhD Thesis, University of Aston
Côté, G.; M.A. Laughton (1979) Decomposition techniques in power system planning: the
Benders partitioning method, Electrical Power and Energy Systems, Vol. 1, No. 1, April,
pp. 57 - 64
Covello, Vincent T (1987) Decision Analysis and Risk Management Decision Making:
Issues and Methods, Risk Analysis, Vol. 7, No. 2, pp. 131 - 139
Dantzig, G.B. (1955) Linear Programming Under Uncertainty, Management Science, Vol.
1, pp. 197 - 206
DeGroote, Xavier (1994) The Flexibility of Production Processes: A General Framework,
Management Science, Vol. 40, No. 7, July, pp. 933 - 945
Dhar, S.B. (1979) Power System Long-range Decision Analysis Under Fuzzy
Environment, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-98, No. 2,
March/April, pp. 585 - 596
Dixit, Avinash K.; Robert S. Pindyck (1994) Investment Under Uncertainty, Princeton
University Press, Princeton
Dixon, E.C. (1989) Modelling Under Uncertainty: Comparing Three Acid-Rain Models,
Journal of the Operational Research Society, Vol. 40, No. 1, pp. 29 - 40
DOE (1994) The National Energy Modeling System: An Overview, May, Energy
Information Administration, Office of Integrated Analysis and Forecasting, US Department
of Energy, Washington, DC 20585
Dolk, Daniel R.; Jeffrey E. Kottemann (1993) Model integration and a theory of models,
Decision Support Systems, Vol. 9, pp. 51 - 63
Drechsler, F.S. (1968) Decision Trees and the Second Law, Operational Research
Quarterly, Vol. 19, No. 4, pp. 409 - 419
Energy Committee (1990) The Cost of Nuclear Power, Vol. I & II, Session 1989-90,
Fourth Report, HMSO
Energy Committee (1992) Consequences of Electricity Privatisation, HMSO, Vol. 1 & 2,
February
Eppink, D. Jan (1978) Planning for Strategic Flexibility, Long Range Planning, Vol. 11,
pp. 9 - 15
Eppink, Derk Jan (1978) Managing the Unforeseen: a study of flexibility, PhD Thesis,
Free University of Amsterdam
Eschenbach, Ted G. (1992) Spiderplots versus Tornado Diagrams for Sensitivity Analysis,
Interfaces, Vol. 22, No. 6, November/December, pp. 40 - 46
Evans, John Stuart (1982) Flexibility in Policy Formation, PhD Thesis, Aston University
Evans, Nigel (1984) The Sizewell Decision: a sensitivity analysis, Energy Economics,
Vol. 6, No. 1, January, pp. 14 - 20
Evans, Nigel (1984) An economic evaluation of the Sizewell decision, Energy Policy,
September, pp. 288 - 295
Evans, Nigel; Chris Hope (1984) Nuclear Power: Future, Costs and Benefits, Cambridge
University Press, Cambridge
Eyre, N.J. (1990) Gaseous Emissions due to Electricity Fuel Cycles in the United
Kingdom, Energy and Environment Paper, No. 1
Foley, Michael; Dan Prepdall (1990) Siting new power plants: challenges for the 1990s,
Electrical World, Vol. 204, No. 10, October, pp. 58 - 60
Ford, Andrew; Irving W. Yabroff (1980) Defending Against Uncertainty in the Electric
Utility Industry, Energy Systems and Policy, Vol. 4, No. 1-2, pp. 57 - 98
Ford, Andrew; Michael Bull (1989) Using system dynamics for conservation policy
analysis in the Pacific Northwest, System Dynamics Review, Vol. 5, No. 1, Winter, pp. 1 -
16
Gass, Saul I., ed. (1981) Validation and Assessment of Energy Models, Proceedings of a
Symposium held at the National Bureau of Standards, Gaithersburg, Maryland, 19 - 21
May 1980, NBS Special Publication 616
Gellings, Clark W.; Pradeep C. Gupta; Ahmad Faruqui (1985) Strategic Implications of
Demand-Side Planning, Electric Power Research Institute
Gertler, Meric S. (1988) The limits to flexibility: comments on the Post-Fordist vision of
production and its geography, Transactions : Institute of British Geographers, Vol. 13,
pp. 419 - 432
Ghosh, D.; R. Agarwal (1991) Model Selection and Sequencing in Decision Support
Systems, Omega, Vol. 19, No. 2/3, pp. 157 - 167
Goldberg, Michael A.(1989) On Systemic Balance: Flexibility and Stability in Social,
Economic, and Environmental Systems, Praeger Publishers, New York
Goldman, Steven Marc (1974) Flexibility and the Demand for Money, Journal of
Economic Theory, Vol. 9, pp. 203 - 222
Greenhalgh, Geoffrey (1985) Sizewell B: the Central Electricity Generating Board's Case,
Power Tomorrow, Kogan Page
Greenhalgh, Geoffrey (1990) Energy conservation policies, Energy Policy, Vol. 18, No. 3,
April, pp. 293 - 299
Grimston, Malcolm (1993) Issues in the Privatisation of the Electricity Supply Industry,
British Nuclear Industry Forum, presented at the Institute of Energy
Grubb, Michael (1989) The Greenhouse Effect: Negotiating Targets, The Royal Institute
of International Affairs
Gupta, Yash P.; Sameer Goyal (1989) Flexibility of manufacturing systems: Concepts and
measurements, Invited Review, European Journal of Operational Research, Vol 43, pp.
119 - 135
Gustavsson, Sten Olof (1984) Flexibility and Productivity in Complex Production
Processes, International Journal of Production Research, Vol. 22, No. 5, pp. 801 - 808
Hankinson, G.A. (1986) Energy Scenarios- The Sizewell Experience, Long Range
Planning, Vol. 19, No. 5, pp. 94 - 101
Hart, A.G. (1937) Anticipations, Business Planning, and the Cycle, Quarterly Journal of
Economics, pp. 272 - 297
Helm, Dieter and McGowan, Frances (1989) Ch 13 Electricity Supply in Europe: Lessons
for the UK, p. 237- 260, in Helm, Kay, Thompson, ed. The Market for Energy, Clarendon
Press
Helsinki (1991) Proceedings of the Senior Expert Symposium on Electricity and the
Environment, 13 - 17 May, Helsinki, Finland
Henderson, Y. K.; R.W. Kopcke, G.J. Houlihan, N.J. Inman (1988) Planning for New
England's Electricity Requirements, New England Economic Review, January-February,
pp. 3 - 30
Hertz, David B. (1964) Risk Analysis in Capital Investment, Harvard Business Review,
January/February, pp. 95 - 106
Hertz, David; Howard Thomas (1983) Risk Analysis and its Applications, John Wiley and
Sons
Hertz, David; Howard Thomas (1984) Practical Risk Analysis: An Approach through
Case Histories, John Wiley & Sons
Hespos, R.F.; Strassman. P.A. (1965) Stochastic Decision Trees for the Analysis of
Investment Decisions, Management Science, Vol. 11, No. 10, August, pp. B244 - B259
High Performance Systems (1990) ITHINK: the visual thinking tool for the 90s, User's
Guide
Hirshleifer, Jack; John G. Riley (1992) The Analytics of Uncertainty and Information,
Cambridge University Press
Hirst, Eric (1989) Benefits and Costs of Small, Short Lead-Time Power Plants and
Demand-Side Programs in an Era of Load-Growth Uncertainty, March, Oak Ridge
National Laboratory, Tennessee, USA
Hirst, Eric (1990) Benefits and Costs of Flexibility: Short-Lead Time Power Plants, Long
Range Planning, Vol. 23, No. 5, pp. 106 - 115
Hirst, Eric; M. Schweitzer (1990) Electric Utility Resource Planning and Decision
Making: the importance of uncertainty, Risk Analysis, Vol. 10, No. 1, pp. 137 - 146
Hobbs, Benjamin F.; Jeffrey C Honious, Joel Bluestein (1992) What's Flexibility Worth?
The Enticing Case of Natural Gas Cofiring, The Electricity Journal, pp. 37 - 47
Hobbs, Benjamin F.; Jeffrey C. Honious; Joel Bluestein (1994) Estimating the Flexibility
of Utility Resource Plans: An Application to Natural Gas Cofiring for SO2 Control, IEEE
Transactions on Power Systems, Vol. 9, No. 1, May, pp. 167 - 173
Hoeller, Peter; Markku Wallin (1991) Energy Prices, Taxes and Carbon Dioxide
Emissions, OECD Economic Studies, No. 17, OECD, Autumn, pp. 91 - 105
Holling, C.S. (1973) Resilience and Stability of Ecological Systems, International Institute
of Applied Systems Analysis
Holmes, Andrew (1987) A Changing Climate: Environmentalism and Its Impact on the
European Energy Industries, Financial Times Business Information
Holmes, Andrew (1992) Conference Report: European Energy Policy Impact of the Single
Market, Energy World, No 198, May
Horowitz, Ann R.; Ira Horowitz (1976) The Real and Illusory Virtues of Entropy-Based
Measures for Business and Economic Analysis, Decision Sciences, Vol. 7, pp. 121 - 136
Hull, J.C. (1980) The Evaluation of Risk in Business Investment, Pergamon Press, Oxford
Hunt, Sally; Graham Shuttleworth (1993) Forward, option, and spot markets in the UK
Power Pool, Utilities Policy, January, pp. 2 - 8
Huss, W.R.; E.J. Honton (1987) Scenario Planning--What Style Should You Use?, Long
Range Planning, Vol. 20, No. 4, pp. 21 - 29
IEA (1987) International Energy Agency Statistics: Energy Prices and Taxes
IEA (1991) International Energy Agency Statistics: Energy Prices and Taxes, 3rd quarter
1991
Inman, Ronald L.; Jon C. Helton (1988) An Investigation of Uncertainty and Sensitivity
Analysis Techniques for Computer Models, Risk Analysis, Vol. 8, No. 1, pp. 71 - 90
Jones, P.M.S.; G. Woite (1990) Cost of nuclear and conventional baseload electricity
generation, IAEA Bulletin, March
Jones, Ian (1989) Risk Analysis and Optimal Investment in the Electricity Supply Industry,
(reprinted from a 1986 article in Applied Economics), in Helm, Kay, Thompson, eds., The
Market for Energy, Clarendon Press, Chapman and Hall, pp. 214 - 236
Jones, Robert A.; Joseph M. Ostroy (1984) Flexibility and Uncertainty, Review of
Economic Studies, Vol. 51, pp. 13 - 32
Joskow, P.L. and Schmalensee, R. (1983) Markets for power, An Analysis of Electric
Utility Deregulation, Cambridge, Mass, MIT Press
Kahn, Herman; A.J. Weiner (1967) The Year 2000, MacMillan, London
Keeney, Ralph L. (1982) Decision Analysis: An Overview, Operations Research, Vol. 30,
pp. 803 - 838
Keeney, Ralph L.; Alan Sicherman (1983) Illustrative Comparison of One Utility's Coal
and Nuclear Choices, Operations Research, Vol. 31, No. 1, pp. 50 - 83
Keeney, Ralph L; John F Lathrop, Alan Sicherman (1986) An Analysis of Baltimore Gas
and Electric Company's Technology Choice, Operations Research, Vol. 34, No. 1,
January/February, pp. 18 - 39
Klein, Burton H. (1984) Prices, Wages, and Business Cycles: A Dynamic Theory,
Pergamon, New York
Knight, Frank H. (1921) Risk, Uncertainty, and Profit, Houghton Mifflin Company,
Chicago
Kogut, Bruce; Nalin Kulatilaka (1994) Operating Flexibility, Global Manufacturing, and
the Option Value of a Multinational Network, Management Science, Vol. 40, No. 1,
January, pp. 123 - 139
Krautmann, Anthony C.; John L. Solow (1988) Economies of Scale in Nuclear Power
Generation, Southern Economic Journal, Vol. 55, No. 1, July, pp. 70-85
Kreczko, Adam; Nigel Evans, Chris Hope (1987) A decision analysis of the commercial
demonstration fast reactor, Energy Policy, Vol. 15, No. 4, August, pp. 303 - 314
Kulatilaka, Nalin (1988) Valuing the Flexibility of Flexible Manufacturing Systems, IEEE
Transactions on Engineering Management, November, Vol. 35, No. 4, pp. 250 - 257
Kydes, Andy S.; Lewis Rubin (1981) Workshop Report 1: Composite Model Validation, in
Gass, ed (1981), pp. 139 - 148
Leggett, Jeremy, ed (1990) Global Warming, the Greenpeace Report, Oxford University
Press
Levy, D. (1989) The Role of Models in the Planning Process, the Experience of Electricite
de France, Proceedings of the Workshop on Resource Planning Under Uncertainty for
Electric Power Systems, Stanford University
Linstone, H.A. (1984) Multiple Perspectives for Decision Making: Bridging the Gap
between Analysis and Action, North-Holland, New York
Lo, E.O.; R. Campo, F. Ma (1987) Decision Framework For New Technologies: A Tool
For Strategic Planning Of Electric Utilities, IEEE Transactions on Power Systems, Vol.
PWRS-2, No. 4, November, pp. 959 - 967
Maddala, G.S. (A.L. Fletcher & Associates) (1980) Cost Uncertainty in Programming
Models of Electricity Supply, EPRI EA-1636 Research Project 1220-4, Electric Power
Research Institute
Mandelbaum, Marvin; John Buzacott (1990) Flexibility and decision making, European
Journal of Operational Research, Vol. 44, pp. 17 - 27
Mankki, Pirjo (1987) Electric Utility Generation Expansion Planning Under Uncertainty,
Proceedings of the Ninth Power Systems Computation Conference, Cascais, Portugal,
Butterworths
Markandya, A. (1990) Environmental Costs and Power Systems Planning, Utilities Policy,
Vol. 1, No. 1, pp. 13 - 27
Mascarenhas, Briance (1981) Planning for Flexibility, Long Range Planning, Vol. 14, No.
5, pp. 78 - 82
McInnes, Genevieve; Erich Unterwurzacher (1991) Electricity end-use efficiency, Energy
Policy, Vol. 19, No. 3, April, pp. 208 - 216
McKay, Michael D.; Richard J. Beckman; Leslie M. Moore; and Richard R. Picard (1992)
An Alternative View of Sensitivity in the Analysis of Computer Codes, Proceedings of the
American Statistical Association Section on Physical and Engineering Sciences, Boston,
August 9 - 13
McNamara, John R. (1976) A linear programming model for long-range capacity planning
in an electric utility, Journal of Economics and Business, Vol. 28, pp. 227 - 235
Merkhofer, M.W. (1975) Flexibility and Decision Analysis, PhD Thesis, Stanford
University
Merrill, H.M.; F.C. Schweppe (1984) Strategic Planning for Electric Utilities: Problems
and Analytic Methods, Interfaces, Vol. 14, No 1, January/February, pp. 72 - 83
Merrill, Hyde M.; Allen J. Wood (1991) Risk and Uncertainty in Power System Planning,
Electrical Power & Energy Systems, Vol. 13, No. 2, pp. 81 - 90
Merrill, Hyde; Fred Schweppe; David White; Dimitrios Aperjis; Matthew Mettler (1982)
Energy Strategy Planning for Electric Utilities, Part I: SMARTE Methodology, and Part II:
Case Study, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 2,
February, pp. 340 - 355
Miller, Louis W.; Norman Katz (1986) A Model Management System to Support Policy
Analysis, Decision Support Systems, Elsevier Science Publishers B.V., Vol. 2, pp. 55 - 63
Mitnick, Stephen A. (1992) To Scrub or Not to Scrub: the hidden risks of inflexibility,
The Electricity Journal, pp. 44 - 49
Mobasheri, Fred; Lowell H. Orren; Fereidoon P. Sioshansi (1989) Scenario Planning at
Southern California Edison, Interfaces, Vol. 19, No. 5, September/October, pp. 31 - 44
Modiano, Eduardo Marco (1987) Derived Demand and Capacity Planning Under
Uncertainty, Operational Research, Vol. 35, No. 2, March/April, pp. 185 - 197
Morris, W. (1967) On the Art of Modelling, Management Science, Vol. 13, No. 12, pp.
B707 - B717
Munasinghe, Mohan (1990) Energy Analysis and Policy: energy in developing countries,
World Bank
Nuclear Forum (1992) Nuclear Forum: The News Magazine of the British Nuclear
Forum, August/September
O'Brien, F.A.; R.G. Dyson, C. Morris (1992) Multiple Scenario Analysis: What has
Probability Distribution Theory to Offer, No. 49, Warwick Business School Research
Bureau
OECD/NEA (1989) Projected Costs of Generating Electricity from Power Stations for
Commissioning in the Period 1995-2000, OECD/NEA, IEA
OFFER (1992) Review of Pool Prices, December
Orr, Daniel (1967) Chapter 13: Capital Flexibility and Long Run Cost Under Stationary
Uncertainty, in M. Shubik, Studies in Mathematical Economics, pp. 171 - 187
Ottinger, Richard; Nicholas Robinson, David Wooley, David Hodas (1991) Incorporating
the cost of protecting the environment into decisions about electric power, Perspectives in
Energy, Vol. 1, pp. 95 - 114
Palisade Corporation (1992) @RISK: Risk Analysis and Simulation Add-In for Microsoft
Excel
Paribas (1990) The UK Electricity Privatisation, July, Paribas Capital Markets Group,
International Equity Research
Peck, Stephen C.; Deborah K. Bosch, John P. Weyant (1988) Industrial Energy Demand:
A Simple Structural Approach, Resources and Energy, Vol 10, January, pp. 111 - 133
Price, Terence (1990) Political Electricity: What Future for Nuclear Energy?, Oxford
University Press
Raiffa, Howard (1968) Decision Analysis, Addison Wesley
Reisman, Arnold (1987) Some Thoughts for Model Builders in the Management and Social
Sciences, Interfaces, Vol. 17, No. 5, pp. 114 - 120
Richardson, G.P.; A.L. Pugh (1981) Introduction to System Dynamics Modelling with
DYNAMO, Cambridge, Massachusetts, MIT Press
Röller, L.H.; M. Tombak (1990) Strategic Choice of Flexible Production Technologies and
Welfare Implications, Journal of Industrial Economics, Vol. 38, pp. 417 - 431
Rosenhead, Jonathan; Martin Elton; Shiv K. Gupta (1972) Robustness and Optimality as
Criteria for Strategic Decisions, Operational Research Quarterly, Vol. 23, No. 4, pp. 413
- 431
SCE (1992) Twelve Scenarios for Southern California Edison, Planning Review, Vol. 20,
No. 3, pp. 30 - 37
Schaeffer, Peter V.; Louis J. Cherene (1989) The inclusion of spinning reserves in
investment and simulation models for electricity generation, European Journal of
Operational Research, Vol. 42, pp. 178 - 189
Schneeweiß, Christoph; Martin Kühn (1990) Zur Definition und gegenseitigen Abgrenzung
der Begriffe Flexibilität, Elastizität und Robustheit [On the definition and mutual
delimitation of the concepts of flexibility, elasticity, and robustness], Zeitschrift für
Betriebswirtschaftslehre, Vol. 42, May, pp. 378 - 395
Schroeder, Christopher H.; Lyna L. Wiggins, and Daniel T. Wormhoudt (1981) Flexibility
of scale in large conventional coal fired power plants, Energy Policy, Vol. 9, No. 2, June,
pp. 127 - 135
Schweppe, Fred C.; Hyde M. Merrill; William J. Burke (1989) Least Cost Planning:
Issues and Methods, Proceedings of the IEEE, Vol. 77, No. 6, pp. 899 - 907
Sherali, H.D.; A.L. Soyster, F.H. Murphy, S. Sen (1984) Intertemporal Allocation of
Capital Costs in Electric Utility Capacity Expansion Planning under Uncertainty,
Management Science, Vol. 30, No. 1, pp. 1 - 19
Sim, Steven R. (1991) How Environmental Costs Impact DSM, Public Utilities
Fortnightly, Vol. 128, No. 1, July 1st, pp. 24 - 27
Slack, Nigel (1988) Manufacturing systems flexibility - an assessment procedure,
Computer-Integrated Manufacturing Systems, Vol. 1, No. 1, February, pp. 25 - 31
Smith, James E.; Robert F. Nau (1992) Valuing Risky Projects: Option Pricing Theory
and Decision Analysis, Duke University, Fuqua School of Business, Working Paper #9201
Son, Y.K.; C.S. Park (1987) Economic measure of productivity, quality, and flexibility in
advanced manufacturing systems, Journal of Manufacturing Systems, Vol. 6, No. 3, pp.
193 - 206
Sprague, Ralph H. Jr; Eric D. Carlson (1982) Building Effective Decision Support
Systems, Prentice-Hall Inc., New Jersey
Stigler, George (1939) Production and Distribution in the Short Run, The Journal of
Political Economy, Vol. 47, No. 3, pp. 305 - 327
Stoll, Harry G. (1989) Least-Cost Electric Utility Planning, John Wiley & Sons
Strangert, Per (1977) Adaptive Planning and Uncertainty Resolution, Futures, Vol. 9, No.
1, February, pp. 32 - 44
Sullivan, William G.; Wayne Claycombe (1977) Decision Analysis of Capacity Expansion
Strategies for Electrical Utilities, IEEE Transactions on Engineering Management, Vol.
EM-24, No. 4, November, pp. 139 - 144
Thomas, Howard (1972) Decision Theory and the Manager, Pitman Publishing
Thomas, Howard; Danny Samson (1986) Subjective Aspects of the Art of Decision
Analysis: Exploring the Role of Decision Analysis in Decision Structuring, Decision
Support and Policy Dialogue, Journal of the Operational Research Society, Vol. 37, No.
3, March, pp. 249 - 265
Triantis, Alexander J.; James E. Hodder (1990) Valuing Flexibility as a Complex Option,
The Journal of Finance, Vol. XLV, No. 2, June, pp. 549 - 565
UBS Phillips and Drew (1990) The Electricity Industry in England and Wales, August,
UBS Phillips and Drew Global Research Group
UBS Phillips and Drew (1991) Electricity Research: Investing in Electricity, 31 October
UBS Phillips and Drew (1992) National Power and PowerGen: An Offer Refused, 30
January
Ullrich, Maureen F. (1980) Whatever Happened to the Work Ethic? Motivating Employees
in a Changing Society, Montana Business Quarterly, Vol. 18, No. 1, Spring, pp. 14 - 17
UNIPEDE (1988) Electricity Generation Costs: Assessments Made in 1987 for Stations
to be Commissioned in 1995, Sorento Congress, May 30 - June 3
Vickers, John; George Yarrow (1991) Reform of the electricity supply industry in Britain,
An assessment of the development of public policy, European Economic Review, Vol. 35,
pp. 485 - 495
Virdis, Maria R.; Michael Rieber (1991) The Cost of Switching Electricity Generation
from Coal to Nuclear Fuel, Energy Journal, Vol. 12, No. 2, pp. 109 - 134
Vlahos, Kiriakos (1990) Capacity Planning in the Electricity Supply Industry, PhD
Thesis, London Business School
Vlahos, Kiriakos (1991) ECAP: The Electricity Capacity Planning Model User's Manual
Vlahos, Kiriakos; Derek Bunn (1988a) Electricity Capacity Planning Using Mathematical
Decomposition, Electricity Planning Project Research Paper Series, May, London
Business School
Vlahos, Kiriakos; Derek Bunn (1988b) Large Scale Evaluation of Benders Decomposition
for Capacity Expansion Planning in the Electricity Supply Industry, Electricity Planning
Project Research Paper Series, June, London Business School
Ward, S.C. (1989) Arguments for Constructively Simple Models, Journal of the
Operational Research Society, Vol. 40, No. 2, pp. 141 - 153
Watson, Stephen R.; Dennis M. Buede (1987) Decision Synthesis: the principles and
practice of decision analysis, Cambridge University Press, Cambridge
White, D.J. (1969) Viewpoint: Operational Research and Entropy, Operational Research
Quarterly, Vol. 20, No. 1, pp. 126 - 127
White, D.J. (1970) Viewpoint: The Use of the Concept of Entropy in System Modelling,
Operational Research Quarterly, Vol. 21, No. 2, pp. 279 - 281
White, D.J. (1975) Entropy and Decision, Operational Research Quarterly, Vol. 26, No.
1, pp. 15 - 23
Williams, Simon (1990) The Electricity Handbook, June, Kleinwort Benson Securities
Wilson, A.G. (1970) The Use of the Concept of Entropy in System Modelling,
Operational Research Quarterly, Vol. 21, No. 2, pp. 247 - 265
Yu, Oliver S.; Hung-po Chao (1989) Electric Utility Planning in a Changing Business
Environment: Past Trends and Future Challenges, Proceedings of the Workshop on
Resource Planning Under Uncertainty for Electric Power Systems, Stanford University,
California
Zelenovic, Dragutin M. (1982) Flexibility --- a condition for effective production systems,
International Journal of Production Research, Vol. 20, No. 3, pp. 319 - 337