
Modelling Uncertainty

in Electricity Capacity Planning

Anne Ku

Thesis submitted to the University of London

for the degree of Doctor of Philosophy

London Business School

February 1995
ABSTRACT

Capacity planning has always been subject to major uncertainties, but privatisation of the
UK electricity supply industry (ESI) has introduced the additional risks of business and
market failure. To meet these broader modelling requirements, two radically different
approaches characterised by model synthesis and flexibility are investigated.

Ideally, by using more than one technique, model synthesis should be more capable of
meeting the conflicting criteria of comprehensibility and comprehensiveness. The
noticeable trend of building bigger energy models supports this view in practice. A case
study based modelling experiment was conducted to compare replications of traditional
approaches with prototypes of synthesis. The conclusion from this is that the pursuit of
greater model comprehensiveness through model synthesis is an elusive and ultimately
impractical objective.

Rather than rigorous modelling for completeness, flexibility introduces an entirely different
treatment of uncertainty. Flexibility has received much attention lately, but its usefulness
for modelling uncertainty in this context remains under-researched. In this respect, flexibility is
studied 1) as a decision criterion, 2) as a feature of the modelling approach, and 3) in
contrast to robustness.

Although intuitively appealing, flexibility is a vague and multi-faceted concept that
requires much clarification before further application. A cross disciplinary review
identifies its close relationships with more established concepts, the conditions under which
it is useful, and the necessary elements in its definition. These elements translate into
indicators for measuring and modelling flexibility. Practical guidelines for the
operationalisation, structuring, and assessment of flexibility are developed from this
conceptual framework and supported by examples specific to the UK ESI.

The seemingly feasible answer of model synthesis is fraught with conceptual and
operational difficulties. The less obvious concept of flexibility offers a more promising
and useful framework. Instead of modelling uncertainty for completeness, this thesis
promotes modelling flexibility for contingency.

To my parents,

James and Lucy,

complementary but not always compatible

ACKNOWLEDGEMENTS

Writing this thesis was like putting together a big jigsaw puzzle. Without the clues along
the way, the final picture might have appeared much later and perhaps fuzzier. I would
like to thank the kind individuals who supplied these clues and some of the missing pieces.

First of all, I thank my supervisor, Derek Bunn, for his patience and wisdom in guiding me
to the end and for his sustained interest as my thesis evolved. I would also like to thank
Kiriakos Vlahos for suggesting the idea of flexibility in the first place and for his considerable
help and feedback at various stages of my research.

I would like to thank the following individuals for intellectual input: Hans Christian
Reinhardt for stimulating discussions on the concept of flexibility and my research as a
whole; François Longin for application of flexibility and robustness; Jonathan Levie and
Mike Staunton for detailed comments on writing and content; Isaac Dyner for feedback on
model synthesis; and Stephen Watson for clarification of the initial literature review on
flexibility.

On financial support, I acknowledge the PhD Programme's generous contribution towards
my tuition in the early years and fees for conferences (Kiel, DC, and Berlin). I am also
grateful for the support of the Overseas Research Scholarship, Decision Sciences Teaching
Scholarship, Energy Scholarship, and Decision Sciences Faculty for conferences
(Lancaster, Cologne, and York).

I would like to thank the LBS Library staff, in particular, John Hall and Lynne Powell for
inter-library loans. I am also deeply grateful to my friend Zakia Mehdi for lending me her
notebook computer in the final stages and Eduardo Ayrosa for configuring it to my
specifications. I also acknowledge the advice and encouragement of other colleagues and
friends who have shared the PhD experience with me.

Finally, I thank Dave Woodhouse, for emotional support.

Chance favours the prepared mind.

- Louis Pasteur (1822 - 1895)

TABLE OF CONTENTS

Abstract ........................................................................................ 3

Acknowledgements ........................................................................................ 5

Table of Contents ........................................................................................ 7

List of Tables ........................................................................................ 17

List of Figures ........................................................................................ 21

Chapter 1 Preface and Overview

1.1 Motivation ........................................................................................ 25

1.2 Research Questions....................................................................................... 27

1.3 Research Methodology.................................................................................. 28

1.4 Organisation of Thesis .................................................................................. 29

Chapter 2 Introduction: Uncertainties in Power Generation

2.1 Introduction ........................................................................................ 37

2.2 Electricity ........................................................................................ 37

2.3 Industry Structure ........................................................................................ 38

2.3.1 The Privatised UK Electricity Supply Industry..................................... 39

2.3.2 ESI of Other Countries ........................................................................ 46

2.4 Background Developments............................................................................ 48

2.5 Capacity Planning ........................................................................................ 50

2.6 Uncertainty and Types of Uncertainty ........................................................ 53

2.7 Areas of Uncertainty..................................................................................... 60

2.7.1 Plant Economics.................................................................................. 61

2.7.2 Fuel ........................................................................................ 64

2.7.3 Electricity Demand.............................................................................. 66

2.7.4 Technology ........................................................................................ 67

2.7.5 Financing Requirements ...................................................................... 71

2.7.6 Market ........................................................................................ 72

2.7.7 Political and Regulatory ...................................................................... 73

2.7.8 Environment........................................................................................ 75

2.7.9 Public ........................................................................................ 77

2.8 Conclusions ........................................................................................ 78

Part One Model Synthesis for Completeness

Chapter 3 Approaches to Capacity Planning

3.1 Introduction ........................................................................................ 83

3.2 Optimisation ........................................................................................ 84

3.2.1 Linear Programming ........................................................................... 84

3.2.2 Decomposition Methods ...................................................................... 88

3.2.3 Dynamic Programming........................................................................ 90

3.2.4 Stochastic Programming...................................................................... 93

3.3 Simulation ........................................................................................ 96

3.3.1 System Dynamics................................................................................ 97

3.3.2 Scenario Analysis................................................................................ 98

3.3.3 Sensitivity Analysis............................................................................. 102

3.3.4 Probabilistic and Risk Analysis ........................................................... 103

3.4 Decision Analysis ........................................................................................ 105

3.4.1 A Classic Application - the Over and Under Model.............................. 108

3.4.2 An Extension of the Baughman-Joskow Model..................................... 111

3.4.3 Multiple Objectives Under Uncertainty ................................................ 112

3.4.4 Multi-Attribute, Objectives Hierarchy.................................................. 114

3.5 Model Synthesis........................................................................................ 116

3.5.1 Commercially Available Software........................................................ 116

3.5.2 Decision Analysis with Optimisation.................................................... 117

3.5.3 Scenario Generation ............................................................................ 120

3.5.4 Decision Analysis as a Framework....................................................... 122

3.6 Conclusions ........................................................................................ 124

Chapter 4 The Pursuit of Model Synthesis

4.1 Introduction ........................................................................................ 133

4.2 Experimental Protocol................................................................................... 134

4.3 Model Replication and Evaluation................................................................. 135

4.3.1 Rationale ........................................................................................ 135

4.3.2 Method of Replication ......................................................................... 136

4.3.3 Method of Model Evaluation and Comparison...................................... 137

4.3.4 Evaluation Criteria .............................................................................. 138

4.4 Case Study Based Modelling Experiment ...................................................... 141

4.4.1 Case Study ........................................................................................ 141

4.4.2 Stage 1: Three Archetypal Modelling Approaches ............................... 145

4.4.3 Comparison of Approaches.................................................................. 147

4.5 Model Synthesis ........................................................................................ 152

4.5.1 Rationale ........................................................................................ 152

4.5.2 Conceptualisation of Model Synthesis.................................................. 154

4.5.3 Decision Analysis Framework.............................................................. 159

4.5.4 Model of Model................................................................................... 161

4.5.4.1 Introduction ............................................................................... 161

4.5.4.2 Methodology.............................................................................. 162

4.5.4.3 Conclusions ............................................................................... 167

4.5.5 Second Stage Conclusions ................................................................... 168

4.6 Motivation for Flexibility.............................................................................. 171

4.6.1 Completeness and Model Unease...................................................... 171

4.6.2 Coping with Uncertainty .................................................................. 173

4.7 Conclusions ........................................................................................ 174

Appendix A Pilot Study 1:

A Comparison of the Economics of Nuclear, Coal, and Gas Power Plant

Using Sensitivity Analysis and Risk Analysis

A.1 Introduction ........................................................................................ 177

A.2 Modelling Approach ..................................................................................... 179

A.3 The Cost of Electricity Generation ................................................................ 181

A.3.1 Range of Levelised Costs .................................................................... 181

A.3.2 Variability in Cost Components........................................................... 183

A.3.3 Contribution to Cost............................................................................ 183

A.4 Major Components of Cost ........................................................................... 185

A.4.1 Assumptions ....................................................................................... 186

A.4.2 Ranges of Values ................................................................................ 187

A.4.3 Investment ........................................................................................ 188

A.4.4 Operations and Maintenance................................................................ 189

A.4.5 Fuel ........................................................................................ 190

A.4.6 Carbon Tax ........................................................................................ 192

A.4.7 Efficiency ........................................................................................ 194

A.4.8 Load Factor ........................................................................................ 194

A.4.9 Escalation Rates.................................................................................. 195

A.4.10 Life ........................................................................................ 196

A.4.11 Discount Rate ..................................................................................... 197

A.4.12 Consolidating the Range...................................................................... 199

A.5 Sensitivity Analysis....................................................................................... 200

A.5.1 Calculation Method ............................................................................. 201

A.5.2 UK Parameters.................................................................................... 202

A.5.3 Sensitivity to Range............................................................................. 206

A.6 Risk Analysis ........................................................................................ 208

A.6.1 Methodology ....................................................................................... 209

A.6.2 Revised Values.................................................................................... 210

A.6.3 Nuclear ........................................................................................ 211

A.6.4 Coal ........................................................................................ 213

A.6.5 Gas ........................................................................................ 215

A.6.6 Trade-off Curves................................................................................. 217

A.6.7 Impact of Carbon Tax ......................................................................... 219

A.7 Summary and Conclusions............................................................................ 221

Appendix B Stage One: Three Archetypal Approaches: Data

Consolidation, Model Replication, and Evaluation

B.1 Data Consolidation ....................................................................................... 227

B.2 Deterministic Approach ................................................................................ 236

B.2.1 Description of Approach...................................................................... 236

B.2.2 Description of Replication ................................................................... 237

B.2.3 Results of Replication.......................................................................... 245

B.2.4 Conclusions and Extensions................................................................. 249

B.3 Probabilistic Approach ................................................................................. 250

B.3.1 Description of Approach...................................................................... 250

B.3.2 Replication and Results ....................................................................... 252

B.3.3 Extensions of Probabilistic Approach................................................... 256

B.4 Decision Analytic Approach.......................................................................... 257

B.4.1 Description of Approach ..................................................................... 257

B.4.2 Three Prototypes ................................................................................. 259

B.4.3 Marginal Cost Analysis....................................................................... 260

Appendix C A Conceptualisation of Model Synthesis

C.1 Definitions ........................................................................................ 263

C.2 Synergies Between Techniques...................................................................... 264

C.2.1 Decision and Uncertainty Nodes .......................................................... 264

C.2.2 Sensitivity Analysis, Risk Analysis, Decision Analysis......................... 266

C.3 Structuring ........................................................................................ 268

C.3.1 Selection of Components ..................................................................... 268

C.3.2 Ordering ........................................................................................ 270

C.3.3 Linkage ........................................................................................ 271

C.4 Weak and Strong Forms ............................................................................... 274

C.5 Strategies for Synthesis................................................................................. 275

C.5.1 Modular ........................................................................................ 275

C.5.2 Hierarchical ........................................................................................ 275

C.5.3 Evolutionary ....................................................................................... 276

C.5.4 Other Approaches ............................................................................... 277

Part Two Flexibility for Uncertainty

Chapter 5 Cross Disciplinary Review

5.1 Introduction ........................................................................................ 281

5.2 Energy Sector and Electricity Planning.......................................................... 282

5.3 Economics ........................................................................................ 284

5.4 Corporate Planning / Business Strategy......................................................... 285

5.5 Labour Markets ........................................................................................ 286

5.6 Technology / Information Systems / Telecommunications .............................. 287

5.7 Manufacturing ........................................................................................ 288

5.8 Other Areas ........................................................................................ 290

5.9 Observations and Conclusions....................................................................... 290

Chapter 6 Conceptual Framework

6.1 Introduction ........................................................................................ 295

6.2 Conceptual Analysis ..................................................................................... 296

6.3 Flexibility and Robustness ............................................................................ 297

6.3.1 Two Types of Flexibility ..................................................................... 297

6.3.2 Robustness ........................................................................................ 298

6.3.3 Flexibility versus Robustness............................................................... 300

6.4 Flexibility versus Optimality as a Decision Criterion ..................................... 306

6.5 Robustness, Risk, and Regret ........................................................................ 306

6.6 Commitment, Confidence, and Flexibility ...................................................... 308

6.7 The Right But Not the Obligation.................................................................. 309

6.8 Uncertainty and Flexibility............................................................................ 311

6.9 Conditions Under Which Flexibility is Useful ................................................ 313

6.10 Downside of Flexibility................................................................................. 315

6.11 Necessary Elements to Define Flexibility....................................................... 317

6.12 The Concept of Favourability........................................................................ 319

6.13 Operationalising Flexibility ........................................................................... 320

6.14 Conclusions ........................................................................................ 324

Chapter 7 Measuring Flexibility

7.1 Introduction ........................................................................................ 327

7.2 Indicators of Flexibility................................................................................. 330

7.3 Expected Value Measures ............................................................................. 335

7.3.1 Relative Flexibility Benefit .................................................................. 335

7.3.1.1 Single Investment: Flexible vs Inflexible..................................... 336

7.3.1.2 Comparing Investments: Flexibility vs Favourability .................. 340

7.3.2 Normalised Flexibility Measure........................................................... 343

7.3.3 Expected Value of Information ............................................................ 348

7.3.4 Towards an Improved EV Measure ..................................................... 351

7.4 Entropic Measures........................................................................................ 355

7.4.1 For Entropy ........................................................................................ 356

7.4.2 Decision View (Pye, 1978) .................................................................. 360

7.4.3 Systems View (Kumar, 1987).............................................................. 363

7.4.4 Against Entropy .................................................................................. 365

7.5 Comparison of Entropic and Expected Value Measures ................................. 370

7.6 Conclusions ........................................................................................ 372

Chapter 8 Modelling Flexibility

8.1 Introduction ........................................................................................ 377

8.2 Structuring ........................................................................................ 378

8.2.1 Decision Analytic Framework.............................................................. 378

8.2.2 Two Stage Decision Sequence ............................................................. 381

8.2.3 Local and External Events................................................................... 382

8.3 Assessment ........................................................................................ 384

8.3.1 Simple Problems ................................................................................. 385

8.3.2 Complex Problems .............................................................................. 386

8.4 Capacity Planning in the UK Electricity Supply Industry............................... 388

8.4.1 Plant Economics.................................................................................. 389

8.4.2 Pool Price ........................................................................................ 393

8.5 Operationalising Strategies ........................................................................... 396

8.5.1 Partitioning, Sequentiality, Staging ...................................................... 396

8.5.2 Postponement and Deferral .................................................................. 401

8.5.3 Diversity ........................................................................................ 403

8.6 Conclusions ........................................................................................ 406

Appendix D Flexibility and Robustness: Response to Demand Uncertainty

by Over- and Under-Capacity

D.1 Introduction ........................................................................................ 409

D.2 Simple Example: no lead time, demand = supply, planned = actual levels....... 410

D.2.1 Proportional Cost ................................................................................ 412

D.2.2 Flexibility ........................................................................................ 413

D.2.3 Robustness ........................................................................................ 413

D.2.4 Flexibility versus Robustness............................................................... 414

D.2.5 Optimal Policy Iopt ............................................................................... 414

D.2.6 Special Cases ...................................................................................... 414

D.2.7 Relative Costs ..................................................................................... 415

D.3 Extensions of Simple Example ...................................................................... 416

D.3.1 Levels of I ........................................................................................ 416

D.3.2 Cost of Not Meeting Demand Cd .......................................................... 417

D.3.3 Effect of Lead Time T ......................................................................... 417

D.3.4 Risk Attitude....................................................................................... 418

D.3.5 Levels of Qmin, Qmax with respect to Dmin, Dmax...................................... 419

D.3.6 Forecasted Demand versus Actual Demand.......................................... 419

D.3.7 Errors in Forecasting, Modelling, and Planning .................................... 421

D.4 Applications by Further Examples................................................................. 421

D.4.1 Example 1: Current and Savings Accounts.......................................... 422

D.4.2 Example 2: Buying versus Renting...................................................... 423

D.5 The UK Electricity Supply Industry .............................................................. 424

D.6 Conclusions ........................................................................................ 426

Chapter 9 Conclusions

9.1 Main Themes ........................................................................................ 429

9.2 Research Questions and Answers.................................................................. 433

9.3 Research Contributions................................................................................. 441

9.4 Further Research ........................................................................................ 444

References ........................................................................................ 449

LIST OF TABLES

Chapter 1
Table 1.1 Research Questions and Methodology ............................................................36

Chapter 2
Table 2.1 Privatised Structure in England and Wales ....................................................40
Table 2.2 Comparison of Industry Structures................................................................46
Table 2.3 Evolution of Electricity Planning in the USA ..................................................52
Table 2.4 Important Factors in Capacity Planning..........................................................61
Table 2.5 Fuel/Technology Comparisons ......................................................................69
Table 2.6 Model Requirements for Capacity Planning....................................................79

Part One Model Synthesis for Completeness

Chapter 3
Table 3.1 Arguments For and Against Risk Analysis....................................................105
Table 3.2 Steps in Decision Analysis ...........................................................................106
Table 3.3 Pros and Cons of Decision Analysis .............................................................108
Table 3.4 Critique of Techniques .................................................................................126

Chapter 4
Table 4.1 Model Evaluation and Comparison Criteria ..................................................139
Table 4.2 Unprotected but Dominant Utility: National Power .......................................142
Table 4.3 Protected but Competitive Utility .................................................................143
Table 4.4 Unprotected but Encouraged Utility..............................................................144
Table 4.5 Comparison With Respect to Evaluation Criteria..........................................149
Table 4.6 Summary of Approaches..............................................................................150
Table 4.7 Major Concerns in Model Synthesis .............................................................155
Table 4.8 Structuring Issues ........................................................................................157
Table 4.9 Dependent Variables in the Reduced Model ..................................................163
Table 4.10 Independent Variables in the Reduced Model..............................................164
Table 4.11 Difficulties of Model Synthesis Implementation ..........................................169
Table 4.12 Completeness and Unease .......................................................................172

Appendix A
Table A.1 Consolidated Range .................................................................................... 200
Table A.2 UK Parameters ........................................................................................... 203
Table A.3 Base Costs for the UK ................................................................................ 204
Table A.4 Simulation Parameters for Nuclear.............................................................. 212
Table A.5 Simulation Values for Coal ......................................................................... 214
Table A.6 Simulation Values for Gas .......................................................................... 216

Appendix B
Table B.1 Sources of Information................................................................................ 229
Table B.2 Status of Plant ............................................................................................ 231
Table B.3 Existing Plant as at July 1993 ..................................................................... 232
Table B.4 Summary of All Plant in England and Wales NGC System as at July 1993 .. 236
Table B.5 Input Files to ECAP.................................................................................... 255
Table B.6 Output Files from ECAP............................................................................. 256

Part Two Flexibility for Uncertainty

Chapter 5
Table 5.1 Uses of Flexibility ....................................................................................... 292

Chapter 6
Table 6.1 Flexibility and Robustness ........................................................................... 297
Table 6.2 Gerwin's (1993) Methods of Coping With Uncertainty................................. 304
Table 6.3 Response to Areas of Uncertainties in Chapter 2 .......................................... 304
Table 6.4 Mandelbaum (1978) .................................................................................... 315

Chapter 7
Table 7.1 Elements and Indicators of Flexibility...........................................................335
Table 7.2 Annual Costs ................................................................................336
Table 7.3 Comparison of Expected Value Measures.....................................................353
Table 7.4 Equal Entropies for Different Number of States............................................369

Chapter 8
Table 8.1 Problem Categories and Expected Value Measures .......................................386
Table 8.2 Areas of Uncertainties Affecting Costs and Revenues ...................................388

Appendix D
Table D.1 Terminology and Notations .........................................................................411
Table D.2 Lead Time and Cost of Not Meeting Demand ..............................................418
Table D.3 Preferences with respect to Risk Attitude when P(Dt > Qmax) >0 ................419
Table D.4 Conditions for Robustness and Flexibility....................................................420

Chapter 9
Table 9.1 Research Questions and Answers .................................................................435
Table 9.2 Flexibility ..............................................................................................441

LIST OF FIGURES

Chapter 1
Figure 1.1 Organisation of Thesis ..................................................................................30

Chapter 2
Figure 2.1 Privatised Industry Structure in the UK........................................................40
Figure 2.2 Spiral of Impossibility .................................................................................54

Part One Model Synthesis For Completeness

Chapter 3
Figure 3.1 Decomposition Methods................................................................................89
Figure 3.2 Scenario Planning Process ..........................................................................100
Figure 3.3 The Over and Under Model........................................................................110
Figure 3.4 Matrix Model of Decisions and Outcomes..................................................112
Figure 3.5 Decision Tree in SMARTS ........................................................................114
Figure 3.6 Technology Choice Decision Tree ...............................................................115
Figure 3.7 Technology Choice Objectives Hierarchy ....................................................116
Figure 3.8 Optimisation Grid.......................................................................................118
Figure 3.9 Decision Tree with Optimisation Algorithm.................................................120
Figure 3.10 Scenario / Decision Analysis .....................................................................121
Figure 3.11 Decision Tree of New Technology Evaluation ...........................................123
Figure 3.12 SMARTE Methodology ............................................................................124
Figure 3.13 OR Techniques .........................................................................................130

Chapter 4
Figure 4.1 Experimental Protocol ................................................................................135

Appendix A
Figure A.1 Uncertainty Modelling................................................................................181
Figure A.2 Horizontal Analysis of Value Ranges .........................................................182
Figure A.3 Vertical Analysis of Cost Contribution .......................................................184

Figure A.4 Factors Influencing Cost ............................................................................ 185
Figure A.5 Carbon Tax Calculations for Coal-fired Plants ........................................... 193
Figure A.6 Contribution to Final Cost ......................................................................... 204
Figure A.7 UK Coal vs Nuclear Trade-off Curves with $3 Carbon Tax ....................... 205
Figure A.8 UK Coal vs Nuclear Trade-off Curves with $10 Carbon Tax...................... 206
Figure A.9 Coal ............................................................................................. 207
Figure A.10 Nuclear ............................................................................................. 208
Figure A.11 Risk Profiles for Nuclear ......................................................................... 213
Figure A.12 Risk Profiles for Coal .............................................................................. 215
Figure A.13 Risk Profiles for Gas ............................................................................... 217
Figure A.14 Trade-off Curves for Coal, Nuclear, and Gas (no tax) .............................. 218
Figure A.15 Most Likely Case..................................................................................... 218
Figure A.16 Most Expensive Case............................................................................... 219
Figure A.17 Carbon Tax on Coal ................................................................................ 220
Figure A.18 Carbon Tax on Gas ................................................................................. 220
Figure A.19 Modelling Directions................................................................................ 222

Appendix B
Figure B.1 Load Duration Curves for Demand Uncertainty.......................................... 239
Figure B.2 Scenario Generation................................................................................... 240
Figure B.3 Replication of the Probabilistic Approach................................................... 253
Figure B.4 Prototype One: Single Project.................................................................... 260
Figure B.5 Prototype Two: Marginal Cost Analysis .................................................... 261

Appendix C
Figure C.1 Similarities of Techniques.......................................................................... 265
Figure C.2 Risk Analysis and Decision Analysis.......................................................... 267
Figure C.3 Types of Model Linkages........................................................................... 272

Part Two Flexibility for Uncertainty

Chapter 6
Figure 6.1 Conceptual Framework............................................................................... 295

Chapter 7
Figure 7.1 Hobbs' Example.........................................337
Figure 7.2 Expected Conditions ...................................................................................339
Figure 7.3 Investment Y ..............................................................................341
Figure 7.4 General Structure of Normalisation.............................................................345
Figure 7.5 Expected Conditions ...................................................................................346
Figure 7.6 Schneeweiss and Kühn................................................347
Figure 7.7 EVPI ..............................................................................................350
Figure 7.8 Relative Flexibility Benefit..........................................................................350
Figure 7.9 Deterministic EV ........................................................................................352
Figure 7.10 Notation ..............................................................................................357
Figure 7.11 Maximum Entropy as a Function of States ................................................358
Figure 7.12 Entropy and Standard Deviation................................................................358
Figure 7.13 State Discrimination .................................................................................359
Figure 7.14 Decomposition Rule..................................................................................360
Figure 7.15 Decision Tree Transformation for Entropic Treatment...............................366

Chapter 8
Figure 8.1 Decision Tree of Generic Example ..............................................................383
Figure 8.2 Influence Diagram of Generic Example .......................................................384
Figure 8.3 Hirst's (1989) Example ..............................................................387
Figure 8.4 Electricity Planning Example ......................................................................389
Figure 8.5 Plant Economics Influence Diagram............................................................391
Figure 8.6 Plant Economics Decision Tree...................................................................392
Figure 8.7 Pool Price Influence Diagram......................................................................394
Figure 8.8 Pool Price Decision Tree.............................................................................395
Figure 8.9 Partitioning ..............................................................................................397
Figure 8.10 Sequentiality and Staging..........................................................................398
Figure 8.11 Flexibility by Plant Lives ..........................................................................399
Figure 8.12 Postponement and Deferral Decision Tree .................................................402
Figure 8.13 Deferral with respect to Market and Plant Uncertainty...............................403
Figure 8.14 Diversity Influence Diagram .....................................................................404
Figure 8.15 Diversity Decision Tree ............................................................................405

Appendix D
Figure D.1 Costs of holding and production: Ch and Cp ............................................. 412
Figure D.2 Relationship between Iopt and Ch, Cp ........................................................ 416
Figure D.3 Cost of extra production Cxp ..................................................................... 420

Chapter 9
Figure 9.1 Research Messages..................................................................................... 434

CHAPTER 1

Preface and Overview

1.1 Motivation

The electricity supply industry (ESI) is one of the most capital intensive of all industries, with huge investments in power stations that are expected to pay off over several decades. Long construction lead times and operating lives imply the need for capacity planning to determine the types, sizes, and timing of new plants to be built as older plants are retired. These decisions are made in the face of great uncertainty, and the often irreversible commitments are translated into future costs. In the presence of rapidly changing technology, economics, and shifting social attitudes, new commitments may quickly become obsolete and inadequate. The privatisations of recent years, especially in the UK, have added to the uncertainties by introducing business risk into a previously safe market. New responsibilities and priorities in the UK and elsewhere have redefined what constitutes capacity planning, from once an engineering-dominated operational task to the domain of strategic decision making where responsiveness and other measures against complex and uncertain environments are paramount.

A more comprehensive treatment of uncertainty is now necessary given these costly implications. One trend has been to build larger energy models, e.g. the NEMS project in the US (DOE, 1994). However, larger models are more difficult for the policy-maker to understand and manage. Indeed, traditional approaches, especially those employing single techniques, have difficulty meeting the conflicting criteria of comprehensiveness and comprehensibility. One way to resolve this conflict is to combine techniques or models through model synthesis, overcoming the deficiencies of individual techniques while exploiting the synergies between them. Other than developing new modelling languages to this end, the literature has little to offer on the method and manner of synthesis.

Elsewhere, the need for flexibility is frequently mentioned as a response to uncertainty. Flexibility is intuitively appealing as it communicates a practical means of dealing with uncertainty. Rather than modelling uncertainty more comprehensively or accurately, the emphasis is on the ability to react, such as having different options available. Flexible technologies, such as short lead time, modular, dual-purpose plant, are promoted in the electricity planning literature, e.g. CIGRE (1991, 1993). Under great uncertainty, it has been suggested that flexibility is preferred to optimality as a decision criterion (Mandelbaum, 1978). In competitive and uncertain environments, like the manufacturing sector, flexibility has become an important operational objective, not only desirable but also necessary (Slack, 1988). This suggests that flexibility may become more important as the electricity industry becomes more deregulated. However, it is still unclear how flexibility can be defined, measured, and explicitly modelled to be useful to capacity planning under uncertainty.

Although the modelling literature tends to support model synthesis, its feasibility has not yet been fully established, particularly the conceptual and operational requirements. In the context of the UK ESI, it is also necessary to determine the practicality of synthesis, as opposed to existing individual approaches. Similarly, while the electricity industry calls for flexibility, its usefulness in modelling has not been specifically demonstrated. The next section therefore lists a number of specific research questions related to these two themes of model synthesis and flexibility.
1.2 Research Questions

How can we deal with the range of uncertainties more completely and adequately than existing approaches in capacity planning, i.e. to meet the conflicting criteria of comprehensiveness and comprehensibility? This thesis proposes model synthesis and flexibility as two ways to answer this question, with emphasis on the conceptual aspects.

Driving the main argument is a list of questions on the issues involved in model synthesis and flexibility. Many of these questions bring up additional questions throughout the thesis.

1) What are the new requirements for capacity planning in the privatised and restructured UK ESI?

2) What are existing approaches to this problem and how well do they treat these uncertainties?

3) How can we compare different modelling approaches more objectively, systematically, fairly, and in more depth than by reviewing the literature?

4) Is model synthesis feasible and practical for these purposes? What are the conceptual and operational issues involved in model synthesis?

5) What is flexibility? How is it defined? How does it relate to other words and concepts?

6) In what way(s) can flexibility be useful in addressing uncertainty in electricity capacity planning?

7) When, i.e. under which conditions, is it useful or not useful?

8) How can we operationalise flexibility?

9) How can we measure flexibility?

10) How can flexibility be modelled and applied to electricity planning?

1.3 Research Methodology

This thesis employs the following research methods: literature review, experiment, model replication, conceptual development, synthesis, and comparative theoretical evaluations.

A literature review (Chapter 2) of ongoing developments of the UK ESI identifies the new requirements for capacity planning. A classification of areas and types of uncertainties serves as the basis for a critique of existing approaches to capacity planning. An extensive literature review (Chapter 3) of capacity planning models and modelling methods used in the UK and elsewhere reveals the limitations of existing approaches. These two reviews also provide the criteria for subsequent evaluation of models.

To investigate the feasibility and practicality of model synthesis and to compare its performance with existing approaches, a two-staged case study based modelling experiment (Chapter 4) is conducted. This experiment consists of pilot studies (Appendix A) to establish the feasibility of model replication and evaluation, development of a hypothetical case study, consolidation of published data pertaining to the UK ESI, replication of three modelling approaches (Appendix B), conceptualisation of model synthesis (Appendix C), and construction of prototypes. The experimental protocol offers a systematic and thorough method of model replication and evaluation.

To assess the usefulness of flexibility, a cross disciplinary review (Chapter 5) of its applications and interpretations is first conducted. A conceptual development (Chapter 6) of words, relationships, and aspects of flexibility clarifies the confusion found in the literature and unifies the definitions of flexibility. It also provides the basis for subsequent identification of conditions under which it is useful, discussion of the downside of flexibility, illumination of the important concept of favourability, distinction between options and strategies for operationalisation, and determination of the necessary elements in its definition. The definitional elements translate into indicators for measuring flexibility. Three groups of measures (indicators, expected value, and entropy) are evaluated (Chapter 7) against criteria based on the conceptual development of Chapter 6. These indicators also provide the basic terminology and framework for modelling flexibility using decision trees and influence diagrams. Practical guidelines (Chapter 8) for structuring and assessing flexibility are developed. These guidelines facilitate the structuring and assessment of models of strategies for operationalising flexibility in the context of capacity planning in the UK ESI, thus showing the relevance, applicability, and usefulness of flexibility.

1.4 Organisation of Thesis

This thesis consists of two independent and separately argued texts corresponding to two entirely different ways of approaching the problem. Part One investigates model synthesis as a means to completeness in modelling the different uncertainties. Part Two investigates flexibility as a means to compensate for model unease and as a more practical means of coping with uncertainties. Initially, the two themes seem totally unrelated. It is not until Chapter 9 that they are brought together and resolved.

Figure 1.1 shows the organisation of this thesis in terms of chapters (1 to 9) followed by appendices (A to D in italics). The modelling experiment to establish the feasibility of model synthesis is described in Chapter 4 but documented in the first three appendices. Brief summaries of each chapter and appendix follow.

Figure 1.1 Organisation of Thesis

[Flow diagram: Chapter 2 (Introduction: Uncertainties in Electricity Generation) leads into two parallel streams. Part One, Model Synthesis: Chapter 3 (Approaches to Capacity Planning) and Chapter 4 (The Pursuit of Model Synthesis), supported by Appendix A (Pilot Study 1), Appendix B (Three Archetypal Approaches), and Appendix C (A Conceptualisation of Model Synthesis). Part Two, Flexibility: Chapter 5 (Cross Disciplinary Review), Chapter 6 (Conceptual Development), Chapter 7 (Measuring Flexibility), and Chapter 8 (Modelling Flexibility), supported by Appendix D (Flexibility and Robustness). Both parts converge on Chapter 9 (Conclusions).]

Chapter 2 Introduction: Uncertainties in Power Generation

This introductory chapter explains the title of this thesis by defining electricity capacity planning and discussing the uncertainties which affect it. The background developments in the industry, particularly the privatisation and restructuring in the UK, lead to the current emphasis on uncertainty which motivates this research. The term uncertainty is defined and distinguished by type, e.g. internal and external, quantitative and qualitative, etc. Uncertainties that affect capacity planning in the ESI are classified according to area, e.g. demand, fuel, and environment. The chapter also introduces the criteria of adequacy, completeness, feasibility, and usefulness.

PART ONE: Model Synthesis for Completeness

Chapter 3 Approaches to Capacity Planning

Applications of operational research techniques for electricity capacity planning are reviewed. Grouped in the broad categories of optimisation, simulation, and decision analysis, these techniques are briefly defined and evaluated on the basis of completeness of capturing the areas of uncertainty and adequacy of modelling treatment. These techniques are either specific to capacity planning or uncertainty analysis. The applications are either based on individual techniques or several techniques. The chapter concludes that models based on two or more techniques, defined as model synthesis, seem to be more capable of achieving completeness through complementary techniques.

Chapter 4 The Pursuit of Model Synthesis

This chapter documents the research into model synthesis, with supporting details in the first three appendices (A, B, C). The investigation consists of two pilot studies to establish the feasibility of model replication and evaluation, development of a UK ESI-relevant case study to anchor the data and model content, a first stage of three archetypal approaches to gain a more in-depth, fair, and relevant critique than mere literature review, a second stage of conceptualisation to identify conceptual issues in synthesis, and the construction of model synthesis prototypes to identify the operational issues. A decision analysis framework for organising other techniques is proposed, and a 'model of models' is tested to facilitate this framework. A further requirement of compatibility emerges as an important issue. The results of these experiments cast doubt upon the general usefulness of model synthesis as a fully comprehensive modelling approach, especially in terms of practicality in the context of the UK ESI. In view of this, flexibility is proposed as a more useful concept to address the modelling goal of completeness (or the lack of it) and as a practical means to cope with uncertainty.

Appendix A Pilot Study 1

This study is a first attempt in this thesis to address a range of uncertainties that affect plant economics, i.e. levelised or marginal costs of electricity generation. It gives critical insight into the use of sensitivity analysis and risk analysis to model uncertainty. Data from various OECD countries are consolidated from two different reports to determine the ranges of uncertainty for various parameters. This study establishes the feasibility of model replication and sets the level of detail for the subsequent studies.
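The flavour of this kind of risk analysis can be sketched in a few lines: a levelised cost calculation wrapped in a Monte Carlo loop over uncertain parameters. All cost figures and ranges below are invented for illustration, not the appendix's actual OECD data.

```python
import random

def levelised_cost(capital, fuel, om, output_mwh, rate, life):
    """Levelised cost of generation (currency per MWh): annualised
    capital plus annual running costs, divided by annual output."""
    # Capital recovery factor spreads the up-front investment over
    # the plant's life at the given discount rate.
    crf = rate * (1 + rate) ** life / ((1 + rate) ** life - 1)
    return (capital * crf + fuel + om) / output_mwh

# Risk analysis: sample the uncertain parameters from assumed ranges
# and build up a distribution of levelised cost.
random.seed(1)
samples = sorted(
    levelised_cost(
        capital=random.uniform(800e6, 1200e6),  # capital cost
        fuel=random.uniform(30e6, 60e6),        # annual fuel cost
        om=random.uniform(10e6, 20e6),          # annual O&M cost
        output_mwh=3.5e6,                       # annual output (MWh)
        rate=0.08,
        life=30,
    )
    for _ in range(10_000)
)
print(f"median {samples[5000]:.1f}/MWh, "
      f"90% interval {samples[500]:.1f}-{samples[9500]:.1f}/MWh")
```

Sorting the samples gives the cumulative risk profile directly; sensitivity analysis, by contrast, would vary one parameter at a time while holding the others at base values.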

Appendix B Stage One: Three Archetypal Approaches

This is the first stage of the two-staged modelling experiment. Three representative modelling approaches to capacity planning are replicated and evaluated. 1) The deterministic approach consists of scenario analysis and sensitivity analysis of the capacity planning optimisation model. 2) The probabilistic approach is an expanded risk analysis of the optimisation model. 3) The decision analytic approach is based on the use of decision trees and influence diagrams. The associated input data and assumptions for the core optimisation model are given.
Appendix C A Conceptualisation of Model Synthesis

Model synthesis is defined as the use of two or more techniques in some integrated fashion towards model completeness. Conceptual issues are developed for structuring the components and strategies for synthesis. The frequently used terms in this thesis are defined here, e.g. technique, model, and approach.

PART TWO: Flexibility for Uncertainty

Chapter 5 Cross Disciplinary Review

This chapter summarises the extensive cross disciplinary review of definitions, measures, and applications of flexibility since its earliest formal reference in the 1930s, as it has appeared across various industries, business sectors, and academic disciplines. The review is based on two types of sources: 1) journal articles in the last two decades (1970 - 1994) which mention flexibility and closely related words, and 2) specific research studies of flexibility, e.g. previous doctoral dissertations on this topic. It shows the richness of the literature as well as the general confusion in practice of what is meant by flexibility. This review provides the basis for subsequent clarification, analysis, and application in the remaining three chapters.

Chapter 6 Conceptual Development

This chapter clarifies the confusion through analysis. A conceptual analysis relates flexibility and similar words, e.g. adaptability, to more established concepts, e.g. uncertainty, commitment, etc. In particular, the contrast between flexibility and robustness is discussed, with a specific application in Appendix D. A distinction is made between the context-dependent types of flexibility and the context-free elements of flexibility. An uncertainty-to-flexibility mapping is proposed to determine types of flexibility. The important concept of favourability inherent in flexibility is highlighted. These conceptual relationships, together with the conditions under which flexibility is useful, its downside, and the elements in its definition, provide a conceptual framework to link the theoretical and practical aspects of flexibility. Two different kinds of operationalisation of flexibility, via options and strategies, are discussed. Finally, the need for measuring flexibility is established.

Chapter 7 Measuring Flexibility

Using the four step method of Chapter 4, i.e. criteria, replication, evaluation, and comparison, three groups of measures are critiqued for their consistency with the conceptual development of Chapter 6. For the first group, definitional elements from the conceptual framework are translated into indicators, which support the partial measures found in the literature. The second group is based on the decision analysis notion of expected value. Three different expected value measures and a proposal for an improved measure with features of the previous three are assessed. The third group is based on the scientific concept of entropy, and two types of entropic measures are assessed. This detailed analysis concludes that indicators and expected values may be used to measure flexibility but not entropic measures.
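One expected value notion in this territory, the expected value of perfect information (EVPI), can be illustrated with a toy two-option, two-state plant choice. All payoffs and probabilities below are invented; this is a generic decision analysis sketch, not one of the thesis's specific measures.

```python
# cost[option][state]: cost of each plant choice under each demand state.
cost = {
    "large_plant": {"high_demand": 100, "low_demand": 180},
    "small_plant": {"high_demand": 140, "low_demand": 120},
}
p = {"high_demand": 0.6, "low_demand": 0.4}

def expected_cost(option):
    return sum(p[s] * cost[option][s] for s in p)

# Commit now: choose the option with the lowest expected cost.
ev_now = min(expected_cost(o) for o in cost)

# Wait and see: with perfect information, pick the best option per state.
ev_perfect = sum(p[s] * min(cost[o][s] for o in cost) for s in p)

# EVPI bounds what resolving the uncertainty (or keeping options open
# until it resolves) could be worth.
evpi = ev_now - ev_perfect
print(ev_now, ev_perfect, evpi)
```

Here committing now costs 132 in expectation whichever plant is chosen, while acting with perfect information costs 108, so no flexibility mechanism costing more than 24 would be worthwhile in this example.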

Chapter 8 Modelling Flexibility

This chapter makes use of previously defined indicators and expected value measures to structure and assess options and strategies for operationalising flexibility. Practical guidelines for this are developed and tested in a decision analysis framework, as a structuring tool for flexibility but not as an organisational tool for synthesis. It gives the circumstances under which to use indicators and different kinds of expected value based measures. Relevance to electricity planning is illustrated by a decision model of plant economics. Relevance to the UK ESI is illustrated by a model of pool price. Four ways to operationalise flexibility are examined with respect to these guidelines.

34
Appendix D Flexibility and Robustness: Response to Demand Uncertainty

The concepts of flexibility and robustness are applied to an analysis of over and under capacity in production and inventory control, i.e. supply versus demand. Measures of robustness and flexibility are derived with respect to costs of over and under capacity. The original example is extended with additional detail to other applications.
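The flavour of the over/under capacity trade-off can be sketched numerically: when the penalty for unmet demand exceeds the penalty for idle capacity, the best single commitment shifts above mean demand. The scenario set, penalty costs, and function names below are illustrative assumptions, not the appendix's notation.

```python
# Demand scenarios (MW) with probabilities, and asymmetric unit
# penalties for over- and under-capacity (all figures invented).
P_DEMAND = {80: 0.3, 100: 0.4, 120: 0.3}
C_OVER = 1.0    # cost per MW of idle capacity
C_UNDER = 3.0   # cost per MW of unmet demand

def expected_penalty(capacity):
    """Expected mismatch cost of committing to a given capacity."""
    return sum(
        prob * (C_OVER * max(capacity - d, 0)
                + C_UNDER * max(d - capacity, 0))
        for d, prob in P_DEMAND.items()
    )

# The least-cost single commitment under these asymmetric penalties.
best = min(range(80, 121), key=expected_penalty)
print(best, expected_penalty(best))
```

With under-capacity three times as costly as over-capacity, the expected penalty is minimised by building to the highest demand scenario rather than the mean; a flexible supplier who could adjust after observing demand would avoid the mismatch cost altogether.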

Chapter 9 Conclusions

The final chapter (9) summarises the main conclusions and research contributions. It also suggests directions for further research.

The following table (1.1) provides a simple location guide of research questions and methodology.

Table 1.1 Research Questions and Methodology

1) What are the new requirements for capacity planning in the privatised and restructured UK ESI?
   Location: Chapter 2. Methodology: literature review, classification.

2) What are existing approaches to this problem and how well do they treat these uncertainties?
   Location: Chapter 3, Appendix B. Methodology: literature review, evaluation, critique.

3) How can we compare different modelling approaches more objectively, systematically, fairly, and in more depth than by reviewing the literature?
   Location: Chapter 4, Appendices A and B. Methodology: proposal, feasibility studies, experiment.

4) Is model synthesis feasible and practical for these purposes? What are the conceptual and operational issues involved in model synthesis?
   Location: Chapter 4, Appendix C. Methodology: experiment, conceptualisation.

5) What is flexibility? How is it defined? How does it relate to other words and concepts?
   Location: Chapters 5, 6. Methodology: literature review, analysis.

6) In what way(s) can flexibility be useful in addressing uncertainty in electricity capacity planning?
   Location: Chapters 5, 6. Methodology: literature review, conceptual development, analysis.

7) When, i.e. under which conditions, is it useful or not useful?
   Location: Chapter 6. Methodology: analysis.

8) How can we operationalise flexibility?
   Location: Chapters 6, 7, 8. Methodology: literature review, application.

9) How can we measure flexibility?
   Location: Chapter 7, Appendix D. Methodology: literature review, analysis, critique.

10) How can flexibility be modelled and applied to electricity planning?
    Location: Chapters 7, 8. Methodology: application, development of guidelines.
CHAPTER 2

Introduction: Uncertainties in Power Generation

2.1 Introduction

This chapter explains the title of this thesis, motivates and describes the research problem, defines the main terms and issues, and sets the agenda for the rest of the thesis. Beginning with the importance of electricity (section 2.2), it introduces the structure of the industry (section 2.3) and the background developments (section 2.4) that have led to the current emphasis on uncertainties. Capacity planning is defined in section 2.5. Uncertainty is defined and classified in section 2.6. Difficulties in capacity planning are then reviewed with respect to major areas or sources of uncertainty in section 2.7. The final section (2.8) discusses the use of such an extensive classification of uncertainties.

2.2 Electricity

Electricity consumption is growing faster than that of other energy sectors in industrialised economies (Price, 1990). Compared to primary fuels such as coal and oil, electricity is clean and safe. No waste is produced at the user's end. All pollution is caused and borne by the producer, not the end-user. Unlike most other fuels, which require storage and processing, electricity is immediately available and easily controllable at point of use.

Precisely for these attractive characteristics, electricity has become the essential

driver of our economy. The growing number of labour-saving devices powered by

electricity is another reason for our increasing dependence. We expect the

electricity supply to be reliable, i.e. available when we need it, and affordable.

These requirements are summarised in the words of Allan and Billington (1992, page 121): "the primary technical function of a power system is to provide electrical energy to its customers as economically as possible with an acceptable degree of continuity and quality, known as reliability".

Traditionally, centralised regulation of the electricity supply industry was

considered necessary to ensure security of supply and efficiency of production.

Efficiency was achieved through economies of scale. However, many countries

have since restructured and deregulated their ESI to introduce competition, which

was believed to improve cost efficiency, increase diversity of fuel supply, and

provide additional benefits to the consumer.

In the UK, the recent privatisation of public sector companies has changed the priorities of the industry and introduced new responsibilities.

concerned about profitability and maintaining a competitive edge. No longer a

public sector monopoly, a private firm cannot rely on a guaranteed market or

government funding. The new utilities must consider the interests of all

stakeholders, the higher cost of capital, and competitive forces that did not exist

before.

2.3 Industry Structure

The business of providing electricity is characterised by four independent but

related functions: generation, transmission, distribution, and supply. Generation

and transmission are wholesale functions while distribution and supply are

predominantly retail functions. Transmission is a natural monopoly given the high

fixed costs of transmission lines. Distribution is transmission at the retail level, i.e.

delivery to final end-users. Supply consists of metering, maintenance, billing, and

revenue-collection.

Responsibilities in the electricity supply industry vary from country to country

according to the degree of deregulation and vertical integration. In Europe, for

instance, it ranges from the nationalised French industry to the very fragmented

private ownership in the Netherlands and Germany. The industry structure partly

determines the limitations and opportunities open to power generation.

2.3.1 The Privatised UK Electricity Supply Industry

STRUCTURE OF ENGLAND AND WALES

Privatisation was possible in the UK because of favourable conditions such as

political stability, total state ownership, over-capacity (removing the immediate risk

of power shortages), low indebtedness (with little plant built since 1979 to the time

of privatisation), and relatively high efficiency (due to the integrated grid system).

Before 1990, electricity in England and Wales was generated and transmitted

by the Central Electricity Generating Board (CEGB), a monopoly wholly owned by

the government, and, as a result, was able to make long-term investment decisions

for the whole country. The twelve regional area boards then distributed and

supplied the electricity to their respective locally monopolised geographical

sectors.

In 1990-1991, the UK ESI was restructured considerably and privatised with great

emphasis on competition in generation through vertical dis-integration. The single

vertically integrated public utility CEGB was split and its assets sold to the private

sector as two generating companies, National Power and PowerGen, and twelve regional electricity companies (RECs). The nuclear assets were transferred to a

newly formed public sector company Nuclear Electric. The responsibilities for

generation were separated from that of transmission and supply. An independent

regulator, Office of Electricity Regulation (OFFER), was set up to look after the

restructured industry. Table 2.1 and Figure 2.1 illustrate the new structure in

England and Wales. Further details on the structure, organisation, and operation

of this market are given in Williams (1990), EA (1992), Energy Committee (1992),

and Hunt and Shuttleworth (1993).

Table 2.1 Privatised Structure in England and Wales

Generation (65% of price to consumer*)
    Description: the manufacture of electricity and its sale to the electricity spot market (also known as the pool) or by contract with large users and RECs
    Players: National Power, PowerGen, Nuclear Electric, independent power producers (including the 12 RECs, see below), imports from Scotland and France via the link

Transmission (10%)
    Description: the bulk transportation of electricity on a national scale
    Players: National Grid Company

Distribution (20%)
    Description: the local transportation of electricity and delivery to individual consumers
    Players: the 12 Regional Electricity Companies (RECs)

Supply (5%)
    Description: the purchase of electricity from the pool and its sale to consumers
    Players: generation companies (direct supply to large consumers), RECs

Source: Williams (1990)   * June 1990 estimates

Figure 2.1 Privatised Industry Structure in the UK

[Diagram: National Power, PowerGen, Nuclear Electric, independent generators, NGC pumped storage, and imports from Scotland and France sell their output into the pool at the Pool Input Price; the National Grid Company carries the bulk transmission; distribution companies and other suppliers purchase from the pool at the Pool Output Price and sell on to consumers.]

Source: Paribas (1990)

After privatisation and deregulation, the new companies are commercially oriented

and their responsibilities governed by contractual relationships. The obligation to

serve is no longer a statutory requirement but rather a contractual one, as reflected

in the supply contracts between the end-users and the distribution companies, the

short-term and long-term contracts between the generators and the distributors,

contracts between the National Grid Company (NGC) and the generators,

contracts between the NGC and the distributors, direct sales contracts, and implied

contracts with all classes of consumers. The Regulator and the NGC, rather than

generators, are responsible for the reliability of the system and security of supply.

The incentive to invest is contained in the wholesale pricing formula which

determines the price at which electricity is traded.

THE POWER POOL

The NGC was set up to look after the operation of the bulk transmission system as

well as to administer the trading of electricity through the daily power pool. The

daily power pool is intended to serve three purposes. First, it determines which

generating stations are run, based on bid prices rather than the former merit order

ranking of costs. Secondly, the mechanics of the pool determine the cost and

price of electricity traded. Finally, the pool exists to ensure that sufficient

generating capacity is provided to maintain security of supply in the long-term. By

means of signals to existing and potential generators in the form of large or small

capacity credits, these financial incentives encourage generators to plan for

additional capacity. Effectively, these signals supplement the traditional use of

long-term demand forecasting.

The half-hourly spot market for electricity was designed to cope with the non-storability of electricity. It therefore runs every day of the year.

Generators tell the NGC how much electricity each of their generating units can

provide and their bid prices for each half-hour period for the next day. The NGC

ranks the stations in order of bid prices and selects the cheapest to meet the

estimated demand per half hour.
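The ranking rule just described can be sketched in a few lines of Python (an illustrative sketch only: the unit names, capacities, and bid prices are invented, and the real scheduling process also accounts for declared availabilities, transmission constraints, and inflexible plant):

```python
# Minimal merit-order dispatch sketch: rank units by bid price and accept
# the cheapest until forecast demand is met. All figures are invented.

def merit_order_dispatch(bids, demand_mw):
    """bids: list of (unit_name, capacity_mw, bid_price_pence_per_kwh)."""
    schedule, remaining = [], demand_mw
    for name, capacity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        dispatched = min(capacity, remaining)
        schedule.append((name, dispatched, price))
        remaining -= dispatched
    # the System Marginal Price is the bid of the dearest unit scheduled
    smp = schedule[-1][2] if schedule else None
    return schedule, smp

bids = [("nuclear", 1000, 1.0), ("coal", 800, 1.8),
        ("ccgt", 600, 1.5), ("oil", 400, 3.0)]
schedule, smp = merit_order_dispatch(bids, demand_mw=2200)
# nuclear (1000 MW) and ccgt (600 MW) run fully, coal part-loads at 600 MW,
# and the SMP is the coal bid of 1.8 pence/kWh
```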

The NGC produces three different schedules (unconstrained, operational, and out-

turn) for all power stations called to run. The unconstrained schedule ranks all

power stations according to ascending order of their offer prices and descending

order of plant availabilities with respect to forecasted demand. This schedule is

used to calculate the System Marginal Price (SMP). After taking into account

transmission constraints and inflexible plant, the NGC modifies this schedule into

an operational one. Since bidding occurs the day before generation, actual

electricity demand and plant availability may turn out differently from expected.

The actual order of plants called to run is the out-turn schedule. In the calculation of a generator's Pool Input Price, the out-turn availability is used if it is less than the declared availability.

The 24 hour market is governed by the fluctuations of Pool Input Price (PIP) also

known as the Pool Purchase Price (PPP) and the Pool Output Price (POP) also

known as the Pool Selling Price (PSP). The PIP is recalculated every half hour to

reflect the changing cost of generation with the fluctuation of demand. All sellers

of electricity receive the same PIP per unit of electricity, which is expressed in

pence per kilowatt hour. Likewise, the buyers of electricity purchase at the current

POP value. The difference between the PIP and the POP consists of a charge,

called the uplift, which covers all additional costs of keeping reserve power on line,

plant availability, forecasting errors, transmission constraints, and ancillary

services. The size of the uplift varies throughout the day but is typically around

6% of the total pool price.

There are two components of the PIP which reflect the cost of energy production

(energy credit) and the provision of capacity (capacity credit). The cost of energy

production, SMP, reflects the running costs of electricity generation. To cover generators' investment costs, there is a capacity payment equal to

LOLP * (VOLL - SMP), where LOLP is the loss of load probability, VOLL the

value of loss of load, and SMP the system marginal price. The price formulae for

PIP and POP are as follows:

PIP = SMP + capacity payment

POP = SMP + capacity payment + uplift.

These parameters are calculated by the NGC. The schedule of plants and SMPs

also indicate the level of market activity.
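These relationships can be written out directly (a sketch following the formulae above; the numeric values, including a VOLL of 200 pence/kWh, are purely illustrative and not historical pool data):

```python
# Pool price sketch following the formulae above (all prices in pence/kWh):
#   capacity payment = LOLP * (VOLL - SMP)
#   PIP = SMP + capacity payment
#   POP = PIP + uplift

def pool_prices(smp, lolp, voll, uplift):
    capacity_payment = lolp * (voll - smp)
    pip = smp + capacity_payment   # Pool Input (Purchase) Price
    pop = pip + uplift             # Pool Output (Selling) Price
    return pip, pop

pip, pop = pool_prices(smp=2.5, lolp=0.01, voll=200.0, uplift=0.15)
# capacity payment = 0.01 * (200.0 - 2.5) = 1.975, so pip = 4.475, pop = 4.625
```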

The LOLP is the probability within any half hour of demand exceeding available

generating capacity. It accounts for demand uncertainty and the probabilistic

reliability of individual plants in meeting the load as planned. It reflects the balance

of supply and demand. For any half hour, if demand is significantly higher than

available capacity, LOLP will be high. LOLP will be higher during the winter peak

than the summer trough. LOLP is intended to give the incentive to invest in future

plant capacity.
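As a rough illustration of how such a probability could be estimated, the following Monte Carlo sketch draws on hypothetical unit availabilities and a normally distributed demand; it is not the NGC's actual LOLP calculation, and every figure in it is invented:

```python
# Crude Monte Carlo estimate of the loss-of-load probability for a single
# half-hour: each unit is independently available with some probability and
# demand is normally distributed. All figures here are hypothetical.
import random

def estimate_lolp(units, demand_mean_mw, demand_sd_mw, trials=100_000, seed=1):
    """units: list of (capacity_mw, availability_probability)."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        available = sum(cap for cap, p in units if rng.random() < p)
        demand = rng.gauss(demand_mean_mw, demand_sd_mw)
        if demand > available:
            losses += 1
    return losses / trials

units = [(660, 0.95)] * 6   # six 660 MW units, each available 95% of the time
lolp = estimate_lolp(units, demand_mean_mw=3000, demand_sd_mw=200)
# loss of load requires unit outages to coincide with above-average demand,
# so the estimate comes out small (a few per cent here) but non-zero
```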

VOLL (or VLL) is the price that pool members have to pay to ensure that no supply is lost. It was initially set at £2/kWh. VOLL is closely related to the

planning margin. Excess capacity causes planning margins to rise. To reduce the

margins, the NGC can set a lower VOLL so that capacity credit will be low even if

LOLP is high at certain times, thereby discouraging new capacity investment.

Similarly, a high VOLL provides an incentive to invest in new capacity. Unlike in

the former CEGB days, the planning margin is not pre-determined but market

dependent.

The SMP is the offer price of the most expensive station in operation in each half

hour, expressed in pence/kWh. Power stations that are bid into the operational

schedule will receive the SMP. The difference between this SMP and its own

marginal cost of production is the profit. To maximise profit on energy credits, a

utility will bid as many power stations into the merit order as it can while keeping

down costs of generation. To remain profitable, a generator's marginal cost of

production should be lower than the SMP. The merit order is also determined by

plant characteristics such as black-start capability, load following, and geographic

location.

While it was intended for the bulk of trading to be transacted through the pool,

initially buyers and sellers actually entered into "contracts for differences" to

reduce the exposure to pool price volatility. The risk averse attitude that prevails

in an unfamiliar business environment drives the buyers and sellers of electricity to

enter into these contracts which effectively stabilise the price of electricity for both

parties. In the first year of privatisation, 95% of all electricity supply was covered

by such contracts so that less than 5% of electricity was actually traded through

the pool. Back-to-back contracts can also provide the necessary security to raise

funds for independent power producers if gas supplies are similarly hedged by

long-term contract. In the direct sales market, generation companies have direct

contracts with large industrial consumers. The ancillary services market is

beginning to enable generators to sell products generally associated with facilitating

electricity supply, such as keeping plant on standby to start at short notice.

STATE OF THE INDUSTRY

The following summarises the state of the UK ESI four years after privatisation

(Reuters, May 1994). The market shares of the two major generators (duopolists)

National Power and PowerGen have fallen from a total of 73% at vesting to 61%

due to the entry of independent power producers, who own a total of 3,225 MW

of new CCGT plant. The total capacity ratio of National Power and PowerGen

has stayed at roughly two to one (40%: 22%). Nuclear Electric's market share has

increased to 25%. Behaving like wholesale companies, the generation companies

have found lucrative business in supplying large energy consumers through direct

sales, thus taking business from the distribution companies. On the other hand, any

of the distribution companies (RECs) can generate up to 20% of their power needs

and sell it through the grid. They are keen to purchase electricity more cheaply,

reduce business risk, promote competition in generation, and produce profits in

their own right. Likewise, non-energy producers can generate electricity for their

own use, provided they have a licence or exemption as stipulated by the Electricity

Act 1989. Deregulation has paved the way for more diverse solutions and

alternative ways of doing business.

While the effects of privatisation are still being felt, some obvious concerns face the

UK industry today: public awareness of the environment, the cost of cleaning up,

fuel switching from coal to gas, new entrants to the market, electricity trade in the

EC, potential over-capacity, and the future of nuclear power. The definition of

economic plant now includes greater consideration for the environment and

thermal efficiency as well as shorter lead-time and modular units. While it was

intended that competition would lead to greater efficiency and security through diversity of

supply (or suppliers) and greater sensitivity to changing markets (Grimston, 1993),

privatisation has also introduced considerable market uncertainties and higher cost

of capital. In their analysis of the UK market structure, Vickers and Yarrow

(1991) identify several sources of possible problems for potential market failure:

the non-storability of the product (electricity) coupled with fluctuating levels of

demand; vertical coordination among generation, transmission, distribution, and

supply to final customers; dependence of supply on the maintenance of electrical

equilibrium in the network; the industry's capital intensity and level of sunk costs,

investment lead times and short run capacity constraints; natural monopoly

conditions in transmission and distribution; and major environmental externalities.

Since 1991, a number of new issues have surfaced: over-contracting for new gas plants, protective contracts for British Coal, and instability of the LOLP and the

capacity payment system. In an industry involving long-term investments and

long-term fuel supply contracts, it remains to be seen how such long-term decisions

can be driven by the short-term bid prices. The fact that only a portion of all

electricity is actually traded through the pool lends the pool prices an element of artificiality that is unsettling for customers and generators. Furthermore, the

pool is only half a market, that is, supply-side bidding only. These weaknesses and

potential problems reflect market and regulatory uncertainties that have to be

managed through the intervention of the Regulator.

2.3.2 ESI of Other Countries

There are currently three types of ESI structure in the world: unitary integrated state-owned systems, mixed systems with a dominant state incumbent, and decentralised mixed-ownership systems. The degree of fragmentation and competition in the

electricity supply industry varies greatly from country to country. Table 2.2 shows

the industry structures of six high electricity consumption countries.

Table 2.2 Comparison of Industry Structures

                              UK       France   W.Germany   USA         Spain    Japan
Privatised?                   yes      no       yes         yes         yes      yes
Vertically integrated         no*      yes      yes         yes         yes**    yes
  companies?
Pool system?                  yes      no       yes         several     yes      no
Competition in generation?    yes      no       yes         yes***      yes      some
Fragmentation?                average  little   highly      highly      average  average
Regulation style              RPI-X    n/a      fixed       return on   fixed    fixed
                                                tariff      equity      tariff   tariff

Source: Paribas (1990)   * except Scotland   ** except ENDESA   *** depends on region

In a vertically integrated industry, demand-side management is promoted as an

alternative to supply-side planning. Options such as time of use pricing,

interruptible supply, and real time pricing are designed to change consumers' utilisation behaviour so that the utilities can better manage load distribution. These

demand-side alternatives along with energy efficiency practices appeal to American

utilities who may generate and import power as well as supply to the end-user.

With separate ownership of and responsibility for generation and distribution, the

UK companies do not have such incentives and run the risk of over-capacity.

In the US, the price of electricity is monitored by regulatory agencies and

consumer bodies. Operating rate-of-return regulation, the regulators ensure a

fair return to investors. In the UK, the price of electricity is governed by

competitive forces in the electricity market, with the independent regulator OFFER

acting as the watchdog for the industry. The merit order dispatch principle, where

power stations are scheduled to generate electricity in order of lowest operating

cost, still applies. However, unlike the US and most other countries where the

merit order is based on cost, the National Grid Company sets out the order based

on the cheapest offer price quoted in the UK spot market.

The imbalance of electricity needs in Europe is one reason for the cross border

trade in electricity. Excess generation capacity makes France a net exporter of

electricity, to Germany, which faces a high cost of domestic power, and to Italy, which has a shortage of capacity. The UK buys electricity from France through the cross-channel link. The transitions of the economies of Eastern and Central Europe offer

potentially more opportunities for trade. In theory, the European Commission

would like to see the industry broken up in each country into a generating sector, a

supply sector, and a transmission sector, the latter being open to third-party access.

However, complex issues such as ownership of the grid and legislative mechanisms

have to be ironed out, not least to alleviate opposition from member countries and

special interest groups. The growing number of privatisations[1] around the world

support the relevance of this thesis. Electricity supply industries in other countries

are described in Helm and McGowan (1989) and Joskow and Schmalensee (1983).

2.4 Background Developments

History suggests that we are more capable of reacting to and dealing with short-

term uncertainties than long-term ones (Senge, 1990). Indeed, gradual changes

over a long period of time seem to have less impact than sudden changes. While

we may react immediately to a price spike, we seldom react to slow and minor

increases in price. Similarly, while we may be aware of the hazards of

pollution, as long as we are healthy we are not too concerned. We run the risk of

being too reactive to short-term events and too inert to long-term trends. The

history of power generation is full of such tales, and these have determined the

attitudes that power companies have taken to capacity planning and uncertainty.

Before the oil shock of the 1970s, fuel prices and electricity demand were

relatively stable and predictable. For planning purposes, demand was easily

forecast by trend analysis using compounded rates of past growth. There was no

reason to expect the future to depart from this stable pattern.

In 1973 and 1974, oil prices quadrupled and led to unprecedented leaps in related

fuel prices. The resulting energy crisis combined with the effects of a world-wide

recession led to a down-trend in electricity demand. Scheduled investments and

installations in new capacity, which had been planned in anticipation of continued

growth in demand, had to be cancelled or postponed. Short-term measures, such

[1] During the period from February to August 1994, the following countries privatised, started privatising, or announced intentions of privatising all or parts of their electricity supply industries:
Argentina, Australia, Austria, Brazil, Canada, Chile, Congo, Czech Republic, Germany, Hungary,
India, Indonesia, Italy, Kuwait, Mexico, Morocco, New Zealand, Pakistan, Peru, Portugal, Slovakia,
Spain, Sweden, and Thailand. (Reuters, 1994)

as cancelling new orders or deferring construction, were costly ways to cope with deviations from predictions. Over-supply in the 1970s led to caution in

the 1980s. Henderson et al (1988) outline the historic trends in capacity and load

and the resulting problems faced by some of the utilities in the US at the time.

Another disruption to the stable scene was the series of nuclear accidents which led

to a dramatic cancellation of new orders and cast serious doubt on the future of

nuclear power. The Three-Mile Island accident and the Chernobyl disaster caused

negative public reaction and government response. Public concern for health and

safety rose above the minimum cost objective of capacity planning. The promise of

cheap electricity from nuclear power was questioned as countries like Sweden put

a moratorium on their nuclear programme. The anti-nuclear sentiment was partly a

result of the influential green movement that originated in the United States and

Germany. Special interest groups such as Greenpeace and Friends of the Earth

voiced their disapproval of nuclear power and actively campaigned for legislative

action. Detailed historic accounts of environmental and anti-nuclear movements

are given in Holmes (1987) and Price (1990).

Meanwhile, scientific evidence of global warming and ozone depletion brought

attention to the environmental aspects of power generation, especially the

consequences of fossil-fuel burning. These environmental concerns have been

voiced at national and international levels. International collaboration led to the

1987 Montreal Protocol on chlorofluorocarbons (CFC) production controls for the

protection of the ozone layer, the 1988 Toronto Treaty on the reduction of carbon

dioxide emissions, and the 1992 Rio Summit on global environmental concerns.

As emission limits are being discussed at the global level, individual countries are

translating the targets into national legislation. Talk of carbon tax and emissions

trading permits, for example, has made coal-fired plants potentially less

competitive. Squeezed from both sides by the hazards of nuclear and the adverse

environmental effects of fossil fuels, companies are turning to other forms of

energy.

Environmental legislation in the form of emission limits and fuel taxes favours

cleaner and more efficient plant. Confronted with increasingly stringent emission

controls, generating companies in the UK are considering the early closure of less

economic and dirty coal and oil fired stations, life extension of existing Magnox

nuclear power stations, and investment in cleaner plant. Concern for the

environment and competition in the new electricity market have led to the

phenomenon known as the "dash for gas". No longer restricted from use in

electricity generation, natural gas is now a much sought after fuel. The high

availability of cheap gas from the North Sea and the new technology of combined

cycle gas turbines (CCGT) answer the call for lower emissions: CCGT plant produces negligible sulphur and reduced carbon dioxide emissions. Its high thermal efficiency, typically

around 50%, gives greater electricity output and thus value for money. Shorter

construction times make CCGT an attractive and viable choice of new plant as well

as a means for independent power producers to enter this competitive market. In

addition, both extra capacity in Scotland and cheap electricity from France threaten

potential over-capacity in England and Wales. The industry has responded with

announcements of early closure of uneconomic plant and cancellation of new

projects.

2.5 Capacity Planning

Capacity planning has always been necessary because of long lead times and other

characteristics of the power generation business. Traditionally, such planning was

mainly undertaken to ensure sufficient capacity to meet future demand. It is even

more important now to anticipate and prepare for surprises. For example, ten

years ago, there was no uncertainty surrounding the cost of capital, which was

set at 5% in the UK public sector. However, in the run up to privatisation it

became a big issue. The companies are now at risk of business failure and indeed

hostile take-overs in a way that the CEGB was not.

In times of growth, capacity planning is also known as capacity expansion

planning. Planning to determine the right level of capacity to have at any time is

necessary for the replacement of retiring, uneconomic, or unfavourable plant. In

the UK, a number of uneconomic and environmentally unfriendly plant have been

retired prematurely or sold in favour of new gas-fired plant.

Capacity planning in the electricity supply industry is largely governed by three

types of decisions about power plant investment: 1) what to build (choice and mix

of technology), 2) how much to build (capacity), and 3) when to build (timing and

sequencing). The choice of technology depends upon available technologies, their

performance levels, expected operating lives, construction time and cost, fuel cost,

and other external factors. How much and when to build depend on demand

projections, existing capacity, and the retirement schedule. Combined together, the

three decisions constitute what is otherwise known as power system expansion

planning, which is defined by the International Atomic Energy Agency (IAEA,

1984) as "the process of analysing, evaluating, and recommending what new facilities and equipment must be added to the power system in order to replace worn-out facilities and equipment and to meet the changing demand for electricity".

The resulting schedule of investment decisions indicates the dates for installing new

capacity and the dates for retiring old plants over a period of forty to fifty years. In

capacity planning, it is often necessary to focus on specific issues and decisions, such as choosing between a known technology and a new one,

evaluating the costs and benefits of over- and under-capacity, and assessing

whether or not to invest in anticipation of regulatory changes.

Changing business environments have shifted the planning emphasis over the years,

leading to the development of more suitable planning methods, e.g. table 2.3 for

the US. The techniques used for planning purposes have evolved with the needs of

the industry. The dramatic restructuring of the UK ESI implies a similar evolution.

Table 2.3 Evolution of Electricity Planning in the USA

Before 1960s
    Business environment: declining cost; strong growth; favourable and stable regulation
    Planning emphasis: supply reliability; revenue requirement minimisation
    Planning method: optimisation; probabilistic analysis of production

1960s - 1970s
    Business environment: gradual cost increase; continued growth; emerging environmental concerns
    Planning emphasis: economic, reliability and environment trade-offs
    Planning method: cost-benefit analysis of capacity planning margin

1970s - 1980s
    Business environment: sharp cost increases; market slowdown; conservation and PURPA*
    Planning emphasis: demand-side and renewable options; risk management
    Planning method: integrated resource planning; decision analysis

1990s and beyond
    Business environment: high and uncertain cost and adoption of information technologies; moderate growth with increasing heterogeneity and uncertainty; increasing competition, changing structure, and global environmental concerns
    Planning emphasis: enhancing the value of utility services and business to shareholders, customers, and the public-at-large; transaction-based resource options
    Planning method: integrated value-based planning

Source: Yu and Chao (1989)   * Public Utility Regulatory Policies Act 1978

The traditional approach to capacity planning under uncertainty has been that of

fitting plans to forecasts. As described in Nicholson (1971) and others in Chapter

3, this modelling approach primarily consists of running an optimisation algorithm

against a forecast of future electricity demand and fuel supply. Accurate, reliable

forecasts of demand and timely delivery of supply are needed; otherwise, costly consequences result, such as rationing, forced interruptions of power supply, and possible imports of expensive foreign fuel. On the other hand, over-estimates of

demand and over-capacity tie up capital. The cost of investment must be spread

over less output, resulting in higher unit costs. As a result of these uncertainties,

the effects of over- and under-investment are much greater. [See Munasinghe

1990, Ford and Yabroff 1980.]

A compromise has to be made somewhere as accurate forecasting is now far more

difficult than before. Historic accounts of forecasts based on proven methods

demonstrate their inability to predict shocks to the system and the resulting

impacts. In the absence of an ability to hold stocks of electricity, one way to

ensure the security of supply is to keep a reserve margin. This excess of installed

capacity is required to cater for unexpected peaks in demand. The installed

capacity must be greater than expected peak demand to cater for planned

maintenance of plants as well as to cover unforeseen plant breakdowns and

variations in peak demand. Since capacity decisions have to be made years in

advance, the reserve margin is intended to close the gap between actual and

forecast peak demand. More sophisticated approaches (Eden et al, 1981) use

probabilistic concepts such as loss of load probability and expected unserved

energy.
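The basic reserve margin arithmetic can be sketched as follows (the figures are invented for illustration):

```python
# Reserve margin sketch: the margin is the excess of installed capacity over
# the forecast peak demand, expressed as a fraction of that peak.
# The figures below are invented for illustration.

def reserve_margin(installed_mw, forecast_peak_mw):
    return (installed_mw - forecast_peak_mw) / forecast_peak_mw

margin = reserve_margin(installed_mw=60_000, forecast_peak_mw=50_000)
# a 20% planning margin: 10,000 MW held above the forecast peak
```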

2.6 Uncertainty and Types of Uncertainty

Uncertainties are the reasons why planning is difficult and why plans are not

optimal (Dowlatabadi and Toman, 1990). Others view the acute areas of

uncertainty as being floating exchange rates, changing social and political values,

growing environmental awareness, government regulation, technological change,

pollution control regulation, energy cost, and raw material availability. Volatility

and instability of fuel prices lead to more uncertainties. The complex interactions

between different sources of uncertainty require multi-disciplinary considerations

(Berrie and McGlade 1991 and Merrill et al 1982), such as engineering,

environmental, economic, and political trade-offs.

It is important to identify and understand uncertainties, especially the major ones,

because they have potentially negative consequences. Too much or too little

capacity translates to higher costs, as over investment raises electricity prices and

under investment risks black-outs. Based on the information available today,

companies may invest in technologies which under-perform tomorrow due to the

changing circumstances caused by new fuel supplies and new competitive

technologies.

It is also necessary to evaluate the relationships between various sources of

uncertainty as they may lead to further uncertainty and undesirable effects. For

example, Ford (1985) has identified a spiral of impossibility in figure 2.2, where

the plus signs indicate a positive relationship. As higher prices discourage demand,

a utility's capital costs must be spread over a smaller number of kilowatt hours

which in turn leads to still higher prices, inducing a loss of customers. Some of

them turn to building their own power plants; others switch to alternative forms of

energy.

Figure 2.2 Spiral of Impossibility

[Causal-loop diagram linking the price of electricity, customer demand, electricity consumption, alternative sources of electricity, and fuel cost; plus and minus signs indicate positive and negative relationships between the linked variables.]

Source: Ford (1985)
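The mechanics of the loop can be sketched numerically. In this illustrative iteration (all parameters hypothetical), a fuel-cost shock raises the price, demand responds with a constant elasticity, and the fixed capital cost is spread over fewer kilowatt hours; whether the spiral damps out or runs away depends on the size of the elasticity:

```python
# Illustrative sketch of Ford's "spiral of impossibility": a fuel-cost shock
# raises price, price-sensitive demand falls, and fixed capital costs are
# spread over fewer kilowatt hours. All parameters are hypothetical.

def simulate_spiral(capital_cost=1e8, base_demand=5e9, base_price=0.05,
                    fuel_cost=0.035, elasticity=-0.4, years=5):
    """Iterate price -> demand -> unit cost -> price after a fuel-cost shock."""
    price = base_price
    history = []
    for _ in range(years):
        # Constant-elasticity demand response to the current price.
        demand = base_demand * (price / base_price) ** elasticity
        # Unit price = fuel cost plus capital cost spread over output.
        price = fuel_cost + capital_cost / demand
        history.append((price, demand))
    return history

for price, demand in simulate_spiral():
    print(f"price {price:.4f} per kWh, demand {demand / 1e9:.2f} bn kWh")
```

With the modest elasticity assumed here the feedback converges to a higher price and lower demand; a price elasticity greater than one in magnitude would make the spiral self-reinforcing.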

To clarify the meaning of uncertainty, we discuss the literature definition and

view of uncertainty. Then we list different types of uncertainty as viewed from

the literature. We distinguish between types and areas of uncertainty, which

parallel closely with Morgan and Henrion's (1992) distinction between types and

sources of uncertainty. Sources of uncertainty refer to the areas or variables which

are unknown or uncertain, while types of uncertainty refer to the nature,

characteristic, or extent of uncertainty itself. Types of uncertainty give insight into the modelling treatment, i.e. how to model, while areas of uncertainty give insight into the variables that must be included, i.e. what to model. Finally, to

summarise the above, we give our interpretation of uncertainty and classifications,

for the context of this thesis.

LITERATURE DEFINITIONS

Uncertainty is a generic term used to describe something that is not known either

because it occurs in the future or has an impact that is unknown. Uncertainty

relates to the unknown at a given point in time, although it is not necessarily the

unknowable. The term uncertainty has been used to mean an unknown that

cannot be solved deterministically or an unknown that can only be resolved

through time. Schweppe et al (1989) define uncertainties as quantities or events

that are beyond the decision maker's foreknowledge or control. Paraskevopolous

et al (1991) attribute the origins of uncertainties to errors in specification,

statistical estimation of relationships, and assumptions of exogenous variables.

Uncertainty arises because of incomplete information such as disagreement

between information sources, linguistic imprecision, ambiguity, impreciseness, or

simply missing information. Such incomplete information may also come from

simplifications and approximations that are necessary to make models tractable.

Uncertainty sometimes refers to randomness in nature or variability in data.

In the literature, uncertainty and risk are often used interchangeably. Knight

(1921) was the first to distinguish between measurable risk and unmeasurable

uncertainty. Strangert (1977, p. 35) interprets Knight as follows: uncertainty

refers to an unstructured perception of uncertainty and risk to the situation in

which alternative outcomes have been specified and probabilities been assigned to

them. Strangert's concept of pure uncertainty was introduced around 1950, where

different outcomes are stated without reference to probabilities. Building upon

Knights definitions, Barbier and Pearce (1990) note that risk denotes broadly

quantifiable probabilities while uncertainty refers to contexts in which probabilities

are not known. Hertz and Thomas (1984) associate risk with the lack of

predictability about the problem structure, outcomes, or consequences in a decision or planning situation, whereas uncertainty implies a lack of predictability about all

elements of the problem structure. Chapman and Cooper (1983) consider risk to

be the undesirable implication of uncertainty. Risk may also tend to focus on just

bad outcomes, i.e., what can go wrong. Choobineh and Behrens (1992) consider

uncertainty as the manifestation of unknown consequences of change and risk as

the consequence of taking an action in the presence of uncertainty. From an

engineering perspective, Merrill and Wood (1991) observe the causal relationship

between uncertainty and risk: uncertainty refers to factors not under control and

not known with certainty, whereas risk is a hazard because of uncertainty.

TYPES OF UNCERTAINTY

Factors within an organisations control are considered internal factors related to

planning, while those outside are external. Hirst and Schweitzer (1990) describe

internal uncertainties surrounding the type, availability, and costs of new

generating facilities, availability and costs of existing generating facilities,

availability and/or costs of power from life-extension projects, demand-side

management capability, and the availability of renewable energy resources.

External uncertainties apply to load growth, fuel prices, availability and costs of

purchased power, actual savings from demand-side management, regulatory

policies, inflation, interest rates, and environmental constraints.

Generation technologies with different lead times face demand forecasts with

different levels of uncertainty, which Boyd and Thompson (1980) distinguish as

short term and long term. Short-term uncertainties apply to factors which

cause demand to be uncertain on a time scale that is substantially shorter than the

time necessary to build even the shortest lead time power plant. Long term

forecasts are more uncertain due to the additional consideration of factors and

interactions, giving inertia to a substantial component of demand. The latter type

belongs to long-term uncertainties. The difference between the two types of

uncertainties depends upon the extent to which uncertainty in demand changes

during the time necessary to construct a power plant. Some performance

indicators, such as reliability constraints, reserve margin, and loss of load

probability should also be taken into account.

Not all factors are measurable, especially in relation to the way uncertainties are

expressed. The International Energy Agency (IEA, 1987) classifies uncertainty

into the quantifiable and the non-quantifiable. The normal and quantifiable

uncertainties surround technological developments, facility lifetime and

performance, retrofit or retirement of old plants, and the role of alternative energy.

The non-quantifiable uncertainties have to do with environmental considerations,

major accidents, political developments, and regulatory changes. The distinction

between the two is sometimes attributed to the amount of foreknowledge and

control (Merrill and Wood, 1991).

Barbier and Pearce (1990) discuss three types of uncertainties surrounding the

Greenhouse Effect. The scientific uncertainties over precise atmospheric and

geographical climatic responses are only resolved through advances in science.

Nuclear decommissioning and other technological uncertainties fall into this

category. Forecasting uncertainties are to do with predicting future changes and

scale of their effects. Time-lag uncertainties are present in cause and effect

cycles.

IEA (1987) suggests two types of uncertainty that surround the value of a variable.

Whether it is due to stochastic variability or lack of knowledge or both, the result

is that we cannot be certain of its value. Zadeh's fuzzy set theory, described in

Dhar (1979), is concerned with ambiguity resulting from lack of knowledge.

System imprecision due to unavailable information, imprecise data, or simply

linguistic ambiguity gives rise to fuzziness. Choobineh and Behrens (1992) argue

that the principal sources of uncertainty are often non-random in nature and relate

to fuzziness rather than to data frequency.

Gerking (1987) distinguishes between sources of uncertainty and the changing

impact of uncertainty over time. He lists four main sources of uncertainty:

statistical uncertainty (associated with data collection and statistical regression),

interpretational uncertainty (the ability of a model specification to accurately

depict the essential causal relations of the socio-economic system to enable

tracking the past and anticipating the future), decisional uncertainty (the potential

for contemporary and future decisions to influence dependent variables), and

external uncertainty (events that are beyond the control of the system being modelled and the decision makers). Classification according to the changing

impact of uncertainty over time is important in the modelling process. There are

four types: static uncertainty (several alternatives are recognised as possible when

there is no indication that the uncertainty may change over time or that it can be

affected or diminished), quasi-static uncertainty (can be reduced in a negligible

period of time relative to the decision alternatives), dynamic uncertainty (as time

passes, certain developments of external inputs can be successfully removed from

further discussion, i.e. resolution of uncertainty over time), and unspecified

uncertainty (cannot be met with programmed planning measures).

THESIS DEFINITION AND CLASSIFICATION

In this thesis, uncertainty refers to factors that affect the outcomes of decisions

but which are not known at time of planning. There are two kinds of factors: 1)

variables that enter into the planning model, and as such, can be specified,

approximated, or predicted beforehand although the actual resolution of

uncertainty may be quite different from its estimate; 2) variables or events that do

not enter into the planning model, and as such, cannot be predicted or even

foreseen at all. For example, the restructuring of the UK ESI and its implications

could not be foreseen two decades ago and therefore would not have been treated

as an uncertainty at that time.

This definition of uncertainty broadly captures the distinctions between types of

uncertainty, as reviewed above.

We propose a more useful classification of uncertainty, by data, model, the user,

and area of uncertainty, i.e. factors that affect capacity planning.

Data uncertainty refers to the availability and accuracy of data. For example,

although pricing information is freely available in the UK power pool, individual

plant details are often inaccessible due to commercial reasons. Data for modelling

purposes is incomplete, insufficiently detailed, untimely, and possibly unreliable as

there is no requirement to publish or supply such information. Furthermore,

announcements of new plant may be strategically motivated as frequently these are

followed by deferrals or cancellations. These market signal distortions present

uncertainties in the data.

Uncertainty in the model concerns whether its structure, techniques, and assumptions are right.

Uncertainty in the user refers to the hidden agenda the user has not

communicated to the model builder, i.e. what the final decision maker has not told

the developer of the planning model. It also refers to the gap between the model

and the user, i.e. what is not captured by the model but desired by the user.

Areas of uncertainty are classified in the next section.

2.7 Areas of Uncertainty

Factors that are important to capacity planning such as determinants of electricity

prices (variables and alternatives) are listed in table 2.4. These factors differ in

degree of sensitivity and uncertainty. For example, capital cost has a high impact

on electricity prices, but for a well-known technology, it is highly predictable. In

modelling uncertainty, it is important to focus not only on highly sensitive variables but also on highly uncertain ones. Too many existing techniques focus on the former,

as evident in Chapter 3. Here, we discuss those factors that have potentially high

and uncertain impact. Relationships between factors are also important in the

subsequent analysis of uncertainty albeit only briefly mentioned here.

Table 2.4 Important Factors in Capacity Planning

Variables (attributes)

DIRECT: capacity size (total, unit); heat rate (efficiency, conversion rate); discount rate; life; fuel cost; O&M cost (fixed, variable, escalation rate); capital cost (fixed, variable); FGD and other add-on capital equipment; decommissioning (cost or provision); interest during construction; tax (corporate, carbon, etc.); load factor (utilisation rate); load duration curve (merit order); emission factors (CO2, SO2, NOx); Non Fossil Fuel Obligation; Nuclear Fuel Levy; construction time (lead time, delays); availability; performance; external costs (environmental, social).

INDIRECT: accounting methods; health and safety; regulatory; competition; environmental; public attitudes.

Alternatives (by type of fuel and technology)

FOSSIL FUEL: coal (lignite, anthracite; FBC, IGCC); oil; natural gas (CCGT); diesel (OCGT); orimulsion.

NON-FOSSIL FUEL: nuclear (AGR, MAGNOX, PWR); renewables (hydroelectricity, solar, wind, wave, tidal, geothermal, biomass, waste incineration).

DEMAND SIDE MANAGEMENT: time of use; spot pricing; interruptible power supplies; energy efficiency; conservation schemes.

OTHER: import or export of power; combined heat and power; contractual options.

The next sub-sections are grouped into factors that contribute directly to capacity planning (plant economics, demand, fuel, and technology) and indirectly (financing requirements, market, regulatory, environmental, and public-opinion uncertainties).

These areas of uncertainty are by no means exhaustive but provide an insight into

their impacts on capacity planning.

2.7.1 Plant Economics

Central to capacity planning are the components that directly determine the cost of

electricity generation. Each major cost category contains fixed and variable

components, with the variable element tied to utilisation. Fixed cost is composed

mostly of capital cost incurred during the construction phase. Variable costs are

the running costs due mainly to fuel and operations and maintenance (O&M).

Within each type of plant or fuel, the range of technologies varies considerably.

The final costs are also highly affected by load factor, life, plant efficiency, and

discount rate. Factors that are highly variable and need to be considered include

inflation rates, interest rates, technical and regulatory conditions in the electric

utility environment, and the way they change over time.

Capital costs, which are committed years before a power plant begins operating,

must be recovered during its lifetime. Capital costs are sensitive to discount rates

and construction lead times.

Most technologies exhibit an inverse relationship between their capital and

generation costs. Baseload plants have high capital cost and low generating cost as

compared to peaking or peakload plants which have relatively low capital cost and

high generating cost. Because demand fluctuates throughout the day and year,

baseload plants are scheduled to supply the bulk of demand and peakload plants

brought in to meet short-duration peaks in demand. The order in which the

different plants are brought on-line depends on capital and operating costs as well

as technical characteristics of plants. Those plants with high capital costs are also

often difficult to switch on and off quickly, and therefore more suited for baseload.

To meet the restrictions on certain emissions, the less polluting plants tend to get

ordered first.
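The capital/generating cost trade-off can be visualised with screening curves: total annual cost per kW of capacity as a function of utilisation. A minimal sketch, with all cost figures hypothetical:

```python
# Screening-curve sketch: total annual cost per kW of capacity as a function
# of load factor, for a high-capital/low-fuel plant versus a
# low-capital/high-fuel plant. All cost figures are hypothetical.

HOURS_PER_YEAR = 8760

def annual_cost_per_kw(fixed_cost, variable_cost, load_factor):
    """fixed_cost in currency/kW/yr, variable_cost in currency/kWh."""
    return fixed_cost + variable_cost * load_factor * HOURS_PER_YEAR

baseload = {"fixed": 200.0, "variable": 0.01}   # e.g. a coal/nuclear-like plant
peakload = {"fixed": 60.0,  "variable": 0.06}   # e.g. an open-cycle gas turbine

for lf in (0.1, 0.3, 0.5, 0.8):
    b = annual_cost_per_kw(baseload["fixed"], baseload["variable"], lf)
    p = annual_cost_per_kw(peakload["fixed"], peakload["variable"], lf)
    cheaper = "baseload" if b < p else "peakload"
    print(f"load factor {lf:.0%}: cheaper option is {cheaper}")
```

With these numbers the two curves cross at a load factor of about 32%: above it, high-capital, low-running-cost plant is cheaper, which is why such plant is dispatched at baseload while low-capital, high-fuel-cost plant is reserved for short-duration peaks.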

Uncertainties in construction costs and lead times are a major source of concern

because delays are common, often due to licensing complications, public

intervention, financing difficulties, project miscalculation, accidents, or over-

capacity. If additional funding is needed but not available, it could lead to the

undesirable result of project abandonment. The longer the time to commission, the

higher will be the interest during construction (interest on funds provided during

construction period).
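The link between lead time and interest during construction can be sketched by compounding each year's construction outlay to the commissioning date; the 10% rate and the even spending profile are hypothetical assumptions:

```python
# Interest during construction (IDC): each year's construction outlay accrues
# interest until commissioning, so longer lead times raise the effective
# capital cost. The 10% rate and even spending profile are hypothetical.

def interest_during_construction(total_cost, lead_time_years, rate):
    """Assume total_cost is spent in equal year-end tranches; each tranche
    compounds from the year it is spent until commissioning."""
    tranche = total_cost / lead_time_years
    value_at_commissioning = sum(
        tranche * (1 + rate) ** (lead_time_years - year)
        for year in range(1, lead_time_years + 1)
    )
    return value_at_commissioning - total_cost

for lead_time in (4, 8, 12):
    idc = interest_during_construction(1000.0, lead_time, rate=0.10)
    print(f"{lead_time}-year build: IDC adds {idc:.0f} per 1000 of direct cost")
```

The IDC burden grows faster than linearly with lead time, which is why construction delays weigh so heavily on capital-intensive plant.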

Many uncertainties arise during the long planning horizon. Peck et al (1988)

mention the importance of assessing equipment life, which is affected by the cost of

maintenance and new technologies. When certain fuels become less favourable

because of poor environmental performance, unreliable supply, steep cost

escalations, or competition from alternative technologies, the associated power

plants will have to be retired early. Capital costs would then be spread over a shorter

lifespan thereby effectively increasing the generation cost. This is especially true of

new untried technologies, where the initial learning curve is steep.

Power stations are function-specific infrastructure. Once the maximum useful life

is reached, a plant must be decommissioned. Ideally, all of its capital and decommissioning costs will have been recovered during its operating lifetime. Nuclear

plants in particular have the burden of end of life uncertainty which translates into

costs and risks of safety. These concerns are not easily converted into monetary

units even though the common practice is to set aside a provision for

decommissioning. Problems with radioactive waste, safe containment, dismantling,

and reprocessing of spent fuel present uncertainties in the operation of nuclear

plants. Together with the heavy burden of decommissioning costs, these

uncertainties made the privatisation of the UK nuclear power industry too

expensive and risky in 1990.

The capital intensive and function-specific nature of power plants is offset by the

lower costs achieved through economies of scale. In the past, the emphasis was on

building big to achieve economies of scale; but against a background of fluctuating

demand and unstable conditions, the downside risk is expensive. It is more difficult

to achieve economies of scale in a fragmented industry. Such big commitments tie up capital in the presence of rapidly changing technology and competitive forces.

These commitments can be very costly over a long period of time. [For further

discussion, see Merrill et al 1982, Krautmann and Solow 1988, and Hobbs and

Maheshwari 1990.]

2.7.2 Fuel

Uncertainties that affect fuel price and supply are important as fuel related costs

make up the majority of the running costs of fossil fuel plants. National Power

(1992) attributes 53% of their operating cost to fuel, while PowerGen (1992)

claims close to 70%. Adverse shifts in relative fuel prices have a direct impact on

running costs, possibly changing the technology mix and the merit order of power

plants over time.

The oil shocks of the 1970s warned electricity generators of the risks of over-

dependence on a single fuel source. The uncertainty associated with fuel has to do

with political and economic risks of the supplying countries. Price instability, supply and transportation interruptions, and disruption from strikes all contribute to uncertainty of supply (Merrill et al, 1982). Even if a country is self-sufficient in supply, domestic primary fuel suppliers are not insulated from price competition, as the privatised

generators can choose to import from abroad. After several oil shocks and

continued unrest in the oil-producing countries, the world's oil reserves are still

concentrated in the politically unstable areas of the Middle East (65%) and South

America (13%). Gas reserves are distributed unevenly as well, with 38% in the

Soviet republics and 31% in the Middle East. Most countries have their own

reserves of coal with the majority in the USA (24%), Soviet republics (22%), and

mainland China (15%). Secure fuel supplies are necessary for the reliable provision

of electricity at stable and predictable prices. Unfortunately effort to maintain this

through long-term contracts with fuel suppliers cannot prevent disruptions due to

war, strike, embargoes, etc. Fuel diversity is a way to spread the risks and to avoid

over-dependence on one country. Sizewell B, being the first PWR (Pressurised

Water Reactor) to be built in the UK, was approved on grounds of fuel diversity.

The nuclear fuel cycle has its own uncertainties, particularly at the back-end.

Reprocessing of spent fuel is a highly controversial issue, as the risks of fuel transportation raise fears of possible nuclear weapon proliferation. These

concerns show that fuel cost and availability are not the only determinants of

technology choice.

The volatility of fuel price as illustrated in Stoll (1989) makes it difficult to predict

accurately. Oil prices quadrupled in 1974, doubled in 1979, and plummeted by

one-half from 1982 to 1986. Natural gas, as a derivative of oil, followed similar

patterns.

One way to reduce fuel-related uncertainty is to maximise the accuracy of supply-

side forecasts. However, experience (Balson and Barrager 1979 and Energy

Business Review 1991) shows that accuracy varies greatly among the different

types of fuels, as follows.

Forecasts for hydro-electric power supply and consumption are by far the most
accurate. This is probably due to the counting element. That is, sites are identified and
schemes are planned for many years ahead. But this may no longer hold if adverse shifts
in weather patterns are expected from global warming.

Nuclear power forecasts are highly unreliable for several reasons. Many are politically
motivated. There are great political and social uncertainties. Costs of R&D, operations,
and especially dismantling and decommissioning are either badly estimated or ignored.

Little is allowed for environmental problems and the NIMBY (Not In My Back Yard) syndrome that prevails.

Forecasts for natural gas are also unreliable. Until gas discoveries are made, these supplies are simply not included. Similarly with Liquefied Natural Gas (LNG) imports: unless contracts have been signed, they are considered too speculative to include. These event-driven uncertainties cause great discrepancies in forecasts.

Forecasts for coal vary. In the fifties and the sixties, the picture for Europe was over
optimistic due to the failure to anticipate the high cost of production in Germany and the
UK. There was also a panic response to the oil crises of 1973 and 1974. Increased use of
coal in the future depends on the successful development and acceptance of clean fuel
technologies.

Estimates of future oil prices must be guided by careful economic and geological analysis
as they are highly uncertain and subjective. The uncertainties are dependent on reserves,
recovery costs, world demand, and politics at the national and international levels. As a
result, the forecasts are revised almost as soon as new reserves are discovered.

In general, uncertainty in supply forecasts is associated with uncertainties in

technological, environmental, political, and economic forecasts. Traditional

forecasting methods are strong in analysing and using historic trends but weak in

predicting event-based shocks to the system. As seen from the above

discussions, forecasting methods can no longer rely on historic relationships

between electricity consumption and economic and demographic factors such as prices, GNP, population growth, and weather, because the relationships are not

clear or stable.

2.7.3 Electricity Demand

Demand uncertainty is one of the major determinants of future capacity need. The

demand for electricity varies throughout the day, the week, and the year. Since

electricity cannot be stored, there must be sufficient capacity to generate the power

demanded at any time. The traditional approach of fitting plans to demand forecasts

relies on the accuracy of predicting the shape and growth of demand.

At the macro-level, electricity demand growth has been closely correlated with

GNP growth. Other factors that affect uncertainties in demand are given in

Henderson et al (1988) as follows: the relationship between peak load and

economic activity, the price of electricity, technology, specific incentives, usage,

the fundamental growth of demand, forecasts, and baseline projections of supply

and demand.

Other aspects of demand uncertainty (Schroeder et al, 1981) include the

demographic bulge, i.e. whether the next generation will have greater or less

electricity consumption. Demand is also greatly influenced by new technologies,

so-called phantom appliances, the electricity intensities of which are difficult to

predict. The strength and persistence of the energy conservation movement may

counter the effects of greater energy consumption.

Forecasts of long-term electricity demand are translated into load distribution

curves and more commonly shaped into load duration curves which are useful in

merit ordering. Load distribution curves map daily demand against time, e.g.

hours of the day, days of the week, months of the year, etc. Load duration curves

are aggregated and averaged from load distribution curves and are represented as

percentage of demand (or load) against percentage of time. The short-term

scheduling of plants to operate depends on their controllability. Some plants are

better at following the rises and falls of demand. Load demand changes are

difficult to adapt to because of the long lead times in construction. These issues

relating to uncertainty of load growth and shape, and their impacts are discussed in

Stoll (1989) and Ford (1985).
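Constructing a load duration curve from a chronological load series amounts to sorting the loads in descending order and normalising both axes; the sinusoidal daily profile below is synthetic, purely for illustration:

```python
# Build a load duration curve from a chronological (hourly) load series:
# sort loads in descending order and express both axes as percentages.
# The sinusoidal daily demand profile is synthetic, for illustration only.

import math

def load_duration_curve(loads):
    """Return (percent of time, percent of peak load) pairs."""
    ordered = sorted(loads, reverse=True)
    peak, n = ordered[0], len(ordered)
    return [(100.0 * (i + 1) / n, 100.0 * load / peak)
            for i, load in enumerate(ordered)]

# One year of hourly loads following a simple daily cycle.
hourly_load = [30.0 + 10.0 * math.sin(2 * math.pi * h / 24) for h in range(8760)]

curve = load_duration_curve(hourly_load)
print(f"load exceeds {curve[-1][1]:.0f}% of peak for 100% of the time")
```

The flat right-hand end of the curve is the baseload served continuously; the steep left-hand end is the short-duration peak met by peaking plant in the merit order.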

2.7.4 Technology

The choice of technology, i.e. type of plant, is determined by the type of fuel used

and technical performance characteristics like heat rate, emission factors, operating

life, and black-start capability. Table 2.5 lists the benefits and costs of the main

technologies.

Plants with greater thermal efficiency, lower emission levels, better designs, and

improved cost-effectiveness are able to cope with the tougher environmental

standards today. New coal-fired technology, such as AFBC (Atmospheric

Fluidised Bed Combustion), PFBC (Pressurised Fluidised Bed Combustion), and

IGCC (Integrated Gasification Combined Cycle) as described in OECD/NEA

(1989) outperform existing coal-fired plant. Advanced nuclear reactor

technologies promise potential improvements in simplicity, safety, and economy.

AWCR (Advanced Water-Cooled Reactor) and HTGR (High-Temperature Gas

Cooled Reactor) demonstrate greater reliance on passive safety features and

increased use of modularisation to reduce construction costs and shorten schedules.

Moreover, their designs are the results of optimisation in plant size, multi-unit

sizing, standardisation of design and component, and improvement in construction

efficiency. Modularity, instead of economies of scale, is one way of coping with

the uncertainties in electricity demand and fuel supply (Hirst 1990, Ford 1985).

Table 2.5 Fuel/Technology Comparisons

NUCLEAR
Benefits: does not contribute to the greenhouse effect except to a minimum extent during manufacture of its fuel; long-lasting uranium supply.
Costs: public concern about health and safety; cost-effectiveness questionable; eventual profitability very sensitive to delays in construction and to reactor performance; least flexible of energy options, with high capital cost, long lead times, and great infrastructure requirements; reprocessing and nuclear proliferation; long-lived radioactive wastes.

COAL
Benefits: abundant and secure supply, especially domestically (in the U.K.); relatively cheap if used at baseload.
Costs: most environmentally destructive in mining and combustion; emission control expensive and causes further problems of waste disposal.

NATURAL GAS
Benefits: high thermal efficiency; modular; short payback period (short construction time); low emissions.
Costs: concern about long-term resource availability; transport difficulties; supplies concentrated in the Soviet Republics and other politically unstable areas.

ORIMULSION
Benefits: transportable over long distances; high combustion efficiency; 15 to 20 year long-term contracts adjustable to coal prices; not affected by oil prices; can make use of old oil-fired stations or new coal plants; carbon dioxide emissions 20% lower than coal.
Costs: high sulphur content; FGD needed; greater particulate emission problem than heavy fuel oil; dependence on few suppliers.

ENERGY EFFICIENCY (and other demand-side alternatives)
Benefits: most flexible of options; most publicly acceptable; most environmentally benign.
Costs: requires education and promotion; needs incentives; efficient devices have to be sold.

RENEWABLES (energy from wind, solar, biomass, tidal, geothermal, waste incineration)
Benefits: independence from finite sources; availability in small sizes and quick installations; relatively low environmental impact; low operating costs; wide geographic dispersion; tremendous diversity.
Costs: low power density; periodicity of supply (intermittent output); high capital cost; intensive manpower requirements; require subsidies or preferential treatment to be commercially viable; many techniques require further development to improve efficiency, reliability, and cost.

Technological obsolescence is a crucial concern when planning horizons are long,

during which time changes in regulation and environmental standards are expected.

On the other hand, the newest and latest technologies take years before their cost

effectiveness and fuel efficiency are fully accepted. New installations usually have

lower load factors in the first two years, reducing the electricity output and revenue

income. Any new installation will always carry this performance uncertainty. Even

technology that has been accepted in other countries has to undergo much

investigation and understanding before being adopted domestically. Rigorous

technical tests and policy analysis are required for each new technology.

As mentioned earlier, nuclear power has considerable uncertainties surrounding the

back-end of the fuel cycle, e.g. decommissioning, waste treatment, and

containment. In addition to these scientific uncertainties, nuclear power also faces

regulatory uncertainty in the UK, as the government's decisions on various issues

concerning Nuclear Electric are still pending at time of writing. A favourable

decision to the nuclear industry could result in building of new pressurised water

reactors which will come into service early next century. If not, a large amount

of nuclear capacity may need to be retired, unless the industry can bear the

enormous costs through other means. If Nuclear Electric is privatised, it could

diversify its plant mix, e.g. build non-nuclear plant. The 1998 expiry of the Non

Fossil Fuel Obligation (NFFO) and the Fossil Fuel Levy (FFL), which subsidise the

nuclear industry as well as renewable energy, also causes financial concern as it is

uncertain whether the European Commission will allow the extension of these

subsidies. Prospects for future investment in nuclear power stations will be

determined by their ability to compete successfully in the market.

Technology choice is not restricted to supply-side only. Demand-side alternatives,

such as time-of-use pricing, dynamic and spot pricing, improved energy efficiency,

and conservation programmes, are attractive because they could provide a viable

solution to the environmental problems in the long run. However, demand-side

management (DSM), as the US experience shows, requires considerable marketing

effort and consumer education. [For energy efficiency, see McInnes and

Unterwurzacher 1991, Greenhalgh 1990; for demand-side management, see Hirst

1990b, Hayes 1989, Sim 1991, Gellings et al 1985; for other references on this

topic, see Henderson et al 1988, Hobbs and Maheshwari 1990, Berrie and

McGlade 1991.]

2.7.5 Financing Requirements

By the time a new power station is ready to commence operation, it has already

incurred substantial costs in the form of construction borrowings and accumulated

debt. In a climate of economic and regulatory uncertainty, interest rates and

exchange rates have a significant impact on financing costs of capital intensive

projects. The uncertainties surrounding the cost and availability of new debt and

equity capital are discussed in Merrill and Schweppe (1984).

Free competition eliminates the long-term guarantee of sales. Not surprisingly, in a

privatised industry like the UK ESI, uncertainty in demand, fuel prices,

competition, and the power pool induce risk averse investment behaviour which

translates to higher discount rates for capital investment. These impacts have

shifted investment to less capital intensive technologies. Discount rates are used to

translate tomorrow's costs and benefits into today's terms, reflecting market

perception of risks and returns. The choice is not as apparent as in the public

sector where a uniform discount rate was set to value projects but not to reflect

business risk.
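The effect of the discount rate on such investment decisions can be shown with a small, hypothetical calculation (all figures invented for illustration): the same capital-intensive project can appear viable under a low, public-sector style uniform rate and unviable under a higher rate reflecting private-sector risk aversion.

```python
# Present value of a cash flow stream at a given discount rate.
# All figures are invented for illustration only.

def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) to today's terms."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A capital-intensive plant: large outlay now, net revenues later.
flows = [-1000] + [150] * 10   # year-0 capital cost, then 10 years of revenue

low_risk = present_value(flows, 0.05)    # uniform public-sector style rate
high_risk = present_value(flows, 0.10)   # risk-averse private-sector rate

print(round(low_risk, 1), round(high_risk, 1))
```

At 5% the project has a positive net present value (about 158), while at 10% it is negative (about -78), which is precisely why higher discount rates shift investment towards less capital-intensive technologies.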

Uncertain revenue requirements make financial planning difficult and may prevent

utilities from recovering all of their costs. Economic instability and inflation

produce higher than expected interest rates and unprecedented cost increases in

new facilities. Long lead times and regulatory delays are exacerbated by the

inability to recover work in progress. [Other financial concerns are mentioned in

Hobbs and Maheshwari 1990, Jones and Woite 1990, Bunn et al 1991, and Merrill

et al 1982.]

2.7.6 Market

Shortly before privatisation, UBS Phillips and Drew (1990) foresaw increased

market risk for the newly privatised companies. In a privatised industry, the

possibility of business failure is real. Competition should give rise to more efficient

electricity markets, implying tighter reliability standards and reducing the spread

between cost and price. Deregulation also opens the markets to new entrants, thus

increasing the competition and eroding the profit margin. These privatisation

effects are discussed in Bunn et al (1991), Berrie and McGlade (1991), and UBS

Phillips and Drew (1991).

Pool price volatility concerns all participants in this industry as the trading of

electricity effectively replaces the previous dependence on a stable monopolistic

system of load scheduling. Competitive elements introduce tremendous

uncertainty to pool price expectations. Lower declared availability of plant (OFFER, 1992) has led not only to higher capacity payments and higher pool prices but also to increases in uplift, resulting in very high, short-duration price spikes. Immediate demand responses to such high prices, coupled with feedback from other elements, send rippling effects throughout the system. Supply

side disruptions such as plant retirement and reduced plant availability contribute to

increases in capacity payments, which in turn raise prices. The time difference

between bidding and trading causes discrepancies between provisional prices

published one day ahead and final settlement prices.

Unleashing the free market forces in the new UK ESI brings about market

uncertainties that are short-term in nature. Reacting to short-term needs runs the

risk of jeopardising long-term interests. This is one reason why different types of

uncertainties in capacity planning cannot be addressed in isolation.

New entrants and the expiry of fuel-supply contracts threaten the dominant players

in the UK ESI. As the independent generators commit their CCGT orders to back-to-back contracts, they too wonder: how the dominant generators will behave,

how the pool prices will change, whether transmission charges will be revised to

favour projects in the South and disadvantage those in the North, what new

environmental restraints or taxes will be imposed, and what changes to expect in

generation capacity (including nuclear capacity). Too much capacity in a short time could deter orderly investment at the beginning of the next century. There

is also a concern about the overall risk of poor business performance, hostile take-

overs, and further deregulation of the industry, such as through forced sell-offs.

2.7.7 Political and Regulatory

The power planning life cycle begins from the first stage of feasibility analysis and

submission of proposal. Approvals depend on the site selected, the type of plant

proposed, and other factors which are subject to many uncertainties. The long

planning horizons of the electricity industry, e.g. 30 to 40 years, mean that industry

life cycles are much longer than the length of a government in office. Political

uncertainty relates to the uncertain implications of changes in the government or

policy legislation.

In the UK, UBS Phillips and Drew (1991) along with many other analysts

predicted that the 1992 general election could have a major effect on the industry

and the value of the firms. UBS Phillips and Drew (1990) warned of a political

risk, stemming from the changes that could be made to the industry if political

ideology or sentiment were to change or if new legislation were to be introduced

by either the UK or EC. Paribas (1990) cited some political considerations of a

possible Labour government and how they would affect the status of conservation,

British Coal, nuclear policy, payment for renewables, the status of National Grid,

and the role of the regulator OFFER.

Regulatory uncertainty refers to the legislative changes that can impact at the firm

level. Governments have a wide variety of policy instruments they can use to bring

about change. Munasinghe (1990) lists a few: physical controls, technical

methods, direct investments or investment-inducing policies, education and

promotion, pricing, taxes, subsidies and other financial incentives, reforms in market

organisation, and regulatory framework and institutional restructuring. The future

shape of the regulatory environment (Schroeder et al, 1981) depends on the speed

of approval processes, local versus national balance of regulatory control, and

emphasis on environmental matters.

Some of the regulatory impacts on the actions of the US electric utilities are

discussed in Baughman and Kamat (1980): unanticipated delays in licensing or

construction, uncertainty of business environment due to increasing government

influence in energy markets, and new policy instruments.

The impacts of privatisation are far reaching. Bunn et al (1991) distinguish

between two types of effects: the transfer of ownership from the public to the

private sector and the competitive structure of the market. The former can be

analysed according to the rate of return implications, price implications, capital

structure or debt implications, and corporate tax implications. The latter

(competitive market structure) has been analysed through the power pool

incentives to invest, regulatory measures, uncertainty and risk in the new markets,

and competitive strategies of the new players.

An independent regulator tries to mitigate anti-competitive behaviour or excess

monopolistic returns accruing to any dominant player. However, considerable

uncertainty surrounds what the regulator will do. UBS Phillips and Drew (1992)

analyse the effects of changing the pool rules, the possible referral of the generators

to the Monopolies and Mergers Commission and the valuation of the generating

companies.

2.7.8 Environment

Increasingly, energy and the environment are perceived as directly linked. Four

key areas on which to focus future concerns were recommended at a symposium (Helsinki,

1991) on electricity and the environment: 1) energy and electricity supply and

demand, implications for the global environment; 2) energy sources and

technologies for electricity generation; 3) comparative environmental and health

effects of different energy sources for electricity generation; 4) incorporation of

environmental and health impacts into policy planning and decision making for the

electricity sector. The symposium proposed that the electricity utility companies

take a longer planning perspective than just the 7-10 years for construction, in view

of the time scale of many health and environmental impacts, such as the irreversible

damage to ecosystems and the effects of radiation.

The Intergovernmental Panel on Climate Change (IPCC) has warned the world of global warming. If not retarded or stopped, the greenhouse effect

(Leggett, 1990) will cause a rise in sea levels, higher global temperatures, and

changes in precipitation and seasonal patterns. Although the exact impact and time

frame are not certain, it is known that the largest contribution comes from energy

production and use. In the UK, for instance, the burning of fossil fuels in

electricity production accounts for 34% of carbon dioxide released, 72% of sulphur dioxide, and 28% of nitrogen oxides (Department of Energy, 1992).

Fossil-fuel burning gives off carbon dioxide, the main greenhouse gas. Reduction

will require market incentives or legislative measures, since there are no technical

means to reduce CO2 in fuel combustion other than using fuel with less carbon

content or improving thermal efficiencies of plants. A carbon-energy tax has been

proposed by the European Commission to encourage energy efficiency and fuel

switching. If implemented, this would raise the price of electricity generated by

coal and oil, and to a lesser extent, gas. [The effects of a carbon tax are discussed

in Grubb (1989), Hoeller and Wallin (1991), Cline (1992), and Kaufmann (1991).]

With the exception of CFCs and carbon dioxide, most emissions are difficult to

measure. The projection of future emissions is even more uncertain as atmospheric

concentrations of some gases are more sensitive to emission rates than others due

to the different lifetimes in the atmosphere. The mechanisms and rate of removal

are uncertain. The impact of control measures is uncertain as it depends on time.

Much scientific uncertainty surrounds the impacts and timing of climatic changes.

The irreversibility of these effects implies that legislation should be passed now to

reduce or stop such emissions, which will impact on a generator's future plans.

Recent UK legislation, following EC directives, requires power stations to reduce SO2 emissions to 60% below 1980 levels by the year 2003 and NOx emissions to 30% below 1980 levels by the year 1998. A government White

Paper on the environment has set a target of 1000 MW to be generated from

renewables by the year 2000 and to provide 24% of the UK energy by 2025. This

target adds to the growing list of objectives that planners must consider. The UK
has conditionally agreed to the IPCC target of stabilising CO2 emissions at 1990

levels by 2005. These legislative requirements affect all power producers directly.

To meet the sulphur emission target, utilities use fuels with lower sulphur content

or fit desulphurisation equipment. Desulphurisation and denitrification equipment

is so expensive that it is only cost effective if installed on the newer and larger

plants to allow for economies of scale and longer operating time. Capital cost of

flue gas desulphurisation equipment (FGD) on Europe's largest coal-fired power

station Drax (National Power, 1992) is around £700 million. On a per-kilowatt basis, it is equivalent to £373/kW, a significant proportion of the total capital cost

of the plant.

Environmental externalities, such as those side effects of electricity generation

described above, have not been traditionally included in electricity prices. As

environmental costs may be internalised universally through pollution taxes or

other policy instruments, Ottinger et al (1991) urge electricity producers to

anticipate this in their own self-interest, although the accounting for such externalities is still fairly new, with significant uncertainties to be reviewed for each externality. In support

of this, Markandya (1990) suggests identifying and accounting for the main sources of

electricity (oil, gas, coal, hydropower, nuclear) and their effect on the environment.

In the past, negative environmental aspects of power generation have been

overlooked in times of electricity shortage. However, expectations of over-

capacity in the UK ESI combined with stricter environmental laws impel power

generating companies to re-evaluate the options they have to meet the interests of

reliability, profitability, and the environment.

2.7.9 Public

In some countries, like the US, public opinion has frequently interfered with the

business of power generation itself. Foley and Prepdall (1990) cite public

sensitivity to new industrial development, e.g. site selection, transmission lines,

electromagnetic fields, and public health. People are concerned about health and

safety, aesthetics (as a power station is considered visual pollution to the

countryside), environmental pollution, etc. Berrie and McGlade (1991) discuss

consumer reaction to price and the quality of supply.

Nuclear power is probably the energy source that is most influenced and dictated

by public opinion (Evans and Hope, 1984), but it was not until the 1970s that

opponents of nuclear power began to delay the development of this industry. The

professional response was that nuclear power was cheap and safe and that no other

energy source could meet the increasing demands forecasted for world economic

growth. In spite of this reassurance, nuclear accidents and adverse public reaction

have caused cancellations in construction and curtailed future investment.

British Nuclear Fuels Limited (BNFL) opinion polls regularly show that two out of

every three people believe the risks associated with nuclear power outweigh the

benefits. Public opinion is powerful enough to put a ten-year moratorium on

further nuclear construction in Switzerland and a phasing-out of nuclear power in

Sweden. A recent survey (Nuclear Forum, 1992) shows that the more people

know about radiation the more likely they are to be in favour of nuclear power.

2.8 Conclusions

This chapter has listed and explained the areas of uncertainty in electricity generation that are important in capacity planning. These uncertainties have also been

viewed from a modelling perspective, i.e. types of uncertainty. Emerging from this

discussion are the difficulties of capacity planning in the presence of these uncertainties. These uncertainties and complexities are summarised in table 2.6.

Table 2.6 Model Requirements for Capacity Planning

Areas of Uncertainty:
- plant economics: capital and running costs
- fuel: price, supply
- demand: shape, growth
- technology: performance, side effects, competitors
- financing requirements: financing mechanisms, interest rates, revenue requirements
- market: volatilities of the pool, competition
- political/regulatory: changing legislation, approval and licensing, timing and impact of new policy instruments
- environment: scientific uncertainty in energy-environment interface, internalising externalities through new requirements
- public: opposition as cause for delay, image of firm

Types of Uncertainty:
- internal, external (controllable, uncontrollable)
- operational, strategic
- short-term, long-term
- quantifiable, unquantifiable (measurable, intangibles)
- risk, non-risk uncertainty
- stochastic variability, fuzziness
- statistical, interpretational, decisional, external
- static, quasi-static, dynamic, unspecified
- scientific, forecasting, time lag

Complexity, Completeness:
- technical characteristics of plants
- levels of detail
- dependence of factors
- business risk
- strategic focus
- types of decisions: technology choice, capacity size, timing
- multiple perspectives: regulator, private sector firm, consumer, shareholder
- multiple criteria: reliability, plant availability, efficiency, minimum cost, highest profitability, public acceptance, environment
- data availability, accuracy, detail

The uncertainties discussed in this chapter do not exist in isolation. The public's

concern for the environment has often led to legislative action. New requirements

translate into new technologies, and these in turn fuel the competition.

Competitive forces spark off further uncertainties in the market. The enumeration

and discussion of uncertainties in power generation pose a big question: how do

we deal with these uncertainties in electricity planning? We propose two

approaches to answer this question.

The first is the obvious and classic approach of modelling, ingrained in the

engineering culture of electricity industries. In Part One of this thesis, we review

modelling approaches to capacity planning as defined by operational research

techniques and their application. We criticise their ability to capture the different

areas of uncertainties and their treatment of types of uncertainty on the basis of

completeness and adequacy, respectively. Completeness refers to the

comprehensive coverage of areas of uncertainties, while adequacy refers to a

sufficient or fitting treatment of different types of uncertainties. The enumeration

of different areas of uncertainty is aimed at ensuring completeness, i.e. not

overlooking any factor. Hence, implicit in the traditional modelling approach to

uncertainty is the goal of completeness. We propose that model synthesis is a

means to completeness but need to establish its feasibility and practicality, hence

the title "Model Synthesis for Completeness".

The second is a non-traditional approach. Part Two of this thesis addresses the

usefulness of flexibility to uncertainties in electricity planning, hence the title

"Flexibility for Uncertainty". The literature (Chapter 5) mentions its usefulness as

a response to uncertainty, as a practical means of coping with uncertainty, and as a

desirable property of a system. However, it is not clear how such a vague and

qualitative concept could be useful to such a precise and quantitative tradition

of electricity capacity planning.

Part One

Model Synthesis for Completeness


CHAPTER 3

Approaches to Capacity Planning

3.1 Introduction

Having identified the scope of uncertainties in the previous chapter, we now

consider how to model it. The modelling approach appeals favourably to an

engineering-dominated, process-oriented industry. The operational research (OR)

techniques that have been used to address the problems in capacity planning are

known for their applications to scheduling, resource allocation, routing, queuing,

inventory control, and replacement problems commonly found in other capital-

intensive, infrastructure-based industries such as telecommunications, military, and

transportation. We review these applications according to the extent to which they

capture the areas of uncertainties with respect to completeness and the manner in

which they treat the types of uncertainties with respect to adequacy.

The next three sections present the most commonly used techniques and their

capacity planning applications. These techniques are grouped according to their

primary functionality of optimisation, simulation, and decision analysis, delineating

the fundamental differences in modelling approaches. Optimisation in section 3.2

refers to linear programming, decomposition methods, dynamic programming, and

stochastic programming. Simulation in section 3.3 refers to system dynamics,

scenario analysis, sensitivity analysis, and probabilistic risk analysis. Decision

analysis in section 3.4 refers to the use of decision trees and other decision-

theoretic techniques. In each section, the techniques and respective applications

are briefly introduced, with emphasis on their particular treatment of uncertainty.

Following individual applications of techniques, we review applications based on

two or more techniques, which we call model synthesis. Here in section 3.5, the

same critique of areas and types of uncertainties is given. Modelling requirements,

relating to technique functionality and limitations, are developed in the concluding

section (3.6). Several proposals are made on the basis of this review.

3.2 Optimisation

Optimisation refers to the objective of minimising or maximising a function subject

to given constraints. Optimisation is usually understood in the deterministic

sense, whence linear programming, decomposition methods, and dynamic

programming readily apply. The next sub-sections present the optimisation

techniques in this order, with a final note on stochastic programming, which is a

form of non-deterministic optimisation.

As early as the 1950s, capacity planning was perceived as a problem of meeting least-cost objectives within a highly constrained operational environment, and was first modelled with linear programming (Massé and Gibrat, 1957). If the problem

becomes too large to handle, decomposition methods (Vlahos, 1990) could be used

to break down the complexity and to speed up the computation. If viewed as a

multi-staged problem in which decisions follow in sequence, dynamic programming

(Borison, 1982) can be readily used as an alternative optimisation algorithm. More

recently, stochastic programming has been used to model the deterministic and

non-deterministic effects.

3.2.1 Linear Programming

Traditionally, capacity expansion has been formulated as a least cost investment

problem, utilising the algorithmic strengths of linear programming to minimise total

cost subject to fuel availability, demand, capacity, and other constraints.

The objective cost function typically includes capital, fuel, and generation costs,

over the entire planning period. The constraints typically include forecast demand,

plant availability, and other technical performance parameters. The planning period

is usually split into sub-periods for modelling detail variations. The result is an

investment schedule of different types of plants with varying sizes to come on

stream and retire at different dates.

A demand constraint is usually represented by a linear approximation to the load

duration curve (LDC). However, use of the LDC implies that costs and availability

of supply depend only upon the magnitude of the load and not on the time at which

the load occurs. This assumption is approximate for hydro systems and accurate

for thermal systems only if seasonal availabilities are taken into account. Because

power demand is highly variable throughout the day and year, the resulting LDC

used in the constraint is an average approximation.
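A minimal sketch of such a least-cost formulation, with invented costs and a two-segment approximation of the load duration curve (and scipy's `linprog` standing in for the purpose-built solvers of the period), might look like:

```python
# A deliberately tiny least-cost capacity expansion LP.  Costs, demands,
# and the two-segment load duration curve are invented for illustration.
from scipy.optimize import linprog

# Two candidate plants: baseload (high capital, low running cost) and
# peaker (low capital, high running cost).
CAP_COST = [150_000, 40_000]   # capital cost, per MW of capacity per year
RUN_COST = [15, 45]            # running cost, per MWh generated

# Load duration curve split into an off-peak segment (6000 MW for 7760 h)
# and a peak segment (10000 MW for 1000 h).
BLOCKS = [(6000, 7760), (10000, 1000)]   # (demand MW, duration h)

# Variables: x = [Kb, Kp, Gb1, Gp1, Gb2, Gp2]
#   K* = installed capacity (MW); G*j = output (MW) of each plant in segment j.
c = CAP_COST + [RUN_COST[i] * hours for _, hours in BLOCKS for i in (0, 1)]

# Demand must be met in each segment.
A_eq = [[0, 0, 1, 1, 0, 0],
        [0, 0, 0, 0, 1, 1]]
b_eq = [BLOCKS[0][0], BLOCKS[1][0]]

# Output cannot exceed installed capacity (G - K <= 0) in any segment.
A_ub = [[-1, 0, 1, 0, 0, 0],
        [-1, 0, 0, 0, 1, 0],
        [0, -1, 0, 1, 0, 0],
        [0, -1, 0, 0, 0, 1]]
b_ub = [0, 0, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x[:2], res.fun)   # optimal capacities (MW) and total annual cost
```

With these figures the solver builds 6000 MW of baseload to cover the off-peak segment and 4000 MW of peaking plant for the peak, illustrating how relative capital and running costs determine the plant mix.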

In their first trial in 1954, Électricité de France (EdF) developed a schematic linear programming model with only 4 constraints and 5 variables (Levy, 1989). Described in a classic paper by Massé and Gibrat (1957), this is the earliest

application of LP to electricity planning. Dantzig (1955) modelled the same

problem with 70 constraints and 90 variables.

Some early LP models are discussed and classified in Anderson (1972) into so-

called marginal analysis, simulation, and global models. In all cases, cost

minimisation is the main objective. The quantities demanded are assumed to be

exogenous, and all formulations are deterministic. The results must also satisfy

engineering criteria, such as system stability, short-circuit performance, and

reliability of supply. Allowances for uncertainty are given in the form of simple

spare capacity, which has to be expressed as a mean expected quantity because of

the deterministic formulation.

A standard linear programming formulation in McNamara (1976) minimises the present value of a future stream of operating and investment costs, subject to meeting projected future demand. Projections of future

revenues, costs, and capital requirements are made using a corporate financial

planning model. The LP model quickly determines the logical implications of

known information and the impacts of variations in assumed trends. The results

are tested in the sensitivity analysis that typically follows an optimisation run. This

post-optimal sensitivity analysis is not a substitute for uncertainty analysis as it

does not sufficiently address the uncertainties of the problem.

Greater accuracy and detail, such as capturing the non-linear effects of economies of scale and merit ordering (load duration curves), impede the computational speed of linear programming. Nowadays, formulations with tens of thousands of constraints and variables are common, e.g. Vlahos (1990), but they require other

methods of improving the speed. Non-linear programming is one way to overcome

this. However, when uncertainty is incorporated in these quadratic functions, the

model becomes computationally intractable (Maddala, 1980). The complex, non-

separable, but convex objective function of a non-linear programme cannot take

advantage of what standard LP computer software can offer, namely,

computational speed, standardisation of problem formulation, input and output

processors, and integer programming facilities. Furthermore, non-linear

programming cannot be readily rewritten to address other issues, in the way that

LP can cope with allocation, scheduling, etc. In other words, a non-linear

programming formulation is very problem-specific. One way to fit the result to a

non-linear framework with uncertain parameters is through sensitivity analysis of

the dual and slack variables. But this only gauges the sensitivity of the output

result to the input variables. Model validation is not the same as uncertainty

analysis.

Schaeffer and Cherene's (1989) mixed-integer linear programme follows the pioneering work of Massé and Gibrat. The first part addresses the short-term

dispatching problem, which seeks the optimal utilisation of an existing array of

generation plants to meet short-term demand at minimal cost. The second part

addresses the long-term objective of optimising the expansion of electricity

generation capacity over a planning horizon of several years. As the short run

operating cost of the dispatching problem depends on the investment decision of

capacity expansion, it is necessary to evaluate both problems. They make use of

cyclical programming to minimise the cost per cycle over an infinite number of

identical cycles of demand constraints, thus allowing the dynamics of demand to be

modelled while keeping the number of variables to a minimum. Through this

method, uncertainties are incorporated to address the following: level and

distribution of future demand, equipment reliability, length of lead times for new

equipment, and possible changes in regulation which may affect plant efficiency.

To keep the model computationally tractable, reserve margins are used to deal with

uncertainty. By using a different technique for short-term details, the model

overcomes the need to use a sophisticated probabilistic technique which may

increase the complexity of the model.

The advantages of linear programming are nevertheless abundant. Great

computational power is built on its mathematical foundations. Fundamental

technical and cost relationships can be approximated accurately by linear or

piecewise linear functions. Dual variables are useful in post-LP analysis, though not

adequate for the analysis of uncertainty.

As the earliest and most popular of all optimisation techniques, linear programming

has nevertheless been superseded by other optimisation methods that overcome

some of the following problems when applied to power plant scheduling.

1) Incorporating the probabilistic nature of forced outage into LP is difficult because


all dependent variables have to be expressed or approximated by linear functions.

2) The optimal capacity size of a generating unit determined by LP is a continuous
variable, and therefore must be rounded to the nearest multiple of a candidate unit.
This rounding may result in a sub-optimal solution.

3) The discrete nature of generating units can be treated by mixed-integer linear


programming, but as the number of integer variables increases, there is a severe
penalty on computational cost. These linear approximations make the outcome less
accurate and less optimal.

4) None of the examples above have dealt with objectives other than cost
minimisation.

5) Linear programming is incapable of addressing business risk in an acceptable


fashion.

6) Neither can it handle the multi-staged nature of capacity planning as there is no


resolution of uncertainty.

7) The strict linearity conditions imply that non-linear effects such as economies of
scale and reliability cannot be modelled accurately.

8) Linear programming requires considerable computer resources to satisfy the large


number of constraints in capacity planning. The complexity and size of formulation
quickly reach the limit in efficiency.

9) It is a technique suited to the structure of capacity planning but not to uncertainty: it is unable to deal with uncertainty without relying on numerous assumptions, approximations, and post-LP analysis.

3.2.2 Decomposition Methods

Decomposition refers to the breaking down of a large complicated problem into

many smaller solvable ones, thereby reducing computer processing time. For

example, the linear approximation of the load duration curve can be divided into

many periods, each to be solved by bounded variable LP or decomposition

methods. The iterative nature of decomposition reaches the optimum by

convergence in contrast to the simplex method of mathematical programming.

In the context of linear programming, two types of decomposition are available,

namely the resource directive and the price directive. They differ in the way the

information is passed, as depicted in figure 3.1.

Figure 3.1 Decomposition Methods

[Diagram: under price directive decomposition, the central unit passes prices of resources to the peripheral units, which return resource utilisation levels; under resource directive decomposition, the central unit passes resource availability levels, and the peripheral units return marginal prices of resources.]

Source: Vlahos & Bunn (May 1988)

A well known representative of the resource directive decomposition is Benders

decomposition, which is used to formulate the capacity expansion model in Vlahos

(1990) and Vlahos and Bunn (1988a). Different expansion plans are examined

iteratively, and the operating cost is calculated with marginal savings from extra

capacity. This information is fed back to the master programme at each stage, and

a new plan is proposed. Lower and upper bounds to the total cost are set at each

iteration, to accelerate convergence.
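The bound-tightening iteration described above can be sketched on a stylised two-stage problem (all numbers are invented, and the master problem is solved here by simple enumeration over a grid rather than by a proper LP solver):

```python
# A stylised Benders-type iteration for a two-stage capacity problem.
# All numbers are invented for illustration.

CAPITAL = 3.0                          # capital cost per unit of capacity
SCENARIOS = [(0.5, 8.0), (0.5, 5.0)]   # (probability, demand) scenarios
PENALTY = 10.0                         # cost per unit of unserved demand

def operating_cost(x):
    """Expected second-stage (operating) cost for capacity x."""
    return sum(p * PENALTY * max(d - x, 0.0) for p, d in SCENARIOS)

def subgradient(x):
    """A subgradient of the expected operating cost at x."""
    return sum(-p * PENALTY for p, d in SCENARIOS if d > x)

grid = [i * 0.5 for i in range(21)]    # candidate capacities 0..10
cuts = []                              # cuts of the form theta >= a + b*x
x, upper = 0.0, float("inf")

for _ in range(20):
    # Subproblem: evaluate operating cost at the proposed plan, add a cut.
    q, g = operating_cost(x), subgradient(x)
    upper = min(upper, CAPITAL * x + q)         # best complete plan so far
    cuts.append((q - g * x, g))
    # Master: propose a new plan minimising capital cost + cut approximation.
    def master_obj(y):
        theta = max([0.0] + [a + b * y for a, b in cuts])
        return CAPITAL * y + theta
    x = min(grid, key=master_obj)
    lower = master_obj(x)                       # lower bound on total cost
    if upper - lower < 1e-6:                    # bounds have converged
        break

print(x, upper)   # optimal capacity and total expected cost
```

Each pass tightens the lower bound (from the master) towards the upper bound (the best plan evaluated so far), mirroring the convergence-accelerating bounds described above.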

The state-of-the-world decomposition of Borison et al (1984) uses the price

directive decomposition technique, such that prices or Lagrange multipliers

control the flow of the algorithm. Each state-of-the-world consists of a unique

scenario in terms of the information used to characterise the generation

technologies but at a fixed point in time with fixed resolution of specified

uncertainties. A primal dual method solves the probabilistic problem using simple

static deterministic solution techniques. The main problem is then decomposed

into a set of linked static deterministic problems where linkages are enforced

through Lagrange multipliers. These problems are solved separately in a primal

iteration while the multipliers are updated in a dual iteration.

Decomposition has several advantages over linear programming (Vlahos and Bunn 1988ab, Côté and Laughton 1979), as follows.

1) Non-linearities can be handled in decomposition but not in the LP formulation.

2) Integration of different levels of planning is possible.

3) Its iterative structure is computationally more efficient for large problems.

As a deterministic method, however, it is unable to handle uncertainties adequately.

3.2.3 Dynamic Programming

Anderson (1972) notes that the multi-staged, sequential decision-making nature of

capacity planning is analogous to the standard deterministic inventory problem

which is often solved by the recursive methods of dynamic programming. Dynamic

programming is a computational method which uses a recursive relation to solve

the optimisation in stages. A complex problem is decomposed into a sequence of

nested sub-problems, and the solution of one sub-problem is derived from the

solution of the preceding sub-problem. In this sequentially dependent framework,

each stage involves an optimisation over one variable only, and the remaining decisions are independent of how the current state was reached. The problem must be reformulated to uphold Bellman's Principle of Optimality (Bellman, 1957):

An optimal sequence of decisions has the property that whatever the initial
circumstances and whatever the initial decision, the remaining decisions must be
optimal with respect to the circumstances that arise as a result of the initial
decision.
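Bellman's recursion for a staged capacity problem can be sketched as follows. The demands and costs below are invented for illustration, and the state is simply the number of installed units:

```python
# Backward dynamic programme for staged capacity expansion (invented
# figures). State: units installed at the start of a stage; decision:
# units to build; cost: building plus a penalty on unmet demand.

STAGES = 3
DEMAND = [1, 2, 3]        # demand (in units) at each stage
BUILD_COST = 5.0          # cost per unit built
SHORTAGE_COST = 12.0      # penalty per unit of unmet demand
MAX_UNITS = 4

value = [0.0] * (MAX_UNITS + 1)   # cost-to-go beyond the final stage
policy = []                       # best build decision per (stage, state)

for t in reversed(range(STAGES)):
    new_value, decisions = [], []
    for s in range(MAX_UNITS + 1):
        best_cost, best_build = None, 0
        for build in range(MAX_UNITS - s + 1):
            cap = s + build
            stage_cost = (BUILD_COST * build
                          + SHORTAGE_COST * max(DEMAND[t] - cap, 0))
            total = stage_cost + value[cap]   # Bellman recursion
            if best_cost is None or total < best_cost:
                best_cost, best_build = total, build
        new_value.append(best_cost)
        decisions.append(best_build)
    value, policy = new_value, [decisions] + policy

first_build = policy[0][0]   # optimal first decision from zero units
total_cost = value[0]        # minimal total cost of the whole plan
```

The recursion works backwards from the final stage, so each stage's optimisation needs only the cost-to-go table of the stage after it; the policy tables are exactly the "lists of tables" that Boyd and Thompson's users must read off.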

The least cost investment decision in Boyd and Thompson (1980) uses a

conventional backward-timing dynamic programming algorithm to determine the

effect of demand uncertainty on the relative economics of electrical generation

technologies with varying lead times. The algorithm evaluates short versus long

lead time generation technologies, under different cases. Many assumptions were

made in the demand and supply models to reduce the stages, such as the

assumption of a linear cost model, e.g. a lumped sum investment due when plant

comes on line, rather than the usual progressive payments during the construction

period. Several complications were found in practice. The optimal investment

plan consists of lists of tables. To get the decision at any stage, one must read off

the tables corresponding to the right level of demand that might occur in that

period. It is an unrealistic model as tremendous simplifications were necessary to

meet the computational requirements.

In a similar manner, Borison's (1982) probabilistic model of the problem focuses

on strategic uncertainties of demand, technological, regulatory, and long-term

generation expansion dynamics. The four part algorithm consists of a dynamic

programming procedure, a primal dual procedure, a recursive technique, and a

decomposition technique. While it sounds comprehensive on paper, many

aspects of the implementation have not been discussed. For example, it is not clear

whether this method has been completely computer-coded and tested against other

models. Extensions to the model are required to increase the level of detail in

financial, operating, and reliability aspects. Flexible decision making in response to

the resolution of uncertainty was proposed, but flexibility was not defined or

demonstrated.

A theoretical and idealistic approach is described in Richter's (1990) dynamic

energy production model. This formulation is two stage with recourse, but

contains too many assumptions to be useful for electricity generation. For one

thing, the power station must be able to store the energy produced!

The main modules of the commercially available software packages EGEAS, WASP, and CERES (Capacity Expansion and Reliability Evaluation System of Ohio State University), all documented in IAEA (1984), are based on dynamic
programming. An iterative approach is used to find the unconstrained optimum

solution. The number of possible capacity mixes increases rapidly as the planning

period is lengthened, implying greater computational requirements. WASP (Wien

Automatic System Planning) makes further use of constraints to limit the number

of expansion alternatives.

Dynamic programming overcomes many of the restrictions in linear programming,

as follows (Borison et al, 1981).

1) Capacity constraints on individual technologies actually make the computation easier, in contrast to most optimisation procedures, where constraints increase computation times.

2) Load duration curves can be of any form, especially without the requirement of
linear approximation.

3) There are fewer mathematical restrictions in dynamic programming, unlike the


linearity and convexity requirements of linear programming.

4) This approach can also incorporate uncertainties in demand and in fixed and
variable costs.

On the negative side, the curse of dimensionality predominates.

1) To limit computation time, the number of resolved uncertainties must be kept to a minimum. This technique is therefore not practical for most general closed-loop decision-making problems, i.e. those in which decisions are adjusted once the outcomes of uncertain events become known.

2) Furthermore, dynamic programming is unable to treat uncertain capacity


constraints, equivalent availability, and capacity factor, as the relationships between
installed and operable capacity must be fixed.

3) The sequential manner in which the recursive algorithm searches for the optimum
is not conducive to handling decisions in the multi-staged sense, that is, with the
stepped resolution of uncertainty.

4) Dynamic programming is not only limited in addressing uncertainties but also


burdened by the curse of dimensionality, which is due to the branching and
recursion effects.

5) In spite of the range of solutions produced, the output is not helpful in assessing which decisions to take under different conditions over time.

3.2.4 Stochastic Programming

In a typical optimisation formulation:

Minimise c'x subject to Ax ≥ b and x ≥ 0,

uncertainty about the demand b, uncertainty about the input prices c, and

uncertainty about the technical coefficients matrix A can be treated in stochastic

programming. These three types of uncertainties are related to parameters in linear

programming (Maddala, 1980), as follows. An ordinary LP problem becomes

stochastic when the set ω = (A, b, c) of parameters depends on random states of nature, i.e. ω = ω(s), s ∈ W, where W is the index set, i.e. the set of all possible states, in the following formulation:

Maximise or minimise Z = c'x,

where x ∈ R = {x | Ax ≥ b, x ≥ 0}

Four different approaches to stochastic programming are considered in Soyster (1980) and Modiano (1987): two-stage programming with recourse, chance-constrained programming, stochastic programming via distributional analysis, and the expected value / variance criterion in quadratic programming. The first two are described here.

Dantzig (1955) was probably the first to demonstrate a way of incorporating

uncertainty into linear programming. The two-stage nature of such problems

became known as stochastic programming with recourse. In the first stage, a

decision vector x is chosen. A random event (or a random vector of events) occurs

between the first and second periods. The algorithm solves the decision vector y in

the second stage after taking into account the values for x and the random events.

The decision vector x represents the investment decision which must be made well

before most uncertainties are resolved. Such investment decisions are classified as

here-and-now, i.e. taken before uncertainties are resolved. The second decision

vector typically describes the operating decisions which are taken after the random

events have occurred, hence the wait-and-see approach.

These two main approaches are also known as the passive and the active

(Sengupta, 1991). The passive approach requires the decision maker to wait for a

sufficient number of sample observations, hence, wait and see. The active

approach develops an adaptive control procedure, following a cautious policy of updating as more information becomes available. Thus it converts the original

stochastic problem into a two-stage decision process.
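A minimal sketch of the here-and-now / wait-and-see split follows, with invented figures: the first-stage capacity x is fixed before demand is known, while the second-stage (recourse) decision serves whatever demand is then realised:

```python
# Two-stage stochastic programme with recourse (invented figures).
# Stage 1 ("here and now"): choose capacity x before demand is known.
# Stage 2 ("wait and see"): after demand d is revealed, generate up to x
# and pay a penalty on any unserved demand.

INVEST = 3.0      # cost per MW of capacity
RUN = 1.0         # running cost per MW generated
PENALTY = 20.0    # cost per MW of unserved demand
SCENARIOS = [(0.3, 4.0), (0.5, 6.0), (0.2, 9.0)]   # (probability, demand)

def recourse_cost(x, demand):
    """Optimal second-stage cost once demand is known."""
    served = min(x, demand)          # generate up to capacity
    return RUN * served + PENALTY * (demand - served)

def expected_total(x):
    return INVEST * x + sum(p * recourse_cost(x, d) for p, d in SCENARIOS)

candidates = [i * 0.5 for i in range(21)]          # 0 to 10 MW
best_x = min(candidates, key=expected_total)
best_cost = expected_total(best_x)
```

With these figures the high shortage penalty makes it worth covering even the 9 MW scenario, so the here-and-now choice is 9 MW; with a lower penalty the optimum would deliberately leave the extreme scenario partly unserved.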

Another variation of linear programming in the class of stochastic programming is

chance-constrained programming, which has the same LP objective function but

subject to constraints that are satisfied in a probabilistic manner. Instead of

satisfying the deterministic constraint

Σj aij xj ≤ bi for all i,

the objective function is subject to the following variation:

probability ( Σj aij xj ≤ bi ) ≥ αi, where 0 ≤ αi ≤ 1, for all i.

While a formulation containing random variables (two stage with recourse) can

always be solved by replacing the random variables by their expected values, data

specification for the chance-constrained model is far more complex as it requires

knowledge of joint probability distributions. Consequently very sophisticated

constraints can only be obtained at the expense of highly delicate and

comprehensive probability distributions of model parameters. These multivariate

probability distributions and non-linear feasible regions present difficulties in

reaching the optimum.
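Even without solving the full chance-constrained programme, a constraint of this kind can be checked empirically for a candidate decision by sampling the random right-hand side; the uniform distribution below is assumed purely for illustration:

```python
# Empirical check of a chance constraint prob(a*x <= b) >= alpha for a
# candidate decision x, when the right-hand side b (e.g. an available
# resource level) is random. Distribution and figures are assumed.
import random

random.seed(1)

ALPHA = 0.95          # required reliability level
a, x = 2.0, 3.0       # technical coefficient and candidate decision
N = 100_000           # Monte Carlo sample size

hits = sum(1 for _ in range(N) if a * x <= random.uniform(5.0, 9.0))
prob = hits / N                 # estimated probability the constraint holds
satisfied = prob >= ALPHA       # here about 0.75 < 0.95: the plan fails
```

This sidesteps the joint-distribution specification problem noted above only in one direction: it can reject a candidate plan cheaply, but searching for a feasible optimum still requires the full probabilistic machinery.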

Capacity expansion can also be treated as a capital cost allocation optimisation

under uncertainty. Sherali et al (1984) use a two-stage LP with recourse in

response to a finite, discretely distributed stochastic demand forecast, to determine

the marginal cost-pricing strategy for sharing capital costs given an optimal

capacity plan. In many ways, the treatment of capacity constraints and demand

restrictions is analogous to a transportation problem. This is essentially two stage

stochastic programming with recourse under finite discrete probability distributions

for the random load event. The main drawback of this technique lies in the large

number of constraints required, thus increasing the time to search for the feasibility

region.

Maddala (1980) analyses different formulations and solution methods of

mathematical programming that take risk into account to see what elements of

various methods can be incorporated into programming models of electric supply.

His study includes five stochastic programming approaches, as follows:

1) the E-model which minimises expected costs by substituting mean values for
random variables;

2) the V-model which minimises the mean-squared error measured from a certain
level;

3) the P-model which minimises the probability that costs exceed a certain level;

4) the K-model which finds the minimum level of costs for which there is 100%
probability that costs do not exceed this level; and

5) the F-model which minimises expected dis-utility for a given risk aversion
coefficient.

Maddala concludes that stochastic programming formulations in the literature are

not suitable for incorporating cost uncertainty. The most practical approach is to

consider a probabilistic range of alternatives and solve the deterministic cost

minimisation for each alternative scenario. The class of two-stage with recourse is

well-behaved as it satisfies the requirements of convexity, continuity, and

differentiability, but these do not hold for chance-constrained programming.

Stochastic programming cannot go very far in parameterising uncertainty because

the overwhelming number of constraints slows down the computation immensely.

In other words, dimensionality gets out of control. Non-linear feasible regions and

multivariate probability distributions also cause problems in specification.

3.3 Simulation

Optimisation by its goal seeking algorithm is prescriptive in nature. A more

descriptive and exploratory approach is achieved by simulation. The tools of

simulation are driven by a different set of objectives: not to find the optimal

solution but to experiment with different values. To simulate means to mimic.

Simulation can be accomplished manually or automated by a computer. Scenario

analysis and sensitivity analysis are included here, as the manual counterparts of the

computer simulation techniques of system dynamics and risk analysis. Monte

Carlo simulation and other sampling techniques also fall into this category.

3.3.1 System Dynamics

System dynamics is a method of analysing problems in which time is an important

factor. A key feature is the effect of feedbacks, of which there are two: a re-

enforcing loop and a balancing loop. An example of the re-enforcing loop is the

spiral of impossibility described in the previous chapter. Causal loop or stock

and flow diagrams can be used to structure the problem. Many off-the-shelf software packages, such as DYNAMO (Richardson and Pugh, 1981) and ITHINK (High Performance Systems, 1990), are available to translate these diagrams into complicated differential equations, which describe the dynamic interactions. Such user-friendly packages allow quick what-if analyses of future scenarios. Powerful

graphic capabilities portray the complex interactions during the simulation runs and

provide statistical results afterwards.
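The stock-and-flow mechanics behind such models reduce to numerical integration of coupled rate equations. A minimal sketch with one stock and one balancing loop (all parameters invented) is:

```python
# Minimal stock-and-flow simulation in the system dynamics style (all
# parameters invented). One stock, installed capacity, is governed by a
# balancing loop: a low reserve margin accelerates investment, which
# restores the margin. Demand grows exogenously.

DT = 0.25                      # integration step, years
capacity, demand = 100.0, 95.0
history = []

for _ in range(int(40 / DT)):  # simulate 40 years
    margin = (capacity - demand) / demand
    # Balancing loop: invest in proportion to the gap below a 20% target.
    investment = max(0.0, 0.20 - margin) * demand * 0.5
    retirement = 0.02 * capacity          # 2% of the stock retires yearly
    capacity += (investment - retirement) * DT
    demand += 0.02 * demand * DT          # 2% exogenous demand growth
    history.append(margin)

final_margin = history[-1]
```

With these parameters the margin settles near 11%, below the 20% target, because investment must also offset retirements and demand growth; a re-enforcing loop would instead be coded as a rate proportional to the stock itself.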

The system dynamics model of Ford and Bull (1989) represents eight categories of

investment by continuous approximations to the discrete number of generating

units in the system. A spreadsheet pre-processor translates costs and savings into

cost efficiency curves which are then used to generate electric loads. The more

common iterative approach used by utilities links a series of departmental models

together but is time-consuming in the preparation and completion of a set of

internally consistent projections. In comparison, a system dynamics model offers

greater flexibility in modelling policy options as it allows the consideration of many

alternatives and what if scenarios.

Two strategies for capacity planning are tested in Ford and Yabroff (1980) by

simulating different types of investments with respect to price, consumer reaction,

recession, demand, capacity, and other parameters. Trade-offs between short lead

time and long lead time technologies are also examined in the system dynamic

models of Boyd and Thompson (1980).

The industry simulation model of investment behaviour of Bunn and Larsen (1992)

considers the interplay of demand, investment, and capacity and their effects on the

determinants of price, i.e. LOLP and VOLL (explained earlier in Chapter 2). This

model has also been used to determine the optimal capacity level of each of the

players given the interactions of market forces as well as to hypothesize the effects

of changing different variables under different scenarios, effectively a sensitivity

analysis within a scenario analysis. This type of causal modelling is suitable for the

analysis of market uncertainties.

The system dynamics approach has been greeted with mixed feelings by the

electricity supply industry. Because traditional models have been data-intensive

and very detailed, planners are suspicious of models which do not require that level

of detail in the input specifications. Yet at the same time, system dynamic models

usually produce voluminous output which requires careful validation.

3.3.2 Scenario Analysis

System dynamics is often used as a sophisticated but time-consuming means of

generating scenarios of the future. A less formal and less structured method of

scenario generation and analysis, called simply scenario analysis, makes use of

judgement and discussion. Kahn and Wiener (1967), where scenario analysis probably first originated, define a scenario as a hypothetical sequence of events constructed to focus on causal processes and decision points.

Scenarios can be of several types, the most common being favourability to

sponsor, e.g. optimistic or pessimistic. Probability of occurrence, e.g. most

likely or least likely, is also very popular although very subjective. Others include

single, dominant issue, e.g. the economy, status quo or business as usual, and

theme-driven, e.g. economic expansion, environmental concern, or technological

domination.

Five scenarios of the future were constructed in CEGB's assessment of Sizewell B (Greenhalgh 1985, Hankinson 1986). Firstly, the consumer sector was divided into

three main categories: domestic, industrial, and commercial. Within each

category, causal links were developed for a highly disaggregated matrix framework

which allowed for changes in activity level and energy efficiency. Against

assumptions of world economy, world energy prices, and the UK economy, a

forecast of future energy requirements was converted into each scenario and

translated into future electricity demand. The five scenarios were credible pictures

of the future: 1) a high growth and high services scenario, 2) high growth and high

industrial production, 3) a middle of the road, 4) stable but low growth, and 5)

unstable low growth scenario. CEGB's scenarios were based on forecasts, unlike Shell's approach (Beck, 1982) of self-consistent scenarios for contingency

planning.

Following Shell's approach but in a more structured manner, Southern California

Edison (SCE) found that scenario development by a multi-disciplinary team was

more suitable than traditional forecasting methods for planning purposes.

Documented in SCE (1992) and Mobasheri et al (1989), their scenario planning

process (in figure 3.2) is a technique for analysing alternative futures and developing

business strategies. It does not produce better forecasts, only a better

understanding of the forces that can change the future.

Figure 3.2 Scenario Planning Process

[Flow chart, summarised:
Step 1, Scenario Development: identify world scenarios; assess environmental constraints, electric loads, fuel prices, and resource options; develop resource plans.
Step 2, Implications: production cost evaluation, financial evaluation, and environmental assessment; generate indicators.
Step 3, Strategies: evaluate indicators; develop strategies.]

Source: Mobasheri et al (1989)

Other ways to conduct scenario analysis are discussed in Huss and Honton (1987),

O'Brien et al (1992), and Bunn and Salo (1993). They make use of intuitive logics,

trend-impact analysis, and cross-impact analysis. The last approach is

accomplished through surveys, interviews, Delphi techniques, and morphological

analysis.

In a survey of American utilities, Hirst and Schweitzer (1990) found scenario

analysis to be one of the four most popular methods to assess uncertainty, the

others being sensitivity analysis, portfolio analysis, and probabilistic analysis. It

is popular for the following reasons.

1) Scenario analysis prepares the firm to rapidly adjust to changing conditions.

2) It relies less on computing power and more on brainstorming and discussions than
other types of analysis. Scenario analysis encourages people to think more
creatively and broadly about the future, unlike traditional extrapolation techniques.

3) The use of scenarios is particularly helpful in the projection of long range, complex
and highly uncertain business environments.

4) Scenario analysis encourages the examination of underlying assumptions by


focusing on consequences of events.

5) It is most suitable for situations where few crucial factors can be identified but not
easily predicted, where uncertainty is high, where historical relationship is shaky,
and where the future is likely to be affected by events with no historical precedence.

6) It is based on the underlying assumption that if we are prepared to cope with a small
number of scenarios by testing plans against scenarios then we are prepared for
almost any outcome.

Although scenario analysis has become more popular as a corporate planning tool,

it is not free from criticism.

1) Unless the chance of occurrence of each scenario is explicitly analysed and


specified, planning to different scenarios could be quite costly.

2) These scenarios may never occur, so we may be preparing needlessly. Instead, a combination or some in-between scenario may surface, possibly at different points in time.

3) It is hard to predict the interacting events in the future. Furthermore, human


judgement is prone to biases.

4) Care must be taken that there are not too many factors involved, else it would be
mere speculation. There is a trade-off between the feasibility of considering a large
number of factors and the validity of considering a few.

3.3.3 Sensitivity Analysis

Whereas scenario analysis is achieved by hypothesizing discrete, countable futures

each with a different set of assumptions, sensitivity analysis is concerned with

identifying and screening the most important variables. Hirst and Schweitzer

(1990) describe this as a way to see which factors trigger the largest changes in

performance and which options are most sensitive to the change. If a change in an

estimate has very little effect on the output, then the result is not likely to depend

to any great extent on the accuracy of that input variable.

Defining sensitivity analysis as the examination of impacts of reasonable changes

in base case assumptions, Eschenbach (1992) advocates the consideration of

reasonable limits of change for each independent variable, the unit impact of these

changes on the present worth or other measure of quality, the maximum impact of

each independent variable on the outcome, and the amount of change required for

each independent variable whose curve crosses over a break-even line.

Sensitivity analysis is very rarely used as a stand-alone technique. Its main purpose

is to challenge the uncertainty-free view of the world. It is often conducted at

the end of a rigorous optimisation study or scenario analysis to validate the results,

e.g. in the manner of OECD/NEA (1989) and UNIPEDE (1988).
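The one-at-a-time procedure Eschenbach describes can be sketched as follows, using an invented present-worth model and assumed ±20% limits of change:

```python
# One-at-a-time sensitivity analysis on an invented present-worth model.
# Each input is moved to assumed low/high limits (here +/-20%) while the
# others stay at base values; the output swing ranks the inputs.

BASE = {"demand": 100.0, "price": 50.0, "fuel_cost": 30.0, "rate": 0.08}

def present_worth(demand, price, fuel_cost, rate, years=20):
    margin = demand * (price - fuel_cost)         # annual net revenue
    annuity = (1 - (1 + rate) ** -years) / rate   # present-worth factor
    return margin * annuity

base_pw = present_worth(**BASE)

swings = {}
for name in BASE:
    low, high = dict(BASE), dict(BASE)
    low[name] *= 0.8
    high[name] *= 1.2
    swings[name] = abs(present_worth(**high) - present_worth(**low))

most_sensitive = max(swings, key=swings.get)
```

With these figures price dominates, because the same percentage change moves the revenue margin much more than demand or the discount rate do; the ranking, not the absolute swings, is the useful output.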

Sensitivity analysis involves looking at major assumptions and exercising one's judgement. However, it is incomplete, as only a small number of outcomes is considered, and only one at a time. Other criticisms of sensitivity analysis follow.

1) Brealey and Myers (1988) criticise the use of optimistic and pessimistic estimates because their subjective nature gives ambiguous results.

2) As the underlying variables are likely to be inter-related, looking at variables in


isolation under-estimates the interaction effects.

3) A sensitivity analysis does not involve probabilities and therefore does not indicate
how likely a parameter will take on a value.

4) A percentage change in one variable may not be as likely or comparable with the
same percentage change in another variable.

5) Changing more than one variable at a time may not be feasible due to possible
dependence between variables.

6) It does not attempt to analyse risk in any formal way, leading some authors, such as
Hull (1980), to propose a follow-up with risk analysis.

3.3.4 Probabilistic and Risk Analysis

The injection of random elements such as probability distributions into scenario and

sensitivity analysis departs from the deterministic view of the world. Any

technique that assigns probabilities to critical inputs and calculates the likelihood of

resulting outcomes is considered a probabilistic analysis, according to Hirst and

Schweitzer (1990). These probabilities are subjective, i.e. based on the judgement

of planners or experts. Correlations among uncertainties are specifically

considered. Probabilistic methods help gauge the combined effects of multiple

uncertainties in cost and performance.

Compared to deterministic or fixed point approaches, probabilistic approaches give

the additional information of likelihoods and sensitivities of ranges. These two

dimensions of range and probability translate into a greater permutation of factors,

i.e. fractional designs. Dependence of factors calls for the assessment of

conditional probabilities, entailing additional analysis. Skewness and asymmetry of

probability distributions mean that more iterations are necessary for completeness.

The selection of an appropriate distribution requires judgement.

Most standard post-optimal sensitivity analyses do not treat uncertainty in any

probabilistic manner. Dowlatabadi and Toman (1990) introduce an entirely

different way of incorporating stochastic sensitivity into their cost-minimisation LP

model. Instead of using point estimates for values of input parameters, they use

subjective probability distributions to reflect judgements about the nature of

uncertainty surrounding central parameter values. Their model maps these input

distributions to outputs and employs statistical analysis to find the inputs which

have the greatest impacts on outputs. This stochastic sensitivity analysis is

similar to the method of risk analysis, which is explained next.
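The input-to-output mapping used in such stochastic sensitivity analysis can be sketched with a stand-in model and assumed subjective distributions; a simple correlation statistic then ranks the inputs by impact:

```python
# Stochastic sensitivity sketch: sample the inputs from subjective
# distributions, map them through a stand-in cost model, then rank the
# inputs by their correlation with the output. The model and the
# distributions are invented, not those of the cited study.
import random

random.seed(42)
N = 5000

def cost_model(fuel_price, demand):
    return fuel_price * demand          # stand-in for the LP cost output

fuels, demands, outputs = [], [], []
for _ in range(N):
    f = random.gauss(30.0, 6.0)         # wide uncertainty on fuel price
    d = random.gauss(100.0, 2.0)        # narrow uncertainty on demand
    fuels.append(f)
    demands.append(d)
    outputs.append(cost_model(f, d))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

fuel_impact = corr(fuels, outputs)      # near 1: fuel price dominates
demand_impact = corr(demands, outputs)  # small: demand matters little here
```

Unlike one-at-a-time sensitivity analysis, this approach varies all inputs simultaneously, so interactions and the relative widths of the input distributions are reflected in the rankings.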

While there may be many ways to analyse uncertainty, the term risk analysis

refers to a specific kind of analysis, first described in the classic paper by Hertz

(1964). Risk analysis denotes methods aimed at developing a comprehensive

understanding and awareness of risk associated with a variable. Hertz and Thomas

(1984) introduce two kinds of approaches: analytical and simulation. In the

analytical approach, input probability distributions are combined using statistical

distribution theory to calculate the mean, variance, skewness, and other parameters

of the output distribution. In the Monte Carlo simulation approach, sets of

equations are specified, and the input probability distributions are sampled and put

through the equations to give output distributions. The analytical approach is difficult in general, and impossible for a multi-staged problem containing dependent non-linear variables. It is also time-consuming and not applicable if the

distributions are not standard. Simulation is practical for non-normal distributions,

especially when exact probabilities are not known. In this sense, it is distribution-

free. Risk analysis by Monte Carlo simulation or other sampling methods is called

risk simulation, which can be automated in spreadsheet add-in software such as

@RISK (Palisade Corporation, 1992). The main arguments for and against using

risk analysis are given in Hertz and Thomas (1984) in table 3.1.

Table 3.1 Arguments For and Against Risk Analysis

FOR:

1) Provides a systematic and logical approach to decision-making.

2) Permits a thorough analysis of alternative options, particularly in complex decision problems.

3) Enables the decision-maker to confront risk and uncertainty in a realistic manner.

4) Helps communication within the organisation, as experts in other areas are consulted in the decision-making process.

5) Allows decision-makers to judge how much information to gather in a decision problem.

6) Allows judgement and intuition in decision-making to be presented in a meaningful way.

AGAINST:

1) Sometimes provides few guidelines to aid problem formulation.

2) Sometimes time-consuming; more useful for complex decision problems.

3) Lack of acceptance by some organisations.

4) Difficulties sometimes exist in obtaining probability assessments.

5) Managers sometimes find difficulty in interpreting the output of the risk analysis.

6) Not prescriptive: helps the decision maker in assessing a range of values but does not articulate the decision maker's preferences, i.e. decision-neutral.

Source: Hertz and Thomas (1984)

3.4 Decision Analysis

Between the extremes of hard and soft techniques lies decision analysis, a term

coined from the marriage of decision theory and system analysis. The explicit

representation of the decision makers risk attitude and preferences distinguishes

decision analysis from optimisation and simulation. [See Raiffa 1968, Keeney

1982, Bunn 1984, Watson and Buede 1987, and Covello 1987 for descriptions.]

Thomas and Samson (1986) generalise the usual steps in decision analysis in table

3.2 below.

Table 3.2 Steps in Decision Analysis

1 Structuring the problem: definition of the set of alternative strategies, the key uncertainties, the time horizon, and the attributes or dimensions by which alternatives should be judged.

2 Assessing consequences: specification of impact or consequence measures for the decision alternatives.

3 Assessing probabilities and preferences: assessment (or definition) of probability measures for key uncertainties and utility measures to reflect preference for outcomes.

4 Evaluating alternatives: evaluation of alternatives in terms of a criterion of choice, such as the maximisation of expected utility.

5 Sensitivity analysis: in relation to the optimal strategy, which may lead to further information gathering.

6 Choice: of the most appropriate strategy in the light of the analysis and managerial judgement, leading to implementation of the preferred strategy.
Source: Thomas and Samson (1986)

Decision analysis encompasses a range of decision theoretic techniques, including

decision trees, influence diagrams, and multi-attribute utility analysis. Decision

trees and influence diagrams are complementary structuring tools for decision

analysis. Decision trees capture the chronological sequence of decision and chance

events, while influence diagrams capture conditionality and dependence of events.


Besides being more compact than decision trees, influence diagrams reveal

probabilistic dependence and information flow.

Several difficulties with a decision analysis formulation should be noted here.

1) Decision analysis imposes an analytical pattern of thinking, i.e. step by step,


which may restrict other possible ways of approaching the problem, e.g. creatively
or holistically.

2) Capacity planning has traditionally been treated in the optimisation sense and
computationally data intensive, with little part for the decision maker because of
heavy data demands. Decision analysis essentially restructures the problem into
strict component parts of a decision: options, chance/uncertain events, outcome,

preferences etc. There is a concern that decision trees may over simplify the
problem and exclude some of the essential details.

3) Decision analysis assumes that the model is being developed in direct consultation
with the decision makers. This is often not the case with capacity planning models
where analysts build the models and present the results to the executive level.
This assumption ignores the possible gap between the analyst (modeller) and the
decision maker (user), e.g. whether the analyst is really modelling what the decision
maker wants and whether the decision maker really understands or uses the results
of the model. This aspect of decision making and modelling is discussed in Chapter
4 and again in Chapter 6.

4) Reduction methods are needed to screen out dominated options as the decision tree
can get messy or bushy very quickly. As mentioned before, the dimensionality
problem occurs when the permutation of the number of stages and branches
becomes too large to handle. In a decision tree, there is also the question of how to
propagate the stages, whether by time or by event.

5) Other drawbacks of decision tree analysis given in Thomas (1972) and others include
the need to specify discount rates beforehand, problems with probability
elicitation and assignment, and pre-specification of mutually exclusive scenarios.

Moore and Thomas (1973) also question the usefulness of decision analysis

techniques in practice and present the pros and cons in table 3.3 below.

Table 3.3 Pros and Cons of Decision Analysis

Pro:

1) Systematic and logical approach to decision making.

2) Permits a thorough analysis of alternative options.

3) Separation of utility assessment (preference assessment for outcomes) from probability assessments for uncertain quantities.

4) Allows the decision-maker to judge how much information is worth gathering in a given decision problem.

5) Helps communication within the firm and enables experts in other areas to be fruitfully consulted.

Con:

1) Time consuming.

2) Lack of acceptance that all relevant information can be encapsulated in the analysis.

3) Assessments of probabilities and utilities are difficult to obtain.

4) Evaluates the decision in soft number terms, because input data at present are often soft in the sense that only crude measurement is possible.

In spite of the above criticisms, the use of decision analysis in capacity planning has

been well documented in several US studies as shown in the following sub-

sections. This suggests that it may become better received in the UK now that the

industry has been considerably restructured, with greater emphasis on individual

decisions, uncertainties, and strategic rather than operational issues. For these
reasons, we reserve the following sub-sections to illustrate and describe four

examples of decision analytic applications. These illustrations show the versatility

of the decision tree as a structuring tool.

3.4.1 A Classic Application - the Over and Under Model

The first major study to harness decision analysis in the power planning context

was the so-called Over/Under Model (Cazalet et al, 1978). It centres around the

adjustment of planned capacity in three project stages according to the level of

demand. If installed capacity plus work-in-progress is short of the actual demand,

additional capacity is planned. If capacity is greater than the actual demand plus a

margin, then the project is delayed. The main objective is reliability, i.e. to ensure

no short-fall of capacity. Its simplicity and generic representation of capacity

planning introduced decision analysis to the modelling of such problems.

Depicted in figure 3.3, this model tracks the way the uncertainty of demand growth

is resolved over time, the role of uncertainty in load forecasts, and the effect of

plant lead times on capacity planning. Its primary objective is to estimate the

levelised consumer cost with respect to the reserve margin requirement.

Uncertainty of load demand is expressed in the form of probability functions. It

uses conditional expected demand in unfolding the actual demand in a sequential

manner. Capacity expansion is reduced or accelerated according to how well

current capacity matches the current demand.

Figure 3.3 The Over and Under Model

[Decision tree showing capacity in MW unfolding against annual demand outcomes for 1978, 1979, and 1980, with branch probabilities (0.2, 0.6, 0.2) and demand growth in parentheses. Figure not reproduced.]

Source: Cazalet et al (1978)

Demand is the only uncertainty that drives the Over/Under Model from period to

period. It relies solely on adjusting the reserve margin in response to demand

uncertainty, but this assumes that the adjustment is costless. Furthermore, it does

not account for adjustments in the technological mix. Although grossly over-

simplified, the Over/Under Model can be extended to multiple periods,

representing a new period with each stage of the tree. The time horizon is

relatively short (less than 20 years) compared to other models. It is nevertheless a

breakthrough in new methodological development and, as a result, often cited in

subsequent capacity planning applications.
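The core adjustment rule of the Over/Under Model can be caricatured in a few lines of code. The function below is a sketch only; the names, thresholds, and reserve margin are illustrative and not taken from Cazalet et al (1978).

```python
def adjust_plan(installed_mw, in_progress_mw, demand_mw, reserve_margin=0.15):
    """Caricature of the Over/Under adjustment rule (names and the margin
    are illustrative).  Plan more capacity when committed capacity falls
    short of demand; delay projects when it exceeds demand plus a margin."""
    committed = installed_mw + in_progress_mw
    ceiling = demand_mw * (1.0 + reserve_margin)
    if committed < demand_mw:
        return "expand", ceiling - committed   # plan additional capacity
    if committed > ceiling:
        return "delay", committed - ceiling    # defer work in progress
    return "hold", 0.0
```

Applied period by period to the unfolding demand, this rule reproduces the model's behaviour of accelerating or reducing capacity expansion according to how well current capacity matches current demand.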

3.4.2 An Extension of the Baughman-Joskow Model

Many of the earlier models addressed only one of the following three aspects of

capacity planning: supply, demand, or regulatory processes. Baughman and

Joskow (1976) were the first to simultaneously address all three aspects by basing

the planning and investment decisions on expectations rather than on actual realisations of parameters. These aspects are represented in three sub-models, which are integrated by a transfer of information through the use of a rolling horizon.

A similar approach to assess the effect of uncertainty is found in a more aggregate

and regional model in Baughman and Kamat (1980) which looks at the rate of

change of demand rather than absolute demand levels. This study shows how

easily the single utility case can be extended to cover a broader geographic scope

and compares relative costs and benefits of over- and under-capacity. The tree

representation together with the payoff matrix of figure 3.4 and graphical results

permit a good discussion of impacts of over- and under-capacity. Only demand

uncertainty is considered although extensions are possible.

Figure 3.4 Matrix Model of Decisions and Outcomes

Capacity expansion decision (rows) against demand growth outcome (columns):

SLOW decision: SLOW outcome — no shortage, adequate capacity; MODERATE — shortage, under-capacity; RAPID — severe shortage, under-capacity.
MODERATE decision: SLOW outcome — no shortage, over-capacity; MODERATE — no shortage, adequate capacity; RAPID — shortage, under-capacity.
RAPID decision: SLOW outcome — no shortage, over-capacity; MODERATE — no shortage, over-capacity; RAPID — no shortage, adequate capacity.

Source: Baughman and Kamat (1980)

3.4.3 Multiple Objectives Under Uncertainty

Decision analysis can also be used to examine the effect of pursuing different

objectives under uncertainty. An example of this is the SMART/MIDAS

application of Hobbs and Maheshwari (1990). They address the effects of using

different planning objectives under uncertainty and the impact upon costs and

electricity prices of risk averse decision making. The four conflicting objectives in

the assessment of the benefits of small power plants and conservation when load

growth is uncertain are 1) minimising levelised retail electricity prices, 2)

minimising expected total costs, 3) minimising excess capacity, and 4) maximising

consumer surplus. They also examine the effects of different degrees of

uncertainty in demand growth, fuel prices, capital costs, and imported power upon

optimal utility plans, the value of information, and the variance of electricity prices

and total electricity production costs.

This example suggests the attractiveness of applying a simple model (figure 3.5),

such as decision analysis in the Simple Multi-Attribute Risk Trade-off System

(SMARTS), to explore a wide range of uncertainties and options, and then a

detailed model to focus on the critical ones. Once the range of uncertainties and

options is reduced, the problem can be addressed by the data intensiveness of the

detailed model, called the Multi-Objective Integrated Decision Analysis System

(MIDAS). In a later extension of this model, Hobbs and Maheshwari included

additional uncertainties such as fuel price, capital cost, and imported power.

Figure 3.5 Decision Tree in SMARTS

[Schematic of the decision tree used in the SMARTS analysis: decision nodes for the number of units to add and whether to undertake a DSM program, followed by chance nodes for capital and fuel cost and purchase power cost, each with three-point (Hi/Med/Lo) probability branches over seven stages spanning years 0 to 30. Figure not reproduced.]

Source: Hobbs and Maheshwari (1990)

Several criticisms are noted. 1) Only two time periods are used. 2) The growth-rate probabilities for the two time periods are treated as independent, clearly not the case in reality. 3) Perfect correlation was boldly assumed between fuel and capital costs. 4) Three-point probability distributions were used rather than more specific continuous distributions. 5) The ratio of peaking to baseload units is fixed. 6) The study inherently assumes that delays can be made if necessary.
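Criticisms 2) and 4) can be made concrete with a small sketch of how expected values are computed from discrete three-point branches of the kind shown in figure 3.5. All values, probabilities, and the cost model below are illustrative, not those of Hobbs and Maheshwari.

```python
from itertools import product

# Three-point approximations of two uncertainties (values and weights are
# illustrative; SMARTS used similar discrete branches, cf. figure 3.5).
growth = [(0.01, 0.167), (0.02, 0.500), (0.03, 0.333)]
fuel   = [(40.0, 0.25), (55.0, 0.50), (70.0, 0.25)]

def expected_cost(cost_fn):
    # Joint probabilities are simple products, i.e. the two uncertainties
    # are treated as independent -- exactly the assumption criticised above.
    return sum(pg * pf * cost_fn(g, f)
               for (g, pg), (f, pf) in product(growth, fuel))

# Hypothetical cost model: cost scales with demand growth and fuel price.
ev = expected_cost(lambda g, f: 1000.0 * (1.0 + 10.0 * g) * (f / 55.0))
```

Replacing the product of marginal probabilities with a joint distribution is the obvious remedy for criticism 2, at the price of more elicitation effort.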

3.4.4 Multi-Attribute, Objectives Hierarchy

Values and preferences of different decision makers and stakeholders translate into

more than one, and possibly conflicting, objectives. Keeney and Sicherman's (1983)

model of Utah Power and Light's planning process emphasizes the aspect of

building a multi-attribute objective hierarchy to clarify preferences.

Subsequently, but still based on the original technology choice decision, the

Baltimore Gas Study (Keeney et al, 1986) extends the objective hierarchy and adds

new uncertainties to the analysis.

The Baltimore Gas Study uses the decision tree in figure 3.6 to capture the choices

available to a utility and the uncertainties that result through time. This analysis

addresses both the dynamic and multiple objective aspects of the problem. The

corresponding objectives hierarchy in figure 3.7 enables the systematic

consideration of economic, management, and other impacts. Sensitivity and break-

even analyses are performed afterwards to confirm the best strategies chosen.

Decision analysis provides a common framework for communication and

documents the process step by step.

Figure 3.6 Technology Choice Decision Tree

[Decision tree for the 1986 technology choice: a conventional coal unit with scrubbers for 1994 operation, a coal unit designed for compliance and adaptable to scrubbers, purchased power for 1994-1998 followed by a conventional coal or new technology unit for 1998, with uncertainties over readiness in 1994, fuel availability, whether scrubbers are required by 1990, whether the new technology facility meets expectations, and low/medium/high demand. Figure not reproduced.]

Source: Keeney et al (1986)

Figure 3.7 Technology Choice Objectives Hierarchy

- Economic impact: customer cost; shareholder return
- Management impact: decision difficulty; corporate image
- Health and safety: mortality; morbidity
- Socio-economic impacts: water usage; transportation impact; community disruption; local employment; local tax revenue
- Environmental impact: visual impact
- Public attitudes
- Feasibility

Source: Keeney et al (1986)

3.5 Model Synthesis

The previous three sections reviewed applications of primarily single techniques to

focus on their specific modelling strengths and weaknesses. Most applications,

particularly at the national or regional system level, involve more than one

technique, which we call model synthesis. Model synthesis refers to any formal

attempt to use two or more of the above-mentioned techniques to achieve the same

objectives as that of a single technique. We present examples of model synthesis in

practice to illustrate 1) the kinds of techniques that are suitable for synthesis, 2)

how well they address the areas and types of uncertainties, and 3) the manner of

synthesis.

3.5.1 Commercially Available Software

Most commercially available software packages reviewed in IAEA (1984) make

use of different techniques to address different aspects of standard capacity

planning problems. Not all packages are integrated as some would require the user

to import data from different modules and perform separate analyses. Generation

planning is divided into different functions, such as sectoral demand analysis,

production costing, and plant scheduling.

Electric Generation Expansion Analysis System (EGEAS), developed by the US-

based Electric Power Research Institute (EPRI), contains an LP option, a

generalised Benders decomposition option for non-linear optimisation, a

dynamic programming option, scenario generation following sensitivity

analysis, and data collapsing for trade-off and uncertainty analysis. These

options are selected by the user. For uncertainty analysis, the user specifies

uncertain input parameters, assumptions, range of values, and jointly varying

subsets of uncertain input parameters. The model then generates scenarios for

each possible combination of uncertain parameter values.
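This scenario generation step amounts to a cross product over user-specified parameter ranges and can be sketched as follows. The parameter names and values are hypothetical, not actual EGEAS inputs.

```python
from itertools import product

# Hypothetical uncertain input parameters and their ranges of values;
# EGEAS-style generation enumerates every combination as one scenario.
uncertain = {
    "demand_growth": [0.01, 0.02, 0.03],
    "fuel_price":    [40, 55, 70],
    "capital_cost":  [900, 1100],
}

def generate_scenarios(params):
    keys = list(params)
    for combo in product(*(params[k] for k in keys)):
        yield dict(zip(keys, combo))

scenarios = list(generate_scenarios(uncertain))   # 3 * 3 * 2 = 18 scenarios
```

The exhaustive enumeration makes the dimensionality problem visible: each added parameter or level multiplies the number of scenarios to be evaluated.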

These commercial packages are composed of modules which share the same data

base and have a common interface. Packages such as EGEAS, WASP, and

WIGPLAN have voluminous data requirements. Designed to run on mainframe

computers, they are not flexible for modification or customisation. Nor are these packages transparent to the user, and hence they cannot readily be extended or tailored to unique situations or unknown technologies.

3.5.2 Decision Analysis with Optimisation

We describe three examples of synthesis of decision analysis and optimisation

techniques.

DECISION ANALYSIS AND DYNAMIC PROGRAMMING

Lo et al (1987) combine decision analysis and forward dynamic programming to

produce a Decision Framework for New Technologies (DEFNET). It is a strategic

planning tool that can be used to model uncertainties in load growth, fuel and

capital costs, and performance of new generation technologies. The power system

planner's risk attitudes and value judgements are captured in the multi-attribute

utility functions, with the decision criterion to find the least expected cost or the

maximum expected utility. DEFNET can also incorporate attributes other than

cost, such as environmental impact, licensing and operating delays due to

regulations, system reliability, corporate image, etc. The optimisation grid of

figure 3.8 depicts possible decision paths which branch out from an initial state.

More information available in the near term translates to finer details as opposed to

the coarser or wider range of possibilities in the future. This is analogous to the

increasing variance of probability distributions in the longer time horizon. The

model can be run deterministically or stochastically in optimisation or simulation

mode.

Figure 3.8 Optimisation Grid

[DEFNET optimisation grid: possible decision paths branch from an initial state in 1982 through configuration states at five-year intervals to 2027, distinguishing feasible alternative resource plan states from infeasible states, with an extension period at the horizon. Figure not reproduced.]

Source: Lo, Campo, and Ma (1987)

DECISION ANALYSIS, DECOMPOSITION, AND STOCHASTIC

FRAMEWORK

Capacity expansion planning can be viewed as investment decisions made for the

long term, adjusted by operating decisions made in response to the changing

environment in the short term. Gorenstin et al (1991) represent the investment

sub-problem as a multi-stage, mixed-integer programme which is solved by a

branch and bound algorithm. The operation sub-problem is a multi-stage, multi-

reservoir hydro-thermal scheduling problem with the objective to minimise

operation cost. The Benders decomposition technique integrates the two sub-problems by feeding back the decision consequences at each iteration. The

operation sub-problem is solved for different demand scenarios simultaneously, to

give expected operation costs. Within this decomposition and stochastic

optimisation framework, a minimax decision rule is used to select the optimal cost.

Sixteen scenarios are generated from the combinations of uncertainties in a binary

decision tree. The optimal expansion strategy minimises the maximum regret

obtained from evaluating the binary tree of different growth rates and decisions.

This elaborate methodology has nevertheless only been applied to the Brazilian system, which consists mostly of hydro plants, and hence involves parameters specific to a single type of technology.
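The minimax regret rule used to select the optimal expansion strategy can be illustrated with a small sketch; the cost matrix below is invented, and the sketch ignores the decomposition machinery of Gorenstin et al.

```python
def minimax_regret(costs):
    """Pick the plan whose worst-case regret -- its cost minus the best
    cost attainable in that scenario -- is smallest.  A generic sketch,
    not the exact Gorenstin et al formulation."""
    n = len(next(iter(costs.values())))
    best = [min(c[s] for c in costs.values()) for s in range(n)]
    worst_regret = {plan: max(c[s] - best[s] for s in range(n))
                    for plan, c in costs.items()}
    return min(worst_regret, key=worst_regret.get)

# Hypothetical present-worth costs per plan under three demand scenarios.
choice = minimax_regret({
    "expand_early": [100, 140, 200],
    "expand_late":  [130, 135, 150],
})
```

Unlike expected-cost minimisation, the rule needs no scenario probabilities, which suits cases where likelihoods are hard to elicit.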

DECISION ANALYSIS AND DECOMPOSITION

A decision tree structures the capacity planning problem in Mankki (1986). The

problem is then formulated as a linear programme and solved by a decomposition

method. The decision tree shown in figure 3.9 captures the uncertainties of

demand, fossil fuel prices, and nuclear capital costs. The method of probability

elicitation or assignment was not discussed.

Figure 3.9 Decision Tree with Optimisation Algorithm

[Decision tree over 1986-1996, alternating capacity decisions (coal plant C1991, C1996, C2001; nuclear plant N1996, N2001) with chance branches for high/low demand, fossil fuel prices, and nuclear capital costs. Figure not reproduced.]

C1991 = coal power plant (production start 1991); CFFP = constant fossil fuel prices; RFFP = rising fossil fuel prices; LNCC = low nuclear capital costs; HNCC = high nuclear capital costs; LD = low demand; HD = high demand.

Source: Mankki (1987)

3.5.3 Scenario Generation

EXTENSION OF THE OVER/UNDER MODEL

Clark (1985) extends the Over/Under Model described earlier to generate scenarios

for the analysis of demand uncertainty. This integrated demand forecasting model

consists of a production simulation model, financial model, rate model, and an


econometric model. The probability of each scenario is determined by the

probabilities of cost, commercial operation date, equivalent availability, and

demand growth components. A sensitivity analysis produces a ranking of trade-

offs. The model is also used to examine excess capacity rules in the adjustments

of reserve margins.
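The composition of scenario probabilities from component probabilities can be sketched as a product over the components; the components, levels, and probabilities below are invented, and independence between components is assumed for simplicity.

```python
from itertools import product

# Hypothetical component uncertainties; Clark (1985) forms each scenario's
# probability from its component probabilities (cost, operation date,
# availability, demand growth) -- modelled here as a simple product,
# which assumes the components are independent.
components = {
    "cost":         {"low": 0.3, "high": 0.7},
    "online_date":  {"on_time": 0.6, "late": 0.4},
    "availability": {"good": 0.5, "poor": 0.5},
}

keys = list(components)
scenarios = {}
for combo in product(*(components[k] for k in keys)):
    prob = 1.0
    for key, level in zip(keys, combo):
        prob *= components[key][level]
    scenarios[combo] = prob   # e.g. ("low", "on_time", "good")
```

Because each component distribution sums to one, the scenario probabilities also sum to one, which provides a quick consistency check on the elicited inputs.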

DECISION ANALYSIS FOR SCENARIOS

Garver et al (1976) use a decision tree (figure 3.10) to generate three strategies

and five event scenarios for a five year period. The decision alternatives and

possible event scenarios are propagated yearly. Two criteria are used to evaluate

strategies: 1) expected value is computed from likelihood weighting of present

worth costs, and 2) the cost of uncertainty represents the opportunity loss.

Although the time horizon is relatively short and the scenarios few, this example

illustrates the potential for structuring extensive scenario analysis in a decision tree

framework.
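Both evaluation criteria can be illustrated with a small payoff matrix; the strategy names echo figure 3.10, but the costs and likelihoods below are invented.

```python
# Hypothetical present-worth costs (strategy x event scenario) and
# scenario likelihoods; all numbers are illustrative.
probs = [0.4, 0.3, 0.3]
pw_cost = {
    "min_long_range_cost": [90, 130, 150],
    "oil_conservation":    [110, 115, 140],
}

# Criterion 1: expected value from likelihood weighting of present-worth costs.
expected = {s: sum(p * c for p, c in zip(probs, costs))
            for s, costs in pw_cost.items()}

# Criterion 2: cost of uncertainty as expected opportunity loss against
# the cheapest strategy in each scenario.
best = [min(costs[i] for costs in pw_cost.values()) for i in range(len(probs))]
cost_of_uncertainty = {s: sum(p * (c - b) for p, c, b in zip(probs, costs, best))
                       for s, costs in pw_cost.items()}
```

The two criteria need not agree: in this sketch the first strategy has the lower expected cost while also carrying the lower cost of uncertainty, but other matrices can rank them differently.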

Figure 3.10 Scenario / Decision Analysis

[Decision tree: the 1976 to 1980 decisions branch into three strategies (minimum long range cost, short term oil conservation, short term capital conservation), each evaluated against five event scenarios: business as usual, a 50% oil price increase in 1980, a 50% nuclear fuel price increase in 1980, a 50% coal price increase in 1980, and a 20% cost of capital increase in 1980. Figure not reproduced.]

Source: Garver et al (1976)

MULTI-OBJECTIVE LP, SENSITIVITY ANALYSIS

The multi-criteria formulation of Amagai and Leung (1989) combines multi-

objective linear programming with sensitivity analysis. For this study, country risk

was assessed by a scoring system based on the weighted sum of a subset of factors that affect legislation and regulation related to the shipping of fuel. The

simply formulated LP is driven by four scenarios, which are determined by the

demand for electricity, fuel costs, controllability (how closely a power plant can

follow the load duration curve), utilisation rate for each type of plant, and daily

load curve. However, the dimensionality of extensive scenario analysis due to the

number of stages, decision alternatives, and range of uncertainties hinders efficient

analysis.
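The country risk scoring system amounts to a simple weighted sum, sketched below. The factors, weights, and scores are hypothetical, not those of Amagai and Leung.

```python
# Hypothetical factor weights (summing to one) and 0-10 factor scores;
# the actual factor list and weights of Amagai and Leung are not
# reproduced here.
weights = {"legislation": 0.40, "regulation": 0.35, "shipping_routes": 0.25}

def country_risk(scores):
    """Weighted-sum risk score: higher means riskier in this sketch."""
    return sum(weights[f] * scores[f] for f in weights)

risk = country_risk({"legislation": 6, "regulation": 4, "shipping_routes": 8})
```

Such additive scores are transparent but sensitive to the chosen weights, which is one reason sensitivity analysis accompanies the LP in this study.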

3.5.4 Decision Analysis as a Framework

DECISION ANALYSIS AND LINEAR PROGRAMMING

The soft or descriptive side of decision analysis is helpful in structuring a

multi-stage problem with interactions of uncertainty. Kreczko et al (1987) track

the decisions regarding whether or not to build a new type of technology and then

use a linear programme to calculate its effect on costs and installed capacity. A

utility function called the net decision benefit is used to propagate the decision tree

in figure 3.11. Lack of experience with the operation of this new type of

technology implies uncertainties in future costs. Capital costs are derived from a

multiple of related technological cost estimates. Other costs are elicited from

different estimates found in the literature and justified by rational assumptions

made on the basis of industry understanding. This application shows the

importance of addressing uncertainties, especially the subjective and judgemental

nature of estimates required in place of unavailable data.
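The propagation of a decision tree by expected value can be sketched with a minimal recursive rollback. The tree, payoffs, and probabilities below are invented, and the "net decision benefit" of Kreczko et al is replaced by a plain expected monetary value.

```python
# Minimal expected-value rollback of a decision tree: chance nodes take
# probability-weighted averages, decision nodes take the best branch.
def rollback(node):
    if node["type"] == "terminal":
        return node["value"]
    if node["type"] == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    # decision node: take the branch with the highest expected benefit
    return max(rollback(child) for _, child in node["branches"])

tree = {"type": "decision", "branches": [
    ("build", {"type": "chance", "branches": [
        (0.6, {"type": "terminal", "value": 120}),   # performs well
        (0.4, {"type": "terminal", "value": -50}),   # performs badly
    ]}),
    ("dont_build", {"type": "terminal", "value": 30}),
]}

best_expected_benefit = rollback(tree)
```

In the full application each terminal value would come from a linear programme evaluating costs and installed capacity, rather than from a fixed payoff.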

Figure 3.11 Decision Tree of New Technology Evaluation

[Decision analysis of the CDFR: whether to build the CDFR (performance uncertain), whether to follow with CFRs (expansion rate uncertain), with further branches for an uncertain delay and for foreign fast reactors of uncertain performance. Figure not reproduced.]

Source: Kreczko et al (1987)

SIMULATION AND OPTIMISATION

Results from different types of simulation in Merrill and Schweppe (1984) are

synthesized to incorporate the multiple and often conflicting objectives and

uncertainties found in strategic planning. Their Simulation Modelling and

Regression Trade-off Evaluation (SMARTE) makes use of user-postulated

SMARTE functions to answer specific questions related to strategic planning.

SMARTE, depicted in figure 3.12, is not a computer programme but a set of

techniques for extracting maximum information from available simulation tools,

whether manual or computer based. The resulting functions are validated using t

statistics, coefficients of correlation, sums of squares of mismatches and goodness

of fit to the data base. Users' insights are introduced into the model, hence the

large subjective element. SMARTE applications are found in Merrill (1983) and

Merrill et al (1982) which feature the combined use of simulation and optimisation.

Some statistical manipulation, such as bootstrapping an optimisation, may be

required to facilitate a simulation.
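The regression and validation step at the heart of SMARTE can be sketched as fitting a linear function to simulated outputs and checking its goodness of fit. The data below are synthetic, and only R-squared is computed rather than the full set of statistics mentioned above.

```python
import random

# Fit a one-variable "SMARTE function" y = a*x + b to synthetic simulation
# output by ordinary least squares, then check goodness of fit with R^2.
random.seed(1)
xs = [i / 50 for i in range(50)]                           # simulation input
ys = [3.0 * x + 2.0 + random.gauss(0, 0.05) for x in xs]   # noisy "runs"

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1.0 - ss_res / ss_tot
```

The fitted function then stands in for the expensive simulation during trade-off evaluation, which is the essence of the SMARTE approach.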

Figure 3.12 SMARTE Methodology

[Production cost, financial, environmental impact, and other simulations feed a modelling and regression step (the SMARTE functions), which in turn feeds a trade-off evaluation. Figure not reproduced.]

Source: Merrill et al (1982)

3.6 Conclusions

This extensive review of techniques and applications to electricity capacity

planning reveals the limitation of existing approaches and the potential for greater

model completeness through synthesis. The main conclusions are presented and

supported below.

1) All kinds of OR techniques have been applied to the problem of capacity planning,
although the areas and types of uncertainties are modelled with different degrees of
completeness and adequacy.

These techniques pictured in figure 3.13 span the range of optimisation, simulation,

and decision analysis. Table 3.4 summarises the critique of techniques with respect

to uncertainties.

2) Models (or applications) based on single techniques are able to capture some aspects
very well and others not at all. Adding greater detail does not compensate for what
the technique was not designed to do.

Models based on single techniques are not functionally versatile or comprehensive

enough to address all the issues. For example, models without a decision analytic

focus have difficulty capturing the multi-staged nature of decisions and the

associated risks. One cannot use optimisation techniques for decision analysis

purposes as the assumptions are not compatible. Likewise, one cannot use

scenario analysis to achieve optimality.

In many applications, the treatment of uncertainty is inadequate. Deterministic

treatment of parameters is followed, at best, with sensitivity analysis, which gives

no indication of the likelihood of the uncertain values. The approach of following

a rigorous optimisation with sensitivity analysis is regarded mainly to validate a

deterministic analysis, e.g. McNamara (1976). The validation process bears no

relation to the actual resolution of uncertainty in time. Uncertainty is also treated

in the same sense as variability. Rather than building something intrinsic to deal

with uncertainty, many consider sensitivity analysis sufficient, even though the attention to uncertainty then comes almost as an afterthought. The primarily deterministic techniques are simply unable to address

uncertainties. Others give limited but inadequate attention to this issue. The

attitude that a recognition of uncertainty and some attention to it is better than

none at all is reflected in the ad hoc manner in which uncertainty is handled.

Increasing uncertainties have often called for softer approaches such as scenario

analysis which is only able to address partial aspects of the problem.

3) The critique of techniques with respect to areas and types of uncertainties reveals
a lack of completeness of coverage, inadequacy of treatment, and further difficulties
of manageability, computational tractability, and other problems.

The modelling critique supports the above conclusions. Table 3.4 presents the

assessment of each technique against the uncertainties of Chapter 2 with additional

difficulties noted alongside.

Table 3.4 Critique of Techniques

Linear programming
- Areas and types of uncertainties (Chapter 2): difficulty incorporating operational/technical characteristics; cannot incorporate risk attitude; unable to handle conflicting objectives; unable to handle multi-staged decisions; unable to handle uncertainties directly; confusion of validation with uncertainty analysis; confusion of variability with uncertainty analysis.
- Difficulties: many technical and operational characteristics are non-linear; manageable level of model size and accuracy; computational tractability; strict mathematical requirements of technique.

Stochastic programming
- Uncertainties: limited multi-stage treatment via SP with recourse; can handle uncertainties, but not comprehensively.
- Difficulties: able to handle non-linearity; probabilities introduce a dimensionality problem; software inflexibility and difficulties.

Decomposition
- Uncertainties: can integrate different levels of detail; can integrate different types of decisions; no evidence of multi-criteria; no evidence of uncertainties.
- Difficulties: computational efficiency; feasibility of handling uncertainties questionable.

Dynamic programming
- Uncertainties: not able to cope with closed-loop decision making, i.e. resolution of uncertainty, recourse; voluminous output.
- Difficulties: dimensionality problem; mathematical restrictions.

System dynamics
- Uncertainties: not optimal; systemic but not from a decision maker's perspective; difficult to validate; hard to calibrate.
- Difficulties: time requirements; computing requirements; validation difficulties.

Scenario analysis
- Uncertainties: not multi-staged; not optimal.
- Difficulties: subjective; conceptual and not prescriptive.

Sensitivity analysis
- Uncertainties: not multi-staged; few factors considered; rarely used alone; not optimal; no interaction effects considered.
- Difficulties: not prescriptive; problems with dependence; factors considered one at a time; no account of likelihoods.

Risk analysis
- Uncertainties: use of probabilities; parametric uncertainty, but not uncertainty due to model structure; lack of decision focus.
- Difficulties: dependence/independence issues; selecting appropriate probability distributions; skewness and asymmetry; interpretation of results; multi-variate probability distributions; adequate number of factors to simulate.

Decision analysis
- Uncertainties: considers multi-staged resolution of uncertainty; decision criteria and multi-attribute utility functions; role of the decision maker; not detailed enough; not optimal.
- Difficulties: how many stages to consider; subjective probabilities or historic evidence for uncertain events; which decisions to consider; how to propagate decisions, by individual factors or by scenarios.

4) The additional modelling difficulties translate into new modelling requirements, which
reflect the conflicting criteria of comprehensiveness, comprehensibility, and
practicality.

The above modelling difficulties can be condensed into five main areas: 1)

mathematical restrictions, 2) functionality, 3) computational tractability, 4) data

specification, and 5) uncertainty representation. These are briefly explained below.

1) Mathematical restrictions lay the rules and foundation for any technique and set
the boundaries and conditions for its functionality. Linearity and convexity
requirements of linear programming prevent its applicability to non-linear and non-
convex relationships.

2) These structural conditions for functionality determine the way in which the
problem can be formulated. For example, multi-stage resolution of uncertainty
cannot be achieved within a linear programming framework while optimisation
with respect to given constraints cannot be accomplished by decision analysis or
system dynamics.

3) There is a trade-off in computational tractability of meeting the dimensionality of


variables, interaction effects, algorithm efficiency, and software performance.
Although some aspects of the problem formulation may be approximated at the
expense of efficiency and realistic representation, models usually become too large
to be computationally tractable. Borison et al (1984) note that uncertainty greatly
increases the number of conditions under which each technology choice decision
must be evaluated.

4) Many of the problems of dimensionality and algorithm efficiency are related to data
specification, that is, the level of complexity and realism that can be modelled
without sacrificing tractability. Assumptions, approximations and reduction of
information which are undertaken to simplify the problem to a manageable level
must be assessed against the need for completeness. These considerations imply

judgement, trade-off evaluation, and decision making at the model construction
stage.

5) The final difficulty concerns uncertainty representation. Chapter 2 has listed


various types of uncertainties, from values and parametric relationships to the
unforeseeable surprise events in the future. The applications reviewed in this
chapter dealt with uncertainties by:

- representation by probabilities
- accurate depiction of interactions
- consideration of all types of scenarios and uncertainties imaginable.

However, not all techniques are capable of representing and adequately treating
uncertainties.

5) A balance of hard and soft techniques is needed to address the different aspects of
capacity planning.

We distinguish between hard and soft techniques for the purposes of this

discussion. Hard techniques are data intensive, mathematically rigorous and

computational in nature. They are suitable for well-structured problems and are

aimed at solving bigger problems more quickly and efficiently. Towards the other

end of the spectrum lie softer methods that are less formal but more qualitative in

scope, suitable for uncertainty and strategic analysis. They are more descriptive

than prescriptive as the purpose is often to understand rather than to solve.

Capacity planning traditionally has been approached in the domain of hard

disciplines because of its data requirements, technical and physical constraints.

Capacity planning encompasses short-term production costing or merit ordering

and long-term investment and retirement decisions, which entail many parameters

specific to plant and system.

The privatised electricity supply industry in the UK is characterised by new kinds

of uncertainties, conflicting objectives, rapid changes, high costs and risks. These

aspects are not well addressed by hard prescriptive techniques. Poor representation, or total exclusion, of such uncertainties and characteristics of capacity planning has been preferred in order to keep the problem tractable. The well-specified problems that

constitute capacity planning have evolved into an environment that is ill-specified,

requiring a balance of hard and soft approaches. Strategic decisions have high cost

implications, and the uncertainties that affect such decisions require soft

techniques which can handle the subjective, non-quantifiable parameters. To

model uncertainty in capacity planning, both types of techniques are needed, i.e.

those hard techniques that capture the intricacies of electricity generation

scheduling and the softer methods of uncertainty analysis. Hard and soft

techniques are complementary to each other in addressing the range of issues.

6) Similarities and synergies in techniques suggest possibilities for synthesis.

In the past, the focus was towards efficiency of algorithms via the continued

improvement and specialisation of stand-alone techniques. As a result of this

technique-driven mentality, synergies across different approaches were not

identified or exploited.

Some techniques that have been used in electricity planning share similar

characteristics and functionality. Others have synergies in structure. On this basis,

these techniques are arranged and linked in figure 3.13.

Figure 3.13 OR Techniques

[Map linking related techniques: linear, stochastic, and dynamic programming, decomposition, and multi-objective analysis within optimisation; sensitivity analysis, risk analysis, system dynamics, and scenario analysis within simulation; and multi-criteria analysis, multi-attribute utility, and decision analysis (influence diagrams, decision trees) bridging the two. Figure not reproduced.]

Complementary modelling, as prescribed in Bunn et al (1993), shows how

different techniques or approaches can balance the objectives and assist towards a

more complete model. For example, system dynamics and optimisation allow a

more comprehensive analysis of different impacts of uncertainty and different

effects of privatisation than each technique alone.

7) Model synthesis has the potential to overcome the deficiencies of single technique
based models by exploiting synergies between techniques and achieving a balance of
techniques.

Intuitively, synthesis communicates the best of both worlds. Hence, model

synthesis should be capable of supporting the balance of hard and soft techniques

which complement each other in functionality towards completeness. Our review

of applications based on two or more techniques supports this view.

8) Decision analysis emerges as a versatile technique with proven potential for synthesis
with other techniques.

This review has deliberately steered towards decision analysis, which has not

received much attention in the UK (as compared to the US). Recent desk-top

decision software, such as DPL (ADA, 1992), has automated the previously

tedious process of decision tree structuring and expected value calculation.

Influence diagrams complement decision trees, further supporting the versatility

of the technique.

9) A fair method of model comparison beyond the literature review is needed to evaluate
model characteristics and performance to a greater level of detail and depth.

This review has attempted to assess the capacity planning models reported in the

literature as fairly and insightfully as possible. However, it is restricted to what has

been reported in the literature. To determine the applicability of these techniques

to capacity planning in the privatised UK ESI and methods of overcoming the

difficulties found in this review, it is necessary to look into the model, e.g. via a

replication of existing models.

QUESTIONS FOR MODEL SYNTHESIS

The strong engineering culture of electricity capacity planning has evolved from

modelling operational uncertainties to more competitive uncertainties, ultimately

resulting in models with more detail. In light of this, the following emerge.

1) How can a single utility, given its limited resources, expand existing techniques
to cope with all these uncertainties (in Chapter 2) given the modelling
difficulties listed above?

2) Can model synthesis feasibly and practically overcome these difficulties?

3) How can we compare models more fairly and in greater depth, to get beneath
what is written and reported?

CHAPTER 4

The Pursuit of Model Synthesis

4.1 Introduction

Chapter 3 reviewed the range and evolution of techniques which have been applied

to capacity planning. Models based on two or more techniques seem to be more

capable of capturing the different kinds of complexities and uncertainties than those

based on single techniques. This suggests that model synthesis (using more than

one technique) may help to achieve the ideal of comprehensive yet comprehensible

models by exploiting synergies across complementary and compatible techniques.

Yet the modelling literature has little to offer on strategies and criteria for model

synthesis.

This chapter gives an account of the investigations into the feasibility and

practicality of model synthesis and associated model structures by a

conceptualisation of model synthesis (Appendix C), prototyping, and a comparison

of model performance (section 4.3). Through a series of modelling experiments

(section 4.4), replications of existing approaches (section 4.2) and construction of

synthesis prototypes (section 4.5) are assessed according to the dominant criteria

of comprehensiveness (model completeness) and comprehensibility (transparency

and manageability).

The finding that model synthesis, via the decision analytic framework proposed by

this thesis, had major practical limitations raised two important questions. 1) Is

model completeness a reasonable goal in the first place? 2) Are there more

practical means to compensate for the lack of completeness or deal with the range

of uncertainties? Section 4.6 discusses these questions with respect to the concept

of flexibility. The last section 4.7 summarises the main findings and proposes an

agenda for the rest of the thesis.

4.2 Experimental Protocol

An experimental research protocol, closely resembling a scientific experiment, was

adopted to allow an objective and systematic method of inquiry. Figure 4.1 shows

the three main components, namely, literature review, feasibility tests, and

modelling experiment. Such a case-study based modelling experiment was

originally intended to yield both methodological and energy policy insight.

First, extensive literature reviews from the previous two chapters identified

uncertainties and modelling requirements, which formed the basis for a lengthy set

of model evaluation criteria (section 4.3.4). Second, two pilot studies (section

4.3.2) were conducted to ascertain the feasibility of model replication with

available modelling software, applicability of the evaluation criteria, and soundness

of the evaluation method (section 4.3.3). Third, a case study (section 4.4.1) was

developed to capture the current concerns in the UK ESI and to inject energy

policy insight into the subsequent analysis. Fourth, three modelling approaches

(section 4.4.2) representing those followed in the industry were replicated and

evaluated. Fifth, issues of model synthesis (section 4.5.2) were conceptualised.

Several prototypes of model synthesis within a decision analysis framework

(section 4.5.3) were constructed. Similarly a so-called model of model (section

4.5.4) was tested. Insights from replicating existing approaches were transferred

to the second stage of model synthesis, where the construction of comprehensive

yet comprehensible models via complementary and compatible techniques was

studied.

Figure 4.1 Experimental Protocol

[Diagram of the three research components. Literature Review: uncertainties and modelling requirements (Chapter 2); techniques and applications (Chapter 3); evaluation criteria. Feasibility Tests: Pilot Study 1 (Appendix A); Pilot Study 2; model replication and evaluation. Modelling Experiment: case study; Stage 1, replication of 3 archetypal approaches (Appendix B); Stage 2, model synthesis, decision analysis framework, model of model; assessed for feasibility, practicality, and re-usability.]

Apart from minor criticisms1, the experimental protocol provided a systematic

method of model comparison. The next three sections give the rationale, details,

and results for each of the main research components.

4.3 Model Replication and Evaluation

4.3.1 Rationale

A mere literature review of applications does not sufficiently expose the full

limitation and potential of existing approaches. What is written can be biased and

minimal, supporting the authors' intentions but not illuminating the intricacies of

their particular approach. Authors have discretion over what to reveal and

1 Circularity of the modeller as evaluator may cast doubt on the authenticity of the replication and the
objectivity of the evaluation. Model replication and synthesis depend greatly on modelling skills and
availability of modelling software. Depending on these conditions, different researchers may reach
different outcomes.

consequently may hide weaknesses of their approach while presenting them in an

undeservingly favourable light. Comparing models described in the literature is

further complicated by the implicit standards of fairness, objectivity, and

thoroughness.

The applications reviewed in Chapter 3 differ widely in content, technique, and

detail. They vary from the evaluation of a particular technology to a range of

technologies, from the analysis of demand uncertainty to industry investment

behaviour, etc. They also tend to be situation-specific, that is, geared to a

particular case study and not always comparable. These fundamental differences

complicate the task of model comparison.

We propose a four step method consisting of 1) definition of evaluation criteria

from literature review, 2) replication of model using available software, 3)

evaluation of model against pre-defined criteria, and 4) comparison of models.

Model replication permits a closer examination of uncertainty modelling as well as

a more meaningful critique of limitations and potential for synthesis. Application

to a single case study anchors the modelled content to enable ease and fairness of

comparison. A two staged modelling experiment provides the vehicle for

systematic assessment. The first stage replicates and evaluates the three modelling

approaches. The second stage combines features of different approaches as well as

tests the feasibility of model of model. One of these approaches operationalises the

recommendation of the inspector at the Sizewell B public inquiry, that a

probabilistic analysis is better than a deterministic one. Another approach transfers

the predominantly US-based decision analysis approach to the UK situation.

4.3.2 Method of Replication

Nowadays commercial desktop modelling tools such as spreadsheets, e.g. Excel

and Lotus 123, add-ins, e.g. @Risk, and decision software, e.g. DPL, offer the

kind of ease-of-use and multi-functionality which facilitate rapid model

construction. Most of the models reported in the literature have been painstakingly

computer coded from scratch and required major effort in implementation. The

new tools allow models to be conceptualised, constructed, and tested much more

quickly and effectively. These and other tools can be used to mimic or replicate

easily the essential characteristics of capacity planning models reported in

the literature.

The feasibility of model replication and evaluation has been established in two

unrelated pilot studies. The first study addresses specific issues of the Nuclear

Review in the UK. The well-known techniques of sensitivity analysis and risk

analysis were applied to a comparison of levelised costs of nuclear, coal, and gas

plant based on data taken from various OECD countries. Documented fully in

Appendix A, this study exemplifies the level of detail pursued in the modelling

experiment. The second study used data from the Hinkley C public inquiry for

replicating the deterministic approach, which is later subsumed into the first stage

of the experiment in Appendix B. The second proved the feasibility of evaluation

as well as replication.
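The levelised cost comparison at the heart of the first pilot study can be sketched as follows. The plant figures, the constant-annual-cost convention, and the discount rates are illustrative assumptions, not the OECD data of Appendix A.

```python
def levelised_cost(capital, annual_cost, output_gwh, rate, life):
    """Levelised cost in pounds/MWh: discounted lifetime costs divided by
    discounted lifetime output. Capital is incurred at year 0; operating
    and fuel costs (annual_cost) and output are constant each year."""
    disc = [(1 + rate) ** -t for t in range(1, life + 1)]
    total_cost = capital + annual_cost * sum(disc)
    total_mwh = output_gwh * 1000 * sum(disc)
    return total_cost / total_mwh

# Hypothetical plants: (capital in pounds, annual cost, annual output in GWh)
plants = {
    "nuclear": (2000e6, 70e6, 7500),
    "coal": (900e6, 160e6, 7500),
    "gas": (400e6, 190e6, 7500),
}

# Sensitivity analysis in the pilot-study spirit: vary the discount rate
# and observe how the merit ranking of technologies changes.
for rate in (0.05, 0.08, 0.11):
    ranking = sorted(plants, key=lambda p: levelised_cost(*plants[p], rate, 25))
    print(rate, ranking)
```

With these illustrative figures, the capital-intensive plant loses its cost advantage as the discount rate rises, which is the kind of switch-over the sensitivity analysis is designed to expose.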

4.3.3 Method of Model Evaluation and Comparison

This case study based modelling experiment contains several standard research

components: comparative analysis, case study, and experimental design.

Regrettably, few examples in the literature employ such a combination.

Most model comparison studies are not case study based but mere reviews of

model specifications. The studies by Dixon (1989) and Davis and West (1987)

belong to the category of case study based comparative analysis of models. While

Dixon compares existing models to critique and improve upon them, Davis and

West compare to show off the model they developed. Dixon comments only on

the input and output thus treating the models as black boxes. On the other hand,

Davis and West probe into the trade-offs of specific techniques employed in the

models. Neither study defines the criteria for comparison beforehand. The basis of

comparison is very general and superficial, e.g. strengths and weaknesses, ease of

use, and impact of models.

Not enough has been written about how to evaluate models. Mulvey (1979)

develops a workable procedure for comparing competing models for selection

purposes, beginning with an evaluation of individual models against pre-defined

criteria. His five dimensions for evaluation consist of performance,

realism/complexity, information requirements, user friendliness, and

computational costs. He argues that it is possible to overcome the technique-

driven bias provided a methodology exists for evaluating models which are based

on different techniques. His models are compared ordinally by preference ranking

on each dimension, with the final results compared linearly by dominance. Morris

(1967) suggests broad characteristics of models, which are useful but not specific

enough for the purposes of assessing large complex models. Beyond roughly

describing a model as simple or complex, he proposes other characteristics such as

relatedness, transparency, robustness, fertility, and ease of enrichment.

In contrast to the above, our evaluation criteria originate from an extensive literature

review and are hence more detailed and comprehensive than those of earlier studies. Instead of

ranking all models on one criterion, our evaluation method assesses each model

individually against all criteria.

4.3.4 Evaluation Criteria

The detailed enumeration of uncertainties and modelling requirements in chapters 2

and 3 proved feasible for evaluation purposes in the second pilot study. This

original list of requirements, however, was difficult to use for comparison

purposes. For example, it was not always possible to compare models with

different assumptions and properties. A reduced checklist was more workable but

lacked the detail and comprehensiveness for which the evaluation and synthesis

were intended. The reduced criteria consist of five major categories for evaluation:

1) level of detail, 2) desirable model characteristics, 3) decision focus, 4) output,

and 5) uncertainty representation and analysis. Their sub-criteria are shown in

table 4.1 and discussed thereafter.

Table 4.1 Model Evaluation and Comparison Criteria

Main Category: Sub-Criteria (Elements)

Level of detail: number of variables included; operational detail (technical parameters); financial detail (costs); types of plants / technology.

Desirable model characteristics: simplicity (more detail, less simple); transparency; comprehensiveness (more detail, more comprehensive); extensibility; complexity; comprehensibility.

Decision focus: types of decisions captured; multiple stages; generation of alternatives.

Output: range of insight; richness of solutions; range and diversity of alternatives; business risk.

Uncertainty representation and analysis: discrete vs continuous; conditional; number of iterations (sampling); representativeness; dimensionality (or in level of detail); computational tractability (or in level of detail).

The level of detail relates to input variable specification which directly contributes

to the comprehensiveness of the model. Although the format of input specification

was largely ignored in the evaluation, it directly affected the transparency of the

model. This category includes questions like how many variables are, can, or must

be included; how many types of plants are considered; and the level of operational

and financial detail. Operational detail relates to parameters such as load factor,

utilisation rate, and thermal efficiency. Financial detail covers costs, e.g.

operations and maintenance charges, interest during construction, tax, and discount

rates.

Model characteristics can be assessed from the process of replication. A model

that is comprehensive in input yet comprehensible in output requires a trade-off of

the following desirable features: simplicity and comprehensiveness,

comprehensibility (transparency) and complexity, and other features like

extensibility and reusability. A simple model structure can hardly capture all

details required, i.e. simplicity in structure versus comprehensiveness in

specification. The complexity of plant economics and dimensionality of different

kinds of uncertainty call for a transparent model for communication purposes.

Comprehensibility by the user is important for the understanding of uncertainty.

Decision focus refers to the representation of the decision makers perspective,

such as risk attitude, sequential stages, and uncertainty resolution. Risk attitude

can be incorporated in the discount rate, utility functions, and risk tolerance

coefficient. Decisions and uncertainties in capacity planning do not occur

simultaneously, contrary to how they are modelled in the deterministic and probabilistic approaches.

The quality of the output is related to the level of detail in inputs. The range of

insight and richness of solutions surface from the range and diversity of

alternatives. These and other aspects of the problem, such as business risk, reflect

the benefit of using a particular approach.

The last category of uncertainty representation and analysis consists of the

following assessments: discrete and continuous probabilities, conditional

dependence, number of iterations (data points sampled in the distribution),

representativeness, dimensionality, and computational tractability. These pre-

defined criteria act as cues for the investigation of the potential, limitation, and

effectiveness of each approach in modelling uncertainty.
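Two of these criteria, discrete one-at-a-time variation versus continuous simultaneous sampling, can be contrasted in a minimal sketch; the cost response and parameter ranges below are hypothetical.

```python
import random

def plan_cost(demand_growth, fuel_escalation):
    """Stylised cost response of a fixed capacity plan (hypothetical form)."""
    return 100.0 * (1 + demand_growth) * (1 + fuel_escalation)

base = {"demand_growth": 0.01, "fuel_escalation": 0.03}
ranges = {"demand_growth": (-0.01, 0.03), "fuel_escalation": (0.00, 0.06)}

# Deterministic treatment: vary one parameter at a time over its range,
# holding the other at its base value (no likelihoods, no interactions).
for name, (lo, hi) in ranges.items():
    swings = [plan_cost(**{**base, name: v}) for v in (lo, hi)]
    print(name, min(swings), max(swings))

# Probabilistic treatment: sample both parameters simultaneously from
# (assumed independent) continuous distributions, giving a risk profile.
random.seed(1)
costs = sorted(plan_cost(random.uniform(*ranges["demand_growth"]),
                         random.uniform(*ranges["fuel_escalation"]))
               for _ in range(2000))
p10, p90 = costs[200], costs[1800]
```

The sensitivity loop yields separate swing ranges per parameter, whereas the sampled risk profile captures joint combinations, at the cost of the independence assumption noted later in this chapter.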

4.4 Case Study Based Modelling Experiment

4.4.1 Case Study

The case study captures a snapshot of the UK electricity supply industry as at July

1993. It is generic enough to embrace a range of objectives but specific enough

(by consolidation of published data) to relate to current industry practice. It

reflects the on-going controversies surrounding the major fuels, competition in

generation, impact of environmental limits, and other uncertainties.

The utilities involved in power generation in the UK Electricity Supply Industry fall

into three categories, broadly labelled as unprotected but dominant, protected but

competitive, and unprotected but encouraged. Each of these perspectives is

briefly described below, and their plant mix as at July 1993 summarised in

associated tables.

The unprotected but dominant utility describes the two major power generators,

National Power and PowerGen. These duopolists are primarily concerned with

sustaining marketshare while increasing return to shareholders and therefore

strongly motivated by profit. Despite having the financial muscle to invest in

different types of plant, these companies face considerable regulatory uncertainty,

e.g. threat of MMC2 referral, caps on electricity prices, the Regulator's scrutiny of

anti-monopolistic behaviour, and stringency of environmental allowances. National

Power's plant mix as at July 1993 is summarised in table 4.2.

2 Monopolies and Mergers Commission

Table 4.2 Unprotected but Dominant Utility: National Power

Type of Plant Number Capacity in MW

Large Coal 7 13,103

Medium Coal 4 3,412

Small Coal 7 1,784

Oil 3 4,484

Open Cycle Gas Turbine (OCGT) 16 1,565

Combined Cycle Gas Turbine (CCGT) 1 620

Hydro 3 40

TOTAL 41 25,008

The protected but competitive utility characterises Nuclear Electric, which has not

been privatised. Despite heavy subsidy of the nuclear levy and other government

protective measures, the nuclear generator has an equally strong profit motive, i.e.

to show that it can eventually compete in the private sector when the subsidy

expires in 1998. Many of these uncertainties will be resolved with the outcome of

the Nuclear Review due in 1995, i.e. privatisation, subsidies, public support,

financing of power stations, and the future of nuclear power. Nuclear Electric's

plant mix as at July 1993 is summarised in table 4.3.

Table 4.3 Protected but Competitive Utility

Type of Plant Number Capacity in MW

Magnox 7 3,293

AGR 5 6,039

Hydro 1 30

TOTAL* 13 9,362

*Excludes Sizewell B currently under construction

Finally, the unprotected but encouraged utility reflects the views and opportunities

of independent power generators. Independent power producers are building

CCGTs which are tied to back-to-back 15 year fuel contracts as a way for the

regional electricity companies to diversify their own distribution and supply

business as well as to gain a competitive advantage in this market. Table 4.4

summarises the combined portfolio of all independent power generators by status

as at July 1993.

Table 4.4 Unprotected but Encouraged Utility

Status (CCGT, CHP) Number Capacity in MW

Transmission Notice and Under Construction1,2 9 4,936

Transmission Notice and Section 36 Consent Given 6 4,619 - 5,156

Section 36 Consent Given Only3 6 3,124 - 3,354

Transmission Notice Only4 10 8,497

Public Inquiry 2 506

Application for Planning Permission 15 5,150 - 5,360

Early Stages (None of Above) 6 1,200 - 1,300

TOTAL5 54 28,032 - 29,109


1 Excludes BNFL's Sellafield CHP 162 MW under construction but not directly connected to system
2 Excludes Sizewell B PWR 1254 MW Under Construction
3 Excludes Hinkley C PWR 1200 MW which is very unlikely to be built
4 Excludes Scottish Interconnectors 750 MW
5 Includes projects in which National Power and PowerGen have stakes

In an increasingly competitive electricity trading market, with tighter

environmental regulation and potential over-capacity, the case study addresses the

following question.

How should a power generating company in the UK plan for capacity


expansion in terms of timing, capacity levels, and plant mix?

These investment decisions depend on the kinds of technologies available, their

economics, and impacts of uncertainty. The cost of a capacity expansion plan is

calculated from totalling all investment and operating costs. The relative

economics of plant can be determined by its merit order in the entire system.
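Merit-order dispatch, the mechanism behind this production costing, can be sketched as follows; the plant names, capacities, and marginal costs are hypothetical rather than the consolidated data of Appendix B.

```python
def merit_order_dispatch(plants, demand_mw):
    """Load plants in ascending order of marginal cost (the merit order)
    until demand is met; returns hourly system cost and the schedule."""
    schedule, remaining, cost = [], demand_mw, 0.0
    for name, capacity_mw, cost_per_mwh in sorted(plants, key=lambda p: p[2]):
        mw = min(capacity_mw, remaining)
        if mw > 0:
            schedule.append((name, mw))
            cost += mw * cost_per_mwh
            remaining -= mw
    if remaining > 0:
        raise ValueError("demand exceeds installed capacity")
    return cost, schedule

# Hypothetical system: (name, capacity in MW, marginal cost in pounds/MWh)
system = [("coal", 18000, 18.0), ("nuclear", 9000, 8.0),
          ("CCGT", 6000, 14.0), ("OCGT", 1500, 45.0)]
hourly_cost, schedule = merit_order_dispatch(system, 20000)
# nuclear and CCGT run fully, coal is the marginal plant, OCGT stays idle
```

The total cost of an expansion plan follows from repeating this dispatch over each load level and period, then adding the investment costs of the plants built.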

To answer the capacity planning question, two main types of uncertainties

surrounding the decisions to invest new plants and retire old ones are explicitly

considered: demand and fuel price. These uncertainties at the industry level

concern all types of power generators. Therefore differentiation amongst the

companies is not required. Uncertainties at the firm level would require

differentiation by type of utility; however these firm level uncertainties have not

been addressed in this case study.

1) Demand uncertainty surrounds the seasonal fluctuation of demand as expressed in


load duration curves as well as the period growth of demand. The growth and
shape of demand depend on factors such as energy efficiency, consumer
consciousness, demand side measures, weather, load management, fuel switching,
VAT regulations, economic growth, and responsiveness to electricity prices.

2) Fuel price uncertainty affects the types of fuels used in the technologies. The main
factors describing fuel price uncertainty are base price and subsequent escalation
rates. Emission regulation, spot prices, and related fuel prices determine the
direction and rate of fuel price escalation.

The consolidated industry data used in the replicated models are contained in

Appendix B. Briefly, input data specification of plant consisted of 85 existing plant

in the system totalling 60 GW of capacity of 10 different technologies (magnox,

AGR, large coal, medium coal, small coal, oil, OCGT, CCGT, hydro, and the

Scottish and French links). Eight different seasonal load duration curves were

specified to correspond to peaks and troughs in demand during the year. Linear

trend forecasts were specified for peak demand and each type of fuel for each

period in the planning horizon. Capital and operating costs were specified for each

of the 85 plants. New alternatives included fossil fuel plant, nuclear, and

renewables.
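This style of input specification can be illustrated in miniature, with deliberately hypothetical base values and slopes rather than the consolidated data of Appendix B.

```python
def linear_trend(base, slope, periods):
    """Linear trend forecast for a period-by-period input, e.g. peak
    demand in GW or a fuel price."""
    return [base + slope * t for t in range(periods)]

def load_duration_curve(chronological_loads):
    """Re-order a season's loads in descending order, giving the load
    level exceeded for each fraction of the season."""
    return sorted(chronological_loads, reverse=True)

peak_demand_gw = linear_trend(48.0, 0.5, 5)     # illustrative 0.5 GW/yr growth
coal_price = linear_trend(1.60, 0.02, 5)        # illustrative escalation
winter_ldc = load_duration_curve([0.62, 0.95, 1.00, 0.70, 0.55, 0.80])
# winter_ldc is non-increasing; its first entry is the seasonal peak
```

In the replicated models, eight such seasonal curves and one trend per fuel and per peak demand define the demand and price environment against which plans are optimised.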

4.4.2 Stage 1: Three Archetypal Modelling Approaches

The first stage of the modelling experiment examines three modelling approaches

representative of those followed in the industry. Two extreme approaches

characterise capacity planning in the UK electricity supply industry: deterministic

and probabilistic. These have been highlighted in public inquiries into proposals to

build new nuclear power plant. The deterministic and probabilistic approaches

centre around an optimisation of investment schedules to electricity demand and

fuel forecasts. In the US, on the other hand, regulatory hearings have increasingly

made use of the decision analytic approach which combines some features of the

deterministic and probabilistic approaches. These three modelling approaches

(deterministic, probabilistic, and decision analytic) represent quite distinct norms in

the industry and span the range of basic approaches.

1) The deterministic approach is typical of large, public sector, monopolistic power


companies, e.g. the former Central Electricity Generating Board (CEGB) in the
UK. The CEGB used the techniques of scenario analysis, optimisation, and
sensitivity analysis. Their approach is well documented in Greenhalgh (1985),
Vlahos and Bunn (1988b), and the Sizewell and Hinkley public inquiries (Layfield,
1987). Five scenarios are postulated from assumptions on world growth and the
UK economy. Within each scenario, a linear programming optimisation is
performed to produce the best solutions. Uncertainty is investigated afterwards
through sensitivity analysis by changing one variable at a time. This deterministic
approach considers a few uncertain parameters that are sequentially and
independently varied over limited ranges.

2) The probabilistic approach is effectively an expanded risk analysis demonstrated by


Evans (1984) and also discussed in Evans and Hope (1984), Kreczko et al (1987),
and Jones (1989). It gives attention to the kind of uncertainty analysis
recommended by the inspector Sir Frank Layfield in the conclusions of the Sizewell
B public inquiry. However, this recommendation was not followed in the ensuing
Hinkley inquiry. This approach is an extension of the first pilot study, with the
major difference that an optimisation sub-model is run several times using the
sample values from the risk analysis-generated probability distributions. By
varying more than one variable at the same time, it is possible to get different
combinations of input values. In the Sizewell study, fifteen input variables were
explicitly included for this uncertainty analysis, with justification for the exclusion
of other major variables such as plant lifetime and discount rate.

3) The decision analytic approach is patterned after the North American decision
analysis school, a discipline actively practised by consulting firms such as SDG
(Strategic Decisions Group) and ADA (Applied Decision Analysis). We replicate a
variant of the Over/Under Model of Cazalet et al (1978). This kind of decision
analysis approach is illustrated in Anders (1990) and Peerenboom et al (1989). The
approach is heavily decision analysis oriented with emphasis on the technology
choice decision. A decision tree is structured to capture the major decisions and
uncertainties.
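The expected value calculation that packages such as DPL automate can be sketched as a recursive rollback; the technology choice, probabilities, and terminal costs below are hypothetical.

```python
def rollback(node):
    """Expected-cost rollback of a decision tree given as nested tuples:
    ('decision', [(label, child), ...]) -> minimum expected-cost branch
    ('chance', [(prob, child), ...])    -> probability-weighted average
    a bare number                       -> terminal cost."""
    if isinstance(node, (int, float)):
        return node
    kind, branches = node
    if kind == "decision":
        return min(rollback(child) for _, child in branches)
    return sum(p * rollback(child) for p, child in branches)

# Hypothetical technology choice under discrete demand uncertainty,
# with terminal costs in pounds million.
tree = ("decision", [
    ("build CCGT", ("chance", [(0.6, 420), (0.4, 510)])),
    ("build coal", ("chance", [(0.6, 460), (0.4, 470)])),
])
expected_cost = rollback(tree)   # the CCGT branch: 0.6*420 + 0.4*510
```

Nesting further decision and chance layers gives the multi-stage structure of capacity planning, and it is exactly this nesting that produces the dimensionality problems discussed below.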

Details of these approaches, their replication, and evaluation are given in Appendix

B. Briefly, the deterministic and probabilistic approaches were straightforward

replications of the Sizewell B and Hinkley C models but with industry data updated

to 1993. The decision analytic approach required more extensive prototyping, i.e.

re-structuring of the basic problems in capacity planning.

4.4.3 Comparison of Approaches

The three approaches were replicated and evaluated independently of each other.

The plant schedule optimisation central to the deterministic approach provided a

link to the probabilistic approach. However, the transition between the

probabilistic and decision analytic approaches required a major re-orientation in

conceptualisation and construction.

The optimisation programme used in the deterministic and probabilistic approaches

captures the operational and financial details missing in the decision analytic

approach. Unfortunately, the level of detail required by the capacity optimisation

programme is difficult to attain in the new competitive environment. Commercial

confidentiality restricts the publication and availability of detailed plant

characteristics and costs. In the absence of actual plant data, estimates may reduce

the level of accuracy and detail and adversely affect the reliability of the output.

This aspect of modelling, i.e. the inability to model the full system given the

difficulty of obtaining competitors' data, is even more crucial now.

The level of detail leads to the comprehensiveness of specification, which is met

by the first two approaches. Both are developed from a full model specification of

the capacity expansion problem. They take a simultaneous approach to decision

making, i.e. a single plan is produced rather than the usual multi-staged contingent

nature of planning. The decision analytic approach, in contrast, decomposes

capacity planning into a sequence of decisions. All three approaches can be

extended to include more scenarios, uncertainties, time periods, decisions,

alternatives, etc. However, the probabilistic and decision analytic approaches run

into dimensionality problems.

The deterministic approach treats uncertainty in an expanded what-if analysis,

whilst saying nothing about the preferences and risk attitudes of the decision

maker. Assigning probabilities to scenarios and uncertain parameters enables the

consideration of likelihoods. The probabilistic approach merely produces a robust

plan, i.e. results that lie within an acceptable range. It gives no indication of the

sequence of decisions that should be undertaken. Decision analysis by definition is

a decision-focussed technique. It captures the risk attitude and value preferences

of the decision maker as well as the multi-stage nature of capacity planning,

thereby allowing the explicit consideration of each perspective.

Range and richness of insight in the output depend on the specification of the

input. The probabilistic approach allows the expression of all important

uncertainties at once and also gives risk profiles of different output parameters at

the end of the simulations. The decision analytic approach requires the sequential

consideration of inputs and the fulfilment of mutually exclusive and collectively

exhaustive conditions. As a result, outputs from decision analysis come from a

smaller set of permutations than possible from risk analysis. Risk profiles as

constructed from cumulative output distributions can be compared for cost ranges

of different alternatives. On the other hand, the decision analytic approach

produces discrete pictures of these alternatives, which give much less information.

Although the deterministic approach produces outputs that are scenario dependent,

it gives different combinations of plant alternatives, which cannot be achieved in

decision analysis due to limited input specification.

In theory, continuous probability distributions can be attached to chance nodes in

decision analysis. In practice, finite states and the dimensionality of multiple stages

prevent such a formulation. The probabilistic approach, on the other hand, uses

efficient sampling methods to propagate continuous distributions to the output.

However, these sampling methods assume independent probability distributions.

The deterministic approach treats uncertainty in a static and limited fashion. There

is no account of asymmetry or likelihood. Table 4.5 summarises the above

comparison of modelling approaches by evaluation criteria.

Table 4.5 Comparison With Respect to Evaluation Criteria

Criteria                 Deterministic             Probabilistic      Decision Analytic
Level of detail          high                      high               low
Model characteristics    comprehensive             comprehensive      comprehensible
Decision focus           optimisation              simulation         decision focussed
Output                   scenario-dependent plan   risk profiles      decision sequence paths
Uncertainty              static, discrete,         cumulative,        sequential resolution,
                         by sensitivity            by sampling        by discrete states

In terms of modelling effort and focus, the deterministic approach is input

intensive as it requires the generation of scenarios and detailed specification of

input. The probabilistic approach is output intensive as evident in the sheer

volume of sampling data and simulation results that must be consolidated. In

contrast, the decision analytic approach is structure intensive, as it forces the

problem to be addressed in terms of controllable and uncontrollable events, i.e.

decisions and uncertainties. Table 4.6 summarises the main points in the evaluation

of each approach with respect to their contribution to a comprehensive yet

comprehensible model.

Table 4.6 Summary of Approaches

Deterministic
  Positive features: credible basis; easy; detailed; captures complexity of merit order
  Negative features: none of the scenarios may occur; may have sub-optimal plans; tendency towards over-optimism; no account of asymmetries; static view of uncertainty

Probabilistic
  Positive features: robustness; simultaneous computation of all probabilistic effects
  Negative features: time consuming iterations; manageability and data control; independence of probabilities

Decision Analytic
  Positive features: perspectives; multiple stages; risk attitude; flexible construction
  Negative features: lack of detail; dimensionality problem

As expected, each approach is incomplete in its capture of the areas of uncertainty. Deterministic optimisation requires considerable input data. Scenario

and sensitivity analyses give limited insight into the kinds of uncertainties that

prevail. Risk analysis by means of Monte Carlo simulation of the optimisation

model improves the representation and treatment of uncertainty but produces

output risk profiles at a high computational cost. Although decision analysis

considers decisions and uncertainties explicitly, its structural simplicity cannot

incorporate the complicated production costing in electricity generation. These

results point to the strengths and weaknesses of each approach and the need to

balance the desirable model characteristics.

The deterministic and probabilistic approaches centre around the optimisation

algorithm of fitting investment schedules to forecasts of demand, fuel, and other

uncertain parameters. Fitting plans to forecasts relies on the accuracy of forecasts.

Sophisticated trend-based forecasting methods have performed poorly in the turbulence of the last two decades, and inaccurate forecasts lead to sub-optimal plans.

The traditional approach of fitting plans to forecasts of fuel supply and electricity

demand is a static answer to a dynamic reality.

Modelling different perspectives in the deterministic and probabilistic approaches is difficult, because the optimisation programme optimises the entire system rather than with respect to ownership. A decision focus can be manifested only in decision analysis, which in turn is incapable of modelling the intricacies of the power system.

Throughout the experiment, difficulties in meeting the main conflicting criteria of

comprehensiveness (completeness) and comprehensibility (manageability and

transparency) are evident. Each approach is either comprehensive but not

comprehensible or vice versa, never both, as summarised below.

1) The deterministic approach is incomplete and inadequate in the treatment of


uncertainty.

2) The probabilistic approach generates too much data to be manageable.

3) The decision analytic approach is unable to capture the details required in power
systems planning.

Given the above conclusions, the next stage of the modelling experiment attempts

to overcome the deficiencies of individual approaches and resolve the conflicts

through synthesis of the essential feature of the first two approaches (optimisation)

and an attractive feature of the third approach (decision analysis).

4.5 Model Synthesis

4.5.1 Rationale

Applications based on single techniques lack the breadth to cover the range of

strategic issues or the detail to represent aspects of the power system.

Mathematical restrictions, computational difficulties, and the maintenance of

intuitive understanding prevent more extensive specifications of the complete

problem. The three representative approaches are individually unable to meet the

conflicting criteria of comprehensiveness and comprehensibility. These conclusions

suggest that a synthesis of techniques, resulting in a larger but more integrated

model, should achieve the kind of balance and completeness unattainable by any

single technique. Several authors, e.g. Linstone (1984) and Brown and Lindley (1986), suggest that it is only by approaching a problem from multiple perspectives that reliable insights can be developed.

Model integration, composite models, combining methods, and complementary modelling all fall under the term we have coined "model synthesis". The final synthesized model consists of model components, which may themselves be models or techniques. The idea of synthesis is appealing for six reasons.

1) By exploiting synergies between techniques, a synthesis reflects the notion that the
whole is greater than the sum of the parts.

2) A synthesis uses the complementarity of the strengths and weaknesses of


components to achieve completeness.

3) Due to the high cost of development (Balci, 1986), it seems easier to use existing
readily available models and tools rather than to develop one from scratch.

Synthesis capitalises on familiarity and reusability of existing models. Reisman
(1987) urges a synthesis of models rather than the development of more models, as
there are too many models already. Synthesis through generalisation and
systematisation reduces the jargon and effort required to master new techniques.

4) Instead of new investment (of knowledge, resource, etc), synthesis involves issues of
integration and automation.

5) Synthesis makes use of specialisation. Each component in the eventual synthesis


addresses what it is good at.

6) The above five reasons are supported by the noticeable modelling trend seen in
practice, as explained below.

The energy modelling literature indicates an inevitable trend towards building

bigger models through synthesis of existing approaches, e.g. FOSSIL2, MARKAL,

NEMS, and WASP in Rath-Nagel and Stocks (1982), Beaver (1993), and IAEA

(1984). While advances in software, hardware, and human capability may help to

achieve these modelling goals of completeness, the required amount of effort and

resource may well exceed that available to a single utility. This thesis addresses

this practicality issue, i.e. the costs versus the benefits.

The second stage of the experiment determines the feasibility of model synthesis

for a typical power company in the UK, given its limited resources. Different

prototypes of model synthesis are constructed. Ways to combine other techniques

within a decision analysis framework are conceived and tested. Models of models

(explained in section 4.5.4) are built to facilitate this synthesis. Beginning with a

full conceptualisation of the issues involved in model synthesis in section 4.5.2, the

second stage ends with a discussion of the practicalities of synthesis in section

4.5.5.

4.5.2 Conceptualisation of Model Synthesis

Applying more than one technique to achieve synthesis involves the following

considerations: familiarity, dimensionality and complexity, system integrity,

extensibility and reusability, compatibility, functional and structural synergies. We

briefly discuss these concerns and then summarise the conceptualisation of model

synthesis found in Appendix C.

Familiarity with these techniques is required beyond a superficial level. The

modeller's choice of technique, ability to handle different modelling frameworks

and assumptions, and ability to exploit synergies between techniques depend on his

familiarity with the model components. This technique-driven bias arises out of the

learning curve effect and cognitive limitations.

Dimensionality arises from increases in shared data, interacting variables, and

other permutations of uncertainties and errors. These dimensionality issues in turn

imply concerns of manageability of data and model, validation, error tracking and

diagnosis, conversion and translation of data, and compatibility of model

components. Composite models with integrated methodologies contain a higher

level of complexity, including difficult-to-trace information flow. Higher levels of

complexity also arise from dimensionality.

Changes in problem specification and assumptions must be propagated through the

model such that resulting changes in the components are consistent and the

system's integrity is preserved. Any extension or change in problem specification, or use of the synthesized model for different purposes, will require re-examining all components to preserve system integrity. Extensibility and

reusability of model components are required to facilitate synthesis.

One of the difficulties of synthesis results from the (lack of) compatibility of

underlying (necessary) assumptions, functionality, and theoretical foundations of

model components. If the incompatible or even conflicting fundamental

requirements cannot be resolved, the final model may not work. Ironically, the

strength and appeal of model synthesis lies in the diversity (and complementarity)

of model components!

Synthesis should exploit structural as well as functional synergies between

techniques. However, methods to facilitate this are not always obvious. Appendix

C proposes examining similarities between techniques as a starting point. The

above concerns and implications are summarised in table 4.7 below.

Table 4.7 Major Concerns in Model Synthesis

Familiarity
  Definition: the modeller's acquaintance and understanding of model components, i.e. the techniques and models used in the final synthesis.
  Implications: tendency to make use of or rely on familiar techniques and under-exploit the less familiar ones; the modeller's ability to handle different frameworks and assumptions to achieve a useful synthesis; switching and transitional costs.

Dimensionality
  Definition: increases in, and permutations of, inputs, outputs, interactions, and interfacing of data and variables.
  Implications: higher level of complexity; data manageability; error tracking and diagnosis; compatibility; conversion and translation; validation.

System integrity
  Definition: the wholeness of the synthesis.
  Implications: consistency throughout; extensibility of components; reusability.

Compatibility
  Definition: co-existence of model components in a synthesis, which depends on the underlying assumptions and theoretical foundations.
  Implications: feasibility of synthesis; effort required in resolving conflicts of interest.

Synergies
  Definition: similarity or affinity in functionality and structure of model components.
  Implications: added contribution to the final synthesis.

Model integration is a hot topic in model management. The decision support

literature is full of new modelling languages, most notably Geoffrion (1987), but

devoid of ways to synthesize existing techniques or models. A conceptualisation of

model synthesis in Appendix C attempts to bridge this gap.

Given the availability of so many different types of techniques and models,

strategies for synthesis appear necessary to narrow down the possibilities and

avoid the costly method of trial and error. [Appendix C gives three main strategies

called modular, hierarchical, and evolutionary.]

Beginning with definitions to distinguish between techniques and models, the

conceptualisation highlights synergies between techniques in terms of 1) structure,

2) functionality, and 3) the complementary contributions they add to a synthesis.

Structural considerations include the a) selection, b) ordering, and c) linkage of

model components.

Model linkage is complicated by four main factors: dimensionality, communication,

interface, and interaction.

1) The required number of interfaces increases with the number of techniques, and this
contributes to the dimensionality problem.

2) Data transfer and sharing complicate the communication between different models.
Output data from a model component is rarely in a form acceptable to another,
hence requiring some transformation.

3) Any components requiring direct user input must have appropriate user interface.

4) The frequency and manner of user involvement determine whether a model is


interactive or non-interactive. The former relies on the user or the modeller to
guide the process, while the latter only involves the user in the beginning. Linear
programming, for example, is traditionally a non-interactive method, as the
modeller specifies the inputs in the beginning but does not interfere with the
intermediate algorithms.

The above structuring issues are detailed in Appendix C, but summarised in table

4.8 below.

Table 4.8 Structuring Issues

Selection of components: structural synergies; functionality; execution costs; input requirements; applicability; software availability; technique familiarity; manageable level of detail; complementarity; compatibility.

Ordering: by increasing complexity; most relevant aspect; most intuitive model, to get decision makers involved; peripheral models (scenario analysis).

Linkage (with supporting argument): sequential (good for error tracking and checking, but possible bottlenecks and slow execution); parallel (faster than sequential linkage, but issues of compatibility and interfacing); feedback or iteration (useful for convergence, but may be time-consuming); embedded or nested (complexity and dimensionality issues); multi-level, e.g. hierarchical (organisational issues); an integrating module whose sole task is to synthesize and coordinate (extra effort in constructing this).

In addition to the above, a distinction between weak and strong forms of synthesis is proposed. This conceptualisation roughly corresponds to the three levels of synthesis associated with the needs of an organisation (Dolk and Kottemann, 1993):

combination, aggregation, and integration. At the lowest rung of the

organisation ladder, models are generally stand-alone or weakly synthesized, i.e.

combined, for operational planning purposes. At the middle level, models are

aggregated to pull the information together. At the top level, models are

integrated (strongly synthesized) for decision making purposes. In our distinction,

the level of synthesis depends on the degree of dependence or communication

between model components. A weak synthesis has less inter-component dependence than a strong synthesis; it is hence easier to build, but its results may be more cumbersome to assimilate. Here, the model components are not tightly

coupled or integrated at all. The deterministic and probabilistic approaches

represent weak forms of synthesis, whereas the second stage investigates stronger

forms of model synthesis. The strongest level of synthesis is full integration, where

each component contributes to each other. In the strong form, individual

components are no longer distinct from each other. While the strong form may

require more work for the modeller initially, the resulting synthesis provides less

work for the user. The modelling work involved in synthesizing is a fixed

investment cost, while the additional work involved in using the resulting model is

a variable operating cost. Thus model integration provides the rationale for

reducing the variable cost (of the user).
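This fixed-versus-variable cost argument amounts to a simple break-even calculation, sketched below with hypothetical effort figures (person-days chosen purely for illustration, not measured in the thesis):

```python
# Illustrative break-even for model synthesis: integration pays off once
# the per-use saving has recouped the fixed integration effort.
# All figures are hypothetical (person-days of effort).
fixed_cost_synthesis = 60.0   # one-off effort to integrate the components
use_cost_integrated = 0.5     # effort per study with the integrated model
use_cost_standalone = 3.0     # effort per study running components by hand

saving_per_use = use_cost_standalone - use_cost_integrated
break_even_uses = fixed_cost_synthesis / saving_per_use
print(f"synthesis pays off after {break_even_uses:.0f} uses")
```

On these figures the integration effort is recovered only after two dozen studies; with little expected re-use, the stand-alone components win.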

After the above conceptualisation, various prototypes are constructed to

investigate the practical issues of synthesis. Stage two of this experiment attempts

to answer the following questions via the construction of a decision analysis

framework explained in section 4.5.3.

1) How can model synthesis facilitate more extensive uncertainty analysis?

2) How can these conceptual methods be practically implemented?

4.5.3 Decision Analysis Framework

To overcome the conceptual difficulties in synthesis, decision analysis is proposed

as a front-end, i.e. a modelling framework to unify and organise the model

components. Such a framework brings the complex issues of capacity planning

close to the decision maker in a reduced form. The decision tree structure is

envisaged as a means of organising other complementary techniques via nodal

linkage.

Chapter 3 and Appendix B reveal potentially useful features of decision analysis.

The decision tree structure of nodes and branches has synergies with other

techniques, as shown in Appendix C. It is simple enough for the representation of

decisions and uncertainties and the communication of strategic issues. Until the

advent of desk-top decision software, decision trees were restricted to structuring

simple problems. The tedious task of expected value calculation, especially in large multi-state, multi-stage decision trees, can now be automated by software such as

DPL (ADA, 1992). These developments motivate a re-use of the age-old decision

tree for new purposes of synthesis.
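The expected-value calculation that such software automates is a recursive rollback over the tree: chance nodes take probability-weighted averages, decision nodes take the best branch. A minimal sketch on a toy build/defer tree (invented for illustration, not the case-study tree) follows:

```python
# Minimal expected-value rollback for a decision tree.
# Chance nodes average over outcomes; decision nodes pick the best branch.

def rollback(node):
    kind = node["kind"]
    if kind == "payoff":
        return node["value"]
    if kind == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":
        return max(rollback(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

# Toy build/defer example with hypothetical payoffs.
tree = {
    "kind": "decision",
    "branches": [
        ("build", {
            "kind": "chance",
            "branches": [
                (0.6, {"kind": "payoff", "value": 120.0}),  # high demand
                (0.4, {"kind": "payoff", "value": -40.0}),  # low demand
            ],
        }),
        ("defer", {"kind": "payoff", "value": 30.0}),
    ],
}

ev = rollback(tree)   # build: 0.6*120 + 0.4*(-40) = 56, beating defer's 30
print(f"optimal expected value = {ev}")
```

Packages such as DPL perform exactly this kind of rollback, at scale, over multi-state, multi-stage trees.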

Decision trees have rarely been used as a framework for incorporating other

techniques or models. This novel approach requires investigation into model

interface, i.e. dynamic or static linkages in the decision and chance nodes; a method

of capturing single point results of the capacity planning optimisation programme;

the propagation of decisions by conditional events as opposed to time intervals;

and the construction of a decision tree and its equivalent influence diagram.

As concluded from model replications of the first stage of the experiment, decision

trees are too simplistic to incorporate the level of detail required of capacity

planning. Even in a decision analysis framework, nodal linkages to separate

techniques of optimisation or simulation become overwhelming and troublesome.

Using the core optimisation model in its existing data-intensive form within a

decision analysis framework poses three operational difficulties.

1) It is very time consuming to generate several scenarios, each to correspond to a


path in the decision tree. The optimisation model based on Benders
(mathematical) decomposition uses iterative convergence to reach the optimum.
Each run can take anywhere from a few minutes to well over an hour, depending on
the data involved.

2) Each optimisation gives results that require considerable reduction and conversion
for further use in decision nodes.

3) Interim processing is required to organise the inputs and outputs into an acceptable
form.

The above difficulties imply that any further sensitivity analysis or alternative

scenario analysis will be time-consuming as well as data intensive.

There are at least three ways to overcome these operational difficulties. 1) One is

to build an interface to this optimisation model, i.e. a front-end to act as a filter. 2)

Another is to design an interface with other models to generate the necessary data.

However, neither solution improves upon the speed and ease of the original

optimisation as they are both static links3. For these reasons, they have not been

further pursued in this thesis. 3) A third proposal is to develop a reduced model of

the original large model, which we call a "model of model".

To reduce the complexity of model linkages while still adhering to the

completeness of the original optimisation model, we investigate the use of a model

3 Static ways of referencing a bigger and more complicated model include 1) setting up look-up tables, 2) keeping a database of feasible solutions, 3) approximation, 4) using an aggregate function, and 5) sampling and interpolation to extract or read off values from the original model.

of model to facilitate dynamic linkages. The next section gives a detailed account

of this investigation.

4.5.4 Model of Model

4.5.4.1 Introduction

A "model of model" refers to a reduced model which summarises or approximates

a larger model. It is a deliberate simplification of the original more complicated

model. The reduced model answers the need for less input and output and more

speed. It is useful under the following conditions.

1) The original model is too time consuming to execute.

2) Many executions of the original model are required, making it impractical to use.

3) The original model produces too much output, most of it unnecessary for its intended
use or requiring further manipulation or reduction to be useful.

4) Full accuracy is not necessary.

5) The original model requires too much input data which cannot be obtained easily.

6) The original model is not an end in itself, but a means to an end, therefore
approximation is acceptable.

7) The original model cannot be used in model synthesis in its existing form.

In the physical and engineering sciences, response surface methodology (Box and

Draper, 1987) is an established way of building a simpler model from the inputs

and outputs of a larger model. It comprises a group of statistical techniques, the

most common being the least squares method for regression. The reduced regression model can then be checked for fit against the full optimisation model; this is an advantage over most regression settings, where the data cannot be generated at will.
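A minimal sketch of the response-surface idea follows. Here the "large model" is a known quadratic, so that the fit can be verified exactly; the real optimisation model is of course a black box, and all names, coefficients, and ranges below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the expensive "large model" we can only evaluate pointwise.
# A known quadratic is used so the fitted surface can be checked.
def large_model(x1, x2):
    return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.5 * x1 * x2

# Evaluate the large model at sampled design points.
X1 = rng.uniform(0, 10, 200)
X2 = rng.uniform(0, 10, 200)
Y = large_model(X1, X2)

# Fit the reduced model (response surface) by least squares on the
# input-output pairs, using a basis of candidate terms.
A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(f"coefficients ~ {np.round(coef, 2)}, R^2 = {r2:.3f}")
```

Because the true surface lies in the chosen basis, the fit here is essentially perfect; the thesis experiment, reported below, found the opposite for the real optimisation model.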

To operationalise the decision analysis framework, we need to determine the

feasibility, practicality, and reusability of a reduced model of the core capacity

planning optimisation model. Feasibility refers to acceptability and reliability. In

other words, are the simplifications and approximations acceptable? Is the

validation reliable? Practicality refers to whether the effort of producing and validating a model of model is worthwhile, as opposed to constructing a new model. Reusability

is related to the previous two criteria. A reduced model is intended for further use,

for example, as input to another model to facilitate further sensitivity or risk

analysis. The additional effort required to adapt or transform this model must not

be excessive.

4.5.4.2 Methodology

We developed and followed a systematic method of approximating the core

optimisation model using multiple regression. Such a systematic method can be

repeated for different values of independent variables and different forms of the

final reduced model. The next paragraphs explain the seven main steps.

1) Determine k desired outputs (dependent variables Yj, j = 1 to k)

First we determined the payoffs and values needed in the decision analysis

framework, as listed in table 4.9 below. These incremental payoffs were intended

for attachment to nodes of the decision tree, subject to values of other independent

variables or nodes along the path. A regression equation was built for each

dependent variable.

Table 4.9 Dependent Variables in the Reduced Model

Dependent Variable | Original Output File in Optimisation Model (file extension)
Investment cost; operating cost; total cost | Optimal Expansion Plan (*.OEP)
Cumulative new plant installed capacities per plant type for a certain time period (in pre-selected periods); Marginal Fuel Savings (MFS) of new plants | Production Costing Results (*.PCR)
Net Capacity Credit at 88% availability (economic attractiveness of newly installed plants) | Net Capacity Credit (*.NCC)

2) Select m associated inputs (independent variables Xi, i = 1 to m)

In the manner of the first pilot study (Appendix A), we used sensitivity analysis to

find the main independent variables that determine the previous dependent

variables. The reduced form should be much simpler than the full model, hence the choice of which original variables to include is an important decision. Relationships between the Xi and Yj indicate which Xs to select. The choice of Xs also depends on which input variables are fixed and which are varied in the optimisation model.

The most important variables are listed in table 4.10 below.

Table 4.10 Independent Variables in the Reduced Model

Independent Variable in Reduced Model | Original (Input or Output) File in Optimisation Model (file extension)
Reserve margin | Input: Period Demand File (*.PRD)
Diversity in plant mix; type of plant available as an alternative in a particular period | Output: Optimal Expansion Plan (*.OEP)
Capacity cost; fuel price in base year | Input: New Plant File (*.NEW)
Fuel escalation rate | Input: Escalation File (*.ESC)

3) Determine value ranges for each Xi

Once the dependent and independent variables have been selected, we determined
the value ranges for each Xi. We anchored the average value for each Xi, i.e.

E(Xi), then fixed a margin above and below it. Thereafter, we adjusted the

margins to give a combination of symmetric and skewed ranges.
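Step 3 can be sketched as follows; the variable names, expected values, and margins below are hypothetical stand-ins for the thesis data:

```python
# Sketch of step 3: anchor each independent variable at its expected
# value E(Xi) and fix symmetric or skewed fractional margins around it.
# All values are illustrative, not taken from the case study.
expected = {"reserve_margin": 0.20, "capacity_cost": 900.0, "fuel_price": 1.8}

def value_range(mean, below, above):
    """Return (low, high) given fractional margins below/above the mean."""
    return mean * (1 - below), mean * (1 + above)

ranges = {
    "reserve_margin": value_range(expected["reserve_margin"], 0.25, 0.25),  # symmetric
    "capacity_cost":  value_range(expected["capacity_cost"], 0.10, 0.30),   # skewed up
    "fuel_price":     value_range(expected["fuel_price"], 0.40, 0.15),      # skewed down
}
for name, (lo, hi) in ranges.items():
    print(f"{name}: [{lo:.3g}, {hi:.3g}]")
```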

4) Generate n sets of data points for the Xi (n x m values in total)

There are two main ways to generate data points for independent variables:

factorial design or probability distribution.

1) Factorial design, for the generation of combinatorial scenarios and equal-interval


permutations, refers to assigning sets of combinations of different values of Xs.
Bunn and Vlahos (1992) used equal interval permutations to get the data for
regressing the model, and then random sampling to get additional data for
validation. Full factorial design covers all possible combinations of X values, but
this may include meaningless and inadmissible combinations.

2) Probability distributions reflect how likely and how frequently the input data
(values of independent variables) will occur. Hence it is a more realistic (accurate)

reflection of how one would expect to get the data (if available) than the factorial
design. The distribution method also allows the use of sampling techniques, such
as Monte Carlo and Latin Hypercube Sampling within risk analysis. As a shortcut,
we used the sampled data generated from the Probabilistic Approach.

We expect n (sample size) to increase with m (the number of independent variables).

This is supported by Morgan and Henrion (1990), who observed that the

complexity of the factorial design increases exponentially while the sampling of

probability distributions increases linearly.
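The contrast between the two generation methods can be seen directly: the number of full-factorial points explodes with the number of variables, whereas a sampled design is whatever size one chooses. The levels, ranges, and sample size below are illustrative only.

```python
import itertools
import random

random.seed(1)

levels = [0.8, 1.0, 1.2]   # three levels per variable (illustrative)
m = 6                      # number of independent variables, as in Bunn and Vlahos

# 1) Full factorial design: every combination -- grows as levels**m,
#    and may include meaningless or inadmissible combinations.
factorial_points = list(itertools.product(levels, repeat=m))

# 2) Sampling from (here uniform) distributions: the sample size n is
#    chosen freely and grows with the accuracy wanted, not with m.
n = 100
sampled_points = [[random.uniform(0.8, 1.2) for _ in range(m)] for _ in range(n)]

print(len(factorial_points), len(sampled_points))   # 729 vs 100
```

Uniform sampling is used here only to keep the sketch short; the thesis work used the distributions (and sampled data) from the Probabilistic Approach.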

5) Automate optimisation runs to get Yj

We modified the spreadsheet macros created during the replication of the

Probabilistic Approach to generate new data for regression analysis. We then

extracted sets of data for independent variables into text input template files and

edited them into an acceptable form for the optimisation programme. Each data

set was used for one optimisation execution (run), resulting in one set of output

data. The relevant dependent variables were extracted from this output into a

spreadsheet. This was repeated for n sets of data and runs. One hundred to one

thousand runs were made for each combination of Y and Xs. After all data sets

had been processed, we modified the formats to prepare for regression analysis.
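The run-automation loop of step 5 can be sketched as follows. The actual work drove an external optimisation programme via spreadsheet macros and text template files; here a stub function stands in for the optimiser so the loop is self-contained, and all variable names and figures are hypothetical.

```python
import csv
import io

# Stub for one external optimisation run; returns a fake total cost so
# the harvesting loop below is runnable without the real programme.
def run_optimiser(inputs):
    return {"total_cost": 1000 + 2.0 * inputs["capacity_cost"]
                          + 150.0 * inputs["fuel_escalation"]}

# Hypothetical generated data sets, one per optimisation run.
data_sets = [
    {"capacity_cost": 850.0, "fuel_escalation": 0.02},
    {"capacity_cost": 900.0, "fuel_escalation": 0.05},
    {"capacity_cost": 950.0, "fuel_escalation": 0.03},
]

# Collect inputs and extracted outputs into one table for regression.
buffer = io.StringIO()
writer = csv.DictWriter(buffer,
    fieldnames=["capacity_cost", "fuel_escalation", "total_cost"])
writer.writeheader()
for inputs in data_sets:            # one run per data set
    row = dict(inputs, **run_optimiser(inputs))
    writer.writerow(row)

print(buffer.getvalue())
```

In the real experiment this loop was repeated for a hundred to a thousand runs per combination of Y and Xs.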

6) Regression Analysis

The latest releases in desk-top statistical software offered not only statistical but

also visual model-fitting facilities. We used the Curve Fit facility in the statistical package SPSS for Windows to identify the kind of relationship between a pair of X

and Y. We checked various transformations, e.g. linear and non-linear. To build a

good regression model, we used facilities such as forced entry (all variables

considered at once), forward or backward elimination, and stepwise regression.

We checked t statistics to eliminate non-significant estimators and adjusted R

square to get the overall fit. We also looked at possible interaction terms, outliers,

influential variables, etc.

The R squares were very low, ranging from 0.05% to 32% at best, indicating a

poor fit. The variance of the R squares was very large. This implied that either the form of the regression equation or regression as a technique altogether was unsatisfactory.

The variance of residuals was also large.
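The fit statistics used above can be reproduced in outline on synthetic data: the sketch below regresses a response that depends only weakly on its inputs, so the R square and adjusted R square come out low, mimicking (but in no way reproducing) the poor fits described. Everything here is synthetic; none of it is the thesis output.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: Y depends only weakly on X, with much noise.
n, m = 500, 4
X = rng.normal(size=(n, m))
Y = 0.1 * X[:, 0] + rng.normal(scale=1.0, size=n)   # weak signal

# Ordinary least squares fit with an intercept (forced entry of all Xs).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
resid = Y - A @ coef
ss_res = resid @ resid
ss_tot = ((Y - Y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - m - 1)   # penalises extra regressors
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")   # both low: a poor fit
```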

7) Validation

We validated the resulting regression equations by generating new data by

permutation 1) within the original X ranges, and 2) outside of original X ranges.

These two kinds of validation (within and outside range) were aimed to show the

acceptability and re-usability of the regression model. A small variance of R

squares would indicate consistency and reliability of these models.

The within and out of range validation tests were unsatisfactory as there was no

pattern to the outcomes of using new data on reduced models. Along with large

residual variance, these results made the reduced models unacceptable and not

reusable.
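The out-of-range failure mode can be illustrated on a synthetic example: a linear reduced model fitted over one range of X tracks a mildly nonlinear original well within that range, but its fit statistic collapses (here going negative) when evaluated outside it. The functions and ranges are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "original model" with mild curvature the linear reduced
# model will miss outside the fitted range.
def original(x):
    return 1.0 + 2.0 * x + 0.3 * x ** 2

# Fit a linear reduced model on noisy data from the range [0, 1].
x_fit = rng.uniform(0, 1, 300)
y_fit = original(x_fit) + rng.normal(scale=0.05, size=300)
b, a = np.polyfit(x_fit, y_fit, 1)          # slope, intercept

def r_squared(x, y):
    pred = a + b * x
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

x_in = rng.uniform(0, 1, 200)               # within the original range
x_out = rng.uniform(3, 4, 200)              # outside the original range
r2_in = r_squared(x_in, original(x_in))
r2_out = r_squared(x_out, original(x_out))  # can be negative: worse than the mean
print(f"within-range R^2 = {r2_in:.3f}, out-of-range R^2 = {r2_out:.3f}")
```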

Our negative results seem to contradict those of Bunn and Vlahos (1992), who

managed to fit a regression model on a similar optimisation model. Theirs was

fitted on a sample size of 1000, i.e. a thousand runs in which 6 independent

variables were varied: demand escalation rate, nuclear capacity cost, discount rate,

coal price, coal price escalation rate, and the level of Non-Fossil Fuel Obligation

(NFFO). The resulting dependent variable is the difference in total cost of the

optimal plan without the NFFO and with the NFFO. The regression model was

validated against a further 250 new scenarios. Their model was built mainly to demonstrate that such a simplified model could be produced, could be helpful in adversarial debates, and could be useful for subsequent uncertainty

analysis. Our choice of independent variables is totally different from theirs as are

our dependent variables. The background scenarios (fixed variables not entered

into the regression) are also different. In addition to equal permutation (as they did), we also used probability distributions, which should give a more credible model. These differences call into question the generalisability of a reduced

regression model of the optimisation programme.

4.5.4.3 Conclusions

After substantial modelling effort in which different data sets were produced, we

were unable to arrive at a convincing argument for model of model using the

method of regression analysis for approximating the capacity planning optimisation

model. We conclude that model of model as a means to capture the production

costing detail of the optimisation programme for the decision analysis framework is

infeasible, impractical, and not re-usable. These conclusions are supported

below.

1) INFEASIBLE

None of the regression models were reliably and consistently representative of the

original model. The residuals were large and varied, with no apparent pattern.

This made results unpredictable. Poor R square implied that regression may not be

a good basis for model building. These results were perhaps due to the parameters

chosen.

It was difficult to ensure that the artificial data generated from successive runs of

the original model could produce meaningful and admissible combinations for

regression analysis.

Both within range and out of range validation of reduced models failed to give

convincing results. This not only questions the acceptability of the reduced model

but also its propensity for further use.

2) IMPRACTICAL

The effort in producing and validating a regression model was quite large. In fact,

it was greater than the sampling and risk simulation work involved in the

Probabilistic Approach. This effort should not exceed that of re-using the original

model for the same purposes.

3) NOT REUSABLE

The previous two criticisms (infeasible and impractical) foreshadow its lack of reusability.

The reduced model, even if well-calibrated to the original, could only be used for

the background scenario given, hence of limited use. In other words, each form of

the reduced model is confined to the background scenario. This implies that any

variation in background scenario requires the construction of a new reduced

model. Likewise, changing any parameter that was originally fixed to produce the

reduced model does not guarantee valid results as out of range validation showed

that the model was limited to the independent variables and ranges specified.

4.5.5 Second Stage Conclusions

Several prototypes within the decision analytic framework were constructed.

However, they could not be operationalised to the level of detail or functionality

required for capacity planning. In its existing form, the core capacity optimisation

model was incompatible with decision analysis in data (input and output),

structure, and level of detail. The output of optimisation was not meaningful for

linking without further reduction. These complexities of data size and form added

to the dimensionality problem.

One way to overcome the above problems was by a model of model. After

extensive tests, this approach failed to meet the criteria of feasibility, practicality,

and reusability. The difficulties in implementing the conceptualisations of model

synthesis are summarised in table 4.11.

Table 4.11 Difficulties of Model Synthesis Implementation

Area of difficulty: optimisation/decision analysis interface
- incompatible data (size and form)
- conflicting technique assumptions
- model of model not feasible, practical, or re-usable

Area of difficulty: resource limitations (software engineering issues)
- available software not capable of dynamic linkage
- incompatible interfaces between applications
- programming required for data conversion
- application handler needed to achieve multi-tasking
- no model management system available to overcome the above issues

These practical limitations of model synthesis are due to the conceptual difficulties
given in Appendix C and the operational difficulties, which reveal the importance of
compatibility of components at different levels. We raise the following hypotheses
for further research, which, together with our experimental findings, help to explain
why the pursuit of model synthesis is impractical given current state-of-the-art
software and the resources and capabilities of a single utility.

1) Model synthesis requires resources and capabilities beyond those of a single model
builder. Utilities in the UK ESI, especially new entrants, have limited resources, so
model synthesis may not be a practical solution. Furthermore, a single model
builder is biased by technique familiarity, choosing only the techniques that
are most familiar and available, and is thus unable to see the synergies needed for
model synthesis.

2) Even if model synthesis is workable, it is hard to say whether the insights from the
resulting model are more useful than those gained from applying the techniques
separately, i.e. without any synthesis. The fixed cost of synthesis may be too great,
especially if the synthesised model is not re-usable.

3) The case study contained too many dimensions (perspectives, questions, objectives,
and uncertainties) to be comprehensibly modelled. Yet this case study was
already a deliberate simplification of reality. This implies that the actual problem is
far more complicated, and may not be sufficiently addressable from a
modelling perspective at all.

Our earlier prescription of complementary but compatible techniques for
comprehensive and comprehensible models is a difficult goal to achieve through model
synthesis. While complementarity may be a means to comprehensiveness or
completeness, compatibility is a means not to comprehensibility but to
synthesis. Synthesis implies some form of co-existence; as a means to
completeness, it requires compatibility of techniques, data, interfaces, and assumptions
beyond a superficial level.

On the basis of the above conclusions, we turn to other ways of dealing with the

range of uncertainties as suggested in the literature. Recent electricity planning

literature has called for flexibility in planning and the consideration of flexible

technologies. [See Chapter 5 for discussion and references.] Flexibility is

frequently mentioned as a response to uncertainty but without guidance on how it

can be used within this context. Flexibility as an end in itself radically departs from

the previous focus of model synthesis.

4.6 Motivation for Flexibility

4.6.1 Completeness and Model Unease

Implicit in the modelling approach is the goal of completeness. Model

completeness refers to the comprehensive coverage of all aspects of the problem,

i.e. capturing all uncertainties, as a means to deal with strategic uncertainties. For

example, our modelling experiments aimed to achieve completeness by

thoroughness of approach, i.e. the consideration of all significant variables for

adequate problem representation, close representation of reality (good

approximations), and systematic treatment of uncertainty.

Completeness is difficult to achieve since a model, by definition, is a simplification
of reality and is therefore necessarily incomplete. This raises the question: is completeness a
reasonable and achievable modelling goal in the first place? Aversion to large

models in strategic planning has led to simple models, e.g. Ward (1989), for the

understanding of uncertainty and related issues.

We argue that completeness is only a means of increasing the level of confidence

for the user of the model, i.e. the final decision maker who relies on the model for

guidance and defensibility. A high level of confidence ensures that the resulting

model will be used and re-used. A low level of confidence suggests that the user

experiences unease in using the model fully or at all. The real question is: is it
possible to remove that model unease, and if so, how?

We illustrate in table 4.12 our interpretation of Mandelbaum and Buzacott's (1990)
statement that flexibility compensates for model unease. The left-hand column gives
the user's general belief about model completeness, i.e. whether or not models can
be complete. The top row gives what the user has been told about the model in

question. There is no unease if the user believes that models can be complete, and

this particular model is complete. There is model unease whenever the model in

question is not complete or if the user does not believe in completeness.

Table 4.12 Completeness and Unease

User's belief            Modeller's assertion (or user's belief) about this
about all models:        particular model:
                         "Model is complete."        "Model is not complete."

Models can be            No unease.                  Intra-model unease.
complete.                                            More modelling required.

Models are never         Extra-model unease.         Model unease.
complete.                Flexibility needed.         Flexibility needed.

We distinguish between intra- and extra-model unease. Intra-model unease refers
to the lack of completeness within a model, but may be remedied by further

modelling such as the use of sensitivity analysis. Model synthesis is an attempt at

removing this kind of unease. Extra-model unease refers to the gap between the

user and the model, i.e. the user believes that models can never be complete.

Therefore there will always be an unease about what the model gives and what the

user desires from the model. This gap between the user and the model

characterises the style of decision making in this industry because the decision

maker is not the builder of capacity planning models. This gap can be argued as

follows.

1) Model and forecast errors always exist. Traditional approaches rely greatly on the
accuracy of forecasts. However, forecasts by definition are predictions.
Discrepancies (errors) between the actual and the forecast, no matter how small,
will always occur. Models are simplifications.

2) The dynamics of model building, decision making, and realisation of plans imply
that there is always a gap due to lead time. The nature of the generation business is
such that investments have to be made before they are needed, during which time
any number of factors may occur and change the expected performance. There are
lead times to the construction of a useful model, the effective communication of its
results, and understanding and acceptance by the final users.

3) Models do not supply everything the user wants. The user may not understand the
model fully, hence the unease. The user may want to retain their own control, i.e. not rely
on the model completely. Other organisational and political reasons may prevent
full acceptance of the model.

If we believe that models can never be complete, then there will always be unease.

According to Mandelbaum and Buzacott, we should use flexibility to hedge against

model unease. This also suggests that the real goal we should be aiming towards,

in addressing the problem of uncertainties in electricity planning, is not modelling

towards greater completeness, but modelling for (or to produce) practical solutions

to cope with uncertainty. Practical means to cope with uncertainty are given in the

next sub-section.

4.6.2 Coping with Uncertainty

Practical means of coping with uncertainty have been suggested by Hertz and

Thomas (1983) and Hirst and Schweitzer (1990). 1) Ignoring uncertainty allows

one to focus on the complexities albeit at a high cost. 2) Building more accurate

forecasts may help to achieve more accurate optimisation, but this does not

prevent forecast errors. 3) Planning so that future decisions are unnecessary is a

form of robustness, but this does not eliminate uncertainties. The remaining

measures to cope with uncertainty are variations of the flexibility theme.

4) Defer decisions by waiting until additional information is available or until

important uncertainties are resolved. The cost of waiting includes the opportunity

cost of expired options.

5) Purchase additional information to reduce uncertainties. This requires the

assessment of perfect and imperfect information to eliminate and reduce

uncertainties respectively.

6) Sell risks by conducting auctions for supply and demand resources. Negotiate

long term fuel supply and demand contracts.

7) Adopt a flexible strategy that allows easy and inexpensive changes. One way is to

invest in flexible technology, which is characterised by short construction lead time

and small modular unit size. Recent electricity planning literature (e.g. CIGRE,

1991 and 1993) has also proposed technical means of achieving flexibility in

planning and systems.
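The information assessment mentioned in point 5 can be sketched as a stylised expected-value-of-perfect-information (EVPI) calculation. All probabilities and payoffs below are invented purely for illustration:

```python
# Two-technology choice under fuel-price uncertainty.
p_high = 0.4  # probability of the high fuel-price state

# Payoffs (negative costs, in arbitrary money units) by (technology, state).
payoff = {
    ("coal", "high"): -320, ("coal", "low"): -220,
    ("gas", "high"): -280, ("gas", "low"): -240,
}

def expected(tech):
    return p_high * payoff[(tech, "high")] + (1 - p_high) * payoff[(tech, "low")]

# Without information: commit to the best technology now.
without_info = max(expected(t) for t in ("coal", "gas"))

# With perfect information: choose after the fuel-price state is revealed.
with_info = (p_high * max(payoff[(t, "high")] for t in ("coal", "gas"))
             + (1 - p_high) * max(payoff[(t, "low")] for t in ("coal", "gas")))

evpi = with_info - without_info  # what perfect information is worth, at most
```

EVPI gives an upper bound on what any information purchase is worth, since imperfect information can never be worth more than perfect information.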

4.7 Conclusions

How can we cope with increasing uncertainty and meet the conflicting criteria
of comprehensiveness and comprehensibility?

One obvious answer is to build bigger models through model synthesis.

Conceptually, model synthesis should be able to overcome deficiencies of

individual techniques by exploiting synergies between them. It promises a more

comprehensive coverage of areas of uncertainty and a more versatile treatment of

different types of uncertainties. Several ways to achieve synthesis have been

suggested and two of them pursued in this thesis, namely decision analysis as an

organising framework and a reduced model of the full optimisation model to

capture the relevant details, i.e. model of model.

We investigated the feasibility of model synthesis by conducting a two-stage
modelling experiment which consisted of model replication, evaluation,

conceptualisation, and prototyping. The main findings, listed below, cast

considerable doubt on further pursuit of modelling for completeness.

1) The first stage of the modelling experiment showed that existing approaches were
incapable of dealing with the conflicting criteria of comprehensiveness and
comprehensibility.

2) The second stage revealed the limitations of model synthesis as a singular approach
to uncertainty modelling.

3) In particular, model synthesis via a decision analytic framework employing model


of model for data interface and dynamic linkages was infeasible and impractical.

4) These results suggest that model synthesis is not a trivial undertaking, and the work
involved may well exceed the capacity of a single modeller and the limited
resources of a single utility.

5) Furthermore, synthesis requires compatibility beyond a superficial level.

We then examined the goal of model completeness and concluded that reducing or

removing model unease may be a more appropriate goal for dealing with the

range of uncertainties in electricity planning.

Flexibility has been suggested as a hedge against model unease and as a practical

means to cope with uncertainty. It is an intuitively obvious concept that appeals to

the decision maker. That such an ill-defined concept could complement or even
substitute for the traditional approach of rigorous modelling seems far-fetched. It
seems inappropriate to the capital-intensive electricity industry, which is

characterised by irreversibility (of capital investments and function-specific

infrastructure), inflexibility (of long lead times and high sunk cost), and illiquidity

(as the sale of uneconomic plant is still a relatively new phenomenon). Assets in

the electricity supply industry are not as easily exchangeable or tradeable as those

in the financial markets where flexibility is synonymous with liquidity. Electricity

trading is not as competitive as the manufacturing and labour markets where

flexibility is a much discussed operational objective. The strong engineering

culture of the electricity industry requires detailed specification of model

requirements and data input; thus the vague concept of flexibility must be precisely

defined to be useful to capacity planning.

Although conceptually promising, flexibility requires further research to ascertain

its practical usefulness. We need to be able to define, measure, and apply it to our

problem. The second part of the thesis clarifies the concept of flexibility through a

broad review of its definitions and applications from various disciplines.

APPENDIX A

Pilot Study 1

A Comparison of the Economics of Nuclear, Coal, and Gas Power Plant


Using Sensitivity Analysis and Risk Analysis

A.1 Introduction

This pilot study is aimed at 1) establishing the feasibility of model replication, 2)

examining the methodological issues involved in a case study based modelling

experiment, 3) determining the limitations of sensitivity analysis and risk analysis in

modelling uncertainty, 4) exploring the implications for model completeness, and

5) providing the rationale for model synthesis. The level of detail documented here

is reflective of subsequent modelling exercises.

At the time of writing (July 1992), the nuclear review in the UK ESI promised for
1994 provided a rich background and rationale for the comparison of plant

economics. Nuclear power has always been a highly controversial topic, with

disagreements surrounding its real costs, technical complexities, huge uncertainties,

and the evaluation of intangibles. It is a real event that is bound to provoke debate

up to the actual date of review, thus providing plenty of evidence and results for

our comparison. The following script (courtesy of Kiriakos Vlahos) gives a brief

background of the inquiry.

While the Hinkley Point C inquiry was under way, the UK government took the decision
to postpone any decisions about the nuclear development programme until a review of
nuclear in 1994. It also withdrew the nuclear industry from the privatisation
programme and formed a new company, called Nuclear Electric which would operate
existing nuclear stations.

The Hinkley Point C inquiry approved the development of the new power station on the
grounds of the Non-Fossil Fuel Obligation, although the economics of nuclear at the
time looked desperate compared to either coal or gas.

Since then, concern about the environment and especially the greenhouse effect has been
growing, and the EEC is planning to introduce a carbon tax on fossil fuels. Such a tax
would improve the economics of nuclear power stations, since they do not produce the
main greenhouse gas CO2, nor do they produce SO2 and NOx in the generation of
power.

In addition, Nuclear Electric has been performing quite well in financial terms,
producing substantial profits, of course to a large extent due to the nuclear levy. But
they did manage to improve the availability of the AGR stations and to increase the
market share of nuclear overall. Nuclear Electric and BNFL, the two main companies of
the UK nuclear industry, are keen to build new nuclear power stations and they even
called for the review date to be brought forward. This has been declined by the
government.

Developments in the electricity and gas markets are also relevant. A large power station
building programme coincided with privatisation, and Combined Cycle Gas Turbine
(CCGT) is the type of plant that will inevitably dominate the new power station market.
If projections materialise, gas consumption will double in the UK by the year 2000.
Whether the gas industry can produce that much gas at competitive prices is an open
question. The UK and European gas supply and demand situations need to be carefully
examined.

The latest news is that Nuclear Electric wants to build Sizewell C, a successor to
Sizewell B but with double the size (about 2.5 GW). They estimate that this
follow-up will achieve significant economies of scale and will be economic compared to
competing electricity generation technologies.

The government's decision in 1994 depends, amongst other things, on the ability of

nuclear power to compete against other plant especially in anticipation of the likely

over capacity due to gas-fired plants, which are expected to dominate the early part

of next century. This pilot study examines the economics of nuclear power and its

immediate competitors, coal and gas, the most influential factors affecting the final

cost of electricity, the impacts of the proposed EC carbon tax, and the overall

effect of uncertainties.

Marginal or levelised cost analysis is used for assessing plant economics rather than

constrained optimisation of the entire system. The main data used in this study

originates from the UNIPEDE (1988) and OECD/NEA (1989) reports, which are
referred to as UNIPEDE and OECD throughout the study. Electricity costs are

analysed in a global context to give a broad perspective on realistic ranges. Major

components of cost are then presented and discussed. Uncertainties are assessed

by the techniques of sensitivity analysis and risk analysis. Extensions to this study

are suggested at the end.

A.2 Modelling Approach

The levelised cost of electricity, also known as the average or uniform discounted

cost, is the accepted method for comparing the economics of different power

plants. This method is extensively detailed in IAEA (1984), UNIPEDE, and

OECD. All the costs are discounted to the present value at a certain point in time

so that the terms, which are expressed in constant money of that given date, can be

summed and divided by the discounted electrical output. For a project in the

United Kingdom, this value represents the cost of electricity generation in

pence/kWh.
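The definition above can be written directly as a short sketch. The three-year cost and output streams used in the example are hypothetical, chosen only to exercise the formula:

```python
def levelised_cost(costs, outputs, rate):
    """Levelised (uniform discounted) cost: total discounted cost
    divided by discounted electrical output.

    costs   -- cost in year t, constant money of the base date
    outputs -- electrical output in year t (e.g. kWh)
    rate    -- discount rate (e.g. 0.05 for 5%)
    """
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_out = sum(o / (1 + rate) ** t for t, o in enumerate(outputs))
    return disc_cost / disc_out

# Hypothetical stream: heavy investment up front, running costs and
# output in the two following years, discounted at 5%.
cost = levelised_cost([1000.0, 50.0, 50.0], [0.0, 400.0, 400.0], 0.05)
```

With constant costs and outputs and a zero discount rate, the function reduces to a simple average cost per unit of output, which is a useful sanity check.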

This study identifies the main components of the cost of electricity and the major

factors that influence them. It explores the extent to which these components

contribute to the final levelised cost under the impacts of varying discount rates

and other factors.

Instead of giving a point estimate for the cost of electricity, this study uses

sensitivity and risk analyses to give a realistic range of estimates. A realistic range

contains the most likely values, and in this case, is qualified within the international

context, specifically of plants to be commissioned in the last five years of this

century in industrialised countries. This deterministic analysis reveals the impacts

of the major factors upon baseload coal-fired and nuclear power stations in the

United Kingdom. The same approach can be extended to other types of plant.

First, the factors that influence power plant economics are isolated through
international comparisons, thus giving a broader perspective on uncertainty and

possible interactions between these factors. Ranges for the main parameters are

extracted from the recent OECD and UNIPEDE reports by conversion to a

common currency and then taking the minimum and maximum values for all OECD

countries surveyed. The range for a given parameter is then reduced by discarding

those outer values that reflect unrealistic circumstances in the UK sense, e.g.

extremely high fuel prices or subsidies specific to a particular country. These

ranges are then used as bounds in the sensitivity analysis. Data for UK coal and

nuclear power plants are taken from both studies and used as base cases in this

paper. Many countries are represented in both studies, thus allowing for data

comparison and validation. Where input data is not available, the parameters are

calculated from the respective contributions in levelised costs.

The approach of following sensitivity analysis with risk analysis has been

advocated in standard texts on investment appraisal under uncertainty, e.g. Hull

(1980) and Clemen (1991). Indeed, sensitivity analysis has been widely acclaimed,

e.g. by Rappaport (1967) and Hertz and Thomas (1984), as a logical adjunct to

deterministic capital budgeting, if not a necessary first step in understanding the

nature and impact of risk.

The important factors that influence generation cost are first identified. The ranges

of values are extracted from published sources for sensitivity analysis and risk

analysis. The basic factors are defined in terms of the likely ranges of values and

their impacts, the nature of relationships (linear or non-linear), and the magnitude

of impacts. Progressively the assumptions are dropped and constraints tightened,

until a sufficiently realistic model of uncertainty is represented. The steps are
illustrated in figure A.1 below.

Figure A.1 Uncertainty Modelling

[Flow diagram linking scenario analysis, isolation of factors, ranges, causal
analysis, sensitivity analysis, ranking of factors, risk analysis, dominance of
technologies, and decision analysis; the shaded region highlights the steps taken
in this study.]
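The sensitivity-analysis step can be sketched as a one-at-a-time variation of each factor over its range, holding the others at base values. The additive cost model and the ranges below are illustrative stand-ins, not the study's data:

```python
def total_cost(investment, fuel, om):
    # Simple additive stand-in for the levelised cost calculation.
    return investment + fuel + om

base = {"investment": 1.0, "fuel": 1.2, "om": 0.4}          # base case, pence/kWh
ranges = {"investment": (0.5, 1.2), "fuel": (0.5, 2.6), "om": (0.1, 0.9)}

swings = {}
for factor, (lo, hi) in ranges.items():
    vals = []
    for v in (lo, hi):
        trial = dict(base, **{factor: v})  # vary one factor, fix the rest
        vals.append(total_cost(**trial))
    swings[factor] = max(vals) - min(vals)

# Rank factors by impact (widest swing first), as in a tornado diagram.
ranking = sorted(swings, key=swings.get, reverse=True)
```

The ranking of factors by swing is what feeds the next step, where the most influential factors are carried into the risk analysis.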

A.3 The Cost of Electricity Generation

A.3.1 Range of Levelised Costs

The levelised costs of plants to be commissioned in the near term are the costs to

the generators, not the price charged to consumers. This study examines the

levelised costs of plants to be commissioned between the years 1995 and 2000 for

all the OECD countries surveyed in OECD and UNIPEDE reports.

The levelised cost of electricity generation by coal-fired plants given in the OECD

and UNIPEDE reports ranges from 1.33 to 3.99 pence/kWh. These costs were

calculated from the raw data supplied by OECD countries discounted at 5% for 30

years to constant currency of January 1987. The levelised cost consists of three

components, namely, contributions from the initial investment, annual O&M, and

the variable fuel cost. The variability in contribution by fuel is the greatest for coal,

0.47 to 2.64 pence/kWh, over twice as much as investment, which ranges from

0.47 to 1.22 pence.

Calculations for nuclear power stations to be commissioned in the same period
reveal a narrower range of costs, 1.33 to 2.94 pence/kWh, with the relative
contributions of investment and fuel in the reverse order to coal. This is not surprising, as
investment costs are much higher for nuclear than for coal plants. Fuel costs show

great variability because the expectations of future fuel prices differ widely

amongst these countries. Figure A.2 illustrates the range of values for all the

countries studied.

Figure A.2 Horizontal Analysis of Value Ranges

[Range chart, at a 5% discount rate over a 30-year life, showing the range of
levelised cost (pence/kWh) across the countries surveyed for each component
(investment, O&M, fuel) and the total, for coal and nuclear plants.]

This kind of horizontal analysis shows the range of costs across the countries

surveyed. Although cost differences are due to different conditions in each

country, the general order and magnitude of difference can be used to assess

sensitivities of cost in a particular country.

A.3.2 Variability in Cost Components

Variability across countries can be examined by comparing the ratio of the largest

to the smallest cost by component, e.g. the maximum divided by the minimum for

cost due to investment (or O&M or fuel) across all countries. Here, the greatest

difference is the contribution by the O&M cost of coal-fired plants, the largest

being 8.31 times the smallest. But the range is small: 0.11 to 0.90 pence/kWh in

absolute terms compared with other costs. Even the smallest discrepancy claims a

factor of two, i.e. the largest nuclear investment cost is twice the smallest. For

both coal and nuclear fuel cost contributions, the largest is 5.55 times the smallest.

Although differences between countries are not the subject of this study, it is

nevertheless interesting to note such a range of difference in similar technologies

among industrialised nations.

A.3.3 Contribution to Cost

While comparing costs across various countries raises the uncertainties of

exchange rates and different assumptions made by each country, a vertical analysis,

as shown in figure A.3, eliminates these issues by looking at each cost component

in relation to the total. Again the minimum and the maximum are taken of the

proportions for all OECD countries surveyed to set the maximum bounds used in

the sensitivity analysis that follows.

Figure A.3 Vertical Analysis of Cost Contribution

[Range chart, at a 5% discount rate over a 30-year life, showing the minimum and
maximum percentage contribution of each component (investment, O&M, fuel) to
total cost, for coal and nuclear plants.]

Compared to other components, O&M contributes least to the total cost for both

coal and nuclear. Investment contributes between 18 and 34% to the cost of coal,

while it is much greater for nuclear, being 47 to 70%. The relationship between

investment and fuel is again reversed for nuclear and coal, i.e. the contribution by

investment is much higher than that by fuel for nuclear (and the opposite for coal).

Such range diagrams depict the relative importance of different components within
a single technology and the same component between different technologies. The

length represents variability: the longer it is, the greater the range of possible

values. Nuclear fuel has the greatest variability, contributing anywhere from 12 to

43% of total cost. The height represents the importance of the component: the
higher it is, the greater the contribution. The major components of coal and nuclear

are due to fuel and investment costs, respectively, each of which contributes up to

70% of final cost of electricity generation.

A.4 Major Components of Cost

The UNIPEDE and OECD studies were conducted in 1988 and 1989 respectively,

before discussions of a carbon tax came into full swing. Grubb (1989) and others

have discussed the effect of a carbon tax on the relative competitiveness of fossil

fuels for electricity generation and its effectiveness in curbing global warming.

Using raw data and established conversion rates from these reports, a carbon tax

can be calculated to see its effect on the final cost. The carbon tax component is

included in this study to represent increasing environmental concerns through

externalities.
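The carbon tax component can be sketched as follows. The tax level, fuel emission factor, and plant efficiency used here are illustrative assumptions, not figures from the reports:

```python
def carbon_cost_pence_per_kwh(tax_pounds_per_tonne, kg_co2_per_kwh_fuel,
                              efficiency):
    """Carbon tax contribution to generation cost (pence/kWh electrical).

    tax_pounds_per_tonne -- tax level in pounds per tonne of CO2
    kg_co2_per_kwh_fuel  -- emission factor per kWh of fuel burned
    efficiency           -- thermal efficiency (fuel -> electricity)
    """
    kg_per_kwh_elec = kg_co2_per_kwh_fuel / efficiency  # more fuel per kWh(e)
    tonnes_per_kwh_elec = kg_per_kwh_elec / 1000.0
    return tax_pounds_per_tonne * tonnes_per_kwh_elec * 100.0  # pounds -> pence

# Hypothetical coal example: 10 pounds/tCO2 tax, 0.33 kg CO2 per kWh of
# fuel, 38% efficient plant.
extra = carbon_cost_pence_per_kwh(10.0, 0.33, 0.38)
```

Because the tax scales linearly with the emission factor and inversely with efficiency, it penalises coal more than gas and leaves nuclear unaffected, as the discussion above suggests.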

Aside from the costs of investment, O&M, fuel, and carbon tax, the drivers of the

four cost components are discount rate, life, load factor, escalation rates,

efficiency, and carbon dioxide emission factor. These factors are depicted in figure

A.4, with arrows representing influences.

Figure A.4 Factors Influencing Cost

[Influence diagram linking the driving factors (discount rate, lifetime, load
factor, escalation rates, fuel escalation, efficiency, emission factor, and tax
rate) to the four cost components (total investment, annual O&M, fuel cost, and
carbon tax), which together determine the levelised cost.]

Each of these factors is defined next. The ranges of input values are taken from

the two reports and adjusted with assumptions and approximations. The costs are

expressed in constant US$ and ECU of January 1987 in OECD and UNIPEDE

reports respectively. These cost figures are converted into sterling using the

equivalent exchange rates at that time.

A.4.1 Assumptions

The UNIPEDE and OECD reports could not have based their calculations on fixed

uniform scenarios, for the figures were simply collected from the participants

without adherence to a priori rules. Instead their aim was solely to calculate

levelised costs using the input figures provided. The input figures, such as capital

costs, fuel costs, and load factors, vary from country to country, and the

differences are explained in the two reports. Each country has its own assumptions

about fuel prices and trends, especially for the stages of the nuclear fuel cycle.

Most countries expected future coal prices to soften such that nuclear generation

costs would not be that much cheaper than coal generated electricity. OECD goes

further to seek an independent view from the Coal Industry Advisory Board,

whose average of best estimates was significantly lower than that of the majority
of the countries, although higher than some respondents'.

Generally speaking, differences in capital costs are associated with costs of actual

construction, commercial bids, design studies, regulatory requirements, and

updating of older data. Cost of labour and welfare charges factor into the running

costs and, along with other figures, are tempered by the economic situations in

each country. As all costs are converted into a common currency, some

conversions may be distorted due to the over- or undervaluation of the national

currency to ECU or US$ in January 1987 when the exchange rates were taken.

Capital investment of new coal-fired plants is largely burdened by the additional

cleaning equipment, which depends on the type of sulphur and nitrogen oxide

removal processes. Other capital cost differences are due to the economies of

scale in unit sizes (315 to 840 MW for the UNIPEDE study, 165 to 850 MW for

OECD study.) Similarly, the number of units on the same site contributes to the

scale effects in investment and running costs. In other words, marginal costs

decrease with each incremental MW or unit on the same site.

With reference to long term uncertainties, UNIPEDE considered the risk of error

in determining total generating costs. This risk lies with the price of natural

uranium and the cost of irradiated fuel management for nuclear generation and in

the long term price of coal for coal-fired generation. However, it did not

investigate the magnitude or likelihood of the uncertainties nor the time scale of

what is meant by long term.

A.4.2 Ranges of Values

Rather than inventing pseudo highs and lows for the different values needed to

calculate costs or basing the study on historic costs, we use realistic ranges of

similar plant types in OECD countries. These figures refer to plants that will be

commissioned around the same period of time. They have already been deflated

and discounted to a given date and converted to a common currency.

As mentioned previously, each country submitted its figures based on its own

assumptions and expectations of the future. Taking the range of input figures runs

the risk of ignoring conflicting assumptions about fuel prices, regulatory scene,

environmental standards, and technological developments. For example, the low

cost of fuel in one country may be due to its proximity to the source, whereas, the

high cost of fuel in another could be due to its unlucky experience in procuring a

different grade of fuel in the past. However, by taking the entire range in such a

sensitivity analysis, we can be sure of encompassing all possible values and some
improbable ones.

Some cost differences are structural due to the regulatory framework, economic

conditions, industry organisation, or existing infrastructure in a country. Existing

and target plant mix, status of over or under capacity, government subsidies,

environmental concerns, and various other factors contribute to new plant costs.

These differences cannot be generalised for any ranges, but for the sake of

completeness, the range over all countries ensures that all possible values are

included.

Initially, ranges are taken from international studies for the sensitivity analysis.

Later, these ranges are adjusted for the UK case and revised by more current

views.

A.4.3 Investment

Most capital and related costs are incurred in the form of construction cost during

the construction period that precedes commercial commissioning. During this

period, interest during construction (IDC) is accumulated according to the

investment schedule and prevailing interest rates. Both OECD and UNIPEDE

studies computed the interest during construction using an interest rate equal to the

discount rate. Therefore, when the discount rates were varied in the sensitivity

analysis, IDC would change accordingly. It may be argued that the use of an

interest rate in the calculation of the IDC relates to a financing decision whereas

the use of a discount rate in the calculation of levelised cost follows an investment

decision. By this token, the interest rate and discount rate need not be

the same, especially since the interest rate used to calculate the IDC can vary with

time and the kind of financial arrangement. In contrast, a fixed discount rate is

used to revalue all other costs to a specific date. This aspect of capital cost

requires detailed modelling and in-depth investigation beyond the scope of this

pilot study. For simplicity, the IDC is taken as a lump sum and included in the

investment cost in the sensitivity analysis. Thus changing the discount rate would

not affect the IDC, and the financing and investment evaluations are kept separate.

We also include the IDC in the risk analysis that follows, as the IDC relates to the

time value of money.

In anticipation of stricter environmental legislation in the future, the costs of

desulphurisation and denitrification equipment are included in the initial investment

for coal plants. It is assumed that the reduced plant efficiencies due to this

additional equipment have already been taken into consideration.

Considerable scientific uncertainty surrounds the last stages of nuclear plant

operation, namely, dismantling and decommissioning. Actual decommissioning

costs could exceed provisions. Therefore it is important to consider this in the

capital costs. For comparison purposes, this is included in the investment costs for

both coal and nuclear. The provision for nuclear is much greater than that for coal

and the magnitude is more uncertain. This provision depends on whether the

dismantling is partial or total and the time elapsed between the final shutdown of

the station and the start of decommissioning. The impact of this provision is not

considered in detail here.

The investment cost for coal ranges from £475 to £1,211 per kW of installed capacity. For nuclear, it is £868 to £1,725 per kW of installed capacity.

A.4.4 Operations and Maintenance

In general, fuel costs make up 80% of the running costs, with the remaining 20% due to operations, maintenance, and labour. Although there are fixed and variable portions to the O&M cost, it is approximated as a single fixed annual cost in this study. The O&M cost component is the smallest of the three major components, after fuel and investment, as shown in the previous section on costs. The specific costs of labour and welfare charges depend on the economic situation in each country.

The OECD report gives total O&M costs. This figure is taken as an annual fixed

O&M cost with zero variable O&M cost. The bulk of this cost is due to the

portion of labour in fixed costs and the amount of labour employed at site.

Because the UNIPEDE report does not list O&M costs, a slight approximation

must be made to derive the O&M cost as an input. The annual fixed O&M cost is

calculated from multiplying the average discounted O&M cost per kWh (in the

reference case of 5% discount rate and 25 year life) by the annual utilisation of

6,600 hours. Again, the resulting annual fixed cost is assumed to include the

variable component of O&M cost.

O&M costs for coal vary between £6.92 and £59.74 per kW per year, while the range is smaller for nuclear, at £16 to £49 per kW per year. To compensate for

the variability in O&M costs, a modest range of escalation rates is applied in the

sensitivity analysis.

A.4.5 Fuel

The treatment of nuclear fuel differs greatly from that of fossil fuels in generating

electricity. The calculation of the final cost of electricity generation due to the

complicated nuclear fuel cycle requires additional coefficients, which are not

evident in the two reports. For this reason, the output values of pence/kWh attributed to fuel were used. A more detailed study could calculate the conversion

from raw uranium concentrate, through the nuclear fuel cycle, to a more accurate

cost of fuel contribution.

Fossil fuel prices are available by equivalent heat, weight or volume. For instance,

oil is typically measured in barrels, coal in tonnes, and natural gas in cubic metres

or therms. Measurement by heat content standardises for all fossil fuels. To relate

these fuel prices to the plant heat rate, the price per equivalent heat is used, e.g.

per GJ.

Coal prices vary considerably between countries depending on the location of the

power plant (i.e. its proximity to sources of coal), whether it is imported or

domestic, and the subsidies and taxes on fuel. The high prices of domestic coal in Germany and Spain are on the order of three to four times the cost of imported coal elsewhere. Conversely, the price of abundant domestic coal in Western Canada is half that of the cheapest imported coal in the world. In both reports,

imported coal prices were given in the UK values. Given that the final analysis is

aimed at sensitivity of costs in the UK, a logical conclusion is to tighten the range

of possible coal prices by restricting the analysis to imported coal. These imported

coal prices varied from 89 pence to £2.24 per GJ, which compares reasonably with

international traded prices.

The price of steam coal in the UK (IEA, 1991) increased from £1.26 to £1.81 per GJ between 1980 and 1990, with an average high of £1.86 in 1988. After deflating, the price has fallen in real terms. Negotiations between the major generators and British Coal (Financial Times, March 1992) indicated an expected price in the range of £1.50 to the then-current £1.63 per GJ, while Scottish Power had been able to procure coal at £1.00 per GJ. Average prices of coal purchased by the major UK electricity generating companies (Department of Energy, April 1992) reached as high as £1.99/GJ between 1986 and 1992. Thus the derived range of £0.89 to £2.24 per GJ is not unrealistic for evaluating the case of UK coal-fired

stations. The levelised costs reported in OECD and UNIPEDE are most sensitive

to assumptions about future fuel prices. For this reason, comparisons with

published sources are necessary to establish credibility.

A.4.6 Carbon Tax

The parameters specific to fossil fuels necessary to calculate the carbon tax are

absent from these two studies. Conversion rates such as carbon dioxide emission

factors, heat content, and plant efficiency are readily found in recent policy and

economic studies on the carbon tax. However, these policy-oriented papers do not

specifically state many of their assumptions for the numbers. Tax units are

expressed in $/ton and $/BOE. While BOE is understood to be the amount of fuel
equivalent to the CO2 released from burning a barrel of oil, it is unclear whether

the unit of ton refers to the long ton, the short ton, or the metric tonne. Not only

are the units misleading, there are at least two ways to calculate the tax effect: by

carbon content and molecular weight or by carbon dioxide emission factor. To

establish a common basis for the calculation of carbon tax, values for these

parameters of oil, gas, and coal are taken from various papers and recalculated to

tally against the original results. This type of multi-source analysis establishes the

inter-relationships and produces a range of credible values. Realistic ranges of

such parameters can be found by using figures from various sources. [See Ontario

Hydro 1989, Hoeller and Wallin 1991, and Eyre 1990.]

The EC carbon tax is a specific tax levied on the equivalent amount of CO2

released by burning a barrel of oil. A tonne of oil is approximately equivalent to 7.64 barrels. Given a heat content of 42.6 GJ/tonne and an emission factor of 75 kg CO2/GJ, burning a barrel of oil emits approximately 418.19 kg CO2. The fuel equivalent emission factor used in the calculation for other fuels is 7.64 / 42.6 / 75 = 0.00239 BOE/kg CO2. Without specifying the exact grade of coal, over the range of emission factors (71.7 to 108 kg CO2/GJ), plant efficiencies (25 to 45%), and $/£ exchange rates ($1.40 to $2 = £1.00), the effect of a $3 per BOE

carbon tax would range from a low of 0.21 pence/kWh to a high of 0.80

pence/kWh. Based on the relatively uniform generating technology of coal-fired

stations surveyed in the OECD and UNIPEDE reports, this study assumes that

similar types of coal could be used in all stations. In other words, an average
carbon dioxide emission factor of 88.11 kg CO2/GJ could be used to calculate

carbon taxes.
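The conversion arithmetic above can be sketched as a short calculation; the 0.00239 BOE/kg CO2 factor and the 0.21 to 0.80 pence/kWh bounds follow directly. This is an illustrative sketch only: the function and variable names are mine, not the original study's.

```python
# Carbon tax conversion from $/BOE to pence/kWh, using the parameters in the text.
BARRELS_PER_TONNE = 7.64   # a tonne of oil is roughly 7.64 barrels
HEAT_CONTENT = 42.6        # GJ per tonne of oil
OIL_EMISSION = 75.0        # kg CO2 per GJ of oil

# Fuel equivalent emission factor: barrels-of-oil-equivalent per kg of CO2 released.
BOE_PER_KG_CO2 = BARRELS_PER_TONNE / HEAT_CONTENT / OIL_EMISSION  # ~0.00239

def tax_pence_per_kwh(tax_dollar_per_boe, emission_kg_co2_per_gj,
                      efficiency, pounds_per_dollar):
    """Carbon tax component of the levelised cost, in pence/kWh."""
    pounds_per_mwh = (tax_dollar_per_boe * BOE_PER_KG_CO2
                      * emission_kg_co2_per_gj   # kg CO2 per GJ of the taxed fuel
                      * 3.6                      # GJ of heat per MWh of electricity
                      * pounds_per_dollar
                      / efficiency)
    return pounds_per_mwh / 10.0                 # £/MWh -> pence/kWh

# Best case for coal: clean coal, efficient plant, strong pound ($2 = £1).
low = tax_pence_per_kwh(3, 71.7, 0.45, 0.50)
# Worst case: dirty coal, inefficient plant, weak pound ($1.40 = £1).
high = tax_pence_per_kwh(3, 108.0, 0.25, 1 / 1.40)
print(round(low, 2), round(high, 2))   # 0.21 0.8
```

The two extremes recover the quoted range for a $3/BOE tax; scaling linearly to $10/BOE gives the high-tax scenario discussed below the figure.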

In 1992, the European Commission proposed an incremental carbon tax of $3 in 1993 and $4 in 1994, rising by $1 each year to reach $10 in the year 2000. In this study,

the same amount of tax is applied to every single year for the entire economic life

of the plant. Thus the no tax scenario can be compared to the high tax scenario of

$10/BOE. The actual impact of an incremental carbon tax would lie in between the

two. Figure A.5 illustrates the impact of a carbon tax relative to plant efficiency.

Figure A.5 Carbon Tax Calculations for Coal-fired Plants

[Chart: carbon tax component of levelised cost (0.00 to 2.00 pence/kWh) against carbon tax level (0 to 10 $/BOE) for plant efficiencies of 33%, 37%, and 41%. Assumptions: exchange rate of 1.38; emission factor of 88.11 kg CO2/GJ.]

As seen from above, the contribution of carbon tax is fairly insignificant at the $3 level. But at the $10/BOE level, and assuming a high plant heat rate (i.e. low efficiency), it could double the cost of cheap coal-generated electricity.

A.4.7 Efficiency

The amount of carbon dioxide released when a fuel is burned depends on its carbon

content (translated into a carbon dioxide emission factor). Likewise, the amount of

useful energy converted from this parallel process depends on the plant heat rate.

Given that 3.6 GJ of heat is equivalent to 1 MWh of energy, the remaining factor in

the heat rate is simply the plant efficiency rate. Coal-fired stations in the UK have

efficiencies between 30 and 40% (Eyre, 1990).

If the plant heat rate is not given, efficiency is approximated by the contribution of

fuel to the final discounted cost of electricity generation and the raw fuel price.

Hence the heat rate expressed in GJ/kWh is derived by dividing the levelised fuel cost in pence/kWh by the raw fuel price in pence/GJ; efficiency then follows from the 3.6 GJ/MWh conversion. This calculated efficiency

compares realistically with published sources. In the absence of descriptive fuel

parameters such as carbon and heat content, all possibilities are considered from a

range of values.
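As a check on this back-calculation, the UNIPEDE UK coal figures reported later (a levelised fuel cost of 1.54 pence/kWh from a fuel price of £1.65/GJ) recover the 38.5% efficiency listed in table A.2. A minimal sketch, with names of my own choosing:

```python
# Back-calculate plant efficiency from levelised fuel cost and raw fuel price.
levelised_fuel = 1.54        # pence/kWh (UNIPEDE UK coal)
fuel_price = 1.65 * 100      # £/GJ converted to pence/GJ

heat_rate = levelised_fuel / fuel_price    # GJ of fuel per kWh of electricity
efficiency = 3.6 / (heat_rate * 1000)      # 3.6 MJ/kWh out, over MJ/kWh of heat in
print(round(efficiency * 100, 1))          # 38.6, against the 38.50% reported
```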

Plant efficiency links the fuel to the calculation of the carbon tax component and

the fuel component as both require a heat to energy conversion. Plant efficiencies

derived from the two reports vary from 25 to 45.45% for coal-fired stations. Plant

efficiency is not required for nuclear, whose fuel cost is already expressed in kWh terms (a crude approximation of the nuclear fuel cycle). Furthermore, carbon taxes do not apply

to non-fossil fuel plant.

A.4.8 Load Factor

Fixed costs such as investment and O&M are divided by the actual generating

hours to arrive at the pence per kWh figure. This utilisation rate is determined by

the scheduled and unscheduled outage rates, in other words, the percentage of time

that a plant is scheduled to operate less the percentage of time it is out of service

due to planned and unplanned shutdowns for maintenance, refuelling, and other

reasons.

Plant utilisation rate is frequently expressed in terms of capacity factor, load factor,

and availability factor. The OECD report uses the term load factor as a percentage

of total hours in a year, while the UNIPEDE report uses hours in a year to reflect

utilisation. The load factor convention is chosen for this paper, and the UNIPEDE

hours are divided by 8,760 hours in a year to arrive at an annual percentage figure.

The UNIPEDE study uses an incremental utilisation rate in all scenarios, i.e. the

load factor is increased from 45% in the first year to 57% in the second year, and

finally 75% for the rest of the life time. On the other hand, the OECD study uses a

levelised load factor of 72%, which was derived from averaging the increasing load

factors after initial commissioning and the settled down load factor of 75.3%. For

simplicity, a constant load factor is used in this pilot study, with the assumption

that it represents the levelised lifetime annual utilisation rate. For coal, the

sensitivity ranges from 63% to 80%. For nuclear, it is slightly higher, from 65% to

85%.

Using the same O&M cost and the same load factor for every single year of a plant's lifetime underestimates the cost in the initial years, when O&M costs are higher than usual and load factors are lower than usual. Likewise, during the latter years, when mid-life refurbishment and additional maintenance are necessary, O&M costs are expected to increase while the load factor decreases.

A.4.9 Escalation Rates

It is unreasonable to expect all costs to remain the same for every single year of

plant operation. More likely, the O&M and fuel prices will fluctuate year by year.

Instead of computing yearly cashflows, the levelised method uses escalation rates.

A modest range of -1% to 3% per annum is assumed in the sensitivity analysis.

For the base case, however, no escalation is assumed.

Yearly escalations do not take into consideration seasonality or daily fluctuations in

load. It is assumed that these detailed fluctuations have been averaged out in this study.

The effects of an incremental carbon tax in accordance with EC legislation and load

factors in the normal course of operations can be modelled with escalation rates.

No escalation rates are assumed for other parameters.

A.4.10 Life

Economic or amortising life differs from technical life depending on the accounting

conventions practised in each country. For discounting purposes, the economic life

is used.

The two reports use standard lifetimes of 25 and 30 years to compute the levelised

costs. However, in the national calculations, the actual lifetimes used by each

country vary from 13 to 45 years. Technical lives are determined by the

performance of the plant and are usually longer than economic lives which are used

for accounting purposes. At the lower end are Italy (13 years) and Japan (15 and

16 years) for economic life. Since the UK falls on the higher side (45 years), the range

used for sensitivity analysis in this paper is extended to 50 years.

The economic or amortising life of a project tends to be shorter in the private

sector than the public sector, to reflect the degree of risk. A shorter life is

preferred in the payback method of appraising projects, allowing costs to be

recovered more quickly, albeit at a higher cost to the consumer. The level of

business risk is captured in the choice of economic life and the choice of discount

rate.

A.4.11 Discount Rate

The UNIPEDE study used a 5% discount rate for two reference lifetimes of 25 and

30 years. OECD used a 5% and a 10% discount rate for a 30 year life. The choice

of a discount rate is very particular to the circumstances surrounding a country and

a utility. The rates also differ between the public and private sectors.

The former CEGB gave the public sector price of 3.22 pence/kWh (at 1987 prices)

for PWR in the House of Commons Energy Committee (1990) inquiry into the cost

of nuclear power. This was calculated using an 8% internal rate of return

(equivalent to discount rate) over a life of 20 years. The private sector price of

6.25 pence/kWh was calculated by National Power in the run up to privatisation,

using a discount rate of 10% to reflect the degree of risk perceived by the private

sector. The two reports and this pilot study show that the levelised cost of

electricity generation is highly sensitive to the choice of discount rate. The

discount rate reflects not only the opportunity cost of capital but also the time

value of money, cost of borrowing, and business risk.

The discount rate used in the public sector tends to be much lower than that used

by the private sector because regulated monopolies with guaranteed rates of return

on capital can obtain low costs of borrowing. The discount rate perceived by the

private sector tends to reflect the return on capital that can be invested in various

markets, including the return to shareholders on the equity vested in the private

utility.

Each country used the same discount rate to calculate its coal and nuclear costs.

Across all countries, discount rates varied from 4 to 10%. These rates are based

upon market rates, reference rates used in previous energy plans, government

advice, and considerations of economic growth and development. It may be

argued that higher discount rates should be applied to nuclear projects to reflect

the greater investment risk. One study (Virdis and Rieber, 1991) even proposed a

discount rate as high as 20%!

There are many ways to determine which discount rate to use. The OECD study

suggests some basic approaches to selecting a discount rate:

1) purely policy related, aimed at reaching specific social, economic, or political goals in a country;

2) derived on an economic or financial basis, such as

   a) based on the real costs of investment funds over the time scale of the project,

   b) reflecting the opportunity cost of capital at the time of investment as determined by the income it could potentially generate in alternative uses,

   c) based on social time preference, reflecting society's desire to protect the interests of future generations, and

   d) some mixture of these concepts.

The selection of a discount rate would therefore depend on the projected rates of

inflation, interest rates, and other market based rates.

Ottinger et al (1990) list four different ways to weigh future economic benefits and costs against today's benefits and costs:

1) the social rate of time preference,

2) the consumption rate of interest,

3) the marginal private rate of return on investment, and

4) the opportunity cost of public investment.

Aside from financial determinants of the discount rate, Ruth-Nagel and Stocks

(1982) warn of the social opportunity cost of capital.

For the moment, a modest sensitivity range of 4 to 15% is used for discounting

coal and nuclear.

A.4.12 Consolidating the Range

One of the main interests of this study is to analyse the UK base case and how it

varies within the ranges given by the international context. The ranges, as

postulated previously, reflect the possible values and include the improbable as well

as the probable. It is assumed that the extreme anchors are less likely than the base

value, but no attention is paid to the extent of probabilities so far. As argued

before, it is more informative to consider all possible values than a fixed percentage

about the base value.

It is possible that OECD and UNIPEDE reports may not capture the full range of

values for the UK. Comparisons with other published sources are required.

Furthermore, the ranges may vary for different periods in time. The ranges

captured in January 1987 reflect each country's expectation of the future at that

point in time. Many events have taken place since then, and the ranges should be

re-adjusted in light of revised expectations of the future. This sense of range is

useful even if a seven year update is needed.

The bounds taken from the two reports are converted into sterling using the exchange rates given at the beginning of January 1987, that is, £0.7241 = 1 ECU and £0.678 = $1.00, i.e. £1.00 = $1.475. The minimum is taken of the lower

bounds of both UNIPEDE and OECD studies. Likewise the maximum is taken of

the upper bounds. The minimum and maximum are re-adjusted so that the ranges

sufficiently surround the UK input parameters to permit a good sensitivity analysis.

In addition, annual O&M and fuel escalation rates are assumed. Carbon tax rates

are taken from recent EC (Commission of European Communities, 1992)

discussions.

Table A.1 gives the revised bounds in £-equivalent.

Table A.1 Consolidated Range

Constant @ Jan 1987              Coal                          Nuclear
FACTOR                  Lower Bound   Upper Bound    Lower Bound   Upper Bound
efficiency                     25 %          45 %           n/a           n/a
discount rate                   4 %          15 %           4 %           15 %
life                       13 years      50 years      13 years      45 years
load factor                    63 %          80 %          65 %          85 %
investment cost             £475/kW     £1,211/kW       £868/kW     £1,725/kW
annual fixed O&M cost     £6.92/kWa    £59.74/kWa       £16/kWa     £48.8/kWa
fuel cost                  £0.89/GJ      £2.24/GJ     £2.10/MWh    £11.66/MWh
O&M escalation              -1 % pa        3 % pa       -1 % pa        3 % pa
fuel escalation             -1 % pa        3 % pa       -1 % pa        3 % pa
carbon tax                  0 $/BOE      10 $/BOE           n/a           n/a

While these ranges may appear too large for analysing the sensitivities of UK

parameters, it is more justifiable to reduce the range than to expand it later. The

analysis has to be qualified in the international context.

A.5 Sensitivity Analysis

One of the main motivations of this study is to understand the factors that influence

the cost of electricity generation. The input parameters are assumed constant

throughout the plant life to simplify the NPV annuity method of computation. The

ranges selected from the UNIPEDE and OECD reports are applied to UK base

cases.

A.5.1 Calculation Method

By assuming constant parameters, the average discounted or levelised cost of

electricity generation reduces to a method of annuity calculations. A handy

spreadsheet function (PMT) returns the annuity or the equivalent constant annual

amount that arises for a given number of years. In other words,

Annual Value of a lump sum amount to be spread over a given period at a given discount rate =
PMT(discount rate %, years in period, present value of total payment)

This PMT function is useful in determining the equivalent annual amount of

investment cost.

cost of electricity generation due to investment
   = PMT(discount rate, life, investment) / (load factor % * 8,760 hours in a year)

The remaining components of levelised cost are calculated as follows:

cost of electricity generation due to O&M
   = fixed annual O&M cost / (load factor % * 8,760 hours in a year)

cost of electricity generation due to fuel
   = fuel cost in £/GJ * 3.6 GJ/MWh heat-to-energy conversion / efficiency %
   (for nuclear, the fuel cost is already expressed in £/MWh)

carbon tax component
   = carbon tax ($/BOE) * fuel equivalent factor (BOE/kg CO2) * emission factor (kg CO2/GJ) * 3.6 GJ/MWh * exchange rate (£/$) / efficiency %

therefore, the average discounted levelised cost = investment + O&M + fuel + tax
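These component formulas can be collected into a small spreadsheet-style calculation. Using the UNIPEDE UK coal inputs from table A.2, it reproduces the base cost of about 3.56 pence/kWh given in table A.3. This is an illustrative sketch: the function and variable names are mine, not the study's.

```python
def pmt(rate, years, present_value):
    # Equivalent constant annual amount, as returned (sign apart) by the
    # spreadsheet PMT function.
    return present_value * rate / (1 - (1 + rate) ** -years)

def levelised_cost_p_per_kwh(investment, om, fuel_gj, efficiency,
                             rate, life, load_factor,
                             tax_boe=0.0, emission=88.11, fx=0.678):
    hours = load_factor * 8760                  # kWh generated per kW-year
    inv = pmt(rate, life, investment) / hours   # £/kWh from investment
    om_c = om / hours                           # £/kWh from fixed O&M
    fuel = fuel_gj * 3.6 / efficiency / 1000    # £/GJ -> £/kWh
    tax = (tax_boe * 0.00239 * emission * 3.6 * fx / efficiency) / 1000
    return 100 * (inv + om_c + fuel + tax)      # £/kWh -> pence/kWh

# UNIPEDE UK coal: £891/kW, £30.59/kWa, £1.65/GJ, 38.5%, 8%, 40 years, 74%.
coal = levelised_cost_p_per_kwh(891, 30.59, 1.65, 0.385, 0.08, 40, 0.74,
                                tax_boe=3)
print(round(coal, 2))   # close to the 3.56 pence/kWh of table A.3
```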

When annual escalation rates for fuel and O&M are introduced, the existing formula must be multiplied by a levelling factor to discount the compounded escalation rates back to present value terms.

The general notation for this levelling factor is

   [r(1+r)^T / ((1+r)^T - 1)] * [k(1 - k^T) / (1 - k)]

where r = discount rate in %, T = life in years, and k = (1+e)/(1+r), where e = annual escalation rate in %. This is the closed form of the same expression in sigma notation:

   [sum from t=1 to T of (1+e)^t / (1+r)^t] / [sum from t=1 to T of 1 / (1+r)^t]
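The closed form can be checked numerically against the sigma-notation ratio; the parameter values below are arbitrary, and the sketch assumes e differs from r (k = 1 would divide by zero).

```python
def levelling_factor(r, e, T):
    # Closed form: [r(1+r)^T / ((1+r)^T - 1)] * [k(1 - k^T) / (1 - k)],
    # with k = (1+e)/(1+r).  Assumes e != r.
    k = (1 + e) / (1 + r)
    crf = r * (1 + r) ** T / ((1 + r) ** T - 1)
    return crf * k * (1 - k ** T) / (1 - k)

def levelling_factor_sum(r, e, T):
    # Sigma notation: sum of (1+e)^t/(1+r)^t over sum of 1/(1+r)^t, t = 1..T.
    num = sum((1 + e) ** t / (1 + r) ** t for t in range(1, T + 1))
    den = sum(1 / (1 + r) ** t for t in range(1, T + 1))
    return num / den

r, e, T = 0.08, 0.03, 25
print(levelling_factor(r, e, T), levelling_factor_sum(r, e, T))  # equal, up to rounding
```

With zero escalation the factor collapses to 1, so the unescalated formula is recovered as a special case.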

This calculation method is derived from the levelised bus-bar method explained in

IAEA (1984). It is broadly consistent with the methods used in UNIPEDE and

OECD, which follow the convention adopted by the Commission of European

Communities (EUR 5914 of Commission of European Communities, 1990).

The average discounted cost offers several advantages in comparing future power

plants. The ratio of discounted total generation cost over the plant's entire lifetime

to the discounted sum of electricity generated over the same period is independent

of the date of discounting and the current or future inflation rate. All figures are

therefore real, that is, free of inflation.

A.5.2 UK Parameters

Figures for UK Coal and Nuclear were submitted to both UNIPEDE and OECD

reports. In the UNIPEDE study, data was provided for two units of 840 MW coal

plant, cooled by sea water and equipped with sulphur and nitrogen oxide removal.

The nuclear power plant is a 1,155 MW Pressurised Water Reactor (PWR) which

includes one reactor of total capacity and two turbo generators of half capacity.

Given the date of submission, it is probably the data for Sizewell B. Meanwhile,

data for Hinkley C PWR was provided for the OECD study, and the figures for this

1,175 MW reactor are consistent with those submitted in October 1988 to the

public inquiry.

Converting the units from ECU and US$ into £ equivalent at the beginning of

January 1987 yields the following input values for the UK.

Table A.2 UK Parameters

Constant @ Jan 1987              Coal                          Nuclear
FACTOR                     UNIPEDE         OECD         UNIPEDE         OECD
efficiency                 38.50 %      26.06 %            n/a           n/a
discount rate                  8 %          8 %            8 %           8 %
life                      40 years     45 years       35 years      40 years
load factor                   74 %         75 %           72 %          75 %
investment cost            £891/kW      £892/kW      £1,578/kW     £1,543/kW
annual fixed O&M cost   £30.59/kWa   £23.73/kWa     £26.07/kWa    £22.10/kWa
fuel cost                 £1.65/GJ     £0.89/GJ      £5.36/MWh     £4.47/MWh
O&M escalation              0 % pa       0 % pa         0 % pa        0 % pa
fuel escalation             0 % pa       0 % pa         0 % pa        0 % pa
carbon tax                 3 $/BOE      3 $/BOE            n/a           n/a

These costs and assumptions were made in 1987 and 1988, after the Sizewell B

inquiry, during the Hinkley C inquiries, before privatisation, and before the decision

to retain nuclear in the public sector. These figures should be adjusted in light of

privatisation in 1990, the demise of new nuclear plant until the 1994 nuclear review

and the current dash for gas phenomenon. Instead of using 5% discount rate

given in UNIPEDE reports, 8% is selected to reflect the onset of privatisation. For

purposes of modelling insight, only one set of values is necessary, thus UNIPEDE

is retained while OECD values are dropped for the rest of the study. The base

costs for the UK figures given above are summarised in table A.3.

Table A.3 Base Costs for the UK

Constant @ Jan 1987        Coal       Nuclear
pence/kWh               UNIPEDE       UNIPEDE
investment                 1.15          2.15
O&M                        0.47          0.41
fuel                       1.54          0.54
carbon tax                 0.40          0.00
Total Cost                 3.56          3.10

Alternatively, the contribution to final cost can be viewed in figure A.6.

Figure A.6 Contribution to Final Cost

[Stacked bar chart comparing coal and nuclear: the contributions of investment, fuel, O&M, and the carbon tax component to final cost, in units of 0.1 pence/kWh.]

Although initially coal is more expensive than nuclear, the choice of discount rate can change the relative attractiveness of coal and nuclear. Figure A.7 depicts the effects of varying the discount rate. The slope for nuclear plants is steeper than that for coal plants because the investment costs are considerably greater and the relative magnitudes of investment and fuel costs are reversed for the two plants. In the base case with the $3 carbon tax applied to coal, the cross-over or breakeven discount rate occurs at approximately 13%. Only then does nuclear become more expensive than coal.

Figure A.7 UK Coal vs Nuclear Trade-off Curves with $3 Carbon Tax

[Chart: levelised cost (2.00 to 6.00 pence/kWh) against discount rate (4 to 24%) for the base case with carbon tax at $3/BOE; the coal and nuclear curves cross at approximately 13%.]

If this carbon tax is increased to $10/BOE, then coal will definitely be more

expensive than nuclear. As seen in figure A.8, even at the unlikely discount rate of
20%, coal is still more expensive than nuclear.

Figure A.8 UK Coal vs Nuclear Trade-off Curves with $10 Carbon Tax

[Chart: levelised cost (2.00 to 6.00 pence/kWh) against discount rate (5 to 20%) for the base case with carbon tax at $10/BOE; the coal curve lies above the nuclear curve throughout.]

Discount rates are particularly significant in capital-intensive projects such as coal and nuclear plants. This is seen in the increasing impact of the discount rate as investment cost rises.

A.5.3 Sensitivity to Range

By applying the derived ranges from the two reports to the calculations of UK base

values, it is possible to find the impacts of different factors. The tornado diagram

of figure A.9 shows the importance and impact of various costs in descending

order.

Figure A.9 Coal

[Tornado diagram, base = 3.56 pence/kWh; horizontal axis 2.00 to 5.00 pence/kWh. Bars in descending order of impact: discount rate 4 to 15% (3.11 to 4.48 pence/kWh); efficiency 25 to 45.45%; carbon tax 0 to 10 $/BOE; imported fuel cost £0.89 to £2.24/GJ; investment £475.28 to £1,211.42/kW; fuel escalation rate -1 to 3% pa; fixed O&M cost £6.92 to £59.74/kWa; life 13 to 50 years; load factor 63 to 80%; fixed O&M escalation rate -1 to 3% pa; carbon tax exchange rate £0.71 to £0.50 per $.]

Each bar denotes the range of costs computed by varying its corresponding factor

without changing the other parameters. In the coal example, lowering the discount

rate from the base case of 8% to 4% lowers the levelised cost from 3.56 to 3.11

pence/kWh. Similarly, increasing the discount rate to 15% while keeping all other

factors the same, raises the cost to 4.48 pence/kWh.
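The one-way variation behind each bar can be sketched by re-evaluating the levelised cost with a single parameter moved to its bounds while the rest stay at base values. The cost function below condenses the formulas of section A.5.1; the names and defaults are my own shorthand for the UNIPEDE UK coal case.

```python
def pmt(rate, years, pv):
    # Equivalent constant annual amount (spreadsheet PMT, sign apart).
    return pv * rate / (1 - (1 + rate) ** -years)

def coal_cost(rate=0.08, investment=891, om=30.59, fuel=1.65,
              eff=0.385, life=40, lf=0.74, tax=3, fx=0.678):
    # Levelised cost in pence/kWh; defaults are the UNIPEDE UK coal base values.
    hours = lf * 8760
    return 100 * (pmt(rate, life, investment) / hours + om / hours
                  + (fuel * 3.6 / eff) / 1000
                  + (tax * 0.00239 * 88.11 * 3.6 * fx / eff) / 1000)

# Discount-rate bar of the tornado diagram: all else held at base values.
low, base, high = coal_cost(rate=0.04), coal_cost(), coal_cost(rate=0.15)
print(round(low, 2), round(base, 2), round(high, 2))  # ~3.11, ~3.57, ~4.48
```

Repeating this for each parameter in turn, and sorting the resulting ranges by width, produces the ordering shown in the diagram.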

As discovered earlier, the effect of the discount rate is accentuated by higher capital costs. The nuclear case in figure A.10 shows the overwhelming importance of the discount rate as opposed to investment, life, and other factors. Costs are particularly sensitive to discount rates if investment is high. Fixed O&M costs, by comparison, have minimal effect. Efficiency rates and carbon taxes do not apply in the nuclear case.

Figure A.10 Nuclear

[Tornado diagram, base = 3.10 pence/kWh; horizontal axis 2.00 to 5.00 pence/kWh. Bars in descending order of impact: discount rate 4 to 15% (2.29 to 4.73 pence/kWh); investment £868.20 to £1,725.53/kW; life 13 to 45 years; fuel cost £2.10 to £11.66/MWh; load factor 65 to 85%; fixed O&M cost £16 to £48.82/kWa; fuel escalation rate -1 to 3% pa; fixed O&M escalation rate -1 to 3% pa; efficiency, carbon tax, and carbon tax exchange rate not applicable.]

In reality, discount rates and lifetimes do not fluctuate. These factors are decisions

undertaken by the generator. Such controllable variables should not be compared

on the same basis as other factors, which are highly affected by external

circumstances.

Deterministic sensitivity analysis as a study of variability uncovers the relative

importance of factors. Ranges give more information than point estimates. How likely a parameter is to take a value within its range requires the additional

dimension of likelihood, frequency, or probability. This additional information of

probabilities can be assessed through risk analysis.

A.6 Risk Analysis

A sensitivity analysis tells us how much the output varies with variations in the

input values but gives no indication of relative likelihood. A one-way analysis

shows the effect of varying one parameter at a time. A two-way analysis shows the

result of varying two parameters at a time. Useful insights can be drawn provided

the variables are independent of each other. The computational burden grows exponentially as more parameters are varied simultaneously, i.e. the curse of dimensionality.

A better representation of uncertainty can be achieved by describing the likelihood of a variable taking on a specific value. Biases in the input ranges can be

represented by probability distributions, where some values are more likely than

others. The introduction of probability into this analysis describes the ranges in a

more informative way. In the simplest case, every variable has an equal chance of

taking any value in a range. Since the previous sensitivity analysis was based on

taking a range about a base value, a more informative representation of uncertainty

would be the triangular distribution, in which the base value is most likely and has a

greater chance of occurring than any other value, with the lower and upper bounds

having the least chances of occurring.

A.6.1 Methodology

First of all, factors are distinguished as either decision variables or uncontrollable external events. Decision variables are treated in the same manner as in sensitivity analysis, that is, changing one value at a time, whereas external variables are approximated using probability distributions and varied simultaneously. For this study the discount rate is the only decision variable; the others are external variables. Life is fixed for each type of plant.

As described in chapter 3, the simulation approach to risk analysis is preferable to

the analytical approach in circumstances where the input distributions are not

symmetric or standard and where computing facilities are available. The Latin

Hypercube Sampling (LHS) method was chosen as it performs better than the

Monte Carlo method, which tends to take much longer to approximate a given

distribution. LHS divides the distribution into as many equal-probability intervals as the number of iterations selected. One sample is then drawn at random within each interval, without replacement. This complete coverage of the distribution through stratification avoids the clustering problems found in the Monte Carlo method, and LHS therefore converges more quickly than completely random sampling.
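The stratification just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the study's code: the unit interval is cut into as many equal strata as there are iterations, one uniform draw is taken per stratum, and each draw is mapped through the inverse CDF of a triangular distribution (the helper names here are invented).

```python
import random

def triangular_inv_cdf(u, a, b, c):
    """Inverse CDF of a triangular(a, b, c) distribution, where c is the mode."""
    if u < (c - a) / (b - a):
        return a + ((b - a) * (c - a) * u) ** 0.5
    return b - ((b - a) * (b - c) * (1 - u)) ** 0.5

def latin_hypercube(n, a, b, c, rng):
    """One stratified draw per equal-probability interval, without replacement."""
    points = [(i + rng.random()) / n for i in range(n)]  # one u per stratum
    rng.shuffle(points)  # remove the ordering the strata impose
    return [triangular_inv_cdf(u, a, b, c) for u in points]

rng = random.Random(42)
sample = latin_hypercube(600, 65.0, 85.0, 75.0, rng)
# Stratification guarantees full coverage of the range without clustering:
print(min(sample) < 66, max(sample) > 84)  # -> True True
```

Because exactly one point falls in each probability stratum, the extreme tails are always represented, which is why far fewer iterations are needed than with plain Monte Carlo sampling.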

The number of iterations required for accurate sampling depends on the number and types of input probability distributions to be sampled. Beyond a certain threshold, the output distributions do not get any smoother. This can also be validated by using different random number generator seeds. Sampling at 300 iterations gave jagged risk profiles, so the number of iterations was increased to 600 to reach smoothness.
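One way to check this smoothness criterion, sketched here purely for illustration with a stand-in output distribution, is to build the risk profile twice with different seeds and compare bin frequencies; the disagreement shrinks as the iteration count grows.

```python
import random

def risk_profile(n, seed, bins=8, lo=2.2, hi=5.7):
    """Histogram (relative frequencies) of a toy simulated cost distribution."""
    rng = random.Random(seed)
    counts = [0] * bins
    for _ in range(n):
        x = rng.triangular(lo, hi, 3.2)  # stand-in for one simulated cost outcome
        k = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[k] += 1
    return [c / n for c in counts]

# Largest disagreement between two seeds, at low and high iteration counts:
d300 = max(abs(p - q) for p, q in zip(risk_profile(300, 1), risk_profile(300, 2)))
d6000 = max(abs(p - q) for p, q in zip(risk_profile(6000, 1), risk_profile(6000, 2)))
print(round(d300, 3), round(d6000, 3))
```

When the seed-to-seed disagreement is small relative to the bin heights, the profile can be considered smooth enough for comparison purposes.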

A.6.2 Revised Values

The previous sensitivity analysis already established the ranges and the ranking of

important factors. Now it is necessary to use a coherent set of base values for the

different plant types. Rather than using base values from both reports, we used

values from UNIPEDE, which correspond closely to the values selected in a report

by Bunn and Vlahos (1989). Values for combined cycle gas-turbine plants have

been approximated. To account for the post-privatisation period, discount rates of 8% have been used.

Unlike the previous analysis, the range is considered for the construction cost

rather than the investment. This establishes the dependence of the interest during

construction (IDC) upon the discount rate and the construction cost. Here, the

discount rate is represented by the interest rate. The IDC is made a function of the

base IDC, old interest rate and the new interest rate, as follows:

new interest during construction = construction cost * [(1 + new interest rate)^n - 1]

where

n = log(1 + old interest during construction / old construction cost) / log(1 + old interest rate)
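Read in words: the ratio of logarithms recovers the effective compounding exponent implied by the old IDC, and that exponent is then applied at the new interest rate. A hedged transcription in Python follows; the figures used are invented for illustration, not taken from the study.

```python
import math

def revised_idc(construction_cost, old_idc, old_rate, new_rate):
    """Rescale interest during construction (IDC) to a new interest rate.

    The implied exponent n satisfies old_idc = cost * ((1 + old_rate)**n - 1).
    """
    n = math.log(1 + old_idc / construction_cost) / math.log(1 + old_rate)
    return construction_cost * ((1 + new_rate) ** n - 1)

# Illustrative figures: a 1,000/kW plant whose IDC was 150/kW at a 5% rate,
# re-evaluated at the post-privatisation 8% rate.
idc_8 = revised_idc(1000.0, 150.0, 0.05, 0.08)
print(round(idc_8, 1))
```

A useful sanity check on the formula is that rescaling to the same rate returns the original IDC unchanged.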

The external variables of load factor, construction cost, O&M, and fuel cost are varied simultaneously according to their probability distributions. These sets of values are analysed for different discount rates and economic lives. For coal and gas, the carbon tax is also varied across three cases: no tax, the minimum $3 tax, and the maximum $10 tax. Again, the incremental effect is not captured in this calculation method.

A.6.3 Nuclear

Before the privatisation of the UK electricity supply industry, a 5% discount rate was considered adequate. Privatisation introduced higher business risk and higher expected returns on projects. To compare nuclear on an equivalent basis, its sensitivity to higher discount rates is required, especially in light of the discussions in the House of Commons inquiry (Energy Committee, 1990) into the cost of nuclear power. Two discount rates, at a 40-year life, are selected: 8% and 10%. The external variables follow triangular distributions around the base values. Simulation was performed on the following values for nuclear:

Table A.4 Simulation Parameters for Nuclear

Factor                          Base value (most likely)   Treatment            Expected value
discount rate                   8%                         sensitivity          10%
life                            40 years                   fixed
load factor                     75%                        triang: 65, 85%      75%
construction cost               1,254 /kW                  triang: 868, 1725    1,282.33 /kW
provision for decommissioning   10.14 /kW                  triang: 0, 20        10.04667 /kW
annual fixed O&M cost           26.8 /kWa                  triang: 16, 48.82    30.54 /kWa
fuel cost                       5.4 /MWh                   triang: 2.1, 11.66   6.38667 /MWh
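The expected values in this table are consistent with the triangular assumption: the mean of a triangular distribution with minimum a, mode c, and maximum b is (a + b + c)/3. A quick check in Python, printed to two decimal places:

```python
# (minimum, most likely, maximum) for each simulated nuclear factor in table A.4
factors = {
    "load factor (%)":            (65.0, 75.0, 85.0),
    "construction cost (/kW)":    (868.0, 1254.0, 1725.0),
    "decommissioning (/kW)":      (0.0, 10.14, 20.0),
    "fixed O&M (/kWa)":           (16.0, 26.8, 48.82),
    "fuel cost (/MWh)":           (2.1, 5.4, 11.66),
}
for name, (a, c, b) in factors.items():
    # matches the expected value column (to 2 d.p.)
    print(name, round((a + b + c) / 3, 2))
```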

Note that provision for decommissioning is extracted from the investment rather than taken as a lump sum. The triangular distribution with minimum value 0, most likely value 10.14, and maximum 20 should ideally be discounted at a lower rate than the rest of the project, as is current practice. Quite a controversy surrounds this treatment, reflecting an important source of risk; experience with the decommissioning of Magnox stations gives evidence of this. Isolating this factor makes the uncertainty explicit. The result after 600 iterations is depicted in the chart below.

Figure A.11 Risk Profiles for Nuclear

[Chart: risk profiles for nuclear at 8%, 40 yrs and at 10%, 40 yrs; probability (0 to 0.14) against levelised cost in 0.1 pence/kWh (22 to 57).]

The vertical axis gives the probability or frequency, while the horizontal axis gives the output range expressed in 0.1 pence/kWh. The risk profile at the 10% discount rate overlaps about half of the 8% case. Reading from the chart, we can say with 100% probability that nuclear will not cost more than 4.5 pence/kWh if calculated at the 8% discount rate. Compare this with cost estimates given at the Energy Committee's (1990) inquiry into the cost of nuclear power: estimates for Hinkley Point C varied from 4.31 pence/kWh to 7.12 pence/kWh at 1987 prices. The differences were due to public versus private sector assumptions, a so-called protection against uncertainty, and adjustment for inflation. In fact, alternative estimates (other than the CEGB's) for private sector prices ranged from 4.91 pence/kWh at an 8% discount rate to 5.62 pence/kWh at a 10% discount rate. Our simulation is not far off.

A.6.4 Coal

The uncertainty in carbon tax is reflected discretely: $3 tax, $10 tax, or no tax. In current debate the tax is expressed in US dollars, a currency that has been highly susceptible to exchange rate fluctuations. Therefore an external variable for the exchange rate is built into the model for coal and gas plants.

Table A.5 Simulation Values for Coal

Factor                          Base value (most likely)   Treatment               Expected value
efficiency                      38.5%                      fixed
discount rate                   8%                         sensitivity             10%
life                            45 years                   fixed
load factor                     77%                        triang: 63, 80          73.33%
construction cost               761 /kW                    triang: 475, 1211       815.6667 /kW
annual fixed O&M cost           33 /kWa                    triang: 6.92, 59.74     33.22 /kWa
fuel cost                       1.65 /GJ                   triang: 0.8882, 2.23    1.5894 /GJ
carbon tax                      3 $/BOE                    sensitivity to no tax
carbon tax /$ exchange rate     0.678 /$                   triang: 0.5, 0.714      0.630667 /$

The risk profiles for coal with the $3 carbon tax are shown in figure A.12. At the higher discount rate of 10%, more uncertainty is seen in the larger output range. At the 8% discount rate, the most likely cost is 3.7 pence/kWh (the peak of the risk profile); at 10%, the most likely cost lies between 4 and 4.5 pence/kWh.

Figure A.12 Risk Profiles for Coal

[Chart: coal with $3/BOE carbon tax; risk profiles at 8%, 45 yrs and at 10%, 45 yrs; probability against levelised cost in 0.1 pence/kWh (22 to 57).]

A.6.5 Gas

In the last five years, the UK has seen a build-up of natural gas fired plant (CCGT), which has the advantages of high efficiency (typically 45 to 55%), lower carbon dioxide emissions, shorter construction lead time, and modularity of unit size. For these reasons, it is included for completeness. The following base values are taken from Bunn and Vlahos (1989), with the ranges subsequently adjusted to the UNIPEDE and OECD reports and modified by current views.

Table A.6 Simulation Values for Gas

Factor                          Base value (most likely)   Treatment                         Expected value
efficiency                      45%                        fixed
discount rate                   8%                         sensitivity                       10%
life                            40 years                   fixed
load factor                     80%                        triang: 70, 90%                   80%
construction cost               400 /kW                    triang: 350, 450                  400 /kW
annual fixed O&M cost           25 /kWa                    triang: 23, 27                    25 /kWa
fuel cost                       2.3 /GJ                    triang: 2, 2.6                    2.3 /GJ
carbon tax                      3 $/BOE                    sensitivity to zero and $10 tax
carbon tax /$ exchange rate     0.678 /$                   triang: 0.5, 0.714                0.630667 /$

The greatest uncertainty lies in the fuel price, as natural gas is a premium fuel. With the build-up of gas turbines in this country, there is speculation that the fuel price may rise with increasing demand. Up to 60% overcapacity is expected in the next decade, according to the Financial Times (21 Sept 1992). Investment costs are generally very low because the construction period is relatively short compared to coal and nuclear, which keeps the interest during construction very low. This is the main reason why gas costs are not as sensitive to the discount rate as the other two types of plant. Risk profiles in figure A.13 show little difference between the 8% and 10% cases, both lying between 2.7 and 3.3 pence/kWh.

Figure A.13 Risk Profiles for Gas

[Chart: gas with $3/BOE carbon tax; risk profiles at 8%, 40 yrs and at 10%, 40 yrs; probability against levelised cost in 0.1 pence/kWh (22 to 57).]

A.6.6 Trade-off Curves

The risk profiles for all three types of plants are combined for a ranking of plant

types. Such a comparison is reasonable as all simulations were kept independent.

In the base case without carbon tax, gas is the cheapest option, with nuclear and

coal in competition. The overlap of risk profiles in figure A.14 shows a small

chance that gas may be more expensive than coal and nuclear.
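Stochastic dominance between such risk profiles can be checked quantile by quantile: with lower cost preferred, one option first-order dominates another if, at every cumulative probability level, its cost is no higher. A toy check in Python, using illustrative triangular cost samples that stand in for the study's simulated profiles:

```python
import random

def fosd(cheaper, dearer):
    """True if `cheaper` first-order stochastically dominates `dearer`
    (lower cost is better): at every quantile, cheaper's cost is <= dearer's."""
    a, b = sorted(cheaper), sorted(dearer)
    return all(x <= y for x, y in zip(a, b))

rng = random.Random(7)
# Toy levelised-cost samples (pence/kWh); the parameters are illustrative only.
gas = [rng.triangular(2.7, 3.3, 3.0) for _ in range(600)]
coal = [rng.triangular(3.4, 5.2, 3.7) for _ in range(600)]
print(fosd(gas, coal))  # -> True: gas lies wholly to the left of coal here
```

When the profiles overlap, as in figure A.14, the quantile comparison fails in both directions and neither option dominates outright.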

Figure A.14 Trade-off Curves for Coal, Nuclear, and Gas (no tax)

[Chart: cheapest case; risk profiles for gas (8%, 40 yrs, no tax), coal (8%, 45 yrs, no tax), and nuclear (8%, 40 yrs); probability against levelised cost in 0.1 pence/kWh (22 to 57).]

A carbon tax levy such as that proposed by the EC would invariably favour the less

polluting plants. However, the high capital cost of tax-free nuclear makes it more

costly than gas with tax. The most likely case is presented in figure A.15. Here

coal with carbon tax becomes more expensive than nuclear power.

Figure A.15 Most Likely Case

[Chart: risk profiles for gas (8%, 40 yrs, $3 tax), coal (8%, 45 yrs, $3 tax), and nuclear (8%, 40 yrs); probability against levelised cost in 0.1 pence/kWh (22 to 57).]

In the extreme, i.e. most expensive, case, we apply the $10 carbon tax to gas and coal and assume the risks of nuclear power translate into a 10% discount rate. The results in figure A.16 show that the cost of nuclear is more uncertain than that of coal, as it spreads over a larger range: 3.0 to 5.7 pence/kWh (nuclear) compared with 3.5 to 5.7 pence/kWh (coal). Coal is still more expensive than nuclear and gas.

Figure A.16 Most Expensive Case

[Chart: risk profiles for gas (8%, 40 yrs, $10 tax), coal (8%, 45 yrs, $10 tax), and nuclear (10%, 40 yrs); probability against levelised cost in 0.1 pence/kWh (22 to 57).]

A.6.7 Impact of Carbon Tax

As stated earlier, the incremental nature of the proposed carbon tax is not modelled in this study. The true effect of such a tax lies somewhere between the $3 and $10 cases, in which a fixed amount is levied in every year of the project. When applied to coal, the risk profiles show the significance of a $10 tax: it could reasonably double the price of cheap coal-generated electricity, as seen in the following chart.

Figure A.17 Carbon Tax on Coal

[Chart: coal (8%, 40 yrs) with no tax, $3 tax, and $10 tax; probability against levelised cost in 0.1 pence/kWh (22 to 57).]

The effect of a $10 tax on gas is not as great as that on coal. This is due to the considerably lower emission factor as well as the higher plant efficiency. See the overlaps in figure A.18.

Figure A.18 Carbon Tax on Gas

[Chart: gas (8%, 40 yrs) with no tax, $3 tax, and $10 tax; probability against levelised cost in 0.1 pence/kWh (22 to 57).]
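The arithmetic behind this differential effect can be sketched as follows. The tax per kWh sent out scales with the fuel burned per unit of output (the reciprocal of efficiency) and with the fuel's carbon emission factor relative to oil, the reference fuel for a $/BOE levy. The emission factors and the barrel-to-kWh conversion below are round illustrative figures, not values from the study:

```python
# Carbon tax in $/BOE converted to pence per kWh of electricity sent out.
BOE_KWH_THERMAL = 1700.0   # approx. thermal kWh per barrel of oil equivalent
EXCHANGE = 0.678           # pounds per dollar (base value used in this study)

def tax_pence_per_kwh(tax_usd_boe, efficiency, emission_rel_oil):
    fuel_kwh = 1.0 / efficiency           # thermal kWh burned per kWh output
    boe = fuel_kwh / BOE_KWH_THERMAL      # oil-equivalent barrels per kWh output
    return tax_usd_boe * EXCHANGE * emission_rel_oil * boe * 100  # pence

# Illustrative emission factors relative to oil: coal ~1.25, gas ~0.75.
coal_tax = tax_pence_per_kwh(10.0, 0.385, 1.25)
gas_tax = tax_pence_per_kwh(10.0, 0.45, 0.75)
print(round(coal_tax / gas_tax, 2))  # -> 1.95
```

With these placeholder factors, coal bears roughly twice the tax per kWh of output; the conversion constants cancel in the ratio, which depends only on the emission factors and efficiencies.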

A.7 Summary and Conclusions

This study reveals the factors that influence the cost of electricity generation as a

precursor to wider issues in modelling uncertainty. A top down approach begins

by focusing on the major components of cost and isolating the important drivers.

Base values for UK coal and nuclear plant are extracted from OECD and UNIPEDE reports and further modified according to the Energy Committee (1990). The simplified calculation method is consistent with the levelised methods of the IAEA, UNIPEDE, and OECD because it uses constant values and minimal escalation rates.

Nuclear, coal, and gas plants are compared. A ranking of technologies shows that gas (CCGT) is the cheapest of all three, even with a carbon tax levy.

At low discount rates, fuel cost has a greater impact than investment costs. At

high discount rates, the reverse is true. In practice, a firm faces the greatest

uncertainty in fuel prices.

A simple risk analysis using very crude uncertainty approximations provides

greater insight than a rigorous sensitivity analysis which gives no indication of

relative likelihood.

Although realistic results are important, this study focussed primarily on methodological issues. To compare with the current scene, the 1987 values used in this study must be updated. We have used risk analysis with the bold assumption of parametric independence, which allowed all external variables to be simulated simultaneously. This assumption not only disregards the dependence between factors but also takes a single-stage view of the problem rather than reflecting the multi-stage nature of capacity planning.

To achieve a more realistic and complete representation of power plant economics, this study can be extended in three directions, as shown in figure A.19. Greater disaggregation, i.e. decomposing the aggregate variables into their components, not only improves completeness of modelling but also allows a closer examination of detail. As the number of parameters increases, so do the inter-variable dependence and interaction effects. Meanwhile, the nature of the values must be extended from constant to varying. Most parameters exhibit yearly fluctuations, while others vary even more frequently during the operating life of a plant, so seasonality must be incorporated in some form. Ultimately, to understand uncertainty, the level of modelling should be extended along all three directions.

Figure A.19 Modelling Directions

[Diagram: three axes of model extension. NATURE OF VALUES: constant, escalation factors, varying according to profile. NUMBER OF PARAMETERS: few (macro) to many (detailed). DEGREE OF UNCERTAINTY: deterministic to probabilistic.]

Disaggregation means breaking down large components into smaller ones for greater manageability or to achieve a greater level of detail. For example, to understand investment cost, we must look at its components: construction cost, interest during construction, desulphurisation or denitrification fittings if applicable, provision for decommissioning, and other capital costs. Similarly, operations and maintenance should be split into its fixed and variable components, unlike the assumption of fixed O&M in this study.

Interest during construction can be viewed as a financing cost or an investment

cost as it reflects the cost of capital. The interest rates used in the calculation

depend on the interpretation and the subsequent risks involved. If business risk is

not incorporated in the discount rate, it should be incorporated elsewhere. The

interest rate also depends on the size and economic life of the project. The

construction period affects the size of the IDC, particularly in the form of

construction cost draw-downs. Interest during construction can be further

analysed by varying the interest applied to the investment schedule prior to the

commissioning date.
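The draw-down effect can be illustrated directly: spread the construction cost over the build years and compound each year's spend up to the commissioning date. The schedules and rate below are invented for illustration; the point is that a long build locks in far more IDC per unit of cost than a short CCGT-style build.

```python
def idc_from_drawdown(drawdowns, interest_rate):
    """Interest during construction for a schedule of yearly spends.

    drawdowns are listed in construction order; the first year's spend
    accrues interest for the full build period up to commissioning.
    """
    years = len(drawdowns)
    total = 0.0
    for t, spend in enumerate(drawdowns):
        # money spent earlier accrues interest for longer
        total += spend * (1 + interest_rate) ** (years - t)
    return total - sum(drawdowns)

# A 6-year coal-style build versus a 2-year CCGT-style build, both at 8%.
coal_idc = idc_from_drawdown([100, 150, 200, 200, 150, 100], 0.08)
ccgt_idc = idc_from_drawdown([200, 200], 0.08)
print(round(coal_idc, 1), round(ccgt_idc, 1))  # -> 286.1 49.3
```

Here the long build accrues IDC of about 32% of its construction cost against about 12% for the short one, which is consistent with the earlier observation that CCGT costs are relatively insensitive to the discount rate.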

Actual utilisation of a power plant depends on its planned and unplanned down

time and the merit order. Strictly speaking, utilisation should come from the

combined effects of availability and load factor. In this case, load factor is given by

the merit order of operations, typically determined by the fixed and variable costs.

Ignoring emission constraints, plants with high fixed costs and low variable costs

are loaded before those with low fixed cost but high running costs. Alternatively,

we can introduce demand by way of load distribution curves, which are then

aggregated and averaged to give load duration curves.
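The merit-order logic described here reduces to sorting plants by running cost and loading them until demand is met. A toy dispatch in Python, with capacities and costs that are illustrative (loosely echoing the bid prices in table B.3):

```python
def merit_order_dispatch(plants, demand_mw):
    """Load plants in ascending order of variable cost until demand is met.

    plants: list of (name, capacity_mw, variable_cost) tuples.
    Returns {name: mw_dispatched}.
    """
    dispatch = {}
    remaining = demand_mw
    for name, capacity, _cost in sorted(plants, key=lambda p: p[2]):
        taken = min(capacity, remaining)
        if taken > 0:
            dispatch[name] = taken
        remaining -= taken
        if remaining <= 0:
            break
    return dispatch

# Illustrative fleet: nuclear (high fixed, low variable cost) loads first,
# OCGT (low fixed cost, high running cost) is last in the merit order.
fleet = [("nuclear", 1200, 1.0), ("coal", 2000, 14.0),
         ("ccgt", 900, 14.1), ("ocgt", 300, 94.0)]
print(merit_order_dispatch(fleet, 3500))
# -> {'nuclear': 1200, 'coal': 2000, 'ccgt': 300}
```

The marginal plant (here the CCGT, run part-loaded) sets the load factors further down the order, which is why load factor is itself an output of the merit order rather than an independent input.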

The privatised power companies have the additional objective of profit

maximisation. It is in their interest to take every advantage of capital allowances,

tax shields on depreciation, and inflation accounting. Although this study has

examined the economics net of inflation and taxes, it is worthwhile to introduce

corporate tax and inflation rates to see the effects on cashflow planning.

From a technical perspective, this study has restricted the power plants to typical or generic ones according to the type of fuel used. In reality, there is no generic coal plant. Different grades of coal have different levels of carbon, sulphur, and heat content and, in turn, burn at different efficiencies and release different quantities of CO2, SOx, and other gases. Consequently, the resultant carbon taxes will vary. Plant efficiencies are also related to operational efficiencies and are reduced by FGD and other cleaning equipment.

The uncertainties at the back end of the nuclear fuel cycle and in the last stage of decommissioning are a major cause for concern. This decade will witness the decommissioning of the older nuclear reactors in the UK. Analysis of the treatment of the provision for decommissioning is therefore important, as it makes up a great proportion of total costs and carries future risks and responsibilities.

With the exception of fixed annual escalation rates, all values have been kept

constant throughout the economic life of the plant. This is an unrealistic

assumption as it does not allow fluctuations in load factor and fuel prices. To

account for these fluctuations, the levelised cost approach must be expanded to

handle yearly cashflow calculations so that, at the very least, yearly fluctuations can

be incorporated. Some parameters exhibit annual patterns, e.g. seasonality in

availability. Some vary constantly, e.g. spot fuel prices. Utilisation depends on

demand which varies according to the season and time of day.

Economies of scale are a non-linearity that can be modelled by quadratic functions. Following a detailed causal analysis and an understanding of the inter-relationships between factors over their entire ranges, these effects can be modelled by fitting suitable equations.
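As a minimal illustration of such curve fitting, a quadratic can be passed exactly through three observed (size, unit cost) points; the data below are invented and the helper is not from the study.

```python
def quadratic_through(points):
    """Coefficients (a, b, c) of y = a*x**2 + b*x + c through three points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

# Illustrative unit costs (/kW) falling with plant size (MW):
a, b, c = quadratic_through([(300, 950.0), (600, 820.0), (1200, 760.0)])
print(round(a * 900**2 + b * 900 + c, 1))  # interpolated cost at 900 MW -> 756.7
```

With more than three observations, the same quadratic form would be fitted by least squares rather than exact interpolation.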

This pilot study has established the feasibility of model replication of sensitivity and

risk analyses with available desk-top computing tools. The incremental manner in

which details are added and complexity increased maintains the modelling at a

comfortable and manageable level.

By extracting values from different international sources, we are able to get a range of possible values for each factor, which not only improves upon traditional point estimates but also gives us insight into causality and broader perspectives. However, we had to take a view on which base values to use and which extreme values to include. These international reports, being also seven years out of date, do not give adequate detail on the economics of UK plant.

Next, we used sensitivity analysis to rank the factors, allowing us to focus on the important and highly sensitive variables, such as discount rates and capital cost. Tornado diagrams are helpful aids for this analysis, even though the ranking depends on the base values and the extreme values chosen. During this process, there emerged a need for guidelines on the number of factors sufficient for an analysis such as this. Without detailed knowledge of the relationships between factors, we were not able to utilise two-way sensitivity analysis.

Finally, we applied risk analysis to get the extra dimension of likelihood. The two-dimensional risk profiles allowed us to evaluate the stochastic dominance of different types of plant, although at this stage only by visual comparison of the risk profiles. However, the use of probability distributions introduced several new issues. Although we have used triangular distributions, the base and extreme values could equally define the finite normal, the beta, or the uniform distribution. We need to analyse which distributions are more appropriate or otherwise develop a distribution selection criterion. For factors without extreme bounds, it is unclear whether we should set a fixed or a variable percentage around the base values. Our assumption of total independence allowed model simplification and avoided having to consider multi-variate probability distributions; dependence between factors would lead to correlations between the distributions. Using the Latin Hypercube sampling method, we determined that 600 iterations on one random seed was sufficient to get smooth output profiles. We need to validate this by testing with other random seeds and more iterations. Finally, we have approached this case study from a neutral position, whereas the actual case would undoubtedly be assessed very differently by the regulator, Nuclear Electric, the major generators, independent power producers, and consumers.

Realistic modelling requires a thoroughness of approach, which is examined here along financial, technological, and modelling lines. The financial aspects relate to isolating the discount rates used in calculating the interest during construction, the provision for decommissioning, and the other costs. In other words, the cost of capital requires a much closer examination of what it represents. In the private sector, corporation taxes and inflation impact cashflow management and cannot be ignored. Business risks can also be modelled through a redefinition of the treatment and use of the discount rate. The technology issues relate to the treatment of utilisation rates, fuel types, plant types, etc.

The three directions of increasing the number of parameters, varying the values, and increasing the uncertainties represented in the problem are a mere framework for a thorough approach. Thoroughness lies in the consideration of all significant variables, close representation of reality, and systematic treatment of uncertainty. Thoroughness is a means of achieving greater completeness in modelling. However, this comes at the expense of manageability and tractability. It may be necessary to use other techniques to facilitate a greater level of detail and modelling capability. In this respect, model synthesis may provide the answer to greater completeness and manageability, i.e. meeting the conflicting criteria of comprehensiveness and comprehensibility.

APPENDIX B

Stage One: Three Archetypal Approaches


Data Consolidation, Model Replication, and Evaluation

This appendix describes three representative or archetypal modelling approaches, their method of replication, and their final evaluation. Comparison of these approaches is documented in Chapter 4. The modelling approaches make use of the techniques of scenario analysis, linear programming, mathematical decomposition, sensitivity analysis, risk analysis, and decision analysis. The main objectives of this stage of the two-stage modelling experiment are 1) to determine the limitations of each approach, 2) to assess the potential for synthesis, and 3) to evaluate model completeness with respect to uncertainties.

The next section describes the data used in the capacity planning optimisation programme, which is the core technique in the deterministic and probabilistic approaches. After this, the replication and evaluation of the three approaches are documented.

B.1 Data Consolidation

Accurate details of all power plants in the UK, especially the status of new plants, are highly confidential and proprietary. As a result, the task of data consolidation becomes one of reconciling different and conflicting sources of published information. Before presenting the consolidated data on all plants in the NGC system, we discuss some problems with obtaining scarce data and reconciling different sources of information. These problems introduce uncertainties into the data.

The main source of information for planning purposes is the annually published Seven Year Statement by the National Grid Company (NGC). This document is released in April each year and updated in July, October, and December. It contains the status of every plant, by ownership and technology type, in the NGC-operated transmission system, as well as new plants that will be connected in the future. However, it does not give details of actual load factors, capital costs, fuel costs, thermal efficiencies, and emission factors, and by the time it is released some details may have changed. Therefore it is necessary to consult other sources, which are listed in table B.1. In cases of conflict, the more reliable and recent publication is used.

Table B.1 Sources of Information

Code Title of Publication

0 Offer report

1 Inside Energy 26-9-1991

1.1 Comments on LBS Model Inputs, Oct 92 (Southern Electric)

1.2 Inside Energy, 22-04-1993, vol 3 no. 21

2.1 NGC Report: 7 Year Statement, March 1992

2.2 NGC Update, Oct 1992

2.3 NGC Report: 7 Year Statement, April 1993

2.4 NGC Update, July 1993

3 National Power Press Release or Annual Report

3.1 National Power News, Aug 1992

3.2 National Power News, Sep/Oct 1992

3.3 National Power News, Dec 1992

3.4 National Power News, April/May 1993

4 PowerGen Press Release or Annual Report

4.1 The GEN (PowerGen newsletter)

5 Electric Power International, Mar 93

6 Newspaper articles (date given)

7 Hoare Govett: Independent Generation, Aug 1991

8 Energy World (monthly magazine of the Institute of Energy, UK)

9 International Coal Report, 2 Oct 1992

10 White Paper: The Prospects for Coal Conclusions of the Government's Coal Review,
March 1993

11 Electricity Association: UK Electricity 1992

12 Power in Europe 23 Apr 1993, Issue No. 147 (ILEX UK Power Station Monitor)

Reported capacities vary widely, according to individual approximations of either registered or declared net capacities. Actual registered and declared net capacities may also change, making them difficult to monitor. Unless otherwise stated, registered capacities from NGC reports have been used.

There is some confusion over a unique name for a plant, which is usually the name of its location or owner. For new projects, sometimes no plant name is given, only the owner's name, but owners keep changing as different joint ventures or consortiums are formed. This is especially true of new plants, which go through various name changes in the early stages of the project. For example, Greystones and Wilton, Teesside never appear together in the same source, so it can be assumed that they refer to the same plant, the largest CCGT. Obvious duplications have been eliminated where they correspond to different units of the same plant.

The life of a plant depends on a number of factors. Owners need only give six months' notice to the NGC for closure of a plant, but permission to extend the life of a plant may entail a time-consuming public inquiry. Because plant closure implies job losses, such announcements are not made in company newsletters unless the closures are 100% certain.

Uncertainties in the commissioning of new plant are related to the stages in the process, as indicated by the status codes in table B.2. A company may sign a System Connection Agreement with the NGC before Section 36 consent is given by the government. Transmission contracted plant (T) does not mean that the plant will go ahead; the best indicator of go-ahead is the combination of T, S, and U: Section 36 consent given (S), under construction (U), and transmission contracted (T). In many cases, the announcement of new plant is merely a strategic move, signalling additional capacity. The major generators have employed this market signalling strategy to deter new entrants. Information on new plants in their early planning stages is difficult to obtain and verify.

Table B.2 Status of Plant

Code Status Description

blank not in NGC report and not sure of status

* not directly connected to NGC

A has applied for S36 planning permission, government consent under consideration

Ap has applied for S36 planning permission, but pending results of public inquiry

D decommission or closure of plant notice given

d already decommissioned or in the decommissioning stage

E existing plant in NGC reports/transmission system

I has import facilities, e.g. to import coal, according to Kleinwort Benson Securities
(1990) The Electricity Handbook

P postponed or deferred

R significant reduction in registered capacity

S has Section 36 Consent

T transmission contracted (agreement with NGC made)

U under construction

X transmission agreement with NGC cancelled (pertains to new plant)

Z notified zero registered capacities for next 7 years (to 2000). The registered capacity
shown here is the remaining capacity of the plant in the system.

The decommissioning years for all nuclear plant were taken from National Audit

Office (1993) Costs of Decommissioning Nuclear Facilities, HMSO.

Plant data consolidated from various publications are presented in table B.3 and

summarised in table B.4.

Table B.3 Existing Plant as at July 1993

SOURCE | STATUS | NAME | PLANT TYPE | CAPITAL COST (£m) | DNC [11] | REGISTERED CAPACITY (MW) | OWNER* | COMMISSIONED | DECOMMISSIONING | BID PRICE (/MWh)
2.1,2.3 E Dungeness B AGR 720 1,120 NE 1985 2010 1.00

2.1,2.3 E Hartlepool AGR 1,020 1,176 NE 1989 2010 0.50

2.1,2.3 E Heysham 1 AGR 1,020 1,155 NE 1989 2010 1.00

2.3 E Heysham 2 AGR 1,230 1,340 NE 1989 2018 1.00

2.3 E Hinkley Point B AGR 1,120 1,248 NE 1976 2007 1.00

11 Maentwrog Hydro 30 NE 1928

1,1.1,2.3 E Bradwell magnox 245 248 NE 1962 1997 1.00

2.3 E Dungeness A magnox 424 450 NE 1965 1995 1.00

1,2.3 E Hinkley Point A magnox 470 470 NE 1965 1995 1.00

2.3 E Oldbury magnox 434 440 NE 1967 1998 1.00

2.3 E Sizewell A magnox 420 430 NE 1966 1996 1.00

2.2,2.3,2.4 E Trawsfynydd magnox 390 240 NE 1965 1993 1.00


D
2.3 E Wylfa magnox 840 1,015 NE 1971 2001 1.00

TOTAL 8,363 9,332


ABOVE
2.2,10,1.2, E Killingholme CCGT 250 620 NP 1993 14.10
2.3 NP1
11 E Cwm Dyli Hydro 10 NP 1989

11 E Dolgarrog Hydro 27 NP 1924

11 Mary Tavy Group Hydro 3 NP 1932
2.3 EI Aberthaw B large coal 1,401 1,455 NP 1971 15.25

2.3 E Didcot large coal 2,060 1,960 NP 1972 15.48

6 EI Drax large coal 3,890 3,870 NP 1974 13.51


(26.5.93),2
.3
2.3 EI Eggborough large coal 1,971 1,940 NP 1968 14.09

2.3 E Ironbridge B large coal 984 970 NP 1970 13.74

2.3 E Rugeley B large coal 1,016 976 NP 1972 13.30

2.3 E West Burton large coal 1,988 1,932 NP 1967 13.89

2.3 E Blyth B medium 620 626 NP 1963 1993


D coal

Table B.3 continued
2.3, 2.4 EI D Thorpe Marsh medium coal 1,098 1,050 NP 1963 1994 13.48
2.3 EI Tilbury B medium coal 1,412 1,360 NP 1968 15.26
2.3 E Willington B medium coal 376 376 NP 1962 12.76
3, 3.2 D but 2.3 incl E RI Z Eggborough GT OCGT, aux 68 NP 1968 1993
6(9.4.93),2.3 E Aberthaw B GT OCGT, aux 51 NP 1967 93.81
0,2.3 E Didcot GT OCGT, aux 100 NP 1972 100.99
0,2.3 E Z Drax GT OCGT, aux 140 NP 1974 93.16
0,2.3 E Fawley GT OCGT, aux 68 NP 1969 94.30
0,2.3 E Ironbridge GT OCGT, aux 34 NP 1967 99.70
0,2.3 E Littlebrook GT OCGT, aux 105 NP 1982 93.01
0,2.3 E Pembroke GT OCGT, aux 75 NP 1970 95.76
0,2.3 E Rugeley B GT OCGT, aux 50 NP 1969 84.54
0,2.3, 2.4 EI D Thorpe Marsh GT OCGT, aux 56 NP 1966 1994 79.78
0,2.3 E Tilbury GT OCGT, aux 68 NP 1965 130.82
0,2.3 EI West Burton GT OCGT, aux 80 NP 1967 94.14
0,11,2.3 E Cowes GT OCGT, main 140 140 NP 1982 93.03
0,2.3 E* Letchworth GT OCGT, main 140 140 NP 1979 93.16
0,2.3 E* Norwich GT OCGT, main 110 110 NP 1966 94.46
0,2.3 E* Ocker Hill GT OCGT, main 280 280 NP 1979 97.37
2.3 E R Z Fawley oil 1,034 968 NP 1969 40.76
6(9.4.93),2.3 E Littlebrook D oil 2,160 2,055 NP 1982 25.66
2.3 E R Z Pembroke oil 1,530 1,461 NP 1970 25.64
0, 2.2, 3.2,2.3 E RI Z Aberthaw A small coal 376 192 NP 1960 1993 18.07
2.3 E Blyth A small coal 448 456 NP 1958 14.25
0,2.2,3,9,2.3 E* Z Rugeley A small coal 560 228 NP 1961 1993 17.67
0,2.2,3,9,2.3 E R Z Skelton Grange small coal 448 228 NP 1961 1993 16.41
2.3 E* Staythorpe B small coal 336 354 NP 1960 15.59
2.2,2.3 E R Z Uskmouth small coal 336 228 NP 1961 1993 18.03
0,2.2,3,9,2.3 E R D Z Willington A small coal 392 98 NP 1957 1993 16.84
TOTAL ABOVE 25,146 24,968
3.4, 6(20.4.93), 8,1.2,2.3 E Killingholme PG1 CCGT 300 900 PG 1992 14.10
4.1, 11 Rheidol Hydro 49 PG 1966
2.3 E Cottam large coal 1,970 1,988 PG 1969 14.38
6(9.4.93),2.3 EI Ferrybridge C large coal 1,966 1,960 PG 1966 14.16
2.3 EI Fiddler's Ferry large coal 1,914 1,940 PG 1971 15.49
2.3 EI Kingsnorth large coal 1,954 1,940 PG 1970
6(26.5.93),2.3 E Ratcliffe On Soar large coal 1,974 1,990 PG 1968 14.27
2.3 E Drakelow C medium coal 910 999 PG 1965 15.29
2.3 E High Marnham medium coal 930 945 PG 1959 15.23
0 D but 2.3 incl EI Z Ferrybridge C GT OCGT 34 PG 1966 1992
0 D but 2.3 incl EI Z Fiddler's Ferry GT OCGT 34 PG 1969 1992
0, 4 D but 2.3 incl EI Z Kingsnorth GT OCGT, aux 34 PG 1967 1992
0 D but 2.3 incl E Z Ratcliffe GT OCGT, aux 34 PG 1968 1992
0,4 D, but 2.3 incl E DI Z Cottam GT OCGT, aux 50 PG 1969 1992
0,2.3 E Grain GT OCGT, aux 145 PG 1979 157.22
0,2.3 E Ince GT OCGT, aux 50 PG 1979 151.88
0,2.3 E* Taylor's Lane GT OCGT, main 140 132 PG 1979 152.00
0,2.3 E Watford GT OCGT, main 70 70 PG 1979 110.79
6(9.4.93),7,2.3 E Grain oil 2,068 2,700 PG 1979 39.52
2.3 E Ince B oil 1,010 960 PG 1982 34.84
2.3 E Richborough oil 342 342 PG 1962 21.01
0, 2,4, 9,2.3 E R Castle Donington small coal 564 210 PG 1956 22.58
TOTAL ABOVE 15,861 17,457
3.1,1.2,2.3 E* Brigg CCGT 100 272 IN 1993 14.10
10,1.2,2.3 E* Corby CCGT 200 412 IN 1993 14.10
1.2,2.3 E* Peterborough CCGT 200 405 IN 1993 14.10
3.1,2.3 E* Roosecote CCGT 230 229 IN 1991 14.10
3.4, 6(19.5.92), 1.2,2.3 E Teesside/Greystones CCGT/CHP 850 1,875 IN 1993
TOTAL ABOVE 230 3,193
2.3 E Dinorwig Hydro 1,740 OT 1984
2.3 E Ffestiniog Hydro 360 OT 1963
1,2.1,2.2,2.3 E* Calder Hall magnox 198 192 OT 1956 1996 0.00
TOTAL ABOVE 198 2,292
2.3 E FRANCE external 1,972 LI 8.00
2.3 E SCOTLAND external 850 LI 11.33
TOTAL ABOVE 2,822

* Owner Groups: NE = Nuclear Electric, NP = National Power, PG = PowerGen, IN = Independent power producer, OT = Other: BNFL, NGC, LI = Link

Table B.4 Summary of All Plant in England and Wales NGC System as at July 1993

Type of Plant Number Capacity in MW Proportion

Nuclear: Magnox 7 3,485

Nuclear: AGR 5 6,039 15.8%

Large Coal 12 22,921

Medium Coal 6 5,356

Small Coal 8 1,994 total coal: 50.3%

Oil 6 8,486 14.1%

Open Cycle Gas Turbine (OCGT) 25 2,148 3.6%

Combined Cycle Gas Turbine (CCGT) 7 4,713 7.8%

Hydro 7 2,219 3.7%

Link (Scotland and France) 2 2,822 4.7%

TOTAL 85 60,183 100%

B.2 Deterministic Approach

B.2.1 Description of Approach

The Central Electricity Generating Board (CEGB) used scenario and sensitivity analyses to

assess the impacts of major uncertainties on its plans for capacity expansion, mainly

to support decisions to invest in new types of plant. Its approach was

deterministic, starting with the construction of several plausible future scenarios.

The scenario analysis rested on major scenario drivers and painted distinct

pictures of the future, typically reflecting most likely and extreme cases. A detailed

electricity planning model was then run for each scenario to determine the optimal

mix of capacity over the planning horizon. This optimisation programme was data-

intensive and non-transparent but enabled the assessment of both marginal plant

economics and total costs. The main advantage of such an optimisation is that

different constraints can be included. While scenario analysis covers a range of

futures, it does not take into account fluctuations within or deviations from each

scenario. Sensitivity analysis is needed to explore the uncertainties in individual

parameter values and variations of scenarios. In fact, the CEGB relied on

extensive sensitivity analysis to defend its estimates in the Sizewell B and

Hinkley C public inquiries against opposing assumptions and appraisal methods.

The deterministic approach is easy to follow, which lends it credibility and

immediate acceptance.

Capacity planning in electricity generation has traditionally been formulated as a

resource allocation optimisation problem: different types of plant, each with

different capital and operating costs, are allocated to different periods, with plant

capacities specified so as to meet pre-specified demand. Without the optimisation

formulation, it is more difficult to assess the choice of plant via a marginal cost

analysis in the manner of the first pilot study (Appendix A), the amount of capacity

to install (unit size and total number of plants), and the timing of installation by a

decision tree, because the details of production costing which determine the merit

order are lost, and seasonal plant availabilities and load duration curves are absent.

The timing decision is a function of the lead time or construction period, expected

demand, existing capacity, and type of plant. Decisions are made at one point in

time for all future periods. This kind of deterministic optimisation treats the

different periods equally and does not consider contingencies.

B.2.2 Description of Replication

The replication of this approach consisted of spreadsheet analysis to generate the

scenarios and then spreadsheet macros to translate them into input files for the

optimisation programme. ECAP (Vlahos, 1991) is a proprietary PC-based

application which runs under the Windows operating system and uses Benders

decomposition. It has been validated against the CEGB's mainframe-based

mixed-integer linear programming (MIXLP) programme (Vlahos and Bunn, 1988b)

and found to be about 100 times faster. It iterates to close the gap between lower

and upper bounds on total cost to a user-defined tolerance level. The smaller the

tolerance level, the closer the solution is to the optimum, and the longer it

takes to reach convergence.
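The bound-gap stopping rule just described can be sketched in a few lines. The bound sequence and tolerance values below are illustrative stand-ins, not ECAP output:

```python
def converged(lower, upper, tolerance):
    """Relative-gap stopping rule: stop once the upper and lower bounds
    on total cost agree to within the user-defined tolerance."""
    return (upper - lower) <= tolerance * abs(upper)

def solve_until_converged(bounds, tolerance):
    """Scan successive (lower, upper) bound pairs -- here a fixed list
    standing in for Benders iterations -- and report the iteration
    count at convergence along with the final upper bound."""
    for i, (lo, up) in enumerate(bounds, start=1):
        if converged(lo, up, tolerance):
            return i, up
    return len(bounds), bounds[-1][1]

# Hypothetical bound sequence: a looser tolerance converges in fewer
# iterations, mirroring the trade-off discussed in the replication.
bounds = [(80.0, 120.0), (90.0, 105.0), (95.0, 100.0), (97.0, 98.0)]
iters_loose, _ = solve_until_converged(bounds, 0.05)
iters_tight, _ = solve_until_converged(bounds, 0.005)
```

With these numbers the 0.05 tolerance stops one iteration earlier than 0.005, at the cost of a less tightly bounded total cost.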

Permutations of demand growth over time, seasonal load duration curves, and fuel

escalation rates resulted in 36 different scenarios. The demand uncertainty, for

example, can be reasoned as follows: extreme weather conditions contribute to

more peaks in demand, translating to a steeper load duration curve for that

particular season. The base case contained 8 load duration curves (LDC) to

correspond to each of the seasons. On top of the original LDC, we constructed the

case for more base load and more peak load, hence ending up with two additional

LDCs for each season, totalling 24 LDCs altogether. LDC for season 1, for

example, is illustrated in figure B.1.

Figure B.1 Load Duration Curves for Demand Uncertainty

[Figure B.1 plots the season 1 load duration curve (load % against fraction of hours) for the original case and for the "more base load" and "more peak load" variants.]
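The 36 scenarios arise as the cross-product of the three drivers. A minimal sketch, where the labels are assumed shorthand rather than the actual ECAP file names:

```python
from itertools import product

# The three scenario drivers described in the text.
demand_growth = ["status_quo_1pc", "low_0.5pc", "high_1.5pc"]
ldc_shape = ["original", "more_base", "more_peak"]
fuel_escalation = ["status_quo", "no_growth", "high_gas", "high_coal"]

# Every combination of the three drivers: 3 * 3 * 4 = 36 scenarios.
scenarios = [
    {"demand": d, "ldc": l, "fuel": f}
    for d, l, f in product(demand_growth, ldc_shape, fuel_escalation)
]
```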

Figure B.2 shows how these three factors are used to generate the scenarios.

These scenarios reflect status quo and extreme conditions. Within each scenario,

the optimisation programme is run to get the optimal capacity plan. Of the 36

outcome plans, those that generated the highest and lowest optimal expansion

costs are then analysed in depth. Next, inputs are toggled to create a plan where

new nuclear plants will be economic to install. Sensitivity to planning margin is

also tested by comparing the selected scenarios under the status quo 20% reserve

margin conditions to 40% at low and high rates of demand growth. We took a

short cut: the parameters varied in the sensitivity analysis were the same ones used

in constructing the scenarios.

Figure B.2 Scenario Generation

[Figure B.2 shows how the scenarios combine three explicit uncertainties and the factors that drive them (SET++ are ECAP file names, with the same LDC for each period):

Seasonal load duration curves (LDC):
- More base load: mild weather conditions, demand side management, tariff incentives, better load management.
- More peak load: extreme weather conditions, booms and busts in the economy, unpredictable circumstances.
- Status quo: same as expected from previous planning exercises.

Period demand growth (levels of peak demand expected in each period of the planning horizon, assuming a 20% planning margin):
- Status quo: continue as before (1% annual growth).
- Low (0.5% annual growth): conservation, consumer consciousness, energy efficient appliances, fuel switching, VAT on fuels.
- High (1.5% annual growth): population explosion (immigration?), shift to more energy intensive industries, economic growth.

Fuel price escalation rates (ESC):
- SET0+ Status Quo: HFO 3%, DIST 3%, AGR 1%, Gas 1%, Coal & nuclear 0%.
- SET1+ No Growth: 0% for all fuels.
- SET2+ High Gas: same as Status Quo, but Gas 4%.
- SET3+ High Coal: same as Status Quo, but Coal 4%.]

The 65 existing plants in the system total 55,616 MW (or 55.6 GW) of capacity.

The breakdown by type of plant and ownership is given in Table B.3. The base case

or status quo scenario takes the updated list of existing plants and subjects them to

technical parameter values as used by the former CEGB. No minimum or

maximum new capacity is specified in the optimisation, except for renewables.

Renewable technology, such as wind and tidal power, are assumed to grow at a

rate according to incentives under the Non-Fossil Fuel Obligation subsidy.

Availabilities, generation costs, basic seasonal load duration curves, and other

factors are assumed not to have deviated from the last CEGB run in 1990. New

plant options of the same technology have the same characteristics as existing

plants. For example, a new nuclear plant, whether AGR or PWR, has a technical

and economic life of 40 years, interest during construction1 of 4.2 years, and 1175 MW

capacity per unit. The six types of new plant options are nuclear, coal, CCGT for

baseload, CCGT for peakload, gas turbine (open cycle), and renewables. A further

breakdown is not considered. The first priority in meeting demand is taken up by

renewables. The capacity cost per plant per season is the same for plants of the

same technology. Generation cost varies marginally for each season. Operations

and maintenance (O&M) costs represent an annual fixed cost per plant.

Factors that contribute towards the status quo scenario are called status quo files.

The status quo period demand grows at 1% per year starting from 51,400 MW

peak demand in 1994. The status quo fuel escalation rates are annually 3% for

heavy fuel oil, 3% for diesel, 1% for AGR, 1% for natural gas, and 0% for coal and

nuclear. There are two kinds of fuel escalation rates, one for AGR, and the other

for Magnox and PWR. Two kinds of fuel escalation rates correspond to British

1 Interest during construction can be expressed as a lump sum monetary value, as a rate of
interest, or, in this case, a number of years of interest during construction. This is simply
half the construction period.

Coal and imported coal. All oil-fired stations take heavy fuel oil. All gas turbines

take diesel (DIST). All Combined Cycle Gas Turbines (CCGT) take natural gas.
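The escalation rates are applied as constant annual compounding. A small sketch, using the status quo rates from the text and an arbitrary base price:

```python
def escalate(price, rate, years):
    """Compound a fuel price forward at a constant annual escalation
    rate: p_t = p_0 * (1 + r) ** t."""
    return price * (1.0 + rate) ** years

# Status quo annual escalation rates from the text, as fractions.
rates = {"heavy_fuel_oil": 0.03, "diesel": 0.03, "agr": 0.01,
         "natural_gas": 0.01, "coal": 0.0, "nuclear": 0.0}

# E.g. a natural gas price of 100 (arbitrary units) after 15 years.
gas_15 = escalate(100.0, rates["natural_gas"], 15)
```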

Plant availabilities and load duration curves are described for each of the eight

seasons. These eight seasons correspond to the weekday and weekends of four

kinds of season of the year with respect to peak and plateaus. Plant availabilities

have more variation than fuels, e.g. four kinds of availability patterns exist for

nuclear plants: magnox, AGR, nuclear A, and nuclear B. Availability for new

plant options is considerably lower than existing plants as seen in practice. The

distribution of seasonal availability over the planning horizon is kept as simple as

possible, not more than three sequences per plant. The seasonal load duration

curves are taken from old CEGB statistics, with the assumption that overall

seasonal patterns of demand have not changed. For extreme scenarios, the LDCs

were varied towards more peak or more base-load. Fuel supply contracts in place

imply minimum energy constraints, i.e. power stations called to run must meet

minimum utilisation levels.

Running with and without minimum energy constraints had little impact on the final

results, with the only interesting observation being those plants in the merit order

which satisfy the minimum energy constraints exactly. Hence, to speed up the

optimisation runs, capacity plans under all scenarios have been generated without

minimum energy constraints.

In the status quo case, a 20% planning or reserve margin was assumed for all 12

periods in demand growth. This means that minimum capacity required is 20%

above the peak demand expected in that period. This assumption was found to be

well justified by another study (Bunn et al, 1993) where a system dynamics model

found that a 24% planning margin achieved equilibrium conditions in the electricity

market. This planning margin is reflected by the setting of the Value of Loss of

Load (VOLL) by the electricity regulator, ensuring that there is no incentive to

build too much (above 20%) or too little capacity in the long run. Other factors

reflect the current assumptions of the industry: 10% discount rate, 10% Non-Fossil

Fuel Obligation, and 33% corporate tax rate.
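The minimum capacity implied by the planning margin follows directly from the stated assumptions (51,400 MW peak in 1994, 1% annual growth, 20% margin); a sketch:

```python
def required_capacity(peak_1994_mw, growth_rate, margin, period):
    """Minimum system capacity for a given period: the peak demand
    expected in that period (compounded from the 1994 base) plus the
    planning or reserve margin."""
    peak = peak_1994_mw * (1.0 + growth_rate) ** period
    return peak * (1.0 + margin)

# Status quo assumptions from the text.
first_period = required_capacity(51_400, 0.01, 0.20, 0)  # 61,680 MW
```

The same function with a 0.40 margin reproduces the sensitivity case examined later in this appendix.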

The transient nature of the industry means that data accuracy and model precision

are not of high priority in this exercise, as the focus is on modelling methodology

not policy insight. With 36 scenarios to configure, the replication was designed to

generate results quickly by means of short optimisation runs, with minimal

variation in input requirements through simplicity and standardisation of

specification. For shorter runs, the tolerance level for convergence of the

mathematical decomposition-based optimisation algorithm was set to 0.05 instead

of the more precise 0.005 or 0.0005. Thus fewer iterations are required to close

the gap between the upper and lower bounds for the total cost of the final capacity

plan. Tightening the tolerance changes the optimal expansion plan by lowering the

total cost at the expense of 12 iterations instead of 4 to reach convergence. The

more optimal plan (say at 0.005 tolerance level) calls for building CCGT for

peakload as well as base load, building more gas turbines, but much less CCGT

capacity altogether. It is assumed that as long as the same tolerance level is used

across all scenarios, the results are comparable.

The 35 non-status quo scenarios are generated by varying the period demand

growth in three ways, the seasonal load duration curve in three ways, and the fuel

escalation rates in four ways. The status quo period demand assumes a 1% annual

growth rate, while the low case assumes only a 0.5% annual growth rate, and the

high case of 1.5%. The low growth case is expected when any or all of the

following takes effect: conservation measures, increasing consumer consciousness

of energy and environment and energy saving schemes, introduction and

penetration of energy efficient appliances, fuel switching behaviour of consumers,

and VAT on fuels to curb electricity consumption. The high growth case is not

very likely to occur since the economy is unlikely to take a sharp turn upwards and

grow faster than the previous decade. Neither is it likely to shift to more energy

intensive industries and foresee a population explosion in Britain. However, a truly

deterministic approach does not take account of the likelihoods, so this aspect is

not explored further. Variations of the load duration curves merely take the same

seasonal status quo, trimming it down for baseload or lifting it higher for peakload.

(This process began with a visual and graphical inspection and adjustment,

followed by coordinate inference. A more accurate and defensible method consists

of taking demand data directly from the National Grid Company which supplies

such information as forecast for future periods and load distribution curves, and

then converting them into load duration curves.) In the Sizewell B public inquiry,

the CEGB had used negative growth rates for the low scenario and 2.6% per

annum for high demand growth. These seasonal LDCs may steer towards more

base load if any of the following occurs: mild weather conditions (more stable

demand), demand side management, tariff incentives, and better load management

on the suppliers side. If the industry evolves more vertically (or incentives for

cross functions) such that generators supply electricity and suppliers also generate

electricity as the likely trend observed now, then demand side management may

become popular.
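The thesis adjusted the curves visually; one simple, purely illustrative way to generate "more base" and "more peak" variants programmatically is a mean-preserving tilt of the curve:

```python
def tilt_ldc(loads, alpha):
    """Tilt a load duration curve about its mean: alpha < 1 flattens it
    (more base load), alpha > 1 steepens it (more peak load). `loads`
    are load percentages ordered from peak to trough, as in figure B.1.
    This is only an illustrative stand-in for the visual adjustment
    described in the text."""
    mean = sum(loads) / len(loads)
    return [mean + alpha * (x - mean) for x in loads]

# Hypothetical season 1 curve, peak to trough.
original = [100, 90, 75, 60, 45, 30]
more_base = tilt_ldc(original, 0.8)   # flatter curve
more_peak = tilt_ldc(original, 1.2)   # steeper curve
```

The tilt preserves total energy (the area under the curve), which is one defensible convention; trimming or lifting the curve, as described above, changes the total as well.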

Fuel price uncertainty is reflected in four ways. The status quo, as stated before,

contains escalation rates reflecting todays trends. The no growth case assumes

0% escalation for all types of fuels, a scenario that is likely if all existing and new

plants are tied to fixed fuel supply contracts. The high gas case uses 4% instead of

status quo 1% annual growth rate for natural gas, reflecting the premium on gas if

more CCGTs are built or if major interruptions in supply occur. The current

custom of back-to-back contracts for new CCGTs, however, makes it unlikely that

a steep growth in gas prices will occur. Finally, the high coal case tags coal prices

to 4% instead of 0% pa. The escalation rates of domestic and imported coal are

assumed to be the same here. This situation may occur if the closure of UK coal

pits turns out to rest on an under-estimate of production capacity and of the demand

for coal, leading to a scarcity of coal; imported coal would then become more

expensive but necessary. In all scenarios, the discount rate

was set at 10%, NFFO at 10%, corporate tax at 33%, and tolerance at 0.05.

B.2.3 Results of Replication

The status quo scenario was examined with respect to the optimal expansion plan,

the merit order of plants in the next 15 years, and the relative economics of each

plant. Three additional scenarios were generated for in-depth study: those giving

the highest and lowest costs and the third arising from the combination of high gas,

high period demand, and peakload duration curves. Each scenario was also tested

for sensitivity to certain parameters.

Status Quo Scenario

During the next 15 years, the optimal expansion plan prescribes the installation of

up to 10,370 MW of new CCGT as baseload, 2050 MW of renewable plant, and

building 6014 MW of open cycle gas turbine (OCGT) in the first three years. This

amounts to an investment cost of £12.652 billion over the next 44 years. By the year

2010, the newly installed CCGTs would have pushed the just installed CCGTs in

1993 down the merit order. Renewables would, of course, lead the merit order,

followed by nuclear plant, links to France and Scotland, coal station, CCGT, oil,

and new gas turbine. However, the marginal fuel saving2 (MFS) of CCGT is

substantially lower than the Scottish Link and coal stations just before them in the

merit order. OCGTs (open cycle gas turbines) offer no marginal fuel savings as

they are retained for peakload purposes. Presumably by then, all existing OCGTs

would have reached their end of life or else be pushed out of merit completely.

2 The fuel saving in the total system if this plant is introduced.
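Merit-order dispatch of the kind underlying these rankings can be sketched as follows; the plant list and marginal costs are invented for illustration and do not come from the thesis runs:

```python
def merit_order_dispatch(plants, demand_mw):
    """Dispatch plants in ascending order of marginal cost (the merit
    order) until demand is met; returns MW output per plant."""
    schedule = {}
    remaining = demand_mw
    for name, capacity, marginal_cost in sorted(plants, key=lambda p: p[2]):
        out = min(capacity, max(remaining, 0))
        schedule[name] = out
        remaining -= out
    return schedule

# Hypothetical (name, capacity MW, marginal cost /MWh) tuples.
plants = [
    ("renewables", 2_000, 0.0),
    ("nuclear", 9_000, 5.0),
    ("coal", 20_000, 15.0),
    ("ccgt", 10_000, 14.0),
    ("ocgt", 2_000, 90.0),
]
schedule = merit_order_dispatch(plants, 30_000)
```

With these invented costs the OCGT plant is never called, consistent with its place at the bottom of the order.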

This status quo scenario was also tested for sensitivity to discount rate. At 6%

discount rate (interpretation: still in the public sector), more coal and less CCGT

should be built. While CCGT may suffice in the earlier years, i.e. the first 15 years,

it is more economical to build coal plants in the latter part of the 44 year planning

horizon. But as a result, the optimal expansion plan is more expensive by 64%.

Coal plants have higher investment costs than CCGTs.

Imposing minimum energy constraints gives slightly higher overall cost but also

lowers the utilisation and load factors of plants considerably.

Scenario 1: High coal prices and high demand growth

This is an extreme but unrealistic scenario which gives the most expensive option

(£45.4 billion, of which the investment cost is £13.95 billion). The average cost of

all 36 scenarios was £36.7 billion. Given that coal is not a suitable alternative

because of its high price escalation, the need to install CCGT in almost every

period to meet high demand makes it very costly. By the year 2010, 18.3 GW of

CCGT, 9 GW of gas turbines, and 3 GW of renewables should be installed. By

then, all newly installed CCGT would expect to move up in merit and become

baseload, hence preceding existing coal-fired stations in the merit order. The load

factor of these coal stations fall from 75% in the status quo case to 60% and down

to as low as 20%. We would expect the early retirement and mothballing of

existing coal plant in such a scenario. The sell-off option was not considered here.

Scenario 2: No growth in fuel prices, base load duration curves, and low demand
growth

This combination describes very stable circumstances, hence the plan is the least costly

of all scenarios. As demand is not expected to grow much (0.5% pa), the total

investment cost for the entire planning horizon is only £9.478 billion. New gas

turbines in the first 25 years can cope with any peaks in demand. Thereafter,

CCGT for peaking load should be built. As with other scenarios, new renewables

are built at increasing capacities every year, to reflect the Non-Fossil Fuel

Obligation and also the industry's inclination towards more environmentally clean

methods of generation. No new nuclear, coal, or baseload CCGT need to be

installed. The annual fuel cost in the year 2010 (just after our 15 year evaluation)

is almost half that of the status quo case in the same year.

Scenario 3: High gas prices, peakload duration curves, and high demand growth

This is a scenario driven by extremely uncertain factors. However, the results are

not as extreme as expected. High gas prices make CCGTs unattractive. High

demand growth calls for new capacity. This combination makes coal an attractive
option. After the first twelve years, coal plants should be installed every year,

amounting to 70.789 GW of total new coal capacity. Meanwhile, peak load

duration curves call for more gas turbines to be built, totalling 19.673 GW of

OCGT over the 44 year planning horizon. What is more interesting is that by the

year 2010, oil stations will have moved up the merit order, replacing CCGT.

However, the new open cycle gas turbines will still trail the build-up of CCGTs.

Further scenario analysis

The above analysis prompted further scenario generation to answer two questions.

1) What effect will falling coal prices have on capacity planning? 2) What

conditions are necessary to induce the installation of new nuclear plants?

If coal prices fall at an annual rate of 1%, new coal installation becomes attractive

but not until later periods, the earliest being the 25th year. New CCGT should still

be installed in the early periods to meet rising demand and to replace retired plant

capacity. New capacity to meet the high rate of demand growth will thus be met

by new CCGT in the early periods and new coal in the later periods.

The easiest way to create a nuclear scenario, i.e. to make the nuclear option more

attractive, is to make other fuels less attractive. Hence, the fuel escalation rates for

coal and gas were raised to a high of 4% per annum, while no escalation was made

for nuclear. No coal plant, and only 1.875 GW of CCGT, should be built. Starting

in the 7th period (year 2012), a substantial 11.6 GW of nuclear capacity should be

installed, rising to a total of 37.5 GW by the end of the planning horizon. Diesel is

also more attractive, hence new gas turbines should be installed every year except

two periods. If we lower the discount rate from 10% to 6%, i.e. assuming a public

sector scenario, then even more nuclear capacity (total of 44.7 GW) should be built

and earlier too (4th period instead of 7th). This additional nuclear capacity

displaces much of the gas turbines in the 10% discount rate case. By the year

2010, the order of merit is obvious: renewables, nuclear, links, coal, oil, CCGT,

and OCGT. The high operating cost of OCGT forces it to the bottom of the order

in all cases.

Sensitivity Analysis

Sensitivity to tolerance level and minimum energy constraint has already been

discussed earlier. Sensitivity to planning margin was conducted to assess the

impacts of over or under capacity.

A 40% margin boosted the minimum capacity levels required in all periods of

the high growth and low growth cases. These were applied to the four scenarios

described above: status quo, high coal price and high demand growth, no growth,

and high gas price. These combinations led to 8 scenarios with 40% planning

margin implied in the period demand growth. The results were examined and

compared with scenarios closest to them, not necessarily mentioned above. The

striking outcome is that in all 8 scenarios, 16.3 GW of new gas turbine (OCGT)

plant should be built in the first three years. The fastest way to meet a 40%

planning margin is by constructing those fossil fuel plants which have the lowest

interest during construction (hence shortest construction time) and much lower

O&M cost than that of CCGT. In general, the additional capacity is met by

building more gas turbines. Other characteristics do not change very much, if at

all. Another way of looking at sensitivity to planning margin is to vary the planning

margin per period, e.g. start with 20% and gradually move up.

B.2.4 Conclusions and Extensions

Several criticisms of the deterministic approach lead to the consideration of a

probabilistic approach. But the deterministic approach itself can be extended in

several directions towards more model completeness. The values we used have

been central estimates, hence the extreme scenarios are symmetric around the base

case. Asymmetry is more likely in practice and should be considered. We have only

looked at two types of uncertainties. As mentioned in Chapter 2, the types of

uncertainties may change over time (in relative importance). Hence it may be

useful to include other uncertainties. We took the short cut of using the same

uncertain parameters for the sensitivity analysis as in the construction of the scenarios.

Some aspects of the model are more difficult to change because they require re-

configuring the optimisation programme. However, such changes are necessary to reflect

the electricity market: different discount rates for different plants or ownership,

optimisation by ownership or by different types of load, and making it more user

friendly for quick changes.

B.3 Probabilistic Approach

B.3.1 Description of Approach

In the Sizewell B public inquiry, the inspector (Layfield, 1987) recommended that

the CEGB should include probabilities in their analysis of the capacity expansion

decision. A more rigorous analysis of uncertainty was only one of several reasons

for using a second model. A second model of the form described in Evans (1984)

was needed to test the CEGB's model because of the complexity and importance of

the calculations. It was also necessary to derive cost estimates for different sets of

assumptions especially to test the sensitivity of the results to changes in specific

assumptions. A probabilistic method was regarded as complementary if not more

informative for analysing uncertainty, particularly if sensitivity tests give mixed

results, i.e. some favourable some not. A probabilistic method enables the results

to be drawn with a high degree of confidence, whereas a deterministic method

gives no indication of the likelihood of results. It helps to resolve conflicting

views. Furthermore, a probabilistic approach would have merit if the results from

a deterministic approach were not robust. However, this implies that the CEGB's

approach must address robustness by accounting for very extreme and adverse

scenarios.

A probabilistic approach appeared more favourable given the harsh attacks on the

deterministic CEGB approach. Evans (1984) called it "probabilistic decision

analysis" but in fact used the techniques separately, with decision analysis employed

merely to structure the problem, without any computation in the decision analytic sense.

Critics of the CEGB model complained of the underestimation of uncertainty and

tendency to err on the side of optimism. The uncertainties prevalent in long

planning horizons imply a danger of using single estimates as a basis for decision

making. Evans's (1984) approach has been used in Kreczko et al (1987) and Evans

and Hope (1984). Evans did not consider uncertainties in the discount rate or plant

lifetimes as they were not a concern ten years ago. However, the choice of discount

rate and length of operating lives of nuclear plants have become major issues now.

This goes to show that no matter how sophisticated the model, there are some

things that cannot be foreseen and the resulting model may be incomplete.

To keep the replication simple and tractable, we confine this approach to an

application of risk analysis, that is, expressing uncertainties as probability

distributions, a step beyond merely attaching discrete probabilities to values. Risk

analysis is performed using the same optimisation technique of the deterministic

approach but with Latin Hypercube Sampling. Effectively, the deterministic

approach is simulated hundreds of times to arrive at outputs that can be

summarised in cumulative probability distributions which are called risk profiles.

This interpretation of Layfield's recommendation is based on the conclusion of

McKay et al (1992) that "uncertainty in the output due to uncertainty in input

values can be described by probability distributions." Uncertainty due to model

structure is not discussed here, as it is assumed that uncertainties expressed in the

output are due entirely to the input configurations. In the extreme, uncertainty

analysis, according to Iman and Helton (1988), involves the determination of the

variation or imprecision in the output that results from the collective variation in

the inputs. The main departure from the deterministic approach comes from the

introduction of probabilities, which adds a further dimension of value and insight.

B.3.2 Replication and Results

The replication follows that of Evans (1984) but with fewer input probability

distributions so that the uncertainties can be compared with the deterministic

approach. The method of replication is illustrated in figure B.3.

An incremental and exploratory approach to modelling was needed to establish the

feasibility and limitations of available hardware and software. Simple probability

distributions were assigned to a few uncertainties. After the runs had been

smoothly initiated and completed, the number of input uncertainties was

increased. Triangular distributions depicted the simplest type of asymmetric

increased. Triangular distributions depicted the simplest type of asymmetric

distributions, easy to specify and meaningful to the user. In practice, the choice of

distribution parameters such as mean and mode is difficult to substantiate.

Figure B.3 Replication of the Probabilistic Approach

1. Select uncertain parameters

2. Assign m probability distributions

3. Sample n data points from the m distributions (@RISK)

4. Create n*m text files for input

5. Run optimisation n times

6. Extract and merge output

7. Create risk profiles

Meaningful input probability distributions are expected to deliver meaningful

output distributions given enough iterations. Runs of 100, 300, and 1000 iterations were

tested. Whilst 100 iterations could be completed in 2 days, 1000 iterations required

2 weeks. The number of iterations is the same as the number of data points in

distribution sampling. The inclusion of additional uncertain parameters requires an

increase in the number of iterations to maintain the same level of sampling

continuity. Since the exact relationship between number of distributions and

number of iterations was not clear, more iterations were used than necessary.

The input distributions were specified in the @RISK add-in package (Palisade

Corporation, 1992) to Excel spreadsheet. Latin Hypercube Sampling promises a

more efficient randomisation design compared to Monte Carlo (completely random

sampling) and other types of stratified sampling methods. Fewer iterations, i.e.

smaller samples, are needed to recreate the probability distribution. These data

points were then extracted from the @RISK output spreadsheet and consolidated

into an equivalent number of text files for input into the optimisation programme.
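The sampling step can be made concrete with the following Python fragment. It is an illustration, not the @RISK implementation: it draws a Latin Hypercube sample from a triangular distribution by inverting the distribution's CDF once per equal-probability stratum, which is why fewer iterations suffice than with fully random Monte Carlo sampling. The example parameters (a fuel escalation rate) are invented.

```python
import random

def lhs_triangular(n, low, mode, high, rng):
    """Latin Hypercube sample of size n from a triangular distribution:
    one inverse-CDF draw per equal-probability stratum, then shuffled."""
    fc = (mode - low) / (high - low)          # CDF value at the mode
    def inv_cdf(u):
        if u < fc:
            return low + (u * (high - low) * (mode - low)) ** 0.5
        return high - ((1 - u) * (high - low) * (high - mode)) ** 0.5
    draws = [inv_cdf((i + rng.random()) / n) for i in range(n)]
    rng.shuffle(draws)                         # break the sorted order across parameters
    return draws

rng = random.Random(42)
# e.g. a fuel escalation rate: pessimistic 0%, most likely 3%, optimistic 8%
sample = lhs_triangular(100, 0.0, 3.0, 8.0, rng)
```

Because every stratum of the distribution is sampled exactly once, the sample mean settles near the true mean far faster than with unstratified sampling.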

Automating the process of specifying input data files, running the optimisation

programme, and saving the output files accordingly was accomplished by writing a

series of Excel macros to perform the simulation iteratively. Three applications

were used for simulation: Excel, PFE file editor, and BENDERS (optimisation

algorithm in ECAP). The time to convergence of decomposition depends on input

files, in particular, the escalation rate for the fuel (diesel) used in gas turbines.

Because it was impossible for BENDERS to signal the end of its run to the parent

Excel macro, it was necessary to pre-specify how long Excel had to wait in this

multi-tasking WINDOWS environment. If this could be improved, say by use of a

window handler facility, the simulation could take less time.
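The orchestration problem described above (not knowing when BENDERS has finished) would today typically be handled with a bounded wait on the child process rather than a fixed pre-specified delay. A minimal Python sketch follows; the command name 'benders' and its arguments are hypothetical stand-ins, not the actual ECAP interface.

```python
import subprocess

def run_optimiser(input_file, output_file, timeout_s=600):
    """Run one optimisation case with a bounded wait instead of a guessed fixed delay.
    The command name 'benders' and its arguments are hypothetical stand-ins."""
    try:
        subprocess.run(["benders", input_file, "-o", output_file],
                       check=True, timeout=timeout_s)
        return True
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError, FileNotFoundError):
        return False

def run_all(cases):
    """Drive every sampled case sequentially, recording which runs completed."""
    return {inp: run_optimiser(inp, out) for inp, out in cases}
```

With a bounded wait, the driver resumes as soon as each run ends, instead of idling for a worst-case delay on every iteration.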

The optimisation programme produced five different files for each run. Two were

discarded and three retained for each run. The intermediate results file INR

was kept to monitor the duration of each run and hence to adjust the waiting

time of the parent Excel macro. The OEP

file contained the optimal expansion plan in terms of investment, operating and

total costs and also newly installed capacities per type of plant per period. The

production costing results PCR file contained the merit order of all plants in the

system for the periods requested -- first, second, and fifth periods in the planning

horizon.
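The "extract and merge output" step of figure B.3 can be sketched as follows. The 'TOTAL COST' line format below is a hypothetical stand-in for ECAP's actual output layout, which is not reproduced here.

```python
def extract_total_cost(oep_text):
    """Pull the total cost from one OEP output file.
    The 'TOTAL COST' line format is a hypothetical stand-in for ECAP's actual layout."""
    for line in oep_text.splitlines():
        if line.strip().upper().startswith("TOTAL COST"):
            return float(line.split()[-1])
    raise ValueError("no total cost line found")

def merge_runs(oep_texts):
    """Collect one summary value per run, ready for a risk profile."""
    return [extract_total_cost(t) for t in oep_texts]

runs = ["PLAN A\nTOTAL COST 1234.5\n", "PLAN B\nTOTAL COST 1300.0\n"]
costs = merge_runs(runs)   # [1234.5, 1300.0]
```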

Inputs and outputs of the deterministic and probabilistic approaches are largely

determined by the input and output files of the core optimisation programme

ECAP. Table B.5 lists the input data files. Table B.6 lists the output data files.

Table B.5 Input Files to ECAP

LDC  Load Duration Curve. Demand for electricity described by the load duration curve, which is approximated by a step function; 8 seasons are specified.

PRD  Period Definition. Defines the periods of the planning horizon, minimum and maximum total plant capacity, and the reserve margin.

ESC  Escalation Rates. Fuel escalation data which determine how the variable plant operating costs escalate, defined by escalation codes and patterns.

OLD  Existing Plant File. Plants in the system, e.g. table B.3, containing name of plant, scrapping life in years, capacity in MW, availability, generating cost, and escalation code for the fuel used.

NEW  Plant Alternatives. Technology alternatives, described by type of fuel, escalation rates, economic and technical lives in years, number of years of interest during construction, standard plant size in MW, generating cost escalation code, and fixed operating cost.

AVL  Availability. Availability patterns and codes for plants for the 8 seasons in the year.

TOP  Take or Pay File. Seasonal minimum utilisation constraints used to model the take-or-pay obligations of generators resulting from fuel contracts.

Table B.6 Output Files from ECAP

OEP  Optimal Expansion Plan. Main output file: investment and retirement schedule containing all existing plants and new plants that meet objectives, in cumulative block form.

NCC  Net Capacity Costs. Annual capital cost + fixed operating costs - fuel savings for all new plants in the middle year of all periods that the plants operate.

PCR  Production Costing Results. Details of plant operation in the middle year of each period of the planning horizon, displayed according to merit order.

SMP  System Marginal Price. Information about SMPs in different seasons and periods of the planning horizon.

INR  Intermediate Results. All intermediate expansion plans before convergence to the final optimum.

B.3.3 Extensions of Probabilistic Approach

The use of probabilities introduces the issues of choice of probability distribution

and sampling method as well as subjective probability elicitation and encoding.

Triangular distributions were used because they were simple to specify, requiring

only three parameters, and yet reflected asymmetry. There are other probability

distributions that may be more appropriate for particular parameters, since not all

uncertain parameters display such asymmetry. Latin Hypercube Sampling was

used because it was the most efficient. However, there are other sampling methods in

the domain of uncertainty analysis proper. Morgan and Henrion (1992) describe

extensively different types of uncertainty analysis and uncertainty propagation

methods such as stratified sampling and importance sampling. Different planners

and different utilities will have different views on which probabilities to use.

While we can increase the number of uncertain parameters considered, we must

also beware of the dimensionality problem of considering them all simultaneously.

With sampling, there is also the question of independence of probabilities to

consider. Even the few uncertainties considered here led to problems in

displaying the output, owing to the sheer volume of data points in the output

distributions.

Another interpretation of this approach includes the use of risk analysis to screen

out dominated or infeasible scenarios before the real analysis. This reduces the

problem size. Rigorous risk analysis is essentially a test of robustness. There is

also a facility to combine sensitivity analysis with risk analysis. But these tests

come at the expense of increasing complexity and dimensionality.
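The screening idea can be sketched as follows. This illustrative Python fragment, with invented cost samples, applies a first-order stochastic dominance test to drop dominated expansion plans before the fuller analysis; it is one possible reading of the screening step, not the thesis's implementation.

```python
def dominates(costs_a, costs_b):
    """First-order stochastic dominance on equal-sized cost samples: plan A dominates
    plan B if A's cost is no higher at every quantile and strictly lower at some."""
    a, b = sorted(costs_a), sorted(costs_b)
    pairs = list(zip(a, b))
    return all(x <= y for x, y in pairs) and any(x < y for x, y in pairs)

def screen(plans):
    """Drop any plan dominated by another, reducing the problem size."""
    keep = {}
    for name, c in plans.items():
        if not any(dominates(other, c)
                   for other_name, other in plans.items() if other_name != name):
            keep[name] = c
    return keep

plans = {"coal": [90, 100, 130], "gas": [80, 95, 120], "oil": [85, 105, 140]}
survivors = screen(plans)   # only "gas" survives: it dominates both other plans
```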

B.4 Decision Analytic Approach

B.4.1 Description of Approach

The third modelling approach encompasses deterministic and probabilistic aspects

of uncertainty modelling. Decision analysis is a technique that structures the

problem in terms of decisions and chance or uncertain events. The impact of

uncertainty is resolved in stages over time. The Electric Power Research Institute

(EPRI) in the US has funded and sponsored a number of planning projects which

used decision analysis as the main technique for structuring and analysis of

uncertainty. The essence of EPRI's methodology is explained in Barrager and

Gildersleeve (1989). Characteristics of the decision analytic approach can be elicited

from various EPRI-sponsored projects, e.g. Cazalet et al (1978), Baughman and

Kamat (1980), Hobbs and Maheshwari (1990), Keeney and Sicherman (1983), and

Keeney et al (1986).

Capacity planning can be defined by three types of decisions relating to the type,

size, and timing of plant investment. Sullivan and Claycombe (1977) suggest a

sequential approach. First decide on the type of plant, which is based on

investment and operating cost and expected demand. Then decide on size, which is

based on type, economies of scale in investment cost, and system-wide reliability

requirements. Finally decide on timing. The timing decision is based on the type,

size, system reserve requirements, future load characteristics, and economic

forecasts.
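The first step of this sequence, choosing the plant type from investment and operating costs at an expected duty, can be illustrated with a screening-curve style calculation. All figures below are invented for illustration and do not come from the thesis.

```python
def annual_cost(capex_per_kw, fixed_om, fuel_cost_per_mwh, load_factor,
                rate=0.08, life=25):
    """Annualised cost per kW of a technology at a given load factor (illustrative)."""
    annuity = rate / (1 - (1 + rate) ** -life)   # capital recovery factor
    energy_mwh_per_kw = 8.76 * load_factor       # MWh generated per kW per year
    return capex_per_kw * annuity + fixed_om + fuel_cost_per_mwh * energy_mwh_per_kw

techs = {
    "gas turbine": dict(capex_per_kw=400, fixed_om=10, fuel_cost_per_mwh=60),
    "ccgt":        dict(capex_per_kw=700, fixed_om=20, fuel_cost_per_mwh=35),
}

# Cheapest type for baseload duty versus peaking duty.
choice = min(techs, key=lambda t: annual_cost(load_factor=0.8, **techs[t]))
peaking = min(techs, key=lambda t: annual_cost(load_factor=0.05, **techs[t]))
```

The crossover illustrates why the type decision depends on expected duty: low-capital, high-fuel-cost plant wins at low load factors, and vice versa.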

The Over and Under Model (Cazalet et al, 1978) can be adjusted to suit the UK

context. Instead of reliability as an objective, over and under capacity is

driven by profitability in relation to the implicit reserve margin set by the

pricing formula (VOLL, the value of lost load). This extends to the multi-staged

project decision by discounted cashflow.

When restructuring the capacity planning problem in the decision analytic

approach, the focus changes from the system as a whole to that of sequential

decisions regarding individual plants or characteristics of these plants. The

state-of-the-art decision analysis software DPL (ADA, 1992) enabled the use of decision trees and

influence diagrams to structure various prototypes investigated.

Within a decision analysis framework, capacity planning becomes that of

generating alternatives, evaluating them, and selecting an appropriate course of

action. The three types of decisions, namely, technology choice, capacity size, and

timing can be considered separately or together. The technology choice model of

Keeney et al (1986) exemplifies the stand-alone analysis of the technology decision.

However, many other studies have argued the need to consider all three decisions

in the context of optimising the entire system (portfolio) because any optimal

allocation affects merit order. This multi-staged analysis of decisions results in the

introduction of a specific technology of a specific capacity size at a specific future

date. The decision model replicated follows that of Cazalet, not Keeney.

B.4.2 Three Prototypes

Unlike the previous two approaches, the decision analytic approach is not based on

optimisation, through which capacity planning can be fully specified. Model specification

in decision analysis centres on structure rather than on the data intensiveness

of the previous approaches. Structure is characterised by symmetry or asymmetry of

the decision tree, repetitive decision sequence, number of alternatives, and other

distinguishing features of decision trees. Alternative structures of capacity

planning can be examined via the construction of prototypes. A prototype refers to

a model configuration through which aspects of the approach can be investigated.

Prototypes combine features of the above studies and feasible formulations of

capacity planning in the UK context. The simplest is the single project timing

decision in which the type and size of plant have already been determined. Next is

the technology choice model, i.e. a decision regarding the selection of one of two

competing technologies. The first prototype focusses on the multi-stage nature of

planning decisions. The second shows the resolution of uncertainty. The third

looks at technology choice as a function of annual costs. Although other

formulations are possible, these three prototypes capture the most important

aspects of capacity planning that can be modelled in decision analysis.

Figure B.4 shows the first prototype configuration. At each period, the decision

maker can choose to continue or abandon the project. If he/she chooses to

continue, there is still an uncertainty about delay. Let the current stage be i. If the

next stage advances to i+1, i.e. progress, then a payment of the interest during

construction (IDC) is incurred. If the next stage remains at i, then there is a delay

and a delay penalty must be paid. If the following stage still remains at i, then there

has been no progress and a no-progress penalty must be paid. If this stage is 0,

then the project has been abandoned. The penalty values are levied as follows: a

delay implies interest accumulation (extra interest payment if the construction cost

is borrowed in full or else the cost of extending the borrowed funds to cover

capital expenditure) or the difference between interest earned and paid and ongoing

recurrent costs. No progress implies some kind of ongoing cost that has to be

paid. Abandonment of a project implies a payment of a one-off fee to end the

contracts. Once the project is completed after 3 stages, it can begin to earn

revenue. The revenue is determined by the actual demand and level of total

capacity in the system. Demand is a chance event. Demand in each period is

conditional on the demand realised in the previous period. This model allows

discrete uncertain events to be introduced at any stage. In addition, structuring the

problem in this manner allows the consideration of the impact of delays.

Figure B.4 Prototype One: Single Project

[Decision tree: at Year 0 the decision maker chooses whether to start; at Years 1 and 2 whether to continue or abandon. Each "continue" branch leads to a chance node with three outcomes: progress (IDC paid, stage advances), delay (delay penalty paid, stage unchanged), and no progress (no-progress penalty paid).]

B.4.3 Marginal Cost Analysis

Marginal cost analysis as employed in the first pilot study (Appendix A) is useful

for comparing plant alternatives that differ in technology (type), size, and timing.

A simplified spreadsheet is linked to an influence diagram model. Different kinds

of uncertainties are inserted in the decision tree to show their impact on cost. In

figure B.5, the choice of discount rate is determined first, followed by the type of

plant (according to thermal efficiency).

Figure B.5 Prototype Two: Marginal Cost Analysis

[Decision tree: Discount Rate (low/high) → Plant Choice (A/B) → Construction Cost (same/grow) → Fuel Price (same/grow).]

The technology choice decision can also be evaluated by a typical annual cost

breakeven analysis as prototype three. Here fixed and variable costs for a typical

year of operation are compared against an equivalent annuitisation of competitive

electricity prices. The electricity price of an equivalent operating technology is

converted to an annual cost figure for comparison. A plant is only worth building

if it satisfies two conditions. One, it can be bid into the pool. Two, it can be

profitably operated.
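The two conditions of prototype three can be sketched as a simple test. The figures and parameter names below are illustrative assumptions, not values from the thesis.

```python
def worth_building(fixed_cost, variable_cost_per_mwh, expected_pool_price_per_mwh,
                   expected_output_mwh):
    """Prototype-three style test (illustrative): a plant must clear both hurdles.
    One, it can be bid into the pool (variable cost below price). Two, it is
    profitable over a typical year (margin covers fixed costs)."""
    can_bid = variable_cost_per_mwh <= expected_pool_price_per_mwh
    margin = (expected_pool_price_per_mwh - variable_cost_per_mwh) * expected_output_mwh
    profitable = margin >= fixed_cost
    return can_bid and profitable

ok = worth_building(fixed_cost=50e6, variable_cost_per_mwh=20.0,
                    expected_pool_price_per_mwh=30.0, expected_output_mwh=6e6)
```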

The decision tree allows us to focus on different sequences and order of decisions

and uncertainties. However, without constrained optimisation, decision analysis

cannot capture the details of plant for merit ordering and other operational

intricacies of power generation.

APPENDIX C

A Conceptualisation of Model Synthesis

This appendix contains the author's own conceptualisation of model synthesis to

fill the void in the modelling literature. There are many possibilities for synthesis

but no criteria or strategies to this end. The decision support literature, e.g. Dolk

(1993), discusses model integration and model management systems with respect

to developing new modelling languages and environments but not ways to

synthesize existing techniques and models. The host of conceptual issues suggests

that model synthesis is not a trivial undertaking by any means.

This appendix is organised as follows. Section C.1 clarifies the concept of model

synthesis, e.g. the difference between a technique and a model. Section C.2 shows

the potential for synthesis between various techniques by illustrating the similarities

between them. Section C.3 examines model structuring issues of technique choice,

ordering, and linkage. Section C.4 proposes a distinction between weak and

strong forms of synthesis as a taxonomy for levels of integration. Section C.5

offers different strategies for synthesis.

C.1 Definitions

A technique is a proven, standardised method which solves a specific class of

problems. Techniques are defined by their functionality such as optimisation,

simulation, and other types of analysis. Many operational research methods and

financial techniques have been incorporated into commercially available software

tools, e.g. Excel add-ins.

A model is an application of a technique to solve a more specific problem. A

technique becomes a model when input and output variables are specified and data

is applied. A technique is the engine without the data, whereas a model employs a

technique with data. A model evolves into a new technique when its algorithm or

driving engine becomes generic, i.e. applicable to a class of problems but not

confined to a context-specific problem. A model is a smaller and simpler

interpretation of a bigger and more complex reality. [An approach refers to a

method, such as the use of a technique or model, or even a combination of

techniques and models to solve a particular problem.]

A composite model (Kydes and Rubin, 1981) is any model which is made up of

separate components, each independently developed and not originally designed to

be compatible, and built by integrating (linking) two or more separate, dissimilar

types of methodologies. Model synthesis concerns the use of more than one

technique or model to build a composite model. A related term, integrated

modelling (Geoffrion, 1987) refers to the coordinated unification of two or more

distinct models, enabling results and insights that cannot be achieved by separate

models, hence a need to preserve the conceptual integrity of sub-models or

components. Model synthesis refers to synthesis of methods or features of

approaches, not just at the output level of, say, combining forecasts.

C.2 Synergies Between Techniques

C.2.1 Decision and Uncertainty Nodes

If represented as a sequence of decisions and uncertainties in figure C.1, the

techniques of decision analysis, risk analysis, scenario analysis, and sensitivity

analysis have some similarities, suggesting possibilities for synthesis. The

deterministic base case model, where single mean values represent uncertainties, is

limited in scope when compared to other models of decision analysis, scenario

analysis, sensitivity analysis, and risk analysis. Scenario analysis, sensitivity

analysis, and risk analysis are single staged while decision analysis is multi-staged.

Another difference lies in the order of deterministic and uncertain nodes. Decision

analysis and risk analysis are able to consider continuous probability distributions

while scenario analysis and sensitivity analysis are limited to the discrete. The

decision maker's attitude to risk is lacking in all except decision analysis.

Figure C.1 Similarities of Techniques

[Diagram: each technique shown as a sequence of decision and uncertainty nodes. The deterministic mean-value model (baseline plan) uses single mean values throughout; decision analysis is multi-staged, alternating decisions and uncertainties; scenario analysis, sensitivity analysis, and risk analysis are single-staged, with discrete or continuous uncertainty nodes, singly or repeated.]

The above differences suggest a complementarity between probabilistic and

deterministic methods. Deterministic methods by themselves fail to consider the

likelihood of possible outcomes and, in the case of power planning, may result in

selecting a technology even when its cost advantage is much smaller than the

degree of uncertainty. Risk simulation promises the rigorous uncertainty analysis

that formal algorithms are unable to offer. Some compromise may be achieved by

a synthesis of probabilistic and deterministic models. In fact, the Sizewell B public

inquiry (Layfield, 1987) concluded that probabilistic analysis is a helpful and

complementary method for analysing uncertainty and therefore has been

recommended in making proposals for future power stations.

The computational rigour of optimisation algorithms is a complement to the

structural strength of decision analysis. Optimisation can handle many

quantitative variables while decision analysis can deal with the non-quantitative,

subjective, and judgmental variables, thus incorporating the preferences and values

of the decision maker. These two techniques offer a balance of hard/soft,

prescriptive/descriptive, and deterministic/probabilistic characteristics. Davis and

West (1987) suggest using linear programming to pre-generate alternatives for

decision analysis to assess. This approach is similar to the modelling-to-generate

alternatives decision support system of Brill et al (1990). There is a trade-off

between complete analysis of all potential feasible solutions through linear

programming and partial analysis of limited alternatives with decision analysis.

C.2.2 Sensitivity Analysis, Risk Analysis, Decision Analysis

Similarities in project appraisal, risk analysis, and decision analysis processes

suggest a merger in figure C.2. In risk analysis, probability distributions replace

single point forecasts, thereby adding more information to the analysis of basic

project appraisal. Decision analysis brings in the values and preferences of the

decision maker through utility functions which reflect risk attitude. These

functional similarities strengthen the structural synergies between decision analysis

and risk analysis in the previous figure C.1.

Figure C.2 Risk Analysis and Decision Analysis

[Three flow diagrams of a single capital investment proposal leading, via information, intangibles, and managerial judgement and review, to a decision. In basic project appraisal, single-valued forecasts yield a rate of return with sensitivity analyses. In the risk analysis process, probability distributions for decision variables yield probability distributions for NPV, IRR, and payback. The decision analysis process adds the generation of risk attitude in terms of a utility function. Source: Hertz and Thomas (1983)]

When the chance nodes of a conventional decision tree are replaced with

continuous probability distributions, the analysis becomes that of a stochastic

decision tree, as first described in Hespos and Strassman (1965). All quantities

and factors, including chance events, can be represented by continuous, empirical

probability distributions. The information about the results from any or all possible

combinations of decisions made at sequential points in time can be obtained in a

probabilistic form. The probability distribution of possible results from any

particular combination of decisions can be analysed using concepts of utility and

risk. However, like most decision trees, it quickly becomes messy, too large and

cumbersome. The rapid increase in size is compounded by the assessment of

additional uncertain quantities and the existence of time-series dependence

between variables. Reduction methods which screen out dominated options would

be helpful. Again, as seen in other modelling approaches, there is a trade-off

between model complexity and assessment efficiency.
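A stochastic decision tree can be sketched in miniature by simulation. In the illustrative Python fragment below (all figures invented), the demand chance node is a continuous distribution rather than a few discrete branches, so the option's payoff is itself a distribution rather than a single expected value.

```python
import random
import statistics

def simulate_option(build, n=2000, rng=None):
    """Stochastic decision tree in miniature: the demand chance node is a continuous
    distribution rather than discrete branches. All figures are illustrative."""
    rng = rng or random.Random(0)
    payoffs = []
    for _ in range(n):
        demand = rng.lognormvariate(0.0, 0.3)            # continuous uncertainty
        if build:
            payoffs.append(100.0 * min(demand, 1.2) - 90.0)  # capped revenue less cost
        else:
            payoffs.append(0.0)
    return payoffs

build_payoffs = simulate_option(True)
mean, spread = statistics.mean(build_payoffs), statistics.pstdev(build_payoffs)
# The full distribution, not just the mean, can then be judged with a utility function.
```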

C.3 Structuring

C.3.1 Selection of Components

Synthesis is achieved within the context of available techniques or models. Rather

than developing new model components, existing available models or techniques

are used so that the main task of synthesis becomes that of structuring and

integrating. The selection of which technique to use is mainly driven by

functionality, e.g. optimisation, scheduling, simulation, etc. However, matching

existing models or techniques to the problem also depends on execution costs,

input requirements, applicability, and other appraisal criteria (Ghosh and

Agarwal, 1991). These factors determine the number and kinds of techniques to

use and the method of synthesis.

The high cost of model development (Balci, 1986) is one of the main reasons for

using commercially available software packages that provide the algorithms and

environments for model synthesis. In recent years, a proliferation of such software

has facilitated rapid model development as well as model synthesis. The use of

such software pushes verification and validation to others, reduces the overall

validation effort, eliminates extensive programming, and allows model builders to

concentrate on careful problem analysis, formulation, and sound data collection.

Therefore, software availability is an important determinant of technique

selection. With regard to multi-technique software, Excel is beginning to do for

modelling what statistical packages have done for data analysis.

The modeller's familiarity with model components greatly reduces modelling time

and effort. Recent trends in user-friendliness, speediness of software upgrades,

and standardisation of software and hardware help to flatten the learning curve and

accelerate familiarity. However, familiarity produces a technique-driven bias,

steering the modeller away from components that may be better suited to

the problem at hand. A model-building team made up of different technique

experts, instead of a single modeller, may be needed to mitigate this technique-driven bias.

The level of detail that can be incorporated by each technique varies greatly. Some

are better at specifying technical and operational detail, while others are more

capable of explicit uncertainty treatment. A manageable level of detail calls for

technique specialisation, i.e. selecting the right techniques to represent the

following types of detail: accounting, financial, causal, intangibles, non-linear

effects, dependence, uncertainty, and time dynamics. In fact, this is one of the

main reasons for synthesis, i.e. each model component is selected for its functional

specialisation. It is necessary to keep the different levels of detail in check and to

avoid runaway complexity to maintain manageability. The intention

(Greenberger, 1981) is to keep the model as simple as possible while still capturing

the essence of the problem.

The right mix of functionality also helps to achieve completeness in problem

specification. Comprehensive modelling exhibits holism and balance so that no

single aspect is distorted (Goldberg, 1987). Chapter 3 proposed complementarity

by way of balancing the hard and soft, descriptive and prescriptive, and the

deterministic and probabilistic for greater model completeness. Complementary

techniques balance one another's capabilities and limitations. For example,

the combination of linear programming followed by sensitivity analysis exhibits a

balance of hard, prescriptive, and deterministic linear programme with the soft and

descriptive functionality of the latter technique. Such a sequence of scenario

analysis, linear programming, and sensitivity analysis was actively used by the

CEGB in the public energy inquiries.

To utilise the results of different components, compatibility is required.

Compatibility means the ability to co-exist and work together. In model synthesis,

compatibility resides at the data and theoretical levels. At the data level,

techniques must be able to share data in the same form or convert them into the

form they need. At the theoretical level, basic axioms must not be violated.

Compatibility between components is a function of the communicability between

different interfaces and protocols. Much of this depends on the interaction

between the components. For instance, the heavy data demands of linear

programming may not be met by the simplistic results of scenario analysis;

therefore they are not compatible at the data interface level.

C.3.2 Ordering

The order in which model components are activated in the synthesis relates to the

strategy of synthesis, which is discussed later in section C.5. In this section, we

discuss two main kinds of ordering: increasing complexity and most relevant

aspect first.

The strategy of increasing complexity, prescribed by Kendall (1969), starts with a

simple model and works towards a more complex model by integration. By

simplicity, it is not clear whether we should build a crude but symbolic version of

the target model and add incremental detail, such as the way we add flesh to the

skeletal frame, or start with the smallest and simplest part of the problem to model

and increase in scope and complexity. From a technique point of view, it seems

reasonable to capture the main elements of the problem in a simple model initially

and then refine by adding greater detail incrementally.

The strategy of decreasing importance calls for first capturing the most relevant

aspect of the problem. Each subsequent technique or model-component addresses

a less important aspect. For example, if the technology choice decision is the most

important aspect of capacity planning, cost-benefit analysis or multiple attribute

decision analysis should be selected. On the other hand, if the investment and

retirement of plants over the forty year planning horizon is more important, we will

need a scheduling device or resource allocation technique.

There are other possible starting points. Using the most intuitive model first to get the decision

makers involved ensures that the basis for the model is user-driven and, as a result,

credible. Scenario analysis suggests starting with peripheral models to avoid

anchoring bias.

C.3.3 Linkage

A composite model organises the components so that the problem can be

addressed in whole. Models that coexist in a given framework are part of a larger

composite model only if their total contribution is greater than the sum of each.

Model synthesis reflects the definition of a system: the whole is greater than the

sum of its parts. While such components may co-exist and still stand alone and

not interact, some linkage is required to pull the outputs together. In most cases,

the components are linked in one or more of the following ways as illustrated in

figure C.3. Two techniques are linkable if they are compatible, that is,

communicable.

Figure C.3 Types of Model Linkages

(1) sequential  (2) parallel  (3) feedback  (4) embedded  (5) mixed

1) The easiest method of integration is sequential, whereby the results from one

component are fed into the next. The sequential manner in which data is passed

limits the number of model linkages. However, sequential modelling is quite time

consuming, because a new stage cannot begin until previous stages have ended.

Once a component has passed its output to the next, it can be shut down or

freed to process more data in an assembly line fashion.

2) Computational speed can be increased by using more than one computer or a

system with multi-tasking ability. Modularity also permits a kind of parallel

processing, i.e. several models are run simultaneously, and the results fed into a

final model. Because computer costs are quite high, in reality this parallel method

is achieved sequentially, with the results of each model saved and entered into the

final model at the end. Models in the same stage of analysis can commence in any

order.

3) A variation of the sequential method is the incorporation of feedback or iteration.

Here, results from one model are fed into a previous model, increasing the number

of interfaces to three if the previous model is not the first model used. Feedbacks

occur in real life; thus, feedback modelling helps to refine the data and correct

earlier assumptions.

4) In an embedded, or nested synthesis, a smaller model resides in a larger model.

The embedded model provides results which are needed by the larger model. In a

concentric structure like the layers of an onion, outer components are highly

dependent on inner components.

5) A multi-level modelling approach to deal with the long range capacity expansion

problem proposed by Butler et al (1992) overcomes some of the difficulties and

inadequacies of certain stand-alone techniques. For example, optimisation models

are not effective in predicting performance based on short term uncertainties or

fluctuations. Simulation would be more suitable, even though it is very data

intensive. A hierarchical modelling process, i.e. modelling at more than one level,

gives the opportunity to test the consistency of various types of decisions.

Components at lower levels report or output to those at higher levels.

Other methods of synthesis are exemplified in practice. For example, the

integrating module of the large energy model NEMS (DOE, 1994) is solely

dedicated to converting, linking, and coordinating other components. In computer

networking, gateways exhibit the same characteristics. Their primary task is to

link different communication networks and protocols. These dedicated

integrators act as middle-men or translators.

Possibilities for synthesis increase with the number of techniques: the more techniques and linkage methods, the more permutable linkages there are. Linkages are restricted by the level of complexity we can handle and by the operational implications of the components themselves. We have shown the possibilities, but guidelines are still needed on which ordering to follow, which linkage is best, the circumstances under which model synthesis is preferred to model development, and whether a dynamic or static linkage is required. Other open issues include unidirectional versus bi-directional data transfer; interfacing; exact versus reduced form for data conversions; when to standardise the types of inputs and outputs; and whether different subsets of data should be passed to different modules or the same data output to more than one module.

C.4 Weak and Strong Forms

We propose a distinction between weak and strong forms of synthesis to reflect the levels of integration and to characterise model synthesis in a more formal way. Whether a synthesis is weak or strong depends on the factors addressed in the previous sections, i.e. technique selection, ordering, and linkage.

A composite model is weakly composed (integrated) if the model components are not highly dependent on each other. The weakest form is a model consisting of stand-alone components, which can be run in isolation or in parallel, independently of each other.

A combination model, constructed from two or more techniques or models, contains components which are not necessarily linked. An integrated model is a combination model with linkage, hence a stronger form. In

the strong form, model components are tightly integrated and contribute towards each other's informational and functional needs. Factors that contribute towards

the strength of synthesis include the degree of cohesion, interaction,

communication, contribution, and dependence. The stronger the synthesis, the

greater is the integrity of the overall whole.

C.5 Strategies for Synthesis

The questions we have raised in model structuring pertain mostly to trading off obviously contradictory criteria, such as completeness and simplicity. Do we

employ a top-down or bottom-up approach? Do we follow pre-specified

instructions or do we follow our instincts? These issues of style and approach

relate to an overall modelling strategy whose determinants are not yet clear. We

suggest three strategies for synthesis: modular, hierarchical, and evolutionary.

C.5.1 Modular

Model synthesis, by its very nature of combining different components, is modular in approach. Miller and Katz (1986) recommend a modularisation scheme in which components are worked on and developed individually. Modularisation allows parts of the model to be changed without affecting the rest, and the model is easy to expand and contract. Different people can work on different parts of the model without having to understand each other's work. Modularising over time is equivalent to the staged approach, where modules can be run in stages if necessary. Each module represents a complete, enclosed aspect of the problem. Both modular and staged approaches help to reduce complexity and increase manageability. Because different modules have different assumptions, some standardisation is required; otherwise cognitive adjustments are needed.

C.5.2 Hierarchical

The concept of hierarchies is related to modularity but with the added dimensions

of order, rank, and organisation. Thus a hierarchical synthesis is a more organised

and stronger form of synthesis than modularisation.

Among the many approaches, Thompson and Davis (1990) describe the problem-driven method. A problem is broken into a series of decision levels, with the highest being the aggregate level, containing the smallest number of variables through grouping similar resources and subdividing the planning horizon. Linked hierarchically, each model component addresses one of the decision levels. A hierarchy implies unequal levels, so not everything can be passed up without filtering, screening, and condensing the data.

Nested techniques follow the hierarchical approach. Those at the top level are

dependent on those at the bottom. Geoffrion (1987) supports this method of

getting the big picture right and adding the details later.
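The filtering and condensing involved in passing data up a hierarchy can be sketched as follows; the demand figures and reserve margin are invented for the example:

```python
def condense(hourly_demand_mw):
    """Lower level -> upper level: pass up only a condensed summary.

    The aggregate level plans with peaks and averages; the hourly
    detail is filtered out and stays at the lower level.
    """
    return {
        "peak_mw": max(hourly_demand_mw),
        "average_mw": sum(hourly_demand_mw) / len(hourly_demand_mw),
    }

def required_capacity(summary, reserve_margin=0.2):
    """Upper (aggregate) level: sizes capacity from the summary alone."""
    return summary["peak_mw"] * (1.0 + reserve_margin)

summary = condense([60.0, 80.0, 100.0, 70.0])
capacity = required_capacity(summary)
```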

C.5.3 Evolutionary

Evolutionary means becoming more developed, more complex, more

differentiated, more advanced, and more integrated. Balci (1986) conceives of

an evolving model which is repeatedly redefined to reflect the new and increased

understanding of the problem, the changing objectives, and the availability of new

data. Ward (1989) suggests that models generating different levels of detail should be developed and introduced in an evolutionary manner to meet the level of integrative complexity most acceptable to the user.

An evolutionary approach reduces the effort involved in model synthesis through incremental additions in model detail. At each step, the level of complexity is kept manageable. Starting from a simple model with few parameters that still captures the big picture, additional factors and dimensions are introduced with a view to testing the feasibility and attractiveness of different techniques. The exploratory way in which detail is added enables the examination of intricate interactions between model parameters. The evolutionary approach also facilitates a thorough analysis of uncertainty, as the absence of a prescriptive element is conducive to learning and testing different possibilities.
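The incremental additions of an evolutionary approach can be sketched as wrapping an existing simple model with new factors, one step at a time. The growth and price-response models below are invented for illustration:

```python
def base_model(params):
    """Step 0: the simplest model -- demand grows at a single rate."""
    return params["demand"] * (1.0 + params["growth"])

def with_price_effect(model, elasticity=-0.1):
    """One evolutionary step: wrap the existing model with an extra
    dimension (a price response) without rebuilding it from scratch."""
    def refined(params):
        price_change = params.get("price_change", 0.0)
        return model(params) * (1.0 + elasticity * price_change)
    return refined

params = {"demand": 100.0, "growth": 0.03, "price_change": 0.5}
simple = base_model(params)                      # big picture only
refined = with_price_effect(base_model)(params)  # one more dimension
```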

In spite of these favourable characteristics, this method of investigation has several

shortcomings. The biggest drawback to an evolutionary approach is the time

commitment. Being exploratory in nature, modelling without a time constraint

runs the risk of never finding anything. Modelling without an end goal or without

defining the boundaries or standards beforehand is indefinite and inappropriate.

C.5.4 Other Approaches

In building decision support systems, Sprague and Carlson (1982) describe three tactical options: the quick hit, staged development, and the complete system. 1) The quick hit, developing a specific model quickly using whatever is available and without any plans for upgrades, has the lowest risk in the short run but no assurance of re-usability, flexibility, or generalisability. This lack of foresight means that it is likely to require much maintenance over time. 2) Staged development is iterative, leading to an accumulation of knowledge over time. It is similar to the evolutionary approach, which allows frequent opportunities to change the direction of modelling. 3) Finally, the complete system approach is the most comprehensive and ambitious and, by implication, the most time-consuming. It requires much foresight and planning but bears the risk of technological obsolescence.

The need for a uniform modelling framework led Geoffrion (1987) to develop what is known as structured modelling. It encompasses a formal mathematical framework and computer-based environment for conceiving, representing, and manipulating a wide variety of models that are hierarchically organised and partitioned. This modelling language differs from Lendaris's (1980) structural modelling, which refers to a collection of elements and their relationships, with emphasis on qualitative structural (geometric and topological) rather than exact numerical or statistical properties. Both modelling paradigms have been developed to address the fragmented modelling world, where low productivity and poor managerial acceptance prevail. Structured modelling is a bold attempt to reduce the multiple representation of models, interfacing problems, the proliferation of different types of models, and the resulting difficulties in model communication. The structured modelling language aims to provide ease of software integration. However, it is not commercially available at the time of writing.

Model synthesis requires adjustment to different terminology within different

modelling environments. The lack of a common modelling language and

framework means that a consistent level of detail and scope cannot be maintained

easily. Just as sensitivity analysis is used to identify a subset of factors, we need to

develop criteria and methods to extract a subset of models from the grand design.

Without sufficient empirical evidence and theoretical foundation, we are unable to

give an exhaustive list of criteria and strategies for model synthesis. Nonetheless,

these conceptual issues provide the basis for further research into model synthesis

and the dimensions of composite models.

Part Two

Flexibility for Uncertainty


CHAPTER 5

Cross Disciplinary Review

5.1 Introduction

For flexibility to be of any use in addressing the uncertainties in electricity capacity planning, its practical and theoretical aspects must be understood. Most discussions of flexibility concentrate on partial aspects only, and such narrow treatments do not transport well to other settings. A review of previous studies of

flexibility suggests that the best way to allow such portability is to study how

flexibility is used, defined, classified, and measured across disciplines and

industries. Such a broad review preserves those aspects or properties of flexibility

that are lost or de-emphasized in any single discipline. The last major cross

disciplinary review of flexibility was conducted over a decade ago by Evans

(1982). Since then, new uses and measures of flexibility have appeared. An

update is therefore timely.

The idea of flexibility appears in many disciplines. In banking and finance, investors' preference for flexibility translates into the notion of liquidity, or the ease with which assets can be transformed. In operations management, flexible manufacturing systems replace the function- and product-specific machines of the past. In the labour markets, employers allow flexible hours to attract better skilled workers. In turn, a multi-skilled worker can entertain more job opportunities. Flexible information systems offer users more functionality. The so-called 'dash for gas' in the present UK electricity industry refers to the rapid build-up of a flexible technology, the combined cycle gas turbine, which can be quickly built off-site in small, modular unit sizes. In all of these areas, flexibility represents a desirable property or goal.

In spite of its popularity, flexibility has not received the formal recognition worthy

of mature and well-defined concepts like optimality or profitability. Although its

qualitative importance is obvious, methods of evaluating its quantitative impacts

(Orr, 1967) remain elusive. Its usefulness may have been overlooked, particularly

its potential contribution to aspects of modelling that are not well served by

existing ideas.

This chapter reviews definitions, applications, and measures of flexibility and

related words in the industries covered by the ABI-INFORM (1970 - 1994) CD-

ROM. Research implications from this review and specific studies of flexibility are

also noted. This review provides the basis for its conceptual development in the

next chapter and its operationalisation in chapters 7 and 8 via measuring and

modelling.

5.2 Energy Sector and Electricity Planning

As early as the 1970s, utility executives had called for flexibility (Schroeder et al, 1981). The need for flexibility in power generation planning has been suggested by Berrie and McGlade (1991), Vlahos (1990), Clark (1985), Borison (1982), and others, none of whom have defined it or shown how it can be used. This supports

the contention of Hobbs et al (1992) that the importance of flexibility to planning is

widely acknowledged, but the concept is rarely defined precisely, much less

quantified.

Experience of deviations from plans has convinced Southern California Edison

(SCE, 1992), a US utility, of the importance of flexibility in planning. Their

method emphasizes planning flexibility by developing built-in 'on and off ramps' that allow quick responses to changing conditions: plausible scenarios are first constructed, and flexible responses to each are then prepared.

Flexibility in planning (Hirst, 1990) comprises the selection of a resource portfolio

that can be easily adapted to various conditions, e.g. small unit sizes, short lead

times, and demand side management programmes. Approximating discrete supply

to continuous demand, small plants could more easily follow load growth than

large plants. Extending the argument to the extreme, a series of zero-lead time,

infinitesimally small power plants can completely eliminate all periods of excess or

deficit capacity. The drawback to maximal flexibility is that small plants cost more

per kW of capacity and per kWh of output, i.e. no economy of scale. Another

flexible planning approach (Yamayee and Hakimmashhadi, 1984) involves

shortening the lead times for different types of plant through the use of option

concepts. Such a plan contains flexible elements (options) and uses a decision rule

to instil flexibility. CIGRE (1991) define flexibility as the ability of the power system to adapt itself quickly to new circumstances to be permanently used in the best way.

Evans (1982) lists four ways to induce flexibility so that positive consequences

occur while avoiding negative ones. He considers them appropriate for research

into flexibility in technology assessment. To illustrate their uses, we add our own

examples from the electricity industry.

1) Flexibility is the ability to bend or change, as in physical malleability. The


flexibility in a class of power plant that is decommissioned but maintained allows it
to be reactivated if demand increases. The new type of combined cycle gas turbine
plants have short lead times in construction, modular units, small unit sizes, and
fast start-up and shut-down times. These characteristics enable the generator to
meet shifts in demand more quickly.

2) Flexibility denotes yielding to pressure or change triggered by a shift in


environmental conditions. Pumped hydro generators and night storage space
heaters provide peak generating capacity by utilising cheap off-peak supplies of
baseload capacity. When peak demand exceeds on-line capacity, water is forced
through the turbines by gravity. Off-peak night storage units hold heat in bricks
warmed up during the period when demand is lowest.

3) Flexibility is the susceptibility of modification or the ability to effect alterations,
such as the liquidity of an investment, a business, or a technological portfolio.
Flexibility is achieved through the overall configuration (via balance, risk
diversification, fuel diversification) rather than through the individual properties of
various components. The flexibility of a capacity mix is determined by the ease in
meeting different conditions of shifting demand levels and load duration curves.

4) Flexibility is the capacity for ready adaptation to new situations. A complex


technological system is planned in such a way to accommodate variations in
demand. One method is through reserve margins, i.e. surplus capacity. A flexible
legal contract contains built-in clauses to enable orders to be cancelled or power to
be bought or sold immediately. One flexible strategy suggested by Merkhofer
(1975) justifies the cost of keeping options open by the expectation that additional
information will permit the decision to be made more effectively later.

5.3 Economics

Since the 1930s, flexibility has been recognised in separate studies as a component

of a wide range of economic decisions. Despite the substantial theoretical attention

in this area, Jones and Ostroy (1984) claim that it plays a limited role in

conventional micro-economic theory due to the difficulty of defining flexibility for

universal application. In other words, flexibility is not formal or central to any

discipline, as it is only desirable but not necessary.

One of the earliest advocates, Stigler (1939) discusses the relationship between

flexibility and adaptability by analysing average and marginal cost curves in the

problem of choosing among alternative plants. Flexibility of operation allows a

plant to be passably efficient over a range of probable outputs. The amount of

flexibility built into a plant depends on the costs and gains of flexibility, hence

implying that flexibility is not a free good. A plant is flexible if it could produce a

wide range of output quantities by incurring relatively small increases in average

cost. This translates to a flat average cost curve, i.e. the flatter it is, the more

flexible the firm.
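Stigler's criterion can be made concrete by comparing how much average cost rises as output moves away from the design point. The two cost curves below are invented; the point is only that the flatter curve signals the more flexible plant:

```python
def average_cost_spread(avg_cost, outputs):
    """Spread of average cost over a range of outputs.

    On Stigler's reading, the flatter the average cost curve
    (the smaller this spread), the more flexible the plant.
    """
    costs = [avg_cost(q) for q in outputs]
    return max(costs) - min(costs)

# Two hypothetical plants, both cheapest at an output of 100 units:
rigid    = lambda q: 20.0 + 0.010 * (q - 100.0) ** 2  # steep curve
flexible = lambda q: 22.0 + 0.001 * (q - 100.0) ** 2  # flat curve

outputs = range(50, 151, 10)
spread_rigid = average_cost_spread(rigid, outputs)
spread_flexible = average_cost_spread(flexible, outputs)
# The flexible plant pays a higher minimum cost for a far smaller spread,
# i.e. flexibility is not a free good.
```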

From his chronological account of the development of flexibility in the economics

literature, Carlsson (1984) concludes that flexibility gives a firm the ability to deal

with all forms of turbulence or uncertainty in the environment.

5.4 Corporate Planning / Business Strategy

The firm's need for flexibility depends directly on the stability of the environment in which it operates. Low flexibility is sufficient for stable environments, while high flexibility is required for unstable ones. Supporting this argument, Mascarenhas (1981) describes two ways for a firm to respond: increase its options (hence its flexibility) or control its environment (make it more stable).

Flexibility is recognised as a key aspect of a firm's response to competitive markets. Flexible specialisation and integration (Gertler, 1988) are strategies for competition between firms, referring to the firm's ability to respond to fluctuations in market demand and to adopt new products quickly. Eppink (1978) notes that

the more uncertain the situation, the more an organisation will need flexibility as a

complement to planning. As such, it is a characteristic of an organisation that

makes it less vulnerable to unforeseen strategic change.

In policy formation, Evans (1982) defines strategic flexibility as the capability

which aids repositioning when conditions change. Flexibility is promoted as a desirable trait of a planning strategy, structure, and plan. Later, Evans (1991) uses

the polymorphous nature of flexibility to develop a framework for strategic

flexibility.

In entrepreneurial management, Stevenson (1985) points out the difference

between entrepreneurial and administrative decision making styles. Entrepreneurs

value flexibility and tend to make commitments in stages, thereby limiting the amount committed while staying flexible.

5.5 Labour Markets

Labour markets have witnessed the spread of flexibility in employment practices,

workforces, and career planning. Against increasing rates of change and volatility

of the labour market, flexible human resource policies have been devised to allow

faster responsiveness. A flexible workforce is multi-skilled, i.e. able to perform

various job functions to meet different needs of the organisation. Women are seen

as a flexible workforce because of their availability for part-time work. Temporary

and contract workers are also part of this group, as their working arrangements can

be adjusted to suit the needs of the employing organisation. Career planning advice is to plan for the unexpected by having a number of alternatives and by remaining flexible enough to switch plans in case one strategy does not work out.

Flexibility, in this sense, is a contingency.

Specific studies of flexibility in labour markets (Pollert, 1991) convey notions of

responsiveness, matching needs, variety, variability, ability to choose, appealing

to many, non-restrictive, and informal. Flexibility appears in the restructuring of

labour markets and labour processes through increased versatility in design and

greater adaptability of new technology in production. Flexible specialisation, for

example, refers to a new form of skilled craft production made easily adaptable by

programmable technology to provide specialised goods which can supply an

increasingly fragmented and volatile market. Labour flexibility refers to different

types of flexibility for employers and employees and has different international

contexts. In France and the UK, for example, employers view flexibility as fixed-term contracts, the ability to lay off workers, and the introduction of flexible

working hours for the employee. Flexi-time (Ullrich, 1980) allows employees to

vary their hours of work to suit their own preferences. On the other hand,

employers in Germany and Sweden emphasize multi-skilling, qualifications, and

training as ways to increase flexibility.

Flexibility applies to both employees and employers. Atkinson (1985) identifies

three kinds of flexibility that are sought by employers. 1) Functional flexibility

refers to the smooth and quick deployment of employees between activities and

tasks. Through multi-skilling and retraining, the same labour force may be

redeployed to different job functions as required. 2) Numerical flexibility allows

worked hours to be quickly, cheaply, and easily varied in line with short term

changes in the demand for labour. [The author might have equally used the term

temporal flexibility.] Contractual relationships govern the kinds of part-time,

temporary, and contract arrangements. 3) Financial flexibility enables a firm to

manipulate labour costs according to the state of supply and demand in the external

labour market, thereby differentiating between performance-based pay and rate-

based pay. Employees, on the other hand, desire two different kinds of flexibility

(Atkinson, 1989): enhanced mobility in the firm and transferability of accrued

benefits, status, and advantages between employers.

The term flexibility has been applied in contradictory contexts. On the one hand,

flexible means widely applicable or accommodating. On the other hand, it also

means tailoring to specific needs, i.e. matching individual requirements. These

opposing connotations are united if flexibility is interpreted as the potential to

tailor to a wide range of specific needs.

5.6 Technology / Information Systems / Telecommunications

Flexible and open systems have the capacity to communicate with other types of

technology protocols, i.e. they are responsive to other types of signals. Flexible

network routing is intended to accommodate rapidly growing demands that

materialise. On the other hand, an inflexible technology is slow and costly to adjust

to unexpected events.

In the design of effective decision support systems (DSS), Sprague and Carlson

(1982) recognise four levels of flexibility (changeable, modifiable, adaptable, and evolutionary). At the first level, the flexibility to solve enables the user to confront

a problem in a flexible, personal, or creative way. At the second level, the

flexibility to modify allows the configuration of a specific DSS to be modified so

that it can handle a different or expanded set of problems. At the third level, the

flexibility to adapt refers to the DSS-builder's ability to adapt to changes that are

extensive in an incremental or staged fashion. The fourth and final level of

flexibility applies to the long term view, that is, the ability of the system to evolve

in response to changes in the basic nature of the technology on which the DSS is

based.

5.7 Manufacturing

The manufacturing sector has seen the greatest proliferation of definitions,

classifications, and applications of flexibility. While cost predominated throughout

most of the industrial era, followed by quality in the 1970s and 1980s, Chandra

and Tombak (1992) observe that flexibility is now the dominant theme in the

1990s because of shorter product life cycles, more competitive markets, and the

availability of new technologies known as Flexible Manufacturing Systems.

Flexibility is no longer just desirable but vitally necessary, to the extent that it

should be part of a competitive strategy (Slack, 1988). Supporting this, Carlsson

(1989) argues that flexibility is as important as costs in determining international

competitiveness. Recent surveys, e.g. Gerwin (1993), show firms ranking of

flexibility after productivity, delivery, and quality in importance for

competitiveness.

In Flexible Manufacturing Systems (FMS), flexibility refers to relaxing the rigidity in scheduling, with production volume that may be varied almost instantaneously.

The flexibility of FMS, according to Hutchinson and Sinha (1989), provides

economic advantages, such as the ability to rapidly introduce new parts and to

change production mix to respond to short-run fluctuations. Flexible couplings

give trouble-free service and can accommodate faults. Flexible or robust designs

relate to survivability under all types of conditions.

The numerous interpretations of manufacturing flexibility, as listed below, have led some

researchers to classify different types of flexibility and to develop conceptual

frameworks to capture the essential dimensions. Slack (1983) says flexibility refers

to how far and how easily you could change what you want to achieve, thus

containing the dimensions of range and ease. Along the same lines, Kumar (1987)

says that flexibility in action of an individual or system depends on the decision

options or the choices available and on the freedom with which various choices can

be made. Similarly, flexibility of a population with respect to a set of alternatives

depends not only upon the number of alternatives but also on the extent to which

the diversity of choice is determined by certain circumstances. Verter and Dincer

(1992) define flexibility as the ability of a system to cope with changes effectively.

Gunasekaran et al (1993) define flexibility as the ability of a manufacturing

system to cope with changing environments.
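The two recurring ingredients in these definitions, the number of options and the freedom with which they can be chosen, can be combined in an entropy-style score in the spirit of Kumar's proposal. The sketch below is our own illustration, not any author's definitive measure:

```python
import math

def choice_flexibility(probabilities):
    """Entropy-style score over a choice set.

    The score rises with the number of options and with how freely
    (evenly) they can be chosen: a single forced option scores 0,
    while n equally available options score log(n).
    """
    return -sum(p * math.log(p) for p in probabilities if p > 0)

forced   = choice_flexibility([1.0])              # no real choice
skewed   = choice_flexibility([0.9, 0.05, 0.05])  # options exist, barely free
balanced = choice_flexibility([1/3, 1/3, 1/3])    # full freedom of choice
```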

Attempts at unifying manufacturing flexibility are superseded by new reviews, new

definitions, and new applications, e.g. Sethi and Sethi (1990), Gupta and Goyal

(1989), and Bernardo and Mohamed (1992). A unique, undisputed definition does

not yet exist as the concept is manifested in different types of flexibility:

operational flexibility, design flexibility, short term, long term, scheduling, job, mix,

action, adaptation, adequacy, process, machine, expansion, volume, production,

etc. The same term, e.g. product mix flexibility, often has different meanings, e.g.

Slack (1983) defines it as the ability to manufacture a particular mix of products

within a minimum planning period. Meanwhile, Gerwin (1982) defines product

mix flexibility as the ability to manufacture a mix of products simultaneously.

5.8 Other Areas

In other areas, flexibility communicates the same kind of diversity, variety, and

desirability. Flexible insurance policies target a broad market because of their

specific tailor-made end-products. The resilience of ecological systems (Holling,

1973) is a kind of flexibility maintained for survival in a range of conditions.

Flexibility of a production system, as noted by Zelenovic (1982), is a measure of

its capacity to adapt to changing environmental conditions and process

requirements. Flexible production systems can therefore provide stable functioning

of systems under the following conditions seen today: high rate of environmental

changes, increasing international competition, and higher quality of technology

innovations. In the field of counseling psychology, Anastasi (1990) sees diversity

of viewpoints towards testing and assessment as leading to a more comprehensive

and flexible model of counseling. In psychological terms, a flexible person is

open-minded and adaptable, whereas an inflexible person is unable to deal with

ambiguity and uncertainty.

5.9 Observations and Conclusions

As apparent from this review, the literature on flexibility is rich with definitions,

measures, and applications. Further analysis is required to resolve the confusion and clarify the following four points.

1) Research on flexibility is fragmented across many disciplines, which together point


to the versatility of its use and also a lack of consensus. For example,
manufacturing flexibility is poorly understood because different studies have
emphasized different aspects of flexibility (Kumar, 1986).

2) Identical terms used in different studies do not necessarily have the same meaning.
For example, decision flexibility, according to Heimann and Lusk (1976), is an
alternative criterion to the expectation model, whereas Merkhofer (1977) defines it
as the availability of alternatives or the size of choice set for a decision. Clearly,
criterion and choice set are not the same.

3) The interpretations of flexibility are often confusing. For example, Mandelbaum
(1978) considers diversity a source of flexibility, i.e. a way to achieve or increase
flexibility. On the other hand, DeGroote (1994) says that flexibility is a hedge
against diversity. Stirling (1994) says flexibility is a form of diversity, but at the
Sizewell B public inquiry, diversity was largely regarded as a hedge against
uncertainty.

4) In the extreme, the orthodoxy of flexibility has been criticised by Pollert (1991) as
being an ideological fetish. In connection with labour markets, for example, it is
ambiguous, multi-form, generic, confusing, and heavily value-laden.

Our intuitive understanding of flexibility is not confined to a single discipline but

rather inferred from its uses in a range of disciplines. Previous studies of flexibility,

notably Merkhofer (1975), Mandelbaum (1978), Eppink (1978), and Evans (1982),

support the six main observations we make from this review.

1) Flexibility along with other closely related words is widely used across different
areas. Although we have only reviewed the uses and definitions of flexibility in
business-related fields, we can expect the notion of flexibility to exist everywhere.
Table 5.1 lists the main uses of flexibility as covered in this review. It shows that
flexibility is a characteristic of the product as well as the process, e.g. plan and
planning, decision maker and decision making.

2) Reference to flexibility has increased in the last two decades. The word 'flexibility' or 'flexible' is used much more often today than it was a decade ago (ABI-INFORM, 1970-1994). This is due both to new conditions under which
competitive markets and uncertain environments as well as the greater availability
of technological options and increased functionality of systems to meet these needs.

3) It is more strongly advocated as a desirable property than a necessary one.


Flexibility, along with other related words, has been hailed a desirable property of a
system, an organisation, a portfolio, a plan, and a process such as planning or
decision making. It is advocated as a response to uncertainty as well as to
competition, especially in new markets and unstable environments.

4) The concept implies having or offering an abundance as well as a variety of
choices. It is a way of coping with unavoidable uncertainty by having or offering
many different choices or functions.

5) It also conveys a responsiveness through active behaviour, i.e. reacting to change,
as well as through inactive behaviour, i.e. tolerating change so that no reaction is
needed.

6) Flexibility seems to fill the gap between planned (intended) and actual outcomes,
especially in situations involving high cost, long lead time, and heavy
commitments.

Table 5.1 Uses of Flexibility

Areas reviewed (columns of the original table): energy and electricity supply;
economics; corporate planning and business strategy; labour markets; information
and telecommunication systems; manufacturing; other areas.

Flexibility as applied to:

plan (planning, planning strategy, planning process, planning approach): noted
in three of the areas reviewed.

decision maker (firm, organisation): noted in three areas; applied to the person
under other areas.

system, structure, plant: noted in energy, economics, corporate strategy,
information systems, and manufacturing; appears as restructuring of labour
markets under labour markets, and as production systems under other areas.

decision, choice: noted in two areas.

others: model, portfolio, and capacity mix (energy and electricity supply);
schedule and workforce (labour markets); design (other areas).

Five immediate directions for further research arise from this review.

1) Several concepts are closely related, e.g. flexibility increases with increasing
number of alternatives, diversity of choices, response to change, etc. These
relationships suggest that flexibility may be a function of more established concepts.
How does flexibility relate to more established concepts?

2) In all of the sectors reviewed, flexibility is advocated as desirable, albeit not without
negative effects and provisioning costs. This raises the question: is flexibility always
desirable? Under which conditions is flexibility useful? What happens when we
have too much or too little flexibility? How much flexibility is enough?

3) If flexibility is not always desirable, this poses another question. Under what
circumstances is it no longer desirable? Gerwin (1993) warns of the downside of
flexibility, to which little, if any, attention has been given. For instance, excess
amounts of flexibility, such as quick short-term responses to uncertainty, could lead
to wasteful activities.

4) While a unique and formal definition may not yet exist, the manner in which it is
used and promoted may help to elaborate the concept. By this, we mean identifying
necessary elements to define flexibility, such as Eppink's (1978) designation of
type, aspect, and components of flexibility.

5) To harness this concept for practical use, we need to know how to operationalise
and measure flexibility.

In addition to the above, specific studies of flexibility have raised research

questions that apply to our assimilation of the concept. Most of Mandelbaum's

(1978) open research areas still hold today. For example, problems of continuous

action space make the range aspect of flexibility difficult to measure. Practical

development of flexibility attributes and implications of their use are required, thus

highlighting a need to tie together the practical and theoretical aspects.

Mandelbaum also suggested using stochastic dynamic programming to study

flexibility in traditional multi-period stochastic models. Subsequently, Kulatilaka

(1988) uses this technique as a means to capture the option value of flexibility.

Mandelbaum's suggestion raises a further interesting question: do certain

modelling approaches facilitate measures of flexibility better than others? In other

words, is flexibility a feature of the modelling approach?
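Kulatilaka's use of stochastic dynamic programming can be illustrated in miniature by one stage of backward induction: a committed plan fixes its action before the uncertain state is revealed, while a flexible plan chooses after observing it. The sketch below is a minimal Python illustration with entirely hypothetical payoffs and action names; the gap between the two values is the option value of flexibility.

```python
def expected_value(fixed_payoffs, probs):
    """Expected payoff of a committed plan: the action is fixed before
    the state of the world is revealed."""
    return sum(p * v for p, v in zip(probs, fixed_payoffs))

def flexible_value(payoff_by_state, probs):
    """Value of a flexible plan: in each state we take the best available
    action, i.e. the choice is made after the state is observed (one
    stage of backward induction)."""
    return sum(p * max(actions.values())
               for p, actions in zip(probs, payoff_by_state))

# Hypothetical two-state demand future, equally likely.
probs = [0.5, 0.5]
payoffs = [{"run_gas": 8, "run_coal": 2},   # low demand
           {"run_gas": 4, "run_coal": 10}]  # high demand

committed = expected_value([8, 4], probs)   # locked into gas in advance
flexible = flexible_value(payoffs, probs)   # choose after observing demand
print(committed, flexible)                  # 6.0 9.0
```

The flexible value can never fall below the committed value, since choosing after the state is observed dominates choosing before; the more the states diverge, the larger the difference.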

Having established the practical importance of flexibility, we now turn to the

theoretical side to address the research issues raised above.

CHAPTER 6

Conceptual Development

6.1 Introduction

We first clarify and unify the multi-faceted meaning of flexibility by summarising a

conceptual analysis (section 6.2) and then by developing a conceptual

framework (sections 6.3 to 6.8). The conceptual framework examines how

flexibility relates to more established concepts, like robustness, optimality, risk,

regret, commitment, confidence, options, and uncertainty. Some of these

relationships have been formally proven. Others are illustrated by examples. These

conceptual relationships are depicted by triangles in figure 6.1 and discussed in the

numbered sections. Following this, we determine the conditions under which

flexibility is useful (section 6.9), discuss its downside (section 6.10), distill

necessary elements to define flexibility (section 6.11), highlight the important

concept of favourability (section 6.12), and suggest strategies to operationalise it

(section 6.13). The final section (6.14) summarises the main findings and raises the

need for measuring and modelling flexibility.

Figure 6.1 Conceptual Framework

[Diagram: flexibility at the centre, connected to confidence, commitment,
options, robustness, optimality, uncertainty, risk, and regret; the pairwise and
triangular relationships are labelled with the sections (6.3 to 6.8) in which
they are discussed.]

6.2 Conceptual Analysis

One way of understanding a concept is through word association. Evans' (1982)

so-called conceptual analysis of flexibility involves a semantic assessment of

related words. We briefly describe those words which share the closest meanings

with flexibility: adaptability, elasticity, liquidity, plasticity, robustness, resilience,

and versatility.

Adaptability is the ability to respond to foreseen changes, while flexibility is the

ability to respond to unforeseen changes (Eppink 1978, Evans 1982).

Adaptability is necessary but not sufficient to provide flexibility. Elasticity is

similar in the context of return to a normal state. Liquidity, meaning the ease of

conversion, is also a kind of flexibility, being the ease of transition from one time

period to a desired position in the next period (Jones and Ostroy, 1975). In this

sense, flexibility as defined by Goldman (1974) is the capacity of a portfolio to

furnish a variety of consumption plans. Both plasticity and flexibility denote some

form of malleability. While plasticity denotes the ability to maintain a state,

flexibility, in addition, embraces the ability to influence successfully a transition to

other states. Robustness and resilience are closely related; the former refers to the

ability to satisfactorily endure all envisioned contingencies while the latter refers to

the ability to absorb or accommodate unforeseeable shocks and discontinuities.

Hashimoto (1980) and Hashimoto et al (1982) make use of robustness in water

resources planning. By far the closest resemblance to flexibility is captured in the

word versatility. Versatility is sought as a hedge against state changes, and as

such, is optimal for an infinite sequence of decisions (Bonder, 1979).

6.3 Flexibility and Robustness

6.3.1 Two Types of Flexibility

Gupta and Buzacott (1988), Mandelbaum (1978), Eppink (1978), and Ansoff

(1968) see two fundamental ways of responding to change and uncertainty, which

correspond to two types of flexibility. Active or action flexibility is the ability to

respond by changing or reacting. Passive or state flexibility, on the other hand,

exists when there is no need to react because of immunity, insensitivity, or

tolerance. It is the innate capacity to function well in more than one state and thus

possible to ignore changes. We refer to the second type of flexibility as

robustness. This dichotomous interpretation of flexibility is summarised

chronologically in table 6.1.

Table 6.1 Flexibility and Robustness

Gupta and Buzacott (1988)
  Flexibility (SENSITIVITY): the degree of a change tolerated before a
  deterioration in performance takes place. The higher the degree of tolerable
  change, the less sensitive the system is to that change.
  Robustness (STABILITY): the maximum size of a disturbance for which the
  system can still meet the performance targets via some corrective action.

Mandelbaum (1978)
  Flexibility (ACTION): the ability to respond to change by taking appropriate
  action.
  Robustness (STATE): the innate capacity to function well in more than one
  state.

Eppink (1978)
  Flexibility (ACTIVE): the response capacity of the organisation.
  Robustness (PASSIVE): the possibility to limit the relative impact of a
  certain environmental change.

Ansoff (1968)
  Flexibility: INTERNAL.
  Robustness: EXTERNAL.

Mandelbaum (1978) observes that action flexibility is only needed when we have

less than perfect information. It is acquired by taking appropriate action after the

change takes place to take advantage of the new state. This kind of flexibility is

only desirable when there is uncertainty about what actions to take and useful if

that uncertainty is reduced.

State or passive flexibility, on the other hand, already exists in the new state when

the change takes place. Therefore it is not necessary to learn about the present

state. This built-in flexibility is analogous to prevention rather than cure. A

system's ability to cope with changes is robust if it is independent of the choice of

future actions. In other words, it is able to continue functioning despite the

change.

These two types of flexibility agree with the conceptual analysis of Evans (1982).

Flexibility is the inherent capability to modify a policy to accommodate and

successfully adapt to such changes, whereas robustness refers to the ability to

endure such changes.

The main argument against this dichotomous characterisation of flexibility is that it

is neither exhaustive nor exclusive. Where flexibility denotes speed of change,

diversity of alternatives, or abundance of possibilities, any planning approach to

this end readily encompasses both notions of flexibility and robustness and more.

The distinction between passive and active forms of flexibility becomes less clear

when used simultaneously. In other words, a person may choose to be both

flexible and robust or maintain a system that is robust overall but containing

flexible elements. As flexibility and robustness are so closely related, we

investigate the meaning of robustness in the next sub-section and then compare it

with flexibility afterwards.

6.3.2 Robustness

Robustness is a term in its own right. It has several definitions. In quality control,

robust quality represents the zero defect concept. In statistics, robust regression

refers to a resistance in the key components of a model to the effects of outlying

observations. Robustness to likely errors is the ability of a procedure to give good

results under less than ideal conditions. Accuracy in problem representation can

be increased by model robustness considerations which reduce the sensitivities of

individual variables.

In the context of systems planning, robustness is a desirable goal. Hobbs et al

(1994) define a robust plan as one whose cost varies little with changes in

assumptions. Hence, robustness communicates the notions of predictability and

stability. Hashimoto et al (1982) associate robustness with the probability that the

actual cost of the system will not exceed some multiple of the minimum possible

cost of a system designed for the actual conditions that occur in the future.

Hashimoto (1980) says that robustness is related to Stigler's (1939) concept of

flexibility, i.e. acceptable over a range. Merrill and Wood (1991) equate

robustness with the proportion of possible futures a given plan would be best in.

Gupta and Rosenhead (1968) state that it is closely related to the term

adaptability in the context of the number of irrevocable decisions that must be

made now versus the number and diversity of options left open. Paraskevopoulos

et al (1991) pose robustness as the (in)sensitivity to different sources of

uncertainty which directly translates to testing a plan that is optimal under a given

scenario against other scenarios and parameter sensitivities.
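These definitions can be operationalised directly. The following minimal Python sketch, with hypothetical plan costs, computes Merrill and Wood's proportion-of-futures measure and a Hobbs et al style cost-variation indicator:

```python
# Hypothetical costs of three candidate plans under four equally likely
# futures (one list entry per future; units arbitrary).
costs = {
    "plan_A": [100, 105, 102, 110],
    "plan_B": [90, 130, 95, 140],
    "plan_C": [105, 106, 104, 107],
}

def merrill_wood_robustness(costs, plan):
    """Proportion of possible futures in which `plan` has the lowest cost."""
    n = len(next(iter(costs.values())))
    wins = sum(1 for j in range(n)
               if min(costs, key=lambda p: costs[p][j]) == plan)
    return wins / n

def cost_spread(costs, plan):
    """Hobbs et al style indicator: how much the plan's cost varies with
    changes in assumptions (max minus min across futures)."""
    return max(costs[plan]) - min(costs[plan])

print(merrill_wood_robustness(costs, "plan_B"))  # best in 2 of 4 futures
print(cost_spread(costs, "plan_C"))              # smallest cost variation
```

In this illustration, plan_C's cost barely moves across futures while plan_B is most often best but highly exposed, which is exactly the trade-off these measures are meant to reveal.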

Robustness is a necessary pre-requisite of a solution, a result, a model, and a

method if it is to be generalisable, reliable, and widely applicable. Statistical

robustness guarantees that a solution is not vulnerable to error. Robustness of

results is tested by using alternative methodology. Model robustness is tested by

using an alternative set of data or by changing parameters. Robustness

demonstrates how powerful the method is, how applicable it is, and how well it

performs regardless of changes. Other words that are implied by the word

robustness are consistency, insensitivity, tolerance (of error and change),
long-lasting, durability, and sustainability.

6.3.3 Flexibility versus Robustness

The words flexibility and robustness often appear in the same articles and are even

used interchangeably as if they mean the same thing. They have also been used to

define each other, which is a common source of misinterpretation. For

example, CIGRE (1991) suggest that flexibility at the planning stage ensures

certainty of a robust power system in the future. Rosenhead (1980) defines

robustness as the useful flexibility preserved by early decisions in the decision

sequence. Pye (1978) defines robustness as a method of trading off flexibility

against expected value. Hobbs et al (1992) confuse this further by suggesting that

(the pursuit of) flexibility can result in a robust plan that will be satisfactory

under a range of possible market and regulatory conditions even if that plan fails

to be best under any one of them.

In general, when we speak of flexibility, we mean the ability to change by (quickly)

moving to a different state, selecting a new alternative, or switching to a different

production level. Robustness, on the other hand, is associated with not needing to

change. While flexibility is a state of readiness such as the ability to react to

change, robustness is a state of being such as a resistance or an immunity to

change. Flexibility and robustness are not opposite or the same, but merely two

sides of a coin, corresponding to two ways of responding to uncertainty. We

illustrate this distinction in six ways, as follows.

1) Characteristics of Electricity Planning

2) Functional Requirement of Systems

3) Present and Future Costs

4) Over and Under Capacity

5) Response to Uncertainty

6) Feature of Modelling Approach

CHARACTERISTICS OF ELECTRICITY PLANNING

Future uncertainty necessitates the introduction of more flexibility in the planning

of power systems. The resulting plan will not be optimal for a particular future but

satisfactory for most of the possible futures. CIGRE (1991) propose two

approaches to meet this need: devise a system sufficiently robust to withstand

impacts or incorporate flexibility within system development. Robustness

indicates the overall power system strength to withstand external impacts. They

argue that flexibility is superior to robustness because the latter is no longer

adequate when development parameter variations become too large and because

providing robustness becomes too expensive. To support these arguments, it is

necessary to trade off cost and optimal performance.

The degree of flexibility varies with different capacity mixes. A diversified

portfolio is better equipped to cope with variations in electricity demand because

each type of plant represents an open option. This aspect of flexibility is closely

related to Stirling's (1994) notion of diversity in energy supply investment as a

means to security of energy supply. Diversity in plant mix is measured by the

number of different plants according to type of fuel, capacity size, and other

characteristics like operating or scrapping lives and load factor.
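As a hypothetical illustration, such diversity can be proxied by an entropy-type index over capacity shares; the index choice and the figures below are our own illustrative assumptions rather than Stirling's measure.

```python
import math

def diversity_index(shares):
    """Shannon entropy-type diversity index over capacity shares (shares
    sum to 1); higher values indicate a more diverse plant mix."""
    return -sum(s * math.log(s) for s in shares if s > 0)

# Hypothetical capacity shares by plant or fuel type.
concentrated = [0.85, 0.10, 0.05]        # dominated by one plant type
diversified = [0.30, 0.25, 0.25, 0.20]   # spread across four types

print(diversity_index(concentrated) < diversity_index(diversified))  # True
```

The index rises both with the number of plant types and with the evenness of their shares, matching the intuition that each additional, substantial plant type represents a further open option.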

FUNCTIONAL REQUIREMENT OF SYSTEMS

The notions of flexibility and robustness describe two mutually exclusive functional

requirements of information or decision support systems. A flexible system is

capable of performing many functions, thereby supporting multiple patterns of

usage. Robustness, on the other hand, is a preventative, fail-safe, fault-tolerant, or

fool-proof characteristic that ensures the system will not crash. A robust

programming language has a consistent style to its syntax and semantics, whereas a

flexible one can be manipulated to meet different needs.

PRESENT AND FUTURE COSTS

One of the main differences between robustness and flexibility is that the former

implies a present cost whereas flexibility implies a future cost. Robustness by means

of keeping a reserve margin, i.e. over-capacity, contains a present holding cost

which will not be eliminated until demand reaches that margin level in the future.

This is the opportunity cost of providing flexibility as indicated by the level of idle

capacity or the cost of over building.

On the other hand, flexibility is a potential change, reflecting a future cost that has

not occurred yet. But the provision of flexibility, that is, the capability to be

flexible, may incur a present cost. For example, importing electricity when demand

rises implies a future cost, but the option to import in the future implies a present

cost.

The use of fixed and variable costs to describe flexibility was first made by Stigler

(1939) who proposed that flexibility increases when resources are transferred

from the fixed to the variable. A firm is more flexible if it is able to incur

(variable) costs only when necessary.

OVER- AND UNDER-CAPACITY

The provision of flexibility implies an opportunity or idle cost usually associated

with robustness. The capacity to accommodate future changes is the opportunity

cost for under-utilisation (Son and Park, 1987). Reserve margins are typically built

into capacity plans to ensure reliability, especially in meeting peak demand.

Robustness and flexibility translate into over- and under-capacity to deal with

demand uncertainty. To appreciate and quantify this distinction, we perform a

separate study in Appendix D.

RESPONSE TO UNCERTAINTY: Uncertainty Reduction vs Adaptation

Robustness may be associated with uncertainty reduction, by minimising

surprises. Flexibility, on the other hand, refers to adaptation, i.e. we expect there

to be surprises to which we can react by changing. In this dichotomy, Gerwin

(1993) offers several delivery methods to cope with uncertainty, depending on the

nature of uncertainty. Although he does not call these two methods by the names

of robustness and flexibility, the parallelism is evident from his table on page 406,

as reproduced below.

Table 6.2 Gerwin's (1993) Methods of Coping With Uncertainty

Nature of Uncertainty | Uncertainty Reduction Method (robustness) | Adaptive Method (flexibility)
Market acceptance of kinds of products | Long-term contracts with customers | Small setup times, modular products
Length of product life cycles | Life extension practices | Less hard tooling and backward integration
Specific product characteristics | Cross-functional design teams | CNC machines
Aggregate product demand | Leveling demand | High capacity limits, subcontracting
Machine downtime | Preventive maintenance | Redundant equipment
Characteristics of materials | Total quality control | Automated monitoring devices, human inputs
Changes in the above uncertainties | Large size | Reconfigurable equipment

In the context of electricity planning, Gerwin's distinction of responses to

uncertainty applies to those areas of uncertainty identified in Chapter 2. Table 6.3

suggests some ways to respond to these uncertainties.

Table 6.3 Response to Areas of Uncertainties in Chapter 2

Areas of Uncertainty | Uncertainty Reduction Method (robustness) | Adaptive Method (flexibility)
Plant economics: cost of production | Contracts to stabilise pool prices | Avoid plant with high capital cost
Fuel supply | Diversity of plant | Co-firing
Demand level and growth | Over-capacity | Extra production, demand side management
Technology: lead time | Plants at different stages of construction | Import or export power when needed
Financing requirements | Back-to-back contracts | Minimise commitments

FEATURE OF MODELLING APPROACH

Traditional approaches deal with the uncertainty of meeting demand by forecasting,

setting reserve margins based on past volatility of demand, and optimising against

this to produce a capacity expansion plan. This method of dealing with uncertainty

exemplifies the deterministic and probabilistic approaches, which are oriented

towards robustness, i.e. ensuring that the resulting decision or plan is acceptable

with respect to the uncertainties and scenarios considered. Robustness as a means

to model completeness depends on the accuracy of the forecast and the

extensiveness of scenario, sensitivity, and risk analyses. Robustness does not

guarantee that the result will be optimal in the actual future, only that it is optimal

against the anticipated future.

Hobbs and Maheshwari (1990) suggest the use of Monte Carlo Simulation (or risk

analysis) to assimilate total costs from different sources of uncertainty, sensitivity

analysis to show the effects of changes in important parameters on a plan, and

scenario analysis to check how well a plan performs under a particular future.

These three types of analyses are different ways to show how applicable the object

is without having to change it, i.e. robustness. Robustness analysis is a confusing

term as it has been coined by Rosenhead (1989) to describe a problem structuring

method for determining the sequence of decisions that would maximise future

flexibility. In modelling

terminology, however, robustness analysis refers to an analysis of robustness by

means of the above techniques or by statistical tests.
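A Monte Carlo risk analysis of the kind Hobbs and Maheshwari describe can be sketched as follows; every distribution and parameter here is a hypothetical stand-in, chosen only to show how independent cost uncertainties are assimilated into a distribution of total cost.

```python
import random

def simulate_total_costs(n=10_000, seed=42):
    """Draw total cost n times by sampling each uncertain component
    independently and summing (all distributions hypothetical)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        fuel = rng.lognormvariate(3.0, 0.25)    # skewed fuel cost
        build = rng.gauss(100.0, 10.0)          # construction cost
        shortfall = rng.expovariate(1 / 5.0)    # penalty for unmet demand
        totals.append(fuel + build + shortfall)
    return totals

costs = simulate_total_costs()
mean_cost = sum(costs) / len(costs)
p95 = sorted(costs)[int(0.95 * len(costs))]  # 95th percentile of total cost
print(round(mean_cost, 1), round(p95, 1))
```

Reading off a high percentile of the simulated distribution, rather than a single deterministic estimate, is what allows the analyst to judge how acceptable a plan remains under unfavourable combinations of the uncertainties.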

Flexibility, on the other hand, is a measure of contingency against uncertainty. In

this sense (of Knight, 1921), only the area of uncertainty is identifiable, as the

uncertainty itself is unquantifiable. Techniques based on structures similar to

decision trees, such as decision analysis, contingent claims analysis, and

stochastic dynamic programming, have been used in valuing flexibility with

respect to uncertainty and contingency (Dixit and Pindyck 1994, Kogut and

Kulatilaka 1994, Smith and Nau 1990, Triantis and Hodder 1990, Trigeorgis and

Mason 1987, and others.) Flexibility, in this sense, is a feature of the modelling

approach, i.e. capable of representing uncertainty, choices, and contingency.

6.4 Flexibility versus Optimality as a Decision Criterion

The expected utility model follows the principle of optimisation, that is, maximising

the expected utility to achieve the most desirable outcome. In environmentally

dynamic situations, Heimann and Lusk (1976) argue that flexibility offers a better

model than expected utility. In situations with multiple periods, uncertain

availabilities of alternatives, and uncertain occurrences of states of nature,

maximising the probability of achieving a pre-selected output level rather than

expected value is a more feasible and sought-after goal. When the decision maker

does not have full confidence in the model, Mandelbaum (1978) argues that

flexibility is preferred to optimality.

The well established operational research criterion of optimality is often

unacceptable in practice. Profit maximisation or cost minimisation disregards or

distorts situations in which different objectives are not commensurate. In response

to these issues, Rosenhead et al (1972) consider selecting an initial decision that

leaves as many options open in the future as possible, rather than immediately

seeking the course of action that will lead to the highest payoff.

6.5 Robustness, Risk, and Regret

The use of risk as a surrogate for uncertainty is seen in the utility theory of risk

attitudes, risk profiles, and risk preferences. Montenegro (1978) develops a

flexibility preference model based on parallel relationships between risk and

flexibility, e.g. whereas risk relates to returns, flexibility relates to costs. He

defines flexibility as the management and engineering margins implemented in a

system to cope with the uncertainties of its system requirements. Increased

flexibility maximises the expected risk utility function of the decision maker. He

applies this flexibility preference theory to a telecommunications system, which

shares similar traits with energy systems, e.g. capital intensiveness, long planning

horizons, and heavy infrastructural investments. He constructs flexibility profiles

as opposed to risk profiles. The resulting system is robust because of the margins

set in place, i.e. the system need not change.

Risk is sometimes regarded as a negative consequence of uncertainty, such as the

hazards to which an electricity company is exposed. [A more common

understanding is given by Knight (1921) and discussed in Chapter 2.] Merrill and

Wood (1991) propose two measures for this type of risk: the likelihood of

making a regrettable decision (which they call robustness) and the amount by

which the decision is regrettable. Concurrently, Merrill and Wood also advocate

robustness as a way to manage risk. Robustness is defined as a measure of the

safety or lack of risk of a decision. A plan is robust if it represents a reasonable

trade-off with 100% probability or for all possible values of uncertainties. A robust

plan is one which could be selected from every future, no matter how the

uncertainties turn out. The above analysis shows that robustness, risk, and regret

are closely related.

One way to reduce risk is to select a course of action that minimises future regret.

Minimax regret, otherwise known as the Savage criterion, follows the decision rule

of selecting the path of least regret. Regret is defined as the opportunity cost of an

action, i.e. the assessment of a lost or foregone opportunity made in hindsight. For

some individuals, regret or remorse ranks high in terms of emotional distress. The

relationship between risk, regret, and robustness is further supported by the

robustness definition of Gerking (1987): a robust decision is one for which

elements will not have to be regretted.
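The Savage criterion can be stated compactly in code. In this Python sketch the payoffs and action names are hypothetical; regret in each future is the shortfall from the best action for that future, and the rule selects the action whose worst-case regret is smallest.

```python
# Hypothetical payoff matrix: action -> payoff in each future (higher is
# better; one entry per future).
payoffs = {
    "build_large": [120, 40],
    "build_small": [70, 60],
    "wait": [80, 75],
}

def minimax_regret(payoffs):
    """Savage criterion: compute each action's worst-case regret and
    return the action that minimises it, plus the regret table."""
    n = len(next(iter(payoffs.values())))
    best = [max(row[j] for row in payoffs.values()) for j in range(n)]
    worst_regret = {action: max(best[j] - row[j] for j in range(n))
                    for action, row in payoffs.items()}
    choice = min(worst_regret, key=worst_regret.get)
    return choice, worst_regret

choice, regrets = minimax_regret(payoffs)
print(choice, regrets)
```

Hindsight enters only through `best`, the payoff of the action one would have taken had that future been known, which is precisely the opportunity-cost reading of regret given above.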

6.6 Commitment, Confidence, and Flexibility

Commitment, confidence, and flexibility are also related in a similar fashion. If a

decision maker is unsure about his preferences and also not confident of his model,

then he will be reluctant to commit himself to a course of action. In this case, he

would seek flexibility, for instance, by delaying the decision until he gets more

information or by limiting the degree of his commitment. By reducing

commitment, he increases his flexibility, i.e. the ability to select another course of

action.

Commitment refers to the binding nature of a contract. A commitment becomes a

liability when it is no longer needed. A commitment contains an irreversible

element which prevents one from undoing the situation or choosing a different

course of action easily. Once a person has committed to a course of action, he

gives up the opportunity to wait for new information that might affect his decision,

as he can no longer take advantage of this information.

Mandelbaum (1978) and others observe the relationship between a decision

maker's confidence and his commitment. Confidence is a measure of the subjective

certainty the decision maker has in his preferences, his perception of the future, and

other specifications of the model. However, no one has subsequently made the link

between commitment and flexibility.

We propose that flexibility also describes the quality or degree of commitment to

an existing course of action, i.e. the degree of reversibility or opting out. Similarly,

the amount of desired flexibility varies directly with the amount of uncertainty

perceived by the decision maker, such as his confidence in the model, sureness of

his own preferences, and his willingness to commit.

Kreps (1979) asserts that the decision maker's preference for flexibility is increased

when he is unsure about his preferences. Being unsure is equivalent to lacking

confidence. Mandelbaum and Buzacott (1990) show that flexibility compensates

for model unease or lack of model confidence. Someone who is unsure is less

likely to commit himself than one who is sure. Jones and Ostroy (1984) observe

that the variability in a decision maker's beliefs is directly related to the amount of

flexibility he desires. A decision maker who is unsure is more likely to retain some

flexibility than one who is sure about his preferences. Similarly, a person with

minimal commitment, i.e. few and breakable obligations, has more flexibility than

one with more responsibility. The following relationship can be deduced from the

above analysis: lack of confidence reduces the desire for commitment and

increases the preference for flexibility.

6.7 The Right But Not the Obligation

The two types of flexibility correspond to two basic ways of responding to

uncertainty. Passive flexibility or robustness allows uncertainty to be ignored, that

is, insensitivity to external stimuli. It need not respond or change. Active

flexibility refers to the ability to change. These are incorporated in the finance

definition of options, i.e. the right (ability) but not the obligation (need) to

transact (change).

In finance, an option is defined as the right to purchase or sell the underlying asset

at a specific price with the right lasting for a specific time period. The right can

be interpreted as the ability or capability to change, which defines action flexibility,

i.e. the ability to respond to change by changing. Hirshleifer and Riley (1992)

maintain that remaining flexible is like buying an option as to what later action will

be taken: the more flexible position chosen, the greater is the value of the

option. This relationship suggests that techniques based on option pricing theory,

e.g. contingent claims analysis, can be used to assess flexibility.

In the options framework, it can be shown that the value of managerial flexibility is

greater in more uncertain environments and in periods of higher interest rates

(higher opportunity cost), and for investment opportunities of longer duration.

Volatility and risk are surrogate measures of uncertainty, and Tomkins (1991)

states that the greater the risk (or volatility), the higher the option value. The more

volatile the underlying market, the riskier it is. The riskier the market, the greater

the value of the option. Just as volatility determines the option value, we can

expect uncertainty to determine the value of flexibility.

The traditional net present value (NPV) rule of investing in a project when the

NPV of its expected cashflows is at least as large as its cost ignores the

opportunity cost of making a commitment now and giving up the option of waiting

for new information. NPVs do not take into account the considerable uncertainty,

irreversibilities, and possibility of postponement associated with these investment

decisions. Instead of using the NPV rule, Dixit and Pindyck (1994) advocate the

options approach for more accurate analysis (and explicit consideration of

flexibility, which they have narrowly defined as postponement of decisions).
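The contrast can be made concrete with a stylised two-period example in the spirit of Dixit and Pindyck; all figures are hypothetical.

```python
I = 1600                     # irreversible investment cost
p_up, p_down = 0.5, 0.5      # probabilities of good and bad news
V_up, V_down = 2200, 1200    # project value once the news is revealed
r = 0.10                     # one-period discount rate

# Naive NPV rule: commit now against the expected value.
npv_now = p_up * V_up + p_down * V_down - I

# Option of waiting: invest next period only if the good state occurs,
# discounting the payoff back one period.
value_wait = (p_up * max(V_up - I, 0) + p_down * max(V_down - I, 0)) / (1 + r)

print(npv_now, round(value_wait, 2))  # 100.0 272.73
```

Committing now forgoes the chance to avoid the bad state; here the option of waiting is worth considerably more than the immediate NPV, even though the NPV rule alone would already sanction the project.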

In electricity generation, analysis of flexibility using the notion of options is

explored by Yamayee and Hakimmashhadi (1984). What needs further

development is the existence of such options, e.g. legal contracts allowing the

postponement or cancellation of new plants without great penalty. The UK

Electricity Supply Industry is just beginning to see some variations in contractual

arrangements, although most new ventures are still completely hedged in
back-to-back supply and fuel contracts. These completely hedged contracts reflect the

robust rather than the flexible response. The Regional Electricity Companies'

entry into the electricity generation business is a form of strategic flexibility, as is

the investment in CCGTs.

Financial options are financial instruments that can be used for hedging purposes.

Marschak and Nelson (1962) point out the difference between hedging and

flexibility. One hedges because of the uncertainty and desire to avoid high

variance of returns. On the other hand, one takes flexible initial actions when

expecting to learn more about the world and to take advantage of that learning

before making subsequent moves. In short, flexibility can increase the variance

of expected payoffs whereas hedging cannot. Those who hedge are risk averse,

but those who take flexible actions want high payoff. Evans (1982) also notes the

difference between hedges and flexible responses. Hedges and compromises are

options that make the worst outcome a little better at the expense of making the

best outcome a little worse. Hedges involve a negative approach, while flexible

responses cope with various unavoidable uncertainties about the future.

6.8 Uncertainty and Flexibility

The relationship between uncertainty and flexibility was first formally noted by

Stigler (1939). Later Marschak and Nelson (1962, page 52) state the value of

flexibility is a function of variation in price and how well that variation can be

predicted before the decision is made, i.e. the specific uncertainty with which

flexibility deals and the quality of information regarding this uncertainty.

Based on this relationship, they prove that the greater the uncertainty, the greater

the value of flexibility. From a decision theoretic perspective, Merkhofer (1975)

confirms this complementarity of uncertainty, flexibility, and learning. If learning is

expected through resolution of uncertainty, reduction of uncertainty, or acquisition

of additional information, flexibility provides the ability to take advantage of that

learning or new information. DeGroote (1994) formally proves several properties

of flexibility and diversity, a term which conveys notions of variability, variety,

or complexity. He shows that an increase in diversity of the environment makes it

more desirable to select a more flexible technology. Similarly, an increase in the

flexibility of the technology makes it more attractive to operate in a more diverse

environment. These properties support the relationship between uncertainty and

flexibility as uncertainty encompasses diversity and complexity in the environment

as well as any future unknown.
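
This relationship can be illustrated with a small Monte Carlo sketch of our own: a flexible actor chooses the better of two payoffs after the uncertainty resolves, while a committed actor locks one in beforehand; the normal payoff distribution is an assumption made for illustration only.

```python
import random

def flexibility_premium(sigma, trials=100_000):
    """Expected gain from choosing the better of two N(0, sigma) payoffs
    after uncertainty resolves, relative to committing to one in advance
    (whose expected payoff is 0 by construction)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
    return total / trials

random.seed(42)
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma:>3}: premium = {flexibility_premium(sigma):.2f}")
# The premium grows in proportion to sigma (analytically sigma / sqrt(pi)):
# doubling the uncertainty roughly doubles the value of keeping the choice open.
```

The simulation reproduces the Marschak and Nelson result in miniature: the greater the uncertainty, the greater the value of flexibility.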

Fine and Freund (1990) suggest the use of flexible capacity to hedge against

uncertainty in future demand. Flexible technology enables a firm to rapidly

introduce new product models, reduce the need for interperiod inventories, and

expand product scope to invade competitors' markets.

Related to uncertainty and flexibility are other concepts such as liquidity, learning,

and risk, which we mention now.

LIQUIDITY AND LEARNING

Uncertainties of the capital market, Hart (1937) argues, require the maintenance of

flexibility, more specifically known as liquidity. Marschak and Nelson (1962)

observe: the greater the uncertainty as to what investment opportunities will be

next period and the more the investor expects to learn about them between today

and tomorrow, the more he should be willing to pay for flexibility (liquidity).

RISK AND UNCERTAINTY

Carlsson (1989) assigns Klein's (1984) type I and type II flexibility to risk and

uncertainty respectively. Type I flexibility deals with foreseeable events and as

such can be built into production processes. Type II flexibility is built into

organisations, the risk-taking attitudes of its people, their expectations of change,

and their interactions in the long term. Firms must be alert to new opportunities

for new products and processes, i.e. they must be able to rapidly respond to

uninsurable changes in market conditions and unprogrammable advances in

technology. These two types of flexibility are not equivalent to the passive and

active forms of flexibility described earlier.

6.9 Conditions Under Which Flexibility is Useful

The conceptual framework reveals that flexibility is useful when there is

uncertainty. We identify three kinds of uncertainties which relate to our

consideration of flexibility: the environment, the model, and the decision maker

(user of the model). Uncertainties in the environment have been described in

Chapter 2 as areas of uncertainties, e.g. plant economics, demand, regulatory, and

public opinion. Uncertainties in the model refer to the lack of completeness,

accuracy, or adequacy. Uncertainties in the user refer to his unease with the model,

unsureness about his own preferences, and his lack of (detailed) information.

In Chapter 2, we proposed two ways to deal with these uncertainties:

modelling and flexibility. The traditional approach of modelling uncertainties

implicitly assumes completeness as a goal. We have earlier suggested model

synthesis as a means to this end, but our subsequent investigation revealed the

conceptual and technical difficulties of synthesis. In Chapter 4, we questioned

whether the goal of completeness is indeed realistic and attainable. If completeness is

possible, then model synthesis and other types of rigorous modelling should

eventually produce a complete model. Until then, we need to look for other ways

to compensate for that lack of completeness. If completeness is a futile goal, then

the modelling approach may not be appropriate. The close relationship between

uncertainty and flexibility suggests that flexibility may be a way to compensate for

lack of completeness in modelling uncertainty, i.e. to cope with those uncertainties

not captured in the model, and to deal with uncertainties independent of the

modelling approach.

We also argue that even if a model is complete, there may still exist a gap between

the model and the user. In other words, the user may not have full confidence in

the model and, according to our conceptual framework, may seek flexibility

instead. The uncertainty external to the model, i.e. model-exogenous uncertainty,

calls for that extra-model flexibility.

In Mandelbaum's (1978) terms, "model unease" refers to the uncertainty felt by

the user regarding the model, and he argues that it motivates the consideration of

flexibility, but he does not elaborate on how flexibility can be used to compensate

for this unease. We distinguish between intra-model unease and extra-model

unease. Intra-model unease refers to the lack of completeness in a model as

perceived by the user, who believes that completeness can be achieved. The user

may request additional sensitivity analysis to ensure the robustness or

completeness of the model. Extra-model unease refers to the lack of confidence

the user has in the model, regardless of his belief in model completeness. If the

model is incomplete, the user may turn to flexibility instead. However, even if the

model is complete, the user may wish to retain flexibility as he may not wish to

rely entirely on the model. In the latter case, robustness is no longer sufficient.

Because flexibility is not a free good (Stigler, 1939), there is no point being flexible

or having flexibility if it is not needed or desired. [We use the word "useful" as

opposed to "desirable" and "necessary", since the difference remains a research

question.] This implies that uncertainty must be important or costly enough for the

consideration of flexibility. Furthermore, there is a need to weigh the costs and

benefits of flexibility, i.e. measuring flexibility.

Flexibility can only be considered if there exists at least one alternative other

than the status quo, i.e. staying in the present state or continuing with the present

course of action. Means of operationalising flexibility are given in section 6.13.

In Table 6.4, we translate Mandelbaum's conditions under which flexibility is not

useful to their converse, i.e. conditions under which flexibility is useful, thereby

showing that flexibility is useful for modelling uncertainty in electricity planning.

Table 6.4 When flexibility is not useful (Mandelbaum, 1978), and why it is useful for electricity planning (inferred from Mandelbaum, 1978)

Not useful: A well modelled and solved problem.
Electricity planning: Presence of new types of uncertainties and inadequacies of existing techniques and modelling approaches.

Not useful: The decision maker has enough faith in the model to implement results with little or no allowance for unexpected changes.
Electricity planning: Allowance for unexpected changes is made in the form of reserve margins or plants being built; even so, this is very costly.

Not useful: Learning is not expected (the value of flexibility depends on finding out more about what we do not already know).
Electricity planning: Planning is a continuous process, with revisions to existing plans constantly being made as new information arises.

Not useful: Delays are not possible or have a detrimental or negative impact.
Electricity planning: Regulatory delays are possible and sometimes inevitable.

Not useful: Complex situation with multiple interested parties; changes are not desirable because lengthy debates are necessary.
Electricity planning: While limiting changes may reduce conflict of interest, it also reduces responsiveness.

Not useful: No uncertainty.
Electricity planning: Uncertainty prevails in many forms.

6.10 Downside of Flexibility

The literature on flexibility is dominated by discussions of its positive aspects,

giving the illusion that it is always desirable. Little has been said about its

downside. In this section, we identify the negative aspects of flexibility and those

decision makers who do not desire flexibility.

The two-way relationship between flexibility and uncertainty suggests that there

may exist an optimal level of flexibility, beyond which more flexibility is not useful.

It also suggests that too much flexibility may be harmful. Too much flexibility, e.g.

too many options, may complicate the analysis and confuse the decision maker.

The value of flexibility may follow the diminishing marginal return rule, i.e. the

marginal benefit of an additional option decreases as the number of choices

increases. To assess this, we need a method of measuring flexibility and trading

off its costs and benefits.
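
A rough simulation (our own sketch, with equally uncertain standard normal payoffs as an illustrative assumption) suggests exactly this pattern:

```python
import random

random.seed(0)

def expected_best(n_options, trials=50_000):
    """Expected payoff when the best of n_options equally uncertain
    alternatives (standard normal payoffs) can be chosen ex post."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(n_options))
    return total / trials

previous = 0.0
for n in range(1, 7):
    value = expected_best(n)
    print(f"{n} options: value {value:5.2f}, marginal gain {value - previous:+.2f}")
    previous = value
# Each extra option is worth less than the one before (roughly 0.56, 0.28,
# 0.18, ...), consistent with a diminishing marginal return to flexibility.
```

The simulation shows the benefit of an additional option falling as the choice set grows, though a full assessment would still have to net off the costs of holding each option.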

Postponing or delaying a decision could produce discomfort, the anxiety of

waiting, or greater uncertainty. There are costs associated with continuous

monitoring, waiting, acquiring more information, and the regret of expired options.

There are also cost penalties to small unit sizes and shorter construction periods in

the form of diseconomies of scale.

Gerwin (1993) notes that increasing (product) variety leads to complexity and

confusion which in turn raises overhead costs. Technological developments may

make existing flexible technology obsolete. Having flexibility makes one less

careful to get it right the first time and thus may be more costly in the long run.

The close relationship between flexibility and uncertainty merely establishes that 1)

flexibility is valuable when there is uncertainty, and 2) flexibility is a way of coping

with uncertainty. There is no evidence that flexibility reduces uncertainty. In

fact, thinking about flexibility, i.e. brainstorming and permuting the number of

possible choices, may create more uncertainty for the decision maker.

Even though uncertainty is not necessarily undesirable, some decision makers seek

to eliminate it completely. The intolerance of uncertainty motivates such decision

makers into early or pre-commitment to get rid of uncertainty, and hence of flexibility

altogether. The cautious decision maker may prefer fewer decision choices to

avoid possible mistakes and heavy responsibility. The hesitant or indecisive

individual may want less freedom of choice to avoid having to make decisions, i.e.

he prefers not to have any difficult decisions to make, especially if many

possibilities exist. Hence, even under conditions of uncertainty, not everyone

desires flexibility.

6.11 Necessary Elements to Define Flexibility

Flexibility definitions as surveyed by Mandelbaum (1978) are all consistent.

Flexibility reflects the potential (Slack, 1983) or the capability to respond to

change. Equivalently, it is the

general capacity to deal effectively with the widest range of possibilities;

ability to perform well both in the old state before a change and in the new state
after the change;

ability to switch from the first period position to a second period position at low
cost;

set of remaining programmes after the initial choice has been made; and

system's ability to perform different jobs that may occur or to perform one job
under different environmental conditions.

We have described flexibility from a systems point of view as well as from a

decision perspective. These two ways of viewing flexibility are equivalent, i.e.

inferring one another. Collingridge (1979) shows that a system which is easy to

control can be seen as a sequential decision of high flexibility. The ease in control

is a function of the number of options open to the decision maker. To keep one's

options open is to invest in an easily controlled system. [In Chapter 7, we see that

the systems and decision views of flexibility with respect to entropy are not

equivalent.]

Common across different uses and classifications of flexibility are types of

flexibility, which are context-dependent, and elements of flexibility, which are

context-free. Types or kinds of flexibility relate to the conditions under which it is

useful. We propose an uncertainty-flexibility mapping to identify types of

flexibility. [This is clarified and applied in Chapter 7.] For example, volume

flexibility addresses demand uncertainty. Types of flexibility have been defined and

used in manufacturing, where the concept has received the most attention. It is not necessary

to have different types of flexibility to use the concept. However, any discussion of

flexibility should include the essential definitional elements, which many authors

have proposed as seen below.

Eppink (1978) identifies three dimensions of flexibility (types, aspects, and

components) that are not independent of each other and to date only apply to an

organisational context.

Slack (1988) gives range and response dimensions for each type of flexibility he

defines. Range refers to the ability to adopt different states, while response refers

to the ability to move between states. In an earlier paper, Slack (1983) gives the

dimensions of range and ease, where ease is the cost and time to make the change.

Cost and time are frictional elements to do with the difficulty of changing.

Gerwin (1993) stipulates three necessary elements in defining flexibility: range,

time, and discretion. We interpret these as follows. Discretion refers to the ability

and potential (and willingness) to change or fill a gap. Range refers to the number

and diversity of choices available. Time refers to responsiveness, lead time, and

time to change.

In addition to range and time, Schneeweiss and Kühn (1990) add five more

elements: goal, objective (relates to condition), stochastic and not deterministic

(uncertainty), evaluation of elasticity, and the possibility to plan for it. They assert

that elasticity is a partial aspect of flexibility.

The above elements of flexibility are closely related to Kogut and Kulatilaka's

(1994) three conditions under which options are valuable: uncertainty, time

dependence, and discretion (the ability to exercise and change). Decisions depend

on time, and the value of flexibility comes from investing in the capability to

respond favourably to uncertain future events.

In sum, five elements appear necessary to define flexibility.

1) Flexibility conveys a change, usually in the future tense, i.e. a potential. This is
implied by the transition between two states, choosing between alternatives, barriers
to change, and switching cost.

2) Flexibility denotes more than one way of responding to change, hence the notion of
range. Range includes the size of choice set, number of alternatives, the extent to
which demand can be met, and levels of change.

3) Flexibility is different from gradual change. The time element is very important
here, as typically we speak of a rapid response. Time includes responsiveness,
lead time, and time to change.

4) The fourth element relates to the conditions posed in the previous section, i.e.
existence of uncertainty and alternatives or strategies for the consideration of
flexibility.

5) Inherent in the concept of flexibility is the notion of favourability which


differentiates between the choices available. Favourability has not been addressed
in the literature at all. It deserves a separate further discussion as some measures
give attention only to this aspect of flexibility.
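
These definitional elements translate naturally into a record that a model could carry for each candidate option. The following is a minimal sketch of our own construction (the field names, units, and screening rule are illustrative assumptions, not from the literature):

```python
from dataclasses import dataclass

@dataclass
class FlexibilityProfile:
    change: bool           # 1) a transition to another state is possible
    range_size: int        # 2) number of distinct responses available
    response_years: float  # 3) time needed to complete the change
    uncertain: bool        # 4) uncertainty exists for flexibility to address
    favourability: float   # 5) expected benefit of the best available change

    def is_flexible(self) -> bool:
        # Crude screen: all definitional elements must be present, and the
        # change must be worth making at all.
        return (self.change and self.range_size > 1
                and self.uncertain and self.favourability > 0)

# In this toy encoding, a modular CCGT programme scores on every element.
ccgt = FlexibilityProfile(True, 4, 2.0, True, 5.0)
print(ccgt.is_flexible())  # → True
```

Such a record only screens for the presence of the five elements; trading them off against one another is the measurement problem taken up in Chapter 7.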

6.12 The Concept of Favourability

The concept of favourability reflects the value or benefits of change. These are the

positive values associated with acquiring and realising the flexibility. Favourability

is what makes flexibility desirable. We move from an initial position to a new

position to take advantage of the new situation and get a better outcome. We

move to a new position to avoid or minimise a bad outcome, such as the loss of

revenue or incurring higher costs or not being able to get out of the situation. If

there are several states we can move to, we will move to the one that gives the

most benefit. Similarly, the choice we select in the second stage will be the one

which gives the best outcome.

Favourability refers to the decision rule of value optimisation such as cost

minimisation or revenue maximisation. Mandelbaum and Buzacott (1990) observe

that flexibility and favourability are two separate decision criteria, requiring a

trade-off between the number of choices and expected value. They suggest either

to satisfice on one attribute and optimise on the other or to determine a utility

function over the two attributes and then optimise on it. In the latter case, the kind

of utility function depends on the underlying probability. If the probability

distribution of the uncertainty is uniform, the utility function is additive on the

value function and flexibility (number of choices). If the underlying probability

distribution is exponential, the utility function is multiplicative. In the former case,

Heimann and Lusk (1976) give a treatment of satisficing on value but maximising

on flexibility.
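
The two functional forms, and the satisficing alternative, can be sketched in a few lines (the strategies, figures, weight, and threshold below are invented for illustration):

```python
# Each strategy offers an expected value (favourability) and a number of
# remaining choices (flexibility). All figures are illustrative assumptions.
strategies = {
    "commit to one large plant": {"value": 12.0, "choices": 1},
    "stage two smaller plants":  {"value": 10.0, "choices": 3},
    "defer and buy power":       {"value":  8.0, "choices": 5},
}

def additive_utility(s, weight=1.5):
    # Additive form, suggested when the underlying uncertainty is uniform.
    return s["value"] + weight * s["choices"]

def multiplicative_utility(s):
    # Multiplicative form, suggested for an exponential distribution.
    return s["value"] * s["choices"]

def satisfice_value_maximise_flexibility(min_value=9.0):
    # Heimann and Lusk style rule: satisfice on value, then take the
    # acceptable strategy with the most remaining choices.
    acceptable = {name: s for name, s in strategies.items()
                  if s["value"] >= min_value}
    return max(acceptable, key=lambda name: acceptable[name]["choices"])

print(max(strategies, key=lambda n: additive_utility(strategies[n])))
# → defer and buy power
print(satisfice_value_maximise_flexibility())
# → stage two smaller plants
```

Note how the two rules disagree: weighting choices heavily favours deferral outright, while satisficing on value first retains the staged strategy. The trade-off between value and choice set is exactly what any measure of flexibility must make explicit.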

In summary, favourability is that aspect of flexibility which relates to value

optimisation. Besides this, the multi-faceted concept of flexibility contains other

aspects, e.g. the ability to change, number of choices, and responsiveness, which

may conflict with favourability.

6.13 Operationalising Flexibility

The conceptual development of flexibility provides a unifying theoretical basis for

its application. The operationalisation of flexibility refers to practical means of

introducing or increasing flexibility into a system, in planning, or in decision

making.

We propose a distinction between two types of operationalisations, namely 1)

options, which provide flexibility, exhibit characteristics of flexibility, or lead to

more options in the future; and 2) strategies, which preserve, introduce, or increase

flexibility as courses of action. Short lead time, modular, and small unit

technologies promote flexibility by faster responsiveness, incremental additions,

and limited commitment. Technical means of achieving system flexibility are listed

in CIGRE (1991). Hobbs et al (1994) and Hirst (1989) give examples of

operationalising flexibility by such technological options. Strategies for increasing

flexibility include selecting a portfolio that contains flexible elements (Hirst, 1990),

Harts (1937) ways to preserve flexibility as summarised below, SCEs (1992)

scenario planning approach mentioned in Chapter 5, and Mandelbaums (1978)

sources of flexibility and other strategies described below.

Hart (1937) advises of several ways to preserve flexibility: holding inventory to

avoid uncertainty, deferring decisions until more information arrives, offsetting

uncertainties through a diversified portfolio, and eliminating uncertainty by

purchasing futures contracts and insurance.

MANDELBAUM'S SOURCES OF FLEXIBILITY

Mandelbaum (1978) suggested six different ways to provide or increase flexibility.

These sources of flexibility have also been suggested and confirmed by others such

as Eppink (1978), Gustavsson (1984), and Collingridge and James (1991).

1) Sequentiality or staging limits the irreversibility of changes by dividing a decision


into a sequence of decisions thus limiting the responsibility or commitment of each
act. Limited commitment keeps options open and retains flexibility. Those
decisions that do not need to be made immediately can be postponed. The planning
process is made more frequent and decisions made more informed while
simultaneously limiting the kinds of power plants committed. Legal contracts
containing break clauses are more valuable for expensive capital investments than a
totally irreversible signed and sealed agreement.

2) Partitioning the action space, resources, or opportunities as time proceeds not only
enlarges the choice set but also allows more elements (members of the choice set) to
move freely. By dividing what seems like one capacity size decision into several
decision variables, we have more control over each unit. Partitioning also gives the
ability to decide sequentially. In the power industry, modular and small unit sized
combined cycle gas turbine plants exemplify this kind of flexibility as they can be
built incrementally. Gustavsson (1984) supports the use of standardised modular
components to increase flexibility in the design of products and systems. Standardisation
improves economy while modularity increases flexibility by allowing
different combinations of sizes and types of technology as well as incremental
additions.

3) Postponement of action gives time and opportunity to obtain more information, for
uncertainties to be resolved, and new options to open up and be developed
simultaneously, such as the use of temporary arrangements. This delay is not
usually free, nor is the additional information; hence the value of this
information must be worth the cost of the delay. Paying a premium for the option to delay,
building reserves as in uncommitted funds (thereby enlarging the choice set), etc,
are all examples of postponement.

4) Searching for additional actions is a way to enlarge the choice set. One definition
of flexibility is the number and variety of choices available. Option-generating
techniques as described in Keller and Ho (1988) assist in the search for more
solutions to a problem. This is based on the rationale that the more choices
available, the more and different types of futures (uncertainties) can be met.

5) Reducing the resistance to change makes it easier and cheaper to change. This is
accomplished by removing or relaxing constraints as well as lowering the cost of
change. Together with the fourth source, this strategy enables decisions to be made
more frequently while increasing the number and quality of options available at
each point. The removal of technological barriers results in the development and
availability of new plants. Removing constraints enlarges the choice set. This is
equivalent to reducing lead times, cost of changing, and other barriers to change,
i.e. disablers.

6) Diversity, as Mandelbaum's sixth source of flexibility, encompasses the two notions of


variety and tolerance. They increase the bearing capacity of a system, thereby
meeting a broader spectrum of needs. Variety is akin to risk diversification or

having a balance of technologies. Tolerance is a way of increasing state flexibility,
i.e. catering to many. Generating companies typically have a good mix of
technologies by type, capacity size, and timing (commissioning and retirement
dates).

These six sources of flexibility are closely linked to the five criteria proposed by

Collingridge and James (1991) for increasing flexibility in policy making. These

criteria collectively reduce the time horizons of decisions, diminish their

sensitivity to individual variables, and increase organisational recognition of

uncertainty.

1) The first criterion is called incrementalism, as incremental development strategies


translate to small commitments in stages.

2) Maximum substitutability reduces the sensitivity of decisions to individual


variables.

3) Maximum diversity decreases the dependence on any one fuel, hence lowering risk.

4) Sophisticated monitoring and appraisal give more information and increase


responsiveness and control.

5) The final criterion suggests sophisticated contingency planning to avoid panic


responses.

Likewise, Eppink (1978) offers two ways to respond to uncertainty, equivalent to

two ways to increase flexibility.

1) Reducing the relative impact of external changes makes oneself less vulnerable.
For example, multi-product firms with highly diversified portfolios exhibit high
external flexibility or robustness.

2) Increasing the firm's response capacity is achieved by enhancing logistic flexibility or


action flexibility. This is exemplified by early warning systems, e.g. the provision
of information, multi-purpose equipment, and smaller units.

In the electricity context, Yamayee and Hakimmashhadi (1984) suggest four ways

to achieve flexibility, paralleling ways to reduce uncertainty.

1) Shortening the lead time to acquire a resource or plant reduces the uncertainties
surrounding future conditions.

2) Lowering capital costs limits fixed financial commitment. This reiterates Stigler's
(1939) view of flexibility as the transfer of costs and resources from the fixed to the
variable.

3) Reducing resource sizes lessens the risk (and commitment).

4) Allowing interim or intermediate decisions reduces the size of investments.

6.14 Conclusions

We have answered the questions raised in chapters 4 and 5 by conceptual

development.

1) First, we summarised the meaning of flexibility as elicited from an analysis of

similar words. Flexibility is the ability to easily respond to unforeseen changes

in a variety of ways. This definition of flexibility is later validated by the

conceptual framework.

2) Second, we examined its relationships with more established concepts and

developed a conceptual framework.

a) We identified two types of flexibility (passive and active) and clarified the
distinction between flexibility and robustness.

b) We discussed its role as a preferred decision criterion under conditions of


uncertainty. This was briefly posed in Chapter 4.

c) We established by transitive argument the following relationships:

robustness as minimising risk and regret

lack of confidence leads to less commitment and more flexibility.

d) We noted the concepts of flexibility and robustness as embedded in the definition of


a financial option: "the right but not the obligation".

e) We emphasized the important relationship between uncertainty and flexibility.

3) That uncertainty makes flexibility valuable translates into conditions under

which flexibility is useful. We identified and distinguished between intra- and

extra-model unease which can be met by robustness and flexibility

considerations.

4) Even with uncertainty, flexibility may not necessarily be desirable. We discussed

the downside of flexibility, which has not received enough attention in the

literature.

5) To preserve the multi-faceted meaning of flexibility, we identified necessary

elements in its definition, rather than giving a single formal definition. These five

elements are change, range, time, uncertainty conditions, and favourability.

6) Finally, we proposed that flexibility may be operationalised via options or

strategies and gave examples of how this may be achieved.

This conceptual development has unified and clarified the definitions and

applications of flexibility from the cross disciplinary review. To make use of this

conceptual framework, a utility in the UK ESI must be able to operationalise and,

perhaps even, measure flexibility. Measures may be needed to defend plans, make

comparisons between different strategies, and trade off conflicting objectives. In

the next chapter, we give a rigorous assessment of three groups of measures which

emerge as most popular and most promising from our cross disciplinary review.

CHAPTER 7

Measuring Flexibility

7.1 Introduction

The previous chapter indicated a need to measure flexibility. To make use of the

operationalisation of flexibility, i.e. options and strategies to introduce or increase

flexibility, we need to assess its costs and benefits. Measuring flexibility facilitates

the comparison of options and strategies.

One of the greatest challenges put forth by many researchers is that of unifying

definitions of flexibility. There have been as many measures as definitions of

flexibility. Mandelbaum (1978) reviewed at least twenty. The previous chapters

have shown the difficulty of reconciling its many interpretations and applications.

Its multi-dimensional aspects make the quest for a single best measure impractical

if not impossible. In the manufacturing sector, where flexibility is a well-

recognised and necessary objective, new measures are continually being suggested.

There, Sethi and Sethi (1990) and Gupta and Goyal (1989) have provided the most

comprehensive classifications and descriptions of different types of measures of

manufacturing flexibility up to 1990. Instead of deriving a single measure, Slack

(1983) suggests developing a methodology to identify ways and costs of providing

any new flexibility. Motivated by these comments, we focus on ways of

measuring flexibility rather than finding a single measure.

In the manufacturing sector, Gerwin (1993) believes that the research on flexibility

needs to have a more applied focus to complement the theoretical work, and that

the main barrier to advances on both theoretical and applied fronts stems from the

lack of measures for it and its economic value. On the contrary, our review has

found a proliferation of measures. Because these measures come from different

disciplines, they individually do not reflect the rich cross-disciplinary interpretation

of flexibility.

Among others, Bernardo and Mohamed (1992) argue for an explicit consideration

of flexibility by definition and classification of measures. Not only is defining and

classifying flexibility a major challenge, developing operationalisable measures is

also difficult. Gerwin (1993) gives five reasons for this.

1) Little agreement exists on relevant dimensions of the concept. [Our conceptual


development has clarified much of the confusion and unified the essential concepts
inherent in flexibility.]

2) Multi-dimensionality compounds the effort that goes into creating scales for
assessment. [As concluded from our cross disciplinary review and conceptual
development, the concept of flexibility is multi-faceted, containing both the
context-dependent types and context-free elements of flexibility.]

3) Because flexibility can be studied at different system levels, such measures require
collections of disparate data sets.

4) Operationalisations that span industries are more useful albeit more difficult to
create for research than those based on a single industry. [We have studied the uses
and definitions of flexibility from a broad basis, i.e. a cross disciplinary review that
spanned industries.]

5) There is a lack of communication between those formalising flexibility concepts and


those operationalising them (as in manufacturing flexibility). [We address this
fifth difficulty by investigating measures of flexibility to tie together the theoretical
and practical aspects.]

There are other fundamental difficulties in measuring flexibility. Flexibility is

elusive because it is a potential, which depends on what happens in the future.

However, the future is shrouded in uncertainty, which in the strictest sense of

Knight (1921), cannot be measured or predicted with accuracy. Therefore, the

value of flexibility is difficult, if not impossible, to ascertain. Pye (1978) even

suggested measuring this uncertainty as a substitute for flexibility. In this thesis,

we relax the definition of uncertainty to allow a probabilistic treatment.

We propose the following criteria for measuring flexibility. A measure of

flexibility should be meaningful, reflecting our conceptual understanding as

developed from the cross disciplinary review. As such, there is a need to represent

the different aspects of flexibility.

1) It should contain the necessary definitional elements of


a) range,
b) time,
c) change,
d) conditions of uncertainty, and
e) favourability.

2) It must not contradict relationships from the conceptual framework.

3) It should be simple and operationally possible from a modelling perspective, that


is, it can be derived or calculated from existing techniques.

4) A measure of flexibility should distinguish between the flexible and the inflexible,
and ideally distinguish between different degrees of flexibility.

5) Finally, such a measure should facilitate a trade-off between the notions of size of
choice set and value inherent in flexibility, that is, where flexibility conflicts with
favourability.

We classify our extensive review of measures of flexibility into three groups:

indicators, expected value, and entropy. Most measures belong to the first group

and reflect partial aspects of flexibility. To summarise these partial measures, we

translate and organise the necessary definitional elements of Chapter 6 into

indicators of flexibility in section 7.2.

The second group of measures is based on the concept of expected value in

decision analysis. Three expected value measures of flexibility are reviewed in

section 7.3. 1) The relative flexibility benefit (Hobbs et al, 1994) puts a positive

value on the ability to take advantage of favourable uncertain conditions. 2) The

normalised flexibility measure (Schneeweiss and Kühn, 1990) deals with the

states of the uncertain condition matched to the states or choices of the second

stage decision, thereby conveying the notion of slack. 3) The expected value of

flexibility (Merkhofer 1977, Mandelbaum 1978) relates to the expected value of

information. The first expected value measure deals with the number of

uncertainties, the second with the states of uncertainties, and the third with the

order of decisions and uncertainties.

The third group of measures is based on the scientific concept of entropy (section

7.4), which has not received enough attention with respect to flexibility. Two

types of entropic measures corresponding to the decision and system views of

entropy are assessed.

Section 7.5 compares expected values with entropy. The final section 7.6

summarises the critique and comparison of these three groups of measures and

proposes a way forward.

7.2 Indicators of Flexibility

We give a new terminology for partial measures of flexibility. These measures are

translated and organised into indicators which reflect those necessary definitional

elements of flexibility. The term "indicator" is used to indicate rather than measure flexibility, as "measure" carries a connotation of completeness. An indicator by itself is a partial measure of flexibility.

Centrally embedded in the concept of flexibility is the capability to change,

reflecting a potential or a provision for change that is available in the future.

Therefore any measure of flexibility should reflect the potential but not necessarily

the realisation.

In their analysis of flexibility and uncertainty, Jones and Ostroy (1984) observe that

flexibility is a property of an initial period position, referring to the cost or

possibility of moving to various second period positions. It has been pictured in

Hobbs et al (1994), Hirst (1989), and Schneeweiss and Kühn (1990) as a sequence

of decisions in a minimum of two stages where the first stage is the initial position,

providing the flexibility which can be realised in the second stage. It has also been

interpreted as a state transition (Kumar 1987 and others), where the initial position

can move to another state. Either way, flexibility is associated with the initial state

but measured by the number of states it can move to or the number of choices

available in the second stage.

The amount of flexibility is relative, not absolute. It is necessary to have a default

case for comparison purposes, i.e. the inflexible path with no subsequent options.

The choices in the first stage decision lead to different levels of flexibility. An

initial position has flexibility if there is at least one other state it can move to.

Similarly, a first stage decision should have in the following second stage decision

at least two choices, with the minimum being "change" or "do not change". The "do not change" choice is a default or status quo option. A decision sequence that proceeds from the first stage to the second stage "do not change" is similar to staying in the same state.

The change or transition is a purposeful one. In other words, it is neither

accidental nor unpredictable. It is induced by a stimulus or a reason. We do not

want flexibility for no reason at all, for it is not free. This reflects our

understanding that flexibility is a response to uncertainty, complexity, and other

motivating forces. We represent this change by mapping the type of uncertainty to

type of flexibility, i.e. uncertainty-flexibility mapping. This mapping consists of

trigger events, trigger states, decisions, and choices, as explained below.

Change inducers, i.e. relevant uncertainties, are called trigger events, and they

determine the type of flexibility that is required. Examples of the flexibility to

respond to the trigger event of demand uncertainty include changing production

levels, purchasing or selling extra production capability, and holding reserve

capacity. The change is the decision maker's response to the trigger event(s) and

is represented by a decision. Trigger events affect flexibility decisions. There may

be several events that trigger a second stage decision. Similarly, a second stage

decision may consist of several decisions in a sequence.

A trigger event represents an uncertainty that has two or more possible states. For

example, demand uncertainty can be high or low. If each state pre-empts or

matches a subsequent decision choice, then it is called a trigger state for that

choice. Just as different trigger events pre-empt different types of decisions,

different trigger states pre-empt different choices. For example, purchasing

additional capacity is associated with high demand while selling extra capacity is

associated with low demand, with demand being the trigger event. High and low

demand are trigger states for the purchasing and selling options, respectively.
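The mapping just described can be sketched as a small data structure. The names below are illustrative assumptions drawn from the demand example, not terms fixed by the thesis.

```python
# Illustrative sketch of an uncertainty-flexibility mapping (all names are
# hypothetical examples): a trigger event determines the type of decision,
# and each trigger state pre-empts a specific second stage choice.
uncertainty_flexibility_mapping = {
    "demand": {                              # trigger event
        "decision": "adjust capacity",       # decision maker's response
        "trigger_states": {                  # state -> pre-empted choice
            "high": "purchase additional capacity",
            "low": "sell extra capacity",
        },
    },
}

def choice_for(event, state):
    """Return the second stage choice pre-empted by a trigger state."""
    return uncertainty_flexibility_mapping[event]["trigger_states"][state]
```

For instance, choice_for("demand", "high") returns the purchasing option, reflecting that the high-demand state pre-empts that choice.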

Mandelbaum and Buzacott (1990) define flexibility as the number of options open

at the second period. Mandelbaum (1978), Merkhofer (1975), and Rosenhead

(1980) support this definition of flexibility, i.e. the size of a choice set associated

with alternative courses of action. The size of choice set can be directly deduced

from that meaning of flexibility which relates to having many choices and multi-

functionality. It is one of the measures proposed by Marschak and Nelson (1962),

who admit that it is insufficient by itself and subject to the partitioning fallacy. In

fact, Evans (1982) accuses Marschak and Nelson of confusing the property of

flexibility with its measure, implying that the size of choice set is only one aspect of

flexibility. Some differentiation of desirability, quality, and diversity among the

choices is necessary to avoid triviality, e.g. choices that are feasible but unlikely to

be chosen.

According to Upton (1994, p. 77), flexibility is the ability to change or adapt with

little penalty in time, cost, and effort of performance. Reflecting the difficulties in

changing and the barriers to change, these penalties are what Slack (1983) calls

frictional elements. Reducing the lead time or response time makes it faster to

change. Reducing the switching cost makes it cheaper to change. Mandelbaum

defines switching cost as the average sum of gains and losses in the transition,

which is only incurred if the change occurs. Removing barriers makes it easier to

change. We call these indicators disablers.

There is a difference between the cost of providing the flexibility and the cost of

changing. The enabler reflects the cost of providing flexibility, i.e. it guarantees

future flexibility and reflects the premium on flexibility. The flexibility associated

with any investment or initial position is largely valued by the initial sunk cost

commonly known as fixed investment costs or premium of an option. With respect

to costs, the enabler and disabler are similar to fixed and variable costs. The

enabler is the premium or cost associated with the first decision which guarantees

flexibility later on. The disablers indicate the availability of an option by minimal

cost, minimal time, and other reduction of barriers. The disabler includes the

switching cost when flexibility is realised. The enabler is like Stigler's (1939) fixed

cost, while disablers are variable costs which may or may not occur. These costs

are similar to the loss or benefit if the change occurs (Buzacott 1982, Gupta and

Goyal 1989) and Mitnick's (1992) marginal or incremental cost of the additional

facility which adds flexibility.

We translate the favourability inherent in flexibility into positive values which are

desirable. We call these elements motivators. These are the benefits or payoffs

associated with the choices available.

Mandelbaum (1978) suggests that flexibility can be measured by the effectiveness

and likelihood of change. But he does not decompose these elements further nor

apply them. From our perspective, effectiveness refers to the number of favourable

choices, i.e. choices that do actually lead to favourable outcomes. Likelihood

refers to the probability of the occurrence of the trigger state as well as the

combined effects of disablers and motivators, indicating the probability of the

subsequent choice being selected. As probabilities of change, these likelihoods also

reflect that element of potential. Eppink (1978) associates the likelihood of state

transition with the degree of commitment or the decision maker's willingness to

abandon his current position. We interpret the likelihood as a function of the

disablers (the more difficult the change, the less likely) and the motivators (the

better the outcome, the more likely the change).

The above classification portrays a multi-dimensional picture of flexibility.

Flexibility is a potential or capability to change, associated with an initial

position, but measured by the number of favourable choices that are available

later. Favourability is embedded in positive returns or benefits called motivators.

The value of future flexibility is captured by enablers which guarantee that

provision of flexibility. The availability of these choices depends on switching

costs and other frictional elements called disablers. The type of flexibility depends

on the type of uncertainty which triggers the subsequent choices. The likelihood

of change depends on the probabilities of the trigger states, disablers, and

motivators of the choices. Flexibility is relative to other choices in the first stage.

Flexibility increases with the number of choices in the second stage, likelihood of

favourable choices, and ease of change. Table 7.1 translates essential elements of

flexibility into measurable indicators. Although not every indicator is necessary, no single indicator is sufficient on its own.

Table 7.1 Elements and Indicators of Flexibility

Elements | Indicators | Representation
Potential, capability to change | two-stage decision; state transition | investment in first stage; characteristics or capabilities in second
Purposeful change, response to stimuli | trigger event; trigger state | uncertainty or chance node; states of uncertainty
Availability (existence) of transitional states, possibility of change | number of states; size of choice set | number of options in second stage
Likelihood of change | probability of trigger state; proportional representation of disablers | probabilities, costs, lead time
Provision, property of initial position, guarantor | enabler | premium, cost of providing the capability
Barriers to change | disablers | switching cost, response time, speed of change, constraints
Favourability | motivators | profit, income, value, benefits

7.3 Expected Value Measures

7.3.1 Relative Flexibility Benefit

Hobbs et al (1994) define flexibility as the ability to adapt a system's design or


operation to fluctuating conditions. They propose a measure called the relative

flexibility benefit to capture the benefit or cost-savings in contrasting how well a

system performs under a single set of expected future conditions against how well

it performs, on average, if all possible conditions and their probabilities are

considered. We analyse their example in detail in section 7.3.1.1. Afterwards in

section 7.3.1.2, we extend this example to test their claim that the measure can be

used to compare investments that differ in the degree of flexibility.

7.3.1.1 Single Investment: Flexible vs Inflexible

Hobbs et al illustrate their measure through a simple example of a utility's decision

to add flexibility to cope with future environmental restrictions. The default option

is to burn coal. If co-firing is installed, it can burn either gas or coal. The

flexibility of co-firing is merely the additional option of burning gas. Using our

terminology of section 7.2, the trigger event for the flexibility of co-firing is the

uncertainty of natural gas prices. The future price of natural gas is equally likely

to be one of the following: $2.40, $3.20, or $4.00 per million British Thermal

Units. If the capability to co-fire is not installed, the annual generation cost would

be $100M, for the default option of burning coal. With co-firing capability at an

annual investment cost of $0.5M, the firm can choose to co-fire and pay an annual

$96.7M, $100.7M, or $104.7M depending on the price of natural gas. These

annual generation costs are added to the annual investment cost to get $97.2M,

$101.2M, and $105.2M respectively in table 7.2 and in brackets in figure 7.1. If

co-firing capability is installed but not used, i.e. natural gas prices make it too expensive, the

utility firm would incur the annuitised investment cost ($0.5M) plus the default

generation cost of $100M, totalling $100.5M. The firm gains flexibility from co-

firing because it can burn gas when natural gas price is low and burn coal

otherwise, i.e. two choices instead of one. This example treats burning gas and

burning coal as mutually exclusive, i.e. there is no intermediate choice of burning

both gas and coal.

Table 7.2 Annual Costs

Trigger Event: Natural Gas Price Second Stage Decision of Co-firing No Co-firing
(includes annual $0.5M investment) (no investment)
State Probability Burn Gas Burn Coal Burn Coal
At $2.4/mmBTU 1/3 $97.2M $100.5M $100M
At $3.2/mmBTU 1/3 $101.2M $100.5M $100M
At $4.0/mmBTU 1/3 $105.2M $100.5M $100M

Figure 7.1 depicts this example in a decision tree. The expected value associated

with the portion of the tree to the right of a node, i.e. all branches emanating from

the node, is enclosed in brackets above the branches. The direct payoff

assignments (annual costs) are given below the associated branches. The expected

values associated with end nodes (tip of the right most branches) are totals of the

individual payoffs along the path. For example, the total cost of Burn Gas given a gas price of $2.4/mmBTU is [97.2] = 96.7 (below the branch) + 0.5 (install capability to co-fire). All numbers in brackets are calculated by the expected value

procedure of averaging out and folding back.

Figure 7.1 Hobbs Example

To read the decision tree, we start from the left. Invest $0.5M a year to install the

ability to co-fire. If the natural gas price is $2.40 (and this occurs with 1/3

probability), the best option is to burn gas and pay a further $96.7M (instead of

$100M). If the natural gas price is $3.20 or $4.00, it is cheapest not to burn gas

but burn coal instead. If co-firing is installed, one-third of the time it would cost $96.7M to burn gas compared to $100M to burn coal. This gives a savings of $3.3M. [$3.3M = $100M (not installed) - $96.7M (install and run

favourably)]. Equivalently, the total annual cost of $100.5M - $97.2M = $3.3M.

The benefit of installing co-firing is therefore (1/3) * ($3.3M) = $1.1M cost-

savings per year.

Alternatively, this benefit may be calculated by reconstructing the decision tree.

The Value of Co-firing Under Expected Conditions minus the Expected Value of Co-firing = V(Co-firing | E(Uncertain Gas Price)) - E(V(Co-firing | Uncertain Gas Price)). The first term (value of co-firing under expected conditions) is taken from

the top portion of the same decision tree with new probability assignments (figure

7.2) which treats the uncertain event as certain on the average state and

improbable under other states. The second term (expected value (EV) of co-

firing given uncertain gas price) is taken from the top portion of the decision tree in

figure 7.1. Under expected conditions, the price of gas is 1/3 * ($2.4) + 1/3 *

($3.2) + 1/3 * ($4.0) = $3.2/mmBTU. To get the Value of Co-firing given

Expected Conditions, we reassign the $3.2/mmBTU to 100% probability of

occurrence. All other prices are set to 0% probability for the purposes of expected

value calculation. [Note: This re-assignment of probabilities is possible because

the original probabilities were equal. If the original probabilities were not equal,

we may require a new state to represent the average.] In this example, the values

V and expected values E refer to costs and expected costs, respectively, in which

case, the best payoff comes from not burning gas. The benefit or cost-savings of

co-firing is thus $100.5M - $99.4M = $1.1M.
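Both routes to the $1.1M figure can be checked in a few lines. This is a sketch using the table 7.2 costs; here, averaging out and folding back reduces to probability-weighting the minimum-cost choice in each price state.

```python
# Sketch of the Hobbs et al co-firing example (annual costs in $M, from
# table 7.2; each gas price state is equally likely).
p = 1.0 / 3.0
burn_gas  = [97.2, 101.2, 105.2]     # install + burn gas, per price state
burn_coal = 100.5                    # install + burn coal (state-independent)

# Expected cost of co-firing given uncertain gas prices:
ev_cofiring = sum(p * min(g, burn_coal) for g in burn_gas)   # 99.4

# Cost of co-firing under the single expected condition ($3.2/mmBTU):
v_expected = min(burn_gas[1], burn_coal)                     # 100.5

# Relative flexibility benefit, by either route:
benefit = v_expected - ev_cofiring                           # 1.1
```

The benefit is positive only because the $2.40/mmBTU state lets the firm switch to gas; in the other two states the coal fallback is taken.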

Figure 7.2 Expected Conditions

To avoid confusion, we distinguish between the value of installing the capability to co-fire and the value of co-firing. The value of installing the capability to co-fire is the difference between the expected value of the top branch (install) in figure 7.1, which already includes the annuitised investment cost, and the expected value of the bottom branch (no co-firing). In cost terms, -$99.4M - (-$100M) = cost savings of $0.6M. The value of co-firing depends on the price of natural gas. In other words, the value is highest when gas is cheapest, and lowest when gas is most expensive.

Hobbs' relative benefit of flexibility is the difference between the expected value

from considering the uncertainty of natural gas prices and that from treating natural

gas prices as one average value. It is the difference between considering several

futures (three states of the world) and one representative future (one average state

of the world). The bottom portion of the decision trees (No_co-firing) serves as

the default case for comparison purposes, as flexibility is a relative value. The

value of flexibility is relative to the zero value of the default inflexible case. It

shows that the default case is not affected by the future price of natural gas.

The above analysis agrees with our understanding that flexibility is a potential and a

property of the initial position. When we speak of the flexibility of co-firing, we

mean installing the capability to co-fire. We measure this potential by a

probabilistic weighting of the optimal realisation, i.e. burn gas when natural gas

price conditions are favourable. This flexibility is only favourable when the natural

gas price falls to $2.40/mmBTU, which occurs a third of the time.

7.3.1.2 Comparing Investments: Flexibility vs Favourability

Hobbs et al claim that this relative flexibility benefit can be used to compare two

different investments offering different degrees of flexibility. The relative flexibility

of X compared to Y is simply the difference between them, i.e. F(X) - F(Y), where

F(X) = V(X | E(uncertain condition)) - E(V(X | uncertain condition)).

We test their assertion by considering two co-firing investments X (described in the

previous section) and Y (described below). An alternative co-firing investment Y

gives cost-savings (compared to the No_co-firing case) for two out of three natural

gas price levels. Y is more flexible than X as it offers one more favourable option

than X does. As a result it costs more to invest in Y's capability, $0.6M per year

(instead of $0.5M for X). Figure 7.3 shows the relevant decision tree for Y.

Figure 7.3 Investment Y

Although Y gives favourable payoffs ($98.7M and $99.8M) more often than X

does, its expected cost ($99.7M) is greater than X's ($99.4M). The value of co-

firing Y under expected conditions is $99.8M. Its flexibility benefit ($0.1M =

$99.8M - $99.7M) is much lower than F(X) which was $1.1M. Y is more flexible

but less favourable than X. However, on grounds of flexibility, the firm should

favour Y because it is more likely than X to give cost-savings. On average, Y

would be 2/3 favourable compared to 1/3 for X. The flexibility benefits F(X) and

F(Y) indicate incorrectly that X gives more benefits of flexibility than Y.
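The comparison can be reproduced as follows. This is a sketch: Y's coal fallback of $100.6M (the $100M default plus Y's $0.6M investment) is inferred from the text, and the $104.0M burn-gas cost in Y's unfavourable third state is an assumed, never-chosen value, since the text quotes only the two favourable payoffs.

```python
# Sketch comparing Hobbs' relative flexibility benefit for X and Y.
# Costs in $M; X's figures are from table 7.2. For Y, 100.6 is inferred
# (coal plus Y's investment) and 104.0 is an assumed unfavourable
# burn-gas cost that is never selected.
def flexibility_benefit(options_per_state, probs, expected_state):
    ev = sum(p * min(opts) for p, opts in zip(probs, options_per_state))
    v_exp = min(options_per_state[expected_state])
    return v_exp - ev

probs = [1/3, 1/3, 1/3]
X = [[97.2, 100.5], [101.2, 100.5], [105.2, 100.5]]   # burn gas vs burn coal
Y = [[98.7, 100.6], [99.8, 100.6], [104.0, 100.6]]

F_X = flexibility_benefit(X, probs, expected_state=1)   # 1.1
F_Y = flexibility_benefit(Y, probs, expected_state=1)   # 0.1
# F_X > F_Y, although Y yields cost-savings in two states out of three.
```

The inequality F(X) > F(Y) reproduces the measure's misleading ranking discussed above.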

This measure of relative flexibility benefit over-emphasises favourability and

neglects other aspects of flexibility, e.g. likelihood and number of choices.

Favourability refers to value optimisation, e.g. maximising profit or minimising

costs, and uncertainty is not required for its assessment. In this example,

favourability refers to minimising annual costs. Flexibility refers to having choices

to increase the likelihood of reaching a favourable outcome in the future. X has

one out of three chances to give a favourable payoff of $97.2M, while Y has two

out of three chances ($98.7M and $99.8M). Although Ys payoffs are not as

favourable as Xs, Y seizes more opportunity and offers a larger (favourable)

choice set than X. Therefore Y is more flexible by this definition. However, in one

of the three cases, X gives greater cost-savings than Y.

Why have the authors confused flexibility and favourability? Their argument is

sound: if flexibility is desirable, then it should give benefits. The more flexible it is,

the more benefits it should give. Expected value is a good candidate for the

measure of flexibility because it incorporates uncertainty as well as value. Their

relative flexibility benefit fails when comparing two investments that conflict on

flexibility and favourability, i.e. X is more favourable (more cost-savings) but less

flexible (fewer favourable choices) than Y. The relative flexibility benefit measure

fails to distinguish between the two. In more complicated examples where a

multitude of uncertainties and decisions prevail, the relative flexibility benefit could

lead to a poor recommendation. Hobbs et al have assumed that the more flexible it

is, the more favourable it is, i.e. no conflict between the two. In doing so, they

have disregarded Mandelbaum and Buzacott's (1990) work on flexibility and

favourability as two separate decision criteria.

Hobbs et al do not sufficiently emphasise the importance of the uncertainty of natural

gas prices. This is the trigger event, without which flexibility cannot be valued.

They also fail to mention the implications of the number of trigger states and their

probability distribution, i.e. the gas prices. If $2.40/mmBTU is most likely, then

installing X gives the best favourability and flexibility. If $3.20/mmBTU is most

likely, then Y gives the best outcome. If $4.00/mmBTU is most likely, then co-

firing capability is not valuable, as natural gas becomes too expensive for co-firing.

The trigger event (natural gas price uncertainty) is an indicator of flexibility. The

trigger states are indicators of specific options.

Defining flexibility as the number of choices and favourability as optimising the

final payoff, we can reduce this problem to the following. X gives one out of three

favourable options in the second stage. Y gives two out of three. Thus Y is the

most flexible. On favourability, X has a better expected value, so it is most

favourable. If we choose Y, we optimise on flexibility and satisfice on

favourability. If we choose X, we optimise on favourability and satisfice on

flexibility. A measure of flexibility must give an indication of likelihood, which is lost in expected values.

The likelihood of reaching a favourable outcome is embedded in expected values

but averaged out when comparing X and Y. The number of favourable choices is

not that which is offered by the co-firing option, i.e. burn gas or not, but in context

of natural gas price uncertainty. So X actually offers fewer favourable options than

Y if there are three equi-probable prices. A cumulative probability notation may be

more appropriate than counting the number of choices.

A decision tree analysis, without examining the structure of the tree itself but only

looking at expected values, will gloss over this. This of course depends on the

firm's view of natural gas price uncertainty, which in this example comprises only three equally probable states.

7.3.2 Normalised Flexibility Measure

The relative flexibility benefit is unable to distinguish between investments with

different degrees of flexibility. The normalised flexibility measure of Schneeweiss

and Khn (1990) is intended for the comparison of more than two options. It is a

ratio of differences of expected values. The denominator is the difference between

the ideally flexible V+ and the most inflexible V- options considered in the analysis.

The numerator is the difference between the chosen investment V* and most

inflexible V-. So, N(V*) = (V* - V-) / (V+ - V-). A normalised measure provides an

index between 0 and 1 to rank different options. The problem is finding

appropriate values for V that capture both flexibility and favourability for use as

anchors in the denominator.
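As a formula the normalisation is straightforward. This minimal sketch treats the V values as benefits (higher is better); the anchor values are arbitrary illustrations, not figures from the thesis.

```python
# Minimal sketch of the Schneeweiss and Kühn normalised flexibility
# measure: v_star, v_plus, v_minus are expected values of the chosen,
# ideally flexible, and most inflexible options respectively.
def normalised_flexibility(v_star, v_plus, v_minus):
    return (v_star - v_minus) / (v_plus - v_minus)

v_plus, v_minus = 120.0, 80.0   # illustrative anchor values

n_ideal      = normalised_flexibility(v_plus, v_plus, v_minus)   # 1.0
n_inflexible = normalised_flexibility(v_minus, v_plus, v_minus)  # 0.0
n_between    = normalised_flexibility(100.0, v_plus, v_minus)    # 0.5
```

By construction the index maps the ideally flexible option to 1, the most inflexible to 0, and any intermediate option to a value in between, which is what makes rankings across options possible.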

The most flexible option is the first stage option which gives the most number of

choices in the second stage. Each choice (available in the second stage) meets each

uncertain condition favourably, i.e. gives the best payoff provided that the trigger

or favourable state occurs. The choice that gives the most flexibility is able to meet

(or take advantage of) each uncertain condition exactly. The less flexible options

do not meet the uncertain condition exactly.

One choice corresponds to the most inflexible case (C in figure 7.4), that is, where

uncertain conditions cannot be dealt with at all in the default case, i.e. there are no

future options available. Another choice represents the ideally flexible case (A)

which may or may not appear in the problem, as long as its value is known. The

ideally flexible case represents perfect mapping between states of uncertainty and

choices of the decision. The remaining choice (B) has a degree of flexibility

between the two extremes. Any path that gives some flexibility, i.e. more than one

option in the second stage, has a value between the two extremes. Thus the

normalised flexibility measure of the chosen branch (B) from the first decision is

N(B) = [E (chosen) - E (inflexible)] / [E(ideally most flexible) - E(inflexible)]. The

most flexible is N(A) = 1 and most inflexible is N(C) = 0. This type of flexibility

has to do with matching of trigger states to choices in the second stage decision. It

can be generalised to figure 7.4 below.

Figure 7.4 General Structure of Normalisation

Using Hobbs' flexibility benefit measure, we would get figure 7.5 for the values given expected conditions. That is, E(Conditions) = branch Three with 100% probability, and A3 is the option that gives the most favourable outcome, or B2 if B

is invested.

Figure 7.5 Expected Conditions

Schneeweiss and Kühn's example concerns machines A and B with different production level settings to meet different demand levels, with C as the default "do nothing" case, in which case Pay C = 0. Pay A and Pay B are enablers. The

machine with the greatest number of levels minimises slack production in meeting demands d1 and d2 in figure 7.6. Theirs is a problem of constrained optimisation, adjusting

production levels to meet demand, otherwise incurring a shortage cost. So

flexibility is embedded in the notion of minimising slack. The more options

(production levels) available, the better is the match.

Figure 7.6 Schneeweiss and Kühn

In their example, there is no second stage decision following the inflexible option

of C, implying that the payoffs cannot be adjusted like A and B. The expected

values of A, B, and C options include the amount of slack arising from the

imperfect matching of supply and demand. The normalised flexibility measure

permits the comparison of a variety of machines and situations commonly found in

the manufacturing sector. For example, Röller and Tombak (1990) use machines

capable of producing different products to cater for different demand requirements.

In electricity capacity planning, this generic example extends to capacity levels,

loading order (black-start capability), and unit size as they correspond to demand

levels and load distribution. This measure is highly dependent on the partitioning

and specification of the states of the trigger event and mapping to the second stage

options. Implicitly, the more options available, the better is the match. As

decisions are discrete, in the limit, partitioning approximates to the continuous case

of meeting different states of uncertain conditions, thereby reducing slack (and

giving a higher normalised flexibility measure). As the trigger event must be the

same for all options considered, this measure cannot be used to assess different

types of flexibility.

7.3.3 Expected Value of Information

According to Merkhofer (1975), flexibility is valuable (and thus desirable) when

new information can be obtained, i.e. one expects to learn about the future. The

value of this flexibility is directly related to the value of this information.

In decision analysis, the value of information is the most a decision maker would

pay to resolve an uncertainty, effectively the price of advance information. This is

normally calculated by reconstructing the decision tree such that all uncertainties

occur before the decision nodes, so that appropriate action may be taken to

optimise payoffs. The difference between the flipped tree and the original tree is

the value of information. Mandelbaum (1978) says the Expected Value of

Flexibility is the most that a decision maker is expected to gain by delaying

decisions. [Note: Delaying decisions is only one strategy for increasing flexibility.]

It is the maximum a decision maker can gain by taking the flexible initiative. He

associates a given amount of information with a given amount of flexibility.

In a related application, Merkhofer (1977) suggests a reordering of decision and

chance nodes to manipulate the amount of decision flexibility. He defines decision

flexibility as the size of choice set associated with a decision. [Note: The size of

choice set is only one element of flexibility.] The value of information, that is,

resolving the uncertainty before making the decision, depends on the amount of

decision flexibility. This can be interpreted as follows: the usefulness of putting

the chance node before the decision node depends on the degree to which the

subsequent payoff can be favourable (instead of unfavourable). Re-ordering of

these nodes is achieved by delaying the decision until uncertainty is resolved or

obtaining advance information to resolve the uncertainty before the decision is

made. Either way, there is a cost to uncertainty resolution, and this is equivalent to

the value of information.
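The tree-flipping calculation can be sketched in a few lines. The payoff matrix and probabilities below are hypothetical, for illustration only: in the original tree the decision is taken before the uncertainty resolves, while in the flipped tree the best action is chosen in each state.

```python
# EVPI via tree flipping (hypothetical two-decision, three-state example).
payoffs = {                       # decision -> payoff in each state (higher is better)
    "burn gas":  [5.0, 2.0, -1.0],
    "burn coal": [3.0, 3.0,  3.0],
}
probs = [0.3, 0.4, 0.3]           # assumed state probabilities

def expected(values, probs):
    return sum(v * p for v, p in zip(values, probs))

# Original tree: choose a decision, then the uncertainty resolves.
ev_original = max(expected(v, probs) for v in payoffs.values())

# Flipped tree: the uncertainty resolves first, then the best decision
# is taken in each state.
ev_flipped = expected(
    [max(v[s] for v in payoffs.values()) for s in range(len(probs))], probs)

evpi = ev_flipped - ev_original   # the most one would pay for advance information
```

With these illustrative numbers, ev_original is 3.0, ev_flipped is 3.6, and the EVPI is 0.6.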

Merkhofer's Expected Value of Perfect Information Given Undiminished Flexibility

(EVPIGUF) is the upper limit to the value of information given some level of

decision flexibility, just as the Expected Value of Perfect Information (EVPI)

computed from flipping a decision tree gives the upper limit to information

gathering. [Note: The upper limit to information gathering is not the upper bound

to maximum flexibility.] His measure is most relevant when a decision maker has

several decision variables to manipulate, i.e. several decisions to make, the order

and the timing of which can be adjusted to allow the resolution of uncertainty

beforehand.

The trigger event acts as perfect information for the subsequent second stage

decision. Hobbs' relative flexibility benefit is merely the difference between the

expected value of perfect information and value of expected information. Perfect

information is an exact, zero-error value or prediction of the uncertain outcome. Figure 7.7

illustrates the calculation of EVPI, which has similarities with Hobbs' relative

flexibility benefit in figure 7.8.

Figure 7.7 EVPI

Figure 7.8 Relative Flexibility Benefit

Mandelbaum's and Merkhofer's suggestions of using the expected value of information are confined to the aspect of flexibility that corresponds to the number of choices and the operationalisation of flexibility that is achieved by postponing

decisions. Further tests are required to determine its generalisability. In the next

section, we explore the possibility of combining features of expected value

measures to improve upon the overall flexibility measure.

7.3.4 Towards an Improved EV Measure

We propose and test two ways of improving upon the expected value measures

found in the literature. First, we propose to recalculate the normalised flexibility

measure using EVPI and Deterministic EV as anchors in the denominator.

Second, we propose to weight this new normalised measure by the cumulative

likelihood of the favourable trigger states. These measures are tested and

compared against existing expected value measures.

Putting all uncertain events before all decisions gives the highest expected value

associated with the maximum flexibility possible with postponement of decisions

(EVPI). Putting all uncertain events after all decision sequences gives the expected

value associated with zero decision flexibility (Deterministic EV). The

Deterministic EV configuration is given in figure 7.9. Since the EVPI and

Deterministic EV bound the reasonable spectrum of expected values (from the

most inflexible to most flexible), they are candidates for anchoring Schneeweiss

and Kühn's normalised flexibility measure. Any intermediate configuration in

which decisions and uncertainties are reordered should give an overall expected

value that falls between the EVPI and Deterministic EV. Expected values that fall

outside the range defined by the Deterministic EV and EVPI are not worth

considering as these options are too unfavourable compared to the flexibility they

offer. In the flexibility decision tree configuration, both X and Y provide expected

values between EVPI and the Deterministic EV, thus indicating that X and Y

provide some degree of flexibility.

Figure 7.9 Deterministic EV

The flexibility of X and Y depends on the number of favourable choices which in

turn depends on the likelihood of the trigger states of the trigger event (natural gas

price uncertainty). This suggests that expected values should be weighted by the

probabilities of the favourable outcomes. We compare the effects of assigning

different probabilities of natural gas prices on subsequent expected value measures

of the extended example in section 7.3.1.2 in table 7.3.

Table 7.3 Comparison of Expected Value Measures

Scenario                        A               B               C               D
Natural Gas Price:
P($2.4); P($3.2); P($4.0)       1/3; 1/3; 1/3   1/5; 2/5; 2/5   3/5; 1/5; 1/5   1/5; 1/5; 3/5
Average Gas Price               $3.20           $3.36           $2.88           $3.52
V(X|E(gas price))*              100.5           100.5           97.2, 100.5     100.5
E(X) in Flexibility             99.4            99.84           98.52           99.84
F(X) = V(X|E) - E(X)            1.1             0.66            -1.32, 1.98     0.66
V(Y|E(gas price))*              99.8            99.8, 100.6     98.7, 99.8      99.8, 100.6
E(Y) in Flexibility             99.7            99.9            99.3            100.06
F(Y) = V(Y|E) - E(Y)            0.1             -0.1, 0.7       -0.6, 0.5       -0.26, 0.54
EVPI                            99              99.36           98.28           99.4
Deterministic EV                100             100             99.6            100
Normalised X                    0.6             0.25            0.818           0.267
Normalised Y                    0.3             0.1563          0.227           -0.100
Weighted Normalised X           0.2             0.0500          0.4909          0.053
Weighted Normalised Y           0.2             0.0937          0.1818          -0.040

*Lacking more information, we take boundary values rather than interpolating.

Average gas price is computed by summing the gas prices weighted by

probabilities. For example, $3.20 = 1/3 ($2.4) + 1/3 ($3.2) + 1/3 ($4.0).

V(X|E(gas price)) refers to the value of the best choice for second stage X given

the average gas price. The relative flexibility benefit gives negative values in some

instances. This is misleading as a different probability assignment should not

change the scale of benefits so dramatically.

The EVPI is the expected value of the decision tree with the natural gas price

uncertainty resolved before the decisions are made. The Deterministic EV refers to

the expected value of the decision tree under the deterministic configuration in

figure 7.9, i.e. all decision nodes followed by all chance nodes, not Hobbs' value

given expected conditions. The difference between the EVPI and Deterministic

EV serves as a normalising denominator. EVPI is the upper bound to flexibility.

Deterministic EV reflects the EV of the most inflexible case. E(X) in Flexibility is the expected value of investing in X in the flexibility configuration,

i.e. investment decision followed by uncertainty followed by the second stage

decision of burning gas or burning coal. The normalised measure is the difference

between the expected value of the particular investment and the deterministic EV

divided by the normalising denominator.

The weighted measure is simply the product of the probability of favourable

conditions and the normalised measure. The weighting indicates how likely or,

more appropriately, how often the normalised measure is expected. This says that

if the three different natural gas prices are equally likely to occur (probability of

1/3), then X and Y are equal (0.2) or incomparable. In this scenario, we cannot

distinguish between the two investments on the basis of this measure. If the

probability assignments are 1/5, 2/5, and 2/5, this means that X will be favourable

1/5 of the time and Y will be favourable 3/5 of the time. The normalised measure

does not indicate this, but the weighted measure does. In this case, the weighted

measure clearly indicates that Y is preferred, because it capitalises on favourable

outcomes. On the other hand, if the probabilities are distributed 3/5, 1/5, and 1/5,

X would be the preferred investment as cost-savings outweigh the chances that Y

could make. If the probability assignment shifts in the direction of the most

expensive natural gas price, where neither X nor Y could give favourable payoffs,

the normalised measure for Y becomes negative. Thus Y should not be considered

as the choice between $103.6M burn gas and $100.5M burn coal (given co-firing is

installed) is altogether unfavourable.
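As a sketch of these definitions, using the figures for scenarios A and B from table 7.3 (the expected values are costs, so both numerator and denominator of the normalised measure are negative and the ratio is positive):

```python
def normalised(ev_flex, det_ev, evpi):
    """Normalised flexibility: (EV in the flexibility configuration - Deterministic EV)
    scaled by the (EVPI - Deterministic EV) range."""
    return (ev_flex - det_ev) / (evpi - det_ev)

def weighted(norm, p_favourable):
    """Weight by the cumulative likelihood of the favourable trigger states."""
    return p_favourable * norm

# Scenario A: gas prices equally likely (1/3 each).
norm_x = normalised(99.4, 100.0, 99.0)     # X favourable in one state
norm_y = normalised(99.7, 100.0, 99.0)     # Y favourable in two states
wx, wy = weighted(norm_x, 1/3), weighted(norm_y, 2/3)

# Scenario B: probabilities 1/5, 2/5, 2/5.
norm_x_b = normalised(99.84, 100.0, 99.36)
wx_b = weighted(norm_x_b, 1/5)
```

This reproduces the table entries: normalised X and Y of 0.6 and 0.3 in scenario A, both weighted measures equal to 0.2, and a weighted X of 0.05 in scenario B.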

In this particular example, the weighted normalised measure outperforms the

relative flexibility benefit and the normalised flexibility measure. It correctly ranks

investments differing on degree of flexibility and favourability that are included in

the same decision tree. The weighted measure is more meaningful than the

normalised flexibility measure. It is not misleading like the relative flexibility

benefit measure. A zero or negative measure means that any flexibility provided by

the associated investment is outweighed by its negative favourability.

Our proposed weighted normalised measure has some weaknesses.

1) It may not be possible to re-order the nodes so that all chance nodes occur before all
decisions to get the EVPI, because of the conditionality relationships defined in the
corresponding influence diagram. In that case we do not get the true EVPI as the
upper bound, but a lower EV of less than perfect information.

2) Similarly, it may not be possible to put all chance nodes after all decision nodes to
get a Deterministic EV, and the inflexible or status quo initial option may not be the
most favourable. The Deterministic EV does not necessarily refer to the worst expected
value; it is simply the value of the inflexible case, in which uncertainty is not considered
and no premium is included in its cost.

3) Weighting by the proportion of favourable states may also invite criticism. The
weighting does not reflect the degree of favourability; it only distinguishes the
favourable states from the unfavourable, i.e. the triggered from the untriggered states.

7.4 Entropic Measures

The close relationship between uncertainty and flexibility suggests the use of

entropy to measure flexibility. Entropy originates from the Greek word meaning

transformation. Entropy is a logarithmic measure of uncertainty in information

theory, randomness in nature, disorder in thermodynamics, information in

cybernetics, concentration in economics, and diversity in ecology. It reflects the

number and balance of elements in a closed system. Kumar (1987) sees the

immediate parallel to flexibility as the number and freedom of choices. From a

decision theoretic perspective, Pye (1978) associates entropy with the uncertainty

in a decision maker's future moves as retained in the amount of flexibility. In

energy policy, Stirling (1994) uses entropy as an index for diversity, which is an

aggregate or portfolio view of flexibility. Long established and widely used,

entropy seems to offer a way of measuring flexibility.

This section investigates the properties of entropy (section 7.4.1) that make it

attractive as a measure of flexibility, as suggested by Pye (1978) and Kumar

(1987). Pye treats flexibility as the uncertainty of future decision sequences, and

transitions are irreversible. Kumar applies flexibility to manufacturing systems,

where state transitions are reversible. Pye uses logarithm to the base 2 while

Kumar uses the natural logarithm. Their arguments are theoretically based though

not implemented in practice. What they overlook are those properties (section

7.4.4) that make entropic measures unreliable and misleading as an indicator of

flexibility. These criticisms support and expand those earlier attacks by White

(1975) and Evans (1982).

7.4.1 For Entropy

Uncertainty and flexibility are uni-directionally linked, i.e. as uncertainty increases

more flexibility is required but not vice versa. A measure of uncertainty indicates

the required but not actual amount of flexibility. In this context, entropy has been

suggested by Pye (1978) and Kumar (1987) as a measure of the number of choices

and the freedom of choice. Freedom of choice can be interpreted as the probability

of selecting a particular option.

The basic formula for entropy is the negative sum of the logarithms of the probabilities,
weighted by the probabilities: H(a) = - Σ_{i=1}^{m} p(ai)LN(p(ai)) = Σ_{i=1}^{m} p(ai)LN(1/p(ai))

where ai are states of the uncertain event following the previous node a, which is

an initial position that leads to a situation of m states as depicted in figure 7.10. In

the one stage case H(a) = S(a1, a2, ... , am) and the above formula applies. Two or

more stages can be computed using the decomposition rule explained in 5) below.

The number of states n(a) = m. The probability that the ai th state occurs is

represented by p(ai). The basic formula has the property that 0 ≤ H(a) ≤ LN(n(a)).
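The basic formula and the properties that follow can be checked directly; a minimal sketch (natural logarithm, zero-probability states skipped):

```python
import math

def entropy(probs):
    """H(a) = -sum p(ai) LN(p(ai)), with zero-probability states contributing nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)

assert entropy([1.0]) == 0.0                                     # one certain state: no flexibility
assert abs(entropy([0.2] * 5) - math.log(5)) < 1e-12             # equal probabilities: maximum, LN(n)
assert entropy([0.9, 0.1]) < entropy([0.5, 0.5]) <= math.log(2)  # 0 <= H(a) <= LN(n(a))
```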

Figure 7.10 Notation

[Two-stage probability tree: an initial position a leads to states a1, a2, ..., am at stage 1; state a1 leads to states a11, a12, ..., a1k1 at stage 2.]

Below are properties of entropy that reflect our notion of flexibility.

1) Entropy is zero if all states except one have zero-probability, otherwise it is

positive. If we only have one choice, then we have no flexibility. LN(1) = 0.

2) Entropy increases with the number of states. The more choices we have, the more

flexibility. Figure 7.11 shows the logarithmic relationship between the number of

states n(a) and entropy H(a).

Figure 7.11 Maximum Entropy as a Function of States

[Plot of maximum entropy H(a) against the number of states n(a), for logarithms to base 2, base e, and base 10.]

3) Maximum entropy occurs if each state has equal probability. For any a with m

states, p(ai) = 1/m for all i = 1 to m. H(a) = LN(n(a)). We have maximum

flexibility if each choice has an equal chance of being selected. For n = 2, figure
7.12 shows that entropy is greatest at p = 0.5, where the standard deviation of the
distribution also reaches its maximum of 0.5.

Figure 7.12 Entropy and Standard Deviation

[Plot of entropy and standard deviation against probability p for a two-state distribution; both measures of dispersion peak at p = 0.5.]

4) Entropy is a symmetric function. If the state probabilities are permuted amongst

themselves, the entropy will not change. That is, S(1/3, 2/3) = S(2/3, 1/3) as in

figure 7.13. Similarly S(1/6,1/6,1/2,1/6) = S(1/2,1/6,1/6,1/6). This means that

entropy is independent of value or payoff and does not distinguish between states.

Entropy depends only on the total number of states and the permutation of

probabilities.

Figure 7.13 State Discrimination

[Two two-state trees with the probabilities (1/3, 2/3) and (2/3, 1/3) assigned to the same states a1 and a2.]

5) The decomposition rule states that every multi-staged tree can be reduced to its

equivalent one stage multi-state form. The three probability trees in figure 7.14

have equal entropies of LN(5). They follow the decomposition formula of between

groups and within groups. Within a group, the basic formula applies. Between

groups they are additive.


H(a) = S(a1, a2, ..., am) + Σ_{i=1}^{m} p(ai)H(ai), where S(a1, a2, ..., am) = Σ_{i=1}^{m} p(ai)LN(1/p(ai))

H(ai) = 0 if ai are terminal nodes. The first tree is single staged, so the basic

formula applies: H(a) = 5*1/5*LN(5) = LN(5). The entropy for the second tree is

1/5LN(5) + 4/5LN(5/4) + 4/5[LN(2) + 1/2LN(2) + 1/2LN(2)]= 1/5LN(5) +

4/5LN(5/4) + 8/5LN(2) = LN(5). The entropy for the third tree is 2/5LN(5/2) +

2/5LN(2) + 3/5LN(5/3) + 3/5[1/3LN(3) + 2/3LN(3/2) + 2/3LN(2)] = LN(5).

Figure 7.14 Decomposition Rule

[Three probability trees, each decomposing into five equal 1/5 leaves: a single five-way split; a (1/5, 4/5) split whose 4/5 branch divides twice into halves; and a (2/5, 3/5) split whose 2/5 branch is halved and whose 3/5 branch splits (1/3, 2/3), the 2/3 branch halving again.]
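The decomposition calculations above can be verified mechanically. The sketch below encodes the three trees as nested (probability, subtree) pairs, following the tree shapes implied by the entropy calculations; the encoding itself is our reconstruction.

```python
import math

def tree_entropy(branches):
    """H(a) = S(p1..pm) + sum_i p(ai) H(ai); branches are (probability, child)
    pairs with child = None at terminal nodes, where H = 0."""
    s = -sum(p * math.log(p) for p, _ in branches)
    return s + sum(p * tree_entropy(child) for p, child in branches if child)

leaf = None
t1 = [(1/5, leaf)] * 5                       # single stage, five equal states
t2 = [(1/5, leaf),                           # 1/5, then the 4/5 branch halved twice
      (4/5, [(1/2, [(1/2, leaf), (1/2, leaf)]),
             (1/2, [(1/2, leaf), (1/2, leaf)])])]
t3 = [(2/5, [(1/2, leaf), (1/2, leaf)]),     # 2/5 halved; 3/5 split 1/3, 2/3
      (3/5, [(1/3, leaf),
             (2/3, [(1/2, leaf), (1/2, leaf)])])]

for t in (t1, t2, t3):
    assert abs(tree_entropy(t) - math.log(5)) < 1e-12  # all decompose to LN(5)
```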

6) Entropy increases if the states are brought closer together, i.e. probabilities

becoming more equal. The more indifferent we are to the choices or the more

equally favourable the choices are to us, the greater the flexibility. In this sense,

entropy is also a measure of dispersion. For example, S(1/7, 6/7) < S(2/7, 5/7) <

S(3/7, 4/7) < S(1/2,1/2). Any averaging procedure that brings the probabilities

closer together will increase entropy.

7) Entropy is continuous and differentiable, as evident from figure 7.12. A maximum

can be found. It is therefore more attractive than the size of choice set measure,

which is discrete, and other measures of dispersion, like standard deviation, which

are not uniformly continuous or differentiable.

7.4.2 Decision View (Pye, 1978)

Pye's notion of flexibility reflects our basic understanding: 'the ability to adapt or
be adapted to changing circumstances'. His treatment of flexibility as 'the amount
of uncertainty which the decision maker retains concerning the future choices
he will make', however, deviates from the expected value treatment of flexibility.

By considering all future decisions as uncertain, he uses entropy to measure this

uncertainty. Since the basic entropy formula contains probabilities not values, he

suggests two ways of incorporating value: weighting value as probabilities or

using a cut-off satisfactory value level to reduce the total number of choices.

He further defines robustness as a method of trading off flexibility against expected

value and proposes entropic measures for three types of robustness.

1) Pye's first robustness measure reflects the independence of flexibility and value.

The most robust move is that which retains maximum flexibility. Maximum

uncertainty occurs when all moves are equally likely, in which case, entropy is

simply the logarithm of the number of moves. For any multi-staged probabilistic

tree, the decomposition rule applies.

2) The second robustness measure is based on Simon's (1964) satisficing model.

The most robust move is one which retains the most flexibility subject to the

condition that the decision maker's estimate of the probability of choosing a future

sequence of moves depends on its value. The value associated with each move is

translated into a probability of the move by weighting the value by the sum of all

values. It is intended for multi-dimensional values that cannot be linearly

combined into a single utility function. However, Pye does not show how to

combine these values otherwise.

3) The third robustness measure combines probabilities and values using a cut-off

satisfactory level. If the value is above the cut-off level, the difference is taken and

weighted accordingly. Any value below this level is not included in the final

weighting.
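A sketch of one possible reading of these three measures follows; Pye's exact formulas are not reproduced in the text, so the implementations below, together with the move values and cut-off level, are our own illustrative assumptions.

```python
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def robustness_1(n_moves):
    # Value-independent: all moves equally likely, so entropy is LN(n).
    return math.log(n_moves)

def robustness_2(values):
    # Each value weighted by the sum of all values to give a probability.
    total = sum(values)
    return entropy([v / total for v in values])

def robustness_3(values, cutoff):
    # Only the excess over the satisfactory cut-off level is weighted in.
    excess = [v - cutoff for v in values if v > cutoff]
    total = sum(excess)
    return entropy([e / total for e in excess]) if total else 0.0

moves = [10.0, 12.0, 30.0]             # hypothetical values of three future moves
r1 = robustness_1(len(moves))          # LN(3): maximum flexibility retained
r2 = robustness_2(moves)               # unequal probabilities lower the entropy
r3 = robustness_3(moves, cutoff=11.0)  # only moves above the cut-off survive
```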

There are five problems with the practical implementation of this theoretical

treatment.

1) The first robustness measure works only with equal probabilities. If the number of
choices is reduced, the remaining probabilities must be adjusted to equal
probabilities rather than re-weighted. In practice, we would re-weight the
remaining choices and get unequal probabilities.

2) The second robustness measure weights the individual values by the total value and
then takes expectations over probabilities and values. These two rounds of
averaging distort the real picture, as entropy will undoubtedly increase.

3) The usefulness of the second measure for values that cannot be linearly combined
eludes the reader as Pye does not discuss the method of value combination either
before or after the weighted probability transformation.

4) Negative values cannot be weighted into probabilities, but Pye does not indicate
whether they should be rescaled to positive values or dropped from the calculation.

5) The third robustness measure combines aspects of the former two measures but
ignores the imbalance of re-weighted and rescaled value-transformed probabilities.

In addition to the above criticisms, Pye touches upon four issues that are

controversial to our understanding of flexibility. The first paradox concerns

information, value, and flexibility. The second paradox surrounds decision analysis

and entropic treatment of flexibility. The third issue is about value and flexibility

trade-off. The fourth issue concerns the transformation of values into probabilities.

These issues are discussed below.

1) Pye states: 'the introduction of information about the value of sequences of moves
reduces the decision maker's generally desirable uncertainty concerning his future
moves and so reduces flexibility.' This means that the decision maker retains
maximum flexibility when no values are known and all future moves are equally
likely. Thus the choice set is maximally large and maximally uncertain. As
soon as the value of any choice is revealed, the set becomes differentiated and
flexibility is reduced. However, Merkhofer and others state just the opposite. Any
new information resolves some degree of uncertainty before the decision is made
and consequently aids the decision maker in improving upon the outcome. The
value of this information depends on the degree to which the final payoffs can be
affected. Any information about value should help the decision maker assess the
amount of flexibility he has. These two views reflect a paradox: the value of
information versus information about values, and their effect on flexibility and the
value of flexibility. Pye's statement suggests that flexibility is inversely related to
value, but this contradicts the size of choice set definition of flexibility, i.e. the
number of favourable (valuable) choices.

2) A second paradox concerns ideological differences between decision analysis and
flexibility. Pye recalls that in classical decision analysis value is maximised, and a
dominated move would be eliminated from the set of moves under consideration on
the basis of estimates of value made at the time of the initial decision; when
maximising flexibility, however, it is most inappropriate to eliminate a move, since
the fewer the moves, the smaller the flexibility, unless the sequence of moves is
rejected as unsatisfactory. Decision analysis uses dominance of values to eliminate
unfavourable moves, while Pye's entropic treatment of flexibility leaves options
open. This paradox demarcates two ways of thinking: flexibility and favourability.
However, flexibility has no value without considering the favourability of options,
which is necessary to differentiate between the choices and eliminate the less
favourable ones.

3) A third contention concerns Pye's definition of robustness, namely that it trades off
flexibility and value. In our conceptual framework, we define robustness as
tolerance against uncertainty, implying no need to change. His method of trading
off the number and equal probability of choices (as contained in maximum entropy
and maximum flexibility) against value by means of a satisfactory cut-off level,
reduction of the choice set, and re-weighting oversimplifies and overlooks the real
issues in trading off flexibility and favourability as studied by Mandelbaum and
Buzacott (1990).

4) Finally, Pye's method of linearly weighting values as probabilities assumes that the
probability of a move is linearly dependent on the value of the move. He does not
address the implications of eliminating unfavourable moves and rescaling values.
Any averaging method tends to increase entropy, and this is misleading for value
optimisers. Furthermore, the transformation of values to probabilities assumes that
the relative importance of values is the same as the freedom of choice. The relative
importance of a choice is not necessarily the same as the likelihood of selecting it.
Other factors such as the disablers of time, cost, and effort should be taken into
account in determining the freedom of choice. Even so, such value-probability
weighting does not transform the underlying process into a stochastic one, which is
the essence of entropy.

7.4.3 Systems View (Kumar, 1987)

Originally, the entropy concept was applied to a closed system, from which two

views emerge and often coincide. One concerns the uncertainty of selecting an

element from a system. The other concerns the transformation of the system into a

different state. Pye and Kumar do not distinguish these two system views nor the

shift into decision terminology, whereby states become choices and options. In

applying entropy to manufacturing systems, Kumar equates the states of a system

with choices of workstations. Kumar (1987, p. 958) writes that 'the flexibility in
action of an individual or a system depends on the decision options or choices
available and on the freedom with which various choices can be made'. Entropy is

exactly such a function of the number of choices and the freedom of choice.

Kumar (1986) applies entropic concepts from a Markovian system (with reversible

state transitions) to measure the loading flexibility and operations flexibility of a

Flexible Manufacturing System. The decomposition property of entropy, which

enables the addition of between group and within group entropies, is attractive for

measuring the operations entropy of workgroups. Later, Kumar (1987) develops

(but does not apply) four entropic measures to satisfy desirable properties of

flexibility measures.

Kumar claims that entropy offers an objective basis for measuring flexibility but

does not mention the way these probabilities are derived. The probability

associated with state ai (reflecting the probability that ai is chosen) is the same as

the transitional probability from a to ai. These transitional probabilities are not the

same as the probabilities associated with the states of the trigger event. Kumar's

probabilities reflect the freedom of choice, which seem to encompass our indicators

of disablers and motivators as reflected in the proportion of favourable outcomes,

availability of options, probability distribution of the trigger event, and other

factors that affect the likelihood of choosing the second stage option.

Although entropy's attractive properties reflect our notion of flexibility, the

treatment of values, the derivation of probabilities for freedom of choice, and the

rules for determining which choices to include have not been addressed by Kumar.

To investigate this further, we consider different ways of deriving these

probabilities: transforming values such as switching cost and response time into

probabilities or using subjective probabilities. The conclusion is the same: no

matter how sophisticated the method of probability derivation, entropic measures

cannot distinguish between states. Therefore, any entropic measure will not be

able to trade-off value and probability. While it may be true that the greatest

flexibility exists when all choices are equally likely, entropy fails when the choices

are not equally likely. That is, it is possible to have H(a) > H(b) when n(a) < n(b)

because choices are not equally likely. These fundamental problems with entropy

are listed in the next section.

7.4.4 Against Entropy

Entropy was introduced to operations research and systems modelling in the

sixties. A series of pro-entropy and against-entropy articles appeared in the OR

literature: Dreschler (1968), White (1969), Wilson (1970), White (1970), and

White (1975). These, along with an excellent review by Horowitz and Horowitz

(1976), seemed to have sealed the fate of entropy in this field. As these criticisms

were not specific to flexibility, entropy made a comeback in Pye (1978)

and Kumar (1987). While heralding the properties of entropy, Pye and Kumar

have glossed over fundamental issues, such as the derivation of probabilities and

the importance of state discrimination. This section discusses these issues and their

implications for flexibility measurement.

1) Entropy concerns uncertainty, at best, probabilities. Flexibility is about choices,

which are states of a decision. The selection of choices depends on decision rules

of value optimisation and trade-offs of decision criteria. To use entropic measures,

we need to represent these decisions either as uncertainties or associate them with

the trigger event, as in figure 7.15. If they are represented as uncertainties, the

decision rules that determine their selection must somehow be transformed into

probabilities. However, if all future decisions are treated as uncertain, there is no

way of distinguishing between choosing or not choosing other than probabilities of

1 and 0, in which case it reduces to a trivial configuration.

Figure 7.15 Decision Tree Transformation for Entropic Treatment

[Three configurations: the original formulation (enabler decision, trigger event, second stage decision); the second stage decision represented as uncertain; and the reduced tree of favourable choices.]

2) Entropy does not give any more information than the permutation of probabilities

(reflecting balance or dispersion) and the total number of states. If the same

probabilities are reassigned to different states, the entropy remains the same.

Adding new states with zero-probabilities does not change the entropy. Thus any

permutation or addition of zero probability states will not change the entropy.

3) Entropy does not discriminate between states. Any re-assignment of the same

discrete probabilities to different states will give the same entropy. For n = 2, it is

not possible to distinguish between choosing or not choosing.

4) As the number of non-zero probability states increases, entropy will increase.

Intuitively, as the number of stages increases, entropy should also increase. The

attractive decomposition property of entropy, however, implies that multiple stages

may not necessarily give higher entropy than single or fewer stages.

5) H(a) is always greater than H(b) if n(a) > n(b) and p(ai) = 1/n(a) and p(bi) =

1/n(b). Without equal probability, H(a) is not necessarily greater than H(b). As

such, entropy is not a reliable indicator of number of choices.

6) It is meaningless to use entropy in the expected value manner, i.e. to weight and
aggregate the terms by payoffs, because the maximum entropy of equal probabilities is no longer
apparent. That is, E(H(a)) = - Σ_{i=1}^{m} p(ai)v(ai)LN(p(ai)), where v(ai) is the value or
payoff associated with the ai th state.

7) Entropic calculations do not reflect any dependence of probabilities. Entropy

remains the same regardless of dependence relationships as long as all multi-stage

trees decompose to the same permutation of probabilities in the same number of

states. Several dependence relationships are thus ignored: state, probability, value.

8) Even as a measure of dispersion, it does not provide more information than


standard deviation, which takes value assignments into consideration. For

example, S(1,1,1,2,3) = S(2,2,2,4,6) = S(0.125, 0.125, 0.125, 0.25, 0.375) =

1.494. Standard deviation of (1,1,1,2,3) = 0.894. Standard deviation of

(2,2,2,4,6) = 1.789 which is twice the previous standard deviation.
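The point can be demonstrated directly: transforming values into probabilities by dividing through by their sum makes entropy blind to scale, while the standard deviation of the underlying values is not.

```python
import math
from statistics import stdev

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def values_to_probs(values):
    total = sum(values)
    return [v / total for v in values]

a = [1, 1, 1, 2, 3]
b = [2, 2, 2, 4, 6]    # the same values doubled

# Both value sets induce the same probabilities, hence the same entropy ...
assert abs(entropy(values_to_probs(a)) - entropy(values_to_probs(b))) < 1e-12
# ... while the (sample) standard deviation of the values doubles.
assert abs(stdev(b) - 2 * stdev(a)) < 1e-9
```

The 0.894 and 1.789 quoted above are sample standard deviations.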

9) Cumulative probability is meaningless in the entropy context. Any regrouping

within a stage by cumulative probabilities will change the entropy because the

number of states and permutation of probabilities change. For example,

S(1/3,1/3,1/3)>S(2/3,1/3) even though two of the three states may be similar.

10) Entropy is an absolute measure, not relative. Flexibility is relative with respect to

events and decisions, states and choices, probabilities, and values. These additional

value indicators are simply not considered in entropic calculations.

11) Transforming values into probabilities for entropic calculation is not as

straightforward as Pye has described. As negative values are not allowed, they

must be dropped or rescaled. If they are dropped, entropy is reduced. Additive

rescaling (adding all values by a constant) changes the probabilities and entropy but

does not change the standard deviation. Any averaging procedure brings

probabilities closer together hence distorting the original dispersion and balance.

Transforming values into probabilities does not make it a stochastic process.

These state probabilities depend on how easy, how favourable, and how likely we

are to choose it compared to other options available from the same decision stage.

Such value-probability transformation confuses likelihood with relative importance.

12) The concept of entropy was originally developed for systems not decisions. The

system (state transition) and decision (stages) views are quite different. State

transitions in systems are reversible while most decisions are irreversible and the

choices are mutually exclusive, in-so-far as decision analysis is concerned.

13) The decomposition property separates the entropy between groups and the entropy

within groups. It is meaningful for the systems view but not the decision view.

For the decision view, stages represent passage of time, whereas in the systems

view, stages are only state transitions not chronological sequences. In decision

sequences, the number of stages reflects not only the passage of time (the more stages,

the more time) but also conditionality and additional costs.

14) Any re-order or reversal of independent chance events in a symmetric sequence

produces the same entropy. That is, H(A, B, C) = H(C, A, B) = H(A, C, B) =

H(B, A, C) where A, B, and C are independent chance nodes. There is no concept

of value of information as it pertains to the early resolution of uncertainty, because

symmetric reversals give the same entropy. In this respect, White (1975) observes

correctly that entropy fails to relate information to its uses.
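The order-invariance in point 14 is easy to verify for independent events, since the joint entropy of independent events is simply the sum of their individual entropies (the probabilities below are illustrative):

```python
import math
from itertools import permutations

def entropy(probs):
    return sum(p * math.log(1.0 / p) for p in probs)

# Three independent chance events with illustrative probabilities.
A, B, C = [0.7, 0.3], [0.5, 0.5], [0.2, 0.8]

# For independent events the joint entropy is the sum of the individual
# entropies, so H(A,B,C) = H(C,A,B) = H(A,C,B) = ... for every ordering.
totals = [sum(entropy(e) for e in order) for order in permutations([A, B, C])]
print(totals)   # six numerically identical values
```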

15) Entropy is not a unique measure. Table 7.4 shows different distributions of five,

four, and three states each giving the same entropy 1.099, while probability

assignments as well as standard deviations differ in each case. A linear

transformation of values into probabilities will not give a unique entropy.

Table 7.4 Equal Entropies for Different Number of States

                 Five states                Four states                Three states
States     v(ai)    P(ai)   H(ai)     v(ai)    P(ai)   H(ai)     v(ai)   P(ai)   H(ai)
a1         1        0.0388  0.1262    1        0.0456  0.1409    5       0.3333  0.3662
a2         1        0.0388  0.1262    3        0.1369  0.2722    5       0.3333  0.3662
a3         1        0.0388  0.1262    6        0.2738  0.3547    5       0.3333  0.3662
a4         10.7476  0.4174  0.3647    11.9150  0.5437  0.3313
a5         12       0.4661  0.3558
entropy                    1.099                      1.099                     1.099
standard   5.6992   0.2214            4.7936   0.2187            2.7386  0.1826
deviation
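The equal entropies in Table 7.4 can be verified numerically; the probability assignments below are taken directly from the table:

```python
import math

def entropy(probs):
    return sum(p * math.log(1.0 / p) for p in probs)

# Probability assignments from Table 7.4 (five, four, and three states).
five  = [0.0388, 0.0388, 0.0388, 0.4174, 0.4661]
four  = [0.0456, 0.1369, 0.2738, 0.5437]
three = [1 / 3, 1 / 3, 1 / 3]

for probs in (five, four, three):
    print(len(probs), "states:", round(entropy(probs), 3))   # all give 1.099
```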

Some of these criticisms have broader foundations, as noted by Horowitz and

Horowitz (1976) in points 16 and 17 below.

16) Entropy is based on the analysis of a stochastic process. Decisions are not

stochastic. Weighting values so that they sum to one does not make the process

stochastic. There is no observational or theoretical basis for the analogy between

physical and economic processes.

17) Entropy can be derived from parameters of moment distributions, so it does not

contribute any additional information.

The above arguments suggest that entropy is not suitable for our conceptual

treatment of flexibility where decisions reflect value optimisation. Evans (1982)

points out that Pyes model can lead to bad decisions. Entropic measures are only

applicable to situations where preferences can be quantified and changed into

probabilities. Entropy is overwhelmingly dictated by the size of choice set and

non-trivial dispersion of the probability distribution while completely excluding

payoffs. Although entropy promises a precise, coherent, and objective measure, it

does not capture the multi-dimensional aspects of flexibility. It may be useful as an

overall indicator of number of choices and balance (dispersion) in situations where

there are many choices and many stages, making it necessary to prune the tree before

further analysis. However, entropy cannot deal with the trade-off between balance

(freedom of choice) and number of choices unless these choices are equally likely

or equally attractive, in which case it is just as easy to count them. If maximum

flexibility is reflected by equal probabilities, then the decision maker must be

indifferent to the choices, meaning that all choices are identical. This goes against

Mandelbaum's (1978) diversity as a source of flexibility. Diversity increases

flexibility because it increases the capability of responding to different conditions.

But entropy decreases as diversity (of probability assignments not value) increases.

7.5 Comparison of Entropic and Expected Value Measures

Entropy and expected value are methods of weighting and aggregating

probabilities. Both have been proposed as measures of flexibility.

The recursive equations of entropy and expected values for a multi-staged tree can

be set such that H(a) and E(a) are equal.


H(a) = S(a1, a2, ..., am) + Σ_{i=1}^{m} p(ai) H(ai)

where S(a1, a2, ..., am) = Σ_{i=1}^{m} p(ai) LN(1/p(ai)), and H(ai) = 0 if ai is a terminal node.

E(a) = Σ_{i=1}^{m} p(ai) E(ai), where E(ai) = Σ_{j=1}^{ki} p(aij) E(aij)

E(aij) = V(aij) if aij is a terminal node, where V(aij) = v(ai) + v(aij), i.e. the sum of all values along the path to the final node.

H(a) = E(a) if v(ai) = LN(1/p(ai)) = -LN(p(ai))

Entropy and expected values are equal for a probabilistic decision tree (no decision

nodes) if values and probabilities are related in a logarithmic manner.
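The two recursions can be sketched as follows. The tree encoding and the helper that places v(ai) = LN(1/p(ai)) on every branch are illustrative conventions chosen to demonstrate the logarithmic condition above, not a prescribed implementation:

```python
import math

# Node representations: ("chance", [(p, v, child), ...]) or ("leaf",).
# Each branch carries its probability p and a value v; choosing
# v(ai) = LN(1/p(ai)) makes the two recursions coincide.

def H(node):
    """Recursive entropy: H(a) = S(a1..am) + sum_i p(ai) * H(ai)."""
    if node[0] == "leaf":
        return 0.0
    return sum(p * (math.log(1.0 / p) + H(child)) for p, _, child in node[1])

def E(node, acc=0.0):
    """Recursive expected value; a leaf returns the values summed along its path."""
    if node[0] == "leaf":
        return acc
    return sum(p * E(child, acc + v) for p, v, child in node[1])

def branch(p, child):
    return (p, -math.log(p), child)   # v(ai) = LN(1/p(ai))

leaf = ("leaf",)
sub = ("chance", [branch(0.6, leaf), branch(0.4, leaf)])
tree = ("chance", [branch(0.5, sub), branch(0.5, leaf)])

print(H(tree), E(tree))   # numerically identical when v(ai) = -LN(p(ai))
```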

Other than the above commonalities, expected values and entropy have entirely

different theoretical assumptions. Expected value is a statistical measure whereas

entropy is not. Entropy is simply a continuous and differentiable measure of the

number of choices and dispersion of probabilities. It appeals to our notions of

flexibility because of its decomposition and other properties. However, the

arguments against entropy in section 7.4.4 suggest that simpler measures of

dispersion and proportion (cumulative probability) of favourable choices and

number of favourable choices may be more apparent and more widely applicable.

Measures of flexibility depend on definitions of flexibility. Each author cited in this

chapter defines flexibility differently. As such, they make use of different indicators

of flexibility. Pye defines flexibility as the uncertainty regarding the decision maker's moves. Hobbs defines flexibility as choices available to respond to uncertainty,

where a first stage decision guarantees the certainty (availability) of future choices.

Thus future decisions have no uncertainties other than the trigger events, and they

are only subject to the favourability criterion. Hobbs uses the enabler and motivator, but no disabler, so flexibility is treated as unambiguously good, ignoring the downside of flexibility mentioned in the previous chapter. Kumar uses only the disabler, i.e. the availability or freedom of choices, with no enabler or motivator. Pye uses both the disabler and the motivator, but no enabler. Hobbs assumes that the first decision

guarantees future flexibility, while Pye views all future decisions as uncertain,

regardless of initial positions. The comparison between expected value and

entropy highlights the important difference between favourability and flexibility.

Expected values and entropy appear complementary in many respects. Expected

values discriminate between states because payoffs and their relative weights (as

weighted by the series of probabilities associated with previous states) distinguish

between states. But expected values do not indicate the number of states.

Meanwhile, entropy reflects the number of states and their dispersion but does not

discriminate between states. Expected values are based on the principle of

dominance and elimination of sub-optimal choices. Entropy is based on the

number and balance of states. Expected values mainly depend on values, with

probabilities as a secondary element. Thus expected values reflect favourability.

Meanwhile, entropy is value-free and does not indicate favourability.

Entropy could be a good indicator for that aspect of flexibility that is value-free

and a screening device for large multi-staged probabilistic trees which treat future

decisions as uncertain. However, flexibility is not value free, and the first stage

guarantees flexibility in the second stage decision. Hence it is necessary to retain

decisions as decisions and not uncertainties. Except for a few of its attractive

properties, the information provided by entropy can be replaced by loose indicators

such as the size of choice set and standard deviation. In addition to the above,

various arguments in section 7.4.4 have shown the inadequacies of using entropy

for our purposes.

7.6 Conclusions

Individually, each measure reviewed in this chapter does not capture the multi-

faceted meaning of flexibility. We propose the combined use of indicators and

expected values to measure flexibility more fully. Indicators are individually not

sufficient, but together meet the criteria for measuring flexibility. Expected values

appear in different forms, most of which over-emphasize the favourability aspect.

We caution against total reliance on expected values and propose the use of

indicators to supplement the deficiency, such as the trade-off between different

aspects of flexibility. Entropic measures, though possessing attractive properties,

fail to meet our criteria for measuring flexibility. We dismiss any measures based

on entropy. These conclusions are supported in detail below.

INDICATORS

Indicators originate from the definitional elements of flexibility. These indicators

(number of choices, enabler, disabler, motivator, trigger event, trigger state,

likelihood, and two stage decision sequence) describe the essence of flexibility and

provide a framework for structuring options and strategies for flexibility. Not all

indicators are necessary, but each indicator alone is not sufficient to capture the

multi-faceted meaning of flexibility.

EXPECTED VALUE BASED MEASURES

Expected values disguise the number of options available and the uncertainty to

flexibility mapping. As an aggregate measure, they emphasize the favourability

aspect of flexibility. Although useful for summarising decision trees, expected

values do not resolve conflicts between favourability and flexibility. Individual

probabilities are aggregated but lost in the final expected value measure.

1) Relative Flexibility Benefit

Hobbs' relative flexibility benefit measures the difference between considering

uncertainty as opposed to treating it as 100% probable. This measure is contingent

on a mapping between the decision that realises the flexibility and the uncertain

condition that provides the opportunity for it. The flexibility conveyed is that of

capitalising on the maximum number of uncertain events. It differentiates between

flexibility and no flexibility, but not the degree of flexibility. It does not resolve the

conflicts between the flexibility and favourability of different investments.

2) Normalised Flexibility Measure

This measure depends on the matchability or calibration between trigger states

and second stage options. A poor match gives a large slack and less favourable

outcome than the tighter fit of a better match between the states of the condition

and the options available. The initial choice that leads to minimum total slack gives

the most flexibility. It provides an index between 0 and 1, and is useful for

comparing different degrees of flexibility.

3) Expected Value of Information

The expected value of information explains the structuring of trigger events before

decisions that give flexibility. Defined for decision flexibility, it pertains only to

postponement of decisions.
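As a sketch of this idea, the fragment below computes the expected value of perfect information for a single postponable decision, comparing "deciding before knowing" with "knowing before deciding". The choices, states, payoffs, and probabilities are all hypothetical:

```python
# payoffs[choice][state]: illustrative figures only.
payoffs = {
    "build": {"high": 100.0, "low": -40.0},
    "wait":  {"high":  20.0, "low":  10.0},
}
p = {"high": 0.6, "low": 0.4}

# Deciding before knowing: commit to the choice with the best expected payoff.
ev_before = max(sum(p[s] * v[s] for s in p) for v in payoffs.values())

# Knowing before deciding: observe the trigger event, then pick the best
# choice in each state, and average over the states.
ev_after = sum(p[s] * max(v[s] for v in payoffs.values()) for s in p)

evpi = ev_after - ev_before   # value of postponing until the event resolves
print(ev_before, ev_after, evpi)   # 44.0, 64.0, 20.0 (up to float rounding)
```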

4) Towards an Improved Measure: Weighted Normalised Measure

In an attempt to overcome deficiencies of existing expected value measures, we

investigated the use of EVPI and Deterministic EV to anchor the normalisation

and a cumulative probability distribution of the trigger event to weight the

normalised measure. Although the resulting weighted normalised measure

performs better than existing expected value measures in our extension of Hobbs' example, we caution against its use for the following reasons.

1) EVPI and Deterministic EV require restructuring the decision tree, which may not be possible due to conditional dependence of events. Furthermore, restructuring the tree may not be realistic, e.g. the uncertainty occurs only after the decision is made; there is no way to postpone the decision or obtain information to resolve the uncertainty; or the decision structure is inflexible.

2) EVPI is meaningful only in Merkhofer's terms, i.e. in the context of decision flexibility; it is not generalisable and thus misses out on the richness of flexibility.

3) The method of using loose indicators such as probability (likelihood) may be more
revealing, simpler, and more accurate than weighting and normalisation, which
tend to disguise the simple elements.

4) Further tests are required to establish its reliability and generalisability.

ENTROPY

The underlying theory behind entropy is not appropriate for our conceptual

understanding of flexibility. Entropy does not discriminate between states,

therefore it cannot differentiate between the favourable and the unfavourable. It

does not recognise value and thus ignores the favourability inherent in flexibility.

Entropy is not unique: different permutations of probabilities give the same value. To use entropy, it is necessary to

treat the future as uncertain, thus removing the value preferences of the decision

maker.

To develop practical guidelines for the assessment and operationalisation of

flexibility, we propose to

1) use indicators and expected values to measure different flexibility options and
strategies, via

2) illustrative examples pertaining to the UK Electricity Supply Industry, and

3) translate Mandelbaum's sources of flexibility to this structuring framework.

We call this modelling flexibility in Chapter 8.

CHAPTER 8

Modelling Flexibility

8.1 Introduction

The title of this chapter, Modelling Flexibility, refers to the structuring and

assessment of flexibility. The terminology of chapters 6 and 7 is used to represent

flexibility. Decision trees and influence diagrams are used to structure flexibility.

Indicators and expected values of Chapter 7 are used to measure flexibility.

We present practical guidelines for the representation and assessment of flexibility

via

1) the application of indicators and expected value measures as studied in Chapter 7;

2) the construction of basic decision models relating to capacity planning in the UK


Electricity Supply Industry as discussed in Chapter 2, i.e. uncertainties affecting
plant economics and pool price; and

3) illustrative examples of the operationalisation of flexibility through options and


strategies as given in Chapter 6.

We show the application of these guidelines to structure and assess strategies in

the relevant context of capacity planning in the UK ESI to answer two outstanding

questions concerning the usefulness of flexibility.

1) How can we model flexibility?

2) How can modelling flexibility apply to electricity planning?

These guidelines have been developed from extensive analysis to test the breadth of

application. However, we present only a few illustrative examples to support the

main points, as they have already been analysed to the same level of depth as those

in the previous chapter.

This chapter is organised as follows. Section 8.2 presents the guidelines for

structuring. Section 8.3 presents the guidelines for assessment. Section 8.4

presents basic models of plant economics and pool prices, respectively, using the

terminology and guidelines of previous sections. Section 8.5 presents generic

examples of operationalising strategies of partitioning, sequentiality,

postponement, and diversity. The final section 8.6 summarises the guidelines in

brief.

8.2 Structuring

The examples in Chapter 7 indicated some essential requirements in structuring

flexibility for further assessment. Structuring refers to the representation of

types and elements of flexibility to facilitate the analysis of flexibility and

uncertainty. We propose the use of 1) decision trees and influence diagrams for

structuring the model (section 8.2.1) in 2) a minimum of two stage sequence

(section 8.2.2) with 3) three types of uncertainties, namely trigger, local, and

external events (section 8.2.3).

8.2.1 Decision Analytic Framework

The term decision analytic framework first appeared in Chapter 4 of this thesis as

a proposal to make use of decision trees and influence diagrams to organise other

techniques. Here the same term refers to a modelling framework of decision tree-

based techniques capable of representing and assessing flexibility. This

framework is appropriate for modelling flexibility for the following reasons:

1) Flexibility is a feature of modelling approach;

2) Decision trees and influence diagrams are structuring tools for uncertainties,
decisions, and contingency; and

3) This framework facilitates the representation of options and strategies in the


operationalisation of flexibility.

FEATURE OF THE MODELLING APPROACH

Previous chapters have shown that flexibility has value only in the presence of

uncertainty. Therefore measuring flexibility requires an approach that considers

uncertainty by including and representing it in some form. This precludes the

deterministic approach, i.e. one that assumes all uncertainties do not exist or have

only one state. Other conceptual aspects of flexibility call for multi-contingency.

This precludes the probabilistic approach, where the expanded risk analysis

considers all uncertainties simultaneously. The formulation of a strategy to

proceed, i.e. a course of action, requires the identification of paths. Decision

analysis through use of decision trees allows the consideration of uncertainty, in

terms of chance events and decision points.

Other methods of assessing and representing flexibility, as described in recent

literature, are variants of the decision tree method of analysis, e.g. stochastic

dynamic programming and contingent claims analysis, as discussed in chapters 4

and 6.

STRUCTURING TOOLS: DECISION TREES, INFLUENCE DIAGRAMS

The decision analytic framework relies on influence diagrams and decision trees for

structuring the problem. Influence diagrams are used to define conditionality and

variable relationships. In influence diagrams, probabilities and values of dependent

events can be assigned and expressed easily. Decision trees are used to define

chronological sequences and uncertainty-flexibility mapping. In decision trees,

decisions and uncertainties can be ordered as they occur. The combination of

influence diagrams and decision trees facilitates the modelling of multi-staged

decisions and uncertainty sequences.

OPERATIONALISATION OF FLEXIBILITY

As already shown in Chapter 7, this modelling framework facilitates the

consideration of flexibility in at least three operationalisations: 1) options that exhibit

flexible characteristics or provide flexibility in some way, 2) the sequentiality

strategy by node decomposition, and 3) the postponing strategy by node re-

ordering. The latter two strategies (illustrated in section 8.5) are based on

Merkhofer's proposal of EVPI for decision flexibility, involving the evaluation of

knowing before deciding versus deciding before knowing.

1) Decision analysis is specifically about decisions and choices. Options are


represented as states of a decision. In Chapter 7, Hobbs et al (1994) and
Schneeweiss and Kühn (1990) operationalise flexibility via options.

2) Decomposing a decision into sub-decisions is an example of sequentiality or


staging, one of Mandelbaum's (1978) sources of flexibility, as it gives the decision
maker more control over each sub-decision as well as the opportunity to obtain
more information pertaining to each sub-decision.

3) One way of obtaining flexibility is by examining the extent to which decisions and
trigger events can be re-ordered to get higher payoffs. Insight into timing gives the
possibility of postponing a decision until its trigger event occurs. Likewise,
flexibility is increased if trigger events can be identified and introduced.

In addition to the above supporting arguments for decision analysis as a modelling

framework, we recall that expected value measures of Chapter 7 have succeeded in

capturing the favourability aspect of flexibility and generally perform well albeit

with caution. Since expected values are based on decision analysis, this suggests

that decision trees provide an automatic flexibility calculus. We also cite the

rationale for such a framework from Chapter 4, e.g. technique familiarity, software

availability, sophistication of presentation, computation, and nodal linkages

between decision trees and influence diagrams.

8.2.2 Two Stage Decision Sequence

We represent the potential or capability to change as a decision sequence in a

minimum of two stages, in which flexibility is associated with the first stage but

only realised in the second stage.

FIRST STAGE

The first stage contains at least two choices, each providing a different level of

flexibility. The two choices correspond to the activating initial position and default

option. The choice that provides future flexibility, i.e. the activating initial

position, is assigned a payoff, i.e. the purchase of this flexibility at a cost, called the

enabler.

SECOND STAGE

The second stage includes an uncertainty-flexibility mapping in which the area of

uncertainty is mapped to the type of flexibility, i.e. trigger event to flexibility

decision. Within this mapping, there is also an implicit assignment of trigger state

to the choice that leads to a favourable outcome. Likelihoods, reflecting the

probability of realising this future favourable potential, are assigned to trigger

states.

For example, flexibility of capacity size, as defined by the ability to adjust total

capacity in the system according to need, depends on demand uncertainty. The

variation in demand levels can be met by adjusting capacity size. There may be

several choices in the second stage, but they lead to favourable payoffs only if

trigger states of the uncertain event occur. If demand is high, then high capacity is

useful. If demand is low, then low capacity is useful. Thus the flexibility of

capacity size purchased by the first decision is determined once the trigger event is

known. The alternative that does not lead to flexibility is not affected by the

trigger event. For example, if flexibility of capacity size is not provided, then

demand uncertainty has no effect on the subsequent payoff.

The choices available at the second stage depend on the choice selected in the first

stage. The second stage consists of the exercise decision which carries possible

further cost (disabler) and a return payoff (motivator). The payoff depends mainly

on the outcome of the trigger event, an uncertainty that is resolved after the first

decision but before the second.

This two stage cycle can be repeated. Furthermore, each stage can be composed

of decision/chance node sequences, i.e. nested sub-trees.
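The two stage structure above can be illustrated with a minimal numerical sketch. All figures below are hypothetical: the premium is the enabler, the exercise cost the disabler, and the conditional payoff the motivator:

```python
# All figures are hypothetical. Stage 1 buys (or declines) flexibility at a
# premium; the trigger event then resolves; stage 2 exercises only if it pays.
premium   = 10.0   # enabler: cost of the activating initial position
disabler  = 2.0    # further cost of exercising at stage 2
motivator = 30.0   # payoff when the exercised choice matches the trigger state
baseline  = 5.0    # payoff of the default (inflexible) option
p_trigger = 0.4    # likelihood assigned to the trigger state

def flexible_option():
    # Exercise only when the trigger state occurs and exercising pays off.
    stage2 = p_trigger * max(motivator - disabler, 0.0)
    return stage2 - premium

def default_option():
    return baseline   # unaffected by the trigger event

print(flexible_option(), default_option())   # about 1.2 vs 5.0 here
```

With these particular numbers the default option dominates, illustrating that purchasing flexibility is only worthwhile when the trigger likelihood and motivator outweigh the enabler and disabler costs.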

8.2.3 Local and External Events

Besides trigger events, there are other uncertainties that affect final payoffs. Local

and external events occur after the second stage decision. Local events are those

that affect payoffs irrespective of flexibility. For example, in considering flexibility

of the capacity size of a coal-fired plant, the uncertainty of renewable technologies

has no effect on the second stage decision but has consequences on the final

payoff. External events are those uncertainties that affect all choices in the first

stage decision: to provide or not to provide flexibility. For example, pool price

uncertainty affects all choices of plant investment. These uncertainties are not

usually independent. We distinguish these three types of uncertainties (trigger,

local, and external events) as they are important in modelling flexibility.

Figure 8.1 depicts the above terminology. In Stage 1, the decision maker chooses between "Purchase flexibility A" at a cost (enabler) called "Premium" and staying with the status quo of "No flexibility B". The existence of Stage 2 is only meaningful if the "Trigger Event" precedes it. "Option 1" in Stage 2 gives the best payoff if "Trigger state 1" occurs. Similarly, "Option 2" gives the best payoff if "Trigger state 2" occurs. If neither trigger state occurs, the "Don't Exercise" option in Stage 2 gives the most favourable payoff. The payoffs associated with the first stage choice of "Purchase flexibility A" are also affected by the states of "Local Event A" and the "External Event". For the "No flexibility B" case, the payoffs are affected by "Local Event B" and the "External Event".

Figure 8.1 Decision Tree of Generic Example

The associated influence diagram in figure 8.2 shows that Trigger Event does not

affect the payoffs for B. Similarly, Local Event B does not affect A. However,

External Event affects both A and B.

Figure 8.2 Influence Diagram of Generic Example

Such a model of flexibility may contain several trigger events, local events, and

external conditions associated with first and second stage decisions. There may be

many stages in such a decision tree, but each stage that realises a type of flexibility

is preceded by its corresponding trigger event. Modelling flexibility contrasts the

flexibility provided by a course of action with the lack of flexibility in another,

which is called the status quo case or default option.

8.3 Assessment

The assessment of flexibility refers to measuring flexibility, trading off aspects of

flexibility, and comparing options or strategies that differ in the degree of flexibility

they provide. This assessment depends on the level of complexity, which spans the

spectrum from simple to complex. For simple problems, structuring is not

necessary, and indicators are sufficient for measuring flexibility. More

complicated problems require model structuring and assessment using indicators

and expected values.

8.3.1 Simple Problems

Simple problems do not require structuring, although structuring helps to identify

the relevant indicators for measuring flexibility. These problems include 1)

options; 2) strategies that concern individual elements (aspects) of flexibility, e.g.

reducing resistance to change; and 3) strategies related to options, e.g. searching

for additional options. The latter two examples are means to increase the

boundaries of the solution space, thereby enlarging the choice set. We explain

these three types of problems below.

1) Operationalisation by options has already been discussed in Chapter 7, e.g.


examples of Hobbs et al and Schneeweiss and Kühn. Simple problems involve the
choice between two investments that offer different levels of flexibility, whether or
not to enter into a contract that provides flexibility, and adding a plant that exhibits
a characteristic that promotes flexibility. Hirst (1989) discusses plant
characteristics that offer more flexibility.

2) Reducing resistance to change by removing or relaxing constraints is another way


to increase the number of second-stage options. Examples of this source of
flexibility in the electricity industry are buying permits to pollute (this relaxes or
eliminates the emissions constraint), entering into contracts to fix electricity prices
(so that the pool volatility does not have an influence), and supply contracting to
eliminate volatility in fuel prices.

3) Searching for additional options or actions obviously increases flexibility because


it increases the number of choices (if available and if found). However, the benefits
gained from this strategy must be weighed against the costs of searching.

The uncertainty to flexibility mapping is used to identify the relevant indicator in

each case. We suggest the use of appropriate indicators to trade off conflicting

aspects of flexibility, such as number of choices vs expected returns,

responsiveness vs likelihood, and enabler vs motivators. Formal methods of trade-

off analysis can be found in the decision analysis literature, e.g. dominance,

ranking, elimination by dominant criteria, and multi-attribute analysis.

8.3.2 Complex Problems

More complicated problems require structuring, as they involve several indicators,

multiple stages, multiple states, etc. Because of the dimensionality implied by these

problems, indicators may not be sufficient for assessment. In such cases, it is also

necessary to examine the structure of the decision tree, e.g. counting and tracing

the decision paths.

The type of measure to use for assessment depends on the structure of the

problem. With respect to the examples studied for the development of these

guidelines, we classify complex problem structure into four categories in table 8.1.

Table 8.1 Problem Categories and Expected Value Measures

Category  Description                                     EV Measure
1)        Flexibility vs no flexibility                   Hobbs' Relative Flexibility Benefit
2)        Different degrees of flexibility due to         Schneeweiss and Kühn's
          calibration, partitioning, special cases        Normalised Flexibility Measure
          of diversity
3)        Postponement, sequentiality, and staging        Merkhofer's EVPI
          (to do with re-ordering or decomposing
          a decision tree)
4)        Others                                          Use expected value measures with
                                                          care, and supplement with indicators
                                                          (to resolve conflicting aspects of
                                                          flexibility, e.g. favourability)

To give an example of the fourth category of problem structure, we discuss the

timing decision, which is one of the main decisions in capacity planning. In the

decision analysis context, Hirst (1989) treats the timing decision as a function of

waiting for perfect information (advance notice of future load requirements) or

relying on imperfect information (forecast of future demand) and shows that

plants with short lead time and small modular unit size are more flexible than those

with long lead time and large unit sizes. Figure 8.3 illustrates his example which

demonstrates the trade-off between the costs and benefits of flexibility. A utility

must provide for new load, the timing of which is uncertain. If it chooses to build

a plant that takes ten years but new load arrives before the plant is ready, it will

incur a high cost of 43.41 /kWh. On the other hand, if it chooses to build a short-

lead time plant, it can afford to wait three years to get more information. The

uncertainty to flexibility mapping in this case corresponds to the timing of the new

load and the lead time of the new plant.

Figure 8.3 Hirst's (1989) Example
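A minimal numerical sketch of this timing trade-off, with hypothetical costs and probability rather than Hirst's own figures, compares the expected cost of committing now to a long lead-time plant against waiting with a short lead-time one:

```python
# Illustrative sketch of the lead-time trade-off (all numbers hypothetical):
# a long lead-time plant must be committed before the load arrival is known,
# while a short lead-time plant can wait for information and still be ready.
p_early = 0.3                                # probability the new load arrives early
cost_long  = {"early": 43.0, "late": 12.0}   # long lead time: caught short if early
cost_short = {"early": 15.0, "late": 10.0}   # short lead time: can wait, then build

ev_long  = p_early * cost_long["early"]  + (1 - p_early) * cost_long["late"]
ev_short = p_early * cost_short["early"] + (1 - p_early) * cost_short["late"]
print(ev_long, ev_short)   # the short lead-time plant has the lower expected cost
```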

8.4 Capacity Planning in the UK Electricity Supply Industry

For illustrative purposes, we simplify capacity planning to focus on individual

decisions for the operationalisation of flexibility. We consider a typical utility's decision of whether or not to add a plant to its existing portfolio, similar to Hobbs' example in the previous chapter.

The uncertainties affecting the decision can be grouped into those affecting

generation cost and those affecting revenue. We categorise the areas of

uncertainties in table 2.6 into uncertainties affecting costs (plant economics) and

revenues (pool price) in the following table 8.2.

Table 8.2 Areas of Uncertainties Affecting Costs and Revenues

Areas of Uncertainty                               Cost   Revenue
Plant economics: capital, running costs             *
Fuel: price, supply                                 *
Demand: shape, growth                               *       *
Technology: performance, lead time, competitors     *       *
Financing requirements                              *
Market: volatilities of the pool                            *
Political/regulatory                                *       *
Environment                                         *
Public                                              *       *

Figure 8.4 depicts the decision tree, where the first stage decision consists of

choosing to invest (in X or Y technologies) or not invest (status quo). The

uncertainties surrounding the running cost of X, called "Plant X", act as the local event for X, and uncertainties surrounding the pool price, called "Pool", act as the

external event for all three choices in the first stage. In the next two sub-sections,

we decompose the corresponding chance nodes Plant and Pool to show how

uncertainties and decisions can give insight to flexibility.

Figure 8.4 Electricity Planning Example

8.4.1 Plant Economics

We discuss how to represent and deal with uncertainties that affect plant

economics using the concept of flexibility. We have already extensively studied

plant economics in the first pilot study of this thesis (appendix A). These formulae

are based on IAEAs (1984) method of levelised costs to approximate the average

costs over the life of the plant.

We translate and extend the spreadsheet model of appendix A into a decision

model via an influence diagram in figure 8.5. Proceeding from left to right, this

diagram shows how the variables are related. The leftmost nodes are direct inputs

to the plant economics model, often areas of great uncertainty, as described in

Chapter 2. The nodes in the middle correspond to levelising constants. The nodes

on the right are components of the final levelised cost, namely, Invest/kWh, Fixed
O&M/kWh, Variable O&M/kWh, Fuel Cost/kWh, and Carbon Tax/kWh. The

Carbon Tax component indicates the plants environmental performance. It is

included to reflect environmental and regulatory uncertainty.
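A minimal sketch of a levelised cost calculation in the spirit of the IAEA (1984) method referenced above: discount each year's costs and output, then take their ratio. All parameter values are illustrative assumptions, and the annual cost components are lumped together here rather than levelised separately as in the diagram.

```python
# Hedged sketch: levelised cost = discounted lifetime cost / discounted
# lifetime output. All inputs are illustrative assumptions.

def levelised_cost(capital, fixed_om, var_om, fuel, carbon_tax,
                   output_kwh, rate, life):
    """Return the levelised cost per kWh over the plant's life."""
    disc = [(1 + rate) ** -t for t in range(1, life + 1)]
    annual_cost = fixed_om + var_om + fuel + carbon_tax
    total_cost = capital + sum(annual_cost * d for d in disc)
    total_energy = sum(output_kwh * d for d in disc)
    return total_cost / total_energy

# A hypothetical plant producing about 6 TWh/year over a 30-year life.
lc = levelised_cost(capital=9e8, fixed_om=2e7, var_om=1e7, fuel=5e7,
                    carbon_tax=5e6, output_kwh=6e9, rate=0.08, life=30)
```

Raising the carbon-tax input raises the levelised cost, which is how the Carbon Tax node feeds environmental and regulatory uncertainty into the final cost.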

Figure 8.5 Plant Economics Influence Diagram

Figure 8.6 shows a possible configuration of the capacity planning problem before any operationalisation of flexibility. Typically, we would invest and then run the plant when it is ready. The local events for plant X include the uncertainties of lead time, fuel price, and fuel escalation rate for plant X. The local events for plant Y are lead time and fuel price. The no-investment option serves as the status quo. The external events Pool Price and Base_elec_costs affect the payoffs of investing in X, Y, and None.

Figure 8.6 Plant Economics Decision Tree

Flexibility can be introduced in several ways: 1) option X or Y exhibits flexible characteristics or provides flexibility in response to given uncertainties; 2) plant characteristics, such as lead times, are changed; and 3) decisions on investment are delayed until relevant uncertainties (trigger events) are resolved.

To analyse the flexibility provided by the investments X and Y, we must re-order

the nodes so that the trigger event precedes the second stage decision. The first

stage investment decision precedes the trigger event for the flexibility decision of

whether to run or not. Without this trigger event, there is no additional flexibility

that X and Y can introduce to the system. The flexibility that investment X or Y

brings to the picture is simply the second stage decision of running the new plant if

the trigger states of the trigger events occur.

If the external event Pool Price is made a trigger event for both X and Y, we see immediately that X and Y are more flexible than the status quo (none) option. If lead time is made the trigger event, then the fixed capital cost is known before any running costs are incurred. However, the subsequent decision that determines the amount of flexibility must be able to capitalise on this information.
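The effect of re-ordering can be made concrete with a small numerical sketch: the same run decision is worth more when it follows the trigger event than when it precedes it. The probabilities and margins below are illustrative assumptions.

```python
# Value of having the trigger event (pool price) resolve before the
# run/don't-run decision. Numbers are illustrative assumptions.

p_high = 0.3
margin = {"high": 4.0, "low": -2.0}  # running margin per unit, by pool state

# Decide BEFORE the trigger resolves: commit on expectation (or stay idle).
ev_blind = max(p_high * margin["high"] + (1 - p_high) * margin["low"], 0.0)

# Decide AFTER the trigger resolves: run only in the favourable state.
ev_informed = (p_high * max(margin["high"], 0.0)
               + (1 - p_high) * max(margin["low"], 0.0))

value_of_flexibility = ev_informed - ev_blind
```

The difference is precisely the flexibility that the second stage decision contributes; without the trigger event preceding that decision, the difference vanishes.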

8.4.2 Pool Price

The elements in the pool price have been designed to encourage or discourage new capacity investments. As described in greater detail in Chapter 2, the capacity payment, also called the availability payment, is the expected cost of unserved energy paid to generators in addition to the SMP for those plants called to run during a given half-hour. The capacity payment, (VOLL - SMP) * LOLP, is composed of the Value of Lost Load (VOLL), the System Marginal Price (SMP), and the Loss of Load Probability (LOLP). Together, SMP + capacity payment comprise the Pool Input Price (PIP).
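The formula can be checked directly; the VOLL, SMP, and LOLP figures below are illustrative assumptions, not pool data.

```python
# Pool Input Price from the formula in the text:
# capacity payment = (VOLL - SMP) * LOLP, and PIP = SMP + capacity payment.

def pool_input_price(voll, smp, lolp):
    capacity_payment = (voll - smp) * lolp
    return smp + capacity_payment

pip = pool_input_price(voll=2000.0, smp=25.0, lolp=0.001)  # illustrative units
```

A tighter system (higher LOLP) raises the capacity payment and hence the PIP, which is the price signal meant to encourage new capacity.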

A plant that has been declared available may or may not be bid into the pool. A plant that is bid into the pool may or may not set the SMP for that half-hour, and may or may not be called to run. These uncertainties, together with the actual demand level and total capacity in the system, are market uncertainties that affect the overall price of electricity. Plants that have been bid into the pool but not called to run still receive the capacity payment because their declared availability insures against loss of load.

The influence diagram of figure 8.7 shows the relationships between variables in

the pool price formula which indicate market uncertainty and also affect the

investment decision. This influence diagram is similar to the causal loop diagram

of the system dynamics study of Bunn and Larsen (1992) but without cycles.

Figure 8.7 Pool Price Influence Diagram

We construct the decision tree (figure 8.8) to parallel the plant economics

formulation of the previous sub-section. The utility has three choices in the first

stage: invest in plant X, invest in plant Y, or do not invest at all. The levelised

investment costs (pence/kWh) for X and Y are Pay_X and Pay_Y respectively.

Once a plant has been built, it can be declared available and bid into the pool.

Whether or not the plant gets bid depends on the expected demand and total

declared capacity available, which determine the LOLP. Whether or not it actually

gets called to run in the corresponding half-hour next day depends on the actual

demand.

Figure 8.8 Pool Price Decision Tree

We specify X and Y such that if X is successfully bid, it is almost surely the most expensive plant bid within the half-hour, in which case the SMP will be equivalent to its bid price X_SMP. If Y is successfully bid, it is unlikely to be the most expensive plant bid, and therefore will not set the SMP for that half-hour. The probability of X's bid success is much lower than that of Y. If X or Y is bid into the pool but does not get called to run, the plant still receives the capacity payment. [For simplicity's sake, we have not included the possibility that plants not declared available will be called to run if actual demand is much higher than expected.] The do-nothing case assumes that the existing old plant has an equal chance of being bid into the pool and called to run, but the event has no effect on the level of SMP or LOLP. Focussing on market uncertainties for the moment, we assume that the running costs take two states only (high or low) and are not further affected by other uncertainties. These running costs, which depend on plant conditions, come from the extreme scenarios, i.e. the minimum and maximum plant costs of sub-section 8.4.1.

Flexibility is introduced in the same manner as described in the previous section: re-order or add nodes to the original structure to obtain a flexibility configuration as specified in section 8.2. After structuring, we follow the assessment guidelines of section 8.3, especially table 8.1, to determine which measure of flexibility to use.

8.5 Operationalising Strategies

8.5.1 Partitioning, Sequentiality, Staging

As described in Chapter 6, Mandelbaum's (1978) two sources of flexibility (partitioning and sequentiality) are means of decomposing a decision node. Figure 8.9 illustrates partitioning. Partitioning the action space implies redefining the original choice set to enlarge it: the left decision with two choices A and B is partitioned into the right decision with five choices. The states of the trigger event may be defined to trigger the choices in the second stage decision.

Figure 8.9 Partitioning

Sequentiality or staging partitions the decision space over time. Flexibility is

introduced by increasing the frequency of decision points. As each decision is a

commitment, breaking up a decision into a sequence of smaller decisions reduces

the amount of commitment made at each stage and thereby frees up resources to

commit to better alternatives that arise. If re-ordered to capture new information,

sub-decisions that are spread over a period of time gain from the resolution of

uncertainty or acquisition of information. Figure 8.10 illustrates the meaning of

sequentiality and staging.

Figure 8.10 Sequentiality and Staging

Increasing the frequency of decision points can be accomplished in a number of

ways, such as 1) shortening the life of a plant, 2) adjusting the construction lead

time or time to commission, and 3) introducing additional capacity in stages, e.g.

modularity of unit size. Plants which can be built in incremental modular units and

run as they are built offer flexibility by minimising commitment. We discuss next

an example that concerns flexibility of plant lives against the uncertainty of new

competitive technology.

FLEXIBILITY OF PLANT LIFE

The well-known phenomenon of technological obsolescence arises from the availability of more competitive technologies which may reduce running costs or take advantage of new conditions. Figure 8.11 compares plants with varying lives and the costs of switching to a new technology at the end of a plant's life. It shows that plants with extendable lives provide more flexibility than those without. As plant lives reflect the amount of commitment, reducing commitment increases flexibility. This supports the relationship between commitment and flexibility established earlier in the conceptual framework of Chapter 6.

Figure 8.11 Flexibility by Plant Lives

[Note: the chance node labelled a appears twice in the decision tree. It refers to

the repetition of that portion of the tree to the right of the first labelled node.]

At time t, plants X and Y reach the end of their lives, while plant Z has not. At this time, new technologies may be available for investment, and market and plant conditions may favour them, i.e. higher pool price and lower running costs. If the competing technology gives better performance, then it may be worthwhile to purchase or switch to it; in this case, X and Y are both more flexible than Z. If the competing technology gives worse performance, then switching is not favourable; in this case, investment X is more flexible than Y because X's life can be extended. We assume that it will not be economically feasible to retire early and invest in the better performing technology at the same time. If the utility firm switches to the better performing technology, it makes a third stage decision, that of deciding on lead time. If demand is high, then a zero-lead-time technology will capture the high revenue; if demand is low, then the less costly technology with non-zero lead time is preferable. This demand uncertainty acts as the trigger event for the third stage decision. Plant Z, with remaining life, is neither able to take advantage of the competitive technology nor to shed its commitment, as early retirement implies a further cost.

We analyse this example using the expected value measures described in the previous chapter. The relative flexibility benefit (Hobbs et al, 1994) and the normalised flexibility measure (Schneeweiss and Kühn, 1990) both indicate that investing in plant X provides the most flexibility of the three first stage choices. Preferring X over Y depends on the trade-offs between the enabler (investment cost), the likelihood (the probability that the competing technology performs better), and the motivator (future demand levels). If a more competitive technology is not likely at all, then plant Z could be the best initial choice, as the inflexible option gives the highest expected value when no uncertainty is considered. If there is no information about the competing technology until after plants X and Y have been retired, then there is no flexibility.
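The ranking described above can be reproduced with a crude expected value rollback. All payoffs and the probability of a better competing technology are illustrative assumptions: X can extend its life or switch, Y can only switch or reinvest, and Z is committed.

```python
# Illustrative rollback for the plant-life example. All figures are
# assumptions chosen only to exhibit the ranking X > Y > Z.

p_better = 0.4         # likelihood: competing technology performs better
switch_gain = 5.0      # payoff from switching to the better technology
extend_payoff = 2.0    # payoff from extending the incumbent plant's life
reinvest_payoff = 0.5  # Y's best response when switching is not worthwhile
committed_payoff = 1.0 # Z simply runs on with remaining life

ev_x = p_better * switch_gain + (1 - p_better) * extend_payoff
ev_y = p_better * switch_gain + (1 - p_better) * reinvest_payoff
ev_z = committed_payoff

ranking = sorted([("X", ev_x), ("Y", ev_y), ("Z", ev_z)], key=lambda t: -t[1])
```

Varying p_better, the switch gain, and the payoffs changes the gaps between the options, reflecting the trade-off between enabler, likelihood, and motivator discussed above.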

8.5.2 Postponement and Deferral

The timing decision is much discussed in the option-pricing literature: when is the optimal time to invest and to exercise the option? The timing decision is relevant if uncertainty can be resolved by waiting for or acquiring more information before deciding (thus deferring the decision). Postponement or deferral is illustrated by reversing the order of decision and chance nodes. Better payoffs can be attained if these uncertainties are trigger events for the decisions they precede. During the period of delay, new options may arise as well as expire. For simplicity's sake, we assume that the choice set remains the same in spite of re-ordering. The timing decision of investing in the first stage or exercising in the second can be portrayed as a multi-staged decision tree, where the decision to invest or run a plant depends on the occurrence of the trigger state of the trigger event. Deferral can be achieved by adjusting the construction lead time of new plants, shortening or extending plant lives, or in any number of other ways, at a deferral cost. The analysis reduces to a trade-off between the cost of deferral and the expected benefits from this flexibility.

Figure 8.12 shows the basic structure of such a decision tree. Condition1 triggers the decision to run in the second stage. Condition2 triggers the decision to run if the initial investment decision is deferred. If Condition1 also triggers the Invest2 decision, then deferral is useful: by deferring the investment decision, one learns about the conditions before deciding to invest. Condition1, which affects the decision to run after Invest in the first decision, also provides information for Invest2. The choice that corresponds to the more favourable state of its preceding trigger condition gives a better payoff. If these conditions (trigger events) are independent of each other, then deferring the decision provides no flexibility.

Figure 8.12 Postponement and Deferral Decision Tree

Extending this to multiple stages captures the opportune time to invest or exercise.

We see that deferring only makes sense if the conditions are related and not

independent.
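That observation can be checked with a small sketch comparing investing now against deferring, when the observed condition is perfectly informative about the later one versus completely uninformative. All probabilities, payoffs, and the deferral cost are illustrative assumptions.

```python
# Value of deferral under correlated vs independent trigger events.
# All numbers are illustrative assumptions.

p_good = 0.5
payoff = {"good": 3.0, "bad": -2.0}
defer_cost = 0.3

def ev_defer(p2_given_good, p2_given_bad):
    # Observe Condition1, then invest only if the implied odds for
    # Condition2 make investing worthwhile (else keep the status quo, 0).
    def stage2(p2):
        return max(p2 * payoff["good"] + (1 - p2) * payoff["bad"], 0.0)
    return (p_good * stage2(p2_given_good)
            + (1 - p_good) * stage2(p2_given_bad)) - defer_cost

ev_invest_now = p_good * payoff["good"] + (1 - p_good) * payoff["bad"]
ev_correlated = ev_defer(1.0, 0.0)         # Condition1 fully reveals Condition2
ev_independent = ev_defer(p_good, p_good)  # Condition1 reveals nothing
```

With perfectly correlated conditions, deferral beats investing now despite its cost; with independent conditions, the deferral cost is pure loss.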

Merkhofer (1977, p. 719) suggests that the decision maker's time preferences should be considered in determining the value of flexibility obtained from delaying a decision. Extending the previous example to the annual cash-flow model of figure 8.13, we should take the discount rate into consideration. Sunk1, Sunk2, and Sunk3 correspond to the investment premiums at different periods in time. Plant1, Plant2, and Plant3 are plant costs incurred in each of the three periods. Market1, Market2, and Market3 are revenues gained in the corresponding periods.

Figure 8.13 Deferral with respect to Market and Plant Uncertainty

8.5.3 Diversity

Diversity implies a collection of different areas of uncertainty mapped to different types of flexibility. Our example in figure 8.14 shows that adding plant A, with three attributes catering to three different uncertain conditions, gives more flexibility than adding a plant that caters to only one uncertain condition. Adding both types of plants gives the most flexibility of all.

To build the decision model, we identify the uncertainties which flexibility answers or responds to, effectively a one-to-one mapping from uncertainty to flexibility. We ask: which uncertainties can we manage with the attributes of our plant or plant mix? Flexibility in size answers uncertainty in the level of demand (condition A1). Flexibility in plant performance answers the uncertainty of environmental regulations (condition A2). Flexibility in the timing of investment answers uncertainty in demand growth (condition A3). A well-diversified portfolio contains different types and sizes of plants with different lives and retirement dates.

Figure 8.14 Diversity Influence Diagram

The three conditions A1, A2, and A3 trigger second stage decisions for plant A. If

condition A1 is favourable, i.e. demand level goes up, then we can add another unit

of plant A to meet the higher level of demand. The trigger state is high demand

growth, and the associated flexibility choice is adding another unit to meet the

higher demand. If environmental uncertainty is resolved, i.e. condition A2, then an

appropriate action can be taken on plant A. These second stage decisions have not

been explicitly illustrated in the decision tree but follow the same kind of decision

sequences shown in the examples of plant economics and pool price in section 8.4.

Figure 8.15 Diversity Decision Tree

Selecting a plant with three attributes that contribute to flexibility is similar to selecting three plants that increase flexibility in three ways. In Chapter 7, Schneeweiss and Kühn's (1990) example of machine investment followed by production level adjustment captures this source of flexibility. The machines differ in the number of production levels, the selection of which is triggered by appropriate trigger states of the trigger event (demand) preceding the second stage decision. Adding options that contribute to the overall diversity of the system, plant mix, or portfolio contributes to overall flexibility because these options cater to different states of external conditions. Adding such options would always increase flexibility if they were free. Because they are not, it is necessary to make cost and benefit trade-offs, and this assessment requires the use of indicators and expected values.

8.6 Conclusions

We have shown the applicability and practicality of our guidelines for structuring and assessing flexibility in the context of capacity planning in the UK electricity supply industry. These guidelines are summarised briefly below.

Practical Guidelines for Structuring and Assessing Flexibility

1) Identify important areas of uncertainty, as done in table 2.6 of Chapter 2, to facilitate the mapping between uncertainty and flexibility.

2) Operationalise flexibility as in Chapter 6 by devising flexible responses for each uncertainty, by
   a) options
   b) strategies

3) Structure the problem in the decision analysis framework, i.e. with decision tree and influence diagram, to include
   a) the uncertainty-flexibility mapping, and
   b) a minimum two-stage decision sequence.
The decision tree is asymmetric because of the default status quo case in the first stage. Specify the decision tree with the relevant indicators, i.e. enabler, disabler(s), motivator(s), trigger event(s), trigger states, and likelihood.

4) Assess flexibility with indicators and expected values. For simple problems, use indicators. For complex problems, follow the categories in table 8.1.

The problems in capacity planning have been greatly simplified to focus on the contribution of flexibility in dealing with uncertainty. This simplification facilitates the mapping of uncertainty to flexibility, i.e. trigger event to decision, and the treatment of multi-staged decisions and relevant uncertainties. The illustrative examples constructed in this chapter show that flexibility is a useful response to uncertainty. Flexibility does not replace the need for rigorous modelling, as completeness is still necessary. It merely compensates for the lack of completeness or the model unease found in the decision making style of the industry. In other words, the traditional modelling approach (with the trend in model synthesis) is still necessary but no longer sufficient.

APPENDIX D

Flexibility and Robustness:


Response to Demand Uncertainty by Over- and Under- Capacity

D.1 Introduction

Following the cross-disciplinary literature review in Chapter 5 and the conceptual

development of flexibility and robustness in Chapter 6, this appendix relates these

abstract concepts to the familiar example of supply and demand in production and

inventory management so as to derive basic cost measures. Such quantifiable costs

and their relationships are then applied to other areas to further illustrate the

differences between them. Finally, flexibility and robustness are discussed within

the context of modelling uncertainty in electricity capacity planning.

Flexibility is the ability to react or change, and robustness is the lack of a need to

react or change. Flexibility denotes immediate responsiveness while robustness

provides an insurance or cushion against undesirable events. For example, on a

windy day, neither the willow nor the oak tree will collapse because the former

bends with the wind (flexibility) and the latter withstands the wind (robustness).

Flexibility implies a future cost as it is a defence against the unexpected, e.g. the

cost of producing additional goods to meet a sudden and unexpected rise in

demand. Robustness, on the other hand, implies a present holding cost typically

incurred by extra capacity or high inventory levels to meet expected rises in

demand.

Although many types of flexibility exist, Gerwin (1993) insists that they are only relevant in response to given types of uncertainty. This appendix considers demand uncertainty only, thus corresponding to Gerwin's volume flexibility, which permits increases or decreases in the aggregate production level. Cazalet et al (1978) have looked at the implications of building over- and under-capacity with respect to demand uncertainty from a decision analysis perspective. Here, we illustrate the costs associated with such production levels to illustrate both flexibility and robustness. Gerwin and others also mention necessary elements in defining flexibility, which our earlier cross-disciplinary review found to include range and time. Range refers to the amount of change, and time refers to the length of time needed to make the change. In this appendix, range refers to the levels of production that can be assigned or achieved, while time is the lead time to produce. Gerwin also observed that the time aspect of flexibility has received much less attention than the range aspect. For this reason, we will expand on the time aspect.

D.2 Simple Example: no lead time, demand = supply, planned = actual levels

We take a simple case to illustrate the difference between flexibility and robustness. A firm produces the quantity qt at time t to meet exactly the demand dt at time t, i.e. qt = dt. This occurs when lead time is zero and the actual quantity produced is the same as the actual demand at time t. Lead time T is defined as the time it takes to produce the product demanded. Capital letters Q and D denote planned production and forecasted demand, whereas small letters q and d denote actual levels of production and demand respectively. For the moment we do not distinguish between planned and actual, so it is only necessary to use Qt to represent both the quantity produced and the quantity demanded at time t. Let It denote the normal production level at time t. This firm chooses to fix It at a constant level I. Iopt is the optimal level of normal production, which minimises the total cost. Ch is the unit cost of holding the produced quantity when demand is less than I. Cp is the unit cost of producing quantities above I when demand is greater than I. Table D.1 lists the basic terminology and notation used in this appendix.

Table D.1 Terminology and Notation

variable                                 planned       actual        planned = actual (expected)
supply, production                       Qt            qt            Qt
demand                                   Dt            dt            Dt
normal production level                  It = I        I             I
minimum, maximum production              Qmin, Qmax    qmin, qmax    Qmin, Qmax
minimum, maximum demand                  Dmin, Dmax    dmin, dmax

cost of normal production                                  Cnp
cost of production (above I)                               Cp
cost of holding (below I)                                  Ch
lead time                                                  T
cost of not meeting demand                                 Cd
cost of extra production (above Qmax)                      Cxp
statistical distribution function for production           ft(Qt)
average cost up to time t with normal production level I   Ct(I)

In the simple case where production quantity equals demand and planned equals actual, the demand curve is the same as the production curve, as illustrated in figure D.1. The costs of holding and production, Ch and Cp, apply to the areas delineated by the line I and the demand curve. Cp is not the cost of normal production Cnp, which is used to calculate I, but the cost of production beyond the normal level to meet demand.

Figure D.1 Costs of holding and production: Ch and Cp

[Figure: the quantity Qt plotted over time t, fluctuating between Qmin and Qmax around the normal level I; the area above I represents the cost of production Cp and the area below I the cost of holding Ch.]

The average cost up to time t with normal output at I is the sum of the cost of normal production Cnp times the normal production level I, the holding cost Ch times the amount not sold (I - Qt for I > Qt), and the cost of production Cp times the additional quantity produced (Qt - I for I < Qt):

$$ C_t(I) = C_{np}\, I \;+\; \int_0^t E\big[\, C_h(I, Q_t) \,\big]\, dt \;+\; \int_0^t E\big[\, C_p(I, Q_t) \,\big]\, dt $$

where the three terms are the normal production cost, the holding cost, and the production cost respectively.

D.2.1 Proportional Cost

Assuming linearity, we have the following relations:

$$ C_{np}(I, Q_t) = C_{np}\, Q_t \quad \text{for } I = Q_t \qquad \text{(normal production cost)} $$
$$ C_h(I, Q_t) = C_h\, (I - Q_t) \quad \text{for } I > Q_t \qquad \text{(holding cost)} $$
$$ C_p(I, Q_t) = C_p\, (Q_t - I) \quad \text{for } I < Q_t \qquad \text{(production cost)} $$
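Under these proportional costs, the per-period average cost Ct(I) can be estimated by simulation. The cost parameters and the uniform demand distribution below are illustrative assumptions (a single period, so the time integral is dropped).

```python
# Monte Carlo estimate of the average per-period cost C_t(I) with the
# proportional holding and production costs above. Illustrative parameters.
import random

random.seed(0)
Cnp, Ch, Cp = 1.0, 0.4, 0.9
Qmin, Qmax, I = 10.0, 30.0, 20.0

def period_cost(I, Q):
    cost = Cnp * I               # normal production at level I
    if I > Q:
        cost += Ch * (I - Q)     # hold what was not sold
    else:
        cost += Cp * (Q - I)     # produce the shortfall above I
    return cost

draws = [random.uniform(Qmin, Qmax) for _ in range(100_000)]
avg_cost = sum(period_cost(I, Q) for Q in draws) / len(draws)
# Analytic value with I at the midpoint: 1.0*20 + (0.4 + 0.9)*2.5 = 23.25
```

The estimate converges on the analytic expectation, which is the quantity minimised later when choosing Iopt.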

D.2.2 Flexibility

Flexibility means the ability to change or react when necessary. This can be achieved in several ways. It does not matter what I is, as long as the firm has the means to meet demand at whatever level, either by adjusting the level of production as necessary or by producing the additional quantity elsewhere. Alternatively, instead of fixing It at a constant level I, the firm can choose a fluctuating It = Qt, though this may incur additional cost. Suppose we set I to the minimum demand level, I = Qmin. The expected cost is then

$$ C_t(Q_{min}) = \int_0^t \int_{Q_{min}}^{Q_{max}} C_p\, (Q_t - Q_{min})\, f_t(Q_t)\, dQ_t\, dt $$

which depends on ft(Qt) and Cp.

D.2.3 Robustness

Robustness means the absence of a need to change or react. When the level of normal production is set at the maximum demand, I = Qmax, it is never necessary to change the level of production, because demand will never exceed Qmax (recall that we assume demand is bounded above by Qmax). By substitution,

$$ C_t(Q_{max}) = \int_0^t \int_{Q_{min}}^{Q_{max}} C_h\, (Q_{max} - Q_t)\, f_t(Q_t)\, dQ_t\, dt $$

which depends on ft(Qt) and Ch.

D.2.4 Flexibility versus Robustness

From the two equations above, we can tell which policy is cheaper (and hence better). For the risk neutral firm, we conclude the following: robustness is better than flexibility if Ct(Qmax) < Ct(Qmin), and flexibility is better than robustness if Ct(Qmax) > Ct(Qmin).
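Under a uniform demand distribution, the two policy costs reduce to simple per-period expressions, since E[Qt - Qmin] = E[Qmax - Qt] = (Qmax - Qmin)/2. The cost figures below are illustrative assumptions; as in the expressions in the text, the normal-production term is omitted.

```python
# Per-period costs of the flexible (I = Qmin) and robust (I = Qmax) policies
# for uniform demand, following the two expressions in the text.
# Cost parameters are illustrative assumptions.

Qmin, Qmax = 10.0, 30.0
Ch, Cp = 0.4, 0.9   # holding cost vs above-normal production cost

cost_flexible = Cp * (Qmax - Qmin) / 2  # C_t(Qmin): produce the excess demand
cost_robust = Ch * (Qmax - Qmin) / 2    # C_t(Qmax): hold the unsold surplus

prefer = "robustness" if cost_robust < cost_flexible else "flexibility"
```

Here holding is cheaper than above-normal production, so robustness wins; reversing the relative costs reverses the preference.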

D.2.5 Optimal Policy Iopt

The choice of an optimal level of normal production Iopt is determined by finding the I that minimises Ct(I):

$$ I^{opt} = \arg\min_I \int_0^t \Big[\, C_h\, (I - Q_t)\,\mathbf{1}\{I > Q_t\} + C_p\, (Q_t - I)\,\mathbf{1}\{I < Q_t\} \,\Big]\, dt $$

D.2.6 Special Cases

To find Iopt, we consider two special cases of Ch and Cp. If the cost of holding and the cost of production are equal, Iopt cannot be uniquely determined. If these costs are not equal, then we can solve for Iopt by making a simplifying assumption, namely that the statistical distribution for Qt is independent of time:

$$ C_h \neq C_p \quad \text{and} \quad f_t(Q_t) = f(Q) $$

Let f(Q) be the uniform distribution:

$$ f(Q) = \frac{1}{Q_{max} - Q_{min}} $$

Then

$$ C_t(I) = \frac{t}{Q_{max} - Q_{min}} \left[ C_h \int_{Q_{min}}^{I} (I - Q_t)\, dQ_t + C_p \int_{I}^{Q_{max}} (Q_t - I)\, dQ_t \right] $$

$$ C_t(I) = \frac{t}{Q_{max} - Q_{min}} \left[ \frac{C_h (I - Q_{min})^2}{2} + \frac{C_p (I - Q_{max})^2}{2} \right] = \frac{t}{2(Q_{max} - Q_{min})} \left[ C_h (I - Q_{min})^2 + C_p (I - Q_{max})^2 \right] $$
Setting the first derivative to zero, we solve for Iopt:

$$ \frac{dC_t(I)}{dI} = 0 = \frac{t}{Q_{max} - Q_{min}} \big[ C_h (I - Q_{min}) + C_p (I - Q_{max}) \big] $$

$$ C_h (I - Q_{min}) + C_p (I - Q_{max}) = 0 $$
$$ C_h I - C_h Q_{min} + C_p I - C_p Q_{max} = 0 $$
$$ I (C_h + C_p) = C_h Q_{min} + C_p Q_{max} $$

Solving for Iopt,

$$ I^{opt} = \frac{C_h Q_{min} + C_p Q_{max}}{C_h + C_p} $$

If Ch = 0 then Iopt = Qmax (robustness). If Cp = 0 then Iopt = Qmin (flexibility).
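The closed form can be checked by brute-force minimisation of Ct(I) under uniform demand; the parameter values below are illustrative assumptions.

```python
# Numerical check of I_opt = (Ch*Qmin + Cp*Qmax) / (Ch + Cp).
# Illustrative parameters; the constant factor t / (2*(Qmax - Qmin)) is
# dropped because it does not move the minimiser.

Qmin, Qmax = 10.0, 30.0
Ch, Cp = 0.4, 0.9

def cost(I):
    return Ch * (I - Qmin) ** 2 + Cp * (I - Qmax) ** 2

i_opt_formula = (Ch * Qmin + Cp * Qmax) / (Ch + Cp)

grid = [Qmin + k * (Qmax - Qmin) / 10_000 for k in range(10_001)]
i_opt_numeric = min(grid, key=cost)
```

Setting Ch = 0 pushes the formula to Qmax (robustness) and Cp = 0 pushes it to Qmin (flexibility), reproducing the limiting cases above.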

D.2.7 Relative Costs

How do Ch and Cp relate to Iopt? Differentiating Iopt with respect to Ch, we find a negative relationship between the two:

$$ \frac{dI^{opt}}{dC_h} = \frac{Q_{min}(C_h + C_p) - (C_h Q_{min} + C_p Q_{max})}{(C_h + C_p)^2} = \frac{C_p (Q_{min} - Q_{max})}{(C_h + C_p)^2} < 0 $$

But when we differentiate Iopt with respect to Cp, we find

$$ \frac{dI^{opt}}{dC_p} = \frac{Q_{max}(C_h + C_p) - (C_h Q_{min} + C_p Q_{max})}{(C_h + C_p)^2} = \frac{C_h (Q_{max} - Q_{min})}{(C_h + C_p)^2} > 0 $$

This implies that as the cost of holding increases, Iopt decreases, and we should hold less. Likewise, as the cost of production increases, we should increase Iopt and hold more. This agrees with common sense, as figure D.2 shows. These relationships can also be found by evaluating Iopt when Ch > Cp and when Ch < Cp. When Ch and Cp are unequal and nonzero, Iopt lies somewhere between Qmin and Qmax.

Figure D.2 Relationship between Iopt and Ch, Cp

[Figure: Iopt plotted against cost, between Qmin and Qmax; the Cp curve rises towards Qmax while the Ch curve falls towards Qmin.]

D.3 Extensions of Simple Example

To make the problem more realistic, we vary the basic conditions to examine the

effects of I, Cd, T, risk attitude, Q and D, and D and d.

D.3.1 Levels of I

The range aspect of flexibility translates into the number of levels of production.

Robustness implies fixing It at a constant I, whereas flexibility allows changing It.

Maximum flexibility occurs when normal production level It can be set to Dt.

Alternatively, robustness also applies to the situation where Qmax > dmax but

flexibility refers to It = Qt = Dt.

D.3.2 Cost of Not Meeting Demand Cd

Consider the situation where demand may not be met by existing production
capacity. If demand Dt is not met, future demand may fall, because customers can

switch to other suppliers. If this firm does not have means to meet demand above
Dt, it is inflexible. The cost of extra production beyond maximum capacity Cxp can

be assessed relative to the cost of not meeting demand Cd and the available means

of achieving this.

Suppose Dt is never more than Qmax, i.e. P (Dt > Qmax) = 0. This may happen for

the following reasons. In an efficient market, price rises as demand rises. A price
rise will deter further increase in demand beyond, say, Qmax. However, when prices

are fixed or when demand is price-insensitive, Dt may exceed Qmax. A second


reason could be that when Dt reaches a certain percentage of Qmax, it signals the

producer to purchase or rent additional machinery to increase total production

capacity. Finally, it could well be that the producer has no interest in meeting the
Dt that exceeds Qmax because it is too costly or impossible. For example, the lead time for acquiring additional production capacity may be too long, or the producer may have no more physical space to accommodate additional machines. Alternatively, a

monopolist without any obligation to meet the additional demand may prefer to

ignore it. As long as the cost of not meeting demand is significant and nonzero, the

firm needs to consider flexibility. Thus Cd is not only an economic cost but also an

opportunity cost and one reflecting the cost of contractual or social obligations.

D.3.3 Effect of Lead Time T

Consider the effect of lead time T, so that the quantity produced is not the same as the quantity demanded, and at best Qt = Dt+T. As long as the cost of not meeting demand Cd is zero, i.e. the firm has no obligation to meet demand, the lead time to production has no effect on costs. However, if there is a cost to not meeting demand, flexibility and robustness apply: if lead time is zero, flexibility is better; if lead time is nonzero, robustness is better. In the case of non-zero lead time and a zero cost of not meeting demand, the firm must weigh the costs and benefits of pursuing the additional revenue at its discretion. This discussion is summarised in table D.2.

Table D.2 Lead Time and Cost of Not Meeting Demand

Lead time T        Cd = 0        Cd > 0
T = 0              no need       flexibility
T > 0              discretion    robustness

D.3.4 Risk Attitude

The above analysis assumes that the decision maker, or the firm producing the goods, is risk neutral. The decision maker's risk attitude affects his preference for flexibility or robustness only when there is a chance that Dt may exceed Qmax. If lead time is zero, all else being equal, the decision maker's attitude to risk does not affect his preference for flexibility or robustness. For nonzero lead time and a nonzero cost of not meeting demand, the risk averse decision maker would prefer robustness to flexibility, as the latter carries the risk that demand may not be met on time and that future demand may suffer as a result. Thus, even if the cost of production is lower than the cost of holding, the risk averse decision maker would prefer a higher level of normal production to avoid the risk that the lead time (for extra production) would entail. Implicitly, the cost of holding includes the cost of goods that expire unsold. Table D.3 classifies the decision maker's preferences with respect to his risk attitude.

Table D.3 Preferences with respect to Risk Attitude when P(Dt > Qmax) > 0

Situation                  risk averse                 risk neutral                   risk taking
T > 0 and Cd >> 0          robustness; set I > Qmax    robustness with flexibility    flexibility

D.3.5 Levels of Qmin, Qmax with respect to Dmin, Dmax

Robustness implies setting Qmin and Qmax to cover Dmin and Dmax. This translates

into minimising the probability that Dmax exceeds Qmax and ensuring that Qmin can

fall to Dmin without cost. Flexibility, on the other hand, implies fluctuating

production levels, either by setting It = Qt or by maintaining dynamic Qmin and Qmax to meet

changing demand. Thus Dmax can be greater than Qmax and Dmin less than Qmin

provided the capability to meet the discrepancy between Dt and Qt exists.

D.3.6 Forecasted Demand versus Actual Demand

Suppose that actual demand dt may exceed expected maximum demand Dmax. In

figure D.3 below, the area between dmax and Qmax refers to the cost of extra

production Cxp. This is the average unit cost of producing beyond maximum

production capacity.

Figure D.3 Cost of extra production Cxp

[Figure: actual demand dt plotted against time t, with the levels Qmin, I (carrying holding cost Ch and production cost Cp), and Qmax marked; Cxp is the cost of meeting demand above maximum production capacity, incurred where dt rises above Qmax towards dmax.]

In this case, dmax cannot be predicted with accuracy. No matter what the level of I

or of Qmax, there is always a chance that demand will exceed it. If we set I =

Qmax, and dmax turns out to be greater than Qmax, some demand will not be met. If

dmax < Dmax < Qmax, then we have incurred substantial holding cost, especially
between dmax and Qmax. To determine the optimal level Iopt in this case, we need to consider the

probability that dmax exceeds Qmax or dt exceeds Dmax, the cost of not meeting

demand Cd, and the cost of extra production beyond given maximum production

capacity Cxp. Table D.4 summarises the conditions below.

Table D.4 Conditions for Robustness and Flexibility (cost of not meeting demand Cd > 0)

Probability of dmax > Qmax     Cxp = Cp        Cxp > Cp
low                            robustness      robustness, some flexibility
high                           flexibility     high robustness, flexibility

From the above, we conclude that as long as there is a chance that demand may

exceed maximum production capacity and the cost of not meeting demand is not

zero, some flexibility is necessary. Where the cost of production beyond maximum

production capacity Cxp exceeds the cost of production Cp, additional robustness is required.

So,

If P(dmax > Qmax ) > 0 and Cd > 0 then flexibility is necessary.
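The rule above and the classification of table D.4 can be sketched as follows; the code is purely illustrative, and the low/high probability threshold is an assumption:

```python
def response_mix(p_excess, c_unmet, c_extra, c_prod):
    """Hypothetical reading of table D.4: p_excess = P(dmax > Qmax),
    c_unmet = Cd, c_extra = Cxp, c_prod = Cp."""
    if p_excess <= 0 or c_unmet <= 0:
        return "no flexibility needed"
    # Some flexibility is necessary whenever demand may exceed capacity
    # and failing to meet it is costly (the rule above).
    cheap_extra = c_extra <= c_prod   # extra production no dearer than normal
    if p_excess < 0.5:                # "low" probability; threshold is assumed
        return "robustness" if cheap_extra else "robustness, some flexibility"
    return "flexibility" if cheap_extra else "high robustness, flexibility"

print(response_mix(0.1, 5, 3, 2))  # -> robustness, some flexibility
```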

D.3.7 Errors in Forecasting, Modelling, and Planning

The existence of lead time, holding cost, production cost, extra production cost,

and cost of not meeting demand implies a need to forecast and plan ahead.

Typically, future demand is forecasted so that Dt can approximate dt, with the

objective of reducing forecasting error, by minimising deviations between dt and Dt

or by ensuring that Dmax exceeds dmax. Production levels are managed so that

planned Qt approximates actual qt. The more complicated the system, the more

likely errors are to occur in forecasting future demand, modelling the system,

and making planning decisions. Robustness gives a known present cost of holding

by fixing It at a level to cover the maximum demand expected. On top of this,

flexibility is needed to cover the errors described above.
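A minimal sketch of this division of labour (all quantities hypothetical): robustness fixes normal production to cover maximum expected demand, while a flexible reserve covers errors beyond the forecast:

```python
forecast_dmax = 100.0      # expected maximum demand Dmax (assumed)
forecast_error = 8.0       # typical error between dt and Dt (assumed)

# Robustness: fix It at a level covering the maximum demand expected.
normal_production = forecast_dmax

# Flexibility: capability held in reserve to cover forecasting and
# modelling errors beyond the forecast.
flexible_reserve = forecast_error

actual_dmax = 105.0        # actual demand turns out above the forecast
shortfall = max(0.0, actual_dmax - normal_production)
covered = shortfall <= flexible_reserve
print(shortfall, covered)  # -> 5.0 True: the error is absorbed by flexibility
```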

D.4 Applications by Further Examples

The above analysis can be applied to any situation involving the control of supply

and demand. We illustrate the concepts of flexibility and robustness further

through two examples. The first example concerns bank customers' preferences

for flexibility in the maintenance of their chequebooks. The second example

concerns a firm's decision to buy or rent machinery.

D.4.1 Example 1: Current and Savings Accounts

A bank customer prefers to keep as low a balance as possible in the non-interest-

bearing current account and as high a balance as possible in the interest-bearing

savings account. In addition, he would like to reduce the amount of time spent on

monitoring his current account. Insufficient balance in the current account means
that a cheque will bounce and he will be charged a fee Cd (cost of not meeting

demand). The credit balance in the current account represents normal production

level It, the cost of which is the opportunity cost of not earning interest in the
savings account Ch. The cost of transferring between accounts is the cost of

production Cp. Lead time T is the amount of advance notice he has to give to the

bank or the length of time it takes to transfer between the accounts. The total
amount of money he has between the two accounts is Qmax. The minimum balance

in the current account is Qmin. The maximum total withdrawal from the current

account is Dmax. If the customer is flexible (has the capability to be flexible) and

prefers it, he would keep It (the balance in the current account) as low as possible.

If the customer is risk averse and also prefers not to have to monitor or transfer

between accounts too frequently, he would keep a high balance in the current

account, hence the robustness option. Thus, the cost of robustness is the cost of

holding, i.e., opportunity cost of the positive balance in the current account. The

cost of flexibility is the extra effort required to monitor and transfer between

accounts Cp plus associated transaction or enabling costs.
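As a rough numerical sketch of this comparison (all figures assumed for illustration, not drawn from the thesis):

```python
# Hypothetical annual comparison of the two strategies for a bank customer.
interest_rate = 0.05          # savings interest forgone per pound held (Ch)
robust_balance = 2000.0       # high current-account balance (robustness)
flexible_balance = 200.0      # low balance, topped up as needed (flexibility)
transfers_per_year = 24       # monitoring/transfer frequency under flexibility
cost_per_transfer = 1.0       # effort/transaction cost Cp per transfer

cost_robustness = robust_balance * interest_rate               # holding cost Ch
cost_flexibility = (flexible_balance * interest_rate
                    + transfers_per_year * cost_per_transfer)  # small Ch plus Cp

print(f"robustness:  {cost_robustness:.2f}")   # 100.00 per year
print(f"flexibility: {cost_flexibility:.2f}")  # 34.00 per year
```

With these assumed numbers the flexible strategy is cheaper; a higher transfer cost or frequency would tip the balance towards robustness.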

Some banks offer all kinds of financial packages to suit customers'

preferences. The uncertainties and risks these banks face with regard to

customers' frequency and amount of transfers between the accounts are implicitly

built into the fees they charge, which take the form of required non-interest-bearing

balances and actual charges. Variations of the above include interest-bearing current

accounts with a minimum balance, where the interest rate is still lower than that of

savings and other longer-term accounts. There is also a facility for automatic transfer between

accounts at the cost of keeping a minimum balance, fixed set up fee, or transaction

fee per transfer. Alternatively, negative production levels can be associated with an

overdraft facility on the current account.

This analysis may also be applied to money market accounts, off-shore accounts,

and financial instruments which give customers the robustness and flexibility

required to cope with the demand uncertainty.

D.4.2 Example 2: Buying versus Renting

The buy or rent decision is not only an accounting issue but also a strategic one,

affecting the way a firm can deal with future uncertainty. In accounting terms,

buying machinery is very different from renting it: the distinction is one of ownership

and control versus borrowing. A purchased machine becomes an asset, is entered into the balance

sheet, is depreciated, and eventually has scrap value. Rent, by contrast, is an

expense, reduces taxable profit, and is not carried over to the following

year. Strategically speaking, buying ties the firm to a specific technology for the

life of the machinery whereas renting enables the firm to switch to new

technologies when necessary. Ownership and borrowing differ by the degree of

commitment or confinement to a specific technology. By renting, the firm can

terminate its commitment at any time and limit its technological confinement.

Ownership, particularly of a capital asset, pays off if the capital cost discounted

over the life of the asset is less than the total cost of renting over the same number

of years. In practice, a firm may choose to own most of the machinery and rent

additional ones when necessary. This arrangement can be seen as robustness to

deal with expected demand and flexibility to deal with the unexpected. Here cost

of holding is the purchase cost which reflects the opportunity cost of the machine.

Cost of production is the cost of renting. Lead time is translated into how soon the

firm can rent or return the extra machine required. The shorter the lead time, the

more flexible and attractive is the option of renting.
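The ownership criterion above can be sketched numerically; the purchase cost, rent, asset life, and discount rate below are invented for illustration:

```python
def discounted_rent_total(rent_per_year: float, years: int, rate: float) -> float:
    """Present value of renting for `years` years at discount `rate`
    (rent assumed paid at the end of each year)."""
    return sum(rent_per_year / (1 + rate) ** t for t in range(1, years + 1))

purchase_cost = 50_000.0   # hypothetical capital cost of the machine
rent = 9_000.0             # hypothetical annual rent
life = 8                   # asset life in years
rate = 0.08                # discount rate

pv_rent = discounted_rent_total(rent, life, rate)
# Ownership pays off if its discounted cost is below the total cost of renting.
print("buy" if purchase_cost < pv_rent else "rent", round(pv_rent, 2))
```

With these numbers the discounted rent total exceeds the purchase cost, so buying pays off; a shorter asset life or higher discount rate reverses the choice.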

D.5 The UK Electricity Supply Industry

In the UK electricity industry, the business of generation and the responsibility of

meeting demand are no longer borne by a single utility, i.e. the now defunct Central
Electricity Generating Board (CEGB). Qmax is the maximum electricity production

capacity but is not a constant level, owing to scheduled and unscheduled maintenance

and differing plant availabilities. For this reason, normal production level It <
Qmax. Furthermore, Qmax is the aggregate of plant capacity of the utilities in this

deregulated industry and therefore more difficult to determine. Meanwhile as new


plants are commissioned and old uneconomic plants retired or sold, Qmax will

change accordingly. The expression for Qmax thus approximates the actual qmax,t:

Qmax ≈ Qmax,t = Σ (i = 1 to n) Qmax,i,t, where Qmax,i,t is the maximum capacity of each utility

i at time t.
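As a trivial sketch of this aggregation (utility names and capacities invented):

```python
# Hypothetical per-utility maximum capacities (GW) at a given time t.
capacity_t = {"utility_A": 20.0, "utility_B": 15.5, "utility_C": 8.0}

# Q_max,t is the sum over all n utilities of Q_max,i,t.
q_max_t = sum(capacity_t.values())
print(q_max_t)  # -> 43.5
```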

The cost of not meeting electricity demand is very high, especially in this
competitive environment. The cost of not meeting demand Cd is translated into

reliability stipulations in the contracts between various parties involved. The

probability that actual demand exceeds maximum forecasted demand could be

positive. Traditional approaches have tried to deal with this uncertainty by first

forecasting demand, setting a reserve margin R above expected peak demand based

on the volatility of past demand, and finally optimising to produce a minimal

costing capacity expansion plan (Qmax = R + Dmax). Thus over-capacity is a way to


ensure that demand Dt will always be met, but it is not sufficient to guarantee that

dt will always be met. Over-capacity, like under-capacity, is costly. Thus

robustness, in the form of over-capacity, is not sufficient.

The cost of robustness Cr is the holding cost of over-capacity. Cost of flexibility

can be seen as the cost of responsiveness: immediate change and additional production


to meet demand. Cf = Cxp + transaction or enabling costs Cx. Cp does not arise, as I

= Qmax. Thus we have:

Cr = Ch (robustness)

Cf = Cxp + Cx (flexibility)

The cost of flexibility Cf does not include cost of holding which is an ongoing

present cost. Instead, Cf includes future transaction and production costs. Thus

robustness is a kind of flexibility if the cost of holding can be passed to someone

else or if it is zero. A holding cost should also contain an interest or discount rate

to reflect the time value of money over the period of holding. Robustness means the need not to
change, hence It = I. Robustness diminishes if It is not constant, as there is a cost

to changing normal production levels. As It tends towards continuously varying

levels, one approaches flexibility.

How does one gain flexibility? An extremely flexible option has no holding cost

and gives instant response. Importing power, for instance, shifts the holding cost

(of the plant) to someone else, e.g. Scotland or France. Combined Cycle Gas

Turbines (CCGT) are quick to build and require minimal warm-up time.

Mandelbaum's (1978) six sources of flexibility suggest different ways to gain

flexibility. Demand Side Management (DSM) as practised by many utilities in the


USA refers to the use of incentives to lower or shift demand so that Dt will never

exceed Qmax. Other types of contractual arrangements, such as having a break

clause in building a new type of plant, also offer flexibility. Thus flexibility can be

translated as having many different alternatives (availability and access) to respond


to excessive Dt and being able to deploy them at minimal cost and delay.

Portfolio Theory (Markowitz, 1952) recommends keeping a well-balanced and

diversified portfolio in order to maximise returns and spread risks. In electricity

generation, diversity in the capacity mix ensures security of fuel supply. Diversity

is a double-edged term, carrying implications of both robustness and flexibility. A

utility with a diversified capacity mix, i.e. different types of technologies, different

fuels, and different plant lives and other characteristics, is not only protected from

fuel supply disruptions (robustness) but also has different alternatives to cope with

the unexpected (flexibility). Once again, given nonzero holding cost,
uncertain Qmax, and the possibility of fuel supply disruption, diversity in the capacity

mix gives both robustness and flexibility.

Because flexibility requires the consideration of new solution alternatives with

respect to uncertainty, we need to take a modelling approach that facilitates the

analysis of choice and uncertainty. This motivates the use of decision analysis as

demonstrated in Chapter 8. Scenario analysis is another method to analyse

uncertainty. A robust option is one which performs well across scenarios 1 to n. If we have

many options 1 to m available whatever scenario arises, such that, for instance,

option j performs well if scenario j occurs, then we have flexibility.
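The contrast can be illustrated with a toy payoff matrix (all payoffs invented): robustness selects the single option with the best worst case, whereas flexibility keeps all options open and selects after the scenario is revealed:

```python
# payoffs[option][scenario], hypothetical values
payoffs = {
    "robust_plan": {"s1": 80, "s2": 80, "s3": 80},
    "plan_1":      {"s1": 100, "s2": 40, "s3": 30},
    "plan_2":      {"s1": 30, "s2": 100, "s3": 40},
    "plan_3":      {"s1": 40, "s2": 30, "s3": 100},
}

# Robustness: commit now to the option with the best worst-case payoff.
robust = max(payoffs, key=lambda o: min(payoffs[o].values()))

# Flexibility: keep all options open and choose once the scenario is known.
flexible_value = {s: max(p[s] for p in payoffs.values()) for s in ["s1", "s2", "s3"]}

print(robust)          # -> robust_plan (worst case 80 beats 30 for the others)
print(flexible_value)  # -> {'s1': 100, 's2': 100, 's3': 100}
```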

D.6 Conclusions

We have examined the differences between flexibility and robustness and the

conditions under which they are useful by means of derivation of cost measures.

The simple case of production is extended to two familiar examples to illustrate

these concepts further. While strict uncertainty makes it difficult to assess

flexibility directly, we argue that errors in forecasting and modelling require

consideration of flexibility as robustness is not enough.

We have shown that flexibility and robustness are not the same, nor are they

opposites. Flexibility is forward looking, reflecting a potential capability, useful

when more information can be expected, and implying a future cost. Robustness,

on the other hand, is backward looking, minimising regret, and implying a present cost. We

can only use it as it is, whereas flexibility offers the potential to change and

transform.

We have only looked at demand uncertainty, and at the case of demand being

greater than supply. This study can be extended to look at the other side, where

demand is lower than minimum supply. This is particularly applicable to the

electricity supply industry where over-capacity means increased cost to customers.

Research in the application of flexibility and robustness can be extended in many

directions, to include responses to other types of uncertainty, increasing complexity

in the system, and other areas of application such as regulations, computer systems,

and long term investment planning.

CHAPTER 9

Conclusions

9.1 Main Themes

This thesis has proposed and investigated the feasibility and practicality of model

synthesis and the usefulness of flexibility to modelling uncertainty in electricity

capacity planning. The main conclusion is that 1) model synthesis is feasible but

has practical limitations and 2) flexibility is useful but not in the same sense as

model synthesis. This conclusion is supported by the main themes listed below and

the research answers that follow in the next section.

COMPLETENESS AND UNEASE

From the beginning, there appeared no link between model synthesis and

flexibility. In many respects, they seemed totally unrelated. Model synthesis is a

methodological concern, problem-driven, and rooted in the modelling domain.

Flexibility is a conceptual idea, solution-driven, and not familiar to the modelling

tradition. Until their contribution to this research problem was evident, it did not

seem feasible to consider both model synthesis and flexibility. Model synthesis

seems to fit into the discussion of model management systems and model

integration issues in the decision support systems literature. Yet there does not

exist a taxonomy suitable for it, hence the conceptualisation of model synthesis in

Appendix C. The wide application and polymorphous nature of flexibility

complicate the task of clarification and unification as different interpretations are

very confusing. Several prior attempts were made to reconcile the two apparently

unrelated concepts from the means-ends angle. In other words, is model synthesis

a means to flexibility or vice versa? Is it possible to incorporate flexibility in

modelling as a means to completeness?

Modelling for completeness is still necessary. But it is not sufficient, as

completeness is independent of model unease, which may exist in spite of

completeness. Completeness is intra-model, i.e. internal to the model, whereas

model unease is extra-model, i.e. external to the model. Model unease is an

unavoidable feature of decision making in this industry, referring to the gap

between the decision maker (the user of the model) and the model itself. The

relationship between completeness and unease (both intra- and extra-model) as

earlier discussed in chapters 4 and 6 resolves the themes of model synthesis and

flexibility. Model synthesis is a feasible but impractical means to completeness.

Flexibility is a practical means to compensate for the extra-model unease.

MODEL SYNTHESIS: Issues of complementarity, compatibility,

comprehensiveness, comprehensibility

Completeness or comprehensiveness is the implicit aim of the modelling approach.

Model synthesis has been proposed as one way to achieve completeness, as it

makes use of techniques that are complementary to each other in terms of

functionality and desirable features. To facilitate synthesis, compatibility at the

theoretical and data levels is required. Manageability (comprehensibility) is

essential for a usable model. Model synthesis makes use of complementary and

compatible techniques to meet the conflicting criteria of comprehensiveness and

comprehensibility. It is also appealing for the following reasons.

1) Intuitively, it communicates the notion of the "best of both worlds", harnessing the

balance of hard and soft techniques to address the intricacies of power generation

and the strategic nature of uncertainties in capacity planning. It reflects the idea of

using complementary techniques, models, or approaches as a means to

completeness.

2) Synthesis capitalises on economies of scale, reflecting that the whole is greater than

the sum of its parts. It exploits the synergies between its component parts.

3) Synthesis implies co-existence, i.e. some level of interaction or communication

amongst its components. Co-existence requires compatibility of assumptions, data,

and functionality.

FEASIBILITY AND PRACTICALITY OF SYNTHESIS: CONCEPTUAL AND

OPERATIONAL ISSUES

The noticeable trend of building larger energy models through synthesis

demonstrates the feasibility but not the practicality of model synthesis. This thesis

investigated one form of synthesis to capture the two important but complementary

features of the three archetypal modelling approaches: decision analysis and

optimisation. A decision analysis framework was proposed as an organisational

tool to capture the details of the core capacity planning optimisation model. To

facilitate this, a model of model to reduce and approximate the inputs and

outputs of the optimisation model was proposed and tested. Although regression

analysis for model fitting is an established and acceptable response surface method

and indeed a similar optimisation model has been successfully reduced in this

manner, the series of experiments found that such a model of model is infeasible,

impractical, and not re-usable for purposes of uncertainty analysis.

A conceptualisation of model synthesis suggests different possibilities for synthesis,

non-trivial issues in structuring, different forms of synthesis, and various strategies

to achieve synthesis. These conceptual issues far out-number the tests achievable

in the model experiment. As a result of these conceptual and operational

difficulties, this thesis concludes that model synthesis is impractical for a utility

faced with the kinds and range of uncertainties described in Chapter 2 in the UK

electricity industry, where decision making is not totally dependent on models.

FLEXIBILITY AS A DECISION CRITERION

Under conditions of uncertainty, flexibility has been proposed as a preferred

decision criterion instead of optimality (Mandelbaum, 1978). That flexibility only

has value when there is uncertainty has been proved by several authors (Marschak

and Nelson 1962, Merkhofer 1975). However, the trade-off between flexibility

and optimality has not been sufficiently addressed in the literature, except for a

brief formal attempt by Mandelbaum and Buzacott (1990). To investigate this

further, the elements necessary to define the multi-faceted concept of flexibility are

distilled from the cross disciplinary review of definitions, measures, and

applications. In sum, the concept of flexibility conveys a change, encompasses the

notions of range (size of choice set) and time, requires uncertainty conditions, and

includes the value optimisation notion of favourability. Under deterministic

conditions, i.e. no uncertainty, favourability dominates flexibility, so that

optimality is the preferred decision criterion. Under conditions of uncertainty,

flexibility dominates, although favourability is still present.

FLEXIBILITY AND ROBUSTNESS

An important distinction between two kinds of flexibility, displayed in table 6.1, is

made on the basis of previous studies. 1) Active flexibility, also known simply as

flexibility, refers to the ability to react with minimal penalty in cost, time, and

effort. 2) Passive flexibility, also known as robustness, refers to a state

of being, in which no reaction is required as it is tolerant or insensitive to the

uncertainty. This distinction is made by argument, examples, and specific

application to clarify the meaning of flexibility in different contexts. The specific

application in Appendix D reveals the conditions under which flexibility and

robustness are more or less valuable. It concludes that under uncertainty, robustness

is no longer sufficient, i.e. flexibility becomes necessary.

FEATURE OF MODELLING APPROACH

Flexibility is a feature of modelling approaches which facilitates the multi-staged

resolution of uncertainty, such as the decision tree based techniques of decision

analysis, contingent claims analysis, and stochastic dynamic programming

employed in the literature. Robustness characterises modelling approaches which

aim for completeness of coverage, to ensure all likely ranges of uncertainty are

covered, e.g. sensitivity analysis, scenario analysis, and risk analysis.

VERSATILITY OF DECISION ANALYSIS

The literature review of existing modelling approaches (Chapter 3) reveals the

potential for using decision analysis as an organisational tool for synthesis. With

user-friendly software which incorporates both decision trees and influence

diagrams such as DPL (ADA, 1992), decision analysis becomes even more

attractive as a structuring tool. However, decision analysis assumes that the model

is built in direct consultation with decision makers, which is not the case with the

modelling and decision making styles of the electricity supply industry. Thus it is

more appropriate as a framework for structuring and assessing flexibility external

to the modelling approach, i.e. to compensate for model unease, rather than as a

platform for model synthesis within the modelling approach.

9.2 Research Questions and Answers

The argument of this thesis is dominated by the main themes discussed in section

9.1, woven by research questions and answers listed below, and distinguished by

types of contributions listed in section 9.3. Figure 9.1 illustrates how the chapters

are related to each other. The numbers correspond to chapters while the arrows

carry the messages. Chapter 2 links the two parts together by uncertainty. The

two themes of model synthesis and flexibility are related by the use of the decision

analytic framework and replication/evaluation method (Chapters 4 and 7). While

model synthesis addresses completeness (Chapters 2, 3, 4), flexibility addresses

model unease and uncertainty.

Figure 9.1 Research Messages

[Diagram relating the chapters. Part One (Chapters 3 and 4) addresses completeness through model synthesis; Part Two (Chapters 5 to 8) addresses model unease through flexibility: words, concepts, and relationships (Chapter 5), elements, types, and measures of F (Chapters 5, 6), response options, strategies, and the UF mapping (Chapters 6, 7), and the decision analysis framework with the replication and evaluation method (Chapter 8). Chapter 2, on "coping" with uncertainty in the UK ESI, links the two parts.]

U = uncertainty
F = flexibility
UF = uncertainty-flexibility

Table 9.1 answers the ten questions raised initially in table 1.1 of Chapter 1. Each of

these is discussed briefly below.

Table 9.1 Research Questions and Answers

1) What are the new requirements for capacity planning in the privatised and restructured UK ESI?
   Location*: 2
   Answer and contribution: table 2.6; areas of uncertainties; types of uncertainties

2) What are existing approaches to this problem and how well do they treat these uncertainties?
   Location: 3, B
   Answer: all kinds of OR techniques, given in figure 3.13; critique summarised in table 3.4

3) How can we compare different modelling approaches more objectively, systematically, fairly, and deeply than by reviewing the literature?
   Location: 4, A, B
   Answer: 4-step method of criteria, replication, evaluation, comparison

4) Is model synthesis feasible and practical for these purposes? What are the conceptual and operational issues involved in model synthesis?
   Location: 4, C
   Answer: feasible, but conceptual and operational issues; practical limitations; compatibility requirements; weak and strong forms of synthesis

5) What is flexibility? How is it defined? How does it relate to other words and concepts?
   Location: 5, 6
   Answer: necessary definitional elements; types of flexibility; robustness; figure 6.1 Conceptual Framework

6) In what way(s) can flexibility be useful in addressing uncertainty in electricity capacity planning?
   Location: 5, 6
   Answer: decision criterion (under uncertainty); feature of approach (vs robustness); operationalisation; against model unease

7) When, i.e. under which conditions, is it useful or not useful?
   Location: 6
   Answer: conditions of uncertainty; available options, strategies; downside of flexibility; table 6.4

8) How can we operationalise flexibility?
   Location: 6, 7, 8
   Answer: options; strategies

9) How can we measure flexibility?
   Location: 7, D
   Answer: indicators (table 7.1); expected values; not entropy

10) How can flexibility be modelled and applied to electricity planning?
    Location: 7, 8
    Answer: practical guidelines: decision analytic framework; 2-stage decision sequence; uncertainty-flexibility mapping; table 7.1 (enabler, disabler, motivator, trigger event, trigger state, likelihood, number of choices and states); table 8.1

* numbers correspond to chapters; letters to appendices

1) Requirements

Chapter 2 distinguishes between types and areas of uncertainties. Types of

uncertainties refer to the nature of uncertainty, not what it affects or where it

comes from. Areas of uncertainties refer to the source of uncertainty, i.e. the

factor which is uncertain. Different factors that affect capacity planning but are

uncertain are identified and discussed. This classification and enumeration is the

first step towards the completeness of addressing different areas of uncertainties

and the adequacy of treating different types of uncertainties. In addition to these

uncertainties, intricacies in power generation and other aspects of the business are

discussed. Together, they form a list of model requirements in table 2.6.

2) Existing approaches and performance

Performance of existing modelling approaches is assessed by literature review and

by replication.

Chapter 3 reviews the techniques (figure 3.13) used in electricity capacity planning,

and critiques the associated applications (models) with respect to completeness in

modelling areas of uncertainties and adequacy of treating them. The review

concludes that all kinds of OR techniques have been used for this purpose, but models based

on individual techniques are incapable of addressing all aspects of capacity planning

due to inappropriate level of detail, lack of decision focus, and insufficient attention

to multi-criteria and uncertainty. These additional modelling difficulties are

summarised together with limitations of techniques in table 3.4. Applications based

on two or more techniques show better performance than singular technique-based

models.

Appendix B replicates three archetypal modelling approaches (deterministic,

probabilistic, and decision analytic) to evaluate model performance in greater depth

than by literature review and to enable a fair comparison. It concludes (in Chapter

4) that each approach is incapable of meeting the conflicting criteria of

comprehensiveness and comprehensibility. Instead, a synthesis of relevant but

complementary features of these approaches is suggested.

3) Objective, systematic, fair, and in-depth model critique

To overcome the limitations and biases in model assessment by literature review, a

four step method is proposed and tested. The four steps consist of the following:

1) Criteria: Identify requirements from the literature.

2) Replication: Replicate the model with available software tools.

3) Evaluation: Assess replicated model against the predefined criteria.

4) Comparison: Compare models against each other.

The feasibility of this method has been established by two detailed pilot studies, the

first of which is documented in Appendix A. This method is used for the first stage

of the modelling experiment (Appendix B and Chapter 4). This four step method is

later employed in Chapter 7 to assess different measures of flexibility by replicating

cited examples.

4) Feasibility and practicality of model synthesis: conceptual and operational issues

Multi-technique based applications, particularly evident in the trend towards large energy

models, indicate the feasibility of model synthesis. However, the conceptualisation

of model synthesis in Appendix C and experimental results in Chapter 4 highlight

the conceptual and operational difficulties that must be overcome for feasibility.

Some of these difficulties, such as exemplified by the detailed study of model of

model are so costly that model synthesis becomes impractical. Chapter 4

concludes that model synthesis is feasible but not practical for a utility in the UK

ESI.

5) What is flexibility? How is it defined? How does it relate to other words and

concepts?

Many attempts at giving a precise definition of flexibility end up restricting its rich,

multi-faceted content to a narrow context. Instead, this thesis identifies and

collates necessary definitional elements to preserve the multiple aspects. These

context-free elements consist of the important concept of favourability, number of

choices, change, time, and conditions of uncertainty. The context-dependent type

of flexibility depends on the uncertainty-flexibility mapping.

The meaning of flexibility is also clarified by contrasting and analysing it against

other closely related words and concepts in a Conceptual Framework depicted in

figure 6.1. Six relationships are studied:

a) Flexibility and robustness

b) Flexibility is preferred to optimality under uncertainty

c) Robustness as safety or lack of risk in a decision; a robust decision is one whose
elements will not have to be regretted.

d) Lack of confidence reduces the desire for commitment and increases the preference
for flexibility.

e) Flexibility and robustness are embedded in the finance definition of an option: "the
right but not the obligation".

f) Close relationships exist between uncertainty and flexibility, liquidity and learning.

6) Usefulness of flexibility

Chapter 5 shows the wide range of contexts in which flexibility is found. Chapter 6

discusses and supports its use as a decision criterion (as opposed to optimality)

under uncertainty, as a feature of the modelling approach (as opposed to

robustness), as a practical means to cope with uncertainty by operationalisation,

and as a hedge against model unease.

7) Conditions for usefulness and downside

Chapter 6 identifies conditions which together make flexibility useful: uncertainty,

availability of means to flexibility, and that it must be worthwhile to consider

flexibility. In addition, Mandelbaum's (1978) conditions under which flexibility is

not useful are translated into their converse in table 6.4, to show that capacity planning

in the UK ESI can make use of flexibility.

These three conditions are suggestive but not guaranteed, i.e. the mere existence of

these conditions does not guarantee that flexibility will be useful, as it may be

undesirable for the particular decision maker or situation. Flexibility is not

desirable for a decision maker who is intolerant of uncertainty, cautious, hesitant or

indecisive. The downside of flexibility is briefly discussed to warn against treating

all types of flexibility, and indeed, all degrees of flexibility as useful or valuable. In

other words, there may be a limit to the usefulness of flexibility. There is also no

evidence that flexibility reduces uncertainty.

8) Operationalisation of flexibility

Operationalisation refers to the implementation of the conceptual aspects of

flexibility. Chapter 6 distinguishes between options and strategies. Options are

those alternatives which provide flexibility by increasing the number of future

options or by their characteristics, e.g. short lead time. Strategies introduce

flexibility by sequentiality, partitioning, postponement, diversity, searching for

more options, resistance to change, substituting, incrementalism, contingency

planning, etc.

9) Measuring flexibility

Instead of developing a single best overall measure, this thesis has found two

groups of measures which meet the criteria for measuring flexibility. Partial

measures form the largest group in the literature, and they support the classification

of indicators. Expected value based measures are useful, but caution is needed as

expected values over-emphasize the favourability aspect of flexibility. The third

group of entropic measures is misleading and therefore not recommended for

further use.
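The contrast between the recommended groups and the rejected entropic group can be illustrated with a small numerical sketch (the payoffs and probabilities below are hypothetical, not figures from the thesis): an expected value based measure responds only to favourability, while Shannon entropy responds only to the spread of the probabilities, ignoring payoffs altogether.

```python
import math

def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * v for p, v in outcomes)

def entropy(outcomes):
    # Shannon entropy (bits) of the outcome probabilities;
    # payoffs are ignored, so only dispersion is captured
    h = 0.0
    for p, _ in outcomes:
        if p > 0:
            h -= p * math.log2(p)
    return h

# Hypothetical plans: a committed plan with a single certain outcome,
# and a flexible plan spreading probability over several outcomes.
committed = [(1.0, 100.0)]
flexible = [(0.25, 90.0), (0.25, 95.0), (0.25, 100.0), (0.25, 105.0)]

print(expected_value(committed), entropy(committed))  # 100.0 0.0
print(expected_value(flexible), entropy(flexible))    # 97.5 2.0
```

The committed plan scores zero on entropy despite its high expected value, and the flexible plan scores high on entropy despite a lower expected value: neither number alone captures both the favourability and uncertainty aspects of flexibility.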

In addition to these measures, flexibility and robustness are contrasted and

measured in a specific application of over and under capacity in production and

inventory control (Appendix D). This example shows another way to assess

flexibility.

10) Modelling and application of flexibility

Ultimately, to make use of flexibility, we must be able to operationalise and assess

it. Chapter 8 uses the terminology of Chapter 7 to develop practical guidelines for

structuring flexibility in a decision analysis framework and assessing it using

indicators and expected values. The terminology is summarised in table 7.1 of

indicators: enabler, disabler, motivator, trigger event, trigger state, likelihood,

number of choices and states. The practical guidelines consist of four steps:

identify uncertainties, operationalise flexibility, structure by decision trees and

influence diagrams, and assess with indicators and expected value-based measures

as given in table 8.1.
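The structuring and assessment steps can be sketched as a standard decision tree rollback, in which chance nodes take expectations and decision nodes choose the best branch (the plant options, payoffs, and probabilities below are hypothetical illustrations, not entries from table 8.1):

```python
# Roll back a decision tree: chance nodes take expectations,
# decision nodes choose the branch with the highest rolled-back value.
def rollback(node):
    if node["type"] == "terminal":
        return node["value"]
    if node["type"] == "chance":
        return sum(p * rollback(child) for p, child in node["branches"])
    return max(rollback(child) for _, child in node["branches"])

# Hypothetical capacity decision: commit to a large plant now, or
# defer with a short-lead-time option, under demand uncertainty.
tree = {"type": "decision", "branches": [
    ("commit large plant", {"type": "chance", "branches": [
        (0.6, {"type": "terminal", "value": 120.0}),    # high demand
        (0.4, {"type": "terminal", "value": 40.0})]}),  # low demand
    ("defer (flexible)", {"type": "chance", "branches": [
        (0.6, {"type": "terminal", "value": 100.0}),
        (0.4, {"type": "terminal", "value": 80.0})]})]}

print(rollback(tree))  # 92.0: the flexible strategy wins on expected value
```

The flexible branch has a lower best-case payoff but a higher expected value, which is precisely the kind of trade-off the expected value based measures are intended to surface.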

Table 9.2 summarises the questions raised in Chapter 4 and answers concluded in

Part Two regarding flexibility.

Table 9.2 Flexibility (questions raised in Chapter 4; answers from Chapters 5 to 8 and Appendix D)

What is flexibility?
- Cross Disciplinary Review (Chapter 5): interpretation, applications
- Conceptual Framework (Chapter 6): relationships with other concepts; contrast against robustness
- Measuring Flexibility (Chapter 7): definitional elements; types of flexibility
- Modelling Flexibility (Chapter 8): expressed in a decision analysis framework
- Flexibility and Robustness (Appendix D): contrast against robustness

How to use it?
- Chapter 5: characteristic of systems (manufacturing), decision preference, desirable goal
- Chapter 6: reflecting degree of commitment, compensating for lack of confidence; conditions under which it is useful
- Chapter 7: as a decision criterion
- Chapter 8: as a feature of the modelling approach
- Appendix D: as under- and over-capacity in a production and inventory application

How to measure it?
- Chapter 7: indicators (partial measures); expected value; entropic measures
- Chapter 8: path in a decision tree
- Appendix D: costs of under- and over-capacity

How to use it in modelling uncertainty in capacity planning?
- Chapter 5: electricity planning literature; as characteristics of plants and portfolios (capacity mix)
- Chapter 6: operationalisation via options and strategies
- Chapter 8: examples of pool price, plant economics, and strategies
- Appendix D: supply versus demand of electricity

9.3 Research Contributions

We discuss the above answers to research questions from Chapter 1 in terms of

four types of contributions: critical, methodological, conceptual, and synthetic

(synthesis).

1) CRITIQUE

A critique refers to an assessment against given criteria. Four critiques have been

made in this thesis: a) requirements, b) techniques and applications, c) modelling

approaches, and d) flexibility measures.

a) A critical review of the industry and history of capacity planning provides the basis

for the identification and classification of different areas of uncertainty relevant to

this thesis in Chapter 2.

b) Based on capacity planning applications reported in the literature, Chapter 3

assesses a wide range of OR techniques and models against the areas and types of

uncertainties identified previously. These evaluations reveal the additional

modelling requirements that must be met.

c) Through a modelling experiment, three archetypal modelling approaches are replicated

and evaluated against the criteria compiled from the list of uncertainties in Chapter

2 and additional modelling requirements in Chapter 3.

d) From an extensive cross disciplinary review, different measures of flexibility

emerge. Grouped into three main categories, they are assessed according to

pre-defined criteria: partial measures (indicators), expected value, and entropy.

2) METHODOLOGY

Methodological contributions refer to methods developed, tested, and applied in

this thesis. Three have been contributed: a) the four-step model replication

and evaluation, b) the two-stage modelling experiment,

and c) practical guidelines for flexibility.

a) A method of model assessment that is more objective and fair than mere literature

review is developed and successfully tested in two separate pilot studies. This

four-step method is applied in the two-stage modelling experiment and again in the

critique of measures of flexibility. The four steps consist of criteria, replication,

evaluation, and comparison.

b) A two-stage, case-study-based modelling experiment is designed and conducted to

facilitate prototyping of model synthesis and comparison against existing

approaches.

c) Practical guidelines for structuring and assessing flexibility are developed and

applied to UK ESI capacity planning examples of plant economics and pool price

behaviour. Four models of operationalisation strategies are structured. These

guidelines make new uses of decision analysis and expected values.

3) CONCEPTUAL DEVELOPMENT

Conceptual development or conceptualisation refers to a creative and logical

process of analysis. What emerged from this are a) the conceptual issues in model

synthesis and b) the conceptual framework of flexibility relationships and new

terminology.

a) To fill the void in the literature, issues in model synthesis, including a taxonomy

and typology, are conceptualised.

b) Following the same manner of conceptual analysis of flexibility and closely related

words, Chapter 6 analyses the relationship between flexibility and more established

concepts. Important relationships, definitional elements, conditions, and other

conceptual aspects of flexibility are developed. The following new terms are

introduced: favourability, uncertainty to flexibility mapping, indicators, trigger

events, enabler, disabler, motivator, local event, and external event.

4) SYNTHESIS

Three different kinds of synthesis have been examined in this thesis: a) model

synthesis, b) optimisation and decision analysis, and c) expected value and entropy.

a) Model synthesis refers to configuring existing models or techniques to meet the

conflicting modelling criteria of comprehensiveness and comprehensibility.

b) Optimisation and decision analysis techniques are complementary in many ways.

While optimisation uses a large amount of data, decision analysis uses little. Optimisation is

single-staged and deterministic, while decision analysis is multi-staged and contains

probabilities and decisions. Optimisation contains constraints, hence constrained

optimisation. Decision analysis is a kind of unconstrained optimisation. Attempts

at synthesizing the two types of techniques included embedding optimisation in

decision analysis. However, incompatible data sizes and interfaces prevented a direct

formulation. Furthermore, the multiple alternative stages of the decision tree are

not utilised, only the terminal nodes.
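One way the embedding idea can be sketched, under assumptions of my own (a merit-order dispatch standing in for the optimisation component, with made-up plant data): each terminal scenario of the decision analysis is costed by solving a small deterministic optimisation, and the chance node then takes the expectation over scenarios.

```python
def dispatch_cost(demand, plants):
    # Embedded deterministic optimisation: merit-order dispatch,
    # loading the cheapest plants first until demand is met.
    cost, remaining = 0.0, demand
    for marginal_cost, capacity in sorted(plants):
        used = min(capacity, remaining)
        cost += used * marginal_cost
        remaining -= used
        if remaining <= 0:
            break
    return cost

plants = [(18.0, 500.0), (25.0, 400.0), (40.0, 300.0)]  # (cost per MWh, MW)
scenarios = [(0.5, 600.0), (0.5, 1000.0)]               # (probability, demand MW)

# Decision analysis layer: expectation over demand scenarios, each
# terminal node evaluated by the embedded optimisation above.
expected_cost = sum(p * dispatch_cost(d, plants) for p, d in scenarios)
print(expected_cost)  # 17250.0
```

Even in this toy form, the asymmetry noted above is visible: the optimisation is solved afresh at every terminal node, while the intermediate stages of the tree contribute nothing to the optimisation itself.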

c) Expected value and entropy exhibit complementarity in capturing the favourability

and uncertainty aspects of flexibility. This at first suggested that a synthesis would

lead to a better measure. However, differences and inconsistencies in underlying

assumptions prevented any form of co-existence. Furthermore, entropy was

rejected as a meaningful and reliable measure of flexibility.

9.4 Further Research

Our investigation into model synthesis and flexibility has opened up a number of

areas for further research. The following three areas are suggested.

ON MODEL SYNTHESIS

1) Other Forms of Synthesis

We have only examined the synthesis between optimisation and decision analysis

via a model of model. We cannot generalise from this limited experience that we

have found all the conceptual and operational difficulties in model synthesis.

2) Different Levels of Synthesis

The weak and strong forms of synthesis pertain to the degree of interaction

between the components. Does the level of synthesis (or integration) contribute to

the completeness of modelling?

ON FLEXIBILITY

1) Measuring Flexibility

Appendix D showed that flexibility is useful when there is a chance that actual

demand may exceed forecasted demand. The probability that actual demand

exceeds forecasted demand indicates a need for flexibility. This suggests that

flexibility requires a measure of uncertainty.
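This indicator can be made concrete under a distributional assumption of my own choosing, namely that actual demand is normally distributed around some mean, against which the forecast is compared (all figures below are hypothetical):

```python
import math

def prob_demand_exceeds_forecast(forecast, mean_actual, sd_actual):
    # P(actual demand > forecast) when actual demand ~ Normal(mean, sd);
    # a simple indicator of the need for flexibility.
    z = (forecast - mean_actual) / sd_actual
    # standard normal CDF via the error function
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - cdf

# Hypothetical: forecast 50 GW, actual demand centred on 52 GW with sd 3 GW.
p = prob_demand_exceeds_forecast(50.0, 52.0, 3.0)
print(round(p, 3))  # roughly 0.75, suggesting a strong need for flexibility
```

The higher this probability, the stronger the case for flexible options such as short-lead-time plant; when the forecast comfortably exceeds likely demand, the indicator falls towards zero.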

Gerwin (1993) mentioned that the type of flexibility corresponds to the type of

uncertainty. Others have shown that flexibility has no value if there is no

uncertainty. This thesis proposes an uncertainty to flexibility mapping, but this

does not imply that type of flexibility can only be analysed with respect to (area of)

uncertainty. Furthermore, a more formal method of incorporating indicators in

assessment may be helpful, such as some kind of multi-attribute ranking and trade-

off to score the value of flexibility with respect to each of the key uncertainties.

Do we aggregate flexibility with all uncertainties or weight individual flexibilities

conditional on type of uncertainty?

The uncertainty to flexibility mapping assumes a one-to-one correspondence. How

do we deal with permutations of the flexibility characteristics of options, flexible

options, or different strategies? This is not a straightforward one-to-one uncertainty

to flexibility mapping.

Trade-off analysis becomes more troublesome in complicated problems. We may

need to define a preference function, aggregate the indicators, or use multi-attribute

weighting and ranking. Guidelines are required for defining these functions.

2) Comparing different strategies

How do we compare different strategies, e.g. different sources of flexibility?

Most discussions of flexibility have concentrated on evaluating a single source of

flexibility. Including all sources of flexibility in one model enables us to compare

different strategies. However, it adds to the dimensionality (and complexity) of the

decision tree as each source of flexibility is necessarily meaningful only in relation

to its triggering uncertainty and reflected by the structure of the decision tree, i.e.

the order of decisions, branches of decision nodes, etc.

ON CAPACITY PLANNING

1) Extension of Capacity Planning Models

According to Paraskevopoulos et al (1991), capacity expansion models determine

the type, size, and sequencing of productive facilities to optimally meet

expectations about future market conditions. These models are thus important for

industries characterised by 1) commitment of substantial resources for new

investment with long payout times, 2) low resale or scrap value of newly installed

equipment, 3) substantial economies of scale (both static and dynamic) due to

technology. The modelling approaches and measures researched in this thesis

should be applicable to capacity planning problems elsewhere, particularly in

capital-intensive industries involving long lead-times, huge investments, and high

uncertainty. Likewise, these capacity planning models should be extensible to the

electricity supply industries of other countries, as they share the distinctive features

of non-storability and high sunk costs.

2) Combining Model Synthesis and Flexibility

The modelling approach is still necessary for completeness, and model synthesis is

a means to this. To hedge against model unease and to cope with uncertainties,

flexibility becomes necessary. This thesis has addressed the two themes separately.

To use model synthesis and flexibility together for model completeness and

uncertainty remains an open question.

REFERENCES

ABI-INFORM (1970 - 1994) UMI University Microfilms, Ann Arbor, Michigan

ADA (1992) DPL: User's Guide

Allan, R.N. and R. Billington (1992) Probabilistic methods applied to electric power
systems --- are they worth it? Power Engineering Journal, Vol. 6, No. 3, May, pp. 121 -
129

Amagai, Hisashi; Pingsun Leung (1989) Multi-Criteria Analysis for Japan's Electric
Power Generation Mix, Energy Systems and Policy, Vol. 13, No. 3, pp. 219 - 236

Anastasi, Anne (1990) Reaction: Diversity and Flexibility, The Counseling Psychologist,
Vol. 18, No. 2, pp. 248 - 261

Anders, George J. (1990) Probability Concepts in Electric Power Systems, John Wiley
and Sons

Anderson, Dennis (1972) Models for determining least-cost investments in electricity


supply, Bell Journal of Economics and Management Science, International Bank for
Reconstruction and Development

Ansoff, H.I. (1968) Corporate Strategy, Penguin Books

Atkinson, John (1985) Flexibility, Uncertainty and Manpower Management, IMS Report
No. 89, Institute of Manpower Studies

Atkinson, John (1989) Management Strategies for Flexibility and the Role of the Trade
Unions, IMS Paper No. 154, Institute of Manpower Studies

Balci, Osman (1986) Requirements for Model Development Environments, Computers and
Operational Research, Vol. 13, No. 1, pp. 53 - 67

Balson, W.E.; S.M. Barrager (1979) Uncertainty Methods in Comparing Power Plants,
EPRI, FFAS-1048 Technical Planning Study 78 - 797

Barbier, Edward; David Pearce (1990) Thinking economically about climate change,
Energy Policy, Vol. 18, No. 1, January/February, pp. 11 - 18

Barrager, Stephen; Oliver Gildersleeve (1989) A Methodology to Incorporate Uncertainty


into R&D Cost and Performance Data, Resources and Energy, Vol. 11, pp. 177 - 193

Baughman, M.L.; D.P. Kamat (1980) Assessment of the Effect of Uncertainty on the
Adequacy of the Electric-Utility Industry's Expansion Plans, 1983 - 1990, EPRI Project
EA 1446, 1153-1, Electric Power Research Institute, December 1979

Baughman, Martin L.; Paul Joskow (1976) Energy Consumption and Fuel Choice by
Residential and Commercial Consumers in the United States, Energy Systems and Policy,
Vol. 1, No. 4, pp. 305 - 323

Beaver, Ron (1993) Structural Comparison of the Models in EMF12, Energy Policy, Vol.
21, No. 3, March, pp. 238 - 248

Beck, P.W. (1982) Corporate Planning for an Uncertain Future, Long Range Planning,
Vol. 15, pp. 12 - 21

Bellman, Richard E. (1957) Dynamic Programming, Princeton University

Bernardo, J.J.; Z. Mohamed (1992) The measurement and use of operational flexibility in
the loading of Flexible Manufacturing Systems, European Journal of Operational
Research, Vol. 60, No. 2, pp. 144 - 155

Berrie, T.W.; D. McGlade (1991) Electricity planning in the 1990s, Utilities Policy, Vol.
1, No. 3, April, pp. 199 - 211

Bonder, S. (1979) Changing the future of operations research, Operations Research, Vol.
27, pp. 209 - 224

Borison, A.B.; B.R. Judd, P.A.Morris, E.C. Walters (1981) Evaluating R&D Options
Under Uncertainty, Volume 3: An Electric-Utility Generation-Expansion Planning
Model, EPRI EA-1964, Vol 3 Research Project 1432-1, EPRI

Borison, Adam B.; Peter A. Morris, Shmuel S. Oren (1984) A State-of-the-World
Decomposition Approach to Dynamics and Uncertainty in Electric Utility Generation
Expansion Planning, Operations Research, Vol. 32, No. 5, pp. 1052 - 1068

Borison, Adam Bruce (1982) Optimal Electric Utility Generation Expansion Under
Uncertainty, PhD Thesis, Stanford University

Box, G.E.P.; N.R. Draper (1987) Empirical Model Building and Response Surfaces, John
Wiley, New York

Boyd, Robert; Roderick Thompson (1980) The Effect of Demand Uncertainty on the
Relative Economics of Electrical Generation Technologies with Differing Lead Times,
Energy Systems and Policy, Vol. 4, No. 1-2, pp. 99 - 124

Brealey, Richard A.; Stewart C. Myers (1988) Principles of Corporate Finance,


McGraw-Hill Book Company

Brill, E. Downey, Jr; John M. Flach; Lewis D. Hopkins; S. Ranjithan (1990) MGA: A
Decision Support System for Incompletely Defined Problems, IEEE Transactions on
Systems, Man, and Cybernetics, Vol. 20, No. 4, pp. 745 - 757

Brown, Rex V.; Dennis V. Lindley (1986) Plural Analysis: Multiple Approaches to
Quantitative Research, Theory and Decision, Vol. 20, pp. 133 - 154

Bunn, Derek W. (1984) Applied Decision Analysis, John Wiley and Sons

Bunn, Derek W.; Ahti A. Salo (1993) Forecasting with Scenarios, European Journal of
Operational Research, Vol. 68, No. 3, August, pp. 291 - 303

Bunn, Derek W.; Eric R. Larsen (1992) Sensitivity of Reserve Margins to Factors
Influencing Investment Behaviour in the Electricity Market of England and Wales, Energy
Policy, Vol. 20, No. 5, May, pp. 420 - 429

Bunn, Derek W.; Erik R. Larsen; Kiriakos Vlahos (1991) Modelling the Effects of
Privatisation on Capacity Investment in the UK Electricity Industry, London Business
School, November

Bunn, Derek W.; Erik R. Larsen; Kiriakos Vlahos (1993) Complementary Modelling
Approaches for Analysing Several Effects of Privatization on Electricity Investment,
Journal of the Operational Research Society, Vol. 44, No. 10, pp. 957 - 971

Bunn, Derek W.; Kiriakos Vlahos (1989a) Evaluation of the Long-term Effects on UK
Electricity Prices following Privatisation, Fiscal Studies, Vol. 10, No. 4, pp. 104 - 116

Bunn, Derek W.; Kiriakos Vlahos (1989b) Evaluation of the Nuclear Constraint in a
Privatised Electricity Supply Industry, Fiscal Studies, Vol. 10, No. 1, pp. 41 - 52

Bunn, Derek W.; Kiriakos Vlahos (1992) A model-referenced procedure to support


adversarial decision processes: application to electricity planning, Energy Economics, Vol.
14, No. 4, October, pp. 242 - 247

Butler, Timothy; Kirk Karwan, James Sweigart (1992) Multi-Level Strategic Evaluation of
Hospital Plans and Decisions, Journal of the Operational Research Society, Vol. 43, No.
7, pp. 665-675

Buzacott, J.A. (1982) The fundamental principles of flexibility in manufacturing systems,


Proceedings of the First International Conference on Flexible Manufacturing Systems,
pp. 13 - 22

Carlsson, Bo (1989) Flexibility and the Theory of the Firm, International Journal of
Industrial Organisation, Vol. 7, pp. 179 - 203

Cazalet, E.G.; C.E. Clark, T.W. Keelin (1978) Costs and Benefits of Over/Under
Capacity in Electric Power System Planning, EPRI Research Project 1107, Electric Power
Research Institute, Palo Alto, California

Chandra, Pankaj; Mihkel M. Tombak (1992) Models for the evaluation of routing and
machine flexibility, European Journal of Operational Research, Vol. 60, pp. 156 - 165

Chapman, C.B.; Dale Cooper (1983) Risk analysis: Testing some prejudices, European
Journal of Operational Research, Vol. 14, No. 3, pp. 238 - 247

Choobineh, F.; A. Behrens (1992) Use of Intervals and Possibility Distributions in


Economic Analysis, Journal of the Operational Research Society, Vol. 43, No. 9, pp. 907
- 918

CIGRE (1991) Flexibility of power systems: Principles, means of achievement and
approaches for facing uncertainty by planning flexible development of power systems,
Electra, No. 135, Working Group 01 of Study Committee 37, pp. 76 - 101

CIGRE (1993) Dealing with Uncertainty in System Planning - Has Flexibility Proved to be
an Adequate Answer? Electra, No. 151, Working Group 37.10, December, pp. 53 - 65

Clark, Charles E., Jr (Decision Focus Inc) (1985) Decision Analysis in Strategic Planning,
Strategic Management and Planning for Electric Utilities, pp. 39 - 61

Clemen, Robert T. (1991) Making Hard Decisions: An Introduction to Decision


Analysis, PWS-Kent Publishing Company

Cline, William R. (1992) Global Warming, the Economic Stakes, Institute for
International Economics

Collingridge, David (1979) The Fallibist Theory of Value and its Applications to Decision
Making, PhD Thesis, University of Aston

Collingridge, David; Peter James (1991) Inflexible Energy Policies in a Rapidly-changing


Market, Long Range Planning, Vol. 24, No. 2, pp. 101 - 107

Commission of the European Communities (1992) The Week in Europe

Côté, G.; M.A. Laughton (1979) Decomposition techniques in power system planning: the
Benders partitioning method, Electrical Power and Energy Systems, Vol. 1, No. 1, April,
pp. 57 - 64

Covello, Vincent T (1987) Decision Analysis and Risk Management Decision Making:
Issues and Methods, Risk Analysis, Vol. 7, No. 2, pp. 131 - 139

Dantzig, G.B. (1955) Linear Programming Under Uncertainty, Management Science, Vol.
1, pp. 197 - 206

Davis, Wayne J.; Sharon M. West (1987) An Integrated Approach to Stochastic


Decisionmaking: A Project Scheduling Example, IEEE Transactions on Systems, Man,
and Cybernetics, Vol. 17, No. 2, March/April, pp. 199 - 209

DeGroote, Xavier (1994) The Flexibility of Production Processes: A General Framework,
Management Science, Vol. 40, No. 7, July, pp. 933 - 945

Department of Energy (1992) Energy Trends, January, April

Dhar, S.B. (1979) Power System Long-range Decision Analysis Under Fuzzy
Environment, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-98, No. 2,
March/April, pp. 585 - 596

Dixit, Avinash K.; Robert S. Pindyck (1994) Investment Under Uncertainty, Princeton
University Press, Princeton

Dixon, E.C. (1989) Modelling Under Uncertainty: Comparing Three Acid-Rain Models,
Journal of the Operational Research Society, Vol. 40, No. 1, pp. 29 - 40

DOE (1994) The National Energy Modeling System: An Overview, May, Energy
Information Administration, Office of Integrated Analysis and Forecasting, US Department
of Energy, Washington, DC 20585

Dolk, Daniel R. (1993) An introduction to model integration and integrated modeling


environments, Decision Support Systems, Vol. 10, pp. 249 - 254

Dolk, Daniel R.; Jeffrey E. Kottemann (1993) Model integration and a theory of models,
Decision Support Systems, Vol. 9, pp. 51 - 63

Dowlatabadi, Hadi; Michael Toman (1990) Technology Choice in Electricity Generation


Under Different Market Conditions, Resources & Energy (Netherlands), Vol. 12, No. 3,
September, pp. 231-251

Drechsler, F.S. (1968) Decision Trees and the Second Law, Operational Research
Quarterly, Vol. 19, No. 4, pp. 409 - 419

Eden, Richard; M Posner, R Bending, E Crouch, J Stanislaw (1981) Energy economics:
growth, resources and policies, Press Syndicate of the University of Cambridge

EA (1992) UK Electricity, Electricity Association

Energy Committee (1990) The Cost of Nuclear Power, Vol. I & II, Session 1989-90,
Fourth Report, HMSO

Energy Committee (1992) Consequences of Electricity Privatisation, HMSO, Vol. 1 & 2,
February

Eppink, D. Jan (1978) Planning for Strategic Flexibility, Long Range Planning, Vol. 11,
pp. 9 - 15

Eppink, Derk Jan (1978) Managing the Unforeseen: a study of flexibility, PhD Thesis,
Free University of Amsterdam

Eschenbach, Ted G. (1992) Spiderplots versus Tornado Diagrams for Sensitivity Analysis,
Interfaces, Vol. 22, No. 6, November/December, pp. 40 - 46

Commission of European Communities (1990) Document EUR 5914

Evans, J. Stuart (1991) Strategic Flexibility for High Technology Manoeuvres: A


Conceptual Framework, Journal of Management Studies, Vol. 28, No. 1, pp. 69 -89

Evans, John Stuart (1982) Flexibility in Policy Formation, PhD Thesis, Aston University

Evans, Nigel (1984) The Sizewell Decision: a sensitivity analysis, Energy Economics,
Vol. 6, No. 1, January, pp. 14 - 20

Evans, Nigel (1984) An economic evaluation of the Sizewell decision, Energy Policy,
September, pp. 288 - 295

Evans, Nigel; Chris Hope (1984) Nuclear Power: Future, Costs and Benefits, Cambridge
University Press, Cambridge

Eyre, N.J. (1990) Gaseous Emissions due to Electricity Fuel Cycles in the United
Kingdom, Energy and Environment Paper, No. 1

Fine, Charles H.; Robert M. Freund (1990) Optimal Investment in Product-Flexible


Manufacturing Capacity, Management Science, Vol. 36, No. 4, April, pp. 449 - 466

Foley, Michael; Dan Prepdall (1990) Siting new power plants: challenges for the 1990s,
Electrical World, Vol. 204, No. 10, October, pp. 58 - 60

Ford, Andrew (1985) Short-Lead-Time Technologies as a Defense against Demand


Uncertainty, Strategic Management and Planning for Electric Utilities, pp. 115 - 136

Ford, Andrew; Irving W. Yabroff (1980) Defending Against Uncertainty in the Electric
Utility Industry, Energy Systems and Policy, Vol. 4, No. 1-2, pp. 57 - 98

Ford, Andrew; Michael Bull (1989) Using system dynamics for conservation policy
analysis in the Pacific Northwest, System Dynamics Review, Vol. 5, No. 1, Winter, pp. 1 -
16

Garver, L L; H G Stoll, R S Szczepanski (1976) Impact of Uncertainty on Long-Range


Generation Planning, Proceedings of the American Power Conference, Vol. 38, pp. 1165 -
1174

Gass, Saul I., ed. (1981) Validation and Assessment of Energy Models, Proceedings of a
Symposium held at the National Bureau of Standards, Gaithersburg, Maryland, 19 - 21
May 1980, NBS Special Publication 616

Gellings, Clark W.; Pradeep C. Gupta; Ahmad Faruqui (1985) Strategic Implications of
Demand-Side Planning, Electric Power Research Institute

Geoffrion, Arthur M. (1987) An Introduction to Structured Modeling, Management


Science, Vol. 33, No. 5, May, pp. 547 - 588

Gerking, Harald (1987) Modeling of multi-stage decision-making processes in multi-period


energy models, European Journal of Operational Research, Vol. 32, pp. 191 - 204

Gertler, Meric S. (1988) The limits to flexibility: comments on the Post-Fordist vision of
production and its geography, Transactions : Institute of British Geographers, Vol. 13,
pp. 419 - 432

Gerwin, D. (1982) Do's and Don'ts of Computerised Manufacture, Harvard Business


Review, Vol. 60, March/April, pp. 107 - 116

Gerwin, Donald (1993) Manufacturing Flexibility: A Strategic Perspective, Management


Science, Vol. 39, No. 4, April, pp. 395 - 410

Ghosh, D.; R. Agarwal (1991) Model Selection and Sequencing in Decision Support
Systems, Omega, Vol. 19, No. 2/3, pp. 157 - 167

Goldberg, Michael A.(1989) On Systemic Balance: Flexibility and Stability in Social,
Economic, and Environmental Systems, Praeger Publishers, New York

Goldman, Steven Marc (1974) Flexibility and the Demand for Money, Journal of
Economic Theory, Vol. 9, pp. 203 - 222

Gorenstin, B.G.; N.M. Campodonico, J.P.Costa, M.V.F.Pereira (1991) Power System


Expansion Planning Under Uncertainty, Proceedings of IEEE Winter Meeting

Greenberger, Martin (1981) Humanising Policy Analysis: Confronting the Paradox in


Energy Modeling, in Gass (1981), pp. 25 - 42

Greenhalgh, Geoffrey (1985) Sizewell B: the Central Electricity Generating Board's Case,
Power Tomorrow, Kogan Page

Greenhalgh, Geoffrey (1990) Energy conservation policies, Energy Policy, Vol. 18, No. 3,
April, pp. 293 - 299

Grimston, Malcolm (1993) Issues in the Privatisation of the Electricity Supply Industry,
British Nuclear Industry Forum, presented at the Institute of Energy

Grubb, Michael (1989) The Greenhouse Effect: Negotiating Targets, The Royal Institute
of International Affairs

Gunasekaran, A.; T. Martikainen, P. Yli-Olli (1993) Flexible manufacturing systems: An


investigation for research and applications, European Journal of Operational Research,
Vol. 66, pp. 1 - 26

Gupta, D. and Buzacott, J.A. (1988) A framework for understanding flexibility of


manufacturing systems, Technical Report, University of Waterloo

Gupta, Shiv K.; Jonathan Rosenhead (1968) Robustness in Sequential Investment


Decisions, Management Science, Vol. 15, No. 2, pp. B-18 - B-29

Gupta, Yash P.; Sameer Goyal (1989) Flexibility of manufacturing systems: Concepts and
measurements, Invited Review, European Journal of Operational Research, Vol 43, pp.
119 - 135

Gustavsson, Sten Olof (1984) Flexibility and Productivity in Complex Production
Processes, International Journal of Production Research, Vol. 22, No. 5, pp. 801 - 808

Hankinson, G.A. (1986) Energy Scenarios- The Sizewell Experience, Long Range
Planning, Vol. 19, No. 5, pp. 94 - 101

Hart, A.G. (1937) Anticipations, Business Planning, and the Cycle, Quarterly Journal of
Economics, pp. 272 - 297

Hashimoto, Tsuyoshi (1980) Robustness, Reliability, Resilience and Vulnerability


Criteria for Planning, PhD Thesis, Cornell University

Hashimoto, Tsuyoshi; Daniel P. Loucks; Jery R. Stedinger (1982) Robustness of Water


Resources Systems, Water Resources Research, Vol. 18, No. 1, February, pp. 21 - 26

Hayes, William C. (1989) DSM Demand Side Management: A cornucopia of techniques


and technologies, Electrical World, Vol. 203, No. 2, February, pp. 49 - 56

Heimann, Stephen R.; Edward J. Lusk (1976) Decision Flexibility: An Alternative


Evaluation Criterion, Accounting Review, Vol. 51, pp. 51 - 64

Helm, Dieter and McGowan, Frances (1989) Ch 13 Electricity Supply in Europe: Lessons
for the UK, p. 237- 260, in Helm, Kay, Thompson, ed. The Market for Energy, Clarendon
Press

Helsinki (1991) Proceedings of the Senior Expert Symposium on Electricity and the
Environment, 13 - 17 May, Helsinki, Finland

Henderson, Y. K.; R.W. Kopcke, G.J. Houlihan, N.J. Inman (1988) Planning for New
England's Electricity Requirements, New England Economic Review, January - February,
pp. 3 - 30

Hertz, David B. (1964) Risk Analysis in Capital Investment, Harvard Business Review,
January/February, pp. 95 - 106

Hertz, David; Howard Thomas (1983) Risk Analysis and its Applications, John Wiley and
Sons

Hertz, David; Howard Thomas (1984) Practical Risk Analysis: An Approach through
Case Histories, John Wiley & Sons

Hespos, R.F.; Strassman. P.A. (1965) Stochastic Decision Trees for the Analysis of
Investment Decisions, Management Science, Vol. 11, No. 10, August, pp. B244 - B259

High Performance Systems (1990) ITHINK: the visual thinking tool for the 90s, User's
Guide

Hirshleifer, Jack; John G. Riley (1992) The Analytics of Uncertainty and Information,
Cambridge University Press

Hirst, Eric (1989) Benefits and Costs of Small, Short Lead-Time Power Plants and
Demand-Side Programs in an Era of Load-Growth Uncertainty, March, Oak Ridge
National Laboratory, Tennessee, USA

Hirst, Eric (1990) Benefits and Costs of Flexibility: Short-Lead Time Power Plants, Long
Range Planning, Vol. 23, No. 5, pp. 106 - 115

Hirst, Eric (1990) Flexibility Benefits of Demand-Side Programs in Electric Utility


Planning, The Energy Journal, Vol. 11, No. 1, January, pp. 151 - 163

Hirst, Eric; M. Schweitzer (1990) Electric Utility Resource Planning and Decision
Making: the importance of uncertainty, Risk Analysis, Vol. 10, No. 1, pp.137-146

Hobbs, B.F.; P. Maheshwari (1990) A Decision Analysis of the Effect of Uncertainty Upon Electric Utility Planning, Energy, Vol. 15, No. 9, pp. 785 - 801

Hobbs, Benjamin F.; Jeffrey C. Honious, Joel Bluestein (1992) What's Flexibility Worth?
The Enticing Case of Natural Gas Cofiring, The Electricity Journal, pp. 37 - 47

Hobbs, Benjamin F.; Jeffrey C. Honious; Joel Bluestein (1994) Estimating the Flexibility
of Utility Resource Plans: An Application to Natural Gas Cofiring for SO2 Control, IEEE
Transactions on Power Systems, Vol. 9, No. 1, May, pp. 167 - 173

Hoeller, Peter; Markku Wallin (1991) Energy Prices, Taxes and Carbon Dioxide
Emissions, OECD Economic Studies, No. 17, OECD, Autumn, pp. 91 - 105

Holling, C.S. (1973) Resilience and Stability of Ecological Systems, International Institute
of Applied Systems Analysis

Holmes, Andrew (1987) A Changing Climate: Environmentalism and Its Impact on the
European Energy Industries, Financial Times Business Information

Holmes, Andrew (1992) Conference Report: European Energy Policy - Impact of the Single Market, Energy World, No. 198, May

Horowitz, Ann R.; Ira Horowitz (1976) The Real and Illusory Virtues of Entropy-Based
Measures for Business and Economic Analysis, Decision Sciences, Vol. 7, pp. 121 - 136

Hull, J.C. (1980) The Evaluation of Risk in Business Investment, Pergamon Press, Oxford

Hunt, Sally; Graham Shuttleworth (1993) Forward, option, and spot markets in the UK
Power Pool, Utilities Policy, January, pp. 2 - 8

Huss, W.R.; E.J. Honton (1987) Scenario Planning--What Style Should You Use?, Long
Range Planning , Vol. 20, No. 4, pp. 21 - 29

Hutchinson, G.K.; D. Sinha (1989) A quantification of the value of flexibility, Journal of Manufacturing Systems, Vol. 8, No. 1, pp. 47 - 57

IAEA (1984) Expansion Planning For Electrical Generating Systems: A Guidebook, Technical Reports Series No. 241, International Atomic Energy Agency, Vienna

IEA (1987) International Energy Agency Statistics: Energy Prices and Taxes

IEA (1991) International Energy Agency Statistics: Energy Prices and Taxes, 3rd quarter
1991

Iman, Ronald L.; Jon C. Helton (1988) An Investigation of Uncertainty and Sensitivity
Analysis Techniques for Computer Models, Risk Analysis, Vol. 8, No. 1, pp. 71 - 90

Jones, P.M.S.; G. Woite (1990) Cost of nuclear and conventional baseload electricity
generation, IAEA Bulletin, March

Jones, Ian (1989) Risk Analysis and Optimal Investment in the Electricity Supply Industry,
(reprinted from 1986 article in Applied Economics), in Helm, Kay, Thompson, eds., The
Market for Energy, Clarendon Press, Chapman and Hall, pp. 214 - 236

Jones, Robert A.; Joseph M. Ostroy (1984) Flexibility and Uncertainty, Review of
Economic Studies, Vol. 51, pp. 13 - 32

Joskow, P.L.; Schmalensee, R. (1983) Markets for Power: An Analysis of Electric Utility Deregulation, MIT Press, Cambridge, Mass.

Kahn, Herman; A.J. Wiener (1967) The Year 2000, MacMillan, London

Kaufmann, Robert K. (1991) Limits on the Economic Effectiveness of a Carbon Tax, Energy Journal, Vol. 12, No. 4, pp. 139 - 144

Keeney, Ralph L. (1982) Decision Analysis: An Overview, Operations Research, Vol. 30,
pp. 803 - 838

Keeney, Ralph L.; Alan Sicherman (1983) Illustrative Comparison of One Utility's Coal
and Nuclear Choices, Operations Research, Vol. 31, No. 1, pp. 50 - 83

Keeney, Ralph L.; John F. Lathrop, Alan Sicherman (1986) An Analysis of Baltimore Gas and Electric Company's Technology Choice, Operations Research, Vol. 34, No. 1, January/February, pp. 18 - 39

Keller, L. Robin; Joanna L. Ho (1988) Decision Problem Structuring: Generating Options, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 18, No. 5, September-October, pp. 715 - 728

Kendall, C. (1969) Introduction to model building and decision problems, Mathematical Model Building in Economics and Industry, Griffin, First Series

Klein, Burton H. (1984) Prices, Wages, and Business Cycles: A Dynamic Theory,
Pergamon, New York

Knight, Frank H. (1921) Risk, Uncertainty, and Profit, Houghton Mifflin Company, Boston

Kogut, Bruce; Nalin Kulatilaka (1994) Operating Flexibility, Global Manufacturing, and
the Option Value of a Multinational Network, Management Science, Vol. 40, No. 1,
January, pp. 123 - 139

Krautmann, Anthony C.; John L. Solow (1988) Economies of Scale in Nuclear Power
Generation, Southern Economic Journal, Vol. 55, No. 1, July, pp. 70-85

Kreczko, Adam; Nigel Evans, Chris Hope (1987) A decision analysis of the commercial
demonstration fast reactor, Energy Policy, Vol. 15, No. 4, August, pp. 303 - 314

Kreps, David M. (1979) A Representation Theorem for Preference for Flexibility, Econometrica, Vol. 47, No. 3, pp. 565 - 577

Kulatilaka, Nalin (1988) Valuing the Flexibility of Flexible Manufacturing Systems, IEEE
Transactions on Engineering Management, November, Vol. 35, No. 4, pp. 250 - 257

Kumar, Vinod (1986) On Measurement of Flexibility in Flexible Manufacturing Systems: An Information-Theoretic Approach, Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, edited by K.E. Stecke and R. Suri, Elsevier Science Publishers B.V., Amsterdam, pp. 131 - 143

Kumar, Vinod (1987) Entropic measures of manufacturing flexibility, International Journal of Production Research, Vol. 25, No. 7, pp. 957 - 966

Kydes, Andy S.; Lewis Rubin (1981) Workshop Report 1: Composite Model Validation, in
Gass, ed (1981), pp. 139 - 148

Layfield, Frank (1987) Sizewell B Public Inquiry, Department of Energy, HMSO

Leggett, Jeremy, ed (1990) Global Warming, the Green Peace Report, Oxford University
Press

Lendaris, George G. (1980) Structural Modelling---A Tutorial Guide, IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-10, No. 12, pp. 807 - 840

Levy, D. (1989) The Role of Models in the Planning Process, the Experience of Electricité
de France, Proceedings of the Workshop on Resource Planning Under Uncertainty for
Electric Power Systems, Stanford University

Linstone, H.A. (1984) Multiple Perspectives for Decision Making: Bridging the Gap
between Analysis and Action, North-Holland, New York

Lo, E.O.; R. Campo, F. Ma (1987) Decision Framework For New Technologies: A Tool
For Strategic Planning Of Electric Utilities, IEEE Transactions on Power Systems, Vol.
PWRS-2, No. 4, November, pp. 959 - 967

Maddala, G.S. (A.L. Fletcher & Associates) (1980) Cost Uncertainty in Programming
Models of Electricity Supply, EPRI EA-1636 Research Project 1220-4, Electric Power
Research Institute

Mandelbaum, Marvin (1978) Flexibility and Decision Making: An Exploration and Unification, PhD Thesis, University of Toronto

Mandelbaum, Marvin; John Buzacott (1990) Flexibility and decision making, European
Journal of Operational Research, Vol. 44, pp. 17 - 27

Mankki, Pirjo (1987) Electric Utility Generation Expansion Planning Under Uncertainty,
Proceedings of the Ninth Power Systems Computation Conference, Cascais, Portugal,
Butterworths

Markandya, A. (1990) Environmental Costs and Power Systems Planning, Utilities Policy,
Vol. 1, No. 1, pp. 13 - 27

Markowitz, H.M. (1952) Portfolio Selection, Journal of Finance, Vol. 7, pp. 77 - 91

Marschak, Thomas; Richard Nelson (1962) Flexibility, Uncertainty, and Economic Theory, Metroeconomica, Vol. 14, pp. 42 - 58

Mascarenhas, Briance (1981) Planning for Flexibility, Long Range Planning, Vol. 14, No.
5, pp. 78 - 82

Massé, P.; R. Gibrat (1957) Application of Linear Programming to Investments in the Electric Power Industry, Management Science, Vol. 3, pp. 149 - 166

McInnes, Genevieve; Erich Unterwurzacher (1991) Electricity end-use efficiency, Energy
Policy, Vol. 19, No. 3, April, pp. 208 - 216

McKay, Michael D.; Richard J. Beckman; Leslie M. Moore; and Richard R. Picard (1992)
An Alternative View of Sensitivity in the Analysis of Computer Codes, Proceedings of the
American Statistical Association Section on Physical and Engineering Sciences, Boston,
August 9 - 13

McNamara, John R. (1976) A linear programming model for long-range capacity planning
in an electric utility, Journal of Economics and Business, Vol. 28, pp. 227 - 235

Merkhofer, M.W. (1975) Flexibility and Decision Analysis, PhD Thesis, Stanford
University

Merkhofer, M.W. (1977) The Value of Information given Decision Flexibility, Management Science, Vol. 23, No. 7, March, pp. 716 - 727

Merrill, H.M.; F.C. Schweppe (1984) Strategic Planning for Electric Utilities: Problems
and Analytic Methods, Interfaces, Vol. 14, No 1, January/February, pp. 72 - 83

Merrill, Hyde (1983) Cogeneration - A Strategic Evaluation, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-102, No. 2, February, pp. 463 - 471

Merrill, Hyde M.; Allen J. Wood (1991) Risk and Uncertainty in Power System Planning,
Electrical Power & Energy Systems, Vol. 13, No. 2, pp. 81 - 90

Merrill, Hyde; Fred Schweppe, David White; Dimitrios Aperjis, Matthew Mettler (1982)
Energy Strategy Planning for Electric Utilities Part I, SMARTE Methodology and Part II,
Case Study, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 2,
February, pp. 340 - 355

Miller, Louis W.; Norman Katz (1986) A Model Management System to Support Policy
Analysis, Decision Support Systems, Elsevier Science Publishers B.V., Vol. 2, pp. 55 - 63

Mitnick, Stephen A. (1992) To Scrub or Not to Scrub: the hidden risks of inflexibility,
The Electricity Journal, pp. 44 - 49

Mobasheri, Fred; Lowell H. Orren; Fereidoon P. Sioshansi (1989) Scenario Planning at
Southern California Edison, Interfaces, Vol. 19, No. 5, September/October, pp. 31 - 44

Modiano, Eduardo Marco (1987) Derived Demand and Capacity Planning Under Uncertainty, Operations Research, Vol. 35, No. 2, March/April, pp. 185 - 197

Montenegro, J.L.A. (1978) Planning Telecommunication Systems: A Normative Approach, PhD Thesis, University of Pennsylvania

Moore, P.G.; H. Thomas (1973) Measurement Problems in Decision Analysis, Journal of Management Studies, Vol. 10, pp. 168 - 193

Morgan, M. Granger; Max Henrion (1992) Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press

Morris, W. (1967) On the Art of Modelling, Management Science, Vol. 13, No. 12, pp.
B707 - B717

Mulvey, John M. (1979) Strategies in Modeling: A Personnel Scheduling Example, Interfaces, Vol. 9, No. 3, May, pp. 66 - 77

Munasinghe, Mohan (1990) Energy Analysis and Policy: energy in developing countries,
World Bank

National Power (1992) Annual Review and Summary Financial Statement

Nicholson, T.A.J. (1971) Optimization in Industry, Vol. 2 Applications, T&A Constable, Ltd, Edinburgh

Nuclear Forum (1992) Nuclear Forum: The News Magazine of the British Nuclear
Forum, August/September

O'Brien, F.A.; R.G. Dyson, C. Morris (1992) Multiple Scenario Analysis: What has Probability Distribution Theory to Offer?, No. 49, Warwick Business School Research Bureau

OECD/NEA (1989) Projected Costs of Generating Electricity from Power Stations for
Commissioning in the Period 1995-2000, OECD/NEA, IEA

OFFER (1992) Review of Pool Prices, December

Ontario Hydro (1989) Environmental Analysis, Providing the Balance of Power

Orr, Daniel (1967) Chapter 13 Capital Flexibility and Long Run Cost Under Stationary
Uncertainty, in M. Shubik, Studies in Mathematical Economics, pp. 171 - 187

Ottinger, Richard; Nicholas Robinson, David Wooley, David Hodas (1991) Incorporating
the cost of protecting the environment into decisions about electric power, Perspectives in
Energy, Vol. 1, pp. 95 - 114

Ottinger, R.L.; D. Wooley, N. Robinson, D. Hodas, S. Babb (1990) Environmental Costs of Electricity, Pace University, Oceana Publications

Palisade Corporation (1992) @RISK: Risk Analysis and Simulation Add-In for Microsoft
Excel

Paraskevopoulos, Dimitris; Elias Karakitsos; Berk Rustem (1991) Robust Capacity Planning Under Uncertainty, Management Science, Vol. 37, No. 7, July, pp. 787 - 800

Paribas (1990) The UK Electricity Privatisation, July, Paribas Capital Markets Group,
International Equity Research

Peck, Stephen C.; Deborah K. Bosch, John P. Weyant (1988) Industrial Energy Demand:
A Simple Structural Approach, Resources and Energy, Vol. 10, January, pp. 111 - 133

Peerenboom, James P.; William A. Buehring; Timothy W. Joseph (1989) Selecting a Portfolio of Environmental Programs for A Synthetic Fuels Facility, Operations Research, Vol. 37, No. 5, September-October, pp. 689 - 699

Pollert, Anna (1991) Farewell to Flexibility?, Basil Blackwell Ltd.

PowerGen (1992) PowerGen plc Annual Review

Price, Terence (1990) Political Electricity: What Future for Nuclear Energy?, Oxford
Universty Press

Pye, Roger (1978) A Formal, Decision-Theoretic Approach to Flexibility and Robustness, Journal of the Operational Research Society, Vol. 29, No. 3, pp. 215 - 227

Raiffa, Howard (1968) Decision Analysis, Addison Wesley

Rappaport, A. (1967) Sensitivity Analysis in Decision Making, Accounting Review, Vol. 42, pp. 441 - 456

Reisman, Arnold (1987) Some Thoughts for Model Builders in the Management and Social
Sciences, Interfaces, Vol. 17, No. 5, pp. 114 - 120

Reuters (1994) Reuters Business News

Richardson, G.P.; A.L. Pugh (1981) Introduction to System Dynamics Modelling with
DYNAMO, Cambridge, Massachusetts, MIT Press

Richter, K. (1990) A Dynamic Energy Production Model, Engineering Costs and Production Economics, Vol. 19, January, pp. 375 - 378

Röller, L.-H.; M. Tombak (1990) Strategic Choice of Flexible Production Technologies and Welfare Implications, Journal of Industrial Economics, Vol. 38, pp. 417 - 431

Rosenhead, Jonathan (1980) Planning Under Uncertainty: I. The Inflexibility of Methodologies, Journal of the Operational Research Society, Vol. 31, No. 3, pp. 209 - 216

Rosenhead, Jonathan (1980) Planning Under Uncertainty: II. A Methodology for Robustness Analysis, Journal of the Operational Research Society, Vol. 31, No. 4, pp. 331 - 341

Rosenhead, Jonathan (1989) Chapter 8 Robustness Analysis: Keeping Your Options Open, Rational Analysis for a Problematic World, John Wiley and Sons Ltd, pp. 193 - 218

Rosenhead, Jonathan; Martin Elton; Shiv K. Gupta (1972) Robustness and Optimality as
Criteria for Strategic Decisions, Operational Research Quarterly, Vol. 23, No. 4, pp. 413
- 431

Rath-Nagel, Stefan; Stocks, Kenneth (1982) Energy Modelling for Technology Assessment, the MARKAL Approach, OMEGA, Vol. 10, No. 5, pp. 493 - 505

SCE (1992) Twelve Scenarios for Southern California Edison, Planning Review, Vol. 20,
No. 3, pp. 30 - 37

Schaeffer, Peter V.; Louis J. Cherene (1989) The inclusion of spinning reserves in
investment and simulation models for electricity generation, European Journal of
Operational Research, Vol. 42, pp. 178 - 189

Schneeweiß, Christoph; Martin Kühn (1990) Zur Definition und gegenseitigen Abgrenzung der Begriffe Flexibilität, Elastizität und Robustheit [On the definition and mutual delimitation of the concepts of flexibility, elasticity, and robustness], Zeitschrift für Betriebswirtschaftslehre, Vol. 42, May, pp. 378 - 395

Schroeder, Christopher H.; Lyna L. Wiggins, and Daniel T. Wormhoudt (1981) Flexibility
of scale in large conventional coal fired power plants, Energy Policy, Vol. 9, No. 2, June,
pp. 127 - 135

Schweppe, Fred C.; Hyde M. Merrill; William J. Burke (1989) Least Cost Planning:
Issues and Methods, Proceedings of the IEEE, Vol. 77, No. 6, pp. 899 - 907

Senge, Peter M. (1990) The Fifth Discipline, Doubleday/Currency

Sengupta, Jati K. (1991) Robust Solutions in Stochastic Linear Programming, Journal of the Operational Research Society, Vol. 42, No. 10, pp. 857 - 870

Sethi, A.K.; S.P. Sethi (1990) Flexibility in Manufacturing: A Survey, International Journal of Flexible Manufacturing Systems, Vol. 2, No. 4, pp. 289 - 328

Sherali, H.D.; A.L. Soyster, F.H. Murphy, S. Sen (1984) Intertemporal Allocation of
Capital Costs in Electric Utility Capacity Expansion Planning under Uncertainty,
Management Science, Vol. 30, No. 1, pp. 1 - 19

Sim, Steven R. (1991) How Environmental Costs Impact DSM, Public Utilities
Fortnightly, Vol. 128, No. 1, July 1st, pp. 24 - 27

Simon, H.A. (1964) On the concept of organisational goal, Administrative Science Quarterly, Vol. 9, pp. 1 - 22

Slack, N. (1983) Flexibility as a manufacturing objective, International Journal of Operations & Production Management, Vol. 3, No. 3, pp. 4 - 13

Slack, Nigel (1988) Manufacturing systems flexibility - an assessment procedure,
Computer-Integrated Manufacturing Systems, Vol. 1, No. 1, February, pp. 25 - 31

Smith, James E.; Robert F. Nau (1992) Valuing Risky Projects: Option Pricing Theory
and Decision Analysis, Duke University, Fuqua School of Business, Working Paper #9201

Son, Y.K.; C.S. Park (1987) Economic measure of productivity, quality, and flexibility in
advanced manufacturing systems, Journal of Manufacturing Systems, Vol. 6, No. 3, pp.
193 - 206

Soyster, A.L. (1980) Literature Review on Capacity Expansion Planning under Uncertainty, Systemetric Inc. report DOE/TIE-11177, Blacksburg, Virginia

Sprague, Ralph H. Jr; Eric D. Carlson (1982) Building Effective Decision Support
Systems, Prentice-Hall Inc., New Jersey

Stevenson, Howard H. (1985) The Heart of Entrepreneurship, Harvard Business Review, Vol. 63, No. 2, March/April, pp. 85 - 94

Stigler, George (1939) Production and Distribution in the Short Run, The Journal of
Political Economy, Vol. 47, No. 3, pp. 305 - 327

Stirling, Andrew (1994) Diversity and ignorance in electricity supply investment: Addressing the solution rather than the problem, Energy Policy, March, pp. 195 - 216

Stoll, Harry G. (1989) Least-Cost Electric Utility Planning, John Wiley & Sons

Strangert, Per (1977) Adaptive Planning and Uncertainty Resolution, Futures, Vol. 9, No.
1, February, pp. 32 - 44

Sullivan, William G.; Wayne Claycombe (1977) Decision Analysis of Capacity Expansion
Strategies for Electrical Utilities, IEEE Transactions on Engineering Management, Vol.
EM-24, No. 4, November, pp. 139 - 144

Thomas, Howard (1972) Decision Theory and the Manager, Pitman Publishing

Thomas, Howard; Danny Samson (1986) Subjective Aspects of the Art of Decision Analysis: Exploring the Role of Decision Analysis in Decision Structuring, Decision Support and Policy Dialogue, Journal of the Operational Research Society, Vol. 37, No. 3, March, pp. 249 - 265

Thompson, S. Daniel; Wayne J. Davis (1990) An Integrated Approach for Modeling Uncertainty in Aggregate Production Planning, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 20, No. 5, September/October, pp. 1000 - 1012

Tomkins, Robert (1991) Options Explained, Stockton Press

Triantis, Alexander J.; James E. Hodder (1990) Valuing Flexibility as a Complex Option, The Journal of Finance, Vol. XLV, No. 2, June, pp. 549 - 565

Trigeorgis, Lenos; Scott P. Mason (1987) Valuing Managerial Flexibility, Midland Corporate Finance Journal, Vol. 5, No. 1, Spring, pp. 14 - 21

UBS Phillips and Drew (1990) The Electricity Industry in England and Wales, August,
UBS Phillips and Drew Global Research Group

UBS Phillips and Drew (1991) Electricity Research: Investing in Electricity, 31 October

UBS Phillips and Drew (1992) National Power and PowerGen: An Offer Refused, 30
January

Ullrich, Maureen F. (1980) Whatever Happened to the Work Ethic? Motivating Employees
in a Changing Society, Montana Business Quarterly, Vol. 18, No. 1, Spring, pp. 14 - 17

UNIPEDE (1988) Electricity Generation Costs: Assessments Made in 1987 for Stations to be Commissioned in 1995, Sorrento Congress, May 30 - June 3

Upton, David M. (1994) The Management of Manufacturing Flexibility, California Management Review, Winter, Vol. 36, No. 2, pp. 72 - 89

Verter, Vedat; M. Cemal Dincer (1992) Invited Review: An integrated evaluation of facility location, capacity acquisition, and technology selection for designing global manufacturing strategies, European Journal of Operational Research, Vol. 60, pp. 1 - 18

Vickers, John; George Yarrow (1991) Reform of the electricity supply industry in Britain,
An assessment of the development of public policy, European Economic Review, Vol. 35,
pp. 485 - 495

Virdis, Maria R.; Michael Rieber (1991) The Cost of Switching Electricity Generation
from Coal to Nuclear Fuel, Energy Journal, Vol. 12, No. 2, pp. 109 - 134

Vlahos, Kiriakos (1990) Capacity Planning in the Electricity Supply Industry, PhD
Thesis, London Business School

Vlahos, Kiriakos (1991) ECAP: The Electricity Capacity Planning Model User's Manual

Vlahos, Kiriakos; Derek Bunn (1988a) Electricity Capacity Planning Using Mathematical
Decomposition, Electricity Planning Project Research Paper Series, May, London
Business School

Vlahos, Kiriakos; Derek Bunn (1988b) Large Scale Evaluation of Benders Decomposition
for Capacity Expansion Planning in the Electricity Supply Industry, Electricity Planning
Project Research Paper Series, June, London Business School

Ward, S.C. (1989) Arguments for Constructively Simple Models, Journal of the
Operational Research Society, Vol. 40, No. 2, pp. 141 - 153

Watson, Stephen R.; Dennis M. Buede (1987) Decision Synthesis: the principles and
practice of decision analysis, Cambridge University Press, Cambridge

White, D.J. (1969) Viewpoint: Operational Research and Entropy, Operational Research
Quarterly, Vol. 20, No. 1, pp. 126 - 127

White, D.J. (1970) Viewpoint: The Use of the Concept of Entropy in System Modelling,
Operational Research Quarterly, Vol. 21, No. 2, pp. 279 - 281

White, D.J. (1975) Entropy and Decision, Operational Research Quarterly, Vol. 26, No.
1, pp. 15 - 23

Williams, Simon (1990) The Electricity Handbook, June, Kleinwort Benson Securities

Wilson, A.G. (1970) The Use of the Concept of Entropy in System Modelling,
Operational Research Quarterly, Vol. 21, No. 2, pp. 247 - 265

Yamayee, Zia A.; Hossein Hakimmashhadi (1984) A Flexible Generation Planning Approach Recognizing Long Term Load Growth Uncertainty, IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 8, pp. 1990 - 1996

Yu, Oliver S.; Hung-po Chao (1989) Electric Utility Planning in a Changing Business
Environment: Past Trends and Future Challenges, Proceedings of the Workshop on
Resource Planning Under Uncertainty for Electric Power Systems, Stanford University,
California

Zelenovic, Dragutin M. (1982) Flexibility --- a condition for effective production systems,
International Journal of Production Research, Vol. 20, No. 3, pp. 319 - 337
