JULIAN J. BOMMER
Department of Civil and Environmental Engineering,
Imperial College, London SW7 2BU, UK
Deterministic and probabilistic seismic hazard assessment are frequently represented
as irreconcilably different approaches to the problem of calculating earthquake ground
motions for design, each method fervently defended by its proponents. This situation
often gives the impression that the selection of either a deterministic or a probabilistic
approach is the most fundamental choice in performing a seismic hazard assessment.
The dichotomy between the two approaches is not as pronounced as often implied and
there are many examples of hazard assessments combining elements of both methods.
Insistence on the fundamental division between the deterministic and probabilistic approaches is an obstacle to the development of the most appropriate method of assessment
in a particular case. It is neither possible nor useful to establish an approach to seismic
hazard assessment that will be the ideal tool for all situations. The approach in each
study should be chosen according to the nature of the project and also be calibrated
to the seismicity of the region under study, including the quantity and quality of the
data available to characterise the seismicity. Seismic hazard assessment should continue
to evolve, unfettered by almost ideological allegiance to particular approaches, with the
understanding of earthquake processes.
Keywords: Seismic hazard; probabilistic seismic hazard assessment; deterministic seismic
hazard assessment; seismic risk.
1. Introduction
Seismic hazard could be defined, in the most general sense, as the possibility of
potentially destructive earthquake effects occurring at a particular location. With
the exception of surface fault rupture and tsunami, all the destructive effects of
earthquakes are directly related to the ground shaking induced by the passage
of seismic waves. Textbooks that present guidance on how to assess the hazard of
strong ground-motions invariably present the fundamental choice facing the analyst
as that between adopting a deterministic or probabilistic approach [e.g. Reiter,
1990; Krinitzsky et al., 1993; Kramer, 1996]. Statements made by proponents of
the two approaches often imply very serious differences between deterministic and
probabilistic seismic hazard assessment and reinforce the idea that the choice between them is one of the most important steps in the process of hazard assessment.
This paper aims to show that this apparently diametric split between the two approaches is misleading and, more importantly, that it is not helpful to those faced
with the problem of assessing the hazard presented by earthquake ground motions
at a site.
[Footnote: The reader is referred, for example, to the acknowledgements in the paper by Krinitzsky [1998] and the response by Hanks and Cornell [2001].]
the rate at which different levels of ground motion occur at the site is calculated.
The design values of motion are then those having a particular annual frequency of
occurrence.
Common to both approaches is the very fundamental, and highly problematic,
issue of identifying potential sources of earthquakes. Another common feature is
the modelling of the ground motion through the use of attenuation relationships,
more correctly called ground-motion prediction equations [D. M. Boore, written
communication]. The principal difference in the two procedures, as described above,
resides in those steps of PSHA that are related to characterising the rate at which
earthquakes and particular levels of ground motion occur. Hanks and Cornell [2001]
point out that the two approaches have far more in common than they do in
differences and that in fact the only difference is that a PSHA has units of time
and DSHA does not. This is indeed a very fundamental distinction between the
two approaches as currently practised: in a DSHA the hazard will be defined as
the ground motion at the site resulting from the controlling earthquake, whereas
in PSHA the hazard is defined as the mean rate of exceedance of some chosen
ground-motion amplitude [Hanks and Cornell, 2001]. At this point it is useful to
briefly define the different terms used to characterise probabilistic seismic hazard:
the return period of a particular ground motion, Tr(Y), is simply the reciprocal
of the annual rate of exceedance. For a specified design life, L, of a project, the
probability of exceedance of the level of ground motion (assuming that a Poisson
model is adopted) is given by:

q = 1 - e^(-L/Tr).  (1)
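Equation (1) and its inverse can be sketched in a few lines of Python (an illustration added here, not part of the original paper):

```python
import math

def exceedance_probability(design_life, return_period):
    """Poisson probability (Eq. 1) that the motion with the given return
    period is exceeded at least once during the design life (both in years)."""
    return 1.0 - math.exp(-design_life / return_period)

def return_period_for(design_life, q):
    """Invert Eq. (1): the return period whose exceedance probability
    over the design life equals q."""
    return -design_life / math.log(1.0 - q)

# The conventional code criterion of 10% in 50 years corresponds to a
# return period of about 475 years:
print(return_period_for(50, 0.10))       # ~474.6
print(exceedance_probability(50, 475))   # ~0.0999
```

The near-identity of 474.6 and 475 years is why the 475-year return period and the "10% in 50 years" criterion are used interchangeably.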
[Footnote: ...the joke that if there are three Trotskyists gathered in a room there will be four political parties.]
Fig. 1. Seismic hazard curves obtained for San Salvador (El Salvador) using the median values
from the attenuation relationship and including the integration across the scatter [Bommer et al.,
2000].
Fig. 2. Relationship between magnitude and recurrence intervals for earthquakes of M 7.1 and
greater in the Mexican subduction zone [Hong and Rosenblueth, 1988].
Fig. 3. Contours of MMI > VII for upper-crustal earthquakes in and around San Salvador during
the last three centuries [Harlow et al., 1993].
curve provided a good fit to the recorded values, this would vindicate PSHA. In
the meantime, it is not actually possible to validate the results of a probabilistic
seismic hazard assessment.
The characteristic earthquake model, in which major faults produce large earthquakes of similar characteristics at more or less constant intervals, is clearly not
consistent with the idea that hazard varies gradually with time. The model was
originally proposed by Singh et al. [1983] for major events in the Mexican subduction zone, where earthquakes of magnitude above 8 occur quasi-periodically and
there is an absence of activity in the magnitude range 7.4 to 8.0 (Fig. 2). The concept of characteristic earthquakes has been subsequently applied to major crustal
faults and reinforced by paleoseismology [Schwartz and Coppersmith, 1984]. For a
given site affected by a single source with a characteristic earthquake, the ground
motion expected at the site will clearly jump by a certain increment when the
characteristic earthquake occurs.
The concept of characteristic earthquakes may not be limited, however, to large
events on plate boundaries and major faults. Main [1987] identified characteristic
earthquakes of magnitude about 4.6 in the sequence preceding the 1980 eruption
of Mount St. Helens. Another example may be the upper-crustal seismicity along
the Central American volcanic chain [White and Harlow, 1993]. These destructive
events occur at irregular intervals, often in clusters, around the main volcanic centres (Fig. 3). Recurrence relationships derived for the entire volcanic chain clearly
indicate a bilinear behaviour, reminiscent of the characteristic model (Fig. 4). The
recurrence of these destructive events, sometimes with almost identical locations
and characteristics [Ambraseys et al., 2001], obviously suggests itself for a deterministic treatment, as has been done for example in the microzonation study of San
Salvador [Faccioli et al., 1988].
Fig. 4. Magnitude-frequency relationships for the Central American volcanic chain using the seismic catalogue for different periods: clockwise from top left 1898-1994, 1931-1994 and 1964-1994 [Bommer et al., 1998].
events, hence a probabilistic approach may help to select an appropriate source-to-site distance (although it cannot guarantee that the distance for any site in the
next earthquake will not actually be shorter). The source-to-site distance of the
hazard-consistent scenario has indeed been found to decrease as the return period
grows, but the main effect of going to lower and lower probabilities of exceedance
is just to add more and more increments of standard deviation to the expected
motions from a typical earthquake scenario [Bommer et al., 2000].
3.3. The issue of uncertainty
All seismic hazard assessment, based on our current knowledge of earthquake processes and strong-motion generation, must inevitably deal with very considerable
uncertainties. Fundamental amongst these uncertainties are the location and magnitude of future earthquakes: PSHA integrates all possible combinations of these
parameters while DSHA assumes the most unfavourable combinations.
Another very important element of the uncertainty is that associated with the
scatter inherent to strong-motion scaling relationships, which again is treated differently in the two methods. PSHA generally now includes integration across the
scatter in the attenuation relationship as part of the calculations, although this has
not always been the case: the US hazard study by Algermissen et al. [1982] is based
on median values. In DSHA, as noted previously, the scatter is either ignored, by
using median values, or accounted for by the addition of one standard deviation to
the median values of ground motion. Both approaches have shortcomings, discussed
in the next section and also in Sec. 5.3, but it is also important to note here that
the different treatments of scatter in the two methods call into question the validity
of the comparisons that are sometimes made between the two. In particular, the use of
the deterministic map to cap the ground motions calculated probabilistically in the
NEHRP MCE maps, discussed in Sec. 2.4, ignores completely the fact that the deterministic motions are based on median values while the probabilistic values may, for a 2475-year return period, be related to values more than 1.5 standard deviations above the median [Bommer et al., 2000]. Like is not being compared
with like.
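The difference between integrating across the scatter and using median values only can be illustrated with a small sketch (all numbers invented for illustration; in a full PSHA the probability below would additionally be weighted by the rates of the contributing scenarios):

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_exceed(y, median, sigma_ln):
    """P(Y > y | scenario) for a lognormally distributed ground-motion
    parameter with the given median and logarithmic standard deviation."""
    eps = (math.log(y) - math.log(median)) / sigma_ln
    return 1.0 - normal_cdf(eps)

def prob_exceed_median_only(y, median):
    """Median-only treatment: scatter ignored, exceedance is a step function."""
    return 1.0 if y < median else 0.0

# Illustrative scenario: median PGA 0.2 g, sigma_ln = 0.6. A motion of
# 0.4 g is "impossible" in the median-only treatment but retains an
# appreciable exceedance probability once the scatter is included.
p_with_scatter = prob_exceed(0.4, 0.2, 0.6)        # roughly 0.12
p_median_only = prob_exceed_median_only(0.4, 0.2)  # 0.0
```

This is why hazard curves computed with and without the scatter diverge increasingly at low probabilities, as in Fig. 1.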
An important development in the understanding of the nature of uncertainty in
strong-motion scaling relationships is the distinction between epistemic and aleatory
uncertainty. In very simple terms, the epistemic uncertainty is due to incomplete
data and knowledge regarding the earthquake process and the aleatory uncertainty
is related to the unpredictable nature of future earthquakes [e.g. Anderson and
Brune, 1999]; a more complete definition of epistemic and aleatory uncertainty is
provided by Toro et al. [1997]. A major component of the uncertainty is due to all
of the parameters that are currently not included in simple strong-motion scaling
relationships, whose inclusion would reduce the scatter. Consider equations for duration based on magnitude, distance and site classification: additional parameters
that could be included to reduce the scatter are those related to the directivity
[Somerville et al., 1997] and velocity of rupture, and degree to which the rupture is
unilateral or bilateral [Bommer and Martinez-Pereira, 1999]. This would reduce the
scatter in the regressions to determine the equations, but it would not necessarily
reduce the uncertainty associated with estimates of future ground motions because
the point of rupture initiation, and the direction and speed of its propagation, are
currently almost impossible to estimate for future events. The reduction of the
uncertainty in the strong-motion scaling relationship has simply transferred it to
the uncertainty in the other parameters of the hazard model. PSHA would then
handle this by considering more and more scenarios to cover all of the possible variations of each feature, whereas DSHA would simply assume their least favourable
combination.
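The point that refining the scaling relationship transfers rather than removes uncertainty can be made with simple variance arithmetic (all values below are hypothetical):

```python
import math

# Hypothetical total aleatory scatter (natural-log units) of a simple
# strong-motion scaling relationship:
sigma_total = 0.50

# Suppose adding a directivity-related predictor z (regression coefficient b,
# variance var_z across the data set) absorbs part of that scatter:
b, var_z = 0.8, 0.09
sigma_reduced = math.sqrt(sigma_total**2 - b**2 * var_z)  # regression scatter drops

# For a future earthquake z is unknown, so its variability re-enters the
# predictive uncertainty and the apparent gain disappears:
sigma_predictive = math.sqrt(sigma_reduced**2 + b**2 * var_z)
```

Under the simplifying assumption that z is independent of the other predictors, sigma_predictive equals sigma_total exactly: the uncertainty has moved from the equation into the scenario definition.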
3.4. Acceleration time-history representation of seismic hazard
The most complete representation of the ground-shaking hazard is an acceleration
time-history. The specifications for selecting or generating accelerograms in current
seismic design codes are generally unworkable, largely because of the representation
of the basic design seismic actions in terms of probabilistic maps and uniform hazard
spectra [Bommer and Ruggeri, 2002].
The requirement of dynamic analysis for critical, irregular and high ductility
structures and the need to define appropriate input has been one of the main motivations for the development of deaggregation techniques such as Kameda and
Nojima [1988], Hwang and Huo [1994], Chapman [1995] and McGuire [1995], the
last of which has been widely adopted into practice. Since the PSHA calculations
include integration across all possible combinations of magnitude (M) and distance
(R) and also across the scatter in the strong-motion relationships, the deaggregation must define the earthquake scenario in terms of M, R and ε, the number of
standard deviations above the median. Bommer et al. [2000] have shown that if a
single seismogenic source is considered, it is possible to define the hazard-consistent
scenario almost exactly rather than as bins of M, R and ε values. Performing
the deaggregations of hazard determined with and without integration across the
scatter reveals interesting features of the way PSHA treats the uncertainty: for a
100 000-year return period, the scenario without scatter is an earthquake of Ms 6.6
at 2 km from the site; with scatter, the scenario becomes an Ms 6.3 earthquake
at 6 km, together with 2.7 standard deviations above the median. The implications of this for hazard assessment in general are discussed in Sec. 5.4 but here the
interest is to consider the implications for the selection of hazard-consistent real
accelerograms for engineering design. In the first case, records could be searched
that approximately match the M-R pair of 6.6 and 2 km; if a sufficient number of accelerograms were found, the uncertainty would be represented by their own scatter, whereas if only a few records were found they could be scaled to account for the uncertainty. That the scatter is inherent in selected suites of accelerograms is reflected
by specifications such as those in the 1999 AASHTO Guidelines for Base-Isolated
Bridges: if three records are used in dynamic analysis, the maximum response values are used for design, whereas if more than seven are employed, it is permitted
to use the average response. In the case of the second earthquake scenario, one is
faced with the almost impossible task of finding records that approximately match
the M-R pair of 6.3 and 6 km, for which all ground-motion parameters are simultaneously almost three standard deviations above the median for this M-R scenario.
This problem can be overcome by carrying out the hazard assessment considering
the joint probabilities of different strong-motion parameters [Bazzurro, 1998] but
such techniques are some way from being adopted into general practice.
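The mechanics of a deaggregation of this kind can be sketched as follows (bin values and rates are invented; real deaggregations use many more bins):

```python
# Hypothetical contributions of (M, R, epsilon) bins to the total annual
# rate of exceedance of a target ground motion:
bins = {
    # (magnitude, distance in km, epsilon): annual exceedance rate
    (6.3, 6.0, 2.7): 4.0e-6,
    (6.6, 2.0, 0.0): 3.0e-6,
    (7.0, 25.0, 1.5): 2.0e-6,
    (5.8, 10.0, 2.0): 1.0e-6,
}

total_rate = sum(bins.values())
# Fractional contribution of each bin to the hazard:
contributions = {k: rate / total_rate for k, rate in bins.items()}
# The modal scenario is the bin contributing most to the exceedance rate:
modal_scenario = max(contributions, key=contributions.get)
# Mean magnitude of the deaggregation:
mean_magnitude = sum(m * w for (m, r, eps), w in contributions.items())
```

The modal or mean (M, R, ε) triplet then defines the scenario used to select or scale accelerograms.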
Fig. 5. British earthquakes over a period of more than seven centuries. Magnitudes: a: M ≥ 5.5, b: 5.5 > M ≥ 5.0, c: 5.0 > M ≥ 4.5, d: 4.5 > M ≥ 4.0 [Ambraseys and Jackson, 1985].
anyone's standards is low. Ambraseys and Jackson [1985] review several centuries
of seismicity in Britain (Fig. 5) and find no earthquake larger than Ms 5.5, with the
additional blessings that these events are very infrequent (recurrence intervals
nationally of at least 300 years) and that focal depths tend to increase with
magnitude. Accounts of losses of life or property due to British earthquakes are
hard to come across. This lack of significant activity and of any consensus on the
principles for zoning this seismotectonically inhomogeneous territory [Woo, 1996]
has not prevented PSHA being performed. The probabilistic study by Musson and
Winter [1997] found 10 000-year PGA values below 0.10g in most of the country but
nudging past 0.25g at isolated locations, no doubt in part because of their use of
rather high values of maximum magnitude from 6.2 to 7.0. An earlier study by Ove
Arup and Partners [1993] found similar results at selected locations and also carried
out a probabilistic risk assessment. It was found that annual earthquake losses were
only 10% of those due to meteorological hazards, although this was largely the
result of the unlikely but still credible scenario of a magnitude 6 earthquake directly
under a city [Booth and Pappin, 1995]. A useful, but complex and lengthy, study
would compare these probabilistic losses with the cost of incorporating earthquake-resistant design into UK construction, and actually perform an iterative analysis to
determine the reduction in risk from different levels of investment in earthquake-resistant design and construction. This would allow an informed decision regarding
the benefit of the investment compared to the loss that may be prevented, taking
into account competing demands on the same resources.
The issue of seismic safety definitely cannot be dismissed for nuclear power
plants built in the UK, for which elaborate probabilistic hazard assessments have
been carried out to define the 10 000-year design ground motions. The real hazard is
posed by moderate magnitude (M ~ 5) earthquakes, whose association with tectonic
structures is tenuous, that could produce damaging ground motions over a radius
of about 5-10 km. In the author's opinion, unless there are good scientific reasons
to exclude it as unfeasible, the only rational design basis for seismic safety in such
a setting is a DSHA based on an earthquake of about Ms 5.5, which would require
rupture on a fault that could very easily escape detection, occurring close to or even
below the power station. This approach could be classified as one of conservatism.
In their treatise on the philosophy of seismic hazard assessment for nuclear power
plants in the UK, Mallard and Woo [1993] argue against the use of conservatism
and in favour of a systematic methodology for quantifying uncertainty. The tool
proposed for this procedure is the logic-tree, a device that has become part of the
stock-in-trade of PSHA enthusiasts. As applied to seismic hazard assessment, the
etymology of the second part of its name is obvious from its dendritic structure,
but the logic is sometimes harder to detect.
4.2. The use and abuse of probability
At this point, three principles for seismic hazard assessment can be stated:
00064
59
(1) Seismic hazard can only be rationally interpreted in relation to the mitigation
of the attendant risk.
(2) If determinism is taken to mean the assignment of values by judgement, it is impossible to perform a seismic hazard assessment, by any method, without many
deterministic elements. These should be based, as far as possible, exclusively
on the best scientific data available.
(3) Probability, at least in a relative sense, is essential to the evaluation of risk
(but not necessarily to its calculation).
The application of logic-trees to seismic hazard assessment is a mechanism for
handling those epistemic uncertainties that cannot, unlike the aleatory scatter in
strong-motion scaling relationships, be statistically measured. The different options
at each step are considered and each is assigned a subjective weight that reflects the
confidence in each particular value or choice. The results allow the determination of
confidence bands on the mean hazard curve, which, as Reiter [1990] rightly points
out, places an uneasy burden on those who have to use the results. Krinitzsky
[1995a] presents powerful arguments against the use of the logic-tree for hazard assessment, while illustrating how, without the contentious application of weights, it can be a useful tool for comparing risk scenarios.

Fig. 6. Contours of 475-year PGA levels for El Salvador produced by independent seismic hazard studies [Bommer et al., 1997].

In the context of this study, and
in terms of the three principles outlined above, the logic-tree for hazard assessment
appears to be upside down: deterministic judgements^d are turned into numbers
and treated together with observational data to calculate probabilities that then
fix the design ground motions. Returning to the point made in Sec. 3.2, this again
points to PSHA having a degree of confidence in its probabilities, which, to the
author, does not seem justified. Reiter [1990] notes that different studies may
conflict over whether the likelihood of exceeding 0.3g at site X is 10^-3 or 10^-4
and whether the likelihood of exceeding the same acceleration at location Y is 10^-4
or 10^-5. Given how PSHA is actually applied, this means that for a specified
annual rate of exceedance estimates of PGA can vary by factors of two or more
(Fig. 6). It is important to note that a degree of this divergence in results can
be removed through application of the procedures proposed by the Senior Seismic
Hazard Analysis Committee [SSHAC, 1997]. However, the available earthquake
and ground-motion data for most parts of the world is such that any estimate
of the ground motion for a prescribed rate of exceedance will inevitably carry an
appreciable degree of uncertainty.
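The mechanics of logic-tree weighting can be sketched in a few lines (branch values and weights are invented for illustration; real trees weight every node, not just end results):

```python
# Each branch of the tree ends in a complete hazard estimate; here each is
# summarised as a single 475-year PGA value in g with its subjective weight:
branches = [
    (0.18, 0.2),  # e.g. a low maximum-magnitude branch
    (0.25, 0.5),
    (0.35, 0.3),  # e.g. a high maximum-magnitude branch
]
assert abs(sum(w for _, w in branches) - 1.0) < 1e-9  # weights must sum to 1

# Weighted mean hazard:
mean_pga = sum(value * w for value, w in branches)

def weighted_fractile(branches, p):
    """Value below which a fraction p of the total weight lies."""
    acc = 0.0
    for value, w in sorted(branches):
        acc += w
        if acc >= p:
            return value
    return max(value for value, _ in branches)

# A crude confidence band on the mean estimate:
low, high = weighted_fractile(branches, 0.15), weighted_fractile(branches, 0.85)
```

The width of the band (low, high) is driven entirely by the subjective weights, which is precisely the uneasy burden referred to above.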
So the big question is, how are the probabilities fixed? Hanks and Cornell [2001]
explain that the starting point is a performance target expressed as a probability
of failure, Pf, which is set by life safety concerns or perhaps political fiat. This
probability is defined by the integral:
Pf = ∫0^∞ H(a) F′(a) da.  (2)
where H(a) describes the hazard curve (annual rate of exceedance of different levels
of acceleration, a), and F′(a) is the derivative of the fragility function (the probability of failure given a particular level of acceleration). This is fine if an iterative
process is carried out in order to determine the appropriate fragility curve, which
would then define the design criteria, to give the desired level of Pf . If the fragility
curve is sufficiently steep, once it has been determined, Hanks and Cornell [2001]
show how Eq. (2) can be approximated to determine the level of H(a) to be used as
the basis of design, although this begins to get complicated because to obtain F (a)
the design must already have been carried out. However, this is not how the design
levels used in current practice have been fixed, or else it is quite remarkable that
nearly every country in the world, from New Zealand to Ethiopia, regardless of
seismicity, building types or construction standards, came up with 0.002 as the
design annual rate of exceedance!
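Equation (2) can be evaluated numerically for any assumed hazard curve and fragility function; the sketch below uses an invented power-law hazard curve and a lognormal fragility, not values from any published study:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def failure_probability(hazard, fragility, a_lo=0.01, a_hi=3.0, n=2000):
    """Approximate Pf = integral of H(a) * F'(a) da (Eq. 2), with F'(a)
    from central differences and the trapezoidal rule for the integral."""
    da = (a_hi - a_lo) / n
    total = 0.0
    for i in range(n + 1):
        a = a_lo + i * da
        f_prime = (fragility(a + da) - fragility(a - da)) / (2.0 * da)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * hazard(a) * f_prime
    return total * da

# Invented models: a power-law hazard curve anchored at 1e-4/yr for 0.1 g,
# and a lognormal fragility with median capacity 1.0 g and sigma_ln = 0.4.
H = lambda a: 1.0e-4 * (0.1 / a) ** 2
F = lambda a: normal_cdf(math.log(a / 1.0) / 0.4)

Pf = failure_probability(H, F)   # annual failure probability, ~1.4e-6
```

Designing iteratively would then mean adjusting the fragility, i.e. the design itself, until Pf meets the target.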
In fact, the almost universal use of the 475-year return period in codes can be
traced back to the hazard study for the USA produced by Algermissen and Perkins
[1976], which was based on an exposure period of 50 years (a typical design life)
and a probability of 10% of exceedance, whose selection has not been explained.

[Footnote d: Krinitzsky [1995a] describes these as degrees of belief that are no more quantifiable than love or taste.]
Every code and regulation since has either followed suit or else adopted its own
return periods: Whitman [1989] reports proposals by the Structural Engineers Association of Northern California to use a 2000-year return period, chosen because
the committee believed it reflected a probability or risk comparable to other risks that
the public accepts in regard to life safety. If one considers that in most structural
codes the importance factor increases the design ground motions by a constant
factor, this means that the actual return period of the design accelerations will be
different from 475 years and will probably vary throughout a country. In the development of the NEHRP Guidelines, this has actually been done intentionally in order
to provide a uniform margin against collapse, resulting in design for motions corresponding to 10% in 50 years in San Francisco and 5% in 50 years in Central and
Eastern US [Leyendecker et al., 2000]. A review of seismic design regulations around
the world reveals a host of design return periods, the origin of which is rarely if
ever explained. The AASHTO Guidelines specify 500 years for essential bridges,
2500 years for critical bridges. In the Vision 2000 proposal for a framework
for performance-based seismic design, return periods of 72, 244, 475 and 974 years
have been specified as the basis for fixing demand for different performance levels
[SEAOC, 1995]. It is hard not to feel that some of these values have been almost
pulled from a hat, all the more so for the 10 000-year return periods specified for
safety-critical structures such as nuclear power plants.
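The effect of a constant importance factor on the effective return period can be seen by approximating the hazard curve locally as a power law, H(a) ∝ a^(-k); the slope k and the factor below are illustrative values only:

```python
# Under the power-law approximation H(a) = H0 * (a0 / a) ** k, scaling the
# design acceleration by an importance factor gamma divides the exceedance
# rate by gamma ** k, i.e. multiplies the return period by gamma ** k.
k = 3.0            # log-log slope of the hazard curve (site dependent)
gamma = 1.4        # importance factor applied to the nominal design motion
T_nominal = 475.0  # return period of the unscaled design motion (years)

T_effective = T_nominal * gamma ** k   # ~1300 years for these values
```

Since the slope k varies from site to site, a fixed importance factor implies different effective return periods across a country, which is precisely the point made above.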
recurrence relationships [e.g. Frankel and Safak, 1998; Kiremidjian, 1999; Bommer
et al., 2002].
It has in fact already become well recognised that there is not a mutually exclusive dichotomy between DSHA and PSHA, opening up the possibility of exploring
combined methods. Reiter [1990] says that in many situations the choice has been
rephrased so that the issue is not whether but rather to what extent a particular
approach should be used. McGuire [2001] asserts that determinism vs. probabilism is not a bivariate choice but a continuum in which both analyses are conducted,
but more emphasis is given to one over the other. It is, however, interesting to
note that both Reiter [1990] and McGuire [2001], who make positive contributions
to moving away from the dichotomy, point towards deaggregation of PSHA as an
important tool in hybrid approaches.
5.2. Against orthodoxies^f
How should the method for seismic hazard assessment be chosen? Reiter [1990]
rightly argues that the analysis must fit the needs. Nonetheless, many in the
field of earthquake engineering would argue for the superiority of one approach
or the other as the ideal, a panacea for all situations. Such arguments carry the
danger of establishing orthodoxies, with all their
attendant perils and limitations. Hazard assessment should be chosen and adapted
not only to the required output and objectives of the risk study of which it forms
part, but also to the characteristics of the area where it is being applied and to
the level and quality of the data available. There is no need to establish
standard approaches that may be well suited to some settings and not to others, or
which may be appropriate now and yet not be in a few years' time as understanding
of the earthquake process advances. Every major earthquake that is well recorded
by accelerographs throws up new answers and poses new questions, leading to rapid
evolution. The near-field directivity factors proposed by Somerville et al. [1997], for
example, have had to be revised not only numerically but also conceptually following the 1999 earthquakes in Turkey and Taiwan, a mere two years after their
publication. The ability to locate and identify active geological faults in continental
areas has also advanced by leaps and bounds over recent years, and continues to
do so [e.g. Jackson, 2001].
What are the bases for the orthodoxies of PSHA and DSHA? For the PSHA
church, the cornerstone of its creed seems to be the overriding importance of the
probability of exceedance of particular levels of ground motion, despite the fact
that teams of experts working with the same data can easily come up with answers
that differ by an order of magnitude.

[Footnote f: In the period 182 to 188 A.D., at a time when the early Christian church had gone beyond only having enemies outside its camp, St. Irenaeus of Lyons penned a multi-tome treatise entitled Adversus Haereses (Against Heresies). The effects of such rigid insistence on a particular, narrow philosophy in stifling religious and scientific thought in later centuries are known only too well.]

For the DSHA temple, the sacred cow is
the worst-case scenario, even though this is not what they actually produce, as
discussed in Sec. 5.4. One cannot get away from the fact that, with current levels
of knowledge of earthquake processes, a major component of seismic hazard
assessment is judgement, and risk decisions are governed not only
by scientific and technical data but also by opinion. The word orthodoxy, from
the Greek, has exactly that paradoxical and unscientific meaning: correct (ortho)
opinion (doxa). The Oxford Dictionary goes on to define the word also to mean
"not independent-minded" and "unoriginal".
5.3. Alternative approaches to seismic hazard mapping
One area in which PSHA currently enjoys almost total domination is in seismic
hazard mapping, particularly for seismic design codes. The difference between deterministic and probabilistic hazard maps highlights one extremely important difference between current practice of the two methods, quite apart from the debatable issue of whether or not units of time are included. Probabilistic maps combine
the influence and effects of different sources of seismicity, often into a single map
showing contours of PGA that are then used to anchor spectral shapes that relate
only to the site conditions and which somehow try to cover all possible earthquake
scenarios.
Donovan [1993] states that the ground motion portion of codes represents an
attempt to produce a best estimate of what ground motions might occur at a site
during a future earthquake. Current code descriptions of earthquake actions clearly
do not fulfil this objective for many reasons, particularly because of the practice of
combining the hazard from various sources of seismicity. Here it is hard to justify
this by insisting on the importance of the total probability when codes are generally
based on the arbitrary level of the 475-year return period and, even more importantly, a return period which holds only for the zero period spectral ordinate. In
fact, because of the use of discrete zones to represent continuously varying hazard
over geographical areas, the actual return period will often not correspond to the
475-year level even at the spectral anchor point. The uniform hazard spectra of
seismic codes do not represent motions that might occur during a plausible future
earthquake, whence the difficulty that codes have in specifying acceleration time-histories.
In certain regions, particularly those affected by both small, local earthquakes and
larger, distant earthquakes, the application of spectral modal analysis using the
code spectrum can amount to designing a long-period structure for two different
types of earthquake occurring simultaneously. It is important to note here that
several codes, and in particular the NEHRP provisions [FEMA, 1997], use two parameters to define the spectrum, hence the spectral shape does vary with hazard
level, but the above comments still apply to some extent.
If it is accepted that the total probability theorem is not an inviolable principle
in defining earthquake actions, especially when the calculation of that probability
Fig. 7. Attenuation relationship derived for the 1971 San Fernando earthquake data [Dowrick, 1987].
standard deviations above the median, which assumes that for all magnitude and
distance combinations, the physical upper bound on the ground motion is always
at a fixed ratio of the median amplitudes. There is also debate regarding the actual
level at which the truncation should be applied: proposals range from 2 to 4.5 standard deviations [Reiter, 1990; Abrahamson, 2000; Romeo and Pristininzi, 2000] and
some widely used software codes employ cut-offs at 6 standard deviations above the
median.
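The effect of truncating the scatter can be sketched for a single scenario; the renormalisation scheme below is one simple choice among several used in practice, and the numbers are illustrative:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_exceed_truncated(eps, eps_max):
    """P(scatter term > eps) when the standard normal distribution of
    residuals is truncated at eps_max and renormalised."""
    if eps >= eps_max:
        return 0.0
    return (normal_cdf(eps_max) - normal_cdf(eps)) / normal_cdf(eps_max)

# Probability of exceeding a motion two standard deviations above the median:
p_untruncated = 1.0 - normal_cdf(2.0)            # ~0.023
p_truncated_3 = prob_exceed_truncated(2.0, 3.0)  # slightly smaller
p_truncated_6 = prob_exceed_truncated(2.0, 6.0)  # essentially untruncated
```

A cut-off at 6 standard deviations, as used in some software, is numerically almost indistinguishable from no truncation at all, which is why the choice of truncation level matters mainly at very long return periods.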
Untruncated lognormal distributions create even more difficulties for DSHA,
since, at least for critical structures, its basis should be to identify the worst-case
scenario. Firstly, it establishes a maximum credible earthquake (MCE) as the basis
for the design. Abrahamson [2000] points out that if the MCE is determined as
the mean value from an empirical relationship between magnitude and rupture
dimensions, it is not the worst case; the worst case would be the magnitude at
which the probability distribution is truncated. This truncation requires a physical
rather than statistical basis: upper bounds are required, especially because the
regressions are supported by very few data for large magnitudes [Jackson, 1996].
After fixing the MCE, DSHA calculates design ground motions as the median
or median-plus-one-standard-deviation values from strong-motion scaling relationships. Again, Abrahamson [2000] points out that this is not the worst case, but
then goes on to state that the worst-case ground motion would be 2 to 3 standard
deviations above the median. The problem is where, between these two limits, to
fix the worst case. Probabilistically they are close, corresponding to the 97.7 and
99.9 percentiles respectively, although in terms of spectral amplitudes adding
the third standard deviation can increase the amplitudes by a factor of
two, which clearly has very significant implications for design. Here again, physical
upper bounds are needed. Can upper bounds be fixed for ground-motion parameters? Reiter [1990] argues that there must be a physical limit on the strength
of ground motion that a given earthquake can generate. It is also interesting to
note that before sufficient strong-motion accelerograms were available for regression analyses, a number of studies considered possible upper bounds on ground-motion parameters [e.g. Ambraseys, 1974; Ambraseys and Hendron, 1967; Ida, 1973;
Newmark, 1965; Newmark and Hall, 1969]. Upper bounds need to be established,
and procedures for their application developed, that take into account current understanding of epistemic uncertainty. For example, if a deterministic scenario has
included forward-directivity effects, does it then make sense to add two or three
standard deviations when a significant component of the scatter has already been
accounted for?
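The closeness of the two limits in probability, and their distance apart in amplitude, can be checked directly. The sketch below assumes a representative base-10 logarithmic standard deviation of 0.3, a typical order of magnitude for attenuation relationships; the median PGA value is purely hypothetical.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sigma_log10 = 0.3    # assumed representative GMPE scatter (log10 units)
median_pga = 0.25    # hypothetical median PGA (g) for some scenario

pga_2sigma = median_pga * 10.0 ** (2 * sigma_log10)
pga_3sigma = median_pga * 10.0 ** (3 * sigma_log10)

percentile_2 = phi(2.0)  # ~0.977: the 97.7 percentile
percentile_3 = phi(3.0)  # ~0.9987: often rounded to the 99.9 percentile

# One further standard deviation multiplies amplitudes by 10**0.3, i.e.
# roughly a factor of two, despite the tiny change in probability.
amplitude_ratio = pga_3sigma / pga_2sigma
```

The probability gained by the third standard deviation is about two percentage points, while the design amplitude nearly doubles: exactly the asymmetry discussed in the text.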
There are many ways in which the scatter in attenuation equations can be truncated, including the adaptation of models developed for log-normal distributions
with finite maxima [Bezdak and Solomon, 1983] to ground-motion prediction models [Restrepo-Velez, 2001]. As PSHA pushes estimates to longer return periods, the
issue of truncating the influence of the scatter, in order to avoid physically unrealisable ground-motion amplitudes, becomes more important. The Pegasos project, currently underway to assess seismic hazard at nuclear power plant sites in Switzerland
down to very low rates of exceedance, is taking PSHA to new limits [Smit et al.,
2002]. The ground-motion estimates will be defined by median values, standard
deviations and truncations, with associated confidence intervals for each of these,
for a wide range of magnitude and distance pairs.
5.5. Time-dependent estimates of seismic hazard
In order to make full use of the output from a PSHA, including estimates of uncertainty and the expected levels of ground motion for a range of return periods, it
is necessary that the engineering design also follow a probabilistic approach. Such
approaches are now being developed, particularly through the outstanding work of
Allin Cornell and his students at Stanford University [e.g. Bazzurro, 1998]. However,
there are many obstacles to the adoption of these approaches in routine engineering
practice, simply because there are few areas of activity where the saying "time is
money" is as true as it is in the construction industry. One could actually conclude
nor does there need to be, a method that can be applied as a panacea in all situations, since the available time and resources, and the required output, will vary
for different applications. There is no reason why a single approach should be ideal
for drafting zonation maps for seismic design codes, for loss estimation studies in
urban areas, for emergency planning, and for site-specific assessments for nuclear
power plants. McGuire [2001] presents an interesting scheme for selecting the relative degrees of determinism and probabilism according to the application. Woo
[1996] states that the method of (probabilistic) analysis should be decided on the
merits of the regional data rather than the availability of particular software or
the analyst's own philosophical inclination. The reality is that both of these views
are partially correct and hence both criteria need to be used simultaneously: the
selection of the appropriate method must fit the requirements of the application
and also consider the nature of the seismicity in the region and its correlation, or
indeed lack thereof, with the tectonics. It is not possible to define a set of criteria
that can then be blindly applied to all types of hazard assessment in all regions
of the world and for this reason no such attempt has been made in this paper.
The analyst, after establishing the needs and conditions set by the engineer, should
adapt the assessment both to these criteria and to the region under study. Casting
aside simplistic choices between DSHA and PSHA will help the best approach to be
found and will also allow the best use to be made of the considerable and growing
body of expertise in engineering seismology around the world.
The confidence with which seismic probabilities can be calculated does not generally warrant their rigid use to define design ground motions, and the results should
at least be checked against deterministic scenarios wherever there are sufficient
data for these to be defined [McGuire, 2001]. One application in
which the arguments for strict adherence to the total probability theorem cannot
be defended is the derivation of earthquake actions for code-based seismic design.
The currently very crude definition of earthquake actions in seismic codes could be
greatly improved if it were accepted that it is not necessary to use representations
that attempt to simultaneously envelop all of the possible ground motions that
may occur at a site.
Finally, one area in which research is required, and which will benefit seismic
hazard assessment in general and perhaps deterministic approaches in particular,
is the identification of upper bounds on ground-motion parameters for different combinations
of magnitude, distance and rupture mechanism. The existing database of strong-motion accelerograms can provide some insight into this issue [e.g. Martínez-Pereira,
1999; Restrepo-Velez, 2001]. Advances in finite-fault models for numerical simulation of ground motions could be employed to perform large numbers of runs, with
a wide range of combinations of physically realisable values of the independent
parameters, in order to obtain estimates of the likely range of the upper bounds on
some parameters.
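The kind of parameter sweep suggested above can be sketched as a simple Monte Carlo loop. Since a real finite-fault simulation code is far beyond the scope of an illustration, the function below is a stand-in stub with an invented scaling form, and the parameter ranges are assumed; only the structure of the sweep, sampling physically plausible values of the independent parameters and recording the extreme response, reflects the proposal in the text.

```python
import random

def simulated_pga(magnitude, distance_km, stress_drop_bars):
    """Stand-in for a finite-fault ground-motion simulation.  The
    scaling form is invented purely to make the sweep runnable; a real
    study would call a numerical rupture/wave-propagation code here."""
    return (10.0 ** (0.3 * magnitude - 1.8)
            * (stress_drop_bars / 100.0) ** 0.8
            / (distance_km + 10.0))

random.seed(1)
peak = 0.0
for _ in range(20000):
    # Assumed physically plausible ranges for the independent parameters.
    magnitude = random.uniform(5.0, 8.0)
    distance_km = random.uniform(1.0, 20.0)
    stress_drop_bars = random.uniform(10.0, 300.0)
    peak = max(peak, simulated_pga(magnitude, distance_km, stress_drop_bars))

# `peak` estimates the upper bound implied by the sampled ranges; in a
# real study the distribution of such maxima, not a single value, would
# inform the bound and its epistemic uncertainty.
```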
Acknowledgements
The author wishes to thank David M. Boore, Luis Fernando Restrepo-Velez, John
Douglas and John Berrill for the heads-up regarding some key references, and particularly Ellis Krinitzsky and Tom Hanks for providing pre-prints of papers. The
author has enjoyed and benefited from discussing many of the issues in this paper with others, particularly Norman A. Abrahamson, Amr Elnashai, Rui Pinho,
Nigel Priestley, Sarada K. Sarma and the students of the ROSE School in Pavia.
Robin McGuire, Belén Benito, John Douglas, Nick Ambraseys and Sarada Sarma
all provided very useful feedback on the first draft of this manuscript. The author is
particularly indebted to Dave Boore for an extremely thorough, detailed and challenging review of the first draft; in the few cases where my determined stubbornness
has led me to ignore his counsel, I have probably done so to the detriment of the
paper. The second draft of the paper has further benefited from thorough reviews
by John Berrill and two anonymous referees, to whom I also extend my gratitude.
References
Abrahamson, N. A. [2000] State of the practice of seismic hazard evaluation, GeoEng
2000, Melbourne, Australia, 19–24 November.
Algermissen, S. T. and Perkins, D. M. [1976] A probabilistic estimate of maximum accelerations in rock in the contiguous United States, US Geological Survey Open-File
Report 76-416.
Algermissen, S. T., Perkins, D. M., Thenhaus, P. C., Hanson, S. L. and Bender, B. L.
[1982] Probabilistic estimates of maximum acceleration and velocity in rock in the
contiguous United States, US Geological Survey Open-File Report 82-1033.
Alvarez, L., Vaccari, F. and Panza, G. F. [1999] Deterministic seismic zoning of eastern
Cuba, Pure Appl. Geophys. 156, 469–486.
Ambraseys, N. N. [1974] Dynamics and response of foundation materials in epicentral
regions of strong earthquakes, Proceedings Fifth World Conference on Earthquake
Engineering, Rome, vol. 1, cxxvi–cxlviii.
Ambraseys, N. N., Bommer, J. J., Buforn, E. and Udías, A. [2001] The earthquake
sequence of May 1951 at Jucuapa, El Salvador, J. Seism. 5(1), 23–39.
Ambraseys, N. N. and Hendron, A. [1967] Dynamic Behaviour of Rock Masses, Rock
Mechanics in Engineering Practice, Stagg, K. and Zienkiewicz, O. (eds.), John Wiley,
pp. 203–236.
Ambraseys, N. N. and Jackson, J. A. [1985] Long-term seismicity of Britain, Earthquake
Engineering in Britain, Thomas Telford, pp. 49–65.
Anderson, J. G. [1997] Benefits of scenario ground motion maps, Engrg. Geol. 48, 43–57.
Anderson, J. G. and Brune, J. N. [1999] Probabilistic seismic hazard analysis without
the ergodic assumption, Seism. Res. Lett. 70(1), 19–28.
Aoudia, A., Vaccari, F., Suhadolc, P. and Meghraoui, M. [2000] Seismogenic potential
and earthquake hazard assessment in the Tell Atlas of Algeria, J. Seism. 4, 79–98.
Barbano, M. S., Egozcue, J. J., García Fernández, M., Kijko, A., Lapajne, L., Mayer-Rosa,
D., Schenk, V., Schenková, Z., Slejko, D. and Zonno, G. [1989] Assessment of seismic
hazard for the Sannio-Matese area of southern Italy: a summary, Natural Hazards
2, 217–228.
Dowrick, D. J. [1987] Earthquake Resistant Design for Engineers and Architects. Second
edition, John Wiley & Sons.
Faccioli, E., Battistella, C., Alemani, P. and Tibaldi, A. [1988] Seismic microzoning investigations in the metropolitan area of San Salvador, El Salvador, following the
destructive earthquake of 10 October 1986, Proceedings International Seminar on
Earthquake Engineering, Innsbruck, 2865.
FEMA [1997] NEHRP recommended provisions for seismic regulations for new buildings
and other structures, FEMA 302, Federal Emergency Management Agency, Washington DC.
Frankel, A. [1995] Mapping seismic hazard in the Central and Eastern United States,
Seism. Res. Lett. 66(4), 8–21.
Frankel, A. and Safak, E. [1998] Recent trends and future prospects in seismic hazard
analysis, Geotechnical Earthquake Engineering & Soil Dynamics III, ASCE Geotechnical Special Publication 75, vol. 1, 91–115.
Hanks, T. C. and Cornell, C. A. [2001] Probabilistic seismic hazard analysis: a beginner's
guide, Earthq. Spectra, submitted.
Harlow, D. H., White, R. A., Rymer, M. J. and Alvarado, S. G. [1993] The San Salvador
earthquake of 10 October 1986 and its historical context, Bull. Seism. Soc. Am.
83(4), 1143–1154.
Harmsen, S., Perkins, D. and Frankel, A. [1999] Deaggregation of probabilistic ground
motions in the Central and Eastern United States, Bull. Seism. Soc. Am. 89, 1–13.
Hofmann, R. B. [1996] Individual faults can't produce a Gutenberg-Richter earthquake
recurrence, Engrg. Geol. 43, 5–9.
Hong, H. P. and Rosenblueth, E. [1988] The Mexico earthquake of September 19, 1985:
model for generation of subduction earthquakes, Earthq. Spectra 4(3), 481–498.
Hwang, H. H. M. and Huo, J. R. [1994] Generation of hazard-consistent ground motion,
Soil Dyn. Earthq. Engrg. 13, 377–386.
Ida, Y. [1973] The maximum acceleration of seismic ground motion, Bull. Seism. Soc.
Am. 63(3), 959–968.
Jackson, D. D. [1996] The case for huge earthquakes, Seism. Res. Lett. 67(1), 3–5.
Jackson, J. A. [2001] Living with earthquakes: know your faults, J. Earthq. Engrg. 5,
Special Issue No. 1, 5–123.
Kameda, H. and Nojima, N. [1988] Simulation of risk-consistent earthquake motion,
Earthq. Engrg. Struct. Dyn. 16, 1007–1019.
Kijko, A. and Graham, G. [1998] Parametric-historic procedure for probabilistic seismic
hazard analysis, Proceedings Eleventh European Conference on Earthquake Engineering, Paris.
Kiremidjian, A. S. [1999] Multiple earthquake event loss estimation methodology, Proceedings of the Eleventh European Conference on Earthquake Engineering: Invited
Lectures, Balkema, pp. 151–160.
Kiremidjian, A. S. and Anagnos, T. [1984] Stochastic slip-predictable models for earthquake occurrence, Bull. Seism. Soc. Am. 74, 739–755.
Kiremidjian, A. S. and Suzuki, S. [1987] A stochastic model for site ground motions from
temporally independent earthquakes, Bull. Seism. Soc. Am. 77, 1110–1126.
Kramer, S. L. [1996] Geotechnical Earthquake Engineering, Prentice Hall.
Krinitzsky, E. L. [1993] Earthquake probability in engineering, Part 2: Earthquake
recurrence and limitations of Gutenberg-Richter b-values for the engineering of critical
structures, Engrg. Geol. 36, 1–52.
Krinitzsky, E. L. [1995a] Problems with Logic Trees in earthquake hazard evaluation,
Engrg. Geol. 39, 1–3.
Reiter, L. [1990] Earthquake Hazard Analysis: Issues and Insights, Columbia University
Press.
Restrepo-Velez, L. F. [2001] Explorative study of the scatter in strong-motion attenuation
equations for application to seismic hazard assessment, Master's Dissertation, ROSE
School, Università di Pavia.
Romeo, R. and Prestininzi, A. [2000] Probabilistic versus deterministic hazard analysis:
an integrated approach for siting problems, Soil Dyn. Earthq. Engrg. 20, 75–84.
Schwartz, D. P. and Coppersmith, K. J. [1984] Fault behaviour and characteristic earthquakes: examples from the Wasatch and San Andreas faults, J. Geophys. Res. 89,
5681–5698.
SEAOC [1995] Vision 2000: A framework for performance-based design, Structural
Engineers Association of California, Sacramento.
Shepherd, J. B., Tanner, J. G. and Prockter, L. [1993] Revised estimates of the levels of
ground acceleration and velocity with 10% probability of exceedance in any 50-year
period for the Trinidad and Tobago region, Caribbean Conference on Earthquakes,
Volcanoes, Windstorms and Floods, 11–15 October, Port of Spain, Trinidad.
Singh, S. K., Rodriguez, M. and Esteva, L. [1983] Statistics of small earthquakes and
frequency of occurrence of large earthquakes along the Mexican subduction zone,
Bull. Seism. Soc. Am. 73, 1779–1796.
Speidel, D. H. and Mattson, P. H. [1995] Questions on the validity and utility of b-values:
an example from the Central Mississippi Valley, Engrg. Geol. 40, 9–27.
Somerville, P. G., Smith, N. F., Graves, R. W. and Abrahamson, N. A. [1997] Modification
of empirical strong ground motion attenuation relations to include the amplitude and
duration effects of rupture directivity, Seism. Res. Lett. 68, 199–222.
SSHAC [1997] Recommendations for probabilistic seismic hazard analysis: guidance on
uncertainty and the use of experts. Senior Seismic Hazard Analysis Committee,
NUREG/CR-6372, Washington, DC.
Smit, P., Torri, A., Sprecher, C., Birkhäuser, P., Tinic, S. and Graf, R. [2002] Pegasos:
a comprehensive probabilistic seismic hazard assessment for nuclear power plants in
Switzerland, Twelfth European Conference on Earthquake Engineering, London.
Stepp, J. C., Wong, I., Whitney, J., Quittmeyer, R., Abrahamson, N., Toro, G., Youngs, R.,
Coppersmith, K., Savy, J., Sullivan, T. and Yucca Mountain PSHA Project Members
[2001] Probabilistic seismic hazard analyses for ground motions and fault displacements at Yucca Mountain, Nevada, Earthq. Spectra 17(1), 113–151.
Toro, G. R., Abrahamson, N. A. and Schneider, J. F. [1997] Model of strong ground
motions from earthquakes in Central and Eastern North America: best estimates and
uncertainties, Seism. Res. Lett. 68(1), 41–57.
Veneziano, D., Cornell, C. A. and O'Hara, T. [1984] Historical method of seismic hazard
analysis, Electric Power Research Institute Report NP-3438, Palo Alto, California.
White, R. A. and Harlow, D. H. [1993] Destructive upper-crustal earthquakes of Central
America since 1900, Bull. Seism. Soc. Am. 83(4), 1115–1142.
Whitman, R. (ed.) [1989] Workshop on ground motion parameters for seismic hazard
mapping, Technical Report NCEER-89-0038, National Center for Earthquake Engineering Research, State University of New York at Buffalo.
Woo, G. [1996] Kernel estimation methods for seismic hazard area source modelling,
Bull. Seism. Soc. Am. 86, 353–362.
Wahlström, R. and Grünthal, G. [2000] Probabilistic seismic hazard assessment (horizontal PGA) for Sweden, Finland and Denmark using different logic tree approaches,
Soil Dyn. Earthq. Engrg. 20, 45–58.