Journal of Earthquake Engineering, Vol. 6, Special Issue 1 (2002) 43-73


© Imperial College Press

DETERMINISTIC VS. PROBABILISTIC SEISMIC HAZARD
ASSESSMENT: AN EXAGGERATED AND
OBSTRUCTIVE DICHOTOMY

JULIAN J. BOMMER
Department of Civil and Environmental Engineering,
Imperial College, London SW7 2BU, UK
Deterministic and probabilistic seismic hazard assessment are frequently represented
as irreconcilably different approaches to the problem of calculating earthquake ground
motions for design, each method fervently defended by its proponents. This situation
often gives the impression that the selection of either a deterministic or a probabilistic
approach is the most fundamental choice in performing a seismic hazard assessment.
The dichotomy between the two approaches is not as pronounced as often implied and
there are many examples of hazard assessments combining elements of both methods.
Insistence on the fundamental division between the deterministic and probabilistic approaches is an obstacle to the development of the most appropriate method of assessment
in a particular case. It is neither possible nor useful to establish an approach to seismic
hazard assessment that will be the ideal tool for all situations. The approach in each
study should be chosen according to the nature of the project and also be calibrated
to the seismicity of the region under study, including the quantity and quality of the
data available to characterise the seismicity. Seismic hazard assessment should continue
to evolve, unfettered by almost ideological allegiance to particular approaches, with the
understanding of earthquake processes.
Keywords: Seismic hazard; probabilistic seismic hazard assessment; deterministic seismic
hazard assessment; seismic risk.

1. Introduction
Seismic hazard could be defined, in the most general sense, as the possibility of
potentially destructive earthquake effects occurring at a particular location. With
the exception of surface fault rupture and tsunami, all the destructive effects of
earthquakes are directly related to the ground shaking induced by the passage
of seismic waves. Textbooks that present guidance on how to assess the hazard of
strong ground-motions invariably present the fundamental choice facing the analyst
as that between adopting a deterministic or probabilistic approach [e.g. Reiter,
1990; Krinitzsky et al., 1993; Kramer, 1996]. Statements made by proponents of
the two approaches often imply very serious differences between deterministic and
probabilistic seismic hazard assessment and reinforce the idea that the choice between them is one of the most important steps in the process of hazard assessment.
This paper aims to show that this apparently diametric split between the two approaches is misleading and, more importantly, that it is not helpful to those faced
with the problem of assessing the hazard presented by earthquake ground motions
at a site.

2. DSHA VS. PSHA: An Exaggerated Dichotomy


Probabilistic seismic hazard assessment (PSHA) was introduced a little over
30 years ago in the landmark paper by Cornell [1968] and has become the most
widely used approach to the problem of determining the characteristics of strong
ground-motion for engineering design. Some, however, have challenged the approach
and put up vociferous defence of deterministic seismic hazard assessment (DSHA),
in turn soliciting firm responses from those favouring PSHA. The division between
the two camps has been expressed in the scientific and technical literature, sometimes in terms reminiscent of political debate.a In some situations the division
has become public as for example when in October 2000 US newspapers reported
the disagreements between Caltrans and the US Army Corps of Engineers regarding the choice of PSHA or DSHA in assessing design loads for the new eastern
span of the Bay Bridge in San Francisco Bay. Hanks and Cornell [2001] have predicted a similar showdown around the seismic hazard assessment for the Yucca
Mountain nuclear waste repository [Stepp et al., 2001]. All of this points to a
seemingly irreconcilable split between DSHA and PSHA, which warrants closer
examination.
a The reader is referred, for example, to the acknowledgements in the paper by Krinitzsky [1998] and the response by Hanks and Cornell [2001].
2.1. Determinism and probability in seismic hazard assessment
Reiter [1990] and Kramer [1996], currently the most widely consulted textbooks on
seismic hazard analysis, describe DSHA in the same way. The basis of DSHA is to
develop earthquake scenarios, defined by location and magnitude, which could affect
the site under consideration. The resulting ground motions at the site, from which
the controlling event is determined, are then calculated using attenuation relations;
in some cases, there may be more than one controlling event to be considered in
design.
The mechanics of PSHA are far less obvious than those of DSHA, with the result
that there is often misunderstanding of many of the basic features. The excellent
recent papers by Abrahamson [2000] and by Hanks and Cornell [2001] provide very
useful clarification of many issues that have created confusion regarding PSHA.
The essence of PSHA is to identify all possible earthquakes that could affect a site,
including all feasible combinations of magnitude and distance, and to characterise
the frequency of occurrence of different size earthquakes through a recurrence relationship. Attenuation equations are then employed to calculate the ground-motion
parameters that would result at the site due to each of these earthquakes and hence
the rate at which different levels of ground motion occur at the site is calculated.
The design values of motion are then those having a particular annual frequency of
occurrence.
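To make these mechanics concrete, the sketch below implements a toy Cornell-McGuire-style calculation in Python. Every number in it is an assumption adopted purely for illustration (a single source at a fixed distance, a truncated Gutenberg-Richter recurrence model and an invented attenuation relation with lognormal scatter); it is not any published model.

```python
import numpy as np
from scipy.stats import norm

# All parameter values are assumptions for illustration only.
M_MIN, M_MAX, B_VALUE = 5.0, 7.5, 1.0   # truncated Gutenberg-Richter recurrence
ANNUAL_RATE_MMIN = 0.05                 # assumed annual rate of events with M >= M_MIN
R_KM = 20.0                             # single source held at a fixed distance (km)
SIGMA_LN = 0.6                          # scatter of the toy attenuation relation (ln units)

def median_pga(m, r_km):
    """Toy attenuation relation (invented coefficients, PGA in g)."""
    return np.exp(-3.4 + 0.9 * m - 1.3 * np.log(r_km + 10.0))

def annual_exceedance_rate(pga_level, n=2000):
    """Rate of PGA > pga_level: integrate P(exceedance | m) over the magnitude density."""
    m = np.linspace(M_MIN, M_MAX, n)
    beta = B_VALUE * np.log(10.0)
    f_m = beta * np.exp(-beta * (m - M_MIN)) / (1.0 - np.exp(-beta * (M_MAX - M_MIN)))
    p_exc = norm.sf((np.log(pga_level) - np.log(median_pga(m, R_KM))) / SIGMA_LN)
    return ANNUAL_RATE_MMIN * np.sum(f_m * p_exc) * (m[1] - m[0])

for pga in (0.05, 0.10, 0.20, 0.40):
    rate = annual_exceedance_rate(pga)
    print(f"PGA > {pga:.2f} g: {rate:.2e} per year (return period {1.0 / rate:,.0f} yr)")
```

Reading rates off such a curve at a chosen annual frequency gives the design motion; a real analysis sums these integrals over many sources and distances, which is precisely where the contribution of any individual scenario disappears from view.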
Common to both approaches is the very fundamental, and highly problematic,
issue of identifying potential sources of earthquakes. Another common feature is
the modelling of the ground motion through the use of attenuation relationships,
more correctly called ground-motion prediction equations [D. M. Boore, written
communication]. The principal difference in the two procedures, as described above,
resides in those steps of PSHA that are related to characterising the rate at which
earthquakes and particular levels of ground motion occur. Hanks and Cornell [2001]
point out that the two approaches have far more in common than they do in
differences and that in fact the only difference is that a PSHA has units of time
and DSHA does not. This is indeed a very fundamental distinction between the
two approaches as currently practised: in a DSHA the hazard will be defined as
the ground motion at the site resulting from the controlling earthquake, whereas
in PSHA the hazard is defined as the mean rate of exceedance of some chosen
ground-motion amplitude [Hanks and Cornell, 2001]. At this point it is useful to
briefly define the different terms used to characterise probabilistic seismic hazard:
the return period of a particular ground motion, T_r(Y), is simply the reciprocal
of the annual rate of exceedance. For a specified design life, L, of a project, the
probability of exceedance of the level of ground motion (assuming that a Poisson
model is adopted) is given by:
$$q = 1 - e^{-L/T_r} \qquad (1)$$

Once a mean rate of exceedance or probability of exceedance or return period is
selected as the basis for design, the output of PSHA is then expressed in terms of
a specified ground motion, in the same way as DSHA.
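The conversion between return period, design life and probability of exceedance implied by Eq. (1) is easily checked numerically; the fragment below assumes nothing beyond the Poisson model itself.

```python
import math

def exceedance_probability(design_life_yr, return_period_yr):
    """Eq. (1): q = 1 - exp(-L / T_r) under a Poisson model."""
    return 1.0 - math.exp(-design_life_yr / return_period_yr)

def return_period(design_life_yr, q):
    """Inverse of Eq. (1): T_r = -L / ln(1 - q)."""
    return -design_life_yr / math.log(1.0 - q)

print(exceedance_probability(50, 475))   # ~0.10, i.e. 10% in 50 years
print(return_period(50, 0.02))           # ~2475 years, i.e. 2% in 50 years
```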
Another important difference between the two approaches, which is discussed
further in Sec. 3, is related to the treatment of hazard due to different sources of
earthquakes. In PSHA, the hazard contributions of different seismogenic sources are
combined into a single frequency of exceedance of the ground-motion parameter;
in DSHA, each seismogenic source is considered separately, the design motions
corresponding to a single scenario in a single source.
Regarding differences and similarities between the two methods, it is often
pointed out that probabilities are at least implicitly present in DSHA in so far
as the probability of a particular earthquake scenario occurring during the design life of the engineering project is effectively assigned as unity. An alternative interpretation is that within a framework of spatially distributed seismicity,
the probability of occurrence of a deterministic scenario is, mathematically, zero
[Hanks and Cornell, 2001] but in general the implied probability of one is a valid
interpretation of the scenarios defined in DSHA. As regards the resulting ground
motions, however, the probability depends upon the treatment of the scatter in the
strong-motion prediction equation: if the median plus one standard deviation is
used, this will correspond to a motion with a 16-percent probability of exceedance,
for the particular earthquake specified in the scenario. The probabilistic nature of
ground-motion levels obtained from scaling relationships is reflected in the fact that
in Japan "probabilistic" usually refers to ground motions obtained in this way from
a particular scenario, as opposed to "deterministic" ground motions obtained by
modelling of the fault rupture and wave propagation [Abrahamson, 2000].
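The 16-percent figure follows directly from the assumed lognormal scatter: the median-plus-one-standard-deviation motion lies one sigma above the mean of the underlying normal distribution of logarithms, so its exceedance probability for the given scenario is the upper tail beyond one sigma.

```python
from scipy.stats import norm

# Exceedance probability of the 84-percentile (median-plus-one-sigma) motion,
# given the scenario earthquake: the upper tail of a standard normal beyond 1.
print(norm.sf(1.0))  # ~0.1587, the 16-percent figure cited above
```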
It can equally be pointed out that any PSHA includes many deterministic
elements in so much as the definition of nearly all of the input requires the
application of judgements to select from a range of possibilities. This applies in
particular to the definition of the geographical limits of the seismic source zones
and the selection of the maximum magnitude. The deterministic nature of defining
seismic source zones and the consequently great differences that can arise amongst
the interpretations of different experts are well known [e.g. Barbano et al., 1989;
Reiter, 1990].
In addition to the various parameters that define the physical model that is the
basis for any PSHA, it could also be argued that another parameter, which has a
pronounced influence on the input to engineering design, is also defined deterministically: the design probability of exceedance. This issue, which is of fundamental importance to the raison d'être of probabilistic seismic hazard assessment, is
explored in Sec. 4.
2.2. From Cornell to kernel – which PSHA?
The nature of the conflict between proponents of PSHA and DSHA was previously
likened to that between opposing political or religious ideologies, in which each
side claims exclusive ownership of the truth. However, scratching the surface of
opposing sides in ideological conflicts nearly always reveals the division to be far
less clear, with divisions within each camp being often almost as pronounced as
the fundamental ideological split itself. One only needs to look at the history of
the Left in international politics, in which internal conflicts have often taken on
a ferocity at least as great as that shown in the confrontation with the Right, for
a clear example of an apparent dichotomy concealing a multitude of opinions and
philosophies.b In this sense, the analogy remains useful for the split between the
proponents of DSHA and PSHA: the defenders of PSHA often ignore the fact
that there are many different methods of analysis that fall under the heading of
"probabilistic" and the proponents of each of these methods argue their case by
pointing out shortcomings in the others.
b Hence the joke that if there are three Trotskyists gathered in a room there will be four political parties.
The formal beginning of PSHA, as mentioned before, can be traced back to the
classic paper by Cornell [1968]. Important developments included the development
of the software EQRISK by McGuire [1976], with the result that the method is frequently
referred to by the name Cornell-McGuire. A significant difference between
EQRISK and the original formulation of Cornell [1968] was the inclusion of the
influence of the uncertainty or scatter in the strong-motion prediction equation,
subsequently explored by Bender [1984].
Two fundamental features of the Cornell-McGuire method are the definition of
seismogenic source zones, as areas or lines, with spatially uniform activity and the
assumption of a Poisson process to represent the seismicity, both of which have been
challenged by different researchers who have proposed alternatives. For example,
the use of Markov renewal chains has been proposed as an alternative probabilistic
model for subduction zones with identified seismic gaps, to develop either slip-dependent [Kiremidjian and Anagnos, 1984] or time-dependent [Kiremidjian and
Suzuki, 1987] estimates of hazard. There have also been numerous studies published
giving short-term hazard estimates based on non-Poissonian seismicity, such as the
forecast by Parsons et al. [2000] for hazard in the Sea of Marmara area following
the 1999 Kocaeli and Düzce earthquakes.
Many alternatives to uniformly distributed seismicity within sources defined by
polygons have been put forward, such as Bender and Perkins [1982, 1987] who
proposed sources with smoothed boundaries, obtained by defining a standard error
on earthquake locations. In an earlier study Peek et al. [1980] used fuzzy set theory
to smooth the transitions between source boundaries.
There have also been proposals to do away with source zones altogether and use
the seismic catalogue itself to represent the possible locations of earthquakes, an
approach that may have been used before 1968. Such historic approaches can be
non-parametric [Veneziano et al., 1984] or parametric [Shepherd et al., 1993; Kijko
and Graham, 1998]; Makropoulos and Burton [1986] developed an approach using
the earthquake catalogue to represent sources and Gumbel distributions. Recent
adaptations of these zone-free methods include the approach based on spatially-smoothed historical seismicity of Frankel [1995] and the kernel method of Woo
[1996], who alludes to misgivings over zonation and, consequently, with the edifice
of hazard computation built on zonation, the cornerstone of the Cornell-McGuire
method. In these most recent methods, earthquake epicentres in the catalogue are
smoothed, according to criteria related to their magnitude and recurrence interval,
to form seismic sources.
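As a loose sketch of how such smoothing can work (illustrative only, and not a reproduction of either the Frankel [1995] or the Woo [1996] formulation), a Gaussian kernel with an assumed correlation length can be applied to a toy catalogue of epicentres to produce a zone-free activity density on a grid:

```python
import numpy as np

# Toy catalogue and grid; the smoothing length is an assumed value.
rng = np.random.default_rng(1)
epicentres = rng.uniform(0.0, 100.0, size=(200, 2))   # epicentre coordinates (km)
nodes = np.stack(np.meshgrid(np.arange(0.0, 100.0, 5.0),
                             np.arange(0.0, 100.0, 5.0)), axis=-1).reshape(-1, 2)
c = 15.0                                              # assumed smoothing length (km)
d2 = ((nodes[:, None, :] - epicentres[None, :, :]) ** 2).sum(axis=-1)
activity = np.exp(-d2 / c**2).sum(axis=1)             # kernel-smoothed activity at each node
activity /= activity.sum()                            # normalised spatial density
print(activity.reshape(20, 20).round(4))
```

In Frankel's method the smoothed counts then stand in for the spatially uniform rates of conventional source zones in the hazard integration.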
The differences amongst these different approaches to PSHA are not simply academic: Bommer et al. [1998] produced hazard maps for upper-crustal seismicity in
El Salvador determined using the Cornell-McGuire approach, two zone-free methods and the kernel method. The four hazard maps, prepared using exactly the same
input, show very significant differences in the resulting spatial distribution of the
hazard and the maximum values of PGA vary amongst the four maps by a factor
of more than two. Similarly divergent results obtained using the Cornell-McGuire
and the Frankel [1995] approach have been presented by Wahlström and Grünthal
[2000] for Fennoscandia.


2.3. ... and which DSHA?


Just as there are many different approaches to PSHA, developed since the publication of Cornell [1968], there is not a single, established and universally accepted
approach to DSHA, in part precisely because of the absence of a classic point of
reference. Arguably, the recent paper by Krinitzsky [2001] could become the standard reference for DSHA that is currently missing from the technical literature.
One important difference between DSHA as proposed by Krinitzsky [2001] and
DSHA as described by Reiter [1990] and Kramer [1996], is that whereas the latter
imply that the ground motions for each scenario should be calculated using median
(50-percentile) values from strong-motion scaling relationships, Krinizsky [2001]
proposes the use of the median-plus-one-standard deviation (84-percentile) values.
Confusion regarding the meaning of DSHA is created by the use of the word
"deterministic" to describe scenarios obtained by deaggregation of PSHA, a subject discussed in Sec. 3. Romeo and Prestininzi [2000] obtain design earthquake
scenarios by manipulation of magnitude-distance pairs found from deaggregation
and refer to these as "deterministic reference events"; to add to the confusion, the
paper also uses the term in the sense defined in Sec. 2.1, assigning the maximum
magnitude earthquake to an active fault.
It is sometimes thought that DSHA is mainly applicable to site-specific hazard
assessments, but there are examples of deterministic seismic hazard maps, such
as those produced by the California Division of Mines and Geology for Caltrans
[Mualchin, 1996]. The map is prepared by assigning the maximum credible earthquake (MCE) to each known active or potentially active fault, calculating the resulting ground motions, and mapping contours of the highest values of the chosen
ground-motion parameter. Anderson [1997] argues for supplementing probabilistic
maps with such deterministic scenario maps to provide insight into what might
happen if particular faults actually rupture during the design life of a project.
The hazard mapping method developed by Costa et al. [1993] for Italy, and subsequently applied to many other countries [e.g. Alvarez et al., 1999; Aoudia et al.,
2000; Panza et al., 1996; Radulian et al., 1996] has also been called "deterministic",
although it is significantly different from DSHA as described in Sec. 2.1. The method
is based on the generation of synthetic accelerograms at the nodes of a grid, from
which contours of the peak motions are drawn. One notable feature of the approach
is that the choice of grid size results in an arbitrary minimum source-to-site distance
of about 15 km, which is hardly a worst-case scenario.
2.4. Combined DSHA and PSHA
The apparently irreconcilable contradiction between PSHA and DSHA has not been
sufficient to prevent their combined use in some recent applications. In common with
most seismic design codes, the 1997 edition of the Uniform Building Code (UBC97)
defines the design earthquake actions on the basis of a zonation map corresponding
to a 475-year return period, used to anchor a response spectrum. Within Zone 4,
for any site within 15 km of an identified active fault capable of producing an
earthquake of M 6.5 or greater, the near-source factors Na and Nv are applied to
increase the spectral ordinates for the effects of rupture directivity [e.g. Somerville
et al., 1997]. Arguably there is a probabilistic element in this procedure since the
slip rate is used in addition to the maximum magnitude to classify the fault and
thus determine the values of the factors. However, the application of Na and Nv is
essentially deterministic: the implicit assumption is that if there is an active fault
within 15 km of the site, it will rupture during the design life of the structure and
furthermore it will rupture in such a way as to produce forward directivity effects
at the site.
A more explicit combination of DSHA and PSHA has been used in the preparation of the maximum considered earthquake (for which the acronym MCE is,
confusingly, also used) maps for the 1997 NEHRP Recommended Provisions for
Seismic Regulations for New Buildings [Leyendecker et al., 2000]. Before continuing, it is important to note here that in this context MCE has a different meaning
from that within DSHA mentioned above. Most importantly, and this is a source
of much potential confusion, in the DSHA context the "earthquake" refers to
an event defined by magnitude and location, whereas in PSHA "earthquake" now
refers to the ground motion at the site. Two maps are produced, giving contours of
spectral ordinates at 0.2 and 1.0 s, one using PSHA for a probability of exceedance
of 2% in 50 years, the other deterministic, using active fault locations and activities combined with median values from strong-motion scaling relations multiplied
by a factor of 1.5. Areas are defined on the probabilistic map where the ordinates
exceed 1.5g and 0.6g for periods of 0.2 and 1.0 s respectively; within these plateaus,
the values from the deterministic map are used instead wherever these are lower
than those from the probabilistic map. This process effectively uses DSHA to cap
the results of PSHA by not allowing motions higher than those corresponding to
a deterministic scenario. Wherever the deterministic values are higher than those
from the probabilistic map, the latter are retained; hence, the conclusion is that
the probabilistic estimates should not exceed the deterministic estimates, which are
effectively considered therefore as an upper bound. The validity of this assumption
is questionable, however, in part due to the possibility that a significant active fault
has not been recognised, but more importantly because of the very different treatment of uncertainty in the preparation of the two maps, a subject explored further
in the next section.
3. Earthquakes, Seismic Actions and Seismic Hazard
One of the most fundamental differences between DSHA and PSHA is the transparency or otherwise of the underlying features of the earthquake processes and in
the treatment of the uncertainties associated with current models of these processes.
In this section, the two methods are considered with respect to their treatment of
uncertainty and their relationship with real earthquake processes.


3.1. Earthquake scenarios and seismic actions


The first and most fundamental output from a PSHA is a hazard curve, a plot
of the values of the annual rate of exceedance, return period or the probability
of exceedance within a particular design life against a selected ground-motion or
response parameter (Fig. 1).
Fig. 1. Seismic hazard curves obtained for San Salvador (El Salvador) using the median values from the attenuation relationship and including the integration across the scatter [Bommer et al., 2000].
The values on a hazard curve convey whether the area is of low, moderate
or high seismic hazard and its slope may indicate if the larger earthquakes have
relatively short or long recurrence intervals. Beyond this, a hazard curve for a single ground-motion parameter tells one almost nothing about the nature of earthquakes likely to affect the site. For a given exceedance probability, the curves imply
that the corresponding level of ground motion increases smoothly and continuously
with the period of exposure. This is the inevitable result of calculations based on
random spatial distribution of earthquakes and a continuous magnitude-frequency
relationship. In passing it is worth noting that the validity of the Gutenberg-Richter
recurrence relationship, which forms the backbone of PSHA, has been questioned by
several researchers [Krinitzsky, 1993; Speidel and Mattson, 1995; Hofmann, 1996].
Several centuries from now, when some accelerograph stations have been operating
for thousands of years, it will be possible to plot the actual variation of average ground-motion parameters with time to observe how well our hazard curves
are modelling what actually may happen at any given site. The results will of
course be a series of step functions rather than a smooth curve, but if the hazard
curve provided a good fit to the recorded values, this would vindicate PSHA. In
the meantime, it is not actually possible to validate the results of a probabilistic
seismic hazard assessment.
The characteristic earthquake model, in which major faults produce large earthquakes of similar characteristics at more or less constant intervals, is clearly not
consistent with the idea that hazard varies gradually with time. The model was
originally proposed by Singh et al. [1983] for major events in the Mexican subduction zone, where earthquakes of magnitude above 8 occur quasi-periodically and
there is an absence of activity in the magnitude range 7.4 to 8.0 (Fig. 2). The concept of characteristic earthquakes has been subsequently applied to major crustal
faults and reinforced by paleoseismology [Schwartz and Coppersmith, 1984]. For a
given site affected by a single source with a characteristic earthquake, the ground
motion expected at the site will clearly jump by a certain increment when the
characteristic earthquake occurs.
Fig. 2. Relationship between magnitude and recurrence intervals for earthquakes of M 7.1 and greater in the Mexican subduction zone [Hong and Rosenblueth, 1988].
The concept of characteristic earthquakes may not be limited, however, to large
events on plate boundaries and major faults. Main [1987] identified characteristic
earthquakes of magnitude about 4.6 in the sequence preceding the 1980 eruption
of Mount St. Helens. Another example may be the upper-crustal seismicity along
the Central American volcanic chain [White and Harlow, 1993]. These destructive
events occur at irregular intervals, often in clusters, around the main volcanic centres (Fig. 3). Recurrence relationships derived for the entire volcanic chain clearly
indicate a bilinear behaviour, reminiscent of the characteristic model (Fig. 4). The
recurrence of these destructive events, sometimes with almost identical locations
and characteristics [Ambraseys et al., 2001], obviously suggests itself for a deterministic treatment, as has been done for example in the microzonation study of San
Salvador [Faccioli et al., 1988].
Fig. 3. Contours of MMI > VII for upper-crustal earthquakes in and around San Salvador during the last three centuries [Harlow et al., 1993].

Fig. 4. Magnitude-frequency relationships for the Central American volcanic chain using the
seismic catalogue for different periods: clockwise from top left 1898-1994, 1931-1994 and
1964-1994 [Bommer et al., 1998].


3.2. Seismic hazard from multiple sources of seismicity


In the simple descriptions of DSHA and PSHA given in Sec. 2.1, apart from the
fundamental difference related to the units of time, the most important distinction between the two approaches is in the way the hazard from different sources of
seismicity are treated. DSHA treats each seismogenic source separately and their
influence on the final outcome is entirely transparent. PSHA combines the contributions from all relevant sources into a single rate for each level of a particular
ground-motion parameter. The consequence is that if the hazard is calculated in
terms of a range of parameters, such as spectral ordinates at several periods, the
final results will generally not be compatible with any physically feasible earthquake scenario. In recent years, many deaggregation techniques have been developed to identify the earthquake scenarios that contribute most significantly to the
design motions obtained from PSHA [McGuire, 1995; Chapman, 1995, 1999; Bazzurro and Cornell, 1999; Musson, 1999]. The output of these techniques is often
that more than one scenario must be defined in order to match different portions of the uniform hazard spectrum (UHS). This is the reason that Romeo and
Prestininzi [2000] needed to use an "adjusted design earthquake", altering the scenario found by deaggregation, in order for the resulting spectrum not to fall below
the UHS.
If such manipulation of the scenarios found from deaggregation is required or if
different scenarios from different sources are to be used, this begs the question of
why the influence of different seismogenic sources should be combined in the first
place. The answer would appear to lie in the primordial importance that PSHA
attaches to the rate of exceedance of ground-motion levels. The total probability
theorem requires that all earthquakes contributing to the rate of exceedance of a
given ground-motion level be considered simultaneously and therein PSHA creates
for itself considerable physical problems for the sake of mathematical rigour.
For now let us assume that there is a sound and rigorous basis for the selection
of the design probabilities, an assumption that will be revisited later. If many
earthquake sources affect a site, and consequently a wide range of M-R scenarios are
possible, selecting the design seismic actions at a particular annual frequency may
be a rational approach. If, however, characteristic earthquakes are identified, what
is the result of this approach? The answer will depend on the relation between the
characteristic recurrence interval and the design return period; for large magnitude
earthquakes on plate boundaries the result could lie somewhere in the no-man's
land above the ground-motion levels from the background seismicity and below
the ground motions due to the occurrence of the characteristic event.
Let us return to the case of San Salvador (Fig. 4); the historical record over
three centuries reveals recurrence intervals of between 2 and 65 years for local
destructive earthquakes, with an average of about 22 years [Harlow et al., 1993].
What then is the result of selecting ground motions corresponding to a 475-year
return period? Unlike characteristic events on major faults or subduction zones,
there is evidently a degree of uncertainty associated with the location of future
events, hence a probabilistic approach may help to select an appropriate source-to-site distance (although it cannot guarantee that the distance for any site in the
next earthquake will not actually be shorter). The source-to-site distance of the
hazard-consistent scenario has indeed been found to decrease as the return period
grows, but the main effect of going to lower and lower probabilities of exceedance
is just to add more and more increments of standard deviation to the expected
motions from a typical earthquake scenario [Bommer et al., 2000].
3.3. The issue of uncertainty
All seismic hazard assessment, based on our current knowledge of earthquake processes and strong-motion generation, must inevitably deal with very considerable
uncertainties. Fundamental amongst these uncertainties are the location and magnitude of future earthquakes: PSHA integrates all possible combinations of these
parameters while DSHA assumes the most unfavourable combinations.
Another very important element of the uncertainty is that associated with the
scatter inherent to strong-motion scaling relationships, which again is treated differently in the two methods. PSHA generally now includes integration across the
scatter in the attenuation relationship as part of the calculations, although this has
not always been the case: the US hazard study by Algermissen et al. [1982] is based
on median values. In DSHA, as noted previously, the scatter is either ignored, by
using median values, or accounted for by the addition of one standard deviation to
the median values of ground motion. Both approaches have shortcomings, discussed
in the next section and also in Sec. 5.3, but it is also important to note here that
the different treatments of scatter in the two methods question the validity of the
comparisons that are sometimes made between the two. In particular, the use of
the deterministic map to cap the ground motions calculated probabilistically in the
NEHRP MCE maps, discussed in Sec. 2.4, ignores completely the fact that the deterministic motions are based on median values while the probabilistic values may,
for a 2475-year return period, be related to values more than 1.5 standard
deviations above the median [Bommer et al., 2000]. Like is not being compared
with like.
An important development in the understanding of the nature of uncertainty in
strong-motion scaling relationships is the distinction between epistemic and aleatory
uncertainty. In very simple terms, the epistemic uncertainty is due to incomplete
data and knowledge regarding the earthquake process and the aleatory uncertainty
is related to the unpredictable nature of future earthquakes [e.g. Anderson and
Brune, 1999]; a more complete definition of epistemic and aleatory uncertainty is
provided by Toro et al. [1997]. A major component of the uncertainty is due to all
of the parameters that are currently not included in simple strong-motion scaling
relationships, whose inclusion would reduce the scatter. Consider equations for duration based on magnitude, distance and site classification: additional parameters
that could be included to reduce the scatter are those related to the directivity
[Somerville et al., 1997] and the velocity of rupture, and the degree to which the rupture is
unilateral or bilateral [Bommer and Martinez-Pereira, 1999]. This would reduce the
scatter in the regressions to determine the equations, but it would not necessarily
reduce the uncertainty associated with estimates of future ground motions because
the point of rupture initiation, and the direction and speed of its propagation, are
currently almost impossible to estimate for future events. The reduction of the
uncertainty in the strong-motion scaling relationship has simply transferred it to
the uncertainty in the other parameters of the hazard model. PSHA would then
handle this by considering more and more scenarios to cover all of the possible variations of each feature, whereas DSHA would simply assume their least favourable
combination.
3.4. Acceleration time-history representation of seismic hazard
The most complete representation of the ground-shaking hazard is an acceleration
time-history. The specifications for selecting or generating accelerograms in current
seismic design codes are generally unworkable, largely because of the representation
of the basic design seismic actions in terms of probabilistic maps and uniform hazard
spectra [Bommer and Ruggeri, 2002].
The requirement of dynamic analysis for critical, irregular and high ductility
structures and the need to define appropriate input has been one of the main motivations for the development of deaggregation techniques such as Kameda and
Nojima [1988], Hwang and Huo [1994], Chapman [1995] and McGuire [1995], the
last of which has been widely adopted into practice. Since the PSHA calculations
include integration across all possible combinations of magnitude (M ) and distance
(R) and also across the scatter in the strong-motion relationships, the deaggregation must define the earthquake scenario in terms of M , R and , the number of
standard deviations above the median. Bommer et al. [2000] have shown that if a
single seismogenic source is considered, it is possible to define the hazard-consistent
scenario almost exactly rather than as bins of M , R and values. Performing
the deaggregations of hazard determined with and without integration across the
scatter reveals interesting features of the way PSHA treats the uncertainty: for a
100 000-year return period, the scenario without scatter is an earthquake of Ms 6.6
at 2 km from the site; with scatter, the scenario becomes an Ms 6.3 earthquake
at 6 km, together with 2.7 standard deviations above the median. The implications of this for hazard assessment in general are discussed in Sec. 5.4 but here the
interest is to consider the implications for the selection of hazard-consistent real
accelerograms for engineering design. In the first case, records could be searched
that approximately match the M-R pair of 6.6 and 2 km; if a sufficient number of accelerograms were found, the uncertainty would be represented by their own scatter,
whereas if only a few records were found they could be scaled to account for the uncertainty. That the scatter is inherent in selected suites of accelerograms is reflected
by specifications such as those in the 1999 AASHTO Guidelines for Base-Isolated
Bridges: if three records are used in dynamic analysis, the maximum response values are used for design, whereas if more than seven are employed, it is permitted
to use the average response. In the case of the second earthquake scenario, one is
faced with the almost impossible task of finding records that approximately match
the M-R pair of 6.3 and 6 km, for which all ground-motion parameters are simultaneously almost three standard deviations above the median for this M-R scenario.
This problem can be overcome by carrying out the hazard assessment considering
the joint probabilities of different strong-motion parameters [Bazzurro, 1998] but
such techniques are some way from being adopted into general practice.
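A minimal illustration of such a deaggregation, reusing the toy single-source model sketched in Sec. 2.1 (the distance there is fixed, so only magnitude and epsilon vary, and all parameter values remain assumptions):

```python
import numpy as np
from scipy.stats import norm

M_MIN, M_MAX, B_VALUE, R_KM, SIGMA_LN = 5.0, 7.5, 1.0, 20.0, 0.6  # assumed, as before

def median_ln_pga(m):
    # Same invented attenuation relation as in the earlier sketch
    return -3.4 + 0.9 * m - 1.3 * np.log(R_KM + 10.0)

target = 0.2  # PGA level (g) whose exceedances are to be deaggregated
m = np.linspace(M_MIN, M_MAX, 400)
beta = B_VALUE * np.log(10.0)
f_m = beta * np.exp(-beta * (m - M_MIN)) / (1.0 - np.exp(-beta * (M_MAX - M_MIN)))
eps = (np.log(target) - median_ln_pga(m)) / SIGMA_LN  # epsilon needed at each magnitude
w = f_m * norm.sf(eps)       # contribution of each magnitude to the exceedance rate
w /= w.sum()                 # normalise to a deaggregation distribution

m_bar = np.sum(m * w)
eps_bar = (np.log(target) - median_ln_pga(m_bar)) / SIGMA_LN
print(f"mean scenario: M {m_bar:.2f} with epsilon = {eps_bar:.2f}")
```

In this toy model, raising the target motion increases the mean scenario's magnitude only slowly while its epsilon climbs steadily, which is the behaviour described for the San Salvador example above.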

4. Seismic Hazard and Seismic Risk


Seismic hazard outside of the context of seismic risk is little more than an academic
amusement. The development of better practice is not always well served by papers
that present local, regional or even global hazard studies whose intended purpose
is never stated or those that propound the virtues of one particular approach or
method as the best for all applications.
4.1. Hazard assessment as an element of risk mitigation
The seismic risk in the existing built environment may be calculated in order to
design financial (insurance and reinsurance) or physical (retrofit and upgrading)
mitigation measures. For planned construction, hazard estimates are required so
that appropriate measures can be taken to control the consequent levels of risk
through relocation (exposure) or earthquake-resistant design (vulnerability).
In the case of financial loss estimation for the purposes of insurance, the probability associated with different levels of risk is a vital part of the information
required to fix premiums and to guide the purchase of reinsurance. In the case of
seismic design, of new or existing structures, the target is an acceptable level of
risk, for which probability may be needed. In the PSHA approach, once the design
probability of exceedance is chosen, it is assumed that design for the corresponding
level of ground motion will provide an acceptable level of risk.
The important issue is always what is at stake since what matters is the
possibility of loss of function due to an earthquake, whether that be to safely
house people, to provide rapid transportation, emergency medical care or a constant
energy supply, or to contain radioactive material. Krinitzsky [2001] proposes that
any project for which the consequences of failure are intolerable, to the owner and/or
the users, should be considered critical. There are strong arguments for using a deterministic approach for critical facilities [e.g. Krinitzsky, 1995b] although current
DSHA procedures may warrant improvement (Sec. 5.4).
The definition of "intolerable" is, of course, subjective, although it is unlikely that
anyone would contest the adjective being applied to the failure of a nuclear power
plant. In this context, let us consider the seismicity of Great Britain, which by
anyone's standards is low. Ambraseys and Jackson [1985] review several centuries
of seismicity in Britain (Fig. 5) and find no earthquake larger than Ms 5.5 with the
additional blessings that these events are very infrequent (recurrence intervals
nationally of at least 300 years) and that focal depths tend to increase with
magnitude. Accounts of losses of life or property due to British earthquakes are
hard to come across. This lack of significant activity and of any consensus on the
principles for zoning this seismotectonically inhomogeneous territory [Woo, 1996]
has not prevented PSHA being performed. The probabilistic study by Musson and
Winter [1997] found 10 000-year PGA values below 0.10g in most of the country but
nudging past 0.25g at isolated locations, no doubt in part because of their use of
rather high values of maximum magnitude from 6.2 to 7.0. An earlier study by Ove
Arup and Partners [1993] found similar results at selected locations and also carried
out a probabilistic risk assessment. It was found that annual earthquake losses were
only 10% of those due to meteorological hazards, although this was largely the
result of the unlikely but still credible scenario of a magnitude 6 earthquake directly
under a city [Booth and Pappin, 1995]. A useful, but complex and lengthy, study
would compare these probabilistic losses with the cost of incorporating earthquake-resistant design into UK construction, and actually perform an iterative analysis to
determine the reduction in risk from different levels of investment in earthquake-resistant design and construction. This would allow an informed decision regarding
the benefit of the investment compared to the loss that may be prevented, taking
into account competing demands on the same resources.
Fig. 5. British earthquakes over a period of more than seven centuries. Magnitudes: a: M ≥ 5.5; b: 5.5 > M ≥ 5.0; c: 5.0 > M ≥ 4.5; d: 4.5 > M ≥ 4.0 [Ambraseys and Jackson, 1985].
The issue of seismic safety definitely cannot be dismissed for nuclear power
plants built in the UK, for which elaborate probabilistic hazard assessments have
been carried out to define the 10 000-year design ground motions. The real hazard is
posed by moderate magnitude [M ~ 5] earthquakes, whose association with tectonic
structures is tenuous, that could produce damaging ground motions over a radius
of about 5-10 km. In the author's opinion, unless there are good scientific reasons
to exclude it as unfeasible, the only rational design basis for seismic safety in such
a setting is a DSHA based on an earthquake of about Ms 5.5, which would require
rupture on a fault that could very easily escape detection, occurring close to or even
below the power station. This approach could be classified as one of conservatism.
In their treatise on the philosophy of seismic hazard assessment for nuclear power
plants in the UK, Mallard and Woo [1993] argue against the use of conservatism
and in favour of a systematic methodology for quantifying uncertainty. The tool
proposed for this procedure is the logic-tree, a device that has become part of the
stock-in-trade of PSHA enthusiasts. As applied to seismic hazard assessment, the
etymology of the second part of its name is obvious from its dendritic structure,
but the logic is sometimes harder to detect.
4.2. The use and abuse of probability
At this point, three principles for seismic hazard assessment can be stated:


(1) Seismic hazard can only be rationally interpreted in relation to the mitigation
of the attendant risk.
(2) If determinism is taken to mean the assignment of values by judgement, it is impossible to perform a seismic hazard assessment, by any method, without many
deterministic elements. These should be based, as far as possible, exclusively
on the best scientific data available.
(3) Probability, at least in a relative sense, is essential to the evaluation of riskc
(but not necessarily to its calculation).
c For critical structures, any finite probability of failure may be considered intolerable.
The application of logic-trees to seismic hazard assessment is a mechanism for
handling those epistemic uncertainties that cannot, unlike the aleatory scatter in
strong-motion scaling relationships, be statistically measured. The different options
at each step are considered and each is assigned a subjective weight that reflects the
confidence in each particular value or choice. The results allow the determination of
confidence bands on the mean hazard curve, which, as Reiter [1990] rightly points
out, places "an uneasy burden" on those who have to use the results. Krinitzsky
[1995a] presents powerful arguments against the use of logic-trees for hazard assessment, while illustrating how, without the contentious application of weights, it
can be a useful tool for comparing risk scenarios. In the context of this study, and
in terms of the three principles outlined above, the logic-tree for hazard assessment
appears to be upside down: deterministic judgementsd are turned into numbers
and treated together with observational data to calculate probabilities that then
fix the design ground motions. Returning to the point made in Sec. 3.2, this again
points to PSHA having a degree of confidence in its probabilities, which, to the
author, does not seem justified. Reiter [1990] notes that different studies may
conflict over whether the likelihood of exceeding 0.3g at site X is 10⁻³ or 10⁻⁴
and whether the likelihood of exceeding the same acceleration at location Y is 10⁻⁴
or 10⁻⁵. Given how PSHA is actually applied, this means that for a specified
annual rate of exceedance, estimates of PGA can vary by factors of two or more
(Fig. 6). It is important to note that a degree of this divergence in results can
be removed through application of the procedures proposed by the Senior Seismic
Hazard Analysis Committee [SSHAC, 1997]. However, the available earthquake
and ground-motion data for most parts of the world is such that any estimate
of the ground motion for a prescribed rate of exceedance will inevitably carry an
appreciable degree of uncertainty.
d Krinitzsky [1995a] describes these as "degrees of belief" that are no more quantifiable than love or taste.

Fig. 6. Contours of 475-year PGA levels for El Salvador produced by independent seismic hazard studies [Bommer et al., 1997].
So the big question is, how are the probabilities fixed? Hanks and Cornell [2001]
explain that the starting point is a performance target expressed as a probability
of failure, P_f, which is set by life safety concerns or perhaps "political fiat". This
probability is defined by the integral:
$$P_f = \int_0^{\infty} H(a)\, F'(a)\, da \qquad (2)$$

where H(a) describes the hazard curve (annual rate of exceedance of different levels
of acceleration, a), and F'(a) is the derivative of the fragility function (the probability of failure given a particular level of acceleration). This is fine if an iterative
process is carried out in order to determine the appropriate fragility curve, which
would then define the design criteria, to give the desired level of P_f. If the fragility
curve is sufficiently steep, once it has been determined Hanks and Cornell [2001]
show how Eq. (2) can be approximated to determine the level of H(a) to be used as
the basis of design, although this begins to get complicated because to obtain F(a)
the design must already have been carried out. However, this is not how the design
levels used in current practice have been fixed, or else it is quite remarkable that
nearly every country in the world, from New Zealand to Ethiopia, regardless of
seismicity, building types or construction standards, all came up with 0.002 as the
design annual rate of exceedance!
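A numerical sketch of Eq. (2), with an assumed power-law hazard curve and an assumed logistic fragility function standing in for real inputs:

```python
import numpy as np

# Assumed toy inputs: a power-law hazard curve and a logistic fragility function.
a = np.linspace(0.01, 2.0, 2000)                   # acceleration levels (g)
H = 1e-3 * (0.1 / a) ** 2                          # hazard curve H(a), per year
F = 1.0 / (1.0 + np.exp(-(a - 0.8) / 0.1))         # fragility function F(a)
F_prime = np.gradient(F, a)                        # derivative of the fragility
P_f = np.sum(H * F_prime) * (a[1] - a[0])          # Eq. (2), evaluated numerically
print(f"annual probability of failure P_f = {P_f:.2e}")
```

Because the assumed fragility is steep around 0.8g, P_f lands close to H(0.8), which is essentially the approximation of Hanks and Cornell [2001]; choosing the fragility (that is, the design) to achieve a target P_f is the iterative step noted above.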
In fact, the almost universal use of the 475-year return period in codes can be
traced back to the hazard study for the USA produced by Algermissen and Perkins
[1976], which was based on an exposure period of 50 years (a typical design life)
and a probability of 10% of exceedance, whose selection has not been explained.
Every code and regulation since has either followed suit or else adopted its own
return periods: Whitman [1989] reports proposals by the Structural Engineers Association of Northern California to use a 2000-year return period, chosen because
the committee believed it reflected "a probability or risk comparable to other risks that
the public accepts in regard to life safety". If one considers that in most structural
codes the importance factor increases the design ground motions by a constant
factor, this means that the actual return period of the design accelerations will be
different from 475 years and will probably vary throughout a country. In the development of the NEHRP Guidelines, this has actually been done intentionally in order
to provide a "uniform margin against collapse", resulting in design for motions corresponding to 10% in 50 years in San Francisco and 5% in 50 years in Central and
Eastern US [Leyendecker et al., 2000]. A review of seismic design regulations around
the world reveals a host of design return periods, the origin of which is rarely if
ever explained. The AASHTO Guidelines specify 500 years for "essential" bridges,
2500 years for "critical" bridges.e In the Vision 2000 proposal for a framework
for performance-based seismic design, return periods of 72, 244, 475 and 974 years
have been specified as the basis for fixing demand for different performance levels
[SEAOC, 1995]. It is hard not to feel that some of these values have been almost
pulled from a hat, all the more so for the 10 000-year return periods specified for
safety-critical structures such as nuclear power plants.
e A wonderful example occurred in a recent project: the design return period was finally fixed by the engineer at 2000 years because the bridge was judged to be "almost critical"!

5. The Best of Both Worlds


The three principles stated in Sec. 4.2 imply that it is not possible to perform useful
seismic hazard assessment that is free from determinism or probability, since both
are essential ingredients. Hence it is logical to look for ways that make best use of
both these components.
5.1. Hybrid approaches
Hanks and Cornell [2001] assert that the main difference between deterministic and
probabilistic approaches is that PSHA has units of time and DSHA does not. This
is true in the brief descriptions of the two methods presented at the beginning of
the paper but it does not mean that time and probabilities cannot be attached
to deterministic scenarios. Orozova and Suhadolc [1999] assign recurrence rates
to earthquakes of particular magnitude in order to adapt the method of Costa
et al. [1993] discussed in Sec. 2.3 to create a "deterministic-probabilistic"
approach. Furthermore, it is common in loss estimation to model the hazard as a
series of earthquake scenarios, each of which is assigned an average rate from the
recurrence relationships [e.g. Frankel and Safak, 1998; Kiremidjian, 1999; Bommer
et al., 2002].
It has in fact already become well recognised that there is not a mutually exclusive dichotomy between DSHA and PSHA, opening up the possibility of exploring
combined methods. Reiter [1990] says that in many situations the choice has been
rephrased so that the issue is not whether but rather to what extent a particular
approach should be used. McGuire [2001] asserts that determinism vs. probabilism is not a bivariate choice but a continuum in which both analyses are conducted,
but more emphasis is given to one over the other. It is, however, interesting to
note that both Reiter [1990] and McGuire [2001], who make positive contributions
to moving away from the dichotomy, point towards deaggregation of PSHA as an
important tool in hybrid approaches.
5.2. Against orthodoxiesf
How should the method for seismic hazard assessment be chosen? Reiter [1990]
rightly argues that the analysis must fit the needs. Nonetheless, there is a tendency amongst many in the field of earthquake engineering to argue for
the superiority of one approach or the other as the ideal, a panacea for all situations. Such arguments carry the danger of establishing orthodoxies, with all their
attendant perils and limitations. Hazard assessment should be chosen and adapted
not only to the required output and objectives of the risk study of which it forms
part, but should also be adapted to the characteristics of the area where it is being
applied and the level and quality of the data available. There is no need to establish
standard approaches that may be well suited to some settings and not to others, or
which may be appropriate now and yet not be in a few years' time as understanding
of the earthquake process advances. Every major earthquake that is well recorded
by accelerographs throws up new answers and poses new questions, leading to rapid
evolution. The near-field directivity factors proposed by Somerville et al. [1997], for
example, have had to be revised not only numerically but also conceptually following the 1999 earthquakes in Turkey and Taiwan, a mere two years after their
publication. The ability to locate and identify active geological faults in continental
areas has also advanced by leaps and bounds over recent years, and continues to
do so [e.g. Jackson, 2001].
What are the bases for the orthodoxies of PSHA and DSHA? For the PSHA
church, the cornerstone of its creed seems to be the overriding importance of the
probability of exceedance of particular levels of ground motion, despite the fact
that teams of experts working with the same data can easily come up with answers
that differ by an order of magnitude. For the DSHA temple, the sacred cow is
the worst-case scenario, even though this is not what they actually produce, as
discussed in Sec. 5.4. One cannot get away from the fact that
with current levels of knowledge of earthquake processes a major component of
seismic hazard assessment is judgement and risk decisions are governed not only
by scientific and technical data but also by opinion. The word orthodoxy, from
the Greek, has exactly that paradoxical and unscientific meaning: correct (ortho)
opinion (doxa). The Oxford Dictionary goes on to define the word also to mean
"not independent-minded" and "unoriginal".
f In the period 182 to 188 A.D., at a time when the early Christian church had gone beyond only having enemies outside its camp, St. Irenaeus of Lyons penned a multi-tome treatise entitled Adversus Haereses (Against Heresies). The effects of such rigid insistence on a particular, narrow philosophy in stifling religious and scientific thought in later centuries are known only too well.
5.3. Alternative approaches to seismic hazard mapping
One area in which PSHA currently enjoys almost total domination is in seismic
hazard mapping, particularly for seismic design codes. The difference between deterministic and probabilistic hazard maps highlights one extremely important difference between current practice of the two methods, quite apart from the debatable issue of whether or not units of time are included. Probabilistic maps combine
the influence and effects of different sources of seismicity, often into a single map
showing contours of PGA that are then used to anchor spectral shapes that relate
only to the site conditions and which somehow try to cover all possible earthquake
scenarios.
Donovan [1993] states that the ground motion portion of codes represents "an
attempt to produce a best estimate of what ground motions might occur at a site
during a future earthquake". Current code descriptions of earthquake actions clearly
do not fulfil this objective for many reasons, particularly because of the practice of
combining the hazard from various sources of seismicity. Here it is hard to justify
this by insisting on the importance of the total probability when codes are generally
based on the arbitrary level of the 475-year return period and, even more importantly, a return period which holds only for the zero period spectral ordinate. In
fact, because of the use of discrete zones to represent continuously varying hazard
over geographical areas, the actual return period will often not correspond to the
475-year level even at the spectral anchor point. The uniform hazard spectra of
seismic codes do not represent motions that might occur during a plausible future
earthquake, whence the difficulty codes have in specifying acceleration time-histories.
In certain regions, particularly those affected by both small, local earthquakes and
larger, distant earthquakes, the application of spectral modal analysis using the
code spectrum can amount to designing a long-period structure for two different
types of earthquake occurring simultaneously. It is important to note here that
several codes, and in particular the NEHRP provisions [FEMA, 1997], use two parameters to define the spectrum, hence the spectral shape does vary with hazard
level, but the above comments still apply to some extent.
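As an aside, it is worth recalling that the 475-year figure is itself nothing more than the consequence of combining an assumed Poisson process with the conventional choice of a 10% probability of exceedance in a 50-year design life (a standard result, not derived in this paper):

$$T_R = \frac{-t}{\ln(1-P)} = \frac{-50}{\ln(1-0.10)} \approx 475 \text{ years}$$

The apparent precision of the number thus encodes two round choices rather than any calibrated risk target.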
If it is accepted that the total probability theorem is not an inviolable principle
in defining earthquake actions, especially when the calculation of that probability is subject to such uncertainty and presented so approximately, more imaginative
procedures can be used. The hazard resulting from different sources of seismicity
can be treated individually and their resulting spectra defined separately. This
is, in fact, already done: the seismic design codes of both China and Portugal
define one spectrum for local events and another for more distant large magnitude
earthquakes. Once different types of seismicity are separated, it will usually be found
that at most sites one source dominates. One possibility is then to deaggregate the
hazard associated with the dominant source for each site and hence characterise the
hazard in terms of actual M -R scenarios. The hazard could then be represented by
maps showing contours of M and R for different types of seismicity. The problem
still remains as to how to select an appropriate return period at which to perform
the deaggregation, but in many cases it will be possible to define the M -R pairs
deterministically [Bommer and White, 2001]. This would differ from the maps of
magnitude and distance presented by Harmsen et al. [1999] because a single pair
of maps would cover all strong-motion parameters rather than just the spectral
ordinate at a single response period.
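To make the bookkeeping of such a deaggregation concrete, the sketch below illustrates the standard calculation in outline: the contribution of each magnitude-distance scenario to the exceedance of a target ground motion is its annual rate multiplied by its exceedance probability, normalised by the total. All numbers, and the toy attenuation function, are hypothetical and chosen purely for illustration; a real study would use a published ground-motion model and a full discretisation of the sources.

```python
import math

# Illustrative M-R deaggregation (all numbers hypothetical).
# Each scenario: (magnitude, distance in km, annual rate of occurrence).
scenarios = [
    (5.5, 10.0, 0.020),   # small, local events
    (6.5, 25.0, 0.005),
    (7.5, 90.0, 0.002),   # large, distant events
]

SIGMA = 0.6  # scatter of the ground-motion model in ln units (typical order)

def ln_median_pga(m, r):
    """Toy attenuation relation returning ln(PGA in g); not a published model."""
    return -4.0 + 0.9 * m - 1.2 * math.log(r + 10.0)

def p_exceed(m, r, target_g):
    """P(PGA > target_g | M = m, R = r) assuming lognormal scatter."""
    eps = (math.log(target_g) - ln_median_pga(m, r)) / SIGMA
    return 0.5 * math.erfc(eps / math.sqrt(2.0))

target = 0.25  # ground-motion level (g) whose hazard is being deaggregated
contrib = [rate * p_exceed(m, r, target) for m, r, rate in scenarios]
total = sum(contrib)

for (m, r, _), c in zip(scenarios, contrib):
    print(f"M = {m:.1f}, R = {r:5.1f} km: {100.0 * c / total:5.1f}% of hazard")

# The modal M-R pair is the scenario that could then be mapped directly.
(m, r, _), _ = max(zip(scenarios, contrib), key=lambda sc: sc[1])
print(f"Modal scenario: M = {m:.1f} at R = {r:.1f} km")
```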
Another issue is how to incorporate the uncertainty rationally without resorting to very complex representations of hazard in terms of M-R-ε contours [Bazzurro
et al., 1998]. The important point is that once the hazard is characterised by pairs of
magnitude and distance, every required parameter of the ground motion, including
ordinates of spectral acceleration and displacement in both the vertical and horizontal directions, and duration, can be computed directly and the required criteria
for selecting or generating acceleration time-histories are provided [Bommer, 2000].
Within the framework of performance-based seismic design it is already envisaged
that several levels of earthquake action will be considered in structural design. If
the objective is to obtain improved control over earthquake response to expected
seismic actions, then it is surely a step in the right direction to begin by separating
different types of earthquake action.
5.4. Upper bounds: the missing piece
In 1971, the San Fernando earthquake more than doubled the number of strong-motion
accelerograms available, marking the dawn of the age of curve fitting to clouds of
data to find scaling relationships. It is curious that the fitted curves severely underestimated the most significant ground motion recorded in the earthquake (Fig. 7).

[Fig. 7. Attenuation relationship derived for the 1971 San Fernando earthquake data (Dowrick, 1987).]
The scatter in these relationships is generally assumed to be lognormal and is
invariably large: for spectral ordinates, the 84-percentile values are typically 80–100% higher than the median. This scatter creates difficulties for both DSHA and
PSHA. In PSHA the untruncated lognormal scatter results in the probable maximum ground motion at a particular site increasing indefinitely as the time window
of the PSHA increases, due to the increasing influence of the tail of the Gaussian
distribution on the probabilistic values [Anderson and Brune, 1999]. This has given
rise to different mechanisms for truncating the scatter at a certain number of
standard deviations above the median, which assumes that for all magnitude and
distance combinations, the physical upper bound on the ground motion is always
at a fixed ratio of the median amplitudes. There is also debate regarding the actual
level at which the truncation should be applied: proposals range from 2 to 4.5 standard deviations [Reiter, 1990; Abrahamson, 2000; Romeo and Prestininzi, 2000] and
some widely used software codes employ cut-offs at 6 standard deviations above the
median.
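The consequence of the choice of cut-off is easy to quantify. The following sketch, which assumes a median of 0.2g and a scatter of 0.6 in ln units (values of a representative order only), computes the probability of a given scenario producing a PGA greater than 1.0g under the untruncated distribution and under truncation at various numbers of standard deviations:

```python
import math

SIGMA = 0.6                # ln-units standard deviation (representative order)
LN_MEDIAN = math.log(0.2)  # assumed median PGA of 0.2 g for the scenario

def p_exceed(target_g, n_trunc=None):
    """P(PGA > target_g), optionally truncating the scatter at n_trunc sigma."""
    eps = (math.log(target_g) - LN_MEDIAN) / SIGMA
    p = 0.5 * math.erfc(eps / math.sqrt(2.0))           # untruncated tail
    if n_trunc is not None:
        if eps >= n_trunc:
            return 0.0                                  # beyond the imposed cap
        p_cap = 0.5 * math.erfc(n_trunc / math.sqrt(2.0))
        p = (p - p_cap) / (1.0 - p_cap)                 # renormalised distribution
    return p

for n in (None, 6, 4, 3, 2):
    label = "untruncated" if n is None else f"cut-off at {n} sigma"
    print(f"{label:>18}: P(PGA > 1.0 g) = {p_exceed(1.0, n):.2e}")
```

At this amplitude a 6-sigma cut-off is indistinguishable from the untruncated case, which is why such software defaults impose, in effect, no physical limit at the return periods of engineering interest.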
Untruncated lognormal distributions create even more difficulties for DSHA,
since, at least for critical structures, its basis should be to identify the worst-case
scenario. Firstly, it establishes a maximum credible earthquake (MCE) as the basis
for the design. Abrahamson [2000] points out that if the MCE is determined as
the mean value from an empirical relationship between magnitude and rupture
dimensions, it is not the worst case; the worst case would be the magnitude at
which the probability distribution is truncated. This truncation requires a physical
rather than statistical basis: upper bounds are required, especially because the
regressions are supported by very few data for large magnitudes [Jackson, 1996].
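To put rough numbers on this (using a scatter value of representative order, not taken from any specific relationship): empirical regressions of magnitude on rupture dimensions typically carry a standard deviation of around a quarter of a magnitude unit, so truncating the distribution at, say, two standard deviations would place the worst-case magnitude about half a magnitude unit above the mean-based estimate:

$$M_{max} \approx \bar{M}(A) + 2\sigma_M \approx \bar{M}(A) + 0.5$$

Whether two standard deviations is the appropriate truncation point is precisely the kind of question that demands a physical rather than a statistical answer.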


After fixing the MCE, DSHA calculates design ground motions as the median
or median-plus-one-standard-deviation values from strong-motion scaling relationships. Again Abrahamson [2000] points out that this is not the worst case, but then goes on to state that the worst-case ground motion would be 2 to 3 standard deviations above the mean. The problem is where, between these two limits, to fix the worst case. Probabilistically they are close, corresponding to the 97.7 and 99.9 percentiles respectively, although in terms of spectral amplitudes the addition of the third standard deviation can increase the amplitudes by a factor of about two, which clearly has very significant implications for design. Here again, physical
upper bounds are needed. Can upper bounds be fixed for ground-motion parameters? Reiter [1990] argues that there must be a physical limit on the strength
of ground motion that a given earthquake can generate. It is also interesting to
note that before sufficient strong-motion accelerograms were available for regression analyses, a number of studies considered possible upper bounds on ground-motion parameters [e.g. Ambraseys, 1974; Ambraseys and Hendron, 1967; Ida, 1973;
Newmark, 1965; Newmark and Hall, 1969]. Upper bounds need to be established
and procedures for their application developed that take into account current understanding of epistemic uncertainty. For example, if a deterministic scenario has
included forward directivity effects, does it then make sense to add on two or three
standard deviations when a significant component of the scatter has already been
accounted for?
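A quick check of the arithmetic behind these figures, assuming, as noted above, that the 84-percentile motion is roughly twice the median, i.e. $\sigma_{\ln} \approx \ln 2 \approx 0.69$:

$$\frac{Y_{3\sigma}}{Y_{2\sigma}} = e^{\sigma_{\ln}} \approx 2$$

so the step from roughly the 97.7 to the 99.9 percentile, almost invisible in probability terms, doubles the design amplitude.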
There are many ways that the scatter in attenuation equations can be truncated, including the adaptation of models developed for log-normal distributions
with finite maxima [Bezdek and Solomon, 1983] to ground-motion prediction models [Restrepo-Vélez, 2001]. As PSHA pushes estimates to longer return periods the
issue of truncating the influence of the scatter, in order to avoid physically unrealisable ground-motion amplitudes, becomes more important. The Pegasos project, currently underway to assess seismic hazard at nuclear power plant sites in Switzerland
down to very low rates of exceedance, is taking PSHA to new limits [Smit et al.,
2002]. The ground-motion estimates will be defined by median values, standard
deviations and truncations, with associated confidence intervals for each of these,
for a wide range of magnitude-distance pairs.
5.5. Time-dependent estimates of seismic hazard
In order to make full use of the output from a PSHA, including estimates of uncertainty and the expected levels of ground motion for a range of return periods, it
is necessary that the engineering design also follow a probabilistic approach. Such
approaches are now being developed, particularly through the outstanding work of
Allin Cornell and his students at Stanford University [e.g. Bazzurro, 1998]. However,
there are many obstacles to the adoption of these approaches in routine engineering
practice simply because there are few areas of activity where the saying "time is money" is as true as it is in the construction industry. One could actually conclude that probabilistic assessment of design earthquake actions is far more advanced than probabilistic approaches to seismic design and for this reason the results of
PSHA are generally not used to their full advantage.
The limited time made available for even major engineering projects is reflected
in the very tight deadlines often set for the execution of seismic hazard assessments.
In engineering practice it is not unusual for a hazard assessment for a site to be
carried out in a few weeks, generally making it inevitable that the study will be
based largely on available data published in papers and reports. Under such circumstances, and especially where the design considers the seismic performance at only
one limit state, DSHA can represent an attractive option to define a level of ground
motion that is unlikely to be severely exceeded. One advantage of DSHA is that
it immediately releases the hazard analyst from the responsibility of defining the
design return period, which the client or engineer often passes, quite incorrectly, to
the Earth scientist. More importantly, DSHA is generally more dependent on the
data that is likely to be determined most reliably: the characteristics of the largest
historical earthquakes and the location of the largest geological faults. Since the
actual calculations involved in DSHA are simple, the analyst can carry out sensitivity analyses even when working to a very strict timetable. Furthermore, because
of the transparency of the deterministic approach, peer review can be carried out
swiftly and easily, which may not be the case for a probabilistic assessment where
so many input parameters and assumptions, whose individual influences are often
difficult to distinguish, need to be defined. One cannot escape from the fact that
expert opinion is of overriding importance, regardless of whether the probabilistic
or deterministic approach is adopted. The value of a logic-tree formulation, with weights assigned by one analyst, is questionable in situations where the design engineer (for whom the provision of earthquake resistance is just one of a series of considerations that affect siting, dimensions and detailing) is seeking a single number to characterise the earthquake action. This is not to say that careful use of
multiple expert opinions is not to be encouraged, indeed a Level 4 hazard study as
recommended by SSHAC [1997] provides a very well crafted framework for exactly
such a process. However, the time scale, and hence cost, required to perform such a
study is beyond the means and the schedule of most engineering projects: to date
the SSHAC Level 4 method has only been applied in the Yucca Mountain project
[Stepp et al., 2001] and now the Pegasos project in Switzerland [Smit et al., 2002].
When working to a very short time scale, however, a peer-reviewed DSHA may
be far more useful to the project engineer and ultimately to the provision of an
adequate level of earthquake resistance.
6. Conclusions
There is not a simple dichotomy between probabilistic and deterministic approaches to seismic hazard assessment: many different probabilistic approaches exist and there is no standard methodology for deterministic approaches. There is not,
nor does there need to be, a method that can be applied as a panacea in all situations, since the available time and resources, and the required output, will vary
for different applications. There is no reason why a single approach should be ideal
for drafting zonation maps for seismic design codes, for loss estimation studies in
urban areas, for emergency planning, and for site-specific assessments for nuclear
power plants. McGuire [2001] presents an interesting scheme for selecting the relative degrees of determinism and probabilism according to the application. Woo
[1996] states that the method of (probabilistic) analysis should be decided on the
merits of the regional data rather than the availability of particular software or
the analyst's own philosophical inclination. The reality is that both of these views
are partially correct and hence both criteria need to be used simultaneously: the
selection of the appropriate method must fit the requirements of the application
and also consider the nature of the seismicity in the region and its correlation, or indeed lack thereof, with the tectonics. It is not possible to define a set of criteria
that can then be blindly applied to all types of hazard assessment in all regions
of the world and for this reason no such attempt has been made in this paper.
The analyst, after establishing the needs and conditions set by the engineer, should
adapt the assessment both to these criteria and to the region under study. Casting
aside simplistic choices between DSHA and PSHA will help the best approach to be
found and will also allow the best use to be made of the considerable and growing
body of expertise in engineering seismology around the world.
The confidence with which seismic probabilities can be calculated does not generally warrant their rigid use to define design ground motions, and wherever possible the results should at least be checked against deterministic scenarios, where there is sufficient data for these to be defined [McGuire, 2001]. One application in
which the arguments for strict adherence to the total probability theorem cannot
be defended is in the derivation of earthquake actions for code-based seismic design.
The currently very crude definition of earthquake actions in seismic codes could be
greatly improved if it were accepted that it is not necessary to use representations that attempt to simultaneously envelop all of the possible ground motions that
may occur at a site.
Finally, one area in which research is required, that will be of benefit to seismic
hazard assessment in general and perhaps in particular to deterministic approaches,
is to identify upper bounds on ground-motion parameters for different combinations
of magnitude, distance and rupture mechanism. The existing database of strong-motion accelerograms can provide some insight into this issue [e.g. Martínez-Pereira,
1999; Restrepo-Velez, 2001]. Advances in finite fault models for numerical simulation of ground motions could be employed to perform large numbers of runs, with
a wide range of combinations of physically realisable values of the independent
parameters, in order to obtain estimates of the likely range of the upper bounds on
some parameters.


Acknowledgements
The author wishes to thank David M. Boore, Luis Fernando Restrepo-Vélez, John Douglas and John Berrill for alerting him to some key references, and particularly Ellis Krinitzsky and Tom Hanks for providing pre-prints of papers. The
author has enjoyed and benefited from discussing many of the issues in this paper with others, particularly Norman A. Abrahamson, Amr Elnashai, Rui Pinho,
Nigel Priestley, Sarada K. Sarma and the students of the ROSE School in Pavia.
Robin McGuire, Belén Benito, John Douglas, Nick Ambraseys and Sarada Sarma
all provided very useful feedback on the first draft of this manuscript. The author is
particularly indebted to Dave Boore for an extremely thorough, detailed and challenging review of the first draft; in the few cases where my determined stubbornness
has led me to ignore his counsel, I have probably done so to the detriment of the
paper. The second draft of the paper has further benefited from thorough reviews
by John Berrill and two anonymous referees, to whom I also extend my gratitude.
References
Abrahamson, N. A. [2000] State of the practice of seismic hazard evaluation, GeoEng
2000, Melbourne, Australia, 19–24 November.
Algermissen, S. T. and Perkins, D. M. [1976] A probabilistic estimate of maximum accelerations in rock in the contiguous United States, US Geological Survey Open-File Report 76-416.
Algermissen, S. T., Perkins, D. M., Thenhaus, P. C., Hanson, S. L. and Bender, B. L. [1982] Probabilistic estimates of maximum acceleration and velocity in rock in the contiguous United States, US Geological Survey Open-File Report 82-1033.
Alvarez, L., Vaccari, F. and Panza, G. F. [1999] Deterministic seismic zoning of eastern Cuba, Pure Appl. Geophys. 156, 469–486.
Ambraseys, N. N. [1974] Dynamics and response of foundation materials in epicentral regions of strong earthquakes, Proceedings Fifth World Conference on Earthquake Engineering, Rome, vol. 1, cxxvi–cxlviii.
Ambraseys, N. N., Bommer, J. J., Buforn, E. and Udías, A. [2001] The earthquake sequence of May 1951 at Jucuapa, El Salvador, J. Seism. 5(1), 23–39.
Ambraseys, N. N. and Hendron, A. [1967] Dynamic behaviour of rock masses, Rock Mechanics in Engineering Practice, Stagg, K. and Zienkiewicz, O. (eds.), John Wiley, pp. 203–236.
Ambraseys, N. N. and Jackson, J. A. [1985] Long-term seismicity of Britain, Earthquake Engineering in Britain, Thomas Telford, pp. 49–65.
Anderson, J. G. [1997] Benefits of scenario ground motion maps, Engrg. Geol. 48, 43–57.
Anderson, J. G. and Brune, J. N. [1999] Probabilistic seismic hazard analysis without the ergodic assumption, Seism. Res. Lett. 70(1), 19–28.
Aoudia, A., Vaccari, F., Suhadolc, P. and Meghraoui, M. [2000] Seismogenic potential and earthquake hazard assessment in the Tell Atlas of Algeria, J. Seism. 4, 79–98.
Barbano, M. S., Egozcue, J. J., García Fernández, M., Kijko, A., Lapajne, L., Mayer-Rosa, D., Schenk, V., Schenkova, Z., Slejko, D. and Zonno, G. [1989] Assessment of seismic hazard for the Sannio-Matese area of southern Italy: a summary, Natural Hazards 2, 217–228.

Bazzurro, P. [1998] Probabilistic seismic demand analysis, PhD Dissertation, Stanford University.
Bazzurro, P. and Cornell, C. A. [1999] Disaggregation of seismic hazard, Bull. Seism. Soc. Am. 89, 501–520.
Bazzurro, P., Winterstein, S. R. and Cornell, C. A. [1998] Seismic contours: a new characterization of seismic hazard, Proceedings Eleventh European Conference on Earthquake Engineering, Paris.
Bender, B. [1984] Incorporating acceleration variability into seismic hazard analysis, Bull. Seism. Soc. Am. 74, 1451–1462.
Bender, B. and Perkins, D. M. [1982] SEISRISK II, A computer program for seismic hazard estimation, US Geological Survey Open-File Report 82-293.
Bender, B. and Perkins, D. M. [1987] SEISRISK III, A computer program for seismic hazard estimation, US Geol. Surv. Bull. 1772, 1–20.
Bezdek, J. C. and Solomon, K. H. [1983] Upper limit lognormal distribution for drop size data, ASCE J. Irrigation and Drainage Engrg. 109(1), 72–88.
Bommer, J. J. [2000] Seismic zonation for comprehensive definition of earthquake actions, Proceedings Sixth International Conference on Seismic Zonation, Palm Springs, CA.
Bommer, J. J. and Martínez-Pereira, A. [1999] The effective duration of earthquake strong motion, J. Earthq. Engrg. 3(2), 127–172.
Bommer, J. J. and Ruggeri, C. [2002] The specification of acceleration time-histories in seismic design codes, Eur. Earthq. Engrg. 16(1), in press.
Bommer, J. J. and White, N. [2001] Una propuesta para un método alternativo de zonificación sísmica en los países de Iberoamérica, Segundo Congreso Iberoamericano de Ingeniería Sísmica, Madrid, 16–19 October.
Bommer, J., McQueen, C., Salazar, W., Scott, S. and Woo, G. [1998] A case study of the spatial distribution of seismic hazard (El Salvador), Natural Hazards 18, 145–166.
Bommer, J. J., Scott, S. G. and Sarma, S. K. [2000] Hazard-consistent earthquake scenarios, Soil Dyn. Earthq. Engrg. 19, 219–231.
Bommer, J., Spence, R., Erdik, M., Tabuchi, S., Aydinoglu, N., Booth, E., del Re, D. and Peterken, O. [2002] Development of an earthquake loss model for Turkish Catastrophe Insurance, J. Seism., accepted for publication.
Bommer, J. J., Udías, A., Cepeda, J. M., Hasbun, J. C., Salazar, W. M., Suárez, A., Ambraseys, N. N., Buforn, E., Cortina, J., Madariaga, R., Méndez, P., Mezcua, J. and Papastamatiou, D. [1997] A new digital accelerograph network for El Salvador, Seism. Res. Lett. 68, 426–437.
Booth, E. D. and Pappin, J. W. [1995] Seismic design requirements for structures in the United Kingdom, European Seismic Design Practice, A. S. Elnashai (ed.), Balkema, pp. 133–140.
Chapman, M. C. [1995] A probabilistic approach to ground-motion selection for engineering design, Bull. Seism. Soc. Am. 85, 937–942.
Chapman, M. C. [1999] On the use of elastic input energy for seismic hazard analysis, Earthq. Spectra 15, 607–635.
Cornell, C. A. [1968] Engineering seismic risk analysis, Bull. Seism. Soc. Am. 58, 1583–1606.
Costa, G., Panza, G. F., Suhadolc, P. and Vaccari, F. [1993] Zoning of the Italian territory in terms of expected peak ground acceleration derived from complete synthetic seismograms, J. Appl. Geophys. 30, 149–160.
Donovan, N. [1993] Relationship of seismic hazard studies to seismic codes in the United States, Tectonophysics 218, 257–271.


Dowrick, D. J. [1987] Earthquake Resistant Design for Engineers and Architects, second edition, John Wiley & Sons.
Faccioli, E., Battistella, C., Alemani, P. and Tibaldi, A. [1988] Seismic microzoning investigations in the metropolitan area of San Salvador, El Salvador, following the destructive earthquake of 10 October 1986, Proceedings International Seminar on Earthquake Engineering, Innsbruck, 28–65.
FEMA [1997] NEHRP recommended provisions for seismic regulations for new buildings and other structures, FEMA 302, Federal Emergency Management Agency, Washington, DC.
Frankel, A. [1995] Mapping seismic hazard in the Central and Eastern United States, Seism. Res. Lett. 66(4), 8–21.
Frankel, A. and Safak, E. [1998] Recent trends and future prospects in seismic hazard analysis, Geotechnical Earthquake Engineering & Soil Dynamics III, ASCE Geotechnical Special Publication 75, vol. 1, 91–115.
Hanks, T. C. and Cornell, C. A. [2001] Probabilistic seismic hazard analysis: a beginner's guide, Earthq. Spectra, submitted.
Harlow, D. H., White, R. A., Rymer, M. J. and Alvarado, S. G. [1993] The San Salvador earthquake of 10 October 1986 and its historical context, Bull. Seism. Soc. Am. 83(4), 1143–1154.
Harmsen, S., Perkins, D. and Frankel, A. [1999] Deaggregation of probabilistic ground motions in the Central and Eastern United States, Bull. Seism. Soc. Am. 89, 1–13.
Hofmann, R. B. [1996] Individual faults can't produce a Gutenberg-Richter earthquake recurrence, Engrg. Geol. 43, 5–9.
Hong, H. P. and Rosenblueth, E. [1988] The Mexico earthquake of September 19, 1985: model for generation of subduction earthquakes, Earthq. Spectra 4(3), 481–498.
Hwang, H. H. M. and Huo, J. R. [1994] Generation of hazard-consistent ground motion, Soil Dyn. Earthq. Engrg. 13, 377–386.
Ida, Y. [1973] The maximum acceleration of seismic ground motion, Bull. Seism. Soc. Am. 63(3), 959–968.
Jackson, D. D. [1996] The case for huge earthquakes, Seism. Res. Lett. 67(1), 3–5.
Jackson, J. A. [2001] Living with earthquakes: know your faults, J. Earthq. Engrg. 5, Special Issue No. 1, 5–123.
Kameda, H. and Nojima, N. [1988] Simulation of risk-consistent earthquake motion, Earthq. Engrg. Struct. Dyn. 16, 1007–1019.
Kijko, A. and Graham, G. [1998] Parametric-historic procedure for probabilistic seismic hazard analysis, Proceedings Eleventh European Conference on Earthquake Engineering, Paris.
Kiremidjian, A. S. [1999] Multiple earthquake event loss estimation methodology, Proceedings of the Eleventh European Conference on Earthquake Engineering: Invited Lectures, Balkema, pp. 151–160.
Kiremidjian, A. S. and Anagnos, T. [1984] Stochastic slip-predictable models for earthquake occurrence, Bull. Seism. Soc. Am. 74, 739–755.
Kiremidjian, A. S. and Suzuki, S. [1987] A stochastic model for site ground motions from temporally independent earthquakes, Bull. Seism. Soc. Am. 77, 1110–1126.
Kramer, S. L. [1996] Geotechnical Earthquake Engineering, Prentice Hall.
Krinitzsky, E. L. [1993] Earthquake probability in engineering, Part 2: Earthquake recurrence and limitations of Gutenberg-Richter b-values for the engineering of critical structures, Engrg. Geol. 36, 1–52.
Krinitzsky, E. L. [1995a] Problems with Logic Trees in earthquake hazard evaluation, Engrg. Geol. 39, 1–3.

Krinitzsky, E. L. [1995b] Deterministic versus probabilistic seismic hazard analysis for critical structures, Engrg. Geol. 40, 1–7.
Krinitzsky, E. L. [1998] The hazard in using probabilistic seismic hazard assessment for engineering, Env. Engrg. Geosci. 4(4), 425–443.
Krinitzsky, E. L. [2001] How to obtain earthquake ground motions for engineering design, Engrg. Geosci., submitted.
Krinitzsky, E. L., Gould, J. P. and Edinger, P. H. [1993] Fundamentals of Earthquake Resistant Construction, Wiley.
Leyendecker, E. V., Hunt, R. J., Frankel, A. D. and Rukstales, K. S. [2000] Development of maximum considered earthquake ground motion maps, Earthq. Spectra 16, 21–40.
Main, I. G. [1987] A characteristic earthquake model of the seismicity preceding the eruption of Mount St. Helens on 18 May 1980, Phys. Earth Planetary Interiors 49, 283–293.
Makropoulos, K. C. and Burton, P. W. [1986] HAZAN: a FORTRAN program to evaluate seismic hazard parameters using Gumbel's theory of extreme value statistics, Comput. Geosci. 12, 29–46.
Mallard, D. J. and Woo, G. [1993] Uncertainty and conservatism in UK seismic hazard assessment, Nuclear Energy 32(4), 199–205.
McGuire, R. K. [1976] FORTRAN computer program for seismic risk analysis, US Geological Survey Open-File Report 76-67.
McGuire, R. K. [1995] Probabilistic seismic hazard analysis and design earthquakes: closing the loop, Bull. Seism. Soc. Am. 85, 1275–1284.
McGuire, R. K. [2001] Deterministic vs. probabilistic earthquake hazard and risks, Soil Dyn. Earthq. Engrg. 21, 377–384.
Mualchin, L. [1996] Development of Caltrans deterministic fault and earthquake hazard map of California, Engrg. Geol. 42, 217–222.
Musson, R. M. W. [1999] Determination of design earthquakes in seismic hazard analysis through Monte Carlo simulation, J. Earthq. Engrg. 3(4), 463–474.
Musson, R. M. W. and Winter, P. W. [1997] Seismic hazard maps for the UK, Natural Hazards 14, 141–154.
Newmark, N. M. [1965] Effects of earthquakes on dams and embankments, Geotechnique 15(2), 139–160.
Newmark, N. M. and Hall, W. J. [1969] Seismic design criteria for nuclear reactor facilities, Proceedings Fourth World Conference on Earthquake Engineering, Santiago de Chile, vol. 2, B5.1–B5.12.
Orozova, I. M. and Suhadolc, P. [1999] A deterministic-probabilistic approach for seismic hazard assessment, Tectonophysics 312, 191–202.
Ove Arup and Partners [1993] Earthquake hazard and risk in the UK, Her Majesty's Stationery Office, London.
Panza, G. F., Vaccari, F., Costa, G., Suhadolc, P. and Fäh, D. [1996] Seismic input modelling for zoning and microzoning, Earthq. Spectra 12, 529–566.
Parsons, T., Toda, S., Stein, R. S., Barka, A. and Dieterich, J. H. [2000] Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation, Science 288, 28 April, 661–665.
Peek, R., Berrill, J. B. and Davis, R. O. [1980] A seismicity model for New Zealand, Bull. New Zealand Nat. Soc. Earthq. Engrg. 13(4), 355–364.
Radulian, M., Vaccari, F., Mandrescu, N., Panza, G. F. and Moldoveanu, C. L. [2000] Seismic hazard of Romania: deterministic approach, Pure Appl. Geophys. 157, 221–248.


Reiter, L. [1990] Earthquake Hazard Analysis: Issues and Insights, Columbia University Press.
Restrepo-Vélez, L. F. [2001] Explorative study of the scatter in strong-motion attenuation equations for application to seismic hazard assessment, Master Dissertation, ROSE School, Università di Pavia.
Romeo, R. and Prestininzi, A. [2000] Probabilistic versus deterministic hazard analysis: an integrated approach for siting problems, Soil Dyn. Earthq. Engrg. 20, 75–84.
Schwartz, D. P. and Coppersmith, K. J. [1984] Fault behaviour and characteristic earthquakes: examples from the Wasatch and San Andreas faults, J. Geophys. Res. 89, 5681–5698.
SEAOC [1995] Vision 2000: A framework for performance based design, Structural Engineers Association of California, Sacramento.
Shepherd, J. B., Tanner, J. G. and Prockter, L. [1993] Revised estimates of the levels of ground acceleration and velocity with 10% probability of exceedance in any 50-year period for the Trinidad and Tobago region, Caribbean Conference on Earthquakes, Volcanoes, Windstorms and Floods, 11–15 October, Port of Spain, Trinidad.
Singh, S. K., Rodriguez, M. and Esteva, L. [1983] Statistics of small earthquakes and frequency of occurrence of large earthquakes along the Mexican subduction zone, Bull. Seism. Soc. Am. 73, 1779–1796.
Smit, P., Torri, A., Sprecher, C., Birkhäuser, P., Tinic, S. and Graf, R. [2002] Pegasos: a comprehensive probabilistic seismic hazard assessment for nuclear power plants in Switzerland, Twelfth European Conference on Earthquake Engineering, London.
Somerville, P. G., Smith, N. F., Graves, R. W. and Abrahamson, N. A. [1997] Modification of empirical strong ground motion attenuation relations to include the amplitude and duration effects of rupture directivity, Seism. Res. Lett. 68, 199–222.
Speidel, D. H. and Mattson, P. H. [1995] Questions on the validity and utility of b-values: an example from the Central Mississippi Valley, Engrg. Geol. 40, 9–27.
SSHAC [1997] Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and the use of experts, Senior Seismic Hazard Analysis Committee, NUREG/CR-6372, Washington, DC.
Stepp, J. C., Wong, I., Whitney, J., Quittmeyer, R., Abrahamson, N., Toro, G., Youngs, R., Coppersmith, K., Savy, J., Sullivan, T. and Yucca Mountain PSHA Project Members [2001] Probabilistic seismic hazard analyses for ground motions and fault displacements at Yucca Mountain, Nevada, Earthq. Spectra 17(1), 113–151.
Toro, G. R., Abrahamson, N. A. and Schneider, J. F. [1997] Model of strong ground motions from earthquakes in Central and Eastern North America: best estimates and uncertainties, Seism. Res. Lett. 68(1), 41–57.
Veneziano, D., Cornell, C. A. and O'Hara, T. [1984] Historical method of seismic hazard analysis, Electrical Power Research Institute Report NP-3438, Palo Alto, California.
Wahlström, R. and Grünthal, G. [2000] Probabilistic seismic hazard assessment (horizontal PGA) for Sweden, Finland and Denmark using different logic tree approaches, Soil Dyn. Earthq. Engrg. 20, 45–58.
White, R. A. and Harlow, D. H. [1993] Destructive upper-crustal earthquakes of Central America since 1900, Bull. Seism. Soc. Am. 83(4), 1115–1142.
Whitman, R. (ed.) [1989] Workshop on ground motion parameters for seismic hazard mapping, Technical Report NCEER-89-0038, National Center for Earthquake Engineering Research, State University of New York at Buffalo.
Woo, G. [1996] Kernel estimation methods for seismic hazard area source modelling, Bull. Seism. Soc. Am. 86, 353–362.
