
Probability Distributions

CEE 201L. Uncertainty, Design, and Optimization


Department of Civil and Environmental Engineering
Duke University
Philip Scott Harvey, Henri P. Gavin and Jeffrey T. Scruggs
Spring, 2016

1 Probability Distributions
Consider a continuous random variable (rv) X with support over the domain \mathcal{X}. The probability
density function (PDF) of X is the function f_X(x) such that, for any two numbers a and b in the
domain \mathcal{X} with a < b,

    P[a < X \le b] = \int_a^b f_X(x) \, dx
For fX (x) to be a proper distribution, it must satisfy the following two conditions:

1. The PDF f_X(x) is positive-valued; f_X(x) \ge 0 for all values of x \in \mathcal{X}.

2. The rule of total probability holds; the total area under f_X(x) is 1; \int_{\mathcal{X}} f_X(x) \, dx = 1.

Alternately, X may be described by its cumulative distribution function (CDF). The CDF
of X is the function F_X(x) that gives, for any specified number x \in \mathcal{X}, the probability that the
random variable X is less than or equal to x, written P[X \le x]. For real values of x, the CDF is
defined by

    F_X(b) = P[X \le b] = \int_{-\infty}^{b} f_X(x) \, dx ,

so,

    P[a < X \le b] = F_X(b) - F_X(a)

By the first fundamental theorem of calculus, the functions f_X(x) and F_X(x) are related as

    f_X(x) = \frac{d}{dx} F_X(x)

A few important characteristics of CDFs of X are:

1. CDFs, F_X(x), are monotonic non-decreasing functions of x.

2. For any number a, P[X > a] = 1 - P[X \le a] = 1 - F_X(a).

3. For any two numbers a and b with a < b, P[a < X \le b] = F_X(b) - F_X(a) = \int_a^b f_X(x) \, dx.
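
The relationship between the PDF and the CDF can also be checked numerically. The following
minimal Matlab sketch (not part of the original handout; the exponential PDF and its mean are
assumed as a test case) integrates a PDF with cumtrapz to recover the CDF, differentiates the CDF
to recover the PDF, and evaluates P[a < X \le b] both ways.

    % pdf_cdf_check.m -- illustrative sketch with an assumed exponential PDF
    mu  = 2;                                % assumed mean of an exponential rv
    x   = linspace(0, 20, 2000);            % grid covering most of the support
    f   = (1/mu) * exp(-x/mu);              % exponential PDF
    F   = cumtrapz(x, f);                   % numerical CDF: running integral of the PDF
    fd  = gradient(F, x);                   % derivative of the CDF recovers the PDF
    Pab = trapz(x(x>1 & x<=3), f(x>1 & x<=3));        % P[1 < X <= 3] from the PDF
    Pab_check = interp1(x, F, 3) - interp1(x, F, 1);  % F(3) - F(1), should agree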

2 Descriptors of random variables


The expected or mean value of a continuous random variable X with PDF f_X(x) is the centroid
of the probability density,

    \mu_X = E[X] = \int_{\mathcal{X}} x \, f_X(x) \, dx

The expected value of an arbitrary function of X, g(X), with respect to the PDF f_X(x) is

    \mu_{g(X)} = E[g(X)] = \int_{\mathcal{X}} g(x) \, f_X(x) \, dx

The variance of a continuous rv X with PDF f_X(x) and mean \mu_X gives a quantitative measure of
how much spread or dispersion there is in the distribution of x values. The variance is calculated
as

    \sigma_X^2 = V[X] = \int_{\mathcal{X}} (x - \mu_X)^2 f_X(x) \, dx
               = \int_{\mathcal{X}} x^2 f_X(x) \, dx - 2\mu_X \int_{\mathcal{X}} x f_X(x) \, dx + \mu_X^2 \int_{\mathcal{X}} f_X(x) \, dx
               = E[X^2] - 2\mu_X^2 + \mu_X^2
               = E[X^2] - \mu_X^2

The standard deviation (s.d.) of X is \sigma_X = \sqrt{V[X]}. The coefficient of variation (c.o.v.) of
X is defined as the ratio of the standard deviation \sigma_X to the mean \mu_X:

    c_X = \sigma_X / \mu_X

for non-zero mean. The c.o.v. is a normalized measure of dispersion (dimensionless).

A mode of a probability density function, f_X(x), is a value of x such that the PDF is maximized;

    \frac{d}{dx} f_X(x) \Big|_{x = x_{mode}} = 0 .

The median value, x_m, is the value of x such that

    P[X \le x_m] = P[X > x_m] = F_X(x_m) = 1 - F_X(x_m) = 0.5 .
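
These descriptors can be estimated directly from a sample in Matlab. A minimal sketch follows
(not part of the original handout; the sample itself is hypothetical).

    % descriptors_demo.m -- illustrative sketch with a hypothetical sample
    x    = 3 + 1.2*randn(1, 10000);    % hypothetical sample, approximately N(3, 1.2^2)
    muX  = mean(x);                    % sample mean, estimate of E[X]
    varX = var(x);                     % sample variance, estimate of V[X]
    sdX  = std(x);                     % sample standard deviation
    covX = sdX / muX;                  % coefficient of variation (non-zero mean)
    medX = median(x);                  % sample median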




3 Some common distributions

A few commonly-used probability distributions are described at the end of this document: the
uniform, triangular, exponential, normal, and log-normal distributions. For each of these distribu-
tions, this document provides figures and equations for the PDF and CDF, equations for the mean
and variance, the names of Matlab functions to generate samples, and empirical distributions of
such samples.

3.1 The Normal distribution

The Normal (or Gaussian) distribution is perhaps the most commonly used distribution function.
The notation X \sim N(\mu_X, \sigma_X^2) denotes that X is a normal random variable with mean \mu_X and
variance \sigma_X^2. The standard normal random variable, Z, or z-statistic, is distributed as N(0, 1).
The probability density function of a standard normal random variable is so widely used it has its
own special symbol, \phi(z),

    \phi(z) = \frac{1}{\sqrt{2\pi}} \exp\!\left( -\frac{z^2}{2} \right)

Any normally distributed random variable can be defined in terms of the standard normal random
variable, through the change of variables

    X = \mu_X + \sigma_X Z .

If X is normally distributed, it has the PDF

    f_X(x) = \frac{1}{\sigma_X} \, \phi\!\left( \frac{x - \mu_X}{\sigma_X} \right)
           = \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\!\left( -\frac{(x - \mu_X)^2}{2\sigma_X^2} \right)

There is no closed-form equation for the CDF of a normal random variable. Solving the integral

    \Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du

would make you famous. Try it. The CDF of a normal random variable is expressed in terms of the
error function, erf(z). If X is normally distributed, P[X \le x] can be found from the standard
normal CDF

    P[X \le x] = F_X(x) = \Phi\!\left( \frac{x - \mu_X}{\sigma_X} \right) .

Values for \Phi(z) are tabulated and can be computed, e.g., with the Matlab command
Prob_X_le_x = normcdf(x,muX,sigX). The standard normal PDF is symmetric about z = 0,
so \phi(-z) = \phi(z), \Phi(-z) = 1 - \Phi(z), and P[X > x] = 1 - F_X(x) = 1 - \Phi((x - \mu_X)/\sigma_X) =
\Phi((\mu_X - x)/\sigma_X).

The linear combination of two independent normal rvs X_1 and X_2 (with means \mu_1 and \mu_2 and
variances \sigma_1^2 and \sigma_2^2) is also normally distributed,

    aX_1 + bX_2 \sim N\!\left( a\mu_1 + b\mu_2 , \; a^2\sigma_1^2 + b^2\sigma_2^2 \right) ,

and more specifically, aX \pm b \sim N\!\left( a\mu_X \pm b , \; a^2\sigma_X^2 \right).





Given the probability of a normal rv, i.e., given P[X \le x], the associated value of x can be found
from the inverse standard normal CDF,

    \frac{x - \mu_X}{\sigma_X} = z = \Phi^{-1}\!\left( P[X \le x] \right) .

Values of the inverse standard normal CDF are tabulated, and can be computed, e.g., with the Matlab
command x = norminv(Prob_X_le_x,muX,sigX).
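
As a concrete illustration of these two built-in functions, a minimal sketch follows (the numerical
values are hypothetical, not from the handout): it computes P[X > a] for a normal rv and then
inverts a specified probability back to a value of x.

    % normal_demo.m -- illustrative sketch with hypothetical numbers
    muX  = 100;  sigX = 40;                  % assumed mean and standard deviation
    a    = 150;
    P_gt = 1 - normcdf(a, muX, sigX);        % P[X > a] = 1 - Phi((a-muX)/sigX)
    x90  = norminv(0.90, muX, sigX);         % value of x with P[X <= x] = 0.90
    z    = (x90 - muX) / sigX;               % corresponding z-statistic, Phi^{-1}(0.90)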

3.2 The Log-Normal distribution

The Normal distribution is symmetric and can be used to describe random variables that can take
positive as well as negative values, regardless of the value of the mean and standard deviation. For
many random quantities a negative value makes no sense (e.g., modulus of elasticity, air pressure,
and distance). Using a distribution which admits only positive values for such quantities eliminates
any possibility of nonsensical negative values. The log-normal distribution is such a distribution.

If ln X is normally distributed (i.e., \ln X \sim N(\mu_{\ln X}, \sigma_{\ln X}^2)) then X is called a log-normal random
variable. In other words, if Y (= ln X) is normally distributed, e^Y (= X) is log-normally distributed.
Since P[Y \le y] = F_Y(y) = \Phi\!\left( \frac{y - \mu_Y}{\sigma_Y} \right), with \mu_Y = \mu_{\ln X} and \sigma_Y^2 = \sigma_{\ln X}^2,

    P[X \le x] = F_X(x) = P[\ln X \le \ln x] = F_{\ln X}(\ln x) = \Phi\!\left( \frac{\ln x - \mu_{\ln X}}{\sigma_{\ln X}} \right)

The mean and standard deviation of a log-normal variable X are related to the mean and standard
deviation of ln X:

    \mu_{\ln X} = \ln \mu_X - \frac{1}{2}\sigma_{\ln X}^2 \qquad \sigma_{\ln X}^2 = \ln\!\left( 1 + (\sigma_X/\mu_X)^2 \right)

If (\sigma_X/\mu_X) < 0.30, then \sigma_{\ln X} \approx (\sigma_X/\mu_X) = c_X.

The median, x_m, is a useful parameter of log-normal rvs. By definition of the median value, half
of the population lies above the median, and half lies below, so

    \Phi\!\left( \frac{\ln x_m - \mu_{\ln X}}{\sigma_{\ln X}} \right) = 0.5
    \qquad \Rightarrow \qquad
    \frac{\ln x_m - \mu_{\ln X}}{\sigma_{\ln X}} = \Phi^{-1}(0.5) = 0

and, \ln x_m = \mu_{\ln X}, \quad x_m = \exp(\mu_{\ln X}), \quad \mu_X = x_m \sqrt{1 + c_X^2} .

For the log-normal distribution x_{mode} < x_{median} < x_{mean}. If c_X < 0.15, x_{median} \approx x_{mean}.

If ln X is normally distributed (X is log-normal) then (for c_X < 0.3)

    P[X \le x] \approx \Phi\!\left( \frac{\ln x - \ln x_m}{c_X} \right)

If \ln X \sim N(\mu_{\ln X}, \sigma_{\ln X}^2) and \ln Y \sim N(\mu_{\ln Y}, \sigma_{\ln Y}^2), and Z = aX^n/Y^m, then

    \ln Z = \ln a + n \ln X - m \ln Y \sim N(\mu_{\ln Z}, \sigma_{\ln Z}^2)

where \mu_{\ln Z} = \ln a + n\mu_{\ln X} - m\mu_{\ln Y} = \ln a + n \ln x_m - m \ln y_m

and \sigma_{\ln Z}^2 = (n\sigma_{\ln X})^2 + (m\sigma_{\ln Y})^2 = n^2 \ln(1 + c_X^2) + m^2 \ln(1 + c_Y^2) = \ln(1 + c_Z^2)
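
The parameter conversions above translate directly into Matlab. The following minimal sketch
(not part of the handout; the numbers are hypothetical) converts a mean and c.o.v. into the
parameters of ln X and evaluates P[X \le x] exactly and with the small-c.o.v. approximation.

    % lognormal_demo.m -- illustrative sketch with hypothetical numbers
    muX   = 36;  cX = 0.15;                      % assumed mean and c.o.v. of X
    sdlnX = sqrt(log(1 + cX^2));                 % sigma_lnX = sqrt(ln(1 + cX^2))
    mulnX = log(muX) - 0.5*sdlnX^2;              % mu_lnX = ln(muX) - sigma_lnX^2/2
    xm    = exp(mulnX);                          % median of X
    x     = 40;
    P_exact  = normcdf((log(x) - mulnX)/sdlnX);  % P[X <= x]
    P_approx = normcdf((log(x) - log(xm))/cX);   % approximation, valid for cX < 0.3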




Uniform: X \sim U[a, b]   and   Triangular: X \sim T(a, b, c)

Support: x \in [a, b] \subset \mathbb{R}, \; b > a (uniform); x \in [a, b] \subset \mathbb{R}, \; a \le c \le b (triangular)

[Figures: PDF and CDF of the uniform and triangular distributions.]

Uniform:

    f(x) = \begin{cases} \frac{1}{b-a}, & x \in [a, b] \\ 0, & \text{otherwise} \end{cases}
    \qquad
    F(x) = \begin{cases} 0, & x \le a \\ \frac{x-a}{b-a}, & x \in [a, b] \\ 1, & x \ge b \end{cases}

    \mu_X = \tfrac{1}{2}(a + b) \qquad \sigma_X^2 = \tfrac{1}{12}(b - a)^2

    x = a + (b-a).*rand(1,N);

Triangular:

    f(x) = \begin{cases} \frac{2(x-a)}{(b-a)(c-a)}, & x \in [a, c] \\ \frac{2(b-x)}{(b-a)(b-c)}, & x \in [c, b] \\ 0, & \text{otherwise} \end{cases}
    \qquad
    F(x) = \begin{cases} 0, & x \le a \\ \frac{(x-a)^2}{(b-a)(c-a)}, & x \in [a, c] \\ 1 - \frac{(b-x)^2}{(b-a)(b-c)}, & x \in [c, b] \\ 1, & x \ge b \end{cases}

    \mu_X = \tfrac{1}{3}(a + b + c) \qquad \sigma_X^2 = \tfrac{1}{18}(a^2 + b^2 + c^2 - ab - ac - bc)

    x = triangular_rnd(a,b,c,1,N);

[Figures: empirical PDFs and CDFs of uniform (\mu = 3.0, \sigma = 1.2) and triangular (\mu = 2.7, \sigma = 0.8) samples.]




Normal: X \sim N(\mu, \sigma^2)   and   Log-Normal: \ln X \sim N(\mu_{\ln X}, \sigma_{\ln X}^2)

Support: x \in \mathbb{R}; \; \mu \in \mathbb{R}, \; \sigma > 0 (normal); x \in \mathbb{R}^+; \; \sigma_{\ln X} > 0 (log-normal)

[Figures: PDF and CDF of the normal and log-normal distributions.]

Normal:

    f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)
    \qquad
    F(x) = \frac{1}{2}\left[ 1 + \mathrm{erf}\!\left( \frac{x-\mu}{\sigma\sqrt{2}} \right) \right]

    \mu_X = \mu \qquad \sigma_X^2 = \sigma^2

    x = muX + sigmaX*randn(1,N);

Log-Normal:

    f(x) = \frac{1}{x\,\sigma_{\ln X}\sqrt{2\pi}} \exp\!\left( -\frac{(\ln x - \mu_{\ln X})^2}{2\sigma_{\ln X}^2} \right)
    \qquad
    F(x) = \frac{1}{2}\left[ 1 + \mathrm{erf}\!\left( \frac{\ln x - \mu_{\ln X}}{\sigma_{\ln X}\sqrt{2}} \right) \right]

    \mu_X = x_m \sqrt{1 + c_X^2} \qquad \sigma_X^2 = x_m^2 \, c_X^2 \, (1 + c_X^2)

    x = logn_rnd(Xm,Cx,1,N);

[Figures: empirical PDFs and CDFs of normal (\mu = 0.0, \sigma = 1.0) and log-normal samples.]




Exponential: X \sim E(\mu),   Rayleigh: X \sim R(m),   Laplace: X \sim L(\mu, \sigma^2)

Support: x \in \mathbb{R}^+, \; \mu > 0 (exponential); x \in \mathbb{R}^+, \; m > 0 (Rayleigh); x \in \mathbb{R}; \; \mu \in \mathbb{R}, \; \sigma > 0 (Laplace)

[Figures: PDF and CDF of the exponential, Rayleigh, and Laplace distributions.]

Exponential:

    f(x) = \frac{1}{\mu} \exp(-x/\mu)
    \qquad
    F(x) = 1 - \exp(-x/\mu)

    \mu_X = \mu \qquad \sigma_X^2 = \mu^2

    x = exp_rnd(muX,1,N);

Rayleigh:

    f(x) = \frac{x}{m^2} \exp\!\left( -\tfrac{1}{2}(x/m)^2 \right)
    \qquad
    F(x) = 1 - \exp\!\left( -\tfrac{1}{2}(x/m)^2 \right)

    \mu_X = m\sqrt{\pi/2} \qquad \sigma_X^2 = m^2(4 - \pi)/2

    x = rayleigh_rnd(modeX,1,N);

Laplace:

    f(x) = \frac{\sqrt{2}}{2\sigma} \exp\!\left( -\sqrt{2}\,\frac{|x-\mu|}{\sigma} \right)
    \qquad
    F(x) = \begin{cases} \frac{1}{2}\exp\!\left( -\sqrt{2}\,\frac{|x-\mu|}{\sigma} \right), & x < \mu \\ 1 - \frac{1}{2}\exp\!\left( -\sqrt{2}\,\frac{|x-\mu|}{\sigma} \right), & x \ge \mu \end{cases}

    \mu_X = \mu \qquad \sigma_X^2 = \sigma^2

    x = laplace_rnd(muX,sigmaX,1,N);

[Figures: empirical PDFs and CDFs of exponential (\mu = 1.0), Rayleigh (m = 1.0), and Laplace (\mu = 0.0, \sigma = 1.4) samples.]




4 Sums and Differences of Independent Normal Random Variables

Consider two normally-distributed random variables, X \sim N(\mu_X, \sigma_X^2) and Y \sim N(\mu_Y, \sigma_Y^2). Any
weighted sum of normal random variables is also normally-distributed:

    Z = aX \pm bY

    Z \sim N\!\left( a\mu_X \pm b\mu_Y , \; (a\sigma_X)^2 + (b\sigma_Y)^2 \right)

    \mu_Z = a\mu_X \pm b\mu_Y \qquad \sigma_Z^2 = (a\sigma_X)^2 + (b\sigma_Y)^2
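
As a numerical sanity check (a sketch, not part of the handout; the parameter values are
hypothetical), the Matlab lines below sample two independent normal rvs, form a weighted sum,
and compare the sample mean and variance with the formulas above.

    % sum_of_normals_check.m -- illustrative sketch with hypothetical numbers
    N   = 1e5;
    muX = 1.0;  sigX = 0.5;   muY = -2.0;  sigY = 1.5;   a = 2;  b = 3;
    X   = muX + sigX*randn(1,N);
    Y   = muY + sigY*randn(1,N);
    Z   = a*X + b*Y;
    [mean(Z),  a*muX + b*muY]               % sample mean     vs. a*muX + b*muY
    [var(Z),   (a*sigX)^2 + (b*sigY)^2]     % sample variance vs. (a*sigX)^2 + (b*sigY)^2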

5 Products and Quotients of Independent LogNormal Random Variables


Consider two log-normally-distributed random variables, \ln X \sim N(\mu_{\ln X}, \sigma_{\ln X}^2) and \ln Y \sim N(\mu_{\ln Y}, \sigma_{\ln Y}^2).
Any product or quotient of log-normal random variables is also log-normally-distributed:

    Z = X/Y

    \ln Z \sim N\!\left( \mu_{\ln X} - \mu_{\ln Y} , \; \sigma_{\ln X}^2 + \sigma_{\ln Y}^2 \right)

    \mu_{\ln Z} = \mu_{\ln X} - \mu_{\ln Y} \qquad \sigma_{\ln Z}^2 = \sigma_{\ln X}^2 + \sigma_{\ln Y}^2 \qquad c_Z^2 = c_X^2 + c_Y^2 + c_X^2 c_Y^2

6 Examples
1. The strength, S, of a particular grade of steel is log-normally distributed with median 36 ksi
and c.o.v. of 0.15. What is the probability that the strength of a particular sample is greater
than 40 ksi?
    P[S > 40] = 1 - P[S \le 40] = 1 - \Phi\!\left( \frac{\ln 40 - \ln 36}{0.15} \right) = 1 - \Phi\!\left( \frac{3.69 - 3.58}{0.15} \right)
              = 1 - \Phi(0.702) = 1 - 0.759 = 0.241

[Figure: PDF of the steel strength S, showing the shaded region P[S > 40];
 s_mode = 35.20 ksi, s_median = 36.00 ksi, s_mean = 36.40 ksi.]
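
This result can be checked in one line of Matlab (a sketch, not part of the handout), using normcdf
on the log scale with the small-c.o.v. approximation \sigma_{\ln S} \approx c_S = 0.15.

    % example 1 check -- illustrative sketch
    P_S_gt_40 = 1 - normcdf((log(40) - log(36))/0.15)    % approximately 0.24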




2. Highway truck weights in Michigan, W , are assumed to be normally distributed with mean
100 k and standard deviation 40 k. The load capacity of bridges in Michigan, R, are also
assumed to be normally distributed with mean 200 k and standard deviation 30 k. What is
the probability of a truck exceeding a bridge load rating?
E = W - R. If E > 0 the truck weight exceeds the bridge capacity.

    \mu_E = \mu_W - \mu_R = 100 - 200 = -100 \; \text{k}
    \sigma_E = \sqrt{40^2 + 30^2} = 50 \; \text{k}

    P[E > 0] = 1 - P[E \le 0] = 1 - \Phi\!\left( \frac{0 - (-100)}{50} \right) = 1 - \Phi(2) = 1 - 0.977 = 0.023

[Figure: sketch of the PDFs of W, R, and E = W - R on a common axis from -100 k to 200 k,
 with standard deviations 40, 30, and 50 k.]
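
A quick Monte Carlo cross-check of this result (a sketch, not part of the handout) samples W and R
and counts how often the truck weight exceeds the bridge capacity.

    % example 2 check -- illustrative sketch
    N = 1e6;
    W = 100 + 40*randn(1,N);            % truck weights, N(100, 40^2) k
    R = 200 + 30*randn(1,N);            % bridge capacities, N(200, 30^2) k
    P_exceed = sum(W - R > 0) / N       % should be close to 1 - Phi(2) = 0.023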

3. Windows in the Cape Hatteras Lighthouse can withstand wind pressures of R. R is log-
normal with median of 40 psf and coefficient of variation of 0.25. The peak wind pressure
during a hurricane, P in psf, is given by the equation P = 1.165 \times 10^{-3} \, C V^2, where C is a
log-normal coefficient with median of 1.8 and coefficient of variation of 0.20 and V is the
wind speed with median 100 fps and coefficient of variation of 0.30. What is the probability
of the wind pressure exceeding the strength of the window?
The peak wind pressure is also log-normal.

    \ln P = \ln(1.165 \times 10^{-3}) + \ln C + 2 \ln V
    \mu_{\ln P} = \ln(1.165 \times 10^{-3}) + \mu_{\ln C} + 2\mu_{\ln V} = \ln(1.165 \times 10^{-3}) + \ln(1.8) + 2\ln(100) = 3.0431
    \sigma_{\ln P}^2 = \ln(1 + 0.20^2) + 2 \ln(1 + 0.30^2) = 0.2116 \quad \ldots \quad \sigma_{\ln P} = 0.4600

The wind pressure exceeds the resistance if P/R > 1 (that is, if \ln P - \ln R > 0).

    \ln E = \ln P - \ln R
    \mu_{\ln E} = \mu_{\ln P} - \mu_{\ln R} = 3.0431 - \ln(40) = -0.646
    \sigma_{\ln E}^2 = 0.2116 + \ln(1 + 0.25^2) = 0.2722 \quad \ldots \quad \sigma_{\ln E} = 0.5217

The probability of the wind load exceeding the resistance of the glass is

    P[E > 1] = 1 - P[E \le 1] = 1 - P[\ln E \le 0] = 1 - \Phi\!\left( \frac{0 - (-0.646)}{0.5217} \right) = 1 - \Phi(1.2383) = 0.11

4. Earthquakes with M > 6 shake the ground at a building site randomly. The peak
ground acceleration (PGA) is log-normally distributed with median of 0.2 g and a coefficient
of variation of 0.25. Assume that the building will sustain no damage for ground motion
shaking up to 0.3 g. What is the probability of damage from an earthquake of M > 6?

    P[D \,|\, M > 6] = P[PGA > 0.3] = 1 - P[PGA \le 0.3] = 1 - \Phi\!\left( \frac{\ln(0.3) - \ln(0.2)}{0.25} \right) = 1 - 0.947 = 0.053 .




There have been two earthquakes with M > 6 in the last 50 years. What is the probability
of no damage from earthquakes with M > 6 in the next 20 years?

    P[D' | M > 6] = 1 - 0.053 = 0.947

From the law of total probability,

    P[D' in 20 yr] = P[D' | 0 EQ M > 6 in 20 yr] \cdot P[0 EQ M > 6 in 20 yr] +
                     P[D' | 1 EQ M > 6 in 20 yr] \cdot P[1 EQ M > 6 in 20 yr] +
                     P[D' | 2 EQ M > 6 in 20 yr] \cdot P[2 EQ M > 6 in 20 yr] +
                     P[D' | 3 EQ M > 6 in 20 yr] \cdot P[3 EQ M > 6 in 20 yr] + \cdots

where P[D' | n EQ M > 6] = (P[D' | 1 EQ M > 6])^n (assuming damage from an earthquake
does not weaken the building . . . ). With earthquake occurrences following a Poisson process at a
rate of 2 per 50 years (an expected 20/25 = 0.8 events in 20 years),

    P[D' in 20 yr] = \sum_{n=0}^{\infty} (0.947)^n \, \exp(-20/25) \, \frac{(20/25)^n}{n!}
                   = \exp(-0.8) \left[ 1 + 0.947\,\frac{0.8}{1!} + (0.947)^2\,\frac{0.8^2}{2!} + (0.947)^3\,\frac{0.8^3}{3!} + \cdots \right]
                   = \exp(-0.8) \, \exp(0.947 \times 0.8) = 0.958

The probability of damage from earthquakes in the next 20 years (given the assumptions in
this example) is close to 4%. Would that be an acceptable level of risk for you?
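
The closed-form simplification above can be checked numerically. The short Matlab sketch below
(not part of the handout) compares a truncated version of the infinite sum against the closed form.

    % example 4 check -- illustrative sketch
    p   = 0.947;  lam = 20/25;        % per-event survival probability; expected no. of events
    n   = 0:50;                       % truncate the infinite sum
    P_no_damage_sum    = sum( p.^n .* exp(-lam) .* lam.^n ./ factorial(n) )
    P_no_damage_closed = exp(-lam) * exp(p*lam)      % = exp(-lam*(1-p)) = 0.958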

7 Empirical PDFs, CDFs, and exceedance rates (nonparametric statistics)

The PDF and CDF of a sample of random data can be computed directly from the sample, without
assuming any particular probability distribution . . . (such as a normal, exponential, or other kind
of distribution).

A random sample of N data points can be sorted into increasing numerical order, so that

    x_1 \le x_2 \le \cdots \le x_{i-1} \le x_i \le x_{i+1} \le \cdots \le x_{N-1} \le x_N .

In the sorted sample there are i data points less than or equal to x_i. So, if the sample is representative
of the population, and the sample is big enough, the probability that a random X is less
than or equal to the i-th sorted value is i/N. In other words, P[X \le x_i] = i/N. Unless we know
that no value of X can exceed x_N, we must accept some probability that X > x_N. So, P[X \le x_N]
should be less than 1, and in such cases we can write P[X \le x_i] = (i - 1/2)/N.

The empirical CDF computed from a sorted sample of N values is

    F_X(x_i) = \frac{i}{N} \quad \ldots \text{or} \ldots \quad F_X(x_i) = \frac{i - 1/2}{N}

The empirical PDF is basically a histogram of the data. The following Matlab lines plot empirical
CDFs and PDFs from a vector of random data, x.




N = length(x);                          % number of values in the sample
nBins = floor(N/50);                    % number of bins in the histogram
[fx, xx] = hist(x, nBins);              % compute the histogram
fx = fx/N * nBins/(max(x)-min(x));      % scale the histogram to a PDF
F_x = ([1:N]-0.5)/N;                    % empirical CDF
subplot(211); bar(xx, fx);              % plot empirical PDF
subplot(212); plot(sort(x), F_x);       % plot empirical CDF
probability_of_failure = sum(x>0)/N     % probability that X > 0

The number of values in the sample greater than x_i is (N - i). If the sample is representative, the
probability of a value exceeding x_i is P[X > x_i] = 1 - F_X(x_i) \approx 1 - i/N. If the N samples were
collected over a period of time T, the average exceedance rate (number of events greater than x_i
per unit time) is N(1 - F_X(x_i))/T \approx N(1 - i/N)/T = (N - i)/T.
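
A minimal sketch of this calculation (not part of the handout; the observation period T is a
hypothetical value) computes the empirical exceedance rate for each sorted value of the sample x.

    % exceedance_rate_sketch.m -- illustrative, with a hypothetical observation period
    T  = 50;                            % assumed observation period (e.g., years)
    xs = sort(x);                       % sorted sample, xs(1) <= ... <= xs(N)
    N  = length(xs);
    nu = (N - (1:N)) / T;               % exceedance rate (N - i)/T for each x_i
    plot(xs, nu);                       % rate of values exceeding x_i, per unit time
    xlabel('x'); ylabel('exceedance rate');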

8 Random variable generation using the Inverse CDF method


A sample of a random variable having virtually any type of CDF, P[X \le x] = P = F_X(x), can
be generated from a sample of a uniformly-distributed random variable, U, (0 < U < 1), as
long as the inverse CDF, x = F_X^{-1}(P), can be computed. There are many numerical methods for
generating a sample of uniformly-distributed random numbers. It is important to be aware that
samples from some methods are more random than samples from others. The Matlab command
u = rand(1,N) computes a (row) vector sample of N uniformly-distributed random numbers with
0 < u < 1.

If X is a continuous rv with CDF F_X(x) and U has a uniform distribution on (0, 1), then the
random variable F_X^{-1}(U) has the distribution F_X. Thus, in order to generate a sample of data
distributed according to the CDF F_X, it suffices to generate a sample, u, of the rv U \sim U[0, 1] and
then make the transformation x = F_X^{-1}(u).

For example, if X is exponentially-distributed, the CDF of X is given by

    F_X(x) = 1 - e^{-x/\mu} ,

so

    x = F_X^{-1}(u) = -\mu \ln(1 - u) .

Therefore if u is a value from a uniformly-distributed rv in [0, 1], then

    x = -\mu \ln(u)

is a value from an exponentially distributed random variable. (If U is uniformly distributed in [0, 1]
then so is 1 - U.)

As another example, if X is log-normally distributed, the CDF of X is

    F_X(x) = \Phi\!\left( \frac{\ln x - \ln x_m}{\sigma_{\ln X}} \right) .

If u is a sample from a standard uniform distribution, then

    x = \exp\!\left[ \ln x_m + \Phi^{-1}(u) \, \sigma_{\ln X} \right]

is a sample from a log-normal distribution.
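
The two transformations above translate directly into Matlab. The sketch below (not part of the
handout; the parameter values are hypothetical) draws uniform samples with rand and maps them
through the inverse CDFs.

    % inverse_cdf_sampling.m -- illustrative sketch with hypothetical parameters
    N      = 10000;
    u      = rand(1, N);                        % uniform samples on (0,1)
    mu     = 2.0;                               % assumed exponential mean
    x_exp  = -mu * log(u);                      % exponential samples, x = -mu*ln(u)
    xm     = 36;  sd_lnX = 0.15;                % assumed log-normal median and sigma_lnX
    x_logn = exp(log(xm) + norminv(u)*sd_lnX);  % log-normal samples via Phi^{-1}(u)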

Note that since closed-form expressions for \Phi(z) and \Phi^{-1}(P) do not exist, the generation of normally-distributed
random variables requires other numerical methods. x = muX + sigX*randn(1,N) computes a
(row) vector sample of N normally-distributed random numbers.




[Figure 1. Examples of the generation of uniform random variables from the inverse CDF method.]

[Figure 2. Examples of the generation of random variables from the inverse CDF method. The density of
the horizontal arrows, u, is uniform, whereas the density of the vertical arrows, x = F_X^{-1}(u), is proportional
to F_X'(x), that is, proportional to f_X(x).]




9 Functions of Random Variables and Monte Carlo Simulation

The probability distributions of virtually any function of random variables can be computed using
the powerful method of Monte Carlo Simulation (MCS). MCS involves computing values of functions
with large samples of random variables.

For example, consider a function of three random variables, X1 , X2 , and X3 , where X1 is normally
distributed with mean of 6 and standard deviation of 2, X2 is log-normally distributed with median
of 2 and coefficient of variation of 0.3, and X3 is Rayleigh distributed with mode of 1. The function
    Y = \sin(X_1) + \sqrt{X_2} - \exp(-X_3) - 2

is a function of these three random variables and is therefore also random. The distribution function
and statistics of Y may be difficult to derive analytically, especially if the function Y = g(X) is
complicated. This is where MCS is powerful. Given samples of N values of X1 , X2 and X3 , a
sample of N values of Y can also be computed. The statistics of Y (mean, variance, PDF, and
CDF) can be estimated by computing the average value, sample variance, histogram, and empirical
CDF of the sample of Y . The probability P [Y > 0] can be estimated by counting the number of
positive values in the sample and dividing by N . The Matlab command P_Y_gt_0 = sum(y>0)/N
may be used to estimate this probability.

10 Monte Carlo Simulation in Matlab


% MCS_intro.m
% Monte Carlo Simulation ... an introductory example
%
% Y = g(X1,X2,X3) = sin(X1) + sqrt(X2) - exp(-X3) - 2 ;
%
% H.P. Gavin, Dept. Civil and Environmental Engineering, Duke Univ, Jan. 2012

%       X1           X2           X3
%     normal      lognormal    Rayleigh
mu1 = 6;         med2 = 2;     mod3 = 2;
sd1 = 2;         cv2 = 0.3;

N_MCS = 1000;    % number of random values in the sample

% (1) generate a large sample for each random variable in the problem ...

X1 = mu1 + sd1*randn(1,N_MCS);
X2 = logn_rnd(med2,cv2,1,N_MCS);
X3 = Rayleigh_rnd(mod3,1,N_MCS);

% (2) evaluate the function for each random variable to compute a new sample

Y = sin(X1) + sqrt(X2) - exp(-X3) - 2;

% suppose probability of failure is Prob[ g(X1,X2,X3) > 0 ] ...

Probability_of_failure = sum(Y>0) / N_MCS

% (3) plot histograms of the random variables

sort_X1 = sort(X1);
sort_X2 = sort(X2);
sort_X3 = sort(X3);
CDF = ([1:N_MCS]-0.5) / N_MCS;    % empirical CDF of all quantities




[Figure 3. Analytical and empirical PDFs and CDFs for X_1, X_2, and X_3, and the empirical PDF and
CDF for Y = g(X_1, X_2, X_3).]

nBins = floor(N_MCS/20);

figure(1)
clf
subplot(231)
[fx, xx] = hist(X1, nBins);                    % histogram of X1
fx = fx/N_MCS * nBins/(max(X1)-min(X1));       % scale histogram to PDF
hold on
plot(sort_X1, normpdf(sort_X1,mu1,sd1), '-k', 'LineWidth',3);
stairs(xx, fx, 'LineWidth',1)
hold off
ylabel('P.D.F.')
subplot(234)
hold on
plot(sort_X1, normcdf(sort_X1,mu1,sd1), '-k', 'LineWidth',3);
stairs(sort_X1, CDF, 'LineWidth',1)
hold off
ylabel('C.D.F.')
xlabel('X_1 : normal')
subplot(232)
[fx, xx] = hist(X2, nBins);                    % histogram of X2
fx = fx/N_MCS * nBins/(max(X2)-min(X2));       % scale histogram to PDF
hold on
plot(sort_X2, logn_pdf(sort_X2,med2,cv2), '-k', 'LineWidth',3);
stairs(xx, fx, 'LineWidth',1)
hold off
subplot(235)
hold on
plot(sort_X2, logn_cdf(sort_X2,[med2,cv2]), '-k', 'LineWidth',3);
stairs(sort_X2, CDF, 'LineWidth',1)
hold off
xlabel('X_2 : log-normal')
subplot(233)
[fx, xx] = hist(X3, nBins);                    % histogram of X3
fx = fx/N_MCS * nBins/(max(X3)-min(X3));       % scale histogram to PDF
hold on
plot(sort_X3, Rayleigh_pdf(sort_X3,mod3), '-k', 'LineWidth',3);
stairs(xx, fx, 'LineWidth',1)
hold off
subplot(236)
hold on
plot(sort_X3, Rayleigh_cdf(sort_X3,mod3), '-k', 'LineWidth',2);
stairs(sort_X3, CDF, 'LineWidth',1)
hold off
xlabel('X_3 : Rayleigh')

figure(2)
clf
subplot(211)
[fx, xx] = hist(Y, nBins);                     % histogram of Y
fx = fx/N_MCS * nBins/(max(Y)-min(Y));         % scale histogram to PDF
hold on
stairs(xx, fx, 'LineWidth',2)
plot([0 0],[0 0.5], '-k');
hold off
text(0.1,0.1,'P_F')
ylabel('P.D.F.')
subplot(212)
hold on
stairs(sort(Y), CDF, 'LineWidth',2)
plot([0 0],[0 1], '-k');
hold off
text(0.5,0.5,'Y>0')
ylabel('C.D.F.')
xlabel('Y = g(X_1,X_2,X_3)');

