1 Introduction
2 Basic Quantities
Definitions
3 Transformation
Transformation of Random Vectors
4 Covariance Matrices
Covariance Matrices
Transformation/Diagonalization
Examples
5 Gaussian Random Vectors
pdf
Transformed pdf
Introduction
X = (X_1, \ldots, X_n)^T
Y = (Y_1, \ldots, Y_m)^T
Each Entry Is Itself a Random Variable
Could Have Identical Or Different Distributions for Each Dimension
Could Be Independent Or Have Correlation Between Dimensions
Dr. Adam Panagos
Random Signals in Communications
Outline Introduction Basic Quantities Transformation Covariance Matrices Gaussian Random Vectors
Definitions
Definition
The Joint Distribution Function of the random variables X and Y
is defined as

F_{XY}(x, y) = P[X \le x, Y \le y]

Properties Include

F_{XY}(\infty, \infty) = 1
F_{XY}(-\infty, y) = F_{XY}(x, -\infty) = 0
F_{XY}(\infty, y) = F_Y(y)
F_{XY}(x, \infty) = F_X(x)
\frac{\partial^2}{\partial x \, \partial y} F_{XY}(x, y) = f_{XY}(x, y)
Can Generalize to N Random Variables
Definitions
F_X(x) = P(X \le x)
       = P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n)
       = P\left( \bigcap_{k=1}^{n} \{ X_k \le x_k \} \right)
Definitions
Definition
The Probability Density Function (pdf) of the random vector
X = (X1 , X2 , . . . , Xn )T can be obtained from FX (x) as
f_X(x) = \frac{\partial^n F_X(x)}{\partial x_1 \cdots \partial x_n}
Definitions
Definition
The Joint Distribution Function of the random vectors
X = (X_1, X_2, \ldots, X_n)^T and Y = (Y_1, Y_2, \ldots, Y_m)^T is defined as

F_{XY}(x, y) = P[X_1 \le x_1, \ldots, X_n \le x_n, Y_1 \le y_1, \ldots, Y_m \le y_m]
Definitions
Definition
The Joint Density Function of the random vectors
X = (X_1, X_2, \ldots, X_n)^T and Y = (Y_1, Y_2, \ldots, Y_m)^T is defined as

f_{XY}(x, y) = \frac{\partial^{n+m} F_{XY}(x, y)}{\partial x_1 \cdots \partial x_n \, \partial y_1 \cdots \partial y_m}
Definitions
Definition
The Marginal Density Function of the random vector
X = (X1 , X2 , . . . , Xn )T can be obtained from the joint density
function as
f_X(x) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{XY}(x, y) \, dy_1 \cdots dy_m
Definitions
Mean Vector
Definition
The Mean Vector of the random vector X = (X1 , X2 , . . . , Xn )T
is the vector \mu whose elements are given by

\mu_i = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_i f_X(x_1, \ldots, x_n) \, dx_1 \cdots dx_n
Definitions
Definition
The random vector X = (X1 , X2 , . . . , Xn )T is Jointly Gaussian iff
the joint density function has the form
f_X(x) = \frac{1}{(2\pi)^{n/2} |\det(K)|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T K^{-1} (x - \mu) \right)
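The density above can be evaluated directly from the formula. A minimal NumPy sketch (the function name `gaussian_pdf` and the one-dimensional sanity check are illustrative choices, not from the slides):

```python
import numpy as np

# Direct evaluation of the jointly Gaussian pdf from the formula above:
#   f_X(x) = (2*pi)^(-n/2) |det K|^(-1/2) exp(-0.5 (x-mu)^T K^{-1} (x-mu))
def gaussian_pdf(x, mu, K):
    n = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(abs(np.linalg.det(K)))
    quad = diff @ np.linalg.inv(K) @ diff
    return np.exp(-0.5 * quad) / norm

# Sanity check in one dimension: at x = 0 the N(0, 1) density is
# 1/sqrt(2*pi), approximately 0.3989.
p = gaussian_pdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]]))
```

In practice one would avoid the explicit inverse and determinant (e.g. work through a Cholesky factor of K), but the direct form mirrors the slide's formula.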
Definitions
Uncorrelated
Definition
Let X and Y be real n-dimensional random vectors with mean
vectors \mu_X and \mu_Y respectively. The random vectors are
uncorrelated if

E[XY^T] = \mu_X \mu_Y^T
Definitions
Orthogonal
Definition
Let X and Y be real n-dimensional random vectors. The random
vectors are orthogonal if

E[XY^T] = 0
Definitions
Independent
Definition
Let X and Y be real n-dimensional random vectors with joint pdf
f_{XY}(x, y). The random vectors are independent if

f_{XY}(x, y) = f_X(x) f_Y(y)
Problem Statement
Y_1 = g_1(X_1, X_2, \ldots, X_n)
Y_2 = g_2(X_1, X_2, \ldots, X_n)
\vdots
Y_n = g_n(X_1, X_2, \ldots, X_n)
Problem Solution
x_1 = h_1(y_1, y_2, \ldots, y_n)
x_2 = h_2(y_1, y_2, \ldots, y_n)
\vdots
x_n = h_n(y_1, y_2, \ldots, y_n)

where h_1, \ldots, h_n denote the inverse transformation.
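Given an invertible, differentiable transformation, the transformed pdf follows from the standard change-of-variables result, stated here with h_i denoting the inverse functions of the solution step above:

f_Y(y) = f_X\big(h_1(y), \ldots, h_n(y)\big) \, \lvert \det J(y) \rvert,
\qquad
J_{ik}(y) = \frac{\partial h_i(y)}{\partial y_k}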
Problem Solution
An Example
Notes
Random Vector Transformation
Covariance Matrices
Definition
Definition
Let X be a real-valued random vector with associated mean vector
\mu. The covariance matrix K is

K = E[(X - \mu)(X - \mu)^T]
Notes
Covariance Form
Covariance Matrices
A matrix A is positive semidefinite if z^T A z \ge 0 for all z
A matrix A is positive definite if z^T A z > 0 for all z \ne 0
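Every covariance matrix is symmetric and positive semidefinite, since z^T K z = E[(z^T(X - \mu))^2] \ge 0. A quick NumPy check on a sample covariance (the data are arbitrary illustrative draws, not from the slides):

```python
import numpy as np

# A covariance matrix K = E[(X - mu)(X - mu)^T] is symmetric positive
# semidefinite: z^T K z = E[(z^T (X - mu))^2] >= 0 for every z.
# Verify both properties numerically on a sample covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))      # 1000 samples of a 3-D random vector
K = np.cov(X, rowvar=False)             # 3 x 3 sample covariance matrix

symmetric = np.allclose(K, K.T)
eigenvalues = np.linalg.eigvalsh(K)     # all >= 0 up to roundoff
```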
Notes
A Proof
Covariance Matrices
A \phi = \lambda \phi
\phi^T \phi = ||\phi||^2 = 1
Covariance Matrices
Comments
Notes/Matlab
Eigenvalue and Eigenvector Computations
Transformation/Diagonalization
Introduction
Transformation/Diagonalization
Definitions
Definition
Two n n matrices A and B are similar if there exists an n n
matrix T with det(T) \ne 0 such that

T^{-1} A T = B
T Is A Transformation Matrix
Transformation/Diagonalization
Theorems
Theorem
Let M be a real symmetric (r.s.) matrix with eigenvalues
\lambda_1, \ldots, \lambda_n. Then M has n mutually orthogonal unit eigenvectors
\phi_1, \ldots, \phi_n.
Theorem
An n n matrix M is similar to a diagonal matrix if and only if M
has n linearly independent eigenvectors.
Transformation/Diagonalization
Diagonalization
U = (\phi_1, \ldots, \phi_n)
Matrix M Is Transformed As
U^{-1} M U = \Lambda = diag(\lambda_1, \ldots, \lambda_n)
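The diagonalization above is easy to reproduce numerically. The slides demonstrate it in Matlab; an equivalent NumPy sketch (the 2x2 matrix M is an illustrative choice):

```python
import numpy as np

# Diagonalize a real symmetric matrix M. The columns of U are the
# orthonormal eigenvectors phi_1, ..., phi_n, so U^{-1} = U^T and
# U^T M U = Lambda = diag(lambda_1, ..., lambda_n).
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # eigenvalues 1 and 3

lam, U = np.linalg.eigh(M)              # eigh handles the symmetric case
Lambda = U.T @ M @ U                    # off-diagonal terms vanish
```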
Matlab
Covariance Diagonalization
Transformation/Diagonalization
Joint Diagonalization
V^T P V = I
V^T Q V = diag(\lambda_1, \ldots, \lambda_n)

where \lambda_1, \ldots, \lambda_n are generalized eigenvalues satisfying

Q v_i = \lambda_i P v_i
Transformation/Diagonalization
1 Calculate the eigenvalues \lambda_i of P^{-1} Q for i = 1, \ldots, n
2 Calculate the unnormalized eigenvectors v_i' for i = 1, \ldots, n by
solving

(P^{-1} Q - \lambda_i I) v_i' = 0

3 Find normalization constants K_i for i = 1, \ldots, n such that
v_i = K_i v_i' satisfies v_i^T P v_i = 1
4 Vectors v_i form the columns of V
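The four steps above can be sketched numerically. The slides use Matlab; here is a NumPy version (the matrices P and Q are illustrative choices, with P positive definite as required):

```python
import numpy as np

# Joint diagonalization of symmetric P (positive definite) and Q,
# following the steps above.
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])
Q = np.array([[1.0, 1.0],
              [1.0, 3.0]])

# Steps 1-2: eigenvalues/eigenvectors of P^{-1} Q.
lam, Vraw = np.linalg.eig(np.linalg.inv(P) @ Q)

# Step 3: rescale each eigenvector so that v_i^T P v_i = 1.
V = np.empty_like(Vraw)
for i in range(Vraw.shape[1]):
    v = Vraw[:, i]
    V[:, i] = v / np.sqrt(v @ P @ v)

# Step 4: the columns of V satisfy V^T P V = I and V^T Q V = diag(lam).
```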
Examples
Example
Matlab
Joint Diagonalization
Examples
Classification Example
\mu_i = E[Y | i] = a^T \mu_i

(the \mu_i on the right-hand side is the class-i mean vector of X, while
the left-hand side is the mean of the projected scalar Y)
Choice Of a Is Important
Want to Maximize Distance Between Means \mu_i and Minimize
Variances \sigma_i^2
Want to Maximize Cost Function

J(a) = \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2}
J(a) = \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2}
     = \frac{(a^T \mu_1 - a^T \mu_2)^2}{a^T K_1 a + a^T K_2 a}
     = \frac{\big(a^T (\mu_1 - \mu_2)\big)^2}{a^T (K_1 + K_2) a}
     = \frac{a^T (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T a}{a^T (K_1 + K_2) a}
Q = (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T
P = K_1 + K_2

So J(a) Can Be Written As

J(a) = \frac{a^T Q a}{a^T P a}
Substituting a = V b, where V jointly diagonalizes P and Q:

J(a) = \frac{b^T V^T Q V b}{b^T V^T P V b}
     = \frac{b^T \Lambda b}{||b||^2}
Theorem
Let M be a real symmetric (r.s.) matrix with largest eigenvalue \lambda_1.
Then

\lambda_1 = \max_{x} \frac{x^T M x}{||x||^2}

and the maximum is achieved for x = K \phi_1, where \phi_1 is the unit
eigenvector associated with \lambda_1 and K is any real-valued constant.
Since a Is Just the Eigenvector Associated With \lambda_1, It
Satisfies

P^{-1} Q a = \lambda_1 a

Substituting for Q We Have That a Satisfies

P^{-1} (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T a = \lambda_1 a

But (\mu_1 - \mu_2)^T a Is Just a Scalar, Let's Denote It as k

a = \frac{k}{\lambda_1} P^{-1} (\mu_1 - \mu_2)

a is called the Fisher Linear Discriminant
Usually normalize such that ||a|| = 1
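The closed form above is straightforward to compute. A NumPy sketch (the class means and covariances below are illustrative values, and the overall sign of a is arbitrary since only the direction matters):

```python
import numpy as np

# Fisher linear discriminant: a is proportional to
# (K1 + K2)^{-1} (mu1 - mu2), then normalized to unit length.
mu1 = np.array([2.0, 0.0])
mu2 = np.array([0.0, 1.0])
K1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])
K2 = np.array([[2.0, 0.0],
               [0.0, 0.5]])

P = K1 + K2
a = np.linalg.solve(P, mu1 - mu2)   # direction P^{-1}(mu1 - mu2)
a = a / np.linalg.norm(a)           # normalize so ||a|| = 1
```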
Transformed pdf
Introduction
Consider Now Transforming X Using the Nonsingular n \times n
Transformation Matrix A To Yield
Y = AX
What Is The Distribution of Y?
Using Transformation Of Random Vectors Can Show That Y
Is Also A Gaussian Random Vector With
E[Y] = E[AX] = A E[X] = A \mu = \mu_Y
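The mean result above, together with the standard companion fact that the covariance transforms as K_Y = A K A^T under a linear transformation (not shown on this slide), can be checked numerically; the matrices below are illustrative values:

```python
import numpy as np

# Linear transformation Y = AX of a Gaussian random vector with mean mu
# and covariance K. The slide gives E[Y] = A mu; the covariance
# K_Y = A K A^T is the standard companion result.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
mu = np.array([1.0, -1.0])
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

mu_Y = A @ mu        # E[Y] = A E[X] = A mu
K_Y = A @ K @ A.T    # covariance of Y
```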
Transformed pdf