Eric Bechhoefer
Goodrich Sensors and Integrated Systems
Vergennes, VT 05491
Setting to zero to find the extrema (thereby maximizing the separation) and solving for x results in:

Σ⁻¹x = µx, or Σx = λx,

where λ = 1/µ. The solutions λ must satisfy the determinant |Σ − λI| = 0; these solutions are defined as the eigenvalues of Σ (ref 5). Because Σ is a symmetric n x n matrix (e.g., a covariance matrix), there are n real eigenvalues (λ1…λn) and n real eigenvectors φ1…φn. The characteristic equation can then be written as:

ΣΦ = ΦΛ,

where Φ is an n x n matrix consisting of the n eigenvectors and Λ is a diagonal matrix of the eigenvalues. Note that eigenvectors corresponding to two different eigenvalues are orthonormal:

ΦᵀΦ = I.

The resulting covariance, K, of the transformed coordinates (y) in the decision space H1 is not diagonal. K can be diagonalized by an appropriate orthonormal transformation:

w = Ψᵀy,

where Ψ is the eigenvector matrix of K such that ΨᵀKΨ is diagonal. Combining these processes yields the overall transformation matrix A = ΦΛ^(−1/2)Ψ.

The Optimal Decision Rule

Using the developed transformation matrix A, we can now apply it to the Bayes classifier to maximize the separation between the decision spaces. This transformation is optimal in that no other transformation will provide a higher probability of correct classification (recall that the transformation A is based on the eigenmatrix solution to the characteristic function). Given Eq. 1 and Eq. 2, the following change of variables is made:
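The construction of A can be illustrated numerically. The following is a minimal NumPy sketch, with two randomly generated symmetric positive-definite matrices standing in for the H0 and H1 covariances (the names Sigma0, Sigma1, and the dimension n are illustrative assumptions, not values from the paper): Φ and Λ come from the eigendecomposition of the first covariance, whitening by ΦΛ^(−1/2) leaves the second covariance K non-diagonal, and Ψ, the eigenvector matrix of K, completes A = ΦΛ^(−1/2)Ψ, which diagonalizes both covariances at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the paper's H0/H1 covariances: any pair of
# symmetric positive-definite n x n matrices works here.
n = 4
B0 = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, n))
Sigma0 = B0 @ B0.T + n * np.eye(n)
Sigma1 = B1 @ B1.T + n * np.eye(n)

# Eigendecomposition of Sigma0:  Sigma0 Phi = Phi Lambda.
# eigh handles symmetric matrices: real eigenvalues, orthonormal Phi.
lam, Phi = np.linalg.eigh(Sigma0)
assert np.allclose(Phi.T @ Phi, np.eye(n))   # Phi^T Phi = I

# Whiten with Phi Lambda^{-1/2}; the covariance K of the transformed
# coordinates y under H1 is generally NOT diagonal.
W = Phi @ np.diag(lam ** -0.5)
K = W.T @ Sigma1 @ W

# Diagonalize K with its own (orthonormal) eigenvector matrix Psi.
mu, Psi = np.linalg.eigh(K)

# Overall transformation A = Phi Lambda^{-1/2} Psi.
A = W @ Psi
print(np.allclose(A.T @ Sigma0 @ A, np.eye(n)))    # True: identity under H0
print(np.allclose(A.T @ Sigma1 @ A, np.diag(mu)))  # True: diagonal under H1
```

In the transformed coordinates w = Aᵀx, the H0 covariance becomes the identity and the H1 covariance becomes diagonal, which is what makes the subsequent Bayes decision rule separable coordinate by coordinate.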