
cos θ = (v·w)/(||v|| ||w||), and |v·w| ≤ ||v|| ||w|| (Cauchy-Schwarz). A = [A1 A2 . . . An ], Ax = Σ_i A_i x_i (a combination of the columns of A).
Gauss-Jordan: [A | I] → [I | A^{-1}]
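A quick numeric check of this reduction (a sketch in Python/sympy; the tool and the example matrix are my additions, not from the notes):

    from sympy import Matrix, eye

    A = Matrix([[2, 1], [5, 3]])     # arbitrary invertible example
    aug = A.row_join(eye(2))         # form the augmented matrix [A | I]
    R, _ = aug.rref()                # Gauss-Jordan row reduction
    assert R[:, 2:] == A.inv()       # the right half is now A^{-1}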
LU factorization: A = LU; elimination subtracts l_ij × (row j) from (row i) to produce the zeros below the pivots, and the multipliers l_ij fill in L.
Pivot vars vs. Free vars
Calculating N (A): Gaussian elimination A → U; each free column of U gives a special solution: set that free variable = 1 and the other free variables = 0, then back-substitute to solve Ax = 0.
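For example (a sketch using sympy's nullspace, my choice of tool, on a made-up rank-1 matrix):

    from sympy import Matrix

    A = Matrix([[1, 2, 3], [2, 4, 6]])   # rank 1, so n - r = 2 free variables
    for s in A.nullspace():              # one special solution per free column
        assert A * s == Matrix([0, 0])   # each special solution solves Ax = 0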
A has full column rank <=> N (A) = {0}; then if Ax = b has a solution, x is unique.
A ∈ R^(m×n): dim C(A) = dim C(A^T), dim C(A) + dim N (A) = n, dim C(A^T) + dim N (A^T) = m, rank A = dim C(A); if rank A = 1, then A = uv^T; C(A^T) ⊥ N (A), C(A) ⊥ N (A^T). (Proofs?)
Projections: line in the direction of a, want the point p closest to b; the line from b to p is perpendicular to a. So the projection of b onto the line through a is p = x̂ a with x̂ = a^T b / a^T a, i.e. p = P b with P = a a^T / a^T a.
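A numeric sketch of both formulas (Python/numpy, with an arbitrary example vector pair):

    import numpy as np

    a = np.array([1.0, 2.0, 2.0])
    b = np.array([1.0, 1.0, 1.0])

    x_hat = (a @ b) / (a @ a)             # x̂ = a^T b / a^T a
    p = x_hat * a                         # projection of b onto the line through a
    P = np.outer(a, a) / (a @ a)          # projection matrix P = a a^T / a^T a
    assert np.allclose(P @ b, p)
    assert np.isclose(a @ (b - p), 0.0)   # error b - p is perpendicular to a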
The combination x̂1 a1 + · · · + x̂n an = A x̂ that is closest to b comes from A^T (b − A x̂) = 0. The projection of b onto the subspace C(A) is p = A x̂ = P b, with x̂ = (A^T A)^{-1} A^T b and P = A (A^T A)^{-1} A^T.
A^T A is invertible <=> A has linearly independent columns.
Least Squares: x̂ minimizes ||Ax − b||^2; solve the normal equations A^T A x̂ = A^T b.
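The normal equations on the classic fit-a-line example (numpy sketch; the data points are illustrative):

    import numpy as np

    # fit C + Dt to (0,6), (1,0), (2,0): Ax = b has no exact solution
    A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
    b = np.array([6.0, 0.0, 0.0])

    x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations A^T A x̂ = A^T b
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least-squares solver agrees
    assert np.allclose(x_hat, x_ls)
    p = A @ x_hat                                 # projection of b onto C(A)
    assert np.allclose(A.T @ (b - p), 0.0)        # A^T (b - A x̂) = 0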
Gram-Schmidt: A = [a1 |a2 | . . . |an ], u1 = a1 , e1 = u1 /||u1 ||, u_{k+1} = a_{k+1} − (a_{k+1} · e1 )e1 − · · · − (a_{k+1} · e_k )e_k , e_{k+1} = u_{k+1} /||u_{k+1} ||.
A = QR with Q = [e1 |e2 | . . . |en ] and upper-triangular

R = [ a1·e1  a2·e1  . . .  an·e1 ]
    [   0    a2·e2  . . .  an·e2 ]
    [   .      .      .      .   ]
    [   0      0    . . .  an·en ]
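A minimal implementation of the recurrence above (classical Gram-Schmidt; numerically one would prefer the modified version or np.linalg.qr, and the test matrix is arbitrary):

    import numpy as np

    def gram_schmidt(A):
        # returns Q with orthonormal columns and upper-triangular R with A = QR
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for k in range(n):
            u = A[:, k].copy()
            for i in range(k):
                R[i, k] = Q[:, i] @ A[:, k]   # entry r_ik = a_k · e_i
                u -= R[i, k] * Q[:, i]        # subtract components along e_1 .. e_{k-1}
            R[k, k] = np.linalg.norm(u)
            Q[:, k] = u / R[k, k]
        return Q, R

    A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
    Q, R = gram_schmidt(A)
    assert np.allclose(Q @ R, A)
    assert np.allclose(Q.T @ Q, np.eye(2))    # columns of Q are orthonormal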
2×2: A = [a b; c d], |A| = ad − bc, A^{-1} = (1/|A|) [d −b; −c a].
Determinant rules: |ta tb; c d| = t |a b; c d| and |a+a' b+b'; c d| = |a b; c d| + |a' b'; c d| (linear in each row separately); |a b; c d| = −|c d; a b| (row exchange reverses the sign); |a b; c−la d−lb| = |a b; c d| (an elimination step leaves |A| unchanged).
For any row i of A with its cofactors, |A| = a_i1 C_i1 + a_i2 C_i2 + · · · + a_in C_in , where C_ij = (−1)^(i+j) |M_ij | and M_ij is A without row i and column j.
Cramer's Rule: If |A| ≠ 0, Ax = b has the unique solution x_i = |B_i |/|A| for each i, where B_i is A with its ith column replaced by b.
(A^{-1})_ij = C_ji /|A|
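Cramer's rule worked numerically (numpy; a small made-up system):

    import numpy as np

    A = np.array([[2.0, 1.0], [5.0, 3.0]])
    b = np.array([4.0, 7.0])

    x = np.empty(2)
    for i in range(2):
        B_i = A.copy()
        B_i[:, i] = b                                  # B_i: column i replaced by b
        x[i] = np.linalg.det(B_i) / np.linalg.det(A)   # x_i = |B_i| / |A|
    assert np.allclose(A @ x, b)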
A has eigenvalue λ => A^n has eigenvalue λ^n.
A has n linearly independent eigenvectors {x1 , . . . , xn }: with S = [x1 . . . xn ], A = SΛS^{-1} where Λ = diag(λ_i ).
If A has n distinct eigenvalues, then the eigenvectors of A are lin. indep. and A is diagonalizable.
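Diagonalization in numpy (example matrix chosen to have distinct eigenvalues 5 and 2):

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    lam, S = np.linalg.eig(A)                 # columns of S are eigenvectors
    Lam = np.diag(lam)
    S_inv = np.linalg.inv(S)
    assert np.allclose(A, S @ Lam @ S_inv)    # A = S Λ S^{-1}
    assert np.allclose(np.linalg.matrix_power(A, 3),
                       S @ Lam**3 @ S_inv)    # A^n = S Λ^n S^{-1}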
Diagonalizable A and B have the same eigenvector matrix S <=> AB = BA.
Differential equations: du/dt = Au has solution u(t) = e^(At) u(0); with A = SΛS^{-1}, e^(At) = S e^(Λt) S^{-1}.
Markov matrix: entries ≥ 0 and each column sums to 1; λ = 1 is always an eigenvalue, and the steady state u_∞ is its eigenvector (A u_∞ = u_∞).
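A power-iteration sketch of the steady state (numpy; the transition matrix is made up):

    import numpy as np

    A = np.array([[0.9, 0.2], [0.1, 0.8]])   # columns sum to 1, entries >= 0

    u = np.array([1.0, 0.0])
    for _ in range(200):
        u = A @ u                    # A^k u_0 approaches the steady state
    assert np.allclose(A @ u, u)     # u is the eigenvector for λ = 1
    assert np.isclose(u.sum(), 1.0)  # column sums = 1 conserve total probability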
Complex: Every symmetric (or Hermitian) matrix has real eigenvalues; the eigenvectors can be chosen to be orthonormal.
Unitary: U^H U = U U^H = I
Similar Matrices: If B = M^{-1}AM, then A & B have the same eigenvalues. Every eigenvector x of A leads to the eigenvector M^{-1}x of B.
Change of Basis = Similarity Transformation: if the columns of M are the new basis vectors, the same linear map is represented in the new basis by M^{-1}AM.
Spectral Thm: Every symmetric (Hermitian) A can be diagonalized by an orthogonal (unitary) matrix Q (U): A = QΛQ^{-1} = QΛQ^T | A = UΛU^{-1} = UΛU^H.
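numpy's eigh computes this factorization for symmetric input (arbitrary example matrix):

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])           # symmetric
    lam, Q = np.linalg.eigh(A)                       # real eigenvalues, orthonormal eigenvectors
    assert np.allclose(Q.T @ Q, np.eye(2))           # Q is orthogonal: Q^T = Q^{-1}
    assert np.allclose(A, Q @ np.diag(lam) @ Q.T)    # A = Q Λ Q^T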
Normal Matrices: N is normal: N^H N = N N^H <=> U^{-1}NU = Λ for some unitary U. N has a complete set of orthonormal eigenvectors.
Jordan form: If A has s independent eigenvectors, A ~ J with s Jordan blocks: J = M^{-1}AM = diag(J1 , . . . , Js ), where each block J_i has λ_i down its diagonal and 1's on the superdiagonal. Two matrices are similar <=> they share the same Jordan matrix J.

A is positive definite if and only if: (I) x^T Ax > 0 for all x ≠ 0; (II) all eigenvalues of A are > 0; (III) all upper-left minors A_k have positive determinant; (IV) all the pivots (without row exchanges) satisfy d_k > 0; (V) there is a matrix R with independent columns such that A = R^T R.
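Tests (II), (III), and (V) numerically (numpy; the example A is arbitrary, and np.linalg.cholesky fails precisely when A is not positive definite):

    import numpy as np

    A = np.array([[2.0, -1.0], [-1.0, 2.0]])

    assert np.all(np.linalg.eigvalsh(A) > 0)                       # (II) eigenvalues > 0
    assert np.linalg.det(A[:1, :1]) > 0 and np.linalg.det(A) > 0   # (III) positive minors
    L = np.linalg.cholesky(A)              # lower-triangular, A = L L^T
    assert np.allclose(A, L @ L.T)         # (V) with R = L^T, A = R^T R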
SVD: A = UΣV^T = (ortho)(diag)(ortho). The columns of U (m by m) are eigenvectors of AA^T, and the columns of V (n by n) are eigenvectors of A^T A. The r = rank A singular values on the diagonal of Σ (m by n) are the square roots of the nonzero eigenvalues of both AA^T and A^T A. For positive definite matrices, Σ is Λ and UΣV^T is QΛQ^T. For symmetric matrices, any negative eigenvalues in Λ become positive in Σ. For complex matrices, Σ remains real but U, V are unitary.
U and V give orthonormal bases for all 4 fundamental subspaces: first r columns of U = C(A), last m−r columns of U = N (A^T), first r columns of V = C(A^T), last n−r columns of V = N (A).
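A rank-1 example showing the subspace bases (numpy; matrix chosen so that r = 1):

    import numpy as np

    A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # m=3, n=2, rank 1
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))                           # numerical rank

    assert np.allclose((U[:, :r] * s[:r]) @ Vt[:r, :], A)   # A = U Σ V^T
    assert np.allclose(A.T @ U[:, r:], 0.0)   # last m-r columns of U span N(A^T)
    assert np.allclose(A @ Vt[r:, :].T, 0.0)  # last n-r rows of V^T span N(A)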
What do eigenvalues say about the matrix?
Left inverse: If A has full column rank n, then the left inverse L = (A^T A)^{-1} A^T and LA = I_n.
Right inverse: If A has full row rank m, then the right inverse R = A^T (AA^T)^{-1} and AR = I_m.
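Both formulas in numpy (the tall matrix reuses the least-squares example above):

    import numpy as np

    A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # full column rank, n = 2
    L = np.linalg.inv(A.T @ A) @ A.T                     # left inverse (A^T A)^{-1} A^T
    assert np.allclose(L @ A, np.eye(2))                 # L A = I_n
    assert np.allclose(L, np.linalg.pinv(A))             # matches the pseudoinverse here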
