Iveta Hnětynková
1. Singular value decomposition
A ∈ Rn×m has
m columns of length n,
n rows of length m,
r is the maximal number of linearly independent columns (rows) of A.
There exists a singular value decomposition (SVD) of A of the form
A = UΣVT,
σ1 ≥ σ2 ≥ . . . ≥ σr > 0.
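The decomposition can be checked numerically; a minimal Python/NumPy sketch (the matrix is an arbitrary random example):

```python
import numpy as np

# An arbitrary random example with n = 5 rows and m = 3 columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

# Reduced SVD: A = U @ diag(s) @ Vt, with orthonormal columns in U
# and orthonormal rows in Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come out nonnegative and in nonincreasing order.
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])

# The factors reproduce A up to rounding error.
assert np.allclose(A, U @ np.diag(s) @ Vt)
```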
Singular value decomposition – the matrices:
[Figure: the shapes of the factors in A = UΣVT.]
The SVD gives us:
Singular value decomposition – the subspaces:
[Figure: A maps Rm to Rn with A vi = σi ui and AT ui = σi vi for i = 1, ..., r; u1, ..., ur span range(A), v1, ..., vr span range(AT), vr+1, ..., vm span ker(A), and ur+1, ..., un span ker(AT).]
The outer product (dyadic) form:
A = A1 + A2 + ... + Ar−1 + Ar,
A = ∑_{i=1}^{r} Ai, where Ai = σi ui viT.
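The dyadic form can be verified directly by summing the rank-one terms; a short NumPy sketch (random example matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-12 * s[0]))   # numerical rank

# A_i = sigma_i * u_i * v_i^T; summing the r dyads rebuilds A.
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
assert np.allclose(A, A_sum)
```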
Optimal approximation of A with a rank-k matrix:
[Figure: singular values of the BCSPWR06 matrix (from Matrix Market); singular values of the ZENIOS matrix (from Matrix Market).]
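The truncated dyadic sum is in fact the best rank-k approximation: by the Eckart–Young theorem its spectral-norm error equals σk+1. A small NumPy check (random example matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 3
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # keep the k largest triplets

# Eckart-Young: the spectral-norm error of the best rank-k
# approximation equals the (k+1)-st singular value.
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```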
Color image: three matrices for the Red, Green and Blue brightness values.
MATLAB DEMO:
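The demo itself is not reproduced here, but a Python/NumPy stand-in for what such a compression demo typically does (the image is replaced by random data, and the cut-off k is an illustrative choice) might look like:

```python
import numpy as np

def compress_channel(C, k):
    """Rank-k SVD approximation of one brightness-value matrix."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Stand-in for an RGB image: three 32 x 48 channel matrices in [0, 1].
rng = np.random.default_rng(3)
img = rng.random((32, 48, 3))

k = 5
compressed = np.stack(
    [compress_channel(img[:, :, c], k) for c in range(3)], axis=2
)

# Every compressed channel has rank at most k.
assert all(np.linalg.matrix_rank(compressed[:, :, c]) <= k for c in range(3))
```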
3. Application 2 - image deblurring
Sources of noise and blurring:
Challenge: Given only the blurred noisy image B and having information about how it was obtained, try to find (or approximate) the exact image X.
Model of blurring process:
[Figure: a blurred photo example; a barcode scanning example.]
Obtaining a linear model:
Ax = b, A ∈ Rn×n, x, b ∈ Rn,
where A is a discretization of the blurring operator, b = vec(B), x = vec(X).
Image vectorization B → b = vec(B):
B = [b1, b2, ..., bw] → b = [b1; b2; ...; bw],
i.e. the columns of B are stacked into one long vector.
Solution back reshaping x = vec(X) → X:
x = [x1; x2; ...; xw] → X = [x1, x2, ..., xw].
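Both operations are column-major (Fortran-order) reshapes; a tiny NumPy sketch:

```python
import numpy as np

B = np.arange(6).reshape(2, 3)   # a tiny 2 x 3 "image"

# vec(B): stack the columns b1, ..., bw on top of each other
# (column-major / Fortran order).
b = B.flatten(order="F")
assert np.array_equal(b, np.array([0, 3, 1, 4, 2, 5]))

# Back reshaping vec(X) -> X: unstack the long vector into columns again.
X = b.reshape(2, 3, order="F")
assert np.array_equal(X, B)
```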
Solution of the linear problem: if A is nonsingular,
Ax = b
has the unique solution
x = A−1b.
Image deblurring problem: Original image x, noisy blurred image b and the “naive” solution xnaive ≡ A−1b:
Ax = b, b = bexact + bnoise.
Usual properties:
SVD components of the naive solution:
xnaive = ∑_{j=1}^{n} (ujT bexact / σj) vj + ∑_{j=1}^{n} (ujT bnoise / σj) vj,
where the first sum is xexact = A−1 bexact and the second is A−1 bnoise.
We want the left sum xexact. What is the size of the right sum
(inverted noise) in comparison to the left one?
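The effect can be reproduced on a synthetic ill-conditioned problem (the decaying singular values below mimic a discretized blurring operator; the sizes and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50

# Synthetic A = U diag(s) V^T with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -12, n)
A = U @ np.diag(s) @ V.T

x_exact = rng.standard_normal(n)
b_exact = A @ x_exact
b_noise = 1e-8 * rng.standard_normal(n)   # noise far below the data level

# The naive solution inverts the noise together with the data.
x_naive = np.linalg.solve(A, b_exact + b_noise)
inverted_noise = x_naive - x_exact

# Division by the tiny sigma_j amplifies the noise by orders of magnitude.
assert np.linalg.norm(inverted_noise) > 1e3 * np.linalg.norm(b_noise)
```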
Thus the projections ujT b = ujT bexact + ujT bnoise are for small indexes j dominated by the exact part, but for large j by the noisy part.
[Figure: singular values σi of A and the right-hand side projections uiT b on the left singular subspaces, plotted together with the noise level.]
Because of the division by the singular value, the components of the naive solution
xnaive ≡ A−1b = ∑_{j=1}^{k} (ujT bexact / σj) vj + ∑_{j=1}^{k} (ujT bnoise / σj) vj
+ ∑_{j=k+1}^{n} (ujT bexact / σj) vj + ∑_{j=k+1}^{n} (ujT bnoise / σj) vj
(the first and third sums form xexact, the second and fourth the inverted noise)
corresponding to small σj’s are dominated by inverted noise.
Basic regularization method – Truncated SVD (TSVD):
we can approximate A with a rank-k matrix
A ≈ Sk ≡ ∑_{i=1}^{k} Ai = ∑_{i=1}^{k} σi ui viT.
TSVD regularization: removal of the troublesome components.
[Figure: the same plot of singular values and projections, truncated at index k.]
Here the smallest σj’s are not present. However, we also removed some components of xexact.
Other regularization methods:
• direct regularization;
• stationary regularization;
• projection (including iterative) regularization;
• hybrid methods combining the previous ones.
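The slides do not expand on these, but a classic example of a direct regularization method is Tikhonov regularization, which damps rather than truncates the small-σj components via the filter factors σj2/(σj2 + λ2); a sketch (the problem setup and the choice λ = 10−6 are illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov regularization via the SVD: filter factors
    f_j = sigma_j^2 / (sigma_j^2 + lam^2) damp the small-sigma components."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Synthetic ill-conditioned problem with rapidly decaying singular values.
rng = np.random.default_rng(6)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -12, n)
A = U @ np.diag(s) @ V.T

x_exact = rng.standard_normal(n)
b = A @ x_exact + 1e-8 * rng.standard_normal(n)

x_tik = tikhonov_solve(A, b, lam=1e-6)
x_naive = np.linalg.solve(A, b)

# The filtered solution is far closer to x_exact than the naive one.
assert np.linalg.norm(x_tik - x_exact) < np.linalg.norm(x_naive - x_exact)
```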
Other applications:
References:
Textbooks:
• Hansen, Nagy, O’Leary: Deblurring Images,
Spectra, Matrices, and Filtering, SIAM, 2006.
• Hansen: Discrete Inverse Problems,
Insight and Algorithms, SIAM, 2010.