
Singular Value Decomposition -

Applications in Image Processing

Iveta Hnětynková

Katedra numerické matematiky, MFF UK

Ústav informatiky, AV ČR


1
Outline

1. Singular value decomposition

2. Application 1 - image compression

3. Application 2 - image deblurring

2
1. Singular value decomposition

Consider a (real) matrix

A ∈ R^{n×m},  r = rank(A) ≤ min{n, m}.

A has

m columns of length n,
n rows of length m;
r is the maximal number of linearly independent columns (rows) of A.

3
There exists an SVD decomposition of A in the form

\[
A = U \Sigma V^T,
\]

where U = [u_1, ..., u_n] ∈ R^{n×n} and V = [v_1, ..., v_m] ∈ R^{m×m} are orthogonal matrices, and

\[
\Sigma = \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \in \mathbb{R}^{n \times m},
\qquad
\Sigma_r = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix} \in \mathbb{R}^{r \times r},
\]

\[
\sigma_1 \ge \sigma_2 \ge \ldots \ge \sigma_r > 0 .
\]

4
Singular value decomposition – the matrices:

[Figure: block structure of the factorization A = U Σ V^T]

{u_i}, i = 1, ..., n, are the left singular vectors (columns of U),
{v_i}, i = 1, ..., m, are the right singular vectors (columns of V),
{σ_i}, i = 1, ..., r, are the singular values of A.

5
The SVD gives us:

span(u_1, ..., u_r) ≡ range(A) ⊂ R^n,
span(v_{r+1}, ..., v_m) ≡ ker(A) ⊂ R^m,
span(v_1, ..., v_r) ≡ range(A^T) ⊂ R^m,
span(u_{r+1}, ..., u_n) ≡ ker(A^T) ⊂ R^n,

spectral and Frobenius norm of A, rank of A, ...

6
Singular value decomposition – the subspaces:

[Figure: the fundamental subspaces – A maps R^m to R^n and A^T maps back; u_1, ..., u_r span range(A), v_1, ..., v_r span range(A^T), v_{r+1}, ..., v_m span ker(A), u_{r+1}, ..., u_n span ker(A^T), with the singular values σ_1, ..., σ_r linking the pairs u_i, v_i.]
7
The outer product (dyadic) form:

We can rewrite A as a sum of rank-one matrices in the dyadic form

\[
A = U \Sigma V^T
  = [u_1, \ldots, u_r]
    \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}
    \begin{bmatrix} v_1^T \\ \vdots \\ v_r^T \end{bmatrix}
  = u_1 \sigma_1 v_1^T + \ldots + u_r \sigma_r v_r^T
  = \sum_{i=1}^{r} \sigma_i u_i v_i^T
  \equiv \sum_{i=1}^{r} A_i .
\]

Moreover, ‖A_i‖_2 = σ_i gives ‖A_1‖_2 ≥ ‖A_2‖_2 ≥ ... ≥ ‖A_r‖_2.
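A minimal MATLAB sketch (with a small, hypothetical random matrix) of computing the SVD and verifying the dyadic expansion term by term:

% Minimal sketch: compute the SVD of a small example matrix and verify
% the dyadic expansion A = sum_i sigma_i * u_i * v_i'.
A = rand(5, 3);                    % hypothetical example matrix (n = 5, m = 3)
[U, S, V] = svd(A);                % full SVD: A = U*S*V'
r = rank(A);
Asum = zeros(size(A));
for i = 1:r
    Asum = Asum + S(i,i) * U(:,i) * V(:,i)';   % add the rank-one term A_i
end
disp(norm(A - Asum))               % close to machine precision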


8
Matrix A as a sum of rank-one matrices:

[Figure: A depicted as the sum of rank-one terms A_1 + A_2 + ... + A_{r-1} + A_r = Σ_{i=1}^{r} A_i]

SVD reveals the dominating information encoded in a matrix. The first terms are the “most” important.

9
Optimal rank-k approximation of A:

The sum of the first k dyadic terms

\[
\sum_{i=1}^{k} A_i \equiv \sum_{i=1}^{k} \sigma_i u_i v_i^T
\]

is the best rank-k approximation of the matrix A in the sense of minimizing the 2-norm of the approximation error, i.e.,

\[
\sum_{i=1}^{k} u_i \sigma_i v_i^T
= \arg \min_{X \in \mathbb{R}^{n \times m},\ \mathrm{rank}(X) \le k} \|A - X\|_2 .
\]

This allows us to approximate A with a lower-rank matrix

\[
A \approx \sum_{i=1}^{k} A_i \equiv \sum_{i=1}^{k} \sigma_i u_i v_i^T .
\]
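A minimal MATLAB sketch (reusing A, U, S, V from the sketch above) of forming the rank-k truncation; by the Eckart–Young theorem the 2-norm error equals the first discarded singular value σ_{k+1}:

% Minimal sketch: best rank-k approximation from the first k dyadic terms.
k  = 2;                                   % hypothetical truncation level, k < r
Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % sum of the first k dyadic terms
sv = diag(S);                             % singular values sigma_1 >= sigma_2 >= ...
disp([norm(A - Ak, 2), sv(k+1)])          % the 2-norm error equals sigma_{k+1}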
10
Different possible distributions of singular values:

[Figure: two plots of singular value decay – left: singular values of the BCSPWR06 matrix, right: singular values of the ZENIOS matrix (both from Matrix Market).]

The hardly approximable (left) and the easily approximable (right) matrices (BCSPWR06 and ZENIOS from the Harwell-Boeing Collection).
11
2. Application 1 - image compression

Grayscale image = matrix, each entry represents a pixel brightness.


12
Grayscale image: scale 0, . . . , 255 from black to white

[Figure: a grayscale image and its matrix of pixel values]

\[
\begin{bmatrix}
255 & 255 & 255 & 255 & 255 & \ldots & 255 & 255 & 255 \\
255 & 255 &  31 &   0 & 255 & \ldots & 255 & 255 & 255 \\
255 & 255 & 101 &  96 & 121 & \ldots & 255 & 255 & 255 \\
255 &  99 & 128 & 128 &  98 & \ldots & 255 & 255 & 255 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ldots & \vdots & \vdots & \vdots \\
255 &  90 & 158 & 153 & 158 & \ldots & 100 &  35 & 255 \\
255 & 255 & 102 & 103 &  99 & \ldots &  98 & 255 & 255 \\
255 & 255 & 255 & 255 & 255 & \ldots & 255 & 255 & 255
\end{bmatrix}
\]

Color image: 3 matrices for the Red, Green and Blue brightness values

13
MATLAB DEMO:

Approximate a grayscale image A using the SVD by the truncated sum Σ_{i=1}^{k} A_i. Compare storage requirements and quality of approximation for different k.

Memory required to store:

an uncompressed image of size m × n: mn values,
a rank-k SVD approximation: k(m + n + 1) values.
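A sketch of how such a demo might look in MATLAB, assuming a grayscale image stored in a hypothetical file image.png:

% Sketch: compress a grayscale image by a rank-k SVD approximation and
% compare storage requirements for several values of k.
A = double(imread('image.png'));          % hypothetical grayscale image file
[m, n] = size(A);
[U, S, V] = svd(A);
for k = [5 20 50]                         % hypothetical truncation levels
    Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';          % rank-k approximation
    fprintf('k = %d: %d values instead of %d\n', ...
            k, k*(m + n + 1), m*n);                  % k(m+n+1) vs mn values
    figure, imshow(uint8(Ak)), title(sprintf('k = %d', k));
end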

14
3. Application 2 - image deblurring

15
Sources of noise and blurring:

• physical sources (moving objects, lens out of focus, ...),


• measurement errors,
• discretization errors,
• rounding errors (finite precision arithmetic in computers),
• ...

Challenge: Given only the blurred, noisy image B and information about how it was obtained, try to find (or approximate) the exact image X.

16
Model of blurring process:

[Figure: examples of blurring – a blurred photo and barcode scanning, each shown as exact image −→ blurred image]

X (exact image),  A (blurring operator),  B (blurred noisy image)

17
Obtaining a linear model:

Using some discretization technique, it is possible to transform this problem to a linear problem

Ax = b,   A ∈ R^{n×n},   x, b ∈ R^n,

where the matrix A is a discretization of the blurring operator, b = vec(B), x = vec(X).

Size of the problem: n = number of pixels in the image; e.g., even for a low-resolution 456 × 684 px image we get 311 904 equations.

18
Image vectorization B → b = vec (B):

\[
B = [b_1, b_2, \ldots, b_w]
\quad \longrightarrow \quad
b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_w \end{bmatrix}
\]

19
Solution back reshaping x = vec (X) → X:

\[
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_w \end{bmatrix}
\quad \longrightarrow \quad
X = [x_1, x_2, \ldots, x_w]
\]
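In MATLAB both the vectorization and the back reshaping are one-liners; a minimal sketch with a small hypothetical matrix:

% Sketch: vectorize a matrix by stacking its columns and reshape it back.
B = magic(4);              % hypothetical small "image" (h = 4 rows, w = 4 columns)
[h, w] = size(B);
b = B(:);                  % b = vec(B): columns b_1, ..., b_w stacked into one vector
X = reshape(b, h, w);      % back reshaping of the long vector into an h-by-w matrix
disp(isequal(X, B))        % true: the round trip recovers the original matrix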

20
Solution of the linear problem:

Let A be nonsingular (which is usually the case). Then

Ax = b

has the unique solution

x = A^{-1} b.

Can we simply take it?
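Computing it is straightforward in any case; a minimal MATLAB sketch (assuming A, b and the image height h and width w have been assembled as above):

% Sketch: the "naive" solution of the discretized deblurring problem.
xnaive = A \ b;                   % solves A*x = b (preferable to forming inv(A)*b)
Xnaive = reshape(xnaive, h, w);   % reshape the solution vector back to an image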

21
Image deblurring problem: the original image x, the noisy blurred image b, and the “naive” solution x_naive ≡ A^{-1}b:

[Figure: three panels – x = true image, b = blurred, noisy image, x_naive = inverse solution]

Why does it not work? Because of the properties of our problem.


22
Consider that b^noise is noise and b^exact is the exact part in our image b. Then our linear model is

Ax = b,   b = b^exact + b^noise.

Usual properties:

• the problem is ill-posed (i.e., sensitive to small changes in b);
• often A is nonsingular, but its singular values σ_j decay quickly;
• b^exact is smooth and satisfies the discrete Picard condition (DPC);
• b^noise is often random – white (does not have dominant frequencies) – and does not satisfy the DPC;
• ‖b^exact‖ ≫ ‖b^noise‖, but ‖A^{-1} b^exact‖ ≪ ‖A^{-1} b^noise‖.

23
SVD components of the naive solution:

From the SVD A = U Σ V^T, we have

\[
A^{-1} = V \Sigma^{-1} U^T = \sum_{j=1}^{n} \frac{1}{\sigma_j} v_j u_j^T .
\]

Thus

\[
x_{\mathrm{naive}} \equiv A^{-1} b
= \sum_{j=1}^{n} \frac{1}{\sigma_j} v_j u_j^T b
= \sum_{j=1}^{n} \frac{u_j^T b}{\sigma_j} v_j
= \underbrace{\sum_{j=1}^{n} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j} v_j}_{x^{\mathrm{exact}} \,=\, A^{-1} b^{\mathrm{exact}}}
+ \underbrace{\sum_{j=1}^{n} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j} v_j}_{A^{-1} b^{\mathrm{noise}}} .
\]

24
\[
x_{\mathrm{naive}}
= \underbrace{\sum_{j=1}^{n} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j} v_j}_{x^{\mathrm{exact}} \,=\, A^{-1} b^{\mathrm{exact}}}
+ \underbrace{\sum_{j=1}^{n} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j} v_j}_{A^{-1} b^{\mathrm{noise}}}
\]

We want the left sum x^exact. What is the size of the right sum (inverted noise) in comparison to the left one?

Exact data satisfy the DPC:

On average, the |u_j^T b^exact| decay faster than the singular values σ_j of A.

White noise does not:

The values |u_j^T b^noise| do not exhibit any trend.

25
Thus u_j^T b = u_j^T b^exact + u_j^T b^noise is, for small indices j, dominated by the exact part, but for large j by the noise part.

[Figure: singular values of A and the projections u_j^T b – the right-hand side projections on the left singular subspaces, the singular values σ_j, and the noise level.]

26
Because of the division by the singular value, the components of the naive solution

\[
x_{\mathrm{naive}} \equiv A^{-1} b
= \underbrace{\sum_{j=1}^{k} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j} v_j}_{x^{\mathrm{exact}}}
+ \underbrace{\sum_{j=1}^{k} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j} v_j}_{\text{inverted noise}}
+ \underbrace{\sum_{j=k+1}^{n} \frac{u_j^T b^{\mathrm{exact}}}{\sigma_j} v_j}_{x^{\mathrm{exact}}}
+ \underbrace{\sum_{j=k+1}^{n} \frac{u_j^T b^{\mathrm{noise}}}{\sigma_j} v_j}_{\text{inverted noise}}
\]

corresponding to small σ_j's are dominated by inverted noise.

Remember, there are usually many small singular values.

27
Basic regularization method - Truncated SVD (TSVD):

Using the dyadic form

\[
A = U \Sigma V^T = \sum_{i=1}^{n} u_i \sigma_i v_i^T ,
\]

we can approximate A with a rank-k matrix

\[
A \approx S_k \equiv \sum_{i=1}^{k} A_i = \sum_{i=1}^{k} u_i \sigma_i v_i^T .
\]

Replacing A by S_k gives a TSVD approximate solution

\[
x^{(k)} = \sum_{j=1}^{k} \frac{u_j^T b}{\sigma_j} v_j .
\]
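A minimal MATLAB sketch of the TSVD solution x^(k), assuming A and b from the linear model above and a chosen truncation level k:

% Sketch: TSVD regularized solution from the first k SVD components.
[U, S, V] = svd(A);                           % for large A, svds(A, k) computes only the leading k components
s = diag(S);                                  % singular values of A
k = 50;                                       % hypothetical truncation level
xk = V(:,1:k) * ((U(:,1:k)' * b) ./ s(1:k));  % x^(k) = sum_{j=1}^k (u_j'*b / sigma_j) * v_j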

28
TSVD regularization: removal of the troublesome components

[Figure: singular values of A and the TSVD-filtered projections – the filtered projections φ(i)·u_i^T b, the singular values σ_i, the noise level, and the filter function φ(i).]

29
Here the smallest σ_j's are not present. However, we have also removed some components of x^exact.

An optimal k has to balance between removing noise and not losing too many components of the exact solution. It depends on the matrix properties and on the amount of noise in the considered image.

MATLAB DEMO: Compute TSVD regularized solutions for different values of k. Compare the quality of the obtained images.
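A sketch of what such a demo loop might look like, reusing the TSVD formula from the previous slide (A, b, and the image height h and width w are assumed to be available):

% Sketch: compare TSVD reconstructions for several truncation levels k.
[U, S, V] = svd(A);
s = diag(S);
for k = [100 500 2000]                            % hypothetical truncation levels
    xk = V(:,1:k) * ((U(:,1:k)' * b) ./ s(1:k));  % TSVD solution x^(k)
    figure, imshow(reshape(xk, h, w), []);        % reshape back and display
    title(sprintf('TSVD solution, k = %d', k));
end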

30
Other regularization methods:

• direct regularization;
• stationary regularization;
• projection (including iterative) regularization;
• hybrid methods combining the previous ones.

31
Other applications:

• computer tomography (CT);


• magnetic resonance;
• seismology;
• crystallography;
• material sciences;
• ...

32
References:

Textbooks:
• Hansen, Nagy, O'Leary: Deblurring Images: Matrices, Spectra, and Filtering, SIAM, 2006.
• Hansen: Discrete Inverse Problems: Insight and Algorithms, SIAM, 2010.

Software (MATLAB toolboxes): available on the homepage of P. C. Hansen
• HNO package,
• Regularization tools,
• AIRtools,
• ...

33
