
Graphs, Linear Systems and Laplacian Matrices

A seminar

Dilawar Singh
Version 0.5

Department of Electrical Engineering


Indian Institute of Technology Bombay

November 29, 2010

An old Chinese philosopher never said, "Words about graph worth
a thousand pictures."

"Bending the curve"

On Language, William Safire1

1 http://www.nytimes.com/2009/09/13/magazine/13FOB-OnLanguage-t.html
Outline

▶ Solve Ax = b quickly!
▶ Goal: the special case when A is the Laplacian matrix of a graph.
▶ Connections between spectral graph theory and numerical linear algebra.2

2 Daniel A. Spielman, Proceedings of ICM Hyderabad 2010
▶ Why?
▶ Classical Approach!
▶ Recent Development
▶ Connections!
▶ ?
Graphs and Laplacian Matrix

Definition
▶ A graph G is given by a triple (V, E, w).
▶ The Laplacian matrix L of a graph is naturally defined by the
  quadratic form it induces. For a vector x ∈ ℝ^V, the Laplacian
  quadratic form of G is

      xᵀLx = Σ_{(u,v)∈E} w_{u,v} (x(u) − x(v))².3

▶ The more x jumps across edges, the larger the quadratic form. Thus L
  provides a measure of the smoothness of x over the edges of G.
  (See the sketch below.)

3 If w represents conductance in the corresponding electrical network and x is
any potential vector, then xᵀLx is the total power consumed in the network.
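A minimal sketch in Python (assuming the five-vertex example graph of the
next slide) that checks the quadratic-form identity numerically:

```python
import numpy as np

# Edges of the example graph on vertices {a=0, b=1, c=2, d=3, e=4};
# all weights are 1 except the edge (d, e), which has weight 2.
edges = {(0, 1): 1.0, (0, 4): 1.0, (1, 2): 1.0,
         (1, 3): 1.0, (2, 3): 1.0, (3, 4): 2.0}

n = 5
L = np.zeros((n, n))
for (u, v), w in edges.items():
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

x = np.random.default_rng(0).standard_normal(n)   # arbitrary potentials
quad = x @ L @ x
direct = sum(w * (x[u] - x[v]) ** 2 for (u, v), w in edges.items())
assert np.isclose(quad, direct)   # x^T L x = sum over edges of w_uv (x(u)-x(v))^2
```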
Laplacian continued ...

▶ Define D to be the diagonal matrix whose diagonal contains the
  weighted degrees d, and define the weighted adjacency matrix of G by

      A(u, v) = w_{u,v} if (u, v) ∈ E, and 0 otherwise.

  Then L = D − A. (See the sketch below.)

[Figure: a 5-vertex graph on {a, b, c, d, e} with unit-weight edges
(a,b), (a,e), (b,c), (b,d), (c,d) and an edge (d,e) of weight 2,
alongside its Laplacian:]

       2 −1  0  0 −1
      −1  3 −1 −1  0
       0 −1  2 −1  0
       0 −1 −1  4 −2
      −1  0  0 −2  3
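A minimal sketch of the D − A construction for the same example graph; the
printed matrix should match the Laplacian shown above:

```python
import numpy as np

edges = {(0, 1): 1.0, (0, 4): 1.0, (1, 2): 1.0,
         (1, 3): 1.0, (2, 3): 1.0, (3, 4): 2.0}

A = np.zeros((5, 5))              # weighted adjacency matrix of G
for (u, v), w in edges.items():
    A[u, v] = A[v, u] = w

D = np.diag(A.sum(axis=1))        # diagonal matrix of weighted degrees
L = D - A
print(L)                          # matches the 5x5 matrix above
```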
Properties of Laplacian Matrix

       2 −1  0  0 −1
      −1  3 −1 −1  0
       0 −1  2 −1  0
       0 −1 −1  4 −2
      −1  0  0 −2  3

▶ Laplacians are symmetric, have zero row-sums, and have non-positive
  off-diagonal entries.
▶ Positive semi-definite: every eigenvalue is non-negative.
▶ Connectivity: let 0 ≤ λ1 ≤ λ2 ≤ ... ≤ λn be the eigenvalues.
  Then λ2 > 0 iff G is connected.
▶ When are two graphs NOT isomorphic? (Isomorphic graphs have identical
  Laplacian spectra, so differing eigenvalues certify non-isomorphism.)

These properties are checked numerically in the sketch below.
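A quick numerical check of the listed properties on the example Laplacian
(plain NumPy; the connectivity test inspects λ2):

```python
import numpy as np

L = np.array([[ 2, -1,  0,  0, -1],
              [-1,  3, -1, -1,  0],
              [ 0, -1,  2, -1,  0],
              [ 0, -1, -1,  4, -2],
              [-1,  0,  0, -2,  3]], dtype=float)

assert np.allclose(L, L.T)              # symmetric
assert np.allclose(L.sum(axis=1), 0.0)  # zero row-sums
w = np.linalg.eigvalsh(L)               # eigenvalues, sorted ascending
assert w[0] > -1e-9                     # positive semi-definite
print("lambda_2 =", w[1])               # > 0, so this graph is connected
```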
Some definitions

Definition
Given a set of vertices S ⊂ V, define the boundary of S, written
β(S), to be the set of edges of G with exactly one endpoint in S.
▶ It is of great interest in computer science to minimize or
  maximize β(S).

Conductance of a graph
Let φ(S) := w(β(S)) / min(d(S), d(V − S)), where w(β(S)) is the total
weight of the boundary edges and d(S) is the sum of the weighted
degrees of the vertices in S. The conductance of the graph G is
Φ_G := min_{S ⊂ V} φ(S).

Iso-perimeter number
Let i(S) := |β(S)| / min(|S|, |V − S|). The iso-perimeter number of G is
i_G := min_{S ⊂ V} i(S). (A brute-force computation of both appears
below.)
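For small graphs both quantities can be computed straight from the
definitions. A brute-force sketch (enumerating all proper non-empty
subsets S, so exponential in |V| and for illustration only):

```python
from itertools import combinations
import numpy as np

L = np.array([[ 2, -1,  0,  0, -1],
              [-1,  3, -1, -1,  0],
              [ 0, -1,  2, -1,  0],
              [ 0, -1, -1,  4, -2],
              [-1,  0,  0, -2,  3]], dtype=float)
A = np.diag(np.diag(L)) - L        # recover the weighted adjacency matrix
d = A.sum(axis=1)                  # weighted degrees
n = len(d)

phi_G = i_G = np.inf
for k in range(1, n):              # all proper, non-empty subsets S of V
    for S in combinations(range(n), k):
        S = list(S)
        T = [v for v in range(n) if v not in S]
        cross = A[np.ix_(S, T)]                 # edges leaving S
        phi_G = min(phi_G, cross.sum() / min(d[S].sum(), d[T].sum()))
        i_G = min(i_G, np.count_nonzero(cross) / min(len(S), len(T)))

print("conductance Phi_G        =", phi_G)
print("iso-perimeter number i_G =", i_G)
```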
Conductance

[Figures illustrating conductance on example graphs.]
Fast solution of Linear Equations

Conductance ∼ Sparsity
▶ The Conjugate Gradient method is fast when conductance is high!
▶ Elimination is fast when conductance is low on G and all its
  subgraphs. (See the fill-in sketch below.)
▶ PROBLEM: fast solutions when the graph is in between the two
  extremes.
▶ Sparsify the graph!
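A small illustration of the two extremes (an assumption of this sketch:
fill-in is counted on the grounded Laplacian L[1:, 1:], which is positive
definite for a connected graph). The Cholesky factor of a path graph
stays sparse, while that of a complete graph fills in:

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

n = 50
path = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
complete = np.ones((n, n)) - np.eye(n)

for name, A in [("path (low conductance)", path),
                ("complete (high conductance)", complete)]:
    R = np.linalg.cholesky(laplacian(A)[1:, 1:])   # grounded Laplacian
    nnz = np.count_nonzero(np.abs(R) > 1e-12)
    print(f"{name}: {nnz} nonzeros in the Cholesky factor")
```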
Solving Linear Equations in Laplacian

▶ The Conjugate Gradient method multiplies the matrix A by a vector at
  each iteration. The number of iterations depends on the eigenvalues
  of A, through its condition number.4
▶ Preconditioned Iterative Methods: can one find a matrix B such that
  the system BAx = Bb has the same solution but BA has a smaller
  condition number? We can! (A sketch with SciPy follows.)
▶ B is known as a preconditioner. Preconditioners have proved
  incredibly useful in practice.5

4 The ratio of the largest to the smallest eigenvalue.
5 Gerard Meurant, Computer Solution of Large Linear Systems, North Holland.
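A sketch of the effect using SciPy's cg on a toy grounded Laplacian; the
toy graph is my own construction, and a simple Jacobi (diagonal)
preconditioner stands in for the graph preconditioners discussed next:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 300
A = np.zeros((n, n))
for i in range(n - 1):                        # a path keeps the graph connected
    A[i, i + 1] = A[i + 1, i] = 1.0
for u, v in rng.integers(0, n, size=(3 * n, 2)):
    if u != v:                                # extra random weighted edges
        A[u, v] = A[v, u] = rng.uniform(0.1, 10.0)

L = np.diag(A.sum(axis=1)) - A
Lg = L[1:, 1:]                                # grounded Laplacian: SPD
b = rng.standard_normal(n - 1)

def cg_iterations(**kwargs):
    it = {"n": 0}
    def cb(xk):
        it["n"] += 1
    cg(Lg, b, callback=cb, **kwargs)
    return it["n"]

print("plain CG iterations:   ", cg_iterations())
print("Jacobi-preconditioned: ", cg_iterations(M=np.diag(1.0 / np.diag(Lg))))
```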
Preconditioned Iterative Methods

▶ When A and B are Laplacian matrices of connected graphs, a similar
  analysis holds.6
▶ The relative condition number of AB⁺, written κ(A, B), in this case
  measures how well those graphs approximate each other. B⁺ is the
  Moore-Penrose pseudo-inverse of B. (A small computation appears
  below.)

6 The condition number is now the ratio of the largest to the smallest
non-zero eigenvalue.
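A small dense computation of κ(A, B) (my own example: a 6-cycle G
preconditioned by a spanning path H; for large graphs one would not form
the pseudo-inverse explicitly):

```python
import numpy as np

def laplacian(edges, n):
    L = np.zeros((n, n))
    for (u, v), w in edges.items():
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def kappa(LA, LB, tol=1e-9):
    # Eigenvalues of B^+ A; the shared null space (the all-ones vector,
    # for connected graphs) contributes a zero eigenvalue that we drop.
    ev = np.real(np.linalg.eigvals(np.linalg.pinv(LB) @ LA))
    ev = ev[ev > tol]
    return ev.max() / ev.min()

n = 6
cycle = {(i, (i + 1) % n): 1.0 for i in range(n)}
path = {(i, i + 1): 1.0 for i in range(n - 1)}  # spanning subgraph of the cycle
print("kappa(G, H) =", kappa(laplacian(cycle, n), laplacian(path, n)))
```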
Approximation by Sparse Graph

Sparsification
Sparsification is the process of approximating a given graph G by a
sparse graph H. H is said to be an α-approximation of G if
κ_f(L_G, L_H) ≤ 1 + α, where L_G and L_H are the Laplacian matrices of
G and H.
▶ G and H are similar in many ways. They have similar eigenvalues, and
  the effective resistance between every pair of nodes is approximately
  the same.
▶ It turns solving a system in a dense matrix into solving one in a
  sparse matrix. Conjugate Gradient methods are faster on sparse
  matrices.
▶ Find H with O(n) non-zero entries, then solve systems in L_G by
  preconditioned Conjugate Gradient with L_H as the preconditioner,
  solving the inner systems in L_H by Conjugate Gradient. (A sketch of
  this scheme follows.)
▶ Each solve in L_H takes time O(n²), and for ε accuracy PCG needs
  O(log ε⁻¹) iterations, so the total complexity is O((m + n²) log ε⁻¹).
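A sketch of this two-level scheme (my own construction, not the algorithm
from the literature: L_G is the Laplacian of the complete graph K_n, L_H
that of a hypothetical spanning star, and both are grounded at the last
vertex so the systems are non-singular):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import cg, splu, LinearOperator

n = 100
LG = n * np.eye(n) - np.ones((n, n))       # Laplacian of the complete graph K_n
AH = np.zeros((n, n))
AH[0, 1:] = AH[1:, 0] = 1.0                # a spanning star centred at vertex 0
LH = np.diag(AH.sum(axis=1)) - AH          # its (very sparse) Laplacian

G = LG[:-1, :-1]                           # ground both systems at the
H = splu(csc_matrix(LH[:-1, :-1]))         # last vertex; factor L_H once
M = LinearOperator(G.shape, matvec=H.solve)  # preconditioner = solve in L_H

b = np.random.default_rng(1).standard_normal(n - 1)
x, info = cg(G, b, M=M)
print("PCG converged:", info == 0)
```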
More on Sparsification

▶ Do such good sparsifiers exist?
▶ Benczúr and Karger seem to have developed something very similar.7

7 Spielman, Proceedings of ICM 2010
Subgraph Preconditioner and Support theory

▶ Vaidya's idea: precondition a Laplacian matrix by the Laplacian of
  one of its own subgraphs. The tools used to analyse such
  preconditioners are known as support theory.8
▶ Vaidya did not publish his results. He built software, and his
  student used the work in his dissertation.
▶ The Laplacian of a maximum spanning tree is used as the
  preconditioner. Lower bounds were not proved. A bit inefficient.
  (A construction sketch follows this list.)
▶ Low-stretch spanning trees were proven to be good preconditioners.
▶ A few edges may be added to the maximum spanning tree to improve
  the preconditioning.9
▶ Good preconditioners existed,10 but the challenge was to construct
  them quickly.

8 http://www.sandia.gov/~bahendr/support.html
9 Algebraic Tools for Analyzing Preconditioners, Bruce Hendrickson (with
Erik Boman). Invited talk at Preconditioning'03.
10 Kolla et al., CoRR 99
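A sketch of building Vaidya's basic preconditioner (assumption: SciPy's
minimum_spanning_tree applied to negated weights yields a maximum
spanning tree, whose Laplacian is then used as the preconditioner):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)
n = 8
W = np.triu(rng.uniform(0.5, 5.0, (n, n)), 1)
W = W + W.T                                   # random weighted complete graph

T = minimum_spanning_tree(csr_matrix(-W))     # negate: maximum spanning tree
T = -T.toarray()
T = T + T.T                                   # symmetric tree adjacency matrix

L_tree = np.diag(T.sum(axis=1)) - T           # the subgraph preconditioner
print("tree edges:", np.count_nonzero(T) // 2)  # n - 1 for a connected graph
```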
Overview : Application of Laplacian

▶ Regression on Graphs. A function f is given on a subset of the
  vertices of G; compute f over the remaining vertices by minimizing
  xᵀLx. (A worked example follows this list.)
▶ Spectral Graph Theory. The study of eigenvalues and their relation
  with graphs. Adiabatic computing!
▶ Interior Point Methods and the Maximum Flow Problem. Equivalent to
  solving systems of linear equations that can be reduced to restricted
  Laplacian systems.11
▶ Resistor Networks. Measurement of the effective resistance between
  two vertices.
▶ Partial Differential Equations. The Laplacian matrix of a path graph
  ↔ vibration of a string. FEM solves Laplace's equation using a
  triangulation with no obtuse angles.12

11 Daniel A. Spielman and Shang-Hua Teng, Smoothed analysis of termination
of linear programming algorithms, Mathematical Programming B, 2003, to
appear.
12 Gilbert Strang, Introduction to Applied Mathematics, 1986.
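A worked example of the regression bullet (my own boundary data on the
five-vertex example graph): fixing f on a boundary set B and minimizing
xᵀLx leads to the restricted Laplacian system L_UU x_U = −L_UB x_B.

```python
import numpy as np

L = np.array([[ 2, -1,  0,  0, -1],
              [-1,  3, -1, -1,  0],
              [ 0, -1,  2, -1,  0],
              [ 0, -1, -1,  4, -2],
              [-1,  0,  0, -2,  3]], dtype=float)

known = {0: 0.0, 3: 1.0}                   # f given on vertices a and d
B = sorted(known)                          # boundary (labelled) vertices
U = [v for v in range(5) if v not in B]    # unknown vertices

xB = np.array([known[v] for v in B])
xU = np.linalg.solve(L[np.ix_(U, U)], -L[np.ix_(U, B)] @ xB)

x = np.empty(5)
x[B], x[U] = xB, xU
print(x)                                   # the smoothest extension of f
```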
Conclusion

▶ There is a connection!
▶ Koutis, Miller and Peng13 made tremendous progress on this problem.
  Their ultra-sparsifiers lead to an algorithm for solving linear
  systems in Laplacians that takes time O(m log² n (log log n)² log ε⁻¹).
▶ This is much faster than any algorithm known to date!!
▶ To Do: solve electrical networks using interior point methods. Plug
  in the Laplacian.

13 http://arxiv.org/abs/1003.2958v1
