KCE/DEPT. OF MATHEMATICS/E-MATERIAL
MA 7156 APPLIED MATHEMATICS FOR PERVASIVE COMPUTING
Axiom and its signification:
Associativity of addition: u + (v + w) = (u + v) + w
Commutativity of addition: u + v = v + u
Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
The first four axioms are those of V being an abelian group under vector addition. Vector
spaces may be diverse in nature, for example, containing functions, polynomials or
matrices. Linear algebra is concerned with properties common to all vector spaces.
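The axioms above can be spot-checked numerically; the following sketch (added for illustration, not part of the original material) verifies them for sample vectors in R^3 with scalars from F = R:

```python
import numpy as np

# Spot-check the vector-space axioms for R^3 with sample data.
u = np.array([1., 2., 3.])
v = np.array([4., 5., 6.])
w = np.array([7., 8., 9.])
a, b = 2.0, -3.0

print(np.allclose(u + (v + w), (u + v) + w))   # associativity of addition
print(np.allclose(u + v, v + u))               # commutativity of addition
print(np.allclose(a * (u + v), a*u + a*v))     # distributivity over vector addition
print(np.allclose((a + b) * v, a*v + b*v))     # distributivity over field addition
print(np.allclose(a * (b * v), (a * b) * v))   # compatibility of multiplications
print(np.allclose(1.0 * v, v))                 # identity element
```

A numerical check of course does not prove the axioms; it only illustrates them on concrete data.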
A linear transformation T from a vector space V to a vector space W satisfies
T(u + v) = T(u) + T(v), T(av) = aT(v)
for any vectors u, v in V and any scalar a in F.
Additionally, for any vectors u, v in V and scalars a, b in F:
T(au + bv) = T(au) + T(bv) = aT(u) + bT(v)
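Since every matrix defines a linear map T(v) = Av, the combined linearity property can be checked numerically; a minimal sketch (the matrix and vectors are arbitrary choices, not from the text):

```python
import numpy as np

# Any matrix A defines a linear map T(v) = A @ v; check that
# T(a*u + b*v) == a*T(u) + b*T(v) on sample data.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))          # sample matrix
u, v = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.0, -1.5

lhs = A @ (a * u + b * v)
rhs = a * (A @ u) + b * (A @ v)
print(np.allclose(lhs, rhs))             # True: matrix multiplication is linear
```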
When a bijective linear mapping exists between two vector spaces (that is,
every vector from the second space is associated with exactly one in the
first), we say that the two spaces are isomorphic. Because an isomorphism
preserves linear structure, two isomorphic vector spaces are "essentially the
same" from the linear algebra point of view. One essential question in linear
algebra is whether a mapping is an isomorphism or not, and this question
can be answered by checking if the determinant is nonzero. If a mapping is
not an isomorphism, linear algebra is interested in finding its range (or
image) and the set of elements that get mapped to zero, called the kernel of
the mapping.
Linear transformations have geometric significance. For example, 2 × 2 real
matrices describe the standard planar mappings that preserve the origin.
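As a concrete instance (an illustrative choice, not taken from the text), a 2 × 2 rotation matrix is a planar linear mapping that fixes the origin and preserves lengths:

```python
import numpy as np

# Rotation by 90 degrees as an example of a planar linear mapping.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 0.0])
q = R @ p                                 # (1, 0) rotates to (0, 1)
print(np.allclose(q, [0.0, 1.0]))         # True
print(np.allclose(R @ np.zeros(2), 0.0))  # True: the origin is preserved
print(np.isclose(np.linalg.norm(q), 1.0)) # True: lengths are preserved
```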
Subspaces, span, and basis.
Again, in analogue with theories of other algebraic objects, linear algebra is
interested in subsets of vector spaces that are themselves vector spaces;
these subsets are called linear subspaces. For example, both the range and
kernel of a linear mapping are subspaces, and are thus often called the range
space and the null space; these are important examples of subspaces.
Another important way of forming a subspace is to take a linear combination
of a set of vectors v1, v2, …, vk:
a1v1 + a2v2 + ⋯ + akvk
where a1, a2, …, ak are scalars. The set of all linear combinations of vectors
v1, v2, …, vk is called their span, which forms a subspace.
A linear combination of any system of vectors with all zero coefficients is the
zero vector of V. If this is the only way to express the zero vector as a linear
combination of v1, v2, …, vk, then these vectors are linearly independent.
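Linear independence can be tested numerically by comparing the rank of the matrix whose columns are the vectors with the number of vectors; a sketch with sample vectors (not from the original text):

```python
import numpy as np

# Stack the vectors as columns; they are linearly independent
# exactly when the matrix rank equals the number of vectors.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2               # deliberately dependent on v1, v2

A = np.column_stack([v1, v2])
B = np.column_stack([v1, v2, v3])

print(np.linalg.matrix_rank(A) == A.shape[1])  # True: independent
print(np.linalg.matrix_rank(B) == B.shape[1])  # False: v3 lies in the span
```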
Given a set of vectors that span a space, if any vector w is a linear
combination of the other vectors (and so the set is not linearly independent),
then the span would remain the same if we remove w from the set. Thus, a
set of linearly dependent vectors is redundant in the sense that there will be
a linearly independent subset which will span the same subspace. Therefore,
we are mostly interested in a linearly independent set of vectors that spans a
vector space V, which we call a basis of V. Any set of vectors that spans V
contains a basis, and any linearly independent set of vectors in V can be
extended to a basis. It turns out that if we accept the axiom of choice, every
vector space has a basis; nevertheless, this basis may be unnatural, and
indeed, may not even be constructible. For instance, there exists a basis for
the real numbers considered as a vector space over the rationals, but no
explicit basis has been constructed.
Theorem: If T is a linear transformation from V to W and u and v are in V,
then:
1) T(0) = 0
2) T(-v) = -T(v)
3) T(u - v) = T(u) - T(v)
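These identities follow directly from linearity; a short justification (added here for completeness):
1) T(0) = T(0 + 0) = T(0) + T(0); subtracting T(0) from both sides gives T(0) = 0.
2) T(-v) = T((-1)v) = (-1)T(v) = -T(v).
3) T(u - v) = T(u + (-v)) = T(u) + T(-v) = T(u) - T(v).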
A matrix norm is a natural extension of the notion of a vector norm to
matrices.
In what follows, K will denote the field of real or complex numbers, and
K^(m×n) will denote the set of matrices with m rows and n columns whose
entries are in K. A matrix norm is a vector norm on K^(m×n). That is, if ||A||
denotes the norm of the matrix A, then,
||A|| > 0 if A ≠ 0, and ||A|| = 0 if A = 0,
||aA|| = |a| ||A|| for all a in K and all A in K^(m×n),
||A + B|| ≤ ||A|| + ||B|| for all A and B in K^(m×n).
Additionally, in the case of square matrices (thus, m = n), some (but not all)
matrix norms satisfy the following condition, which is related to the fact that
matrices are more than just vectors:
||AB|| ≤ ||A|| ||B|| for all matrices A and B in K^(n×n).
A matrix norm that satisfies this additional property is called a submultiplicative norm.
Induced norm
If vector norms on Km and Kn are given (K is the field of real or complex
numbers), then one defines the corresponding induced norm or operator
norm on the space of m-by-n matrices as the following maxima:
||A|| = max { ||Ax|| : x in K^n, ||x|| = 1 } = max { ||Ax|| / ||x|| : x in K^n, x ≠ 0 }.
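For the Euclidean vector norm, the induced matrix norm equals the largest singular value; a quick numerical sketch (the matrix is an arbitrary choice, not from the text):

```python
import numpy as np

# The induced (operator) 2-norm is the maximum stretch ||A x|| over
# unit vectors x; for the 2-norm it equals the largest singular value.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                 # sample matrix

op_norm = np.linalg.norm(A, 2)             # induced 2-norm
sigma_max = np.linalg.svd(A, compute_uv=False)[0]
print(np.isclose(op_norm, sigma_max))      # True

# No unit vector can be stretched by more than the operator norm.
rng = np.random.default_rng(1)
x = rng.standard_normal(2)
x /= np.linalg.norm(x)
print(np.linalg.norm(A @ x) <= op_norm + 1e-12)  # True
```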
These are different from the entrywise p-norms and the Schatten p-norms for matrices
treated below, which are also usually denoted by ||A||_p.
Problem (Example):
Consider a 4 × 4 matrix A (not reproduced here) whose characteristic
polynomial is (λ − 1)(λ − 2)(λ − 4)².
There are three chains. Two have length one: {v} and {w}, corresponding to
the eigenvalues 1 and 2, respectively. There is one chain of length two
corresponding to the eigenvalue 4. To find this chain, calculate the kernel of
(A − 4I)². Pick a vector in that kernel which is not in the kernel of A − 4I, e.g.,
y = (1,0,0,0)^T. Now, (A − 4I)y = x and (A − 4I)x = 0, so {y, x} is a chain of
length two corresponding to the eigenvalue 4.
The transition matrix P such that P⁻¹AP = J is formed by putting these vectors
next to each other. If we had interchanged the order in which the chain
vectors appeared, that is, changing the order of v, w and {x, y} together, the
Jordan blocks would be interchanged. However, the two Jordan forms are
equivalent.
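Since the matrix of the example is not reproduced in this material, the chain computation can be illustrated on a sample matrix built to have the same Jordan structure (eigenvalues 1, 2, and a 2 × 2 block for 4); both the matrix M and the transition matrix P below are assumptions made for the sketch:

```python
import numpy as np

# Build a sample M = P J P^(-1) with Jordan form diag(1, 2, [4 1; 0 4]).
J = np.array([[1., 0., 0., 0.],
              [0., 2., 0., 0.],
              [0., 0., 4., 1.],
              [0., 0., 0., 4.]])
P = np.array([[1., 0., 0., 0.],
              [1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
M = P @ J @ np.linalg.inv(P)

B = M - 4 * np.eye(4)
# Ordinary eigenvector x for eigenvalue 4: the one-dimensional null
# space of B, read off from the smallest right singular vector.
x = np.linalg.svd(B)[2][-1]
# Generalized eigenvector y solving (M - 4I) y = x (least squares,
# since B is singular but x lies in its range).
y = np.linalg.lstsq(B, x, rcond=None)[0]

print(np.allclose(B @ x, 0))   # True: x is an ordinary eigenvector
print(np.allclose(B @ y, x))   # True: {y, x} is a chain of length two
```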
1.5 Generalized Eigenvectors:
A generalized eigenvector of an n × n matrix A is a vector which satisfies
certain criteria that are more relaxed than those for an (ordinary)
eigenvector.
Let V be an n-dimensional vector space; let φ be in L(V), the set of all linear
maps from V into itself; and let A be the matrix representation of φ with
respect to some ordered basis.
Problem (Example):
Let A be a 5 × 5 matrix (not reproduced here) with two distinct eigenvalues
whose algebraic multiplicities exceed the corresponding geometric
multiplicities. For each eigenvalue there is a chain consisting of the ordinary
eigenvector together with generalized eigenvectors. Together
the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors.
A Jordan normal form of A is obtained as follows:
J = M⁻¹AM,
where M is a generalized modal matrix for A, and the columns of M are the
chains of generalized eigenvectors.
The singular value decomposition (SVD) is a factorization of a real or
complex matrix. It has many useful applications in signal processing and
statistics. Formally, the singular value decomposition of an m × n matrix M is
a factorization of the form M = UΣV*, where U is an m × m unitary matrix, Σ
is an m × n rectangular diagonal matrix with non-negative real numbers on
the diagonal, and V is an n × n unitary matrix. The diagonal entries of Σ are
known as the singular values of M. The columns of U and the columns of V
are known as the left-singular vectors and right-singular vectors of M,
respectively.
The singular value decomposition and the eigendecomposition are closely
related. Namely: the non-zero singular values of M are the square roots of
the non-zero eigenvalues of M*M (equivalently, of MM*).
Problem (Example):
Consider a 4 × 5 matrix M (not reproduced here) and its decomposition
M = UΣV*. Notice Σ is zero outside of the diagonal and one diagonal element
is zero. Furthermore, because the matrices U and V are unitary, multiplying
by their respective conjugate transposes yields identity matrices. In this
case, because U and V are real-valued, each is an orthogonal matrix.
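The original 4 × 5 matrix is not reproduced in the text; the sketch below uses a common textbook 4 × 5 example (an assumption) that likewise has one zero singular value, and verifies the stated properties numerically:

```python
import numpy as np

# A sample 4x5 matrix with one zero singular value.
M = np.array([[1., 0., 0., 0., 2.],
              [0., 0., 3., 0., 0.],
              [0., 0., 0., 0., 0.],
              [0., 2., 0., 0., 0.]])

U, s, Vh = np.linalg.svd(M)
print(s)                                   # singular values: 3, sqrt(5), 2, 0
print(np.allclose(U @ U.T, np.eye(4)))     # True: U is orthogonal
print(np.allclose(Vh @ Vh.T, np.eye(5)))   # True: V is orthogonal

# Rebuild M from the factorization U Sigma V*.
Sigma = np.zeros((4, 5))
np.fill_diagonal(Sigma, s)
print(np.allclose(U @ Sigma @ Vh, M))      # True
```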
A matrix G such that AGA = A is a generalized inverse of A.
Given any m × n matrix A (real or complex), the pseudo-inverse A⁺ of A is
the unique n × m matrix satisfying the following properties:
AA⁺A = A,
A⁺AA⁺ = A⁺,
(AA⁺)* = AA⁺,
(A⁺A)* = A⁺A,
where * denotes the conjugate transpose (for real matrices, the ordinary
transpose).
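The four defining properties can be verified numerically with NumPy's pseudo-inverse routine; a minimal sketch on an arbitrary sample matrix (not from the text):

```python
import numpy as np

# np.linalg.pinv computes the Moore-Penrose pseudo-inverse via the SVD;
# verify the four defining properties numerically.
A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])        # sample 3x2 matrix
Ap = np.linalg.pinv(A)          # its 2x3 pseudo-inverse

print(np.allclose(A @ Ap @ A, A))        # A A+ A = A
print(np.allclose(Ap @ A @ Ap, Ap))      # A+ A A+ = A+
print(np.allclose((A @ Ap).T, A @ Ap))   # A A+ is symmetric
print(np.allclose((Ap @ A).T, Ap @ A))   # A+ A is symmetric
```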
1.8 Least Square Approximations:
The method of least squares is a standard approach in regression analysis to
the approximate solution of overdetermined systems, i.e., sets of equations
in which there are more equations than unknowns. "Least squares" means
that the overall solution minimizes the sum of the squares of the errors made
in the results of every single equation.
Problem (Example):
A crucial application of least squares is fitting a straight line to m points.
Start with three points: find the closest line to the points (0, 6), (1, 0), and
(2, 0).
No straight line b = C + Dt goes through those three points. We are asking
for two numbers C and D that satisfy three equations. Here are the equations
at t = 0, 1, 2 to match the given values b = 6, 0, 0:
t = 0: the first point is on the line b = C + Dt if C + D · 0 = 6
t = 1: the second point is on the line b = C + Dt if C + D · 1 = 0
t = 2: the third point is on the line b = C + Dt if C + D · 2 = 0.
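This overdetermined system can be solved in the least-squares sense directly; a quick numerical check (a sketch added here, not part of the original material):

```python
import numpy as np

# Fit b = C + D t to the points (0,6), (1,0), (2,0) by minimizing
# || A [C, D]^T - b ||^2 over C and D.
A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])        # columns: constant term, t
b = np.array([6., 0., 0.])

(C, D), *_ = np.linalg.lstsq(A, b, rcond=None)
print(C, D)                     # C = 5, D = -3: the best line is b = 5 - 3t
```

The same answer follows from the normal equations AᵀA[C, D]ᵀ = Aᵀb, i.e. 3C + 3D = 6 and 3C + 5D = 0.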
1.9 QR algorithm:
The QR algorithm is an eigenvalue algorithm: that is, a procedure to
calculate the eigenvalues and eigenvectors of a matrix.
Problem (Example):
Find the eigenvectors for:
A =
[  4   2/3  4/3  4/3 ]
[ 2/3   4    0    0  ]
[ 4/3   0    6    2  ]
[ 4/3   0    2    6  ]
>> [q r] = slow_qr(A);
>> q1 = q
q1 =
   (4-by-4 orthogonal factor; numeric entries not fully recoverable)
>> A1 = r*q;
>> [q r] = slow_qr(A1);
>> q2 = q1*q
q2 =
   (4-by-4 orthogonal factor; numeric entries not fully recoverable)
>> A2 = r*q;
>> [q r] = slow_qr(A2);
>> q3 = q2*q
q3 =
   (4-by-4 orthogonal factor; numeric entries not fully recoverable)
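The slow_qr session can be mirrored with NumPy's built-in QR factorization; the sketch below assumes the matrix A was reconstructed correctly above and substitutes np.linalg.qr for the course's slow_qr routine:

```python
import numpy as np

# Unshifted QR iteration: factor A_k = Q R, then set A_{k+1} = R Q.
# Each step is a similarity transform, so eigenvalues are preserved;
# for this symmetric A the iterates approach a diagonal matrix whose
# diagonal holds the eigenvalues. The running product of Q factors
# (like q2 = q1*q in the session above) accumulates the eigenvectors.
A = np.array([[4.,  2/3, 4/3, 4/3],
              [2/3, 4.,  0.,  0. ],
              [4/3, 0.,  6.,  2. ],
              [4/3, 0.,  2.,  6. ]])

Ak = A.copy()
Qacc = np.eye(4)
for _ in range(500):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
    Qacc = Qacc @ Q

print(np.sort(np.diag(Ak)))                 # approximates the eigenvalues of A
print(np.allclose(np.sort(np.diag(Ak)),
                  np.linalg.eigvalsh(A)))   # True
```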