Vector Space: Lecture Notes compiled by Dr. Abhijit Kar Gupta, kg.abhi@gmail.com


A short Course on
Linear Vector Space

Lecture I

Basic Concept:
A vector has components. Example: Position vector,
A scalar has no components. Example: Mass,

Components of a vector are scalars; the scalars are numbers that can be real or complex. Real
numbers are drawn from the real field and complex numbers are drawn from the complex field.
Therefore, a vector is always defined over a field. The number of components (or tuples) of a vector
corresponds to the dimension of the space, which we define later as a vector space. In the beginning, as
written above, the components of the position vector and the mass are all real, and so they belong to the real field.
To construct a vector space, there are some axioms that have to be followed which we describe
later.

Why the study of Vector Space?

Many mathematical systems satisfy the vector space axioms. The set of all complex numbers and
suitable sets of vectors, matrices, polynomials, functions all satisfy the structure of vector space. We
should therefore, study the properties of the abstract vector space, in general. Then we can apply
them to more specific mathematical systems representing specific physical problems.

Examples from Physics: Fourier components, Special functions etc.

What is Field?

A field is a set in which all the mathematical operations like addition, subtraction, multiplication and
division are well defined among its members.
Example: The set of rational numbers, the set of real numbers, the set of complex numbers.

Note: The set of integers is not a field. In general, we denote a field by the symbol F. A field
can be finite or infinite according to the number of its members.

To define a Field:

A field F is a non-empty set on which two operations, addition (+) and multiplication (.), are defined
such that the following axioms hold for all members a, b, c of F.
Vector Space: Lecture Notes compiled by Dr. Abhijit Kar Gupta, kg.abhi@gmail.com

2

Addition:
i. a + b ∈ F (Closure property)
ii. a + b = b + a (Commutativity)
iii. (a + b) + c = a + (b + c) (Associativity)
iv. There exists 0 ∈ F such that a + 0 = a (Identity)
v. There exists an element -a ∈ F so that a + (-a) = 0 (Inverse)

Multiplication:
i. a.b ∈ F (Closure property)
ii. a.b = b.a (Commutativity)
iii. (a.b).c = a.(b.c) (Associativity)
iv. There exists 1 ∈ F such that a.1 = a (Identity)
v. If a ≠ 0, there exists an element a^(-1) ∈ F such that a.a^(-1) = 1 (Inverse)


Next we define a Vector Space in terms of axioms.

Vector Space:
A non-empty set V is said to be a vector space over a field F if for all u, v, w ∈ V and a, b ∈ F, the
following axioms hold.

Addition:
i. u + v ∈ V (Closure)
ii. u + v = v + u (Commutativity)
iii. (u + v) + w = u + (v + w) (Associativity)
iv. There exists an element 0 ∈ V such that u + 0 = u (Identity)
v. There exists an element -u ∈ V so that u + (-u) = 0 (Inverse)

Scalar Multiplication:
i. au ∈ V (Closure)
ii. a(u + v) = au + av (Distributive over vector addition)
iii. (a + b)u = au + bu (Distributive over scalar addition)
iv. a(bu) = (ab)u (Associativity)
v. 1u = u (Identity)


Example:

The set of real n-tuples is a vector space over the real field. Consider vectors of the form
(x₁, x₂, ..., xₙ) and so on.
Similarly, the set of complex n-tuples is a vector space over the complex field, of dimension n.




NOTE: In general, Fⁿ is a vector space over the field F, where the vectors are n-tuples and the
elements are drawn from the field F. We symbolically write,
Fⁿ = {(a₁, a₂, ..., aₙ) : aᵢ ∈ F}.

Any set that is isomorphic to the vector space Fⁿ is also a vector space (with the operations defined).



What is isomorphism?

If there is a linear mapping between two vector spaces V and W that is one-to-one and onto, we call
them isomorphic. The mapping is then called an isomorphism.

Note: An isomorphism is invertible. "Isomorphic" is a more precise way of saying equivalent.

When is a mapping linear?

The mapping T: V → W is linear if it preserves the two basic operations of a vector space, vector
addition and scalar multiplication, so that the following laws hold:

(i) T(u + v) = T(u) + T(v)

(ii) T(au) = aT(u), for all u, v ∈ V and all scalars a. [Think of T as an operator.]

[We may study the mapping and its properties in general; there are also the concepts of Image and
Kernel, for which we may look up any textbook.]

Some Examples of Vector Space

# MATRICES:
A matrix A = [a_ij] over a field F has m rows and n columns.

Each row in the above can be regarded as a vector (n-tuple) in Fⁿ and each column as a vector
(m-tuple) in Fᵐ.

Any operation defined for the vector space Fⁿ can be applied to every row of the matrix A. Similarly,
any operation defined for the vector space Fᵐ can also be applied to every column of the matrix A.

Thus the rows and columns of A can be joined to give an mn-tuple of elements in F. Therefore,
the set of all m×n matrices over F is a vector space isomorphic to F^(mn).

The operations of addition and scalar multiplication:

A + B = [a_ij + b_ij], kA = [k a_ij],

where i = 1, ..., m (rows), j = 1, ..., n (columns), k ∈ F.

Note: The vector space Fⁿ is isomorphic to the vector space of row vectors, and Fᵐ is isomorphic to the
vector space of column vectors.

We can similarly construct vector spaces with functions or polynomials etc.
(More discussions on this, later.)

On Linear Dependence of Vectors:

Linear combination: Let v₁, v₂, ..., vₙ be elements of a vector space V over a field F. A linear
combination of the elements is another element of V and is expressed as
a₁v₁ + a₂v₂ + ... + aₙvₙ, where a₁, a₂, ..., aₙ ∈ F.

Linear dependence: If a linear combination of the vectors vanishes,
a₁v₁ + a₂v₂ + ... + aₙvₙ = 0,
with at least one of the coefficients a₁, ..., aₙ nonzero, then the vectors are called
linearly dependent.
For example, for two linearly dependent vectors, one is k-times the other, v₁ = k v₂.

Linear Independence: The set of vectors {v₁, v₂, ..., vₙ} in the vector space V over a field F is
linearly independent if and only if the solution of the equation
a₁v₁ + a₂v₂ + ... + aₙvₙ = 0
yields a₁ = a₂ = ... = aₙ = 0.

Examples:

#1. Is the set of vectors { }

linearly independent?
Ans.

and


So the vectors are linearly independent.

#2. The following set of vectors { }

is linearly independent. Why?






Ans.

.

#3. Consider the following set of vectors { }:

are they linearly dependent?


Ans.
Here we find two relations among the coefficients, which imply no unique solution. However,
a nonzero choice of the coefficients can satisfy the above. Thus we can say the vectors are linearly
dependent.
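As a small computational aside (not part of the original notes), linear independence of a concrete set of vectors can be checked numerically: stack the vectors as rows of a matrix and compare its rank with the number of vectors. The vectors below are purely illustrative, not the ones used in the examples above.

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given vectors are linearly independent.

    The vectors are stacked as rows of a matrix; they are independent
    exactly when the matrix rank equals the number of vectors.
    """
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) == len(vectors)

# Illustrative vectors (hypothetical, not from the lecture examples):
print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))              # False: one is 2 times the other
```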


Lecture II

Vector Subspace/ Spanning Set/ Inner Products
[In the Lecture Notes-I we provided a formal introduction to Vector Space with some examples. Here we
continue with the structure and provide more practical examples.]
Subspaces:

A vector subspace is a vector space that is embedded in a larger vector space. In other words,
subspace is a subset of a vector space that is itself a vector space!
Note: Being a subset, the vectors in it follow most of the vector space axioms. One additional
condition that is required is the closure property. This automatically implies the inclusion of zero
element.
Symbolically, we write W ⊆ V is a subspace of V, where V is a vector space defined over a field F.
Conditions:
Let V be a vector space over a field F.
Now if W is a subspace of V, then the following conditions have to be satisfied:
(i) 0 ∈ W, where 0 is the zero vector (zero vector inclusion)
(ii) u + v ∈ W for all u, v ∈ W (closure under addition)
(iii) ku ∈ W, for k ∈ F and u ∈ W (closure under scalar multiplication)

Examples:

#1. Straight lines or planes through the origin constitute a subspace of the three-dimensional
Euclidean space.

ℝ³ = {(x, y, z) : x, y, z ∈ ℝ} (3D Euclidean space, written symbolically)

Now we can write,
W = {(x, y, 0) : x, y ∈ ℝ} (the xy-plane).
The above is a 2D vector subspace of ℝ³ which is isomorphic to ℝ².
In the same way, the x-axis {(x, 0, 0) : x ∈ ℝ} is a 1D vector subspace of ℝ³ which is then
isomorphic to ℝ.
#2. To prove that { }

is a subspace of

.
Proof:
(i) Zero vector inclusion:
0 = (0, 0, 0) which satisfies the constraint, , so 0
(ii) Closure property under addition:
Consider two vectors,

with the conditions that

and

Condition satisfied
Thus,


(iii) Closure under scalar multiplication:
Let

implies
for all and all .
Therefore, we can say


Spanning Set:
Let

be a set of elements from a vector space . The vector subspace consisting of all
linear combinations of

is called the vector space spanned by the set {

}. Now the
set of vectors {

} will be called the spanning set for .


In mathematical terms,
{

} , where .
Note the following:
Every element of is a linear combination of {

}.
The subspace containing {

} only, is the smallest subspace of .




Example:

is spanned by , ,

(unit vector along x, y and z-axes).


More elaborately, we can say, the vectors

make a
spanning set for

.
Consider as a linear combination of the above vectors:


Now we can check, every

is a linear combination of {

}. Thus {

} is a
spanning set for

.
Basis Set:
A basis set is a set of vectors {

} which is linearly independent and a spanning set of .


Example:
Consider the following vector subspace

, where
{ }.
From the constraint, we have
So we write, { }
For the following choices,
,

and
,


We can check that

and

are independent and they span the space. In this case

and

can
be chosen as a basis set.
H.W. Problems:
#1. Prove the above claim (in the example)
#2. Prove that if

and

are elements of a vector space

, then
{

} forms a basis set for .



SUM of two Vector Spaces:
Definition:
Consider, ,

{ }
The vector space consists of all sums, .
Example: Consider two vector spaces,
{ } and { }

{ }

DIRECT SUM:

The direct sum is when every vector in V can be written in a unique way (i.e., one and only one
way) as a sum of a vector from each subspace.
Example:
{ } xy-plane
and { } Line along z-axis
Any vector

can be uniquely written as .




Note the difference between the above two cases:
Ordinary SUM (1st case):
the decomposition can be written in more than one way, so it is not unique.
Direct SUM (2nd case):
the decomposition is unique!

THEOREM:
The vector space V is the direct sum of its subspaces U and W if and only if (i) V = U + W
and (ii) U ∩ W = {0}, i.e., the intersection consists of only the zero element.
Proof:
Consider and {}
Let and

There is a vector for which .
To show uniqueness, also let

and

such that

.
Thus we can have,


But

and

[using closure property]


Now, for uniqueness {} which implies

and



A useful idea applied in Physics:
Consider U to be the vector space of symmetric matrices and W to be the vector space of anti-
symmetric matrices.
Any square matrix M can be written as the sum of symmetric and anti-symmetric matrices,
M = S + K, where
S = (M + Mᵀ)/2 [S is a symmetric matrix]
K = (M - Mᵀ)/2 [K is an anti-symmetric matrix]
We can say, V = U + W.
Next, we show that U ∩ W = {0}.
Suppose some element X belongs to the intersection of the two sets, X ∈ U ∩ W.
Thus we will have Xᵀ = X and also Xᵀ = -X, which means X = 0.
Hence, U ∩ W = {0}.
Therefore, V = U ⊕ W.
NOTE:
The general forms of real symmetric and anti-symmetric matrices: a real symmetric matrix has
a_ij = a_ji, while a real anti-symmetric matrix has a_ij = -a_ji (so its diagonal elements are zero).
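A small numerical sketch of this decomposition follows (the matrix M below is an arbitrary illustrative choice): S = (M + Mᵀ)/2 and K = (M - Mᵀ)/2 are the symmetric and anti-symmetric parts, and M is recovered as their sum.

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])   # arbitrary illustrative matrix

S = (M + M.T) / 2      # symmetric part:      S.T == S
K = (M - M.T) / 2      # anti-symmetric part: K.T == -K

assert np.allclose(S, S.T)
assert np.allclose(K, -K.T)
assert np.allclose(S + K, M)     # M is recovered as the (direct) sum
```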

Inner products:
Inner product of two vectors is a generalization of the dot product that we already know.

Inner Product:
[The real inner product is a function (mapping) from V × V to ℝ.]
For the real inner product, the following axioms hold:
(i) ⟨au + bv, w⟩ = a⟨u, w⟩ + b⟨v, w⟩ for all u, v, w ∈ V and a, b ∈ ℝ [Linearity]
(ii) ⟨u, v⟩ = ⟨v, u⟩, for all u, v ∈ V [Symmetry]
(iii) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0 [Positive definite]

Note:
Linearity and symmetry together are called bilinearity.

Let u = (u₁, u₂, ..., uₙ) and v = (v₁, v₂, ..., vₙ).

We define the inner product ⟨u, v⟩ = Σᵢ uᵢvᵢ [Euclidean inner product].
If the vectors are represented by column matrices u and v,
then ⟨u, v⟩ = uᵀv [uᵀ = transpose of u].
Now suppose u and v are two vectors in ℝⁿ and A is an n×n matrix.
Av is a column and is another vector in ℝⁿ.
Now consider the inner product ⟨u, Av⟩ = uᵀAv.
Also, we can have ⟨Au, v⟩ = (Au)ᵀv = uᵀAᵀv.
Example:
(

) and (


)

(


) , and are Orthogonal.
NORM of a vector is defined as
‖u‖ = √⟨u, u⟩ (inner product with itself).

This is the Euclidean norm when ⟨u, u⟩ = Σᵢ uᵢ².
The following rules hold for a NORM:
‖u‖ ≥ 0 for all u, and ‖u‖ = 0 if and only if u = 0.
‖ku‖ = |k| ‖u‖, for all u and all scalars k.

Note:
For the Euclidean norm, ‖u‖ = (Σᵢ uᵢ²)^(1/2), which is what we normally use.

Let us define the p-Norm:
‖u‖_p = (Σᵢ |uᵢ|^p)^(1/p), for p ≥ 1.

p = 2 is our special case. It may be called the 2-norm.
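For concreteness, the p-norm can be evaluated with numpy; the 2-norm is the Euclidean special case. The vector below is illustrative.

```python
import numpy as np

u = np.array([3.0, -4.0, 12.0])          # illustrative vector

for p in (1, 2, 3, np.inf):
    # ||u||_p = (sum_i |u_i|^p)^(1/p);  p = inf gives the max-norm limit
    print(p, np.linalg.norm(u, ord=p))

# The Euclidean (2-)norm agrees with sqrt(<u, u>):
assert np.isclose(np.linalg.norm(u), np.sqrt(u @ u))
```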

Two Important Theorems on inner product and sum of two vectors
CAUCHY-SCHWARZ Inequality:
For two vectors u, v:
|⟨u, v⟩| ≤ ‖u‖ ‖v‖

Proof:
According to definition, ⟨u, v⟩ = Σᵢ uᵢvᵢ [Euclidean, defined on the real field].
Consider any two real numbers a, b. Now, 2ab ≤ a² + b².
Now considering a = |uᵢ|/‖u‖ and b = |vᵢ|/‖v‖, we set
2|uᵢ||vᵢ|/(‖u‖‖v‖) ≤ uᵢ²/‖u‖² + vᵢ²/‖v‖². ...(1)
Here we considered the norms ‖u‖ = (Σᵢ uᵢ²)^(1/2) and also ‖v‖ = (Σᵢ vᵢ²)^(1/2).
From (1), summing over i on both sides,
2 Σᵢ |uᵢ||vᵢ| / (‖u‖‖v‖) ≤ Σᵢ uᵢ²/‖u‖² + Σᵢ vᵢ²/‖v‖² = 1 + 1 = 2.

In the above, we have used the identities Σᵢ uᵢ² = ‖u‖² and Σᵢ vᵢ² = ‖v‖².
Hence |⟨u, v⟩| = |Σᵢ uᵢvᵢ| ≤ Σᵢ |uᵢ||vᵢ| ≤ ‖u‖ ‖v‖. (proved)
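A quick numerical sanity check of the inequality on random real vectors (illustrative only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=5)
    v = rng.normal(size=5)
    # |<u, v>| <= ||u|| ||v||  must hold for every pair
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
print("Cauchy-Schwarz verified on 1000 random pairs")
```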



MINKOWSKI's Inequality:
For two vectors u, v with real components:
‖u + v‖ ≤ ‖u‖ + ‖v‖

Proof:
‖u + v‖² = ⟨u + v, u + v⟩ = ‖u‖² + 2⟨u, v⟩ + ‖v‖²  ...(2)
[since ⟨u, v⟩ = ⟨v, u⟩ for a real inner product].
Now we apply the Cauchy-Schwarz inequality, ⟨u, v⟩ ≤ ‖u‖ ‖v‖.
From (2),
‖u + v‖² ≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖² = (‖u‖ + ‖v‖)².
Hence ‖u + v‖ ≤ ‖u‖ + ‖v‖. (proved)

Note:
‖u + v‖² = ‖u‖² + 2⟨u, v⟩ + ‖v‖².


Now if the inner product, , the vectors are orthogonal, we get back

Pythagoras Theorem!

Also, under the new symbols, we rediscover, if , i.e., if , then is called
unit vector. For any vector ,

(normalized)
For any non-negative real number, is called the distance between and .

Inner Product defined on Complex vector space,

,
where


Note that,

and


The bilinearity is lost!
NOTE: A real inner product space is sometimes called Euclidean space and a complex inner product
space is called a Unitary space.

Lecture III

Eigen values and Eigen Vectors
[In the Lecture Notes-I & II, we provided a formal introduction to Vector Space, subspace, inner products etc.
with examples.]

Consider a linear map T: V → V over the field F.
If there exists a nonzero vector v and a scalar λ such that
Tv = λv,
then v is an eigenvector of T and λ is an eigenvalue.
If the mapping is done by a real n×n matrix A and v ∈ ℝⁿ, λ ∈ ℝ, we write
(A - λI)v = 0 ...(1),
I = identity matrix. The above matrix equation (1) has a nontrivial solution if and only
if
det(A - λI) = 0.
Note:
Otherwise, if det(A - λI) ≠ 0, then (A - λI)⁻¹ exists, which would mean v = 0 is the
only (trivial) solution!



NOTE:
The solution of the n-th degree polynomial equation det(A - λI) = 0 gives n values of λ, which are the
eigenvalues. However, the eigenvalues may not all be distinct.
For each distinct eigenvalue, there is an eigenvector.

Example #1: Consider, (


)
Characteristic polynomial, |


|

which gives , two Eigen values, real and distinct.


For , (


* (

) wherefrom we get

( ).
As one parameter can be arbitrary, we choose, for example,

, the corresponding Eigen vector


is (


* . Similarly, the other vector can be found easily.
[Note that as one parameter can be chosen arbitrarily, the length of the eigenvector is arbitrary.]
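Numerically, eigenvalues and eigenvectors can be obtained as below; the 2×2 matrix here is a hypothetical stand-in, since the entries of the example matrix above are not reproduced in the notes.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # hypothetical 2x2 matrix with distinct real eigenvalues

eigvals, eigvecs = np.linalg.eig(A)    # columns of eigvecs are the eigenvectors
print(eigvals)                         # e.g. [3. 1.]

# Check A v = lambda v for each eigenpair (the overall length of v is arbitrary):
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```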

Similarity to a Diagonal Matrix

Suppose the n×n matrix A has n linearly independent eigenvectors. Each eigenvector can be thought of
as a column, and we arrange the columns side by side to construct a matrix P.
Now the matrix P is such that its columns are linearly independent!
det(A - λI) = 0 is the characteristic equation of A. This is a polynomial of degree n in λ.


[Note: The matrix P has maximal rank and is nonsingular. Thus P⁻¹ exists.]
Consider
AP = A[v₁ v₂ ... vₙ] = [λ₁v₁ λ₂v₂ ... λₙvₙ] = PD,
where each vᵢ is a column matrix and D is a diagonal matrix having the eigenvalues on its diagonal.

Now we can write,
P⁻¹AP = D.

The above is a Similarity Transformation. The matrices A (representing some operator, may be) and
D are called similar to each other. Also note that the matrix A has been diagonalized and D is that
diagonal matrix.
Note:
If we consider Q = P⁻¹, we may write the similarity transformation as QAQ⁻¹ = D.

A reverse thinking:
Imagine we have a similarity transformation P⁻¹AP = D, where P is an n×n matrix, D is a
diagonal matrix, and P⁻¹ exists. Then the diagonal elements of D must be the eigenvalues of A
and the columns of P must be the corresponding eigenvectors. Also, it can be said that the
eigenvectors are linearly independent and thus they span ℝⁿ.

An Important Theorem:
Two matrices and represent the same linear operator if and only if they are similar to each
other. [In fact, all the matrix representations of a linear operator form an equivalence class of similar
matrices.]




Home Work
Consider the matrix
(



+

Find the Eigen values. Find the Eigen vectors. Prove that the eigenvectors are distinct (independent).
Now diagonalize to show that the diagonal elements are themselves the Eigenvalues.
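The homework can be checked numerically along the following lines. Since the entries of the matrix above are not legible here, a hypothetical 3×3 matrix with distinct eigenvalues is used: build P from the eigenvectors and verify that P⁻¹AP is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])        # hypothetical matrix with distinct eigenvalues

eigvals, P = np.linalg.eig(A)          # columns of P are the eigenvectors
D = np.linalg.inv(P) @ A @ P           # similarity transformation P^{-1} A P

assert np.allclose(D, np.diag(eigvals), atol=1e-10)
print(np.round(D, 10))                 # diagonal, with the eigenvalues on the diagonal
```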

Some Important propositions:
#1. A matrix with distinct Eigenvalues is diagonalizable!

#2. Eigenvalues are invariant under a similarity transformation.
[Note: A similarity transformation corresponds to a change of basis.]
Proof:
Av = λv, ...(1)
where λ is an eigenvalue of the square matrix A.
Let B = P⁻¹AP, where P is some non-singular matrix.

Thus from (1),
P⁻¹AP(P⁻¹v) = λ(P⁻¹v), i.e., B(P⁻¹v) = λ(P⁻¹v).

Now we can say the eigenvalue λ of A is also an eigenvalue of the matrix B = P⁻¹AP, corresponding to a
different eigenvector P⁻¹v.

What about the Trace? Does it remain invariant under a similarity transformation?

Trace and determinant both remain invariant under a similarity transformation.

Trace:
Tr(P⁻¹AP) = Tr(APP⁻¹) = Tr(A).

[Note: We used the cyclic property, Tr(ABC) = Tr(BCA).]

Determinant:
det(AB) = det(A) det(B).

In our case,
det(P⁻¹AP) = det(P⁻¹) det(A) det(P) = det(A).

NOTE: Trace and determinant remain invariant under a similarity transformation even for matrices that are not diagonalizable.

EIGENSPACE:
If the eigenvalue λ is a multiple root of the characteristic equation then there can be a number of
eigenvectors corresponding to the particular eigenvalue λ. The set of eigenvectors corresponding to λ,
together with the zero vector, forms the eigenspace E_λ, which is a vector subspace of V.

Note:
Even if the eigenvalues are not distinct, that is, there are multiple roots, it may still be possible to find n
independent eigenvectors and then form the P-matrix to diagonalize A.

Home Work:
(



+, (multiple roots)
Establish the diagonal matrix by similarity transformation.


CAYLEY-HAMILTON THEOREM

A square matrix satisfies its own characteristic equation.
For the characteristic polynomial Δ(λ) = det(A - λI), we can write
Δ(A) = O, where O is the matrix whose elements are all zero.



To apply CH theorem, let us consider,


Similarly,

for any

Now,



.
In this way, we can arrive at the following,

.
Now,

) , the
zero matrix.
Therefore, we can say,

.

[NOTE: This is a special proof valid when the matrix is diagonalizable. For the general proof, consult any
book, for example the book by Lipschutz (Schaum Series).]

Applications of CH Theorem:

Consider the matrix, (


)
Characteristic equation,
|


|



From CH theorem, we can write,


So, the higher power of can be expressed by the lower power of . This is indeed a nice trick!

Now we can easily calculate the even higher powers of the matrix.


And so on
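A numerical illustration of this trick, using a hypothetical 2×2 matrix: by Cayley-Hamilton, A² = (tr A)A - (det A)I, so every higher power reduces to a combination of A and I.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # hypothetical 2x2 matrix

t, d = np.trace(A), np.linalg.det(A)   # characteristic polynomial: x^2 - t x + d = 0
I = np.eye(2)

# Cayley-Hamilton: A^2 - t A + d I = 0
assert np.allclose(A @ A - t * A + d * I, 0)

# Hence A^2 = t A - d I, and A^3 = t A^2 - d A = (t^2 - d) A - t d I, etc.
A3 = (t**2 - d) * A - t * d * I
assert np.allclose(A3, A @ A @ A)
```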


Eigen values and Eigen vectors for a REAL SYMMETRIC MATRIX

Eigen Values are Real

Consider Av = λv, where v† = conjugate transpose of the vector v.

Now, transposing and conjugating the above equation,
v†A† = λ*v†.
If A is a real symmetric matrix, A† = A, which means
v†A = λ*v†, and hence v†Av = λ*v†v.
But also v†Av = λ v†v, so (λ - λ*) v†v = 0.
Since v†v > 0 for any v ≠ 0, we get λ = λ*, i.e., λ is real.

[Note: We used the inner product definition ⟨v, v⟩ = v†v = ‖v‖² (Norm).]

Eigenvectors are Orthogonal.

Let A be a real symmetric matrix and let it have eigenvectors v₁, v₂ corresponding to two distinct
eigenvalues λ₁, λ₂:
Av₁ = λ₁v₁ ...(1)
Av₂ = λ₂v₂ ...(2)
Take the transpose of (2):
v₂ᵀA = λ₂v₂ᵀ ...(3) [Aᵀ = A for a symmetric matrix.]

Multiply (1) from the left by v₂ᵀ and (3) from the right by v₁, and subtract:
(λ₁ - λ₂) v₂ᵀv₁ = 0.

As the eigenvalues are real and distinct, v₂ᵀv₁ = 0: the eigenvectors are orthogonal.




For an Orthogonal Matrix:
An orthogonal matrix A is a real square matrix satisfying
AᵀA = AAᵀ = I.

Note:
The rows and columns of an orthogonal matrix are orthonormal.
[This follows from writing out AᵀA = I in terms of the rows of A.]
A real symmetric matrix has real eigenvalues.

Operation of an Orthogonal Matrix:

Let A be an orthogonal matrix. When A is applied on a vector x, we get a new vector y = Ax.
‖y‖² = yᵀy = xᵀAᵀAx = xᵀx = ‖x‖².

So the norm is unchanged. This represents a rotation!

#Example:
(


)
The Matrix is not necessarily symmetric here. But we may check,

(orthogonal).
Check the columns,
(

) and (


)
(


) . Also, , columns are orthonormal!

NOTE:
An n×n matrix is diagonalizable if it has n distinct eigenvalues.
A general square matrix cannot necessarily be diagonalized by a similarity
transformation.
Consider a real symmetric matrix. This is very special: its eigenvalues are real and it is always
diagonalizable.

#Example:
(


)
Eigen values


Corresponding Eigen vectors,
(

), (

)

Also, (

) (

) Eigen vectors are orthogonal.


Now, normalize the eigenvectors and form the following matrix,
(

), this is orthogonal by default. So,

.
Similarity Transformation:
Considering

,(


) (

,
(


) Diagonalized.
Important Conclusion:
A real symmetric matrix is diagonalizable by similarity transformation through a real orthogonal
matrix!
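Numerically, this is what numpy.linalg.eigh does for a real symmetric matrix: it returns real eigenvalues and an orthogonal matrix of eigenvectors. The matrix below is an illustrative choice, not the one in the example above.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # real symmetric (illustrative)

eigvals, P = np.linalg.eigh(A)             # eigh is specialised to symmetric/Hermitian matrices

assert np.allclose(P.T @ P, np.eye(2))     # P is orthogonal: P^T P = I
assert np.allclose(P.T @ A @ P, np.diag(eigvals))   # P^T A P is diagonal
print(eigvals)                             # real eigenvalues, here 1 and 3
```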

Some Important Points:
1. A linear operator U is said to be Unitary if its adjoint operator U† coincides with its inverse
U⁻¹, i.e., U† = U⁻¹. All the eigenvalues of U have unit modulus. (Rotation)

2. A Hermitian operator H is such that H† = H (self-adjoint). All the eigenvalues of H are real
numbers.
3. Projection Operator: P² = P.
Eigenvalue: P²v = Pv implies λ²v = λv, which means the eigenvalue λ = 0 or 1.



Lecture IV

[In this Lecture set, we discuss inner products, orthogonality, different Unitary and Hermitian
operators, applications and all that. ]

Some important ideas:

Orthogonal Diagonalizability of a real Symmetric Matrix

Consider a real matrix A that is diagonalized by an orthogonal matrix P: A = PDPᵀ with PᵀP = I.
Then Aᵀ = (PDPᵀ)ᵀ = PDPᵀ = A.

Thus the matrix is symmetric!

Note: If a real matrix is diagonalizable by an orthogonal matrix, it is symmetric.

Similarity Transform and Characteristic Equation:

Consider two similar matrices A and B = P⁻¹AP:
det(B - λI) = det(P⁻¹AP - λI) = det(P⁻¹(A - λI)P) = det(A - λI).

Thus similar matrices have the same characteristic polynomial.



Operations with UNITARY Matrix

Preliminaries:
In our previous lecture, we defined the inner product on a complex vector space:
⟨u, v⟩ = Σᵢ uᵢ* vᵢ for complex vectors,
⟨u, v⟩ = u†v for complex column matrices.

For Unitary Matrices (usually complex),
U†U = UU† = I.



# Examples:

(


)

(


)

(


)

(


)

(


)

(


) (


)


Note: The product of two unitary matrices is a unitary matrix. The inverse of a unitary matrix is
another unitary matrix. Identity matrix is unitary. The unitary matrices form a group called unitary
group.

Consider a special unitary matrix,
(

*, where the determinant,


* (

* (


)

These special unitary matrices form a special unitary group: SU(2)
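A short numerical check on one concrete element of SU(2). The particular matrix used here, (1/√2)(I + iσₓ), is an illustrative choice: it is unitary, has determinant 1, and its eigenvalues have unit modulus.

```python
import numpy as np

U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])          # illustrative SU(2) element

assert np.allclose(U.conj().T @ U, np.eye(2))       # unitary: U^dagger U = I
assert np.isclose(np.linalg.det(U), 1.0)            # special: det U = 1

eigvals = np.linalg.eigvals(U)
assert np.allclose(np.abs(eigvals), 1.0)            # eigenvalues lie on the unit circle
print(eigvals)                                      # e^{+i pi/4}, e^{-i pi/4}
```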


The eigenvalues of a Unitary Matrix have unit modulus:

Let v be an eigenvector corresponding to the eigenvalue λ of the unitary matrix U:
Uv = λv.
Then v†U†Uv = v†v, but also v†U†Uv = |λ|² v†v.
Therefore, |λ|² = 1, i.e., |λ| = 1.



Inner Product is preserved due to operation of Unitary Matrix

Let us suppose,

correspond to the orthogonal eigenvectors (independent)

, ,

. Also assume, for every ,

.
Take any vector and expand it in terms of the orthogonal basis formed by the eigenvectors.

, this means is unitary.



Further, let us consider two vectors, and .


Now consider,

, ,

that form an orthonormal basis:




and


Now,

[Assume,

]
Also,


If now

, ,

form an orthonormal basis,

for each .


Thus the inner product is preserved due to the operation of unitary matrix. In other words,
we can say that the linear operator carries an orthonormal basis into another and this is
then unitary operator.

Operations with Unitary Matrix:




NOTE:
The operation of on the vectors in a vector space means the isomorphism
onto itself.

Operations with HERMITIAN Matrix

A Hermitian matrix is self-adjoint,

.

All the eigenvalues of a Hermitian operator are real numbers.

Take any eigenvalue corresponding to the eigenvector :

Consider,


Again,

Inner product conserved


Norm preserved
Distance preserved.


which means real number.


Inverse of a Hermitian Operator

is also Hermitian.
Suppose,


Also,


Now


Thus we can write,


The eigenvalues of H⁻¹ are the inverses of the eigenvalues of H.



Decomposition of a Matrix

Suppose, is a unitary matrix.
We can always write,

and

are Hermitian matrices.



Now,


But

] , which means

and

commute!
NOTE: The reverse is also true.

Any matrix which is not Hermitian, can be expressed as the sum of a Hermitian matrix and a
skew Hermitian matrix.



Suppose U is a unitary matrix and H is a Hermitian matrix. Consider the similarity
transformation
H′ = UHU†.
Now take the adjoint of the above:
(H′)† = (UHU†)† = UH†U† = UHU† = H′.
So, the similarity-transformed matrix is again self-adjoint!
# EXAMPLES of Hermitian Matrices that we often encounter in Physics:
Pauli spin matrices,

σ_x = [[0, 1], [1, 0]], σ_y = [[0, -i], [i, 0]], σ_z = [[1, 0], [0, -1]].

If we now construct a general matrix as a real linear combination of these (together with the identity),
this is also a Hermitian matrix.

Gram-Schmidt Ortho-normalization Process

This is a method to construct an orthonormal basis from an arbitrary basis:
{

} {

}
Consider,

so that

is normal,


Next,

and

is normal.
{

} form an orthonormal set.


Next stage,

and

is normal.
Now, {

} form an orthonormal set.


In this way, we can construct

and

.
Notice the following transition:
(

) (

+(

)
The Transition matrix is clearly in triangular form.
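A minimal sketch of the procedure in code (classical Gram-Schmidt on real vectors; the starting basis below is an illustrative choice):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set spanning the same space as `vectors`."""
    ortho = []
    for v in map(np.asarray, vectors):
        w = v.astype(float)
        for e in ortho:
            w = w - (e @ w) * e          # subtract the projection on each earlier e_i
        norm = np.linalg.norm(w)
        if norm > 1e-12:                 # skip vectors that are (numerically) dependent
            ortho.append(w / norm)
    return ortho

# Illustrative (arbitrary) starting basis of R^3:
basis = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
e = gram_schmidt(basis)

G = np.array([[u @ v for v in e] for u in e])
assert np.allclose(G, np.eye(len(e)))    # the result is orthonormal
```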



To use the method:
For a vector, if we construct a new vector in terms of the basis set {

} in the
following way,


The vector will be orthogonal to each of the vectors

s.
To check:


[ {

} is a orthonormal set]
This is true for all

s, .

Example #1:
Consider the following set of vectors,

in

) (

) (

*
We can now check,

.
Note:

form orthonormal basis in

.
Example #2: (Ref: Hoffman-Kunze)
Consider the vectors,

in


Apply Gram-Schmidt process to find orthogonal set of vectors.
Soln.





Now

are orthogonal set of vectors in

.







Cauchy-Schwarz Inequality

From Gram-Schmidt orthogonalization process we go over to Cauchy-Schwarz inequality.
Consider two vectors, , .

From

[From Gram-Schmidt]
Then
Also,


This is Cauchy-Schwarz inequality.

Inner Product in complex Vector Space

For , , is a complex number. So we can write,

Now,
Inner Product space:
An inner product space is a real or complex vector space together with a specified inner product
defined on that space.
A finite dimensional real inner product space is called Euclidean space.
A complex inner product space is called Unitary space.

[ ]

Consider,

(1)
Similarly,

(2)
Therefore, for
Real space:

(3)
Complex space:

(4)
[H.W.: Check the above two identities.]
(3) & (4) are called polarization identities.

Condition for Orthogonalization:
Consider a matrix, (


), where are complex numbers in general.
Now set two vectors,

and


Next, we apply Gram-Schmidt orthogonalization scheme,


Now

and

are linearly independent if

and

are also linearly independent.]


The above is possible only if .
Thus this is the condition of orthogonalization of two vectors.
#Example:
Two vectors, (

) and (

) are orthogonal.

We have, |


|
On the contrary:
Take (


) where the
Now construct two vectors,

) and

)


Thus the two vectors are not orthogonal.

H.W.
Check the orthogonality for the following rotation matrix:
(



+


More about Inner Product Space:
For a vector space of matrices, the inner product can be taken as
⟨A, B⟩ = Tr(A†B).

For a vector space of all continuous complex-valued functions on the unit interval [0, 1],
⟨f, g⟩ = ∫₀¹ f*(t) g(t) dt.

Lecture V

Lecture on Vector Space: Lecture Set V
[ Discussed about minimum and characteristic polynomials, triangular matrix etc. ]

Suppose, the matrix corresponding to an operator is of the following triangular form:
(

+
[Note: Precisely, an upper triangular form matrix.]

The characteristic polynomial can be written as in the following, a product of factors:


[Check this with a matrix.]

Theorem:
Let T: V → V be a linear operator whose characteristic
polynomial can be written as a product of linear factors. Then there exists a basis in V for which the
operator T can be represented as a triangular matrix.
NOTE: The eigenvalues in this case are the entries appearing in the diagonal.

Two similar matrices, and , related by

, have same characteristic polynomial:




Characteristic polynomial for the transposed matrix:
det(Aᵀ - λI) = det((A - λI)ᵀ) = det(A - λI),
since the determinant of a matrix and its transpose are the same.

BLOCK MATRIX:

Consider a Block matrix, (


), where are themselves matrices.
We can show,
In the same way, for a triangular square matrix,
(



For characteristic polynomials:
(


)
|


|
Here also, the above can be generalized in the same way as we have done for determinant.

Minimum Polynomial:

For a square matrix A, it satisfies its characteristic polynomial: Δ(A) = 0.
[C-H Theorem]
Now, there can be some other polynomials f for which we may write f(A) = 0.
The minimum polynomial m(λ) of A is the non-zero polynomial of lowest degree with leading
coefficient 1 (monic) which satisfies m(A) = 0; it divides the characteristic polynomial Δ(λ).

NOTE:
The minimum and characteristic polynomials of matrix have the same irreducible factors.


EXAMPLE:
Consider the following triangular matrix,
(




,
The characteristic polynomial of is


The min. polynomial can be one of the following, which is equal to zero and satisfied by :

or

itself (which is zero by C-H theorem) [See Lect. Note: IV]


Now if we see,

, whereas,

, we can say that

is the min.
polynomial.



How to actually find min. polynomial?
Example #1:
(




, [Example from Lipschutz]
Characteristic polynomial of
|




|
|



| |



|
The 2nd term in the above is zero.
|


|


The min. polynomial will have the same irreducible factors and and that will divide
.

Thus we verify one of the options,
,


Now consider the first option, (




,(




,

For the 2nd option, it can be checked that

=0

is the minimum polynomial.


Note: Sometimes, the min. polynomial and the characteristic polynomials can coincide.
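One practical way to find the minimum polynomial is to test the monic divisors of the characteristic polynomial, from the lowest degree upward, until one annihilates the matrix. A sketch with a hypothetical matrix whose characteristic polynomial is (λ-2)²(λ-5)²:

```python
import numpy as np

# Hypothetical block-diagonal matrix: one 2x2 Jordan block for 2, and 5*I_2.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 5.0, 0.0],
              [0.0, 0.0, 0.0, 5.0]])
I = np.eye(4)

def annihilates(factors):
    """Multiply out the factors (A - c I)^k and test whether the product is the zero matrix."""
    M = I.copy()
    for c, k in factors:
        M = M @ np.linalg.matrix_power(A - c * I, k)
    return np.allclose(M, 0)

print(annihilates([(2, 1), (5, 1)]))   # (x-2)(x-5)      -> False, degree too low
print(annihilates([(2, 2), (5, 1)]))   # (x-2)^2 (x-5)   -> True: the minimum polynomial
print(annihilates([(2, 2), (5, 2)]))   # (x-2)^2 (x-5)^2 -> True: the characteristic polynomial
```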

Example #2:
Consider the following block diagonal matrix.



2 4 0 0 0 0 0
0 2 0 0 0 0 0
0 0 4 2 0 0 0
0 0 1 3 0 0 0
0 0 0 0 0 3 0
0 0 0 0 0 0 0
0 0 0 0 0 0 5

Consider,
(


), (


), (


),
(




, Block diagonal.
The minimum polynomial of is the least common multiple of the minimum polynomials of , ,
and .

The min. polynomials of , , and are

, ,

and .
Therefore, the min. polynomial of :

, for which .

NOTE:
A triangular matrix, where all the diagonal elements are same.
For example,
(

, n-square matrix
The characteristic equation,

, the min. polynomial will be equal to the characteristic


polynomial.

Theorem:
Let V be a finite-dimensional complex inner product space and let T be any linear operator on V.
Then there is an orthonormal basis for V in which the matrix of T is Upper Triangular.




Lecture VI

Lecture on Vector Space: Lecture Set VI

Let us consider a vector space in finite dimension:
is a linear operator in
is the matrix representation of

For a triangular matrix,
(

,
The characteristic polynomial,




Invariant Subspace:
Formal Definition

For a linear operator T: V → V, a subspace W ⊆ V is said to be invariant if for each element w ∈ W
we have Tw ∈ W, i.e., T maps W into itself.
The subspace W is called T-invariant.

THEOREM:
If W is an invariant subspace of V under the mapping by a linear operator T, then T has a
block matrix representation of the form
[[A, B], [0, C]],
where A is the matrix representation of the restriction of T to W.

Example:
Consider the following rotation matrix in the xy-plane around the z-axis:

R_z(θ) = [[cos θ, -sin θ, 0],
          [sin θ,  cos θ, 0],
          [0,      0,     1]]



The xy-plane is the invariant subspace of the xyz-space under

.

Here,

(


),

),

.

Any vector in the xy-plane remains in the xy-plane due to the operation of

(



+(

) (

+ (

+ another vector in xy-plane.



Kernel and Image:

For T: V → V:
Kernel: Ker T = {v ∈ V : Tv = 0}, the set of all vectors which are mapped to zero.
Image: Im T = {Tv : v ∈ V}.
The subspace Ker T is T-invariant. Im T is also T-invariant.

Kernel and Image for Projection Operator:

A Projection Operator (or Projector) P is defined by the condition P² = P.
If U and W are subspaces of V, then V = U + W means
V = {u + w : u ∈ U, w ∈ W}.
For the direct sum, i.e., V = U ⊕ W, we additionally have U ∩ W = {0}.
When we have two subspaces for which V = U + W and U ∩ W = {0}, they are called complementary.

Let , then
For and [

]



Further, suppose some

, for some and also



{}

Thus and are complementary subspaces.
It follows that for any v ∈ V, v = Pv + (I - P)v.
Also, note that if P is a projector, I - P is also a projector.

The projectors P and I - P are called complementary projectors.
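A concrete numerical example of complementary projectors in ℝ³ (projection onto the line along a chosen vector a, which is an illustrative choice, and onto its orthogonal complement):

```python
import numpy as np

a = np.array([[1.0], [2.0], [2.0]])        # illustrative direction vector (column)
P = (a @ a.T) / (a.T @ a).item()           # projector onto the line spanned by a
Q = np.eye(3) - P                          # the complementary projector

assert np.allclose(P @ P, P)               # P^2 = P
assert np.allclose(Q @ Q, Q)               # (I - P)^2 = I - P
assert np.allclose(P @ Q, 0)               # the two images intersect only in {0}

v = np.array([[3.0], [1.0], [4.0]])
assert np.allclose(P @ v + Q @ v, v)       # every v splits uniquely as Pv + (I - P)v
```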


Consider the following direct sum:


So, for any vector in , can be written as

, where

for each

Now consider, is a linear operator and now is the direct sum of -invariant subspaces,

,
Now the operator also can be written as the direct sum of operators.

, where

denotes the restriction of to

, .

Example:

,
,


Consider two bases in the two subspaces, {

} and {

}
Also,


And







Thus we write,

),

+


Primary Decomposition:
Consider the linear operator, .
If is the direct sum of -invariant subspaces,


We have the min. polynomial

, where

is the minimum polynomial of

, where

denotes the restriction of to

.

NILPOTENT:
N is nilpotent if N^k = 0 for some positive integer k but N^(k-1) ≠ 0; k is the index of nilpotency
of N.
Example: (


),

,
Theorem:
If is the nilpotent operator of some index , then has a block diagonal matrix representation,
where the diagonal blocks are of the following form
Block (




,
There is at least one of order and all other blocks are of orders .


Jordan Canonical Form:
T is a linear operator whose characteristic and minimum polynomials split into products of linear factors,
the λᵢ's being distinct scalars.
s are distinct scalars.


In this case, has a block-diagonal matrix representation whose diagonal entries are of the form:


1. There is at least one

of order

, all other

s are of the order

.
2. The sum of the orders of the

s is

.
3. The number of

s equals to the geometric multiplicity of


[Note:

is the multiplicity of

and so on.]

Example:


Here,


J = diag( [[2,1],[0,2]], [[2,1],[0,2]], [[3,1],[0,3]], [3] )

A 7×7 matrix, with two independent eigenvectors belonging to the eigenvalue 2.

J′ = diag( [[2,1],[0,2]], [2], [2], [[3,1],[0,3]], [3] )

A 7×7 matrix, with three independent eigenvectors belonging to the eigenvalue 2.

More on Jordan Canonical Form:
An n×n matrix is diagonalizable if it has n linearly independent eigenvectors.
Consider a matrix, (




,


Investigation shows that we have only 3 linearly independent eigenvectors, but the matrix is .
Therefore, the matrix is not diagonalizable!
But the matrix in this case, can be made nearly diagonal.
Consider the following,
(




,

(




,(




, (




, Jordan form


Theorem:
For every matrix A there exists an invertible matrix P such that P⁻¹AP = J, where J is a Jordan
canonical matrix.
The matrix P consists of columns of eigenvectors or generalized eigenvectors.
For an eigenvector v, (A - λI)v = 0.
For a generalized eigenvector w, (A - λI)^k w = 0 for some k > 1, while (A - λI)w ≠ 0.
In the above example,
(

, , (

, , (

,
They are independent eigenvectors.
And
(

, but

, Generalized eigenvector.
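The Jordan form and the transition matrix can also be obtained symbolically with sympy. In the sketch below a matrix is built from a known Jordan matrix (an illustrative construction, not the example above) and then recovered.

```python
from sympy import Matrix

J = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])                # a Jordan matrix chosen for illustration
P = Matrix([[1, 1, 0],
            [0, 1, 1],
            [1, 0, 1]])                # any invertible matrix (det = 2)

A = P * J * P.inv()                    # a matrix similar to J

P2, J2 = A.jordan_form()               # returns the transition matrix and the Jordan canonical form
print(J2)                              # the recovered Jordan blocks (up to ordering)
assert P2.inv() * A * P2 == J2
```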

Decomposition of a square Matrix: (LU Decomposition)
Consider A to be an n×n matrix.
A = LU, where L is a lower triangular matrix and U is an upper triangular matrix.
For example, consider a matrix:
(

+(

+ (

+
(

+ (

+


Here we have 9 equations and 12 unknowns. So this can NOT be solved by the ordinary method.
Instead, the matrix equation Ax = b is solved by a trick!

First solve Ly = b [consider y = Ux].
This is done by forward substitution. ...(1)
Next solve Ux = y for x.
This is called back substitution. ...(2)
These algorithms (1) and (2) are implemented in computational methods.
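With scipy the factorization and the two substitution steps look like this (the 3×3 system here is illustrative):

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])       # illustrative coefficient matrix
b = np.array([5.0, -2.0, 9.0])

P, L, U = lu(A)                        # A = P L U (P permutes rows for pivoting)
assert np.allclose(P @ L @ U, A)

lu_piv = lu_factor(A)                  # factor once ...
x = lu_solve(lu_piv, b)                # ... then forward substitution (Ly = Pb) and back substitution (Ux = y)
assert np.allclose(A @ x, b)
print(x)
```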

Matrix Diagonalization: Eigen Decomposition
Consider a decomposition of a square matrix A.
If A has non-degenerate eigenvalues λ₁, λ₂, ..., λₙ and corresponding independent eigenvectors
v₁, v₂, ..., vₙ, then Avᵢ = λᵢvᵢ for each i, i.e., Av₁ = λ₁v₁ and so on.

Then we construct
P = [v₁ v₂ ... vₙ]
and find
D = P⁻¹AP,
a diagonal matrix whose diagonal entries are the eigenvalues.

A and D are similar.
This is called eigenvalue decomposition. [Nothing but a similarity transformation]
We write,
A = PDP⁻¹.

Similarly,
A² = PDP⁻¹PDP⁻¹ = PD²P⁻¹.

Also,
Aᵏ = PDᵏP⁻¹ [putting k factors together].



Exponential of a matrix:
e^A = I + A + A²/2! + A³/3! + ...
For a diagonalizable matrix, e^A = P e^D P⁻¹, where e^D is diagonal with entries e^(λᵢ).
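A numerical check of e^A = P e^D P⁻¹ against scipy's general-purpose expm (the matrix is an illustrative diagonalizable choice):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                     # illustrative diagonalizable matrix

eigvals, P = np.linalg.eig(A)
expA_eig = P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)   # e^A = P e^D P^{-1}

assert np.allclose(expA_eig, expm(A))          # agrees with the series-based computation
print(expA_eig)                                # [[cosh 1, sinh 1], [sinh 1, cosh 1]]
```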

Lecture VII

Lecture on Vector Space: Lecture Set VII

Let V be a vector space over a field F. Consider a mapping φ: V → F.
This is a linear functional if for every u, v ∈ V and for every a, b ∈ F we have the following linearity
relation:
φ(au + bv) = aφ(u) + bφ(v).
A linear functional on V is a linear mapping from V to F.

Now we have to understand that the set of linear functionals on a vector space V over a field F is
also a vector space over F, with addition and scalar multiplication defined by
(φ + σ)(v) = φ(v) + σ(v),
(kφ)(v) = k φ(v).
[Where φ and σ are two different mappings.]

The space of linear functionals, V*, will be called the Dual space!




Example:
Consider

and

as a vector.
We write,

, a scalar.
Dual Basis:
Consider a vector space V of dimension n defined over a field F. The dimension of the dual space V*
is also n. Each basis of V determines a basis of V*.
Now think of {v₁, v₂, ..., vₙ} to be a basis of V over F.
Let us also consider a set of linear functionals {φ₁, φ₂, ..., φₙ} so that
φᵢ(vⱼ) = δᵢⱼ (equal to 1 when i = j and 0 otherwise).
We thus can say {φ₁, φ₂, ..., φₙ} is a basis of V*.

Explicitly, φ₁(v₁) = 1, φ₁(v₂) = 0, and so on.

Example:

in


Find the dual basis {

}.
Ans. Consider

, and

) ,


Thus dual basis is

, and



To Work out:
#1. Find a dual basis for


[Ans.

, and

]

#2. Consider the following basis of

.
Find the dual basis {

}.
[Ans.

]
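Numerically, for a basis of ℝⁿ written as the columns of a matrix E, the dual basis functionals are represented by the rows of E⁻¹, since (E⁻¹E)ᵢⱼ = δᵢⱼ. The basis below is illustrative, not the one in the problems above.

```python
import numpy as np

# Illustrative basis of R^3, one basis vector per column:
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

dual = np.linalg.inv(E)                # row i of E^{-1} represents the functional phi_i

# phi_i(v_j) = delta_ij:
assert np.allclose(dual @ E, np.eye(3))

v = np.array([2.0, 3.0, 4.0])
coords = dual @ v                      # phi_i(v) are the coordinates of v in this basis
assert np.allclose(E @ coords, v)
print(coords)
```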


Let {

} be the basis of and {

} be the dual basis of

.
Then for any ,


and for any functional



Proof: Suppose,



Similarly, we will have,

and so on.

Thus,

(1)
Now applying over (1),




One Theorem:
Let {vᵢ} and {wᵢ} be bases of V, and let {φᵢ} and {σᵢ} be
the corresponding dual bases of V*. If P is the transition matrix from {vᵢ} to {wᵢ}, then (P⁻¹)ᵀ is the
transition matrix from {φᵢ} to {σᵢ}.

[We skip the proof here for this set of elementary lecture notes.]
In Quantum Mechanics:

Each quantum state of a particle is represented by a state vector belonging to some state space E,
which is a subspace of the Hilbert space.
We call the elements of E ket vectors, |ψ⟩, according to Dirac notation.
In function space, a wave function ψ(x) corresponds to a ket vector |ψ⟩.
[The advantage of Dirac notation is that it does not have to refer to any particular coordinate system.]

For each pair of kets and , we associate a complex number:
inner product
which satisfies the following:
(i)


(ii)



(iii)




Dual Space:
Consider a linear functional χ: E → ℂ (the complex field).
So, χ(ψ) is a complex number.
The linear functional is defined on a ket vector and turns it into a complex number in ℂ.
The linear functionals defined on the kets constitute a vector space which is the dual space
of E; we will call it E*.

Now according to Dirac, any element of the dual space E* is called a bra vector, written ⟨χ|.

For example, any bra ⟨χ| designates a linear functional which, when applied on a ket |ψ⟩, gives us a
number which we could write as χ(ψ), but in Dirac notation we write ⟨χ|ψ⟩.
So the inner product of the earlier notation (χ, ψ) is now ⟨χ|ψ⟩.

Because of the complex inner product, there is the anti-linear relation:
⟨λψ| = λ*⟨ψ|.



Also, if |ψ⟩ is a ket and λ|ψ⟩ is also a ket, we can write |λψ⟩ = λ|ψ⟩,
and ⟨λψ| = λ*⟨ψ|.

To every ket there is a bra.
Then we can define an operator:
|ψ⟩⟨φ|.
Apply the above on a ket |χ⟩ and we get
|ψ⟩⟨φ|χ⟩,
where ⟨φ|χ⟩ is a complex number.
If we have a normalized |ψ⟩, we write ⟨ψ|ψ⟩ = 1.
So we can define a projection operator by P = |ψ⟩⟨ψ|.
Because,
P² = |ψ⟩⟨ψ|ψ⟩⟨ψ| = |ψ⟩⟨ψ| = P.



For more details of the operator algebra and Quantum Mechanical techniques, follow a standard
Quantum Mechanics book with Dirac notation.

[Ref: Quantum Mechanics Concepts and Applications by N. Zettilli (Wiley Pub)]


APPENDIX
Matrix representation of a matrix in a basis, {

}:


..


Matrix representation of is the transpose of .
Example:
(


)


In the basis,

),

(


) (

) (

(


) (

) (


[]

(


) (


)


For be the -invariant subspace of ,
In the basis of {

} of and for the extended basis {

} of


..

and


...


[]
{

}
(


)

(


)

(

,





References:

1) Linear Algebra - Seymour Lipschutz (Schaum's Outline Series)
2) Linear Algebra - K. Hoffman, Ray Kunze (Prentice Hall)
3) Linear Algebra - Francis J. Wright (Lecture Notes) [centaur.maths.qmw.ac.uk]
4) Linear Algebra and Matrices - Martin Fluch (Dept. of Maths, Univ. of Helsinki webpage)
5) Linear Algebra - Paul Dawkins (Lecture Notes on www.scribd.com)
6) Berkeley Lecture Notes - Edward Carter
7) Linear Algebra - V.V. Voyevodin (Mir Pub, Moscow)















