We develop the tensor algebra required to study continuum mechanics. In this lecture, we introduce the concepts of vector, vector space, basis, and inner product.
Elementary concept of vector
In elementary physics, a vector is defined as a quantity that has both magnitude and direction. A vector is represented graphically by an arrow, as shown in Fig. (1a), while its algebraic representation (in 3-D space) uses three numbers. For example, the vector u is represented by {u1, u2, u3} or u1 e1 + u2 e2 + u3 e3, where e1, e2, e3 are unit vectors along the coordinate axes. The length of the vector, |u| = √(u1² + u2² + u3²), represents its magnitude, while the direction cosines (cos θ1, cos θ2, cos θ3) represent its direction, as shown in Fig. (1a). Two vectors can be added and subtracted as shown in Figs. (1b) and (1c). Two vectors are said to be equal if they have the same magnitude and direction. Examples of vector quantities are position, velocity, acceleration, etc. It can be observed that a coordinate transformation can alter the components but does not affect the vector itself. For example, if somebody throws a stone, the velocity vector of the stone does not depend on the choice of coordinate system, i.e., the existence of the vector is independent of the coordinate frame. However, the components of the velocity vector do depend on the coordinate frame, i.e., the column matrix that represents the velocity vector does change with the coordinate frame. The foregoing discussion of vectors is intuitive from elementary physics.
Figure 1: (a) components and direction cosines of a vector u, with |u| = (u1² + u2² + u3²)^(1/2) and cos θi = ui/|u|; (b) addition of two vectors, u + v; (c) subtraction of two vectors, u = y − x.
It is clear from the elementary definition of a vector that unit vectors along the coordinate axes are used without really being defined. Since the unit vectors along the coordinate axes are themselves vectors, they cannot exist without a definition of vector. Clearly, the elementary definition of a vector cannot be stated without unit vectors along the coordinate axes, and the unit vectors along the coordinate axes cannot stand without a definition of vector. This runs into the chicken-and-egg paradox. In order to resolve the paradox, one should follow an axiomatic framework. We now define vector spaces and vectors by axioms. Later, the unit vectors along the coordinate axes can be recovered through the definition of a basis.
Vector space
We consider only vector spaces over the field of real numbers. Let us denote the set of real numbers by ℝ and the vector space by V. The set V, equipped with an addition operation (V × V → V) and a scalar multiplication operation (ℝ × V → V), is called a vector space if it satisfies the following axioms:
(i) Associativity: (u + v) + w = u + (v + w), for all u, v, w ∈ V.
(ii) Commutativity: u + v = v + u, for all u, v ∈ V.
(iii) Existence of a zero element: There exists 0 ∈ V such that u + 0 = u, for all u ∈ V.
(iv) Existence of negative elements: For each u ∈ V there exists a unique v ∈ V such that u + v = 0; this v is denoted −u.
(v) Associativity in scalar multiplication (or the compatibility of multiplication defined between the field elements (real numbers) and the scalar multiplication operation): α(βu) = (αβ)u, for all u ∈ V and α, β ∈ ℝ.
(vi) Identity in scalar multiplication: There exists a unique element 1 ∈ ℝ such that 1u = u, for all u ∈ V.
(vii) Distributivity with respect to the addition operation on vectors: α(u + v) = αu + αv, for all u, v ∈ V and α ∈ ℝ.
(viii) Distributivity with respect to scalar addition: (α + β)u = αu + βu, for all u ∈ V and α, β ∈ ℝ.
The elements of the set V are called vectors. This axiomatic framework generalizes the notion of a vector. We now present a few examples of vector spaces.
Examples of vector spaces
Example-1: Let us consider the set of all n-tuples of real numbers,
ℝⁿ = {(u1, u2, · · · , un) : ui ∈ ℝ}.
Addition operation:
u + v = (u1 + v1, u2 + v2, · · · , un + vn)
Scalar multiplication operation:
αu = (αu1, αu2, · · · , αun)
It is easy to verify that the set ℝⁿ obeys all the axioms of a vector space under this addition and scalar multiplication. Therefore, the set ℝⁿ is a vector space. It can be observed that the special case n = 3 is our usual three-dimensional (3-D) Euclidean space.
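The axioms can be checked mechanically for concrete tuples. The following sketch (an illustration of mine, not part of the lecture) verifies all eight axioms for sample vectors in ℝ³, using plain Python tuples as n-tuples:

```python
# Verify the eight vector-space axioms for R^3 on sample tuples.

def add(u, v):
    """Componentwise addition: u + v = (u1 + v1, ..., un + vn)."""
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(alpha, u):
    """Scalar multiplication: alpha*u = (alpha*u1, ..., alpha*un)."""
    return tuple(alpha * ui for ui in u)

u, v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)
zero = (0.0, 0.0, 0.0)

assert add(add(u, v), w) == add(u, add(v, w))      # (i)   associativity
assert add(u, v) == add(v, u)                      # (ii)  commutativity
assert add(u, zero) == u                           # (iii) zero element
assert add(u, scale(-1.0, u)) == zero              # (iv)  negative element
assert scale(2.0, scale(3.0, u)) == scale(6.0, u)  # (v)   compatibility
assert scale(1.0, u) == u                          # (vi)  identity
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # (vii)
assert scale(2.0 + 3.0, u) == add(scale(2.0, u), scale(3.0, u))    # (viii)
print("all eight axioms hold for these sample vectors")
```

Of course, assertions on sample vectors only illustrate the axioms; the general verification is the short algebraic argument referred to in the text.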
Example-2: Let us consider the set of all m × n matrices over the real numbers,
ℝ^(m×n) = {A = [aij] : aij ∈ ℝ, i = 1, . . . , m, j = 1, . . . , n}.
Addition operation (entrywise):
A + B = [aij + bij]
Scalar multiplication (entrywise):
αA = [αaij]
It is easy to verify that the given set, <m×n , with defined operations is a vector space.
Example-3: Let us consider the set of all Lebesgue measurable functions over the domain [0, 1] whose p-th power is integrable,
L^p[0, 1] = { f : ∫₀¹ |f|^p dx < ∞ }.
With pointwise addition, (f + g)(x) = f(x) + g(x), and pointwise scalar multiplication, (αf)(x) = αf(x), this set is also a vector space.
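Membership in L^p[0, 1] can be probed numerically by approximating the integral of |f|^p. The sketch below (my own illustration, using the midpoint rule as an assumed quadrature) suggests that f(x) = 1/√x belongs to L¹[0, 1] (the integral is 2) but not to L²[0, 1], since the approximation of ∫|f|² dx grows without bound as the mesh is refined:

```python
import math

def riemann(h, n):
    """Midpoint-rule approximation of the integral of h over [0, 1]."""
    return sum(h((k + 0.5) / n) for k in range(n)) / n

f = lambda x: 1.0 / math.sqrt(x)

# The integral of |f|^1 converges to 2, so f is in L^1[0,1]:
print(riemann(lambda x: abs(f(x)), 10**6))        # ≈ 2.0

# The integral of |f|^2 = 1/x diverges, so f is NOT in L^2[0,1]:
for n in (10**2, 10**4, 10**6):
    print(riemann(lambda x: f(x)**2, n))          # grows with n
```

A finite sum can never prove divergence, but the unbounded growth as n increases mirrors the divergence of ∫₀¹ dx/x.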
Linear independence:
A subset {u1, u2, · · · , un} of V is said to be linearly dependent if and only if there exists a set of scalars α1, α2, · · · , αn, not all zero, such that
α1 u1 + α2 u2 + · · · + αn un = 0.
If no such set of scalars exists, i.e., the only solution is α1 = α2 = · · · = αn = 0, then the set of vectors {u1, u2, · · · , un} is said to be linearly independent.
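In ℝⁿ, this condition can be tested numerically: the vectors are linearly independent if and only if the matrix having them as rows has full rank. A minimal sketch using NumPy (illustrative, not part of the notes):

```python
import numpy as np

def is_linearly_independent(vectors):
    """True iff the given vectors (rows of a matrix) are linearly independent."""
    A = np.array(vectors, dtype=float)
    return bool(np.linalg.matrix_rank(A) == len(vectors))

# {(1,0,0), (0,1,0), (1,1,0)} is dependent, since u3 = u1 + u2:
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False
# The canonical basis of R^3 is independent:
print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
```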
Basis vectors:
A subset {u1, u2, · · · , un} of V is said to be a basis if the subset is linearly independent and the linear combinations of this subset span the whole set V, i.e., for every v ∈ V there exist some scalars α1, α2, · · · , αn such that v = α1 u1 + α2 u2 + · · · + αn un.
Figure 2: Vector representation in 3-D: (a) components of vector (b) two vectors separated
by an angle θ.
Now, an alternative definition is stated for the dot product, which is equivalent to the one defined in Eq. (1).
u · v = u1 v1 + u2 v2 + u3 v3 (3)
We note that the definition of the dot product given in Eq. (3) accounts for both the length of a vector, shown in Eq. (2), and the angle between vectors, shown in Eq. (1). We will prove the equivalence of the definitions stated in Eq. (1) and Eq. (3).
Problem 1. In 2-D Euclidean space with the canonical basis {e1 = (1, 0), e2 = (0, 1)}, the definitions of the dot product in Eq. (1) and Eq. (3) are equivalent, i.e., |u||v| cos θ = u1 v1 + u2 v2.
Proof-1. Let us consider two vectors in two-dimensional space, as shown in Fig. (3). We note that e1 is the unit vector along the horizontal axis and e2 is the unit vector along the vertical axis.
Figure 3: Two vectors u and v in 2-D, making angles θu and θv with the horizontal axis, with θ the angle between them.
Let θu and θv be the angles made by the vectors u and v with the horizontal axis, as shown in Fig. (3). Therefore, we have

cos θu = u1/|u|, sin θu = u2/|u|, cos θv = v1/|v|, sin θv = v2/|v|. (5)

Since θ = θu − θv, the cosine of the angle between the vectors is

cos θ = cos θu cos θv + sin θu sin θv = (u1 v1 + u2 v2)/(|u||v|). (6)

Multiplying both sides by |u||v| yields

u1 v1 + u2 v2 = |u||v| cos θ. (7)
The result generalizes to 3-D, since any two vectors lie in a two-dimensional subspace. We can also give the following alternative proof of the same result, based on the cosine rule.
Proof-2. We have two vectors u and v with an angle θ between them. Therefore, the vectors u, v, and u − v form a triangle, as shown in Fig. (4). The three sides of the triangle are |u|, |v|, and |u − v|, i.e., √(u1² + u2²), √(v1² + v2²), and √((u1 − v1)² + (u2 − v2)²).
Figure 4: Triangle formed by the vectors u, v, and u − v, with angle θ between u and v.
We now apply the cosine rule to the triangle shown in Fig. (4), i.e.,
|u − v|² = |u|² + |v|² − 2|u||v| cos θ (8)
We can rewrite Eq. (8) as

|u||v| cos θ = (1/2)(|u|² + |v|² − |u − v|²)
            = (1/2)(u1² + u2²) + (1/2)(v1² + v2²) − (1/2)((u1 − v1)² + (u2 − v2)²)
            = u1 v1 + u2 v2.
Thus, the result is proved. This proof is also easily generalized to vectors in 3-D space.
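Both proofs can be spot-checked numerically. The sketch below (my own illustration) follows Proof-1: it computes θ as the difference of the angles the two vectors make with the horizontal axis, then confirms that |u||v| cos θ agrees with u1 v1 + u2 v2:

```python
import math

u = (3.0, 4.0)   # |u| = 5
v = (4.0, 3.0)   # |v| = 5

theta_u = math.atan2(u[1], u[0])     # angle of u with the horizontal axis
theta_v = math.atan2(v[1], v[0])     # angle of v with the horizontal axis
theta = theta_u - theta_v            # angle between u and v

lhs = math.hypot(*u) * math.hypot(*v) * math.cos(theta)  # Eq. (1)
rhs = u[0] * v[0] + u[1] * v[1]                          # Eq. (3)
assert abs(lhs - rhs) < 1e-9         # both sides equal 24 here
```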
Inner product:
We saw the general definition of a vector space in the previous discussion. Now, we generalize the definition of the dot product over a vector space; the generalization is known as the inner product or scalar product.
Let V be a vector space. The inner product is a function from V × V → ℝ, denoted by (u, v), satisfying the following properties:
(i) Linearity: (αu + βv, w) = α(u, w) + β(v, w), ∀α, β ∈ ℝ and ∀u, v, w ∈ V.
(ii) Symmetry: (u, v) = (v, u), ∀u, v ∈ V.
(iii) Positive-definiteness: (u, u) ≥ 0, ∀u ∈ V, and (u, u) = 0 if and only if u = 0.
The definition of the inner product brings out geometric quantities such as length and orthogonality. A vector space equipped with an inner product is called an inner product space; a complete inner product space is called a Hilbert space.
Example-4: There is an inner product on the vector space ℝⁿ. It is defined as

u · v = u1 v1 + u2 v2 + · · · + un vn = Σ_{i=1}^{n} ui vi. (9)

This definition satisfies all the properties of the inner product. It is known as the standard inner product on ℝⁿ.
Example-5: The choice of inner product is not unique. For any given symmetric positive-definite matrix S, another choice of inner product on ℝⁿ is

u · v = Σ_{i=1}^{n} Σ_{j=1}^{n} ui Sij vj. (10)
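A quick numerical sketch of the weighted inner product in Eq. (10), with an example matrix S of my own choosing, checking symmetry, linearity, and positivity on sample vectors:

```python
import numpy as np

# A symmetric positive-definite S (eigenvalues 1 and 3, both > 0):
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def inner(u, v):
    """Weighted inner product (u, v) = sum_ij u_i S_ij v_j = u^T S v."""
    return float(u @ S @ v)

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
w = np.array([0.5, 0.5])

assert inner(u, v) == inner(v, u)                       # symmetry
assert np.isclose(inner(2.0 * u + 3.0 * w, v),
                  2.0 * inner(u, v) + 3.0 * inner(w, v))  # linearity
assert inner(u, u) > 0.0                                # positivity for u != 0
```

Symmetry of S is what makes property (ii) hold; for a non-symmetric S the form uᵀSv would fail the symmetry requirement.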
Figure 5: Vectors u = e1 + 2e2 and v = 2e1 + e2, shown with the canonical basis {e1, e2} and a second basis {e′1, e′2}.
Solution: The given vectors, u = e1 + 2e2 and v = 2e1 + e2, are in the canonical basis. Lengths of the vectors: |u| = √5 and |v| = √5.
Let θ be the angle between the two vectors. Then

cos θ = (u · v)/(|u||v|) = (u1 v1 + u2 v2)/(|u||v|) = 4/5.
Both vectors u and v have the following representation in the bases B1 and B2.
Lengths of the vectors:

|u| = √(u, u) = √( Σ_{i=1}^{2} Σ_{j=1}^{2} u′i Sij u′j ) = √5  and  |v| = √(v, v) = √( Σ_{i=1}^{2} Σ_{j=1}^{2} v′i Sij v′j ) = √5.
Orthogonality:
If the inner product of two non-zero vectors is zero, then the two vectors are said to be orthogonal, i.e., if u and v are orthogonal vectors in an inner product space V, then (u, v) = 0.
Cauchy-Schwarz inequality:
Let V be an inner product space. Then, for all u, v ∈ V,

|(u, v)| ≤ |u||v|, (11)

and the equality holds if and only if u and v are linearly dependent.
Proof. If (u, v) = 0, then the inequality is trivial. Let us assume both vectors u and v are non-zero. The positive-definiteness of the inner product implies, for any α ∈ ℝ,

f(α) := (u − αv, u − αv) = (u, u) − 2α(u, v) + α²(v, v) ≥ 0.

The quadratic function f(α) is minimized at α = (u, v)/(v, v). Thus, we obtain the Cauchy-Schwarz inequality by substituting α = (u, v)/(v, v) in the previous equation. If equality holds in Eq. (11), then we get (u − αv, u − αv) = 0, where α = (u, v)/(v, v). The relation (u − αv, u − αv) = 0 implies u = αv, i.e., u and v are linearly dependent.
The application of the Cauchy-Schwarz inequality to the ℝⁿ and L²[0, 1] spaces yields

( Σ_{i=1}^{n} ui vi )² ≤ ( Σ_{i=1}^{n} ui² ) ( Σ_{i=1}^{n} vi² ),

( ∫₀¹ f(x) g(x) dx )² ≤ ( ∫₀¹ f(x)² dx ) ( ∫₀¹ g(x)² dx ).
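Both instances can be verified numerically. In the sketch below (illustrative; f(x) = x and g(x) = eˣ are test functions of my choosing, and a midpoint Riemann sum stands in for the Lebesgue integral):

```python
import math

# Discrete case on R^3:
u = [1.0, -2.0, 3.0]
v = [4.0, 0.5, 1.0]
lhs = sum(ui * vi for ui, vi in zip(u, v)) ** 2
rhs = sum(ui * ui for ui in u) * sum(vi * vi for vi in v)
assert lhs <= rhs      # 36 <= 241.5

# Integral case on L^2[0,1] with f(x) = x and g(x) = exp(x):
n = 100000
xs = [(k + 0.5) / n for k in range(n)]           # midpoint rule nodes
integral = lambda h: sum(h(x) for x in xs) / n
f, g = (lambda x: x), math.exp
assert integral(lambda x: f(x) * g(x)) ** 2 <= \
       integral(lambda x: f(x)**2) * integral(lambda x: g(x)**2)
```

Here ∫₀¹ x eˣ dx = 1, while ∫₀¹ x² dx · ∫₀¹ e²ˣ dx = (1/3)·(e² − 1)/2 ≈ 1.06, so the inequality holds with a visible margin.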
The triangle inequality follows from the Cauchy-Schwarz inequality. The proof is presented next.
Triangle inequality:
Let V be an inner product space. Then, for all u, v ∈ V,

|u + v| ≤ |u| + |v|.
Proof:
|u + v|² = (u + v, u + v)
         = |u|² + 2(u, v) + |v|²
         ≤ |u|² + 2|(u, v)| + |v|²   (since (u, v) ≤ |(u, v)|)
         ≤ |u|² + 2|u||v| + |v|²    (by the Cauchy-Schwarz inequality)
         = (|u| + |v|)²
We obtain the triangle inequality by taking the square root on both sides. The triangle inequality is essential in defining a normed vector space, where the definition of the length, or norm, of a vector is the main task. Every inner product space is a normed space with the induced norm

‖u‖ := |u| = (u, u)^(1/2).
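The induced norm and the triangle inequality can be checked numerically; a short sketch (my own illustration) using the standard inner product on ℝ³:

```python
import math

def norm(u):
    """Norm induced by the standard inner product: (u, u)^(1/2)."""
    return math.sqrt(sum(ui * ui for ui in u))

u = (1.0, 2.0, 2.0)                          # |u| = 3
v = (4.0, 0.0, 3.0)                          # |v| = 5
s = tuple(ui + vi for ui, vi in zip(u, v))   # u + v = (5, 2, 5)
assert norm(s) <= norm(u) + norm(v)          # sqrt(54) <= 3 + 5
```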
The converse is not true, i.e., a normed vector space is not necessarily an inner product space. Consequently, the class of normed spaces is a subset of the class of general vector spaces, and the class of inner product spaces is a subset of the class of normed vector spaces. This fact is depicted in Fig. (6). Furthermore, only the concept of length is defined on a normed space, whereas both the length and the angle between vectors are defined on an inner product space. In continuum mechanics, we require the definition of both the length of a vector and the angle between vectors. Thus, all the analysis is carried out in inner product spaces.
Figure 6: Venn diagram for vector spaces, normed spaces, and inner product spaces