An Introductory Course on Differentiable Manifolds
Siavash Shahshahani

About this ebook

Based on author Siavash Shahshahani's extensive teaching experience, this volume presents a thorough, rigorous course on the theory of differentiable manifolds. Geared toward advanced undergraduates and graduate students in mathematics, the treatment's prerequisites include a strong background in undergraduate mathematics, including multivariable calculus, linear algebra, elementary abstract algebra, and point set topology. More than 200 exercises offer students ample opportunity to gauge their skills and gain additional insights.
The four-part treatment begins with a single chapter devoted to the tensor algebra of linear spaces and their mappings. Part II brings in neighboring points to explore integrating vector fields, Lie bracket, exterior derivative, and Lie derivative. Part III, involving manifolds and vector bundles, develops the main body of the course. The final chapter provides a glimpse into geometric structures by introducing connections on the tangent bundle as a tool to implant the second derivative and the derivative of vector fields on the base manifold. Relevant historical and philosophical asides enhance the mathematical text, and helpful Appendixes offer supplementary material.

Language: English
Release date: March 23, 2017
ISBN: 9780486820828


    Book preview

    An Introductory Course on Differentiable Manifolds - Siavash Shahshahani


    Part I

    Pointwise

    CHAPTER 1

    Multilinear Algebra


    A. Dual Space

    Let V be a finite dimensional vector space over a field F. The set of linear mappings V → F will be denoted by V*. This set will be endowed with the structure of a vector space over F. Let α and β be elements of V* and r an element of F; then we define α+β and rα by

        (α+β)(x) = α(x) + β(x),    (1.1)

        (rα)(x) = r α(x),    (1.2)

    where x is an arbitrary element of V. With these operations, V* becomes a vector space over F and will be called the dual space to V. Suppose (e1, . . . , en) is a basis for V. We define elements e¹, . . . , eⁿ of V* by their values on the ej, j = 1, . . . , n, as follows:

        eⁱ(ej) = δⁱⱼ,    (1.3)

    where δⁱⱼ denotes the value 1 or 0 depending on whether i=j or i≠j. Note that any element α of V* can be written as a linear combination of e¹, . . . , eⁿ. In fact,

        α = ∑ᵢ α(ei) eⁱ,

    since the value of both sides on an arbitrary basis element ej is the same. Further, {e¹, . . . , eⁿ} is a linearly independent set, for if ∑ᵢ rᵢeⁱ = 0, evaluating both sides on the basis element ej yields rj = 0. Therefore, the ordered set (e¹, . . . , eⁿ) is a basis for V*, called the dual basis for V* relative to (e1, . . . , en). Thus V* has the same dimension as V.
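
    The dual basis can be computed concretely. The following is a minimal numerical sketch (an illustration, not from the text), assuming V = ℝ³ with covectors acting via the dot product: if the basis vectors are stored as the columns of a matrix E, the dual basis covectors are the rows of E⁻¹.

        import numpy as np

        # Basis (e_1, e_2, e_3) of R^3, stored as the columns of E.
        E = np.array([[1.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0],
                      [0.0, 0.0, 1.0]])

        # The dual basis covectors e^i are the rows of E^{-1}.
        E_inv = np.linalg.inv(E)
        assert np.allclose(E_inv @ E, np.eye(3))     # e^i(e_j) = delta^i_j

        # Any covector alpha satisfies alpha = sum_i alpha(e_i) e^i.
        alpha = np.array([2.0, -1.0, 3.0])           # alpha(x) = alpha . x
        coeffs = alpha @ E                           # the values alpha(e_i)
        assert np.allclose(coeffs @ E_inv, alpha)    # reassembled from the dual basis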

    By repeating the operation of dual making, one can look at (V*)*, the so-called double dual of V, usually denoted by V**. The double dual will then have the same dimension as the original space V, and since all linear spaces of the same dimension over a given field are isomorphic, there are isomorphisms between V, V* and V**. But in the case of V and V**, there is a distinguished natural isomorphism, denoted by IV : V → V**, which is given as follows. For each v∈V, the element IV(v) is defined by

        (IV(v))(α) = α(v),    α∈V*.

    It follows from (1.1) and (1.2) that IV(v) is indeed linear, i.e., it is a member of V**. That IV itself is linear follows from the linearity of each α. To show that IV is an isomorphism, it suffices to show that its kernel is {0}, since the domain and target linear spaces are finite dimensional of the same dimension. But α(v)=0 for all α in V* implies that v=0, and the isomorphism is established. Note that the definition of IV was independent of the specific nature of the linear space V or the choice of basis for it. In fact, one can state the following general assertion.

    1. Theorem For any basis (e1, . . . , en) of V, (IV(e1), . . . , IV(en)) is the dual basis in V** relative to the basis (e¹, . . . , eⁿ) for V*.

    PROOF. We must show

        (IV(ei))(eʲ) = δʲᵢ,    i, j = 1, . . . , n.

    This is a consequence of (1.3), since (IV(ei))(eʲ) = eʲ(ei) = δʲᵢ.

    By virtue of the natural isomorphism IV, the space V** is often identified with V. Under this identification, IV(ei) is identified with ei, so that (e1, . . . , en) becomes the dual basis for V** relative to (e¹, . . . , eⁿ).

    B. Tensors

    Let V1, . . . , Vp and W be vector spaces over a field F. A map α : V1×⋯×Vp → W is called p-linear provided that, fixing any p−1 components of (v1, . . . , vp)∈V1×⋯×Vp, α is linear with respect to the remaining component. As we shall see in some of the following examples, operations generally known as products in elementary mathematics are of this nature.

    2. Examples

    (a) Let V be a vector space over a field F. Regard F as a one-dimensional vector space over F. Then the product F×V → V given by

    (r, v)↦rv

    is 2-linear (bilinear).

    (b) Let F be a field. Then the p-fold product F×⋯×F → F given by

    (r1, . . . , rp)↦r1⋯rp

    is p-linear.

    (c) Let V be a real vector space. Then any inner product V×V → ℝ is another example of a bilinear mapping. In general, let β : V×V → F be bilinear and consider a basis (e1, . . . , en) for V. The n×n matrix B=[βᵢⱼ], where βᵢⱼ = β(ei, ej), determines β completely:

        β(u, v) = ∑ᵢ,ⱼ βᵢⱼ uⁱvʲ,    where u = ∑ᵢ uⁱei and v = ∑ⱼ vʲej.

    If B is a symmetric matrix with positive eigenvalues, then β is an inner product. Conversely, any inner product on V is obtained in this manner.
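
    As a small sketch (not from the text, with V = ℝ³ and a hypothetical matrix B), the matrix [βᵢⱼ] determines β, and the stated eigenvalue criterion can be checked numerically:

        import numpy as np

        # beta(u, v) = u^T B v, where B[i, j] = beta(e_i, e_j).
        B = np.array([[2.0, 1.0, 0.0],
                      [1.0, 3.0, 0.0],
                      [0.0, 0.0, 1.0]])

        def beta(u, v):
            return u @ B @ v

        # B is symmetric with positive eigenvalues, so beta is an inner product.
        assert np.allclose(B, B.T)
        assert np.all(np.linalg.eigvalsh(B) > 0)

        u = np.array([1.0, 0.0, 2.0])
        v = np.array([0.0, 1.0, 1.0])
        assert np.isclose(beta(u, v), beta(v, u))    # symmetry
        assert beta(u, u) > 0                        # positivity on a sample vector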

    (d) For a vector space V over a field F, the evaluation pairing V×V*→F, given by (v, α)↦α(v), is bilinear.

    (e) Let F be a field and V1, . . . , Vp; W1, . . . , Wq be vector spaces over F. Suppose a p-linear map α : V1×⋯×Vp → F and a q-linear map β : W1×⋯×Wq → F are given. Then the tensor product

        α ⊗ β : V1 × ⋯ × Vp × W1 × ⋯ × Wq → F

    is defined by

        (α ⊗ β)(v1, . . . , vp, w1, . . . , wq) = α(v1, . . . , vp) β(w1, . . . , wq).

    Note that α⊗β is a (p+q)-linear mapping. Further, it follows from the associativity of the product operation in the field F that ⊗ is associative, hence the product α1⊗⋯⊗αk is unambiguously defined by induction.

    In what follows, V will be a finite dimensional vector space over a field F. The n-fold product V×⋯×V will be denoted by Vn.

    3. Definition

    (a) A p-linear map Vp → F will be called a covariant p-tensor, or a tensor of type (p,0), on V.

    (b) A q-linear map (V*)q → F will be called a contravariant q-tensor, or a tensor of type (0,q), on V.

    (c) A (p+q)-linear map Vp × (V*)q → F will be called a mixed (p,q)-tensor, or a tensor of type (p,q), on V.

    4. Examples An element of V* is a covariant 1-tensor on V. In view of the natural isomorphism IV, any member of V may be regarded as a contravariant 1-tensor on V. The evaluation pairing (Example 2d) is a (1,1)-tensor on V. Inner products are examples of covariant 2-tensors.

    We use the symbols Lp(V), Lq(V) and Lpq(V), respectively, to denote the sets of (p,0)-, (0,q)- and (p,q)-tensors on V. Under functional addition and multiplication by elements of the field F, each of these becomes a vector space over F. The dimensions of these spaces are, respectively, n^p, n^q and n^(p+q), as the following will imply.

    5. Basis for the Space of Tensors Let (e1, . . . , en) be a basis for V. Then the following are basis elements for the spaces of tensors.

    (a) For Lp(V):    e^{i1} ⊗ ⋯ ⊗ e^{ip},    1 ≤ i1, . . . , ip ≤ n.

    (b) For Lq(V):    e_{j1} ⊗ ⋯ ⊗ e_{jq} (identifying each ej with IV(ej) ∈ V**),    1 ≤ j1, . . . , jq ≤ n.

    (c) For Lpq(V):    e^{i1} ⊗ ⋯ ⊗ e^{ip} ⊗ e_{j1} ⊗ ⋯ ⊗ e_{jq},    1 ≤ i1, . . . , ip, j1, . . . , jq ≤ n.

    PROOF. Note that by virtue of Example 2e, the displayed tensors are actually elements of the stated spaces. We prove the third case, which includes the other two. To show linear independence, suppose that

        ∑ c^{j1⋯jq}_{i1⋯ip} e^{i1} ⊗ ⋯ ⊗ e^{ip} ⊗ e_{j1} ⊗ ⋯ ⊗ e_{jq} = 0.

    By applying the two sides to (e_{i1}, . . . , e_{ip}, e^{j1}, . . . , e^{jq}), we obtain c^{j1⋯jq}_{i1⋯ip} = 0, and linear independence is established. Further, any α ∈ Lpq(V) can be written as

        α = ∑ α(e_{i1}, . . . , e_{ip}, e^{j1}, . . . , e^{jq}) e^{i1} ⊗ ⋯ ⊗ e^{ip} ⊗ e_{j1} ⊗ ⋯ ⊗ e_{jq},

    which can be verified by applying both sides to (e_{i1}, . . . , e_{ip}, e^{j1}, . . . , e^{jq}).

    By convention, we let L⁰V = L₀V = F.

    6. Change of Basis

    The bases introduced above for the spaces of tensors, as well as the resulting components of the tensors, depend on the original choice of basis for the linear space. We are now going to investigate how a linear change of basis for the space affects the value of tensor components. We take V to be an n-dimensional vector space over F. It will be convenient to write n×n matrices with entries from F as A = [aⁱⱼ], where the superscript denotes the row index and the subscript indicates the column of the matrix entry. Suppose two bases B = (e1, . . . , en) and B̄ = (ē1, . . . , ēn) are given for V, related as

        ēj = ∑ᵢ aⁱⱼ ei.

    Thus the components of ēj with respect to the basis B are the entries of the jth column of A. Corresponding to B and B̄, we have the dual bases B* = (e¹, . . . , eⁿ) and B̄* = (ē¹, . . . , ēⁿ) for V*. We will first investigate the linear relationship between these two bases. We write

        ēⁱ = ∑ⱼ bʲᵢ eʲ.

    Therefore, the components of ēⁱ with respect to the basis B* are the entries of the ith column of the matrix B = [bⁱⱼ]. To identify B, we note that

        δⁱⱼ = ēⁱ(ēj) = ∑ₖ bᵏᵢ eᵏ(∑ₗ aˡⱼ el) = ∑ₖ bᵏᵢ aᵏⱼ.

    Therefore, the matrix B is the inverse of the transpose of the matrix A:

        B⁻¹ = Aᵀ.

    Now let α be a (p,q)-tensor on V. With respect to the above bases, the following two representations for α are obtained:

        α = ∑ α^{j1⋯jq}_{i1⋯ip} e^{i1} ⊗ ⋯ ⊗ e^{ip} ⊗ e_{j1} ⊗ ⋯ ⊗ e_{jq} = ∑ ᾱ^{j1⋯jq}_{i1⋯ip} ē^{i1} ⊗ ⋯ ⊗ ē^{ip} ⊗ ē_{j1} ⊗ ⋯ ⊗ ē_{jq},

    where ᾱ^{j1⋯jq}_{i1⋯ip} = α(ē_{i1}, . . . , ē_{ip}, ē^{j1}, . . . , ē^{jq}). Using (1.10), we have

        ᾱ^{j1⋯jq}_{i1⋯ip} = α(∑ a^{k1}_{i1} e_{k1}, . . . , ∑ a^{kp}_{ip} e_{kp}, ∑ b^{l1}_{j1} e^{l1}, . . . , ∑ b^{lq}_{jq} e^{lq}).

    This is equal to

        ∑ a^{k1}_{i1} ⋯ a^{kp}_{ip} b^{l1}_{j1} ⋯ b^{lq}_{jq} α^{l1⋯lq}_{k1⋯kp}.

    Thus we have obtained the desired formula for the change of tensor components under a linear change of basis:

        ᾱ^{j1⋯jq}_{i1⋯ip} = ∑ a^{k1}_{i1} ⋯ a^{kp}_{ip} b^{l1}_{j1} ⋯ b^{lq}_{jq} α^{l1⋯lq}_{k1⋯kp}.    (1.13)

    Classically, a (p,q)-tensor was in fact defined as an assignment, to each basis, of an array of n^(p+q) components which transform under a linear change of basis according to formula (1.13).

    (a) Special case (p=1, q=0). For a covariant 1-tensor

        α = ∑ᵢ αᵢ eⁱ = ∑ᵢ ᾱᵢ ēⁱ,

    we obtain

        ᾱᵢ = ∑ₖ aᵏᵢ αₖ.

    (b) Special case (p=0, q=1). Consider a contravariant 1-tensor, or by virtue of the natural isomorphism IV, an element x of V:

        x = ∑ⱼ xʲ ej = ∑ⱼ x̄ʲ ēj.

    In this case, we have

        x̄ʲ = ∑ₗ bˡⱼ xˡ.
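
    The transformation rule (1.13) is mechanical to check numerically. Here is a short sketch (an illustration under the column conventions above, not from the text) for a (1,1)-tensor on ℝ³, using B = (Aᵀ)⁻¹; the final assertion checks that the scalar α(v, ξ) is basis-independent.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 3

        # Column j of A holds the components of the new basis vector e-bar_j;
        # the dual change-of-basis matrix satisfies B^{-1} = A^T.
        A = rng.standard_normal((n, n))
        B = np.linalg.inv(A.T)

        # A (1,1)-tensor with components alpha[i, j] = alpha(e_i, e^j).
        alpha = rng.standard_normal((n, n))

        # Formula (1.13): alpha-bar_i^j = sum_{k,l} a^k_i b^l_j alpha_k^l.
        alpha_bar = np.einsum('ki,lj,kl->ij', A, B, alpha)

        # Invariance check: evaluating alpha on a vector v and covector xi
        # gives the same scalar in either basis.
        v, xi = rng.standard_normal(n), rng.standard_normal(n)
        s_old = np.einsum('ij,i,j->', alpha, v, xi)
        s_new = np.einsum('ij,i,j->', alpha_bar, np.linalg.inv(A) @ v, A.T @ xi)
        assert np.isclose(s_old, s_new)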

    7. Functoriality

    Let V and W be vector spaces over a field F, and suppose f : V → W is a linear map. For each non-negative integer p, a map Lpf : LpW → LpV is defined as follows. If p = 0, we let L⁰f be the identity map of F. For p > 0, suppose α∈LpW and v1, ⋯, vp∈V; then

        ((Lpf)(α))(v1, . . . , vp) = α(f(v1), . . . , f(vp)).    (1.16)

    That (Lpf)(α) ∈ LpV follows from the linearity of f and the fact that α ∈ LpW. The linearity of Lpf follows from the definition of linear space operations in the space of tensors. The following two properties are straightforward consequences of the definition and establish Lp as a contravariant functor.

    (a) For any vector space V and any non-negative integer p,

        Lp(idV) = id on LpV.

    (b) For linear maps f : V → W and g : U → V, and any non-negative integer p,

        Lp(f∘g) = Lp(g) ∘ Lp(f).

    Of course, L¹V = V*. The induced linear map L¹f is denoted by f*. Note that by definition, Lq(V*) = LqV. For a linear map f : V → W, we denote Lq(f*) : LqV → LqW by Lqf. The following properties follow from (a) and (b) above and are summarized by saying that Lq is a covariant functor.

    (c) For any vector space V and non-negative integer q,

        Lq(idV) = id on LqV.

    (d) For linear maps f : V → W and g : U → V, and any non-negative integer q,

        Lq(f∘g) = Lq(f) ∘ Lq(g).
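
    As an illustration of the contravariant functor Lp (a sketch, not from the text), one can represent a covariant p-tensor on ℝⁿ by its component array and implement Lpf as a pullback along the matrix of f; property (b) then becomes a testable identity.

        import numpy as np

        def pullback(alpha, F):
            """(L^p f)(alpha)(v_1, ..., v_p) = alpha(F v_1, ..., F v_p):
            contract the matrix F into each slot of the component array."""
            for _ in range(alpha.ndim):
                alpha = np.tensordot(alpha, F, axes=([0], [0]))
            return alpha

        rng = np.random.default_rng(1)
        F = rng.standard_normal((3, 3))       # matrix of f : V -> W
        G = rng.standard_normal((3, 3))       # matrix of g : U -> V
        alpha = rng.standard_normal((3, 3))   # a covariant 2-tensor on W

        # Property (b): L^p(f o g) = L^p(g) o L^p(f).
        assert np.allclose(pullback(alpha, F @ G),
                           pullback(pullback(alpha, F), G))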

    C. Anti-symmetric Tensors

    The so-called anti-symmetric tensors are among the most powerful tools in the study of geometric structures. As we shall see in the following section, these are closely related to the concepts of volume and orientation in the case of real vector spaces.

    We recall some elementary facts about the group Sn of permutations on n symbols {1,⋯,n}. A transposition is a permutation that exchanges two symbols and leaves the other symbols fixed. Any permutation σ∈Sn can be written as a composition of transpositions, σ = τ1∘⋯∘τk, where k is not unique but its parity (even- or odd-ness) is determined by σ. Thus a permutation σ is called even or odd depending on whether k is even or odd. We write ε(σ) = +1 or ε(σ) = −1, respectively, if σ is even or odd. The map ε, called the sign, is a homomorphism from Sn onto the two-element multiplicative group {+1, −1}; thus ε(σ1∘σ2) = ε(σ1)ε(σ2). The set of even permutations forms a subgroup of index 2 in Sn.
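
    A sketch of the sign homomorphism in Python (not from the text): the parity of the inversion count equals the parity of the number of transpositions, and ε(σ1∘σ2) = ε(σ1)ε(σ2) can be spot-checked.

        def sign(sigma):
            """Sign of a permutation of {0, ..., n-1}, via the inversion count,
            whose parity equals that of any decomposition into transpositions."""
            n = len(sigma)
            inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                             if sigma[i] > sigma[j])
            return -1 if inversions % 2 else 1

        # The sign is a homomorphism: sign(s o t) = sign(s) * sign(t).
        s, t = (1, 0, 2), (2, 0, 1)
        s_after_t = tuple(s[t[i]] for i in range(3))
        assert sign(s_after_t) == sign(s) * sign(t)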

    8. Definition Let V be a finite-dimensional linear space over a field F and let p be a natural number. An element α∈LpV is called anti-symmetric (or alternating) if for every σ∈Sp and any u1, . . . , up in V,

        α(u_{σ(1)}, . . . , u_{σ(p)}) = ε(σ) α(u1, . . . , up).

    We denote the set of anti-symmetric elements of LpV by ⋀pV. By convention, ⋀⁰V = L⁰V = F.

    9. Elementary Properties of Anti-symmetric Tensors

    (a) A tensor α∈LpV is anti-symmetric if and only if for each transposition τ and any u1, . . . , up in V,

        α(u_{τ(1)}, . . . , u_{τ(p)}) = −α(u1, . . . , up).

    PROOF. The statement follows from the facts that ε(τ) = −1 for any transposition τ, that any permutation is a composition of transpositions, and that ε is a homomorphism.

    (b) A tensor α∈LpV is anti-symmetric if and only if it has the property that α(u1, . . . , up) = 0 whenever ui = uj for some i≠j.

    PROOF. Suppose the property holds and u1, . . . , up are elements of V. Taking i<j and expanding α(u1, . . . , ui+uj, . . . , ui+uj, . . . , up), with the sum ui+uj occupying both the ith and jth slots, we obtain

        0 = α(u1, . . . , ui, . . . , ui, . . . , up) + α(u1, . . . , ui, . . . , uj, . . . , up) + α(u1, . . . , uj, . . . , ui, . . . , up) + α(u1, . . . , uj, . . . , uj, . . . , up).

    The first and the last terms above vanish by the property, and the result follows from (a).

    Conversely, suppose that for i<j, we have ui=uj=u and consider the transposition that switches i and j. Applying (a) we obtain

    α(u1, . . . , ui, . . . , uj, . . . , up) = −α(u1, . . . , uj, . . . , ui, . . . , up)

    Since the field characteristic was assumed to be 0, we have 1 ≠ − 1, and

    α(u1, . . . , u , . . . , u , . . . , up) = 0

    (c) Let α∈⋀pV. If {u1, . . . , up}⊂V is linearly dependent, then α(u1, . . . , up) = 0.

    PROOF. We write one ui as a linear combination of the rest and expand by p-linearity; each resulting term contains a repeated argument and vanishes by (b).

    10. Basis for ⋀pV

    Let (e1, . . . , en) be a basis for V and consider an element α∈⋀pV. If p>n, then α=0 by 9c above. For 0<p≤n, anti-symmetry implies that α is completely determined by its values on p-tuples (e_{i1}, . . . , e_{ip}) with i1<⋯<ip. Therefore, to define an element of ⋀pV, it suffices to specify its values on such p-tuples. In particular, for each multi-superscript (i1⋯ip) with i1<⋯<ip, we define an element e^{i1⋯ip} of ⋀pV by giving its value as

        e^{i1⋯ip}(e_{j1}, . . . , e_{jp}) = 1 if (j1, . . . , jp) = (i1, . . . , ip), and 0 for any other j1<⋯<jp.    (1.22)

    There are (n choose p) such elements in ⋀pV. One may extend the definition to an arbitrary multi-superscript (i1⋯ip) by setting e^{i1⋯ip} = 0 if there is repetition in superscripts, and by multiplying by ε(σ), where σ is the permutation that arranges i1, . . . , ip in increasing order.

    We can now state and prove a couple of very useful propositions.

    (a) Let (e1, . . . , en) be a basis for V. Then for 0<p≤n, a basis for ⋀pV is given by the elements e^{i1⋯ip}, where i1<⋯<ip; thus dim ⋀pV = (n choose p). Further, dim ⋀⁰V = 1 and dim ⋀pV = 0 for p>n.

    PROOF. By earlier convention, ⋀⁰V = L⁰V is the underlying field. The case p>n was treated at the beginning of the previous paragraph. For 0<p≤n, suppose that

        ∑_{i1<⋯<ip} c_{i1⋯ip} e^{i1⋯ip} = 0.

    Applying the two sides to (e_{j1}, . . . , e_{jp}), where j1<⋯<jp, yields c_{j1⋯jp} = 0, and linear independence is established. Further, we have the representation

        α = ∑_{i1<⋯<ip} α(e_{i1}, . . . , e_{ip}) e^{i1⋯ip}

    for α∈⋀pV, which can be verified by applying both sides to (e_{j1}, . . . , e_{jp}), j1<⋯<jp.
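
    The count of basis elements can be made concrete (a small sketch, not from the text): the basis multi-indices of ⋀pV are exactly the strictly increasing p-tuples drawn from {1, . . . , n}.

        from math import comb
        from itertools import combinations

        n, p = 4, 2
        # Strictly increasing multi-indices i1 < ... < ip label the basis e^{i1...ip}.
        multi_indices = list(combinations(range(1, n + 1), p))
        assert len(multi_indices) == comb(n, p)   # dim of Lambda^p V is C(n, p)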

    Let dim V=n. Any non-zero element of ⋀nV is called a volume element for V and serves as a basis for this one-dimensional linear space. The following amplifies 9c in the case p=n.

    (b) Let dim V=n and ω be a volume element for V. Then a subset {u1 , . . . , un}⊂V is linearly dependent if and only if ω(u1 , . . . , un)=0.

    PROOF. As shown in 9c, linear dependence of {u1, . . . , un} implies that ω(u1, . . . , un) = 0. Conversely, if {u1, . . . , un} is linearly independent, then it is a basis for the n-dimensional space V. Therefore, ω(u1, . . . , un) = 0 would imply that ω vanishes on any n-tuple of elements of V, i.e., ω = 0, contradicting the assumption that ω is a volume element.

    11. Functoriality

    Let f : V → W be a linear map. We recall the definition of Lpf : LpW → LpV in (1.16). If α∈⋀pW, it follows that Lpf(α)∈⋀pV. Thus, denoting the restriction of Lpf to ⋀pW by ⋀pf, we obtain a linear map

        ⋀pf : ⋀pW → ⋀pV

    given by

        (⋀pf)(α)(u1, . . . , up) = α(f(u1), . . . , f(up)).

    For p > min{dim V, dim W}, ⋀pf is the zero map, and for p = 0 it is the identity map of F. From Subsection 7, we obtain the following by restriction.

    (a) For any linear space V and any non-negative integer p,

        ⋀p(idV) = id on ⋀pV.    (1.25)

    (b) For linear maps f : V → W and g : U → V, and any non-negative integer p,

        ⋀p(f∘g) = ⋀p(g) ∘ ⋀p(f).    (1.26)

    Note that ⋀¹V = L¹V and ⋀¹f = L¹f = f*.

    12. Determinants

    An important consequence of the one-dimensionality of ⋀nV is that for a linear map f : V → V, the induced linear map ⋀nf : ⋀nV → ⋀nV is multiplication by a (fixed) element of the field. This element we call the determinant of f and denote it by det f. Thus,

        (⋀nf)(ω) = (det f) ω    for all ω∈⋀nV.    (1.28)

    In the next section on real vector spaces we will give an incisive geometric interpretation of the determinant, but for now we concentrate on developing the formal algebraic properties of the concept.

    13. Elementary Properties of the Determinant

    (a) We have

        det(idV) = 1,    det(g∘f) = (det g)(det f).

    These are consequences of (1.25) and (1.26).

    (b) A linear map f : V → V is invertible if and only if det f ≠ 0. In this case, det f⁻¹ = (det f)⁻¹.

    PROOF. The second statement is a consequence of 13a. For the first, let (e1, . . . , en) be a basis and ω a volume element for V. By the definition of the determinant,

        (det f) ω(e1, . . . , en) = ((⋀nf)(ω))(e1, . . . , en) = ω(f(e1), . . . , f(en)).

    By 10b, we have ω(e1, . . . , en) ≠ 0; therefore det f ≠ 0 if and only if the set {f(e1), . . . , f(en)} is linearly independent, i.e., if and only if f is invertible.

    (c) Expansion of the Determinant

    Suppose that the matrix of a linear map f : V → V relative to a basis for V is A = [aⁱⱼ]; then

        det f = ∑_{σ∈Sn} ε(σ) a^{σ(1)}_1 ⋯ a^{σ(n)}_n.    (1.29)

    PROOF. Let (e1, . . . , en) be a basis for V, and consider the volume element e¹⋯ⁿ for V (see (1.22) for the definition of e¹⋯ⁿ). Thus e¹⋯ⁿ(e1, . . . , en) = 1. Now using (1.28),

        det f = (det f) e¹⋯ⁿ(e1, . . . , en) = e¹⋯ⁿ(f(e1), . . . , f(en)) = ∑_{i1,…,in} a^{i1}_1 ⋯ a^{in}_n e¹⋯ⁿ(e_{i1}, . . . , e_{in}).

    If there is any repetition among i1, . . . , in, we get e¹⋯ⁿ(e_{i1}, . . . , e_{in}) = 0; otherwise, (e_{i1}, . . . , e_{in}) represents a permutation σ of (e1, . . . , en) with ik = σ(k), and e¹⋯ⁿ(e_{i1}, . . . , e_{in}) = ε(σ). The formula follows.

    Note that as σ ranges over Sn in the sum (1.29), σ⁻¹ also ranges over Sn. Moreover, ε(σ) = ε(σ⁻¹); therefore one may also write

        det f = ∑_{σ∈Sn} ε(σ) a^1_{σ(1)} ⋯ a^n_{σ(n)}.    (1.30)

    An interpretation of this result is that the determinant of the transpose of a matrix is equal to the determinant of the original matrix. Equivalently, in (1.29), the products of matrix entries are picked consecutively from columns 1 to n, while in (1.30), the products are taken consecutively from rows 1 to n. All familiar formulas about the expansion of the determinant according to column or row follow from (1.29) and (1.30). For these facts and a generalization, see Exercise 1.6 at the end of the chapter.
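
    Formulas (1.29) and (1.30) translate directly into code. A brute-force sketch (not from the text; exponential in n, for illustration only):

        import numpy as np
        from itertools import permutations

        def sign(sigma):
            inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                      if sigma[i] > sigma[j])
            return -1 if inv % 2 else 1

        def det_by_columns(A):
            """Expansion (1.29): sum over sigma of eps(sigma) * prod_j A[sigma(j), j]."""
            n = A.shape[0]
            return sum(sign(s) * np.prod([A[s[j], j] for j in range(n)])
                       for s in permutations(range(n)))

        A = np.random.default_rng(2).standard_normal((4, 4))
        assert np.isclose(det_by_columns(A), np.linalg.det(A))     # matches built-in
        assert np.isclose(det_by_columns(A), det_by_columns(A.T))  # (1.30): det A^T = det A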

    D. Real Linear Spaces

    In this section we work in ℝⁿ with the standard basis (e1, . . . , en); a typical element will be represented as x = (x¹, . . . , xⁿ). We will first try to find an interpretation for the elements e^{i1⋯ip} of ⋀pℝⁿ. Let us look at the cases p = 1, 2, 3, where x, y and z are elements of ℝⁿ:

        eⁱ(x) = xⁱ,    (1.31)

        e^{ij}(x, y) = xⁱyʲ − xʲyⁱ = det [ xⁱ  yⁱ ; xʲ  yʲ ],    (1.32)

        e^{ijk}(x, y, z) = det [ xⁱ  yⁱ  zⁱ ; xʲ  yʲ  zʲ ; xᵏ  yᵏ  zᵏ ].    (1.33)

    From elementary analytic geometry, we know that the absolute values of the right-hand sides of the above have, respectively, the following interpretations: the length of the projection of x on the i-axis, the area of the projection of the parallelogram determined by x and y on the (i, j)-plane, and the volume of the projection of the parallelepiped determined by x, y and z on the (i, j, k)-space. Further, the signs of the above have the following meaning. In (1.31), xi is positive or negative depending on whether the projection of x points in the same or against the direction of ei. In (1.32), the determinant is positive or negative depending on whether the projection of the ordered pair (x, y) is right-handed or left-handed relative to the ordered pair (ei, ej). Likewise, the sign of the determinant in (1.33) signifies whether the projection of the ordered triple (x, y, z) on (i, j, k)-space has the same or opposite handedness as the ordered triple (ei, ej, ek). (See Figure 1.)

    Based on the above intuition, we generalize the notions of volume and orientation to arbitrary real linear spaces. Let V be a real linear space of dimension n, and consider two ordered bases B = (e1, . . . , en) and B̄ = (ē1, . . . , ēn) for this space. There is a unique linear map f : V → V with f(ej) = ēj, j = 1, . . . , n; f is invertible as it carries a basis to a basis, so det f ≠ 0. We say that B̄ has the same orientation as B if and only if det f > 0. This is an equivalence relation and breaks up the set of ordered bases for V into two classes, each called an orientation for V. An equivalent approach is the following. For each ordered basis B = (e1, . . . , en), consider the corresponding volume element e¹⋯ⁿ as in (1.22). One checks that B̄ has the same orientation as B if and only if the corresponding volume elements are positive multiples of one another, i.e., if and only if the determinant of the relating linear map is positive. Thus, the non-zero elements of the one-dimensional space ⋀nV (i.e., the volume elements) break up into two classes, each signifying one of the two orientations of V.

    Continuing as above with the real linear space V of dimension n, we consider a volume element ω on V. Let (a1, . . . , an) be an ordered n-tuple of elements of V. We define the n-dimensional parallelepiped determined by (a1, . . . , an) to be the set

        P(a1, . . . , an) = {∑ᵢ tⁱai : 0 ≤ tⁱ ≤ 1}.

    FIGURE 1. Orientation

    The volume (relative to ω) of P(a1, . . . , an) is defined as follows:

        vol P(a1, . . . , an) = |ω(a1, . . . , an)|.

    This definition is compatible with the discussion at the beginning of the section. We let (u1, . . . , un) be a basis for V with ω(u1, . . . , un) = 1, i.e., we take P(u1, . . . , un) to be a unit parallelepiped relative to ω. Consider the linear map f : V → V that sends uj to aj for each j = 1, . . . , n. Then

        vol P(a1, . . . , an) = |ω(f(u1), . . . , f(un))| = |det f| |ω(u1, . . . , un)| = |det f|.

    Note that the matrix of f relative to the basis (u1, . . . , un) has a1, . . . , an as columns.
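
    Relative to the standard volume element e¹⋯ⁿ of ℝⁿ, the volume is therefore |det| of the matrix whose columns are the aj. A quick sketch (not from the text):

        import numpy as np

        # Volume of P(a_1, a_2, a_3) in R^3 relative to the standard volume element.
        a1 = np.array([1.0, 0.0, 0.0])
        a2 = np.array([1.0, 2.0, 0.0])
        a3 = np.array([0.0, 1.0, 3.0])
        vol = abs(np.linalg.det(np.column_stack([a1, a2, a3])))
        assert np.isclose(vol, 6.0)   # upper-triangular case: 1 * 2 * 3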

    E. Product Structure

    In Example 2e, we defined the tensor product α⊗β of α∈LpV and β∈LqV to be an element of Lp+qV. It is not the case, however, that if α∈⋀pV and β∈⋀qV, then α⊗β is anti-symmetric, i.e., an element of ⋀p+qV. Here we modify ⊗ by a sort of anti-symmetric averaging to obtain an anti-symmetric tensor.

    Let α∈⋀pV and β∈⋀qV. Then the wedge product α∧β will be a (p+q)-covariant tensor defined by giving its value on an arbitrary (p+q)-tuple (u1, . . . , up+q) of elements of V as follows:

        (α∧β)(u1, . . . , up+q) = (1/(p!q!)) ∑_{σ∈Sp+q} ε(σ) α(u_{σ(1)}, . . . , u_{σ(p)}) β(u_{σ(p+1)}, . . . , u_{σ(p+q)}).

    As we shall soon see, the choice of coefficient before the summation above makes the associative law come out true for the wedge product and allows for computational simplifications.
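
    The definition can be implemented directly on component arrays. The following sketch (not from the text) uses the 1/(p!q!) coefficient, the convention consistent with property 14e below; it is slow but makes the anti-symmetric averaging explicit.

        import numpy as np
        from itertools import permutations
        from math import factorial

        def sign(sigma):
            inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                      if sigma[i] > sigma[j])
            return -1 if inv % 2 else 1

        def wedge(alpha, beta):
            """(alpha ^ beta) on component arrays, with the 1/(p! q!) coefficient."""
            p, q = alpha.ndim, beta.ndim
            n = alpha.shape[0]
            out = np.zeros((n,) * (p + q))
            for idx in np.ndindex(*out.shape):
                total = 0.0
                for s in permutations(range(p + q)):
                    args = [idx[s[i]] for i in range(p + q)]
                    total += sign(s) * alpha[tuple(args[:p])] * beta[tuple(args[p:])]
                out[idx] = total / (factorial(p) * factorial(q))
            return out

        a = np.array([1.0, 2.0, 0.0])   # covectors (1-tensors) on R^3
        b = np.array([0.0, 1.0, 3.0])
        c = np.array([1.0, 0.0, 1.0])
        assert np.allclose(wedge(a, b), np.outer(a, b) - np.outer(b, a))
        assert np.allclose(wedge(wedge(a, b), c), wedge(a, wedge(b, c)))  # associativity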

    14. Elementary Properties of the Wedge Product

    (a) If α∈⋀pV and β∈⋀qV, then α∧β∈⋀p+qV.

    PROOF. Let ρ∈Sp+q. Then

        (α∧β)(u_{ρ(1)}, . . . , u_{ρ(p+q)}) = (1/(p!q!)) ∑_{σ∈Sp+q} ε(σ) α(u_{ρσ(1)}, . . . , u_{ρσ(p)}) β(u_{ρσ(p+1)}, . . . , u_{ρσ(p+q)}).

    For a fixed ρ∈Sp+q, the element ρσ takes on all the values in the group Sp+q as σ does, so we may consider the above summation as running over τ = ρσ, with ε(σ) = ε(ρ)ε(τ). It follows that

        (α∧β)(u_{ρ(1)}, . . . , u_{ρ(p+q)}) = ε(ρ) (α∧β)(u1, . . . , up+q).

    (b) The wedge product is a bilinear map ⋀pV × ⋀qV → ⋀p+qV.

    PROOF. This is an immediate consequence of the definition of the wedge product.

    (c) The wedge product is associative.

    PROOF. We take α∈⋀pV, β∈⋀qV, γ∈⋀rV and u1, . . . , up+q+r in V. Then the value of ((α∧β)∧γ)(u1, . . . , up+q+r) is equal to

        (1/((p+q)!r!)) ∑_{σ∈Sp+q+r} ε(σ) (α∧β)(u_{σ(1)}, . . . , u_{σ(p+q)}) γ(u_{σ(p+q+1)}, . . . , u_{σ(p+q+r)}).

    For each σ∈Sp+q+r, we let Sσ be the subgroup of Sp+q+r consisting of permutations that leave each of σ(p+q+1), . . . , σ(p+q+r) fixed. This is isomorphic to Sp+q. The expression inside the summation above is then equal to

        (1/(p!q!)) ∑_{ρ∈Sσ} ε(ρσ) α(u_{ρσ(1)}, . . . , u_{ρσ(p)}) β(u_{ρσ(p+1)}, . . . , u_{ρσ(p+q)}) γ(u_{ρσ(p+q+1)}, . . . , u_{ρσ(p+q+r)}).

    Now for given σ, τ∈Sp+q+r we consider the equation ρσ=τ, where ρ∈Sσ. For i>p+q, the definition of Sσ implies that τ(i)=σ(i), so for a given τ, the number of σ's that can satisfy this equation is (p+q)!. On the other hand, for given τ and σ, a unique ρ satisfies this equation; therefore the value of the double summation above is

        (p+q)! ∑_{τ∈Sp+q+r} ε(τ) α(u_{τ(1)}, . . . , u_{τ(p)}) β(u_{τ(p+1)}, . . . , u_{τ(p+q)}) γ(u_{τ(p+q+1)}, . . . , u_{τ(p+q+r)}).

    It follows then that ((α∧β)∧γ)(u1, . . . , up+q+r) is equal to

        (1/(p!q!r!)) ∑_{τ∈Sp+q+r} ε(τ) α(u_{τ(1)}, . . . , u_{τ(p)}) β(u_{τ(p+1)}, . . . , u_{τ(p+q)}) γ(u_{τ(p+q+1)}, . . . , u_{τ(p+q+r)}).

    The associativity of multiplication in the field then shows that the alternative evaluation (α∧(β∧γ))(u1, . . . , up+q+r) leads to the same expression.

    As an outcome of the above calculation, the expression α∧β∧γ is meaningful and the following formula holds:

        (α∧β∧γ)(u1, . . . , up+q+r) = (1/(p!q!r!)) ∑_{σ∈Sp+q+r} ε(σ) α(u_{σ(1)}, . . . , u_{σ(p)}) β(u_{σ(p+1)}, . . . , u_{σ(p+q)}) γ(u_{σ(p+q+1)}, . . . , u_{σ(p+q+r)}).

    Inductively, the expression α1∧⋯∧αk is unambiguously defined for αi∈⋀pi V, i = 1, . . . , k, and the general evaluation formula appears in (d) below.

    (d) Let αi∈⋀pi V, i = 1, . . . , k, and uj∈V, j = 1, . . . , p1+⋯+pk; then

        (α1∧⋯∧αk)(u1, . . . , u_{p1+⋯+pk}) = (1/(p1!⋯pk!)) ∑_{σ∈S_{p1+⋯+pk}} ε(σ) α1(u_{σ(1)}, . . . , u_{σ(p1)}) ⋯ αk(u_{σ(p1+⋯+p(k−1)+1)}, . . . , u_{σ(p1+⋯+pk)}).

    (e) Let αi∈V*, i = 1, . . . , k, and u1, . . . , uk∈V; then

        (α1∧⋯∧αk)(u1, . . . , uk) = det [αi(uj)].

    PROOF. This is the special case of (d) for p1=⋯=pk=1, by the expansion (1.29) of the determinant.

    (f) For α∈⋀pV and β∈⋀qV, one has

        β∧α = (−1)^{pq} α∧β.

    PROOF. By bilinearity, it suffices to verify the formula for wedge products of elements of a dual basis. Note from the definition of the wedge product that eⁱ∧eʲ = −eʲ∧eⁱ. Therefore, in order to transform

        e^{i1}∧⋯∧e^{ip}∧e^{j1}∧⋯∧e^{jq}

    into

        e^{j1}∧⋯∧e^{jq}∧e^{i1}∧⋯∧e^{ip},

    each of the q factors e^{j} must move, in order, p places to the left by making consecutive transpositions. Since there are q such factors, it takes pq transpositions in all, whence the sign (−1)^{pq}.

    (g) If p is odd and α∈⋀pV, then α∧α = 0.

    (h) Let (e1, . . . , en) be a basis for V with dual basis (e¹, . . . , eⁿ). Then

        e^{i1}∧⋯∧e^{ip} = e^{i1⋯ip}.

    PROOF. The two sides have the same effect on any p-tuple (e_{j1}, . . . , e_{jp}) of basis elements, by (e) and the definition (1.22).

    Let f : V → W be a linear map. Recalling the induced linear maps ⋀pf : ⋀pW → ⋀pV, it is a routine matter to check that the induced maps preserve the wedge product.

    (i) Let f : V → W be a linear map, α∈⋀pW and β∈⋀qW. Then

        ⋀p+q f (α∧β) = (⋀pf)(α) ∧ (⋀qf)(β).

    PROOF. Let u1, . . . , up+q be elements of V. Then by definition of the induced map, (⋀p+q f (α∧β))(u1, . . . , up+q) is equal to (α∧β)(f(u1), . . . , f(up+q)), which in turn is equal to

        (1/(p!q!)) ∑_{σ∈Sp+q} ε(σ) α(f(u_{σ(1)}), . . . , f(u_{σ(p)})) β(f(u_{σ(p+1)}), . . . , f(u_{σ(p+q)})) = ((⋀pf)(α) ∧ (⋀qf)(β))(u1, . . . , up+q).

    (j) Graded Exterior Algebra ⋀*V

    Let dim V = n. We gather all the ⋀pV, 0 ≤ p ≤ n, into one linear space by forming the direct sum

        ⋀*V = ⋀⁰V ⊕ ⋀¹V ⊕ ⋯ ⊕ ⋀ⁿV.

    Then

        dim ⋀*V = ∑_{p=0}^{n} (n choose p) = 2ⁿ.

    Thus an element of ⋀*V is an (n+1)-tuple (α0, α1, . . . , αn), or a formal sum α0+α1+⋯+αn, where αi is an element of ⋀ⁱV; αi is called an element of degree i in ⋀*V. Now the wedge product is extended to a product

    ∧ : ⋀*V × ⋀*V → ⋀*V

    by stipulating that the distributive laws

    α ∧ (β + γ) = (α∧β) + (α∧γ),

    (α + β) ∧ γ = (α∧γ) + (β∧γ)

    hold. These are of course consistent with the bilinearity of ∧ as in (b). With the operations + and ∧, ⋀*V is known as the (graded) exterior algebra of V.

    For a linear map f : V → W, the induced linear maps ⋀pf give rise to a ∧-preserving linear map ⋀*W → ⋀*V, which is denoted by f*. It is also customary to denote all the maps ⋀pf by f*, a convention we will henceforth adopt unless there is danger of confusion.

    15. Interior Product or Contraction

    Let V be a linear space over a field F and let x∈V. As the final topic in this section, we consider a method for reducing the degree of an anti-symmetric tensor, known as contraction by x or interior product with x. We define an operator ix (also written x⌟) that will map each ⋀pV to ⋀p−1V. For p=0, we let ix be the zero map, adding the convention that ⋀pV = {0} for p<0. For p≥1, α∈⋀pV, and u2, . . . , up in V, we define

        (ixα)(u2, . . . , up) = α(x, u2, . . . , up).
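
    On component arrays, contraction is just insertion of x into the first slot. A short sketch (not from the text), which also spot-checks property (c) below:

        import numpy as np

        def interior(x, alpha):
            """(i_x alpha)(u_2, ..., u_p) = alpha(x, u_2, ..., u_p):
            contract x into the first slot of the component array."""
            return np.tensordot(x, alpha, axes=([0], [0]))

        rng = np.random.default_rng(3)
        M = rng.standard_normal((3, 3))
        alpha = M - M.T                    # an anti-symmetric 2-tensor on R^3
        x, y = rng.standard_normal(3), rng.standard_normal(3)

        # i_x o i_y = -(i_y o i_x).
        assert np.allclose(interior(x, interior(y, alpha)),
                           -interior(y, interior(x, alpha)))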

    We will now state and prove the main properties of contraction.

    (a) If α∈⋀pV, then ixα∈⋀p−1V.

    PROOF. The (p−1)-linearity and anti-symmetry of ixα follow from the corresponding properties of α.

    (b) ixα is linear with respect to x, i.e., ix+y=ix+iy for x and y in V, and irx=rix for x in V and r in F.

    PROOF. This is linearity with respect to the first component of the argument of α.

    (c) For x and y in V, ix∘iy = −iy∘ix; therefore ix∘ix = 0.

    PROOF. For p<2, both sides are zero. Otherwise,

        (ix(iyα))(u3, . . . , up) = α(y, x, u3, . . . , up).

    The exchange of x and y therefore produces a change of sign, by the anti-symmetry of α.

    (d) Basic Example

    Let (e1, . . . , en) be a basis for V. Then

        i_{ek} e^{i1⋯ip} = 0 if k∉{i1, . . . , ip},    i_{ek} e^{i1⋯ip} = (−1)^{v−1} e^{i1⋯îv⋯ip} if k=iv,    (1.39)

    where the symbol ˆ always indicates the deletion of the entry beneath it.

    PROOF. We check that the two sides have the same effect on an arbitrary ordered (p−1)-tuple of basis elements (e_{j2}, . . . , e_{jp}), j2<⋯<jp. If k is not one of {i1, . . . , ip}, then using 14e we see that the first column of the matrix consists of zeros, so the determinant is zero. Now suppose k=iv; then

        (i_{ek} e^{i1⋯ip})(e_{j2}, . . . , e_{jp}) = e^{i1⋯ip}(e_{iv}, e_{j2}, . . . , e_{jp}).

    If the subscript set {j2, . . . , jp} is not the same as {i1, . . . , iv−1, iv+1, . . . , ip}, both sides are zero. Otherwise, rearranging (e_{iv}, e_{j2}, . . . , e_{jp}) in increasing order involves v−1 transpositions, which produces the sign (−1)^{v−1}.

    (e) Let α∈⋀pV, β∈⋀qV and x∈V. Then

        ix(α∧β) = (ixα)∧β + (−1)ᵖ α∧(ixβ).    (1.40)

    PROOF. We take an ordered basis (e1, . . . , en) for V. Because of the bilinearity of the wedge product and the linearity of ix with respect to x, it suffices to consider the case where x = ek, α = e^{i1⋯ip} and β = e^{j1⋯jq}, where i1<⋯<ip and j1<⋯<jq. We consider four cases:

    Case 1: k∉{i1 , . . . , ip}⋃{j1 , . . . , jq}.

    In this case, ixα = 0, ixβ = 0 and ix(α∧β) = 0, all by the first case of (1.39).

    Case 2: k = iμ ∈ {i1, . . . , ip}, but k∉{j1, . . . , jq}.

    Here again by (1.39), ixβ = 0, and both sides of (1.40) are equal to

        (−1)^{μ−1} e^{i1⋯îμ⋯ip} ∧ e^{j1⋯jq}.

    Case 3: k∉{i1, . . . , ip}, but k=jv∈{j1, . . . , jq}.

    This is similar to Case 2, except that an extra factor (−1)ᵖ appears.

    Case 4: k = iμ ∈ {i1, . . . , ip} and k = jv ∈ {j1, . . . , jq}.

    Here we have α∧β = 0, so the left-hand side of (1.40) is zero. On the other hand,

        (ixα)∧β + (−1)ᵖ α∧(ixβ) = (−1)^{μ−1} e^{i1⋯îμ⋯ip} ∧ e^{j1⋯jq} + (−1)^{p+v−1} e^{i1⋯ip} ∧ e^{j1⋯ĵv⋯jq}.

    It takes (p−μ+v−1) transpositions to move ek from its position in one term to its position in the other; keeping track of the signs shows that the two terms on the right cancel, so the right-hand side is zero as well.

    EXERCISES

    1.1 Let V be a vector space and V* its dual. Show that there is no isomorphism V → V* that maps every basis to its dual.

    1.2 Let V1 and V2 be vector spaces with ordered bases B1 and B2, respectively, and let A be the matrix of a linear map f : V1 → V2 with respect to these bases. Show that the matrix of the induced linear map f* : V2* → V1* with respect to the dual bases is the transpose of A.

    1.3 Let V be a vector space and α¹ , . . . , αk∈V*. Show that {α¹ , . . . , αk} is linearly dependent if and only if α¹∧⋯∧αk=0.

    1.4 For anti-symmetric tensors α and β on a vector space V, we define

        [α, β] = α∧β − β∧α.

    If α, β and γ are anti-symmetric tensors on V, show that [[α,β],γ]=0.

    1.5 Let B be an ordered basis for an n-dimensional vector space V. Denote the basis constructed in 10a for ⋀pV by Bp (you may choose any fixed order for the basis elements of Bp).

    (a) For n = 3, let A be the matrix of a linear map f : V → V with respect to B. Describe the matrices of ⋀pf with respect to Bp for p = 0, 1, 2, 3.

    (b) Do the same for arbitrary n and p.

    1.6 (Laplace Expansion) Let V be a vector space with basis (e1 , . . . , en) and suppose f: VV is a linear map.

    (a) Using the formula 14i for p = 1 and q = n−1, obtain the expansion formula for determinant in terms of the first row (column).

    (b) For arbitrary p and q with p+q=n, obtain a more general formula.

    1.7 Denote the vector space of 2×2 matrices with entries from the field F by M2(F). Consider a fixed element M∈M2(F) and denote the linear map X ↦ MX from M2(F) to itself by f.

    (a) Show that det f=(det M)².

    (b) Compute the matrices of ⋀pf relative to the bases described in 10a for all p.

    1.8 Let dim V = n, 0 ≤ p ≤ n, and suppose that f : V → V is a linear map. Show that

    1.9 We denote by τp the trace of the linear map ⋀p f, where f : VV is a linear map, and dim V=n. Identify τ0, τ1 and τn. Show that

    1.10 Let V be a finite-dimensional vector space over a field F and suppose that β : V×V → F is a bilinear map. We define β♭ : V → V* by

        (β♭(u))(v) = β(v, u),    u, v∈V.

    (a) Prove that in fact β♭(u)∈V* and that β♭ is linear.

    (b) Show that β♭ is an isomorphism if β is an inner product.

    (c) Let B=(e1 , . . . , en) be a basis for V. Prove that
