Chapter 5 Duality

5.1 Dual spaces

Recall from Theorem 1.6: if \(V\) and \(W\) are vector spaces over \(\F \), then the set \(L(V,W)\) of linear maps from \(V\) to \(W\) is also a vector space under pointwise addition and scalar multiplication. In this chapter we will study the special case where \(W=\F \), the field of scalars.

  • Definition. Let \(V\) be a vector space over \(\F \). The dual space \(V^{*}\) of \(V\) is

    \begin{equation*} V^{*}:=L(V,\F )=\set {\alpha :V\to \F \st \text {$\alpha $ is linear}}. \end{equation*}

    Elements of \(V^{*}\) are called linear functionals or (less often) linear forms.

Let us spell this out. An element \(\alpha \in V^{*}\) is a function \(\alpha :V\to \F \) which is linear:

\begin{equation*} \alpha (v_1+\lambda v_2)=\alpha (v_1)+\lambda \alpha (v_2), \end{equation*}

for all \(v_1,v_2\in V\) and \(\lambda \in \F \). The addition and scalar multiplication on the right are the field addition and multiplication in \(\F \).

The dual space \(V^{*}\) is a vector space (indeed a subspace of \(\F ^V\)) under pointwise addition and scalar multiplication. Thus:

\begin{align*} (\alpha _1+\alpha _2)(v)&:=\alpha _1(v)+\alpha _2(v)\\ (\lambda \alpha )(v)&:=\lambda (\alpha (v)), \end{align*} for all \(\alpha ,\alpha _1,\alpha _2\in V^{*}\), \(v\in V\) and \(\lambda \in \F \). Again, the algebraic operations on the right hand side of these formulae are those of the field \(\F \).
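
For example, if \(\alpha ,\beta \in (\R ^2)^{*}\) are given by \(\alpha (x_1,x_2)=x_1+2x_2\) and \(\beta (x_1,x_2)=3x_1-x_2\), then

\begin{equation*} (\alpha +2\beta )(x_1,x_2)=\alpha (x_1,x_2)+2\beta (x_1,x_2)=7x_1. \end{equation*}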

  • Examples.

    • (1) Fix \(\lst \alpha 1n\in \F \) and define \(\alpha :\F ^n\to \F \) by

      \begin{equation*} \alpha (\lst {x}1n)=\lc \alpha {x}1n. \end{equation*}

      We will soon see that all \(\alpha \in (\F ^n)^{*}\) are of this form for unique \(\lst {\alpha }1n\).

    • (2) Let \(P:=\R [t]\) be the vector space of polynomials on \(\R \). Here are some linear functionals on \(P\):

      • (a) Integration over an interval \([a,b]\): \(p\mapsto \int _a^bp\).

      • (b) Evaluation at a point: for example, \(p\mapsto p(\sqrt {2})\).

      • (c) Evaluation of a derivative at a point: for example \(p\mapsto p'''(\pi )\).
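
To make example (1) concrete: with \(n=3\) and \((\alpha _1,\alpha _2,\alpha _3)=(2,-1,5)\), the functional is \(\alpha (x_1,x_2,x_3)=2x_1-x_2+5x_3\). Similarly, in example (2), taking \(p(t)=t^3\) and \([a,b]=[0,1]\), the three functionals give

\begin{equation*} \int _0^1p=\tfrac {1}{4},\qquad p(\sqrt {2})=2\sqrt {2},\qquad p'''(\pi )=6. \end{equation*}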

When \(V\) is finite-dimensional, so is \(V^{*}\). Indeed:

  • Proposition 5.1. Let \(V\) be a finite-dimensional vector space with basis \(\lst {v}1n\).

    Define \(\dlst {v}1n\in V^{*}\) by setting

    \begin{equation*} v_i^{*}(v_j)=\delta _{ij}= \begin{cases} 1&\text {if $i=j$}\\ 0&\text {otherwise,} \end {cases} \end{equation*}

    and extending by linearity (thus applying Proposition 1.7).

    Then \(\dlst {v}1n\) is a basis of \(V^{*}\) called the dual basis to \(\lst {v}1n\).

  • Proof. Here is the key computation: if \(\sum _{i=1}^n\lambda _iv^{*}_i\in V^{*}\) is a linear combination of the \(v^{*}_i\) then evaluating on \(v_j\) gives

    \begin{equation*} \sum _{i=1}^n\lambda _iv^{*}_i(v_j)=\sum _{i=1}^n\lambda _i\delta _{ij}=\lambda _j. \end{equation*}

    In particular, if \(\sum _{i=1}^n\lambda _iv^{*}_i\) is the zero functional then each \(\lambda _j=0(v_j)=0\), so that \(\dlst {v}1n\) are linearly independent.

    Now let \(\alpha \in V^{*}\) and set \(\lambda _i=\alpha (v_i)\), for \(\bw 1in\). Then \(\alpha \) and \(\sum _{i=1}^n\lambda _iv_i^{*}\) agree on each \(v_j\) and so everywhere:

    \begin{equation*} \alpha =\sum _{i=1}^n\alpha (v_i)v_i^{*}. \end{equation*}

    Thus \(\dlst {v}1n\) span \(V^{*}\).  □

  • Remark. We have met these \(v^{*}_i\) before, perhaps without realising it. Write \(v\in V\) in terms of the \(\lst {v}1n\): \(v=\sum _{j=1}^n\lambda _jv_j\). Then

    \begin{equation*} v_i^{*}(v)=\sum _{j=1}^n\lambda _jv_i^{*}(v_j)=\lambda _i. \end{equation*}

    Thus \(v^{*}_i\) is the \(i\)-th coordinate function on \(V\) with respect to \(\lst {v}1n\).

    In particular, applying this to the standard basis \(\lst {e}1n\) of \(\F ^{n}\), we see that, for \(x=(\lst {x}1n)\in \F ^n\), \(e_i^{*}(x)=x_i\) so that any \(\alpha \in (\F ^n)^{*}\) is given by

    \begin{equation*} \alpha (x)=\lc {\alpha }x1n \end{equation*}

    with \(\alpha _i=\alpha (e_i)\).
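
  • Example. In \(\R ^2\), take the basis \(v_1=(1,0)\), \(v_2=(1,1)\). Writing \((x_1,x_2)=(x_1-x_2)v_1+x_2v_2\), the coordinate functions, and hence the dual basis, are

    \begin{equation*} v_1^{*}(x_1,x_2)=x_1-x_2,\qquad v_2^{*}(x_1,x_2)=x_2, \end{equation*}

    and one checks directly that \(v_i^{*}(v_j)=\delta _{ij}\). Note that \(v_1^{*}\neq e_1^{*}\) even though \(v_1=e_1\): the dual basis depends on the whole basis, not just on the individual basis vectors.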

  • Corollary 5.2. If \(V\) is finite-dimensional then \(\dim V=\dim V^{*}\).

A basic question is how big \(V^{*}\) is: are there enough linear functionals to detect every non-zero element of \(V\)? The answer is yes, and the key is the following theorem:

  • Theorem 5.3 (Sufficiency principle). Let \(V\) be a vector space and \(v\in V\). Then \(\alpha (v)=0\), for all \(\alpha \in V^{*}\), if and only if \(v=0\).

  • Proof. A complete proof requires a tool from set theory called Zorn’s Lemma, equivalent to the Axiom of Choice, which has the faintly controversial property that it is logically independent of the usual axioms of set theory (so you can choose to believe it or not without running into a contradiction). Rather than get involved in all that, we simply prove the result in the finite-dimensional case.

    If \(V\) is finite-dimensional, choose a basis \(\lst {v}1n\). For \(v\in V\), write \(v=\lc \lambda {v}1n\). If \(\alpha (v)=0\) for all \(\alpha \in V^{*}\) then, in particular, for each \(i\), \(0=v_i^{*}(v)=\lambda _i\) so that \(v=0\).  □
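
  • Remark. Equivalently, Theorem 5.3 says that if \(v\neq 0\) then there is some \(\alpha \in V^{*}\) with \(\alpha (v)\neq 0\). In the finite-dimensional case we can exhibit such an \(\alpha \) directly: extend \(v=v_1\) to a basis \(\lst {v}1n\) of \(V\) and take \(\alpha =v_1^{*}\), so that \(\alpha (v)=1\).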

  • Exercise.1 Let \(v\in V\) and \(U\leq V\) with \(v\notin U\). Show that there is \(\alpha \in V^{*}\) such that \(\alpha (v)\neq 0\) while \(\alpha _{|U}=0\).

    Hint: apply Theorem 5.3 to \(V/U\).

1 Question 4 on sheet 8.

We apply Theorem 5.3 to get a converse to Proposition 5.1:

  • Proposition 5.4. Let \(V\) be a finite-dimensional vector space and \(\lst {\alpha }1n\) a basis of \(V^{*}\). Then there is a basis \(\lst {v}1n\) of \(V\) such that

    \begin{equation*} \alpha _i(v_j)=\delta _{ij}. \end{equation*}

    Thus \(\alpha _i=v^{*}_i\), for \(\bw 1{i}n\).

  • Proof. Define a linear map \(\phi :V\to \F ^n\) by

    \begin{equation*} \phi (v)=(\alpha _1(v),\dots ,\alpha _n(v)) \end{equation*}

    and observe that \(v\in \ker \phi \) if and only if \(\alpha _i(v)=0\), for \(\bw 1in\). In that case, since any \(\alpha \in V^{*}\) is a linear combination of the \(\alpha _i\), we get \(\alpha (v)=0\), for all \(\alpha \in V^{*}\), and so \(v=0\) by Theorem 5.3. Thus \(\ker \phi =\set 0\) and \(\phi \) is injective. On the other hand, \(\dim V=\dim V^{*}=n=\dim \F ^n\) by Corollary 5.2, so that \(\phi \), being an injective linear map between spaces of the same finite dimension, is an isomorphism.

    Now set \(v_i=\phi ^{-1}(e_i)\), \(\bw 1in\), to get a basis of \(V\) since \(\lst {e}1n\) is a basis of \(\F ^n\). Then

    \begin{equation*} \phi (v_j)=(\alpha _1(v_j),\dots ,\alpha _n(v_j))=e_j=(0,\dots ,1,\dots ,0), \end{equation*}

    where the \(1\) is in the \(j\)-th slot. Otherwise said, \(\alpha _i(v_j)=\delta _{ij}\) as required.  □
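
  • Example. In \((\R ^2)^{*}\), the functionals \(\alpha _1(x_1,x_2)=x_1+x_2\) and \(\alpha _2(x_1,x_2)=x_1-x_2\) form a basis. Here \(\phi (x_1,x_2)=(x_1+x_2,x_1-x_2)\), and solving \(\phi (v_j)=e_j\) gives

    \begin{equation*} v_1=\left (\tfrac {1}{2},\tfrac {1}{2}\right ),\qquad v_2=\left (\tfrac {1}{2},-\tfrac {1}{2}\right ), \end{equation*}

    so that \(\alpha _i(v_j)=\delta _{ij}\) and \(\alpha _i=v_i^{*}\).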

Since the dual space \(V^{*}\) is a vector space, we can contemplate its dual space \(V^{**}:=(V^{*})^{*}\), the double dual of \(V\). This is closely related to \(V\) itself. Indeed, each \(v\in V\) defines a linear map \(\ev (v):V^{*}\to \F \) by evaluation at \(v\):

\begin{equation*} \ev (v)(\alpha ):=\alpha (v)\in \F . \end{equation*}

  • Exercises.2

    • (1) Show that \(\ev (v)\) is indeed linear: for \(\alpha ,\beta \in V^{*}\) and \(\lambda \in \F \),

      \begin{equation*} \ev (v)(\alpha +\lambda \beta )=\ev (v)(\alpha )+\lambda \ev (v)(\beta ). \end{equation*}

      Thus \(\ev (v)\in V^{**}\).

    • (2) We therefore have a map \(\ev :V\to V^{**}\). Show that \(\ev \) is linear: that is,

      \begin{equation*} \ev (v+\lambda w)=\ev (v)+\lambda \ev (w), \end{equation*}

      for all \(v,w\in V\), \(\lambda \in \F \). To spell it out even more, this means

      \begin{equation*} \ev (v+\lambda w)(\alpha )=\ev (v)(\alpha )+\lambda \ev (w)(\alpha ), \end{equation*}

      for all \(\alpha \in V^{*}\).

    • (3) Show that \(\ev \) is injective (use Theorem 5.3). It follows that, when \(V\) is finite-dimensional, \(\ev \) is an isomorphism, since \(\dim V=\dim V^{*}=\dim V^{**}\).

2 Question 6 on sheet 8.

Thus:

  • Theorem 5.5. If \(V\) is a finite-dimensional vector space then \(\ev :V\to V^{**}\) is an isomorphism.
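
  • Example. If \(\lst {v}1n\) is a basis of \(V\) with dual basis \(\dlst {v}1n\), then \(\ev (v_j)(v_i^{*})=v_i^{*}(v_j)=\delta _{ij}\). Thus \(\ev (v_j)=(v_j^{*})^{*}\): the isomorphism \(\ev \) carries the basis \(\lst {v}1n\) of \(V\) to the basis of \(V^{**}\) dual to \(\dlst {v}1n\).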

  • Remark. In general, a vector space for which \(\ev :V\to V^{**}\) is an isomorphism is said to be reflexive.