M216: Exercise sheet 10

    Warmup questions

  • 1. Show that the following are bilinear maps:

    • (a) Matrix multiplication \(M_{m\times n}(\F )\times M_{n\times p}(\F )\to M_{m\times p}(\F )\).

    • (b) Evaluation \((\phi ,v)\mapsto \phi (v):L(V,W)\times V\to W\).

    • (c) For \(\alpha \in V^{*}\) and \(w\in W\), define \(\phi _{\alpha ,w}:V\to W\) by

      \begin{equation*} \phi _{\alpha ,w}(v)=\alpha (v)w. \end{equation*}

      • (i) Show that each \(\phi _{\alpha ,w}\) is linear.

      • (ii) Show that the map \(t:V^{*}\times W\to L(V,W)\) given by \(t(\alpha ,w)=\phi _{\alpha ,w}\) is bilinear.

  • 2. Let \(B:V\times V\to \F \) be a symmetric bilinear form with diagonalising basis \(\lst {v}1n\). Suppose that \(B(v_i,v_i)=0\), for some \(i\) with \(\bw 1in\). Prove that \(v_i\in \rad B\).

  • 3. Let \(B:V\times V\to \F \) be a real symmetric bilinear form with diagonalising basis \(\lst {v}1n\). Show that \(B\) is positive definite if and only if \(B(v_i,v_i)>0\), for all \(\bw 1{i}n\).

  • 4. Let \(A,B\in M_{n\times n}(\F )\) be congruent: \(B=P^TAP\), for some \(P\in \mathrm {GL}(n,\F )\).

    Are the following statements true or false?

    • (a) \(\det A=\det B\).

    • (b) \(A\) is symmetric if and only if \(B\) is symmetric.

    Rank and signature

  • 5. Let \(B=B_A:\R ^4\times \R ^4\to \R \) where

    \begin{equation*} A= \begin{pmatrix} 0&2&1&0\\2&0&0&1\\1&0&0&2\\0&1&2&0 \end {pmatrix}. \end{equation*}

    Diagonalise \(B\) and hence, or otherwise, compute its signature.

  • 6. Diagonalise the symmetric bilinear form \(B:\R ^3\times \R ^3\to \R \) given by \(B(x,y)=x_1y_1+x_1y_2+x_2y_1+2x_2y_{2}+x_2y_3+x_3y_2+x_3y_3\).

    Hence, or otherwise, compute the rank and signature of \(B\).

  • 7. Compute the rank and signature of the quadratic form \(Q(x)=x_1x_2-4x_3x_4\) on \(\R ^4\).

December 15, 2023

M216: Exercise sheet 10—Solutions

  • 1.

    • (a) The bilinearity amounts to:

      \begin{align*} A(C+\lambda D)&=AC+\lambda AD\\ (A+\lambda B)C&=AC+\lambda BC, \end{align*} for all \(A,B\in M_{m\times n}(\F )\), \(C,D\in M_{n\times p}(\F )\) and \(\lambda \in \F \). Both of these are easy to prove. For example,

      \begin{multline*} (A(C+\lambda D))_{ij}=\sum _{k=1}^nA_{ik}(C+\lambda D)_{kj}=\sum _{k=1}^nA_{ik}(C_{kj}+\lambda D_{kj})\\ =\sum _{k=1}^n(A_{ik}C_{kj}+\lambda A_{ik}D_{kj}) = (AC)_{ij}+\lambda (AD)_{ij}=(AC+ \lambda AD)_{ij}. \end{multline*}

    • (b) Here, bilinearity reads

      \begin{align*} (\phi _1+\lambda \phi _{2})(v)&=\phi _1(v)+\lambda \phi _2(v)\\ \phi (u+\lambda v)&=\phi (u)+\lambda \phi (v), \end{align*} for all \(\phi ,\phi _1,\phi _2\in L(V,W)\), \(u,v\in V\) and \(\lambda \in \F \). But the first of these is simply the definition of the pointwise addition and scalar multiplication in \(L(V,W)\) while the second is simply the assertion that \(\phi \) is linear!

    • (c)

      • (i) This comes straight from the linearity of \(\alpha \): for \(u,v\in V\) and \(\lambda \in \F \),

        \begin{equation*} \phi _{\alpha ,w}(u+\lambda v)=\alpha (u+\lambda v)w= \alpha (u)w+\lambda \alpha (v)w =\phi _{\alpha ,w}(u)+\lambda \phi _{\alpha ,w}(v). \end{equation*}

      • (ii) Bilinearity of \(t\) amounts to:

        \begin{align*} \phi _{\alpha +\lambda \beta ,w}&=\phi _{\alpha ,w}+\lambda \phi _{\beta ,w}\\ \phi _{\alpha ,w_1+\lambda w_2}&=\phi _{\alpha ,w_1}+\lambda \phi _{\alpha ,w_2}, \end{align*} for all \(\alpha ,\beta \in V^{*}\), \(w,w_1,w_2\in W\) and \(\lambda \in \F \). Each is proved by showing that both sides take the same values on each \(v\in V\). For example:

        \begin{multline*} \phi _{\alpha +\lambda \beta ,w}(v)= (\alpha +\lambda \beta )(v)w=\alpha (v)w+\lambda \beta (v)w\\ =\phi _{\alpha ,w}(v)+\lambda \phi _{\beta ,w}(v) =(\phi _{\alpha ,w}+\lambda \phi _{\beta ,w})(v) \end{multline*}
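    The two identities of part (a) can also be tested numerically on random matrices. The following NumPy snippet is an illustration only (it assumes NumPy is available and is of course no substitute for the proof above):

```python
import numpy as np

# Sanity check of bilinearity of matrix multiplication on random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 4))
D = rng.standard_normal((3, 4))
lam = 2.5

# Linearity in the second slot: A(C + lam*D) = AC + lam*AD.
assert np.allclose(A @ (C + lam * D), A @ C + lam * (A @ D))
# Linearity in the first slot: (A + lam*B)C = AC + lam*BC.
assert np.allclose((A + lam * B) @ C, A @ C + lam * (B @ C))
```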

  • 2. Since the basis is diagonalising, \(B(v_i,v_j)=0\) for \(j\neq i\), while \(B(v_i,v_i)=0\) by hypothesis. Thus \(B(v_i,v_j)=0\), for all \(\bw 1jn\). So, if \(v\in V\), write \(v=\sum _j\lambda _jv_j\) and then

    \begin{equation*} B(v_i,v)=\sum _j\lambda _jB(v_i,v_j)=0. \end{equation*}

    In other words, \(v_i\in \rad B\).

  • 3. If \(B\) is positive definite, then \(B(v,v)>0\) for any non-zero \(v\in V\) and so, in particular, each \(B(v_i,v_i)>0\).

    Conversely, suppose that each \(B(v_i,v_i)>0\) and let \(v\in V\). Write \(v=\lc {\lambda }{v}1n\) and compute:

    \begin{equation*} B(v,v)=B(\sum _i\lambda _{i}v_i,\sum _j\lambda _jv_j)=\sum _{i,j}\lambda _i\lambda _jB(v_i,v_j) =\sum _i\lambda _i^2B(v_i,v_i). \end{equation*}

    This last sum is non-negative and vanishes if and only if each \(\lambda _i^2B(v_i,v_i)=0\), or, equivalently, each \(\lambda _{i}=0\), that is, \(v=0\). Thus \(B(v,v)>0\) whenever \(v\neq 0\), so \(B\) is positive definite.

  • 4.

    • (a) This is false in general: take \(P=\lambda I_n\), for \(\lambda \in \F \setminus \{0\}\). Then \(B=\lambda ^2A\) so that \(\det B=\lambda ^{2n}\det A\). Choosing \(A\) with \(\det A\neq 0\) and \(\lambda \) with \(\lambda ^{2n}\neq 1\) (for example \(A=I_n\) and \(\lambda =2\) when \(\F =\R \)) gives \(\det B\neq \det A\).

    • (b) This is true: if \(A^T=A\) then

      \begin{equation*} B^T=(P^TAP)^T=P^TA^TP=P^TAP=B. \end{equation*}

      Conversely, if \(B^T=B\) we get \(P^TA^TP=P^TAP\) and multiplying by \(P^{-1}\) on the right and \((P^T)^{-1}\) on the left gives \(A^T=A\).
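    Both parts of question 4 can be illustrated numerically; the following sketch (assuming NumPy) uses the counterexample \(P=\lambda I_n\) with \(\lambda =2\), \(n=3\):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])   # symmetric, det A = 6
P = 2.0 * np.eye(3)            # P = lambda*I_n with lambda = 2, n = 3
B = P.T @ A @ P                # B is congruent to A

# (a) is false: det B = lambda^(2n) det A = 2^6 * det A, not det A.
assert not np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.isclose(np.linalg.det(B), 2**6 * np.linalg.det(A))
# (b) holds: B is symmetric since A is.
assert np.allclose(B, B.T)
```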

  • 5. We need to start with \(v_1\) such that \(B(v_1,v_1)\neq 0\). The zeros on the diagonal of \(A\) say that no standard basis vector will do, so let us try \(v_1=(1,1,0,0)\), for which \(B(v_1,v_1)=4\).

    Now seek \(v_2\) among the \(y\) with

    \begin{equation*} 0=B(v_1,y)= \begin{pmatrix} 1&1&0&0 \end {pmatrix}A\by = \begin{pmatrix} 2&2&1&1 \end {pmatrix}\by =2y_1+2y_2+y_3+y_4. \end{equation*}

    We take \(v_2=(0,0,1,-1)\) with

    \begin{equation*} B(v_2,y)= \begin{pmatrix} 0&0&1&-1 \end {pmatrix}A\by = \begin{pmatrix} 1&-1&-2&2 \end {pmatrix}\by =y_1-y_2-2y_3+2y_4. \end{equation*}

    Then \(B(v_2,v_2)=-4\) and we seek \(v_3\) among the \(y\) with \(B(v_1,y)=B(v_2,y)=0\), that is:

    \begin{align*} 2y_1+2y_2+y_3+y_4&=0\\ y_1-y_2-2y_3+2y_4&=0. \end{align*} One solution is \(v_3=(-3,5,-4,0)\) with

    \begin{equation*} B(v_3,y)= \begin{pmatrix} -3&5&-4&0 \end {pmatrix}A\by = 3\begin{pmatrix} 2&-2&-1&-1 \end {pmatrix}\by =3(2y_1-2y_2-y_3-y_4). \end{equation*}

    Thus \(B(v_3,v_3)=-36\) and we need to find \(v_4=y\) with \(B(v_1,y)=B(v_2,y)=B(v_3,y)=0\):

    \begin{align*} 2y_1+2y_2+y_3+y_4&=0\\ y_1-y_2-2y_3+2y_4&=0\\ 2y_1-2y_2-y_3-y_4&=0. \end{align*} A solution is \(v_{4}=(0,4,-5,-3)\) with \(B(v_4,v_4)=36\).

    We now have a diagonalising basis with \(B(v_i,v_{i})=4,-4,-36,36\) so \(B\) has signature \((2,2)\) and so has rank \(4\).

    After all this linear equation solving, it is probably good to check our answer: let \(P\) have the \(v_j\) as columns and check that \(P^TAP\) is diagonal:

    \begin{equation*} \begin{pmatrix} 1&1&0&0\\0&0&1&-1\\-3&5&-4&0\\0&4&-5&-3 \end {pmatrix} \begin{pmatrix} 0&2&1&0\\2&0&0&1\\1&0&0&2\\0&1&2&0 \end {pmatrix} \begin{pmatrix} 1&0&-3&0\\1&0&5&4\\0&1&-4&-5\\0&-1&0&-3 \end {pmatrix}= \begin{pmatrix} 4&0&0&0\\ 0&-4&0&0\\ 0&0&-36&0\\ 0&0&0&36 \end {pmatrix} \end{equation*}
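    This final check is also easily delegated to a few lines of NumPy (an optional sanity check, assuming NumPy is available):

```python
import numpy as np

A = np.array([[0, 2, 1, 0],
              [2, 0, 0, 1],
              [1, 0, 0, 2],
              [0, 1, 2, 0]])
# Columns of P are the basis vectors v1, v2, v3, v4 found above.
P = np.array([[ 1,  0, -3,  0],
              [ 1,  0,  5,  4],
              [ 0,  1, -4, -5],
              [ 0, -1,  0, -3]])
D = P.T @ A @ P
# P^T A P should be diag(4, -4, -36, 36), giving signature (2,2).
assert np.allclose(D, np.diag([4, -4, -36, 36]))
```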

  • 6. \(B=B_A\) where

    \begin{equation*} A= \begin{pmatrix} 1&1&0\\1&2&1\\0&1&1 \end {pmatrix}. \end{equation*}

    Let us exploit the zero in the \((1,3)\) slot: note that

    \begin{equation*} B(e_1,e_1)=B(e_3,e_3)=1,\qquad B(e_1,e_3)=0 \end{equation*}

    so that we just need to find \(y\) with

    \begin{align*} 0&=B(e_1,y)=y_1+y_2\\ 0&=B(e_3,y)=y_2+y_3. \end{align*} Clearly \(y=(1,-1,1)\) does the job, with \(B(y,y)=0\). Thus \(e_1,e_3,y\) form a diagonalising basis with matrix

    \begin{equation*} \begin{pmatrix} 1&0&0\\0&1&0\\0&0&0 \end {pmatrix}. \end{equation*}

    Reading off the diagonal entries, we see that the signature is \((2,0)\) and so the rank is \(2\).
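    As with question 5, the diagonalisation can be checked with a few lines of NumPy (an optional sanity check, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 2, 1],
              [0, 1, 1]])
# Columns of P are the diagonalising basis e1, e3, y = (1,-1,1).
P = np.array([[1, 0,  1],
              [0, 0, -1],
              [0, 1,  1]])
# P^T A P should be diag(1, 1, 0): signature (2,0), rank 2.
assert np.allclose(P.T @ A @ P, np.diag([1, 1, 0]))
assert np.linalg.matrix_rank(A) == 2
```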

  • 7. The fastest way to do this is to recall that \(xy=\tfrac 14\bigl ((x+y)^2-(x-y)^2\bigr )\) so that

    \begin{equation*} x_1x_2-4x_3x_4=\tfrac 14(x_1+x_2)^2-\tfrac 14(x_1-x_2)^2 -(x_3+x_4)^2+(x_3-x_4)^{2}. \end{equation*}

    Moreover, the four linear functionals \(x_1\pm x_2, x_3\pm x_4\) are linearly independent: one way to see this is that \(x_1\pm x_2=0=x_3\pm x_4\) forces each \(x_i=0\) so that Corollary 5.7 applies.

    Now two squares appear positively and two negatively giving signature \((2,2)\) and so rank \(4\).
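    Since an orthogonal diagonalisation is in particular a congruence, Sylvester's law of inertia lets one read the signature off the signs of the eigenvalues of the symmetric matrix of \(Q\). A quick NumPy check (an illustration, not part of the intended solution):

```python
import numpy as np

# Symmetric (Gram) matrix A of Q(x) = x1*x2 - 4*x3*x4, i.e. Q(x) = x^T A x.
A = np.array([[0.0, 0.5,  0.0,  0.0],
              [0.5, 0.0,  0.0,  0.0],
              [0.0, 0.0,  0.0, -2.0],
              [0.0, 0.0, -2.0,  0.0]])
eig = np.linalg.eigvalsh(A)          # eigenvalues are -2, -1/2, 1/2, 2
pos = int(np.sum(eig > 1e-12))       # number of positive eigenvalues
neg = int(np.sum(eig < -1e-12))      # number of negative eigenvalues
assert (pos, neg) == (2, 2)          # signature (2,2)
assert pos + neg == 4                # rank 4
```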