6.2 Symmetric bilinear forms

  • Definition. A bilinear form \(B:V\times V\to \F \) is symmetric if, for all \(v,w\in V\),

    \begin{equation*} B(v,w)=B(w,v) \end{equation*}

  • Exercise. If \(V\) is finite-dimensional, \(B\) is symmetric if and only if \(B(v_i,v_j)=B(v_j,v_i)\), \(\bw 1{i,j}n\), for some basis \(\lst {v}1n\) of \(V\).

    Thus \(B\) is symmetric if and only if its matrix with respect to some (and then any) basis is symmetric.

  • Example. A real inner product is a symmetric bilinear form.
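
In matrix terms, the exercise gives a practical test: \(B\) is symmetric exactly when its matrix with respect to some basis equals its own transpose. A minimal machine check in Python (numpy is assumed here purely as illustration, and the matrix is a made-up example):

\begin{verbatim}
import numpy as np

# matrix of B with respect to some basis (hypothetical example)
A = np.array([[1, 2],
              [2, 0]])

print(np.array_equal(A, A.T))  # True: B is symmetric
\end{verbatim}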

6.2.1 Rank and radical
  • Definitions. Let \(B:V\times V\to \F \) be a symmetric bilinear form.

    The radical \(\rad B\) of \(B\) is given by

    \begin{equation*} \rad B:=\set {v\in V\st \text {$B(v,w)=0$, for all $w\in V$}}. \end{equation*}

    We shall shortly see that \(\rad B\leq V\).

    We say that \(B\) is non-degenerate if \(\rad B=\set 0\).

    If \(V\) is finite-dimensional, the rank of \(B\) is \(\dim V-\dim \rad B\) (so that \(B\) is non-degenerate if and only if \(\rank B=\dim V\)).

  • Remark. A real inner product \(B\) is non-degenerate since \(B(v,v)>0\) when \(v\neq 0\).

Here is how to understand both the rank and the radical of \(B\). We use \(B\) to define a map \(\beta :V\to V^{*}\) by

\begin{equation*} \beta (v)(w)=B(v,w), \end{equation*}

for \(v,w\in V\). Then:

  • • \(\beta (v)\in V^{*}\) since \(B\) is linear in the second slot.

  • • \(\beta :V\to V^{*}\) is linear since \(B\) is linear in the first slot.

  • • \(\ker \beta =\set {v\in V\st \beta (v)=0}=\set {v\in V\st \text {$B(v,w)=0$ for all $w\in V$}}=\rad B\). Thus \(\rad B\leq V\) and \(\rank B=\rank \beta \) when \(V\) is finite-dimensional.

    Moreover \(B\) is non-degenerate if and only if \(\beta \) is injective or, equivalently when \(V\) is finite-dimensional, if and only if \(\beta \) is an isomorphism.

  • • Let \(B\) have matrix \(A\) with respect to a basis \(\lst {v}1n\) of \(V\). Then

    \begin{equation*} \beta (v_j)(v_i)=B(v_j,v_i)=A_{ji}=A_{ij}, \end{equation*}

    where we used the symmetry of \(A\) in the last equality. It follows that

    \begin{equation*} \beta (v_j)=\sum _{i=1}^nA_{ij}v_i^{*} \end{equation*}

    so that \(A\) is the matrix of \(\beta \) with respect to the basis \(\lst {v}1n\) of \(V\) and the dual basis \(\dlst {v}1n\) of \(V^{*}\).

We learn from this how to compute the rank of \(B\):

  • Lemma 6.3. Let \(B:V\times V\to \F \) be a symmetric bilinear form on a finite-dimensional vector space \(V\) with matrix \(A\) with respect to some basis of \(V\). Then

    \begin{equation*} \rank B=\rank A. \end{equation*}

    In particular, \(B\) is non-degenerate if and only if \(\det A\neq 0\).

  • Examples. We contemplate some symmetric bilinear forms on \(\F ^3\):

    • (1) \(B(x,y)=x_1y_1+x_2y_2-x_3y_3\). With respect to the standard basis, we have

      \begin{equation*} A= \begin{pmatrix} 1&0&0\\0&1&0\\0&0&-1 \end {pmatrix} \end{equation*}

      so that \(\rank B=3\).

    • (2) \(B(x,y)=x_1y_2+x_2y_1\). Here the matrix with respect to the standard basis is

      \begin{equation*} A= \begin{pmatrix} 0&1&0\\1&0&0\\0&0&0 \end {pmatrix} \end{equation*}

      so that \(B\) has rank \(2\) and radical \(\Span {e_3}\).

    • (3) In general, \(B(x,y)=\sum _{i,j=1}^3A_{ij}x_iy_j\) so we can read off \(A\) from the coefficients of the \(x_iy_j\).
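
These computations are easy to check by machine. Here is a quick sketch in Python, with the sympy library assumed available for exact arithmetic, applied to example (2): \(\rank B=\rank A\) by Lemma 6.3, while \(\rad B=\ker \beta \) is the null space of \(A\).

\begin{verbatim}
from sympy import Matrix

# example (2): B(x, y) = x_1 y_2 + x_2 y_1
A = Matrix([[0, 1, 0],
            [1, 0, 0],
            [0, 0, 0]])

print(A.rank())       # 2: rank B = rank A (Lemma 6.3)
print(A.nullspace())  # [Matrix([[0], [0], [1]])]: rad B = span{e_3}
\end{verbatim}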

6.2.2 Classification of symmetric bilinear forms
  • Convention. In this section, we work with a field \(\F \) where \(1+1\neq 0\) so that \(\half =(1+1)^{-1}\) makes sense. This excludes, for example, the \(2\)-element field \(\Z _2\).

We can always find a basis with respect to which \(B\) has a diagonal matrix. First a lemma:

  • Lemma 6.4. Let \(B:V\times V\to \F \) be a symmetric bilinear form such that \(B(v,v)=0\), for all \(v\in V\). Then \(B\equiv 0\).

  • Proof. Let \(v,w\in V\). We show that \(B(v,w)=0\). We know that \(B(v+w,v+w)=0\) and expanding out, using bilinearity and the symmetry of \(B\), gives us

    \begin{equation*} 0=B(v,v)+2B(v,w)+B(w,w)=2B(v,w). \end{equation*}

    Since \(2\neq 0\) in \(\F \), \(B(v,w)=0\).  □
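
  • Remark. Rearranging the expansion in the proof gives the polarisation identity

    \begin{equation*} B(v,w)=\half \bigl (B(v+w,v+w)-B(v,v)-B(w,w)\bigr ), \end{equation*}

    so \(B\) is completely determined by the function \(v\mapsto B(v,v)\).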

We can now prove:

  • Theorem 6.5 (Diagonalisation Theorem). Let \(B\) be a symmetric bilinear form on a finite-dimensional vector space over \(\F \). Then there is a basis \(\lst {v}1n\) of \(V\) with respect to which the matrix of \(B\) is diagonal:

    \begin{equation*} B(v_i,v_j)=0, \end{equation*}

    for all \(\bw 1{i\neq j}n\). We call \(\lst {v}1n\) a diagonalising basis for \(B\).

  • Proof. This is reminiscent of the spectral theorem2 and we prove it in a similar way by inducting on \(\dim V\).

    So our inductive hypothesis is that such a diagonalising basis exists for symmetric bilinear forms on a vector space of dimension \(n\).

    Certainly the hypothesis holds vacuously if \(\dim V=1\). Now suppose it holds for all vector spaces of dimension at most \(n-1\) and that \(B\) is a symmetric bilinear form on a vector space \(V\) with \(\dim V=n\).

    There are two possibilities: if \(B(v,v)=0\), for all \(v\in V\), then, by Lemma 6.4, \(B(v,w)=0\), for all \(v,w\in V\), and any basis is trivially diagonalising.

    Otherwise, there is \(v_1\in V\) with \(B(v_1,v_1)\neq 0\) and we set

    \begin{equation*} U:=\Span {v_1}, \qquad W:=\set {v\st B(v_1,v)=0}\leq V. \end{equation*}

    We have:

    • (1) \(U\cap W=\set 0\): if \(\lambda v_1\in W\) then \(0=B(v_1,\lambda v_1)=\lambda B(v_1,v_1)\) forcing \(\lambda =0\).

    • (2) \(V=U+W\): for \(v\in V\), write

      \begin{equation*} v=\tfrac {B(v_1,v)}{B(v_1,v_1)}v_1+(v-\tfrac {B(v_1,v)}{B(v_1,v_1)}v_1). \end{equation*}

      The first summand is in \(U\) while

      \begin{equation*} B\bigl (v_1,v-\tfrac {B(v_1,v)}{B(v_1,v_1)}v_1\bigr )=B(v_1,v)-B(v_1,v)=0 \end{equation*}

      so the second summand is in \(W\).

    We conclude that \(V=U\oplus W\). We therefore apply the inductive hypothesis to \(B_{|W\times W}\), a symmetric bilinear form on the \((n-1)\)-dimensional space \(W\), to get a basis \(\lst {v}2n\) of \(W\) with \(B(v_i,v_j)=0\), for \(\bw 2{i\neq j}n\).

    Now \(\lst {v}1n\) is a basis of \(V\) and, further, since \(v_j\in W\), for \(j>1\), \(B(v_1,v_j)=0\) so that

    \begin{equation*} B(v_i,v_j)=0, \end{equation*}

    for all \(\bw 1{i\neq j}n\).

    Thus the theorem holds when \(\dim V=n\), completing the induction.  □

2 Algebra 1B, Theorem 5.2.11.

  • Remark. We can do a little better if \(\F \) is \(\C \) or \(\R \), by rescaling each \(v_i\) with \(B(v_i,v_i)\neq 0\):

    • (1) If \(\F =\C \), replace \(v_i\) with \(v_i/\sqrt {B(v_i,v_i)}\) to get a diagonalising basis with each \(B(v_i,v_i)\) either \(0\) or \(1\).

    • (2) If \(\F =\R \), replace \(v_i\) with \(v_i/\sqrt {\abs {B(v_i,v_i)}}\) to get a diagonalising basis with each \(B(v_i,v_i)\) either \(0\), \(1\) or \(-1\).
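
    For instance, over \(\R \), diagonal entries \(2,-3,0\) become \(1,-1,0\) after replacing \(v_1\) by \(v_1/\sqrt {2}\) and \(v_2\) by \(v_2/\sqrt {3}\).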

  • Corollary 6.6. Let \(A\in M_{n\times n}(\F )\) be symmetric. Then there is an invertible matrix \(P\in \GL (n,\F )\) such that \(P^TAP\) is diagonal.

  • Proof. We apply Theorem 6.5 to \(B_A\) to get a diagonalising basis \(\mathcal {B}\) and then let \(P\) be the change of basis matrix from the standard basis to \(\mathcal {B}\). Now apply Proposition 6.2.  □

  • Remark. When \(\F =\R \), Corollary 6.6 also follows from the spectral theorem for real symmetric matrices3, which assures the existence of \(P\in \rO (n)\) with \(P^{-1}AP=P^TAP\) diagonal.

3 Algebra 1B, Theorem 5.2.11.

Theorem 6.5 also gives us a recipe for computing a diagonalising basis: find \(v_1\) with \(B(v_1,v_1)\neq 0\), compute \(W=\set {v\st B(v_1,v)=0}\) and iterate. In more detail:

  • (1) Find \(v_1\in V\) with \(B(v_1,v_1)\neq 0\).

  • (2) Suppose we have already found \(\lst {v}1{k-1}\). Now find non-zero \(y\in V\) solving

    \begin{equation} \label {eq:16} B(v_1,y)=\dots =B(v_{k-1},y)=0. \end{equation}

  • (3) If \(k=\dim V\), take \(v_k=y\) and we are done. Otherwise:

  • (4) Inspect \(B(y,y)\). There are three possibilities:

    • (i) If \(B(y,y)\neq 0\), then set \(v_k=y\), and return to step 2 to find \(v_{k+1}\).

    • (ii) If \(B(y,y)=0\) and \(y\in \rad B\) (so that \(B(y,v)=0\) for all \(v\in V\)), then again set \(v_k=y\), and return to step 2 to find \(v_{k+1}\).

    • (iii) Otherwise reject \(y\) (it cannot be a member of a diagonalising basis4) and try another solution of (6.1).
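
The recipe is easy to mechanise. Here is a sketch in Python, with the sympy library assumed available for exact arithmetic; it hunts for \(y\) with \(B(y,y)\neq 0\) among a spanning set of solutions and their pairwise sums, which suffices by the argument proving Lemma 6.4:

\begin{verbatim}
from sympy import Matrix

def diagonalising_basis(A):
    # A: symmetric sympy Matrix; returns vectors v_1, ..., v_n with
    # B(v_i, v_j) = v_i^T A v_j = 0 whenever i != j.
    n = A.rows
    vecs = []              # basis vectors found so far
    S = Matrix.eye(n)      # columns span {y : B(v_i, y) = 0 for all i}
    while len(vecs) < n:
        cols = [S.col(j) for j in range(S.cols)]
        # steps (1)/(4)(i): seek y with B(y, y) != 0 among the
        # spanning vectors and their pairwise sums (cf. Lemma 6.4)
        cands = cols + [cols[i] + cols[j]
                        for i in range(len(cols))
                        for j in range(i + 1, len(cols))]
        y = next((c for c in cands if (c.T * A * c)[0] != 0), None)
        if y is None:
            # B vanishes on all of S: its columns lie in rad B and
            # complete the diagonalising basis (case (4)(ii))
            vecs.extend(cols)
            break
        vecs.append(y)
        # step (2): impose the new condition B(y, w) = (A y)^T w = 0
        M = Matrix.vstack(*[(A * v).T for v in vecs])
        ns = M.nullspace()
        if not ns:
            break
        S = Matrix.hstack(*ns)
    return vecs
\end{verbatim}

Applied to the matrix of example (3) below, for instance, this returns \(e_1\) together with two vectors spanning \(\rad B\).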

Here are some examples:

4 See question 2 on sheet 10.

  • Examples.

    • (1) Problem: find a diagonalising basis for \(B=B_A:\R ^3\times \R ^3\to \R \) where

      \begin{equation*} A= \begin{pmatrix} 1&2&1\\2&0&1\\1&1&0 \end {pmatrix}. \end{equation*}

      Solution: First note that \(A_{11}\neq 0\) so take \(v_1=e_1\). We seek \(v_2\) among the \(y\) such that

      \begin{equation*} 0=B(v_1,y)= \begin{pmatrix} 1&0&0 \end {pmatrix}A\by = \begin{pmatrix} 1&2&1 \end {pmatrix}\by =y_1+2y_2+y_3. \end{equation*}

      We try \(v_2=(1,-1,1)\) for which

      \begin{equation*} B(v_2,y)= \begin{pmatrix} 1&-1&1 \end {pmatrix}A\by = \begin{pmatrix} 0&3&0 \end {pmatrix}\by =3y_{2}. \end{equation*}

      In particular, \(B(v_2,v_2)=-3\neq 0\) so we can carry on.

      Now seek \(v_3\) among \(y\) such that \(B(v_1,y)=B(v_2,y)=0\), that is:

      \begin{align*} y_1+2y_2+y_3&=0\\ 3y_2&=0. \end{align*}

      A solution is given by \(v_3=(1,0,-1)\) and \(B(v_3,v_3)=-1\).

      We have therefore arrived at the diagonalising basis \((1,0,0),(1,-1,1),(1,0,-1)\).

      Note that such bases are far from unique: starting from a different \(v_1\) would give a different, equally correct answer.

    • (2) The same calculation solves another problem: find \(P\in \GL (3,\R )\) such that \(P^{T}AP\) is diagonal.

      Solution: we take our diagonalising basis as the columns of \(P\) so that

      \begin{equation*} P= \begin{pmatrix} 1&1&1\\0&-1&0\\0&1&-1 \end {pmatrix}. \end{equation*}

      • Exercise. Check that \(P^TAP\) really is diagonal! (A machine check appears after these examples.)

      • Remark. We could also solve this by finding an orthonormal basis of eigenvectors of \(A\), but this is much harder: we would have to find the eigenvalues by solving a cubic equation.

    • (3) Now let us take

      \begin{equation*} A= \begin{pmatrix} 1&2&3\\2&4&6\\3&6&9 \end {pmatrix} \end{equation*}

      and find a diagonalising basis for \(B=B_A\).

      Solution: As before, we can take \(v_1=e_1\) and seek \(v_2\) among \(y\) with

      \begin{equation*} 0=B(v_1,y)=y_1+2y_2+3y_3. \end{equation*}

      Let us try \(v_2=(3,0,-1)\). Then

      \begin{equation*} B(v_2,y)= \begin{pmatrix} 3&0&-1 \end {pmatrix}A\by =0, \end{equation*}

      for all \(y\). Otherwise said, \(v_2\in \rad B\). We keep \(v_2\) and try again with \(v_3=(0,-3,2)\). Again we find that \(v_3\in \rad B\) and conclude that \(v_1,v_2,v_3\) are a diagonalising basis with \(B(v_1,v_1)=1\) and \(B(v_2,v_2)=B(v_3,v_3)=0\).

    • (4) Here is a trick that can short-circuit these computations if there is a zero in an off-diagonal slot. Take

      \begin{equation*} A= \begin{pmatrix} 1&1&0\\1&0&1\\0&1&-1 \end {pmatrix} \end{equation*}

      and seek a diagonalising basis for \(B=B_A\).

      We can exploit the zero in the \((1,3)\)-slot of \(A\): observe that

      \begin{align*} B(e_1,e_1)&=1\\B(e_3,e_3)&=-1\\B(e_1,e_3)&=0 \end{align*}

      so we are well on the way to getting a diagonalising basis starting with \(e_1,e_3\). To get the last basis vector, we seek \(y\in \R ^3\) with

      \begin{align*} 0=B(e_1,y)&=y_1+y_2\\ 0=B(e_3,y)&=y_2-y_3. \end{align*}

      We solve these to get \(y=(-1,1,1)\), for example, so that \((1,0,0), (0,0,1), (-1,1,1)\) form a diagonalising basis, with final diagonal entry

      \begin{equation*} B(y,y)=1-2+2-1=0. \end{equation*}
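
The exercise in example (2) is quickly verified by machine. A minimal check in Python (numpy assumed, as before):

\begin{verbatim}
import numpy as np

A = np.array([[1, 2, 1], [2, 0, 1], [1, 1, 0]])
P = np.array([[1, 1, 1], [0, -1, 0], [0, 1, -1]])

print(P.T @ A @ P)  # diagonal, with entries 1, -3, -1
\end{verbatim}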

6.2.3 Sylvester’s Theorem

Let \(B\) be a symmetric bilinear form on a real finite-dimensional vector space. We know that there is a diagonalising basis \(\lst {v}1n\) with each \(B(v_i,v_i)\in \set {\pm 1,0}\) and would like to know how many of each there are. We give a complete answer.

  • Definitions. Let \(B\) be a symmetric bilinear form on a real vector space \(V\).

    Say that \(B\) is positive definite if \(B(v,v)>0\), for all \(v\in V\setminus \set 0\).

    Say that \(B\) is negative definite if \(-B\) is positive definite.

    If \(V\) is finite-dimensional, the signature of \(B\) is the pair \((p,q)\) where

    \begin{align*} p&=\max \set {\dim U\st \text {$U\leq V$ with $B_{|U\times U}$ positive definite}}\\ q&=\max \set {\dim W\st \text {$W\leq V$ with $B_{|W\times W}$ negative definite}}. \end{align*}

  • Remark. A symmetric bilinear form \(B\) on \(V\) is positive definite if and only if it is an inner product on \(V\).

The signature is easy to compute:

  • Theorem 6.7 (Sylvester’s Law of Inertia). Let \(B\) be a symmetric bilinear form of signature \((p,q)\) on a finite-dimensional real vector space. Then:

    • • \(p+q=\rank B\);

    • • any diagonal matrix representing \(B\) has \(p\) positive entries and \(q\) negative entries (necessarily on the diagonal!).

  • Proof. Set \(K=\rad B\), \(r=\rank B\) and \(n=\dim V\) so that \(\dim K=n-r\).

    Let \(U\leq V\) be a \(p\)-dimensional subspace on which \(B\) is positive definite and \(W\) a \(q\)-dimensional subspace on which \(B\) is negative definite.

    First note that \(U\cap K=\set 0\): \(B(k,k)=0\), for all \(k\in K\), while \(B(u,u)>0\), for all non-zero \(u\in U\). Thus, by the dimension formula,

    \begin{equation*} \dim (U+K)=\dim U+\dim K=p+n-r. \end{equation*}

    Moreover, if \(v=u+k\in U+K\), with \(u\in U\) and \(k\in K\), then \(B(v,v)=B(u+k,u+k)=B(u,u)\geq 0\), since \(k\in \rad B\).

    From this we see that \(W\cap (U+K)=\set 0\): if \(w\in W\cap (U+K)\) then \(B(w,w)\geq 0\) by what we just proved but also \(B(w,w)\leq 0\) since \(w\in W\). Thus \(B(w,w)=0\) and so, by definiteness on \(W\), \(w=0\). Thus

    \begin{equation*} \dim \bigl (W+(U+K)\bigr )=\dim W+\dim (U+K)=q+p+n-r\leq \dim V=n \end{equation*}

    so that \(p+q\leq r\).

    Now let \(\lst {v}1n\) be a diagonalising basis of \(B\) with \(\hat {p}\) positive entries on the diagonal of the corresponding matrix representative \(A\) of \(B\) and \(\hat {q}\) negative entries. Then \(B\) is positive definite on the \(\hat {p}\)-dimensional space \(\Span {v_i\st B(v_i,v_{i})>0}\) (exercise5!). Thus \(\hat {p}\leq p\). Similarly, \(\hat {q}\leq q\).

    However \(r=\rank A\) is the number of non-zero entries on the diagonal, that is \(r=\hat {p}+\hat {q}\). We therefore have

    \begin{equation*} r=\hat {p}+\hat {q}\leq p+q=r \end{equation*}

    so that \(p=\hat {p}\), \(q=\hat {q}\) and \(p+q=r\).  □

5 Question 4 on sheet 10.

  • Example. Find the rank and signature of \(B=B_A\) where

    \begin{equation*} A= \begin{pmatrix} 1&2&1\\2&0&1\\1&1&0 \end {pmatrix}. \end{equation*}

    Solution: we have already found a diagonalising basis \(v_1=(1,0,0), v_2=(1,-1,1), v_3=(1,0,-1)\) so we need only count how many \(B(v_i,v_i)\) are positive and how many negative. In this case, \(B(v_1,v_1)=1>0\) while \(B(v_2,v_2)=-3<0\) and \(B(v_3,v_3)=-1<0\). Thus the signature is \((1,2)\) while \(\rank B=1+2=3\).
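
By Sylvester’s Law, we could equally count signs in any other diagonal representative. Over \(\R \), the spectral theorem provides one: taking \(P\in \rO (n)\) with \(P^TAP\) diagonal, the diagonal entries are the eigenvalues of \(A\), so the signature counts positive and negative eigenvalues. A numerical sketch in Python (numpy assumed):

\begin{verbatim}
import numpy as np

A = np.array([[1, 2, 1], [2, 0, 1], [1, 1, 0]])
evals = np.linalg.eigvalsh(A)    # eigenvalues of symmetric A
p = int(np.sum(evals > 1e-12))   # positive diagonal entries
q = int(np.sum(evals < -1e-12))  # negative diagonal entries
print((p, q))                    # (1, 2), agreeing with the above
\end{verbatim}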

  • Remarks.

    • (1) Here is a useful sanity check: a symmetric bilinear form \(B\) of signature \((p,q)\) on an \(n\)-dimensional \(V\) has \(p,q,p+q\leq n\) (since \(p,q,p+q\) are all dimensions of subspaces of the \(n\)-dimensional spaces \(V\) or \(V^{*}\)).

    • (2) A symmetric bilinear form of signature \((n,0)\) on a real \(n\)-dimensional vector space is simply an inner product.

    • (3) In physics, the setting for Einstein’s theory of special relativity is a \(4\)-dimensional real vector space (space-time) equipped with a symmetric bilinear form of signature \((3,1)\).