5.3 Application: Quadratic forms

  • Convention. We continue working with a field \(\F \) where \(1+1\neq 0\).

We can construct a function on \(V\) from a bilinear form \(B\) (which is a function on \(V\times V\)).

  • Definition. A quadratic form on a vector space \(V\) over \(\F \) is a function \(Q:V\to \F \) of the form

    \begin{equation*} Q(v)=B(v,v), \end{equation*}

    for all \(v\in V\), where \(B:V\times V\to \F \) is a symmetric bilinear form.

  • Remark. For \(v\in V\) and \(\lambda \in \F \), \(Q(\lambda v)=B(\lambda v,\lambda v)=\lambda ^2Q(v)\) so \(Q\) is emphatically not a linear function!
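The quadratic scaling \(Q(\lambda v)=\lambda ^2Q(v)\) is easy to illustrate numerically. A minimal sketch over \(\R \), where `A` is just a random symmetric matrix and \(Q(v)=B_A(v,v)\):

```python
import numpy as np

# Q(lambda * v) = lambda^2 * Q(v): numerical illustration over R,
# using Q(v) = v^T A v for a random symmetric matrix A.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2            # symmetrise so B_A is symmetric
v, lam = rng.standard_normal(3), 3.0
assert np.isclose((lam * v) @ A @ (lam * v), lam**2 * (v @ A @ v))
```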

  • Examples. Here are two quadratic forms on \(\F ^3\):

    • (1) \(Q(x)=x_1^2+x_2^2-x_3^2=B_A(x,x)\) where

      \begin{equation*} A= \begin{pmatrix} 1&0&0\\0&1&0\\0&0&-1 \end {pmatrix}. \end{equation*}

    • (2) \(Q(x)=x_1x_2=B_A(x,x)\) where

      \begin{equation*} A= \begin{pmatrix} 0&\half &0\\\half &0&0\\0&0&0 \end {pmatrix}. \end{equation*}
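As a sanity check (working numerically over \(\R \)), example (2) can be verified directly: the matrix below is the one displayed above, and \(x^{T}Ax\) should agree with \(x_1x_2\) at any point.

```python
import numpy as np

# Check example (2): Q(x) = x1 * x2 agrees with B_A(x, x) = x^T A x.
A = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
assert np.isclose(x @ A @ x, x[0] * x[1])
```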

We can recover the symmetric bilinear form \(B\) from its quadratic form \(Q\):

  • Lemma 5.9. Let \(Q:V\to \F \) be a quadratic form with \(Q(v)=B(v,v)\) for a symmetric bilinear form \(B\). Then

    \begin{equation*} B(v,w)=\half \bigl (Q(v+w)-Q(v)-Q(w)\bigr ), \end{equation*}

    for all \(v,w\in V\).

    \(B\) is called the polarisation of \(Q\).

  • Proof. Expand out to get

    \begin{equation*} Q(v+w)-Q(v)-Q(w)=B(v,w)+B(w,v)=2B(v,w). \end{equation*}

     □
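The polarisation identity of Lemma 5.9 is also easy to test numerically. A quick sketch over \(\R \) with a randomly generated symmetric matrix (the names `Q` and `A` are local to this check):

```python
import numpy as np

# Check Lemma 5.9 numerically: B(v, w) = (Q(v+w) - Q(v) - Q(w)) / 2,
# where Q(v) = v^T A v and A is a random symmetric matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2            # symmetrise so B_A is a symmetric bilinear form

def Q(v):
    return v @ A @ v

v, w = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose((Q(v + w) - Q(v) - Q(w)) / 2, v @ A @ w)
```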

Here is how to do polarisation in practice: any quadratic form \(Q:\F ^n\to \F \) is of the form

\begin{equation*} Q(x)=\sum _{1\leq i\leq j\leq n}q_{ij}x_ix_j=\bx ^{T} \begin{pmatrix} q_{11}&&\half q_{ji}\\&\ddots &\\\half q_{ij}&&q_{nn} \end {pmatrix}\bx \end{equation*}

so that the polarisation is \(B_A\) where

\begin{equation*} A_{ij}=A_{ji}= \begin{cases} q_{ii}&\text {if $i=j$;}\\ \half q_{ij}&\text {if $i<j$}. \end {cases} \end{equation*}

  • Example. Let \(Q:\R ^3\to \R \) be given by

    \begin{equation*} Q(x)=x_1^2+2x_2^2+2x_1x_2+x_1x_3. \end{equation*}

    Let us find the polarisation \(B\) of \(Q\), that is, we find \(A\) so that \(B=B_A\): we have \(q_{11}=1\), \(q_{22}=2\), \(q_{12}=2\) and \(q_{13}=1\) with all other \(q_{ij}\) vanishing so

    \begin{equation*} A= \begin{pmatrix} 1&1&\half \\1&2&0\\\half &0&0 \end {pmatrix}. \end{equation*}
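The recipe for reading off the polarisation matrix from the coefficients \(q_{ij}\) can be mechanised. Below is a sketch over \(\R \); the helper `polarisation_matrix` is a hypothetical name, not from the text, and we check it against the worked example just computed.

```python
import numpy as np

def polarisation_matrix(q, n):
    """Build the symmetric matrix A of the polarisation from the
    coefficients q[(i, j)] of Q(x) = sum_{i <= j} q_ij x_i x_j.
    Indices are 1-based to match the text.  (Hypothetical helper.)"""
    A = np.zeros((n, n))
    for (i, j), c in q.items():
        if i == j:
            A[i - 1, i - 1] = c          # diagonal: A_ii = q_ii
        else:
            A[i - 1, j - 1] = A[j - 1, i - 1] = c / 2   # off-diagonal: half q_ij
    return A

# The worked example: Q(x) = x1^2 + 2 x2^2 + 2 x1 x2 + x1 x3.
A = polarisation_matrix({(1, 1): 1, (2, 2): 2, (1, 2): 2, (1, 3): 1}, 3)
assert np.allclose(A, [[1, 1, 0.5], [1, 2, 0], [0.5, 0, 0]])
```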

  • Definitions. Let \(Q\) be a quadratic form on a finite-dimensional vector space \(V\) over \(\F \).

    The rank of \(Q\) is the rank of its polarisation.

    If \(\F =\R \), the signature of \(Q\) is the signature of its polarisation.

What does the diagonalisation theorem mean for a quadratic form \(Q\)? We take a practical point of view and let \(Q:\F ^n\to \F \) be a quadratic form on \(\F ^n\) with polarisation \(B\). Take a diagonalising basis \(\lst {v}1n\) of \(B\) and let \(P\) be the change of basis matrix from the standard basis to \(\lst {v}1n\). Then, with \(x=\sum _ix_ie_i=\sum _jy_jv_j\), we have

\begin{equation*} Q(x)=\sum _{i=1}^nB(v_i,v_i)y_i^2=\sum _{i=1}^nB(v_i,v_i)(\sum _{j=1}^n\hat {P}_{ij}x_j)^2, \end{equation*}

where \(\hat {P}_{ij}=(P^{-1})_{ij}\). In other words, \(Q\) is a linear combination of squares of linear functions of the \(x_i\), and these linear functions have linearly independent coefficients (the rows of \(P^{-1}\)).
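This change of variables can be demonstrated numerically. A sketch over \(\R \): here the diagonalising basis is taken to be an orthonormal eigenbasis of a random symmetric `A` (one convenient choice; any diagonalising basis works), and we check \(Q(x)=\sum _iB(v_i,v_i)y_i^2\) with \(y=P^{-1}x\).

```python
import numpy as np

# With diagonalising basis v_1,...,v_n (columns of P) and y = P^{-1} x,
# check that Q(x) = sum_i B(v_i, v_i) * y_i^2.
rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2
_, P = np.linalg.eigh(A)       # columns of P diagonalise B_A
x = rng.standard_normal(3)
y = np.linalg.solve(P, x)      # y = P^{-1} x
d = np.array([P[:, i] @ A @ P[:, i] for i in range(3)])   # B(v_i, v_i)
assert np.isclose(x @ A @ x, np.sum(d * y**2))
```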

Let us now apply the classification results of §5.2 and summarise the situation for quadratic forms on vector spaces over our favourite fields:

  • Theorem 5.10. Let \(Q\) be a quadratic form of rank \(r\) on a finite-dimensional vector space \(V\) over \(\F \).

    • (1) When \(\F =\C \), there is a basis \(\lst {v}1n\) of \(V\) such that

      \begin{equation*} Q(\sum _{i=1}^nx_iv_i)=\plst {x^2}1r. \end{equation*}

    • (2) When \(\F =\R \) and \(Q\) has signature \((p,q)\), there is a basis \(\lst {v}1n\) of \(V\) such that

      \begin{equation*} Q(\sum _{i=1}^nx_iv_i)=\plst {x^2}1p-x_{p+1}^2-\dots -x_r^2. \end{equation*}

  • Example. Find the signature of \(Q:\R ^3\to \R \) given by

    \begin{equation*} Q(x)=x_1^2+x_2^2+x_3^2+2x_1x_3+4x_2x_3. \end{equation*}

    \(Q\) has polarisation \(B=B_A\) with

    \begin{equation*} A= \begin{pmatrix} 1&0&1\\0&1&2\\1&2&1 \end {pmatrix}. \end{equation*}

    Solution: exploit the zero in the \((1,2)\)-slot of \(A\): choosing \(y=(-1,-2,1)\) so that \(B(e_1,y)=B(e_2,y)=0\), we see that \(e_1,e_2,y\) is a diagonalising basis and so get a diagonal matrix representing \(B\) with \(Q(e_1)=Q(e_2)=1>0\) and \(Q(y)=-4<0\) along the diagonal. So the signature is \((2,1)\).
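The claimed diagonalising basis can be confirmed by computing \(P^{T}AP\), where the columns of \(P\) are the basis vectors. A quick numerical check:

```python
import numpy as np

# Verify that e1, e2, y = (-1, -2, 1) diagonalise B_A, and read off
# the signature from the diagonal entries of P^T A P.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0]])
P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])   # columns are the basis vectors

D = P.T @ A @ P
assert np.allclose(D, np.diag([1.0, 1.0, -4.0]))
# Two positive entries and one negative entry: signature (2, 1).
```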

    Here are two alternative techniques:

    • (1) Orthogonal diagonalisation yields a diagonal matrix representing \(B\) with the eigenvalues of \(A\) down the diagonal so we just count how many positive and negative eigenvalues there are.

      In fact, \(A\) has eigenvalues \(1\) and \(1\pm \sqrt {5}\). Since \(\sqrt {5}>2\), \(1-\sqrt {5}<0\) and we again conclude that the signature is \((2,1)\).

      Danger: this method needed us to solve a cubic equation which is already difficult. For an \(n\times n\) \(A\) with \(n\geq 5\), this could be impossible!

    • (2) Finally, we could try to write \(Q\) as a linear combination of linearly independent squares and then count the number of positive and negative coefficients. In fact,

      \begin{align*} Q(x)&=x_1^2+x_2^2+x_3^2+2x_1x_3+4x_2x_3\\ &=(x_1+x_3)^2+x_2^2+4x_2x_3=(x_1+x_3)^2+(x_2+2x_3)^2-4x_3^2. \end{align*}

      We must check that the linear functions \(x_1+x_3, x_2+2x_3,x_3\) have linearly independent coefficients (that is, \((1,0,1)\), \((0,1,2)\), \((0,0,1)\) are linearly independent) but that is easy. Now the coefficients of these squares are \(1,1,-4\) and so, once more, we get that the signature is \((2,1)\).
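Both alternative techniques can be checked numerically. A sketch: technique (1) counts the signs of the eigenvalues (computed here with NumPy rather than by solving the cubic by hand), and technique (2) tests the completing-the-square identity at random points and the independence of the coefficient rows.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0]])

# Technique (1): count positive and negative eigenvalues of A.
evals = np.linalg.eigvalsh(A)
p, q = int(np.sum(evals > 0)), int(np.sum(evals < 0))
assert (p, q) == (2, 1)

# Technique (2): check the completing-the-square identity at random
# points, and that the coefficient rows of the linear forms are
# linearly independent.
rng = np.random.default_rng(4)
for x1, x2, x3 in rng.standard_normal((5, 3)):
    Q = x1**2 + x2**2 + x3**2 + 2*x1*x3 + 4*x2*x3
    assert np.isclose(Q, (x1 + x3)**2 + (x2 + 2*x3)**2 - 4*x3**2)

M = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(M) == 3
```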