1. Let \(U\) be a subset of a vector space \(V\). Show that \(U\) is a linear subspace of \(V\) if and only if \(U\) satisfies the following conditions:
(i) \(0\in U\);
(ii) For all \(u_1,u_2\in U\) and \(\lambda \in \F \), \(u_1+\lambda u_2\in U\).
2. Which of the following subsets of \(\R ^3\) are linear subspaces? In each case, briefly justify your answer.
(a) \(U_1:=\set {(x_1,x_2,x_3)\st x_1^2+x_2^2+x_3^2=1}\) (b) \(U_2:=\set {(x_1,x_2,x_3)\st x_1=x_2}\) (c) \(U_3:=\set {(x_1,x_2,x_3)\st x_1+2x_2+3x_3=0}\)
3. Which of the following maps \(f:\R ^2\to \R ^2\) are linear? In each case, briefly justify your answer.
(a) \(f(x,y)=(5x+y,3x-2y)\) (b) \(f(x,y)=(5x+2,7y)\) (c) \(f(x,y)=(\cos y,\sin x)\) (d) \(f(x,y)=(3y^{2},x^3)\).
4. Let \(\cI \) be a set and \(V\) a vector space over a field \(\F \). Recall that \(V^{\cI }\) is the set of maps \(\cI \to V\).
Show that \(V^{\cI }\) is a vector space under pointwise addition and scalar multiplication.
5. Let \(\R [t]\) be the space of real polynomials. This is a vector space under coefficient-wise addition and scalar multiplication.
For \(d\in \N \), let \(P_{d}\subset \R [t]\) be the set of polynomials of degree no more than \(d\). Show that \(P_d\leq \R [t]\) and that it has basis \(1,t,\dots , t^{d}\).
Define a linear map \(D:P_d\to P_d\) by \((Dp)(t)=p'(t)\). Compute its matrix with respect to \(1,t,\dots , t^{d}\). What are \(\ker D\) and \(\im D\)?
6. Which of the following subsets of \(\C ^3\) are linear subspaces over \(\C \)? In each case, briefly justify your answer.
(a) \(U_1:=\set {(z_1,z_2,z_3)\st z_1z_2=1}\) (b) \(U_2:=\set {(z_1,z_2,z_3)\st z_1=\bar {z}_2}\) (c) \(U_3:=\set {(z_1,z_2,z_3)\st z_1+\sqrt {-1}z_2+3z_3=0}\)
7. Let \(V\) be an \(n\)-dimensional vector space over \(\C \), and let \(V_\R \) be the underlying vector space over \(\R \) (thus \(V_\R \) has the same set of vectors as \(V\), but scalar multiplication is restricted to real scalars). Prove that \(V_\R \) has dimension \(2n\).
[Hint: let \(\cB :v_1,v_2,\ldots ,v_n\) be a basis for \(V\) and show that \(\cB _\R :v_1,i v_1, v_2,i v_2, \ldots ,v_n,i v_n\) is a basis for \(V_\R \), where \(i\in \C \) is \(\sqrt {-1}\) rather than an index!]
Please upload to Moodle by NOON on Friday 11th October 2019.
1. First suppose that \(U\leq V\). Then \(U\) is non-empty so there is some \(u\in U\) and then, since \(U\) is closed under addition and scalar multiplication, \(0=u+(-1)u\in U\) also and condition (i) is satisfied. Now if \(u_1, u_2\in U\) and \(\lambda \in \F \), then \(\lambda u_2\in U\) (\(U\) is closed under scalar multiplication) and so \(u_1+\lambda u_2\in U\) (\(U\) is closed under addition). Thus condition (ii) holds also.
For the converse, if conditions (i) and (ii) hold, then, first, \(0\in U\) so \(U\) is non-empty and, second, \(U\) is closed under addition (take \(\lambda =1\) in condition (ii)) and under scalar multiplication (take \(u_1=0\) in condition (ii)). Thus \(U\leq V\).
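As an illustration of how this criterion gets used (and anticipating the kernel arguments in Questions 2 and 6), here is a quick sketch: for any \(m\times n\) matrix \(A\) over \(\F \), the set \(U=\set {x\in \F ^n\st Ax=0}\) satisfies both conditions. Condition (i) holds since \(A0=0\), and for \(u_1,u_2\in U\) and \(\lambda \in \F \),
\begin{equation*} A(u_1+\lambda u_2)=Au_1+\lambda Au_2=0+\lambda 0=0, \end{equation*}
so condition (ii) holds as well and \(U\leq \F ^n\).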
2.
(a) \(U_1\) is not a subspace as it does not contain \(0\)!
(b) \(U_2\) is a subspace: in fact, it is \(\ker \phi _A\) where \(A= \begin {pmatrix} 1&-1&0 \end {pmatrix} \).
(c) \(U_3\) is a subspace. It is \(\ker \phi _A\) for \(A= \begin {pmatrix} 1&2&3 \end {pmatrix} \).
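In both (b) and (c), the kernel description can be checked in one line. For (b), for instance,
\begin{equation*} \phi _A \begin {pmatrix} x_1\\x_2\\x_3 \end {pmatrix} = \begin {pmatrix} 1&-1&0 \end {pmatrix} \begin {pmatrix} x_1\\x_2\\x_3 \end {pmatrix} =x_1-x_2, \end{equation*}
which vanishes exactly when \(x_1=x_2\); for (c), the same computation gives \(x_1+2x_2+3x_3\).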
3.
(a) Here \(f\) is linear: it is the map \(\phi _A\) corresponding to the matrix
\begin{equation*} A= \begin {pmatrix} 5&1\\3&-2 \end {pmatrix}. \end{equation*}
(b) This is not linear (because of that \(+2\) term). In particular \(f(0,0)=(2,0)\neq 0\)!
(c) Again \(f(0,0)=(1,0)\neq 0\) so this \(f\) cannot be linear. Of course, we already know this because it is certainly not true that \(\cos (y_1+y_2)=\cos y_1+\cos y_2\).
(d) Another non-linear map: for example \(f(2x,2y)\neq 2f(x,y)\).
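Explicitly, with \(f(x,y)=(3y^{2},x^3)\),
\begin{equation*} f(2x,2y)=(3(2y)^{2},(2x)^3)=(12y^{2},8x^3)\neq (6y^{2},2x^3)=2f(x,y) \end{equation*}
whenever \((x,y)\neq (0,0)\).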
4. The basic idea is that the vector space axioms for \(V^{\cI }\) will follow from those of \(V\) applied to the values of elements of \(V^{\cI }\). Since those elements are completely determined by their values, this will bake the cake.
In more detail: let \(u,v,w\in V^{\cI }\), then, for \(i\in \cI \),
\begin{equation*} (u+v)(i)=u(i)+v(i)=v(i)+u(i)=(v+u)(i), \end{equation*}
whence \(u+v=v+u\). Here the first and last equalities are just the definition of pointwise addition, while the middle one uses the commutativity of addition in \(V\).
Similarly,
\begin{equation*} ((u+v)+w)(i)=(u+v)(i)+w(i)=(u(i)+v(i))+w(i)= u(i)+(v(i)+w(i))=(u+(v+w))(i) \end{equation*}
so that \((u+v)+w=u+(v+w)\).
The zero element is the zero map defined by \(0(i):=0\), for all \(i\in \cI \), while the additive inverse \(-v\) of \(v\in V^{\cI }\) is defined by \((-v)(i):=-(v(i))\). Now
\begin{align*} (v+0)(i)&=v(i)+0(i)=v(i)+0=v(i)\\ (v+(-v))(i)&=v(i)+(-v)(i)=v(i)-v(i)=0=0(i) \end{align*} so that \(v+0=v\) and \(v+(-v)=0\) as required.
The axioms around scalar multiplication are verified in the same way. For example, for \(\lambda ,\mu \in \F \),
\begin{equation*} ((\lambda +\mu )v)(i)=(\lambda +\mu )(v(i))=\lambda (v(i))+\mu (v(i))=(\lambda v)(i)+(\mu v)(i)=(\lambda v+\mu v)(i) \end{equation*}
so that \((\lambda +\mu )v=\lambda v+\mu v\).
Again, for \(u,v\in V^{\cI }\) and \(\lambda \in \F \),
\begin{multline*} (\lambda (u+v))(i)=\lambda (u+v)(i)=\lambda (u(i)+v(i))=\lambda u(i)+\lambda v(i)\\ =(\lambda u)(i)+(\lambda v)(i)=(\lambda u+\lambda v)(i) \end{multline*} so that \(\lambda (u+v)=\lambda u+\lambda v\).
For \(\lambda ,\mu \in \F \) and \(v\in V^{\cI }\),
\begin{equation*} ((\lambda \mu )v)(i)=(\lambda \mu )(v(i))=\lambda (\mu (v(i)))=\lambda ((\mu v)(i))=(\lambda (\mu v))(i) \end{equation*}
so that \((\lambda \mu )v=\lambda (\mu v)\).
Finally, \((1v)(i)=1v(i)=v(i)\) so that \(1v=v\) and we are (at last!) done.
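As a sanity check on the construction, taking \(\cI =\set {1,\dots ,n}\) and \(V=\F \), a map \(\set {1,\dots ,n}\to \F \) is just an \(n\)-tuple of scalars and the pointwise operations
\begin{equation*} (u+v)(i)=u(i)+v(i),\qquad (\lambda v)(i)=\lambda (v(i)) \end{equation*}
are the usual component-wise ones, so this construction recovers the familiar vector space \(\F ^n\).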
5. Clearly \(P_d\) is non-empty as it contains the zero polynomial. Moreover, for any polynomials \(p,q\) and \(\lambda \in \R \), we have
\begin{align*} \deg (p+q)&\leq \max \set {\deg p,\deg q}\\ \deg (\lambda p)&\leq \deg p, \end{align*} from which it easily follows that \(P_d\) is closed under addition and scalar multiplication.
Any polynomial \(p(t)\in P_d\) has a unique expression of the form
\begin{equation*} p(t)=a_0+a_1t+\dots +a_dt^{d}. \end{equation*}
Here we get the uniqueness because \(a_k=p^{(k)}(0)/k!\), for each \(0\leq k\leq d\). It now follows from Proposition 1.1 that \(1,\rng {t}{t^d}\) is a basis for \(P_d\).
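To see where the formula for \(a_k\) comes from, differentiate \(k\) times:
\begin{equation*} p^{(k)}(t)=\sum _{j=k}^{d}\frac {j!}{(j-k)!}\,a_jt^{j-k}. \end{equation*}
Evaluating at \(t=0\), only the \(j=k\) term survives, giving \(p^{(k)}(0)=k!\,a_k\), that is, \(a_k=p^{(k)}(0)/k!\).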
Set \(v_j(t)=t^{j-1}\), for \(1\leq j\leq d+1\), and compute \(D v_j\) in terms of the \(v_i\):
\begin{equation*} D v_j=(j-1)v_{j-1}\quad \text {for } 2\leq j\leq d+1,\qquad Dv_1=0, \end{equation*}
so that the matrix \(A\) of \(D\) with respect to this basis has all entries 0 except just above the diagonal where \(A_{(j-1)j}=j-1\). For example, if \(d=3\), we have
\begin{equation*} A= \begin {pmatrix} 0&1&0&0\\0&0&2&0\\0&0&0&3\\0&0&0&0 \end {pmatrix}. \end{equation*}
The kernel of \(D\) is the constant polynomials \(P_0\) and the image is \(P_{d-1}\).
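To justify these claims: if \(Dp=0\) then, by the uniqueness of coefficients, \(ka_k=0\) for \(1\leq k\leq d\), so \(p=a_0\) is constant. For the image, differentiation lowers degree, so \(\im D\subseteq P_{d-1}\), and conversely every \(q\in P_{d-1}\) has an antiderivative in \(P_d\):
\begin{equation*} q(t)=b_0+b_1t+\dots +b_{d-1}t^{d-1}=(Dp)(t)\quad \text {for}\quad p(t)=b_0t+\tfrac {b_1}{2}t^2+\dots +\tfrac {b_{d-1}}{d}t^{d}\in P_d. \end{equation*}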
6.
(a) \(0\notin U_1\) so \(U_1\) is not a subspace.
(b) \(U_2\) is not a subspace because it is not closed under complex scalar multiplication: \((1,1,0)\in U_2\) but \(i(1,1,0)=(i,i,0)\) is not (here \(i=\sqrt {-1}\)). In general, any time you see complex conjugation in the definition of a subset, it is unlikely to be a complex subspace.
(c) \(U_3=\ker \phi _A\) for \(A= \begin {pmatrix} 1&\sqrt {-1}&3 \end {pmatrix} \) and so is a subspace.
7. Following the hint, we need to show that any \(v\in V_\R \) can be written uniquely as a real linear combination of the vectors in the list \(\cB _\R \). Since \(v\in V\), we may write \(v=\sum _{j=1}^n \lambda _j v_j\) for unique \(\lambda _j\in \C \). Writing \(\lambda _j=a_j+i b_j\) with \(a_j,b_j\in \R \), we get \(v=\sum _{j=1}^n (a_j v_j+b_j (i v_j))\), so \(\cB _\R \) spans \(V_\R \). The expression is unique: by linearity, it suffices to check that the only way to write \(v=0\) is with all coefficients zero, which is spelled out below.
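Spelling out the uniqueness step: if \(\sum _{j=1}^n (a_j v_j+b_j (i v_j))=0\) with \(a_j,b_j\in \R \), then
\begin{equation*} 0=\sum _{j=1}^n (a_j v_j+b_j (i v_j))=\sum _{j=1}^n (a_j+i b_j)v_j, \end{equation*}
so \(a_j+i b_j=0\), for all \(j\), by the linear independence of \(\cB \) over \(\C \), and hence \(a_j=b_j=0\), for all \(j\). Thus \(\cB _\R \) is a basis for \(V_\R \) with \(2n\) vectors and so \(\dim V_\R =2n\).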