
Section 5.2 Null space, image, and isomorphisms

Having introduced linear transformations, we now treat them as proper objects of study. Forget for a moment the linear algebraic nature of a linear transformation \(T\colon V\rightarrow W\text{,}\) and think of it just as a function. Purely along function-theoretic lines, we want to know whether \(T\) is injective, surjective, and invertible. As we will learn, there are two subspaces associated to a linear transformation \(T\text{,}\) its null space and image, that provide an easy way of answering these questions. We will also see that in the case of a matrix transformation \(T_A\text{,}\) these associated spaces coincide with two of the fundamental spaces of the matrix \(A\text{.}\) (You can probably guess one of these.)

Subsection 5.2.1 Null space and image

Definition 5.2.1. Null space and image.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  1. Null space.
    The null space of \(T\text{,}\) denoted \(\NS T\text{,}\) is defined as
    \begin{equation*} \NS T=\{\boldv\in V\colon T(\boldv)=\boldzero_W\}\text{.} \end{equation*}
  2. Image.
    The image (or range) of \(T\text{,}\) denoted \(\im T\text{,}\) is defined as
    \begin{equation*} \im T=\{\boldw\in W\colon \boldw=T(\boldv) \text{ for some } \boldv\in V \}\text{.} \end{equation*}
As with the fundamental spaces of a matrix, given a linear transformation \(T\colon V\rightarrow W\) it is helpful to keep straight the different ambient spaces where \(\NS T\) and \(\im T\) live. As illustrated by Figure 5.2.2.(a), we have \(\NS T\subseteq V\) and \(\im T\subseteq W\text{:}\) that is, the null space is a subset of the domain of \(T\text{,}\) and the image is a subset of the codomain. Figures 5.2.2.(b) and 5.2.2.(c) go on to convey that \(\NS T\) is the set of elements of \(V\) that are mapped to \(\boldzero_W\text{,}\) and that \(\im T\) is the set of outputs of \(T\text{.}\)
(a) Null space lives in the domain; image lives in the codomain.
(b) The entire null space gets mapped to \(\boldzero_W\text{.}\)
(c) The entire domain is mapped to \(\im T\text{.}\)
Figure 5.2.2. Null space and image
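For matrix transformations, both spaces can be computed mechanically. The following sketch uses SymPy's nullspace and columnspace methods on a made-up matrix; it is an illustration of the definitions, not part of the text's development.

from sympy import Matrix

# T_A maps R^3 to R^2; A is a made-up example.
A = Matrix([[1, 0, 2],
            [0, 1, 3]])

print(A.nullspace())    # basis of NS(T_A): [Matrix([[-2], [-3], [1]])]
print(A.columnspace())  # basis of im(T_A): [Matrix([[1], [0]]), Matrix([[0], [1]])]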
As mentioned above, the null space and image of a linear transformation are subspaces, as we now show.

Theorem 5.2.3. Null space and image are subspaces.

Let \(T\colon V\rightarrow W\) be a linear transformation. Then \(\NS T\) is a subspace of \(V\text{,}\) and \(\im T\) is a subspace of \(W\text{.}\)

Proof.

Null space of \(T\).
We use the two-step technique to prove \(\NS T\) is a subspace.
  1. Since \(T(\boldzero_V)=\boldzero_W\) (Theorem 3.2.12), we see that \(\boldzero_V\in \NS T\text{.}\)
  2. Suppose \(\boldv_1, \boldv_2\in \NS T\text{.}\) Given any \(c,d\in \R\text{,}\) we have
    \begin{align*} T(c\boldv_1+d\boldv_2) \amp=cT(\boldv_1)+dT(\boldv_2) \amp (T \text{ is linear, } \text{Theorem 3.2.12})\\ \amp=c\boldzero_W+d\boldzero_W \amp (\boldv_1, \boldv_2\in \NS T) \\ \amp = \boldzero_W\text{.} \end{align*}
    This shows that \(c\boldv_1+d\boldv_2\in \NS T\text{,}\) completing our proof.
Image of \(T\).
The proof proceeds in a similar manner, using the two-step technique.
  1. Since \(T(\boldzero_V)=\boldzero_W\) (Theorem 3.2.12), we see that \(\boldzero_W\) is "hit" by \(T\text{,}\) and hence is a member of \(\im T\text{.}\)
  2. Assume vectors \(\boldw_1, \boldw_2\in W\) are elements of \(\im T\text{.}\) By definition, this means there are vectors \(\boldv_1, \boldv_2\in V\) such that \(T(\boldv_i)=\boldw_i\) for \(1\leq i\leq 2\text{.}\) Now given any linear combination \(\boldw=c\boldw_1+d\boldw_2\text{,}\) we have
    \begin{align*} T(c\boldv_1+d\boldv_2) \amp = cT(\boldv_1)+dT(\boldv_2)\\ \amp =c\boldw_1+d\boldw_2\\ \amp =\boldw\text{.} \end{align*}
    This shows that for any linear combination \(\boldw=c\boldw_1+d\boldw_2\text{,}\) there is an element \(\boldv=c\boldv_1+d\boldv_2\) such that \(T(\boldv)=\boldw\text{.}\) We conclude that if \(\boldw_1, \boldw_2\in \im T\text{,}\) then \(\boldw=c\boldw_1+d\boldw_2\in \im T\) for any \(c,d\in \R\text{,}\) as desired.

Example 5.2.4.

Define \(F\colon M_{nn}\rightarrow M_{nn}\) as \(F(A)=A^T-A\text{.}\)
  1. Prove that \(F\) is linear.
  2. Identify \(\NS F\) as a familiar matrix subspace.
  3. Identify \(\im F\) as a familiar matrix subspace.
Solution.
  1. Linearity is an easy consequence of transpose properties. For any \(A_1, A_2\in M_{nn}\) and \(c_1,c_2\in \R\text{,}\) we have
    \begin{align*} F(c_1A_1+c_2A_2) \amp= (c_1A_1+c_2A_2)^T-(c_1A_1+c_2A_2) \\ \amp = c_1A_1^T+c_2A_2^T-c_1A_1-c_2A_2\amp (\text{transpose properties}) \\ \amp =c_1(A_1^T-A_1)+c_2(A_2^T-A_2)\\ \amp =c_1F(A_1)+c_2F(A_2)\text{.} \end{align*}
  2. We have
    \begin{align*} \NS F \amp= \{A\in M_{nn}\colon F(A)=\boldzero\} \\ \amp=\{A\in M_{nn}\colon A^T-A=\boldzero\} \\ \amp=\{A\in M_{nn}\colon A^T=A\} \text{.} \end{align*}
    Thus \(\NS F\) is the subspace of symmetric \(n\times n\) matrices!
  3. Let \(W=\{B\in M_{nn}\colon B^T=-B\}\text{,}\) the subspace of skew-symmetric \(n\times n\) matrices. We claim \(\im F=W\text{.}\) As this is a set equality, we prove it by showing the two set inclusions \(\im F\subseteq W\) and \(W\subseteq \im F\text{.}\) (See Basic set properties.)
    The inclusion \(\im F\subseteq W\) is the easier of the two. If \(B\in \im F\text{,}\) then \(B=F(A)=A^T-A\) for some \(A\in M_{nn}\text{.}\) Using various properties of transposition, we have
    \begin{equation*} B^T=(A^T-A)^T=(A^T)^T-A^T=-(A^T-A)=-B\text{,} \end{equation*}
    showing that \(B\) is skew-symmetric, and thus \(B\in W\text{,}\) as desired.
    The inclusion \(W\subseteq \im F\) is trickier: we must show that if \(B\) is skew-symmetric, then there is an \(A\) such that \(B=F(A)=A^T-A\text{.}\) Assume we have a \(B\) with \(B^T=-B\text{.}\) Letting \(A=-\frac{1}{2}B\) we have
    \begin{equation*} A^T-A=(-\frac{1}{2}B)^T+\frac{1}{2}B=\frac{1}{2}(-B^T+B)=\frac{1}{2}(B+B)=B\text{.} \end{equation*}
    Thus we have found a matrix \(A\) satisfying \(F(A)=B\text{.}\) It follows that \(B\in\im F\text{,}\) as desired.
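As a numerical sanity check of this example (a NumPy sketch with random matrices; the specific matrices are made up), one can verify that every output of \(F\) is skew-symmetric and that \(A=-\frac{1}{2}B\) recovers any skew-symmetric \(B\text{:}\)

import numpy as np

rng = np.random.default_rng(0)
n = 4

def F(A):
    return A.T - A

A = rng.standard_normal((n, n))
B = F(A)
assert np.allclose(B.T, -B)        # outputs of F are skew-symmetric

S = rng.standard_normal((n, n))
B = S - S.T                        # an arbitrary skew-symmetric matrix
assert np.allclose(F(-B / 2), B)   # F(-B/2) = B, so B is in im F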

Remark 5.2.5. Subspace as null space.

As illustrated by Example 5.2.4, Theorem 5.2.3 provides an alternative technique for proving that a subset \(W\subseteq V\) is in fact a subspace: namely, find a linear transformation \(T\colon V\rightarrow Z\) such that \(W=\NS T\text{.}\)
Not surprisingly, there is a connection between the null space of a matrix, as defined in Definition 4.1.10, and our new notion of null space. Indeed, given an \(m\times n\) matrix \(A\text{,}\) for all \(\boldx\in \R^n\) we have
\begin{align*} \boldx\in \NS A \amp \iff A\boldx=\boldzero \\ \amp \iff T_A(\boldx)=\boldzero \amp (\text{def. of } T_A)\\ \amp \iff \boldx\in \NS T_A\text{,} \end{align*}
and thus \(\NS A=\NS T_A\text{.}\) Furthermore, as we show next, we have \(\CS A=\im T_A\text{.}\)

Theorem 5.2.6. Null space and image of a matrix transformation.

Let \(A\) be an \(m\times n\) matrix, and let \(T_A\colon \R^n\rightarrow \R^m\) be its matrix transformation. Then \(\NS T_A=\NS A\) and \(\im T_A=\CS A\text{.}\)

Proof.

The first equality was discussed above. As for the second, we have
\begin{align*} \boldy\in \CS A \amp \iff \boldy=A\boldx \text{ for some } \boldx\in \R^n \amp (\text{Theorem 4.5.5}) \\ \amp \iff \boldy=T_A(\boldx) \text{ for some } \boldx\in \R^n \amp (\text{def. of } T_A)\\ \amp \iff \boldy\in \im T_A \amp (\text{def. of } \im T_A)\text{.} \end{align*}
Thus \(\CS A=\im T_A\text{.}\)

Example 5.2.7. Matrix transformation.

Let
\begin{equation*} A=\begin{amatrix}[rrrr] 1\amp 2\amp 3\amp 4\\ 2\amp 4\amp 6\amp 8 \end{amatrix}\text{,} \end{equation*}
and let \(T_A\colon \R^4\rightarrow \R^2\) be its associated matrix transformation. Provide bases for \(\NS T_A\) and \(\im T_A\) and compute the dimensions of these spaces.
Solution.
We have \(\NS T_A=\NS A\) and \(\im T_A=\CS A\text{.}\) Following Procedure 4.5.10, we first row reduce \(A\) to
\begin{equation*} \begin{amatrix}[rrrr] \boxed{1}\amp 2\amp 3\amp 4\\ 0\amp 0\amp 0\amp 0 \end{amatrix}\text{,} \end{equation*}
and conclude that
\begin{equation*} \NS T_A=\NS A=\{(-2r-3s-4t,r,s,t)\colon r,s,t\in \R\}=\Span\{(-2,1,0,0),(-3,0,1,0),(-4,0,0,1)\} \end{equation*}
and
\begin{equation*} \im T_A=\CS A=\Span\{(1,2)\}\text{.} \end{equation*}
The spanning vectors above form bases of the respective spaces, and thus \(\dim \NS T_A=3\) and \(\dim \im T_A=1\text{.}\)
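The computation above is easy to double-check with SymPy (a verification sketch, not part of the procedure):

from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8]])

print(A.nullspace())    # (-2,1,0,0), (-3,0,1,0), (-4,0,0,1)
print(A.columnspace())  # (1, 2)
print(len(A.nullspace()), A.rank())  # dimensions: 3 and 1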

Subsection 5.2.2 The rank-nullity theorem

The rank-nullity theorem relates the dimensions of the null space and image of a linear transformation \(T\colon V\rightarrow W\text{,}\) assuming \(V\) is finite-dimensional. Roughly speaking, it says that the bigger the null space, the smaller the image. More precisely, it tells us that
\begin{equation*} \dim V=\dim\NS T+\dim\im T\text{.} \end{equation*}
As we will see, this elegant result can be used to significantly simplify computations with linear transformations. For example, in a situation where we wish to compute explicitly both the null space and image of a given linear transformation, we can often get away with just computing one of the two spaces and using the rank-nullity theorem (and a dimension argument) to easily determine the other. Additionally, the rank-nullity theorem directly implies some intuitively obvious properties of linear transformations. For example, suppose \(V\) is a finite-dimensional vector space. It seems obvious that if \(\dim W> \dim V\text{,}\) then there is no linear transformation mapping \(V\) surjectively onto \(W\text{:}\) i.e., you should not be able to map a "smaller" vector space onto a "bigger" one. Similarly, if \(\dim W \lt \dim V\text{,}\) then there is no injective linear transformation mapping \(V\) into \(W\text{.}\) Both these results are easy consequences of the rank-nullity theorem.
Before proving the theorem we give names to \(\dim \NS T\) and \(\dim\im T\text{.}\)

Definition 5.2.8. Rank and nullity.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  • The rank of \(T\text{,}\) denoted \(\rank T\text{,}\) is the dimension of \(\im T\text{:}\) i.e.,
    \begin{equation*} \rank T=\dim\im T\text{.} \end{equation*}
  • The nullity of \(T\text{,}\) denoted \(\nullity T\text{,}\) is the dimension of \(\NS T\text{:}\) i.e.,
    \begin{equation*} \nullity T=\dim\NS T\text{.} \end{equation*}

Theorem 5.2.9. Rank-nullity theorem.

Let \(T\colon V\rightarrow W\) be a linear transformation, where \(V\) is finite-dimensional. Then
\begin{equation*} \dim V=\nullity T+\rank T\text{.} \end{equation*}

Proof.

Choose a basis \(B'=\{\boldv_1, \boldv_2, \dots, \boldv_k\}\) of \(\NS T\) and extend \(B'\) to a basis \(B=\{\boldv_1, \boldv_2,\dots, \boldv_k,\boldv_{k+1},\dots, \boldv_n\}\text{,}\) using Theorem 4.4.15. Observe that \(\dim\NS T=\nullity T=k\) and \(\dim V=n\text{.}\)
We claim that \(B''=\{T(\boldv_{k+1}),T(\boldv_{k+2}),\dots, T(\boldv_{n})\}\) is a basis of \(\im T\text{.}\)
Proof of claim.
\(B''\) is linearly independent.
Suppose \(a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n)=\boldzero\text{.}\) Then the vector \(\boldv=a_{k+1}\boldv_{k+1}+a_{k+2}\boldv_{k+2}+\cdots +a_n\boldv_n\) satisfies \(T(\boldv)=\boldzero\) (using linearity of \(T\)), and hence \(\boldv\in \NS T\text{.}\) Then, using the fact that \(B'\) is a basis of \(\NS T\text{,}\) we have
\begin{equation*} b_1\boldv_1+b_2\boldv_2+\cdots +b_k\boldv_k=\boldv=a_{k+1}\boldv_{k+1}+a_{k+2}\boldv_{k+2}+\cdots +a_n\boldv_n, \end{equation*}
and hence
\begin{equation*} b_1\boldv_1+b_2\boldv_2+\cdots +b_k\boldv_k-a_{k+1}\boldv_{k+1}-a_{k+2}\boldv_{k+2}-\cdots -a_n\boldv_n=\boldzero. \end{equation*}
Since the set \(B\) is linearly independent, we conclude that \(b_i=a_j=0\) for all \(1\leq i\leq k\) and \(k+1\leq j\leq n\text{.}\) In particular, \(a_{k+1}=a_{k+2}=\cdots=a_n=0\text{,}\) as desired.
\(B''\) spans \(\im T\).
It is clear that \(\Span B''\subseteq \im T\) since \(T(\boldv_i)\in \im T\) for all \(k+1\leq i\leq n\) and \(\im T\) is closed under linear combinations.
For the other direction, suppose \(\boldw\in \im T\text{.}\) Then there is a \(\boldv\in V\) such that \(\boldw=T(\boldv)\text{.}\) Since \(B\) is a basis of \(V\) we may write
\begin{equation*} \boldv=a_1\boldv_1+a_2\boldv_2+\cdots +a_k\boldv_k+a_{k+1}\boldv_{k+1}+\cdots +a_n\boldv_n\text{,} \end{equation*}
in which case
\begin{align*} \boldw=T(\boldv)\amp= T(a_1\boldv_1+a_2\boldv_2+\cdots +a_k\boldv_k+a_{k+1}\boldv_{k+1}+\cdots +a_n\boldv_n)\\ \amp=a_1T(\boldv_1)+a_2T(\boldv_2)+\cdots +a_kT(\boldv_k)+a_{k+1}T(\boldv_{k+1})+\cdots +a_nT(\boldv_n) \amp (T \text{ is linear})\\ \amp=a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n) \amp (\boldv_i\in\NS T \text{ for } 1\leq i\leq k)\text{.} \end{align*}
This shows that \(\boldw=a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n)\in \Span B''\text{,}\) as desired.
Having shown that \(B''\) is a basis of \(\im T\text{,}\) we conclude that \(\dim \im T=\abs{B''}=n-(k+1)+1=n-k\text{,}\) and thus
\begin{align*} \dim V \amp=k+(n-k) \\ \amp=\dim\NS T+\dim \im T \\ \amp = \nullity T+\rank T\text{.} \end{align*}
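The theorem is easy to spot-check for matrix transformations, where the rank and nullity are one method call each (a SymPy sketch with random matrices):

from sympy import randMatrix

for _ in range(5):
    A = randMatrix(3, 5)              # T_A : R^5 -> R^3
    rank = A.rank()                   # dim im T_A
    nullity = len(A.nullspace())      # dim NS T_A
    assert rank + nullity == A.cols   # A.cols = 5 = dim of the domain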

Example 5.2.10. Rank-nullity application.

Show that the linear transformation
\begin{align*} T\colon \R^4 \amp\rightarrow \R^3 \\ (x,y,z,w)\amp \mapsto (x+z,y+z+w,z+2w) \end{align*}
is surjective: i.e., \(\im T=\R^3\text{.}\) Do so by first computing \(\NS T\text{.}\)
Solution.
We first examine \(\NS T\text{.}\) We have
\begin{equation*} T(x,y,z,w)=\boldzero \iff \begin{linsys}{4} x\amp \amp \amp+ \amp z \amp \amp \amp =\amp 0\\ \amp \amp y\amp+ \amp z \amp + \amp w\amp =\amp 0\\ \amp \amp \amp\amp z \amp + \amp 2w\amp =\amp 0 \end{linsys}\text{.} \end{equation*}
The system above is already in row echelon form, and so we easily see that
\begin{equation*} \NS T=\{(2t,t,-2t,t)\colon t\in \R\}=\Span\{(2,1,-2,1)\}\text{.} \end{equation*}
Thus \(B=\{(2,1,-2,1)\}\) is a basis of \(\NS T\text{,}\) and we conclude that \(\dim \NS T=1\text{.}\) The rank-nullity theorem now implies that
\begin{equation*} \dim \im T=4-\dim \NS T=4-1=3\text{.} \end{equation*}
Since \(\im T\subseteq \R^3\) and \(\dim\im T=\dim \R^3=3\text{,}\) we conclude by Corollary 4.4.17 that \(\im T=\R^3\text{.}\) Thus \(T\) is surjective.
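The same conclusion can be checked computationally. Reading off the standard matrix of \(T\) from its formula, a SymPy sketch confirms that the nullity is 1 and the rank is 3:

from sympy import Matrix

A = Matrix([[1, 0, 1, 0],
            [0, 1, 1, 1],
            [0, 0, 1, 2]])    # standard matrix of T

print(A.nullspace())  # [Matrix([[2], [1], [-2], [1]])], so nullity 1
print(A.rank())       # 3, so im T = R^3 and T is surjective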

Example 5.2.11. Rank-nullity application.

Let \(F\colon M_{nn}\rightarrow M_{nn}\) be defined as \(F(A)=A^T-A\text{,}\) as in Example 5.2.4. Prove that \(\im F\) is the subspace \(W\) of all skew-symmetric matrices, this time using the rank-nullity theorem and a dimension argument.
Solution.
We compute
\begin{align*} \NS F \amp =\{A\in M_{nn}\colon F(A)=\boldzero\}\\ \amp =\{A\in M_{nn}\colon A^T-A=\boldzero\}\\ \amp =\{A\in M_{nn}\colon A^T=A\}\text{.} \end{align*}
Thus \(\NS F\) is the subspace of all symmetric matrices. It can be shown that this space has dimension \(\frac{n(n+1)}{2}\text{.}\) To see why this is true intuitively, note that a symmetric matrix is completely determined by the entries on and above the diagonal: there are
\begin{equation*} 1+2+\cdots +n=\frac{n(n+1)}{2} \end{equation*}
of these. Next, the rank-nullity theorem implies that
\begin{align*} \dim \im F \amp =\dim M_{nn}-\dim \NS F\\ \amp =n^2-\frac{n(n+1)}{2}\\ \amp =\frac{n(n-1)}{2}\text{.} \end{align*}
As we showed in Example 5.2.4, any matrix \(B=F(A)\in \im F\) satisfies \(B^T=-B\text{:}\) i.e., \(\im F\subseteq W\text{,}\) where \(W\) is the subspace of all skew-symmetric \(n\times n\) matrices. Similarly to the space of symmetric matrices, we can show that \(\dim W=\frac{n(n-1)}{2}\text{:}\) intuitively, a skew-symmetric matrix is determined by the \(n(n-1)/2\) entries strictly above the diagonal (the diagonal entries must be equal to zero). Since \(\im F\subseteq W\) and \(\dim \im F=\dim W=\frac{n(n-1)}{2}\text{,}\) we conclude that \(\im F=W\) by Corollary 4.4.17. This proves that \(\im F\) is the space of all skew-symmetric matrices.
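These dimension counts can be confirmed numerically by representing \(F\) as an \(n^2\times n^2\) matrix acting on vectorized matrices (a NumPy sketch; the choice \(n=4\) is arbitrary):

import numpy as np

n = 4

# Column j of M is vec(F(E_j)), where E_j runs over the standard basis of M_nn.
basis = np.eye(n * n)
M = np.column_stack([(e.reshape(n, n).T - e.reshape(n, n)).ravel() for e in basis])

rank = np.linalg.matrix_rank(M)        # dim im F
print(rank, n * (n - 1) // 2)          # 6 6
print(n * n - rank, n * (n + 1) // 2)  # nullity: 10 10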

Subsection 5.2.3 Injective and surjective linear transformations

Recall the notions of injectivity and surjectivity from Definition 0.2.7: a function \(f\colon X\rightarrow Y\) is injective (or one-to-one) if for all \(x,x'\in X\) we have \(f(x)=f(x')\) implies \(x=x'\text{;}\) it is surjective (or onto) if for all \(y\in Y\) there is an \(x\in X\) with \(f(x)=y\text{.}\) As with all functions, we want to know whether a given linear transformation is injective or surjective; as it turns out, the concepts of null space and image give us a convenient means of answering these questions. As remarked in Definition 0.2.7, there is in general a direct connection between surjectivity and the image of a function: namely, \(f\colon X\rightarrow Y\) is surjective if and only if \(\im f=Y\text{.}\) It follows immediately that a linear transformation \(T\colon V\rightarrow W\) is surjective if and only if \(\im T=W\text{.}\) As for injectivity, it is easy to see that if a linear transformation \(T\) is injective, then its null space must consist of just the zero vector of \(V\text{.}\) What is somewhat surprising is that the converse is also true, as we now show.

Theorem 5.2.12. Injectivity and surjectivity via \(\NS T\) and \(\im T\).

Let \(T\colon V\rightarrow W\) be a linear transformation.
  1. \(T\) is injective if and only if \(\NS T=\{\boldzero_V\}\text{.}\)
  2. \(T\) is surjective if and only if \(\im T=W\text{.}\)

Proof.

Statement (2) is true of any function, whether it is a linear transformation or not; it follows directly from the definitions of surjectivity and image. Thus it remains to prove statement (1). We prove both implications separately.
Implication \(\implies\).
Assume \(T\) is injective. Since \(T(\boldzero_V)=\boldzero_W\text{,}\) we see that for any \(\boldv\in V\) we have
\begin{align*} T(\boldv)=\boldzero_W \amp \implies T(\boldv)=T(\boldzero_V)\\ \amp \implies \boldv=\boldzero_V \amp (T \text{ is injective})\text{.} \end{align*}
It follows that \(\boldzero_V\) is the only element of \(\NS T\text{:}\) equivalently, we have \(\NS T=\{\boldzero_V\}\text{,}\) as desired.
Implication \(\impliedby\).
Assume \(\NS T=\{\boldzero_V\}\text{.}\) Given any vectors \(\boldv,\boldv'\in V\) we have
\begin{align*} T(\boldv)=T(\boldv') \amp \implies T(\boldv)-T(\boldv')=\boldzero_W\\ \amp \implies T(\boldv-\boldv')=\boldzero_W\\ \amp \implies \boldv-\boldv'\in \NS T\\ \amp \implies \boldv-\boldv'=\boldzero_V\amp (\NS T=\{\boldzero_V\})\\ \amp \implies \boldv=\boldv'\text{.} \end{align*}
This proves \(T\) is injective, as desired.

Remark 5.2.13.

To determine whether a function of sets \(f\colon X\rightarrow Y\) is injective, we normally have to show that for each output \(y\) in the image of \(f\) there is exactly one input \(x\) satisfying \(f(x)=y\text{.}\) Think of this as checking injectivity at every output. Theorem 5.2.12 tells us that in the special case of a linear transformation \(T\colon V\rightarrow W\) it is enough to check injectivity at exactly one output: namely, \(\boldzero\in W\text{.}\)
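In computational terms, this remark says that checking injectivity of a matrix transformation reduces to a single null space computation (a SymPy sketch with made-up matrices):

from sympy import Matrix

def is_injective(A):
    # T_A is injective iff NS(T_A) = {0}
    return len(A.nullspace()) == 0

print(is_injective(Matrix([[1, 0], [0, 1], [1, 1]])))  # True
print(is_injective(Matrix([[1, 2], [2, 4]])))          # False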


Definition 5.2.15. Isomorphism.

Let \(V\) and \(W\) be vector spaces. A linear transformation \(T\colon V\rightarrow W\) is an isomorphism if it is invertible as a function: i.e., if there is an inverse function \(T^{-1}\colon W\rightarrow V\) satisfying
\begin{align*} T^{-1}\circ T \amp = \id_V\\ T\circ T^{-1} \amp =\id_W\text{.} \end{align*}
The vector spaces \(V\) and \(W\) are isomorphic if there is an isomorphism from \(V\) to \(W\text{.}\)
Definition 5.2.15 leaves open the question of whether the inverse function \(T^{-1}\colon W\rightarrow V\) of an isomorphism \(T\colon V\rightarrow W\) is a linear transformation. The next theorem resolves that issue.

Theorem 5.2.16. Inverse of an isomorphism.

If \(T\colon V\rightarrow W\) is an isomorphism, then its inverse function \(T^{-1}\colon W\rightarrow V\) is also a linear transformation, and hence is itself an isomorphism.

Proof.

Let \(T\colon V\rightarrow W\) be an isomorphism, and let \(T^{-1}\colon W\rightarrow V\) be its inverse function. We use the 1-step technique to show \(T^{-1}\) is linear, but with a slight twist: given any \(c,d\in \R\) and vectors \(\boldw_1,\boldw_2\in W\) we will show that
\begin{equation*} T\left(T^{-1}(c\boldw_1+d\boldw_2)\right)=T\left(cT^{-1}(\boldw_1)+dT^{-1}(\boldw_2)\right)\text{;} \end{equation*}
since \(T\) is injective, it will then follow that
\begin{equation*} T^{-1}(c\boldw_1+d\boldw_2)=cT^{-1}(\boldw_1)+dT^{-1}(\boldw_2)\text{.} \end{equation*}
To this end, we compute
\begin{align*} T\left(T^{-1}(c\boldw_1+d\boldw_2)\right)\amp = c\boldw_1+d\boldw_2 \amp (T\circ T^{-1}=\id_W) \\ T\left(cT^{-1}(\boldw_1)+dT^{-1}(\boldw_2)\right) \amp =cT(T^{-1}(\boldw_1))+dT(T^{-1}(\boldw_2)) \amp (T \text{ is linear}) \\ \amp =c\boldw_1+d\boldw_2 \amp (T\circ T^{-1}=\id_W)\text{.} \end{align*}
Thus we have
\begin{equation*} T\left(T^{-1}(c\boldw_1+d\boldw_2)\right)=T\left(cT^{-1}(\boldw_1)+dT^{-1}(\boldw_2)\right)=c\boldw_1+d\boldw_2\text{,} \end{equation*}
as desired.

Remark 5.2.17. Proving \(T\) is an isomorphism.

According to Definition 5.2.15, to prove a function \(T\colon V\rightarrow W\) is an isomorphism, we must show both that \(T\) is linear and that it is invertible. We know already how to decide whether a function is linear, but how do we decide whether it is invertible? Recall that a function is invertible if and only if it is bijective. (See Theorem 0.2.11.) This fact gives rise to two distinct methods for proving that a given linear transformation is invertible:
  1. we can show directly that \(T\) is invertible by providing an inverse \(T^{-1}\colon W\rightarrow V\text{;}\)
  2. we can show that \(T\) is bijective (i.e., injective and surjective).
One of the two methods described in Remark 5.2.17 for determining whether a linear transformation is invertible may be more convenient than the other, depending on the linear transformation in question. Thanks to Theorem 5.2.12, we can decide whether a linear transformation is bijective by looking at its two associated subspaces \(\NS T\) and \(\im T\text{.}\)

Theorem 5.2.18. Isomorphism via \(\NS T\) and \(\im T\).

Let \(T\colon V\rightarrow W\) be a linear transformation. Then \(T\) is an isomorphism if and only if \(\NS T=\{\boldzero_V\}\) and \(\im T=W\text{.}\)

Proof.

The result follows directly from Theorem 5.2.12 and the fact that a function is invertible if and only if it is bijective:
\begin{align*} T \text{ is an isomorphism} \amp \iff T \text{ is bijective}\\ \amp \iff T \text{ is injective and surjective}\\ \amp \iff \NS T=\{\boldzero_V\} \text{ and } \im T=W\text{.} \end{align*}

Example 5.2.19. Transposition is an isomorphism.

Let \(F\colon M_{mn}\rightarrow M_{nm}\) be defined as \(F(A)=A^T\text{.}\) Prove that \(F\) is an isomorphism.
Solution.
We know already that \(F\) is a linear transformation. It remains to show that it is invertible. In the spirit of Remark 5.2.17, we give two proofs, corresponding to methods (1) and (2).
First proof.
Define \(G\colon M_{nm}\rightarrow M_{mn}\) as \(G(B)=B^T\text{.}\) We claim that \(G=F^{-1}\text{,}\) and hence that \(F\) is an isomorphism. To do so we must show
\begin{align*} G\circ F \amp = \id_{M_{mn}} \amp F\circ G\amp =\id_{M_{nm}}\text{.} \end{align*}
For any \(A\in M_{mn}\text{,}\) we have
\begin{align*} G(F(A)) \amp =G(A^T)\\ \amp =(A^T)^T \\ \amp = A\text{.} \end{align*}
Thus \(G\circ F=\id_{M_{mn}}\text{.}\) Similarly, for any \(B\in M_{nm}\text{,}\) we have
\begin{align*} F(G(B)) \amp = F(B^T)\\ \amp = (B^T)^T\\ \amp = B\text{,} \end{align*}
showing \(F\circ G=\id_{M_{nm}}\text{.}\)
Second proof.
We will show that \(F\) is bijective, and hence invertible. To do so it suffices to show \(\NS F=\{\boldzero\}\) and \(\im F=M_{nm}\text{.}\) We have
\begin{align*} F(A) = \boldzero \amp \iff A^T=\boldzero\\ \amp \iff A=\boldzero\text{.} \end{align*}
Thus \(\NS F=\{\boldzero\}\text{,}\) showing \(F\) is injective. Next given any \(B\in M_{nm}\text{,}\) let \(A=B^T\text{;}\) then
\begin{equation*} F(A)=F(B^T)=(B^T)^T=B\text{.} \end{equation*}
This proves that for any \(B\in M_{nm}\) there is a matrix \(A\in M_{mn}\) such that \(F(A)=B\text{.}\) Thus \(\im F=M_{nm}\text{.}\)
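Both proofs are easy to mirror numerically (a NumPy sanity check; the shapes \(2\times 3\) and \(3\times 2\) are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

F = lambda X: X.T   # F : M_23 -> M_32
G = lambda Y: Y.T   # the claimed inverse

assert np.allclose(G(F(A)), A)   # G(F(A)) = A for all A
assert np.allclose(F(G(B)), B)   # F(G(B)) = B for all B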
Why is it useful to know whether two vector spaces are isomorphic? The short answer is that if \(V\) and \(W\) are isomorphic, then although they may be very different objects when considered as sets, from the linear-algebraic perspective there is essentially no difference between the two: i.e., they satisfy the exact same linear-algebraic properties. Furthermore, an isomorphism \(T\colon V\rightarrow W\) witnessing the fact that \(V\) and \(W\) are isomorphic gives us a perfect bijective dictionary between the two spaces, allowing us to answer questions about one space by "translating" them into questions about the other, using \(T\) or \(T^{-1}\text{.}\) Theorem 5.2.20 gives a first glimpse into this dictionary-like nature of isomorphisms. Later, in Section 5.3, we will introduce a particular isomorphism called the coordinate vector, which illustrates the computational value of being able to translate questions about abstract vector spaces \(V\) to questions about our beloved and familiar \(\R^n\) spaces.

Theorem 5.2.20. Isomorphisms preserve linear algebra.

Let \(T\colon V\rightarrow W\) be an isomorphism.
  1. If \(S\subseteq V\) is linearly independent, then \(T(S)\) is a linearly independent subset of \(W\text{.}\)
  2. If \(S\subseteq V\) spans \(V\text{,}\) then \(T(S)\) spans \(W\text{.}\)
  3. If \(B\subseteq V\) is a basis of \(V\text{,}\) then \(T(B)\) is a basis of \(W\text{;}\) in particular, if \(V\) is finite-dimensional, then so is \(W\text{,}\) and \(\dim V=\dim W\text{.}\)

Proof.

We prove \((1)\) and leave the remaining statements as an exercise. Let \(T\colon V\rightarrow W\) be an isomorphism, and let \(S\subseteq V \) be a linearly independent set. Define
\begin{equation*} S'=T(S)=\{T(\boldv)\colon \boldv\in S\}\text{.} \end{equation*}
Suppose
\begin{equation*} \boldw_1=T(\boldv_1), \boldw_2=T(\boldv_2),\dots, \boldw_n=T(\boldv_n) \end{equation*}
are distinct elements of \(S'\text{,}\) and suppose we have
\begin{equation*} c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n)=\boldzero\text{.} \end{equation*}
Let \(T^{-1}\colon W\rightarrow V\) be the inverse function of \(T\text{,}\) and recall that \(T^{-1}\) is linear by Theorem 5.2.16. Now apply \(T^{-1}\) to both sides of the equation above and simplify:
\begin{align*} T^{-1}(c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n))\amp =T^{-1}(\boldzero) \\ c_1T^{-1}(T(\boldv_1))+c_2T^{-1}(T(\boldv_2))+\cdots +c_nT^{-1}(T(\boldv_n)) \amp = \boldzero \amp (T^{-1} \text{ is linear})\\ c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n \amp =\boldzero \amp (T^{-1}\circ T=\id_V) \text{.} \end{align*}
Since \(S\) is assumed to be linearly independent, we conclude that \(c_1=c_2=\cdots =c_n=0\text{,}\) as desired.

Remark 5.2.21.

Theorem 5.2.20 makes use of the notion of the image of a set: given a function \(f\colon X\rightarrow Y\) and a subset \(A\subseteq X\text{,}\) the image of \(A\) under \(f\) is defined as
\begin{equation*} f(A)=\{y\in Y\colon y=f(x) \text{ for some } x\in A\}=\{f(a)\colon a\in A\}\text{.} \end{equation*}
We now state a powerful theorem that applies specifically to finite-dimensional spaces.

Theorem 5.2.22. Isomorphisms of finite-dimensional spaces.

Let \(V\) and \(W\) be finite-dimensional vector spaces.
  1. \(V\) and \(W\) are isomorphic if and only if \(\dim V=\dim W\text{.}\)
  2. Assume \(\dim V=\dim W\) and let \(T\colon V\rightarrow W\) be a linear transformation. The following statements are equivalent.
    (a) \(T\) is an isomorphism.
    (b) \(\NS T=\{\boldzero\}\text{.}\)
    (c) \(\im T=W\text{.}\)

Proof.

  1. We prove both implications separately.
    If \(V\) and \(W\) are isomorphic, then there is an isomorphism \(T\colon V\rightarrow W\text{.}\) It follows from Theorem 5.2.20 that \(\dim V=\dim W\text{.}\)
    Now assume \(\dim V=\dim W=n\text{.}\) By definition there are bases \(B=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) and \(B'=\{\boldw_1,\boldw_2,\dots, \boldw_n\}\) of \(V\) and \(W\text{,}\) respectively, satisfying \(\abs{B}=\abs{B'}=n\text{.}\) Using Existence of transformations, there is a linear transformation
    \begin{align*} T\colon V \amp \rightarrow W \end{align*}
    satisfying \(T(\boldv_i)=\boldw_i\) for all \(1\leq i\leq n\text{,}\) and a linear transformation
    \begin{align*} T'\colon W\rightarrow \amp V \end{align*}
    satisfying \(T'(\boldw_i)=\boldv_i\) for all \(1\leq i\leq n\text{.}\) We now use Uniqueness of transformations to prove that
    \begin{align*} T'\circ T \amp =\id_V \amp T\circ T'\amp =\id_W\text{,} \end{align*}
    and hence that \(T'=T^{-1}\text{.}\) To do so we observe that for all \(1\leq i\leq n\) we have
    \begin{align*} T'\circ T(\boldv_i) \amp = T'(\boldw_i) \amp (\text{def. of } T)\\ \amp = \boldv_i \amp (\text{def. of } T') \\ \amp = \id_V(\boldv_i) \amp (\text{def. of } \id_V)\text{.} \end{align*}
    Since \(T'\circ T\) and \(\id_V\) agree on the basis \(B\) (\(T'\circ T(\boldv_i)=\id_V(\boldv_i)\) for all \(1\leq i\leq n\)), we conclude that \(T'\circ T=\id_V\text{.}\) A very similar argument shows \(T\circ T'=\id_W\text{.}\) Having shown that \(T'=T^{-1}\text{,}\) we conclude that \(T\) is an isomorphism, and hence that \(V\) and \(W\) are isomorphic.
  2. Assume that \(T\colon V\rightarrow W\) is a linear transformation and that \(\dim V=\dim W=n\text{.}\) We will establish the cycle of implications (a)\(\implies\)(b)\(\implies\)(c)\(\implies\)(a).
    (a)\(\implies\)(b).
    If \(T\) is an isomorphism, then it is injective, and thus \(\NS T=\{\boldzero\}\) by Theorem 5.2.12.
    (b)\(\implies\)(c).
    Using the rank-nullity theorem, if \(\NS T=\{\boldzero\}\text{,}\) then
    \begin{align*} \dim \im T \amp =\dim V-\dim \NS T\\ \amp = n-0 \amp (\NS T=\{\boldzero\})\\ \amp =n \text{.} \end{align*}
    Since \(\im T\subseteq W\) and \(\dim \im T=\dim W=n\text{,}\) we conclude using Corollary 4.4.17 that \(\im T=W\text{.}\)
    (c)\(\implies\)(a).
    Assume \(\im T=W\text{.}\) Again, using the rank-nullity theorem, we see that
    \begin{align*} \dim \NS T \amp =\dim V-\dim \im T\\ \amp = n-n \amp (\im T=W)\\ \amp = 0\text{.} \end{align*}
    It follows that \(\NS T=\{\boldzero\}\text{,}\) and hence, by Theorem 5.2.18, that \(T\) is an isomorphism.
A shocking consequence of Theorem 5.2.22 is that any two vector spaces of the same dimension are isomorphic: in particular, any \(n\)-dimensional vector space is isomorphic to \(\R^n\text{!}\)

Corollary 5.2.23.

If \(V\) is a finite-dimensional vector space with \(\dim V=n\text{,}\) then \(V\) is isomorphic to \(\R^n\text{.}\)

Proof.

Since \(\dim V=n=\dim\R^n\text{,}\) this follows immediately from Theorem 5.2.22.

We have finally delivered on a promise made way back in Chapter 1: namely, we now see how any finite-dimensional vector space is isomorphic (and thus structurally equivalent) to one of our Euclidean spaces \(\R^n\text{.}\) There is something almost anticlimactic about Corollary 5.2.23. Having devoted much time and energy to introducing and studying various "exotic" finite-dimensional vector spaces, we now learn that they are essentially no different from our familiar spaces \(\R^n\text{.}\) Be not disappointed! By establishing an isomorphism between an exotic vector space \(V\) and a Euclidean space \(\R^n\text{,}\) we are able to transport to \(V\) all the wonderful computational techniques we have at our disposal in the \(\R^n\) context. Furthermore, it is still up to us to choose our isomorphism \(T\colon V\rightarrow \R^n\text{,}\) and as we will see, there is a subtle art to choosing the isomorphism \(T\) to suit our particular needs.
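As a tiny preview of that subtle art, here is one concrete choice of isomorphism \(T\colon P_2\rightarrow \R^3\text{,}\) sending \(a+bx+cx^2\) to \((a,b,c)\) (a hypothetical sketch; polynomials are stored as coefficient lists):

import numpy as np

T = lambda p: np.array(p)            # a + b x + c x^2  |->  (a, b, c)
T_inv = lambda v: list(v)

p = [1, 0, 2]                        # 1 + 2x^2
q = [3, -1, 0]                       # 3 - x
p_plus_q = [a + b for a, b in zip(p, q)]

# Addition of polynomials corresponds to vector addition in R^3:
assert np.allclose(T(p) + T(q), T(p_plus_q))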

Subsection 5.2.4 Invertibility and isomorphisms

We end this section by making a connection between invertible matrices and isomorphisms. It follows easily from Theorem 5.2.22 that a matrix \(A\in M_{nn}\) is invertible if and only if its corresponding matrix transformation \(T_A\colon \R^n\rightarrow \R^n\) is an isomorphism. Indeed, \(T_A\) is an isomorphism if and only if \(\NS T_A=\{\boldzero\}\text{,}\) if and only if \(\NS A=\{\boldzero\}\text{,}\) if and only if \(A\) is invertible. Furthermore, we see in this case that the inverse function of \(T_A\) is the matrix transformation \(T_{A^{-1}}\text{:}\) i.e., \((T_A)^{-1}=T_{A^{-1}}\text{.}\) We add this statement to our ever-growing list of equivalent formulations of invertibility.
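Computationally, the equivalence is easy to witness (a SymPy sketch with a made-up invertible matrix):

from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

assert len(A.nullspace()) == 0   # NS A = {0}, so T_A is an isomorphism

Ainv = A.inv()
assert Ainv * A == eye(2)        # T_{A^{-1}} composed with T_A = id
assert A * Ainv == eye(2)        # T_A composed with T_{A^{-1}} = id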

Exercises 5.2.5 Exercises

WeBWorK Exercises

1.
Let \(A=\left[\begin{matrix} 4 \amp 2 \amp 18 \cr 3 \amp 2 \amp 14 \cr \end{matrix}\right]\text{,}\) \({\bf b}=\left[\begin{matrix} -4 \cr -1 \cr 1 \cr \end{matrix}\right]\text{,}\) and \({\bf c}=\left[\begin{matrix} 10 \cr 9 \cr \end{matrix}\right]\text{.}\) Define \(T({\bf x})=A{\bf x}\text{.}\)
Select true or false for each statement.
  1. The vector \({\bf c}\) is in the range of \(T\)
  2. The vector \({\bf b}\) is in the kernel of \(T\)
Solution.
Since \(A{\bf b}=\left[\begin{matrix} 4 \amp 2 \amp 18 \cr 3 \amp 2 \amp 14 \cr \end{matrix}\right] \left[\begin{matrix} -4 \cr -1 \cr 1 \cr \end{matrix}\right]= \left[\begin{matrix} 0 \cr 0 \cr \end{matrix}\right]\text{,}\) we have \({\bf b}\in \ker(T)\text{.}\) To decide whether \({\bf c}\) is in the range of \(T\text{,}\) we solve \(A{\bf x}={\bf c}\text{.}\) Row reducing the augmented matrix gives \(\left[\begin{matrix} 4 \amp 2 \amp 18 \amp 10 \cr 3 \amp 2 \amp 14 \amp 9 \cr \end{matrix}\right]\sim \left[\begin{matrix} 1 \amp 0 \amp 4 \amp 1 \cr 0 \amp 1 \amp 1 \amp 3 \cr \end{matrix}\right].\) In particular \(A \left[\begin{matrix} 1 \cr 3 \cr 0 \cr \end{matrix}\right] = {\bf c}\text{,}\) so \({\bf c}\in\text{range}(T)\text{.}\)
2.
Let \(T({\bf x})=A{\bf x}\text{,}\) where
\(A=\left[\begin{array}{ccc} 5 \amp -1 \amp 13\cr -2 \amp 0 \amp -6\cr 7 \amp -5 \amp 11 \end{array}\right]\text{,}\) \({\bf b}=\left[\begin{array}{c}-3\\-2\\1\\\end{array}\right]\text{,}\) and \({\bf c}=\left[\begin{array}{c}-1\\0\\-5\\\end{array}\right]\text{.}\)
Select true or false for each statement.
The vector \(\bf{c}\) is in the range of \(T\text{.}\)
The vector \(\bf{b}\) is in the kernel of \(T\text{.}\)
Answer 1.
\(\text{True}\)
Answer 2.
\(\text{True}\)
Solution.
Since \(A{\bf b}=\left[\begin{array}{ccc} 5 \amp -1 \amp 13\cr -2 \amp 0 \amp -6\cr 7 \amp -5 \amp 11 \end{array}\right] \left[\begin{array}{c}-3\\-2\\1\\\end{array}\right]= \left[\begin{array}{c}0\\0\\0\\\end{array}\right] ={\bf 0}\text{,}\) we have \({\bf b}\in \ker(T)\text{.}\) To decide whether \({\bf c}\) is in the range of \(T\text{,}\) we solve \(A{\bf x}={\bf c}\text{.}\) Row reducing the augmented matrix gives \(\left[\begin{array}{cccc} 5 \amp -1 \amp 13 \amp -1\cr -2 \amp 0 \amp -6 \amp 0\cr 7 \amp -5 \amp 11 \amp -5 \end{array}\right] \sim \left[\begin{array}{cccc} 1 \amp 0 \amp 3 \amp 0\cr 0 \amp 1 \amp 2 \amp 1\cr 0 \amp 0 \amp 0 \amp 0 \end{array}\right]\text{.}\) Thus \(\bf{c}\) is in the range of \(T\text{:}\) the solutions of \(A{\bf x}={\bf c}\) are \(\bf{x} = \left[\begin{array}{c}0\\1\\0\\\end{array}\right] + t \left[\begin{array}{c}-3\\-2\\1\\\end{array}\right]\) for \(t \in \mathbb{R}\text{,}\) and in particular \(A \left[\begin{array}{c}0\\1\\0\\\end{array}\right] = {\bf c}\text{.}\) Hence \({\bf c}\in\text{range}(T)\text{.}\)
3.
If \(T:{\mathbb R}^8\to {\mathbb R}^2\) is a linear transformation, then consider whether the set \(\ker(T)\) is a subspace of \({\mathbb R}^{8}\text{.}\)
Select true or false for each statement.
  1. This set contains the zero vector and is closed under vector addition and scalar multiplication.
  2. This set is a subspace of \({\mathbb R}^8\)
  3. This set is a subset of the codomain
  4. This set is a subset of the domain.
Solution.
The kernel of \(T\) is a subspace of the domain \({\mathbb R}^8\text{:}\) it contains the zero vector and is closed under vector addition and scalar multiplication; it is not a subset of the codomain.
4.
Let \(A=\left[\begin{matrix} 1 \amp 3 \cr -5 \amp 7 \cr 7 \amp -5\cr \end{matrix}\right]\text{,}\) \({\bf b}=\left[\begin{matrix} -7 \cr -7 \cr \end{matrix}\right]\text{,}\) and \({\bf c}=\left[\begin{matrix} -4 \cr 2 \cr \end{matrix}\right]\text{.}\) Define \(T({\bf x})=A{\bf x}\text{.}\)
Select true or false for each statement.
  1. The vector \({\bf c}\) is in the range of \(T\)
  2. The vector \({\bf b}\) is in the kernel of \(T\)
Solution.
Since \(A{\bf b}=\left[\begin{matrix} 1 \amp 3 \cr -5 \amp 7 \cr 7 \amp -5\cr \end{matrix}\right] \left[\begin{matrix} -7 \cr -7 \cr \end{matrix}\right]= \left[\begin{matrix} -28 \cr -14 \cr -14 \cr \end{matrix}\right] \ne {\bf 0}\text{,}\) we have \({\bf b}\not\in \ker(T)\text{.}\) The range of \(T\) is a subset of \({\mathbb R}^3\text{,}\) while \({\bf c}\in{\mathbb R}^2\text{,}\) so \({\bf c}\not\in\text{range}(T)\text{.}\)
5.
If \(T:{\mathbb R}^4\to {\mathbb R}^2\) is a linear transformation, then select true or false for each statement about the set \(\ker( T)\text{.}\)
  1. This set is a subset of the codomain.
  2. This set contains the zero vector and is closed under vector addition and scalar multiplication.
  3. This set is a subset of the domain.
  4. This set is a subspace of \(\mathbb{R}^2\text{.}\)
Solution.
The kernel of \(T\) is a subspace of the domain \({\mathbb R}^4\text{,}\) not of the codomain \({\mathbb R}^2\text{.}\)
8.
Let \(T\) be a linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\) Let \(A\) be the matrix associated to \(T\text{.}\)
Fill in the correct answer for each of the following situations.
  1. Two columns in the row-echelon form of \(A\) are not pivot columns.
  2. The row-echelon form of \(A\) has no column corresponding to a free variable.
  3. Every column in the row-echelon form of \(A\) is a pivot column.
  4. The row-echelon form of \(A\) has a column corresponding to a free variable.
  1. T is one-to-one
  2. T is not one-to-one
  3. There is not enough information to tell.
9.
Let \(T\) be a linear transformation from \({\mathbb R}^3\) to \({\mathbb R}^3\text{.}\)
Determine whether or not \(T\) is onto in each of the following situations:
  1. Suppose \(T(a) = u\text{,}\) \(T(b) = v\text{,}\) \(T(c)=u+v\text{,}\) where \(a, b, c, u,v\) are vectors in \({\mathbb R}^3\text{.}\)
  2. Suppose \(T\) is a one-to-one function
  3. Suppose \(T(1, 4, -2)=u\text{,}\) \(T(1, 3, 2)=v\text{,}\) \(T(2, 8, 0)=u+v\text{.}\)
  1. T is not onto.
  2. T is onto.
  3. There is not enough information to tell
10.
Match the following concepts with the correct definitions:
  1. \(f\) is a one-to-one function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  2. \(f\) is an onto function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  3. \(f\) is a function from \({\mathbb R}^3\) to \({\mathbb R}^3\)
  1. For every \(y\in {\mathbb R}^3\text{,}\) there is a \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  2. For every \(x\in {\mathbb R}^3\text{,}\) there is a \(y\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  3. For every \(y\in {\mathbb R}^3\text{,}\) there is a unique \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
  4. For every \(y \in {\mathbb R}^3\text{,}\) there is at most one \(x\in {\mathbb R}^3\) such that \(f(x)=y\text{.}\)
11.
Let \(T\) be a linear transformation from \({\mathbb R}^r\) to \({\mathbb R}^s\text{.}\) Let \(A\) be the matrix associated to \(T\text{.}\)
Fill in the correct answer for each of the following situations.
  1. The row-echelon form of \(A\) has a pivot in every column.
  2. Two rows in the row-echelon form of \(A\) do not have pivots.
  3. Every row in the row-echelon form of \(A\) has a pivot.
  4. The row-echelon form of \(A\) has a row of zeros.
  1. T is onto
  2. T is not onto
  3. There is not enough information to tell.
12.
Let \(T: {\mathbb R}^3 \rightarrow {\mathbb R}^3\) be the linear transformation defined by
\begin{equation*} T(x_1, x_2, x_3 )= (x_1- x_2, x_2- x_3, x_3-x_1). \end{equation*}
Find a vector \(\vec{w} \in {\mathbb R}^3\) that is NOT in the image of \(T\text{.}\)
\(\vec{w} =\) (1 × 3 array)
and find a different, nonzero vector \(\vec{v} \in {\mathbb R}^3\) that IS in the image of \(T\text{.}\)
\(\vec{v} =\) (1 × 3 array).
Solution.
\((a,b,c)\) is in the image of \(T\) if and only if \(T(x_1,x_2,x_3)=(a,b,c)\text{,}\) that is, if and only if
\begin{equation*} \begin{aligned} x_1-x_2 \amp = a \\ x_2-x_3 \amp = b \\ x_3-x_1 \amp = c \end{aligned} \end{equation*}
The matrix
\begin{equation*} \begin{bmatrix} 1 \amp -1 \amp 0 \amp a \\ 0 \amp 1 \amp -1 \amp b \\ -1 \amp 0 \amp 1 \amp c \end{bmatrix} \end{equation*}
represents this system of equations. Row reducing, one obtains
\begin{equation*} \begin{bmatrix} 1 \amp -1 \amp 0 \amp a \\ 0 \amp 1 \amp -1 \amp b \\ 0 \amp 0 \amp 0 \amp a+b+c \end{bmatrix} \end{equation*}
Thus the system has a solution if and only if \(a+b+c = 0\text{,}\) that is, \((a,b,c)\) is in the image of \(T\) if and only if \(a+b+c = 0\text{.}\)
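For what it's worth, the criterion \(a+b+c=0\) can be double-checked with SymPy (an illustrative sketch, not part of the intended solution):

from sympy import Matrix

A = Matrix([[ 1, -1,  0],
            [ 0,  1, -1],
            [-1,  0,  1]])    # standard matrix of T

print(A.rank())  # 2, so the image is a plane in R^3
# (1,1,1) has a+b+c = 3 != 0, so it is not in the image:
print(Matrix.hstack(A, Matrix([1, 1, 1])).rank())   # 3 > 2
# (1,-1,0) has a+b+c = 0, so it is in the image:
print(Matrix.hstack(A, Matrix([1, -1, 0])).rank())  # 2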
13.
Let
\begin{equation*} A = \left[\begin{array}{cc} 6 \amp 5\cr -5 \amp -2\cr 31 \amp 15 \end{array}\right] \ \mbox{ and } \ \vec{b} = \left[\begin{array}{c} 7\cr 5\cr -18 \end{array}\right]. \end{equation*}
Define the linear transformation \(T: {\mathbb R}^2 \rightarrow {\mathbb R}^3\) by \(T(\vec{x}) = A\vec{x}\text{.}\) Find a vector \(\vec{x}\) whose image under \(T\) is \(\vec{b}\text{.}\)
\(\vec{x} =\) (2 × 1 array).
Is the vector \(\vec{x}\) unique?
Answer.
\(\text{unique}\)

Written Exercises

Computing \(\NS T\) and \(\im T\) parametrically.
For each linear transformation \(T\) give parametric descriptions of \(\NS T\) and \(\im T\text{.}\) To do so you will want to relate each computation to a system of linear equations.
14.
\begin{align*} T\colon \R^4 \amp \rightarrow \R^3 \\ (x,y,z,w)\amp\mapsto (x+z+w, x-y-z,-2x+y-w) \end{align*}
15.
\begin{align*} T\colon M_{22}\amp \rightarrow M_{22} \\ A \amp\mapsto \begin{amatrix}[rr]1\amp 1\\ 1\amp 1 \end{amatrix}A \end{align*}
16.
\begin{align*} T\colon P_2\amp \rightarrow P_2 \\ f \amp\mapsto f(x)+f'(x) \end{align*}
Identifying \(\im T\).
For the given linear transformation \(T\) prove the claim about \(\im T\text{.}\)
17.
\begin{align*} T\colon \R^\infty \amp \rightarrow \R^\infty \\ s=(a_1,a_2,\dots)\amp\mapsto T(s)=(a_2,a_3,\dots) \text{.} \end{align*}
Claim: \(\im T=\R^\infty\)
18.
\begin{align*} T\colon C(\R) \amp \rightarrow C(\R) \\ f(x)\amp\mapsto g(x)=f(x)+f(-x)\text{.} \end{align*}
Claim: \(\im T\) is the set of all continuous symmetric functions. In other words,
\begin{equation*} \im T=\{f\in C(\R)\colon f(-x)=f(x)\}. \end{equation*}
Identifying \(W\) as a null space.
For each subset \(W\subseteq V\) show \(W\) is a subspace by identifying it with the null space of a linear transformation \(T\text{.}\) You may use any of the examples from Section 5.1, and any of the results from the exercises in Exercises 5.1.6.
19.
\begin{equation*} W=\{A\in M_{nn}\colon \tr A=0\} \end{equation*}
20.
\begin{equation*} W=\{A\in M_{nn}\colon A^T=-A\} \end{equation*}
21.
\begin{equation*} W=\{f\in C^2(\R)\colon f''=2f'-3f\} \end{equation*}
24.
Prove Theorem 5.2.16. Use the defining identities of the inverse function: namely, we have
\begin{align*} T^{-1}(T(\boldv)) \amp = \boldv \text{ for all } \boldv\in V\\ T(T^{-1}(\boldw)) \amp = \boldw \text{ for all } \boldw\in W\text{.} \end{align*}