Section 3.8 Rank-nullity theorem and fundamental spaces
This section is in a sense just a long-format example of how to compute bases and dimensions of subspaces. Along the way, however, we meet the rank-nullity theorem (sometimes called the “fundamental theorem of linear algebra”), and apply this theorem in the context of fundamental spaces of matrices (Definition 3.8.5).
Subsection 3.8.1 The rank-nullity theorem
The rank-nullity theorem relates the dimensions of the null space and image of a linear transformation \(T\colon V\rightarrow W\text{,}\) assuming \(V\) is finite dimensional. Roughly speaking, it says that the bigger the null space, the smaller the image. More precisely, it tells us that
\begin{equation*}
\dim V=\dim\NS T+\dim\im T\text{.}
\end{equation*}
As we will see, this elegant result can significantly simplify computations with linear transformations. For example, in a situation where we wish to compute both the null space and the image of a given linear transformation, we can often get away with computing just one of the two spaces and then using the rank-nullity theorem (and a dimension argument) to determine the other. Additionally, the rank-nullity theorem directly implies some intuitively obvious properties of linear transformations. For example, suppose \(V\) is a finite-dimensional vector space. It seems obvious that if \(\dim W> \dim V\text{,}\) then there is no linear transformation mapping \(V\) surjectively onto \(W\text{:}\) i.e., you should not be able to map a “smaller” vector space onto a “bigger” one. Similarly, if \(\dim W \lt \dim V\text{,}\) then we expect that there is no linear transformation mapping \(V\) injectively into \(W\text{.}\) Both results are easy consequences of the rank-nullity theorem.
Before proving the theorem we give names to \(\dim \NS T\) and \(\dim\im T\text{.}\)
Definition 3.8.1. Rank and nullity.
Let \(T\colon V\rightarrow W\) be a linear transformation.
The rank of \(T\text{,}\) denoted \(\rank T\text{,}\) is the dimension of \(\im T\text{:}\) i.e.,
\begin{equation*}
\rank T=\dim\im T\text{.}
\end{equation*}
The nullity of \(T\text{,}\) denoted \(\nullity T\text{,}\) is the dimension of \(\NS T\text{:}\) i.e.,
\begin{equation*}
\nullity T=\dim\NS T\text{.}
\end{equation*}
Theorem 3.8.2. Rank-nullity theorem.
Let \(T\colon V\rightarrow W\) be a linear transformation, and assume \(V\) is finite dimensional. Then
\begin{equation*}
\dim V=\rank T+\nullity T\text{.}
\end{equation*}
To prove this, choose a basis \(B'=\{\boldv_1, \boldv_2, \dots, \boldv_k\}\) of \(\NS T\) and extend \(B'\) to a basis \(B=\{\boldv_1, \boldv_2,\dots, \boldv_k,\boldv_{k+1},\dots, \boldv_n\}\) of \(V\text{,}\) using Theorem 3.7.11. Observe that \(\dim\NS T=\nullity T=k\) and \(\dim V=n\text{.}\)
We claim that \(B''=\{T(\boldv_{k+1}),T(\boldv_{k+2}),\dots, T(\boldv_{n})\}\) is a basis of \(\im T\text{.}\)
\(B''\) is linearly independent.
Suppose \(a_{k+1}T(\boldv_{k+1})+a_{k+2}T(\boldv_{k+2})+\cdots +a_nT(\boldv_n)=\boldzero\text{.}\) Then the vector \(\boldv=a_{k+1}\boldv_{k+1}+a_{k+2}\boldv_{k+2}+\cdots +a_n\boldv_n\) satisfies \(T(\boldv)=\boldzero\) (using linearity of \(T\)), and hence \(\boldv\in \NS T\text{.}\) Then, using the fact that \(B'\) is a basis of \(\NS T\text{,}\) we have
\begin{equation*}
\boldv=b_1\boldv_1+b_2\boldv_2+\cdots +b_k\boldv_k
\end{equation*}
for some scalars \(b_1, b_2, \dots, b_k\text{.}\) Rearranging the two expressions for \(\boldv\) yields
\begin{equation*}
b_1\boldv_1+\cdots +b_k\boldv_k-a_{k+1}\boldv_{k+1}-\cdots -a_n\boldv_n=\boldzero\text{.}
\end{equation*}
Since the set \(B\) is linearly independent, we conclude that \(b_i=a_j=0\) for all \(1\leq i\leq k\) and \(k+1\leq j\leq n\text{.}\) In particular, \(a_{k+1}=a_{k+2}=\cdots=a_n=0\text{,}\) as desired.
\(B''\) spans \(\im T\).
It is clear that \(\Span B''\subseteq \im T\) since \(T(\boldv_i)\in \im T\) for all \(k+1\leq i\leq n\) and \(\im T\) is closed under linear combinations.
For the other direction, suppose \(\boldw\in \im T\text{.}\) Then there is a \(\boldv\in V\) such that \(\boldw=T(\boldv)\text{.}\) Since \(B\) is a basis of \(V\) we may write
\begin{equation*}
\boldv=a_1\boldv_1+a_2\boldv_2+\cdots +a_n\boldv_n
\end{equation*}
for some scalars \(a_1, a_2, \dots, a_n\text{.}\) Applying \(T\) and using its linearity, we see that
\begin{equation*}
\boldw=T(\boldv)=a_1T(\boldv_1)+\cdots +a_kT(\boldv_k)+a_{k+1}T(\boldv_{k+1})+\cdots +a_nT(\boldv_n)=a_{k+1}T(\boldv_{k+1})+\cdots +a_nT(\boldv_n)\text{,}
\end{equation*}
since \(T(\boldv_i)=\boldzero\) for all \(1\leq i\leq k\text{.}\) Thus \(\boldw\in\Span B''\text{,}\) as desired. This proves that \(B''\) is a basis of \(\im T\text{,}\) and since \(B''\) contains \(n-k\) elements we conclude that
\begin{equation*}
\rank T+\nullity T=(n-k)+k=n=\dim V\text{.}
\end{equation*}
To illustrate the theorem, consider the linear transformation \(T\colon \R^3\rightarrow \R^2\) defined as \(T(x,y,z)=(x+y+z, x+y+z)\text{.}\) The null space of \(T\) is the set of solutions to \(x+y+z=0\text{,}\) described parametrically as
\begin{equation*}
\NS T=\{(-s-t,s,t)\colon s,t\in\R\}\text{.}
\end{equation*}
Here the parametric description is obtained using our usual technique for solving systems of equations (Procedure 1.3.6). From the parametric description, it is clear that the set \(B=\{(-1,1,0), (-1,0,1)\}\) spans \(\NS T\text{.}\) Since \(B\) is clearly linearly independent, it is a basis for \(\NS T\text{,}\) and we conclude that \(\dim \NS T=\val{B}=2\text{.}\) (Alternatively, the equation \(x+y+z=0\) defines a plane passing through the origin in \(\R^3\text{,}\) and we know such subspaces are of dimension two.)
Next, it is fairly clear that \(\im T=\{(t,t)\colon t\in \R\}=\Span\{(1,1)\}\text{.}\) Thus \(B'=\{(1,1)\}\) is a basis for \(\im T\) and \(\dim\im T=\val{B'}=1\text{.}\) As the rank-nullity theorem predicts, we have \(\dim\NS T+\dim\im T=2+1=3=\dim\R^3\text{.}\)
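This example is small enough to double-check with software. The following SymPy sketch is a supplement to the text; it assumes, as above, that \(T=T_A\) for the standard matrix \(A\) of \(T(x,y,z)=(x+y+z,x+y+z)\text{.}\)

```python
from sympy import Matrix

# Standard matrix of T(x, y, z) = (x + y + z, x + y + z).
A = Matrix([[1, 1, 1],
            [1, 1, 1]])

null_basis = A.nullspace()   # basis of NS T
col_basis = A.columnspace()  # basis of im T

print(len(null_basis))                   # 2 = dim NS T
print(len(col_basis))                    # 1 = dim im T
print(len(null_basis) + len(col_basis))  # 3 = dim R^3, as rank-nullity predicts
```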
Since \(\im T\subseteq \R^3\) and \(\dim\im T=\dim \R^3=3\text{,}\) we conclude by Corollary 3.7.13 that \(\im T=\R^3\text{.}\) Thus \(T\) is surjective.
Subsection 3.8.2 Fundamental spaces of matrices
We now treat the special case of matrix transformations \(T_A\colon \R^n\rightarrow \R^m\text{.}\) The fundamental spaces of a matrix \(A\) defined below can each be connected to \(\NS T_A\) and \(\im T_A\text{,}\) and hence the rank-nullity theorem can be brought to bear in their analysis. Observe that \(\NS A\) was defined previously (3.4.5). We include it below to gather all the fundamental spaces together under one definition.
Definition 3.8.5. Fundamental spaces.
Let \(A\) be an \(m\times n\) matrix. Let \(\boldr_1,\dots, \boldr_m\) be the \(m\) rows of \(A\text{,}\) and let \(\boldc_1,\dots, \boldc_n\) be its \(n\) columns. The following subspaces are called the fundamental subspaces of \(A\).
The null space of \(A\), denoted \(\NS A\text{,}\) is defined as
\begin{equation*}
\NS A =\{\boldx\in\R^n\colon A\boldx=\boldzero\}\subseteq \R^n\text{.}
\end{equation*}
The row space of \(A\), denoted \(\RS A\text{,}\) is defined as
\begin{equation*}
\RS A=\Span\{\boldr_1, \boldr_2, \dots, \boldr_m\}\subseteq \R^n\text{.}
\end{equation*}
The column space of \(A\), denoted \(\CS A\text{,}\) is defined as
\begin{equation*}
\CS A=\Span\{\boldc_1, \boldc_2, \dots, \boldc_n\}\subseteq \R^m\text{.}
\end{equation*}
The rank and nullity of \(A\text{,}\) denoted \(\rank A\) and \(\nullity A\text{,}\) respectively, are defined as \(\rank A=\dim \CS A\) and \(\nullity A=\dim\NS A\text{.}\)
How do the fundamental spaces of a matrix \(A\) relate to its associated matrix transformation \(T_A\text{?}\) It is easy to see that \(\NS T_A=\NS A\text{,}\) and indeed we made this connection in Section 3.4. What about \(\im T_A\text{?}\) We claim that \(\im T_A=\CS A\text{.}\) To see why, let \(\boldc_1, \boldc_2,\dots, \boldc_n\) be the columns of \(A\) and consider the following chain of equivalences:
\begin{align*}
\boldy\in \im T_A \amp\iff \boldy=T_A(\boldx) \text{ for some } \boldx\in \mathbb{R}^n \\
\amp\iff \boldy=A \boldx \text{ for some } \boldx=(x_1,x_2,\dots, x_n)\in \mathbb{R}^n
\amp (\text{definition of } T_A) \\
\amp \iff \boldy=x_1\boldc_1+x_2\boldc_2+\cdots +x_n\boldc_n \text{ for some } x_i\in \R \amp (\knowl{./knowl/th_column_method.html}{\text{Theorem 2.1.24}})\\
\amp \iff \boldy\in\CS A\text{.}
\end{align*}
We now highlight three equivalent statements in this chain:
\begin{equation*}
\boldy\in\CS A \iff \boldy=A \boldx \text{ for some } \boldx\in \R^n \iff \boldy\in \im T_A\text{.}
\end{equation*}
The first equivalence tells us that \(\CS A\) is the set of \(\boldy\in \R^m\) for which the matrix equation \(A\boldx=\boldy\) is consistent. The second equivalence tells us that \(\CS A=\im T_A\text{.}\) In sum, we have proven the following result.
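The first equivalence also suggests a computational membership test: \(\boldy\in\CS A\) exactly when adjoining \(\boldy\) to \(A\) as an extra column leaves the rank unchanged. Here is a minimal SymPy sketch, with a matrix made up for illustration:

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4],
            [0, 1]])

def in_column_space(A, y):
    # y is in CS(A) iff A x = y is consistent,
    # iff adjoining y as a new column does not increase the rank.
    return A.row_join(y).rank() == A.rank()

print(in_column_space(A, Matrix([1, 2, 0])))  # True: the first column of A
print(in_column_space(A, Matrix([1, 0, 0])))  # False: A x = y is inconsistent
```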
Theorem 3.8.6. Null space and image as fundamental spaces.
Let \(A\) be an \(m\times n\) matrix, and let \(T_A\colon \R^n\rightarrow \R^m\) be its corresponding matrix transformation. The following equalities hold:
\begin{align}
\NS A\amp=\NS T_A \tag{3.8.1}\\
\CS A\amp=\im T_A \tag{3.8.2}\\
\CS A \amp= \{\boldy\in \R^m\colon A\boldx=\boldy \text{ is consistent}\} \text{.}\tag{3.8.3}
\end{align}
The next theorem indicates how row reduction affects fundamental spaces.
Theorem 3.8.7. Fundamental spaces and row equivalence.
Let \(A\) be an \(m\times n\) matrix, and suppose \(A\) is row equivalent to \(B\text{.}\) The following equalities hold:
\begin{align}
\NS A\amp =\NS B \tag{3.8.4}\\
\RS A\amp =\RS B \tag{3.8.5}\\
\dim\CS A \amp = \dim \CS B \text{.}\tag{3.8.6}
\end{align}
Although \(\CS A\) and \(\CS B\) have the same dimension, they are in general not equal as subspaces.
First observe that \(A=\begin{bmatrix}1\amp 1\\ 1\amp 1 \end{bmatrix}\) is row equivalent to \(B=\begin{bmatrix}1\amp 1\\ 0 \amp 0\end{bmatrix}\) and yet \(\CS A=\Span\{(1,1)\}=\{(t,t)\colon t\in \R\}\ne \CS B=\Span\{(1,0)\}=\{(t,0)\colon t\in\R\}\text{.}\) Thus we do not have \(\CS A=\CS B\) in general.
We now turn to the equalities (3.8.4)–(3.8.6). Assume that \(A\) is row equivalent to \(B\text{.}\) Using the formulation of row reduction in terms of multiplication by elementary matrices, we see that there is an invertible matrix \(Q\) such that \(B=QA\text{,}\) and hence also \(A=Q^{-1}B\text{.}\) But then we have
\begin{align*}
\boldx\in\NS A \amp\iff A\boldx=\boldzero\\
\amp\iff QA\boldx=\boldzero \amp (Q\text{ invertible})\\
\amp\iff B\boldx=\boldzero\\
\amp\iff \boldx\in\NS B\text{,}
\end{align*}
which proves \(\NS A=\NS B\text{.}\) Next, since \(B=QA\text{,}\) each column of \(B\) is \(Q\) times the corresponding column of \(A\text{.}\) Because \(Q\) is invertible, multiplication by \(Q\) preserves linear independence and spanning, and hence carries a basis of \(\CS A\) to a basis of \(\CS B\text{.}\) It follows that \(\dim\CS A=\dim\CS B\text{.}\)
Lastly, we turn to the row spaces. We will show that each row of \(B\) is an element of \(\RS A\text{,}\) whence it follows that \(\RS B\subseteq \RS A\text{.}\) Let \(\boldr_i\) be the \(i\)-th row of \(B\text{,}\) and let \(\boldq_i\) be the \(i\)-th row of \(Q\text{.}\) By Theorem 2.1.26, we have \(\boldr_i=\boldq_i A\text{,}\) and furthermore, \(\boldq_i A\) is the linear combination of the rows of \(A\) whose coefficients come from the entries of \(\boldq_i\text{.}\) Thus \(\boldr_i\in\RS A\text{,}\) as desired.
Having shown that \(\RS B\subseteq \RS A\text{,}\) we see that the same argument works mutatis mutandis (swapping the roles of \(A\) and \(B\) and using \(Q^{-1}\) in place of \(Q\)) to show that \(\RS A\subseteq \RS B\text{.}\) We conclude that \(\RS A=\RS B\text{.}\)
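Both the counterexample and the three equalities are easy to reproduce computationally. A SymPy sketch using the matrices \(A\) and \(B\) from the proof; comparing the basis lists happens to detect subspace equality here because SymPy derives each basis from the same reduced form:

```python
from sympy import Matrix

A = Matrix([[1, 1],
            [1, 1]])
B = Matrix([[1, 1],
            [0, 0]])  # a row echelon form of A

print(A.nullspace() == B.nullspace())  # True: NS A = NS B
print(A.rowspace() == B.rowspace())    # True: RS A = RS B
print(A.rank() == B.rank())            # True: dim CS A = dim CS B
print(A.columnspace() == B.columnspace())
# False: span{(1,1)} and span{(1,0)} are different subspaces
```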
Now that we better understand the role row reduction plays in fundamental spaces, we investigate the special case of a matrix in row echelon form.
Theorem 3.8.8. Fundamental spaces and row echelon forms.
Let \(A\) be an \(m\times n\) matrix, and suppose \(A\) is row equivalent to the matrix \(U\) in row echelon form. Let \(r \) be the number of leading ones in \(U\text{,}\) and let \(s=n-r\text{;}\) i.e., \(r\) and \(s\) are the number of leading and free variables, respectively, of the system corresponding to \(\begin{amatrix}[r|r]U\amp \boldzero\end{amatrix}\text{.}\) We have
\begin{align}
\dim\RS A\amp =\dim\CS A=r \tag{3.8.7}\\
\dim\NS A\amp =s\text{.}\tag{3.8.8}
\end{align}
By Theorem 3.8.7 we know that \(\NS A=\NS U, \RS A=\RS U\text{,}\) and \(\dim\CS A=\dim\CS U\text{.}\) So it is enough to show that \(\dim \CS U=\dim\RS U=r\) and \(\dim \NS U=s\text{.}\)
First, we will show that the \(r\) nonzero rows of \(U\) form a basis for \(\RS U\text{,}\) proving \(\dim\RS U=r\text{.}\) Clearly the nonzero rows span \(\RS U\text{,}\) since any linear combination of all the rows of \(U\) can be expressed as a linear combination of the nonzero rows. Furthermore, since \(U\) is in row echelon form, the staircase pattern of the leading ones appearing in the nonzero rows ensures that these row vectors are linearly independent.
Next, we show that the columns of \(U\) containing leading ones form a basis of \(\CS U\text{.}\) Let \(\boldu_{i_1},\dots, \boldu_{i_r}\) be the columns of \(U\) with leading ones, and let \(\boldu_{j_1}, \boldu_{j_2}, \dots, \boldu_{j_s}\) be the columns without leading ones. To prove the \(\boldu_{i_k}\) form a basis for \(\CS U\text{,}\) we will show that given any \(\boldy\in \CS U\) there is a unique choice of scalars \(c_1, c_2,\dots,
c_r\) such that \(c_1\boldu_{i_1}+\cdots +c_r\boldu_{i_r}=\boldy\text{.}\) (Recall that the uniqueness of this choice implies linear independence.) Given \(\boldy\in \CS U\text{,}\) we can find \(\boldx\in\R^n\) such that \(U\boldx=\boldy\) (3.8.3), which means the linear system with augmented matrix \([\ U\ \vert \ \boldy]\) is consistent. Using our Gaussian elimination theory (specifically, Procedure 1.3.6), we know that the solutions \(\boldx=(x_1,x_2,\dots,
x_n)\) to this system are in 1-1 correspondence with choices for the free variables \(x_{j_1}=t_{j_1}, x_{j_2}=t_{j_2}, \dots,
x_{j_s}=t_{j_s}\text{.}\) (Remember that the columns \(\boldu_{j_k}\) without leading ones correspond to the free variables.) In particular, there is a unique solution to \(U\boldx=\boldy\) where we set all the free variables equal to 0. By the column method (Theorem 2.1.24), this gives us a unique linear combination of only the columns \(\boldu_{i_k}\) with leading ones equal to \(\boldy\text{.}\) This proves the claim, and shows that the columns with leading ones form a basis for \(\CS U\text{.}\) We conclude that \(\dim\CS U=r\text{.}\)
Lastly, by Procedure 1.3.6 the solutions to \(U\boldx=\boldzero\) are in 1-1 correspondence with choices of values for the \(s\) free variables, and this correspondence yields a basis of \(\NS U\) with \(s\) elements. Thus
\begin{equation*}
\dim\NS U=s=n-r\text{,}
\end{equation*}
where the last equality uses the fact that the sum of the number of columns with leading ones (\(r\)) and the number of columns without leading ones (\(s\)) is \(n\text{,}\) the total number of columns.
Theorem 3.8.9 is now an easy consequence of the foregoing.
Theorem 3.8.9. Rank-nullity for matrices.
Let \(A\) be an \(m\times n\) matrix. We have
\begin{align}
n \amp= \nullity A+\rank A \tag{3.8.9}\\
\amp =\dim\NS A+\dim\CS A \tag{3.8.10}\\
\amp =\dim\NS A+\dim\RS A \text{.}\tag{3.8.11}
\end{align}
We have
\begin{align*}
n \amp =\dim\NS T_A+\dim\im T_A \amp (\knowl{./knowl/th_rank-nullity.html}{\text{Theorem 3.8.2}})\\
\amp =\dim\NS A+\dim\CS A \amp \knowl{./knowl/eq_col_image.html}{\text{(3.8.2)}} \\
\amp =\dim\NS A+\dim\RS A \amp \knowl{./knowl/eq_rank_col_row.html}{\text{(3.8.7)}} \\
\amp =\nullity A+\rank A \amp (\knowl{./knowl/d_fundamental_space.html}{\text{Definition 3.8.5}}) \text{.}
\end{align*}
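As a quick sanity check of the theorem, here is a small SymPy sketch; the matrix is arbitrary, chosen here with one dependent row:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, -1],
            [0, 1, 1,  3],
            [1, 3, 1,  2]])   # third row = first row + second row

n = A.cols
print(A.rank())                            # 2
print(len(A.nullspace()))                  # 2
print(len(A.nullspace()) + A.rank() == n)  # True: nullity A + rank A = n
```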
We now gather this suite of results into one overall procedure for computing with fundamental spaces.
Procedure 3.8.10. Computing bases of fundamental spaces.
To compute bases for the fundamental spaces of an \(m\times n\) matrix \(A\text{,}\) proceed as follows.
Row reduce \(A\) to a matrix \(U\) in row echelon form.
We have \(\NS A=\NS U\text{.}\) Compute a parametric description of the solutions to the linear system \(U\boldx=\boldzero\) following Procedure 1.3.6. If the free variables are \(t_1, t_2, \dots, t_k \text{,}\) a basis \(B=\{\boldv_1, \boldv_2, \dots, \boldv_k\}\) of \(\NS A\) is obtained by letting \(\boldv_i\) be the solution corresponding to the choice \(t_i=1\) and \(t_j=0\) for \(j\ne i\text{.}\)
We have \(\RS A=\RS U\text{.}\) The set of nonzero rows of \(U\) is a basis for \(\RS A\text{.}\)
In general \(\CS A\ne \CS U\text{.}\) However, the columns of \(U\) containing leading ones form a basis of \(\CS U\text{,}\) and the corresponding columns of \(A\) form a basis for \(\CS A\text{.}\)
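The procedure translates almost line for line into a computer algebra system. A minimal SymPy sketch follows, with a matrix made up for illustration; note that rref returns both a (reduced) row echelon form \(U\) and the indices of the columns containing leading ones:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

U, pivot_cols = A.rref()  # echelon form and the leading-one column indices

ns_basis = A.nullspace()                   # step 2: basis of NS A = NS U
rs_basis = [U.row(i) for i in range(U.rows)
            if any(U.row(i))]              # step 3: nonzero rows of U span RS A
cs_basis = [A.col(j) for j in pivot_cols]  # step 4: matching columns of A span CS A

print(len(ns_basis), len(rs_basis), len(cs_basis))  # 1 2 2
```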
Video example: fundamental spaces.
The results 3.8.6–3.8.9 allow us to add seven more equivalent statements to our invertibility theorem, bringing us to a total of fourteen!
Theorem 3.8.12. Invertibility theorem (supersized).
Let \(A\) be an \(n\times n\) matrix. The following statements are equivalent.
\(A\) is invertible.
The matrix equation \(A\boldx=\boldzero\) has a unique solution: namely, \(\boldx=\boldzero_{n\times 1}\text{.}\)
\(A\) is row equivalent to \(I_n\text{,}\) the \(n\times n\) identity matrix.
\(A\) is a product of elementary matrices.
\(\det A\ne 0\text{.}\)
\(\displaystyle \NS A=\{\boldzero\}\)
\(\displaystyle \nullity A=0\)
\(\displaystyle \rank A=n\)
\(\displaystyle \RS A=\R^n\)
\(\displaystyle \CS A=\R^n\)
Any of the following equivalent conditions about the set \(S\) of columns of \(A\) holds: \(S\) is a basis of \(\R^n\text{;}\) \(S\) spans \(\R^n\text{;}\) \(S\) is linearly independent.
Any of the following equivalent conditions about the set \(S\) of rows of \(A\) holds: \(S\) is a basis of \(\R^n\text{;}\) \(S\) spans \(\R^n\text{;}\) \(S\) is linearly independent.
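Several of the new statements are directly computable. A brief SymPy sketch with a made-up invertible \(2\times 2\) matrix:

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [1, 1]])  # det A = 1, so A is invertible

print(A.det() != 0)         # True
print(A.nullspace() == [])  # True: NS A = {0}, i.e. nullity A = 0
print(A.rank() == A.rows)   # True: rank A = n, so CS A = RS A = R^n
```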
Subsection 3.8.3 Contracting and expanding to bases
Thanks to Theorem 3.7.3 we know that spanning sets can be contracted to bases, and linearly independent sets can be extended to bases; and we have already seen a few instances where this result has been put to good use. However, neither the theorem nor its proof provides a practical means of performing this contraction or extension. We would like a systematic way of determining which vectors to throw out (when contracting), or which vectors to chuck in (when extending). In the special case where \(V=\R^n\) for some \(n\text{,}\) we can adapt Procedure 3.8.10 to our needs.
Procedure 3.8.13. Contracting and extending to bases of \(\R^n\).
Let \(S=\{\boldv_1, \boldv_2,\dots, \boldv_r\}\subseteq \R^n\text{.}\)
Contracting to a basis
Assume \(S\) spans \(\R^n\text{.}\) To contract \(S\) to a basis \(B\subseteq S\text{,}\) proceed as follows.
Let \(A\) be the \(n\times r\) matrix whose \(j\)-th column is given by \(\boldv_j\) for all \(1\leq j\leq r\text{.}\)
Use the column space procedure (3.8.10) to compute a basis \(B\) of \(\CS A\text{,}\) chosen from among the original columns of \(A\text{.}\)
The subset \(B\subseteq S\) is a basis for \(\R^n\text{.}\)
Extending to a basis
Assume \(S\) is linearly independent. To extend \(S\) to a basis \(B\) of \(\R^n\) proceed as follows.
Let \(A\) be the \(n\times (r+n)\) matrix whose first \(r\) columns are the elements of \(S\text{,}\) and whose remaining \(n\) columns consist of \(\bolde_1, \bolde_2, \dots, \bolde_n\text{,}\) the standard basis elements of \(\R^n\text{.}\)
Use the column space procedure (3.8.10) to compute a basis \(B\) of \(\CS A\text{,}\) chosen from among the original columns of \(A\text{.}\)
The set \(B\) is a basis for \(\R^n\) containing \(S\text{.}\)
Let’s see why in both cases the procedure produces a basis of \(\R^n\) that is either a sub- or superset of \(S\text{.}\)
Contracting to a basis.
In this case we have \(\CS A=\Span S=\R^n\text{.}\) Thus \(B\) is a basis for \(\R^n\text{.}\) Since the column space procedure selects columns from among the original columns of \(A\text{,}\) we have \(B\subseteq S\text{,}\) as desired.
Extending to a basis.
Since \(\CS A\) contains \(\bolde_j\) for all \(1\leq j\leq n\text{,}\) we have \(\CS A=\R^n\text{.}\) Thus \(B\) is a basis for \(\R^n\text{.}\) Since the first \(r\) columns of \(A\) are linearly independent (they are the elements of \(S\)), when we row reduce \(A\) to a matrix \(U\) in row echelon form, the first \(r\) columns of \(U\) will contain leading ones. (To see this, imagine row reducing the \(n\times r\) submatrix \(A'\) consisting of the first \(r\) columns of \(A\) to a row echelon matrix \(U'\text{.}\) Since these columns are linearly independent, they already form a basis for \(\CS A'\text{.}\) Thus the corresponding columns of \(U'\) must all have leading ones.) It follows that the first \(r\) columns of \(A\) are selected to be in the basis \(B\text{,}\) and hence that \(S\subseteq B\text{,}\) as desired.
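Both halves of Procedure 3.8.13 come down to a single row reduction. A SymPy sketch with made-up vectors:

```python
from sympy import Matrix, eye

# Contracting: S spans R^2 but is not linearly independent.
S = [Matrix([1, 1]), Matrix([2, 2]), Matrix([0, 1])]
A = Matrix.hstack(*S)
_, pivots = A.rref()
print([S[j] for j in pivots])  # basis of R^2 contained in S: (1,1) and (0,1)

# Extending: S is linearly independent but does not span R^3.
S = [Matrix([1, 1, 0])]
A = Matrix.hstack(*S, eye(3))  # adjoin the standard basis e1, e2, e3
_, pivots = A.rref()
print([A.col(j) for j in pivots])
# basis of R^3 containing S: (1,1,0), (1,0,0), (0,0,1)
```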
Video example: contracting to a basis.
Exercises 3.8.4 Exercises
WeBWorK Exercises
1.
Suppose that \(A\) is a \(4 \times 8\) matrix that has an echelon form with two zero rows. Find the dimension of the row space of \(A\text{,}\) the dimension of the column space of \(A\text{,}\) and the dimension of the null space of \(A\text{.}\)
The dimension of the row space is the number of nonzero rows in the echelon form, or \(4 - 2 = 2.\) The dimension of the column space is the same as the dimension of the row space, and the dimension of the null space is \(8 - 2 = 6.\)
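For a concrete check of this reasoning, here is a SymPy sketch built around one made-up \(4\times 8\) matrix whose echelon forms have two zero rows:

```python
from sympy import Matrix

r1 = [1, 0, 2, 0, 1, 3, 0, 1]
r2 = [0, 1, 1, 2, 0, 1, 4, 0]
A = Matrix([r1,
            r2,
            [a + b for a, b in zip(r1, r2)],   # r1 + r2: becomes a zero row
            [a - b for a, b in zip(r1, r2)]])  # r1 - r2: becomes a zero row

print(len(A.rowspace()))   # 2
print(A.rank())            # 2  (dim CS A = dim RS A)
print(len(A.nullspace()))  # 6  (= 8 - 2)
```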
2.
Are the following statements true or false?
\(R^n\) has exactly one subspace of dimension \(m\) for each of \(m = 0,1,2,\ldots, n\) .
Let \(m>n\text{.}\) Then \(U =\) {\(u_1, u_2,\ldots, u_m\)} in \(R^n\) can form a basis for \(R^n\) if the correct \(m-n\) vectors are removed from \(U\text{.}\)
Let \(m\lt n\text{.}\) Then \(U =\) {\(u_1, u_2,\ldots, u_m\)} in \(R^n\) can form a basis for \(R^n\) if the correct \(n-m\) vectors are added to \(U\text{.}\)
The set {0} forms a basis for the zero subspace.
If {\(u_1, u_2, u_3\)} is a basis for \(R^3\text{,}\) then span{\(u_1, u_2\)} is a plane.
3.
Are the following statements true or false?
If \(\ S_1\) and \(\ S_2\) are subspaces of \(R^n\) of the same dimension, then \(S_1=S_2\text{.}\)
If \(\ S =\) span{\(u_1, u_2, u_3\)}, then \(dim(S) = 3\) .
If the set of vectors \(U\) spans a subspace \(S\text{,}\) then vectors can be added to \(U\) to create a basis for \(S\)
If the set of vectors \(U\) is linearly independent in a subspace \(S\) then vectors can be added to \(U\) to create a basis for \(S\)
If the set of vectors \(U\) is linearly independent in a subspace \(S\) then vectors can be removed from \(U\) to create a basis for \(S\text{.}\)
Solution: (a) Since \(e_1\text{,}\) \(e_2\text{,}\) \(e_3\) span \(\mathbb{R}^3\text{,}\) we get that \(L(\mathbb{R}^3)\) is spanned by \(L(e_1)\text{,}\) \(L(e_2)\text{,}\) \(L(e_3)\text{.}\) So
In other words, given any list of values \(a, b, c, d\text{,}\) we can find a polynomial that evaluates to these values at the inputs \(x=-1, 0, 1, 2\text{.}\)
Define \(T\colon P_3\rightarrow \R^4\) by \(T(p(x))=(p(-1), p(0), p(1), p(2))\text{.}\) Show that \(T\) is linear.
Compute \(\NS T\text{.}\) You may use the fact that a polynomial of degree \(n\) has at most \(n\) roots.
Use the rank-nullity theorem to compute \(\dim \im T\text{.}\) Explain why this implies \(\im T=\R^4\text{.}\)
Explain why the equality \(\im T=\R^4\) is equivalent to the claim we wish to prove.
8.
Use the rank-nullity theorem to compute the rank of the linear transformation \(T\) described.
For each matrix \(A\text{,}\) (i) row reduce \(A\) to a matrix \(U\) in row echelon form, (ii) compute bases for \(\CS A\) and \(\CS U\text{,}\) (iii) compute \(\dim \CS A\) and \(\dim \CS U\text{,}\) and (iv) decide whether \(\CS A=\CS U\text{.}\)
Assume \(A\) is invertible, and is row equivalent to the row echelon matrix \(U\text{.}\) Prove: \(\CS A=\CS U\text{.}\)
13.
For each matrix below, (i) compute bases for each fundamental space, (ii) identify these spaces as familiar geometric objects in \(\R^2\) or \(\R^3\text{,}\) and (iii) provide sketches of each space. The sketches of \(\NS A\) and \(\RS A\) should be combined in the same coordinate system.
For each \(A\) compute bases for each fundamental space. In each case, you can find bases for one of the fundamental spaces by inspection, and then use the rank-nullity theorem to reduce your workload for the other spaces. See first solution for a model example.
Clearly, \(B=\{(1,1)\}\) is a basis for \(\CS A\text{,}\) and \(B'=\{(1,1,1,1)\}\) is a basis for \(\RS A\text{.}\) It follows that \(\rank A=1\) and hence \(\nullity A=4-1=3\text{.}\) Thus we need to find three linearly independent elements of \(\NS A\) to find a basis. We can do so by inspection with the help of the column method. Namely, observe that \(\boldv_1=(1,-1,0,0), \boldv_2=(0,1,-1,0), \boldv_3=(0,0,1,-1)\) are all in \(\NS A\) (column method). The location of zeros in these vectors makes it clear that \(B''=\{\boldv_1,\boldv_2, \boldv_3\}\) is linearly independent. Since \(\dim \NS A=3\) and \(\val{B''}=3\text{,}\) we conclude that \(B''\) is a basis of \(\NS A\) (3.7.13).
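This model solution can be verified with software. The SymPy sketch below assumes \(A\) is the \(2\times 4\) all-ones matrix, an assumption consistent with the bases \(B\) and \(B'\) found above:

```python
from sympy import Matrix, ones

A = ones(2, 4)  # assumed: consistent with CS A = span{(1,1)}, RS A = span{(1,1,1,1)}

print(A.rank())            # 1
print(len(A.nullspace()))  # 3 = 4 - 1, matching rank-nullity
for v in [Matrix([1, -1, 0, 0]),
          Matrix([0, 1, -1, 0]),
          Matrix([0, 0, 1, -1])]:
    print(A * v == Matrix([0, 0]))  # True: each v_i lies in NS A
```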
15.
For each \(A\) use Procedure 3.8.10 to compute bases for each fundamental space.
Prove: \(A^2=\boldzero_{n\times n}\) if and only if \(\CS A\subseteq \NS A\text{.}\)
Construct a \(2\times 2\) matrix \(A\) with \(\NS A=\CS A=\Span\{(1,2)\}\text{.}\) Verify that your \(A\) satisfies \(A^2=\boldzero_{2\times 2}\text{.}\)
18.
Suppose \(A\) is \(m\times n\) with \(m\ne n\text{.}\)
Prove: either the rows of \(A\) are linearly dependent or the columns of \(A\) are linearly dependent.
19.
Prove: \(\nullity A=\nullity A^T\) if and only if \(A\) is a square matrix.
20.
Suppose \(A\) and \(B\) are row equivalent \(m\times n\) matrices. For each \(1\leq j\leq n\) let \(\bolda_j\) and \(\boldb_j\) be the \(j\)-th columns of \(A\) and \(B\text{,}\) respectively.
Prove: columns \(\boldb_{i_1}, \boldb_{i_2}, \dots, \boldb_{i_r}\) of \(B\) are linearly independent if and only if the corresponding columns \(\bolda_{i_1}, \bolda_{i_2},\dots, \bolda_{i_r}\) are linearly independent.
By Corollary 2.4.16 there is an invertible \(Q\) such that \(QA=B\text{.}\) Let \(A'\) and \(B'\) be the submatrices of \(A\) and \(B\) obtained by taking columns \(i_1, i_2, \dots, i_r\text{.}\) Show that we still have \(QA'=B'\) and relate linear independence of the columns to solutions of the matrix equations \(A'\boldx=\boldzero\) and \(B'\boldx=\boldzero\text{.}\)