

Section 4.5 Fundamental spaces

This section is in a sense just a long-format example of how to compute bases and dimensions of certain subspaces of \(\R^n\text{.}\) The subspaces in question will be defined as so-called fundamental spaces of matrices. It should thus come as little surprise that the algorithms used in these computations make use of Gaussian elimination, fabled workhorse of linear algebra. Lastly, we will also meet the matrix version of the famous rank-nullity theorem, sometimes called the fundamental theorem of linear algebra.

Subsection 4.5.1 Fundamental spaces of matrices

Let \(A\) be an \(m\times n\) matrix. In addition to its null space \(\NS A\subseteq \R^n\text{,}\) \(A\) gives rise to two further naturally defined subspaces, called the row space and column space of the matrix. Taken together, these three subspaces associated to \(A\) are called its fundamental spaces. Observe that \(\NS A\) was defined previously (4.1.10). We include it below to gather all the fundamental spaces together under one definition.

Definition 4.5.1. Fundamental spaces.

Let \(A\) be an \(m\times n\) matrix. Let \(\boldr_1,\dots, \boldr_m\in \R^n\) be the \(m\) rows of \(A\text{,}\) and let \(\boldc_1,\dots, \boldc_n\in \R^m\) be its \(n\) columns. The following subspaces are called the fundamental subspaces of \(A\).
  • Null space.
    The null space of \(A\), denoted \(\NS A\text{,}\) is defined as
    \begin{equation*} \NS A =\{\boldx\in\R^n\colon A\boldx=\boldzero\}\subseteq \R^n\text{.} \end{equation*}
  • Row space.
    The row space of \(A\), denoted \(\RS A\text{,}\) is defined as
    \begin{equation*} \RS A=\Span \{\boldr_1, \boldr_2, \dots, \boldr_m\}\subseteq \R^n\text{.} \end{equation*}
  • Column space.
    The column space of \(A\), denoted \(\CS A\text{,}\) is defined as
    \begin{equation*} \CS A=\Span \{\boldc_1, \boldc_2, \dots, \boldc_n\}\subseteq \R^m\text{.} \end{equation*}

Remark 4.5.2. Fundamental spaces.

Note that the fundamental spaces of a matrix \(A\in M_{mn}\) are indeed subspaces. This is a simple consequence of Theorems 4.1.11 and 4.2.6, which tell us that null spaces of matrices and spans of vectors are subspaces.
A good practice when dealing with the fundamental spaces of a matrix is to first sort out the ambient space where each of these subspaces lives: if \(A\in M_{mn}\text{,}\) then both \(\NS A\) and \(\RS A\) are subspaces of \(\R^n\text{,}\) and \(\CS A\) is a subspace of \(\R^m\text{.}\)
Our goal is to be able to determine the various fundamental spaces of a matrix in an efficient manner. More specifically, we want to be able to compute bases for these spaces, and compute their dimension. Our first example is elementary enough to allow us to do this essentially by inspection.

Example 4.5.3. Fundamental spaces: elementary example.

Provide bases and compute the dimension of the three fundamental spaces of
\begin{equation*} A=\begin{amatrix}[rrr] 1 \amp 2\amp 3 \\ 1 \amp 2\amp 3 \end{amatrix}\text{.} \end{equation*}
Solution.
By definition, we have
\begin{align*} \NS A=\{\boldx\in \R^3\colon A\boldx=\boldzero\}\amp \subseteq \R^3 \amp \\ \RS A=\Span\{(1,2,3),(1,2,3)\} \amp\subseteq \R^3 \\ \CS A=\Span\{(1,1),(2,2),(3,3)\}\amp \subseteq \R^2 \text{.} \end{align*}
We see easily by inspection that
\begin{equation*} \RS A=\Span\{(1,2,3),(1,2,3)\}=\Span\{(1,2,3)\}\text{,} \end{equation*}
a line in \(\R^3\) passing through the origin, and
\begin{equation*} \CS A=\Span\{(1,1),(2,2),(3,3)\}=\Span\{(1,1)\}\text{,} \end{equation*}
a line in \(\R^2\) passing through the origin. Since the sets \(B_1=\{(1,2,3)\}\) and \(B_2=\{(1,1)\}\) are each linearly independent, they are clearly bases for their spans \(\RS A \) and \(\CS A\text{,}\) respectively. It follows that \(\dim \RS A=\dim \CS A=1\text{.}\)
Next, by definition \(\NS A\) is the set of vectors \(\boldx=(x_1,x_2,x_3)\in \R^3\) satisfying
\begin{equation*} A\boldx=\begin{bmatrix} x_1+2x_2+3x_3\\ x_1+2x_2+3x_3 \end{bmatrix} =\begin{bmatrix} 0 \\ 0 \end{bmatrix}\text{,} \end{equation*}
or equivalently, the solutions to the linear equation \(x_1+2x_2+3x_3=0 \text{,}\) a plane in \(\R^3\) passing through the origin. Using Procedure 2.3.6 we derive the parametric description
\begin{align*} \NS A\amp=\{(-(2s+3t),s,t)\colon s,t\in \R\}\\ \amp =\{s(-2,1,0)+t(-3,0,1)\colon s,t\in \R\} \text{,} \end{align*}
from which we see that
\begin{equation*} \NS A=\Span\{(-2,1,0),(-3,0,1)\}\text{.} \end{equation*}
Since \(B_3=\{(-2,1,0),(-3,0,1)\}\) is linearly independent and spans \(\NS A\text{,}\) it is a basis for \(\NS A\text{.}\) We conclude \(\dim \NS A=2\text{.}\)
The various fundamental spaces computed in Example 4.5.3 are represented in Figure 4.5.4. Note that separate graphs are presented for \(\NS A\) and \(\RS A\text{,}\) which live in \(\R^3\text{,}\) and \(\CS A\text{,}\) which lives in \(\R^2\text{.}\)
(a) \(\NS A, \RS A\subseteq \R^3\)
(b) \(\CS A\subseteq \R^2\)
Figure 4.5.4. Fundamental spaces of \(A=\begin{bmatrix}1\amp 2\amp 3\\ 1\amp 2\amp 3\end{bmatrix}\)
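For readers who like to check such computations with software, the short sketch below uses the Python library SymPy (a tooling assumption of this aside; the text itself relies on no software) to recompute the fundamental spaces of the matrix from Example 4.5.3. The shapes of the returned vectors also confirm the ambient spaces noted in Remark 4.5.2: null space and row space vectors have three entries, while column space vectors have two.

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [1, 2, 3]])

print(A.nullspace())    # basis of NS A: the vectors (-2, 1, 0) and (-3, 0, 1), so dim NS A = 2
print(A.rowspace())     # basis of RS A: the single row (1, 2, 3), so dim RS A = 1
print(A.columnspace())  # basis of CS A: the single column (1, 1), so dim CS A = 1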
The null space of a matrix \(A\) has an obvious connection with systems of linear equations: it is precisely the set of solutions to the homogeneous linear system \(A\boldx=\boldzero\text{.}\) The next theorem provides an interpretation of the column space of \(A\) in terms of linear systems.

Proof.

Statements (2) and (3) are equivalent by virtue of the definition of a consistent system. Statements (1) and (2) are equivalent thanks to the column method of matrix multiplication. Indeed, letting \(\bolda_1, \bolda_2,\dots, \bolda_n\in \R^m\) be the columns of \(A\text{,}\) so that
\begin{equation*} A=\begin{bmatrix} \vert \amp \vert \amp \amp \vert \\ \bolda_1\amp \bolda_2\amp \cdots \amp \bolda_n\\ \vert \amp \vert \amp \amp \vert \end{bmatrix}\text{,} \end{equation*}
we have
\begin{align*} \boldb\in \CS A \amp \iff \boldb=c_1\bolda_1+c_2\bolda_2+\cdots +c_n\bolda_n \text{ for some } c_i\in \R \amp (\text{def.})\\ \amp \iff \boldb=A\boldx, \text{ where } \boldx=(c_1,c_2,\dots, c_n)\in \R^n \amp \knowl{./knowl/xref/th_column_method.html}{\text{3.1.27}}\text{.} \end{align*}
The set equalities (4.14)–(4.15) follow immediately from the equivalence of statements (1)-(3).
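As a computational aside (ours, not part of the text's development), the equivalences above give a practical membership test for the column space: \(\boldb\in\CS A\) exactly when the system \(A\boldx=\boldb\) is consistent, which in turn holds exactly when the augmented matrix \([\ A\ \vert\ \boldb\ ]\) has the same rank as \(A\) (rank is defined later in this section; computationally it is the number of leading ones in a row echelon form). A minimal sketch, again assuming SymPy:

from sympy import Matrix

def in_column_space(A, b):
    """True exactly when A x = b is consistent, i.e. when b lies in CS A."""
    return A.rank() == Matrix.hstack(A, b).rank()

A = Matrix([[1, 2, 3],
            [1, 2, 3]])
print(in_column_space(A, Matrix([1, 1])))   # True:  (1, 1) is a column of A
print(in_column_space(A, Matrix([1, 0])))   # False: the system A x = (1, 0) is inconsistent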

Remark 4.5.6. What about the row space?

You might be wondering why we give the row space of a matrix such short shrift here. As it turns out, the null space and column space of a matrix will be of the most importance to us algorithmically, with row space playing more of a supporting role of convenience. That said, as we will be able to show once we know more about inner products, the row space of a matrix \(A\) also has a connection with linear systems associated to \(A\text{:}\) namely, it is the orthogonal complement of the null space of \(A\text{.}\)
For more complicated matrices, such as the one in Example 4.5.11, the key to computing the various fundamental spaces lies with Gaussian elimination. The next theorem lays out exactly how row reduction affects the fundamental spaces of a matrix.

Proof.

Assume that \(A\) is row equivalent to \(B\text{.}\) Using Corollary 3.4.17, we see that this means there is an invertible matrix \(Q\) such that \(B=QA\text{,}\) and hence also \(A=Q^{-1}B\text{.}\) We will make use of this matrix \(Q\) below.
  1. The fact that \(\NS A=\NS B\) is a consequence of Theorem 2.1.16, since if \(A\) is row equivalent to \(B\text{,}\) then so are the linear systems with augmented matrices \([A\vert\boldzero]\) and \([B\vert\boldzero]\text{.}\)
  2. First we show that \(\RS B\subseteq \RS A\text{.}\) Since \(\RS B\) is defined as the span of the rows of \(B\text{,}\) by Theorem 4.2.6 it is enough to show that each row of \(B\) is an element of \(\RS A\text{.}\) For all \(1\leq i\leq m\text{,}\) let \(\bolda_i, \boldb_i, \boldq_i\) denote the \(i\)th rows of \(A\text{,}\) \(B\text{,}\) and \(Q\text{,}\) respectively, treated as row vectors. Applying Theorem 3.1.29 to the matrix product \(B=QA\text{,}\) we have \(\underset{1\times n}{\boldb_i}=\underset{1\times m}{\boldq_i}\, A\) for all \(1\leq i\leq m\text{.}\) Using Theorem 3.1.29 again, the row vector \(\boldq_i A\) is itself a linear combination of the rows \(\bolda_1,\dots,\bolda_m\) of \(A\text{.}\) By definition of row space, this means \(\boldb_i\in \RS A\) for all \(1\leq i\leq m\text{,}\) and hence \(\RS B\subseteq \RS A\text{,}\) as desired.
    It remains to show that \(\RS A\subseteq \RS B\text{.}\) But this follows using the same argument as above, using the fact that \(A=Q^{-1}B\text{:}\) i.e., by swapping the roles of \(A\) and \(B\text{,}\) and replacing \(Q\) with \(Q^{-1}\text{.}\)
  3. We now let \(\bolda_j, \boldb_j\) be the columns of \(A\) and \(B\text{,}\) respectively, where \(1\leq j\leq n\text{.}\) Since \(\CS A=\Span\{\bolda_1,\bolda_2,\dots, \bolda_n\}\text{,}\) by Theorem 4.4.15 there is a subset of the columns that forms a basis of \(\CS A\text{.}\) Let \(S=\{\bolda_{i_1}, \bolda_{i_2},\dots, \bolda_{i_r}\}\) be any such basis, so that \(\dim \CS A=r\text{.}\) We claim that the corresponding set of columns \(S'=\{\boldb_{i_1},\boldb_{i_2},\dots, \boldb_{i_r}\}\) of \(B\) is a basis of \(\CS B\text{,}\) in which case \(\dim \CS B=\dim \CS A=r\text{.}\) To do so, we will use Theorem 4.3.4: in more detail, we will show that given any \(\boldw\in \CS B\text{,}\) we can write
    \begin{equation*} \boldw=c_{i_1}\boldb_{i_1}+c_{i_2}\boldb_{i_2}+\cdots +c_{i_r}\boldb_{i_r}\text{,} \end{equation*}
    in a unique way. Indeed, we have
    \begin{align*} \boldw\in \CS B \amp \iff \boldw\in \CS QA \amp (B=QA)\\ \amp \iff \boldw=QA\boldx \text{ for some } \boldx\in \R^n\\ \amp \iff Q^{-1}\boldw=A\boldx \text{ for some } \boldx\in \R^n\\ \amp \iff Q^{-1}\boldw\in \CS A\\ \amp \iff Q^{-1}\boldw=c_{i_1}\bolda_{i_1}+\cdots +c_{i_r}\bolda_{i_r} \end{align*}
    for a unique \(r\)-tuple \((c_{i_1}, c_{i_2},\dots, c_{i_r})\text{.}\) Here we have used the fact that the subset \(\{\bolda_{i_1},\bolda_{i_2},\dots, \bolda_{i_r}\}\) is a basis of \(\CS A\text{.}\) But this means
    \begin{align*} \boldw \amp =Q(c_{i_1}\bolda_{i_1}+\cdots +c_{i_r}\bolda_{i_r})\\ \amp = c_{i_1}Q\bolda_{i_1}+\cdots +c_{i_r}Q\bolda_{i_r}\\ \amp =c_{i_1}\boldb_{i_1}+\cdots +c_{i_r}\boldb_{i_r}\text{,} \end{align*}
    since \(\boldb_{i}=Q\bolda_i\) for all \(1\leq i\leq n\) by Theorem 3.1.27. Furthermore, the coefficients in the linear combination
    \begin{equation*} \boldw=c_{i_1}\boldb_{i_1}+\cdots +c_{i_r}\boldb_{i_r} \end{equation*}
    are unique, since our work above shows that
    \begin{equation*} \boldw=c_{i_1}\boldb_{i_1}+\cdots +c_{i_r}\boldb_{i_r} \iff Q^{-1}\boldw=c_{i_1}\bolda_{i_1}+\cdots +c_{i_r}\bolda_{i_r}\text{.} \end{equation*}

Remark 4.5.8. Column space and row reduction.

Suppose the matrices \(A\) and \(B\) are row equivalent. Though it is true that their column spaces are of equal dimension, they will not in general be equal as sets. That is, we will often have \(\CS A\ne \CS B\text{.}\) It is useful to have simple examples of this phenomenon at the ready. Example 4.5.9 provides one such example.

Example 4.5.9. Column space and row reduction.

The matrix
\begin{align*} A \amp = \begin{bmatrix}1\amp 1\\ 1\amp 1 \end{bmatrix} \end{align*}
row reduces to the matrix
\begin{align*} U \amp = \begin{bmatrix}1\amp 1\\ 0 \amp 0\end{bmatrix} \end{align*}
in row echelon form. By inspection we see that
\begin{align*} \CS A \amp =\Span\{(1,1)\}=\{(t,t)\colon t\in \R\}\\ \CS U \amp =\Span\{(1,0)\}=\{(t,0)\colon t\in \R\}\text{.} \end{align*}
It is clear algebraically from the two set descriptions that \(\CS A\ne \CS U\text{.}\) Geometrically, the two spaces are two distinct lines in \(\R^2\) passing through the origin: \(\CS A\) is the line defined by the equation \(y=x\) and \(\CS U\) is the \(x\)-axis.
By contrast consider the matrix
\begin{equation*} A=\begin{bmatrix}1\amp 1\\ 0\amp 1 \end{bmatrix}\text{,} \end{equation*}
which is row equivalent to
\begin{equation*} U=\begin{bmatrix} 1\amp 0 \\ 0\amp 1 \end{bmatrix} \text{.} \end{equation*}
In this case we have
\begin{align*} \CS A \amp =\Span\{(1,0),(1,1)\}=\R^2\\ \CS U \amp =\Span\{(1,0),(0,1)\}=\R^2\text{,} \end{align*}
and thus \(\CS A=\CS U\text{.}\) (See Exercise 4.5.4.14 for a more general take on this example.)
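The contrast in Example 4.5.9 is easy to reproduce computationally. In the sketch below (SymPy again assumed), two spans are compared by a rank computation: the spans of two lists of vectors are equal exactly when stacking all the vectors together does not increase the rank. The null space and row space of the first matrix \(A\) of Example 4.5.9 agree with those of its row echelon form \(U\text{,}\) while the column spaces do not.

from sympy import Matrix

def same_span(vectors1, vectors2):
    """True exactly when the two lists of column vectors span the same subspace."""
    V1 = Matrix.hstack(*vectors1)
    V2 = Matrix.hstack(*vectors2)
    r = Matrix.hstack(V1, V2).rank()
    return r == V1.rank() and r == V2.rank()

A = Matrix([[1, 1],
            [1, 1]])
U = Matrix([[1, 1],
            [0, 0]])

print(same_span(A.nullspace(), U.nullspace()))      # True:  NS A = NS U
print(same_span([r.T for r in A.rowspace()],
                [r.T for r in U.rowspace()]))       # True:  RS A = RS U
print(same_span(A.columnspace(), U.columnspace()))  # False: CS A != CS U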
Using Theorem 4.5.7, to find bases of the fundamental spaces of a matrix we can first row reduce it to a matrix \(U\) in row echelon form. Some work still needs to be done, however, in determining bases for the fundamental spaces of matrices in row echelon form. We state our results in the form of a procedure, and provide a proof of its validity.

Proof.

Null space.
We must show that the vectors \(\boldv_j\) described form a basis for \(\NS A=\NS U\text{.}\) Let \(x_1,x_2,\dots, x_n\) be the unknowns of the linear system corresponding to \(U\boldx=\boldzero\text{,}\) and let \(x_{i_1},x_{i_2},\dots, x_{i_r}\) be the free variables among these unknowns. Thus \(\boldv_j\) is the solution obtained from the parametric description of the solutions to \(U\boldx=\boldzero\) by setting \(x_{i_j}=1\) and \(x_{i_k}=0\) for all \(k\ne j\text{.}\) Suppose we have
\begin{equation*} a_1\boldv_1+a_2\boldv_2+\cdots +a_r\boldv_r=\boldzero\text{.} \end{equation*}
For each \(1\leq j\leq r\text{,}\) since \(\boldv_j\) is the only vector among \(\boldv_1,\boldv_2,\dots,\boldv_r\) with a nonzero entry in the \(i_j\)-th position, we must have \(a_j=0\text{.}\) This proves that the set \(B=\{\boldv_1,\boldv_2,\dots, \boldv_r\}\) is linearly independent.
We now show that \(\Span B=\NS A\text{.}\) Assume \(\boldv\in \NS U\text{,}\) and let \(t_{i_j}\) be the \(i_j\)-th entry of \(\boldv\) for each \(1\leq j\leq r\text{.}\) In other words, these are the entries of \(\boldv\) corresponding to the free variables of the system. We claim that
\begin{equation} \boldv=t_{i_1}\boldv_1+t_{i_2}\boldv_2+\cdots +t_{i_r}\boldv_r\text{,}\tag{4.16} \end{equation}
from which it follows that \(\Span B=\NS U\text{.}\) To see why this is true, let \(\boldw\) be the right side of (4.16). Observe first that \(\boldw\) is an element of \(\NS U\) (since \(\boldv_i\in \NS U\) for all \(i\)), and thus a solution to \(U\boldx=\boldzero\text{.}\) Furthermore, arguing as above, we see that the \(i_k\)-th entry of \(\boldw\) is equal to \(t_{i_k}\) for all \(1\leq k\leq r\text{.}\) But according to Procedure 2.3.6, there is a unique solution corresponding to each choice of assignment of the free variables \(x_{i_k}\text{.}\) Since \(\boldv\) and \(\boldw\) are both solutions to \(U\boldx=\boldzero\) and have the same free variable entries, we conclude that \(\boldv=\boldw\text{,}\) as desired.
Row space.
Since \(\RS A=\RS U\) by Theorem 4.5.7, it suffices to show that the nonzero rows of \(U\) form a basis of \(\RS U\text{.}\) Clearly the nonzero rows span \(\RS U\text{,}\) since any linear combination of all the rows of \(U\) can be expressed as a linear combination of the nonzero rows. Furthermore, since \(U\) is in row echelon form, the staircase pattern of the leading ones appearing in the nonzero rows ensures that these row vectors are linearly independent.
Column space.
Let \(\boldu_{i_1},\dots, \boldu_{i_s}\) be the columns of \(U\) with leading ones, and let \(\boldu_{j_1}, \boldu_{j_2}, \dots, \boldu_{j_r}\) be the columns without leading ones. To prove the \(\boldu_{i_k}\) form a basis for \(\CS U\text{,}\) we will show that given any \(\boldy\in \CS U\) there is a unique choice of scalars \(c_1, c_2,\dots, c_s\) such that \(c_1\boldu_{i_1}+\cdots +c_s\boldu_{i_s}=\boldy\text{.}\) (Recall that the uniqueness of this choice implies linear independence by Theorem 4.3.4.) Given \(\boldy\in \CS U\text{,}\) we can find \(\boldx\in\R^n\) such that \(U\boldx=\boldy\) (4.5.5), which means the linear system with augmented matrix \([\ U\ \vert \ \boldy]\) is consistent. Using our Gaussian elimination theory (specifically, Procedure 2.3.6), we know that the solutions \(\boldx=(x_1,x_2,\dots, x_n)\) to this system are in 1-1 correspondence with choices for the free variables \(x_{j_1}=t_{j_1}, x_{j_2}=t_{j_2}, \dots, x_{j_r}=t_{j_r}\text{.}\) (Remember that the columns \(\boldu_{j_k}\) without leading ones correspond to the free variables.) In particular, there is a unique solution to \(U\boldx=\boldy\) where we set all the free variables equal to 0. Using Theorem 3.1.27, we see that this gives rise to a unique linear combination of the columns \(\boldu_{i_k}\) with leading ones equal to \(\boldy\text{.}\) This proves the claim, and shows that the columns with leading ones form a basis for \(\CS U\text{.}\) Lastly, by Theorem 4.5.7, the corresponding columns of \(A\) form a basis for \(\CS A\text{.}\)
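To make the null space construction in this proof concrete, the following sketch (SymPy assumed, as in the earlier asides) builds the vectors \(\boldv_j\) directly from the reduced row echelon form: one vector for each column without a leading one, with that free variable set to \(1\text{,}\) the remaining free variables set to \(0\text{,}\) and the leading variables read off from the corresponding rows. The text's Procedure 2.3.6 uses back substitution from a row echelon form instead, but the resulting parametric description, and hence the basis, is the same. Applied to the matrix of Example 4.5.3, the sketch reproduces the basis \(\{(-2,1,0),(-3,0,1)\}\) of \(\NS A\) found there.

from sympy import Matrix, zeros

def nullspace_basis(A):
    """One basis vector of NS A per free column of rref(A): that free variable
    is set to 1, the other free variables to 0, and the leading variables solved for."""
    R, pivots = A.rref()
    n = A.cols
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = zeros(n, 1)
        v[f, 0] = 1
        for row, p in enumerate(pivots):
            v[p, 0] = -R[row, f]    # leading variable in terms of the chosen free variable
        basis.append(v)
    return basis

A = Matrix([[1, 2, 3],
            [1, 2, 3]])
print(nullspace_basis(A))    # the columns (-2, 1, 0) and (-3, 0, 1)
print(A.nullspace())         # SymPy's built-in routine returns the same basis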

Example 4.5.11. Fundamental spaces.

Compute bases and dimensions for the three fundamental spaces of
\begin{equation*} A=\begin{amatrix}[rrrrr] 1 \amp 3 \amp 1 \amp -3 \amp 2 \\ 0 \amp 1 \amp 1 \amp -3 \amp 0 \\ -1 \amp -2 \amp 0 \amp 0 \amp -1 \\ 1 \amp 2 \amp 0 \amp 0 \amp 2 \end{amatrix}\text{.} \end{equation*}
Solution.
The matrix \(A\) row reduces as
\begin{equation*} A\xrightarrow{\text{row ops.}} U=\begin{amatrix}[rrrrr] \boxed{1} \amp 3 \amp 1 \amp -3 \amp 2 \\ 0 \amp \boxed{1} \amp 1 \amp -3 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp \boxed{1} \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{amatrix}\text{.} \end{equation*}
To compute \(\NS A=\NS U\text{,}\) we first give a parametric description of the solutions to the linear system represented by \(U\boldx=\boldzero\text{,}\) following Procedure 2.3.6. We conclude
\begin{equation*} \NS A=\{(2s-6t,-s+3t ,s,t,0)\colon s,t\in \R\}\text{.} \end{equation*}
From this parametric description we extract by inspection the spanning set
\begin{equation*} B_1=\{\boldv_1=(2,-1,1,0,0), \boldv_2=(-6,3,0,1,0)\}\text{.} \end{equation*}
Since \(B_1\) is easily seen to be linearly independent, we conclude it is a basis. (Alternatively, \(B_1\) is the basis one obtains using Procedure 4.5.10. That is, \(\boldv_1\) is the result of assigning \(s=1,t=0\) in our parametric description, and \(\boldv_2\) is the result of assigning \(s=0\text{,}\) \(t=1\text{.}\)) Since \(\abs{B_1}=2\text{,}\) we have \(\dim \NS A=2\text{.}\)
According to Procedure 4.5.10, the nonzero rows of \(U\) form a basis of \(\RS A\text{.}\) Thus \(B_2=\{(1,3,1,-3,2),(0,1,1,-3,0),(0,0,0,0,1)\}\) is a basis of \(\RS A\text{,}\) and \(\dim \RS A=3\text{.}\)
Lastly, since the first, second and fifth columns of \(U\) contain leading ones, the corresponding columns of \(A\) form a basis of \(\CS A\text{.}\) Thus \(B_{3}=\{(1,0,-1,1),(3,1,-2,2),(2,0,-1,2)\}\) is a basis of \(\CS A\text{,}\) and \(\dim \CS A=3\text{.}\)
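The bases found in Example 4.5.11 can be double-checked with the SymPy tooling assumed in the earlier asides: rref reports which columns contain leading ones, columnspace returns the corresponding columns of \(A\text{,}\) and nullspace and rowspace return bases of the other two fundamental spaces.

from sympy import Matrix

A = Matrix([[ 1,  3, 1, -3,  2],
            [ 0,  1, 1, -3,  0],
            [-1, -2, 0,  0, -1],
            [ 1,  2, 0,  0,  2]])

U, pivots = A.rref()
print(pivots)             # (0, 1, 4): leading ones in the first, second, and fifth columns
print(A.columnspace())    # those three columns of A: a basis of CS A, so dim CS A = 3
print(A.nullspace())      # the vectors (2, -1, 1, 0, 0) and (-6, 3, 0, 1, 0): dim NS A = 2
print(A.rowspace())       # three nonzero rows of an echelon form: a basis of RS A, so dim RS A = 3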
Given \(A\) and \(U\) as in Procedure 4.5.10, let \(r\) be the number of columns of \(U\) without leading ones, and let \(s\) be the number of columns of \(U\) with leading ones: i.e., \(r\) is the number of free variables of the linear system \(U\boldx=\boldzero\text{,}\) and \(s\) is the number of leading variables. It follows from the basis descriptions in Procedure 4.5.10 that
\begin{equation*} \dim \NS A=r\text{,} \end{equation*}
and
\begin{equation*} \dim \CS A=\dim \RS A=s\text{.} \end{equation*}
Furthermore, since \(U\) has \(n\) columns, \(r\) of which do not contain a leading one and the other \(s\) of which do, we have \(n=r+s\text{.}\) It follows that
\begin{equation*} n=\dim\NS A+\dim \CS A\text{.} \end{equation*}
We have just proved the rank-nullity theorem for matrices, the name of which derives from the following definition.

Definition 4.5.12. Rank and nullity of matrix.

Let \(A\) be an \(m\times n\) matrix. The nullity of \(A\text{,}\) denoted \(\nullity A\text{,}\) is defined as the dimension of the null space of \(A\text{;}\) the rank of \(A\text{,}\) denoted \(\rank A\text{,}\) is defined as the dimension of the column space of \(A\text{.}\) In other words, we have
\begin{align*} \nullity A \amp =\dim \NS A \amp \rank A\amp =\dim \CS A\text{.} \end{align*}
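In this notation, the identity derived above reads \(\rank A+\nullity A=n\) for any \(m\times n\) matrix \(A\text{.}\) A quick numerical check (our aside, SymPy assumed) on the matrix of Example 4.5.11:

from sympy import Matrix

A = Matrix([[ 1,  3, 1, -3,  2],
            [ 0,  1, 1, -3,  0],
            [-1, -2, 0,  0, -1],
            [ 1,  2, 0,  0,  2]])

rank = A.rank()                  # dimension of CS A
nullity = len(A.nullspace())     # dimension of NS A
print(rank, nullity, A.cols)     # 3 2 5
assert rank + nullity == A.cols  # rank A + nullity A = n, the number of columns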

Example 4.5.14. Rank-nullity.

Verify the rank-nullity theorem for matrices for the matrix
\begin{equation*} A=\begin{amatrix}[rrrrr] 1 \amp 3 \amp 1 \amp -3 \amp 2 \\ 0 \amp 1 \amp 1 \amp -3 \amp 0 \\ -1 \amp -2 \amp 0 \amp 0 \amp -1 \\ 1 \amp 2 \amp 0 \amp 0 \amp 2 \end{amatrix} \end{equation*}
Solution.
We saw in Example 4.5.11 that \(\dim \RS A=\dim \CS A=3\text{,}\) and thus that the row and column spaces have equal dimension, as predicted by the rank-nullity theorem. Furthermore, we saw that \(\dim \NS A=2\text{.}\) It follows that
\begin{equation*} \dim \NS A+\dim\CS A=2+3=5\text{,} \end{equation*}
the number of columns of \(A\text{,}\) as predicted by the rank-nullity theorem for matrices.

Example 4.5.15. Using the rank-nullity theorem.

Suppose \(A\) is a \(3\times 7\) matrix, and that \(\nullity A=4\text{.}\) Show that \(\CS A=\R^3\text{.}\)
Solution.
By the rank-nullity theorem we have
\begin{align*} \dim\CS A \amp =7-\dim \NS A \\ \amp =7-4\\ \amp = 3\text{.} \end{align*}
Since \(\CS A\subseteq \R^3\) and \(\dim \CS A=\dim \R^3\text{,}\) we conclude by Corollary 4.4.17 that \(\CS A=\R^3\text{,}\) as desired.
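For a concrete instance of this kind of argument (our own illustration, with a made-up \(3\times 7\) matrix and SymPy assumed as before): any \(3\times 7\) matrix with nullity \(4\) has rank \(7-4=3\text{,}\) and a column space of dimension \(3\) inside \(\R^3\) must be all of \(\R^3\text{.}\)

from sympy import Matrix

# A made-up 3 x 7 matrix whose first three columns are the standard basis of R^3.
A = Matrix([[1, 0, 0, 1, 1, 1, 1],
            [0, 1, 0, 1, 1, 1, 1],
            [0, 0, 1, 1, 1, 1, 1]])

print(len(A.nullspace()))     # 4: the nullity of A
print(A.rank())               # 3 = 7 - 4, as rank-nullity requires
print(len(A.columnspace()))   # 3 basis vectors in R^3, so CS A = R^3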

Example 4.5.16. Video example: fundamental spaces.

Figure 4.5.17. Video: computing fundamental spaces

Subsection 4.5.2 Contracting and expanding to bases

Thanks to Theorem 4.4.11 we know that spanning sets can be contracted to bases, and linearly independent sets can be extended to bases; and we have already seen a few instances where this result has been put to good use. However, neither the theorem nor its proof provides a practical means of performing this contraction or extension. We would like a systematic way of determining which vectors to throw out (when contracting), or which vectors to chuck in (when extending). When dealing with subspaces of \(\R^n\text{,}\) we can adapt Procedure 4.5.10 to our needs.

Proof.

Let’s see why in both cases the procedure produces the desired basis: in the first case a basis of \(W=\Span S\) that is a subset of \(S\text{,}\) and in the second a basis of \(\R^n\) that is a superset of \(S\text{.}\)
Contracting to a basis.
By putting the vectors of \(S\) in as the columns of \(A\text{,}\) we ensure that \(W=\Span S=\CS A\text{.}\) The column space procedure produces a basis \(B\) of \(\CS A\) consisting of columns of \(A\text{.}\) Thus \(B\) is a basis of \(W=\Span S=\CS A\) and is a subset of the original spanning set \(S\text{.}\)
Extending to a basis.
Since \(\CS A\) contains \(\bolde_j\) for all \(1\leq j\leq n\text{,}\) we have \(\CS A=\R^n\text{.}\) Thus \(B\) is a basis for \(\R^n\text{.}\) Since the first \(r\) columns of \(A\) are linearly independent (they are the elements of \(S\)), when we row reduce \(A\) to a matrix \(U\) in row echelon form, the first \(r\) columns of \(U\) will contain leading ones. (To see this, imagine row reducing the \(n\times r\) submatrix \(A'\) consisting of the first \(r\) columns of \(A\) to a row echelon matrix \(U'\text{.}\) Since these columns are linearly independent, they already form a basis for \(\CS A'\text{.}\) Thus the corresponding columns of \(U'\) must all have leading ones.) It follows that the first \(r\) columns of \(A\) are selected to be in the basis \(B\text{,}\) and hence that \(S\subseteq B\text{,}\) as desired.
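Both procedures are easy to automate. The sketch below (SymPy assumed; the helper names are ours) contracts a spanning set by keeping the vectors that sit over the leading ones of the reduced row echelon form, and extends a linearly independent set by appending the standard basis vectors \(\bolde_1,\dots,\bolde_n\) before contracting.

from sympy import Matrix

def contract_to_basis(vectors):
    """Subset of `vectors` forming a basis of their span: keep the vectors
    whose columns acquire leading ones when the matrix is row reduced."""
    A = Matrix.hstack(*vectors)
    _, pivots = A.rref()
    return [vectors[j] for j in pivots]

def extend_to_basis(vectors, n):
    """Superset of the linearly independent `vectors` forming a basis of R^n."""
    candidates = list(vectors) + [Matrix.eye(n).col(j) for j in range(n)]
    return contract_to_basis(candidates)

# Contract: the third vector is the sum of the first two, so it is discarded.
S = [Matrix([1, 0, 1]), Matrix([0, 1, 1]), Matrix([1, 1, 2])]
print(contract_to_basis(S))

# Extend: (1, 1, 1) is completed to a basis of R^3 by two standard basis vectors.
print(extend_to_basis([Matrix([1, 1, 1])], 3))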

Example 4.5.19. Video example: contracting to a basis.

Figure 4.5.20. Video: contracting to a basis

Subsection 4.5.3 Fundamental spaces and invertibility

The language of fundamental spaces allows us to enlarge our invertibility theorem yet again, bringing our tally to a whopping 14 equivalent statements. For most of the additional statements in this new theorem, it is easy to identify an earlier statement that they are equivalent to. For example, it is clear that statements 8–9 are equivalent to Item 4. Some of the theory from this section, as well as Section 4.4, comes to our aid to prove that statements 13–14 can be woven into our fabric of equivalences. We leave the details to the reader.

Proof.

Exercises 4.5.4 Exercises

WeBWork Exercises

1.
Suppose that \(A\) is a \(5 \times 8\) matrix that has an echelon form with one zero row. Find the dimension of the row space of \(A\text{,}\) the dimension of the column space of \(A\text{,}\) and the dimension of the null space of \(A\text{.}\)
The dimension of the row space of \(A\) is .
The dimension of the column space of \(A\) is .
The dimension of the null space of \(A\) is .
Solution.
The dimension of the row space is the number of nonzero rows in the echelon form, or \(5 - 1 = 4.\) The dimension of the column space is the same as the dimension of the row space, and the dimension of the null space is \(8 - 4 = 4.\)
2.
Are the following statements true or false?
  1. Let \(m>n\text{.}\) Then \(U =\{u_1, u_2,\ldots, u_m\}\) in \(R^n\) can form a basis for \(R^n\) if the correct \(m-n\) vectors are removed from \(U\text{.}\)
  2. The nullity of a matrix \(A\) is the same as the dimension of the subspace spanned by the columns of \(A\text{.}\)
  3. If \(\{u_1, u_2, u_3\}\) is a basis for \(R^3\text{,}\) then \(\text{span}\{u_1, u_2\}\) is a plane.
  4. If \(S_1\) is of dimension 3 and is a subspace of \(R^4\text{,}\) then there cannot exist a subspace \(S_2\) of \(R^4\) such that \(S_1 \subset S_2 \subset R^4\) with \(S_1 \ne S_2\) and \(S_2 \ne R^4\text{.}\)
  5. \(R^n\) has exactly one subspace of dimension \(m\) for each of \(m = 0,1,2,\ldots, n\text{.}\)
3.
Let \(X\) be a finite dimensional vector space (\(X=\mathbf{R}^n\text{,}\) for example), let \(S,S_1,S_2\subseteq X\) be subspaces of \(X\) and let \(U,V\subseteq X\) be subsets of \(X.\)
Which of the following statements are true for every \(X\) and every possible choice of subspaces and subsets? Which are false for some \(X\) and some choice of subsets and subspaces?
A. If \(\ S_1 \) and \(\ S_2 \) are subspaces of \( R^n\) of the same dimension, then \(S_1=S_2\).
B. Three nonzero vectors that lie in a plane in \(\mathbf{R}^3\) might form a basis for \(\mathbf{R}^3\).
C. If a set of vectors \(U\) is linearly independent in a subspace \( S\) then zero or more vectors can be added to \(U\) to create a basis for \(S\) (i.e. there is a set of vectors \(V\) with \(U\subseteq V\) which is a basis of \(S\)).
D. If a set of vectors \(U\) spans a subspace \(S\), then zero or more vectors can be added to \(U\) to create a basis for \(S\) (i.e. there is a set of vectors \(V\) with \(U\subseteq V\) which is a basis of \(S\)).
E. If \( \ S = \text{span}\{u_1, u_2, u_3 \}\) then \(\text{dim}(S) = 3\) .
Answer 1.
\(\text{False}\)
Answer 2.
\(\text{False}\)
Answer 3.
\(\text{True}\)
Answer 4.
\(\text{False}\)
Answer 5.
\(\text{False}\)
Solution.
  1. False. In the \(x,y\)-plane the coordinate axes are 1-dimensional subspaces, but they are not the same.
  2. False. If three vectors lie in a plane they can’t be linearly independent, so they can’t be a basis for \(\mathbf{R}^3\).
  3. True. If \(U\) spans \(S\) then \(U\) is a basis for \(S\). Otherwise \(S\) contains a vector that is not in the span of \(U\). If you include that vector in \(U\) you’ll get a larger set that is still linearly independent. Repeating this process you’ll get larger and larger sets of linearly independent vectors until the number of vectors in the set equals the dimension of \(S\). When that happens the set of vectors will be a basis for \(S\). (That it’s a basis isn’t obvious but it’s proved in linear algebra texts. The fact that \(S\) is finite-dimensional is crucial. It is finite-dimensional because \(S\subseteq X\) and \(X\) is finite-dimensional. Infinite-dimensional spaces are much trickier).
  4. False. If the vectors in \(U\) are not linearly independent then \(U\) cannot be a subset of a basis.
  5. False. The vectors (1,0), (0,1), and (1,1) span the \(x,y\)-plane, but the \(x,y\)-plane is 2-dimensional. These vectors are not linearly independent.

4.

For each matrix \(A\) (i) row reduce \(A\) to a matrix \(U\) in row echelon form, (ii) compute bases for \(\CS A\) and \(\CS U\text{,}\) (iii) compute \(\dim \CS A\) and \(\dim \CS U\text{,}\) and (iv) decide whether \(\CS A=\CS U\text{.}\)
  1. \begin{equation*} A=\boldzero_{n\times n} \end{equation*}
  2. \begin{equation*} A=\begin{amatrix}[rr]-2 \amp 1\\ 1\amp 5 \end{amatrix} \end{equation*}
  3. \begin{equation*} A=\begin{amatrix}[rrr]1 \amp 1 \amp 3 \\ 1 \amp 2 \amp 1 \\ 1 \amp 3 \amp -1 \end{amatrix} \end{equation*}

5.

For each matrix below, (i) compute bases for each fundamental space, (ii) identify these spaces as familiar geometric objects in \(\R^2\) or \(\R^3\text{,}\) and (iii) provide sketches of each space. The sketches of \(\NS A\) and \(\RS A\) should be combined in the same coordinate system.
  1. \begin{equation*} A=\begin{amatrix}[rrr]1\amp 0\amp 0\\ 0\amp 1\amp 0 \end{amatrix} \end{equation*}
  2. \begin{equation*} A=\begin{amatrix}[rrr]1\amp 1\amp -2\\ 3\amp -4\amp 1 \end{amatrix} \end{equation*}
  3. \begin{equation*} A=\begin{amatrix}[rr] 1\amp 2\\ 0\amp 0 \\ -1\amp -2 \end{amatrix} \end{equation*}

6.

For each \(A\) compute bases for each fundamental space. In each case, you can find bases for one of the fundamental spaces by inspection, and then use the rank-nullity theorem to reduce your workload for the other spaces. See first solution for a model example.
  1. \begin{equation*} A=\begin{amatrix}[rrrr]1\amp 1\amp 1\amp 1\\ 1\amp 1\amp 1\amp 1\\ \end{amatrix} \end{equation*}
  2. \begin{equation*} A=\begin{bmatrix}1\amp 1\amp 1\amp 1\\ 2\amp 1\amp 2\amp 1\\ 1\amp 1\amp 1\amp 1 \end{bmatrix} \end{equation*}
Solution.
  1. Clearly, \(B=\{(1,1)\}\) is a basis for \(\CS A\text{,}\) and \(B'=\{(1,1,1,1)\}\) is a basis for \(\RS A\text{.}\) It follows that \(\rank A=1\) and hence \(\nullity A=4-1=3\text{.}\) Thus we need to find three linearly independent elements of \(\NS A\) to find a basis. We can do so by inspection with the help of the column method. Namely, observe that \(\boldv_1=(1,-1,0,0), \boldv_2=(0,1,-1,0), \boldv_3=(0,0,1,-1)\) are all in \(\NS A\) (column method). The locations of the zeros in these vectors make it clear that \(B''=\{\boldv_1,\boldv_2, \boldv_3\}\) is linearly independent. Since \(\dim \NS A=3\) and \(\abs{B''}=3\text{,}\) we conclude that \(B''\) is a basis of \(\NS A\) (4.4.17).
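(A quick software check of this solution, with SymPy assumed as in the asides above: each proposed vector is annihilated by \(A\text{,}\) and the nullity is indeed \(3\text{.}\))

from sympy import Matrix, zeros

A = Matrix([[1, 1, 1, 1],
            [1, 1, 1, 1]])
for v in (Matrix([1, -1, 0, 0]), Matrix([0, 1, -1, 0]), Matrix([0, 0, 1, -1])):
    assert A * v == zeros(2, 1)    # each vector lies in NS A
print(len(A.nullspace()))          # 3, so the three independent vectors above form a basis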

7.

For each \(A\) use Procedure 4.5.10 to compute bases for each fundamental space.
  1. \begin{equation*} A= \begin{bmatrix}1\amp 2\amp 4\amp 5\\ 0\amp 1\amp -3\amp 0\\ 0\amp 0\amp 1\amp -3\\ 0\amp 0\amp 0\amp 0 \end{bmatrix} \end{equation*}
  2. \begin{equation*} A= \begin{bmatrix}1\amp 2\amp -1\amp 5\\ 0\amp 1\amp 4\amp 3\\ 0\amp 0\amp 1\amp -7\\ 0\amp 0\amp 0\amp 1 \end{bmatrix} \end{equation*}
  3. \begin{equation*} A = \begin{bmatrix}1\amp 4\amp 5\amp 6\amp 9\\ 3\amp -2\amp 1\amp 4\amp -1\\ -1\amp 0\amp -1\amp -2\amp -1\\ 2\amp 3\amp 5\amp 7\amp 8 \end{bmatrix} \end{equation*}

8.

Find the rank and nullity of each matrix by reducing it to row echelon form.
  1. \begin{equation*} A = \begin{amatrix}[rrrrr] 1\amp 0\amp -2\amp 1\amp 0\\ 0\amp -1\amp -3\amp 1\amp 3\\ -2\amp -1\amp 1\amp -1\amp 3\\ 0\amp 1\amp 3\amp 0\amp -4 \end{amatrix} \end{equation*}
  2. \begin{equation*} A = \begin{amatrix}[rrrr] 1\amp 3\amp 1\amp 3\\ 0\amp 1\amp 1\amp 0\\ -3\amp 0\amp 6\amp -1\\ 3\amp 4\amp -2\amp 1\\ 2\amp 0\amp -4\amp -2 \end{amatrix} \end{equation*}

Rank-nullity theorem for matrices.

For each scenario use the rank-nullity theorem for matrices to compute the desired information from the given information.
9.
Suppose \(A\in M_{7\times 5}\) and \(\nullity A=2\text{,}\) compute \(\rank A\text{.}\)
10.
Suppose \(A\in M_{7\times 5}\) and \(\nullity A=2\text{,}\) compute \(\dim \RS A\text{.}\)
11.
Suppose \(A\in M_{3\times 5}\) and \(\CS A=\R^3\text{,}\) compute \(\nullity A\text{.}\)
12.
Suppose \(A\in M_{7\times 5}\) and \(\NS A=\{\boldzero\}\text{,}\) compute \(\dim \CS A\text{.}\)

14.

We know that in general row equivalent matrices do not have equal column spaces; show, however, that they do when the matrices are invertible. In other words, prove that if \(A\) is invertible and \(A\) is row equivalent to \(B\text{,}\) then \(\CS A=\CS B\text{.}\)

15.

Let \(A\) be an \(n\times n\) matrix.
  1. Prove: \(A^2=\boldzero_{n\times n}\) if and only if \(\CS A\subseteq \NS A\text{.}\)
  2. Construct a \(2\times 2\) matrix \(A\) with \(\NS A=\CS A=\Span\{(1,2)\}\text{.}\) Verify that your \(A\) satisfies \(A^2=\boldzero_{2\times 2}\text{.}\)

16.

Suppose \(A\) is \(m\times n\) with \(m\ne n\text{.}\)
Prove: either the rows of \(A\) are linearly dependent or the columns of \(A\) are linearly dependent.

17.

Prove: \(\nullity A=\nullity A^T\) if and only if \(A\) is a square matrix.