The definition of a subspace of a vector space \(V\) is very much in the same spirit as our definition of linear transformations. It is a subset of \(V\) that in some sense respects the vector space structure: in the language of Definition 3.3.1, it is a subset that is closed under addition and closed under scalar multiplication.
In fact the connection between linear transformations and subspaces goes deeper than this. As we will see in Definition 3.4.1, a linear transformation \(T\colon V\rightarrow W\) naturally gives rise to two important subspaces: the null space of \(T\) and the image of \(T\).
Subsection 3.3.1 Definition of subspace
Definition 3.3.1. Subspace.
Let \(V\) be a vector space. A subset \(W\subseteq V\) is a subspace of \(V\) if the following conditions hold:
\(W\) contains the zero vector.
We have \(\boldzero\in W\text{.}\)
\(W\) is closed under addition.
For all \(\boldv_1,\boldv_2\in V\text{,}\) if \(\boldv_1,\boldv_2\in W\text{,}\) then \(\boldv_1+\boldv_2\in W\text{.}\) Using logical notation: \(\boldv_1, \boldv_2\in W\implies \boldv_1+\boldv_2\in W\text{.}\)
\(W\) is closed under scalar multiplication.
For all \(c\in \R\) and \(\boldv\in V\text{,}\) if \(\boldv\in W\text{,}\) then \(c\boldv\in W\text{.}\) Using logical notation: \(\boldv\in W\implies c\boldv\in W\) for all \(c\in \R\text{.}\)
To illustrate, consider the first quadrant \(W=\{(x,y)\in \R^2\colon x, y\geq 0\}\subseteq \R^2\text{.}\)
Suppose \(\boldv_1=(x_1,y_1), \boldv_2=(x_2,y_2)\in W\text{.}\) Then \(x_1, x_2, y_1, y_2\geq 0\text{,}\) in which case \(x_1+x_2, y_1+y_2\geq 0\text{,}\) and hence \(\boldv_1+\boldv_2\in W\text{.}\) Thus \(W\) is closed under addition.
The set \(W\) is not closed under scalar multiplication. Indeed, let \(\boldv=(1,1)\in W\text{.}\) Then \((-2)\boldv=(-2,-2)\notin W\text{.}\)
Procedure 3.3.4. Two-step proof for subspaces.
As with proofs regarding linearity of functions, we can merge conditions (ii)-(iii) of Definition 3.3.1 into a single statement about linear combinations, deriving the following two-step method for proving a set \(W\) is a subspace of a vector space \(V\text{.}\)
Show that \(\boldzero\in W\text{.}\)
Show that for all \(\boldv_1, \boldv_2\in W\) and all \(c_1, c_2\in \R\text{,}\) we have \(c_1\boldv_1+c_2\boldv_2\in W\text{.}\)
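For example, consider the line \(W=\{(t,2t)\colon t\in \R\}\subseteq \R^2\text{.}\) First, \(\boldzero=(0,0)=(0,2\cdot 0)\in W\text{.}\) Second, given \(\boldv_1=(t_1,2t_1), \boldv_2=(t_2,2t_2)\in W\) and \(c_1, c_2\in \R\text{,}\) we have
\begin{equation*}
c_1\boldv_1+c_2\boldv_2=(c_1t_1+c_2t_2,\ 2(c_1t_1+c_2t_2))=(s,2s)\text{,}\quad s=c_1t_1+c_2t_2\text{,}
\end{equation*}
so \(c_1\boldv_1+c_2\boldv_2\in W\text{.}\) Thus \(W\) is a subspace of \(\R^2\text{.}\)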
Video example: deciding if \(W\subseteq V\) is a subspace.
Remark 3.3.7. Subspaces are vector spaces.
If \(W\) is a subspace of a vector space \(V\text{,}\) then it inherits a vector space structure from \(V\) by simply restricting the vector operations defined on \(V\) to the subset \(W\text{.}\)
It is important to understand how conditions (ii)-(iii) of Definition 3.3.1 come into play here. Without them we would not be able to say that restricting the vector operations of \(V\) to elements of \(W\) actually gives rise to well-defined operations on \(W\text{.}\) To be well-defined the operations must output elements that lie not just in \(V\text{,}\) but in \(W\) itself. This is precisely what being closed under addition and scalar multiplication guarantees.
Once we know restriction gives rise to well-defined operations on \(W\text{,}\) verifying the axioms of Definition 3.1.1 mostly amounts to observing that if a condition is true for all \(\boldv\) in \(V\text{,}\) it is certainly true for all \(\boldv\) in the subset \(W\text{.}\)
The “existential axioms” (iii) and (iv) of Definition 3.1.1, however, require special consideration. By definition, a subspace \(W\) contains the zero vector of \(V\text{,}\) and clearly this still acts as the zero vector when we restrict the vector operations to \(W\text{.}\) What about vector inverses? We know that for any \(\boldv\in W\) there is a vector inverse \(-\boldv\) lying somewhere in \(V\text{.}\) We must show that in fact \(-\boldv\) lies in \(W\text{:}\) i.e. we need to show that the operation of taking the vector inverse is well-defined on \(W\text{.}\) We prove this as follows:
\begin{align*}
\boldv\in W \amp\implies (-1)\boldv\in W \amp (\text{Definition 3.3.1, (iii)})\\
\amp\implies -\boldv\in W \amp (\text{Theorem 3.1.16, (iii)})\text{.}
\end{align*}
We now know how to determine whether a given subset of a vector space is in fact a subspace. We are also interested in means of constructing subspaces from some given ingredients. The result below tells us that taking the intersection of a given collection of subspaces results in a subspace. In Subsection 3.4.1 we see how a linear transformation automatically gives rise to two subspaces.
Theorem 3.3.8. Intersection of subspaces.
Let \(V\) be a vector space. Given a collection \(W_1, W_2,\dots, W_r\text{,}\) where each \(W_i\) is a subspace of \(V\text{,}\) the intersection
\begin{equation*}
W_1\cap W_2\cap \cdots \cap W_r
\end{equation*}
is also a subspace of \(V\text{.}\)
While the intersection of subspaces is again a subspace, the same is not true for unions of subspaces.
For example, take \(V=\R^2\text{,}\) \(W_1=\{(t,t)\colon t\in\R\}\text{,}\) and \(W_2=\{(t,-t)\colon t\in\R\}\text{.}\) Then each \(W_i\) is a subspace, but their union \(W_1\cup W_2\) is not.
Indeed, observe that \(\boldw_1=(1,1)\in W_1\subset W_1\cup W_2\) and \(\boldw_2=(1,-1)\in W_2\subset W_1\cup W_2\text{,}\) but \(\boldw_1+\boldw_2=(2,0)\notin W_1\cup W_2\text{.}\) Thus \(W_1\cup W_2\) is not closed under addition. (Interestingly, it is closed under scalar multiplication.)
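To verify the parenthetical claim, take any \(\boldv\in W_1\cup W_2\) and \(c\in \R\text{.}\) Then \(\boldv\in W_i\) for some \(i\text{,}\) and since each \(W_i\) is itself a subspace,
\begin{equation*}
c\boldv\in W_i\subseteq W_1\cup W_2\text{.}
\end{equation*}
Thus \(W_1\cup W_2\) is closed under scalar multiplication.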
Subsection 3.3.2 Subspaces of \(\R^n\)
The following theorem gives a convenient method of producing a subspace \(W\) of \(\R^n\text{:}\) namely, given any \(m\times n\) matrix \(A\text{,}\) the subset \(W\) defined as
\begin{equation*}
W=\{\boldx\in \R^n\colon A\boldx=\boldzero\}
\end{equation*}
is guaranteed to be a subspace of \(\R^n\text{.}\) We will see in Section 3.4 that this construction is just one example of a more general subspace-building operation (see 3.4.1 and 3.4.8). We introduce the special case here for two reasons: (a) the construction allows us to easily provide examples of subspaces of \(\R^n\text{,}\) and (b) the proof of Theorem 3.3.10 is a nice example of the two-step technique.
Theorem 3.3.10. Solutions to \(A\boldx=\boldzero\) form a subspace.
Let \(A\) be an \(m\times n\) matrix. Then the set \(W=\{\boldx\in \R^n\colon A\boldx=\boldzero\}\) is a subspace of \(\R^n\text{.}\)
Following the two-step technique, we first show that \(\boldzero_n\in W\text{.}\) This is clear, since \(A\boldzero_n=\boldzero_m\text{.}\) (We introduce the subscripts to distinguish between the zero vectors of the domain \(\R^n\) and the codomain \(\R^m\text{.}\))
Next, we show that for any \(\boldx_1, \boldx_2\in \R^n\) and any \(c_1, c_2\in \R\) we have
\begin{equation*}
\boldx_1, \boldx_2\in W\implies c_1\boldx_1+c_2\boldx_2\in W\text{.}
\end{equation*}
If \(\boldx_1, \boldx_2\in W\text{,}\) then we have \(A\boldx_1=A\boldx_2=\boldzero_m\text{,}\) by definition of \(W\text{.}\) It then follows that the vector \(c_1\boldx_1+c_2\boldx_2\) satisfies
\begin{equation*}
A(c_1\boldx_1+c_2\boldx_2)=c_1A\boldx_1+c_2A\boldx_2=c_1\boldzero_m+c_2\boldzero_m=\boldzero_m\text{.}
\end{equation*}
Thus \(c_1\boldx_1+c_2\boldx_2\in W\text{,}\) as desired.
Remark 3.3.11. Solutions to homogeneous linear systems form a subspace.
Recall from Interlude on matrix equations that the set of solutions to a matrix equation \(A\boldx=\boldb\) is the same thing as the set of solutions to the system of linear equations with augmented matrix \(\begin{amatrix}[c|c] A\amp \boldb \end{amatrix}\text{.}\) Thus, Theorem 3.3.10 implies that the set of solutions to a homogeneous system of linear equations forms a subspace.
Remark 3.3.12. Alternative subspace method.
Theorem 3.3.10 provides an alternative way of showing that a subset \(W\subseteq \R^n\) is a subspace: namely, find an \(m\times n\) matrix \(A\) for which we have \(W=\{\boldx\in \R^n\colon A\boldx=\boldzero\}\text{.}\) This is often much faster than using the two-step technique.
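For example, consider the subset
\begin{equation*}
W=\{(x_1,x_2,x_3)\in \R^3\colon 2x_1-x_2=0 \text{ and } x_1+x_3=0\}\text{.}
\end{equation*}
Setting \(A=\begin{bmatrix}2\amp -1\amp 0\\ 1\amp 0\amp 1\end{bmatrix}\text{,}\) we have \(W=\{\boldx\in \R^3\colon A\boldx=\boldzero\}\text{,}\) and hence \(W\) is a subspace of \(\R^3\) by Theorem 3.3.10. Solving the system parametrically yields \(\boldx=t\,(1,2,-1)\text{,}\) \(t\in \R\text{.}\)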
Geometrically this is the line in \(\R^3\) passing through \((0,0,0)\) with direction vector \((1,2,-1)\text{.}\)
Example 3.3.14. Lines and planes.
Recall that a line \(\ell\) in \(\R^2\) that passes through the origin can be expressed as the set of solutions \((x_1,x_2)\in\R^2\) to an equation of the form
\begin{equation*}
ax_1+bx_2=0\text{,}
\end{equation*}
where \(a\) and \(b\) are not both zero.
Similarly, a plane \(\mathcal{P}\) in \(\R^3\) that passes through the origin can be expressed as the set of solutions \((x_1,x_2,x_3)\) to an equation of the form
\begin{equation*}
ax_1+bx_2+cx_3=0\text{,}
\end{equation*}
where \(a, b, c\) are not all zero.
(See Example 1.1.7.) We see immediately that both objects can be described as the set of solutions to a certain homogeneous matrix equation \(A\boldx=\boldzero\text{:}\)
\begin{equation*}
A=\begin{bmatrix}a\amp b\end{bmatrix}\text{ for the line}, \qquad A=\begin{bmatrix}a\amp b\amp c\end{bmatrix}\text{ for the plane.}
\end{equation*}
We conclude from Theorem 3.3.10 that lines in \(\R^2\text{,}\) and planes in \(\R^3\text{,}\) are subspaces, as long as they pass through the origin.
On the other hand, a line or plane that does not pass through the origin is not a subspace, since it does not contain the zero vector.
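For instance, the line \(\ell=\{(x_1,x_2)\in \R^2\colon x_1+x_2=1\}\) does not contain \(\boldzero=(0,0)\text{,}\) since \(0+0=0\ne 1\text{;}\) thus \(\ell\) is not a subspace of \(\R^2\text{.}\)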
Question: How do we classify all subspaces of \(\R^2\) and \(\R^3\text{?}\) We will be able to answer this easily with dimension theory. (See Section 3.7.)
Subsection 3.3.3 Important subspaces of \(M_{nn}\)
In The invertibility theorem we met three families of square matrices: namely, the diagonal, upper triangular, and lower triangular matrices. (See Definition 2.4.7). We now introduce three more naturally occurring families. Before doing so, we give an official definition of the trace function. (See Exercise 3.2.6.11.)
Definition 3.3.15. Trace of a matrix.
Let \(A=[a_{ij}]\) be an \(n\times n\) matrix. The trace of \(A\text{,}\) denoted \(\tr A\text{,}\) is defined as the sum of the diagonal entries of \(A\text{:}\) i.e.,
\begin{equation*}
\tr A=a_{11}+a_{22}+\cdots +a_{nn}\text{.}
\end{equation*}
Recall that \(A\) is symmetric if \(A^T=A\text{,}\) and skew-symmetric if \(A^T=-A\text{:}\) equivalently, \(A\) is skew-symmetric if \([A]_{ij}=-[A]_{ji}\) for all \(i,j\text{.}\)
Assume \(A\) is a skew-symmetric \(n\times n\) matrix. By definition, for all \(1\leq i\leq n\) we must have \([A]_{ii}=-[A]_{ii}\text{.}\) It follows that \([A]_{ii}=0\) for all \(1\leq i\leq n\text{.}\) Thus the diagonal entries of a skew-symmetric matrix are always equal to 0.
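For example, in the \(3\times 3\) case the conditions \([A]_{ij}=-[A]_{ji}\) force
\begin{equation*}
A=\begin{bmatrix}0\amp a\amp b\\ -a\amp 0\amp c\\ -b\amp -c\amp 0\end{bmatrix}
\end{equation*}
for some \(a, b, c\in \R\text{:}\) the diagonal entries vanish, and the entries below the diagonal are determined by those above.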
It will come as no surprise that all of the aforementioned matrix families are in fact subspaces of \(M_{nn}\text{.}\)
Theorem 3.3.19. Matrix subspaces.
Fix an integer \(n\geq 1\text{.}\) Each of the following subsets of \(M_{nn}\) is a subspace.
Diagonal matrices.
\(\displaystyle W=\{A\in M_{nn}\colon A\text{ is diagonal}\}\)
Upper triangular matrices.
\(\displaystyle W=\{A\in M_{nn}\colon A\text{ is upper triangular}\}\)
Lower triangular matrices.
\(\displaystyle W=\{A\in M_{nn}\colon A\text{ is lower triangular}\}\)
Trace-zero matrices.
\(\displaystyle W=\{A\in M_{nn}\colon \tr A=0\}\)
Symmetric matrices.
\(\displaystyle W=\{A\in M_{nn}\colon A^T=A\}\)
Skew-symmetric matrices.
\(\displaystyle W=\{A\in M_{nn}\colon A^T=-A\}\)
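As an illustration of the two-step technique in this setting, consider the family of trace-zero matrices \(W=\{A\in M_{nn}\colon \tr A=0\}\text{.}\) First, \(\tr \boldzero=0+0+\cdots+0=0\text{,}\) so \(\boldzero\in W\text{.}\) Second, given \(A, B\in W\) and \(c, d\in \R\text{,}\) we have
\begin{equation*}
\tr(cA+dB)=c\tr A+d\tr B=c\cdot 0+d\cdot 0=0
\end{equation*}
by the linearity of the trace function (see Exercise 3.2.6.11), and hence \(cA+dB\in W\text{.}\) The remaining cases are proved similarly.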
Let \(I\) be an nondegenerate interval of \(\R\text{:}\) i.e., an interval containing at least two elements. Recall that \(F(I,\R)\) is the set of all functions from \(I\) to \(\R\text{.}\) This is a pretty unwieldy vector space, containing some pathological characters, and when studying functions on an interval we will often restrict our attention to certain more well-behaved subsets: e.g., continuous, differentiable, or infinitely differentiable functions. Not surprisingly, these subsets turn out to be subspaces of \(F(I,\R)\text{.}\)
Definition 3.3.20. Function subspaces.
Let \(I\subseteq \R\) be a nondegenerate interval.
We denote by \(C(I)\) the set of all continuous functions on \(I\text{:}\) i.e.,
\begin{equation*}
C(I)=\{f\in F(I,\R)\colon f \text{ is continuous on } I\}\text{.}
\end{equation*}
Fix \(n\geq 1\text{.}\) A function \(f\in F(I,\R)\) is \(C^n\) on \(I\) if \(f\) is \(n\)-times differentiable on \(I\) and its \(n\)-th derivative \(f^{(n)}(x)\) is continuous. The set of all \(C^n\) functions on \(I\) is denoted \(C^n(I)\text{.}\)
A function \(f\in F(I,\R)\) is \(C^\infty\) on \(I\) if \(f\) is infinitely differentiable on \(I\text{.}\) The set of all \(C^\infty\) functions on \(I\) is denoted \(C^\infty(I)\text{.}\)
A polynomial on \(I\) of degree at most \(n\) is a polynomial of the form \(f(x)=\anpoly\text{,}\) where \(a_i\in \R\text{.}\) (See Section 0.7, and in particular Definition 0.7.1, for more details about polynomials.) Recall that if \(a_n\ne 0\text{,}\) we call \(n\) the degree of \(f\text{,}\) denoted \(\deg f\text{.}\)
The set of polynomials of degree at most \(n\) on \(I\) is denoted \(P_n(I)\text{;}\) the set of all polynomials on \(I\) is denoted \(P(I)\text{.}\) When \(I=\R\text{,}\) we shorten the notation to \(P_n\) and \(P\text{.}\)
Theorem 3.3.21. Function subspaces.
Let \(I\subseteq \R\) be an interval. The sets \(C(I), C^n(I), C^\infty(I), P_n(I), P(I)\) are all subspaces of \(F(I,\R)\text{.}\) Thus we have the following chain of subspaces:
\begin{equation*}
P_n(I)\subseteq P(I)\subseteq C^\infty(I)\subseteq \cdots \subseteq C^2(I)\subseteq C^1(I)\subseteq C(I)\subseteq F(I,\R)\text{.}
\end{equation*}
The zero function \(0_I\colon I\rightarrow \R\) is an element of all of these sets: i.e., the zero function is continuous, \(C^n\text{,}\) \(C^\infty\text{,}\) a polynomial, etc.
If \(f\) and \(g\) both satisfy one of these properties (continuous, \(C^n\text{,}\) \(C^\infty\text{,}\) polynomial, etc.), then so does \(cf+dg\) for any \(c,d\in \R\text{.}\)
The second, “closed under linear combinations” observation is easily seen for \(P(I)\) and \(P_n(I)\) (indeed, a linear combination of two polynomials of degree at most \(n\) is clearly a polynomial of degree at most \(n\)); for the other spaces, it follows from standard calculus results to the effect that adding and scaling functions preserves continuity and differentiability.
Lastly, each subset relation in the given chain follows from similar observations: polynomials are infinitely differentiable, differentiable functions are continuous, etc.
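These inclusions are in fact proper: for example, \(f(x)=\val{x}\) is continuous on \(\R\) but not differentiable at \(x=0\text{,}\) so \(f\in C(\R)\) but \(f\notin C^1(\R)\text{;}\) similarly, \(f(x)=e^x\) is \(C^\infty\) on \(\R\) but is not a polynomial.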
When working within the polynomial spaces \(P_n(I)\) or \(P(I)\text{,}\) we will constantly make use of the fact that a polynomial \(f(x)=\anpoly\) is completely determined by its coefficients \(a_i\text{,}\) and that equality between polynomials can be decided by comparing their coefficients. This is the content of Corollary 0.7.4. We restate the result here in a more convenient form.
Theorem 3.3.22. Polynomial equality.
Let \(I\subseteq \R\) be a nondegenerate interval, and let
\begin{equation*}
f(x)=a_0+a_1x+\cdots +a_nx^n, \qquad g(x)=b_0+b_1x+\cdots +b_nx^n
\end{equation*}
be elements of \(P_n(I)\text{.}\)
We have \(f=g\) if and only if \(a_i=b_i\) for all \(0\leq i\leq n\text{.}\)
In particular, \(f=\boldzero\) if and only if \(a_i=0\) for all \(0\leq i\leq n\text{.}\)
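For example, to decide whether \(f(x)=ax^2+bx+c\) and \(g(x)=2x^2-1\) are equal as elements of \(P_2\text{,}\) we simply compare coefficients:
\begin{equation*}
f=g \iff a=2,\ b=0,\ c=-1\text{.}
\end{equation*}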
Remark 3.3.23. Differential operators.
Let \(I\subseteq \R\) be an interval. Define \(T_1\colon C^1(I)\rightarrow C(I)\) as \(T_1(f)=f'\text{:}\) i.e., \(T_1\) takes as input a \(C^1\) function on the interval \(I\text{,}\) and returns its (first) derivative. Note that the definition of \(C^1\) ensures that \(f'\) exists and is continuous on \(I\text{:}\) hence \(f'\in C(I)\text{,}\) as claimed.
The operator \(T_1\) is a linear transformation. Indeed, given \(c,d\in \R\) and \(f,g\in C^1(I)\text{,}\) we have
\begin{equation*}
T_1(cf+dg)=(cf+dg)'=cf'+dg'=cT_1(f)+dT_1(g)\text{.}
\end{equation*}
Since taking \(n\)-th derivatives amounts to composing the derivative operator \(T_1\) with itself \(n\) times, it follows from Theorem 3.2.32 that for any \(n\geq 1\) the map
\begin{equation*}
T_n\colon C^n(I)\rightarrow C(I), \quad T_n(f)=f^{(n)}\text{,}
\end{equation*}
which takes a function \(f\) to its \(n\)-th derivative, is also linear. (Note that we are careful to pick the domain \(C^n(I)\) to guarantee this operation is well-defined!)
Lastly, by Exercise 3.2.6.17, we can add and scale these various operators to obtain more general linear transformations of the form
\begin{equation*}
T\colon C^n(I)\rightarrow C(I), \quad T(f)=a_nf^{(n)}+a_{n-1}f^{(n-1)}+\cdots +a_1f'+a_0f\text{,}
\end{equation*}
where \(a_0, a_1,\dots, a_n\in \R\text{.}\)
We call such a function a linear differential operator. Understanding the linear algebraic properties of these operators is crucial to the theory of linear differential equations, as Example 3.4.13 illustrates.
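For instance, \(T\colon C^2(\R)\rightarrow C(\R)\) defined by \(T(f)=f''+3f'-2f\) is a linear differential operator. Applied to \(f(x)=\sin x\text{,}\) it gives
\begin{equation*}
T(f)(x)=-\sin x+3\cos x-2\sin x=-3\sin x+3\cos x\text{.}
\end{equation*}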
Exercises 3.3.5 Exercises
WeBWorK Exercises
1.
Determine if each of the following sets is a subspace of \({\mathbb P}_{n}\text{,}\) for an appropriate value of \(n\text{.}\) Type "yes" or "no" for each answer.
Let \(W_{1}\) be the set of all polynomials of the form \(p(t)= at^{2}\text{,}\) where \(a\) is in \({\mathbb R}\text{.}\)
Let \(W_{2}\) be the set of all polynomials of the form \(p(t)= t^{2} + a\text{,}\) where \(a\) is in \({\mathbb R}\text{.}\)
Let \(W_{3}\) be the set of all polynomials of the form \(p(t)= at^{2} + at\text{,}\) where \(a\) is in \({\mathbb R}\text{.}\)
Exercise Group.
For each subset \(W\) of \(\R^2\) described below: (a) sketch \(W\) as a region of \(\R^2\text{,}\) and (b) determine whether \(W\) is a subspace. Justify your answer either with a proof or explicit counterexample.
2.
\(W=\{(x,y)\in \R^2\colon 2x+3y=0\}\)
3.
\(W=\{(x,y)\in \R^2\colon \val{x}\geq \val{y}\}\)
4.
\(W=\{(x,y)\in \R^2\colon x^2+2y^2\leq 1\}\)
Exercise Group.
Determine whether the subset \(W\) of \(M_{nn}\) described is a subspace of \(M_{nn}\text{.}\) Justify your answer either with a proof or explicit counterexample.
5.
\(W=\{A\in M_{nn}\colon \det A=0\}\)
6.
\(W=\{A\in M_{nn}\colon A_{11}=A_{nn}\}\)
7.
Fix a matrix \(B\in M_{nn}\) and define \(W=\{A\in M_{nn}\colon AB=BA\}\text{,}\) the set of matrices that commute with \(B\text{.}\)
Exercise Group.
Determine whether the subset \(W\) of \(P_2\) is a subspace. Justify your answer either with a proof or explicit counterexample.
8.
\(W=\{f(x)=ax^2+bx+c \colon c=0\}\)
9.
\(W=\{f(x)=ax^2+bx+c \colon abc=0\}\)
10.
\(W=\{f(x)\in P_2 \colon xf'(x)=f(x)\}\)
Exercise Group.
Determine whether the subset \(W\) of \(C(\R)\) described is a subspace. Justify your answer either with a proof or explicit counterexample.
11.
\(W=\{f\in C(\R)\colon f(4)=0\} \)
12.
\(W=\{f\in C(\R)\colon f(0)=4\} \)
13.
\(W=\{f\in C(\R)\colon f(x)=f(-x)\} \)
14.
\(W=\{f\in C(\R)\colon f(x+\pi)=f(x)\} \)
15.
\(W=\{f\in C(\R)\colon f\in P \text{ and } \deg f=5\}\text{.}\)
Exercise Group.
For each given subset \(W\subseteq \R^n\text{:}\) (a) show that \(W\) is a subspace by identifying it with the set of solutions to a matrix equation, and (b) give a parametric description of \(W\text{.}\)