
Section 3.5 Span and linear independence

There are many situations in mathematics where we want to describe an infinite set in a concise manner. We saw this at work already in Section 1.3, where infinite sets of solutions to linear systems were neatly described with parametric expressions.
A similar issue arises when describing vector spaces and their subspaces. As we know, any vector space is either the zero space or infinite (Exercise 3.1.4.15). If we happen to be dealing with a subspace of \(\R^n\text{,}\) then there is the possibility of giving a parametric description; but how do we proceed when working in one of our more exotic vector spaces like \(C^1(\R)\text{?}\)
As we will see in Section 3.6 the relevant linear algebraic tool for this purpose is the concept of a basis. Loosely speaking, a basis for a vector space \(V\) is a set of vectors that is large enough to generate the entire space, and small enough to contain no redundancies. What exactly we mean by “generate” is captured by the rigorous notion of span; and what we mean by “no redundancies” is captured by linear independence.

Subsection 3.5.1 Span

Recall that a linear combination in a vector space \(V\) is a vector of the form
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{,} \end{equation*}
where \(c_i\in \R\) are scalars. We use this notion to define the span of a set of vectors.

Definition 3.5.1. Span.

Let \(V\) be a vector space, and let \(S\subseteq V\) be any subset of \(V\text{.}\) The span of \(S\), denoted \(\Span S\text{,}\) is the subset of \(V\) defined as follows:
  • If \(S=\emptyset\text{,}\) then \(\Span S=\{\boldzero_V\}\text{.}\)
  • Otherwise we define \(\Span S\) to be the set of all linear combinations of elements of \(S\text{:}\) i.e.,
    \begin{equation*} \Span S=\{\boldv\in V\colon \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n \text{ for some } \boldv_i\in S \text{ and } c_i\in \R\}\text{.} \end{equation*}

Remark 3.5.2.

Let \(S\) be a subset of \(V\text{.}\) Some simple observations:
  1. The zero vector is always an element of \(\Span S\text{.}\) Indeed, if \(S=\emptyset\text{,}\) then \(\Span S=\{\boldzero\}\) by definition. Otherwise, given any \(\boldv\in S\text{,}\) the linear combination \(0\boldv=\boldzero\) is an element of \(\Span S\text{.}\)
  2. We have \(S\subseteq \Span S\text{:}\) i.e., \(\Span S\) includes \(S\) itself. Indeed, given any \(\boldv\in S\text{,}\) the linear combination \(1\boldv=\boldv\) is an element of \(\Span S\text{.}\)
  3. If \(S=\{\boldv\}\) contains exactly one element, then \(\Span S=\{c\boldv\colon c\in \R\}\) is simply the set of all scalar multiples of \(\boldv\text{.}\)
    If \(\boldv\ne \boldzero\text{,}\) then we know that this set is infinite (Exercise 3.1.4.15). Thus even when \(S\) is finite, \(\Span S\) will be infinite, as long as \(S\) contains at least one nonzero vector.

Example 3.5.3. Examples in \(\R^2\).

Let \(V=\R^2\text{.}\) For each \(S\text{,}\) identify \(\Span S\) as a familiar geometric object.
  1. \(S=\{ \}\text{.}\)
  2. \(\displaystyle S=\{(0,0)\}\)
  3. \(S=\{\boldv\}\text{,}\) \(\boldv=(a,b)\ne (0,0)\)
  4. \(\displaystyle S=\{ (1,0), (0,1)\}\)
  5. \(\displaystyle S=\{ (1,1), (2,2)\}\)
  6. \(\displaystyle S=\{(1,1),(1,2)\}\)
  7. \(\displaystyle S=\R^2\)
Solution.
  1. \(\Span S=\{\boldzero\}\text{,}\) the set containing just the origin, by definition.
  2. \(\Span S\) is the set of all scalar multiples of \((0,0)\text{.}\) Thus \(\Span S=\{\boldzero\}\text{.}\)
  3. \(\Span S\) is the set of all scalar multiples of the nonzero vector \((a,b)\text{.}\) Geometrically, this is the line that passes through the origin and the point \((a,b)\text{.}\)
  4. By definition
    \begin{equation*} \Span S=\{a(1,0)+b(0,1)\colon a,b\in \R\}=\{(a,b)\colon a,b\in\R\}\text{.} \end{equation*}
    Thus \(\Span S=\R^2\text{,}\) the entire \(xy\)-plane.
  5. By definition
    \begin{equation*} \Span S=\{a(1,1)+b(2,2)\colon a,b\in \R\}=\{(a+2b,a+2b)\colon a,b\in\R\}\text{.} \end{equation*}
    It is easy to see that \(\Span S=\{(c,c)\colon c\in \R\}\text{,}\) the line with equation \(y=x\text{.}\) Note that in this case we have
    \begin{equation*} \Span S=\Span\{(1,1), (2,2)\}=\Span \{(1,1)\}\text{,} \end{equation*}
    and thus that the vector \((2,2)\) is in some sense redundant.
  6. By definition
    \begin{equation*} \Span S=\{a(1,1)+b(1,2)\colon a,b\in \R\}=\{(a+b,a+2b)\colon a,b\in\R\}\text{.} \end{equation*}
    Claim: \(\Span S=\R^2\text{.}\) Proving the claim amounts to showing that for all \((c,d)\in \R^2\) there exist \(a,b\in \R\) such that
    \begin{equation*} \begin{array}{ccccc} a \amp +\amp b \amp =\amp c\\ a \amp +\amp 2b \amp =\amp d \end{array}\text{.} \end{equation*}
    Solving this system using Gaussian elimination, we see that the system has the unique solution
    \begin{align*} a\amp =2c-d \amp b\amp =d-c\text{,} \end{align*}
    and thus that
    \begin{equation*} (2c-d)(1,1)+(d-c)(1,2)=(c,d)\text{.} \end{equation*}
    This proves \(\Span S=\R^2\text{,}\) as claimed.
  7. By Remark 3.5.2, we have \(S\subseteq \Span S\text{.}\) Thus \(\R^2\subseteq \Span \R^2\text{.}\) Since \(\Span \R^2\subseteq \R^2\) by definition, we conclude that \(\Span S=\R^2\text{.}\)
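The computation in part 6 can also be checked by machine. Here is a minimal sketch in Python with NumPy (an assumption on our part, purely illustrative and not part of the text): we place the spanning vectors of part 6 as the columns of a matrix and solve for the coefficients of an arbitrary target vector.

import numpy as np

# Columns are the vectors (1,1) and (1,2) from part 6 of the example.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# An arbitrary target (c, d); here (3, 5) just as an illustration.
target = np.array([3.0, 5.0])

# Solve a*(1,1) + b*(1,2) = (c, d).  A unique solution for every target
# is exactly the claim Span S = R^2; compare with a = 2c - d, b = d - c.
coeffs = np.linalg.solve(A, target)
print(coeffs)  # [1. 2.], and indeed 1*(1,1) + 2*(1,2) = (3,5)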

Video example: computing span.

Figure 3.5.4. Video: computing span
You may have noticed that each span computation in the previous example produced a subspace of \(\R^2\text{.}\) This is no accident!

Theorem 3.5.5. Span is a subspace.

Let \(V\) be a vector space and let \(S\subseteq V\) be any subset.
  1. \(\Span S\) is a subspace of \(V\text{.}\)
  2. If \(W\subseteq V\) is a subspace containing every element of \(S\text{,}\) then \(\Span S\subseteq W\text{.}\) In other words, \(\Span S\) is the smallest subspace of \(V\) containing \(S\text{.}\)
We prove each statement separately.
Statement (1).
To show \(\Span S\) is a subspace, we use the two-step technique.
  1. By Remark 3.5.2 we know that \(\boldzero\in \Span S \text{.}\)
  2. Suppose \(\boldv, \boldw\in \Span S\text{.}\) By definition we have
    \begin{align*} \boldv \amp =c_1\boldv_1+c_2\boldv_2+\cdots +c_r\boldv_r \amp \boldw \amp = c_{r+1}\boldv_{r+1}+c_{r+2}\boldv_{r+2}+\cdots +c_{r+s}\boldv_{r+s} \end{align*}
    for some vectors \(\boldv_1, \boldv_2, \dots, \boldv_{r+s}\in S\) and scalars \(c_1,c_2,\dots, c_{r+s}\text{.}\) Then for any \(c,d\in \R\) we have
    \begin{equation*} c\boldv+d\boldw=cc_1\boldv_1+cc_2\boldv_2+\cdots +cc_r\boldv_r+dc_{r+1}\boldv_{r+1}+dc_{r+2}\boldv_{r+2}+\cdots +dc_{r+s}\boldv_{r+s}\text{,} \end{equation*}
    which is clearly a linear combination of elements of \(S\text{.}\) Thus \(c\boldv+d\boldw\in \Span S\text{,}\) as desired.
Statement (2).
Let \(W\subseteq V\) be a subspace that contains all elements of \(S\text{.}\) Since \(W\) is closed under arbitrary linear combinations, it must contain any linear combination of elements of \(S\text{,}\) and thus \(\Span S\subseteq W\text{.}\)
The results of Theorem 3.5.5 motivate the following additional terminology.

Definition 3.5.6. Spanning set.

Let \(S\) be a subset of the vector space \(V\text{.}\) We call \(W=\Span S\) the subspace of \(V\) generated by \(S\text{,}\) and we call \(S\) a spanning set for \(W\text{.}\)

Remark 3.5.7. Some standard spanning sets.

For most of the vector spaces we’ve met a natural spanning set springs to mind. We will refer to these loosely as standard spanning sets. Some examples:
  • Zero space.
    Let \(V=\{\boldzero\}\text{.}\) By definition the empty set \(S=\emptyset=\{ \}\) is a spanning set of \(V\text{.}\)
  • Tuples.
    Let \(V=\R^n\text{.}\) For \(1\leq i\leq n\text{,}\) define \(\bolde_i\) to be the \(n\)-tuple with a one in the \(i\)-th entry, and zeros elsewhere. Then \(S=\{\bolde_1, \bolde_2,\dots, \bolde_n\}\) is a spanning set for \(\R^n\text{.}\)
  • Matrices.
    Let \(V=M_{mn}\text{.}\) For each \((i,j)\) with \(1\leq i\leq m\) and \(1\leq j\leq n\text{,}\) define \(E_{ij}\) to be the \(m\times n\) matrix with a one in the \(ij\)-th entry, and zeros elsewhere. Then \(S=\{E_{ij}\colon 1\leq i\leq m, 1\leq j\leq n\}\) is a spanning set for \(M_{mn}\text{.}\)
  • Polynomials of bounded degree.
    Let \(V=P_n\text{.}\) The set \(S=\{x^n, x^{n-1}, \dots, x, 1\}\) clearly spans \(P_n\text{.}\) This is just another way of saying that the monomials of degree at most \(n\) generate the polynomials of degree at most \(n\text{.}\)
  • Polynomials.
    Let \(V=P\text{,}\) the space of all polynomials. In a similar vein, the set
    \begin{equation*} S=\{1, x, x^2, \dots\}=\{x^i\colon i\geq 0\} \end{equation*}
    of all monomials is a spanning set for \(P\text{.}\)
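To see concretely why the tuple and matrix sets above are spanning sets, note that an arbitrary vector decomposes over them with its own entries as coefficients. For example, in \(M_{22}\) we have
\begin{equation*} \begin{bmatrix}a\amp b\\ c\amp d \end{bmatrix}=aE_{11}+bE_{12}+cE_{21}+dE_{22}\text{,} \end{equation*}
and similarly \((a_1,a_2,\dots, a_n)=a_1\bolde_1+a_2\bolde_2+\cdots +a_n\bolde_n\) in \(\R^n\text{.}\)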
Note the glaring difference between the first four examples and the last: our standard spanning set for \(P\) is infinite, whereas the previous examples are all finite spanning sets. You suspect, no doubt, that there is no finite spanning set for \(P\text{.}\) We will be able to prove this shortly.
It is important to observe that spanning sets for vector spaces are not unique. Far from it! In general, for any nonzero vector space there are infinitely many choices of spanning sets.

Example 3.5.8. Spanning sets are not unique.

For each \(V\) and \(S\) below, verify that \(S\) is a spanning set for \(V\text{.}\)
  1. \(V=\R^2\text{,}\) \(S=\{(1,1), (1,2)\}\)
  2. \(V=M_{22}\text{,}\) \(S=\{A_1, A_2, A_3, A_4\}\text{,}\)
    \begin{equation*} A_1=\begin{amatrix}[rr]1\amp 1\\ 1\amp 1 \end{amatrix}, A_2=\begin{amatrix}[rr]1\amp -1\\ 0\amp 0 \end{amatrix}, A_3=\begin{amatrix}[rr]0\amp 0\\ 1\amp -1 \end{amatrix}, A_4=\begin{amatrix}[rr]1\amp 1\\ -1\amp -1 \end{amatrix}\text{.} \end{equation*}
  3. \(V=P_2\text{,}\) \(S=\{x^2+x+1, x^2-x, x-1\}\)
Solution.
  1. This was shown in Example 3.5.3.
  2. We must show, given any \(A=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\text{,}\) we can find \(c_1, c_2, c_3, c_4\in \R\) such that
    \begin{equation*} c_1A_1+c_2A_2+c_3A_3+c_4A_4=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\text{,} \end{equation*}
    or
    \begin{equation*} \begin{amatrix}[rr]c_1+c_2+c_4 \amp c_1-c_2+c_4\\ c_1+c_3-c_4\amp c_1-c_3-c_4 \end{amatrix} = \begin{amatrix}[rr]a\amp b \\ c\amp d \end{amatrix}\text{.} \end{equation*}
    We can find such \(c_i\) if and only if the system with augmented matrix
    \begin{equation*} \begin{amatrix}[rrrr|r] 1\amp 1\amp 0\amp 1\amp a\\ 1\amp -1\amp 0\amp 1\amp b \\ 1\amp 0\amp 1\amp -1\amp c\\ 1\amp 0\amp -1\amp -1\amp d \end{amatrix} \end{equation*}
    is consistent. This matrix row reduces to
    \begin{equation*} \begin{amatrix}[rrrr|r] \boxed{1}\amp 1\amp 0\amp 1\amp a\\ 0\amp \boxed{1}\amp 0\amp 0\amp \frac{a-b}{2} \\ 0\amp 0\amp \boxed{1}\amp -2\amp c-\frac{a+b}{2}\\ 0\amp 0\amp 0\amp \boxed{1}\amp \frac{a+b-c-d}{4} \end{amatrix}\text{.} \end{equation*}
    Since the last column will never contain a leading one, we conclude that the system is consistent for any choice of \(a,b,c,d\text{,}\) and thus that \(\Span S=M_{22}\text{,}\) as claimed.
  3. We must show that given any \(p(x)=ax^2+bx+c\) we can find \(c_1,c_2,c_3\) such that
    \begin{equation*} c_1(x^2+x+1)+c_2(x^2-x)+c_3(x-1)=ax^2+bx+c\text{,} \end{equation*}
    or
    \begin{equation*} (c_1+c_2)x^2+(c_1-c_2+c_3)x+(c_1-c_3)=ax^2+bx+c\text{.} \end{equation*}
    According to Theorem 3.3.22 this equality holds if and only if
    \begin{equation*} \begin{linsys}{3} c_1 \amp +\amp c_2 \amp \amp \amp = \amp a\\ c_1 \amp -\amp c_2 \amp + \amp c_3 \amp = \amp b\\ c_1 \amp \amp \amp - \amp c_3 \amp = \amp c \end{linsys}\text{.} \end{equation*}
    As in the examples above, our reasoning implies \(\Span S=P_2\) if and only if this system is consistent for any choice of \(a,b,c\text{.}\) The usual Gaussian elimination procedure tells us that this is indeed so. We leave the details to you.
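Both spanning claims above reduce to showing that a square linear system is consistent for every right-hand side, which can be double-checked by machine. The following sketch in Python with NumPy (an assumption on our part; the text itself uses no software) verifies that the relevant coefficient matrices have full rank.

import numpy as np

# Part 2: each column records the entries of A_1, A_2, A_3, A_4 (read row by row),
# giving the coefficient matrix of the system in the solution above.
M = np.array([[1,  1,  0,  1],
              [1, -1,  0,  1],
              [1,  0,  1, -1],
              [1,  0, -1, -1]], dtype=float)

# Part 3: columns hold the x^2, x, and constant coefficients of
# x^2 + x + 1, x^2 - x, and x - 1, respectively.
P = np.array([[1,  1,  0],
              [1, -1,  1],
              [1,  0, -1]], dtype=float)

# Full rank for a square coefficient matrix means the system is consistent
# for every right-hand side, i.e. the corresponding set spans.
print(np.linalg.matrix_rank(M))  # 4 -> S spans M_22
print(np.linalg.matrix_rank(P))  # 3 -> S spans P_2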

Subsection 3.5.2 Linear independence

As mentioned at the top, the notion of linear independence is precisely what we need to guarantee that a given spanning set has no “redundancies”. We first define linear independence of a finite set of vectors (Definition 3.5.9) and later generalize to an arbitrary set of vectors (Definition 3.5.14).

Definition 3.5.9. Linear independence (for finite subsets).

Let \(V\) be a vector space, let \(S\) be a finite subset, and let \(\boldv_1, \boldv_2, \dots, \boldv_n\) be the distinct elements of \(S\text{:}\) i.e., \(\boldv_i\ne \boldv_j\) for all \(i\ne j\text{.}\) We say \(S\) is linearly independent if the following condition holds:
\begin{equation*} \text{if } c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero \text{ for some } c_i\in \R , \text{ then } c_i=0 \text{ for all } i\text{.} \end{equation*}
We say \(S\) is linearly dependent if it is not linearly independent; i.e., if we can find scalars \(c_1, c_2,\dots, c_n\) with \(c_i\ne 0\) for some \(i\) and
\begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero\text{.} \end{equation*}
We call a linear combination \(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\) trivial if \(c_i=0\) for all \(i\text{,}\) and nontrivial if \(c_i\ne 0\) for some \(i\text{.}\) Using this terminology, a set \(S\) is linearly independent if the only linear combination of distinct elements of \(S\) yielding the zero vector is the trivial one.

Remark 3.5.10.

The definition of linear independence is quite a mouthful! Some clarifying remarks:
  1. To prove a set \(S=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of \(n\) distinct vectors is linearly independent, we must prove an implication:
    \begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero \implies c_1=c_2=\dots=c_n=0\text{.} \end{equation*}
    More often than not you will do so in the direct manner: i.e., assume you have a linear combination equal to the zero vector, then show that all coefficients must be zero.
  2. By the same token, to show \(S\) is linearly dependent, we must produce a nontrivial linear combination of distinct elements equal to the zero vector.
  3. It is easy to see that \(S\) is linearly dependent if and only if some element of \(S\) can be written as a linear combination of other elements of \(S\text{:}\) just take the nontrivial linear combination yielding the zero vector, and solve for the vector in this combination with the nonzero coefficient.
    This makes precise what it means for a spanning set \(S\) to have redundancies or not. If \(S\) is linearly dependent, then one of its vectors can be written as a linear combination of the others, and thus can be “thrown out” when computing \(\Span S\text{,}\) the set of all linear combinations of \(S\text{.}\) Conversely, if \(S\) is linearly independent, then no element of \(S\) can be written as a linear combination of the others; throwing any element out would thus result in \(\Span S\) being strictly smaller.

Example 3.5.11. Elementary examples.

Let \(V\) be a vector space, and let \(S\) be a subset.
  1. If \(\boldzero\in S\text{,}\) then \(S\) is linearly dependent: indeed, we have the nontrivial linear combination \(1\boldzero=\boldzero\text{.}\)
  2. If \(S=\{\boldv\}\text{,}\) then \(S\) is linearly independent if and only if \(\boldv\ne \boldzero\text{.}\) The previous comment shows why \(\boldv\ne \boldzero\) is a necessary condition. Let’s see why it is sufficient.
    Suppose \(\boldv\ne\boldzero\text{,}\) and suppose we have \(c\boldv=\boldzero\text{.}\) By Theorem 3.1.16 we have \(c=0\) or \(\boldv=\boldzero\text{.}\) Since \(\boldv\ne \boldzero\text{,}\) we conclude \(c=0\text{.}\) This shows that the only linear combination of \(S\) yielding \(\boldzero\) is the trivial one.
  3. Suppose \(S=\{\boldv, \boldw\}\text{,}\) where \(\boldv\ne\boldw\text{.}\) From the last remark, it follows that \(S\) is linearly dependent if and only if one of its elements is a scalar multiple of the other.
    This makes it very easy to decide whether a two-element set is linearly independent. Note, however, that the same observation does not apply to larger sets: e.g., \(S=\{(1,1),(1,0), (0,1)\}\) can be shown to be linearly dependent, and yet no element of \(S\) is a scalar multiple of any other element.
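As a quick computational illustration of that last point, here is a sketch in Python with SymPy (an assumption on our part; the text itself uses no software). The null space of the matrix whose columns are \((1,1)\text{,}\) \((1,0)\text{,}\) \((0,1)\) contains a nonzero vector, and such a vector encodes a nontrivial dependence relation.

import sympy as sp

# Columns are the vectors (1,1), (1,0), (0,1) from the remark above.
A = sp.Matrix([[1, 1, 0],
               [1, 0, 1]])

# A nonzero null-space vector lists the coefficients of a nontrivial
# linear combination equal to the zero vector.
print(A.nullspace())  # [Matrix([[-1], [1], [1]])]: -(1,1) + (1,0) + (0,1) = (0,0)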
Deciding whether a subset \(S\) of a vector space \(V\) is linearly independent usually boils down to a question about the solutions to a certain system of linear equations. The procedure below outlines the steps necessary to extract the relevant linear system and draw the relevant conclusions.

Procedure 3.5.12. Linear independence via Gaussian elimination.

To decide whether a finite set \(S=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of distinct vectors of a vector space \(V\) is linearly independent, proceed as follows.
  1. Set a general linear combination equal to the zero vector: \(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero\text{,}\) where the \(c_i\) are unknown scalars.
  2. Use the description of vector equality in \(V\) to translate this vector equation into a homogeneous system of linear equations in the unknowns \(c_1, c_2, \dots, c_n\text{.}\)
  3. Solve the system using Gaussian elimination. If the only solution is the trivial one, then \(S\) is linearly independent; if there is a nontrivial solution, then \(S\) is linearly dependent, and any nontrivial solution provides an explicit nontrivial linear combination equal to \(\boldzero\text{.}\)
This is a fitting point to recall our Gaussian elimination mantra. As you can see, even as we move into more and more abstract realms of linear algebra (linear independence, span, etc.), Gaussian elimination remains our most important tool.

Example 3.5.13. Linear independence.

For each subset \(S\) of the given vector space \(V\text{,}\) decide whether \(S\) is linearly independent.
  1. \(V=\R^3\text{,}\) \(S=\{(1,1,2),(1,0,1), (-2,1,-1)\}\)
  2. \(V=P_2\text{,}\) \(S=\{x^2+x-2, 2x^2+1, x^2-x\}\)
  3. \(V=M_{22}\text{,}\) \(S=\{A_1, A_2, A_3\}\text{,}\) where
    \begin{equation*} A_1=\begin{bmatrix}3\amp 1\\ 2\amp -3 \end{bmatrix} , A_2= \begin{bmatrix}0\amp 4\\ 2\amp 0 \end{bmatrix} , A_3=\begin{bmatrix}-2\amp -2\\ -2\amp 2 \end{bmatrix}\text{.} \end{equation*}
Solution.
  1. We have
    \begin{equation*} a(1,1,2)+b(1,0,1)+c(-2,1,-1)=(0,0,0) \end{equation*}
    if and only if
    \begin{equation*} \begin{linsys}{3} a \amp +\amp b\amp -\amp 2c\amp =0\\ a \amp \amp \amp +\amp c\amp =0\\ 2a \amp +\amp b\amp -\amp c\amp =0\\ \end{linsys}\text{.} \end{equation*}
    After a little Gaussian elimination we see that \((a,b,c)=(1,-3,-1)\) is a nonzero solution to this system, and thus that
    \begin{equation*} (1,1,2)-3(1,0,1)-(-2,1,-1)=(0,0,0) \end{equation*}
    Since there is a nontrivial linear combination of elements of \(S\) yielding the zero vector, we conclude \(S\) is linearly dependent.
  2. Recall that the zero vector of \(P_2\) is the zero polynomial \(\boldzero=0x^2+0x+0\text{.}\) We have
    \begin{align*} a(x^2+x-2)+b(2x^2+1)+c(x^2-x)=\boldzero \amp \iff (a+2b+c)x^2+(a-c)x+(-2a+b)=0x^2+0x+0 \\ \amp \\ \amp \iff \begin{linsys}{3} a\amp +\amp 2b\amp +\amp c\amp =\amp 0\\ a\amp \amp \amp -\amp c\amp =\amp 0\\ -2a\amp +\amp b \amp \amp \amp =\amp 0 \end{linsys} \amp (\knowl{./knowl/th_poly_equality.html}{\text{Theorem 3.3.22}})\text{.} \end{align*}
    Gaussian elimination shows that \((a,b,c)=(0,0,0)\) is the unique solution to this last system. We conclude that \(S\) is linearly independent.
  3. We have
    \begin{align*} a\begin{bmatrix}3\amp 1\\ 2\amp -3 \end{bmatrix} +b\begin{bmatrix}0\amp 4\\ 2\amp 0 \end{bmatrix} +c\begin{bmatrix}-2\amp -2\\ -2\amp 2 \end{bmatrix}= \begin{bmatrix}0\amp 0\\0\amp 0 \end{bmatrix} \amp \iff \begin{bmatrix}3a-2c\amp a+4b-2c\\ 2a+2b-2c\amp -3a+2c \end{bmatrix}=\begin{bmatrix}0\amp 0\\0\amp 0 \end{bmatrix}\\ \amp \\ \amp \iff \begin{linsys}{3} 3a\amp \amp \amp -\amp 2c\amp =\amp 0\\ a\amp +\amp 4b\amp -\amp 2c\amp =\amp 0\\ 2a\amp +\amp 2b \amp -\amp 2c \amp =\amp 0\\ -3a\amp \amp \amp +\amp 2c\amp =\amp 0 \end{linsys} \text{.} \end{align*}
    Row reduction reveals that this last linear system has a free variable, and hence that there are infinitely many solutions to this system: e.g., \((a,b,c)=(2,1,3)\text{.}\) We conclude that \(S\) is linearly dependent.
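Each part above amounts to finding the solutions of a homogeneous linear system, so the conclusions can be verified by machine. A sketch in Python with SymPy (again an assumption, purely for checking): an empty null space means only the trivial combination produces the zero vector.

import sympy as sp

# Part 1: columns are (1,1,2), (1,0,1), (-2,1,-1).
A1 = sp.Matrix([[1, 1, -2],
                [1, 0,  1],
                [2, 1, -1]])

# Part 2: columns hold the x^2, x, constant coefficients of
# x^2 + x - 2, 2x^2 + 1, x^2 - x.
A2 = sp.Matrix([[ 1, 2,  1],
                [ 1, 0, -1],
                [-2, 1,  0]])

# Part 3: columns record the entries of A_1, A_2, A_3 (read row by row).
A3 = sp.Matrix([[ 3, 0, -2],
                [ 1, 4, -2],
                [ 2, 2, -2],
                [-3, 0,  2]])

print(A1.nullspace())  # basis vector (-1, 3, 1): nontrivial, so linearly dependent
print(A2.nullspace())  # []: only the trivial solution, so linearly independent
print(A3.nullspace())  # nontrivial (a multiple of (2, 1, 3)): linearly dependent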
So far we have only defined linear independence for finite subsets of a vector space. The notion is easily extended to arbitrary (potentially infinite) subsets.

Definition 3.5.14. Linear independence (for arbitrary subsets).

Let \(V\) be a vector space. A subset \(S\subseteq V\) is linearly independent if and only if every finite subset of \(S\) is linearly independent in the sense of Definition 3.5.9.

Remark 3.5.15.

A subtle question arises here: namely, whether our definition of linear independence for arbitrary sets is consistent with our definition for finite ones. In other words, given a set \(S=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of \(n\) distinct vectors, is it true that \(S\) is linearly independent (in the sense of 3.5.9) if and only if every subset \(\{\boldv_{i_1}, \boldv_{i_2}, \dots, \boldv_{i_k}\}\) is linearly independent (in the sense of 3.5.9)? The answer of course is yes. One direction is easy: since \(S\) is a subset of itself, clearly if every subset is linearly independent, then \(S\) is linearly independent. The other direction is left as an exercise (Exercise 3.5.4.19).

Example 3.5.16. Linear independence in \(P\).

Consider the vector space \(P\) of polynomial functions on \(\R\text{.}\) We show that the set
\begin{equation*} S=\{1,x, x^2, \dots\}=\{x^n\colon n\geq 0\} \end{equation*}
is linearly independent. Given a finite subset \(T\subseteq S\text{,}\) we can write \(T=\{x^{n_1}, x^{n_2}, \dots, x^{n_r}\}\text{,}\) where \(n_1< n_2 < \cdots < n_r\text{.}\) By Corollary 0.7.4, we have
\begin{equation*} c_1x^{n_1}+c_2x^{n_2}+\cdots +c_rx^{n_r}=\boldzero \end{equation*}
if and only if
\begin{equation*} c_1=c_2=\cdots =c_r=0\text{.} \end{equation*}
This proves \(T\) is linearly independent. We have thus shown that any finite subset of \(S\) is linearly independent, and thus conclude that \(S\) is linearly independent by Definition 3.5.14.

Subsection 3.5.3 Linear independence in function spaces

In each computation of Example 3.5.13 it was easy to derive a relevant linear system since the notion of equality in each of these spaces amounts to checking a finite number of numeric equalities: e.g., between the entries of two triples, the coefficients of two polynomials, or the entries of two \(2\times 2\) matrices.
Things are slightly more complicated when working in function spaces on an interval \(X\) (e.g., \(C(X)\text{,}\) \(C^\infty(X)\text{,}\) etc.), since by definition functions \(f\) and \(g\) are equal if and only if \(f(x)=g(x)\) for all \(x\in X\text{.}\) Thus, in principle, verifying that two functions are equal amounts to showing that infinitely many equalities hold: one for each \(x\in X\text{.}\) Despite this added complexity, we can still investigate linear independence of functions with certain linear systems, as described in the procedure below.

Procedure 3.5.17. Linear independence of functions.

Let \(S=\{f_1, f_2, \dots, f_n\}\) be a finite set of distinct functions in a function space on \(X\text{.}\)
  1. To show \(S\) is linearly independent, assume \(c_1f_1+c_2f_2+\cdots +c_nf_n=\boldzero\text{.}\) Evaluating both sides at well-chosen points \(x_1, x_2, \dots, x_m\in X\) yields a homogeneous linear system in the unknowns \(c_1, c_2, \dots, c_n\text{.}\) If this system forces \(c_1=c_2=\cdots =c_n=0\text{,}\) then \(S\) is linearly independent.
  2. To show \(S\) is linearly dependent, exhibit a functional identity expressing a nontrivial linear combination of \(f_1, f_2, \dots, f_n\) as the zero function on all of \(X\text{;}\) checking finitely many points is not enough, since the combination must vanish at every \(x\in X\text{.}\)

Remark 3.5.18.

When using Procedure 3.5.17 to show \(S=\{f_1, f_2, \dots, f_n\}\) is linearly independent, you want to produce enough linear equations to force the unknowns \(c_i\) to all be equal to 0. Note that since you have \(n\) unknowns, you need at least \(n\) equations, and so must pick at least \(n\) elements \(x_i\in X\text{.}\) Do so judiciously in order to (a) force the \(c_i\) to all be equal to 0, and (b) make the necessary computations palatable.
We illustrate Procedure 3.5.17 in the following example.

Example 3.5.19. Linear independence of functions.

Let \(V=F((-2\pi, 2\pi), \R)\text{.}\) Determine whether the given subset \(S\) is linearly independent.
  1. \(\displaystyle S=\{f(x)=x, g(x)=\cos x, h(x)=\sin x\}\)
  2. \(S=\{f(x)=1, g(x)=\cos 2x, h(x)=\cos^2 x\}\text{.}\)
Solution.
  1. We claim \(S\) is linearly independent. If
    \begin{equation*} c_1f+c_2g+c_3h=\boldzero\text{,} \end{equation*}
    then
    \begin{equation*} c_1x+c_2\cos x+c_3\sin x=0\text{,} \end{equation*}
    for all \(x\in (-2\pi, 2\pi)\text{.}\) In particular, the equality is true when evaluated at \(x=0, \pi/2, \pi\text{,}\) yielding the following linear system:
    \begin{equation*} \begin{linsys}{4} x=0 \colon \amp \amp 0c_1\amp +\amp \cos(0)c_2\amp +\amp \sin(0) c_3\amp= \amp 0\\ x=\pi/2 \colon \amp \amp (\pi/2)c_1\amp +\amp \cos(\pi/2)c_2\amp +\amp \sin(\pi/2)c_3 \amp = \amp 0\\ x=\pi \colon \amp \amp \pi c_1\amp +\amp \cos(\pi)c_2\amp +\amp \sin(\pi)c_3 \amp= \amp 0 \end{linsys}\text{,} \end{equation*}
    or
    \begin{equation*} \begin{linsys}{4} x=0 \colon \amp \amp \amp \amp c_2\amp \amp \amp = \amp 0\\ x=\pi/2 \colon \amp \amp (\pi/2)c_1\amp \amp \amp +\amp c_3 \amp = \amp 0\\ x=\pi \colon \amp \amp \pi c_1\amp - \amp c_2 \amp \amp \amp= \amp 0 \end{linsys}\text{.} \end{equation*}
    This system can now be solved essentially by inspection: the first equation implies \(c_2=0\text{;}\) setting \(c_2=0\) in the third equation, we conclude \(c_1=0\text{;}\) the second equation now implies \(c_3=0\text{.}\) We conclude that \(c_1=c_2=c_3=0\text{.}\) Thus the only linear combination of \(f, g, h\) yielding the zero function is the trivial one, showing that \(S\) is linearly independent.
  2. We claim \(S\) is linearly dependent. As discussed in Procedure 3.5.17, to prove this we have to produce an identity involving these functions. The double-angle formula from trigonometry tells us that
    \begin{equation*} \cos 2x=\cos^2 x-\sin^2x=2\cos^2x-1 \end{equation*}
    for all \(x\in (-2\pi, 2\pi)\) (in fact for all \(x\in \R\)). It follows that
    \begin{equation*} 1+\cos 2x-2\cos^2x=0 \end{equation*}
    for all \(x\in (-2\pi, 2\pi)\text{,}\) or
    \begin{equation*} f+g-2h=\boldzero\text{.} \end{equation*}
    Since we have found a nontrivial linear combination of elements of \(S\) yielding the zero vector, we conclude that \(S\) is linearly dependent.
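The point-evaluation idea used in part 1 can be mimicked numerically. The following rough sketch in Python with NumPy (an assumption on our part; the text does no computing) builds the matrix of sampled function values: full column rank certifies independence, while a persistently deficient rank, as in part 2, merely hints at a dependence relation, which must still be proved by an identity as above.

import numpy as np

# Part 1: rows evaluate (x, cos x, sin x) at the sample points 0, pi/2, pi.
samples = [0.0, np.pi / 2, np.pi]
A = np.array([[x, np.cos(x), np.sin(x)] for x in samples])
print(np.linalg.matrix_rank(A))  # 3: only the trivial combination vanishes at all samples

# Part 2: rows evaluate (1, cos 2x, cos^2 x) at many points of (-2*pi, 2*pi).
# The rank never exceeds 2, reflecting the identity 1 + cos 2x = 2 cos^2 x.
B = np.array([[1.0, np.cos(2 * x), np.cos(x) ** 2] for x in np.linspace(-6.0, 6.0, 50)])
print(np.linalg.matrix_rank(B))  # 2: a dependence relation is suggested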

Video example: linear independence of functions.

Figure 3.5.20. Video: linear independence of functions

Exercises 3.5.4 Exercises

WeBWork Exercises

1.
Let \(\mathbf{u}_4\) be a linear combination of \(\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3\rbrace\text{.}\)
Select the best statement.
  • \(\mathrm{span}\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace = \mathrm{span}\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4 \rbrace\) .
  • We only know that \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\subseteq \mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4\rbrace\) .
  • \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace= \mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4\rbrace\) when \(\mathbf{u}_4\) is a scalar multiple of one of \(\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\text{.}\)
  • There is no obvious relationship between \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\) and \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4 \rbrace\) .
  • none of the above
Solution.
span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace=\) span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\text{.}\)
2.
Let \({\bf u}_4\) be a vector that is not a linear combination of \(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,\right\rbrace\text{.}\)
Select the best statement.
  • We only know that span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\subseteq\) span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace\) .
  • There is no obvious relationship between span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace\) and span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\) .
  • We only know that span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3 \right\rbrace \subseteq\) span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4 \right\rbrace\text{.}\)
  • span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace\) is a proper subset of span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\text{.}\)
  • span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace=\) span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\) .
  • span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace=\) span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\) when none of \(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,\right\rbrace\) is a linear combination of the others.
  • none of the above
Solution.
span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace\) is a proper subset of span\(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\text{.}\)
3.
Let \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\) be a linearly independent set of vectors.
Select the best statement.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is also a linearly independent set of vectors unless \({\bf u}_4={\bf 0}\text{.}\)
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly dependent set of vectors.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) could be a linearly independent or linearly dependent set of vectors depending on the vectors chosen.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly independent set of vectors.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is also a linearly independent set of vectors unless \({\bf u}_4\) is a scalar multiple another vector in the set.
  • none of the above
Solution.
If \({\bf u}_4\) is a linear combination of \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\text{,}\) then \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is linearly dependent. Otherwise the set is linearly independent. Thus \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) could be a linearly independent or linearly dependent set of vectors depending on the vectors chosen.
4.
Let \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\) be a linearly dependent set of vectors.
Select the best statement.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is a linearly independent set of vectors unless \({\bf u}_4\) is a linear combination of other vectors in the set.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly dependent set of vectors.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) could be a linearly independent or linearly dependent set of vectors depending on the vectors chosen.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is a linearly independent set of vectors unless \({\bf u}_4={\bf 0}\text{.}\)
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly independent set of vectors.
  • none of the above
Solution.
\(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly dependent set of vectors.
5.
Let \(V\) be the vector space \({\mathbb P}_3[x]\) of polynomials in \(x\) with degree less than 3 and \(W\) be the subspace
\begin{equation*} W=\text{span} \lbrace 9+9x+x^{2},4+7x-6x^{2} \rbrace . \end{equation*}
a. Find a nonzero polynomial \(p(x)\) in \(W\text{.}\)
\(p(x) =\)
b. Find a polynomial \(q(x)\) in \(V\setminus W\text{.}\)
\(q(x) =\)
Answer 1.
\(13+16x-5x^{2}\)
Answer 2.
\(-x\)
6.
Determine whether each set \(\left\{ p_{1},p_{2} \right\}\) is a linearly independent set in \({\mathbb P}_{3}\text{.}\) Type "yes" or "no" for each answer.
The polynomials \(p_{1}(t)=1+t^2\) and \(p_{2}(t)=1-t^2\text{.}\)
The polynomials \(p_{1}(t)=2t+t^2\) and \(p_{2}(t)=1+t\text{.}\)
The polynomials \(p_{1}(t)=2t-4t^2\) and \(p_{2}(t)=6t^2-3t\text{.}\)
7.
Are the vectors \(\left[\begin{array}{c} 4\cr -5\cr 4 \end{array}\right]\text{,}\) \(\left[\begin{array}{c} 2\cr 2\cr -3 \end{array}\right]\) and \(\left[\begin{array}{c} 18\cr -9\cr 3 \end{array}\right]\) linearly independent?
  • linearly dependent
  • linearly independent
If they are linearly dependent, find scalars that are not all zero such that the equation below is true. If they are linearly independent, find the only scalars that will make the equation below true.
\(\left[\begin{array}{c} 4\cr -5\cr 4 \end{array}\right] +\) \(\left[\begin{array}{c} 2\cr 2\cr -3 \end{array}\right] +\) \(\left[\begin{array}{c} 18\cr -9\cr 3 \end{array}\right] =\) \(\left[\begin{array}{c} 0\cr 0\cr 0 \end{array}\right]\text{.}\)
Answer 1.
\(\text{linearly dependent}\)
Answer 2.
\(-3;\,-3;\,1\)
8.
Let \(V\) be the vector space of symmetric \(2\times 2\) matrices and \(W\) be the subspace
\begin{equation*} W=\text{span} \lbrace \left[\begin{array}{cc} -2 \amp 4\cr 4 \amp 3 \end{array}\right],\left[\begin{array}{cc} 4 \amp 3\cr 3 \amp 2 \end{array}\right] \rbrace . \end{equation*}
a. Find a nonzero element \(X\) in \(W\text{.}\)
\(X =\) (2 × 2 array)
b. Find an element \(Y\) in \(V\) that is not in \(W\text{.}\)
\(Y =\) (2 × 2 array)

Exercise Group.

A subset \(S\) of a vector space is given in each exercise below. Determine whether the given elements are elements of \(\Span S\text{.}\) Justify your answer by either providing a linear combination of \(S\) yielding the given element, or else showing that no such linear combination exists.
9.
\(V=\R^3\text{,}\) \(S=\{(1,0,1),(1,1,1), (1,2,1) \}\)
  1. \(\displaystyle \boldv=(0,0,0)\)
  2. \(\displaystyle \boldw=(1,2,1)\)
  3. \(\displaystyle \boldu=(7,1,0)\)
10.
\(V=M_{22}\text{,}\) \(S=\{A, B, C\}\text{,}\) where
\begin{equation*} A = \begin{bmatrix}4\amp 0\\ -2\amp -2 \end{bmatrix} \hspace{5pt}, B = \begin{bmatrix}1\amp -1\\ 2\amp 3 \end{bmatrix} \hspace{5pt}, C= \begin{bmatrix}0\amp 2\\ 1\amp 4 \end{bmatrix}\text{.} \end{equation*}
  1. \(\displaystyle A_1=\begin{bmatrix}6\amp -8\\ -1\amp -8 \end{bmatrix}\)
  2. \(\displaystyle A_2=\begin{bmatrix}-1\amp 5\\ 7\amp 1 \end{bmatrix}\)
11.
\(V=C(\R)\text{,}\) \(S=\{f_1(x)=\cos^2 x, f_2(x)=\sin^2 x\}\)
  1. \(\displaystyle f(x)=x\)
  2. \(\displaystyle g(x)=\cos 2x\)
  3. \(\displaystyle h(x)=\sin x\)

12.

Determine whether \(S\) is a spanning set of \(V\text{.}\)
  1. \(V=\R^3\text{,}\) \(S=\{(1,-1,1),(0,1,-2), (1,0,1)\} \)
  2. \(V=\R^3\text{,}\) \(S=\{(1,-1,1), (0,1,-2), (-3,2,-1)\}\)
  3. \(V=P_2\text{,}\) \(S=\{ x^2-1, x^2+1, x^2+2x \}\)
  4. \(V=\R_{>0}\) (see Example 3.1.13), \(S=\{3\}\)

Exercise Group.

In each exercise, determine whether the given subset \(S\) of the vector space \(V\) is linearly independent. Justify your answer.
13.
\(V=\R^4\text{,}\) \(S=\{(3,8,7,-3), (1,5,3,-1), (2,-1,2,6), (4,2,6,4)\}\)
14.
\(V=P_2\text{,}\) \(S=\{2x^2-x-1, x^2-2x+1, x^2+x+1 \}\)
15.
\(V=C([0,1])\text{,}\) \(S=\{f(x)=x, g(x)=2\val{x} \}\)
16.
\(V=C([-1,1])\text{,}\) \(S=\{f(x)=x, g(x)=2\val{x} \}\)

17.

Let \(V=M_{22}\) and let \(S=\{ A_1, A_2, A_3\}\text{,}\) where
\begin{equation*} A_1=\begin{bmatrix}1\amp 0\\ 1\amp c \end{bmatrix}, A_2=\begin{bmatrix}-1\amp 0\\ c\amp 1 \end{bmatrix}, A_3=\begin{bmatrix}2\amp 0\\ 1\amp 3 \end{bmatrix}\text{.} \end{equation*}
Determine all values \(c\in \R\) for which \(S\) is linearly independent.

18.

Let \(V=M_{22}\text{,}\) and define \(A_1=\begin{bmatrix}1\amp 1\\1\amp 1 \end{bmatrix}\text{,}\) \(A_2=\begin{bmatrix}0\amp 1\\1\amp 0 \end{bmatrix}\text{,}\) \(A_3=\begin{bmatrix}1\amp 1\\ 1\amp 0 \end{bmatrix}\text{.}\)
  1. Compute \(W=\Span(\{A_1,A_2,A_3\})\text{,}\) identifying it as a certain familiar set of matrices.
  2. Decide whether \(S=\{A_1,A_2,A_3\}\) is independent.

19.

Let \(V\) be a vector space, and let \(S=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) be a subset of distinct vectors. Assume \(S\) is linearly independent. Show that any subset \(T\subseteq S\) is linearly independent.
Hint.
The empty set \(T=\{\, \}\) is trivially linearly independent. Let \(T=\{\boldv_{i_1}, \boldv_{i_2}, \dots, \boldv_{i_k}\}\) represent an arbitrary nonempty set of distinct elements of \(S\text{.}\) You can show \(T\) is linearly independent by extending linear combinations of the elements of \(T\) to a linear combination of the elements of \(S\) in a certain way.

20. Span, independence, and invertibility.

In this exercise we identify elements of \(V=\R^n\) with \(n\times 1\) column vectors.
Let \(S=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) be a subset of \(\R^n\text{,}\) and let \(A\) be the \(n\times n\) matrix whose \(j\)-th column is \(\boldv_j\text{:}\) i.e.,
\begin{equation*} A=\begin{bmatrix}\vert \amp \vert \amp \cdots \amp \vert\\ \boldv_1\amp \boldv_2\amp \cdots \amp \boldv_n\\ \vert \amp \vert \amp \cdots \amp \vert \end{bmatrix}\text{.} \end{equation*}
  1. Prove: \(\Span S=\R^n\) if and only if \(A\) is invertible.
  2. Prove: \(S\) is linearly independent if and only if \(A\) is invertible.
Hint.
Use the column method (Theorem 2.1.24) and the invertibility theorem (Theorem 2.5.27).

21. Linear transformations, span, and independence.

Suppose \(T\colon V\rightarrow W\) is a linear transformation. Let \(S=\{\boldv_1,\boldv_2,\dots ,\boldv_r\}\) be a subset of \(V\text{,}\) and let \(S'\) be the image of \(S\) under \(T\text{:}\) i.e.,
\begin{equation*} S'=T(S)=\{T(\boldv_1), T(\boldv_2),\dots, T(\boldv_r)\}\text{.} \end{equation*}
Assume \(\boldv_i\ne \boldv_j\) and \(T(\boldv_i)\ne T(\boldv_j)\) for all \(i\ne j\text{.}\)
Answer true or false: if true, provide a proof; if false, provide an explicit counterexample. Note: for a complete counterexample you need to specify \(T\colon V\rightarrow W\text{,}\) and \(S\text{.}\)
  1. If \(S\) is linearly independent, then \(S'\) is linearly independent.
  2. If \(S'\) is linearly independent, then \(S\) is linearly independent.
  3. If \(S\) is a spanning set for \(V\text{,}\) then \(S'\) is a spanning set for \(\im T\text{.}\)