

Section 4.2 Span and linear independence

There are many situations in mathematics where we want to describe an infinite set in a concise manner. We saw this at work already in Section 2.3, where infinite sets of solutions to linear systems were neatly described with parametric expressions.
A similar issue arises when describing vector spaces and their subspaces. As we know, any vector space is either the zero space or infinite (Exercise 1.1.3.6). If we happen to be dealing with a subspace of \(\R^n\text{,}\) then there is the possibility of giving a parametric description; but how do we proceed when working in one of our more exotic vector spaces like \(C^1(\R)\text{?}\)
As we will see in Section 4.3, the relevant linear algebraic tool for this purpose is the concept of a basis. Loosely speaking, a basis for a vector space \(V\) is a set of vectors that is large enough to generate the entire space, and small enough to contain no redundancies. What exactly we mean by “generate” is captured by the rigorous notion of span; and what we mean by “no redundancies” is captured by linear independence.

Subsection 4.2.1 Span

Recall that a linear combination in a vector space \(V\) is a vector of the form
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{,} \end{equation*}
where \(\boldv_1, \boldv_2, \dots, \boldv_n\in V\) and the \(c_i\in \R\) are scalars. We use this notion to define the span of a set of vectors.

Definition 4.2.1. Span.

Let \(V\) be a vector space, and let \(S\subseteq V\) be any subset of \(V\text{.}\) The span of \(S\), denoted \(\Span S\text{,}\) is the subset of \(V\) defined as follows:
  • If \(S=\emptyset\text{,}\) then \(\Span S=\{\boldzero_V\}\text{.}\)
  • Otherwise we define \(\Span S\) to be the set of all linear combinations of elements of \(S\text{:}\) i.e.,
    \begin{equation*} \Span S=\{\boldv\in V\colon \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n \text{ for some } \boldv_i\in S \text{ and } c_i\in \R\}\text{.} \end{equation*}

Remark 4.2.2.

Let \(S\) be a subset of \(V\text{.}\) Some simple observations:
  1. The zero vector is always an element of \(\Span S\text{.}\) Indeed, if \(S=\emptyset\text{,}\) then \(\Span S=\{\boldzero\}\) by definition. Otherwise, given any \(\boldv\in S\text{,}\) the linear combination \(0\boldv=\boldzero\) is an element of \(\Span S\text{.}\)
  2. We have \(S\subseteq \Span S\text{:}\) i.e., \(\Span S\) includes \(S\) itself. Indeed, given any \(\boldv\in S\text{,}\) the linear combination \(1\boldv=\boldv\) is an element of \(\Span S\text{.}\)
  3. If \(S=\{\boldv\}\) contains exactly one element, then \(\Span S=\{c\boldv\colon c\in \R\}\) is simply the set of all scalar multiples of \(\boldv\text{.}\)
    If \(\boldv\ne \boldzero\text{,}\) then we know that this set is infinite (Exercise 1.1.3.6). Thus even when \(S\) is finite, \(\Span S\) will be infinite, as long as \(S\) contains at least one nonzero vector.

Example 4.2.3. Examples in \(\R^2\).

Let \(V=\R^2\text{.}\) For each \(S\text{,}\) identify \(\Span S\) as a familiar geometric object.
  1. \(S=\{ \}\text{.}\)
  2. \(\displaystyle S=\{(0,0)\}\)
  3. \(S=\{\boldv\}\text{,}\) \(\boldv=(a,b)\ne (0,0)\)
  4. \(\displaystyle S=\{ (1,0), (0,1)\}\)
  5. \(\displaystyle S=\{ (1,1), (2,2)\}\)
  6. \(\displaystyle S=\{(1,1),(1,2)\}\)
  7. \(\displaystyle S=\R^2\)
Solution.
  1. \(\Span S=\{\boldzero\}\text{,}\) the set containing just the origin, by definition.
  2. \(\Span S\) is the set of all scalar multiples of \((0,0)\text{.}\) Thus \(\Span S=\{\boldzero\}\text{.}\)
  3. \(\Span S\) is the set of all scalar multiples of the nonzero vector \((a,b)\text{.}\) Geometrically, this is the line that passes through the origin and the point \((a,b)\text{.}\)
  4. By definition
    \begin{equation*} \Span S=\{a(1,0)+b(0,1)\colon a,b\in \R\}=\{(a,b)\colon a,b\in\R\}\text{.} \end{equation*}
    Thus \(\Span S=\R^2\text{,}\) the entire \(xy\)-plane.
  5. By definition
    \begin{equation*} \Span S=\{a(1,1)+b(2,2)\colon a,b\in \R\}=\{(a+2b,a+2b)\colon a,b\in\R\}\text{.} \end{equation*}
    It is easy to see that \(\Span S=\{(t,t)\colon t\in \R\}\text{,}\) the line with equation \(y=x\text{.}\) Note that in this case we have
    \begin{equation*} \Span S=\Span\{(1,1), (2,2)\}=\Span \{(1,1)\}\text{,} \end{equation*}
    and thus that the vector \((2,2)\) is in some sense redundant.
  6. By definition
    \begin{equation*} \Span S=\{a(1,1)+b(1,2)\colon a,b\in \R\}=\{(a+b,a+2b)\colon a,b\in\R\}\text{.} \end{equation*}
    Claim: \(\Span S=\R^2\text{.}\) Proving the claim amounts to showing that for all \((c,d)\in \R^2\) there exist \(a,b\in \R\) such that
    \begin{equation*} \begin{array}{ccccc} a \amp +\amp b \amp =\amp c\\ a \amp +\amp 2b \amp =\amp d \end{array}\text{.} \end{equation*}
    Solving this system using Gaussian elimination, we see that the system has the unique solution
    \begin{align*} a\amp =2c-d \amp b\amp =d-c\text{,} \end{align*}
    and thus that
    \begin{equation*} (2c-d)(1,1)+(d-c)(1,2)=(c,d)\text{.} \end{equation*}
    This proves \(\Span S=\R^2\text{,}\) as claimed. (A short computational check of this calculation appears after the example.)
  7. By Remark 4.2.2, we have \(S\subseteq \Span S\text{.}\) Thus \(\R^2\subseteq \Span \R^2\text{.}\) Since \(\Span \R^2\subseteq \R^2\) by definition, we conclude that \(\Span S=\R^2\text{.}\)
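The claim in part 6 can also be checked by machine. The sketch below uses Python with the SymPy library (an optional aid that this text does not otherwise rely on): it solves the system \(a+b=c\text{,}\) \(a+2b=d\) symbolically and recovers the coefficients found above by Gaussian elimination.

    import sympy as sp

    # Unknown coefficients a, b and an arbitrary target vector (c, d).
    a, b, c, d = sp.symbols('a b c d')

    # Solve a*(1,1) + b*(1,2) = (c,d), i.e. the system a + b = c, a + 2b = d.
    solution = sp.solve([sp.Eq(a + b, c), sp.Eq(a + 2*b, d)], [a, b])
    print(solution)  # {a: 2*c - d, b: -c + d}

Since a solution exists for every symbolic choice of \(c\) and \(d\text{,}\) every vector \((c,d)\) is a linear combination of \((1,1)\) and \((1,2)\text{,}\) confirming that \(\Span\{(1,1),(1,2)\}=\R^2\text{.}\)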

Example 4.2.4. Video example: computing span.

Figure 4.2.5. Video: computing span
You may have noticed that each span computation in the previous example produced a subspace of \(\R^2\text{.}\) This is no accident!

Theorem 4.2.6. Span is a subspace.

Let \(V\) be a vector space and let \(S\) be any subset of \(V\text{.}\)
  1. The set \(\Span S\) is a subspace of \(V\text{.}\)
  2. If \(W\) is a subspace of \(V\) and \(S\subseteq W\text{,}\) then \(\Span S\subseteq W\text{.}\)

Proof.

We prove each statement separately.
Statement (1).
To show \(\Span S\) is a subspace, we use the two-step technique.
  1. By Remark 4.2.2 we know that \(\boldzero\in \Span S \text{.}\)
  2. Suppose \(\boldv, \boldw\in \Span S\text{.}\) By definition we have
    \begin{align*} \boldv \amp =c_1\boldv_1+c_2\boldv_2+\cdots +c_r\boldv_r \amp \boldw \amp = c_{r+1}\boldv_{r+1}+c_{r+2}\boldv_{r+2}+\cdots +c_{r+s}\boldv_{r+s} \end{align*}
    for some vectors \(\boldv_1, \boldv_2, \dots, \boldv_{r+s}\in S\) and scalars \(c_1,c_2,\dots, c_{r+s}\text{.}\) Then for any \(c,d\in \R\) we have
    \begin{equation*} c\boldv+d\boldw=cc_1\boldv_1+cc_2\boldv_2+\cdots +cc_r\boldv_r+dc_{r+1}\boldv_{r+1}+dc_{r+2}\boldv_{r+2}+\cdots +dc_{r+s}\boldv_{r+s}\text{,} \end{equation*}
    which is clearly a linear combination of elements of \(S\text{.}\) Thus \(c\boldv+d\boldw\in \Span S\text{,}\) as desired.
Statement (2).
Let \(W\subseteq V\) be a subspace that contains all elements of \(S\text{.}\) Since \(W\) is closed under arbitrary linear combinations, it must contain any linear combination of elements of \(S\text{,}\) and thus \(\Span S\subseteq W\text{.}\)
The results of Theorem 4.2.6 motivate the following additional terminology.

Definition 4.2.7. Spanning set.

Let \(S\) be a subset of the vector space \(V\text{.}\) We call \(W=\Span S\) the subspace of \(V\) generated by \(S\text{,}\) and we call \(S\) a spanning set for \(W\text{.}\)

Remark 4.2.8. Some standard spanning sets.

For most of the vector spaces we’ve met, a natural spanning set springs to mind. We will refer to these loosely as standard spanning sets. Some examples:
  • Zero space.
    Let \(V=\{\boldzero\}\text{.}\) By definition the empty set \(S=\emptyset=\{ \}\) is a spanning set of \(V\text{.}\)
  • Tuples.
    Let \(V=\R^n\text{.}\) For \(1\leq i\leq n\text{,}\) define \(\bolde_i\) to be the \(n\)-tuple with a one in the \(i\)-th entry, and zeros elsewhere. Then \(S=\{\bolde_1, \bolde_2,\dots, \bolde_n\}\) is a spanning set for \(\R^n\text{.}\)
  • Matrices.
    Let \(V=M_{mn}\text{.}\) For each \((i,j)\) with \(1\leq i\leq m\) and \(1\leq j\leq n\text{,}\) define \(E_{ij}\) to be the \(m\times n\) matrix with a one in the \(ij\)-th entry, and zeros elsewhere. Then \(S=\{E_{ij}\colon 1\leq i\leq m, 1\leq j\leq n\}\) is a spanning set for \(M_{mn}\text{.}\)
It is important to observe that spanning sets for vector spaces are not unique. Far from it! In general, for any nonzero vector space there are infinitely many choices of spanning sets.

Example 4.2.9. Spanning sets are not unique.

For each \(V\) and \(S\) below, verify that \(S\) is a spanning set for \(V\text{.}\)
  1. \(V=\R^2\text{,}\) \(S=\{(1,1), (1,2)\}\)
  2. \(V=M_{22}\text{,}\) \(S=\{A_1, A_2, A_3, A_4\}\text{,}\)
    \begin{equation*} A_1=\begin{amatrix}[rr]1\amp 1\\ 1\amp 1 \end{amatrix}, A_2=\begin{amatrix}[rr]1\amp -1\\ 0\amp 0 \end{amatrix}, A_3=\begin{amatrix}[rr]0\amp 0\\ 1\amp -1 \end{amatrix}, A_4=\begin{amatrix}[rr]1\amp 1\\ -1\amp -1 \end{amatrix}\text{.} \end{equation*}
Solution.
  1. This was shown in Example 4.2.3, part 6.
  2. We must show, given any \(A=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\text{,}\) we can find \(c_1, c_2, c_3, c_4\in \R\) such that
    \begin{equation*} c_1A_1+c_2A_2+c_3A_3+c_4A_4=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\text{,} \end{equation*}
    or
    \begin{equation*} \begin{amatrix}[rr]c_1+c_2+c_4 \amp c_1-c_2+c_4\\ c_1+c_3-c_4\amp c_1-c_3-c_4 \end{amatrix} = \begin{amatrix}[rr]a\amp b \\ c\amp d \end{amatrix}\text{.} \end{equation*}
    We can find such \(c_i\) if and only if the system with augmented matrix
    \begin{equation*} \begin{amatrix}[rrrr|r] 1\amp 1\amp 0\amp 1\amp a\\ 1\amp -1\amp 0\amp 1\amp b \\ 1\amp 0\amp 1\amp -1\amp c\\ 1\amp 0\amp -1\amp -1\amp d \end{amatrix} \end{equation*}
    is consistent. This matrix row reduces to
    \begin{equation*} \begin{amatrix}[rrrr|r] \boxed{1}\amp 1\amp 0\amp 1\amp a\\ 0\amp \boxed{1}\amp 0\amp 0\amp \frac{a-b}{2} \\ 0\amp 0\amp \boxed{1}\amp -2\amp c-\frac{a+b}{2}\\ 0\amp 0\amp 0\amp \boxed{1}\amp \frac{a+b-c-d}{4} \end{amatrix}\text{.} \end{equation*}
    Since the last column will never contain a leading one, we conclude that the system is consistent for any choice of \(a,b,c,d\text{,}\) and thus that \(\Span S=M_{22}\text{,}\) as claimed. (A symbolic check of this computation is sketched just below.)
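The consistency argument in part 2 can be double-checked by machine. Here is a minimal sketch in Python with SymPy (an optional aid; any computer algebra system would serve), solving for the coefficients \(c_1,\dots,c_4\) with the entries \(a,b,c,d\) left as symbols.

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    c1, c2, c3, c4 = sp.symbols('c1 c2 c3 c4')

    A1 = sp.Matrix([[1, 1], [1, 1]])
    A2 = sp.Matrix([[1, -1], [0, 0]])
    A3 = sp.Matrix([[0, 0], [1, -1]])
    A4 = sp.Matrix([[1, 1], [-1, -1]])
    target = sp.Matrix([[a, b], [c, d]])

    # Entrywise equations for c1*A1 + c2*A2 + c3*A3 + c4*A4 = target.
    equations = list(c1*A1 + c2*A2 + c3*A3 + c4*A4 - target)

    # A unique solution in terms of a, b, c, d; for instance
    # c4 = a/4 + b/4 - c/4 - d/4, matching the row reduction above.
    print(sp.solve(equations, [c1, c2, c3, c4]))

Because a solution exists for every symbolic choice of \(a,b,c,d\text{,}\) the system is consistent for every target matrix, in agreement with the row reduction above.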

Subsection 4.2.2 Linear independence

As mentioned at the top, the notion of linear independence is precisely what we need to guarantee that a given spanning set has no “redundancies”.

Definition 4.2.10. Linear independence.

Let \(V\) be a vector space. A subset \(S\subseteq V\) is linearly independent if for any collection \(\boldv_1,\boldv_2,\dots, \boldv_n\) of distinct vectors of \(S\) (i.e., \(\boldv_i\ne \boldv_j\) for \(i\ne j\)), and any scalars \(c_1,c_2,\dots, c_n\in \R\text{,}\) the following implication holds:
\begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero \implies c_1=c_2=\cdots =c_n=0\text{.} \end{equation*}
A subset \(S\) is linearly dependent if it is not linearly independent: i.e., if we can find distinct vectors \(\boldv_1,\boldv_2,\dots, \boldv_n\in S\text{,}\) and scalars \(c_1, c_2,\dots, c_n\) with \(c_i\ne 0\) for some \(i\text{,}\) such that
\begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero\text{.} \end{equation*}

Remark 4.2.11. Linear independence.

Recalling the notion of trivial and nontrivial linear combinations from Definition 1.1.13, we can summarize Definition 4.2.10 in plain English as follows:
  • A set \(S\) is linearly independent if there is no nontrivial linear combination of distinct elements of \(S\) equal to the zero vector.
  • A set \(S\) is linearly dependent if there is a nontrivial linear combination of distinct elements of \(S\) equal to the zero vector.
As stated, our definition of linear independence is pleasingly general in that it places no restriction on the subset in question; in particular, the definition applies to both finite and infinite subsets of vector spaces. That said, one drawback to this definition is that in order to determine whether \(S\) is linearly independent, we must look at each finite subset of elements of \(S\) and determine for this collection whether or not there is a nontrivial linear combination equal to the zero vector. To do so directly would be quite time consuming. Thankfully, we will focus on finite sets \(S\text{,}\) and in this case it turns out that the only subset we need to consider is \(S\) itself.

Example 4.2.13. Elementary examples.

Let \(V\) be a vector space, and let \(S\) be a subset.
  1. If \(\boldzero\in S\text{,}\) then \(S\) is linearly dependent: indeed, we have the nontrivial linear combination \(1\, \boldzero=\boldzero\text{.}\)
  2. If \(S=\{\boldv\}\text{,}\) then \(S\) is linearly independent if and only if \(\boldv\ne \boldzero\text{.}\) The previous comment shows why \(\boldv\ne \boldzero\) is a necessary condition. Let’s see why it is sufficient.
    Suppose \(\boldv\ne\boldzero\text{,}\) and suppose we have \(c\boldv=\boldzero\text{.}\) By Theorem 1.1.9 we have \(c=0\) or \(\boldv=\boldzero\text{.}\) Since \(\boldv\ne \boldzero\text{,}\) we conclude \(c=0\text{.}\) This shows that the only linear combination of \(S\) yielding \(\boldzero\) is the trivial one.
  3. Suppose \(S=\{\boldv, \boldw\}\text{,}\) where \(\boldv\ne\boldw\text{.}\) If \(S\) is linearly dependent, then we have
    \begin{equation*} c\boldv+d\boldw=\boldzero\text{,} \end{equation*}
    where \(c\ne 0\) or \(d\ne 0\text{.}\) If \(c\ne 0\text{,}\) then we can solve
    \begin{equation*} \boldv=-\frac{d}{c}\boldw\text{.} \end{equation*}
    Similarly, if \(d\ne 0\text{,}\) then we have
    \begin{equation*} \boldw=-\frac{c}{d}\boldv\text{.} \end{equation*}
    In both cases, we see that one of the vectors is a scalar multiple of the other. Conversely, if one of the two vectors is a scalar multiple of the other, then it is easy to see that there is a nontrivial linear combination equal to \(\boldzero\text{:}\) e.g., if \(\boldv=c\boldw\text{,}\) then \(\boldv-c\boldw=\boldzero\text{.}\) We conclude that \(S\) is linearly dependent if and only if one of the vectors is a scalar multiple of the other, and linearly independent if and only if neither vector is a scalar multiple of the other.
The simple test in Example 4.2.13 for linear independence of a set of two vectors unfortunately does not extend to larger sets. For example, the set \(S=\{(1,1),(1,0), (0,1)\}\) is linearly dependent, since \((1,1)-(1,0)-(0,1)=(0,0)\text{,}\) and yet no element of \(S\) is a scalar multiple of any other element. What is true in these cases is that some element of \(S\) can be written as a linear combination of the others, as articulated in Remark 4.2.14.

Remark 4.2.14. Linear dependence and redundancy.

Let \(S=\{\boldv_1,\boldv_2, \dots, \boldv_n\}\) be a subset of the vector space \(V\text{,}\) where the \(\boldv_i\) are distinct. If \(n\geq 2\text{,}\) then \(S\) is linearly dependent if and only if we can express some element of \(S\) as a linear combination of the others: i.e., if and only if we have
\begin{equation} \boldv_i=\sum_{j\ne i}c_j\boldv_j\tag{4.8} \end{equation}
for some scalars \(c_j\in \R\text{.}\)
Indeed, assume we have a vector equation of the form (4.8) for some \(1\leq i\leq n\text{.}\) If \(\boldv_i=\boldzero\text{,}\) then \(S\) is automatically dependent. (See (1) from Example 4.2.13.) Otherwise the linear combination on the right side of (4.8) must be nontrivial, in which case
\begin{equation*} \boldv_i+\sum_{j\ne i}-c_j\boldv_j=\boldzero \end{equation*}
is a nontrivial linear combination of distinct elements equal to \(\boldzero\text{.}\) Thus \(S\) is linearly dependent.
Conversely, if \(S\) is linearly dependent, then there are scalars \(c_1,c_2,\dots, c_n\in \R\) such that
\begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldzero \end{equation*}
and \(c_i\ne 0\) for some \(1\leq i\leq n\text{,}\) in which case
\begin{equation*} \boldv_i=\sum_{j\ne i}-\frac{c_j}{c_i}\boldv_j\text{.} \end{equation*}
Using Theorem 4.2.12, to decide whether a finite set \(S\) is linearly independent, we need to determine whether there is a nontrivial linear combination of its elements equal to the zero vector. As described in Procedure 4.2.15, this boils down to a question about the solutions to a certain system of linear equations.
This is a fitting point to recall our Gaussian elimination mantra. As you can see, even as we move into more and more abstract realms of linear algebra (linear independence, span, etc.), Gaussian elimination remains our most important tool.
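To make the role of Gaussian elimination concrete, here is a minimal computational sketch (Python with SymPy; an optional aid, not part of the procedure itself, and the helper name is_linearly_independent is our own). It places the given vectors as the columns of a matrix and asks for the null space: a nonzero vector in the null space records the coefficients of a nontrivial linear combination of the columns equal to the zero vector.

    import sympy as sp

    def is_linearly_independent(vectors):
        """Decide linear independence of a finite list of vectors in R^n.

        The vectors become the columns of a matrix A; the set is linearly
        independent exactly when A*x = 0 has only the trivial solution,
        i.e. when the null space of A is zero.
        """
        A = sp.Matrix([list(v) for v in vectors]).T  # vectors as columns
        return len(A.nullspace()) == 0

    # The set {(1,1), (1,0), (0,1)} discussed earlier is dependent, since
    # (1,1) - (1,0) - (0,1) = (0,0).
    print(is_linearly_independent([(1, 1), (1, 0), (0, 1)]))  # False
    print(is_linearly_independent([(1, 0), (0, 1)]))          # True

The same idea applies to matrices in \(M_{mn}\) after listing their entries as a vector, which is essentially what the next example does by hand.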

Example 4.2.16. Linear independence.

For each subset \(S\) of the given vector space \(V\text{,}\) decide whether \(S\) is linearly independent.
  1. \(V=\R^3\text{,}\) \(S=\{(1,1,2),(1,0,1), (-2,1,-1)\}\)
  2. \(V=M_{22}\text{,}\) \(S=\{A_1, A_2, A_3\}\text{,}\) where
    \begin{equation*} A_1=\begin{bmatrix}3\amp 1\\ 2\amp -3 \end{bmatrix} , A_2= \begin{bmatrix}0\amp 4\\ 2\amp 0 \end{bmatrix} , A_3=\begin{bmatrix}-2\amp -2\\ -2\amp 2 \end{bmatrix}\text{.} \end{equation*}
Solution.
  1. We have
    \begin{equation*} a(1,1,2)+b(1,0,1)+c(-2,1,-1)=(0,0,0) \end{equation*}
    if and only if
    \begin{equation*} \begin{linsys}{3} a \amp +\amp b\amp -\amp 2c\amp =0\\ a \amp \amp \amp +\amp c\amp =0\\ 2a \amp +\amp b\amp -\amp c\amp =0\\ \end{linsys}\text{.} \end{equation*}
    After a little Gaussian elimination we see that \((a,b,c)=(1,-3,-1)\) is a nonzero solution to this system, and thus that
    \begin{equation*} (1,1,2)-3(1,0,1)-(-2,1,-1)=(0,0,0)\text{.} \end{equation*}
    Since there is a nontrivial linear combination of elements of \(S\) yielding the zero vector, we conclude \(S\) is linearly dependent.
  2. We have
    \begin{align*} a\begin{bmatrix}3\amp 1\\ 2\amp -3 \end{bmatrix} +b\begin{bmatrix}0\amp 4\\ 2\amp 0 \end{bmatrix} +c\begin{bmatrix}-2\amp -2\\ -2\amp 2 \end{bmatrix}= \begin{bmatrix}0\amp 0\\0\amp 0 \end{bmatrix} \amp \iff \begin{bmatrix}3a-2c\amp a+4b-2c\\ 2a+2b-2c\amp -3a+2c \end{bmatrix}=\begin{bmatrix}0\amp 0\\0\amp 0 \end{bmatrix}\\ \amp \\ \amp \iff \begin{linsys}{3} 3a\amp \amp \amp -\amp 2c\amp =\amp 0\\ a\amp +\amp 4b\amp -\amp 2c\amp =\amp 0\\ 2a\amp +\amp 2b \amp -\amp 2c \amp =\amp 0\\ -3a\amp \amp \amp +\amp 2c\amp =\amp 0 \end{linsys} \text{.} \end{align*}
    Row reduction reveals that this last linear system has a free variable, and hence that there are infinitely many solutions: e.g., \((a,b,c)=(2,1,3)\text{.}\) We conclude that \(S\) is linearly dependent. (Both parts of this example are verified symbolically in the sketch below.)
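Both computations can be confirmed with the null space idea sketched earlier; the following optional SymPy snippet does so, flattening each matrix in part 2 into the vector of its four entries.

    import sympy as sp

    # Part 1: the columns of A are the vectors of S.
    A = sp.Matrix([[1, 1, -2], [1, 0, 1], [2, 1, -1]])
    print(A.nullspace())  # one basis vector, proportional to (1, -3, -1)

    # Part 2: each 2x2 matrix contributes the column of its four entries.
    B = sp.Matrix([[3, 0, -2], [1, 4, -2], [2, 2, -2], [-3, 0, 2]])
    print(B.nullspace())  # one basis vector, proportional to (2, 1, 3)

In both cases the null space is nonzero, so each set is linearly dependent, matching the conclusions above.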

Subsection 4.2.3 Invertibility, span, and linear independence

Equipped with the concepts of span and linear independence, we can add some useful additional statements to our invertibility theorem.

Proof.

Statements 8–9 can each be shown to be equivalent to one of the previous statements of the invertibility theorem using the column method (Theorem 3.1.27) of matrix multiplication. Indeed, given \(\boldb\in \R^n\text{,}\) the matrix equation \(A\boldx=\boldb\) has a solution if and only if \(\boldb\) can be written as a linear combination of the columns of \(A\text{.}\) Thus the matrix equation \(A\boldx=\boldb\) has a solution for all \(\boldb\in \R^n\) if and only if every vector \(\boldb\in \R^n\) lies in the span of the columns of \(A\text{.}\) This proves statement 8 is equivalent to statement 3. Similarly, the matrix equation \(A\boldx=\boldzero\) has a nontrivial solution if and only if there is a nontrivial linear combination of the columns of \(A\) equal to \(\boldzero\text{:}\) that is, if and only if the columns of \(A\) are linearly dependent. Equivalently, \(A\boldx=\boldzero\) has only the trivial solution if and only if the columns of \(A\) are linearly independent. This proves statement 9 is equivalent to statement 4.
The corresponding statements about the rows of \(A\) are easily seen to be equivalent to statements 8–9 using the fact that \(A\) is invertible if and only if \(A^T\) is invertible.
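As a concrete illustration of these equivalences (an optional SymPy sketch, not part of the theorem), the \(3\times 3\) matrix whose columns are the linearly dependent vectors of Example 4.2.16 fails to row reduce to the identity, while the matrix built from the spanning set \(\{(1,1),(1,2)\}\) of Example 4.2.9 row reduces to the identity and hence is invertible.

    import sympy as sp

    # Columns are the linearly dependent set from Example 4.2.16, part 1.
    A = sp.Matrix([[1, 1, -2], [1, 0, 1], [2, 1, -1]])
    print(A.rref()[0])  # a row of zeros: A is not invertible, so its columns
                        # neither span R^3 nor form a linearly independent set

    # Columns are the set {(1,1), (1,2)} from Example 4.2.9, part 1.
    B = sp.Matrix([[1, 1], [1, 2]])
    print(B.rref()[0])  # the identity matrix: B is invertible, so its columns
                        # span R^2 and are linearly independent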

Exercises 4.2.4 Exercises

WeBWorK Exercises

1.
Let \(\mathbf{u}_4\) be a linear combination of \(\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3\rbrace\text{.}\)
Select the best statement.
  • We only know that \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\subseteq \mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4\rbrace\) .
  • There is no obvious relationship between \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\) and \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4 \rbrace\) .
  • \(\mathrm{span}\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace = \mathrm{span}\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4 \rbrace\) .
  • \(\mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace= \mathrm{span} \lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3, \mathbf{u}_4\rbrace\) when \(\mathbf{u}_4\) is a scalar multiple of one of \(\lbrace\mathbf{u}_1,\mathbf{u}_2,\mathbf{u}_3 \rbrace\text{.}\)
  • none of the above
Solution.
Since \(\mathbf{u}_4\) is a linear combination of \(\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace\text{,}\) we have \(\mathrm{span}\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3\right\rbrace=\mathrm{span}\left\lbrace{\bf u}_1,{\bf u}_2,{\bf u}_3,{\bf u}_4\right\rbrace\text{.}\)
2.
Let \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\) be a linearly independent set of vectors.
Select the best statement.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is also a linearly independent set of vectors unless \({\bf u}_4={\bf 0}\text{.}\)
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) could be a linearly independent or linearly dependent set of vectors depending on the vectors chosen.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly dependent set of vectors.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is also a linearly independent set of vectors unless \({\bf u}_4\) is a scalar multiple another vector in the set.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly independent set of vectors.
  • none of the above
Solution.
If \({\bf u}_4\) is a linear combination of \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\text{,}\) then \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is linearly dependent; otherwise \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is linearly independent. It could be either, depending on the vectors chosen.
3.
Let \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\) be a linearly dependent set of vectors.
Select the best statement.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is a linearly independent set of vectors unless \({\bf u}_4\) is a linear combination of other vectors in the set.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly independent set of vectors.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) could be a linearly independent or linearly dependent set of vectors depending on the vectors chosen.
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is a linearly independent set of vectors unless \({\bf u}_4={\bf 0}\text{.}\)
  • \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always a linearly dependent set of vectors.
  • none of the above
Solution.
Any set containing the linearly dependent set \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3}\rbrace\) is itself linearly dependent; in particular, \(\lbrace{{\bf u}_1, {\bf u}_2, {\bf u}_3, {\bf u}_4}\rbrace\) is always linearly dependent.
4.
Are the vectors \(\left[\begin{array}{c} -4\cr -3\cr -3 \end{array}\right]\text{,}\) \(\left[\begin{array}{c} 3\cr -1\cr -4 \end{array}\right]\) and \(\left[\begin{array}{c} -7\cr -15\cr -24 \end{array}\right]\) linearly independent?
If they are linearly dependent, find scalars that are not all zero such that the equation below is true. If they are linearly independent, find the only scalars that will make the equation below true.
\(\left[\begin{array}{c} -4\cr -3\cr -3 \end{array}\right] +\) \(\left[\begin{array}{c} 3\cr -1\cr -4 \end{array}\right] +\) \(\left[\begin{array}{c} -7\cr -15\cr -24 \end{array}\right] =\) \(\left[\begin{array}{c} 0\cr 0\cr 0 \end{array}\right]\text{.}\)
Answer 1.
\(\text{linearly dependent}\)
Answer 2.
\(-4;\,-3;\,1\)
5.
Let \(V\) be the vector space of symmetric \(2\times 2\) matrices and \(W\) be the subspace
\begin{equation*} W=\text{span} \lbrace \left[\begin{array}{cc} -5 \amp 5\cr 5 \amp 2 \end{array}\right],\left[\begin{array}{cc} 5 \amp -3\cr -3 \amp -3 \end{array}\right] \rbrace . \end{equation*}
a. Find a nonzero element \(X\) in \(W\text{.}\)
\(X =\) (2 × 2 array)
b. Find an element \(Y\) in \(V\) that is not in \(W\text{.}\)
\(Y =\) (2 × 2 array)

Exercise Group.

In each exercise, determine whether the given subset \(S\) of the vector space \(V\) is linearly independent. Justify your answer.
6.
\(V=\R^4\text{,}\) \(S=\{(3,8,7,-3), (1,5,3,-1), (2,-1,2,6), (4,2,6,4)\}\)
7.
\(V=M_{22}\text{,}\) \(S=\left\{\begin{amatrix}[rr]1\amp 1\\1\amp -3\end{amatrix}, \begin{amatrix}[rr]1\amp -1\\1\amp -1\end{amatrix}, \begin{amatrix}[rr]2\amp 1 \\-1\amp -2\end{amatrix}, \begin{amatrix}[rr]0\amp 1\\1\amp -2\end{amatrix} \right\}\)

8.

Let \(V=M_{22}\) and let \(S=\{ A_1, A_2, A_3\}\text{,}\) where
\begin{equation*} A_1=\begin{bmatrix}1\amp 0\\ 1\amp c \end{bmatrix}, A_2=\begin{bmatrix}-1\amp 0\\ c\amp 1 \end{bmatrix}, A_3=\begin{bmatrix}2\amp 0\\ 1\amp 3 \end{bmatrix}\text{.} \end{equation*}
Determine all values \(c\in \R\) for which \(S\) is linearly independent.

9.

Let \(V=M_{22}\text{,}\) and define \(A_1=\begin{bmatrix}1\amp 1\\1\amp 1 \end{bmatrix}\text{,}\) \(A_2=\begin{bmatrix}0\amp 1\\1\amp 0 \end{bmatrix}\text{,}\) \(A_3=\begin{bmatrix}1\amp 1\\ 1\amp 0 \end{bmatrix}\text{.}\)
  1. Compute \(W=\Span(\{A_1,A_2,A_3\})\text{,}\) identifying it as a certain familiar set of matrices.
  2. Decide whether \(S=\{A_1,A_2,A_3\}\) is independent.

10.

Let \(V\) be a vector space, and let \(S=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) be a subset of distinct vectors. Assume \(S\) is linearly independent. Show that any subset \(T\subseteq S\) is linearly independent.

11. Span, independence, and invertibility.

In this exercise we identify elements of \(V=\R^n\) with \(n\times 1\) column vectors.
Let \(S=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) be a subset of \(\R^n\text{,}\) and let \(A\) be the \(n\times n\) matrix whose \(j\)-th column is \(\boldv_j\text{:}\) i.e.,
\begin{equation*} A=\begin{bmatrix}\vert \amp \vert \amp \cdots \amp \vert\\ \boldv_1\amp \boldv_2\amp \cdots \amp \boldv_n\\ \vert \amp \vert \amp \cdots \amp \vert \end{bmatrix}\text{.} \end{equation*}
  1. Prove: \(\Span S=\R^n\) if and only if \(A\) is invertible.
  2. Prove: \(S\) is linearly independent if and only if \(A\) is invertible.
Hint.
Use the column method (Theorem 3.1.27) and the invertibility theorem (Theorem 3.5.27).