
Section 3.1 Real vector spaces

When discussing matrix algebra we saw that operations from real number arithmetic have natural analogues in the world of matrices. Furthermore, the act of comparing these two different algebraic systems brought to light many interesting features of matrix algebra.
Why stop at matrices? Are there other interesting algebraic systems that admit analogous operations? If so, to what degree do these systems agree with or differ from real number or matrix algebra?
A common technique in mathematics for such investigations is to distill the important properties of the motivating operations into a list of axioms, and then to prove statements that apply to any system that satisfies these axioms.
We now embark on just such an axiomatic approach. The notion of a vector space arises from focusing on just two operations from matrix algebra: matrix addition and matrix scalar multiplication. As we saw in Section 2.2, these two operations satisfy many useful properties: e.g., commutativity, associativity, distributivity, etc. Whereas earlier we showed directly that matrix algebra satisfies these properties, now we will come at things the other way: we record these various properties as a list of axioms, and declare any system that satisfies these axioms to be a vector space.
Once we’ve established the definition of a vector space, when we go on to investigate the properties enjoyed by vector spaces we make no assumptions beyond the fact that the basic axioms are satisfied. This approach comes off as somewhat abstract, but has the advantage that our conclusions now apply to any vector space you can think of. You don’t have to reinvent the wheel each time you stumble across a new vector space.

Subsection 3.1.1 Definition of a vector space

Definition 3.1.1. Vector space.

A (real) vector space is a set \(V\) together with two operations, scalar multiplication and vector addition, described in detail below.
Scalar multiplication
This operation takes as input any real number \(c\in R\) and any element \(\boldv\in V\text{,}\) and outputs another element of \(V\text{,}\) denoted \(c\boldv\text{.}\) We describe this operation using function notation as follows:
\begin{align*} \R\times V\amp \rightarrow V\\ (c,\boldv)\amp \mapsto c\boldv\text{.} \end{align*}
Vector addition
This operation takes as input any pair of elements \(\boldv, \boldw\in V\) and returns another element of \(V\text{,}\) denoted \(\boldv+\boldw\text{.}\) In function notation:
\begin{align*} V\times V\amp \rightarrow V\\ (\boldv,\boldw)\amp \mapsto \boldv+\boldw\text{.} \end{align*}
Furthermore, these two operations must satisfy the following list of axioms.
  1. Vector addition is commutative.
    For all \(\boldv, \boldw\in V\text{,}\) we have
    \begin{equation*} \boldv+\boldw=\boldw+\boldv\text{.} \end{equation*}
  2. Vector addition is associative.
    For all \(\boldu, \boldv, \boldw\in V\text{,}\) we have
    \begin{equation*} (\boldu+\boldv)+\boldw=\boldu+(\boldv+\boldw)\text{.} \end{equation*}
  3. Existence of additive identity.
    There is an element \(\boldzero\in V\) such that for all \(\boldv\in V\text{,}\) we have
    \begin{equation*} \boldzero+\boldv=\boldv+\boldzero=\boldv\text{.} \end{equation*}
    The element \(\boldzero\) is called the zero vector of \(V\).
  4. Existence of additive inverses.
    For all \(\boldv\in V\text{,}\) there is another element \(-\boldv\) satisfying
    \begin{equation*} -\boldv+\boldv=\boldv+(-\boldv)=\boldzero\text{.} \end{equation*}
    Given \(\boldv\in V\text{,}\) the element \(-\boldv\) is called the vector inverse of \(\boldv\).
  5. Distribution over vector addition.
    For all \(c\in \R\) and \(\boldv, \boldw\in V\text{,}\) we have
    \begin{equation*} c(\boldv+\boldw)=c\boldv+c\boldw\text{.} \end{equation*}
  6. Distribution over scalar addition.
    For all \(c, d\in \R\) and \(\boldv\in V\text{,}\) we have
    \begin{equation*} (c+d)\boldv=c\boldv+d\boldv\text{.} \end{equation*}
  7. Scalar multiplication is associative.
    For all \(c,d\in \R\) and all \(\boldv\in V\text{,}\) we have
    \begin{equation*} c(d\boldv)=(cd)\boldv\text{.} \end{equation*}
  8. Scalar multiplicative identity.
    For all \(\boldv\in V\text{,}\) we have
    \begin{equation*} 1\boldv=\boldv\text{.} \end{equation*}
We call the elements of a vector space vectors.
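The eight axioms can be spot-checked numerically for any candidate vector space. The following Python sketch is our own illustration (the function names `spot_check_axioms` and `tuple_eq` are not from the text): it takes proposed operations and sample data and tests each axiom on concrete inputs. Passing such checks does not prove the axioms hold, but any failure immediately disproves them.

```python
import math

def tuple_eq(v, w):
    """Float-tolerant equality test for tuple vectors."""
    return len(v) == len(w) and all(math.isclose(a, b) for a, b in zip(v, w))

def spot_check_axioms(add, smul, zero, neg, vecs, scalars, eq=tuple_eq):
    """Spot-check the eight vector space axioms on sample data.

    add(v, w)  -- proposed vector addition
    smul(c, v) -- proposed scalar multiplication
    zero       -- proposed zero vector
    neg(v)     -- proposed rule assigning vector inverses
    eq(v, w)   -- equality test for elements of V
    """
    for v in vecs:
        for w in vecs:
            assert eq(add(v, w), add(w, v))                      # (i) commutativity
            for u in vecs:
                assert eq(add(add(u, v), w), add(u, add(v, w)))  # (ii) associativity
        assert eq(add(zero, v), v)                               # (iii) additive identity
        assert eq(add(neg(v), v), zero)                          # (iv) additive inverses
        for c in scalars:
            for w in vecs:
                assert eq(smul(c, add(v, w)), add(smul(c, v), smul(c, w)))  # (v)
            for d in scalars:
                assert eq(smul(c + d, v), add(smul(c, v), smul(d, v)))      # (vi)
                assert eq(smul(c, smul(d, v)), smul(c * d, v))              # (vii)
        assert eq(smul(1, v), v)                                 # (viii) scalar identity
    return True
```

For example, the usual entry-wise operations on \(\R^2\) pass all eight checks.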

Remark 3.1.2.

What’s the deal with the ‘real’ modifier? The reals are one example of a type of number system called a field. Other examples of fields are given by the complex numbers (\(\C\)) and the rational numbers (\(\Q\)). If \(K\) is a field, and if we replace each mention of \(\R\) in Definition 3.1.1 with a mention of \(K\text{,}\) then we are left with the definition of a vector space over \(K\text{.}\) Setting \(K=\C\text{,}\) for example, we get the definition of a complex vector space.
In our treatment of linear algebra we will largely focus on real vector spaces, and as such will often drop this modifier: hence the parentheses in the definition.

Subsection 3.1.2 Examples

When introducing a new vector space there are many details in Definition 3.1.1 that must be verified. To help organize this task, follow this checklist:
  1. Make explicit the underlying set \(V\) of the vector space.
  2. Make explicit what the scalar multiplication and vector addition operations are.
  3. Identify an element of \(V\) that serves as the zero vector and indicate the rule that assigns vector inverses to elements of \(V\text{.}\)
  4. Show that the two vector operations and our choice of zero vector and vector inverses satisfy the axioms of Definition 3.1.1.
Think of items (1)-(3) of our checklist as official declarations about the makeup of our vector space: “The underlying set shall be as stated”; “We declare the vector operations thusly”; “The zero vector shall be this element here, and vector inverses shall be assigned in this manner”. Item (4) is where we get down to the nitty gritty of showing that our proposed vector space structure articulated in (1)-(3) does indeed satisfy all the necessary properties.
In each of the examples below we carefully lay out the details of items (1)-(3) while often leaving much of the work of item (4) to you. You will meet these vector spaces frequently throughout the rest of your life. Each time you do, it will be helpful for orientation purposes to mentally run through items (1)-(3). Ask yourself: What is the underlying set? What are the vector operations? What acts as the zero vector, and how do I assign vector inverses?

Example 3.1.3. Vector space of \(m\times n\) matrices.

Underlying set.
The vector space of \(m\times n\) matrices, denoted \(M_{mn}\text{,}\) is the set of all \(m\times n\) matrices: i.e.,
\begin{equation*} M_{mn}=\left\{ A=[a_{ij}]_{m\times n}\colon a_{ij}\in \R\right\}\text{.} \end{equation*}
Vector operations.
Scalar multiplication and vector addition in \(M_{mn}\) are defined as matrix scalar multiplication and matrix addition, respectively.
Zero vector and vector inverses.
The zero vector of \(M_{mn}\) is the \(m\times n\) zero matrix: i.e., \(\boldzero=\boldzero_{m\times n}\text{.}\)
Given an element \(A=[a_{ij}]\in M_{mn}\text{,}\) its vector inverse is the matrix additive inverse \(-A=[-a_{ij}]\text{.}\)
Verification of axioms.
We showed in Theorem 2.2.1 that matrix scalar multiplication and matrix addition satisfy axioms (i), (ii), (v)-(viii). Theorem 2.2.4 implies that our choice of zero vector (\(\boldzero_{m\times n}\)) and vector inverses (\(-A\)) satisfies axioms (iii)-(iv).

Example 3.1.4. Vector space of real \(n\)-tuples.

Underlying set.
The vector space of real \(n\)-tuples, denoted \(\R^n\text{,}\) is the set of all \(n\)-tuples with real entries: i.e.,
\begin{equation*} \R^n=\{ (a_1,a_2,\dots, a_n) \colon a_i\in \R \}\text{.} \end{equation*}
Vector operations.
The vector operations on \(n\)-tuples are defined entry-wise.
Scalar multiplication. Given \(c\in \R\) and \(\boldv=(a_1,a_2,\dots, a_n)\in \R^n\text{,}\) we define
\begin{equation*} c\boldv=(ca_1,ca_2,\dots, ca_n)\text{.} \end{equation*}
Vector addition. Given \(\boldv=(a_1,a_2,\dots, a_n)\) and \(\boldw=(b_1,b_2,\dots, b_n)\text{,}\) we define
\begin{equation*} \boldv+\boldw=(a_1+b_1, a_2+b_2,\dots, a_n+b_n)\text{.} \end{equation*}
Zero vector and vector inverses.
The zero vector of \(\R^n\) is the \(n\)-tuple of all zeros: i.e., \(\boldzero=(0,0,\dots, 0)\text{.}\)
Given a vector \(\boldv=(a_1,a_2,\dots, a_n)\text{,}\) we have \(-\boldv=(-a_1,-a_2,\dots, -a_n)\text{.}\)
Verification of axioms.
It is clear that structurally speaking \(\R^n\) behaves exactly like \(M_{1n}\text{,}\) the vector space of \(1\times n\) row vectors: we have essentially just replaced brackets with parentheses. As such it follows from the previous example that \(\R^n\text{,}\) along with the given operations, constitutes a vector space.
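The entry-wise operations on \(\R^n\) translate directly into code. The following sketch is our own illustration (function names are ours, not the text's), representing \(n\)-tuples as Python tuples:

```python
def vec_add(v, w):
    """Entry-wise addition of two n-tuples: vector addition in R^n."""
    assert len(v) == len(w), "vectors must have the same length"
    return tuple(a + b for a, b in zip(v, w))

def scalar_mult(c, v):
    """Entry-wise scaling of an n-tuple: scalar multiplication in R^n."""
    return tuple(c * a for a in v)

def zero_vec(n):
    """The zero vector of R^n: the n-tuple of all zeros."""
    return (0,) * n

def vec_neg(v):
    """The vector inverse of v, obtained by negating each entry."""
    return scalar_mult(-1, v)
```

For instance, `vec_add((1, 2, 3), (4, 5, 6))` returns `(5, 7, 9)`, and adding a vector to its inverse returns the zero vector.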

Remark 3.1.5. Visualizing \(\R^n\text{:}\) points and arrows.

Fix \(n\in\{2,3\}\text{.}\) Once we choose a coordinate system for \(\R^n\) (complete with origin and coordinate axes), we can visually represent an element \((a_1,a_2,\dots, a_n)\) of \(\R^n\) either as a point \(P=(a_1,a_2,\dots, a_n)\) or as an arrow (or directed line segment) \(\overrightarrow{QR}\) that begins at a point \(Q=(b_1,b_2,\dots, b_n)\) of our choosing and ends at the point \(R=(a_1+b_1,a_2+b_2,\dots, a_n+b_n)\text{.}\) When we choose the initial point to be the origin \(O=(0,0,\dots, 0)\text{,}\) the corresponding arrow is just \(\overrightarrow{OP}\text{,}\) called the position vector of \(P=(a_1,a_2,\dots, a_n)\text{.}\) Figure 3.1.6 illustrates a variety of visual representations of the element \((1,2)\) of \(\R^2\text{.}\)
Figure 3.1.6. Point and arrow representations of \((1,2)\)
As a general rule of thumb, when trying to visualize subsets of \(\R^n\) (e.g., lines and planes), it helps to think of \(n\)-tuples as points; and when trying to visualize vector arithmetic in \(\R^n\text{,}\) it helps to think of \(n\)-tuples as arrows. Indeed, when using the arrow representation of \(n\)-tuples, vector addition can be visualized using the familiar “tip to tail” method; and vector scalar multiplication can be understood as scaling arrows. Figure 3.1.7 summarizes these visualization techniques in the case \(n=3\text{.}\)
Figure 3.1.7. Visualizing vector arithmetic between \(\boldv=\overrightarrow{OP}\) and \(\boldw=\overrightarrow{OQ}\)
Why introduce a new vector space, \(\R^n\text{,}\) if it is essentially the same thing as \(M_{1n}\text{,}\) or even \(M_{n1}\) for that matter? Recall that a matrix is not simply an ordered sequence: it is an ordered sequence arranged in a very particular way. This subtlety is baked into the very definition of matrix equality, and allows us to say that
\begin{equation*} \begin{amatrix}[rr]1\amp 2 \end{amatrix}\ne \begin{amatrix}[r]1\\ 2 \end{amatrix}\text{.} \end{equation*}
There are situations, however, where we don’t need this extra layer of structure, where we want to treat an ordered sequence simply as an ordered sequence. In such situations tuples are preferred to row or column vectors.
That said, the close connection between linear systems and matrix equations makes it very convenient to treat an \(n\)-tuple \((c_1,c_2,\dots, c_n)\) as if it were the column vector
\begin{equation*} \colvec{c_1\\ c_2\\ \vdots \\ c_n}\text{.} \end{equation*}
This conflation is so convenient, in fact, that we will simply declare it to be true by fiat!
We now continue our catalog of vector spaces by moving on to more exotic examples, starting with the zero (or trivial) vector space.

Example 3.1.9. Zero vector space.

Underlying set.
A zero (or trivial) vector space is a set containing exactly one element: i.e., \(V=\{\boldv\}\text{.}\)
Vector operations.
Since \(V=\{\boldv\}\) contains only one element we have no real choice in defining our vector operations.
Scalar multiplication. Define \(c\boldv=\boldv\) for any \(c\in \R\) and the unique element \(\boldv\in V\text{.}\)
Vector addition. Define \(\boldv+\boldv=\boldv\) for the unique element \(\boldv\in V\text{.}\)
Zero vector and vector inverses.
We declare \(\boldzero=\boldv\text{.}\) Accordingly we will write \(V=\{\boldzero\}\) from now on.
We declare \(-\boldv=\boldv\text{.}\)
Verification of axioms.
It is clear that \(V=\{\boldzero\}\) satisfies the axioms of Definition 3.1.1: for axioms (i)-(ii) and (v)-(viii) both sides of the desired equality are equal to \(\boldzero\text{;}\) axioms (iii)-(iv) boil down to the fact that \(\boldv+\boldv=\boldv\) by definition.

Example 3.1.10. The vector space of infinite real sequences.

Underlying set.
The vector space of infinite real sequences, denoted \(\R^\infty\text{,}\) is the set of all infinite sequences \((a_i)_{i=1}^\infty=(a_1,a_2,\dots)\text{,}\) where \(a_i\in \R\) for all \(i\text{:}\) i.e.,
\begin{equation*} \R^\infty=\{ (a_i)_{i=1}^\infty \colon a_i\in \R \}\text{.} \end{equation*}
Vector operations.
As in \(\R^n\) we define our vector operations on infinite sequences entry-wise.
Scalar multiplication. Given \(c\in \R\) and \(\boldv=(a_i)_{i=1}^\infty\in \R^\infty\text{,}\) we define
\begin{equation*} c\boldv=(ca_{i})_{i=1}^\infty=(ca_1,ca_2,\dots)\text{.} \end{equation*}
Vector addition. Given \(\boldv=(a_i)_{i=1}^\infty\) and \(\boldw=(b_i)_{i=1}^\infty\text{,}\) we define
\begin{equation*} \boldv+\boldw=(a_i+b_i)_{i=1}^\infty=(a_1+b_1, a_2+b_2,\dots)\text{.} \end{equation*}
Zero vector and vector inverses.
The zero vector of \(\R^\infty\) is the sequence of all zeros: i.e., \(\boldzero=(0,0,\dots)\text{.}\)
Given a vector \(\boldv=(a_i)_{i=1}^\infty\text{,}\) we let \(-\boldv=(-a_i)_{i=1}^\infty=(-a_1,-a_2,\dots )\text{.}\)
Verification of axioms.
See Exercise 3.1.4.4. Observe that since the vector operations are defined entry-wise, the vector arithmetic in \(\R^\infty\) is not so very different from that of \(\R^n\text{.}\)
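An infinite sequence cannot be stored outright, but it can be modeled as a rule \(i\mapsto a_i\). In that representation the entry-wise operations become the following (our illustration; names are ours, not the text's):

```python
def seq_add(a, b):
    """Entry-wise sum of two sequences given as functions i -> a_i (i >= 1)."""
    return lambda i: a(i) + b(i)

def seq_scale(c, a):
    """Entry-wise scalar multiple of a sequence."""
    return lambda i: c * a(i)

# The zero vector of R^infinity, and the rule assigning vector inverses:
zero_seq = lambda i: 0

def seq_neg(a):
    return lambda i: -a(i)
```

For example, if `a` is the sequence \((1,2,3,\dots)\) and `b` is \((1,4,9,\dots)\text{,}\) then `seq_add(a, b)` represents \((2,6,12,\dots)\text{.}\)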

Example 3.1.11. Real-valued functions.

Underlying set.
Let \(I\) be an interval in the real line. The vector space of functions from \(I\) to \(\R\), denoted \(F(I,\R)\text{,}\) is the set of all real-valued functions \(f\colon I\rightarrow \R\text{:}\) i.e., the set of all functions with domain \(I\) and codomain \(\R\text{.}\)
Vector operations.
The vector operations on \(F(I,\R)\) defined below are generalizations of operations you may have seen before when learning about function transformations.
Scalar multiplication. Given \(c\in \R\) and a real-valued function \(f\colon I\rightarrow \R\text{,}\) we let \(cf\) be the function defined as
\begin{equation*} (cf)(x)=c(f(x)) \text{ for all } x\in I\text{.} \end{equation*}
Vector addition. Given real-valued functions \(f\) and \(g\) with domain \(I\text{,}\) we let \(f+g\) be the function defined as
\begin{equation*} (f+g)(x)=f(x)+g(x) \text{ for all } x\in I\text{.} \end{equation*}
Zero vector and vector inverses.
The zero vector of \(F(I,\R)\) is the constant function \(0_I\) that assigns a value of 0 to all elements of \(I\text{:}\) i.e., \(0_I(x)=0\) for all \(x\in I\text{.}\)
Given a function \(f\in F(I,\R)\text{,}\) its vector inverse is the function \(-f\) defined as
\begin{equation*} (-f)(x)=-f(x) \text{ for all } x\in I\text{.} \end{equation*}
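These pointwise definitions carry over directly into code, with the functions themselves playing the role of vectors. A sketch of ours (the names `f_add`, `f_scale`, etc. are not from the text):

```python
def f_add(f, g):
    """Pointwise sum: the vector addition of F(I, R)."""
    return lambda x: f(x) + g(x)

def f_scale(c, f):
    """Pointwise scaling: the scalar multiplication of F(I, R)."""
    return lambda x: c * f(x)

zero_fn = lambda x: 0      # the zero vector 0_I

def f_neg(f):
    """The vector inverse -f, defined pointwise."""
    return lambda x: -f(x)
```

With \(f(x)=x^2+1\) and \(g(x)=2x\text{,}\) the vector \(f+g\) is the function \(x\mapsto x^2+2x+1\text{,}\) and \(f+(-f)\) agrees with the zero function at every input.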

Remark 3.1.12.

Take a moment to let the exotic quality of this example sink in. The things we are calling vectors in this case are entire functions!
Consider \(F(\R,\R)\text{.}\) A vector of \(F(\R,\R)\) is a function \(f\colon \R\rightarrow \R\text{:}\) a rule that assigns to any input \(x\in \R\) a unique output \(y\in \R\text{.}\) Thus the functions \(f\) and \(g\) defined as \(f(x)=x^2+1\) and \(g(x)=\sin x-x\) are both vectors of \(F(\R,\R)\text{,}\) as is any function given by a formula involving familiar mathematical functions and operations (as long as the formula is defined for all \(x\in \R\)). That’s a lot of vectors! And yet we are only beginning to scratch the surface, since a function of \(F(\R,\R)\) need not be given by a nice formula; it simply has to be a well-defined rule. For example, the function \(h\) defined as
\begin{equation*} h(x)=\begin{cases} 1\amp \text{if } x \text{ is rational}\\ 0\amp \text{if } x \text{ is not rational} \end{cases} \end{equation*}
is also an element of \(F(\R,\R)\text{.}\)
Hopefully this discussion gives some indication of how a vector space like \(F(\R,\R)\) is in some sense much larger than spaces like \(\R^n\) or \(M_{mn}\text{,}\) whose general elements can be described in a finite manner. This vague intuition can be made precise with the notion of the dimension of a vector space, which we develop in Section 3.7.
We end with an example that illustrates how we can define the vector operations to be anything we like, as long as they satisfy the axioms of Definition 3.1.1. In this case scalar multiplication will be defined as real number exponentiation, and vector addition will be defined as real number multiplication.

Example 3.1.13. Vector space of positive real numbers.

Underlying set.
The vector space of positive real numbers, denoted \(\R_{>0}\text{,}\) is defined as
\begin{equation*} \R_{>0}=\{x\in \R\colon x>0\}\text{.} \end{equation*}
Vector operations.
Scalar multiplication is defined via exponentiation and vector addition is defined as multiplication.
Scalar multiplication. Given \(c\in \R\) and \(\boldv=a\in \R_{>0}\) we define
\begin{equation*} c\boldv=a^c\text{.} \end{equation*}
Vector addition. Given \(\boldv=a, \boldw=b\in \R_{>0}\text{,}\) we define
\begin{equation*} \boldv+\boldw=ab\text{.} \end{equation*}
Zero vector and vector inverses.
The zero vector of \(\R_{>0}\) is the number 1: i.e., we have \(\boldzero=1\) in the vector space \(\R_{>0}\text{.}\)
Given \(\boldv=a\in \R_{>0}\) the vector inverse is defined as
\begin{equation*} -\boldv=\frac{1}{a}\text{.} \end{equation*}
Verification of axioms.
Exercise. We point out, however, that in this case the fact that the operations are actually well-defined should be justified. This is where the positivity of elements of \(\R_{>0}\) comes into play: since \(\boldv=a\) is a positive number, the power \(a^c\) is defined for any \(c\in \R\) and is again positive. Thus \(c\boldv=a^c\) is indeed an element of \(\R_{>0}\text{.}\) Similarly, if \(\boldv=a\) and \(\boldw=b\) are both positive numbers, then so is \(\boldv+\boldw=ab\text{.}\)
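The exotic operations of \(\R_{>0}\) can be spot-checked numerically. For instance, axiom (v) holds because \((ab)^c=a^cb^c\text{.}\) A sketch of ours (names are not from the text):

```python
import math

def pos_scale(c, a):
    """Scalar multiplication in R_{>0}: the scalar multiple of a by c is a**c."""
    assert a > 0, "elements of R_{>0} must be positive"
    return a ** c

def pos_add(a, b):
    """Vector addition in R_{>0}: the 'sum' of a and b is the product ab."""
    assert a > 0 and b > 0, "elements of R_{>0} must be positive"
    return a * b

POS_ZERO = 1.0            # the zero vector of R_{>0}

def pos_neg(a):
    """The vector inverse of a in R_{>0} is its reciprocal."""
    return 1 / a

# Axiom (v), c(v + w) = cv + cw, becomes (a*b)**c == (a**c) * (b**c).
```

Note how the positivity assertions reflect the well-definedness concern above: `a ** c` is only guaranteed to be a (positive) real number because `a > 0`.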
The notion of a linear combination of matrices (Definition 2.1.13) generalizes easily to any vector space, and will be an important concept in the further development of our theory.

Definition 3.1.14. Linear combination of vectors.

Let \(V\) be a vector space. An expression of the form
\begin{equation*} c_1\boldv_1+c_2\boldv_2+\cdots +c_r\boldv_r\text{,} \end{equation*}
where \(c_i\in\R\) and \(\boldv_i\in V\) for all \(i\text{,}\) is called a linear combination of the vectors \(\boldv_1, \boldv_2, \dots, \boldv_r\text{.}\) The scalars \(c_i\) are called the coefficients of the linear combination.

Example 3.1.15. Vector linear combination.

Let \(V=\R_{>0}\text{.}\) Given the vectors \(\boldv=2\) and \(\boldw=\frac{1}{2}\text{,}\) compute the linear combination
\begin{equation*} 3\boldv+(-1/5)\boldw\text{.} \end{equation*}
Solution.
By definition of scalar multiplication in \(\R_{>0}\) (Example 3.1.13) we have
\begin{equation*} 3\boldv=2^3=8 \text{ and }(-1/5)\boldw=(1/2)^{-1/5}=\sqrt[5]{2}\text{.} \end{equation*}
Next, since vector addition in \(\R_{>0}\) is defined as real number multiplication (Example 3.1.13), we conclude
\begin{equation*} 3\boldv+(-1/5)\boldw=8\sqrt[5]{2}\text{.} \end{equation*}
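The computation above can be double-checked numerically, using exponentiation for scalar multiplication and multiplication for vector addition as in Example 3.1.13. The helper below is our own sketch (the name `lin_combo_pos` is not from the text):

```python
import math

def lin_combo_pos(coeffs, vecs):
    """Evaluate the linear combination c1*v1 + ... + cr*vr in R_{>0},
    where scalar multiplication is exponentiation and addition is
    real number multiplication."""
    result = 1.0                  # the zero vector of R_{>0}
    for c, v in zip(coeffs, vecs):
        result *= v ** c          # 'add' the scalar multiple c*v = v**c
    return result

# 3v + (-1/5)w with v = 2 and w = 1/2: equals 2**3 * (1/2)**(-1/5) = 8 * 2**(1/5)
value = lin_combo_pos([3, -1/5], [2, 1/2])
```

The result agrees with the answer \(8\sqrt[5]{2}\approx 9.19\) obtained above.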

Subsection 3.1.3 General properties

When proving a general fact about vector spaces we can only invoke the defining axioms; we cannot assume the vectors of the space assume any particular form. For example, we cannot assume vectors of \(V\) are \(n\)-tuples, or matrices, etc. We end with an example of such an axiomatic proof.
We prove statement (1) of Theorem 3.1.16, which asserts that \(0\boldv=\boldzero\) for all \(\boldv\in V\text{,}\) leaving statements (2)-(4) as an exercise.
First observe that \(0\boldv=(0+0)\boldv\text{,}\) since \(0=0+0\text{.}\)
By Axiom (vi) we have \((0+0)\boldv=0\boldv+0\boldv\text{.}\) Thus \(0\boldv=0\boldv+0\boldv\text{.}\)
Now add \(-0\boldv\text{,}\) the vector inverse of \(0\boldv\text{,}\) to both sides of the last equation:
\begin{equation*} -0\boldv+0\boldv=-0\boldv+(0\boldv+0\boldv)\text{.} \end{equation*}
Now simplify this equation step by step using the axioms:
\begin{align*} -0\boldv+0\boldv=-0\boldv+(0\boldv+0\boldv)\amp\implies \boldzero=(-0\boldv+0\boldv)+0\boldv \amp (\text{Axiom (iv) and Axiom (ii)}) \\ \amp\implies \boldzero=\boldzero+0\boldv \amp (\text{Axiom (iv)})\\ \amp\implies \boldzero=0\boldv \amp (\text{Axiom (iii)})\text{.} \end{align*}

Exercises 3.1.4 Exercises

WeBWorK Exercises

1.
Let \(V={\mathbb R}\text{.}\) For \(u,v \in V\) and \(a\in{\mathbb R}\) define vector addition by \(u \boxplus v := u+v-3\) and scalar multiplication by \(a \boxdot u := au-3a+3\text{.}\) It can be shown that \((V,\boxplus,\boxdot)\) is a vector space over the scalar field \(\mathbb R\text{.}\) Find the following:
the sum:
\(4\boxplus 7 =\)
the scalar multiple:
\(0\boxdot 4 =\)
the zero vector:
\(\underline{0}_V =\)
the additive inverse of \(x\text{:}\)
\(\boxminus x=\)
Answer 1.
\(8\)
Answer 2.
\(3\)
Answer 3.
\(3\)
Answer 4.
\(6-x\)
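The stated answers can be verified by coding the exotic operations directly. This check is ours, not part of the exercise (the names `box_plus`, `box_dot`, etc. are our own):

```python
def box_plus(u, v):
    """Vector addition on V = R: u [+] v = u + v - 3."""
    return u + v - 3

def box_dot(a, u):
    """Scalar multiplication on V = R: a [.] u = a*u - 3a + 3."""
    return a * u - 3 * a + 3

ZERO_V = 3                 # since box_plus(3, v) = 3 + v - 3 = v for all v

def box_minus(x):
    """The additive inverse of x: box_plus(6 - x, x) = 3 = ZERO_V."""
    return 6 - x
```

Evaluating `box_plus(4, 7)` and `box_dot(0, 4)` reproduces the answers 8 and 3 above.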
2.
Let \(V={\mathbb R}^2\text{.}\) For \((u_1,u_2),(v_1,v_2) \in V\) and \(a\in{\mathbb R}\) define vector addition by \((u_1,u_2) \boxplus (v_1,v_2) := (u_1+v_1-3,u_2+v_2 + 1)\) and scalar multiplication by \(a \boxdot (u_1,u_2) := (au_1-3a+3,au_2 + a - 1)\text{.}\) It can be shown that \((V,\boxplus,\boxdot)\) is a vector space over the scalar field \(\mathbb R\text{.}\) Find the following:
the sum:
\((-4,5)\boxplus (-5,-6) =\)(,)
the scalar multiple:
\(-2\boxdot (-4,5) =\)(,)
the zero vector:
\(\underline{0}_V =\)(,)
the additive inverse of \(( x,y)\text{:}\)
\(\boxminus (x,y)=\)(,)
Answer 1.
\(-12\)
Answer 2.
\(0\)
Answer 3.
\(17\)
Answer 4.
\(-13\)
Answer 5.
\(3\)
Answer 6.
\(-1\)
Answer 7.
\(6-x\)
Answer 8.
\(-\left(2+y\right)\)
3.
Let \(V=(-8,\infty)\text{.}\) For \(u,v \in V\) and \(a\in{\mathbb R}\) define vector addition by \(u \boxplus v := uv + 8(u+v)+56\) and scalar multiplication by \(a \boxdot u := (u + 8)^a - 8\text{.}\) It can be shown that \((V,\boxplus,\boxdot)\) is a vector space over the scalar field \(\mathbb R\text{.}\) Find the following:
the sum:
\(1\boxplus -1 =\)
the scalar multiple:
\(8\boxdot 1 =\)
the additive inverse of \(1\text{:}\)
\(\boxminus 1=\)
the zero vector:
\(\underline{0}_V =\)
the additive inverse of \(x\text{:}\)
\(\boxminus x=\)
Answer 1.
\(55\)
Answer 2.
\(9^8-8=43046713\)
Answer 3.
\(\frac{1}{9}-8=-\frac{71}{9}\approx -7.88889\)
Answer 4.
\(-7\)
Answer 5.
\(\frac{1}{x+8}-8\)
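The operations in this exercise can likewise be coded and checked (our verification, with names of our own choosing):

```python
import math

def box_plus(u, v):
    """Vector addition on V = (-8, oo): u [+] v = uv + 8(u + v) + 56."""
    return u * v + 8 * (u + v) + 56

def box_dot(a, u):
    """Scalar multiplication on V = (-8, oo): a [.] u = (u + 8)**a - 8."""
    return (u + 8) ** a - 8

ZERO_V = -7                      # solves box_plus(z, u) = u for all u in V

def box_minus(x):
    """The additive inverse of x: solves box_plus(y, x) = ZERO_V."""
    return 1 / (x + 8) - 8
```

Evaluating `box_plus(1, -1)` reproduces the answer 55, and `box_dot(8, 1)` gives \(9^8-8\text{.}\)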

4.

Verify that \(\R^\infty\) along with the vector operations defined as in Example 3.1.10 satisfies the axioms of a vector space.

5.

Let \(I\) be an interval in the real line. Verify that \(F(I,\R)\) along with the vector operations defined as in Example 3.1.11 satisfies the axioms of a vector space.

6.

Verify that \(\R_{>0}=\{a\in \R\colon a>0\}=(0,\infty)\text{,}\) along with the proposed vector operations
\begin{align*} c\odot a\amp=a^c \\ a\oplus b \amp =ab \text{,} \end{align*}
satisfies the defining axioms of a vector space.
Note: we use the funny symbols \(\odot\) and \(\oplus\) for scalar multiplication and vector addition to prevent confusion between the vector operations of \(\R_{>0}\) and real number arithmetic operations.

Failed vector spaces.

In each exercise below, the provided set, along with proposed vector operations, does not constitute a vector space. Identify all details of the vector space definition that fail to be satisfied. In addition to checking the axioms, you should also ask whether the proposed vector operations are well-defined. Provide explicit counterexamples for each failed property.
7.
Let \(V=\R^n\text{.}\) Define vector addition on \(V\) to be the usual \(n\)-tuple vector addition, but define scalar multiplication as
\begin{equation*} c(a_1,a_2,\dots ,a_n)=(c^2a_1,c^2a_2,\dots, c^2a_n)\text{.} \end{equation*}
8.
Let \(V=\R^2\text{.}\) Define vector addition on \(V\) to be the usual vector addition on \(\R^2\text{,}\) but define scalar multiplication as
\begin{equation*} c(a_1,a_2)=(ca_1,0)\text{.} \end{equation*}
9.
Let
\begin{equation*} V=\{A\in M_{nn}\colon A \text{ is invertible}\}\text{.} \end{equation*}
Define scalar multiplication and vector addition to be the usual matrix scalar multiplication and matrix addition.

Verifying vector space axioms.

In each exercise below the provided set, along with proposed vector operations, does constitute a vector space. Verify the indicated axioms.
10.
Let \(x\) be a variable. Define \(V=\{ax+1\colon a\in\R\}\text{,}\) the set of all linear polynomials \(f(x)=ax+1\) with constant coefficient equal to one. Define the vector operations as follows:
\begin{align*} c(ax+1) \amp= cax+1 \\ (ax+1)+(bx+1)\amp=(ax+1)(bx+1)-abx^2 \text{.} \end{align*}
Verify axioms (iii)-(vi).
11.
Let \(V=\{(a,b)\in\R^2\colon a>0, \ b\lt 0\}\text{:}\) i.e., \(V\) is the set of pairs whose first entry is positive and whose second entry is negative. Define the vector operations as follows:
\begin{align*} c(a, b) \amp= (a^c, -\lvert b\rvert^c) \\ (a_1,b_1)+(a_2,b_2)\amp=(a_1a_2,-b_1b_2) \text{.} \end{align*}
Verify axioms (iii)-(v), and axiom (viii).

12.

Prove statements (2)-(4) of Theorem 3.1.16. When treating a specific part you may assume the results of any part that has already been proven, including statement (1).

13.

Let \(V\) be a vector space.
  1. Show that the zero vector of \(V\) is unique: i.e., show that if \(\boldw\in V\) satisfies \(\boldw+\boldv=\boldv\) for all \(\boldv\in V\text{,}\) then \(\boldw=\boldzero\text{.}\)
  2. Fix \(\boldv\in V\text{.}\) Show that the vector inverse of \(\boldv\) is unique: i.e., show that if \(\boldw+\boldv=\boldzero\text{,}\) then \(\boldw=-\boldv\text{.}\)
Thus we may speak of the zero vector of a vector space, and the vector inverse of a vector \(\boldv\text{.}\)

14.

Let \(V\) be a vector space. Prove:
\begin{equation*} \boldu + \boldw = \boldv + \boldw \iff \boldu=\boldv\text{.} \end{equation*}

15.

Let \(V\) be a vector space. Prove that either \(V=\{\boldzero\}\) (i.e., \(V\) is the zero space) or \(V\) is infinite. In other words, a vector space contains either exactly one element or infinitely many elements.
Hint.
Assume \(V\) contains a nonzero vector \(\boldv\ne\boldzero\text{.}\) Show that if \(c\ne d\text{,}\) then \(c\boldv\ne d\boldv\text{.}\) You may assume the results of Theorem 3.1.16.