
Euclidean vector spaces

Section 1.1 Vector space structure of \(\R^n\)

What exactly is a Euclidean vector space? Getting ahead of ourselves somewhat, it is a finite-dimensional vector space over the real numbers along with an inner product. We will eventually give precise meaning to all of these notions, starting in this section with vector spaces. We will also introduce what will be for us essentially the only example of a Euclidean vector space: namely, \(\R^n\) together with its standard vector operations and the dot product. This might strike the reader as a somewhat overly restricted focus: if there are other examples of Euclidean spaces out there, why not invite them in? As it turns out, an \(\R^n\)-centered treatment of linear algebra will give us plenty of rich material to deal with in this introductory course. Furthermore, although there are other “exotic” examples of Euclidean vector spaces, they are all structurally equivalent to \(\R^n\) (for some positive integer \(n\)) with its usual vector operations and the dot product. More precisely, getting ahead of ourselves once again, any two \(n\)-dimensional Euclidean vector spaces are isomorphic to one another. (We will make sense of this notion also, in due time.) As a result, the theory of linear algebra and inner product spaces, as articulated in the specific context of \(\R^n\text{,}\) can be seen as entirely characteristic of the general theory.

Subsection 1.1.1 Vector operations on \(\R^n\)

Given a positive integer \(n\text{,}\) we denote by \(\R^n\) the set of all \(n\)-tuples of real numbers. Tuples are treated in more detail in Section 0.3. In short, as defined in Definition 0.3.1, an \(n\)-tuple is an ordered sequence of elements. The defining property of tuples is that they are ordered objects, as opposed to unordered. This is captured by the notion of equality between tuples: we have
\begin{equation*} (s_1,s_2,\dots, s_n)=(t_1,t_2,\dots, t_n) \end{equation*}
if and only if \(s_i=t_i\) for all \(1\leq i\leq n\text{.}\)
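This ordered behavior is mirrored by tuples in Sage (and Python), as the following quick check illustrates; the particular tuples are our choice:

    print((1, 2, 3) == (1, 2, 3))   # True: entries agree position by position
    print((1, 2) == (2, 1))         # False: same entries, different order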
You have probably encountered \(\R^2\) and \(\R^3\) as models of planar and three-dimensional space. Perhaps you have even encountered \(\R^4\) as a model of relativistic space-time. In keeping with this tradition, we will call \(\R^n\) (real) \(n\)-space.

Definition 1.1.1. Real \(n\)-space.

Let \(n\) be a positive integer. The set of all real \(n\)-tuples is called real \(n\)-space (or just \(n\)-space) and is denoted \(\R^n\text{:}\) i.e.,
\begin{equation*} \R^n=\{(a_1,a_2,\dots, a_n)\colon a_i\in \R\}\text{.} \end{equation*}
We now introduce the standard vector operations on \(\R^n\text{,}\) so called as they give \(\R^n\) the structure of a vector space.

Definition 1.1.2. Vector operations of \(\R^n\).

Let \(n\) be a positive integer. We will call the operations below the standard vector operations on \(\R^n\text{.}\)
  • Vector addition on \(\R^n\).
    Given \(n\)-tuples \(\boldx=(x_1,x_2,\dots, x_n)\) and \(\boldy=(y_1,y_2,\dots, y_n)\text{,}\) we define their vector sum \(\boldx+\boldy\) as
    \begin{equation*} \boldx+\boldy=(x_1+y_1,x_2+y_2,\dots, x_n+y_n)\text{.} \end{equation*}
    The operation
    \begin{align*} \R^n\times \R^n \amp \rightarrow \R^n\\ (\boldx, \boldy) \amp \mapsto \boldx+\boldy \end{align*}
    is called vector addition.
  • Scalar multiplication on \(\R^n\).
    Given an \(n\)-tuple \(\boldx=(x_1,x_2,\dots, x_n)\in\R^n\) and a real number \(c\in \R\text{,}\) the \(n\)-tuple \(c\boldx\) defined as
    \begin{equation*} c\boldx=(cx_1,cx_2,\dots, cx_n) \end{equation*}
    is called the scalar multiple of \(\boldx\) by \(c\text{.}\) The operation
    \begin{align*} \R\times \R^n \amp \rightarrow \R^n\\ (c,\boldx) \amp \mapsto c\boldx \end{align*}
    is called scalar multiplication.

Remark 1.1.3. Vector operations input/output.

It is a good habit, when dealing with a variety of types of mathematical operations, to have in your mind a qualitative summary of what their inputs and outputs are. For example, vector addition in \(\R^n\) takes as input a pair of \(n\)-tuples, \(\boldx\) and \(\boldy\text{,}\) and returns as output the \(n\)-tuple \(\boldx+\boldy\text{.}\) By contrast, scalar multiplication in \(\R^n\) is a sort of hybrid operation that takes as input a real number \(c\) and \(n\)-tuple \(\boldx\) and returns as output a new \(n\)-tuple \(c\boldx\text{.}\)
The mores of conventional textbook writing dictate that we provide some examples of these vector operations on \(\R^n\text{,}\) as unenlightening as they may be. Instead, we give you an interactive SageCell that will allow you to create and play with examples on your own. The cells in Sage example 1 can be evaluated by clicking the Evaluate (Sage) button, or by typing shift+return. You can experiment by editing the code in these cells and then evaluating.

Sage example 1. Vector operations on \(\R^n\).

To create a vector in Sage, use the vector() command. The input should be a sequence of numbers enclosed in brackets.
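For instance, a cell along these lines creates and displays a 3-vector (the particular entries are our choice):

    v = vector([1, 2, 3])   # the 3-vector (1, 2, 3)
    v                       # display v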
You can make use of sequence routines to create special sequences.
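For instance, Sage's srange() routine and a Python-style list comprehension (our choice of sequence tools here) can generate the entries:

    v = vector(srange(1, 6))                 # the 5-vector (1, 2, 3, 4, 5)
    w = vector([i^2 for i in srange(1, 6)])  # the 5-vector (1, 4, 9, 16, 25)
    v, w                                     # displayed together as a pair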
If you prefer the two outputs above to not be listed as a pair, you can use the print() command in sequence. (This is a peculiarity of interactive SageCells, not Sage itself.)
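For instance:

    v = vector(srange(1, 6)); w = vector([i^2 for i in srange(1, 6)])
    print(v)   # each vector now appears on its own line
    print(w)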
The standard vector operations of \(\R^n\) are implemented using an intuitive syntax in Sage.
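For instance, + performs vector addition and * performs scalar multiplication:

    x = vector([1, 2, 3]); y = vector([4, 5, 6])
    print(x + y)   # vector addition: (5, 7, 9)
    print(2*x)     # scalar multiplication: (2, 4, 6)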
Once a vector v is created in Sage, various properties of the vector can be computed using the v.foo() syntax. For example, the command v.norm() returns the length (Euclidean norm) of the vector v.
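For instance:

    v = vector([3, 4])
    v.norm()   # returns 5, the length of the vector (3, 4)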

Subsection Vector spaces

We now jump to the heart of the matter, which is that \(\R^n\text{,}\) together with its standard vector operations, constitutes an example of an important type of mathematical object called a vector space.

Definition 1.1.4. Vector space.

A real vector space is a set \(V\) together with two operations
\begin{align*} V\times V \amp \rightarrow V \amp \R\times V\amp \rightarrow V\\ (\boldv,\boldw) \amp\mapsto \boldv+\boldw \amp (c,\boldv)\amp \mapsto c\boldv \text{,} \end{align*}
called respectively vector addition and scalar multiplication, that satisfy the following vector space axioms.
  1. Vector addition is commutative.
    \(\boldv+\boldw=\boldw+\boldv\) for all \(\boldv,\boldw\in V\text{.}\)
  2. Vector addition is associative.
    \((\boldu+\boldv)+\boldw=\boldu+(\boldv+\boldw)\) for all \(\boldu, \boldv, \boldw\in V\text{.}\)
  3. Zero vector.
    There is an element \(\boldzero\in V\) such that for all \(\boldv\in V\text{,}\) we have \(\boldzero+\boldv=\boldv+\boldzero=\boldv \text{.}\) We call \(\boldzero\) the zero vector of \(V\text{.}\)
  4. Vector inverses.
    For all \(\boldv\in V\text{,}\) there is another element \(-\boldv\) satisfying \(-\boldv+\boldv=\boldv+(-\boldv)=\boldzero \text{.}\) We call \(-\boldv\) the vector inverse of \(\boldv\text{.}\)
  5. Distribution over vector addition.
    \(c(\boldv+\boldw)=c\boldv+c\boldw\) for all \(c\in \R\) and \(\boldv, \boldw\in V\text{.}\)
  6. Distribution over scalar addition.
    \((c+d)\boldv=c\boldv+d\boldv\) for all \(c,d\in \R\) and \(\boldv\in V\text{.}\)
  7. Scalar multiplication is associative.
    \(c(d\boldv)=(cd)\boldv\) for all \(c,d\in \R\) and all \(\boldv\in V\text{.}\)
  8. Scalar multiplication identity.
    \(1\,\boldv=\boldv\) for all \(\boldv\in V\text{.}\)
We call elements of a vector space vectors and the elements of \(\R\) scalars.

Remark 1.1.5. (Real) vector spaces.

It is possible to define the notion of a vector space over number systems other than the real numbers \(\R\text{.}\) For example, by replacing every instance of \(\R\) in Definition 1.1.4 with \(\C\text{,}\) we get the definition of a complex vector space. For our purposes, we will deal almost exclusively with real vector spaces, and accordingly will not use the ‘real’ modifier unless absolutely necessary.
It is essential to understand the very general nature of Definition 1.1.4. The definition does not specify what the underlying set \(V\) of the vector space is, or what the vector operations are. Rather, it allows for any set \(V\) and any choice of operations to be called a vector space, as long as our choices satisfy the vector space axioms. In this sense, the act of establishing a vector space consists of first making a sequence of declarations (“vector addition and scalar multiplication are defined like so”, “this element is the zero vector of our space”, “we define the vector inverse of elements like so”), and then verifying that these various choices satisfy the vector axioms. Procedure 1.1.6 provides a useful model for carrying out these steps.
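Procedure 1.1.6. Establishing a vector space.

To show that a set \(V\text{,}\) together with a choice of operations, constitutes a vector space, proceed as follows.
  1. Identify the underlying set \(V\text{.}\)
  2. Propose the vector operations: a vector addition \(V\times V\rightarrow V\) and a scalar multiplication \(\R\times V\rightarrow V\text{.}\)
  3. Identify the element of \(V\) proposed as the zero vector \(\boldzero\text{.}\)
  4. Give a rule for computing the proposed vector inverse \(-\boldv\) of each element \(\boldv\in V\text{.}\)
  5. Verify that the proposed zero vector and vector inverses satisfy the identities of Axioms iii–iv, and that the remaining axioms hold.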
Note how Procedure 1.1.6 highlights the special role played by Axioms iii–iv. These are sometimes called the existential axioms, as they posit the existence of certain special elements of \(V\text{:}\) a vector satisfying the identity of Axiom iii that is denoted \(\boldzero\text{,}\) and for all vectors \(\boldv\in V\text{,}\) a vector inverse denoted \(-\boldv\) that satisfies the identity of Axiom iv. Note also that our choice of vector inverses in Axiom iv depends on our choice of the zero vector \(\boldzero\) in Axiom iii, as this appears in the defining identity of Axiom iv.
Let’s apply Procedure 1.1.6 to verify that \(\R^n\text{,}\) together with the vector operations defined in Definition 1.1.2, constitutes a vector space.

Theorem 1.1.7. Vector space structure of \(\R^n\).

Let \(n\) be a positive integer. The set \(\R^n\text{,}\) together with the standard vector operations of Definition 1.1.2, constitutes a vector space. Moreover, the zero vector of \(\R^n\) is given by \(\boldzero=(0,0,\dots, 0)\text{,}\) and the vector inverse of \(\boldx=(x_1,x_2,\dots, x_n)\) is given by \(-\boldx=(-x_1,-x_2,\dots, -x_n)\text{.}\)

Proof.

The statement itself of the theorem has already taken care of some of the steps of Procedure 1.1.6: it has identified the underlying set \(\R^n\) and proposed vector operations (steps (1)-(2)), and it has identified the zero vector and the rule for computing vector inverses (steps (3)-(4)). It remains to show that the proposed zero vector and vector inverses satisfy the identities of Axioms iii–iv, and that Axioms (i)-(ii) and (v)-(viii) are satisfied. We first consider Axioms iii–iv.
Axiom iii.
We claim that \((0,0,\dots, 0)\) satisfies the identity of Axiom iii, and thus that \(\boldzero=(0,0,\dots, 0)\text{.}\) Indeed, for all \(\boldx\in \R^n\) we have
\begin{align*} \boldzero+\boldx \amp = (0+x_1,0+x_2,\dots, 0+x_n)\amp \text{(def. of vec. add.)}\\ \amp = (x_1,x_2,\dots, x_n)\\ \amp =\boldx\text{,} \end{align*}
as desired.
Axiom iv.
We claim that given any \(\boldx=(x_1,x_2,\dots, x_n)\text{,}\) the vector \((-x_1,-x_2,\dots, -x_n)\) satisfies the identity of Axiom iv, and thus that \(-\boldx=(-x_1,-x_2,\dots, -x_n)\text{.}\) Indeed, we have
\begin{align*} (-x_1,-x_2,\dots, -x_n)+(x_1,x_2,\dots, x_n) \amp =(-x_1+x_1,-x_2+x_2,\dots, -x_n+x_n) \amp \text{(def. of vec. add.)}\\ \amp = (0,0,\dots, 0)\\ \amp = \boldzero\text{,} \end{align*}
as desired.
As for the remaining axioms, we will verify Axioms ii and vi, and leave the rest as an exercise. As you see below, the desired identities here all boil down to a familiar property of real number arithmetic: e.g., commutativity of real number addition, real number distributivity, etc. In what follows, \(\boldx=(x_1,x_2,\dots, x_n), \boldy=(y_1,y_2,\dots, y_n), \boldz=(z_1,z_2,\dots, z_n)\) will denote arbitrary elements of \(\R^n\text{,}\) and \(c,d\) will denote arbitrary elements of \(\R\text{.}\)
Axiom ii.
We have
\begin{align*} (\boldx+\boldy)+\boldz \amp = (x_1+y_1,x_2+y_2,\dots, x_n+y_n)+\boldz \amp \text{(def. of vec. add.)}\\ \amp = ((x_1+y_1)+z_1,(x_2+y_2)+z_2,\dots, (x_n+y_n)+z_n) \amp \text{(def. of vec. add.)}\\ \amp = (x_1+(y_1+z_1),\dots, x_n+(y_n+z_n)) \amp \text{(assoc. of real add.)}\\ \amp =\boldx+(\boldy+\boldz) \text{.} \end{align*}
Axiom vi.
For all \(c,d\in \R\) and \(\boldx\in \R^n\text{,}\) we have
\begin{align*} (c+d)\boldx \amp = ((c+d)x_1,(c+d)x_2,\dots, (c+d)x_n) \amp \text{(def. of sc. mult.)} \\ \amp =(cx_1+dx_1,cx_2+dx_2,\dots, cx_n+dx_n) \amp \text{(dist. prop. of reals)}\\ \amp =(cx_1,\dots, cx_n)+(dx_1,\dots, dx_n) \amp \text{(def. of vec. add.)}\\ \amp =c\boldx+d\boldx \amp \text{(def. of sc. mult.)}\text{.} \end{align*}
It should be noted that there are (infinitely) many different ways to define a vector space structure on \(\R^n\text{.}\) From now on, however, we will assume without further comment that the vector operations on \(\R^n\) are the standard ones given in Definition 1.1.2. With respect to these standard operations the zero vector and vector inverses of \(\R^n\) are as described in Theorem 1.1.7. We make this official below.

Definition 1.1.8. Vector space terminology for \(\R^n\).

Fix a positive integer \(n\text{.}\) When treating \(\R^n\) as a vector space, \(n\)-tuples \(\boldx\in \R^n\) are called \(n\)-vectors. The zero vector of \(\R^n\) is defined as \(\boldzero=(0,0,\dots, 0)\text{.}\) Given an \(n\)-vector \(\boldx=(x_1,x_2,\dots, x_n)\text{,}\) its vector inverse is the vector \(-\boldx\) defined as \(-\boldx=(-x_1,-x_2,\dots, -x_n)\text{.}\)
Observe that for all \(n\)-vectors \(\boldx=(x_1,x_2,\dots, x_n)\in \R^n\) we have
\begin{equation*} -\boldx=(-x_1,-x_2,\dots, -x_n)=(-1)\boldx\text{.} \end{equation*}
In other words, the vector inverse of \(\boldx\) is equal to the scalar multiple \((-1)\boldx\text{.}\) As it turns out, this is not particular to the specific vector space \(\R^n\text{,}\) but rather a general property of all vector spaces, as we see in Theorem 1.1.9. In order to prove this and other properties for a general vector space \(V\text{,}\) our arguments must rely only on the vector space axioms. In particular, we cannot assume that \(V=\R^n\) along with its standard vector operations, as this is but one example of a vector space.

Theorem 1.1.9. Properties of vector arithmetic.

Let \(V\) be a vector space. Then the following statements hold.
  1. \(0\boldv=\boldzero\) for all \(\boldv\in V\text{.}\)
  2. \(c\,\boldzero=\boldzero\) for all \(c\in \R\text{.}\)
  3. \((-1)\boldv=-\boldv\) for all \(\boldv\in V\text{.}\)
  4. If \(c\boldv=\boldzero\text{,}\) then \(c=0\) or \(\boldv=\boldzero\text{.}\)

Proof.

We prove (1), leaving (2)-(4) as an exercise.
First observe that \(0\boldv=(0+0)\boldv\text{,}\) since \(0=0+0\text{.}\)
By Axiom (vi) we have \((0+0)\boldv=0\boldv+0\boldv\text{.}\) Thus \(0\boldv=0\boldv+0\boldv\text{.}\)
Now add \(-0\boldv\text{,}\) the vector inverse of \(0\boldv\text{,}\) to both sides of the last equation:
\begin{equation*} -0\boldv+0\boldv=-0\boldv+(0\boldv+0\boldv)\text{.} \end{equation*}
Now simplify this equation step by step using the axioms:
\begin{align*} -0\boldv+0\boldv=-0\boldv+(0\boldv+0\boldv)\amp\implies \boldzero=(-0\boldv+0\boldv)+0\boldv \amp (\text{Axiom (iv) and Axiom (ii)}) \\ \amp\implies \boldzero=\boldzero+0\boldv \amp (\text{Axiom (iv)})\\ \amp\implies \boldzero=0\boldv \amp (\text{Axiom (iii)})\text{.} \end{align*}
Again, we emphasize that \(\R^n\) is just one example of a vector space: albeit, the example that we will mostly focus on. For good measure we include a few more examples of vector spaces here. (And we will also meet a few other examples later.) We begin with the simplest of all vector spaces, the zero space. Elementary as this example is, it serves well to illustrate the axiomatic nature of Definition 1.1.4.

Definition 1.1.10. Zero space.

Let \(V=\{\bot\}\text{,}\) a set containing exactly one element. There is a unique vector space structure that can be given to \(V\text{,}\) defined as follows.
  • Vector operations.
    Vector addition on \(V\) is defined as \(\bot+\bot=\bot\text{;}\) scalar multiplication on \(V\) is defined as \(c\cdot \bot=\bot\) for all \(c\in \R\text{.}\)
  • Zero vector.
    The zero vector of \(V\) is \(\bot\text{:}\) i.e., \(\boldzero=\bot\text{.}\)
  • Vector inverses.
    The vector inverse of \(\bot\) is \(\bot\text{:}\) i.e., \(-\bot=\bot\text{.}\)
Since \(\bot=\boldzero\) with respect to this vector space structure, we have \(V=\{\boldzero\}\text{.}\) We call \(V\) a zero space.
Definition 1.1.10 makes two claims: that the given operations make \(V=\{\bot\}\) into a vector space, and that this is the only way to make \(V=\{\bot\}\) into a vector space. As with all claims in mathematics, these need to be proved, but as you will see, the proof is a very light affair.

Proof for Definition 1.1.10.

Since \(V=\{\bot\}\) has only one element, there is no choice for what vector addition \(V\times V\rightarrow V\) and scalar multiplication \(\R\times V\rightarrow V\) can be. They must be defined in the manner given in Definition 1.1.10. Similarly, we must have \(\boldzero=\bot\) and \(-\bot=\bot\text{,}\) as once again, \(\bot\) is the only element of \(V\text{!}\) This shows that there can be at most one way of giving \(V\) a vector space structure.
It is now easy to see that these choices do indeed satisfy the vector space axioms. That \(\bot\) satisfies the identity of Axiom iii defining the zero vector \(\boldzero\) follows from the fact that for all \(\boldv\in V\) we have \(\boldv=\bot\) (since \(V=\{\bot\}\)), and thus
\begin{align*} \bot+\boldv \amp =\bot+\bot\\ \amp = \bot\\ \amp =\boldv\text{.} \end{align*}
Thus \(\bot=\boldzero\) is the zero vector of the space.
Similarly, to show all elements of \(V\) have vector inverses amounts to showing that \(\bot\) has a vector inverse, since this is the only element of \(V\text{.}\) It is claimed that \(-\bot=\bot\) (i.e., \(\bot\) is its own vector inverse), which follows from the fact that
\begin{align*} -\bot+\bot \amp = \bot+\bot\\ \amp = \bot\\ \amp = \boldzero\text{.} \end{align*}
Lastly, the identities of Axioms i-ii and v-viii in this setting all reduce to trivial statements of the form \(\bot+\bot=\bot+\bot\text{.}\) Consider Axiom v, for example. For all \(c\in \R\) and \(\boldv,\boldw\in V\text{,}\) we have \(\boldv=\boldw=\bot\text{,}\) in which case
\begin{align*} c(\boldv+\boldw) \amp =c(\bot+\bot) \\ \amp =c\cdot \bot\\ \amp = \bot \amp \text{(def. of sc. mult.)} \end{align*}
and
\begin{align*} c\boldv+c\boldw \amp =c\cdot \bot+c\cdot \bot\\ \amp = \bot+\bot\\ \amp =\bot\text{.} \end{align*}
Thus
\begin{equation*} c(\boldv+\boldw)=\bot=c\boldv+c\boldw \end{equation*}
for all \(\boldv, \boldw\in V\text{.}\)
We leave verification of the rest of the axioms to the reader.
We also include two “exotic” examples of vector spaces. We leave verification of the vector space axioms for these spaces as an exercise. (See Exercise 1.1.3.1 and Exercise 1.1.3.2.)

Example 1.1.11. Vector space of infinite sequences.

Define \(\R^\infty\) to be the set of all infinite sequences: i.e.,
\begin{equation*} \R^\infty=\{(a_i)_{i=1}^\infty\colon a_i\in \R\}\text{.} \end{equation*}
Vector addition and scalar multiplication of sequences are defined term-wise, exactly as with \(\R^n\text{.}\) In other words, given sequences \((a_i)_{i=1}^\infty\) and \((b_i)_{i=1}^\infty\text{,}\) and scalar \(c\in \R\text{,}\) we define
\begin{align*} (a_i)_{i=1}^\infty+(b_i)_{i=1}^\infty \amp = (a_i+b_i)_{i=1}^\infty \\ c(a_i)_{i=1}^\infty \amp = (ca_i)_{i=1}^\infty \text{.} \end{align*}
In case you prefer the expanded notation for infinite sequences, we have:
\begin{align*} (a_1,a_2,\dots)+(b_1,b_2,\dots) \amp =(a_1+b_1,a_2+b_2,\dots)\\ c(a_1,a_2,\dots) \amp =(ca_1,ca_2,\dots)\text{.} \end{align*}
The set \(\R^\infty\) together with these operations constitutes the vector space of infinite real sequences.

Example 1.1.12. Vector space of positive reals.

Define \(\R_{> 0}\) to be the set of all positive real numbers: i.e.,
\begin{equation*} \R_{> 0}=\{r\in \R\colon r> 0\}\text{.} \end{equation*}
Define vector addition on \(\R_{> 0}\) to be real number multiplication, and define scalar multiplication on \(\R_{> 0}\) to be real number exponentiation: i.e., given vectors \(\boldv=r\) and \(\boldw=s\) in \(\R_{> 0}\text{,}\) and scalar \(c\in \R\text{,}\) we define
\begin{align*} \boldv\oplus\boldw \amp = r\, s\\ c\odot\boldv \amp = r^c \text{.} \end{align*}
Note: we have introduced new notation for our vector operations to help distinguish them from familiar real number arithmetic operations.
The set \(\R_{> 0}\) together with these operations constitutes the vector space of positive reals.
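Before attempting the verification exercise, it can be instructive to spot-check an axiom numerically. The Sage sketch below (the helper names vadd and smul are ours) tests the identity of Axiom v, \(c\odot(\boldv\oplus\boldw)=(c\odot\boldv)\oplus(c\odot\boldw)\text{,}\) at sample values:

    def vadd(r, s):
        # 'vector addition' on R_{>0} is real number multiplication
        return r * s

    def smul(c, r):
        # 'scalar multiplication' on R_{>0} is real number exponentiation
        return r^c

    c, r, s = 1.7, 2.0, 3.5
    print(smul(c, vadd(r, s)))           # c ⊙ (v ⊕ w) = (r*s)^c
    print(vadd(smul(c, r), smul(c, s)))  # (c ⊙ v) ⊕ (c ⊙ w) = r^c * s^c

Both lines print the same number, reflecting the familiar exponent law \((rs)^c=r^cs^c\text{.}\)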
Before returning to \(\R^n\text{,}\) we introduce another important notion from general vector spaces: linear combinations. As simple as the idea of a linear combination is, you will see that it plays a crucial role in many concepts to come.

Definition 1.1.13. Linear combination.

Let \(V\) be a vector space, and let \(\boldv_1,\boldv_2,\dots, \boldv_n\) be a collection of vectors of \(V\text{.}\) A linear combination of the \(\boldv_i\) is a vector expression of the form
\begin{equation} c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{,}\tag{1.1} \end{equation}
where \(c_i\in \R\) for all \(1\leq i\leq n\text{.}\) The scalars \(c_i\) appearing in (1.1) are called the coefficients of the linear combination. The linear combination (1.1) is trivial if \(c_i=0\) for all \(1\leq i\leq n\text{,}\) and nontrivial if \(c_i\ne 0\) for some \(1\leq i\leq n\text{.}\)
A vector \(\boldv\in V\) is a linear combination of the \(\boldv_i\) if we have
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n \end{equation*}
for some choice of scalars \(c_i\in \R\text{.}\)

Example 1.1.14. Linear combination.

Let \(\boldv_1=(1,0,0), \boldv_2=(0,1,0), \boldv_3=(0,0,1)\text{.}\) Show that every vector in \(\R^3\) is a linear combination of the \(\boldv_i\text{.}\)
Solution.
Given any vector \(\boldv=(a,b,c)\in \R^3\text{,}\) we have
\begin{align*} \boldv\amp =a(1,0,0)+b(0,1,0)+c(0,0,1) \\ \amp =a\boldv_1+b\boldv_2+c\boldv_3\text{.} \end{align*}

Example 1.1.15. Linear combination.

Express \(\boldzero=(0,0,0,0)\) as a nontrivial linear combination of \(\boldv_1=(1,1,1,1)\) and \(\boldv_2=(2,2,2,2)\text{.}\)
Solution.
Since clearly \(\boldv_2=2\boldv_1\text{,}\) we have
\begin{align*} \boldzero \amp = 2\boldv_1+(-\boldv_2)\\ \amp = 2\boldv_1+(-1)\boldv_2\text{.} \end{align*}
This is not the only nontrivial linear combination yielding \(\boldzero\text{.}\) In fact we have
\begin{equation*} \boldzero=(2r)\boldv_1+(-r)\boldv_2 \end{equation*}
for any scalar \(r\in \R\) (including \(r=0\)).
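Identities like this are easy to spot-check in Sage; for instance, with \(r=5\text{:}\)

    v1 = vector([1, 1, 1, 1]); v2 = vector([2, 2, 2, 2])
    r = 5                  # any scalar r gives the same result
    (2*r)*v1 + (-r)*v2     # returns (0, 0, 0, 0)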
It is natural to want to rewrite a linear combination of the form \(2\boldv_1+(-1)\boldv_2\) as \(2\boldv_1-\boldv_2\text{.}\) But of course this expression doesn’t quite make sense. What we are missing is the vector difference operation.

Definition 1.1.16. Vector difference.

Let \(V\) be a vector space. Given vectors \(\boldv,\boldw\in V\text{,}\) we define their difference \(\boldv-\boldw\) as
\begin{equation*} \boldv-\boldw=\boldv+(-\boldw)=\boldv+(-1)\boldw\text{.} \end{equation*}
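In \(\R^n\) the vector difference is thus computed entrywise, as a quick Sage check suggests:

    v = vector([5, 7]); w = vector([2, 3])
    print(v - w)          # (3, 4)
    print(v + (-1)*w)     # the same vector, per the definition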

Subsection 1.1.2 Visualizing \(\R^n\)

We will only explicitly visualize (or graph) elements of \(\R^n\) for \(n=2\) and \(n=3\text{.}\) However, these special cases bring to light an important point-vector duality in how we conceive of \(n\)-tuples that carries over into higher values of \(n\text{.}\) Fix \(n=3\) for now. We will sometimes conceive of a triple \((a,b,c)\in \R^3\) as a point, in which case we will use capital letters to denote the triple (e.g., \(P=(a,b,c)\)), and will represent the point visually with respect to a coordinate system as the point in \(3\)-space reached by starting at the origin \(O=(0,0,0)\) and moving a directed distance \(a\) units in the \(x\)-direction, \(b\) units in the \(y\)-direction and \(c\) units in the \(z\)-direction.
Figure 1.1.17. Point visualization of triple. Made with GeoGebra.
Alternatively, when conceiving of a triple \((a,b,c)\) as a vector, we will use lowercase bold letters to denote it (e.g., \(\boldx=(a,b,c)\) or \(\boldv=(a,b,c)\)), and represent it visually as a directed line segment (i.e., an arrow). In more detail, given the 3-vector \(\boldx=(a,b,c)\text{,}\) we choose an initial point \(Q=(x_0,y_0,z_0)\) and represent \(\boldx\) as the directed line segment \(\overrightarrow{QR}\) that starts at \(Q\) and ends at the point \(R=(x_0+a,y_0+b,z_0+c)\text{,}\) the terminal point of \(\overrightarrow{QR}\text{.}\) Note that in this manner we get infinitely many different graphical representations of \(\boldx\text{,}\) one for each choice of starting point \(Q\text{.}\) Although these are technically different arrows (they have different starting points), we consider them to be equal as vectors. You can think of each particular choice of arrow-representation \(\boldx=\overrightarrow{QR}\) as an instance or incarnation of the vector \(\boldx=(a,b,c)\text{.}\) When the initial point of our arrow representation is chosen to be the origin \(O=(0,0,0)\text{,}\) we have \(\boldx=\overrightarrow{OP}\text{,}\) where \(P=(a,b,c)\text{.}\) We call \(\overrightarrow{OP}\) the position vector of the point \(P\text{.}\)
Figure 1.1.18. Vector visualization of triple \(\boldx=\overrightarrow{OP}=\overrightarrow{QR}\text{.}\) Drag \(P\) to change the vector \(\boldx\text{.}\) Drag \(Q\) to change the initial point of \(\overrightarrow{QR}\text{.}\) Made with GeoGebra.
The representation of vectors as arrows gives rise to the so-called tip-to-tail interpretation of vector addition. Let \(\boldx=(a,b,c)\) and \(\boldy=(d,e,f)\text{.}\) Starting with an initial point \(P=(x_0,y_0,z_0)\text{,}\) we can represent \(\boldx\) as \(\overrightarrow{PQ}\text{,}\) where \(Q=(x_0+a,y_0+b,z_0+c)\text{,}\) and \(\boldy=\overrightarrow{QR}\text{,}\) where \(R=(x_0+a+d,y_0+b+e,z_0+c+f)\text{.}\) But then we have
\begin{equation*} \boldx+\boldy=(a+d,b+e,c+f)=\overrightarrow{PR}\text{,} \end{equation*}
or alternatively,
\begin{equation} \overrightarrow{PQ}+\overrightarrow{QR}=\overrightarrow{PR}\text{.}\tag{1.2} \end{equation}
In other words, if we choose our arrow representations so that the terminal point (the tip) of \(\boldx=\overrightarrow{PQ}\) is placed at the initial point (the tail) of \(\boldy=\overrightarrow{QR}\text{,}\) then \(\boldx+\boldy\) is represented by the arrow whose initial point is \(P\text{,}\) and whose terminal point is reached by first traveling along \(\boldx\text{,}\) and then traveling along \(\boldy\text{.}\)
Figure 1.1.19. Tip-to-tail visualization of vector addition. Made with GeoGebra.
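Identity (1.2) is easy to confirm on sample coordinates, representing each point by its position vector; the particular points below are our choice:

    P = vector([1, 0, 2]); Q = vector([3, 1, 1]); R = vector([0, 4, 5])
    print((Q - P) + (R - Q))   # the arrow PQ followed by the arrow QR...
    print(R - P)               # ...agrees with the arrow PR: (-1, 4, 3)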
The tip-to-tail visualization of vector addition gives rise to a similar conceptualization of vector difference. Performing a little vector algebra on the definition
\begin{equation*} \boldy-\boldx=\boldy+ (-\boldx)\text{,} \end{equation*}
we see that
\begin{equation*} \boldx+(\boldy-\boldx)=\boldy\text{.} \end{equation*}
Using tip-to-tail terminology, this means if we represent \(\boldx=\overrightarrow{PQ}\) and \(\boldy=\overrightarrow{PR}\text{,}\) then \(\boldy-\boldx\) is the arrow \(\overrightarrow{QR}\) that starts at the tip of \(\boldx\) and ends at the tip of \(\boldy\text{.}\) We thus have a tip-to-tip description of vector difference.
Figure 1.1.20. Tip-to-tip visualization of vector difference. Made with GeoGebra.
Next consider scalar multiplication. Given a vector \(\boldx=(a,b,c)=\overrightarrow{PQ}\) and a scalar \(k\in \R\text{,}\) the scalar multiple \(k\boldx=(ka,kb,kc)\) can be represented as an arrow that starts at \(P\) and points along the line containing \(\overrightarrow{PQ}\text{.}\) As we will see in the next section, the length of the resulting arrow is multiplied by the factor \(\abs{k}\text{,}\) resulting in a stretched arrow if \(\abs{k}> 1\) and a shrunk arrow if \(\abs{k}< 1\text{.}\) Furthermore, if \(k> 0\text{,}\) then the arrow representing \(k\boldx\) points in the same direction as \(\overrightarrow{PQ}\text{;}\) if \(k< 0\text{,}\) it points in the opposite direction.
Figure 1.1.21. Visualization of scalar multiplication. Drag point labeled \(k\) to change scalar. Made with GeoGebra.

Exercises 1.1.3 Exercises

Vector space examples.

Show that the given set together with the given operations constitutes a vector space by verifying that the vector axioms hold. Follow Procedure 1.1.6. In particular, explicitly identify what element of the set acts as the zero vector \(\boldzero\text{,}\) and how vector inverses of elements are defined.
1. Infinite sequences.
Define \(\R^\infty\) to be the set of all infinite sequences: i.e.,
\begin{equation*} \R^\infty=\{(a_i)_{i=1}^\infty\colon a_i\in \R\}\text{.} \end{equation*}
Define vector addition and scalar multiplication as follows:
\begin{align*} (a_i)_{i=1}^\infty+(b_i)_{i=1}^\infty \amp = (a_i+b_i)_{i=1}^\infty \\ c(a_i)_{i=1}^\infty \amp = (ca_i)_{i=1}^\infty \text{.} \end{align*}
In case you prefer the expanded notation for infinite sequences, we have:
\begin{align*} (a_1,a_2,\dots)+(b_1,b_2,\dots) \amp =(a_1+b_1,a_2+b_2,\dots)\\ c(a_1,a_2,\dots) \amp =(ca_1,ca_2,\dots)\text{.} \end{align*}
2. Positive reals.
Define \(\R_{> 0}\) to be the set of all positive real numbers: i.e.,
\begin{equation*} \R_{> 0}=\{r\in \R\colon r> 0\}\text{.} \end{equation*}
Define vector addition on \(\R_{> 0}\) to be real number multiplication, and define scalar multiplication on \(\R_{> 0}\) to be real number exponentiation: i.e., given vectors \(\boldv=r\) and \(\boldw=s\) in \(\R_{> 0}\text{,}\) and scalar \(c\in \R\text{,}\) we define
\begin{align*} \boldv\oplus\boldw \amp = r\, s\\ c\odot\boldv \amp = r^c \text{.} \end{align*}
Note: we have introduced new notation for our vector operations to help distinguish them from familiar real number arithmetic operations.

3.

Prove statements (2)-(4) of Theorem 1.1.9. When treating a specific part you may assume the results of any part that has already been proven, including statement (1).

4.

Let \(V\) be a vector space.
  1. Show that the zero vector of \(V\) is unique: i.e., show that if \(\boldw\in V\) satisfies \(\boldw+\boldv=\boldv\) for all \(\boldv\in V\text{,}\) then \(\boldw=\boldzero\text{.}\)
  2. Fix \(\boldv\in V\text{.}\) Show that the vector inverse of \(\boldv\) is unique: i.e., show that if \(\boldw+\boldv=\boldzero\text{,}\) then \(\boldw=-\boldv\text{.}\)
Thus we may speak of the zero vector of a vector space, and the vector inverse of a vector \(\boldv\text{.}\)

5.

Let \(V\) be a vector space. Prove:
\begin{equation*} \boldu + \boldw = \boldv + \boldw \iff \boldu=\boldv\text{.} \end{equation*}

6.

Let \(V\) be a vector space. Prove that either \(V=\{\boldzero\}\) (i.e., \(V\) is the zero space) or \(V\) is infinite. In other words, a vector space contains either exactly one element or infinitely many elements.
Hint.
Assume \(V\) contains a nonzero vector \(\boldv\ne\boldzero\text{.}\) Show that if \(c\ne d\text{,}\) then \(c\boldv\ne d\boldv\text{.}\) You may assume the results of Theorem 1.1.9.