
Section 4.1 Coordinate vectors

Suppose \(V\) is an \(n\)-dimensional vector space. Once we choose a basis \(B=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of \(V\text{,}\) we know from Theorem 3.6.7 that any \(\boldv\in V\) can be expressed in a unique way as
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n, \quad c_i\in \R\text{.} \end{equation*}
Coordinate vectors turn this observation into a computational tool by exploiting the resulting one-to-one correspondence
\begin{equation} \boldv\in V \longleftrightarrow (c_1,c_2,\dots, c_n)\in \R^n\text{.}\tag{4.1.1} \end{equation}
We will use the correspondence (4.1.1) in two distinct ways, as described below.
  1. Given an \(n\)-dimensional vector space \(V\) and basis \(B\text{,}\) the correspondence (4.1.1) allows us to treat elements of the abstract space \(V\) as if they were elements of \(\R^n\text{,}\) and to then make use of our wealth of computational procedures related to \(n\)-tuples.
  2. The correspondence (4.1.1) is also useful when working in \(\R^n\) itself. Namely, there will be situations where it is convenient to represent vectors with a particular nonstandard basis \(B\text{,}\) as opposed to the standard basis \(\{\bolde_1, \bolde_2, \dots, \bolde_n\}\text{.}\) In this setting the correspondence (4.1.1) will be used as a “change of coordinates” technique.

Subsection 4.1.1 Coordinate vectors

Before we can define coordinate vectors we need to define an ordered basis. As the name suggests, this is nothing more than a basis along with a particular choice of ordering of its elements: a first element, a second element, and so on. In other words, an ordered basis is a sequence of vectors, as opposed to a set of vectors.

Definition 4.1.1. Ordered bases.

Let \(V\) be a finite-dimensional vector space. An ordered basis of \(V\) is a sequence of distinct vectors \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) whose underlying set \(\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) is a basis of \(V\text{.}\)

Remark 4.1.2.

A single (unordered) basis \(B=\{\boldv_1, \boldv_2, \dots, \boldv_n\}\) of an \(n\)-dimensional vector space gives rise to \(n!\) different ordered bases: you have \(n\) choices for the first element of the ordered basis, \(n-1\) choices for the second element, and so on.
For example, the standard basis \(B=\{\bolde_1, \bolde_2, \bolde_3\}\) of \(\R^3\) gives rise to \(3!=6\) different ordered bases of \(\R^3\text{:}\)
\begin{align*} B_1\amp =(\bolde_1, \bolde_2, \bolde_3) \amp B_2\amp =(\bolde_1, \bolde_3, \bolde_2) \\ B_3\amp=(\bolde_2, \bolde_1, \bolde_3) \amp B_4\amp =(\bolde_2, \bolde_3, \bolde_1) \\ B_5 \amp =(\bolde_3, \bolde_1, \bolde_2) \amp B_6\amp =(\bolde_3, \bolde_2, \bolde_1)\text{.} \end{align*}
By a slight abuse of language we will use “standard basis” to describe both one of our standard unordered bases and the corresponding ordered basis obtained by choosing the implicit ordering of the set descriptions in Remark 3.6.2. For example, \(\{x^2, x, 1\}\) and \((x^2, x, 1)\) will both be called the standard basis of \(P_2\text{.}\)

Definition 4.1.3. Coordinate vectors.

Let \(B=(\boldv_1, \boldv_2,\dots , \boldv_n)\) be an ordered basis for the vector space \(V\text{.}\) According to Theorem 3.6.7, for any \(\boldv\in V\) there is a unique choice of scalars \(c_i\in \R\) satisfying
\begin{equation} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{.}\tag{4.1.2} \end{equation}
We call the corresponding \(n\)-tuple \((c_1,c_2,\dots, c_n)\) the coordinate vector of \(\boldv\) relative to the basis \(B\), and denote it \([\boldv]_B\text{:}\) i.e.,
\begin{equation*} [\boldv]_B=(c_1,c_2,\dots, c_n)\text{.} \end{equation*}
Observe that computing a coordinate vector with respect to a basis involves setting up a vector equation of the form (4.1.2) and then solving for the unknown coefficients \(c_i\text{.}\) This is a familiar situation for us by now: carrying out the computation involves reducing the given vector equation to a system of linear equations that we solve with our old workhorse, Gaussian elimination.
As illustrated by the next example, one setting for which we can compute \([\boldv]_B\) by inspection (see (2) of Procedure 4.1.4) is when \(B\) is one of our standard ordered bases.
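To make this concrete, here is a minimal computational sketch (assuming Python with NumPy is available; the helper name coord_vector is ours, not part of the text). Placing the basis vectors as the columns of a matrix \(P\) turns computing \([\boldv]_B\) into solving the linear system \(P\boldc=\boldv\text{,}\) which is exactly the Gaussian elimination step described above.

import numpy as np

def coord_vector(basis, v):
    # Place the basis vectors as the columns of P; [v]_B is then the
    # unique solution c of P c = v (unique since the columns form a basis).
    P = np.column_stack(basis)
    return np.linalg.solve(P, np.asarray(v, dtype=float))

# With B = ((1, 2), (1, 1)) and v = (3, 3), we get [v]_B = (0, 3),
# matching Example 4.1.7 below.
print(coord_vector([(1, 2), (1, 1)], (3, 3)))   # [0. 3.]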

Example 4.1.5. Standard bases.

Computing coordinate vectors relative to one of our standard ordered bases for \(\R^n\text{,}\) \(M_{mn}\text{,}\) or \(P_{n}\) amounts to just listing the coefficients or entries used to specify the given vector. The examples below serve to illustrate the general method in this setting.
  1. Let \(V=\R^3\) and \(B=(\bolde_1, \bolde_2, \bolde_3)\text{.}\) For any \(\boldv=(a,b,c)\in \R^3\) we have \([\boldv]_{B}=(a,b,c)\text{,}\) since \((a,b,c)=a\bolde_1+b\bolde_2+c\bolde_3\text{.}\)
  2. Let \(V=M_{22}\) and \(B=(E_{11}, E_{12}, E_{21}, E_{22})\text{.}\) For any \(A=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\) we have \([A]_B=(a,b,c,d)\) since
    \begin{equation*} A=aE_{11}+bE_{12}+cE_{21}+dE_{22}\text{.} \end{equation*}

Example 4.1.6. Reorderings of standard bases.

If we choose an alternate ordering of one of the standard ordered bases, the entries of the coordinate vector are reordered accordingly, as illustrated by the examples below.
  1. Let \(V=\R^3\) and \(B=(\bolde_2, \bolde_1, \bolde_3)\text{.}\) Given \(\boldv=(a,b,c)\in \R^3\) we have \([\boldv]_B=(b,a,c)\text{,}\) since
    \begin{equation*} \boldv=b\bolde_2+a\bolde_1+c\bolde_3\text{.} \end{equation*}
  2. Let \(P=P_3\) and \(B=(1,x,x^2, x^3)\text{.}\) Given \(p(x)=ax^3+bx^2+cx+d\) we have \([p(x)]_B=(d, c, b, a)\text{,}\) since
    \begin{equation*} p(x)=d\cdot 1+cx+bx^2+ax^3\text{.} \end{equation*}
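As a quick check of this reordering principle (again a sketch, reusing the illustrative coord_vector helper defined above, with NumPy assumed):

# With B = (e2, e1, e3), the coordinates of v = (a, b, c) = (2, -1, 3)
# come out permuted as (b, a, c) = (-1, 2, 3).
print(coord_vector([(0, 1, 0), (1, 0, 0), (0, 0, 1)], (2, -1, 3)))
# [-1.  2.  3.]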

Example 4.1.7. Nonstandard bases.

For a nonstandard ordered basis, we compute coordinate vectors by solving a relevant system of linear equations, as the examples below illustrate.
  1. Let \(V=\R^2\text{,}\) \(B=((1,2),(1,1))\text{,}\) and \(\boldv=(3,3)\text{.}\) Compute \([\boldv]_B\text{.}\) More generally, compute \([(a,b)]_B\) for an arbitrary \((a,b)\in \R^2\text{.}\)
  2. Let \(V=P_2\text{,}\) \(B=(x^2+x+1, x^2-x, x^2-1)\text{,}\) and \(p(x)=x^2\text{.}\) Compute \([p(x)]_B\text{.}\) More generally, compute \([ax^2+bx+c]_B\) for an arbitrary element \(ax^2+bx+c\in P_2\text{.}\)
Solution.
  1. Using Procedure 4.1.4, we compute \([(3,3)]_B\) by finding the unique pair \((c_1, c_2)\) satisfying
    \begin{equation*} (3,3)=c_1(1,2)+c_2(1,1)\text{.} \end{equation*}
    By inspection, we see that
    \begin{equation*} (3,3)=3(1,1)=0(1,2)+3(1,1)\text{.} \end{equation*}
    We conclude that
    \begin{equation*} [\boldv]_{B}=(0,3)\text{.} \end{equation*}
    More generally, to compute \([\boldv]_B\) for an arbitrary \(\boldv=(a,b)\in \R^2\text{,}\) we must find the pair \((c_1,c_2)\) satisfying \((a,b)=c_1(1,2)+c_2(1,1)\text{,}\) or equivalently
    \begin{equation*} \begin{linsys}{2} c_1\amp +\amp c_2 \amp =\amp a\\ 2c_1\amp +\amp c_2\amp =\amp b \end{linsys}\text{.} \end{equation*}
    The usual Gaussian elimination technique yields the unique solution \((c_1,c_2)=(-a+b,2a-b)\text{,}\) and thus
    \begin{equation*} [\boldv]_B=(-a+b, 2a-b) \end{equation*}
    for \(\boldv=(a,b)\text{.}\)
  2. To compute \([x^2]_B\) we must find the unique triple \((c_1,c_2,c_3)\) satisfying
    \begin{equation*} x^2=c_1(x^2+x+1)+c_2(x^2-x)+c_3(x^2-1)\text{.} \end{equation*}
    The equivalent linear system once we combine like terms and equate coefficients is
    \begin{equation*} \begin{linsys}{3} c_1\amp +\amp c_2\amp +\amp c_3\amp =\amp 1\\ c_1\amp -\amp c_2\amp \amp \amp =\amp 0\\ c_1\amp \amp \amp -\amp c_3\amp =\amp 0\\ \end{linsys}\text{.} \end{equation*}
    The unique solution to this system is \((c_1,c_2,c_3)=(1/3, 1/3, 1/3)\text{.}\) We conclude
    \begin{equation*} [x^2]_B=\frac{1}{3}(1, 1, 1)\text{.} \end{equation*}
    The same reasoning shows that more generally, given polynomial \(p(x)=ax^2+bx+c\text{,}\) we have
    \begin{equation*} [p(x)]_B=\frac{1}{3}(a+b+c, a-2b+c, a+b-2c)\text{.} \end{equation*}
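The general formula above can also be double-checked symbolically. The following sketch (assuming Python with SymPy) equates coefficients of like powers of \(x\) and solves for \((c_1, c_2, c_3)\text{,}\) mirroring the hand computation:

import sympy as sp

x, a, b, c = sp.symbols('x a b c')
c1, c2, c3 = sp.symbols('c1 c2 c3')

# c1*(x^2+x+1) + c2*(x^2-x) + c3*(x^2-1) - (a*x^2 + b*x + c) must be the
# zero polynomial, so each of its coefficients in x must vanish.
expr = c1*(x**2 + x + 1) + c2*(x**2 - x) + c3*(x**2 - 1) - (a*x**2 + b*x + c)
sol = sp.solve(sp.Poly(expr, x).coeffs(), (c1, c2, c3))
print(sol)
# {c1: a/3 + b/3 + c/3, c2: a/3 - 2*b/3 + c/3, c3: a/3 + b/3 - 2*c/3}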

Figure 4.1.8. Video example: coordinate vectors.

Subsection 4.1.2 Coordinate vector transformation

The next theorem is the key to understanding the tremendous computational value of coordinate vectors. Here we treat the coordinate vector operation as a function
\begin{align*} [\phantom{v}]_B\colon V\amp \rightarrow \R^n\\ \boldv\amp\mapsto [\boldv]_B\in \R^n \text{.} \end{align*}
Not surprisingly, this turns out to be a linear transformation, which we call a coordinate vector transformation. Furthermore, the correspondence
\begin{equation*} \boldv\in V \longmapsto [\boldv]_B\in \R^n \end{equation*}
is a one-to-one correspondence between \(V\) and \(\R^n\text{,}\) allowing us to identify the vectors \(\boldv\in V\) with \(n\)-tuples in \(\R^n\text{.}\) In the language of Section 3.9, these two facts taken together mean that the coordinate vector transformation is an isomorphism between \(V\) and \(\R^n\text{.}\) Practically speaking, this means any question regarding the vector space structure of \(V\) can be translated to an equivalent question about the vector space \(\R^n\text{.}\) As a result, given any “exotic” vector space \(V\) of finite dimension, once we choose an ordered basis \(B\) of \(V\text{,}\) questions about \(V\) can be answered by taking coordinate vectors with respect to \(B\) and answering the corresponding question in the more familiar setting of \(\R^n\text{,}\) where we have a wealth of computational procedures at our disposal. We memorialize this principle as a mantra: to answer a question about a finite-dimensional vector space \(V\text{,}\) choose an ordered basis \(B\text{,}\) translate the question to \(\R^n\) using coordinate vectors, and answer the translated question there.

Theorem 4.1.10. Coordinate vector transformation.

Let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) be an ordered basis of the vector space \(V\text{,}\) and let \(T=[\phantom{\boldv}]_B\colon V\rightarrow \R^n\) be the corresponding coordinate vector transformation.
  1. \(T\) is a linear transformation.
  2. \(T\) is injective: i.e., \(\boldv=\boldw\) if and only if \([\boldv]_B=[\boldw]_B\text{.}\)
  3. \(T\) is surjective: i.e., \(\im T=\R^n\text{.}\)
  4. Given a subset \(S=\{\boldv_1, \boldv_2, \dots, \boldv_r\}\subseteq V\text{,}\) let \(S'=\{[\boldv_1]_B, [\boldv_2]_B, \dots, [\boldv_r]_B\}\subseteq \R^n\text{.}\) Then \(\boldv\in \Span S\) if and only if \([\boldv]_B\in \Span S'\text{,}\) and \(S\) is linearly independent if and only if \(S'\) is linearly independent.

Proof.
  1. Suppose \(T(\boldv)=[\boldv]_B=(a_1,a_2,\dots, a_n)\) and \(T(\boldw)=[\boldw]_B=(b_1, b_2, \dots, b_n)\text{.}\) By definition this means
    \begin{align*} \boldv \amp =\sum_{i=1}^na_i\boldv_i, \amp \boldw\amp =\sum_{i=1}^nb_i\boldv_i\text{.} \end{align*}
    It follows that
    \begin{equation*} c\boldv+d\boldw=\sum_{i=1}^n(ca_i+db_i)\boldv_i\text{,} \end{equation*}
    and hence
    \begin{align*} T(c\boldv+d\boldw)\amp =[c\boldv+d\boldw]_B \amp (\text{def. of } [\phantom{\boldv}]_B) \\ \amp =(ca_1+db_1,ca_2+db_2,\dots, ca_n+db_n) \\ \amp =c(a_1,a_2,\dots, a_n)+d(b_1,b_2,\dots, b_n)\\ \amp =c[\boldv]_B+d[\boldw]_B\\ \amp =cT(\boldv)+dT(\boldw) \text{.} \end{align*}
    This proves \(T\) is linear.
  2. Clearly, if \(\boldv=\boldw\text{,}\) then \([\boldv]_B=[\boldw]_B\text{.}\) If \(T(\boldv)=T(\boldw)=(c_1,c_2,\dots, c_n)\text{,}\) then by definition of \([\phantom{\boldv}]_B\) we must have
    \begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n=\boldw\text{.} \end{equation*}
  3. Given any \(\boldb=(b_1,b_2,\dots, b_n)\in \R^n\text{,}\) we have \(\boldb=T(\boldv)\text{,}\) where
    \begin{equation*} \boldv=b_1\boldv_1+b_2\boldv_2+\cdots +b_n\boldv_n\text{.} \end{equation*}
    This proves \(\im T=\R^n\text{.}\)
  4. We have
    \begin{align*} \boldv\in \Span S \amp \iff \boldv=\sum_{i=1}^rc_i\boldv_i\\ \amp\iff [\boldv]_{B}=\left[ \sum_{i=1}^rc_i\boldv_i\right]_B \\ \amp \iff [\boldv]_B=\sum_{i=1}^rc_i[\boldv_i]_B\amp ([\phantom{\boldv}]_B \text{ is linear})\\ \amp \iff [\boldv]_B\in \Span S'\text{.} \end{align*}
    Similarly, we have
    \begin{align*} \sum_{i=1}^rc_i\boldv_i=\boldzero \amp \iff \left[\sum_{i=1}^rc_i\boldv_i\right]_B=[\boldzero]_B \amp (\boldv=\boldw\iff [\boldv]_B=[\boldw]_B) \\ \amp\iff \sum_{i=1}^rc_i[\boldv_i]_B=(0,0,\dots, 0) \amp ([\phantom{\boldv}]_B \text{ is linear}) \text{.} \end{align*}
    From this equivalence we see that there is a nontrivial linear combination of \(S\) yielding \(\boldzero\in V\) if and only if there is a nontrivial linear combination of \(S'\) yielding \(\boldzero\in \R^n\text{.}\) In other words, \(S\) is linearly independent if and only if \(S'\) is linearly independent.
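Statement (4) is what gives coordinate vectors their computational power. As a small illustration (a sketch assuming NumPy, not part of the text's procedures), the polynomials of Example 4.1.7 can be tested for linear independence by computing the rank of the matrix whose rows are their coordinate vectors relative to the standard basis \((x^2, x, 1)\text{:}\)

import numpy as np

# Rows are [x^2+x+1]_B, [x^2-x]_B, [x^2-1]_B for B = (x^2, x, 1).
S_prime = np.array([[1,  1,  1],
                    [1, -1,  0],
                    [1,  0, -1]])
# Rank 3 means S' is independent in R^3, hence S is independent in P_2.
print(np.linalg.matrix_rank(S_prime))   # 3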

Remark 4.1.11.

Statements (2) and (3) of Theorem 4.1.10 tell us that the coordinate transformation is injective (or one-to-one) and surjective (or onto), respectively. (See Definition 0.2.7).
As an illustration of the coordinate vector mantra, we describe a general method of contracting and extending subsets of a general finite-dimensional vector space \(V\) to bases. The method translates the problem into \(\R^n\) using the coordinate transformation, applies the relevant algorithm available to us for subsets of \(\R^n\text{,}\) and then “lifts” the results back to \(V\) using the coordinate transformation again.

Example 4.1.13.

The set
\begin{equation*} S=\left \{ A_1=\begin{bmatrix}2\amp 1\\ 0\amp -2 \end{bmatrix} , A_2=\begin{bmatrix}1\amp 1\\ 1\amp -1 \end{bmatrix} , A_3=\begin{bmatrix}0\amp 1\\ 2\amp 0 \end{bmatrix} , A_4=\begin{bmatrix}-1\amp 0\\ 1\amp 1 \end{bmatrix} \right\} \end{equation*}
is a subset of the space \(W=\{ A\in M_{22}\colon \tr A=0\}\text{.}\) Let \(W'=\Span S\text{.}\) Contract \(S\) to a basis of \(W'\) and determine whether \(W'=W\text{.}\)
Hint.
Choose an ordered basis \(B\) of \(M_{22}\) and use the coordinate vector map to translate to a question about subspaces of \(\R^4\text{.}\) Answer this question and translate back to \(M_{22}\text{.}\)
Solution.
Let \(B=(E_{11}, E_{12}, E_{21}, E_{22})\) be the standard basis of \(M_{22}\text{.}\) Apply \([\phantom{\boldv}]_B\) to the elements of the given \(S\) to get a corresponding set \(S'\subseteq\R^4\text{:}\)
\begin{equation*} S'=\left\{ [A_1]_B=(2,1,0,-2), [A_2]_B=(1,1,1,-1), [A_3]_B=(0,1,2,0), [A_4]_B=(-1,0,1,1) \right\}\text{.} \end{equation*}
Apply the column space procedure of Procedure 3.8.13 to contract \(S'\) to a basis \(T'\) of \(\Span S'\text{.}\) This produces the subset
\begin{equation*} T'=\{[A_1]_B=(2,1,0,-2), [A_2]_B=(1,1,1,-1)\}\text{.} \end{equation*}
Translating back to \(V=M_{22}\text{,}\) we conclude that the corresponding set
\begin{equation*} T=\{A_1, A_2\} \end{equation*}
is a basis for \(W'=\Span S\text{.}\) We conclude that \(\dim W'=2 \text{.}\)
Lastly, the space \(W\) of all trace-zero matrices is easily seen to have basis
\begin{equation*} \left\{ \begin{amatrix}[rr]1\amp 0\\ 0 \amp -1 \end{amatrix}, \begin{amatrix}[rr]0\amp 1\\ 0\amp 0 \end{amatrix}, \begin{amatrix}[rr]0 \amp 0\\ 1\amp 0 \end{amatrix} \right\}\text{,} \end{equation*}
and hence \(\dim W=3\text{.}\) Since \(\dim W'\lt\dim W\text{,}\) we conclude that \(W'\ne W\text{.}\)
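The contraction step in this example can be carried out with a short script. The sketch below (assuming Python with SymPy) assembles the coordinate vectors \([A_i]_B\) as the columns of a matrix, row reduces, and reads off the pivot columns, which is the column space procedure used above:

import sympy as sp

# Columns are [A_1]_B, ..., [A_4]_B relative to B = (E11, E12, E21, E22).
cols = sp.Matrix([[ 2,  1, 0, -1],
                  [ 1,  1, 1,  0],
                  [ 0,  1, 2,  1],
                  [-2, -1, 0,  1]])
rref, pivots = cols.rref()
# pivots == (0, 1): the first two columns are pivot columns, so
# {A_1, A_2} is a basis of W' = Span S.
print(pivots)   # (0, 1)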

Exercises 4.1.3 Exercises

WeBWork Exercises

1.
Consider the basis \(B\) of \({\mathbb R}^2\) consisting of vectors
\begin{equation*} \left[\begin{array}{c} -7\cr -1 \end{array}\right] \ \ \ \mbox{and} \ \ \ \left[\begin{array}{c} -3\cr 0 \end{array}\right]. \end{equation*}
Find \(\vec{x}\) in \({\mathbb R}^2\) whose coordinate vector relative to the basis \(B\) is
\([\vec{x}]_B = \left[\begin{array}{c} -1\cr 1 \end{array}\right]\text{.}\)
\(\vec{x} =\) (2 × 1 array)
2.
The set
\begin{equation*} B = \left\lbrace \left[\begin{array}{cc} -1 \amp 3\cr 0 \amp 0 \end{array}\right], \left[\begin{array}{cc} 0 \amp 1\cr 0 \amp -2 \end{array}\right], \left[\begin{array}{cc} 0 \amp 0\cr 0 \amp -2 \end{array}\right] \right\rbrace \end{equation*}
is a basis of the space of upper-triangular \(2\times 2\) matrices.
Find the coordinates of \(M = \left[\begin{array}{cc} -2 \amp 1\cr 0 \amp 8 \end{array}\right]\) with respect to this basis.
\([M]_B=\) (3 × 1 array)
3.
A square matrix \(A\) is called half-magic if the sum of the numbers in each row and column is the same. The common sum in each row and column is denoted by \(s(A)\) and is called the magic sum of the matrix \(A\text{.}\) Let \(V\) be the vector space of \(2\times 2\) half-magic squares.
(a) Find an ordered basis \(B\) for \(V\text{.}\)
\(B = (\) (2 × 2 array), (2 × 2 array) \()\text{.}\)
(b) Find the coordinate vector \([M]_B\) of \(M=\left[\begin{array}{cc} -4 \amp 3\cr 3 \amp -4 \end{array}\right]\) in your chosen ordered basis \(B\text{.}\)
\([M]_B =\) (2 × 1 array).
4.
The set \(B=\left\{ -\left(1+2x^{2}\right), \ x-4-8x^{2}, \ 14-3x+26x^{2} \right\}\) is a basis for \(P_2\text{.}\) Find the coordinates of \(p(x)=49-11x+90x^{2}\) relative to this basis:
\([p(x)]_B=\) (3 × 1 array)

Coordinate vectors in \(\R^n\).

In each exercise an ordered basis \(B\) is given for \(\R^3\text{.}\) Compute \([\boldx]_B\) for the given \(\boldx\in \R^3\text{.}\)
5.
\(B=\left((1,0,0), (2,2,0), (3,3,3) \right)\text{,}\) \(\boldx=(2,-1,3)\)
6.
\(B=\left((5,-12,3), (1,2,3), (-4,5,6) \right)\text{,}\) \(\boldx=(2,-1,3)\)

Coordinate vectors in \(P_n\).

In each exercise an ordered basis \(B\) is given for \(P_2\text{.}\) Compute \([p]_B\) for the given polynomial \(p\in P_2\text{.}\)
7.
\(B=(1,x,x^2)\text{,}\) \(p(x)=-2x^2+3x-5\)
8.
\(B=(x^2+1, x+1, x^2+x)\text{,}\) \(p(x)=x^2-x+2\)

9.

Let \(B=(A_1,A_2,A_3,A_4)\) where
\begin{equation*} A_1=\begin{bmatrix} 1\amp 0\\ 1\amp 0 \end{bmatrix}, A_2=\begin{bmatrix} 1\amp 1\\ 0\amp 0 \end{bmatrix}, A_3=\begin{bmatrix} 1\amp 0\\ 0\amp 1 \end{bmatrix}, A_4=\begin{bmatrix} 0\amp 0\\ 1\amp 0 \end{bmatrix}\text{.} \end{equation*}
You may take for granted that \(B\) is an ordered basis of \(M_{22}\text{.}\)
  1. Compute \(\left [ \begin{bmatrix} 6\amp 2\\ 5\amp 3 \end{bmatrix}\right]_B\text{.}\)
  2. Compute \([A]_B\) for an arbitrary matrix \(\begin{bmatrix} a\amp b\\ c\amp d \end{bmatrix}\in M_{22}\text{.}\)

10.

Let \(S=\{p_1(x)=2x^2+x+1, p_2(x)=x-1, p_3(x)=x^2+1, p_4(x)=x^2-x-1\}\subseteq P_2 \text{.}\)
  1. Use one of the techniques described in Procedure 4.1.12 to contract \(S\) to a basis of \(W=\Span S\text{.}\) To begin, choose your favorite ordered basis of \(P_2\text{.}\)
  2. Use your result in (a) to describe \(W\) in as simple a manner as possible.

11.

Let \(S=\{ p_1=x^3+1, p_2=2x^3+x+1, p_3=3x^3+2x+1, p_4=2x^3+x^2+x+1\}\subseteq P_3\text{.}\)
  1. Use one of the techniques described in Procedure 4.1.12 to contract \(S\) to a basis of \(W=\Span S\text{.}\) To begin, choose your favorite ordered basis of \(P_3\text{.}\)
  2. Use your result in (a) to decide whether \(W=P_3\text{.}\)

12.

Let \(S=\{p_1(x)=x^2+x+1, p_2(x)=3x^2+6x\}\subseteq P_2\text{.}\) Use one of the techniques described in Procedure 4.1.12 to extend \(S\) to a basis of \(P_2\text{.}\)

13.

Let
\begin{equation*} S=\left\{A_1=\begin{amatrix}[rr]1\amp 2\\1\amp 1 \end{amatrix}, \ A_2=\begin{amatrix}[rr] 1\amp 1\\2\amp 1\end{amatrix}, A_3=\begin{amatrix}[rr] -1\amp 1\\ -4\amp -1 \end{amatrix} , A_4=\begin{amatrix}[rr] 0\amp 1\\ 2\amp 0\end{amatrix}\right\}\subseteq M_{22}\text{.} \end{equation*}
  1. Use one of the techniques described in Procedure 4.1.12 to contract \(S\) to a basis of \(W=\Span S\text{.}\)
  2. Show that
    \begin{equation*} W=\left\{\begin{bmatrix}a\amp b\\ c\amp a\end{bmatrix}\colon a,b,c\in \R\right\}\text{.} \end{equation*}
    Use a dimension argument to make your life easier.