
Euclidean vector spaces

Section 5.4 Matrix representations of linear transformations

We have seen how the coordinate vector map can be used to translate a linear algebraic question posed about a finite-dimensional vector space \(V\) into a question about \(\R^n\text{,}\) where we have many computational algorithms at our disposal. We would like to extend this technique to linear transformations \(T\colon V\rightarrow W\text{,}\) where both \(V\) and \(W\) are finite-dimensional. The basic idea, to be fleshed out below, can be described as follows:
  1. Pick a basis \(B\) for \(V\text{,}\) and a basis \(B'\) for \(W\text{.}\)
  2. "Identify" \(V\) with \(\R^n\) and \(W\) with \(\R^m\) using the coordinate vector isomorphisms \([\hspace{5pt}]_B\) and \([\hspace{5pt}]_{B'}\text{,}\) respectively.
  3. "Model" the linear transformation \(T\colon V\rightarrow W\) with a certain linear transformation \(T_A\colon \R^n\rightarrow \R^m\text{.}\)
The matrix \(A\) defining \(T_A\) will be called the matrix representing \(T\) with respect to our choice of basis \(B\) for \(V\) and \(B'\) for \(W\).
In what sense does \(A\) "model" \(T\text{?}\) All the properties of \(T\) we are interested in (\(\NS T\text{,}\) \(\nullity T\text{,}\) \(\im T\text{,}\) \(\rank T\text{,}\) etc.) are perfectly mirrored by the matrix \(A\text{.}\) As a result, this technique allows us to answer questions about the original \(T\) essentially by applying a relevant matrix algorithm to \(A\text{.}\)

Subsection 5.4.1 Matrix representations of linear transformations

Given a linear transformation \(T\colon V\rightarrow W\) and a choice of ordered bases \(B\) and \(B'\) of \(V\) and \(W\text{,}\) respectively, we define the matrix \(A=[T]_B^{B'}\) representing \(T\) column by column, using a familiar-looking formula.

Definition 5.4.1. Matrix representations of linear transformations.

Let \(V\) and \(W\) be vector spaces with ordered bases \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) and \(B'=(\boldw_1, \boldw_2, \dots, \boldw_m)\text{,}\) respectively. Given a linear transformation \(T\colon V\rightarrow W\text{,}\) the matrix representing \(T\) with respect to \(B\) and \(B'\), is the \(m\times n\) matrix \([T]_B^{B'}\) whose \(j\)-th column is \([T(\boldv_j)]_{B'}\text{,}\) considered as a column vector: i.e.,
\begin{equation} [T]_B^{B'}=\begin{amatrix}[cccc]\vert \amp \vert \amp \amp \vert \\ \left[T(\boldv_1)\right]_{B'}\amp [T(\boldv_2)]_{B'}\amp \dots \amp [T(\boldv_n)]_{B'} \\ \vert \amp \vert \amp \amp \vert \end{amatrix}\text{.}\tag{5.10} \end{equation}
In the special case where \(W=V\) and we pick \(B'=B\) we write simply \([T]_B\text{.}\)
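Definition 5.4.1 is completely algorithmic: apply \(T\) to each basis vector of \(B\text{,}\) take \(B'\)-coordinates, and assemble the results as columns. The following minimal Python/NumPy sketch automates this recipe (the helper `matrix_rep` and the toy transformation are hypothetical illustrations of ours, not notation from the text):

```python
import numpy as np

def matrix_rep(T, basis_B, coords_Bprime):
    # The j-th column of [T]_B^{B'} is the B'-coordinate vector of T(v_j),
    # exactly as in equation (5.10).
    return np.column_stack([coords_Bprime(T(v)) for v in basis_B])

# Hypothetical toy instance: T(x, y) = (x + y, x - y) on R^2, with the
# standard basis on both sides, so the codomain coordinate map is the identity.
T = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
B = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
A = matrix_rep(T, B, lambda w: w)   # A = [[1, 1], [1, -1]]
```

Any finite-dimensional example fits this pattern once vectors are encoded as coordinate arrays.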

Example 5.4.2. Matrix representation.

The function
\begin{align*} T\colon \R^3 \amp \rightarrow M_{22}\\ (x,y,z) \amp \mapsto \begin{bmatrix} x+y+z\amp x-z\\ x+2y\amp y-z \end{bmatrix} \end{align*}
is linear. Compute \([T]_{B}^{B'}\text{,}\) where \(B\) and \(B'\) are the standard bases for \(\R^3\) and \(M_{22}\text{,}\) respectively.
Solution.
We have \(B=(\bolde_1,\bolde_2, \bolde_3)\) and \(B'=(E_{11},E_{12},E_{21},E_{22})\text{.}\) By definition, we have
\begin{align*} [T]_{B}^{B'} \amp =\begin{bmatrix} \vert \amp \vert \amp \vert \\ [T(\bolde_1)]_{B'} \amp [T(\bolde_2)]_{B'}\amp [T(\bolde_3)]_{B'} \\ \vert \amp \vert \amp \vert \end{bmatrix} \text{.} \end{align*}
We first compute \(T(\bolde_i)\) for each \(1\leq i\leq 3\text{:}\)
\begin{align*} T(\bolde_1) \amp = T(1,0,0)=\begin{bmatrix}1\amp 1\\ 1 \amp 0\end{bmatrix}\\ T(\bolde_2) \amp = T(0,1,0)=\begin{bmatrix}1\amp 0 \\ 2 \amp 1 \end{bmatrix}\\ T(\bolde_3) \amp = T(0,0,1)=\begin{bmatrix}1\amp -1 \\ 0 \amp -1\end{bmatrix} \text{.} \end{align*}
To finish our computation, we must compute \([T(\bolde_i)]_{B'}\) for each \(1\leq i\leq 3\text{.}\) Since \(B'\) is the standard basis of \(M_{22}\text{,}\) this is not difficult: in general we have
\begin{equation*} \left[ \begin{bmatrix}a\amp b\\ c\amp d\end{bmatrix}\right]_{B'}=(a,b,c,d)\text{.} \end{equation*}
Thus
\begin{align*} [T(\bolde_1)]_{B'} \amp = (1,1,1,0)\\ [T(\bolde_2)]_{B'} \amp = (1,0,2,1)\\ [T(\bolde_3)]_{B'} \amp = (1,-1,0,-1) \end{align*}
and
\begin{equation*} [T]_{B}^{B'}=\begin{amatrix}[rrr] 1\amp 1\amp 1\\ 1\amp 0\amp -1\\ 1\amp 2\amp 0\\ 0\amp 1\amp -1\end{amatrix}\text{.} \end{equation*}
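As a quick sanity check of the matrix just computed, multiplying it against the coordinate vector \([(x,y,z)]_B=(x,y,z)\) should reproduce the \(B'\)-coordinates \((a,b,c,d)\) of \(T(x,y,z)\text{.}\) A NumPy sketch of this check (the test vector is an arbitrary choice of ours):

```python
import numpy as np

# [T]_B^{B'} as computed above
A = np.array([[1, 1,  1],
              [1, 0, -1],
              [1, 2,  0],
              [0, 1, -1]])

def T_coords(x, y, z):
    # B'-coordinates (a, b, c, d) of the 2x2 matrix T(x, y, z)
    return np.array([x + y + z, x - z, x + 2*y, y - z])

v = np.array([2, -1, 3])                      # arbitrary test vector
assert np.array_equal(A @ v, T_coords(*v))    # both give (4, -1, 0, -4)
```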
The formula for \([T]_{B}^{B'}\) should remind you of the formula from Theorem 5.1.26 used for computing the standard matrix for a linear transformation \(T\colon \R^n\rightarrow \R^m\text{:}\) i.e., the matrix \(A\) such that \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\) Theorem 5.4.3 makes this resemblance precise.

Proof.

According to the recipe in Theorem 5.1.26, we have
\begin{equation*} A= \begin{bmatrix}\vert\amp \vert\amp \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\amp \cdots \amp T(\bolde_n)\\ \vert\amp \vert\amp \amp \vert \end{bmatrix}\text{.} \end{equation*}
Let \(B\) and \(B'\) be the standard ordered bases of \(\R^n\) and \(\R^m\text{,}\) respectively. To see why \(A=[T]_{B}^{B'}\text{,}\) observe that for all \(1\leq j\leq n\) the \(j\)-th column of \(A\) is \(T(\bolde_j)\) and the \(j\)-th column of \([T]_B^{B'}\) is \([T(\bolde_j)]_{B'}\text{.}\) That these are equal is a result of the fact that for all vectors \(\boldw\in \R^m\) we have \([\boldw]_{B'}=\boldw\text{:}\) that is, the coordinate vector of a vector \(\boldw\in \R^m\) with respect to the standard basis is just \(\boldw\) itself. (See Example 5.3.5.)

Remark 5.4.4.

Let \(T\colon \R^n\rightarrow \R^m\) be a linear transformation, and let \(A\) be its standard matrix: i.e., \(A\) is the \(m\times n\) matrix satisfying \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\) According to Theorem 5.4.3, the standard matrix \(A\) is just one way of representing \(T\text{:}\) namely, the representation with respect to the standard bases of \(\R^n\) and \(\R^m\text{.}\) This raises the question of whether a different choice of bases might give rise to a more convenient matrix representation of \(T\text{.}\) The answer is yes, as we will see over the course of this chapter.

Example 5.4.5. Different choice of bases.

Define \(T\colon \R^2\rightarrow \R^2\) as \(T(x,y)=(4x-3y,2x-y)\text{.}\)
  1. Compute \([T]_B\text{,}\) where \(B=(\bolde_1, \bolde_2)\) is the standard basis of \(\R^2\text{.}\)
  2. Compute \([T]_{B'}\text{,}\) where \(B'=((1,1), (1,-1))\text{.}\)
Solution.
  1. According to Theorem 5.4.3, since \(B\) is the standard basis, \([T]_B\) is the matrix \(A\) such that \(T=T_A\text{:}\)
    \begin{align*} [T]_B\amp=\begin{bmatrix} \vert \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\\ \vert \amp \vert \end{bmatrix} \\ \amp= \begin{amatrix}[rr] 4\amp -3\\ 2\amp -1 \end{amatrix} \text{.} \end{align*}
  2. We have
    \begin{align*} [T]_{B'}=[T]_{B'}^{B'} \amp = \begin{bmatrix} \vert\amp \vert\\ [T((1,1))]_{B'}\amp [T(1,-1)]_{B'}\\ \vert \amp \vert \end{bmatrix} \\ \amp = \begin{bmatrix} \vert\amp \vert\\ [(1,1)]_{B'}\amp [(7,3)]_{B'}\\ \vert \amp \vert \end{bmatrix}\\ \amp = \begin{amatrix}[rr] 1\amp 5\\ 0\amp 2 \end{amatrix}\text{,} \end{align*}
    where the last equality uses the fact that \([(1,1)]_{B'}=(1,0)\) and \([(7,3)]_{B'}=(5,2)\text{,}\) as you can verify yourself.
So we have \([T]_B=\begin{amatrix}[rr]4\amp -3\\ 2\amp -1 \end{amatrix}\) and \([T]_{B'}=\begin{amatrix}[rr]1\amp 5\\ 0\amp 2 \end{amatrix}\text{.}\) Moral: different choices of bases yield different matrix representations.
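The computation of \([T]_{B'}\) can also be verified numerically: if \(P\) is the matrix whose columns are the vectors of \(B'\text{,}\) then \([\boldw]_{B'}\) is the solution \(\boldc\) of \(P\boldc=\boldw\text{,}\) so each column of \([T]_{B'}\) comes from solving a small linear system. A NumPy sketch of this check (our own tooling choice, not part of the text):

```python
import numpy as np

T = lambda v: np.array([4*v[0] - 3*v[1], 2*v[0] - v[1]])
Bp = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]   # the basis B'
P = np.column_stack(Bp)                              # columns are B' vectors

# Each column of [T]_{B'} is [T(b)]_{B'}, i.e. the solution of P c = T(b).
M = np.column_stack([np.linalg.solve(P, T(b)) for b in Bp])
# M should match the hand computation: [[1, 5], [0, 2]]
```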

Subsection 5.4.2 Matrix representations as models

Before moving to more examples, we describe in what precise sense the matrix \(A=[T]_B^{B'}\) models the original linear transformation \(T\colon V\rightarrow W\text{,}\) and how we can use \(A\) to answer questions about \(T\text{.}\) The next theorem is key to understanding this.

Proof.

Let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\text{.}\)
  1. By definition we have
    \begin{equation*} [T]_B^{B'}=\begin{amatrix}[cccc]\vert \amp \vert \amp \amp \vert \\ \left[T(\boldv_1)\right]_{B'}\amp [T(\boldv_2)]_{B'}\amp \dots \amp [T(\boldv_n)]_{B'} \\ \vert \amp \vert \amp \amp \vert \end{amatrix}\text{.} \end{equation*}
    Given any \(\boldv\in V\text{,}\) we can write
    \begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\dots +c_n\boldv_n \end{equation*}
    for some \(c_i\in \R\text{.}\) Then
    \begin{align*} [T]_{B}^{B'}[\boldv]_B \amp= [T]_{B}^{B'} \begin{bmatrix} c_1\\ c_2\\ \vdots \\ c_n \end{bmatrix} \\ \amp=c_1[T(\boldv_1)]_{B'}+c_2[T(\boldv_2)]_{B'}+\cdots +c_n[T(\boldv_n)]_{B'} \amp (\text{column method})\\ \amp = [c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n)]_{B'} \amp (\knowl{./knowl/xref/th_coordinates.html}{\text{5.3.11}})\\ \amp=[T(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n)]_{B'} \amp (T \text{ is linear})\\ \amp =[T(\boldv)]_{B'}\text{,} \end{align*}
    as desired.
  2. Assume \(A\) satisfies
    \begin{equation*} A[\boldv]_B=[T(\boldv)]_{B'} \end{equation*}
    for all \(\boldv\in V\text{.}\) Then in particular we have
    \begin{equation} A[\boldv_i]_B=[T(\boldv_i)]_{B'}\tag{5.12} \end{equation}
    for all \(1\leq i\leq n\text{.}\) Since \(\boldv_i\) is the \(i\)-th element of \(B\text{,}\) we have \([\boldv_i]_B=\bolde_i\text{,}\) the \(i\)-th standard basis element of \(\R^n\text{.}\) Using the column method (3.1.27), we see that
    \begin{equation*} A[\boldv_i]_B=A\bolde_i=\boldc_i, \end{equation*}
    where \(\boldc_i\) is the \(i\)-th column of \(A\text{.}\) Thus (5.12) implies that the \(i\)-th column of \(A\) is equal to \([T(\boldv_i)]_{B'}\text{,}\) the \(i\)-th column of \([T]_B^{B'}\text{,}\) for all \(1\leq i\leq n\text{.}\) Since \(A\) and \([T]_{B}^{B'}\) have identical columns, we conclude that \(A=[T]_{B}^{B'}\text{,}\) as desired.

Remark 5.4.7. Uniqueness of \([T]_B^{B'}\).

The uniqueness property described in (2) of Theorem 5.4.6 provides an alternative way of computing \([T]_{B}^{B'}\) that can be useful in certain situations: namely, exhibit an \(m\times n\) matrix \(A\) that satisfies the defining property
\begin{equation*} A[\boldv]_B=[T(\boldv)]_{B'} \end{equation*}
for all \(\boldv\in V\text{.}\) Since there is only one such matrix, we must have \(A=[T]_B^{B'}\) in this case.
Let \(T\colon V\rightarrow W\text{,}\) \(B\text{,}\) and \(B'\) be as in Theorem 5.4.6. The defining property (5.11) of \([T]_B^{B'}\) can be summarized by saying that the following diagram is commutative.
Figure 5.4.8. Commutative diagram for \([T]_B^{B'}\)
The diagram being commutative here means that starting with an element \(\boldv\in V\) in the top left of the diagram, we get the same result in the bottom right whether we first apply \(T\) and then \([\hspace{5pt}]_{B'}\) ("go right, then down"), or first apply \([\hspace{5pt}]_B\) and then \([T]_B^{B'}\) ("go down, then right")! (The bottom map should technically be labeled \(T_A\text{,}\) where \(A=[T]_B^{B'}\text{,}\) but this would detract from the elegance of the diagram.)
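Commutativity is easy to test on a concrete instance. As a hypothetical illustration of ours (not one of the text's examples), take the derivative map \(T\colon P_2\rightarrow P_1\text{,}\) \(T(p(x))=p'(x)\text{,}\) with standard bases \(B=(1,x,x^2)\) and \(B'=(1,x)\text{;}\) storing polynomials by their coefficient lists makes both vertical coordinate maps the identity:

```python
import numpy as np

# [T]_B^{B'} for T = d/dx: columns are the B'-coordinates of
# d/dx(1) = 0, d/dx(x) = 1, and d/dx(x^2) = 2x.
A = np.array([[0, 1, 0],
              [0, 0, 2]])

def deriv(coeffs):
    # p(x) = c0 + c1*x + c2*x^2  ->  p'(x) = c1 + 2*c2*x
    c0, c1, c2 = coeffs
    return np.array([c1, 2 * c2])

p = np.array([5, -3, 4])                   # p(x) = 5 - 3x + 4x^2
# "go right, then down" agrees with "go down, then right":
assert np.array_equal(deriv(p), A @ p)     # both give (-3, 8)
```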
Besides commutativity, the other important feature of Figure 5.4.8 is that the two vertical coordinate transformations identify the domain and codomain of \(T\) with the familiar spaces \(\R^n\) and \(\R^m\) in a one-to-one manner. (Using the language of Section 5.2, these maps are isomorphisms.) These properties together allow us to translate any linear algebraic question about \(T\) to an equivalent question about the matrix \(A\text{,}\) as the following theorem indicates.

Proof.

Proof of (1).
We have
\begin{align*} \boldv\in \NS T \amp\iff T(\boldv)=\boldzero \\ \amp\iff [T(\boldv)]_{B'}=[\boldzero]_{B'} \amp (\knowl{./knowl/xref/th_coordinates.html}{\text{Theorem 5.3.11}}, (2))\\ \amp \iff [T]_{B}^{B'}[\boldv]_B=\boldzero \amp (\knowl{./knowl/xref/eq_matrixrep_prop.html}{\text{(5.11)}})\\ \amp \iff [\boldv]_B\in\NS A \amp (A=[T]_B^{B'}) \text{.} \end{align*}
Proof of (2).
We have
\begin{align*} \boldw\in \im T \amp\iff \boldw=T(\boldv) \text{ for some } \boldv\in V \\ \amp\iff [\boldw]_{B'}=[T(\boldv)]_{B'} \text{ for some } \boldv \in V\amp (\knowl{./knowl/xref/th_coordinates.html}{\text{Theorem 5.3.11}}-(2)) \\ \amp\iff [\boldw]_{B'}=[T]_{B}^{B'}[\boldv]_B \text{ for some } \boldv\in V \amp (\knowl{./knowl/xref/eq_matrixrep_prop.html}{\text{(5.11)}}) \\ \amp\iff [\boldw]_{B'}=A\boldx \text{ for some } \boldx\in \R^n \amp (A=[T]_B^{B'}, \knowl{./knowl/xref/th_coordinates.html}{\text{Theorem 5.3.11}}-(3))\\ \amp \iff [\boldw]_{B'}\in \CS A\text{.} \end{align*}
The last equivalence follows from the fact that
\begin{equation*} \CS A=\im T_A=\{\boldb\in \R^m\colon \boldb=A\boldx \text{ for some } \boldx\in \R^n\}\text{.} \end{equation*}
Proof (3)-(4).
As an illustration of Theorem 5.4.9, we revisit the linear transformation \(T\colon M_{nn}\rightarrow M_{nn}\text{,}\) \(T(A)=A^T-A\text{,}\) treated in Example 5.2.4 and Example 5.2.11. Recall that we concluded in these examples that \(\NS T\) is the space of all symmetric matrices, and that \(\im T\) is the space of all skew-symmetric matrices. In Example 5.4.10 we derive this result yet again in a slightly more computational manner, in the case where \(n=2\text{.}\)

Example 5.4.10. Computing with matrix representations.

The function
\begin{align*} F\colon M_{22} \amp \rightarrow M_{22}\\ A \amp \mapsto A^T-A \end{align*}
is linear.
  1. Compute \(A=[F]_{B}\text{,}\) where \(B=(E_{11},E_{12},E_{21},E_{22})\) is the standard basis of \(M_{22}\text{.}\)
  2. Compute bases of \(\NS A\) and \(\CS A\text{.}\)
  3. "Lift" these bases to bases of \(\NS F\) and \(\im F\text{,}\) using the coordinate vector isomorphism \([\phantom{\boldv}]_B\text{.}\) Use these bases to identify \(\NS F\) and \(\im F\) as familiar subspaces of \(2\times 2\) matrices.
Solution.
First compute
\begin{align*} F(E_{11}) \amp =E_{11}^T-E_{11}=\begin{amatrix}[rr]0\amp 0 \\ 0 \amp 0\end{amatrix}\\ F(E_{12}) \amp =E_{12}^T-E_{12}=\begin{amatrix}[rr]0\amp -1 \\ 1 \amp 0\end{amatrix}\\ F(E_{21}) \amp =E_{21}^T-E_{21}=\begin{amatrix}[rr]0\amp 1 \\ -1 \amp 0\end{amatrix}\\ F(E_{22}) \amp =E_{22}^T-E_{22}=\begin{amatrix}[rr]0 \amp 0 \\ 0 \amp 0\end{amatrix} \end{align*}
Next, using the fact that for the standard basis \(B\) of \(M_{22}\) we have
\begin{equation*} \left[ \begin{bmatrix}a\amp b\\ c\amp d\end{bmatrix}\right]_B=(a,b,c,d)\text{,} \end{equation*}
we see that
\begin{align*} A\amp =\begin{bmatrix} \vert\amp \vert \amp \vert \amp \vert \\ [F(E_{11})]_{B}\amp [F(E_{12})]_{B} \amp [F(E_{21})]_{B} \amp [F(E_{22})]_{B} \\ \vert\amp \vert \amp \vert \amp \vert \end{bmatrix}\\ \amp = \begin{amatrix}[rrrr] 0\amp 0\amp 0\amp 0 \\ 0\amp -1\amp 1\amp 0 \\ 0\amp 1\amp -1 \amp 0\\ 0\amp 0\amp 0\amp 0 \end{amatrix}\text{.} \end{align*}
It is not difficult to compute bases for \(\NS A\) and \(\CS A\) using Procedure 4.5.10, but the matrix \(A\) is simple enough that we will compute these by inspection. For example, we see easily that \(\CS A=\Span\{(0,-1,1,0)\}\text{.}\) From the rank-nullity theorem, it then follows that \(\dim \NS A=4-\dim \CS A=4-1=3\text{.}\) Thus we need just three linearly independent elements of \(\NS A\) to get a basis. Again by inspection (using 3.1.27), we see that \((1,0,0,0),(0,0,0,1),(0,1,1,0)\in \NS A\text{.}\) It is fairly easy to see that these three vectors are linearly independent, and thus form a basis of \(\NS A\text{.}\)
To "lift" the bases \(\{(1,0,0,0),(0,0,0,1),(0,1,1,0)\}\) and \(\{(0,-1,1,0)\}\) to bases of \(\NS F\) and \(\im F\text{,}\) we simply find the matrices in \(M_{22}\) that correspond to these \(4\)-vectors via the coordinate vector isomorphism \([\phantom{\boldv}]_B\text{:}\) i.e., the matrices that have these \(4\)-vectors as their coordinate vectors. This is easily done by inspection. We conclude that
\begin{equation*} B_1=\left\{A_1=\begin{bmatrix}1\amp 0\\ 0\amp 0\end{bmatrix}, A_2=\begin{amatrix}[cc]0\amp 0 \\ 0\amp 1\end{amatrix}, A_3=\begin{amatrix}[cc]0\amp 1 \\ 1\amp 0\end{amatrix}\right\} \end{equation*}
is a basis of \(\NS F\text{,}\) and
\begin{equation*} B_2=\left\{\begin{amatrix}[rr]0\amp -1 \\ 1\amp 0\end{amatrix}\right\} \end{equation*}
is a basis of \(\im F\text{.}\)
Lastly, since \(A_1,A_2,A_3\) are all symmetric, \(\NS F=\Span \{A_1,A_2,A_3\}\) is a subspace of the space of symmetric matrices. Since both spaces have dimension three, we conclude that they are equal. A similar argument shows that
\begin{equation*} \im F=\Span\left\{\begin{amatrix}[rr]0\amp -1 \\ 1\amp 0\end{amatrix} \right\} \end{equation*}
is precisely the space of skew-symmetric matrices.
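The by-inspection computation above can be double-checked with a computer algebra system. A SymPy sketch (SymPy, with its `nullspace` and `columnspace` methods, is our assumed tooling, not part of the text):

```python
import sympy as sp

# A = [F]_B as computed in the example
A = sp.Matrix([[0,  0,  0, 0],
               [0, -1,  1, 0],
               [0,  1, -1, 0],
               [0,  0,  0, 0]])

null_basis = A.nullspace()     # three vectors, so nullity(A) = 3
col_basis = A.columnspace()    # one vector, so rank(A) = 1
# col_basis[0] is (0, -1, 1, 0); lifting it through [ ]_B yields the
# skew-symmetric matrix [[0, -1], [1, 0]], matching the basis B_2 above.
```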

Example 5.4.11. Video example: matrix representations.

Figure 5.4.12. Video: matrix representations

Exercises 5.4.3 Exercises

WeBWork Exercises

1.
To every linear transformation \(T\) from \({\mathbb R}^2\) to \({\mathbb R}^2\text{,}\) there is an associated \(2 \times 2\) matrix. Match the following linear transformations with their associated matrix.
  1. Reflection about the y-axis
  2. Counter-clockwise rotation by \(\pi/2\) radians
  3. Reflection about the line y=x
  4. The projection onto the x-axis given by T(x,y)=(x,0)
  5. Reflection about the \(x\)-axis
  6. Clockwise rotation by \(\pi/2\) radians
  1. \(\displaystyle \begin{pmatrix}0\amp -1\\ 1\amp 0 \end{pmatrix}\)
  2. \(\displaystyle \begin{pmatrix}1\amp 0\\ 0\amp 0 \end{pmatrix}\)
  3. \(\displaystyle \begin{pmatrix}-1\amp 0\\ 0\amp 1 \end{pmatrix}\)
  4. \(\displaystyle \begin{pmatrix}0\amp 1\\ -1\amp 0 \end{pmatrix}\)
  5. \(\displaystyle \begin{pmatrix}1\amp 0\\ 0\amp -1 \end{pmatrix}\)
  6. \(\displaystyle \begin{pmatrix}0\amp 1\\ 1\amp 0 \end{pmatrix}\)
  7. None of the above
2.
Find the matrix \(A\) of the linear transformation \(T(f(t)) = f(7 t + 6)\) from \(P_2\) to \(P_2\) with respect to the standard basis for \(P_2\text{,}\) \(\left\lbrace 1, t, t^2 \right\rbrace\text{.}\)
\(A=\) (3 × 3 array)
3.
The matrices
\begin{equation*} A_1 = \left[\begin{array}{cc} 1 \amp 0\cr 0 \amp 0 \end{array}\right], \ A_2 = \left[\begin{array}{cc} 0 \amp 1\cr 0 \amp 0 \end{array}\right], \end{equation*}
\begin{equation*} A_3 = \left[\begin{array}{cc} 0 \amp 0\cr 1 \amp 0 \end{array}\right], \ A_4 = \left[\begin{array}{cc} 0 \amp 0\cr 0 \amp 1 \end{array}\right], \end{equation*}
form a basis for the linear space \(V={\mathbb R}^{2\times 2}.\) Write the matrix of the linear transformation \(T:{\mathbb R}^{2\times 2} \rightarrow {\mathbb R}^{2\times 2}\) such that \(T(A)=13 A + 4 A^T\) relative to this basis.
4.
Find the matrix \(A\) of the linear transformation
\begin{equation*} T(M)= \left[\begin{array}{cc} 9 \amp 7\cr 0 \amp 2 \end{array}\right] M {\left[\begin{array}{cc} 9 \amp 7\cr 0 \amp 2 \end{array}\right]}^{-1} \end{equation*}
from \(U^{2\times 2}\) to \(U^{2\times 2}\) (upper triangular matrices) with respect to the standard basis for \(U^{2\times 2}\) given by
\begin{equation*} \left\lbrace \left[\begin{array}{cc} 1 \amp 0\cr 0 \amp 0 \end{array}\right], \left[\begin{array}{cc} 0 \amp 1\cr 0 \amp 0 \end{array}\right], \left[\begin{array}{cc} 0 \amp 0\cr 0 \amp 1 \end{array}\right] \right\rbrace. \end{equation*}
\(A=\) (3 × 3 array)
5.
Find the matrix \(A\) of the linear transformation
\begin{equation*} T(M)= \left[\begin{array}{cc} 1 \amp 5\cr 0 \amp 7 \end{array}\right] M \end{equation*}
from \(U^{2\times 2}\) to \(U^{2\times 2}\) (upper triangular matrices) with respect to the basis
\begin{equation*} \left\lbrace \left[\begin{array}{cc} 1 \amp 0\cr 0 \amp 0 \end{array}\right], \left[\begin{array}{cc} 1 \amp 1\cr 0 \amp 0 \end{array}\right], \left[\begin{array}{cc} 0 \amp 0\cr 0 \amp 1 \end{array}\right] \right\rbrace \end{equation*}
\(A=\) (3 × 3 array)
6.
Let \(V\) be the space spanned by the two functions \(\cos(t)\) and \(\sin(t)\text{.}\) Find the matrix \(A\) of the linear transformation \(T(f(t)) = f''(t) + 3 f'(t) + 4 f(t)\) from \(V\) into itself with respect to the basis \(\left\lbrace \cos(t), \sin(t) \right\rbrace\text{.}\)
\(A=\) (2 × 2 array)
7.
Let \(V\) be the plane with equation \(x_1 + 2 x_2 - 4 x_3 =0\) in \({\mathbb R}^3\text{.}\) The linear transformation
\begin{equation*} T\left(\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\right)= \left[\begin{array}{ccc} -6 \amp -2 \amp -4\cr 1 \amp -1 \amp -2\cr -1 \amp -1 \amp -2 \end{array}\right] \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} \end{equation*}
maps \(V\) into \(V\) so, by restricting it to \(V\text{,}\) we may regard it as a linear transformation \(\tilde T \colon V \to V\text{.}\)
Find the matrix \(A\) of the restricted map \(\tilde T \colon V \to V\) with respect to the basis \(\left\lbrace \left[\begin{array}{c} -2\cr 1\cr 0 \end{array}\right], \left[\begin{array}{c} 4\cr 0\cr 1 \end{array}\right] \right\rbrace.\)
\(A=\) (2 × 2 array)
8.
Consider the multiplication operator \(L_A:{\mathbb R}^2\to {\mathbb R}^2\) defined by \(L_A(x)=Ax\) where \(A=\left[\begin{array}{cc} -1 \amp 0\cr 3 \amp -3 \end{array}\right]\text{.}\) Find an ordered basis \(B=(b_1,b_2)\) for \({\mathbb R}^2\) such that \([L_A]_B^B=\left[\begin{array}{cc} -13 \amp -15\cr 8 \amp 9 \end{array}\right]\text{.}\) Hint: \([L_A]_E^E=A\) where \(E\) is the standard basis.
\(b_1 =\) (2 × 1 array), \(b_2 =\) (2 × 1 array)

Exercise Group.

Compute \([T]_B^{B'}\) for each provided \(T\colon V\rightarrow W\) and choice of bases \(B\) and \(B'\) of \(V\) and \(W\text{.}\) You may assume that the given \(T\) is a linear transformation.
9.
\(T\colon P_3\rightarrow P_4\text{,}\) \(T(p(x))=p'(x)-x\, p(x)\text{;}\) \(B=(x^3, x^2, x, 1)\text{;}\) \(B'=(x^4, x^3, x^2, x,1)\)
10.
\(T\colon P_2\rightarrow M_{22}\text{,}\) \(T(p(x))=\begin{bmatrix} p(-1)\amp p(0)\\ p(1)\amp p(2) \end{bmatrix}\text{;}\) \(B=(x^2,x,1)\text{;}\) \(B'=(E_{11}, E_{12}, E_{21}, E_{22})\)
11.
\(T\colon \R^2\rightarrow \R^2\text{,}\) \(T=T_A\) where \(A=\begin{bmatrix} 1\amp 2\\ 2\amp 4 \end{bmatrix}\text{;}\) \(B=B'=\left((1,2),(2,-1)\right)\)

12.

Suppose \(T\colon P_3\rightarrow P_2\) is a linear transformation with matrix representation
\begin{equation*} [T]_B^{B'}=\begin{bmatrix} 2\amp 0\amp 0\amp -1\\ 0\amp 1\amp -1\amp 0\\ 1\amp 1\amp 1\amp 1 \end{bmatrix}\text{,} \end{equation*}
where \(B=(x^3, x^2, x, 1)\) and \(B'=(x^2, x, 1)\text{.}\) Use the defining property (5.11) of \([T]_B^{B'}\) to determine the formula for \(T(ax^3+bx^2+cx+d)\) for an arbitrary polynomial \(ax^3+bx^2+cx+d\in P_3\text{.}\)

13.

Suppose \(T\colon \R^2\rightarrow \R^2\) is a linear transformation with matrix representation
\begin{equation*} [T]_{B}=\begin{bmatrix} 1\amp 1\\ 1\amp 2 \end{bmatrix}\text{,} \end{equation*}
where \(B=\left( (1,2), (-1,1)\right)\text{.}\)
  1. Use the defining property (5.11) of \([T]_B\) to compute \(T((1,0))\) and \(T((0,1))\text{.}\) (You will first need to compute \([(1,0)]_B\) and \([(0,1)]_B\text{.}\) )
  2. Use (a) and the fact that \(T\) is linear to give a general formula for \(T(x,y)\) in terms of \(x\) and \(y\text{.}\)

14.

The function \(S\colon M_{22}\rightarrow M_{22}\) defined as \(S(A)=A^T-A\) is a linear transformation.
  1. Compute \(C=[S]_B\text{,}\) where \(B\) is the standard basis of \(M_{22}\text{.}\)
  2. Use Theorem 5.4.9 to lift bases of \(\NS C\) and \(\CS C\) back to bases for \(\NS S\) and \(\im S\text{.}\)
  3. Identify \(\NS S\) and \(\im S\) as familiar subspaces of matrices.

15.

The function \(T\colon P_2\rightarrow P_3\) defined by \(T(p(x))=xp(x-3)\) is a linear transformation.
  1. Compute \(A=[T]_B^{B'}\text{,}\) where \(B\) and \(B'\) are the standard bases of \(P_2\) and \(P_3\text{.}\)
  2. Use Theorem 5.4.9 to lift bases of \(\NS A\) and \(\CS A\) back to bases for \(\NS T\) and \(\im T\text{.}\)

16.

Let \(T\colon \R^2\rightarrow \R^3\) be the linear transformation defined as \(T(x,y)=(x-y, x+2y, y)\text{,}\) and let \(B=\left((1,0), (0,1)\right)\text{,}\) \(B'=\left((1,1),(1,-1)\right)\text{,}\) \(B''=\left((1,0,0),(0,1,0),(0,0,1)\right)\text{.}\)
  1. Compute \([T]_B^{B''}\text{.}\)
  2. Compute \([T]_{B'}^{B''}\text{.}\)

17.

Let \(T\colon P_2\rightarrow M_{22}\) be the linear transformation defined as
\begin{equation*} T(p(x))=\begin{bmatrix} p(0)\amp p(1)\\ p(-1)\amp p(0) \end{bmatrix}\text{,} \end{equation*}
and let \(B=(1,x,x^2)\text{,}\) \(B'=(1, 1+x, 1+x^2)\text{,}\) \(B''=(E_{11}, E_{12}, E_{21}, E_{22})\) (the standard basis of \(M_{22}\)).
  1. Compute \([T]_B^{B''}\text{.}\)
  2. Compute \([T]_{B'}^{B''}\text{.}\)

18.

Let \(T_1\colon V\rightarrow W\) and \(T_2\colon W\rightarrow U\) be linear transformations, and suppose \(B, B', B''\) are ordered bases for \(V\text{,}\) \(W\text{,}\) and \(U\text{,}\) respectively. Prove:
\begin{equation*} [T_2\circ T_1]_B^{B''}=[T_2]_{B'}^{B''}[T_1]_B^{B'}\text{.} \end{equation*}
Hint.
Let \(A_1=[T_1]_B^{B'}\) and \(A_2=[T_2]_{B'}^{B''}\text{.}\) Show that the matrix \(A_2A_1\) satisfies the defining property of \([T_2\circ T_1]_{B}^{B''}\text{:}\) i.e.,
\begin{equation*} A_2A_1[\boldv]_B=[T_2\circ T_1(\boldv)]_{B''} \end{equation*}
for all \(\boldv\in V\text{.}\)