
Euclidean vector spaces

Section 5.1 Linear transformations

As detailed in Definition 5.1.1, a linear transformation is a special type of function between two vector spaces: one that respects, in some sense, the vector operations of both spaces.
This manner of theorizing is typical in mathematics: first we introduce a special class of objects defined axiomatically, then we introduce special functions or maps between these objects. Since the original objects of study (e.g. vector spaces) come equipped with special structural properties (e.g. vector operations), the functions we wish to study are the ones that somehow acknowledge this structure.
You have already seen this principle at work in your study of calculus. First we give \(\R\) some structure by defining a notion of proximity (i.e., \(x\) is close to \(y\) if \(\val{x-y}\) is small), then we introduce a special family of functions that somehow respects this structure: these are precisely the continuous functions!
As you will see, linear transformations are not just interesting objects of study in their own right, they also serve as invaluable tools in our continued exploration of the intrinsic properties of vector spaces.
In the meantime rejoice in the fact that we can now give a succinct definition of linear algebra: it is the theory of vector spaces and the linear transformations between them. Go shout it from the rooftops!

Subsection 5.1.1 Linear transformations

First and foremost, a linear transformation is a function. Before continuing on in this section, you may want to reacquaint yourself with the basic function concepts and notation outlined in Section 0.2.

Definition 5.1.1. Linear transformations.

Let \(V\) and \(W\) be vector spaces. A function \(T\colon V\rightarrow W\) is a linear transformation (or linear) if it satisfies the following properties:
  1. Respects vector addition.
    For all \(\boldv_1, \boldv_2\in V\text{,}\) we have \(T(\boldv_1+\boldv_2)=T(\boldv_1)+T(\boldv_2)\text{.}\)
  2. Respects scalar multiplication.
    For all \(c\in \R\) and \(\boldv\in V\) we have \(T(c\boldv)=cT(\boldv)\text{.}\)
A function between vector spaces is nonlinear if it is not a linear transformation.

Remark 5.1.2. Linear transformations.

How precisely does a linear transformation “respect” vector space structure? In plain English, the two axioms defining a linear transformation read as follows: the image of a sum is the sum of the images, and the image of a scalar multiple is the scalar multiple of the image. Alternatively, we could say that the application of a linear transformation to vectors distributes over vector addition and scalar multiplication.
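To make this concrete, consider the function \(T\colon \R^2\rightarrow \R^2\) defined as \(T(x,y)=(2x,3y)\) (an illustrative example of our own choosing). Both axioms can be verified directly:
\begin{align*} T((x_1,y_1)+(x_2,y_2)) \amp = T(x_1+x_2,y_1+y_2)\\ \amp = (2(x_1+x_2), 3(y_1+y_2))\\ \amp = (2x_1,3y_1)+(2x_2,3y_2)\\ \amp = T(x_1,y_1)+T(x_2,y_2)\\ T(c(x,y)) \amp = T(cx,cy)=(2cx,3cy)=c(2x,3y)=cT(x,y)\text{.} \end{align*}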

Warning 5.1.3. Linear transformations.

A common pitfall is to conflate the definition of a linear transformation with the definition of a subspace. The language and notation of the two definitions share many commonalities, but they are completely different notions. In particular, the linear transformation axioms describe properties of a function between two vector spaces, whereas the subspace axioms describe properties of a subset of a single vector space.
Before getting to examples of linear transformations, it will perhaps be enlightening to consider how a function \(T\colon V\rightarrow W\) between two vector spaces could fail to be a linear transformation. Figure 5.1.4 is an attempt at visualizing how a function might fail one of the two linear transformation axioms. We will often fall back on these types of conceptual visualizations as a means of organizing our thinking about linear transformations. The diagrams deliberately mirror our general function notation
\begin{align*} T\colon V \amp \rightarrow W\\ \boldv \amp \mapsto T(\boldv)\text{,} \end{align*}
placing the domain and codomain on the left and right, respectively, and using mapsto notation \(\boldv\mapsto T(\boldv)\) to indicate where domain elements \(\boldv\in V\) get mapped to by \(T\) in the codomain \(W\text{.}\)
(a) \(T\) fails Axiom i: \(T(\boldv_1+\boldv_2)\ne T(\boldv_1)+T(\boldv_2)\text{.}\)
(b) \(T\) fails Axiom ii: \(T(c\boldv)\ne cT(\boldv)\text{.}\)
Figure 5.1.4. Visualizing the failure of linear transformation axioms

Example 5.1.5. Nonlinear function.

Let \(T\colon \R^2\rightarrow \R^2\) be defined as \(T(x,y)=(x^2-y^2,2xy)\text{.}\)
  1. Does \(T\) satisfy Axiom i? If so, prove it. Otherwise, give an explicit counterexample.
  2. Does \(T\) satisfy Axiom ii? If so, prove it. Otherwise, give an explicit counterexample.
Solution.
  1. \(T\) does not satisfy Axiom i. Let \(\boldx_1=(1,0)\) and \(\boldx_2=(0,1)\text{.}\) We have
    \begin{align*} T(\boldx_1+\boldx_2) \amp=T(1,1)\\ \amp = (0,2)\\ T(\boldx_1)+T(\boldx_2) \amp =T(1,0)+T(0,1)\\ \amp =(1,0)+(-1,0)=(0,0)\text{.} \end{align*}
    We thus see that \(T(\boldx_1+\boldx_2)\ne T(\boldx_1)+T(\boldx_2)\text{.}\)
  2. \(T\) does not satisfy Axiom ii. Let \(\boldx=(1,0)\) and \(c=2\text{.}\) We have
    \begin{align*} T(2\boldx) \amp=T(2,0)=(4,0)\\ 2T(\boldx)\amp = 2(1,0)=(2,0)\text{.} \end{align*}
    We thus see that \(T(2\boldx)\ne 2T(\boldx)\text{.}\)

Remark 5.1.6. Notational quirk.

Example 5.1.5 brings to light a notational quirk when dealing with functions of the form \(T\colon \R^n\rightarrow W\text{.}\) Technically speaking, given an input \(\boldx=(x_1,x_2,\dots, x_n)\in \R^n\) we should write
\begin{equation*} T(\boldx)=T((x_1,x_2,\dots, x_n))\text{.} \end{equation*}
And yet our inner aesthete cries out at the unnecessary nested parentheses, and pleads that the notational laws be relaxed in this specific setting. We shall make it so.
We now turn to functions that do satisfy the linear transformation axioms. As our first examples of linear transformations, we define the zero, identity, and scaling transformations.

Definition 5.1.8. Zero, identity, and scaling transformations.

Let \(V\) and \(W\) be vector spaces.
  • Zero transformation.
    The zero transformation from \(V\) to \(W\), denoted \(T_0\text{,}\) is defined as follows:
    \begin{align*} T_0\colon V \amp\rightarrow W \\ \boldv \amp\mapsto T_0(\boldv)=\boldzero_W\text{,} \end{align*}
    where \(\boldzero_W\) is the zero vector of \(W\text{.}\) In other words, \(T_0\) is the function that maps all elements of \(V\) to the zero vector of \(W\text{.}\)
  • Identity transformation.
    The identity transformation of \(V\), denoted \(\id_V\text{,}\) is defined as follows:
    \begin{align*} \id_V\colon V \amp\rightarrow V \\ \boldv \amp\mapsto \id_V(\boldv)=\boldv\text{.} \end{align*}
    In other words, \(\id_V(\boldv)=\boldv\) for all \(\boldv\in V\text{.}\) When the underlying vector space is clear from the context, we will drop the subscript and write \(\id\) for \(\id_V\text{.}\)
  • Scaling transformation.
    Let \(c\in \R\) be a fixed scalar. The function
    \begin{align*} T\colon V \amp \rightarrow V\\ \boldv \amp \mapsto c \boldv \end{align*}
    is called a scaling transformation with scale factor \(c\text{.}\)
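For instance (a small illustration of our own choosing), the scaling transformation on \(\R^2\) with scale factor \(c=3\) is given by
\begin{equation*} T(x,y)=3(x,y)=(3x,3y)\text{,} \end{equation*}
so that, e.g., \(T(1,-2)=(3,-6)\text{.}\) Observe that the identity transformation \(\id_V\) is the scaling transformation with scale factor \(c=1\text{,}\) and the zero transformation from \(V\) to itself is the scaling transformation with scale factor \(c=0\text{.}\)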

Theorem 5.1.9. Zero and scaling transformations are linear.

Let \(V\) and \(W\) be vector spaces, and let \(c\in \R\text{.}\)
  1. The zero transformation \(T_0\colon V\rightarrow W\) is a linear transformation.
  2. The scaling transformation \(T\colon V\rightarrow V\text{,}\) \(T(\boldv)=c\boldv\text{,}\) is a linear transformation. In particular, the identity transformation \(\id_V\) (the case \(c=1\)) is linear.

Proof.

  1. Let \(T_0\colon V\rightarrow W\) be the zero function: i.e., \(T_0(\boldv)=\boldzero_W\) for all \(\boldv\in V\text{.}\) We verify each defining property separately.
    1. Given \(\boldv_1, \boldv_2\in V\text{,}\) we have
      \begin{align*} T_0(\boldv_1+\boldv_2)\amp =\boldzero_W \amp (\text{by def.}) \\ \amp =\boldzero_W+\boldzero_W \\ \amp = T_0(\boldv_1)+T_0(\boldv_2) \amp (\text{by def.})\text{.} \end{align*}
    2. Given \(c\in \R\) and \(\boldv\in V\text{,}\) we have
      \begin{align*} T_0(c\boldv) \amp = \boldzero_W \amp (\text{def. of } T_0)\\ \amp = c\boldzero_W \amp (\knowl{./knowl/xref/th_vectorspace_props.html}{\text{1.1.9}} ) \\ \amp = cT_0(\boldv) \amp (\text{def. of } T_0)\text{.} \end{align*}
    This proves that \(T_0\colon V\rightarrow W\) is a linear transformation.
  2. Fix a scalar \(c\in \R\) and let \(T\colon V\rightarrow V\) be the scaling transformation defined as \(T(\boldv)=c\boldv\) for all \(\boldv\in V\text{.}\)
    1. Given \(\boldv_1, \boldv_2\in V\text{,}\) we have
      \begin{align*} T(\boldv_1+\boldv_2)\amp =c(\boldv_1+\boldv_2) \amp (\text{def. of } T)\\ \amp = c\boldv_1+c\boldv_2 \amp (\text{vec. props.})\\ \amp = T(\boldv_1)+T(\boldv_2) \amp (\text{def. of } T)\text{.} \end{align*}
    2. Given \(d\in \R\) and \(\boldv\in V\text{,}\) we have
      \begin{align*} T(d\boldv) \amp = c(d\boldv) \amp (\text{def. of } T)\\ \amp = (cd)\boldv \amp (\text{vec. props.}) \\ \amp = (dc)\boldv \amp (\text{real mult. is comm.})\\ \amp = d(c\boldv) \amp (\text{vec. props.})\\ \amp = dT(\boldv) \amp (\text{def. of } T)\text{.} \end{align*}
    This proves that \(T\colon V\rightarrow V\) is a linear transformation.

Theorem 5.1.10. Properties of linear transformations.

Let \(T\colon V\rightarrow W\) be a linear transformation.
  1. \(T(\boldzero_V)=\boldzero_W\text{.}\)
  2. \(T(-\boldv)=-T(\boldv)\) for all \(\boldv\in V\text{.}\)
  3. For all \(c_1,c_2,\dots, c_n\in \R\) and \(\boldv_1,\boldv_2,\dots, \boldv_n\in V\text{,}\) we have
\begin{equation*} T(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n)=c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n)\text{.} \end{equation*}

Proof.

  1. We employ trickery similar to that used in the proof of Theorem 1.1.9. Assuming \(T\) is linear:
    \begin{align*} T(\boldzero_V) \amp= T(\boldzero_V+\boldzero_V)\\ \amp =T(\boldzero_V)+T(\boldzero_V) \amp (\knowl{./knowl/xref/d_linear_transform.html}{\text{Definition 5.1.1}} ) \text{.} \end{align*}
    Thus, whatever \(T(\boldzero_V)\in W\) may be, it satisfies
    \begin{equation*} T(\boldzero_V)=T(\boldzero_V)+T(\boldzero_V)\text{.} \end{equation*}
    Canceling \(T(\boldzero_V)\) on both sides using \(-T(\boldzero_V)\text{,}\) we conclude
    \begin{equation*} \boldzero_W=T(\boldzero_V)\text{.} \end{equation*}
  2. The argument is similar:
    \begin{align*} \boldzero_W \amp= T(\boldzero_V) \amp (\text{by (1)})\\ \amp =T(-\boldv+\boldv)\\ \amp = T(-\boldv)+T(\boldv)\text{.} \end{align*}
    Since \(\boldzero_W=T(-\boldv)+T(\boldv)\text{,}\) adding \(-T(\boldv)\) to both sides of the equation yields
    \begin{equation*} -T(\boldv)=T(-\boldv)\text{.} \end{equation*}
  3. This is an easy proof by induction using the two defining properties of a linear transformation in tandem: the case \(n=1\) is Axiom ii, and for the inductive step one writes \(c_1\boldv_1+\cdots +c_n\boldv_n=(c_1\boldv_1+\cdots +c_{n-1}\boldv_{n-1})+c_n\boldv_n\) and applies Axiom i, the induction hypothesis, and Axiom ii.
As a sort of converse to statement 3 of Theorem 5.1.10, observe that if \(T\colon V\rightarrow W\) satisfies
\begin{equation*} T(c\boldv_1+d\boldv_2)=cT(\boldv_1)+dT(\boldv_2) \end{equation*}
for all \(c, d\in \R\) and \(\boldv_1,\boldv_2\in V\text{,}\) then \(T\) is linear. Indeed, taking the special case \(c=d=1\) yields Axiom i of Definition 5.1.1, and choosing \(d=0\) yields Axiom ii of Definition 5.1.1. As a consequence, we have the following one-step procedure for proving whether a function \(T\colon V\rightarrow W\) between vector spaces is a linear transformation.

Procedure 5.1.12. One-step technique.

To show that a function \(T\colon V\rightarrow W\) between vector spaces is a linear transformation, it suffices to show that
\begin{equation*} T(c\boldv_1+d\boldv_2)=cT(\boldv_1)+dT(\boldv_2) \end{equation*}
for all \(c,d\in \R\) and \(\boldv_1,\boldv_2\in V\text{.}\)

Example 5.1.13. Linear transformation: one-step technique.

Define \(T\colon \R^3\rightarrow \R^2\) as \(T(x,y,z)=(2x+z, y-z)\text{.}\) Use Procedure 5.1.12 to show \(T\) is a linear transformation.
Solution.
Given scalars \(c,d\in \R\) and vectors \(\boldx_1=(x_1,y_1,z_1),\boldx_2=(x_2,y_2,z_2)\in \R^3\text{,}\) we have
\begin{align*} T(c\boldx_1+d\boldx_2) \amp = T(cx_1+dx_2,cy_1+dy_2,cz_1+dz_2) \amp (\text{vector arith.})\\ \amp =\left(2(cx_1+dx_2)+(cz_1+dz_2),(cy_1+dy_2)-(cz_1+dz_2)\right) \amp (\text{def. of } T) \\ \amp = c(2x_1+z_1,y_1-z_1)+d(2x_2+z_2,y_2-z_2) \amp (\text{vector arith.})\\ \amp = cT(\boldx_1)+dT(\boldx_2) \amp (\text{def. of } T)\text{.} \end{align*}
Thus \(T\) is a linear transformation.
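As a quick numerical sanity check of this general verification (with scalars and vectors of our own choosing), take \(c=2\text{,}\) \(d=3\text{,}\) \(\boldx_1=(1,0,2)\text{,}\) and \(\boldx_2=(0,1,1)\text{:}\)
\begin{align*} T(2\boldx_1+3\boldx_2) \amp = T(2,3,7)=(2\cdot 2+7, 3-7)=(11,-4)\\ 2T(\boldx_1)+3T(\boldx_2) \amp = 2(4,-2)+3(1,0)=(8,-4)+(3,0)=(11,-4)\text{,} \end{align*}
in agreement with linearity.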
We continue with some examples of linear transformations involving vector spaces other than \(\R^n\text{.}\) Some of the operations we have already defined on matrices can be viewed as transformations.

Proof.

We leave the proof of (1) to the reader, and prove that the function \(F(A)=A^T\) is a linear transformation from \(M_{mn}\) to \(M_{nm}\text{.}\) We use the one-step technique. Given scalars \(c,d\in \R\) and matrices \(A,B\in M_{mn}\text{,}\) we have
\begin{align*} F(cA+dB) \amp = (cA+dB)^T \amp (\text{def. of } F) \\ \amp = cA^T+dB^T \amp (\knowl{./knowl/xref/th_trans_props.html}{\text{Theorem 3.2.12}} ) \\ \amp = cF(A)+dF(B) \amp (\text{def. of } F)\text{.} \end{align*}
Later, when discussing changes of bases and diagonalizable linear transformations, our computational techniques will rely heavily on the notion of conjugation, defined below. As we show in Theorem 5.1.16, conjugation is also a linear operation. This ends up being very valuable to us, as it means computing conjugates of matrices interacts nicely with matrix addition and scalar multiplication.

Definition 5.1.15. Matrix conjugation.

Let \(n\) be a positive integer, and let \(Q\in M_{nn}\) be a fixed invertible matrix. Given \(A\in M_{nn}\text{,}\) the matrix \(B=Q^{-1}AQ\) is called the conjugate of \(A\) by \(Q\text{.}\) The operation
\begin{align*} M_{nn} \amp \rightarrow M_{nn}\\ A \amp \mapsto Q^{-1}AQ \end{align*}
is called conjugation by \(Q\text{.}\)
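By way of illustration (with matrices of our own choosing), take
\begin{equation*} Q=\begin{amatrix}[rr]1\amp 1\\ 0\amp 1 \end{amatrix}, \quad Q^{-1}=\begin{amatrix}[rr]1\amp -1\\ 0\amp 1 \end{amatrix}, \quad A=\begin{amatrix}[rr]1\amp 0\\ 0\amp 2 \end{amatrix}\text{.} \end{equation*}
The conjugate of \(A\) by \(Q\) is then
\begin{equation*} Q^{-1}AQ=\begin{amatrix}[rr]1\amp -1\\ 0\amp 1 \end{amatrix}\begin{amatrix}[rr]1\amp 1\\ 0\amp 2 \end{amatrix}=\begin{amatrix}[rr]1\amp -1\\ 0\amp 2 \end{amatrix}\text{,} \end{equation*}
where we first computed \(AQ\) and then multiplied by \(Q^{-1}\) on the left.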

Theorem 5.1.16. Conjugation is linear.

Let \(Q\in M_{nn}\) be an invertible matrix. The conjugation-by-\(Q\) operation \(T\colon M_{nn}\rightarrow M_{nn}\text{,}\) \(T(A)=Q^{-1}AQ\text{,}\) is a linear transformation.

Proof.

We use the one-step technique. Given scalars \(c,d\in \R\) and matrices \(A, B\in M_{nn}\text{,}\) we have
\begin{align*} T(cA+dB) \amp = Q^{-1}(cA+dB)Q \amp (\text{def. of } T)\\ \amp = cQ^{-1}AQ+dQ^{-1}BQ \amp (\text{matrix alg. props.})\\ \amp = cT(A)+dT(B) \amp (\text{def. of } T)\text{.} \end{align*}

Example 5.1.17. Left-shift transformation on \(\R^\infty\).

Define the left-shift operation \(T_\ell\colon \R^\infty \rightarrow \R^{\infty}\) as follows:
\begin{equation*} T_\ell\left( (a_{i})_{i=1}^\infty\right)= (a_{i+1})_{i=1}^\infty\text{.} \end{equation*}
In other words, we have
\begin{equation*} T_\ell \left( (a_1,a_2,a_3,\dots)\right)=(a_2,a_3,\dots)\text{.} \end{equation*}
Show that \(T_\ell\) is a linear transformation.
Solution.
Let \(\boldv=(a_i)_{i=1}^\infty\) and \(\boldw=(b_i)_{i=1}^\infty\) be two infinite sequences in \(\R^\infty\text{.}\) For any \(c,d\in\R\) we have
\begin{align*} T_\ell(c\boldv+d\boldw) \amp=T_\ell\left((ca_i+db_i)_{i=1}^\infty \right)\amp (\knowl{./knowl/xref/eg_infinite_sequences.html}{\text{Example 1.1.11}} ) \\ \amp= (ca_{i+1}+db_{i+1})_{i=1}^\infty \amp (\text{by def.})\\ \amp=c(a_{i+1})_{i=1}^\infty+d(b_{i+1})_{i=1}^\infty \\ \amp=cT_\ell(\boldv)+dT_\ell(\boldw)\amp (\text{by def.}) \text{.} \end{align*}
This proves \(T_\ell\) is a linear transformation.

Example 5.1.18. Video examples: deciding if \(T\) is linear.

Figure 5.1.19. Video: deciding if \(T\) is linear
Figure 5.1.20. Video: deciding if \(T\) is linear

Subsection 5.1.2 Bases and linear transformations

In Section 4.3 we saw that a vector space \(V\) is completely and concisely determined by a basis \(B\) in the sense that every element of \(V\) can be expressed in a unique way as a linear combination of elements of \(B\text{.}\) A similar principle applies to linear transformations. Roughly speaking, a linear transformation defined on a vector space \(V\) is completely determined by where it sends the elements of a basis \(B\) of \(V\text{.}\) This is spelled out in more detail in Theorem 5.1.21 and the remark that follows.

Theorem 5.1.21. Bases and transformations.

Let \(B=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) be a basis of the vector space \(V\text{,}\) and let \(W\) be a vector space.
  1. Uniqueness of transformations. If \(T\colon V\rightarrow W\) and \(T'\colon V\rightarrow W\) are linear transformations satisfying \(T(\boldv_i)=T'(\boldv_i)\) for all \(1\leq i\leq n\text{,}\) then \(T=T'\text{.}\)
  2. Existence of transformations. Given any vectors \(\boldw_1,\boldw_2,\dots, \boldw_n\in W\text{,}\) the formula
    \begin{equation} T(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n)=c_1\boldw_1+c_2\boldw_2+\cdots +c_n\boldw_n\tag{5.1} \end{equation}
    defines a linear transformation \(T\colon V\rightarrow W\) satisfying \(T(\boldv_i)=\boldw_i\) for all \(1\leq i\leq n\text{.}\)

Proof.

Proof of (1).
Assume \(T\) and \(T'\) are linear transformations from \(V\) to \(W\) satisfying \(T(\boldv_i)=T'(\boldv_i)\) for all \(1\leq i\leq n\text{.}\) Given any \(\boldv\in V\) we can write \(\boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\) for some scalars \(c_i\in \R\text{.}\) It follows that
\begin{align*} T(\boldv) \amp = T(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n)\\ \amp = c_1T(\boldv_1)+c_2T(\boldv_2)+\cdots +c_nT(\boldv_n) \amp (T \text{ is linear})\\ \amp =c_1T'(\boldv_1)+c_2T'(\boldv_2)+\cdots +c_nT'(\boldv_n) \amp (\text{by assumption})\\ \amp = T'(c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n) \amp (T' \text{ is linear}) \\ \amp = T'(\boldv)\text{.} \end{align*}
Since \(T(\boldv)=T'(\boldv)\) for all \(\boldv\in V\text{,}\) we have \(T=T'\text{.}\)
Proof of (2).
Since any \(\boldv\in V\) has a unique expression of the form
\begin{equation*} \boldv=c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\text{,} \end{equation*}
the formula in (5.1) determines a well-defined function \(T\colon V\rightarrow W\text{.}\)
We now show that \(T\) is linear. To minimize the unwieldiness of our expressions, we will use sigma notation. We use the one-step technique. Given any scalars \(c,d\in \R\) and vectors \(\boldv, \boldv'\in V\text{,}\) we first write
\begin{align*} \boldv \amp = c_1\boldv_1+c_2\boldv_2+\cdots +c_n\boldv_n\\ \amp =\sum_{i=1}^nc_i\boldv_i\\ \boldv'\amp=d_1\boldv_1+d_2\boldv_2+\cdots +d_n\boldv_n \\ \amp =\sum_{i=1}^nd_i\boldv_i\text{,} \end{align*}
where \(c_i, d_i\in \R\text{.}\) By definition (5.1) we thus have, using sigma notation,
\begin{align*} T(\boldv) \amp = \sum_{i=1}^nc_i\boldw_i\\ T(\boldv') \amp = \sum_{i=1}^nd_i\boldw_i\text{.} \end{align*}
It follows that
\begin{align*} T(c\boldv+d\boldv')\amp=T(c\sum_{i=1}^nc_i\boldv_i+d\sum_{i=1}^nd_i\boldv_i) \\ \amp= T\left(\sum_{i=1}^n(cc_i+dd_i)\boldv_i\right) \\ \amp =\sum_{i=1}^n(cc_i+dd_i)\boldw_i \amp \knowl{./knowl/xref/eq_bases_transformations.html}{\text{(5.1)}}\\ \amp = c\sum_{i=1}^nc_i\boldw_i+d\sum_{i=1}^nd_i\boldw_i\\ \amp =cT(\boldv)+dT(\boldv')\text{,} \end{align*}
as desired.

Remark 5.1.22. Transformations determined by behavior on basis.

Let’s paraphrase the two results of Theorem 5.1.21.
  1. Once we have a basis \(B=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) on hand, it is easy to construct linear transformations \(T\colon V\rightarrow W\text{:}\) simply choose images \(\boldw_i=T(\boldv_i)\in W\) for all \(\boldv_i\in B\) in any manner you like, and then define \(T(\boldv)\) for any element \(\boldv\in V\) using (5.1).
  2. A linear transformation \(T\colon V\rightarrow W\) is completely determined by its behavior on a basis \(B=\{\boldv_1,\boldv_2,\dots, \boldv_n\}\) of \(V\text{.}\) Once we know the images \(T(\boldv_i)\) for all \(1\leq i\leq n\text{,}\) the image \(T(\boldv)\) for any other \(\boldv\in V\) is then completely determined. Put another way, if two linear transformations from \(V\) to \(W\) agree on the elements of a basis \(B\subseteq V\text{,}\) then they agree for all elements of \(V\text{.}\)
Our remarks are worthy of another mantra.

Example 5.1.24. Bases and transformations.

Assume \(T\colon \R^2\rightarrow \R^3\) satisfies \(T(1,1)=(0,1,1)\) and \(T(1,-1)=(2,1,-1)\text{.}\)
  1. Use the fact that \((1,0)=\frac{1}{2}\left((1,1)+(1,-1)\right)\) to compute \(T(1,0)\text{.}\)
  2. Use Uniqueness of transformations to prove that \(T(x,y)=(x-y,x,y)\) for all \((x,y)\in \R^2\text{.}\)
Solution.
Let \(\boldx_1=(1,1)\) and \(\boldx_2=(1,-1)\text{.}\)
  1. Since \((1,0)=\frac{1}{2}(\boldx_1+\boldx_2)\text{,}\) we have
    \begin{align*} T(1,0) \amp =T\left(\frac{1}{2}(\boldx_1+\boldx_2) \right)\\ \amp = \frac{1}{2}(T(\boldx_1)+T(\boldx_2))\\ \amp = \frac{1}{2}((0,1,1)+(2,1,-1))=(1,1,0)\text{.} \end{align*}
  2. Let \(T'\colon \R^2\rightarrow \R^3\) be defined as \(T'(x,y)=(x-y,x,y)\text{.}\) We wish to show that \(T=T'\text{.}\) First, it is not difficult to show, using the one-step technique, that \(T'\) is a linear transformation. Next, since \(B=\{(1,1),(1,-1)\}\) is clearly linearly independent, and since \(\R^2\) has dimension two, we see that \(B\) is a basis of \(\R^2\text{.}\) Thus, according to Uniqueness of transformations, to show \(T=T'\) it suffices to show that \(T(1,1)=T'(1,1)\) and \(T(1,-1)=T'(1,-1)\text{.}\) This is easy:
    \begin{align*} T'(1,1) \amp =(1-1,1,1)\\ \amp = (0,1,1) \amp (\text{def. of } T')\\ \amp = T(1,1) \amp (\text{given})\\ T'(1,-1) \amp =(1-(-1),1,-1)\\ \amp = (2,1,-1) \amp (\text{def. of } T')\\ \amp = T(1,-1) \amp (\text{given})\text{.} \end{align*}
    Since \(T\) and \(T'\) are both linear and agree on the basis \(B\text{,}\) we conclude that \(T=T'\text{.}\)

Subsection 5.1.3 Matrix transformations

We now describe what turns out to be an entire family of examples of linear transformations: so-called matrix transformations of the form \(T_A\colon \R^n\rightarrow \R^m\text{,}\) where \(A\) is a given \(m\times n\) matrix. This is a good place to recall the matrix mantra. Not only can a matrix represent a system of linear equations, it can represent a linear transformation. These are two very different concepts, and the matrix mantra helps us to not confuse the two. In the end a matrix is just a matrix: a mathematical tool that can be employed to diverse ends. Observe that the definition of matrix multiplication marks the first point where Fiat 3.1.10 comes into play.

Definition 5.1.25. Matrix transformations.

Let \(A\) be an \(m\times n\) matrix. The matrix transformation associated to \(A\) is the function \(T_A\) defined as follows:
\begin{align*} T_A\colon \R^n \amp\rightarrow \R^m \\ \boldx\amp\mapsto T_A(\boldx)=A\boldx \text{.} \end{align*}
In other words, given input \(\boldx\in \R^n\text{,}\) the output \(T_A(\boldx)\) is defined as \(A\boldx\text{.}\)
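For instance, taking the (arbitrarily chosen) \(3\times 2\) matrix
\begin{equation*} A=\begin{amatrix}[rr]1\amp 2\\ 0\amp 1\\ 3\amp 0 \end{amatrix}\text{,} \end{equation*}
we obtain the matrix transformation
\begin{equation*} T_A\colon \R^2\rightarrow \R^3, \quad T_A(x,y)=A\colvec{x\\ y}=\colvec{x+2y\\ y\\ 3x}\text{.} \end{equation*}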

Theorem 5.1.26. Matrix transformations.

Let \(A\) be an \(m\times n\) matrix.
  1. The matrix transformation \(T_A\colon \R^n\rightarrow \R^m\) is a linear transformation.
  2. For any linear transformation \(T\colon \R^n\rightarrow \R^m\) there is a unique \(m\times n\) matrix \(A\) satisfying \(T=T_A\text{.}\)

Proof.

  1. We use the one-step technique. For any \(c,d\in \R\) and \(\boldx_1, \boldx_2\in \R^n\text{,}\) we have
    \begin{align*} T_A(c\boldx_1+d\boldx_2) \amp =A(c\boldx_1+d\boldx_2)\\ \amp =A(c\boldx_1)+A(d\boldx_2) \amp (\knowl{./knowl/xref/th_matrix_alg_props.html}{\text{Theorem 3.2.1}} ) \\ \amp =cA\boldx_1+dA\boldx_2 \amp (\knowl{./knowl/xref/th_matrix_alg_props.html}{\text{Theorem 3.2.1}} )\\ \amp =cT_A(\boldx_1)+dT_A(\boldx_2)\text{.} \end{align*}
    This proves \(T_A\) is a linear transformation.
  2. Let \(B=\{\bolde_1,\bolde_2,\dots, \bolde_n\}\) be the standard basis of \(\R^n\text{.}\) Let \(A\in M_{mn}\) and let \(\bolda_j\) denote the \(j\)-th column of \(A\) for all \(1\leq j\leq n\text{.}\) We have
    \begin{align*} T=T_{A} \amp \iff T(\bolde_j)=T_A(\bolde_j) \text{ for all } 1\leq j\leq n \amp (\knowl{./knowl/xref/th_bases_transformations.html}{\text{Theorem 5.1.21}} , \knowl{./knowl/xref/th_transf_unique.html}{\text{(1)}} ) \\ \amp \iff T(\bolde_j)=\bolda_j \text{ for all } 1\leq j\leq n \text{,} \end{align*}
    where the last equality follows from the column method of matrix multiplication:
    \begin{equation*} A\bolde_j= \begin{bmatrix}\vert\amp \vert\amp \amp \vert \\ \bolda_1\amp \bolda_2\amp \cdots \amp \bolda_n\\ \vert\amp \vert\amp \amp \vert \end{bmatrix}\colvec{0\\ \vdots \\ 1 \\ \vdots \\0}=\bolda_j\text{.} \end{equation*}
    It follows that \(T=T_A\) if and only if \(A\) is the matrix whose \(j\)-th column is \(T(\bolde_j)\) for all \(1\leq j\leq n\text{.}\) In other words,
    \begin{equation*} A=\begin{bmatrix}\vert\amp \vert\amp \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\amp \cdots \amp T(\bolde_n)\\ \vert\amp \vert\amp \amp \vert \end{bmatrix} \end{equation*}
    is the unique \(m\times n\) matrix satisfying \(T=T_A\text{.}\)
Besides giving a complete description of linear transformations from \(\R^n\) to \(\R^m\) (they all come from matrices), Theorem 5.1.26, or rather its proof, provides a recipe for computing a “matrix formula” for a linear transformation \(T\colon \R^n\rightarrow \R^m\text{.}\) In other words, it tells us how to build the unique matrix \(A\in M_{mn}\) satisfying \(T(\boldx)=A\boldx\) for all \(\boldx\in \R^n\text{.}\) We call this matrix the standard matrix of \(T\text{.}\)

Definition 5.1.28. Standard matrix of linear \(T\colon \R^n\rightarrow \R^m\).

Let \(T\colon \R^n\rightarrow \R^m\) be a linear transformation. The standard matrix of \(T\) is the unique \(m\times n\) matrix \(A\) satisfying \(T=T_A\text{.}\) Equivalently, \(A\) is the unique matrix satisfying
\begin{equation*} T(\boldx)=A\boldx \end{equation*}
for all \(\boldx\in \R^n\text{.}\)

Procedure 5.1.29. Computing the standard matrix.

Given a linear transformation \(T\colon \R^n\rightarrow \R^m\text{,}\) its standard matrix \(A\) is
\begin{equation*} A=\begin{bmatrix}\vert\amp \vert\amp \amp \vert \\ T(\bolde_1)\amp T(\bolde_2)\amp \cdots \amp T(\bolde_n)\\ \vert\amp \vert\amp \amp \vert \end{bmatrix}\text{:} \end{equation*}
that is, the \(j\)-th column of \(A\) is \(T(\bolde_j)\text{,}\) where \(\bolde_j\) is the \(j\)-th standard basis vector of \(\R^n\text{.}\)

Example 5.1.30. Standard matrix computation.

The function \(T\colon \R^3\rightarrow \R^2\) defined as \(T(x,y,z)=(x+y+z, 2x+3y-4z)\) is linear.
  1. Use Theorem 5.1.26 to compute the standard matrix \(A\) of \(T\text{.}\)
  2. Use \(A\) to compute \(T((-2,3,4))\text{.}\)
Solution.
We have
\begin{align*} A \amp = \begin{amatrix}[ccc]\vert\amp \vert\amp \vert\\ T(1,0,0)\amp T(0,1,0)\amp T(0,0,1) \\ \vert\amp \vert\amp \vert \end{amatrix} \\ \amp = \begin{amatrix}[rrr] 1 \amp 1 \amp 1 \\ 2\amp 3 \amp -4 \end{amatrix} \text{.} \end{align*}
Let \(\boldx=(-2,3,4)\text{.}\) Since \(A\) provides a “matrix formula” for \(T\text{,}\) we have
\begin{align*} T(\boldx) \amp = A\boldx \\ \amp = \begin{amatrix}[rrr] 1 \amp 1 \amp 1 \\ 2\amp 3 \amp -4 \end{amatrix} \colvec{-2\\ 3\\ 4}\\ \amp = \colvec{5\\ -11} \text{.} \end{align*}
Thus \(T((-2,3,4))=(5,-11)\text{,}\) as you can confirm directly from the formula defining \(T\text{.}\)

Remark 5.1.31.

It should also be noted that Theorem 5.1.26 gives rise to an alternative technique for showing a function \(T\colon \R^n\rightarrow \R^m\) is a linear transformation: namely, show that \(T=T_A\) for some matrix \(A\text{.}\) For example, to show that the function \(T\colon \R^2\rightarrow \R^3\) defined as \(T(x,y)=(7x+2y,-y,x)\) is linear, it suffices to remark that \(T=T_A\) where
\begin{equation*} A=\begin{amatrix}[rr]7\amp 2\\ 0\amp -1\\ 1\amp 0 \end{amatrix}\text{.} \end{equation*}
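Indeed, multiplying out confirms the claim:
\begin{equation*} A\colvec{x\\ y}=\begin{amatrix}[rr]7\amp 2\\ 0\amp -1\\ 1\amp 0 \end{amatrix}\colvec{x\\ y}=\colvec{7x+2y\\ -y\\ x}\text{,} \end{equation*}
which agrees with the formula defining \(T\text{.}\)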

Subsection 5.1.4 Reflections and rotations in the plane

Theorem 5.1.26 provides a convenient means of showing that certain familiar geometric transformations of the plane are in fact linear transformations. In this subsection we consider rotations about the origin and reflections through a line.

Definition 5.1.33. Rotation in the plane.

Fix an angle \(\alpha\) and define
\begin{equation*} \rho_\alpha\colon \R^2\rightarrow \R^2 \end{equation*}
to be the function that takes an input vector \(\boldx=(x_1,x_2)\text{,}\) considered as the position vector \(\overrightarrow{OP}\) of the point \(P=(x_1,x_2)\text{,}\) and returns the output \(\boldy=(y_1,y_2)\) obtained by rotating the vector \(\boldx\) by an angle of \(\alpha\) about the origin. The function \(\rho_\alpha\) is called rotation about the origin by the angle \(\alpha\text{.}\)
We can extract a formula from the rule defining \(\rho_\alpha\) by using polar coordinates: if \(\boldx\) has polar coordinates \((r,\theta)\text{,}\) then \(\boldy=\rho_\alpha(\boldx)\) has polar coordinates \((r,\theta+\alpha)\text{.}\)
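To illustrate with a vector of our own choosing: \(\boldx=(1,1)\) has polar coordinates \((r,\theta)=(\sqrt{2},\pi/4)\text{,}\) so its image under \(\rho_{\pi/2}\) has polar coordinates \((\sqrt{2},3\pi/4)\text{,}\) and hence
\begin{equation*} \rho_{\pi/2}(1,1)=\left(\sqrt{2}\cos(3\pi/4), \sqrt{2}\sin(3\pi/4)\right)=(-1,1)\text{,} \end{equation*}
as a sketch readily confirms.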

Theorem 5.1.34. Rotations are linear.

Fix an angle \(\alpha\text{.}\) We have \(\rho_\alpha=T_A\text{,}\) where
\begin{equation*} A=\begin{amatrix}[rr] \cos\alpha\amp -\sin\alpha\\ \sin\alpha \amp \cos\alpha \end{amatrix}\text{.} \end{equation*}
In particular, \(\rho_\alpha\) is a linear transformation.

Proof.

By Remark 5.1.31, we need only show that \(\rho_\alpha=T_A\) for the matrix indicated.
If the vector \(\boldx=(x_1,x_2)\) has polar coordinates \((r,\theta)\) (so that \(x_1=r\cos\theta\) and \(x_2=r\sin\theta\)), then its image \(\boldy=\rho_{\alpha}(\boldx)\) under our rotation has polar coordinates \((r,\theta+\alpha)\text{.}\) Translating back to rectangular coordinates, we see that
\begin{align*} \rho_\alpha(\boldx)\amp= \boldy \\ \amp =\left(r\cos(\theta+\alpha),r\sin(\theta+\alpha)\right)\\ \amp =(r\cos\theta\cos\alpha-r\sin\theta\sin\alpha, r\sin\theta\cos\alpha+r\cos\theta\sin\alpha) \amp (\text{trig. identities}) \\ \amp=(\cos\alpha\, x_1-\sin\alpha\, x_2, \sin\alpha\, x_1+\cos\alpha\, x_2) \amp (\text{since } x_1=r\cos\theta, x_2=r\sin\theta) \text{.} \end{align*}
It follows that \(\rho_{\alpha}=T_A\text{,}\) where
\begin{equation*} A=\begin{amatrix}[rr] \cos\alpha\amp -\sin\alpha\\ \sin\alpha \amp \cos\alpha \end{amatrix}\text{,} \end{equation*}
as claimed.

Remark 5.1.35.

Observe that it is not at all obvious geometrically that the rotation operation is linear: i.e., that it preserves addition and scalar multiplication of vectors in \(\R^2\text{.}\) Indeed, our proof does not even show this directly, but instead first gives a matrix formula for rotation and then uses statement (1) of Theorem 5.1.26.
Since matrices of the form
\begin{equation*} \begin{amatrix}[rr] \cos\alpha\amp -\sin\alpha\\ \sin\alpha \amp \cos\alpha \end{amatrix} \end{equation*}
can be understood as defining rotations of the plane, we call them rotation matrices.

Example 5.1.36. Rotation matrices.

Find formulas for \(\rho_\pi\colon \R^2\rightarrow \R^2\) and \(\rho_{2\pi/3}\colon \R^2\rightarrow \R^2\text{,}\) expressing your answer in terms of pairs (as opposed to column vectors).
Solution.
The rotation matrix corresponding to \(\alpha=\pi\) is
\begin{equation*} A=\begin{amatrix}[rr]\cos\pi\amp -\sin\pi\\ \sin\pi \amp \cos\pi \end{amatrix}= \begin{amatrix}[rr]-1\amp 0\\ 0 \amp -1 \end{amatrix}\text{.} \end{equation*}
Thus \(\rho_\pi=T_A\) has formula
\begin{equation*} \rho_{\pi}(x,y)=(-x,-y)=-(x,y)\text{.} \end{equation*}
Note: this is as expected! Rotating a vector by 180 degrees produces its additive inverse.
The rotation matrix corresponding to \(\alpha=2\pi/3\) is
\begin{equation*} B=\begin{amatrix}[rr]\cos(2\pi/3)\amp -\sin(2\pi/3)\\ \sin(2\pi/3) \amp \cos(2\pi/3) \end{amatrix}= \begin{amatrix}[rr]-\frac{1}{2}\amp -\frac{\sqrt{3}}{2}\\ \frac{\sqrt{3}}{2} \amp -\frac{1}{2} \end{amatrix}\text{.} \end{equation*}
Thus \(\rho_{2\pi/3}=T_B\) has formula
\begin{equation*} \rho_{2\pi/3}(x,y)=\frac{1}{2}(-x-\sqrt{3}y, \sqrt{3}x-y)\text{.} \end{equation*}
Let’s check our formula for \(\rho_{2\pi/3}\) for the vectors \((1,0)\) and \((0,1)\text{:}\)
\begin{align*} \rho_{2\pi/3}(1,0) \amp =(-1/2, \sqrt{3}/2) \\ \rho_{2\pi/3}(0,1) \amp =(-\sqrt{3}/2, -1/2) \text{.} \end{align*}
Confirm for yourself geometrically that these are the vectors you get by rotating the vectors \((1,0)\) and \((0,1)\) by an angle of \(2\pi/3\) about the origin.
A second example of a geometric linear transformation is furnished by reflection through a line in \(\R^2\text{.}\)

Definition 5.1.37. Reflection through a line.

Fix an angle \(\alpha\) with \(0\leq \alpha \leq \pi \text{,}\) and let \(\ell_\alpha\) be the line through the origin that makes an angle of \(\alpha\) with the positive \(x\)-axis.
Define \(r_\alpha\colon \R^2\rightarrow \R^2\) to be the function that takes an input \(\boldx=(x_1,x_2)\text{,}\) considered as a point \(P\text{,}\) and returns the coordinates \(\boldy=(y_1,y_2)\) of the point \(P'\) obtained by reflecting \(P\) through the line \(\ell_\alpha\text{.}\) In more detail: if \(P\) lies on \(\ell_\alpha\text{,}\) then \(P'=P\text{;}\) otherwise, \(P'\) is the point on the line through \(P\) perpendicular to \(\ell_\alpha\) that lies on the other side of \(\ell_\alpha\) and whose distance to \(\ell_\alpha\) equals the distance from \(P\) to \(\ell_\alpha\text{.}\)
The function \(r_{\alpha}\) is called reflection through the line \(\ell_\alpha\text{.}\)

Theorem 5.1.38. Reflections are linear.

Fix an angle \(\alpha\) with \(0\leq \alpha\leq \pi\text{.}\) We have \(r_\alpha=T_A\text{,}\) where
\begin{equation*} A=\begin{bmatrix}\cos 2\alpha \amp \sin 2\alpha\\ \sin 2\alpha \amp -\cos 2\alpha \end{bmatrix}\text{.} \end{equation*}
In particular, \(r_\alpha\) is a linear transformation.

Proof.

The argument is similar to the proof of Theorem 5.1.34 and is left as an exercise. (See Exercise 5.1.6.18.)

Example 5.1.39. Visualizing reflection and rotation.

The GeoGebra interactive below helps visualize rotations and reflections in \(\R^2\) (thought of as operations on points) by showing how they act on the triangle \(\triangle ABC\text{.}\)
  • Move or alter the triangle as you see fit.
  • Check the box of the desired operation, rotation or reflection.
  • If rotation is selected, the slider adjusts the angle \(\alpha\) of rotation.
  • If reflection is selected, the slider adjusts the angle \(\alpha\) determining the line \(\ell_\alpha\) of reflection. Click the “Draw perps” box to see the perpendicular lines used to define the reflections of vertices \(A, B, C\text{.}\)
Figure 5.1.40. Visualizing reflection and rotation. Made with GeoGebra.

Subsection 5.1.5 Composition of linear transformations and matrix multiplication

We end by making good on a promise we made long ago to retroactively make sense of the definition of matrix multiplication. The key connecting concept, as it turns out, is composition of functions. We first need a result showing that composition preserves linearity.

Theorem 5.1.41. Compositions of linear transformations.

Let \(T\colon U\rightarrow V\) and \(S\colon V\rightarrow W\) be linear transformations. The composition \(S\circ T\colon U\rightarrow W\) is also a linear transformation.

Proof.

We use the one-step technique. Given scalars \(c,d\in \R\) and vectors \(\boldv_1, \boldv_2\in U\text{,}\) we have
\begin{align*} S\circ T(c\boldv_1+d\boldv_2) \amp = S(T(c\boldv_1+d\boldv_2))\\ \amp = S(cT(\boldv_1)+dT(\boldv_2)) \amp (T \text{ is linear})\\ \amp = cS(T(\boldv_1))+dS(T(\boldv_2)) \amp (S \text{ is linear})\\ \amp = c(S\circ T)(\boldv_1)+d(S\circ T)(\boldv_2)\text{.} \end{align*}

Turning now to matrix multiplication, suppose \(A\) is \(m\times n\) and \(B\) is \(n\times r\text{.}\) Let \(C=AB\) be their product. These matrices give rise to linear transformations
\begin{align*} T_A\colon \R^n \amp\rightarrow \R^m \amp T_B\colon \R^r \amp\rightarrow \R^n \amp T_C\colon \R^r \amp\rightarrow \R^m \text{.} \end{align*}
According to Theorem 5.1.41 the composition \(T_A\circ T_B\) is a linear transformation from \(\R^r\) (the domain of \(T_B\)) to \(\R^m\) (the codomain of \(T_A\)). We claim that \(T_A\circ T_B=T_C\text{.}\) Indeed, identifying elements of \(\R^r\) with column vectors, for all \(\boldx\in \R^r\) we have
\begin{align*} T_A\circ T_B(\boldx) \amp = T_A(T_B(\boldx)) \amp (\knowl{./knowl/xref/d_function_composition.html}{\text{Definition 0.2.9}} ) \\ \amp =T_A(B\boldx) \amp (\knowl{./knowl/xref/d_matrix_transform.html}{\text{Definition 5.1.25}} )\\ \amp= A(B\boldx) \amp (\knowl{./knowl/xref/d_matrix_transform.html}{\text{Definition 5.1.25}} )\\ \amp = (AB)\boldx \amp (\text{assoc.})\\ \amp = T_C(\boldx) \amp (\text{since } C=AB)\text{.} \end{align*}
Thus, we can now understand the definition of matrix multiplication as being chosen precisely to encode how to compute the composition of two matrix transformations. The restriction on the dimension of the ingredient matrices is now understood as guaranteeing that the corresponding matrix transformations can be composed!
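As an illustration of this principle, multiply two rotation matrices (see Theorem 5.1.34). The angle-addition identities give
\begin{align*} \begin{amatrix}[rr] \cos\alpha\amp -\sin\alpha\\ \sin\alpha \amp \cos\alpha \end{amatrix} \begin{amatrix}[rr] \cos\beta\amp -\sin\beta\\ \sin\beta \amp \cos\beta \end{amatrix} \amp = \begin{amatrix}[rr] \cos\alpha\cos\beta-\sin\alpha\sin\beta\amp -(\sin\alpha\cos\beta+\cos\alpha\sin\beta)\\ \sin\alpha\cos\beta+\cos\alpha\sin\beta \amp \cos\alpha\cos\beta-\sin\alpha\sin\beta \end{amatrix}\\ \amp = \begin{amatrix}[rr] \cos(\alpha+\beta)\amp -\sin(\alpha+\beta)\\ \sin(\alpha+\beta) \amp \cos(\alpha+\beta) \end{amatrix}\text{,} \end{align*}
recovering in matrix form the geometrically plausible identity \(\rho_\alpha\circ \rho_\beta=\rho_{\alpha+\beta}\text{.}\) (Compare Exercise 5.1.6.19.)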

Example 5.1.42. Composition of reflections.

Let \(r_0\colon \R^2\rightarrow\R^2\) be reflection across the \(x\)-axis, and let \(r_{\pi/2}\colon \R^2\rightarrow \R^2\) be reflection across the \(y\)-axis. (See Exercise 5.1.6.18.) Use an argument in the spirit of statement (2) from Remark 5.1.22 to show that
\begin{equation*} r_{\pi/2}\circ r_{0}=\rho_{\pi} \text{.} \end{equation*}
(Note: this equality can also be shown using our matrix formulas for rotations and reflections. See Exercise 5.1.6.19.)
Solution.
Since \(r_0\) and \(r_{\pi/2}\) are both linear transformations (Exercise 5.1.6.18), so is the composition \(T=r_{\pi/2}\circ r_{0}\text{.}\) We wish to show \(T=\rho_{\pi}\text{.}\) Since \(\rho_{\pi}\) is also a linear transformation, it suffices by Theorem 5.1.21 to show that \(T\) and \(\rho_\pi\) agree on a basis of \(\R^2\text{.}\) Take the standard basis \(B=\{(1,0), (0,1)\}\text{.}\) Compute:
\begin{align*} T(1,0) \amp=r_{\pi/2}(r_{0}(1,0)) \\ \amp =r_{\pi/2}(1,0) \\ \amp =(-1,0)\\ \amp =\rho_{\pi}(1,0)\\ T(0,1) \amp=r_{\pi/2}(r_{0}(0,1)) \\ \amp =r_{\pi/2}(0,-1) \\ \amp =(0,-1)\\ \amp =\rho_{\pi}(0,1)\text{.} \end{align*}
Since \(T\) and \(\rho_\pi\) agree on the basis \(B\text{,}\) we have \(T=\rho_\pi\text{.}\)
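For comparison, here is the matrix verification mentioned in the note above, using the formulas of Theorem 5.1.34 and Theorem 5.1.38: the standard matrices of \(r_{\pi/2}\) (with \(2\alpha=\pi\)) and \(r_0\) (with \(2\alpha=0\)) multiply to the standard matrix of \(\rho_\pi\text{:}\)
\begin{equation*} \begin{amatrix}[rr]-1\amp 0\\ 0\amp 1 \end{amatrix} \begin{amatrix}[rr]1\amp 0\\ 0\amp -1 \end{amatrix} = \begin{amatrix}[rr]-1\amp 0\\ 0\amp -1 \end{amatrix}\text{.} \end{equation*}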

Exercises 5.1.6 Exercises

WeBWorK Exercises

1.
Let \(T:{\mathbb R}^2 \rightarrow {\mathbb R}^2\) be a linear transformation that sends the vector \(\vec{u} =(5,2)\) into \((2,1)\) and maps \(\vec{v}= (1,3)\) into \((-1, 3)\text{.}\) Use properties of a linear transformation to calculate the following. (Enter your answers as ordered pairs, such as (1,2), including the parentheses.)
\(T(-4 \vec{u}) =\) ,
\(T(9 \vec{v}) =\) ,
\(T(-4 \vec{u} + 9 \vec{v}) =\) .
Answer 1.
\(\left(-8,-4\right)\)
Answer 2.
\(\left(-9,27\right)\)
Answer 3.
\(\left(-17,23\right)\)
2.
Let \(V\) be a vector space, and \(T:V \rightarrow V\) a linear transformation such that \(T(2 \vec{v}_1 - 3 \vec{v}_2)= 3 \vec{v}_1 + 2 \vec{v}_2\) and \(T(-3 \vec{v}_1 + 5 \vec{v}_2)= -3 \vec{v}_1 - 5 \vec{v}_2\text{.}\) Then
\(T(\vec{v}_1)=\) \(\vec{v}_1+\) \(\vec{v}_2\text{,}\)
\(T(\vec{v}_2)=\) \(\vec{v}_1+\) \(\vec{v}_2\text{,}\)
\(T(-2 \vec{v}_1 + 2 \vec{v}_2)=\) \(\vec{v}_1+\) \(\vec{v}_2\text{.}\)
3.
Let
\begin{equation*} \vec{v}_1= \left[\begin{array}{c} -3\cr -8 \end{array}\right] \ \mbox{ and } \ \vec{v}_2=\left[\begin{array}{c} -1\cr -3 \end{array}\right] . \end{equation*}
Let \(T:{\mathbb R}^2 \rightarrow {\mathbb R}^2\) be the linear transformation satisfying
\begin{equation*} T(\vec{v}_1)=\left[\begin{array}{c} -42\cr -28 \end{array}\right] \ \mbox{ and } \ T(\vec{v}_2)=\left[\begin{array}{c} -16\cr -10 \end{array}\right] . \end{equation*}
Find the image of an arbitrary vector \(\left[\begin{array}{c} x\cr y\cr \end{array}\right] .\)
\(T \left(\left[\begin{array}{c} x\cr y\cr \end{array}\right]\right) =\) (2 × 1 array)
4.
Let
\begin{equation*} A = \left[\begin{array}{ccc} 1 \amp -1 \amp -7\cr -3 \amp -9 \amp 4 \end{array}\right]. \end{equation*}
Define the linear transformation \(T: {\mathbb R}^3 \rightarrow {\mathbb R}^2\) by \(T(\vec{x}) = A\vec{x}\text{.}\) Find the images of \(\vec{u} = \left[\begin{array}{c} -1\cr -4\cr 5 \end{array}\right]\) and \(\vec{v} =\left[\begin{array}{c} a\cr b\cr c\cr \end{array}\right]\) under \(T\text{.}\)
\(T(\vec{u}) =\) (2 × 1 array)
\(T(\vec{v}) =\) (2 × 1 array)
5.
Let \(V\) be a vector space, \(v, u \in V\text{,}\) and let \(T_1: V \rightarrow V\) and \(T_2: V \rightarrow V\) be linear transformations such that
\begin{equation*} T_1(v) = 5 v - 5 u, \ \ \ T_1(u) = -7 v - 2 u, \end{equation*}
\begin{equation*} T_2(v) = 3 v + 2 u, \ \ \ T_2(u) = -7 v - 5 u. \end{equation*}
Find the images of \(v\) and \(u\) under the composite of \(T_1\) and \(T_2\text{.}\)
\((T_2 T_1)(v) =\) ,
\((T_2 T_1)(u) =\) .
Answer 1.
\(50v+35u\)
Answer 2.
\(-7v-4u\)

Exercise Group.

Show that the function \(T\) defined is nonlinear by providing an explicit counterexample to one of the defining axioms or a consequence thereof.
6.
\(T\colon \R^2\rightarrow \R^2\text{,}\) \(T((x,y))=(x,y)+(1,1)\)
9.
\(T\colon\R^3\rightarrow \R^2\text{,}\) \(T(x,y,z)=(xy,yz)\)

10. Scalar multiplication.

Let \(V\) be a vector space. Fix \(c\in \R\) and define \(T\colon V\rightarrow V\) as \(T(\boldv)=c\boldv\text{:}\) i.e., \(T\) is scalar multiplication by \(c\text{.}\) Show that \(T\) is a linear transformation.

12. Left/right matrix multiplication.

Let \(B\) be an \(r\times m\) matrix, and let \(C\) be an \(n\times s\) matrix. Define the functions \(T\) and \(S\) as follows:
\begin{align*} T\colon M_{mn} \amp\rightarrow M_{rn} \\ A \amp\mapsto T(A)=BA \\ \amp \\ S\colon M_{mn} \amp\rightarrow M_{ms} \\ A\amp\mapsto S(A)=AC \text{.} \end{align*}
In other words, \(T\) is the “multiply on the left by \(B\)” operation, and \(S\) is the “multiply on the right by \(C\)” operation. Show that \(T\) and \(S\) are linear transformations.

14. Sequence shift operators.

Let \(V=\R^\infty=\{(a_1,a_2,\dots)\colon a_i\in\R\}\text{,}\) the space of all infinite sequences. Define the shift-left function, \(T_\ell\text{,}\) and shift-right function, \(T_r\text{,}\) as follows:
\begin{align*} T_\ell\colon \R^\infty\amp \rightarrow \R^\infty \amp T_r\colon \R^\infty\amp \rightarrow \R^\infty\\ s=(a_1,a_2, a_3,\dots )\amp \longmapsto T_\ell(s)=(a_2, a_3,\dots) \amp s=(a_1,a_2, a_3,\dots )\amp \longmapsto T_r(s)=(0,a_1,a_2,\dots) \end{align*}
Prove that \(T_\ell\) and \(T_r\) are linear transformations.

15. Adding and scaling linear transformations.

Suppose that \(T\colon V\rightarrow W\) and \(S\colon V\rightarrow W\) are linear transformations.
  1. Define the function \(T+S\colon V\rightarrow W\) as \((T+S)(\boldv)=T(\boldv)+S(\boldv)\text{.}\) Show that \(T+S\) is a linear transformation.
  2. Define the function \(cT\colon V\rightarrow W\) as \(cT(\boldv)=c(T(\boldv))\text{.}\) Show that \(cT\) is a linear transformation.

17. Linear transformations, span, and independence.

Suppose \(T\colon V\rightarrow W\) is a linear transformation. Let \(S=\{\boldv_1,\boldv_2,\dots ,\boldv_r\}\) be a subset of \(V\text{,}\) and let \(S'\) be the image of \(S\) under \(T\text{:}\) i.e.,
\begin{equation*} S'=T(S)=\{T(\boldv_1), T(\boldv_2),\dots, T(\boldv_r)\}\text{.} \end{equation*}
Assume \(\boldv_i\ne \boldv_j\) and \(T(\boldv_i)\ne T(\boldv_j)\) for all \(i\ne j\text{.}\)
Answer true or false: if true, provide a proof; if false, provide an explicit counterexample. Note: for a complete counterexample you need to specify \(T\colon V\rightarrow W\) and \(S\text{.}\)
  1. If \(S\) is linearly independent, then \(S'\) is linearly independent.
  2. If \(S'\) is linearly independent, then \(S\) is linearly independent.
  3. If \(S\) is a spanning set for \(V\text{,}\) then \(S'\) is a spanning set for \(\im T\text{.}\)

18. Reflection through a line.

Fix an angle \(\alpha\) with \(0\leq \alpha \leq \pi \text{,}\) let \(\ell_\alpha\) be the line through the origin that makes an angle of \(\alpha\) with the positive \(x\)-axis, and let \(r_\alpha\colon\R^2\rightarrow \R^2\) be the reflection operation as described in Definition 5.1.37. Prove that \(r_\alpha\) is a linear transformation following the steps below.
  1. In a manner similar to Theorem 5.1.34, describe \(P'=r_\alpha(P)\) in terms of the polar coordinates \((r,\theta)\) of \(P\text{.}\) Additionally, it helps to write \(\theta=\alpha+\phi\text{,}\) where \(\phi\) is the angle the line segment from the origin to \(P\) makes with the line \(\ell_\alpha\text{.}\) Include a drawing to support your explanation.
  2. Use your description in (a), along with some trigonometric identities, to show \(r_\alpha=T_A\) where
    \begin{equation*} A=\begin{bmatrix}\cos 2\alpha \amp \sin 2\alpha\\ \sin 2\alpha \amp -\cos 2\alpha \end{bmatrix}\text{.} \end{equation*}

19. Compositions of rotations and reflections.

In this exercise we will show that if we compose a rotation or reflection with another rotation or reflection, as defined in Definition 5.1.33 and Definition 5.1.37, the result is yet another rotation or reflection. For each part, express the given composition either as a rotation \(\rho_\theta\) or reflection \(r_\theta\text{,}\) where \(\theta\) is expressed in terms of \(\alpha\) and \(\beta\text{.}\)
  1. \(\displaystyle \rho_\alpha\circ\rho_\beta\)
  2. \(\displaystyle r_\alpha\circ r_\beta\)
  3. \(\displaystyle \rho_\alpha\circ r_\beta\)
  4. \(\displaystyle r_\alpha\circ \rho_\beta\)
Hint.
Use Theorem 5.1.34 and Theorem 5.1.38, along with some trigonometric identities.

20.

The set \(B=\{\boldv_1=(1,-1), \boldv_2=(1,1)\}\) is a basis of \(\R^2\text{.}\) Suppose the linear transformation \(T\colon \R^2\rightarrow \R^3\) satisfies
\begin{equation*} T(\boldv_1)=(4,1,2), T(\boldv_2)=(1,0,2)\text{.} \end{equation*}
Find a formula for \(T(\boldx)\text{,}\) where \(\boldx=(x,y)\) is a general element of \(\R^2\text{.}\)

21.

Suppose \(T\colon V\rightarrow V\) is a linear transformation, and \(B\) is a basis of \(V\) for which \(T(\boldv)=\boldzero\) for all \(\boldv\in B\text{.}\) Show that \(T=T_0\text{:}\) i.e., \(T\) is the zero transformation from \(V\) to \(V\text{.}\)

22.

Suppose \(T\colon V\rightarrow V\) is a linear transformation, and \(B\) is a basis of \(V\) for which \(T(\boldv)=\boldv\) for all \(\boldv\in B\text{.}\) Show that \(T=\id_V\text{:}\) i.e., \(T\) is the identity transformation of \(V\text{.}\)

23.

Let \(T\colon \R^n\rightarrow \R^n\) be a linear transformation. Assume there is a basis \(B\) of \(\R^n\) and a constant \(c\in \R\) such that \(T(\boldv)=c\boldv\) for all \(\boldv\in B\text{.}\) Prove: \(T=T_{A}\text{,}\) where
\begin{equation*} A=cI_n=\begin{amatrix}[cccc] c \amp 0\amp \dots\amp 0\\ 0\amp c\amp \dots \amp 0 \\ \vdots \amp \amp \amp \vdots\\ 0\amp 0\amp \dots \amp c \end{amatrix}\text{.} \end{equation*}

Matrix transformations.

For each linear transformation \(T\colon \R^n\rightarrow \R^m\) and \(\boldx\in \R^n\text{:}\) (a) compute the standard matrix \(A\) of \(T\) using Procedure 5.1.29; (b) compute \(T(\boldx)\) using \(A\text{.}\) You may take for granted that the given \(T\) is linear.
24.
\begin{align*} T\colon \R^2\amp \rightarrow\R^4 \amp \boldx\amp=(1,3) \\ (x,y) \amp\mapsto (2x-y, 2y, x+y, x) \end{align*}
25.
\begin{align*} T\colon \R^4\amp \rightarrow \R^3 \amp \boldx\amp=(0,2,4,-1)\\ (x_1,x_2,x_3,x_4)\amp\mapsto (2x_1-x_2+x_4, x_2-x_3, x_1+3x_2-x_3-x_4) \end{align*}