For the remaining sections of this chapter we will focus our investigation on linear transformations of the form \(T\colon V\rightarrow V\text{:}\) that is, transformations from a space \(V\) into itself. When \(V\) is finite-dimensional we can get a computational grip on \(T\) by choosing an ordered basis \(B\) and considering the matrix representation \([T]_B\text{.}\) As was illustrated in Example 4.2.12, different matrix representations \([T]_B\) and \([T]_{B'}\) provide different insights into the nature of \(T\text{.}\) Furthermore, we see from this example that if the action of \(T\) on a chosen basis is simple to describe, then so too is the matrix representation of \(T\) with respect to that basis.
is diagonal! Diagonal matrices are about as simple as they come: they wear all of their properties (rank, nullity, invertibility, etc.) on their sleeve. If we hope to find a diagonal matrix representation of \(T\text{,}\) then we should seek nonzero vectors \(\boldv\) satisfying \(T(\boldv)=c\boldv\) for some \(c\in \R\text{:}\) these are called eigenvectors of \(T\text{.}\)
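To preview why such vectors matter: if we can find an ordered basis \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) of \(V\) consisting of eigenvectors of \(T\text{,}\) say \(T(\boldv_i)=c_i\boldv_i\) for each \(i\text{,}\) then directly from the definition of the matrix representation we get
\begin{equation*}
[T]_B=\begin{amatrix}[cccc] c_1\amp 0\amp \cdots \amp 0\\ 0\amp c_2\amp \cdots \amp 0\\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0\amp 0\amp \cdots \amp c_n \end{amatrix}\text{,}
\end{equation*}
a diagonal matrix with the scalars \(c_i\) along the diagonal.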
It turns out that \(T=T_A\) has a simple geometric description, though you would not have guessed this from the standard matrix \(A\text{.}\) To reveal the geometry at play, we represent \(T\) with respect to the nonstandard basis \(B'=(\boldv_1=(1,2), \boldv_2=(2,-1))\text{.}\) We compute
The alternative representation given by \(A'=[T]_{B'}\) nicely summarizes the action of \(T\text{:}\) it fixes the vector \(\boldv_1=(1,2)\) and maps the vector \(\boldv_2=(2,-1)\text{,}\) which is orthogonal to \((1,2)\text{,}\) to \(-(2,-1)\text{.}\) We claim this implies that \(T\) is none other than reflection \(r_\ell\) through the line \(\ell=\Span\{(1,2)\}\text{!}\) This follows from Theorem 3.6.15. In more detail, from the geometry of the setup, it is easy to see that we have \(r_\ell((1,2))=(1,2)\) (reflection fixes the line \(\ell\)) and \(r_\ell((2,-1))=-(2,-1)\) (reflection ``flips" vectors perpendicular to \(\ell\)). Since \(T\) and \(r_\ell\) agree on the basis \(B'\text{,}\) we must have \(T=r_\ell\text{.}\)
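It is worth recording, in passing, how the two matrix representations are related. Writing \(P\) for the matrix whose columns are \(\boldv_1\) and \(\boldv_2\) (a notation introduced just for this aside), the change of basis between \(B'\) and the standard basis gives
\begin{equation*}
A'=[T]_{B'}=P^{-1}AP, \qquad P=\begin{amatrix}[rr] 1\amp 2\\ 2\amp -1 \end{amatrix}\text{,}
\end{equation*}
so \(A\) and \(A'\) describe the same transformation in two different coordinate systems.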
Together, formulas (4.16)–(4.17) give us a satisfying geometric understanding of our reflection operator \(T=r_\ell\text{.}\) Let \(\ell^\perp=\Span\{(2,-1)\}\text{:}\) this is the line perpendicular to \(\ell\) passing through the origin. Formula (4.16) gives a decomposition of the vector \(\boldv\) in terms of its components along \(\ell\) and \(\ell^\perp\text{.}\) Formula (4.17) tells us what \(T=r_\ell\) does to this decomposition: namely, it leaves the component along \(\ell\) unchanged and ``flips" the component along \(\ell^\perp\text{.}\)
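For instance, take \(\boldv=(3,1)\text{,}\) a vector chosen purely for illustration. Its decomposition along \(\ell\) and \(\ell^\perp\) is
\begin{equation*}
\boldv=(3,1)=\underbrace{(1,2)}_{\text{along } \ell}+\underbrace{(2,-1)}_{\text{along } \ell^\perp}\text{,}
\end{equation*}
and applying \(T=r_\ell\) leaves the first component alone while flipping the second:
\begin{equation*}
T(\boldv)=(1,2)-(2,-1)=(-1,3)\text{.}
\end{equation*}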
The success of our analysis in Example 4.4.1 depended on finding the vectors \(\boldv_1\) and \(\boldv_2\) satisfying \(T(\boldv_1)=\boldv_1\) and \(T(\boldv_2)=(-1)\boldv_2\text{,}\) as they provided a basis on which the action of \(T\) is particularly simple. As we make official in Definition 4.4.3, the vectors \(\boldv_1, \boldv_2\) are called eigenvectors of \(T\text{.}\)
for some \(\lambda\in \R\) is called an eigenvector of \(A\) with eigenvalue \(\lambda\text{.}\) In other words, an eigenvector (resp., eigenvalue) of \(A\) is just an eigenvector (resp., eigenvalue) of the matrix transformation \(T_A\colon \R^n\rightarrow \R^n\text{.}\)
Note well the important condition that an eigenvector must be nonzero. This means that, by definition, the zero vector \(\boldzero\) is not an eigenvector. If we allowed \(\boldzero\) as an eigenvector, then the notion of the eigenvalue of an eigenvector would no longer be well-defined. This is because for any linear transformation we have
Suppose \(\boldv\ne\boldzero\) is an eigenvector of the linear transformation \(T\colon V\rightarrow V\text{.}\) Letting \(W=\Span\{\boldv\}\text{,}\) this means that \(T(\boldv)=\lambda\boldv\in W\text{:}\) i.e., \(T\) maps an eigenvector to an element of the one-dimensional subspace it defines. The special case where \(V=\R^2\) is easy to visualize and can help guide your intuition in the general case. (See Figure 4.4.7.) Here the space \(\Span\{\boldv\}=\ell\) is a line passing through the origin. If \(\boldv\) is an eigenvector of a given linear transformation, then it must be mapped to a vector pointing along \(\ell\text{:}\) e.g., \(\lambda_1\boldv\) or \(\lambda_2\boldv\text{.}\) If it is not an eigenvector, it gets mapped to a vector \(\boldw\) that does not point along \(\ell\text{.}\)
Given a linear transformation \(T\colon V\rightarrow V\) we wish to (a) determine which values \(\lambda\in \R\) are eigenvalues of \(T\text{,}\) and (b) find all the eigenvectors corresponding to a given eigenvalue \(\lambda\text{.}\) In the next examples we carry out such an investigation in an ad hoc manner.
Assume \(V\) is nonzero. Recall that the zero transformation \(0_V\colon V\rightarrow V \) and identity transformation \(\id_V\colon V\rightarrow V\) are defined as \(0_V(\boldv)=\boldzero\) and \(\id_V(\boldv)=\boldv\) for all \(\boldv\in V\text{.}\) Find all eigenvalues and eigenvectors of \(0_V\) and \(\id_V\text{.}\)
Since \(0_V(\boldv)=\boldzero=0\boldv\) for all \(\boldv\in V\text{,}\) we see that \(0\) is the only eigenvalue of \(0_V\text{,}\) and that all nonzero vectors of \(V\) are \(0\)-eigenvectors.
Similarly, since \(\id_V(\boldv)=\boldv=1\boldv\) for all \(\boldv\in V\text{,}\) we see that \(1\) is the only eigenvalue of \(\id_V\text{,}\) and that all nonzero vectors of \(V\) are \(1\)-eigenvectors.
Let \(\ell\) be a line in \(\R^2\) passing through the origin, and define \(T\colon \R^2\rightarrow \R^2\) to be reflection through \(\ell\text{.}\) (See Definition 3.2.16.) Find all eigenvectors and eigenvalues of \(T\text{.}\) Use a geometric argument.
Since the reflection operator fixes all elements of the line \(\ell\text{,}\) we have \(T(\boldx)=\boldx\) for any \(\boldx\in \ell\text{.}\) This shows that any nonzero element of \(\ell\) is an eigenvector of \(T\) with eigenvalue \(1\text{.}\)
Similarly, since \(\ell^\perp\) is orthogonal to \(\ell\text{,}\) reflection through \(\ell\) takes any element \(\boldx=(x,y)\in \ell^\perp\) and maps it to \(-\boldx=(-x,-y)\text{.}\) Thus any nonzero element \(\boldx\in \ell^\perp\) is an eigenvector of \(T\) with eigenvalue \(-1\text{.}\)
We claim that these two cases exhaust all eigenvectors of \(T\text{.}\) Indeed, in general a nonzero vector \(\boldx\) lies in the line \(\ell'=\Span\{\boldx\}\text{,}\) and its reflection \(T(\boldx)\) lies in the line \(\ell''=\Span\{T(\boldx)\}\text{,}\) which itself is the result of reflecting the line \(\ell'\) through \(\ell\text{.}\) Now assume \(T(\boldx)=\lambda\boldx\text{.}\) We must have \(\lambda\ne 0\text{,}\) since \(T(\boldx)\ne \boldzero\) if \(\boldx\ne \boldzero\text{;}\) but if \(\lambda\ne 0\text{,}\) it follows that the line \(\ell'=\Span\{\boldx\}\) and its reflection \(\ell''=\Span\{T(\boldx)\}\) are equal. Clearly the only lines that are mapped to themselves by reflection through \(\ell\) are \(\ell\) and \(\ell^\perp\text{.}\) Thus if \(\boldx\) is an eigenvector of \(T\text{,}\) it must lie in \(\ell\) or \(\ell^\perp\text{.}\)
Fix \(\theta\in (0,2\pi)\) and define \(T\colon \R^2\rightarrow \R^2\) to be rotation by \(\theta\text{.}\) (See Definition 3.2.12.) Find all eigenvectors and eigenvalues of \(T\text{.}\) Use a geometric argument. Your answer will depend on the choice of \(\theta\text{.}\)
Rotation by \(\pi\) sends every vector \(\boldx\in \R^2\) to \(-\boldx\text{:}\) i.e., \(T(\boldx)=-\boldx=(-1)\boldx\text{.}\) It follows that \(\lambda=-1\) is the only eigenvalue of \(T\) and all nonzero elements of \(\R^2\) are eigenvectors with eigenvalue \(-1\text{.}\)
A similar argument to the one in Example 4.4.9 shows that \(T\) has no eigenvalues in this case. In more detail, a nonzero vector \(\boldx\) lies in the line \(\ell=\Span\{\boldx\}\text{,}\) and its rotation \(T(\boldx)\) lies in the line \(\ell'=\Span\{T(\boldx)\}\text{,}\) which is the result of rotating \(\ell\) by the angle \(\theta\text{.}\) Since \(\theta\ne \pi\text{,}\) it is clear that \(\ell\ne \ell'\text{,}\) and thus we cannot have \(T(\boldx)=\lambda\boldx\) for some \(\lambda\in \R\text{.}\)
To be an eigenvector of \(S\) a nonzero matrix \(A\) must satisfy \(S(A)=\lambda A\) for some \(\lambda\in \R\text{.}\) Using the definition of \(S\text{,}\) this means
We ask: for which scalars \(\lambda\in \R\) does there exist a nonzero matrix \(A\) satisfying (4.20)? Let's consider some specific choices of \(\lambda\text{.}\)
For this choice of \(\lambda\) we seek nonzero matrices satisfying \(S(A)=A^T=(-1)A=-A\text{.}\) These are precisely the nonzero skew-symmetric matrices: i.e.,
\begin{equation*}
A=\begin{amatrix}[rr]0\amp a\\ -a \amp 0 \end{amatrix}\text{.}
\end{equation*}
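As a quick check that any such nonzero matrix really is a \((-1)\)-eigenvector of \(S\text{,}\) note that for any \(a\in \R\) we have
\begin{equation*}
S(A)=A^T=\begin{amatrix}[rr] 0\amp -a\\ a\amp 0 \end{amatrix}=-A\text{.}
\end{equation*}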
Suppose \(A=\begin{amatrix}[cc]a\amp b \\ c\amp d \end{amatrix}\) satisfies \(A^T=\lambda A\text{.}\) Equating the entries of these two matrices yields the system
\begin{align*}
a \amp =\lambda a \\
d \amp = \lambda d\\
b \amp =\lambda c \\
c \amp =\lambda b \text{.}
\end{align*}
The first two equations imply \(a=d=0\text{,}\) using the fact that \(\lambda\ne 1\text{.}\) The second two equations imply further that \(b=\lambda^2 b\) and \(c=\lambda^2 c\text{.}\) Since \(\lambda\ne \pm 1\text{,}\) we have \(\lambda^2\ne 1\text{,}\) and it follows that \(b=c=0\text{.}\) We conclude that for \(\lambda\ne \pm 1\text{,}\) if \(A^T=\lambda A\text{,}\) then \(A=\boldzero\text{.}\) It follows that \(\lambda\) is not an eigenvalue of \(S\) in this case.
for some \(\lambda\in\R\text{.}\) Thus \(\lambda\) is an eigenvalue of \(T\) if and only if the differential equation (4.21) has a nonzero solution. This is true for all \(\lambda\in \R\text{!}\) Indeed for any \(\lambda\) the exponential function \(f(x)=e^{\lambda x}\) satisfies \(f'(x)=\lambda e^{\lambda x}=\lambda f(x)\) for all \(x\in \R\text{.}\) Furthermore, any solution to (4.21) is of the form \(f(x)=Ce^{\lambda x}\) for some \(C\in \R\text{.}\) We conclude that (a) every \(\lambda\in \R\) is an eigenvalue of \(T\text{,}\) and (b) for a given \(\lambda\text{,}\) the \(\lambda\)-eigenvectors of \(T\) are precisely the functions of the form \(f(x)=Ce^{\lambda x}\) for some \(C\ne 0\text{.}\)
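The uniqueness claim deserves a one-line justification: if \(f\) satisfies (4.21), then
\begin{equation*}
\left(f(x)e^{-\lambda x}\right)'=f'(x)e^{-\lambda x}-\lambda f(x)e^{-\lambda x}=\lambda f(x)e^{-\lambda x}-\lambda f(x)e^{-\lambda x}=0\text{,}
\end{equation*}
so \(f(x)e^{-\lambda x}\) is a constant function \(C\text{,}\) and hence \(f(x)=Ce^{\lambda x}\text{.}\)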
Subsection Finding eigenvalues and eigenvectors systematically
You can imagine that our ad hoc approach to finding eigenvalues and eigenvectors will break down once the linear transformation under consideration becomes complicated enough. As such, it is vital to have a systematic method of finding all eigenvalues and eigenvectors of a linear transformation \(T\colon V\rightarrow V\text{.}\) The rest of this section is devoted to describing just such a method in the special case where \(\dim V=n\lt\infty\text{.}\) The first key observation is that we can answer the eigenvalue and eigenvector questions for \(T\) by answering the same questions about \(A=[T]_B\text{,}\) where \(B\) is an ordered basis of \(V\text{.}\)
Theorem 4.4.13. Eigenvectors of a linear transformation.
Let \(T\colon V\rightarrow V\) be a linear transformation, let \(B=(\boldv_1, \boldv_2, \dots, \boldv_n)\) be an ordered basis of \(V\text{,}\) and let \(A=[T]_B\text{.}\)
A vector \(\boldv\in V\) is an eigenvector of \(T\) with eigenvalue \(\lambda\) if and only if \(\boldx=[\boldv]_B\) is an eigenvector of \(A\) with eigenvalue \(\lambda\text{.}\)
A value \(\lambda\in \R\) is an eigenvalue of \(T\) if and only if it is an eigenvalue of \(A\text{.}\) Thus \(T\) and \(A\) have the same set of eigenvalues.
We prove statement (1) as a chain of equivalences:
\begin{align*}
\boldv \text{ is an eigenvector of } T \amp \iff \boldv\ne \boldzero \text{ and } T\boldv=\lambda \boldv \\
\amp \iff \boldx=[\boldv]_B\ne \boldzero \text{ and } [T\boldv]_B=[\lambda\boldv]_B \amp \text{(Theorem 4.1.10, (2))} \\
\amp \iff \boldx=[\boldv]_B\ne \boldzero \text{ and } [T\boldv]_B=\lambda[\boldv]_B \amp \text{(Theorem 4.1.10, (1))}\\
\amp \iff \boldx=[\boldv]_B\ne \boldzero \text{ and } [T]_B[\boldv]_B=\lambda[\boldv]_B \amp \text{(Theorem 4.2.6)}\\
\amp \iff \boldx\ne \boldzero \text{ and } A\boldx=\lambda\boldx\\
\amp \iff \boldx \text{ is an eigenvector of } A\text{.}
\end{align*}
From (1) it follows directly that if \(\lambda\) is an eigenvalue of \(T\text{,}\) then it is an eigenvalue of \(A=[T]_B\text{.}\) Conversely, if \(\lambda\) is an eigenvalue of \(A=[T]_B\text{,}\) then there is a nonzero \(\boldx\in\R^n \) such that \(A\boldx=\lambda \boldx\text{.}\) Since \([\phantom{\boldv}]_B\) is surjective (Theorem 4.1.10, (3)), there is a vector \(\boldv\in V\) such that \([\boldv]_B=\boldx\text{.}\) It follows from (1) that \(\boldv\) is a \(\lambda\)-eigenvector of \(T\text{,}\) and thus that \(\lambda\) is an eigenvalue of \(T\text{.}\)
Thanks to Theorem 4.4.13, we can boil down the eigenvector/eigenvalue question for linear transformations of finite-dimensional vector spaces to the analogous question about square matrices. The next theorem is the key result.
Since an eigenvector must be nonzero, we conclude that the \(\lambda\)-eigenvectors of \(A\) are precisely the nonzero elements of \(\NS(\lambda I-A)\text{.}\) This proves statement (1). As a consequence, we see that \(A\) has \(\lambda\) as an eigenvalue if and only if \(\NS (\lambda I-A)\) contains nonzero elements: i.e., if and only if \(\NS (\lambda I-A)\ne \{\boldzero\}\text{.}\) By the invertibility theorem this is true if and only if \(\lambda I-A\) is not invertible.
According to Theorem 4.4.14, the eigenvectors of \(A\) live in null spaces of matrices of the form \(\lambda I-A\text{.}\) Accordingly, we call these spaces eigenspaces.
Let \(A\) be an \(n\times n\) matrix. Given \(\lambda\in \R\) the \(\lambda\)-eigenspace of \(A\) is the subspace \(W_\lambda\subseteq \R^n\) defined as
\begin{equation*}
W_\lambda=\NS (\lambda I -A)\text{.}
\end{equation*}
Similarly, given a linear transformation \(T\colon V\rightarrow V\) and \(\lambda\in \R\text{,}\) the \(\lambda\)-eigenspace of \(T\) is the subspace \(W_\lambda\subseteq V\) defined as
We nearly have a complete method for computing the eigenvalues and eigenvectors of a square matrix \(A\text{.}\) The last step is to identify the values of \(\lambda\) for which \(\lambda I-A\) is not invertible. By the invertibility theorem, the matrix \(\lambda I-A\) is not invertible if and only if \(\det (\lambda I-A)=0\text{.}\) Thus the eigenvalues of \(A\) are precisely the zeros of the function \(p(t)=\det(tI-A)\text{.}\) We have proved the following corollary.
When \(\theta=\pi\text{,}\) this reduces to \(t=\cos\pi=-1\text{,}\) confirming our conclusion in Example 4.4.10 that \(\lambda=-1\) is the only eigenvalue of the rotation by \(\pi\) operator.
When \(\theta\in (0,2\pi)\) and \(\theta\ne \pi\text{,}\) we have \(\cos^2\theta-1\lt 0\text{,}\) and we see that \(p(t)\) has no real roots. This confirms our conclusion in Example 4.4.10 that such rotations have no eigenvalues.
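For contrast, here is a quick case where real roots do appear; the matrix is chosen purely for illustration. Taking
\begin{equation*}
A=\begin{amatrix}[rr] 2\amp 1\\ 1\amp 2 \end{amatrix}\text{,}
\end{equation*}
we compute
\begin{equation*}
p(t)=\det(tI-A)=(t-2)^2-1=(t-1)(t-3)\text{,}
\end{equation*}
and conclude that the eigenvalues of \(A\) are \(\lambda=1\) and \(\lambda=3\text{.}\)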
We will show below that \(p(t)=\det(tI-A)\) is indeed a polynomial (Theorem 4.4.25). We postpone that discussion for now in order to present some examples of systematically computing eigenvalues and eigenvectors of matrices. Below you will find the complete description of this procedure.
Procedure 4.4.19. Computing eigenspaces of a matrix.
Let \(A\) be an \(n\times n\) matrix. To find all eigenvalues \(\lambda\) of \(A\) and compute a basis for the corresponding eigenspace \(W_\lambda\text{,}\) proceed as follows.
Compute \(p(t)=\det(tI-A)\text{.}\) Let \(\lambda_1, \lambda_2, \dots, \lambda_r\) be the distinct real roots of \(p(t)\text{.}\) These are the eigenvalues of \(A\text{.}\)
Zero is an eigenvalue of \(A\) if and only if \(W_0\) is nontrivial, if and only if \(\NS A\) is nontrivial (by (1)), if and only if \(A\) is not invertible.
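Spelling out the middle step: by definition of the eigenspace we have
\begin{equation*}
W_0=\NS(0\,I-A)=\NS(-A)=\NS A\text{,}
\end{equation*}
since \(-A\boldx=\boldzero\) if and only if \(A\boldx=\boldzero\text{.}\)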
Of course statement (2) of Corollary 4.4.22 gives rise to yet another equivalent formulation of invertibility, and we include this in our ever-expanding invertibility theorem at the end of the section. We end our current discussion with an example illustrating how to compute the eigenvalues and eigenvectors of an arbitrary linear transformation \(T\colon V\rightarrow V\) of a finite-dimensional space. The idea is to first represent \(T\) as a matrix with respect to some basis \(B\text{,}\) apply Procedure 4.4.19 to this matrix, and then ``lift" the results back to \(V\text{.}\)
Procedure 4.4.23. Computing eigenspaces of a linear transformation.
Let \(T\colon V\rightarrow V\) be a linear transformation of a finite-dimensional vector space \(V\text{.}\) To compute the eigenvalues and eigenspaces of \(T\text{,}\) proceed as follows.
For each eigenvalue \(\lambda\text{,}\) ``lift" the basis of \(W_\lambda\subseteq \R^n\) back up to \(V\) using the coordinate transformation \([\phantom{\boldv}]_B\text{.}\) The result is a basis for the \(\lambda\)-eigenspace of \(T\text{.}\)
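(To make the lifting step concrete with a hypothetical illustration: if \(V\) were the space of polynomials of degree at most one with ordered basis \(B=(1,x)\text{,}\) then the coordinate vector \((2,3)\in \R^2\) would lift back to the polynomial \(2+3x\in V\text{.}\))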
Now apply Procedure 4.4.19 to \(A\text{.}\) From \(p(t)=\det(tI-A)=(t-1)^2(t^2-1)=(t-1)^3(t+1)\) we conclude that \(\lambda=1\) and \(\lambda=-1\) are the only eigenvalues of \(A\) (and hence also \(S\)). Bases for the corresponding eigenspaces of \(A\) are easily computed as
It is easy to see that \(W_1\) and \(W_{-1}\) are the subspaces of symmetric and skew-symmetric matrices, respectively. This is consistent with our analysis in Example 4.4.11.
Fix \(n\geq 3\) and assume the claim is true for all \((n-1)\times (n-1)\) matrices. Let \(A=[a_{ij}]_{1\leq i,j\leq n}\text{.}\) Expanding \(p(t)=\det (tI-A)\) along the first row yields
(Recall that for any matrix \(B\) the notation \(B_{ij}\) denotes the submatrix obtained by removing the \(i\)-th row and \(j\)-th column of \(B\text{.}\)) First observe that \((tI-A)_{11}=tI-A_{11}\text{,}\) and thus \(\det (tI-A)_{11}\) is the characteristic polynomial of the \((n-1)\times (n-1)\) matrix \(A_{11}\text{.}\) By induction this is a monic polynomial of degree \(n-1\text{,}\) and hence the first term of (4.27), \((t-a_{11})\det (tI-A)_{11}\text{,}\) is a monic polynomial of degree \(n\text{.}\) Unfortunately, the remaining terms in the expansion (4.27) do not lend themselves to a direct application of the induction hypothesis. However, we observe that the \((n-1)\times (n-1)\) submatrices \((tI-A)_{1j}\) for \(j\geq 2\) all satisfy the following property: their first column contains only scalars; the remaining columns contain exactly one entry of the form \(t-c\text{,}\) while the rest of the entries are scalars. An easy (additional) induction argument shows that the determinant of such a matrix is a polynomial of degree at most \(n-2\text{.}\) (We leave this to you!) Since the first term of (4.27) is a monic polynomial of degree \(n\) and the rest of the terms are polynomials of degree at most \(n-2\text{,}\) we conclude that \(p(t)\) itself is a monic polynomial of degree \(n\text{,}\) as desired.
Statement (2) is the fundamental theorem of algebra: every polynomial with real coefficients factors completely over the complex numbers. Statement (3) follows from Corollary 4.4.16.
This proves half of statements (4) and (5). The fact that \(a_{n-1}=-\tr A\) can be proved by induction using a modified version of the argument from the proof of (1) above. It remains to show that \(a_0=(-1)^n\det A\text{.}\) We have
Remark 4.4.26. Characteristic polynomial for \(2\times 2\) matrices.
Let \(A\) be a \(2\times 2\) matrix, and let \(p(t)=\det(tI-A)\text{.}\) Using (4.22)–(4.24) we have
\begin{equation*}
p(t)=t^2-(\tr A) t+\det A\text{.}
\end{equation*}
This is a useful trick if you want to produce a \(2\times 2\) matrix with a prescribed characteristic polynomial. For example, a matrix with characteristic polynomial \(p(t)=t^2-2\) has trace equal to 0 and determinant equal to \(-2\text{.}\) Such matrices are easy to construct: e.g.,
An important consequence of Theorem 4.4.25 is that an \(n\times n\) matrix \(A\) can have at most \(n\) distinct eigenvalues. Indeed, the eigenvalues of \(A\) are the real roots appearing among the \(\lambda_i\) in the factorization
Any of the following equivalent conditions about the set \(S\) of columns of \(A\) hold: \(S\) is a basis of \(\R^n\text{;}\) \(S\) spans \(\R^n\text{;}\) \(S\) is linearly independent.
Any of the following equivalent conditions about the set \(S\) of rows of \(A\) hold: \(S\) is a basis of \(\R^n\text{;}\) \(S\) spans \(\R^n\text{;}\) \(S\) is linearly independent.
Suppose a \(3\times 3\) matrix \(A\) has only two distinct eigenvalues. Suppose that \({\rm tr}(A)=1\) and \({\rm det}(A)=63\text{.}\) Find the eigenvalues of \(A\) with their algebraic multiplicities.
If \(\vec{v}_1 = \left[\begin{array}{c} -1\\ -5 \end{array}\right]\) and \(\vec{v}_2=\left[\begin{array}{c} -3\\ 2 \end{array}\right]\) are eigenvectors of a matrix \(A\) corresponding to the eigenvalues \(\lambda_1=-2\) and \(\lambda_2=-5\text{,}\) respectively,
be eigenvectors of the matrix \(A\) which correspond to the eigenvalues \(\lambda_1 = -2\text{,}\)\(\lambda_2 = 0\text{,}\) and \(\lambda_3 = 2\text{,}\) respectively, and let
For each matrix below (a) compute the characteristic polynomial \(p(t)\text{,}\) (b) find all eigenvalues of the matrix, and (c) compute bases for eigenspaces of each eigenvalue.
Matrices \(A\) and \(B\) below both have characteristic polynomial \(p(t)=t^3-3t+2\text{.}\) For each matrix compute a basis of \(W_\lambda\) for each eigenvalue \(\lambda\text{.}\)
Fix \(\lambda\in \R\) and let \(W_\lambda, W_\lambda'\) be the \(\lambda\)-eigenspaces of \(A\) and \(A^T\text{,}\) respectively. Prove: \(\dim W_\lambda=\dim W_\lambda'\text{.}\)
Assume \(A\) is a square matrix satisfying \(A^k=\boldzero\) for some positive integer \(k\text{.}\) Show that \(0\) is the only eigenvalue of \(A\text{.}\) Your argument must make clear that \(0\) is in fact an eigenvalue of \(A\text{.}\)