
Section 2.3 Invertible matrices

Picking up the thread of Remark 2.2.10, we observe that the cancellation property enjoyed in real number algebra is a consequence of the fact that every nonzero real number \(a\ne 0\) has a multiplicative inverse, denoted \(a^{-1}\) or \(1/a\text{,}\) that satisfies \(aa^{-1}=1\text{.}\) Indeed, “canceling” the \(a\) in the equation \(ab=ac\) (assuming \(a\ne 0\)) is really the act of multiplying both sides of this equation by the multiplicative inverse \(a^{-1}\text{.}\)
Ever on the lookout for connections between real number and matrix algebra, we ask whether there is a sensible analogue of multiplicative inverses for matrices. We have seen already that identity matrices \(I_n\) play the role of multiplicative identities for \(n\times n\) matrices, just as the number \(1\) does for real numbers. This suggests we should restrict our attention to \(n\times n\) matrices. The following definition is then the desired analogue of the multiplicative inverse of a nonzero real number.

Subsection 2.3.1 Invertible matrices

Definition 2.3.1. Invertible matrix.

An \(n\times n\) matrix \(A\) is invertible (or nonsingular) if there is an \(n\times n\) matrix \(B\) satisfying
\begin{equation} AB=BA=I_n\text{.}\tag{2.3.1} \end{equation}
When this is the case we call \(B\) an inverse of \(A\text{,}\) denoted \(A^{-1}\text{,}\) and we say that \(A\) and \(B\) are inverses of one another.
A square matrix that is not invertible is called singular.
The phrase “an inverse” in Definition 2.3.1 is somewhat jarring. Shouldn’t we speak of the inverse of a matrix? Not surprisingly, if a matrix is invertible, then it has one and only one inverse. As intuitive as this fact may seem, however, it still requires proof.
Suppose the matrices \(B\) and \(C\) both satisfy the defining property (2.3.1) of an inverse of \(A\text{:}\) that is,
\begin{align*} AB\amp =BA=I \\ AC\amp=CA=I \text{.} \end{align*}
Then
\begin{align*} AB=I\text{ and } AC=I \amp\implies AB=AC\\ \amp \implies B(AB)=B(AC)\\ \amp \implies (BA)B=(BA)C\\ \amp \implies I\,B=I\,C \\ \amp \implies B=C\text{.} \end{align*}
Thus we see that \(B=C\text{,}\) showing that the inverse of \(A\text{,}\) if it exists, is unique.
The next theorem tells us that we can multiplicatively cancel a matrix if it is invertible: if \(Q\) is an invertible \(n\times n\) matrix, then \(QX=QY\iff X=Y\) and \(XQ=YQ\iff X=Y\) for all matrices \(X\) and \(Y\) of compatible size.
  1. We prove both implications of \(QX=QY \iff X=Y\) separately. The reverse implication (\(\Leftarrow\)) is obvious:
    \begin{equation*} X=Y\implies QX=QY\text{.} \end{equation*}
    For the forward implication (\(\Rightarrow\)), we have
    \begin{align*} QX=QY \amp \implies Q^{-1}(QX)=Q^{-1}(QY) \amp (Q \text{ inv.})\\ \amp \implies Q^{-1}QX=Q^{-1}QY \\ \amp \implies IX=IY\\ \amp \implies X=Y \text{.} \end{align*}
  2. The argument for right cancellation is entirely similar.
The next corollary shows how we can use invertible matrices to solve certain matrix equations uniquely: if \(Q\) is invertible, then \(QX=Y\) if and only if \(X=Q^{-1}Y\text{,}\) and \(XQ=Y\) if and only if \(X=YQ^{-1}\text{.}\)
  1. We have
    \begin{align*} QX=Y \amp \iff Q^{-1}QX=Q^{-1}Y \amp (\text{Theorem 2.3.3}, Q^{-1} \text{ inv.})\\ \amp \iff X=Q^{-1}Y\text{.} \end{align*}
  2. We have
    \begin{align*} XQ=Y \amp \iff XQQ^{-1}=YQ^{-1} \amp (\text{Theorem 2.3.3}, Q^{-1} \text{ inv.})\\ \amp \iff X=YQ^{-1}\text{.} \end{align*}
Without any additional theory at our disposal, to show a matrix \(A\) is invertible we must exhibit an inverse. The onus is on us to find a matrix \(B\) satisfying both \(AB=I\) and \(BA=I\text{.}\) (Remember: since we cannot assume \(BA=AB\text{,}\) we really need to show both equations hold.)
By the same token, to show \(A\) is not invertible we must show that an inverse does not exist: that is, we must prove that there is no \(B\) satisfying \(AB=BA=I\text{.}\) The next example illustrates these techniques for a variety of matrices.

Example 2.3.5. Invertible matrices.

  1. Identity matrices are invertible, and in fact we have \(I^{-1}=I\text{,}\) as witnessed by the fact that \(II=I\text{.}\)
  2. Square zero matrices \(\boldzero\) are never invertible, since for any square matrix \(B\) of the same dimension we have
    \begin{equation*} B\,\boldzero=\boldzero B=\boldzero\ne I\text{.} \end{equation*}
    Thus there is no matrix satisfying the inverse property (2.3.1) with respect to \(\boldzero\text{.}\)
  3. The inverse of the matrix \(\begin{amatrix}[rr]2\amp 1\\ 3\amp 2 \end{amatrix}\) is \(\begin{amatrix} 2\amp -1\\ -3\amp 2 \end{amatrix} \text{.}\) Indeed, we have
    \begin{equation*} \begin{amatrix}[rr]2\amp 1\\ 3\amp 2 \end{amatrix}\begin{amatrix}[rr] 2\amp -1\\ -3\amp 2 \end{amatrix}=\begin{amatrix}[rr] 2\amp -1\\ -3\amp 2 \end{amatrix}\begin{amatrix}[rr]2\amp 1\\ 3\amp 2 \end{amatrix}=I_2\text{,} \end{equation*}
    as you can easily verify (or check with the short Sage computation following this example).
  4. The matrix \(A= \begin{amatrix}[rrr] 1\amp 1 \amp 1\\ 1\amp 1 \amp 1\\ 1\amp 1\amp 1 \end{amatrix}\) is not invertible. Indeed, using the row method of matrix multiplication, we see that given any matrix \(B\text{,}\) each row of \(AB\) is given by
    \begin{equation*} \begin{amatrix}1\amp 1\amp 1 \end{amatrix}\,B \text{.} \end{equation*}
    It follows that all the rows of \(AB\) are identical, and hence that we cannot have \(AB=I_3\text{,}\) since the rows of \(I_3\) are not identical.
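For readers following along in Sage, here is a quick check of parts 3 and 4 above; the method calls are standard, and the variable names are our own.

A = matrix(QQ, [[2, 1], [3, 2]])
B = matrix(QQ, [[2, -1], [-3, 2]])
print(A*B == identity_matrix(2) and B*A == identity_matrix(2))   # True: B is the inverse of A

C = matrix(QQ, [[1, 1, 1], [1, 1, 1], [1, 1, 1]])
print(C.is_invertible())   # False: the all-ones matrix is singular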
As the preceding example illustrates, deciding whether a matrix is invertible is not so straightforward, especially if the matrix is large. For the \(2\times 2\) case, however, we have a relatively simple test for invertibility. (We will generalize this to the \(n\times n\) case in Section 2.5.)
The claim is that \(A=\begin{amatrix}[rr]a\amp b\\ c\amp d \end{amatrix}\) is invertible if and only if \(ad-bc\ne 0\text{,}\) in which case \(A^{-1}=\frac{1}{ad-bc}\begin{amatrix}[rr]d\amp -b\\ -c\amp a \end{amatrix}\text{.}\) If \(ad-bc\ne 0\text{,}\) the proposed matrix is indeed an inverse of \(A\text{,}\) as one readily verifies.
Assume \(ad-bc=0\text{.}\) If \(A=\boldzero\text{,}\) then \(A\) is not invertible, as we saw in the example above. Thus we can assume \(A\) is nonzero, in which case \(B=\begin{amatrix}[rr]d\amp -b\\ -c\amp a \end{amatrix}\) is also nonzero. An easy computation shows
\begin{equation*} AB=\abcdmatrix{ad-bc}{0}{0}{ad-bc}=\abcdmatrix{0}{0}{0}{0}. \end{equation*}
This implies \(A\) is not invertible. Indeed if it were, then the inverse \(A^{-1}\) would exist, and we’d have
\begin{align*} AB=\boldzero \amp\implies A^{-1}AB=\boldzero \\ \amp \implies IB=\boldzero \\ \amp \implies B=\boldzero \text{,} \end{align*}
which is a contradiction. We have proved that if \(ad-bc=0\text{,}\) then \(A\) is not invertible.
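As an aside for Sage users, the \(2\times 2\) test and formula can be packaged in a few lines. The helper inverse_2x2 below is our own illustration, not a built-in Sage command.

# our own helper implementing the 2x2 test and inverse formula
def inverse_2x2(A):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    det = a*d - b*c
    if det == 0:
        return None                                   # ad - bc = 0: A is singular
    return (1/det) * matrix(QQ, [[d, -b], [-c, a]])

A = matrix(QQ, [[2, 1], [3, 2]])
print(inverse_2x2(A) == A.inverse())                  # True
print(inverse_2x2(matrix(QQ, [[1, 2], [2, 4]])))      # None, since ad - bc = 0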
Sage has a number of useful tools related to invertibility. The boolean method is_invertible() tests for invertibility, and the method inverse() computes the inverse of an invertible matrix. Below we generate a random matrix with rational coefficients, test whether it is invertible, and compute its inverse if it is. The argument density=0.5 ensures that roughly half of the matrix entries are zero, which in turn increases the likelihood that the matrix is singular, for reasons that will become somewhat clearer later.
  • Evaluate the Sage cell below multiple times.
  • When the matrix is invertible, verify that \(AA^{-1}=A^{-1}A=I\text{.}\) If you like, use the blank Sage cell to compute \(AA^{-1}\) and \(A^{-1}A\text{.}\)
  • Try increasing the density setting in random_element() (e.g., density=0.75, density=.875) and see if the matrix is more or less likely to be invertible.
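A sketch of such a cell (the \(4\times 4\) size and variable names are our own choices) might look like the following.

MS = MatrixSpace(QQ, 4, 4)
A = MS.random_element(density=0.5)   # roughly half the entries are zero
print(A)
print(A.is_invertible())
if A.is_invertible():
    Ainv = A.inverse()
    print(A*Ainv == identity_matrix(4) and Ainv*A == identity_matrix(4))   # True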
The next theorem tells us that invertibility is preserved by matrix multiplication: that is, if \(A\) and \(B\) are invertible \(n\times n\) matrices, then so is \(C=AB\text{.}\)
Assume \(A\) and \(B\) are invertible. The statement of the theorem proposes a candidate for the inverse of \(AB\text{:}\) namely, \(C=B^{-1}A^{-1}\text{.}\) We need only show that \(C\) satisfies \(C(AB)=(AB)C=I\text{.}\) Here goes:
\begin{align*} C(AB)\amp =(B^{-1}A^{-1})AB=B^{-1}A^{-1}AB=B^{-1}IB=B^{-1}B=I\\ (AB)C\amp =(AB)B^{-1}A^{-1}=ABB^{-1}A^{-1}=AIA^{-1}=AA^{-1}=I\text{.} \end{align*}
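A quick numerical sanity check in Sage (with matrices of our own choosing) illustrates both the formula and the fact that the order of the factors matters.

A = matrix(QQ, [[2, 1], [3, 2]])
B = matrix(QQ, [[1, 1], [0, 1]])
print((A*B).inverse() == B.inverse()*A.inverse())   # True
print((A*B).inverse() == A.inverse()*B.inverse())   # False for this pair: order matters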
We prove by induction on the number \(r\) of matrices, \(r\geq 1\text{,}\) that if the \(A_i\) are invertible, then so is the product \(A_1A_2\cdots A_r\text{,}\) and the proposed inverse formula \((A_1A_2\cdots A_r)^{-1}=A_r^{-1}A_{r-1}^{-1}\cdots A_1^{-1}\) is valid.

Base step: \(r=1\).

For \(r=1\text{,}\) the inverse formula reads \(A_1^{-1}=A_1^{-1}\text{,}\) which is clearly true.

Induction step.

For the induction step we assume that the inverse formula is valid for any collection of \(r-1\) invertible matrices, and then show it is valid for any collection of \(r\) invertible matrices. Let \(A_1,A_2,\dots, A_r\) be invertible \(n\times n\) matrices. Define \(A=A_1A_2\cdots A_{r-1}\text{.}\) Then
\begin{align*} (A_1A_2\cdots A_r)^{-1} \amp=\left((A_1A_2\cdots A_{r-1})A_r\right)^{-1} \\ \amp=(AA_r)^{-1} \\ \amp= A_r^{-1}A^{-1} \amp (\text{Theorem 2.3.7})\\ \amp =A_r^{-1}(A_{r-1}^{-1}A_{r-2}^{-1}\cdots A_1^{-1}) \amp (\text{induction})\\ \amp=A_r^{-1}A_{r-1}^{-1}\cdots A_1^{-1} \amp (\text{assoc.}) \text{.} \end{align*}

Remark 2.3.9.

Whenever confronted with a logical implication of the form \(\mathcal{P}\implies\mathcal{Q}\text{,}\) where \(\mathcal{P}\) and \(\mathcal{Q}\) denote arbitrary propositions, you should always ask whether the implication “goes the other way”. In other words, does the converse implication \(\mathcal{Q}\implies \mathcal{P} \) also hold?
The answer with regard to the implication (2.3.2) is yes, though the proof of this is more difficult than you might think. (See Corollary 2.4.15.)
The following argument is a common invalid proof of the reverse implication:
  1. Assume \(AB\) is invertible.
  2. Then \(AB\) has an inverse matrix.
  3. Then the inverse of \(AB\) is \(B^{-1}A^{-1}\text{.}\)
  4. Then \(A^{-1}\) and \(B^{-1}\) exist. Hence \(A\) and \(B\) are invertible.
Where is the flaw in our logic here? The second statement only allows us to conclude that there is some mystery matrix \(C\) satisfying \((AB)C=C(AB)=I\text{.}\) We cannot yet say that \(C=B^{-1}A^{-1}\text{,}\) as this formula from Theorem 2.3.7 only applies when we already know that \(A\) and \(B\) are both invertible. But this is exactly what we are trying to prove! As such we are guilty here of “begging the question”, or petitio principii in Latin.

Subsection 2.3.2 Powers of matrices, matrix polynomials

We end this section by exploring how the matrix inverse operation fits into our matrix algebra. First, we can now use the inverse operation to define matrix powers of the form \(A^r\text{,}\) where \(A\) is a square matrix and \(r\) is an arbitrary integer.

Definition 2.3.10. Matrix powers.

Let \(A\) be an \(n\times n\) matrix, and let \(r\in\Z\) be an integer. We define the matrix power \(A^r\) as follows:
\begin{equation*} A^r=\begin{cases} I\amp \text{if } r=0;\\[2ex] \underset{r \text{ times}}{\underbrace{AA\cdots A}}\amp \text{if } r>0; \\[2ex] (A^{-1})^s \amp \text{if } r=-s < 0 \text{ and } A \text{ is invertible}. \end{cases} \end{equation*}
Equipped with a notion of matrix powers, we can further define matrix polynomials for square matrices.

Definition 2.3.11. Matrix polynomials.

Let \(f(x)=\anpoly\) be a polynomial with real coefficients. For any square matrix \(A\) of size \(n\times n\text{,}\) we define the matrix \(f(A)\) as
\begin{equation} f(A)=a_nA^n+a_{n-1}A^{n-1}+\cdots +a_1A+a_0I_n\text{.}\tag{2.3.5} \end{equation}
We call \(f(A)\) the result of evaluating the polynomial \(f\) at the matrix \(A\text{.}\)

Remark 2.3.12.

It is both easy and perilous to forget the identity matrix in the term \(a_0I_n\) appearing in (2.3.5). Take care not to make this mistake; without an identity matrix of the appropriate size, the expression \(f(A)\) simply does not make sense.

Example 2.3.13. Matrix polynomials.

Let \(f(x)=x^2-2x+1\text{.}\) Evaluate \(f\) at the matrices
\begin{equation*} A=\begin{bmatrix} 1\amp 1\\ 0\amp 1 \end{bmatrix} \end{equation*}
and
\begin{equation*} B=\begin{bmatrix} 1\amp 1\amp 1\\ 1\amp 1\amp 1\\ 1\amp 1\amp 1 \end{bmatrix}\text{.} \end{equation*}
Solution.
We have
\begin{align*} f(A) \amp =A^2-2A+I_2\\ \amp= \begin{bmatrix} 1\amp 2\\ 0\amp 1 \end{bmatrix}-2\begin{bmatrix} 1\amp 1\\ 0\amp 1 \end{bmatrix}+\begin{bmatrix} 1\amp 0\\ 0 \amp 1 \end{bmatrix} \\ \amp = \begin{bmatrix} 0\amp 0 \\ 0\amp 0 \end{bmatrix}=\underset{2\times 2}\boldzero \end{align*}
and
\begin{align*} f(B) \amp=B^2-2B+I_3 \\ \amp=\begin{bmatrix} 3\amp 3\amp 3\\ 3\amp 3\amp 3\\ 3\amp 3\amp 3 \end{bmatrix}-2\begin{bmatrix} 1\amp 1\amp 1\\ 1\amp 1\amp 1\\ 1\amp 1\amp 1 \end{bmatrix}+\begin{bmatrix} 1\amp 0\amp 0 \\ 0\amp 1\amp 0\\ 0\amp 0\amp 1 \end{bmatrix} \\ \amp = \begin{bmatrix} 2\amp 1\amp 1\\ 1\amp 2\amp 1\\ 1\amp 1\amp 2 \end{bmatrix}\text{.} \end{align*}
An integer matrix power \(A^n\) is computed in Sage as A^n.
Of course the matrix needs to be invertible for a negative power to be computed; Sage will throw an error if asked for a negative power of a singular matrix.
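A small sketch of these power computations (the matrices are our own examples) follows.

A = matrix(QQ, [[2, 1], [3, 2]])    # invertible, so negative powers make sense
print(A^3)                          # A*A*A
print(A^0)                          # the 2x2 identity matrix
print(A^(-2))                       # (A^{-1})^2

B = matrix(QQ, [[1, 1], [1, 1]])    # singular
# B^(-1) raises an error, since B is not invertible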
Polynomial expressions can then be computed manually in Sage. The next cell computes \(f(A)\) and \(f(B)\) for \(f(x)=x^2-3x+33\text{.}\)
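A sketch of such a computation, assuming we reuse the matrices \(A\) and \(B\) of Example 2.3.13, might read as follows.

# f(x) = x^2 - 3x + 33, evaluated at the matrices of Example 2.3.13
A = matrix(QQ, [[1, 1], [0, 1]])
B = matrix(QQ, [[1, 1, 1], [1, 1, 1], [1, 1, 1]])
fA = A^2 - 3*A + 33*identity_matrix(2)
fB = B^2 - 3*B + 33*identity_matrix(3)
print(fA)
print(fB)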
We took care to heed the warning in Remark 2.3.12, making sure to include \(I_2\) for \(f(A)\) (identity_matrix(2)) and \(I_3\) for \(f(B)\) (identity_matrix(3)). Interestingly, Sage is smart enough to figure out what we mean even if we are sloppy in this regard.
The proofs of the first three statements are elementary, and closely resemble proofs of similar results in real number algebra. We leave these as an (unassigned) exercise.
For the fourth statement to make sense, we must assume that \(A\) is invertible. The claim here is that \(A^{-1}\) is invertible, and that its inverse is \(A\) itself. To prove this we need only show \(A^{-1}A=AA^{-1}=I\text{,}\) which follows from the definition of the inverse.
The fifth statement also tacitly assumes \(A\) is invertible. To prove it, we consider the three cases \(r=0\text{,}\) \(r>0\) and \(r < 0\text{.}\)
If \(r=0\text{,}\) then by definition \(A^{-r}=A^0=I=(A^{-1})^0=(A^{-1})^r\text{.}\)
If \(r>0\text{,}\) then by definition \(A^{-r}=(A^{-1})^r\text{.}\)
Suppose \(r=-s < 0\text{.}\) Then
\begin{align*} (A^{-1})^{r} \amp = ((A^{-1})^{-1})^s \amp (\text{Definition 2.3.10}) \\ \amp =A^s \amp ((A^{-1})^{-1}=A)\\ \amp=A^{-r} \amp (r=-s) \text{.} \end{align*}
We prove both implications of the if and only if statement separately.
Suppose \(A\) is invertible with inverse \(A^{-1}\text{.}\) To see that \(A^T\) is invertible, with inverse \((A^{-1})^T\) as specified in (2.3.6), we need only show that
\begin{equation*} A^T(A^{-1})^T=(A^{-1})^T\, A^T=I\text{.} \end{equation*}
We verify the two equalities separately:
\begin{align*} A^T(A^{-1})^T\amp =\left(A^{-1}A\right)^T \amp (\text{Theorem 2.2.11})\\ \amp =I_n^T \\ \amp =I_n \checkmark \end{align*}
\begin{align*} (A^{-1})^TA^T \amp =(AA^{-1})^T \amp (\text{Theorem 2.2.11})\\ \amp =I_n^T=I_n \ \checkmark\text{.} \end{align*}
In both chains of equality we make use of the obvious claim \(I^T=I\text{.}\)
For the other direction, assume \(A^T\) is invertible. Setting \(B=A^T\text{,}\) we see that \(A=(A^T)^T=B^T\text{.}\) By the first implication, we know that if \(B\) is invertible, then so is \(B^T=A\text{.}\)
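As a quick (non-proof) check in Sage, with a matrix of our own choosing:

A = matrix(QQ, [[2, 1], [3, 2]])
print(A.transpose().inverse() == A.inverse().transpose())   # True: (A^T)^{-1} = (A^{-1})^T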

Exercises 2.3.3 Exercises

WeBWorK Exercises

1.
If \(A\) and \(B\) are invertible \(n\times n\) matrices, then the inverse of \(A+B\) is \(A^{-1}+B^{-1}\text{.}\)
  • True
  • False
Answer.
\(\text{False}\)
Solution.
False. For example, let \(A = I_n\) and \(B = -I_n\text{;}\) then \(A+B = 0_n\text{,}\) which is not invertible.
2.
Solve for the matrix \(X\) if \(AX(D+B X)^{-1} = C\text{.}\) Assume that all matrices are \(n\times n\) and invertible as needed.
\(X =\)
Answer.
\(\left(A-CB\right)^{-1}CD\)
Solution.
Note that
\begin{align*} AX(D+BX)^{-1} = C \amp \Rightarrow AX = C(D+BX) = CD + CBX \\ \amp \Rightarrow AX - CBX = CD\\ \amp \Rightarrow (A-CB)X = CD \\ \amp \Rightarrow X = (A-CB)^{-1} CD. \end{align*}
3.
Are the following matrices invertible?
  1. \(\displaystyle \left[\begin{array}{cc} 9 \amp -4\cr 0 \amp 0 \end{array}\right]\)
  2. \(\displaystyle \left[\begin{array}{cc} -9 \amp -4\cr 9 \amp 4 \end{array}\right]\)
  3. \(\displaystyle \left[\begin{array}{cc} -1 \amp 3\cr 5 \amp 2 \end{array}\right]\)
  4. \(\displaystyle \left[\begin{array}{cc} -1 \amp -3\cr -1 \amp -4 \end{array}\right]\)
4.
For what values of \(c\) will \(\displaystyle A = \begin{bmatrix} 1 \amp 1 \\ c \amp c^2 \end{bmatrix}\) be invertible?
For all \(c\) such that \(c\neq\) and \(c\neq\) .
Answer 1.
\(0\)
Answer 2.
\(1\)
Solution.
The matrix \(\displaystyle A = \begin{bmatrix} 1 \amp 1 \\ c \amp c^2 \end{bmatrix}\) is invertible provided the columns of \(A\) are linearly independent, which will be the case if \(c \neq c^2\text{.}\) Thus, we require that \(c\neq 0\) and \(c \neq 1\text{.}\)
5.
Let
\begin{equation*} A=\left(\begin{array}{cc} 7 \amp 4 \cr -4 \amp 8 \end{array}\right), B=\left(\begin{array}{cc} 6 \amp -6 \cr 4 \amp 2 \end{array}\right). \end{equation*}
Then
\begin{equation*} AB=\left(\begin{array}{cc} a_{11} \amp a_{12} \cr a_{21} \amp a_{22} \end{array}\right) \end{equation*}
where \(a_{11}=\) , \(a_{12}=\) , \(a_{21}=\) , \(a_{22}=\) ,
\begin{equation*} BA=\left(\begin{array}{cc} b_{11} \amp b_{12} \cr b_{21} \amp b_{22} \end{array}\right) \end{equation*}
where \(b_{11}=\) , \(b_{12}=\) , \(b_{21}=\) , \(b_{22}=\) ,
and
\begin{equation*} A^T B^T=\left(\begin{array}{cc} c_{11} \amp c_{12} \cr c_{21} \amp c_{22} \end{array}\right) \end{equation*}
where \(c_{11}=\) , \(c_{12}=\) , \(c_{21}=\) , \(c_{22}=\) .
Answer 1.
\(58\)
Answer 2.
\(-34\)
Answer 3.
\(8\)
Answer 4.
\(40\)
Answer 5.
\(66\)
Answer 6.
\(-24\)
Answer 7.
\(20\)
Answer 8.
\(32\)
Answer 9.
\(66\)
Answer 10.
\(20\)
Answer 11.
\(-24\)
Answer 12.
\(32\)

Written Exercises

6.
For each matrix either provide an inverse or show the matrix is not invertible. Justify your answer.
  1. \(\displaystyle A=\begin{amatrix}[rr]1\amp 4\\ -2\amp -7 \end{amatrix}\)
  2. \(\displaystyle A=\begin{amatrix}[rr] 2\amp -1 \\ 1/2 \amp -1/4 \end{amatrix}\)
  3. \(A=p\left( \begin{amatrix}[rr] 1\amp -2 \\ -2 \amp 0 \end{amatrix}\right)\text{,}\) where \(p(x)=x^2+x+1\text{.}\)
7.
Each \(A\) below is invertible. Find \(A^{-1}\) by guess and check. You may want to use the row or column method of matrix multiplication to justify your answer.
  1. \(\displaystyle A=\begin{amatrix}[ccc]1\amp 0\amp 0\\ 0\amp 0\amp 1\\ 0\amp 1\amp 0 \end{amatrix}\)
  2. \(\displaystyle A=\begin{amatrix}[rrr]0\amp 0\amp 1 \\0 \amp -1 \amp 0 \\ -1 \amp 0 \amp 0 \end{amatrix}\)
  3. \(\displaystyle A=\begin{amatrix}[rrr] 1 \amp 0\amp 0\\ 0\amp 1\amp 2 \\ 0\amp 0\amp 1 \end{amatrix}\)
8.
Suppose \(A\) is an invertible matrix. Prove: for any nonzero \(c\in\R\) the matrix \(cA\) is invertible.
9.
Assume \(A\) is a square \(n\times n\) matrix with \(n\geq 2\text{.}\)
  1. Prove: if \(A\) has two identical columns, then \(A\) is not invertible.
  2. Prove: if \(A\) has a row that is a scalar multiple of another row, then \(A\) is not invertible.
Hint.
Use the column and/or row method of matrix multiplication to show directly that \(A\) cannot have an inverse matrix.
10.
Find all invertible matrices \(A\) satisfying the given equation, or show there is no such \(A\text{.}\) Justify your answer.
  1. \(A^5=\boldzero_{n\times n}\text{.}\)
  2. \(A^{-3}=A^{-4}\text{.}\)
  3. \(\displaystyle A^2-3A=\boldzero_{n\times n}\)
  4. \(\displaystyle ((A^T)^2A)^T-(A^{-1}(A^{-1})^T)^{-1}=\boldzero_{n\times n}\)
11.
Let \(A=\begin{bmatrix} 1\amp 1\\ 0\amp 1 \end{bmatrix}\text{.}\) Find a formula for \(A^r\text{,}\) where \(r\geq 1\) is an integer. Justify your answer using a proof by induction.
12.
Let \(A=[1]_{n\times n}\text{,}\) the \(n\times n\) matrix consisting of all ones. Find a formula for \(A^r\text{,}\) where \(r\geq 1\) is an integer. Justify your answer using a proof by induction.
13.
Let \(p(x)=x^2-5x+c\text{,}\) where \(c\in \R\) is some fixed scalar. Suppose \(A\) is an \(n\times n\) matrix satisfying \(p(A)=\boldzero_{n\times n}\text{.}\)
  1. Prove: if \(c\ne 0\text{,}\) then \(A\) is invertible.
  2. Suppose further that \(A\) is not a scalar multiple of \(I_n\text{.}\)
    Prove: if \(c=0\text{,}\) then \(A\) is singular.
14. Expanding matrix products.
Fix a positive integer \(n\text{.}\) Given linear combinations of \(n\times n\) matrices
\begin{align*} A \amp =c_1A_1+c_2A_2+\cdots +c_rA_r=\sum_{i=1}^rc_iA_i\\ B \amp =d_1B_1+d_2B_2+\cdots +d_sB_s=\sum_{j=1}^sd_jB_j\text{,} \end{align*}
prove by induction on \(r\geq 1\) that
\begin{equation*} AB=\sum_{i=1}^r\sum_{j=1}^sc_id_jA_iB_j\text{.} \end{equation*}
Note that each step (base and induction) of your induction on \(r\) will require an argument that uses induction on \(s\text{!}\) This is sometimes called double induction. For example, in the base step \(r=1\) you must show that
\begin{equation*} c_1A_1(d_1B_1+d_2B_2+\cdots +d_sB_s)=c_1d_1A_1B_1+c_1d_2A_1B_2+\cdots +c_1d_sA_1B_s \end{equation*}
for any \(s\geq 1\text{;}\) this should be proved by induction on \(s\text{.}\)
15. Polynomial expressions of \(A\) commute.
Let \(p(x)=\anpoly\) and \(q(x)=\bmpoly\) be polynomials with real coefficients. For any square matrix \(A\text{,}\) show that the matrices \(p(A)\) and \(q(A)\) commute: i.e.,
\begin{equation*} p(A)q(A)=q(A)p(A)\text{.} \end{equation*}
You may use the result of Exercise 2.3.3.14.
16.
Suppose \(A\) is an \(n\times n\) matrix satisfying \(A^r=\boldzero\) for some \(r\geq 1\text{.}\)
Show that \((A-I)\) is invertible, and that in fact
\begin{equation*} (A-I)^{-1}=-(A^{r-1}+A^{r-2}+\cdots +A+I)\text{.} \end{equation*}
You may use the results of Exercise 2.3.3.14 and/or Exercise 2.3.3.15.