Euclidean vector spaces

Section 7.1 Inner product spaces

At last we return to that additional layer that can be added atop a vector space structure: the inner product.

Subsection 7.1.1 Norm and distance

As mentioned above, once an inner product is established, we can define further notions like norm (or length), distance, and angle in terms of the given inner product. When the inner product in question is the standard dot product on \(\R^2\) or \(\R^3\text{,}\) then these are precisely the familiar notions you may have met in multivariable calculus. Things get really interesting when we treat a more exotic inner product space. For example, consider \(C([a,b])\text{.}\) The integral inner product
\begin{equation*} \angvec{f,g}=\int_a^b f(x)g(x)\, dx \end{equation*}
gives rise to useful notions of the length of a function \(f\in C([a,b])\text{,}\) as well as the distance or angle between two functions \(f,g\in C([a,b])\text{.}\)

Definition 7.1.1. Norm (or length) of a vector.

Let \((V, \langle , \rangle )\) be an inner product space. Given \(\boldv\in V\) we define its norm (or length), denoted \(\norm{\boldv}, \) as
\begin{equation*} \norm{\boldv}=\sqrt{\langle \boldv, \boldv \rangle }\text{.} \end{equation*}
A unit vector is a vector \(\boldv\) of length one: i.e., a vector \(\boldv\) satisfying \(\norm{\boldv}=1\text{.}\)

Example 7.1.2. Norm with respect to dot product.

Consider \(V=\R^4\) with the standard dot product. Compute \(\norm{(1,-1,-2,1)}\text{.}\)
Solution.
We have
\begin{align*} \norm{(1,-1,-2,1)}\amp =\sqrt{(1,-1,-2,1)\cdot (1,-1,-2,1)} \\ \amp=\sqrt{1+1+4+1} \\ \amp = \sqrt{7}\text{.} \end{align*}

Example 7.1.3. Norm with respect to weighted dot product.

Consider \(V=\R^3\) equipped with the dot product with weights \(1,2,3\text{.}\) Compute \(\norm{(3,1,-2)}\text{.}\)
Solution.
We have
\begin{align*} \norm{(3,1,-2)}\amp =\sqrt{\left\langle (3,1,-2), (3,1,-2)\right\rangle} \\ \amp=\sqrt{1(3^2)+2(1^2)+3((-2)^2)} \\ \amp = \sqrt{23}\text{.} \end{align*}
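The same computation makes sense in function spaces. For instance, take \(V=C([0,1])\) with the integral inner product and \(f(x)=x\) (a function chosen here purely for concreteness). Then
\begin{align*} \norm{f}\amp =\sqrt{\langle f, f\rangle} \\ \amp =\sqrt{\int_0^1 x^2\, dx} \\ \amp =\sqrt{1/3}=\frac{\sqrt{3}}{3}\text{.} \end{align*}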

Remark 7.1.4. Unit vectors.

Given any \(\boldv\ne \boldzero\in V\text{,}\) the vector \(\boldu=\frac{1}{\norm{\boldv}}\boldv\) is a unit vector. To verify this, let \(c=\norm{\boldv}\) and compute
\begin{align*} \norm{\boldu} \amp =\sqrt{\left\langle \frac{1}{c}\boldv,\frac{1}{c}\boldv \right\rangle }\\ \amp =\sqrt{\frac{1}{c^2}\langle \boldv,\boldv \rangle} \amp (\text{Definition 1.2.3}, \text{(ii)}) \\ \amp =\frac{1}{\val{c}}\sqrt{\langle \boldv,\boldv \rangle } \\ \amp =\frac{1}{c}\sqrt{\langle \boldv,\boldv \rangle } \amp (c=\norm{\boldv}\geq 0)\\ \amp =\frac{\norm{\boldv}}{\norm{\boldv}}=1\text{.} \end{align*}

Example 7.1.5. Unit vectors.

For each inner product space \((V, \langle\,, \rangle)\) and \(\boldv\in V\text{,}\) compute the associated unit vector \(\boldu=\frac{1}{\norm{\boldv}}\boldv\text{.}\)
  1. \(V=\R^4\) with dot product, \(\boldv=(1,-1,-2,1)\)
  2. \(V=\R^3\) with dot product with weights \(1,2,3\text{,}\) \(\boldv=(3,1,-2)\)
Solution.
The norms of the vectors in each case were computed in earlier examples. We simply scale to compute the corresponding unit vectors.
  1. \(\displaystyle \boldu=\frac{1}{\sqrt{7}}(1,-1,-2,1)=(\sqrt{7}/7,-\sqrt{7}/7,-2\sqrt{7}/7,\sqrt{7}/7)\)
  2. \(\displaystyle \boldu=\frac{1}{\sqrt{23}}(3,1,-2)=(3\sqrt{23}/23,\sqrt{23}/23,-2\sqrt{23}/23)\)
Next, we define the distance between two vectors in an inner product space as the length of their vector difference.

Definition 7.1.6. Distance between vectors.

Let \((V,\langle , \rangle )\) be an inner product space. The distance between \(\boldv, \boldw\in V\), denoted \(d(\boldv, \boldw)\text{,}\) is defined as
\begin{equation*} d(\boldv, \boldw)=\norm{\boldv-\boldw}=\sqrt{\langle \boldv-\boldw,\boldv-\boldw \rangle }\text{.} \end{equation*}

Example 7.1.7.

For each inner product space \(V\text{,}\) compute the distance between the given vectors.
  1. \(V=\R^3\) with the dot product, \(\boldx=(x_1,x_2,x_3)\text{,}\) \(\boldy=(y_1,y_2,y_3)\)
Solution.
  1. We have
    \begin{align*} d(\boldx,\boldy) \amp = \norm{\boldx-\boldy} \\ \amp = \sqrt{\langle \boldx-\boldy,\boldx-\boldy\rangle} \\ \amp =\sqrt{(x_1-y_1,x_2-y_2,x_3-y_3)\cdot(x_1-y_1,x_2-y_2,x_3-y_3)} \\ \amp = \sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2}\text{.} \end{align*}
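The same definition applies in less familiar inner product spaces. For instance, take \(C([0,1])\) with the integral inner product and \(f(x)=x\text{,}\) \(g(x)=1\) (functions chosen here purely for concreteness). Then
\begin{align*} d(f,g)\amp =\norm{f-g} \\ \amp =\sqrt{\int_0^1 (x-1)^2\, dx} \\ \amp =\sqrt{1/3}=\frac{\sqrt{3}}{3}\text{.} \end{align*}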

Proof.

We prove (2) and leave the rest as an exercise (Exercise 7.1.4.16).
Given \(c\in \R\) and \(\boldv\in V\) we have
\begin{align*} \norm{c\boldv} \amp =\sqrt{\langle c\boldv, c\boldv\rangle}\\ \amp=\sqrt{c^2\langle \boldv, \boldv\rangle} \amp (\text{Definition 1.2.3})\\ \amp =\val{c}\sqrt{\langle \boldv, \boldv\rangle} \amp (\sqrt{c^2}=\val{c})\\ \amp =\val{c}\,\norm{\boldv}\text{.} \end{align*}
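As a quick numeric check (with the scalar and vector chosen here arbitrarily), take \(\R^2\) with the dot product, \(c=-3\text{,}\) and \(\boldv=(1,2)\text{:}\)
\begin{equation*} \norm{-3(1,2)}=\norm{(-3,-6)}=\sqrt{9+36}=3\sqrt{5}=\val{-3}\,\norm{(1,2)}\text{.} \end{equation*}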

Subsection 7.1.2 Cauchy-Schwarz inequality, triangle inequalities, and angles between vectors

The famous Cauchy-Schwarz inequality has a knack for cropping up all over the world of science: from properties of covariance in statistics, to the Heisenberg uncertainty principle of quantum mechanics. More directly pertinent to our discussion, the Cauchy-Schwarz inequality implies the triangle inequalities (7.1.10) and ensures that our notion of the angle between two nonzero vectors (Definition 7.1.11) is well-defined.

Theorem 7.1.9. Cauchy-Schwarz inequality.

Let \((V, \langle\, , \rangle)\) be an inner product space. For all \(\boldv, \boldw\in V\) we have
\begin{equation*} \val{\langle \boldv, \boldw \rangle }\leq \norm{\boldv}\, \norm{\boldw}\text{.} \end{equation*}
Furthermore, equality holds if and only if one of the two vectors is a scalar multiple of the other.

Proof.

Fix vectors \(\boldv\) and \(\boldw\text{.}\) For any \(t\in\R\) we have by positivity
\begin{equation*} 0\leq \langle t\boldv-\boldw,t\boldv-\boldw\rangle=\langle\boldv,\boldv\rangle t^2-2\langle\boldv,\boldw\rangle t+\langle\boldw,\boldw\rangle=at^2-2bt+c\text{,} \end{equation*}
where
\begin{equation} a=\langle\boldv,\boldv\rangle, \ b=\langle\boldv,\boldw\rangle, \ c=\langle\boldw,\boldw\rangle=\norm{\boldw}^2\text{.}\tag{7.1} \end{equation}
Since \(at^2-2b\,t+c\geq 0\) for all \(t\in \R\) the quadratic polynomial \(p(t)=at^2-2b\,t+c\) has at most one root. Using the quadratic formula we conclude that we must have \(4b^2-4ac\leq 0\text{,}\) since otherwise \(p(t)\) would have two distinct roots. It follows that
\begin{equation*} 4\langle \boldv, \boldw\rangle^2-4\norm{\boldv}^2\norm{\boldw}^2\leq 0\text{,} \end{equation*}
or equivalently
\begin{equation*} \langle\boldv,\boldw\rangle^2\leq \norm{\boldv}^2\norm{\boldw}^2\text{.} \end{equation*}
Taking square-roots yields the desired inequality.
The same reasoning shows that the Cauchy-Schwarz inequality is an actual equality if and only if \(p(t)=0\) for some \(t\text{,}\) if and only if \(0=\langle t\boldv-\boldw,t\boldv-\boldw\rangle\) for some \(t\text{,}\) if and only if \(\boldw=t\boldv\) for some \(t\) (by positivity).
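To see the inequality and its equality condition in action, take \(\R^2\) with the dot product and \(\boldv=(1,2)\text{,}\) \(\boldw=(3,1)\) (vectors chosen here purely for illustration). We have
\begin{equation*} \val{\langle \boldv, \boldw\rangle}=5\leq \sqrt{5}\sqrt{10}=\norm{\boldv}\,\norm{\boldw}\text{,} \end{equation*}
and the inequality is strict since \(\boldw\) is not a scalar multiple of \(\boldv\text{.}\) Replacing \(\boldw\) with \(2\boldv=(2,4)\) yields equality: \(\val{\langle \boldv, 2\boldv\rangle}=10=\sqrt{5}\sqrt{20}\text{.}\)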
The following triangle inequalities are more or less direct consequences of the Cauchy-Schwarz inequality.

Theorem 7.1.10. Triangle inequalities.

Let \((V, \langle\, , \rangle)\) be an inner product space. For all \(\boldu, \boldv, \boldw\in V\) we have
  1. \(\norm{\boldv+\boldw}\leq \norm{\boldv}+\norm{\boldw}\text{;}\)
  2. \(d(\boldu,\boldw)\leq d(\boldu,\boldv)+d(\boldv,\boldw)\text{.}\)

Proof.

This is an elementary exercise of unpacking the definitions of norm and distance in terms of the inner product, and then applying the Cauchy-Schwarz inequality appropriately. The proof is left as an exercise.
Let \((V, \langle , \rangle )\) be an inner product space. For any nonzero vectors \(\boldv, \boldw\text{,}\) the Cauchy-Schwarz inequality tells us that
\begin{equation*} \val{\langle \boldv, \boldw \rangle }\leq \norm{\boldv}\, \norm{\boldw}\text{,} \end{equation*}
or equivalently,
\begin{equation*} -1\leq \frac{\langle \boldv, \boldw \rangle}{\norm{\boldv}\, \norm{\boldw}} \leq 1\text{.} \end{equation*}
It follows that there is a unique real number \(\theta\in [0,\pi]\) satisfying
\begin{equation*} \cos\theta=\frac{\langle \boldv, \boldw \rangle}{\norm{\boldv}\, \norm{\boldw}}\text{.} \end{equation*}
We call \(\theta\) the angle between \(\boldv\) and \(\boldw\).

Definition 7.1.11. Angle between vectors.

Let \((V,\langle , \rangle )\) be an inner product space. Given nonzero vectors \(\boldv, \boldw\in V\text{,}\) the angle between \(\boldv\) and \(\boldw\) is defined to be the unique \(\theta\in [0,\pi]\) satisfying
\begin{equation*} \cos\theta=\frac{\langle \boldv, \boldw \rangle}{\norm{\boldv}\, \norm{\boldw}}\text{.} \end{equation*}
Equivalently, we have
\begin{equation*} \theta=\arccos\left( \frac{\langle \boldv, \boldw \rangle}{\norm{\boldv}\, \norm{\boldw}} \right)\text{.} \end{equation*}

Remark 7.1.12.

Our definition of the angle between two vectors may remind you of the dot product angle formula for vectors in \(\R^3\text{:}\)
\begin{equation} \cos\theta=\frac{\boldx\cdot\boldy}{\norm{\boldx}\norm{\boldy}}\text{.}\tag{7.2} \end{equation}
Interestingly, whereas (7.2) is typically treated as a theorem, derived from properties of the dot product and the law of cosines, in a general inner product space the equation
\begin{equation*} \cos\theta =\frac{\langle \boldv, \boldw\rangle}{\norm{\boldv}\norm{\boldw}} \end{equation*}
is understood as the definition of the angle between two vectors.

Example 7.1.13.

Consider \(\R^2\) along with the dot product. Verify that our definition of the angle \(\theta\) between \((1,1)\) and \((1,0)\) is consistent with our planar geometry notion of angle.
Solution.
According to Definition 7.1.11, \(\theta\) is the unique element of \([0,\pi]\) satisfying
\begin{equation*} \cos \theta=\frac{(1,1)\cdot (1,0)}{\norm{(1,1)}\norm{(1,0)}}=\frac{1}{\sqrt{2}}=\frac{\sqrt{2}}{2}\text{.} \end{equation*}
We recognize \(\theta\) as the familiar angle \(\pi/4\text{,}\) as expected.

Example 7.1.14.

Consider \(\R^2\) with the weighted dot product
\begin{equation*} \langle (x_1,x_2), (y_1,y_2)\rangle=2x_1y_1+x_2y_2\text{.} \end{equation*}
Compute the angle \(\theta\) between \((1,1)\) and \((1,0)\) with respect to this inner product.
Solution.
First compute
\begin{align*} \langle (1,1), (1,0)\rangle \amp= 2(1)+1(0)=2 \\ \norm{(1,1)}\amp =\sqrt{\langle (1,1),(1,1)\rangle} \\ \amp =\sqrt{2+1}=\sqrt{3} \\ \norm{(1,0)}\amp =\sqrt{\langle (1,0),(1,0)\rangle} \\ \amp =\sqrt{2} \end{align*}
By definition \(\theta\) is the unique value in \([0,\pi]\) satisfying
\begin{equation*} \cos\theta=\frac{\langle (1,1), (1,0)\rangle}{\norm{(1,1)}\norm{(1,0)}}=\frac{2}{\sqrt{3}\sqrt{2}}=\frac{\sqrt{6}}{3}\text{.} \end{equation*}
We see that \(\theta\) is not one of our familiar angles from the unit circle (e.g., \(\pi/6, \pi/4\text{,}\) etc.), and so we express \(\theta\) in terms of the \(\arccos\) function:
\begin{equation*} \theta=\arccos(\sqrt{6}/3)\approx 35.3^\circ\text{.} \end{equation*}
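The same recipe computes angles between functions. For instance, consider \(C([0,1])\) with the integral inner product and \(f(x)=x\text{,}\) \(g(x)=x^2\) (functions chosen here purely for concreteness). We compute
\begin{align*} \langle f, g\rangle \amp =\int_0^1 x\cdot x^2\, dx=\frac{1}{4} \\ \norm{f}\amp =\sqrt{\int_0^1 x^2\, dx}=\frac{1}{\sqrt{3}} \\ \norm{g}\amp =\sqrt{\int_0^1 x^4\, dx}=\frac{1}{\sqrt{5}}\text{,} \end{align*}
so that
\begin{equation*} \cos\theta=\frac{1/4}{(1/\sqrt{3})(1/\sqrt{5})}=\frac{\sqrt{15}}{4}, \qquad \theta=\arccos(\sqrt{15}/4)\approx 14.5^\circ\text{.} \end{equation*}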

Subsection 7.1.3 Choosing your inner product

Why, given a fixed vector space \(V\text{,}\) would we prefer one inner product definition to another? One way of understanding a particular choice of inner product is to ask what its corresponding notion of distance measures.

Example 7.1.15. Weighted dot product distance.

Consider \(\R^n\) with a choice of weighted dot product
\begin{equation*} \langle (x_1,x_2,\dots, x_n), (y_1,y_2,\dots, y_n)\rangle=k_1x_1y_1+k_2x_2y_2+\cdots +k_nx_ny_n, \end{equation*}
where \(k_1,k_2,\dots, k_n\) are fixed positive constants. With respect to this inner product the distance between two vectors \(\boldx=(x_1,x_2,\dots, x_n)\) and \(\boldy=(y_1,y_2,\dots, y_n)\) is
\begin{equation*} d(\boldx,\boldy)=\norm{\boldx-\boldy}=\sqrt{k_1(x_1-y_1)^2+k_2(x_2-y_2)^2+\cdots +k_n(x_n-y_n)^2}\text{.} \end{equation*}
Thus \(d(\boldx, \boldy)\) is an aggregate measure of the difference between the corresponding entries of \(\boldx\) and \(\boldy\text{,}\) as weighted by our choice of the constants \(k_i\text{.}\)
Imagine that each entry of \(\boldx\in \R^n\) is a data point collected by measuring \(n\) different properties of a sample \(s\text{:}\) i.e., \(x_i\) is the measured value of property \(P_i\) on \(s\) for all \(1\leq i\leq n\text{.}\) Given samples \(s\) and \(s'\) with corresponding measurement vectors \(\boldx\) and \(\boldy\text{,}\) the weighted distance \(d(\boldx,\boldy)\) is then a quantitative way of saying how “close” the two samples are to one another. The choice of weights \(k_i\) allows us to adjust the relative influence of a given property \(P_i\) in determining this closeness. For example, the standard dot product (\(k_i=1\) for all \(i\)) yields a notion of distance that gives each property equal standing.
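For a small numeric illustration (with hypothetical measurements invented purely for illustration), suppose \(n=2\) and we weight the second property heavily, say \(k_1=1\) and \(k_2=100\text{.}\) For measurement vectors \(\boldx=(10,2.0)\) and \(\boldy=(11,2.1)\) we get
\begin{equation*} d(\boldx,\boldy)=\sqrt{1(10-11)^2+100(2.0-2.1)^2}=\sqrt{1+1}=\sqrt{2}\text{,} \end{equation*}
so a difference of \(0.1\) in the heavily weighted second property contributes as much to the distance as a difference of \(1\) in the first.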

Exercises 7.1.4 Exercises

WeBWork Exercises

1.
Find the norm of \(\vec{x}\) and the unit vector \(\vec{u}\) in the direction of \(\vec{x}\) if
\begin{equation*} \vec{x} = \left[\begin{array}{c} -4\cr 4\cr 1\cr 2 \end{array}\right]. \end{equation*}
\(\| \vec{x} \| =\) ,
\(\vec{u} =\) (4 × 1 array)
Answer.
\(\sqrt{37}\)
2.
Find the angle \(\alpha\) between the vectors
\begin{equation*} \left[\begin{array}{c} -2\cr -1\cr -3 \end{array}\right] \ \mbox{ and } \ \left[\begin{array}{c} -5\cr -5\cr 1 \end{array}\right]. \end{equation*}
\(\alpha =\) .
Answer.
\(\cos^{-1}\mathopen{}\left(0.449089\right)\)
3.
If \(f(x)\) and \(g(x)\) are arbitrary polynomials of degree at most 2, then the mapping
\begin{equation*} \langle f,g\rangle = f(-2)g(-2) + f(0)g(0) + f(1)g(1) \end{equation*}
defines an inner product in \(P_2\text{.}\) Use this inner product to find \(\langle f,g\rangle\text{,}\) \(\| f \|\text{,}\) \(\| g \|\text{,}\) and the angle \(\alpha_{f,g}\) between \(f(x)\) and \(g(x)\) for
\begin{equation*} f(x) = 3 x^2 + 2 x - 1 \ \mbox{ and } \ g(x) = 3 x^2 - 5 x + 9. \end{equation*}
\(\langle f,g\rangle =\) ,
\(\| f \| =\) ,
\(\| g \| =\) ,
\(\alpha_{f,g} =\) .
Answer 1.
\(236\)
Answer 2.
\(8.12404\)
Answer 3.
\(33.0303\)
Answer 4.
\(0.496021\)
4.
If \(A\) and \(B\) are arbitrary real \(m\times n\) matrices, then the mapping
\begin{equation*} \langle A,B \rangle = {\rm trace}(A^T B) \end{equation*}
defines an inner product in \({\mathbb R}^{m\times n}\text{.}\) Use this inner product to find \(\langle A,B \rangle\text{,}\) the norms \(\|A\|\) and \(\|B\|\text{,}\) and the angle \(\alpha_{A,B}\) between \(A\) and \(B\) for
\begin{equation*} A = \left[\begin{array}{cc} 2 \amp 3\cr 3 \amp -3\cr -1 \amp -2 \end{array}\right] \ \mbox{ and } \ B = \left[\begin{array}{cc} -1 \amp 3\cr 2 \amp 2\cr -2 \amp -3 \end{array}\right]. \end{equation*}
\(\langle A,B \rangle =\) ,
\(\|A\|=\) ,
\(\|B\|=\) ,
\(\alpha_{A,B}=\) .
Answer 1.
\(15\)
Answer 2.
\(6\)
Answer 3.
\(5.56776\)
Answer 4.
\(1.10514\)
5.
Use the inner product
\begin{equation*} \langle f,g \rangle = \int_0^1 f(x)g(x) \, dx \end{equation*}
in the vector space \(C^0 \lbrack 0,1\rbrack\) of continuous functions on the domain \(\lbrack 0, 1 \rbrack\) to find \(\langle f,g\rangle\text{,}\) \(\| f \|\text{,}\) \(\| g\|\text{,}\) and the angle \(\alpha_{f,g}\) between \(f(x)\) and \(g(x)\) for
\begin{equation*} f(x) = 5 x^2 - 6 \ \mbox{ and } \ g(x) = -3 x + 2. \end{equation*}
\(\langle f,g\rangle =\) ,
\(\|f\| =\) ,
\(\|g\| =\) ,
\(\alpha_{f,g}\) .
Answer 1.
\(-3.41667\)
Answer 2.
\(4.58258\)
Answer 3.
\(1\)
Answer 4.
\(2.4122\)

6.

For each of the following operations on \(\R^2\text{,}\) determine whether it defines an inner product on \(\R^2\text{.}\) If it fails to be an inner product, identify which of the three inner product axioms (if any) it does satisfy, and provide explicit counterexamples for any axiom that fails.
  1. \(\angvec{(x_1,x_2),\ (y_1,y_2)}=x_1y_2+x_2y_1\text{.}\)
  2. \(\angvec{(x_1,x_2),\ (y_1,y_2)}=2x_1y_1+x_1y_2+x_2y_1+3x_2y_2\text{.}\)
  3. \(\angvec{(x_1,x_2), \ (y_1,y_2)}=x_1^2y_1^2+x_2^2y_2^2\text{.}\)
Hint.
The operation in (b) is an inner product. Use the fact that
\begin{equation*} \angvec{\boldx,\ \boldy}=\boldx^T \begin{amatrix}[cc]2\amp 1 \\ 1 \amp 3 \end{amatrix}\boldy\text{,} \end{equation*}
where we treat \(\boldx, \boldy\) as column vectors. This helps to prove axioms (i)-(ii). For axiom (iii), use either a “complete the square” or quadratic formula argument on the expression \(\langle \boldx, \boldx\rangle=2x_1^2+2x_1x_2+3x_2^2\text{.}\)

7.

We work within the inner product space given by \(V=P_2\) together with the evaluation at 0, 1, 2 inner product.
Let \(q(x)=x\text{.}\) Give a parametric description of the set
\begin{equation*} W=\{p(x)\in P_2\colon \langle p(x), q(x)\rangle =0\}\text{.} \end{equation*}

8.

We work in the inner product space given by \(V=C([-\pi,\pi])\) together with the integral inner product.
  1. Let \(f(x)=\cos x, g(x)=\sin x\text{.}\) Compute \(\langle f,g \rangle \) and \(\norm{g}\text{.}\)
  2. Show that if \(f(x)\) is an odd function (i.e., \(f(x)=-f(-x)\) for all \(x\)) and \(g(x)\) is an even function (\(g(-x)=g(x)\) for all \(x\)), then \(\langle f, g \rangle=0 \text{.}\) Hint: use the area interpretation of the integral and properties of even/odd functions.

Exercise Group.

For each inner product space and pair of vectors given below, compute the angle \(\theta\) between the two vectors. If \(\theta\) is not a familiar angle from the unit circle, express your answer in terms of the inverse cosine function: i.e., \(\arccos\text{.}\)
9.
\(V=\R^4\) with the standard dot product; \(\boldv=(1,1,1,1), \boldw=(1,-1,1,1)\)
10.
\(V=\R^2\) with the dot product with weights \(1,2\text{;}\) \(\boldv=(1,0), \boldw=(-2,\sqrt{2})\)
11.
\(V=C([0,1])\) with the integral inner product; \(f(x)=1, g(x)=x\)
12.
\(V=P_2\) with evaluation at \(-1, 1\) inner product; \(p(x)=-\frac{1}{2}x+\frac{1}{2}, q(x)=2x\)

13.

Let \((V,\langle\, , \rangle)\) be an inner product space. Prove that \(\langle \boldv, \boldzero\rangle=0\) for all \(\boldv\in V\text{.}\)

14.

Let \(\boldv\) and \(\boldw\) be nonzero vectors of the inner product space \((V, \langle\, , \rangle)\text{,}\) and let \(\theta\) be the angle between them. Prove the following equivalence:
\begin{equation*} \norm{\boldv+\boldw}=\norm{\boldv}+\norm{\boldw}\text{ if and only if } \theta=0\text{.} \end{equation*}
Your proof should be a chain of equivalences with each step justified.
Hint.
The equality is true if and only if it is true after squaring both sides. (Why?) Use the definition
\begin{equation*} \norm{\boldv+\boldw}^2=\langle \boldv+\boldw, \boldv+\boldw\rangle \end{equation*}
and expand the inner product.

15.

Let \((V, \langle , \rangle )\) be an inner product space. Suppose vectors \(\boldv, \boldw\in V\) satisfy \(\norm{\boldv}=2\) and \(\norm{\boldw}=3\text{.}\) Using the Cauchy-Schwarz inequality (7.1.9), find the maximum and minimum possible values of \(\norm{\boldv-\boldw}\text{,}\) and give explicit examples where those values occur.

17.

Prove each inequality below by applying the Cauchy-Schwarz inequality (7.1.9) to a judicious choice of inner product space, and possibly a judicious choice of vectors in that space.
  1. For all \(f, g\in C([a,b])\)
    \begin{equation*} \left(\int_a^b f(x)g(x) \ dx\right)^2\leq \int_a^b f^2(x)\ dx\int_a^b g^2(x) \ dx\text{.} \end{equation*}
  2. For all \((x_1,x_2,\dots, x_n)\in\R^n\text{,}\)
    \begin{equation*} (x_1+x_2+\cdots +x_n)\leq\sqrt{x_1^2+x_2^2+\cdots +x_n^2}\sqrt{n}\text{.} \end{equation*}
  3. For all \(a,b,\theta\in\R\)
    \begin{equation*} (a\cos\theta+b\sin\theta)^2\leq a^2+b^2\text{.} \end{equation*}

18. Isometries of inner product spaces.

Let \((V,\angvec{ \ , })\) be an inner product space. An isometry of \(V\) is a function \(f\colon V\rightarrow V\) that preserves distance: i.e.,
\begin{equation*} d(f(\boldv), f(\boldw))=d(\boldv, \boldw) \text{ for all \(\boldv, \boldw\in V\) }\text{.} \end{equation*}
In this exercise we will show that any isometry that maps \(\boldzero\) to \(\boldzero\) is a linear transformation. This is a very useful fact. For example, it implies the linearity of many geometric transformations we have considered: rotation about the origin in \(\R^2\text{,}\) reflection through a line in \(\R^2\text{,}\) etc.
In what follows assume that \(f\) is an isometry of \(V\) satisfying \(f(\boldzero)=\boldzero\text{.}\)
  1. Prove that \(\norm{f(\boldv)}=\norm{\boldv}\text{:}\) i.e., \(f\) preserves norms.
  2. Prove \(\angvec{f(\boldv), f(\boldw)}=\angvec{\boldv, \boldw}\text{:}\) i.e., \(f\) preserves inner products. Hint: first prove that \(\angvec{\boldv, \boldw}=\frac{1}{2}(\norm{\boldv}^2+\norm{\boldw}^2-\norm{\boldv-\boldw}^2)\text{.}\)
  3. To prove \(f\) is linear it is enough to show \(f(\boldv+c\boldw)=f(\boldv)+cf(\boldw)\) for all \(\boldv, \boldw\in V\text{,}\) \(c\in \R\text{.}\) To do so, use the above parts to show that
    \begin{equation*} \norm{f(\boldv+c\boldw)-(f(\boldv)+cf(\boldw))}=0\text{.} \end{equation*}