We now deepen our acquaintance with the different families of functions introduced in Section 1.4, treating questions of implied domain, range, and graph properties.
The properties of power functions are nicely summarized by their graphs. Let's look first at the graphs of power functions of the form \(f(x)=x^{n}\text{,}\) where \(n\in \Z\) is an integer.
The eight graphs in Figure 1.5.1 and Figure 1.5.2 nicely illustrate how the precise nature of the power \(n\) involved affects the properties of \(f(x)=x^n\) and its graph. In particular, we see that the general shape of the graph is determined by whether \(n\) is positive or negative (the sign of \(n\)), and whether \(n\) is even or odd (the parity of \(n\)). Table 1.5.3 summarizes some observations about these power functions that are strongly suggested by our graphs. The claims there about domain are easily verified using the definition of power functions (see Example 1.4.3). The claims about range can be shown to be true using Theorem 1.4.5, as Example 1.5.4 illustrates.
where the last inequality follows from the observation that the square of any real number is nonnegative. Since the outputs of \(f\) are all nonnegative, we conclude that \(\range f\subseteq [0,\infty)\text{.}\)
We now set about proving the reverse inclusion: \([0,\infty)\subseteq \range f\text{.}\) To this end, take any \(r\in [0,\infty)\text{.}\) Since \(r\geq 0\text{,}\) the number \(r^{1/n}=\sqrt[n]{r}\) is well defined. (In this case, since \(n\) is even, \(r^{1/n}\) by definition is the unique nonnegative \(n\)-th root of \(r\text{.}\)) Setting \(a=r^{1/n}\text{,}\) we have
\begin{equation*}
f(a)=a^n=\left(r^{1/n}\right)^n=r\text{.}
\end{equation*}
This shows that \(r\) is an output of \(f\text{,}\) and thus that \(r\in \range f\text{,}\) as desired. We conclude that \(\range f=[0,\infty)\text{.}\)
Power functions of the form \(f(x)=x^{1/n}=\sqrt[n]{x}\text{,}\) where \(n\) is a positive integer, also exhibit some distinctive, easily identified properties. We call these functions \(n\)-th root functions.
Again, we summarize some properties of these functions that the graphs make evident. In this case, since \(n\) is always positive, it is only the parity (even/odd) of \(n\) that governs the general shape of the graph.
Although not strictly necessary for solving this exercise, looking at Figures 1.5.1–1.5.5, we see that the function graphed in (a) should be of the form \(f(x)=x^{1/n}\text{,}\) where \(n\) is a positive odd number, and the function graphed in (b) should be of the form \(f(x)=x^{-n}\text{,}\) where \(n\) is a positive even number.
Our function \(f\) satisfies \(f(x)=x^q\) for all \(x\) and \(f(32)=2\text{.}\) This implies that
\begin{equation*}
32^q=2\text{.}
\end{equation*}
Later, once we introduce logarithmic functions, we will have a systematic way of solving for \(q\text{.}\) For now, using our observation at the top that \(q=1/n\) for some positive odd number \(n\text{,}\) and doing some guess-and-check, we see that \(q=1/5\) works. Thus \(f(x)=x^{1/5}\text{.}\)
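As a quick check of this guess, using the fact that \(32=2^5\text{:}\)
\begin{equation*}
32^{1/5}=\left(2^5\right)^{1/5}=2^{5\cdot\frac{1}{5}}=2\text{.}
\end{equation*}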
Now our function \(f\) satisfies \(f(x)=x^q\) and \(f(-1/2)=4\text{:}\) i.e., \((-1/2)^q=4\text{.}\) Again, being guided by our observation at the top that \(q=-n\) for some positive even integer \(n\text{,}\) and trying out the choices \(n=2,4,\dots\text{,}\) we see that \(n=2\) works: i.e., \(q=-2\text{.}\) We conclude that \(f(x)=x^{-2}\text{.}\)
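Again, a quick check of the guess:
\begin{equation*}
\left(-\tfrac{1}{2}\right)^{-2}=\frac{1}{\left(-\tfrac{1}{2}\right)^{2}}=\frac{1}{\tfrac{1}{4}}=4\text{.}
\end{equation*}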
In contrast to the simple power functions considered above, the behavior of polynomials can be quite complicated, and as a general rule becomes more and more complicated as the degree of the polynomial increases. This is illustrated in Figure 1.5.9.
We see that as the degree of the polynomials increases, the graphs of the functions tend to have more “turning points” (what we will later define as local maxima and local minima). Furthermore, it is not clear from these examples how the range of a polynomial \(f(x)=\anpoly\) is determined by its degree and/or coefficients. Once again, calculus will provide us with the theoretical tools to answer questions related to these issues. For now, we will address another important question: how do we find the roots of a polynomial? Looking at the \(x\)-intercepts of the graphs in Figure 1.5.9, it appears that the number of roots of a polynomial has something to do with its degree. The following theorem gives a precise (if somewhat incomplete) articulation of this relationship.
Statement (1) of Theorem 1.5.10 tells us that the degree of a polynomial bounds the number of possible real roots, and thus also the number of possible \(x\)-intercepts. Thus, a quadratic polynomial has at most two roots, a cubic polynomial has at most three roots, a quartic polynomial has at most four roots, etc. Note that the theorem does not claim that a degree-\(n\) polynomial must have \(n\) roots. Indeed, we have already seen examples of quadratic (degree-2) polynomials that have no (real) roots!
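As a reminder, here is a quick instance of such a quadratic (one that will reappear later in this section): \(x^2+1\) has no real roots, since
\begin{equation*}
x^2+1\geq 0+1=1>0 \quad \text{for all } x\in \R\text{.}
\end{equation*}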
Statement (2) of Theorem 1.5.10 makes the connection between roots of polynomials and factorization. As a result, the more we are able to factor a polynomial, the more we can say about its roots; conversely, every time we find a root of a polynomial \(f\text{,}\) we can make progress on the factorization of \(f\) by “factoring out” that root. Factoring polynomials is thus an important skill, and we gather here some techniques for doing this.
The process of computing \(q(x)\) and \(r(x)\) is called polynomial long division of \(f\) by \(g\text{.}\) We illustrate this process in Example 1.5.12. See also Table 1.5.13.
A real number \(a\) is a root of the polynomial \(f(x)\) if and only if the remainder upon dividing \(f(x)\) by \(x-a\) is the zero polynomial. In this case we have \(r(x)=0\) and \(f(x)=(x-a)q(x)\text{.}\)
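A sketch of why this works, in terms of the quotient and remainder described above: when we divide \(f(x)\) by \(x-a\text{,}\) we obtain \(f(x)=(x-a)q(x)+r(x)\) with the remainder of degree less than \(\deg(x-a)=1\text{,}\) so that \(r(x)\) is a constant. Evaluating both sides at \(x=a\) gives
\begin{equation*}
f(a)=(a-a)q(a)+r(a)=r(a)\text{.}
\end{equation*}
Thus \(f(a)=0\) exactly when the constant remainder \(r(x)\) is zero.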
You may remember that long division of \(f\) by \(g\) proceeds iteratively, where at each step we multiply \(g\) by a monomial \(cx^m\) in order to match the leading term of the polynomial being divided. We spell this out in more detail in the steps below, but you are probably more accustomed to steps in this process being shown as in Table 1.5.13, and indeed that is the way you should perform this operation. Consider the following as just an in-depth description of what we are actually doing in that process.
Multiply \(g\) by a monomial \(cx^m\) so that the leading terms of \(f\) and \((cx^m)g(x)\) match. In this case we choose the monomial \(3x^2\text{.}\)
If \(\deg r\) is not less than \(\deg g\) for our temporary remainder \(r(x)\text{,}\) repeat Step 1 with \(g(x)\) and \(r(x)\) (instead of \(g(x)\) and \(f(x)\)). Update our temporary quotient \(q(x)\) by adding to it the relevant monomial \(cx^m\text{,}\) and update our temporary remainder \(r(x)\) by subtracting \((cx^m)g(x)\) from it.
we multiply \(g(x)\) by the monomial \(-3x\) in order to match the leading term of \(-3x^3-x^2+7x-3\text{,}\) and update our \(q(x)\) and \(r(x)\) as follows:
In our case, we need to do one more iteration. To match the leading terms, we multiply \(g(x)\) by the monomial \(2\text{.}\) Updating our \(q(x)\) and \(r(x)\) yields
Let \(f(x)=x^3+4x^2+4x+3\text{.}\) For each \(a\in \R\text{,}\) decide whether \(a\) is a root by computing \(f(a)\text{,}\) then verify your answer using Procedure 1.5.11.
we see that \(a=-1\) is not a root of \(f\text{.}\) According to Procedure 1.5.11 this means that dividing \(f(x)\) by \(x+1\) should result in a nonzero remainder polynomial \(r(x)\text{.}\) We verify by performing long division:
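We omit the tabular steps here; the outcome of this division, which can be checked by expanding the right-hand side, is
\begin{equation*}
x^3+4x^2+4x+3=(x+1)(x^2+3x+1)+2\text{,}
\end{equation*}
so that \(q(x)=x^2+3x+1\) and \(r(x)=2\text{,}\) a nonzero remainder, as expected.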
Since \(r(x)=0\) is the zero polynomial, we conclude (again) that \(a=-3\) is a root, and furthermore the quotient \(q(x)=x^2+x+1\) yields the factorization
\begin{equation*}
f(x)=(x+3)(x^2+x+1)\text{.}
\end{equation*}
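As a quick check of this factorization, expanding the product recovers \(f\text{:}\)
\begin{equation*}
(x+3)(x^2+x+1)=x^3+x^2+x+3x^2+3x+3=x^3+4x^2+4x+3\text{.}
\end{equation*}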
You may have noticed that the two candidate roots of the polynomial in Example 1.5.14 were integer factors of its constant term. As it turns out, given a polynomial with integer coefficients, any integer root of that polynomial must be a factor of its constant term.
Let \(f(x)=\anpoly\) and assume that all the coefficients of \(f\) are integers: i.e., \(a_k\in \Z\) for all \(0\leq k\leq n\text{.}\) Any integer root of \(f\) must be a factor of \(a_0\text{:}\) i.e., if \(f(c)=0\) and \(c\in \Z\text{,}\) then \(c\) is a factor of \(a_0\text{.}\)
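A sketch of the idea behind this fact (not a complete proof): if \(c\in \Z\) is a root, then \(a_nc^n+\cdots +a_1c+a_0=0\text{,}\) and solving for the constant term gives
\begin{equation*}
a_0=-\left(a_nc^n+\cdots +a_1c\right)=-c\left(a_nc^{n-1}+\cdots +a_1\right)\text{.}
\end{equation*}
Since the factor in parentheses is an integer, this exhibits \(c\) as a factor of \(a_0\text{.}\)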
Note the crucial detail in Theorem 1.5.15: an integer root of a polynomial with integer coefficients must divide the constant term. The theorem says nothing about possible non-integer roots, as the next example illustrates.
Since \(f\) has integer coefficients, if \(c\) is an integer root of \(f\text{,}\) then it must be a factor of its constant term \(-2\text{,}\) according to Theorem 1.5.15. Thus the only possible integer roots of \(f\) are \(\pm 1, \pm 2\text{.}\) But \(f(\pm 1)=-1\) and \(f(\pm 2)=2\text{.}\) Thus none of these candidates is a root, showing that \(f\) has no integer roots.
Since \(f\) is a quadratic function, it is easy to determine its roots. As usual, there are multiple ways of doing this, but in this case the easiest method would be to proceed as follows:
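A sketch of that method, assuming (consistently with the values computed above) that the function in question is \(f(x)=x^2-2\text{:}\) we solve directly,
\begin{equation*}
x^2-2=0 \iff x^2=2 \iff x=\pm\sqrt{2}\text{.}
\end{equation*}
Neither \(\sqrt{2}\) nor \(-\sqrt{2}\) is an integer, which is consistent with the point of this example: Theorem 1.5.15 restricts only the possible integer roots of \(f\text{.}\)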
Once we find a root \(x=a\) of a polynomial \(f(x)\text{,}\) the resulting factorization \(f(x)=(x-a)q(x)\) allows us to find all further roots by continuing the process on \(q(x)\text{,}\) as the next example illustrates.
Since \(a\ne -3\text{,}\) we have \(a+3\ne 0\text{.}\) By Theorem 1.2.24, we have \(f(a)=0\) if and only if \(a^2+a+1=0\text{.}\) Thus, \(a\ne -3\) is a root of \(f\) if and only if it is a root of \(g(x)=x^2+x+1\text{.}\) But using Theorem 1.2.27, we see that the quadratic function \(g(x)\) has no roots! We conclude that \(-3\) is the only root of \(f\text{.}\)
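Independently of the precise statement of Theorem 1.2.27, we can confirm that \(g(x)=x^2+x+1\) has no real roots by computing its discriminant:
\begin{equation*}
b^2-4ac=1^2-4(1)(1)=-3\text{,}
\end{equation*}
which is negative, so the quadratic formula produces no real solutions.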
A real number \(a\) is a root of \(f\) if and only if it is a root of one of the polynomials \(f_k(x)\text{.}\) Thus by finding all roots of each polynomial \(f_k(x)\text{,}\) we find all roots of \(f\text{.}\)
Note that the success of Procedure 1.5.18 depends on our ability to find all the roots of the factor polynomials \(f_k(x)\text{.}\) Accordingly, the smaller the degree of the \(f_k\text{,}\) the more likely we are to be able to find all the roots of \(f\text{.}\) In particular, we will be home free if we are able to factor \(f\) far enough so that all factor polynomials are of degree one or two.
Since the coefficients of \(f\) are integers, we first try to find an integer root of \(f\) using Theorem 1.5.15. Evaluating \(f(a)\) for \(a=\pm 1, \pm 2, \pm 4\) (the factors of the constant term of \(f\)), we see that \(f(2)=0\text{,}\) and thus that \((x-2)\) is a factor of \(f\text{.}\) Using polynomial long division (we skip the steps here), we see that
\begin{equation*}
f(x)=(x-2)(x^3-2x^2+x-2)\text{.}
\end{equation*}
We now continue the process by factoring \(g(x)=x^3-2x^2+x-2\text{.}\) Again using Theorem 1.5.15, we try \(\pm 1, \pm 2\) as potential roots. It turns out that \(g(2)=0\text{,}\) and thus \(x-2\) can be factored out (again). Using polynomial long division (again we omit the steps), we find that
\begin{equation*}
g(x)=(x-2)(x^2+1)\text{,}
\end{equation*}
and hence \(f(x)=(x-2)^2(x^2+1)\text{.}\)
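As a check on this last factorization, expanding the product recovers \(g\text{:}\)
\begin{equation*}
(x-2)(x^2+1)=x^3+x-2x^2-2=x^3-2x^2+x-2\text{.}
\end{equation*}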
Lastly, we apply Theorem 1.2.27 to the remaining factor \(x^2+1\) and conclude that it has no roots. By Procedure 1.5.18, the only root of \(f\) is \(2\text{.}\)
For example, the polynomial \(f(x)=x^2(x+1)(x-6)^4(x^2+1)\) has a double root (i.e., a root of multiplicity two) at \(x=0\text{,}\) a simple root (i.e., a root of multiplicity one) at \(x=-1\text{,}\) and a quadruple root at \(x=6\text{.}\) From the factorization, we know further that \(f\) has no additional roots, since \(x^2+1\) has no roots.
As we will see later in the course, the multiplicity of a root provides further, more nuanced information about how a polynomial \(f\) behaves near the root.
Although we have been focusing on factoring polynomials by finding roots, this is not always the easiest way to proceed. Indeed, there are polynomials that can be factored nontrivially and yet have no roots. For example, the polynomial \(f(x)=x^4+2x^2+1\) factors as
\begin{equation*}
f(x)=(x^2+1)^2\text{,}
\end{equation*}
and yet it has no roots, since \(x^2+1\) has none.
Theorem 1.5.21 provides a couple of tricks for factoring polynomials that sometimes come in handy when we cannot easily find roots. The factorization above can be thought of as an instance of (2) of Theorem 1.5.21. We have
\begin{equation*}
x^4+2x^2+1=(x^2)^2+2(x^2)(1)+1^2=(x^2+1)^2\text{.}
\end{equation*}
Another important consequence of Theorem 1.5.10 that we will make use of later in the course has to do with when two polynomials are equal (as functions), as articulated in Corollary 1.5.26. In order to make sense of that result, we need to first make clear what we mean by two functions being equal. Recall our original intuitive idea of a function as a rule (or assignment) that assigns to each element of a specified input set (the domain) a unique output. When should we consider two such things to be the same? For one thing, they should have the same specified set of inputs (i.e., the same domain): this is condition (1) of Definition 1.5.24. Moreover, in order to represent the same rule (or assignment), each input should be assigned to the same output: this is condition (2) of Definition 1.5.24.
Note well the all-important “for all” quantifier appearing in condition (2) of Definition 1.5.24. That detail makes the notion of function equality more nuanced than a simple numeric equality (like \(5=2^2+1\)). Given two functions \(f\) and \(g\) with common domain \(D\text{,}\) the statement \(f=g\) is in fact a claim that a whole bunch of numeric equalities hold: namely, we have \(f(x)=g(x)\) for every \(x\in D\text{.}\) That's one equality for each \(x\in D\text{.}\) Since \(D\) is most often an infinite set (like an interval) for us, this means that \(f=g\) typically asserts that infinitely many numeric equalities hold. In this sense a function equality is very similar to the notion of an identity (like \((x+1)^2=x^2+2x+1\)), which you may have encountered before.
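As a small illustration, using the identity just mentioned: define \(f\) and \(g\) on the common domain \(\R\) by \(f(x)=(x+1)^2\) and \(g(x)=x^2+2x+1\text{.}\) Then \(f=g\text{,}\) since both conditions of Definition 1.5.24 hold: the domains agree, and
\begin{equation*}
f(x)=(x+1)^2=x^2+2x+1=g(x) \quad \text{for all } x\in \R\text{.}
\end{equation*}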
Having made clear what we mean in general by function equality, we now look at the specific case of polynomial functions. Corollary 1.5.26 can be shown to be a consequence of Theorem 1.5.10, though we will omit that argument.
Let \(f(x)=a_nx^n+\cdots +a_1x+a_0\) and \(g(x)=b_mx^m+\cdots +b_1x+b_0\) be polynomials defined on a common infinite domain \(D\text{,}\) and assume \(a_n\) and \(b_m\) are both nonzero. We have \(f=g\) if and only if the following conditions hold:
Equipped now with some machinery for factoring polynomials, we can take a closer look at rational functions, which you will recall are just quotients of polynomials. In this section we will be interested only in determining the implied domain and zeros of a rational function. Before treating this specific family of functions, we first make official a simple observation about the domain and zeros of any function defined as a quotient of two functions.
In the special case where \(f(x)=p(x)/q(x)\) is a quotient of two polynomials, the statements of Theorem 1.5.27 are simplified by the fact that \(p(x)\) and \(q(x)\) are defined everywhere, and have finitely many zeros.
Since the domain and zeros of a rational function \(f(x)=p(x)/q(x)\) are determined by the zeros of \(p\) and \(q\text{,}\) and since by Theorem 1.5.10 nonzero polynomials have only finitely many zeros (bounded in number by the polynomial's degree), it follows that \(f\) is defined everywhere except for the finitely many zeros of \(q\text{,}\) and \(f\) has finitely many zeros.
The set of zeros \(Z\) of \(f\) is the set of \(x\) where \(p(x)=0\) and \(q(x)\ne 0\text{.}\) Since \(p\) is zero only at \(x=3\text{,}\) and since \(3\) is a zero of \(q\text{,}\) we conclude that \(f\) has no zeros: i.e., \(Z=\emptyset\text{.}\)
as above. The set of zeros \(Z\) of \(g\) is the set of \(x\) where \(p(x)=0\) and \(q(x)\ne 0\text{.}\) Since \(p\) is zero at \(1\) and \(-1\text{,}\) and since \(1\) is a zero of \(q\text{,}\) we conclude that \(-1\) is the only zero of \(g\text{.}\)
In Figure 1.5.30 you find graphs of the functions \(f\) and \(g\) from Example 1.5.29. Consistent with our work in that example, the graphs indicate that \(f\) has no zeros and \(g\) has exactly one zero, at \(x=-1\text{.}\)
Furthermore, we saw in Example 1.5.29 that both functions have \(D=\R-\{1,3\}\) as their implied domain. Interestingly, the two graphs indicate that \(1\) and \(3\) are excluded from the domain in slightly different ways.
The graph of \(f\) has what we will eventually call a vertical asymptote at \(x=1\text{,}\) and a hole at \(x=3\text{.}\)
The vertical asymptote behavior of the two graphs is something we will discuss in great detail later. For now we content ourselves with an explanation of the holes appearing in these graphs. Let's look only at
\begin{equation*}
f(x)=\frac{x-3}{(x-1)(x-3)}\text{.}
\end{equation*}
Since \(3\) is not in the domain of \(f\text{,}\) there is no point on its graph with \(x\)-coordinate \(3\text{.}\)
However, for any \(x\ne 3\text{,}\) we may cancel the nonzero factors \(x-3\) in our expression above to get the simpler expression \(1/(x-1)\text{.}\) That is, we have
\begin{equation*}
f(x)=\frac{x-3}{(x-1)(x-3)}=\frac{1}{x-1} \quad \text{for all } x\ne 3\text{.}
\end{equation*}
Let \(h(x)=\frac{1}{x-1}\text{.}\) Technically \(f\) and \(h\) are not equal as functions, as is reflected in the fact that \(f\) has domain \(\R-\{1,3\}\) and \(h\) has domain \(\R-\{1\}\text{.}\) However, for all \(x\ne 3\text{,}\) we have \(f(x)=h(x)\text{.}\)
Since \(h(3)=1/2\text{,}\) the point \(P=(3,1/2)\) appears on the graph of \(h\text{.}\) Furthermore, if we graph \(h\text{,}\) we see that points \((x,h(x))\) on its graph approach \(P=(3,1/2)\) as \(x\) gets closer and closer to \(3\text{.}\) (As we will see later, this is a result of \(h\) being continuous at \(x=3\text{.}\))
Lastly, since the graph of \(f\) agrees with the graph of \(h\) for all \(x\ne 3\text{,}\) points \((x,f(x))\) on its graph also approach \((3,1/2)\) as \(x\) gets closer and closer to \(3\text{.}\) As a result, the graph of \(f\) appears to approach the point \(P=(3,1/2)\text{,}\) but without this point being included on the graph itself. Hence the hole!
Well, that was a bit long-winded! One amazing feature of calculus is its ability to express such cumbersome arguments in a much more economical manner. For example, using the language of limits (to be introduced soon), our discussion above can be beautifully compressed into the following simple statement:
\begin{equation*}
\lim_{x\to 3}f(x)=\frac{1}{2}\text{.}
\end{equation*}