Math

MATH 1350 Calculus I

Lab 4 - Newton's Method - Applications of the Derivative


Quite often in mathematics we are faced with the problem of trying to determine the solution or solutions to a given equation. For example, we may need to find the roots (zeros) of a polynomial function. If the polynomial is simple enough, say f(x)=x^{2}+5x-7, we can find the real roots exactly (if they exist) by applying the quadratic formula. There are similar formulas for third and fourth degree polynomials as well. In these cases it is also possible to determine exact solutions to the equation f(x)=0. But suppose we are confronted with trying to find the roots of a fifth degree polynomial. No formula exists which would allow us to determine the roots exactly. The same holds true for higher degree polynomials. Unless these polynomials can readily be factored, the only method we have of finding the roots is by numerically estimating their values. There are many other situations where it may be necessary to estimate the solution or solutions to a given equation.

Suppose we need to calculate the square root of 2. Typically, we would enter 2 into a calculator and let the calculator determine the value. But how does a calculator perform this operation? Certainly the calculator cannot possibly hold tables of values for the square roots of all the real numbers. It must make a numerical approximation to the exact value and it must do so in a timely fashion.
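To make the idea concrete, here is a sketch of the kind of iteration a calculator could use, written in Python rather than Maple (the function name is ours, and the exact algorithm a given calculator uses is an assumption). It applies Newton's method to f(x) = x² - 2, whose positive root is √2:

```python
def newton_sqrt(a, x0, steps=6):
    """Approximate sqrt(a) by Newton's method on f(x) = x^2 - a.

    The update x - f(x)/f'(x) = x - (x^2 - a)/(2x) simplifies
    to the average (x + a/x)/2.
    """
    x = x0
    for _ in range(steps):
        x = (x + a / x) / 2  # one Newton step
    return x

print(newton_sqrt(2, 1.5))  # agrees with sqrt(2) to machine precision
```

Each step roughly doubles the number of correct digits, which is why only a handful of iterations are needed.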

Let's say that you were trying to find the points of intersection between a polynomial and an exponential function. By setting the functions equal to each other, you would arrive at an equation such as x^{2}=2^{x}, which (if you were lucky) could be solved by inspection. In general, however, an equation of this type cannot be solved algebraically, no matter how simple it looks. Unless we can discern the solution(s) by inspection, the only method at our disposal is a numerical approximation. Only recently have calculators gained the ability to solve this type of equation. On a TI-85 calculator, you can enter an equation and simply call the solve routine built into the calculator. Although the technological advances that have made this possible are recent, the actual methods employed have been known for some time.

One of the most successful methods of approximating solutions to the equation f(x)=0 is due to Isaac Newton and is quite appropriately known as Newton's method. This method has the advantage of being conceptually easy to understand and implement while at the same time being able to arrive at accurate approximations very quickly. The method is not foolproof, however. Its success depends on the selection of a good starting value.

We will look at Newton's method in the context of an example. Suppose we wished to find the solutions to the equation 2x^{3}=3.1^{x}. Since Newton's method is designed to find the zeros of a function, we first rewrite this equation as f(x)=0, where f(x)=2x^{3}-3.1^{x}. By plotting this function in Maple, we can see that it has two *x*-intercepts:

`> `**f:=x->2*x^3-3.1^x;**

`> `**plot(f(x),x=-1.5..5);**
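For readers working outside Maple, the same check can be done numerically. This Python sketch (our own translation, not part of the lab) scans the plotting window for sign changes of f; each sign change brackets one of the two intercepts:

```python
def f(x):
    return 2 * x**3 - 3.1**x

# Scan [-1.5, 5] in steps of 0.25; a sign change of f brackets a root.
xs = [-1.5 + 0.25 * i for i in range(27)]   # -1.5, -1.25, ..., 5.0
brackets = [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
print(brackets)  # two intervals, one around each x-intercept
```

The two brackets it finds correspond to the two intercepts visible in the Maple plot.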

We will use Newton's method to find the smaller of the two solutions. We first need to decide on an initial guess for the value of the intercept. We'll choose x_{0}=2 as our starting value. We could have chosen a better estimate based on the graph but we only need a value that is close. Next we define a function in Maple that will calculate the next approximation of this root. Begin by defining the derivative of our function. Typing "fprime" to the left of the ":=" names our function "fprime." What we have typed to the right of the ":=" tells the computer to establish the function that sends the input, x, to the derivative of f with respect to x.

`> `**fprime:=D(f);**

We can now define the function that starts with one approximation to the solution and returns a better one. In doing this we define a sequence.

In terms of this notation, the function takes x_{n} and returns the value x_{n+1} = x_{n} - f(x_{n})/f'(x_{n}). We will call the function that establishes the sequence "newt":

`> `**newt:=x->x-f(x)/fprime(x);**
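The same pair of definitions can be written in Python (a sketch of our own; since plain Python has no D operator, the derivative is written out by hand):

```python
import math

def f(x):
    return 2 * x**3 - 3.1**x

def fprime(x):
    # d/dx [2x^3 - 3.1^x] = 6x^2 - 3.1^x * ln(3.1)
    return 6 * x**2 - 3.1**x * math.log(3.1)

def newt(x):
    # One step of Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x - f(x) / fprime(x)

print(newt(2))  # first approximation from the starting value x0 = 2
```

Calling newt(2) performs exactly the step the Maple function performs.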

Since "newt" is defined as a function, we can use functional notation to calculate our approximations. The value of x_{1} will be obtained by evaluating Newton's function at our starting point, 2. x_{1} will give a value closer to the actual root. We then plug x_{1} into Newton's function to obtain x_{2} which will be an even better approximation of the root. We continue with the same process:

`> `**x1:=newt(2);**

`> `**x2:=newt(x1);**

`> `**x3:=newt(x2);**

The next three values are:

`> `**x4:=newt(x3);**

`> `**x5:=newt(x4);**

`> `**x6:=newt(x5);**

Notice that by the fifth iteration of Newton's function we have achieved stability in our solution to nine decimal places. What if our initial guess were worse? Suppose we chose x_{0}=3 ? We would achieve nine places of accuracy on the sixth iteration rather than the fifth.
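The hand-by-hand iteration can also be run to convergence in a short loop. This Python sketch (function names ours) repeats Newton steps until two successive approximations agree to nine decimal places, for both starting values discussed above:

```python
import math

def f(x):
    return 2 * x**3 - 3.1**x

def fprime(x):
    return 6 * x**2 - 3.1**x * math.log(3.1)

def newton_iterates(x0, tol=1e-9, max_steps=50):
    """Return the list of Newton iterates starting from x0."""
    xs = [x0]
    while len(xs) < max_steps:
        xs.append(xs[-1] - f(xs[-1]) / fprime(xs[-1]))
        if abs(xs[-1] - xs[-2]) < tol:  # stable to nine decimal places
            break
    return xs

print(newton_iterates(2))  # settles near the smaller intercept
print(newton_iterates(3))  # also settles near the smaller intercept
```

Printing the two lists lets you compare how many steps each starting value needs.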

Now let's try to determine the larger solution. Again we pick a starting value close to the *x* -intercept. Let's choose x_{0}=4 . Note that since our function hasn't changed and because of the way in which we defined our x_{n} , the only command that we have to modify is the first one. We can find the following values by simply re-executing the other commands.

`> `**x1:=newt(4);**

`> `**x2:=newt(x1);**

We then get the sequence of values

So far we have not been very careful about choosing our initial estimate. There is a potential difficulty that can arise. Notice that in finding the second solution it took longer for Newton's method to converge. Was it because our initial guess in the second case was further from the actual value than in the first case? There does not seem to be strong evidence to support this conclusion since both guesses were off by about the same amount. In fact, if we chose an initial guess of x_{0}=6 , further away from the actual second solution, we would converge to that solution in fewer steps. Hence x_{0}=6 is a "better" guess than x_{0}=4 . Why is this?

The key to answering this question lies in looking at the function that defines Newton's method. This function depends on the derivative. Suppose that we choose an x_{0} very close to a maximum or minimum of the function f(x)=2x^{3}-3.1^{x}. Then the line tangent to the curve at that point will be very nearly parallel to the *x*-axis, and the next iterate will land far from the solution rather than closer to it. Notice that with x_{0}=4, x_{1}=8.200115963, whereas with x_{0}=6, x_{1}=5.422040253; i.e., our first iterate remained closer to the actual value.
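These two first steps are easy to check directly. Using the same hand-written derivative as before (a Python sketch of our own, not the lab's Maple session):

```python
import math

def f(x):
    return 2 * x**3 - 3.1**x

def fprime(x):
    return 6 * x**2 - 3.1**x * math.log(3.1)

def newt(x):
    return x - f(x) / fprime(x)

# Near the local maximum the tangent is almost flat, so x0 = 4 is thrown
# far past the root, while x0 = 6 steps toward it.
print(newt(4))  # about 8.2001
print(newt(6))  # about 5.4220
```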

One final point. Newton's method can be susceptible to very small differences in our initial guess. A very small change in the value chosen for x_{0} can result in a drastic difference in the result. Newton's method may converge to an entirely different solution (if there is more than one) or it may not converge at all. This susceptibility to initial conditions is one of the key features of chaotic dynamics (or chaos). In our example, the method converged to the smaller solution when x_{0} had the value 3.8599 and converged to the larger root for x_{0}=3.85999, a difference of less than .0001.
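The sensitivity is already visible in a single step. Both seeds below sit very close to the critical point of f, where the derivative is tiny, so one Newton step flings them far away, and seeds differing by less than 0.0001 are sent to noticeably different places (a Python sketch with the same hand-written derivative as before):

```python
import math

def f(x):
    return 2 * x**3 - 3.1**x

def fprime(x):
    return 6 * x**2 - 3.1**x * math.log(3.1)

def newt(x):
    return x - f(x) / fprime(x)

# fprime is nearly zero near x = 3.86, so f(x)/fprime(x) is huge there:
# both seeds are thrown far down the x-axis, and to different places.
a, b = newt(3.8599), newt(3.85999)
print(a, b, abs(a - b))
```

From such distant starting points the subsequent iterates can wander to either root, which is the behavior described above.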

**Exercises:**

1. Use Newton's method to find the value of

2. Use Newton's method to find the fifth root of *b* for *b* = 3.1, accurate to five decimal places. Hint: you want to find the value *x* such that x^{5}=b ( *f* ( *x* ) should be what function?).

3. How many zeros do the following functions have? Find each zero accurate within .01.

a) f(x)=x^{3}+x-1

b) h(x)=10xe^{-x}-1

4. Define *f* ( *x* ) = . Use Newton's method to try to find the roots of the equation with the following initial estimates:

© University of Colorado Colorado Springs

1420 Austin Bluffs Pkwy, Colorado Springs, CO USA 80918

719-255-8227 (UCCS), 800-990-8227
