Math 136 - Calculus II - Lab 1: Numerical Integration Methods II

We have seen two basic approaches to determining the value of a definite integral: using Riemann sums and using antiderivatives in conjunction with the Fundamental Theorem of Calculus. As is often the case, you may be faced with the task of evaluating a definite integral which has no elementary antiderivative. When this is the case we must evaluate the integral numerically. This lab will focus on various numerical integration methods and compare their effectiveness. You have already become acquainted with the left-hand and right-hand Riemann sums. However, there are many other methods, including, but not limited to: the midpoint approximation, the trapezoidal approximation, and Simpson's rule. Maple stores these commands in a package called "student". In order to run more efficiently the program doesn't load these commands every time you start Maple, but only pulls them up when you tell the program that you'll need them. To tell the computer you'd like it to open the package, enter the following command line:

> with(student);

The functions "leftsum" and "rightsum" compute the left and right Riemann sums, respectively. The midpoint method is accomplished by using the command "middlesum". The trapezoidal approximation and Simpson's rule are accessed using the "trapezoid" and "simpson" functions.

In theory at least, all of these methods can be employed to get a numerical estimate of the integral in question. In practice, however, this may not be feasible: the number of computations required in order to get a specified degree of accuracy may be too great to make the method practical. This is the main criterion by which we can judge the relative effectiveness of these methods.

A second question which arises when we try to numerically estimate the value of an integral with any of these methods is: how do we know when our answer is truly accurate to, say, three decimal places? Consider the following example. Suppose we need to evaluate

    Int(exp(t^3)/(t+1), t = 0..2)

to three places of accuracy. Using Maple, we form consecutive left Riemann sums using 2000 and 2001 subdivisions and look at the difference in their values.

> evalf(leftsum(exp(t^3)/(t+1),t=0..2,2000));

> evalf(leftsum(exp(t^3)/(t+1),t=0..2,2001));

> %%-%;

The % in Maple allows you to use the results of the previous line. Thus, the last command line, %%-%, asked the computer to subtract the value of the previous line from the value of the line before it. The difference between the two values is less than 10^(-3), and so we might expect that our approximation is pretty accurate. In fact, this is not the case. If we look at a sum using 4000 subdivisions we get:
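For readers without Maple at hand, the same experiment can be reproduced with a short Python sketch. The leftsum helper below is our own stand-in for Maple's command, not part of any library:

```python
from math import exp

def leftsum(f, a, b, n):
    # Left Riemann sum with n equal subdivisions of [a, b].
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda t: exp(t**3) / (t + 1)

s2000 = leftsum(f, 0, 2, 2000)
s2001 = leftsum(f, 0, 2, 2001)
s4000 = leftsum(f, 0, 2, 4000)

# Consecutive sums agree to better than 10^(-3)...
print(abs(s2000 - s2001))
# ...yet doubling the subdivisions moves the estimate substantially,
# so the close agreement was misleading.
print(abs(s2000 - s4000))
```

The integrand grows so quickly near t = 2 that the left sums converge very slowly, which is exactly why the small step from 2000 to 2001 subdivisions tells us so little.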

> evalf(leftsum(exp(t^3)/(t+1),t=0..2,4000));

Our first attempt at an answer was not even accurate to one decimal place. This is clearly not a reliable method. We must either find some way to bound the error or make comparisons using a different criterion. In the first case, we already know of one method to bound the error.

If the integrand is increasing or decreasing over the interval in question, we know that the exact value must lie somewhere between the left and right sums. If we increase the number of subdivisions to the point that the difference between these sums (in absolute value) is smaller than the amount of allowable error, we are assured that the error in our approximation is no greater than this difference. The same type of relationship holds between the trapezoidal and midpoint methods. The distinction is that now the integrand must not change concavity over the interval of integration. Provided that the concavity remains the same, we can trap the exact value of the integral between the trapezoidal and midpoint sums for any number of subdivisions. The drawback to either of these methods, however, is the amount of computational overhead involved: we must compute two sums in order to bound the error.

Rather than attempting to trap the value of the integral between two sums, we could still try to make comparisons between successive values of a single sum. Instead of increasing the number of subdivisions by some linear term, we could try increasing the number of subdivisions by a multiplicative factor - say by two. We still have no guaranteed bound on the error, but it is much more likely that if the values remain stable to three decimal places, our approximation will be accurate to three decimal places. This approach has the drawback that the computational overhead grows quite rapidly after only a few steps since the number of subdivisions is growing exponentially.
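A sketch of this doubling strategy in Python (not Maple), applied for illustration to Int(1/x, x=1..2), whose known value ln(2) lets us check the outcome afterward. The midsum helper is our own stand-in for Maple's middlesum:

```python
from math import log

def midsum(f, a, b, n):
    # Midpoint rule with n equal subdivisions of [a, b].
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 1 / x
n, prev = 1, midsum(f, 1, 2, 1)
while True:
    n *= 2                        # double the number of subdivisions
    cur = midsum(f, 1, 2, n)
    if abs(cur - prev) < 0.0005:  # stable to three decimal places
        break
    prev = cur

# There is no guarantee, but here the stabilized value really is
# within 10^(-3) of the exact answer ln(2).
print(n, cur, abs(cur - log(2)))
```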

Regardless of which approach we take, the most important factor is the rate at which an approximation method converges to the exact value of the integral in question.

Part I. Data Acquisition.

The first part of this lab is concerned with the rates of convergence of the various methods already mentioned. We will use the value of a known integral so that we have a means of comparing the error arising from each of the approximation methods. We will use Int(1/x, x=1..2) as our test integral. Maple knows that the exact value of this integral is ln(2), and we can get its decimal value by entering:

> ln(2.0);

What we will do is estimate the value of this integral using varying numbers of subdivisions for each method. The Maple commands are:

> evalf(leftsum(1/x,x=1..2,n));

> evalf(rightsum(1/x,x=1..2,n));

> evalf(middlesum(1/x,x=1..2,n));

> evalf(trapezoid(1/x,x=1..2,n));

The n value represents the number of subdivisions we are using. We use the "evalf" command in order to get a decimal value; if we don't, Maple will generate a symbolic value. For example, we calculate the left sum of Int(1/x, x=1..2) using 10 subdivisions. Then we ask Maple to return a decimal estimate for this same problem.

> leftsum(1/x,x=1..2,10);

> evalf(leftsum(1/x,x=1..2,10));

Since we know the antiderivative in this problem, we can find the amount of error by subtracting the true value of the integral from the estimate Maple obtained as follows:

> %-ln(2.0);

Remember that the percent sign is a Maple shorthand way of referring to the result of the immediately preceding calculation.
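If you would like to sanity-check the numbers Maple returns, the same four approximations can be reproduced in Python. The riemann helper below is our own, not Maple's:

```python
from math import log

def riemann(f, a, b, n, rule):
    # Sample-point offset within each subinterval of width dx:
    # 0 for left endpoints, 1 for right endpoints, 0.5 for midpoints.
    dx = (b - a) / n
    offset = {"left": 0.0, "right": 1.0, "mid": 0.5}[rule]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f, exact = lambda x: 1 / x, log(2.0)
left  = riemann(f, 1, 2, 10, "left")
right = riemann(f, 1, 2, 10, "right")
mid   = riemann(f, 1, 2, 10, "mid")
trap  = (left + right) / 2   # trapezoid rule: average of left and right

for name, value in [("left", left), ("right", right),
                    ("mid", mid), ("trap", trap)]:
    print(name, value - exact)
```

Since 1/x is decreasing on [1,2], the left sum lands above ln(2) and the right sum below it, just as the discussion of trapping the exact value predicts.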

Use Maple to complete the following table for the error amount in each approximation of the integral Int(1/x, x=1..2), as I have done for 10 subdivisions in the last two calculations. Retain at least four significant digits in the table.

Table I. Errors in Approximation

    n       Left Sum    Right Sum    Midpoint    Trapezoid
    5
    10
    30
    150
    1500

It should be clear from the table that as the number of subdivisions is increased, the error is decreased. What we want to do is quantify how much the error decreases. We can do so by comparing the amount of error for each number of subdivisions. Using the data from the above table, complete the following table. For each method, compute the ratio of errors between successive numbers of subdivisions. For example, the first entry under the left sum column would be the error using 5 subdivisions divided by the error arising from the approximation using 10 subdivisions. In the far left-hand column is the factor by which the number of subdivisions was increased.

Table II. Ratio of Error Terms.

    Factor            Left Sum    Right Sum    Midpoint    Trapezoid
    2   (5 -> 10)
    3   (10 -> 30)
    5   (30 -> 150)
    10  (150 -> 1500)

Questions:

Do you see a correspondence between the error ratios and the factor by which the number of subdivisions was increased? If so, what is the correspondence?

Left sum:

Right sum:

Midpoint:

Trapezoid:

If you were unable to determine a correspondence, explain why it was not possible to do so.
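One way to check your answers numerically, using the same Python stand-ins for Maple's sums (our own helpers, and only the 5-to-10 step shown; your table covers the other factors):

```python
from math import log

def riemann(f, a, b, n, rule):
    # Left or midpoint Riemann sum with n equal subdivisions.
    dx = (b - a) / n
    offset = {"left": 0.0, "mid": 0.5}[rule]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f, exact = lambda x: 1 / x, log(2.0)
err = lambda n, rule: abs(riemann(f, 1, 2, n, rule) - exact)

ratio_left = err(5, "left") / err(10, "left")
ratio_mid = err(5, "mid") / err(10, "mid")
print(ratio_left)   # close to 2, the factor by which n increased
print(ratio_mid)    # close to 4, the square of that factor
```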

As you already are aware, and as should be obvious from Table I, the errors arising from the left and right sum estimates are approximately equal but of opposite sign. The trapezoid rule, being the average of these two methods, gives us a much better approximation because the errors tend to cancel each other out. There is another method of approximation known as Simpson's Rule. Whereas the midpoint and trapezoidal methods can be thought of as approximating the integrand with linear functions over each subdivision, Simpson's Rule approximates the integrand with a quadratic function which matches the value of the integrand at both endpoints and the midpoint of each subdivision. However, it is also equivalent to a weighted average based on reducing the errors arising from the midpoint and trapezoid rules.

Using the information from columns 3 and 4 of Table I, how do the midpoint and trapezoidal errors appear to be related?

Suppose we try to reduce the error further. Using the data from Table I, fill in the "Weighted Average" column in the table below. The weighted average is found by calculating the quantity

    (2*MID(n) + TRAP(n))/3

where MID(n) and TRAP(n) refer to the midpoint and trapezoid approximations for n subdivisions. To fill in the error column next to it, have Maple find the difference between the weighted average and what we know to be the actual value of the integral. Then, to fill in the last two columns of the table, use the simpson command in Maple (as shown below) with 20, 60, 300, and 3000 subdivisions respectively and calculate the errors arising from each approximation. We use 20, 60, 300 and 3000 subdivisions because the weighted average based on n subdivisions is equivalent to using Simpson's method with 2n subdivisions.
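You can check the claimed equivalence numerically. In Python (our own helpers again: mid and trap as before, plus a hand-rolled composite Simpson's rule):

```python
def sums(f, a, b, n):
    # Midpoint and trapezoid approximations with n subdivisions.
    dx = (b - a) / n
    left = sum(f(a + i * dx) for i in range(n)) * dx
    right = sum(f(a + (i + 1) * dx) for i in range(n)) * dx
    mid = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
    return mid, (left + right) / 2

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subdivisions.
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * dx)
    return total * dx / 3

f = lambda x: 1 / x
mid, trap = sums(f, 1, 2, 10)
weighted = (2 * mid + trap) / 3
# The weighted average with n subdivisions matches Simpson with 2n,
# up to floating-point round-off.
print(weighted - simpson(f, 1, 2, 20))
```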

Table III. Weighted Average vs. Simpson's Rule.

    n       Weighted Average    Error    Simpson (2n subdivisions)    Error
    10
    30
    150
    1500

> evalf(simpson(1/x,x=1..2,n));

> %-ln(2.0);

How do these errors compare? They should be the same. What factors may have arisen in your data collection process that could have affected your results?

Part III. Error Bounds

We already know that if the integrand is increasing over the interval of integration, then the left sum will underestimate the exact value of the integral while the right sum will overestimate it. It should seem plausible that the difference between the left and right sums is affected by how rapidly the integrand increases or decreases over the interval: a rapidly increasing function should produce a greater difference between the left and right sums than an integrand which increases very slowly over the interval. This suggests that the maximum possible error using a left or right sum is somehow related to the size of the derivative over the interval, provided that the integrand is monotonic. This is indeed the case. The error of a left or right sum approximation is bounded by the derivative according to

    |Error| <= K1*(b-a)^2/(2*n),    where K1 is the maximum of |f'(x)| on [a,b].

The trapezoidal and midpoint methods may also be used to trap the exact value of the integral. If the integrand is concave up, the trapezoidal rule overestimates the exact value and the midpoint rule underestimates it. The size of the difference between these two methods is affected by the amount of concavity of the integrand. In other words, the size of the second derivative over the interval of integration determines the amount of error in our estimation. The error arising from the trapezoidal rule is bounded by the second derivative according to

    |Error| <= K2*(b-a)^3/(12*n^2),    where K2 is the maximum of |f''(x)| on [a,b].

There is a similar bound for Simpson's Rule. This bound depends on the fourth derivative of the integrand:

    |Error| <= K4*(b-a)^5/(180*n^4),    where K4 is the maximum of |f''''(x)| on [a,b].

In general, it might be quite tedious to use the bound for Simpson's Rule since it requires that we find the fourth derivative of the integrand. With Maple, however, it is not difficult. We can simply use the "diff" command:

> diff(1/x,x$4);

The syntax "x$4" tells Maple that we want the fourth derivative of the function with respect to the variable x. To determine the maximum size of this derivative over the interval [1,2], we can plot the absolute value of this expression over the interval in question and estimate it graphically:

> plot(abs(%),x=1..2);

From the graph it appears as though the fourth derivative is no greater than 24 on the interval [1,2]. If we want to estimate the value of the integral to within .000001, we would solve the inequality

    24*(2-1)^5/(180*n^4) = 2/(15*n^4) < .000001

for n.

> solve(2/(15*n^4)<.000001);

Therefore n must be greater than 19 - and, since Simpson's Rule requires an even number of subdivisions, at least 20 - in order to get an error of less than .000001 using Simpson's Rule.

> evalf(simpson(1/x,x=1..2,20))-ln(2.0);
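The whole calculation can be double-checked outside Maple. In Python, with our own composite Simpson's rule:

```python
from math import log

def simpson(f, a, b, n):
    # Composite Simpson's rule with n (even) subdivisions.
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * dx)
    return total * dx / 3

# Solving 2/(15*n^4) < 0.000001 for n gives n > (2/(15*0.000001))^(1/4).
n_min = (2 / (15 * 0.000001)) ** 0.25
print(n_min)    # a little over 19, so n = 20 suffices

error = abs(simpson(lambda x: 1 / x, 1, 2, 20) - log(2.0))
print(error)    # comfortably below 0.000001
```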

Note that we don't have to find the maximum size exactly. But we do need to make sure that we do not underestimate the maximum size - we could end up with an invalid estimate otherwise.

Determine how many subdivisions would be required to match this level of accuracy using the left sum and trapezoidal rules. Hand in a printout of the Maple worksheet showing your work.

This method of bounding the error is not always efficient.  More specifically, explain what happens when you try to estimate

with an error of less than .01 using Simpson's Rule.