Constructing the interpolating polynomial is somewhat tedious. If our goal is merely to get the interpolated value, and we don't care to know the coefficients of the polynomial, we may use the Neville algorithm. This algorithm starts from the requested interpolation point and generates a table of the form

| $x_i$ | degree 0 | degree 1 | degree 2 | degree 3 |
| --- | --- | --- | --- | --- |
| $x_1$ | $P_1$ |  |  |  |
|  |  | $P_{12}$ |  |  |
| $x_2$ | $P_2$ |  | $P_{123}$ |  |
|  |  | $P_{23}$ |  | $P_{1234}$ |
| $x_3$ | $P_3$ |  | $P_{234}$ |  |
|  |  | $P_{34}$ |  |  |
| $x_4$ | $P_4$ |  |  |  |

Here $P_i = y_i$, and $P_{i(i+1)\cdots(i+m)}$ denotes the value at the interpolation point $x$ of the degree-$m$ polynomial passing through the points $i, i+1, \ldots, i+m$. The identity

$$
P_{i(i+1)\cdots(i+m)}(x) = \frac{(x - x_{i+m})\,P_{i\cdots(i+m-1)}(x) + (x_i - x)\,P_{(i+1)\cdots(i+m)}(x)}{x_i - x_{i+m}}
$$

generates each column of the table from the one before it.

The student should verify that for $m = 1$ this reduces to the linear interpolation between $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$, as it should.
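Neville's recursion is short enough to code directly. Here is a minimal sketch (the function name is mine, not from the text) that builds each column of the table in place and returns only the final value, without ever forming the polynomial's coefficients:

```python
def neville(xs, ys, x):
    """Evaluate at x the polynomial interpolating the points (xs[i], ys[i]),
    using Neville's recursion. xs must be distinct; len(xs) == len(ys)."""
    p = list(ys)                 # column of degree-0 values P_i = y_i
    n = len(xs)
    for m in range(1, n):        # m = polynomial degree in this column
        for i in range(n - m):
            # Combine the two overlapping lower-degree polynomials:
            # p[i] is P_{i..i+m-1}, p[i+1] is P_{(i+1)..(i+m)} (not yet updated).
            p[i] = ((x - xs[i + m]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + m])
    return p[0]
```

Applied to the five tabulated points of the example below, this reproduces the final quartic value of the table.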

Here we apply the Neville scheme to the example of the exponential integral, interpolated at $x = 0.15$.

| $x_i$ | $y_i$ | degree 1 | degree 2 | degree 3 | degree 4 |
| --- | --- | --- | --- | --- | --- |
| 0.1 | -1.6228 |  |  |  |  |
|  |  | -1.22230 |  |  |  |
| 0.2 | -0.8218 |  | -1.18706 |  |  |
|  |  | -1.08135 |  | -1.17642 |  |
| 0.3 | -0.3027 |  | -1.12320 |  | -1.17186 |
|  |  | -0.91395 |  | -1.13992 |  |
| 0.4 | 0.1048 |  | -1.02289 |  |  |
|  |  | -0.76870 |  |  |  |
| 0.5 | 0.4542 |  |  |  |  |
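The staggered table above can be reproduced by keeping every column of the Neville recursion rather than just the final value. A sketch (the function name is mine, not from the text):

```python
def neville_tableau(xs, ys, x):
    """Build the full Neville table at x: cols[m][i] holds P_{i..i+m},
    the degree-m polynomial through points i..i+m, evaluated at x."""
    cols = [list(ys)]            # degree-0 column: the tabulated y values
    for m in range(1, len(xs)):
        prev = cols[-1]
        cols.append([((x - xs[i + m]) * prev[i] + (xs[i] - x) * prev[i + 1])
                     / (xs[i] - xs[i + m])
                     for i in range(len(prev) - 1)])
    return cols

xs = [0.1, 0.2, 0.3, 0.4, 0.5]
ys = [-1.6228, -0.8218, -0.3027, 0.1048, 0.4542]
for m, col in enumerate(neville_tableau(xs, ys, 0.15)):
    print(f"degree {m}:", ", ".join(f"{v:.5f}" for v in col))
```

Each printed column matches a column of the table, with successive columns one entry shorter than the last.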

Compare the two tables. As promised, the second column is a copy of the $y_i$ values. The third column gives the result of doing a linear interpolation between the two neighboring points and evaluating the resulting function at $x = 0.15$. Notice that only the first value, $-1.22230$, is really an interpolation. The others in the third column are actually linear extrapolations back to 0.15 from interpolations on the other intervals. The fourth column gives the result of a quadratic interpolation through the central point on that row and its two neighbors. For $x = 0.15$, only the first value in that column, $-1.18706$, is really an interpolation. The others are extrapolations.

There can be two interpolating values in a column. For example, if we had constructed a similar table for interpolation at $x = 0.25$, the first two values in the fourth column would be interpolations, since the polynomials $P_{123}$ and $P_{234}$ both cover the interval containing that point.

Which value gives the best answer? Polynomial extrapolations can be risky, so clearly we should use only the interpolating values. For the example above, the interpolating values can be read from the first number in each column, starting with the third column: $-1.22230$, $-1.18706$, $-1.17642$, $-1.17186$. Since we expect the estimate to improve as we increase the degree of the polynomial, at least as long as the degree isn't too high, our best result should be $-1.17186$, coming from the quartic polynomial interpolating all the points.

It is not always necessary to carry out the Neville iteration to the bitter end. To avoid disasters with high-degree polynomials on evenly spaced points near the edges of a table, it is best to do interpolations with roughly the same number of tabulated values on either side of the interpolation point, which may make it advisable to stop before exhausting all the points in the table.
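One way to keep the tabulated values balanced around the interpolation point is to select a centered window of abscissas before starting the recursion. A sketch, assuming a sorted table (`centered_window` and its parameters are my own names, not from the text):

```python
import bisect

def centered_window(xs, x, k):
    """Return indices [lo, hi) of k consecutive tabulated points, chosen so
    that x sits as close to the middle of the window as the table edges allow."""
    i = bisect.bisect_left(xs, x)             # first tabulated point >= x
    lo = max(0, min(i - k // 2, len(xs) - k))  # clamp the window to the table
    return lo, lo + k
```

Interpolating at 0.15 with $k = 2$ then picks the points 0.1 and 0.2; near the right edge of the table the window simply slides as far right as it can.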

Finally, we need an estimate of the interpolation error in our best result without peeking at the exact value. (If we knew it, why interpolate?) A crude estimate of the error is given by the magnitude of the change between the best value and the next best. In our case it is $|-1.17186 - (-1.17642)| = 0.00456$, or about $0.005$. How far off are we, really? The actual value to six figures is $-1.16409$, so the error in the last value is really about $0.008$, nearly twice our estimated error. Well, knowing what we do about the logarithmic singularity in this function at the origin and the hazards of high-degree polynomial interpolations, we might have expected trouble. The moral of the story: make your error estimates with a good measure of humility.
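This crude estimate is easy to produce mechanically: track how much the top-of-column (diagonal) value changes as each new column is added. A sketch of the idea (my own formulation, not from the text):

```python
def neville_with_error(xs, ys, x):
    """Neville interpolation returning (best value, crude error estimate),
    the estimate being the change between the last two diagonal values."""
    p = list(ys)
    best, err = p[0], float("inf")
    for m in range(1, len(xs)):
        for i in range(len(xs) - m):
            p[i] = ((x - xs[i + m]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + m])
        # p[0] is now the top of the degree-m column; its change from the
        # previous column's top entry is the crude error estimate.
        best, err = p[0], abs(p[0] - best)
    return best, err
```

For the table above this returns the quartic value together with an estimate of about 0.005; as the discussion here shows, the true error can easily be a factor of two larger.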