If we use the Newton-Raphson method for finding roots of a polynomial $p$, we need to evaluate both $p(x)$ and its derivative $p'(x)$ for any $x$.
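As a reminder of why efficient polynomial evaluation matters here, the Newton-Raphson iteration calls $p$ and $p'$ once per step. A minimal sketch, assuming the caller supplies both as functions (the names `newton`, `p`, and `dp` are illustrative, not from the text):

```python
def newton(p, dp, x, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly replace x by x - p(x)/dp(x)."""
    for _ in range(max_iter):
        step = p(x) / dp(x)   # one evaluation of p and of p' per iteration
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x
```

For example, `newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)` converges to $\sqrt{2}$ in a handful of iterations.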

It is often important to write efficient algorithms to complete a
project in a timely manner. So let us try to design the algorithm for
evaluating a polynomial so it takes the fewest flops (floating point
operations, counting both additions and multiplications). For
concreteness, consider the cubic polynomial
$$p(x) = c_3 x^3 + c_2 x^2 + c_1 x + c_0.$$

The most direct evaluation computes each monomial one by one. It takes $k$ multiplications for a monomial of degree $k$ and $n$ additions to combine the terms, resulting in $n(n+3)/2$ flops for a polynomial of degree $n$. That is, the example polynomial takes three flops for the first term, two for the second, one for the third, and three to add them together, for a total of nine. If we reuse the powers of $x$ from monomial to monomial we can reduce the effort. In the above example, working backwards, we can save $x^2$ from the second term and get $x^3$ for the first in one multiplication by $x$. This strategy reduces the work to $3n-1$ flops overall, or eight flops for the example polynomial. For short polynomials the difference is trivial, but for high-degree polynomials it is huge. A still more economical approach regroups and nests the terms as follows:
$$p(x) = ((c_3 x + c_2)x + c_1)x + c_0.$$
(Check the identity by multiplying it out.) This procedure generalizes to an arbitrary polynomial. Computation starts with the innermost parentheses, using the coefficient of the highest-degree monomial, and works outward, each time multiplying the previous result by $x$ and adding the coefficient of the monomial of the next lower degree. Now it takes only $2n$ flops, or six for the above example. This is the efficient Horner's rule.
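The nested procedure just described translates into a short loop. A sketch (the function name `horner` and the coefficient-list convention are our choices, not fixed by the text):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs lists the coefficients from highest degree down,
    e.g. [c3, c2, c1, c0] for c3*x**3 + c2*x**2 + c1*x + c0.
    Each loop pass does one multiplication and one addition,
    so a degree-n polynomial costs exactly 2n flops.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c   # work outward from the innermost parentheses
    return result
```

For instance, `horner([2.0, -1.0, 3.0, 5.0], 2.0)` evaluates $2x^3 - x^2 + 3x + 5$ at $x = 2$, giving $23$.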

- Pseudocode and Playing Computer
- Evaluating a polynomial: poly.py
- Matrices
- Horner's Rule for a Polynomial and Its Derivative