
If we use the Newton-Raphson method for finding roots of a polynomial p(x), we need to evaluate both the polynomial and its derivative, since each Newton step computes

    x_new = x - p(x)/p'(x).
It is often important to write efficient algorithms to complete a project in a timely manner. So let us try to design the algorithm for evaluating a polynomial so that it takes the fewest flops (floating point operations, counting both additions and multiplications). For concreteness, consider the polynomial

    p(x) = 2 - 4x + 5x^2 + 7x^3.
The most direct evaluation computes each monomial one by one. It takes k multiplications for a monomial of degree k and n additions to combine the terms, resulting in n(n+1)/2 + n = n(n+3)/2 flops for a polynomial of degree n.
That is, the example polynomial (n = 3) takes three flops for the cubic term, two for the quadratic, one for the linear, and three to add the terms together, for a total of nine. If we reuse powers of x from monomial to monomial, we can reduce the effort. In the above example, working backwards, we can save x^2 from the second term and get x^3 for the first in one multiplication by x. This strategy reduces the work to 3n - 1 flops overall, or eight flops for the example polynomial. For short polynomials the difference is trivial, but for high-degree polynomials it is huge. A still more economical approach regroups and nests the terms as follows:
    2 - 4x + 5x^2 + 7x^3 = 2 + x[-4 + x(5 + 7x)].
(Check the identity by multiplying it out.) This procedure can be generalized to an arbitrary polynomial. Computation starts with the innermost parentheses, using the coefficients of the highest degree monomials, and works outward, each time multiplying the previous result by x and adding the coefficient of the next lower degree. Each step costs one multiplication and one addition, so evaluating a polynomial of degree n this way takes only 2n flops, or six for the example polynomial.