A standard numerical problem asks for the solution to the equation

$$f(x) = 0$$

for some (usually nonlinear) differentiable function $f$. Among the many methods for solving this problem, the Newton-Raphson method is very efficient, but may require starting from a reasonably good guess for the solution.

Suppose that $x^*$ solves the problem. This means

$$f(x^*) = 0 .$$

We don't know $x^*$, but suppose we have guessed an $x_0$ that is close to $x^*$. We can expand $f(x^*)$ in a Taylor series about $x_0$ up to terms linear in $x^* - x_0$:

$$f(x^*) \approx f(x_0) + (x^* - x_0)\, f'(x_0) .$$

Using $f(x^*) = 0$, we can solve for $x^*$:

$$x^* \approx x_0 - \frac{f(x_0)}{f'(x_0)} .$$
The result is only approximate, since we dropped quadratic and higher terms in the Taylor series. If these neglected terms are small, $x^*$ is a better approximation to the root than $x_0$, but it isn't guaranteed to be the exact solution. So this formula can be used as the basis for an iterative algorithm: take the value of $x^*$ from the lhs, plug it in for $x_0$ on the rhs, and repeat:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} .$$

As we get closer to the root, the higher-order terms in the Taylor expansion become even less important and the method converges very rapidly.
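The iteration described above can be sketched in a few lines of Python. The equation $x^2 - 2 = 0$ (root $\sqrt{2}$), the starting guess, and the tolerance used here are illustrative assumptions, not part of the text:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # the Newton correction f(x_n)/f'(x_n)
        x -= step
        if abs(step) < tol:       # stop when the improvement is tiny
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Example: solve x**2 - 2 = 0 starting from the guess x0 = 1
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # close to 1.4142135623730951
```

Note that the derivative must be supplied along with the function; when $f'$ is not available in closed form, a finite-difference approximation is a common substitute.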

How do we know when the method has converged? That is, when do we tell our program to stop? One stopping criterion is to monitor the improvement $|x_{n+1} - x_n|$ and stop when it is less than a specified tolerance. Another is to monitor the value of $|f(x_n)|$ and stop when it is small enough. Which one is best depends on our application, i.e. for what purpose are we solving this equation in the first place? Do we require knowing the root $x^*$ with high accuracy, or does it suffice to get a suitably small $|f(x)|$? Note that even if the change $|x_{n+1} - x_n|$ is smaller than some tolerance, there is no assurance that we have the root to that accuracy. However, for most situations that method is sufficient.
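The difference between the two criteria can be made concrete with a sketch. The deliberately flat example function $f(x) = (x-1)^3$, with its root at $x = 1$, is an assumption chosen for illustration: near a multiple root, $|f(x)|$ falls below a tolerance while $x$ is still comparatively far from the root, so the two stopping rules give very different accuracy:

```python
def newton_stop(f, fprime, x0, tol, use_residual, max_iter=200):
    """Newton-Raphson stopping on |f(x)| < tol if use_residual,
    otherwise on |x_{n+1} - x_n| < tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if (abs(f(x)) if use_residual else abs(step)) < tol:
            return x
    return x

f = lambda x: (x - 1) ** 3      # triple root at x = 1 (illustrative)
fp = lambda x: 3 * (x - 1) ** 2

x_resid = newton_stop(f, fp, 2.0, 1e-9, use_residual=True)
x_step = newton_stop(f, fp, 2.0, 1e-9, use_residual=False)

# |f(x)| < 1e-9 already holds once |x - 1| < 1e-3, so the residual
# criterion stops with an error near 1e-3; the step criterion keeps
# going until the error is of order 1e-9.
print(abs(x_resid - 1), abs(x_step - 1))
```

Even the step criterion is not a strict error bound: for this function each Newton step shrinks the error only by a factor of $2/3$, so the remaining error is about twice the size of the last step, which is exactly the caveat noted above.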