Consider the ordinary differential equation
\[
\frac{dy}{dt} = -\lambda y , \qquad y(0) = y_0 ,
\]
with constant $\lambda > 0$.

Let's call the independent variable ``time''. A numerical approach to the solution starts from the initial time and steps forward in discrete time steps $\Delta t$, namely $t_n = n\,\Delta t$ for $n = 0, 1, 2, \ldots$. Let $y_n$ denote the approximation to $y(t)$ at $t = t_n$. To step forward in time we begin with the finite-difference representation of the derivative, namely,
\[
\frac{dy}{dt} \approx \frac{y_{n+1} - y_n}{\Delta t} .
\]
Using this approximation in the equation gives
\[
\frac{y_{n+1} - y_n}{\Delta t} = -\lambda y_n ,
\]
or
\[
y_{n+1} = (1 - \lambda\,\Delta t)\, y_n .
\]
This is a recursion relation (also called a difference equation), allowing us to step the approximate solution forward in time, starting at $n = 0$ with $y_0 = y(0)$. The solution is
\[
y_n = (1 - \lambda\,\Delta t)^n\, y_0 .
\]
In order for this result to be a good approximation, we require that $\lambda\,\Delta t \ll 1$, in which case $(1 - \lambda\,\Delta t)^n \approx e^{-n\lambda\,\Delta t} = e^{-\lambda t_n}$, matching the exact solution $y(t_n) = y_0\, e^{-\lambda t_n}$.
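The recursion above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original text; the function name `euler_decay` and the parameter values are illustrative choices.

```python
import math

def euler_decay(y0, lam, dt, nsteps):
    """Apply the Euler recursion y_{n+1} = (1 - lam*dt) * y_n nsteps times."""
    y = y0
    for _ in range(nsteps):
        y = (1.0 - lam * dt) * y
    return y

# Compare with the exact solution y(t) = y0 * exp(-lam * t) at t = 1.
y0, lam = 1.0, 1.0
approx = euler_decay(y0, lam, dt=0.001, nsteps=1000)  # t = nsteps * dt = 1
exact = y0 * math.exp(-lam * 1.0)
print(approx, exact)  # agree to about three decimal places for dt = 0.001
```

Note that the whole solution hinges on $\lambda\,\Delta t$ being small here: with $\Delta t = 0.001$ and $\lambda = 1$ the product is $10^{-3} \ll 1$, as required.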

The method outlined above is called the Euler method. The more
general problem to be solved is
\[
\frac{dy}{dt} = f(y, t)
\]
for $t > 0$, where
\[
y(0) = y_0 .
\]
Again we denote the approximate solution as $y_n$ and approximate the first derivative as a simple (forward) finite difference. The Euler method then gives the recursion relation
\[
y_{n+1} = y_n + \Delta t\, f(y_n, t_n) .
\]
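The general recursion can be sketched as a short Python routine. The name `euler` and the example right-hand side are illustrative, not from the text.

```python
def euler(f, y0, t0, dt, nsteps):
    """Integrate dy/dt = f(y, t) by the Euler recursion; return (times, values)."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    for _ in range(nsteps):
        y = y + dt * f(y, t)   # the Euler update y_{n+1} = y_n + dt * f(y_n, t_n)
        t = t + dt
        ts.append(t)
        ys.append(y)
    return ts, ys

# Example: dy/dt = -y, y(0) = 1, integrated to t = 1.
ts, ys = euler(lambda y, t: -y, y0=1.0, t0=0.0, dt=0.01, nsteps=100)
print(ys[-1])  # close to exp(-1) ~ 0.3679
```

Passing $f$ as a function argument keeps the stepping loop independent of the particular equation being solved.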
Let's see how the approximation to $y(t)$ improves as $\Delta t$ is
decreased. To see this we consider the error in the finite-difference
approximation to the derivative. A Taylor's series expansion gives
\[
y(t_{n+1}) = y(t_n) + \Delta t \left.\frac{dy}{dt}\right|_{t_n} + \frac{(\Delta t)^2}{2} \left.\frac{d^2 y}{dt^2}\right|_{t_n} + \cdots .
\]
The Euler method truncates the series after the second term on the rhs. We see that the error made in such an approximation scales with the square of the step size. So if we decrease the step size by a factor of 2, the Euler error in a single step decreases by a factor of 4. Notice, however, that we must then take twice as many steps to reach the same final time. So the net error decreases by a factor of two when the step size is decreased by a factor of two.

We call the error in the difference equation the *local truncation
error* to distinguish it from the global error in the final result
$y(t_f)$. More generally, with a local truncation error of $O\!\left((\Delta t)^2\right)$ in each step and $N = t_f/\Delta t$ steps, the global error in $y(t_f)$ is
\[
N\, O\!\left((\Delta t)^2\right) = \frac{t_f}{\Delta t}\, O\!\left((\Delta t)^2\right),
\]
or $O(\Delta t)$. Since $t_f$ is constant, the
error in $y(t_f)$ is linear in $\Delta t$, as we have seen.
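This linear scaling of the global error is easy to check numerically. The sketch below (all names illustrative) halves the step size repeatedly for the decay equation and watches the error at the final time halve along with it.

```python
import math

def euler_final(lam, y0, dt, t_final):
    """Euler solution of dy/dt = -lam*y at t_final, using step size dt."""
    nsteps = round(t_final / dt)
    y = y0
    for _ in range(nsteps):
        y = (1.0 - lam * dt) * y
    return y

exact = math.exp(-1.0)          # y(1) for lam = 1, y0 = 1
for dt in (0.1, 0.05, 0.025):
    err = abs(euler_final(1.0, 1.0, dt, 1.0) - exact)
    print(f"dt = {dt:6.3f}   global error = {err:.2e}")
# Each halving of dt roughly halves the global error: first-order convergence.
```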

When the error in a method decreases as the first power of $\Delta t$, we call
it a *first order* method. The Euler method is therefore a first
order method.