
Precision and Stability

One might hope to improve the accuracy of the numerical solution to a differential equation by decreasing the step size. This strategy is correct, but only up to a point: eventually we run into the limitations of machine precision. In the extreme case, consider what happens in the Euler method,

\begin{displaymath}
w_{i+1} = w_i + h f(t_i, w_i) ,
\end{displaymath} (13)

when the second term on the right-hand side is so small that, at machine precision, adding it to $w_i$ does not change any digits of $w_i$. Clearly we are no longer improving the solution, mathematical claims of first-order accuracy notwithstanding. Roughly speaking, we should stop decreasing $h$ when the mathematical truncation error of the method becomes comparable to the roundoff error in representing the difference $w_{i+1} - w_i$.
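This floating-point effect is easy to see directly. In IEEE double precision the machine epsilon is about $2.2\times 10^{-16}$, so an increment much smaller than $|w_i|$ times that vanishes entirely when added. A minimal sketch (the particular numbers are chosen for illustration, not taken from the notes):

```python
# An Euler update w + h*f is lost entirely when h*f is below
# machine epsilon (~2.2e-16) relative to w.
w = 1.0
increment = 1e-17          # plays the role of h * f(t_i, w_i)

w_next = w + increment
print(w_next == w)         # True: the update changed no digits of w
print(w + 1e-15 == w)      # False: a larger increment does register
```

The step made literally no progress, even though mathematically $w_{i+1} \neq w_i$.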

Herein lies another great advantage of higher order methods: they achieve high accuracy at larger step sizes, so we can push to greater accuracy before machine precision becomes the limiting factor.
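The point can be illustrated by integrating a test equation with the Euler method and the classical fourth-order Runge-Kutta method at the same step size. The test problem $w' = \lambda w$, $w(0)=1$, and the particular step size below are choices made for this sketch, not taken from the notes:

```python
import math

# Compare forward Euler with classical 4th-order Runge-Kutta on
# w' = lam * w, w(0) = 1, whose exact solution is exp(lam * t).
lam = -1.0

def f(t, w):
    return lam * w

def euler_step(t, w, h):
    return w + h * f(t, w)

def rk4_step(t, w, h):
    # Classical RK4: four stage evaluations per step.
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, h, t_end=1.0):
    t, w = 0.0, 1.0
    while t < t_end - 1e-12:
        w = step(t, w, h)
        t += h
    return w

h = 0.1
exact = math.exp(lam * 1.0)
err_euler = abs(integrate(euler_step, h) - exact)
err_rk4 = abs(integrate(rk4_step, h) - exact)
print(err_euler, err_rk4)
```

At the same step size, the RK4 error is smaller than the Euler error by several orders of magnitude, which is why the fourth-order method can afford a much larger $h$ for a given target accuracy.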

Stability is another important consideration in solving difference equations. Here we have considered only recursion relations, but even with these we run into trouble if the step size is too large. For example, suppose in the simple ODE (1) $\lambda$ is negative and the step size $h$ is chosen larger than $-1/\lambda$. Then the factor $(1 + h\lambda)$ in the solution (2) is negative, so the numerical solution oscillates in sign; and once $h$ exceeds $-2/\lambda$, its magnitude also grows explosively, not at all resembling the true exponentially decaying solution $\exp(\lambda t)$. We say the solution is unstable.
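The instability is easy to reproduce: each Euler step for $w' = \lambda w$ simply multiplies $w$ by $(1 + h\lambda)$. A sketch with $\lambda$ and the two step sizes chosen here for illustration:

```python
# Forward Euler on w' = lam * w with lam < 0: each step multiplies w
# by (1 + h*lam).  With h > -2/lam that factor has magnitude > 1, so
# the iterates oscillate in sign and blow up.
lam = -10.0

def euler_trajectory(h, n_steps, w0=1.0):
    w = w0
    traj = [w]
    for _ in range(n_steps):
        w = w + h * lam * w      # i.e. w *= (1 + h*lam)
        traj.append(w)
    return traj

stable   = euler_trajectory(h=0.05, n_steps=20)   # factor +0.5: decays
unstable = euler_trajectory(h=0.30, n_steps=20)   # factor -2.0: blows up

print(abs(stable[-1]))     # tiny, like the true solution exp(lam*t)
print(abs(unstable[-1]))   # huge, and alternating in sign step to step
```

With $h = 0.05$ the iterates decay toward zero as the true solution does; with $h = 0.30 > -2/\lambda = 0.2$ they flip sign every step and grow without bound.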

Clearly, choosing a step size appropriate to the problem is one way to avoid trouble with stability. But some recursion relations develop instabilities no matter how small the step size. The Euler and Runge-Kutta methods we have described are designed to be stable, at least for reasonably small step sizes.

Carleton DeTar 2008-12-01