
Precision

In scientific language we use the term ``precision'' to describe the number of significant digits in a number and the term ``accuracy'' to describe the confidence we have in the number. In statistical terms, accuracy can be described by a standard deviation. Here we concentrate on precision.

A computer represents numbers with a limited number of binary bits. The ``float'' single precision type is stored in 32 bits and the ``double'' double precision type in 64 bits. Most computers use the IEEE standard storage format, which represents a number in binary scientific notation with a sign, a mantissa and an exponent. What is a mantissa? In decimal notation the number $-0.512
\times 10^5$ has a mantissa of 0.512, an exponent of 5 and a negative sign. Single precision has room for a mantissa of approximately 8 significant decimal digits and double precision, approximately 15. By shifting decimal places and changing the exponent, we can always arrange for the mantissa to start with ``0.'', followed by a nonzero digit. The significant digits are then counted to the right of the decimal point. For example, in decimal language the number 0.0012345678 would be written in single precision as $0.12345678 \times
10^{-2}$ and would have eight significant decimal digits. The allowed range for the exponent, expressed as a power of ten, is approximately $\pm 38$ in single precision and $\pm 308$ in double precision. So a number that would require an infinite number of digits to represent exactly must be rounded to a finite number of digits in the computer.


Carleton DeTar 2017-09-05