Precision

In scientific language we use the term "precision" to describe the number of significant digits in a number and the term "accuracy" to describe the confidence we have in the number. In statistical terms, accuracy can be described by a standard deviation. Here we concentrate on precision.

A computer represents numbers with a limited number of binary bits. The single precision type float is stored in 32 bits and the double precision type double, in 64 bits. Most computers use the IEEE standard storage format, which represents a number in binary scientific notation with a sign, a mantissa, and an exponent. What is a mantissa? In decimal notation the number -0.512 x 10^5 has a mantissa of 0.512, an exponent of 5, and a negative sign. Single precision has room for a mantissa of approximately 8 significant decimal digits and double precision, approximately 15. By shifting decimal places and changing the exponent, we can always arrange for the mantissa to start with "0.", followed by a nonzero digit. The significant digits are then counted to the right of the decimal point. For example, in decimal language the number .0012345678 would be written in single precision as 0.12345678 x 10^-2 and would have eight significant decimal digits. The allowed range of the exponent, expressed as a power of ten, is approximately 10^(+/-38) in single precision and 10^(+/-308) in double precision. So a number whose exact representation requires an infinite number of digits must be rounded to a finite number of digits in the computer.

Carleton DeTar 2017-09-05