Accuracy of Inputs

Bill Walster - Staff OACIS uunet!oacis.org!gww
Mon Feb 25 12:54:00 PST 1991


There is a fundamental piece of information that must be supplied with
any of the examples cited by David Hough and Bob Knighten regarding
argument reduction for trigonometric functions. In fact, this piece of
information must be supplied for ANY meaningful discussion of what a
machine should do in response to a computing request. The missing piece
of information is the answer to the following question: How accurate is
a particular number? For example, is the number 0.1 assumed to be
0.100000..., with an infinite number of trailing decimal zeros? Or, is
it assumed to be any number in the closed interval [0, 0.2]? Or,
something else? Unless this question is answered, it is neither clear
what is meant by the number 0.1 nor possible to decide what ought to be
done when computing any function of it.

The value 0.1 was chosen because it is not machine representable in binary. Such
examples expose the need to specify numerical accuracy. In a vain
attempt to avoid explicitly specifying numerical accuracy of inputs,
many ACRITH researchers have proposed that machines should do decimal
arithmetic. They say the reason for this is to promote convenience.
However, the real reason appears to be to have infinitely precise
decimal numbers remain machine representable. They want this to be the
implicit assumption when any number is entered into a computer.
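
The effect of the binary representation on 0.1 can be seen directly. The
following Python fragment (a sketch, assuming IEEE 754 binary64 doubles,
as in CPython) prints the exact value that is actually stored:

    from decimal import Decimal

    # Exact decimal expansion of the binary64 double nearest to 0.1.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625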

Provided computations are performed on such infinitely precise inputs,
everything progresses smoothly. With the aid of a long accumulator to
compute exact dot products, results accurate to 1 ulp can be obtained.
However, once exact results have been rounded to the machine word
length, as they almost inevitably will be in any computation of
substance, the assumption of infinite precision no longer holds! From
this point on, there is no advantage in a decimal machine. Thus, it can
be legitimately questioned whether a general purpose decimal machine
architecture is worth the effort.
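
As a rough illustration of that distinction, the following Python sketch
uses exact rational arithmetic to stand in for a long accumulator (the
inputs and the names are illustrative only, not any particular proposal):

    from fractions import Fraction

    x = [0.1, 1.0e16, -1.0e16]    # binary64 inputs
    y = [1.0, 1.0, 1.0]

    # "Long accumulator": accumulate the dot product exactly, then round
    # once to the machine word length.
    exact = sum(Fraction(a) * Fraction(b) for a, b in zip(x, y))
    long_acc = float(exact)                    # one rounding at the end
    naive = sum(a * b for a, b in zip(x, y))   # rounded at every step

    print(long_acc)   # 0.1  (within 1 ulp of the exact result)
    print(naive)      # 0.0  (the small term is lost)
    # From here on, long_acc is just another binary64 number; the exact
    # accumulation confers no further advantage.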

Instead of leaving ambiguous the interpretation of what is meant by a
particular number, why not make it explicitly unambiguous? The obvious
way to do this is with intervals. In most practical situations,
intervals constructed from 64-bit floating point numbers are more than
adequate to represent the available accuracy of numerical inputs. 
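
A minimal sketch of the mechanics in Python (the helper name and the
one-ulp outward rounding via math.nextafter, available in Python 3.9 and
later, are illustrative choices, not a full interval package):

    import math

    def interval_add(x, y):
        """Add intervals [a, b] + [c, d] with outward rounding."""
        lo = math.nextafter(x[0] + y[0], -math.inf)  # push lower bound down
        hi = math.nextafter(x[1] + y[1], math.inf)   # push upper bound up
        return (lo, hi)

    # "0.1" read as the exact decimal 1/10: enclose it between the
    # binary64 doubles on either side of the literal 0.1.
    tenth = (math.nextafter(0.1, -math.inf), math.nextafter(0.1, math.inf))
    print(interval_add(tenth, tenth))   # guaranteed to contain 0.2

Rounding outward by a full ulp at every operation is deliberately
conservative; hardware with directed rounding modes would give tighter
bounds, but containment of the true result is the point.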

If one is interested in computing a theoretical quantity that does not
depend on fallible numerical data, then the infinite precision domain
may be required if normal interval arithmetic is not sufficiently
accurate. Two alternatives are available in this case: a decimal
machine, with a long accumulator, and interval arithmetic; or rational
arithmetic. Transcendental functions may be more easily computed using rational
arithmetic and continued fractions, as is done in numerous symbolic
mathematics packages.
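
As one concrete possibility (an illustrative Python sketch, not drawn
from any particular package), a convergent of the simple continued
fraction of e can be evaluated entirely in exact rationals:

    from fractions import Fraction

    def e_convergent(n):
        """n-th convergent of e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...],
        evaluated in exact rational arithmetic."""
        terms, k = [2], 1
        while len(terms) < n:
            terms += [1, 2 * k, 1]
            k += 1
        value = Fraction(terms[n - 1])
        for a in reversed(terms[:n - 1]):
            value = a + 1 / value
        return value

    print(e_convergent(15))          # an exact rational approximation of e
    print(float(e_convergent(15)))   # ~ 2.718281828...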

Rather than making "matters murkier", interval arithmetic is required to
perform computations with guaranteed accuracy in the presence of
fallible inputs or fallible intermediate results. How much accuracy is
potentially available in any situation depends on the accuracy of
inputs. How much accuracy is attainable depends on the available
hardware and software. In any case, leaving the accuracy of inputs
ambiguous can only be done at the risk of mass confusion regarding what
is possible to compute in a given situation. For example:

 sin([3.14159...62E+25, 3.14159...64E+25]) = [-1, 1].
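
A conservative interval sine only has to recognize that the argument
interval spans a full period. In Python (again an illustrative sketch,
relying on math.nextafter from Python 3.9 and later):

    import math

    def interval_sin(lo, hi):
        """Conservative enclosure of sin over the interval [lo, hi]."""
        if hi - lo >= 2.0 * math.pi:
            # The argument spans at least one full period, so sin attains
            # every value in [-1, 1] somewhere in the interval.
            return (-1.0, 1.0)
        raise NotImplementedError("narrow intervals need argument reduction")

    # Near 3.14159E+25, adjacent binary64 doubles are already about
    # 4.3E+9 apart, so even the tightest honest enclosure of an input of
    # that size spans many periods:
    x = 3.14159e25
    print(interval_sin(x, math.nextafter(x, math.inf)))   # (-1.0, 1.0)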

- - -

G. William Walster,
Director of Research,
Computation and Algorithms,
Oregon Advanced Computing Institute (OACIS),
19500 N. W. Gibbs Drive, Suite 110,
Beaverton, OR 97006-6907
(503) 690-1203
gww@oacis.org



