fp equality

uunet!cwi.nl!Dik.Winter
Fri Dec 10 19:12:34 PST 1993


 > This is the heart of the issue.
 > If x and y are close enough together that "cancellation" occurs, 
 > x-y is computed EXACTLY.
 > This is a key insight of many algorithms that
 > obtain the benefit of higher precision without the cost, such as 
 > "doubled double" quad.   This is not a bug in those algorithms, but an
 > exploitation of an important feature of conventional floating-point 
 > arithmetic: roundoff is NOT random.   If x and y are within a factor of
 > two of each other, their difference will be computed exactly on IEEE,
 > VAX, 370 I think, but probably not Cray.

I will stress this.  It will also not be computed exactly on VAX and 370
if the subtraction results in underflow: neither has gradual underflow, so
an underflowing difference is normally flushed to zero.
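
To make the exactness concrete, here is a minimal C sketch (my
illustration, not from the original posts) of Dekker's Fast2Sum, the
building block behind "doubled double" arithmetic.  It relies on exactly
the property quoted above: the subtractions of close quantities are exact,
so the rounding error of the addition is recovered exactly.

#include <stdio.h>

/* Fast2Sum (Dekker): requires |a| >= |b|.  Returns s = fl(a + b) and e
   such that a + b == s + e exactly.  Both subtractions below are exact
   because the operands are close enough: no information is lost. */
static void fast_two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;   /* exact */
    *e = b - z;          /* exact: the rounding error of a + b */
}

int main(void) {
    double x = 1.0, y = 1e-17;   /* y is entirely rounded away in x + y */
    double s, e;
    fast_two_sum(x, y, &s, &e);
    printf("s = %.17g\ne = %.17g\n", s, e);  /* s == 1, e == fl(1e-17) */
    return 0;
}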

 >                                             The significance of the
 > difference will depend on the context, but the subtraction itself doesn't
 > add any uncertainty.    Algorithms based on this observation often try
 > to do the subtractions of close quantities early while they are still
 > exact rather than later after some uncertainty has accumulated. 

I once wrote such an algorithm for the computation of the arcsine and
arccosine.  (This was back around 1970 on the Electrologica X8, which had
gradual underflow and optimal rounding, although round to even was not
there.)  The subtraction was done fairly late in the computation, but
analysis showed that if the subtraction was done correctly (even for
numbers extremely close to each other), the errors would be correlated and
would cancel out still later in the computation.  Flushing underflows to
zero (a hard underflow) would invalidate the algorithm, and fuzzy equality
testing would also invalidate it.

There are many more such algorithms, and it would be a pain to implement
them if fuzzy comparison were the standard.
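
As a hypothetical illustration (mine, not from the thread) of why a fuzzy
default would hurt: a routine that guards a divided difference with an
exact equality test is safe, because for close arguments x - y is exact
and carries real information, while a fuzzy test declares distinct
arguments "equal" and throws that information away.

#include <math.h>
#include <stdio.h>

/* Hypothetical fuzzy equality with an arbitrary tolerance. */
#define FUZZY_EQ(a, b) (fabs((a) - (b)) <= 1e-12 * fabs(a))

/* Divided difference of f between x and y.  The exact test x == y is the
   right guard: whenever x != y, the denominator x - y is a nonzero and
   (for close x and y) exactly computed quantity. */
static double divdiff(double fx, double fy, double x, double y) {
    if (x == y)
        return 0.0;                  /* caller handles the confluent case */
    return (fx - fy) / (x - y);
}

int main(void) {
    double x = 1.0, y = 1.0 + 1e-13;    /* distinct representable numbers */
    printf("divdiff = %g\n", divdiff(x * x, y * y, x, y));   /* about 2 */
    printf("fuzzy would call them equal: %d\n", FUZZY_EQ(x, y));  /* 1 */
    return 0;
}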

 > If I recall correctly, both Gaussian elimination and tridiagonal eigenvalue
 > routines work better than they "should" because the errors in the
 > intermediate results are correlated:  the answers come out fine even though
 > the intermediate results may have no significant figures correct.

This is true, *but*: both suffer from extensive error propagation if forward
error analysis is applied.  There is a very good reason that, before J.H.
Wilkinson, Gaussian elimination was thought to be infeasible for matrices of
order exceeding 10 in the then-current precision (about 30 bits); there are
published quotes pronouncing this.  It is only when you apply backward error
analysis that you get reasonable error bounds.  Even then, the actual errors
are in almost all cases very much better than the predicted bounds, although
it is possible to construct special matrices that attain exactly the
predicted error.  So the error bounds are sharp, but almost never reached.
Eigenvalue routines are inherently iterative, so the "work better" part in
that case is not about getting better values but about faster convergence.

(For those not in the know: forward error analysis assumes exact input and
tries to find bounds on the errors in the result.  Backward error analysis,
on the other hand, takes the computed result as exact and tries to relate
it to perturbations in the input.  Wilkinson was the first to see the
feasibility of backward error analysis, and indeed, in numerical linear
algebra it is impossible to get reasonable bounds with forward error
analysis.)
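
In symbols (modern textbook notation, not from the post): if \hat{f}(x) is
the computed result for data x, then

   forward error  = \| \hat{f}(x) - f(x) \|
   backward error = \min \{ \|\Delta x\| : \hat{f}(x) = f(x + \Delta x) \}

so backward analysis asks how much the *input* must be perturbed for the
computed result to be the exact answer.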

 > Some form of interval arithmetic is the simplest path in principle
 > to bullet-proof results, but in practice it's not so simple, and so there's
 > still a lot of interest in more
 > economical ways of finding satisfactory results.

It is bullet-proof, but it almost always exaggerates the error by a wide
margin.  Much work has been done in Karlsruhe to improve this, but that is
not simple.  Note, however, that their algorithms need exact, not fuzzy,
comparison; and directed rounding as well, of course.
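
A tiny sketch of why naive interval arithmetic exaggerates (my
illustration; the interval type and names are mine): directed rounding
keeps the enclosure rigorous, but the two occurrences of x in x - x are
treated as independent, so the result is an interval of twice the input
width instead of the exact zero.  This is the dependency problem the
Karlsruhe work tries to beat.

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON

typedef struct { double lo, hi; } interval;

/* Interval subtraction with outward (directed) rounding. */
static interval isub(interval a, interval b) {
    interval r;
    fesetround(FE_DOWNWARD);  r.lo = a.lo - b.hi;
    fesetround(FE_UPWARD);    r.hi = a.hi - b.lo;
    fesetround(FE_TONEAREST);
    return r;
}

int main(void) {
    interval x = { 0.9999999999, 1.0000000001 };  /* a value known to ~1e-10 */
    interval d = isub(x, x);                      /* mathematically zero */
    printf("[%g, %g]\n", d.lo, d.hi);             /* roughly [-2e-10, 2e-10] */
    return 0;
}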

 > > Testing for exact equality to a singular point won't do much good when
 > > the singularity occurs at a mathematical value that is not exactly
 > > representable.  That's not a problem for x==0, x==-1, etc. but is
 > > germane when the singularity occurs at Pi, 1/e, etc.

 > Fortunately, by definition,
 > any function you compute in conventional floating-point arithmetic
 > doesn't have a singularity at a transcendental point, since it's only defined
 > at representable numbers, though the divided difference may be wicked across the
 > representable numbers nearest the transcendental singularity of the exact
 > functions, e.g. tan(x) with infinite-precision pi reduction.

I will note here that functions like tan do not even overflow at the
nearest representable number, provided the exponent range is wide enough
(and IEEE's is).  As long as the singularity is of order 1 there is in
general no problem.  If the order is higher you would get problems in a
much wider range around the singularity, and even standard fuzzy comparison
would not be sufficient in that case.
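
For concreteness (my example, assuming IEEE double and a libm with good
argument reduction): the nearest double to pi/2 is about 6.1e-17 away from
the true singularity, so tan there is only about 1.6e16, nowhere near
overflow.

#include <math.h>
#include <stdio.h>

int main(void) {
    const double x = 1.5707963267948966;   /* nearest IEEE double to pi/2 */
    double t = tan(x);                     /* about 1.6331e16 */
    printf("tan(x) = %g, overflowed: %d\n", t, isinf(t));  /* finite, 0 */
    return 0;
}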

dik


