fp equality

David G. Hough at validgh uunet!validgh.com!dgh
Fri Dec 10 15:14:10 PST 1993


Sorry to bore the compiler people who don't want to know about this, but it's
fairly fundamental to understanding what you should and shouldn't do to
numerical programs you compile...

> In the case
> of your example
> 	if (x == y)
> 		z = fprime(x) ;
> 	else
> 		z = (f(x)-f(y))/(x-y) ;
> if x and y have nearly the same value, (x-y) is computed with poor
> precision, and the result z could be wildly off the theoretical value.

This is the heart of the issue.
If x and y are close enough together that "cancellation" occurs, 
x-y is computed EXACTLY.
This is a key insight of many algorithms that
obtain the benefit of higher precision without the cost, such as 
"doubled double" quad.   This is not a bug in those algorithms, but an
exploitation of an important feature of conventional floating-point 
arithmetic: roundoff is NOT random.   If x and y are within a factor of
two of each other, their difference will be computed exactly on IEEE,
VAX, and (I think) the 370, but probably not on a Cray.    The significance of the
difference will depend on the context, but the subtraction itself doesn't
add any uncertainty.    Algorithms based on this observation often try
to do the subtractions of close quantities early while they are still
exact rather than later after some uncertainty has accumulated. 
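
To see this concretely, here is a minimal C sketch (my illustration, not
from the original exchange; it assumes IEEE double with round-to-nearest).
The first part shows an exact difference of nearby values; the second shows
Fast2Sum (Dekker), the building block of "doubled double", which recovers
the roundoff of an addition exactly by the same cancellation property:

	#include <stdio.h>

	/* Fast2Sum (Dekker): for |a| >= |b|, s + err == a + b exactly.
	   The steps (s - a) and b - (s - a) each subtract operands close
	   enough together that no rounding occurs. */
	static void fast2sum(double a, double b, double *s, double *err)
	{
	    *s = a + b;
	    *err = b - (*s - a);
	}

	int main(void)
	{
	    double x = 1.0000000000000022;   /* within a factor of two of y */
	    double y = 1.0;
	    /* Massive cancellation, yet x - y is EXACT: it merely exposes
	       the low-order bits of x, adding no new error. */
	    printf("x - y = %.17g\n", x - y);

	    double s, err;
	    fast2sum(1e16, 3.14159, &s, &err);
	    /* s is the rounded sum; err is exactly what rounding discarded. */
	    printf("s = %.17g  err = %.17g\n", s, err);
	    return 0;
	}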

It is true that any uncertainty in x or y
may cause the difference x-y to have few or no correct significant digits,
but the point with respect to divided differences
is that f(x) and f(y) are computed with the same perturbed
arguments and their errors may well be correlated.   To see conceptually
how this might arise, suppose the "true" values of the arguments
x and y are equal,
but x is subject to an error d, so the actual arguments are y+d and y.
Then the second formula computes

		z = (f(y+d)-f(y))/d

which for small d is approximately

		z = fprime(y)

which is what was wanted, and achieved, without a possibly expensive direct
computation of fprime, and despite the error in the first
argument!    Realistic cases are more complicated and would
involve consideration of any uncertainty in y, the uncertainty due to 
computing f, AND THE UNCERTAINTY DUE TO COMPUTING fprime, which may not have
any explicit subtraction or equality test but may still be subject to roundoff,
especially if it is the expensive sort of function one would rather avoid
computing.
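
A small C illustration of the argument above (my example, not from the
post; f(x) = x*x is chosen so that fprime(x) = 2x is known exactly, and
d plays the role of the error in the first argument):

	#include <stdio.h>

	static double f(double x) { return x * x; }   /* f'(x) = 2x */

	int main(void)
	{
	    double y = 1.0;
	    double x = y + 1e-9;              /* perturbed argument y + d */
	    double d = x - y;                 /* exact: x and y are close */
	    double z = (f(x) - f(y)) / d;     /* divided difference */
	    printf("divided difference = %.17g\n", z);
	    printf("fprime(y)          = %.17g\n", 2.0 * y);
	    return 0;
	}

The numerator suffers massive cancellation, but because both f values are
computed from the same perturbed arguments, their errors are correlated
and z still comes out close to fprime(y), about 2 here.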

If I recall correctly, both Gaussian elimination and tridiagonal eigenvalue
routines work better than they "should" because the errors in the
intermediate results are correlated:  the answers come out fine even though
the intermediate results may have no significant figures correct.

Some form of interval arithmetic is the simplest path in principle
to bullet-proof results, but in practice it's not so simple, and so there's
still a lot of interest in more
economical ways of finding satisfactory results.
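
One simple flavor of the idea, as a hedged sketch (using the C99 <fenv.h>
directed-rounding interface, which postdates this post): evaluate each
operation twice, rounding down for a lower bound and up for an upper bound,
so the true result is guaranteed to lie inside the interval.

	#include <fenv.h>
	#include <stdio.h>

	#pragma STDC FENV_ACCESS ON   /* tell the compiler rounding matters */

	int main(void)
	{
	    volatile double a = 1.0, b = 3.0;  /* volatile: defeat folding */
	    fesetround(FE_DOWNWARD);
	    double lo = a / b;                 /* lower bound on 1/3 */
	    fesetround(FE_UPWARD);
	    double hi = a / b;                 /* upper bound on 1/3 */
	    fesetround(FE_TONEAREST);
	    printf("1/3 lies in [%.17g, %.17g]\n", lo, hi);
	    return 0;
	}

The catch in practice is that intervals widen rapidly through correlated
operations -- exactly the correlations the algorithms above exploit -- so
naive interval bounds are often uselessly pessimistic.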

> Testing for exact equality to a singular point won't do much good when
> the singularity occurs at a mathematical value that is not exactly
> representable.  That's not a problem for x==0, x==-1, etc. but is
> germane when the singularity occurs at Pi, 1/e, etc.

Fortunately, by definition,
any function you compute in conventional floating-point arithmetic
doesn't have a singularity at a transcendental point, since it's only defined
at representable numbers.   The divided difference may still be wicked across
the representable numbers nearest the transcendental singularity of the exact
function, e.g. tan(x) with infinite-precision pi reduction.
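
For example (a sketch, assuming a C99 libm whose tan does accurate argument
reduction; the values are illustrative): the doubles straddling pi/2 give
huge but finite results, and the divided difference across them is indeed
wicked.

	#include <math.h>
	#include <stdio.h>

	int main(void)
	{
	    double p  = 1.5707963267948966;    /* double nearest pi/2 */
	    double lo = nextafter(p, 0.0);     /* neighbor just below  */
	    double hi = nextafter(p, 2.0);     /* neighbor just above  */
	    /* No representable argument hits the pole, so all three
	       results are finite -- but of enormous magnitude, and of
	       opposite sign on opposite sides of pi/2. */
	    printf("tan(%.17g) = %.17g\n", lo, tan(lo));
	    printf("tan(%.17g) = %.17g\n", p,  tan(p));
	    printf("tan(%.17g) = %.17g\n", hi, tan(hi));
	    return 0;
	}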


