floating point comparisons illegitimate?

uunet!cup.portal.com!PLS
Fri Mar 22 19:37:11 PST 1991


> From: earl@mips.com (Earl Killian)
> To: cup.portal.com!PLS@mips.com
> 
>    Date: Thu, 21 Mar 91 21:34:01 PST
>    From: cup.portal.com!PLS@uunet.uu.net
> 
>    Perhaps I have too much experience with non-IEEE architectures. I've seen
>    machines where
>        if (A .eq. A)
>    could be false, because of interactions between automatic normalization and
>    overlength registers. Again, IMHO, testing for actual equality between
>    floating point operands isn't good practice, and it certainly isn't portable.
> 
> Another possible interpretation of the above would be that such
> architectures are broken, and shouldn't be used.  For example,
> normalized and unnormalized values should compare equal if they
> represent the same value.
> 
> Without equality comparison, how do you find the precision of your
> floating point?  The typical test is to iterate with smaller and
> smaller EPS until 1.0+EPS .EQ. 1.0 (with appropriate assignments to
> force rounding to storage precision).
> 
> Also, ABS(A-B) .LT. tolerance assumes you know the range of A and B
> (so that you can pick tolerance).

This raises an interesting question: should that test for machine precision
work? Certainly it doesn't on a lot of machines. If a compiler sees
  if (1.0 + EPS .EQ. 1.0)

is it justified in optimizing it to
  if (EPS .EQ. 0.0)
If not, why not?

The C standard chose to make information about precision and exponent range
available through the environment, in part because tricks like the above
don't work on a lot of machines.

Let me define a term: a minimum precision implementation is one that does
EVERY floating point operation in the minimum precision called for by the
data types, whether in storage or in register. The result of each operation
is correctly rounded immediately. By this definition, a three-operand
instruction that computes A*B+C in a single instruction must do two
roundings.

On a non-minimum precision implementation, the results of a computation will
depend on where the greater-precision operations are used and where narrowing
happens. I do not think that floating point equality is a well-defined
operation on such machines, but only on minimum precision implementations.

A couple of the previous messages have said, in effect, that the IEEE
standard requires a minimum precision implementation. Does it?

    ++PLS



More information about the Numeric-interest mailing list