double rounding in x86
Samuel A. Figueroa
uunet!SLINKY.CS.NYU.EDU!figueroa
Wed Aug 30 17:52:02 PDT 1995
In a previous message, Vaughan Pratt <uunet!cs.Stanford.EDU!pratt@uunet.uu.net>
writes:
>Oops, I'd forgotten that. In that case I drop the recommendation in
>the message I just sent, that x86 OS's boot up with extended precision
>disabled, since this still doesn't provide double precision arithmetic
>in exact-bit agreement with the rest of the world.
It IS possible to get "exact-bit agreement" (modulo things like how underflow
is detected) but there is a price. (Surprise! :-) As I've mentioned before,
on an Intel Pentium, floating-point performance degrades by a factor of at
least 4 for single precision, and by a factor of at least 10 for double
precision. Incidentally, on a Motorola 68K series chip, you also pay a price
for "exact-bit agreement," though I don't know what that price is. (On a 68K
chip, you simply set the precision mode to the desired precision, and you're
done. Just sit back and relax, as each floating-point operation will now take
more clock cycles than before.)
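To make the double-rounding hazard concrete, here is a small simulation (my
illustration, not part of the original exchange; the helper
`round_to_precision` is a hypothetical name) using exact rational arithmetic:
rounding a value first to the x87's 64-bit extended significand and then to
double's 53 bits can give a different answer than rounding to 53 bits
directly.

```python
# Sketch: simulate the double rounding that occurs when an x87 result is
# first rounded to the 64-bit extended-precision significand and only later
# stored to memory as a 53-bit double.
from fractions import Fraction

def round_to_precision(x, p):
    """Round a positive Fraction to p significant bits,
    round-to-nearest with ties-to-even (the IEEE 754 default)."""
    e = 0
    # Normalize so that x lies in [2^(p-1), 2^p); e tracks the scaling.
    while x >= 2 ** p:
        x /= 2
        e += 1
    while x < 2 ** (p - 1):
        x *= 2
        e -= 1
    n = x.numerator // x.denominator          # integer part
    frac = x - n                              # fractional remainder
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1                                # round up, or break tie to even
    return Fraction(n) * Fraction(2) ** e

# A value just above the halfway point between two adjacent doubles:
v = 1 + Fraction(1, 2 ** 53) + Fraction(1, 2 ** 70)

once  = round_to_precision(v, 53)                          # directly to double
twice = round_to_precision(round_to_precision(v, 64), 53)  # extended, then double

print(once  == 1 + Fraction(1, 2 ** 52))  # True: direct rounding goes up
print(twice == 1)                         # True: double rounding loses the
                                          # "sticky" low bits and ties to even
```

Note that setting the x87 precision-control field to 53 bits eliminates this
significand-level double rounding, but the exponent range stays extended,
which is why results can still differ near underflow and overflow (the
"modulo things like how underflow is detected" caveat above).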
>This would seem to make the x86 for x <= 5 a second class citizen for
>scientific computing.
I suspect other chip manufacturers avoided the problems associated with
implementing extended precision largely because of the difficulty in
supporting a data type wider than 64 bits on a RISC architecture.
Otherwise, someone might have tried doing it, and perhaps the implementation
would have had some mistakes. On the other hand, if everyone did floating-
point arithmetic the way the x86 does, would the x86 still be a "second class
citizen?" Would we have figured out how to exploit the benefits of extended
precision by now?
(By the way, I don't work for Intel, nor do I claim the x86 architecture's
implementation of the IEEE Standard is a perfect one.)
- Sam Figueroa (figueroa@cs.nyu.edu)
More information about the Numeric-interest mailing list