double rounding in x86

Vaughan Pratt uunet!cs.Stanford.EDU!pratt
Wed Aug 30 12:04:40 PDT 1995


	Perhaps x86 should have a mode where the exponents are limited to match
	the significand precision.  Addition of such a mode would require
	agreement among people doing the math on these parts around the
	industry.
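
(For reference, the x87 already has the complementary half of such a
mode: the precision-control field of the FPU control word can restrict
rounding to a 24- or 53-bit significand, but the 15-bit extended
exponent range stays in effect, which is precisely the gap the
suggestion above addresses.  A minimal sketch of setting that field,
assuming glibc's <fpu_control.h> macros on a Linux/x86 system:

    #include <fpu_control.h>   /* glibc, x86 only */

    /* Set the x87 precision-control field so results round to a
       53-bit significand.  This limits only the significand: the
       15-bit extended exponent range remains, so underflow and
       overflow still behave differently from true 64-bit doubles. */
    static void fpu_set_double_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;  /* clear both PC
                                                      bits, set 53 */
        _FPU_SETCW(cw);
    }
)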

Theoretically the x86's extended precision should be a boon to
scientific computing.  In practice it appears to have been more of a
hassle than the extra 11 bits are worth.  In hindsight it would seem
that any x86 OS claiming to support scientific computing should disable
extended precision at boot time, requiring users who want the extra 11
bits, and who know what they're letting themselves in for, to turn it
back on themselves as needed.  Similarly, long double should not be
recognized by gcc by default; gcc should insist on a command-line flag
authorizing its appearance in the source code being compiled.
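
To make the hassle concrete: when a result is computed to the x87's
64 significand bits in a register and then stored to a 53-significand-
bit double in memory, it gets rounded twice, and the twice-rounded
answer can differ from the correctly rounded one.  Here is a small
sketch in C; the ldexp constants are chosen so the exact sum lands
just above a halfway case, and the x87-versus-strict-double contrast
is the assumption (on a modern gcc you would need something like
-mfpmath=387 to see the extended-precision behavior):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* a + b is exactly 1 + 2^-53 + 2^-65, just above the halfway
           point between the doubles 1 and 1 + 2^-52.  Both operands
           are exactly representable as doubles. */
        volatile double a = 1.0;
        volatile double b = ldexp(1.0, -53) + ldexp(1.0, -65);
        volatile double s = a + b;

        /* Rounded once to 53 bits: s == 1 + 2^-52.  Rounded first to
           64 bits (giving 1 + 2^-53, an exact halfway case) and then
           to 53 bits, round-to-even yields s == 1. */
        printf("s - 1 = %g\n", s - 1.0);
        return 0;
    }

The volatile qualifiers force the operands and the sum through memory
so the compiler cannot fold the arithmetic away at compile time.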

Besides its role in creating compiler nondeterminism in gcc-x86 (which
can be fixed by the gcc team without *any* inconvenience or unpleasant
surprises to the users), the use of long double under gcc has the
further drawback that the implied precision is less predictable than
that of either double or float.  Whereas float and double always mean
32 and 64 bits respectively, long double means 80 bits on an x86 but
only 64 bits on a Sun, an Alpha (!), etc., where gcc treats long
double as synonymous with double.
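
The unpredictability is easy to observe from <float.h>, which reports
the significand width the compiler actually gives each type; a minimal
sketch:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* 24/53 significand bits for IEEE single/double everywhere;
           LDBL_MANT_DIG varies with what long double really is. */
        printf("FLT_MANT_DIG  = %d\n", FLT_MANT_DIG);
        printf("DBL_MANT_DIG  = %d\n", DBL_MANT_DIG);
        printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
        return 0;
    }

On an x86 the last line reports 64 (the 80-bit extended format); on
the machines where gcc treats long double as double it reports 53.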

Furthermore, there is no point in consulting people about changes
intended to ameliorate gcc's execrably poor handling of extended
precision until the gcc team shows signs of interest in working on
extended-precision problems.  But I can't say I blame them for their
lack of interest; they may justifiably perceive 80-bit precision as
somewhere between a crock and a cruel hoax perpetrated on the
scientific computing community.

Vaughan Pratt
