FP Conversions

Tom MacDonald sun!hilbert.cray.com!tam
Tue Jun 12 09:23:14 PDT 1990


>>>>> Begin Cody

Just to add a little fuel to the fire.  The include file float.h
specifies the parameters DBL_MAX and DBL_MIN, for example, in
decimal.  We rely on the conversion routines to provide the
exact correct machine representation for these values.  After
all, for a given machine there is only one machine number that
satisfies each definition.  You might be surprised at how many
compilers are unable to generate those numbers from the float.h
values, and perhaps even at the magnitude of the errors (measured
in ULPs) in some cases.  A strong argument, IMHO, for exact
conversion.

>>>>> End Cody

The standard does not require that the values specified in <float.h>
be constants.  The macro names specified in <float.h> can (in most
cases) expand into function calls, implementation-specific keywords,
or none of the above.  In fact, the standard include files do not
have to be source files at all.  The compiler can contain internal
representations of all standard include files.  Therefore:

	#include <float.h>

might just cause the compiler to add some more predefined macro
expansions into its internal table that represent these values
perfectly.  The point is that there are other ways to handle <float.h>
than requiring perfect conversions.

I'd like to suggest that, to me, the real argument
has been lost.  I don't believe that anyone feels that perfect
conversions are inherently bad.  However, there is a trade-off.
Ada requires runtime checks for all kinds of things.  This isn't
bad either.  The trade-off comes when it takes an extra year to
design the automobile, an extra six months to generate the
animation, or when weather forecasts are less accurate because the
program runs too slowly.  I just feel that requiring perfect conversions
is very much like requiring run time checks for subscripts or
pointer dereferencing.  It's nice to have that option but don't
make me run with it all the time because I have competition and
I'm trying to beat them to market.  You must find a way to provide
me with an environment that is fast.  This fast environment allows
me to explore areas that the slow environment doesn't.  If we really
wanted accuracy we would use fractions and abandon floating-point
representations.  If we abandon floating-point representations
then programs run slower and we lose customers.

Just consider a data type called fraction with a numerator and a
denominator.  Both the numerator and denominator are 2000 bits
long.  This gives excellent precision and adequate range.  Now
let's say we took a typical Cray vector register: 64 elements,
each element 64 bits, giving us 4096 bits.  The numerator
could be stored in the first 32 elements and the denominator in the
last 32 elements.  Then I write conversion packages, math library
routines, and figure out ways to do the arithmetic.  The final
result would be that even though the answers were very accurate,
almost no one would use it.  They wouldn't use it because problems
that used to take hours to solve would now take days.  Does this
imply that speed is better than accuracy?  No, it just means that
there are trade-offs.  We need to offer trade-offs to customers.

Finally, I don't believe that C standards activity (like NCEG)
should mandate one side of the speed-vs.-accuracy coin.  We are
not serving users' needs by doing this.  We need to provide perfect
conversions for those who need them, but also allow for those who
are trying to explore new ways to solve problems in a reasonable
amount of time.


Tom MacDonald
