Automatic promotion to extended precision
Alan M. McKenney
uunet!cims18.nyu.edu!mckenney
Tue Mar 19 08:07:13 PST 1991
In response to my mailing, David Hough (dgh@validgh.com) writes:
[">>" is David quoting me]
>> Actually, the idea of automatically promoting intermediate
>> results to a higher precision doesn't seem to me to be a win, anyway.
>
> What about the extra exponent range? Anyway the idea of extra
> precision is clear enough, on 80x87 and 6888x systems, if
> rounding to storage precision is enforced for assignments.
> Automatically computing in higher precision makes it possible for
> many programs to just work without a lot of analysis, which could
> be done to make programs work exploiting only storage precision,
> but usually isn't.
Well, without giving a philosophical lecture on different approaches
in numerical work, let me say that in some situations your perspective
is appropriate (e.g., porting Other People's Software to your company's
machine) and in others it is not. The considerations which make
automatic extended precision uninteresting for me are:
(1) portability. If I am developing code on a machine such as a Sun,
I usually want it to run correctly on other machines. If my
single-precision code only works well because some of the stuff is
really being done in double (or higher) precision, that will not make
porting it easier. I have to assume that operations are done with no
more precision (and no greater exponent range) than required by the
language standard.
(2) debugging. You mention extra exponent range. If I have a
"single-precision" code and generate a value beyond the single-precision
range (about 3.4e38 in IEEE arithmetic), then if the result is rounded
to single after each step (either machine instruction or source
statement), the exception will occur where the number is computed.
If, however, the code goes along using double or extended precision
for a while, then the exception may occur far from the offending
statement; a small sketch of this situation follows after point (3).
For debugging, this is not a win. However, if rounding
is done at every assignment in the source, this will not be a
serious problem.
(3) There are certain algorithms which depend upon the precision being
exactly what the programmer specifies. The one that comes to mind is a
method I have heard of (but not used) for software doubled precision,
which uses double-to-single conversion to chop up a double-precision
value (the same trick applied to doubles is how quad precision gets
done in software). It goes something like this:
REAL UPPER, LOWER
DOUBLE PRECISION X
UPPER = REAL( X )
LOWER = REAL( X - DBLE( UPPER ) )
If the compiler decides to keep UPPER around in double or extended
precision, this won't work: X - DBLE(UPPER) is then exactly zero, and
the low-order bits of X are lost. Thus, automatic promotion to higher
precision will *break* this code! (It would, however, work
if explicit type-conversion functions forced a rounding step.)
A second sketch below shows a doubled-precision building block that
breaks the same way.
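To make point (2) concrete, here is a minimal made-up sketch (invented
names, IEEE single and double assumed) of the kind of single-precision
loop I have in mind:

      REAL FUNCTION SUMSQ( X, N )
      INTEGER N, I
      REAL X(N), S
      S = 0.0
      DO 10 I = 1, N
*        If each product and sum really is rounded to REAL, an element
*        near 1.0E30 overflows right here, and the exception points at
*        this statement.  If the compiler quietly carries S in double
*        or extended precision, the overflow (if it shows up at all)
*        appears much later, wherever the result finally gets rounded
*        back to single.
         S = S + X(I) * X(I)
   10 CONTINUE
      SUMSQ = S
      END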
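And for point (3), here is a rough sketch of a doubled-precision
building block in the same spirit (a Knuth-style two-sum rather than
the exact method I alluded to, and again entirely made up), which
depends on genuine single-precision rounding in exactly the same way:

      SUBROUTINE TWOSUM( A, B, S, E )
*     Compute S = the single-precision sum of A and B, and E = the
*     rounding error committed in forming S, using only REAL
*     operations (Knuth's two-sum).  This is correct only if every
*     operation and assignment really is rounded to single precision.
      REAL A, B, S, E
      REAL BV
      S  = A + B
      BV = S - A
      E  = (A - (S - BV)) + (B - BV)
*     If the compiler carries S and BV in double or extended precision,
*     S - A reproduces B exactly, E collapses to zero, and the error
*     term that doubled precision depends on is silently lost.
      END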
In the spirit of compromise, though ;-), I will simply suggest
that language implementations which do floating-point operations in
a higher precision than the source code specifies provide a convenient
method which has the effect of turning this off, at the very least for
assignments and explicit type-conversion functions. (I assume that
Sun's -fstore has this effect.)
I do agree with you that performance is an issue. Perhaps if
chip manufacturers are given the message that good performance at
*all* precisions is desirable, then they will not design chips
which perform poorly in single precision, and so we will not have
to choose between (what I consider) "correct behavior" and good
performance.
Alan McKenney
E-mail: mckenney@acs.nyu.edu <-- "accept no substitutes!"
P.S.: I am glad that David (and others at Sun) do not agree with
PLS@cup.portal.com:
> ....I persist in thinking that the source code is broken, though.