NCEG IEEE rounding requirements
Tim Peters
uunet!ksr!tim
Sun Jul 15 19:16:51 PDT 1990
> [jim thomas]
> ...
> The intent of the NCEG draft specification is to require that IEEE
> implementations correctly round all conversions between binary
> floating-point representations and decimal ones with DECIMAL_DIG or
> fewer decimal digits. This does not cover conversions involving more
> than DECIMAL_DIG digits.
I'm confused again <grin -- what else is new?>. As I understand things,
754/854 require correct rounding for conversions within given ranges of
"number of significant decimal digits" and "magnitude of decimal
exponents". It sounds like all you're arguing for is removing the
latter excuse. Three questions:
1) Why bother? I can understand David Hough's desire to remove both
classes of excuse, but I really don't see the point in removing just
one of them.
2) (really just a rephrasing of #1 ...) Why not remove both classes of
excuse? I *assume* it's because you have specific algorithms in mind
that allow you to remove the second class of excuse efficiently but
that don't allow you to remove the first class of excuse efficiently.
Is that true? If not, what *is* the motivation?
3) I personally don't know how to remove either class of excuse using
what I consider to be efficient algorithms, although I do know how to
implement what 754/854 require with what I consider to be efficient
algorithms, and I do know how to implement what DGH wants using what
I consider to be inefficient algorithms. If the motivation for
removing only the "big exponent" excuse is in fact that there are
"efficient" algorithms that allow it to be removed, where are those
algorithms published (they're certainly not, e.g., the Steele/White &
Clinger algorithms)? If they're not published, I continue to object
to mandating this stuff on the original "not public art" grounds
(then again, it appears I'd object in any case <grin>).
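The round-trip guarantee at issue can be demonstrated in a few lines. This is only a sketch, run on a modern system whose conversions happen to be correctly rounded (the very property NCEG wants to mandate); 17 plays the role of DECIMAL_DIG for IEEE double precision:

```python
# Sketch: with correctly rounded conversions, writing an IEEE double out
# to DECIMAL_DIG = 17 significant decimal digits and reading it back
# recovers the original value exactly.
DECIMAL_DIG = 17          # enough decimal digits for an IEEE 754 double

x = 1.0 / 3.0             # not exactly representable in binary
s = format(x, ".{}g".format(DECIMAL_DIG))   # 17 significant digits
y = float(s)              # convert the decimal string back to binary

assert x == y             # the round trip is exact
print(s, "round-trips exactly")
```

With fewer than 17 digits the round trip can lose information; with 17 or more, correct rounding in both directions makes it exact.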
> ...
> The NCEG IEEE specification makes binary-decimal conversion
> subject to the same simple rounding principle as IEEE arithmetic
> operations.
No it doesn't -- it just makes a proper subset of the conversions adhere
to the principle; so do 754/854. You guys really ought to fix 754 if
you're unhappy with it now.
> ...
> However, despite their intent, these efforts often undercut
> simplicity; they may succeed in some instances, but not in all.
If you're talking about the Steele/White output algorithm, I'd like to
see an example you consider to be a failure.
> ... then you're stuck with having to understand the subtleties of
> not only base conversion but the failed subterfuge as well.
Indeed, isn't the same true of the NCEG proposal when the number of
decimal digits is "too big"? DGH's "perfect rounding every time" and
Steele/White's are the only "surprise free" approaches I've heard of.
If surprises are going to be allowed anyway, I'm not willing to slow
things down.
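For readers who haven't seen the "surprise" in question: a small sketch, using a runtime whose default float printing is a shortest-round-trip scheme in the Steele/White spirit (modern Python's repr behaves this way):

```python
# Sketch: why shortest-digits output is "surprise free".  0.1 is not
# exactly representable in binary, so printing all 17 digits exposes the
# stored approximation, while a shortest-round-trip printer shows just
# "0.1" -- the shortest decimal string that reads back to the same double.
print(format(0.1, ".17g"))   # 0.10000000000000001  (the stored value)
print(repr(0.1))             # 0.1  (shortest string that round-trips)
assert float(repr(0.1)) == 0.1
```

Either form converts back to the identical double; the shortest form simply never startles the user who typed 0.1 in the first place.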
going-too-far-or-not-far-enough-ly y'rs - tim
Tim Peters Kendall Square Research Corp
tim@ksr.com, ksr!tim@harvard.harvard.edu
ps:
> > [tom macd]
> > As the computing world goes more and more parallel
> > the idea that a sum reduction of an array of values will always yield
> > the same result on every machine seems unattainable.
> [david h]
> Kulisch would disagree. Anybody who thinks that IEEE requirements
> are a nuisance should investigate Kulisch's work, which has attracted
> considerable interest (and implementations) from German manufacturers,
> including IBM. All he really needs to get going is a GF77 with
> a correctly-rounded scalar product operator.
But Tom said "the same result on *every* machine", which surely
includes, e.g., existing Crays. If you postulate special hardware (or
slow software simulation of same) and then define "every machine" to
mean precisely & only those platforms with the gimmick, then of course
it's not much of a trick <grin> ... and a parallelized sum reduction on,
e.g., the KSR machine may not even get the same answer from one run to
the next with the same data.
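The root cause is easy to exhibit. A minimal sketch (the values are mine, chosen to make the effect obvious):

```python
# Sketch: IEEE addition is not associative, so a parallel sum reduction's
# answer depends on how the partial sums are grouped across processors.
data = [1e16, -1e16, 1.0]

left_to_right = (data[0] + data[1]) + data[2]   # (1e16 - 1e16) + 1 = 1.0
regrouped     = data[0] + (data[1] + data[2])   # the 1.0 is absorbed: 0.0

print(left_to_right, regrouped)
assert left_to_right != regrouped
```

A parallel machine that splits the array differently from run to run is in effect choosing a different parenthesization each time, hence Tom's point about reproducibility.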