Towards finding "the worst" input cases
Jon L White
uunet!lucid.com!jonl%kuwait
Wed Mar 27 23:27:37 PST 1991
I really want to participate in this discussion -- especially regarding
the part about mechanical generation of "hard" test cases for base-
conversion algorithms; but I'm up-to-the-ears in Lucid product schedules
and Lisp-world standardization issues (and whenever X3J13 rears its ugly
head . . .), and haven't even had time to read all the subsequent mail
on this topic. However, I'd like to make a high-order bit kind of
comment now, although more careful and considered comments will have to
wait until much later.
I was impressed by your search, having done something similar in 1977
on an otherwise nighttime-underutilized IBM 370/90. My search went
looking for powers of 10 that were close to powers of 2 (after Matula).
There are some incredibly clever tricks one can play to make the 370 do
128-bit integer arithmetic quickly; and this is satisfactory for exploring
the question if you start from a fairly good approximation of log2(10).
I had found a number with a string of 29 zeros of "uncertainty"; but
it looks like you have done much better! Right on!
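For concreteness, here is a little Common Lisp along the lines of that
search -- a reconstruction, not the 1977 code; the 53-bit significand
cutoff, the positive-powers-only scan, and the exponent range are all
picked out of the air:

(defun half-ulp-closeness (k &optional (precision 53))
  "Number of bits, past the first PRECISION significant bits of 10^K,
that must be examined before the discarded tail distinguishes itself
from exactly one half unit in the last place."
  (let* ((x (expt 10 k))
         (shift (- (integer-length x) precision)))
    (if (not (plusp shift))
        0                                ; 10^K fits exactly; trivial
        (let* ((rem  (ldb (byte shift 0) x))  ; discarded remainder
               (half (ash 1 (1- shift)))      ; exactly 1/2 lsb
               (d    (abs (- rem half))))     ; distance from the tie
          (if (zerop d)
              shift                           ; an exact tie
              (- shift (integer-length d)))))))

;; Scan positive powers and report the hardest one found.  (Negative
;; powers would want the same treatment applied to scaled reciprocals.)
(loop with best-k = 0 and best = -1
      for k from 1 to 340
      for close = (half-ulp-closeness k)
      when (> close best)
        do (setf best close best-k k)
      finally (format t "hardest in range: 10^~D needs ~D excess bits~%"
                      best-k best))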
But it seems to me that the "worst case" number you turned up, however
interesting in its own right, doesn't quite support the conclusion:
Among other things, this shows that even a careful "doubled precision"
approach isn't good enough to do correct 17-digit decimal -> IEEE double
input, since the crucial "which side of a half ulp is it on?" isn't
resolved here until hitting the 59th excess bit.
The reason is that a 60-bit rounding on this 120-bit number gives the
"right" answer anyway (for the wrong reason, of course). The number was
something like ...01.10000000....<maybe-more-bits>, and your reasoning
was that you couldn't perform correct rounding without knowing whether
the <maybe-more-bits> were all zero or not. But consider the case
analysis:
(1) <maybe-more-bits> has some non-zero bits; so the rounding step
will increment the lsb [uh, the bit that I showed just to the
left of the fractional point] by the IEEE rule about the discarded
remainder being greater than 1/2 lsb.
(2) <maybe-more-bits> is all zero; so the discarded remainder is
exactly 1/2 lsb; in this case, the IEEE rule about round to
even applies, so the lsb is incremented in this case too.
So you get the right answer despite your best attempts to fail. But one
would expect that some similar such number wouldn't have the ambiguity
that lets the "round-to-even" rule come into play.
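To see the case analysis in code form, here is a toy round-to-nearest-
even on plain integers (an illustrative 120-bit input rounded to 60
bits, not the actual worst-case number):

(defun round-to-bits (x bits)
  "Round the positive integer X (assumed wider than BITS bits) to BITS
significant bits under the IEEE round-to-nearest-even rule; returns
the rounded significand."
  (let* ((shift (- (integer-length x) bits))
         (kept (ash x (- shift)))       ; the BITS bits we keep
         (rem  (ldb (byte shift 0) x))  ; the discarded remainder
         (half (ash 1 (1- shift))))     ; exactly 1/2 lsb
    (cond ((> rem half) (1+ kept))      ; case (1): round up
          ((< rem half) kept)
          ((oddp kept)  (1+ kept))      ; case (2): tie, odd lsb rounds up
          (t kept))))                   ; tie, lsb already even

;; Kept bits end in ...01; the discarded 60 bits are 1000...0 (an exact
;; tie) in one case and 1000...01 (just over half) in the other.
(let* ((kept (1+ (ash 1 59)))
       (tie  (+ (ash kept 60) (ash 1 59))))
  (list (round-to-bits tie 60)
        (round-to-bits (1+ tie) 60)))   ; => the same value both times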
Maybe this got covered in the subsequent discussion, so my apologies if
I'm just rehashing old comments. Ideally, I'd like to program up a
programmable-precision input converter and give some experimental whacks
at it; given that only a few runs would ever be made with it, I'm sure that
Lisp's bignums are fully adequate in performance.
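The heart of such a converter is only a few lines if you lean on Lisp's
exact rationals. A minimal sketch -- assuming the host's FLOAT of a
rational is itself correctly rounded, which a careful experiment would
verify by doing the final rounding by hand as above:

(defun decimal-to-double (digits exponent)
  "The exact value DIGITS * 10^EXPONENT, formed as a rational (EXPT
gives an exact rational for negative exponents too), then coerced to
the nearest double-float."
  (float (* digits (expt 10 exponent)) 1d0))

;; E.g., the 17-digit reading of the largest double:
(decimal-to-double 17976931348623157 292)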
Four years or so ago, I put into Lucid Common Lisp's READer an algorithm
that is the basis of the Clinger approach for "Reading Floating Point
Numbers Accurately" [but I never published anything about it; Will mentions
this in a footnote of his SIGPLAN '90 paper.] This algorithm uses:
-- exact float doubles, for the cases where these can easily be proven
   adequate (a sketch of this easy-case test follows the list);
-- "modest" size fixed-precision integer arithmetic, when float doubles
couldn't be used, but where the fixed-precision can easily be proven;
adequate; ("modest" size means between about 80 bits and 128 bits;)
The "proof" technique involves incremental accumulation of tables
which are _truncations_ to about 120 bits of the powers of 10, both
negative and positive;
-- fallback to (horrors) bignums when all else fails.
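The first of those tiers amounts to just a few lines. A sketch under
garden-variety IEEE-double assumptions (the 2^53 and 10^22 cutoffs are
the standard ones for doubles with correctly rounded * and /, not
anything Lucid-specific):

(defun easy-case-p (digits exponent)
  "True when one correctly rounded multiply or divide suffices: the
digits fit exactly in a double, and so does 10^|EXPONENT| (10^22 is
the largest power of 10 exactly representable in IEEE double)."
  (and (< digits (expt 2 53))
       (<= (abs exponent) 22)))

(defun fast-path (digits exponent)
  "Correctly rounded DIGITS * 10^EXPONENT, valid when EASY-CASE-P holds."
  (let ((d (float digits 1d0))
        (p (float (expt 10 (abs exponent)) 1d0)))  ; exact for |e| <= 22
    (if (minusp exponent) (/ d p) (* d p))))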
I note that when the fallback is taken, frequently the result from the
wide-precision integer case is in fact accurate anyway. To date, I haven't
been able to characterize the frequency or reason for this, but I suspect
there is something more lurking there than a simple coding bug (on the
other hand, I haven't really looked for the reason).
So we have come back to the same old question I couldn't get an answer
to in 1977 -- namely, how can we prove (apart from exhaustion) that
"wide-integer" precision of exactly N bits is adequate for 64-bit
float-double conversions? I look forward to reading the rest of your
comments on this matter.
-- JonL --