Towards finding "the worst" input cases

Jon L White uunet!lucid.com!jonl%kuwait
Fri Mar 29 22:31:48 PST 1991


re: [JonL]>  So we have come back to the same old question I couldn't get
          >  an answer to in 1977 -- namely, how can we prove (apart from
          >  exhaustion) that "wide-integer" precision of exactly N bits is
          >  adequate for 64-bit float-double conversions?

    [Tim] Well, I think it's clear we can't, because N bits aren't enough
          regardless of how big N is ... if you'll put an upper bound on the
          number of significant input digits you'll pay attention to, the
          question is answerable (in theory), else since the possible decimal
          inputs are dense in the reals the user can get arbitrarily close to
          any endcase the rounding discipline has trouble with (hence no N
          is "big enough").

I was referring to just the case of 17 decimal digits for input.  Not only
is there no limit on the amount of precision needed if the number of
decimal digits is not limited, but I'm sure it is trivial to construct
a "sharp" test case to show that [i.e., in reference to your phrase
"the user can get arbitrarily close to any endcase ...", you would ask
for a simple formula to map a precision size N into a decimal integer
D that sits too close to the boundary of the next representable float
for N bits to decide the rounding; you don't need "tight" bounds; just
one such D will do.  Maybe you need D and E also, where the decimal
representation is   DDDDDD x 10^EEEEE].
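
To make that concrete -- this is just a sketch of mine in Python over exact
rationals, not the formula asked for, and the function name, digit counts,
and sample point are invented for illustration -- here is one way to
manufacture decimal inputs that hug the halfway point between two adjacent
doubles as tightly as you like, simply by spending more digits on them:

    from fractions import Fraction
    import math                        # math.nextafter needs Python 3.9+

    def near_midpoint_decimal(x, digits):
        # Build a 'digits'-digit decimal string that sits just above the
        # exact halfway point between the double x and the next double up.
        succ = math.nextafter(x, math.inf)         # adjacent double above x
        mid = (Fraction(x) + Fraction(succ)) / 2   # exact halfway point
        exp = digits - 1 - math.floor(math.log10(mid))
        d = math.floor(mid * Fraction(10) ** exp) + 1
        return f"{d}e{-exp}", d * Fraction(10) ** -exp

    ulp = Fraction(math.nextafter(1.0, math.inf)) - 1
    mid = 1 + ulp / 2
    for n in (20, 30, 40):
        s, v = near_midpoint_decimal(1.0, n)
        # roughly how many bits past a double's 53 are needed to see which
        # side of the rounding boundary this input falls on
        print(n, s, round(math.log2(ulp / (v - mid))))

Each batch of ten extra digits buys roughly another 33 bits of "closeness",
which is exactly why no fixed N survives unbounded-length inputs.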

The crucial observation I made many years ago -- that caused dgh to
remark about the "Table Maker's Dilemma" -- is that with fixed-length
"inputs", there are only finitely many representable numbers, so a
requisite working precision can be found by exhaustion (and indeed for
numbers in the arena of "double" precision, it is very exhausting).
In fact, earlier in your reply (i.e., the same msg as quoted from
above) you said:

    This shouldn't be sneezed at too quickly:  the worst case for 17-digit-
    or-less input I found (and it may in fact-- or may not <grin> --be *the*
    worst case) "only" required another 66 bits of precision.  If that's so,
    the Clinger- and Gay-like escapes to much larger integer precisions
    aren't really necessary (for 17-digit-or-less decimal -> IEEE double
    input).

which again brings us back to the 1977 Kahan query "Is 66 extra
bits enough?"   But alas, I haven't studied your test cases closely 
enough to verify that 66 *extra* bits are required for some
17-digit decimal number.  Has anyone else verified it? (i.e., "read"
the proof?)
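
Not having done that reading yet, I can at least say what the measurement
looks like; here is a rough sketch of mine (Python, exact rationals; the
helper name and the sample input are placeholders, not one of the proffered
test cases) that reports how many bits past a double's 53 are needed to see
which side of the rounding boundary a given decimal string falls on:

    from fractions import Fraction
    import math                        # math.nextafter needs Python 3.9+

    def extra_bits_needed(decimal_string):
        v = Fraction(decimal_string)               # exact value of the input
        lo = float(decimal_string)                 # nearest double
        hi = math.nextafter(lo, math.inf if Fraction(lo) < v else -math.inf)
        mid = (Fraction(lo) + Fraction(hi)) / 2    # the troublesome boundary
        ulp = abs(Fraction(hi) - Fraction(lo))
        dist = abs(v - mid)
        if dist == 0:
            return math.inf                        # input is exactly a tie
        # bits beyond the 53 of the double needed to resolve the rounding
        return math.log2(ulp / dist)

    # placeholder input (not one of the proffered cases); a genuine worst
    # case for 17-digit input should report something in the vicinity of 66
    print(extra_bits_needed("1.0000000000000001110"))

A case really needing 66 extra bits is one where the halfway point's decimal
expansion has a run of roughly twenty 0's or 9's starting right after the
17th significant digit.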


The reason I had wanted to spend some time coding up the "programmable
precision base converter" was simply to run your proffered test cases
"to exhaustion".  It shouldn't take long to crank each one out in successively
larger precisions until an adequate precision is found; in fact, it
shouldn't take nearly as much computer time as it takes me to think and
worry about it (and read all the mail about it!  but how we poor mortals
are wont to procrastinate on so many things . . .)
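
In the meantime, here is roughly the shape of that loop as I picture it (my
own sketch in Python over exact rationals, not the programmable-precision
converter itself; the names, the step size, and the sample input are
invented): carry the scaled quotient to N working bits, see whether those
bits already force the 53-bit rounding, and if not, bump N and go around
again:

    from fractions import Fraction
    import math

    def try_precision(v, nbits, target=53):
        # Round the positive rational v to a target-bit significand using an
        # nbits-bit truncated scaled quotient.  Returns (significand,
        # exponent, decided); decided is False when the truncation leaves
        # the rounding ambiguous, i.e. nbits was not enough for this input.
        e = v.numerator.bit_length() - v.denominator.bit_length()
        if Fraction(2) ** e <= v:                  # now 2^(e-1) <= v < 2^e
            e += 1
        q = math.floor(v * Fraction(2) ** (nbits - e))  # nbits-bit truncation
        shift = nbits - target
        head, tail = q >> shift, q & ((1 << shift) - 1)
        half = 1 << (shift - 1)
        if tail == half:               # true value may straddle the boundary
            return head, e - target, False
        sig = head + (1 if tail > half else 0)   # carry to 2^53 not handled
        return sig, e - target, True

    def run_to_exhaustion(decimal_string, step=8):
        # Retry with 'step' more working bits until the rounding is forced.
        # (An input that is exactly a tie would loop forever; a real
        # converter would detect exactness and break the tie to even.)
        v = Fraction(decimal_string)
        nbits = 53 + step
        while True:
            sig, exp, decided = try_precision(v, nbits)
            if decided:
                return sig, exp, nbits - 53        # extra bits that sufficed
            nbits += step

    # an easy input, just to show the loop; the interesting runs are the
    # proffered worst cases
    print(run_to_exhaustion("3.14159265358979312"))

Each trial is only a handful of bignum operations, so the whole list of
cases should indeed take far less computer time than the mail about them.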


Cheers, 


-- JonL --


