Towards finding "the worst" input cases

Tim Peters uunet!ksr.com!tim
Sat Mar 30 14:50:01 PST 1991


>  [jon, on "how many bits are enough?"]
>  ...
>  I was referring to just the case of 17 decimal digits for input.

Sorry; wasn't clear (to me) from context.  The general form of the
question has come up here in the past, so no harm (I hope) in addressing
that too.

>  Not only is there no limit on the amount of precision needed if the
>  number of decimal digits is not limited, but I'm sure it is trivial
>  to construct a "sharp" test case to show that

Agree -- it's easy.
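
For concreteness, here's a tiny sketch of that construction, in Python
with exact rational arithmetic (nothing here is from Jon's or my earlier
notes -- it's just the obvious trick).  The halfway point between two
adjacent doubles is a finite decimal, and a nonzero digit pushed
arbitrarily far past it is what decides the rounding, so no fixed
working precision can cover all inputs once the digit count is
unbounded:

    from fractions import Fraction

    # Exact halfway point between 1.0 and the next double up, 1 + 2^-52.
    succ = 1.0 + 2.0 ** -52
    half = Fraction(1) + Fraction(1, 2 ** 53)

    # half is a dyadic rational, so it has an exact finite decimal form;
    # scale it into an integer so it can be written as a decimal string.
    digits = half.numerator * 10 ** 60 // half.denominator  # exact: 60 >= 53
    s_half = "%de-60" % digits

    # The same digits with a 1 tacked on 100 places further out:  the
    # value is now just above the halfway point, so correct rounding
    # must go up -- and nothing short of carrying ~100 more digits of
    # precision can tell you that.
    s_above = "%d%s1e-160" % (digits, "0" * 99)

    # (This leans on Python's own string->float conversion being
    # correctly rounded.)
    print(float(s_half))           # exactly halfway; nearest/even gives 1.0
    print(float(s_above) == succ)  # the faraway digit forces rounding up: True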

>  The crucial observation I made many years ago -- that caused dgh to
>  remark about the "Table Maker's Dilemma" -- is that with fixed-length
>  "inputs", there are only finitely many representable numbers, so a
>  requisite working precision can be found by exhaustion

Understood; that's the whole thrust of what I was doing, except I was
trying to out-think the "exhaustion" part -- I don't think exhaustion is
necessary, since the cases capable of causing trouble have a known form:
the rounding end-cases, which are relative to the specific rounding
discipline in question.  Analysis of that form appears to lead to a
procedure for finding "the worst" cases of that form without an
exhaustive search, although it still needs enough bignum grunt work that
a computer is essential.  Whether the whole thing can be done
analytically is an interesting question, but for my heathen purposes not
*important* (i.e., if I can get "the answer" thru a day of computer time
instead of a day of analysis, that's fine -- actually preferable these
days, but that's just my situation).
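
To make "known form" a bit more concrete (this is a toy illustration,
*not* the procedure from the earlier msgs):  under nearest/even, an
input can cause trouble only if its exact value sits very close to a
halfway point between two adjacent doubles, and exact rational
arithmetic makes that distance easy to measure for any candidate.  That
distance is what a directed search would try to drive toward zero, in
place of brute exhaustion:

    import math
    from fractions import Fraction

    def ulp_of(d):
        # Spacing between adjacent doubles near d (normal numbers only).
        m, e = math.frexp(d)               # d == m * 2**e, 0.5 <= m < 1
        return Fraction(2) ** (e - 53)

    def ulps_from_halfway(dec_string):
        # Distance, in ulps, from the exact value of dec_string to the
        # nearest rounding boundary of its double.  Ignores the edge
        # case where the double is an exact power of two.
        exact = Fraction(dec_string)       # exact rational value
        d = float(dec_string)              # its double (correctly rounded)
        ulp = ulp_of(d)
        lo = Fraction(d) - ulp / 2         # d's rounding interval
        hi = Fraction(d) + ulp / 2
        return float(min(exact - lo, hi - exact) / ulp)

    # The test case spelled out later in this msg is spectacularly close
    # to a boundary; a run-of-the-mill 17-digit input is nowhere near one.
    print(ulps_from_halfway("5.8483921078398283e73"))  # on the order of 1e-19
    print(ulps_from_halfway("3.1415926535897931"))     # a sizable fraction of an ulp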

>  ...
>  which again brings us back to the 1977 Kahan query "Is 66 extra
>  bits enough?"   But alas, I haven't studied your test cases closely
>  enough to verify that 66 *extra* bits are required for some
>  17-digit decimal number.  Has anyone else verified it? (i.e., "read"
>  the proof?)

Two things:

  - I haven't given a proof of anything interesting; there's nothing of
    that sort on the table for anyone to verify (or refute <grin>).  I
    believe the approach is solid, and that given the time to work on it
    I could write a proof, but I can't take that time.  All I'm trying
    to do is to give enough info about the approach so that someone with
    the interest and the time can do it themselves.

  - I'm not claiming that any specific case *needs* 66 extra bits of
    precision, and it's hard to know what such a claim could even mean.
    I.e., it all depends on the specific details of how you're doing
    your base conversion:  how big a power-of-10 table you're using, how
    accurate it is, what kind of arithmetic you're using for the
    intermediate steps, how many operations are involved, ... you need
    to know all the sources of potential error in excruciating detail,
    and a case that *needs* N bits of "working precision" under one
    specific formulation of a base-conversion algorithm may (& I think
    this is obvious) *need* more or fewer bits of working precision
    under someone else's specific formulation.

    The claim (which I have *not* supported here in detail) is much more
    that, however that stuff works out, if you *end up* with an internal
    result that's good to within <= 0.5 ulp wrt a 66th excess bit, that
    *suffices*.  I.e., it's a claim of sufficiency, not of necessity.  I
    believe the procedure could be extended in straightforward ways to
    nail a sharp necessity claim too, but -- hey -- one step at a time
    <grin>.  A rigorous sufficiency claim in the <100 extra bits range
    would already be a significant advance (& in practical terms it
    really doesn't matter to me if I could get away with (say) 63 extra
    bits instead of 66 ... the marginal speedup that would permit is
    relatively insignificant).

    I'll spell out one of the previously-posted test cases to try to
    hint at why I think the extra work of going for a "necessity" claim
    just isn't worth the effort:

    5.8483921078398283e73 is, in binary,

    1.0000100011001110010010011001010100011001110011100011
      0111111111111111111111111111111111111111111111111111111111111110+
      * 2^245

    where the first line contains the 53 bits retained in the IEEE
    double format, and the trailing 0 on the second line (just before
    the "+") is the 64'th bit beyond those 53.  If whatever
    base-conversion algorithm you're using leaves you with a result
    good to within 1/2 ulp wrt the N'th excess bit, where N <= 62,
    you'll be staring at an excess of exactly 1000..., so under
    nearest/even you'll (incorrectly) round up.  Does this get closer
    to the kinds of results you want to see?  If so, this kind of stuff
    (& lots more besides <grin>) is all "there" in the previous msgs,
    but alas it's been left for the unfortunate readers to extract
    themselves; a small sketch that re-derives these bit strings
    follows below.
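
Purely as a cross-check, here's the kind of bignum grunt work I mean, in
a few lines of Python (exact rational arithmetic; any bignum package
would do).  It re-derives the bit strings shown above straight from the
decimal string, so a suspicious reader needn't take my transcription on
faith:

    from fractions import Fraction

    exact = Fraction("5.8483921078398283e73")  # exact value of the string

    # Normalize to exact == sig * 2**exp with 1 <= sig < 2 (this value
    # is large, so halving is the only direction needed).
    sig, exp = exact, 0
    while sig >= 2:
        sig /= 2
        exp += 1

    # Peel off the first 53 + 70 bits of the significand, one at a time.
    bits = []
    for _ in range(53 + 70):
        b = int(sig)                  # next bit of the binary expansion
        bits.append(str(b))
        sig = (sig - b) * 2

    print("exponent:", exp)                 # 245
    print("retained:", "".join(bits[:53]))  # the 53 bits kept in the double
    print("excess  :", "".join(bits[53:]))  # a 0, a long run of 1s, then the
                                            # 0 at the 64th excess position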

>  ...
>  Shouldn't take long to crank each one out in successively larger
>  precisions until an adequate precision is found;

Adequate for that specific formulation of a base-conversion algorithm;
that's interesting for you (& me too <smile>), but I'm trying to stick
to a more general view of this.
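
For what it's worth, here's roughly what that "crank up the precision"
loop looks like under the simple error model used above (an internal
result within 1/2 ulp of the true value, measured at the N'th excess
bit); the helper names are mine and the error model is deliberately
crude, so read it as a sketch, not as anyone's real conversion code.  N
is adequate for a given input exactly when that error band can no
longer straddle the nearest/even boundary:

    from fractions import Fraction

    def excess_of(dec_string, retained=53):
        # Exact value of the bits beyond the first `retained`, as a
        # Fraction in [0, 1).  Normal (non-subnormal) inputs only.
        sig = Fraction(dec_string)
        while sig >= 2:                    # normalize to 1 <= sig < 2
            sig /= 2
        while sig < 1:
            sig *= 2
        scaled = sig * 2 ** (retained - 1)
        return scaled - int(scaled)        # drop the retained bits

    def adequate_extra_bits(dec_string, limit=128):
        # Smallest N such that *any* internal result within 1/2 ulp of
        # the true value at the N'th excess bit still rounds
        # (nearest/even) to the right double.
        gap = abs(excess_of(dec_string) - Fraction(1, 2))
        for n in range(1, limit + 1):
            if gap > Fraction(1, 2 ** (n + 1)):  # band clear of boundary
                return n
        return None                        # nothing up to `limit` is provably safe

    print(adequate_extra_bits("5.8483921078398283e73"))  # 63, given the bits above
    print(adequate_extra_bits("3.1415926535897931"))     # tiny for easy inputs

Read this way, the sufficiency claim above amounts to saying that for
17-significant-digit inputs the loop never has to go past N = 66.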

>  in fact, it shouldn't take nearly as much computer time as it takes
>  me to think and worry about it

Exactly so.  Saving time (my own) drove every step of this --
paraphrasing dgh, a little analysis goes a long way, but I hit the point
of diminishing returns on "head work" real fast.  Of course, someone
with more talent than I may make quicker progress with it ...

have-fun!-ly y'rs  - tim


Tim Peters   Kendall Square Research Corp
tim@ksr.com,  ksr!tim@harvard.harvard.edu


