Oops -- Re: Towards finding "the worst" input cases

Tim Peters uunet!ksr.com!tim
Thu Mar 28 23:45:13 PST 1991


A good & long-overdue dose of Twin Peaks reminded me of why I said what
I said to begin with.  Retrieve the context:

>  But it seems to me that the "worst case" number you turned up, however
>  interesting in its own right, doesn't quite support the conclusion:
>      Among other things, this shows that even a careful "doubled
>      precision" approach isn't good enough to do correct 17-digit
>      decimal -> IEEE double input, since the crucial "which side of a
>      half ulp is it on?" isn't resolved here until hitting the 59th
>      excess bit.
>  The reason  is that a 60-bit rounding on this 120-bit number gives the
>  "right" answer anyway (for the wrong reason, of course).  The number was
>  something like   ...01.10000000....<maybe-more-bits>, and your reasoning
>  was that you couldn't perform correct rounding without knowing whether
>  the <maybe-more-bits> were all zero or not.

Part of the problem here is that I actually didn't explain my reasoning
at all -- to save time I just made a bald assertion.  The other part of
the problem is that, again to save time <sigh>, I therefore accepted
your divination of my reasoning as my own <grin>.

>  but consider the case  analysis:
>
>    (1) <maybe-more-bits> has some non-zero bits; so the rounding step
>        will increment the lsb [uh, the bit that I showed just to the
>        left of the fractional point] by the ieee rule about the discarded
>        remainder being greater than 1/2 lsb.
>
>    (2) <maybe-more-bits> is all zero; so the discarded remainder is
>        exactly 1/2 lsb; in this case, the ieee rule about round to
>        even applies, so the lsb is incremented in this case too.
>
>  So you get the right answer despite your best attempts to fail.
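
(Both branches are easy to check mechanically.  A quick Python sketch --
mine, names and all, not part of the original argument -- with the kept
bits as an integer q and the discarded excess as an exact fraction
r/den of one ulp:

    def round_nearest_even(q, r, den):
        # q = the kept bits as an integer; r/den = discarded excess, in ulps
        if 2 * r > den or (2 * r == den and q % 2 == 1):
            q += 1       # over 1/2 ulp, or exactly 1/2 ulp with odd lsb
        return q

    # case (1): excess just over 1/2 ulp -> lsb is incremented
    assert round_nearest_even(0b01, 2**59 + 1, 2**60) == 0b10
    # case (2): excess exactly 1/2 ulp, lsb odd -> round-to-even increments too
    assert round_nearest_even(0b01, 2**59, 2**60) == 0b10

Either way the lsb gets bumped, just as claimed.)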

And the full case in question here was

3.2931890100779056e-308 = (rounded to 120 bits)


2^-1022 *
2#1.0111101011100011101000001001111010101101010111011101
A:  1000000000000000000000000000000000000000000000000000000000100100001
                                                              x

where "x" marks the 59th excess bit.  Suppose we're faking an extended
fp precision that only gives us results good to (<=) 1/2 ulp in the 57th
excess bit (and which is better than we can achieve with a "doubled
precision" approach).  Then we'll be left staring at an excess of

B:  100000000000000000000000000000000000000000000000000000000
    <-------------------------- 57 bits -------------------->

(which is within 1/2 ulp of the true excess A), and *all* we know is
that the infinitely precise excess is within 1/2 ulp of this.  In
particular, for all we can tell given this much precision to work with,
the infinitely precise excess may be exactly (for example)

C:  011111111111111111111111111111111111111111111111111111111111
                                                            y

where y marks the 57th excess bit -- this is 1/8 ulp away from B, so is
well within the possible error, and if this *is* the exact excess then
we should not round up after all.  So 57 excess bits (or, of course,
anything less than that) is not enough.
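
(To make that concrete with exact arithmetic -- a little Python check,
mine, reading the excess as a binary fraction in [0, 1):

    from fractions import Fraction

    B = Fraction(1, 2)                       # 0.1000...0: the 57-bit excess
    C = Fraction(1, 2) - Fraction(1, 2**60)  # 0.0111...1 (60 bits): below 1/2
    half_ulp = Fraction(1, 2**58)            # 1/2 ulp in the 57th excess bit

    assert B - C == Fraction(1, 2**60)       # exactly 1/8 ulp apart
    assert B - C <= half_ulp                 # so C fits inside the error band

B alone says "round up"; C says "don't"; both are consistent with
everything we know.)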

But if we get an answer good to within 1/2 ulp wrt a 58th excess bit,
we'll be left staring at an excess of

    1000000000000000000000000000000000000000000000000000000001
                                                             x

(x marks the 58th excess bit) and now we've nailed it:  knowing that the
true excess is within 1/2 ulp of that guarantees that we must round up.
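
(Same kind of check, again mine:

    from fractions import Fraction

    D = Fraction(1, 2) + Fraction(1, 2**58)  # 0.100...01: the 58-bit excess
    half_ulp = Fraction(1, 2**59)            # 1/2 ulp in the 58th excess bit

    # even the bottom of the error band sits strictly above 1/2, so the
    # true excess is over 1/2 no matter what -- we must round up
    assert D - half_ulp > Fraction(1, 2)

No excess within 1/2 ulp of that pattern can fall at or below 1/2.)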

When we know the result to more precision than we'll actually be working
with, it's real easy to get sucked into assuming we'll know more than we
actually will <0.1 grin>.
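
If anyone wants to double-check the 120-bit pattern above, here's a
sketch that reconstructs it with exact integer arithmetic (Python; the
constants are just the example's decimal input rewritten as a fraction):

    num = 32931890100779056  # 3.2931890100779056e-308 == num / den
    den = 10**324
    num *= 2**1022           # scale into [1, 2); the 2^-1022 factor above

    # round the scaled value to 120 bits, nearest/even
    q, r = divmod(num * 2**119, den)
    if 2 * r > den or (2 * r == den and q % 2 == 1):
        q += 1
    s = format(q, 'b')       # 120 bits: 53 significand bits + 67 excess bits
    print(s[:53])            # the "2#1.01111010..." line
    print(s[53:])            # the excess labeled "A"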

dreading-to-find-out-what-i-botched-in-haste-this-time!-ly y'rs  - tim

Tim Peters   Kendall Square Research Corp
tim@ksr.com,  ksr!tim@harvard.harvard.edu


