correctly-rounded transcendentals

uunet!research.att.com!sesv
Tue Mar 5 13:26:36 PST 1991


Responding to some of David Hough's comments.


>many representable numbers.  For the common transcendentals, all the
>f(x) except the obvious ones are not rational, hence not on any
>boundary between representable numbers.  For each representable number,
>there will be some high enough precision in which you can conclude that
>f(x) is definitively on one side or another of a boundary.  Thus there
>is a worst case, and an algorithm that computed in that high enough
>precision would suffice.  That takes care of the first form of the
>incorrect inference.

But what is ``high enough precision''?  Hypothetically,
exhaustive testing could show that correctly
rounded single precision IEEE functions of a
single argument can be done using at most double
precision for internal calculations.  Exhaustive testing
is not practical for:
	- double precision functions
	- single precision functions of two arguments.
Can anyone prove that even quad precision suffices here?  Lacking
the ability to prove a worst case, a math library implementor has to
include a fallback to arbitrary precision calculations.
Note too that there is a big difference between correctly rounded
and ``nearly always correctly rounded.''
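
To make ``exhaustive testing'' concrete, here is a rough sketch (mine,
not a real test harness) of the single-argument, single precision case,
using exp() as the example and an arbitrary-precision library (GNU MPFR,
purely for concreteness) as the reference.  Note that the 256-bit
working precision of the reference is itself taken on faith, which is
exactly the difficulty described above.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>
#include <mpfr.h>          /* link with -lmpfr -lgmp */

int main(void)
{
    mpfr_t r;
    mpfr_init2(r, 256);    /* reference working precision: 256 bits */
    unsigned long mismatches = 0;

    /* walk every single precision bit pattern */
    for (uint64_t bits = 0; bits <= 0xFFFFFFFFu; bits++) {
        uint32_t b = (uint32_t)bits;
        float x;
        memcpy(&x, &b, sizeof x);
        if (!isfinite(x))
            continue;

        /* candidate: compute in double, round the result to single */
        float candidate = (float)exp((double)x);

        /* reference: compute to 256 bits, round once to single */
        mpfr_set_flt(r, x, MPFR_RNDN);
        mpfr_exp(r, r, MPFR_RNDN);
        float reference = mpfr_get_flt(r, MPFR_RNDN);

        if (candidate != reference)
            mismatches++;
    }
    printf("arguments where double-internal exp misrounds: %lu\n",
           mismatches);
    mpfr_clear(r);
    return 0;
}

At a microsecond or so per argument this is roughly an hour of machine
time per function and rounding mode; the analogous double precision test
needs 2**64 arguments, which is why it is not practical.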

(By the way, pow (x**y) has an uncomfortably large
 number of results that lie exactly on a rounding boundary.)
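
A toy illustration, in single precision and with examples of my own
choosing: the true value of pow can land exactly on a representable
number, or exactly halfway between two of them, and in either case an
approximation computed in any finite working precision just hovers on
the boundary without settling which way to round (for the directed
rounding modes in the first case, for round to nearest in the second).

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* 3**5 = 243: the true value lies exactly on a representable
       number, a "border" for the directed rounding modes */
    float a = (float)pow(3.0, 5.0);

    /* 4097**2 = 16785409: single precision holds only even integers
       in [2**24, 2**25), so the true value lies exactly halfway
       between 16785408 and 16785410, a "border" for round to nearest */
    float b = (float)pow(4097.0, 2.0);

    printf("pow(3, 5)    rounded to single: %.10g\n", a);
    printf("pow(4097, 2) rounded to single: %.10g\n", b);
    return 0;
}

Unlike the values of exp or sin (irrational except at the obvious
arguments), such exact cases cannot be resolved by recomputing in ever
higher precision; they have to be detected outright.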

>For instance, correctly-rounded single-precision elementary
>transcendental functions can be computed relatively cheaply in average
>cost by using a double-precision algorithm that provides an error bound
>that tells you whether the single-precision result is correctly
>rounded to single precision.  In the rare cases that fails, you can use
>integer arithmetic or probably even table lookup to resolve the issue.
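
For reference, my reading of the test being suggested, as a sketch only;
exp() is just an example here, and the error bound and the fallback
routine are placeholders rather than a description of any real library:

#include <math.h>

/* assumed relative error bound of the double precision approximation;
   a real implementation would have a proven bound for its algorithm */
#define REL_ERR 1e-16

/* hypothetical slow path: recompute with enough extra precision
   (ultimately arbitrary precision) to guarantee correct rounding */
extern float slow_correctly_rounded_expf(float x);

float fast_expf(float x)
{
    double y   = exp((double)x);      /* double precision approximation */
    double err = fabs(y) * REL_ERR;   /* bound on its absolute error    */

    float lo = (float)(y - err);      /* round both ends of the         */
    float hi = (float)(y + err);      /* uncertainty interval to single */

    if (lo == hi)     /* the whole interval rounds to one float, so the */
        return lo;    /* single precision result is correctly rounded   */

    /* rare ambiguous case: the interval straddles a rounding boundary */
    return slow_correctly_rounded_expf(x);
}

When the bound is tight, the slow path is needed only when the true
value falls within about err of a rounding boundary, which is why the
average cost stays close to that of the fast path.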

Does any conventional commercial system (i.e., one not providing
interval arithmetic) do this?  Based on some of the IBM papers,
e.g.,

	P. W. Markstein, ``Computation of elementary functions
	on the IBM RISC System/6000 processor,''
	IBM J. Res. Develop., vol. 34, no. 1, January 1990,

I expected the RS6000 to have at least some correctly rounded
transcendental functions.  When I looked last year, the most
accurate function was exp(), with an observed error of
0.500007 ulps.  Very accurate, but not correctly rounded.
[It is possible that I missed a compile or run-time flag.
If so, please let me know.]
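
For what it is worth, figures like 0.500007 ulps come from comparisons
of roughly this shape (a sketch of mine; the sampling, the argument
range, and the use of long double as the reference are simplifications,
and a serious test uses an arbitrary-precision reference and far more
arguments):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* error of `computed' relative to `reference', in units in the last
   place (ulps) of the double precision result */
static double ulp_error(double computed, long double reference)
{
    double r   = (double)reference;
    double ulp = nextafter(r, HUGE_VAL) - r;
    return (double)(fabsl((long double)computed - reference) / ulp);
}

int main(void)
{
    double worst = 0.0;
    for (int i = 0; i < 1000000; i++) {
        double x = -700.0 + 1400.0 * ((double)rand() / RAND_MAX);
        double e = ulp_error(exp(x), expl((long double)x));
        if (e > worst)
            worst = e;
    }
    printf("max observed error of exp(): %f ulps\n", worst);
    return 0;
}

An observed maximum strictly greater than 0.5 ulps means at least one
argument was misrounded; a clean result over a sample proves nothing
about the arguments that were not tested.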


Here's a ``what-if'' to consider.  Suppose a computer company
provided a math library which implemented correctly rounded
elementary functions with the following characteristics:
	- For 999,999 out of 1,000,000 arguments, the
	  correctly rounded value is returned in X microseconds.
	- For 1 out of 1,000,000 arguments, higher precision
	  internal computation must be used, so the
	  correctly rounded value takes 100X microseconds.
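
(Averaged over such a mix of arguments, the cost is essentially
unchanged:

	(999,999 * X + 1 * 100X) / 1,000,000  =  1.000099 X microseconds

What changes is the worst case, and the variability.)
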
Will customers accept this behavior?  I believe that Kahan
once said, roughly, that consistent performance was important.
Do customers feel this too?  If so, how do the systems which
implement IEEE denorms in software fare?  (Pretty well, in my opinion.)
Has anyone been through benchmark wars where an unfortunate choice
of arguments caused one vendor to look much worse than it deserved?

Steve Sommars


