Function philosophy (was Re: fp equality)
Tim Peters
uunet!ksr.com!tim
Tue Nov 30 20:58:16 PST 1993
> > [david hough]
> > ... [a] standard library [function], not knowing all the uses to
> > which it will be put, must assume the arguments are exact and return
> > a result as close to the exact result as the set of representable
> > numbers and economics allow.
> [dik]
> ... [taking the "a float is a representative of an interval" view] ...
> I.e. if you call the cosine function with argument (single precision)
> of 1.0, any result is valid that is consistent with an argument in the
> range 1.0-2^-24 to 1.0+2^-24. In this case it does not matter very
> much, but for larger arguments it makes a lot of difference.
So, e.g., cosine(1.0e30) can return anything whatsoever in [-1.0,1.0]?
Ditto sine(1.0e30), and tangent(1.0e30) can return anything at all? That
follows from the viewpoint you're taking. And if they're allowed to
return absolute nonsense, I hope nobody complains if, e.g., sine(1.0e30)
== sine(-1.0e30) == cosine(1.0e30) == tangent(1.0e30) == 0.125. Or are
they constrained to return mutually consistent nonsense (so that, e.g.,
sin(x)/cos(x) ~= tan(x), sin(-x) ~= -sin(x), etc: where does it end?)?
The problem I have with the argument-is-really-an-interval-representative
view is that, once you buy it, there's no compelling basis for saying a
silly behavior _is_ silly. Then portability goes out the window, as one
nasty side-effect.
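To put a number on how silly it gets: here's a back-of-the-envelope sketch in Python (my arithmetic, not anyone's library code) showing that the single-precision "interval" around 1.0e30 spans an astronomical number of cosine periods, so under that view literally any value in [-1.0, 1.0] is "consistent":

```python
import math

# Under the interval view, single-precision 1.0e30 carries a relative
# uncertainty of about 2**-24, so the interval has absolute half-width
# roughly 1e30 * 2**-24 -- vastly more than one period of cosine.
x = 1.0e30
half_width = x * 2.0**-24              # ~6e22
periods_covered = half_width / (2.0 * math.pi)

print(periods_covered)                 # ~9.5e21 full periods inside the interval
```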
I recently reimplemented my employer's trig functions, and treated args >
~2048*pi as if they were infinity (i.e., raised invalid and returned a
qNaN), rather than return a senseless result (reduction for arguments
less than that bound was done "as if" to infinite precision). Turns out
you can't even get through the Fortran FCVS tests with a limit that
small (although I own one pocket calculator that refuses to do a trig
function for any argument > 20*pi -- fine by me!).
So I extended the domain to all finites, doing "as if infinite precision"
reduction throughout. I must say I'm very happy with that, and very
unhappy with all the alternatives; there's a ~100-cycle penalty for using
arguments > 2048*pi, but I figure nobody who knows what they're doing
uses such large arguments anyway (&, no, the FCVS tests often don't know
what they're doing). And people who don't know what they're doing may
take some false-but-welcome comfort from the fact that our trig functions
and Sun's are never more than 1 ulp different for any argument now
(despite that we do both reduction & approximation in very different
ways).
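For the curious, "as if infinite precision" reduction can be sketched in a few lines of Python using exact rational arithmetic (this is only an illustration of the idea, not our implementation -- a real libm would use a Payne-Hanek-style scheme with a stored high-precision 2/pi instead of rationals):

```python
from fractions import Fraction
import math

# Fifty decimal digits of pi, held exactly as a rational; enough headroom
# in this sketch for double-precision arguments out to ~1e30.
PI = Fraction("3.14159265358979323846264338327950288419716939937510")

def reduce_mod_2pi(x):
    """Return x reduced into [0, 2*pi), computed in exact rational
    arithmetic and rounded to a double only once, at the very end."""
    q = Fraction(x) / (2 * PI)
    f = q - int(q)                 # fractional part; int() truncates toward zero
    if f < 0:
        f += 1
    return float(f * 2 * PI)

r = reduce_mod_2pi(1.0e30)
print(r, math.cos(r))              # a good libm's cos(1.0e30) should agree
```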
> I know of one system where (on good mathematical grounds)
This is a meaning of "good" with which I was previously unacquainted <wink>.
> the range reduction was done by multiplication by the inverse of pi.
> Not the properly rounded value, but one tweaked a bit, such that sin(k *
> pi) would deliver 0.0 for every integer k and machine-precision pi.
> Some hand-held calculators have something similar. And I have heard
> numerical analysts complain that sin(1000 * pi) was not zero on their
> system.
Name 37 <grin>. Was it also fiddled so that sin(pi/6) returned exactly
0.5? So that atan(1) was the best machine approximation to pi/4, or so
that sin(atan(1)*2) was exactly 1? I recently saw a Fortran benchmark in
which the author expected SIGN, in
      DO I = 1, A_VERY_LARGE_INTEGER
          SIGN = COS(I*PI)
          ...
to cycle between -1.0 and 1.0 (exactly) on successive iterations.
Etc. I've heard complaints about these kinds of "failures" too, but at
some point people using conventional floating-point arithmetic just have
to accept that it's not what they wish it was. We can't hide it, either,
and the grouchier I get in old age, the less willing I am to waste my
time trying.
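Both behaviors are easy to reproduce. Here's a toy Python sketch (mine, not any of the systems mentioned) showing the ordinary library result alongside a divide-by-machine-pi reduction of the style dik describes, which delivers (nearly) zero for multiples of machine pi by construction:

```python
import math

def tweaked_sin(x):
    # Reduce by dividing by machine pi (equivalently, multiplying by its
    # inverse), so arguments that are exact multiples of math.pi land on
    # (nearly) integral k with a tiny residual f.  A toy illustration of
    # the scheme described above, not any real system's code.
    r = x / math.pi
    k = round(r)
    f = r - k                          # |f| <= 0.5, so sin(x) = (-1)**k * sin(pi*f)
    return (-1.0) ** (k % 2) * math.sin(math.pi * f)

# The ordinary library sine of 1000*pi is small but *not* zero, because
# 1000*math.pi is not an exact multiple of the true pi ...
print(math.sin(1000 * math.pi))        # tiny nonzero value, roughly -3.2e-13
# ... while the tweaked reduction gives (nearly) 0.0 by construction.
print(tweaked_sin(1000 * math.pi))
```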
btw-agreeing-that-fp-equality-is-useful-but-it's-no-fun-
agreeing<smile>-ly y'rs - tim
Tim Peters   tim@ksr.com
not speaking for Kendall Square Research Corp
More information about the Numeric-interest mailing list