what is pi anyway - as recently posted to comp.arch
David G. Hough (dgh) on validgh
Sat Feb 23 16:24:30 PST 1991
There's been a recent thread of argument in comp.arch about how
arguments to trigonometric functions should be reduced to a range
such as +-pi/4.
Mathematically speaking, you want to discard the largest possible
multiple of 2*pi to get the argument into the range +-pi, then subtract
the largest possible multiple of pi/2 to get into the range +-pi/4,
retaining the integer multiple that you subtracted, which will be in
the range -2..+2 and will probably become the argument of a switch
statement. So you see Herman Rubin is correct in suggesting that an
instruction that returns a floating-point remainder and an integer
quotient makes sense; in fact, that's how the MC6888x fmod and frem
instructions work.
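For the curious, here is a minimal sketch of that scheme in C, using
remquo(), the C99 library analogue of such a remainder-plus-quotient
instruction; my_sin is a hypothetical name, and note that remquo()
reduces against the 53-bit machine value of pi/2 rather than the
mathematical pi/2, which is precisely the issue taken up below:

    #include <math.h>

    double my_sin(double x)
    {
        int n;
        /* r = x minus the nearest integer multiple of pi/2, so r lies
           in roughly [-pi/4, +pi/4]; low quotient bits land in n */
        double r = remquo(x, M_PI / 2, &n);
        switch (n & 3) {        /* quadrant switch, as described above */
        case 0:  return  sin(r);
        case 1:  return  cos(r);
        case 2:  return -sin(r);
        default: return -cos(r);
        }
    }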
In practice we have a problem: pi is not exactly representable.
The language standards for Fortran and C aren't explicit about what
they have in mind here, but the overall implication is that elementary
transcendental functions should be the rounded values of the
corresponding mathematical functions. This is a tall order, since
those functions are transcendental; "almost correctly rounded" might be
a more reasonable goal, but even that is expensive: it implies that
argument reduction must be carried out with extra precision even for
small arguments; see the March 1981 IEEE Computer for details.
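A small C program makes the point concrete (M_PI is the usual double
approximation to pi; the value printed reflects IEEE double and a good
libm, not any particular vendor):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* M_PI is only the double nearest pi; the true pi exceeds it
           by about 1.2e-16.  A nearly correctly rounded sin() must
           resolve that difference, which takes extra-precise
           reduction even though the argument itself is small. */
        printf("%.17g\n", sin(M_PI)); /* about 1.2246467991473532e-16 */
        return 0;
    }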
Following Cody and Waite, typical Unix implementations performed
trigonometric argument reduction with a fixed higher-precision
approximation to pi, and then punted when the magnitude of the argument
was large enough that the simple algorithm broke down. The breakdown
is an artifact of a simple algorithm and not intrinsic in any sense,
but it was enshrined in the System V definition as partial or total
loss of significance, and rationalized on the grounds that any
uncertainty in the argument led to a large relative uncertainty in the
result. While this might be true, it was also true of smaller
arguments, and also true of log(x) for x near 1, or even of x-y when x
and y are close. Generally speaking, assessing the effects of
uncertainty in the input data is a complicated job that must fall on
the user, because only the user knows what changes in the answer are
important. Interval arithmetic can aid in automating this analysis, or
can just render matters murkier.
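For reference, here is a sketch of the Cody and Waite style of
reduction for arguments of modest magnitude; the two constants are the
familiar fdlibm splitting of pi/2, and reduce() is a hypothetical
helper:

    #include <math.h>

    /* pi/2 carried as C1 + C2; C1 has enough trailing zero bits that
       n*C1 is exact for small n.  Once |n| is large enough that n*C1
       rounds, the scheme breaks down -- the artifact described above. */
    static const double C1 = 1.57079632673412561417e+00; /* pi/2 high */
    static const double C2 = 6.07710050650619224932e-11; /* pi/2 next */

    double reduce(double x, int *quadrant)
    {
        double n = nearbyint(x * (2.0 / M_PI)); /* nearest mult of pi/2 */
        *quadrant = (int)n & 3;
        return (x - n * C1) - n * C2; /* x - n*(C1+C2), to ~85 bits */
    }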
SunOS 4.x has a little-documented mode that governs trigonometric
argument reduction; under some circumstances you can select argument
reduction with 53-bit pi, 66-bit pi, or "infinite" pi. That means:
53-bit pi means the double precision approximation to pi. The libm
trigonometric functions trig(x) will actually compute
trig((pi/pi.53)*x). If you set

    P = 3.141592653589793...d0
    Y = SIN(2 * P)

then Y will be zero.
Argument reduction is relatively cheap; you can use a normal
floating-point remainder instruction if you have one.
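A rough C illustration of this mode's behavior (SunOS applied the
reduction inside libm, of course, not at the call site):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double P = 3.141592653589793; /* pi.53, the double nearest pi */
        /* 53-bit-pi reduction is one remainder against the machine pi;
           2*P is an exact multiple of P, so the reduced argument, and
           hence the sine, comes out exactly zero */
        printf("%g\n", sin(remainder(2 * P, P))); /* prints 0 */
        /* reduction against a more precise pi instead sees 2*P as
           2*pi minus about 2.4e-16, so sin(2*P) itself is not zero */
        printf("%g\n", sin(2 * P));               /* about -2.4e-16 */
        return 0;
    }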
66-bit pi means the 66-bit approximation to pi used in the MC6888x
hardware. You compute trig((pi/pi.66)*x). The purpose of this, the
default, was to ensure that the same argument reduction occurred
whether or not the MC6888x hardware trig instructions were used
(as on Sun-4, or with -fsoft on Sun-3). Argument reduction is
relatively inexpensive for modest magnitudes, using Cody and Waite
algorithms, and more complicated for larger magnitudes; SIN(2 * P)
will not be zero.
"infinite" precision pi means that as precise an approximation to pi as
needed is exploited in order that the rounded reduced argument be
within one unit of the mathematically precise reduced argument. This
is expensive, too, even for modest magnitudes, but presumably
corresponds best to the intent of the language standards. VAX
libraries have done this for some time, by default, since the extra
expense is modest when the maximum magnitude is small.
Future libraries for Suns will not have this mode. Modes, in general,
are not good for program portability. Furthermore, maintaining
compatibility with the MC6888x is no longer meaningful; nor is 66-bit
argument reduction a good default for a floating-point type with 113
significant bits.
So I decided to take the language standards at their word and compute
the standardized trigonometric functions with "infinite" precision pi.
In addition, other functions provide intrinsically fast argument
reduction (see the sketch after this list):
    trigpi(x)  computes trig(pi*x) for precise pi
    trigp(x)   computes trig(p*x) for p = pi rounded to the precision
               of x, e.g. pi.53 for double precision
    trigd(x)   computes trig((pi/180)*x), for arguments x in degrees
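To see why the trigpi functions are intrinsically fast, note that the
period of sin(pi*x) in x is the exact integer 2, and remainders modulo
small integers incur no rounding error at all in binary floating
point, so no approximation to pi enters the reduction. A sketch in C
(my_sinpi is a hypothetical name; a production version would reduce
further, to eighths of a period):

    #include <math.h>

    double my_sinpi(double x)
    {
        double r = remainder(x, 2.0); /* exact: r in [-1,1], x-r even */
        return sin(M_PI * r);         /* only this final scaling rounds */
    }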
The trigd functions are part of VAX VMS Fortran and hence a
de facto standard; the trigpi functions correspond to standard usage in
actual scientific coding; and the trigp functions are for applications
that find it expedient to have e.g. SIN(2 * P) be zero. Of course
trigp and trigpi are not part of any language standard (except that
Ada apparently permits specifying pi as a second argument), but
perhaps that will be remedied in time if other vendors provide
identical interfaces.
One might well object that this is a lot of trouble to handle a case
that rarely arises - large arguments to trigonometric functions. The
issue is how much a programmer is obliged to remember about common math
library functions. Correct "infinite" precision argument reduction
makes it possible to provide trigonometric functions that are almost
correctly rounded, even for large arguments, but they are expensive for
moderate to large arguments. Correct argument reduction using pi rounded to the
precision of the argument is much less expensive for large arguments,
but means that the computed results have to be interpreted as almost
correctly rounded results of slightly perturbed arguments. As somebody
else mentioned, this is the difference between the results of
successful forward error analysis and successful backward error
analysis. The latter is adequate for many but not all purposes.
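The classic stress case for this trade-off: 1e22 is exactly
representable in double, and with "infinite" precision pi its sine is
-0.8522008497671888... A library that reduces with a fixed 53-bit or
66-bit pi returns some other value in [-1,1] instead, defensible only
under the backward-error reading:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* -0.8522008497671888 under "infinite"-pi reduction;
           something else entirely under fixed-precision-pi reduction */
        printf("%.17g\n", sin(1.0e22));
        return 0;
    }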