(SC22WG11.419) pi and LIA

Ed Barkmeyer uunet!cme.nist.gov!edbark
Tue Jan 17 08:01:45 PST 1995


I think there is much merit in David Hough's position on the accuracy of
trig functions, but I will leave the standardization rules to those of you 
with more experience in the expectations and needs of numeric programmers.

I would like to comment on three issues David raises:

1.  Trig functions with preselected bases other than "radians".
David says:
> It is also appropriate for a new arithmetic standard to offer the end user
> an easy way around the quandary expressed by Wichmann:  namely by defining,
> in addition to the classical trig functions 
> 
> 	trig(x)			where x is interpreted in radians
> 
> also
> 
> 	trigd(x)		where x is interpreted in degrees
> 
> and
> 
> 	trigpi(x)		where x is interpreted in semicircles so
> 				trigpi(x) = trig( pi * x) for exact pi
> 
> The "trigd" functions are a common Fortran extension, and the trigpi functions
> less so, although they have been provided in Sun's libraries for C and
> Fortran since 1988.   They are just the TRIGFF functions prescribed in 
> LIA-2, with u fixed at 360 and 2 respectively; 
> I think they cover at least 99% of
> the need represented by TRIGFF without introducing the coding and testing
> burden required to support TRIGFF functions with two arbitrary arguments -
> two functions of one argument are far easier to test than one function of
> two arguments.

I agree with the latter part of this.  The two base-specific functions can
be coded with more efficiency, more reliability, and, in the case of 
<trig>pi(x), higher guaranteed accuracy.   That makes these functions more
desirable than TRIGFF and suggests that they be recommended for use OVER
TRIGFF where appropriate!
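
To illustrate why the <trig>pi form can carry the stronger guarantee: its
argument reduction is modulo 2, which is exact in binary floating point, so
no approximation of pi enters until the argument has been folded into
[-1,1].  A minimal C sketch only (not a production sinpi -- the last line
leans on the library sin and an inexact stored pi, where a real
implementation would evaluate a polynomial in the reduced argument
directly):

    #include <math.h>

    /* illustrative only; Sun's libraries provide a real sinpi() */
    double my_sinpi(double x)
    {
        double r = fmod(x, 2.0);      /* exact reduction into (-2, 2)    */
        if (r > 1.0)  r -= 2.0;       /* fold into [-1, 1]; still exact  */
        if (r < -1.0) r += 2.0;
        /* only now does an inexact pi enter, on an argument <= 1 */
        return sin(r * M_PI);         /* M_PI from <math.h> (POSIX)      */
    }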

On the other hand, I suggest that a "good" implementation of TRIGFF might
TEST FOR the special cases u=1, u=2, and u=360 and route them to the
"high-quality" base-specific implementations.  (I realize that this consumes
a few extra cycles, but I thought that we had, as a community, matured beyond
the IBM 704-era tactic of saving every possible cycle.)  In the 1960s, good
Fortran libraries for Real**Real tested whether the actual value of R was
integral before falling back to exp(R*log(X)), with its concomitant time and
accuracy loss.  I think the same logic might apply here.
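
A minimal sketch of that routing, in C, assuming (as I read the quoted
description) that TRIGFF's unit argument u counts angle units per full
circle, so u=360 is degrees and u=2 is semicircles; the names sind(),
sinpi(), and sin_u(), and the radian-scaling fallback, are mine, not LIA-2's:

    #include <math.h>

    double sind(double x);      /* assumed: sine of x degrees */
    double sinpi(double x);     /* assumed: sine of pi*x      */

    double sin_u(double x, double u)
    {
        if (u == 360.0)                     /* degrees           */
            return sind(x);
        if (u == 2.0)                       /* semicircles       */
            return sinpi(x);
        if (u == 1.0)                       /* whole revolutions */
            return sinpi(2.0 * x);
        /* general case: scale to radians; the inexact 2*pi/u factor
           costs accuracy that the special cases above avoid */
        return sin(x * (2.0 * M_PI / u));
    }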

I really can't say whether the 99% is right.  I haven't done that much
numerical computation in the last 20 years, but I do recall that in my
time spent in a blue uniform we did a lot of radar and gunnery
position calculations in "mils", i.e. u=1000.

2.  Floating-point and spreadsheets.
David says:
> A different problem confronts us now.    Probably the bulk of floating-point
> operations are performed in PC spreadsheet programs, for although each such PC
> performs few, such PC's are many in number.   Most technical floating-point
> operations are probably performed on workstations and workstation-derived
> servers, which are slower but far more numerous than supercomputers.   But the
> PC's, workstations, and most new supercomputer designs all use variations of
> IEEE 754 binary floating-point arithmetic.  So the job of porting mathematical
> software can be considered done for all time.
> 
> Well not quite.   Different choices permitted within IEEE 754, different
> expression evaluation paradigms, and different libraries - not to mention
> gross and subtle bugs in optimizing compilers, not to mention the 
> underlying hardware - cause identical numerical
> results on different systems to still be the exception rather than the rule,
> and to the justifiable complaint of the typical floating-point user - 
> now more likely an accountant or technician than a numerical analyst -
> technical support people often 
> respond that that's just the way floating-point arithmetic is - an inherently
> unpredictable process, like the weather.

All of this is so, but it misses the fact that floating-point is fundamentally
the WRONG arithmetic paradigm for MOST spreadsheet users.  Accounting rules
are not stated in "relative precision";  they are stated (BY LAW in many cases)
in "decimal places" = ABSOLUTE PRECISION.  Thus the whole phenomenon of 
floating-point is a poor approximation to the arithmetic functions most
spreadsheet users are trying to use.  Calculation of FHA loan interest,
for example, requires the monthly interest rate to be rounded to exactly
8 decimal places and the resulting payments and principal/interest splits
to be rounded to the penny, favoring interest when the sum is too large and
principal when the sum is too small!  This makes the a priori calculation
of the finite series for a 25-year loan require careful arithmetic, for
which almost all floating-point implementations are intrinsically ill-suited.
While floating-point can be used to perform such computations, the user
must impose an entirely different kind of accuracy regimen from that used
for scientific calculations.  It is probably fair to say that technical
support people who are versed in spreadsheet applications tend to see 
floating-point as a "strange and dubious" method of doing arithmetic at all,
and with some justification.
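
As a concrete illustration of the difference in regimen (a sketch only; the
rate, the payment figure, and the rounding rule are illustrative and are NOT
the actual FHA prescription): hold money in integer cents, round the monthly
rate once to 8 decimal places, and round each month's interest to the penny
before splitting the payment, rather than letting a running double balance
absorb the error:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double annual = 0.075;                         /* 7.5%/year, illustrative  */
        double r = round(annual / 12.0 * 1e8) / 1e8;   /* monthly rate, 8 decimals */
        long long balance = 10000000LL;                /* $100,000.00 in cents     */
        long long payment = 73899LL;                   /* $738.99, illustrative    */

        for (int month = 1; month <= 300 && balance > 0; month++) {
            long long interest  = llround((double)balance * r);  /* to the penny */
            long long principal = payment - interest;
            if (principal > balance)
                principal = balance;                   /* short final payment */
            balance -= principal;
        }
        printf("remaining balance: %lld cents\n", balance);
        return 0;
    }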

An old joke of the 1960s was that nobody's paycheck depended on the Fortran
compiler (Deo gratias), and the COBOL compiler knew better than to use
floating-point arithmetic.  Spreadsheet users no longer use "compilers" as
we used to know them, and who knows what spreadsheet developers do?

3.  The goals and limits of standardization.
David says:
> A useful arithmetic standard to address these current problems in computer
> arithmetic would prescribe the DEFAULT:
> 
> 1)	expression evaluation: what happens when the language does not
> 	specify the order of evaluation of (a op b op c), or how mixed-mode
> 	arithmetic is to be evaluated, or the precision of variables in
> 	registers or storage, and related issues.
> 
> 2)	correctly-rounded conversion between binary and decimal floating point:
> 	public domain code is available.
> 
> 3)	correctly-rounded elementary algebraic and transcendental functions:
> 	error bound 0.5, not 0.5+, in the terms of LIA-2.

Some philosopher with both feet on the ground recently described "standards"
as "agreements among vendors which improve interchangeability and 
interoperability to their perceived mutual advantage."

The problem with (1) above is that EXISTING software and hardware cannot
be easily modified to produce uniformity in these regards, IEEE 754
notwithstanding.  And as a consequence there will be no such "standard",
because vendors will not be able to agree to it.  Who is going to tell Intel
that 64-bit precision means exactly 64 always, not sometimes 80?  Who is 
going to modify the compilers whose register- and storage-management 
algorithms don't conform?  Rather, we need to agree for each language exactly
what syntax forces arithmetic to be performed in a particular order and
rounded to the stated precision at a particular point.  These are
language-specific problems, and compiler writers will have a chance to
implement solutions as the syntactic formulation is developed.  And programmers
who care must discipline themselves to use that syntax.  As a user
community, we have to get past the "do it my way" approach to the language
committees and ask "give me a way to do this", but in fairness we must
also get the committees past the "why would you want to" attitude.
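
As an example of the kind of syntax I mean (C here, and only a sketch --
whether a given compiler today honors it is exactly the problem):
parentheses and a temporary pin the grouping of a+b+c, and an assignment or
cast to a declared type is the natural hook for forcing the value to be
rounded to that type even when the hardware evaluates in a wider register
format such as the x87's 80 bits, a guarantee the language committees would
have to spell out:

    double sum3(double a, double b, double c)
    {
        double t = (double)(a + b);   /* ask for rounding to double here,
                                         not retention at register width  */
        return t + c;                 /* grouping fixed: (a + b) + c      */
    }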

Software is an engineering discipline, not an art form.  There is no
"elegant" and "beautiful", only "what works" and "what doesn't".

I agree with (2), and I don't know enough to judge (3).

-Ed
--------------------------------------------------------------------------
Edward J. Barkmeyer                             Email: edbark@cme.nist.gov
National Institute of Standards & Technology
Manufacturing Engineering Laboratory
Building 220, Room A127                         Tel: +1 301-975-3528
Gaithersburg, MD 20899                          FAX: +1 301-258-9749

"The opinions expressed above do not reflect consensus of NIST, and
 have not been reviewed by any Government authority, unless so stated."


