5th operation?

Alan M. McKenney uunet!GAUSS.CIMS.NYU.EDU!mckenney
Mon Oct 5 07:48:58 PDT 1992


Tim Hoff -- or is it George Corliss (georgec@boris.mscs.mu.edu) writes:

[ example involving adding and subtracting O(10^11) and getting ]
[ floating point errors. ]


> Dear reader: Did you know that modern computers performing
> floating-point arithmetic of the best quality may fail in simple
> accumulations which every child in fourth grade can do correctly
> within a few minutes?


\begin{grouch mode}

Did I come in late in the discussion, and miss something?  Who exactly
finds this a revelation?  Or is this little paper disingenuously naive?

Maybe I'm missing some context somewhere, but I wonder how anyone who
has learned enough about floating point arithmetic to have any concept
of what IEEE arithmetic is can *not* know this.  It's usually chapter
1 of any undergraduate numerical analysis textbook.  (I don't know
about non-NA CS textbooks.)  It's called catastrophic cancellation.
It also appears in most beginning FORTRAN texts.
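The textbook effect in question is easy to reproduce; here is a minimal Python sketch (my own numbers, not the quoted paper's exact example) of adding a cent to O(10^11) and subtracting the big number back out in IEEE-754 double precision:

```python
# Catastrophic cancellation: at magnitude 1e11, one ulp of a double is
# about 1.5e-5, so a cent (0.01) cannot be represented exactly in the sum.

big = 1.0e11           # O(10^11), exactly representable as a double
tiny = 0.01            # one cent

total = big + tiny     # the cent is rounded to the nearest ulp of 1e11
result = total - big   # the subtraction itself is exact; the damage is done

print(result)          # close to 0.01, but not equal to it
print(result == tiny)  # False
```

The fourth-grader gets 0.01 exactly; the double-precision machine is off by a few parts in a thousand of a cent, which is precisely the "simple accumulation" failure being advertised.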

I also read in such places, with boring regularity, how bankers and
such people who need to accumulate stuff accurately use fixed-point
numbers for the purpose.  ``Engineers work with quantities where
errors of 10^10 may be negligible, but bankers care about getting
every last penny right.  So engineers use floating point, and bankers
use fixed-point/BCD arithmetic.''  Am I unusual in having seen this?

\end{grouch mode}


You could even do this example in FORTRAN; just do the arithmetic with
INTEGER variables representing the number of cents (well, you'd have to
have > 47 bits/integer -- e.g., a Cray or a 48-bit Burroughs machine.)
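The integer-cents approach can be sketched in Python, where integers are arbitrary precision, so (unlike 1992-era FORTRAN) there is no 47-bit worry.  The helper name below is my own, for illustration:

```python
# Exact accounting with integer cents: no rounding can ever occur,
# because Python integers have unlimited range.

def dollars_to_cents(d, c=0):
    """Represent d dollars and c cents as an exact integer count of cents."""
    return d * 100 + c

balance = dollars_to_cents(100_000_000_000)   # $10^11, held exactly
balance += dollars_to_cents(0, 1)             # add one cent
balance -= dollars_to_cents(100_000_000_000)  # take the big amount back out

print(balance)  # 1 -- the cent survives, unlike in floating point
```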

However (according to my understanding of things), nobody who is
keeping track of billions of dollars to the last penny uses
floating-point arithmetic, IEEE or otherwise.  Bank accounting
programs are written in languages like COBOL, which let the programmer
specify the precision needed far more directly than FORTRAN or C do
and which, I believe, make no provision for floating point at all.

So all that this little paper has pointed out is the
not-very-surprising fact that IEEE arithmetic, like all versions of
floating-point arithmetic, doesn't eliminate all disparities between
computer and exact arithmetic.  (IEEE-754 only claims to be better than
other floating-point models.)  It also suggests, incorrectly, that IEEE
arithmetic has no way of letting you know that the answer is wrong.
(Hint: look at the "inexact" flag.)  A properly-implemented IEEE-754
8-decimal-digit calculator would have had a little "inexact" light
that would have lit up early on in John's calculation.
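Python's decimal module, which follows the same flag model as IEEE-754, makes a serviceable stand-in for that 8-digit calculator; this sketch shows the "inexact light" coming on:

```python
# An 8-decimal-digit calculator with an "inexact" flag, mimicked with
# Python's decimal module (which signals Inexact whenever a result
# had to be rounded).

from decimal import Decimal, getcontext, Inexact

ctx = getcontext()
ctx.prec = 8               # mimic an 8-decimal-digit calculator
ctx.clear_flags()

total = Decimal("1e11") + Decimal("0.01")   # exact sum needs 14 digits

print(total)                     # 1.0000000E+11 -- the cent is gone
print(bool(ctx.flags[Inexact]))  # True: the "inexact light" is on
```

The flag is sticky, just like the IEEE-754 exception flags: once any operation rounds, it stays set until cleared, so a program can check it once at the end of a long accumulation.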



And, as the follow-ups suggest, the "cure" suggested in the "5th op."
paper is an old one, and only solves one of the many accuracy problems
that heavy floating-point users encounter, while inserting an
annoyingly non-orthogonal feature into any language that supports it.

An example of how this "cure" does nothing for a similar problem that
I have actually had: in writing a Buneman (fast Poisson) solver, I
found significant errors when using large grids.  It turned out that
part of the calculation required multiplying a whole bunch of constants
together, and I was multiplying them in an order which caused a partial
product to underflow, even though the final product should have been
O(1).  I rewrote the code to reorder the constants so that whenever the
partial product was < 1, the next constant chosen was > 1, and vice
versa; that solved the problem.  In the spirit of the "5th op", we
would need a sixth operation, product with unlimited exponent range.
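The reordering trick can be sketched in Python with toy constants of my own choosing (not the actual solver's): the full product is 1, but the naive ordering drives the partial product below the double-precision underflow threshold (~1e-308) and the answer comes out zero.

```python
# Underflow in a long product whose final value is O(1), and the
# balanced-ordering fix described in the text.

small = [1e-2] * 400   # product of these alone is ~1e-800: underflows
big   = [1e+2] * 400   # product of these alone would overflow

naive = 1.0
for c in small + big:        # all the tiny factors first
    naive *= c               # partial product sinks past 1e-308 to 0.0

balanced = 1.0
for s, b in zip(small, big): # interleave <1 and >1 factors
    balanced *= s            # partial product never strays far from 1
    balanced *= b

print(naive)     # 0.0 -- and multiplying 0 by anything stays 0
print(balanced)  # 1.0, up to rounding
```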

And then there are those who need (or think they need) sin(10^10), and
stuff like that.
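What makes sin(10^10) hard is argument reduction: reducing 10^10 mod 2*pi with only a double-precision pi discards exactly the digits that determine the answer.  A sketch of doing the reduction in 50-digit decimal arithmetic instead (the digits of pi below are the standard published values; a libm with careful reduction should agree):

```python
# Argument reduction for sin(1e10): the quotient 1e10/(2*pi) is ~1.6e9,
# so ~10 leading digits of the argument are consumed just locating the
# right period, and the reduction must be carried out in extra precision.

import math
from decimal import Decimal, getcontext

getcontext().prec = 50
PI = Decimal("3.1415926535897932384626433832795028841971693993751")

x = Decimal(10) ** 10
r = x % (2 * PI)            # remainder computed with 50 significant digits

print(math.sin(float(r)))   # sin via the careful reduction
print(math.sin(1.0e10))     # a libm with good reduction gives the same value
```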



The more common, and, I believe, reasonable approach is to

(a) try to "clean up" floating point as it is presently conceived,
    i.e., to come up with a model which is not radically different from
    present floating-point models, yet eliminates what were arguably
    ill-considered decisions, by considering the expected user base.
    This was (I believe) the IEEE-754 approach.  I recall a discussion
    here of how to evaluate things like sin(10^10), done in this spirit.

while

(b) speculating about new arithmetic models which are both practical and
    make getting accurate results easier.  Lots of models have been
    proposed, including ones based on the "5th operation"; all the ones
    I have heard of have problems or turn out to be less generally
    useful than was at first supposed.  (This is to be expected: most
    new ideas turn out to be not very good, that's why you need lots of
    them.)



Alan McKenney        E-mail:  mckenney@cims.nyu.edu           (INTERNET)
Courant Institute,NYU,USA     ...!cmcl2!cims.nyu.edu!mckenney   (UUCP)
