Jamming (aka von Neumann rounding)

Samuel A. Figueroa uunet!SPUNKY.CS.NYU.EDU!figueroa
Mon Dec 16 14:28:32 PST 1991


While writing my survey paper, I came across a method of rounding known as
jamming or von Neumann rounding.  I have seen what I think are two slightly
different explanations of how this works, and I am wondering if anyone has
some insight into which of the two explanations is correct.  The first
explanation can be found in one of Kahan's papers [1], and is repeated (I
presume - I am not an expert in comparative literature) in IBM's comments
on the proposed Language Compatible Arithmetic Standard (LCAS):

   If the result of an operation is not exactly representable, force the
   least significant bit of the fraction to be a 1.

However, my understanding of the original explanation of jamming [2] is

   Force the least significant bit of the fraction of the result to be a 1.

which seems to imply that the least significant bit is forced to 1 even when
the result is exactly representable.
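To make the difference concrete, here is a small sketch (my own, not from
either paper) contrasting the two readings on integer significands treated
as scaled fixed-point values; `shift` plays the role of the discarded
low-order bits:

```python
def jam_if_inexact(significand, shift):
    """First reading [1]: truncate; set the LSB only if any bits were lost."""
    truncated = significand >> shift
    inexact = (significand & ((1 << shift) - 1)) != 0
    return truncated | 1 if inexact else truncated

def jam_always(significand, shift):
    """Second reading [2]: truncate, then force the LSB to 1 regardless."""
    return (significand >> shift) | 1

# The two variants disagree exactly when the result is exactly
# representable and its last retained bit is 0:
print(jam_if_inexact(0b100000, 3))  # exact case: 0b100 -> 4
print(jam_always(0b100000, 3))      # LSB forced:  0b101 -> 5
print(jam_if_inexact(0b100001, 3))  # inexact case: 0b101 -> 5 (both agree)
```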

I am not sure whether to trust my own reading on this matter, since Kahan
does not ordinarily give inaccurate explanations in his papers.

References:

[1] W. Kahan.  Why do we need a floating-point arithmetic standard?
    Technical report, University of California at Berkeley, Feb. 1981.

[2] A. W. Burks, H. H. Goldstine, and J. von Neumann.  Preliminary
    discussion of the logical design of an electronic computing instrument.
    In A. H. Taub, ed., _Design of Computers, Theory of Automata and
    Numerical Analysis_, vol. V of _John von Neumann: Collected Works_,
    pp. 34-79, Pergamon Press--Macmillan, New York, 1963.
