Jamming (aka von Neumann rounding)

Bill Voegtli uunet!dalek.mips.com!voegtli
Mon Dec 16 19:11:33 PST 1991


> 
> While writing my survey paper, I came across a method of rounding known as
> jamming or von Neumann rounding.  I have seen what I think are two slightly
> different explanations of how this works, and I am wondering if anyone has
> some insight into which of the two explanations is correct.  The first
> explanation can be found in one of Kahan's papers [1], and is repeated (I
> presume - I am not an expert in comparative literature) in IBM's comments
> on the proposed Language Compatible Arithmetic Standard (LCAS):
> 
>    If the result of an operation is not exactly representable, force the
>    least significant bit of the fraction to be a 1.
> 
> However, my understanding of the original explanation of jamming [2] is
> 
>    Force the least significant bit of the fraction of the result to be a 1.
> 
> which seems to imply to me that this would be the case even if the result
> is exactly representable.
> 


    I offer no insight as to correctness, but Kuck, volume 1, Chapter 3.4.2
"Disposal of mantissa underflow digits" indicates:

  "Jamming is performed by truncating the guard bits and forcing the
  low-order bit of the mantissa to be a 1."

    Its maximum error is worse than that of rounding, but its total bias
(the summation of errors over all cases) is the same as that of rounding.
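
    Just to make the difference between the two quoted definitions concrete,
here is a minimal sketch (mine, not taken from any of the cited papers) of
both readings, operating on a result mantissa held in an integer with g
guard bits below it; the function names and parameters are illustrative
only:

    #include <stdint.h>

    /* Kuck / second reading: always truncate the guard bits and force the
       low-order mantissa bit to 1, whether or not the result was exact.  */
    uint64_t jam_unconditional(uint64_t mantissa_with_guards, int g)
    {
        return (mantissa_with_guards >> g) | 1u;
    }

    /* Kahan / LCAS reading: force the low-order bit to 1 only when the
       discarded guard bits are nonzero, i.e. only when the result is
       inexact.                                                           */
    uint64_t jam_if_inexact(uint64_t mantissa_with_guards, int g)
    {
        uint64_t guards    = mantissa_with_guards & ((1ull << g) - 1);
        uint64_t truncated = mantissa_with_guards >> g;
        return guards ? (truncated | 1u) : truncated;
    }

    The two versions differ only when the guard bits are all zero, i.e.
when the result is exactly representable: the unconditional form still
forces the low bit to 1 and so perturbs exact results, which is exactly
the point of contention in the question above.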


  Since jamming involves about as much hardware as chopping  :-),
  and has much better error characteristics, it's odd that it hasn't
  been adopted by anyone who is currently truncating.



