Note on Java Numerics

R. Baker Kearfott rbk@usl.edu
Fri Jan 31 06:58:28 PST 1997


At 05:02 PM 1/27/97 PST, Jerome Coonen wrote:
>
>====================================================================
>A Note On Java Numerics                               Jan. 25, 1997
>====================================================================
>
>Overview
>
 .
 .
 .

>arbitrary decisions about right and wrong results.  Today, the Java
>specification favors processors like Sun's Sparc, at the expense of
>millions of users of other hardware.

I am confused.  I thought that Sun implemented IEEE 754, which includes
the extended format for argument reduction, etc.  Furthermore, other
aspects of Sun products, such as the promotion of single precision
constants to double in f77, seem to run counter to the rigid coercion
into a floating point storage unit that you describe in the proposed
Java standard.

Can someone enlighten me?  Is it something I don't understand about 
Sparc architecture?

>Two Examples
>
>Consider this simple piece of a larger computation:
>
>double h, a, b, c, d;
>h = (a * b) / (c * d);
>
>Programmers familiar with the Pentium processor might expect expression
>evaluation to proceed in the form
>
>fld.d	a	; push a onto FPU stack
>fmul.d	b	; a * b on stack
>fld.d	c	; push c
>fmul.d	d	; a * b and c * d on stack
>fdivp		; (a * b) / (c * d) on stack
>fstp	h	; pop and store as double
>
>This is an excellent application of the floating point evaluation stack
>in the x86 architecture.  And because stack values are stored to 11 more
>significant bits and with substantially wider exponent range than 64-bit
>double values, rounding error is reduced and the possibility of intermediate
>overflow or underflow is eliminated.
>
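
[Aside from the editor: the intermediate-overflow point is easy to
demonstrate.  Here is a minimal sketch in modern Java; the values 1e200
are invented for illustration and are not from the note above.]

```java
// Sketch: intermediate overflow under strict double evaluation.
// The operand values here are invented, purely for illustration.
public class IntermediateOverflow {
    public static void main(String[] args) {
        double a = 1e200, b = 1e200, c = 1e200, d = 1e200;
        // The exact quotient is 1.0, but a * b = 1e400 exceeds the
        // double exponent range, so strict double evaluation gives
        // Infinity / Infinity = NaN.
        double h = (a * b) / (c * d);
        System.out.println(h);   // NaN
        // An x87 80-bit register, with its 15-bit exponent, would hold
        // 1e400 without overflow, and the quotient would round to 1.0.
    }
}
```

[The same expression evaluated with extended range, as in the first
instruction sequence above, loses no information at all.]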
>A careful reading of "Java: The Language Specification" by Gosling, Joy, and
>Steele, however, reveals language suggesting that all intermediate results
>should be stored to "double precision."  According to numerical aficionados
>at Sun, the spec ought to say "double precision and range." This restriction
>leads to this more cumbersome evaluation on some Pentium systems:
>
>fld.d	a
>fmul.d	b	; a * b on stack
>fstp.d	temp1	; coerce a * b to double -- EXTRA STORE
>fld.d	c	; push c
>fmul.d	d	; c * d on stack
>fstp.d	temp2	; coerce c * d to double -- EXTRA STORE
>fld.d	temp1	; EXTRA LOAD
>fdiv.d	temp2	; temp1 / temp2 on stack -- EXTRA LOAD
>fstp.d	h	; pop and store as double
>
>The cost of converting intermediate results to double in this simple
>calculation is writing two double values to memory and then
>immediately reading them back -- nearly doubling the memory traffic
>and increasing the instruction count by half.  This is not what the
>designers of the x86 FPU (or of the IEEE floating point standard) had in mind.
>
>Here is an example on the PowerPC processor.  Numerical approximations
>are often structured to have the form (base value) + (residual), where
>the base value is a fast but rough approximation, in turn refined by the
>smaller residual.  Such approximations lead to expressions of the form
>
>double y, base, x, h;
>y = base + (x * h);
>
>PowerPC enthusiasts crave such expressions because they are handled so
>well by a single instruction
>
>; Assume floating registers fr1 = base, fr2 = x, and fr3 = h.
>; Compute fr0 = y = base + (x * h)
>fmadd	fr0,fr2,fr3,fr1	; fr0 = (fr2 * fr3) + fr1
>
>The power of the "fused multiply-add" instructions lies in the evaluation
>of the product (x * h) with no rounding error before this value -- to a full
>106 significant bits -- is added to the value of base in fr1.
>
>The Java spec, however, would seem to imply that the product (x * h) must
>be explicitly rounded to double before it is added to base, leading to
>the alternative sequence
>
>; fr1 = base, fr2 = x, and fr3 = h.
>; Compute fr0 = fr1 + (fr2 * fr3)
>fmul	fr2,fr2,fr3	; fr2 = fr2 * fr3, replaces x -- EXTRA ROUND TO DOUBLE
>fadd	fr0,fr1,fr2
>
>Java has doubled the number of instructions, added a temporary value,
>and forced one gratuitous rounding on the PowerPC.
>
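[Aside from the editor: the single rounding of fused multiply-add is
visible at the language level.  A minimal sketch in modern Java --
Math.fma appeared only in Java 9, long after this exchange, and the
value x = 1 + 2^-27 is invented for illustration:]

```java
// Sketch: one rounding (fused) vs. two roundings (multiply then add).
// The test value is invented, chosen so the difference is visible.
public class FusedMultiplyAdd {
    public static void main(String[] args) {
        double x = 1.0 + 0x1p-27;            // exactly representable
        double base = -1.0;
        // x * x = 1 + 2^-26 + 2^-54 exactly.  Rounding the product to
        // double first discards the 2^-54 term; the fused form keeps it.
        double split = (x * x) + base;       // two roundings: 2^-26
        double fused = Math.fma(x, x, base); // one rounding: 2^-26 + 2^-54
        System.out.println(split == fused);  // false
    }
}
```

[This is exactly the 106-significant-bit product described above: the
gratuitous intermediate rounding changes the answer.]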

---------------------------------------------------------------
R. Baker Kearfott,       rbk@usl.edu       (318) 482-5346 (fax)
(318) 482-5270 (work)                     (318) 981-9744 (home)
URL: http://interval.usl.edu/kearfott.html
Department of Mathematics, University of Southwestern Louisiana
USL Box 4-1010, Lafayette, LA 70504-1010, USA
---------------------------------------------------------------



