Java Numerics

bill joy <bill.joy@sun.com>
Thu Feb 13 11:51:15 PST 1997


> IEEE 754 Exceptions
> 
> We agree that some testing mechanism should be provided.  Bill Joy responded
> to Hough's comments with support for "well bracketed" exception handling in
> the form of try blocks:
> 
>  >> try enabling FloatingPointOverflow {
>  >>     // code where overflow enabled
>  >> } catch {
>  >>     ...
>  >> }
>
ALTERNATIVE PROPOSAL:

I have an alternative proposal, which doesn't involve language changes:
provide operations mul, add, etc., defined as

	double mul(double a, double b)
	    throws FloatingPointOverflow, FloatingPointUnderflow;
	double add(double a, double b)
	    throws FloatingPointOverflow, FloatingPointUnderflow;

Add such methods to a library in a standard place, and let code that
wishes to have an exception on these operations use these routines.
JIT compilers can optimize these routines to be inline.

This is trivial to implement, and can be optimized by JITs.
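A minimal sketch of what such a library class could look like. The class and exception names here follow the proposal and are assumptions, not a real Java API; since Java exposes no IEEE 754 status flags, overflow and underflow are detected by inspecting the rounded result, which only approximates the hardware flags (e.g. exact cancellation in add is deliberately not treated as underflow):

```java
// Sketch of the proposed checked-arithmetic library.  Names are the
// proposal's own, not an actual Java API; detection inspects the
// IEEE 754 result rather than hardware status flags.
class CheckedMath {
    static class FloatingPointOverflow extends ArithmeticException {
        FloatingPointOverflow(String msg) { super(msg); }
    }
    static class FloatingPointUnderflow extends ArithmeticException {
        FloatingPointUnderflow(String msg) { super(msg); }
    }

    static double mul(double a, double b)
            throws FloatingPointOverflow, FloatingPointUnderflow {
        double r = a * b;
        // A zero product of two nonzero finite operands underflowed to 0.
        if (r == 0.0 && a != 0.0 && b != 0.0)
            throw new FloatingPointUnderflow("mul underflow: " + a + ", " + b);
        return check(r, a, b);
    }

    static double add(double a, double b)
            throws FloatingPointOverflow, FloatingPointUnderflow {
        return check(a + b, a, b);
    }

    private static double check(double r, double a, double b) {
        // Overflow: finite operands produced an infinite result.
        if (Double.isInfinite(r) && !Double.isInfinite(a) && !Double.isInfinite(b))
            throw new FloatingPointOverflow("overflow: " + a + ", " + b);
        // Underflow (approximated): nonzero result in the subnormal range.
        if (r != 0.0 && Math.abs(r) < Double.MIN_NORMAL)
            throw new FloatingPointUnderflow("underflow: " + a + ", " + b);
        return r;
    }
}
```

A JIT can inline these static calls, so code that opts in pays only for the result tests.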

COMMENTS ON THE PREVIOUS PROPOSAL:

The syntax here needs some work... I would prefer to name the exceptions
only once, i.e. as in

	try {
	    ...
	} catch enabled (FloatingPointOverflow f) {
	    ...
	}

but this has the problem that you have to read the stuff at the bottom
to see what is being enabled.  The alternative

	try enabling FloatingPointOverflow {
	    ...
	} catch (FloatingPointOverflow f) {
	    ...
	}

seems verbose.

> The suggestion is that, in the event the exception arose, an unspecified amount
> of the try block would be executed, but the catch block's execution would be
> guaranteed.

No, the proposal is that a SPECIFIED amount of the try block would be
executed, namely the code up to the first exception.  Java guarantees
APPARENT evaluation order, so there is no indeterminism here, and no
observable differences.
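A small illustration of the guarantee, using integer division by zero as a stand-in for an enabled floating-point exception (it already throws deterministically in today's Java): exactly the statements before the faulting operation take effect, identically on every conforming implementation.

```java
// Apparent evaluation order: the side effects before the faulting
// operation are visible, the ones after it never happen, and the
// catch block runs -- the same on every conforming JVM.
class OrderDemo {
    static int[] run() {
        int[] trace = new int[3];
        int zero = 0;
        try {
            trace[0] = 1;       // executes before the exception
            int x = 1 / zero;   // throws ArithmeticException here
            trace[1] = 1;       // never executes
        } catch (ArithmeticException e) {
            trace[2] = 1;       // guaranteed to execute
        }
        return trace;           // always {1, 0, 1}
    }
}
```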

>...  Hough points out
> 
>  >> There would be discernible differences in implementations that
>  >> examined side effects produced by basic blocks whose partial execution
>  >> was interrupted by an exception, but such programs could be deemed
>  >> illegal Java.

No, this is not what is proposed since Java guarantees APPARENT
evaluation order.

> While good programming practice would dictate against dependence on the
> results of a partially executed block, results may differ, whether legally
> or not.  Might this not lead to exactly the kind of bit-checking between
> platforms Java's designers wish to avoid?

It would, and that is why it is not what I am proposing.

> On a smaller scale, Bill Joy proposes that the exceptions be
> 
>  >> enabled only in the code that is marked as enabled, e.g. not in the
>  >> methods called by this try statement.
> 
> This would seem to incur a high implementation cost because of the need
> to isolate the flags (static sticky bits, or possibly static trap enable
> bits, on most current processors) across calls.  It also leads one to
> wonder whether the programmer who wrote a statement like
> z = x * exp(y);
> 
> would care whether an overflow arose in the exponential (masked by Bill
> Joy's scheme) or in the product?

The person who wrote the exp routine wrote it relative to a certain way
of processing overflow.  It is completely inappropriate in a language
like Java to have the IEEE flags be modified by the caller in a way
that would invalidate this callee's expectations, thus making the
results of a method such as exp unpredictable.



> 
> Operator Extension to User-defined Types
> 
> As I understand Hough's proposal, he would have operators like
> "+" and "*" apply to like pairs of user-defined types, such as complex or
> interval, but the operators would not support indiscriminate mixing
> of different types. Programmers would use explicit casts to mix types.
> Although this is the first mechanism from within the Java community that
> would permit the use of extended evaluation, it would force the casting
> of every float or double argument in a statement:
> 
> extended86 t = 0.0;
> for (i = 0; i < N; i++)
>       t += (extended86) x[i] * (extended86) y[i];
> 
> Presumably, there would be "reference" definitions of all popular types:
> extended86, float_complex, double_complex, float_interval, double_interval,
> etc.  In this way results could be identical across all implementations
> that support the extensions.

This is not what I would propose.  If we are to put these datatypes in
infix notation, then their use must be convenient.  I think we can tie
coercions to the type hierarchy that these types obey, provided we can
work these types into the (future parameterized) type hierarchy.
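One way to see how coercions tied to the types themselves could spare the cast-per-operand style quoted above: a hypothetical Complex class (illustrative only, not a 1997 Java library type) whose overloaded methods supply the double-to-complex coercion, so the caller never casts.

```java
// Hypothetical sketch: method overloading supplies the coercion from
// double, so mixed-type arithmetic needs no cast at each operand.
// Complex is illustrative, not an actual Java standard library type.
final class Complex {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }

    Complex add(Complex o) { return new Complex(re + o.re, im + o.im); }
    Complex add(double d)  { return new Complex(re + d, im); }      // coercion

    Complex mul(Complex o) {
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
    Complex mul(double d)  { return new Complex(re * d, im * d); }  // coercion
}
```

If infix operators were later tied to such a hierarchy, an expression like z + 2.0 would resolve through the same coercion that the add(double) overload expresses here.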


> Java is NOT a Machine-oriented Language
> 
> ...
>
> Nowhere in the larger discussion has there been any evidence that simple
> adherence to IEEE 754 is insufficient to achieve a useful level of portability.

If you want to do a language that varies the results it gets per
platform, then you should do that language, but that is not Java.
Java's point is that if you test once you get the same answer on other
platforms, up to determinism.  This is a choice; we shouldn't (and
won't) mix in the other style of answer.  You can get the other style
of answer (namely, "whatever") from many other languages.
>
> ...
> This debate needs to be joined by members of the broader numerical
> community.  Just how onerous is the prospect of porting LAPACK or the
> IMSL libraries to a broad family of architectures supporting IEEE 754?
> 

There is too much numeric code being written to have each code checked
by an expert who can make sure it will run on all IEEE 754 platforms
and get the same answer.  This just isn't practical.  Java's
alternative of writing the code once, under a more limited model, and
getting it correct there seems the only way to get reproducible
results from thousands or millions of different pieces of code.
People who want to tie more closely to the hardware at hand, and take
advantage of nooks and crannies to squeeze out performance, have lots
of other alternatives.

> The performance implications of the current Java specification drive
> the argument to have Java's designers accommodate more than one IEEE 754
> architecture.  According to David Hough, an expert in numerical performance,
> 
>  >> the performance penalties for adhering to Java's model are not
>  >> overwhelming and could be reduced further with a few modifications
>  >> in the next revisions of those architectures.
> 
> The inner product example of my original note showed that memory traffic
> nearly doubled for compiled x86 code forced to adhere to Java's model.
> Given that this penalty is suffered by users whose processors dominate
> the market by orders of magnitude, one might expect this to be of more
> than "not overwhelming" concern.  Other examples involve elementary
> functions, several of which are provided at high accuracy and speed on
> x86 machines.

As Guy's note pointed out, we welcome the addition, in the numeric
library, of "bit-accurate" versions of sin, cos, etc.  These would,
conceivably, run very fast on machines with extended precision (like
the x86).  They are not suitable for the standard (required) Java
routines because they take egregious amounts of space and time on
machines with only double precision hardware (a decision that has
ALREADY been taken).

Intel can add a rounding operation in the registers of the x86 in the
future to help the performance of "Pure Java" floating point.  As I
noted before, I raised this issue with Intel engineers in 1995.  We
are not sacrificing "write once, run anywhere" for performance on
Intel or indeed any other hardware.

> Sometimes performance arguments are expressed relative to the "high-end
> market," where RISC cpus rather than CISC x86s currently dominate.
> On the one hand it's natural to expect high-end users not to be overly
> concerned with problems on other users' processors; on further inspection,
> however, it turns out that the most performance-hungry users in the elite
> workstation market are interested in FORTRAN, not C or Java.  This does
> not lead to compelling arguments about Java across a breadth of networked
> platforms.
> 
> System designs have always involved a balance of speed and accuracy.
> Although historically some painful decisions have been made in the
> name of speed, Java's designers have reached the other extreme -- and
> not even in the name of accuracy, only sameness at all costs.
> 
> As Java finds wider numerical use, the pressure for higher performance --
> without sacrificing a bit of IEEE 754 conformance -- will rise.
> Implementors wishing to make a sale, or users who want to get their
> work done, will be forced to decide whether one arbitrary "reference"
> version of an elementary function or a LINPACK BLAS serves their
> needs best.  The decision about what is truly "100% pure" Java may
> become clouded by the legitimate demands of numerical users and by
> mathematical arguments that fast, accurate libraries on top of
> 100% pure IEEE 754 arithmetic lead to a more desirable mix of
> performance, accuracy, and portability.

I think a more constructive approach than the above would be to put
together a group of people interested in "high-performance numerics
in Java" and to come up with some proposals which can be adopted by
all the chip manufacturers (Intel, MIPS, SPARC, PowerPC, ARM, etc.)
to deal with reproducible results across the set of processors.  So,
for example, the Intel folks could make rounding to 64-bit faster (it
may be already on Merced, depending on the details of its
floating-point design), and the MIPS, SPARC and PowerPC folks could
consider making bit-accurate elementary functions go faster, etc.

The goal of this effort would be to move the performance of "write
once, run everywhere" numerics upward.  The raw performance race is
already heated by the SPECmark and similar benchmarks in languages
like C and FORTRAN and doesn't need our help.

> -Jerome Coonen
