historical context of Java and x86 architecture
validgh webmaster
validgh
Tue Nov 11 19:57:30 PST 1997
Persons still interested in discussions of Java and x86 can find my
contribution on that subject at my Java web page
http://www.validgh.com/java
in troff, PostScript, and ASCII. Here's the ASCII:
Many people have noticed that the Java language and virtual machine
specifications and the Intel 8087 floating-point architecture are not ideally
matched to each other, and wondered how that came about. While "what should
be done now" will continue to be a lively debate, revisionist theories of
"what happened then" keep cropping up, so I summon up remembrance of things
past:
Summary Sentence
Java was designed to accommodate the common denominator of IEEE 754
floating-point arithmetic implementations; Intel's 8087-based PC design,
being farther than most from that common denominator, is not as
performance-optimal with Java's design as some other arithmetic designs;
that result arose from Intel's choice of a floating-point architecture in
1977, not from Java's choice of a floating-point architecture in 1992.
Summary Paragraph
In 1977 Intel chose a novel floating-point architecture based on
potential software benefits that were never realized; even so, that style
of architecture became popular for PC's, but not for embedded systems,
workstations, minicomputers, mainframes, or supercomputers. In 1992 Sun
developed a floating-point architecture for the Java language and virtual
machine, primarily based on a consensus of then-current hardware
implementations at a variety of performance levels, and with the unusual
goal of eliminating the gratuitous variation in floating-point arithmetic
results that, for programs intended to be portable among diverse
platforms, costs far more than its benefit. Therefore it's no surprise
that Java's design goals were not compromised to accommodate one
particular architecture that was not important for floating-point
computation at the time Java was designed.
Details
IEEE 754 specifies single, single-extended, double, and double-extended
types. Most C and Fortran compilers for IEEE 754 systems, including PC's,
support float and double types corresponding to IEEE 754's single and
double types. Most compilers for PC's abuse rather than exploit the
double-extended floating-point registers available in PC CPU's. After
all, until 486DX and Pentium systems became the majority of PC's around
1995, most PC's did not have floating-point hardware, and other 1977-era
design decisions in the x86 architecture, having to do with the
floating-point stack architecture, limited the available floating-point
performance of PC's to a fraction of that of comparable RISC systems, so
PC compiler implementors had no incentive to support floating point well.
Most don't even provide access to the double-extended floating-point
type. Some of these issues are discussed by Doug Priest at
http://www.validgh.com/goldberg/addendum.html.
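For concreteness, a small Java fragment along the following lines (the
class name and constants are purely illustrative) shows the precision gap
between the single and double formats, the only two that Java exposes;
like most PC compilers, Java itself provides no double-extended type:

    // Illustrative sketch: the precision gap between IEEE 754 single
    // and double, the two formats Java exposes.
    public class FormatPrecision {
        public static void main(String[] args) {
            float  f = 16777216f;          // 2^24, exactly representable in single
            double d = 9007199254740992.0; // 2^53, exactly representable in double
            System.out.println(f + 1f == f);    // true: 2^24 + 1 needs 25 significand bits
            System.out.println(d + 1.0 == d);   // true: 2^53 + 1 needs 54 significand bits
            System.out.println((double) f + 1.0 == (double) f); // false: double resolves it
        }
    }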
When Java was designed for its original embedded application target
processors, it was clear that those processors either had no
floating-point hardware or had simple RISC-like single and double
precision types. So nothing was lost by specifying floating-point formats
and expression evaluation completely, in a way that could be emulated on
all likely targets.
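As an illustration of what complete specification buys, a fragment along
these lines (names chosen only for the example) must produce the same bit
patterns on any conforming implementation, because each operation is
required to be rounded to its operand format:

    // Illustrative sketch: Java requires each float and double operation
    // to be rounded to its own format, so the bit patterns printed here
    // are identical on every conforming platform.
    public class ExactEvaluation {
        public static void main(String[] args) {
            double d = (1.0 / 3.0) * 3.0 - 1.0;     // each step rounded to IEEE double
            float  f = (1.0f / 3.0f) * 3.0f - 1.0f; // each step rounded to IEEE single
            System.out.println(Long.toHexString(Double.doubleToLongBits(d)));
            System.out.println(Integer.toHexString(Float.floatToIntBits(f)));
        }
    }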
Over several years before and after the design of Java, while attempting
to port programs to PC's, scientific computer users repeatedly
rediscovered that the simple models of arithmetic they had long relied on
were not available with PC compilers. Neither did those compilers support
the original 8087 design goals well. An obvious solution, which occurred
to at least one of Intel's competitors and probably within Intel as well,
would be to provide an additional floating-point mode bit that would
cause the precision control mode bits to control exponent range as well;
the existing precision control bits round the significand to single or
double width but leave the extended exponent range in effect, so results
near overflow and underflow can still differ from true single or double
arithmetic. But Intel never acted on that possibility, and so neither did
its competitors; perhaps Intel was looking forward to Merced, an entirely
new instruction set architecture that could avoid the issue in a number
of ways, although whether it does so has not been publicly disclosed.
Unfortunately Intel's corporate culture, celebrated in CEO Andy Grove's
Only the Paranoid Survive, is a lens through which some see all external
events as conspiracies to gain competitive advantage against Intel. So
despite all the evidence, some partisans continue to believe that the
design of Java was an attempt to exclude Intel or put Intel at a
disadvantage. But at the time Java was designed, the Intel CPU's designed
for the embedded market, the 960 series, and the Intel CPU's designed for
the scientific computing market, the 860 series, fit the Java design
readily without performance disadvantage. These instruction set
architectures were designed years later than the 8087 and, in 1992,
appeared to be Intel's future. If Java had been designed to create a
competitive disadvantage for Intel, it would have been designed to be
incompatible with these processors, if that were possible. But these,
like most recent instruction set designs, are based on common RISC
principles, and it's impossible to design a high-level language that
offers one such RISC design a significant advantage over another.