SPARC quad format
Jon L White
uunet!lucid.com!jonl%kuwait
Wed Jun 5 22:12:49 PDT 1991
I'm a little surprised by one of the reasons you cited for pressing
ahead with the SPARC V8 128-bit float format. You mention some experience
with Cray customers who refused to buy into "Unix" (or, Sun products?)
because it didn't support the 128-bit format.
Now, quite coincidentally I've been carrying on a private conversation with
Tim Peters at Kendall Research, who apparently used to be a Fortran compiler
hacker for Cray [the basic topic was the use of "rounding modes" to aid
in detecting algorithmic instability.] One of his tales from the Cray days
pointed to something I've suspected for years, and am always looking for
hard evidence to support; here is Tim's "story":
My favorites were the cases where someone ported from a 32-bit
world and claimed to get "the wrong answer" on the 64-bit Cray: on more
than one occasion, after plotting "the answer" against a wide range of
-trunc values, we found that "the answer" varied wildly around the
-trunc values that corresponded to their 32-bit machine's significand
size, and that "the answer" settled down quickly as the precision
increased. There was no more effective means of suggesting that what
they had been accepting as "the answer" was cheese than to work up this
kind of graph.
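As an aside: the effect Tim describes is easy to reproduce by brute
force in any Lisp. The sketch below is mine, not Tim's -- the names
TRUNCATE-SIGNIFICAND and NAIVE-VARIANCE are made up, and zeroing the
low significand bits of every intermediate result is only a crude
stand-in for Cray's -trunc option, which chopped bits off arithmetic
results in hardware:

  ;; Zero out all but the top BITS bits of X's significand.
  (defun truncate-significand (x bits)
    (if (zerop x)
        x
        (multiple-value-bind (sig expt sign) (integer-decode-float x)
          (let* ((drop (max 0 (- (float-digits x) bits)))
                 (sig* (ash (ash sig (- drop)) drop)))
            (* sign (scale-float (float sig* x) expt))))))

  ;; Textbook one-pass variance -- numerically unstable -- with every
  ;; intermediate result truncated to BITS significand bits.
  (defun naive-variance (data bits)
    (flet ((trunc (v) (truncate-significand v bits)))
      (let ((n (length data)) (sum 0d0) (sumsq 0d0))
        (dolist (x data)
          (setf sum   (trunc (+ sum x))
                sumsq (trunc (+ sumsq (trunc (* x x))))))
        (let ((mean (trunc (/ sum n))))
          (trunc (- (trunc (/ sumsq n)) (trunc (* mean mean))))))))

  ;; "Plot" the answer against precision, a la the -trunc graphs: the
  ;; true variance here is about 0.083, but the printed values are
  ;; garbage near 24 bits -- a 32-bit machine's significand size --
  ;; and settle down as BITS grows toward 53.
  (let ((data (loop for i from 1 to 1000
                    collect (+ 1000d0 (/ i 1000d0)))))
    (loop for bits from 24 to 53
          do (format t "~2d bits: ~f~%" bits (naive-variance data bits))))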
In response to this, I recounted the bit of search/research that led Lucid
to adopt a uniform double format as the normal float representation within
Lisp, despite the language standard's permission to have up to four
distinct user-visible formats. And if I read Mark Homewood's message right,
he too would prefer to drop the multiplicity of formats that made much more
sense back when INT and LONG really needed to be different on a PDP-11.
My "story" doesn't actually rule out all need for quad formats, but it
has some substantiation that it will be of lesser, or rarer, utility.
(And perhaps just a handful of Cray adherents demanding quad precision
would be enough to provoke some serious concern.) So here is the Lucid
story in full:
Way back in late 1986, when Lucid was planning its 3.0 release, we faced
the question of whether to expand out to four distinct float formats,
as permitted by CLtL ("Common Lisp: The Language"). Two obvious things
militated against doing this: it would be a moderate amount more work
for the part of our optimizing compiler that was doing data-representation
changes for optimization (i.e., converting from "pointers" to the actual
machine representation for floats, and back again by "consing" when
necessary -- all in order to do Fortran level numeric optimization and
to side-step any intermediate consing); and it would generate further
incompatibilities not only with our 2.1 release that supported only
one 32-bit format, but likely with other vendors as well. Not surprisingly,
a couple of other Lisp vendors widened out their representational schemes
to include distinct formats for SINGLE, DOUBLE and LONG, but failed to
understand the portability failure under such schemes [even Guy Steele
misunderstood this point until I reminded him that Lisp's introspective
typing capability meant that one could notice the difference just by a
form like (typep 1.2s0 'double-float). It wasn't just Guy; it was the
whole of the X3J13 committee.]
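To make the introspection point concrete, here is a minimal
illustration (the function name is made up; nothing here is blessed by
CLtL beyond TYPEP and TYPE-OF):

  ;; CLtL permits the four float types to collapse onto as few as one
  ;; machine representation, and a program can observe which collapsing
  ;; its host chose:
  (typep 1.2s0 'double-float)  ; => T on a one-format system like ours,
                               ;    NIL where SHORT-FLOAT is distinct

  ;; Counts how many of the four CLtL float types are actually
  ;; distinct on the host implementation; anywhere from 1 to 4.
  (defun distinct-float-formats ()
    (length (remove-duplicates
             (list (type-of 1s0) (type-of 1f0)
                   (type-of 1d0) (type-of 1l0)))))

Any program that branches on such a test behaves differently from one
implementation to the next; that is the portability failure I mean.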
Well, after all, the multi-choice scheme for CLtL was chosen in the very
early 1980's when, for example, double-floating operations on typical
hardware were from 3 to 10 times slower than single; it seemed to make
sense to permit Lisp to be cluttered up with more choices of representational
matters than you could shake a stick at. But from looking at the MC68881
manual, I suspected that by the late 1980's, this speed advantage would
shrink or disappear; so we began quietly asking our contacts in customer
land whether they *really* used floating point arithmetic, and how much
speed mattered. The killer response came from a guy at Alliant: he was a
Fortran user/implementor and hardware hacker who was only vaguely familiar
with Lisp, but was willing to talk to us. He pointed out that:
(1) Alliant was planning to switch from a 32-bit memory bus to a 64-bit
one, so the memory cost of using doubles would only be space, not time;
(2) A large number of numerical algorithms in common use were unstable
enough to produce "cheese" under the IEEE 32-bit format, but *none*
were unstable enough to be proven losers in the 64-bit format; i.e.,
there were algorithmic reasons for requiring at least 64, but none
(or, none known) for requiring higher precision. [he was speaking
for the Fortran user community of which he was aware]
(3) Alliant was currently asking for bids on hardware sub-components;
their demands were:
(3a) the fastest possible IEEE-compliant 64-bit format chips
(3b) 32-bit format operations would not be required, but would
be tolerated provided they were not more than about 50%
slower than the double format.
Yes, that is no typo. SINGLE is OK as long as (for backwards compatibility)
running it isn't significantly slower than the "default" DOUBLE. That in
fact seemed to be the "state of the FP art" even back then. So that ended
my survey right there. We made our decision to opt for only one "first
class" FLOAT type, and you can guess what format it was. But of course we
supported packed floats too -- SINGLE format in arrays, just as there are
packed integer arrays -- this is also required for compatibility at the
Foreign Function Interface level, where both C and Fortran programs may
require the use of packed float arrays.
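A minimal sketch of what that looks like at the Lisp level (this is
just standard CLtL MAKE-ARRAY; whether the elements are truly packed
is up to the implementation, though any serious one packs them):

  ;; Scalars default to the one first-class (double) format, but a
  ;; specialized array stores its SINGLE floats unboxed, 32 bits
  ;; apiece -- the same layout a C "float[]" or Fortran REAL*4 array
  ;; expects at the foreign-function boundary.
  (let ((a (make-array 1024 :element-type 'single-float
                            :initial-element 0.0f0)))
    (setf (aref a 0) 1.5f0)  ; stored packed, no consing
    (aref a 0))              ; => 1.5f0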
I hope I haven't misremembered or misrepresented the Alliant concerns.
-- JonL --