ANSI C & perfect decimal <-> fp conversions
Tim Peters
uunet!ksr!tim
Thu May 31 23:44:13 PDT 1990
> [TMacD]
> [proves his point from section 3.1.3.1 of the ANSI C std]
OK, Tom -- I'm convinced & I owe you a pie.
Another layer of confusion: I read that section of the C std as
constraining only the accuracy of decimal->fp conversion performed *by*
the compiler on floating literals appearing in the program text. I
read the "perfect rounding" section in the NCEG paper as constraining
not only that, but also the accuracy of conversions in both directions
performed by the std library routines (printf, scanf, atof, etc). Does
the ANSI std actually constrain more than the compiler? Or does the
NCEG proposal actually constrain just the compiler? (note in passing:
if the NCEG proposal does constrain the libraries too, it seems to go
beyond 754's section 5.6 not only in accuracy but also in requiring that
compile-time and run-time conversions yield the same results).
If the perfection requirement just applies to compile-time conversion, I
don't care if perfection takes 100X longer.
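To make the distinction concrete, here's the kind of test I have in
mind -- a minimal sketch (the names are mine, nothing blessed by any
std); whether these two conversions must agree is exactly the question:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double compiled = 0.1;          /* converted by the compiler */
        double runtime  = atof("0.1");  /* converted by the library  */

        /* If the std constrains both, these must compare equal; if it
         * constrains only the compiler, all bets are off at run time.
         */
        printf(compiled == runtime ? "same\n" : "different\n");
        return 0;
    }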
> [keith b]
> We shouldn't forget the modern problem of networking ... it is highly
> desirable to take the results of computation from machine to machine
> (perhaps via RPC's). Back when we used tapes and SneakerNet (real JPL
> deep space processing at one time!) the cost of moving data around was
> so high that it was seldom done ....
>
> but with ethernet and high performance networks one is very, very
> tempted to move computation from node to node.
I agree there are real benefits to perfect conversions, but the
networking benefits specifically would (I think) carry a lot more weight
if the proposal were being made under the auspices of a group aiming to
improve the usefulness of C in networked applications/environments. I
don't know how important networking considerations are under NCEG's
charter. I.e., benefits & costs are relative to the players involved,
and it's at least conceivable to me that networking benefits are no more
relevant to the NCEG effort than would be the also-real benefits of
mandating arbitrary-precision rational arithmetic. Networking and NCEG
aren't an *obvious* match because the arguments don't start out with "we
have to supply this because Fortran does ..." <grin>. On the other
hand, NCEG could score a major advantage over Fortran 90 by anticipating
the growing importance of networked solutions to numeric problems.
Alas, would that it were less fuzzy.
> ...
> If we don't tighten up these loose ends, moving code from even IEEE
> node to IEEE node will result in not so subtle problems in a wide
> variety of applications.
No argument, but I think the proper place to fix 754 problems is in
revisions to 754; it's not really a *C* issue.
> [david h]
> [info on obtaining Coonen's PhD thesis and Sterbenz's "Floating-Point
> Computation"]
Many thanks, David! I'll pursue it.
> > [tim, griping about speed]
> [dgh]
> How about "doesn't have significant performance impact on realistic
> applications". The sloppy methods you are using now probably don't
> meet 754 requirements for normal exponents anyway.
Bell's First Law of supercomputer design is simply "everything counts";
vendors ignore it at their financial peril. By definition, a realistic
application is any nightmare any customer wants to run, and I really
don't know of any other definition that cuts it in this field. But I'm
willing to give up *some* speed here, and I'm even willing to have the
documentation for the unformatted I/O routines printed in large type so
that customers finally notice 'em <grin>.
Darned right the sloppy methods I'm using now don't meet normal 754
requirements -- they're whatever came with the Berkeley UNIX(tm)
distribution, and probably don't even meet Cray's stds <grin>.
> Aside from the supercomputer arena, speed seldom conflicts with
> accuracy, if the accuracy is designed in from the beginning by
> sufficiently clever people.
Agreed that you can usually get what you design for, and add that I
think the appalling state of fp arithmetic these days is a consequence
of putting accuracy at the bottom of the design list. Agreed too that
the supercomputer arena plays by unusual rules; what we may not agree on
is whether NCEG should cater to the supercomputer players too <0.9
grin>.
> The TI 8847 division and sqrt are good examples.
And also good examples of implementations unsuitable for supercomputers
(because the 8847 fdiv/sqrt don't pipeline -- the supercomputer game is
more concerned with maximizing issue rate than minimizing latency).
> The only place in 754 where a tradeoff between speed and accuracy is
> permitted is in the one specific instance of base conversion of large
> exponents.
Right (although it doesn't require that compile-time & run-time
conversions work the same either, regardless of exponent size ("should"
!= "shall", right?)). I was thinking along the more general lines of
the tradeoffs 754 permits between The Right Thing and Speed and/or Cost.
E.g., look at its large number of "shoulds" vs "shalls" vs "recommends"
("let's see, that was 32 predicates that could be formed, 26 that are
named, 6 that I have to supply, 1 more that I 'should' supply (oops, I
guess the 'negation clause' says I should supply 6 (or 7?) more beyond
that), and at least two more that are 'recommended' ... this is portable
<grin>?!"); or look at its odd permissiveness in how underflow may be
detected. The only reason I can think of for these concessions to The
Wrong and/or The Lazy is that even 754 had to draw lines somewhere on
"practicality" grounds.
> ... [networking benefits accruing from true portability] ...
> I suspect that once quadruple-precision hardware is available as
> envisioned by (at least) the IBM 370, VAX, HP PA, and SPARC
> architectures, then correctly-rounded double precision elementary
> transcendental functions will be a routine expectation.
Honest -- I'm not against it. But I have yet to hear anyone *ask* for
it.
> If that prospect worries anybody on this mailing list, then they may
> want to take steps to support people working on publicly available
> software in this area.
Take a personal check <smile>?
> > [... tim claiming that cray customers griped about addition most]
> Most of the problems are caused by division, and to a lesser extent
> multiplication, because it's very difficult to predict what the
> answer will be.
??? While I'm a compiler jockey by trade, due to a sequence of
historical accidents I became a lightning rod for customer complaints
about CRI's fp arithmetic, and it really was the case that addition
gripes outnumbered all others put together by an easy 10-to-1 ratio. It
was certainly the case that *numerical analysts* griped more about
multiply and (especially) divide, but 99% of Cray's customers knew less
about numerical analysis than even I do. I.e., the NA's were
understandably irked by the inability to come up with clean error
formulas for CRI's fmul/fdiv, which made sharp a priori analysis
extremely difficult (in practice, impossible). But most customers
really didn't care. The fmul and fdiv problems simply didn't hurt them
often in *practice*, but they were routinely thrust into difficulties by
the drift in a long chain of adds. Since they continued to get results
they were willing to pay $20 million for, and continued to gripe about
the same things, over time I grudgingly came to believe that --
numerically naive 'tho many/most clearly were -- they really knew what was
actually important. That raises some fascinating issues, but I've
rambled too long here already ...
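For concreteness, the flavor of drift I mean is easy to reproduce on
just about anything -- a toy C sketch, using binary 0.1's representation
error as a stand-in for CRI's unrounded adds:

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        long   i;

        /* 0.1 has no exact binary representation, so every add throws
         * in a tiny error; over a long chain the errors drift.
         */
        for (i = 0; i < 1000000; i++)
            sum += 0.1;

        printf("%.17g\n", sum);  /* close to, but not exactly, 100000 */
        return 0;
    }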
Just as a point of interest, the only repeated "customer-type" gripes
I ever heard about CRI's division were after stuff like the Fortran
      X = 6.
      Y = 3.
      I = X/Y
      PRINT *, I
printed "1" due to a too-small reciprocal and/or a too-small multiply
leading to the fractional portion of a bit-too-small quotient getting
truncated away by the conversion. People were (understandably!) *livid*
about this when it happened, but overall I was more surprised by how few
people brought it up (but maybe they figured it wouldn't do any good to
gripe to me <grin>).
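The same trap is just as easy to fall into in C -- a sketch, with an
explicit fudge standing in for a quotient that comes back a hair under
2.0:

    #include <stdio.h>

    int main(void)
    {
        double x = 6.0, y = 3.0;

        /* Pretend the division delivers 1.999... instead of exactly
         * 2.0 (simulated here by subtracting a tiny amount).  C's
         * float->int conversion truncates toward zero, so you get 1.
         */
        double q = x / y - 1e-15;
        int    i = (int)q;

        printf("%d\n", i);  /* prints 1, to the user's horror */
        return 0;
    }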
> I think Cray will have to deal with the issue eventually because of
> network computing considerations as mentioned earlier.
Probably so.
> The main point again is that IEEE systems from diverse vendors
> benefit from correctly-rounded base conversion in a network computing
> environment, but VAX, IBM 370, and Cray systems will still get
> different results from IEEE and each other no matter how they do base
> conversion. So the value of constraining the proprietary
> floating-point architectures is less than the trouble it would take.
Ah, but don't you think you might be undervaluing the worth of perfect
conversions here? The monotonicity and "if you print it & read it back
you'll get back what you started with" guarantees perfect conversions
provide are worth quite a bit to the careful numerical programmer,
regardless of whose crazy arithmetic they're using -- even if they're
just using a laptop computer on a plane ride. *If* it's practical
("fast enough") then I think the general scientific community would eat
it up -- and it's something NCEG C could offer that Fortran doesn't. I
have no doubts about the benefits; I'm just concerned about the costs and
the propriety of mandating something that apparently *isn't* public art.
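The round-trip guarantee is at least easy to state as a test -- a sketch
assuming IEEE double, where 17 significant decimal digits suffice:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double before = 1.0 / 3.0;
        char   buf[32];

        /* With correctly-rounded conversions both ways, printing 17
         * significant digits and reading them back must recover the
         * original double exactly; sloppy conversions can miss.
         */
        sprintf(buf, "%.17g", before);
        printf("%s %s\n", buf,
               atof(buf) == before ? "round-trips" : "does NOT round-trip");
        return 0;
    }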
> [earl killian]
> Our routines, which are accurate over the full range, take 860 cycles
> to do
> ecvt (value, 17, &decpt, &sign);
> and 903 to do
> atof (s);
> using 64-bit integer arithmetic. About half the time in the atof call
> is parsing, and not conversion. The ecvt time is mostly conversion.
>
> Is that slow or fast?
Is that a rhetorical question <grin>? Seriously, from what I can see
(is 903 best case? worst case? mean? etc) it looks pretty good to me,
although at some 50+ cycles per decimal digit it's definitely on
the slow side (relative to the usual run of slop conversion algorithms).
But it's in the ballpark -- near what I'd be willing to put my head on
the line for, so I'm encouraged (thanks!). Now if you'll just sneak us
the source code under the table <smile> ...
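In the meantime, for anyone who wants to see where their own library
stands, this is roughly the harness I'd time it with -- a crude sketch
(clock() granularity and loop overhead are left as exercises; note that
ecvt is a UNIX-ism, not ANSI C):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    extern char *ecvt(double value, int ndigit, int *decpt, int *sign);

    int main(void)
    {
        double  value = 1.0 / 3.0;
        int     decpt, sign;
        long    i, n = 100000;
        clock_t t0, t1;

        t0 = clock();
        for (i = 0; i < n; i++)
            ecvt(value, 17, &decpt, &sign);
        t1 = clock();

        printf("ecvt: %g seconds/call\n",
               ((double)(t1 - t0) / CLOCKS_PER_SEC) / n);
        return 0;
    }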
not-too-proud-to-steal-ly y'rs - tim
Tim Peters Kendall Square Research Corp
tim@ksr.com, ksr!tim@harvard.harvard.edu