comments on Clinger and Steele/White papers

David G. Hough dgh
Tue Jun 19 07:32:38 PDT 1990


Clinger's algorithms require extended precision in hardware in order to
attain fast average execution time.  The most interesting sentence in the
Steele/White paper is "A portable and performance-tuned implementation
in C is in progress".  Clinger mentions that "The most practical solution
[to the problem of computing large powers] seems to be a pre-computed table of
powers, containing the range of powers that is apt to occur in practice.
Even when limited to the range needed for reasonable inputs, the table
of powers may be fairly large.  The size of the table can be reduced,
at the expense of accuracy, by factoring it into two smaller tables."
Accuracy need not be compromised by factoring the table, but performance
will be: keeping the result correctly rounded after the extra
multiplication takes more work than a single table lookup.
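
For concreteness, here is a rough sketch in C of the factored scheme
Clinger describes; the table sizes and names are mine, not his.  Note
that entries above 1e22 are themselves only correctly rounded, since
10^22 is the largest power of ten exactly representable in IEEE double.

    /*
     * Sketch of a factored power-of-ten table: 10^(16i+j) is
     * computed as 10^(16i) * 10^j.  A full table stores one
     * correctly rounded entry per exponent; the factored form is
     * smaller, but the final multiplication can add one more
     * rounding error of up to half an ulp, so preserving correct
     * rounding requires extra (slower) work at that step.
     */
    static const double small_powers[16] = {
        1e0,  1e1,  1e2,  1e3,  1e4,  1e5,  1e6,  1e7,
        1e8,  1e9,  1e10, 1e11, 1e12, 1e13, 1e14, 1e15
    };
    static const double big_powers[] = {
        1e0, 1e16, 1e32, 1e48, 1e64, 1e80, 1e96, 1e112
        /* ... out to the largest exponent expected in practice */
    };

    double pow10_factored(int e)   /* 0 <= e < 16 * table length */
    {
        /* one extra rounding possible here; none with a full table */
        return big_powers[e / 16] * small_powers[e % 16];
    }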

In other news:

> Are you quite sure NCEG "needs to provide perfect conversions for
> those that need it"?  I.e., I don't think "someone says they need it"
> is a strong enough reason for mandating it as a primitive -- else NCEG
> will surpass Fortran 90 in complexity by the end of next week <grin>.

The complexity of Fortran 90 falls on the user's side as well as the
implementer's; correctly rounded base conversion, by contrast, is much
simpler for the user to understand and is only more trouble for the
implementer.

> I hope that whatever NCEG does here it
> makes it very clear.  E.g., I don't think the 754 text was at all
> clear about the intended handling of inexact on >17 digit conversions,

Regardless of the clarity of 754 and 854, requiring correct rounding simplifies
the document and continues to satisfy 754 and 854.  Correct rounding puts
base conversion on an equal footing with every other 754 and 854
operation: one numerical value is converted to another, observing the
current rounding mode when the result is inexact, and setting exception
status (and possibly trapping) on inexact, underflow, overflow, and
invalid operations.
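
As a sketch of what that equal footing would mean in practice, here is
a small C example.  It assumes the <fenv.h> exception-flag interface
and a strtod that actually meets the correct-rounding requirement;
neither assumption holds everywhere.

    #include <fenv.h>
    #include <stdio.h>
    #include <stdlib.h>

    #pragma STDC FENV_ACCESS ON

    /* A correctly rounded strtod should raise inexact exactly when
       the decimal string is not representable in the destination
       format, just as an add or a multiply would. */
    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double x = strtod("0.1", NULL);   /* not representable in binary */
        if (fetestexcept(FE_INEXACT))
            printf("inexact raised: 0.1 rounded to %.17g\n", x);

        feclearexcept(FE_ALL_EXCEPT);
        double y = strtod("0.5", NULL);   /* exactly representable */
        if (!fetestexcept(FE_INEXACT))
            printf("no inexact: 0.5 converted exactly to %g\n", y);
        return 0;
    }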

> >  If you can tell the
> >  magnitude of a number by how long it takes to print it out, that opens
> >  up a security hole in formerly secure systems.

I suppose here that one somehow knows that a floating-point number is
being output and then obtains a timing for that operation.  A system
that permits all that to be known to a hostile observer is already
somewhat compromised, I would think.  In any case the observer must
infer the size of the number from the timing, and that will only work
for large exponents converted by the simplest non-table-driven
algorithms.  In other situations the base-conversion time wouldn't
correlate well with magnitude.
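
To see the distinction, consider two ways of computing the needed
power of ten (illustrative C, with made-up names).  Naive repeated
scaling takes time proportional to the decimal exponent and so leaks
its rough magnitude; binary exponentiation takes only a few steps
whether the exponent is 40 or 400, and a table lookup is essentially
constant time, so neither correlates usefully with magnitude.

    /* time grows linearly with e: a timing observer could estimate
       the exponent's magnitude from it */
    double pow10_naive(int e)
    {
        double p = 1.0;
        while (e-- > 0)
            p *= 10.0;
        return p;
    }

    /* time grows only with log2(e): timing reveals little */
    double pow10_binary(int e)
    {
        double p = 1.0, b = 10.0;
        while (e > 0) {
            if (e & 1)
                p *= b;
            b *= b;
            e >>= 1;
        }
        return p;
    }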

And, as somebody else mentioned, factoring out other random I/O delays
might be difficult.



