floating-point data types

Jon L White uunet!lucid.com!jonl%kuwait
Thu Jun 6 09:29:02 PDT 1991


re: Most physical data is only good to single precision, so 32 bits is ample
    to store it; more is wasteful. . . . 
    Since the data is good to 32 bits, most algorithms could use 32-bit 
    arithmetic most of the time, with occasional extensions to higher 
    precision at critical places.  However, recognizing the critical places 
    in complicated programs is not so easy, so it may be cheaper simply 
    to pay for the hardware than to pay for the mental analysis.   

Or, "cheaper than hiring a numerical analyst to figure out the RIGHT
answer"!  Trying to squeeze each-and-every program variable down to
its minimal (significand) size may be penny-wise and pound-foolish.
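
As an aside, the "occasional extensions to higher precision at critical
places" that the quoted message describes are cheap to express in Common
Lisp.  A rough sketch only -- the function name and the data layout are
made up for illustration: the samples sit in a packed single-float array,
while the one critical place, the running sum, is carried in a
double-float and ordinary float contagion does the rest:

    ;; Sketch: data stored at 32 bits, the accumulation done in double.
    (defun mean-of-samples (v)
      (declare (type (simple-array single-float (*)) v))
      (let ((sum 0.0d0))
        (declare (type double-float sum))
        (dotimes (i (length v))
          (incf sum (aref v i)))      ; single + double is promoted to double
        (/ sum (length v))))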

Our decision for Lisp to provide packed arrays for 32-bit storage, but 
not to bother with finer-than-double variations for individual 
variables, focuses on the one place where storage size really counts --
in huge arrays.  Indeed, one would expect that time-critical operations 
over such arrays -- such as the graphics and signal processing you
mentioned -- would either be done by independent hardware co-processors,
or (in Lisp's case) by "foreign" function libraries especially optimized
by good analysts/programmers.  
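
To make the storage point concrete, here is a sketch of what such a
packed array looks like in Common Lisp (the name and size are invented
for illustration).  A million single-float samples occupy roughly 4
megabytes, instead of the 8-plus megabytes that individually boxed
double-floats would take:

    ;; Illustrative only: one big packed array of single-floats.
    (defparameter *samples*
      (make-array (* 1024 1024)
                  :element-type 'single-float
                  :initial-element 0.0f0))

    ;; An implementation with packed single-float arrays reports:
    ;;   (upgraded-array-element-type 'single-float)  =>  SINGLE-FLOAT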


re: I quickly learned that whenever anybody asked for a double-precision 
    matrix inversion routine (for the CDC 6400) they were really doing 
    linear least squares in the worst possible way.

Verrry interesting.  What may be lacking is wider availability of
guidance on how to assess software packages.  Not being in that world
myself, I wonder whether there are generally accepted standards for
the quality of LINPACK-like routines?


-- JonL --


