pi and LIA

Mitch Alsup uunet!ross.com!mitch
Tue Jan 17 16:34:38 PST 1995


> From loyola!tdmuscs2!halley!cs.utexas.edu!uunet.uu.net!validgh!validgh.com!dgh%ghostwheel Tue Jan 17 15:19:43 1995
> Date: Sun, 15 Jan 95 23:06:09 PST
> From: loyola!dgh%validgh.com (David G. Hough at validgh)
> To: sc22wg11@dkuug.dk
> Subject: pi and LIA
> Cc: numeric-interest@validgh.com
> Content-Length: 14059
 
> A different problem confronts us now.    Probably the bulk of floating-point
> operations are performed in PC spreadsheet programs, for although each such PC
> performs few, such PC's are many in number.   Most technical floating-point
> operations are probably performed on workstations and workstation-derived
> servers, which are slower but far more numerous than supercomputers.   But the
> PC's, workstations, and most new supercomputer designs all use variations of
> IEEE 754 binary floating-point arithmetic.    So the job of porting mathematical
> software can be considered done for all time.
> 
> Well, not quite.   Different choices permitted within IEEE 754, different
> expression evaluation paradigms, and different libraries - not to mention
> gross and subtle bugs in optimizing compilers and in the underlying
> hardware - cause identical numerical results on different systems to
> still be the exception rather than the rule.   To the justifiable
> complaint of the typical floating-point user -
> now more likely an accountant or technician than a numerical analyst -
> technical support people often respond that that's just the way
> floating-point arithmetic is - an inherently unpredictable process,
> like the weather.    All these differences don't
> help most floating-point users, whose performance is most often limited by
> other factors to a far lower level than that at which 
> aggressive hardware and software optimizations can help; yet the differences
> induced by those optimizations affect all users, whether they obtain a
> performance benefit or not.    These differences confuse and delay the
> recognition of real significant hardware and software bugs that can 
> masquerade for a time as normal roundoff variations.
> 
> A useful arithmetic standard to address these current problems in computer
> arithmetic would prescribe the DEFAULT:
> 
> 1)	expression evaluation: what happens when the language does not
> 	specify the order of evaluation of (a op b op c), or how mixed-mode
> 	arithmetic is to be evaluated, or the precision of variables in
> 	registers or storage, and related issues.
> 
> 2)	correctly-rounded conversion between binary and decimal floating point:
> 	public domain code is available.
> 
> 3)	correctly-rounded elementary algebraic and transcendental functions:
> 	error bound 0.5, not 0.5+, in the terms of LIA-2.
> 
> Thus conforming implementations built upon
> IEEE 754 arithmetic would provide identical
> results for identical types; identical conforming implementations upon VAX,
> IBM 370, or Cray arithmetic would provide identical results among their
> brethren, and any discrepancies would be immediate evidence of some kind
> of hardware or software problem instead of "normal roundoff variation".
> 
> These are the basics, although one could imagine going further to specify
> the mappings between language types and IEEE 754 types, and details of 
> exception handling.
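
(To make item 1 above concrete - a minimal C sketch, assuming IEEE 754
double arithmetic.  The two possible groupings of a three-term sum give
different answers, and holding the intermediate in a wider x87 register
changes the answer again; the %.17g output at the end shows the 17-digit
round-trip that the correctly-rounded conversion of item 2 guarantees.)

    #include <stdio.h>

    int main(void)
    {
        /* Item 1: if a language or optimizer is free to regroup a + b + c,
         * or to hold the intermediate sum in a wider register, the result
         * changes. */
        double a = 1.0e16, b = -1.0e16, c = 1.0;

        double left  = (a + b) + c;  /* 0.0 + 1.0 = 1.0                      */
        double right = a + (b + c);  /* b + c rounds back to -1.0e16 when    */
                                     /* rounded to a double, so this is 0.0; */
                                     /* kept in an x87 register at full      */
                                     /* extended precision it is exact, and  */
                                     /* this becomes 1.0                     */

        printf("(a + b) + c = %g\n", left);
        printf("a + (b + c) = %g\n", right);

        /* Item 2: with correctly-rounded binary-decimal conversion,
         * 17 significant digits round-trip any double exactly. */
        printf("0.1 prints as %.17g\n", 0.1);
        return 0;
    }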

Has anyone tried to use high-precision FP arithmetic in Excel?  I tried a few
weeks ago and found that one cannot even input and output more than 16
decimal digits of precision.  This makes it impossible to feed Excel two FP
numbers differing in only their LSB; a difference of about 3 LSBs is the
smallest one can express.  For this application, I had to convert the FP
numbers into HEX and then into decimal on the machine that produced the data
to be graphed.
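
(A minimal C sketch of that kind of workaround - not the actual conversion
used, just an illustration assuming 64-bit IEEE doubles and a 64-bit
unsigned long long: two adjacent doubles print identically at 16 significant
digits, and only the 17-digit form or the raw hex bit pattern tells them
apart.  Compile with any C compiler, linking the math library for nextafter.)

    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    /* Print a double at 16 digits, at 17 digits, and as its raw bits. */
    static void show(double d)
    {
        unsigned long long bits;             /* assumes a 64-bit type     */
        memcpy(&bits, &d, sizeof d);         /* exact bit image of d      */
        printf("%.16g   %.17g   %016llx\n", d, d, bits);
    }

    int main(void)
    {
        double x = 0.1;
        double y = nextafter(x, 1.0);        /* adjacent double, 1 LSB up */

        show(x);    /* both lines print 0.1 in the 16-digit column ...     */
        show(y);    /* ... and differ only in the 17-digit and hex columns */
        return 0;
    }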

Another strange thing happened last week when I installed Microsoft's newest
C++ compiler.  When I selected the x87 compilation mode, the install software
made me also install either the emulation library or the alternate library
(presumably for machines without x87 functionality).  The Help file on this
noted that I must install one of the libraries, AND that the emulation/alternate
libraries can produce different results than the x87 produces.  If the compiler
cannot guarantee the accuracy of the FP operations, who can?  AND if different
PC's produce different results from the same binary (one via emulation, the
other via x87), then even the PC's do not agree on this Holy Grail.
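
(One mechanism behind such disagreements, sketched in a few lines of C under
the assumption of IEEE doubles: whether an intermediate sum is rounded to a
64-bit double at every step or kept in an x87 80-bit register decides the
answer.  Which of the two a given machine delivers depends on the code
generator, the library, and even the x87 precision-control setting.)

    #include <stdio.h>

    int main(void)
    {
        volatile double big = 1.0e16;  /* volatile blocks compile-time folding */
        double r = (big + 1.0) - big;

        /* If big + 1.0 is rounded to an IEEE double at each step (a library
         * working in pure double precision, or an x87 with precision control
         * set to 53 bits), the 1.0 is lost to round-to-nearest-even and r is
         * 0.  If the intermediate stays in an 80-bit register at full
         * extended precision, 1.0e16 + 1.0 is exact and r is 1. */
        printf("r = %g\n", r);
        return 0;
    }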

Mitch Alsup          On this Matter I am definitely speaking for Myself
mitch@ross.com       Work (512)892-7802X215      Home (512)328-4808
                      Fax          3036           Fax      306-1111





