Quad Versus 80-bit Arithmetic

David G. Hough at validgh dgh
Mon Mar 13 09:59:58 PST 1995


The following was posted to the numerical analysis digest, which some but not
all numeric-interest readers follow. It's an interesting question, especially
in the context of future CPU designs, which may often have a RISC core plus
PC emulation enhancement hardware of some kind. It's my belief that "almost
as good as a PC" is not a winning long-term strategy for workstation vendors,
compared to "better than a PC"; yet although several RISC architectures
specify 113-bit quad IEEE floating-point formats, none has been implemented
in hardware.



 From: Alan Karp <karpahplahk2.hpl.hp.com>
 Date: Wed, 8 Mar 1995 14:09:43 -0800
 Subject: Quad Versus 80-bit Arithmetic

 I am conducting a survey of those people who need more precision than
 provided by IEEE double precision.  I would appreciate any insight you
 might have from your PERSONAL experience.  (We've got lots of hearsay
 evidence; I want something admissible in court.)

 1. Is your need met by a 64-bit mantissa?

 2. If you need more than 64 bits in the mantissa, how many more do you
   need?

 3. How important is the performance of the extra precision part of
   your code?

 4. At what ratio of performance to double precision would you consider
   using higher precision more often - never, only if the performance
   were the same as double precision, 2x slower, 10x slower, up to
   100x slower (e.g., I can't do without it)?

 A brief summary (100 words or fewer) of your application, why more
 precision is needed, and where the extra precision is needed (if not
 throughout) would be appreciated.

 If I get enough responses, I'll post a summary.
