double rounding in x86

Samuel A. Figueroa uunet!SLINKY.CS.NYU.EDU!figueroa
Thu Aug 31 12:20:14 PDT 1995


In a previous message, Michael Meissner <uunet!cygnus.com!meissner@uunet.uu.net>
writes:
  >Given that the x86 floating point is usually slower in real programs
  >than most other CPUs, it already is a second class citizen for
  >scientific computing.  Its only saving grace is that it is cheap and
  >available everywhere.  I believe that a lot of the slowness is due to
  >the fact that the stack architecture is hard to generate optimal code
  >for.  Another factor may be that doing calculations in 80-bit precision
  >slows things down compared to 64-bit precision.
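
Incidentally, those 80-bit intermediates are also where the double rounding
of the subject line comes from: a result is first rounded to the x87's 64-bit
significand in a register, and then rounded a second time to 53 bits when it
is stored to a double.  Here is a minimal C sketch of a case where the two
roundings disagree with a single rounding to double (assuming the compiler
actually evaluates the sum on the x87 at its default 64-bit precision
setting, rather than in straight double arithmetic):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* b = 2^-53 + 2^-78 is exactly representable as a double. */
        volatile double a = 1.0;
        volatile double b = ldexp(1.0, -53) + ldexp(1.0, -78);
        volatile double s = a + b;

        /* Rounded once to double precision, a + b is 1 + 2^-52.  On the
           x87, the exact sum first rounds to 1 + 2^-53 in the 64-bit
           register; storing that to a double rounds a second time
           (ties to even) and gives exactly 1.0. */
        printf("s - 1 = %g (a single rounding would give %g)\n",
               s - 1.0, ldexp(1.0, -52));
        return 0;
    }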

Not having performed any tests to determine this, I wonder how much extended
precision buys you in terms of being able to use faster or different
algorithms (or faster versions of the same algorithms), and whether this can
make up for the x86's slowness.  I would be interested to know if anyone has
any ideas on this.
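
One concrete example would be a long running sum: an 80-bit accumulator
carries 11 extra significand bits, so for sums of up to a few thousand
doubles a plain one-add-per-element loop should be roughly as accurate as
compensated (Kahan) summation done entirely in double, which spends four
floating-point operations per element.  Here is a sketch of the two,
assuming long double maps to the x87's 80-bit format and the accumulator
stays in a register:

    #include <stddef.h>

    /* Plain summation with an extended-precision accumulator.  The
       partial sums carry 64 significand bits instead of 53, so the
       accumulated rounding error is about 2^11 times smaller than in
       a pure-double loop. */
    double sum_extended(const double *x, size_t n)
    {
        long double acc = 0.0L;
        size_t i;
        for (i = 0; i < n; i++)
            acc += x[i];
        return (double)acc;   /* one final rounding to double */
    }

    /* Compensated (Kahan) summation in pure double, for comparison:
       similar accuracy, but four adds/subtracts per element. */
    double sum_kahan(const double *x, size_t n)
    {
        double s = 0.0, c = 0.0;
        size_t i;
        for (i = 0; i < n; i++) {
            double y = x[i] - c;
            double t = s + y;
            c = (t - s) - y;  /* low-order bits lost when forming t */
            s = t;
        }
        return s;
    }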

- Sam Figueroa (figueroa@cs.nyu.edu)

