(SC22WG11.419) pi and LIA

Stuart McDonald uunet!hplabs.hpl.hp.com!mcdonald
Wed Jan 18 13:04:46 PST 1995


>>        You can put the Intel chip in a mode where it computes in
>>        64-bit precision.

>1.  Can all library routines in libc.a and libm.a be trusted to do
>double rather than extended precision arithmetic when you put the FPU
>in this mode?

        No.  On extended-based architectures, where there is no reason
        not to do all floating-point ops in extended, it is quite common
        to write one math library, not three, when supporting single, double,
        and extended transcendentals.  That is, software transcendentals
        on extended-based architectures may ignore precision control and
        always deliver extended results.
 
>2.  Can they all be trusted to leave the precision where you set it?

        Yes.  (It is not in the spirit of IEEE-754/854 to do otherwise.)

>In short, are libraries standardly designed to be transparent to the
>current rounding mode of the FPU?

        Rounding direction and precision control are two separate things.
        Precision control, which your questions 1) and 2) address,
        is the ability to simulate IEEE float-only and double-only
        architectures.  IEEE rounding modes are towards +/- infinity,
        to nearest, and towards zero.

        80-bit transcendental math _libraries_ typically ignore both
        the current settings of precision control AND rounding direction,
        and compute everything to 80-bit extended assuming the IEEE
        default of round-to-nearest.  _Inlined_ hardware transcendentals
        are another matter, since a good compiler can be given hints to
        generate the correctly suffixed opcode for the desired precision.

        (Aside:  Simulating float-only and double-only calculations
        on extended-based systems is plagued with difficulties like
        question 1.  More subtle problems arise because floating-point
        loads are typically non-arithmetic, i.e. don't honor precision
        control.  If the compiled code happens to have embedded
        floating-point constants with precision greater than the
        precision one is trying to simulate, strange things can happen
        during floating-point comparisons depending upon _when_ the
        compiler decides to touch that constant arithmetically in
        the generated code.)

        (Double aside:  It is possible, as a math library writer, to
        go beyond the call of duty and try to honor precision control
        and/or rounding direction in the transcendentals.  One common
        approach is to inherit the user's environment
        only during the very last floating-point op of the transcendental
        approximation.  The last op typically contributes most of the
        Unit in the Last Place (ULP) error, so honoring precision control
        and/or rounding direction at that point goes a long way towards
        faking it.)

-Stuart McDonald



More information about the Numeric-interest mailing list