hardware support for interval arithmetic
David G. Hough at validgh
dgh
Wed Oct 4 09:53:54 PDT 1995
I posted some comments on this subject last year (reproduced below) and
have been thinking more about some of the issues raised then.
Probably external intervals should be a separate data type from conventional
intervals, so that the interpretive overhead would not be borne by problems
that don't need it.
I wonder if the directed rounding instructions intended to implement intervals
{add,subtract,multiply,divide} to {left,right}
with fixed rounding direction independent of the current IEEE rounding mode,
should also ignore inexact, underflow, and overflow exceptions. The interval
paradigm takes account of these exceptions in a reasonable way, so that they
are no longer exceptional in the way they would be for scalar operations.
So perhaps these instructions should not set IEEE accrued bits for those
exceptions and should disregard IEEE trap-enable mode bits.
Similarly the interval max/min instructions might want to handle NaNs
differently from the way one might want more general max/min instructions
to work.
Without new directed rounding instructions,
one could attain somewhat better interval performance if mode setting were
made faster. Reading and writing the mode register stops the pipe on some
implementations, and it's worse when the mode register and the status register
are the same register and so have to be read and written every time something
needs to be changed, in order to preserve the fields that are not to be changed.
Date: Mon, 14 Mar 94 10:04:22 PST
From: dgh (David G. Hough at validgh)
Subject: hardware support for interval arithmetic
Assuming you had a language which supported conventional
intervals as data types, rather
than by subroutine calls, what underlying hardware implementation would
be most convenient?
It seems obvious that the addition/subtraction operations would be
add rounding to left
add rounding to right
subtract rounding to left
subtract rounding to right
with the rounding directions built into the opcodes rather than extracted
from mode bits that perhaps need to be saved/set/restored at great cost.
For this discussion, I'm assuming opcode space is free.
Multiplication is harder. If the number of scalar multiplications is
minimized, it's a rat's nest of cases, which would be very
expensive to execute inline. However a brute force approach would be to have
four instructions
multiply rounding to left
multiply rounding to right
max
min
with the intention that a general product [a,b]*[c,d] would be computed
lowerbound = min [ac, ad, bc, bd] each rounded to left
upperbound = max [ac, ad, bc, bd] each rounded to right
with eight multiplications and six min/max operations, but no explicit
conditionals. If a compiler noticed that b or d <=0, or a or c >=0,
it could generate
better code for the special case. The general case could exploit dual
multipliers if available in hardware, although a single pipelined multiplier
that could initiate a new operation on every cycle might suffice.
Min/Max are distasteful but probably acceptable in RISC design philosophy;
conditional move instructions would be a less difficult but less efficient
alternative. Perhaps less RISCy would be a multiply instruction that returned
two results, one rounded up and one down.
Division would be harder; it seems less clear that performing extra
divisions to avoid extra conditionals is a good idea. If the division
can be suitably pipelined, though, the extra divisions may not be too bad.
But properly handling division by intervals containing zero may be a problem.
That introduces
the question of whether "external" intervals ought to be supported, as Kahan
proposed in 1965:
[a,b] means a<=x<=b if a<=b
[a,b] means a<=x union x<=b if a>b
This extends intervals to neatly encompass division by zero, for continued
fractions particularly, but at the cost of inducing a fair amount of
interpretive overhead even if all intervals are normal.
The primary impediment to interval methods at present, I believe,
is lack of availability in widely available compilers such as GCC, or
better, GNU Fortran when it is ready. But if interval methods were available, then
people might notice that current hardware implementations are gratuitously
expensive compared to scalar arithmetic
since most are based on dynamic rounding modes that can only be
changed by clearing the floating-point pipeline.