a query on hardware support for interval arithmetic

David G Hough at validgh
Sun Apr 5 14:03:23 PDT 1998


> John Funge has asked the following question:
> 
> From: John Funge <John_Funge at ccm.sc.intel.com>
> 
> ...
> I have become interested in hardware, and in particular in
> hardware support for interval arithmetic. I am interested in 
> finding out what could be provided above and beyond IEEE 754.  
> My search through the literature has 
> so far turned up a few proposals on extra hardware support, but 
> I would gladly welcome any additional pointers you could give me.
> ...
> 
> I myself found the following references (see below), plus some
> additional references on parallelization which, if I understand
> correctly, are somewhat beyond his question. If you know of any more
> references or, ideally, a bibliography, please let him know. 
> 
> Thanks.
> 
> Vladik
> 
> P.S. Please take into consideration that John Funge is now working at
> Intel, so his email has (rather) recently changed. His current email is
> given in the header of his email message.
> 
> U. Kulisch, G. Bohlender, ``Features of a hardware implementation
> of an optimal arithmetic'', In: U. Kulisch, W. L. Miranker (eds), {\it
> A new approach to scientific computation}, Academic Press, Orlando,
> FL, 1983, pp. 269--290. 
> 
> M. J. Schulte and E.  E.  Swartzlander, Jr.,
> ``Parallel Hardware Designs for Correctly Rounded
> Elementary Functions'', 
> {\it Interval Computations}, 1993, No. 4, pp. 65--88.
> 
> M. J. Schulte and E. E. Swartzlander, Jr.,
> ``Design and applications for variable-precision, 
>  interval arithmetic coprocessors'', 
> In: V. Kreinovich (ed.), 
> {\it Reliable Computing}, 1995, Supplement (Extended Abstracts of 
> APIC'95: International Workshop on Applications of Interval Computations, 
> El Paso, TX, Febr. 23--25, 1995),
> pp. 166--172.
> 
> G. Bohlender, ``What Do We Need Beyond IEEE Arithmetic?'', In:
> C. Ullrich (ed.), {\it Computer
> Arithmetic and Self-Validating Numerical Methods},
> Academic Press, N.Y., 1990, pp. 1--32.
> 
> M. J. Schulte and E. E. Swartzlander, Jr.,
> ``Exact Rounding of Certain Elementary Functions'', In: {\it Eleventh
> Symposium on Computer Arithmetic}, 1993, pp. 128--145.
> 
> M. J. Schulte and E. E. Swartzlander, Jr.,
> ``A software interface and hardware design for variable-precision
> interval arithmetic'',
> {\it Reliable Computing}, 1995, Vol. 1, pp. 325--342. 
> 
> (In my own paper with Slava Nesterov and two other co-authors:
> 
> Hung T. Nguyen, Vladik Kreinovich, Vyacheslav Nesterov, 
> and Mutsumi Nakamura,
> ``On hardware support for interval computations and for 
> soft computing: a theorem'', {\it IEEE
> Transactions on Fuzzy Systems}, 1997, Vol. 5, No. 1, pp. 108--127.
> 
> we mainly discuss the theoretical issue of which operations should be
> supported, justifying support for the scalar product).
> 

This may not be news to the inquirer, since the numeric-interest archives at

http://www.validgh.com/numeric-interest/numeric-interest.archive

have been searched from Intel sites several times, but the subject has
been discussed only lightly there.   I found the following after a
cursory examination; opinions may have changed since these postings.
One thing has become abundantly clear recently, due most of all to the
work of Doug Priest: one needs to figure out exception handling before
defining the interval data type, and that definition must come before
any hardware implementation.  But there is quite a diversity of
opinions about how exceptions should be handled.  Some past opinions
follow:

=====

Date: Mon, 14 Mar 94 10:04:22 PST
From: dgh (David G. Hough at validgh)
Subject: hardware support for interval arithmetic


Assuming you had a language which supported conventional intervals as
data types, rather than through subroutine calls, what underlying
hardware implementation would be most convenient?

It seems obvious that the addition/subtraction operations would be

        add rounding to left
        add rounding to right
        subtract rounding to left
        subtract rounding to right

with the rounding directions built into the opcodes rather than extracted
from mode bits that perhaps need to be saved/set/restored at great cost.
For this discussion, I'm assuming opcode space is free.
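
A minimal C99 sketch of what those four opcodes would compute (the
interval type and function names here are mine, for illustration).
Emulated with fesetround, each mode change is exactly the
save/set/restore traffic that built-in rounding directions would
eliminate:

        /* Sketch: semantics of the four proposed opcodes, emulated with
           C99 dynamic rounding modes.  [a,b]+[c,d] = [a+c, b+d] and
           [a,b]-[c,d] = [a-d, b-c], each bound rounded outward. */
        #include <fenv.h>
        #pragma STDC FENV_ACCESS ON

        typedef struct { double lo, hi; } interval;

        interval iadd(interval x, interval y) {
            interval r;
            fesetround(FE_DOWNWARD);         /* add rounding to left  */
            r.lo = x.lo + y.lo;
            fesetround(FE_UPWARD);           /* add rounding to right */
            r.hi = x.hi + y.hi;
            fesetround(FE_TONEAREST);
            return r;
        }

        interval isub(interval x, interval y) {
            interval r;
            fesetround(FE_DOWNWARD);    /* subtract rounding to left  */
            r.lo = x.lo - y.hi;
            fesetround(FE_UPWARD);      /* subtract rounding to right */
            r.hi = x.hi - y.lo;
            fesetround(FE_TONEAREST);
            return r;
        }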
 
Multiplication is harder.   If the number of scalar multiplications is
minimized, it's a rat's nest of cases, which would be very expensive to
execute inline.   However, a brute-force approach would be to have four
instructions
 
 
        multiply rounding to left
        multiply rounding to right
        max
        min

with the intention that a general product [a,b]*[c,d] would be computed as

        lowerbound = min [ac, ad, bc, bd] each rounded to left
        upperbound = max [ac, ad, bc, bd] each rounded to right

with eight multiplications and six min/max operations, but no explicit
conditionals.    If a compiler noticed that b or d <=0, or a or c >=0,
it could generate
better code for the special case.   The general case could exploit dual
multipliers if available in hardware, although a single pipelined multiplier
that could initiate a new operation on every cycle might suffice.
Min/Max are distasteful but probably acceptable in RISC design philosophy;
less difficult and less efficient would be conditional move instructions
instead.    Perhaps less RISCy would be a multiply instruction that returned
two results, one rounded up and one down.
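
A hedged C sketch of that brute-force recipe, reusing the interval type
from the sketch above; the min4/max4 helpers stand in for the proposed
max and min instructions:

        /* Sketch: branch-free interval multiply -- eight products and
           six min/max reductions, no conditionals on endpoint signs. */
        static double min4(double a, double b, double c, double d) {
            double m = a < b ? a : b, n = c < d ? c : d;
            return m < n ? m : n;
        }
        static double max4(double a, double b, double c, double d) {
            double m = a > b ? a : b, n = c > d ? c : d;
            return m > n ? m : n;
        }

        interval imul(interval x, interval y) {
            interval r;
            fesetround(FE_DOWNWARD);  /* four products rounded to left  */
            r.lo = min4(x.lo*y.lo, x.lo*y.hi, x.hi*y.lo, x.hi*y.hi);
            fesetround(FE_UPWARD);    /* four products rounded to right */
            r.hi = max4(x.lo*y.lo, x.lo*y.hi, x.hi*y.lo, x.hi*y.hi);
            fesetround(FE_TONEAREST);
            return r;
        }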

Division would be harder; it seems less clear that performing extra
divisions to avoid extra conditionals is a good idea.    If the division
can be suitably pipelined, though, the extra divisions may not be too bad.
But properly handling division by intervals containing zero may be a problem.

That introduces
the question of whether "external" intervals ought to be supported, as Kahan
proposed in 1965:

        [a,b]   means a<=x<=b           if a<=b
        [a,b]   means a<=x union x<=b   if a>b

This extends intervals to neatly encompass division by zero, for continued
fractions particularly, but at the cost of inducing a fair amount of
interpretive overhead even if all intervals are normal.
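
When the divisor excludes zero, division reduces to a directed-rounded
reciprocal followed by the multiply sketched above; what follows is a
hedged sketch under that assumption, again reusing the earlier interval
type.  A divisor straddling zero, e.g. 1/[-1,1], is exactly the case the
external form handles: the result is [1,-1], that is, x>=1 union x<=-1.

        /* Sketch: interval division via reciprocal, valid only when the
           divisor does not contain zero.  The enclosure may be an ulp
           wider than taking directed-rounded quotients directly. */
        interval irecip(interval y) {       /* 1/[c,d] = [1/d, 1/c]    */
            interval r;
            fesetround(FE_DOWNWARD);
            r.lo = 1.0 / y.hi;
            fesetround(FE_UPWARD);
            r.hi = 1.0 / y.lo;
            fesetround(FE_TONEAREST);
            return r;
        }

        interval idiv(interval x, interval y) { /* requires 0 outside y */
            return imul(x, irecip(y));
        }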

The primary impediment to interval methods at present, I believe, is
lack of availability in widely available compilers such as GCC, or
better yet GNU Fortran when it is ready.    But if interval methods were
available, then people might notice that current hardware
implementations are gratuitously expensive compared to scalar
arithmetic, since most are based on dynamic rounding modes that can
only be changed by clearing the floating-point pipeline.

=====

Date: Wed, 4 Oct 95 09:53:54 PDT
From: dgh (David G. Hough at validgh)
Subject: hardware support for interval arithmetic
 
 
I posted some comments on this subject last year (reproduced below) and
have been thinking more about some of the issues raised then.
 
Probably external intervals should be a separate data type from conventional
intervals, so that the interpretive overhead would not be borne on problems
that don't need it.
 
I wonder if the directed rounding instructions intended to implement intervals
 
        {add,subtract,multiply,divide} to {left,right}
 
with fixed rounding direction independent of the current IEEE rounding mode,
should also ignore inexact, underflow, and overflow exceptions.   The interval
paradigm takes account of these exceptions in a reasonable way, so that they
are no longer exceptional in the way they would be for scalar operations.
So perhaps these instructions should not set IEEE accrued bits for those
exceptions and should disregard IEEE trap-enable mode bits.

Similarly the interval max/min instructions might want to handle NaNs
differently from the way one might want more general max/min instructions
to work.

Without new directed rounding instructions,
one could attain somewhat better interval performance if mode setting were
made faster.   Reading and writing the mode register stops the pipe on some
implementations, and it's worse when the mode register and the status register
are the same register and so have to be read and written every time something
needs to be changed, in order to preserve the fields that are not to be changed.
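
For concreteness, a hedged C99 sketch of the per-operation shuffle being
described; where mode and status share one register, each of these calls
becomes a full read-modify-write of that register:

        /* Sketch: the save/set/restore traffic a single directed-rounded
           scalar operation costs with dynamic rounding modes. */
        #include <fenv.h>
        #pragma STDC FENV_ACCESS ON

        double add_rounding_left(double a, double b) {
            int old = fegetround();    /* read the mode register     */
            fesetround(FE_DOWNWARD);   /* write it back, modified    */
            double sum = a + b;
            fesetround(old);           /* restore the caller's mode  */
            return sum;
        }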

=====

Subject: Re: hardware support for interval arithmetic
Date: Mon, 14 Mar 94 15:53:48 PST
From: Bob Alverson <uunet!tera.com!bob>

Can't mode-bit shuffling be avoided by using negated versions of the
desired quantities?


        /* C sketch of the trick: store the lower bound negated, so both
           bounds are computed rounding the same way.  The caller does
           fesetround(FE_UPWARD) once and never touches the mode again. */
        typedef struct { double hi, lo; } Interval;   /* lo holds -min */

        Interval make(double max, double min) { return (Interval){ max, -min }; }

        /* -(a+c) == (-a)+(-c): both sums round toward +infinity */
        Interval add(Interval x, Interval y) {
            return (Interval){ x.hi + y.hi, x.lo + y.lo };
        }

        Interval negate(Interval x) { return (Interval){ x.lo, x.hi }; }

Multiply is still a morass, though.
 
Bob

=====

From: Vaughan Pratt <uunet!cs.Stanford.EDU!pratt>
Subject: Re: hardware support for interval arithmetic
Date: Thu, 05 Oct 1995 23:18:55 -0700


        That introduces the question of whether "external" intervals
        ought to be supported, as Kahan proposed in 1965:

        [a,b]   means a<=x<=b           if a<=b
        [a,b]   means a<=x union x<=b   if a>b

You have my strong yes vote on this.  You don't need it for polynomial
arithmetic (+ - *), but when you pass to rational arithmetic by
bringing in division you pass from affine geometry to projective
geometry.  The real line is made projective by filling in the "hole at
infinity" to complete it to a circle.  Kahan's proposal gives a neat way
to split that circle into two connected components such that either of
0 and Inf can belong to either component.

        This extends intervals to neatly encompass division by zero,
        for continued fractions particularly, but at the cost of
        inducing a fair amount of interpretive overhead even if all
        intervals are normal.

Not that big a deal.  The only increment in interpretive overhead is
for addition.  Subtraction and division can be reduced to negation and
reciprocal, whose effect on [a,b] is to map it to [-b,-a] and [1/b,1/a]
respectively, so that's very simple and leaves just addition and
multiplication.  Already with polynomial arithmetic, multiplication has
plenty of interpretive overhead (try it).  The only significant
complication added by the passage to rational arithmetic is to bring
the interpretive complexity of addition up to essentially the same
level as that of multiplication.  That's all there is to it.
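
In code terms the reduction is tiny; a hedged sketch, reusing the
interval type and the iadd/imul/irecip sketches quoted earlier in this
thread:

        /* Sketch: subtraction and division via negation and reciprocal.
           Negation is exact -- no rounding -- and the same formulas
           remain valid for external intervals. */
        interval ineg(interval x) { return (interval){ -x.hi, -x.lo }; }

        interval sub_via_neg(interval x, interval y) {
            return iadd(x, ineg(y));
        }
        interval div_via_recip(interval x, interval y) {
            return imul(x, irecip(y));
        }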

Transcendental arithmetic is more fun.

        Of course, mathematically the idea of distinct -0 and +0 values
        is pretty silly, since by any of the usual axiom systems one
        can quickly show that -0 = +0 and so there is no difference in
        value; why there should be a difference in representation when
        there is no difference in value may be a matter of convenience
        for some classes of computation, but it muddies the C standard
        which tries to dictate numeric properties only in terms of the
        resulting *values*.

Judging IEEE decisions by bringing in random mathematical facts is
pretty pointless given that IEEE addition is not associative to begin
with.  By mucking with the representation you can recover associativity
(Russian I think), but it is a nice result (Kahan I think) that you
*must* lose commutativity in the process!

IEEE users need practical rationalizations of the IEEE system.  The
rationalization I rely on, rightly or wrongly, is that IEEE numbers
form a finite subset of the projective real axis.  Unsigned Inf and
unsigned 0 are both missing (ok so I lied about filling in the hole),
but *nonstandard* (in the sense of A.  Robinson) approximants on either
side are provided (so it was a very white lie).  Standard mathematics
denies the existence of infinities and infinitesimals, but it requires
second order logic to say this.  Robinson's idea was to stick to the
language of first order logic so that infinities and infinitesimals
could be postulated and worked with without generating contradictions.
I haven't actually checked that signed 0 and Inf can be accounted for
in this way, but if not then I'd certainly want to know more about
which first order properties of the reals I was still allowed to
assume.  Those properties have no doubt been spelled out by Velvel
somewhere, yes?

Vaughan Pratt

=====

Date: Wed, 8 Nov 95 19:00:02 PST
From: dgh (David G. Hough at validgh)
Subject: report on hardware support for Interval Arithmetic


Prof. Dr. J. Wolff v. Gudenberg has explored various hardware options
for accelerating interval multiplication in a technical report which
can be obtained from

ftp://sunshine.informatik.uni-wuerzburg.de/pub/publications/tr125.ps.gz
 



