hardware support for interval arithmetic

David G. Hough at validgh dgh
Mon Mar 14 10:04:22 PST 1994


Assuming you had a language which supported conventional intervals as
built-in data types, rather than through subroutine calls, what underlying
hardware implementation would be most convenient?

It seems obvious that the addition/subtraction operations would be

	add rounding to left
	add rounding to right
	subtract rounding to left
	subtract rounding to right

with the rounding directions built into the opcodes rather than extracted
from mode bits that perhaps need to be saved/set/restored at great cost.
For this discussion, I'm assuming opcode space is free.
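
To see what those opcodes would buy, here is a minimal sketch of how the
same interval add must be synthesized today with a dynamic rounding mode.
It's written in C with the fenv.h interface standing in for the mode bits,
purely for illustration; the interval type is my own invention:

	#include <fenv.h>
	#pragma STDC FENV_ACCESS ON

	typedef struct { double lo, hi; } interval;

	/* Interval addition via a dynamic rounding mode.  Note the
	   save/set/set/restore traffic, which dedicated round-to-left
	   and round-to-right add opcodes would eliminate entirely. */
	interval interval_add(interval x, interval y)
	{
	    interval r;
	    int saved = fegetround();    /* save caller's mode */
	    fesetround(FE_DOWNWARD);     /* round to left */
	    r.lo = x.lo + y.lo;
	    fesetround(FE_UPWARD);       /* round to right */
	    r.hi = x.hi + y.hi;
	    fesetround(saved);           /* restore caller's mode */
	    return r;
	}

Each fesetround call here is exactly the mode-bit traffic that can cost
far more than the additions themselves on current pipelines.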

Multiplication is harder.   If the number of scalar multiplications is
minimized, it's a rat's nest of cases, which would be very
expensive to execute inline.   However, a brute-force approach would be
to have four instructions

	multiply rounding to left
	multiply rounding to right
	max
	min

with the intention that a general product [a,b]*[c,d] would be computed as

	lowerbound = min [ac, ad, bc, bd] each rounded to left
	upperbound = max [ac, ad, bc, bd] each rounded to right

with eight multiplications and six min/max operations, but no explicit
conditionals.    If a compiler noticed that b or d <=0, or a or c >=0,
it could generate
better code for the special case.   The general case could exploit dual
multipliers if available in hardware, although a single pipelined multiplier
that could initiate a new operation on every cycle might suffice.
Min/Max are distasteful but probably acceptable in RISC design philosophy;
conditional move instructions would be less difficult to add but also less
efficient.    Perhaps less RISCy would be a multiply instruction that
returned two results, one rounded up and one down.
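
For concreteness, the brute-force product might be emulated as follows,
again with fenv.h rounding control standing in for the proposed
directed-rounding multiply opcodes and C's fmin/fmax standing in for the
min/max instructions:

	#include <fenv.h>
	#include <math.h>
	#pragma STDC FENV_ACCESS ON

	typedef struct { double lo, hi; } interval;

	/* Brute-force interval product: eight multiplications and six
	   min/max operations, with no tests on the operands' signs. */
	interval interval_mul(interval x, interval y)
	{
	    interval r;
	    fesetround(FE_DOWNWARD);          /* round to left */
	    r.lo = fmin(fmin(x.lo * y.lo, x.lo * y.hi),
	                fmin(x.hi * y.lo, x.hi * y.hi));
	    fesetround(FE_UPWARD);            /* round to right */
	    r.hi = fmax(fmax(x.lo * y.lo, x.lo * y.hi),
	                fmax(x.hi * y.lo, x.hi * y.hi));
	    fesetround(FE_TONEAREST);         /* back to the default mode */
	    return r;
	}

With the four proposed instructions the rounding-mode writes disappear and
the body becomes a straight-line run of eight multiplies and six min/max
operations.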

Division would be harder; it seems less clear that performing extra
divisions to avoid extra conditionals is a good idea.    If the division
can be suitably pipelined, though, the extra divisions may not be too bad.
But properly handling division by intervals containing zero may be a problem.
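
By analogy with the product sketch above, the zero-free case could go like
this; the assert marks the assumption that the divisor excludes zero, since
a divisor straddling zero is exactly the problem just noted:

	#include <assert.h>
	#include <fenv.h>
	#include <math.h>
	#pragma STDC FENV_ACCESS ON

	typedef struct { double lo, hi; } interval;

	/* Brute-force interval quotient: eight divisions and six min/max
	   operations, valid only for a divisor that excludes zero. */
	interval interval_div(interval x, interval y)
	{
	    interval r;
	    assert(y.lo > 0.0 || y.hi < 0.0);   /* zero-free divisor */
	    fesetround(FE_DOWNWARD);            /* round to left */
	    r.lo = fmin(fmin(x.lo / y.lo, x.lo / y.hi),
	                fmin(x.hi / y.lo, x.hi / y.hi));
	    fesetround(FE_UPWARD);              /* round to right */
	    r.hi = fmax(fmax(x.lo / y.lo, x.lo / y.hi),
	                fmax(x.hi / y.lo, x.hi / y.hi));
	    fesetround(FE_TONEAREST);           /* back to the default mode */
	    return r;
	}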

That introduces
the question of whether "external" intervals ought to be supported, as Kahan
proposed in 1965:

	[a,b]	means a <= x <= b		if a <= b
	[a,b]	means a <= x union x <= b	if a > b

This extends intervals to neatly encompass division by zero, particularly
useful for continued fractions, but at the cost of inducing a fair amount
of interpretive overhead even when all intervals are normal.
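
For instance, under this convention the reciprocal of an interval
straddling zero comes out as a single external interval:

	1/[-2,4] = [1/4,-1/2]	meaning 1/4 <= x  union  x <= -1/2

so a continued-fraction step can proceed without a special-case branch.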

Back to multiplication - are there better approaches?
What conclusions have been reached by those who have studied these matters
in depth?    The primary impediment to interval methods at present, I believe,
is lack of availability in widely available compilers such as GCC, or better
yet GNU Fortran when it is ready.    But if interval methods were available, then
people might notice that current hardware implementations are gratuitously
expensive compared to scalar arithmetic
since most are based on dynamic rounding modes that can only be
changed by clearing the floating-point pipeline.

The correctly-rounded scalar product approach would be even more expensive
in hardware, I suspect, although the final verdict on cost-effectiveness would
require comparison of comparable hardware and software implementations, which
we're not likely to see soon.


