IEEE signed zeros

David Hough uunet!Eng.Sun.COM!dgh
Mon Oct 29 09:09:04 PST 1990


> From: dik@cwi.nl (Dik T. Winter)
> Newsgroups: comp.compilers,comp.lang.fortran
> Subject: Re: IEEE 754 vs Fortran arithmetic
> Date: 25 Oct 90 00:16:59 GMT
> Reply-To: dik@cwi.nl (Dik T. Winter)
> 
> In the Ada Numerics Working Groups in Europe and in the US/Canada there is
> also discussion going on about -0.0 and +0.0 (Ada does not distinguish them
> like Fortran).  However, in a full implementation of IEEE including the
> recommendations the only way to distinguish -0.0 and +0.0 is by either
> calculating 1/X or by use of the copysign function.  Both ways do not
> conflict with the languages (in Fortran and Ada calculating 1/X for X=zero
> is undefined or defined to trap, depending on parameters, the Fortran
> copysign function is undefined for a zero sign argument).

I'm not aware of any Fortran copysign function other than
the Fortran-77 SIGN generic intrinsic function, which is perfectly well defined
for zero arguments:
	if a2 >= 0 then SIGN := ABS(a1) else SIGN := -ABS(a1)
and this differs from IEEE 754 copysign just when a2 is -0 or a2 is NaN.
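
To make the divergence concrete, here is a sketch in Python, whose math.copysign follows IEEE 754 copysign; fortran_sign is a hypothetical transliteration of the Fortran-77 SIGN definition above:

```python
import math

def fortran_sign(a1, a2):
    # Fortran-77 SIGN: if a2 >= 0 then ABS(a1) else -ABS(a1).
    # Note that -0.0 >= 0 is true, so SIGN treats -0.0 like +0.0.
    return abs(a1) if a2 >= 0 else -abs(a1)

# Agreement for ordinary arguments:
assert fortran_sign(3.0, -2.0) == -3.0
assert math.copysign(3.0, -2.0) == -3.0

# Divergence when the sign source is -0.0:
assert fortran_sign(3.0, -0.0) == 3.0     # -0.0 compares >= 0
assert math.copysign(3.0, -0.0) == -3.0   # copysign reads the sign bit
```

(For a NaN second argument the comparison in fortran_sign is false, so it returns -ABS(a1), while copysign follows whatever the NaN's sign bit happens to be.)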

> As Henry Spencer notes, Kahan has good arguments for -0.0 and +0.0 (not too
> surprising as he was one of the leading forces behind IEEE 754).  On the
> other hand in his use -0.0 stands for negative infinitesimal and +0.0
> for zero and positive infinitesimal.  In my opinion (and I am not the
> only one) it would have been better to have a single zero and two signed
> infinitesimals.  (And as an aside, there have been machines that had only
> representations for signed infinitesimals but not for zero, still not such
> a very bad idea.)

This is a common misconception.  IEEE 754 provides two representations for zero
and none for "infinitesimal".  754 old-timers will recall that Fraley and Walther
proposed one zero and two infinitesimals.  I implemented something similar in
my first microcode for the Tektronix 4061 around 1977, but gave up when I started
coding pow().  Underflow (infinitesimal) and overflow symbols are a poor man's partial
interval arithmetic, and turn out not to be worth the trouble.  But you have to
try it to believe it.  Kahan also thought they were a good idea until he pursued
the consequences, probably at Toronto in the early 1960's.  If you need interval
arithmetic you need all of it.

The two IEEE 754 representations for zero are equal and practically indistinguishable
by normal arithmetic means.  The sign bits tell you something extra besides the
value that may or may not be significant - that depends on the context, just as
the significance of the numerical value of any floating-point variable 
depends on the context: you might evaluate the result "1.0" differently if you
knew it was computed as a sum of positive quantities, as opposed to the case
where you knew it was computed as a difference of large positive quantities
on the order of 1.0e15, each subject to some uncertainty in the low-order bits.
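
A short illustration in Python (Python raises ZeroDivisionError instead of returning an infinity for 1.0/0.0, so the sign bit is read here with copysign and the raw bits with struct):

```python
import math
import struct

x, y = 0.0, -0.0

# The two zeros compare equal and are not ordered apart:
assert x == y
assert not (x < y or x > y)

# In IEEE 754 default mode, 1.0/x is +inf and 1.0/y is -inf;
# Python raises ZeroDivisionError instead, so use copysign:
assert math.copysign(1.0, x) == 1.0
assert math.copysign(1.0, y) == -1.0

# The two representations differ only in the sign bit:
assert struct.pack('>d', x).hex() == '0000000000000000'
assert struct.pack('>d', y).hex() == '8000000000000000'
```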

A signed zero created by underflow carries the sign of
the lost underflowed quantity.  It's up to the programmer to determine whether
that information can be usefully applied in a particular algorithm.  Similarly
a signed zero created by division by an infinity, which was in turn created by
overflow, carries the sign that would have been borne by the lost quantity.
This all works out fine in continued fractions and symmetric eigenvalue
problems, for instance.  There may be other algorithms in which exact zeros
can't arise and signed zeros can be usefully interpreted as infinitesimals.
Generally speaking IEEE 754 attempts to preserve signs of zeros where there
is a sensible interpretation for so doing.
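
The underflow and overflow cases can be checked directly in any IEEE 754 environment; in Python, for instance:

```python
import math

# Underflow: -1e-400 is below the smallest subnormal, so the product
# becomes a zero that keeps the sign of the lost quantity.
tiny = -1e-300 * 1e-100
assert tiny == 0.0
assert math.copysign(1.0, tiny) == -1.0

# Overflow to -infinity, then division: the sign survives the round trip.
big = -1e308 * 10.0
assert math.isinf(big) and big < 0
assert math.copysign(1.0, 1.0 / big) == -1.0   # 1/-inf is -0.0
```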

Kahan's scheme for interval arithmetic, proposed in 1965, envisions
conventional interior intervals and their complements, exterior intervals.
Signed zeros and infinities could be used in that context to indicate whether
intervals containing them are open or closed: [-1,+0] contains zero, while
[-1,-0] does not.  Until interval arithmetic is fully and efficiently implemented
in a programming language, it's hard to evaluate this approach.

There is a natural symmetry between signed zeros and signed infinities.
Both are somewhat more natural in real arithmetic than complex arithmetic.
Signs of zeros and infinities in complex arithmetic could be used for other
purposes, such as closing the definition of log or sqrt: complex analysis
books typically divide the principal domain of these functions along a curve
extending from the origin to infinity; the principal domain is closed from
a counter-clockwise approach and open from a clockwise approach.  Interpreting
the sign of the zero real part as a direction of approach allows both
boundaries of the domain to be "closed".  This has advantages in conformal
mapping, according to Kahan.  Conformal mapping is important in engineering
practice but I've never used it so I can't speak from my own experience.
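
Python's cmath module follows this convention: its branch cuts are continuous with the side indicated by the sign of the zero, so both edges of the cut along the negative real axis behave as "closed":

```python
import cmath
import math

# sqrt: approach the negative real axis from above (+0) or below (-0):
assert cmath.sqrt(complex(-4.0, 0.0)) == 2j
assert cmath.sqrt(complex(-4.0, -0.0)) == -2j

# log: the imaginary part lands on +pi or -pi accordingly:
assert cmath.log(complex(-1.0, 0.0)).imag == math.pi
assert cmath.log(complex(-1.0, -0.0)).imag == -math.pi
```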

Signed zeros don't cost much in hardware but they do have a minor software
cost: x - x and 0 * x can't be optimized away to zero unless the run-time
rounding mode and the run-time sign of x, respectively, are known.  I don't
lose any sleep over that, but programs that are generated automatically
may include expressions like x - x and 0 * x that are expected to be optimized
away.  I think the programs that generate those expressions should be fixed
(optimize as early as possible) but there are other points of view.
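
The point is easy to demonstrate; a sketch in Python of why folding 0 * x to +0.0 (or x - x to +0.0, ignoring rounding mode and infinities) is unsound:

```python
import math

# 0 * x: the sign of the zero result follows the sign of x,
# and for x = inf the result is not a zero at all.
assert math.copysign(1.0, 0.0 * 5.0) == 1.0
assert math.copysign(1.0, 0.0 * -5.0) == -1.0   # folding to +0.0 loses this
assert math.isnan(0.0 * math.inf)

# x - x: +0.0 in the default rounding mode (it would be -0.0 under
# round-toward-negative), and NaN when x is infinite.
assert math.isnan(math.inf - math.inf)
```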

> this is why the point
> came up in the Ada Numerics Working Groups.  We are working on the
> standardization of elementary mathematical functions in Ada.  The current
> status is that the basic package of functions like SINE, COSINE etc. is
> very near to standardization.  One of the functions included is:
> 	ARCTAN(Y, X)
> which returns the arctangent of Y/X.  (Fortran users will recognize the
> ATAN2 function.)  The specification tells us that the result is the range
> from -PI to +PI (approximately).  The problem is, what is the result of
> ARCTAN(Y, zero).  Does it depend on the sign of zero?  Offhand I do not
> know what the Fortran standard tells us.

A lot of the confusion about atan2 and hypot would be resolved if these
functions were understood in terms of their primary application, converting
rectangular to polar coordinates (or converting a grid coordinate to a
distance and direction).   From that point of view, preserving the
information in a signed zero is accomplished by reflecting it in the
sign of the direction associated with it (either +pi/2 or -pi/2). 
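
In Python, whose math.atan2 follows the usual C-library conventions, the rectangular-to-polar view works out as:

```python
import math

# Along the negative x-axis the sign of a zero y selects +pi or -pi:
assert math.atan2(0.0, -1.0) == math.pi
assert math.atan2(-0.0, -1.0) == -math.pi

# Along the positive x-axis the direction is a correspondingly signed zero:
assert math.copysign(1.0, math.atan2(-0.0, 1.0)) == -1.0

# And atan2(y, zero) for y > 0 is +pi/2 for either zero:
assert math.atan2(1.0, 0.0) == math.pi / 2
assert math.atan2(1.0, -0.0) == math.pi / 2
```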

The Fortran-77 standard is not very helpful: atan2(y,zero) is
allowed to be either pi/2 or -pi/2; nothing seems to prevent 
a non-deterministic choice.  Based on the results of other language
standardization efforts, I'm skeptical of language committees
standardizing elementary transcendental functions.
