[Cfp-interest 2618] non-model numbers

David Hough CFP pcfp at oakapple.net
Wed Jan 18 10:50:17 PST 2023


After trying to follow a discussion about how to describe double-double
implementations in the C standard - which aren't even standardized among
themselves - I concluded that if I were starting from scratch it would
be along these lines -

 We already define a set of model normal and subnormal numbers for any
floating-point type that can be characterized by a precision p and 
exponent range emin to emax.

 But some implementations have types that represent additional numbers, 
beyond their model numbers.

* So define an extended classification function for each floating-point type,
xclassify, which offers the extended classifications -

 non-model-subnormal - a non-model number whose magnitude is between 
zero and the minimum normal
 non-model-normal - a non-model number whose magnitude is between the
minimum normal and maximum normal
 non-model-supernormal - a non-model number whose magnitude is between the
maximum normal and infinity
 non-model-nan - an implementation-defined representation that is not a
number, and not an IEEE quiet or signaling NaN 

I suppose that covers all the possibilities.
For conventional types, xclassify might just be a macro for the usual classify.
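As a concrete illustration, here is one way xclassify might look for a double-double type. Everything here is an assumption for the sake of the sketch - the type name ddouble, the XFP_* enum values, the function name dd_xclassify, and the exponent-gap test that takes the model precision to be p = 2*DBL_MANT_DIG - none of it comes from the standard or from this proposal.

```c
#include <float.h>
#include <math.h>

/* Hypothetical extended classification values; the names and the idea of
   extending fpclassify's result set are illustrative only. */
enum {
    XFP_NONMODEL_SUBNORMAL = 100,
    XFP_NONMODEL_NORMAL,
    XFP_NONMODEL_SUPERNORMAL,
    XFP_NONMODEL_NAN
};

typedef struct { double hi, lo; } ddouble;   /* value = hi + lo */

/* Sketch of xclassify for double-double, taking the model precision to be
   p = 2*DBL_MANT_DIG.  A pair whose low part lies more than p bits below
   the high part needs more than p significant bits, so it is non-model.
   (Supernormal detection near DBL_MAX is omitted for brevity.) */
int dd_xclassify(ddouble x)
{
    int chi = fpclassify(x.hi);
    if (chi == FP_NAN || chi == FP_INFINITE || chi == FP_ZERO || x.lo == 0.0)
        return chi;                      /* same answer as the usual classify */
    int ehi, elo;
    frexp(x.hi, &ehi);
    frexp(x.lo, &elo);
    if (ehi - elo > 2 * DBL_MANT_DIG)    /* extra significance beyond p bits */
        return chi == FP_SUBNORMAL ? XFP_NONMODEL_SUBNORMAL
                                   : XFP_NONMODEL_NORMAL;
    return chi;
}
```

So (1.0, 2^-200) would classify as non-model-normal, while (1.0, 2^-60) fits in p bits and remains an ordinary model normal.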

* And for each floating-point type xyz, provide a macro xyz-HAS-NONMODEL
that is defined if and only if xclassify gives different results from classify
for some input
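To sketch how that might be consumed for a conventional type: the spelling DBL_HAS_NONMODEL and the fallback definition below are assumptions, not proposed text.

```c
#include <math.h>

/* Hypothetical: for a conventional type like double, every representable
   finite value is a model number, so an implementation would leave
   DBL_HAS_NONMODEL undefined and xclassify can collapse to the usual
   classification macro. */
#ifndef DBL_HAS_NONMODEL
#define xclassify(x) fpclassify(x)
#endif
```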

* And for maximum compatibility with most existing applications that don't
care much, the usual classify classifies, without raising any exceptions,
 non-model-subnormals as subnormal
 non-model-normals as normal
 non-model-supernormals as normal
 non-model-nans as NaNs
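That compatibility mapping can be written down directly. The names below are the same hypothetical XFP_* values used for illustration, and the helper function is only a sketch of the intended correspondence:

```c
#include <math.h>

/* Hypothetical extended classification values, as in the text. */
enum {
    XFP_NONMODEL_SUBNORMAL = 100,
    XFP_NONMODEL_NORMAL,
    XFP_NONMODEL_SUPERNORMAL,
    XFP_NONMODEL_NAN
};

/* How the usual classify result could be derived from the extended one,
   quietly (no exceptions raised): each non-model class folds into the
   nearest standard class. */
int classify_from_xclassify(int xc)
{
    switch (xc) {
    case XFP_NONMODEL_SUBNORMAL:   return FP_SUBNORMAL;
    case XFP_NONMODEL_NORMAL:      return FP_NORMAL;
    case XFP_NONMODEL_SUPERNORMAL: return FP_NORMAL;
    case XFP_NONMODEL_NAN:         return FP_NAN;
    default:                       return xc;  /* already a standard class */
    }
}
```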



Altogether, the idea is that most existing applications don't have to
change, but those that should change have
 a cheap way of finding out right away, with the new macro, and
 a start on reprogramming where it matters, with the new
extended classification function.


This does not address the related issue of multiple non-canonical 
representations of the same model number - 
e.g. the double-double (x,x) vs (2x,0) - 
an issue that already arose with decimal and with 
the old x87 unnormalized representations of normal numbers.

For double-double, the canonical representation (x,y) of a representable
value could be the one with the minimum magnitude of y, and with the same
signs for x and y where possible.   (1,-tiny) is a representable 
double-double value that can't be represented as a sum of positive doubles.
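That canonical form can be computed with the classical Fast2Sum renormalization step; the type and function names are assumptions, and this sketch ignores overflow:

```c
typedef struct { double hi, lo; } ddouble;   /* value = hi + lo */

/* Fast2Sum renormalization: hi becomes the double nearest hi+lo and lo the
   exact remainder, which is the minimum-magnitude low part.  Requires
   |a.hi| >= |a.lo| (or a.hi == 0); the error term is exact when no
   overflow occurs. */
ddouble dd_canonicalize(ddouble a)
{
    double s   = a.hi + a.lo;    /* rounded sum */
    double z   = s - a.hi;
    double err = a.lo - z;       /* exact rounding error of s */
    ddouble r  = { s, err };
    return r;
}
```

On the examples from the text, (x,x) renormalizes to (2x,0), while (1,-tiny) is already canonical: the double nearest the value is 1, and the minimum-magnitude remainder -tiny necessarily carries the opposite sign.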


Since double-double can be implemented in software - probably fast, sloppy
software, since if speed didn't matter, why not use regular quad in software - it's
hard to guess all the anomalous situations that might arise.

