64 bits

Earl Killian uunet!netcom.com!earl
Fri Dec 20 11:57:27 PST 1991


Here is my perspective on the 64-bit issue:

(1) I don't think NCEG should define the sizes of the standard C types
char, short, int, and long.  There is too much precedent for their
sizes varying from one implementation to the next.

(2) There are lots of implementations out there without any 64-bit
integer type.  These implementations cannot, for example, simply
change "long" to 64 bits without breaking existing code; the only
upward-compatible way to add a 64-bit integer type is to add a new
name, such as "long long".

(3) The need for a 64-bit integer type is obvious.  Nothing
interesting fits in 32 bits anymore: file sizes, byte counts, and
addresses are all outgrowing 2^31.

(4) "long long" is a terrible name for the new type because it cannot
be #define'd or typedef'd.  If you're a single manufactor, "long long"
may make sense, because you're trying to extend the language without
introducing new keywords, and such.  But such constraints should not
apply to NCEG.  I suggest instead that implementations call it
something hidden (like "__int64") and use a #include to get a more
reasonable name.
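
For example (taking "__int64" as a stand-in for whatever hidden name
an implementation chooses):

	/* a hypothetical <int64.h>: a typedef can remap a one-token
	   name like __int64 ... */
	typedef __int64 int64;
	typedef unsigned __int64 uint64;
	/* ... but no #define or typedef can produce the two-token
	   sequence "long long" on an implementation that lacks it */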

(5) There needs to be an implementation-independent way to get
integers of a fixed size.  Personally I would prefer a Pascal-style
range for specifying this, but I have a feeling that won't fit into C
well, so I propose instead
	#include <sizedints.h>
	int8, int16, int32, int64
	uint8, uint16, uint32, uint64
(For the 36-bit machines out there, these types are defined to
contain at least the indicated number of bits, not exactly that
number.)
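
For instance, a byte-addressed 32-bit implementation's <sizedints.h>
might read as follows (the particular mappings, and the "__int64"
name from (4), are illustrative assumptions):

	/* sizedints.h -- this implementation's mappings */
	typedef signed char		int8;
	typedef unsigned char		uint8;
	typedef short			int16;
	typedef unsigned short		uint16;
	typedef int			int32;
	typedef unsigned int		uint32;
	typedef __int64			int64;
	typedef unsigned __int64	uint64;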

(6) For many existing implementations, there will need to be two
compilation modes: 32bit and 64bit.  In 32bit mode, everything is
upward-compatible (e.g. "long" and "void *" are still 32 bits).  In
64bit mode, "void *" at least becomes 64 bits.  Below I argue "int"
and "long" should as well.

(7) Since implementations are still free to define the basic C types
as they like, the following is only a suggestion.  For implementations
that support a 64-bit address space, I believe that "int" should be 64
bits.  There are several reasons why I think that this is the only
sane choice:
  (a) "int i" is the preferred way to declare an array index, and if
      int is 32 bits, you won't be able to index a large array
  (b) some implementations that support or will support 64-bit
      addresses can't efficiently mix 32- and 64-bit operations,
      making "int"s very expensive unless they are 64 bits.  I think
      the IBM POWER architecture is in this category.  The MIPS
      64-bit architecture can intermix efficiently, because it used
      new opcodes instead of a mode for the new operations, although
      there are still some costs to mixing.
  (c) "int" is the type of lots of system and library functions that
      should accept a full 64-bit integer in a 64-bit address space
      (example: nbytes in read/write system calls)
  (d) There is code out there that coerces pointers to int's
Also, since I'm postulating that this binding occurs only under a
compilation option that selects the 64-bit address space, most
software won't use it and won't be broken by the new size of "int".
The only disadvantage of this choice is that there is no standard C
type that gives a 32-bit integer.  But sizedints.h as proposed above
solves this problem.
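
Here is a small sketch (my own code, assuming the <sizedints.h> of
point (5)) of what (a) and (d) look like in practice:

	#include <sizedints.h>

	extern double a[];	/* an array of more than 2^31 elements */
	extern int64 n;		/* its element count */

	double sum_a(void)
	{
		int i;
		double sum = 0;
		/* (a): if int is 32 bits, i overflows before reaching
		   n; if int is 64 bits, plain "int i" just works */
		for (i = 0; i < n; i++)
			sum += a[i];
		return sum;
	}

	/* (d): existing code that coerces a pointer to an int loses
	   the high bits unless int is as wide as void * */
	int as_int(void *p)
	{
		return (int) p;
	}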


