your array extension (mentioned in a message to nceg) <9212081540.AA05742@sass200.sherpa.sandia.gov>



Kent G. Budge, in a private reply to my posting of yesterday, wrote:
  
  The subject of arrays in C and C++ is of enormous interest to me because of
  their importance to scientific computing in general and to computing on 
  high-performance architectures in particular.  I would love to see any new
  ideas on the subject...

I hope that Kent will forgive this breach of etiquette, but I'm going to
reply publicly (on the NCEG mailing list) just because I've gotten
paranoid about sharing ideas with *any* limited audience.  It seems that
these days, almost *any* remotely useful idea, however trivial, may be
subject to patent and/or copyright protection unless prior general public
disclosure makes that legally untenable.  (I can't believe some of the
incredibly trivial software ideas that have been granted patents!)

That's enough said about that!

Regarding the idea I had... well, it started out really simple and then,
after "cooking" in my noggin for a couple of hours, it got a bit more
complicated, but it is still pretty simple.

(Please forgive me if all of the following material sounds like old ideas
that have already been either discarded or else incorporated into C* long
ago.  As I said the other day, I haven't done my homework on the existing
ideas in this area yet... but I will!)

Here's the rundown...

First let me clarify that what I have gathered from the old NCEG postings
I have read is that some NCEG people may want to add all sorts of whiz-bang,
grandiose, and inefficient array manipulation features to C, along the
lines of (for example) Fortran do-lists and Ada array slicing, but I'm
not in the least bit interested in things like that.  I feel such things
are far too baroque for inclusion in an otherwise clean, simple, and
efficient language like C.  I'd like to just take what's there and tweak
it a bit while trying hard to maintain compatibility with the existing
syntactic and semantic rules of ANSI C.  In my experience, big and
complex language changes often fail because they are just too damn
difficult to specify, too difficult to fully implement, or too difficult
to integrate into the language without causing nasty incompatibilities.
(C++ is a perfect example of all three of these pitfalls.)

Also let me clarify that as far as C++ is concerned, it seems apparent
that all sorts of fancy array manipulation things can be implemented
as classes and/or as templates, so for that language, I consider the
problem "solved".  My ideas (below) are primarily oriented towards
improving the situation for C programmers, although the ideas could be
carried over into C++ as well, and they would cause no conflicts with the
existing syntax or semantics of that language either.

Now before I tell you the basic idea, I have to give you some minimal
background.

Basically, I noted (yesterday) that in ANSI C (contrary to popular belief)
arrays *are* first-class data types.  Consider the following code:

		struct S { int member[10]; };
		extern struct S callee ();

		void caller ()
		{
			callee().member;
		}

What do you think the "type" of `callee().member' is?  You probably thought
that it "type decays" into a pointer to the first element of the array,
right?  WRONG!  I found that the ANSI C standard calls for *all* array
type values which are not operands of the unary `&' operator or the `sizeof'
operator to decay into pointers ONLY IF THEY ARE LVALUES.

But the standard clearly says that (a) a value returned from a function
is not an lvalue, and (b) given some expression of the form `exp.member',
that entire expression is an lvalue only if `exp' is itself an lvalue
(which is a perfectly sensible rule).

So in the example above, `callee().member' actually has an array type,
and its type DOES NOT decay into a pointer.
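
For the skeptical, here is a way to see this with any conforming compiler
today.  (A purely illustrative check; `sizeof' never evaluates its operand,
so `callee' is not actually called in the sizeof expression.)

                #include <stdio.h>

                struct S { int member[10]; };

                struct S callee () { struct S s = {{0}}; return s; }

                int main ()
                {
                        /* If callee().member decayed to a pointer, the first
                           value printed would be sizeof(int *); instead it is
                           the size of the whole 10-element array.  */
                        printf ("%lu %lu\n",
                                (unsigned long) sizeof (callee().member),
                                (unsigned long) sizeof (int [10]));
                        return 0;
                }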

Now in theory, that might mean that we could do:

		callee().member = callee().member;

and get the effect of an array assignment, but we can't actually do stuff
like that because (in this case) the left-hand-side of the assignment is
not an lvalue, and thus cannot be assigned to.  But note that as far as
*types* are concerned, there is nothing wrong with the above statement,
and it implies assignment of an array to an array.

So a light bulb came on over my head, and I said "Hey!  If we could just
prevent those darn array lvalues from decaying (somehow) then we might
be able to treat them AS ARRAYS for a change."  Considering that we already
have the precedent of `sizeof' as a way to prevent "type decay", I thought
that maybe another "operator" (like sizeof) could be added which would
just have the effect of preventing "type decay" for lvalue arrays.  For
example:

		int sink[10];
		int source[10];

		void caller ()
		{
			no_decay (sink) = callee().member;
			no_decay (sink) = no_decay (source);
		}

That might work, but it isn't very pretty.  It also requires the introduction
of a new keyword (which is a big no-no).

Then I thought of using some existing prefix operator which has no defined
meaning for pointers as my "prefix" no-decay operator, for example:

		void caller ()
		{
			~ sink = callee().member;
			~ sink = ~ source;
		}

(Note that the `~' operator has no defined meaning if the operand is a
pointer, so this usage would not conflict in any way with existing ANSI C
syntax or semantics.)

I didn't like this either, so I considered using some other (new) operator
character which isn't currently used as a "normal" operator in ANSI C.
I found two such things on my keyboard.  One is `@', but there is a good
reason why that's never been used for anything in C.  (For one thing, a
lot of old versions of `vi' won't take it as normal input unless you escape
it first.)  The other is `#', but by tradition, that's pretty much reserved
for the preprocessor.  So `~' is good, but I still think it looks too
confusing.

Then I hit on a much better idea.  How about just treating `[]' as an
operator (and as a new token)?  That would make it intuitively obvious that
we are doing something for which the operand is an array.  So how about:

		void caller ()
		{
			[] sink = callee().member;
			[] sink = [] source;
		}

Then I realized that I had a choice between using this new operator as
a prefix operator (as in this example) or perhaps as a postfix operator.

Well, it turns out that making it a postfix operator is better for two
reasons.  First, it avoids any possible syntactic ambiguity for the C++
statement:

			delete [] foo;

Second (and more importantly) it makes using this new operator far more
intuitive in the context of multi-dimensional arrays, as in:

		int sink[5][10];
		int source[20][10];

		void assign_row (int i, int j)
		{
			sink[i][] = source[j][];
		}

Here, the j'th row of source is assigned to the i'th row of sink.
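
In today's C the same row assignment has to be written with a loop or with
memcpy; a minimal equivalent of the proposed statement (assuming the
declarations above) would be:

                #include <string.h>

                int sink[5][10];
                int source[20][10];

                void assign_row (int i, int j)
                {
                        /* Same effect as the proposed `sink[i][] = source[j][]':
                           copy the entire 10-element row j of source into
                           row i of sink.  */
                        memcpy (sink[i], source[j], sizeof sink[i]);
                }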

So that's what I decided would be the best thing.  Add a new postfix
operator `[]' which, when applied to a preceding operand which (nominally)
has some array type, simply prevents the type from decaying into a
pointer type.  This is simple to understand, explain, and implement,
and it doesn't conflict in any way with any of the existing syntax or
semantics of either C or C++.  (Note that the rules for this new operator
would require that the operand be of some array type.)

Anyway, that's the "simple" part of the idea I had.

The idea extends quite naturally to operations other than `=', for example

		int sink[10];
		int source[10];

		void vector_add ()
		{
			sink[] += source[];
		}

This would have the intuitively obvious meaning, i.e. element-wise addition,
as if we had written:

			sink[0] += source[0];
			sink[1] += source[1];
			...
			sink[9] += source[9];

The idea might also extend nicely from vectors to matrices, if we think
of those as undergoing (in ANSI C) multiple stages of "type decay" (as I
believe they do).  Then we could write:

		int sink[100][100];
		int source[100][100];

		void matrix_multiply ()
		{
			sink[][] *= source[][];
		}

(Note that this is definitely *not* the kind of matrix multiply that those
in the numerical computing community would like to have.)
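
Spelled out in present-day C, the element-wise meaning of the statement above
(assuming the declarations shown) is just:

                int sink[100][100];
                int source[100][100];

                void matrix_multiply ()
                {
                        int i, j;

                        /* Element-wise product, which is what the proposed
                           `sink[][] *= source[][]' would mean; it is *not*
                           a linear-algebra matrix multiply.  */
                        for (i = 0; i < 100; i++)
                                for (j = 0; j < 100; j++)
                                        sink[i][j] *= source[i][j];
                }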

At about this point in my thinking, I was ready to stop, declare victory,
and go have a beer, but then I realized that I'd only solved the easy parts
of the overall problem.  Several additional annoyances remain.

First, what do we do about comparison operators?  What does the expression:

			left[] <= right[]

really mean?

There are two possible "intutively obvious" meanings for such an expression.
The first possibility would be that we would treat the earlier members of
the vectors as having more "weight" than the later ones.  So in this case,
we would start by doing the comparison:

			left[0] <= right[0]

and if the left[0] was in fact less than right[0], we would stop and yield
TRUE (i.e. `1').  But if left[0] was *greater* than right[0], we would
stop and yield FALSE (i.e. `0').  If the two elements were equal, then we
would continue on and do the comparison for left[1] and right[1], etc. If
we reach the last elements of these vectors, and they also compare equal,
then we yield TRUE because the entire vectors are in fact equal.
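
Here is a present-day C sketch of this first, "lexicographic" interpretation
(the function name and the fixed length of 10 are mine, purely for
illustration):

                /* Returns 1 if `left' is lexicographically <= `right',
                   and 0 otherwise; earlier elements carry more weight.  */
                int vector_le (const int left[10], const int right[10])
                {
                        int i;

                        for (i = 0; i < 10; i++)
                        {
                                if (left[i] < right[i])
                                        return 1;
                                if (left[i] > right[i])
                                        return 0;
                        }
                        return 1;       /* all elements compared equal */
                }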

The second possibility is that:

			left[] <= right[]

yields a vector of ints, representing the results of each of the individual
element-wise comparisons.  I think this is a more reasonable choice.  Note
that the resulting vector itself has some appropriate array-of-int type,
and is not an lvalue (and thus cannot be assigned to, nor have its address
taken, nor be subscripted).
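
And here is a sketch of this second interpretation; since today's C has no
way for an operator to yield an array value, the result vector is an output
parameter (again, all names and the fixed length are illustrative only):

                /* result[i] gets 1 if left[i] <= right[i], and 0 otherwise.  */
                void vector_le_each (const int left[10], const int right[10],
                                     int result[10])
                {
                        int i;

                        for (i = 0; i < 10; i++)
                                result[i] = (left[i] <= right[i]);
                }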

Naturally, some users would prefer to have comparison operators work one
way, and other users would prefer to have them work the other way.  As long
as we only have one set of comparison operators, we cannot accommodate both.

My first thought was to have the "normal" comparison operators yield arrays
(when applied to operands which are arrays) and to latch on to the proposed
new floating-point comparison operators (e.g. `!<=') to indicate that an
"ordered memberwise sequential" comparison (yielding a single int result)
was desired, but then I realized that this would make it impossible to do
vector-wise comparisons of floating-point vectors when it is desired to
have the results take account of "unordered" relationships among the values
compared.  In other words, I might want to have:

			left[] !<= right[]

still yield a vector of results when `left' and `right' are vectors of
floating-point values.  So the idea of "borrowing" the proposed new
floating-point comparison operators, and giving them special semantics
for vector comparisons is simply a lousy idea.

The two remaining alternatives are (1) pick one kind of "vector comparison
semantics" or the other and have comparison operators (including the new
floating-point operators) designate those semantics (and in this case, I
personally would prefer the semantics which produce a vector result) or
else (2) do what was already done for floating-point and invent a whole
'nother set of comparison operators.  For example, we might take all of
the existing comparison operators (including the proposed new floating-
point ones), attach a `~' to the front of each of them, and say that the
resulting new tokens represent element-wise comparison operators for
arrays.  Thus, we might be able to write:

			left[] ~!<= right[]

to do an ordered element-wise comparison of two floating-point arrays.
(I know it looks ugly, but the idea seems reasonable enough.)

Now, having dispensed with the problem of comparison operators, we still
have a few more problems left.

What do we do about incomplete array types... for example:

		extern int array1[];
		int array2[20];

		void assign ()
		{
			array2[] += array1[];
		}

The answer here seems obvious (to me anyway).  If we have some binary operator
for which one operand is an array of known length and the other is an array
of some unknown length, we "coerce" the type of the array of unknown length
so that its type is the same as the other operand (which has a known length).
(This rule would also apply to the second and third operands of the ?:
operator.)
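
Under that coercion rule, the `assign' example above would behave as if it
had been written like this (purely illustrative, and it presumes that the
object `array1' really does have at least 20 elements):

                extern int array1[];
                int array2[20];

                void assign ()
                {
                        int i;

                        /* array1's unknown length is coerced to array2's
                           known length of 20.  */
                        for (i = 0; i < 20; i++)
                                array2[i] += array1[i];
                }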

But wait!  What about unary operators applied to arrays of unknown length?
Also, what about binary operators where both operands are of some unknown
length?  Quite simply, we declare all such operations illegal.  (There now.
That was easy. :-)  Of course in some cases we may actually know the array
length (even if it is not part of the type of the array) so we should provide
some means to convey this information to the compiler.  The most intuitively
obvious mechanism for doing this is explicit casts, but the current rules
of ANSI C disallow casting either from or to array types.  Well, that's
simple enough to fix.  We just change the rules to allow casting of values
which have array types (either complete array types or incomplete array
types) to arbitrary other array types.  We can now do:

		extern int array1[];
		extern int array2[];

		void clever ()
		{
			array1[] = ! (int[25]) array2[];
		}

This causes the logical negation of each element of array2 (assumed to be
25 elements long) and the assignment of the resulting vector (which itself
contains 25 elements) to array1 (or at least to the first 25 elements of
it if in reality it happens to be longer than that).
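
In current C the effect of `clever' has to be written out by hand, roughly
as follows (assuming, as the cast asserts, that both arrays really do have
at least 25 elements):

                extern int array1[];
                extern int array2[];

                void clever ()
                {
                        int i;

                        /* Logical negation of each of the first 25 elements
                           of array2, assigned element-wise into array1.  */
                        for (i = 0; i < 25; i++)
                                array1[i] = !array2[i];
                }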

So now we can handle arrays of unknown length, but only when we actually do
know the lengths (a priori) at compile-time.  What about those cases where
the lengths are dynamically computed at run-time?  For example, what if
we want to have a "generic" version of the `vector_add' function shown
earlier?  Simple enough.  We can extend the notion of a cast to an array
type so as to permit the length to be some (dynamic) expression.  So, for
example, we could write:

	void vector_add (int vec1[], int vec2[], unsigned len)
	{
		vec1[] += (int[len]) vec2[];
	}
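
For comparison, here is how the same "generic" operation has to be written
in current C; the loop is shown only to make the intended semantics of the
dynamic cast concrete:

        void vector_add (int vec1[], int vec2[], unsigned len)
        {
                unsigned i;

                /* Element-wise addition over the first `len' elements,
                   which is exactly what the proposed `(int[len])' cast
                   is meant to express.  */
                for (i = 0; i < len; i++)
                        vec1[i] += vec2[i];
        }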

To make this work, some special "type compatability" rules would have to
apply to values which have some "dynamic array type".  Specifically, we
would want to stipulate that (for all applicable binary operations) a
value of some dynamic array type is *only* compatible with a value of
some *incomplete* array type which has the same element type.  This
rulw would insure that for:

	#define FIXED_BOUND 25
	int constrained[FIXED_BOUND];

	void vector_add (int vec2[], unsigned len)
	{
		constrained[] += (int[len]) vec2[];   /* error! */
	}

we would get an error on the line indicated because one of the operands
of the `+=' operator is a fixed-length array, while the other is a variable-
length array (and thus the actual length to be used for the operation is
rather ambiguous).  But we would not want to rule out using fixed-length
arrays in such contexts entirely, so it would be reasonable to permit
such arrays to be cast to incomplete array types, as in:

	#define FIXED_BOUND 25
	int constrained[FIXED_BOUND];

	void vector_add (int vec2[], unsigned len)
	{
		if (len > 25)
			abort ();

		(int[]) constrained[] += (int[len]) vec2[];   /* OK */
	}

Of course this example also makes it obvious that we would probably want to
say that the result of a cast to an array type is an lvalue (and thus is
"assignable") if (and only if) the expression being cast is itself an
lvalue.  (I didn't say it earlier, but this property would also have to
apply to the `[]' operator.  It should likewise yield an lvalue if applied
to an lvalue.)  Normally, operators and casts do not yield lvalues, but
there are a few exceptions already (e.g. the unary prefix `*' operator).
I see no reason why we couldn't add another "lvalue-yielding" operator,
and also say that certain kinds of casts yield lvalues.

So now we have a complete, flexible mechanism for dealing with incomplete
array types, complete array types, "dynamic" array types, and combinations
of these things.  I even think that all of this stuff could be implemented
efficiently (on both "normal" machines and on vector machines).  So what's
left to worry about?

Well, the more astute readers will have noticed that I glossed over a rather
important point in my examples above.  Specifically, given a function like:

	void vector_add (int vec1[], int vec2[], unsigned len)
	{
		/* ... */
	}

the ANSI C standard says that this function is really equivalent to:

	void vector_add (int *vec1, int *vec2, unsigned len)
	{
		/* ... */
	}

because the "nominal" types of formal parameters which are declared to have
array types undergo their own kind of "type decay" to pointer types, and
this "parameter type decay" is entirely separate and different from the
type decay which takes place on array operands within expressions.  Thus,
due to this "declarative type decay" effect, my earlier example of:

	void vector_add (int vec1[], int vec2[], unsigned len)
	{
		vec1[] += (int[len]) vec2[];
	}

really ends up being interpreted as:

	void vector_add (int *vec1, int *vec2, unsigned len)
	{
		vec1[] += (int[len]) vec2[];
	}

but I said earlier that the new `[]' operator would only apply to array
operands, and we now have it being applied to pointer operands.  So should
this example be judged illegal at compile-time?  (That would be a very
unfortunate outcome.)

The answer (in my opinion) is a resounding NO!  Let's just modify (or clarify)
the rules for the `[]' operator and say that rather than being the "no decay"
operator, it is really the "anti-decay" operator, and that array operands do
in fact decay to pointers (in most expression contexts) just as the current
ANSI C rules say they do, but that the `[]' operator (applicable only to
pointers) effectively converts a value of some pointer type to a value of
some array type, with the zeroth element of the resulting array value being
at exactly the place pointed to by the (input) pointer operand.  We would
want one additional (small) stipulation, i.e. that for an expression like
`exp[]' where `exp' initially has some complete array type (before it
"decays"), the value yielded by `exp[]' is also a value of that exact same
array type (and, in particular, that it has the same length).  Applying the
`[]' operator to other pointer values (i.e. ones which DID NOT result from
the decay of some array value with a "complete" array type) would yield
result values with incomplete array types, but you could not legitimately
use such values in any expression unless you cast them to "bounded" (i.e.
"complete") array types first.

This redefinition of the semantics of the `[]' operator actually makes it
more powerful than before, and it also preserves the desired semantics of
our `vector_add' function (above).
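
As a side note, current C already has a limited way to go from a pointer back
to an "array view" of storage, namely pointer-to-array types.  The following
is legal ANSI C today; it yields a pointer to an array rather than an array
value, so it is only a rough analogue of the anti-decay idea, not a
substitute for it:

        void analogue (int *p)
        {
                /* Reinterpret p as a pointer to an array of 25 ints;
                   *ap then denotes a 25-element array object starting
                   at the place p points to, which is roughly what the
                   proposed `(int[25]) p[]' would denote.  (This assumes
                   p really does point at 25 ints.)  */
                int (*ap)[25] = (int (*)[25]) p;

                (*ap)[0] = 42;
        }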

Now we could stop right here and have a reasonably powerful set of array
handling capabilities for (extended) ANSI C, but why quit while we're
ahead? :-)  We've given ourselves the ability to create and operate on
array values, so what else do we need?

Nothing really, but there are still a few ways in which arrays are not
quite equal to other "first class types" even after we've added all of
these new features.

In particular, although we can now add, subtract, multiply, and divide
arrays (of arbitrary dimensionality), they still can't be passed into
(or returned from) functions "by value" as scalar values and struct and
union values can be.

Do you care?  I can only answer for me.  I'm not yet really sure if I
care about this or not.  Passing arrays back and forth from functions
"by reference" (using pointers) seems adequate in most instances, and
it certainly is more efficient most of the time.  I suppose that there
is an argument to be made that there ought to be a way to pass and return
arrays "by value" but it is much harder (I think) to make the case for
adding features to the language to support "pass by value" for arrays
than it is to make the case for the `[]' operator or for extending the
semantics of (for example) the `+' operator so that they accept array
operands (as I have outlined above).

If I were forced to invent a syntax for declaring a formal parameter to
be a "pass by value" array parameter, I guess I'd like:

		void foobar (int arg[*]);

Note that we can't use either of these:

		void foobar (int arg[]);
		void foobar (int arg[25]);

because existing ANSI rules call for those to be treated as:

		void foobar (int *arg);

so we would break lots of existing programs if we tried to change the
existing semantics of those declarations.

Using `[*]' as the array length specification ought to be both familiar
and comfortable to FORTRAN programmers, because FORTRAN 77 already uses
a similar notation for variable length (conformant?) array parameters.

Using some other character (e.g. `:') might be a better choice though,
because then we might be able to have one notation for pass-by-value
array parameter of "conformant" size and also provide a notation for
pass-by-value array parameters of fixed sizes, as in:

		void foobar (int arg1[:], int arg2[:25]);

Note that if we used `*' it might potentially screw up parsing by being
mistaken for a unary prefix `*' operator when followed by an integer
literal (but maybe not).

In any case, it isn't really clear (to me anyway) if it is either necessary
or particularly useful to provide *both* a way to pass fixed-size arrays
by value and also to have a way of declaring "conformant" by-value array
formal parameters which will accept any size array (so long as it has the
proper element type).  I think that "conformant" by-value array paparameters
should be sufficient in all cases (but admitedly, then may lack a bit of
efficiency in some contexts).

As far as returning arrays by value... well, there is simply no existing
(ANSI) syntax or semantics to conflict with.  ANSI C doesn't allow you to
declare functions as returning array types, so we could just say that we have
suspended that prohibition.  This would permit us to say:

		int [] foobar ();
or:
		int [25] foobar ();
or even:
		typedef int array_type[];
		array_type foobar ();
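
It is worth noting that current C already offers one (admittedly clumsy) way
to get both of these effects: wrap the array in a struct, since structs are
passed and returned by value.  A minimal sketch (all names are mine, for
illustration only):

                struct vec25 { int a[25]; };

                /* The whole 25-element array is copied in by value and
                   copied back out by value.  */
                struct vec25 negate (struct vec25 v)
                {
                        int i;

                        for (i = 0; i < 25; i++)
                                v.a[i] = -v.a[i];
                        return v;
                }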

As regards the general issue of the implementation problems which might
arise in any attempt to implement passing and/or returning arrays by value
(especially ones whose bounds vary depending upon which call you look at),
let me just say that I don't think it's a big issue.  C compiler hackers
are already adept at finding efficient ways to implement things, and this
is just one more minor challenge.  Passing and returning "conformant" arrays
by value may never be quite as efficient as other operations in C, but
that's the price you accept when you (as a user) decide to use this extension.

--------------------------

There you have it.  That's my total brain-dump on possible array extensions
for ANSI C.

Opinions are welcomed.


// Ron ("Loose Cannon") Guilmette
// uucp: ...uunet!lupine!segfault!rfg
// X3J16:  Deadline?  What deadline?


