A few thoughts regarding Cray's "Variable Length Array Proposal"
<9302081621.AA25803@willow29.cray.com>
uunet!netcom.com!segfault!rfg
Mon Feb 8 11:14:30 PST 1993
> > void foobar (int n)
> > {
> > static int (*ap) [n];
> > }
> I really have no problem with including static pointers to VLAs.
> To me it's a minor point. I will include your comments in my next
> presentation before the NCEG committee and we'll see what they say.
> I'd feel a lot better if someone could think of a good example
> where this is useful.
As I said, it is useful in that it lets implementors be concerned only
about two simple restrictions regarding VLA types, i.e. (1) VLA types
can only appear in block scope or in prototype scope and (2) block-local
objects declared `extern' or `static' may not have types which are VLA types.
> ... I think
> it is a good idea to *require* a diagnostic for attempts to jump into
> a block in such a way that you *avoid* elaborating declarations which
> involve VLA types.
>
> Note that C++ already has similar rules. In C++ you may not jump into a
> block if doing so will cause you to miss the elaboration of declarations
> which include initializations.
>
> (I never liked those nasty old goto's anyway, so as far as I'm concerned,
> the more we clamp down on the most ill-structured uses of them, the better.)
> Someone may complain that a required diagnostic is too hard to implement :^).
> Actually, the proposal stands the way it currently is because section
> 3.1.2.4 Storage Duration of Objects doesn't have a constraint section.
I don't think that fact (in and of itself) is a valid reason for failing to
make the rules regarding VLA type objects "reasonable". Anyway, as I noted
previously, the proposal you posted seemed self-contradictory about whether
or not diagnostics are required. You have to resolve that internal
contradiction one way or another.
> If we make this an error, then bypassing an initializer ought to be
> an error. If there are good reasons for requiring a diagnostic in
> this case, then those reasons apply to bypassing initializers also.
Nope. The conclusion does not follow from the premises. These are two
separate issues. You may introduce a restriction which only applies where
VLA type objects are declared and that restriction will not have any
effect upon existing ANSI C code. However if you simultaneously introduce
a *separate* restriction which *would* negatively affect existing ANSI C
code... well, you'd annoy a lot of people.
> Both issues should be tackled at the same time.
Let me make myself clear. I think that jumping past initializers sucks,
and I personally NEVER write code that does that. Further, the C++ language
definition has already taken the bold step of outlawing such usage in C++
programs. I would be all in favor of adding this (separate) restriction
to ANSI C, if it were not for two things, i.e. (1) I'm often in the sad
position of having to recompile other people's crappy code, and (2) the
additional "jumping" restrictions we are *now* talking about have nothing
to do with VLA's per se, and thus should be addressed by some entirely
separate proposal.
> ... Is there some reason that:
>
> void f(int n, int[n][n]);
>
> ... would be "bad"?
> Actually, yes, this can be bad. In a large development project,
> someone can introduce:
> #define n 10
> which all of a sudden breaks your prototype.
That possibility has little direct relevance to VLA's. It is every bit as
possible that someone would introduce that same #define into a program
which contained the prototype:
void f(int n);
... and the results (in such a case) would be equally unsatisfactory.
If this is the *only* argument in favor of adding the specialized `[*]'
notation to the language (and to the burden of *all* implementors) then
I'd have to say that in my opinion, the case in favor of this language
extension has not yet been made convincingly.
> If no names are present then it's a lot harder to introduce names that
> conflict.
Quite true.
> Personally, I never use names in prototypes, only in definitions.
I see. So it would appear that the introduction of this `[*]' notation
is really a roundabout way of getting us all to conform to *your* preferred
coding style. I (for one) am not ready to do so. I find that code is
vastly clearer and more readable if I *do* provide explicit parameter
names in my function prototypes. Tell me which prototype *you* would
rather see in a header file. This:
void set_time (int hours, int minutes, int seconds);
... or this:
void set_time (int, int, int);
Anyway, as I've said, possible conflicts arising from the haphazard
introduction of #define names are a general problem applicable to *all* uses of
the preprocessor's #define feature. To argue that this `[*]' feature of
the VLA proposal would permit us to be somewhat more careless when defining
names is (I think) a weak argument. No one should ever carelessly use
#define, regardless of how the VLA proposal comes out. To suggest
otherwise is an invitation to disaster.
> > > 7.6.2.1 (4.6.2.1) The longjmp function
> > >
> > > Description
> > >
> > > If a longjmp function invocation causes the termination
> > > of a function or block in which variable length array
> > > objects are still allocated, then the behavior is undefined.
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > No good. The behaviour *must* be defined. Otherwise you have invented
> > a highly crippled VLA feature.
>
> Really, it could be defined. However, in many implementations this
> could mean storage is lost. This is just trying to acknowledge
> that issue. If a VLA is allocated on the heap...
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> ... then that would be a poor implementation. Why would anyone want to
> implement this VLA feature in such a way that `auto' VLA objects get
> allocated out in the heap? That would be a rotten way to do it, and it
> would make longjmp unreliable (as you have noted).
> Actually, the CRI implementation allocates VLAs on the heap sometimes.
> Our stack implementation is not contiguous. I can't go into
> all the details, but when you enter a parallel region, each task
> gets its own stack, so the stack implementation allows a tree-like
> structure.
The possible use of a "cactus stack" is irrelevant to the issue of where
storage class `auto' VLA objects are allocated.
> Because of this, if there is not enough room in the
> current stack segment, it is faster to put the VLAs on the heap.
Ah ha! Now we get to the *real* issue. What you are really saying is
that for YOUR ARCHITECTURE in particular, "stack segments" have some
particular size limitations which cannot easily be made larger when
circumstances warrant it. I think that we are all familiar with such
"segment size" limitations (e.g. on measely 8086's) but I am somewhat
dismayed to hear that the Cray architecture imposes similar sorts of
restrictions.
No matter. It would seem that there may indeed be machines (e.g. 8086's,
Crays, etc.) for which certain `auto' VLA objects may exceed architectural
limitations for the current stack segment. In such cases, some implementors
(of VLA's) may choose to support these (excessively large?) `auto' VLA
objects anyway, and they might do so by allocating them out in the heap.
Such implementations *could* arrange for `longjmp' to implement an
honest-to-goodness "stack unwind" process whereby these regions of heap space
are reclaimed as various dynamic function invocations are "exited" (via
`longjmp'). (I might note that implementors already have to figure out
how to implement real "stack unwinding" in order to support Ada exceptions
and C++ exceptions anyway.)
Anyway, the point is that for most architectures, all `auto' VLAs *will*
be allocated on the stack, and the existing `longjmp' implementations for
these architectures *will* ensure that `longjmp' reliably reclaims all
space used by `auto' VLA objects. For some architectures, this same kind
of "reliable reclamation" may also be provided (at the discression of the
individual implementor) if support for the allocation of excessively
large `auto' VLAs in the heap is provided.
Based upon these facts, I would argue that `longjmp' should be defined to
provide "reliable reclamation" in the presence of `auto' VLAs for all
architectures. To define it otherwise would be to significantly degrade
the usefulness of `auto' VLA objects on *all* architectures just for the
sake of those few "odd-ball" architectures which impose draconian and
immutable limits with respect to the sizes of stack segments.
> This is true for both our Fortran and C implementations. There are
> good performance reasons for why we do this.
There are good performance reasons to do what the programmer tells you to
do. He is likely to know the run-time characteristics of his code better
than any compiler is. If the programmer asks for an `auto' VLA object
(rather than calling malloc) then I, for one, would assume that he
understands the tradeoffs, and that he thinks he will get better performance
if that particular object is allocated on the stack. (Remember that objects
allocated on the heap *always* require at least one additional indirection
for each access.)
So in the first instance, I would argue that you ought to give the
programmer what he's obviously asking for (when it comes to `auto' VLA objects).
If you happen to be stuck with a machine whose architectural limitations
simply don't let you do that, you could then either (a) refuse the request
entirely or else (b) arrange for some specialized hack to kick in (e.g.
allocation via malloc). Whatever individual implementors choose to do
however, "reasonable" semantics for `longjmp' should be maintained.
// Ronald F. Guilmette
// domain address: rfg@segfault.uucp
// uucp address: ...!uunet!netcom.com!segfault!rfg