open systems, free software, standards, benchmarks ... and computer arithmetic
David G. Hough on validgh
Sat Feb 16 13:01:43 PST 1991
In my previous message I neglected to suggest ways out of the self-referential
morass of computer hardware architects designing for compiler software
architects designing for computer hardware architects ..., with end users
effectively factored out of the process.
Two of the driving forces leading to the morass are peculiar to the last
decade or so; Bill Joy says they will continue for about ten more years. One
is extremely fast technology evolution; another is the open systems
revolution.
The fast technology evolution means there isn't a lot of time to refine
products, whether paper, hardware, or software. In April 1989 Sun introduced
products that obsoleted its entire product line. Less than two years later
only one of these, the 4/330, is still barely in production. The current
product line is mostly less than one year old and mostly has less than one year
to go before becoming obsolete. This doesn't allow a lot of time for fine
tuning. 32-bit RISC processors haven't caught up with 32-bit CISC processors
in total volume per year yet, but it's already time for both to figure out
their position with respect to 64-bit busses and address spaces.
The open systems revolution means that it is more possible than ever
before to change vendors - to a new hardware platform, or a new compiler, or a
new CAD application - because interfaces are more standardized than ever
before. This means that it is much harder than in the 1970's to lock in a
customer base with proprietary technology, and consequently vendors have
little latitude about margins - they have to accept pricing dictated by the most
efficient manufacturers and figure out some other way to make enough to fund
research and development. Consequently there is very little incentive for
customers to adopt proprietary hardware or software features; they don't want
to be locked into another vendor's pricing strategy. Thus vendors have little
incentive to offer proprietary added value unless it's so wonderful that it
will win over distrustful customers; that's happening less and less.
So no matter how wonderful an idea appears on comp.arch, the chance of
its adoption by vendors is no better than it ever was if it's different from
what has been proven to be sellable in the past.
One way of dealing with this is by prescriptive standards such as IEEE
754. That was really an anomaly in the normal standardization paradigm, but
it worked because it was so much better than any standard based on consensus
among all existing implementations could have been. Generally speaking,
consensus standards produced by vendors and other directly interested parties
are as likely to stifle valuable innovation as they are to stifle gratuitous
innovation.
There have been some efforts at prescriptive standards by end users. The
US government has tried to do this, with its usual skill; there is now a
consortium of petroleum industry companies trying something along those lines.
Whether standards bureaucrats at large organizations can really do anything
more than canonize the existing practice that got them where they are remains
to be seen.
In the mathematical arena, Kulisch and his colleagues got GAMM, a German
equivalent to SIAM, to adopt a resolution urging computer manufacturers to
provide correctly-rounded elementary arithmetic operations, defined as
Kulisch does to include scalar products. I don't know how much effect this
has had on computer manufacturers in Europe as a whole, although IBM and some
other companies have produced microcoded implementations of correctly-rounded
scalar products and some compilers to exploit them. If you take the point of
view that all Turing machines are equivalent because they all do matrix
multiplies in O(n**p) operations for some 2 < p < 3, then that's great, but if you
are interested in the constant factor then you might be concerned about
whether correctly-rounded scalar products are the most cost-effective means
of getting satisfactory error bounds on realistic applications. So this exam-
ple suggests that standards set by user groups may not always be steps in the
right direction.
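To make the constant-factor question concrete, here is a small illustrative C
sketch - an illustration only, not anything drawn from the GAMM resolution or
from Kulisch's proposals - contrasting a naive double-precision dot product
with one using Kahan compensated summation. A Kulisch-style correctly-rounded
scalar product would instead accumulate every product exactly, for instance in
a long fixed-point accumulator, and round just once at the end; compensation
gets much of the accuracy benefit on ordinary data for a few extra operations
per term.

/* Illustration only: two ways to compute a dot product in IEEE 754
 * double precision.  The naive loop has an error bound that grows
 * roughly like n*eps; the compensated loop's bound is nearly
 * independent of n, at a few times the arithmetic cost.  Neither is
 * a correctly-rounded scalar product in Kulisch's sense. */
#include <stdio.h>

double dot_naive(const double *x, const double *y, int n)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++)
        s += x[i] * y[i];
    return s;
}

double dot_compensated(const double *x, const double *y, int n)
{
    double s = 0.0, c = 0.0;   /* c carries the running rounding error of s */
    int i;
    for (i = 0; i < n; i++) {
        double term = x[i] * y[i] - c;  /* the product's own rounding error is not recovered */
        double t = s + term;
        c = (t - s) - term;             /* low-order part lost when term was added to s */
        s = t;
    }
    return s;
}

#define N 100000

int main(void)
{
    static double x[N + 1], y[N + 1];
    int i;
    x[0] = 1.0;
    y[0] = 1.0;
    for (i = 1; i <= N; i++) {
        x[i] = 1e-16;   /* each term is too small to survive naive addition to 1.0 */
        y[i] = 1.0;
    }
    /* exact answer is essentially 1 + 100000*1e-16 = 1.00000000001;
     * machines that accumulate in a wider register may do better in
     * the naive loop than its worst-case bound suggests */
    printf("naive:       %.17g\n", dot_naive(x, y, N + 1));
    printf("compensated: %.17g\n", dot_compensated(x, y, N + 1));
    return 0;
}

On hardware that evaluates double arithmetic in double precision with round to
nearest, the naive loop returns exactly 1, while the compensated loop comes
within a few rounding errors of the true value 1.00000000001; whether such
cheap fixes or fully correctly-rounded scalar products are the better buy on
realistic applications is the constant-factor question at issue.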
It used to be that to get feature X into a language, for instance, you
could try to convince the standards committee to put it into the standard,
then get the Feds to adopt the standard so that everybody wanting to sell to
them has to implement it; or you could try to get the industry leader - at
various times IBM or Cray or DEC - to put it into their language as an
extension and so force all their competitors to follow, and THEN ask the standards
committee to endorse the result.
One of the great benefits of open systems, however, is that now you can
extend language definitions yourself, at a price that's very cheap by
historical standards. Starting with a freely available implementation of GCC you can
add what you like to it and distribute that to anybody who is interested.
With a comparatively small amount of effort you can produce compilers with
feature X for most of the common Unix platforms. If feature X works out well
then it might even be incorporated into "standard" GCC. So a lot of the
complaining about language standards is not as productive as learning about how
to extend GCC.
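To give the flavor of what a compiler-carried feature X looks like from the
user's side, here is a small illustrative fragment - only an illustration, not
something being proposed here - built on two GNU C extensions, typeof and
statement expressions. It depends on GCC rather than on any language standard,
which is exactly the kind of de facto extension just described.

/* Illustration: a max macro that evaluates each argument exactly once,
 * written with two GNU C extensions (typeof and statement expressions).
 * It compiles under GCC but not under a strictly conforming C compiler. */
#include <stdio.h>

#define SAFE_MAX(a, b)            \
    ({ typeof(a) _a = (a);        \
       typeof(b) _b = (b);        \
       _a > _b ? _a : _b; })

int main(void)
{
    int i = 3;
    printf("%d\n", SAFE_MAX(i++, 2));  /* prints 3; i++ is evaluated only once */
    printf("%d\n", i);                 /* prints 4 */
    return 0;
}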
Unfortunately there's no comparable mechanism yet for hardware. I
remember ten years ago when the Conway-Mead book ushered in a new era in which
anybody could design his own IC's. Ten years later, however, it seems that
IC's are still being designed by experts, and they can do more, but it's still
very hard and expensive. Special-purpose hardware implemented with ten-year-
old technology probably isn't competitive with general-purpose hardware
implemented with current technology - that's the thrust of RISC. Extending a chip
to include an integer multiply or sqrt instruction is a lot harder than
extending GCC to generate code for it.
So users still have to lobby chip manufacturers, or at least instruction-set
architects. One way to do this is through traditional standards meetings like
NCEG, which is dominated mostly by technical experts rather than standards
bureaucrats - though those experts come mostly from hardware and software
vendors rather than from real end users. Corresponding activities within
NCEG or elsewhere
for interval arithmetic and integer and fixed-point computation might be
appropriate.
An approach I favor, whenever feasible, is standardization by benchmarks,
typified by SPEC and PERFECT with respect to performance, and by the IEEE test
vectors, PARANOIA, and NAG FPV with respect to correctness. Nothing helps
to focus hardware and software vendors like losing big on some reasonable-
looking realistic computation.
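As a toy example on the correctness side - a simplified sketch in the spirit
of such tests, not an excerpt from PARANOIA or the NAG validator - the
following program infers the working precision of double arithmetic and checks
one consequence of rounding addition to nearest.

/* A toy paranoia-flavored probe: find the machine epsilon of double
 * arithmetic and check one property expected of round-to-nearest
 * addition.  Real suites cover directed roundings, exceptions,
 * edge operands, and far more besides. */
#include <stdio.h>

int main(void)
{
    volatile double eps = 1.0, trial, low, high;

    /* Halve eps until adding it to 1.0 no longer changes the sum,
     * then back up one step: that is the machine epsilon. */
    do {
        eps *= 0.5;
        trial = 1.0 + eps;
    } while (trial != 1.0);
    eps *= 2.0;
    printf("machine epsilon: %g\n", eps);   /* 2.22045e-16 for IEEE double */

    /* Under round-to-nearest, 1 + eps/4 must round back down to 1,
     * and 1 + 3*eps/4 must round up to 1 + eps.  The volatile stores
     * force results to double even on machines that evaluate in a
     * wider format - itself the sort of behavior such probes expose. */
    low  = 1.0 + eps / 4.0;
    high = 1.0 + 3.0 * eps / 4.0;
    if (low == 1.0 && high == 1.0 + eps)
        printf("addition appears to round to nearest\n");
    else
        printf("FLAW: addition does not appear to round to nearest\n");
    return 0;
}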
Existing efforts in this area tend to be of the form: how fast can you
execute this source code unchanged and still get correct results? The PERFECT
paradigm is better suited to technologies that are harder to express in
existing languages: how fast can you solve the problem represented by this source
code, rewriting when that helps, and still get correct results? Unfortunately
applications of many aspects of IEEE 754, interval arithmetic, and a lot of
integer computation can't be reasonably expressed in existing languages and so
the "benchmark" has to be written in prose, leading to questions about whether
it represents a realistic application... and it might not, yet, so there's
some expert judgment involved about whether it would be if the necessary
hardware and software support were in place.