Message from khb that may have gotten stuck

David G. Hough dgh
Sat Jun 2 06:18:42 PDT 1990


Everybody should have been getting a number of messages on this subject.

> From: uunet!Eng.Sun.COM!khb
> Subject: Re: ANSI C & perfect decimal <-> fp conversions
> Date: Fri, 01 Jun 90 15:02:35 PDT

> >> > ???  While I'm a compiler jockey by trade, due to a sequence of
> >> > historical accidents I became a lightning rod for customer complaints
> >> > about CRI's fp arithmetic, and it really was the case that addition
> >> > gripes outnumbered all others put together by an easy 10-to-1 ratio.
> >> 
> >> Probably because the addition anomaly was easier to see and understand.
> 
> I spent about a decade tending to Kalman filtering codes (employed in
> missile, space and geologic survey work); from that I conclude that in
> _practice_ it is not typically feasible to determine whether:
> 
> 1)  You have an ill-conditioned/poorly observed system (bad physics)
> 2)  You have an unstable algorithm (bad mathematics)
> 3)  You have a bum computer (bad computer science, arithmetic)
> 4)  You have mis-modeled reality (bad engineering)
> 
> If one uses professionally coded libraries, one can hope that #2 is
> not a major contributing issue. As computer vendors we should strive to
> make #3 less important ... generally customers assume that their
> computer works and can't be expected to tell us which part of the
> arithmetic is _really_ causing trouble.
> 
> Items #1 and #4 can mask the other two .... especially if #3 forces them
> to dink about with their model. Since #4 is the only problem they _want_
> to be solving (for publication, hitting the right target, etc.), it is
> what they will tinker with the most.
> 
> #1 can often be diagnosed via SVD, condition number bounds, and other
> techniques ... and their system will often evolve into observability.
> The other three error sources only go away via lots of dedicated work.
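
To make the SVD remark concrete: for a 2x2 system the singular values,
and hence the condition number, can be written in closed form.  A
minimal sketch of such a diagnostic (my illustration, not khb's code;
a real filter would call a library SVD such as LINPACK's):

    #include <stdio.h>
    #include <math.h>

    /* Condition number (largest/smallest singular value) of the 2x2
       matrix [a b; c d], via the eigenvalues of A'A.  A large value
       flags an ill-conditioned/poorly observed system (#1).  Note
       that (f - r) cancels badly near singularity; acceptable for
       a sketch, not for production code. */
    double cond2x2(double a, double b, double c, double d)
    {
        double f   = a*a + b*b + c*c + d*d;   /* squared Frobenius norm */
        double det = a*d - b*c;
        double r   = sqrt(f*f - 4.0*det*det);
        double s1  = sqrt((f + r) / 2.0);     /* largest singular value  */
        double s2  = sqrt((f - r) / 2.0);     /* smallest singular value */
        return s2 > 0.0 ? s1 / s2 : HUGE_VAL; /* singular: "infinite"    */
    }

    int main(void)
    {
        /* Two nearly dependent observations: condition ~ 4e6. */
        printf("cond = %g\n", cond2x2(1.0, 1.0, 1.0, 1.0 + 1.0e-6));
        return 0;
    }
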
> 
> Since many algorithms rely on scaling to ensure accuracy, divide
> errors are particularly evil .... but the damage will escape notice
> until _much_ later in processing.
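
A small illustration of that point (mine, not khb's): even with
correctly rounded IEEE divides, scaling a value down by y and back up
by y frequently fails to restore it, so a divide that is off by even a
unit in the last place compounds the damage silently:

    #include <stdio.h>

    int main(void)
    {
        int i, bad = 0;
        for (i = 1; i <= 1000; i++) {
            double y = (double)i;
            double x = 1.0 / y;      /* scale down ...          */
            if (x * y != 1.0)        /* ... and back up: exact? */
                bad++;
        }
        /* On an IEEE double machine this count is nonzero (y = 49
           is one failure); each miss is a quiet one-ulp error that
           surfaces much later in processing. */
        printf("%d of 1000 scale factors fail to round-trip\n", bad);
        return 0;
    }
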
> 
> My earlier posting seems only to have gotten to Tim, so I am repeating
> it here:
> 
> ...
> >is "doesn't run appreciably slower than the sloppy methods we use now"
> >-- and I doubt that fast perfect conversion algorithms are publicly
> ...
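
For readers joining late, the fragment above is about correctly
rounded binary <-> decimal conversion.  A quick sketch of what is at
stake (mine; note that the accuracy of atof() and printf() is itself
exactly what the thread is questioning):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double x = 1.0 / 3.0, y;
        char buf[64];

        sprintf(buf, "%.15g", x);  /* 15 digits: too few in general */
        y = atof(buf);
        printf("15 digits: %s\n", y == x ? "round-trips" : "DIFFERS");

        sprintf(buf, "%.17g", x);  /* 17 digits suffice for an IEEE
                                      double, but only if both
                                      conversions round correctly   */
        y = atof(buf);
        printf("17 digits: %s\n", y == x ? "round-trips" : "DIFFERS");
        return 0;
    }
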
> 
> We shouldn't forget the modern problem of networking ... it is highly
> desirable to take the results of computation from machine to machine
> (perhaps via RPCs). Back when we used tapes and SneakerNet (real JPL
> deep space processing at one time!) the cost of moving data around was
> so high that it was seldom done ....
> 
> but with Ethernet and high-performance networks one is very, very
> tempted to move computation from node to node.
> 
> If we don't tighten up these loose ends, moving code even from IEEE
> node to IEEE node will result in not-so-subtle problems in a wide
> variety of applications.
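
When both nodes are IEEE, one way to sidestep conversion entirely is
to ship the 64-bit pattern instead of decimal text.  A sketch,
assuming 8-byte doubles and an agreed byte order on the wire:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        double x = 1.0 / 3.0, y;
        unsigned char wire[8];  /* what would actually cross the net */

        memcpy(wire, &x, 8);    /* sender: capture the bit pattern   */
        memcpy(&y, wire, 8);    /* receiver: reconstruct it exactly  */
        printf("%s\n", x == y ? "bit-exact" : "corrupted in transit");
        return 0;
    }

Decimal text remains the more portable interchange format, but only
if both ends convert correctly -- which is khb's point.
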
> 
> Moving computation between non-identical non-IEEE nodes is probably
> doomed to suffer some hazards no matter how hard one works.
> ....
> 
> It would not be inappropriate, IMHO, for NCEG to hold C/IEEE systems
> to a higher standard than C/non-IEEE machines.
> 
> Looking at run-rate figures, it would appear that natural selection
> will take care of completely non-IEEE machines anyway.


