Interval BLAS Proposal

David Hough David.Hough@Eng.Sun.COM
Fri Sep 15 15:25:49 PDT 1995


> From owner-reliable_computing@interval.usl.edu Fri Sep 15 14:42 PDT 1995
> Date: Fri, 15 Sep 95 15:30 CDT
> From: georgec@boris.mscs.mu.edu (Dr. George F. Corliss  MU MSCS)
> To: reliable_computing@interval.usl.edu
> Subject: Pre-SCAN '95 meeting?
> 
> If you are coming to SCAN '95 in Wuppertal beginning Sept 26,
> I would like to invite you to meet at 3:00 PM Monday, 25 September
> (Andreas, can you suggest a location?), the afternoon before the
> meeting begins.  The purpose of the meeting is to discuss
> standards/cooperation for freely available interval packages.
> 
> This call comes from discussions of Corliss and Walter at ICIAM
> in Hamburg in July.  Bill Walster has agreed to help chair this
> discussion.  One possible outcome would be interval packages
> Sun can distribute with their standard distribution tapes as they
> are currently doing with Kearfott's INTPAK.
> 
> I hope that you are able to come to discuss ways we as a
> community can work cooperatively and reduce wheel-reinvention.
> If you CAN meet on Monday afternoon, 25 September, please let
> me know.  If you know of others interested in participating,
> please invite them and pass this message along.
> 
> 
> Some of my thoughts:
> 
> I was motivated by a paper by Walster, "Stimulating Hardware and
> Software Support for Interval Arithmetic" to appear in a volume 
> "Applications of Interval Computations," R. B. Kearfott and V. 
> Kreinovich, eds., Kluwer.  The volume is presently in production 
> at Kluwer, and should appear soon.  Walster's paper caused me to 
> reflect on the Basic Linear Algebra Subroutines (BLAS) library.  
> Packages for interval arithmetic might follow a similar model.
> 
> BLAS level 1 - (floating point) vector-vector operations
> BLAS level 2 - (floating point) matrix-vector operations
> BLAS level 3 - (floating point) matrix-matrix operations
> 
> I proposed:
> BIAS level 0 - interval-interval scalar operations
> BIAS level 1 - interval vector-vector operations
> BIAS level 2 - interval matrix-vector operations
> BIAS level 3 - interval matrix-matrix operations
> 
> In particular, the Hamburg BIAS package is already well along
> with this model.
> 
> I think the BLAS people did several things right.  We could
> do MUCH worse than to copy from them as much as we can.  It would
> be GREAT if we could get one or two of the BLAS people interested
> enough to at least serve as advisors to a BIAS effort.
> 
> 1.  BLAS offer a perfectly portable interface.  That is, code written
> against the BLAS interface runs unchanged on any machine.
> 
> 2.  BLAS offer ready-to-read, easy-to-port (but sub-optimal) code.
> Hence, I can run BLAS-calling code on ANY machine.
> 
> 3.  BLAS can be made into as machine-specific an implementation as one cares.
> Hence, if someone (perhaps the vendor) provides optimized BLAS,
> then I get great performance.  The release of at least one machine
> optimized implementation along with the portable implementation is
> essential to prove the concept of a portable, yet machine-specific
> library.
> 
> 4.  BLAS is a level-ed concept.  Their level 1, 2, and 3 are
> mathematically clear and allowed them to release "part" of their
> libraries at a time.  We need a level 0 for interval scalar-scalar
> operations on which level 1 BIAS routines will build.
> 
> 5.  BLAS is highly modular.  Individual routines can be developed
> at separate, loosely-coupled institutions.
> 
> 6.  BLAS offer a consistent interface derived from consideration of
> tasks needed by client programs, not from consideration of what
> some BLAS coder thought was cool.
> 
> 7.  BLAS is a relatively low-budget operation.  Most of the design
> and coding effort was done by a widely distributed set of
> researchers as part of their own research programs.
> 
> 8.  BLAS is freely available (in the portable form).
> Vendor-specific implementations can be included in vendors'
> libraries.
> 
> 9.  BLAS published early and often, attracting a lot of attention.
> 
> 10. BLAS involved the "big guns" of numerical linear algebra so
> that new code development in that research area uses it.  Hence,
> BLAS has become THE standard low-level library interface in that
> area.
> 
> 11. BLAS developed new numerical analysis as it went.  One might
> think that a library built to those interface specifications was "just" a
> coding exercise, but they proved (by a string of publications) that
> there remained research to be done.  A BIAS effort can take
> advantage of the BLAS research results, shortening our development
> time.  We should expect to do some original research along the way,
> too.
> 
> 
> We should be able to do all the same things for a BIAS library,
> PLUS the interval vector-vector, interval matrix-vector and
> interval matrix-matrix routines should have exactly (to the extent
> possible) the same interfaces as the corresponding BLAS routines.
> The BLAS people have even done our interface design work for us :-)
> 
> I would like to know more about how the BLAS people coordinated
> their distributed development efforts.  E-mail helps.  Meeting at
> conferences helps.  Visits of one team member to another help.
> However, it seems that at least a LITTLE closed workshop of
> developers meeting from time to time would be essential.  That is
> where I was hoping that Sun might be able to take the first lead.
> 
> Speed is VERY important.  Good interval algorithms tend to run
> roughly 4 times as long as good point algorithms.  That figure
> varies A LOT, from < 1 for some global optimization and a few
> quadrature problems, to several hundred for some ODEs.  Hence, we
> must be able to present ourselves in the most favorable light
> possible.  That requires a fast library.
> 
> The other issue here is tightness.  A machine-specific library is
> likely to be BOTH faster and tighter than a generic library.  A
> machine-dependent operation will usually yield 1 ULP results, while
> a portable library will usually yield 2-3 ULP results.  In some
> applications, the client software using the portable library will
> do additional iterations of some contractive map to overcome the
> excess width from its operators.  Hence, the portable algorithm as a
> whole is slower, on top of the individual portable operations being slower.
> 
> George F. Corliss
> Dept. Math, Stat, Comp Sci
> Marquette University
> P.O. Box 1881
> Milwaukee, WI  53201-1881  USA
> georgec@mscs.mu.edu
> (414) 288-6599 (office)


