from na news - ScaLapack
David G. Hough at validgh
dgh
Mon Jan 31 08:17:56 PST 1994
From: Jack Dongarra <dongarra@cs.utk.edu>
Date: Fri, 28 Jan 94 12:49:55 -0500
Subject: Availability of ScaLapack
As part of the ScaLapack project, several new software items are now
available on netlib. Be aware that these are preliminary versions
of the package. Major changes will occur over time. The new items
that have been introduced are:
1) Distributed memory version of the core routines from LAPACK
2) Fully parallel package to solve a symmetric positive definite sparse linear
system on a message passing multiprocessor using Cholesky factorization.
3) A package based on Arnoldi's method for solving large scale nonsymmetric,
symmetric, and generalized algebraic eigenvalue problems.
4) C version of LAPACK
5) LAPACK++, a C++ implementation of part of LAPACK.
6) Templates for sparse iterative methods for non-symmetric Ax=b.
For more information on the availability of each of these
packages, consult the scalapack, clapack, c++, or linalg indexes on netlib.
echo "send index from scalapack" | mail netlibaornl.gov
echo "send index from clapack" | mail netlibaornl.gov
echo "send index from c++/lapack++" | mail netlibaornl.gov
echo "send index from linalg" | mail netlibaornl.gov
1) Distributed memory version of the core routines from LAPACK
Beta version 1.0 of this part of the package includes factorization
and solve routines for LU, QR, and Cholesky; reduction routines to
Hessenberg form, tridiagonal form, and bidiagonal form; and,
preliminary versions of QR with column pivoting, triangular inversion,
and a parallel implementation of the SIGN function, which uses
deflation to calculate eigenvalues. Condition estimation and iterative
refinement routines are also provided for LU and Cholesky. The current
version of ScaLapack is in double precision real. Future releases of
ScaLapack will include complex versions of routines as well as the
single precision equivalents. At the present time, ScaLapack has been
ported to the Intel Gamma, Delta, and Paragon, Thinking Machines CM-5,
and PVM clusters. We are in the process of porting the BLACS to the
IBM SP-1.
A second release of PUMMA (Parallel Universal Matrix Multiply Algorithm)
is included with the ScaLapack software. Both a PICL implementation and
a BLACS implementation of PUMMA are provided.
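For a flavor of how the distributed factor/solve routines above are driven,
the sketch below sets up a 2 x 2 BLACS process grid, builds block-cyclically
distributed pieces of a small diagonally dominant matrix and right-hand
side, and calls a distributed LU solve. The routine names used (Cblacs_*,
numroc_, descinit_, pdgesv_) follow later ScaLapack releases and are
assumptions here; the calling sequences in this beta may differ, so treat
this as an illustration of the intended usage rather than code for this
exact release.

/* Sketch: solve a distributed dense system A x = b over a 2x2 BLACS grid.
 * Intended to be run on 4 processes.  Routine names follow later ScaLAPACK
 * releases and are assumptions, not necessarily this beta's interface.    */
#include <stdio.h>
#include <stdlib.h>

/* C interface to the BLACS */
extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridinit(int *icontxt, char *order, int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol,
                            int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);
extern void Cblacs_exit(int doneflag);

/* Fortran-callable ScaLAPACK routines */
extern int  numroc_(int *n, int *nb, int *iproc, int *isrcproc, int *nprocs);
extern void descinit_(int *desc, int *m, int *n, int *mb, int *nb,
                      int *irsrc, int *icsrc, int *ictxt, int *lld, int *info);
extern void pdgesv_(int *n, int *nrhs, double *a, int *ia, int *ja, int *desca,
                    int *ipiv, double *b, int *ib, int *jb, int *descb,
                    int *info);

int main(void)
{
    int n = 8, nrhs = 1, nb = 2;          /* global order, rhs count, block size */
    int nprow = 2, npcol = 2;             /* 2x2 process grid                    */
    int iam, nprocs, ictxt, myrow, mycol;
    int izero = 0, ione = 1, info;

    Cblacs_pinfo(&iam, &nprocs);
    Cblacs_get(-1, 0, &ictxt);
    Cblacs_gridinit(&ictxt, "Row", nprow, npcol);
    Cblacs_gridinfo(ictxt, &nprow, &npcol, &myrow, &mycol);
    if (myrow < 0) { Cblacs_exit(0); return 0; }   /* process not in the grid */

    /* Local dimensions of the block-cyclically distributed A and b. */
    int mloc = numroc_(&n, &nb, &myrow, &izero, &nprow);
    int nloc = numroc_(&n, &nb, &mycol, &izero, &npcol);
    int lld  = mloc > 1 ? mloc : 1;

    int desca[9], descb[9];
    descinit_(desca, &n, &n,    &nb, &nb, &izero, &izero, &ictxt, &lld, &info);
    descinit_(descb, &n, &nrhs, &nb, &nb, &izero, &izero, &ictxt, &lld, &info);

    double *a   = malloc((size_t)lld * (nloc > 1 ? nloc : 1) * sizeof *a);
    double *b   = malloc((size_t)lld * sizeof *b);
    int    *ipiv = malloc((size_t)(mloc + nb) * sizeof *ipiv);

    /* Fill the local pieces of a diagonally dominant A and of b = 1,
       converting local (il,jl) indices to global (ig,jg) indices.      */
    for (int jl = 0; jl < nloc; jl++) {
        int jg = ((jl / nb) * npcol + mycol) * nb + jl % nb;
        for (int il = 0; il < mloc; il++) {
            int ig = ((il / nb) * nprow + myrow) * nb + il % nb;
            a[il + (size_t)jl * lld] = (ig == jg) ? n : 1.0;
        }
    }
    for (int il = 0; il < mloc; il++) b[il] = 1.0;

    /* Distributed LU factorization and solve: on exit b holds x. */
    pdgesv_(&n, &nrhs, a, &ione, &ione, desca, ipiv,
            b, &ione, &ione, descb, &info);
    if (iam == 0) printf("pdgesv info = %d\n", info);

    free(a); free(b); free(ipiv);
    Cblacs_gridexit(ictxt);
    Cblacs_exit(0);
    return 0;
}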
2) Fully parallel package to solve a symmetric positive definite sparse linear
system on a message passing multiprocessor using Cholesky factorization.
CAPSS (CArtesian Parallel Sparse Solver) is a fully parallel package to
solve a symmetric positive definite sparse linear system on a message
passing multiprocessor using Cholesky factorization. All phases of the
computation, from ordering through numerical solution, are performed in
parallel. The ordering uses Cartesian nested dissection based on an
embedding of the problem in Euclidean space. This first release is
meant for Intel iPSC/860 machines; the code has been compiled and
tested on an Intel iPSC/860 with 128 processors. The code is written
in C with message passing extensions provided by PICL (Portable
Instrumented Communications Library), which is also available from
netlib. CAPSS also uses a few native iPSC/860 functions.
3) A package based on Arnoldi's method for solving large scale nonsymmetric,
symmetric, and generalized algebraic eigenvalue problems.
ARPACK is a Fortran 77 software package for solving large scale
eigenvalue problems. The package is designed to compute a few
eigenvalues and corresponding eigenvectors of a large (sparse) matrix.
The package provides a reverse communication interface (RCI) to user
applications. The RCI allows maximal flexibility with respect to user
needs and allows (and requires) users to supply their own
matrix-vector multiply and/or linear solver routines for the
ARPACK-supported modes (simple REGULAR, simple SHIFT-AND-INVERT,
generalized REGULAR, generalized SHIFT-AND-INVERT, and CAYLEY).
A parallel implementation of the symmetric ARPACK code for the Intel
Touchstone Delta is also available on netlib (see
arnoldi-delta/SRC/ex-sym.doc). ARPACK depends on the standard BLAS
(Levels 1, 2, and 3) and LAPACK libraries, which exist in object form
on the Delta.
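The reverse communication idea is easy to illustrate. In the sketch below,
a HYPOTHETICAL solver routine (a plain power iteration standing in for
ARPACK's Arnoldi routines; this is not ARPACK's actual calling sequence)
returns control to the caller whenever it needs the action of the matrix;
the caller applies its own matrix-vector product and re-enters the routine,
which is exactly the control flow the RCI requires.

/* Reverse communication sketch: a hypothetical solver (power iteration)
 * that never calls the matrix itself.  Whenever it needs y = A*x it sets
 * ido and returns; the caller supplies the product and calls back in.    */
#include <math.h>
#include <stdio.h>

#define RCI_FIRST_CALL   0
#define RCI_NEED_MATVEC  1
#define RCI_DONE        99

static void rci_power_step(int *ido, int n, double *x, double *y,
                           double *lambda, int maxit)
{
    static int iter;
    if (*ido == RCI_FIRST_CALL) {               /* hand out the start vector */
        iter = 0;
        for (int i = 0; i < n; i++) x[i] = 1.0 / sqrt((double)n);
        *ido = RCI_NEED_MATVEC;                 /* please compute y = A*x    */
        return;
    }
    /* On re-entry the caller has placed A*x in y. */
    double nrm = 0.0;
    for (int i = 0; i < n; i++) nrm += y[i] * y[i];
    nrm = sqrt(nrm);
    *lambda = nrm;                              /* ||A*x|| with ||x|| = 1    */
    for (int i = 0; i < n; i++) x[i] = y[i] / nrm;
    *ido = (++iter < maxit) ? RCI_NEED_MATVEC : RCI_DONE;
}

/* User-supplied operator: y = A*x for the tridiagonal matrix diag(-1,2,-1). */
static void my_matvec(int n, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        y[i] = 2.0 * x[i];
        if (i > 0)     y[i] -= x[i - 1];
        if (i < n - 1) y[i] -= x[i + 1];
    }
}

int main(void)
{
    enum { N = 20 };
    double x[N], y[N], lambda = 0.0;
    int ido = RCI_FIRST_CALL;

    /* The reverse communication loop: call the solver, service its request
     * with our own matrix-vector multiply, and repeat until it is done.    */
    do {
        rci_power_step(&ido, N, x, y, &lambda, 500);
        if (ido == RCI_NEED_MATVEC)
            my_matvec(N, x, y);
    } while (ido != RCI_DONE);

    /* Dominant eigenvalue of this matrix is 2 + 2*cos(pi/(N+1)), about 3.98. */
    printf("estimated dominant eigenvalue: %f\n", lambda);
    return 0;
}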
4) C version of LAPACK
CLAPACK is an automated f2c conversion of Fortran 77 LAPACK into ANSI C.
Be aware that since this is an f2c conversion of existing column-oriented
Fortran 77 LAPACK code, all CLAPACK code is column-oriented NOT
row-oriented.
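As a concrete illustration of the column-major convention, the short C
program below calls the f2c-translated LU driver dgesv_ on a 3-by-3 system,
storing the matrix column by column (entry (i,j) at a[i + j*lda]) and
passing every argument by reference, as the Fortran-derived calling
convention requires. The typedefs and the hand-written prototype are a
stand-in for including f2c.h and the CLAPACK prototypes.

/* Column-major use of the f2c-translated LAPACK routine dgesv_ (CLAPACK).
 * The typedefs mirror f2c.h's defaults; in practice include f2c.h and the
 * CLAPACK prototypes instead of declaring dgesv_ by hand as done here.    */
#include <stdio.h>

typedef long int integer;
typedef double   doublereal;

extern int dgesv_(integer *n, integer *nrhs, doublereal *a, integer *lda,
                  integer *ipiv, doublereal *b, integer *ldb, integer *info);

int main(void)
{
    integer n = 3, nrhs = 1, lda = 3, ldb = 3, info;
    integer ipiv[3];

    /* A is stored COLUMN by COLUMN: a[i + j*lda] is the (i,j) entry.
     * The initializer lists the three columns of
     *     [ 4 1 0 ]
     * A = [ 1 4 1 ]
     *     [ 0 1 4 ]                                                    */
    doublereal a[9] = { 4.0, 1.0, 0.0,    /* column 0 */
                        1.0, 4.0, 1.0,    /* column 1 */
                        0.0, 1.0, 4.0 };  /* column 2 */
    doublereal b[3] = { 5.0, 6.0, 5.0 };  /* exact solution is (1, 1, 1) */

    /* Every argument is passed by reference, as in the original Fortran. */
    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);

    printf("info = %ld, x = (%g, %g, %g)\n", (long)info, b[0], b[1], b[2]);
    return 0;
}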
5) LAPACK++, a C++ implementation of part of LAPACK.
LAPACK++ is the C++ version of LAPACK. This version includes support
for solving linear systems using LU, Cholesky, and QR matrix
factorizations. LAPACK++ supports matrix classes for vectors,
non-symmetric matrices, SPD matrices, symmetric matrices, and banded,
triangular, and tridiagonal matrices; however, Version 0.9 does not
include all of the capabilities of the original Fortran 77 LAPACK.
Emphasis is given to routines for solving linear systems with
non-symmetric matrices, symmetric positive definite systems, and
linear least-squares problems. Support for eigenvalue problems and
singular value decompositions is not included in this prototype
release. Future versions of LAPACK++ will support these, as well as
distributed matrix classes for parallel computer architectures.
6) Templates for sparse iterative methods for non-symmetric Ax=b.
We have put together a book on iterative methods for large sparse
nonsymmetric systems of linear equations. The book is available in
PostScript form on netlib or can be ordered from SIAM.
Using the concept of templates, we present the algorithms in a uniform,
straightforward notation, permitting the user to inspect, modify, or
ignore any desired level of implementation detail. Hints on
parallelization, use, and other practical aspects are provided.
In addition to the algorithmic descriptions in the book, we have
provided a set of software in Fortran and in Matlab for the
following methods (a C sketch of one such template appears after the
list):
Bi-Conjugate Gradient
Bi-Conjugate Gradient Stabilized
Chebyshev
Conjugate Gradient
Conjugate Gradient Squared
Generalized Minimal Residual
Jacobi
Quasi-Minimal Residual
Successive Over-Relaxation
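As a flavor of what such a template looks like when written out, here is a
minimal C sketch of the unpreconditioned Conjugate Gradient method for a
small dense symmetric positive definite system. It is not taken from the
book's Fortran or Matlab software; it simply spells out the same algorithm
in C.

/* Unpreconditioned Conjugate Gradient for a dense SPD matrix, written out
 * in C as a sketch of the template idea; the book's software provides the
 * Fortran and Matlab versions (with preconditioning hooks).              */
#include <math.h>
#include <stdio.h>

#define N 4

static void matvec(const double A[N][N], const double x[N], double y[N])
{
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += A[i][j] * x[j];
    }
}

static double dot(const double x[N], const double y[N])
{
    double s = 0.0;
    for (int i = 0; i < N; i++) s += x[i] * y[i];
    return s;
}

/* Solve A x = b, A symmetric positive definite, starting from x = 0. */
static void cg(const double A[N][N], const double b[N], double x[N],
               double tol, int maxit)
{
    double r[N], p[N], Ap[N];
    for (int i = 0; i < N; i++) { x[i] = 0.0; r[i] = b[i]; p[i] = b[i]; }
    double rr = dot(r, r);

    for (int k = 0; k < maxit && sqrt(rr) > tol; k++) {
        matvec(A, p, Ap);
        double alpha = rr / dot(p, Ap);          /* step length             */
        for (int i = 0; i < N; i++) x[i] += alpha * p[i];
        for (int i = 0; i < N; i++) r[i] -= alpha * Ap[i];
        double rr_new = dot(r, r);
        double beta = rr_new / rr;               /* conjugation coefficient */
        for (int i = 0; i < N; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
}

int main(void)
{
    /* A small SPD test matrix and right-hand side. */
    double A[N][N] = { { 4, 1, 0, 0 },
                       { 1, 4, 1, 0 },
                       { 0, 1, 4, 1 },
                       { 0, 0, 1, 4 } };
    double b[N] = { 5, 6, 6, 5 };   /* exact solution is (1, 1, 1, 1) */
    double x[N];

    cg(A, b, x, 1e-12, 100);
    printf("x = (%g, %g, %g, %g)\n", x[0], x[1], x[2], x[3]);
    return 0;
}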
The ScaLapack group:
Oak Ridge National Laboratory
Rice University
University of California, Berkeley
University of Illinois
University of Tennessee
Comments and questions can be sent to scalapack@cs.utk.edu.