Jim Demmel talk at Stanford 8 October 1990 - key points

David Hough sun!Eng!dgh
Mon Oct 8 18:37:56 PDT 1990


This was a talk on LAPACK progress toward solving standard problems of
linear algebra with greater accuracy but without significant speed penalties.
There was a lot of material and you might be able to get technical reports
from na.demmel; I just want to mention a couple of key points with
important morals, subject
to errors in my superficial understanding.

This work has been most successful for various special but important classes
of matrices, such as symmetric positive definite and bidiagonal; the idea
is to think in terms of error bounds and algorithms that preserve sparsity
and small components.

The usual kind of error analysis considers a perturbation dH of a matrix
H to be inconsequential if ||dH||/||H|| is small enough.  Using norms
means only that the biggest elements of dH must be small compared to the
biggest elements of H; the smallest elements of H could be wiped out
entirely by dH.  In particular the zero elements of H that define its
sparsity could be filled in.
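
To make the contrast concrete, here is a toy illustration of my own (not
from the talk), in Python with numpy; the matrix and perturbation are
made up:

    import numpy as np

    H  = np.array([[1.0,  0.0 ],
                   [0.0,  1e-8]])
    dH = np.array([[0.0,  1e-6],
                   [1e-6, 1e-6]])

    # Norm-wise the perturbation looks harmless.
    print(np.linalg.norm(dH) / np.linalg.norm(H))    # about 1.7e-6

    # Componentwise it is catastrophic: the small diagonal entry is
    # changed by a factor of 100 and the zero entries are filled in.
    with np.errstate(divide='ignore'):
        print(np.abs(dH) / np.abs(H))                # inf where H(i,j) == 0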

The new approach is to consider a perturbation dH inconsequential if
max | dH(i,j)/H(i,j) | is small enough.  In other words every element
of the data may suffer only a small relative perturbation, and zero elements
none, so sparsity is preserved.  An example of the kind of algorithmic
change this forces is that a Jacobi eigenvalue termination criterion like
H(i,j) small enough compared to max H(k,k)

becomes the stricter

H(i,j) small enough compared to sqrt(H(i,i)*H(j,j))

Everything is in absolute value.
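
Here is a minimal sketch of the two tests, with a threshold and the
entries of a strongly graded matrix chosen by me purely for illustration:

    import math

    # Hypothetical entries of a graded symmetric positive definite matrix.
    H_ii, H_jj, H_ij = 1.0e-12, 1.0e-12, 1.0e-13
    max_diag = 1.0        # largest diagonal entry elsewhere in the matrix
    tol = 1.0e-8          # illustrative convergence threshold

    # Old test: compare |H(i,j)| to the largest diagonal entry; it declares
    # this element negligible, and the rotation that would refine the small
    # eigenvalues is skipped.
    old_done = abs(H_ij) <= tol * max_diag                   # True

    # New, stricter test: compare to sqrt(H(i,i)*H(j,j)) instead.
    new_done = abs(H_ij) <= tol * math.sqrt(H_ii * H_jj)     # False

    print(old_done, new_done)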

Another change of viewpoint in a conventional QR algorithm allowed better
relative errors in small eigenvalues.  When an ordinary implicitly or
explicitly shifted QR algorithm is used, very small eigenvalues usually
get clobbered, basically because

	(tiny - huge) + huge

removes most of the significant digits of tiny.  The new algorithm checks
whether small eigenvalues are present and, if so, performs an implicit
zero shift algebraically; many terms cancel out, and the resulting formula
is computationally stable in the relative-error sense.  This is a specific
illustration of the general principle that it is usually better to cancel
early than late, and better to cancel symbolically than numerically.
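
A two-line illustration of the digit loss, with numbers of my own choosing:

    tiny, huge = 1.0e-20, 1.0e+4

    # Cancelling late: tiny is rounded away in (tiny - huge), so adding
    # huge back recovers nothing.
    print((tiny - huge) + huge)     # 0.0

    # Cancelling early (here, exactly): letting the huge terms annihilate
    # each other first keeps every digit of tiny.
    print(tiny + (huge - huge))     # 1e-20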

Somebody in Germany has proved that the problem of determining the nearest
singular matrix to a given one is NP-complete.  I had already proven years
ago that the problem of my writing an acceptable thesis on that subject was
at least that hard.  Instead I barely squeaked by with polynomials.  However
nobody needs to read my thesis anymore, thanks to Demmel's work published
in Math. Comp. and elsewhere.

Although there was nothing new about it, Jim reminded us of a good example
of IEEE arithmetic in action: in the iteration for eigenvalues of tridiagonal
matrices,

	u = sigma - bi**2 / u

neither underflow nor overflow matters much; as in continued fractions,
infinities and zeros are here today and gone tomorrow with no intervention
required, and nobody even needs to think about it (until they try to port
the code to an older system with a punitive attitude toward overflow).
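
A small demonstration of my own, in Python with numpy doubles standing in
for IEEE arithmetic; sigma, the starting u, and the off-diagonal entries
bi are all made up:

    import numpy as np

    sigma = np.float64(1.0)
    b = [1.0, 1.0, 1.0, 1.0]    # hypothetical off-diagonal entries

    u = sigma
    with np.errstate(divide='ignore'):
        for bi in b:
            u = sigma - np.float64(bi) * bi / u
            print(u)            # prints 0.0, -inf, 1.0, 0.0: the infinity
                                # appears and disappears harmlessly

On a system that trapped on the division by zero, the loop would instead
need an explicit test for u == 0 at every step.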


