I'm working to expand the utility of the Gram-Schmidt vector
orthogonalization routines.  Mostly this is on the back of two QR
matrix decomposition routines, one for exact fields that contain their
square roots (e.g. QQbar), the other for matrices over RDF/CDF, plus
the existing Gram-Schmidt routine.  The more stable algorithms produce
unit vectors as they go.  The existing routine just finds an
orthogonal set, side-stepping the square roots needed to scale the
vectors to unit length.
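
(As a quick illustration of why the square roots matter: normalizing
even a very simple vector over QQ leaves the field,

    sage: v = vector(QQ, [1, 1])
    sage: v.norm()
    sqrt(2)

so over QQ the best the basic routine can do is an orthogonal set,
while QQbar or RDF/CDF can absorb the scaling.)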

My problem is that the current routine returns a matrix of orthogonal
row vectors, G, and a matrix of coefficients, mu, which encodes the
conversion from the original matrix of row vectors, A.  The
relationship is given by  A = (mu + 1)*G, where the 1 means the
identity matrix.  The 1's on the diagonal reflect the absence of any
scaling, but in the superior algorithms the scaling cannot be saved
for last, so I don't always get 1's on the diagonal.  Also, this form
implies mu is a square matrix, which it can always be if you like;
but if you start with a linearly dependent set of vectors, you can
toss out the manufactured zero vectors and arrive at a smaller mu
that is rectangular.
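
Concretely, the current contract for a linearly independent set of
rows over QQ looks like this:

    sage: A = matrix(QQ, [[1, 2, 3], [1, 0, 1], [0, 1, 0]])
    sage: G, mu = A.gram_schmidt()
    sage: A == (mu + 1)*G    # the 1 coerces to the identity matrix
    True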

In all cases, it is no problem to produce a lower-triangular matrix M
such that  A = M*G  once the "diagonal 1's" requirement is dropped
(see the sketch below).  How much use does the Gram-Schmidt routine
see in folks' code?  How big an impact would a slight change to this
return value have?  I ask in part because the current routine has a
big bug when you feed it a linearly dependent set, so I'm wondering
if it gets used very much (or maybe all it ever sees are linearly
independent sets).
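
To make the proposal concrete, here is a rough sketch of how such an
M can be built.  This is illustration only: the name gram_schmidt_M
is made up, there is no scaling here (so the diagonal entries happen
to be 1), and a scaling variant would put the scale factors on the
diagonal instead.

    def gram_schmidt_M(A):
        # Returns (G, M) with A == M*G, where the rows of G are
        # orthogonal and nonzero, and M is lower triangular
        # (rectangular when A has linearly dependent rows).
        G, Mrows = [], []
        for v in A.rows():
            w, coeffs = v, []
            for g in G:
                c = (w*g)/(g*g)            # projection coefficient
                coeffs.append(c)
                w = w - c*g
            if w.is_zero():
                Mrows.append(coeffs)       # dependent row: no new vector
            else:
                G.append(w)
                Mrows.append(coeffs + [1]) # new orthogonal vector
        r = len(G)
        M = matrix(A.base_ring(),
                   [row + [0]*(r - len(row)) for row in Mrows])
        return matrix(A.base_ring(), G), M

With a dependent row tossed out, M comes back rectangular:

    sage: A = matrix(QQ, [[1, 2, 3], [2, 4, 6], [1, 0, 1]])
    sage: G, M = gram_schmidt_M(A)
    sage: A == M*G
    True
    sage: M.dimensions()    # three rows in, two survive
    (3, 2)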

Thanks,
Rob
