>
> The second option is cheaper and should be more accurate, as it avoids the 
> cost and undesirable conditioning effect of computing *A*' * *A*. The 
> speedup is noticeable, but a negative side effect overwhelms the 
> improvement in my case. Sparsity is important in my application, and I am 
> using *U* to rotate a quadratic (and to reverse the rotation later on). 
> So I must perform multiple matrix-matrix multiplications involving *U*. 
> It turns out that the overall algorithm cost is *greater* with svd than 
> with eig due to the difference in density of *U*. 


Up to a scale factor, the eigenvectors that you get from eig(A'A) should be 
identical to the corresponding (right) singular vectors from svd(A), by the 
definition of the SVD. If you are seeing a difference in sparsity, I can 
think of only two explanations:

1) The differences are just roundoff errors. That is, if you set 
U[abs(U) .< THRESHOLD] = 0 for some appropriate threshold, the results will 
be the same.

2) Your matrix has a lot of degeneracies (repeated eigenvalues), so the 
eigenvectors are not unique and different algorithms make different choices. 
You could look for a sparsifying unitary transform (google this) to find 
the sparsest eigenvectors within each eigenspace of multiplicity > 1.
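The scale-factor claim in (1) is easy to check numerically. Here is a minimal sketch (the matrix A is a made-up example, not your matrix): eig(A'A) yields the right singular vectors V, so I compare against svd(A).V, reversing the eigenvector columns because eigen sorts eigenvalues in ascending order while svd sorts singular values in descending order.

```julia
using LinearAlgebra

# A hypothetical rectangular matrix standing in for your A
A = [1.0 2.0; 3.0 4.0; 5.0 6.0]

# Right singular vectors from the SVD of A
V_svd = svd(A).V

# Eigenvectors of A'A; reverse the columns so the ordering matches the SVD
E = eigen(Symmetric(A' * A))
V_eig = reverse(E.vectors, dims=2)

# Each column should agree up to a sign (the "scale factor"), to roundoff
for j in 1:size(V_svd, 2)
    s = sign(dot(V_svd[:, j], V_eig[:, j]))
    @assert maximum(abs.(V_svd[:, j] - s * V_eig[:, j])) < 1e-10
end
```

For a matrix with well-separated singular values this loop passes; if it fails only after the sign fix-up is removed, you are in case (1), and if whole subspaces disagree, you are in case (2).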

Without knowing more about your matrix A, it's hard to say more.
