Toon Weyens <[email protected]> writes:

> Hi, thanks for the answers!
>
> I think I expressed myself wrong: I can indeed get it to work with just
> using AIJ matrices, as in example 13. This is the way that I am currently
> solving my problem. There are only two issues:
> 1. Memory is indeed important, so I would certainly like to decrease it by
> one third if possible :-) The goal is to make the simulations as fast and
> light as possible in order to perform parameter studies (on the stability
> of MHD configurations).
What do your matrices represent?  If A and B are really tridiagonal, then
the memory needed for matrix storage is irrelevant, because the Krylov
vectors will dominate.

> 2. I have played around a little bit with the different solvers, but it
> appears that the standard method and the Arnoldi method with explicit
> restart are the best. Some of the others don't converge and some are
> slower.
>
> The thing is that in the end the matrices that I use are large, but they
> have a very simple structure: Hermitian tridiagonal. That's why, I think,
> SLEPc usually converges in a few iterations (correct me if I'm wrong).
>
> The problem is that sometimes, when I consider more grid points, the
> solver doesn't work any more because apparently it uses the LU
> decomposition (not sure whether of the matrix A or B in A x = lambda B x).

The factorization is of B, because you are not using inversion (which is
usually used to target interior eigenvalues or those near 0).  What is B?
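For reference, a minimal sketch (my own illustration, not code from this
thread) of a generalized Hermitian eigenproblem A x = lambda B x with
tridiagonal A and B in SLEPc.  The problem size and matrix entries (a 1D
Laplacian-like A and a mass-like B) are made up for the example; error
checking is omitted for brevity.  It shows where the linear solves with B,
and hence the factorization discussed above, enter through the ST object.

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat      A, B;
  EPS      eps;
  PetscInt n = 100, i, Istart, Iend;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* Tridiagonal "stiffness" matrix A (illustrative values) */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);
    if (i < n-1) MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Tridiagonal "mass" matrix B (Hermitian positive definite here) */
  MatCreate(PETSC_COMM_WORLD, &B);
  MatSetSizes(B, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(B);
  MatSetUp(B);
  MatGetOwnershipRange(B, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)   MatSetValue(B, i, i-1, 1.0/6.0, INSERT_VALUES);
    if (i < n-1) MatSetValue(B, i, i+1, 1.0/6.0, INSERT_VALUES);
    MatSetValue(B, i, i, 4.0/6.0, INSERT_VALUES);
  }
  MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);

  /* Generalized Hermitian eigenproblem; the ST owns the solves with B */
  EPSCreate(PETSC_COMM_WORLD, &eps);
  EPSSetOperators(eps, A, B);
  EPSSetProblemType(eps, EPS_GHEP);
  EPSSetFromOptions(eps);   /* -st_ksp_type / -st_pc_type reach the inner solver */
  EPSSolve(eps);

  EPSDestroy(&eps);
  MatDestroy(&A);
  MatDestroy(&B);
  SlepcFinalize();
  return 0;
}

The inner solve with B is configurable at run time through the ST options
prefix, e.g. "-st_ksp_type cg -st_pc_type jacobi" instead of the default
direct factorization (my suggestion, not from the thread; it only applies
if B is Hermitian positive definite and reasonably conditioned).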