The computed residuals are not good, so something went wrong during the eigensolve, probably within your shell matrix multiply. You should check the return value of every operation, especially inside your shell matrix routines, by wrapping each call in PetscCall(); otherwise failures go unnoticed.
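For instance, the shell multiply could look something like the untested sketch below (assuming a recent PETSc where PetscCall() is available). The context struct MyShellCtx and its fields are placeholders for whatever you actually have: F would be the factored matrix of B (e.g. from MatGetFactor()/MatLUFactorNumeric() or PCFactorGetMatrix()), Q your assembled matrix, and t1/t2 work vectors.

#include <petscmat.h>

typedef struct {
  Mat F;       /* factored matrix holding the LU factors of B (placeholder name) */
  Mat Q;
  Vec t1, t2;  /* work vectors with the same layout as x */
} MyShellCtx;

/* y = B^-H * Q * B^-1 * x, with every PETSc call wrapped in PetscCall() */
static PetscErrorCode MatMult_MyOperator(Mat A, Vec x, Vec y)
{
  MyShellCtx *ctx;

  PetscFunctionBeginUser;
  PetscCall(MatShellGetContext(A, &ctx));
  PetscCall(MatSolve(ctx->F, x, ctx->t1));           /* t1 = B^-1 x */
  PetscCall(MatMult(ctx->Q, ctx->t1, ctx->t2));      /* t2 = Q t1   */
  /* solve B^H y = t2 via the transpose solve: B^T conj(y) = conj(t2) */
  PetscCall(VecConjugate(ctx->t2));
  PetscCall(MatSolveTranspose(ctx->F, ctx->t2, y));
  PetscCall(VecConjugate(y));
  PetscFunctionReturn(PETSC_SUCCESS);
}

In real arithmetic the VecConjugate() calls are no-ops, so the same code works for both scalar types. With the PetscCall() wrappers in place, any failure inside the multiply (e.g. a failed MatSolve) aborts with a full stack trace instead of silently producing garbage.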
Regarding shift-and-invert, you cannot use PCLU with a shell matrix. You would have to use an iterative KSP/PC instead, which will likely be very inefficient. Another alternative is to implement the shift-and-invert transformation yourself as a shell matrix, but you are probably overcomplicating things. Instead, I would solve the generalized eigenproblem Q*y=lambda*C*y with C=B^H*B and y=B^-1*x. You can build C explicitly with PETSc's Mat-Mat product operations, see https://petsc.org/release/manualpages/Mat/MatMatMult/ - then you will be able to use STSINVERT. I have appended a rough sketch at the end of this message, below the quoted thread.

Jose

> On 15 Aug 2023, at 11:21, maitri ksh <[email protected]> wrote:
>
> I used MatShell, associated it with a custom-defined matrix-vector multiplication, and used it to solve the eigenvalue problem (((LU)^-H)*Q*(LU)^-1)*x = lmbda*x, where LU is the factor of a matrix B. I compared the eigenvalue results with Matlab; however, in Matlab I computed the matrix A=(B^-H)*Q*B^-1 directly and used eig(A). Here are the results:
>
> petsc ('eigVal.png', method: krylovschur):
> lmbd1 = 22.937184
> lmbd2 = -6.306099
> lmbd3 = 2.904980
> lmbd4 = 0.026435
>
> Matlab:
> lmbd1 = 0.0021
> lmbd2 = 0.0840
> lmbd3 = 3.9060
> lmbd4 = 22.7579
>
> It appears that the iterative procedure I have adopted (in petsc) is accurate only for the largest eigenvalue. Is this correct, or is it due to some error in my code?
>
> Also, I tried using a shift-and-invert strategy ('code_snippet_sinvert.png') to see if I can get accurate non-largest eigenvalues, but it throws an error ('error.png') related to 'MatSolverType mumps does not support matrix type shell', and it gives the same error message with petsc's native MATSOLVERSUPERLU. How do I resolve this?
>
> On Mon, Aug 14, 2023 at 1:20 PM maitri ksh <[email protected]> wrote:
> got it, thanks Pierre & Jose.
>
> On Mon, Aug 14, 2023 at 12:50 PM Jose E. Roman <[email protected]> wrote:
> See for instance ex3.c and ex9.c
> https://slepc.upv.es/documentation/current/src/eps/tutorials/index.html
>
> Jose
>
> > On 14 Aug 2023, at 10:45, Pierre Jolivet <[email protected]> wrote:
> >
> >> On 14 Aug 2023, at 10:39 AM, maitri ksh <[email protected]> wrote:
> >>
> >> Hi,
> >> I need to solve an eigenvalue problem Ax=lmbda*x, where A=(B^-H)*Q*B^-1 is a Hermitian matrix and B^-H refers to the Hermitian transpose of the inverse of the matrix B. It would take around 1.8 TB to explicitly compute the matrix B^-1, so a feasible way to solve this eigenvalue problem is to use the LU factors of the B matrix instead. The problem then looks like this:
> >> (((LU)^-H)*Q*(LU)^-1)*x = lmbda*x
> >> For a guess value of the (normalised) eigenvector 'x',
> >> 1) solve two linear systems to get 'Ax':
> >>    (LU)*y = x, solve for 'y',
> >>    ((LU)^H)*z = Q*y, solve for 'z',
> >> then follow the conventional power-iteration procedure:
> >> 2) update the eigenvector: x = z/||z||
> >> 3) get the eigenvalue from the Rayleigh quotient
> >> 4) go to step 1 and loop with a conditional break.
> >>
> >> Is there any example in petsc that does not require explicit declaration of the matrix 'A' (Ax=lmbda*x) and instead takes a vector 'Ax' as input for an iterative algorithm (like the one above)? I looked into some of the examples of eigenvalue problems (it's highly possible that I might have overlooked something, I am new to petsc) but I couldn't find a way to circumvent the explicit declaration of matrix A.
> >
> > You could use SLEPc with a MatShell, that's the very purpose of this MatType.
> >
> > Thanks,
> > Pierre
> >
> >> Maitri
>
> <eigVal.png><code_snippet_sinvert.png><error.png>
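P.S. Here is the untested sketch of the approach I suggested above: form C = B^H*B explicitly and solve the generalized problem Q*y = lambda*C*y with shift-and-invert. B and Q are assumed to be already assembled (non-shell) matrices; the target value (0.0) and the GHEP problem type are my guesses, adjust them to your case.

#include <slepceps.h>

static PetscErrorCode SolveWithSinvert(Mat B, Mat Q)
{
  Mat BH, C;
  EPS eps;
  ST  st;

  PetscFunctionBeginUser;
  /* C = B^H*B built explicitly; Hermitian positive definite since B is nonsingular */
  PetscCall(MatHermitianTranspose(B, MAT_INITIAL_MATRIX, &BH));
  PetscCall(MatMatMult(BH, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C));
  PetscCall(MatDestroy(&BH));

  /* generalized Hermitian eigenproblem Q*y = lambda*C*y with shift-and-invert */
  PetscCall(EPSCreate(PETSC_COMM_WORLD, &eps));
  PetscCall(EPSSetOperators(eps, Q, C));
  PetscCall(EPSSetProblemType(eps, EPS_GHEP));   /* assumes Q is Hermitian; otherwise use EPS_GNHEP */
  PetscCall(EPSGetST(eps, &st));
  PetscCall(STSetType(st, STSINVERT));
  PetscCall(EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE));
  PetscCall(EPSSetTarget(eps, 0.0));             /* eigenvalues closest to 0; change as needed */
  PetscCall(EPSSetFromOptions(eps));
  PetscCall(EPSSolve(eps));
  /* extract results with EPSGetEigenpair(); the computed eigenvectors are y = B^-1*x,
     so multiply by B to recover the eigenvectors x of the original problem */

  PetscCall(EPSDestroy(&eps));
  PetscCall(MatDestroy(&C));
  PetscFunctionReturn(PETSC_SUCCESS);
}

With this formulation the factorization inside STSINVERT acts on Q - sigma*C, which is a regular assembled matrix, so the default direct solvers (MUMPS, SuperLU, ...) work again; that is exactly what the shell matrix was preventing.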
