Re: [petsc-users] SLEPc: smallest eigenvalues

2021-10-05 Thread Jose E. Roman
Nonlinear eigenvalue problems can still be considered a research topic. The NEP 
package is more or less "finished", cf. https://doi.org/10.1145/3447544, but 
your use case may require changes. I would suggest that you write another email 
to me (not the list) and we can discuss the details.
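
In case it helps, a minimal sketch of what the NEP split form could look like 
for the problem you describe, i.e. (K + lambda*C + lambda^2*M - 
exp(lambda*t)*A)*x = 0 (this assumes K, C, M, A are assembled Mats and t is a 
known PetscScalar; illustrative only, not tested code):

NEP nep;
FN  f[4];
Mat mats[4] = {K, C, M, A};
PetscScalar one = 1.0, lin[2] = {1.0, 0.0}, quad[3] = {1.0, 0.0, 0.0};

FNCreate(PETSC_COMM_WORLD, &f[0]); FNSetType(f[0], FNRATIONAL);
FNRationalSetNumerator(f[0], 1, &one);     /* f0(lambda) = 1 */
FNCreate(PETSC_COMM_WORLD, &f[1]); FNSetType(f[1], FNRATIONAL);
FNRationalSetNumerator(f[1], 2, lin);      /* f1(lambda) = lambda */
FNCreate(PETSC_COMM_WORLD, &f[2]); FNSetType(f[2], FNRATIONAL);
FNRationalSetNumerator(f[2], 3, quad);     /* f2(lambda) = lambda^2 */
FNCreate(PETSC_COMM_WORLD, &f[3]); FNSetType(f[3], FNEXP);
FNSetScale(f[3], t, -1.0);                 /* f3(lambda) = -exp(t*lambda) */

NEPCreate(PETSC_COMM_WORLD, &nep);
NEPSetSplitOperator(nep, 4, mats, f, SUBSET_NONZERO_PATTERN);
NEPSetFromOptions(nep);
NEPSolve(nep);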

Jose


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-10-05 Thread Varun Hiremath
Hi Jose,

I have now gotten the quadratic problem working decently using the PEP
package with appropriate scaling and preconditioning, so thanks for all the
suggestions! For the case where K is a shell matrix, I used a scaling based
on an approximation of K, and that seems to be working well.
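
For reference, a setup of this kind might look roughly as follows (sketch; 
Kapprox stands for an approximation of K, and the scalar-scaling choice is an 
assumption, not the exact code used):

PEP pep;
ST  st;
Mat Amats[3] = {K, C, M};

PEPCreate(PETSC_COMM_WORLD, &pep);
PEPSetOperators(pep, 3, Amats);       /* Q(lambda) = K + lambda*C + lambda^2*M */
PEPSetScale(pep, PEP_SCALE_SCALAR, PETSC_DEFAULT, NULL, NULL, 0, PETSC_DEFAULT);
PEPGetST(pep, &st);
STSetPreconditionerMat(st, Kapprox);  /* should approximate Q(sigma) */
PEPSetFromOptions(pep);
PEPSolve(pep);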

So now that both linear and quadratic problems are working, I wanted to get
your suggestions on solving a non-linear problem. In some of our cases, we
have a non-linear source term S(lambda) on the right-hand side of the
equation as follows:
(K + lambda*C + lambda^2*M)*x = S(lambda)*x,
where the source can sometimes be simplified as S(lambda) =
exp(lambda*t)*A, where A is a constant matrix.

I am currently solving this non-linear problem iteratively. For each
eigenvalue, I compute the source and add it into the K matrix, and then
iterate until convergence. For this reason, I end up solving the system
multiple times, which makes it very slow. I saw some examples of non-linear
problems included in the NEP package. I just wanted to get your thoughts on
whether I would benefit from using the NEP package for this particular problem.
Will I be able to use preconditioning and scaling as with the PEP package
to speed up the computation for the case where K is a shell matrix? Thanks
for your help.

Regards,
Varun



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-30 Thread Varun Hiremath
Hi Jose,

Thanks again for your valuable suggestions. I am still working on this but
wanted to give you a quick update.

For the linear problem, I tried different KSP solvers and am finally getting
good convergence using CGS with an inexact LU (MUMPS) inverse. So thank you
very much for your help!
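
In code, that combination can be requested on the ST's linear solver along 
these lines (sketch; eps is the EPS solver and the tolerance value is an 
assumption):

ST  st;
KSP ksp;
PC  pc;

EPSGetST(eps, &st);
STGetKSP(st, &ksp);
KSPSetType(ksp, KSPCGS);                       /* CGS iterations */
KSPSetTolerances(ksp, 1e-5, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCLU);                           /* LU of the preconditioner matrix */
PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);  /* factorize with MUMPS */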

But for the quadratic problem, I'm still struggling. As you suggested, I
have now started using the PEP solver. For the simple case where the K
matrix is explicitly known, everything works fine. But for the case where K
is a shell matrix, it struggles to converge. I am yet to try the scaling
option and some other preconditioning options. I will get back to you on
this if I have any questions. Appreciate your help!

Thanks,
Varun


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-28 Thread Jose E. Roman



> On 28 Sep 2021, at 7:50, Varun Hiremath wrote:
> 
> Hi Jose,
> 
> I implemented the LU factorized preconditioner and tested it using PREONLY + 
> LU, but that actually is converging to the wrong eigenvalues, compared to 
> just using BICGS + BJACOBI, or simply computing EPS_SMALLEST_MAGNITUDE 
> without any preconditioning. My preconditioning matrix is only a 1st order 
> approximation, and the off-diagonal terms are not very accurate, so I'm 
> guessing this is why the LU factorization doesn't help much? Nonetheless, 
> using BICGS + BJACOBI with slightly relaxed tolerances seems to be working 
> fine.

If your PCMAT is not an exact inverse, then you have to iterate, i.e. not use 
KSPPREONLY but KSPBCGS or another iterative solver.

> 
> I now want to test the same preconditioning idea for a quadratic problem. I 
> am solving a quadratic equation similar to Eqn.(5.1) in the SLEPc manual:
>   (K + lambda*C + lambda^2*M)*x = 0,
> I don't use the PEP package directly, but solve this by linearizing similar 
> to Eqn.(5.3) and calling EPS. Without explicitly forming the full matrix, I 
> just use the block matrix structure as explained in the below example and 
> that works nicely for my case:
> https://slepc.upv.es/documentation/current/src/eps/tutorials/ex9.c.html

Using PEP is generally recommended. The default solver TOAR is memory-efficient 
and performs less computation than a trivial linearization. In addition, PEP 
allows you to do scaling, which is often very important for getting accurate 
results in some problems, depending on their conditioning.

In your case K is a shell matrix, so things may not be trivial. If I am not 
wrong, you should be able to use STSetPreconditionerMat() for a PEP, where the 
preconditioner in this case should be built to approximate Q(sigma), where Q(.) 
is the quadratic polynomial and sigma is the target.

> 
> In my case, K is not explicitly known, and for linear problems, where C = 0, 
> I am using a 1st order approximation of K as the preconditioner. Now could 
> you please tell me if there is a way to conveniently set the preconditioner 
> for the quadratic problem, which will be of the form [-K 0; 0 I]? Note that K 
> is constructed in parallel (the rows are distributed), so I wasn't sure how 
> to construct this preconditioner matrix which will be compatible with the 
> shell matrix structure that I'm using to define the MatMult function as in 
> ex9.

The shell matrix of ex9.c interleaves the local parts of the first block and 
the second block. In other words, a process' local part consists of the local 
rows of the first block followed by the local rows of the second block. In your 
case, the local rows of K followed by the local rows of the identity 
(appropriately padded with zeros).
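
A sketch of a shell matrix that applies the inverse of [-K 0; 0 I] in this 
layout could look as follows (kspK is assumed to solve with the approximate K, 
e.g. via an LU factorization; x1 and y1 are work vectors of local size nloc; 
error checking omitted):

typedef struct {
  KSP      kspK;    /* solver for the approximate K */
  Vec      x1, y1;  /* work vectors with local size nloc */
  PetscInt nloc;    /* number of local rows of K */
} BlockPCCtx;

PetscErrorCode ApplyBlockInverse(Mat Pshell, Vec x, Vec y)
{
  BlockPCCtx        *ctx;
  const PetscScalar *px;
  PetscScalar       *py;
  PetscInt          i;

  MatShellGetContext(Pshell, &ctx);
  VecGetArrayRead(x, &px);
  VecGetArray(y, &py);
  VecPlaceArray(ctx->x1, px);             /* first block: local rows of the K part */
  VecPlaceArray(ctx->y1, py);
  KSPSolve(ctx->kspK, ctx->x1, ctx->y1);  /* y1 = inv(Kapprox)*x1 */
  VecScale(ctx->y1, -1.0);                /* minus sign of the (1,1) block */
  VecResetArray(ctx->x1);
  VecResetArray(ctx->y1);
  for (i = 0; i < ctx->nloc; i++)         /* second block: identity */
    py[ctx->nloc + i] = px[ctx->nloc + i];
  VecRestoreArrayRead(x, &px);
  VecRestoreArray(y, &py);
  return 0;
}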

Jose



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-27 Thread Varun Hiremath
Hi Jose,

I implemented the LU factorized preconditioner and tested it using
PREONLY + LU, but that actually is converging to the wrong eigenvalues,
compared to just using BICGS + BJACOBI, or simply computing
EPS_SMALLEST_MAGNITUDE without any preconditioning. My preconditioning
matrix is only a 1st order approximation, and the off-diagonal terms are
not very accurate, so I'm guessing this is why the LU factorization doesn't
help much? Nonetheless, using BICGS + BJACOBI with slightly relaxed
tolerances seems to be working fine.

I now want to test the same preconditioning idea for a quadratic problem. I
am solving a quadratic equation similar to Eqn.(5.1) in the SLEPc manual:
  (K + lambda*C + lambda^2*M)*x = 0,
I don't use the PEP package directly, but solve this by linearizing similar
to Eqn.(5.3) and calling EPS. Without explicitly forming the full matrix, I
just use the block matrix structure as explained in the below example and
that works nicely for my case:
https://slepc.upv.es/documentation/current/src/eps/tutorials/ex9.c.html
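
One possible companion linearization acts on x = (x1; x2) as y1 = x2 and 
y2 = -inv(M)*(K*x1 + C*x2); a sketch of the corresponding MatMult, assuming the 
interleaved layout of ex9.c, a kspM that solves with M, and a work vector w 
(illustrative only, not necessarily the exact linearization used):

typedef struct {
  Mat      K, C;               /* K may itself be a shell matrix */
  KSP      kspM;               /* solver for M */
  Vec      x1, x2, y1, y2, w;  /* work vectors with local size nloc */
  PetscInt nloc;
} LinCtx;

PetscErrorCode MatMult_Companion(Mat Lshell, Vec x, Vec y)
{
  LinCtx            *ctx;
  const PetscScalar *px;
  PetscScalar       *py;

  MatShellGetContext(Lshell, &ctx);
  VecGetArrayRead(x, &px);
  VecGetArray(y, &py);
  VecPlaceArray(ctx->x1, px);
  VecPlaceArray(ctx->x2, px + ctx->nloc);
  VecPlaceArray(ctx->y1, py);
  VecPlaceArray(ctx->y2, py + ctx->nloc);
  VecCopy(ctx->x2, ctx->y1);                    /* y1 = x2 */
  MatMult(ctx->K, ctx->x1, ctx->w);             /* w = K*x1 */
  MatMultAdd(ctx->C, ctx->x2, ctx->w, ctx->w);  /* w = w + C*x2 */
  KSPSolve(ctx->kspM, ctx->w, ctx->y2);         /* y2 = inv(M)*w */
  VecScale(ctx->y2, -1.0);                      /* y2 = -inv(M)*(K*x1 + C*x2) */
  VecResetArray(ctx->x1); VecResetArray(ctx->x2);
  VecResetArray(ctx->y1); VecResetArray(ctx->y2);
  VecRestoreArrayRead(x, &px);
  VecRestoreArray(y, &py);
  return 0;
}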

In my case, K is not explicitly known, and for linear problems, where C =
0, I am using a 1st order approximation of K as the preconditioner. Now
could you please tell me if there is a way to conveniently set the
preconditioner for the quadratic problem, which will be of the form [-K 0;
0 I]? Note that K is constructed in parallel (the rows are distributed), so
I wasn't sure how to construct this preconditioner matrix which will be
compatible with the shell matrix structure that I'm using to define the
MatMult function as in ex9.

Thanks,
Varun


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-25 Thread Varun Hiremath
Ok, great! I will give that a try, thanks for your help!


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-25 Thread Jose E. Roman
Yes, you can use PCMAT (https://petsc.org/release/docs/manualpages/PC/PCMAT.html) 
and then pass a preconditioner matrix that applies the inverse via a shell matrix.
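
Schematically, that could look like the sketch below (kspP is assumed to be a 
KSP configured as PREONLY + LU, e.g. with MUMPS, on the preconditioner matrix; 
eps, st, ksp, pc and nloc are assumed to exist):

PetscErrorCode ApplyPinv(Mat Pshell, Vec x, Vec y)
{
  KSP kspP;
  MatShellGetContext(Pshell, &kspP);
  return KSPSolve(kspP, x, y);  /* y = inv(P)*x via the direct factorization */
}

/* ... */
Mat Pinv;
MatCreateShell(PETSC_COMM_WORLD, nloc, nloc, PETSC_DETERMINE, PETSC_DETERMINE,
               kspP, &Pinv);
MatShellSetOperation(Pinv, MATOP_MULT, (void (*)(void))ApplyPinv);
EPSGetST(eps, &st);
STSetPreconditionerMat(st, Pinv);  /* Pinv becomes the Pmat of the ST's KSP */
STGetKSP(st, &ksp);
KSPGetPC(ksp, &pc);
PCSetType(pc, PCMAT);              /* PC application = MatMult with Pinv */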


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-25 Thread Varun Hiremath
Hi Jose,

Thanks for checking my code and providing suggestions.

In my particular case, I don't know the matrix A explicitly, I compute A*x
in a matrix-free way within a shell matrix, so I can't use any of the
direct factorization methods. But just a question regarding your suggestion
to compute a (parallel) LU factorization. In our work, we do use MUMPS to
compute the parallel factorization. For solving the generalized problem,
A*x = lambda*B*x, we are computing inv(B)*A*x within a shell matrix, where the
factorization of B is computed using MUMPS. (We don't call MUMPS through
SLEPc as we have our own MPI wrapper and other user settings to handle.)

So for the preconditioning, instead of using the iterative solvers, can I
provide a shell matrix that computes inv(P)*x corrections (where P is the
preconditioner matrix) using MUMPS direct solver?

And yes, thanks, #define PETSC_USE_COMPLEX 1 is not needed, it works
without it.

Regards,
Varun


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-24 Thread Jose E. Roman
If you do
$ ./acoustic_matrix_test.o -shell 0 -st_type sinvert -deflate 1
then it is using an LU factorization (the default), which is fast.

Use -eps_view to see which solver settings you are using.

BiCGStab with block Jacobi does not work for your matrix; it exceeds the 
maximum number of iterations. So this is not viable unless you can find a 
better preconditioner for your problem. If not, just using 
EPS_SMALLEST_MAGNITUDE will be faster.

Computing smallest magnitude eigenvalues is a difficult task. The most robust 
way is to compute a (parallel) LU factorization if you can afford it.
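
For reference, that exact shift-and-invert setup can also be selected entirely 
from the command line, e.g. (assuming PETSc was configured with MUMPS):

$ ./acoustic_matrix_test.o -shell 0 -st_type sinvert -eps_target 0 \
    -st_ksp_type preonly -st_pc_type lu -st_pc_factor_mat_solver_type mumps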


A side note: don't add this to your source code
#define PETSC_USE_COMPLEX 1
This define is taken from PETSc's include files, you should not mess with it. 
Instead, you probably want to add something like this AFTER #include <slepceps.h>:
#if !defined(PETSC_USE_COMPLEX)
#error "Requires complex scalars"
#endif

Jose


> On 22 Sep 2021, at 19:38, Varun Hiremath wrote:
> 
> Hi Jose,
> 
> Thank you, that explains it and my example code works now without specifying 
> "-eps_target 0" in the command line.
> 
> However, both the Krylov inexact shift-invert and JD solvers are struggling 
> to converge for some of my actual problems. The issue seems to be related to 
> non-symmetric general matrices. I have extracted one such matrix attached 
> here as MatA.gz (size 100k), and have also included a short program that 
> loads this matrix and then computes the smallest eigenvalues as I described 
> earlier.
> 
> For this matrix, if I compute the eigenvalues directly (without using the 
> shell matrix) using shift-and-invert (as below) then it converges in less 
> than a minute.
> $ ./acoustic_matrix_test.o -shell 0 -st_type sinvert -deflate 1
> 
> However, if I use the shell matrix and use any of the preconditioned solvers 
> JD or Krylov shift-invert (as shown below) with the same matrix as the 
> preconditioner, then they struggle to converge.
> $ ./acoustic_matrix_test.o -usejd 1 -deflate 1
> $ ./acoustic_matrix_test.o -sinvert 1 -deflate 1
> 
> Could you please check the attached code and suggest any changes in settings 
> that might help with convergence for these kinds of matrices? I appreciate 
> your help!
> 
> Thanks,
> Varun
> 
> On Tue, Sep 21, 2021 at 11:14 AM Jose E. Roman  wrote:
> I will have a look at your code when I have more time. Meanwhile, I am 
> answering 3) below...
> 
> > El 21 sept 2021, a las 0:23, Varun Hiremath  
> > escribió:
> > 
> > Hi Jose,
> > 
> > Sorry, it took me a while to test these settings in the new builds. I am 
> > getting good improvement in performance using the preconditioned solvers, 
> > so thanks for the suggestions! But I have some questions related to the 
> > usage.
> > 
> > We are using SLEPc to solve the acoustic modal eigenvalue problem. Attached 
> > is a simple standalone program that computes acoustic modes in a simple 
> > rectangular box. This program illustrates the general setup I am using, 
> > though here the shell matrix and the preconditioner matrix are the same, 
> > while in my actual program the shell matrix computes A*x without explicitly 
> > forming A, and the preconditioner is a 0th order approximation of A.
> > 
> > In the attached program I have tested both
> > 1) the Krylov-Schur with inexact shift-and-invert (implemented under the 
> > option sinvert);
> > 2) the JD solver with preconditioner (implemented under the option usejd)
> > 
> > Both the solvers seem to work decently, compared to no preconditioning. 
> > This is how I run the two solvers (for a mesh size of 1600x400):
> > $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
> > $ ./acoustic_box_test.o -nx 1600 -ny 400 -sinvert 1 -deflate 1 -eps_target 0
> > Both finish in about ~10 minutes on my system in serial. JD seems to be 
> > slightly faster and more accurate (for the imaginary part of eigenvalue).
> > The program also runs in parallel using mpiexec. I use complex builds, as 
> > in my main program the matrix can be complex.
> > 
> > Now here are my questions:
> > 1) For this particular problem type, could you please check if these are 
> > the best settings that one could use? I have tried different combinations 
> > of KSP/PC types e.g. GMRES, GAMG, etc, but BCGSL + BJACOBI seems to work 
> > the best in serial and parallel.
> > 
> > 2) When I tested these settings in my main program, for some reason the JD 
> > solver was not converging. After further testing, I found the issue was 
> > related to the setting of "-eps_target 0". I have included 
> > "EPSSetTarget(eps,0.0);" in the program and I assumed this is equivalent to 
> > passing "-eps_target 0" from the command line, but that doesn't seem to be 
> > the case. For instance, if I run the attached program without "-eps_target 
> > 0" in the command line then it doesn't converge.
> > $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
> >  the above finishes in about 10 minutes
> > $ 

Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-21 Thread Jose E. Roman
I will have a look at your code when I have more time. Meanwhile, I am 
answering 3) below...

> On 21 Sep 2021, at 0:23, Varun Hiremath wrote:
> 
> Hi Jose,
> 
> Sorry, it took me a while to test these settings in the new builds. I am 
> getting good improvement in performance using the preconditioned solvers, so 
> thanks for the suggestions! But I have some questions related to the usage.
> 
> We are using SLEPc to solve the acoustic modal eigenvalue problem. Attached 
> is a simple standalone program that computes acoustic modes in a simple 
> rectangular box. This program illustrates the general setup I am using, 
> though here the shell matrix and the preconditioner matrix are the same, 
> while in my actual program the shell matrix computes A*x without explicitly 
> forming A, and the preconditioner is a 0th order approximation of A.
> 
> In the attached program I have tested both
> 1) the Krylov-Schur with inexact shift-and-invert (implemented under the 
> option sinvert);
> 2) the JD solver with preconditioner (implemented under the option usejd)
> 
> Both the solvers seem to work decently, compared to no preconditioning. This 
> is how I run the two solvers (for a mesh size of 1600x400):
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -sinvert 1 -deflate 1 -eps_target 0
> Both finish in about 10 minutes on my system in serial. JD seems to be 
> slightly faster and more accurate (for the imaginary part of the eigenvalue).
> The program also runs in parallel using mpiexec. I use complex builds, as in 
> my main program the matrix can be complex.
> 
> Now here are my questions:
> 1) For this particular problem type, could you please check if these are the 
> best settings that one could use? I have tried different combinations of 
> KSP/PC types e.g. GMRES, GAMG, etc, but BCGSL + BJACOBI seems to work the 
> best in serial and parallel.
> 
> 2) When I tested these settings in my main program, for some reason the JD 
> solver was not converging. After further testing, I found the issue was 
> related to the setting of "-eps_target 0". I have included 
> "EPSSetTarget(eps,0.0);" in the program and I assumed this is equivalent to 
> passing "-eps_target 0" from the command line, but that doesn't seem to be 
> the case. For instance, if I run the attached program without "-eps_target 0" 
> in the command line then it doesn't converge.
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
>  the above finishes in about 10 minutes
> $ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1
>  the above doesn't converge even though "EPSSetTarget(eps,0.0);" is included 
> in the code
> 
> This only seems to affect the JD solver, not the Krylov shift-and-invert 
> (-sinvert 1) option. So is there any difference between passing "-eps_target 
> 0" from the command line vs using "EPSSetTarget(eps,0.0);" in the code? I 
> cannot pass any command line arguments in my actual program, so I need to set 
> everything internally.
> 
> 3) Also, another minor related issue. While using the inexact 
> shift-and-invert option, I was running into the following error:
> 
> ""
> Missing or incorrect user input
> Shift-and-invert requires a target 'which' (see EPSSetWhichEigenpairs), for 
> instance -st_type sinvert -eps_target 0 -eps_target_magnitude
> ""
> 
> I already have the below two lines in the code:
> EPSSetWhichEigenpairs(eps,EPS_SMALLEST_MAGNITUDE);
> EPSSetTarget(eps,0.0);
> 
> so shouldn't these be enough? If I comment out the first line 
> "EPSSetWhichEigenpairs", then the code works fine.

You should either do

EPSSetWhichEigenpairs(eps,EPS_SMALLEST_MAGNITUDE);

without shift-and-invert or

EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
EPSSetTarget(eps,0.0);

with shift-and-invert. The latter can also be used without shift-and-invert 
(e.g. in JD).

I have to check, but a possible explanation for why the command-line option 
-eps_target 0 behaves differently in your item (2) is that it also sets 
-eps_target_magnitude when that is omitted, so to be equivalent in source code 
you have to call both
EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
EPSSetTarget(eps,0.0);

Jose


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-09-20 Thread Varun Hiremath
Hi Jose,

Sorry, it took me a while to test these settings in the new builds. I am
getting good improvement in performance using the preconditioned solvers,
so thanks for the suggestions! But I have some questions related to the
usage.

We are using SLEPc to solve the acoustic modal eigenvalue problem. Attached
is a simple standalone program that computes acoustic modes in a simple
rectangular box. This program illustrates the general setup I am using,
though here the shell matrix and the preconditioner matrix are the same,
while in my actual program the shell matrix computes A*x without explicitly
forming A, and the preconditioner is a 0th order approximation of A.

In the attached program I have tested both
1) the Krylov-Schur with inexact shift-and-invert (implemented under the
option sinvert);
2) the JD solver with preconditioner (implemented under the option usejd)

Both the solvers seem to work decently, compared to no preconditioning.
This is how I run the two solvers (for a mesh size of 1600x400):
$ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
$ ./acoustic_box_test.o -nx 1600 -ny 400 -sinvert 1 -deflate 1 -eps_target 0
Both finish in about 10 minutes on my system in serial. JD seems to be
slightly faster and more accurate (for the imaginary part of the eigenvalue).
The program also runs in parallel using mpiexec. I use complex builds, as
in my main program the matrix can be complex.

Now here are my questions:
1) For this particular problem type, could you please check if these are
the best settings that one could use? I have tried different combinations
of KSP/PC types e.g. GMRES, GAMG, etc, but BCGSL + BJACOBI seems to work
the best in serial and parallel.
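
(In code, that combination corresponds to something like the following sketch:)

EPSGetST(eps, &st);
STGetKSP(st, &ksp);
KSPSetType(ksp, KSPBCGSL);  /* BiCGStab(L) */
KSPGetPC(ksp, &pc);
PCSetType(pc, PCBJACOBI);   /* block Jacobi */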

2) When I tested these settings in my main program, for some reason the JD
solver was not converging. After further testing, I found the issue was
related to the setting of "-eps_target 0". I have included "
EPSSetTarget(eps,0.0);" in the program and I assumed this is equivalent to
passing "-eps_target 0" from the command line, but that doesn't seem to be
the case. For instance, if I run the attached program without "-eps_target
0" in the command line then it doesn't converge.
$ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1 -eps_target 0
 the above finishes in about 10 minutes
$ ./acoustic_box_test.o -nx 1600 -ny 400 -usejd 1 -deflate 1
 the above doesn't converge even though "EPSSetTarget(eps,0.0);" is
included in the code

This only seems to affect the JD solver, not the Krylov shift-and-invert
(-sinvert 1) option. So is there any difference between passing "-eps_target
0" from the command line vs using "EPSSetTarget(eps,0.0);" in the code? I
cannot pass any command line arguments in my actual program, so I need to set
everything internally.

3) Also, another minor related issue. While using the inexact
shift-and-invert option, I was running into the following error:

""
Missing or incorrect user input
Shift-and-invert requires a target 'which' (see EPSSetWhichEigenpairs), for
instance -st_type sinvert -eps_target 0 -eps_target_magnitude
""

I already have the below two lines in the code:
EPSSetWhichEigenpairs(eps,EPS_SMALLEST_MAGNITUDE);
EPSSetTarget(eps,0.0);

so shouldn't these be enough? If I comment out the first line
"EPSSetWhichEigenpairs", then the code works fine.

I have some more questions regarding setting the preconditioner for a
quadratic eigenvalue problem, which I will ask in a follow-up email.

Thanks for your help!

-Varun



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Varun Hiremath
Thank you very much for these suggestions! We are currently using version
3.12, so I'll try to update to the latest version and try your suggestions.
Let me get back to you, thanks!

On Thu, Jul 1, 2021, 4:45 AM Jose E. Roman  wrote:

> Then I would try Davidson methods: https://doi.org/10.1145/2543696
> You can also try Krylov-Schur with "inexact" shift-and-invert, for
> instance with preconditioned BiCGStab or GMRES; see section 3.4.1 of the
> users manual.
>
> In both cases, you have to pass matrix A in the call to EPSSetOperators()
> and the preconditioner matrix via STSetPreconditionerMat() - note this
> function was introduced in version 3.15.
>
> Jose
>
>
>
> > On 1 Jul 2021, at 13:36, Varun Hiremath wrote:
> >
> > Thanks. I actually do have a 1st order approximation of matrix A, which I
> > can explicitly compute and also invert. Can I use that matrix as a
> > preconditioner to speed things up? Is there some example that explains how
> > to set up and call SLEPc for this scenario?
> >
> > On Thu, Jul 1, 2021, 4:29 AM Jose E. Roman  wrote:
> > For smallest real parts one could adapt ex34.c, but it is going to be
> costly
> https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html
> > Also, if eigenvalues are clustered around the origin, convergence may
> still be very slow.
> >
> > It is a tough problem, unless you are able to compute a good
> preconditioner of A (no need to compute the exact inverse).
> >
> > Jose
> >
> >
> > > El 1 jul 2021, a las 13:23, Varun Hiremath 
> escribió:
> > >
> > > I'm solving for the smallest eigenvalues in magnitude. Though is it
> cheaper to solve smallest in real part, as that might also work in my case?
> Thanks for your help.
> > >
> > > On Thu, Jul 1, 2021, 4:08 AM Jose E. Roman  wrote:
> > > Smallest eigenvalue in magnitude or real part?
> > >
> > >
> > > > El 1 jul 2021, a las 11:58, Varun Hiremath 
> escribió:
> > > >
> > > > Sorry, no both A and B are general sparse matrices (non-hermitian).
> So is there anything else I could try?
> > > >
> > > > On Thu, Jul 1, 2021 at 2:43 AM Jose E. Roman 
> wrote:
> > > > Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on
> the pair (A,B). But this will likely be slow as well, unless you can
> provide a good preconditioner.
> > > >
> > > > Jose
> > > >
> > > >
> > > > > El 1 jul 2021, a las 11:37, Varun Hiremath <
> varunhirem...@gmail.com> escribió:
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I am trying to compute the smallest eigenvalues of a generalized
> system A*x= lambda*B*x. I don't explicitly know the matrix A (so I am using
> a shell matrix with a custom matmult function) however, the matrix B is
> explicitly known so I compute inv(B)*A within the shell matrix and solve
> inv(B)*A*x = lambda*x.
> > > > >
> > > > > To compute the smallest eigenvalues it is recommended to solve the
> inverted system, but since matrix A is not explicitly known I can't invert
> the system. Moreover, the size of the system can be really big, and with
> the default Krylov solver, it is extremely slow. So is there a better way
> for me to compute the smallest eigenvalues of this system?
> > > > >
> > > > > Thanks,
> > > > > Varun
> > > >
> > >
> >
>
>


Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Jose E. Roman
Then I would try Davidson methods https://doi.org/10.1145/2543696
You can also try Krylov-Schur with "inexact" shift-and-invert, for instance, 
with preconditioned BiCGStab or GMRES, see section 3.4.1 of the users manual.

In both cases, you have to pass matrix A in the call to EPSSetOperators() and 
the preconditioner matrix via STSetPreconditionerMat() - note this function was 
introduced in version 3.15.
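
In code, the common part would look roughly like this (a sketch, untested;
"Pmat" is a placeholder for your explicitly assembled approximation, and
error checking is omitted):

ST st;
EPSSetOperators(eps, A, NULL);     /* A may be a shell matrix */
EPSGetST(eps, &st);
STSetPreconditionerMat(st, Pmat);  /* preconditioner built from Pmat, not A */
EPSSetType(eps, EPSJD);            /* e.g. Jacobi-Davidson for the Davidson route */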

Jose






Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Varun Hiremath
Thanks. I actually do have a 1st-order approximation of matrix A, which I
can explicitly compute and also invert. Can I use that matrix as a
preconditioner to speed things up? Is there some example that explains how
to set up and call SLEPc for this scenario?



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Jose E. Roman
For smallest real parts one could adapt ex34.c, but it is going to be costly 
https://slepc.upv.es/documentation/current/src/eps/tutorials/ex36.c.html
Also, if eigenvalues are clustered around the origin, convergence may still be 
very slow.

It is a tough problem, unless you are able to compute a good preconditioner of 
A (no need to compute the exact inverse).

Jose





Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Varun Hiremath
I'm solving for the eigenvalues smallest in magnitude. Though, would it be
cheaper to solve for those smallest in real part? That might also work in my
case. Thanks for your help.



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Jose E. Roman
Smallest eigenvalue in magnitude or real part?





Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Varun Hiremath
Sorry, no; both A and B are general sparse matrices (non-Hermitian). So is
there anything else I could try?



Re: [petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Jose E. Roman
Is the problem symmetric (GHEP)? In that case, you can try LOBPCG on the pair 
(A,B). But this will likely be slow as well, unless you can provide a good 
preconditioner.
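
Something like this sketch (illustrative only, untested):

EPSSetOperators(eps, A, B);                     /* the pair (A,B) */
EPSSetProblemType(eps, EPS_GHEP);               /* generalized Hermitian problem */
EPSSetType(eps, EPSLOBPCG);
EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);  /* LOBPCG computes extreme real eigenvalues */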

Jose




[petsc-users] SLEPc: smallest eigenvalues

2021-07-01 Thread Varun Hiremath
Hi All,

I am trying to compute the smallest eigenvalues of a generalized system
A*x = lambda*B*x. I don't explicitly know the matrix A (so I am using a
shell matrix with a custom matmult function); however, the matrix B is
explicitly known, so I apply inv(B)*A within the shell matrix and solve
inv(B)*A*x = lambda*x.
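
Roughly, my shell-matrix setup looks like the following sketch (names such
as AppCtx, ApplyA, kspB and work are illustrative; error checking omitted):

static PetscErrorCode MyShellMult(Mat M, Vec x, Vec y)
{
  AppCtx *ctx;
  MatShellGetContext(M, &ctx);
  ApplyA(ctx, x, ctx->work);          /* w = A*x, custom matrix-free routine */
  KSPSolve(ctx->kspB, ctx->work, y);  /* y = inv(B)*w, B explicitly known */
  return 0;
}

MatCreateShell(PETSC_COMM_WORLD, nloc, nloc, N, N, &ctx, &M);
MatShellSetOperation(M, MATOP_MULT, (void (*)(void))MyShellMult);
EPSSetOperators(eps, M, NULL);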

To compute the smallest eigenvalues it is recommended to solve the inverted
system, but since matrix A is not explicitly known I can't invert it.
Moreover, the system can be very large, and with the default Krylov solver
it is extremely slow. So is there a better way for me to compute the
smallest eigenvalues of this system?

Thanks,
Varun