Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Mark Adams
Yes, easy to solve. It is a kind of Laplacian that comes up in graph
algorithms.

You mentioned problems with pinned points, with zero displacement.
Conceptually you want to create A'A without any modification for pinned
points. Then remove the rows and columns associated with pinned points.
A'A has a null space, but your method requires pinning and that will remove
the null space, so I don't think you need to worry about that.
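
A toy sketch of that recipe (a hypothetical 4-particle chain, not any real system; numpy stands in for the PETSc matrix operations):

```python
import numpy as np

# Hypothetical 4-particle chain with interfaces (0,1), (1,2), (2,3);
# each row of A encodes one displacement jump u_a - u_b.
pairs = [(0, 1), (1, 2), (2, 3)]
n = 4
A = np.zeros((len(pairs), n))
for r, (a, b) in enumerate(pairs):
    A[r, a], A[r, b] = 1.0, -1.0

# A'A is the graph Laplacian of the contact network: singular,
# with the constant vector in its null space.
AtA = A.T @ A
assert np.allclose(AtA @ np.ones(n), 0.0)

# Pin DOF 0: drop its row and column from the *unmodified* A'A.
keep = [i for i in range(n) if i != 0]
AtA_pinned = AtA[np.ix_(keep, keep)]

# The reduced matrix has full rank, so no null space handling is needed.
assert np.linalg.matrix_rank(AtA_pinned) == len(keep)
```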

Mark

On Tue, May 7, 2024 at 10:30 AM Pierre Jolivet  wrote:

> OK, it’s very trivial to solve with algebraic solvers.
> If you are willing to share larger test cases, maybe then some issue will
> arise (please do not attach them to your mail, send a URL, if you cannot
> and must attach it to your mail, switch to the following mailing list:
> petsc-ma...@mcs.anl.gov)
>
> Thanks,
> Pierre
>
> $ mpirun -n 8 ./sparse_ls -mat_name
> /tmp/system/matdump_step0001_dir0.bin -options_file gmres.rc -pc_type
> hypre -options_left 0
>   0 KSP Residual norm 3.453633143461e+01
>   1 KSP Residual norm 1.454005637026e+00
>   2 KSP Residual norm 7.254564223178e-02
>   3 KSP Residual norm 2.595967767025e-03
>   4 KSP Residual norm 8.616844003887e-05
>   5 KSP Residual norm 2.519788767995e-06
>   6 KSP Residual norm 6.830984907384e-08
>   Linear solve converged due to CONVERGED_RTOL iterations 6
> ||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00
> $ mpirun -n 8 ./sparse_ls -mat_name
> /tmp/system/matdump_step0001_dir0.bin -options_file gmres.rc -pc_type
> gamg -options_left 0
>   0 KSP Residual norm 3.453633143461e+01
>   1 KSP Residual norm 9.410331701775e+00
>   2 KSP Residual norm 7.249670993944e-01
>   3 KSP Residual norm 8.415995591379e-02
>   4 KSP Residual norm 8.230048116783e-03
>   5 KSP Residual norm 8.200299281980e-04
>   6 KSP Residual norm 8.332302028270e-05
>   7 KSP Residual norm 9.103576883087e-06
>   8 KSP Residual norm 8.837914581158e-07
>   9 KSP Residual norm 8.295880033739e-08
>   Linear solve converged due to CONVERGED_RTOL iterations 9
> ||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00
> $ mpirun -n 8 ./sparse_ls -mat_name
> /tmp/system/matdump_step0001_dir0.bin -options_file gmres.rc -pc_type
> hpddm -options_left 0
>   0 KSP Residual norm 3.453633143461e+01
>   1 KSP Residual norm 4.455259710220e+00
>   2 KSP Residual norm 1.190904699181e-01
>   3 KSP Residual norm 5.220970280301e-03
>   4 KSP Residual norm 6.214389182723e-05
>   5 KSP Residual norm 1.693927074022e-06
>   6 KSP Residual norm 3.700055659502e-08
>   Linear solve converged due to CONVERGED_RTOL iterations 6
> ||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00
>
>
> On 7 May 2024, at 2:50 PM, Seiz,Marco  wrote:
>
>
> Pierre,
>
> I've attached the dumps of the matrix + RHS for something of about 3k x 1k.
>
>
> Regarding the weird divergence behaviour, I tried again at home but I still 
> get the same results.
>
> I am running a rolling release distribution on both machines, but that really 
> shouldn't matter for divergence behavior I would think.
>
> Is there some kind of option in PETSc to get more information about the 
> breakdown from my side?
>
>
> Best regards,
>
> Marco
>

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Pierre Jolivet
OK, it’s very trivial to solve with algebraic solvers.
If you are willing to share larger test cases, maybe then some issue will arise 
(please do not attach them to your mail, send a URL, if you cannot and must 
attach it to your mail, switch to the following mailing list: 
petsc-ma...@mcs.anl.gov)

Thanks,
Pierre

$ mpirun -n 8 ./sparse_ls -mat_name /tmp/system/matdump_step0001_dir0.bin 
-options_file gmres.rc -pc_type hypre -options_left 0
  0 KSP Residual norm 3.453633143461e+01
  1 KSP Residual norm 1.454005637026e+00
  2 KSP Residual norm 7.254564223178e-02
  3 KSP Residual norm 2.595967767025e-03
  4 KSP Residual norm 8.616844003887e-05
  5 KSP Residual norm 2.519788767995e-06
  6 KSP Residual norm 6.830984907384e-08
  Linear solve converged due to CONVERGED_RTOL iterations 6
||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00
$ mpirun -n 8 ./sparse_ls -mat_name /tmp/system/matdump_step0001_dir0.bin 
-options_file gmres.rc -pc_type gamg -options_left 0
  0 KSP Residual norm 3.453633143461e+01
  1 KSP Residual norm 9.410331701775e+00
  2 KSP Residual norm 7.249670993944e-01
  3 KSP Residual norm 8.415995591379e-02
  4 KSP Residual norm 8.230048116783e-03
  5 KSP Residual norm 8.200299281980e-04
  6 KSP Residual norm 8.332302028270e-05
  7 KSP Residual norm 9.103576883087e-06
  8 KSP Residual norm 8.837914581158e-07
  9 KSP Residual norm 8.295880033739e-08
  Linear solve converged due to CONVERGED_RTOL iterations 9
||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00
$ mpirun -n 8 ./sparse_ls -mat_name /tmp/system/matdump_step0001_dir0.bin 
-options_file gmres.rc -pc_type hpddm -options_left 0
  0 KSP Residual norm 3.453633143461e+01
  1 KSP Residual norm 4.455259710220e+00
  2 KSP Residual norm 1.190904699181e-01
  3 KSP Residual norm 5.220970280301e-03
  4 KSP Residual norm 6.214389182723e-05
  5 KSP Residual norm 1.693927074022e-06
  6 KSP Residual norm 3.700055659502e-08
  Linear solve converged due to CONVERGED_RTOL iterations 6
||A^T(Ax-b)|| / ||Ax-b|| = 0.00 / 17.017700 = 0.00


> On 7 May 2024, at 2:50 PM, Seiz,Marco  wrote:
> 
> Pierre,
> 
> I've attached the dumps of the matrix + RHS for something of about 3k x 1k.
> 
> 
> Regarding the weird divergence behaviour, I tried again at home but I still 
> get the same results.
> 
> I am running a rolling release distribution on both machines, but that really 
> shouldn't matter for divergence behavior I would think.
> 
> Is there some kind of option in PETSc to get more information about the 
> breakdown from my side?
> 
> 
> Best regards,
> 
> Marco
> 
Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Seiz,Marco




The system basically describes the displacements of particles relative to each other due to processes occurring on their interfaces, i.e. vacancy absorption. The idea is to say there's a displacement jump u_a - u_b = du_{a,b} for every interface (a,b).
Since all of them ought to be roughly satisfied at the same time, I get a system Au = du, with the LHS forming the matrix A times displacement u and the RHS simply being the values for the jumps.

The pinned DOFs are physically motivated by some particles being attached to a surface which prevents their motion, but their contacts still generate displacement jumps for the rest of the system to deal with.

The particle connectivity can be quite arbitrary since the particle numbering has nothing to do with the physical location, so particle 1 can connect to particle 2142143 and particle 2 at the same time.

In A this just generates extra rows, but from what I've observed looking at A^T A, one row is more or less one positive number k plus k entries of -1; for a 1D finite difference Poisson equation we'd just get k = 2 except at the boundaries, hence the Poisson interpretation.

The permutation comes from the fact that I don't get the particle pairs in any regular ordering. 

But actually looking over the pmat from the example I sent Pierre, A^TA seems to be of the following structure:

- For row R, there's a positive number k at column R, and the remaining nonzero columns are -1, of which there are k, so the matrix is weakly diagonally dominant.

- When there's a pinned DOF, the row that DOF belongs to sums to > 0.

- Which columns the nonzeros land in is determined by the connectivity, so there are large jumps in the column indices, but as Mark pointed out this shouldn't interfere with convergence.

I do seem to remember it being less orderly, but that probably makes the problem easier for now. I think that's a result of a renumbering between the particle numbers and their column indices in the matrix, which I introduced in my application recently in order to deal with particles being overgrown. (I dread the MPI implementation of that already...)
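
These observations can be checked on a tiny synthetic contact list (hypothetical data, without pinned DOFs; the real connectivity is of course much larger):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Hypothetical interface pairs in no particular order
pairs = [(0, 5), (2, 1), (4, 0), (3, 4), (1, 5)]

def incidence(pairs, n):
    """Rows with one +1 and one -1, as in the matrix A described above."""
    A = np.zeros((len(pairs), n))
    for r, (a, b) in enumerate(pairs):
        A[r, a], A[r, b] = 1.0, -1.0
    return A

A = incidence(pairs, n)
shuffled = [pairs[i] for i in rng.permutation(len(pairs))]

# A^T A does not depend on the order the interface rows arrive in:
AtA = A.T @ A
assert np.allclose(AtA, incidence(shuffled, n).T @ incidence(shuffled, n))

# Structure: diagonal entry = number of contacts k, with k off-diagonal
# entries of -1, so each row sums to zero (weak diagonal dominance;
# pinning would instead make the affected rows sum to > 0).
assert np.allclose(AtA.sum(axis=1), 0.0)
```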

So regarding Matt and Mark's suggestion, would it be sensible from a performance perspective to basically work with a KSP for a square system while using a MATNORMAL which represents A^T A?
As long as the input is ordered like that, the system ends up being close to a Poisson problem even with pinned DOFs; it might even just be a Poisson matrix with some Dirichlet BCs, if I remember that structure right.
That should hopefully also fix the weird convergence behaviour.
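
A minimal stand-in for that route, sketched with SciPy's CG on the explicitly formed A^T A in place of a PETSc KSP on a MATNORMAL (toy chain data and made-up jump values, not the attached system):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# Hypothetical 5-particle chain; DOF 0 is pinned to zero displacement.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]
du = np.array([0.5, -0.2, 0.1, 0.3])  # prescribed jumps (made-up values)
n = 5
rows = [r for r in range(len(pairs)) for _ in range(2)]
cols = [i for a, b in pairs for i in (a, b)]
vals = [v for _ in pairs for v in (1.0, -1.0)]
A = csr_matrix((vals, (rows, cols)), shape=(len(pairs), n))

# Normal equations restricted to the free DOFs (row/column 0 dropped).
AtA = (A.T @ A).toarray()
Atb = A.T @ du
free = np.arange(1, n)
u_free, info = cg(AtA[np.ix_(free, free)], Atb[free])
assert info == 0  # the reduced matrix is SPD, so CG converges

u = np.zeros(n)
u[free] = u_free
# The contact network is a tree here, so the jumps are met exactly.
assert np.allclose(A @ u, du, atol=1e-3)
```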

Best regards,
Marco
- Original Message -
>> From: Matthew Knepley 
>> To: Mark Adams 
>> Cc: Pierre Jolivet ,petsc-users@mcs.anl.gov,Marco Seiz 
>> Date: 2024-05-07 21:32:55
>> Subject: Re: [petsc-users] Reasons for breakdown in preconditioned LSQR
>> 
>> On Tue, May 7, 2024 at 8:28 AM Mark Adams  wrote:
>> 
>> > "A^T A being similar to a finite difference Poisson matrix if the rows
>> > were permuted randomly."
>> > Normal eqs are a good option in general for rectangular systems and we
>> > have Poisson solvers.
>> >
>> 
>> I missed that. Why not first permute back to the Poisson matrix? Then it
>> would be trivial.
>> 
>>   Thanks,
>> 
>> Matt
>> 
>> 
>> > I'm not sure what you mean by "permuted randomly." A random permutation
>> > of the matrix can kill performance but not math.
>> >
>> > Mark
>> >
>> >
>> > On Tue, May 7, 2024 at 8:03 AM Matthew Knepley  wrote:
>> >

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Seiz,Marco




Pierre,

I've attached the dumps of the matrix + RHS for something of about 3k x 1k.


Regarding the weird divergence behaviour, I tried again at home but I still get the same results.

I am running a rolling release distribution on both machines, but that really shouldn't matter for divergence behavior I would think.

Is there some kind of option in PETSc to get more information about the breakdown from my side?


Best regards,

Marco

- Original Message -
>> From: Pierre Jolivet 
>> To: Marco Seiz 
>> Cc: petsc-users@mcs.anl.gov
>> Date: 2024-05-07 18:12:18
>> Subject: Re: [petsc-users] Reasons for breakdown in preconditioned LSQR
>> 

>> 
>> > On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>> > 
>> > Thanks for the quick response!
>> > 
>> > On 07.05.24 14:24, Pierre Jolivet wrote:
>> >> 
>> >> 
>> >>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>> >>> 
>> >>> Hello,
>> >>> 
>> >>> something a bit different from my last question, since that didn't
>> >>> progress so well:
>> >>> I have a related model which generally produces a rectangular matrix A,
>> >>> so I am using LSQR to solve the system.
>> >>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>> >>> to a finite difference Poisson matrix if the rows were permuted randomly.
>> >>> The problem is singular in that the solution is only specified up to a
>> >>> constant from the matrix, with my target solution being a weighted zero
>> >>> average one, which I can handle by adding a nullspace to my matrix.
>> >>> However, I'd also like to pin (potentially many) DOFs in the future so I
>> >>> also tried pinning a single value, and afterwards subtracting the
>> >>> average from the KSP solution.
>> >>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>> >>> the target size of the matrix will be something like ([1,20] N) x N,
>> >>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>> >>> for reasonable execution time.
>> >>> 
>> >>> For a smaller example system, I set up my application to dump the input
>> >>> to the KSP when it breaks down and I've attached a simple python script
>> >>> + data using petsc4py to demonstrate the divergence for those specific
>> >>> systems.
>> >>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>> >>> particular system shows breakdown, but if I remove the pinned DOF and
>> >>> add the nullspace (pass -usens) it converges. I did try different PCs
>> >>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>> >>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>> >>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>> >>> interestingly it doesn't break down when I explicitly form A^T A (don't
>> >>> pass -usenormal).
>> >> 
>> >> What version are you using? All those commands are returning
>> >>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>> >> So I cannot reproduce any breakdown, but there have been recent changes to KSPLSQR.
>> > For those tests I've been using PETSc 3.20.5 (last githash was
>> > 4b82c11ab5d ).
>> > I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
>> > but I had to drop --download-suitesparse=1 from my earlier config due to
>> > errors.
>> > Should I write a separate mail about this?
>> > 
>> > The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
>> > lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
>> > -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
>> > but the QR example fails since I had to remove suitesparse.
>> > p

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Matthew Knepley
On Tue, May 7, 2024 at 8:28 AM Mark Adams  wrote:

> "A^T A being similar to a finite difference Poisson matrix if the rows
> were permuted randomly."
> Normal eqs are a good option in general for rectangular systems and we
> have Poisson solvers.
>

I missed that. Why not first permute back to the Poisson matrix? Then it
would be trivial.

  Thanks,

Matt


> I'm not sure what you mean by "permuted randomly." A random permutation
> of the matrix can kill performance but not math.
>
> Mark
>
>
> On Tue, May 7, 2024 at 8:03 AM Matthew Knepley  wrote:
>
>> On Tue, May 7, 2024 at 5:12 AM Pierre Jolivet  wrote:
>>
>>>
>>>
>>> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>>>
>>> Thanks for the quick response!
>>>
>>> On 07.05.24 14:24, Pierre Jolivet wrote:
>>>
>>>
>>>
>>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>>>
>>> Hello,
>>>
>>> something a bit different from my last question, since that didn't
>>> progress so well:
>>> I have a related model which generally produces a rectangular matrix A,
>>> so I am using LSQR to solve the system.
>>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>>> to a finite difference Poisson matrix if the rows were permuted randomly.
>>> The problem is singular in that the solution is only specified up to a
>>> constant from the matrix, with my target solution being a weighted zero
>>> average one, which I can handle by adding a nullspace to my matrix.
>>> However, I'd also like to pin (potentially many) DOFs in the future so I
>>> also tried pinning a single value, and afterwards subtracting the
>>> average from the KSP solution.
>>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>>> the target size of the matrix will be something like ([1,20] N) x N,
>>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>>> for reasonable execution time.
>>>
>>> For a smaller example system, I set up my application to dump the input
>>> to the KSP when it breaks down and I've attached a simple python script
>>> + data using petsc4py to demonstrate the divergence for those specific
>>> systems.
>>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>>> particular system shows breakdown, but if I remove the pinned DOF and
>>> add the nullspace (pass -usens) it converges. I did try different PCs
>>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>>> interestingly it doesn't break down when I explicitly form A^T A (don't
>>> pass -usenormal).
>>>
>>>
>>> What version are you using? All those commands are returning
>>>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>>> So I cannot reproduce any breakdown, but there have been recent changes
>>> to KSPLSQR.
>>>
>>> For those tests I've been using PETSc 3.20.5 (last githash was
>>> 4b82c11ab5d ).
>>> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
>>> but I had to drop --download-suitesparse=1 from my earlier config due to
>>> errors.
>>> Should I write a separate mail about this?
>>>
>>> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
>>> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
>>> -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
>>> but the QR example fails since I had to remove suitesparse.
>>> petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
>>> then `ldd app` gives me `libpetsc.so.3.21 => /opt/petsc/linux-c-opt/lib/libpetsc.so.3.21`
>>> so it should be using the newly built one.
>>> The application then still eventually yields a DIVERGED_BREAKDOWN.
>>> I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so 

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Mark Adams
"A^T A being similar to a finite difference Poisson matrix if the rows were
permuted randomly."
Normal eqs are a good option in general for rectangular systems and we have
Poisson solvers.

I'm not sure what you mean by "permuted randomly." A random permutation of
the matrix can kill performance but not math.
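
The "not math" part can be checked directly: a symmetric relabeling P A P^T is a similarity transform, so the spectrum (which governs Krylov convergence bounds) is untouched, while the banded structure that performance depends on is destroyed. A small sketch with a hypothetical 1D Poisson matrix and a made-up permutation:

```python
import numpy as np

n = 8
# 1D finite difference Poisson (path-graph Laplacian) matrix
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Hypothetical relabeling of the unknowns: L -> P L P^T
p = np.array([3, 7, 0, 5, 2, 6, 1, 4])
P = np.eye(n)[p]
Lp = P @ L @ P.T

# Same eigenvalues, hence the same conditioning and Krylov bounds...
assert np.allclose(np.linalg.eigvalsh(L), np.linalg.eigvalsh(Lp))

# ...but the tridiagonal band (and with it memory locality) is gone:
bandwidth = lambda M: max(abs(i - j) for i, j in zip(*np.nonzero(M)))
assert bandwidth(L) == 1 and bandwidth(Lp) > 1
```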

Mark


On Tue, May 7, 2024 at 8:03 AM Matthew Knepley  wrote:

> On Tue, May 7, 2024 at 5:12 AM Pierre Jolivet  wrote:
>
>>
>>
>> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>>
>> Thanks for the quick response!
>>
>> On 07.05.24 14:24, Pierre Jolivet wrote:
>>
>>
>>
>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>>
>> Hello,
>>
>> something a bit different from my last question, since that didn't
>> progress so well:
>> I have a related model which generally produces a rectangular matrix A,
>> so I am using LSQR to solve the system.
>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>> to a finite difference Poisson matrix if the rows were permuted randomly.
>> The problem is singular in that the solution is only specified up to a
>> constant from the matrix, with my target solution being a weighted zero
>> average one, which I can handle by adding a nullspace to my matrix.
>> However, I'd also like to pin (potentially many) DOFs in the future so I
>> also tried pinning a single value, and afterwards subtracting the
>> average from the KSP solution.
>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>> the target size of the matrix will be something like ([1,20] N) x N,
>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>> for reasonable execution time.
>>
>> For a smaller example system, I set up my application to dump the input
>> to the KSP when it breaks down and I've attached a simple python script
>> + data using petsc4py to demonstrate the divergence for those specific
>> systems.
>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>> particular system shows breakdown, but if I remove the pinned DOF and
>> add the nullspace (pass -usens) it converges. I did try different PCs
>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>> interestingly it doesn't break down when I explicitly form A^T A (don't
>> pass -usenormal).
>>
>>
>> What version are you using? All those commands are returning
>>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>> So I cannot reproduce any breakdown, but there have been recent changes
>> to KSPLSQR.
>>
>> For those tests I've been using PETSc 3.20.5 (last githash was
>> 4b82c11ab5d ).
>> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
>> but I had to drop --download-suitesparse=1 from my earlier config due to
>> errors.
>> Should I write a separate mail about this?
>>
>> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
>> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
>> -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
>> but the QR example fails since I had to remove suitesparse.
>> petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
>> then `ldd app` gives me `libpetsc.so.3.21 => /opt/petsc/linux-c-opt/lib/libpetsc.so.3.21`
>> so it should be using the newly built one.
>> The application then still eventually yields a DIVERGED_BREAKDOWN.
>> I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so if we are on
>> the same version and there's still a discrepancy it is quite weird.
>>
>>
>> Quite weird indeed…
>> $ python3 lsdiv.py -pc_type lu -ksp_converged_reason
>>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>> $ python3 lsdiv.py -usens -pc_type lu 

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Matthew Knepley
On Tue, May 7, 2024 at 5:12 AM Pierre Jolivet  wrote:

>
>
> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
>
> Thanks for the quick response!
>
> On 07.05.24 14:24, Pierre Jolivet wrote:
>
>
>
> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>
> Hello,
>
> something a bit different from my last question, since that didn't
> progress so well:
> I have a related model which generally produces a rectangular matrix A,
> so I am using LSQR to solve the system.
> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
> to a finite difference Poisson matrix if the rows were permuted randomly.
> The problem is singular in that the solution is only specified up to a
> constant from the matrix, with my target solution being a weighted zero
> average one, which I can handle by adding a nullspace to my matrix.
> However, I'd also like to pin (potentially many) DOFs in the future so I
> also tried pinning a single value, and afterwards subtracting the
> average from the KSP solution.
> This leads to the KSP *sometimes* diverging when I use a preconditioner;
> the target size of the matrix will be something like ([1,20] N) x N,
> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
> for reasonable execution time.
>
> For a smaller example system, I set up my application to dump the input
> to the KSP when it breaks down and I've attached a simple python script
> + data using petsc4py to demonstrate the divergence for those specific
> systems.
> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
> particular system shows breakdown, but if I remove the pinned DOF and
> add the nullspace (pass -usens) it converges. I did try different PCs
> but they tend to break down at different steps, e.g. `python3 lsdiv.py
> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
> for PCQR when I use MatCreateNormal for creating the PC mat, but
> interestingly it doesn't break down when I explicitly form A^T A (don't
> pass -usenormal).
>
>
> What version are you using? All those commands are returning
>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> So I cannot reproduce any breakdown, but there have been recent changes to
> KSPLSQR.
>
> For those tests I've been using PETSc 3.20.5 (last githash was
> 4b82c11ab5d ).
> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
> but I had to drop --download-suitesparse=1 from my earlier config due to
> errors.
> Should I write a separate mail about this?
>
> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
> -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
> but the QR example fails since I had to remove suitesparse.
> petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
> then `ldd app` gives me `libpetsc.so.3.21 => /opt/petsc/linux-c-opt/lib/libpetsc.so.3.21`
> so it should be using the newly built one.
> The application then still eventually yields a DIVERGED_BREAKDOWN.
> I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so if we are on
> the same version and there's still a discrepancy it is quite weird.
>
>
> Quite weird indeed…
> $ python3 lsdiv.py -pc_type lu -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -usens -pc_type lu -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -pc_type qr -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> $ python3 lsdiv.py -usens -pc_type qr -ksp_converged_reason
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>
> For the moment I can work by adding the nullspace but eventually the
> need for pinning DOFs will resurface, so I'd like to ask where the
> breakdown is coming from. What causes the breakdowns? Is that a generic
> problem occurring when adding (dof_i = val) rows to least-squares
> systems which prevents these preconditioners from being robust? If so,
> what preconditioners could be robust?
> I did a minimal sweep of the available PCs by going over the possible
> 

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Pierre Jolivet


> On 7 May 2024, at 9:10 AM, Marco Seiz  wrote:
> 
> Thanks for the quick response!
> 
> On 07.05.24 14:24, Pierre Jolivet wrote:
>> 
>> 
>>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>>> 
>>> Hello,
>>> 
>>> something a bit different from my last question, since that didn't
>>> progress so well:
>>> I have a related model which generally produces a rectangular matrix A,
>>> so I am using LSQR to solve the system.
>>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>>> to a finite difference Poisson matrix if the rows were permuted randomly.
>>> The problem is singular in that the solution is only specified up to a
>>> constant from the matrix, with my target solution being a weighted zero
>>> average one, which I can handle by adding a nullspace to my matrix.
>>> However, I'd also like to pin (potentially many) DOFs in the future so I
>>> also tried pinning a single value, and afterwards subtracting the
>>> average from the KSP solution.
>>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>>> the target size of the matrix will be something like ([1,20] N) x N,
>>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>>> for reasonable execution time.
>>> 
>>> For a smaller example system, I set up my application to dump the input
>>> to the KSP when it breaks down and I've attached a simple python script
>>> + data using petsc4py to demonstrate the divergence for those specific
>>> systems.
>>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>>> particular system shows breakdown, but if I remove the pinned DOF and
>>> add the nullspace (pass -usens) it converges. I did try different PCs
>>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>>> interestingly it doesn't break down when I explicitly form A^T A (don't
>>> pass -usenormal).
>> 
>> What version are you using? All those commands are returning
>>  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
>> So I cannot reproduce any breakdown, but there have been recent changes to 
>> KSPLSQR.
> For those tests I've been using PETSc 3.20.5 (last githash was
> 4b82c11ab5d ).
> I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
> but I had to drop --download-suitesparse=1 from my earlier config due to
> errors.
> Should I write a separate mail about this?
> 
> The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
> lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
> -usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
> but the QR example fails since I had to remove suitesparse.
> petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
> then `ldd app` gives me `libpetsc.so.3.21 =>
> /opt/petsc/linux-c-opt/lib/libpetsc.so.3.21` so it should be using the
> newly built one.
> The application then still eventually yields a DIVERGED_BREAKDOWN.
> I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so if we are on
> the same version and there's still a discrepancy it is quite weird.

Quite weird indeed…
$ python3 lsdiv.py -pc_type lu -ksp_converged_reason
  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
$ python3 lsdiv.py -usens -pc_type lu -ksp_converged_reason
  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
$ python3 lsdiv.py -pc_type qr -ksp_converged_reason
  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
$ python3 lsdiv.py -usens -pc_type qr -ksp_converged_reason
  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1

>>> For the moment I can work by adding the nullspace but eventually the
>>> need for pinning DOFs will resurface, so I'd like to ask where the
>>> breakdown is coming from. What causes the breakdowns? Is that a generic
>>> problem occurring when adding (dof_i = val) rows to least-squares
>>> systems which prevents these preconditioners from being robust? If so,
>>> what preconditioners could be robust?
>>> I did a minimal sweep of the available PCs by going over the possible
>>> inputs of -pc_type for my application while pinning one DOF. Excepting
>>> unavailable PCs (not compiled for, other setup missing, ...) and those
>>> that did break down, I am left with ( hmg jacobi mat none pbjacobi sor
>>> svd ).
>> It’s unlikely any of these preconditioners will scale (or even converge) for
>> problems with up to 1E6 unknowns.

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-07 Thread Marco Seiz








Thanks for the quick response!

On 07.05.24 14:24, Pierre Jolivet wrote:
> 
> 
>> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
>>
>> Hello,
>>
>> something a bit different from my last question, since that didn't
>> progress so well:
>> I have a related model which generally produces a rectangular matrix A,
>> so I am using LSQR to solve the system.
>> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
>> to a finite difference Poisson matrix if the rows were permuted randomly.
>> The problem is singular in that the solution is only specified up to a
>> constant from the matrix, with my target solution being a weighted zero
>> average one, which I can handle by adding a nullspace to my matrix.
>> However, I'd also like to pin (potentially many) DOFs in the future so I
>> also tried pinning a single value, and afterwards subtracting the
>> average from the KSP solution.
>> This leads to the KSP *sometimes* diverging when I use a preconditioner;
>> the target size of the matrix will be something like ([1,20] N) x N,
>> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
>> for reasonable execution time.
>>
>> For a smaller example system, I set up my application to dump the input
>> to the KSP when it breaks down and I've attached a simple python script
>> + data using petsc4py to demonstrate the divergence for those specific
>> systems.
>> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
>> particular system shows breakdown, but if I remove the pinned DOF and
>> add the nullspace (pass -usens) it converges. I did try different PCs
>> but they tend to break down at different steps, e.g. `python3 lsdiv.py
>> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
>> for PCQR when I use MatCreateNormal for creating the PC mat, but
>> interestingly it doesn't break down when I explicitly form A^T A (don't
>> pass -usenormal).
> 
> What version are you using? All those commands are returning
>   Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
> So I cannot reproduce any breakdown, but there have been recent changes to KSPLSQR.
For those tests I've been using PETSc 3.20.5 (last githash was
4b82c11ab5d ).
I pulled the latest version from gitlab ( 6b3135e3cbe ) and compiled it,
but I had to drop --download-suitesparse=1 from my earlier config due to
errors.
Should I write a separate mail about this?

The LU example still behaves the same for me (`python3 lsdiv.py -pc_type
lu -ksp_converged_reason` gives DIVERGED_BREAKDOWN, `python3 lsdiv.py
-usens -pc_type lu -ksp_converged_reason` gives CONVERGED_RTOL_NORMAL)
but the QR example fails since I had to remove suitesparse.
petsc4py.__version__ reports 3.21.1 and if I rebuild my application,
then `ldd app` gives me `libpetsc.so.3.21 =>
/opt/petsc/linux-c-opt/lib/libpetsc.so.3.21` so it should be using the
newly built one.
The application then still eventually yields a DIVERGED_BREAKDOWN.
I don't have a ~/.petscrc and PETSC_OPTIONS is unset, so if we are on
the same version and there's still a discrepancy it is quite weird.


>> For the moment I can work by adding the nullspace but eventually the
>> need for pinning DOFs will resurface, so I'd like to ask where the
>> breakdown is coming from. What causes the breakdowns? Is that a generic
>> problem occurring when adding (dof_i = val) rows to least-squares
>> systems which prevents these preconditioners from being robust? If so,
>> what preconditioners could be robust?
>> I did a minimal sweep of the available PCs by going over the possible
>> inputs of -pc_type for my application while pinning one DOF. Excepting
>> unavailable PCs (not compiled for, other setup missing, ...) and those
>> that did break down, I am left with ( hmg jacobi mat none pbjacobi sor
>> svd ).
> It’s unlikely any of these preconditioners will scale (or even converge) for problems with up to 1E6 unknowns.
> I could help you setup https://urldefense.us/v3/__https://epubs.siam.org/doi/abs/10.1137/21M1434891__;!!G_uCfscf7eWS!aWDq3JS_vt1DI4eRxW31gy63yeEiP3gFDvXJqtIwfT82UxySisoMMrkx3lRqZYkyiI2hKuR6tYlSN58qivjclg$ if you are willing to share a larger example (the current Mat are extremely tiny).
Yes, that would be great. About how large of a matrix do you need? I can
probably quickly get something non-artificial up to O(N) ~ 1e3, bigger
matrices will take some time since I purposefully ignored MPI previously.
The matrix basically describes the contacts between particles which 

Re: [petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-06 Thread Pierre Jolivet


> On 7 May 2024, at 7:04 AM, Marco Seiz  wrote:
> 
> Hello,
> 
> something a bit different from my last question, since that didn't
> progress so well:
> I have a related model which generally produces a rectangular matrix A,
> so I am using LSQR to solve the system.
> The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
> to a finite difference Poisson matrix if the rows were permuted randomly.
> The problem is singular in that the solution is only specified up to a
> constant from the matrix, with my target solution being a weighted zero
> average one, which I can handle by adding a nullspace to my matrix.
> However, I'd also like to pin (potentially many) DOFs in the future so I
> also tried pinning a single value, and afterwards subtracting the
> average from the KSP solution.
> This leads to the KSP *sometimes* diverging when I use a preconditioner;
> the target size of the matrix will be something like ([1,20] N) x N,
> with N ~ [2, 1e6] so for the higher end I will require a preconditioner
> for reasonable execution time.
> 
> For a smaller example system, I set up my application to dump the input
> to the KSP when it breaks down and I've attached a simple python script
> + data using petsc4py to demonstrate the divergence for those specific
> systems.
> With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
> particular system shows breakdown, but if I remove the pinned DOF and
> add the nullspace (pass -usens) it converges. I did try different PCs
> but they tend to break down at different steps, e.g. `python3 lsdiv.py
> -usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
> for PCQR when I use MatCreateNormal for creating the PC mat, but
> interestingly it doesn't break down when I explicitly form A^T A (don't
> pass -usenormal).

What version are you using? All those commands are returning
  Linear solve converged due to CONVERGED_RTOL_NORMAL iterations 1
So I cannot reproduce any breakdown, but there have been recent changes to 
KSPLSQR.
> For the moment I can work by adding the nullspace but eventually the
> need for pinning DOFs will resurface, so I'd like to ask where the
> breakdown is coming from. What causes the breakdowns? Is that a generic
> problem occurring when adding (dof_i = val) rows to least-squares
> systems which prevents these preconditioners from being robust? If so,
> what preconditioners could be robust?
> I did a minimal sweep of the available PCs by going over the possible
> inputs of -pc_type for my application while pinning one DOF. Excepting
> unavailable PCs (not compiled for, other setup missing, ...) and those
> that did break down, I am left with ( hmg jacobi mat none pbjacobi sor
> svd ).
It’s unlikely any of these preconditioners will scale (or even converge) for 
problems with up to 1E6 unknowns.
I could help you setup 
https://urldefense.us/v3/__https://epubs.siam.org/doi/abs/10.1137/21M1434891__;!!G_uCfscf7eWS!cEzzrez47D-rTba54wHj2wMvjkRrPiIlyhhfvWp2H4HT4EO9-soMFVTQUt0bjRj08xqYWHXUBJusDpvvB-LqSQ$
  if you are willing to share a larger example (the current Mat are extremely 
tiny).

Thanks,
Pierre
> 
> 
> Best regards,
> Marco
> 
> 



[petsc-users] Reasons for breakdown in preconditioned LSQR

2024-05-06 Thread Marco Seiz








Hello,

something a bit different from my last question, since that didn't
progress so well:
I have a related model which generally produces a rectangular matrix A,
so I am using LSQR to solve the system.
The matrix A has two nonzeros (1, -1) per row, with A^T A being similar
to a finite difference Poisson matrix if the rows were permuted randomly.
The problem is singular in that the solution is only specified up to a
constant from the matrix, with my target solution being a weighted zero
average one, which I can handle by adding a nullspace to my matrix.
However, I'd also like to pin (potentially many) DOFs in the future so I
also tried pinning a single value, and afterwards subtracting the
average from the KSP solution.
This leads to the KSP *sometimes* diverging when I use a preconditioner;
the target size of the matrix will be something like ([1,20] N) x N,
with N ~ [2, 1e6] so for the higher end I will require a preconditioner
for reasonable execution time.
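Since the attached lsdiv.py and data are not reproduced here, the structure described above can be sketched with a minimal numpy example (hypothetical connectivity and weights, not the actual application data): each row of A couples two DOFs with (1, -1), A^T A is then a graph Laplacian with the constant vector in its nullspace, and the weighted-zero-average target is recovered by projecting out the weighted mean.

```python
import numpy as np

# Minimal sketch (assumed connectivity, not the attached lsdiv.py data):
# A has exactly two nonzeros (1, -1) per row, one row per pair of coupled DOFs.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    A[row, i], A[row, j] = 1.0, -1.0

# A^T A is a graph Laplacian: row sums are zero, so the constant vector
# spans the nullspace and the solution is fixed only up to a constant.
AtA = A.T @ A
assert np.allclose(AtA @ np.ones(n), 0.0)

# Picking the weighted-zero-average representative afterwards amounts to
# subtracting the weighted mean (w is a hypothetical positive weighting).
x = np.array([1.0, 2.5, -0.5, 3.0])   # stand-in for a KSP solution
w = np.array([1.0, 2.0, 1.0, 2.0])
x_shifted = x - np.dot(w, x) / np.sum(w)
print(np.dot(w, x_shifted))           # ~0
```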

For a smaller example system, I set up my application to dump the input
to the KSP when it breaks down and I've attached a simple python script
+ data using petsc4py to demonstrate the divergence for those specific
systems.
With `python3 lsdiv.py -pc_type lu -ksp_converged_reason` that
particular system shows breakdown, but if I remove the pinned DOF and
add the nullspace (pass -usens) it converges. I did try different PCs
but they tend to break down at different steps, e.g. `python3 lsdiv.py
-usenormal -qrdiv -pc_type qr -ksp_converged_reason` shows the breakdown
for PCQR when I use MatCreateNormal for creating the PC mat, but
interestingly it doesn't break down when I explicitly form A^T A (don't
pass -usenormal).
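As background to the -usenormal observation above (not a diagnosis of this particular breakdown): one generic hazard of preconditioning through the normal equations, whether formed implicitly via MatCreateNormal or assembled explicitly, is that the condition number gets squared, kappa(A^T A) = kappa(A)^2. A small numpy sketch with a synthetic ill-conditioned A:

```python
import numpy as np

# Illustrative only: the normal equations square the condition number,
# so any PC built on A^T A works with kappa(A)^2 rather than kappa(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10)) @ np.diag(np.logspace(0, 3, 10))

kA = np.linalg.cond(A)        # 2-norm condition number via SVD
kAtA = np.linalg.cond(A.T @ A)
print(kA, kAtA)               # kAtA is approximately kA**2
```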

For the moment I can work by adding the nullspace but eventually the
need for pinning DOFs will resurface, so I'd like to ask where the
breakdown is coming from. What causes the breakdowns? Is that a generic
problem occurring when adding (dof_i = val) rows to least-squares
systems which prevents these preconditioners from being robust? If so,
what preconditioners could be robust?
I did a minimal sweep of the available PCs by going over the possible
inputs of -pc_type for my application while pinning one DOF. Excepting
unavailable PCs (not compiled for, other setup missing, ...) and those
that did break down, I am left with ( hmg jacobi mat none pbjacobi sor
svd ).
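One quick check relevant to the question above, sketched on a hypothetical small system: appending a (dof_i = val) row actually removes the rank deficiency rather than introducing one, so a breakdown with pinning is not explained by singularity of the pinned least-squares operator itself.

```python
import numpy as np

# Sketch of the pinning variant (illustrative, not the lsdiv.py system):
# append a row enforcing dof_0 = val to the rectangular system.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
A = np.zeros((len(edges), n))
for row, (i, j) in enumerate(edges):
    A[row, i], A[row, j] = 1.0, -1.0

pin = np.zeros((1, n))
pin[0, 0] = 1.0                      # the (dof_i = val) row
A_pin = np.vstack([A, pin])

# Without the pin A is rank-deficient (constant nullspace); with it the
# smallest singular value is strictly positive, so the LS solution is unique.
print(np.linalg.svd(A, compute_uv=False)[-1])      # ~0
print(np.linalg.svd(A_pin, compute_uv=False)[-1])  # > 0
```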


Best regards,
Marco

