Re: [petsc-dev] Right-preconditioned GMRES

2019-11-07 Thread Pierre Jolivet via petsc-dev


> On 7 Nov 2019, at 5:32 AM, Smith, Barry F.  wrote:
> 
> 
>  Some idiot logged what they did, but not why they did it.
> 
> commit bf108f309acab50613e150419c680842cf4b8a05 (HEAD)
> Author: Barry Smith 
> Date:   Thu Mar 18 20:40:53 2004 -0600
> 
>bk-changeset-1.2063.1.1
>barrysmith@barry-smiths-computer.local|ChangeSet|20040319024053|12244
>ChangeSet
>  1.2063.1.1 04/03/18 20:40:53 barrysmith@barry-smiths-computer.local +5 -0
>  if matrix is symmetric try to use preconditioner options for symmetric 
> matrices
> 
> 
>   Here is my guess as to this guy's reasoning 15 years ago: if the user knows 
> their problem is SPD and they thus switch to CG they will get garbage using 
> the default ASM type, they will be confused and unhappy with the result not 
> realizing that it is due to an inappropriate preconditioner default when 
> using CG. The penalty is, of course, someone using GMRES will get slower 
> convergence than they should as you point out.
> 
>   Today I think we could do better. We could introduce the concept of a 
> "symmetric" preconditioner PCIsSymmetric() PCIsSymmetricKnown() and then 
> CG/all KSP that require symmetric preconditioners could query this 
> information and error immediately if the PC indicates it is NOT symmetric.

Don’t forget about PCIsHermitian() and PCIsHermitianKnown().
But what you suggest goes along with what Matt answered earlier.
All this is OK with me.
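[A toy sketch, in Python, of how a symmetry-requiring KSP could use the proposed query; the names mirror the hypothetical PCIsSymmetric()/PCIsSymmetricKnown() and none of this exists in PETSc as of this thread:]

```python
class PC:
    """Toy stand-in for a preconditioner carrying optional symmetry info."""
    def __init__(self, symmetric=None):
        self._symmetric = symmetric      # None means "unknown"

    def is_symmetric_known(self):        # cf. proposed PCIsSymmetricKnown()
        return self._symmetric is not None

    def is_symmetric(self):              # cf. proposed PCIsSymmetric()
        return bool(self._symmetric)

def cg_setup(pc):
    # A KSP that requires a symmetric preconditioner errors immediately
    # when the PC is known to be unsymmetric (e.g. ASM with RESTRICT),
    # instead of silently returning garbage.
    if pc.is_symmetric_known() and not pc.is_symmetric():
        raise ValueError("KSPCG requires a symmetric preconditioner")

cg_setup(PC(symmetric=True))    # fine
cg_setup(PC())                  # unknown: no error, i.e. current behavior
try:
    cg_setup(PC(symmetric=False))
except ValueError as e:
    print(e)                    # errors out early, as proposed
```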
Though, in the meantime, I’ll force the type to RESTRICT in my PC 
(https://gitlab.com/petsc/petsc/commit/7e2068805d975b7ad588c59b62e2c0b3e60cd4af#2c7d367ac831f3b0c5fb767c0eb16c1ea7ae7fe0_720_722),
 because I don’t think it’s OK to go from a convergence in 9 iterations:
[…]
   9 KSP Residual norm 3.632028841798e-05
[…]
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
type: asm
  total subdomain blocks = 4, user-defined overlap
  restriction/interpolation type - RESTRICT
[…]
To a convergence in 26 iterations:
[…]
 26 KSP Residual norm 1.548760754380e-04
[…]
  PC Object: (pc_hpddm_levels_1_) 4 MPI processes
type: asm
  total subdomain blocks = 4, user-defined overlap
  restriction/interpolation type - BASIC
[…]
For a Helmholtz equation with only 4 subdomains. But that’s just my opinion, of 
course.

Thanks,
Pierre

> Or we could have PCSetSymmetric() which turns on whatever PC-type-specific 
> options are needed to make the preconditioner symmetric, and the 
> symmetry-requiring KSPs could turn on this option. One has to be careful here, 
> because if the user specifically set RAS then it should not be overruled and 
> changed without their knowledge. Note that asm.c already has the guard if 
> (!osm->type_set) { so if the user set RAS it will not be changed. 
> 
>  I am not sure that there is a perfect solution that satisfies all use cases 
> but I agree the current behavior is questionable and could be replaced with 
> better behavior that still prevents tragedies of failure of ASM with CG due 
> to the defaults.
> 
>  Barry
> 
> 
> 
> 
>> On Nov 6, 2019, at 12:12 PM, Pierre Jolivet  
>> wrote:
>> 
>> I need to figure this out myself first.
>> In the meantime, here is another (PCASM related) question for you: why is 
>> PCASMType switched to BASIC when using a symmetric Pmat? (I guess I don’t 
>> have to tell you about the performance of RAS vs. ASM)
>> To me, that would make sense if the default KSP was also switched to CG 
>> instead of GMRES if the Pmat and Amat were symmetric, but I don’t think this 
>> is the case.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 24 Oct 2019, at 5:40 PM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> Send the code and exact instructions to run a "good" and a "bad" ASM 
>>> 
>>> Barry
>>> 
>>> 
>>>> On Oct 14, 2019, at 10:44 AM, Pierre Jolivet  
>>>> wrote:
>>>> 
>>>> Here are the three logs.
>>>> FGMRES also gives a wrong first iterate.
>>>> I think Mark was right in the sense that the problem is _most likely_ in 
>>>> my RHS.
>>>> But I need to figure out why I only get this problem with 
>>>> right-preconditioned KSPs with restrict or none.
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>> 
>>>> 
>>>>> On 13 Oct 2019, at 8:16 PM, Smith, Barry F.  wrote:
>>>>> 
>>>>> 
>>>>> Is this one process with one subdomain? (And hence no meaningful overlap 
>>>>> since th

Re: [petsc-dev] Right-preconditioned GMRES

2019-11-06 Thread Pierre Jolivet via petsc-dev


> On 6 Nov 2019, at 7:44 PM, Matthew Knepley  wrote:
> 
> On Wed, Nov 6, 2019 at 1:32 PM Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
>> On 6 Nov 2019, at 7:30 PM, Fande Kong <fdkong...@gmail.com> wrote:
>> 
>> Even if both the linear operator and the preconditioner  operator are 
>> symmetric, the preconditioned operator by RAS will  NOT be symmetric so that 
>> CG is not applicable. 
>> 
>> RAS keeps overlapping elements during restriction but discards them when 
>> doing interpolation, which leads to an unsymmetric preconditioned operator.
> 
> Of course.
> What’s your point?
> 
> I guess Fande's point is that we should consider switching to CG or MINRES, 
> since we are being careful about the PC.

OK, that would make sense.
Still, it is known that in some cases ASM will not converge when RAS will (OK, 
that’s only true with KSPRICHARDSON…), so I’m a little bit sceptical about the 
default switch to BASIC.
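[To make the symmetry argument above concrete, here is a small NumPy sketch, purely illustrative and not PETSc code: it builds the BASIC (ASM) and RESTRICT (RAS) preconditioners for a 1D Laplacian split into two overlapping subdomains, and checks which one is symmetric.]

```python
import numpy as np

n = 8
# SPD 1D Laplacian.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

doms  = [np.arange(0, 5), np.arange(3, 8)]   # overlapping subdomains
owned = [np.arange(0, 4), np.arange(4, 8)]   # non-overlapping partition

def restriction(rows, n):
    R = np.zeros((len(rows), n))
    R[np.arange(len(rows)), rows] = 1.0
    return R

M_asm = np.zeros((n, n))   # BASIC:    sum_i R_i^T A_i^{-1} R_i
M_ras = np.zeros((n, n))   # RESTRICT: sum_i R_i^T D_i A_i^{-1} R_i
for dom, own in zip(doms, owned):
    R = restriction(dom, n)
    Ai_inv = np.linalg.inv(R @ A @ R.T)
    # D_i keeps only the locally owned rows: this is the "discard the
    # overlap during interpolation" step that breaks symmetry.
    D = np.diag(np.isin(dom, own).astype(float))
    M_asm += R.T @ Ai_inv @ R
    M_ras += R.T @ D @ Ai_inv @ R

print(np.allclose(M_asm, M_asm.T))   # True: CG is applicable
print(np.allclose(M_ras, M_ras.T))   # False: hence no CG with RESTRICT
```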

Thanks,
Pierre

>   Thanks,
> 
>  Matt
>  
> Thanks,
> Pierre
> 
>> Fande,
>> 
>>  
>> 
>> On Wed, Nov 6, 2019 at 11:13 AM Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> I need to figure this out myself first.
>> In the meantime, here is another (PCASM related) question for you: why is 
>> PCASMType switched to BASIC when using a symmetric Pmat? (I guess I don’t 
>> have to tell you about the performance of RAS vs. ASM)
>> To me, that would make sense if the default KSP was also switched to CG 
>> instead of GMRES if the Pmat and Amat were symmetric, but I don’t think this 
>> is the case.
>> 
>> Thanks,
>> Pierre
>> 
>> > On 24 Oct 2019, at 5:40 PM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>> > 
>> > 
>> >  Send the code and exact instructions to run a "good" and a "bad" ASM 
>> > 
>> >   Barry
>> > 
>> > 
>> >> On Oct 14, 2019, at 10:44 AM, Pierre Jolivet <pierre.joli...@enseeiht.fr> wrote:
>> >> 
>> >> Here are the three logs.
>> >> FGMRES also gives a wrong first iterate.
>> >> I think Mark was right in the sense that the problem is _most likely_ in 
>> >> my RHS.
>> >> But I need to figure out why I only get this problem with 
>> >> right-preconditioned KSPs with restrict or none.
>> >> 
>> >> Thanks,
>> >> Pierre
>> >> 
>> >> 
>> >> 
>> >>> On 13 Oct 2019, at 8:16 PM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>> >>> 
>> >>> 
>> >>> Is this one process with one subdomain? (And hence no meaningful overlap 
>> >>> since there is nothing to overlap?) And you expect to get the "exact" 
>> >>> answer on one iteration? 
>> >>> 
>> >>> Please run the right preconditioned GMRES with -pc_asm_type [restrict 
>> >>> and basic and none]  -ksp_monitor_true_solution  and send the output for 
>> >>> the three cases.
>> >>> 
>> >>> For kicks you can also try FGMRES (which always uses right 
>> >>> preconditioning) to see if the same problem appears.
>> >>> 
>> >>> Barry
>> >>> 
>> >>> 
>> >>>> On Oct 13, 2019, at 2:41 AM, Pierre Jolivet via petsc-dev 
>> >>>> <petsc-dev@mcs.anl.gov> wrote:
>> >>>> 
>> >>>> Hello,
>> >>>> I’m struggling to understand the following weirdness with PCASM with 
>> >>>> exact subdomain solvers.
>> >>>> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann 
>> >>>> BCs.
>> >>>> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either 
>> >>>> preconditioned on the right or on the left, I get the expected result, 
>> >>>> cf. attached screenshot.
>> >>>> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on 
>> >>>> the left, I get the expected result as well.
>> >>>> However, with PCASM + 1 iteration of GMRES preconditioned on the right, 
>> >>>> I don’t get what I should (I believe).
>> >>>> Furthermore, this problem is specific to -pc_asm_type restrict,none (I 
>> >>>> get the expected result with basic,interpolate).
>> >>>> 
>> >>>> Any hint?
>> >>>> 
>> >>>> Thanks,
>> >>>> Pierre
>> >>>> 
>> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi 
>> >>>> -ksp_pc_side right -> bjacobi_OK
>> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm 
>> >>>> -ksp_pc_side left -> asm_OK
>> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm 
>> >>>> -ksp_pc_side right -> asm_KO
>> >>>> 
>> >>>> 
>> >>> 
>> >> 
>> >> 
>> > 
>> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/


Re: [petsc-dev] Right-preconditioned GMRES

2019-11-06 Thread Pierre Jolivet via petsc-dev


> On 6 Nov 2019, at 7:30 PM, Fande Kong  wrote:
> 
> Even if both the linear operator and the preconditioner  operator are 
> symmetric, the preconditioned operator by RAS will  NOT be symmetric so that 
> CG is not applicable. 
> 
> RAS keeps overlapping elements during restriction but discards them when 
> doing interpolation, which leads to an unsymmetric preconditioned operator.

Of course.
What’s your point?

Thanks,
Pierre

> Fande,
> 
>  
> 
> On Wed, Nov 6, 2019 at 11:13 AM Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
> I need to figure this out myself first.
> In the meantime, here is another (PCASM related) question for you: why is 
> PCASMType switched to BASIC when using a symmetric Pmat? (I guess I don’t 
> have to tell you about the performance of RAS vs. ASM)
> To me, that would make sense if the default KSP was also switched to CG 
> instead of GMRES if the Pmat and Amat were symmetric, but I don’t think this 
> is the case.
> 
> Thanks,
> Pierre
> 
> > On 24 Oct 2019, at 5:40 PM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
> > 
> > 
> >  Send the code and exact instructions to run a "good" and a "bad" ASM 
> > 
> >   Barry
> > 
> > 
> >> On Oct 14, 2019, at 10:44 AM, Pierre Jolivet <pierre.joli...@enseeiht.fr> wrote:
> >> 
> >> Here are the three logs.
> >> FGMRES also gives a wrong first iterate.
> >> I think Mark was right in the sense that the problem is _most likely_ in 
> >> my RHS.
> >> But I need to figure out why I only get this problem with 
> >> right-preconditioned KSPs with restrict or none.
> >> 
> >> Thanks,
> >> Pierre
> >> 
> >> 
> >> 
> >>> On 13 Oct 2019, at 8:16 PM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
> >>> 
> >>> 
> >>> Is this one process with one subdomain? (And hence no meaningful overlap 
> >>> since there is nothing to overlap?) And you expect to get the "exact" 
> >>> answer on one iteration? 
> >>> 
> >>> Please run the right preconditioned GMRES with -pc_asm_type [restrict and 
> >>> basic and none]  -ksp_monitor_true_solution  and send the output for the 
> >>> three cases.
> >>> 
> >>> For kicks you can also try FGMRES (which always uses right 
> >>> preconditioning) to see if the same problem appears.
> >>> 
> >>> Barry
> >>> 
> >>> 
> >>>> On Oct 13, 2019, at 2:41 AM, Pierre Jolivet via petsc-dev 
> >>>> <petsc-dev@mcs.anl.gov> wrote:
> >>>> 
> >>>> Hello,
> >>>> I’m struggling to understand the following weirdness with PCASM with 
> >>>> exact subdomain solvers.
> >>>> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann 
> >>>> BCs.
> >>>> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either 
> >>>> preconditioned on the right or on the left, I get the expected result, 
> >>>> cf. attached screenshot.
> >>>> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on 
> >>>> the left, I get the expected result as well.
> >>>> However, with PCASM + 1 iteration of GMRES preconditioned on the right, 
> >>>> I don’t get what I should (I believe).
> >>>> Furthermore, this problem is specific to -pc_asm_type restrict,none (I 
> >>>> get the expected result with basic,interpolate).
> >>>> 
> >>>> Any hint?
> >>>> 
> >>>> Thanks,
> >>>> Pierre
> >>>> 
> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi 
> >>>> -ksp_pc_side right -> bjacobi_OK
> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm 
> >>>> -ksp_pc_side left -> asm_OK
> >>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm 
> >>>> -ksp_pc_side right -> asm_KO
> >>>> 
> >>>> 
> >>> 
> >> 
> >> 
> > 
> 



Re: [petsc-dev] Right-preconditioned GMRES

2019-11-06 Thread Pierre Jolivet via petsc-dev
I need to figure this out myself first.
In the meantime, here is another (PCASM related) question for you: why is 
PCASMType switched to BASIC when using a symmetric Pmat? (I guess I don’t have 
to tell you about the performance of RAS vs. ASM)
To me, that would make sense if the default KSP was also switched to CG instead 
of GMRES if the Pmat and Amat were symmetric, but I don’t think this is the 
case.

Thanks,
Pierre

> On 24 Oct 2019, at 5:40 PM, Smith, Barry F.  wrote:
> 
> 
>  Send the code and exact instructions to run a "good" and a "bad" ASM 
> 
>   Barry
> 
> 
>> On Oct 14, 2019, at 10:44 AM, Pierre Jolivet  
>> wrote:
>> 
>> Here are the three logs.
>> FGMRES also gives a wrong first iterate.
>> I think Mark was right in the sense that the problem is _most likely_ in my 
>> RHS.
>> But I need to figure out why I only get this problem with 
>> right-preconditioned KSPs with restrict or none.
>> 
>> Thanks,
>> Pierre
>> 
>> 
>> 
>>> On 13 Oct 2019, at 8:16 PM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> Is this one process with one subdomain? (And hence no meaningful overlap 
>>> since there is nothing to overlap?) And you expect to get the "exact" 
>>> answer on one iteration? 
>>> 
>>> Please run the right preconditioned GMRES with -pc_asm_type [restrict and 
>>> basic and none]  -ksp_monitor_true_solution  and send the output for the 
>>> three cases.
>>> 
>>> For kicks you can also try FGMRES (which always uses right preconditioning) 
>>> to see if the same problem appears.
>>> 
>>> Barry
>>> 
>>> 
>>>> On Oct 13, 2019, at 2:41 AM, Pierre Jolivet via petsc-dev 
>>>>  wrote:
>>>> 
>>>> Hello,
>>>> I’m struggling to understand the following weirdness with PCASM with exact 
>>>> subdomain solvers.
>>>> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann 
>>>> BCs.
>>>> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either 
>>>> preconditioned on the right or on the left, I get the expected result, cf. 
>>>> attached screenshot.
>>>> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on the 
>>>> left, I get the expected result as well.
>>>> However, with PCASM + 1 iteration of GMRES preconditioned on the right, I 
>>>> don’t get what I should (I believe).
>>>> Furthermore, this problem is specific to -pc_asm_type restrict,none (I get 
>>>> the expected result with basic,interpolate).
>>>> 
>>>> Any hint?
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi 
>>>> -ksp_pc_side right -> bjacobi_OK
>>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
>>>> left -> asm_OK
>>>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
>>>> right -> asm_KO
>>>> 
>>>> 
>>> 
>> 
>> 
> 



Re: [petsc-dev] PetscLayoutFindOwner and PetscLayoutFindOwnerIndex

2019-10-28 Thread Pierre Jolivet via petsc-dev
https://gitlab.com/petsc/petsc/merge_requests//

> On 24 Oct 2019, at 4:47 PM, Smith, Barry F.  wrote:
> 
> 
>   These routines should be fixed.
> 
> 
>> On Oct 16, 2019, at 5:19 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> These two functions use a parameter “owner” of type PetscInt*.
>> Shouldn’t this be PetscMPIInt*?
>> This implies changes left and right, so I want to check I’m not pushing an 
>> incorrect MR.
>> 
>> Thanks,
>> Pierre
> 



Re: [petsc-dev] PetscLayoutFindOwner and PetscLayoutFindOwnerIndex

2019-10-25 Thread Pierre Jolivet via petsc-dev
I can live with PetscInt instead of PetscMPIInt; this was more of a curiosity I 
noticed.
But if no one fixes these, I’ll give it a go in the near future.
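[For context, the owner lookup these routines perform amounts to a search over the cumulative ownership ranges of the layout; a rough Python sketch (not the PETSc implementation, names are illustrative):]

```python
import bisect

# Rank r owns global indices in [ranges[r], ranges[r+1]); here 3 ranks.
ranges = [0, 4, 9, 12]

def find_owner(idx):
    # The result is a rank, i.e. an MPI int -- the motivation for asking
    # whether "owner" should be PetscMPIInt* rather than PetscInt*.
    return bisect.bisect_right(ranges, idx) - 1

print([find_owner(i) for i in (0, 3, 4, 8, 11)])   # [0, 0, 1, 1, 2]
```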

Thanks,
Pierre

> On 24 Oct 2019, at 4:47 PM, Smith, Barry F.  wrote:
> 
> 
>   These routines should be fixed.
> 
> 
>> On Oct 16, 2019, at 5:19 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> These two functions use a parameter “owner” of type PetscInt*.
>> Shouldn’t this be PetscMPIInt*?
>> This implies changes left and right, so I want to check I’m not pushing an 
>> incorrect MR.
>> 
>> Thanks,
>> Pierre
> 



Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-21 Thread Pierre Jolivet via petsc-dev


> On 21 Oct 2019, at 3:01 PM, Jed Brown  wrote:
> 
> Pierre Jolivet via petsc-dev  writes:
> 
>>> On 21 Oct 2019, at 7:52 AM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> 
>>>> On Oct 21, 2019, at 12:23 AM, Pierre Jolivet  
>>>> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 21 Oct 2019, at 7:11 AM, Smith, Barry F.  wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Oct 20, 2019, at 11:52 PM, Pierre Jolivet 
>>>>>>  wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On 21 Oct 2019, at 6:42 AM, Smith, Barry F.  wrote:
>>>>>>> 
>>>>>>> Could you provide a use case where you want to access/have a block size 
>>>>>>> of a IS that is not an ISBlock? 
>>>>>> 
>>>>>> In the end, all I really want is get access to the underlying 
>>>>>> is->data->idx without having to worry about the subclass of is.
>>>>>> I don’t have such a use case, but I don’t think this is really related 
>>>>>> to what I want to achieve (or maybe it is…).
>>>>> 
>>>>> ISGetIndices()
>>>> 
>>>> Not true for ISBlock with bs > 1.
>>> 
>>> Certainly suppose to be, is there a bug?
>>> 
>>> static PetscErrorCode ISGetIndices_Block(IS in,const PetscInt *idx[])
>>> {
>>> IS_Block   *sub = (IS_Block*)in->data;
>>> PetscErrorCode ierr;
>>> PetscInt   i,j,k,bs,n,*ii,*jj;
>>> 
>>> PetscFunctionBegin;
>>> ierr = PetscLayoutGetBlockSize(in->map, &bs);CHKERRQ(ierr);
>>> 
>>> Dang, there is that stupid layout stuff again. Who put this crap in. 
>>> 
>>> ierr = PetscLayoutGetLocalSize(in->map, &n);CHKERRQ(ierr);
>>> n   /= bs;
>>> if (bs == 1) *idx = sub->idx;
>>> else {
>> 
>> There it is, I don’t want this if/else. ISGetBlockIndices would have been a 
>> function always returning sub->idx.
> 
> Your code still can't skip the branch because ISGeneral can have bs>1.

Ah, OK, got confused by the lack of bs in the ISCreateGeneral constructor.
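[A small illustration of Jed's distinction, in plain Python with made-up data: ISBlock stores one index per block and expands to contiguous runs, whereas an ISGeneral with bs > 1 only promises that consecutive indices "go together", so its raw idx array cannot double as block indices.]

```python
bs = 3

# ISBlock: one entry per block; each block expands to bs contiguous indices.
block_idx = [2, 5]
expanded = [bs * b + j for b in block_idx for j in range(bs)]
print(expanded)          # [6, 7, 8, 15, 16, 17]

# ISGeneral with bs = 3: indices are merely grouped in threes; nothing
# forces a group to be contiguous, so no block indices can be read off.
general_idx = [4, 9, 1, 7, 0, 2]
groups = [general_idx[i:i + bs] for i in range(0, len(general_idx), bs)]
print(groups)            # [[4, 9, 1], [7, 0, 2]] -- non-contiguous groups
```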

Thanks,
Pierre

> Block size in IS means "these indices go together" while ISBlock imposes
> the further constraint: "indices in each block are contiguous".  So you
> can't just take the code you quoted where it returns the raw idx:
> 
> if (!isblock) {
>  ISGetIndices(is,&indices);
>  ISLocalToGlobalMappingCreate(comm,1,n,indices,PETSC_COPY_VALUES,mapping);
>  ISRestoreIndices(is,&indices);
> } else {
>  ISGetBlockSize(is,&bs);
>  ISBlockGetIndices(is,&indices);
>  ISLocalToGlobalMappingCreate(comm,bs,n/bs,indices,PETSC_COPY_VALUES,mapping);
>  ISBlockRestoreIndices(is,&indices);
> }
> 
> 
> PetscErrorCode  ISGetBlockSize(IS is,PetscInt *size)
> {
>  PetscErrorCode ierr;
> 
>  PetscFunctionBegin;
>  ierr = PetscLayoutGetBlockSize(is->map, size);CHKERRQ(ierr);
>  PetscFunctionReturn(0);
> }
> 



Re: [petsc-dev] ksp_error_if_not_converged in multilevel solvers

2019-10-21 Thread Pierre Jolivet via petsc-dev


> On 21 Oct 2019, at 3:04 PM, Jed Brown  wrote:
> 
> Pierre Jolivet via petsc-dev  writes:
> 
>> On Oct 20, 2019, at 6:07 PM, "Smith, Barry F."  wrote:
>> 
>>> 
>>>  The reason the code works this way is that normally 
>>> -ksp_error_if_not_converged is propagated into the inner (and innerer) 
>>> solves and normally it is desirable that these inner solves do not error 
>>> simply because they reach the maximum number of iterations since for nested 
>>> iterative methods generally we don't need or care if the inner solves 
>>> "converge". 
>> 
>> I fully agree with you on the last part of the above sentence. Thus, this 
>> makes me question the first part (which I wasn't aware of): why is 
>> error_if_not_converged being propagated to inner solves?
>> I'm sure there are good usages, but if one cares that ksp_1 (which depends 
>> on ksp_2) converges, why should an error be thrown if ksp_2 does not 
>> converge as long as ksp_1 does (I guess this goes along your last paragraph)?
> 
> What if the user is debugging a singular or indefinite coarse operator
> when they expect it to be SPD?  Sure, they could set that flag
> directly for the coarse KSP via the options database.

I most often end up debugging bottom-up (or coarse to fine) instead of 
top-down, but you probably have some convincing arguments that I should do it 
the other way around.

Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-21 Thread Pierre Jolivet via petsc-dev


> On 21 Oct 2019, at 7:52 AM, Smith, Barry F.  wrote:
> 
> 
> 
>> On Oct 21, 2019, at 12:23 AM, Pierre Jolivet  
>> wrote:
>> 
>> 
>> 
>>> On 21 Oct 2019, at 7:11 AM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> 
>>>> On Oct 20, 2019, at 11:52 PM, Pierre Jolivet  
>>>> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On 21 Oct 2019, at 6:42 AM, Smith, Barry F.  wrote:
>>>>> 
>>>>> Could you provide a use case where you want to access/have a block size 
>>>>> of a IS that is not an ISBlock? 
>>>> 
>>>> In the end, all I really want is get access to the underlying 
>>>> is->data->idx without having to worry about the subclass of is.
>>>> I don’t have such a use case, but I don’t think this is really related to 
>>>> what I want to achieve (or maybe it is…).
>>> 
>>> ISGetIndices()
>> 
>> Not true for ISBlock with bs > 1.
> 
>  Certainly suppose to be, is there a bug?
> 
> static PetscErrorCode ISGetIndices_Block(IS in,const PetscInt *idx[])
> {
>  IS_Block   *sub = (IS_Block*)in->data;
>  PetscErrorCode ierr;
>  PetscInt   i,j,k,bs,n,*ii,*jj;
> 
>  PetscFunctionBegin;
>  ierr = PetscLayoutGetBlockSize(in->map, &bs);CHKERRQ(ierr);
> 
> Dang, there is that stupid layout stuff again. Who put this crap in. 
> 
>  ierr = PetscLayoutGetLocalSize(in->map, &n);CHKERRQ(ierr);
>  n   /= bs;
>  if (bs == 1) *idx = sub->idx;
>  else {

There it is, I don’t want this if/else. ISGetBlockIndices would have been a 
function always returning sub->idx.

Thanks,
Pierre

>    ierr = PetscMalloc1(bs*n,&jj);CHKERRQ(ierr);
>    *idx = jj;
>    k    = 0;
>    ii   = sub->idx;
>    for (i=0; i<n; i++)
>      for (j=0; j<bs; j++) jj[k++] = bs*ii[i] + j;
>  }
>  PetscFunctionReturn(0);
> }
> 
> 
>> 
>>>> Again, my goal is just to avoid having to write something like: 
>>>> https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322
>>>> If things were done as in ISLocalToGlobalMapping (which I know is not the 
>>>> same at all as IS/ISBlock), I think this could just read:
>>>> ISGetBlockIndices(is,&indices)
>>>> ISLocalToGlobalMappingCreate(comm,bs,n/bs,indices,PETSC_COPY_VALUES,mapping)
>>> 
>>> If you want your code to take advantage of the underlying structure of the 
>>> IS (be it strided, blocked) for optimizations then you HAVE to go through 
>>> the subclasses by design and nature (for example VecScatterCreate() takes 
>>> advantage of the structure of the IS as does 
>>> ISLocalToGlobalMappingCreate()). There is no universal concept of a block 
>>> size in IS (despite the crap that other people put in using PetscLayout 
>>> which has nothing to do with ISBlock and is completely buggy.) nor should 
>>> there be or is there a need for. 
>> 
>> OK then.
>> 
>> Thanks,
>> Pierre
>> 
>>> Barry
>>> 
>>> 
>>> 
>>> 
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> 
>>>>>> On Oct 16, 2019, at 2:50 AM, Pierre Jolivet via petsc-dev 
>>>>>>  wrote:
>>>>>> 
>>>>>> Hello,
>>>>>> I’m trying to understand what is the rationale for naming a function 
>>>>>> ISBlockGetIndices and another ISLocalToGlobalMappingGetBlockIndices 
>>>>>> (BlockGet vs. GetBlock).
>>>>>> Also, it looks to me like the implementation of ISBlockGetIndices is 
>>>>>> somehow less versatile than ISLocalToGlobalMappingGetBlockIndices.
>>>>>> Indeed, I may call ISLocalToGlobalMappingGetBlockIndices with an 
>>>>>> underlying non-block IS, while I can’t for ISBlockGetIndices, so a check 
>>>>>> must be performed by the user, e.g., 
>>>>>> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/is/examples/tutorials/ex3.c.html#line58
>>>>>>  or this (IMHO) ugly code duplication 
>>>>>> https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322.
>>>>>> 
>>>>>> Thoughts and/or comments? Would it make sense to add an 
>>>>>> ISGetBlockIndices/ISRestoreBlockIndices or would that be too confusing 
>>>>>> for the user?
>>>>>> 
>>>>>> Thanks,
>>>>>> Pierre



Re: [petsc-dev] ksp_error_if_not_converged in multilevel solvers

2019-10-20 Thread Pierre Jolivet via petsc-dev



On Oct 20, 2019, at 6:07 PM, "Smith, Barry F."  wrote:

> 
>   The reason the code works this way is that normally 
> -ksp_error_if_not_converged is propagated into the inner (and innerer) solves 
> and normally it is desirable that these inner solves do not error simply 
> because they reach the maximum number of iterations since for nested 
> iterative methods generally we don't need or care if the inner solves 
> "converge". 

I fully agree with you on the last part of the above sentence. Thus, this makes 
me question the first part (which I wasn't aware of): why is 
error_if_not_converged being propagated to inner solves?
I'm sure there are good usages, but if one cares that ksp_1 (which depends on 
ksp_2) converges, why should an error be thrown if ksp_2 does not converge as 
long as ksp_1 does (I guess this goes along your last paragraph)?
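[A toy model of the propagation behavior being questioned, nothing to do with the actual KSP implementation: an outer iteration that would converge fine is killed because the flag was pushed down into an inner solve that merely hits its iteration cap.]

```python
class ToySolver:
    def __init__(self, name, max_it, error_if_not_converged=False, inner=None):
        self.name = name
        self.max_it = max_it
        self.error_if_not_converged = error_if_not_converged
        self.inner = inner
        if inner is not None:
            # The questioned behavior: the flag is propagated downward.
            inner.error_if_not_converged = error_if_not_converged

    def solve(self, work_needed):
        if self.inner is not None:
            self.inner.solve(work_needed // 2)   # inner solve per outer sweep
        converged = work_needed <= self.max_it
        if not converged and self.error_if_not_converged:
            raise RuntimeError(f"{self.name}: DIVERGED_ITS")
        return converged

inner = ToySolver("coarse", max_it=1)
outer = ToySolver("outer", max_it=50, error_if_not_converged=True, inner=inner)
try:
    outer.solve(31)   # the outer solve would converge (31 <= 50)...
except RuntimeError as e:
    print(e)          # ...but the propagated flag makes the inner solve error
```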

Thanks,
Pierre

>Of course, in your particular use case this backfires. 
> 
>I'm not sure what the best design fix is. It seems anything will 
> complicate the code making it harder to maintain. 
> 
>Possible changes:
>   * if error_if_not_converged is set directly on a KSP then also 
> cause it to error on maximum iterations, but not if it is propagated into a 
> KSP. This would require tracking how error_if_not_converged got set
>   * make error_if_not_converged an enum with, for example, the value 
> __always__ causing the error to be generated even on maximum iterations. This 
> is easier on the code than the above 
> 
>A larger, more far-reaching change would be to change the current model and 
> have the "inner" KSP use CONVERGED_ITERATING instead of DIVERGED_ITS 
> when it doesn't matter whether it converges or not: 
> KSPSetReachMaximumIterationsConverged(); the outer KSP would call this 
> on the inner KSP at construction time. This is appealing from a code-clarity 
> point of view, since currently the inner solves return a negative (diverged) 
> reason even when that is acceptable, and it is confusing to return a diverged 
> reason while everything is fine. For the given use case one would need 
> -pc_bddc_coarse_ksp_error_if_not_converged  
> -pc_bddc_coarse_ksp_reach_maximum_iterations_converged false, but this is 
> cumbersome; who would remember the second option?
> 
>   I'm not really excited by any of my proposed solutions.
> 
>  Thoughts?
> 
> 
>  Barry
> 
> 
> 
> 
>> On Oct 20, 2019, at 7:55 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> I’m trying to get multilevel solvers to error out when coarse levels are not 
>> converging, but I’m failing…
>> Could someone either tell me if this is not possible to do so, or help me 
>> find what the problem in my options is, please?
>> (in src/ksp/ksp/examples/tutorials)
>> $ mpirun -n 8 ./ex71 -pde_type Elasticity -cells 7,9,8 -dim 3 
>> -pc_bddc_monolithic -pc_bddc_use_faces -pc_bddc_coarse_pc_type none 
>> -pc_bddc_coarse_ksp_type gmres -pc_bddc_coarse_ksp_converged_reason 
>> -pc_bddc_coarse_ksp_error_if_not_converged -ksp_converged_reason
>>  Linear pc_bddc_coarse_ solve did not converge due to DIVERGED_ITS 
>> iterations 1 <- I want to error out here
>> […]
>>  Linear pc_bddc_coarse_ solve did not converge due to DIVERGED_ITS 
>> iterations 1
>> Linear solve converged due to CONVERGED_RTOL iterations 31
>> $ mpirun -np 1 ./ex1 -ksp_type gmres -pc_type gamg -ksp_converged_reason 
>> -mg_coarse_pc_type lu -mg_coarse_ksp_max_it 5 -mg_coarse_ksp_type gmres 
>> -ksp_type fgmres -mg_levels_1_ksp_type fgmres 
>> -mg_coarse_ksp_error_if_not_converged -mg_coarse_ksp_converged_reason
>>Linear mg_coarse_ solve did not converge due to DIVERGED_ITS iterations 5 
>>  <- I want to error out here
>> […]
>>Linear mg_coarse_ solve did not converge due to DIVERGED_ITS iterations 5
>> Linear solve converged due to CONVERGED_RTOL iterations 3
>> 
>> This is probably linked with this SETERRQ1 
>> https://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c.html#line836
>>  and the condition ksp->reason != KSP_DIVERGED_ITS.
>> If one just wants to run a fixed number of iterations, not checking for 
>> convergence, why would one set ksp->errorifnotconverged to true?
>> 
>> Thanks,
>> Pierre
> 


Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Pierre Jolivet via petsc-dev



> On 21 Oct 2019, at 7:11 AM, Smith, Barry F.  wrote:
> 
> 
> 
>> On Oct 20, 2019, at 11:52 PM, Pierre Jolivet  
>> wrote:
>> 
>> 
>> 
>>> On 21 Oct 2019, at 6:42 AM, Smith, Barry F.  wrote:
>>> 
>>> Could you provide a use case where you want to access/have a block size of 
>>> a IS that is not an ISBlock? 
>> 
>> In the end, all I really want is get access to the underlying is->data->idx 
>> without having to worry about the subclass of is.
>> I don’t have such a use case, but I don’t think this is really related to 
>> what I want to achieve (or maybe it is…).
> 
>   ISGetIndices()

Not true for ISBlock with bs > 1.

>> Again, my goal is just to avoid having to write something like: 
>> https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322
>> If things were done as in ISLocalToGlobalMapping (which I know is not the 
>> same at all as IS/ISBlock), I think this could just read:
>> ISGetBlockIndices(is,&indices)
>> ISLocalToGlobalMappingCreate(comm,bs,n/bs,indices,PETSC_COPY_VALUES,mapping)
> 
>  If you want your code to take advantage of the underlying structure of the 
> IS (be it strided, blocked) for optimizations then you HAVE to go through the 
> subclasses by design and nature (for example VecScatterCreate() takes 
> advantage of the structure of the IS as does ISLocalToGlobalMappingCreate()). 
> There is no universal concept of a block size in IS (despite the crap that 
> other people put in using PetscLayout which has nothing to do with ISBlock 
> and is completely buggy.) nor should there be or is there a need for. 

OK then.

Thanks,
Pierre

>  Barry
> 
> 
> 
> 
>> 
>> Thanks,
>> Pierre
>> 
>>> 
>>>> On Oct 16, 2019, at 2:50 AM, Pierre Jolivet via petsc-dev 
>>>>  wrote:
>>>> 
>>>> Hello,
>>>> I’m trying to understand what is the rationale for naming a function 
>>>> ISBlockGetIndices and another ISLocalToGlobalMappingGetBlockIndices 
>>>> (BlockGet vs. GetBlock).
>>>> Also, it looks to me like the implementation of ISBlockGetIndices is 
>>>> somehow less versatile than ISLocalToGlobalMappingGetBlockIndices.
>>>> Indeed, I may call ISLocalToGlobalMappingGetBlockIndices with an 
>>>> underlying non-block IS, while I can’t for ISBlockGetIndices, so a check 
>>>> must be performed by the user, e.g., 
>>>> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/is/examples/tutorials/ex3.c.html#line58
>>>>  or this (IMHO) ugly code duplication 
>>>> https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322.
>>>> 
>>>> Thoughts and/or comments? Would it make sense to add an 
>>>> ISGetBlockIndices/ISRestoreBlockIndices or would that be too confusing for 
>>>> the user?
>>>> 
>>>> Thanks,
>>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-20 Thread Pierre Jolivet via petsc-dev


> On 21 Oct 2019, at 6:42 AM, Smith, Barry F.  wrote:
> 
>  Could you provide a use case where you want to access/have a block size of an 
> IS that is not an ISBlock? 

In the end, all I really want is to get access to the underlying is->data->idx 
without having to worry about the subclass of is.
I don’t have such a use case, but I don’t think this is really related to what 
I want to achieve (or maybe it is…).
Again, my goal is just to avoid having to write something like: 
https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322
If things were done as in ISLocalToGlobalMapping (which I know is not the same 
at all as IS/ISBlock), I think this could just read:
ISGetBlockIndices(is,&indices)
ISLocalToGlobalMappingCreate(comm,bs,n/bs,indices,PETSC_COPY_VALUES,mapping)

Thanks,
Pierre

> 
>> On Oct 16, 2019, at 2:50 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> I’m trying to understand what is the rationale for naming a function 
>> ISBlockGetIndices and another ISLocalToGlobalMappingGetBlockIndices 
>> (BlockGet vs. GetBlock).
>> Also, it looks to me like the implementation of ISBlockGetIndices is somehow 
>> less versatile than ISLocalToGlobalMappingGetBlockIndices.
>> Indeed, I may call ISLocalToGlobalMappingGetBlockIndices with an underlying 
>> non-block IS, while I can’t for ISBlockGetIndices, so a check must be 
>> performed by the user, e.g., 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/is/examples/tutorials/ex3.c.html#line58
>>  or this (IMHO) ugly code duplication 
>> https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322.
>> 
>> Thoughts and/or comments? Would it make sense to add an 
>> ISGetBlockIndices/ISRestoreBlockIndices or would that be too confusing for 
>> the user?
>> 
>> Thanks,
>> Pierre
> 



[petsc-dev] ksp_error_if_not_converged in multilevel solvers

2019-10-20 Thread Pierre Jolivet via petsc-dev
Hello,
I’m trying to get multilevel solvers to error out when coarse levels are not 
converging, but I’m failing…
Could someone either tell me if this is not possible to do so, or help me find 
what the problem in my options is, please?
(in src/ksp/ksp/examples/tutorials)
$ mpirun -n 8 ./ex71 -pde_type Elasticity -cells 7,9,8 -dim 3 
-pc_bddc_monolithic -pc_bddc_use_faces -pc_bddc_coarse_pc_type none 
-pc_bddc_coarse_ksp_type gmres -pc_bddc_coarse_ksp_converged_reason 
-pc_bddc_coarse_ksp_error_if_not_converged -ksp_converged_reason
  Linear pc_bddc_coarse_ solve did not converge due to DIVERGED_ITS iterations 
1 <- I want to error out here
[…]
  Linear pc_bddc_coarse_ solve did not converge due to DIVERGED_ITS iterations 1
Linear solve converged due to CONVERGED_RTOL iterations 31
$ mpirun -np 1 ./ex1 -ksp_type gmres -pc_type gamg -ksp_converged_reason 
-mg_coarse_pc_type lu -mg_coarse_ksp_max_it 5 -mg_coarse_ksp_type gmres 
-ksp_type fgmres -mg_levels_1_ksp_type fgmres 
-mg_coarse_ksp_error_if_not_converged -mg_coarse_ksp_converged_reason
Linear mg_coarse_ solve did not converge due to DIVERGED_ITS iterations 5  
<- I want to error out here
[…]
Linear mg_coarse_ solve did not converge due to DIVERGED_ITS iterations 5
Linear solve converged due to CONVERGED_RTOL iterations 3

This is probably linked with this SETERRQ1 
https://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c.html#line836
and the condition ksp->reason != KSP_DIVERGED_ITS.
If one just wants to run a fixed number of iterations, not checking for 
convergence, why would one set ksp->errorifnotconverged to true?

Thanks,
Pierre



Re: [petsc-dev] PetscLayoutFindOwner and PetscLayoutFindOwnerIndex

2019-10-16 Thread Pierre Jolivet via petsc-dev

> On 16 Oct 2019, at 8:01 PM, Zhang, Junchao  wrote:
> 
> The value of "owner" should fit in PetscMPIInt.

Are you implying that BuildSystem always promotes PetscInt to be able to store 
a PetscMPIInt (what if you configure with 32-bit indices and a 64-bit MPI 
implementation)?

Thanks,
Pierre

> But if you change prototype of the two functions, you have to change all 
> their uses.
> In petsc, values representing MPI ranks are not always of type PetscMPIInt. 
> Only those closely tied to MPI routines are in PetscMPIInt.
> 
> --Junchao Zhang
> 
> 
> On Wed, Oct 16, 2019 at 5:19 AM Pierre Jolivet via petsc-dev  wrote:
> Hello,
> These two functions use a parameter “owner” of type PetscInt*.
> Shouldn’t this be PetscMPIInt*?
> This implies changes left and right, so I want to check I’m not pushing an 
> incorrect MR.
> 
> Thanks,
> Pierre



[petsc-dev] PetscLayoutFindOwner and PetscLayoutFindOwnerIndex

2019-10-16 Thread Pierre Jolivet via petsc-dev
Hello,
These two functions use a parameter “owner” of type PetscInt*.
Shouldn’t this be PetscMPIInt*?
This implies changes left and right, so I want to check I’m not pushing an 
incorrect MR.

Thanks,
Pierre

[petsc-dev] BlockGetIndices and GetBlockIndices

2019-10-16 Thread Pierre Jolivet via petsc-dev
Hello,
I’m trying to understand what is the rationale for naming a function 
ISBlockGetIndices and another ISLocalToGlobalMappingGetBlockIndices (BlockGet 
vs. GetBlock).
Also, it looks to me like the implementation of ISBlockGetIndices is somehow 
less versatile than ISLocalToGlobalMappingGetBlockIndices.
Indeed, I may call ISLocalToGlobalMappingGetBlockIndices with an underlying 
non-block IS, while I can’t for ISBlockGetIndices, so a check must be performed 
by the user, e.g., 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/is/examples/tutorials/ex3.c.html#line58
or this (IMHO) ugly code duplication 
https://www.mcs.anl.gov/petsc/petsc-dev/src/vec/is/utils/isltog.c.html#line322.

Thoughts and/or comments? Would it make sense to add an 
ISGetBlockIndices/ISRestoreBlockIndices or would that be too confusing for the 
user?

Thanks,
Pierre

Re: [petsc-dev] Right-preconditioned GMRES

2019-10-14 Thread Pierre Jolivet via petsc-dev
Here are the three logs.
FGMRES also gives a wrong first iterate.
I think Mark was right in the sense that the problem is _most likely_ in my RHS.
But I need to figure out why I only get this problem with right-preconditioned 
KSPs with restrict or none.
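For completeness, the algebra saying why one right-preconditioned iteration with an exact (single-subdomain) solve should already be exact — standard GMRES reasoning, nothing PETSc-specific:

```latex
% One iteration of right-preconditioned GMRES solves
\min_{z \in \operatorname{span}\{r_0\}} \bigl\| b - A\bigl(x_0 + M^{-1} z\bigr) \bigr\|_2 ,
\qquad r_0 = b - A x_0 ,
% so with an exact subdomain solve, M^{-1} = A^{-1}, taking z = r_0 gives
x_1 = x_0 + M^{-1} r_0 = x_0 + A^{-1} \bigl(b - A x_0\bigr) = A^{-1} b .
```

So if the first iterate is off, the residual b - A x_0 being fed to GMRES (i.e., the RHS) is the natural suspect, not GMRES itself.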

Thanks,
Pierre



dump-basic
Description: Binary data


dump-none
Description: Binary data


dump-restrict
Description: Binary data


> On 13 Oct 2019, at 8:16 PM, Smith, Barry F.  wrote:
> 
> 
>  Is this one process with one subdomain? (And hence no meaningful overlap 
> since there is nothing to overlap?) And you expect to get the "exact" answer 
> on one iteration? 
> 
>  Please run the right preconditioned GMRES with -pc_asm_type [restrict and 
> basic and none]  -ksp_monitor_true_solution  and send the output for the 
> three cases.
> 
>  For kicks you can also try FGMRES (which always uses right preconditioning) 
> to see if the same problem appears.
> 
>  Barry
> 
> 
>> On Oct 13, 2019, at 2:41 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> I’m struggling to understand the following weirdness with PCASM with exact 
>> subdomain solvers.
>> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann BCs.
>> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either 
>> preconditioned on the right or on the left, I get the expected result, cf. 
>> attached screenshot.
>> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on the 
>> left, I get the expected result as well.
>> However, with PCASM + 1 iteration of GMRES preconditioned on the right, I 
>> don’t get what I should (I believe).
>> Furthermore, this problem is specific to -pc_asm_type restrict,none (I get 
>> the expected result with basic,interpolate).
>> 
>> Any hint?
>> 
>> Thanks,
>> Pierre
>> 
>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi 
>> -ksp_pc_side right -> bjacobi_OK
>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
>> left -> asm_OK
>> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
>> right -> asm_KO
>> 
>> 
> 



Re: [petsc-dev] Right-preconditioned GMRES

2019-10-13 Thread Pierre Jolivet via petsc-dev


> On 13 Oct 2019, at 3:48 PM, Mark Adams  wrote:
> 
> Oh I see you have established that it is the overlap and got something that 
> looks like the load on a homogenous BC and the processor boundary ... very 
> odd.
> 
> Does the bad version converge to the correct solution if you iterate? It 
> looks like your BC is wrong.

Yeah, of course: I switched back from P_3 to P_1 and now the first iterate is 
OK.
I am indeed most definitely messing up with the RHS at some point before the 
KSPSolve.

Thanks!
Pierre

> It sure looks like a bug. Can you valgrind this?
> 
> 
> On Sun, Oct 13, 2019 at 9:34 AM Mark Adams  wrote:
> Try -pc_asm_overlap 0 with ASM.
> 
> And I trust the KO run works with 1 processor (direct solve)
> 
> On Sun, Oct 13, 2019 at 3:41 AM Pierre Jolivet via petsc-dev  wrote:
> Hello,
> I’m struggling to understand the following weirdness with PCASM with exact 
> subdomain solvers.
> I’m dealing with a very simple Poisson problem with Dirichlet + Neumann BCs.
> If I use PCBJACOBI + KSPPREONLY or 1 iteration of GMRES either preconditioned 
> on the right or on the left, I get the expected result, cf. attached 
> screenshot.
> If I use PCASM + KSPPREONLY or 1 iteration of GMRES preconditioned on the 
> left, I get the expected result as well.
> However, with PCASM + 1 iteration of GMRES preconditioned on the right, I 
> don’t get what I should (I believe).
> Furthermore, this problem is specific to -pc_asm_type restrict,none (I get 
> the expected result with basic,interpolate).
> 
> Any hint?
> 
> Thanks,
> Pierre
> 
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type bjacobi -ksp_pc_side 
> right -> bjacobi_OK
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
> left -> asm_OK
> $ -sub_pc_type lu -ksp_max_it 1 -ksp_type gmres -pc_type asm -ksp_pc_side 
> right -> asm_KO
> 
> 



[petsc-dev] SNES_DIVERGED_TR_DELTA

2019-10-06 Thread Pierre Jolivet via petsc-dev
Hello,
Shouldn’t there be a deprecation warning for this commit 
https://gitlab.com/petsc/petsc/commit/c1c6be5a2ecdf9c2ab80f7794960faf5930a5b23
instead of a compilation failure for people still using 
SNES_CONVERGED_TR_DELTA?

Thanks,
Pierre

[petsc-dev] Mixing separate and shared ouputs

2019-09-28 Thread Pierre Jolivet via petsc-dev
Hello,
If I put something like this in src/ksp/ksp/examples/tutorials/ex12.c
  args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}separate output} -pc_type {{jacobi bjacobi lu}separate output}
I get
# success 9/13 tests (69.2%)

Now 
  args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}shared output} -pc_type {{jacobi bjacobi lu}shared output}
Still gives me
# success 9/13 tests (69.2%)

But
  args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}shared output} -pc_type {{jacobi bjacobi lu}separate output}
Gives me
# success 6/7 tests (85.7%)

Is this the expected behavior?
Any easy way to get 13 tests as well?

Thanks,
Pierre

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-23 Thread Pierre Jolivet via petsc-dev
Hong,
You should probably cherry pick 
https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80?merge_request_iid=2069
(and remove the MatNest part).
This fixes a similar issue in MatTransposeMatMult with nontrivial LDAs.
Since this commit is part of a feature MR that is unlikely to be ready for 
tomorrow, this fix (as of now) is also unlikely to be in master for the release.

Thanks,
Pierre

> On 23 Sep 2019, at 6:02 PM, Zhang, Hong  wrote:
> 
> Barry:
> As a hack for this release could you have the Numeric portion of the 
> multiply routines check if the symbolic data is there and if not just call 
> the symbolic and attach the needed data? You might need to have a utility 
> function that does all the symbolic part except the allocation of the matrix 
> and then call this from the numeric part as well as the real symbolic part.
>  
> I'm working on this now.  I was not aware of MatSeqDenseSetLDA(), which 
> changes the pattern of data access in a seqdense matrix. 
> Pierre's patch:
> "change Bm here 
> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line549
>  to the LDA of B" 
> fix this bug. I'll further test it and submit a pull request. 
> Then, I'll check slepc's bug report.
> Hong



Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev

> On 23 Sep 2019, at 4:16 AM, h...@aspiritech.org wrote:
> 
> Now I understand why the latest changes in MatMatMult_MPIAIJ_MPIDense() cause 
> errors in your application (likely for slepc as well). 
> I always assume "MAT_REUSE_MATRIX requires that the C matrix has come from a 
> previous call with MAT_INITIAL_MATRIX" for all matrix products in petsc. I 
> noticed 
> 
> /*
> This is a "dummy function" that handles the case where matrix C was 
> created as a dense matrix
>   directly by the user and passed to MatMatMult() with the MAT_REUSE_MATRIX 
> option
> 
>   It is the same as MatMatMultSymbolic_MPIAIJ_MPIDense() except does not 
> create C
> */
> PetscErrorCode MatMatMultNumeric_MPIDense(Mat A,Mat B,Mat C)
> 
> but I do not see this routine being used by any petsc routines or tests, 
> thus I thought it might be a dead routine,

It sure isn’t: 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/dense/mpi/mpidense.c.html#line1192
> and planned to remove it after a careful investigation.
> The name of this routine is misleading as well.

I agree.

> I prefer a uniform and clean design for all matrix products, i.e., do not 
> allow "REUSE" without a previous call with "INITIAL". Almost all products 
> include internal data structures for various reasons. 
> In the latest MatMatMult_MPIAIJ_MPIDense(), we added internal data structures and 
> obtained an impressive improvement in memory usage.

As I said to Barry, the change in behavior will likely result in worse memory 
usage for all applications that currently use this “feature”.
The previous symbolic phase was also basically a no-op; I’ve not yet 
benchmarked the new implementation, but I’m betting this will be slightly 
costlier.
Codes which previously didn’t cache the Mat C and relied once again on this 
feature will also have to pay the price of doing multiple symbolic phases 
(assuming the rest remains as is and is not changed to cache C).

Thanks,
Pierre

> Hong
> 
> On Sun, Sep 22, 2019 at 4:38 PM Pierre Jolivet via petsc-dev  wrote:
> 
> 
>> On 22 Sep 2019, at 8:32 PM, Smith, Barry F.  wrote:
>> 
>> 
>>  Since this is a commonly used feature we will need to support it in the release 
>> or it will break a variety of codes.
>> 
>>  I am not sure how to "deprecate it" in a useful way. How would the code 
>> actively tell the user that the approach is deprecated and they should 
>> update their code before the next release? Having it print warnings while it 
>> is running if they never used the INITIAL is too intrusive but what else 
>> could be done? Save the message and print it when the program ends? I guess 
>> we could do that. Is that too intrusive? Will it break other peoples tests? 
>> Do we want it to break other people's tests with this message?
>> 
>>  Suggestions?  For sure this feature will be removed at some point, how to 
>> give users useful warning (reading a document doesn't work).
> 
> I believe that if you deprecate this behavior, it should mean that you 
> deprecate MatMatMultNumeric_MPIDense as well.
> Which means that there are tests that are becoming pretty meaningless, e.g., 
> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line606
>  unless there is a memory corruption of some sort, but then the user has 
> bigger concerns.
> They could instead be replaced by a check for the presence of the correct 
> stuff (MPI_Datatype and whatnot) introduced by the new MR, and if not, do 
> what you suggest (error message at the end, PetscInfo, whatever…) + go back 
> to the symbolic phase to avoid any kind of segfault like I’m getting right 
> now?
> Just my two cents…
> 
> Thanks,
> Pierre
> 
>>  Barry
>> 
>> 
>> 
>> 
>>> On Sep 22, 2019, at 1:16 PM, Pierre Jolivet  wrote:
>>> 
>>> Just to be sure: can we expect this "feature" to be fixed for the upcoming 
>>> release and deprecated later on, or will you get rid of this for good for 
>>> the release?
>>> 
>>> Thanks,
>>> Pierre
>>> 
>>> On Sep 22, 2019, at 7:11 PM, "Smith, Barry F."  wrote:
>>> 
>>>> 
>>>> Jose,
>>>> 
>>>>   Thanks for the pointer. 
>>>

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev


> On 22 Sep 2019, at 8:32 PM, Smith, Barry F.  wrote:
> 
> 
>  Since this is a commonly used feature we will need to support it in the release 
> or it will break a variety of codes.
> 
>  I am not sure how to "deprecate it" in a useful way. How would the code 
> actively tell the user that the approach is deprecated and they should update 
> their code before the next release? Having it print warnings while it is 
> running if they never used the INITIAL is too intrusive but what else could 
> be done? Save the message and print it when the program ends? I guess we 
> could do that. Is that too intrusive? Will it break other peoples tests? Do 
> we want it to break other people's tests with this message?
> 
>  Suggestions?  For sure this feature will be removed at some point, how to 
> give users useful warning (reading a document doesn't work).

I believe that if you deprecate this behavior, it should mean that you 
deprecate MatMatMultNumeric_MPIDense as well.
Which means that there are tests that are becoming pretty meaningless, e.g., 
https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line606
unless there is a memory corruption of some sort, but then the user has bigger 
concerns.
They could instead be replaced by a check for the presence of the correct stuff 
(MPI_Datatype and whatnot) introduced by the new MR, and if not, do what you 
suggest (error message at the end, PetscInfo, whatever…) + go back to the 
symbolic phase to avoid any kind of segfault like I’m getting right now?
Just my two cents…

Thanks,
Pierre

>  Barry
> 
> 
> 
> 
>> On Sep 22, 2019, at 1:16 PM, Pierre Jolivet  
>> wrote:
>> 
>> Just to be sure: can we expect this "feature" to be fixed for the upcoming 
>> release and deprecated later on, or will you get rid of this for good for 
>> the release?
>> 
>> Thanks,
>> Pierre
>> 
>> On Sep 22, 2019, at 7:11 PM, "Smith, Barry F."  wrote:
>> 
>>> 
>>> Jose,
>>> 
>>>   Thanks for the pointer. 
>>> 
>>>   Will this change dramatically affect the organization of SLEPc? As noted 
>>> in my previous email eventually we need to switch to a new API where the 
>>> REUSE with a different matrix is even more problematic.
>>> 
>>>If you folks have use cases that fundamentally require reusing a 
>>> previous matrix instead of destroying and getting a new one created we will 
>>> need to think about additional features in the API that would allow this 
>>> reusing of an array. But it seems to me that destroying the old matrix and 
>>> using the initial call to create the matrix should be ok and just require 
>>> relatively minor changes to your codes?
>>> 
>>> Barry
>>> 
>>> 
>>> 
>>> 
>>>> On Sep 22, 2019, at 11:55 AM, Jose E. Roman  wrote:
>>>> 
>>>> The man page of MatMatMult says:
>>>> "In the special case where matrix B (and hence C) are dense you can create 
>>>> the correctly sized matrix C yourself and then call this routine with 
>>>> MAT_REUSE_MATRIX, rather than first having MatMatMult() create it for you."
>>>> 
>>>> If you are going to change the usage, don't forget to remove this 
>>>> sentence. This use case is what we use in SLEPc and is now causing trouble.
>>>> Jose
>>>> 
>>>> 
>>>> 
>>>>> El 22 sept 2019, a las 18:49, Pierre Jolivet via petsc-dev 
>>>>>  escribió:
>>>>> 
>>>>> 
>>>>>> On 22 Sep 2019, at 6:33 PM, Smith, Barry F.  wrote:
>>>>>> 
>>>>>> 
>>>>>> Ok. So we definitely need better error checking and to clean up the 
>>>>>> code, comments and docs 
>>>>>> 
>>>>>> As the approaches for these computations of products get more 
>>>>>> complicated it becomes a bit harder to support the use of a raw product 
>>>>>> matrix so I don't think we want to add all the code needed to call the 
>>>>>> symbolic part (after the fact) when the matrix is raw.
>>>>> 
>>>>> To the best of my knowledge, there is only a single method (not taking MR 
>>>>> 2069 into account) that uses an MPIDense B and for which these approaches 
>>>>> are necessary, so it’s not like there are a hundred code paths to fix, 
>>>>> but I understand your point.

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev
Just to be sure: can we expect this "feature" to be fixed for the upcoming 
release and deprecated later on, or will you get rid of this for good for the 
release?

Thanks,
Pierre

On Sep 22, 2019, at 7:11 PM, "Smith, Barry F."  wrote:

> 
>   Jose,
> 
> Thanks for the pointer. 
> 
> Will this change dramatically affect the organization of SLEPc? As noted 
> in my previous email eventually we need to switch to a new API where the 
> REUSE with a different matrix is even more problematic.
> 
>  If you folks have use cases that fundamentally require reusing a 
> previous matrix instead of destroying and getting a new one created we will 
> need to think about additional features in the API that would allow this 
> reusing of an array. But it seems to me that destroying the old matrix and 
> using the initial call to create the matrix should be ok and just require 
> relatively minor changes to your codes?
> 
>  Barry
> 
> 
> 
> 
>> On Sep 22, 2019, at 11:55 AM, Jose E. Roman  wrote:
>> 
>> The man page of MatMatMult says:
>> "In the special case where matrix B (and hence C) are dense you can create 
>> the correctly sized matrix C yourself and then call this routine with 
>> MAT_REUSE_MATRIX, rather than first having MatMatMult() create it for you."
>> 
>> If you are going to change the usage, don't forget to remove this sentence. 
>> This use case is what we use in SLEPc and is now causing trouble.
>> Jose
>> 
>> 
>> 
>>> El 22 sept 2019, a las 18:49, Pierre Jolivet via petsc-dev 
>>>  escribió:
>>> 
>>> 
>>>> On 22 Sep 2019, at 6:33 PM, Smith, Barry F.  wrote:
>>>> 
>>>> 
>>>> Ok. So we definitely need better error checking and to clean up the code, 
>>>> comments and docs 
>>>> 
>>>> As the approaches for these computations of products get more complicated 
>>>> it becomes a bit harder to support the use of a raw product matrix so I 
>>>> don't think we want to add all the code needed to call the symbolic part 
>>>> (after the fact) when the matrix is raw.
>>> 
>>> To the best of my knowledge, there is only a single method (not taking MR 
>>> 2069 into account) that uses an MPIDense B and for which these approaches 
>>> are necessary, so it’s not like there are a hundred code paths to fix, 
>>> but I understand your point.
>>> 
>>>> Would that make things terribly difficult for you not being able to use a 
>>>> raw matrix?
>>> 
>>> Definitely not, but that would require some more memory + one copy after 
>>> the MatMatMult (depending on the size of your block Krylov space, that can 
>>> be quite large, and that defeats the purpose of MR 2032 of being more 
>>> memory efficient).
>>> (BTW, I now remember that I’ve been using this “feature” since our SC16 
>>> paper on block Krylov methods)
>>> 
>>>> I suspect that the dense case was just lucky that using a raw matrix 
>>>> worked.
>>> 
>>> I don’t think so, this is clearly the intent of MatMatMultNumeric_MPIDense 
>>> (vs. MatMatMultNumeric_MPIAIJ_MPIDense).
>>> 
>>>> The removal of the de facto support for REUSE on the raw matrix should be 
>>>> added to the changes document.
>>>> 
>>>> Sorry for the difficulties. We have trouble testing all the combinations 
>>>> of possible usage; even a coverage tool would not have indicated a 
>>>> problem with the lack of lda support.
>>> 
>>> No problem!
>>> 
>>> Thank you,
>>> Pierre
>>> 
>>>> Hong,
>>>> 
>>>>  Can you take a look at these things on Monday and maybe get a clean into 
>>>> a MR so it gets into the release?
>>>> 
>>>> Thanks
>>>> 
>>>> 
>>>> Barry
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> On Sep 22, 2019, at 11:12 AM, Pierre Jolivet  
>>>>> wrote:
>>>>> 
>>>>> 
>>>>>> On 22 Sep 2019, at 6:03 PM, Smith, Barry F.  wrote:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Sep 22, 2019, at 10:14 AM, Pierre Jolivet via petsc-dev 
>>>>>>>  wrote:
>>>>>>> 
>>>>>>> FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here 
>>>>>>> https://gitlab.com/petsc/p

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev


> On 22 Sep 2019, at 6:33 PM, Smith, Barry F.  wrote:
> 
> 
>   Ok. So we definitely need better error checking and to clean up the code, 
> comments and docs 
> 
>   As the approaches for these computations of products get more complicated 
> it becomes a bit harder to support the use of a raw product matrix so I don't 
> think we want to add all the code needed to call the symbolic part (after the 
> fact) when the matrix is raw.

To the best of my knowledge, there is only a single method (not taking MR 2069 
into account) that uses an MPIDense B and for which these approaches are 
necessary, so it’s not like there are a hundred code paths to fix, but I 
understand your point.

> Would that make things terribly difficult for you not being able to use a raw 
> matrix?

Definitely not, but that would require some more memory + one copy after the 
MatMatMult (depending on the size of your block Krylov space, that can be quite 
large, and that defeats the purpose of MR 2032 of being more memory efficient).
(BTW, I now remember that I’ve been using this “feature” since our SC16 paper 
on block Krylov methods)

>   I suspect that the dense case was just lucky that using a raw matrix 
> worked. 

I don’t think so, this is clearly the intent of MatMatMultNumeric_MPIDense (vs. 
MatMatMultNumeric_MPIAIJ_MPIDense).

>   The removal of the de facto support for REUSE on the raw matrix should be 
> added to the changes document.
> 
>   Sorry for the difficulties. We have trouble testing all the combinations of 
> possible usage; even a coverage tool would not have indicated a problem with 
> the lack of lda support. 

No problem!

Thank you,
Pierre

>  Hong,
> 
> Can you take a look at these things on Monday and maybe get a clean into 
> a MR so it gets into the release?
> 
>   Thanks
> 
> 
>   Barry
> 
> 
> 
> 
> 
>> On Sep 22, 2019, at 11:12 AM, Pierre Jolivet  
>> wrote:
>> 
>> 
>>> On 22 Sep 2019, at 6:03 PM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> 
>>>> On Sep 22, 2019, at 10:14 AM, Pierre Jolivet via petsc-dev 
>>>>  wrote:
>>>> 
>>>> FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here 
>>>> https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80
>>>>  (with MAT_INITIAL_MATRIX).
>>>> I believe there is something not right in your MR (2032) with 
>>>> MAT_REUSE_MATRIX (without having called MAT_INITIAL_MATRIX first), cf. 
>>>> https://gitlab.com/petsc/petsc/merge_requests/2069#note_220269898.
>>>> Of course, I’d love to be proved wrong!
>>> 
>>>  I don't understand the context.
>>> 
>>>   MAT_REUSE_MATRIX requires that the C matrix has come from a previous call 
>>> with MAT_INITIAL_MATRIX, you cannot just put any matrix in the C location.
>> 
>> 1) It was not the case before the MR; I’ve used that “feature” (which may be 
>> specific to MatMatMult_MPIAIJ_MPIDense) for as long as I can remember
>> 2) If it is not the case anymore, I think it should be mentioned somewhere 
>> (and not only in the git log, because I don’t think all users will go 
>> through that)
>> 3) This comment should be removed from the code as well: 
>> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line398
>> 
>>> This is documented in the manual page. We should have better error checking 
>>> that this is the case so the code doesn't crash at memory access but 
>>> instead produces a very useful error message if the matrix was not obtained 
>>> with MAT_INITIAL_MATRIX. 
>>> 
>>>  Is this the issue or do I not understand?
>> 
>> This is exactly the issue.
>> 
>>>  Barry
>>> 
>>> BTW: yes MAT_REUSE_MATRIX has different meanings for different matrix 
>>> operations in terms of where the matrix came from; this is supposed to be 
>>> all documented in each method's manual page but some may be missing or 
>>> incomplete, and error checking is probably not complete for all cases.  
>>> Perhaps the code should be changed to have multiple different names for 
>>> each reuse case for clarity to user?
>> 
>> Definitely, cf. above.
>> 
>> Thanks,
>> Pierre
>> 
>>>> 
>>>> Thanks,
>>>> Pierre
>>>> 
>>>>> On 22 Sep 2019, at 5:04 PM, Zhang, Hong  wrote:
>>>>> 
>>>>> I'll check it tomorrow.
>>>>> Hong
>>>>> 
>>>>> On Sun, Sep 22, 2019 at 1:04 AM Pierre Jolivet via petsc-dev 
>>>>>  wrote:
>

Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev

> On 22 Sep 2019, at 6:03 PM, Smith, Barry F.  wrote:
> 
> 
> 
>> On Sep 22, 2019, at 10:14 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here 
>> https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80
>>  (with MAT_INITIAL_MATRIX).
>> I believe there is something not right in your MR (2032) with 
>> MAT_REUSE_MATRIX (without having called MAT_INITIAL_MATRIX first), cf. 
>> https://gitlab.com/petsc/petsc/merge_requests/2069#note_220269898.
>> Of course, I’d love to be proved wrong!
> 
>   I don't understand the context.
> 
>MAT_REUSE_MATRIX requires that the C matrix has come from a previous call 
> with MAT_INITIAL_MATRIX, you cannot just put any matrix in the C location.

1) It was not the case before the MR; I’ve used that “feature” (which may be 
specific to MatMatMult_MPIAIJ_MPIDense) for as long as I can remember
2) If it is not the case anymore, I think it should be mentioned somewhere (and 
not only in the git log, because I don’t think all users will go through that)
3) This comment should be removed from the code as well: 
https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line398
> This is documented in the manual page. We should have better error checking 
> that this is the case so the code doesn't crash at memory access but instead 
> produces a very useful error message if the matrix was not obtained with 
> MAT_INITIAL_MATRIX. 
> 
>   Is this the issue or do I not understand?

This is exactly the issue.

>   Barry
> 
> BTW: yes MAT_REUSE_MATRIX has different meanings for different matrix 
> operations in terms of where the matrix came from; this is supposed to be all 
> documented in each method's manual page but some may be missing or incomplete, 
> and error checking is probably not complete for all cases.  Perhaps the code 
> should be changed to have multiple different names for each reuse case for 
> clarity to user?

Definitely, cf. above.

Thanks,
Pierre

>> 
>> Thanks,
>> Pierre
>> 
>>> On 22 Sep 2019, at 5:04 PM, Zhang, Hong  wrote:
>>> 
>>> I'll check it tomorrow.
>>> Hong
>>> 
>>> On Sun, Sep 22, 2019 at 1:04 AM Pierre Jolivet via petsc-dev 
>>>  wrote:
>>> Jed,
>>> I’m not sure how easy it is to put more than a few lines of code on GitLab, 
>>> so I’ll just send the (tiny) source here, as a follow-up of our discussion 
>>> https://gitlab.com/petsc/petsc/merge_requests/2069#note_220229648.
>>> Please find attached a .cpp showing the brokenness of C=A*B with A of type 
>>> MPIAIJ and B of type MPIDense when the LDA of B is not equal to its number 
>>> of local rows.
>>> It does [[1,1];[1,1]] * [[0,1,2,3];[0,1,2,3]]
>>> C should be equal to 2*B, but it’s not, unless lda = m (= 1).
>>> Mat Object: 2 MPI processes
>>>  type: mpidense
>>> 0.e+00 1.e+00 2.e+00 
>>> 3.e+00
>>> 0.e+00 1.e+00 2.e+00 
>>> 3.e+00
>>> 
>>> If you change Bm here 
>>> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line549
>>>  to the LDA of B, you’ll get the correct result.
>>> Mat Object: 2 MPI processes
>>>  type: mpidense
>>> 0.e+00 2.e+00 4.e+00 
>>> 6.e+00
>>> 0.e+00 2.e+00 4.e+00 
>>> 6.e+00
>>> 
>>> Unfortunately, w.r.t. MR 2069, I still don’t get the same results with a 
>>> plain view LDA > m (KO) and a view + duplicate LDA = m (OK).
>>> So there might be something else to fix (or this might not even be a 
>>> correct fix), but the only reproducer I have right now is the full solver.
>>> 
>>> Thanks,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev
FWIW, I’ve fixed MatMatMult and MatTransposeMatMult here 
https://gitlab.com/petsc/petsc/commit/93d7d1d6d29b0d66b5629a261178b832a925de80 
(with MAT_INITIAL_MATRIX).
I believe there is something not right in your MR (2032) with MAT_REUSE_MATRIX 
(without having called MAT_INITIAL_MATRIX first), cf. 
https://gitlab.com/petsc/petsc/merge_requests/2069#note_220269898.
Of course, I’d love to be proved wrong!

Thanks,
Pierre

> On 22 Sep 2019, at 5:04 PM, Zhang, Hong  wrote:
> 
> I'll check it tomorrow.
> Hong
> 
> On Sun, Sep 22, 2019 at 1:04 AM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> Jed,
> I’m not sure how easy it is to put more than a few lines of code on GitLab, 
> so I’ll just send the (tiny) source here, as a follow-up of our discussion 
> https://gitlab.com/petsc/petsc/merge_requests/2069#note_220229648 
> <https://gitlab.com/petsc/petsc/merge_requests/2069#note_220229648>.
> Please find attached a .cpp showing the brokenness of C=A*B with A of type 
> MPIAIJ and B of type MPIDense when the LDA of B is not equal to its number of 
> local rows.
> It does [[1,1];[1,1]] * [[0,1,2,3];[0,1,2,3]]
> C should be equal to 2*B, but it’s not, unless lda = m (= 1).
> Mat Object: 2 MPI processes
>   type: mpidense
> 0.e+00 1.e+00 2.e+00 
> 3.e+00
> 0.e+00 1.e+00 2.e+00 
> 3.e+00
> 
> If you change Bm here 
> https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line549
>  
> <https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line549>
>  to the LDA of B, you’ll get the correct result.
> Mat Object: 2 MPI processes
>   type: mpidense
> 0.e+00 2.e+00 4.e+00 
> 6.e+00
> 0.e+00 2.e+00 4.e+00 
> 6.e+00
> 
> Unfortunately, w.r.t. MR 2069, I still don’t get the same results with a 
> plain view LDA > m (KO) and a view + duplicate LDA = m (OK).
> So there might be something else to fix (or this might not even be a correct 
> fix), but the only reproducer I have right now is the full solver.
> 
> Thanks,
> Pierre
> 



[petsc-dev] Broken MatMatMult_MPIAIJ_MPIDense

2019-09-22 Thread Pierre Jolivet via petsc-dev
Jed,
I’m not sure how easy it is to put more than a few lines of code on GitLab, so 
I’ll just send the (tiny) source here, as a follow-up of our discussion 
https://gitlab.com/petsc/petsc/merge_requests/2069#note_220229648.
Please find attached a .cpp showing the brokenness of C=A*B with A of type 
MPIAIJ and B of type MPIDense when the LDA of B is not equal to its number of 
local rows.
It does [[1,1];[1,1]] * [[0,1,2,3];[0,1,2,3]]
C should be equal to 2*B, but it’s not, unless lda = m (= 1).
Mat Object: 2 MPI processes
  type: mpidense
0.e+00 1.e+00 2.e+00 3.e+00
0.e+00 1.e+00 2.e+00 3.e+00

If you change Bm here 
https://www.mcs.anl.gov/petsc/petsc-dev/src/mat/impls/aij/mpi/mpimatmatmult.c.html#line549
 to the LDA of B, you’ll get the correct result.
Mat Object: 2 MPI processes
  type: mpidense
0.e+00 2.e+00 4.e+00 6.e+00
0.e+00 2.e+00 4.e+00 6.e+00

Unfortunately, w.r.t. MR 2069, I still don’t get the same results with a plain 
view LDA > m (KO) and a view + duplicate LDA = m (OK).
So there might be something else to fix (or this might not even be a correct 
fix), but the only reproducer I have right now is the full solver.

Thanks,
Pierre

matmatmult.cpp
Description: Binary data


Re: [petsc-dev] How to check that MatMatMult is available

2019-09-20 Thread Pierre Jolivet via petsc-dev

> On 20 Sep 2019, at 7:36 AM, Jed Brown  <mailto:j...@jedbrown.org>> wrote:
> 
> Pierre Jolivet via petsc-dev  <mailto:petsc-dev@mcs.anl.gov>> writes:
> 
>> Hello,
>> Given a Mat A, I’d like to know if there is an implementation available for 
>> doing C=A*B
>> I was previously using MatHasOperation(A, MATOP_MATMAT_MULT, ) 
>> but the result is not correct in at least two cases:
> 
> Do you want MATOP_MAT_MULT and MATOP_TRANSPOSE_MAT_MULT?

Ah, OK, MATMAT => MatMatMat, so you are right.
I’ll make sure MatHasOperation_Transpose and MatHasOperation_Nest return the 
correct value then.

>> 1) A is a MATTRANSPOSE and the underlying Mat B=A^T has a 
>> MatTransposeMatMult implementation (there is currently no 
>> MATOP_MATTRANSPOSEMAT_MULT)
>> 2) A is a MATNEST. This could be fixed in MatHasOperation_Nest, by checking 
>> MATOP_MATMAT_MULT of all matrices in the MATNEST, but this would be 
>> incorrect again if there is a single MATTRANPOSE in there
>> What is then the proper way to check that I can indeed call MatMatMult(A,…)?
>> Do I need to copy/paste all this 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/interface/matrix.c.html#line9801
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/mat/interface/matrix.c.html#line9801>
>>  in user code?
> 
> Unfortunately, I don't think there is any common interface for the
> string handling, though it would make sense to add one because code of
> this sort is copied many times:
> 
>   /* dispatch based on the type of A and B from their PetscObject's PetscFunctionLists. */
>   char multname[256];
>   ierr = PetscStrncpy(multname,"MatMatMult_",sizeof(multname));CHKERRQ(ierr);
>   ierr = PetscStrlcat(multname,((PetscObject)A)->type_name,sizeof(multname));CHKERRQ(ierr);
>   ierr = PetscStrlcat(multname,"_",sizeof(multname));CHKERRQ(ierr);
>   ierr = PetscStrlcat(multname,((PetscObject)B)->type_name,sizeof(multname));CHKERRQ(ierr);
>   ierr = PetscStrlcat(multname,"_C",sizeof(multname));CHKERRQ(ierr); /* e.g., multname = "MatMatMult_seqdense_seqaij_C" */
>   ierr = PetscObjectQueryFunction((PetscObject)B,multname,&mult);CHKERRQ(ierr);
>   if (!mult) SETERRQ2(PetscObjectComm((PetscObject)A),PETSC_ERR_ARG_INCOMP,"MatMatMult requires A, %s, to be compatible with B, %s",((PetscObject)A)->type_name,((PetscObject)B)->type_name);
> 
>> Thanks,
>> Pierre
>> 
>> PS: in my case, C and B are always of type MATDENSE. Should we handle
>> this in MatMatMult and never error out for such a simple case?
> 
> I would say yes.

Perfect, that’s all I need!

Thanks,
Pierre

>> Indeed, one can just loop on the columns of B and C by doing multiple
>> MatMult. This is what I’m currently doing in user code when
>> hasMatMatMult == PETSC_FALSE.



[petsc-dev] How to check that MatMatMult is available

2019-09-19 Thread Pierre Jolivet via petsc-dev
Hello,
Given a Mat A, I’d like to know if there is an implementation available for 
doing C=A*B
I was previously using MatHasOperation(A, MATOP_MATMAT_MULT, ) 
but the result is not correct in at least two cases:
1) A is a MATTRANSPOSE and the underlying Mat B=A^T has a MatTransposeMatMult 
implementation (there is currently no MATOP_MATTRANSPOSEMAT_MULT)
2) A is a MATNEST. This could be fixed in MatHasOperation_Nest, by checking 
MATOP_MATMAT_MULT of all matrices in the MATNEST, but this would be incorrect 
again if there is a single MATTRANPOSE in there
What is then the proper way to check that I can indeed call MatMatMult(A,…)?
Do I need to copy/paste all this 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/interface/matrix.c.html#line9801
 in user code?

Thanks,
Pierre

PS: in my case, C and B are always of type MATDENSE. Should we handle this in 
MatMatMult and never error out for such a simple case? Indeed, one can just 
loop on the columns of B and C by doing multiple MatMult. This is what I’m 
currently doing in user code when hasMatMatMult == PETSC_FALSE.

Re: [petsc-dev] fieldsplit + composite + ksp

2019-09-18 Thread Pierre Jolivet via petsc-dev


> On 18 Sep 2019, at 4:04 PM, Matthew Knepley  wrote:
> 
> On Wed, Sep 18, 2019 at 10:01 AM Pierre Jolivet  <mailto:pierre.joli...@enseeiht.fr>> wrote:
> 
> 
>> On 18 Sep 2019, at 3:48 PM, Matthew Knepley > <mailto:knep...@gmail.com>> wrote:
>> 
>> On Wed, Sep 18, 2019 at 2:49 AM Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> Hello,
>> I’m solving the following dummy system 
>> http://jolivet.perso.enseeiht.fr/composite_ksp.tar.gz 
>> <http://jolivet.perso.enseeiht.fr/composite_ksp.tar.gz>
>> [A, B];[C, D], with a PCFIELDSPLIT. For the PC of D, I’m using a PCCOMPOSITE 
>> with two sub PCs. One of which is a PCKSP.
>> Could you please help me figure out what is wrong in the following piece of 
>> code, that may be launched with the following arguments:
>> $ mpirun -n 1 ./a.out -ksp_type preonly -pc_type fieldsplit 
>> -fieldsplit_1_pc_type composite -fieldsplit_1_sub_1_pc_type ksp 
>> -fieldsplit_1_sub_1_ksp_ksp_type gmres -fieldsplit_1_sub_1_ksp_pc_type gamg 
>> -fieldsplit_1_sub_1_ksp_ksp_converged_reason 
>> -fieldsplit_1_sub_1_ksp_pc_gamg_sym_graph 1 
>> -fieldsplit_1_sub_1_ksp_pc_gamg_square_graph 10 
>> -fieldsplit_1_sub_1_ksp_ksp_rtol 1e-8
>> 
>> It solves the dummy system twice, with a varying block D.
>> 
>> It's not the PC, it's the matrix. Everything in the PC gets re-set up just like 
>> you want.
>> 
>> I did MatEqual(S2_1, S2_001, ) and equal was false.
> 
> They are not supposed to be equal, so that’s a good thing.
> Or are you doing the comparison _after_ the MatCopy?
> 
> Yes.
>  
> I’m not sure what your point is, sorry.
> 
> I think they might not actually have identical nonzero structure.

OK, that was easier than expected.

Thanks!
Pierre

>   Thanks,
> 
>  Matt
>  
> Thanks,
> Pierre
> 
>>   Thanks,
>> 
>>  Matt
>>  
>> It should give you:
>>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
>> iterations 8
>> solve #0: 16098.3
>>   Linear fieldsplit_1_sub_1_ksp_ solve did not converge due to 
>> DIVERGED_PC_FAILED iterations 0
>>  PC_FAILED due to SUBPC_ERROR
>> solve #1: inf
>> 
>> If I switch line 70 to #if 0, I get the expected output:
>>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
>> iterations 8
>> solve #0: 16098.3
>>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
>> iterations 8
>> solve #1: 325.448
>> 
>> I’m realizing that this has probably nothing to do with the outer 
>> PCFIELDSPLIT, but this comes from a rather large FSI solver, so reproducing 
>> this behavior in “only” 97 SLOC is good enough for you, I hope.
>> 
>> Thanks in advance,
>> Pierre
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>


Re: [petsc-dev] fieldsplit + composite + ksp

2019-09-18 Thread Pierre Jolivet via petsc-dev


> On 18 Sep 2019, at 3:48 PM, Matthew Knepley  wrote:
> 
> On Wed, Sep 18, 2019 at 2:49 AM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> Hello,
> I’m solving the following dummy system 
> http://jolivet.perso.enseeiht.fr/composite_ksp.tar.gz 
> <http://jolivet.perso.enseeiht.fr/composite_ksp.tar.gz>
> [A, B];[C, D], with a PCFIELDSPLIT. For the PC of D, I’m using a PCCOMPOSITE 
> with two sub PCs. One of which is a PCKSP.
> Could you please help me figure out what is wrong in the following piece of 
> code, that may be launched with the following arguments:
> $ mpirun -n 1 ./a.out -ksp_type preonly -pc_type fieldsplit 
> -fieldsplit_1_pc_type composite -fieldsplit_1_sub_1_pc_type ksp 
> -fieldsplit_1_sub_1_ksp_ksp_type gmres -fieldsplit_1_sub_1_ksp_pc_type gamg 
> -fieldsplit_1_sub_1_ksp_ksp_converged_reason 
> -fieldsplit_1_sub_1_ksp_pc_gamg_sym_graph 1 
> -fieldsplit_1_sub_1_ksp_pc_gamg_square_graph 10 
> -fieldsplit_1_sub_1_ksp_ksp_rtol 1e-8
> 
> It solves the dummy system twice, with a varying block D.
> 
> It's not the PC, it's the matrix. Everything in the PC gets re-set up just like 
> you want.
> 
> I did MatEqual(S2_1, S2_001, ) and equal was false.

They are not supposed to be equal, so that’s a good thing.
Or are you doing the comparison _after_ the MatCopy?
I’m not sure what your point is, sorry.

Thanks,
Pierre

>   Thanks,
> 
>  Matt
>  
> It should give you:
>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
> iterations 8
> solve #0: 16098.3
>   Linear fieldsplit_1_sub_1_ksp_ solve did not converge due to 
> DIVERGED_PC_FAILED iterations 0
>  PC_FAILED due to SUBPC_ERROR
> solve #1: inf
> 
> If I switch line 70 to #if 0, I get the expected output:
>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
> iterations 8
> solve #0: 16098.3
>   Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
> iterations 8
> solve #1: 325.448
> 
> I’m realizing that this has probably nothing to do with the outer 
> PCFIELDSPLIT, but this comes from a rather large FSI solver, so reproducing 
> this behavior in “only” 97 SLOC is good enough for you, I hope.
> 
> Thanks in advance,
> Pierre
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>


[petsc-dev] fieldsplit + composite + ksp

2019-09-18 Thread Pierre Jolivet via petsc-dev
Hello,
I’m solving the following dummy system 
http://jolivet.perso.enseeiht.fr/composite_ksp.tar.gz 

[A, B];[C, D], with a PCFIELDSPLIT. For the PC of D, I’m using a PCCOMPOSITE 
with two sub PCs. One of which is a PCKSP.
Could you please help me figure out what is wrong in the following piece of 
code, that may be launched with the following arguments:
$ mpirun -n 1 ./a.out -ksp_type preonly -pc_type fieldsplit 
-fieldsplit_1_pc_type composite -fieldsplit_1_sub_1_pc_type ksp 
-fieldsplit_1_sub_1_ksp_ksp_type gmres -fieldsplit_1_sub_1_ksp_pc_type gamg 
-fieldsplit_1_sub_1_ksp_ksp_converged_reason 
-fieldsplit_1_sub_1_ksp_pc_gamg_sym_graph 1 
-fieldsplit_1_sub_1_ksp_pc_gamg_square_graph 10 
-fieldsplit_1_sub_1_ksp_ksp_rtol 1e-8

It solves the dummy system twice, with a varying block D.

It should give you:
  Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
iterations 8
solve #0: 16098.3
  Linear fieldsplit_1_sub_1_ksp_ solve did not converge due to 
DIVERGED_PC_FAILED iterations 0
 PC_FAILED due to SUBPC_ERROR
solve #1: inf

If I switch line 70 to #if 0, I get the expected output:
  Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
iterations 8
solve #0: 16098.3
  Linear fieldsplit_1_sub_1_ksp_ solve converged due to CONVERGED_RTOL 
iterations 8
solve #1: 325.448

I’m realizing that this has probably nothing to do with the outer PCFIELDSPLIT, 
but this comes from a rather large FSI solver, so reproducing this behavior in 
“only” 97 SLOC is good enough for you, I hope.

Thanks in advance,
Pierre

Re: [petsc-dev] MAT_HERMITIAN

2019-09-14 Thread Pierre Jolivet via petsc-dev



On Sep 14, 2019, at 7:19 AM, "Smith, Barry F."  wrote:

> 
> 
>> On Sep 13, 2019, at 11:19 PM, Pierre Jolivet  
>> wrote:
>> 
>> 
>> 
>>> On 11 Sep 2019, at 3:46 PM, Smith, Barry F.  wrote:
>>> 
>>> 
>>>  The phrase Hermitian symmetric was originally an attempt to make the text 
>>> clearer, but in fact it is confusing and wrong.  The phrase  Hermitian 
>>> symmetric should be removed and just  Hermitian used.
>>> 
>>>   Real matrices are either both symmetric and Hermitian, or neither 
>>> symmetric nor Hermitian.
>>> 
>>>   Complex matrices can be set to symmetric or Hermitian. For SBAIJ this 
>>> changes the meaning of the values in the matrix; for matrices with full 
>>> storage this means the user is stating that the matrix values are actually 
>>> symmetric or Hermitian in the storage. 
>>> 
>>>   Currently for complex matrices one can state the matrix is both symmetric 
>>> and Hermitian, which means one is stating the imaginary parts are all 
>>> zero; if this is the case then it doesn't matter whether 
>>> MatMult_SeqSBAIJ_1_Hermitian() or MatMult_SeqSBAIJ_1() is used for 
>>> SeqSBAIJ. Note that setting both of these doesn't mean the imaginary parts 
>>> will be zeroed out; it is a statement by the user that the imaginary parts 
>>> are zero.
>>> 
>>>   For parallel matrices, even in debug mode, checking that the user is telling 
>>> the truth when they set symmetric or Hermitian is considered too expensive, 
>>> so we just trust the user. 
>>> 
>>>   With the replacement of Hermitian symmetric by Hermitian will the text 
>>> now be clear or should more clarification be given in the manual pages (as 
>>> above)?
>> 
>> I think it makes the text more clear.
>> However, it’s not very clear to me:
>> 1) how to setup a SeqSBAIJ Mat as Hermitian, i.e., is MatSetOption(A, 
>> MAT_SYMMETRIC, PETSC_FALSE); MatSetOption(A, MAT_HERMITIAN, PETSC_TRUE); OK?
> 
>  That would work or I would just say the next overwrites the previous, so 
> just call MatSetOption(A, MAT_HERMITIAN, PETSC_TRUE); and it flips the 
> symmetric flag also.
> Similar for setting symmetric.

So, how do I let PETSc and SLEPc know that my matrix is both symmetric _and_ 
Hermitian if you flip the flag internally?

>  There should presumably really be only one flag inside the Mat object for 
> tracking this something like enum {MAT_FORM_HERMITIAN, MAT_FORM_SYMMETRIC, 
> MAT_FORM_GENERAL} MatSymmetricForm; and for SBAIJ it could only have the 
> values MAT_FORM_HERMITIAN, MAT_FORM_SYMMETRIC
> 
>> 2) why aren't there more checks in MatSetOption_SeqSBAIJ?
> 
>  Oh, because hardly anyone ever uses it for complex numbers and people 
> probably never have some symmetric matrices in the simulation and some 
> Hermitian. My guess is 95% of all cases are one or the other only.
> 
>  Yes, definitely there should be a bunch more error checking.
> 
>> As we can all agree, if (MAT_SYMMETRIC and MAT_HERMITIAN), then there 
>> should be no error for bs > 1, but on the contrary, isn’t (!MAT_SYMMETRIC 
>> and MAT_HERMITIAN and bs > 1) problematic right now?
> 
>   Why does it matter what bs is?

I don't know, the people who put the SETERRQ in MatSetOption_SeqSBAIJ may be 
able to answer that question.
Presumably because there are some BLAS/LAPACK calls with "T" instead of "H"?

Thanks,
Pierre

> Are you thinking of the diagonal blocks? 
> 
> 
>> 3) why any of those checks should be performed in MatSetOption_SeqSBAIJ. 
>> Calling MatAXPY with two matrices of the same type with the same flags is 
>> feasible no matter the value of bs.
> 
>   True, but you would need to check the flags when it is called. 
>> 
>> Thanks,
>> Pierre
>> 
>>> Barry
>>> 
>>> 
>>> case MAT_HERMITIAN:
>>> #if defined(PETSC_USE_COMPLEX) /* MAT_HERMITIAN is a synonym for 
>>> MAT_SYMMETRIC with reals */
>>>  if (!A->assembled) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_WRONGSTATE,"Must 
>>> call MatAssemblyEnd() first");
>>>  if (A->cmap->n < 65536 && A->cmap->bs == 1) {
>>>A->ops->mult = MatMult_SeqSBAIJ_1_Hermitian_ushort;
>>>  } else if (A->cmap->bs == 1) {
>>>A->ops->mult = MatMult_SeqSBAIJ_1_Hermitian;
>>>  } else SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"No support for Hermitian 
>>> with block size greater than 1");
>>> #endif
>>> 
>>> 
>>>

Re: [petsc-dev] MAT_HERMITIAN

2019-09-13 Thread Pierre Jolivet via petsc-dev



> On 11 Sep 2019, at 3:46 PM, Smith, Barry F.  wrote:
> 
> 
>The phrase Hermitian symmetric was originally an attempt to make the text 
> clearer, but in fact it is confusing and wrong.  The phrase  Hermitian 
> symmetric should be removed and just  Hermitian used.
> 
> Real matrices are either both symmetric and Hermitian, or neither 
> symmetric nor Hermitian.
> 
> Complex matrices can be set to symmetric or Hermitian. For SBAIJ this 
> changes the meaning of the values in the matrix; for matrices with full 
> storage this means the user is stating that the matrix values are actually 
> symmetric or Hermitian in the storage. 
> 
> Currently for complex matrices one can state the matrix is both symmetric 
> and Hermitian, which means one is stating the imaginary parts are all 
> zero; if this is the case then it doesn't matter whether 
> MatMult_SeqSBAIJ_1_Hermitian() or MatMult_SeqSBAIJ_1() is used for 
> SeqSBAIJ. Note that setting both of these doesn't mean the imaginary parts 
> will be zeroed out; it is a statement by the user that the imaginary parts are 
> zero.
> 
> For parallel matrices, even in debug mode, checking that the user is telling 
> the truth when they set symmetric or Hermitian is considered too expensive, 
> so we just trust the user. 
> 
> With the replacement of Hermitian symmetric by Hermitian will the text 
> now be clear or should more clarification be given in the manual pages (as 
> above)?

I think it makes the text more clear.
However, it’s not very clear to me:
1) how to setup a SeqSBAIJ Mat as Hermitian, i.e., is MatSetOption(A, 
MAT_SYMMETRIC, PETSC_FALSE); MatSetOption(A, MAT_HERMITIAN, PETSC_TRUE); OK?
2) why aren't there more checks in MatSetOption_SeqSBAIJ? As we can all agree, 
if (MAT_SYMMETRIC and MAT_HERMITIAN), then there should be no error for bs 
> 1, but on the contrary, isn’t (!MAT_SYMMETRIC and MAT_HERMITIAN and bs > 1) 
problematic right now?
3) why any of those checks should be performed in MatSetOption_SeqSBAIJ. 
Calling MatAXPY with two matrices of the same type with the same flags is 
feasible no matter the value of bs.

Thanks,
Pierre

>   Barry
> 
> 
> case MAT_HERMITIAN:
> #if defined(PETSC_USE_COMPLEX) /* MAT_HERMITIAN is a synonym for 
> MAT_SYMMETRIC with reals */
>if (!A->assembled) SETERRQ(PETSC_COMM_SELF,PETSC_ERR_ARG_WRONGSTATE,"Must 
> call MatAssemblyEnd() first");
>if (A->cmap->n < 65536 && A->cmap->bs == 1) {
>  A->ops->mult = MatMult_SeqSBAIJ_1_Hermitian_ushort;
>} else if (A->cmap->bs == 1) {
>  A->ops->mult = MatMult_SeqSBAIJ_1_Hermitian;
>} else SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"No support for Hermitian 
> with block size greater than 1");
> #endif
> 
> 
> 
>  case MAT_SYMMETRIC:
>mat->symmetric = flg;
>if (flg) mat->structurally_symmetric = PETSC_TRUE;
>mat->symmetric_set  = PETSC_TRUE;
>mat->structurally_symmetric_set = flg;
> #if !defined(PETSC_USE_COMPLEX)
>mat->hermitian = flg;
>mat->hermitian_set = PETSC_TRUE;
> #endif
>break;
>  case MAT_HERMITIAN:
>mat->hermitian = flg;
>if (flg) mat->structurally_symmetric = PETSC_TRUE;
>mat->hermitian_set  = PETSC_TRUE;
>mat->structurally_symmetric_set = flg;
> #if !defined(PETSC_USE_COMPLEX)
>mat->symmetric = flg;
>mat->symmetric_set = PETSC_TRUE;
> #endif
> 
> 
> 
> 
>> On Sep 11, 2019, at 5:55 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> (Putting back petsc-dev on c/c)
>> I don’t think you are the one that needs to do the changes.
>> There should be clarifications from the PETSc side of things, first.
>> From the man page you are quoting 
>> (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSEQSBAIJ.html):
>> 1) how to make a SBAIJ Mat Hermitian but not symmetric? Is this handled by 
>> PETSc? Is this legal to do MatSetOption(A, MAT_SYMMETRIC, PETSC_FALSE) 
>> followed by MatSetOption(A, MAT_HERMITIAN, PETSC_TRUE)? I think this is the 
>> case where bs>1 is not handled internally by PETSc, so for this case it 
>> should be OK to error out.
>> 2) “Hermitian symmetric” is a fancy way of saying that the matrix has no 
>> imaginary part. Or is just “Hermitian" what is supposed to be written on the 
>> page? In which case, how do you tell to PETSc that you are indeed dealing 
>> with a matrix with no imaginary part.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 11 Sep 2019, at 11:51 AM, Jose E. Roman  wrote:
>>> 
>>> Now I understand better. You are right.

Re: [petsc-dev] MAT_HERMITIAN

2019-09-11 Thread Pierre Jolivet via petsc-dev
(Putting back petsc-dev on c/c)
I don’t think you are the one that needs to do the changes.
There should be clarifications from the PETSc side of things, first.
From the man page you are quoting 
(https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSEQSBAIJ.html):
1) how to make a SBAIJ Mat Hermitian but not symmetric? Is this handled by 
PETSc? Is this legal to do MatSetOption(A, MAT_SYMMETRIC, PETSC_FALSE) followed 
by MatSetOption(A, MAT_HERMITIAN, PETSC_TRUE)? I think this is the case where 
bs>1 is not handled internally by PETSc, so for this case it should be OK to 
error out.
2) “Hermitian symmetric” is a fancy way of saying that the matrix has no 
imaginary part. Or is just “Hermitian" what is supposed to be written on the 
page? In which case, how do you tell to PETSc that you are indeed dealing with 
a matrix with no imaginary part.

Thanks,
Pierre

> On 11 Sep 2019, at 11:51 AM, Jose E. Roman  wrote:
> 
> Now I understand better. You are right. Maybe I should do some changes in 
> SLEPc. I will think about this next week.
> 
> 
>> On 11 Sept 2019, at 11:38, Pierre Jolivet  
>> wrote:
>> 
>> 
>> 
>>> On 11 Sep 2019, at 11:21 AM, Jose E. Roman  wrote:
>>> 
>>> In 
>>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MATSEQSBAIJ.html
>>>  it says "To make it Hermitian symmetric you can call MatSetOption(Mat, 
>>> MAT_HERMITIAN);" So my understanding is that in SBAIJ assumes MAT_SYMMETRIC 
>>> and the user must set MAT_HERMITIAN if desired.
>> 
>> Right, that is exactly what you do in STMatSetHermitian. You get from the 
>> user a SBAIJ matrix (so SYMMETRIC = true), and you set yourself HERMITIAN = 
>> true.
>> 
>>> I don't think it makes sense to set both MAT_HERMITIAN and MAT_SYMMETRIC.
>> 
>> Why? A symmetric (or Hermitian) matrix with no imaginary part is both 
>> Hermitian and symmetric, so at least this set of options defines correctly 
>> the structure of the matrix (which is not the case if you only assume it is 
>> _either_ symmetric or Hermitian).
>> 
>>> But I may be wrong because I find this part of PETSc very confusing.
>>> 
>>> 
On 11 Sept 2019, at 11:08, Jose E. Roman  wrote:
 
 Do you mean setting both flags MAT_HERMITIAN and MAT_SYMMETRIC? Is this 
 possible?
>> 
>> It was possible before my commit for bs == 1, but not for bs > 1.
>> 
>> At least now my tests are not failing.
>> 
> On 11 Sept 2019, at 11:04, Pierre Jolivet  
> wrote:
> 
> Symmetric <=> a_ij = a_ji
> Hermitian <=> a_ij = conj(a_ji)
> 
> Symmetric + Hermitian <=> a_ij = conj(a_ji) = a_ji <=> imag(a_ji) = 0 
> \forall ij
> 
> Or am I stupid?
> 
>> On 11 Sep 2019, at 10:59 AM, Jose E. Roman  wrote:
>> 
>> Not sure if I understand you. Do you mean that a complex SBAIJ Mat with 
>> MAT_HERMITIAN flag can be assumed to have zero imaginary part? I don't 
>> think so. This matrix should have real diagonal entries, but 
>> off-diagonal entries should be allowed to have nonzero imaginary part. 
>> This is what is done in MatMult_SeqSBAIJ_1_Hermitian(), where 
>> off-diagonal entries are conjugated when used for the strict lower 
>> triangular part. So I guess the right fix is to implement 
>> MatMult_SeqSBAIJ_2_Hermitian(), MatMult_SeqSBAIJ_3_Hermitian() and so on 
>> with appropriate use of PetscConj().
>> 
>> Jose
>> 
>>> On 11 Sept 2019, at 10:36, Pierre Jolivet 
>>>  wrote:
>>> 
>>> Nevermind, this is the wrong fix.
>>> The proper fix is in PETSc. It should not error out if the matrix is 
>>> also symmetric.
>>> Indeed, complex symmetric Hermitian => complex with no imaginary part.
>>> Thus all operations like MatMult, MatMultHermitianTranspose, Cholesky… 
>>> will work for bs > 1, since all is filled with zeroes.
>>> I will take care of this, I’m c/c’ing petsc-dev so that they don’t have 
>>> to “reverse engineer” the trivial change to MatSetOption_SeqSBAIJ.
>>> 
>>> Sorry about the noise.
>>> 
>>> Thank you,
>>> Pierre
>>> 
 On 10 Sep 2019, at 8:37 AM, Pierre Jolivet 
  wrote:
 
 Hello,
 Could you consider not setting MAT_HERMITIAN here 
 http://slepc.upv.es/documentation/current/src/sys/classes/st/interface/stsles.c.html#line276
  when using SBAIJ matrices with bs > 1?
 This makes PETSc error out with
 #  [1]PETSC ERROR: No support for this operation for this object 
 type
 #  [1]PETSC ERROR: No support for Hermitian with block size 
 greater than 1
 
 The change does not bring any regression, since PETSc is always giving 
 an error without it, but on the contrary, it improves the range of 
 applicability of SLEPc, e.g., for complex Hermitian problems with 

Re: [petsc-dev] MAT_HERMITIAN

2019-09-11 Thread Pierre Jolivet via petsc-dev
Nevermind, this is the wrong fix.
The proper fix is in PETSc. It should not error out if the matrix is also 
symmetric.
Indeed, complex symmetric Hermitian => complex with no imaginary part.
Thus all operations like MatMult, MatMultHermitianTranspose, Cholesky… will 
work for bs > 1, since all is filled with zeroes.
I will take care of this, I’m c/c’ing petsc-dev so that they don’t have to 
“reverse engineer” the trivial change to MatSetOption_SeqSBAIJ.

Sorry about the noise.

Thank you,
Pierre

> On 10 Sep 2019, at 8:37 AM, Pierre Jolivet  wrote:
> 
> Hello,
> Could you consider not setting MAT_HERMITIAN here 
> http://slepc.upv.es/documentation/current/src/sys/classes/st/interface/stsles.c.html#line276
>  when using SBAIJ matrices with bs > 1?
> This makes PETSc error out with
> # [1]PETSC ERROR: No support for this operation for this object type
> # [1]PETSC ERROR: No support for Hermitian with block size greater than 1
> 
> The change does not bring any regression, since PETSc is always giving an 
> error without it, but on the contrary, it improves the range of applicability 
> of SLEPc, e.g., for complex Hermitian problems with SBAIJ matrices and bs > 1 
> that _don’t_ require the flag MAT_HERMITIAN set to true.
> 
> Thanks,
> Pierre



Re: [petsc-dev] SetPreallocationCSR

2019-08-24 Thread Pierre Jolivet via petsc-dev



> On 24 Aug 2019, at 3:07 PM, Smith, Barry F.  wrote:
> 
> 
> 
>   It is an oversight. It saves a global reduction in the MatAssembly since 
> all the processes know they don't need to communicate anything.

OK, that’s what I was hoping for.
I tried to force the flag and it doesn't seem to break my PC, so I'll push this 
in a branch (and create the corresponding MR once we get your green light).

Thanks,
Pierre
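As a sketch of the saving Barry describes: without MAT_NO_OFF_PROC_ENTRIES every rank must join a collective reduction just to learn that nobody has anything to send; with the flag set, that collective is skipped. A plain-Python cartoon of the logic (no MPI; all names here are made up for illustration):

```python
# Sketch (plain Python, no MPI): why MAT_NO_OFF_PROC_ENTRIES saves a
# global reduction. Without the flag, assembly must allreduce a
# "do I have off-process entries?" boolean; with it, every rank already
# knows the answer is "no" and the collective is skipped.
reductions = 0

def allreduce_lor(local_flags):
    global reductions
    reductions += 1          # stands in for one MPI_Allreduce
    return any(local_flags)

def assembly_needs_comm(local_flags, no_off_proc_entries):
    if no_off_proc_entries:  # flag set: skip the collective entirely
        return False
    return allreduce_lor(local_flags)

flags = [False, False, False, False]   # 4 "ranks", all entries local
assert assembly_needs_comm(flags, no_off_proc_entries=False) is False
assert reductions == 1                 # paid one collective anyway
assert assembly_needs_comm(flags, no_off_proc_entries=True) is False
assert reductions == 1                 # the flag avoided a second one
```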

>   Barry
> 
>> On Aug 24, 2019, at 7:55 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> Out of the three SetPreallocationCSR implementations (for MPIAIJ, MPIBAIJ, 
>> and MPISBAIJ), only MPIAIJ is forcing MAT_NO_OFF_PROC_ENTRIES to PETSC_TRUE 
>> before the final AssemblyBegin/AssemblyEnd:
>> - AIJ 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/aij/mpi/mpiaij.c.html#line3957
>> - BAIJ 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/mpi/mpibaij.c.html#line2782
>>  and SBAIJ 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/sbaij/mpi/mpisbaij.c.html#line2262
>> 
>> Is this option not relevant for those two types or is this an oversight?
>> 
>> Thanks,
>> Pierre
> 



[petsc-dev] SetPreallocationCSR

2019-08-24 Thread Pierre Jolivet via petsc-dev
Hello,
Out of the three SetPreallocationCSR implementations (for MPIAIJ, MPIBAIJ, and 
MPISBAIJ), only MPIAIJ is forcing MAT_NO_OFF_PROC_ENTRIES to PETSC_TRUE before 
the final AssemblyBegin/AssemblyEnd:
- AIJ 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/aij/mpi/mpiaij.c.html#line3957
 

- BAIJ 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/mpi/mpibaij.c.html#line2782
 

 and SBAIJ 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/sbaij/mpi/mpisbaij.c.html#line2262
 


Is this option not relevant for those two types or is this an oversight?

Thanks,
Pierre

Re: [petsc-dev] Sequential external packages and MPI

2019-08-22 Thread Pierre Jolivet via petsc-dev
OK, thanks to you both for your comments.
I’ll try to figure something out.

Pierre

> On 22 Aug 2019, at 8:19 AM, Smith, Barry F.  wrote:
> 
> 
>  cmake, make, sowing and utilities that are not libraries that go into the 
> overall simulation are not compiled with mpicxx etc. 
> 
>> On Aug 22, 2019, at 1:03 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> 
>> 
>>> On 22 Aug 2019, at 7:42 AM, Balay, Satish  wrote:
>>> 
>>> On Thu, 22 Aug 2019, Pierre Jolivet via petsc-dev wrote:
>>> 
>>>> Hello,
>>>> PETSc is linking “sequential” libraries with MPI libraries.
>>>> $ otool -L libmetis.dylib
>>>>/usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
>>>> current version 14.7.0)
>>>> $ otool -L libfftw3.dylib
>>>>/usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
>>>> current version 14.7.0)
>>> 
>>> this will occur if one uses mpi compilers to build PETSc.
>> 
>> Why, though?
>> If MPICXX_SHOW != “Unavailable”, is it mandatory to force CXX=MPICXX in 
>> Metis CMake?
>> Wouldn’t it be possible to just extract the compiler binary name and use 
>> that as CXX?
>> I understand you don’t want to either overcomplicate things or fix something 
>> that is not broken — for you — so I’m just making sure that it would be OK 
>> if I patch this like that locally.
>> 
>> Is there any way to avoid this, by using a “sequential” compiler and/or 
>>>> linker?
>>> 
>>> Yes - you can build these (sequential) packages/petsc with --with-mpi=0 
>>> [and without mpi compilers]
>>> 
>>>> I’m asking because we use PETSc libraries to compile both parallel and 
>>>> sequential wrappers.
>>>> Our Metis wrapper is marked as a sequential one, but since you are linking 
>>>> libmetis with MPI, this is problematic for some configurations.
>>> 
>>> Not sure what you mean by 'wrappers' here - esp 'Metis wrapper'. It's
>>> just a library.
>> 
>> It’s just another dynamic library compiled on top of libmetis that is then 
>> dynamically loaded by a DSL, which may or may not be launched with MPIRUN.
>> 
>>> If you are using petsc build tools to install these packages for a
>>> different use [other than the petsc usage specified by configure] -
>>> use different petsc builds as indicated above for different packages -
>>> as needed.
>> 
>> Having to configure + build PETSc with both real and complex numbers is 
>> already long enough.
>> That would mean a 3rd build, but why not.
>> Are there some guarantees that CXX with --with-mpi=0 will be the same as the 
>> underlying compiler of MPICXX? (I’m thinking of incompatible libc++ that 
>> would make it impossible to link in the same library Metis and the later one 
>> compiled PETSc with --with-mpi=1)
> 
>  This can be tricky. If you using --download-mpich then you know they are 
> compatible since the MPI is built from the sequential, but if you are only 
> given mpicc you have to be careful how you pull out the parts, You can start 
> with  -show which works for some of them. OpenMPI has several options to get 
> info about exactly what it uses for flags etc.
> 
>  Instead of hacking PETSc I would recommending having simple scripts that use 
> PETSc configure in the way I described in my previous email
> 
>   Barry
> 
>> 
>> Thanks,
>> Pierre
>> 
>>> BTW: Current petsc configure/builder builds only parallel fftw. [it does 
>>> not support building sequential fftw. But I guess this could be added]
>>> 
>>> Satish



Re: [petsc-dev] Sequential external packages and MPI

2019-08-22 Thread Pierre Jolivet via petsc-dev



> On 22 Aug 2019, at 7:42 AM, Balay, Satish  wrote:
> 
> On Thu, 22 Aug 2019, Pierre Jolivet via petsc-dev wrote:
> 
>> Hello,
>> PETSc is linking “sequential” libraries with MPI libraries.
>> $ otool -L libmetis.dylib
>>  /usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
>> current version 14.7.0)
>> $ otool -L libfftw3.dylib
>>  /usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
>> current version 14.7.0)
> 
> this will occur if one uses mpi compilers to build PETSc.

Why, though?
If MPICXX_SHOW != “Unavailable”, is it mandatory to force CXX=MPICXX in Metis 
CMake?
Wouldn’t it be possible to just extract the compiler binary name and use that 
as CXX?
I understand you don’t want to either overcomplicate things or fix something 
that is not broken — for you — so I’m just making sure that it would be OK if I 
patch this like that locally.
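For what it's worth, the extraction suggested here (grab the compiler binary name out of the wrapper) can be sketched with `mpicc -show`, which MPICH-style wrappers support; this is a best-effort illustration of the idea, not the actual patch:

```python
import shlex
import subprocess

def underlying_compiler(wrapper="mpicc"):
    """Best-effort: ask the MPI wrapper what it invokes (-show is
    supported by MPICH-style and Open MPI wrappers) and return the
    first token, i.e. the real compiler binary name."""
    try:
        out = subprocess.run([wrapper, "-show"], capture_output=True,
                             text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    tokens = shlex.split(out.strip().splitlines()[0])
    return tokens[0] if tokens else None

# Offline check of the parsing step on a typical MPICH -show line:
sample = "gcc -I/usr/include/mpich -L/usr/lib/mpich -lmpi"
assert shlex.split(sample)[0] == "gcc"
```

Whether the extracted binary is safe to reuse is exactly the compatibility question raised below (libc++ mismatches, flags the wrapper injects), so this can only be a heuristic.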

>> Is there any way to avoid this, by using a “sequential” compiler and/or 
>> linker?
> 
> Yes - you can build these (sequential) packages/petsc with --with-mpi=0 [and 
> without mpi compilers]
> 
>> I’m asking because we use PETSc libraries to compile both parallel and 
>> sequential wrappers.
>> Our Metis wrapper is marked as a sequential one, but since you are linking 
>> libmetis with MPI, this is problematic for some configurations.
> 
> Not sure what you mean by 'wrappers' here - esp 'Metis wrapper'. It's
> just a library.

It’s just another dynamic library compiled on top of libmetis that is then 
dynamically loaded by a DSL, which may or may not be launched with MPIRUN.

> If you are using petsc build tools to install these packages for a
> different use [other than the petsc usage specified by configure] -
> use different petsc builds as indicated above for different packages -
> as needed.

Having to configure + build PETSc with both real and complex numbers is already 
long enough.
That would mean a 3rd build, but why not.
Are there some guarantees that CXX with --with-mpi=0 will be the same as the 
underlying compiler of MPICXX? (I’m thinking of incompatible libc++ that would 
make it impossible to link in the same library Metis and the later one compiled 
PETSc with --with-mpi=1)

Thanks,
Pierre

> BTW: Current petsc configure/builder builds only parallel fftw. [it does not 
> support building sequential fftw. But I guess this could be added]
> 
> Satish



[petsc-dev] Sequential external packages and MPI

2019-08-21 Thread Pierre Jolivet via petsc-dev
Hello,
PETSc is linking “sequential” libraries with MPI libraries.
$ otool -L libmetis.dylib
/usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
current version 14.7.0)
$ otool -L libfftw3.dylib
/usr/local/opt/mpich/lib/libmpi.12.dylib (compatibility version 14.0.0, 
current version 14.7.0)
Is there any way to avoid this, by using a “sequential” compiler and/or linker?
I’m asking because we use PETSc libraries to compile both parallel and 
sequential wrappers.
Our Metis wrapper is marked as a sequential one, but since you are linking 
libmetis with MPI, this is problematic for some configurations.

Thanks,
Pierre

Re: [petsc-dev] ISLocalToGlobalMappingGetInfo

2019-08-09 Thread Pierre Jolivet via petsc-dev
Never mind, the first component of the arrays is too frequently used in PCBDDC.
So, I guess, the documentation should be updated by someone who knows what is 
stored there.

Also, it’s not clear to me what would happen with a domain decomposition with 
only non-connected subdomains.
If there is more than a single subdomain, is nproc = 1, or 0 like when there is 
a single subdomain? Cf. 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1012
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1012>.

Thanks anyway,
Pierre

> On 9 Aug 2019, at 9:35 AM, Pierre Jolivet via petsc-dev 
>  wrote:
> 
> Actually, I don’t think it is a bug, but rather an undocumented feature.
> Someone clearly knew what they were doing when writing this: 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1379
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1379>
> Now, the question becomes: is this really needed? Can’t the values associated 
> to "rank" be stripped from the arrays from the get go?
> I’m guessing this also explains why some loops start at 1 and not 0, e.g., 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1583
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1583>
> 
> I’ll try to take a crack at it, anyway, now I know that the information I’m 
> looking for is basically all the arrays shifted by 1 and nproc = nproc - 1.
> 
> Thanks,
> Pierre
> 
>> On 9 Aug 2019, at 8:52 AM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>> 
>> 
>>  The documents and the code indicate that nproc should be 1 in this case. So 
>> there is a bug; if you are unable to determine the cause I can try to look 
>> at it, but I have tons of things to do.
>> 
>>  Barry
>> 
>> 
>>> On Aug 9, 2019, at 12:37 AM, Pierre Jolivet <pierre.joli...@enseeiht.fr> wrote:
>>> 
>>> Again, with only 2 processes, where nproc is set to 2, I get the following 
>>> numprocs and indices arrays on rank 0:
>>> 0   110 111 112 113 114 115 116 117 … 
>>> 1   110 111 112 113 114 115 116 117 …
>>> Are these the expected values? If so, what is the true definition of nproc, 
>>> please?
>>> 
>>> Thanks,
>>> Pierre
>>> 
>>>> On 9 Aug 2019, at 3:45 AM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>>>> 
>>>> 
>>>> There could be a bug. Perhaps check the entries for that "extra" 
>>>> connection, are they all actual meaningful connections. The code that 
>>>> fills up these data structures is somewhat involved.
>>>> 
>>>> Barry
>>>> 
>>>> 
>>>>> On Aug 8, 2019, at 9:39 AM, Pierre Jolivet via petsc-dev 
>>>>> <petsc-dev@mcs.anl.gov> wrote:
>>>>> 
>>>>> Hello,
>>>>> When I use ISLocalToGlobalMappingGetInfo, e.g., in 
>>>>> src/ksp/ksp/examples/tutorials/ex71.c ISLocalToGlobalMappingGetInfo(map, 
>>>>> &nproc, &procs, &numprocs, &indices), I get the following value for nproc:
>>>>> - with 1 process: 0
>>>>> - with 2 processes: 2
>>>>> In the source code, it says that nproc is the “number of processors that 
>>>>> are connected to this one”.
>>>>> How come this value is equal to the size of the global communicator when 
>>>>> using 2 processes?
>>>>> According to the comment in the source code, shouldn’t the returned value 
>>>>> be 1?
>>>>> 
>>>>> Thanks,
>>>>> Pierre
>>>> 
>>> 
>> 
> 



Re: [petsc-dev] ISLocalToGlobalMappingGetInfo

2019-08-09 Thread Pierre Jolivet via petsc-dev
Actually, I don’t think it is a bug, but rather an undocumented feature.
Someone clearly knew what they were doing when writing this: 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1379
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1379>
Now, the question becomes: is this really needed? Can’t the values associated 
to "rank" be stripped from the arrays from the get go?
I’m guessing this also explains why some loops start at 1 and not 0, e.g., 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1583
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/vec/is/utils/isltog.c.html#line1583>

I’ll try to take a crack at it, anyway, now I know that the information I’m 
looking for is basically all the arrays shifted by 1 and nproc = nproc - 1.

Thanks,
Pierre
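The "arrays shifted by 1" reading can be sketched as follows, under the assumption (deduced above, not documented) that the first entry of procs/numprocs/indices describes the calling rank itself:

```python
# Sketch of the undocumented layout discussed above (an assumption, not
# a documented PETSc contract): entry 0 of procs/numprocs/indices
# describes the calling rank itself, so the true neighbor count is
# nproc - 1 and the neighbor data are the arrays shifted by one.
def strip_self(nproc, procs, numprocs, indices):
    return nproc - 1, procs[1:], numprocs[1:], indices[1:]

# Rank 0 of a 2-process run, mirroring the output reported above:
nproc = 2
procs = [0, 1]
numprocs = [8, 8]
indices = [[110, 111, 112, 113, 114, 115, 116, 117]] * 2

n, p, m, idx = strip_self(nproc, procs, numprocs, indices)
assert n == 1 and p == [1]          # a single true neighbor: rank 1
assert idx == [[110, 111, 112, 113, 114, 115, 116, 117]]
```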

> On 9 Aug 2019, at 8:52 AM, Smith, Barry F.  wrote:
> 
> 
>  The documents and the code indicate that nproc should be 1 in this case. So 
> there is a bug; if you are unable to determine the cause I can try to look at 
> it, but I have tons of things to do.
> 
>  Barry
> 
> 
>> On Aug 9, 2019, at 12:37 AM, Pierre Jolivet  
>> wrote:
>> 
>> Again, with only 2 processes, where nproc is set to 2, I get the following 
>> numprocs and indices arrays on rank 0:
>> 0110 111 112 113 114 115 116 117 … 
>> 1110 111 112 113 114 115 116 117 …
>> Are these the expected values? If so, what is the true definition of nproc, 
>> please?
>> 
>> Thanks,
>> Pierre
>> 
>>> On 9 Aug 2019, at 3:45 AM, Smith, Barry F.  wrote:
>>> 
>>> 
>>> There could be a bug. Perhaps check the entries for that "extra" 
>>> connection, are they all actual meaningful connections. The code that fills 
>>> up these data structures is somewhat involved.
>>> 
>>> Barry
>>> 
>>> 
>>>> On Aug 8, 2019, at 9:39 AM, Pierre Jolivet via petsc-dev 
>>>>  wrote:
>>>> 
>>>> Hello,
>>>> When I use ISLocalToGlobalMappingGetInfo, e.g., in 
>>>> src/ksp/ksp/examples/tutorials/ex71.c ISLocalToGlobalMappingGetInfo(map, 
>>>> &nproc, &procs, &numprocs, &indices), I get the following value for nproc:
>>>> - with 1 process: 0
>>>> - with 2 processes: 2
>>>> In the source code, it says that nproc is the “number of processors that 
>>>> are connected to this one”.
>>>> How come this value is equal to the size of the global communicator when 
>>>> using 2 processes?
>>>> According to the comment in the source code, shouldn’t the returned value 
>>>> be 1?
>>>> 
>>>> Thanks,
>>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] ISLocalToGlobalMappingGetInfo

2019-08-08 Thread Pierre Jolivet via petsc-dev
Again, with only 2 processes, where nproc is set to 2, I get the following 
numprocs and indices arrays on rank 0:
0   110 111 112 113 114 115 116 117 … 
1   110 111 112 113 114 115 116 117 …
Are these the expected values? If so, what is the true definition of nproc, 
please?

Thanks,
Pierre

> On 9 Aug 2019, at 3:45 AM, Smith, Barry F.  wrote:
> 
> 
>  There could be a bug. Perhaps check the entries for that "extra" connection, 
> are they all actual meaningful connections. The code that fills up these data 
> structures is somewhat involved.
> 
>   Barry
> 
> 
>> On Aug 8, 2019, at 9:39 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> When I use ISLocalToGlobalMappingGetInfo, e.g., in 
>> src/ksp/ksp/examples/tutorials/ex71.c ISLocalToGlobalMappingGetInfo(map, 
>> &nproc, &procs, &numprocs, &indices), I get the following value for nproc:
>> - with 1 process: 0
>> - with 2 processes: 2
>> In the source code, it says that nproc is the “number of processors that are 
>> connected to this one”.
>> How come this value is equal to the size of the global communicator when 
>> using 2 processes?
>> According to the comment in the source code, shouldn’t the returned value be 
>> 1?
>> 
>> Thanks,
>> Pierre
> 



[petsc-dev] ISLocalToGlobalMappingGetInfo

2019-08-08 Thread Pierre Jolivet via petsc-dev
Hello,
When I use ISLocalToGlobalMappingGetInfo, e.g., in 
src/ksp/ksp/examples/tutorials/ex71.c ISLocalToGlobalMappingGetInfo(map, 
&nproc, &procs, &numprocs, &indices), I get the following value for nproc:
- with 1 process: 0
- with 2 processes: 2
In the source code, it says that nproc is the “number of processors that are 
connected to this one”.
How come this value is equal to the size of the global communicator when using 
2 processes?
According to the comment in the source code, shouldn’t the returned value be 1?

Thanks,
Pierre

Re: [petsc-dev] SeqSBAIJ v. MPISBAIJ

2019-08-02 Thread Pierre Jolivet via petsc-dev
I don’t know if this is related, but the following example triggers a
[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: New nonzero at (0,1) caused a malloc
While everything works OK if I switch from MPISBAIJ to MPIBAIJ.

Thanks,
Pierre

> On 2 Aug 2019, at 1:49 AM, Smith, Barry F.  wrote:
> 
>  Yes it is a bug, working on it now.
> 
>> On Aug 1, 2019, at 9:13 AM, Pierre Jolivet via petsc-dev  wrote:
>> 
>> Hello,
>> The attached example is a little confusing for me.
>> How come I don’t get the same matrix out-of-the-box?
>> For me, the “correct” matrix is the SeqSBAIJ, how can I get
>> MatMPISBAIJSetPreallocationCSR to assemble the same matrix?
>> Do I have to resort to first assembling SeqSBAIJ matrices and then calling
>> MatCreateMPIMatConcatenateSeqMat if I don’t want to play around with my
>> input arrays?
>> 
>> Thanks,
>> Pierre

ex232.c
Description: Binary data


[petsc-dev] SeqSBAIJ v. MPISBAIJ

2019-08-01 Thread Pierre Jolivet via petsc-dev
Hello,
The attached example is a little confusing for me.
How come I don’t get the same matrix out-of-the-box?
For me, the “correct” matrix is the SeqSBAIJ, how can I get 
MatMPISBAIJSetPreallocationCSR to assemble the same matrix?
Do I have to resort to first assembling SeqSBAIJ matrices and then calling 
MatCreateMPIMatConcatenateSeqMat if I don’t want to play around with my input 
arrays?

Thanks,
Pierre
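For background on why the SBAIJ variants behave differently: symmetric block formats store only the block upper triangular part, so a CSR pattern describing the full symmetric matrix has to be filtered down first. A rough stdlib-only illustration of that filtering (a sketch of the storage convention, not the PETSc preallocation code itself):

```python
def csr_upper_triangle(ia, ja):
    """Keep only entries with column >= row in a 0-based CSR pattern,
    mimicking the upper triangular part a symmetric (SBAIJ-like)
    format actually stores."""
    new_ia, new_ja = [0], []
    for i in range(len(ia) - 1):
        for j in ja[ia[i]:ia[i + 1]]:
            if j >= i:
                new_ja.append(j)
        new_ia.append(len(new_ja))
    return new_ia, new_ja

# Full symmetric pattern of a 3x3 tridiagonal matrix...
ia = [0, 2, 5, 7]
ja = [0, 1, 0, 1, 2, 1, 2]
ui, uj = csr_upper_triangle(ia, ja)
# ...and the upper part a symmetric format would keep:
assert ui == [0, 2, 4, 5]
assert uj == [0, 1, 1, 2, 2]
```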



foo.c
Description: Binary data


Re: [petsc-dev] PCREDUNDANT

2019-07-28 Thread Pierre Jolivet via petsc-dev


> On 28 Jul 2019, at 3:02 PM, Mark Adams  wrote:
> 
> 
> On Sun, Jul 28, 2019 at 2:54 AM Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
> Hello,
> I’m facing multiple issues with PCREDUNDANT and MATMPISBAIJ:
> 1) 
> https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/sbaij/mpi/mpisbaij.c.html#line3354
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/sbaij/mpi/mpisbaij.c.html#line3354>
>  shouldn’t this be sum != N? I’m running an example where it says that sum 
> (4) != Nbs (60), with a bs=15.
> 
> Clearly a bug.

OK, fixed here: 
https://bitbucket.org/petsc/petsc/pull-requests/1915/fixes/diff#Lsrc/mat/impls/sbaij/mpi/mpisbaij.cT3324
 
<https://bitbucket.org/petsc/petsc/pull-requests/1915/fixes/diff#Lsrc/mat/impls/sbaij/mpi/mpisbaij.cT3324>
 (once again, this isn’t tested in PETSc for a bs>1, I think, but now my 
preconditioner is working OK, for a bs>1).

>  
> 2) when I’m using MATMPIBAIJ, I can do stuff like: -prefix_mat_type baij 
> -prefix_pc_type redundant -prefix_redundant_pc_type ilu, and in the KSPView, 
> I have "package used to perform factorization: petsc”, so the underlying 
> MatType is indeed MATSEQBAIJ. 
> However, with MATMPISBAIJ, if I do: -prefix_mat_type sbaij -prefix_pc_type 
> redundant, first, it looks like you are hardwiring a PCLU (MatGetFactor() 
> line 4440 in src/mat/interface/matrix.c
> 
> Using LU as a default for symmetric matrices does seem wrong.

FWIW, I made a similar remark to Jose because the same matrices are being used 
by SLEPc for a GEVP with a SINVERT ST in a Krylov—Schur EPS (tentative fix 
here: https://bitbucket.org/slepc/slepc/branch/jose/sinvert-cholesky#diff 
<https://bitbucket.org/slepc/slepc/branch/jose/sinvert-cholesky#diff>).

> Could not locate a solver package.), then, if I append 
> -prefix_redundant_pc_type cholesky, I end up with an error related to MUMPS: 
> MatGetFactor_sbaij_mumps() line 2625 in src/mat/impls/aij/mpi/mumps/mumps.c 
> Cannot use PETSc SBAIJ matrices with block size > 1 with MUMPS Cholesky, use 
> AIJ matrix instead. Why isn’t this call dispatched to PETSc Cholesky for 
> SeqSBAIJ matrices?
> 
> 
> Generally, we don't like to switch parameters under the covers like this. We 
> would rather you get your inputs right so you know what is going on.

I’m not sure how to interpret your comment, but anyway, I found out that for 
SBAIJ matrices, the type of the output matrix in 
MatCreateMPIMatConcatenateSeqMat (used by PCREDUNDANT) was set to MPISBAIJ, 
whereas for BAIJ, it was BAIJ (and thus SeqBAIJ when used within PCREDUNDANT 
with “full” redundancy over PETSC_COMM_WORLD).
Here is the fix: 
https://bitbucket.org/petsc/petsc/pull-requests/1915/fixes/diff#Lsrc/mat/impls/sbaij/mpi/mpisbaij.cT3345
 
<https://bitbucket.org/petsc/petsc/pull-requests/1915/fixes/diff#Lsrc/mat/impls/sbaij/mpi/mpisbaij.cT3345>
 (inspired by 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/mpi/mpibaij.c.html#line3957
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/mpi/mpibaij.c.html#line3957>)

Now everything seems OK!
Thanks,
Pierre

>  
> Thanks,
> Pierre
> 
> 1) I don’t think this is tested right now, at least not in 
> src/ksp/ksp/examples/tutorials
> 2) reproducer: src/ksp/ksp/examples/tutorials/ex2.c
> $ mpirun -np 2 ./ex2 -pc_type redundant -mat_type sbaij
> // error because trying to do LU with a symmetric matrix
> $ mpirun -np 2 ./ex2 -pc_type redundant -mat_type sbaij -redundant_pc_type 
> cholesky -ksp_view
> // you’ll see that MUMPS is being used, but since bs=1, it’s working, but it 
> won’t for the general case
> //  the MatType is mpisbaij with "1 MPI processes" whereas 
> with baij, it’s seqbaij
> 



[petsc-dev] PCREDUNDANT

2019-07-28 Thread Pierre Jolivet via petsc-dev
Hello,
I’m facing multiple issues with PCREDUNDANT and MATMPISBAIJ:
1) 
https://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/sbaij/mpi/mpisbaij.c.html#line3354
 

 shouldn’t this be sum != N? I’m running an example where it says that sum (4) 
!= Nbs (60), with a bs=15.
2) when I’m using MATMPIBAIJ, I can do stuff like: -prefix_mat_type baij 
-prefix_pc_type redundant -prefix_redundant_pc_type ilu, and in the KSPView, I 
have "package used to perform factorization: petsc”, so the underlying MatType 
is indeed MATSEQBAIJ.
However, with MATMPISBAIJ, if I do: -prefix_mat_type sbaij -prefix_pc_type 
redundant, first, it looks like you are hardwiring a PCLU (MatGetFactor() line 
4440 in src/mat/interface/matrix.c Could not locate a solver package.), then, 
if I append -prefix_redundant_pc_type cholesky, I end up with an error related 
to MUMPS: MatGetFactor_sbaij_mumps() line 2625 in 
src/mat/impls/aij/mpi/mumps/mumps.c Cannot use PETSc SBAIJ matrices with block 
size > 1 with MUMPS Cholesky, use AIJ matrix instead. Why isn’t this call 
dispatched to PETSc Cholesky for SeqSBAIJ matrices?

Thanks,
Pierre

1) I don’t think this is tested right now, at least not in 
src/ksp/ksp/examples/tutorials
2) reproducer: src/ksp/ksp/examples/tutorials/ex2.c
$ mpirun -np 2 ./ex2 -pc_type redundant -mat_type sbaij
// error because trying to do LU with a symmetric matrix
$ mpirun -np 2 ./ex2 -pc_type redundant -mat_type sbaij -redundant_pc_type 
cholesky -ksp_view
// you’ll see that MUMPS is being used, but since bs=1, it’s working, but it 
won’t for the general case
//  the MatType is mpisbaij with "1 MPI processes" whereas with 
baij, it’s seqbaij
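Regarding point (1), the quantity at stake is the block dimension: with block size bs, a matrix with N point rows has Nbs = N / bs block rows, so a consistency check that mixes up N and Nbs only misfires when bs > 1, which matches the report above (Nbs = 60 with bs = 15). A small sketch of the bookkeeping (hypothetical helper, not PETSc code):

```python
# Block-size bookkeeping: with block size bs, a matrix with N point
# rows has Nbs = N / bs block rows. A check comparing a sum of *block*
# row counts against N instead of Nbs (or vice versa) is off by a
# factor of bs whenever bs > 1, and silently correct when bs == 1 --
# which is why such a bug can hide in untested bs > 1 paths.
def block_rows(N, bs):
    assert N % bs == 0, "N must be divisible by the block size"
    return N // bs

assert block_rows(900, 15) == 60  # Nbs = 60 with bs = 15 implies N = 900
assert block_rows(60, 1) == 60    # with bs == 1, N and Nbs coincide
```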



Re: [petsc-dev] Fwd: [mumps-users] MUMPS MPI (5.2.1) does not compile

2019-07-22 Thread Pierre Jolivet via petsc-dev
Ah, shoot, I misunderstood the mail from the MUMPS ML, which states that there 
is an FTBFS, but fixes something else…
I just tried myself and it seems OK as well.
The only difference I see with my user's ./configure and the MUMPS ML vs. 
Satish and me is that we are using MPICH (IMPI in my case) and they are using 
OpenMPI.

I’ll ask for the user to send its configure.log to petsc-maint.

Thanks and sorry for the trouble,
Pierre

> On 22 Jul 2019, at 9:06 PM, Balay, Satish  wrote:
> 
> And the error below is with cmumps_save_restore.F - but I don't see fixes to 
> this file in the patches.
> 
> So I tried a build with intel-17 - and that worked fine for me.
> 
> balay@es^/sandbox/balay/petsc(master=) $ ifort --version
> ifort (IFORT) 17.0.2 20170213
> Copyright (C) 1985-2017 Intel Corporation.  All rights reserved.
> balay@es^/sandbox/balay/petsc(master=) $ ./configure CC=icc FC=ifort CXX=icpc 
> --download-mpich --download-scalapack --download-mumps 
> --with-blaslapack-dir=$MKL_HOME
> 
> Attaching the corresponding configure.log
> 
> Satish
> 
> On Mon, 22 Jul 2019, Smith, Barry F. via petsc-dev wrote:
> 
>> 
>>  Pierre,
>> 
>>   I see four patches here. 
>> 
>> 1) for examples
>> 2) for shared library support (but only for Linux) (two files for MUMPS 
>> and PORD) We can't use this unless also supports Mac etc
>> 3) MUMPS-Makefile.par.inc  What is this for?
>> 
>>  How does any of them resolve the problem with ifort failing to compile?
>> 
>>  Barry
>> 
>> 
>>> On Jul 22, 2019, at 1:28 PM, Pierre Jolivet via petsc-dev 
>>>  wrote:
>>> 
>>> We are having trouble compiling git.mumps with similar errors (using ifort 
>>> 2017, yikes).
>>> Could the forwarded patches be applied to petsc/pkg-mumps, please?
>>> 
>>> Thanks in advance (and sorry if this should be handled @petsc-maint),
>>> Pierre
>>> 
>>>> Begin forwarded message:
>>>> 
>>>> From: "Antonio Trande" (anto.tra...@gmail.com via mumps-users Mailing 
>>>> List) 
>>>> Subject: [mumps-users] MUMPS MPI (5.2.1) does not compile
>>>> Date: 29 June 2019 at 5:56:28 PM CEST
>>>> To: "mumps-us...@listes.ens-lyon.fr" 
>>>> Reply-To: mumps-us...@listes.ens-lyon.fr
>>>> 
>>>> 
>>>> Hi all.
>>>> 
>>>> MUMPS-5.2.1 is not compiling on Fedora 30 because of this error:
>>>> 
>>>> /usr/lib64/openmpi/bin/mpif77 -O2 -g -pipe -Wall -Werror=format-security
>>>> -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
>>>> -fstack-protector-strong -grecord-gcc-switches
>>>> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
>>>> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
>>>> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
>>>> -Dscotch -Dmetis -Dptscotch -pthread -I/usr/lib64/gfortran/modules
>>>> -Dintel_ -Wno-unused-dummy-argument -Wno-maybe-uninitialized
>>>> -I/usr/include/openmpi-x86_64 -I/usr/include/openblas
>>>> -I/usr/include/openmpi-x86_64 -Dpord -I. -I../include -fPIC -c
>>>> cmumps_save_restore.F -o cmumps_save_restore.o
>>>> cmumps_save_restore.F:8105:50:
>>>> 
>>>> 8105 |IF(associated(id%MPITOOMP_PROCS_MAP)) THEN
>>>> |  1
>>>> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc'
>>>> structure
>>>> cmumps_save_restore.F:8108:49:
>>>> 
>>>> 8108 |  & size(id%MPITOOMP_PROCS_MAP,1)*SIZE_INT
>>>> | 1
>>>> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc'
>>>> structure
>>>> cmumps_save_restore.F:8113:72:
>>>> 
>>>> 8113 | elseif(trim(mode).EQ."save") then
>>>> |
>>>>  1
>>>> Error: Unexpected ELSE IF statement at (1)
>>>> cmumps_save_restore.F:8114:50:
>>>> 
>>>> 8114 |IF(associated(id%MPITOOMP_PROCS_MAP)) THEN
>>>> |  1
>>>> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc'
>>>> structure
>>>> cmumps_save_restore.F:8115:67:
>>>> 
>>>> 8115 |   write(unit,iostat=err)
>>>> size(id%MPITOOMP_PROCS_MAP,1)
>>&

[petsc-dev] Fwd: [mumps-users] MUMPS MPI (5.2.1) does not compile

2019-07-22 Thread Pierre Jolivet via petsc-dev
We are having trouble compiling git.mumps with similar errors (using ifort 
2017, yikes).
Could the forwarded patches be applied to petsc/pkg-mumps, please?

Thanks in advance (and sorry if this should be handled @petsc-maint),
Pierre

> Begin forwarded message:
> 
> From: "Antonio Trande" (anto.tra...@gmail.com via mumps-users Mailing List) 
> Subject: [mumps-users] MUMPS MPI (5.2.1) does not compile
> Date: 29 June 2019 at 5:56:28 PM CEST
> To: "mumps-us...@listes.ens-lyon.fr" 
> Reply-To: mumps-us...@listes.ens-lyon.fr
> 
> Hi all.
> 
> MUMPS-5.2.1 is not compiling on Fedora 30 because of this error:
> 
> /usr/lib64/openmpi/bin/mpif77 -O2 -g -pipe -Wall -Werror=format-security
> -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
> -fstack-protector-strong -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
> -Dscotch -Dmetis -Dptscotch -pthread -I/usr/lib64/gfortran/modules
> -Dintel_ -Wno-unused-dummy-argument -Wno-maybe-uninitialized
> -I/usr/include/openmpi-x86_64 -I/usr/include/openblas
> -I/usr/include/openmpi-x86_64 -Dpord -I. -I../include -fPIC -c
> cmumps_save_restore.F -o cmumps_save_restore.o
> cmumps_save_restore.F:8105:50:
>  8105 |    IF(associated(id%MPITOOMP_PROCS_MAP)) THEN
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8108:49:
>  8108 |  & size(id%MPITOOMP_PROCS_MAP,1)*SIZE_INT
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8113:72:
>  8113 | elseif(trim(mode).EQ."save") then
> Error: Unexpected ELSE IF statement at (1)
> cmumps_save_restore.F:8114:50:
>  8114 |    IF(associated(id%MPITOOMP_PROCS_MAP)) THEN
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8115:67:
>  8115 |   write(unit,iostat=err) size(id%MPITOOMP_PROCS_MAP,1)
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8124:62:
>  8124 |   write(unit,iostat=err) id%MPITOOMP_PROCS_MAP
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8125:72:
>  8125 |    ELSE
> Error: Unexpected ELSE statement at (1)
> cmumps_save_restore.F:8136:18:
>  8136 |    ENDIF
> Error: Expecting END SELECT statement at (1)
> cmumps_save_restore.F:8145:72:
>  8145 | elseif(trim(mode).EQ."restore") then
> Error: Unexpected ELSE IF statement at (1)
> cmumps_save_restore.F:8146:44:
>  8146 |    nullify(id%MPITOOMP_PROCS_MAP)
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8163:48:
>  8163 |   allocate(id%MPITOOMP_PROCS_MAP(size_array1),
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8171:61:
>  8171 |   read(unit,iostat=err) id%MPITOOMP_PROCS_MAP
> Error: 'mpitoomp_procs_map' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8184:15:
>  8184 | endif
> Error: Expecting END SELECT statement at (1)
> cmumps_save_restore.F:8188:64:
>  8188 |    SIZE_VARIABLES(i1)=SIZE_INT*size(id%METIS_OPTIONS,1)
> Error: 'metis_options' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8190:54:
>  8190 |    write(unit,iostat=err) id%METIS_OPTIONS
> Error: 'metis_options' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8200:64:
>  8200 |    SIZE_VARIABLES(i1)=SIZE_INT*size(id%METIS_OPTIONS,1)
> Error: 'metis_options' at (1) is not a member of the 'cmumps_struc' structure
> cmumps_save_restore.F:8201:53:
>  8201 |    read(unit,iostat=err) id%METIS_OPTIONS
> Error: 'metis_options' at (1) is not a member of the 'cmumps_struc' structure
> make[3]: *** [Makefile:396: cmumps_save_restore.o] Error 1
> 
> I'm patching MUMPS for compiling shared libraries for Fedora. I'm
> attaching the patch used.
> 
> -- 
> ---
> Antonio Trande
> Fedora 
Re: [petsc-dev] circular dependencies SLEPc

2019-07-08 Thread Pierre Jolivet via petsc-dev
Hello,
I’m not sure what’s the status about this issue.
I’m trying to register a PC that is using EPSSolve during PCSetUp, but it’s 
failing because of undefined references to EPSSomething when linking libpetsc.so.
How could I fix this, apart from removing my PC from PETSc and compiling it 
as a separate library (the idea would be to merge my PC into PETSc, so that 
would be rather counterproductive in the end)?

Thanks,
Pierre

> On 27 Jun 2019, at 1:04 AM, Matthew Knepley via petsc-dev 
>  wrote:
> 
> On Wed, Jun 26, 2019 at 4:11 PM Jed Brown  wrote:
> Matthew Knepley <knep...@gmail.com> writes:
> 
> > On Wed, Jun 26, 2019 at 3:42 PM Jed Brown via petsc-dev 
> > <petsc-dev@mcs.anl.gov> wrote:
> >
> >> "Smith, Barry F." <bsm...@mcs.anl.gov> writes:
> >>
> >> >> On Jun 26, 2019, at 1:53 PM, Jed Brown  wrote:
> >> >>
> >> >> "Smith, Barry F." <bsm...@mcs.anl.gov> writes:
> >> >>
> >> >>>  It is still a PC, it may as part of its computation solve an
> >> eigenvalue problem but its use is as a PC, hence does not belong in SLEPc.
> >> >>
> >> >> Fine; it does not belong in src/ksp/pc/.
> >> >
> >> >   Why not? From the code mangement point of view that is the perfect
> >> place for it. It just depends on an external package in the same way that
> >> PCHYPRE depends on an external library. Having it off in some other
> >> directory src/plugins would serve no purpose. Of course making sure it
> >> doesn't get compiled into -lpetsc may require a tweak to the make
> >> infrastructure. Make could, for example, skip plugin subdirectories for
> >> example.
> >>
> >> I think it's confusing to have code that is part of libpetsc.so
> >> alongside code that is not (e.g., won't be accessible to users unless
> >> they also build SLEPc and link the plugin).
> >>
> >> >   BTW: Matt's perverse use of SNES from DMPLEx could also be fixed to
> >> >   work this way instead of the disgusting PetscObject casting used to
> >> >   cancel the SNES object.
> >>
> >> That code could be part of libpetscsnes.so.
> >>
> >
> > What? I thought I moved everything to SNES a long time ago.
> 
> I thought there was a place where SNES was cast to PetscObject.  There
> is DMAddField, but it's different.
> 
> I can't find it right now. Yeah, AddField is waiting for a true 
> Discretization object.
> I think it's spelled G-O-D-O-T.
>  
> PetscViewerVTKAddField is another example of code that uses PetscObject
> to avoid depending on a higher level type like Vec.
> 
> Viewer is a weird mixin with everything.
> 
>Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] alternatives to cygwin on Windows with PETSc

2019-07-02 Thread Pierre Jolivet via petsc-dev


> On 2 Jul 2019, at 12:26 AM, Smith, Barry F. via petsc-dev 
>  wrote:
> 
> 
> https://bitbucket.org/petsc/petsc/pull-requests/1836/installationhtml-edited-online-with/diff
> 
> I try to provide a better guide for Windows possibilities without Windows 
> compilers. (Could probably do with some light editing). Maybe more options?

You mentioned previously “regular” MS users, yet there is no occurrence of MS 
MPI in installation.html.
Isn’t MS MPI the canonical MPI implementation for MS users?
(Sorry if there is some reason I’m not aware of so that it’s not mentioned in 
the instructions)

Thanks,
Pierre

>  Satish,
> 
> At the bottom of the Windows installation instructions on installation.html 
> you should list your bullets below to explain the difficulties of using 
> Windows compilers in general and perhaps inspire someone to add code for one 
> of the other systems.
> 
> 
>> On Jul 1, 2019, at 4:43 PM, Balay, Satish  wrote:
>> 
>> This discussion comes up each time a user has issues with cygwin.
>> 
>> For any alternate system, we would have to redo win32fe functionality for 
>> that system.
>> 
>>  • Marshal gcc type compiler options to Cl
>>  • Convert paths in some of these options from this system ( for ex 
>> cygwin paths) to Windows paths.
>>  • Have python that works with system path notation.
>>  • Have the equivalent of a Windows process spawning a Cygwin process 
>> spawning a Windows process. WSL1 lacked this. Don't know about WSL2.
>> 
>> Current issue with cygwin was some bash config issue. Even if we manage to 
>> port build tools to wsl2 or alternative system,  such sub-tool issues can 
>> still come up in the new system.
>> 
>> 
>> Satish
>> 
>> From: Smith, Barry F. via petsc-dev
>> Sent: Monday, July 1, 2019 2:17 PM
>> To: Mills, Richard Tran
>> Cc: petsc-dev@mcs.anl.gov
>> Subject: Re: [petsc-dev] alternatives to cygwin on Windows with PETSc
>> 
>> 
>>   Richard,
>> 
>> Thanks. The important thing is to be able to build PETSc for Microsoft 
>> and Intel Windows compilers (so that users can use the libraries from the 
>> Microsoft development system as "regular" Windows users).
>> 
>>   Barry
>> 
>> 
>>> On Jul 1, 2019, at 3:59 PM, Mills, Richard Tran via petsc-dev 
>>>  wrote:
>>> 
>>> I played around with WSL1 quite some time ago and it seemed pretty 
>>> promising. I have not tried WSL2, but I'm guessing that it may be the best 
>>> option for building PETSc on a Windows 10 machine. I've got a Windows 10 
>>> machine (it basically just runs my television/media center) and I'll give 
>>> it a try on there.
>>> 
>>> --Richard
>>> 
>>> On 6/29/19 8:11 PM, Jed Brown via petsc-dev wrote:
 "Smith, Barry F. via petsc-dev" 
 writes:
 
 
>  Does it make sense to recommend/suggest  git bash for Windows as an 
> alternative/in addition to Cygwin?
> 
 I would love to be able to recommend git-bash and/or WSL2 (which now
 includes a full Linux kernel).  I don't have a system on which to test,
 but it should be possible to make it work (if it doesn't already).
 
>>> 
> 



[petsc-dev] TAOIPM v. IPOPT

2019-06-15 Thread Pierre Jolivet via petsc-dev
Hello,
I’m trying to solve the following constrained problem:
min   x0*x3*(x0+x1+x2) + x2
s.t.  x0*x1*x2*x3 >= 25 (a)
      x0**2 + x1**2 + x2**2 + x3**2 = 40
      1 <= x0, x1, x2, x3 <= 5
I modified src/tao/constrained/examples/tutorials/toy.c (you can git diff the 
attached file) to try to solve this problem.
If I remove the inequality constraint (a), by commenting out line 69 and line 
72, both TAOIPM and IPOPT converge to the same solution
1.
5.
3.60555
1.
If I try to impose the inequality (a), TAOIPM does not converge, while I get 
the desired solution from IPOPT (1 4.742999644 3.821149979 1.379408293)
Do you see what I’m doing wrong?

Thanks in advance for your help,
Pierre



toy.c
Description: Binary data


Re: [petsc-dev] Unevenly distributed MatNest and FieldSplit

2019-06-13 Thread Pierre Jolivet via petsc-dev
OK, turns out this had not much to do with PETSc itself.
I remember having told you (either here or on Bitbucket, I can’t remember the 
thread) that some LU packages were behaving weirdly with empty local matrices.
It was said that something should be done about this, but apparently it hasn’t.

So, just changing all my 
-fieldsplit_fluid_interior_fieldsplit_[x,y,z]velocity_sub_pc_factor_mat_solver_type
 mumps
to
-fieldsplit_fluid_interior_fieldsplit_[x,y,z]velocity_sub_pc_factor_mat_solver_type
 [petsc|superlu]
fixed the SNES behaviour.

Thank you,
Pierre

> On 12 Jun 2019, at 12:16 PM, Smith, Barry F.  wrote:
> 
> 
>  mpiexec -n <np> ./myprogram <your options> -log_trace > afile
> 
>  grep "\[0\]" afile > process0
>  grep "\[1\]" afile > process1 
> 
>  paste process0 process1 | more
> 
>  For the two processes  pick ones that take the different paths in the code. 
> 
>  Almost for sure something is defined on a sub communicator where the group 
> of processes in that sub communicator are getting "behind" the other 
> processes caught by their own MPI reduction. Hopefully the above will make 
> clear where the two sets of processes branch.
> 
>   Good luck,
> 
>   Barry
> 
> 
>> On Jun 12, 2019, at 1:44 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> We are using a SNES to solve a steady-state FSI problem.
>> The operator is defined as a MatNest with multiple fields.
>> Some submatrices are entirely defined on a subset of processes (but they are 
>> still created on the same communicator as the MatNest).
>> The preconditioner is defined as a FieldSplit.
>> 
>> During the first call to KSPSolve within SNESSolve, I’m getting this (with 
>> debugging turned on):
>> [0]PETSC ERROR: VecGetSubVector() line 1243 in 
>> /Users/jolivet/Documents/repositories/petsc/src/vec/vec/interface/rvector.c 
>> MPI_Allreduce() called in different locations (code lines) on different 
>> processors
>> [2]PETSC ERROR: VecNorm_MPI() line 57 in 
>> /Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c 
>> MPI_Allreduce() called in different locations (code lines) on different 
>> processors
>> [1]PETSC ERROR: VecNorm_MPI() line 57 in 
>> /Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c 
>> MPI_Allreduce() called in different locations (code lines) on different 
>> processors
>> 
>> As you may have guessed, process 0 (resp. 1–2) is where the structure (resp. 
>> fluid) is handled.
>> I’m attaching both stacks. I don’t see what could trigger such an error from 
>> my side, since everything is delegated to PETSc in the SNESSolve.
>> Is there an easy way to debug this?
>> Is there some way to dump _everything_ related to a KSP (Mat + PC + ISes) 
>> for “easier” debugging?
>> 
>> Thank you,
>> Pierre
>> 
>> 
> 



[petsc-dev] Unevenly distributed MatNest and FieldSplit

2019-06-12 Thread Pierre Jolivet via petsc-dev
Hello,
We are using a SNES to solve a steady-state FSI problem.
The operator is defined as a MatNest with multiple fields.
Some submatrices are entirely defined on a subset of processes (but they are 
still created on the same communicator as the MatNest).
The preconditioner is defined as a FieldSplit.

During the first call to KSPSolve within SNESSolve, I’m getting this (with 
debugging turned on):
[0]PETSC ERROR: VecGetSubVector() line 1243 in 
/Users/jolivet/Documents/repositories/petsc/src/vec/vec/interface/rvector.c 
MPI_Allreduce() called in different locations (code lines) on different 
processors
[2]PETSC ERROR: VecNorm_MPI() line 57 in 
/Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c 
MPI_Allreduce() called in different locations (code lines) on different 
processors
[1]PETSC ERROR: VecNorm_MPI() line 57 in 
/Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c 
MPI_Allreduce() called in different locations (code lines) on different 
processors

As you may have guessed, process 0 (resp. 1–2) is where the structure (resp. 
fluid) is handled.
I’m attaching both stacks. I don’t see what could trigger such an error from my 
side, since everything is delegated to PETSc in the SNESSolve.
Is there an easy way to debug this?
Is there some way to dump _everything_ related to a KSP (Mat + PC + ISes) for 
“easier” debugging?

Thank you,
Pierre

frame #5: 0x00011a8d29fd 
libpetsc.3.011.dylib`::PetscAttachDebuggerErrorHandler(comm=1140850689, 
line=57, fun="VecNorm_MPI", 
file="/Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c",
 num=77, p=PETSC_ERROR_INITIAL, mess="MPI_Allreduce() called in different 
locations (code lines) on different processors", ctx=0x) at 
adebug.c:444:10
frame #6: 0x00011a8d4e32 
libpetsc.3.011.dylib`::PetscError(comm=1140850689, line=57, func="VecNorm_MPI", 
file="/Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c",
 n=77, p=PETSC_ERROR_INITIAL, mess="MPI_Allreduce() called in different 
locations (code lines) on different processors") at err.c:355:19
frame #7: 0x00011a7a7c6d 
libpetsc.3.011.dylib`::PetscAllreduceBarrierCheck(comm=-2080374780, ctn=1, 
line=57, func="VecNorm_MPI", 
file="/Users/jolivet/Documents/repositories/petsc/src/vec/vec/impls/mpi/pvec2.c")
 at pbarrier.c:28:31
frame #8: 0x00011aa9af9b 
libpetsc.3.011.dylib`::VecNorm_MPI(xin=0x7fd157aa4060, type=NORM_2, 
z=0x7ffee2afe588) at pvec2.c:57:12
frame #9: 0x00011aabd056 
libpetsc.3.011.dylib`::VecNorm(x=0x7fd157aa4060, type=NORM_2, 
val=0x7ffee2afe588) at rvector.c:221:10
frame #10: 0x00011bc15a10 
libpetsc.3.011.dylib`KSPSolve_PREONLY(ksp=0x7fd156ab9660) at preonly.c:32:14
frame #11: 0x00011bc8f88d 
libpetsc.3.011.dylib`::KSPSolve(ksp=0x7fd156ab9660, b=0x7fd1579ee460, 
x=0x7fd157aa4060) at itfunc.c:764:10
frame #12: 0x00011ba3c965 
libpetsc.3.011.dylib`PCApply_FieldSplit(pc=0x7fd15820ec60, 
x=0x7fd158321a60, y=0x7fd15832c860) at fieldsplit.c:1266:14
frame #13: 0x00011bb2fb19 
libpetsc.3.011.dylib`::PCApply(pc=0x7fd15820ec60, x=0x7fd158321a60, 
y=0x7fd15832c860) at precon.c:444:10
frame #14: 0x00011bc5496b 
libpetsc.3.011.dylib`KSP_PCApply(ksp=0x7fd157afd660, x=0x7fd158321a60, 
y=0x7fd15832c860) at kspimpl.h:281:12
frame #15: 0x00011bc532ff 
libpetsc.3.011.dylib`KSPFGMRESCycle(itcount=0x7ffee2afe9e8, 
ksp=0x7fd157afd660) at fgmres.c:166:12
frame #16: 0x00011bc56099 
libpetsc.3.011.dylib`KSPSolve_FGMRES(ksp=0x7fd157afd660) at fgmres.c:291:10
frame #17: 0x00011bc8f88d 
libpetsc.3.011.dylib`::KSPSolve(ksp=0x7fd157afd660, b=0x7fd157b66260, 
x=0x7fd157bd3260) at itfunc.c:764:10
frame #18: 0x00011be2d916 
libpetsc.3.011.dylib`SNESSolve_NEWTONLS(snes=0x7fd157b6f460) at ls.c:225:12
frame #19: 0x00011bda9830 
libpetsc.3.011.dylib`::SNESSolve(snes=0x7fd157b6f460, b=0x, 
x=0x7fd157b90060) at snes.c:4433:12
frame #5: 0x0001112039fd 
libpetsc.3.011.dylib`::PetscAttachDebuggerErrorHandler(comm=1140850689, 
line=1243, fun="VecGetSubVector", 
file="/Users/jolivet/Documents/repositories/petsc/src/vec/vec/interface/rvector.c",
 num=77, p=PETSC_ERROR_INITIAL, mess="MPI_Allreduce() called in different 
locations (code lines) on different processors", ctx=0x) at 
adebug.c:444:10
frame #6: 0x000111205e32 
libpetsc.3.011.dylib`::PetscError(comm=1140850689, line=1243, 
func="VecGetSubVector", 
file="/Users/jolivet/Documents/repositories/petsc/src/vec/vec/interface/rvector.c",
 n=77, p=PETSC_ERROR_INITIAL, mess="MPI_Allreduce() called in different 
locations (code lines) on different processors") at err.c:355:19
frame #7: 0x0001110d8c6d 
libpetsc.3.011.dylib`::PetscAllreduceBarrierCheck(comm=-2080374777, ctn=1, 
line=1243, func="VecGetSubVector", 

Re: [petsc-dev] SNESSetKSP and SNESDestroy

2019-05-30 Thread Pierre Jolivet via petsc-dev


> On 30 May 2019, at 10:51 PM, Stefano Zampini  
> wrote:
> 
> 
> 
>>> 
>>> 
>> 
>> Yeah… about that hack…
>> Before my first message, I tried something like:
>> 
>>  SNESCreate(&snes);
>>  KSP backup;
>>  SNESGetKSP(snes, &backup);
>>  SNESSetKSP(snes, &backup);
>>  SNESSolve(snes);
>>  SNESSetKSP(snes, &backup);
>>  SNESDestroy(&snes);
>> 
>> And this is bailing out ([1]PETSC ERROR: SNESSetKSP() Wrong type of object: 
>> Parameter # 2), which I find quite ironic since I’m getting the KSP from 
>> SNESGetKSP.
>> 
> 
> Pierre,
> 
> I assume you meant
> 
> SNESGetKSP(snes,);
> SNESSetKSP(snes,ksp);
> SNESSolve(snes);
> SNESSetKSP(snes,backup);
> 
> The issue here is that the getter SNESGetKSP does not increase reference 
> count on backup.
> So, when you do SNESSetKSP(snes,ksp) the object pointed to by backup is 
> actually destroyed.
> This is why you got the error.
> 
> This is the correct code

Oops, didn’t check my inbox before sending the previous message.
This does get the job done in a much cleaner way.

Thanks!
Pierre

> SNESGetKSP(snes,&backup);
> PetscObjectReference((PetscObject)backup); // increase reference count
> SNESSetKSP(snes,ksp); // calls KSPDestroy on backup -> decrease reference 
> count
> SNESSolve(snes);
> SNESSetKSP(snes,backup); // backup still points to a valid object
> KSPDestroy(&backup); // decrease reference count of backup (or destroy if it 
> goes to zero)
> 
> 
>> SNESGetKSP(snes, &backup);
> 
>>  KSP backup;
>>  
>>  SNESSetKSP(snes, &backup);
>> SNESSolve(snes);
> 
>> SNESSetKSP(snes, &backup);
> 
>> As I’m typing this email I just tried something and the following fixes the 
>> hack: just add an extra KSPCreate on backup before the SNESDestroy.
>> 
>> Thanks,
>> Pierre
>> 
>>>>> 
>>>  KSP dummy;
>>>  KSPCreate(...,&dummy);
>>>>> KSP ksp;
>>>>> KSPSetUp(ksp);
>>>>> while(cond) {
>>>>>  SNES snes;
>>>>>  SNESCreate(&snes);
>>>>>  SNESSetKSP(snes, &ksp);
>>>>>  SNESSolve(snes);
>>>  SNESSetKSP(snes,dummy);
>>>>>  SNESDestroy(&snes);
>>>>> }
>>> It should work even if you don't setup dummy. But perhaps it is picky and 
>>> you may need to add a matrix to dummy etc (don't worry it doesn't copy the 
>>> matrix).
>>> 
>>> Barry
>>> 
>>> 
>>> 
>>> 
>>>> On May 30, 2019, at 9:51 AM, Stefano Zampini via petsc-dev 
>>>>  wrote:
>>>> 
>>>> I meant SNESDestroy should call KSPDestroy
>>>> 
>>>> The reason for having the XXXReset methods called by the XXXDestroy is 
>>>> mostly code reuse, and it may lead to troubles.
>>>> The Reset concept is not uniform throughout the library.
>>>> 
>>>>> On May 30, 2019, at 3:59 PM, Stefano Zampini  
>>>>> wrote:
>>>>> 
>>>>> I think snes reset should call ksp destroy. I don't see the case for 
>>>>> which this can lead to troubles 
>>>>> 
>>>>> Il Gio 30 Mag 2019, 15:43 Matthew Knepley via petsc-dev 
>>>>>  ha scritto:
>>>>> On Thu, May 30, 2019 at 6:08 AM Pierre Jolivet via petsc-dev 
>>>>>  wrote:
>>>>> Hello,
>>>>> I’m doing a continuation loop with something like:
>>>>> KSP ksp;
>>>>> KSPSetUp(ksp);
>>>>> while(cond) {
>>>>>  SNES snes;
>>>>>  SNESCreate();
>>>>>  SNESSetKSP(snes, );
>>>>>  SNESSolve(snes);
>>>>>  SNESDestroy();
>>>>> }
>>>>> 
>>>>> For the first iteration, everything is OK.
>>>>> After that, it looks like SNESDestroy is messing up with my outer-defined 
>>>>> KSP(*), i.e., for the code to work I need to comment out the SNESDestroy.
>>>>> Is this the expected behavior? (if not, I’ll provide a MWE)
>>>>> How to fix this properly, without having leaks by not destroying the 
>>>>> inner-defined SNESes?
>>>>> 
>>>>> Thanks in advance,
>>>>> Pierre
>>>>> 
>>>>> (*) to the outer-defined KSP is attached a PCFIELDSPLIT, first SNESSolve, 
>>>>> everything OK:
>>>>> PC Object: 1 MPI processes
>>>>> type: fieldsplit
>>>>>  FieldSpl

Re: [petsc-dev] SNESSetKSP and SNESDestroy

2019-05-30 Thread Pierre Jolivet via petsc-dev



> On 30 May 2019, at 10:44 PM, Smith, Barry F.  wrote:
> 
> 
> 
>> On May 30, 2019, at 2:15 PM, Pierre Jolivet  
>> wrote:
>> 
>>> 
>>> On 30 May 2019, at 5:48 PM, Smith, Barry F. via petsc-dev 
>>>  wrote:
>>> 
>>> 
>>> Yeah, this is a bug in the PETSc design. But I don't see any simple 
>>> solution. Given the models of reset, SNESSetKSP() is dangerous because it 
>>> only increases the reference count and has no control over the reset 
>>> process. At times I've regretted SNESSetKSP() and been tempted to remove it.
>>> 
>>>  As Matt says if you can move the SNESCreation outside the loop the problem 
>>> will be resolved.
>>> 
>>>  As an alternative hack you could change the code as follows:
>> 
>> Yeah… about that hack…
>> Before my first message, I tried something like:
>> 
>>  SNESCreate(&snes);
>>  KSP backup;
>>  SNESGetKSP(snes, &backup);
>>  SNESSetKSP(snes, &backup);
>>  SNESSolve(snes);
>>  SNESSetKSP(snes, &backup);
> 
>   Hmm, the compiler shouldn't accept the & in the line above.

Sorry about that wrong pseudo-code.
I had the error (Wrong type of object: Parameter # 2) with the correct calls to 
SNESSetKSP (without &).

But with a KSPCreate + KSPDestroy on backup, I get no error and no leak, so I’m 
good.

Thanks,
Pierre

>>  SNESDestroy(&snes);
>> 
>> And this is bailing out ([1]PETSC ERROR: SNESSetKSP() Wrong type of object: 
>> Parameter # 2), which I find quite ironic since I’m getting the KSP from 
>> SNESGetKSP.
>> 
>> As I’m typing this email I just tried something and the following fixes the 
>> hack: just add an extra KSPCreate on backup before the SNESDestroy.
>> 
>> Thanks,
>> Pierre
>> 
>>>>> 
>>>  KSP dummy;
>>>  KSPCreate(...,&dummy);
>>>>> KSP ksp;
>>>>> KSPSetUp(ksp);
>>>>> while(cond) {
>>>>>  SNES snes;
>>>>>  SNESCreate(&snes);
>>>>>  SNESSetKSP(snes, &ksp);
>>>>>  SNESSolve(snes);
>>>  SNESSetKSP(snes,dummy);
>>>>>  SNESDestroy(&snes);
>>>>> }
>>> It should work even if you don't setup dummy. But perhaps it is picky and 
>>> you may need to add a matrix to dummy etc (don't worry it doesn't copy the 
>>> matrix).
>>> 
>>> Barry
>>> 
>>> 
>>> 
>>> 
>>>> On May 30, 2019, at 9:51 AM, Stefano Zampini via petsc-dev 
>>>>  wrote:
>>>> 
>>>> I meant SNESDestroy should call KSPDestroy
>>>> 
>>>> The reason for having the XXXReset methods called by the XXXDestroy is 
>>>> mostly code reuse, and it may lead to troubles.
>>>> The Reset concept is not uniform throughout the library.
>>>> 
>>>>> On May 30, 2019, at 3:59 PM, Stefano Zampini  
>>>>> wrote:
>>>>> 
>>>>> I think snes reset should call ksp destroy. I don't see the case for 
>>>>> which this can lead to troubles 
>>>>> 
>>>>> Il Gio 30 Mag 2019, 15:43 Matthew Knepley via petsc-dev 
>>>>>  ha scritto:
>>>>> On Thu, May 30, 2019 at 6:08 AM Pierre Jolivet via petsc-dev 
>>>>>  wrote:
>>>>> Hello,
>>>>> I’m doing a continuation loop with something like:
>>>>> KSP ksp;
>>>>> KSPSetUp(ksp);
>>>>> while(cond) {
>>>>>  SNES snes;
>>>>>  SNESCreate();
>>>>>  SNESSetKSP(snes, );
>>>>>  SNESSolve(snes);
>>>>>  SNESDestroy();
>>>>> }
>>>>> 
>>>>> For the first iteration, everything is OK.
>>>>> After that, it looks like SNESDestroy is messing up with my outer-defined 
>>>>> KSP(*), i.e., for the code to work I need to comment out the SNESDestroy.
>>>>> Is this the expected behavior? (if not, I’ll provide a MWE)
>>>>> How to fix this properly, without having leaks by not destroying the 
>>>>> inner-defined SNESes?
>>>>> 
>>>>> Thanks in advance,
>>>>> Pierre
>>>>> 
>>>>> (*) to the outer-defined KSP is attached a PCFIELDSPLIT, first SNESSolve, 
>>>>> everything OK:
>>>>> PC Object: 1 MPI processes
>>>>> type: fieldsplit
>>>>>  FieldSplit with MULTIPLICATIVE composition: total splits = 2
>>>>> 
&

Re: [petsc-dev] SNESSetKSP and SNESDestroy

2019-05-30 Thread Pierre Jolivet via petsc-dev


> On 30 May 2019, at 5:48 PM, Smith, Barry F. via petsc-dev 
>  wrote:
> 
> 
>   Yeah, this is a bug in the PETSc design. But I don't see any simple 
> solution. Given the models of reset, SNESSetKSP() is dangerous because it 
> only increases the reference count and has no control over the reset process. 
> At times I've regretted SNESSetKSP() and been tempted to remove it.
> 
>As Matt says if you can move the SNESCreation outside the loop the problem 
> will be resolved.
> 
>As an alternative hack you could change the code as follows:

Yeah… about that hack…
Before my first message, I tried something like:

SNESCreate(&snes);
KSP backup;
SNESGetKSP(snes, &backup);
SNESSetKSP(snes, &backup);
SNESSolve(snes);
SNESSetKSP(snes, &backup);
SNESDestroy(&snes);

And this is bailing out ([1]PETSC ERROR: SNESSetKSP() Wrong type of object: 
Parameter # 2), which I find quite ironic since I’m getting the KSP from 
SNESGetKSP.

As I’m typing this email I just tried something and the following fixes the 
hack: just add an extra KSPCreate on backup before the SNESDestroy.

Thanks,
Pierre

>>> 
>KSP dummy;
>    KSPCreate(...,&dummy);
>>> KSP ksp;
>>> KSPSetUp(ksp);
>>> while(cond) {
>>>    SNES snes;
>>>    SNESCreate(&snes);
>>>    SNESSetKSP(snes, &ksp);
>>>    SNESSolve(snes);
>    SNESSetKSP(snes,dummy);
>>>    SNESDestroy(&snes);
>>> }
> It should work even if you don't setup dummy. But perhaps it is picky and you 
> may need to add a matrix to dummy etc (don't worry it doesn't copy the 
> matrix).
> 
>  Barry
> 
> 
> 
> 
>> On May 30, 2019, at 9:51 AM, Stefano Zampini via petsc-dev 
>>  wrote:
>> 
>> I meant SNESDestroy should call KSPDestroy
>> 
>> The reason for having the XXXReset methods called by the XXXDestroy is 
>> mostly code reuse, and it may lead to troubles.
>> The Reset concept is not uniform throughout the library.
>> 
>>> On May 30, 2019, at 3:59 PM, Stefano Zampini  
>>> wrote:
>>> 
>>> I think snes reset should call ksp destroy. I don't see the case for which 
>>> this can lead to troubles 
>>> 
>>> Il Gio 30 Mag 2019, 15:43 Matthew Knepley via petsc-dev 
>>>  ha scritto:
>>> On Thu, May 30, 2019 at 6:08 AM Pierre Jolivet via petsc-dev 
>>>  wrote:
>>> Hello,
>>> I’m doing a continuation loop with something like:
>>> KSP ksp;
>>> KSPSetUp(ksp);
>>> while(cond) {
>>>SNES snes;
>>>SNESCreate(&snes);
>>>SNESSetKSP(snes, &ksp);
>>>SNESSolve(snes);
>>>SNESDestroy(&snes);
>>> }
>>> 
>>> For the first iteration, everything is OK.
>>> After that, it looks like SNESDestroy is messing up with my outer-defined 
>>> KSP(*), i.e., for the code to work I need to comment out the SNESDestroy.
>>> Is this the expected behavior? (if not, I’ll provide a MWE)
>>> How to fix this properly, without having leaks by not destroying the 
>>> inner-defined SNESes?
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>>> (*) to the outer-defined KSP is attached a PCFIELDSPLIT, first SNESSolve, 
>>> everything OK:
>>> PC Object: 1 MPI processes
>>>  type: fieldsplit
>>>FieldSplit with MULTIPLICATIVE composition: total splits = 2
>>> 
>>> Second SNESSolve:
>>> [0]PETSC ERROR: PCFieldSplitSetDefaults() Unhandled case, must have at 
>>> least two fields, not 1
>>> 
>>> I believe SNESDestroy() calls SNESReset(), which calls KSPReset(), which is 
>>> wiping out your information.
>>> 
>>> This pattern is in a lot of places. Destroy calls Reset because it is 
>>> natural to reuse the code which tears down
>>> the internal structures. Reset calls reset on subobjects because we also 
>>> want to tear down their internal structures (usually).
>>> Reset wouldn't do what we want on resizing if we left subobjects unreset. 
>>> However, this chain is causing subobjects to
>>> be wiped clean on Destroy, so that we cannot reuse subobjects.
>>> 
>>> It would not even work if you change the Destroy to Reset in the 
>>> continuation loop. However, I think you can hoist all
>>> the SNES setup outside the loop. If you are changing sizes, then the KSP 
>>> needs to be wiped out anyway. If not, then
>>> you do not need to wipe out the SNES.
>>> 
>>>  Thanks,
>>> 
>>> Matt
>>> 
>>> -- 
>>> What most experimenters take for granted before they begin their 
>>> experiments is infinitely more interesting than any results to which their 
>>> experiments lead.
>>> -- Norbert Wiener
>>> 
>>> https://www.cse.buffalo.edu/~knepley/
>> 
> 



Re: [petsc-dev] SNESSetKSP and SNESDestroy

2019-05-30 Thread Pierre Jolivet via petsc-dev


> On 30 May 2019, at 2:42 PM, Matthew Knepley  wrote:
> 
> On Thu, May 30, 2019 at 6:08 AM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> Hello,
> I’m doing a continuation loop with something like:
> KSP ksp;
> KSPSetUp(ksp);
> while(cond) {
> SNES snes;
> SNESCreate(&snes);
> SNESSetKSP(snes, &ksp);
> SNESSolve(snes);
> SNESDestroy(&snes);
> }
> 
> For the first iteration, everything is OK.
> After that, it looks like SNESDestroy is messing up with my outer-defined 
> KSP(*), i.e., for the code to work I need to comment out the SNESDestroy.
> Is this the expected behavior? (if not, I’ll provide a MWE)
> How to fix this properly, without having leaks by not destroying the 
> inner-defined SNESes?
> 
> Thanks in advance,
> Pierre
> 
> (*) to the outer-defined KSP is attached a PCFIELDSPLIT, first SNESSolve, 
> everything OK:
> PC Object: 1 MPI processes
>   type: fieldsplit
> FieldSplit with MULTIPLICATIVE composition: total splits = 2
> 
> Second SNESSolve:
> [0]PETSC ERROR: PCFieldSplitSetDefaults() Unhandled case, must have at least 
> two fields, not 1
> 
> I believe SNESDestroy() calls SNESReset(), which calls KSPReset(), which is 
> wiping out your information.

OK, I thought there was a reference count in XXXReset, but it looks like it is 
just in the XXXDestroy. That explains why.

> This pattern is in a lot of places. Destroy calls Reset because it is natural 
> to reuse the code which tears down
> the internal structures. Reset calls reset on subobjects because we also want 
> to tear down their internal structures (usually).
> Reset wouldn't do what we want on resizing if we left subobjects unreset. 
> However, this chain is causing subobjects to
> be wiped clean on Destroy, so that we cannot reuse subobjects.
> 
> It would not even work if you change the Destroy to Reset in the continuation 
> loop. However, I think you can hoist all
> the SNES setup outside the loop. If you are changing sizes, then the KSP 
> needs to be wiped out anyway.

Indeed, when I turn on mesh adaptation, I wipe out the KSP myself and set 
everything again, so no problem.

> If not, then
> you do not need to wipe out the SNES.

Now, this is more of a limitation of our DSL. I don’t expose the PETSc SNES type to 
users; we just do SNESSolve(my Mat+KSP) in the loop, which Creates, Solves, and 
Destroys.
But I guess I’ll have to split those steps.

Thanks,
Pierre

>   Thanks,
> 
>  Matt
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>



[petsc-dev] SNESSetKSP and SNESDestroy

2019-05-30 Thread Pierre Jolivet via petsc-dev
Hello,
I’m doing a continuation loop with something like:
KSP ksp;
KSPSetUp(ksp);
while(cond) {
SNES snes;
SNESCreate(&snes);
SNESSetKSP(snes, &ksp);
SNESSolve(snes);
SNESDestroy(&snes);
}

For the first iteration, everything is OK.
After that, it looks like SNESDestroy is messing up with my outer-defined 
KSP(*), i.e., for the code to work I need to comment out the SNESDestroy.
Is this the expected behavior? (if not, I’ll provide a MWE)
How to fix this properly, without having leaks by not destroying the 
inner-defined SNESes?

Thanks in advance,
Pierre

(*) to the outer-defined KSP is attached a PCFIELDSPLIT, first SNESSolve, 
everything OK:
PC Object: 1 MPI processes
  type: fieldsplit
FieldSplit with MULTIPLICATIVE composition: total splits = 2

Second SNESSolve:
[0]PETSC ERROR: PCFieldSplitSetDefaults() Unhandled case, must have at least 
two fields, not 1

Re: [petsc-dev] MatNest and FieldSplit

2019-04-15 Thread Pierre Jolivet via petsc-dev


> On 15 Apr 2019, at 2:17 PM, Matthew Knepley <knep...@gmail.com> wrote:
> 
> On Mon, Apr 15, 2019 at 5:03 AM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> OK, my solver is now working properly with a call to
> PetscObjectCompose((PetscObject)isS, "pmat", (PetscObject) myS);
> 
> I have two follow-up questions:
> 1) am I supposed to call this, or is it the sign of something done wrong in 
> my sequence of SNESSolve/KSPSetUp/KSPSetFromOptions/KSPSetOperators…?
> 
> Yes, unfortunately. We want to let the user pass in preconditioning matrices 
> for arbitrarily nested splits without pulling out
> the objects explicitly, so we need a way to locate a particular split. I 
> guess you could imagine using the prefix, which would
> be an XML way of doing things. We are doing it by attaching things to the IS, 
> since the Solver/Mat objects are created on the
> fly. In fact, the IS is created on the fly by a DM sometimes, so you can 
> attach to the DM (I do this for Plex).

Indeed, I found — to my surprise — one of the very few occurrences of 
PetscObjectCompose((PetscObject)isS, “pmat”, …), in dm/interface/dmi.c.
And actually, I realised later on that even for a “simple” example like 
snes/examples/tutorials/ex70.c with -user_ksp turned on, the call to 
KSPSetOperators(subksp[1], s->myS, s->myS) is not sufficient to avoid the call 
to MatCreateSubMatrix on the Schur complement. Which means (at least) this part 
of the solver is behaving like it should, thanks for making it clear to me.

> 2) I have currently hardwired the split name I’m using when calling 
> PCFieldSplitGetIS to get “isS” (from above) for debugging purposes. Could I 
> create a PR with a new function like PCFieldSplitGetISByIndex(PC pc,const 
> PetscInt n,IS *is) that will return the nth IS? Right now, I would need to 
> get the KSP prefix followed up by some string comparison to get the actual IS 
> prefix, whereas I know the position of the KSP in the PC_FieldSplitLink.
> 
> That sounds fine to me.

Done (PR #1544).

Thanks,
Pierre

>Matt
>  
> Thanks,
> Pierre
> 
>> On 14 Apr 2019, at 10:54 PM, Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> 
>> I think I figured out what my problem is _exactly_.
>> The Mat inside the MATNEST on which I’m using a PCFIELDSPLIT is unassembled 
>> before the first KSPSolve, except for the last field.
>> Matt, you nailed it, when I call KSPSetFromOptions on the global 
>> PCFIELDSPLIT, then KSPSetUp explicitly on the inner PCFIELDSPLIT, the 
>> matrices associated to all fields are created here: 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656>
>> Then, whatever I do with the matrix from the last field, currently trying 
>> MatCopy(myAssembledS, generatedS), before the first KSPSolve is reset by 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688>
>>  and my solver fails…
>> 
>> So pretty simple question, how do I set a “pmat” for my last assembled field 
>> so that 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646>
>>  and 
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686>
>>  will return a non-null pmat.
>> 
>> This may sound really trivial but I’m lost in limbo right now. When 
>> everything is not wrapped inside an outer PCFIELDSPLIT, everything just works.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 25 Mar 2019, at 6:57 PM, Pierre Jolivet via petsc-dev 
>>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>>> 
>>> Thanks, this makes (slightly) more sense to me now.
>>> For some reason my application is still not acting properly, but I must be 
>>> misconfiguring the nested FieldSplit somewhere else…
>>> 
>>> Thank you,
>>> Pierre
>>> 
>>>> On 24 Mar 2019, at 11:42 PM, Dave May via petsc-dev >>> <mailto:petsc-dev@mcs.anl.gov>> wrote:
>>>> 
>>>> Matt is right.
>>>> 
>>>> When you defined the operator S, you basica

Re: [petsc-dev] MatNest and FieldSplit

2019-04-15 Thread Pierre Jolivet via petsc-dev
OK, my solver is now working properly with a call to
PetscObjectCompose((PetscObject)isS, "pmat", (PetscObject) myS);

I have two follow-up questions:
1) am I supposed to call this, or is it a sign of something done wrong in my 
sequence of SNESSolve/KSPSetUp/KSPSetFromOptions/KSPSetOperators…?
2) I have currently hardwired the split name I’m using when calling 
PCFieldSplitGetIS to get “isS” (from above) for debugging purposes. Could I 
create a PR with a new function like PCFieldSplitGetISByIndex(PC pc,const 
PetscInt n,IS *is) that will return the nth IS? Right now, I would need to get 
the KSP prefix followed by some string comparison to get the actual IS 
prefix, whereas I know the position of the KSP in the PC_FieldSplitLink.
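For context, points 1) and 2) above can be sketched as follows. This is only an 
illustration, with assumptions: the split name "1" is a placeholder for my 
hardwired name, and PCFieldSplitGetISByIndex() is the function *proposed* in 
2), which does not exist in PETSc at the time of writing.

```c
/* Sketch only: attach the application-assembled Schur complement myS to the
   IS of a split so that PCFieldSplit picks it up as "pmat".  The split name
   "1" is illustrative; PCFieldSplitGetISByIndex() is the proposed function
   from point 2) and does not exist yet. */
IS             isS;
PetscErrorCode ierr;

ierr = PCFieldSplitGetIS(pc, "1", &isS);CHKERRQ(ierr);  /* hardwired split name */
/* with the proposed API, the hardwired name would go away:
   ierr = PCFieldSplitGetISByIndex(pc, nsplits - 1, &isS);CHKERRQ(ierr); */
ierr = PetscObjectCompose((PetscObject)isS, "pmat", (PetscObject)myS);CHKERRQ(ierr);
```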

Thanks,
Pierre

> On 14 Apr 2019, at 10:54 PM, Pierre Jolivet via petsc-dev 
>  wrote:
> 
> I think I figured out what my problem is _exactly_.
> The Mat inside the MATNEST on which I’m using a PCFIELDSPLIT is unassembled 
> before the first KSPSolve, except for the last field.
> Matt, you nailed it, when I call KSPSetFromOptions on the global 
> PCFIELDSPLIT, then KSPSetUp explicitly on the inner PCFIELDSPLIT, the 
> matrices associated to all fields are created here: 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656>
> Then, whatever I do with the matrix from the last field, currently trying 
> MatCopy(myAssembledS, generatedS), before the first KSPSolve is reset by 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688>
>  and my solver fails…
> 
> So pretty simple question, how do I set a “pmat” for my last assembled field 
> so that 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646>
>  and 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686>
>  will return a non-null pmat.
> 
> This may sound really trivial but I’m lost in limbo right now. When 
> everything is not wrapped inside an outer PCFIELDSPLIT, everything just works.
> 
> Thanks,
> Pierre
> 
>> On 25 Mar 2019, at 6:57 PM, Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> 
>> Thanks, this makes (slightly) more sense to me now.
>> For some reason my application is still not acting properly, but I must be 
>> misconfiguring the nested FieldSplit somewhere else…
>> 
>> Thank you,
>> Pierre
>> 
>>> On 24 Mar 2019, at 11:42 PM, Dave May via petsc-dev >> <mailto:petsc-dev@mcs.anl.gov>> wrote:
>>> 
>>> Matt is right.
>>> 
>>>> When you define the operator S, you basically invalidate the operator N 
>>> (in the sense that they are no longer consistent). Hence when you use KSP 
>>> nest to solve your problem your A matrix looks like 
>>>   A = diag[1, 2, 4, 0, 8]
>>> but the B matrix you have defined looks like
>>>   B = diag[1, 2, 4, 0.1]
>>> 
>>> The only way to obtain the correct answer with your code is thus to use the 
>>> option
>>> -ksp_type preonly
>>> 
>>> Thanks
>>> Dave
>>> 
>>> 
>>> 
>>> On Sun, 24 Mar 2019 at 22:09, Mark Adams via petsc-dev 
>>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>>> I think he is saying that this line seems to have no effect (and the 
>>> comment is hence wrong):
>>> 
>>> KSPSetOperators(subksp[nsplits - 1], S, S);
>>> // J2 = [[4, 0] ; [0, 0.1]]
>>> 
>>> J2 is a 2x2 but this block has been changed into two single-equation 
>>> fields. Is this KSPSetOperators supposed to copy this 1x1 S matrix into 
>>> the (1,1) block of the "J2", or do some sort of correct mixing internally, 
>>> to get what he wants?
>>> 
>>> BTW, this line does not seem necessary to me so maybe I'm missing something.
>>> 
>>> KSPSetOperators(sub, J2, J2);
>>> 
>>> 
>>> On Sun, Mar 24, 2019 at 4:33 PM Matthew Knepley via petsc-dev 
>>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>>> On Sun, Mar 24, 2019 at 10:21 AM Pierre Jolivet >> <mailto:pierre.joli...@enseeiht.fr>

Re: [petsc-dev] MatNest and FieldSplit

2019-04-14 Thread Pierre Jolivet via petsc-dev
I think I figured out what my problem is _exactly_.
The Mat inside the MATNEST on which I’m using a PCFIELDSPLIT is unassembled 
before the first KSPSolve, except for the last field.
Matt, you nailed it, when I call KSPSetFromOptions on the global PCFIELDSPLIT, 
then KSPSetUp explicitly on the inner PCFIELDSPLIT, the matrices associated to 
all fields are created here: 
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line656>
Then, whatever I do with the matrix from the last field, currently trying 
MatCopy(myAssembledS, generatedS), before the first KSPSolve is reset by 
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line688>
 and my solver fails…

So, a pretty simple question: how do I set a “pmat” for my last assembled field so 
that 
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line646>
 and 
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/impls/fieldsplit/fieldsplit.c.html#line686>
 will return a non-null pmat.

This may sound really trivial but I’m lost in limbo right now. When everything 
is not wrapped inside an outer PCFIELDSPLIT, everything just works.

Thanks,
Pierre

> On 25 Mar 2019, at 6:57 PM, Pierre Jolivet via petsc-dev 
>  wrote:
> 
> Thanks, this makes (slightly) more sense to me now.
> For some reason my application is still not acting properly, but I must be 
> misconfiguring the nested FieldSplit somewhere else…
> 
> Thank you,
> Pierre
> 
>> On 24 Mar 2019, at 11:42 PM, Dave May via petsc-dev > <mailto:petsc-dev@mcs.anl.gov>> wrote:
>> 
>> Matt is right.
>> 
>> When you define the operator S, you basically invalidate the operator N (in 
>> the sense that they are no longer consistent). Hence when you use KSP nest 
>> to solve your problem your A matrix looks like 
>>   A = diag[1, 2, 4, 0, 8]
>> but the B matrix you have defined looks like
>>   B = diag[1, 2, 4, 0.1]
>> 
>> The only way to obtain the correct answer with your code is thus to use the 
>> option
>> -ksp_type preonly
>> 
>> Thanks
>> Dave
>> 
>> 
>> 
>> On Sun, 24 Mar 2019 at 22:09, Mark Adams via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> I think he is saying that this line seems to have no effect (and the comment 
>> is hence wrong):
>> 
>> KSPSetOperators(subksp[nsplits - 1], S, S);
>> // J2 = [[4, 0] ; [0, 0.1]]
>> 
>> J2 is a 2x2 but this block has been changed into two single-equation fields. 
>> Is this KSPSetOperators supposed to copy this 1x1 S matrix into the (1,1) 
>> block of the "J2", or do some sort of correct mixing internally, to get what 
>> he wants?
>> 
>> BTW, this line does not seem necessary to me so maybe I'm missing something.
>> 
>> KSPSetOperators(sub, J2, J2);
>> 
>> 
>> On Sun, Mar 24, 2019 at 4:33 PM Matthew Knepley via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> On Sun, Mar 24, 2019 at 10:21 AM Pierre Jolivet > <mailto:pierre.joli...@enseeiht.fr>> wrote:
>> It’s a 4x4 matrix.
>> The first 2x2 diagonal matrix is a field.
>> The second 2x2 diagonal matrix is another field.
>> In the second field, the first diagonal coefficient is a subfield.
>> In the second field, the second diagonal coefficient is another subfield.
>> I’m changing the operators from the second subfield (last diagonal 
>> coefficient of the matrix).
>> When I solve a system with the complete matrix (2 fields), I get a different 
>> “partial solution" than when I solve the “partial system” on just the second 
>> field (with the two subfields in which I modified the operators from the 
>> second one).
>> 
>> I may understand what you are doing.
>> Fieldsplit calls MatGetSubMatrix() which can copy values, depending on the 
>> implementation,
>> so changing values in the original matrix may or may not change it in the PC.
>>  
>>Matt
>> 
>> I don’t know if this makes more or less sense… sorry :\
>> Thanks,
>> Pierre
>> 
>>> On 24 Mar 2019, at 8:42 PM, Matthew Knepley >> <mailto:knep...@gmail.com>> wrote:
>>> 
>>> On Sat, Mar 23, 2019 at 9:12 P

[petsc-dev] Transpose and AXPY

2019-04-09 Thread Pierre Jolivet via petsc-dev
We are trying to solve a generalized eigenvalue problem with -st_type sinvert 
using a MATNEST as a RHS and a nonzero shift.
Some Mats in the MATNEST are either created with MatCreateHermitianTranspose or 
MatCreateTranspose.

MatAXPY (needed by -st_type sinvert) is failing us because there is obviously 
no MatGetRow for MATTRANSPOSE.
What I would like to do is get the associated matrix and explicitly (Hermitian) 
transpose it, then call MatAXPY.

Would such ugly code (that fixes our solver) be mergeable into master 
https://bitbucket.org/petsc/petsc/branch/jolivet/axpy-transpose#diff 
?
If not, do you have some suggestions?
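Concretely, the workaround we have in mind can be sketched as below, for the 
plain-transpose case (the Hermitian case would use MatHermitianTransposeGetMat() 
and MatHermitianTranspose() instead). This is a sketch of the idea, not the code 
in the branch; A, B, and alpha are assumed to come from the application.

```c
/* Sketch of the workaround: B is a MATTRANSPOSE wrapper, for which there is
   no MatGetRow(), so form the transpose explicitly before calling MatAXPY().
   E is a temporary explicit matrix; alpha is an illustrative shift. */
Mat            T, E;
PetscScalar    alpha = 1.0;
PetscErrorCode ierr;

ierr = MatTransposeGetMat(B, &T);CHKERRQ(ierr);                /* wrapped Mat */
ierr = MatTranspose(T, MAT_INITIAL_MATRIX, &E);CHKERRQ(ierr);  /* explicit B  */
ierr = MatAXPY(A, alpha, E, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = MatDestroy(&E);CHKERRQ(ierr);
```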

Thanks,
Pierre

Re: [petsc-dev] SNESSolve and changing dimensions

2019-04-03 Thread Pierre Jolivet via petsc-dev

> On 3 Apr 2019, at 3:40 PM, Matthew Knepley  wrote:
> 
> On Wed, Apr 3, 2019 at 9:31 AM Pierre Jolivet  <mailto:pierre.joli...@enseeiht.fr>> wrote:
>> On 3 Apr 2019, at 3:15 PM, Stefano Zampini > <mailto:stefano.zamp...@gmail.com>> wrote:
>> 
>> Pierre,
>> 
>> using MatHeaderReplace should be discouraged in a SNES/TS callback ;-)
> 
> Gotcha!
> 
>> I coded this way to have the possibility of changing the MatType while 
>> running SNES or TS, not for variable sizes.
>> 
>> A tricky way of achieving what you want is to code a proper hook function 
>> (see 
>> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetUpdate.html
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetUpdate.html>),
>>  which gets called right before each linear solve.
>> This is very tricky (and implementation dependent) because you also need to 
>> update/interpolate all the relevant vectors SNES may use later for line 
>> search
>> 
>> The proper solution would be to extend DMAdaptor to the case of user-defined 
>> refinements and stopping criterion callbacks. Matt? 
> 
> Following Matt’s message, I’ve permuted the two “loops" (SNESSolve and mesh 
> adaptation), but if there is some way to do the adaptation in the inner loop 
> coming along, I’ll give it a go.
> 
> For now, this is the best.
> 
> Can you tell me a little about what you are adapting, and why adapting after 
> only a linearized solve would be preferable
> to adapting after the nonlinear solve?

I am just adapting the mesh depending on the solution from the previous 
SNESSolve.
At first, I wanted to avoid writing an outer loop around the SNESSolve, so I 
thought, let’s put the adaptation in the SNESSetJacobian.
It would have been preferable because it would have required fewer lines of 
code (as I had imagined this to work); that's the main reason. I understand it 
is too much to ask PETSc to keep working without any further information from 
the application.

Thanks,
Pierre

>   Thanks,
> 
> Matt
>  
> Thanks,
> Pierre
> 
>> Il giorno mer 3 apr 2019 alle ore 14:52 Matthew Knepley via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> ha scritto:
>> On Wed, Apr 3, 2019 at 5:43 AM Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> I’d like to do mesh adaptation in a Newton loop.
>> I’m using a SNES with SNESSetFunction and SNESSetJacobian.
>> My first question is: may the dimension of the linear systems change 
>> throughout the nonlinear iterations?
>> 
>> No. Everything we do is algebraic right now. You would have to reformulate 
>> Newton in
>> some Banach space, and make sure all the measures you were using were 
>> discretization
>> independent. Then you could allow the underlying linear algebra to change, 
>> but you would
>> need operations for projecting to the new space, making BC in the new 
>> space, etc. There
>> is a lot of machinery in the continuous space that PETSc does not have.
>> 
>> It sounds like it should be set up as an FAS.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> If so, what are the proper things to do? I tried to do as Stefano in MFEM 
>> and use MatHeaderReplace 
>> https://github.com/mfem/mfem/blob/master/linalg/petsc.cpp#L3833 
>> <https://github.com/mfem/mfem/blob/master/linalg/petsc.cpp#L3833> inside the 
>> function supplied to SNESSetJacobian, but I end up with a [0]PETSC ERROR: 
>> PCApply() line 455 in petsc/src/ksp/pc/interface/precon.c Preconditioner 
>> number of local rows 6561 does not equal resulting vector number of rows 
>> 1681 further down the road. Should I also call something like 
>> VecHeaderReplace (which apparently does not exist) on the residual Vec?
>> 
>> Thanks,
>> Pierre
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
>> 
>> 
>> -- 
>> Stefano
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>


Re: [petsc-dev] SNESSolve and changing dimensions

2019-04-03 Thread Pierre Jolivet via petsc-dev

> On 3 Apr 2019, at 3:15 PM, Stefano Zampini  wrote:
> 
> Pierre,
> 
> using MatHeaderReplace should be discouraged in a SNES/TS callback ;-)

Gotcha!

> I coded this way to have the possibility of changing the MatType while 
> running SNES or TS, not for variable sizes.
> 
> A tricky way of achieving what you want is to code a proper hook function 
> (see 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetUpdate.html
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetUpdate.html>),
>  which gets called right before each linear solve.
> This is very tricky (and implementation dependent) because you also need to 
> update/interpolate all the relevant vectors SNES may use later for line search
> 
> The proper solution would be to extend DMAdaptor to the case of user-defined 
> refinements and stopping criterion callbacks. Matt? 

Following Matt’s message, I’ve permuted the two “loops" (SNESSolve and mesh 
adaptation), but if there is some way to do the adaptation in the inner loop 
coming along, I’ll give it a go.

Thanks,
Pierre

> Il giorno mer 3 apr 2019 alle ore 14:52 Matthew Knepley via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> ha scritto:
> On Wed, Apr 3, 2019 at 5:43 AM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> I’d like to do mesh adaptation in a Newton loop.
> I’m using a SNES with SNESSetFunction and SNESSetJacobian.
> My first question is: may the dimension of the linear systems change 
> throughout the nonlinear iterations?
> 
> No. Everything we do is algebraic right now. You would have to reformulate 
> Newton in
> some Banach space, and make sure all the measures you were using were 
> discretization
> independent. Then you could allow the underlying linear algebra to change, 
> but you would
> need operations for projecting to the new space, making BC in the new space, 
> etc. There
> is a lot of machinery in the continuous space that PETSc does not have.
> 
> It sounds like it should be set up as an FAS.
> 
>   Thanks,
> 
>  Matt
>  
> If so, what are the proper things to do? I tried to do as Stefano in MFEM and 
> use MatHeaderReplace 
> https://github.com/mfem/mfem/blob/master/linalg/petsc.cpp#L3833 
> <https://github.com/mfem/mfem/blob/master/linalg/petsc.cpp#L3833> inside the 
> function supplied to SNESSetJacobian, but I end up with a [0]PETSC ERROR: 
> PCApply() line 455 in petsc/src/ksp/pc/interface/precon.c Preconditioner 
> number of local rows 6561 does not equal resulting vector number of rows 1681 
> further down the road. Should I also call something like VecHeaderReplace 
> (which apparently does not exist) on the residual Vec?
> 
> Thanks,
> Pierre
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
> 
> 
> -- 
> Stefano



[petsc-dev] SNESSolve and changing dimensions

2019-04-03 Thread Pierre Jolivet via petsc-dev
I’d like to do mesh adaptation in a Newton loop.
I’m using a SNES with SNESSetFunction and SNESSetJacobian.
My first question is: may the dimension of the linear systems change throughout 
the nonlinear iterations?
If so, what are the proper things to do? I tried to do as Stefano in MFEM and 
use MatHeaderReplace 
https://github.com/mfem/mfem/blob/master/linalg/petsc.cpp#L3833 
 inside the 
function supplied to SNESSetJacobian, but I end up with a [0]PETSC ERROR: 
PCApply() line 455 in petsc/src/ksp/pc/interface/precon.c Preconditioner number 
of local rows 6561 does not equal resulting vector number of rows 1681 further 
down the road. Should I also call something like VecHeaderReplace (which 
apparently does not exist) on the residual Vec?

Thanks,
Pierre
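Following up for readers of this thread: the structure that ended up working 
(see the replies above) puts the adaptation around, not inside, SNESSolve. A 
minimal sketch follows; adaptMesh(), transferSolution(), and setupSolver() are 
hypothetical application-side routines, not PETSc functions.

```c
/* Sketch: adapt outside the Newton loop instead of inside SNESSetJacobian.
   Since the system size changes between adaptation steps, the solver
   objects are recreated rather than patched with MatHeaderReplace(). */
for (PetscInt step = 0; step < maxAdaptSteps; ++step) {
  ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);
  ierr = adaptMesh(mesh, x, &newMesh);CHKERRQ(ierr);             /* metric from x */
  ierr = transferSolution(mesh, newMesh, x, &newX);CHKERRQ(ierr);
  ierr = SNESDestroy(&snes);CHKERRQ(ierr);                       /* sizes changed */
  ierr = setupSolver(newMesh, &snes);CHKERRQ(ierr);              /* hypothetical  */
  mesh = newMesh;
  x    = newX;
}
```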

Re: [petsc-dev] MatNest and FieldSplit

2019-03-25 Thread Pierre Jolivet via petsc-dev
Thanks, this makes (slightly) more sense to me now.
For some reason my application is still not acting properly, but I must be 
misconfiguring the nested FieldSplit somewhere else…

Thank you,
Pierre

> On 24 Mar 2019, at 11:42 PM, Dave May via petsc-dev  
> wrote:
> 
> Matt is right.
> 
> When you define the operator S, you basically invalidate the operator N (in 
> the sense that they are no longer consistent). Hence when you use KSP nest to 
> solve your problem your A matrix looks like 
>   A = diag[1, 2, 4, 0, 8]
> but the B matrix you have defined looks like
>   B = diag[1, 2, 4, 0.1]
> 
> The only way to obtain the correct answer with your code is thus to use the 
> option
> -ksp_type preonly
> 
> Thanks
> Dave
> 
> 
> 
> On Sun, 24 Mar 2019 at 22:09, Mark Adams via petsc-dev  <mailto:petsc-dev@mcs.anl.gov>> wrote:
> I think he is saying that this line seems to have no effect (and the comment 
> is hence wrong):
> 
> KSPSetOperators(subksp[nsplits - 1], S, S);
> // J2 = [[4, 0] ; [0, 0.1]]
> 
> J2 is a 2x2 but this block has been changed into two single-equation fields. 
> Is this KSPSetOperators supposed to copy this 1x1 S matrix into the (1,1) 
> block of the "J2", or do some sort of correct mixing internally, to get what 
> he wants?
> 
> BTW, this line does not seem necessary to me so maybe I'm missing something.
> 
> KSPSetOperators(sub, J2, J2);
> 
> 
> On Sun, Mar 24, 2019 at 4:33 PM Matthew Knepley via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> On Sun, Mar 24, 2019 at 10:21 AM Pierre Jolivet  <mailto:pierre.joli...@enseeiht.fr>> wrote:
> It’s a 4x4 matrix.
> The first 2x2 diagonal matrix is a field.
> The second 2x2 diagonal matrix is another field.
> In the second field, the first diagonal coefficient is a subfield.
> In the second field, the second diagonal coefficient is another subfield.
> I’m changing the operators from the second subfield (last diagonal 
> coefficient of the matrix).
> When I solve a system with the complete matrix (2 fields), I get a different 
> “partial solution" than when I solve the “partial system” on just the second 
> field (with the two subfields in which I modified the operators from the 
> second one).
> 
> I may understand what you are doing.
> Fieldsplit calls MatGetSubMatrix() which can copy values, depending on the 
> implementation,
> so changing values in the original matrix may or may not change it in the PC.
>  
>Matt
> 
> I don’t know if this makes more or less sense… sorry :\
> Thanks,
> Pierre
> 
>> On 24 Mar 2019, at 8:42 PM, Matthew Knepley > <mailto:knep...@gmail.com>> wrote:
>> 
>> On Sat, Mar 23, 2019 at 9:12 PM Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> I’m trying to figure out why both solutions are not consistent in the 
>> following example.
>> Is what I’m doing complete nonsense?
>> 
>> The code does not make clear what you are asking. I can see it’s a nested 
>> fieldsplit.
>> 
>>   Thanks,
>> 
>>  Matt
>>  
>> Thanks in advance for your help,
>> Pierre
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>



Re: [petsc-dev] MatNest and FieldSplit

2019-03-24 Thread Pierre Jolivet via petsc-dev
It’s a 4x4 matrix.
The first 2x2 diagonal matrix is a field.
The second 2x2 diagonal matrix is another field.
In the second field, the first diagonal coefficient is a subfield.
In the second field, the second diagonal coefficient is another subfield.
I’m changing the operators from the second subfield (last diagonal coefficient 
of the matrix).
When I solve a system with the complete matrix (2 fields), I get a different 
“partial solution" than when I solve the “partial system” on just the second 
field (with the two subfields in which I modified the operators from the second 
one).

I don’t know if this makes more or less sense… sorry :\
Thanks,
Pierre

> On 24 Mar 2019, at 8:42 PM, Matthew Knepley  wrote:
> 
> On Sat, Mar 23, 2019 at 9:12 PM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> I’m trying to figure out why both solutions are not consistent in the 
> following example.
> Is what I’m doing complete nonsense?
> 
> The code does not make clear what you are asking. I can see it’s a nested 
> fieldsplit.
> 
>   Thanks,
> 
>  Matt
>  
> Thanks in advance for your help,
> Pierre
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>



[petsc-dev] MatNest and FieldSplit

2019-03-24 Thread Pierre Jolivet via petsc-dev
I’m trying to figure out why both solutions are not consistent in the following 
example.
Is what I’m doing complete nonsense?

Thanks in advance for your help,
Pierre



nest.c
Description: Binary data


[petsc-dev] KSPMonitor and petsc4py

2019-02-25 Thread Pierre Jolivet via petsc-dev
I’m not sure this is the expected behavior. Is it?
$ git diff ex100.py
diff --git a/src/ksp/ksp/examples/tutorials/ex100.py 
b/src/ksp/ksp/examples/tutorials/ex100.py
index dbb3f8e012..051637ce1d 100644
--- a/src/ksp/ksp/examples/tutorials/ex100.py
+++ b/src/ksp/ksp/examples/tutorials/ex100.py
@@ -29,6 +29,7 @@ def RunTest():
 ksp.setOperators(A, A)
 ksp.setFromOptions()
 ksp.solve(b, x)
+ksp.solve(b, x)

 r = b.duplicate()
 A.mult(x, r)

$ mpirun -np 8 python ./ex100.py -ksp_converged_reason -ksp_monitor
  0 KSP Residual norm 5.101520253035e-04
  1 KSP Residual norm 1.168905135944e-03
[..]
 12 KSP Residual norm 5.101520253035e-05
 13 KSP Residual norm 7.504522604366e-20
Linear solve converged due to CONVERGED_RTOL iterations 13
 13 KSP Residual norm 5.101520253035e-04
 14 KSP Residual norm 1.168905135944e-03
[..]
 25 KSP Residual norm 5.101520253035e-05
 26 KSP Residual norm 7.504522604366e-20
Linear solve converged due to CONVERGED_RTOL iterations 26

Shouldn’t the iteration number be reset to 0 for the 2nd KSPSolve?

Thanks,
Pierre

Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-16 Thread Pierre Jolivet via petsc-dev



> On 16 Jan 2019, at 3:22 PM, Jed Brown  wrote:
> 
> Matthew Knepley  writes:
> 
>> On Wed, Jan 16, 2019 at 9:01 AM Jed Brown via petsc-dev <
>> petsc-dev@mcs.anl.gov> wrote:
>> 
>>> Pierre Jolivet via petsc-dev  writes:
>>> 
>>>> OK, I was wrong about MATAIJ, as Jed already pointed out.
>>>> What about BAIJ or Dense matrices?
>>> 
>>> BAIJ (and SBAIJ) is handled by MatXAIJSetPreallocation.
>>> 
>> 
>> Dense matrices don't need preallocation.
> 
> True, MatMPIDenseSetPreallocation is more like a *PlaceArray or
> Create*WithArray.  The caller needs to be aware of parallelism to create
> and use arrays; I guess Pierre does that but doesn't want the noise at
> the particular call site (though at least the if statements aren't
> needed).

That is exactly what I’m doing, and what I would want. But I can live with a 
couple of if/else statements (should this never get fixed).



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev

> On 15 Jan 2019, at 10:01 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 08:50, Pierre Jolivet  <mailto:pierre.joli...@enseeiht.fr>> wrote:
> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?
> 
> The preallocation methods for BAIJ and Dense both internally use 
> PetscTryMethod.

I don’t see any MatDenseSetPreallocation in master; what are you referring to, 
please?

>  
> What about VecCreateMPIWithArray which seems to explicitly call 
> VecCreate_MPI_Private which explicitly sets the type to VECMPI 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
>  
> <https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522>
>  so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
> 
> That looks problematic. 
> 
> Possibly there should either be an if statement in VecCreateMPIWithArray() 
> associated with the comm size, or there should be a new API 
> VecCreateWithArray() with the same args as VecCreateMPIWithArray.
> 
> As a work around, you could add VecCreateWithArray() in your code base which 
> does the right thing. 

Sure, I can find a workaround in my code, but I still think it is best for 
PETSc not to segfault when a user is doing something they are allowed to do :)

Thanks,
Pierre

>  
> 
> Thanks,
> Pierre  
> 
>> On 15 Jan 2019, at 9:40 AM, Dave May > <mailto:dave.mayhe...@gmail.com>> wrote:
>> 
>> 
>> 
>> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Actually this is not the case.
>> 
>> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
>> Seq or MPI is done internally to MatCreate and is not required in the user 
>> code. Additionally, in your preallocation routine, you can call safely both 
>> (without your comm_size if statement)
>> MatSeqAIJSetPreallocation()
>> and
>> MatMPIAIJSetPreallocation()
>> If the matrix type matches that expected by the API, then it gets executed. 
>> Otherwise nothing happens.
>> 
>> This is done all over the place to enable the matrix type to be a run-time 
>> choice.
>> 
>> For example, see here
>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
>>  
>> <https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ>
>> and look at lines 1511 and 1512. 
>> 
>> Thanks,
>>   Dave
>> 
>> 
>> 
>>  
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong >> <mailto:hzh...@mcs.anl.gov>> wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE)i;CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(B, MATDENSE)i;CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev
OK, I was wrong about MATAIJ, as Jed already pointed out.
What about BAIJ or Dense matrices?
What about VecCreateMPIWithArray which seems to explicitly call 
VecCreate_MPI_Private which explicitly sets the type to VECMPI 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
 
<https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522>
 so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?

Thanks,
Pierre  
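As a side note, the type-agnostic pattern Dave describes below can be sketched 
as follows: with MATAIJ, the Seq/MPI dispatch happens inside MatSetType() based 
on the communicator size, and calling both preallocation routines is safe 
because each one is a no-op when the matrix type does not match. The 
dnnz/onnz arrays are assumed to come from the application.

```c
/* Sketch: preallocation that works for any communicator size.  dnnz/onnz
   are the usual per-row diagonal/off-diagonal nonzero counts. */
ierr = MatCreate(comm, &A);CHKERRQ(ierr);
ierr = MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);              /* not MATMPIAIJ */
ierr = MatSeqAIJSetPreallocation(A, 0, dnnz);CHKERRQ(ierr);          /* no-op if MPI */
ierr = MatMPIAIJSetPreallocation(A, 0, dnnz, 0, onnz);CHKERRQ(ierr); /* no-op if Seq */
```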

> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
> mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
> else MatMPIXSetPreallocation in the rest of my code, which is something I 
> would rather avoid.
> 
> Actually this is not the case.
> 
> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
> Seq or MPI is done internally to MatCreate and is not required in the user 
> code. Additionally, in your preallocation routine, you can call safely both 
> (without your comm_size if statement)
> MatSeqAIJSetPreallocation()
> and
> MatMPIAIJSetPreallocation()
> If the matrix type matches that expected by the API, then it gets executed. 
> Otherwise nothing happens.
> 
> This is done all over the place to enable the matrix type to be a run-time 
> choice.
> 
> For example, see here
> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
> and look at lines 1511 and 1512. 
> 
> Thanks,
>   Dave
> 
> 
> 
>  
> 
> Thanks,
> Pierre
> 
>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong > <mailto:hzh...@mcs.anl.gov>> wrote:
>> 
>> Replace 
>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>> to
>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>> 
>> Replace 
>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>> to
>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>> 
>> Then add
>> MatSeqAIJSetPreallocation()
>> MatSeqDenseSetPreallocation()
>> 
>> Hong
>> 
>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>> mailto:petsc-dev@mcs.anl.gov>> wrote:
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
>> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)
>> 
>> Thanks in advance,
>> Pierre
>> 
> 
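[Editorial sketch] Dave's point above — that with MATAIJ the Seq/MPI switch happens inside MatSetType, and that both preallocation routines can be called unconditionally because the one not matching the actual type is a no-op — can be sketched as follows. This is not code from the thread; the sizes and nonzero counts are invented for illustration:

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);CHKERRQ(ierr);
  /* MATAIJ resolves to MATSEQAIJ on one process and MATMPIAIJ otherwise */
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
  /* Both calls are safe: only the one matching the actual type executes,
     the other silently returns, so no if (comm_size == 1) branch is needed */
  ierr = MatSeqAIJSetPreallocation(A, 5, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 5, NULL, 2, NULL);CHKERRQ(ierr);
  /* ... set values, assemble, and use A ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

The same pattern works for the dense case with MATDENSE plus the MatSeqDense/MatMPIDense preallocation pair that Hong lists.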



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Pierre Jolivet via petsc-dev

> On 15 Jan 2019, at 6:26 AM, Jed Brown  wrote:
> 
> We should repair the MPI matrix implementations so that this works on 
> communicators of size 1

Great, I was worried that I missed somewhere that it is explicitly stated that 
you should not use MPI types on communicators of size 1.

> but why can't you use MatXAIJSetPreallocation().

OK, replace my if then else by if(comm_size == 1) VecCreateSeqWithArray else 
VecCreateMPIWithArray
There is also no MatXBAIJSetPreallocationCSR or MatXDenseSetPreallocation, so 
that's other if then elses.

Or am I missing something?

Thanks,
Pierre

> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html
> 
> Pierre Jolivet via petsc-dev  <mailto:petsc-dev@mcs.anl.gov>> writes:
> 
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>> mailto:petsc-dev@mcs.anl.gov> 
>>> <mailto:petsc-dev@mcs.anl.gov <mailto:petsc-dev@mcs.anl.gov>>> wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
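[Editorial sketch] MatXAIJSetPreallocation, which Jed points to above, is the type-agnostic entry point: it dispatches internally to the Seq/MPI AIJ, BAIJ, and SBAIJ preallocation routines according to the actual matrix type, so no comm_size branch is needed. A minimal sketch (the per-row nonzero counts are invented for illustration):

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       i, n = 10, *dnnz, *onnz;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, n, n, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr); /* type stays a run-time choice */
  ierr = PetscMalloc2(n, &dnnz, n, &onnz);CHKERRQ(ierr);
  for (i = 0; i < n; ++i) { dnnz[i] = 3; onnz[i] = 2; } /* illustrative counts */
  /* bs = 1; the last two (upper-triangular) arrays are only used for SBAIJ */
  ierr = MatXAIJSetPreallocation(A, 1, dnnz, onnz, NULL, NULL);CHKERRQ(ierr);
  ierr = PetscFree2(dnnz, onnz);CHKERRQ(ierr);
  /* ... set values, assemble, and use A ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

As Pierre notes, this covers the AIJ family but not the dense case, and there is no CSR variant for BAIJ, which is where the remaining if/else branches come from.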



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Pierre Jolivet via petsc-dev
Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
that is not an option I have right now)”
All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
else MatMPIXSetPreallocation in the rest of my code, which is something I would 
rather avoid.

Thanks,
Pierre

> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
> 
> Replace 
> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
> to
> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
> 
> Replace 
> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
> to
> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
> 
> Then add
> MatSeqAIJSetPreallocation()
> MatSeqDenseSetPreallocation()
> 
> Hong
> 
> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
> mailto:petsc-dev@mcs.anl.gov>> wrote:
> Hello,
> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)
> 
> Thanks in advance,
> Pierre
> 



[petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-14 Thread Pierre Jolivet via petsc-dev
Hello,
Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
that is not an option I have right now)

Thanks in advance,
Pierre



ex15.c
Description: Binary data


Re: [petsc-dev] Faulty logic in MatHYPRESetPreallocation?

2019-01-03 Thread Pierre Jolivet via petsc-dev
Stefano,
Please find attached a trivial example that will segfault from within hypre if you put back the (IMHO unnecessary) hypre_AuxParCSRMatrixNeedAux(aux_matrix) = 1 in PR #1305. You need to use two processes or more.
Thanks,
Pierre

ex15.c
Description: Binary data
On 29 Dec 2018, at 10:13 AM, Stefano Zampini <stefano.zamp...@gmail.com> wrote:
Barry,
Probably this is an oversight from me. When I coded the MATHYPRE class, I did not pay much attention to the preallocation. Also, hypre did not seem very robust to matrix reuse at the time. Take a look at the comments at line 85 in ex115 from the mat tests. Probably Pierre's fix will cure that problem too.
Stefano
On Sat 22 Dec 2018, 06:30, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
   Stefano,

    Does this look right to you?

    Thanks

    Barry


> On Dec 17, 2018, at 3:18 AM, Pierre Jolivet via petsc-dev <petsc-dev@mcs.anl.gov> wrote:
> 
> Hello,
> I am not sure the hypre routines are called in the right order in MatHYPRESetPreallocation.
> Indeed, attached, you’ll find two Massif logs, one with the current order (massif.out.bad), and the other (massif.out.better) with the attached patch applied.
> In the former, you can see at the snapshot #31 that there is a call to hypre_CTAlloc in hypre_AuxParCSRMatrixInitialize (aux_parcsr_matrix.c:163).
> This means that hypre is allocating memory as if it was not aware of the number of nonzero per row, even though I supply in my application the two arrays dnnz and onnz.
> 
> Am I missing something?
> 
> Thanks,
> Pierre
> 
> 
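[Editorial sketch] For reference, the preallocation call under discussion — supplying per-row dnnz/onnz arrays so that hypre can size its auxiliary matrix exactly instead of over-allocating — looks roughly like this (a sketch, not code from the thread; the counts are illustrative):

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       i, n = 10, *dnnz, *onnz;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, n, n, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(A, MATHYPRE);CHKERRQ(ierr);
  ierr = PetscMalloc2(n, &dnnz, n, &onnz);CHKERRQ(ierr);
  for (i = 0; i < n; ++i) { dnnz[i] = 3; onnz[i] = 2; } /* illustrative counts */
  /* with non-NULL arrays, the scalar dnz/onz arguments are not used */
  ierr = MatHYPRESetPreallocation(A, 0, dnnz, 0, onnz);CHKERRQ(ierr);
  ierr = PetscFree2(dnnz, onnz);CHKERRQ(ierr);
  /* ... set values, assemble, and use A ... */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

The bug Pierre's Massif logs expose is in the ordering of the hypre calls inside this routine, not in this user-facing pattern: the dnnz/onnz information was being discarded before hypre_AuxParCSRMatrixInitialize allocated its buffers.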




Re: [petsc-dev] MatNest and inner KSP

2018-12-19 Thread Pierre Jolivet via petsc-dev



> On 19 Dec 2018, at 9:46 AM, Stefano Zampini  wrote:
> 
> Pierre, 
> 
> The details of the inner solvers for PCFIELDSPLIT are not known if you don’t 
> call PCSetUp, so the cascade approach should be preferred.

OK, great! It was actually not that big of a deal to implement.

Thanks,
Pierre

> After PCSetUp, you can access the inner solves, see e.g.  
> https://bitbucket.org/petsc/petsc/src/4ddd364d454d7bce974e92e0b5bd4f4516dbff02/src/ksp/pc/impls/bddc/bddc.c#lines-2715
>> On Dec 19, 2018, at 9:01 AM, Pierre Jolivet via petsc-dev 
>>  wrote:
>> 
>> Hello,
>> I have a PCFIELDSPLIT where the fields are defined by a MATNEST 
>> “decomposition”.
>> Right now, I am first defining the KSP of the diagonal blocks of the 
>> MATNEST, which are themselves PCFIELDSPLIT, then creating the Mat itself, 
>> and eventually I’d like to solve systems with the complete MATNEST.
>> However, the KSP and PC options of the “first level” inner solvers are not 
>> propagated to the global outer solver.
>> Can this be achieved somehow, or do I always need to set the options in the 
>> opposite direction (first outer solver, then inner solvers, e.g., through a 
>> cascade of PCFieldSplitGetSubKSP)?
>> 
>> Thanks in advance,
>> Pierre
> 
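[Editorial sketch] The cascade agreed on above — set up the outer PCFIELDSPLIT first, then reach the inner solvers with PCFieldSplitGetSubKSP once setup has run — can be sketched as follows. The 2x2 identity blocks and the GMRES choice are invented stand-ins for the real diagonal blocks and inner configuration:

```c
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            blocks[4] = {NULL, NULL, NULL, NULL}, A;
  IS             rows[2];
  KSP            ksp, *subksp;
  PC             pc;
  PetscInt       i, nsplits;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  /* diagonal blocks (0,0) and (1,1); identities stand in for the real operators */
  for (i = 0; i < 4; i += 3) {
    ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 2, 2, 1, NULL, 0, NULL, &blocks[i]);CHKERRQ(ierr);
    ierr = MatSetValue(blocks[i], 0, 0, 1.0, INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatSetValue(blocks[i], 1, 1, 1.0, INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(blocks[i], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(blocks[i], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &A);CHKERRQ(ierr);
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = MatNestGetISs(A, rows, NULL);CHKERRQ(ierr); /* fields from the nest layout */
  ierr = PCFieldSplitSetIS(pc, "0", rows[0]);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "1", rows[1]);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr); /* inner solvers exist only after setup */
  ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nsplits; ++i) {
    /* configure each inner solver; recurse one level deeper via
       KSPGetPC(subksp[i], ...) if the block solver is itself a PCFIELDSPLIT */
    ierr = KSPSetType(subksp[i], KSPGMRES);CHKERRQ(ierr);
  }
  ierr = PetscFree(subksp);CHKERRQ(ierr); /* caller frees the returned array */
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  for (i = 0; i < 4; i += 3) { ierr = MatDestroy(&blocks[i]);CHKERRQ(ierr); }
  ierr = PetscFinalize();
  return ierr;
}
```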



[petsc-dev] MatNest and inner KSP

2018-12-18 Thread Pierre Jolivet via petsc-dev
Hello,
I have a PCFIELDSPLIT where the fields are defined by a MATNEST “decomposition”.
Right now, I am first defining the KSP of the diagonal blocks of the MATNEST, 
which are themselves PCFIELDSPLIT, then creating the Mat itself, and eventually 
I’d like to solve systems with the complete MATNEST.
However, the KSP and PC options of the “first level” inner solvers are not 
propagated to the global outer solver.
Can this be achieved somehow, or do I always need to set the options in the 
opposite direction (first outer solver, then inner solvers, e.g., through a 
cascade of PCFieldSplitGetSubKSP)?

Thanks in advance,
Pierre