Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Smith, Barry F. via petsc-dev

   I concur that there should be a VecCreateWithArray() that automatically uses a 
Seq vector for one process and an MPI vector for multiple processes.
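
A minimal sketch of what such a wrapper could look like (hypothetical; this 
routine does not exist in PETSc, and the signature below simply mirrors 
VecCreateMPIWithArray()):

#include <petscvec.h>

PetscErrorCode VecCreateWithArray(MPI_Comm comm, PetscInt bs, PetscInt n, PetscInt N, const PetscScalar array[], Vec *vv)
{
  PetscMPIInt    size;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* Dispatch on the communicator size: Seq vector on one process, MPI otherwise */
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);
  if (size == 1) {
    ierr = VecCreateSeqWithArray(comm, bs, n, array, vv);CHKERRQ(ierr);
  } else {
    ierr = VecCreateMPIWithArray(comm, bs, n, N, array, vv);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}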

   Barry


> On Jan 15, 2019, at 3:01 AM, Dave May via petsc-dev wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 08:50, Pierre Jolivet wrote:
> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?
> 
> The preallocation methods for BAIJ and Dense both internally use 
> PetscTryMethod.
> 
>  
> What about VecCreateMPIWithArray which seems to explicitly call 
> VecCreate_MPI_Private which explicitly sets the type to VECMPI 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
>  so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
> 
> That looks problematic. 
> 
> Possibly there should either be an if statement in VecCreateMPIWithArray() 
> associated with the comm size, or there should be a new API 
> VecCreateWithArray() with the same args as VecCreateMPIWithArray.
> 
> As a workaround, you could add VecCreateWithArray() in your code base which 
> does the right thing. 
> 
>  
> 
> Thanks,
> Pierre  
> 
>> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
>> 
>> 
>> 
>> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev wrote:
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Actually this is not the case.
>> 
>> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
>> Seq or MPI is done internally to MatCreate and is not required in the user 
>> code. Additionally, in your preallocation routine, you can call safely both 
>> (without your comm_size if statement)
>> MatSeqAIJSetPreallocation()
>> and
>> MatMPIAIJSetPreallocation()
>> If the matrix type matches that expected by the API, then it gets executed. 
>> Otherwise nothing happens.
>> 
>> This is done all over the place to enable the matrix type to be a run-time 
>> choice.
>> 
>> For example, see here
>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
>> and look at lines 1511 and 1512. 
>> 
>> Thanks,
>>   Dave
>> 
>> 
>> 
>>  
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev

> On 15 Jan 2019, at 10:01 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 08:50, Pierre Jolivet wrote:
> OK, I was wrong about MATAIJ, as Jed already pointed out.
> What about BAIJ or Dense matrices?
> 
> The preallocation methods for BAIJ and Dense both internally use 
> PetscTryMethod.

I don’t see any MatDenseSetPreallocation in master; what are you referring to, 
please?

>  
> What about VecCreateMPIWithArray which seems to explicitly call 
> VecCreate_MPI_Private which explicitly sets the type to VECMPI 
> https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
>  so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
> 
> That looks problematic. 
> 
> Possibly there should either be an if statement in VecCreateMPIWithArray() 
> associated with the comm size, or there should be a new API 
> VecCreateWithArray() with the same args as VecCreateMPIWithArray.
> 
> As a workaround, you could add VecCreateWithArray() in your code base which 
> does the right thing. 

Sure, I can find a workaround in my code, but I still think it is best for PETSc 
not to segfault when a user is doing something they are allowed to do :)

Thanks,
Pierre

>  
> 
> Thanks,
> Pierre  
> 
>> On 15 Jan 2019, at 9:40 AM, Dave May wrote:
>> 
>> 
>> 
>> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)”
>> All my Mat are of type MATMPIX. Switching to MATX here as you suggested 
>> would mean that I need to add a bunch of if(comm_size == 1) 
>> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, 
>> which is something I would rather avoid.
>> 
>> Actually this is not the case.
>> 
>> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
>> Seq or MPI is done internally to MatCreate and is not required in the user 
>> code. Additionally, in your preallocation routine, you can call safely both 
>> (without your comm_size if statement)
>> MatSeqAIJSetPreallocation()
>> and
>> MatMPIAIJSetPreallocation()
>> If the matrix type matches that expected by the API, then it gets executed. 
>> Otherwise nothing happens.
>> 
>> This is done all over the place to enable the matrix type to be a run-time 
>> choice.
>> 
>> For example, see here
>> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
>> and look at lines 1511 and 1512. 
>> 
>> Thanks,
>>   Dave
>> 
>> 
>> 
>>  
>> 
>> Thanks,
>> Pierre
>> 
>>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong wrote:
>>> 
>>> Replace 
>>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>>> 
>>> Replace 
>>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>>> to
>>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>>> 
>>> Then add
>>> MatSeqAIJSetPreallocation()
>>> MatSeqDenseSetPreallocation()
>>> 
>>> Hong
>>> 
>>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>>> <petsc-dev@mcs.anl.gov> wrote:
>>> Hello,
>>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program 
>>> could run _with a single_ process? (I know, I could switch to 
>>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>> 
>>> Thanks in advance,
>>> Pierre
>>> 
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Pierre Jolivet via petsc-dev
OK, I was wrong about MATAIJ, as Jed already pointed out.
What about BAIJ or Dense matrices?
What about VecCreateMPIWithArray which seems to explicitly call 
VecCreate_MPI_Private which explicitly sets the type to VECMPI 
https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522
so that I cannot do a MatMult with a MATAIJ with a communicator of size 1?
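
For reference, a minimal sketch of the combination being described, to be run on 
a single process (the sizes here are made up for illustration): a MATAIJ matrix, 
which then resolves to SeqAIJ, applied to vectors that VecCreateMPIWithArray() 
forces to be VECMPI.

#include <petsc.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y;
  PetscScalar    *xarr, *yarr;
  PetscInt       n = 4;                               /* placeholder size */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);         /* becomes MATSEQAIJ on one process */
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = PetscMalloc2(n, &xarr, n, &yarr);CHKERRQ(ierr);
  /* VecCreateMPIWithArray() always yields VECMPI, even on a size-1 communicator */
  ierr = VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, n, n, xarr, &x);CHKERRQ(ierr);
  ierr = VecCreateMPIWithArray(PETSC_COMM_WORLD, 1, n, n, yarr, &y);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = MatMult(A, x, y);CHKERRQ(ierr);              /* the MATAIJ + VECMPI combination in question */
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFree2(xarr, yarr);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}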

Thanks,
Pierre  

> On 15 Jan 2019, at 9:40 AM, Dave May  wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev 
> <petsc-dev@mcs.anl.gov> wrote:
> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but 
> that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested would 
> mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation 
> else MatMPIXSetPreallocation in the rest of my code, which is something I 
> would rather avoid.
> 
> Actually this is not the case.
> 
> If you do as Hong suggests and use MATAIJ then the switch for comm_size for 
> Seq or MPI is done internally to MatCreate and is not required in the user 
> code. Additionally, in your preallocation routine, you can call safely both 
> (without your comm_size if statement)
> MatSeqAIJSetPreallocation()
> and
> MatMPIAIJSetPreallocation()
> If the matrix type matches that expected by the API, then it gets executed. 
> Otherwise nothing happens.
> 
> This is done all over the place to enable the matrix type to be a run-time 
> choice.
> 
> For example, see here
> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
> and look at lines 1511 and 1512. 
> 
> Thanks,
>   Dave
> 
> 
> 
>  
> 
> Thanks,
> Pierre
> 
>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong wrote:
>> 
>> Replace 
>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>> to
>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>> 
>> Replace 
>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>> to
>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>> 
>> Then add
>> MatSeqAIJSetPreallocation()
>> MatSeqDenseSetPreallocation()
>> 
>> Hong
>> 
>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev 
>> <petsc-dev@mcs.anl.gov> wrote:
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and 
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could 
>> run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but 
>> that is not an option I have right now)
>> 
>> Thanks in advance,
>> Pierre
>> 
> 



Re: [petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

2019-01-15 Thread Dave May via petsc-dev
On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:

> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense,
> but that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested
> would mean that I need to add a bunch of if(comm_size == 1)
> MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my
> code, which is something I would rather avoid.
>

Actually this is not the case.

If you do as Hong suggests and use MATAIJ then the switch for comm_size for
Seq or MPI is done internally to MatCreate and is not required in the user
code. Additionally, in your preallocation routine, you can call safely both
(without your comm_size if statement)
MatSeqAIJSetPreallocation()
and
MatMPIAIJSetPreallocation()
If the matrix type matches that expected by the API, then it gets executed.
Otherwise nothing happens.

This is done all over the place to enable the matrix type to be a run-time
choice.

For example, see here
https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
and look at lines 1511 and 1512.
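
In user code the same pattern boils down to something like the sketch below (the
sizes and fill values are placeholders, not taken from that example):

#include <petscmat.h>

PetscErrorCode CreatePreallocatedAIJ(MPI_Comm comm, PetscInt n, PetscInt nz, Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJ);CHKERRQ(ierr);       /* SeqAIJ on 1 process, MPIAIJ otherwise */
  /* Both calls are safe without a comm_size branch: the one matching the
     run-time type is executed, the other silently does nothing */
  ierr = MatSeqAIJSetPreallocation(*A, nz, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, nz, NULL, nz, NULL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}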

Thanks,
  Dave





>
> Thanks,
> Pierre
>
> On 14 Jan 2019, at 10:30 PM, Zhang, Hong  wrote:
>
> Replace
> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
> to
> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>
> Replace
> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
> to
> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>
> Then add
> MatSeqAIJSetPreallocation()
> MatSeqDenseSetPreallocation()
>
> Hong
>
> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev <
> petsc-dev@mcs.anl.gov> wrote:
>
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense  and
>> MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program
>> could run _with a single_ process? (I know, I could switch to
>> SeqAIJ_SeqDense, but that is not an option I have right now)
>>
>> Thanks in advance,
>> Pierre
>>
>>
>