Re: [petsc-users] Understanding preallocation for MPI

2017-07-16 Thread Barry Smith

> On Jul 16, 2017, at 9:53 PM, Florian Lindner  wrote:
> 
> Hi Barry,
> 
> On 17.07.2017 at 10:42, Barry Smith wrote:
>> The intention with the AO is that you map everything to the PETSc ordering and 
>> then you just work in the PETSc ordering completely; this way there is no 
>> "sending around with MPI preallocation information". That is, your matrix 
>> assembly and preallocation (as well as vectors) always work in the PETSc 
>> numbering. In the PETSc ordering there is no need to send information 
>> between processes about row allocations, since the information is always on 
>> the correct process.
> 
> The problem I have is that the information needed to preallocate AO row 1 => 
> PO row 8 is located on rank 0 of my
> application.
> 
> So I either need to send the data around to be able to generate the prealloc 
> information of PO row 8 on rank 1 OR send
> the preallocation data to prealloc PO row 8 on rank 1, albeit generated on 
> rank 0.
> 
> Is that correct?

  Yes, you are supposed to move your data around so that the information for the 
first rows is on the first process, the information for the next rows is on the 
second process, etc. You need to do this for efficiency anyway; this is often 
the most tedious part of the code to write, which is why many users use DMPlex 
or libMesh or Deal.ii or Firedrake or some other mesh/finite element package, 
since they manage that process and then use PETSc to solve the systems.


   Barry
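
A minimal, illustrative sketch of Florian's "option 2": every rank accumulates its d/o
counts into global-length arrays indexed by the PETSc row number, and an MPI_Reduce_scatter
then hands each rank exactly the slice of rows it owns. The helper name, the vector types
and the use of MPI_Reduce_scatter are assumptions for illustration only; redistributing the
underlying data, as Barry suggests above, is usually the better long-term solution.

#include <petscmat.h>
#include <vector>

// Sum per-row nonzero counts computed on arbitrary ranks and deliver each
// block of rows to the rank that owns it in the PETSc ordering (sketch).
static PetscErrorCode ReducePreallocCounts(Mat A,
                                           std::vector<PetscInt> &d_full, // length = global rows
                                           std::vector<PetscInt> &o_full, // length = global rows
                                           std::vector<PetscInt> &d_nnz,  // out: local rows
                                           std::vector<PetscInt> &o_nnz)  // out: local rows
{
  PetscErrorCode ierr;
  MPI_Comm       comm;
  PetscMPIInt    size;
  PetscInt       rstart, rend;
  const PetscInt *ranges;

  ierr = PetscObjectGetComm((PetscObject)A, &comm);CHKERRQ(ierr);
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  ierr = MatGetOwnershipRanges(A, &ranges);CHKERRQ(ierr);       // size+1 entries
  std::vector<int> recvcounts(size);
  for (int r = 0; r < size; r++) recvcounts[r] = (int)(ranges[r + 1] - ranges[r]);

  d_nnz.resize(rend - rstart);
  o_nnz.resize(rend - rstart);
  // Every rank contributes its counts; the owner of each row block receives the sum.
  ierr = MPI_Reduce_scatter(d_full.data(), d_nnz.data(), recvcounts.data(), MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  ierr = MPI_Reduce_scatter(o_full.data(), o_nnz.data(), recvcounts.data(), MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  return 0;
}

The resulting arrays can be passed straight to MatMPIAIJSetPreallocation(A, 0, d_nnz.data(), 0, o_nnz.data()).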

> 
> Best,
> Florian
> 
> 
>> 
>>   Barry
>> 
>>> On Jul 16, 2017, at 9:26 PM, Florian Lindner  wrote:
>>> 
>>> Hello,
>>> 
>>> On 11.07.2017 at 02:45, Barry Smith wrote:
 
 You might consider using 
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateInitialize.html
  and 
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateSetLocal.html#MatPreallocateSetLocal
  and friends
 
  These take out some of the busywork of setting the preallocation arrays. 
 They are macros in petscmat.h so even if you don't use them you can see 
 the code they use.
>>> 
>>> Thanks for your input, I took a look at it and learned from it.
>>> 
>>> I have this preallocation routine which works perfectly for an index set on 
>>> the columns and none (resp. an identity index
>>> set) on the rows:
>>> 
>>> void doPreallocation(Mat &A, ISLocalToGlobalMapping &ISmapping, double 
>>> nz_ratio, int nz_number)
>>> {
>>> PetscInt d_nnz[local_rows], o_nnz[local_rows];
>>> 
>>> const PetscInt *mapIndizes;
>>> ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
>>> 
>>> for (int row = row_range_start; row < row_range_end; row++) {
>>>   // PetscInt relative_row = mapIndizes[row- row_range_start];
>>>   PetscInt relative_row = row - row_range_start;
>>>   d_nnz[relative_row] = 0;
>>>   o_nnz[relative_row] = 0;
>>>   int colStart, colEnd;
>>>   std::tie(colStart, colEnd) = colBorders(row);
>>>   for (int col = 0; col < global_cols; col++) {
>>> if (col >= colStart and col < colEnd) {
>>>   int petsc_col = mapIndizes[col];
>>>   if (petsc_col >= col_range_start and petsc_col < col_range_end) {
>>> d_nnz[relative_row]++;
>>>   } else {
>>> o_nnz[relative_row]++;
>>>   }
>>> }
>>>   }
>>> }
>>> ierr = ISLocalToGlobalMappingRestoreIndices(ISmapping, &mapIndizes); 
>>> CHKERRV(ierr);
>>> ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRV(ierr); // 
>>> works only on minimum 2 ranks
>>> }
>>> 
>>> colStart and colEnd give a defined number of diagonals for the matrix, just 
>>> for testing. *_range_* are from the PETSc
>>> routines.
>>> 
>>> But I'm puzzled about how to do it when I also have a (the same) index set for 
>>> the rows.
>>> 
>>> Let's say I have 2 procs, a global size of 10 and a mapping [0, 8, 9, 1, 3] 
>>> on proc 0 and [2, 5, 4, 7, 6] on proc 1.
>>> 
>>> row_range is [0, 5] on the first proc.
>>> 
>>> On the first row iteration I generate data for row 0 (application ordering, 
>>> AO), which is mapped to row 0 (PETSc ordering,
>>> PO); I add these nnz to o_/d_nnz[0].
>>> 
>>> On the second row iteration I generate data for row 1 (AO), which is mapped 
>>> to row 8 (PO). As with the columns, I expect
>>> the nnz numbering to be in PO, which means either
>>> 
>>> 1) nnz[8] (which does not exist), which is what I would do in the commented-out 
>>> line.
>>> 2) nnz[4] on the second proc (which is unreachable from the first proc)
>>> 3) nnz[1] on the first proc, thus in AO, which produces new mallocs.
>>> 
>>> Option 2 still seems the most probable to me, but before implementing sending the 
>>> stuff around using MPI, I wanted to ask you
>>> whether that is the correct way. If it needs to be done that way, is there some 
>>> PETSc magic which helps with exchanging that array?
>>> 
>>> If the answer lies within the MatPreallocate* routines, I failed to see it :-/
>>> 
>>> Best Thanks,
>>> Florian
>>> 
>>> 
> On Jul 10, 2017, at 3:22 AM, Florian Lindner  wrote:
> 
> Hey,
> 
>

Re: [petsc-users] Understanding preallocation for MPI

2017-07-16 Thread Florian Lindner
Hi Barry,

On 17.07.2017 at 10:42, Barry Smith wrote:
> The intention with the AO is that you map everything to the PETSc ordering and 
> then you just work in the PETSc ordering completely; this way there is no 
> "sending around with MPI preallocation information". That is, your matrix 
> assembly and preallocation (as well as vectors) always work in the PETSc 
> numbering. In the PETSc ordering there is no need to send information between 
> processes about row allocations, since the information is always on the 
> correct process.

The problem I have is that the information needed to preallocate AO row 1 => 
PO row 8 is located on rank 0 of my
application.

So I either need to send the data around to be able to generate the prealloc 
information of PO row 8 on rank 1 OR send
the preallocation data to prealloc PO row 8 on rank 1, albeit generated on rank 
0.

Is that correct?

Best,
Florian


> 
>Barry
> 
>> On Jul 16, 2017, at 9:26 PM, Florian Lindner  wrote:
>>
>> Hello,
>>
>> On 11.07.2017 at 02:45, Barry Smith wrote:
>>>
>>>  You might consider using 
>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateInitialize.html
>>>  and 
>>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateSetLocal.html#MatPreallocateSetLocal
>>>  and friends
>>>
>>>   These take out some of the busywork of setting the preallocation arrays. 
>>> They are macros in petscmat.h so even if you don't use them you can see the 
>>> code they use.
>>
>> Thanks for your input, I took a look at it and learned from it.
>>
>> I have this preallocation routine which works perfectly for an index set on 
>> the columns and none (resp. an identity index
>> set) on the rows:
>>
>> void doPreallocation(Mat &A, ISLocalToGlobalMapping &ISmapping, double 
>> nz_ratio, int nz_number)
>> {
>>  PetscInt d_nnz[local_rows], o_nnz[local_rows];
>>
>>  const PetscInt *mapIndizes;
>>  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
>>
>>  for (int row = row_range_start; row < row_range_end; row++) {
>>// PetscInt relative_row = mapIndizes[row- row_range_start];
>>PetscInt relative_row = row - row_range_start;
>>d_nnz[relative_row] = 0;
>>o_nnz[relative_row] = 0;
>>int colStart, colEnd;
>>std::tie(colStart, colEnd) = colBorders(row);
>>for (int col = 0; col < global_cols; col++) {
>>  if (col >= colStart and col < colEnd) {
>>int petsc_col = mapIndizes[col];
>>if (petsc_col >= col_range_start and petsc_col < col_range_end) {
>>  d_nnz[relative_row]++;
>>} else {
>>  o_nnz[relative_row]++;
>>}
>>  }
>>}
>>  }
>>  ierr = ISLocalToGlobalMappingRestoreIndices(ISmapping, &mapIndizes); 
>> CHKERRV(ierr);
>>  ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRV(ierr); // 
>> works only on minimum 2 ranks
>> }
>>
>> colStart and colEnd give a defined number of diagonals for the matrix, just 
>> for testing. *_range_* are from the PETSc
>> routines.
>>
>> But I'm puzzled about how to do it when I also have a (the same) index set for the 
>> rows.
>>
>> Let's say I have 2 procs, a global size of 10 and a mapping [0, 8, 9, 1, 3] 
>> on proc 0 and [2, 5, 4, 7, 6] on proc 1.
>>
>> row_range is [0, 5] on the first proc.
>>
>> On the first row iteration I generate data for row 0 (application ordering, 
>> AO), which is mapped to row 0 (PETSc ordering,
>> PO); I add these nnz to o_/d_nnz[0].
>>
>> On the second row iteration I generate data for row 1 (AO), which is mapped 
>> to row 8 (PO). As with the columns, I expect
>> the nnz numbering to be in PO, which means either
>>
>> 1) nnz[8] (which does not exist), which is what I would do in the commented-out 
>> line.
>> 2) nnz[4] on the second proc (which is unreachable from the first proc)
>> 3) nnz[1] on the first proc, thus in AO, which produces new mallocs.
>>
>> Option 2 still seems the most probable to me, but before implementing sending the 
>> stuff around using MPI, I wanted to ask you
>> whether that is the correct way. If it needs to be done that way, is there some 
>> PETSc magic which helps with exchanging that array?
>>
>> If the answer lies within the MatPreallocate* routines, I failed to see it :-/
>>
>> Best Thanks,
>> Florian
>>
>>
 On Jul 10, 2017, at 3:22 AM, Florian Lindner  wrote:

 Hey,

 one more question about preallocation:

 I can determine if a column index is diagonal or off-diagonal using that 
 code

 if (col >= col_range_start and col < col_range_end)
   d_nnz[relative_row]++;
 else
   o_nnz[relative_row]++;


My code, however, uses index sets from which an ISLocalToGlobalMapping is 
created:

 // Creates a mapping from permutation and use that for the cols
 ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, permutation.data(), 
 PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
 ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
 ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV

Re: [petsc-users] Understanding preallocation for MPI

2017-07-16 Thread Barry Smith

   Florian,

The intention with the AO is that you map everything to the PETSc ordering and 
then you just work in the PETSc ordering completely; this way there is no 
"sending around with MPI preallocation information". That is, your matrix 
assembly and preallocation (as well as vectors) always work in the PETSc 
numbering. In the PETSc ordering there is no need to send information between 
processes about row allocations, since the information is always on the correct 
process.

   Barry
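
A small sketch of that workflow with PETSc's AO object, using the two-rank permutation from
this thread; the variable names and the assumption of exactly two ranks are for illustration
only:

PetscErrorCode ierr;
PetscMPIInt    rank;
AO             ao;

ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
// Rank 0 owns application rows 0..4, rank 1 owns 5..9; the petsc arrays give the
// PETSc-ordering index of each application row (the permutation discussed below).
PetscInt app0[]   = {0, 1, 2, 3, 4}, petsc0[] = {0, 8, 9, 1, 3};
PetscInt app1[]   = {5, 6, 7, 8, 9}, petsc1[] = {2, 5, 4, 7, 6};
ierr = AOCreateBasic(PETSC_COMM_WORLD, 5, rank ? app1 : app0, rank ? petsc1 : petsc0, &ao);CHKERRQ(ierr);

// Translate application row numbers into the PETSc ordering in place; on rank 0
// rows[] becomes {8, 1}, and preallocation/assembly then uses only these values.
PetscInt rows[] = {1, 3};
ierr = AOApplicationToPetsc(ao, 2, rows);CHKERRQ(ierr);
ierr = AODestroy(&ao);CHKERRQ(ierr);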

> On Jul 16, 2017, at 9:26 PM, Florian Lindner  wrote:
> 
> Hello,
> 
> On 11.07.2017 at 02:45, Barry Smith wrote:
>> 
>>  You might consider using 
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateInitialize.html
>>  and 
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateSetLocal.html#MatPreallocateSetLocal
>>  and friends
>> 
>>   These take out some of the busywork of setting the preallocation arrays. 
>> They are macros in petscmat.h so even if you don't use them you can see the 
>> code they use.
> 
> Thanks for your input, I took a look at it and learned from it.
> 
> I have this preallocation routine which works perfectly for an index set on 
> the columns and none (resp. an identity index
> set) on the rows:
> 
> void doPreallocation(Mat &A, ISLocalToGlobalMapping &ISmapping, double 
> nz_ratio, int nz_number)
> {
>  PetscInt d_nnz[local_rows], o_nnz[local_rows];
> 
>  const PetscInt *mapIndizes;
>  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
> 
>  for (int row = row_range_start; row < row_range_end; row++) {
>// PetscInt relative_row = mapIndizes[row- row_range_start];
>PetscInt relative_row = row - row_range_start;
>d_nnz[relative_row] = 0;
>o_nnz[relative_row] = 0;
>int colStart, colEnd;
>std::tie(colStart, colEnd) = colBorders(row);
>for (int col = 0; col < global_cols; col++) {
>  if (col >= colStart and col < colEnd) {
>int petsc_col = mapIndizes[col];
>if (petsc_col >= col_range_start and petsc_col < col_range_end) {
>  d_nnz[relative_row]++;
>} else {
>  o_nnz[relative_row]++;
>}
>  }
>}
>  }
>  ierr = ISLocalToGlobalMappingRestoreIndices(ISmapping, &mapIndizes); 
> CHKERRV(ierr);
>  ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRV(ierr); // 
> works only on minimum 2 ranks
> }
> 
> colStart and colEnd give a defined number of diagonals for the matrix, just 
> for testing. *_range_* are from the PETSc
> routines.
> 
> But I'm puzzled about how to do it when I also have a (the same) index set for the 
> rows.
> 
> Let's say I have 2 procs, a global size of 10 and a mapping [0, 8, 9, 1, 3] 
> on proc 0 and [2, 5, 4, 7, 6] on proc 1.
> 
> row_range is [0, 5] on the first proc.
> 
> On the first row iteration I generate data for row 0 (application ordering, 
> AO), which is mapped to row 0 (PETSc ordering,
> PO); I add these nnz to o_/d_nnz[0].
> 
> On the second row iteration I generate data for row 1 (AO), which is mapped to 
> row 8 (PO). As with the columns, I expect
> the nnz numbering to be in PO, which means either
> 
> 1) nnz[8] (which does not exist), which is what I would do in the commented-out 
> line.
> 2) nnz[4] on the second proc (which is unreachable from the first proc)
> 3) nnz[1] on the first proc, thus in AO, which produces new mallocs.
> 
> Option 2 still seems the most probable to me, but before implementing sending the 
> stuff around using MPI, I wanted to ask you
> whether that is the correct way. If it needs to be done that way, is there some 
> PETSc magic which helps with exchanging that array?
> 
> If the answer lies within the MatPreallocate* routines, I failed to see it :-/
> 
> Best Thanks,
> Florian
> 
> 
>>> On Jul 10, 2017, at 3:22 AM, Florian Lindner  wrote:
>>> 
>>> Hey,
>>> 
>>> one more question about preallocation:
>>> 
>>> I can determine if a column index is diagonal or off-diagonal using that 
>>> code
>>> 
>>> if (col >= col_range_start and col < col_range_end)
>>>   d_nnz[relative_row]++;
>>> else
>>>   o_nnz[relative_row]++;
>>> 
>>> 
>>> My code, however, uses index sets from which an ISLocalToGlobalMapping is 
>>> created:
>>> 
>>> // Creates a mapping from permutation and use that for the cols
>>> ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, permutation.data(), 
>>> PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>>> ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>>> ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS 
>>> from all processors
>>> ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping); CHKERRV(ierr); 
>>> // Make it a mapping
>>> 
>>> // Create an identity mapping and use that for the rows of A.
>>> ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1, 
>>> &ISidentity); CHKERRV(ierr);
>>> ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>>> ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>>> ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlo

Re: [petsc-users] Understanding preallocation for MPI

2017-07-16 Thread Florian Lindner
Hello,

On 11.07.2017 at 02:45, Barry Smith wrote:
> 
>   You might consider using 
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateInitialize.html
>  and 
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateSetLocal.html#MatPreallocateSetLocal
>  and friends
> 
>These take out some of the busywork of setting the preallocation arrays. 
> They are macros in petscmat.h so even if you don't use them you can see the 
> code they use.

Thanks for your input, I took a look at it and learned from it.

I have this preallocation routine which works perfectly for an index set on the 
columns and none (resp. an identity index
set) on the rows:

void doPreallocation(Mat &A, ISLocalToGlobalMapping &ISmapping, double 
nz_ratio, int nz_number)
{
  PetscInt d_nnz[local_rows], o_nnz[local_rows];

  const PetscInt *mapIndizes;
  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);

  for (int row = row_range_start; row < row_range_end; row++) {
// PetscInt relative_row = mapIndizes[row- row_range_start];
PetscInt relative_row = row - row_range_start;
d_nnz[relative_row] = 0;
o_nnz[relative_row] = 0;
int colStart, colEnd;
std::tie(colStart, colEnd) = colBorders(row);
for (int col = 0; col < global_cols; col++) {
  if (col >= colStart and col < colEnd) {
int petsc_col = mapIndizes[col];
if (petsc_col >= col_range_start and petsc_col < col_range_end) {
  d_nnz[relative_row]++;
} else {
  o_nnz[relative_row]++;
}
  }
}
  }
  ierr = ISLocalToGlobalMappingRestoreIndices(ISmapping, &mapIndizes); 
CHKERRV(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRV(ierr); // 
works only on minimum 2 ranks
}

colStart and colEnd give a defined number of diagonals for the matrix, just for 
testing. *_range_* are from the PETSc
routines.

But I'm puzzled about how to do it when I also have a (the same) index set for the 
rows.

Let's say I have 2 procs, a global size of 10 and a mapping [0, 8, 9, 1, 3] on 
proc 0 and [2, 5, 4, 7, 6] on proc 1.

row_range is [0, 5] on the first proc.

On the first row iteration I generate data for row 0 (application ordering, AO), 
which is mapped to row 0 (PETSc ordering,
PO); I add these nnz to o_/d_nnz[0].

On the second row iteration I generate data for row 1 (AO), which is mapped to 
row 8 (PO). As with the columns, I expect
the nnz numbering to be in PO, which means either

1) nnz[8] (which does not exist), which is what I would do in the commented-out line.
2) nnz[4] on the second proc (which is unreachable from the first proc)
3) nnz[1] on the first proc, thus in AO, which produces new mallocs.

Option 2 still seems the most probable to me, but before implementing sending the 
stuff around using MPI, I wanted to ask you
whether that is the correct way. If it needs to be done that way, is there some PETSc 
magic which helps with exchanging that array?

If the answer lies within the MatPreallocate* routines, I failed to see it :-/

Best Thanks,
Florian


>> On Jul 10, 2017, at 3:22 AM, Florian Lindner  wrote:
>>
>> Hey,
>>
>> one more question about preallocation:
>>
>> I can determine if a column index is diagonal or off-diagonal using that code
>>
>> if (col >= col_range_start and col < col_range_end)
>>d_nnz[relative_row]++;
>> else
>>o_nnz[relative_row]++;
>>
>>
>> My code, however, uses index sets from which an ISLocalToGlobalMapping is created:
>>
>>  // Creates a mapping from permutation and use that for the cols
>>  ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, permutation.data(), 
>> PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>>  ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>>  ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS 
>> from all processors
>>  ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping); CHKERRV(ierr); 
>> // Make it a mapping
>>
>>  // Create an identity mapping and use that for the rows of A.
>>  ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1, 
>> &ISidentity); CHKERRV(ierr);
>>  ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>>  ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>>  ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal, 
>> &ISidentityMapping); CHKERRV(ierr);
>>
>>  ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping); 
>> CHKERRV(ierr);
>>
>> since SetPreallocation routines define the diagonal / off-diagonal blocks 
>> from the petsc ordering, I have to translate
>> the col to a petsc_col.
>>
>> What is the best / fastest way to do that?
>>
>> Is that the way to go?
>>
>>  PetscInt mapSize;
>>  ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
>>  const PetscInt *mapIndizes;
>>  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
>>
>> Thanks,
>> Florian
>>
>>
>>
>> On 07.07.2017 at 17:31, Florian Lindner wrote:
>>> Hello,
>>>
>>> I'm having some trouble understanding the preallocation for MPIAIJ 
>>> matrices

Re: [petsc-users] Understanding preallocation for MPI

2017-07-10 Thread Barry Smith

  You might consider using 
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateInitialize.html
 and 
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatPreallocateSetLocal.html#MatPreallocateSetLocal
 and friends

   These take out some of the busywork of setting the preallocation arrays. 
They are macros in petscmat.h so even if you don't use them you can see the 
code they use.

   Barry
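
A rough sketch of how those macros are typically used (3.7-era API; the tridiagonal column
pattern is made up purely for demonstration, and A is assumed to already have its type and
sizes set). Note that, as far as I can tell, each row passed to MatPreallocateSet has to be
owned by the calling process and given in the global PETSc numbering:

PetscErrorCode ierr;
PetscInt       *dnz, *onz;               // allocated and zeroed by MatPreallocateInitialize
PetscInt       rstart, rend, cstart, cend, N;

ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
ierr = MatGetOwnershipRangeColumn(A, &cstart, &cend);CHKERRQ(ierr);
ierr = MatGetSize(A, NULL, &N);CHKERRQ(ierr);
ierr = MatPreallocateInitialize(PETSC_COMM_WORLD, rend - rstart, cend - cstart, dnz, onz);CHKERRQ(ierr);
for (PetscInt row = rstart; row < rend; row++) {
  PetscInt cols[3], nc = 0;              // demo stencil: row-1, row, row+1
  if (row > 0)     cols[nc++] = row - 1;
  cols[nc++] = row;
  if (row < N - 1) cols[nc++] = row + 1;
  ierr = MatPreallocateSet(row, nc, cols, dnz, onz);CHKERRQ(ierr);  // counts into dnz/onz
}
ierr = MatMPIAIJSetPreallocation(A, 0, dnz, 0, onz);CHKERRQ(ierr);
ierr = MatPreallocateFinalize(dnz, onz);CHKERRQ(ierr);              // frees dnz/onz

MatPreallocateSetLocal does the same bookkeeping but takes local row/column indices together
with the ISLocalToGlobalMappings, which is what makes it interesting for the index-set setup
discussed below.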

> On Jul 10, 2017, at 3:22 AM, Florian Lindner  wrote:
> 
> Hey,
> 
> one more question about preallocation:
> 
> I can determine if a column index is diagonal or off-diagonal using that code
> 
> if (col >= col_range_start and col < col_range_end)
>d_nnz[relative_row]++;
> else
>o_nnz[relative_row]++;
> 
> 
> My code, however, uses index sets from which an ISLocalToGlobalMapping is created:
> 
>  // Creates a mapping from permutation and use that for the cols
>  ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, permutation.data(), 
> PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>  ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>  ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS from 
> all processors
>  ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping); CHKERRV(ierr); 
> // Make it a mapping
> 
>  // Create an identity mapping and use that for the rows of A.
>  ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1, 
> &ISidentity); CHKERRV(ierr);
>  ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>  ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>  ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal, &ISidentityMapping); 
> CHKERRV(ierr);
> 
>  ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping); 
> CHKERRV(ierr);
> 
> since SetPreallocation routines define the diagonal / off-diagonal blocks 
> from the petsc ordering, I have to translate
> the col to a petsc_col.
> 
> What is the best / fastest way to do that?
> 
> Is that the way to go?
> 
>  PetscInt mapSize;
>  ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
>  const PetscInt *mapIndizes;
>  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
> 
> Thanks,
> Florian
> 
> 
> 
> On 07.07.2017 at 17:31, Florian Lindner wrote:
>> Hello,
>> 
>> I'm having some trouble understanding the preallocation for MPIAIJ 
>> matrices, especially when a value is in off-diagonal
>> vs. diagonal block.
>> 
>> The small example program is at https://pastebin.com/67dXnGm3
>> 
>> In general it should be parallel, but right now I just run it in serial.
>> 
>> According to my understanding of
>> 
>> http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
>> 
>> an entry is in the diagonal submatrix if its row is in the OwnershipRange 
>> and its column is in OwnershipRangeColumn.
>> That also means that in a serial run, there is only a diagonal submatrix.
>> 
>> However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
>> 
>> Inserting 6 elements in row 2, though I have exactly
>> 
>> 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal submatrix 
>> of row 2)
>> 
>> Error is:
>> 
>> [0]PETSC ERROR: Argument out of range
>> [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off 
>> this check
>> 
>> 
>> What is wrong with my understanding?
>> 
>> Thanks,
>> Florian
>> 



Re: [petsc-users] Understanding preallocation for MPI

2017-07-10 Thread Stefano Zampini
Look at the function that begins at line 1050, which the link should redirect to.

On 10 Jul 2017 at 11:18 AM, "Florian Lindner" wrote:

Hey Stefano,

On 10.07.2017 at 16:36, Stefano Zampini wrote:
> Florian,
>
> Perhaps you might want to take a look at how this is done for MatIS
>
> https://bitbucket.org/petsc/petsc/src/f9d5775f43f69cbce5a7014a6ce3b24cc0e1214a/src/mat/impls/is/matis.c?at=master&fileviewer=file-view-default#matis.c-1050

What piece of code are you specifically referring to?

Line 655, MatSetValuesLocal_SubMat_IS?

and I should use ISLocalToGlobalMappingApply?

but this, if I understand correctly, would involve such a call for every
index.

Right now, my preallocation loop looks like:


PetscInt d_nnz[local_rows], o_nnz[local_rows];

PetscInt mapSize;
ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
const PetscInt *mapIndizes;
ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes); // is called
just once

for (int row = row_range_start; row < row_range_end; row++) {
  PetscInt relative_row = row - row_range_start;
  d_nnz[relative_row] = 0;
  o_nnz[relative_row] = 0;
  int colStart = row - nz_ratio/2 * size;
  int colEnd   = row + nz_ratio/2 * size;
  colStart = (colStart < 0)? 0: colStart;
  colEnd   = (colEnd   > size) ? size : colEnd;
  for (int col = 0; col < global_cols; col++) {
if (col >= colStart and col < colEnd) {

  int petsc_col = mapIndizes[col]; // should be cheap
  if (petsc_col >= col_range_start and petsc_col < col_range_end) {
d_nnz[relative_row]++;
  } else {
o_nnz[relative_row]++;
  }

}
  }
}
ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRQ(ierr);

Best,
Florian

>
> Stefano
>
> On 10 Jul 2017 at 10:23 AM, "Florian Lindner" wrote:
>
> Hey,
>
> one more question about preallocation:
>
> I can determine if a column index is diagonal or off-diagonal using
that code
>
> if (col >= col_range_start and col < col_range_end)
> d_nnz[relative_row]++;
> else
> o_nnz[relative_row]++;
>
>
> My code, however, uses index sets from which an ISLocalToGlobalMapping is
created:
>
>   // Creates a mapping from permutation and use that for the cols
>   ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols,
permutation.data(), PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>   ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>   ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather
the IS from all processors
>   ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping);
CHKERRV(ierr); // Make it a mapping
>
>   // Create an identity mapping and use that for the rows of A.
>   ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows,
row_range_start, 1, &ISidentity); CHKERRV(ierr);
>   ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>   ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>   ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal,
&ISidentityMapping); CHKERRV(ierr);
>
>   ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping);
CHKERRV(ierr);
>
> since SetPreallocation routines define the diagonal / off-diagonal
blocks from the petsc ordering, I have to translate
> the col to a petsc_col.
>
> What is the best / fastest way to do that?
>
> Is that the way to go?
>
>   PetscInt mapSize;
>   ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
>   const PetscInt *mapIndizes;
>   ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
>
> Thanks,
> Florian
>
>
>
> On 07.07.2017 at 17:31, Florian Lindner wrote:
> > Hello,
> >
> > I'm having some trouble understanding the preallocation for MPIAIJ
matrices, especially when a value is in
> off-diagonal
> > vs. diagonal block.
> >
> > The small example program is at https://pastebin.com/67dXnGm3
> >
> > In general it should be parallel, but right now I just run it in
serial.
> >
> > According to my understanding of
> >
> > http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
> >
> > an entry is in the diagonal submatrix if its row is in the
OwnershipRange and its column is in OwnershipRangeColumn.
> > That also means that in a serial run, there is only a diagonal
submatrix.
> >
> > However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error
when
> >
> > Inserting 6 elements in row 2, though I have exactly
> >
> > 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal
submatrix of row 2)
> >
> > Error is:
> >
> > [0]PETSC ERROR: Argument out of range
> > [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
> > Use MatSetOption(A, MAT_NEW_NONZERO

Re: [petsc-users] Understanding preallocation for MPI

2017-07-10 Thread Florian Lindner
Hey Stefano,

On 10.07.2017 at 16:36, Stefano Zampini wrote:
> Florian,
> 
> Perhaps you might want to take a look at how this is done for MatIS
> 
> https://bitbucket.org/petsc/petsc/src/f9d5775f43f69cbce5a7014a6ce3b24cc0e1214a/src/mat/impls/is/matis.c?at=master&fileviewer=file-view-default#matis.c-1050

What piece of code are you specifically referring to?

Line 655, MatSetValuesLocal_SubMat_IS?

and I should use ISLocalToGlobalMappingApply?

but this, if I understand correctly, would involve such a call for every index.

Right now, my preallocation loop looks like:


PetscInt d_nnz[local_rows], o_nnz[local_rows];

PetscInt mapSize;
ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
const PetscInt *mapIndizes;
ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes); // is called just 
once

for (int row = row_range_start; row < row_range_end; row++) {
  PetscInt relative_row = row - row_range_start;
  d_nnz[relative_row] = 0;
  o_nnz[relative_row] = 0;
  int colStart = row - nz_ratio/2 * size;
  int colEnd   = row + nz_ratio/2 * size;
  colStart = (colStart < 0)? 0: colStart;
  colEnd   = (colEnd   > size) ? size : colEnd;
  for (int col = 0; col < global_cols; col++) {
if (col >= colStart and col < colEnd) {

  int petsc_col = mapIndizes[col]; // should be cheap
  if (petsc_col >= col_range_start and petsc_col < col_range_end) {
d_nnz[relative_row]++;
  } else {
o_nnz[relative_row]++;
  }

}
  }
}
ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz); CHKERRQ(ierr);

Best,
Florian
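
One note on the "call for every index" worry above: ISLocalToGlobalMappingApply translates a
whole batch of indices in a single call, so it is an alternative to caching the array from
ISLocalToGlobalMappingGetIndices. A tiny sketch (the index values are made up; ISmapping is
the mapping from the code above):

PetscErrorCode ierr;
const PetscInt cols_local[] = {0, 1, 2, 3, 4};  // indices in the local/application numbering
PetscInt       cols_petsc[5];                   // receives the PETSc-ordering indices

// One call maps all five indices through ISmapping; equivalent to looking each
// one up in the array returned by ISLocalToGlobalMappingGetIndices.
ierr = ISLocalToGlobalMappingApply(ISmapping, 5, cols_local, cols_petsc);CHKERRQ(ierr);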

> 
> Stefano
> 
> On 10 Jul 2017 at 10:23 AM, "Florian Lindner" wrote:
> 
> Hey,
> 
> one more question about preallocation:
> 
> I can determine if a column index is diagonal or off-diagonal using that 
> code
> 
> if (col >= col_range_start and col < col_range_end)
> d_nnz[relative_row]++;
> else
> o_nnz[relative_row]++;
> 
> 
> My code, however, uses index sets from which an ISLocalToGlobalMapping is 
> created:
> 
>   // Creates a mapping from permutation and use that for the cols
>   ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, 
> permutation.data(), PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>   ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>   ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS 
> from all processors
>   ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping); 
> CHKERRV(ierr); // Make it a mapping
> 
>   // Create an identity mapping and use that for the rows of A.
>   ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1, 
> &ISidentity); CHKERRV(ierr);
>   ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>   ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>   ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal, 
> &ISidentityMapping); CHKERRV(ierr);
> 
>   ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping); 
> CHKERRV(ierr);
> 
> since SetPreallocation routines define the diagonal / off-diagonal blocks 
> from the petsc ordering, I have to translate
> the col to a petsc_col.
> 
> What is the best / fastest way to do that?
> 
> Is that the way to go?
> 
>   PetscInt mapSize;
>   ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
>   const PetscInt *mapIndizes;
>   ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
> 
> Thanks,
> Florian
> 
> 
> 
> On 07.07.2017 at 17:31, Florian Lindner wrote:
> > Hello,
> >
> > I'm having some trouble understanding the preallocation for MPIAIJ 
> matrices, especially when a value is in
> off-diagonal
> > vs. diagonal block.
> >
> > The small example program is at https://pastebin.com/67dXnGm3
> >
> > In general it should be parallel, but right now I just run it in serial.
> >
> > According to my understanding of
> >
> > 
> http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
> 
> 
> >
> > an entry is in the diagonal submatrix if its row is in the 
> OwnershipRange and its column is in OwnershipRangeColumn.
> > That also means that in a serial run, there is only a diagonal 
> submatrix.
> >
> > However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
> >
> > Inserting 6 elements in row 2, though I have exactly
> >
> > 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal 
> submatrix of row 2)
> >
> > Error is:
> >
> > [0]PETSC ERROR: Argument out of range
> > [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
> > Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to 
> turn of

Re: [petsc-users] Understanding preallocation for MPI

2017-07-10 Thread Stefano Zampini
Florian,

Perhaps you might want to take a look at how this is done for MatIS

https://bitbucket.org/petsc/petsc/src/f9d5775f43f69cbce5a7014a6ce3b24cc0e1214a/src/mat/impls/is/matis.c?at=master&fileviewer=file-view-default#matis.c-1050

Stefano

On 10 Jul 2017 at 10:23 AM, "Florian Lindner" wrote:

> Hey,
>
> one more question about preallocation:
>
> I can determine if a column index is diagonal or off-diagonal using that
> code
>
> if (col >= col_range_start and col < col_range_end)
> d_nnz[relative_row]++;
> else
> o_nnz[relative_row]++;
>
>
> My code, however, uses index sets from which an ISLocalToGlobalMapping is
> created:
>
>   // Creates a mapping from permutation and use that for the cols
>   ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols,
> permutation.data(), PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
>   ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
>   ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS
> from all processors
>   ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping);
> CHKERRV(ierr); // Make it a mapping
>
>   // Create an identity mapping and use that for the rows of A.
>   ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1,
> &ISidentity); CHKERRV(ierr);
>   ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
>   ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
>   ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal,
> &ISidentityMapping); CHKERRV(ierr);
>
>   ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping);
> CHKERRV(ierr);
>
> since SetPreallocation routines define the diagonal / off-diagonal blocks
> from the petsc ordering, I have to translate
> the col to a petsc_col.
>
> What is the best / fastest way to do that?
>
> Is that the way to go?
>
>   PetscInt mapSize;
>   ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
>   const PetscInt *mapIndizes;
>   ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);
>
> Thanks,
> Florian
>
>
>
> On 07.07.2017 at 17:31, Florian Lindner wrote:
> > Hello,
> >
> > I'm having some trouble understanding the preallocation for MPIAIJ
> matrices, especially when a value is in off-diagonal
> > vs. diagonal block.
> >
> > The small example program is at https://pastebin.com/67dXnGm3
> >
> > In general it should be parallel, but right now I just run it in serial.
> >
> > According to my understanding of
> >
> > http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
> >
> > an entry is in the diagonal submatrix if its row is in the
> OwnershipRange and its column is in OwnershipRangeColumn.
> > That also means that in a serial run, there is only a diagonal submatrix.
> >
> > However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
> >
> > Inserting 6 elements in row 2, though I have exactly
> >
> > 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal
> submatrix of row 2)
> >
> > Error is:
> >
> > [0]PETSC ERROR: Argument out of range
> > [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
> > Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to
> turn off this check
> >
> >
> > What is wrong with my understanding?
> >
> > Thanks,
> > Florian
> >
>


Re: [petsc-users] Understanding preallocation for MPI

2017-07-10 Thread Florian Lindner
Hey,

one more question about preallocation:

I can determine if a column index is diagonal or off-diagonal using that code

if (col >= col_range_start and col < col_range_end)
d_nnz[relative_row]++;
else
o_nnz[relative_row]++;


My code, however, uses index sets from which an ISLocalToGlobalMapping is created:

  // Creates a mapping from permutation and use that for the cols
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, local_cols, permutation.data(), 
PETSC_COPY_VALUES, &ISlocal); CHKERRV(ierr);
  ierr = ISSetPermutation(ISlocal); CHKERRV(ierr);
  ierr = ISAllGather(ISlocal, &ISglobal); CHKERRV(ierr); // Gather the IS from 
all processors
  ierr = ISLocalToGlobalMappingCreateIS(ISglobal, &ISmapping); CHKERRV(ierr); 
// Make it a mapping

  // Create an identity mapping and use that for the rows of A.
  ierr = ISCreateStride(PETSC_COMM_WORLD, local_rows, row_range_start, 1, 
&ISidentity); CHKERRV(ierr);
  ierr = ISSetIdentity(ISidentity); CHKERRV(ierr);
  ierr = ISAllGather(ISidentity, &ISidentityGlobal); CHKERRV(ierr);
  ierr = ISLocalToGlobalMappingCreateIS(ISidentityGlobal, &ISidentityMapping); 
CHKERRV(ierr);

  ierr = MatSetLocalToGlobalMapping(A, ISidentityMapping, ISmapping); 
CHKERRV(ierr);

since SetPreallocation routines define the diagonal / off-diagonal blocks from 
the petsc ordering, I have to translate
the col to a petsc_col.

What is the best / fastest way to do that?

Is that the way to go?

  PetscInt mapSize;
  ISLocalToGlobalMappingGetSize(ISmapping, &mapSize);
  const PetscInt *mapIndizes;
  ISLocalToGlobalMappingGetIndices(ISmapping, &mapIndizes);

Thanks,
Florian



On 07.07.2017 at 17:31, Florian Lindner wrote:
> Hello,
> 
> I'm having some trouble understanding the preallocation for MPIAIJ matrices, 
> especially when a value is in off-diagonal
> vs. diagonal block.
> 
> The small example program is at https://pastebin.com/67dXnGm3
> 
> In general it should be parallel, but right now I just run it in serial.
> 
> According to my understanding of
> 
> http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
> 
> an entry is in the diagonal submatrix if its row is in the OwnershipRange and 
> its column is in OwnershipRangeColumn.
> That also means that in a serial run, there is only a diagonal submatrix.
> 
> However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
> 
> Inserting 6 elements in row 2, though I have exactly
> 
> 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal submatrix 
> of row 2)
> 
> Error is:
> 
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off 
> this check
> 
> 
> What is wrong with my understanding?
> 
> Thanks,
> Florian
> 


Re: [petsc-users] Understanding preallocation for MPI

2017-07-07 Thread Matthew Knepley
On Fri, Jul 7, 2017 at 2:49 AM, Dave May  wrote:

> On Fri, 7 Jul 2017 at 11:31, Florian Lindner  wrote:
>
>> Hello,
>>
>> I'm having some trouble understanding the preallocation for MPIAIJ
>> matrices, especially when a value is in off-diagonal
>> vs. diagonal block.
>>
>> The small example program is at https://pastebin.com/67dXnGm3
>>
>> In general it should be parallel, but right now I just run it in serial.
>
>
> When you run this code in serial, the mat type will be MATSEQAIJ. Hence,
> the call to MatMPIAIJSetPreallocation() will have no effect because the mat
> type does not match MPIAIJ. As a result, your code doesn't perform any
> preallocation for SEQAIJ matrices.
>
> In addition to calling MatMPIAIJSetPreallocation(), add a call to
> MatSeqAIJSetPreallocation().
>

To make it easier we now provide


http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatXAIJSetPreallocation.html

Matt
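
For reference, a minimal use of that routine with the d_nnz/o_nnz arrays from the code in
this thread (block size 1, no upper-triangular counts); the call dispatches to the
SEQAIJ/MPIAIJ (or BAIJ/SBAIJ) preallocation routine that matches the actual matrix type, so
the same line works in serial and in parallel:

ierr = MatXAIJSetPreallocation(A, 1, d_nnz, o_nnz, NULL, NULL);CHKERRQ(ierr);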


> Thanks,
>   Dave
>
>
>>
>> According to my understanding of
>>
>> http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
>>
>> an entry is in the diagonal submatrix if its row is in the OwnershipRange
>> and its column is in OwnershipRangeColumn.
>> That also means that in a serial run, there is only a diagonal submatrix.
>>
>> However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
>>
>> Inserting 6 elements in row 2, though I have exactly
>>
>> 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal
>> submatrix of row 2)
>>
>> Error is:
>>
>> [0]PETSC ERROR: Argument out of range
>> [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
>> off this check
>>
>>
>> What is wrong with my understanding?
>>
>> Thanks,
>> Florian
>>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/


Re: [petsc-users] Understanding preallocation for MPI

2017-07-07 Thread Dave May
On Fri, 7 Jul 2017 at 11:31, Florian Lindner  wrote:

> Hello,
>
> I'm having some trouble understanding the preallocation for MPIAIJ
> matrices, especially when a value is in off-diagonal
> vs. diagonal block.
>
> The small example program is at https://pastebin.com/67dXnGm3
>
> In general it should be parallel, but right now I just run it in serial.


When you run this code in serial, the mat type will be MATSEQAIJ. Hence,
the call to MatMPIAIJSetPreallocation() will have no effect because the mat
type does not match MPIAIJ. As a result, your code doesn't perform any
preallocation for SEQAIJ matrices.

In addition to calling MatMPIAIJSetPreallocation(), add a call to
MatSeqAIJSetPreallocation().

Thanks,
  Dave
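
A minimal sketch of that suggestion, reusing the d_nnz/o_nnz arrays from the code in this
thread; whichever of the two calls does not match the actual matrix type is simply ignored:

// SEQAIJ path (one rank) and MPIAIJ path (two or more ranks); only one takes effect.
ierr = MatSeqAIJSetPreallocation(A, 0, d_nnz);CHKERRQ(ierr);
ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);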


>
> According to my understanding of
>
>
> http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html
>
> an entry is in the diagonal submatrix if its row is in the OwnershipRange
> and its column is in OwnershipRangeColumn.
> That also means that in a serial run, there is only a diagonal submatrix.
>
> However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when
>
> Inserting 6 elements in row 2, though I have exactly
>
> 2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal
> submatrix of row 2)
>
> Error is:
>
> [0]PETSC ERROR: Argument out of range
> [0]PETSC ERROR: New nonzero at (2,5) caused a malloc
> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn
> off this check
>
>
> What is wrong with my understanding?
>
> Thanks,
> Florian
>


[petsc-users] Understanding preallocation for MPI

2017-07-07 Thread Florian Lindner
Hello,

I'm having some trouble understanding the preallocation for MPIAIJ matrices, 
especially when a value is in off-diagonal
vs. diagonal block.

The small example program is at https://pastebin.com/67dXnGm3

In general it should be parallel, but right now I just run it in serial.

According to my understanding of

http://www.mcs.anl.gov/petsc/petsc-3.7/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html

an entry is in the diagonal submatrix if its row is in the OwnershipRange and 
its column is in OwnershipRangeColumn.
That also means that in a serial run, there is only a diagonal submatrix.

However, having MAT_NEW_NONZERO_ALLOCATION_ERR set, I get an error when

Inserting 6 elements in row 2, though I have exactly

2 o_nnz = 0, d_nnz = 6 (means 6 elements allocated in the diagonal submatrix of 
row 2)

Error is:

[0]PETSC ERROR: Argument out of range
[0]PETSC ERROR: New nonzero at (2,5) caused a malloc
Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off 
this check


What is wrong with my understanding?

Thanks,
Florian