[petsc-users] Global numbering obtained via ISPartitioningToNumbering is inconsistent with the partitioning indexset
Hi, I am using ISPartitioningToNumbering to generate a new global numbering from a partitioning index set, and I'm baffled by the following situation. I'm debugging my code on a simple grid of 81 grid points partitioned between two processes. When I look at the partitioning index set (i.e., inspect the indices via ISView), I see that 40 points have been assigned to process 0 and 41 to process 1. Isn't it true that when 81 points are distributed between two processes, 41 should go to process 0 and 40 to process 1?

I have based my whole code on the assumption (verified earlier through the mailing list, I believe) that the natural ordering in PETSc distributes points so that all processes get the same number of points ([0, n/p) on process 0, [n/p, 2n/p) on process 1, ...) unless n % p != 0, in which case the first k processes (with k = n % p) each receive one extra point. Am I wrong to assume this?

Thanks,
Mohammad

PS: Is it relevant that the partitioning index set is obtained via ParMetis?
On Apr 9, 2012, at 5:06 PM, Mohammad Mirzadeh wrote:

> PS: Is it relevant that the partitioning index set is obtained via ParMetis?

Yes. ParMetis provides no guarantee about how many points get assigned to each process.

Barry
Aaah! Thanks, Barry. Just to make sure, though: is my assumption about PETSc's natural ordering correct?

Thanks

On Mon, Apr 9, 2012 at 3:10 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

> Yes, ParMetis provides no guarantee about how many points would get assigned to each process.
On Mon, Apr 9, 2012 at 17:25, Mohammad Mirzadeh <mirzadeh at gmail.com> wrote:

> Just to make sure, though: is my assumption about PETSc's natural ordering correct?

Yes, but please don't write code that depends on it. It's easy to query the local size, or the sizes/starts of the other processes.
Thanks, Jed, for the advice. I looked at the code, and I think this was the only place I had made that implicit assumption (but the most important place). In fact, I lay out my vectors according to the AO that I get from the partitioning. My main mistake was assuming that the partitioning uses the same ordering as the PETSc ordering. The code seems to run correctly now :)

On Mon, Apr 9, 2012 at 5:09 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> Yes, but please don't write code that depends on that. It's plenty easy to query the local size or the sizes/starts of other processes.