Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Smith, Barry F. via petsc-users
> On Nov 15, 2018, at 1:02 PM, Mark Adams wrote:
>
> There is a lot of load imbalance in VecMAXPY also. The partitioning could be
> bad and if not it's the machine.
>
> On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users wrote:
>
> Something is odd about your configurat

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Mark Adams via petsc-users
There is a lot of load imbalance in VecMAXPY also. The partitioning could be bad and if not it's the machine.

On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users <petsc-users@mcs.anl.gov> wrote:

> Something is odd about your configuration. Just consider the time for
> VecMAXPY w
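
A quick way to check the first suspicion (bad partitioning) is to report how many rows each rank owns. The following petsc4py sketch is not from the thread; the global size N is made up, and in a real run one would query the actual solution vector of the solve instead of creating a new one.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
N = 1_000_000                      # hypothetical global size; use the real Vec in practice
x = PETSc.Vec().createMPI(N, comm=comm)

lo, hi = x.getOwnershipRange()     # contiguous block of rows owned by this rank
PETSc.Sys.syncPrint("rank %d: rows %d..%d (local size %d)"
                    % (comm.getRank(), lo, hi - 1, x.getLocalSize()))
PETSc.Sys.syncFlush()

If the local sizes differ widely between ranks, the partitioning (rather than the machine) is the first thing to fix.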

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Smith, Barry F. via petsc-users
Something is odd about your configuration. Just consider the time for VecMAXPY, which is an embarrassingly parallel operation. On 1000 MPI processes it produces Time
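
The per-event times being compared here (VecMAXPY and friends on 1000 processes) are the numbers PETSc reports with the -log_view option. As a reminder of how to obtain that table from a petsc4py run, here is a small sketch; the script name, sizes and the VecMAXPY call are purely illustrative.

import sys
import petsc4py
petsc4py.init(sys.argv)            # must run before `from petsc4py import PETSc`
from petsc4py import PETSc

# Illustrative work so that VecMAXPY shows up in the log:
n = 100_000                        # made-up size
x = PETSc.Vec().createMPI(n)
y = x.duplicate(); z = x.duplicate()
x.set(1.0); y.set(2.0); z.set(3.0)
x.maxpy([0.5, 0.25], [y, z])       # logged as VecMAXPY

# Run as, for example:
#   mpiexec -n 1000 python3 timing_sketch.py -log_view
# and compare the Max/Ratio columns for embarrassingly parallel events.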

Re: [petsc-users] On unknown ordering

2018-11-15 Thread Smith, Barry F. via petsc-users
> On Nov 15, 2018, at 4:48 AM, Appel, Thibaut via petsc-users wrote:
>
> Good morning,
>
> I would like to ask about the importance of the initial choice of ordering
> the unknowns when feeding a matrix to PETSc.
>
> I have a regular grid, using high-order finite differences and I simply

Re: [petsc-users] petsc4py help with parallel execution

2018-11-15 Thread Ivan via petsc-users
Matthew,

> As I wrote before, it's not impossible. You could be directly calling PMI, but I do not think you are doing that.

Could you clarify what PMI is, and how we can use it directly? It might be the key to this mystery!

> Why do you think it's running on 8 processes?

Well, we base our
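
A direct way to see whether those 8 workers are really MPI processes is to have the script report its communicator size. A minimal sketch (the script name check_mpi.py is hypothetical):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
# `python3 check_mpi.py` should report "rank 0 of 1";
# `mpiexec -n 8 python3 check_mpi.py` should report ranks 0..7 of 8.
PETSc.Sys.syncPrint("rank %d of %d" % (comm.getRank(), comm.getSize()))
PETSc.Sys.syncFlush()

If it prints a size of 1, the 8-way activity is coming from something other than MPI.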

Re: [petsc-users] GAMG Parallel Performance

2018-11-15 Thread Matthew Knepley via petsc-users
On Thu, Nov 15, 2018 at 11:52 AM Karin&NiKo via petsc-users <petsc-users@mcs.anl.gov> wrote:

> Dear PETSc team,
>
> I am solving a linear transient dynamic problem, based on a discretization
> with finite elements. To do that, I am using FGMRES with GAMG as a
> preconditioner. I consider here 10

Re: [petsc-users] petsc4py help with parallel execution

2018-11-15 Thread Matthew Knepley via petsc-users
On Thu, Nov 15, 2018 at 11:59 AM Ivan Voznyuk wrote:

> Hi Matthew,
>
> Does it mean that by using just the command python3 simple_code.py (without
> mpiexec) you *cannot* obtain a parallel execution?

As I wrote before, it's not impossible. You could be directly calling PMI, but I do not think you a

Re: [petsc-users] petsc4py help with parallel execution

2018-11-15 Thread Ivan Voznyuk via petsc-users
Hi Matthew,

Does it mean that by using just the command python3 simple_code.py (without mpiexec) you *cannot* obtain a parallel execution? For 5 days now, my colleague and I have been trying to understand how he managed to do so. It means that by simply using python3 simple_code.py he gets 8 processors w

[petsc-users] GAMG Parallel Performance

2018-11-15 Thread Karin&NiKo via petsc-users
Dear PETSc team,

I am solving a linear transient dynamic problem, based on a discretization with finite elements. To do that, I am using FGMRES with GAMG as a preconditioner. I consider here 10 time steps. The problem has around 118e6 dof and I am running on 1000, 1500 and 2000 procs. So I have
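
For readers following along, a hedged petsc4py sketch of the solver configuration described above (FGMRES outer Krylov method with GAMG as the preconditioner); the 1D model matrix and sizes are stand-ins, not the poster's finite-element operator.

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
n = 1000                                   # stand-in size, not the 118e6-dof problem

# 1D Laplacian as a stand-in for the finite-element operator
A = PETSc.Mat().createAIJ([n, n], nnz=(3, 1), comm=comm)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    A.setValue(i, i, 2.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

x, b = A.createVecs()
b.set(1.0)

ksp = PETSc.KSP().create(comm=comm)
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.FGMRES)         # flexible GMRES outer solver
ksp.getPC().setType(PETSc.PC.Type.GAMG)    # algebraic multigrid preconditioner
ksp.setFromOptions()                       # keep -ksp_monitor, -log_view, GAMG options usable
ksp.solve(b, x)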

[petsc-users] On unknown ordering

2018-11-15 Thread Appel, Thibaut via petsc-users
Good morning,

I would like to ask about the importance of the initial choice of ordering of the unknowns when feeding a matrix to PETSc. I have a regular grid, using high-order finite differences, and I simply divide the rows of the matrix with PetscSplitOwnership using vertex-major, natural ordering
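
As an illustration of that layout (not Thibaut's code), assuming a 2D nx-by-ny grid with ndof unknowns per vertex: vertex-major natural ordering gives global row (j*nx + i)*ndof + c, and leaving the local sizes to PETSc reproduces the contiguous row split that PetscSplitOwnership computes in C. Grid sizes below are made up.

from petsc4py import PETSc

nx, ny, ndof = 64, 64, 4                   # hypothetical grid and unknowns per vertex
N = nx * ny * ndof

def global_row(i, j, c):
    # vertex-major, natural ordering: vertices numbered j*nx + i, components innermost
    return (j * nx + i) * ndof + c

A = PETSc.Mat().createAIJ([N, N], comm=PETSc.COMM_WORLD)
A.setUp()
rstart, rend = A.getOwnershipRange()       # contiguous block of rows per rank
PETSc.Sys.syncPrint("rank %d owns rows %d..%d"
                    % (PETSc.COMM_WORLD.getRank(), rstart, rend - 1))
PETSc.Sys.syncFlush()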

[petsc-users] petsc4py help with parallel execution

2018-11-15 Thread Ivan Voznyuk via petsc-users
Dear PETSc community,

I have a question regarding the parallel execution of petsc4py. I have a simple code (attached here as simple_code.py) which solves a system of linear equations Ax=b using petsc4py. To execute it, I use the command python3 simple_code.py, which yields sequential performance. W
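
Since the attached simple_code.py is not reproduced in the archive, here is a minimal, self-contained petsc4py Ax=b sketch of the same shape (diagonal test matrix, default KSP); the script name and sizes are made up. Run it as python3 solve_sketch.py for one process, or mpiexec -n 8 python3 solve_sketch.py to get 8 MPI processes.

import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

comm = PETSc.COMM_WORLD
n = 100                                    # illustrative global size

A = PETSc.Mat().createAIJ([n, n], nnz=1, comm=comm)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    A.setValue(i, i, float(i + 1))         # simple diagonal test system
A.assemble()

x, b = A.createVecs()
b.set(1.0)

ksp = PETSc.KSP().create(comm=comm)
ksp.setOperators(A)
ksp.setFromOptions()                       # honor -ksp_monitor, -ksp_view, etc.
ksp.solve(b, x)

PETSc.Sys.Print("solved on %d MPI process(es)" % comm.getSize())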