On Nov 15, 2018, at 1:02 PM, Mark Adams wrote:

There is a lot of load imbalance in VecMAXPY also. The partitioning could
be bad, and if not, it's the machine.

On Thu, Nov 15, 2018 at 1:56 PM Smith, Barry F. via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Something is odd about your configuration. Just consider the time for
> VecMAXPY, which is an embarrassingly parallel operation. On 1000 MPI
> processes it produces
>
> Time [...]
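For context, VecMAXPY computes y <- y + sum_i alpha_i * x_i, which involves
no inter-process communication; that is what makes it embarrassingly parallel
and its timing a useful sanity check. A minimal sketch (mine, not from the
thread; the sizes and the script name maxpy_test.py are illustrative) that
exercises just this operation under PETSc's event logging:

import sys
import petsc4py
petsc4py.init(sys.argv)   # pick up command-line options such as -log_view
from petsc4py import PETSc

n = 100000   # local vector length per process (illustrative)
y = PETSc.Vec().createMPI((n, PETSc.DECIDE), comm=PETSc.COMM_WORLD)
y.set(1.0)
xs = [y.duplicate() for _ in range(4)]
for x in xs:
    x.set(2.0)

# y <- y + 0.1*x0 + 0.2*x1 + 0.3*x2 + 0.4*x3 : purely local work
y.maxpy([0.1, 0.2, 0.3, 0.4], xs)

Run as, e.g., mpiexec -n 1000 python3 maxpy_test.py -log_view; if the
partitioning and the machine are healthy, the max/min time ratio reported
for the VecMAXPY event should stay close to 1.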
Matthew,

> As I wrote before, it's not impossible. You could be directly calling
> PMI, but I do not think you are doing that.

Could you clarify what PMI is, and how we can use it directly? It might
be a key to this mystery!

> Why do you think it's running on 8 processes?

Well, we base our [...]
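One way to settle the 8-processes question (my suggestion, not from the
thread) is to print the size of the MPI communicator from inside the script:

from petsc4py import PETSc

comm = PETSc.COMM_WORLD
print("rank %d of %d MPI processes" % (comm.getRank(), comm.getSize()))

If this reports 1 process under plain python3, any 8-core activity would
have to come from threads, for example a multithreaded BLAS inside the
numerical libraries, rather than from MPI.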
On Thu, Nov 15, 2018 at 11:59 AM Ivan Voznyuk wrote:

> Hi Matthew,
>
> Does it mean that by using just the command python3 simple_code.py (without
> mpiexec) you *cannot* obtain a parallel execution?

As I wrote before, it's not impossible. You could be directly calling PMI,
but I do not think you are doing that. [...]
Hi Matthew,

Does it mean that by using just the command python3 simple_code.py (without
mpiexec) you *cannot* obtain a parallel execution?

It's been 5 days that my colleague and I have been trying to understand how
he managed to do so. It means that by simply using python3 simple_code.py he
gets 8 processors [...]
On Thu, Nov 15, 2018 at 11:52 AM Karin&NiKo via petsc-users <
petsc-users@mcs.anl.gov> wrote:

Dear PETSc team,

I am solving a linear transient dynamic problem, based on a discretization
with finite elements. To do that, I am using FGMRES with GAMG as a
preconditioner. I consider here 10 time steps.

The problem has around 118e6 dof and I am running on 1000, 1500 and 2000
procs. So I have [...]
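A minimal sketch of the solver configuration described above, in petsc4py
(my reconstruction; the 1D Laplacian is only a stand-in for the actual
finite element operator, and the sizes are illustrative):

from petsc4py import PETSc

n = 1000   # illustrative; the real problem has ~118e6 dof
A = PETSc.Mat().createAIJ([n, n], nnz=3, comm=PETSc.COMM_WORLD)
rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):   # tridiagonal 1D Laplacian as a stand-in
    A[i, i] = 2.0
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
A.assemble()

x = A.createVecRight()
b = A.createVecLeft()
b.set(1.0)

ksp = PETSc.KSP().create(PETSc.COMM_WORLD)
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.FGMRES)        # flexible GMRES
ksp.getPC().setType(PETSc.PC.Type.GAMG)   # algebraic multigrid preconditioner
ksp.setFromOptions()   # allow -ksp_* / -pc_gamg_* tuning at run time
ksp.solve(b, x)

In a transient run, the same KSP would normally be reused across the 10 time
steps, with setOperators called again only when the matrix changes.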
On Nov 15, 2018, at 4:48 AM, Appel, Thibaut via petsc-users wrote:

Good morning,

I would like to ask about the importance of the initial ordering of the
unknowns when feeding a matrix to PETSc.

I have a regular grid, using high-order finite differences, and I simply
divide the rows of the matrix with PetscSplitOwnership using vertex-major,
natural ordering [...]
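For readers unfamiliar with the call: PetscSplitOwnership divides N rows
into contiguous, nearly equal blocks, one per process, so under natural
vertex-major ordering each rank owns a band of consecutive grid rows. A
sketch of that layout (mine, in petsc4py rather than the original code;
grid dimensions and the one-unknown-per-vertex assumption are illustrative):

from petsc4py import PETSc

nx, ny = 64, 64   # illustrative grid size
N = nx * ny       # one unknown per grid vertex

# Natural, vertex-major ordering: vertex (i, j) -> global row j*nx + i
def row(i, j):
    return j * nx + i

A = PETSc.Mat().createAIJ([N, N], nnz=5, comm=PETSc.COMM_WORLD)
rstart, rend = A.getOwnershipRange()   # contiguous rows owned by this rank
print("this rank owns rows [%d, %d)" % (rstart, rend))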
Dear PETSc community,

I have a question regarding the parallel execution of petsc4py.

I have a simple code (attached: simple_code.py) which solves a system of
linear equations Ax=b using petsc4py. To execute it, I use the command
python3 simple_code.py, which yields sequential performance. [...]
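A note on launch semantics (my addition, assuming a standard MPI
installation): short of driving the process manager directly, which is what
Matthew's PMI remark elsewhere in this thread refers to, the number of MPI
processes is fixed by the launcher, not by the script:

python3 simple_code.py                 # 1 MPI process (singleton init)
mpiexec -n 8 python3 simple_code.py    # 8 MPI processes

so any multi-core speedup observed under the first form would have to come
from something other than MPI.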