Sorry no one answered it. I had hoped Mark Adams would, since he knows much
more about it than I do.
> On Jul 6, 2016, at 2:50 PM, Eduardo Jourdan wrote:
>
> Hi,
>
> I am kind of new to algebraic multigrid methods. I tried to figure it out on
> my own, but I'm not sure about it.
>
> How t
Can you run with the additional option -ksp_view_mat binary and email the
resulting file, which will be called binaryoutput, to petsc-ma...@mcs.anl.gov?
Barry
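For context, such a run might look like the following; the executable name and
the extra solver options are placeholders, not taken from the thread:

  mpiexec -n 4 ./myapp -ksp_type gmres -pc_type gamg -ksp_view_mat binary

The matrix handed to the KSP is then written to the file binaryoutput in the
working directory, which can be attached to the reply.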
> On Jul 13, 2016, at 2:30 PM, Safin, Artur wrote:
>
> Dear PETSc community,
>
> I am working on solving a Helmholtz problem with P
On 14 July 2016 at 01:07, frank wrote:
> Hi Dave,
>
> Sorry for the late reply.
> Thank you so much for your detailed reply.
>
> I have a question about the estimation of the memory usage. There are
> 4223139840 allocated non-zeros and 18432 MPI processes. Double precision is
> used. So the memor
> On Jul 13, 2016, at 6:07 PM, frank wrote:
>
> Hi Dave,
>
> Sorry for the late reply.
> Thank you so much for your detailed reply.
>
> I have a question about the estimation of the memory usage. There are
> 4223139840 allocated non-zeros and 18432 MPI processes. Double precision is
> used.
Hi Dave,
Sorry for the late reply.
Thank you so much for your detailed reply.
I have a question about the estimation of the memory usage. There are
4223139840 allocated non-zeros and 18432 MPI processes. Double precision
is used. So the memory per process is:
4223139840 * 8 bytes / 18432 / 1
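As a rough check of that arithmetic (counting only the 8-byte double-precision
values, not the integer column indices an AIJ matrix also stores per nonzero):

  4223139840 nonzeros * 8 bytes ≈ 33.8 GB in total
  33.8 GB / 18432 processes     ≈ 1.8 MB per process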
On Wed, Jul 13, 2016 at 3:57 AM, Morten Nobel-Jørgensen wrote:
> I’m having problems distributing a simple FEM model using DMPlex. As a test
> case I use 1x1x2 hex box elements (cells) with 12 vertices. Each vertex
> has one DOF.
> When I distribute the system to two processors, each gets a single
Dear PETSc community,
I am working on solving a Helmholtz problem with PML. The issue is that I am
finding it very hard to deal with the resulting matrix system; I can get the
correct solution for coarse meshes, but it takes roughly 2-4 times as long to
converge for each successively refined me
> On Jul 13, 2016, at 11:05 AM, Matthew Knepley wrote:
>
> On Wed, Jul 13, 2016 at 10:34 AM, Hoang Giang Bui wrote:
> Thanks Barry
>
> This is a good comment. Since material behaviour depends very much on the
> trajectory of the solution, I suspect that the error may accumulate during
> tim
On Wed, Jul 13, 2016 at 10:34 AM, Hoang Giang Bui wrote:
> Thanks Barry
>
> This is a good comment. Since material behaviour depends very much on the
> trajectory of the solution, I suspect that the error may accumulate during
> time stepping.
>
> I have re-run the simulation as you suggested an
Thanks Barry
This is a good comment. Since material behaviour depends very much on the
trajectory of the solution, I suspect that the error may accumulate during
time stepping.
I have re-run the simulation as you suggested and posted the log file here:
https://www.dropbox.com/s/d6l8ixme37uh47a/log
> On Jul 13, 2016, at 4:17 AM, Dave May wrote:
>
> Hi Barry,
>
>
> Dave,
>
> MatPtAP has to generate some work space. Is it possible that the "guess" it
> uses for needed work space is so absurdly (and unnecessarily) large that it
> triggers a memory issue? Is it possible that other place
Hi Barry,
> Dave,
>
> MatPtAP has to generate some work space. Is it possible that the "guess" it
> uses for needed work space is so absurdly (and unnecessarily) large that it
> triggers a memory issue? Is it possible that other places that require
> "guesses" for work space produce a problem?
I’m having problems distributing a simple FEM model using DMPlex. As a test case
I use 1x1x2 hex box elements (cells) with 12 vertices. Each vertex has one DOF.
When I distribute the system to two processors, each gets a single element and
the local vector has size 8 (one DOF for each vertex o
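A minimal sketch of that kind of setup; the mesh is assumed to exist already,
and the section/distribution calls illustrate the usual DMPlex workflow rather
than the code from the message:

  /* assume dm is an interpolated 3D DMPlex with 1x1x2 hex cells, e.g. built
     with DMPlexCreateBoxMesh or read from a mesh file */
  DM             dmDist = NULL;
  PetscSection   s;
  Vec            lv;
  PetscInt       vStart, vEnd, v, n;
  PetscErrorCode ierr;

  /* distribute: with two ranks, each should receive one hex cell */
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}

  /* one scalar DOF on every vertex of the distributed mesh */
  ierr = DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);CHKERRQ(ierr);
  ierr = PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s);CHKERRQ(ierr);
  ierr = PetscSectionSetChart(s, vStart, vEnd);CHKERRQ(ierr);
  for (v = vStart; v < vEnd; ++v) {ierr = PetscSectionSetDof(s, v, 1);CHKERRQ(ierr);}
  ierr = PetscSectionSetUp(s);CHKERRQ(ierr);
  ierr = DMSetLocalSection(dm, s);CHKERRQ(ierr); /* DMSetDefaultSection in 2016-era PETSc */

  /* the local vector has one entry per vertex seen by this rank */
  ierr = DMCreateLocalVector(dm, &lv);CHKERRQ(ierr);
  ierr = VecGetSize(lv, &n);CHKERRQ(ierr); /* 8 for the vertices of a single hex cell */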