On Mon, Apr 7, 2008 at 2:27 PM, Randall Mackie wrote:
> I've run into a problem with my code where, for a smaller problem, it
> bombs out in creating a 3D DA (with an error message about the partition
> being too fine in the z direction) for the case where np=121, but works
> fine for the case np=484.
> I would think that the creation of the DA should work fine f
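The call in question is PETSc's 3D distributed-array creation. A minimal
sketch using the current DMDACreate3d API (in 2008 this was DACreate3d);
the 60 x 60 x 20 grid and the 11 x 11 x 1 process grid are made-up numbers,
chosen only to show how specifying m, n, p explicitly avoids an automatic
decomposition that puts too many ranks along z:

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM da;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* Hypothetical 60 x 60 x 20 grid; run with mpiexec -n 121.  Pinning
       the process grid to 11 x 11 x 1 keeps all 121 ranks in the x-y
       plane, so the z direction is never partitioned at all. */
    PetscCall(DMDACreate3d(PETSC_COMM_WORLD,
                           DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                           DMDA_STENCIL_BOX,
                           60, 60, 20,       /* global grid sizes M, N, P   */
                           11, 11, 1,        /* process grid m, n, p        */
                           1, 1,             /* dof per node, stencil width */
                           NULL, NULL, NULL, /* default points per rank     */
                           &da));
    PetscCall(DMSetUp(da));
    PetscCall(DMDestroy(&da));
    PetscCall(PetscFinalize());
    return 0;
  }

Passing PETSC_DECIDE for m, n, and p instead lets PETSc factor the process
count itself, which is presumably where the reported "too fine in the z
direction" failure comes from for np=121.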
On Mon, 7 Apr 2008, David Knezevic wrote:
> Hello,
>
> I am trying to run a PETSc code on a parallel machine (it may be relevant
> that each node contains four AMD Opteron Quad-Core 64-bit processors, 16
> cores in all, as an SMP unit with 32GB of memory) and I'm observing some
> behaviour I don't understand.
> I'm using PETSC_COMM_SELF in order to construct the same matrix on each
> processor (and solve the system with a different right-hand side vector
> on each processor).
Matt,
> > I'm using PETSC_COMM_SELF in order to construct the same matrix
> > on each processor (and solve the system with a different
> > right-hand side vector on each processor),
So it's a bunch of similar sequential solves over PETSC_COMM_SELF. So a
sequential solve on a given MPI process should
> Best regards,
> Amjad Ali.
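A minimal sketch of that per-rank sequential pattern, using the current
PETSc API; the 1D Laplacian matrix and the rank-dependent constant
right-hand side are illustrative assumptions, not the poster's actual
problem:

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    Vec         b, x;
    KSP         ksp;
    PetscMPIInt rank;
    PetscInt    i, n = 100;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

    /* Every rank assembles its own sequential copy of the same matrix
       (a 1D Laplacian here) on PETSC_COMM_SELF. */
    PetscCall(MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, NULL, &A));
    for (i = 0; i < n; i++) {
      PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
      if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    /* The right-hand side differs per rank. */
    PetscCall(VecCreateSeq(PETSC_COMM_SELF, n, &b));
    PetscCall(VecSet(b, (PetscScalar)(rank + 1)));
    PetscCall(VecDuplicate(b, &x));

    /* One independent sequential solve per rank. */
    PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, b, x));

    PetscCall(KSPDestroy(&ksp));
    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&b));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }

Since each KSP lives on PETSC_COMM_SELF, every rank runs its own
independent solver and no communication happens during the solves.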
It sounds like he is saying that the iterative solvers fail to converge.
It could be that the systems become much more ill-conditioned. When
solving anything, first use LU,
-ksp_type preonly -pc_type lu
to determine if the system is consistent. Then use something simple, like
GMRES by itself.
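For example, with a stand-in executable name:

  # First check that the system is consistent with a direct solve:
  mpiexec -n 1 ./mysolver -ksp_type preonly -pc_type lu

  # Then switch to unpreconditioned GMRES and watch the true residual:
  mpiexec -n 1 ./mysolver -ksp_type gmres -pc_type none -ksp_monitor_true_residual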
Please send the corresponding configure.log to petsc-maint at mcs.anl.gov.
Satish
On Mon, 7 Apr 2008, Nicolas Tardieu wrote:
> Hi,
>
> I have some trouble using PETSc compiled with the Intel compilers (version
> 10.1) with Fortran, in parallel, on a 64-bit machine. The
> PetscInitialize always
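One way to narrow such a failure down is a program that does nothing but
initialize and finalize PETSc; if this also misbehaves, the problem is in
the build or MPI environment rather than in the application. A sketch in C
with the current API (the report concerns Fortran; this is just the
smallest equivalent check):

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    /* Nothing but init/print/finalize: isolates build and MPI issues. */
    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "PETSc initialized\n"));
    PetscCall(PetscFinalize());
    return 0;
  }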