Matt
Why is config/PETSc/FEM.py where it is? It isn't configure stuff, is it? Can
it be removed or moved somewhere more appropriate?
What about ./config/BuildSystem/install.old? Can that be removed?
Thanks
Barry
Suggestions for improvements for the checks?
#define PetscValidLogicalCollectiveScalar(a,b,c)\
do { \
  PetscErrorCode _7_ierr; \
  PetscReal b1[2],b2[2]; \
  b1[0] = -PetscRealPart(b); b1[1] = PetscRealPart(b); \
  _7_ierr = MPI_Allreduce(b1,b2,2,MPIU_REAL,MPIU_MAX,((PetscObject)a)->comm);CHKERRQ(_7_ierr); \
  if (-b2[0] != b2[1]) SETERRQ1(((PetscObject)a)->comm,PETSC_ERR_ARG_WRONG,"Scalar value must be same on all processes, argument # %d",c); \
} while (0)
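The reason a NaN argument trips this check even on a single process: the macro reduces {-b, b} with MPI_MAX and then tests -b2[0] != b2[1], and NaN never compares equal to anything (including itself), so that test is true with just one rank. A minimal, PETSc-free illustration of the comparison (the printf text just mirrors the PETSc message; this is only a demonstration, not library code):

#include <math.h>
#include <stdio.h>

int main(void)
{
  double b     = NAN;               /* stands in for alpha in VecWAXPY(w,alpha,x,y) */
  double b1[2] = {-b, b};           /* what each rank contributes to the MAX reduction */
  double b2[2] = {b1[0], b1[1]};    /* with a single rank the reduction returns its input */

  /* the collectivity test: true for NaN even though only one "process" is involved */
  if (-b2[0] != b2[1]) printf("Scalar value must be same on all processes, argument # 2\n");
  /* a direct NaN test would allow a clearer message */
  if (b != b) printf("argument is NaN\n");
  return 0;
}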
... matrix on preselected vertices during
MatAssemblyBegin/End? Note that this implies that standard
Neumann-Neumann methods will not work (they need the unassembled matrix to
solve for the local Schur complements).
--
Stefano
Hi,
It looks like when VecWAXPY is called with alpha=NaN,
PetscValidLogicalCollectiveScalar causes the message "Scalar value must be same
on all processes, argument # 2" to be printed. This is misleading, and especially
confusing when running on only 1 processor.
Is this something worth fixing?
Bl
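One possible direction, as a sketch only (this is not what the macro currently does; the "is not a number" message and the use of PETSC_ERR_FP are my own choices): keep the collective reduction exactly as it is, but test the local value for NaN afterwards, so a NaN argument is reported as such instead of as a collectivity mismatch:

#define PetscValidLogicalCollectiveScalar(a,b,c)\
do { \
  PetscErrorCode _7_ierr; \
  PetscReal b1[2],b2[2]; \
  b1[0] = -PetscRealPart(b); b1[1] = PetscRealPart(b); \
  _7_ierr = MPI_Allreduce(b1,b2,2,MPIU_REAL,MPIU_MAX,((PetscObject)a)->comm);CHKERRQ(_7_ierr); \
  /* NaN never compares equal, so report it directly rather than as a collectivity error */ \
  if (b1[1] != b1[1]) SETERRQ1(((PetscObject)a)->comm,PETSC_ERR_FP,"Scalar value is not a number, argument # %d",c); \
  else if (-b2[0] != b2[1]) SETERRQ1(((PetscObject)a)->comm,PETSC_ERR_ARG_WRONG,"Scalar value must be same on all processes, argument # %d",c); \
} while (0)

This only cleans up the case where the calling rank itself has a NaN (as in the single-process report above); if some ranks pass NaN and others do not, MPI_MAX on a NaN input is not well defined, so the other ranks may still see the old message or disagree about whether to error at all.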
I added printfs to this and submitted the job, so I should have that info tomorrow.
>
> Can you instruct one process (or a few) to dump core?
I suppose so. How does one do that?
I added printfs to this and submitted the job, so I should have that info tomorrow.
>
> I can keep diving in, but anyone have any ideas on this?
>
> Mark
I've got a consistent segv on the Cray at NERSC with 64K cores. No problems
with smaller jobs.
It seems to happen in here:
/* Done after init due to a bug in MPICH-GM? */
ierr = PetscErrorPrintfInitialize();CHKERRQ(ierr);
I can keep diving in, but anyone have any ideas on this?
Mark
Can you instruct one process (or a few) to dump core?
> (they need the unassembled matrix to
> solve for the local Schur complements).
I'm not too concerned about that since I consider the classic N-N and
original FETI methods to be rather special-purpose compared to the newer
generation. I would like to limit the number of copies of a matrix to
control peak memory usage.