You can define 'PETSC_HAVE_BROKEN_RECURSIVE_MACRO' and then include
petsc.h in your sources to avoid these macros in amrex/application
codes.
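For example, a minimal sketch of that workaround (the define only needs to precede the first PETSc include in the affected translation unit):

    /* Disable PETSc's MPI logging macros in this source file only. */
    #define PETSC_HAVE_BROKEN_RECURSIVE_MACRO 1
    #include <petsc.h>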
PETSc logging is an important feature - it's best not to disable it
(globally, for everyone) because of this issue.
Satish
On Wed, 2 Nov 2022, Erik Schnette wrote:
PETSc redefines MPI functions as macros when logging is enabled. This
breaks some C++ code; see e.g. <
https://github.com/AMReX-Codes/amrex/pull/3005> for an example. The reason
is that the preprocessor splits macro arguments at commas, including the
commas inside template argument lists.
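A hypothetical sketch of the failure mode (the helper mpi_type_of is invented for illustration and is not from AMReX or PETSc):

    #include <mpi.h>
    #include <petsc.h> /* with PETSC_USE_LOG, MPI_Send becomes a function-like macro */

    /* Invented helper: maps a C++ type pair to an MPI_Datatype; only its call
       syntax matters here. */
    template <typename K, typename V> MPI_Datatype mpi_type_of();

    void send_pair(const void *buf, int dest, int tag, MPI_Comm comm)
    {
      /* The preprocessor splits macro arguments at top-level commas and does
         not treat angle brackets as grouping, so the comma in <int, double>
         makes the MPI_Send macro see seven arguments instead of six:

           MPI_Send(buf, 1, mpi_type_of<int, double>(), dest, tag, comm);

         Parenthesizing the offending argument is a local workaround: */
      MPI_Send(buf, 1, (mpi_type_of<int, double>()), dest, tag, comm);
    }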
It would be convenient if PETSc used a different way to log MPI calls.
Thanks for the bug report with a reproducing example. I have a fix in
https://gitlab.com/petsc/petsc/-/merge_requests/5797
Barry
> On Nov 2, 2022, at 6:52 AM, Stephan Köhler
> wrote:
>
> Dear PETSc/Tao team,
>
> it seems that there is a bug in the LMVM matrix class:
>
> In the function MatCreateVecs_LMVM it is not checked whether the vectors
> *L or *R are NULL.
Stephan,
I have located the troublesome line: TaoSetUp_ALMM() has the line
auglag->Px = tao->solution;
and almm.h has
Vec Px, LgradX, Ce, Ci, G; /* aliased vectors (do not destroy!) */
Now auglag->Px in some situations aliases tao->solution and in some cases it does not.
On Wed, Nov 2, 2022 at 8:57 AM Gong Yujie wrote:
> Dear development team,
>
> Now I'm doing a project about visualization. In the process of
> visualization, a surface mesh is preferred. I have two questions about
> the DMPlex mesh.
>
>
>1. Can I output the 3D volume mesh in DMPlex as a .obj or .fbx file? Both
>of these formats store only a surface mesh.
Yes, the normal approach is to partition your mesh once, and then, for each
field, resolve ownership of any interface dofs with respect to that element
partition (so velocity dofs on a shared vertex can land on any process that
owns an adjacent element, though even this isn't strictly necessary).
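A minimal sketch of the "partition once" step in PETSc terms, assuming an already-created serial DMPlex (the function name DistributeOnce is invented for illustration):

    #include <petscdmplex.h>

    /* Sketch only: distribute the elements once, then inspect the point SF,
       which records which mesh points (and hence which interface dofs) this
       rank does not own. */
    static PetscErrorCode DistributeOnce(DM *dm)
    {
      DM       dmDist = NULL;
      PetscSF  pointSF;
      PetscInt nroots, nleaves;

      PetscFunctionBeginUser;
      PetscCall(DMPlexDistribute(*dm, 0, NULL, &dmDist));
      if (dmDist) {
        PetscCall(DMDestroy(dm));
        *dm = dmDist;
      }
      PetscCall(DMGetPointSF(*dm, &pointSF));
      PetscCall(PetscSFGetGraph(pointSF, &nroots, &nleaves, NULL, NULL));
      /* nleaves counts shared points (e.g. vertices on the partition
         interface) owned by a neighboring process with an adjacent element. */
      PetscCall(PetscPrintf(PETSC_COMM_SELF, "owned roots %" PetscInt_FMT ", ghosted leaves %" PetscInt_FMT "\n", nroots, nleaves));
      PetscFunctionReturn(0);
    }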
Alexander Lin
Dear development team,
Now I'm doing a project about visualization. In the process of visualization,
a surface mesh is preferred. I have two questions about the DMPlex mesh.
1. Can I output the 3D volume mesh in DMPlex as a .obj or .fbx file? Both of
these formats store only a surface mesh.
So, in the latter case, IIUC, we can keep how we distribute data among the
processes (the partitioning of elements) so that nothing changes with respect
to `-ksp_view_pmat`, and our velocity and pressure dofs stay interlaced on a
global scale (e.g. each process has some velocity and some pressure dofs).
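A minimal sketch of what that interlacing looks like in PETSc terms, assuming a DMPlex with two PetscFE fields; the option prefixes "vel_" and "pres_" and the function name are illustrative only:

    #include <petscdmplex.h>
    #include <petscfe.h>

    /* Sketch only: two fields registered on the same DM share one element
       partition, so every process owns both velocity and pressure dofs for
       its elements, and the operator shown by -ksp_view_pmat interlaces them
       in the global ordering. */
    static PetscErrorCode SetupVelocityPressure(DM dm)
    {
      PetscFE  fe_u, fe_p;
      PetscInt dim;
      MPI_Comm comm;

      PetscFunctionBeginUser;
      PetscCall(PetscObjectGetComm((PetscObject)dm, &comm));
      PetscCall(DMGetDimension(dm, &dim));
      PetscCall(PetscFECreateDefault(comm, dim, dim, PETSC_FALSE, "vel_", -1, &fe_u));
      PetscCall(PetscFECreateDefault(comm, dim, 1, PETSC_FALSE, "pres_", -1, &fe_p));
      PetscCall(DMSetField(dm, 0, NULL, (PetscObject)fe_u));
      PetscCall(DMSetField(dm, 1, NULL, (PetscObject)fe_p));
      PetscCall(DMCreateDS(dm));
      PetscCall(PetscFEDestroy(&fe_u));
      PetscCall(PetscFEDestroy(&fe_p));
      PetscFunctionReturn(0);
    }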
Dear PETSc/Tao team,
it seems that there is a bug in the LMVM matrix class:
In the function MatCreateVecs_LMVM (see, e.g.,
https://petsc.org/release/src/ksp/ksp/utils/lmvm/lmvmimpl.c.html at line 214)
it is not checked whether the vectors *L or *R are NULL. This is, in
particular, a problem when the routine is called with NULL for one of them,
which MatCreateVecs() allows.
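For reference, a guarded implementation would look roughly like the sketch below; this is not the actual PETSc source, and the template vectors Xprev/Fprev are assumed from the LMVM implementation:

    /* Sketch only (not lmvmimpl.c; needs the private LMVM header for
       Mat_LMVM): MatCreateVecs() allows either output pointer to be NULL, so
       each one must be guarded before duplicating into it. */
    static PetscErrorCode MatCreateVecs_LMVM_Guarded(Mat B, Vec *L, Vec *R)
    {
      Mat_LMVM *lmvm = (Mat_LMVM *)B->data; /* Xprev/Fprev assumed as templates */

      PetscFunctionBegin;
      if (L) PetscCall(VecDuplicate(lmvm->Xprev, L));
      if (R) PetscCall(VecDuplicate(lmvm->Fprev, R));
      PetscFunctionReturn(0);
    }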