All,
I am running into a very strange issue with DMPlexVecGetClosure/DMPlexVecRestoreClosure in Fortran when passing a point for which the returned array should have size 0. The typical use case is a vector associated with cell-based unknowns, where one requests the values on the closure of a vertex.
I am attaching C and Fortran 90 examples. They define a Section with a single value at the first cell of a mesh and an associated Vec, then call DMPlexVecGetClosure/DMPlexVecRestoreClosure on the second point. The C example runs fine, but the Fortran one crashes when calling DMPlexVecRestoreClosure.
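The essence of the Fortran pattern is roughly the following (a hypothetical excerpt, not the attached source; variable names are placeholders):

```fortran
PetscScalar, pointer :: cval(:)
PetscErrorCode       :: ierr

! For a vertex, the Section has no dofs on the closure, so cval
! comes back with size(cval) == 0 ...
PetscCallA(DMPlexVecGetClosure(dm, section, v, point, cval, ierr))
write(*,*) 'size(cval): ', size(cval)
! ... and the matching restore is where the error below is raised
PetscCallA(DMPlexVecRestoreClosure(dm, section, v, point, cval, ierr))
```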
SiMini:tests (main)$ ./ex96f90 -i ${PETSC_DIR}/share/petsc/datafiles/meshes/SquareFaceSet.exo a
Point: 0
size(cval): 1
0: -1.0000e+00
Point: 1
size(cval): 0
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: Array was not checked out
[0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.17.4-1175-g59be91c8676 GIT Date: 2022-09-01 12:22:09 -0400
[0]PETSC ERROR: ./ex96f90 on a monterey-gcc12.2-arm64-basic-g named SiMini.local by blaise Mon Sep 5 11:24:40 2022
[0]PETSC ERROR: Configure options --CFLAGS="-Wimplicit-function-declaration -Wunused -Wuninitialized" --FFLAGS="-ffree-line-length-none -fallow-argument-mismatch -Wunused -Wuninitialized" --download-chaco-1 --download-exodusii=1 --download-hdf5=1 --download-metis=1 --download-netcdf=1 --download-parmetis=1 --download-pnetcdf=1 --download-zlib=1 --with-debugging=1 --with-exodusii-fortran-bindings --with-shared-libraries=1 --with-x11=1 --download-ctetgen=1 --download-triangle=1 --download-p4est=1
[0]PETSC ERROR: #1 DMRestoreWorkArray() at /opt/HPC/petsc-main/src/dm/interface/dm.c:1628
[0]PETSC ERROR: #2 DMPlexVecRestoreClosure() at /opt/HPC/petsc-main/src/dm/impls/plex/plex.c:5876
[0]PETSC ERROR: #3 ex96f90.F90:62
Abort(73) on node 0 (rank 0 in comm 16): application called MPI_Abort(MPI_COMM_SELF, 73) - process 0
I get the same behaviour on multiple platforms / compilers.
I followed the whole call tree for DMGetWorkArray / DMRestoreWorkArray in C and Fortran and didn’t see anything of interest.
I tried to modify DMGetWorkArray / DMRestoreWorkArray to handle the case of a work array of length 0 differently, but only managed to break other plex examples.
A possible fix at the application code level would be to first compute the size of the closure in the Section and only call DMPlexVecGetClosure/DMPlexVecRestoreClosure when it is > 0. This is a bit silly, however, as this is precisely the first thing DMPlexVecGetClosure does.
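For reference, the workaround I have in mind would look roughly like this (a sketch only, assuming the usual Fortran bindings; the loop relies on the transitive closure being returned as (point, orientation) pairs):

```fortran
PetscInt, pointer    :: closure(:)
PetscScalar, pointer :: cval(:)
PetscInt             :: clSize, dof, i
PetscErrorCode       :: ierr

! Sum the Section dofs over the transitive closure of the point
PetscCallA(DMPlexGetTransitiveClosure(dm, point, PETSC_TRUE, closure, ierr))
clSize = 0
do i = 1, size(closure), 2
   PetscCallA(PetscSectionGetDof(section, closure(i), dof, ierr))
   clSize = clSize + dof
end do
PetscCallA(DMPlexRestoreTransitiveClosure(dm, point, PETSC_TRUE, closure, ierr))

! Only get/restore the closure values when there is something to get
if (clSize > 0) then
   PetscCallA(DMPlexVecGetClosure(dm, section, v, point, cval, ierr))
   ! ... use cval ...
   PetscCallA(DMPlexVecRestoreClosure(dm, section, v, point, cval, ierr))
end if
```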
Any help would be MUCH appreciated.
Regards,
Blaise
—
Tier 1 Canada Research Chair in Mathematical and Computational Aspects of Solid Mechanics
Professor, Department of Mathematics & Statistics
Hamilton Hall room 409A, McMaster University
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
ex96.c
ex96f90.F90