Hi,

On 24/10/17 13:55, Adrian Croucher wrote:

The stub for PetscSFSetGraph() is getting automatically generated (I can see it in /vec/f90-mod/ftn-auto-interfaces/petscpetscsf.h90), but the one for PetscSFGetGraph() is missing for some reason. Any clues?

Oh, I see - all that's needed to turn on the auto stub is to remove the 'C' at src/vec/is/sf/interface/sf.c:429.

However, it looks like this auto stub for PetscSFGetGraph() can't return pointers to the ilocal and iremote arrays, so you have to call it twice: first to get the number of leaves (passing nulls for the array parameters), then allocate the arrays accordingly and call PetscSFGetGraph() again to fill them in. Is that right?
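If that's how it works, I'd guess the calling code would look something like the sketch below. I'm guessing at the exact stub signature, and I don't know what the null constant for the PetscSFNode array argument would be - I've written PETSC_NULL_SFNODE, but that name is hypothetical:

      PetscSF :: sf
      PetscErrorCode :: ierr
      PetscInt :: nroots, nleaves
      PetscInt, allocatable :: ilocal(:)
      type(PetscSFNode), allocatable :: iremote(:)

      ! first call: only retrieve the sizes, passing nulls for the arrays
      ! (PETSC_NULL_SFNODE is a made-up name for a PetscSFNode null constant)
      call PetscSFGetGraph(sf, nroots, nleaves, PETSC_NULL_INTEGER, &
           PETSC_NULL_SFNODE, ierr)

      ! second call: allocate to the right sizes, then fill the arrays
      allocate(ilocal(nleaves), iremote(nleaves))
      call PetscSFGetGraph(sf, nroots, nleaves, ilocal, iremote, ierr)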

If so, it might be more convenient to have a custom binding that returns pointer arrays, if possible, so you only have to call it once. That was the kind of thing I did with my own custom binding, though it used integer arrays rather than the PetscSFNode type. Something like the sketch below is what I have in mind.
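Again, this is only a sketch of a hypothetical interface, not anything that exists yet - the idea is that the binding would point ilocal/iremote directly at PETSc's internal arrays:

      PetscInt, pointer :: ilocal(:)
      type(PetscSFNode), pointer :: iremote(:)

      ! single call: no separate size query or allocation needed
      call PetscSFGetGraph(sf, nroots, nleaves, ilocal, iremote, ierr)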



I have a couple of other queries:

- you've allowed users to declare variables as being 'type(PetscSFNode)'. With most other PETSc Fortran types you can declare variables as 'type(tXXX) :: foo' or 'XXX :: foo' (but not 'type(XXX) :: foo'). Should this one perhaps be altered to work the same way, for consistency? (The first sketch after these queries shows what I mean.)

- With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have to use the C MPI types in the Fortran calling code, rather than the Fortran ones. I think it's a bit confusing to have to mix the two in the same code. If you pass MPI_INTEGER instead of MPI_INT, for example, it dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the Fortran MPI types be supported in these routines just by adding them as alternatives in the conditionals? (The second sketch below shows what I mean.)
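On the first query, the consistent pattern would be something like the following - 'tPetscSFNode' is just my guess at what the underlying derived type might be renamed to, by analogy with tVec, tMat etc.:

      ! what currently works for PetscSFNode:
      type(PetscSFNode) :: node

      ! the pattern used by most other PETSc Fortran types, e.g.:
      PetscSF :: sf            ! macro form
      type(tPetscSF) :: sf2    ! explicit derived type

      ! so for consistency one might expect:
      PetscSFNode :: node2          ! macro form
      type(tPetscSFNode) :: node3   ! hypothetical renamed derived type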
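On the second query, here's what I mean, assuming integer root/leaf data - the variable names are placeholders and I'm guessing at the exact Fortran signature:

      PetscInt, allocatable :: rootdata(:), leafdata(:)
      ! ... rootdata/leafdata allocated and filled elsewhere ...

      ! this works: a C datatype handle in Fortran calling code
      call PetscSFBcastBegin(sf, MPI_INT, rootdata, leafdata, ierr)
      call PetscSFBcastEnd(sf, MPI_INT, rootdata, leafdata, ierr)

      ! this dies in F90Array1dAccess() with 'unsupported MPI_Datatype',
      ! though it's what you would naturally write in Fortran:
      call PetscSFBcastBegin(sf, MPI_INTEGER, rootdata, leafdata, ierr)
      call PetscSFBcastEnd(sf, MPI_INTEGER, rootdata, leafdata, ierr)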

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611
