Barry Smith <bsm...@mcs.anl.gov> writes:

>    Adrian,
>
>    You should not use F90Array1d *rptr as arguments in the Fortran
> interface. You should just use a regular Fortran one-dimensional array of
> real/scalar. Fortran doesn't handle polymorphism in this way at all. You
> have to have multiple f90 interfaces, one for each type, and provide a C
> stub for real, int, or whatever else you want to send.
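
A minimal sketch of that scheme: a generic interface with one specific
procedure per type, each backed by its own C stub. The specific names
here are hypothetical, not the actual PETSc interface:

      Interface PetscSFBcastBegin
         Subroutine PetscSFBcastBeginInt(sf,unit,rarray,larray,ierr)
           use petscisdef
           PetscSF :: sf
           PetscInt :: unit
           PetscInt :: rarray(:)
           PetscInt :: larray(:)
           PetscErrorCode :: ierr
         End Subroutine PetscSFBcastBeginInt
         Subroutine PetscSFBcastBeginScalar(sf,unit,rarray,larray,ierr)
           use petscisdef
           PetscSF :: sf
           PetscInt :: unit
           PetscScalar :: rarray(:)
           PetscScalar :: larray(:)
           PetscErrorCode :: ierr
         End Subroutine PetscSFBcastBeginScalar
      End Interface PetscSFBcastBegin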

Barry, look at the "use mpi_f08" way of calling MPI from Fortran.
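
In mpi_f08 a single interface covers buffers of every type by declaring
them assumed-type and assumed-rank (which needs TS 29113 compiler
support); for example, the mpi_f08 binding for MPI_Send is roughly:

      Subroutine MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
        Type(*), Dimension(..), Intent(in) :: buf
        Integer, Intent(in) :: count, dest, tag
        Type(MPI_Datatype), Intent(in) :: datatype
        Type(MPI_Comm), Intent(in) :: comm
        Integer, Optional, Intent(out) :: ierror
      End Subroutine MPI_Send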

>    What types do you need to send? It is probably easier if we just write
> the Fortran interfaces for you.
>
>    If you can send a simple Fortran PETSc code that uses PetscSFBcastBegin()
> and PetscSFBcastEnd() that I can use for testing, then I will write them. Do
> a bcast with int and another with real/scalar.
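
An untested sketch of such a test, assuming the Fortran bindings under
discussion exist, eliding the PetscSFSetGraph() setup, and assuming
PetscReal is double precision:

      PetscSF :: sf
      PetscErrorCode :: ierr
      PetscInt :: iroot(2), ileaf(4)
      PetscReal :: rroot(2), rleaf(4)

      ! ... create sf and set its graph with PetscSFSetGraph() ...

      ! bcast with int
      call PetscSFBcastBegin(sf, MPI_INTEGER, iroot, ileaf, ierr)
      call PetscSFBcastEnd(sf, MPI_INTEGER, iroot, ileaf, ierr)

      ! bcast with real/scalar
      call PetscSFBcastBegin(sf, MPI_DOUBLE_PRECISION, rroot, rleaf, ierr)
      call PetscSFBcastEnd(sf, MPI_DOUBLE_PRECISION, rroot, rleaf, ierr)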
>
>
>    Barry
>  
>
>
>> On Oct 19, 2017, at 8:59 PM, Adrian Croucher <a.crouc...@auckland.ac.nz> 
>> wrote:
>> 
>> hi
>> 
>> On 19/10/17 06:45, Matthew Knepley wrote:
>>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
>>> <a.crouc...@auckland.ac.nz> wrote:
>>> 
>>> 
>>> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and 
>>> PetscSFBcastEnd().
>>> 
>>> From the C side I have added the following into 
>>> src/vec/is/sf/interface/f90-custom/zsff90.c:
>>> 
>>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr, int *ierr PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>>> {
>>>   PetscDataType ptype;
>>>   const void    *rootdata;
>>>   void          *leafdata;
>>> 
>>>   /* map the MPI datatype onto a PETSc datatype, then pull the raw
>>>      pointers out of the F90 array descriptors */
>>>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
>>>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>> 
>>>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>>> }
>>> 
>>> and similarly for petscsfbcastend_(). Does this look plausible?
>>> 
>>> Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am 
>>> not sure how to do those.
>>> 
>>> The difficulty is in declaring the arrays that are passed in, which can be 
>>> of various types. In C they are declared as void*, but I'm not sure what to 
>>> do with that in Fortran. I can't seem to find any other example wrappers in 
>>> PETSc to model it on either. Any suggestions?
>> 
>> 
>> I think this is working now by just declaring those void* C variables as 
>> type(*) in the Fortran interface, e.g.:
>> 
>>       Interface
>>          Subroutine PetscSFBcastBegin(sf,unit,rarray,
>>      &       larray,ierr)
>>            use petscisdef
>>            PetscSF :: sf
>>            PetscInt :: unit
>>            type(*) :: rarray(:)
>>            type(*) :: larray(:)
>>            PetscErrorCode :: ierr
>>          End Subroutine PetscSFBcastBegin
>>       End Interface
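
With the type(*) dummies that one interface accepts arrays of any type
from the calling side, e.g. (hypothetical arrays and datatype variable):

      PetscInt :: iroot(2), ileaf(4)
      PetscScalar :: sroot(2), sleaf(4)

      call PetscSFBcastBegin(sf, dtype, iroot, ileaf, ierr)  ! integer arrays
      call PetscSFBcastBegin(sf, dtype, sroot, sleaf, ierr)  ! scalar arrays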
>> 
>> The only difficulty I have left with this is in the MPI_Datatype variable. 
>> I'd forgotten that these datatypes are all different in C and Fortran as 
>> well.
>> 
>> I amended the C interface code to the following, to convert the Fortran MPI 
>> datatype (actually an integer) to a C MPI_Datatype:
>> 
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint *unit, F90Array1d *rptr, F90Array1d *lptr, int *ierr PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType pdtype;
>>   MPI_Datatype  dtype;
>>   const void    *rootdata;
>>   void          *leafdata;
>> 
>>   /* convert the Fortran datatype handle into a C MPI_Datatype first */
>>   dtype = MPI_Type_f2c(*unit);
>>   *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>> 
>>   *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>> }
>> 
>> The problem is that this only seems to work if I declare the datatype in
>> the calling Fortran code to be the appropriate C MPI datatype, e.g. MPI_INT,
>> rather than the corresponding Fortran datatype, e.g. MPI_INTEGER (which
>> causes PetscMPIDataTypeToPetscDataType() to fail, as something weird gets
>> passed in for dtype).
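
(In other words, the observed behaviour seems to be:

      call PetscSFBcastBegin(sf, MPI_INT, iroot, ileaf, ierr)     ! works
      call PetscSFBcastBegin(sf, MPI_INTEGER, iroot, ileaf, ierr) ! fails

with the failure coming out of PetscMPIDataTypeToPetscDataType(). The
array arguments here are placeholders.)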
>> 
>> I was expecting the opposite to be true. It doesn't seem right to have to 
>> use the C datatypes in Fortran code (confusing if the Fortran datatypes are 
>> used elsewhere). So I suspect I've messed something up. Anyone have any 
>> ideas?
>> 
>> - Adrian
>> -- 
>> Dr Adrian Croucher
>> Senior Research Fellow
>> Department of Engineering Science
>> University of Auckland, New Zealand
>> email: a.crouc...@auckland.ac.nz
>> 
>> tel: +64 (0)9 923 4611
>> 
