Re: [petsc-dev] PetscSF in Fortran

2017-11-01 Thread Adrian Croucher

hi


On 31/10/17 11:21, Adrian Croucher wrote:


I just pulled the latest next branch and found that the PetscSF stuff 
doesn't appear to work in Fortran anymore for me.




I think I've found the trouble: after you put in the check to see if 
Fortran has type(*) support, I needed to reconfigure PETSc (not just 
run make). It's working now.


Sorry for the false alarm...

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Adrian Croucher

On 01/11/17 16:20, Jed Brown wrote:

Adrian Croucher  writes:


On 01/11/17 09:50, Smith, Barry F. wrote:

Please send the full traceback. Cut and paste

It's not really giving me much, only:

Program received signal SIGSEGV, Segmentation fault.
0x7efe8574d6a9 in Pack_int_1 (n=2, bs=1, idx=0xfd6430,
unpacked=0xfffb03ec, packed=0xfe9a00)
at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:497
497 DEF_PackCmp(int)

You need to use "bt" (backtrace) to print the rest of the stack.


Aha OK. Here it is:

Program received signal SIGSEGV, Segmentation fault.
0x7f93b64496a9 in Pack_int_1 (n=2, bs=1, idx=0x17b43e0,
unpacked=0xfffb03ec, packed=0x17c79b0)
at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:497
497 DEF_PackCmp(int)
(gdb) bt
#0  0x7f93b64496a9 in Pack_int_1 (n=2, bs=1, idx=0x17b43e0,
unpacked=0xfffb03ec, packed=0x17c79b0)
at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:497
#1  0x7f93b645e830 in PetscSFBcastBegin_Basic (sf=0x17acbb0,
unit=0x7f93b27ad5c0 , rootdata=0xfffb03ec,
leafdata=0xfffb)
at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:937
#2  0x7f93b646f2b2 in PetscSFBcastBegin (sf=0x17acbb0,
unit=0x7f93b27ad5c0 , rootdata=0xfffb03ec,
leafdata=0xfffb)
at /home/acro018/software/PETSc/code/src/vec/is/sf/interface/sf.c:977
#3  0x7f93b64769ab in petscsfbcastbegin_ (sf=0x7ffe1047db90,
unit=0x40222c, rptr=0x7ffe1047dba0, lptr=0x7ffe1047dc20,
ierr=0x7ffe1047dc3c)
at 
/home/acro018/software/PETSc/code/src/vec/is/sf/interface/ftn-custom/zsf.c:45

#4  0x00401c3a in MAIN__ () at ex1f.F90:90


Can you give instructions to reproduce on a different system?

This is just running Barry's src/vec/is/sf/examples/tutorials/ex1f example.

Does Valgrind have warnings?


Valgrind output up to the crash is below. There are some warnings there 
about PetscSFBcastBegin.


Cheers, Adrian

acro018@en-354401:~/software/PETSc/code/src/vec/is/sf/examples/tutorials$ 
valgrind --leak-check=yes ./ex1f


==17871== Memcheck, a memory error detector
==17871== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==17871== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
==17871== Command: ./ex1f
==17871==
==17871== Syscall param writev(vector[...]) points to uninitialised byte(s)
==17871==at 0xB2192A7: writev (writev.c:49)
==17871==by 0xDF6C5C2: mca_oob_tcp_msg_send_handler (in 
/usr/lib/openmpi/lib/openmpi/mca_oob_tcp.so)
==17871==by 0xDF6D7BA: mca_oob_tcp_peer_send (in 
/usr/lib/openmpi/lib/openmpi/mca_oob_tcp.so)
==17871==by 0xDF71506: mca_oob_tcp_send_nb (in 
/usr/lib/openmpi/lib/openmpi/mca_oob_tcp.so)
==17871==by 0xDD61017: orte_rml_oob_send (in 
/usr/lib/openmpi/lib/openmpi/mca_rml_oob.so)
==17871==by 0xDD61697: orte_rml_oob_send_buffer (in 
/usr/lib/openmpi/lib/openmpi/mca_rml_oob.so)
==17871==by 0xD956BEE: ??? (in 
/usr/lib/openmpi/lib/openmpi/mca_grpcomm_bad.so)
==17871==by 0xA9AA64B: ompi_mpi_init (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0xA9C2969: PMPI_Init (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0x97957F7: PMPI_INIT (in 
/usr/lib/openmpi/lib/libmpi_f77.so.1.0.7)

==17871==by 0x4FCF435: petscinitialize_internal (zstart.c:328)
==17871==by 0x4FCFF36: petscinitialize_ (zstart.c:508)
==17871==  Address 0xf996431 is 161 bytes inside a block of size 256 alloc'd
==17871==at 0x4C2AF2E: realloc (vg_replace_malloc.c:692)
==17871==by 0xAA386D9: opal_dss_buffer_extend (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0xAA38A90: opal_dss_copy_payload (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0xAA147CD: orte_grpcomm_base_pack_modex_entries (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0xD956ACF: ??? (in 
/usr/lib/openmpi/lib/openmpi/mca_grpcomm_bad.so)
==17871==by 0xA9AA64B: ompi_mpi_init (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0xA9C2969: PMPI_Init (in 
/usr/lib/openmpi/lib/libmpi.so.1.0.8)
==17871==by 0x97957F7: PMPI_INIT (in 
/usr/lib/openmpi/lib/libmpi_f77.so.1.0.7)

==17871==by 0x4FCF435: petscinitialize_internal (zstart.c:328)
==17871==by 0x4FCFF36: petscinitialize_ (zstart.c:508)
==17871==by 0x4015D7: MAIN__ (ex1f.F90:25)
==17871==by 0x40214B: main (ex1f.F90:10)
==17871==
PetscSF Object: 1 MPI processes
  type: basic
sort=rank-order
  [0] Number of roots=6, leaves=2, remote ranks=1
  [0] 0 <- (0,2)
  [0] 2 <- (0,0)
  [0] Roots referenced by my leaves, by rank
  [0] 0: 2 edges
  [0]0 <- 2
  [0]2 <- 0
==17871== Conditional jump or move depends on uninitialised value(s)
==17871==at 0x4F2A6BB: f90array1daccessint_ (f90_fwrap.F:78)
==17871==by 0x513732C: F90Array1dAccess (f90_cwrap.c:107)
==17871==by 0x5209950: petscsfbcastbegin_ (zsf.c:43)
==17871==by 0x401C39: MAIN__ (ex1f.F90:90)

Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Jed Brown
Adrian Croucher  writes:

> On 01/11/17 09:50, Smith, Barry F. wrote:
>>Please send the full traceback. Cut and paste
>
> It's not really giving me much, only:
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x7efe8574d6a9 in Pack_int_1 (n=2, bs=1, idx=0xfd6430,
> unpacked=0xfffb03ec, packed=0xfe9a00)
> at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:497
> 497 DEF_PackCmp(int)

You need to use "bt" (backtrace) to print the rest of the stack.

Can you give instructions to reproduce on a different system?  Does
Valgrind have warnings?


Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Adrian Croucher

On 01/11/17 09:50, Smith, Barry F. wrote:

   Please send the full traceback. Cut and paste


It's not really giving me much, only:

Program received signal SIGSEGV, Segmentation fault.
0x7efe8574d6a9 in Pack_int_1 (n=2, bs=1, idx=0xfd6430,
unpacked=0xfffb03ec, packed=0xfe9a00)
at /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c:497
497 DEF_PackCmp(int)


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Smith, Barry F.

  Please send the full traceback. Cut and paste




> On Oct 31, 2017, at 3:37 PM, Adrian Croucher  
> wrote:
> 
> 
> On 01/11/17 03:02, Smith, Barry F. wrote:
>>   Adrian,
>> 
>> I fixed some bugs but apparently broke something at the same time. At a 
>> meeting now, maybe you could use -start_in_debugger and get the traceback 
>> where it crashes for you?
> OK I did that, and it appears to be dying in 
> src/vec/is/sf/impls/basic/sfbasic.c:497, which is:
> 
> DEF_PackCmp(int)
> 
> Dunno what that means though...
> 
> - Adrian
> 
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Adrian Croucher


On 01/11/17 03:02, Smith, Barry F. wrote:

   Adrian,

 I fixed some bugs but apparently broke something at the same time. At a 
meeting now, maybe you could use -start_in_debugger and get the traceback where 
it crashes for you?
OK I did that, and it appears to be dying in 
src/vec/is/sf/impls/basic/sfbasic.c:497, which is:


DEF_PackCmp(int)

Dunno what that means though...

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-31 Thread Smith, Barry F.

  Adrian,

I fixed some bugs but apparently broke something at the same time. At a 
meeting now, maybe you could use -start_in_debugger and get the traceback where 
it crashes for you?

  Barry


> On Oct 30, 2017, at 5:21 PM, Adrian Croucher  
> wrote:
> 
> hi,
> 
> I just pulled the latest next branch and found that the PetscSF stuff doesn't 
> appear to work in Fortran anymore for me.
> 
> When I run the ex1f example it gives:
> 
> [0]PETSC ERROR: 
> 
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see 
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: likely location of problem given in stack below
> [0]PETSC ERROR: -  Stack Frames 
> 
> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [0]PETSC ERROR:   INSTEAD the line number of the start of the function
> [0]PETSC ERROR:   is given.
> [0]PETSC ERROR: [0] PetscSFBcastBegin_Basic line 922 /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c
> [0]PETSC ERROR: [0] PetscSFBcastBegin line 972 /home/acro018/software/PETSc/code/src/vec/is/sf/interface/sf.c
> [0]PETSC ERROR: - Error Message 
> -
> 
> And the same thing happens with my own code.
> 
> Should this example still work? Or is there something else I need to do?
> 
> I did make allfortranstubs and make.
> 
> - Adrian
> 
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-30 Thread Adrian Croucher

hi,

I just pulled the latest next branch and found that the PetscSF stuff 
doesn't appear to work in Fortran anymore for me.


When I run the ex1f example it gives:

[0]PETSC ERROR: 

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
probably memory access out of range

[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: -  Stack Frames 


[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:   INSTEAD the line number of the start of the function
[0]PETSC ERROR:   is given.
[0]PETSC ERROR: [0] PetscSFBcastBegin_Basic line 922 /home/acro018/software/PETSc/code/src/vec/is/sf/impls/basic/sfbasic.c
[0]PETSC ERROR: [0] PetscSFBcastBegin line 972 /home/acro018/software/PETSc/code/src/vec/is/sf/interface/sf.c
[0]PETSC ERROR: - Error Message 
-


And the same thing happens with my own code.

Should this example still work? Or is there something else I need to do?

I did make allfortranstubs and make.

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Jed Brown
Barry Smith  writes:

>> On Oct 24, 2017, at 6:58 AM, Jed Brown  wrote:
>> 
>> Lawrence Mitchell  writes:
>> 
 On 24 Oct 2017, at 06:21, Barry Smith  wrote:
 
>>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
>>> to use the C MPI types in the Fortran calling code, rather than the 
>>> Fortran ones. I think it is a bit confusing to have to mix the two up 
>>> in the same code. If you put MPI_INTEGER instead of MPI_INT for 
>>> example, it dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. 
>>> Could the Fortran MPI types be supported in these routines just by 
>>> adding them as alternatives into the conditionals?
>> Hmm, Jed will need to provide wisdom the Linux manual page for 
>> MPI_INTEGER clearly states:
>> 
>> Note that the Fortran types should only be used in Fortran programs, and 
>> the C types should only be used in C programs. For example, it is in 
>> error to use MPI_INT for a Fortran INTEGER. Datatypes are of type 
>> MPI_Datatype in C and of type INTEGER in Fortran.
>> 
>> If this is true then there is no place we can do the change properly.
> 
> Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
> datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the 
> C side of the interface wouldn't have to worry about the Fortran MPI 
> datatypes, but it doesn't really seem to do that.
 
 Do did I. Its existence seems to contradict the statement in the Linux 
 manual page. Well wait for Jed.
>>> 
>>> A fortran INTEGER may have a different width to a C int.  Hence the 
>>> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
>>> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
>>> standard sheds some light:
>>> 
>>> Moving handles from Fortran to C
>>> 
>>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446
>> 
>> Thanks, Lawrence.  And my recollection is that the C
>
>Fortran types?  Within C code?

The C macros for Fortran types.


Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Barry Smith

  Never mind, this is MPI Uni ;) I'll fix it tonight
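
For context, MPIUni is PETSc's single-process MPI substitute, and the build log below shows it lacks the C constants for the Fortran datatypes (MPI_INTEGER etc.). A minimal sketch of the kind of guard that would let the helper named in that log compile there; the guard macros and the fallback behaviour are assumptions, not the actual fix:

#include <petscsys.h>

/* Hypothetical guarded form of the helper from the error log below.
   MPI_INTEGER / MPI_DOUBLE_PRECISION are the C constants for the Fortran
   datatypes and are not defined by MPIUni, so the comparisons are compiled
   only when real Fortran MPI support is present.  The guard macros used
   here are assumptions. */
PetscErrorCode PetscMPIFortranDatatypeToC(MPI_Fint unit, MPI_Datatype *dtype)
{
  PetscFunctionBegin;
#if defined(PETSC_HAVE_FORTRAN) && !defined(PETSC_HAVE_MPIUNI)
  {
    MPI_Datatype ftype = MPI_Type_f2c(unit);      /* Fortran handle -> C handle */
    if (ftype == MPI_INTEGER)               *dtype = MPI_INT;
    else if (ftype == MPI_DOUBLE_PRECISION) *dtype = MPI_DOUBLE;
    else if (ftype == MPI_DOUBLE_COMPLEX)   *dtype = MPI_C_DOUBLE_COMPLEX;
    else                                    *dtype = ftype;   /* already usable from C */
  }
#else
  SETERRQ(PETSC_COMM_SELF,PETSC_ERR_SUP,"Fortran MPI datatypes not available in this build");
#endif
  PetscFunctionReturn(0);
}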


> On Oct 24, 2017, at 5:34 PM, Barry Smith  wrote:
> 
> 
>   This didn't last too long 
> 
> /opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c: In function 
> 'PetscErrorCode PetscMPIFortranDatatypeToC(MPI_Fint, MPI_Datatype*)':
> /opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:29:16: 
> error: 'MPI_INTEGER' was not declared in this scope
>   if (ftype == MPI_INTEGER) *dtype = MPI_INT;
>^
> /opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:30:21: 
> error: 'MPI_DOUBLE_PRECISION' was not declared in this scope
>   else if (ftype == MPI_DOUBLE_PRECISION) *dtype = MPI_DOUBLE;
> ^
> /opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:31:21: 
> error: 'MPI_COMPLEX16' was not declared in this scope
>   else if (ftype == MPI_COMPLEX16) *dtype = MPI_C_DOUBLE_COMPLEX;
> ^
> 
> 
>> On Oct 24, 2017, at 6:58 AM, Jed Brown  wrote:
>> 
>> Lawrence Mitchell  writes:
>> 
 On 24 Oct 2017, at 06:21, Barry Smith  wrote:
 
>>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
>>> to use the C MPI types in the Fortran calling code, rather than the 
>>> Fortran ones. I think it is a bit confusing to have to mix the two up 
>>> in the same code. If you put MPI_INTEGER instead of MPI_INT for 
>>> example, it dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. 
>>> Could the Fortran MPI types be supported in these routines just by 
>>> adding them as alternatives into the conditionals?
>> Hmm, Jed will need to provide wisdom the Linux manual page for 
>> MPI_INTEGER clearly states:
>> 
>> Note that the Fortran types should only be used in Fortran programs, and 
>> the C types should only be used in C programs. For example, it is in 
>> error to use MPI_INT for a Fortran INTEGER. Datatypes are of type 
>> MPI_Datatype in C and of type INTEGER in Fortran.
>> 
>> If this is true then there is no place we can do the change properly.
> 
> Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
> datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the 
> C side of the interface wouldn't have to worry about the Fortran MPI 
> datatypes, but it doesn't really seem to do that.
 
 Do did I. Its existence seems to contradict the statement in the Linux 
 manual page. Well wait for Jed.
>>> 
>>> A fortran INTEGER may have a different width to a C int.  Hence the 
>>> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
>>> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
>>> standard sheds some light:
>>> 
>>> Moving handles from Fortran to C
>>> 
>>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446
>> 
>> Thanks, Lawrence.  And my recollection is that the C types are
>> guaranteed to exist if the MPI was built to support Fortran, but may not
>> be defined otherwise.  So PETSc uses it in code that is only compiled
>> when Fortran is available, it should all work.
>> 
>>> Interlanguage communication
>>> 
>>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456
>>> 
>>> Lawrence
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Barry Smith

   This didn't last too long 

/opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c: In function 
'PetscErrorCode PetscMPIFortranDatatypeToC(MPI_Fint, MPI_Datatype*)':
/opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:29:16: error: 
'MPI_INTEGER' was not declared in this scope
   if (ftype == MPI_INTEGER) *dtype = MPI_INT;
^
/opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:30:21: error: 
'MPI_DOUBLE_PRECISION' was not declared in this scope
   else if (ftype == MPI_DOUBLE_PRECISION) *dtype = MPI_DOUBLE;
 ^
/opt/atlassian/pipelines/agent/build/src/sys/f90-src/f90_cwrap.c:31:21: error: 
'MPI_COMPLEX16' was not declared in this scope
   else if (ftype == MPI_COMPLEX16) *dtype = MPI_C_DOUBLE_COMPLEX;
 ^


> On Oct 24, 2017, at 6:58 AM, Jed Brown  wrote:
> 
> Lawrence Mitchell  writes:
> 
>>> On 24 Oct 2017, at 06:21, Barry Smith  wrote:
>>> 
>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
>> to use the C MPI types in the Fortran calling code, rather than the 
>> Fortran ones. I think it is a bit confusing to have to mix the two up in 
>> the same code. If you put MPI_INTEGER instead of MPI_INT for example, it 
>> dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the 
>> Fortran MPI types be supported in these routines just by adding them as 
>> alternatives into the conditionals?
> Hmm, Jed will need to provide wisdom the Linux manual page for 
> MPI_INTEGER clearly states:
> 
> Note that the Fortran types should only be used in Fortran programs, and 
> the C types should only be used in C programs. For example, it is in 
> error to use MPI_INT for a Fortran INTEGER. Datatypes are of type 
> MPI_Datatype in C and of type INTEGER in Fortran.
> 
> If this is true then there is no place we can do the change properly.
 
 Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
 datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the 
 C side of the interface wouldn't have to worry about the Fortran MPI 
 datatypes, but it doesn't really seem to do that.
>>> 
>>> Do did I. Its existence seems to contradict the statement in the Linux 
>>> manual page. Well wait for Jed.
>> 
>> A fortran INTEGER may have a different width to a C int.  Hence the 
>> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
>> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
>> standard sheds some light:
>> 
>> Moving handles from Fortran to C
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446
> 
> Thanks, Lawrence.  And my recollection is that the C types are
> guaranteed to exist if the MPI was built to support Fortran, but may not
> be defined otherwise.  So PETSc uses it in code that is only compiled
> when Fortran is available, it should all work.
> 
>> Interlanguage communication
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456
>> 
>> Lawrence



Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Barry Smith

   Adrian,


I have updated the branch barry/fortran-PetscSFBcastBegin to support using 
Fortran datatypes and PetscSFGetGraph()

Please let me know of additional difficulties.

   Barry


> On Oct 24, 2017, at 6:58 AM, Jed Brown  wrote:
> 
> Lawrence Mitchell  writes:
> 
>>> On 24 Oct 2017, at 06:21, Barry Smith  wrote:
>>> 
>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
>> to use the C MPI types in the Fortran calling code, rather than the 
>> Fortran ones. I think it is a bit confusing to have to mix the two up in 
>> the same code. If you put MPI_INTEGER instead of MPI_INT for example, it 
>> dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the 
>> Fortran MPI types be supported in these routines just by adding them as 
>> alternatives into the conditionals?
> Hmm, Jed will need to provide wisdom the Linux manual page for 
> MPI_INTEGER clearly states:
> 
> Note that the Fortran types should only be used in Fortran programs, and 
> the C types should only be used in C programs. For example, it is in 
> error to use MPI_INT for a Fortran INTEGER. Datatypes are of type 
> MPI_Datatype in C and of type INTEGER in Fortran.
> 
> If this is true then there is no place we can do the change properly.
 
 Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
 datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the 
 C side of the interface wouldn't have to worry about the Fortran MPI 
 datatypes, but it doesn't really seem to do that.
>>> 
>>> Do did I. Its existence seems to contradict the statement in the Linux 
>>> manual page. Well wait for Jed.
>> 
>> A fortran INTEGER may have a different width to a C int.  Hence the 
>> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
>> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
>> standard sheds some light:
>> 
>> Moving handles from Fortran to C
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446
> 
> Thanks, Lawrence.  And my recollection is that the C types are
> guaranteed to exist if the MPI was built to support Fortran, but may not
> be defined otherwise.  So PETSc uses it in code that is only compiled
> when Fortran is available, it should all work.
> 
>> Interlanguage communication
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456
>> 
>> Lawrence



Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Barry Smith

> On Oct 24, 2017, at 6:58 AM, Jed Brown  wrote:
> 
> Lawrence Mitchell  writes:
> 
>>> On 24 Oct 2017, at 06:21, Barry Smith  wrote:
>>> 
>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
>> to use the C MPI types in the Fortran calling code, rather than the 
>> Fortran ones. I think it is a bit confusing to have to mix the two up in 
>> the same code. If you put MPI_INTEGER instead of MPI_INT for example, it 
>> dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the 
>> Fortran MPI types be supported in these routines just by adding them as 
>> alternatives into the conditionals?
> Hmm, Jed will need to provide wisdom the Linux manual page for 
> MPI_INTEGER clearly states:
> 
> Note that the Fortran types should only be used in Fortran programs, and 
> the C types should only be used in C programs. For example, it is in 
> error to use MPI_INT for a Fortran INTEGER. Datatypes are of type 
> MPI_Datatype in C and of type INTEGER in Fortran.
> 
> If this is true then there is no place we can do the change properly.
 
 Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
 datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the 
 C side of the interface wouldn't have to worry about the Fortran MPI 
 datatypes, but it doesn't really seem to do that.
>>> 
>>> Do did I. Its existence seems to contradict the statement in the Linux 
>>> manual page. Well wait for Jed.
>> 
>> A fortran INTEGER may have a different width to a C int.  Hence the 
>> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
>> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
>> standard sheds some light:
>> 
>> Moving handles from Fortran to C
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446
> 
> Thanks, Lawrence.  And my recollection is that the C

   Fortran types?  Within C code?

> types are
> guaranteed to exist if the MPI was built to support Fortran, but may not
> be defined otherwise.  So PETSc uses it in code that is only compiled
> when Fortran is available, it should all work.
> 
>> Interlanguage communication
>> 
>> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456
>> 
>> Lawrence



Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Jed Brown
Lawrence Mitchell  writes:

>> On 24 Oct 2017, at 06:21, Barry Smith  wrote:
>> 
> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
> to use the C MPI types in the Fortran calling code, rather than the 
> Fortran ones. I think it is a bit confusing to have to mix the two up in 
> the same code. If you put MPI_INTEGER instead of MPI_INT for example, it 
> dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the 
> Fortran MPI types be supported in these routines just by adding them as 
> alternatives into the conditionals?
  Hmm, Jed will need to provide wisdom the Linux manual page for 
 MPI_INTEGER clearly states:
 
 Note that the Fortran types should only be used in Fortran programs, and 
 the C types should only be used in C programs. For example, it is in error 
 to use MPI_INT for a Fortran INTEGER. Datatypes are of type MPI_Datatype 
 in C and of type INTEGER in Fortran.
 
  If this is true then there is no place we can do the change properly.
>>> 
>>> Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
>>> datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the C 
>>> side of the interface wouldn't have to worry about the Fortran MPI 
>>> datatypes, but it doesn't really seem to do that.
>> 
>>  Do did I. Its existence seems to contradict the statement in the Linux 
>> manual page. Well wait for Jed.
>
> A fortran INTEGER may have a different width to a C int.  Hence the 
> distinction.  The MPI_XXX_f2c functions convert handles on the fortran side 
> to handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the 
> standard sheds some light:
>
> Moving handles from Fortran to C
>
> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446

Thanks, Lawrence.  And my recollection is that the C types are
guaranteed to exist if the MPI was built to support Fortran, but may not
be defined otherwise.  So as long as PETSc uses them only in code that is
compiled when Fortran is available, it should all work.

> Interlanguage communication
>
> http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456
>
> Lawrence


Re: [petsc-dev] PetscSF in Fortran

2017-10-24 Thread Lawrence Mitchell

> On 24 Oct 2017, at 06:21, Barry Smith  wrote:
> 
 - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have to 
 use the C MPI types in the Fortran calling code, rather than the Fortran 
 ones. I think it is a bit confusing to have to mix the two up in the same 
 code. If you put MPI_INTEGER instead of MPI_INT for example, it dies in 
 F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the Fortran MPI 
 types be supported in these routines just by adding them as alternatives 
 into the conditionals?
>>>  Hmm, Jed will need to provide wisdom the Linux manual page for MPI_INTEGER 
>>> clearly states:
>>> 
>>> Note that the Fortran types should only be used in Fortran programs, and 
>>> the C types should only be used in C programs. For example, it is in error 
>>> to use MPI_INT for a Fortran INTEGER. Datatypes are of type MPI_Datatype in 
>>> C and of type INTEGER in Fortran.
>>> 
>>>  If this is true then there is no place we can do the change properly.
>> 
>> Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI datatypes 
>> to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the C side of 
>> the interface wouldn't have to worry about the Fortran MPI datatypes, but it 
>> doesn't really seem to do that.
> 
>  Do did I. Its existence seems to contradict the statement in the Linux 
> manual page. Well wait for Jed.

A fortran INTEGER may have a different width to a C int.  Hence the 
distinction.  The MPI_XXX_f2c functions convert handles on the fortran side to 
handles on the C side (MPI_XXX_c2f does the opposite).  Perhaps the standard 
sheds some light:

Moving handles from Fortran to C

http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node446.htm#Node446

Interlanguage communication

http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node456.htm#Node456

Lawrence

Re: [petsc-dev] PetscSF in Fortran

2017-10-23 Thread Barry Smith

> On Oct 23, 2017, at 11:11 PM, Adrian Croucher  
> wrote:
> 
> hi
> 
> 
> On 24/10/17 17:00, Barry Smith wrote:
>>> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have to 
>>> use the C MPI types in the Fortran calling code, rather than the Fortran 
>>> ones. I think it is a bit confusing to have to mix the two up in the same 
>>> code. If you put MPI_INTEGER instead of MPI_INT for example, it dies in 
>>> F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the Fortran MPI 
>>> types be supported in these routines just by adding them as alternatives 
>>> into the conditionals?
>>   Hmm, Jed will need to provide wisdom the Linux manual page for MPI_INTEGER 
>> clearly states:
>> 
>> Note that the Fortran types should only be used in Fortran programs, and the 
>> C types should only be used in C programs. For example, it is in error to 
>> use MPI_INT for a Fortran INTEGER. Datatypes are of type MPI_Datatype in C 
>> and of type INTEGER in Fortran.
>> 
>>   If this is true then there is no place we can do the change properly.
> 
> Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI datatypes 
> to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that the C side of the 
> interface wouldn't have to worry about the Fortran MPI datatypes, but it 
> doesn't really seem to do that.

  So did I. Its existence seems to contradict the statement in the Linux manual 
page. We'll wait for Jed.
> 
> - Adrian
> 
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-23 Thread Adrian Croucher

hi


On 24/10/17 17:00, Barry Smith wrote:

- With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have to use 
the C MPI types in the Fortran calling code, rather than the Fortran ones. I 
think it is a bit confusing to have to mix the two up in the same code. If you 
put MPI_INTEGER instead of MPI_INT for example, it dies in F90Array1dAccess() 
with 'unsupported MPI_Datatype'. Could the Fortran MPI types be supported in 
these routines just by adding them as alternatives into the conditionals?

   Hmm, Jed will need to provide wisdom; the Linux manual page for MPI_INTEGER 
clearly states:

Note that the Fortran types should only be used in Fortran programs, and the C 
types should only be used in C programs. For example, it is in error to use 
MPI_INT for a Fortran INTEGER. Datatypes are of type MPI_Datatype in C and of 
type INTEGER in Fortran.

   If this is true then there is no place we can do the change properly.


Yes, I first thought that MPI_Type_f2c() would convert Fortran MPI 
datatypes to the C ones (e.g. turn MPI_INTEGER into MPI_INT), so that 
the C side of the interface wouldn't have to worry about the Fortran MPI 
datatypes, but it doesn't really seem to do that.
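
A small C sketch of the handle-versus-type distinction being described here; the helper name is made up and the printed result is only what one would typically expect:

#include <mpi.h>
#include <stdio.h>

/* MPI_Type_f2c() only converts the handle representation (Fortran INTEGER
   handle -> C MPI_Datatype handle); the result still describes a Fortran
   INTEGER, it does not become MPI_INT. */
void show_fortran_handle(MPI_Fint funit)
{
  MPI_Datatype ctype = MPI_Type_f2c(funit);   /* same datatype, C handle form */
  int          size;

  MPI_Type_size(ctype, &size);
  printf("element size %d bytes; same handle as MPI_INT? %d\n",
         size, (int)(ctype == MPI_INT));      /* typically 0 even when the sizes match */
}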


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-23 Thread Barry Smith

> On Oct 23, 2017, at 9:53 PM, Adrian Croucher  
> wrote:
> 
> hi
> 
> 
> On 24/10/17 13:55, Adrian Croucher wrote:
>> 
>> The stub for PetscSFSetGraph() is getting automatically generated (I can see 
>> it in /vec/f90-mod/ftn-auto-interfaces/petscpetscsf.h90), but the one for 
>> PetscSFGetGraph() is missing for some reason. Any clues?
> 
> Oh, I see, all it needs to turn on the auto stub is to remove the 'C' at 
> src/vec/is/sf/interface/sf.c:429.
> 
> However it looks like this auto stub for PetscSFGetGraph() can't return 
> pointers to the ilocal and iremote arrays, so you have to call it twice? 
> First to get the number of leaves (with null in the array parameters), then 
> allocate the arrays accordingly before calling PetscSFGetGraph() again to 
> fill them up?
> 
> If so it might be more convenient to have a custom binding that returned 
> pointer arrays, if possible, so you only had to call it once. That was the 
> kind of thing I did with my custom binding, but it was using integer arrays 
> rather than the PetscSFNode type.

  Yes, you are right, needs custom code for this one. I'll try to add it.
> 
> 
> 
> I have a couple of other queries:
> 
> - you've allowed users to declare variables as being 'type(PetscSFNode)'. 
> With most other PETSc Fortran types you can declare variables as 'type(tXXX) 
> :: foo' or 'XXX :: foo' (but not 'type(XXX) :: foo'). Should this one perhaps 
> be altered to work the same way, for consistency?

   I am not sure. The ones with the tXXX business are opaque objects, that is, 
you cannot directly access their contents from Fortran code. They have the 
alternative XXX :: foo to match the previous syntax for declaring such objects.

   I think I will leave PetscSFNode as is, but I've added example usage for 
Fortran to the manual page for PetscSFNode.
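
For reference, a C-side sketch of the struct that type(PetscSFNode) mirrors and of how it feeds PetscSFSetGraph(); the tiny graph below copies the PetscSF view shown elsewhere in this thread and is purely illustrative:

#include <petscsf.h>

/* PetscSFNode is a plain struct with rank and index members: each leaf
   records which root it references and on which rank. */
static PetscErrorCode BuildTinyGraph(PetscSF sf)
{
  PetscSFNode    iremote[2];
  PetscInt       ilocal[2] = {0, 2};
  PetscErrorCode ierr;

  PetscFunctionBegin;
  iremote[0].rank = 0; iremote[0].index = 2;   /* leaf 0 <- (rank 0, root 2) */
  iremote[1].rank = 0; iremote[1].index = 0;   /* leaf 2 <- (rank 0, root 0) */
  ierr = PetscSFSetGraph(sf, 6, 2, ilocal, PETSC_COPY_VALUES, iremote, PETSC_COPY_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}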

> 
> - With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have to 
> use the C MPI types in the Fortran calling code, rather than the Fortran 
> ones. I think it is a bit confusing to have to mix the two up in the same 
> code. If you put MPI_INTEGER instead of MPI_INT for example, it dies in 
> F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the Fortran MPI 
> types be supported in these routines just by adding them as alternatives into 
> the conditionals?

  Hmm, Jed will need to provide wisdom; the Linux manual page for MPI_INTEGER 
clearly states:

Note that the Fortran types should only be used in Fortran programs, and the C 
types should only be used in C programs. For example, it is in error to use 
MPI_INT for a Fortran INTEGER. Datatypes are of type MPI_Datatype in C and of 
type INTEGER in Fortran.

  If this is true then there is no place we can do the change properly.

  Jed needs to chime in here. Do you know the acceptable way to handle this? 
That is, do all MPI implementations support MPI_INTEGER from C?

  Barry


> 
> - Adrian
> 
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-23 Thread Adrian Croucher

hi


On 24/10/17 13:55, Adrian Croucher wrote:


The stub for PetscSFSetGraph() is getting automatically generated (I 
can see it in /vec/f90-mod/ftn-auto-interfaces/petscpetscsf.h90), but 
the one for PetscSFGetGraph() is missing for some reason. Any clues?


Oh, I see, all it needs to turn on the auto stub is to remove the 'C' at 
src/vec/is/sf/interface/sf.c:429.
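
For anyone following along: the 'C' in question is part of the marker on the function's manual-page comment block, which tells the stub generator (bfort) whether to emit an automatic Fortran interface. Illustrative snippet only:

/*@C ... @*/   /* doc block marked "C": bfort skips it; a hand-written Fortran stub is expected */
/*@  ... @*/   /* unmarked doc block: bfort auto-generates the Fortran stub */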


However it looks like this auto stub for PetscSFGetGraph() can't return 
pointers to the ilocal and iremote arrays, so you have to call it twice? 
First to get the number of leaves (with null in the array parameters), 
then allocate the arrays accordingly before calling PetscSFGetGraph() 
again to fill them up?


If so it might be more convenient to have a custom binding that returned 
pointer arrays, if possible, so you only had to call it once. That was 
the kind of thing I did with my custom binding, but it was using integer 
arrays rather than the PetscSFNode type.
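
For comparison, the C interface hands back borrowed arrays in a single call, which is what a custom Fortran binding with pointer arrays would mirror. A minimal sketch using the standard PetscSF API (the helper name is made up):

#include <petscsf.h>

/* One call returns the counts plus borrowed (not copied) arrays describing the graph. */
static PetscErrorCode InspectGraph(PetscSF sf)
{
  PetscInt           nroots, nleaves;
  const PetscInt    *ilocal;
  const PetscSFNode *iremote;
  PetscErrorCode     ierr;

  PetscFunctionBegin;
  ierr = PetscSFGetGraph(sf, &nroots, &nleaves, &ilocal, &iremote);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "roots %D leaves %D\n", nroots, nleaves);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}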




I have a couple of other queries:

- you've allowed users to declare variables as being 
'type(PetscSFNode)'. With most other PETSc Fortran types you can declare 
variables as 'type(tXXX) :: foo' or 'XXX :: foo' (but not 'type(XXX) :: 
foo'). Should this one perhaps be altered to work the same way, for 
consistency?


- With PetscSFBcastBegin() / PetscSFBcastEnd() you currently still have 
to use the C MPI types in the Fortran calling code, rather than the 
Fortran ones. I think it is a bit confusing to have to mix the two up in 
the same code. If you put MPI_INTEGER instead of MPI_INT for example, it 
dies in F90Array1dAccess() with 'unsupported MPI_Datatype'. Could the 
Fortran MPI types be supported in these routines just by adding them as 
alternatives into the conditionals?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-22 Thread Barry Smith

  Adrian,

   Sorry for the delay in generating the bindings for you. I have put them in 
the branch

barry/fortran-PetscSFBcastBegin

the example src/vec/is/sf/examples/tutorials/ex1f.F90 attempts to test them. 
Please let us know if this works for you and if you need anything else. They 
will only work for MPIU_INT, MPIU_REAL, and MPIU_SCALAR because those are the 
only Fortran pointer arrays we currently support.

   Barry

I was shocked, shocked to discover that it looks like PetscSF has essentially 
no bindings for Fortran. We can add more relatively easily if you need them. I 
don't understand how you could previously test your code given that 
PetscSFCreate() didn't even have a Fortran stub generated?

BTW: PETSc folks, we don't even seem to support MPIU_INT and MPIU_REAL/SCALAR in 
Fortran? I'm making an issue.


> On Oct 19, 2017, at 10:29 PM, Jed Brown  wrote:
> 
> Adrian Croucher  writes:
> 
>> hi
>> 
>> 
>> On 19/10/17 06:45, Matthew Knepley wrote:
>>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
>>> > wrote:
>>> 
>>> 
>>>So, now I'm trying to add Fortran bindings for PetscSFBcastBegin()
>>>and PetscSFBcastEnd().
>>> 
>>>From the C side I have added the following into
>>>src/vec/is/sf/interface/f90-custom/zsff90.c:
>>> 
>>>PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf,
>>>MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr
>>>PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>>>{
>>>  PetscDataType ptype;
>>>  const void* rootdata;
>>>  void* leafdata;
>>> 
>>>  *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
>>>  *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>>  *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>> 
>>>  *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>>> 
>>>}
>>> 
>>>and similarly for petscsfbcastend_(). Does this look plausible?
>>> 
>>>Then some wrappers need to be added to
>>>src/vec/f90-mod/petscis.h90. I am not sure how to do those.
>>> 
>>>The difficulty is in declaring the arrays that are passed in,
>>>which can be of various types. In C they are declared as void*,
>>>but I'm not sure what to do with that in Fortran. I can't seem to
>>>find any other example wrappers in PETSc to model it on either.
>>>Any suggestions?
>>> 
>> 
>> 
>> I think this is working now by just declaring those void* C variables as 
>> type(*) in the Fortran interface, e.g.:
>> 
>>   Interface
>>  Subroutine PetscSFBcastBegin(sf,unit,rarray,
>>  &   larray,ierr)
>>use petscisdef
>>PetscSF :: sf
>>PetscInt :: unit
>>type(*) :: rarray(:)
>>type(*) :: larray(:)
>>PetscErrorCode :: ierr
>>  End Subroutine PetscSFBcastBegin
>>   End Interface
>> 
>> The only difficulty I have left with this is in the MPI_Datatype 
>> variable. I'd forgotten that these datatypes are all different in C and 
>> Fortran as well.
>> 
>> I amended the C interface code to the following, to convert the Fortran 
>> MPI datatype (actually an integer) to a C MPI_Datatype:
>> 
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
>> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType pdtype;
>>   MPI_Datatype dtype;
>>   const void* rootdata;
>>   void* leafdata;
>> 
>>   dtype = MPI_Type_f2c(*unit);
>>   *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>> 
>>   *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>> 
>> }
>> 
>> The problem is this only seems to work if I declare the datatype in the 
>> calling Fortran code to be of the appropriate C MPI datatype, e.g. 
>> MPI_INT, rather than the corresponding Fortran datatype, e.g. 
>> MPI_INTEGER (which causes PetscMPIDataTypeToPetscDataType() to fail, as 
>> something weird gets passed in for dtype).
>> 
>> I was expecting the opposite to be true. It doesn't seem right to have 
>> to use the C datatypes in Fortran code (confusing if the Fortran 
>> datatypes are used elsewhere). So I suspect I've messed something up. 
>> Anyone have any ideas?
> 
> I think PetscMPIDataTypeToPetscDataType should be extended to handle the
> Fortran types.
> 
> BTW, this code looks very wrong to me:
> 
>  if (mtype == MPIU_INT) *ptype = PETSC_INT;
>  else if (mtype == MPI_INT) *ptype = PETSC_INT;
> 
> [...]
> 
>  else if (mtype == MPI_LONG)

Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Jed Brown
Barry Smith  writes:

>> What does that has to do with PetscSF, which never used PetscDataType
>> and uses MPI datatypes with support for derived types (albeit with a
>> limited number of combiners)?
>
>Because the code he started to write uses F90Array1dAccess() which
>is built around PetscDataType, hence he needs to convert the
>MPI_Datatype. Which is where all the trouble comes from. There are
>multiple ways of resolving his problem, it more depends on my time
>then anything else since it is unlikely anyone but you or I will
>"fix" this, and you don't have time.

Ah, okay.  Could we write a type generic interface for that now that we
use more recent Fortran?  We could pass an MPI_Datatype and just check
sizes instead of trying to fully resolve the type.  I think this is
possible with F2008+TS29113, but I haven't looked at those bindings for
a few years and don't remember details.
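
A sketch of where this could lead: if the Fortran interface declared the buffers as type(*), dimension(*) (TS 29113 assumed type and assumed size), the C stub would receive plain addresses and would need neither F90Array1dAccess() nor PetscDataType, only the handle conversion. This is illustrative, not the binding PETSc actually ships:

#include <petscsf.h>

/* Sketch only: with assumed-type, assumed-size dummy arguments on the Fortran
   side there is no array descriptor to unpack, so the stub reduces to the
   MPI_Fint -> MPI_Datatype handle conversion plus the call itself. */
PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint *unit,
                                                   const void *rootdata, void *leafdata,
                                                   PetscErrorCode *ierr)
{
  MPI_Datatype dtype = MPI_Type_f2c(*unit);
  *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
}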


Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Barry Smith

> On Oct 20, 2017, at 8:09 PM, Jed Brown  wrote:
> 
> Barry Smith  writes:
> 
>>> On Oct 20, 2017, at 12:31 PM, Jed Brown  wrote:
>>> 
>>> Barry Smith  writes:
>>> 
  Adrian,
 
   You should not use F90Array1d *rptr as arguments in the Fortran 
 interface. You should just use a regular Fortran one dimensional array of 
 real/scalar.
 Fortran doesn't handle polymorphism in this way at all. You have to have 
 multiple f90 interfaces, one for each type and provide a C stub for real, 
 int, or whatever else you want to send.
>>> 
>>> Barry, look at the "use mpi_f08" way of calling MPI from Fortran.
>> 
>>  Jed,
>> 
>>   Rather terse response. 
>> 
>>   Are you suggesting in PETSc we use type(*) to manage multiple types 
>> through the same function?  Looks doable, I wasn't aware of this. This could 
>> possibly reduce a lot of code duplication we currently have. 
> 
> PetscSF uses MPI type handling and thus it would make sense to use a
> similarly designed Fortran module.
> 
>>   Still I would like to get rid of the use PetscDataType rather than write 
>> new code that uses it.
>> 
>>   I need to think more in this case.
>> 
>>   Waiting to hear from Adrian what types he needs to pass around (use of 
>> PetscDataType restricts to built in MPI datatypes regardless of what Fortran 
>> interface approach we use
> 
> What does that has to do with PetscSF, which never used PetscDataType
> and uses MPI datatypes with support for derived types (albeit with a
> limited number of combiners)?

   Because the code he started to write uses F90Array1dAccess(), which is built 
around PetscDataType, he needs to convert the MPI_Datatype, which is where all 
the trouble comes from. There are multiple ways of resolving his problem; it 
depends more on my time than anything else, since it is unlikely anyone but you 
or I will "fix" this, and you don't have time.

   

  Barry






Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Jed Brown
Barry Smith  writes:

>> On Oct 20, 2017, at 12:31 PM, Jed Brown  wrote:
>> 
>> Barry Smith  writes:
>> 
>>>   Adrian,
>>> 
>>>You should not use F90Array1d *rptr as arguments in the Fortran 
>>> interface. You should just use a regular Fortran one dimensional array of 
>>> real/scalar.
>>> Fortran doesn't handle polymorphism in this way at all. You have to have 
>>> multiple f90 interfaces, one for each type and provide a C stub for real, 
>>> int, or whatever else you want to send.
>> 
>> Barry, look at the "use mpi_f08" way of calling MPI from Fortran.
>
>   Jed,
>
>Rather terse response. 
>
>Are you suggesting in PETSc we use type(*) to manage multiple types 
> through the same function?  Looks doable, I wasn't aware of this. This could 
> possibly reduce a lot of code duplication we currently have. 

PetscSF uses MPI type handling and thus it would make sense to use a
similarly designed Fortran module.

>Still I would like to get rid of the use PetscDataType rather than write 
> new code that uses it.
>
>I need to think more in this case.
>
>Waiting to hear from Adrian what types he needs to pass around (use of 
> PetscDataType restricts to built in MPI datatypes regardless of what Fortran 
> interface approach we use

What does that have to do with PetscSF, which never used PetscDataType
and uses MPI datatypes with support for derived types (albeit with a
limited number of combiners)?


Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Barry Smith

> On Oct 20, 2017, at 12:31 PM, Jed Brown  wrote:
> 
> Barry Smith  writes:
> 
>>   Adrian,
>> 
>>You should not use F90Array1d *rptr as arguments in the Fortran 
>> interface. You should just use a regular Fortran one dimensional array of 
>> real/scalar.
>> Fortran doesn't handle polymorphism in this way at all. You have to have 
>> multiple f90 interfaces, one for each type and provide a C stub for real, 
>> int, or whatever else you want to send.
> 
> Barry, look at the "use mpi_f08" way of calling MPI from Fortran.

  Jed,

   Rather terse response. 

   Are you suggesting in PETSc we use type(*) to manage multiple types through 
the same function?  Looks doable, I wasn't aware of this. This could possibly 
reduce a lot of code duplication we currently have. 

   Still, I would like to get rid of the use of PetscDataType rather than write 
new code that uses it.

   I need to think more in this case.

   Waiting to hear from Adrian what types he needs to pass around (use of 
PetscDataType restricts us to built-in MPI datatypes regardless of what Fortran 
interface approach we use).


  Barry

> 
>>   What types do you need to send? It is probably easier if we just write the 
>> Fortran interfaces for you.
>> 
>>   If you can send a simple Fortran PETS  code that uses PetscSFBcastBegin() 
>> and PetscSFBcastEnd() that I can use for testing then I will write them. Do 
>> a bcast with int and another with real/scalar.
>> 
>> 
>>   Barry
>> 
>> 
>> 
>>> On Oct 19, 2017, at 8:59 PM, Adrian Croucher  
>>> wrote:
>>> 
>>> hi
>>> 
>>> On 19/10/17 06:45, Matthew Knepley wrote:
 On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
  wrote:
 
 
 So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and 
 PetscSFBcastEnd().
 
 From the C side I have added the following into 
 src/vec/is/sf/interface/f90-custom/zsff90.c:
 
 PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, 
 MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
 PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
 {
  PetscDataType ptype;
  const void* rootdata;
  void* leafdata;
 
  *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
  *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
  *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
 
  *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
 
 }
 
 and similarly for petscsfbcastend_(). Does this look plausible?
 
 Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am 
 not sure how to do those.
 
 The difficulty is in declaring the arrays that are passed in, which can be 
 of various types. In C they are declared as void*, but I'm not sure what 
 to do with that in Fortran. I can't seem to find any other example 
 wrappers in PETSc to model it on either. Any suggestions?
>>> 
>>> 
>>> I think this is working now by just declaring those void* C variables as 
>>> type(*) in the Fortran interface, e.g.:
>>> 
>>>  Interface
>>> Subroutine PetscSFBcastBegin(sf,unit,rarray,
>>> &   larray,ierr)
>>>   use petscisdef
>>>   PetscSF :: sf
>>>   PetscInt :: unit
>>>   type(*) :: rarray(:)
>>>   type(*) :: larray(:)
>>>   PetscErrorCode :: ierr
>>> End Subroutine PetscSFBcastBegin
>>>  End Interface
>>> 
>>> The only difficulty I have left with this is in the MPI_Datatype variable. 
>>> I'd forgotten that these datatypes are all different in C and Fortran as 
>>> well.
>>> 
>>> I amended the C interface code to the following, to convert the Fortran MPI 
>>> datatype (actually an integer) to a C MPI_Datatype:
>>> 
>>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
>>> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
>>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>>> {
>>>  PetscDataType pdtype;
>>>  MPI_Datatype dtype;
>>>  const void* rootdata;
>>>  void* leafdata;
>>> 
>>>  dtype = MPI_Type_f2c(*unit);
>>>  *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>>>  *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>>  *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>> 
>>>  *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>>> 
>>> }
>>> 
>>> The problem is this only seems to work if I declare the datatype in the 
>>> calling Fortran code to be of the appropriate C MPI datatype, e.g. MPI_INT, 
>>> rather than the corresponding Fortran datatype, e.g. MPI_INTEGER (which 
>>> causes PetscMPIDataTypeToPetscDataType() to fail, as something weird 

Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Jed Brown
Barry Smith  writes:

>Adrian,
>
> You should not use F90Array1d *rptr as arguments in the Fortran 
> interface. You should just use a regular Fortran one dimensional array of 
> real/scalar.
> Fortran doesn't handle polymorphism in this way at all. You have to have 
> multiple f90 interfaces, one for each type and provide a C stub for real, 
> int, or whatever else you want to send.

Barry, look at the "use mpi_f08" way of calling MPI from Fortran.

>What types do you need to send? It is probably easier if we just write the 
> Fortran interfaces for you.
>
>If you can send a simple Fortran PETS  code that uses PetscSFBcastBegin() 
> and PetscSFBcastEnd() that I can use for testing then I will write them. Do a 
> bcast with int and another with real/scalar.
>
>
>Barry
>  
>
>
>> On Oct 19, 2017, at 8:59 PM, Adrian Croucher  
>> wrote:
>> 
>> hi
>> 
>> On 19/10/17 06:45, Matthew Knepley wrote:
>>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
>>>  wrote:
>>> 
>>> 
>>> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and 
>>> PetscSFBcastEnd().
>>> 
>>> From the C side I have added the following into 
>>> src/vec/is/sf/interface/f90-custom/zsff90.c:
>>> 
>>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, 
>>> MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
>>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>>> {
>>>   PetscDataType ptype;
>>>   const void* rootdata;
>>>   void* leafdata;
>>> 
>>>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
>>>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>> 
>>>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>>> 
>>> }
>>> 
>>> and similarly for petscsfbcastend_(). Does this look plausible?
>>> 
>>> Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am 
>>> not sure how to do those.
>>> 
>>> The difficulty is in declaring the arrays that are passed in, which can be 
>>> of various types. In C they are declared as void*, but I'm not sure what to 
>>> do with that in Fortran. I can't seem to find any other example wrappers in 
>>> PETSc to model it on either. Any suggestions?
>> 
>> 
>> I think this is working now by just declaring those void* C variables as 
>> type(*) in the Fortran interface, e.g.:
>> 
>>   Interface
>>  Subroutine PetscSFBcastBegin(sf,unit,rarray,
>>  &   larray,ierr)
>>use petscisdef
>>PetscSF :: sf
>>PetscInt :: unit
>>type(*) :: rarray(:)
>>type(*) :: larray(:)
>>PetscErrorCode :: ierr
>>  End Subroutine PetscSFBcastBegin
>>   End Interface
>> 
>> The only difficulty I have left with this is in the MPI_Datatype variable. 
>> I'd forgotten that these datatypes are all different in C and Fortran as 
>> well.
>> 
>> I amended the C interface code to the following, to convert the Fortran MPI 
>> datatype (actually an integer) to a C MPI_Datatype:
>> 
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
>> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType pdtype;
>>   MPI_Datatype dtype;
>>   const void* rootdata;
>>   void* leafdata;
>> 
>>   dtype = MPI_Type_f2c(*unit);
>>   *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>> 
>>   *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>> 
>> }
>> 
>> The problem is this only seems to work if I declare the datatype in the 
>> calling Fortran code to be of the appropriate C MPI datatype, e.g. MPI_INT, 
>> rather than the corresponding Fortran datatype, e.g. MPI_INTEGER (which 
>> causes PetscMPIDataTypeToPetscDataType() to fail, as something weird gets 
>> passed in for dtype).
>> 
>> I was expecting the opposite to be true. It doesn't seem right to have to 
>> use the C datatypes in Fortran code (confusing if the Fortran datatypes are 
>> used elsewhere). So I suspect I've messed something up. Anyone have any 
>> ideas?
>> 
>> - Adrian
>> -- 
>> Dr Adrian Croucher
>> Senior Research Fellow
>> Department of Engineering Science
>> University of Auckland, New Zealand
>> email: 
>> a.crouc...@auckland.ac.nz
>> 
>> tel: +64 (0)9 923 4611
>> 


Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Barry Smith

   Adrian,

You should not use F90Array1d *rptr as arguments in the Fortran interface. 
You should just use a regular Fortran one dimensional array of real/scalar.
Fortran doesn't handle polymorphism in this way at all. You have to have 
multiple f90 interfaces, one for each type and provide a C stub for real, int, 
or whatever else you want to send.

   What types do you need to send? It is probably easier if we just write the 
Fortran interfaces for you.

   If you can send a simple Fortran PETSc code that uses PetscSFBcastBegin() 
and PetscSFBcastEnd() that I can use for testing, then I will write them. Do a 
bcast with int and another with real/scalar.
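
For reference, a minimal C sketch of the broadcast pattern such a test would exercise; the Fortran version would mirror it, and the data here are made up:

#include <petscsf.h>

/* Broadcast PetscInt root values to the leaves of an existing PetscSF. */
static PetscErrorCode BcastInts(PetscSF sf)
{
  PetscInt       rootdata[6] = {10, 20, 30, 40, 50, 60};
  PetscInt       leafdata[6];
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSFBcastBegin(sf, MPIU_INT, rootdata, leafdata);CHKERRQ(ierr);
  ierr = PetscSFBcastEnd(sf, MPIU_INT, rootdata, leafdata);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}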


   Barry
 


> On Oct 19, 2017, at 8:59 PM, Adrian Croucher  
> wrote:
> 
> hi
> 
> On 19/10/17 06:45, Matthew Knepley wrote:
>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
>>  wrote:
>> 
>> 
>> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and 
>> PetscSFBcastEnd().
>> 
>> From the C side I have added the following into 
>> src/vec/is/sf/interface/f90-custom/zsff90.c:
>> 
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Datatype 
>> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType ptype;
>>   const void* rootdata;
>>   void* leafdata;
>> 
>>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata 
>> PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata 
>> PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>> 
>>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>> 
>> }
>> 
>> and similarly for petscsfbcastend_(). Does this look plausible?
>> 
>> Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am not 
>> sure how to do those.
>> 
>> The difficulty is in declaring the arrays that are passed in, which can be 
>> of various types. In C they are declared as void*, but I'm not sure what to 
>> do with that in Fortran. I can't seem to find any other example wrappers in 
>> PETSc to model it on either. Any suggestions?
> 
> 
> I think this is working now by just declaring those void* C variables as 
> type(*) in the Fortran interface, e.g.:
> 
>   Interface
>  Subroutine PetscSFBcastBegin(sf,unit,rarray,
>  &   larray,ierr)
>use petscisdef
>PetscSF :: sf
>PetscInt :: unit
>type(*) :: rarray(:)
>type(*) :: larray(:)
>PetscErrorCode :: ierr
>  End Subroutine PetscSFBcastBegin
>   End Interface
> 
> The only difficulty I have left with this is in the MPI_Datatype variable. 
> I'd forgotten that these datatypes are all different in C and Fortran as well.
> 
> I amended the C interface code to the following, to convert the Fortran MPI 
> datatype (actually an integer) to a C MPI_Datatype:
> 
> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
> {
>   PetscDataType pdtype;
>   MPI_Datatype dtype;
>   const void* rootdata;
>   void* leafdata;
> 
>   dtype = MPI_Type_f2c(*unit);
>   *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) return;
>   *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata 
> PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>   *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata 
> PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
> 
>   *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
> 
> }
> 
> The problem is this only seems to work if I declare the datatype in the 
> calling Fortran code to be of the appropriate C MPI datatype, e.g. MPI_INT, 
> rather than the corresponding Fortran datatype, e.g. MPI_INTEGER (which 
> causes PetscMPIDataTypeToPetscDataType() to fail, as something weird gets 
> passed in for dtype).
> 
> I was expecting the opposite to be true. It doesn't seem right to have to use 
> the C datatypes in Fortran code (confusing if the Fortran datatypes are used 
> elsewhere). So I suspect I've messed something up. Anyone have any ideas?
> 
> - Adrian
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: 
> a.crouc...@auckland.ac.nz
> 
> tel: +64 (0)9 923 4611
> 



Re: [petsc-dev] PetscSF in Fortran

2017-10-20 Thread Barry Smith

> On Oct 19, 2017, at 10:29 PM, Jed Brown  wrote:
> 
> Adrian Croucher  writes:
> 
>> hi
>> 
>> 
>> The problem is this only seems to work if I declare the datatype in the 
>> calling Fortran code to be of the appropriate C MPI datatype, e.g. 
>> MPI_INT, rather than the corresponding Fortran datatype, e.g. 
>> MPI_INTEGER (which causes PetscMPIDataTypeToPetscDataType() to fail, as 
>> something weird gets passed in for dtype).
>> 
>> I was expecting the opposite to be true. It doesn't seem right to have 
>> to use the C datatypes in Fortran code (confusing if the Fortran 
>> datatypes are used elsewhere). So I suspect I've messed something up. 
>> Anyone have any ideas?
> 
> I think PetscMPIDataTypeToPetscDataType should be extended to handle the
> Fortran types.
> 
> BTW, this code looks very wrong to me:
> 
>  if (mtype == MPIU_INT) *ptype = PETSC_INT;
>  else if (mtype == MPI_INT) *ptype = PETSC_INT;

   Yes definitely wrong. 

> 
> [...]
> 
>  else if (mtype == MPI_LONG)*ptype = PETSC_LONG;
> 
> 
> If PetscInt is 64-bit then MPI_INT (which corresponds to int) would be
> mapped to a 64-bit int.  Those PETSc datatypes either need to be
> abandoned (I think MPI types would do) or we need a terminology to
> distinguish PetscInt from int.

  From a manual page

-
/*E
PetscDataType - Used for handling different basic data types.

   Level: beginner

   Developer comment: It would be nice if we could always just use MPI 
Datatypes, why can we not?
--

   When MPI first came out I struggled to use just MPI datatypes and not 
introduce a parallel set of PETSc ones; I was unsuccessful, but that may just 
have come from my ignorance and not because it was not possible.

   Regardless of whether we can totally eliminate the use of PETSc datatypes we 
should try to limit them and NOT use them when not needed.


I have made two pull requests 

1) fix the bug 
https://bitbucket.org/petsc/petsc/pull-requests/782/mpi_int-mpi-datatype-with-64-bit-int/diff

2) removed unneeded use of PetscDataType 
https://bitbucket.org/petsc/petsc/pull-requests/783/remove-use-of-petscdatatype-from-vtk-code/diff

Note this does not resolve Adrian's problem. I will respond to Adrian in 
another email.

  Barry









Re: [petsc-dev] PetscSF in Fortran

2017-10-19 Thread Jed Brown
Adrian Croucher  writes:

> hi
>
>
> On 19/10/17 06:45, Matthew Knepley wrote:
>> On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
>> > wrote:
>>
>>
>> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin()
>> and PetscSFBcastEnd().
>>
>> From the C side I have added the following into
>> src/vec/is/sf/interface/f90-custom/zsff90.c:
>>
>> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf,
>> MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr
>> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
>> {
>>   PetscDataType ptype;
>>   const void* rootdata;
>>   void* leafdata;
>>
>>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if
>> (*ierr) return;
>>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata
>> PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata
>> PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>>
>>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>>
>> }
>>
>> and similarly for petscsfbcastend_(). Does this look plausible?
>>
>> Then some wrappers need to be added to
>> src/vec/f90-mod/petscis.h90. I am not sure how to do those.
>>
>> The difficulty is in declaring the arrays that are passed in,
>> which can be of various types. In C they are declared as void*,
>> but I'm not sure what to do with that in Fortran. I can't seem to
>> find any other example wrappers in PETSc to model it on either.
>> Any suggestions?
>>
>
>
> I think this is working now by just declaring those void* C variables as 
> type(*) in the Fortran interface, e.g.:
>
>Interface
>   Subroutine PetscSFBcastBegin(sf,unit,rarray,
>   &   larray,ierr)
> use petscisdef
> PetscSF :: sf
> PetscInt :: unit
> type(*) :: rarray(:)
> type(*) :: larray(:)
> PetscErrorCode :: ierr
>   End Subroutine PetscSFBcastBegin
>End Interface
>
> The only difficulty I have left with this is in the MPI_Datatype 
> variable. I'd forgotten that these datatypes are all different in C and 
> Fortran as well.
>
> I amended the C interface code to the following, to convert the Fortran 
> MPI datatype (actually an integer) to a C MPI_Datatype:
>
> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
> *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
> {
>PetscDataType pdtype;
>MPI_Datatype dtype;
>const void* rootdata;
>void* leafdata;
>
>dtype = MPI_Type_f2c(*unit);
>    *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) 
> return;
>    *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata 
> PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>    *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata 
> PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>
>*ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);
>
> }
>
> The problem is this only seems to work if I declare the datatype in the 
> calling Fortran code to be of the appropriate C MPI datatype, e.g. 
> MPI_INT, rather than the corresponding Fortran datatype, e.g. 
> MPI_INTEGER (which causes PetscMPIDataTypeToPetscDataType() to fail, as 
> something weird gets passed in for dtype).
>
> I was expecting the opposite to be true. It doesn't seem right to have 
> to use the C datatypes in Fortran code (confusing if the Fortran 
> datatypes are used elsewhere). So I suspect I've messed something up. 
> Anyone have any ideas?

I think PetscMPIDataTypeToPetscDataType should be extended to handle the
Fortran types.

BTW, this code looks very wrong to me:

  if (mtype == MPIU_INT) *ptype = PETSC_INT;
  else if (mtype == MPI_INT) *ptype = PETSC_INT;

[...]

  else if (mtype == MPI_LONG)*ptype = PETSC_LONG;


If PetscInt is 64-bit then MPI_INT (which corresponds to int) would be
mapped to a 64-bit int.  Those PETSc datatypes either need to be
abandoned (I think MPI types would do) or we need a terminology to
distinguish PetscInt from int.


Re: [petsc-dev] PetscSF in Fortran

2017-10-19 Thread Adrian Croucher

hi


On 19/10/17 06:45, Matthew Knepley wrote:
On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher 
> wrote:



So, now I'm trying to add Fortran bindings for PetscSFBcastBegin()
and PetscSFBcastEnd().

From the C side I have added the following into
src/vec/is/sf/interface/f90-custom/zsff90.c:

PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf,
MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr
PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
{
  PetscDataType ptype;
  const void* rootdata;
  void* leafdata;

  *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if
(*ierr) return;
  *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata
PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
  *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata
PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;

  *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);

}

and similarly for petscsfbcastend_(). Does this look plausible?

Then some wrappers need to be added to
src/vec/f90-mod/petscis.h90. I am not sure how to do those.

The difficulty is in declaring the arrays that are passed in,
which can be of various types. In C they are declared as void*,
but I'm not sure what to do with that in Fortran. I can't seem to
find any other example wrappers in PETSc to model it on either.
Any suggestions?




I think this is working now by just declaring those void* C variables as 
type(*) in the Fortran interface, e.g.:


  Interface
 Subroutine PetscSFBcastBegin(sf,unit,rarray,
 &   larray,ierr)
   use petscisdef
   PetscSF :: sf
   PetscInt :: unit
   type(*) :: rarray(:)
   type(*) :: larray(:)
   PetscErrorCode :: ierr
 End Subroutine PetscSFBcastBegin
  End Interface

The only difficulty I have left with this is in the MPI_Datatype 
variable. I'd forgotten that these datatypes are all different in C and 
Fortran as well.


I amended the C interface code to the following, to convert the Fortran 
MPI datatype (actually an integer) to a C MPI_Datatype:


PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, MPI_Fint 
*unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))

{
  PetscDataType pdtype;
  MPI_Datatype dtype;
  const void* rootdata;
  void* leafdata;

  dtype = MPI_Type_f2c(*unit);
  *ierr = PetscMPIDataTypeToPetscDataType(dtype, &pdtype);if (*ierr) 
return;
  *ierr = F90Array1dAccess(rptr, pdtype, (void**) &rootdata 
PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
  *ierr = F90Array1dAccess(lptr, pdtype, (void**) &leafdata 
PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;


  *ierr = PetscSFBcastBegin(*sf, dtype, rootdata, leafdata);

}

The problem is this only seems to work if I declare the datatype in the 
calling Fortran code to be of the appropriate C MPI datatype, e.g. 
MPI_INT, rather than the corresponding Fortran datatype, e.g. 
MPI_INTEGER (which causes PetscMPIDataTypeToPetscDataType() to fail, as 
something weird gets passed in for dtype).


I was expecting the opposite to be true. It doesn't seem right to have 
to use the C datatypes in Fortran code (confusing if the Fortran 
datatypes are used elsewhere). So I suspect I've messed something up. 
Anyone have any ideas?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-18 Thread Matthew Knepley
On Tue, Oct 17, 2017 at 11:35 PM, Adrian Croucher  wrote:

> hi
>
> On 16/10/17 15:16, Jed Brown wrote:
>
>> Adrian Croucher  writes:
>>
>> So do you think the SF stuff in petsc/finclude/petscis.h should be taken
>>> out, and put into a new petsc/finclude/petscsf.h ?
>>>
>> I think that's desirable for symmetry with include/petscsf.h, but it
>> isn't important for your contribution.
>>
>
> I have got Fortran bindings for PetscSFGetGraph() and PetscSFSetGraph()
> working. First I tried separating the existing SF stuff out of the IS
> modules, but ran into dependency problems in other modules. I wasn't too
> confident about the best way to resolve those without breaking things, so I
> tried it again just doing minimal changes to the existing SF stuff-
> essentially just adding Fortran wrappers for PetscSFGetGraph() and
> PetscSFSetGraph() into src/vec/f90-mod/petscis.h90- and that seemed to work
> fine. Maybe someone more familiar with the setup could separate out the SF
> stuff from IS sometime.
>
>
> So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and
> PetscSFBcastEnd().
>
> From the C side I have added the following into
> src/vec/is/sf/interface/f90-custom/zsff90.c:
>
> PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf,
> MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr
> PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))
> {
>   PetscDataType ptype;
>   const void* rootdata;
>   void* leafdata;
>
>   *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr)
> return;
>   *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata
> PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
>   *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata
> PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;
>
>   *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);
>
> }
>
> and similarly for petscsfbcastend_(). Does this look plausible?
>
> Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am
> not sure how to do those.
>
> The difficulty is in declaring the arrays that are passed in, which can be
> of various types. In C they are declared as void*, but I'm not sure what to
> do with that in Fortran. I can't seem to find any other example wrappers in
> PETSc to model it on either. Any suggestions?


I do not understand Fortran. However, it looks like you can choose an
arbitrary type for the array, such as character,
and then use the 'transfer' intrinsic from F95 in your code to get an array
like that

  https://jblevins.org/log/transfer

so you only need an interface with that generic array type in PETSc.
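
A small self-contained demonstration of the transfer idea (plain Fortran, no 
PETSc; the names are made up): typed data is packed into a character buffer 
and recovered again, so a single character-array interface could in principle 
carry any element type.

program transfer_demo
  implicit none
  integer :: idata(3), iback(3)
  character(len=1), allocatable :: buf(:)
  idata = [10, 20, 30]
  buf = transfer(idata, ['a'])   ! pack the integers into a character buffer
  iback = transfer(buf, iback)   ! recover the original integers from the buffer
  print *, iback                 ! prints 10 20 30
end program transfer_demo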

   Matt


> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] PetscSF in Fortran

2017-10-17 Thread Adrian Croucher

hi


On 16/10/17 15:16, Jed Brown wrote:

Adrian Croucher  writes:


So do you think the SF stuff in petsc/finclude/petscis.h should be taken
out, and put into a new petsc/finclude/petscsf.h ?

I think that's desirable for symmetry with include/petscsf.h, but it
isn't important for your contribution.


I have got Fortran bindings for PetscSFGetGraph() and PetscSFSetGraph() 
working. First I tried separating the existing SF stuff out of the IS 
modules, but ran into dependency problems in other modules. I wasn't too 
confident about the best way to resolve those without breaking things, 
so I tried it again just doing minimal changes to the existing SF stuff- 
essentially just adding Fortran wrappers for PetscSFGetGraph() and 
PetscSFSetGraph() into src/vec/f90-mod/petscis.h90- and that seemed to 
work fine. Maybe someone more familiar with the setup could separate out 
the SF stuff from IS sometime.



So, now I'm trying to add Fortran bindings for PetscSFBcastBegin() and 
PetscSFBcastEnd().


From the C side I have added the following into 
src/vec/is/sf/interface/f90-custom/zsff90.c:


PETSC_EXTERN void PETSC_STDCALL petscsfbcastbegin_(PetscSF *sf, 
MPI_Datatype *unit, F90Array1d *rptr, F90Array1d *lptr , int *ierr 
PETSC_F90_2PTR_PROTO(rptrd) PETSC_F90_2PTR_PROTO(lptrd))

{
  PetscDataType ptype;
  const void* rootdata;
  void* leafdata;

  *ierr = PetscMPIDataTypeToPetscDataType(*unit, &ptype);if (*ierr) return;
  *ierr = F90Array1dAccess(rptr, ptype, (void**) &rootdata 
PETSC_F90_2PTR_PARAM(rptrd));if (*ierr) return;
  *ierr = F90Array1dAccess(lptr, ptype, (void**) &leafdata 
PETSC_F90_2PTR_PARAM(lptrd));if (*ierr) return;


  *ierr = PetscSFBcastBegin(*sf, *unit, rootdata, leafdata);

}

and similarly for petscsfbcastend_(). Does this look plausible?

Then some wrappers need to be added to src/vec/f90-mod/petscis.h90. I am 
not sure how to do those.


The difficulty is in declaring the arrays that are passed in, which can 
be of various types. In C they are declared as void*, but I'm not sure 
what to do with that in Fortran. I can't seem to find any other example 
wrappers in PETSc to model it on either. Any suggestions?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Jed Brown
Adrian Croucher  writes:

> On 16/10/17 14:14, Matthew Knepley wrote:
>>
>> This last file petscao.h is including a "petsc/finclude/petscao.h", 
>> which contains various definitions including the definition of AO. Do 
>> I need to make some sort of similar "petsc/finclude/petscsf.h"?
>>
>>
>> It looks like PetscSF is already defined inside
>> petsc/finclude/petscis.h. It's not clear to me which bits of the
>> SF interface need to be in their own files and which are lumped in
>> with the IS interface.
>>
>>
>> If SF is already in the IS part then we do not technically need to 
>> move it out. It just seems to me that it would be cleaner to do so.
>
> So do you think the SF stuff in petsc/finclude/petscis.h should be taken 
> out, and put into a new petsc/finclude/petscsf.h ?

I think that's desirable for symmetry with include/petscsf.h, but it
isn't important for your contribution.


Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Adrian Croucher



On 16/10/17 14:14, Matthew Knepley wrote:


This last file petscao.h is including a "petsc/finclude/petscao.h", 
which contains various definitions including the definition of AO. Do 
I need to make some sort of similar "petsc/finclude/petscsf.h"?



It looks like PetscSF is already defined inside
petsc/finclude/petscis.h. It's not clear to me which bits of the
SF interface need to be in their own files and which are lumped in
with the IS interface.


If SF is already in the IS part then we do not technically need to 
move it out. It just seems to me that it would be cleaner to do so.


So do you think the SF stuff in petsc/finclude/petscis.h should be taken 
out, and put into a new petsc/finclude/petscsf.h ?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Matthew Knepley
On Sun, Oct 15, 2017 at 9:13 PM, Adrian Croucher 
wrote:

> On 16/10/17 13:12, Matthew Knepley wrote:
>
>
> Okay, I was wrong about what gets produced. Barry rewrote the Fortran
> bindings last, and I did not understand how everything
> was put together. You will need to make some modifications. Here is the
> idea: we build an F90 module for each class, and your
> stuff will go in the Vec module here:
>
>   https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2
> e1b7869aa6/src/vec/f90-mod/petscvecmod.F?at=master&
> fileviewer=file-view-default
>
> You will need to make a module in this file petscsf. It should look about
> like the petscao module. That means making
> a petscsf.h90 with your new interfaces, like this one
>
>   https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2
> e1b7869aa6/src/vec/f90-mod/petscis.h90?at=master&
> fileviewer=file-view-default
>
> and also a generic petscsf.h, like this one
>
>   https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2
> e1b7869aa6/src/vec/f90-mod/petscao.h?at=master&
> fileviewer=file-view-default
>
>
> This last file petscao.h is including a "petsc/finclude/petscao.h", which
> contains various definitions including the definition of AO. Do I need to
> make some sort of similar "petsc/finclude/petscsf.h"?
>
> It looks like PetscSF is already defined inside petsc/finclude/petscis.h.
> It's not clear to me which bits of the SF interface need to be in their own
> files and which are lumped in with the IS interface.
>

If SF is already in the IS part then we do not technically need to move it
out. It just seems to me that it would be cleaner to do so.

  Thanks,

 Matt


>
> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Adrian Croucher

On 16/10/17 13:12, Matthew Knepley wrote:


Okay, I was wrong about what gets produced. Barry rewrote the Fortran 
bindings last, and I did not understand how everything
was put together. You will need to make some modifications. Here is 
the idea: we build an F90 module for each class, and your

stuff will go in the Vec module here:

https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscvecmod.F?at=master&fileviewer=file-view-default

You will need to make a module in this file petscsf. It should look 
about like the petscao module. That means making

a petscsf.h90 with your new interfaces, like this one

https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscis.h90?at=master&fileviewer=file-view-default

and also a generic petscsf.h, like this one

https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscao.h?at=master&fileviewer=file-view-default



This last file petscao.h is including a "petsc/finclude/petscao.h", 
which contains various definitions including the definition of AO. Do I 
need to make some sort of similar "petsc/finclude/petscsf.h"?


It looks like PetscSF is already defined inside 
petsc/finclude/petscis.h. It's not clear to me which bits of the SF 
interface need to be in their own files and which are lumped in with the 
IS interface.


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Matthew Knepley
On Sun, Oct 15, 2017 at 6:54 PM, Adrian Croucher 
wrote:

> hi
>
> On 14/10/17 00:47, Matthew Knepley wrote:
>
>
> If you want the wrapper to take in F90 pointer arguments, then you have to
> declare it or you get an SEGV. These
> get autogenerated in include/petsc/finclude/ftn-auto/*.h90 when you run
> 'make allfortranstubs'. Did this happen
> for your function?
>
>
> I just tried running make allfortranstubs, but I don't see anything new in
> include/petsc/finclude/ftn-auto/*.h90, and my test program still crashes.
>
> I also tried doing a make clean and make allfortranstubs all, but that
> didn't help.
>
> I even tried deleting my $PETSC_ARCH directory and rebuilding everything
> (including make allfortranstubs) but again it didn't help.
>
> I'm a bit surprised that custom Fortran bindings should produce anything
> in include/petsc/finclude/ftn-auto/, I thought that would only be for the
> auto-generated stuff that comes out of bfort?
>

Okay, I was wrong about what gets produced. Barry rewrote the Fortran
bindings last, and I did not understand how everything
was put together. You will need to make some modifications. Here is the
idea: we build an F90 module for each class, and your
stuff will go in the Vec module here:


https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscvecmod.F?at=master&fileviewer=file-view-default

You will need to make a module in this file petscsf. It should look about
like the petscao module. That means making
a petscsf.h90 with your new interfaces, like this one


https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscis.h90?at=master&fileviewer=file-view-default

and also a generic petscsf.h, like this one


https://bitbucket.org/petsc/petsc/src/c99b676fdc4d0978b5621d138ceee2e1b7869aa6/src/vec/f90-mod/petscao.h?at=master&fileviewer=file-view-default

Then in your example you need a

  use petscsf

which will bring in the interface for your function, causing Fortran to
pass the array descriptor instead of some bogus pointer.
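
A self-contained illustration of why the 'use' matters (plain Fortran, no 
PETSc; module and routine names are made up): the module supplies an explicit 
interface for the assumed-shape dummy arguments, so the compiler passes a 
proper array descriptor rather than a bare address.

module demosf
  implicit none
contains
  subroutine DemoBcast(rootdata, leafdata)
    integer, intent(in)    :: rootdata(:)  ! assumed-shape: needs an explicit interface
    integer, intent(inout) :: leafdata(:)
    leafdata = rootdata
  end subroutine DemoBcast
end module demosf

program demo
  use demosf            ! brings the explicit interface into scope
  implicit none
  integer :: roots(3), leaves(3)
  roots = [1, 2, 3]
  leaves = 0
  call DemoBcast(roots, leaves)
  print *, leaves       ! prints 1 2 3
end program demo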

  Thanks,

Matt


>
> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] PetscSF in Fortran

2017-10-15 Thread Adrian Croucher

hi


On 14/10/17 00:47, Matthew Knepley wrote:


If you want the wrapper to take in F90 pointer arguments, then you 
have to declare it or you get an SEGV. These
get autogenerated in include/petsc/finclude/ftn-auto/*.h90 when you 
run 'make allfortranstubs'. Did this happen

for your function?


I just tried running make allfortranstubs, but I don't see anything new 
in include/petsc/finclude/ftn-auto/*.h90, and my test program still crashes.


I also tried doing a make clean and make allfortranstubs all, but that 
didn't help.


I even tried deleting my $PETSC_ARCH directory and rebuilding everything 
(including make allfortranstubs) but again it didn't help.


I'm a bit surprised that custom Fortran bindings should produce anything 
in include/petsc/finclude/ftn-auto/; I thought that would only be for 
the auto-generated stuff that comes out of bfort?


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-10-13 Thread Matthew Knepley
On Fri, Oct 13, 2017 at 12:25 AM, Adrian Croucher  wrote:

> hi Jed
>
> I have had a go at writing a Fortran binding for the PetscSFGetGraph()
> function, along the lines of what you suggested:
>
> On 21/09/17 15:15, Jed Brown wrote:
> Unfortunately PetscSFSetGraph/PetscSFGetGraph need custom bindings for
> Fortran and they have not been written.  Shouldn't be difficult, but
> will take a little work/testing.  I would probably make the array of
> PetscSFNode be a PetscInt array of length 2n (or of shape (2,n)) --
> that's an equivalent memory representation to what we're using now.
>
> From looking at various other existing custom Fortran bindings that return
> arrays, I tried the following:
>
> PETSC_EXTERN void PETSC_STDCALL petscsfgetgraph_(PetscSF *sf, PetscInt
> *nroots, PetscInt *nleaves, F90Array1d *lptr, F90Array2d *rptr, int *ierr
> PETSC_F90_2PTR_PROTO(lptrd) PETSC_F90_2PTR_PROTO(rptrd))
> {
>   const PetscInt *ilocal;
>   const PetscSFNode *iremote;
>
>   *ierr = PetscSFGetGraph(*sf, nroots, nleaves, &ilocal, &iremote); if
> (*ierr) return;
>
>   *ierr = F90Array1dCreate((void*) ilocal, PETSC_INT, 1, *nleaves, lptr
> PETSC_F90_2PTR_PARAM(lptrd));
>   *ierr = F90Array2dCreate((void*) iremote, PETSC_INT, 1, 2, 1, *nleaves,
> rptr PETSC_F90_2PTR_PARAM(rptrd));
> }
>
> I put this in a new file zsff90.c, in a new directory
> /vec/is/sf/interface/f90-custom/, together with an appropriate makefile.
>
> However it crashes with a segmentation violation in the call to
> F90Array1dCreate(), when that goes into the Fortran routine
> F90Array1dCreateInt() and tries to assign the pointer.
>
> In my test calling program I declare the arrays for ilocal and iremote as
> PetscInt, pointer :: ilocal(:), iremote(:,:) and pass them in.
>
> Any clues? I have tried various variations on the above but I suspect I've
> either made some basic mistake or I just don't understand how this should
> be done.
>

If you want the wrapper to take in F90 pointer arguments, then you have to
declare it or you get an SEGV. These
get autogenerated in include/petsc/finclude/ftn-auto/*.h90 when you run
'make allfortranstubs'. Did this happen
for your function?

  Thanks,

 Matt


>
> - Adrian
>
> On 28/09/17 15:34, Adrian Croucher wrote:
>
> hi
> On 28/09/17 04:18, Matthew Knepley wrote:
>
>
> Okay, I think this should be easy to solve.
>
> First a little bit about SF. There are two parts to the specification. You
> have the communication part, which maps
> a certain location p on this process to another location q on another
> process. This might not change for you. The
> second part just tells it how big the data array is (numRoots), which is
> the thing that the locations in the communication
> part index into.
>
> So my question is, where did you put the numbers for the points you added?
> Are they after all points, or only after the old cells?
> I think you can easily create the new SF using the info from SFGetGraph().
>
>
> I've had a closer look at how the SF stuff works (partly by viewing the
> point SF from the original DM) and I think I understand it now. There were
> two things I hadn't realised:
>
> 1) It looks like all DM points are always considered potential roots for
> leaves on another process, which is why it complained with an error message
> when the number of roots was less than pEnd . I don't think this really
> makes so much sense in the dual-porosity mesh case I'm working on, because
> the new points I'm adding are all 'inside' the original cells and have no
> possible connection with any other points. So they can't be roots for any
> off-process leaf points. But I guess it won't hurt to tell it that they can
> (by increasing the number of roots passed to PetscSFSetGraph so it's equal
> to the new pEnd).
>
> 2) Because I'm doing finite volume I had only thought about ghost cells,
> but the SF needs to include leaf points in other height strata as well
> (vertices, edges and faces). My new points in each stratum have been added
> after the partition ghost points, so the leaf cells won't have changed
> their point indices. However the leaf points in other strata will have been
> shifted (because of points added into preceding strata).
>
>
> So I think I will need to use PetscSFSetGraph() after all, so I can
> increase the number of roots, update the leaf point indices, and also
> update the remote root index for each leaf (presumably I can use the
> original SF and PetscSFBcastBegin/ PetscSFBcastEnd to do that).
>
> If you agree, then I will need working Fortran interfaces to the
> PetscSFGetGraph/ PetscSFSetGraph functions, which are missing at present.
> Are you able to add those easily?
>
> Thanks,
> Adrian
>
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their

Re: [petsc-dev] PetscSF in Fortran

2017-10-12 Thread Adrian Croucher

hi Jed

I have had a go at writing a Fortran binding for the PetscSFGetGraph() 
function, along the lines of what you suggested:



On 21/09/17 15:15, Jed Brown wrote:
Unfortunately PetscSFSetGraph/PetscSFGetGraph need custom bindings for
Fortran and they have not been written.  Shouldn't be difficult, but
will take a little work/testing.  I would probably make the array of
PetscSFNode be a PetscInt array of length 2n (or of shape (2,n)) --
that's an equivalent memory representation to what we're using now. 


From looking at various other existing custom Fortran bindings that 
return arrays, I tried the following:



PETSC_EXTERN void PETSC_STDCALL petscsfgetgraph_(PetscSF *sf, PetscInt 
*nroots, PetscInt *nleaves, F90Array1d *lptr, F90Array2d *rptr, int 
*ierr PETSC_F90_2PTR_PROTO(lptrd) PETSC_F90_2PTR_PROTO(rptrd))

{
  const PetscInt *ilocal;
  const PetscSFNode *iremote;

  *ierr = PetscSFGetGraph(*sf, nroots, nleaves, &ilocal, &iremote); if 
(*ierr) return;


  *ierr = F90Array1dCreate((void*) ilocal, PETSC_INT, 1, *nleaves, lptr 
PETSC_F90_2PTR_PARAM(lptrd));
  *ierr = F90Array2dCreate((void*) iremote, PETSC_INT, 1, 2, 1, 
*nleaves, rptr PETSC_F90_2PTR_PARAM(rptrd));

}

I put this in a new file zsff90.c, in a new directory 
/vec/is/sf/interface/f90-custom/, together with an appropriate makefile.


However it crashes with a segmentation violation in the call to 
F90Array1dCreate(), when that goes into the Fortran routine 
F90Array1dCreateInt() and tries to assign the pointer.


In my test calling program I declare the arrays for ilocal and iremote 
as PetscInt, pointer :: ilocal(:), iremote(:,:) and pass them in.


Any clues? I have tried various variations on the above but I suspect 
I've either made some basic mistake or I just don't understand how this 
should be done.


- Adrian

On 28/09/17 15:34, Adrian Croucher wrote:


hi

On 28/09/17 04:18, Matthew Knepley wrote:


Okay, I think this should be easy to solve.

First a little bit about SF. There are two parts to the 
specification. You have the communication part, which maps
a certain location p on this process to another location q on another 
process. This might not change for you. The
second part just tells it how big the data array is (numRoots), which 
is the thing that the locations in the communication

part index into.

So my question is, where did you put the numbers for the points you 
added? Are they after all points, or only after the old cells?
I think you can easily create the new SF using the info from 
SFGetGraph().


I've had a closer look at how the SF stuff works (partly by viewing 
the point SF from the original DM) and I think I understand it now. 
There were two things I hadn't realised:


1) It looks like all DM points are always considered potential roots 
for leaves on another process, which is why it complained with an 
error message when the number of roots was less than pEnd . I don't 
think this really makes so much sense in the dual-porosity mesh case 
I'm working on, because the new points I'm adding are all 'inside' the 
original cells and have no possible connection with any other points. 
So they can't be roots for any off-process leaf points. But I guess it 
won't hurt to tell it that they can (by increasing the number of roots 
passed to PetscSFSetGraph so it's equal to the new pEnd).


2) Because I'm doing finite volume I had only thought about ghost 
cells, but the SF needs to include leaf points in other height strata 
as well (vertices, edges and faces). My new points in each stratum 
have been added after the partition ghost points, so the leaf cells 
won't have changed their point indices. However the leaf points in 
other strata will have been shifted (because of points added into 
preceding strata).



So I think I will need to use PetscSFSetGraph() after all, so I can 
increase the number of roots, update the leaf point indices, and also 
update the remote root index for each leaf (presumably I can use the 
original SF and PetscSFBcastBegin/ PetscSFBcastEnd to do that).


If you agree, then I will need working Fortran interfaces to the 
PetscSFGetGraph/ PetscSFSetGraph functions, which are missing at 
present. Are you able to add those easily?


Thanks,
Adrian 


--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-27 Thread Adrian Croucher

hi


On 28/09/17 15:34, Adrian Croucher wrote:


So I think I will need to use PetscSFSetGraph() after all, so I can 
increase the number of roots, update the leaf point indices, and also 
update the remote root index for each leaf (presumably I can use the 
original SF and PetscSFBcastBegin/ PetscSFBcastEnd to do that).


If you agree, then I will need working Fortran interfaces to the 
PetscSFGetGraph/ PetscSFSetGraph functions, which are missing at 
present. Are you able to add those easily?


It looks like the Fortran interfaces for PetscSFBcastBegin/ 
PetscSFBcastEnd are missing as well. I would need those to make this work.


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-27 Thread Adrian Croucher

hi

On 28/09/17 04:18, Matthew Knepley wrote:


Okay, I think this should be easy to solve.

First a little bit about SF. There are two parts to the specification. 
You have the communication part, which maps
a certain location p on this process to another location q on another 
process. This might not change for you. The
second part just tells it how big the data array is (numRoots), which 
is the thing that the locations in the communication

part index into.

So my question is, where did you put the numbers for the points you 
added? Are they after all points, or only after the old cells?

I think you can easily create the new SF using the info from SFGetGraph().


I've had a closer look at how the SF stuff works (partly by viewing the 
point SF from the original DM) and I think I understand it now. There 
were two things I hadn't realised:


1) It looks like all DM points are always considered potential roots for 
leaves on another process, which is why it complained with an error 
message when the number of roots was less than pEnd . I don't think this 
really makes so much sense in the dual-porosity mesh case I'm working 
on, because the new points I'm adding are all 'inside' the original 
cells and have no possible connection with any other points. So they 
can't be roots for any off-process leaf points. But I guess it won't 
hurt to tell it that they can (by increasing the number of roots passed 
to PetscSFSetGraph so it's equal to the new pEnd).


2) Because I'm doing finite volume I had only thought about ghost cells, 
but the SF needs to include leaf points in other height strata as well 
(vertices, edges and faces). My new points in each stratum have been 
added after the partition ghost points, so the leaf cells won't have 
changed their point indices. However the leaf points in other strata 
will have been shifted (because of points added into preceding strata).



So I think I will need to use PetscSFSetGraph() after all, so I can 
increase the number of roots, update the leaf point indices, and also 
update the remote root index for each leaf (presumably I can use the 
original SF and PetscSFBcastBegin/ PetscSFBcastEnd to do that).


If you agree, then I will need working Fortran interfaces to the 
PetscSFGetGraph/ PetscSFSetGraph functions, which are missing at 
present. Are you able to add those easily?


Thanks,
Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-27 Thread Matthew Knepley
On Tue, Sep 26, 2017 at 11:49 PM, Adrian Croucher  wrote:

> hi Matt,
>
> On 25/09/17 23:12, Matthew Knepley wrote:
>
>
> If you truly need the exact same SF for your grid, you should be able to
> use DMGet/SetPointSF() since it will just reference count it for you. Then
> the default SF is created automatically from the point SF. Does this work?
>
>
> It doesn't seem to work, unfortunately. If I run in parallel I get an
> error on each processor like this:
>
> [0]PETSC ERROR: Nonconforming object sizes
> [0]PETSC ERROR: SF roots 447 < pEnd 543
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.6-5886-gc423942  GIT
> Date: 2017-06-28 18:43:52 -0500
> [0]PETSC ERROR: test_all on a linux-gnu-c-opt named en-354401 by acro018
> Wed Sep 27 16:31:28 2017
> [0]PETSC ERROR: Configure options --with-x --download-hdf5
> --download-netcdf --download-exodusii --download-triangle --download-ptscotch
> --download-chaco --download-hypre
> [0]PETSC ERROR: #1 PetscSectionCreateGlobalSection() line 929 in
> /home/acro018/software/PETSc/code/src/vec/is/utils/vsectionis.c
>
> [0]PETSC ERROR: #2 DMGetDefaultGlobalSection() line 3458 in
> /home/acro018/software/PETSc/code/src/dm/interface/dm.c
>
> [0]PETSC ERROR: #3 User provided function() line 0 in User file
>
> The reason I thought just copying the SF across might work is that the
> partition ghost cells in my modified DMPlex should be in the same locations
> (DMPlex points) as they were in the original DMPlex. My understanding was
> that the SF just stores the root corresponding to a leaf (ghost point?) on
> the current processor, so those ought to be unchanged. But maybe there are
> subtleties about the SF stuff that I don't yet understand.
>
> What I am doing is adding new points for the dual-porosity mesh into my
> modified DMPlex. I have added the new cells after the partition ghost
> cells, and before the boundary ghost cells (and similarly for other depth
> strata). So I have shifted the locations of the boundary ghost cells, but
> not the partition ghost cells. I give the new points the appropriate depth
> label values so that the depth stratum bounds are updated correctly. I also
> shift the end_interior value for each depth stratum, so that code relying
> on DMPlexGetHybridBounds() should still work.
>
> Would you expect copying the SF to work in this case?
>

Okay, I think this should be easy to solve.

First a little bit about SF. There are two parts to the specification. You
have the communication part, which maps
a certain location p on this process to another location q on another
process. This might not change for you. The
second part just tells it how big the data array is (numRoots), which is
the thing that the locations in the communication
part index into.

So my question is, where did you put the numbers for the points you added?
Are they after all points, or only after the old cells?
I think you can easily create the new SF using the info from SFGetGraph().

   Matt


>
> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] PetscSF in Fortran

2017-09-26 Thread Adrian Croucher

hi Matt,


On 25/09/17 23:12, Matthew Knepley wrote:


If you truly need the exact same SF for your grid, you should be able 
to use DMGet/SetPointSF() since it will just reference count it for 
you. Then

the default SF is created automatically from the point SF. Does this work?


It doesn't seem to work, unfortunately. If I run in parallel I get an 
error on each processor like this:


[0]PETSC ERROR: Nonconforming object sizes
[0]PETSC ERROR: SF roots 447 < pEnd 543
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html 
for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.6-5886-gc423942  
GIT Date: 2017-06-28 18:43:52 -0500
[0]PETSC ERROR: test_all on a linux-gnu-c-opt named en-354401 by acro018 
Wed Sep 27 16:31:28 2017
[0]PETSC ERROR: Configure options --with-x --download-hdf5 
--download-netcdf --download-exodusii --download-triangle --download-ptscotch
--download-chaco --download-hypre
[0]PETSC ERROR: #1 PetscSectionCreateGlobalSection() line 929 in 
/home/acro018/software/PETSc/code/src/vec/is/utils/vsectionis.c
[0]PETSC ERROR: #2 DMGetDefaultGlobalSection() line 3458 in 
/home/acro018/software/PETSc/code/src/dm/interface/dm.c
[0]PETSC ERROR: #3 User provided function() line 0 in User file

The reason I thought just copying the SF across might work is that the 
partition ghost cells in my modified DMPlex should be in the same 
locations (DMPlex points) as they were in the original DMPlex. My 
understanding was that the SF just stores the root corresponding to a 
leaf (ghost point?) on the current processor, so those ought to be 
unchanged. But maybe there are subtleties about the SF stuff that I 
don't yet understand.


What I am doing is adding new points for the dual-porosity mesh into my 
modified DMPlex. I have added the new cells after the partition ghost 
cells, and before the boundary ghost cells (and similarly for other 
depth strata). So I have shifted the locations of the boundary ghost 
cells, but not the partition ghost cells. I give the new points the 
appropriate depth label values so that the depth stratum bounds are 
updated correctly. I also shift the end_interior value for each depth 
stratum, so that code relying on DMPlexGetHybridBounds() should still work.


Would you expect copying the SF to work in this case?

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-25 Thread Matthew Knepley
On Sun, Sep 24, 2017 at 8:57 PM, Adrian Croucher 
wrote:

>
>
> On 25/09/17 13:37, Adrian Croucher wrote:
>
>>
>> Actually it looks like DMGetDefaultSF() does already work in Fortran
>> though, and could be used to do the same thing.
>>
>> Do you think that would be a reasonable way to do it?
>>
>>
> Hmm, I just tried it and it looks like the default SF produced by this
> function doesn't have any overlap. So that won't work for me.
>
> Guess I might need a Fortran interface for PetscSFSetGraph/PetscSFGetGraph
> after all. If you could put one it that would be great.
>
> Another simpler possibility that would work for my case is if there was a
> DMCopySF() function for copying the point SF from one DM to another. Up to
> you if you think that would be of enough general usefulness to warrant
> inclusion.


If you truly need the exact same SF for your grid, you should be able to
use DMGet/SetPointSF() since it will just reference count it for you. Then
the default SF is created automatically from the point SF. Does this work?

  Thanks,

 Matt


> - Adrian
>
> --
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-dev] PetscSF in Fortran

2017-09-24 Thread Adrian Croucher



On 25/09/17 13:37, Adrian Croucher wrote:


Actually it looks like DMGetDefaultSF() does already work in Fortran 
though, and could be used to do the same thing.


Do you think that would be a reasonable way to do it?



Hmm, I just tried it and it looks like the default SF produced by this 
function doesn't have any overlap. So that won't work for me.


Guess I might need a Fortran interface for 
PetscSFSetGraph/PetscSFGetGraph after all. If you could put one it that 
would be great.


Another simpler possibility that would work for my case is if there was 
a DMCopySF() function for copying the point SF from one DM to another. 
Up to you if you think that would be of enough general usefulness to 
warrant inclusion.


- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-24 Thread Adrian Croucher



On 25/09/17 13:18, Adrian Croucher wrote:


Alternatively, it occurred to me I could probably just use 
DMCreateDefaultSF() to create the point SF on the new DM- would there 
be any disadvantage in doing it that way?


However, at the moment I can't, because it appears the Fortran 
interface for DMCreateDefaultSF() is missing. I would have thought the 
interface for that should be simple enough, as it just takes a DM and 
a couple of PetscSections.


Actually it looks like DMGetDefaultSF() does already work in Fortran 
though, and could be used to do the same thing.


Do you think that would be a reasonable way to do it?

- Adrian

--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-24 Thread Adrian Croucher

hi


On 21/09/17 15:15, Jed Brown wrote:

Unfortunately PetscSFSetGraph/PetscSFGetGraph need custom bindings for
Fortran and they have not been written.  Shouldn't be difficult, but
will take a little work/testing.  I would probably make the array of
PetscSFNode be a PetscInt array of length 2n (or of shape (2,n)) --
that's an equivalent memory representation to what we're using now.


That sounds like the sort of thing I need.

Alternatively, it occurred to me I could probably just use 
DMCreateDefaultSF() to create the point SF on the new DM- would there be 
any disadvantage in doing it that way?


However, at the moment I can't, because it appears the Fortran interface 
for DMCreateDefaultSF() is missing. I would have thought the interface 
for that should be simple enough, as it just takes a DM and a couple of 
PetscSections.



- Adrian


Adrian Croucher  writes:


hi

For the dual-porosity stuff I'm working on, I'm creating a modified
DMPlex. Looking at e.g. TS ex11.c (in the SplitFaces() routine), it
appears I am going to have to set up a PetscSF for the new DM (from what
I can gather, that is needed for managing parallel communication of
partition ghost cell values).

The way I've set up the modified DMPlex, the partition ghost cells are
in the same locations as they were in the original single-porosity DM.
So I'm thinking I basically just have to copy the SF straight over from
the original DM.

The routine DMGetPointSF() seems to work OK in Fortran, but I'm having a
bit of trouble with PetscSFGetGraph().

When I try to declare an array of type PetscSFNode for the iremote
parameter, it doesn't seem to know about PetscSFNode, even though I have
a 'use petsc' in my code, which normally gives access to everything.

Also I'm not sure of exactly how to set up the ilocal and iremote array
parameters for this function in Fortran. How should they be declared- as
pointer arrays?

- Adrian


--
Dr Adrian Croucher
Senior Research Fellow
Department of Engineering Science
University of Auckland, New Zealand
email: a.crouc...@auckland.ac.nz
tel: +64 (0)9 923 4611



Re: [petsc-dev] PetscSF in Fortran

2017-09-20 Thread Jed Brown
Unfortunately PetscSFSetGraph/PetscSFGetGraph need custom bindings for
Fortran and they have not been written.  Shouldn't be difficult, but
will take a little work/testing.  I would probably make the array of
PetscSFNode be a PetscInt array of length 2n (or of shape (2,n)) --
that's an equivalent memory representation to what we're using now.
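
A self-contained sketch of that (2,n) layout (plain Fortran, illustrative 
values only): column i of the array stands in for one PetscSFNode, with row 1 
playing the role of the owning rank and row 2 the root index on that rank.

program sfnode_layout
  implicit none
  integer, parameter :: nleaves = 3
  integer :: iremote(2, nleaves)
  integer :: i
  do i = 1, nleaves
     iremote(1, i) = i - 1     ! rank owning the root for leaf i (made-up value)
     iremote(2, i) = 10 * i    ! root index on that rank (made-up value)
  end do
  do i = 1, nleaves
     print '(a,i0,a,i0,a,i0)', 'leaf ', i, ': rank ', iremote(1, i), &
          ', index ', iremote(2, i)
  end do
end program sfnode_layout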

Adrian Croucher  writes:

> hi
>
> For the dual-porosity stuff I'm working on, I'm creating a modified 
> DMPlex. Looking at e.g. TS ex11.c (in the SplitFaces() routine), it 
> appears I am going to have to set up a PetscSF for the new DM (from what 
> I can gather, that is needed for managing parallel communication of 
> partition ghost cell values).
>
> The way I've set up the modified DMPlex, the partition ghost cells are 
> in the same locations as they were in the original single-porosity DM. 
> So I'm thinking I basically just have to copy the SF straight over from 
> the original DM.
>
> The routine DMGetPointSF() seems to work OK in Fortran, but I'm having a 
> bit of trouble with PetscSFGetGraph().
>
> When I try to declare an array of type PetscSFNode for the iremote 
> parameter, it doesn't seem to know about PetscSFNode, even though I have 
> a 'use petsc' in my code, which normally gives access to everything.
>
> Also I'm not sure of exactly how to set up the ilocal and iremote array 
> parameters for this function in Fortran. How should they be declared- as 
> pointer arrays?
>
> - Adrian
>
> -- 
> Dr Adrian Croucher
> Senior Research Fellow
> Department of Engineering Science
> University of Auckland, New Zealand
> email: a.crouc...@auckland.ac.nz
> tel: +64 (0)9 923 4611