Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-31 Thread Pierpaolo Minelli
Hi,

I want to thank you for all your useful suggestions, but luckily for me it was 
not a problem related to PETSc, as I had wrongly supposed from reading this error. There 
was a problem on the GPFS filesystem that CINECA support solved, and also a problem 
in my code, which was writing too many files when using 512 nodes. Once 
these two problems were solved I was able to continue to use Hypre without any 
problem.
Sorry for the trouble and thanks again for your suggestions.

Pierpaolo


> On 22 Jul 2020, at 02:51, Barry Smith wrote:
> 
> 
>   I assume PIC_3D is your code and you are using OpenMP? 
> 
>   Are you calling hypre from inside your OpenMP parallelism? From inside 
> PIC_3D?
> 
>The SIGTERM is confusing to me. Are you using signals in any way? Usually 
> a SIGTERM comes from outside a process, not from a process or thread crash.
> 
>   I assume for__signal_handl... is a Fortran signal handler
> 
> forrtl: error (78): process killed (SIGTERM)
> Image              PC                Routine            Line        Source
> libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
> libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
> libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
> libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
> PIC_3D 004071C0  Unknown   Unknown  Unknown
> PIC_3D 00490299  Unknown   Unknown  Unknown
> PIC_3D 00492C17  Unknown   Unknown  Unknown
> PIC_3D 0040562E  Unknown   Unknown  Unknown
> libc-2.17.so   2B33DC5BEB35  __libc_start_main Unknown  Unknown
> PIC_3D 00405539  Unknown   Unknown  Unknown
> 
> 
> 
>> On Jul 21, 2020, at 6:32 AM, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> 
>> Hi,
>> 
>> I have asked for an updated PETSc version compiled with 64-bit indices.
>> Now I have Version 3.13.3 and these are the configure options used:
>> 
>> #!/bin/python
>> if __name__ == '__main__':
>>   import sys
>>   import os
>>   sys.path.insert(0, os.path.abspath('config'))
>>   import configure
>>   configure_options = [
>> '--CC=mpiicc',
>> '--CXX=mpiicpc',
>> '--download-hypre',
>> '--download-metis',
>> '--download-mumps=yes',
>> '--download-parmetis',
>> '--download-scalapack',
>> '--download-superlu_dist',
>> '--known-64-bit-blas-indices',
>> 
>> '--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
>> '--with-64-bit-indices=1',
>> 
>> '--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
>> '--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
>> '--with-debugging=0',
>> '--with-fortran-interfaces=1',
>> '--with-fortran=1',
>> 'FC=mpiifort',
>> 'PETSC_ARCH=arch-linux2-c-opt',
>>   ]
>>   configure.petsc_configure(configure_options)
>> 
>> Now, I receive an error on hypre:
>> 
>> forrtl: error (78): process killed (SIGTERM)
>> Image              PC                Routine            Line        Source
>> libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
>> libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
>> libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
>> libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
>> PIC_3D 00

Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-21 Thread Pierpaolo Minelli


> On 21 Jul 2020, at 16:58, Mark Adams wrote:
> 
> This also looks like it could be some sort of library mismatch. You might try 
> deleting your architecture directory and starting over; that is PETSc's "make 
> realclean".

I hope this is not the case, because I am working at the CINECA HPC facility 
(Italy), and there I need to load a module for each piece of software I use. I 
asked CINECA support to compile a version of PETSc with 64-bit integers and all 
the external packages, and once they had done so I loaded this new 
module directly, so the older version (3.8.x with 32-bit integers) is not 
involved at all.
At least I hope…

Thanks

Pierpaolo

> 
> On Tue, Jul 21, 2020 at 10:45 AM Stefano Zampini <stefano.zamp...@gmail.com> wrote:
> 
> 
>> On Jul 21, 2020, at 1:32 PM, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> 
>> Hi,
>> 
>> I have asked for an updated PETSc version compiled with 64-bit indices.
>> Now I have Version 3.13.3 and these are the configure options used:
>> 
>> #!/bin/python
>> if __name__ == '__main__':
>>   import sys
>>   import os
>>   sys.path.insert(0, os.path.abspath('config'))
>>   import configure
>>   configure_options = [
>> '--CC=mpiicc',
>> '--CXX=mpiicpc',
>> '--download-hypre',
>> '--download-metis',
>> '--download-mumps=yes',
>> '--download-parmetis',
>> '--download-scalapack',
>> '--download-superlu_dist',
>> '--known-64-bit-blas-indices',
>> 
>> '--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
>> '--with-64-bit-indices=1',
>> 
>> '--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
>> '--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
>> '--with-debugging=0',
>> '--with-fortran-interfaces=1',
>> '--with-fortran=1',
>> 'FC=mpiifort',
>> 'PETSC_ARCH=arch-linux2-c-opt',
>>   ]
>>   configure.petsc_configure(configure_options)
>> 
>> Now, I receive an error on hypre:
>> 
>> forrtl: error (78): process killed (SIGTERM)
>> Image              PC                Routine            Line        Source
>> libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
>> libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
>> libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
>> libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
>> PIC_3D 004071C0  Unknown   Unknown  Unknown
>> PIC_3D 00490299  Unknown   Unknown  Unknown
>> PIC_3D 00492C17  Unknown   Unknown  Unknown
>> PIC_3D 0040562E  Unknown   Unknown  Unknown
>> libc-2.17.so       2B33DC5BEB35  __libc_start_main Unknown  Unknown
>> PIC_3D 00405539  Unknown   Unknown  Unknown
>> 
>> Is it possible that I also need to ask for hypre to be compiled with an option for 64-bit indices?
> 
> These configure options compile hypre with 64bit indices support.
> It should work just fine. Can you run a very small case of your code to 
> confirm?
> 
> 
>> Is it possible to instruct this inside the PETSc configure?
>> Alternatively, is it possible to use a different multigrid PC inside PETSc 
>> that accepts 64-bit indices?
>> 
>> Thanks in advance
>> 
>> Pierpaolo
>> 
>> 
>>> On 27 May 2020, at 11:26, Stefano Zampini <stefano.zamp...@gmail.com> wrote:
>>> 
>>> You need a version of PETSc compiled with 64-bit indices, since the message 
>>> indicates the number of dofs in this case is larger than INT_MAX:
>>> 2501×3401×1601 = 13617947501
>>> 
>>> I also suggest you upgra

Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-21 Thread Pierpaolo Minelli


> On 21 Jul 2020, at 16:56, Mark Adams wrote:
> 
> 
> 
> On Tue, Jul 21, 2020 at 9:46 AM Matthew Knepley <knep...@gmail.com> wrote:
> On Tue, Jul 21, 2020 at 9:35 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Thanks for your reply.
> As I wrote before, I use these settings:
> 
> -dm_mat_type hypre -pc_type hypre -pc_hypre_type boomeramg 
> -pc_hypre_boomeramg_relax_type_all SOR/Jacobi 
> -pc_hypre_boomeramg_coarsen_type PMIS -pc_hypre_boomeramg_interp_type FF1 
> -ksp_type richardson
> 
> Is there a way to emulate these features also with GAMG?
> 
> Smoothers: You have complete control here
> 
>   -mg_levels_pc_type sor   (the default is Chebyshev which you could also try)
> 
> And you set the KSP type. You have -ksp_type richardson above but that is not 
> used for Hypre. It is for GAMG. Chebyshev is a ksp type (-ksp_type chebyshev).

> 
> Hypre is very good on Poisson. The grid complexity (cost per iteration) can 
> be high but the convergence rate will be better than GAMG.
> 
> But, you should be able to get hypre to work. 

Yes, it is very good for Poisson, and on a smaller case, at the beginning of my 
code development, I tried Hypre, ML, and GAMG (without adding more 
options, I have to admit) and hypre was faster without losing precision or 
accuracy in the results (I checked them with -ksp_monitor_true_residual).
I left -ksp_type richardson instead of the default gmres only because, judging from the 
residuals, it seemed more accurate.

So first, I will try again to see if hypre (with 64-bit integers) is able to 
work on a smaller case, as suggested by Stefano.
Then I will investigate the GAMG options and I will give you feedback.
The problem is that I need 64-bit integers because of my problem size, so I have 
to follow both paths, but I hope that I will be able to continue to use hypre.
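For reference, a possible starting point for those GAMG experiments, pieced together from Matt's and Mark's suggestions in this thread and from the 2018 exchange further below; it is only a sketch for comparison against the BoomerAMG settings, not a tuned configuration (note that -dm_mat_type hypre should also be dropped, since GAMG works on the default AIJ matrices):

-pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_threshold 0.
-mg_levels_ksp_type richardson -mg_levels_pc_type sor
-ksp_type richardson -ksp_rtol 1.e-7 -ksp_monitor_true_residual

The BoomerAMG coarsening and interpolation choices (PMIS, FF1) have no direct GAMG equivalent; the PCGAMGSetThreshold and PCGAMGSetSquareGraph pages linked in Matt's reply are the closest knobs.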

Thanks

Pierpaolo


>  
> 
> Coarsening: This is much different in agglomeration AMG. There is a 
> discussion here:
> 
>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetThreshold.html
>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetSquareGraph.html
> 
> Interpolation: This is built-in for agglomeration AMG.
>  
> It would be better to use only native PETSc implementations, but these 
> settings, as long as 32-bit integer indexing was sufficient, gave me the best 
> performance. 
> For this reason I also asked whether it was possible to configure hypre (inside 
> PETSc) with 64-bit integers.
> 
> Yes. That happened when you reconfigured for 64 bits. You may have 
> encountered a Hypre bug.
> 
>   Thanks,
> 
> Matt
>  
> Pierpaolo
> 
> 
>> On 21 Jul 2020, at 13:36, Dave May <dave.mayhe...@gmail.com> wrote:
>> 
>> 
>> 
>> On Tue, 21 Jul 2020 at 12:32, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> Hi,
>> 
>> I have asked for an updated PETSc version compiled with 64-bit indices.
>> Now I have Version 3.13.3 and these are the configure options used:
>> 
>> #!/bin/python
>> if __name__ == '__main__':
>>   import sys
>>   import os
>>   sys.path.insert(0, os.path.abspath('config'))
>>   import configure
>>   configure_options = [
>> '--CC=mpiicc',
>> '--CXX=mpiicpc',
>> '--download-hypre',
>> '--download-metis',
>> '--download-mumps=yes',
>> '--download-parmetis',
>> '--download-scalapack',
>> '--download-superlu_dist',
>> '--known-64-bit-blas-indices',
>> 
>> '--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
>> '--with-64-bit-indices=1',
>> 
>> '--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
>> '--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
>> '--with-debugging=0',
>> '--with-fortran-interfaces=1',
>> '--with-fortran=1',
>> 'FC=mpiifort',
>> 'PETSC_ARCH=arch-linux2-c-opt',
>>   ]
>>   configure.petsc_configure(configure_options)
>> 
>> Now, I receive an error on hypre:
>> 
>> forrtl: error (78): process killed (SIGTERM)
>> Image  PC

Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-21 Thread Pierpaolo Minelli
Thanks for your useful suggestions.

Pierpaolo


> On 21 Jul 2020, at 15:45, Matthew Knepley wrote:
> 
> On Tue, Jul 21, 2020 at 9:35 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Thanks for your reply.
> As I wrote before, I use these settings:
> 
> -dm_mat_type hypre -pc_type hypre -pc_hypre_type boomeramg 
> -pc_hypre_boomeramg_relax_type_all SOR/Jacobi 
> -pc_hypre_boomeramg_coarsen_type PMIS -pc_hypre_boomeramg_interp_type FF1 
> -ksp_type richardson
> 
> Is there a way to emulate these features also with GAMG?
> 
> Smoothers: You have complete control here
> 
>   -mg_levels_pc_type sor   (the default is Chebyshev which you could also try)
> 
> Coarsening: This is much different in agglomeration AMG. There is a 
> discussion here:
> 
>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetThreshold.html
>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetSquareGraph.html
> 
> Interpolation: This is built-in for agglomeration AMG.
>  
> It would be better to use only native PETSc implementations, but these 
> settings, as long as 32-bit integer indexing was sufficient, gave me the best 
> performance. 
> For this reason I also asked whether it was possible to configure hypre (inside 
> PETSc) with 64-bit integers.
> 
> Yes. That happened when you reconfigured for 64 bits. You may have 
> encountered a Hypre bug.
> 
>   Thanks,
> 
> Matt
>  
> Pierpaolo
> 
> 
>> On 21 Jul 2020, at 13:36, Dave May <dave.mayhe...@gmail.com> wrote:
>> 
>> 
>> 
>> On Tue, 21 Jul 2020 at 12:32, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> Hi,
>> 
>> I have asked for an updated PETSc version compiled with 64-bit indices.
>> Now I have Version 3.13.3 and these are the configure options used:
>> 
>> #!/bin/python
>> if __name__ == '__main__':
>>   import sys
>>   import os
>>   sys.path.insert(0, os.path.abspath('config'))
>>   import configure
>>   configure_options = [
>> '--CC=mpiicc',
>> '--CXX=mpiicpc',
>> '--download-hypre',
>> '--download-metis',
>> '--download-mumps=yes',
>> '--download-parmetis',
>> '--download-scalapack',
>> '--download-superlu_dist',
>> '--known-64-bit-blas-indices',
>> 
>> '--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
>> '--with-64-bit-indices=1',
>> 
>> '--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
>> '--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
>> '--with-debugging=0',
>> '--with-fortran-interfaces=1',
>> '--with-fortran=1',
>> 'FC=mpiifort',
>> 'PETSC_ARCH=arch-linux2-c-opt',
>>   ]
>>   configure.petsc_configure(configure_options)
>> 
>> Now, I receive an error on hypre:
>> 
>> forrtl: error (78): process killed (SIGTERM)
>> Image              PC                Routine            Line        Source
>> libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
>> libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
>> libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
>> libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
>> libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
>> PIC_3D 004071C0  Unknown   Unknown  Unknown
>> PIC_3D 00490299  Unknown   Unknown  Unknown
>> PIC_3D 00492C17  Unknown   Unknown  Unknown
>> PIC_3D 0040562E  Unknown   Unknown  Unknown
>> libc-2.17.so       2B

Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-21 Thread Pierpaolo Minelli
Thanks for your reply.
As I wrote before, I use these settings:

-dm_mat_type hypre -pc_type hypre -pc_hypre_type boomeramg 
-pc_hypre_boomeramg_relax_type_all SOR/Jacobi -pc_hypre_boomeramg_coarsen_type 
PMIS -pc_hypre_boomeramg_interp_type FF1 -ksp_type richardson

Is there a way to emulate these features also with GAMG?

It would be better to use only native PETSc implementations, but these 
settings, as long as 32-bit integer indexing was sufficient, gave me the best 
performance. 
For this reason I also asked whether it was possible to configure hypre (inside 
PETSc) with 64-bit integers.

Pierpaolo


> On 21 Jul 2020, at 13:36, Dave May wrote:
> 
> 
> 
> On Tue, 21 Jul 2020 at 12:32, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Hi,
> 
> I have asked for an updated PETSc version compiled with 64-bit indices.
> Now I have Version 3.13.3 and these are the configure options used:
> 
> #!/bin/python
> if __name__ == '__main__':
>   import sys
>   import os
>   sys.path.insert(0, os.path.abspath('config'))
>   import configure
>   configure_options = [
> '--CC=mpiicc',
> '--CXX=mpiicpc',
> '--download-hypre',
> '--download-metis',
> '--download-mumps=yes',
> '--download-parmetis',
> '--download-scalapack',
> '--download-superlu_dist',
> '--known-64-bit-blas-indices',
> 
> '--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
> '--with-64-bit-indices=1',
> 
> '--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
> '--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
> '--with-debugging=0',
> '--with-fortran-interfaces=1',
> '--with-fortran=1',
> 'FC=mpiifort',
> 'PETSC_ARCH=arch-linux2-c-opt',
>   ]
>   configure.petsc_configure(configure_options)
> 
> Now, I receive an error on hypre:
> 
> forrtl: error (78): process killed (SIGTERM)
> Image              PC                Routine            Line        Source
> libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
> libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
> libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
> libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
> libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
> PIC_3D 004071C0  Unknown   Unknown  Unknown
> PIC_3D 00490299  Unknown   Unknown  Unknown
> PIC_3D 00492C17  Unknown   Unknown  Unknown
> PIC_3D 0040562E  Unknown   Unknown  Unknown
> libc-2.17.so       2B33DC5BEB35  __libc_start_main Unknown  Unknown
> PIC_3D 00405539  Unknown   Unknown  Unknown
> 
> Is it possible that I also need to ask for hypre to be compiled with an option for 
> 64-bit indices?
> Is it possible to instruct this inside the PETSc configure?
> Alternatively, is it possible to use a different multigrid PC inside PETSc 
> that accepts 64-bit indices?
> 
> You can use
>   -pc_type gamg
> All native PETSc implementations support 64bit indices.
>  
> 
> Thanks in advance
> 
> Pierpaolo
> 
> 
>> On 27 May 2020, at 11:26, Stefano Zampini <stefano.zamp...@gmail.com> wrote:
>> 
>> You need a version of PETSc compiled with 64-bit indices, since the message 
>> indicates the number of dofs in this case is larger than INT_MAX:
>> 2501×3401×1601 = 13617947501
>> 
>> I also suggest you upgrade to a newer version, 3.8.3 is quite old as the 
>> error message reports
>> 
>> On Wed, 27 May 2020 at 11:50, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> Hi,
>> 
>> I am trying to solve a Poisson equation on this grid:
>> 
>> Nx = 2501
>> Ny = 3401
>> Nz = 1601
>> 
>> I received this error:
>> 
>> [0]PETSC ERROR: - Error Message 
>> --

Re: [petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-07-21 Thread Pierpaolo Minelli
Hi,

I have asked for an updated PETSc version compiled with 64-bit indices.
Now I have Version 3.13.3 and these are the configure options used:

#!/bin/python
if __name__ == '__main__':
  import sys
  import os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure_options = [
'--CC=mpiicc',
'--CXX=mpiicpc',
'--download-hypre',
'--download-metis',
'--download-mumps=yes',
'--download-parmetis',
'--download-scalapack',
'--download-superlu_dist',
'--known-64-bit-blas-indices',

'--prefix=/cineca/prod/opt/libraries/petsc/3.13.3_int64/intelmpi--2018--binary',
'--with-64-bit-indices=1',

'--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl',
'--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.12.0/none',
'--with-debugging=0',
'--with-fortran-interfaces=1',
'--with-fortran=1',
'FC=mpiifort',
'PETSC_ARCH=arch-linux2-c-opt',
  ]
  configure.petsc_configure(configure_options)

Now, I receive an error on hypre:

forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line        Source
libHYPRE-2.18.2.s  2B33CF465D3F  for__signal_handl Unknown  Unknown
libpthread-2.17.s  2B33D5BFD370  Unknown   Unknown  Unknown
libpthread-2.17.s  2B33D5BF96D3  pthread_cond_wait Unknown  Unknown
libiomp5.so        2B33DBA14E07  Unknown           Unknown  Unknown
libiomp5.so        2B33DB98810C  Unknown           Unknown  Unknown
libiomp5.so        2B33DB990578  Unknown           Unknown  Unknown
libiomp5.so        2B33DB9D9659  Unknown           Unknown  Unknown
libiomp5.so        2B33DB9D8C39  Unknown           Unknown  Unknown
libiomp5.so        2B33DB993BCE  __kmpc_fork_call  Unknown  Unknown
PIC_3D 004071C0  Unknown   Unknown  Unknown
PIC_3D 00490299  Unknown   Unknown  Unknown
PIC_3D 00492C17  Unknown   Unknown  Unknown
PIC_3D 0040562E  Unknown   Unknown  Unknown
libc-2.17.so   2B33DC5BEB35  __libc_start_main Unknown  Unknown
PIC_3D 00405539  Unknown   Unknown  Unknown

Is it possible that I also need to ask for hypre to be compiled with an option for 
64-bit indices?
Is it possible to instruct this inside the PETSc configure?
Alternatively, is it possible to use a different multigrid PC inside PETSc that 
accepts 64-bit indices?

Thanks in advance

Pierpaolo


> On 27 May 2020, at 11:26, Stefano Zampini wrote:
> 
> You need a version of PETSc compiled with 64-bit indices, since the message 
> indicates the number of dofs in this case is larger than INT_MAX:
> 2501×3401×1601 = 13617947501
> 
> I also suggest you upgrade to a newer version, 3.8.3 is quite old as the 
> error message reports
> 
> On Wed, 27 May 2020 at 11:50, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Hi,
> 
> I am trying to solve a Poisson equation on this grid:
> 
> Nx = 2501
> Ny = 3401
> Nz = 1601
> 
> I received this error:
> 
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Overflow in integer operation: 
> http://www.mcs.anl.gov/petsc/documentation/faq.html#64-bit-indices
> [0]PETSC ERROR: Mesh of 2501 by 3401 by 1 (dof) is too large for 32 bit 
> indices
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 
> [0]PETSC ERROR: /marconi_scratch/userexternal/pminelli/PIC3D/2500_3400_1600/./PIC_3D on a arch-linux2-c-opt named r129c09s02 by pminelli Tue May 26 20:16:34 2020
> [0]PETSC ERROR: Configure options --prefix=/cineca/prod/opt/libraries/petsc/3.8.3/intelmpi--2018--binary
> CC=mpiicc FC=mpiifort CXX=mpiicpc F77=mpiifort F90=mpiifort --with-debugging=0
> --with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl
> --with-fortran=1 --with-fortran-interfaces=1
> --with-cmake-dir=/cineca/prod/opt/tools/cmake/3.5.2/none
> --with-mpi-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/impi/2018.4.274
> --download-scalapack --download-mumps=yes --download-hypre --download-superlu_dist
> --download-parmetis --download-metis
> [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 218 in 
>

[petsc-users] Error on INTEGER SIZE using DMDACreate3d

2020-05-27 Thread Pierpaolo Minelli
Hi,

I am trying to solve a Poisson equation on this grid:

Nx = 2501
Ny = 3401
Nz = 1601

I received this error:

[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Overflow in integer operation: 
http://www.mcs.anl.gov/petsc/documentation/faq.html#64-bit-indices
[0]PETSC ERROR: Mesh of 2501 by 3401 by 1 (dof) is too large for 32 bit indices
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 
[0]PETSC ERROR: /marconi_scratch/userexternal/pminelli/PIC3D/2500_3400_1600/./PIC_3D on a arch-linux2-c-opt named r129c09s02 by pminelli Tue May 26 20:16:34 2020
[0]PETSC ERROR: Configure options --prefix=/cineca/prod/opt/libraries/petsc/3.8.3/intelmpi--2018--binary
CC=mpiicc FC=mpiifort CXX=mpiicpc F77=mpiifort F90=mpiifort --with-debugging=0
--with-blaslapack-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/mkl
--with-fortran=1 --with-fortran-interfaces=1
--with-cmake-dir=/cineca/prod/opt/tools/cmake/3.5.2/none
--with-mpi-dir=/cineca/prod/opt/compilers/intel/pe-xe-2018/binary/impi/2018.4.274
--download-scalapack --download-mumps=yes --download-hypre --download-superlu_dist
--download-parmetis --download-metis
[0]PETSC ERROR: #1 DMSetUp_DA_3D() line 218 in /marconi/prod/build/libraries/petsc/3.8.3/intelmpi--2018--binary/BA_WORK/petsc-3.8.3/src/dm/impls/da/da3.c
[0]PETSC ERROR: #2 DMSetUp_DA() line 25 in /marconi/prod/build/libraries/petsc/3.8.3/intelmpi--2018--binary/BA_WORK/petsc-3.8.3/src/dm/impls/da/dareg.c
[0]PETSC ERROR: #3 DMSetUp() line 720 in /marconi/prod/build/libraries/petsc/3.8.3/intelmpi--2018--binary/BA_WORK/petsc-3.8.3/src/dm/interface/dm.c
forrtl: error (76): Abort trap signal


I am on an HPC facility and, after loading the PETSc module, I have seen that it is 
configured with INTEGER size = 32.
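As a quick sanity check, a small Python sketch of the integer arithmetic behind the error above, using only the grid sizes quoted here:

nx, ny, nz = 2501, 3401, 1601
ndof = nx * ny * nz          # 13,617,947,501 unknowns with dof = 1
int32_max = 2**31 - 1        # 2,147,483,647, the largest 32-bit PetscInt
print(ndof > int32_max)      # True: the index space cannot fit in 32-bit integers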

I solve my problem with these options and it works perfectly with smaller grids:

-dm_mat_type hypre -pc_type hypre -pc_hypre_type boomeramg 
-pc_hypre_boomeramg_relax_type_all SOR/Jacobi -pc_hypre_boomeramg_coarsen_type 
PMIS -pc_hypre_boomeramg_interp_type FF1 -ksp_type richardson

Is it possible to overcome this if I ask them to install a version with INTEGER 
SIZE = 64?
Alternatively, is it possible to overcome this using intel compiler options?
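For reference, a minimal sketch of the configure flag involved, written in the same style as the configure scripts quoted elsewhere in this digest (this only illustrates the relevant options, not the full option list actually used at CINECA):

#!/bin/python
if __name__ == '__main__':
  import sys
  import os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure.petsc_configure([
    '--with-64-bit-indices=1',   # makes PetscInt 64-bit
    '--download-hypre',          # hypre built by configure then gets 64-bit index support
    # ... remaining compiler, BLAS/LAPACK and package options as in the 3.13.3 build ...
  ])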

Thanks in advance

Pierpaolo Minelli

Re: [petsc-users] Using DMDAs with an assigned domain decomposition to Solve Poisson Equation

2020-02-24 Thread Pierpaolo Minelli


> On 24 Feb 2020, at 12:58, Matthew Knepley wrote:
> 
> On Mon, Feb 24, 2020 at 6:35 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> 
> 
>> On 24 Feb 2020, at 12:24, Matthew Knepley <knep...@gmail.com> wrote:
>> 
>> On Mon, Feb 24, 2020 at 5:30 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> Hi,
>> I'm developing a 3D code in Fortran to study the space-time evolution of 
>> charged particles within a Cartesian domain.
>> The domain decomposition has been made by me taking into account symmetry 
>> and load balancing reasons related to my specific problem.
>> 
>> That may be a problem. DMDA can only decompose itself along straight lines 
>> through the domain. Is that how your decomposition looks?
> 
> My decomposition at the moment is practically a 2D decomposition, because I 
> have:
> 
> M = 251 (X)
> N = 341 (Y)
> P = 161 (Z)
> 
> and if I use 24 MPI processes, I divide my domain into a 3D Cartesian topology 
> with:
> 
> m = 4
> n = 6
> p = 1
> 
> 
>>  
>> In this first draft, it will remain constant throughout my simulation.
>> 
>> Is there a way, using DMDAs, to solve Poisson's equation, using the domain 
>> decomposition above, obtaining as a result the local solution including its 
>> ghost cells values?
>> 
>> How do you discretize the Poisson equation?
> 
> I intend to use a 7 point stencil like that in this example:
> 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex22f.F90.html
> 
> Okay, then you can do exactly as Mark says and use that example. This will 
> allow you to use geometric multigrid
> for the Poisson problem. I don't think it can be beaten speed-wise.
> 
>   Thanks,
> 
>  Matt
>  


OK, I will try this approach and let you know.

Thanks again

Pierpaolo



>> 
>>   Thanks,
>> 
>> Matt
>>  
>> As input data at each time-step I know the electric charge density in each 
>> local subdomain (RHS), including the ghost cells, even if I don't think they 
>> are useful for the calculation of the equation.
>> Matrix coefficients (LHS) and boundary conditions are constant during my 
>> simulation.
>> 
>> As an output I would need to know the local electrical potential in each 
>> local subdomain, including the values of the ghost cells in each 
>> dimension(X,Y,Z).
>> 
>> Is there an example that I can use in Fortran to solve this kind of problem?
>> 
>> Thanks in advance
>> 
>> Pierpaolo Minelli
>> 
>> 
> 
> 
> Thanks
> Pierpaolo
> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/


Re: [petsc-users] Using DMDAs with an assigned domain decomposition to Solve Poisson Equation

2020-02-24 Thread Pierpaolo Minelli


> On 24 Feb 2020, at 12:24, Matthew Knepley wrote:
> 
> On Mon, Feb 24, 2020 at 5:30 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Hi,
> I'm developing a 3D code in Fortran to study the space-time evolution of 
> charged particles within a Cartesian domain.
> The domain decomposition has been made by me taking into account symmetry and 
> load balancing reasons related to my specific problem.
> 
> That may be a problem. DMDA can only decompose itself along straight lines 
> through the domain. Is that how your decomposition looks?

My decomposition at the moment is practically a 2D decomposition, because I 
have:

M = 251 (X)
N = 341 (Y)
P = 161 (Z)

and if I use 24 MPI processes, I divide my domain into a 3D Cartesian topology with:

m = 4
n = 6
p = 1


>  
> In this first draft, it will remain constant throughout my simulation.
> 
> Is there a way, using DMDAs, to solve Poisson's equation, using the domain 
> decomposition above, obtaining as a result the local solution including its 
> ghost cells values?
> 
> How do you discretize the Poisson equation?

I intend to use a 7-point stencil like the one in this example:

https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex22f.F90.html


> 
>   Thanks,
> 
> Matt
>  
> As input data at each time-step I know the electric charge density in each 
> local subdomain (RHS), including the ghost cells, even if I don't think they 
> are useful for the calculation of the equation.
> Matrix coefficients (LHS) and boundary conditions are constant during my 
> simulation.
> 
> As an output I would need to know the local electrical potential in each 
> local subdomain, including the values of the ghost cells in each 
> dimension(X,Y,Z).
> 
> Is there an example that I can use in Fortran to solve this kind of problem?
> 
> Thanks in advance
> 
> Pierpaolo Minelli
> 
> 


Thanks
Pierpaolo

> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-users] Using DMDAs with an assigned domain decomposition to Solve Poisson Equation

2020-02-24 Thread Pierpaolo Minelli


> On 24 Feb 2020, at 12:08, Mark Adams wrote:
> 
> 
> On Mon, Feb 24, 2020 at 5:30 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Hi,
> I'm developing a 3D code in Fortran to study the space-time evolution of 
> charged particles within a Cartesian domain.
> The domain decomposition has been made by me taking into account symmetry and 
> load balancing reasons related to my specific problem. In this first draft, 
> it will remain constant throughout my simulation.
> 
> Is there a way, using DMDAs, to solve Poisson's equation, using the domain 
> decomposition above, obtaining as a result the local solution including its 
> ghost cells values?
> 
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMGlobalToLocalBegin.html#DMGlobalToLocalBegin
> 
>  
> 
> As input data at each time-step I know the electric charge density in each 
> local subdomain (RHS), including the ghost cells, even if I don't think they 
> are useful for the calculation of the equation.
> Matrix coefficients (LHS) and boundary conditions are constant during my 
> simulation.
> 
> As an output I would need to know the local electrical potential in each 
> local subdomain, including the values of the ghost cells in each 
> dimension(X,Y,Z).
> 
> Is there an example that I can use in Fortran to solve this kind of problem?
> 
> I see one, but it is not hard to convert a C example:
> 
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex14f.F90.html

Thanks, I will take a look at this example and I will also check the C examples 
in that directory.
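For what it's worth, here is a minimal sketch of the global-to-local step Mark pointed to, written in petsc4py only to keep it short (in Fortran the corresponding calls are DMGlobalToLocalBegin/DMGlobalToLocalEnd, as in the linked manual page); the grid and the 4x6x1 process grid are the ones mentioned in this thread, and the exact API details may need adjusting:

from petsc4py import PETSc

# 3D DMDA with dof=1 and stencil width 1, forcing the 4x6x1 process grid
# described in this thread (assumes the code is launched on 24 MPI ranks).
da = PETSc.DMDA().create(dim=3, dof=1, sizes=(251, 341, 161),
                         proc_sizes=(4, 6, 1), stencil_width=1)

x = da.createGlobalVec()      # global solution vector (no ghost cells)
# ... assemble and solve the Poisson system into x, e.g. with a KSP as in ex22f ...

xl = da.createLocalVec()      # local vector: owned cells plus ghost cells
da.globalToLocal(x, xl)       # the DMGlobalToLocalBegin/End pair in C/Fortran
phi = da.getVecArray(xl)      # ghosted array, indexed with global (i,j,k) indices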

>  
> 
> Thanks in advance
> 
> Pierpaolo Minelli
> 



[petsc-users] Using DMDAs with an assigned domain decomposition to Solve Poisson Equation

2020-02-24 Thread Pierpaolo Minelli
Hi,
I'm developing a 3D code in Fortran to study the space-time evolution of 
charged particles within a Cartesian domain.
I chose the domain decomposition myself, taking into account symmetry and 
load-balancing considerations specific to my problem. In this first draft, it 
will remain constant throughout my simulation.

Is there a way, using DMDAs, to solve Poisson's equation, using the domain 
decomposition above, obtaining as a result the local solution including its 
ghost cells values?

As input data at each time step I know the electric charge density in each 
local subdomain (RHS), including the ghost cells, although I don't think the 
ghost values are needed for solving the equation.
Matrix coefficients (LHS) and boundary conditions are constant during my 
simulation.

As an output I would need to know the local electrical potential in each local 
subdomain, including the values of the ghost cells in each dimension(X,Y,Z).

Is there an example that I can use in Fortran to solve this kind of problem?

Thanks in advance

Pierpaolo Minelli



Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-08-03 Thread Pierpaolo Minelli
I tried to use these options with GAMG:

> -mg_levels_ksp_type richardson
> -mg_levels_pc_type sor

but also in this case I do not obtain convergence.

I also used direct solvers (MUMPS and SuperLU) and they work fine, but slower 
(in solve time) than the iterative methods with ML and hypre.

If I check with -ksp_monitor_true_residual, is it advisable to continue using 
the iterative method?

Pierpaolo


> On 3 Aug 2018, at 15:16, Mark Adams wrote:
> 
> So this is a complex valued indefinite Helmholtz operator (very hard to solve 
> scalably) with axisymmetric coordinates. ML, hypre and GAMG all performed 
> about the same, with a big jump in residual initially and essentially not 
> solving it. You scaled it and this fixed ML and hypre but not GAMG.
> 
> From this output I can see that the eigenvalue estimates are strange. Your 
> equations look fine, so I have to assume that the complex values are the 
> problem. If this is symmetric, CG is a much better solver and eigen 
> estimator. But this is not a big deal, especially since you have two options 
> that work. I would suggest not using the Chebyshev smoother, since it uses these 
> bad eigen estimates and is basically not smoothing on some levels. You can use 
> this instead:
> 
> -mg_levels_ksp_type richardson
> -mg_levels_pc_type sor
> 
> Note, if you have a large shift these equations are very hard to solve 
> iteratively and you should just use a direct solver. Direct solvers in 2D are 
> not bad,
> 
> Mark
> 
> 
> On Fri, Aug 3, 2018 at 3:02 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> In this simulation I'm solving two equations in a two-dimensional domain 
> (z,r) at each time step. The first is an equation derived from the Maxwell 
> equation. Taking advantage of the fact that the domain is axisymmetric and 
> applying a temporal harmonic approximation, the equation I am solving is the 
> following:
> 
> [equation image not preserved in the archive]
> 
> The second equation is a Poisson equation in cylindrical coordinates (z,r) 
> in the field of real numbers.
> 
> This is the output obtained using these options (note that at this stage of 
> development I am only using a single processor):
> 
> -pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_reuse_interpolation true 
> -pc_gamg_square_graph 1 -pc_gamg_threshold 0. -ksp_rtol 1.e-7 -ksp_max_it 30 
> -ksp_monitor_true_residual -info | grep GAMG
> 
> [0] PCSetUp_GAMG(): level 0) N=321201, n data rows=1, n data cols=1, nnz/row 
> (ave)=5, np=1
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 4.97011 nnz ave. (N=321201)
> [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> [0] PCGAMGProlongator_AGG(): New grid 45754 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.972526e+00 
> min=3.461411e-03 PC=jacobi
> [0] PCSetUp_GAMG(): 1) N=45754, n data cols=1, nnz/row (ave)=10, 1 active pes
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 10.7695 nnz ave. (N=45754)
> [0] PCGAMGProlongator_AGG(): New grid 7893 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=5.686837e+00 
> min=5.062501e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 2) N=7893, n data cols=1, nnz/row (ave)=23, 1 active pes
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 23.2179 nnz ave. (N=7893)
> [0] PCGAMGProlongator_AGG(): New grid 752 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.504451e+01 
> min=2.124898e-02 PC=jacobi
> [0] PCSetUp_GAMG(): 3) N=752, n data cols=1, nnz/row (ave)=30, 1 active pes
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 30.7367 nnz ave. (N=752)
> [0] PCGAMGProlongator_AGG(): New grid 56 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=7.781296e+00 
> min=2.212257e-02 PC=jacobi
> [0] PCSetUp_GAMG(): 4) N=56, n data cols=1, nnz/row (ave)=22, 1 active pes
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 22.9643 nnz ave. (N=56)
> [0] PCGAMGProlongator_AGG(): New grid 6 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.525086e+00 
> min=1.375043e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 5) N=6, n data cols=1, nnz/row (ave)=6, 1 active pes
> [0] PCSetUp_GAMG(): 6 levels, grid complexity = 1.43876
> [0] PCSetUp_GAMG(): level 0) N=321201, n data rows=1, n data cols=1, nnz/row 
> (ave)=5, np=1
> [0] PCGAMGFilterGraph():   100.% nnz after filtering, with threshold 0., 
> 4.97011 nnz ave. (N=321201)
> [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> [0] PCGAMGProlongator_AGG(): New grid 45754 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.972526e+00 
> min=3.461411e-03 PC=jacobi
> [0] P

Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-08-02 Thread Pierpaolo Minelli
 4.374073117273e+14 true resid norm 
3.190248474209e+03 ||r(i)||/||b|| 1.002591749552e+00
 18 KSP preconditioned resid norm 3.454318321770e+14 true resid norm 
3.195991944291e+03 ||r(i)||/||b|| 1.004396736143e+00
 19 KSP preconditioned resid norm 3.438483821871e+14 true resid norm 
3.196113376292e+03 ||r(i)||/||b|| 1.004434898287e+00
 20 KSP preconditioned resid norm 3.090011643835e+14 true resid norm 
3.199125186774e+03 ||r(i)||/||b|| 1.005381412756e+00
 21 KSP preconditioned resid norm 1.976112359751e+14 true resid norm 
3.209202875553e+03 ||r(i)||/||b|| 1.008548503879e+00
 22 KSP preconditioned resid norm 1.874048396302e+14 true resid norm 
3.210733816807e+03 ||r(i)||/||b|| 1.009029629122e+00
 23 KSP preconditioned resid norm 1.086663247242e+14 true resid norm 
3.224219602770e+03 ||r(i)||/||b|| 1.013267774787e+00
 24 KSP preconditioned resid norm 1.036005882413e+14 true resid norm 
3.224749127723e+03 ||r(i)||/||b|| 1.013434187326e+00
 25 KSP preconditioned resid norm 6.943244220256e+13 true resid norm 
3.227732423395e+03 ||r(i)||/||b|| 1.014371740514e+00
 26 KSP preconditioned resid norm 6.064453821356e+13 true resid norm 
3.227303114420e+03 ||r(i)||/||b|| 1.014236822611e+00
 27 KSP preconditioned resid norm 5.237876331038e+13 true resid norm 
3.226320811775e+03 ||r(i)||/||b|| 1.013928116710e+00
 28 KSP preconditioned resid norm 4.407157490353e+13 true resid norm 
3.226123552298e+03 ||r(i)||/||b|| 1.013866124447e+00
 29 KSP preconditioned resid norm 3.691867703712e+13 true resid norm 
3.226578266524e+03 ||r(i)||/||b|| 1.014009026398e+00
  0 KSP preconditioned resid norm 4.105203293801e+02 true resid norm 
1.879478105170e+02 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 1.404700393158e+02 true resid norm 
5.941722233814e+03 ||r(i)||/||b|| 3.161368157187e+01
  2 KSP preconditioned resid norm 3.849004101235e+01 true resid norm 
3.456178528017e+03 ||r(i)||/||b|| 1.838903320294e+01
  3 KSP preconditioned resid norm 8.384263059954e+00 true resid norm 
1.401839902990e+03 ||r(i)||/||b|| 7.458665781386e+00
  4 KSP preconditioned resid norm 1.363029002549e+00 true resid norm 
3.612540200618e+02 ||r(i)||/||b|| 1.922097517753e+00
  5 KSP preconditioned resid norm 3.734525206097e-01 true resid norm 
1.272038960182e+02 ||r(i)||/||b|| 6.768043515285e-01
  6 KSP preconditioned resid norm 8.146021124788e-02 true resid norm 
3.196642795661e+01 ||r(i)||/||b|| 1.700814064749e-01
  7 KSP preconditioned resid norm 2.110568459516e-02 true resid norm 
6.793208791728e+00 ||r(i)||/||b|| 3.614412305757e-02
  8 KSP preconditioned resid norm 4.044708317961e-03 true resid norm 
1.976363197299e+00 ||r(i)||/||b|| 1.051548933644e-02
  9 KSP preconditioned resid norm 9.154675861687e-04 true resid norm 
3.458820713849e-01 ||r(i)||/||b|| 1.840309128547e-03
 10 KSP preconditioned resid norm 2.106487039520e-04 true resid norm 
1.064965566696e-01 ||r(i)||/||b|| 5.666283442018e-04
 11 KSP preconditioned resid norm 5.145654020522e-05 true resid norm 
2.436255895394e-02 ||r(i)||/||b|| 1.296240636532e-04
 12 KSP preconditioned resid norm 1.223910887488e-05 true resid norm 
4.954689842368e-03 ||r(i)||/||b|| 2.636205140533e-05

> On 1 Aug 2018, at 15:41, Matthew Knepley wrote:
> 
> On Wed, Aug 1, 2018 at 9:02 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>> On 27 Jul 2018, at 18:32, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>> 
>>> On Jul 27, 2018, at 3:26 AM, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
>>> 
>>> 
>>> Finally, I have a question. In my simulation I solve the two systems at 
>>> each step of the calculation, and it was my habit to use the following 
>>> option after the first resolution and before solving the system in the 
>>> second time step:
>>> 
>>> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)
>>> 
>>> Since this option was incompatible with the use of MUMPS or SuperLU_Dist, I 
>>> commented it out and noticed, to my surprise, that the iterative methods were 
>>> not affected by this change; rather, they were slightly faster. Is this 
>>> normal? Or am I using this command incorrectly?
>> 
>>   Presuming this is a linear problem?
>> 
>>   You should run with -ksp_monitor_true_residual and compare the final 
>> residual norms (at linear system convergence). Since KSP uses a relative 
>> decrease in the residual norm to declare convergence, what you have told us 
>> isn't enough to determine if one is converging "faster" to the solution than 
>> the other.
>> 
>>   Barry
> 
> Yes, there are two linear problems. The first in the field of complex numbers 
> (which I divided into two problems in the field of real numbers as suggested 
> by Matt

Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-08-01 Thread Pierpaolo Minelli


> On 27 Jul 2018, at 18:32, Smith, Barry F. wrote:
> 
> 
> 
>> On Jul 27, 2018, at 3:26 AM, Pierpaolo Minelli  
>> wrote:
>> 
>> 
>> Finally, I have a question. In my simulation I solve the two systems at each 
>> step of the calculation, and it was my habit to use the following option 
>> after the first resolution and before solving the system in the second time 
>> step:
>> 
>> call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)
>> 
>> Since this option was incompatible with the use of MUMPS or SuperLU_Dist, I 
>> commented it out and noticed, to my surprise, that the iterative methods were not 
>> affected by this change; rather, they were slightly faster. Is this normal? 
>> Or am I using this command incorrectly?
> 
>   Presuming this is a linear problem?



> 
>   You should run with -ksp_monitor_true_residual and compare the final 
> residual norms (at linear system convergence). Since KSP uses a relative 
> decrease in the residual norm to declare convergence, what you have told us 
> isn't enough to determine if one is converging "faster" to the solution than 
> the other.
> 
>   Barry
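For reference, the default convergence test this refers to is, roughly,

\| r_k \| \le \max(\mathrm{rtol} \cdot \| r_0 \|,\ \mathrm{abstol}),

where r_0 is the initial residual (equal to the right-hand side for a zero initial guess) and the norms are, by default, of the preconditioned residual; that is why comparing the true residual norms printed by -ksp_monitor_true_residual is the meaningful comparison.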

Yes, there are two linear problems. The first in the field of complex numbers 
(which I divided into two problems in the field of real numbers as suggested by 
Matthew) and the second in the field of real numbers. 

These are the results obtained using the option -ksp_monitor_true_residual:

-pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_reuse_interpolation true 
-pc_gamg_square_graph 1 -pc_gamg_threshold 0. -ksp_rtol 1.e-7 
-ksp_monitor_true_residual

  0 KSP preconditioned resid norm 1.324344286254e-02 true resid norm 
2.979865703850e-03 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 3.399266083153e-03 true resid norm 
1.196951495628e+05 ||r(i)||/||b|| 4.016796777391e+07
  2 KSP preconditioned resid norm 6.287919852336e-04 true resid norm 
4.429876706490e+04 ||r(i)||/||b|| 1.486602802524e+07
  3 KSP preconditioned resid norm 8.836547690039e-05 true resid norm 
1.182383826632e+04 ||r(i)||/||b|| 3.967909778970e+06
  4 KSP preconditioned resid norm 1.291370058561e-05 true resid norm 
2.801893692950e+03 ||r(i)||/||b|| 9.402751571421e+05
  5 KSP preconditioned resid norm 2.07398951e-06 true resid norm 
6.312112895919e+02 ||r(i)||/||b|| 2.118254150770e+05
  6 KSP preconditioned resid norm 3.283811876800e-07 true resid norm 
8.777865833701e+01 ||r(i)||/||b|| 2.945725313178e+04
  7 KSP preconditioned resid norm 5.414680273500e-08 true resid norm 
1.610127004050e+01 ||r(i)||/||b|| 5.403354258448e+03
  8 KSP preconditioned resid norm 9.645834363683e-09 true resid norm 
3.006444251909e+00 ||r(i)||/||b|| 1.008919377818e+03
  9 KSP preconditioned resid norm 1.915420455785e-09 true resid norm 
6.672996533262e-01 ||r(i)||/||b|| 2.239361500298e+02
 10 KSP preconditioned resid norm 3.334928638696e-10 true resid norm 
1.185976397497e-01 ||r(i)||/||b|| 3.979965929219e+01
  0 KSP preconditioned resid norm 1.416256896904e+04 true resid norm 
6.623626291881e+05 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 3.781264980774e+03 true resid norm 
1.283809201228e+11 ||r(i)||/||b|| 1.938227104995e+05
  2 KSP preconditioned resid norm 7.392413714593e+02 true resid norm 
4.990467508600e+10 ||r(i)||/||b|| 7.534343407504e+04
  3 KSP preconditioned resid norm 1.087641767696e+02 true resid norm 
1.367263690390e+10 ||r(i)||/||b|| 2.06470610e+04
  4 KSP preconditioned resid norm 1.627726591174e+01 true resid norm 
3.451534923895e+09 ||r(i)||/||b|| 5.210944536719e+03
  5 KSP preconditioned resid norm 2.564636460142e+00 true resid norm 
7.980525032990e+08 ||r(i)||/||b|| 1.204857381941e+03
  6 KSP preconditioned resid norm 4.252626820180e-01 true resid norm 
1.151817528111e+08 ||r(i)||/||b|| 1.738953070953e+02
  7 KSP preconditioned resid norm 6.758292325957e-02 true resid norm 
2.065701091779e+07 ||r(i)||/||b|| 3.118686050134e+01
  8 KSP preconditioned resid norm 1.099617201063e-02 true resid norm 
3.470561062696e+06 ||r(i)||/||b|| 5.239669192917e+00
  9 KSP preconditioned resid norm 2.195352537111e-03 true resid norm 
7.056823048483e+05 ||r(i)||/||b|| 1.065401750871e+00
 10 KSP preconditioned resid norm 4.380752631896e-04 true resid norm 
1.440377395627e+05 ||r(i)||/||b|| 2.174605468598e-01

  0 KSP preconditioned resid norm 4.970534641714e+02 true resid norm 
1.879478105170e+02 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 9.825390294838e+01 true resid norm 
6.304087699909e+03 ||r(i)||/||b|| 3.354169267824e+01
  2 KSP preconditioned resid norm 1.317425630977e+01 true resid norm 
1.659038483292e+03 ||r(i)||/||b|| 8.827123224947e+00
  3 KSP preconditioned resid norm 1.267331175258e+00 true resid norm 
3.723773819327e+02 ||r(i)||/||b|| 1.981280765700e+00
  4 KSP preconditioned resid norm 1.451198865319e-01 true res

Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-07-27 Thread Pierpaolo Minelli

Hi,

I tried to follow Matthew's suggestion of splitting the complex-valued system 
into systems over the real numbers. In practice I used three 
Krylov solvers (KSPs): one for the system that was already real-valued and two 
for the system that was originally complex-valued. Using real numbers, I was able to 
use the ML and Hypre preconditioners again.
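For reference, the standard real-valued rewriting behind Matthew's suggestion (the splitting actually used here may differ in detail): a complex system $(A + iB)(x + iy) = b + ic$, with $A, B, x, y, b, c$ real, is equivalent to the real block system

\begin{pmatrix} A & -B \\ B & A \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ c \end{pmatrix},

which is twice the size but involves only real arithmetic.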
I have performed the following tests with these options, using the PETSc version 
configured with --with-scalar-type=real (the default):

(1) -pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_reuse_interpolation true 
-pc_gamg_square_graph 1 -pc_gamg_threshold 0. -ksp_rtol 1.e-7
(2) -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps
(3) -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type superlu_dist
4) -pc_type ml -ksp_monitor -ksp_rtol 1.e-7
(5) -pc_type hypre -ksp_rtol 1.e-7

All methods solve both systems correctly, but the iterative methods (1, 4, 5) are 
faster. Among the iterative methods, using the Hypre or ML PCs I get better performance 
(solve time) than with GAMG. 

Previously, configuring PETSc with --with-scalar-type=complex, I had only been 
able to perform tests (1, 2, 3), and in that case the solve times were 
roughly the same and comparable to the solve time of case (1) when using 
only real numbers.

Finally, I have a question. In my simulation I solve the two systems at each 
step of the calculation, and it was my habit to use the following option after 
the first solve and before solving the system at the next time step:

call KSPSetInitialGuessNonzero(ksp,PETSC_TRUE,ierr)

Since this option was incompatible with the use of MUMPS or SuperLU_Dist, I 
commented it out and noticed, to my surprise, that the iterative methods were not 
affected by this change; rather, they were slightly faster. Is this normal? Or 
am I using this command incorrectly?

Thanks

Pierpaolo

> On 23 Jul 2018, at 15:43, Mark Adams wrote:
> 
> Note, as Barry said, GAMG works with native complex numbers. You can start 
> with your original complex build and use '-pc_type gamg'.
> Mark
> 
> On Mon, Jul 23, 2018 at 6:52 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> 
> 
> > On 20 Jul 2018, at 19:58, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
> > 
> > 
> > 
> >> On Jul 20, 2018, at 7:01 AM, Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> >> 
> >> Hi,
> >> 
> >> in my code I have to solve both a system in the field of real numbers and 
> >> in the field of complex numbers.
> >> My approach has been as follows.
> >> First I configured PETSc with the --with-scalar-type=complex option.
> >> Using this option I have unfortunately lost the possibility to use the two 
> >> preconditioners ML and Hypre.
> > 
> >You should still be able to use the PETSc PCGAMG algebraic multigrid 
> > solver. Have you tried that? If there are issues let us know because we 
> > would like to continue to improve the performance of PCGAMG to get it to be 
> > closer to as good as ML and hypre.
> > 
> >   Barry
> > 
> 
> I will try to convert, as suggested by Matthew, my complex system into a system 
> twice as large in real numbers. When I finish, I will try to use ML, Hypre 
> and GAMG and I will let you know if there are any performance differences.
> 
> Thanks
> 
> Pierpaolo
> 
> 
> >> I later created two subspaces of Krylov and tried to solve the two systems 
> >> as I used to when solving the only equation in the real numbers field.
> >> In order to correctly solve the system in the field of real numbers I had 
> >> to transform the coefficients from real to complex with an imaginary part 
> >> equal to zero.
> >> 
> >> Is there a possibility to use a different and better approach to solve my 
> >> problem?
> >> 
> >> Perhaps an approach where you can continue to use ML and Hypre for system 
> >> solving in the real numbers field or where you don't need to use complex 
> >> numbers when real numbers would actually suffice?
> >> 
> >> Thanks in advance
> >> 
> >> Pierpaolo
> >> 
> > 
> 



Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-07-23 Thread Pierpaolo Minelli



> On 20 Jul 2018, at 19:58, Smith, Barry F. wrote:
> 
> 
> 
>> On Jul 20, 2018, at 7:01 AM, Pierpaolo Minelli  
>> wrote:
>> 
>> Hi,
>> 
>> in my code I have to solve both a system in the field of real numbers and in 
>> the field of complex numbers.
>> My approach has been as follows.
>> First I configured PETSc with the --with-scalar-type=complex option.
>> Using this option I have unfortunately lost the possibility to use the two 
>> preconditioners ML and Hypre.
> 
>You should still be able to use the PETSc PCGAMG algebraic multigrid 
> solver. Have you tried that? If there are issues let us know because we would 
> like to continue to improve the performance of PCGAMG to get it to be closer 
> to as good as ML and hypre.
> 
>   Barry
> 

I will try to convert, as suggested by Matthew, my complex system into a system 
twice as large in real numbers. When I finish, I will try to use ML, Hypre and 
GAMG and I will let you know if there are any performance differences.

Thanks

Pierpaolo


>> I later created two subspaces of Krylov and tried to solve the two systems 
>> as I used to when solving the only equation in the real numbers field.
>> In order to correctly solve the system in the field of real numbers I had to 
>> transform the coefficients from real to complex with an imaginary part equal 
>> to zero.
>> 
>> Is there a possibility to use a different and better approach to solve my 
>> problem?
>> 
>> Perhaps an approach where you can continue to use ML and Hypre for system 
>> solving in the real numbers field or where you don't need to use complex 
>> numbers when real numbers would actually suffice?
>> 
>> Thanks in advance
>> 
>> Pierpaolo
>> 
> 



Re: [petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-07-20 Thread Pierpaolo Minelli
Thanks a lot for the helpful tips.

Pierpaolo

> On 20 Jul 2018, at 14:08, Matthew Knepley wrote:
> 
> On Fri, Jul 20, 2018 at 8:01 AM Pierpaolo Minelli <pierpaolo.mine...@cnr.it> wrote:
> Hi,
> 
> in my code I have to solve both a system in the field of real numbers and in 
> the field of complex numbers.
> My approach has been as follows.
> First I configured PETSc with the --with-scalar-type=complex option.
> Using this option I have unfortunately lost the possibility to use the two 
> preconditioners ML and Hypre.
> I later created two subspaces of Krylov and tried to solve the two systems as 
> I used to when solving the only equation in the real numbers field.
> In order to correctly solve the system in the field of real numbers I had to 
> transform the coefficients from real to complex with an imaginary part equal 
> to zero.
> 
> Is there a possibility to use a different and better approach to solve my 
> problem?
> 
> Perhaps an approach where you can continue to use ML and Hypre for system 
> solving in the real numbers field or where you don't need to use complex 
> numbers when real numbers would actually suffice?
> 
> Yes, any linear system in complex numbers can be converted to a system twice 
> as large in real numbers. So far,
> I think this is the best way to handle it, especially the elliptic ones.
> 
>   Thanks,
> 
>  Matt
>  
> Thanks in advance
> 
> Pierpaolo
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



[petsc-users] Solving Linear Systems with Scalar Real and Complex

2018-07-20 Thread Pierpaolo Minelli
Hi,

in my code I have to solve both a system over the real numbers and a system 
over the complex numbers.
My approach has been as follows.
First I configured PETSc with the --with-scalar-type=complex option.
Using this option I have unfortunately lost the possibility to use the two 
preconditioners ML and Hypre.
I then created two Krylov solvers (KSPs) and tried to solve the two systems as I 
used to when I was solving only the equation over the real numbers.
In order to solve the real-valued system correctly, I had to convert its 
coefficients from real to complex with an imaginary part equal to 
zero.

Is there a different and better approach to solve my 
problem?

Perhaps an approach where I can continue to use ML and Hypre for solving the 
system over the real numbers, or where I don't need to use complex 
numbers when real numbers would actually suffice?

Thanks in advance

Pierpaolo