Re: [petsc-users] user provided local preconditioner with additive schwarz preconditioner

2016-06-30 Thread Barry Smith
I don't think we have an example that does exactly that. If you are working with KSP directly and not SNES, here is how to proceed: KSPGetPC(ksp,&pc); PCSetType(pc,PCASM); KSPSetOperators(); KSPSetUp() <--- this must be called before the code below, otherwise
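
A minimal sketch of the sequence described above, assuming an already assembled matrix A, vectors b and x, and a user apply routine MyShellApply() (the names are illustrative, not from the thread); note that PCASMGetSubKSP() can only be called after KSPSetUp():

  #include <petscksp.h>

  /* Illustrative placeholder for the user's local preconditioner: here it just copies x to y. */
  PetscErrorCode MyShellApply(PC subpc, Vec x, Vec y)
  {
    PetscErrorCode ierr;
    ierr = VecCopy(x, y);CHKERRQ(ierr);
    return 0;
  }

  /* Solve A x = b with additive Schwarz globally and the shell PC on every local block. */
  PetscErrorCode SolveWithLocalShellPC(Mat A, Vec b, Vec x)
  {
    KSP            ksp, *subksp;
    PC             pc, subpc;
    PetscInt       nlocal, first, i;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
    ierr = KSPSetUp(ksp);CHKERRQ(ierr);                                 /* required before PCASMGetSubKSP() */

    ierr = PCASMGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);  /* one KSP per local block */
    for (i = 0; i < nlocal; i++) {
      ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
      ierr = PCSetType(subpc, PCSHELL);CHKERRQ(ierr);
      ierr = PCShellSetApply(subpc, MyShellApply);CHKERRQ(ierr);        /* user-provided local preconditioner */
    }
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }

Running with -ksp_view should then report asm as the outer preconditioner and a shell PC on each local block.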

[petsc-users] user provided local preconditioner with additive schwarz preconditioner

2016-06-30 Thread Duan Zhaowen
Hi, I was trying to define a shell preconditioner for the local partition and let it work with a global additive Schwarz preconditioner for parallel computing. Can anyone give an example of this kind of preconditioner combination? Thanks! ZW

Re: [petsc-users] Binary input-output for parallel dense matrices (petsc4py)

2016-06-30 Thread Barry Smith
Great, I have added error checking for the matrix types that didn't have it so the problem will be clearer for users in the future. Thanks for reporting the problem, Barry > On Jun 30, 2016, at 5:22 PM, Analabha Roy wrote: > > Hi, > > Thanks for your

Re: [petsc-users] Binary input-output for parallel dense matrices (petsc4py)

2016-06-30 Thread Analabha Roy
Hi, Thanks for your attention. I created A_new explicitly and set the type to dense, and it ran as expected. I also did A_new = A.duplicate(); A_new.load(viewer_new) and it worked too! Regards, AR On Thu, Jun 30, 2016 at 7:12 PM, Barry Smith wrote: > > Ahh. When

Re: [petsc-users] reusing matrix created with MatCreateMPIAIJWithSplitArrays

2016-06-30 Thread Barry Smith
> On Jun 30, 2016, at 2:40 PM, Hassan Raiesi wrote: > > Hello, > > We are using PETSc in our CFD code, and noticed that using > “MatCreateMPIAIJWithSplitArrays” is almost 60% faster for large problem size > (i.e. DOF > 725M, using GAMG each time-step

Re: [petsc-users] installing PETSc with mkl_pardiso solver

2016-06-30 Thread Satish Balay
Our current build works with: Configure Options: --configModules=PETSc.Configure --optionsModule=config.compilerOptions CC=icc CXX=icpc FC=ifort --with-blas-lapack-dir=/soft/com/packages/intel/15/update3/mkl --with-mkl_pardiso-dir=/soft/com/packages/intel/15/update3/mkl --download-mpich=1

[petsc-users] installing PETSc with mkl_pardiso solver

2016-06-30 Thread Hoffman, Galen
Greetings, I have been trying to install PETSc v3.7.2 with the MKL Pardiso add-on. I have been using the instructions at the bottom of the webpage at https://www.mcs.anl.gov/petsc/documentation/changes/35.html, which require setting the following options for the PETSc configuration:

Re: [petsc-users] reusing matrix created with MatCreateMPIAIJWithSplitArrays

2016-06-30 Thread Dave May
On Thursday, 30 June 2016, Hassan Raiesi wrote: > Hello, > > > > We are using PETSc in our CFD code, and noticed that using > “MatCreateMPIAIJWithSplitArrays” is almost 60% faster for large problem > size (i.e. DOF > 725M, using GAMG each time-step only takes

[petsc-users] reusing matrix created with MatCreateMPIAIJWithSplitArrays

2016-06-30 Thread Hassan Raiesi
Hello, We are using PETSc in our CFD code, and noticed that using "MatCreateMPIAIJWithSplitArrays" is almost 60% faster for large problem size (i.e. DOF > 725M, using GAMG each time-step only takes 5 sec, compared to 8.3 sec when assembling the matrix one row at a time using
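
For reference, a toy sketch of the call named in the subject, assuming exactly two MPI ranks assembling a 4x4 identity (the sizes and array contents are illustrative, not from the thread). The diagonal-block column indices are local to that block, the off-diagonal block is left empty, and the arrays are used in place by PETSc, so they must not be freed before the matrix is destroyed:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscMPIInt    size;
    PetscErrorCode ierr;
    /* Each rank owns 2 rows of a 4x4 identity: its diagonal block is a 2x2 identity and its
       off-diagonal block has no entries, so the CSR arrays are the same on both ranks. */
    PetscInt       di[] = {0, 1, 2};    /* diagonal block: row pointers                 */
    PetscInt       dj[] = {0, 1};       /* diagonal block: local column indices         */
    PetscScalar    da[] = {1.0, 1.0};   /* diagonal block: values                       */
    PetscInt       oi[] = {0, 0, 0};    /* off-diagonal block: row pointers, no entries */
    PetscInt       oj[] = {0};          /* unused here                                  */
    PetscScalar    oa[] = {0.0};        /* unused here                                  */

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
    if (size != 2) SETERRQ(PETSC_COMM_WORLD, 1, "Run this sketch with exactly 2 MPI ranks");

    ierr = MatCreateMPIAIJWithSplitArrays(PETSC_COMM_WORLD, 2, 2, 4, 4,
                                          di, dj, da, oi, oj, oa, &A);CHKERRQ(ierr);
    ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    ierr = MatDestroy(&A);CHKERRQ(ierr);   /* the arrays above are not copied or freed by PETSc */
    ierr = PetscFinalize();
    return ierr;
  }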

Re: [petsc-users] Solving generalized eigenvalue problem efficiently

2016-06-30 Thread Barry Smith
> On Jun 30, 2016, at 2:00 PM, Hassan Md Mahmudulla wrote: > > Hi, > I have been trying to solve a generalized eigenvalue problem where the matrix > is symmetric and of size around 10,000-20,000. I have been using the default > solver in PETSc. Not enough

[petsc-users] Solving generalized eigenvalue problem efficiently

2016-06-30 Thread Hassan Md Mahmudulla
Hi, I have been trying to solve a generalized eigenvalue problem where the matrix is symmetric and of size around 10,000-20,000. I have been using the default solver in PETSc. But compared to ScaLAPACK, I am seeing really bad performance in terms of execution time even though I am requesting only

Re: [petsc-users] totalview cannot recognize the type Vec_Seq

2016-06-30 Thread Xiangdong
Sorry for the spam. I retyped the same command in front of my coworker, and it just works. Xiangdong On Thu, Jun 30, 2016 at 1:50 PM, Xiangdong wrote: > Hello everyone, > > I am trying to use totalview to debug my codes. For the same binary, in > gdb, (gdb) p ((Vec_Seq*)

[petsc-users] totalview cannot recognize the type Vec_Seq

2016-06-30 Thread Xiangdong
Hello everyone, I am trying to use totalview to debug my codes. For the same binary, in gdb, (gdb) p ((Vec_Seq*) v->data)->array[0] works fine. However, when I tried to view the value of v in totalview, it cannot recognize the data type Vec_Seq. Does anyone have similar experience? Any clue to

Re: [petsc-users] ISGetTotalIndices with Fortran

2016-06-30 Thread Barry Smith
> On Jun 30, 2016, at 8:43 AM, Constantin Nguyen Van wrote: > > Hi again, > > I've noticed that the same error occurs with the subroutine > MatGetRedundantMatrix. > Is it possible to add the Fortran interface too? This one (it is

Re: [petsc-users] Binary input-output for parallel dense matrices (petsc4py)

2016-06-30 Thread Barry Smith
Ahh. When you use the "native" format with dense matrices you can only read the matrix back in later with a dense matrix. You need to set the A_new matrix type to dense before calling the load. We need more error checking in the PETSc MatLoad() for non-dense matrix formats that properly
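
The thread uses petsc4py, but the fix maps directly onto the underlying C calls; a minimal sketch, with the file name A.dat purely illustrative:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A_new;
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = MatCreate(PETSC_COMM_WORLD, &A_new);CHKERRQ(ierr);
    ierr = MatSetType(A_new, MATDENSE);CHKERRQ(ierr);   /* must be dense before MatLoad() reads the "native" file */
    ierr = MatLoad(A_new, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    ierr = MatDestroy(&A_new);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

In petsc4py the equivalent is to call setType('dense') on the new matrix before load(), which matches what the follow-up earlier in this digest reports.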

[petsc-users] Binary input-output for parallel dense matrices (petsc4py)

2016-06-30 Thread Analabha Roy
Hi all, I'm trying to do basic binary input/output for parallel matrices. Following the example in "petsc4py/demo/binary-io/matvecio.py", the code below seems to work fine (raw python file attached) with "mpirun -np 2
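
For the write side, a minimal C sketch under the same assumptions (illustrative file name A.dat; an all-zero 8x8 dense matrix just so there is something to store); the petsc4py code discussed in this thread goes through the Python wrappers for these same viewer calls:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

    /* Build a small parallel dense matrix (left all zeros for brevity). */
    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 8, 8);CHKERRQ(ierr);
    ierr = MatSetType(A, MATDENSE);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    /* Write it in PETSc binary format. */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "A.dat", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
    ierr = MatView(A, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

    ierr = MatDestroy(&A);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }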

Re: [petsc-users] ISGetTotalIndices with Fortran

2016-06-30 Thread Constantin Nguyen Van
Hi again, I've noticed that the same error occurs with the subroutine MatGetRedundantMatrix. Is it possible to add the Fortran interface too? Thank you. Constantin. On 2016-06-28 16:51, Constantin Nguyen Van wrote: Alright! Thank you. Constantin. On 2016-06-28 04:54, Barry Smith