Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Smith, Barry F. via petsc-users

  Send configure.log and make.log to petsc-ma...@mcs.anl.gov

  Barry


> On Feb 5, 2019, at 10:48 PM, Yaxiong Chen  wrote:
> 
> Since mumps and scalapack are already installed on my computer, I only ran 
> ./configure with --download-superlu_dist . 
> After everything is done, I received the error: 
> dyld: lazy symbol binding failed: Symbol not found: 
> _MatSolverTypeRegister_SuperLU_DIST
>   Referenced from: 
> /Users/yaxiong/Downloads/petsc-3.9.4/arch-darwin-c-debug/lib/libpetsc.3.9.dylib
>   Expected in: flat namespace
> 
> dyld: Symbol not found: _MatSolverTypeRegister_SuperLU_DIST
>   Referenced from: 
> /Users/yaxiong/Downloads/petsc-3.9.4/arch-darwin-c-debug/lib/libpetsc.3.9.dylib
>   Expected in: flat namespace
> 
> [0]PETSC ERROR: 
> 
> [0]PETSC ERROR: Caught signal number 5 TRAP
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see 
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to 
> find memory corruption errors
> [0]PETSC ERROR: likely location of problem given in stack below
> [0]PETSC ERROR: -  Stack Frames 
> 
> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
> [0]PETSC ERROR:   INSTEAD the line number of the start of the function
> [0]PETSC ERROR:   is given.
> [0]PETSC ERROR: [0] MatInitializePackage line 150 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/mat/interface/dlregismat.c
> [0]PETSC ERROR: [0] MatCreate line 83 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/mat/utils/gcreate.c
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Signal received
> 
> 
> From: Smith, Barry F. 
> Sent: Tuesday, February 5, 2019 10:58 PM
> To: Yaxiong Chen
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] PETSC matrix assembling super slow
>  
> 
>   Run ./configure with --download-superlu_dist --download-mumps 
> --download-scalapack and then you can use either -pc_factor_mat_solver_type 
> superlu_dist or -pc_factor_mat_solver_type mumps
> 
>   Good luck
> 
> 
> > On Feb 5, 2019, at 9:29 PM, Yaxiong Chen  wrote:
> > 
> > > Also, I found the solving time is also shorter when I use the direct 
> > > solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? 
> > > When I have a very large (e.g., 10*10 ) system, can I expect 
> > > iterative solver being faster?
> > 
> > <<   It sounds like the default preconditioner is not working well on your 
> > problem. First run with -ksp_monitor -ksp_converged_reason to get an idea 
> > of how quickly the iterative solver is converging. If the number of 
> > iterations is high you are going to need to change the preconditioner to 
> > get one that converges well for your problem. What mathematical models is 
> > your code implementing, this will help in determining what type of 
> > preconditioner to use.
> > 
> >  << Barry
> > 
> > 
> > It seems I can only work with Jacobi pre-conditioner. When I try LU or 
> > Cholesky ,I got the error :
> > [0]PETSC ERROR: See 
> > http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for 
> > possible LU and Cholesky solvers
> > [0]PETSC ERROR: Could not locate a solver package. Perhaps you must 
> > ./configure with --download-<package>
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> > trouble shooting.
> > [0]PETSC ERROR: Petsc Release Version 3.9.4, Sep, 11, 2018 
> > [0]PETSC ERROR: ./optimal_mechanical_part.hdc on a arch-darwin-c-debug 
> > named hidac.ecn.purdue.edu by chen2018 Tue Feb  5 22:19:08 2019
> > [0]PETSC ERROR: Configure options 
> > [0]PETSC ERROR: #1 MatGetFactor() line 4328 in 
> > /Users/yaxiong/Downloads/petsc-3.9.4/src/mat/interface/matrix.c
> > [0]PETSC ERROR: #2 PCSetUp_Cholesky() line 86 in 
> > /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/impls/factor/cholesky/cholesky.c
> > [0]PETSC ERROR: #3 PCSetUp() line 923 in 
> > /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/interface/precon.c
> > [0]PETSC ERROR: #4 KSPSetUp() line 381 in 
> > /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
> > [0]PETSC ERROR: #5 KSPSolve() line 612 in 
> > /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
> >  solve time   3.85885025E-003
> >  reason   0
> >  It asks me to download the package. Do I want to run the command in the 
> > folder of PETSC? But I am not sure what the name of the package should be . 
> > I tried with ./configure --download- But it does not work.
> > 
> > From: Smith, Barry F. 
> > Sent: Tuesday, February 5, 2019 1:18 PM
> > To: Yaxiong Chen
> > Cc: PETSc users list
> > Subject: Re: [petsc-users] PETSC matrix assembling super slow
> >  
> > 
> > 
> > > On Feb 5, 2019, at 10:32 AM, Yaxiong Chen  wrote:
> > > 
> > > Thanks Barry,
> > >  I will explore how to partition for 

Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Yaxiong Chen via petsc-users
Since mumps and scalapack are already installed on my computer, I only ran 
./configure with --download-superlu_dist .

After everything is done, I received the error:

dyld: lazy symbol binding failed: Symbol not found: 
_MatSolverTypeRegister_SuperLU_DIST

  Referenced from: 
/Users/yaxiong/Downloads/petsc-3.9.4/arch-darwin-c-debug/lib/libpetsc.3.9.dylib

  Expected in: flat namespace


dyld: Symbol not found: _MatSolverTypeRegister_SuperLU_DIST

  Referenced from: 
/Users/yaxiong/Downloads/petsc-3.9.4/arch-darwin-c-debug/lib/libpetsc.3.9.dylib

  Expected in: flat namespace


[0]PETSC ERROR: 


[0]PETSC ERROR: Caught signal number 5 TRAP

[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger

[0]PETSC ERROR: or see 
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to 
find memory corruption errors

[0]PETSC ERROR: likely location of problem given in stack below

[0]PETSC ERROR: -  Stack Frames 


[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,

[0]PETSC ERROR:   INSTEAD the line number of the start of the function

[0]PETSC ERROR:   is given.

[0]PETSC ERROR: [0] MatInitializePackage line 150 
/Users/yaxiong/Downloads/petsc-3.9.4/src/mat/interface/dlregismat.c

[0]PETSC ERROR: [0] MatCreate line 83 
/Users/yaxiong/Downloads/petsc-3.9.4/src/mat/utils/gcreate.c

[0]PETSC ERROR: - Error Message 
--

[0]PETSC ERROR: Signal received



From: Smith, Barry F. 
Sent: Tuesday, February 5, 2019 10:58 PM
To: Yaxiong Chen
Cc: petsc-users@mcs.anl.gov
Subject: Re: [petsc-users] PETSC matrix assembling super slow


  Run ./configure with --download-superlu_dist --download-mumps 
--download-scalapack and then you can use either -pc_factor_mat_solver_type 
superlu_dist or -pc_factor_mat_solver_type mumps

  Good luck


> On Feb 5, 2019, at 9:29 PM, Yaxiong Chen  wrote:
>
> > Also, I found the solving time is also shorter when I use the direct 
> > solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? 
> > When I have a very large (e.g., 10*10 ) system, can I expect 
> > iterative solver being faster?
>
> <<   It sounds like the default preconditioner is not working well on your 
> problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
> how quickly the iterative solver is converging. If the number of iterations 
> is high you are going to need to change the preconditioner to get one that 
> converges well for your problem. What mathematical models is your code 
> implementing, this will help in determining what type of preconditioner to 
> use.
>
>  << Barry
>
>
> It seems I can only work with Jacobi pre-conditioner. When I try LU or 
> Cholesky ,I got the error :
> [0]PETSC ERROR: See 
> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for 
> possible LU and Cholesky solvers
> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must 
> ./configure with --download-<package>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.9.4, Sep, 11, 2018
> [0]PETSC ERROR: ./optimal_mechanical_part.hdc on a arch-darwin-c-debug named 
> hidac.ecn.purdue.edu by chen2018 Tue Feb  5 22:19:08 2019
> [0]PETSC ERROR: Configure options
> [0]PETSC ERROR: #1 MatGetFactor() line 4328 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/mat/interface/matrix.c
> [0]PETSC ERROR: #2 PCSetUp_Cholesky() line 86 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/impls/factor/cholesky/cholesky.c
> [0]PETSC ERROR: #3 PCSetUp() line 923 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #4 KSPSetUp() line 381 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: #5 KSPSolve() line 612 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
>  solve time   3.85885025E-003
>  reason   0
>  It asks me to download the package. Do I want to run the command in the 
> folder of PETSC? But I am not sure what the name of the package should be . I 
> tried with ./configure --download- But it does not work.
>
> From: Smith, Barry F. 
> Sent: Tuesday, February 5, 2019 1:18 PM
> To: Yaxiong Chen
> Cc: PETSc users list
> Subject: Re: [petsc-users] PETSC matrix assembling super slow
>
>
>
> > On Feb 5, 2019, at 10:32 AM, Yaxiong Chen  wrote:
> >
> > Thanks Barry,
> >  I will explore how to partition for parallel computation later. But 
> > now I still have some confusion on the sequential operation.
> > I compared PETSC and Mumps. In both case, the subroutine for generating 
> > elemental matrix is very similar. However, the difference is in the 
> > following step:
> >call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr)
> > 

Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Smith, Barry F. via petsc-users

  Run ./configure with --download-superlu_dist --download-mumps 
--download-scalapack and then you can use either -pc_factor_mat_solver_type 
superlu_dist or -pc_factor_mat_solver_type mumps

  Good luck
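
  A minimal sketch of the corresponding runtime options once the reconfigure
  has finished (the executable name is the one from the error output above;
  -ksp_type preonly together with -pc_type lu or -pc_type cholesky gives a
  pure direct solve):

  ./optimal_mechanical_part.hdc -ksp_type preonly -pc_type lu \
  -pc_factor_mat_solver_type superlu_dist

  ./optimal_mechanical_part.hdc -ksp_type preonly -pc_type cholesky \
  -pc_factor_mat_solver_type mumps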


> On Feb 5, 2019, at 9:29 PM, Yaxiong Chen  wrote:
> 
> > Also, I found the solving time is also shorter when I use the direct 
> > solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? 
> > When I have a very large (e.g., 10*10 ) system, can I expect 
> > iterative solver being faster?
> 
> <<   It sounds like the default preconditioner is not working well on your 
> problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
> how quickly the iterative solver is converging. If the number of iterations 
> is high you are going to need to change the preconditioner to get one that 
> converges well for your problem. What mathematical models is your code 
> implementing, this will help in determining what type of preconditioner to 
> use.
> 
>  << Barry
> 
> 
> It seems I can only work with Jacobi pre-conditioner. When I try LU or 
> Cholesky ,I got the error :
> [0]PETSC ERROR: See 
> http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for 
> possible LU and Cholesky solvers
> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must 
> ./configure with --download-<package>
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.9.4, Sep, 11, 2018 
> [0]PETSC ERROR: ./optimal_mechanical_part.hdc on a arch-darwin-c-debug named 
> hidac.ecn.purdue.edu by chen2018 Tue Feb  5 22:19:08 2019
> [0]PETSC ERROR: Configure options 
> [0]PETSC ERROR: #1 MatGetFactor() line 4328 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/mat/interface/matrix.c
> [0]PETSC ERROR: #2 PCSetUp_Cholesky() line 86 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/impls/factor/cholesky/cholesky.c
> [0]PETSC ERROR: #3 PCSetUp() line 923 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/pc/interface/precon.c
> [0]PETSC ERROR: #4 KSPSetUp() line 381 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
> [0]PETSC ERROR: #5 KSPSolve() line 612 in 
> /Users/yaxiong/Downloads/petsc-3.9.4/src/ksp/ksp/interface/itfunc.c
>  solve time   3.85885025E-003
>  reason   0
>  It asks me to download the package. Do I want to run the command in the 
> folder of PETSC? But I am not sure what the name of the package should be . I 
> tried with ./configure --download- But it does not work.
> 
> From: Smith, Barry F. 
> Sent: Tuesday, February 5, 2019 1:18 PM
> To: Yaxiong Chen
> Cc: PETSc users list
> Subject: Re: [petsc-users] PETSC matrix assembling super slow
>  
> 
> 
> > On Feb 5, 2019, at 10:32 AM, Yaxiong Chen  wrote:
> > 
> > Thanks Barry,
> >  I will explore how to partition for parallel computation later. But 
> > now I still have some confusion on the sequential operation.
> > I compared PETSC and Mumps. In both case, the subroutine for generating 
> > elemental matrix is very similar. However, the difference is in the 
> > following step:
> >call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr)
> > In this step ,each element cost about 3~4 e-2 second.
> > 
> > However, when I use mumps, I use the following code: 
> >preA(Aptr+1:Aptr+n*(n+1)/2) = pack(Ae, mask(1:n,1:n))
> >Aptr = Aptr +n*(n+1)/2
> >if (allocated(auxRHSe)) preRHS(idx) = preRHS(idx)+auxRHSe
> >  It just cost 10e-6~10e-5 second. For a 1*1 matrix, the assembling 
> > time for using PETSC is 300s while it cost 60s when using Mumps.
> > For a 1*1 system, is there any way I can make it faster? 
> 
>   As we keep saying the slowness of the matrix assembly is due to incorrect 
> preallocation. Once you have the preallocation correct the speed should 
> increase dramatically.
> 
> > Also, I found the solving time is also shorter when I use the direct 
> > solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? 
> > When I have a very large (e.g., 10*10 ) system, can I expect 
> > iterative solver being faster?
> 
>It sounds like the default preconditioner is not working well on your 
> problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
> how quickly the iterative solver is converging. If the number of iterations 
> is high you are going to need to change the preconditioner to get one that 
> converges well for your problem. What mathematical models is your code 
> implementing, this will help in determining what type of preconditioner to 
> use.
> 
>   Barry
> 
> 
> > 
> > Thanks
> > 
> > Yaxiong
> > 
> > 
> > 
> > 
> > 
> > Yaxiong Chen, 
> > Ph.D. Student 
> > 
> > School of Mechanical Engineering, 3171 
> > 585 Purdue Mall
> > West Lafayette, IN 47907
> > 
> > 
> > 
> > 
> > 
> > From: Smith, Barry F. 
> > Sent: Monday, February 4, 2019 10:42 PM
> > To: Yaxiong Chen
> > Cc: PETSc users list
> > Subject: Re: [petsc-users] PETSC matrix assembling super slow
> >  
> > 
> > 
> > > On Feb 4, 2019, at 4:41 PM, Yaxiong Chen  

Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Yaxiong Chen via petsc-users
> Also, I found the solving time is also shorter when I use the direct 
> solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? When 
> I have a very large (e.g., 10*10 ) system, can I expect iterative 
> solver being faster?

<<   It sounds like the default preconditioner is not working well on your 
problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
how quickly the iterative solver is converging.

> On Feb 5, 2019, at 10:32 AM, Yaxiong Chen  wrote:
>
> Thanks Barry,
>  I will explore how to partition for parallel computation later. But now 
> I still have some confusion on the sequential operation.
> I compared PETSC and Mumps. In both case, the subroutine for generating 
> elemental matrix is very similar. However, the difference is in the following 
> step:
>call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr)
> In this step ,each element cost about 3~4 e-2 second.
>
> However, when I use mumps, I use the following code:
>preA(Aptr+1:Aptr+n*(n+1)/2) = pack(Ae, mask(1:n,1:n))
>Aptr = Aptr +n*(n+1)/2
>if (allocated(auxRHSe)) preRHS(idx) = preRHS(idx)+auxRHSe
>  It just cost 10e-6~10e-5 second. For a 1*1 matrix, the assembling 
> time for using PETSC is 300s while it cost 60s when using Mumps.
> For a 1*1 system, is there any way I can make it faster?

  As we keep saying the slowness of the matrix assembly is due to incorrect 
preallocation. Once you have the preallocation correct the speed should 
increase dramatically.

> Also, I found the solving time is also shorter when I use the direct 
> solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? When 
> I have a very large (e.g., 10*10 ) system, can I expect iterative 
> solver being faster?

   It sounds like the default preconditioner is not working well on your 
problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
how quickly the iterative solver is converging. If the number of iterations is 
high you are going to need to change the preconditioner to get one that 
converges well for your problem. What mathematical models is your code 
implementing, this will help in determining what type of preconditioner to use.

  Barry
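
A minimal sketch of that diagnostic run (the executable name is the one from
the error output above; -ksp_view additionally prints which KSP and PC were
actually used):

  ./optimal_mechanical_part.hdc -ksp_monitor -ksp_converged_reason -ksp_view

If the residual printed by -ksp_monitor stalls, or the iteration count runs
into the hundreds, that is the sign that the default preconditioner (ILU(0),
or block Jacobi with ILU(0) per process in parallel) is not suited to this
problem.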


>
> Thanks
>
> Yaxiong
>
>
>
>
>
> Yaxiong Chen,
> Ph.D. Student
>
> School of Mechanical Engineering, 3171
> 585 Purdue Mall
> West Lafayette, IN 47907
>
>
>
>
>
> From: Smith, Barry F. 
> Sent: Monday, February 4, 2019 10:42 PM
> To: Yaxiong Chen
> Cc: PETSc users list
> Subject: Re: [petsc-users] PETSC matrix assembling super slow
>
>
>
> > On Feb 4, 2019, at 4:41 PM, Yaxiong Chen  wrote:
> >
> >   Hi Barry,
> >
> >  !===
> >   mystart =rank +1! rank starts 
> > from 0
> >   do i=mystart,nelem,nproc! nelem: total number of 
> > elements  ; nproc :number of process
> > call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx)! 
> > Generate elemental matrix Ae and corresponding global Idx
> > ne=size(idx)
> > idx=idx-1   !-1 since PETSC index starts from zero
> > if (allocated(auxRHSe))  call 
> > VecSetValues(bvec,ne,idx,auxRHSe,ADD_VALUES,ierr)
> > call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr) ! Add 
> > elemental RHS to global RHS
> >   end do
> > 

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-05 Thread Abhyankar, Shrirang G via petsc-users
Hi Justin,
   Typically, the power grid distribution systems have a radial structure 
(unless it is an urban area) that leads to a, more or less, staircase type 
matrix. So a MatLoad() or VecLoad() would presumably just split the stairs, 
akin to a 1-D PDE. However, as you pointed out, it may cause the phases 
for a bus to be split across processors, but I don’t think it would cause a big 
performance hit.
I’ve worked with OpenDSS recently and have created a C wrapper for it that you 
can use to query values from it. I can help out with your experimentations, if 
you need.

Shri
From: petsc-users  On Behalf Of Justin Chang 
via petsc-users
Sent: Tuesday, February 5, 2019 5:23 PM
To: Mark Adams 
Cc: petsc-users 
Subject: Re: [petsc-users] Preconditioning systems of equations with complex 
numbers

Hi all,

So I *think* I know what the problem is. I am modeling a distribution system 
where each bus can have anywhere from 1 to 3 phases. In the smallest example I 
had, all four buses had 3 phases (hence 12 dofs), and this was very easy to 
decompose - each MPI process should have a specific set of complex voltages on 
their respective nodes/buses. The larger test cases have lots of single phase 
buses attached, so a naive decomposition of the matrix isn't trivial. Again 
we're only extracting the Y-bus matrices out of an external software (OpenDSS 
to be specific) so we have no a priori information on how they organize their 
degrees-of-freedom - we let MatLoad() and VecLoad() do the decomposition. So 
it's possible that during this decomposition, a single bus' phases are being 
split across two or more MPI processes, which I assume would mess up the 
algorithmic performance of GAMG/ASM when more than one MPI process is needed.

Yeah DMNetwork will fix this and ensure that all dofs per vertex do not get 
split across different MPI procs, but for now we're just trying to see if we 
can get a simple proof-of-concept working. We're trying to come up with a toy 
distribution system where everything is three-phase (hence very straightforward 
to decompose even with MatLoad()).

Justin

On Tue, Feb 5, 2019 at 9:26 AM Mark Adams  wrote:
I would stay away from eigen estimates in the solver (but give us the spectra 
to look at), so set -pc_gamg_agg_nsmooths 0 and use sor.

Applications that have lived on direct solvers can add sorts of crap like 
penalty terms.

sor seemed to work OK so I'd check the coarse grids in GAMG. Test with just two 
levels. That way you don't have to use sor on an internal MG grid, which can be 
bad and probably is here.


On Mon, Feb 4, 2019 at 10:59 PM Smith, Barry F. via petsc-users  wrote:


> On Feb 4, 2019, at 1:56 PM, Justin Chang via petsc-users  wrote:
>
> Thanks everyone for your suggestions/feedback. So a few things:
>
> 1) When I examined larger distribution networks (~150k buses) my eigenvalue 
> estimates from the chebyshev method get enormous indeed. See below:
>
> [0] PCSetUp_GAMG(): level 0) N=320745, n data rows=1, n data cols=1, nnz/row 
> (ave)=6, np=1
> [0] PCGAMGFilterGraph():   98.5638% nnz after filtering, with threshold 
> 0., 6.01293 nnz ave. (N=320745)
> [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> [0] PCGAMGProlongator_AGG(): New grid 44797 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=2.403335e+00 
> min=4.639523e-02 PC=jacobi
> [0] PCSetUp_GAMG(): 1) N=44797, n data cols=1, nnz/row (ave)=7, 1 active pes
> [0] PCGAMGFilterGraph():   99.9753% nnz after filtering, with threshold 
> 0., 7.32435 nnz ave. (N=44797)
> [0] PCGAMGProlongator_AGG(): New grid 13043 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=8.173298e+00 
> min=9.687506e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 2) N=13043, n data cols=1, nnz/row (ave)=22, 1 active pes
> [0] PCGAMGFilterGraph():   99.684% nnz after filtering, with threshold 
> 0., 22.5607 nnz ave. (N=13043)
> [0] PCGAMGProlongator_AGG(): New grid 2256 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=5.696594e+00 
> min=6.150856e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 3) N=2256, n data cols=1, nnz/row (ave)=79, 1 active pes
> [0] PCGAMGFilterGraph():   93.859% nnz after filtering, with threshold 
> 0., 79.5142 nnz ave. (N=2256)
> [0] PCGAMGProlongator_AGG(): New grid 232 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.454120e+00 
> min=6.780909e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 4) N=232, n data cols=1, nnz/row (ave)=206, 1 active pes
> [0] PCGAMGFilterGraph():   99.1729% nnz after filtering, with threshold 
> 0., 206.379 nnz ave. (N=232)
> [0] PCGAMGProlongator_AGG(): New grid 9 nodes
> [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=2.443612e+00 
> min=2.153627e-01 PC=jacobi
> [0] PCSetUp_GAMG(): 5) N=9, n data cols=1, nnz/row (ave)=9, 1 active pes
> [0] PCSetUp_GAMG(): 6 levels, grid complexity = 1.44058
>
> 2) I tried all the suggestions 

Re: [petsc-users] reading petsc binary files.

2019-02-05 Thread Smith, Barry F. via petsc-users



> On Feb 5, 2019, at 5:06 PM, Sanjay Kharche via petsc-users 
>  wrote:
> 
> 
> Hi
> 
> I use two mpi clusters (cluster 1 and 2). Whereas the petsc binary files I 
> generate can be read on cluster 1, I get errors doing so on cluster 2. I also 
> output vts files corresponding to each binary file output, and it appears 
> that both clusters do produce meaningful results. I use ver 3.7.4 on cluster 
> 1, and 3.7.5 on 2. Also, the same simulation produces binary files of 
> slightly different sizes on the two clusters.

   This is truly strange. PETSc has always used the exact same format for its 
binary files and automatically handles any byte-swapping that may be need to 
always have a consistent representation on the disk. Also a difference in the 
file size is strange. In over 20 years we've never had an issue with unreadable 
binary files.

  Can cluster 1 read the binary files from cluster 1 and cluster 2 read the 
binary files from cluster 2. Is it just cluster 2 cannot read the files from 
cluster 1? 

  Could the file be changed somehow as it is copied between the clusters?

  Is this difference in size reproducible if you delete the files and create 
them again?

  Is one of the clusters by any chance a Microsoft Windows cluster?

   Barry
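
One quick way to compare what the two clusters actually wrote is to dump the
file header directly. This is only a sketch and it assumes the standard PETSc
binary layout with default 32-bit PetscInt (big-endian integers on disk, a Vec
file starting with the Vec classid, assumed here to be 1211214, followed by
the global length); the file name argument is a placeholder:

  /* check_header.c - hypothetical standalone check of a PETSc binary Vec file */
  #include <stdio.h>
  #include <stdint.h>
  #include <arpa/inet.h>   /* ntohl(): the on-disk integers are big-endian */

  int main(int argc, char **argv)
  {
    FILE    *fp;
    uint32_t hdr[2];

    fp = fopen(argc > 1 ? argv[1] : "my_3d0.bin", "rb");
    if (!fp) { perror("fopen"); return 1; }
    if (fread(hdr, sizeof(uint32_t), 2, fp) != 2) {
      fprintf(stderr, "short read\n"); fclose(fp); return 1;
    }
    /* For a Vec written by VecView() the first int should be the Vec classid
       and the second the global length of the vector */
    printf("classid = %u, length = %u\n", ntohl(hdr[0]), ntohl(hdr[1]));
    fclose(fp);
    return 0;
  }

If the classid differs between the files from the two clusters, the "Not a
vector next in file" error is explained; if the classids match but the lengths
differ, the two runs wrote vectors of different sizes.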

> Can you comment on what I need to do to be able to read binary files on 
> cluster 2.
> 
> thanks
> Sanjay
> 
> Code snippet from parallel petsc code that does output:
> 
> if(time_int % (int)(1.0/DELTAT) == 0){ // a smaller time step and more files 
> outputted makes the CV estimate better.
> sprintf(str,"my_2d%d.vts", file_Counter); // this confirms that simulation 
> does something meaningful.
> PetscViewer viewer;  PetscViewerCreate(PETSC_COMM_WORLD, &viewer); 
> PetscViewerSetType(viewer, PETSCVIEWERVTK);
> PetscViewerFileSetName(viewer, str); VecView(u, viewer); 
> PetscViewerDestroy(&viewer);
> sprintf(str,"my_3d%d.bin",(int)file_Counter);
> PetscViewer viewer2;
> PetscViewerBinaryOpen(PETSC_COMM_WORLD,str,FILE_MODE_WRITE,&viewer2);
> VecView(u,viewer2);
> PetscViewerDestroy(&viewer2);
> file_Counter++;
> }
> 
> 
> How I am trying to read it (typically serial code with binary called ecg):
> 
>   // inputs are Petsc binary files, create and destroy viewer at each file 
> for simplicity.
> PetscViewer viewer_in;
> sprintf(str,"my_3d%d.bin",file_Counter);
> PetscViewerBinaryOpen(PETSC_COMM_WORLD,str,FILE_MODE_READ,&viewer_in);
> VecLoad(u,viewer_in);
> PetscViewerDestroy(&viewer_in);
> 
> 
> Errors I got when I ran ecg:
> 
> login3 endo]$ ./ecg
> [0]PETSC ERROR: - Error Message 
> --
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Not a vector next in file
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
> trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017
> [0]PETSC ERROR: ./ecg on a arch-linux2-c-opt named gra-login3 by kharches Tue 
> Feb  5 17:43:36 2019
> [0]PETSC ERROR: Configure options 
> --prefix=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/petsc/3.7.5
>  --with-mkl_pardiso=1 
> --with-mkl_pardiso-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl
>  --with-hdf5=1 
> --with-hdf5-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/hdf5-mpi/1.8.18
>  --download-hypre=1 --download-metis=1 --download-triangle=1 
> --download-ptscotch=1 --download-superlu_dist=1 --download-ml=1 
> --download-superlu=1 --download-prometheus=1 --download-mumps=1 
> --download-parmetis=1 --download-suitesparse=1 --download-mumps-shared=0 
> --download-ptscotch-shared=0 --download-superlu-shared=0 
> --download-superlu_dist-shared=0 --download-parmetis-shared=0 
> --download-metis-shared=0 --download-ml-shared=0 
> --download-suitesparse-shared=0 --download-hypre-shared=0 
> --download-prometheus-shared=0 --with-cc=mpicc --with-cxx=mpicxx 
> --with-c++-support --with-fc=mpifort --CFLAGS="-O2 -xCore-AVX2 -ftz 
> -fp-speculation=safe -fp-model source -fPIC" --CXXFLAGS="-O2 -xCore-AVX2 -ftz 
> -fp-speculation=safe -fp-model source -fPIC" --FFLAGS="-O2 -xCore-AVX2 -ftz 
> -fp-speculation=safe -fp-model source -fPIC" --with-gnu-compilers=0 
> --with-mpi=1 --with-build-step-np=8 --with-shared-libraries=1 
> --with-debugging=0 --with-pic=1 --with-x=0 --with-windows-graphics=0 
> --with-scalapack=1 
> --with-scalapack-include=/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/include
>  
> --with-scalapack-lib="[/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/lib/intel64/libmkl_scalapack_lp64.a,libmkl_blacs_openmpi_lp64.a,libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a]"
>  
> --with-blas-lapack-lib="[/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/lib/intel64/libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a]"
>  --with-hdf5=1 
> 

Re: [petsc-users] [TimeStepping] Eventhandler

2019-02-05 Thread Jed Brown via petsc-users
Dolfin will need this PR to work with any PETSc 3.10.

https://bitbucket.org/fenics-project/dolfin/pull-requests/508/jed-petsc-310/diff

It's been chillin' there for a couple months; it appears that much of
the Dolfin development effort has moved to DolfinX and Firedrake.

"Huck, Moritz"  writes:

> I am using the fenics library for my PDE discretization.
> I can not compile fenics with PETSc 3.10.3.
> I will locate the exact error tomorrow.
>
> 
> Von: Jed Brown 
> Gesendet: Dienstag, 5. Februar 2019 15:30:38
> An: Huck, Moritz; Smith, Barry F.; Abhyankar, Shrirang G
> Cc: PETSc users list
> Betreff: Re: [petsc-users] [TimeStepping] Eventhandler
>
> "Huck, Moritz via petsc-users"  writes:
>
>> @Shri
>> The system is very stiff, but the stiffness is handled well by ARKIMEX.
>>
> I'm using PETSc 3.10. (I cannot use 3.10.3 at the moment due to
> compatibility with a third library),
>
> What compatibility problem is this?  3.10.3 should be (binary and
> source) backward compatible with other 3.10 releases, so if there is a
> case where it is not, we need to understand it.
>
>
> Regarding the event, it seems there are scenarios where something
> important changes in the model and a conservative time step is
> desirable, and other scenarios where the dynamics after the event are on
> the same scale as before so it makes sense to continue with the same
> step sizes.


[petsc-users] reading petsc binary files.

2019-02-05 Thread Sanjay Kharche via petsc-users


Hi

I use two mpi clusters (cluster 1 and 2). Whereas the petsc binary files I 
generate can be read on cluster 1, I get errors doing so on cluster 2. I also 
output vts files corresponding to each binary file output, and it appears that 
both clusters do produce meaningful results. I use ver 3.7.4 on cluster 1, and 
3.7.5 on 2. Also, the same simulation produces binary files of slightly 
different sizes on the two clusters. Can you comment on what I need to do to be 
able to read binary files on cluster 2.

thanks
Sanjay

Code snippet from parallel petsc code that does output:

if(time_int % (int)(1.0/DELTAT) == 0){ // a smaller time step and more files 
outputted makes the CV estimate better.
sprintf(str,"my_2d%d.vts", file_Counter); // this confirms that simulation does 
something meaningful.
PetscViewer viewer;  PetscViewerCreate(PETSC_COMM_WORLD, &viewer); 
PetscViewerSetType(viewer, PETSCVIEWERVTK);
PetscViewerFileSetName(viewer, str); VecView(u, viewer); 
PetscViewerDestroy(&viewer);
sprintf(str,"my_3d%d.bin",(int)file_Counter);
PetscViewer viewer2;
PetscViewerBinaryOpen(PETSC_COMM_WORLD,str,FILE_MODE_WRITE,&viewer2);
VecView(u,viewer2);
PetscViewerDestroy(&viewer2);
file_Counter++;
}


How I am trying to read it (typically serial code with binary called ecg):

   // inputs are Petsc binary files, create and destroy viewer at each file for 
simplicity.
PetscViewer viewer_in;
sprintf(str,"my_3d%d.bin",file_Counter);
PetscViewerBinaryOpen(PETSC_COMM_WORLD,str,FILE_MODE_READ,&viewer_in);
VecLoad(u,viewer_in);
PetscViewerDestroy(&viewer_in);


Errors I got when I ran ecg:

login3 endo]$ ./ecg
[0]PETSC ERROR: - Error Message 
--
[0]PETSC ERROR: Invalid argument
[0]PETSC ERROR: Not a vector next in file
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017
[0]PETSC ERROR: ./ecg on a arch-linux2-c-opt named gra-login3 by kharches Tue 
Feb  5 17:43:36 2019
[0]PETSC ERROR: Configure options 
--prefix=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/petsc/3.7.5
 --with-mkl_pardiso=1 
--with-mkl_pardiso-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl
 --with-hdf5=1 
--with-hdf5-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/hdf5-mpi/1.8.18
 --download-hypre=1 --download-metis=1 --download-triangle=1 
--download-ptscotch=1 --download-superlu_dist=1 --download-ml=1 
--download-superlu=1 --download-prometheus=1 --download-mumps=1 
--download-parmetis=1 --download-suitesparse=1 --download-mumps-shared=0 
--download-ptscotch-shared=0 --download-superlu-shared=0 
--download-superlu_dist-shared=0 --download-parmetis-shared=0 
--download-metis-shared=0 --download-ml-shared=0 
--download-suitesparse-shared=0 --download-hypre-shared=0 
--download-prometheus-shared=0 --with-cc=mpicc --with-cxx=mpicxx 
--with-c++-support --with-fc=mpifort --CFLAGS="-O2 -xCore-AVX2 -ftz 
-fp-speculation=safe -fp-model source -fPIC" --CXXFLAGS="-O2 -xCore-AVX2 -ftz 
-fp-speculation=safe -fp-model source -fPIC" --FFLAGS="-O2 -xCore-AVX2 -ftz 
-fp-speculation=safe -fp-model source -fPIC" --with-gnu-compilers=0 
--with-mpi=1 --with-build-step-np=8 --with-shared-libraries=1 
--with-debugging=0 --with-pic=1 --with-x=0 --with-windows-graphics=0 
--with-scalapack=1 
--with-scalapack-include=/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/include
 
--with-scalapack-lib="[/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/lib/intel64/libmkl_scalapack_lp64.a,libmkl_blacs_openmpi_lp64.a,libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a]"
 
--with-blas-lapack-lib="[/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/imkl/11.3.4.258/mkl/lib/intel64/libmkl_intel_lp64.a,libmkl_sequential.a,libmkl_core.a]"
 --with-hdf5=1 
--with-hdf5-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/hdf5-mpi/1.8.18
 --with-fftw=1 
--with-fftw-dir=/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/MPI/intel2016.4/openmpi2.1/fftw-mpi/3.3.6
[0]PETSC ERROR: #1 PetscViewerBinaryReadVecHeader_Private() line 28 in 
/dev/shm/ebuser/PETSc/3.7.5/iomkl-2016.4.11/petsc-3.7.5/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #2 VecLoad_Binary() line 90 in 
/dev/shm/ebuser/PETSc/3.7.5/iomkl-2016.4.11/petsc-3.7.5/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #3 VecLoad_Default() line 413 in 
/dev/shm/ebuser/PETSc/3.7.5/iomkl-2016.4.11/petsc-3.7.5/src/vec/vec/utils/vecio.c
[0]PETSC ERROR: #4 VecLoad() line 975 in 
/dev/shm/ebuser/PETSc/3.7.5/iomkl-2016.4.11/petsc-3.7.5/src/vec/vec/interface/vector.c
[0]PETSC ERROR: #5 VecLoad_Binary_DA() line 931 in 
/dev/shm/ebuser/PETSc/3.7.5/iomkl-2016.4.11/petsc-3.7.5/src/dm/impls/da/gr2.c
[0]PETSC ERROR: #6 VecLoad_Default_DA() line 964 in 

Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Mark Adams via petsc-users
On Mon, Feb 4, 2019 at 4:17 PM Yaxiong Chen  wrote:

> Hi Mark,
>
>  Will the parameter MatMPIAIJSetPreallocation in influence the
> following part
>   do i=mystart,nelem,nproc
> call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx)
> ne=size(idx)
> idx=idx-1   !-1 since PETSC index starts from zero
> if (allocated(auxRHSe))  call
> VecSetValues(bvec,ne,idx,auxRHSe,ADD_VALUES,ierr)
> call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr)
>   end do
>
> I found this part will impede my assembling process. In my case ,the total
> DOF is 20800. I estimated the upper bound  of number of nonzero entries in
> each row as 594.
>

Then you want to set f9 to 594
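
A minimal sketch of what that looks like (C shown; the Fortran call is
analogous, with PETSC_NULL_INTEGER in place of NULL). The 594-per-row bound is
passed for both the diagonal and off-diagonal blocks, which wastes some memory
but removes the mallocs inside MatSetValues; MAT_NEW_NONZERO_ALLOCATION_ERR
makes PETSc stop with an error if the preallocation is still exceeded rather
than silently reallocating:

  /* 594 is the assumed upper bound on nonzeros per row from the connectivity */
  ierr = MatMPIAIJSetPreallocation(Amat, 594, NULL, 594, NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(Amat, 594, NULL);CHKERRQ(ierr); /* 1-process case */
  ierr = MatSetOption(Amat, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);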


> So I set f9 to be 20206 and f6 to be 10103 in the following command:
>
 call
> MatMPIAIJSetPreallocation(Amat,f9,PETSC_NULL_INTEGER,f6,PETSC_NULL_INTEGER,
> ierr)
>
>
> Running the code in sequential mode with -info I got the following
> information.And I can get the result even thought the assembling process is
> kind of slow(several times slower than using Mumps).
>
> [0] MatAssemblyBegin_MPIBAIJ(): Stash has 0 entries,uses 0 mallocs.
>
> [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs.
>
> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 20800 X 20800, block size 1;
> storage space: 13847 unneeded, 1030038 used
>
> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is
> 62659
>
> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 70
>
> [0] MatCheckCompressedRow(): Found the ratio (num_zerorows
> 0)/(num_localrows 20800) < 0.6. Do not use CompressedRow routines.
>
> [0] PetscCommDuplicate(): Using internal PETSc communicator 4462889472
> 140509341618528
>
> [0] PetscCommDuplicate(): Using internal PETSc communicator 4462889472
> 140509341618528
>
> [0] PetscCommDuplicate(): Using internal PETSc communicator 4462889472
> 140509341618528
>
> [0] VecScatterCreate_Seq(): Special case: sequential copy
>
> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 20800 X 0, block size 1;
> storage space: 104000 unneeded, 0 used
>

This looks like your matrix was created with 0 columns.


> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 0
>
> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 0
>
> [0] MatCheckCompressedRow(): Found the ratio (num_zerorows
> 20800)/(num_localrows 20800) > 0.6. Use CompressedRow routines.
>
>  aseembel time   282.5487180001
>
> [0] PetscCommDuplicate(): Using internal PETSc communicator 4462888960
> 140509341442256
>
> [0] PCSetUp(): Setting up PC for first time
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> [0] PCSetUp(): Leaving PC with identical preconditioner since operator is
> unchanged
>
> However, when I use the parallel mode,I got the following information:
>
> [0] MatAssemblyBegin_MPIBAIJ(): Stash has 707965 entries,uses 6 mallocs.
>
> [0] MatAssemblyBegin_MPIBAIJ(): Block-Stash has 0 entries, uses 0 mallocs.
>
> it seems it never went to  call
> MatAssemblyEnd(Amat,MAT_FINAL_ASSEMBLY,ierr)
>
> Is there anything I am doing wrong?
>
> Thanks
>
>
> Yaxiong Chen,
> Ph.D. Student
>
> School of Mechanical Engineering, 3171
>
> 585 Purdue Mall
>
> West Lafayette, IN 47907
>
>
>
>
>
> --
> *From:* Mark Adams 
> *Sent:* Tuesday, January 29, 2019 10:02 PM
> *To:* Yaxiong Chen; PETSc users list
> *Subject:* Re: [petsc-users] PETSC matrix assembling super slow
>
> Optimized is a configuration flag not a versions.
>
> You need to figure out your number of non-zeros per row of you global
> matrix, or a bound on it, and supply that in MatMPIAIJSetPreallocation.
> Otherwise it has to allocate and copy memory often.
>
> You could increase your f9 on a serial run and see what runs best and
> then  move to parallel with a value in f6 of about 1/2 of f9.
>
> On Tue, Jan 29, 2019 at 9:13 PM Yaxiong Chen  wrote:
>
> Thanks Mark,
>
> I use PETSC 3.9.4, is this the optimized version you called?
>
> Actually f9 and f6 are from the PETSC example. I don't know how to set
> the value correctly so I commend them. The size of my elemental matrix may
> vary. For 2D problem, the size of elemental matrix can be 24*24 or 32*32 or
> some other sizes. And the index is not continuous. In this case, the
> elemental matrix may interlace with each other in the global matrix, and I
> may have thousands of elemental matrix to be assembled. Does the
> preallocating suitable for 

Re: [petsc-users] MWE for DMPlexCreateCGNS

2019-02-05 Thread Matthew Knepley via petsc-users
On Tue, Feb 5, 2019 at 11:13 AM Andrew Parker 
wrote:

> On Tue, 5 Feb 2019 at 15:27, Matthew Knepley  wrote:
>
>> On Tue, Feb 5, 2019 at 9:47 AM Andrew Parker via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Does anyone have a MWE for DMPlexCreateCGNS to use in parallel? Ideally,
>>> read parallel, distribute in parallel, construct ghost cells (for parallel
>>> comms and halos for physical boundaries)? It's for a cell-centered solver
>>> working with cgns meshes.  Is there any limitation on cell-types?
>>>
>>
>> 1) CGNS is a terrible format, unfortunately. Are you sure about
>> committing to it? Always best to question design decisions before lots of
>> code has been written.
>>
>
>> 2) DMPlexCreateCGNS() probably works, but it has not been tested
>> exhaustively.
>>
>
>> 3) It does not work in parallel, and is unlikely to in the near future.
>> In order to read in parallel, you should be able to nicely select
>> blocks from disk in the format, and also have local BC specification
>> (rather than global numbers). MED is a nice format like this.
>> It is the CASCADE format, and GMsh uses it internally. We can now
>> read MED in parallel. Without too much work, we could also
>> read ExodusII in parallel I think.
>>
>
> Do you have a MWE for either of those two formats then?
>
> Key for me is to be in parallel by the time the setup stage is finished,
> if there is some serial reading here and there at the initial phase that's
> fine.  For a cell-centered code I just want to go from file read to
> parallel distribute, parallel ghost-cell creation, physical halo creation,
> and boundary condition (faces, edge in 2D) marking re-using as much of the
> petsc DMPlex system as possible.
>

Oh, that is fine then.


> My main requirement with any file format is the ability in a mesh/cad
> package to mark complex groups of edges/faces/cells together (on the
> boundary but also groupings of cells in the volume) to infer groups and
> therefore bcs, or material regions etc.  I am aware med/exo provide this:
> my reason for picking cgns is that is central to their format also.  I need
> to ensure for complex geometry and meshes, that if the file format does
> encode groupings/bcs then these are accessible after the parallel
> distribution phase is complete.  It would help me massively if there were
> an example that does all of this... I'll jump ship on file format to get
> that.
>

I think it should be fine to use CGNS for you. How about you make a small
CGNS file with some marked boundaries
and I will help get that in as a PETSc example (we can use Plex ex1 to read
it in and write out the Plex).

  Thanks,

 Matt
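
For reference, the serial-read / parallel-distribute pipeline described above
usually reduces to a handful of Plex calls. This is only a sketch (error
checking omitted; the file name and overlap width are placeholders, and
whether DMPlexCreateFromFile accepts a particular CGNS file is subject to the
caveats listed earlier in the thread):

  DM dm, dmDist, dmGhost;

  /* serial read; PETSC_TRUE requests an interpolated mesh (faces/edges
     created), which the finite-volume ghost-cell machinery needs */
  DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.cgns", PETSC_TRUE, &dm);

  /* distribute with a one-cell overlap for cell-centered halo exchange */
  DMPlexDistribute(dm, 1, NULL, &dmDist);
  if (dmDist) { DMDestroy(&dm); dm = dmDist; }

  /* add ghost cells outside the marked boundary faces for physical BCs */
  DMPlexConstructGhostCells(dm, NULL, NULL, &dmGhost);
  if (dmGhost) { DMDestroy(&dm); dm = dmGhost; }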


> Thanks,
> Andy
>
>>
>> 4) Once read, you get a regular Plex, so you can redistribute, make ghost
>> cells, etc. as you can for any Plex mesh.
>>
>> 5) As long as all you want Plex to do is manage topology and field data,
>> then you can have whatever mix of cell types you want.
>>
>> 6) Limitations on cell types come from routines that calculate geometric
>> quantities, integrate, etc.
>>
>>   Thanks,
>>
>>  Matt
>>
>
>
>>
>>
>>> Thanks,
>>> Andy
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>> 
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] PETSC matrix assembling super slow

2019-02-05 Thread Smith, Barry F. via petsc-users


> On Feb 5, 2019, at 10:32 AM, Yaxiong Chen  wrote:
> 
> Thanks Barry,
>  I will explore how to partition for parallel computation later. But now 
> I still have some confusion on the sequential operation.
> I compared PETSC and Mumps. In both case, the subroutine for generating 
> elemental matrix is very similar. However, the difference is in the following 
> step:
>call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr)
> In this step ,each element cost about 3~4 e-2 second.
> 
> However, when I use mumps, I use the following code: 
>preA(Aptr+1:Aptr+n*(n+1)/2) = pack(Ae, mask(1:n,1:n))
>Aptr = Aptr +n*(n+1)/2
>if (allocated(auxRHSe)) preRHS(idx) = preRHS(idx)+auxRHSe
>  It just cost 10e-6~10e-5 second. For a 1*1 matrix, the assembling 
> time for using PETSC is 300s while it cost 60s when using Mumps.
> For a 1*1 system, is there any way I can make it faster? 

  As we keep saying the slowness of the matrix assembly is due to incorrect 
preallocation. Once you have the preallocation correct the speed should 
increase dramatically.
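
A sketch of one way to get the preallocation right, assuming rstart/rend (the
locally owned row range), nelem (the number of elements this rank assembles)
and a hypothetical getElementIdx(e,&ne,idx) returning the ne global DOF
indices of element e. The counts are overestimates, since a coupling shared by
two elements is counted twice, but overestimating only wastes memory while
underestimating forces the mallocs that make assembly slow:

  static PetscErrorCode PreallocateFromElements(Mat A, PetscInt rstart, PetscInt rend, PetscInt nelem)
  {
    PetscInt       nlocal = rend - rstart, *d_nnz, *o_nnz;
    PetscInt       idx[MAX_ELEM_DOF], ne, e, i, j;  /* MAX_ELEM_DOF: placeholder bound */
    PetscErrorCode ierr;

    ierr = PetscCalloc2(nlocal, &d_nnz, nlocal, &o_nnz);CHKERRQ(ierr);
    for (e = 0; e < nelem; e++) {
      getElementIdx(e, &ne, idx);                        /* hypothetical helper */
      for (i = 0; i < ne; i++) {
        if (idx[i] < rstart || idx[i] >= rend) continue; /* row owned by another rank */
        for (j = 0; j < ne; j++) {
          if (idx[j] >= rstart && idx[j] < rend) d_nnz[idx[i]-rstart]++; /* diagonal block */
          else                                   o_nnz[idx[i]-rstart]++; /* off-diagonal block */
        }
      }
    }
    ierr = MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
    ierr = PetscFree2(d_nnz, o_nnz);CHKERRQ(ierr);
    return 0;
  }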

> Also, I found the solving time is also shorter when I use the direct 
> solver(0.432s  vs 4.332 s ). Is this due the small scale of the system? When 
> I have a very large (e.g., 10*10 ) system, can I expect iterative 
> solver being faster?

   It sounds like the default preconditioner is not working well on your 
problem. First run with -ksp_monitor -ksp_converged_reason to get an idea of 
how quickly the iterative solver is converging. If the number of iterations is 
high you are going to need to change the preconditioner to get one that 
converges well for your problem. What mathematical models is your code 
implementing, this will help in determining what type of preconditioner to use.

  Barry


> 
> Thanks
> 
> Yaxiong
> 
> 
> 
> 
> 
> Yaxiong Chen, 
> Ph.D. Student 
> 
> School of Mechanical Engineering, 3171 
> 585 Purdue Mall
> West Lafayette, IN 47907
> 
> 
> 
> 
> 
> From: Smith, Barry F. 
> Sent: Monday, February 4, 2019 10:42 PM
> To: Yaxiong Chen
> Cc: PETSc users list
> Subject: Re: [petsc-users] PETSC matrix assembling super slow
>  
> 
> 
> > On Feb 4, 2019, at 4:41 PM, Yaxiong Chen  wrote:
> > 
> >   Hi Barry,
> > 
> >  !===
> >   mystart =rank +1! rank starts 
> > from 0
> >   do i=mystart,nelem,nproc! nelem: total number of 
> > elements  ; nproc :number of process 
> > call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx)! 
> > Generate elemental matrix Ae and corresponding global Idx
> > ne=size(idx)
> > idx=idx-1   !-1 since PETSC index starts from zero 
> > if (allocated(auxRHSe))  call 
> > VecSetValues(bvec,ne,idx,auxRHSe,ADD_VALUES,ierr)  
> > call MatSetValues(Amat,ne,idx,ne,idx,Ae,ADD_VALUES,ierr) ! Add 
> > elemental RHS to global RHS
> >   end do
> > !===
> >   Maybe this is where I am wrong. The way I use MPI is to let each core 
> > generate the elemental matrix in turn.
> 
>This is a very bad strategy because there is no data locality. 
> 
> > Which means I have one global matrix on each core and finally add them 
> > together. My case is similar to typical finite element method. But the 
> > problem is the Index is not continuous, in this case I don't know how I can 
> > partition the global matrix. Do you have any suggestions or do you have any 
> > template which can show me how finite element method use PETSC?
> 
>   What you need to do is partition the elements across the processors (so 
> that each process has a contiguous subdomain of elements). Then each 
> process computes the element stiffness for "its elements". There really isn't 
> a single PETSc example that manages this all directly for finite elements 
> because that is a rather involved process to do it so as to get good 
> performance.
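
A sketch of what that loop looks like with a contiguous block of elements per
rank (C shown, error checking omitted; the Fortran loop changes the same way,
and getElementalMAT stands in for the existing ptSystem%getElementalMAT call):

  PetscInt nper   = (nelem + nproc - 1) / nproc;   /* elements per rank */
  PetscInt estart = rank * nper;
  PetscInt eend   = PetscMin(estart + nper, nelem);

  for (e = estart; e < eend; e++) {
    getElementalMAT(e, Ae, auxRHSe, &ne, idx);     /* hypothetical C analogue */
    MatSetValues(Amat, ne, idx, ne, idx, Ae, ADD_VALUES);
    VecSetValues(bvec, ne, idx, auxRHSe, ADD_VALUES);
  }
  MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);
  VecAssemblyBegin(bvec);
  VecAssemblyEnd(bvec);

This only reduces off-process communication if the element-to-rank assignment
lines up with the row ownership of Amat, which is what a mesh partitioner (or
one of the libraries mentioned below) takes care of.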
> 
>Depending on how specialized your problem needs to be you might consider 
> using one of the packages libMesh, MOOSE or deal.ii to manage the elements 
> and element computations (they all use PETSc for the algebraic solvers) 
> instead of doing it yourself; it is an involved process to do it all 
> yourself.
> 
>Barry
> 
> > 
> > Thanks
> > 
> > Yaxiong
> > 
> > 
> > From: Smith, Barry F. 
> > Sent: Monday, February 4, 2019 5:21 PM
> > To: Yaxiong Chen
> > Cc: Mark Adams; PETSc users list
> > Subject: Re: [petsc-users] PETSC matrix assembling super slow
> >  
> > 
> > 
> > > On Feb 4, 2019, at 3:17 PM, Yaxiong Chen via petsc-users 
> > >  wrote:
> > > 
> > > Hi Mark,
> > >  Will the parameter MatMPIAIJSetPreallocation in influence the 
> > > following part
> > >   do i=mystart,nelem,nproc
> > > call ptSystem%getElementalMAT(i, Ae, auxRHSe, idx)
> > > ne=size(idx)
> > > idx=idx-1 

Re: [petsc-users] [TimeStepping] Eventhandler

2019-02-05 Thread Huck, Moritz via petsc-users
I am using the fenics library for my PDE discretization.
I can not compile fenics with PETSc 3.10.3.
I will locate the exact error tomorrow.


Von: Jed Brown 
Gesendet: Dienstag, 5. Februar 2019 15:30:38
An: Huck, Moritz; Smith, Barry F.; Abhyankar, Shrirang G
Cc: PETSc users list
Betreff: Re: [petsc-users] [TimeStepping] Eventhandler

"Huck, Moritz via petsc-users"  writes:

> @Shri
> The system is very stiff, but the stiffness is handled well by ARKIMEX.
>
> I'm using PETSc 3.10. (I cannot use 3.10.3 at the moment due to
> compatibility with a third library),

What compatibility problem is this?  3.10.3 should be (binary and
source) backward compatible with other 3.10 releases, so if there is a
case where it is not, we need to understand it.


Regarding the event, it seems there are scenarios where something
important changes in the model and a conservative time step is
desirable, and other scenarios where the dynamics after the event are on
the same scale as before so it makes sense to continue with the same
step sizes.


Re: [petsc-users] [TimeStepping] Eventhandler

2019-02-05 Thread Abhyankar, Shrirang G via petsc-users
Ok, I'll add two things; (i) a TSEventSetPostEventTimeStep(), and (ii) a flag 
that allows the user to select either the previous time-step or the original 
time-step. For most users, the flag should suffice. For users who know how the 
dynamics would behave after the event, they can set the appropriate time-step 
with TSEventSetPostEventTimeStep().

>>>   It does seem absurd for the code to revert to a tiny step in dealing with 
>>> the
>>>Event if the time stepper is happy with a much much large time-step for
>>>solving the ODE at that time range.

Once the event is located the system equations get modified, which may change 
the system behavior. As such, the current approach was to use
a conservative smaller time-step. With the above additions, a larger time-step 
will be allowed.

Shri

>>>-Original Message-
>>>From: Smith, Barry F. 
>>>Sent: Monday, February 4, 2019 9:53 PM
>>>To: Abhyankar, Shrirang G 
>>>Cc: Huck, Moritz ; PETSc users list >>us...@mcs.anl.gov>
>>>Subject: Re: [petsc-users] [TimeStepping] Eventhandler
>>>
>>>
>>>  Ahh, thanks, for some reason I didn't find that.
>>>
>>>  Based on Moritz's code example perhaps we need to add one or more
>>>TSEvent methods. The simplest thing would be to have a routine to allow
>>>reseting the "base" timestep that is used 1) within the TSEvent search and 2)
>>>after an event is found. Something like
>>>
>>>   TSEventSetBaseTimeStep()
>>>
>>>of course Base is a terrible name but I'm not sure what else to use.
>>>
>>>   This is, of course, not a perfect solution because the user would have to
>>>know what value to use (though they could set it using values from
>>>TSGetTimeStep().)
>>>
>>>   As an alternative or additional feature there could be a flag so that 
>>> TSEvent
>>>proceeds with the current time step for 1) and 2) not the timestep_orig?
>>>
>>>   It does seem absurd for the code to revert to a tiny step in dealing with 
>>> the
>>>Event if the time stepper is happy with a much much large time-step for
>>>solving the ODE at that time range.
>>>
>>>  Barry
>>>
>>>
>>>
>>>
>>>
 On Feb 4, 2019, at 7:48 PM, Abhyankar, Shrirang G
>>> wrote:

 It's stashed in event->timestep_orig. At the beginning of the 
 TSEventHandler
>>>there is this code that sets timestep_orig.

 if (event->status == TSEVENT_NONE) {
if (ts->steps == 1) /* After first successful step */
  event->timestep_orig = ts->ptime - ts->ptime_prev;

 After the step is resynchronized after event location, the next step 
 chosen is
>>>the min. of the time-step before the event interval (event->timestep_prev)
>>>and the original step.

 if (event->status == TSEVENT_RESET_NEXTSTEP) {
/* Restore time step */
dt = PetscMin(event->timestep_orig,event->timestep_prev);
ierr = TSSetTimeStep(ts,dt);CHKERRQ(ierr);

 So, if the event->timestep_prev is smaller than event->timestep_orig
 (which could be possibly with adaptivity ON, it is allowed)

 Shri

 -Original Message-
 From: petsc-users  On Behalf Of
 Smith, Barry F. via petsc-users
 Sent: Monday, February 4, 2019 4:37 PM
 To: Huck, Moritz 
 Cc: Abhyankar, Shrirang G ;
 petsc-users@mcs.anl.gov
 Subject: Re: [petsc-users] [TimeStepping] Eventhandler

>
> Is there any reason you'd like to use the last time step? We used the 
> initial
>>>time-step (which is assumed to be small) so as (a) to minimize any numerical
>>>issues that may be caused by a large last time-step, and (b) to not miss any
>>>ensuing events that may be triggered by the discontinuity.
>
> Shri

  Shri

 How do you get access to this initial time-step later in the run?
>>>TSSetTimeStep() sets the value ts->time_step but I don't see any place where
>>>that initial timestep is stashed so could be used later.

   Thanks

Barry


> On Feb 4, 2019, at 1:56 AM, Huck, Moritz via petsc-users >>us...@mcs.anl.gov> wrote:
>
> Hi,
> yes there is one.
> My system has a set of very different time constants.
> At some events the fast one may be fully developed and only the slower
> ones are of concern after the event (depending on the postevent).
> If the last step is not an option, setting it manually might be helpful since
> I generally know which time constants will be relevant (TSSetTimeStep seems
> not to have an effect during the postevent call).
>
> Moritz
> 
> Von: Abhyankar, Shrirang G 
> Gesendet: Montag, 4. Februar 2019 05:09:24
> An: Huck, Moritz; Zhang, Hong
> Betreff: RE: [petsc-users] [TimeStepping] Eventhandler
>
> Is there any reason you'd like to use the last time step? We used the 
> initial
>>>time-step (which is assumed to be small) so as (a) to minimize any numerical
>>>issues that may be caused by a large last time-step, and (b) to not 

Re: [petsc-users] Preconditioning systems of equations with complex numbers

2019-02-05 Thread Mark Adams via petsc-users
I would stay away from eigen estimates in the solver (but give us the
spectra to look at), so set -pc_gamg_agg_nsmooths 0 and use sor.

Applications that have lived on direct solvers can add sorts of crap like
penalty terms.

sor seemed to work OK so I'd check the coarse grids in GAMG. Test with just
two levels. That way you don't have to use sor on an internal MG grid,
which can be bad and probably is here.


On Mon, Feb 4, 2019 at 10:59 PM Smith, Barry F. via petsc-users <
petsc-users@mcs.anl.gov> wrote:

>
>
> > On Feb 4, 2019, at 1:56 PM, Justin Chang via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> >
> > Thanks everyone for your suggestions/feedback. So a few things:
> >
> > 1) When I examined larger distribution networks (~150k buses) my
> eigenvalue estimates from the chebyshev method get enormous indeed. See
> below:
> >
> > [0] PCSetUp_GAMG(): level 0) N=320745, n data rows=1, n data cols=1,
> nnz/row (ave)=6, np=1
> > [0] PCGAMGFilterGraph():   98.5638% nnz after filtering, with
> threshold 0., 6.01293 nnz ave. (N=320745)
> > [0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
> > [0] PCGAMGProlongator_AGG(): New grid 44797 nodes
> > [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=2.403335e+00
> min=4.639523e-02 PC=jacobi
> > [0] PCSetUp_GAMG(): 1) N=44797, n data cols=1, nnz/row (ave)=7, 1 active
> pes
> > [0] PCGAMGFilterGraph():   99.9753% nnz after filtering, with
> threshold 0., 7.32435 nnz ave. (N=44797)
> > [0] PCGAMGProlongator_AGG(): New grid 13043 nodes
> > [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=8.173298e+00
> min=9.687506e-01 PC=jacobi
> > [0] PCSetUp_GAMG(): 2) N=13043, n data cols=1, nnz/row (ave)=22, 1
> active pes
> > [0] PCGAMGFilterGraph():   99.684% nnz after filtering, with
> threshold 0., 22.5607 nnz ave. (N=13043)
> > [0] PCGAMGProlongator_AGG(): New grid 2256 nodes
> > [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=5.696594e+00
> min=6.150856e-01 PC=jacobi
> > [0] PCSetUp_GAMG(): 3) N=2256, n data cols=1, nnz/row (ave)=79, 1 active
> pes
> > [0] PCGAMGFilterGraph():   93.859% nnz after filtering, with
> threshold 0., 79.5142 nnz ave. (N=2256)
> > [0] PCGAMGProlongator_AGG(): New grid 232 nodes
> > [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.454120e+00
> min=6.780909e-01 PC=jacobi
> > [0] PCSetUp_GAMG(): 4) N=232, n data cols=1, nnz/row (ave)=206, 1 active
> pes
> > [0] PCGAMGFilterGraph():   99.1729% nnz after filtering, with
> threshold 0., 206.379 nnz ave. (N=232)
> > [0] PCGAMGProlongator_AGG(): New grid 9 nodes
> > [0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=2.443612e+00
> min=2.153627e-01 PC=jacobi
> > [0] PCSetUp_GAMG(): 5) N=9, n data cols=1, nnz/row (ave)=9, 1 active pes
> > [0] PCSetUp_GAMG(): 6 levels, grid complexity = 1.44058
> >
> > 2) I tried all the suggestions mentioned before: setting
> -pc_gamg_agg_nsmooths 0  -pc_gamg_square_graph 10 did not improve my
> convergence. Neither did explicitly setting -mg_coarse_pc_type lu or more
> iterations of richardson/sor.
> >
> > 3a) -pc_type asm works only if I set -sub_pc_type lu. Basically I'm just
> solving LU on the whole system.
> >
> > 3b) The problem is that I can't even get it to speed up across 2 MPI
> processes for my 150k bus case (~300k dofs) - and I already checked that
> this problem could in theory be parallelized by setting the -ksp_max_it to
> something low and observing that the KSPSolve time decreases as MPI
> concurrency increases. The potential speedup is countered by the fact that
> the algorithmic convergence rate blows up when either more MPI processes
> are added or when I tune the block_size/overlap parameters.
>
>Yes, we just haven't gotten a scalable preconditioner yet for your
> matrix.
>
> >
> > Which leads me to one of two conclusions/questions:
> >
> > A) Is my 300k DOF problem still too small? Do I need to look at a problem
> with, say, 10 Million DOF or more to see that increasing the asm_block_size
> will have better performance (despite having more KSP iterations) than a
> single giant asm_block?
>
>No, if you had a scalable preconditioner you would see good speed up
> with 300k unknowns, using 10 million won't help.
>
> >
> > B) Or is there something even more sinister about my system of equations
> in complex form?
>
>   There is a technical term for matrices on which both GAMG and ASM do
> very poorly: nasty :-)
>
>   Have you tried using parallel LU (for example ./configure
> --download-superlu_dist)? Direct LU is the last refuge of the
> unpreconditionable. How large will your matrices get?
>
>Barry
>
> >
> > Thanks,
> > Justin
> >
> > On Sat, Feb 2, 2019 at 1:41 PM Matthew Knepley 
> wrote:
> > The coarse grid is getting set to SOR rather than LU.
> >
> >Matt
> >
> > On Fri, Feb 1, 2019 at 3:54 PM Justin Chang  wrote:
> > I tried these options:
> >
> > -ksp_view
> > -ksp_monitor_true_residual
> > -ksp_type gmres
> > -ksp_max_it 50
> > -pc_type gamg
> > -mg_coarse_pc_type sor
> > 

Re: [petsc-users] Store type (Eigen::Vector2d) in a petsc vec

2019-02-05 Thread Jed Brown via petsc-users
Andrew Parker  writes:

> Thanks, so you would suggest a flat vector storing u, v, w (or indeed x, y,
> z) or interleaved, and then construct Eigen types on the fly?

Interleaved if you want to use Eigen types in the same memory, or if
your code (like most applications) benefits more from memory locality
than from vectorization.  You can cast because Matrix<double, 2, 1>
(i.e., Eigen::Vector2d) has the same storage representation as

struct {
  double values[2];
};

> Can I ask, is that because Vec cannot store user-defined types (as in,
> it's not templatable)?

PETSc does not use templates, but you wouldn't want it to in this
circumstance.  Either your types have the same storage representation
and you can cast or they don't and templating would ruin what are now
contiguous operations.
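
(Editorial sketch, not part of Jed's message: a minimal, self-contained
illustration of the cast-in-place idea using Eigen::Map, assuming a real,
double-precision PETSc build so that PetscScalar is double and the Vec holds
interleaved (u,v) pairs; error checking is omitted for brevity.)

  // eigen_map_sketch.cxx -- view interleaved pairs in a PETSc Vec as Eigen::Vector2d
  #include <petscvec.h>
  #include <Eigen/Dense>

  int main(int argc, char **argv)
  {
    Vec          x;
    PetscScalar *a;
    PetscInt     npoints = 4;

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreateSeq(PETSC_COMM_SELF, 2 * npoints, &x);
    VecSet(x, 1.0);

    VecGetArray(x, &a);
    for (PetscInt i = 0; i < npoints; i++) {
      // The Map shares memory with the Vec entries; no copy is made.
      Eigen::Map<Eigen::Vector2d> v(a + 2 * i);
      v *= 2.0;                                // local linear algebra via Eigen
    }
    VecRestoreArray(x, &a);

    VecDestroy(&x);
    PetscFinalize();
    return 0;
  }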


Re: [petsc-users] Store type (Eigen::Vector2d) in a petsc vec

2019-02-05 Thread Andrew Parker via petsc-users
Thanks, so you would suggest a flat vector storing u, v, w (or indeed x, y,
z) or interleaved, and then construct Eigen types on the fly?  Can I ask, is
that because Vec cannot store user-defined types (as in, it's not
templatable)?

Thanks,
Andy

On Tue, 5 Feb 2019 at 14:22, Jed Brown  wrote:

> My suggestion is to use PETSc like usual and inside your
> residual/Jacobian evaluation, for each cell or batch of cells, create
> Eigen objects.  For size 2d or 3d, it won't matter much whether you make
> them share memory with the PETSc Vec -- the Eigen types should mostly
> exist in registers.
>
> Andrew Parker via petsc-users  writes:
>
> > Hi,
> >
> > I have quite a bit of working code that uses eigen vector2d/3d (and some
> > corresponding matrix3d types) to allow for local linAlg.  I would like to
> > store these within a petsc vector. So for example to have a vector for
> all
> > cells of velocity-vectors in 2d held for each cell within an eigen
> > vector2d, within a Petsc Vec for all cells.  What would be the best way
> to
> > achieve this within petsc? I would like to make full use of petsc
> > capabilities including parallel (ghost updating, partitioning etc),
> > likewise those of eigen's for local linAlg on small fixed sized vectors,
> so
> > how best to achieve this to allow for both eco-systems?
> > Thanks,
> > Andy
>


Re: [petsc-users] Installing PETSc

2019-02-05 Thread Jed Brown via petsc-users
Fazlul Huq via petsc-users  writes:

> Hello PETSc Developers,
>
> maybe this is a trivial question!
>
> I usually run PETSc code from Home/petsc-3.10.2 directory. Last day I tried
> to run the code from Documents/petsc directory but I can't. As far as I can
> recall, I have installed PETSc in the Home directory. Is it the reason why
> I can't run PETSc code from other directory? Shall I install PETSc in the
> root directory?

What do you mean by "run PETSc"?  How did you link your executable?

> Again, if I run command "which petsc" I don't get any echo on the terminal.

That is searching for an executable named "petsc" in your PATH.

  echo $PATH

to see the value of that variable.


Re: [petsc-users] Installing PETSc

2019-02-05 Thread Balay, Satish via petsc-users
On Tue, 5 Feb 2019, Fazlul Huq via petsc-users wrote:

> Hello PETSc Developers,
> 
> maybe this is a trivial question!
> 
> I usually run PETSc code from Home/petsc-3.10.2 directory. Last day I tried
> to run the code from Documents/petsc directory but I can't.

I don't know what you mean here - please copy/paste your attempt and the errors 
you've got.

> As far as I can
> recall, I have installed PETSc in the Home directory. Is it the reason why
> I can't run PETSc code from other directory?

nope

> Shall I install PETSc in the root directory?

Once you have a working install of the PETSc library, you should be able to use
it from anywhere in the file system.

> 
> Again, if I run command "which petsc" I don't get any echo on the terminal.

There is no 'petsc' binary in the PETSc library.

Satish


Re: [petsc-users] [TimeStepping] Eventhandler

2019-02-05 Thread Jed Brown via petsc-users
"Huck, Moritz via petsc-users"  writes:

> @Shri 
> The system is very stiff, but the stiffness is handled well by ARKIMEX.
>
> I'm using PETSc 3.10. (I cannot use 3.10.3 at the moment due to
> compatibility with a third-party library),

What compatibility problem is this?  3.10.3 should be (binary and
source) backward compatible with other 3.10 releases, so if there is a
case where it is not, we need to understand it.


Regarding the event, it seems there are scenarios where something
important changes in the model and a conservative time step is
desirable, and other scenarios where the dynamics after the event are on
the same scale as before so it makes sense to continue with the same
step sizes.


Re: [petsc-users] Store type (Eigen::Vector2d) in a petsc vec

2019-02-05 Thread Jed Brown via petsc-users
My suggestion is to use PETSc like usual and inside your
residual/Jacobian evaluation, for each cell or batch of cells, create
Eigen objects.  For size 2d or 3d, it won't matter much whether you make
them share memory with the PETSc Vec -- the Eigen types should mostly
exist in registers.
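
(Editorial sketch, not part of Jed's message: a minimal illustration of the
per-cell pattern, again assuming a real, double-precision PETSc build with an
interleaved (u,v) layout; in a real code the loop body would sit inside your
residual/Jacobian evaluation. Error checking is omitted for brevity.)

  // eigen_percell_sketch.cxx -- build small Eigen objects per cell from Vec data
  #include <petscvec.h>
  #include <Eigen/Dense>

  int main(int argc, char **argv)
  {
    Vec                X;
    const PetscScalar *x;
    PetscInt           ncells = 3;

    PetscInitialize(&argc, &argv, NULL, NULL);
    VecCreateSeq(PETSC_COMM_SELF, 2 * ncells, &X);
    VecSet(X, 1.0);

    VecGetArrayRead(X, &x);
    for (PetscInt c = 0; c < ncells; c++) {
      // Small per-cell copies; cheap enough that they mostly live in registers.
      Eigen::Vector2d u(x[2 * c], x[2 * c + 1]);
      Eigen::Matrix2d R;
      R << 0.0, -1.0,
           1.0,  0.0;
      Eigen::Vector2d w = R * u;   // local linear algebra with Eigen
      (void)w;                     // a residual routine would use w here
    }
    VecRestoreArrayRead(X, &x);

    VecDestroy(&X);
    PetscFinalize();
    return 0;
  }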

Andrew Parker via petsc-users  writes:

> Hi,
>
> I have quite a bit of working code that uses eigen vector2d/3d (and some
> corresponding matrix3d types) to allow for local linAlg.  I would like to
> store these within a petsc vector. So for example to have a vector for all
> cells of velocity-vectors in 2d held for each cell within an eigen
> vector2d, within a Petsc Vec for all cells.  What would be the best way to
> achieve this within petsc? I would like to make full use of petsc
> capabilities including parallel (ghost updating, partitioning etc),
> likewise those of eigen's for local linAlg on small fixed sized vectors, so
> how best to achieve this to allow for both eco-systems?
> Thanks,
> Andy


Re: [petsc-users] [TimeStepping] Eventhandler

2019-02-05 Thread Huck, Moritz via petsc-users
@Shri 
The system is very stiff, but the stiffness is handled well by ARKIMEX.

I'm using PETSc 3.10 (I cannot use 3.10.3 at the moment due to compatibility
with a third-party library), with no special options for TS.
Writing an MWE might take some time; if it is necessary, I will write something
at the weekend.

@Barry
A third option for 2) would be to let the adaptor start from the event time;
this might be useful if a user doesn't know a good time step after the event.

Best Regards
Moritz

From: Smith, Barry F. 
Sent: Tuesday, 5 February 2019 04:53
To: Abhyankar, Shrirang G
Cc: Huck, Moritz; PETSc users list
Subject: Re: [petsc-users] [TimeStepping] Eventhandler

  Ahh, thanks, for some reason I didn't find that.

  Based on Moritz's code example, perhaps we need to add one or more TSEvent 
methods. The simplest thing would be to have a routine that allows resetting the 
"base" timestep that is used 1) within the TSEvent search and 2) after an event 
is found. Something like

   TSEventSetBaseTimeStep()

of course Base is a terrible name but I'm not sure what else to use.

   This is, of course, not a perfect solution because the user would have to 
know what value to use (though they could set it using values from 
TSGetTimeStep().)
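
(Editorial note: the routine below does not exist in PETSc; it is only a sketch
of what Barry's suggested addition might look like, with a guessed signature
modelled on existing setters such as TSSetTimeStep().)

   /* Hypothetical API sketch only -- not a real PETSc routine. */
   PETSC_EXTERN PetscErrorCode TSEventSetBaseTimeStep(TS ts, PetscReal dt);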

   As an alternative or additional feature, there could be a flag so that 
TSEvent proceeds with the current time step for 1) and 2) rather than 
timestep_orig?

   It does seem absurd for the code to revert to a tiny step when dealing with 
the Event if the time stepper is happy with a much, much larger time-step for 
solving the ODE in that time range.

  Barry





> On Feb 4, 2019, at 7:48 PM, Abhyankar, Shrirang G 
>  wrote:
>
> It's stashed in event->timestep_orig. At the beginning of the TSEventHandler 
> there is this code that sets timestep_orig.
>
> if (event->status == TSEVENT_NONE) {
>if (ts->steps == 1) /* After first successful step */
>  event->timestep_orig = ts->ptime - ts->ptime_prev;
>
> After the step is resynchronized after event location, the next step chosen 
> is the min. of the time-step before the event interval (event->timestep_prev) 
> and the original step.
>
> if (event->status == TSEVENT_RESET_NEXTSTEP) {
>/* Restore time step */
>dt = PetscMin(event->timestep_orig,event->timestep_prev);
>ierr = TSSetTimeStep(ts,dt);CHKERRQ(ierr);
>
> So, if event->timestep_prev is smaller than event->timestep_orig (which could 
> possibly be the case with adaptivity ON), it is allowed.
>
> Shri
>
> -Original Message-
> From: petsc-users  On Behalf Of Smith, Barry 
> F. via petsc-users
> Sent: Monday, February 4, 2019 4:37 PM
> To: Huck, Moritz 
> Cc: Abhyankar, Shrirang G ; 
> petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] [TimeStepping] Eventhandler
>
>>
>> Is there any reason you'd like to use the last time step? We used the 
>> initial time-step (which is assumed to be small) so as (a) to minimize any 
>> numerical issues that may be caused by a large last time-step, and (b) to 
>> not miss any ensuing events that may be triggered by the discontinuity.
>>
>> Shri
>
>  Shri
>
> How do you get access to this initial time-step later in the run? 
> TSSetTimeStep() sets the value ts->time_step, but I don't see any place where 
> that initial timestep is stashed so it could be used later.
>
>   Thanks
>
>Barry
>
>
>> On Feb 4, 2019, at 1:56 AM, Huck, Moritz via petsc-users 
>>  wrote:
>>
>> Hi,
>> yes there is one.
>> My system has a set of very different time constants.
>> At some events the fast one may be fully developed and only the slower ones 
>> are of concern after the event (depending on the postevent).
>> If the last step is not an option, setting it manually might be helpful, 
>> since I generally know which time constants will be relevant (TSSetTimeStep 
>> seems not to have an effect during the postevent call).
>>
>> Moritz
>> 
>> From: Abhyankar, Shrirang G 
>> Sent: Monday, 4 February 2019 05:09:24
>> To: Huck, Moritz; Zhang, Hong
>> Subject: RE: [petsc-users] [TimeStepping] Eventhandler
>>
>> Is there any reason you'd like to use the last time step? We used the 
>> initial time-step (which is assumed to be small) so as (a) to minimize any 
>> numerical issues that may be caused by a large last time-step, and (b) to 
>> not miss any ensuing events that may be triggered by the discontinuity.
>>
>> Shri
>>
>> -Original Message-
>> From: petsc-users  On Behalf Of
>> Moritz Huck via petsc-users
>> Sent: Sunday, February 3, 2019 10:45 AM
>> To: Zhang, Hong 
>> Cc: petsc-users@mcs.anl.gov
>> Subject: Re: [petsc-users] [TimeStepping] Eventhandler
>>
>> Hi,
>>
>> I see, I am sorry; I misinterpreted the output of -ts_adapt_monitor.
>>
>> You are right, point 2 is not an issue.
>>
>> Is there a way to let the time stepping continue with the last time step 
>> instead of the initial time step?
>>
>> Best Regards,
>>
>> Moritz
>>
>> On 03.02.19 at 17:04, wrote:

Re: [petsc-users] Problem in Loading Matrix in Examples

2019-02-05 Thread Eda Oktay via petsc-users
Dear Matt,

Thank you for answering. I hadn't noticed that part. But I still can't
find a matrix called "small", and there are some examples testing with
"medium". Am I missing something?

Thanks,

Eda

Matthew Knepley , on Tue, 5 Feb 2019 at 13:35, wrote:

> On Tue, Feb 5, 2019 at 4:44 AM Eda Oktay via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi,
>>
>> I am new to PETSc and trying to run some examples in the MAT file.
>>
>> I cannot run the ones that need to load a matrix from a file. Here are the
>> examples:
>>
>> For ex12 in MAT file:
>> args: -f0
>> ${wPETSC_DIR}/share/petsc/datafiles/matrices/ns-real-int32-float64
>> requires: double !complex !define(PETSC_USE_64BIT_INDICES)
>>
>> When I type ./ex11 -f0
>> home/petsc-3.10.3/share/petsc/datafiles/matrices/ns-real-int32-float64, I
>> get the following error message:
>>
>> [0]PETSC ERROR: - Error Message
>> --
>> [0]PETSC ERROR: No support for this operation for this object type
>> [0]PETSC ERROR: This example is for exactly two processes
>>
>
> As it says above, the example is for 2 processes, but you are running it
> on 1.
>
>Matt
>
>
>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
>> for trouble shooting.
>> [0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
>> [0]PETSC ERROR: ./ex11 on a arch-linux2-c-debug named
>> bb91.wls.metu.edu.tr by edaoktay Tue Feb  5 12:41:49 2019
>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
>> --with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
>> --download-metis --download-parmetis --download-superlu_dist
>> --download-slepc --download-mpich
>> [0]PETSC ERROR: #1 main() line 31 in
>> /home/edaoktay/petsc-3.10.3/src/mat/examples/tutorials/ex11.c
>> [0]PETSC ERROR: PETSc Option Table entries:
>> [0]PETSC ERROR: -f0
>> home/petsc-3.10.3/share/petsc/datafiles/matrices/ns-real-int32-float64
>> [0]PETSC ERROR: End of Error Message ---send entire
>> error message to petsc-ma...@mcs.anl.gov--
>> application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0
>> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=56
>> :
>> system msg for write_line failure : Bad file descriptor
>>
>>
>> For ex10 in MAT file:
>> suffix: seqaij
>> requires: datafilespath double !complex !define(PETSC_USE_64BIT_INDICES)
>> args: -f ${DATAFILESPATH}/matrices/small -a_mat_type seqaij
>>
>> There is no matrix called "small"; however, during configuration it
>> should be tested successfully and there is an output. Also, I get pretty
>> much the same error here too.
>>
>> I believe there is a problem with the requirements, but I cannot find any
>> solution.
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> 
>


[petsc-users] Problem in Loading Matrix in Examples

2019-02-05 Thread Eda Oktay via petsc-users
Hi,

I am new to PETSc and trying to run some examples in the MAT file.

I cannot run the ones that need to load a matrix from a file. Here are the
examples:

For ex12 in MAT file:
args: -f0 ${wPETSC_DIR}/share/petsc/datafiles/matrices/ns-real-int32-float64
requires: double !complex !define(PETSC_USE_64BIT_INDICES)

When I type ./ex11 -f0
home/petsc-3.10.3/share/petsc/datafiles/matrices/ns-real-int32-float64, I
get the following error message:

[0]PETSC ERROR: - Error Message
--
[0]PETSC ERROR: No support for this operation for this object type
[0]PETSC ERROR: This example is for exactly two processes
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.10.3, Dec, 18, 2018
[0]PETSC ERROR: ./ex11 on a arch-linux2-c-debug named bb91.wls.metu.edu.tr
by edaoktay Tue Feb  5 12:41:49 2019
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --with-cxx-dialect=C++11 --download-openblas
--download-metis --download-parmetis --download-superlu_dist
--download-slepc --download-mpich
[0]PETSC ERROR: #1 main() line 31 in
/home/edaoktay/petsc-3.10.3/src/mat/examples/tutorials/ex11.c
[0]PETSC ERROR: PETSc Option Table entries:
[0]PETSC ERROR: -f0
home/petsc-3.10.3/share/petsc/datafiles/matrices/ns-real-int32-float64
[0]PETSC ERROR: End of Error Message ---send entire
error message to petsc-ma...@mcs.anl.gov--
application called MPI_Abort(MPI_COMM_WORLD, 56) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=56
:
system msg for write_line failure : Bad file descriptor


For ex10 in MAT file:
suffix: seqaij
requires: datafilespath double !complex !define(PETSC_USE_64BIT_INDICES)
args: -f ${DATAFILESPATH}/matrices/small -a_mat_type seqaij

There is no matrix called "small"; however, during configuration it should
be tested successfully and there is an output. Also, I get pretty much the
same error here too.

I believe there is a problem with the requirements, but I cannot find any solution.


Re: [petsc-users] Doubt on how to copy a Mat into another (Fortran)

2019-02-05 Thread Marco Tiberga via petsc-users
Dear Matt,

Thanks for your reply, now it’s clear to me also why matArray is not destroyed, 
contrarily to Amat.

Do you think that explicitly declaring matArray as a (Fortran) pointer (at line 
113) and then using "matArray(1) => Amat" would be equivalent? It would 
certainly be clearer from the Fortran point of view.

With kind regards,
Marco Tiberga

From: Matthew Knepley [mailto:knep...@gmail.com]
Sent: maandag 4 februari 2019 17:46
To: Marco Tiberga
Cc: petsc-users@mcs.anl.gov; Aldo Hennink - TNW; Danny Lathouwers - TNW
Subject: Re: [petsc-users] Doubt on how to copy a Mat into another (Fortran)

On Mon, Feb 4, 2019 at 11:16 AM Marco Tiberga via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Dear PETSc developers,

Since I am learning how to use MatCreateNest, I was looking at example 
ex73f90t.F90 (I am using Fortran).

At line 297, the array of matrices to be passed to MatCreateNest is initialized 
with code lines such as "matArray(1) = Amat".

That is a pointer. It does not create a replica in memory, as the calls below 
do. Fortran is not explicit about this distinction, which can lead to confusion.

  Thanks,

Matt
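
(Editorial sketch, not part of this exchange: the same handle-versus-copy
distinction can be seen with the C API, where assigning a Mat variable copies
only the handle, while MatDuplicate/MatCopy replicate the data. Error checking
is omitted for brevity; the code compiles as C or C++.)

  /* mat_copy_sketch.c -- handle assignment vs. a real duplicate */
  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat Amat, alias, copy;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* A tiny 2x2 matrix, just so there is something assembled. */
    MatCreateSeqAIJ(PETSC_COMM_SELF, 2, 2, 2, NULL, &Amat);
    MatSetValue(Amat, 0, 0, 1.0, INSERT_VALUES);
    MatSetValue(Amat, 1, 1, 2.0, INSERT_VALUES);
    MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY);

    alias = Amat;  /* copies only the handle: alias and Amat are the same object */

    /* A genuine replica needs MatDuplicate (optionally followed by MatCopy),
       mirroring the Fortran sequence quoted below. */
    MatDuplicate(Amat, MAT_SHARE_NONZERO_PATTERN, &copy);
    MatCopy(Amat, copy, SAME_NONZERO_PATTERN);

    (void)alias;        /* silence unused-variable warnings in this sketch */
    MatDestroy(&copy);  /* destroys only the duplicate */
    MatDestroy(&Amat);
    PetscFinalize();
    return 0;
  }

MatDuplicate(Amat, MAT_COPY_VALUES, &copy) would also produce an independent
copy of the values in a single call.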


I was surprised to find such a simple command; I thought a matrix copy had to 
be done by a sequence like
“ call MatDuplicate(Amat, MAT_SHARE_NONZERO_PATTERN,matArray(1),ierr); 
CHKERRA(ierr)
  call MatCopy(Amat,matArray(1), SAME_NONZERO_PATTERN,ierr); CHKERRA(ierr) “

So, I was wondering: are the two ways completely equivalent, and therefore the 
second unnecessarily more complex? Or is the latter more robust and therefore 
preferable?
Is there any difference in terms of performance?

Thanks a lot for the clarification.

Best regards,
Marco Tiberga
PhD candidate
Delft University of Technology
Faculty of Applied Sciences
Radiation Science & Technology Department
Mekelweg 15, 2629 JB Delft, The Netherlands
E-Mail: m.tibe...@tudelft.nl
Website: http://www.nera.rst.tudelft.nl/





--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/