Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 4:11 PM Appel, Thibaut 
wrote:

> "Local" as in serial?
>

Block Jacobi with ILU as the solver on each block. Each block corresponds
to an MPI process by default. So it is completely parallel; it is just not a
true ILU. In the limit of one equation per processor it is just (point)
Jacobi.
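
(A sketch of the corresponding options, assuming the default solver prefix;
illustrative rather than prescriptive:)

-ksp_type gmres \
-pc_type bjacobi \
-sub_pc_type ilu

With one unknown per process this degenerates to -pc_type jacobi.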


>
> Thibaut
>
> On 5 Nov 2018, at 20:12, Mark Adams  wrote:
>
>
>
> On Mon, Nov 5, 2018 at 12:50 PM Thibaut Appel 
> wrote:
>
>> Hi Mark,
>>
>> Yes it doesn't seem to be usable. Unfortunately we're aiming to do 3D so
> direct solvers are not a viable solution, PETSc's ILU is not parallel, and
>> we can't use HYPRE (complex arithmetic)
>>
>
> I think SuperLU has a parallel ILU but in my opinion parallel ILU is not a
> big deal. Neither is optimal and the math win (faster convergence) with
> parallel is offset by the cost of synchronization, in some form, for a true
> parallel ILU. So I think the PETSc default gmres/(local)ILU is your
> best option.
>
>
>> Thibaut
>> On 01/11/2018 20:42, Mark Adams wrote:
>>
>>
>>
>> On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. 
>> wrote:
>>
>>>
>>>
>>> > On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users <
>>> petsc-users@mcs.anl.gov> wrote:
>>> >
>>> > Well yes naturally for the residual but adding -ksp_true_residual just
>>> gives
>>> >
>>> >   0 KSP unpreconditioned resid norm 3.583290589961e+00 true resid norm
>>> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>>> >   1 KSP unpreconditioned resid norm 0.e+00 true resid norm
>>> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>>> > Linear solve converged due to CONVERGED_ATOL iterations 1
>>>
>>>Very bad stuff is happening in the preconditioner. The preconditioner
>>> must have a null space (which it shouldn't have to be a useful
>>> preconditioner).
>>>
>>
>> Yea, you are far away from an optimal preconditioner for this system.
>> Low-frequency (indefinite) Helmholtz is very, very hard. Now, something very
>> bad is going on here, but even if you fix it, standard AMG is not good for
>> these problems. I would use direct solvers or grind away at it with ILU.
>>
>>
>>>
>>> >
>>> > Mark - if that helps - a Poisson equation is used for the pressure so
>>> the Helmholtz is the same as for the velocity in the interior.
>>> >
>>> > Thibaut
>>> >
>>> >> On 31 Oct 2018, at 21:05, Mark Adams wrote:
>>> >>
>>> >> These are indefinite (bad) Helmholtz problems. Right?
>>> >>
>>> >> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley 
>>> wrote:
>>> >> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel <
>>> t.appe...@imperial.ac.uk> wrote:
>>> >> Hi Mark, Matthew,
>>> >>
>>> >> Thanks for taking the time.
>>> >>
>>> >> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres for
>>> each field, are you?
>>> >>
>>> >> 2) No, the matrix has pressure in one of the fields. Here it's a 2D
>>> problem (but we're also doing 3D), the unknowns are (p,u,v) and those are
>>> my 3 fields. We are dealing with subsonic/transonic flows so it is
>>> convection dominated indeed.
>>> >>
>>> >> 3) We are in frequency domain with respect to time, i.e.
>>> \partial{phi}/\partial{t} = -i*omega*phi.
>>> >>
>>> >> 4) Hypre is unfortunately not an option since we are in complex
>>> arithmetic.
>>> >>
>>> >>
>>> >>
>>> >>> I'm not sure about "-fieldsplit_pc_type gamg" GAMG should work on
>>> one block, and hence be a subpc. I'm not up on fieldsplit syntax.
>>> >> According to the online manual page this syntax applies the suffix to
>>> all the defined fields?
>>> >>
>>> >>
>>> >>
>>> >>> Mark is correct. I wanted you to change the smoother. He shows how
>>> to change it to Richardson (make sure you add the self-scale option), which
>>> is probably the best choice.
>>> >>>
>>> >>>   Thanks,
>>> >>>
>>> >>>  Matt
>>> >>
>>> >> You did tell me to set it to GMRES if I'm not mistaken, that's why I
>>> tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email).
>>> Also, it wasn't clear whether these should be applied to each block or the
>>> whole system, as the online manual pages + .pdf manual barely mention
>>> smoothers and how to manipulate MG objects with KSP/PC, this especially
>>> with PCFIELDSPLIT where examples are scarce.
>>> >>
>>> >> From what I can gather from your suggestions I tried (lines with X
>>> are repeated for X={0,1,2})
>>> >>
>>> >> This looks good. How can an identically zero vector produce a 0
>>> residual? You should always monitor with
>>> >>
>>> >>   -ksp_monitor_true_residual.
>>> >>
>>> >>Thanks,
>>> >>
>>> >> Matt
>>> >> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>>> >> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>>> >> -pc_type fieldsplit \
>>> >> -pc_fieldsplit_type multiplicative \
>>> >> -pc_fieldsplit_block_size 3 \
>>> >> -pc_fieldsplit_0_fields 0 \
>>> >> -pc_fieldsplit_1_fields 1 \
>>> >> -pc_fieldsplit_2_fields 2 \
>>> >> -fieldsplit_X_pc_type gamg \
>>> >> -fieldsplit_X_ksp_type gmres \
>>> >> -fieldsplit_X_ksp_rtol 1e-10 \
>>> >> -fieldsplit_X_mg_levels

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Appel, Thibaut via petsc-users
"Local" as in serial?

Thibaut

On 5 Nov 2018, at 20:12, Mark Adams <mfad...@lbl.gov> wrote:



On Mon, Nov 5, 2018 at 12:50 PM Thibaut Appel <t.appe...@imperial.ac.uk> wrote:

Hi Mark,

Yes it doesn't seem to be usable. Unfortunately we're aiming to do 3D so direct 
solvers are not a viable solution, PETSc's ILU is not parallel, and we can't 
use HYPRE (complex arithmetic)

I think SuperLU has a parallel ILU but in my opinion parallel ILU is not a big 
deal. Neither is optimal and the math win (faster convergence) with parallel is 
offset by the cost of synchronization, in some form, for a true parallel ILU. 
So I think the PETSc default gmres/(local)ILU is your best option.


Thibaut

On 01/11/2018 20:42, Mark Adams wrote:


On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. <bsm...@mcs.anl.gov> wrote:


> On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users <petsc-users@mcs.anl.gov> wrote:
>
> Well yes naturally for the residual but adding -ksp_true_residual just gives
>
>   0 KSP unpreconditioned resid norm 3.583290589961e+00 true resid norm 
> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>   1 KSP unpreconditioned resid norm 0.e+00 true resid norm 
> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
> Linear solve converged due to CONVERGED_ATOL iterations 1

   Very bad stuff is happening in the preconditioner. The preconditioner must 
have a null space (which it shouldn't have to be a useful preconditioner).

Yea, you are far away from an optimal preconditioner for this system. Low-frequency 
(indefinite) Helmholtz is very, very hard. Now, something very bad is 
going on here, but even if you fix it, standard AMG is not good for these 
problems. I would use direct solvers or grind away at it with ILU.


>
> Mark - if that helps - a Poisson equation is used for the pressure so the 
> Helmholtz is the same as for the velocity in the interior.
>
> Thibaut
>
>> On 31 Oct 2018, at 21:05, Mark Adams <mfad...@lbl.gov> wrote:
>>
>> These are indefinite (bad) Helmholtz problems. Right?
>>
>> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley <knep...@gmail.com> wrote:
>> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel <t.appe...@imperial.ac.uk> wrote:
>> Hi Mark, Matthew,
>>
>> Thanks for taking the time.
>>
>> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres for each 
>> field, are you?
>>
>> 2) No, the matrix has pressure in one of the fields. Here it's a 2D problem 
>> (but we're also doing 3D), the unknowns are (p,u,v) and those are my 3 
>> fields. We are dealing with subsonic/transonic flows so it is convection 
>> dominated indeed.
>>
>> 3) We are in frequency domain with respect to time, i.e. 
>> \partial{phi}/\partial{t} = -i*omega*phi.
>>
>> 4) Hypre is unfortunately not an option since we are in complex arithmetic.
>>
>>
>>
>>> I'm not sure about "-fieldsplit_pc_type gamg" GAMG should work on one 
>>> block, and hence be a subpc. I'm not up on fieldsplit syntax.
>> According to the online manual page this syntax applies the suffix to all 
>> the defined fields?
>>
>>
>>
>>> Mark is correct. I wanted you to change the smoother. He shows how to 
>>> change it to Richardson (make sure you add the self-scale option), which is 
>>> probably the best choice.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>
>> You did tell me to set it to GMRES if I'm not mistaken, that's why I tried 
>> "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email). Also, it 
>> wasn't clear whether these should be applied to each block or the whole 
>> system, as the online manual pages + .pdf manual barely mention smoothers 
>> and how to manipulate MG objects with KSP/PC, this especially with 
>> PCFIELDSPLIT where examples are scarce.
>>
>> From what I can gather from your suggestions I tried (lines with X are 
>> repeated for X={0,1,2})
>>
>> This looks good. How can an identically zero vector produce a 0 residual? 
>> You should always monitor with
>>
>>   -ksp_monitor_true_residual.
>>
>>Thanks,
>>
>> Matt
>> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>> -pc_type fieldsplit \
>> -pc_fieldsplit_type multiplicative \
>> -pc_fieldsplit_block_size 3 \
>> -pc_fieldsplit_0_fields 0 \
>> -pc_fieldsplit_1_fields 1 \
>> -pc_fieldsplit_2_fields 2 \
>> -fieldsplit_X_pc_type gamg \
>> -fieldsplit_X_ksp_type gmres \
>> -fieldsplit_X_ksp_rtol 1e-10 \
>> -fieldsplit_X_mg_levels_ksp_type richardson \
>> -fieldsplit_X_mg_levels_pc_type sor \
>> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
>> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
>> -log_view
>>
>> which yields
>>
>> KSP Object: 1 MPI processes
>>   type: fgmres
>> restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization 
>> with no iterative refinement
>> happy breakdown tolerance 1e-30
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=1000

Re: [petsc-users] TAOIPM for AC optimal power flow

2018-11-05 Thread Matthew Knepley via petsc-users
On Mon, Nov 5, 2018 at 3:23 PM Justin Chang via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi everyone,
>
> I am working on a generic AC optimal power flow solver, and I hope to use
> DMNetwork's data structure and TAO's optimization solvers for this purpose.
> Last time I inquired about IPM (maybe 3-4 years ago) I was told that it's
> not suitable for large-scale networks, which is the direction we're hoping
> to go here at NREL. I see right now from the documentation website:
>
> "This algorithm is more of a place-holder for future constrained
> optimization algorithms and should not yet be used for large problems or
> production code."
>
> Are there any plans at the moment to have a high-performance implementation
> of IPM in TAO? It would be nice to have a purely PETSc code instead of
> interfacing to something like IPOPT for the optimization.
>

Preliminary question. Are there reasons we expect IPM to be better than
active sets?

  Thanks,

Matt


> Thanks,
> Justin
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 12:50 PM Thibaut Appel 
wrote:

> Hi Mark,
>
> Yes it doesn't seem to be usable. Unfortunately we're aiming to do 3D so
> direct solvers are not a viable solution, PETSc's ILU is not parallel, and
> we can't use HYPRE (complex arithmetic)
>

I think SuperLU has a parallel ILU but in my opinion parallel ILU is not a
big deal. Neither is optimal and the math win (faster convergence) with
parallel is offset by the cost of synchronization, in some form, for a true
parallel ILU. So I think the PETSc default gmres/(local)ILU is your
best option.


> Thibaut
> On 01/11/2018 20:42, Mark Adams wrote:
>
>
>
> On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. 
> wrote:
>
>>
>>
>> > On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>> >
>> > Well yes naturally for the residual but adding -ksp_true_residual just
>> gives
>> >
>> >   0 KSP unpreconditioned resid norm 3.583290589961e+00 true resid norm
>> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>> >   1 KSP unpreconditioned resid norm 0.e+00 true resid norm
>> 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>> > Linear solve converged due to CONVERGED_ATOL iterations 1
>>
>>Very bad stuff is happening in the preconditioner. The preconditioner
>> must have a null space (which it shouldn't have to be a useful
>> preconditioner).
>>
>
> Yea, you are far away from an optimal preconditioner for this system.
> Low-frequency (indefinite) Helmholtz is very, very hard. Now, something very
> bad is going on here, but even if you fix it, standard AMG is not good for
> these problems. I would use direct solvers or grind away at it with ILU.
>
>
>>
>> >
>> > Mark - if that helps - a Poisson equation is used for the pressure so
>> the Helmholtz is the same as for the velocity in the interior.
>> >
>> > Thibaut
>> >
>> >> On 31 Oct 2018, at 21:05, Mark Adams wrote:
>> >>
>> >> These are indefinite (bad) Helmholtz problems. Right?
>> >>
>> >> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley 
>> wrote:
>> >> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel <
>> t.appe...@imperial.ac.uk> wrote:
>> >> Hi Mark, Matthew,
>> >>
>> >> Thanks for taking the time.
>> >>
>> >> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres for each
>> field, are you?
>> >>
>> >> 2) No, the matrix has pressure in one of the fields. Here it's a 2D
>> problem (but we're also doing 3D), the unknowns are (p,u,v) and those are
>> my 3 fields. We are dealing with subsonic/transonic flows so it is
>> convection dominated indeed.
>> >>
>> >> 3) We are in frequency domain with respect to time, i.e.
>> \partial{phi}/\partial{t} = -i*omega*phi.
>> >>
>> >> 4) Hypre is unfortunately not an option since we are in complex
>> arithmetic.
>> >>
>> >>
>> >>
>> >>> I'm not sure about "-fieldsplit_pc_type gamg" GAMG should work on one
>> block, and hence be a subpc. I'm not up on fieldsplit syntax.
>> >> According to the online manual page this syntax applies the suffix to
>> all the defined fields?
>> >>
>> >>
>> >>
>> >>> Mark is correct. I wanted you to change the smoother. He shows how to
>> change it to Richardson (make sure you add the self-scale option), which is
>> probably the best choice.
>> >>>
>> >>>   Thanks,
>> >>>
>> >>>  Matt
>> >>
>> >> You did tell me to set it to GMRES if I'm not mistaken, that's why I
>> tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in the email).
>> Also, it wasn't clear whether these should be applied to each block or the
>> whole system, as the online manual pages + .pdf manual barely mention
>> smoothers and how to manipulate MG objects with KSP/PC, this especially
>> with PCFIELDSPLIT where examples are scarce.
>> >>
>> >> From what I can gather from your suggestions I tried (lines with X are
>> repeated for X={0,1,2})
>> >>
>> >> This looks good. How can an identically zero vector produce a 0
>> residual? You should always monitor with
>> >>
>> >>   -ksp_monitor_true_residual.
>> >>
>> >>Thanks,
>> >>
>> >> Matt
>> >> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>> >> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>> >> -pc_type fieldsplit \
>> >> -pc_fieldsplit_type multiplicative \
>> >> -pc_fieldsplit_block_size 3 \
>> >> -pc_fieldsplit_0_fields 0 \
>> >> -pc_fieldsplit_1_fields 1 \
>> >> -pc_fieldsplit_2_fields 2 \
>> >> -fieldsplit_X_pc_type gamg \
>> >> -fieldsplit_X_ksp_type gmres \
>> >> -fieldsplit_X_ksp_rtol 1e-10 \
>> >> -fieldsplit_X_mg_levels_ksp_type richardson \
>> >> -fieldsplit_X_mg_levels_pc_type sor \
>> >> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
>> >> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
>> >> -log_view
>> >>
>> >> which yields
>> >>
>> >> KSP Object: 1 MPI processes
>> >>   type: fgmres
>> >> restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>> >> happy breakdown tolerance 1e-30
>> >>   maximum iterations=1, initial guess is zero
>> >>   tolerances:  relative=1e

Re: [petsc-users] DIVERGED_NANORING with PC GAMG

2018-11-05 Thread Thibaut Appel via petsc-users

Hi Mark,

Yes it doesn't seem to be usable. Unfortunately we're aiming to do 3D so 
direct solvers are not a viable solution, PETSc's ILU is not parallel, 
and we can't use HYPRE (complex arithmetic)


Thibaut

On 01/11/2018 20:42, Mark Adams wrote:



On Wed, Oct 31, 2018 at 8:11 PM Smith, Barry F. wrote:




> On Oct 31, 2018, at 5:39 PM, Appel, Thibaut via petsc-users <petsc-users@mcs.anl.gov> wrote:
>
> Well yes naturally for the residual but adding
-ksp_true_residual just gives
>
>   0 KSP unpreconditioned resid norm 3.583290589961e+00 true
resid norm 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
>   1 KSP unpreconditioned resid norm 0.e+00 true
resid norm 3.583290589961e+00 ||r(i)||/||b|| 1.e+00
> Linear solve converged due to CONVERGED_ATOL iterations 1

   Very bad stuff is happening in the preconditioner. The
preconditioner must have a null space (which it shouldn't have to
be a useful preconditioner).


Yea, you are far away from an optimal preconditioner for this system. 
Low-frequency (indefinite) Helmholtz is very, very hard. Now, 
something very bad is going on here, but even if you fix it, standard 
AMG is not good for these problems. I would use direct solvers or 
grind away at it with ILU.



>
> Mark - if that helps - a Poisson equation is used for the
pressure so the Helmholtz is the same as for the velocity in the
interior.
>
> Thibaut
>
>> On 31 Oct 2018, at 21:05, Mark Adams <mfad...@lbl.gov> wrote:
>>
>> These are indefinite (bad) Helmholtz problems. Right?
>>
>> On Wed, Oct 31, 2018 at 2:38 PM Matthew Knepley <knep...@gmail.com> wrote:
>> On Wed, Oct 31, 2018 at 2:13 PM Thibaut Appel <t.appe...@imperial.ac.uk> wrote:
>> Hi Mark, Matthew,
>>
>> Thanks for taking the time.
>>
>> 1) You're not suggesting having -fieldsplit_X_ksp_type fgmres
for each field, are you?
>>
>> 2) No, the matrix has pressure in one of the fields. Here it's
a 2D problem (but we're also doing 3D), the unknowns are (p,u,v)
and those are my 3 fields. We are dealing with subsonic/transonic
flows so it is convection dominated indeed.
>>
>> 3) We are in frequency domain with respect to time, i.e.
\partial{phi}/\partial{t} = -i*omega*phi.
>>
>> 4) Hypre is unfortunately not an option since we are in complex
arithmetic.
>>
>>
>>
>>> I'm not sure about "-fieldsplit_pc_type gamg" GAMG should work
on one block, and hence be a subpc. I'm not up on fieldsplit syntax.
>> According to the online manual page this syntax applies the
suffix to all the defined fields?
>>
>>
>>
>>> Mark is correct. I wanted you to change the smoother. He shows
how to change it to Richardson (make sure you add the self-scale
option), which is probably the best choice.
>>>
>>>   Thanks,
>>>
>>>      Matt
>>
>> You did tell me to set it to GMRES if I'm not mistaken, that's
why I tried "-fieldsplit_mg_levels_ksp_type gmres" (mentioned in
the email). Also, it wasn't clear whether these should be applied
to each block or the whole system, as the online manual pages +
.pdf manual barely mention smoothers and how to manipulate MG
objects with KSP/PC, this especially with PCFIELDSPLIT where
examples are scarce.
>>
>> From what I can gather from your suggestions I tried (lines
with X are repeated for X={0,1,2})
>>
>> This looks good. How can an identically zero vector produce a 0
residual? You should always monitor with
>>
>>   -ksp_monitor_true_residual.
>>
>>    Thanks,
>>
>>     Matt
>> -ksp_view_pre -ksp_monitor -ksp_converged_reason \
>> -ksp_type fgmres -ksp_rtol 1.0e-8 \
>> -pc_type fieldsplit \
>> -pc_fieldsplit_type multiplicative \
>> -pc_fieldsplit_block_size 3 \
>> -pc_fieldsplit_0_fields 0 \
>> -pc_fieldsplit_1_fields 1 \
>> -pc_fieldsplit_2_fields 2 \
>> -fieldsplit_X_pc_type gamg \
>> -fieldsplit_X_ksp_type gmres \
>> -fieldsplit_X_ksp_rtol 1e-10 \
>> -fieldsplit_X_mg_levels_ksp_type richardson \
>> -fieldsplit_X_mg_levels_pc_type sor \
>> -fieldsplit_X_pc_gamg_agg_nsmooths 0 \
>> -fieldsplit_X_mg_levels_ksp_richardson_self_scale \
>> -log_view
>>
>> which yields
>>
>> KSP Object: 1 MPI processes
>>   type: fgmres
>>     restart=30, using Classical (unmodified) Gram-Schmidt
Orthogonalization with no iterative refinement
>>     happy breakdown tolerance 1e-30
>>   maximum iterations=1, initial guess is zero
>>   tolerances:  relative=1e-08, absolute=1e-50, divergence=1.
>>   left preconditioning
>>   using DEFAULT norm type for convergence test
>> PC Object: 1 MPI processes
>>   type: fieldsplit
>> 

Re: [petsc-users] PetscViewerFileSetName() wrong error code in Fortran

2018-11-05 Thread Smith, Barry F. via petsc-users


   Fixed in the branch barry/fix-fortran-petscfileviewersetname; the fix will soon be 
merged to maint and appear in the next patch release of PETSc.

   Thanks for the report

Barry

> On Nov 5, 2018, at 7:29 AM, Tim Steinhoff via petsc-users 
>  wrote:
> 
> Dear PETSc Team,
> 
> I am having the issue that petscviewerfilesetname_() and
> petscviewerfilegetname_() in
> src\sys\classes\viewer\impls\ascii\ftn-custom\zfilevf.c (Fortran) do
> not return proper error codes, as those are not checked immediately
> after the actual call and will be overwritten afterwards within the
> macro FREECHAR() or by other subsequent calls respectively.
> 
> It would be very helpful if you could fix that.
> 
> Thanks and kind regards,
> 
> Volker



Re: [petsc-users] Vec, Mat and binaryfiles.

2018-11-05 Thread Jed Brown via petsc-users
Sal Am via petsc-users  writes:

> Hi,
>
> I am trying to solve a complex system Ax=b. The vector b and "matrix" A are
> both binary and NOT created by PETSc, so I keep getting error messages that
> they are not in the correct format when I read the files with
> PetscViewerBinaryOpen; after some digging it seems that one cannot just read
> a binary file that was created by other software.

Yes, of course the formats would have to match.  I would recommend
writing the files in an existing format such as PETSc's binary format.
While the method you describe can be made to work, it will be more work
to make it parallel.
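
(A minimal sketch of the reading side, assuming the files have first been
rewritten in PETSc's binary format, e.g. with MatView()/VecView() to a binary
viewer or with the PetscBinaryIO helpers shipped with PETSc. File names are
illustrative, and the snippet assumes PetscInitialize() has already been called.)

  Mat         A;
  Vec         b;
  PetscViewer viewer;

  /* load a matrix stored in PETSc binary format */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"A.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatLoad(A,viewer);CHKERRQ(ierr);            /* sizes and entries come from the file */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  /* load a vector stored in PETSc binary format */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"b.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&b);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);
  ierr = VecLoad(b,viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

MatLoad()/VecLoad() distribute the data across processes automatically, which
is the main advantage over a hand-rolled reader.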

> How would I go on to solve this problem?
>
> More info and trials:
>
> "matrix" A consists of two files, one that contains row column index
> numbers and one that contains the non-zero values. So what I would have to
> do is multiply the last term in a+b with PETSC_i to get a real + imaginary
> vector A.
>
> vector b is in binary, so what I have done so far (not sure if it works) is:
>
> std::ifstream input("Vector_b.bin", std::ios::binary );
> while (input.read(reinterpret_cast<char*>(&v), sizeof(float)))
>   ierr = VecSetValues(u,1,&iglobal,&v,INSERT_VALUES);CHKERRQ(ierr);
>
> where v is a PetscScalar.
>
> Once I am able to read both matrices I think I can figure out the solvers
> to solve the system.
>
> All the best,
> S


Re: [petsc-users] Force SNES diverge

2018-11-05 Thread Matthew Knepley via petsc-users
On Mon, Nov 5, 2018 at 8:41 AM Karol Lewandowski via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Hi,
>
> I am solving a highly nonlinear problem using SNES solver. Under certain
> conditions during the iterations I already know that the step will diverge
> (in the next few iterations). Is there a clever way to tell SNES to diverge
> immediately so I can start a new step with reduced step size?
>

Set your own convergence test I think:
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetConvergenceTest.html
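
(A minimal sketch of such a test, assuming a PETSc 3.10-era API; the AppCtx
struct, the give_up flag, and how that flag gets set are illustrative
assumptions, not part of the original question.)

  typedef struct {
    PetscBool give_up;   /* set by the application when it knows the step is hopeless */
  } AppCtx;

  static PetscErrorCode MyConvergenceTest(SNES snes,PetscInt it,PetscReal xnorm,PetscReal snorm,PetscReal fnorm,SNESConvergedReason *reason,void *ctx)
  {
    AppCtx         *user = (AppCtx*)ctx;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    /* keep the usual rtol/atol/stol behavior of the default test */
    ierr = SNESConvergedDefault(snes,it,xnorm,snorm,fnorm,reason,NULL);CHKERRQ(ierr);
    /* then force an immediate "diverged" return when the application says so */
    if (user->give_up) *reason = SNES_DIVERGED_FUNCTION_DOMAIN;
    PetscFunctionReturn(0);
  }

  /* registration, e.g. right after SNESSetFromOptions(); snes and user live in the setup code */
  ierr = SNESSetConvergenceTest(snes,MyConvergenceTest,&user,NULL);CHKERRQ(ierr);

SNESSolve() then returns with a negative SNESConvergedReason and you can
restart with a reduced step size.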

   Matt


> Thank you,
>
> Karol



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Jed Brown via petsc-users
Yingjie Wu via petsc-users  writes:

> Thank you very much for your reply.
> My equation is a neutron diffusion equation with eigenvalues, which is why
> I use DMConposite because there is a single non-physical field variable,
> eigenvalue. I am not very familiar with FieldSplit. I can understand it
> first.
> It seems not the problem of DmConposite because the reason of error
> reporting is that the diagonal elements of the precondition matrix have
> zero terms.

Which entries are zero?  You can view the matrix to find out if it has
the structure you intend.  Make sure the index sets is[] are what you
intend.

> I need to divide the preconditioner matrix into sub-matrices, because I
> have neutrons of multiple energy groups (each is a physical field). I want
> to assign only the diagonal block matrices, preferably using
> MatSetValuesStencil() (which can simplify the assignment of the five diagonal
> matrices).

Splitting all of those apart with DMComposite is clumsy and may deliver
poor memory performance.  Consider DMDASetBlockFills if you want to
allocate a matrix that has sparsity within blocks.
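
(A sketch of what that looks like for one of the DMDAs, assuming dof = 3,
i.e. three coupled fields per grid point; the particular fill pattern and the
variable names below are purely illustrative.)

  /* dfill: which field-field couplings to allocate within a grid point,
     ofill: which couplings to allocate to neighboring grid points in the stencil;
     both are dof x dof arrays of 0/1 flags, here for dof = 3 */
  PetscInt dfill[9] = {1,1,0,
                       1,1,1,
                       0,1,1};
  PetscInt ofill[9] = {1,0,0,
                       0,1,0,
                       0,0,1};
  ierr = DMDASetBlockFills(da,dfill,ofill);CHKERRQ(ierr);
  ierr = DMCreateMatrix(da,&P);CHKERRQ(ierr);   /* P is preallocated with only those couplings */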

>
> Thanks,
> Yingjie
>
> Mark Adams wrote on Mon, Nov 5, 2018 at 11:22 PM:
>
>> DMComposite is not very mature, the last time I checked, and I don't know of
>> anyone having worked on it recently, and it is probably not what you want
>> anyway. FieldSplit is most likely what you want.
>>
>> What are your equations and discretization? eg, Stokes with cell centered
>> pressure? There are probably examples that are close to what you want and
>> it would not be hard to move your code over.
>>
>> Mark
>>
>> On Mon, Nov 5, 2018 at 10:00 AM Yingjie Wu via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Dear Petsc developer:
>>> Hi,
>>> I have recently studied the preconditioner of the program, and some
>>> problems have arisen. Please help me to solve them.
>>> At present, I have written a program to solve the system of non-linear
>>> equations. The Matrix Free method has been used to calculate results. But I
>>> want to add a preprocessing matrix to it.
>>> I used the DMComposite object, which stores two sub-DM objects and a
>>> single value (two physical field variables and one variable). I want to use
>>> MatGetLocalSubMatrix to assign each physical field sub precondition matrix.
>>> At the same time, because my DM object is two-dimensional, I use
>>> MatSetValuesStencil() to fill the sub matrix.
>>> At present, I just want to fill in a unit matrix (for global vectors) as
>>> the precondition matrix of Matrix Free Method (just to test whether it can
>>> be used, the unit matrix has no preprocessing effect). But the procedure
>>> was wrong.
>>>
>>>yjwu@yjwu-XPS-8910:~/petsc-3.10.1/src/snes/examples/tutorials$
>>> mpiexec -n 1 ./ex216 -f wu-readtwogroups -snes_mf_operator -snes_view
>>> -snes_converged_reason -snes_monitor -ksp_converged_reason
>>> -ksp_monitor_true_residual
>>>
>>>   0 SNES Function norm 8.235090086536e-02
>>> iter = 0, SNES Function norm 0.0823509
>>> iter = 0, Keff === 1.
>>>   Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations
>>> 0
>>>  PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
>>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>>> SNES Object: 1 MPI processes
>>>   type: newtonls
>>>   maximum iterations=50, maximum function evaluations=1
>>>   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>>>   total number of linear solver iterations=0
>>>   total number of function evaluations=1
>>>   norm schedule ALWAYS
>>>   SNESLineSearch Object: 1 MPI processes
>>> type: bt
>>>   interpolation: cubic
>>>   alpha=1.00e-04
>>> maxstep=1.00e+08, minlambda=1.00e-12
>>> tolerances: relative=1.00e-08, absolute=1.00e-15,
>>> lambda=1.00e-08
>>> maximum iterations=40
>>>   KSP Object: 1 MPI processes
>>> type: gmres
>>>   restart=30, using Classical (unmodified) Gram-Schmidt
>>> Orthogonalization with no iterative refinement
>>>   happy breakdown tolerance 1e-30
>>> maximum iterations=1, initial guess is zero
>>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>>> left preconditioning
>>> using PRECONDITIONED norm type for convergence test
>>>   PC Object: 1 MPI processes
>>> type: ilu
>>>   out-of-place factorization
>>>   0 levels of fill
>>>   tolerance for zero pivot 2.22045e-14
>>>   matrix ordering: natural
>>>   factor fill ratio given 1., needed 1.
>>> Factored matrix follows:
>>>   Mat Object: 1 MPI processes
>>> type: seqaij
>>> rows=961, cols=961
>>> package used to perform factorization: petsc
>>> total: nonzeros=4625, allocated nonzeros=4625
>>> total number of mallocs used during MatSetValues calls =0
>>>   not using I-node routines
>>> linear system matrix followed by preconditioner matrix:
>>> Mat 

Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Mark Adams via petsc-users
On Mon, Nov 5, 2018 at 10:37 AM Yingjie Wu  wrote:

> Thank you very much for your reply.
> My equation is a neutron diffusion equation with eigenvalues, which is why
> I use DMComposite: there is a single non-physical field variable, the
> eigenvalue.
>

OK, DMComposite might be your best choice.


> I am not very familiar with FieldSplit; I will try to understand it first.
> It does not seem to be a problem with DMComposite, because the reported
> error is that the preconditioner matrix has zero entries on its diagonal.
> I need to divide the preconditioner matrix into sub-matrices, because I
> have neutrons of multiple energy groups (each is a physical field). I want
> to assign only the diagonal block matrices, preferably using
> MatSetValuesStencil() (which can simplify the assignment of the five diagonal
> matrices).
>

So you are trying to create the identity matrix with MatSetValuesStencil
but you are getting zeros on the diagonal.

I would debug this by looking at the matrix in matlab (or Octave). You can
use code like this:

  if (PETSC_TRUE) {
PetscViewer viewer;
ierr = PetscViewerASCIIOpen(comm, "Amat.m", &viewer);CHKERRQ(ierr);
ierr = PetscViewerPushFormat(viewer,
PETSC_VIEWER_ASCII_MATLAB);CHKERRQ(ierr);
ierr = MatView(Amat,viewer);CHKERRQ(ierr);
ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer);
  }

Test on a small, serial problem and see where your data is actually going,
if not on the diagonal.
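
(If it helps, a sketch of putting 1.0 on the diagonal of a matrix B created
with DMCreateMatrix() from a 2D DMDA with dof = 1; variable names are
illustrative. When going through MatGetLocalSubMatrix() instead, the assembly
calls belong on the full preconditioner matrix after the submatrices are
restored.)

  PetscInt    i,j,xs,ys,xm,ym;
  PetscScalar one = 1.0;
  MatStencil  row;

  ierr = DMDAGetCorners(da,&xs,&ys,NULL,&xm,&ym,NULL);CHKERRQ(ierr);
  for (j=ys; j<ys+ym; j++) {
    for (i=xs; i<xs+xm; i++) {
      row.i = i; row.j = j; row.c = 0;   /* grid point (i,j), component 0 */
      ierr = MatSetValuesStencil(B,1,&row,1,&row,&one,INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);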


> Thanks,
> Yingjie
>
> Mark Adams wrote on Mon, Nov 5, 2018 at 11:22 PM:
>
>> DMComposite is not very mature, the last time I checked, and I don't know of
>> anyone having worked on it recently, and it is probably not what you want
>> anyway. FieldSplit is most likely what you want.
>>
>> What are your equations and discretization? eg, Stokes with cell centered
>> pressure? There are probably examples that are close to what you want and
>> it would not be hard to move your code over.
>>
>> Mark
>>
>> On Mon, Nov 5, 2018 at 10:00 AM Yingjie Wu via petsc-users <
>> petsc-users@mcs.anl.gov> wrote:
>>
>>> Dear Petsc developer:
>>> Hi,
>>> I have recently studied the preconditioner of the program, and some
>>> problems have arisen. Please help me to solve them.
>>> At present, I have written a program to solve the system of non-linear
>>> equations. The Matrix Free method has been used to calculate results. But I
>>> want to add a preprocessing matrix to it.
>>> I used the DMComposite object, which stores two sub-DM objects and a
>>> single value (two physical field variables and one variable). I want to use
>>> MatGetLocalSubMatrix to assign each physical field sub precondition matrix.
>>> At the same time, because my DM object is two-dimensional, I use
>>> MatSetValuesStencil() to fill the sub matrix.
>>> At present, I just want to fill in a unit matrix (for global vectors) as
>>> the precondition matrix of Matrix Free Method (just to test whether it can
>>> be used, the unit matrix has no preprocessing effect). But the procedure
>>> was wrong.
>>>
>>>yjwu@yjwu-XPS-8910:~/petsc-3.10.1/src/snes/examples/tutorials$
>>> mpiexec -n 1 ./ex216 -f wu-readtwogroups -snes_mf_operator -snes_view
>>> -snes_converged_reason -snes_monitor -ksp_converged_reason
>>> -ksp_monitor_true_residual
>>>
>>>   0 SNES Function norm 8.235090086536e-02
>>> iter = 0, SNES Function norm 0.0823509
>>> iter = 0, Keff === 1.
>>>   Linear solve did not converge due to DIVERGED_PCSETUP_FAILED
>>> iterations 0
>>>  PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
>>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations
>>> 0
>>> SNES Object: 1 MPI processes
>>>   type: newtonls
>>>   maximum iterations=50, maximum function evaluations=1
>>>   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>>>   total number of linear solver iterations=0
>>>   total number of function evaluations=1
>>>   norm schedule ALWAYS
>>>   SNESLineSearch Object: 1 MPI processes
>>> type: bt
>>>   interpolation: cubic
>>>   alpha=1.00e-04
>>> maxstep=1.00e+08, minlambda=1.00e-12
>>> tolerances: relative=1.00e-08, absolute=1.00e-15,
>>> lambda=1.00e-08
>>> maximum iterations=40
>>>   KSP Object: 1 MPI processes
>>> type: gmres
>>>   restart=30, using Classical (unmodified) Gram-Schmidt
>>> Orthogonalization with no iterative refinement
>>>   happy breakdown tolerance 1e-30
>>> maximum iterations=1, initial guess is zero
>>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>>> left preconditioning
>>> using PRECONDITIONED norm type for convergence test
>>>   PC Object: 1 MPI processes
>>> type: ilu
>>>   out-of-place factorization
>>>   0 levels of fill
>>>   tolerance for zero pivot 2.22045e-14
>>>   matrix ordering: natural
>>>   factor fill ratio given 1., needed 1.
>>> Factored matrix follows:
>>>   Mat Object: 1 MPI process

Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Yingjie Wu via petsc-users
Thank you very much for your reply.
My equation is a neutron diffusion equation with eigenvalues, which is why
I use DMComposite: there is a single non-physical field variable, the
eigenvalue. I am not very familiar with FieldSplit; I will try to understand
it first.
It does not seem to be a problem with DMComposite, because the reported
error is that the preconditioner matrix has zero entries on its diagonal.
I need to divide the preconditioner matrix into sub-matrices, because I
have neutrons of multiple energy groups (each is a physical field). I want
to assign only the diagonal block matrices, preferably using
MatSetValuesStencil() (which can simplify the assignment of the five diagonal
matrices).

Thanks,
Yingjie

Mark Adams wrote on Mon, Nov 5, 2018 at 11:22 PM:

> DMComposite is not very mature, the last time I checked, and I don't know of
> anyone having worked on it recently, and it is probably not what you want
> anyway. FieldSplit is most likely what you want.
>
> What are your equations and discretization? eg, Stokes with cell centered
> pressure? There are probably examples that are close to what you want and
> it would not be hard to move your code over.
>
> Mark
>
> On Mon, Nov 5, 2018 at 10:00 AM Yingjie Wu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dear Petsc developer:
>> Hi,
>> I have recently studied the preconditioner of the program, and some
>> problems have arisen. Please help me to solve them.
>> At present, I have written a program to solve the system of non-linear
>> equations. The Matrix Free method has been used to calculate results. But I
>> want to add a preprocessing matrix to it.
>> I used the DMComposite object, which stores two sub-DM objects and a
>> single value (two physical field variables and one variable). I want to use
>> MatGetLocalSubMatrix to assign each physical field sub precondition matrix.
>> At the same time, because my DM object is two-dimensional, I use
>> MatSetValuesStencil() to fill the sub matrix.
>> At present, I just want to fill in a unit matrix (for global vectors) as
>> the precondition matrix of Matrix Free Method (just to test whether it can
>> be used, the unit matrix has no preprocessing effect). But the procedure
>> was wrong.
>>
>>yjwu@yjwu-XPS-8910:~/petsc-3.10.1/src/snes/examples/tutorials$
>> mpiexec -n 1 ./ex216 -f wu-readtwogroups -snes_mf_operator -snes_view
>> -snes_converged_reason -snes_monitor -ksp_converged_reason
>> -ksp_monitor_true_residual
>>
>>   0 SNES Function norm 8.235090086536e-02
>> iter = 0, SNES Function norm 0.0823509
>> iter = 0, Keff === 1.
>>   Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations
>> 0
>>  PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
>> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
>> SNES Object: 1 MPI processes
>>   type: newtonls
>>   maximum iterations=50, maximum function evaluations=1
>>   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>>   total number of linear solver iterations=0
>>   total number of function evaluations=1
>>   norm schedule ALWAYS
>>   SNESLineSearch Object: 1 MPI processes
>> type: bt
>>   interpolation: cubic
>>   alpha=1.00e-04
>> maxstep=1.00e+08, minlambda=1.00e-12
>> tolerances: relative=1.00e-08, absolute=1.00e-15,
>> lambda=1.00e-08
>> maximum iterations=40
>>   KSP Object: 1 MPI processes
>> type: gmres
>>   restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>   happy breakdown tolerance 1e-30
>> maximum iterations=1, initial guess is zero
>> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
>> left preconditioning
>> using PRECONDITIONED norm type for convergence test
>>   PC Object: 1 MPI processes
>> type: ilu
>>   out-of-place factorization
>>   0 levels of fill
>>   tolerance for zero pivot 2.22045e-14
>>   matrix ordering: natural
>>   factor fill ratio given 1., needed 1.
>> Factored matrix follows:
>>   Mat Object: 1 MPI processes
>> type: seqaij
>> rows=961, cols=961
>> package used to perform factorization: petsc
>> total: nonzeros=4625, allocated nonzeros=4625
>> total number of mallocs used during MatSetValues calls =0
>>   not using I-node routines
>> linear system matrix followed by preconditioner matrix:
>> Mat Object: 1 MPI processes
>>   type: mffd
>>   rows=961, cols=961
>> Matrix-free approximation:
>>   err=1.49012e-08 (relative error in function evaluation)
>>   The compute h routine has not yet been set
>> Mat Object: 1 MPI processes
>>   type: seqaij
>>   rows=961, cols=961
>>   total: nonzeros=4625, allocated nonzeros=4625
>>   total number of mallocs used during MatSetValues calls =0
>> not using I-node routines
>>
>> It seems that 

Re: [petsc-users] Problems about Assemble DMComposite Precondition Matrix

2018-11-05 Thread Mark Adams via petsc-users
DMComposite is not very mature, the last time I checked, and I don't know of
anyone having worked on it recently, and it is probably not what you want
anyway. FieldSplit is most likely what you want.

What are your equations and discretization? eg, Stokes with cell centered
pressure? There are probably examples that are close to what you want and
it would not be hard to move your code over.

Mark

On Mon, Nov 5, 2018 at 10:00 AM Yingjie Wu via petsc-users <
petsc-users@mcs.anl.gov> wrote:

> Dear Petsc developer:
> Hi,
> I have recently studied the preconditioner of the program, and some
> problems have arisen. Please help me to solve them.
> At present, I have written a program to solve the system of non-linear
> equations. The Matrix Free method has been used to calculate results. But I
> want to add a preprocessing matrix to it.
> I used the DMComposite object, which stores two sub-DM objects and a
> single value (two physical field variables and one variable). I want to use
> MatGetLocalSubMatrix to assign each physical field sub precondition matrix.
> At the same time, because my DM object is two-dimensional, I use
> MatSetValuesStencil() to fill the sub matrix.
> At present, I just want to fill in a unit matrix (for global vectors) as
> the precondition matrix of Matrix Free Method (just to test whether it can
> be used, the unit matrix has no preprocessing effect). But the procedure
> was wrong.
>
>yjwu@yjwu-XPS-8910:~/petsc-3.10.1/src/snes/examples/tutorials$
> mpiexec -n 1 ./ex216 -f wu-readtwogroups -snes_mf_operator -snes_view
> -snes_converged_reason -snes_monitor -ksp_converged_reason
> -ksp_monitor_true_residual
>
>   0 SNES Function norm 8.235090086536e-02
> iter = 0, SNES Function norm 0.0823509
> iter = 0, Keff === 1.
>   Linear solve did not converge due to DIVERGED_PCSETUP_FAILED iterations 0
>  PCSETUP_FAILED due to FACTOR_NUMERIC_ZEROPIVOT
> Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0
> SNES Object: 1 MPI processes
>   type: newtonls
>   maximum iterations=50, maximum function evaluations=1
>   tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
>   total number of linear solver iterations=0
>   total number of function evaluations=1
>   norm schedule ALWAYS
>   SNESLineSearch Object: 1 MPI processes
> type: bt
>   interpolation: cubic
>   alpha=1.00e-04
> maxstep=1.00e+08, minlambda=1.00e-12
> tolerances: relative=1.00e-08, absolute=1.00e-15,
> lambda=1.00e-08
> maximum iterations=40
>   KSP Object: 1 MPI processes
> type: gmres
>   restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>   happy breakdown tolerance 1e-30
> maximum iterations=1, initial guess is zero
> tolerances:  relative=1e-05, absolute=1e-50, divergence=1.
> left preconditioning
> using PRECONDITIONED norm type for convergence test
>   PC Object: 1 MPI processes
> type: ilu
>   out-of-place factorization
>   0 levels of fill
>   tolerance for zero pivot 2.22045e-14
>   matrix ordering: natural
>   factor fill ratio given 1., needed 1.
> Factored matrix follows:
>   Mat Object: 1 MPI processes
> type: seqaij
> rows=961, cols=961
> package used to perform factorization: petsc
> total: nonzeros=4625, allocated nonzeros=4625
> total number of mallocs used during MatSetValues calls =0
>   not using I-node routines
> linear system matrix followed by preconditioner matrix:
> Mat Object: 1 MPI processes
>   type: mffd
>   rows=961, cols=961
> Matrix-free approximation:
>   err=1.49012e-08 (relative error in function evaluation)
>   The compute h routine has not yet been set
> Mat Object: 1 MPI processes
>   type: seqaij
>   rows=961, cols=961
>   total: nonzeros=4625, allocated nonzeros=4625
>   total number of mallocs used during MatSetValues calls =0
> not using I-node routines
>
> It seems that there are elements on the diagonal line that are not filled,
> but I don't understand what went wrong. Part of the code of my program is
> as follows. I added all the code to the appendix. I have tested that for
> the entire Jacobian matrix assignment unit matrix (no submatrix, direct
> operation of the global preprocessing matrix), the program can run
> normally. However, since the problems developed later may be more complex,
> involving multiple physical fields, it would be easier to implement
> submatrix.
>
> ..
>
> /* set DMComposite */
>
> ierr =
> DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,20,24,PETSC_DECIDE,PETSC_DECIDE,1,1,0,0,&user.da1);CHKERRQ(ierr);
>   ierr = DMSetFromOptions(user.da1);CHKERRQ(ierr);
>   ierr = DMSetUp(user.da1);CHKERRQ(ierr);
>   ierr = DMCompositeAddDM(user.packer,user.da1);CHKERRQ(