Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-25 Thread Stefano Zampini
> I actually use hybridization and I was reading the preprint "Algebraic
> Hybridization and Static Condensation with Application to Scalable H(div)
> Preconditioning" by Dobrev et al. ( https://arxiv.org/abs/1801.08914 )
> and they show that multigrid is optimal for the grad-div problem
> discretized with H(div)-conforming FEMs when hybridized. That is actually
> why I think that BDDC would also be optimal. I will look into ngsolve to
> see if I can have such a domain decomposition. Maybe I can do it manually
> just as a proof of concept.
>
>
If you are using hybridization, you can use PCGAMG (i.e., -pc_type gamg).
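
For reference, a minimal sketch of that setup, assuming the assembled
(hybridized) system is already available as an AIJ matrix A with right-hand
side b; the function and variable names are illustrative, not from this
thread:

  #include <petscksp.h>

  /* Sketch: solve the hybridized, assembled system with GAMG. */
  PetscErrorCode solve_with_gamg(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    /* Assuming the hybridized system is SPD, CG is a natural choice;
       otherwise use KSPGMRES. */
    ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);   /* same as -pc_type gamg */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* honor -ksp_monitor, -ksp_converged_reason, ... */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }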


> I am using GMRES. I was wondering whether the application of BDDC is a
> linear operator; if it is not, maybe I should use FGMRES. But I could not
> find any comments about that.
>
>
BDDC is linear, so you do not need FGMRES. The problem is that, when you
disassemble an already assembled matrix, the operator of the preconditioner
is not guaranteed to stay positive definite, even for positive definite
assembled problems; this is also why GMRES is a safer choice than CG here.




Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-25 Thread Abdullah Ali Sivas
Right now, one to four. I am just running some tests with small matrices.
Later on, I am planning to do large-scale tests, hopefully up to 1024
processes. I was worried that the iteration counts might get worse.

I actually use hybridization and I was reading the preprint "Algebraic
Hybridization and Static Condensation with Application to Scalable H(div)
Preconditioning" by Dobrev et al. ( https://arxiv.org/abs/1801.08914 ) and
they show that multigrid is optimal for the grad-div problem discretized
with H(div)-conforming FEMs when hybridized. That is actually why I think
that BDDC would also be optimal. I will look into ngsolve to see if I can
have such a domain decomposition. Maybe I can do it manually just as a
proof of concept.

I am using GMRES. I was wondering whether the application of BDDC is a
linear operator; if it is not, maybe I should use FGMRES. But I could not
find any comments about that.

I will recompile PETSc with ParMETIS and try your suggestions. Thank you! I
will update you soon.

Best wishes,
Abdullah Ali Sivas


Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-25 Thread Stefano Zampini
How many processes (subdomains) are you using?
I would not say the number of iterations is bad, and it also seems to
plateau.
The grad-div problem is quite hard to solve (unless you use
hybridization), so you can really benefit from the "Neumann" assembly.
I believe you are using GMRES, as the preconditioned operator (i.e.,
M_BDDC^-1 A) is not guaranteed to be positive definite when you use the
automatic disassembling.
You may slightly improve the quality of the disassembling by using
-mat_is_disassemble_l2g_type nd -mat_partitioning_type parmetis, if you
have PETSc compiled with ParMETIS support.
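
As an illustration, the whole pipeline (options, conversion to MATIS, and
the BDDC-preconditioned solve) could look roughly like the sketch below.
This is a sketch only; the function and variable names are illustrative,
not from this thread, and the two disassembling options are set up front so
that they are available when the conversion and the preconditioner setup
run.

  #include <petscksp.h>

  /* Sketch: convert an assembled AIJ matrix to MATIS and solve with BDDC. */
  PetscErrorCode solve_with_bddc(Mat A, Vec b, Vec x)
  {
    Mat            Ais;
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    /* Same effect as passing
       -mat_is_disassemble_l2g_type nd -mat_partitioning_type parmetis
       on the command line (requires PETSc configured with ParMETIS). */
    ierr = PetscOptionsSetValue(NULL, "-mat_is_disassemble_l2g_type", "nd");CHKERRQ(ierr);
    ierr = PetscOptionsSetValue(NULL, "-mat_partitioning_type", "parmetis");CHKERRQ(ierr);

    ierr = MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &Ais);CHKERRQ(ierr);

    ierr = KSPCreate(PetscObjectComm((PetscObject)Ais), &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, Ais, Ais);CHKERRQ(ierr);
    /* GMRES, since M_BDDC^-1 A may lose positive definiteness after the
       automatic disassembling. */
    ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCBDDC);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    ierr = MatDestroy(&Ais);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }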



Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-24 Thread Abdullah Ali Sivas
Hi Stefano,

I am trying to solve the div-div problem (or grad-div problem in strong
form) with an H(div)-conforming FEM. I am getting the matrices from an
external source (to be clear, from an ngsolve script) and I am not sure if
it is possible to get a MATIS matrix out of that. So I am just treating it
as if I am not able to access the assembly code. The results are 2, 31, 26,
27, 31 iterations, respectively, for matrix sizes 282, 1095, 4314, 17133,
67242, 267549. However, the norm of the residual also grows significantly:
7.38369e-09 for 1095 and 5.63828e-07 for 267549. I can try larger sizes, or
maybe this is expected for this case.
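
One way to tell whether that growth is expected is to make the stopping
criterion explicit and to watch the true (unpreconditioned) residual; a
minimal sketch with illustrative settings, assuming ksp is the
already-configured solver:

  #include <petscksp.h>

  /* Sketch: fix the relative tolerance so that residuals are comparable
     across problem sizes; leave the other criteria at their defaults. */
  PetscErrorCode set_tolerances(KSP ksp)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPSetTolerances(ksp, 1.0e-8, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

Running with -ksp_monitor_true_residual -ksp_converged_reason then shows
whether the growth is in the true residual or only in the preconditioned
one.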

As a side question, if we are dividing the domain into as many subdomains
as MPI processes, does it mean that convergence is affected negatively by
the increasing number of processes? I know that the alternating Schwarz
method and some other domain decomposition methods sometimes suffer from
the decreasing radius of the subdomains. From your description, it sounds
like BDDC is pretty similar to those.

Best wishes,
Abdullah Ali Sivas



Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-24 Thread Stefano Zampini
Abdullah,

The "Neumann" problems Jed is referring to result from assembling your
problem on each subdomain ( = MPI process) separately.
Assuming you are using FEM, these problems have been historically  named
"Neumann" as they correspond to a problem with natural boundary conditions
(Neumann bc for Poisson).
Note that in PETSc the subdomain decomposition is associated with the mesh
decomposition.

When converting from an assembled AIJ matrix to the MATIS format, such
"Neumann" information is lost.
You can disassemble an AIJ matrix, in the sense that you can find local
matrices A_j such that A = \sum_j R^T_j A_j R_j (as is done in ex72.c),
but you cannot guarantee (unless you solve an optimization problem) that
the disassembling will produce subdomain Neumann problems that are
consistent with your FEM problem.
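
To make the distinction concrete (the element-level notation here is
illustrative): if A_K are the element matrices and T_j is the set of
elements assigned to subdomain j, the FEM "Neumann" matrices are the
subassemblies

  A_j^FEM = \sum_{K in T_j} R_{j,K}^T A_K R_{j,K},

which satisfy A = \sum_j R^T_j A_j^FEM R_j by construction. The algebraic
disassembling only enforces the identity A = \sum_j R^T_j A_j R_j, which
does not determine the A_j uniquely (interface couplings can be split
between neighboring subdomains in many ways), so the recovered local
matrices need not coincide with the FEM Neumann matrices.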

I added such disassembling code a few months ago, just to have another
alternative for preconditioning AIJ matrices in PETSc; there are a few
tweaks one can do to improve the quality of the disassembling, but I
discourage its use unless you don't have access to the FEM assembly code.

With that said, what problem are you trying to solve? Are you using DMDA or
DMPlex? What are the results you obtained using the automatic
disassembling?


-- 
Stefano


Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-23 Thread Abdullah Ali Sivas
Hi Jed,

Thanks for your reply. The assembled matrix I have corresponds to the full
problem on the full mesh. There are no "Neumann" problems (or any sort of
domain decomposition) defined in the code that generates the matrix.
However, I think assembling the full problem is equivalent to implicitly
assembling the "Neumann" problems, since the system can be partitioned as:

[ A_{LL}  A_{LI} ] [ u_L ]   [ F ]
[ A_{IL}  A_{II} ] [ u_I ] = [ G ]

and G should correspond to the Neumann problem. I might be thinking about
this wrong (or maybe I completely misunderstood the idea); if so, please
correct me. But I think the problem is that I am not explicitly telling
PCBDDC which dofs are interface dofs.
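
For context, PCBDDC infers the interface from the MATIS local-to-global
maps (the dofs that appear in more than one subdomain), so the key is to
hand PETSc the per-subdomain ("Neumann") matrices together with that map
rather than the already assembled operator. A minimal sketch of how that
might look, assuming the local dof lists and the local subdomain matrix can
be exported from the ngsolve script (all names here are hypothetical):

  #include <petscksp.h>

  /* Sketch: build a MATIS directly from subdomain data.
     nlocal      - number of dofs touched by this process (interior + interface)
     global_dofs - their global indices, as exported from the external code
     Nglobal     - global problem size
     Aloc        - sequential subdomain matrix assembled with natural
                   ("Neumann") boundary conditions on the subdomain boundary */
  PetscErrorCode create_matis(MPI_Comm comm, PetscInt nlocal,
                              const PetscInt global_dofs[], PetscInt Nglobal,
                              Mat Aloc, Mat *Ais)
  {
    ISLocalToGlobalMapping l2g;
    PetscErrorCode         ierr;

    PetscFunctionBeginUser;
    ierr = ISLocalToGlobalMappingCreate(comm, 1, nlocal, global_dofs,
                                        PETSC_COPY_VALUES, &l2g);CHKERRQ(ierr);
    ierr = MatCreateIS(comm, 1, PETSC_DECIDE, PETSC_DECIDE, Nglobal, Nglobal,
                       l2g, l2g, Ais);CHKERRQ(ierr);
    ierr = MatISSetLocalMat(*Ais, Aloc);CHKERRQ(ierr); /* the unassembled "Neumann" matrix */
    ierr = MatAssemblyBegin(*Ais, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(*Ais, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = ISLocalToGlobalMappingDestroy(&l2g);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

A KSP with -pc_type bddc can then be set up on the resulting matrix exactly
as for the converted case, but the local problems now match the FEM
subassembly.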

Regards,
Abdullah Ali Sivas



Re: [petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-23 Thread Jed Brown
Did you assemble "Neumann" problems that are compatible with your
definition of interior/interface degrees of freedom?



[petsc-users] Using BDDC preconditioner for assembled matrices

2018-10-23 Thread Abdullah Ali Sivas
Dear all,

I have a series of linear systems coming from a PDE for which BDDC is an
optimal preconditioner. These linear systems are assembled and I read them
from a file, then convert them into MATIS as required (as in
https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex72.c.html
). I expect each of the systems to converge to the solution in almost the
same number of iterations, but I don't observe that. I think it is because
I do not provide enough information to the preconditioner. I can get a list
of inner dofs and interface dofs; however, I do not know how to use them.
Does anyone have any insights about this, or has anyone done something
similar?
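
For reference, the read-from-file step could look roughly like the sketch
below; the file name is a placeholder and the function name is
illustrative. The conversion to MATIS then proceeds as in ex72.c via
MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &Ais), and a KSP with -pc_type
bddc is set up on the converted matrix.

  #include <petscksp.h>

  /* Sketch: load an assembled matrix from a PETSc binary file. */
  PetscErrorCode load_matrix(MPI_Comm comm, const char *filename, Mat *A)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = PetscViewerBinaryOpen(comm, filename, FILE_MODE_READ, &viewer);CHKERRQ(ierr);
    ierr = MatCreate(comm, A);CHKERRQ(ierr);
    ierr = MatSetType(*A, MATAIJ);CHKERRQ(ierr);
    ierr = MatLoad(*A, viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }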

Best wishes,
Abdullah Ali Sivas