On Wed, 22 Jan 2014 15:25:50 +0100
Nico Schlömer <[email protected]> wrote:

> Hi all,
> 
> the problem is fully symmetric and can be solved with CG, even though
> the null space is nontrivial (see, e.g., Iterative Krylov methods for
> large linear systems, Henk A. van der Vorst). As a preconditioner you
> can use AMG; the only thing you have to take care of is the coarse
> solve, since by default some LU strategy is deployed, which messes
> around with the null space. Take something iterative, e.g. Jacobi, and
> you'll be fine.
> 
> Check out
> 
> prec = PETScPreconditioner('hypre_amg')
> prec.parameters['hypre']['BoomerAMG']['relax_type_coarse'] = 'jacobi'
> solver = PETScKrylovSolver('cg', prec)
> solver.parameters['absolute_tolerance'] = 0.0
> solver.parameters['relative_tolerance'] = tol
> solver.parameters['maximum_iterations'] = 100
> solver.parameters['monitor_convergence'] = verbose
> # Wrap the operators in PETSc objects and solve the system
> A_petsc = as_backend_type(A)
> b_petsc = as_backend_type(b)
> phi_petsc = as_backend_type(phi.vector())
> solver.set_operator(A_petsc)
> solver.solve(phi_petsc, b_petsc)
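> 
> For completeness, an untested sketch of the setup this snippet assumes
> (the names A, b, phi, tol and verbose are placeholders; note that the
> right-hand side must have zero mean for the pure Neumann problem to be
> solvable):
> 
> from dolfin import *
> 
> mesh = UnitSquareMesh(64, 64)
> V = FunctionSpace(mesh, 'CG', 1)
> u = TrialFunction(V)
> v = TestFunction(V)
> f = Expression('x[0] - 0.5')  # zero-mean source on the unit square
> a = inner(grad(u), grad(v))*dx
> L = f*v*dx
> A = assemble(a)
> b = assemble(L)
> phi = Function(V)
> tol = 1e-10
> verbose = True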

This does not seem to work. Have you tried it? Can you post complete
code? In my opinion, you can't solve it using CG because the problem is
indefinite, at least not without some additional trick.

Jan

> 
> Alternatively, the PETSc options can be set using the PETScOptions
> framework that Garth added a while ago.
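> 
> For example, something along these lines (untested; the option name
> follows PETSc's hypre/BoomerAMG interface):
> 
> from dolfin import PETScOptions
> PETScOptions.set('pc_hypre_boomeramg_relax_type_coarse', 'Jacobi')
> 
> which should have the same effect as the relax_type_coarse parameter
> above.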
> 
> I encountered the same problem a while ago in the pressure update of
> Chorin's method (Navier-Stokes solvers).
> 
> Cheers,
> Nico
> 
> On Tue, Jan 21, 2014 at 8:15 PM, Jan Blechta
> <[email protected]> wrote:
> > On Tue, 21 Jan 2014 11:01:50 -0800
> > Nikolaus Rath <[email protected]> wrote:
> >
> >> On 01/21/2014 09:43 AM, Simone Pezzuto wrote:
> >> > 2014/1/21 Jan Blechta <[email protected]>
> >> >
> >> >     On Tue, 21 Jan 2014 17:18:54 +0000
> >> >     "Garth N. Wells" <[email protected] <mailto:[email protected]>>
> >> > wrote:
> >> >
> >> >     > On 2014-01-21 17:01, Nikolaus Rath wrote:
> >> >     > > Hello,
> >> >     > >
> >> >     > > I noticed that the neumann-poisson demo
> >> >     > >
> >> >     > > (http://fenicsproject.org/documentation/dolfin/1.3.0/python/demo/documented/neumann-poisson/python/documentation.html)
> >> >     > > fails when using a different solver, e.g. when replacing
> >> >     > >
> >> >     > > solve(a == L, w)
> >> >     > >
> >> >     > > with
> >> >     > >
> >> >     > > solve(a == L, w,
> >> >     > >       solver_parameters = {'linear_solver': 'cg',
> >> >     > >                            'preconditioner': 'ilu'})
> >> >     > >
> >> >     >
> >> >     > This problem needs very careful treatment when using
> >> >     > iterative solvers. Simple black-box preconditioners and
> >> >     > solvers will very likely fail.
> >> >
> >> >     AMG preconditioning based on operator
> >> >
> >> >       (inner(grad(u), grad(v)) + c*d)*dx
> >> >
> >> >     could perform well. This operator does not have a dense row
> >> > like the original one. This is a strategy similar to
> >> > demo_stokes-iterative.
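> >> >
> >> >     Roughly (an untested sketch following demo_stokes-iterative,
> >> >     assuming the mixed space of the demo with (u, c) trial
> >> >     functions, (v, d) test functions and w = Function(W)):
> >> >
> >> >       b_prec = (inner(grad(u), grad(v)) + c*d)*dx
> >> >       A, bb = assemble_system(a, L)
> >> >       P, _  = assemble_system(b_prec, L)
> >> >       solver = KrylovSolver('minres', 'amg')
> >> >       solver.set_operators(A, P)
> >> >       solver.solve(w.vector(), bb)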
> >> >
> >> >
> >> > In this case the preconditioner is singular (pure Neumann), so it
> >> > cannot be used.
> >> >
> >> > As Garth was mentioning, this problem is delicate for iterative
> >> > solvers, not only because of its indefiniteness, but because the
> >> > Lagrange multiplier constraint you're imposing yields a column (the
> >> > last one) of the full matrix that belongs to the kernel of the
> >> > top-left block.
> >> >
> >> > Since the nullspace is at hand, I would provide it to the solver
> >> > and then use CG+AMG, with Jacobi relaxation on the coarsest level
> >> > instead of Gaussian elimination (at least with PETSc's BoomerAMG).
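> >> >
> >> > Something along these lines, untested (this drops the multiplier
> >> > and assumes A, b are the plain Laplacian matrix and load vector
> >> > assembled on a FunctionSpace V; the VectorSpaceBasis/set_nullspace
> >> > names follow the PETSc backend and the exact API differs between
> >> > dolfin versions):
> >> >
> >> > u_sol = Function(V)
> >> > null_vec = Vector(u_sol.vector())
> >> > V.dofmap().set(null_vec, 1.0)       # constant vector spans the kernel
> >> > null_vec *= 1.0/null_vec.norm('l2')
> >> > null_space = VectorSpaceBasis([null_vec])
> >> > as_backend_type(A).set_nullspace(null_space)
> >> > null_space.orthogonalize(b)         # make the RHS consistent
> >> > solver = PETScKrylovSolver('cg', PETScPreconditioner('hypre_amg'))
> >> > solver.set_operator(A)
> >> > solver.solve(u_sol.vector(), b)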
> >>
> >>
> >> Why is there a nullspace? Doesn't the \int u = 0 constraint remove
> >> the remaining degree of freedom resulting from the pure Neumann
> >> boundary conditions?
> >
> > It does not remove any DOF. It adds just one DOF - the Lagrange
> > multiplier - and one equation, which makes the system regular. But
> > this is evil, as the corresponding row and column in the assembled
> > matrix are dense/full.
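> >
> > Schematically, the assembled system is
> >
> >     [ K    e ] [ u ]   [ b ]
> >     [ e^T  0 ] [ c ] = [ 0 ]
> >
> > with K the (singular) stiffness matrix and e_i = \int phi_i dx, which
> > is why the last row and column are full.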
> >
> > Nullspace is an alternative (and probably better) approach.
> >
> > Jan
> >
> >>
> >>
> >>
> >> Thanks,
> >> Nikolaus
> >>

_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
