On Wed, 22 Jan 2014 16:13:51 +0100
Jan Blechta <[email protected]> wrote:

> On Wed, 22 Jan 2014 15:56:51 +0100
> Nico Schlömer <[email protected]> wrote:
> 
> > Here's a complete example:
> > 
> > =============== *snip* ===============
> > from dolfin import *
> > 
> > mesh = UnitSquareMesh(20, 20)
> > V = FunctionSpace(mesh, 'CG', 1)
> > 
> > f = Expression('sin(x[0]) * sin(x[1])')
> > 
> > u = TrialFunction(V)
> > v = TestFunction(V)
> > a = dot(grad(u), grad(v)) * dx
> > L = f * v * dx
> > 
> > A = assemble(a)
> > b = assemble(L)
> > 
> > # Make sure the system is consistent, i.e., that the right-hand side
> > # is in the image of A. This is equivalent to the rhs not having a
> > # component in the kernel of A (since A is symmetric).
> > e = Function(V)
> > e.interpolate(Constant(1.0))
> > evec = e.vector()
> > evec /= norm(evec)
> > alpha = b.inner(evec)
> > b -= alpha * evec
> > 
> > x = Function(V)
> > 
> > prec = PETScPreconditioner('hypre_amg')
> > prec.parameters['hypre']['BoomerAMG']['relax_type_coarse'] = 'jacobi'
> > solver = PETScKrylovSolver('cg', prec)
> > solver.parameters['absolute_tolerance'] = 0.0
> > solver.parameters['relative_tolerance'] = 1.0e-10
> > solver.parameters['maximum_iterations'] = 100
> > solver.parameters['monitor_convergence'] = True
> > # Extract PETSc objects and solve the system
> > A_petsc = as_backend_type(A)
> > b_petsc = as_backend_type(b)
> > x_petsc = as_backend_type(x.vector())
> > solver.set_operator(A_petsc)
> > solver.solve(x_petsc, b_petsc)
> > =============== *snap* ===============
> > 
> > What you might have missed is that the system needs to be consistent
> > (which is enforced explicitly in the above example, but may hold
> > automatically in other contexts, e.g., Navier-Stokes).
> 
> No, I missed that you were talking about a problem without the
> Lagrange multiplier, which is only semi-definite, not indefinite.
> 
> Regarding the problem with the multiplier, I tweaked your suggestion,
> using MINRES instead of CG and the preconditioner
>   ap = (inner(grad(u), grad(v)) + c*d)*dx
> Code is below. This converges much faster than your code. In fact,
> your code can't converge to a relative tolerance of 1e-15, probably
> because consistency is not satisfied to such precision.
> 
> To be honest, comparing the residuals of these two problems (with and
> without the multiplier) is not very meaningful. One should rather
> compare error norms against an exact solution.
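> For instance, something along these lines (a sketch; u_exact is a
> hypothetical Expression for the known exact solution, and u is the
> computed solution):
> 
>   u_exact = Expression('...')  # fill in the exact solution here
>   print errornorm(u_exact, u)  # L2 norm of the error by default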

Now I see that you also use a different mesh and right-hand side, so
the comparison is totally nonsensical.

Jan

> 
> ========================================
> from dolfin import *
> 
> # Create mesh and define function space
> mesh = UnitSquareMesh(64, 64)
> V = FunctionSpace(mesh, "CG", 1)
> R = FunctionSpace(mesh, "R", 0)
> W = V * R
> 
> # Define variational problem
> (u, c) = TrialFunctions(W)
> (v, d) = TestFunctions(W)
> f = Expression("10*exp(-(pow(x[0] - 0.5, 2) + pow(x[1] - 0.5, 2)) / 0.02)")
> g = Expression("-sin(5*x[0])")
> a  = (inner(grad(u), grad(v)) + c*v + u*d)*dx
> ap = (inner(grad(u), grad(v)) + c*d)*dx
> L  = f*v*dx + g*v*ds
> 
> # Assemble
> A  = assemble(a)
> Ap = assemble(ap)
> b  = assemble(L)
> 
> # Create Krylov solver
> prec = PETScPreconditioner('hypre_amg')
> prec.parameters['hypre']['BoomerAMG']['relax_type_coarse'] = 'jacobi'
> solver = PETScKrylovSolver('minres', prec)
> solver.parameters['absolute_tolerance'] = 0.0
> solver.parameters['relative_tolerance'] = 1e-15 
> solver.parameters['maximum_iterations'] = 100
> solver.parameters['monitor_convergence'] = True
> 
> # Solve system
> w = Function(W)
> solver.set_operators(A, Ap)
> solver.solve(w.vector(), b)
> (u, c) = w.split()
> 
> # Plot solution
> plot(u, interactive=True)
> ========================================
> 
> Jan
> 
> > 
> > --Nico
> > 
> > On Wed, Jan 22, 2014 at 3:43 PM, Jan Blechta
> > <[email protected]> wrote:
> > > On Wed, 22 Jan 2014 15:25:50 +0100
> > > Nico Schlömer <[email protected]> wrote:
> > >
> > >> Hi all,
> > >>
> > >> the problem is fully symmetric and can be solved with CG, even
> > >> though the null space is nontrivial (see, e.g., Iterative Krylov
> > >> methods for large linear systems, Henk A. van der Vorst). As a
> > >> preconditioner you can use AMG, the only thing that you have to
> > >> take care of is the coarse solve; by default, some LU strategy is
> > >> deployed which messes around with the null space. Take something
> > >> iterative, e.g. Jacobi, and you'll be fine.
> > >>
> > >> Check out
> > >>
> > >> prec = PETScPreconditioner('hypre_amg')
> > >> prec.parameters['hypre']['BoomerAMG']['relax_type_coarse'] = 'jacobi'
> > >> solver = PETScKrylovSolver('cg', prec)
> > >> solver.parameters['absolute_tolerance'] = 0.0
> > >> solver.parameters['relative_tolerance'] = tol
> > >> solver.parameters['maximum_iterations'] = 100
> > >> solver.parameters['monitor_convergence'] = verbose
> > >> # Extract PETSc objects and solve the system
> > >> A_petsc = as_backend_type(A)
> > >> b_petsc = as_backend_type(b)
> > >> phi_petsc = as_backend_type(phi.vector())
> > >> solver.set_operator(A_petsc)
> > >> solver.solve(phi_petsc, b_petsc)
> > >
> > > This does not seem to work. Have you tried it? Can you post
> > > complete code? In my opinion, you can't solve it using CG because
> > > the problem is indefinite, at least not without some additional
> > > trick.
> > >
> > > Jan
> > >
> > >>
> > >> Alternatively, the PETSc options can be set using the PETScOptions
> > >> framework that Garth added a while ago.
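> > >>
> > >> For example, roughly (a sketch; the exact BoomerAMG option name
> > >> should be checked against the PETSc version at hand):
> > >>
> > >>   PETScOptions.set('pc_hypre_boomeramg_relax_type_coarse', 'Jacobi')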
> > >>
> > >> I encountered the same problem a while ago in the pressure
> > >> update for Chorin's method (Navier-Stokes solvers).
> > >>
> > >> Cheers,
> > >> Nico
> > >>
> > >> On Tue, Jan 21, 2014 at 8:15 PM, Jan Blechta
> > >> <[email protected]> wrote:
> > >> > On Tue, 21 Jan 2014 11:01:50 -0800
> > >> > Nikolaus Rath <[email protected]> wrote:
> > >> >
> > >> >> On 01/21/2014 09:43 AM, Simone Pezzuto wrote:
> > >> >> > 2014/1/21 Jan Blechta <[email protected]
> > >> >> > <mailto:[email protected]>>
> > >> >> >
> > >> >> >     On Tue, 21 Jan 2014 17:18:54 +0000
> > >> >> >     "Garth N. Wells" <[email protected]
> > >> >> > <mailto:[email protected]>> wrote:
> > >> >> >
> > >> >> >     > On 2014-01-21 17:01, Nikolaus Rath wrote:
> > >> >> >     > > Hello,
> > >> >> >     > >
> > >> >> >     > > I noticed that the neumann-poisson demo
> > >> >> >     > >
> > >> >> >     > > (http://fenicsproject.org/documentation/dolfin/1.3.0/python/demo/documented/neumann-poisson/python/documentation.html)
> > >> >> >     > > fails when using a different solver, e.g. when
> > >> >> >     > > replacing
> > >> >> >     > >
> > >> >> >     > > solve(a == L, w)
> > >> >> >     > >
> > >> >> >     > > with
> > >> >> >     > >
> > >> >> >     > > solve(a == L, w,
> > >> >> >     > >       solver_parameters = {'linear_solver': 'cg',
> > >> >> >     > >                            'preconditioner': 'ilu'})
> > >> >> >     > >
> > >> >> >     >
> > >> >> >     > This problem needs very careful treatment when using
> > >> >> >     > iterative solvers. Simple black-box preconditioners
> > >> >> >     > and solvers will very likely fail.
> > >> >> >
> > >> >> >     AMG preconditioning based on operator
> > >> >> >
> > >> >> >       (inner(grad(u), grad(v)) + c*d)*dx
> > >> >> >
> > >> >> >     could perform well. This operator does not have a dense
> > >> >> > row like the original one. This is a strategy similar to
> > >> >> > demo_stokes-iterative.
> > >> >> >
> > >> >> >
> > >> >> > In this case the preconditioner is singular (pure Neumann),
> > >> >> > so it cannot be used.
> > >> >> >
> > >> >> > As Garth was mentioning, this problem is delicate for
> > >> >> > iterative solvers, not only because of its indefiniteness,
> > >> >> > but because the Lagrange constraint you're imposing yields
> > >> >> > a column (the last one) of the full matrix that belongs to
> > >> >> > the kernel of the top-left block.
> > >> >> >
> > >> >> > Since the nullspace is at hand, I would provide it to the
> > >> >> > solver and then use CG+AMG, with Jacobi relaxation at the
> > >> >> > coarsest level instead of Gaussian elimination (at least
> > >> >> > with PETSc's BoomerAMG).
> > >> >>
> > >> >>
> > >> >> Why is there a nullspace? Doesn't the \int u = 0 constraint
> > >> >> remove the remaining degree of freedom resulting from the pure
> > >> >> Neumann boundary conditions?
> > >> >
> > >> > It does not remove any DOF. It adds just one DOF - the Lagrange
> > >> > multiplier - and an equation which makes the system regular. But
> > >> > this is evil, as the corresponding row and column in the
> > >> > assembled matrix are dense/full.
> > >> >
> > >> > Nullspace is an alternative (and probably better) approach.
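> > >> >
> > >> > An untested sketch of that approach (assuming a DOLFIN version
> > >> > that provides VectorSpaceBasis and set_nullspace; A, b and V
> > >> > denote the assembled operator, right-hand side and function
> > >> > space of the pure-Neumann problem without the multiplier):
> > >> >
> > >> >   # Build a vector spanning the null space (the constants)
> > >> >   null_vec = Vector(b)
> > >> >   V.dofmap().set(null_vec, 1.0)
> > >> >   null_vec *= 1.0/null_vec.norm('l2')
> > >> >
> > >> >   # Attach the null space to the operator and orthogonalize
> > >> >   # the rhs against it, making the system consistent
> > >> >   null_space = VectorSpaceBasis([null_vec])
> > >> >   as_backend_type(A).set_nullspace(null_space)
> > >> >   null_space.orthogonalize(b)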
> > >> >
> > >> > Jan
> > >> >
> > >> >>
> > >> >>
> > >> >>
> > >> >> Thanks,
> > >> >> Nikolaus
> > >> >>
> > >> >
> > >
> 

_______________________________________________
fenics mailing list
[email protected]
http://fenicsproject.org/mailman/listinfo/fenics
