Thanks for the prompt response. Both answers match what I'm doing.

After playing a bit more with the solver, I managed to make it run in parallel with different boundary conditions (full Dirichlet BCs vs. mixed Neumann + Dirichlet). This raises two questions:

- How relevant are the boundary conditions (eliminating Dirichlet rows/columns vs. weak Neumann BCs) to the solver? Should I modify something when changing boundary conditions? (The first sketch below shows what I mean by the elimination.)

- Also, the solver did well with the old BCs when run on a single processor (but not in parallel). This seems odd, since parallel and serial behavior should be consistent (or should it not be?). Could this be the fault of PCGAMG? I believe the default local solver is ILU; should I change it to LU or something else for this kind of problem? (The second sketch below shows the kind of switch I mean.)
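
For reference, this is what I mean by "eliminating Dirichlet rows/columns" in the first question. A minimal sketch, where `A`, `x`, `b` stand in for my assembled operator and vectors, and `bcRows`/`nBC` are placeholders for the global indices (and count) of the constrained dofs owned by this rank:

  /* Zero the Dirichlet rows and columns of A, put 1.0 on the diagonal,
     and correct b using the known boundary values already stored in x.
     Each rank passes only the constrained rows it owns. */
  PetscCall(MatZeroRowsColumns(A, nBC, bcRows, 1.0, x, b));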

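And this is the kind of switch I mean in the second question: a sketch, assuming the standard PCMG/PCGAMG option prefixes apply to my solver (I would confirm what is actually used with -ksp_view):

  /* Make the GAMG level smoothers use block Jacobi with exact LU local
     solves; equivalent to passing the same options on the command line.
     Must be set before KSPSetFromOptions() is called. */
  PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_pc_type", "bjacobi"));
  PetscCall(PetscOptionsSetValue(NULL, "-mg_levels_sub_pc_type", "lu"));
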
Thank you both again,

Jordi


On 5/12/23 04:46, Matthew Knepley wrote:
On Mon, Dec 4, 2023 at 12:01 PM Jordi Manyer Fuertes via petsc-users <petsc-users@mcs.anl.gov> wrote:

    Dear PETSc users/developers,

    I am currently trying to use the method `MatNullSpaceCreateRigidBody`
    together with `PCGAMG` to efficiently precondition an elasticity
    solver in 2D/3D.

    I have managed to make it work in serial (or with 1 MPI rank) with an
    h-independent number of iterations (which is great), but the solver
    diverges in parallel.

    I assume it has to do with the coordinate vector I am building the
    null space with not being correctly set up. The documentation is not
    that clear on which nodes exactly have to be set in each partition.
    Does it require the nodes corresponding to owned dofs, or all dofs
    in each partition (owned + ghost)? What ghost layout should the
    `Vec` have?

    Any other tips about what I might be doing wrong?


What we assume is that you have some elastic problem formulated in primal unknowns (displacements) so that the solution vector looks like this:

  [ d^0_x d^0_y d^0_z d^1_x ..... ]

or whatever spatial dimension you have. We expect to get a global vector that looks like that, but instead of displacements, we get the coordinates that each displacement corresponds to. We make the generators of translations:

  [ 1 0 0 1 0 0 1 0 0 1 0 0... ]
  [ 0 1 0 0 1 0 0 1 0 0 1 0... ]
  [ 0 0 1 0 0 1 0 0 1 0 0 1... ]

for which we do not need the coordinates, and then the generators of rotations about each axis, for which we _do_ need the coordinates, since we need to know how much each point moves if you rotate about some center.
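
Concretely, the setup looks something like this minimal sketch (assuming 3D, and that `A` is your assembled elasticity operator; the names are placeholders, and the coordinate vector must share the row layout of A's solution vectors, i.e. owned dofs only, no ghosts):

  /* assumes #include <petscmat.h> and an initialized PETSc */
  Vec          coords;
  MatNullSpace nullsp;
  PetscInt     n, N;

  /* Give coords the same parallel layout as a solution vector of A:
     owned dofs only, no ghost entries. */
  PetscCall(MatGetLocalSize(A, NULL, &n));
  PetscCall(MatGetSize(A, NULL, &N));
  PetscCall(VecCreate(PetscObjectComm((PetscObject)A), &coords));
  PetscCall(VecSetSizes(coords, n, N));
  PetscCall(VecSetBlockSize(coords, 3)); /* block size = spatial dimension */
  PetscCall(VecSetUp(coords));
  /* ... fill coords, interlaced as [x0 y0 z0 x1 y1 z1 ...], with the
     coordinates of the node that each owned dof belongs to ... */
  PetscCall(MatNullSpaceCreateRigidBody(coords, &nullsp));
  PetscCall(MatSetNearNullSpace(A, nullsp)); /* GAMG uses the near null space */
  PetscCall(MatNullSpaceDestroy(&nullsp));
  PetscCall(VecDestroy(&coords));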

  Does that make sense?

   Thanks,

      Matt

    Thanks,

    Jordi



--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
