Re: [petsc-users] Block preconditioning for 3d problem

2019-10-14 Thread Dave Lee via petsc-users
(log.fgmres).
> >
> > One thing I've noticed - I've been dumping the index of the slab with
> the maximum normalised error: norm_L2(delta_x) / norm_L2(x). If I solve for
> each slab separately, then the maximum normalised errors are typically in
> slab 3 (of 0-7). This makes sense as this is probably the most dynamically
> active layer of the atmosphere. However if I solve for all the slabs in a
> single solve (as is the case for these tests), then the greatest error is
> always in slab 7, the top of the atmosphere. Obviously the pressure is
> lowest in this level, so I'm wondering if the GMRES iterations are just
> weighting the lower layers of the atmosphere too heavily.
> >
> > Cheers, Dave.
> >
> > On Sun, Oct 13, 2019 at 3:42 AM Smith, Barry F. 
> wrote:
> >
> >
> > > On Oct 10, 2019, at 2:58 AM, Dave Lee via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> > >
> > > Hi PETSc,
> > >
> > > I have a nonlinear 3D problem for a set of uncoupled 2D slabs. (Which
> I ultimately want to couple once this problem is solved).
> > >
> > > When I solve the inner linear problem for each of these 2D slabs
> individually (using KSPGMRES) the convergence of the outer nonlinear
> problem is good. However when I solve the inner linear problem as a single
> 3D problem (with no coupling between the 2D slabs, so the matrix is
> effectively a set of uncoupled blocks, one for each 2D slab) the outer
> nonlinear convergence degrades dramatically.
> > >
> > > Note that I am not using SNES, just my own quasi-Newton approach for
> the outer nonlinear problem.
> > >
> > > I suspect that the way to recover the convergence for the 3D coupled
> problem is to use some sort of PCBJACOBI or PCFIELDSPLIT preconditioner,
> but I'm not entirely sure. I've tried following this example:
> > >
> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex7.c.html
> > > but without any improvement in the convergence.
> > >
> > > On each processor the slab index is the major index and the internal
> slab DOF index is the minor index. The parallel decomposition is within the
> 2D slab dimensions only, not between slabs.
> >
> >   So each slab lives on all the processes?
> >
> > >
> > > I'm configuring the 3D inner linear solver as
> >
> >   How many processes and how many slabs are you running with?
> >
> > >
> > > KSPGetPC(ksp, &pc);
> > > PCSetType(pc, PCBJACOBI);
> > > PCBJacobiSetLocalBlocks(pc, num_slabs, NULL);
> >
> > There will be num_slabs blocks per processor, times the number of
> processors. Depending on the layout of the unknowns, each block will consist
> of an entire local slab, or each block will consist of pieces from each slab.
> >
> > > KSPSetUp(ksp);
> > > PCBJacobiGetSubKSP(pc, &nlocal, &first_local, &subksp);
> > > for(int ii = 0; ii < nlocal; ii++) {
> > > KSPGetPC(subksp[ii], &subpc);
> > > PCSetType(subpc, PCJACOBI);
> > > KSPSetType(subksp[ii], KSPGMRES);
> > > KSPSetTolerances(subksp[ii], 1.e-12, PETSC_DEFAULT, PETSC_DEFAULT,
> PETSC_DEFAULT);
> > > }
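
For reference, the same block-Jacobi configuration can usually be selected from
the options database rather than hard-coded; a minimal sketch, assuming these
option names match the PETSc release in use:

-ksp_type gmres -pc_type bjacobi -pc_bjacobi_local_blocks <num_slabs> \
    -sub_ksp_type gmres -sub_pc_type jacobi -sub_ksp_rtol 1e-12

Driving the solver from the command line makes it easier to try the monitoring
and solver variations Barry suggests below without recompiling.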
> >
> >   Please run the linear solver with
> >
> > -ksp_monitor_true_residual -sub_ksp_monitor_true_residual
> -ksp_converged_reason -sub_ksp_converged_reason
> >
> >
> >   Do you get huge iteration counts somewhere?
> >
> >   Next run the same thing but add -ksp_type fgmres; does that change
> anything?
> >
> >   Send all the results in an attached file.
> >
> > Barry
> >
> >
> >
> > >
> > > However this does not seem to change the outer nonlinear convergence.
> When I run with ksp_view the local block sizes look correct - number of
> local 2D slab DOFs for each block on each processor.
> > >
> > > Any ideas on what sort of KSP/PC configuration I should be using to
> recover the convergence of the uncoupled 2D slab linear solve for this 3D
> linear solve? Should I be rethinking my node indexing to help with this?
> > >
> > > Cheers, Dave.
> >
> > 
>
>


Re: [petsc-users] Block preconditioning for 3d problem

2019-10-10 Thread Dave Lee via petsc-users
Hi Jed and Mark,

thanks for your helpful comments. Yes, the nonlinear outer problem is
uncoupled between the slabs; it is only the linear inner problem where they
are coupled.

I've tried to make the slab DOFs close in memory, and also tried using a
tight tolerance on the outer KSP (1.0e-20), but without success.

You make some really good points about scaling issues, Jed and Mark. This is
a solve for a global atmosphere, and each 2D slab is a horizontal layer of
the atmosphere, so the pressure (which the linear solve is for) will vary
dramatically between slabs. Perhaps I can additionally precondition the
linear problem to normalise the pressure in each slab so that they stay
close to unity.
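
One way to realise that normalisation is a symmetric diagonal scaling of the
inner linear system. The sketch below assumes the operator is an assembled Mat
A with right-hand side b (if it is matrix-free, the scaling would have to be
folded into the residual routine instead), a slab-major local ordering with
n_slab_dofs unknowns per slab per process, and a representative pressure
p_ref[s] for each slab (all of these names are assumptions, not existing code):

/* Sketch: scale each slab so its pressure unknowns are O(1).
   Solve (D A D) y = D b, then recover x = D y, with D = diag(d). */
Vec d;
PetscInt s, i, start;
VecDuplicate(b, &d);
VecGetOwnershipRange(d, &start, NULL);
for (s = 0; s < num_slabs; s++) {
  for (i = 0; i < n_slab_dofs; i++) {
    VecSetValue(d, start + s*n_slab_dofs + i, 1.0/PetscSqrtReal(p_ref[s]), INSERT_VALUES);
  }
}
VecAssemblyBegin(d); VecAssemblyEnd(d);
MatDiagonalScale(A, d, d);    /* A <- diag(d) A diag(d) */
VecPointwiseMult(b, d, b);    /* b <- diag(d) b */
/* after KSPSolve: VecPointwiseMult(x, d, x) to undo the scaling */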

Cheers, Dave.

On Fri, Oct 11, 2019 at 2:52 AM Mark Adams  wrote:

> I can think of a few sources of coupling in the solver: 1) the line search,
> 2) the Krylov method, and 3) the residual test (scaling issues). You could turn
> the line search off and use Richardson (with a fixed number of iterations) or
> exact solves as Jed suggested. As far as scaling goes, can you use the same NL
> problem on each slab? This should fix all the problems anyway. Or, on the
> good decoupled solves, if the true residual is of the same scale and *all*
> of the slabs converge well, then you should be OK on scaling. If this works,
> then start adding stuff back in and see what breaks it.
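
If the outer iteration were moved into SNES, Mark's first two suggestions map
onto standard options; a sketch, assuming current option names (with the
hand-rolled quasi-Newton loop, the equivalent switches have to be made in that
code instead):

-snes_linesearch_type basic          (full Newton steps, no line search)
-ksp_type richardson -ksp_max_it 5   (fixed number of stationary inner iterations)
-ksp_type preonly -pc_type lu        (exact inner solves, as Jed suggested;
                                      in parallel this needs an external direct
                                      solver package)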
>
> On Thu, Oct 10, 2019 at 11:01 AM Jed Brown via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Dave Lee via petsc-users  writes:
>>
>> > Hi PETSc,
>> >
>> > I have a nonlinear 3D problem for a set of uncoupled 2D slabs. (Which I
>> > ultimately want to couple once this problem is solved).
>> >
>> > When I solve the inner linear problem for each of these 2D slabs
>> > individually (using KSPGMRES) the convergence of the outer nonlinear
>> > problem is good. However when I solve the inner linear problem as a
>> single
>> > 3D problem (with no coupling between the 2D slabs, so the matrix is
>> > effectively a set of uncoupled blocks, one for each 2D slab) the outer
>> > nonlinear convergence degrades dramatically.
>>
>> Is the nonlinear problem also decoupled between slabs?
>>
>> If you solve the linear problem accurately (tight tolerances on the
>> outer KSP, or global direct solve), is the outer nonlinear convergence
>> good again?  If not, test that your Jacobian is correct (it likely isn't
>> or you have inconsistent scaling leading to ill conditioning).  SNES has
>> automatic tests for that, but you aren't using SNES so you'd have to
>> write some code.
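
Since the -snes_test_jacobian machinery is tied to SNES, a rough hand-rolled
equivalent is to compare the operator used in the inner linear solve against a
finite difference of the nonlinear residual along a random direction. In the
sketch below, ComputeResidual() stands in for the application's own residual
routine, and x and J are the current iterate and the Mat handed to the inner
KSP (assembled or shell); these names are assumptions:

/* Check J*v against (F(x + h v) - F(x)) / h for a random direction v. */
Vec v, Fx, Fpert, Jv;
PetscReal h = 1.0e-7, err, ref;
PetscRandom rnd;

VecDuplicate(x, &v);     VecDuplicate(x, &Fx);
VecDuplicate(x, &Fpert); VecDuplicate(x, &Jv);
PetscRandomCreate(PETSC_COMM_WORLD, &rnd);
VecSetRandom(v, rnd);

ComputeResidual(x, Fx);         /* F(x) */
VecAXPY(x, h, v);               /* x <- x + h v */
ComputeResidual(x, Fpert);      /* F(x + h v) */
VecAXPY(x, -h, v);              /* restore x */
VecAXPY(Fpert, -1.0, Fx);
VecScale(Fpert, 1.0/h);         /* finite-difference approximation of J v */

MatMult(J, v, Jv);              /* the J actually given to the inner solve */
VecAXPY(Jv, -1.0, Fpert);
VecNorm(Jv, NORM_2, &err);
VecNorm(Fpert, NORM_2, &ref);
PetscPrintf(PETSC_COMM_WORLD, "relative Jacobian error: %g\n", (double)(err/ref));
PetscRandomDestroy(&rnd);

If this relative error is not small (roughly of the order of h), the inner
operator is inconsistent with the residual and no preconditioner choice will
rescue the outer convergence.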
>>
>> What happens if you run the 2D problem (where convergence is currently
>> good) with much smaller subdomains (or -pc_type pbjacobi)?
>>
>> > Note that I am not using SNES, just my own quasi-Newton approach for the
>> > outer nonlinear problem.
>> >
>> > I suspect that the way to recover the convergence for the 3D coupled
>> > problem is to use some sort of PCBJACOBI or PCFIELDSPLIT preconditioner,
>> > but I'm not entirely sure. I've tried following this example:
>> >
>> https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex7.c.html
>> > but without any improvement in the convergence.
>> >
>> > On each processor the slab index is the major index and the internal
>> slab
>> > DOF index is the minor index. The parallel decomposition is within the
>> 2D
>> > slab dimensions only, not between slabs.
>>
>> For convergence, you usually want the direction of tight coupling
>> (sounds like that is within slabs) to be close in memory.
>>
>> In general, use -ksp_monitor_true_residual -ksp_converged_reason.
>>
>> > I'm configuring the 3D inner linear solver as
>> >
>> > KSPGetPC(ksp, &pc);
>> > PCSetType(pc, PCBJACOBI);
>> > PCBJacobiSetLocalBlocks(pc, num_slabs, NULL);
>> > KSPSetUp(ksp);
>> > PCBJacobiGetSubKSP(pc, &nlocal, &first_local, &subksp);
>> > for(int ii = 0; ii < nlocal; ii++) {
>> > KSPGetPC(subksp[ii], &subpc);
>> > PCSetType(subpc, PCJACOBI);
>> > KSPSetType(subksp[ii], KSPGMRES);
>> > KSPSetTolerances(subksp[ii], 1.e-12, PETSC_DEFAULT, PETSC_DEFAULT,
>> > PETSC_DEFAULT);
>> > }
>> >
>> > However this does not seem to change the outer nonlinear convergence.
>> When
>> > I run with ksp_view the local block sizes look correct - number of
>> local 2D
>> > slab DOFs for each block on each processor.
>> >
>> > Any ideas on what sort of KSP/PC configuration I should be using to
>> recover
>> > the convergence of the uncoupled 2D slab linear solve for this 3D linear
>> > solve? Should I be rethinking my node indexing to help with this?
>> >
>> > Cheers, Dave.
>>
>


[petsc-users] Computing residual norm in KSPFGMRESCycle()

2019-07-04 Thread Dave Lee via petsc-users
Hi PETSc,

I have a problem for which I need to exclude certain degrees of freedom
from the evaluation of the norm as used to monitor the residual in FGMRES
(don't ask!). Therefore I would like to replace the existing norm
evaluation with my own version.

Currently this is done within:
KSPFGMRESUpdateHessenberg(ksp,loc_it,hapend,&res_norm)
as:
*res = PetscAbsScalar(*RS(it+1));

Firstly, I would like to make sure I can replicate the existing residual
norm. I tried to do this as:
KSPFGMRESUpdateHessenberg(ksp,loc_it,hapend,&res_norm);
KSPFGMRESBuildSoln(fgmres->nrs,ksp->vec_sol,tmp_sol,ksp,loc_it);
VecNorm(tmp_sol,NORM_2,&res_tmp);
VecNorm(ksp->vec_rhs,NORM_2,&res_rhs);
res_norm = fabs(res_tmp-res_rhs);

But this doesn't match the norm that comes out of UpdateHessenberg.
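
For what it's worth, the recurrence value |*RS(it+1)| estimates
norm_L2(b - A*x_k), not |norm_L2(x_k) - norm_L2(b)|, so the two numbers are not
expected to agree. A sketch of the explicit check, to be placed after
KSPFGMRESBuildSoln() has produced the full iterate in tmp_sol (this is not the
existing fgmres.c code, just an illustration):

PC  pc;
Mat Amat;
Vec r;
PetscReal res_true;
KSPGetPC(ksp, &pc);
PCGetOperators(pc, &Amat, NULL);
VecDuplicate(ksp->vec_rhs, &r);
MatMult(Amat, tmp_sol, r);       /* r = A x_k */
VecAYPX(r, -1.0, ksp->vec_rhs);  /* r = b - A x_k */
VecNorm(r, NORM_2, &res_true);
VecDestroy(&r);

Since FGMRES is always right-preconditioned, res_true should agree with the
recurrence value up to round-off.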

Any ideas on how I can re-create the norm derived from UpdateHessenberg as
an expansion over the Krylov vectors in FGMRES?

Cheers, Dave.


Re: [petsc-users] Singlar values of the GMRES Hessenberg matrix

2019-05-27 Thread Dave Lee via petsc-users
Hi Matt and PETSc.

Thanks again for the advice.

So I think I know what my problem might be. Looking at the comments above
the function
KSPInitialResidual()
in
src/ksp/ksp/interface/itres.c
I see that the initial residual, as passed into VEC_VV(0) is the residual
of the *preconditioned* system (and that the original residual goes
temporarily to gmres->vecs[1]).

So I'm wondering, is the Hessenberg, as derived via the *HES(row,col) macro,
the Hessenberg for the original Krylov subspace, or the preconditioned
subspace?

Secondly, do the vecs within the KSP_GMRES structure, as accessed via
VEC_VV() correspond to the (preconditioned) Krylov subspace or the
orthonormalized vectors that make up the matrix Q_k in the Arnoldi
iteration? This isn't clear to me, and I need to access the vectors in Q_k in
order to expand the corrected hookstep solution.
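
As far as I can tell from gmres.c, both quantities live in the preconditioned
Arnoldi relation: VEC_VV(i) are the orthonormalised basis vectors (the columns
of Q_k) and *HES(row,col) is the corresponding Hessenberg, so both refer to the
preconditioned subspace. A paraphrased sketch of how KSPGMRESBuildSoln()
expands a coefficient vector nrs over them (my reading of the source, not the
verbatim code):

VecSet(VEC_TEMP, 0.0);
VecMAXPY(VEC_TEMP, it + 1, nrs, &VEC_VV(0));            /* temp = sum_i nrs[i] V_i */
KSPUnwindPreconditioner(ksp, VEC_TEMP, VEC_TEMP_MATOP); /* B^{-1} temp if right-preconditioned */
VecAXPY(vdest, 1.0, VEC_TEMP);                          /* x = x_0 + correction */

A hookstep coefficient vector computed from the SVD of that Hessenberg can be
expanded the same way, with it replacing nrs. Note that KSPFGMRESBuildSoln()
instead expands over the stored preconditioned vectors (prevecs), since FGMRES
allows the preconditioner to change between iterations.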

Thanks again, Dave.

On Sat, May 25, 2019 at 6:18 PM Dave Lee  wrote:

> Thanks Matt, this is where I'm adding in my hookstep code.
>
> Cheers, Dave.
>
> On Fri, May 24, 2019 at 10:49 PM Matthew Knepley 
> wrote:
>
>> On Fri, May 24, 2019 at 8:38 AM Dave Lee  wrote:
>>
>>> Thanks Matt, great suggestion.
>>>
>>> I did indeed find a transpose error this way. The SVD as reconstructed
>>> via U S V^T now matches the input Hessenberg matrix as derived via the
>>> *HES(row,col) macro, and all the singular values are non-zero. However
>>> the solution to example src/ksp/ksp/examples/tutorials/ex1.c as
>>> determined via the expansion over the singular vectors is still not
>>> correct. I suspect I'm doing something wrong with regards to the expansion
>>> over the vec array VEC_VV(), which I assume are the orthonormal vectors
>>> of the Q_k matrix in the Arnoldi iteration
>>>
>>
>> Here we are building the solution:
>>
>>
>> https://bitbucket.org/petsc/petsc/src/7c23e6aa64ffbff85a2457e1aa154ec3d7f238e3/src/ksp/ksp/impls/gmres/gmres.c#lines-331
>>
>> There are some subtleties if you have a  nonzero initial guess or a
>> preconditioner.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Thanks again for your advice, I'll keep digging.
>>>
>>> Cheers, Dave.
>>>
>>> On Thu, May 23, 2019 at 8:20 PM Matthew Knepley 
>>> wrote:
>>>
>>>> On Thu, May 23, 2019 at 5:09 AM Dave Lee via petsc-users <
>>>> petsc-users@mcs.anl.gov> wrote:
>>>>
>>>>> Hi PETSc,
>>>>>
>>>>> I'm trying to add a "hook step" to the SNES trust region solver (at
>>>>> the end of the function: KSPGMRESBuildSoln())
>>>>>
>>>>> I'm testing this using the (linear) example:
>>>>> src/ksp/ksp/examples/tutorials/ex1.c
>>>>> as
>>>>> gdb  --args ./test -snes_mf -snes_type newtontr -ksp_rtol 1.0e-12
>>>>> -snes_stol 1.0e-12 -ksp_converged_reason -snes_converged_reason
>>>>> -ksp_monitor -snes_monitor
>>>>> (Ignore the SNES stuff, this is for when I test nonlinear examples).
>>>>>
>>>>> When I call the LAPACK SVD routine via PETSc as
>>>>> PetscStackCallBLAS("LAPACKgesvd",LAPACKgesvd_(...))
>>>>> I get the following singular values:
>>>>>
>>>>>   0 KSP Residual norm 7.071067811865e-01
>>>>>   1 KSP Residual norm 3.162277660168e-01
>>>>>   2 KSP Residual norm 1.889822365046e-01
>>>>>   3 KSP Residual norm 1.290994448736e-01
>>>>>   4 KSP Residual norm 9.534625892456e-02
>>>>>   5 KSP Residual norm 8.082545620881e-16
>>>>>
>>>>> 1 0.5 -7.85046e-16 1.17757e-15
>>>>> 0.5 1 0.5 1.7271e-15
>>>>> 0 0.5 1 0.5
>>>>> 0 0 0.5 1
>>>>> 0 0 0 0.5
>>>>>
>>>>> singular values: 2.36264 0.409816 1.97794e-15 6.67632e-16
>>>>>
>>>>> Linear solve converged due to CONVERGED_RTOL iterations 5
>>>>>
>>>>> Where the lines above the singular values are the Hessenberg matrix
>>>>> that I'm doing the SVD on.
>>>>>
>>>>
>>>> First, write out all the SVD matrices you get and make sure that they
>>>> reconstruct the input matrix (that
>>>> you do not have something transposed somewhere).
>>>>
>>>>Matt
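
A self-contained version of that reconstruction check (a sketch only; H is a
saved copy of the (k+1) x k Hessenberg in column-major order with leading
dimension ldh = m, since gesvd overwrites its input, and U, S, VT are the
outputs of a full SVD with jobu = jobvt = "A"; real scalars assumed):

/* Verify that U * diag(S) * VT reproduces the column-major input H. */
PetscInt  i, j, l;
PetscReal maxdiff = 0.0;
for (j = 0; j < n; j++) {
  for (i = 0; i < m; i++) {
    PetscScalar hij = 0.0;
    for (l = 0; l < PetscMin(m, n); l++) hij += U[i + l*m] * S[l] * VT[l + j*n];
    maxdiff = PetscMax(maxdiff, PetscAbsScalar(hij - H[i + j*ldh]));
  }
}
PetscPrintf(PETSC_COMM_SELF, "max |U S V^T - H| = %g\n", (double)maxdiff);

If this comes out at round-off level, the orderings of H, U and VT are
consistent; a large value usually means one of them is effectively transposed.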
>>>>
>>>>
>>>>> When I build the solution in terms of the leading two right singular

Re: [petsc-users] Calling LAPACK routines from PETSc

2019-05-20 Thread Dave Lee via petsc-users
Thanks Barry,

I found some helpful examples on the intel lapack site - moral of the
story: using C ordering for input matrix, but transposed output matrices
leads to a consistent solution.

Cheers, Dave.
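
Concretely, following Jed's and Barry's advice below about Fortran ordering, a
sketch of the gesvd call for the (k+1) x k Hessenberg (real scalars assumed;
with complex scalars gesvd takes an extra rwork argument, and the copy from
*HES() is only a convenient way to get a contiguous column-major block):

PetscBLASInt m, n, lda, lwork, info, i, j;
PetscScalar  *H, *U, *VT, *work;
PetscReal    *S;

PetscBLASIntCast(k + 1, &m);
PetscBLASIntCast(k, &n);
lda   = m;
lwork = 5*m;   /* generous; LAPACK's documented minimum is smaller */
PetscMalloc5(m*n, &H, m*m, &U, n*n, &VT, lwork, &work, n, &S);

/* pack the Hessenberg column-major: H[i + j*lda] holds row i, column j */
for (j = 0; j < n; j++)
  for (i = 0; i < m; i++) H[i + j*lda] = *HES(i, j);

PetscStackCallBLAS("LAPACKgesvd",
                   LAPACKgesvd_("A", "A", &m, &n, H, &lda, S, U, &m, VT, &n,
                                work, &lwork, &info));

/* On return (still column-major): column l of U (entries U[i + l*m]) is the
   l-th left singular vector, and row l of VT holds the l-th right singular
   vector; no transposition is needed as long as they are indexed this way. */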

On Mon, May 20, 2019 at 6:07 PM Smith, Barry F.  wrote:

>
>
> > On May 20, 2019, at 2:28 AM, Dave Lee  wrote:
> >
> > Thanks Jed and Barry,
> >
> > So, just to confirm,
> >
> > -- From the KSP_GMRES structure, if I call *HH(a,b), that will return
> the row a, column b entry of the Hessenberg matrix (while the back end
> array *hh_origin array is ordering using the Fortran convention)
> >
> > -- Matrices are passed into and returned from
> PetscStackCallBLAS("LAPACKgesvd",LAPACKgesvd_() using Fortran indexing, and
> need to be transposed to get back to C ordering
>
>   In general, I guess depending on what you want to do with them, you don't
> need to transpose them. Why would you want to? Just leave them as little
> column-oriented blobs and do with them what you need directly.
>
>Just do stuff and you'll find it works out.
>
> >
> > Are both of these statements correct?
> >
> > Cheers, Dave.
> >
> > On Mon, May 20, 2019 at 4:34 PM Smith, Barry F. 
> wrote:
> >
> >The little work arrays in GMRES tend to be stored in Fortran
> ordering; there is no C style p[][] indexing into such arrays. Thus the
> arrays can safely be sent to LAPACK.  The only trick is knowing the two
> dimensions and, as Jed says, the "leading dimension" parameter. He gave you a
> place to look.
> >
> > > On May 20, 2019, at 1:24 AM, Jed Brown via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
> > >
> > > Dave Lee via petsc-users  writes:
> > >
> > >> Hi Petsc,
> > >>
> > >> I'm attempting to implement a "hookstep" for the SNES trust region
> solver.
> > >> Essentially what I'm trying to do is replace the solution of the least
> > >> squares problem at the end of each GMRES solve with a modified
> solution
> > >> with a norm that is constrained to be within the size of the trust
> region.
> > >>
> > >> In order to do this I need to perform an SVD on the Hessenberg matrix,
> > >> which copying the function KSPComputeExtremeSingularValues(), I'm
> trying to
> > >> do by accessing the LAPACK function dgesvd() via the
> PetscStackCallBLAS()
> > >> machinery. One thing I'm confused about however is the ordering of
> the 2D
> > >> arrays into and out of this function, given that that C and FORTRAN
> arrays
> > >> use reverse indexing, ie: C[j+1][i+1] = F[i,j].
> > >>
> > >> Given that the Hessenberg matrix has k+1 rows and k columns, should I
> > >> still be initializing this as H[row][col] and passing this into
> > >> PetscStackCallBLAS("LAPACKgesvd",LAPACKgesvd_(...))
> > >> or should I be transposing this before passing it in?
> > >
> > > LAPACK terminology is with respect to Fortran ordering.  There is a
> > > "leading dimension" parameter so that you can operate on non-contiguous
> > > blocks.  See KSPComputeExtremeSingularValues_GMRES for an example.
> > >
> > >> Also for the left and right singular vector matrices that are
> returned by
> > >> this function, should I be transposing these before I interpret them
> as C
> > >> arrays?
> > >>
> > >> I've attached my modified version of gmres.c in case this is helpful.
> If
> > >> you grep for DRL (my initials) then you'll see my changes to the code.
> > >>
> > >> Cheers, Dave.
> > >>
> > >> /*
> > >>This file implements GMRES (a Generalized Minimal Residual) method.
> > >>Reference:  Saad and Schultz, 1986.
> > >>
> > >>
> > >>Some comments on left vs. right preconditioning, and restarts.
> > >>Left and right preconditioning.
> > >>If right preconditioning is chosen, then the problem being solved
> > >>by gmres is actually
> > >>   My =  AB^-1 y = f
> > >>so the initial residual is
> > >>  r = f - Mx
> > >>Note that B^-1 y = x or y = B x, and if x is non-zero, the initial
> > >>residual is
> > >>  r = f - A x
> > >>The final solution is then
> > >>  x = B^-1 y
> > >>
> > >>If left preconditioning is

[petsc-users] testing for and removing a null space using JFNK

2019-04-04 Thread Dave Lee via petsc-users
Hello PETSc,

I'm attempting to solve a JFNK problem for a system where I only have a
function to compute the residual, but no matrix.

I wanted to know if there exists functionality in PETSc to do the following:

1) approximate a null space from a set of Krylov vectors

2) remove such a null space if it exists

I'm vaguely familiar with the MatNullSpaceCreate/Remove() functionality;
however, I don't know the precise form of the null space, so I don't have a
set of vectors I can assemble and pass to this.
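
As far as I know there is no built-in routine that extracts an approximate null
space from the Krylov vectors themselves, so step 1) needs application code.
Once a set of orthonormal basis vectors for the (near) null space is in hand,
attaching and removing it is straightforward; a minimal usage sketch, where
nullvecs, nnull, A and b stand in for the application's objects:

MatNullSpace nsp;
/* nullvecs[0..nnull-1] must be orthonormal Vecs spanning the null space */
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, nnull, nullvecs, &nsp);
MatSetNullSpace(A, nsp);      /* A may be a MATSHELL in a JFNK setting */
MatNullSpaceRemove(nsp, b);   /* optionally project the right-hand side too */
MatNullSpaceDestroy(&nsp);

With the null space attached to the operator, KSP projects it out during the
solve, which is usually what is wanted for a singular but consistent system.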

Cheers, Dave.


Re: [petsc-users] Vecs and Mats with non-contiguous parallel layout

2019-01-31 Thread Dave Lee via petsc-users
Whoops, thanks Matt, well spotted. Looks a lot better now.

Cheers, Dave.

On Thu, Jan 31, 2019 at 10:08 PM Matthew Knepley  wrote:

> On Thu, Jan 31, 2019 at 12:22 AM Dave Lee  wrote:
>
>> Hi Barry and Matt,
>>
>> This seems to work for the outer (composite multiplicative) field split,
>> but fails for the inner (schur) field split (output error file attached).
>>
>
> The inner FS numbering is with respect to the reduced problem, not the
> original global problem (it can't know you did one FS on the outside).
>
>   THanks,
>
> Matt
>
>
>> I'm setting up the nested field splits as:
>>
>>   SNESGetKSP(snes, &ksp);
>>
>>   KSPGetPC(ksp, &pc);
>>
>>   PCSetType(pc, PCFIELDSPLIT);
>>
>>   PCFieldSplitSetType(pc, PC_COMPOSITE_MULTIPLICATIVE);
>>
>>
>>   for(int slice_i = 0; slice_i < nSlice; slice_i++) {
>>
>> if(Geometry::procID() == slice_i) {
>>
>>   ISCreateStride(MPI_COMM_SELF, nDofsSlice, slice_i * nDofsSlice, 1,
>> &is_s[slice_i]);
>>
>> } else {
>>
>>   ISCreateStride(MPI_COMM_SELF, 0, 0, 1, &is_s[slice_i]);
>>
>> }
>>
>> MPI_Barrier(MPI_COMM_WORLD);
>>
>> PCFieldSplitSetIS(pc, NULL, is_s[slice_i]);
>>
>>   }
>>
>>   PCSetUp(pc);
>>
>>
>>   PCFieldSplitGetSubKSP(pc, &n_split, &ksp_i);
>>
>>   for(int slice_i = 0; slice_i < nSlice; slice_i++) {
>>
>> KSPGetPC(ksp_i[slice_i], &pc_i);
>>
>> PCSetType(pc_i, PCFIELDSPLIT);
>>
>>
>> if(Geometry::procID() == slice_i) {
>>
>>   ISCreateStride(MPI_COMM_SELF, nDofs_u, slice_i * nDofsSlice, 1,
>> &is_u[slice_i]);
>>
>>   ISCreateStride(MPI_COMM_SELF, nDofs_p, slice_i * nDofsSlice +
>> nDofs_u, 1, &is_p[slice_i]);
>>
>> } else {
>>
>>   ISCreateStride(MPI_COMM_SELF, 0, 0, 1, &is_u[slice_i]);
>>
>>   ISCreateStride(MPI_COMM_SELF, 0, 0, 1, &is_p[slice_i]);
>>
>> }
>>
>> MPI_Barrier(MPI_COMM_WORLD);
>>
>> PCFieldSplitSetIS(pc_i, "u", is_u[slice_i]);
>>
>> PCFieldSplitSetIS(pc_i, "p", is_p[slice_i]);
>>
>>
>> PCFieldSplitSetType(pc_i, PC_COMPOSITE_SCHUR);
>>
>> PCSetUp(pc_i);
>>
>>
>> if(Geometry::procID() == slice_i) {
>>
>>   PCFieldSplitGetSubKSP(pc_i, &m_split, &ksp_j);
>>
>>       KSPSetType(ksp_j[0], KSPGMRES);
>>
>>   KSPSetType(ksp_j[1], KSPGMRES);
>>
>>   KSPGetPC(ksp_j[0], &pc_j);
>>
>>   PCSetType(pc_j, PCJACOBI);
>>
>>   KSPGetPC(ksp_j[1], &pc_j);
>>
>>   PCSetType(pc_j, PCJACOBI);
>>
>> }
>>
>> MPI_Barrier(MPI_COMM_WORLD);
>>
>>   }
>>
>>
>> I could re-arrange the indexing and then use PCFieldSplitSetBlockSize();
>> however, this would undermine the existing block-diagonal structure of the
>> system.
>>
>> Cheers, Dave.
>>
>> On Thu, Jan 31, 2019 at 1:28 PM Smith, Barry F. 
>> wrote:
>>
>>>
>>>
>>> > On Jan 30, 2019, at 7:36 PM, Dave Lee via petsc-users <
>>> petsc-users@mcs.anl.gov> wrote:
>>> >
>>> > Thanks Matt,
>>> >
>>> > I'm currently working through the details of the nested field split.
>>> I've decided to re-organise my application data layout so that the local
>>> portions are contiguous in the global space for consistency with the
>>> default PETSc layout.
>>> >
>>> > Does the IndexSet need to be the same size on each processor in order
>>> for this IndexSet to be supplied to a FieldSplit PC?
>>>
>>>No, since it is the local size each process can have a different one.
>>> >
>>> > Moreover can an IndexSet have 0 entries on a particular processor and
>>> still be part of a global FieldSplit PC?
>>>
>>>Yes (at least in theory, hopefully there are not bugs for this corner
>>> case).
>>> >
>>> > Cheers, Dave.
>>> >
>>> >
>>> > On Wed, Jan 30, 2019 at 11:10 PM Matthew Knepley 
>>> wrote:
>>> > On Tue, Jan 29, 2019 at 7:58 PM Dave Lee via petsc-users <
>>> petsc-users@mcs.anl.gov> wrote:
>>> > Hi,
>>> >
>>> > just wondering if its possible to manually specify the parallel
>>> decomposit

Re: [petsc-users] Vecs and Mats with non-contiguous parallel layout

2019-01-30 Thread Dave Lee via petsc-users
Thanks Matt,

I'm currently working through the details of the nested field split. I've
decided to re-organise my application data layout so that the local
portions are contiguous in the global space for consistency with the
default PETSc layout.

Does the IndexSet need to be the same size on each processor in order for
this IndexSet to be supplied to a FieldSplit PC?

Moreover can an IndexSet have 0 entries on a particular processor and still
be part of a global FieldSplit PC?

Cheers, Dave.


On Wed, Jan 30, 2019 at 11:10 PM Matthew Knepley  wrote:

> On Tue, Jan 29, 2019 at 7:58 PM Dave Lee via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hi,
>>
>> just wondering if its possible to manually specify the parallel
>> decomposition of data within PETSc Vecs and Mats? And if so, will the
>> FieldSplit preconditioning handle this also?
>>
>> I have an application (non-PETSc) data layout as:
>> DATA[16][4][num_procs][3472]
>>
>> and if possible I would like to use FieldSplit preconditioning without
>> re-arranging this layout.
>>
>
> Yes. You need to call
>
>
> https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFieldSplitSetIS.html
>
> in order to define the splits. Note that if you have nested splits, you
> will have to also call it on the
> nested PC, which stinks, but we have no other way of getting the
> information.
>
> Nested splits can also be handled by implementing the DM interface, but
> that is a lot of work. It is,
> however, the way we handle it internally because it makes things
> much easier and more flexible.
>
>   Thanks,
>
>  Matt
>
>
>> Currently I have a nested FieldSplit preconditioning with 16 outer field
>> splits, and a Schur complement field split within each. However, this throws
>> up an indexing error as FieldSplit assumes that the parallel decomposition
>> is contiguous (I think).
>>
>> [0]PETSC ERROR: #1 ISComplement() line 750 in
>> /home/dlee0035/soft/petsc-3.9.3/src/vec/is/is/utils/iscoloring.c
>>
>> [0]PETSC ERROR: #2 PCSetUp_FieldSplit() line 703 in
>> /home/dlee0035/soft/petsc-3.9.3/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>>
>> [0]PETSC ERROR: #3 PCSetUp() line 923 in
>> /home/dlee0035/soft/petsc-3.9.3/src/ksp/pc/interface/precon.c
>>
>> Cheers, Dave.
>>
>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>