My take is that someone looking for CPR is more likely to end up on the
PCFIELDSPLIT manual page than the PCGALERKIN page.  The solver you have
configured in your mail is not the CPR we have been asked about here.
See Barry's message and example code below.

--- Begin Message ---
   Ok, I have implemented the algorithm using PETSc's PCCOMPOSITE and PCGALERKIN 
and get iterations identical to the code you sent. PCFIELDSPLIT is not intended 
for this type of solver composition. Here is the algorithm written in 
"two-step" form:

   x_1/2 = P KSPSolve(R A P, using BoomerAMG) R b
   x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)

PCCOMPOSITE with type multiplicative handles the two steps, and PCGALERKIN 
handles the P KSPSolve(R A P) R business.
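
Spelled out as individual PETSc calls, one application of the preconditioner is 
roughly the following sketch (the vector names and the Ap_ksp/pilut handles are 
illustrative only; in the attached code PCCOMPOSITE and PCGALERKIN do all of 
this internally):

   MatMult(R,b,rb);             /* rb    = R b                              */
   KSPSolve(Ap_ksp,rb,xp);      /* xp    = (R A P)^{-1} rb, via BoomerAMG   */
   MatMult(P,xp,xhalf);         /* xhalf = x_1/2 = P xp                     */
   MatMult(A,xhalf,r);
   VecAYPX(r,-1.0,b);           /* r     = b - A x_1/2                      */
   PCApply(pilut,r,dx);         /* dx    = PILUT applied to the residual    */
   VecWAXPY(x,1.0,dx,xhalf);    /* x     = x_1/2 + dx                       */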

You will need to use the master version of PETSc, because I had to add a feature 
to PCGALERKIN that allows the solver to be reused easily for an A that changes 
values between solves.
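(The feature in question is the compute-submatrix callback used in the attached 
code, PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL), which lets 
PCGALERKIN rebuild R A P whenever A gets new values.)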


Here is the output from -ksp_view 

KSP Object: 1 MPI processes
  type: fgmres
    GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: composite
  Composite PC type - MULTIPLICATIVE
  PCs on composite preconditioner follow
  ---------------------------------
    PC Object: (sub_0_) 1 MPI processes
      type: galerkin
      Galerkin PC
      KSP on Galerkin follow
      ---------------------------------
      KSP Object: (sub_0_galerkin_) 1 MPI processes
        type: richardson
          Richardson: damping factor=1.
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using PRECONDITIONED norm type for convergence test
      PC Object: (sub_0_galerkin_) 1 MPI processes
        type: hypre
          HYPRE BoomerAMG preconditioning
          HYPRE BoomerAMG: Cycle type V
          HYPRE BoomerAMG: Maximum number of levels 25
          HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
          HYPRE BoomerAMG: Convergence tolerance PER hypre call 0.
          HYPRE BoomerAMG: Threshold for strong coupling 0.25
          HYPRE BoomerAMG: Interpolation truncation factor 0.
          HYPRE BoomerAMG: Interpolation: max elements per row 0
          HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
          HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
          HYPRE BoomerAMG: Maximum row sums 0.9
          HYPRE BoomerAMG: Sweeps down         1
          HYPRE BoomerAMG: Sweeps up           1
          HYPRE BoomerAMG: Sweeps on coarse    1
          HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
          HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
          HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
          HYPRE BoomerAMG: Relax weight  (all)      1.
          HYPRE BoomerAMG: Outer relax weight (all) 1.
          HYPRE BoomerAMG: Using CF-relaxation
          HYPRE BoomerAMG: Not using more complex smoothers.
          HYPRE BoomerAMG: Measure type        local
          HYPRE BoomerAMG: Coarsen type        Falgout
          HYPRE BoomerAMG: Interpolation type  classical
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=50, cols=50
          total: nonzeros=244, allocated nonzeros=244
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=100, cols=100, bs=2
        total: nonzeros=976, allocated nonzeros=100000
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 50 nodes, limit used is 5
    PC Object: (sub_1_) 1 MPI processes
      type: hypre
        HYPRE Pilut preconditioning
        HYPRE Pilut: maximum number of iterations 1
        HYPRE Pilut: drop tolerance 0.1
        HYPRE Pilut: default factor row size 
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: seqaij
        rows=100, cols=100, bs=2
        total: nonzeros=976, allocated nonzeros=100000
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 50 nodes, limit used is 5

Note that one can change the solvers on the two "stages", and all their options 
such as tolerances, using the options database with the proper prefixes, which 
you can find in the output above. For example, to use PETSc's ILU instead of 
hypre's, run with -sub_1_pc_type ilu; to use a direct solver instead of 
BoomerAMG, run with -sub_0_galerkin_pc_type lu.
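
Putting it all together, the composite itself can be selected from the command 
line as well (a sketch; the Galerkin restriction, interpolation, and 
compute-submatrix callback still have to be set in code as in the attachment):

   ./testmain2 -ksp_type fgmres -pc_type composite -pc_composite_type multiplicative \
       -pc_composite_pcs galerkin,hypre \
       -sub_0_galerkin_ksp_type richardson -sub_0_galerkin_pc_type hypre \
       -sub_1_pc_hypre_type pilut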

I've attached a new version of testmain2.c that runs your solver and also my 
version.

Given the "unique" nature of your R = [I I ... I] and P = [0 I 0 0 ... 0], I am 
not sure it makes sense to include this preconditioner directly in PETSc as a 
new PC type; so if you are serious about using it, you can take what I am 
sending back and modify it for your needs. As a numerical analyst who works on 
linear solvers, I am also not convinced that this is likely to be a 
particularly good preconditioner.

  Let me know if you have any questions,

   Barry
// make && mpirun -np 2 ./testmain2 -ksp_error_if_not_converged

#include "MCPR.h"


/*
       Computes the submatrix associated with the Galerkin subproblem Ap = R A P
*/
PetscErrorCode ComputeSubmatrix(PC pc,Mat A,Mat Ap,Mat *cAp,void *ctx)
{
  PetscErrorCode ierr;
  PetscInt       b,Am,An,start,end,first = 0,offset = 1;
  IS             is,js;
  Mat            Aij;

  PetscFunctionBegin;
  ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr);
  ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A,&start,&end);CHKERRQ(ierr);

  /* columns of the pressure unknowns (field at position offset in each block) */
  ierr = ISCreateStride(PetscObjectComm((PetscObject)A),Am/b,start+offset,b,&js);CHKERRQ(ierr);
  if (!Ap) {
    /* first call: create Ap from the field-0 rows, accumulate the rest below */
    ierr  = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+0,b,&is);CHKERRQ(ierr);
    ierr  = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Ap);CHKERRQ(ierr);
    ierr  = ISDestroy(&is);CHKERRQ(ierr);
    *cAp  = Ap;
    first = 1;
  } else {
    /* later calls: A has new values, so rebuild Ap in place */
    ierr = MatZeroEntries(Ap);CHKERRQ(ierr);
  }

  /* Ap = sum over fields k of A(field-k rows, pressure columns), i.e. R A P */
  for (PetscInt k=first; k<b; ++k) {
    ierr = ISCreateStride(PetscObjectComm((PetscObject)A),An/b,start+k,b,&is);CHKERRQ(ierr);
    ierr = MatGetSubMatrix(A,is,js,MAT_INITIAL_MATRIX,&Aij);CHKERRQ(ierr);
    ierr = MatAXPY(Ap,1.0,Aij,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = MatDestroy(&Aij);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);
  }
  ierr = ISDestroy(&js);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/*
    Apply the restriction operator R = [I I ... I] for the Galerkin problem
*/
PetscErrorCode ApplyR(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  PetscInt       b;
  PetscFunctionBegin;
  ierr = VecGetBlockSize(x,&b);CHKERRQ(ierr);
  /* y = sum of the b strided components of x */
  ierr = VecStrideGather(x,0,y,INSERT_VALUES);CHKERRQ(ierr);
  for (PetscInt k=1; k<b; ++k) {ierr = VecStrideGather(x,k,y,ADD_VALUES);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}

/*
    Apply the interpolation operator P = [0 I 0 0 ... 0] for the Galerkin problem
*/
PetscErrorCode ApplyP(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  PetscInt       offset = 1;
  PetscFunctionBegin;
  /* scatter x into component offset of y; the other components are untouched */
  ierr = VecStrideScatter(x,offset,y,INSERT_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}



int main(int argc,char *argv[])
{
  PetscInitialize(&argc,&argv,NULL,NULL);
  int rank,size;
  MPI_Comm_rank(MPI_COMM_WORLD,&rank);  /* get current process id */
  MPI_Comm_size(MPI_COMM_WORLD,&size);  /* get number of processes */
  MPI_Comm C = PETSC_COMM_WORLD;
  PetscRandom rnd;
  PetscRandomCreate(C,&rnd);
  PetscRandomSetInterval(rnd,0.0,1.0);
  PetscRandomSetFromOptions(rnd);
  int M = 100;
  int N = size*M;

  Mat A = getSampleMatrix(M);

  Mat T1   = getT1(A,1);
  Mat T2   = getT2(A,0.1);
  Mat MCPR = getMCPR(A,T1,T2);

  Vec u,v,w,z;
  VecCreate(C,&u);
  VecSetBlockSize(u,2);
  VecSetSizes(u,M,N);
  VecSetFromOptions(u);
  VecDuplicate(u,&v);
  VecDuplicate(u,&w);
  VecDuplicate(u,&z);
  VecSetRandom(u,rnd);

  /* run your MCPR preconditioner (wrapped in a PCSHELL by putMatrixInPC);
     k starts at 1 so T2 alone is skipped */
  Mat mats[] = {T2,MCPR};
  const char *names[] = {"T2","MCPR"};
  for (int k=1; k<2; ++k) {
    KSP solver;
    KSPCreate(C,&solver);
    KSPSetOperators(solver,A,A);
    KSPSetType(solver,KSPFGMRES);
    KSPGMRESSetRestart(solver,N);
    PC pc;
    KSPGetPC(solver,&pc);
    putMatrixInPC(pc,mats[k]);
    KSPSetFromOptions(solver);
    KSPSolve(solver,u,v);
    KSPConvergedReason reason;
    int its;
    KSPGetConvergedReason(solver,&reason);
    KSPGetIterationNumber(solver,&its);
    PetscPrintf(PETSC_COMM_WORLD,"testmain2: %s converged reason %d; iterations %d.\n",names[k],reason,its);
    KSPView(solver,PETSC_VIEWER_STDOUT_WORLD);
    KSPDestroy(&solver);
  }


  /*
     The preconditioner is

         x_1/2 = P KSPSolve(R A P, using BoomerAMG) R b
         x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)

     where the first line is implemented using PCGALERKIN with BoomerAMG on the
     subproblem, so it can be written as

         x_1/2 = PCApply(A, using PCGALERKIN with KSPSolve(R A P, using BoomerAMG)
                 as the inner solver)
         x     = x_1/2 + PCApply(A, using Hypre PILUT preconditioner) (b - A x_1/2)

     which is implemented using the PETSc PCCOMPOSITE preconditioner of type
     multiplicative, so it can be written as

         x = PCApply(A, using PCCOMPOSITE with (PCGALERKIN with
             KSPSolve(R A P, using BoomerAMG) as the inner solver) as the first
             solver and PCApply(A, using Hypre PILUT preconditioner) as the
             second solver)
  */
  {
    PetscErrorCode ierr;
    KSP            ksp;
    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPFGMRES);CHKERRQ(ierr);
    ierr = KSPGMRESSetRestart(ksp,100);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
    PC pc;
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCCOMPOSITE);CHKERRQ(ierr);
    ierr = PCCompositeSetType(pc,PC_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr);

    /* Create the first subproblem solver: hypre BoomerAMG on Ap = R A P */
    PC t1;
    ierr = PCCompositeAddPC(pc,PCGALERKIN);CHKERRQ(ierr);
    ierr = PCCompositeGetPC(pc,0,&t1);CHKERRQ(ierr);
    KSP Ap_ksp;
    ierr = PCGalerkinGetKSP(t1,&Ap_ksp);CHKERRQ(ierr);
    ierr = KSPSetType(Ap_ksp,KSPRICHARDSON);CHKERRQ(ierr);
    PC Ap_pc;
    ierr = KSPGetPC(Ap_ksp,&Ap_pc);CHKERRQ(ierr);
    ierr = PCSetType(Ap_pc,PCHYPRE);CHKERRQ(ierr);
    /* this tells the PC how to compute the reduced matrix */
    ierr = PCGalerkinSetComputeSubmatrix(t1,ComputeSubmatrix,NULL);CHKERRQ(ierr);
    PetscInt b,Am,An;
    ierr = MatGetLocalSize(A,&Am,&An);CHKERRQ(ierr);
    ierr = MatGetBlockSize(A,&b);CHKERRQ(ierr);
    int start,end;
    ierr = MatGetOwnershipRange(A,&start,&end);CHKERRQ(ierr);
    /* create the R operator */
    Mat R;
    ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am/b,An,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&R);CHKERRQ(ierr);
    ierr = MatShellSetOperation(R,MATOP_MULT,(void (*)(void))ApplyR);CHKERRQ(ierr);
    ierr = PCGalerkinSetRestriction(t1,R);CHKERRQ(ierr);
    /* create the P operator */
    Mat P;
    ierr = MatCreateShell(PetscObjectComm((PetscObject)A),Am,An/b,PETSC_DETERMINE,PETSC_DETERMINE,NULL,&P);CHKERRQ(ierr);
    ierr = MatShellSetOperation(P,MATOP_MULT,(void (*)(void))ApplyP);CHKERRQ(ierr);
    ierr = PCGalerkinSetInterpolation(t1,P);CHKERRQ(ierr);

    /* Create the second subproblem solver, Block ILU (hypre PILUT here) */
    PC t2;
    ierr = PCCompositeAddPC(pc,PCHYPRE);CHKERRQ(ierr);
    ierr = PCCompositeGetPC(pc,1,&t2);CHKERRQ(ierr);
    ierr = PCSetType(t2,PCHYPRE);CHKERRQ(ierr);
    ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_maxiter","1");CHKERRQ(ierr);
    char s[100];
    sprintf(s,"%e",.1);
    ierr = PetscOptionsSetValue(NULL,"-sub_1_pc_hypre_pilut_tol",s);CHKERRQ(ierr);
    ierr = PCHYPRESetType(t2,"pilut");CHKERRQ(ierr);

    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    ierr = VecZeroEntries(v);CHKERRQ(ierr);
    ierr = KSPSolve(ksp,u,v);CHKERRQ(ierr);
    ierr = KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  }
  PetscFinalize();
  return 0;
}



> On Jan 23, 2017, at 8:25 AM, Sébastien Loisel <sloi...@gmail.com> wrote:
> 
> Hi Barry,
> 
> Thanks for your email. Thanks for pointing out my PETSc silliness. I think 
> what happened is I played with matrices as well as with preconditioners, so I 
> initially implemented it as a MATSHELL and at the end wrapped it in a PCSHELL. 
> :)
> 
> On Mon, Jan 23, 2017 at 2:26 AM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> I've started to look at this; it is a little weird using MATSHELL within your 
> PCSHELL, seems unnecessary. Not necessarily wrong, just strange. I'll 
> continue to try to understand it.
> 
>   Barry
> 
> 
> 
> > On Jan 20, 2017, at 5:48 PM, Sébastien Loisel <sloi...@gmail.com> wrote:
> >
> > OK I'm attaching the prototype.
> >
> > It won't be 100% plug-and-play for you because in principle T1 and T2 are 
> > built on top of "sub-preconditioners" (AMG for T1 and BILU for T2) and, 
> > judging from the PETSc architecture elsewhere, I must assume you're going to 
> > want to expose those in a rational way. At present, I've hard-coded some 
> > sub-preconditioners, and in particular for T2 I had to resort to PILUT 
> > because I didn't have a BILU(0) handy.
> >
> > Also, I broke the PETSc law and my functions don't return integers, so I 
> > also assume you're going to want to tweak that... Sorry!
> >
> > On Fri, Jan 20, 2017 at 11:29 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> >   Sure, email the PCSHELL version, best case scenario  I only need change 
> > out the PCSHELL and it takes me 5 minutes :-)
> >
> >
> > > On Jan 20, 2017, at 5:07 PM, Sébastien Loisel <sloi...@gmail.com> wrote:
> > >
> > > Hi all,
> > >
> > > Thanks for your emails. I'm willing to help in whatever way. We have a 
> > > "PCSHELL" prototype we can provide on request, although PETSc experts can 
> > > no doubt do a better job than I did.
> > >
> > > Thanks,
> > >
> > > On Fri, Jan 20, 2017 at 9:50 PM, Robert Annewandter 
> > > <robert.annewand...@opengosim.com> wrote:
> > > Indeed that would be very helpful! We're very happy to help test 
> > > things out, provide feedback etc.
> > >
> > > Many thanks!
> > > Robert
> > >
> > >
> > >
> > > On 20/01/17 21:22, Hammond, Glenn E wrote:
> > >> That sounds great.  I do know that there is also much interest 
> > >> state-side in CPR preconditioning within PFLOTRAN.  I have a Sandia 
> > >> colleague in Carlsbad, NM who has been asking about it.  I am sure that 
> > >> Sebastien and/or Robert will help out in any way possible.
> > >>
> > >> Thanks,
> > >>
> > >> Glenn
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Barry Smith [mailto:bsm...@mcs.anl.gov]
> > >>> Sent: Friday, January 20, 2017 12:58 PM
> > >>> To: Hammond, Glenn E <geha...@sandia.gov>
> > >>> Cc: Sébastien Loisel <sloi...@gmail.com>; Robert Annewandter <robert.annewand...@opengosim.com>; Jed Brown <j...@jedbrown.org>; Paolo Orsini <paolo.ors...@opengosim.com>; Matthew Knepley <knep...@gmail.com>
> > >>> Subject: Re: [EXTERNAL] CPR preconditioning
> > >>>
> > >>>
> > >>>   Glenn,
> > >>>
> > >>>     Sorry about the delay in processing this, too much going on ...
> > >>>
> > >>>     I think the best thing is for us (the PETSc developers) to 
> > >>> implement a CPR
> > >>> preconditioner "directly" as its own PC and then have you guys try it 
> > >>> out. I am
> > >>> planning to do this.
> > >>>
> > >>>    Barry
> > >>>
> > >>>
> > >>>> On Jan 20, 2017, at 2:50 PM, Hammond, Glenn E <geha...@sandia.gov> wrote:
> > >>>>
> > >>>> Barry, Jed or Matt,
> > >>>>
> > >>>> Do you have any suggestions for how to address the limitations of
> > >>>> PCFieldSplit() discussed below? Will they need to manipulate the
> > >>>> matrices manually?
> > >>>>
> > >>>> Thanks,
> > >>>>
> > >>>> Glenn
> > >>>>
> > >>>> From: Sébastien Loisel [mailto:sloi...@gmail.com]
> > >>>> Sent: Wednesday, January 11, 2017 3:33 AM
> > >>>> To: Barry Smith <bsm...@mcs.anl.gov>; Robert Annewandter <robert.annewand...@opengosim.com>; Hammond, Glenn E <geha...@sandia.gov>; Jed Brown <j...@jedbrown.org>; Paolo Orsini <paolo.ors...@opengosim.com>
> > >>>> Subject: [EXTERNAL] CPR preconditioning
> > >>>>
> > >>>> Dear Friends,
> > >>>>
> > >>>> Paolo has asked me to write this email to clarify issues surrounding
> > >>>> the CPR preconditioner that is widely used in multiphase flow. I know
> > >>>> Barry from a long time ago but we only met once when I was a PhD
> > >>>> student so I would be shocked if he remembered me. :)
> > >>>>
> > >>>> I'm a math assistant professor and one of my areas of specialization
> > >>>> is linear algebra and preconditioning.
> > >>>>
> > >>>> The main issue that is useful to clarify is the following. There was
> > >>>> a proposal to use PETSc's PCFIELDSPLIT in conjunction with PCCOMPOSITE
> > >>>> in order to implement CPR preconditioning. Although this is morally
> > >>>> the right idea, this seems to be currently impossible because
> > >>>> PCFIELDSPLIT lacks the capabilities it would need to implement the T1
> > >>>> preconditioner. This is due to limitations in the API exposed by
> > >>>> PCFIELDSPLIT (and no doubt limitations in the underlying
> > >>>> implementation).
> > >>>>
> > >>>> In order to be as clear as possible, please allow me to describe
> > >>>> unambiguously the first of the two parts of the CPR preconditioner
> > >>>> using some MATLAB snippets. Let I denote the N by N identity, and Z
> > >>>> the N by N zero matrix. Put WT = [I I I] and C = [Z;Z;I]. The pressure
> > >>>> matrix is Ap = WT*A*C, and the T1 preconditioner is C*inv(Ap)*WT,
> > >>>> where inv(Ap) is to be implemented with AMG.
> > >>>>
> > >>>> This T1 preconditioner is the one that would have to be implemented
> > >>>> by PCFIELDSPLIT. The following limitations in PCFIELDSPLIT prevent
> > >>>> one from implementing T1:
> > >>>>
> > >>>>    • One must select the "pressure" by specifying an IS object to
> > >>>> PCFieldSplitSetIS(). However, since WT = [I I I], the pressure is
> > >>>> obtained by summing over the three fields. As far as I can tell, an
> > >>>> IS object does not allow one to sum over several entries to obtain
> > >>>> the pressure field.
> > >>>>
> > >>>>    • The pressure matrix is Ap = WT*A*C; note that the matrix WT on
> > >>>> the left is different from the matrix C on the right. However,
> > >>>> PCFIELDSPLIT has no notion of a "left-IS" vs a "right-IS"; morally,
> > >>>> only diagonal blocks of A can be used by PCFIELDSPLIT.
> > >>>>
> > >>>>    • PCFIELDSPLIT offers a range of hard-coded block structures for
> > >>>> the final preconditioner, but the choice T1 = C*inv(Ap)*WT is not one
> > >>>> of them. Indeed, this first-stage CPR preconditioner T1 is
> > >>>> *singular*, but there is no obvious way for PCFIELDSPLIT to produce a
> > >>>> singular preconditioner.
> > >>>>
> > >>>> Note that the documentation for PCFIELDSPLIT says that "The
> > >>>> Constrained Pressure Preconditioner (CPR) does not appear to be
> > >>>> currently implementable directly with PCFIELDSPLIT". Unless there are
> > >>>> very significant capabilities that are not documented, I don't see
> > >>>> how CPR can be implemented with PCFIELDSPLIT.
> > >>>>
> > >>>> Elsewhere, someone proposed putting the two preconditioners T1 and T2
> > >>>> on either side of A, e.g. T1*A*T2. That is a very bad idea because T1
> > >>>> is singular, and hence T1*A*T2 is also singular. The correct CPR
> > >>>> preconditioner is nonsingular despite the fact that T1 is singular,
> > >>>> and MCPR is given by the formula MCPR = T2*(I - A*T1) + T1, where T2 =
> > >>>> BILU(0) of A. (There is a proof, due to Felix Kwok, that BILU(0) works
> > >>>> even though ILU(0) craps out on vanishing diagonal entries.)
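> > >>>>
> > >>>> Spelled out as operations (a sketch only, with illustrative vector
> > >>>> names; T1 and T2 here are the MATSHELLs from the attached prototype),
> > >>>> applying MCPR to a vector b is:
> > >>>>
> > >>>>     MatMult(T1,b,y1);     /* y1 = T1 b            */
> > >>>>     MatMult(A,y1,r);      /* r  = A T1 b          */
> > >>>>     VecAYPX(r,-1.0,b);    /* r  = b - A T1 b      */
> > >>>>     MatMult(T2,r,y);      /* y  = T2 (I - A T1) b */
> > >>>>     VecAXPY(y,1.0,y1);    /* y  = MCPR b          */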
> > >>>>
> > >>>> I'm also attaching a sample MATLAB code that runs the CPR
> > >>>> preconditioner on some fabricated random matrix A. I emphasize that
> > >>>> this is not a realistic matrix, but it still illustrates that the
> > >>>> algorithm works, and that MCPR is better than T2 alone. Note that T1
> > >>>> cannot be used alone since it is singular. Further gains are expected
> > >>>> when Robert's realistic code with correct physics comes online.
> > >>>
> > >>>> <image003.jpg>
> > >>>> I hope this clarifies some things.
> > >>>>
> > >>>>
> > >>>> Sébastien Loisel
> > >>>> Assistant Professor
> > >>>> Department of Mathematics, Heriot-Watt University
> > >>>> Riccarton, EH14 4AS, United Kingdom
> > >>>> web: http://www.ma.hw.ac.uk/~loisel/
> > >>>> email: S.Loisel at hw.ac.uk
> > >>>> phone: +44 131 451 3234
> > >>>> fax: +44 131 451 3249
> > >
> > >
> > >
> > >
> > > --
> > > Sébastien Loisel
> > > Assistant Professor
> > > Department of Mathematics, Heriot-Watt University
> > > Riccarton, EH14 4AS, United Kingdom
> > > web: http://www.ma.hw.ac.uk/~loisel/
> > > email: S.Loisel at hw.ac.uk
> > > phone: +44 131 451 3234
> > > fax: +44 131 451 3249
> > >
> >
> >
> >
> >
> > --
> > Sébastien Loisel
> > Assistant Professor
> > Department of Mathematics, Heriot-Watt University
> > Riccarton, EH14 4AS, United Kingdom
> > web: http://www.ma.hw.ac.uk/~loisel/
> > email: S.Loisel at hw.ac.uk
> > phone: +44 131 451 3234
> > fax: +44 131 451 3249
> >
> > <petsc.zip>
> 
> 
> 
> 
> -- 
> Sébastien Loisel
> Assistant Professor
> Department of Mathematics, Heriot-Watt University
> Riccarton, EH14 4AS, United Kingdom
> web: http://www.ma.hw.ac.uk/~loisel/
> email: S.Loisel at hw.ac.uk
> phone: +44 131 451 3234
> fax: +44 131 451 3249
> 


--- End Message ---
