On 26 October 2016 at 09:38, Mark Adams <mfad...@lbl.gov> wrote:

> Please run with -info and grep on GAMG and send that. (-info is very
> noisy).
>
I appended the grep output at the end of the log file (see attachment
petsc-3.7.4-n2.log).
Also, increasing the local number of iterations in SOR, as suggested by
Barry, removed the indefinite preconditioner (file
petsc-3.7.4-n2-lits2.log).


> I'm not sure what is going on here: divergence with parallelism. Here are
> some suggestions.
>
> Note, you do not need to set the null space for a scalar (Poisson) problem
> unless you have some special null space. And not getting it set (with the 6
> rigid body modes) for the velocity (elasticity) equation will only degrade
> convergence rates.
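>
> For a scalar Poisson problem with only the constant null space, a minimal
> sketch would be (assuming A is your Poisson Mat; error checking omitted):
>
>   MatNullSpace nsp;
>   /* PETSC_TRUE: the null space contains the constant vector */
>   MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);
>   MatSetNullSpace(A, nsp);
>   MatNullSpaceDestroy(&nsp);
>
> For elasticity, MatNullSpaceCreateRigidBody() builds the rigid body modes
> from a coordinate Vec.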
>
> There was a bug for a while (early 3.7 versions) where the coarse grid was
> not squeezed onto one processor, which could result in very bad
> convergence, but not divergence, on multiple processors (the -info output
> will report the number of 'active pes'). Perhaps this bug is causing
> divergence for you. We had another subtle bug where the eigen estimator
> used a bad seed vector, which gives a bad estimate. This would cause
> divergence, but it should not be a parallelism issue (both bugs were
> regressions introduced around 3.7).
>
> Divergence usually comes from a bad eigen estimate in a Chebyshev
> smoother, but this is not highly correlated with parallelism. The -info
> output reports the eigen estimates; they are not terribly useful by
> themselves, but you can see whether they change (get larger) with better
> parameters. Add these parameters, with the correct prefix (an example with
> this code's prefix follows the list), and use -options_left to make sure
> that "there are no unused options":
>
> -mg_levels_ksp_type chebyshev
> -mg_levels_esteig_ksp_type cg
> -mg_levels_esteig_ksp_max_it 10
> -mg_levels_ksp_chebyshev_esteig 0,.1,0,1.05
>
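> With this code's prefix that would be, e.g.:
>
> -poisson_mg_levels_ksp_type chebyshev
> -poisson_mg_levels_esteig_ksp_type cg
> -poisson_mg_levels_esteig_ksp_max_it 10
> -poisson_mg_levels_ksp_chebyshev_esteig 0,.1,0,1.05
>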
petsc-3.7.4-n2-chebyshev.log contains the output when using the default
KSP Chebyshev.
When estimating the eigenvalues using cg with the translations [0, 0.1; 0,
1.05] (previously the default gmres with translations [0, 0.1; 0, 1.1]),
the max eigenvalue decreases from 1.0931 to 1.04366 and the indefinite
preconditioner appears earlier, after 2 iterations (3 previously).
I attached the log (see petsc-3.7.4-chebyshev.log).


> Chebyshev is the default; as Barry suggested, replace this with gmres or
> richardson (see below) and verify that this fixes the divergence problem.
>
Using gmres (-poisson_mg_levels_ksp_type gmres) fixes the divergence
problem (file petsc-3.7.4-n2-gmres.log).
Same observation with richardson (file petsc-3.7.4-n2-richardson.log).


> If your matrix is symmetric positive definite then use
> '-mg_levels_esteig_ksp_type cg'; if not, use the default gmres.
>

I checked and I still get an indefinite preconditioner when using gmres to
estimate the eigenvalues.


>
> Increase/decrease '-mg_levels_esteig_ksp_max_it 10'; you should see the
> estimates increase and converge with higher max_it. Setting this to a huge
> number, like 100, should fix the bad seed vector problem mentioned above.
>
I played with the maximum number of iterations. Here are the min/max
eigenvalue estimates for the two levels:
- max_it 5: (min=0.0975079, max=1.02383) on level 1, (0.0975647, 1.02443)
on level 2
- max_it 10: (0.0991546, 1.04112), (0.0993962, 1.04366)
- max_it 20: (0.0995918, 1.04571), (0.115723, 1.21509)
- max_it 50: (0.0995651, 1.04543), (0.133744, 1.40431)
- max_it 100: (0.0995651, 1.04543), (0.133744, 1.40431)

Note that all those runs ended up with an indefinite preconditioner, except
when increasing the maximum number of iterations to 50 (or 100, which did
not further improve the eigenvalue estimates).


> If eigen estimates are a pain, like with non-SPD systems, then
> richardson is an option (instead of chebyshev):
>
> -mg_levels_ksp_type richardson
> -mg_levels_ksp_richardson_scale 0.6
>
> You then need to play with the scaling (that is essentially what
> chebyshev does for you).
>
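> Programmatically that is roughly (a sketch, assuming ksp is the
> level-smoother KSP; the command line options above are the usual way):
>
>   KSPSetType(ksp, KSPRICHARDSON);
>   KSPRichardsonSetScale(ksp, 0.6);  /* the damping factor you tune */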
>
> On Tue, Oct 25, 2016 at 10:22 PM, Matthew Knepley <knep...@gmail.com>
> wrote:
>
>> On Tue, Oct 25, 2016 at 9:20 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
>>
>>>
>>>   Olivier,
>>>
>>>     Ok, so I've run the code in the debugger, but I do not think the
>>> problem is with the null space. The code is correctly removing the null
>>> space on all the levels of multigrid.
>>>
>>>     I think the error comes from changes in the behavior of GAMG. GAMG
>>> is moving relatively rapidly, with different defaults and even different
>>> code in each release.
>>>
>>>     To check this I added the option -poisson_mg_levels_pc_sor_lits 2
>>> and it stopped complaining about KSP_DIVERGED_INDEFINITE_PC. I've seen
>>> this before where the smoother is "too weak" and the net result is that
>>> the action of the preconditioner is indefinite. Mark Adams probably has
>>> better suggestions on how to make the preconditioner behave. Note you
>>> could also use a KSP of richardson or gmres instead of cg since they
>>> don't care about this indefinite business.
>>
>>
>> I think old GAMG squared the graph by default. You can see in the 3.7
>> output that it does not.
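>> (If you want to compare, I believe -poisson_pc_gamg_square_graph <n>
>> squares the graph on the first n levels; check -help for the exact name.)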
>>
>>    Matt
>>
>>
>>>
>>>    Barry
>>>
>>>
>>>
>>> > On Oct 25, 2016, at 5:39 PM, Olivier Mesnard <
>>> olivier.mesna...@gmail.com> wrote:
>>> >
>>> > On 25 October 2016 at 17:51, Barry Smith <bsm...@mcs.anl.gov> wrote:
>>> >
>>> >   Olivier,
>>> >
>>> >     In theory you do not need to change anything else. Are you using a
>>> different matrix object for the velocity_ksp object than the poisson_ksp
>>> object?
>>> >
>>> > The matrix is different for the velocity_ksp and the poisson_ksp.
>>> >
>>> >     The code change in PETSc is very small, but we have a report from
>>> another CFD user who also had problems with the change, so there may be
>>> some subtle bug that we can't figure out causing things to misbehave.
>>> >
>>> >    First run the 3.7.4 code with -poisson_ksp_view and verify that
>>> when it prints the matrix information it prints something like "has
>>> attached null space"; if it does not print that, it means the null space
>>> is somehow not getting properly attached to the matrix.
>>> >
>>> > When running with 3.7.4 and -poisson_ksp_view, the output shows that
>>> the null space is not attached to the KSP (as it was with 3.5.4); however,
>>> the print statement is now under the Mat info (which is expected when
>>> moving from KSPSetNullSpace to MatSetNullSpace?).
>>> >
>>> >     Though older versions had MatSetNullSpace(), they didn't
>>> necessarily associate it with the KSP, so it was not expected to work as
>>> a replacement for KSPSetNullSpace() in those versions.
>>> >
>>> >     Because our other user had great difficulty trying to debug the
>>> issue, feel free to send us your code at petsc-ma...@mcs.anl.gov with
>>> instructions on building and running, and we can try to track down the
>>> problem. Better than hours and hours spent on fruitless email. We will,
>>> of course, not distribute the code and will delete it when we are
>>> finished with it.
>>> >
>>> > The code is open-source and hosted on GitHub (
>>> https://github.com/barbagroup/PetIBM).
>>> > I just pushed the branches `feature-compatible-petsc-3.7` and
>>> `revert-compatible-petsc-3.5` that I used to observe this problem.
>>> >
>>> > PETSc (both 3.5.4 and 3.7.4) was configured as follows:
>>> > export PETSC_ARCH="linux-gnu-dbg"
>>> > ./configure --PETSC_ARCH=$PETSC_ARCH \
>>> >       --with-cc=gcc \
>>> >       --with-cxx=g++ \
>>> >       --with-fc=gfortran \
>>> >       --COPTFLAGS="-O0" \
>>> >       --CXXOPTFLAGS="-O0" \
>>> >       --FOPTFLAGS="-O0" \
>>> >       --with-debugging=1 \
>>> >       --download-fblaslapack \
>>> >       --download-mpich \
>>> >       --download-hypre \
>>> >       --download-yaml \
>>> >       --with-x=1
>>> >
>>> > Our code was built using the following commands:
>>> > mkdir petibm-build
>>> > cd petibm-build
>>> > export PETSC_DIR=<directory of PETSc>
>>> > export PETSC_ARCH="linux-gnu-dbg"
>>> > export PETIBM_DIR=<directory of PetIBM git repo>
>>> > $PETIBM_DIR/configure --prefix=$PWD \
>>> >       CXX=$PETSC_DIR/$PETSC_ARCH/bin/mpicxx \
>>> >       CXXFLAGS="-g -O0 -std=c++11"
>>> > make all
>>> > make install
>>> >
>>> > Then
>>> > cd examples
>>> > make examples
>>> >
>>> > The example of the lid-driven cavity I was talking about can be found
>>> in the folder `examples/2d/convergence/lidDrivenCavity20/20/`
>>> >
>>> > To run it:
>>> > mpiexec -n N <path-to-petibm-build>/bin/petibm2d -directory
>>> <path-to-example>
>>> >
>>> > Let me know if you need more info. Thank you.
>>> >
>>> >    Barry
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > > On Oct 25, 2016, at 4:38 PM, Olivier Mesnard <
>>> olivier.mesna...@gmail.com> wrote:
>>> > >
>>> > > Hi all,
>>> > >
>>> > > We develop a CFD code using the PETSc library that solves the
>>> Navier-Stokes equations using the fractional-step method from Perot (1993).
>>> > > At each time-step, we solve two systems: one for the velocity field,
>>> the other, a Poisson system, for the pressure field.
>>> > > One of our test-cases is a 2D lid-driven cavity flow (Re=100) on a
>>> 20x20 grid using 1 or 2 procs.
>>> > > For the Poisson system, we usually use CG preconditioned with GAMG.
>>> > >
>>> > > So far, we have been using PETSc-3.5.4, and we would like to update
>>> the code with the latest release: 3.7.4.
>>> > >
>>> > > As suggested in the changelog of 3.6, we replaced the routine
>>> `KSPSetNullSpace()` with `MatSetNullSpace()`.
>>> > >
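>>> > > The change itself is one line; a sketch, where `ksp` and `A` are our
>>> > > Poisson KSP and matrix and `nsp` the null space we build:
>>> > >
>>> > >   /* PETSc 3.5: null space attached to the solver */
>>> > >   KSPSetNullSpace(ksp, nsp);
>>> > >   /* PETSc 3.6+: attached to the operator instead */
>>> > >   MatSetNullSpace(A, nsp);
>>> > >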
>>> > > Here is the list of options we use to configure the two solvers:
>>> > > * Velocity solver: prefix `-velocity_`
>>> > >   -velocity_ksp_type bcgs
>>> > >   -velocity_ksp_rtol 1.0E-08
>>> > >   -velocity_ksp_atol 0.0
>>> > >   -velocity_ksp_max_it 10000
>>> > >   -velocity_pc_type jacobi
>>> > >   -velocity_ksp_view
>>> > >   -velocity_ksp_monitor_true_residual
>>> > >   -velocity_ksp_converged_reason
>>> > > * Poisson solver: prefix `-poisson_`
>>> > >   -poisson_ksp_type cg
>>> > >   -poisson_ksp_rtol 1.0E-08
>>> > >   -poisson_ksp_atol 0.0
>>> > >   -poisson_ksp_max_it 20000
>>> > >   -poisson_pc_type gamg
>>> > >   -poisson_pc_gamg_type agg
>>> > >   -poisson_pc_gamg_agg_nsmooths 1
>>> > >   -poisson_ksp_view
>>> > >   -poisson_ksp_monitor_true_residual
>>> > >   -poisson_ksp_converged_reason
>>> > >
>>> > > With 3.5.4, the case runs normally on 1 or 2 procs.
>>> > > With 3.7.4, the case runs normally on 1 proc but not on 2: the
>>> Poisson solver diverges because of an indefinite preconditioner (only
>>> with 2 procs).
>>> > >
>>> > > We also saw that the routine `MatSetNullSpace()` was already
>>> available in 3.5.4.
>>> > > With 3.5.4, replacing `KSPSetNullSpace()` with `MatSetNullSpace()`
>>> led to the Poisson solver diverging because of an indefinite matrix (on 1
>>> and 2 procs).
>>> > >
>>> > > Thus, we were wondering if we needed to update something else for
>>> the KSP, rather than just changing the name of the routine?
>>> > >
>>> > > I have attached the output files from the different cases:
>>> > > * `run-petsc-3.5.4-n1.log` (3.5.4, `KSPSetNullSpace()`, n=1)
>>> > > * `run-petsc-3.5.4-n2.log`
>>> > > * `run-petsc-3.5.4-nsp-n1.log` (3.5.4, `MatSetNullSpace()`, n=1)
>>> > > * `run-petsc-3.5.4-nsp-n2.log`
>>> > > * `run-petsc-3.7.4-n1.log` (3.7.4, `MatSetNullSpace()`, n=1)
>>> > > * `run-petsc-3.7.4-n2.log`
>>> > >
>>> > > Thank you for your help,
>>> > > Olivier
>>> >
>>> >
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>
======================
*** PetIBM - Start ***
======================
directory: ./

Parsing file .//cartesianMesh.yaml... done.

Parsing file .//flowDescription.yaml... done.

Parsing file .//simulationParameters.yaml... done.

---------------------------------------
Cartesian grid
---------------------------------------
number of cells: 20 x 20
---------------------------------------

---------------------------------------
Flow
---------------------------------------
dimensions: 2
viscosity: 0.01
initial velocity field:
	0
	0
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: xPlus (right)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0
		1 	 DIRICHLET 	 0
	->location: yPlus (top)
		0 	 DIRICHLET 	 1
		1 	 DIRICHLET 	 0
---------------------------------------

---------------------------------------
Time-stepping
---------------------------------------
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---------------------------------------

----------------------------------------
KSP info: Velocity system
----------------------------------------
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0, divergence=10000
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

----------------------------------------
KSP info: Poisson system
----------------------------------------
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0, divergence=10000
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using Galerkin computed coarse grid matrices
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 0] Writing grid into file... done.
  Residual norms for velocity_ solve.
  0 KSP preconditioned resid norm 1.148589971947e-03 true resid norm 2.324746103222e+00 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 8.875846864181e-09 true resid norm 1.793849713607e-05 ||r(i)||/||b|| 7.716325284387e-06
  2 KSP preconditioned resid norm 1.418061760018e-13 true resid norm 2.860743141913e-10 ||r(i)||/||b|| 1.230561538720e-10
Linear solve converged due to CONVERGED_RTOL iterations 2
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0, divergence=10000
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for poisson_ solve.
  0 KSP preconditioned resid norm 3.540871046457e+00 true resid norm 3.733946078026e-04 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 1.257128124270e-01 true resid norm 7.334555515521e-05 ||r(i)||/||b|| 1.964290689329e-01
  2 KSP preconditioned resid norm 1.250635679151e-02 true resid norm 5.229463758195e-06 ||r(i)||/||b|| 1.400519356445e-02
  3 KSP preconditioned resid norm 7.044787396120e-04 true resid norm 4.496085949179e-07 ||r(i)||/||b|| 1.204111107988e-03
  4 KSP preconditioned resid norm 5.752276071869e-05 true resid norm 3.304114740417e-08 ||r(i)||/||b|| 8.848854995153e-05
  5 KSP preconditioned resid norm 4.501641492631e-06 true resid norm 2.475433259121e-09 ||r(i)||/||b|| 6.629536708334e-06
  6 KSP preconditioned resid norm 2.447627881266e-07 true resid norm 1.929318717776e-10 ||r(i)||/||b|| 5.166969949379e-07
  7 KSP preconditioned resid norm 1.594610003820e-08 true resid norm 1.215791508998e-11 ||r(i)||/||b|| 3.256049989990e-08
Linear solve converged due to CONVERGED_RTOL iterations 7
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0, divergence=10000
  left preconditioning
  has attached null space
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (poisson_mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5, needed 2.45338
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=66, cols=66
                package used to perform factorization: petsc
                total: nonzeros=1526, allocated nonzeros=1526
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=66, cols=66
          total: nonzeros=622, allocated nonzeros=622
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=66, cols=66
        total: nonzeros=622, allocated nonzeros=622
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0990463, max = 2.07997
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=400, cols=400
        total: nonzeros=1920, allocated nonzeros=1920
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 1] Writing fluxes into file... done.

[time-step 1] Writing pressure into file... done.

=====================
*** PetIBM - Done ***
=====================
#PETSc Option Table entries:
-options_left
-poisson_ksp_atol 0.0
-poisson_ksp_converged_reason
-poisson_ksp_max_it 20000
-poisson_ksp_monitor_true_residual
-poisson_ksp_rtol 1.0E-08
-poisson_ksp_type cg
-poisson_ksp_view
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_gamg_type agg
-poisson_pc_type gamg
-velocity_ksp_atol 0.0
-velocity_ksp_converged_reason
-velocity_ksp_max_it 10000
-velocity_ksp_monitor_true_residual
-velocity_ksp_rtol 1.0E-08
-velocity_ksp_type bcgs
-velocity_ksp_view
-velocity_pc_type jacobi
#End of PETSc Option Table entries
There are no unused options.
======================
*** PetIBM - Start ***
======================
directory: .

Parsing file ./cartesianMesh.yaml... done.

Parsing file ./flowDescription.yaml... done.

Parsing file ./simulationParameters.yaml... done.

---------------------------------------
Cartesian grid
---------------------------------------
number of cells: 20 x 20
---------------------------------------

---------------------------------------
Flow
---------------------------------------
dimensions: 2
viscosity: 0.01
initial velocity field:
	0.
	0.
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: xPlus (right)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yPlus (top)
		0 	 DIRICHLET 	 1.
		1 	 DIRICHLET 	 0.
---------------------------------------

---------------------------------------
Time-stepping
---------------------------------------
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---------------------------------------

----------------------------------------
KSP info: Velocity system
----------------------------------------
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

----------------------------------------
KSP info: Poisson system
----------------------------------------
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 0] Writing grid into file... done.
  Residual norms for velocity_ solve.
  0 KSP preconditioned resid norm 1.148589971947e-03 true resid norm 2.324746103222e+00 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 8.875846864181e-09 true resid norm 1.793849713607e-05 ||r(i)||/||b|| 7.716325284387e-06
  2 KSP preconditioned resid norm 1.418061760018e-13 true resid norm 2.860743141913e-10 ||r(i)||/||b|| 1.230561538720e-10
Linear velocity_ solve converged due to CONVERGED_RTOL iterations 2
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for poisson_ solve.
  0 KSP preconditioned resid norm 3.663481636744e+00 true resid norm 3.733946078026e-04 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 1.682146562271e-01 true resid norm 2.719558922539e-05 ||r(i)||/||b|| 7.283337428313e-02
  2 KSP preconditioned resid norm 7.119755653883e-03 true resid norm 1.420925878483e-06 ||r(i)||/||b|| 3.805426882957e-03
  3 KSP preconditioned resid norm 2.543557070242e-04 true resid norm 5.182168464649e-08 ||r(i)||/||b|| 1.387853053140e-04
  4 KSP preconditioned resid norm 1.098456886904e-05 true resid norm 2.119096316615e-09 ||r(i)||/||b|| 5.675219385428e-06
  5 KSP preconditioned resid norm 2.051269834107e-07 true resid norm 7.232535590070e-11 ||r(i)||/||b|| 1.936968407935e-07
  6 KSP preconditioned resid norm 2.591096621283e-09 true resid norm 2.471788276719e-12 ||r(i)||/||b|| 6.619774964789e-09
Linear poisson_ solve converged due to CONVERGED_RTOL iterations 6
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (poisson_mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.09231
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=12, cols=12
                package used to perform factorization: petsc
                total: nonzeros=142, allocated nonzeros=142
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 4 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=12, cols=12
          total: nonzeros=130, allocated nonzeros=130
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 7 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=12, cols=12
        total: nonzeros=130, allocated nonzeros=130
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 7 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0998467, max = 1.09831
        Chebyshev: eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object:        (poisson_mg_levels_1_esteig_)         2 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 2, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=62, cols=62
        total: nonzeros=556, allocated nonzeros=556
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.14153, max = 1.55683
        Chebyshev: eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object:        (poisson_mg_levels_2_esteig_)         2 MPI processes
          type: gmres
            GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            GMRES: happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 2, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=400, cols=400
        total: nonzeros=1920, allocated nonzeros=1920
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node (on process 0) routines

[time-step 1] Writing fluxes into file... done.

[time-step 1] Writing pressure into file... done.

=====================
*** PetIBM - Done ***
=====================
#PETSc Option Table entries:
-options_left
-poisson_ksp_atol 0.0
-poisson_ksp_converged_reason
-poisson_ksp_max_it 20000
-poisson_ksp_monitor_true_residual
-poisson_ksp_rtol 1.0E-08
-poisson_ksp_type cg
-poisson_ksp_view
-poisson_mg_levels_pc_sor_lits 2
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_gamg_type agg
-poisson_pc_type gamg
-velocity_ksp_atol 0.0
-velocity_ksp_converged_reason
-velocity_ksp_max_it 10000
-velocity_ksp_monitor_true_residual
-velocity_ksp_rtol 1.0E-08
-velocity_ksp_type bcgs
-velocity_ksp_view
-velocity_pc_type jacobi
#End of PETSc Option Table entries
There are no unused options.

      GAMG specific options
[0] PCSetUp_GAMG(): level 0) N=400, n data rows=1, n data cols=1, nnz/row (ave)=5, np=2
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 4.8 nnz ave. (N=400)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 62 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.977326e+00 min=3.920450e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Number of equations (loc) 30 with simple aggregation
[0] PCSetUp_GAMG(): 1) N=62, n data cols=1, nnz/row (ave)=8, 1 active pes
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 8.96774 nnz ave. (N=62)
[0] PCGAMGProlongator_AGG(): New grid 12 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.556629e+00 min=1.967905e-03 PC=jacobi
[0] PCSetUp_GAMG(): 2) N=12, n data cols=1, nnz/row (ave)=10, 1 active pes
[0] PCSetUp_GAMG(): 3 levels, grid complexity = 1.35729
      GAMG specific options
======================
*** PetIBM - Start ***
======================
directory: .

Parsing file ./cartesianMesh.yaml... done.

Parsing file ./flowDescription.yaml... done.

Parsing file ./simulationParameters.yaml... done.

---------------------------------------
Cartesian grid
---------------------------------------
number of cells: 20 x 20
---------------------------------------

---------------------------------------
Flow
---------------------------------------
dimensions: 2
viscosity: 0.01
initial velocity field:
	0.
	0.
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: xPlus (right)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yPlus (top)
		0 	 DIRICHLET 	 1.
		1 	 DIRICHLET 	 0.
---------------------------------------

---------------------------------------
Time-stepping
---------------------------------------
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---------------------------------------

----------------------------------------
KSP info: Velocity system
----------------------------------------
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

----------------------------------------
KSP info: Poisson system
----------------------------------------
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 0] Writing grid into file... done.
  Residual norms for velocity_ solve.
  0 KSP preconditioned resid norm 1.148589971947e-03 true resid norm 2.324746103222e+00 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 8.875846864181e-09 true resid norm 1.793849713607e-05 ||r(i)||/||b|| 7.716325284387e-06
  2 KSP preconditioned resid norm 1.418061760018e-13 true resid norm 2.860743141913e-10 ||r(i)||/||b|| 1.230561538720e-10
Linear velocity_ solve converged due to CONVERGED_RTOL iterations 2
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for poisson_ solve.
  0 KSP preconditioned resid norm 3.594808901578e+00 true resid norm 3.733946078026e-04 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 2.501868872941e-01 true resid norm 5.302866028514e-05 ||r(i)||/||b|| 1.420177452407e-01
Linear poisson_ solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (poisson_mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.09231
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=12, cols=12
                package used to perform factorization: petsc
                total: nonzeros=142, allocated nonzeros=142
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 4 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=12, cols=12
          total: nonzeros=130, allocated nonzeros=130
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 7 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=12, cols=12
        total: nonzeros=130, allocated nonzeros=130
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 7 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0991546, max = 1.04112
        Chebyshev: eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.05]
        KSP Object:        (poisson_mg_levels_1_esteig_)         2 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=62, cols=62
        total: nonzeros=556, allocated nonzeros=556
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0993962, max = 1.04366
        Chebyshev: eigenvalues estimated using cg with translations  [0. 0.1; 0. 1.05]
        KSP Object:        (poisson_mg_levels_2_esteig_)         2 MPI processes
          type: cg
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=400, cols=400
        total: nonzeros=1920, allocated nonzeros=1920
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node (on process 0) routines

[time-step 0] ERROR: Poisson solver diverged due to reason: -8
#PETSc Option Table entries:
-options_left
-poisson_ksp_atol 0.0
-poisson_ksp_converged_reason
-poisson_ksp_max_it 20000
-poisson_ksp_monitor_true_residual
-poisson_ksp_rtol 1.0E-08
-poisson_ksp_type cg
-poisson_ksp_view
-poisson_mg_levels_esteig_ksp_max_it 10
-poisson_mg_levels_esteig_ksp_type cg
-poisson_mg_levels_ksp_chebyshev_esteig 0,0.1,0,1.05
-poisson_mg_levels_ksp_type chebyshev
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_gamg_type agg
-poisson_pc_type gamg
-velocity_ksp_atol 0.0
-velocity_ksp_converged_reason
-velocity_ksp_max_it 10000
-velocity_ksp_monitor_true_residual
-velocity_ksp_rtol 1.0E-08
-velocity_ksp_type bcgs
-velocity_ksp_view
-velocity_pc_type jacobi
#End of PETSc Option Table entries
There are no unused options.

      GAMG specific options
[0] PCSetUp_GAMG(): level 0) N=400, n data rows=1, n data cols=1, nnz/row (ave)=5, np=2
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 4.8 nnz ave. (N=400)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 62 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.977326e+00 min=3.920450e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Number of equations (loc) 30 with simple aggregation
[0] PCSetUp_GAMG(): 1) N=62, n data cols=1, nnz/row (ave)=8, 1 active pes
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 8.96774 nnz ave. (N=62)
[0] PCGAMGProlongator_AGG(): New grid 12 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.556629e+00 min=1.967905e-03 PC=jacobi
[0] PCSetUp_GAMG(): 2) N=12, n data cols=1, nnz/row (ave)=10, 1 active pes
[0] PCSetUp_GAMG(): 3 levels, grid complexity = 1.35729
      GAMG specific options
======================
*** PetIBM - Start ***
======================
directory: .

Parsing file ./cartesianMesh.yaml... done.

Parsing file ./flowDescription.yaml... done.

Parsing file ./simulationParameters.yaml... done.

---------------------------------------
Cartesian grid
---------------------------------------
number of cells: 20 x 20
---------------------------------------

---------------------------------------
Flow
---------------------------------------
dimensions: 2
viscosity: 0.01
initial velocity field:
	0.
	0.
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: xPlus (right)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yPlus (top)
		0 	 DIRICHLET 	 1.
		1 	 DIRICHLET 	 0.
---------------------------------------

---------------------------------------
Time-stepping
---------------------------------------
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---------------------------------------

----------------------------------------
KSP info: Velocity system
----------------------------------------
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

----------------------------------------
KSP info: Poisson system
----------------------------------------
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 0] Writing grid into file... done.
  Residual norms for velocity_ solve.
  0 KSP preconditioned resid norm 1.148589971947e-03 true resid norm 2.324746103222e+00 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 8.875846864181e-09 true resid norm 1.793849713607e-05 ||r(i)||/||b|| 7.716325284387e-06
  2 KSP preconditioned resid norm 1.418061760018e-13 true resid norm 2.860743141913e-10 ||r(i)||/||b|| 1.230561538720e-10
Linear velocity_ solve converged due to CONVERGED_RTOL iterations 2
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for poisson_ solve.
  0 KSP preconditioned resid norm 3.424335725400e+00 true resid norm 3.733946078026e-04 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 2.659151327207e-01 true resid norm 2.770573855571e-05 ||r(i)||/||b|| 7.419962146415e-02
  2 KSP preconditioned resid norm 9.652628613282e-03 true resid norm 1.575860242402e-06 ||r(i)||/||b|| 4.220361541041e-03
  3 KSP preconditioned resid norm 6.766395999128e-04 true resid norm 8.952331318965e-08 ||r(i)||/||b|| 2.397552383429e-04
  4 KSP preconditioned resid norm 4.812796694023e-05 true resid norm 5.719272397815e-09 ||r(i)||/||b|| 1.531696569341e-05
  5 KSP preconditioned resid norm 2.911550268135e-06 true resid norm 4.528520225057e-10 ||r(i)||/||b|| 1.212797434786e-06
  6 KSP preconditioned resid norm 2.676930489338e-07 true resid norm 2.976304875449e-11 ||r(i)||/||b|| 7.970936947817e-08
  7 KSP preconditioned resid norm 1.556661791263e-08 true resid norm 2.106196187333e-12 ||r(i)||/||b|| 5.640671138042e-09
Linear poisson_ solve converged due to CONVERGED_RTOL iterations 7
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (poisson_mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.09231
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=12, cols=12
                package used to perform factorization: petsc
                total: nonzeros=142, allocated nonzeros=142
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 4 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=12, cols=12
          total: nonzeros=130, allocated nonzeros=130
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 7 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=12, cols=12
        total: nonzeros=130, allocated nonzeros=130
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 7 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=62, cols=62
        total: nonzeros=556, allocated nonzeros=556
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: gmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=400, cols=400
        total: nonzeros=1920, allocated nonzeros=1920
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node (on process 0) routines

[time-step 1] Writing fluxes into file... done.

[time-step 1] Writing pressure into file... done.

=====================
*** PetIBM - Done ***
=====================
#PETSc Option Table entries:
-options_left
-poisson_ksp_atol 0.0
-poisson_ksp_converged_reason
-poisson_ksp_max_it 20000
-poisson_ksp_monitor_true_residual
-poisson_ksp_rtol 1.0E-08
-poisson_ksp_type cg
-poisson_ksp_view
-poisson_mg_levels_esteig_ksp_max_it 10
-poisson_mg_levels_esteig_ksp_type cg
-poisson_mg_levels_ksp_chebyshev_esteig 0,0.1,0,1.05
-poisson_mg_levels_ksp_type gmres
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_gamg_type agg
-poisson_pc_type gamg
-velocity_ksp_atol 0.0
-velocity_ksp_converged_reason
-velocity_ksp_max_it 10000
-velocity_ksp_monitor_true_residual
-velocity_ksp_rtol 1.0E-08
-velocity_ksp_type bcgs
-velocity_ksp_view
-velocity_pc_type jacobi
#End of PETSc Option Table entries
There are 3 unused database options. They are:
Option left: name:-poisson_mg_levels_esteig_ksp_max_it value: 10
Option left: name:-poisson_mg_levels_esteig_ksp_type value: cg
Option left: name:-poisson_mg_levels_ksp_chebyshev_esteig value: 0,0.1,0,1.05
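
[Note: these three leftovers are expected. The esteig options are read only
by a Chebyshev smoother; since this run uses -poisson_mg_levels_ksp_type
gmres (and the next one richardson), nothing ever queries them, so
-options_left flags them. For reference, the four values a,b,c,d of
-poisson_mg_levels_ksp_chebyshev_esteig map the estimated extreme
eigenvalues (emin, emax) to the Chebyshev target interval

    [a*emin + b*emax, c*emin + d*emax]

so 0,0.1,0,1.05 with emax ~= 1.04366 would give roughly [0.1044, 1.0958].]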

      GAMG specific options
[0] PCSetUp_GAMG(): level 0) N=400, n data rows=1, n data cols=1, nnz/row (ave)=5, np=2
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 4.8 nnz ave. (N=400)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 62 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.977326e+00 min=3.920450e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Number of equations (loc) 30 with simple aggregation
[0] PCSetUp_GAMG(): 1) N=62, n data cols=1, nnz/row (ave)=8, 1 active pes
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 8.96774 nnz ave. (N=62)
[0] PCGAMGProlongator_AGG(): New grid 12 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.556629e+00 min=1.967905e-03 PC=jacobi
[0] PCSetUp_GAMG(): 2) N=12, n data cols=1, nnz/row (ave)=10, 1 active pes
[0] PCSetUp_GAMG(): 3 levels, grid complexity = 1.35729
      GAMG specific options
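
That is the end of the gmres log. Two remarks on the grep above: levels 1
and 2 both report "1 active pes", so the coarse grids are squeezed onto a
single process in this 3.7.4 run; and the reported grid complexity checks
out as the total nonzeros over all levels relative to the fine grid,
(1920 + 556 + 130) / 1920 = 1.35729.

For anyone wanting to reproduce this solver configuration outside PetIBM:
any KSP that sets the "poisson_" prefix picks these options up. Below is a
minimal self-contained sketch, with a toy 1-D Laplacian standing in for the
Poisson matrix; this is not PetIBM's code.

#include <petscksp.h>

int main(int argc, char **argv)
{
  KSP      ksp;
  Mat      A;
  Vec      x, b;
  PetscInt i, n = 400, Istart, Iend;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Toy SPD tridiagonal matrix standing in for the Poisson operator. */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    MatSetValue(A, i, i, 2.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOptionsPrefix(ksp, "poisson_"); /* binds every -poisson_* option */
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);               /* -poisson_pc_type gamg etc. take effect here */
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  MatDestroy(&A);
  VecDestroy(&x);
  VecDestroy(&b);
  PetscFinalize();                      /* with -options_left, leftovers print here */
  return 0;
}

Run with, e.g. ("sketch" being a hypothetical binary name):

  mpiexec -n 2 ./sketch -poisson_ksp_type cg -poisson_pc_type gamg \
      -poisson_mg_levels_ksp_type gmres -options_left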
----- below: petsc-3.7.4-n2-richardson.log (same run with the richardson smoother) -----

======================
*** PetIBM - Start ***
======================
directory: .

Parsing file ./cartesianMesh.yaml... done.

Parsing file ./flowDescription.yaml... done.

Parsing file ./simulationParameters.yaml... done.

---------------------------------------
Cartesian grid
---------------------------------------
number of cells: 20 x 20
---------------------------------------

---------------------------------------
Flow
---------------------------------------
dimensions: 2
viscosity: 0.01
initial velocity field:
	0.
	0.
boundary conditions (component, type, value):
	->location: xMinus (left)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: xPlus (right)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yMinus (bottom)
		0 	 DIRICHLET 	 0.
		1 	 DIRICHLET 	 0.
	->location: yPlus (top)
		0 	 DIRICHLET 	 1.
		1 	 DIRICHLET 	 0.
---------------------------------------

---------------------------------------
Time-stepping
---------------------------------------
formulation: Navier-Stokes solver (Perot, 1993)
convection: Euler-explicit
diffusion: Euler-implicit
time-increment: 0.0005
starting time-step: 0
number of time-steps: 1
saving-interval: 1
---------------------------------------

----------------------------------------
KSP info: Velocity system
----------------------------------------
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  PC has not been set up so information may be incomplete
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

----------------------------------------
KSP info: Poisson system
----------------------------------------
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using DEFAULT norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

[time-step 0] Writing grid into file... done.
  Residual norms for velocity_ solve.
  0 KSP preconditioned resid norm 1.148589971947e-03 true resid norm 2.324746103222e+00 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 8.875846864181e-09 true resid norm 1.793849713607e-05 ||r(i)||/||b|| 7.716325284387e-06
  2 KSP preconditioned resid norm 1.418061760018e-13 true resid norm 2.860743141913e-10 ||r(i)||/||b|| 1.230561538720e-10
Linear velocity_ solve converged due to CONVERGED_RTOL iterations 2
KSP Object:(velocity_) 2 MPI processes
  type: bcgs
  maximum iterations=10000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(velocity_) 2 MPI processes
  type: jacobi
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=760, cols=760
    total: nonzeros=3644, allocated nonzeros=3800
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
  Residual norms for poisson_ solve.
  0 KSP preconditioned resid norm 3.207494894887e+00 true resid norm 3.733946078026e-04 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 2.665754589097e-01 true resid norm 4.745469052195e-05 ||r(i)||/||b|| 1.270899191641e-01
  2 KSP preconditioned resid norm 1.340279285368e-02 true resid norm 2.487245167918e-06 ||r(i)||/||b|| 6.661170557751e-03
  3 KSP preconditioned resid norm 9.327267911746e-04 true resid norm 1.104982045593e-07 ||r(i)||/||b|| 2.959287634324e-04
  4 KSP preconditioned resid norm 6.260935918573e-05 true resid norm 7.047641929772e-09 ||r(i)||/||b|| 1.887451447477e-05
  5 KSP preconditioned resid norm 4.524061727977e-06 true resid norm 4.788798715978e-10 ||r(i)||/||b|| 1.282503446999e-06
  6 KSP preconditioned resid norm 3.220602857052e-07 true resid norm 4.345347247124e-11 ||r(i)||/||b|| 1.163741295756e-07
  7 KSP preconditioned resid norm 1.968010362397e-08 true resid norm 3.380926909350e-12 ||r(i)||/||b|| 9.054568112932e-09
Linear poisson_ solve converged due to CONVERGED_RTOL iterations 7
KSP Object:(poisson_) 2 MPI processes
  type: cg
  maximum iterations=20000
  tolerances:  relative=1e-08, absolute=0., divergence=10000.
  left preconditioning
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 2 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
      GAMG specific options
        Threshold for dropping small values from graph 0.
        AGG specific options
          Symmetric graph false
  Coarse grid solver -- level -------------------------------
    KSP Object:    (poisson_mg_coarse_)     2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_coarse_)     2 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 2
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (poisson_mg_coarse_sub_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.09231
            Factored matrix follows:
              Mat Object:               1 MPI processes
                type: seqaij
                rows=12, cols=12
                package used to perform factorization: petsc
                total: nonzeros=142, allocated nonzeros=142
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 4 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=12, cols=12
          total: nonzeros=130, allocated nonzeros=130
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 7 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=12, cols=12
        total: nonzeros=130, allocated nonzeros=130
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 7 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_1_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=62, cols=62
        total: nonzeros=556, allocated nonzeros=556
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: richardson
        Richardson: damping factor=1.
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (poisson_mg_levels_2_)     2 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object:       2 MPI processes
        type: mpiaij
        rows=400, cols=400
        total: nonzeros=1920, allocated nonzeros=1920
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   2 MPI processes
    type: mpiaij
    rows=400, cols=400
    total: nonzeros=1920, allocated nonzeros=1920
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node (on process 0) routines

[time-step 1] Writing fluxes into file... done.

[time-step 1] Writing pressure into file... done.

=====================
*** PetIBM - Done ***
=====================
#PETSc Option Table entries:
-options_left
-poisson_ksp_atol 0.0
-poisson_ksp_converged_reason
-poisson_ksp_max_it 20000
-poisson_ksp_monitor_true_residual
-poisson_ksp_rtol 1.0E-08
-poisson_ksp_type cg
-poisson_ksp_view
-poisson_mg_levels_esteig_ksp_max_it 10
-poisson_mg_levels_esteig_ksp_type cg
-poisson_mg_levels_ksp_chebyshev_esteig 0,0.1,0,1.05
-poisson_mg_levels_ksp_type richardson
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_gamg_type agg
-poisson_pc_type gamg
-velocity_ksp_atol 0.0
-velocity_ksp_converged_reason
-velocity_ksp_max_it 10000
-velocity_ksp_monitor_true_residual
-velocity_ksp_rtol 1.0E-08
-velocity_ksp_type bcgs
-velocity_ksp_view
-velocity_pc_type jacobi
#End of PETSc Option Table entries
There are 3 unused database options. They are:
Option left: name:-poisson_mg_levels_esteig_ksp_max_it value: 10
Option left: name:-poisson_mg_levels_esteig_ksp_type value: cg
Option left: name:-poisson_mg_levels_ksp_chebyshev_esteig value: 0,0.1,0,1.05

      GAMG specific options
[0] PCSetUp_GAMG(): level 0) N=400, n data rows=1, n data cols=1, nnz/row (ave)=5, np=2
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 4.8 nnz ave. (N=400)
[0] PCGAMGCoarsen_AGG(): Square Graph on level 1 of 1 to square
[0] PCGAMGProlongator_AGG(): New grid 62 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.977326e+00 min=3.920450e-02 PC=jacobi
[0] PCGAMGCreateLevel_GAMG(): Number of equations (loc) 30 with simple aggregation
[0] PCSetUp_GAMG(): 1) N=62, n data cols=1, nnz/row (ave)=8, 1 active pes
[0] PCGAMGFilterGraph(): 	 100.% nnz after filtering, with threshold 0., 8.96774 nnz ave. (N=62)
[0] PCGAMGProlongator_AGG(): New grid 12 nodes
[0] PCGAMGOptProlongator_AGG(): Smooth P0: max eigen=1.556629e+00 min=1.967905e-03 PC=jacobi
[0] PCSetUp_GAMG(): 2) N=12, n data cols=1, nnz/row (ave)=10, 1 active pes
[0] PCSetUp_GAMG(): 3 levels, grid complexity = 1.35729
      GAMG specific options

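A side note on the "has attached null space" lines in the views above: that
flag is printed when a null space has been attached to the matrix, here
presumably the constant null space of the singular pressure-Poisson
operator. A hedged sketch of what the call site looks like in PETSc 3.7
(PetIBM's actual code may differ); dropped into the toy example above right
after MatAssemblyEnd(), it reproduces the same -ksp_view line, though one
should of course only attach it when the operator really is singular:

  MatNullSpace nullsp;
  MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE /* constants */, 0, NULL,
                     &nullsp);
  MatSetNullSpace(A, nullsp);   /* A: the Poisson matrix viewed above */
  MatNullSpaceDestroy(&nullsp);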