On May 25, 2012, at 9:42 AM, Jed Brown wrote:

> The high end of the GS preconditioned operator is still high frequency. If it 
> wasn't, then GS would be a spectrally equivalent preconditioner.
> 

Huh?  If I damp Jacobi on the 3-point stencil with a factor of 0.5, then the
high frequency is _not_ the "high end of the preconditioned operator"; it is
asymptotically 0.  Does that mean it is spectrally equivalent?
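
To make that concrete with the standard 1D model problem: on the 3-point
Laplacian, D^-1 A has eigenvalues 1 - cos(k*pi*h) in (0, 2), so the
damped-Jacobi error propagator E = I - w D^-1 A has eigenvalues
1 - w(1 - cos(k*pi*h)). With w = 1/2 the smoothest mode (cos -> 1) sits near
1, while the highest-frequency mode (cos -> -1) is damped to 1 - (1/2)*2 = 0.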

> There is work on optimizing polynomials for damping on a chosen part of the 
> spectrum. I think what we really want is to take estimates of a few extremal 
> eigenvalues and construct a polynomial that damps them.
> 
> On May 25, 2012 8:34 AM, "Mark F. Adams" <mark.adams at columbia.edu> wrote:
> Barry,
> 
> The trick with Cheby is that it exploits special knowledge: 1) the spectrum
> of the error reduction with an additive (diagonal) preconditioner and
> Richardson -- on the 5-point stencil of the Laplacian -- is the worst (in
> the part of the spectrum that the smoother has to worry about) at the high
> end; 2) it is cheap to compute the highest eigenvalue; and 3) Cheby is made
> for eigenvalues (and you only need the high end, so the highest is fine and
> you can make up the lowest).
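
A minimal sketch of points 2) and 3) in PETSc C (hypothetical setup code:
SetupChebySmoother and emax_est are illustrative names, with emax_est assumed
computed elsewhere by a few Krylov iterations; the 1.1 safety factor and the
/10 lower target are conventional choices, not prescribed):

  #include <petscksp.h>

  /* Configure a level smoother as Chebyshev/Jacobi from an estimate of the
     largest eigenvalue of the preconditioned operator. */
  PetscErrorCode SetupChebySmoother(KSP ksp, PetscReal emax_est)
  {
    PC             pc;
    PetscReal      emax = 1.1*emax_est;   /* safety margin over the estimate */
    PetscErrorCode ierr;

    ierr = KSPSetType(ksp, KSPCHEBYSHEV);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCJACOBI);CHKERRQ(ierr);
    /* Target [emax/10, emax]: damp the high end hard, leave the smooth
       modes to the coarse grid. */
    ierr = KSPChebyshevSetEigenvalues(ksp, emax, emax/10.0);CHKERRQ(ierr);
    return 0;
  }
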
> 
> The spectrum for multiplicative is quite different (I think; I've never
> actually seen it), so I would expect to see worse performance with Cheby/GS.
> 
> This trick works for elasticity, but higher-order discretizations can cause
> problems.  Likewise, as Jed points out, unsymmetric problems are another
> issue.  Cheby is only a heuristic for the damping on unsymmetric problems,
> and you are really on your own.  Note that people have done work on Cheby
> for unsymmetric problems.
> 
> Mark
> 
> On May 25, 2012, at 8:35 AM, Barry Smith wrote:
> 
> >
> > On May 24, 2012, at 10:35 PM, Jed Brown wrote:
> >
> >> On Thu, May 24, 2012 at 10:19 PM, Barry Smith <bsmith at mcs.anl.gov> 
> >> wrote:
> >>
> >> I think I've fixed PCPre/PostSolve_Eisenstat() to work properly with
> >> kspest, but with ex19 it seems to be a much worse smoother than SOR (both
> >> with Cheby). I'm not sure why. With SOR we are essentially doing a few
> >> Chebyshev iterations with (U+D)^-1 (L+D)^-1 A x = b, while with Eisenstat
> >> it is (L+D)^-1 A (U+D)^-1 y = (L+D)^-1 b with x = (U+D)^-1 y.  Well, no
> >> time to think about it now.
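
[For reference: the two operators above are similar,

  (U+D) [ (U+D)^-1 (L+D)^-1 A ] (U+D)^-1 = (L+D)^-1 A (U+D)^-1,

so they share a spectrum and Chebyshev's optimal polynomial is the same for
both; any difference must come from the transformed iterates and residuals
(and Eisenstat's diagonal scaling), not from the eigenvalues.]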
> >>
> >>  Mark,
> >>
> >>         Has anyone looked at using Cheby Eisenstat smoothing as opposed
> >> to Cheby SOR smoothing?
> >>
> >> Aren't they supposed to be the same?
> >
> >  Well, one is left preconditioning while the other is "split"
> > preconditioning, so the iterates could be different.  There might also be
> > some difference with diagonal scalings?
> >
> >   Barry
> >
> >>
> >>
> >>   Barry
> >>
> >> On May 24, 2012, at 1:20 PM, Jed Brown wrote:
> >>
> >>> On Wed, May 23, 2012 at 2:52 PM, Jed Brown <jedbrown at mcs.anl.gov> 
> >>> wrote:
> >>> On Wed, May 23, 2012 at 2:26 PM, Barry Smith <bsmith at mcs.anl.gov> 
> >>> wrote:
> >>>
> >>> Note that you could use -pc_type eisenstat perhaps in this case instead.
> >>> Might save lots of flops?  I've often wondered about doing Mark's
> >>> favorite chebyshev smoother with Eisenstat; it seems like it should be a
> >>> good match.
> >>>
> >>> [0]PETSC ERROR: --------------------- Error Message 
> >>> ------------------------------------
> >>> [0]PETSC ERROR: No support for this operation for this object type!
> >>> [0]PETSC ERROR: Cannot have different mat and pmat!
> >>>
> >>> Also, I'm having trouble getting Eisenstat to be more than very 
> >>> marginally faster than SOR.
> >>>
> >>>
> >>> I think we should later be getting the eigenvalue estimate by applying 
> >>> the preconditioned operator to a few random vectors, then 
> >>> orthogonalizing. The basic algorithm is to generate a random matrix X 
> >>> (say 5 or 10 columns), compute
> >>>
> >>> Y = (P^{-1} A)^q X
> >>>
> >>> where q is 1 or 2 or 3, then compute
> >>>
> >>> Q R = Y
> >>>
> >>> and compute the largest singular value of the small matrix R. The
> >>> orthogonalization can be done in one reduction and all the MatMults can
> >>> be done together. Whenever we manage to implement a MatMMult and PCMApply
> >>> or whatever (names inspired by VecMDot), this will provide a very
> >>> low-communication way to get the eigenvalue estimates.
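
A minimal serial sketch of that estimator (hypothetical code, not a proposed
PETSc interface; EstimateEmax is an illustrative name, A and pc are assumed
set up elsewhere, and sigma_max(Y) is recovered from the small Gram matrix
G = Y^T Y rather than an explicit QR, since Y = QR implies R^T R = G):

  #include <petscksp.h>

  /* Estimate lambda_max(P^-1 A) by applying the preconditioned operator
     q times to k random vectors and taking sigma_max(Y)^(1/q). */
  PetscErrorCode EstimateEmax(Mat A, PC pc, PetscInt k, PetscInt q, PetscReal *emax)
  {
    Vec            t, *Y;
    PetscScalar    *G, *v, *w;
    PetscInt       i, j, it;
    PetscReal      nrm = 0.0;
    PetscErrorCode ierr;

    ierr = MatCreateVecs(A, &t, NULL);CHKERRQ(ierr);
    ierr = VecDuplicateVecs(t, k, &Y);CHKERRQ(ierr);
    for (i = 0; i < k; i++) {
      ierr = VecSetRandom(Y[i], NULL);CHKERRQ(ierr);
      ierr = VecNormalize(Y[i], NULL);CHKERRQ(ierr);
      for (j = 0; j < q; j++) {           /* Y[i] <- P^-1 A Y[i] */
        ierr = MatMult(A, Y[i], t);CHKERRQ(ierr);
        ierr = PCApply(pc, t, Y[i]);CHKERRQ(ierr);
      }
    }
    /* Gram matrix G = Y^T Y; one reduction per column via VecMDot */
    ierr = PetscMalloc3(k*k, &G, k, &v, k, &w);CHKERRQ(ierr);
    for (i = 0; i < k; i++) {
      ierr = VecMDot(Y[i], k, Y, &G[i*k]);CHKERRQ(ierr);
    }
    /* Power iteration on the k-by-k G: lambda_max(G) = sigma_max(Y)^2 */
    for (j = 0; j < k; j++) v[j] = 1.0;
    for (it = 0; it < 50; it++) {
      for (i = 0; i < k; i++) {
        w[i] = 0.0;
        for (j = 0; j < k; j++) w[i] += G[i*k + j]*v[j];
      }
      nrm = 0.0;
      for (j = 0; j < k; j++) nrm += PetscRealPart(w[j]*PetscConj(w[j]));
      nrm = PetscSqrtReal(nrm);
      for (j = 0; j < k; j++) v[j] = w[j]/nrm;
    }
    *emax = PetscPowReal(nrm, 0.5/q);     /* sigma_max(Y)^(1/q) */
    ierr = PetscFree3(G, v, w);CHKERRQ(ierr);
    ierr = VecDestroyVecs(k, &Y);CHKERRQ(ierr);
    ierr = VecDestroy(&t);CHKERRQ(ierr);
    return 0;
  }
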
> >>>
> >>>
> >>> I want to turn off norms in Chebyshev by default (they are very 
> >>> wasteful), but how should I make -mg_levels_ksp_monitor turn them back 
> >>> on? I'm already tired of typing -mg_levels_ksp_norm_type unpreconditioned.
> >>
> >>
> >
> >
> 
