> On Aug 25, 2015, at 3:22 PM, Matthew Knepley <knep...@gmail.com> wrote:
> 
> On Tue, Aug 25, 2015 at 12:13 PM, Mark Adams <mfad...@lbl.gov> wrote:
> 
> 
> On Sat, Aug 22, 2015 at 10:39 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> 
> > On Aug 22, 2015, at 9:26 PM, Mark Adams <mfad...@lbl.gov> wrote:
> >
> > Good point.  I cannot see any reason to use the initial guess for the 
> > eigen estimate.
> 
>    Why not? Won't it better select for the eigenspace actually seen by the 
> linear solver, since the linear solver starts with that guess? I am just 
> speculating because I haven't looked at how the initial guess affects the 
> eigenanalysis for Chebyshev, but ...
> 
> I've not looked at it, and it would be hard to because it is not a 
> well-defined problem.  But if the initial guess is low frequency then it will 
> give a poor estimate for the largest eigenvalue.  It is not clear to me what 
> the relationship is, in general, between an initial guess and the solution, 
> spectrally.  Initial guesses will change as the problem evolves, but we don't 
> update the eigen estimates.  If the user's initial guess happens to be zero, 
> then god knows what happens. (This is actually the case for the XGC1 code!!!) 
> It adds one more variable in debugging AMG, which is hard enough as it is.
>  
> 
> >  I would vote for (1).
> >
> > Also, I hope cheb->random is the default.
> 
>    Well, then different machines will produce different convergence histories, 
> which is annoying for any kind of "no change" daily testing. Except for you, 
> most of the rest of us don't like the random default, sorry :-)
> 
> We can use the deterministic random number that I added to GAMG (e.g., v(i) = 
> ((double)((gid(i)*51)%100) - 50.)/50.)
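> 
> A minimal sketch of that kind of deterministic fill as a standalone helper, 
> filling the local part of a Vec from the global index (the function name and 
> everything except the (gid*51)%100 formula are illustrative, not the actual 
> GAMG code):
> 
>   /* Fill x with reproducible "random" values in [-1,1) derived from the
>      global index, so every machine/OS produces the same vector */
>   static PetscErrorCode VecSetDeterministicRandom(Vec x)
>   {
>     PetscErrorCode ierr;
>     PetscInt       i,rstart,rend;
>     PetscScalar    *a;
> 
>     PetscFunctionBegin;
>     ierr = VecGetOwnershipRange(x,&rstart,&rend);CHKERRQ(ierr);
>     ierr = VecGetArray(x,&a);CHKERRQ(ierr);
>     for (i=rstart; i<rend; i++) {
>       a[i-rstart] = ((PetscScalar)((i*51)%100) - 50.0)/50.0;  /* gid = i here */
>     }
>     ierr = VecRestoreArray(x,&a);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }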
> 
> Wait, why are you doing this? Why not just create a PetscRandom and set the 
> seed?
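> 
> A minimal sketch of that alternative, using only standard PETSc calls with a 
> fixed seed (the seed value and the communicator are illustrative):
> 
>   PetscRandom rand;
>   ierr = PetscRandomCreate(PetscObjectComm((PetscObject)ksp),&rand);CHKERRQ(ierr);
>   ierr = PetscRandomSetSeed(rand,1234);CHKERRQ(ierr);  /* same seed every run */
>   ierr = PetscRandomSeed(rand);CHKERRQ(ierr);
>   ierr = VecSetRandom(B,rand);CHKERRQ(ierr);
>   ierr = PetscRandomDestroy(&rand);CHKERRQ(ierr);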

   Yes, he should create a new PetscRandom implementation called, for example, 
"deterministic", that would do some simple-minded thing. All the current 
PetscRandom implementations might generate different numbers on different 
operating systems, or different versions of an OS, thus making "no change" 
tests impossible.
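
   One simple-minded possibility for what such an implementation could produce 
is a fixed linear congruential sequence, which depends only on integer 
arithmetic and so gives the same stream on every OS (the constants are the 
common Numerical Recipes LCG constants; the function below is an illustration, 
not an existing PETSc API):

    /* OS-independent pseudo-random stream: same seed -> same sequence everywhere */
    static unsigned long det_state = 1234;            /* the "seed" */
    static double DetRandomNext(void)
    {
      det_state = (det_state*1664525UL + 1013904223UL) & 0xffffffffUL;  /* LCG mod 2^32 */
      return (double)det_state/4294967296.0;          /* value in [0,1) */
    }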

   Barry

> 
>    Matt
>  
> 
> >  One of my apps uses a zero RHS for the first solve, just because they did 
> > not care about adding some logic like: if (.not. first_solve) solve().  
> > Using a zero RHS would be catastrophic.
> 
>    There could be a check that the rhs norm is zero (at the cost of a global 
> reduction?) and then use a nonzero initial guess to do the eigenanalysis.
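> 
> A sketch of what that check might look like before the estimator solve 
> (purely illustrative, not existing code):
> 
>   PetscReal bnorm;
>   ierr = VecNorm(B,NORM_2,&bnorm);CHKERRQ(ierr);   /* requires a global reduction */
>   if (bnorm == 0.0) {
>     /* rhs is identically zero: seed the estimator with a nonzero initial guess
>        instead (the estimator KSP would also need its guess marked nonzero) */
>     ierr = VecSetRandom(X,NULL);CHKERRQ(ierr);
>   }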
> 
> Too complicated for little gain (or even a loss), and it requires a reduction.
>  
> 
> >  I trust this is true, because the code works, but we should make sure.  
> > And perhaps Cheby should check that the KSPSolve did all of its iterations 
> > (ie, DIVERGED_ITS, or whatever).  Getting this wrong leads to silent errors 
> > that are a pain to debug.
> 
>    Good point, the KSPChebyshevComputeExtremeEigenvalues_Private() routine 
> should check that the n returned from KSPGetIterationNumber() is not zero, etc.
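> 
> A sketch of such a guard inside that routine (the variable name and error 
> message are illustrative):
> 
>   PetscInt n;
>   ierr = KSPGetIterationNumber(kspest,&n);CHKERRQ(ierr);
>   if (!n) {
>     /* the estimator solve did no iterations (e.g. it "converged" immediately
>        on a zero rhs), so the eigenvalue bounds would be meaningless */
>     SETERRQ(PetscObjectComm((PetscObject)kspest),PETSC_ERR_CONV_FAILED,
>             "Eigenvalue estimation KSP performed no iterations");
>   }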
> 
>   Barry
> 
> So I think we should:
> 
> 1) set the initial guess with my new sort-of-random number (we don't need a 
> good quality random number here)
> 2) add the check for DIVERGED_ITS, or something, in 
> KSPChebyshevComputeExtremeEigenvalues_Private.  Should it stop?  Probably.
> 
> Mark
>  
> 
> >
> > I can do this.
> >
> > Mark
> >
> >
> >
> > On Sat, Aug 22, 2015 at 6:35 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> >    From KSPSolve_Chebyshev()
> >
> >       X    = ksp->work[0];
> >       if (cheb->random) {
> >         B    = ksp->work[1];
> >         ierr = VecSetRandom(B,cheb->random);CHKERRQ(ierr);
> >       } else {
> >         B = ksp->vec_rhs;
> >       }
> >       ierr = KSPSolve(cheb->kspest,B,X);CHKERRQ(ierr);
> >
> >       if (ksp->guess_zero) {
> >         ierr = VecZeroEntries(X);CHKERRQ(ierr);
> >       }
> >       ierr = KSPChebyshevComputeExtremeEigenvalues_Private(cheb->kspest,&min,&max);CHKERRQ(ierr);
> >
> >    This seems to do strange stuff with the initial guess for the 
> > eigenanalysis. ksp->work[0] is a work vector used within the Chebyshev 
> > algorithm, so at this point in the code it will have just whatever stuff it 
> > had in it from a previous Chebyshev solve, or zero the first time 
> > through. It seems bad to use this vector as the initial guess for the 
> > estimator. Then AFTER the KSPSolve() it sometimes zeros ksp->work[0]: only 
> > if the original system being solved has a zero initial guess, even though 
> > the values in X will not be used again. WTF?
> >
> >    Shouldn't the code either
> >
> > 1) zero X = ksp->work[0] every time BEFORE the KSPSolve(), or
> > 2) zero X if ksp->guess_zero, and otherwise copy the caller's initial guess 
> > vec_sol into X before computing the eigenvalues, so that the eigenvalue 
> > estimate uses that initial guess? (A sketch of option 2 follows below.)
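> >
> >   A minimal sketch of option 2 using the variables from the excerpt above 
> > (a proposed rearrangement, not the current code):
> >
> >       if (ksp->guess_zero) {
> >         ierr = VecZeroEntries(X);CHKERRQ(ierr);
> >       } else {
> >         ierr = VecCopy(ksp->vec_sol,X);CHKERRQ(ierr);
> >       }
> >       ierr = KSPSolve(cheb->kspest,B,X);CHKERRQ(ierr);
> >       ierr = KSPChebyshevComputeExtremeEigenvalues_Private(cheb->kspest,&min,&max);CHKERRQ(ierr);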
> >
> >   Barry
> >
> >
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
