> On 23 Oct 2018, at 15:17, Ale Foggia <amfog...@gmail.com> wrote:
> 
> Hello Jose, thanks for your answer.
> 
> On Tue, 23 Oct 2018 at 12:59, Jose E. Roman (<jro...@dsic.upv.es>) 
> wrote:
> There is an undocumented option:
> 
>   -bv_reproducible_random
> 
> It will force the initial vector of the Krylov subspace to be the same 
> irrespective of the number of MPI processes. This should be used for scaling 
> analyses such as the one you are trying to do.
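> 
> For example (a minimal sketch; the executable name and process count are 
> placeholders), the flag can be given on the command line:
> 
>   mpiexec -n 1024 ./myprog -bv_reproducible_random
> 
> or set programmatically before EPSSetFromOptions() is called:
> 
>   PetscOptionsSetValue(NULL, "-bv_reproducible_random", NULL);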
> 
> What about when I'm not doing the scaling? I would now like to request 
> computing time for larger problems; should I also use this option in that 
> case? What happens if I end up with a "bad" configuration? That is, I 
> request an allocation that is sufficient according to the "correct" scaling, 
> but when I run, it takes twice the time/iterations, as happened before when 
> going from 960 to 1024 processes?

When you increase the matrix size, the spectrum of the matrix changes and 
probably the convergence behaviour as well, so the computation time is not 
easy to predict in advance.

> 
> An additional comment is that we strongly recommend using the default solver 
> (Krylov-Schur), which performs Lanczos with implicit restart. It is 
> generally faster and more stable.
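> 
> For instance (a sketch; "eps" is assumed to be your already-created EPS 
> object), the default can be selected explicitly with:
> 
>   EPSSetType(eps, EPSKRYLOVSCHUR);  /* the default; not calling EPSSetType() at all has the same effect */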
> 
> I will be doing dynamical Lanczos, which means I'll need the "matrix whose 
> rows are the eigenvectors of the tridiagonal matrix" (so, following the 
> notation of the Lanczos Technical Report, I need the matrix whose rows are 
> the eigenvectors of T_m, which should be the same as the vectors y_i). I 
> also checked the Technical Report for Krylov-Schur and I think I can get 
> the same information from that solver, but I'm not sure. Can you confirm 
> this, please?
> Also, since the vectors I want are given by V_m^(-1)*x_i = y_i (following 
> the notation of the Report), my idea was to retrieve the invariant subspace 
> V_m (with EPSGetInvariantSubspace), invert it, and then multiply it by the 
> eigenvectors that I get with EPSGetEigenvector. Is there an easier way (or 
> one with fewer computations) to get this?

In Krylov-Schur the tridiagonal matrix T_m becomes arrowhead-plus-tridiagonal; 
apart from this, it should be equivalent. The relevant information can be 
obtained with EPSGetBV() and EPSGetDS(), but this is a "developer-level" 
interface. We could help you get this running. Send a small problem matrix to 
slepc-maint together with a more detailed description of what you need to 
compute.
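
As an illustration only (an untested sketch; "eps" is the solved EPS object, 
and whether the projected-problem data is still available after EPSSolve() 
depends on solver internals, which is why we call this developer level):

  BV          bv;   /* the basis vectors V_m */
  DS          ds;   /* the projected (arrowhead-plus-tridiagonal) problem */
  PetscScalar *X;
  PetscInt    ld;

  EPSGetBV(eps, &bv);
  EPSGetDS(eps, &ds);
  DSGetLeadingDimension(ds, &ld);
  DSGetArray(ds, DS_MAT_X, &X);  /* eigenvectors of the projected problem; entry (i,j) is X[i+j*ld] */
  /* ... copy out whatever you need ... */
  DSRestoreArray(ds, DS_MAT_X, &X);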

Jose

> 
> 
> Jose
> 
> 
> 
> > On 23 Oct 2018, at 12:13, Ale Foggia <amfog...@gmail.com> wrote:
> > 
> > Hello, 
> > 
> > I'm currently using the Lanczos solver (EPSLANCZOS) to get the smallest 
> > real eigenvalue (EPS_SMALLEST_REAL) of a Hermitian problem (EPS_HEP). 
> > Those are the only options I set for the solver. My aim is to be able to 
> > predict/estimate the time-to-solution. To do so, I was doing a scaling 
> > study of the code for different matrix sizes and different numbers of MPI 
> > processes. As I was not observing good scaling, I checked the number of 
> > iterations of the solver (given by EPSGetIterationNumber). I've found 
> > that for the **same** matrix size (that is, the same problem), changing 
> > the number of MPI processes changes the number of iterations, and the 
> > behaviour is not monotonic. These are the numbers I've got (a minimal 
> > sketch of my solver setup follows the table):
> > 
> > # procs   # iters
> >     960       157
> >     992       189
> >    1024       338
> >    1056       190
> >    1120       174
> >    2048       136
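> > 
> > For reference, a minimal sketch of my solver setup (error checking 
> > omitted; "A" is my already-assembled Hermitian matrix):
> > 
> >   EPS      eps;
> >   PetscInt its;
> > 
> >   EPSCreate(PETSC_COMM_WORLD, &eps);
> >   EPSSetOperators(eps, A, NULL);
> >   EPSSetProblemType(eps, EPS_HEP);
> >   EPSSetType(eps, EPSLANCZOS);
> >   EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);
> >   EPSSetFromOptions(eps);
> >   EPSSolve(eps);
> >   EPSGetIterationNumber(eps, &its);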
> > 
> > I've checked the mailing list for a similar situation and found another 
> > person with the same problem but with a different solver ("[SLEPc] GD is 
> > not deterministic when using different number of cores", Nov 19 2015); 
> > however, I think the solution that person found (removing the 
> > "-eps_harmonic" option) does not apply to my problem.
> > 
> > Can you give me any hint about the reason for this behaviour? Is there a 
> > way to prevent it? It's not possible to estimate/predict the time 
> > consumption for bigger problems if the number of iterations varies this 
> > much.
> > 
> > Ale
> 
