Sadly, I forgot to upload my data before I left the office.  However, a new 
thought occurred to me.  I've been trying to compute the eigenvalues of 
this system to assess the stability of a finite difference scheme 
(unstable if any Re(lambda) > 0).  This scheme will (of course) be coded up 
in double precision, so rather than trying to find "pure" eigenvalues 
(undirtied by finite-precision arithmetic), I should probably focus my 
attention on determining the eigenvalues of the actual system of equations 
as they will be implemented.  As such, I think the LinearOperator approach 
would be the best one to pursue.  This may also be part of the reason that 
my eigenvalues say the system should be stable while the numerical 
implementation is unstable.  I'll spend some time trying to get this 
working and let you know how it goes.
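
A minimal sketch of that plan (my illustration, not code from the thread): build the operator exactly as the double-precision scheme will see it, factor L once with splu, and ask ARPACK for the rightmost eigenvalues.  The matrices below are random stand-ins; the real L, R, c, d would come from the scheme.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

# Stand-ins for the actual finite-difference matrices; in practice L and R
# would be the double-precision operators exactly as implemented.
n = 80
L = sp.identity(n, format='csc') + 0.1 * sp.random(n, n, density=0.05,
                                                   format='csc', random_state=0)
R = sp.random(n, n, density=0.05, format='csc', random_state=1)
c, d = 0.9, 0.01

lu = sla.splu(L)  # factor L once; each matvec then only does triangular solves

def matvec(x):
    Sx = lu.solve(R @ x)    # S x   = L^-1 R x, in double precision
    S2x = lu.solve(R @ Sx)  # S^2 x
    return -c * Sx + d * S2x

A = sla.LinearOperator((n, n), matvec=matvec)
lam = sla.eigs(A, k=4, which='LR', return_eigenvectors=False)
print("max Re(lambda):", lam.real.max())  # > 0 would flag instability
```

The point of the splu factorization is that ARPACK calls matvec many times, so paying for the factorization once is much cheaper than a fresh spsolve per call.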

Thanks again for your help.  

On Friday, April 25, 2014 4:43:56 PM UTC-6, Ondřej Čertík wrote:
>
> On Fri, Apr 25, 2014 at 4:29 PM, Peter Brady <peter...@gmail.com> 
> wrote: 
> > Hi guys, 
> > 
> > I appreciate you taking the time to test some things out.  Ondrej, I had 
> > never looked into the 'LinearOperator' function before. I tried your idea 
> > and ended up with an arpack error: 
> > 
> > ValueError: Error in inverting M: function gmres did not converge 
> > (info = 800). 
>
> Can you post the whole script and the whole error? Did you try my 
> script on your data from the gist, and if so, do you get the same 
> error? If you do, then something is different in your scipy 
> installation. If it works, then maybe it doesn't work for some other 
> configuration that you tried. 
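
One diagnostic worth adding here (my suggestion, not from the original thread): that "Error in inverting M" message comes from an iterative solve inside scipy's ARPACK wrapper, which falls back to gmres when it cannot factor the operator directly, so it can help to check whether gmres converges on L at all.  The matrix below is a stand-in; substitute the actual L (or M) passed to eigs.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

# Stand-in matrix; substitute the actual L (or M) that eigs is given.
n = 80
L = sp.identity(n, format='csc') + 0.1 * sp.random(n, n, density=0.05,
                                                   format='csc', random_state=0)
b = np.ones(n)

x, info = sla.gmres(L, b)
print("gmres info:", info)  # 0 means converged; a positive value means failure
```

If gmres fails on L directly, the eigs call has no chance, and a direct factorization (splu) or preconditioning is the way around it.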
>
> > 
> > 
> > Here's what I did 
> > import numpy as np 
> > from sympy import SparseMatrix 
> > import scipy.sparse as scp 
> > import scipy.sparse.linalg as linalg 
> > 
> > L = SparseMatrix(..) 
> > R = SparseMatrix(..) 
> > c,d = 0.9,0.01 
> > 
> > Ls = scp.csc_matrix(np.array(L).astype('double')) 
> > Rs = scp.csc_matrix(np.array(R).astype('double')) 
> > 
> > def matvec(x): 
> >     Sx = linalg.spsolve(Ls,Rs*x) 
> >     S2x= linalg.spsolve(Ls,Rs*Sx) 
> >     return -c*Sx+d*S2x 
> > A = linalg.LinearOperator((npoints,npoints),matvec=matvec) 
> > 
> > 
> > The eigenvalues should look like the attached plot.  The data in the gist 
> > corresponds to r=1.0 (the best conditioned case).  Jason's approach works 
> > for the better conditioned L and R, but once r < 0.8, computing A from 
> > numerically evaluated B leads to spurious eigenvalues. (Numerics rather 
> > than symbolics was my first approach.)  I can update my gist with more L 
> > and R matrices if anyone is interested. 
> > 
> > On Friday, April 25, 2014 4:11:03 PM UTC-6, Ondřej Čertík wrote: 
> >> 
> >> On Fri, Apr 25, 2014 at 3:52 PM, Jason Moore <moore...@gmail.com> 
> wrote: 
> >> > Does this work for you? 
> >> > 
> >> > from __future__ import division 
> >> > import numpy as np 
> >> > from scipy.sparse import csr_matrix 
> >> > from scipy.sparse.linalg import spsolve 
> >> > 
> >> > def get_matrix(filename): 
> >> > 
> >> >     d = np.zeros(238) 
> >> >     row = np.zeros(238) 
> >> >     col = np.zeros(238) 
> >> >     with open(filename,'r') as f: 
> >> >         for k, line in enumerate(f): 
> >> >             i,j,v = line.strip().split() 
> >> >             d[k] = eval(v) 
> >> >             row[k] = int(i) 
> >> >             col[k] = int(j) 
> >> >     return csr_matrix((d, (row, col)), shape=(80, 80)) 
> >> > 
> >> > R,L = get_matrix('R_80'),get_matrix('L_80') 
> >> > 
> >> > B = spsolve(L, R) 
> >> > c = 2.0 
> >> > d = 3.0 
> >> > A = -c * B + d * B ** 2 
> >> 
> >> I use Jason's get_matrix() but my sparse solve for the eigenvalues: 
> >> 
> >> https://gist.github.com/certik/11304917 
> >> 
> >> Peter, are those eigenvalues correct? You can see that my 
> >> matrix-vector product is correct, as I test it on a random vector, 
> >> but the eigenvalues look off. 
> >> 
> >> Ondrej 
> >> 
> >> > 
> >> > 
> >> > 
> >> > Jason 
> >> > moorepants.info 
> >> > +01 530-601-9791 
> >> > 
> >> > 
> >> > On Fri, Apr 25, 2014 at 5:42 PM, Ondřej Čertík <ondrej...@gmail.com> 
> >> > wrote: 
> >> >> 
> >> >> On Fri, Apr 25, 2014 at 3:17 PM, Peter Brady <peter...@gmail.com> 
> >> >> wrote: 
> >> >> > I've created a gist with the text for my L and R matrices, as well 
> >> >> > as a simple function to read them in and turn them into 
> >> >> > SparseMatrices, at https://gist.github.com/pbrady/11303375 
> >> >> > 
> >> >> > I can switch from B = L.inv()*R to B = solve(L,R), but most of the 
> >> >> > time is not spent computing B but rather A from B (i.e. in 
> >> >> > computing A=-c*B+d*B**2) 
> >> >> > @Ondrej, I don't think I know what 'x' is 
> >> >> 
> >> >> It's given to you by arpack. I've updated the script for you: 
> >> >> 
> >> >> from numpy import dot 
> >> >> from numpy.linalg import inv, solve 
> >> >> from numpy.random import random 
> >> >> from scipy.sparse.linalg import LinearOperator, eigs 
> >> >> 
> >> >> n = 5 
> >> >> R = random((n, n)) 
> >> >> L = random((n, n)) 
> >> >> 
> >> >> def matvec(x): 
> >> >>     Sx = solve(L, dot(R, x)) 
> >> >>     S2x = solve(L, dot(R, Sx)) 
> >> >>     return -c*Sx + d*S2x 
> >> >> 
> >> >> c, d = 1.5, 2.5 
> >> >> S = dot(inv(L), R) 
> >> >> A = -c*S + d*dot(S, S) 
> >> >> x = random(n) 
> >> >> print "A*x calculation using L^-1:" 
> >> >> print dot(A, x) 
> >> >> print "A*x calculation without using L^-1:" 
> >> >> print matvec(x) 
> >> >> 
> >> >> print "Largest real part n-2 eigenvalues of A using L^-1:" 
> >> >> print eigs(A, k=n-2, which="LR")[0] 
> >> >> 
> >> >> print "Largest real part n-2 eigenvalues of A without using L^-1:" 
> >> >> A = LinearOperator((n, n), matvec=matvec) 
> >> >> print eigs(A, k=n-2, which="LR")[0] 
> >> >> 
> >> >> 
> >> >> 
> >> >> This prints: 
> >> >> 
> >> >> A*x calculation using L^-1: 
> >> >> [-33.82564316  -8.22498638   7.44165407 36.72209849  -1.70563463] 
> >> >> A*x calculation without using L^-1: 
> >> >> [-33.82564316  -8.22498638   7.44165407 36.72209849  -1.70563463] 
> >> >> Largest real part n-2 eigenvalues of A using L^-1: 
> >> >> [ 473.88681348+0.j    1.82296261+0.j    1.06790730+0.j] 
> >> >> Largest real part n-2 eigenvalues of A without using L^-1: 
> >> >> [ 473.88681348+0.j    1.82296261+0.j    1.06790730+0.j] 
> >> >> 
> >> >> As you can see, the last set of eigenvalues is computed without any 
> >> >> explicit knowledge of A, using only the "solve" mechanism. 
> >> >> So this should fix the problem for you. 
> >> >> 
> >> >> Can you please try this on your matrices and report back? 
> >> >> 
> >> >> Ondrej 
> >> >> 
> >> >> > 
> >> >> > On Friday, April 25, 2014 2:30:24 PM UTC-6, Ondřej Čertík wrote: 
> >> >> >> 
> >> >> >> On Fri, Apr 25, 2014 at 11:54 AM, Ondřej Čertík 
> >> >> >> <ondrej...@gmail.com> 
> >> >> >> wrote: 
> >> >> >> > Hi Peter, 
> >> >> >> > 
> >> >> >> > On Fri, Apr 25, 2014 at 11:35 AM, Peter Brady 
> >> >> >> > <peter...@gmail.com> wrote: 
> >> >> >> >> So I start with two sparse matrices, L and R, each with data on 
> >> >> >> >> just a few bands (i.e. 3 to 5). 
> >> >> >> >> 
> >> >> >> >> My goal is to compute the largest and smallest eigenvalues of 
> >> >> >> >> the matrix A given by: 
> >> >> >> >> 
> >> >> >> >> A = -c*(L^-1*R) + d*(L^-1*R)**2     where c and d are constants 
> >> >> >> >> 
> >> >> >> >> In my code this is written as: 
> >> >> >> >> 
> >> >> >> >>     L = SparseMatrix(...) 
> >> >> >> >>     R = SparseMatrix(...) 
> >> >> >> >> 
> >> >> >> >>     B = L.inv()*R 
> >> >> >> >> 
> >> >> >> >>     A = np.array(-c*B+d*B**2).astype('double') 
> >> >> >> >> 
> >> >> >> >> I can then use scipy/ARPACK to get the values I want.  If I 
> >> >> >> >> convert L, R or B to numpy arrays before computing A, I get 
> >> >> >> >> crappy eigenvalues, so this has to be done symbolically.  My 
> >> >> >> >> problem is that while computing B is manageable for the 
> >> >> >> >> matrices I'm interested in (from 20x20 to 160x160), computing A 
> >> >> >> >> takes about 5 minutes and eats up 15-30% of my memory, so I 
> >> >> >> >> need to run this in serial.  In contrast, if I convert B to a 
> >> >> >> >> numpy array, it takes < 1s to compute A (although it is the 
> >> >> >> >> wrong A, so it's essentially worthless). 
> >> >> >> > 
> >> >> >> > Can you post code that does the above? You can use gist: 
> >> >> >> > https://gist.github.com/ 
> >> >> >> > or an IPython notebook or whatever you prefer. 
> >> >> >> > 
> >> >> >> > Essentially, doing L.inv() means you can't use Lapack, because 
> >> >> >> > the condition number will be bad. But I wonder if there is a way 
> >> >> >> > to rewrite the problem without doing L.inv(), possibly using 
> >> >> >> > some generalized eigenvalue problem or something, so that you 
> >> >> >> > can still use Lapack. 
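
One concrete version of that rewrite (my sketch, not from the thread): the eigenvalues of A = -c*B + d*B**2 are exactly p(mu) = -c*mu + d*mu**2 for the eigenvalues mu of B = L^-1*R, and those mu solve the generalized problem R v = mu L v, which Lapack handles without ever forming L^-1.  A small dense demo with random matrices:

```python
import numpy as np
from scipy.linalg import eigvals, solve

rng = np.random.default_rng(0)
n = 6
L = rng.random((n, n)) + n * np.eye(n)  # diagonal shift keeps L well conditioned
R = rng.random((n, n))
c, d = 1.5, 2.5

mu = eigvals(R, L)   # generalized problem R v = mu L v; no L^-1 formed

B = solve(L, R)      # explicit B only to cross-check the mapping
lam = eigvals(-c * B + d * B @ B)

# the two spectra agree after the polynomial mapping p(mu) = -c*mu + d*mu**2
print(np.allclose(np.sort(-c * mu + d * mu**2), np.sort(lam)))
```

So for the full problem one only ever needs the mu, mapped through p, rather than A itself.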
> >> >> >> 
> >> >> >> There is! 
> >> >> >> 
> >> >> >> The idea is that for arpack, you only need to know A*x, not A 
> >> >> >> directly. So here is how to calculate A*x without explicitly 
> >> >> >> calculating L^-1: 
> >> >> >> 
> >> >> >> from numpy import dot 
> >> >> >> from numpy.linalg import inv, solve 
> >> >> >> from numpy.random import random 
> >> >> >> 
> >> >> >> n = 5 
> >> >> >> R = random((n, n)) 
> >> >> >> L = random((n, n)) 
> >> >> >> c, d = 1.5, 2.5 
> >> >> >> S = dot(inv(L), R) 
> >> >> >> A = -c*S + d*dot(S, S) 
> >> >> >> x = random(n) 
> >> >> >> print "A*x calculation using L^-1:" 
> >> >> >> print dot(A, x) 
> >> >> >> print "A*x calculation without using L^-1:" 
> >> >> >> Sx = solve(L, dot(R, x)) 
> >> >> >> S2x = solve(L, dot(R, Sx)) 
> >> >> >> print -c*Sx + d*S2x 
> >> >> >> 
> >> >> >> 
> >> >> >> 
> >> >> >> This prints: 
> >> >> >> 
> >> >> >> A*x calculation using L^-1: 
> >> >> >> [-0.21153976  2.50748822  2.03177497  4.24144355 -4.15743541] 
> >> >> >> A*x calculation without using L^-1: 
> >> >> >> [-0.21153976  2.50748822  2.03177497  4.24144355 -4.15743541] 
> >> >> >> 
> >> >> >> You can see that both approaches exactly agree. So you can 
> >> >> >> continue using numerics for this, which is always better, if 
> >> >> >> you can. 
> >> >> >> 
> >> >> >> Ondrej 
> >> >> >> 
> >> >> >> > 
> >> >> >> > It's always good to do solid numerics; it's faster, you can use 
> >> >> >> > Fortran, and so on. But a separate issue is how to do this 
> >> >> >> > symbolically, and SymPy should have the best algorithms. I need 
> >> >> >> > to play with your code to get a better opinion on how to speed 
> >> >> >> > this up. 
> >> >> >> > 
> >> >> >> > 
> >> >> >> > 
> >> >> >> >> Is there some way to speed this up and/or reduce the memory 
> >> >> >> >> footprint?  Ideally, I would like to run hundreds (maybe 
> >> >> >> >> thousands) of different cases.  I'm fine with installing the 
> >> >> >> >> necessary libraries on my machine (linux). 
> >> >> >> > 
> >> >> >> > Perfect. This might be another great application for CSymPy. We 
> >> >> >> > are implementing the matrix support this summer as part of GSoC. 
> >> >> >> > 
> >> >> >> > Ondrej 
> >> >> > 
> >> >> 
> >> > 
> >> > 
> > 
>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sympy+unsubscr...@googlegroups.com.
To post to this group, send email to sympy@googlegroups.com.
Visit this group at http://groups.google.com/group/sympy.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sympy/88fa65cc-d303-47d8-a127-daeaaf29af30%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
