Alan Bromborsky wrote:
> Luke wrote:
>> On Sep 29, 1:09 pm, Ondrej Certik <ond...@certik.cz> wrote:
>>> On Tue, Sep 29, 2009 at 12:49 PM, Luke <hazelnu...@gmail.com> wrote:
>>>> I'm using SymPy from within PyDy to generate the equations of motion for
>>>> mechanical systems.  At the end of the day, the equations can most
>>>> generally be written as:
>>>> M(x) * x'' = F(x, x', t)
>>>> M(x) is what is known as the mass matrix, and will in general depend on the
>>>> configuration of the system (positions and angles).  This matrix needs to be
>>>> inverted in order to solve for x'', which then allows for numerical
>>>> integration or stability analysis.
>>>> I am generating M(x) symbolically, and in some cases it is sparse, but in
>>>> many cases it isn't.  Each entry of the matrix is a SymPy expression, but
>>>> instead of trying to invert a matrix of SymPy expressions, I introduce a
>>>> dummy symbol for each non-zero entry and then invert the matrix of dummy
>>>> symbols.  Humble systems might have 5 degrees of freedom (so a 5x5 needs to
>>>> be inverted), and that inversion isn't so bad, but beyond this, the matrix
>>>> inversions take a really long time, especially when the matrices are full
>>>> (no zero entries).
>>>> I was thinking that it might be nice to have pre-computed matrix inverses
>>>> for n x n matrices.  Matrix inversion is O(n^3), so it would be nice to have
>>>> all of this precomputed symbolically, which would greatly speed up SymPy's
>>>> matrix capabilities.  Inverses up to, say, 100x100 could be computed (or
>>>> maybe something smaller), and then when you need an inverse, everything
>>>> would be fast.  This could also be used behind the scenes (by introducing
>>>> symbolic substitution dictionaries) for inverting a matrix full of SymPy
>>>> expressions.
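The dummy-symbol trick described above can be sketched as follows. This is a hypothetical toy example, not code from the thread: each non-zero entry of a small symbolic "mass matrix" is replaced by a `Dummy` symbol, the simpler matrix is inverted, and the original expressions are substituted back.

```python
from sympy import Matrix, Symbol, Dummy, eye, sin, cos, simplify

q = Symbol('q')
# Toy 2x2 "mass matrix" (hypothetical; a real PyDy matrix would be larger)
M = Matrix([[1 + sin(q)**2, cos(q)], [cos(q), 2]])

fwd = {}   # expression -> dummy symbol
back = {}  # dummy symbol -> expression
def dummify(e):
    # Map each distinct non-zero entry to a fresh dummy symbol
    if e == 0:
        return e
    if e not in fwd:
        d = Dummy()
        fwd[e] = d
        back[d] = e
    return fwd[e]

Md = M.applyfunc(dummify)      # matrix of plain dummy symbols
Minv = Md.inv().subs(back)     # invert the simple matrix, substitute back
assert (M * Minv).applyfunc(simplify) == eye(2)
```

The inversion then only ever sees atomic symbols, so intermediate expression swell comes from the structure of the inverse rather than from the entries themselves.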
>>> The inversion of a 100x100 dense matrix would be quite a big mess, wouldn't it?
>>>
>> Yes, it would be a disaster.
>>
>>> But I see what you mean, and I think it makes sense to cache it, if it
>>> speeds things up. But see below.
>>>
>>>> Does anybody know if this has been done by somebody somewhere, or have any
>>>> other ideas on how it could be done better than the way I suggested?
>>> I would first try to profile the inversion code to see why it is slow.
>>> For example, the adjoint method computes each entry of the inverse as a
>>> ratio of two determinants, so it may be the determinant calculation that
>>> is slow (and should be cached). However, there are 100^2 different
>>> determinants (right?), but I guess caching 10000 expressions should
>>> still be doable. If that is the case, though, we should just speed up
>>> the determinant calculation, imho. Looking at the code, we have two
>>> algorithms implemented, bareis and berkowitz.
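As a quick sanity check that the two algorithms compute the same thing once the Berkowitz result is expanded, a minimal sketch (note the spelling assumption: current SymPy spells the method `'bareiss'`, while the SymPy of this thread used `'bareis'`):

```python
from sympy import Matrix, Symbol

# Fully symbolic 3x3 matrix with distinct entries m00, m01, ...
M = Matrix(3, 3, lambda i, j: Symbol('m%d%d' % (i, j)))

d_bareiss = M.det(method='bareiss')              # 'bareis' in older SymPy
d_berkowitz = M.det(method='berkowitz').expand()  # expand to canonical form

# Both determinant algorithms agree after expansion
assert (d_bareiss - d_berkowitz).expand() == 0
```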
>>>
>> I did some timings of the three matrix inversion methods (GE, ADJ, LU)
>> using the timeit module.  I also timed the two determinant methods to see
>> how they perform side by side.  It seems that bareis (the default one) is
>> drastically slower than berkowitz, even when you call .expand() on the
>> berkowitz determinant so that it ends up with identical symbolic
>> expressions.  Maybe berkowitz should be the default method for .det()?
>> I went up to 5x5 matrices; it started to get really slow after that.
>> Gaussian elimination seems to be the slowest of the inversion methods, at
>> least for dense matrices like the ones I used.  Here is the code I used
>> to do the timings:
>>
>> import timeit
>> from numpy import zeros, amax
>> import matplotlib.pyplot as plt
>>
>> # Dimensions of the matrices to invert
>> n = range(2, 6)
>> # Number of times to invert each one
>> number = 20
>> # Store the results
>> t = zeros((len(n), 5))
>>
>> for i in n:
>>     # Build an i x i matrix of distinct symbols m00, m01, ... in the setup
>>     setup_code = ("from sympy import Matrix, Symbol\n"
>>                   "M = Matrix(%d, %d" % (i, i)
>>                   + ", lambda i, j: Symbol('m%d%d' % (i, j)))")
>>     t[i-2, 0] = timeit.Timer('M.inverse_GE()', setup_code).timeit(number)
>>     t[i-2, 1] = timeit.Timer('M.inverse_ADJ()', setup_code).timeit(number)
>>     t[i-2, 2] = timeit.Timer('M.inverse_LU()', setup_code).timeit(number)
>>     t[i-2, 3] = timeit.Timer('M.det(method="bareis")',
>>                              setup_code).timeit(number)
>>     t[i-2, 4] = timeit.Timer('M.det(method="berkowitz").expand()',
>>                              setup_code).timeit(number)
>>
>> plt.plot(n, t[:, 0]/number, label='GE')
>> plt.plot(n, t[:, 1]/number, label='ADJ')
>> plt.plot(n, t[:, 2]/number, label='LU')
>> plt.plot(n, t[:, 3]/number, label='bareis_det')
>> plt.plot(n, t[:, 4]/number, label='berkowitz_det.expand()')
>> plt.legend(loc=0)
>> plt.title('Average time to complete 1 matrix inversion/determinant')
>> plt.xlabel('matrix dimension')
>> plt.ylabel('Time [seconds]')
>> plt.xticks(n)
>> plt.axis([2, n[-1], 0, amax(t/number)])
>> plt.show()
>>
>>
>> I'd be curious to know whether others get similar results to mine.  I
>> posted the results of the above script here:
>> http://i35.tinypic.com/so09hw.jpg
>>
>> It looks like Gaussian elimination is suffering from the bottleneck in
>> the bareis determinant, since it has an assertion that calls .det() to
>> make sure the determinant is non-zero.
>>
>>> Also, how about the creation of so many expressions? That could be slow too.
>>>
>> Yeah, this is true.  It seems to me, though, that if you had all the
>> inverses precomputed for fully populated matrices, then whenever some of
>> the entries were zero those expressions would simplify somewhat (or a
>> lot), and the simplified expressions would be used rather than the full
>> ones.
>>
>> Maybe it could be an optional module that gets imported when you need to
>> work with large matrices and want to have inverses precomputed, so that
>> you don't have to sit and wait for them for hours?  That way it wouldn't
>> affect the normal behavior of SymPy, but could be imported if desired.
>>
>>> So what I want to say is that we should profile it and determine
>>> exactly what is slowing things down, and then we should speed those
>>> things up (or cache them).
>> Should I use cProfile for this?  Or do you recommend another profiler?
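For reference, a minimal cProfile run of a symbolic inversion might look like this. This is a hypothetical 3x3 example; `M.inv()` is the modern SymPy spelling, where the thread-era API used `M.inverse_GE()` and friends.

```python
import cProfile
import pstats
from sympy import Matrix, Symbol

# Fully symbolic 3x3 matrix, as in the timing script above
M = Matrix(3, 3, lambda i, j: Symbol('m%d%d' % (i, j)))

profiler = cProfile.Profile()
profiler.enable()
M.inv()  # the slow internals show up in the collected stats
profiler.disable()

# Sort by cumulative time so the dominant call chains float to the top
stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats(10)  # print the top 10 entries
```

Sorting by `'cumulative'` rather than `'tottime'` makes it easier to see which high-level call (determinant, simplification, etc.) dominates.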
>>
>> ~Luke
>>
>>> Ondrej
> Are there differential equation solvers where you don't have to invert
> the matrix?
>
Also, what does the system look like if written as a first-order system
of 2n equations?
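Written in first-order form, y = [x, x'] gives y' = [x', M(x)^-1 F]. A sketch of that reduction, using a toy 1-DOF system invented for illustration (not from the thread), which also sidesteps the symbolic inverse by solving M a = F numerically at each evaluation:

```python
import numpy as np
from numpy.linalg import solve

def rhs(t, y):
    """First-order form y' = [v, M(x)^-1 F] for a toy 1-DOF system."""
    n = len(y) // 2
    x, v = y[:n], y[n:]
    M = np.array([[1.0 + np.sin(x[0])**2]])     # toy mass matrix M(x)
    F = np.array([-np.sin(x[0]) - 0.1 * v[0]])  # toy forcing F(x, x', t)
    a = solve(M, F)        # solve M a = F directly; no explicit inverse
    return np.concatenate([v, a])

# One explicit Euler step as a usage check (a real run would hand rhs
# to an ODE integrator such as scipy.integrate.solve_ivp)
y0 = np.array([0.5, 0.0])
y1 = y0 + 0.01 * rhs(0.0, y0)
```

With this formulation the symbolic inverse is never needed for numerical integration; a linear solve per right-hand-side evaluation is both cheaper and numerically better behaved.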

You received this message because you are subscribed to the Google Groups 
"sympy" group.
To post to this group, send email to sympy@googlegroups.com
To unsubscribe from this group, send email to sympy+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sympy?hl=en