On Mon, Feb 13, 2012 at 10:29 AM, Robert Bradshaw
<rober...@math.washington.edu> wrote:
> On Mon, Feb 13, 2012 at 10:06 AM, William Stein <wst...@gmail.com> wrote:
>> On Mon, Feb 13, 2012 at 9:59 AM, Pierre <pierre.guil...@gmail.com> wrote:
>>> I see. Well, I *do* have hundreds of 2x2 matrices to multiply out,
>>> so I'm better off storing them as numpy matrices throughout... thanks
>>> for your explanations, though.
>>>
>>> Pierre
>>
>> You might consider using Cython and writing a custom 2x2 matrix class.
>>  It wouldn't be difficult... so I'll write one right now and respond
>> with the benchmarks.

Here it is:  http://480.sagenb.org/home/pub/97/

I ended up using GSL's complex matrix data type and its BLAS level-3
multiplication routine (gsl_blas_zgemm). I did not add any other
convenience functions to the class, so more will probably be needed
for your application.
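
For reference, here is a minimal Cython sketch of such a class. This is
hypothetical code, not the published class above: it hardcodes the 2x2
product instead of calling GSL's zgemm, which keeps it self-contained.
In Sage it can be compiled on the fly with %cython in a notebook cell
or from a .spyx file.

cdef class Matrix2x2:
    # entries of [[a, b], [c, d]], stored as C-level double complex fields
    cdef double complex a, b, c, d

    def __init__(self, a, b, c, d):
        self.a = a; self.b = b; self.c = c; self.d = d

    def __mul__(Matrix2x2 left, Matrix2x2 right):
        # explicit 2x2 product: no loops, no Python-level dispatch
        cdef Matrix2x2 r = Matrix2x2.__new__(Matrix2x2)
        r.a = left.a * right.a + left.b * right.c
        r.b = left.a * right.b + left.b * right.d
        r.c = left.c * right.a + left.d * right.c
        r.d = left.c * right.b + left.d * right.d
        return r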

The results (a, a2 are instances of the new class; b, b2 are Sage CDF
matrices; c, c2 are numpy arrays):

sage: timeit('a*a2')
625 loops, best of 3: 643 ns per loop
sage: timeit('b*b2')
625 loops, best of 3: 25.8 µs per loop
sage: timeit('c.dot(c2)')
625 loops, best of 3: 2.52 µs per loop
sage: 2.52/.643
3.91912908242613
sage: 25.8/.643
40.1244167962675

So, my one-off GSL-based class is about 4 times faster than numpy and
40 times faster than Sage at multiplying 2x2 matrices.

>
> +1, exactly what I was going to consider. Depending on how naturally
> it fits your problem, you could also consider packing your 2x2
> matrices into larger arrays (e.g. representing n 2x2 matrices by a
> 4 x n matrix and manually doing the multiplication) so you can do
> your computations in a "vectorized" form (see the sketch below).
>
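
For concreteness, here is a hypothetical numpy sketch of that packing
idea: store the four entries of n 2x2 matrices [[a, b], [c, d]] as four
length-n vectors (the rows of a 4 x n array), so that all n products
cost only eight vectorized multiplications and four additions.

import numpy as np

n = 1000
# two batches of n random complex 2x2 matrices, as entry vectors
a1, b1, c1, d1 = (np.random.rand(n) + 1j*np.random.rand(n) for _ in range(4))
a2, b2, c2, d2 = (np.random.rand(n) + 1j*np.random.rand(n) for _ in range(4))

# all n matrix products [[a, b], [c, d]] at once
a = a1*a2 + b1*c2
b = a1*b2 + b1*d2
c = c1*a2 + d1*c2
d = c1*b2 + d1*d2
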
>>> On Feb 13, 18:32, Robert Bradshaw <rober...@math.washington.edu>
>>> wrote:
>>>> On Sun, Feb 12, 2012 at 4:30 PM, Nils Bruin <nbr...@sfu.ca> wrote:
>>>> > On Feb 12, 1:39 pm, Pierre <pierre.guil...@gmail.com> wrote:
>>>> >> I think zz above might still be treated as a 1 x 1 matrix instead
>>>> >> of a complex number, somehow, and this may be slowing things down.
>>>> > No, that's not the problem. It's simply that numpy's default complex
>>>> > number type is apparently a bit slower for individual element
>>>> > arithmetic. It may well be that you're mainly measuring overhead,
>>>> > though, so you should really test in a more representative situation
>>>> > before committing to a particular implementation choice. numpy does
>>>> > allow arbitrary types in its arrays. I doubt they're as optimized as
>>>> > its own types, but you can try:
>>>>
>>>> > sage: A= MatrixSpace(CDF, 2).random_element()
>>>> > sage: B= MatrixSpace(CDF, 2).random_element()
>>>> > sage: %timeit A*B
>>>> > 625 loops, best of 3: 11.8 µs per loop
>>>> > sage: import numpy
>>>> > sage: AA= numpy.array(A); BB= numpy.array(B)
>>>> > sage: %timeit AA.dot(BB)
>>>> > 625 loops, best of 3: 1.28 µs per loop
>>>> > sage: AAA= numpy.array(A, dtype=type(A[0,0]))
>>>> > sage: BBB= numpy.array(B, dtype=type(B[0,0]))
>>>> > sage: %timeit AAA.dot(BBB)
>>>> > 625 loops, best of 3: 2.33 µs per loop
>>>> > sage: z=A[0,0]
>>>> > sage: %timeit z*z
>>>> > 625 loops, best of 3: 101 ns per loop
>>>> > sage: zz=AA[0,0]
>>>> > sage: %timeit zz*zz
>>>> > 625 loops, best of 3: 253 ns per loop
>>>> > sage: zzz=AAA[0,0]
>>>> > sage: %timeit zzz*zzz
>>>> > 625 loops, best of 3: 107 ns per loop
>>>> > sage: type(z); type(zz); type(zzz)
>>>> > <type 'sage.rings.complex_double.ComplexDoubleElement'>
>>>> > <type 'numpy.complex128'>
>>>> > <type 'sage.rings.complex_double.ComplexDoubleElement'>
>>>>
>>>> With such small matrices (and elements), you're essentially measuring
>>>> overhead rather than arithmetic here. Of course, if you have lots of
>>>> small matrices, that may be a relevant thing to measure. As the matrix
>>>> size grows, the timings should converge, since multiplying CDF
>>>> matrices simply defers to multiplying numpy matrices.
>>>>
>>>> sage: A= MatrixSpace(CDF, 200).random_element()
>>>> sage: B= MatrixSpace(CDF, 200).random_element()
>>>> sage: %timeit A*B
>>>> 125 loops, best of 3: 7.31 ms per loop
>>>> sage: AA= numpy.array(A); BB= numpy.array(B)
>>>> sage: %timeit AA.dot(BB)
>>>> 125 loops, best of 3: 7.34 ms per loop
>>>>
>>>> - Robert



-- 
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
