I can't tell exactly what GAP does. It is beautifully documented, but
it talks about "grease units", which is terminology I don't
understand. It does look like M4RM though.
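
In case it helps anyone reading along, here is a rough C sketch of how I
understand M4RM (Four Russians matrix multiplication over GF(2)). All
the names (mul_m4rm, K, width) and the data layout are made up for
illustration, it assumes C99 plus GCC's __builtin_ctz, and it is not the
GAP or M4RI code:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* A, B, C are n x n matrices over GF(2); each row is packed into
     * `width` 64-bit words, with column c stored in bit c % 64 of
     * word c / 64. */
    enum { K = 8 };   /* "grease level": bits of A handled per pass */

    static void mul_m4rm(uint64_t *C, const uint64_t *A, const uint64_t *B,
                         int n, int width)
    {
        /* T[v] = XOR of the rows of B selected by the bits of v. */
        uint64_t *T = calloc(((size_t)1 << K) * width, sizeof(uint64_t));

        memset(C, 0, (size_t)n * width * sizeof(uint64_t));

        for (int col = 0; col < n; col += K) {
            int k = (n - col < K) ? n - col : K;

            /* Each table entry is an earlier entry XOR one row of B, so
             * the whole 2^k table costs only 2^k row additions. */
            for (int v = 1; v < (1 << k); v++) {
                int low = v & -v;             /* lowest set bit of v      */
                int r   = __builtin_ctz(v);   /* selects row col + r of B */
                for (int w = 0; w < width; w++)
                    T[(size_t)v * width + w] =
                        T[(size_t)(v ^ low) * width + w]
                        ^ B[(size_t)(col + r) * width + w];
            }

            /* For every row of A, read its k bits in these columns and
             * XOR the matching table entry into that row of C. */
            for (int i = 0; i < n; i++) {
                int v = 0;
                for (int j = 0; j < k; j++) {
                    uint64_t word = A[(size_t)i * width + (col + j) / 64];
                    v |= (int)((word >> ((col + j) % 64)) & 1) << j;
                }
                if (v)
                    for (int w = 0; w < width; w++)
                        C[(size_t)i * width + w] ^= T[(size_t)v * width + w];
            }
        }
        free(T);
    }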

One trick they use is to special-case the situation where the bits they
read from the A matrix are equal to 1. But I think they only do this to
speed things up when the "grease level" is 1, which I think is our k. So
no new ideas for us there.
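
In the notation of the sketch above, my guess at that special case is
roughly the following (purely my reading, not their code): when the k
bits read from A come out as exactly 1, the table entry is just a single
row of B, so it can be added directly.

    /* Inner update of the sketch above, with the shortcut: v == 1 means
     * the only selected row is row `col` of B, so skip the table.  At
     * grease level k = 1 this is the only nonzero case, i.e. the
     * classical cubic algorithm. */
    if (v == 1) {
        for (int w = 0; w < width; w++)
            C[(size_t)i * width + w] ^= B[(size_t)col * width + w];
    } else if (v != 0) {
        for (int w = 0; w < width; w++)
            C[(size_t)i * width + w] ^= T[(size_t)v * width + w];
    }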

I don't see where they get so much speed from.

I agree, it is highly likely that Magma uses this algorithm. It seems to
be a relatively well-known technique.

Bill.

On 19 May, 23:19, Martin Albrecht <[EMAIL PROTECTED]>
wrote:
> On Monday 19 May 2008, Bill Hart wrote:
>
> > Martin,
>
> > That's all excellent news!! So on the c2d we are caning Magma. But we
> > should try and figure out if your Magma version is optimised for c2d
> > or for amd64, since that will make a big difference. Is your machine
> > some kind of 64-bit Intel OSX machine? I don't see a specific Core 2
> > version of Magma on their current list. Of course, if you just had a
> > generic Linux x86 version of Magma, that would be much slower than
> > optimal.
>
> My computer is a MacBook Pro, so it is one of those 64-bit Intel (OSX) machines,
> but I'm running Debian GNU/Linux. According to
>
>  https://magma.maths.usyd.edu.au/magma/export/x86_64-linux/
>
> there is a special Intel64 version of Magma 2.14. But even though I have a
> license to use it, I can't download it, since my Uni keeps the login data for
> Magma and only puts versions on an internal server for me to grab. So
> basically I have to wait until they have grabbed the Intel64 version for me.
>
> Maybe William could run some benchmarks on his machine which is identical to
> mine (except that I upgraded my RAM and he is running OSX not Linux)?
>
> > It's amazing how much difference SSE makes on your machine. The
> > AMD essentially uses its MMX or SSE hardware to read in cache
> > lines, I believe, so basically unless you are doing something requiring
> > lots of wide arithmetic/logic, you aren't going to get anything more
> > out of the chip.
>
> > I look forward to seeing the new code now that you've cleaned it up.
>
> The tarball is here:
>
>    http://m4ri.sagemath.org/downloads/m4ri-20080519.alpha0.tar.gz
>
> and the SPKG is here:
>
>  http://sage.math.washington.edu/home/malb/spkgs/libm4ri-20080519.p0.spkg
>
> The SPKG needs a patch:
>
>  http://sage.math.washington.edu/home/malb/new_m4ri_2.patch
>
> > I'm going to try and figure out what GAP does, in case there's any
> > ideas we missed. It's surely old code, but there might be lots of
> > interesting things in there.
>
> I'll also check again, but it seems they are doing M4RM with a fixed k and
> matrix blocking.
>
> > Anyhow, who would have thought that one would see 1.22s for a
> > 10000x10000 matrix multiply. That's pretty exciting.
>
> Yeah, good work Bill!
>
> Martin
>
> PS: I now actually believe it is possible that Magma uses M4RM (though maybe
> not M4RI). If GAP has it and it is old code, I don't see why Magma
> wouldn't. So having a hard time beating them isn't that implausible anymore,
> since it's not as though we simply have a better algorithm that they lack.
>
> --
> name: Martin Albrecht
> _pgp:http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
> _www:http://www.informatik.uni-bremen.de/~malb
> _jab: [EMAIL PROTECTED]