Hi Ivo,

On Jul 18, 1:32 pm, hedtke <hed...@me.com> wrote:
> With the in-memory variant I mean: "Boyer, Dumas, Pernet and Zhou: Memory
> efficient scheduling of Strassen-Winograd's matrix multiplication
> algorithm. International Symposium on Symbolic and Algebraic Computation
> 2009."
>
> This needs only a constant number of auxiliary variables, I think.

Well, we wrote this paper from our experience implementing Strassen-
Winograd's algorithm in LinBox. The version in LinBox is actually not
the fully in-memory one, since that variant has a slightly larger
constant in its time complexity.
We use the classic 2/3 n^2 extra-memory schedule for C <- AxB and the
n^2 schedule for C <- AxB +- C, as computation time is usually a
bigger concern than memory.
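
To make this concrete, here is a rough Python/NumPy sketch of a
classical two-temporary schedule for C <- AxB (the kind of schedule
recalled in the paper quoted above). It is not the LinBox/FFLAS code:
the cutoff is arbitrary, it assumes square power-of-two dimensions,
and the recursive calls return fresh arrays instead of multiplying into
their destination, so only the two per-level buffers X and Y are
scheduled explicitly. Those two (n/2) x (n/2) buffers are where the
2/3 n^2 bound comes from: 2*(n/2)^2 * (1 + 1/4 + 1/16 + ...) = 2/3 n^2.

import numpy as np

CUTOFF = 64  # assumed threshold; the real one is machine dependent

def winograd_mul(A, B):
    """C <- A*B for square matrices of power-of-two size (sketch only)."""
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B                      # base case: hand off to BLAS

    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C = np.empty((n, n), dtype=A.dtype)
    C11, C12, C21, C22 = C[:h, :h], C[:h, h:], C[h:, :h], C[h:, h:]

    # Only two temporaries per level; all other intermediate results
    # live in the quadrants of C.
    X = np.empty((h, h), dtype=A.dtype)
    Y = np.empty((h, h), dtype=A.dtype)

    np.subtract(A11, A21, out=X)          # S3
    np.subtract(B22, B12, out=Y)          # T3
    C21[:] = winograd_mul(X, Y)           # P7 = S3*T3
    np.add(A21, A22, out=X)               # S1
    np.subtract(B12, B11, out=Y)          # T1
    C22[:] = winograd_mul(X, Y)           # P5 = S1*T1
    X -= A11                              # S2 = S1 - A11
    np.subtract(B22, Y, out=Y)            # T2 = B22 - T1
    C12[:] = winograd_mul(X, Y)           # P6 = S2*T2
    np.subtract(A12, X, out=X)            # S4 = A12 - S2
    C11[:] = winograd_mul(X, B22)         # P3 = S4*B22
    X[:] = winograd_mul(A11, B11)         # P1 = A11*B11
    C12 += X                              # U2 = P1 + P6
    C21 += C12                            # U3 = U2 + P7
    C12 += C22                            # U4 = U2 + P5
    C22 += C21                            # U7 = U3 + P5  (final C22)
    C12 += C11                            # U5 = U4 + P3  (final C12)
    Y -= B21                              # T4 = T2 - B21
    C11[:] = winograd_mul(A22, Y)         # P4 = A22*T4
    C21 -= C11                            # U6 = U3 - P4  (final C21)
    C11[:] = winograd_mul(A12, B21)       # P2 = A12*B21
    C11 += X                              # U1 = P1 + P2  (final C11)
    return C

A quick sanity check is np.allclose(winograd_mul(A, B), A @ B) for
random A, B of power-of-two size.
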
>
> Loop unrolling or tiling doesn't change the asymptotic complexity. But we
> can use knowledge about the computer architecture (L1 cache, etc.) to speed
> up the computations by a factor of 2.

As Martin already wrote, this is already taken into account in the
base case: using the numerical BLAS routines for LinBox, or table
precomputations + fine tuning + ... for M4RI over GF(2).
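
For the GF(2) side, the table precomputation is essentially the
"method of four Russians": build all 2^k XOR combinations of k rows of
B once, then each row of A needs only about n/k table lookups instead
of n row additions. A toy Python version (rows packed into integers;
the real M4RI base case is of course far more refined, with tuned k,
several tables, cache blocking, ...) could look like this:

def m4rm(A_rows, B_rows, n, k=8):
    """Toy C = A*B over GF(2) via table precomputation (method of four
    Russians). Rows are Python ints: bit j of A_rows[i] is A[i][j]."""
    C = [0] * n
    for start in range(0, n, k):
        kk = min(k, n - start)
        # Table of all XOR combinations of rows B[start .. start+kk-1].
        table = [0] * (1 << kk)
        for t in range(1, 1 << kk):
            low = t & -t                   # lowest set bit of t
            table[t] = table[t ^ low] ^ B_rows[start + low.bit_length() - 1]
        mask = (1 << kk) - 1
        for i in range(n):
            # kk bits of row i of A select which rows of B to accumulate.
            C[i] ^= table[(A_rows[i] >> start) & mask]
    return C
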
>
> Please remember that the parameters for Strassen's algorithm from his
> original paper are far from optimal. I mean that the original method was
> only built for matrices of size m*2^k. Strassen said something like:
> if you want to compute the product of two n x n matrices for an arbitrary
> n, choose m=f(n) and k=g(n) for some f and g (see his paper from 1969).
> These f and g are NOT a good choice. There are better ones from R. Probert
> and others.

Of course we do not use these parameters. In LinBox, they are
determined at install time by benchmarks.
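
Just to illustrate what "determined at install time by benchmarks"
means (this is only a toy sketch, not the actual LinBox tuning code):
time the recursive routine against a single base-case call over a
range of sizes and record where the recursion starts to win, e.g. with
the winograd_mul sketch above:

import timeit
import numpy as np

def find_crossover(sizes=(128, 256, 512, 1024, 2048), repeats=3):
    """Return the first size at which the Winograd recursion beats a
    single BLAS call (toy version of an install-time benchmark)."""
    for n in sizes:
        A = np.random.rand(n, n)
        B = np.random.rand(n, n)
        t_blas = min(timeit.repeat(lambda: A @ B, number=1, repeat=repeats))
        t_wino = min(timeit.repeat(lambda: winograd_mul(A, B),
                                   number=1, repeat=repeats))
        if t_wino < t_blas:
            return n
    return None  # recursion never won on the tested sizes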

Clément Pernet
