> > > I believe that my implementation of basic Gaussian
> > > elimination using Markowitz pivoting is comparable to Magma's.
> >
> > How do you measure this? As far as I know Magma does not expose its
> > sparse linear algebra to the end user.
>
> You are right, but Magma has a sparse matrix type and related algorithms,
> so I compare against those. For example, I compare Magma's Rank procedure
> for sparse matrices with my own Markowitz pivoting Gaussian elimination.

I forgot about that. But still, IIRC Magma's F4 linear algebra is not just the 
standard linear algebra exposed to the user via sparse matrices (maybe that's 
just a rumour though). 
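For anyone following along: Markowitz pivoting picks, among the nonzero 
entries of the active submatrix, the entry (i, j) minimizing 
(r_i - 1)*(c_j - 1), where r_i and c_j count the nonzeros in row i and 
column j. A toy version, assuming a dict-of-rows sparse format (this is 
nobody's actual code, just a sketch):

def markowitz_pivot(rows, col_counts):
    # rows[i] maps column index -> coefficient; col_counts[j] is the
    # number of nonzero entries in column j of the active submatrix.
    best, best_cost = None, None
    for i, row in enumerate(rows):
        for j in row:
            cost = (len(row) - 1) * (col_counts[j] - 1)
            if best_cost is None or cost < best_cost:
                best, best_cost = (i, j), cost
    return best  # None if the active submatrix is zero

Note that nothing in the cost prefers the left-most column, which is 
exactly the clash with F4's column order mentioned below.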

> > > But the
> > > problem is that Markowitz pivoting doesn't respect the order of the
> > > columns, so we need to adapt the algorithm for F4. I've tried all kinds
> > > of adaptations, but nothing seems to be good enough.
> >
> > You mean 'good enough' in terms of performance?
>
> I compare matrix reduction times, sparsity before and after reduction,
> and matrix sizes.

This seems to support the hypothesis that the linear algebra used in F4 is 
not the linear algebra you compare against when computing the rank of sparse 
matrices. On the other hand, have you profiled your code to see where all 
the time is spent?
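If not, even the standard-library profiler answers that quickly. A minimal 
example (reduce_matrix and M are stand-ins for whatever your entry point and 
input actually are):

import cProfile, pstats

pr = cProfile.Profile()
pr.runcall(reduce_matrix, M)  # stand-ins for your routine and matrix
pstats.Stats(pr).sort_stats("cumulative").print_stats(20)

That alone should tell you whether the time goes into pivot selection, the 
eliminations themselves, or bookkeeping.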

> > What examples/ideals are you optimising for and what are the times you
> > get for those?
>
> I've tested some ideals that come from cryptography, specifically some from
> MPKCs over GF(2^8) and also random sets of generators with similar
> characteristics. My matrix reduction times are between 10 and 100 times
> higher than Magma's.
>
> > Also, do you cross over to a dense implementation if the matrices get too
> > dense?
>
> Yes and no. At this point I don't change the data structure when it gets
> too dense, but I do switch to a dense algorithm. I don't really know
> whether a dense representation would make a big difference, nor at what
> level of density it would start to pay off.

Did you profile how much time you spend in the dense part of the algorithm? 
If it is a considerable amount, then it would make sense to optimize that 
part. From what you write I assume that your dense algorithm is not 
asymptotically fast or even particularly optimized? Of course the change of 
representation (sparse -> dense) can be quite costly too.
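To make that cost explicit, a naive crossover looks roughly like this 
(dict-of-rows sparse format again; the threshold is a number I made up, 
not a recommendation):

def maybe_densify(rows, ncols, threshold=0.2):
    # Switch to a dense list-of-lists representation once the fill
    # ratio crosses the threshold; the conversion itself touches all
    # len(rows) * ncols entries, so it is not free.
    nnz = sum(len(row) for row in rows)
    if nnz <= threshold * len(rows) * ncols:
        return rows  # stay sparse
    return [[row.get(j, 0) for j in range(ncols)] for row in rows]

So you pay one full scan per switch, on top of the dense memory footprint.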

> > > If this makes sense to anybody else, I'd like to discuss how to
> > > overcome this obstacle.
> >
> > I was toying with the idea of a hybrid implementation where the
> > left-most columns are sparse and the right-most columns are dense. It
> > seems the matrices which arise during F4 and friends usually follow this
> > pattern.

I never worked out any details; it is just one of those ideas that keeps 
floating around.

> Do you use different data structures?

Yes, you'd basically have a dense and a sparse matrix.
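Something along these lines, purely to illustrate the layout (how to pick 
the split is a separate question, see below):

class HybridMatrix:
    # Columns < split live in per-row dicts (column -> coefficient);
    # columns >= split live in one plain dense array per row.
    def __init__(self, nrows, ncols, split):
        self.split = split
        self.sparse = [dict() for _ in range(nrows)]
        self.dense = [[0] * (ncols - split) for _ in range(nrows)]

    def set(self, i, j, c):
        if j < self.split:
            self.sparse[i][j] = c
        else:
            self.dense[i][j - self.split] = c

Row operations would then update both stripes, but pivot search and fill-in 
tracking would presumably only ever look at the sparse one.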

> How do you choose pivots?

Just like you would in your sparse matrix.

> How do you decide where to split?

I guess some cheap heuristic. 
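For instance: relying on the observation above that density grows towards 
the right, cut at the first column whose fill crosses some fraction of the 
rows (the fraction below is a pure guess):

def choose_split(col_counts, nrows, frac=0.5):
    # col_counts[j] = number of nonzeros in column j; everything from
    # the first column denser than frac * nrows onwards goes dense.
    for j, count in enumerate(col_counts):
        if count > frac * nrows:
            return j
    return len(col_counts)  # nothing dense enough: stay fully sparse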

> Do you keep the separation static or do you change it along the way?

I'd assume that changing the representation is rather expensive, although one 
could have a blocked representation with one sparse stripe and then several 
dense stripes. The asymptotically fast dense PLUQ factorization would cut the 
matrix into two blocks anyway, so I guess this is feasible?
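In block terms this is just standard Schur-complement elimination (assuming 
the pivot block A produced by the sparse elimination is invertible):

  M = [ A  B ]   eliminate below A   [ A  B              ]
      [ C  D ]   ---------------->   [ 0  D - C A^{-1} B ]

and the Schur complement D - C A^{-1} B is exactly the dense block a PLUQ 
factorization would then chew on.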

Again, I never really thought this through.

Cheers,
Martin

-- 
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_otr: 47F43D1A 5D68C36F 468BAEBA 640E8856 D7951CCF
_www: http://www.informatik.uni-bremen.de/~malb
_jab: martinralbre...@jabber.ccc.de

