On May 29, 2007, at 12:50 AM, Michel wrote:

> Hi,
>
> Thanks for taking the time to make up this example.  I think it is
> good to put it in the documentation.
> Yes you can break factorcaching (with auto_reduce=False) the way you
> do it. ***But***
> let me point out that by rewriting your example only slightly it
> becomes *much* faster than the current
> implementation

[...]

> So maybe in some (I claim non-typical) cases factorcaching requires a
> bit understanding from the
> user but the payoff is enormous. Anyway as I said factor caching
> should be optional.

Factor caching should be optimal if the operands are completely  
factored, which is essentially what you enforced here for the speedup:

> sage: TT=[Q(s,1) for s in T]
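To illustrate what I mean (a toy sketch in plain Python, not Michel's actual implementation): if every element carries its denominator as a multiset of known factors, then cancellation during multiplication is pure bookkeeping and never calls gcd at all.

```python
from collections import Counter

def mul_factored(a, b):
    """Multiply two fractions given as (numerator_factors, denominator_factors)
    pairs of Counters; shared factors cancel by counting, with no gcd calls."""
    num = a[0] + b[0]              # combine numerator factor multisets
    den = a[1] + b[1]              # combine denominator factor multisets
    for p in list(num):
        common = min(num[p], den[p])
        if common:                 # cancel the shared part of factor p
            num[p] -= common
            den[p] -= common
    return (+num, +den)            # unary '+' drops zero-count entries

# (3/10) * (5/9) = 1/6, with all factorizations known up front
a = (Counter({3: 1}), Counter({2: 1, 5: 1}))
b = (Counter({5: 1}), Counter({3: 2}))
print(mul_factored(a, b))
```

This is of course the best case: the bookkeeping only works out this cleanly because the factorizations are complete.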

However, I think it is too much to assume that elements always come with
known prime factorizations, and factoring the input on initialization
would probably be prohibitively expensive. Even if it's not (say, an
average of two prime factors per denominator), things may quickly get
out of hand, which was the point of my example. Doing arithmetic on a
set of elements with an absolutely bounded denominator is not that
atypical, nor is being handed two fraction field elements with an
unknown non-trivial gcd in their denominators (especially if they
relate to the same problem).
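The bounded-denominator case is easy to see in a few lines of plain Python (again a sketch of the effect, not of the patch): summing 1/1 + 1/2 + ... + 1/50 with reduction keeps the running denominator no larger than lcm(1, ..., 50), while skipping reduction, which is effectively what happens when the cache holds opaque unfactored denominators, grows it toward 50!.

```python
from math import gcd

def harmonic(n, do_reduce=True):
    """Running sum 1/1 + 1/2 + ... + 1/n kept as a num/den pair."""
    num, den = 0, 1
    for k in range(1, n + 1):
        # num/den + 1/k = (num*k + den) / (den*k)
        num, den = num * k + den, den * k
        if do_reduce:              # reduce after every addition
            g = gcd(num, den)
            num, den = num // g, den // g
    return num, den

reduced = harmonic(50, do_reduce=True)
unreduced = harmonic(50, do_reduce=False)
# compare digit counts of the two final denominators
print(len(str(reduced[1])), len(str(unreduced[1])))
```

The two results are equal as rational numbers, but the unreduced denominator has roughly three times as many digits, and the gap widens with every further operation.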

> And furthermore there
> is an auto_reduce parameter, which when true emulates the current
> behaviour.

This is good.

> PS. Perhaps it is possible to make your example work out of the box by
> somehow using the real gcd
> for elements which have no cached factors, or to apply the real gcd to
> the factors, caching the result etc...

I did think about this... One would then have to find gcds between
all pairs of (non-prime) factors, which would get expensive very
quickly compared to a single gcd of the products.
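To make the cost concrete (a toy sketch, assuming the denominators are cached as lists of composite factors): with n and m cached factors one needs n*m gcd calls just to locate the common content, versus a single gcd of the two products.

```python
from math import gcd, prod
from itertools import product

# Two elements whose denominators are cached as lists of composite
# (non-prime) factors: 6 = 2*3, 35 = 5*7, 143 = 11*13, etc.
f1 = [6, 35, 143]
f2 = [10, 21, 221]

# factor-by-factor: one gcd call per pair of cached factors
pairwise = [gcd(a, b) for a, b in product(f1, f2)]
# collapse-first: a single gcd of the two products
single = gcd(prod(f1), prod(f2))
print(len(pairwise), single)
```

Here the pairwise route already takes nine gcd calls where the collapsed route takes one, and the pairwise count grows quadratically in the number of cached factors.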

> But I would prefer to take the wrinkles out of the current
> implementation first.

I agree.

I don't want to give the impression that I think your code is without
merit--I think it is quite worthwhile and probably worth including in
SAGE (though that's in Nick and William's ballpark now). For example,
it's much better for rings that don't implement any gcd, or that are
inexact, where no reduction is currently performed at all. And
obviously you have a case in mind where it helps immensely. I do have
to say I am wary of including something (by default at least) that
can give a 10x speedup in some cases and a 1000x slowdown in others.

- Robert

