On Apr 26, 2010, at 8:07 PM, William Stein wrote:

On Mon, Apr 26, 2010 at 8:00 PM, Jason Grout
<jason-s...@creativetrax.com> wrote:
When multiplying two univariate polynomials over RR[x], Sage uses the naive O(n^2) algorithm. The docstring for _mul_ cites numerical stability as the
reason not to use asymptotically faster algorithms:

"Here we use the naive `O(n^2)` algorithm, as asymptotically faster
algorithms such as Karatsuba can have very inaccurate results due to
intermediate rounding errors. "

(This is followed by an example in which Karatsuba multiplication gives a
much less accurate result than naive multiplication.)

Can we do any better? Does anyone know of any algorithms for numerically stable fast univariate polynomial multiplication (better than the naive
algorithm)?

1) I don't know what I'm talking about, but the first thing that pops
into my mind (in 1 second, literally) is to simply bump up the
precision by some heuristic amount, do the arithmetic with some
asymptotically fast algorithm using interval arithmetic, and see what
the precision of the result is.  If it is sufficient, return the
non-interval answer.  If it is not sufficient, increase the precision
and try again.  I think arithmetic with intervals costs only a
constant factor (2? 4?) over arithmetic without them, so what I just
suggested should usually have nearly the same asymptotic complexity
as the underlying fast algorithm.
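
A minimal sketch of this retry loop in Sage (purely illustrative: the
helper name, the precision schedule, and the width test are my own,
and I use `_mul_karatsuba` as the stand-in fast algorithm):

def fast_mul_adaptive(f, g, target_prec=53, start_prec=64, max_prec=4096):
    # Redo the product at increasing interval precision until every
    # coefficient is known to at least target_prec correct bits.
    prec = start_prec
    while prec <= max_prec:
        S = PolynomialRing(RealIntervalField(prec), 'x')
        h = S(f)._mul_karatsuba(S(g))  # any asymptotically fast product
        if all(c.relative_diameter() <= 2.0**(-target_prec)
               for c in h.coefficients()):
            R = PolynomialRing(RealField(target_prec), 'x')
            return R([c.center() for c in h.list()])  # interval midpoints
        prec *= 2  # not enough correct bits; double and retry
    raise ArithmeticError("no convergence below max_prec")

For well-conditioned inputs the loop should exit on the first pass, so
the total cost stays within a small constant factor of a single fast
multiplication.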

I should comment on this, as I wrote the code and comments in question.  There is actually a fair amount of research on stable multiplication of polynomials over the real numbers, but (if I remember right; it was a while ago) there were results to the effect that no asymptotically fast algorithm has good stability properties over all possible coefficients.  Of course, in practice one often runs into "well behaved" polynomials in R[x].  (In particular, most of the interest is, understandably, in truncated power series, though there are other uses.)  The obvious thing to do is a (non-discrete) FFT.

What I was thinking about implementing (though I never got back to it) works like this:

1) Choose some alpha and rescale f(x) -> f(alpha*x) so that all
coefficients have roughly the same size.
2) Multiply the resulting polynomials over Z using the insanely fast
code in FLINT.  The result is the exact product of the truncated
inputs, and the precision loss (conversely, the precision required
for no loss) was fairly easy to work out, if I remember right.
3) Cast the result back into R[x].  (See the sketch below.)
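
A rough Sage sketch of the three steps, with the caveat that the
helper name, the signature, and the default truncation precision
`bits` are all illustrative, and alpha is assumed to be chosen
already:

def mul_via_ZZ(f, g, alpha, bits=200):
    R = f.parent()                       # RR[x]
    x = R.gen()
    fs, gs = f(alpha*x), g(alpha*x)      # 1) rescale
    # truncate each coefficient to an integer multiple of 2^-bits
    Zx = PolynomialRing(ZZ, 'x')
    fz = Zx([(c * 2**bits).round() for c in fs.list()])
    gz = Zx([(c * 2**bits).round() for c in gs.list()])
    hz = fz * gz                         # 2) exact product, done by FLINT
    # 3) cast back into R[x], undoing the 2^bits and alpha scalings
    h = R([R.base_ring()(c) / 2**(2*bits) for c in hz.list()])
    return h(x/alpha)

Step 2) is exact, so the only rounding happens in the initial
truncation and the final cast back, which is exactly where the
precision-loss analysis mentioned above comes in.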

Some heuristic would have to be put in place to see whether an alpha can be found such that this is actually a win, but my experiments seemed to indicate it is for many power series.  Using interval arithmetic might be a good idea as well, though the cutoff is a bit high unless one implements a special interval Karatsuba.  Some timings:

sage: R.<x> = RR[]
sage: f = (x+1)^100
sage: timeit("f*f")
125 loops, best of 3: 1.65 ms per loop
sage: timeit("f._mul_karatsuba(f)")
25 loops, best of 3: 19.5 ms per loop
sage: f = (x+1)^1000
sage: timeit("f*f")
5 loops, best of 3: 142 ms per loop
sage: timeit("f._mul_karatsuba(f)")
5 loops, best of 3: 655 ms per loop

sage: R.<x> = RIF[]
sage: f = (x+1)^100
sage: timeit("f*f")
25 loops, best of 3: 14.7 ms per loop
sage: timeit("f._mul_karatsuba(f)")
25 loops, best of 3: 36.3 ms per loop
sage: f = (x+1)^1000
sage: timeit("f*f")
5 loops, best of 3: 1.34 s per loop
sage: timeit("f._mul_karatsuba(f)")
5 loops, best of 3: 1.24 s per loop

- Robert
